Standard 5: Provider Quality, Continuous Improvement, & Capacity


5.1 Effective Quality Assurance System that Monitors Progress Using Multiple Measures

The EPP Quality Assurance Process uses decentralized, program-specific groups to assess program outcomes and to make program changes. To inform program improvement, faculty consider multiple formative measures of candidate development, such as key course assessments and field experience evaluations. Summative measures of candidates' knowledge and skills include the PRAXIS series of exams and student teaching evaluations. Stakeholders' perceptions of program quality and candidate preparedness are gathered through surveys, focus groups, and advisory councils. Data from these measures are described in the narratives for Standards 1, 3, and 4. EPP data are posted on the LiveText exhibit center and analyzed by faculty. Meeting Minutes and Program Review items document decisions. The Selected Improvement Plan focuses, in part, on the refinement and consistent implementation of the Quality Assurance Process through the creation of a Framework for Reporting Meetings.

The EPP uses LiveText to archive and analyze key assessments that are developed by program faculty to assess candidate progress and program effectiveness. In all programs, lead teachers work with instructional teams to cooperatively design, implement, evaluate, and calibrate key course assessments measuring candidates' knowledge, skills, and professional dispositions as they progress through the program. Assessments are plentiful, varied, developmental, and longitudinal. Student feedback is captured at multiple points: candidates complete course evaluations each semester, and their proficiency is measured through Field Experience Evaluations and Student Teaching Evaluations. P-12 partners, completers, and advisory council members provide feedback during program/advisory council meetings and focus group sessions and through surveys (Completer Perceptions, Student Teacher Survey, Employer Perceptions, Meeting Minutes). Regular program and department meetings give faculty opportunities to provide evaluative feedback on institutional operations, and department heads, who serve on the EPP leadership team, bring discussion items forward. Faculty have the opportunity to evaluate the dean and department chairs, and Faculty Senate representatives are drawn from the college to serve the institution, providing additional oversight of the EPP.

5.2 Quality Assurance System Relies on Measures Yielding Reliable, Valid, and Actionable Data

Examination of LiveText Key Course Assessments, surveys, evaluations, and proprietary data reports indicates the EPP uses varied measures that are relevant, representative, cumulative, and actionable. Key Course Assessments are developed by faculty, who, as subject matter experts, impart face validity. The establishment of content/construct validation processes for EPP-created assessments is part of the Selected Improvement Plan. Proprietary assessments are valid for the purposes they are designed to serve and provide the EPP with useful feedback on candidate performance.

The EPP uses reliable and valid measures and attends to sources of bias. In instances where bias might occur, the EPP uses multiple raters, anonymous scoring, and validated instruments and/or rubrics. For the TPA Eligibility Portfolio, the EPP estimated inter-rater reliability by calculating the correlation between the ratings of two independent raters, along with the coefficient of agreement between their judgments. University supervisors and cooperating teachers entered scores in the LiveText Field Experience Module, and the correlation between these ratings gave faculty an estimate of the consistency between raters. The Selected Improvement Plan addresses the system for establishing validity and reliability for EPP assessments in response to instances where inadequate data were available to evaluate an assessment.
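
As an illustration of the computation involved (a standard formula, not an EPP-specific procedure), the inter-rater reliability estimate for two raters' scores x_i and y_i across n portfolios is the Pearson correlation:

r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \, \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}

The coefficient of agreement can likewise be expressed as the proportion of portfolios on which the two raters assigned the same rating; values near 1 on either index indicate consistent scoring.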

LiveText charts and graphs illustrate the analysis of scores on the common assessments, and results were reported using standard statistical measures. Internal consistency reliability was used to assess the consistency of results across items within an assessment, and results were correlated with the criterion to determine how well they represented the criterion behavior or knowledge. All survey items were aligned with the EPSB Kentucky Teacher Standards, the EPP's professional dispositions and theme, legislated mandates for field/practicum experiences, and student teaching guidelines, so that the survey covered the content domain in appropriate proportions. An EPP ad-hoc committee rated each survey item in terms of whether the knowledge or skills it measured were essential; results were statistically analyzed, and the survey was modified to improve its rational validity.
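
The narrative does not name the specific internal consistency coefficient used; Cronbach's alpha, offered here as a standard illustration, is the most common choice for an assessment of k items:

\alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{j=1}^{k} \sigma_j^2}{\sigma_X^2} \right)

where \sigma_j^2 is the variance of item j and \sigma_X^2 is the variance of total scores; values of roughly 0.70 or higher are conventionally read as acceptable consistency.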

5.3 Results are Used for Continuous Improvement

Data from the multiple measures described in 5.1 and 5.2 are posted on the LiveText Exhibit Center. Each fall, EPP program coordinators work with faculty and Advisory Councils to analyze data to inform program improvement (Meeting Minutes, Program Review). These discussions are an integral part of the continuous assessment process to ensure programs remain viable and relevant. Program meeting minutes are stored in the Exhibit Center. Meeting minute formats changed over the years within programs and varied between programs; turnover among program members and differing understandings of the required elements of minutes contributed to this variance. Nevertheless, meeting minutes provide evidence of data-based discussions and decision-making. The Selected Improvement Plan addresses expectations for this process and creates structures to assist faculty in documenting program activities, including meeting minute templates (Framework for Reporting Meetings). Program and advisory council meeting minutes and Program Review documents demonstrate strong evidence of stakeholder participation and continuous improvement discussions.

5.4 Measures of Completer Impact are Analyzed, Shared, and Used in Decision-Making

Kentucky is not an edTPA state; EPPs construct their own versions of teacher performance assessments (TPA). This provider's TPA includes a pre-assessment; lessons designed to address academic needs identified in the pre-assessment data; formative assessments; a post-assessment used to determine the percentile of student academic growth; and extensive reflection on how the candidate impacted student learning. Teacher candidates are first introduced to this model in their educational assessment and evaluation courses. Candidates implement TPA-style mini-units during clinical experiences in advanced methods courses, and student teachers complete TPAs as part of their final eligibility portfolios. To document students' growth in achievement, student teachers provide test scores and student growth percentiles (see 4.1).
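
As a sketch of the arithmetic behind the reported learning gains (the narrative does not specify the exact formula, so this is an illustrative assumption), the share of a class showing gains from pre-assessment to post-assessment can be expressed as:

\text{percent showing gains} = \frac{100}{N} \sum_{i=1}^{N} \mathbf{1}\!\left[\text{post}_i > \text{pre}_i\right]

where N is the number of P-12 students in the lesson and \mathbf{1}[\cdot] equals 1 when student i's post-assessment score exceeds the pre-assessment score and 0 otherwise.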

Ongoing student teacher evaluations are based on instructional outcomes, P-12 students' learning, and uses of assessment in instruction. 2012-2013 survey data indicated that 85.15% of student teachers made extensive use of opportunities to analyze data to evaluate P-12 student learning, and cooperating teachers' responses indicated that 91% of student teachers used P-12 student assessment information and program data to meet instructional objectives. Ninety-five percent of student teachers reflected on teaching and planned ways to improve effectiveness (Student Teaching Evaluations). The EPP used electronic systems to collect, aggregate, and disseminate field experience data. Student Teacher TPA Eligibility Portfolio grades showed that candidates reported high percentages of P-12 students making learning gains from lesson pre-assessments to post-assessments.

Analysis of student teacher Impact on Student Learning data evidenced a positive impact on students' learning. The percentages of students achieving the lesson targets and showing learning gains were high for the REA 412 pre-service teacher candidates. Overall, findings indicate candidates increased their abilities to impact student learning as they progressed through their programs. The average percentage of P-12 students showing learning gains reported during the 2012-2013 academic year (with mean scores ranging from 3.5 to 4.0 for the 213 student teacher candidates) demonstrated that student teacher candidates taught so that all students could learn.

In the absence of a current state-wide system for measuring completer effectiveness and impact on P-12 student learning, the EPP uses standardized test results, online school report cards (where available), and other indicators of graduates' influence on student achievement. As data become available to institutions of higher education, the EPP will use completer data as another critical indicator of program effectiveness. Through the Selected Improvement Plan, the EPP will ensure validity and reliability through cross-program evaluation and analysis against national scoring norms.

5.5 Relevant Stakeholders are Involved in Program Evaluation

The EPP's Quality Assurance Process provides opportunities for practitioner feedback on program goals and outcomes during each assessment cycle. The EPP gathers feedback through advisory councils, focus group sessions, surveys, and the collaborative development and implementation of clinical experiences (see standard 2). The Selected Improvement Plan identifies as a priority the establishment of standards for the development of these stakeholder partnerships.

The EPP maintains additional channels of communication with stakeholders at different levels. The Dean sits on the Western Kentucky Education Cooperative Board with regional superintendents, which gives superintendents direct access to him. The Dean's Student Advisory Council provides candid feedback to the Dean regarding program experiences. P-12 partners engage in innovative programming and clinical partnerships; recent projects include a Professional Development School Pilot, which will be used to develop the processes necessary to scale up Professional Development School efforts.

Sufficient evidence demonstrates that appropriate stakeholders are involved in program evaluation and improvement. Meeting minutes provided in LiveText show particular strength in the involvement of stakeholders at the program level. Beyond advisory council minutes, evidence of established structures, ongoing relationships, and emergent innovation meets the expectations of standard 5.5. The Selected Improvement Plan will further refine these practices.

Conclusion

A preponderance of evidence documents that the EPP meets Standard 5 criteria. The EPP maintains a Quality Assurance system and uses multiple formative and summative measures to inform program improvement. Assessments range widely, from proprietary instruments with extensive validity/reliability documentation to program-developed assessments judged on face validity. Stakeholders are involved at multiple levels, and the EPP stays connected to employers through established channels. The EPP has systems in place to collect and analyze data; some of these systems involve stakeholders directly, while others do not. The absence of a state-wide data collection system hinders the ability to fully measure completer effectiveness and achievement. The Selected Improvement Plan details how a tighter connection between programmatic and departmental discussions and the data collected (as evidenced through existing minutes) will strengthen the EPP.