Standard 5: Provider Quality, Continuous Improvement, & Capacity
5.1 Effective Quality Assurance System that Monitors Progress Using Multiple Measures
The EPP Quality Assurance Process uses decentralized, program-specific groups
to assess program outcomes and to make program changes. To inform program improvement,
faculty consider multiple formative measures of candidate development, such as key
course assessments and field experience evaluations. Summative measures of candidates'
knowledge and skills include the PRAXIS series of exams and student teaching evaluations.
Stakeholders' perceptions of program quality and candidate preparedness are gathered
through surveys, focus groups, and advisory councils. Data from these measures are
described in the Standard 1, 3, and 4 narratives. EPP data are posted in
the LiveText exhibit center and analyzed by faculty. Meeting Minutes and Program Review
items document decisions. The Selected Improvement Plan focuses, in part, on refining
and consistently implementing the Quality Assurance Process through the creation
of a Framework for Reporting Meetings.
The EPP uses LiveText to archive and analyze key assessments that are developed by program faculty to assess candidate progress and program effectiveness. In all programs, lead teachers work with instructional teams to cooperatively design, implement, evaluate, and calibrate key course assessments measuring candidates' knowledge, skills, and professional dispositions as they progress through the program. Assessments are plentiful, varied, developmental, and longitudinal. Student feedback is captured at multiple points. Candidates complete course evaluations each semester. Their proficiency is measured through Field Experience Evaluations and Student Teaching Evaluations. P-12 partners, completers, and advisory council members provide feedback during program/advisory council meetings and focus group sessions and through surveys (Completer Perceptions, Student Teacher Survey, Employer Perceptions, Meeting Minutes). Regular program and department meetings provide opportunities for faculty to raise evaluative concerns about institutional operations. Department heads are part of the EPP leadership team and bring forward discussion items. Faculty have the opportunity to evaluate the dean and department chairs. Faculty Senate representatives drawn from the college serve the institution, which provides oversight of the EPP.
5.2 Quality Assurance System Relies on Measures Yielding Reliable, Valid, and Actionable Data
The EPP employed reliable and valid measures and attended to sources of bias. In instances where bias might occur, the EPP used multiple raters, anonymous scoring, and validated instruments and/or rubrics. The EPP estimated inter-rater reliability by calculating the correlation between the ratings of two raters of the TPA Eligibility Portfolio. University supervisors and cooperating teachers entered scores in the LiveText Field Experience Module. The correlation between these ratings gave faculty an estimate of the reliability, or consistency, between the raters. Reliability was determined both by the correlation of scores from two or more independent raters and by the coefficient of agreement among the raters' judgments. The Selected Improvement Plan addressed the system for establishing validity and reliability for EPP assessments, in response to instances where inadequate data were available to evaluate an assessment.
LiveText charts and graphs illustrate the analysis of scores on the common assessments; results were expressed in statistical terms. Internal consistency reliability was used to assess the consistency of results across items within an assessment. Results were correlated with the criterion to determine how well they represent the criterion behavior or knowledge. All survey items were aligned with the EPSB Kentucky Teacher Standards, EPP professional dispositions and theme, legislated field/practicum experience mandates, and student teaching guidelines. The survey covered the content area in appropriate proportions. An EPP ad-hoc committee rated each survey item on whether the knowledge or skills it measured were essential. Results were statistically analyzed, and the survey was modified to improve its rational validity.
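Internal consistency reliability is commonly estimated with Cronbach's alpha. The sketch below uses hypothetical Likert-scale survey responses (not actual EPP data) to show the calculation: alpha compares the sum of the per-item variances with the variance of respondents' total scores.

```python
from statistics import pvariance

# Hypothetical survey responses (1-5 Likert scale): each row is one
# respondent's ratings on four items measuring a single construct.
responses = [
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 3],
    [4, 5, 4, 4],
]

k = len(responses[0])                      # number of items
items = list(zip(*responses))              # per-item score columns
totals = [sum(row) for row in responses]   # per-respondent total scores

# Cronbach's alpha: internal consistency across items.
alpha = (k / (k - 1)) * (1 - sum(pvariance(col) for col in items) / pvariance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")
```

Values near 1 indicate that the items vary together (high internal consistency); values near 0 indicate the items behave independently.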
5.3 Results Are Used for Continuous Improvement
5.4 Measures of Completer Impact are Analyzed, Shared, and Used in Decision-Making
Kentucky is not an edTPA state; EPPs construct their own versions of teacher performance
assessments (TPA). This provider's TPA includes a pre-assessment, a lesson designed
to address academic needs identified in the pre-assessment data, formative assessments,
a post-assessment to determine the percentile of student academic growth, and extensive
reflection on how candidates impacted student learning. Teacher candidates are first
introduced to this model in their educational assessment and evaluation courses. Candidates
implement TPA-style mini-units during clinical experiences in advanced methods courses.
Student teachers complete TPAs as part of their final eligibility portfolios. To document students'
growth in achievement, student teachers provide test scores and student growth percentiles.
Ongoing student teacher evaluations are based on instructional outcomes, P-12 students' learning, and the use of assessment in instruction. Survey data from 2012-2013 indicated that 85.15% of student teachers made extensive use of opportunities to analyze data to evaluate P-12 student learning. Cooperating teachers' responses indicated that 91% of student teachers used P-12 student assessment information and program data to meet instructional objectives. Ninety-five percent of student teachers reflected on their teaching and planned ways to improve effectiveness (Student Teaching Evaluations). Electronic systems were used for the collection, aggregation, and dissemination of field experience data. Student Teacher TPA Eligibility Portfolio grades showed that student teacher candidates reported a high percentage of P-12 students' learning gains from lesson pre-assessments to post-assessments.
Analysis of ST Impact on Student Learning data evidenced a positive impact on students' learning. The percentages of students achieving the lesson targets and showing learning gains were high for the REA 412 pre-service teacher candidates. Overall, findings indicate that candidates increased their abilities to impact student learning as they progressed through their programs. The average percentage of P-12 students showing learning gains reported during the 2012-2013 academic year (with means ranging from 3.5 to 4.0 for the 213 student teacher candidates) demonstrated that student teacher candidates taught so that all students could learn.
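The pre-to-post learning-gains summary described above can be illustrated with a minimal sketch. All scores and the mastery target below are hypothetical, used only to show how the two percentages a candidate would report are derived:

```python
# Hypothetical pre/post scores (percent correct) for one candidate's
# TPA mini-unit class of eight P-12 students.
pre = [45, 60, 55, 70, 50, 65, 40, 58]
post = [72, 80, 55, 85, 66, 78, 55, 74]
target = 70  # hypothetical lesson mastery target (percent correct)

# Proportion of students whose post-assessment score exceeds their
# pre-assessment score (learning gains).
gains = sum(p2 > p1 for p1, p2 in zip(pre, post)) / len(pre)

# Proportion of students meeting the lesson target on the post-assessment.
mastery = sum(p >= target for p in post) / len(post)

print(f"Students showing gains: {gains:.0%}; meeting target: {mastery:.0%}")
```

Reporting gains and target mastery separately distinguishes growth (did every student improve?) from attainment (did students reach the lesson objective?).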
In the absence of a current statewide system for measuring completer effectiveness and impact on P-12 student learning, the EPP uses standardized test results, online school report cards (where available), and other indicators of graduates' influence on student achievement. As data become available to institutions of higher education, the EPP will use completer data as another critical indicator of program effectiveness. Through the Selected Improvement Plan, the EPP will ensure validity and reliability through cross-program evaluation and analysis in relation to national scoring norms.
5.5 Relevant Stakeholders are Involved in Program Evaluation
The EPP maintains additional channels of communication with stakeholders at different levels. The Dean sits on the Western Kentucky Education Cooperative Board with regional superintendents, which gives superintendents direct access to him. The Dean's Student Advisory Council provides candid feedback to the Dean regarding program experiences. P-12 partners engage in innovative programming and clinical partnerships. Recent projects include a Professional Development School pilot, which will be used to develop the processes necessary to scale up Professional Development School efforts.
Sufficient evidence demonstrates that appropriate stakeholders are involved in program evaluation and improvement. Meeting minutes provided in LiveText show particular strength in the involvement of stakeholders across programs. Beyond minutes of advisory council meetings, evidence of established structures, relationships, and emergent innovation meets component 5.5. The Selected Improvement Plan will further refine these practices.