The Texas Principal Evaluation and Support System (T-PESS) is a new principal evaluation system for the state of Texas designed to support principals in their professional development and help them grow and improve as campus and instructional leaders. It was piloted by approximately 60 districts in the 2014–2015 school year and implemented as a refined system in approximately 200 districts in the 2015–2016 school year. T-PESS will be rolled out statewide in the 2016–2017 school year.
T-PESS has three measures of principal effectiveness: a performance rubric, a goal-setting process, and student growth.
For the pilot years, student growth was not factored into the overall rating. (See #F.1) Since statewide implementation, the following percentages are assigned to determine the overall rating:
| Experience as principal on particular campus | Rubric | Goal-Setting | Student Growth |
| --- | --- | --- | --- |
| 2 or more years | 60% | 20% | 20% |
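The weights in the table combine the three measures into a single overall rating by simple weighted sum. As a minimal sketch (the component scores below are invented for illustration; T-PESS materials define the actual scales and score bands):

```python
# Hypothetical illustration of how the published weights (for a principal
# with 2+ years on a campus) combine component scores into an overall
# rating. The component scores are invented; only the weights come from
# the table above.
WEIGHTS = {"rubric": 0.60, "goal_setting": 0.20, "student_growth": 0.20}

def overall_rating(scores: dict) -> float:
    """Weighted sum of the three T-PESS measures."""
    return sum(WEIGHTS[measure] * scores[measure] for measure in WEIGHTS)

example = {"rubric": 80, "goal_setting": 90, "student_growth": 70}
print(overall_rating(example))  # 0.6*80 + 0.2*90 + 0.2*70 = 80.0
```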
Starting in the spring of 2012, TEA worked with a principal advisory committee to build principal standards. This committee began by determining best practices for principals to be effective leaders and improve student performance, primarily by reframing the central role of the principal as the instructional leader of a campus. This work concluded in the fall of 2013 with a comprehensive set of principal standards that capture the aspirational practices all principals can strive toward regardless of their level of experience or the context of their position.
During the spring of 2014, a principal steering committee, composed of campus principals, central office administrators, members of the higher education community, and principal association members, was convened to build a state principal evaluation system. The committee developed an evaluation system tied to the principal standards and focused on creating a process that would be used for continuous professional growth. The system they created provides actionable, timely feedback that allows principals to reflect consistently on their practice and strive to implement practices that will improve their performance.
During the 2014–2015 school year, T-PESS was piloted in more than 60 districts across the state. TEA took feedback from pilot districts to refine T-PESS for implementation in approximately 200 districts during the 2015–2016 school year, with statewide rollout and continuous improvement and refinement thereafter.
The two steering committees that built both the teacher and principal evaluation systems established the same primary goals for growth-oriented supervision and evaluation—feedback that is specific, ongoing, and timely, and a process that builds relationships between appraisers and appraisees so they can deepen their understanding of effective practices together. With that in mind, both systems:
TEA, in conjunction with McREL International, a nonprofit entity with extensive experience with school leadership, provided statewide “train the trainer” sessions at Education Service Centers (ESCs) during the spring of 2015 to prepare a cadre of experts to train district appraisers. ESCs will continue to build up their training capacity to support statewide usage in the coming years. Similar to what is currently in place with the existing teacher appraisal system, ESCs will build support systems for districts as they implement best practices in evaluation, including professional development and guidance for appraisers on structuring and conducting conferences, gathering data and artifacts, and coaching principals.
As the Texas Education Code indicates, districts have the option of creating their own evaluation systems. T-PESS will become the state-recommended principal evaluation system.
One of TEA’s major ongoing initiatives is to better align preparation, evaluation, professional development, mentorship, and career pathways around a set of standards and practices that act as a foundation and bring the entire timeline of an educator’s career into alignment. One of the first steps was to establish the principal standards that reinforce the principal’s role as the instructional leader of the campus. In addition, preparation programs will be trained on the new principal evaluation system so that they can build the skills necessary to effectively perform or appraise these practices in both aspiring principals and leaders of principals.
Yes, the components of T-PESS—a rubric, a goal setting process, and student growth—are used in districts throughout the state and in many states throughout the nation. They provide three sources of information that create a more complete picture of a principal’s effectiveness.
The principal steering committee that convened in the spring of 2014 (see #A.3) began the rubric development process by looking primarily at two sources. First, they reviewed literature and research that indicated the campus factors and principal practices with the greatest effect on improving student performance. Second, they reviewed the Texas principal evaluation standards and examples of principal evaluation systems from other states. Building on those sources, they reviewed and analyzed a base rubric that attempted to capture those practices in language aligned with the new principal standards created during the fall of 2013 and currently in Chapter 149 of the Texas Administrative Code.
Over the course of several meetings throughout the spring, the committee revised and edited the evolving rubric so that it accurately articulated the appropriate progression of principal practices, differentiating performance across an ordinal scale from “developing” to “distinguished.” Equally important, they focused on making sure the rubric captured the context of Texas principals and allowed for enough flexibility in application so that all districts could use the rubric regardless of their size or location.
The full rubric is available here.
The rubric has five standards:
There are 21 total indicators within those five standards, with five indicators in School Culture and four indicators in each of the remaining standards.
A principal’s goals and initiatives are determined through collaboration between the principal and the principal appraiser. Goals and initiatives should take into consideration the individual performance goals of the principal, the goals and initiatives for the campus, and broader goals and initiatives at the district level. At the end of the Beginning-of-Year Goal-Setting/Refinement Conference, the principal’s appraiser must sign off on the goals captured on the “Beginning-of-Year Goal-Setting Form.”
All goal forms can be found here.
There are goal-setting forms provided to guide the process from the beginning of the year to mid-year and the end of the year. The absolute priority is that the principal engages in a collaborative goal-setting process with the appraiser based on a careful review of the T-PESS rubric and consideration of district and campus priorities. The goal(s) should identify the area(s) of professional growth most appropriate for the principal in the given year. A district may choose to customize or enhance the goal-setting forms; however, it is recommended that the same framework be followed for the annual process: Beginning-of-Year, Mid-year, End-of-Year.
The principal and appraiser should meet formally to discuss progress toward goal attainment. However, formative assessment may take place throughout the year when the appraiser visits the campus or meets with the principal informally.
During the Mid-year Progress Meeting, the appraiser should indicate whether the principal is progressing or not progressing toward attainment and include a narrative of evidence of progress and/or revise the plan if necessary. The appraiser should use the “Progress toward Goal Attainment Form” to include any relevant feedback and comments that will assist the principal and promote growth.
During the End-of-Year Performance Discussion, the principal and appraiser should meet to discuss final ratings and review goals. The appraiser should use the “End-of-Year Goal Attainment Form” to indicate whether the principal achieved or did not achieve the established goals and provide a narrative of evidence that supports the determination. Relevant feedback and comments articulated during this conference should assist the principal in identifying areas for improvement and should promote growth in practice.
T-PESS seeks to establish with this process that:
Student growth measures how much a student progresses academically during a given period of time. Unlike a proficiency measure, which asks only whether a student passes a single assessment, student growth takes a student’s entering achievement into account when measuring how much the student learned over the year. By measuring growth, a teacher whose students enter multiple years behind grade level can still demonstrate his or her effectiveness based on how much those students progress during the year. Students who move from three years behind to one year behind make considerable growth: a proficiency measure would still show those students as unable to pass the test, but student growth would capture the remarkable progress (two years’ worth) they made during their time with that teacher. Campus-level student growth aggregates the growth demonstrated by the campus’s teachers and students. Student growth also incentivizes principals to address the needs of all students, including those who are unlikely to meet certain levels of proficiency and those who are likely to meet them regardless of how much they learn in a year.
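The contrast between the two lenses in the paragraph above can be sketched with invented numbers. The “grade-level equivalent” figures below are hypothetical illustrations only; they are not a T-PESS metric:

```python
# Hypothetical sketch contrasting a proficiency lens with a growth lens.
# Grade-level-equivalent numbers are invented for illustration; T-PESS
# does not prescribe this particular metric.
def proficiency(grade_level_equivalent: float, grade: float) -> bool:
    """Proficiency asks only: is the student at or above grade level?"""
    return grade_level_equivalent >= grade

def growth(start: float, end: float) -> float:
    """Growth asks: how far did the student move during the year?"""
    return end - start

# A 5th grader who starts three years behind and ends one year behind:
start, end = 2.0, 4.0
print(proficiency(end, grade=5.0))  # False: still below grade level
print(growth(start, end))           # 2.0: two years' worth of progress
```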
Student growth data should be used just as any other data collected during an evaluation: as information that will help inform principals about their campus’s strengths and potential areas of improvement so they can better impact all students the following year. Student growth is one measure in a multiple-measure evaluation system, and the inclusion of student growth data in a formative evaluation process provides a more complete understanding of which students are being reached and how much students have progressed in a given year.
TEA will work with districts to determine the appropriate measures for student growth for the various campus configurations and will seek to account for the variability in campus contexts throughout the state. Campus-level value-add scores are one potential measure of student growth. Other suggested measures of student growth include but are not limited to:
| Elementary School | Middle School | High School |
| --- | --- | --- |
| Indices of State Accountability System | Indices of State Accountability System | Indices of State Accountability System |
| School Systems | School Systems | School Systems |
| Literacy Measures (TPRI/DRA/Dibels) | % of Students in Algebra 1 or other advanced curriculum | Advanced Placement Participation and Scores |
| District-wide Assessments | District-wide Assessments | ACT and SAT Participation and Scores |
| | | AP/IB Participation and Scores |
| | | Graduation Rates/Dropout Rates |
| | | % College and Career Ready |
For the options listed above, growth would be determined based on year-over-year progress in a measure identified as appropriate for a particular campus.
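Year-over-year progress on a chosen measure reduces to a simple difference. A minimal sketch, with invented values (TEA and districts determine which measures apply to a given campus):

```python
# Minimal sketch of year-over-year campus growth on a chosen measure
# (e.g., graduation rate in percentage points). The values are invented
# for illustration; they are not drawn from any actual campus data.
def year_over_year_growth(prior: float, current: float) -> float:
    """Change in a campus measure from the prior year to the current year."""
    return current - prior

print(year_over_year_growth(prior=88.0, current=91.5))  # 3.5-point gain
```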
TEA will work with districts, ESCs, and experts in alternative growth measures to build the resources and guidelines that administrators will need to implement student growth measures. As indicated in FAQ #D.2, the purpose of student growth data is to provide principals with a better sense of how much of an impact the campus, under their leadership, has had on the progress of all their students, regardless of the students’ achievement levels. Most importantly, student growth data allow principals to make informed decisions about the goals and initiatives that will best impact all students the following year. Although a score does need to be calculated, the value of a student growth measure lies primarily in the feedback it provides to principals so that they can improve their practice.
A set of FAQs focused specifically on VAM can be found here.
The availability of student growth data depends on the measure in question. Some measures won’t be available until later in the summer, such as value-add scores, AP scores, or College and Career Readiness percentiles, because the data underpinning those measures won’t be processed until that time.
The timing of finalized student growth data, however, reinforces two critical concepts in T-PESS. First, student growth is one of multiple measures of campus performance and a principal’s practice, and decisions should take into consideration more than just single-year student growth. Second, in a formative evaluation process like T-PESS, the timing of student growth data reinforces the ongoing loop between evaluation, feedback, and development. Discussions about a principal’s practice should be ongoing and should evolve over the course of the year. Student growth data can be analyzed when available and should be taken into consideration when a principal modifies or adjusts his or her goals and initiatives at the beginning of a new school year.
A principal’s end-of-year score will include rubric-based results, results from the principal’s goals and initiatives, and student growth. The online system will calculate scores for appraisers.
The scoring on the rubric is based on an additive/cumulative process. Appraisers will use data, evidence, and artifacts throughout the year to determine the final rating in each indicator, including evidence of performance discussed during conferences earlier in the school year. A principal receives a particular rating once the evidence demonstrates that the principal has accomplished what each of the descriptors in that performance level captures for that indicator. Lack of evidence that the descriptors in the performance level “Developing” have been accomplished, for example, would result in a rating of “Not Demonstrated/Improvement Needed” for a particular indicator.
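The additive/cumulative logic described above can be sketched as follows. The rubric’s scale runs from “developing” to “distinguished”; the intermediate level names and the evidence structure below are placeholders for illustration, not an official T-PESS data format:

```python
# Hypothetical sketch of additive/cumulative rubric scoring: a principal
# earns a performance level only when evidence supports every descriptor
# at that level and, cumulatively, at every level below it. Intermediate
# level names and the evidence structure are placeholders.
LEVELS = ["Developing", "Proficient", "Accomplished", "Distinguished"]

def indicator_rating(descriptors_met: dict) -> str:
    """descriptors_met maps a level name to True when all of that level's
    descriptors are supported by evidence. Ratings accumulate bottom-up;
    a gap at any level stops the accumulation there."""
    rating = "Not Demonstrated/Improvement Needed"
    for level in LEVELS:
        if not descriptors_met.get(level, False):
            break  # descriptors at this level lack evidence
        rating = level
    return rating

evidence = {"Developing": True, "Proficient": True, "Accomplished": False}
print(indicator_rating(evidence))  # Proficient
```

Note that no evidence at the “Developing” level yields “Not Demonstrated/Improvement Needed,” matching the example in the paragraph above.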
Only those who have attended the entire two-day T-PESS training can appraise principals. Upon completing the training, appraisers are issued a certificate indicating that they are certified T-PESS appraisers.