In June 2005, the unit demonstrated its commitment to the enhanced use of assessment to prove and improve learning by hiring a Coordinator of Assessment and Accreditation and an academic program specialist. The academic program specialist is a staff member responsible for collecting and organizing data. Reporting directly to the Associate Dean, the coordinator prepares regular reports for the NCATE Steering Committee, the TEEB, the College of Education Leadership Team, and the Unit Assessment Committee. The coordinator also serves as NCATE Co-Coordinator and works closely with all programs in developing performance-based assessments at clearly identified transition points; in collecting, analyzing, and reporting data from program and unit assessments; and in monitoring the implementation and impact of data-driven changes to assess the extent of their intended effects.
Standard II. Element 1: Assessment System
Development of the Initial Certification Assessment System (Exhibit 13)
The history of the current Initial Certification Unit Assessment System begins with the standards of the TEEB (Exhibit 4), a standing committee of the University Senate representing all Teacher Education Programs at Towson University. Standards for Students Enrolled in Teacher Education Programs, a document originally developed in 1987 for initial preparation programs, has been revised four times in the intervening years to respond to the requirements of state, national, and institutional standards. The TEEB standards include requirements for admission to initial preparation programs and for entry into clinical practice, as well as a definition of professional behavior.
In 1995, the teacher education unit adopted the INTASC Principles and the Maryland Essential Dimensions of Teaching as its performance standards. Since 2000, the INTASC Principles have served as unit-wide performance standards for syllabi and for multiple assessments used to assess the performance of candidates for initial certification, including the internship evaluation, the professional portfolio, and post-graduate and employer surveys. Surveys aligned to the INTASC Principles allow the unit to triangulate cohort data, comparing graduating interns' program evaluations with their assessed performance as candidates, their post-graduation program evaluations (one and three years following graduation), and assessments collected from their employers during their first year of teaching.
The INTASC-aligned professional portfolio assessment model was developed collaboratively by the CPP and its PDS partners in 1998. Required prior to program completion, the summative professional portfolio is evaluated by P-16 educators using common scoring guides for artifacts as well as for an oral "defense" by the intern. As a result of the 2004-2005 revisions to the portfolio model, which were reviewed and affirmed by several school system partners, interns must include an artifact for INTASC Principle 8 that uses assessment to document evidence of student learning, and the intern's portfolio "defense" must demonstrate understanding of how to impact student learning (Exhibit 26).
As noted in the Institutional Overview and Section 2 of the Conceptual Framework, the NCATE Steering Committee was re-established in spring 2005 to renew and focus preparation for the 2007 NCATE/MSDE accreditation. Reflecting the Conceptual Framework's emphasis on collaboration and professional community, the Steering Committee comprises unit-wide faculty, including the TEEB representatives from the Fisher College of Mathematics and Science and the College of Health Professionals, as well as two representatives from the local school systems where candidates serve their initial and advanced program internships. The NCATE Steering Committee also served as the Unit Assessment Committee until fall 2006, when a subcommittee of the TEEB was created to provide a permanent structure for monitoring the Unit Assessment System.
In spring 2005, in response to the revised NCATE review process, the NCATE Steering Committee made three important recommendations for revising the Unit Initial Certification Assessment System: 1) expansion of the internship evaluation to include assessment of the respective SPA standards; 2) development of a comprehensive unit dispositions plan; and 3) assessment of all interns' ability to impact student learning.
Revision of Internship Evaluation
Assessment of Interns' Impact on Student Learning
Unit Dispositions Plan
Development of the Assessment System for Advanced Programs for Continuing Preparation and Other School Personnel
New components of the Unit Assessment System for advanced programs were piloted in all advanced programs in spring 2006 and fully implemented in fall 2006, resulting in a more comprehensive and integrated set of evaluations to measure student performance. The overall assessment system applies to all graduate programs (i.e., those for the continuing preparation of teachers and those for other school personnel). However, assessment at the graduate level is more program-specific than it is at the initial level. Therefore, data presented throughout this report are frequently program-specific.
Assessment of Dispositions in Advanced Programs
Midpoint Assessment
Capstone Assessment
Follow-Up Surveys
Relationship of Assessment Systems of Initial and Advanced Certification Programs to the Conceptual Framework, State, and Professional Standards
Key Assessments Used to Monitor Candidates' Performance
Content Knowledge
All candidates in advanced programs demonstrate their content knowledge through earned Bachelor's degrees from accredited institutions, with required GPAs of 3.0, and through teacher certification. At both the midpoint and completion transition points of their programs, content knowledge is measured by required course assessments aligned to state, national, or institutional standards, including capstone experiences that serve as final assessments of the content mastery required for exit from all graduate programs. Following program completion, the graduate school surveys all completers, and program surveys collect data on their respective completers' assessment of the quality of their preparation in content knowledge.
Pedagogical and Professional Knowledge
All advanced programs require evidence of professional and pedagogical knowledge at admission through documented teacher certification and letters of recommendation. Midpoint assessments, identified by each program, and the required maintenance of a 3.0 GPA document progress in professional and pedagogical knowledge in graduate programs. All advanced programs have capstone experiences that include required assessment of pedagogical and professional knowledge in field-based settings.
Dispositions
Student Learning
The Unit Assessment System for advanced programs identifies the assessment(s) used by each graduate program to evaluate candidates on their ability to impact student learning directly, or to create positive environments for student learning, during required capstone experiences in field-based settings. Data and detailed assessment information are available in the SPA reports (see Assessment #5) or in the Graduate Assessment Notebooks in the Exhibit Room.
Use of Assessment to Determine Admission to, Continuation in, and Completion of Program
There are four transition points in both initial and advanced programs. Multiple assessments at each transition point are used to identify candidates with the potential to become successful teachers, to serve as advanced professionals, or to assume roles as other school personnel. Content knowledge, professional and pedagogical skills, dispositions, and the ability to impact and/or support student learning are assessed systematically in all programs and used for both formative and summative purposes. Each required assessment defines a minimum level of competency based on common scoring guide criteria. Candidates must perform at the satisfactory level on each required assessment in order to progress to the next transition point in the program. Formative assessments provide candidates with ongoing feedback, which may include opportunities to redo assignments in order to meet the required standards.
Process to Ensure Fairness, Accuracy, Consistency, and Freedom from Bias in Assessment System
Assessments as Predictors of Candidate Success
As documented in Standard I, multiple years of data collected from program graduates and local school system employers validate the assessment system through consistently high satisfaction ratings on statements and questions aligned to the INTASC Principles. Towson's program completers who are licensed and hired to teach in Maryland also consistently express high levels of satisfaction with their preparation for practice in surveys sent to them twice: one year and three years following graduation (Exhibit 28). Survey ratings from employers of Towson completers practicing in Maryland public schools validate their strong performance in their first years of practice. The CPP recently concluded a five-year longitudinal retention study to examine whether candidates prepared in PDS stay in the teaching profession longer than candidates from earlier years who were not trained using the PDS model. Data indicate that PDS contribute to greater teacher retention (Exhibit 56). Graduate programs likewise use surveys to collect data from their completers (Exhibit 32).
Use of Assessments in Management and Improvement of Unit's Operations and Programs
Reflecting the Conceptual Framework, the unit is committed to preparing educators for diverse and inclusive communities of learners, including systematic exposure to heterogeneous populations. Accordingly, the unit is responsive to initial preparation candidates' feedback on unit operations gathered from graduating interns on Program Evaluation Day; candidates are asked to respond to questions (#12 and #13) that assess their opportunities to work with diverse and inclusive communities of learners during their internships. The CPP uses these data to assess its operational success in providing unit candidates with experience with diverse and inclusive communities of learners, and to revise internship placements/sites.
External data trends have influenced changes as well. For example, as a result of low reading scores in P-12 schools, MSDE now mandates state approval of the reading courses required of teacher candidates. Similarly, the statewide need for teachers who are better prepared to both use and teach technology led to the development of the Maryland Teacher Technology Standards (MTTS). These are but two examples of external trends that prompted realignment of our course requirements to ensure that the preparation of our candidates meets state standards.
After receiving the analysis of unit data in the Unit Assessment Committee's annual report to the TEEB, the Dean of the College of Education is required to report annually to the University Senate through the TEEB Report, summarizing the unit's performance, including its operations and programs, and requesting support for changes as needed.
Standard II. Element 2: Data Collection, Analysis, and Evaluation
Timeline for Collecting and Reviewing Data
Candidate Assessments
Surveys
Program Reviews
Program Operations Data
Summary and Analysis of Data
Frequency of Summary and Analysis
Responsibility for Summary and Analysis
The Coordinator of Assessment and Accreditation is responsible for summarizing unit and program data for the Unit Assessment Committee, which is responsible for analyzing all unit data. As a subcommittee of the TEEB, the Unit Assessment Committee reports its findings to that body, which is chaired by the Dean of the College of Education; the Dean, in turn, reports the committee's recommendations to the University Senate annually in the spring.
Information Technologies Used to Maintain Assessment System
Advanced programs submit their data in Excel or Word formats to facilitate aggregation of the data.
Records of Formal Candidate Complaints and their Resolution
Standard II. Element 3: Use of Data for Program Improvement
The unit is committed to answering the question, "Now what?" as it pertains to the regular analysis of data. Programs are required to document faculty analysis of program data annually, including changes planned as a result of that analysis. The Data Analysis Reports must also revisit programmatic changes documented in earlier reports and analyze, based on current data, whether those changes are having the desired effects. Most program improvements are generated through this process; however, the Unit Assessment Committee may also recommend changes based on its analysis of unit-wide data that have implications for individual programs. As reported in Standard I, data reveal that candidates are meeting national, state, and institutional performance standards.
Use of Data to Improve Performance
Candidates
Use of Data to Discuss or Initiate Program or Unit Changes on a Regular Basis
Data-Driven Changes
How Assessment Data are Shared