
Section III: Evidence for Meeting Each Standard

STANDARD II: ASSESSMENT SYSTEM AND UNIT EVALUATION


The unit has an assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the unit and its programs.

In June 2005, the unit demonstrated its commitment to the enhanced use of assessment to prove and improve learning by hiring a Coordinator of Assessment and Accreditation and an academic program specialist. The academic program specialist is a staff member responsible for collecting and organizing data. Reporting directly to the Associate Dean, the coordinator prepares regular reports for the NCATE Steering Committee, the TEEB, the College of Education Leadership Team, and the Unit Assessment Committee. The coordinator also serves as NCATE Co-Coordinator and works closely with all programs in developing performance-based assessments at clearly identified transition points; in collecting, analyzing, and reporting data from program and unit assessments; and in monitoring the implementation and impact of data-driven changes to assess the extent of their intended effects.

Standard II. Element 1: Assessment System

Development of the Initial Certification Assessment System (Exhibit 13)
The history of the current Initial Certification Unit Assessment System begins with the Standards of the TEEB (Exhibit 4), a standing committee of the University Senate representing all Teacher Education Programs at Towson University. Standards for Students Enrolled in Teacher Education Programs, a document originally developed in 1987 for initial preparation programs, has been revised four times in the intervening years in response to evolving state, national, and institutional standards. The TEEB standards include requirements for admission to initial preparation programs and for entry into clinical practice, as well as a definition of professional behavior. In 1995, the teacher education unit adopted the INTASC Principles and the Maryland Essential Dimensions of Teaching as its performance standards. Since 2000, the INTASC Principles have served as unit-wide performance standards for syllabi and for multiple assessments, including the internship evaluation, the professional portfolio, and post-graduate and employer surveys, used to assess the performance of candidates for initial certification. Surveys aligned to the INTASC Principles allow the unit to triangulate cohort data, comparing graduating interns' program evaluations with their assessed performance as candidates, their post-graduation program evaluations (one and three years following graduation), and employer assessments collected during their first year of teaching.

The INTASC-aligned professional portfolio assessment model was developed collaboratively by the CPP and its PDS partners in 1998. Required prior to program completion, the summative professional portfolio is evaluated by P-16 educators using common scoring guides for the artifacts as well as for an oral "defense" by the intern. As a result of the 2004-2005 revisions to the portfolio model, which were reviewed and affirmed by several school system partners, interns must include an artifact for INTASC Principle 8 that uses assessment to document evidence of student learning, and the intern's portfolio "defense" must demonstrate an understanding of how to impact student learning (Exhibit 26).

As noted in the Institutional Overview and Section 2 of the Conceptual Framework, the NCATE Steering Committee was re-established in spring 2005 to renew and focus preparation for the 2007 NCATE/MSDE accreditation. Reflecting the Conceptual Framework's emphasis on collaboration and professional community, the Steering Committee comprises unit-wide faculty, including the TEEB representatives from the Fisher College of Science and Mathematics and the College of Health Professions, as well as two representatives from the local school systems where candidates serve their initial and advanced program internships. The NCATE Steering Committee also served as the Unit Assessment Committee until fall 2006, when a subcommittee of the TEEB was created to provide a permanent structure for monitoring the Unit Assessment System.

In spring 2005, in response to the revised NCATE review process, the NCATE Steering Committee made three important recommendations for revising the Unit Initial Certification Assessment System: 1) expansion of the internship evaluation to include assessment of respective SPA standards; 2) development of a comprehensive unit dispositions plan; and 3) assessment of all interns' ability to impact student learning.

Revision of Internship Evaluation
During the summer of 2005, departments revised the INTASC-based internship evaluation, which had been common to the entire unit for two years, to include sections customized to reflect the standards of their respective specialized professional associations (SPAs). As the new internship evaluations were completed, they were added to the newly developed Teacher Internship Management System (TIMS), an electronic internship evaluation process that had been piloted with the previous INTASC-aligned internship evaluation in spring 2005. The new evaluation forms, which include SPA standards, were used for all programs beginning in fall 2005 (Exhibit 24).

Assessment of Interns' Impact on Student Learning
During the summer of 2005, the Portfolio Committee, comprising representative unit faculty, identified common elements to be required in all lesson plan formats across the unit and developed an assessment to measure candidates' impact on student learning. Drawing on Emerson Elliott's paper, Student Learning in NCATE Accreditation (2005), and Richard Stiggins' paper, Specifications for a Performance-Based Assessment System for Teacher Preparation (2000), the Portfolio Committee created a template to ensure common approaches to lesson planning across all programs, specifications for a required INTASC Principle 8 artifact documenting evidence of student learning in candidates' summative portfolios, and a scoring guide for portfolio reviewers to use in assessing the required artifact. The Director of the CPP submitted a draft of the new portfolio requirements to local school system representatives, and their input was incorporated into the final document. All programs were required to submit lesson plan formats aligned to what came to be known as the JPTAAR approach; the acronym stands for the core activities identified in Student Learning in NCATE Accreditation (2005): Judges Prior Learning, Plans Instruction, Teaches, Assesses, Analyzes Results, and Reflects on Changes (Exhibit 52). The new portfolio requirements were approved by both the NCATE Steering Committee and the COE department chairs, and implementation of the aligned lesson plan formats began in fall 2005. Each program assesses its candidates' impact on student learning either through the required portfolio artifact for INTASC Principle 8, Assessment, using the scoring guide common to the unit, or through a program-specific assessment aligned to SPA standards and identified in the Unit Assessment System.

Unit Dispositions Plan
The development of a comprehensive dispositions plan began with the fall 2005 faculty retreat, when the entire faculty was surveyed to identify dispositions that defined the theme of Professional Conscience, a part of the unit's Conceptual Framework since 2000. That list of dispositions, drawn from the INTASC Principles, national program standards (e.g., SPAs, NBPTS), and current research and practice at other IHEs, was categorized, reviewed, and prioritized by faculty groups including departments, graduate directors, the TEEB, and the NCATE Steering Committee (Exhibit 15). Once consensus on the unit's Essential Dispositions was achieved, observable behaviors for each disposition were identified for use in scoring assessments of dispositions. Each department and/or program submitted a three-phase implementation plan for assessing the Essential Dispositions, using the common scoring guide and identifying how and where preassessment, formative assessment, and summative assessment of dispositions would occur (Exhibit 16). Beginning in spring 2006, all candidates have been required to demonstrate mastery of dispositions as part of their program completion requirements; initial certification candidates must demonstrate mastery at the satisfactory level. The Unit Assessment System documents those programs that also assess dispositions as required by their respective SPAs, and data from those assessments may be found in the SPA reports available in the Exhibit Room.

Development of the Assessment System for Advanced Programs for Continuing Preparation and Other School Personnel
Historically, advanced programs in the College of Education have applied the standards of the University's Graduate School for program admission, retention, and degree completion. As part of the self-study for accreditation, and in response to recommendations from the NCATE Steering Committee, the Unit Assessment System for advanced programs (Exhibit 13) has added the following components: 1) a three-phase approach to assessing unit dispositions, implemented on a program-specific basis (Exhibit 16); 2) assessments at identified midpoints of all programs to gauge adequate progress and determine eligibility for continuation; 3) assessments required prior to completion of all programs, usually in capstone experiences, designed to document graduate candidates' mastery of content and their ability to directly affect and/or support student learning during field or clinical practice; and 4) follow-up surveys to assess completers' satisfaction with the quality of their programs.

New components of the Unit Assessment System for advanced programs were piloted in all advanced programs in spring 2006 and fully implemented in fall 2006, resulting in a more comprehensive and integrated set of evaluations to measure student performance. The overall assessment system applies to all graduate programs (e.g., continuing preparation of teachers and preparation of other school personnel). However, assessment at the graduate level is more program-specific than at the initial level; therefore, data presented throughout this report are frequently program-specific.

Assessment of Dispositions in Advanced Programs
All candidates for advanced degrees in the College of Education are required to demonstrate dispositions at the target level prior to successful program completion. Using the unit's scoring guide for dispositions assessment, each advanced program has a dispositions assessment plan, approved by the NCATE Steering Committee, which identifies learning opportunities, formative and summative assessment points, and plans for assistance and support of candidates if necessary.

Midpoint Assessment
Although the sequence of courses in graduate programs may not be consistent for all candidates, each program has identified one or more courses that occur at the approximate midpoint of the program, when candidates have had the opportunity to acquire significant new knowledge in their coursework. The midpoint assessments identified in the Unit Assessment System for advanced programs are used by each program to determine the candidate's eligibility to continue in graduate study.

Capstone Assessment
Each program has designated an assessment required for completion of the program, identified as a capstone experience. Designed to integrate content knowledge and professional/pedagogical knowledge, including research findings and data-based conclusions with implications for student learning, the capstone experience must also include a field-based component. Capstone assessments range from professional portfolios to extensive projects to theses.

Follow-Up Surveys
Although the Graduate School compiles satisfaction data at the program level from the University's Exit Survey, each graduate program has developed a program-specific survey in order to obtain feedback from completers tailored to its specific program goals (Exhibit 32).

Relationship of Assessment Systems of Initial and Advanced Certification Programs to the Conceptual Framework, State, and Professional Standards
All assessments at the initial and advanced levels are aligned to state and/or professional standards, and the unit's Conceptual Framework provides a structure for all unit and program-specific assessments through required alignment to its seven integrated themes:

  • Theme #1: Ensuring Academic Mastery is assessed through content assessments at all four transition points.
  • Theme #2: Reflecting upon and Refining Best Practices is assessed as professional and pedagogical knowledge and impact on student learning.
  • Theme #3: Preparing Educators for Diverse and Inclusive Classrooms is assessed in all three phases of dispositions assessment and in professional and pedagogical knowledge.
  • Theme #4: Utilizing Appropriate Technology is assessed in professional and pedagogical knowledge.
  • Theme #5: Developing Professional Conscience and Theme #6: Developing Collaborative Partnerships are assessed through a systematic, three-phase approach to assessing dispositions, as well as in Standard III, Field and Clinical Experiences.
  • Theme #7: Providing Leadership through Scholarly Endeavors is assessed through the University's systematic and comprehensive evaluation of faculty by both candidates and peers. (P&T procedures require faculty and course evaluations, Exhibit 53, and data from Program Evaluation Day, when interns evaluate the quality of their programs as well as their University supervisors, Exhibit 27.)

Key Assessments Used to Monitor Candidates' Performance
A comprehensive and integrated set of evaluation measures is used to monitor candidate performance and to manage and improve unit operations and programs.

Content Knowledge
Content knowledge is assessed in initial certification programs at all four transition points. The Unit Assessment System is designed to evaluate content knowledge through multiple assessments and indicators and is aligned to the Standards for Students Enrolled in Teacher Education Programs (Exhibit 4). At the first transition point, Admission, applicants must meet the content standards established for the unit by the TEEB, including state passing standards for the Praxis I examination. Further evidence of applicants' content knowledge is the completion of 45 credits of General Education content requirements, with a minimum GPA of 2.5 or 2.75 (depending on individual program standards), including completion of English 102/190 or an equivalent course with a grade of C or better. At the second transition point, Entry to Clinical Practice, candidates must meet program-specific requirements for content knowledge that have been approved by their respective SPAs. The third transition point, Exit from Clinical Practice/Program Completion, requires that all candidates earn a minimum score of 3 on a 5-point scale assessing both the content portions (INTASC Principle 1) of their INTASC-aligned professional portfolios and the content standards of their respective SPAs on Part 2 of the Internship Evaluation. All programs have identified additional content assessments at this transition point, which have been reviewed by their respective SPAs. At the final transition point, After Program Completion, candidates evaluate the quality of their content preparation on the survey administered on Program Evaluation Day, held annually following the completion of the final internship and immediately prior to graduation. Praxis II Content Tests, required for licensure in the state of Maryland, provide data documenting content knowledge based on state-established passing standards (Exhibit 22). In addition, surveys of graduates in their first and third years of practice, and of their employers during the graduates' first year of practice, include specific questions designed to assess the quality of content preparation provided by our programs.

All candidates in advanced programs demonstrate their content knowledge through earned Bachelor's degrees from accredited institutions, required GPAs of 3.0, and teacher certification. At both the midpoint and completion transition points of their programs, content knowledge is measured by required course assessments aligned to state, national, or institutional standards, including the capstone experiences that serve as final assessments of content mastery required for exit from all graduate programs. Following program completion, the Graduate School surveys all completers, and program-specific surveys collect completers' assessments of the quality of their preparation in content knowledge.

Pedagogical and Professional Knowledge
Pedagogical and professional knowledge is assessed as part of the Unit Assessment System during the multiple course and field experiences required in initial certification programs. Beginning at Entry to Clinical Practice, when extensive field experiences, including protected practice such as microteaching, begin in all programs, candidates are required to demonstrate, through multiple assessments identified in each program and reviewed by their respective SPAs, their ability to apply pedagogical and professional knowledge to the teaching of content (e.g., SCED, SPED; Exhibit 54). Each program has identified a common lesson planning format, which must be aligned to the unit's required lesson planning components (JPTAAR, Exhibit 52). Initial candidates are assessed in lesson planning in one or more required SPA assessments. In addition, all candidates are required to pass a program-specific methods course. At Transition Point 3, Exit from Clinical Practice/Program Completion, all candidates are assessed on their professional and pedagogical knowledge through the related Principles in the INTASC-aligned portfolio, as well as in the Internship Evaluation, which is aligned both to INTASC and to the program standards of the specific SPA. As part of Maryland's Redesign, all programs are aligned to the Maryland State Teacher Technology Standards (MTTS, Exhibit 42), require 6 credits in Instructional Technology (ISTC 201, 301, or program-specific alternative courses), and assess candidates, through a modified INTASC Principle 6 of the professional portfolio, on their ability to integrate technology in their teaching in order to present content in clear and meaningful ways. At Transition Point 4, After Program Completion, graduates assess the effectiveness of the program in providing them with pedagogical and professional knowledge, first on Program Evaluation Day and again on surveys one and three years following graduation. Employer surveys rate Towson graduates on their professional and pedagogical knowledge during their first year of employment.

All advanced programs require evidence of professional and pedagogical knowledge at Admission through documented teacher certification and letters of recommendation. Midpoint assessments identified by each program, together with the required maintenance of a 3.0 GPA, are used to document progress in professional and pedagogical knowledge in graduate programs. All advanced programs have capstone experiences that include required assessment of pedagogical and professional knowledge in field-based settings.

Dispositions
Dispositions have been defined for the unit through a common scoring guide based on observable behaviors expected of all candidates as evidence of Professional Conscience, one of the integrated themes of the unit's Conceptual Framework. Diversity proficiencies are aligned to unit dispositions, which are assessed at multiple points based on each program's dispositions implementation plan (see Standard I, Element 6 of this report and Exhibit 16 for program-specific plans). In addition, disposition-related INTASC Principles 3, 5, 9, and 10 are analyzed for further evidence of interns' values, commitments, and professional ethics.

Student Learning
Student learning is central to the unit's mission, and all initial certification candidates are required to demonstrate the assessment literacy needed to analyze evidence of student learning and make appropriate adjustments to instruction based on data. The internship evaluations by both University supervisors and mentor teachers (Transition Point 3) measure impact on student learning through INTASC Principle 8 (Assessment) and the SPA standards on Part 2 of the internship evaluation. Each program's SPA has reviewed data from Assessment #5 to ascertain interns' impact on student learning. The required Principle 8 artifact in the professional portfolio serves as the summative assessment of interns' ability to impact student learning (Exhibit 26). Finally, surveys of completers, alumni, and employers document program effectiveness in preparing teachers to impact student learning in their early years of practice.

The Unit Assessment System for advanced programs identifies the assessment(s) used by each graduate program to evaluate candidates on their ability to impact student learning directly or to create positive environments for student learning during required capstone experiences in field-based settings. Data and detailed assessment information are available in the SPA reports (See Assessment #5) or Graduate Assessment Notebooks, available in the Exhibit Room.

Use of Assessment to Determine Admission to, Continuation in, and Completion of Program

There are four transition points in both initial and advanced programs. Multiple assessments at each transition point are used to identify candidates with potential to become successful teachers, to serve as advanced professionals, or to assume roles as other school personnel. Content knowledge, professional and pedagogical skills, dispositions, and ability to impact and/or support student learning are assessed systematically in all programs and used for both formative and summative purposes. Each required assessment defines a minimum level of competency based on common scoring guide criteria. Candidates must perform at the satisfactory level on each required assessment in order to progress to the next transition point in the program. Formative assessments provide candidates with ongoing feedback which may include opportunities to redo assignments in order to meet the required standards.

Process to Ensure Fairness, Accuracy, Consistency and Freedom from Bias in Assessment System
Fairness results from the documented alignment of all required assessments and courses to the state, national, and institutional standards of the unit and its programs. Alignment ensures that assessment is congruent with what is actually taught in courses. Required assessments are announced to interns through core syllabi, and common scoring guides define the criteria for levels of performance, including the required minimal level of competency. Accuracy and the elimination of bias in the assessment system are ensured by the required alignment of assessments to the depth and breadth of appropriate state, national, or institutional standards. Alignment ensures that course content reflects criteria that have been identified as important for professional educators. Candidates are expected to demonstrate their understanding of standards at levels consistent with the intent of the standards and with the professional responsibilities for which their programs are preparing them. Faculty are trained to administer assessments in settings that are free of contextual distractions. Consistency in the assessment system is documented through results that have remained stable over the years the assessments have been administered; for example, consistent scores are clearly evident in the INTASC-aligned assessments that have provided data on candidate performance since 2003. Training in the use of required scoring guides is provided to new faculty, and to all faculty when assessments are revised or added (Exhibit 50, agendas/minutes). External validation of required assessments is obtained through the SPA review process or, for programs that are not SPA-reviewed, through required submission of assessment plans to the University Assessment Office for review (Exhibit 55).

Assessments as Predictors of Candidate Success
Multiple decision points require multiple indicators of performance, which are designed to identify candidates with the potential to become effective teachers. Assessments at the first three stages of the Unit Assessment System are predictive of success in the internship. Historically, candidates whose Praxis I scores and GPAs document prerequisite content knowledge, who are able to demonstrate the observable behaviors associated with dispositions in formative assessment, and who complete the requirements of program-specific assessments aligned to state, national, and/or institutional standards have been able to complete the summative requirements of the internship in content knowledge, professional and pedagogical knowledge, dispositions, and impact on student learning. Further success is documented in surveys of graduating interns and alumni and in employer assessments of program completers' early years of practice. Candidates in advanced programs who meet admissions requirements, who maintain required GPAs through continuous enrollment, and who demonstrate an early understanding of dispositions have been able to complete the required midpoint assessments and capstone experiences, which measure their content knowledge, professional and pedagogical knowledge, dispositions, and ability to impact student learning either directly or through the creation of an environment conducive to student learning.

As documented in Standard I, multiple years of data collected from program graduates and local school system employers validate the assessment system through consistently high satisfaction ratings based on statements and questions aligned to the INTASC Principles. Towson's program completers who are licensed and hired to teach in Maryland also consistently express high levels of satisfaction with their preparation for practice in surveys sent to them twice, one and three years following graduation (Exhibit 28). Survey ratings from employers of Towson completers practicing in Maryland public schools validate completers' strong performance in their first years of practice. The CPP recently concluded a five-year longitudinal retention study to examine whether candidates prepared in PDS stay in the teaching profession longer than candidates from earlier years who were not trained using the PDS model; the data indicate that PDS contribute to greater teacher retention (Exhibit 56). Graduate programs use surveys to collect data from their completers as well (Exhibit 32).

Use of Assessments in Management and Improvement of Unit's Operations and Programs
The development and implementation of the unit's Teacher Internship Management System (TIMS), an online internship evaluation system, has streamlined the assessment of interns. Feedback on performance during the first internship experience is available as formative data that can be used to improve individual performance during the second internship experience, and the system enables the timely collection of unit data and the disaggregation of data by program to identify patterns and comparisons with implications for management and improvement. The unit continues to refine its use of the TIMS, which is aligned to the University's PeopleSoft database; this is a significant improvement for a unit as large as Towson's. Aggregating data by program over time allows for comparison and analysis that improve unit operations as well as program effectiveness, and program data are further disaggregated by site to allow comparison to larger, on-campus programs (Exhibit 25). Each department chair/graduate program director is required to document faculty involvement in the use of data to improve programs, as well as to provide evidence in their Data Analysis Reports that any resulting changes are having the intended effect(s). (See Element 3 of this Standard for examples and Exhibit 57 for actual reports and analysis.)

Reflecting the Conceptual Framework, the unit is committed to preparing educators for diverse and inclusive communities of learners, including through systematic exposure to heterogeneous populations. Accordingly, the unit is responsive to initial preparation candidates' feedback on unit operations gathered from graduating interns on Program Evaluation Day, where candidates respond to questions (#12 and #13) that assess their opportunities to work with diverse and inclusive communities of learners during their internships. The CPP uses these data to assess its operational success in providing unit candidates with experience in diverse and inclusive communities of learners and to revise internship placements/sites.

External data trends have influenced changes as well. For example, as a result of low reading scores in P-12 schools, MSDE now mandates state approval of the reading courses required of teacher candidates. Similarly, the statewide need for teachers who are better prepared to both use and teach technology led to the development of the Maryland Teacher Technology Standards (MTTS). These are but two examples of external trends that prompted realignment of our course requirements to ensure that the preparation of our candidates meets state standards.

After receiving the Unit Assessment Committee's annual analysis of unit data, presented in its report to the TEEB, the Dean of the College of Education reports annually to the University Senate through the TEEB Report, summarizing the unit's performance, including its operations and programs, and requesting support for changes as needed.

Standard II. Element 2: Data Collection, Analysis, and Evaluation

Timeline for Collecting and Reviewing Data
The unit's assessment system is designed to monitor candidates' progress and to improve program efficacy. Using multiple assessments at each transition point ensures that program completers master the content knowledge, professional and pedagogical knowledge, and dispositions identified by professional, state, and institutional standards for their respective education professions. Table 57 describes the unit's timeline for collecting data on each component of the assessment system.

Table 57. Unit Assessment System – Operational Plan

Timeline | Component | Responsibility
August/September | Develop annual Data Analysis Report based on fall/spring semester and trend data reports and on data from program-required assessments. | Department chairs & program faculty
September/October | Analyze the unit's spring semester and trend data reports and programs' annual Data Analysis Reports; present findings to the TEEB. | Unit Assessment Committee
September/October | Submit annual Teacher Preparation Improvement Plan (TPIP) and NCATE reports. | CPP Director/Coordinator of Assessment and Accreditation
January | Analyze fall semester program data and program-required assessment data; monitor the impact of data-based changes identified in annual Data Analysis Reports. | Program faculty
December/May | Collect program evaluations from program completers; collect internship evaluations from mentors and University supervisors; survey graduates (annually and triennially) and employers (annually); interns defend professional portfolios to external evaluators. | CPP Director
June | Analyze program data from the spring semester. | Program faculty
June | COE report to the University Senate through the TEEB. | Dean of COE

Candidate Assessments
Multiple assessments and indicators from both internal and external sources provide the unit with performance data from applicants, candidates, recent graduates, and faculty. In each program, data are collected at transition points each semester or year. Table 58 summarizes the data collection timeline.

Table 58. Unit Data Collection Timeline

Type of Data | Collection Frequency | Analysis Responsibility
Course Grades | Semester | Advisors
GPAs | Semester | Advisors
SPA and required program assessments | Semester | Department Chairs/Program Directors
PRAXIS I and II | Annually | CPP/Coordinator of Assessment
Intern Evaluations | Semester | CPP
Professional Portfolios | Semester | CPP
Graduating Interns' Program Evaluation | Semester | CPP
Course Evaluations by candidates | Semester | CPP
Faculty Evaluations by candidates | Semester and annually | CPP
Alumni Survey | One and three years following graduation | CPP
Employer Survey | Annually | CPP

Surveys
Graduates of initial certification programs are surveyed twice, one year and three years following graduation (Exhibit 28). Selected employers are surveyed annually about their satisfaction with the performance of Towson graduates (Exhibit 29). Both surveys are aligned to INTASC Principles. Advanced programs have designed their own post-graduate surveys (Exhibit 32) and are required to report the data that they gather from those instruments in their annual Data Analysis Reports (Exhibit 57).

Program Reviews
Towson University requires graduate and undergraduate programs to submit annual assessment plans and data, except for departments involved in the SPA process (Exhibit 55). Professional association program reviews (SPAs) are accepted by the University Assessment Office as evidence of program effectiveness upon recommendation for national recognition. As shown in Table 2 (pp. 4-5), Towson University submitted 16 Program Reports to NCATE on September 15, 2006. Program Directors are required to submit updated data on their approved SPA assessments each semester to the Coordinator of Assessment and Accreditation.

Program Operations Data
Department chairs complete annual, program-specific Data Analysis Reports (Exhibit 57) following their receipt of the Unit Semester and Trend Data Report, which is compiled each semester by the Coordinator of Assessment and Accreditation in collaboration with the CPP. Performance data for the unit, along with a summary of the program Data Analysis Reports, are compiled each semester by the Coordinator of Assessment and Accreditation for analysis by the Unit Assessment Committee. Following its annual analysis of unit data, the Unit Assessment Committee presents its findings to the TEEB, and the Dean of COE, in his role as chair of the TEEB, reports on the data and their implications for resources to the University Senate. Licensure data are reported annually by the Educational Testing Service (ETS).

Summary and Analysis of Data
Format
The CPP aggregates data submitted by programs for initial certification to create separate reports, including: Program Evaluation by Interns; Internship Evaluation by University Supervisors; Internship Evaluation by Mentor Teachers; Survey of Third Year Graduates; Survey of First Year Graduates; Survey of Employers; and Dispositions of Candidates (Exhibit 25). Reports show mean scores for all programs and for the unit by INTASC standard on a five-point scoring guide. Part 2 of each Internship Evaluation reports performance on the individual program's SPA standards. All reports show trend data for each program and comparative data for all programs for the semester. Advanced programs compile data collected at each transition point, which must include trend data, and submit them in Excel or Word format (Exhibit 57) to the Coordinator of Assessment and Accreditation. Because of the program-specific nature of graduate assessment, advanced program data are not aggregated for program comparison.

Frequency of Summary and Analysis
Data on the performance of interns are summarized by the CPP and distributed to programs each semester for analysis. Surveys of graduates and employers are collected and summarized annually by the CPP (Exhibits 28 and 29). Advanced programs summarize data on candidates each semester and on completers biannually (Exhibit 57). Unit Semester and Trend Data Reports are sent each semester to department chairs, who work with program faculty to prepare annual Data Analysis Reports in September. The Coordinator of Assessment and Accreditation summarizes the program reports and submits that summary, along with the Unit Semester and Trend Data Reports, to the Unit Assessment Committee (Exhibit 25). The Unit Assessment Committee presents its findings to the TEEB by the end of the fall semester.

Responsibility for Summary and Analysis
Annual Data Analysis Reports document the conclusions of program faculty based on their analysis of the most recent academic year's compiled program data; planned changes based on findings from the data; and analysis of whether the new data support the intended effects of previously implemented data-driven changes (Exhibit 57).

The Coordinator of Assessment and Accreditation is responsible for summarizing unit and program data for the Unit Assessment Committee, which is responsible for analyzing all unit data. As a subcommittee, the Unit Assessment Committee reports its findings to the TEEB, which is chaired by the Dean of the College of Education; the Dean reports recommendations annually in the spring to the University Senate.

Information Technologies Used to Maintain Assessment System
The online TIMS is a part of PeopleSoft, which is Towson University's main database for student information. The TIMS is used for the following:

  • All COE students complete a TIMS application for the internship.
  • All internship placements are made in the TIMS, which is used by school systems as well as the University to maintain a database of all internship placements.
  • All mentor teachers and University supervisors complete and submit final internship evaluations online using the TIMS. (Numbers of evaluations from mentor teachers and University supervisors may differ due to variation in program-specific internship assignments and program requirements: all programs require a minimum of 100 days in the internship, but some interns have one supervisor for this time and some have more than one, depending on the program model.)
  • The TIMS is utilized to collect internship placement and evaluation data.

Advanced programs submit their data in Excel or Word format to facilitate aggregation.

Records of Formal Candidate Complaints and their Resolution
Grade appeal procedures are outlined in the Student Code of Conduct, published in the University catalog. Records of formal candidate complaints and their resolutions are maintained by both the Director of the CPP and in the office of the Dean of the College of Education.

Standard II. Element 3: Use of Data for Program Improvement

The unit is committed to answering the question, "Now what?" as it pertains to the regular analysis of data. Programs are required to document faculty analysis of program data annually, including changes planned as a result of that analysis. The Data Analysis Reports are required to revisit programmatic changes documented in earlier reports and to analyze whether those changes are having the desired effects, based on current data. Most program improvements are generated through this process. However, the Unit Assessment Committee may also recommend changes based on its analysis of unit-wide data that have implications for individual programs. As reported in Standard I, data reveal that candidates are meeting national, state, and institutional performance standards.

Use of Data to Improve Performance
Faculty
Faculty are required to include course evaluation data in their annual reviews and to discuss those evaluations with their department chairs during Annual Review conferences in order to identify changes for improvement. (See Standard V, pp. 86-87, for details of the annual review process.) On Program Evaluation Day, interns evaluate their University supervisors. Data from those evaluations are compiled by the CPP and reported to department chairs and the Associate Dean, who work with faculty as necessary on improvement (Exhibit 58).

Candidates
Formative assessment in all programs is used to provide candidates with feedback designed to improve their performance. Advisors meet regularly with candidates, as required by the University advising system. Throughout their progressively responsible field experiences, candidates receive formative feedback for all field-based performance assessments, which are aligned to INTASC and program-specific national standards, as well as state standards (Exhibit 59). At the conclusion of the first internship experience, a three-way conference is held which includes the candidate, the University supervisor, and the mentor teacher so that the intern has clear direction on changes expected in the second internship experience. Minimal levels of competency are clearly defined for summative assessments at each transition point in the unit assessment plan. These required performance levels ensure that candidates demonstrate progressive growth in the knowledge, skills, and dispositions required of effective professionals as they move through the program.

Use of Data to Discuss or Initiate Program or Unit Changes on a Regular Basis
All programs submit annual Data Analysis Reports to the Coordinator of Assessment and Accreditation, documenting the analysis of data from the preceding academic year and its implications for programmatic changes, as well as analysis of whether previously implemented changes are having the intended effects (Exhibit 57).

Data-Driven Changes
There are numerous examples of data-driven changes made to programs in recent years: the addition of a course (SPED 401) in response to feedback from interns expressing concern about their preparation to meet the needs of children with exceptionalities; changes to course content, such as the addition of a field experience to the Art program; elimination of the fall internship option in Music Education, based on analysis of internship performance; and the design of an optional preparation course for the Praxis II mathematics content exam, based on interns' inconsistent performance. Information on these and other changes made in response to performance data may be found in Exhibit 57.

How Assessment Data are Shared
Data from the assessment system are compiled each semester and shared with the college community, including faculty and administration; with external stakeholders who serve on oversight committees and coordinate PDS sites; and with the University community through unit representatives on the TEEB (Exhibit 60) and the University Assessment Office (Exhibit 61). PDS site coordinators review data from the previous cohort of interns and identify their implications for current interns during summer strategic planning, which is based on the Maryland PDS Standards.



 
