Standard 2

Standard 2.1 Assessment System and Unit Evaluation

Upon learning that the unit’s advanced preparation programs were assigned a targeted area for improvement (AFI) (“At the advanced level, the unit’s use of data to make program improvements is inconsistent”) following the 2007 visit, the unit reviewed its entire data collection, organization, and reporting practices for the purposes of continuous program improvement. The initial outcome of this review was the Data Analysis Report (DAR). The DAR was designed to require each program to collect general and program-specific information for program review, analysis and improvement. In fall 2009, the Yearly Assessment System Update (YASU) was added to the DAR to create an annual SPA-like approach to review, analyze and interpret those data elements. This process allows programs to monitor and assess program enrollments, faculty qualifications, and alignments between required program assessments and program standards.

The YASU/DAR framework enables the unit to regularly evaluate the capacity and effectiveness of its assessment system, basing decisions about the unit’s effectiveness on candidate data and on data collected from in-service practitioners early in their careers. Similarly, the YASU/DAR allows unit programs to monitor and assess the effects of changes from year to year to determine whether intended outcomes were met.

The YASU/DAR connects issues raised in the DAR from prior years with evidence in the current year documenting that concerns were addressed. A detailed explanation of the YASU/DAR process is in R.2.4.d.1. Exhibits R.2.4.a.1.a,b,c,d,e show the elements of the Unit’s comprehensive program assessment structure. Grounded in the unit’s Conceptual Framework (R.2.4.a.1.a), the unit assessment timelines indicate when performance-based assessments are administered for initial and advanced programs (R.2.4.a.1.b-c).

At the initial level, common assessments of knowledge, skills and dispositions are evaluated using common scoring guides. R.2.4.a.1.b shows the chronology of data collection across critical transition points in the Initial Certification programs (Admission; Entry to Clinical Practice; Exit from Clinical Practice), including post-completion data from program graduates. The result of this multiple-transition-point data collection is an annual data set consisting of eight reports.

Report 1 presents data from Program Evaluation (R.1.4.c.3), collected from interns completing their capstone internships using an online tool administered by the Center for Professional Practice (CPP). The assessment is aligned with the InTASC Standards and asks interns about their program preparation during field experiences and during the internship. Report 1 is a disaggregated – by program level and location – unit summary of the current year’s data with weighted means. Report 1A is an aggregated unit trend summary through the present year, and Report 1B is a program-specific trend summary (R.1.4.d.5-6).

Report 2 presents university liaison/supervisor (ULS) ratings of interns, collected by the CPP using an online tool. The rating instrument consists of three parts (R.1.4.c.1). Part one collects data on the InTASC standards; Part two contains questions related to the Maryland Institutional Performance Criteria – Diversity (MIPC-D); and Part three addresses program-specific SPA standards.

The rating instrument is identical for all programs, except for the SPA standards section. Report 2 is a disaggregated – by program level and location – unit summary of the current year’s InTASC and MIPC-D data with weighted means. Report 2A is an aggregated unit trend summary through the present year, and Report 2B is a program-specific trend summary. Report 2C is a current-year summary sheet of aggregated SPA standards, and Report 2D is a program-specific SPA standard trend data table (R.1.4.d.7, 8, 9).

Report 3 presents mentor teacher (MT) ratings of interns, collected by the CPP using the same online tool used by the ULS described above. Report 3 is a disaggregated – by program level and location – unit summary of the current year’s data with weighted means. Report 3A is an aggregated unit trend summary through the present year, and Report 3B is a program-specific trend summary. Report 3C is a current-year summary sheet of aggregated SPA standards, and Report 3D is a program-specific SPA standard trend data table (R.1.4.d.10,11,12).

Report 4 presents portfolio review scores. Program portfolio data on InTASC 6 – Assessment to Prove and Improve Learning – are collected and aggregated by program chairs and reported to the COE Dean’s Office. Report 4 is a disaggregated – by program level and location – summary of the current year’s data. Report 4B is a program-specific trend report on portfolio scores (R.1.4.d.13).

Report 5 presents data on essential dispositions for educators. These data are reported by university liaisons to the CPP. Report 5 is a disaggregated – by program level and location – summary of the current year’s data. Report 5A summarizes unit trend data, and Report 5B summarizes program-specific trend data on essential dispositions (R.1.4.d.23-24).

Report 6 presents data collected from first-year graduates, using a paper-based survey. The InTASC-based survey collects data about how well the program prepared graduates for their teaching roles and about graduates’ ability to demonstrate the InTASC standards. Report 6 is a disaggregated – by level and location – summary of the current year’s data. Report 6A is a unit trend data summary, and Report 6B presents program-specific trend data on the survey questions (R.1.4.d.16-17).

Report 7 presents data collected from third-year graduates, using a paper-based survey. The InTASC-based survey collects data about the graduates’ ability to demonstrate the InTASC standards. Report 7 is a unit-level aggregated summary of the current year’s data, and Report 7A is a unit trend data summary (R.1.4.d.18-19).

Report 8 presents data collected from first-year teachers’ principals, using a paper-based survey. The survey asks principals to rate the first-year teacher’s performance on the InTASC standards. Report 8 is a unit-level aggregated summary of the current year’s data. Report 8A is a unit trend data summary (R.1.4.d.20-21).

These multi-part reports enable programs to compare current candidate performance against historical trends in the program and the unit.

Because of their designed focus on advanced content knowledge and skills, advanced programs collect program-specific sets of data. As noted in R.2.4.a.1.c, unit advanced programs collect data from a program-specific set of six to eight required assessments across critical transition points in the advanced programs (Entry to Graduate Program; Midpoint of Graduate Study; Completion of Graduate Study), and from program completers. Programs for other school professionals collect data responsive to advanced SPA standards. Advanced programs for the continuing education of teachers collect data appropriate to SPA-like enhanced pedagogy and content expectations.

As shown in R.2.4.a.1.a,b,c,d,e, the unit’s assessment system consists of multiple candidate evaluation activities, completed by multiple internal and external sources at multiple times throughout the respective programs. R.2.4.a.1.d shows how the information gathered through the YASU/DAR process is analyzed to simultaneously monitor and assess candidate progress, program quality and potential for continuous improvement. 

Through the YASU/DAR process, each program analyzes and reflects upon its own set of data, guided by its specific professional standards. The value of the YASU/DAR for all unit programs has been evident since its introduction. In fall 2011, unit programs at the initial and advanced levels submitted a total of 25 initial or advanced program reports. As reported in S.2.1, 23 of the 25 SPA reports received national recognition, with recognition extending through 2022. The final two reports will be submitted in March 2014. We anticipate all 25 will be nationally recognized prior to the November 2014 site visit.

The YASU/DAR’s value was further confirmed by the Assistant Vice President for Institutional Assessment, hired in 2009. One of her mandates was to institute a regular, consistent, and ongoing reporting process for all university programs. Many unit programs completing the YASU/DAR process received awards for “Best Practices in Program Assessment” from the University Assessment Council (a subcommittee of the University Senate) (S.2.10). Importantly, YASU reports were recognized as exemplars for all university programs to emulate. Unit programs currently submit their YASU to the Office of Assessment as evidence of ongoing data collection, analysis, interpretation and program improvement.

In fall 2013, the University’s Office of Assessment announced that all programs would complete their program reports through an online tool called ComplianceAssist. ComplianceAssist was selected by the institution to address Middle States reporting requirements. ComplianceAssist provides a common method for reporting required program data, and is being configured to allow unit programs to report their YASU/DAR, beginning in the 2014-15 academic year.

A TEEB committee is exploring various content management systems to support the unit’s assessment system and to provide a common, flexible platform for electronic portfolio design, dissemination, updating and management.

 

Standard 2.2 Moving Toward Target

As documented in R.2.4.a.1.a,b,c,d,e, assessments at the initial and advanced levels are aligned to professional and/or state standards and to the unit's Conceptual Framework (R.I.5.c.1). R.2.4.a.1.b-c show connections between the data collected through multiple assessments at multiple transition points for the unit’s initial and advanced programs and the Conceptual Framework’s seven integrated themes: Theme #1, Ensuring Academic Mastery, through multiple content knowledge assessments at multiple transition points; Themes #2 and #4, Reflecting upon and Refining Best Practices and Utilizing Appropriate Technology, through measuring professional and pedagogical knowledge and impact on student learning; Theme #3, Preparing Educators for Diverse and Inclusive Classrooms, through dispositional assessments and the alignment between professional and pedagogical knowledge and the Maryland Institutional Performance Criteria – Diversity (MIPC-D); Theme #5, Developing Professional Conscience, through measuring candidate dispositions and through Standard III, Field and Clinical Experiences; Theme #6, Developing Collaborative Partnerships, through Towson University and PreK-12 PDS councils and TLN programming; and Theme #7, Providing Leadership through Scholarly Endeavors, through the university’s systematic and comprehensive evaluation of faculty performance based on peer evaluations using the Annual Review (AR) process, on candidate evaluations through course evaluation data (R.5.4.f.1), and on interns’ ratings of their university liaisons/supervisors (ULS) and mentor teachers (MT) during Program Evaluation (R.1.4.c.3).

The unit completed a comprehensive review of its data collection, aggregation and summarization processes after its successful accreditation visit in 2007. As a result of this review, the unit made several revisions to its assessment system and introduced the Yearly Assessment System Update and Data Analysis Report (YASU/DAR). These revisions have improved the assessment system’s capacity and effectiveness in collecting data on candidate performance and in monitoring and assessing program quality and unit operations. The revised assessment system collects assessment data from candidates, graduates, PreK-20 faculty and other internal and external sources. These data are systematically collected as candidates progress through their programs. As described in Section 2.1, the assessment system data are regularly and systematically compiled, aggregated, analyzed and reported through the YASU/DAR.

Changes made to keep abreast of technology and professional standards

The unit continues to develop and use information technology to improve its assessment system. The unit adopted a new online assessment tool – administered by the Center for Professional Practice (CPP) – for three of its eight data-set reports: Report 1: Program Evaluation; Report 2: ULS internship evaluations; and Report 3: MT internship evaluations. The CPP provides all unit programs with access to CampusLabs for Program Evaluation. In fall 2011, the unit piloted CampusLabs for capstone internship ratings made by mentor teachers (MT) and university liaison/supervisors (ULS) (R.1.4.c.1-2), and for interns’ program evaluations (R.1.4.c.3). In spring 2012, all unit programs began using the CampusLabs evaluations. The new assessment tool enables unit data collection to be completed more quickly and efficiently. Consequently, programs can identify and propose necessary improvements through the YASU/DAR process (R.2.4.d.1) and implement corrective actions within the current academic year (examples: R.2.4.g).

The online tool also enables the unit to respond quickly to changes in professional, state and institutional standards, such as the unit’s adoption of the 2011 InTASC standards. Further program change processes are underway to reflect Maryland’s adoption of the Common Core (i.e., the Maryland Common Core Curriculum Framework).

The university has recognized the unit’s model as an exemplar for program review and reporting. As a result, the university has adopted an online program assessment system, ComplianceAssist, to enable regular, systematic, and ongoing program data collection and reporting. The ComplianceAssist system is being customized for unit programs’ use in reporting YASU/DAR outcomes. All unit programs will use ComplianceAssist for the YASU/DAR beginning in fall 2014.

Assessment system includes multiple assessments

As shown in R.2.4.a.1.b-c, data are collected from candidates and from internal and external sources at multiple transition points: program entry; midpoint/initiation of the capstone internship; exit from the capstone internship; and post-graduation. These evaluations are made by a number of separate and independent PreK-20 faculty (i.e., University Supervisors & PDS Liaisons (ULS) and in-service Mentor Teachers (MT)). These data are compiled, aggregated, summarized, analyzed and reported using the YASU/DAR process (R.2.4.d.1). The consistently high trend ratings are corroborated by the consistency of the data obtained from multiple sources, at multiple transition points, and across multiple raters.

Relationship of candidate performance and performance in schools. The unit continues to compile and summarize disaggregated data sets across program levels and locations (R.1.4.d). Thus, separate assessments are prepared for capstone internship evaluations from two independent sets of reviewers (ULS and MT), summative dispositions, 1st- and 3rd-year graduate surveys and 1st-year employer surveys. The data sets include reports on data collected during the prior academic year as well as “trend data” indicating how program candidates have performed across time. In addition to the annual data collected from pre-service candidates, the unit also collects and reports data on graduates’ perceptions of the quality of their unit preparation experiences after their first and third years in service. In addition to this graduate feedback, the unit collects data from principals of graduates completing their first in-service year. These pre-service and in-service assessments reveal consistently strong performance across all ratings (content, Pedagogical Content Knowledge (PCK), Professional and Pedagogical Knowledge (PPK), and Student Learning), and across multiple internal and external sources (R.1.4.d.1,2,3,4).

Fairness, accuracy and consistency. As noted, data are collected at multiple transition points from multiple internal and external sources, using varied methods. By triangulating data from multiple sources and noting that candidate ratings are consistent across raters and time, the unit is confident that its candidate and program assessments are valid and that the data collected by those measures are reliable. The results of these independent observations are reported to programs in the annual data set, and programs’ annual YASU/DAR reviews analyze and interpret those data elements. Where there are concerns, programs and the unit work together to resolve the issue. The intended outcome of the YASU/DAR process is continuous improvement of candidate preparation experiences.

The unit changes its practices consistent with these studies. The implementation of the YASU/DAR requires all initial and advanced level programs to discuss the need for changes based on candidate performance and other factors (such as changes to professional and/or state standards). In response to state-based initiatives related to specific types of diversity proficiencies, the unit redesigned the YASU/DAR to provide greater focus on diversity; assessment tools were revised to address the expanded MSDE Institutional Performance Criteria relating to individual diversity “elements” (MIPC-D, S.2.2), providing additional focus on candidates' abilities to differentiate instruction for English Language Learners (ELL) and Gifted & Talented (G&T) students, and on candidate interactions with “other school personnel” (e.g., School Psychologists, Reading Specialists, Library Media Specialists).

Regular and comprehensive data on program quality, unit operations and candidate performance. As described in Section 2.1, the unit creates a standards-based data set consisting of eight reports (R.1.4.d). These eight reports provide unit-wide candidate data and trend data over time (when standards have remained consistent for more than one year). In addition to unit-wide data, the data set distributed by the COE Dean’s Office to program directors and chairs includes interns’ ratings of unit programs; mentor teacher and university liaison/supervisor ratings of interns on InTASC, MSDE and SPA standards; intern portfolio ratings on SPA and InTASC standards; intern summative dispositions data; surveys from 1st- and 3rd-year teachers; and surveys from 1st-year employers. All data reports are disaggregated by program level (e.g., BS vs. MAT) and by location (e.g., Main Campus vs. Regional Centers) to enable each program to track candidate performance and to assure that all candidates receive a uniform preparation experience regardless of differences in level or location. SPA reports also demonstrate program quality based on meeting SPA-established standards (S.2.1).

The unit also uses CampusLabs data to assess its unit faculty meetings and professional development (PD) activities. One example is the AY 2012-2013 PD focus on preparing for the Common Core curriculum frameworks (S.5.1). Evaluation summaries for these activities were distributed to the planning team, which used the information to modify subsequent PD activities (S.2.12,13,14,15,16,17,18).

Advanced programs’ use of aggregation and analysis processes in YASU/DAR
As mentioned, development of the YASU/DAR was the unit’s response to a comprehensive review of its data analysis, summary and reporting processes. Advanced programs for other school professionals respond to SPA standards and use the YASU/DAR much like the initial preparation programs. Because these programs are more highly focused on content-specific outcomes, their assessment systems are more program-specific. These programs were able to write their 2011 SPA reports with full complements of data. Programs that received conditions on their recognition status were able to respond quickly with new data within one academic year. The national recognition status of the unit’s advanced programs’ SPA reviews attests to the value of the YASU/DAR for these programs. Continuing preparation programs align their outcomes with national standards (e.g., the National Board for Professional Teaching Standards Five Themes, S.2.3) and use their annual data sets to monitor candidate performance; identify areas of potential program and candidate improvement; and propose changes intended to improve program operations. The addition of the YASU/DAR to the assessment system has led to consistent use of data to support improvements in advanced programs. A selection of these data-based improvements can be found in AIMS (S.2.11).

Unit Outcomes Related to YASU/DAR use

Building on the unit's use of the YASU/DAR processes in gathering, analyzing and reporting program information, the unit's SPA programs submitted 25 reports in September 2011. These reports were well received by the SPA groups; 23 of the 25 SPA reports have been nationally recognized, with recognition extending through August 2022 (S.2.1). For programs that received conditions on their recognition decisions, completing the YASU/DAR process allowed them to immediately include an additional two semesters of data in their resubmissions.

By the time of the site visit, the unit anticipates all 25 SPA programs will be nationally recognized.

Plans and Timelines

The unit takes seriously its commitment to sharing program outcome data with the public. The unit’s public accountability measures website (S.2.5-6) is a place where the public can find information about unit candidate performance on Praxis II examinations (required for teacher licensure in Maryland); the Maryland Complete Report Card (providing side-by-side comparisons of all teacher preparation programs in Maryland); the results of 1st- and 3rd-year graduate and 1st-year principal surveys (with 1st-year employer survey data); and recent Title II reports (R.1.4.b.1,2,3). These data show that the unit has a consistent and strong history of preparing high-performing candidates, as evidenced by high certification examination pass rates, graduate feedback and employer feedback.
A TEEB committee is exploring various content management systems to support the unit’s assessment system and to provide a common, flexible platform for electronic portfolio design, dissemination, updating and management.

 

Standard 2.3 Areas for Improvement Cited in the Action Report from the Previous Accreditation Review

Responding to the advanced preparation programs’ Area for Improvement (AFI) (“At the advanced level, the unit’s use of data to make program improvements is inconsistent”), the unit reviewed its entire data compilation, aggregation, and reporting practices for the purposes of continuous candidate, program, and unit improvement. The result of that review was the Yearly Assessment System Update (YASU) and Data Analysis Report (DAR), described in sections 2.1, 2.2 and in exhibit R.2.4.d.1. Since 2009, the Unit has reported the results of advanced programs' use of the YASU/DAR, as found in the 2009, 2010, 2011, and 2012 NCATE Annual reports (R.2.6.1,2,3,4; AIMS).

Advanced Programs

All candidates in advanced programs demonstrate their content knowledge through earned Bachelor's degrees from accredited institutions with a university-required GPA of 3.0 or higher (R.2.4.b.3). In addition to Graduate School admissions standards, the advanced preparation programs continue to use program-specific 1) three-phase approaches to assessing unit dispositions (expecting “target-level” proficiency before program completion); 2) expectations for continuation eligibility; and 3) exit assessments. Due to the specialized nature of graduate preparation, assessment activities, data collection, and reporting at the graduate level are more program-specific than at the initial level.
Content knowledge is measured by required assessments aligned to professional, state, or institutional standards. Capstone experiences are final assessments of content mastery, required for exit from all graduate programs. Program-administered completer assessments (e.g., R.1.4.c.9,10,11) collect data about completers’ program satisfaction and about completers' assessments of their preparation for in-service practice (R.1.4.d.14).

Yearly Assessment System Update and Data Analysis Report (YASU/DAR)
Advanced programs complete an annual report on their program candidates’ performance. This report began as the DAR in 2007 and was extended by the YASU process in 2009. Together, the YASU/DAR systematizes the unit-wide analysis and reporting of data-based decision-making processes.

The YASU/DAR is intentionally SPA-like (Sections 2.1 and 2.2, R.2.4.d.1), requiring programs to report information about enrollments, completers, faculty qualifications, any assessment system updates made during the prior academic year, and the prior year’s candidate performance data, including information about dispositions, MSDE diversity proficiencies, and technology proficiencies. The DAR is an expanded version of a SPA report’s Section V, consisting of four questions. Program answers to these questions describe: how this year’s data reflect intended outcomes described in the previous DAR; emerging trends seen in this year’s data; how program faculty participate in analyzing data; and how the program intends to address the trends found in this year’s data.

Improving candidate performance

The assessment system for each advanced program identifies the program-specific assessment(s) used to evaluate candidates on their ability to directly impact student learning or to create positive environments for student learning during required capstone experiences in field-based settings. The following summaries reflect advanced programs’ consistent use of data to make program improvements intended to improve candidate performance.

The Elementary/Secondary Education MEd reported a concern in its 2011-2012 YASU/DAR related to a decline in incoming candidates’ perceptions of their abilities to access relevant research and evaluate the quality of research. Because the incoming cohort needed additional support in the core research methods courses, EDUC 605, Research and Information Technology, and EDUC 761, Research in Education, the program director contacted instructors of EDUC 605 to alert them to the need for continuing academic support in this area. The program director also contacted the College of Education’s Library Liaison to help students navigate the Cook Library Research Portal and to help them identify and access the most germane scholarly journals for educational research. As a result of this effort, students who completed the program in 2012-2013 rated themselves highly (3.8/4.0) on their ability to “access relevant research and evaluate quality of research.” This result may in part reflect the program’s two-year effort to strengthen the development of students’ research skills through outreach to the university library staff and closer cooperation with instructors of the core research courses, EDUC 605 and EDUC 761 (R.2.4.g.16).

The School Psychology MA/CAS program (NASP) collected feedback from candidates completing their practicum and internship experiences. Ratings from practicum and internship field supervisors initially averaged 3.2 on a scale of 1-4 on items that assessed candidates’ ability to participate meaningfully in team meetings (both general education and special education). Candidates identified this as an area of weakness during class discussions. As a result, additional simulations of team meetings were instituted during the practicum seminar, and all second-year candidates were required to develop a personal goal, using Goal Attainment Scaling, at the end of practicum related to team participation during the internship. Ratings on all items related to team participation on the internship field supervisor rating form increased, and all were at or above 3.8 on a scale of 1-4 (R.2.4.g.25).

In the 2010 Annual Report, the REED MEd program (IRA) identified concerns with candidate performance in the program’s culminating course (REED 726, Advanced Internship in Reading), which includes a case study analysis project. The project requires candidates to diagnose and address problems in PreK-12 students’ reading performances and behaviors. Given inconsistencies in prior-course instructors’ preferences related to case study analysis, candidates were dissatisfied with the comments received on their comprehensive portfolios. Using this feedback, the program identified a small set of case study analysis approaches for completing the final project. The result was an increase in student satisfaction with the project, as well as higher candidate scores on the project (R.2.6.4).

Analysis of candidate data allowed the MEd in Early Childhood Education (NAEYC) to identify the source of some candidates’ lower scores on NAEYC proficiencies related to the diverse roles of Early Childhood professionals beyond the classroom. The source was an inconsistent instructional approach in program-required courses that were also required by other graduate programs in the College. As a result, some of the NAEYC requirements in these “shared” courses were not consistently being met. Once the inconsistency was localized, the program director worked with instructors for those courses to ensure that the NAEYC-related proficiencies were addressed in all sections (R.2.4.g.15).

Improving Program Quality

All advanced preparation programs disaggregate data according to location (Campus vs. Regional Center). This disaggregation allows programs to show that candidate preparation outcomes are similar regardless of location (R.1.4.d.5, 7 and 10). R.I.5.e.1 shows which advanced programs are offered in their entirety at off-campus locations, including two programs that are offered fully online: the Educational Leadership MS and the Organizational Change CAS.

In the 2009 Annual Report, the ISTC Media Generalist MS (ALA) program used candidate assessment data, candidate feedback and instructor feedback to identify mismatches between instructional development course outcomes and AASL Standards expectations. More closely aligning these course assignments and rubrics with the AASL standards led to candidates improving their collaborative instruction abilities (R.2.6.4).

The Special Education MEd program (CEC) adopted the YASU/DAR cycle as a focus for its three annual data analysis retreats. The program completes major portions of its planning activities alongside its initial certification program in Special Education. The program identified specialization groups within its faculty, and each group selects and addresses a set of program improvement goals. Using the information from the data set, each group devises an action plan, reports that plan to the rest of the faculty, and then takes steps to effect necessary changes to the program (R.2.6.1).

The Educational Leadership MS/Organizational Change CAS program (ELCC) updated its program materials to address the current Educational Leadership Constituent Council (ELCC) standards. The candidates’ pass rate on the state-mandated licensure test was 76%, less than the state average and below program expectations. The program identified the need for a renewed focus on ELCC Standards 1.1, 1.2, 1.3 and 1.4, and the aligned MILF standards. These standards focus on building-level administrators’ abilities to “collaboratively facilitate the development, articulation, implementation, and stewardship of a shared school vision of learning through the collection and use of data to identify school goals, assess organizational effectiveness, and implement school plans to achieve school goals.” In the most recent test data available, the program’s pass rate on the School Leaders Licensure Assessment (SLLA) was 91% (21/23) (R.2.4.g.22, pp. 1-2, 4).

The Music Education MEd program was reviewed in the 2012-2013 academic year by the National Association of Schools of Music (NASM). As a result of that visit, the music education faculty began a process to better align program goals and outcomes with the methods used to collect data informing those goals and outcomes. The program determined that pre-interns need more preparation in the assessment of student learning, as demonstrated by low evaluations in this area during the capstone internship. A new assessment measure is needed to gather more reliable and valid data on this standard (R.2.4.g.21).

 

Standard 2.5 Maryland Redesign of Teacher Education

IIa. How does the unit assess proficiency in mathematics and science for early childhood and elementary education teacher candidates?

Early Childhood Education candidates must earn a minimum GPA of 2.75 in their prerequisite coursework – including 12 credits of mathematics and 12 credits of science content. Once admitted to the professional education major, candidates must maintain a minimum GPA of 3.0 in coursework, including three additional credits of mathematics and two credits of science content.

Elementary Education candidates must earn a minimum GPA of 2.75 in their prerequisite coursework – including 12 credits of mathematics, and eight credits of science content. Once admitted to the professional education major, candidates must maintain a minimum GPA of 3.0 in coursework, including six additional credits of science and five credits of mathematics content.

Exhibit R.1.5.d.2 provides evidence of ELED/EESE/ELEC candidates’ performance teaching math and science during their capstone internship, as rated by MT and ULS and as reflected in Praxis II pass rates. MT ratings in math ranged from 4.29 to 4.64, and science ratings ranged from 4.22 to 4.67 (n=284). ULS ratings for math ranged from 4.05 to 4.56, and for science from 4.02 to 4.72 (n=286). R.1.5.d.7 does the same for ECED/ECSE/ELEC candidates, using NAEYC Standard 5, Using Content Knowledge to Build Meaningful Curriculum. MT ratings ranged from 4.54 to 4.76, and ULS ratings ranged from 3.91 to 4.63.

These data show consistently high scores for all candidates in the programs, regardless of location. Unit assessment data also show that ECED and ELED candidates consistently demonstrate proficiency in mathematics and science content in the first years of their teaching careers (R.2.5.5).

IIb. How does the unit assess candidate proficiency for each of the seven Maryland Teacher Technology Standards (MTTS)?

The unit requires candidates to take courses that enable them to gain experience and skills in the instructional integration of technology through application of the MTTS, and then to demonstrate that MTTS-based knowledge, skill, and experience. R.2.5.4 illustrates the alignment of the seven MTTS with required courses in general education, instructional technology and professional education. Although the exhibit focuses only on the primary alignment of the two required Instructional Technology courses (ISTC 201, ISTC 301) with the MTTS, the courses address all seven of the MTTS (R.2.5.10). Due to the new Towson Core Curriculum, ISTC 201 was redesigned as a Towson Seminar course and is no longer required for all candidates. As a result, SCED 304, Education, Ethics and Change, will assume ISTC 201’s role in addressing MTTS 1-3 beginning in spring 2014. In addition to the required signature assessments in these courses, MTTS proficiency is also addressed in the internship evaluations (R.1.4.d.15, MT: 4.54, n=783; ULS: 4.49, n=789).

IIc. How does the unit assess teacher candidate proficiency in reading instruction for all certification programs? 

Elementary education summative assessment of the capstone InTASC- and SPA-aligned internship by mentor teachers and university supervisors: reading proficiency. In addition to the InTASC 4 (Content) data presented for ECED and ELED in R.1.5.d.1, additional SPA-designed assessment of elementary education candidates' proficiency in reading instruction during their capstone internship confirms that candidates demonstrate proficiency as delineated in professional standards. R.1.5.d.2 contains three years of disaggregated data from MT and ULS (2010-11, 2011-12, 2012-13) reflecting ELED and EESE candidates’ proficiencies with ACEI Standard 2.1, showing that ELED candidates know and demonstrate proficiency in reading instruction in practice during their capstone internship (2012-2013 range 4.28-4.59, n=284).

Early childhood reading proficiency. Consistent with the SPA requirement for an identified minimal level of competence, exhibit R.1.5.d.7 confirms that ECED and ECSE candidates demonstrated reading proficiencies in their clinical internships and in their first-year survey. ECED data show that more than 96% of all candidates earn grades of B or better in MSDE-approved reading courses (R.2.5.6).

Secondary education reading proficiency. During their year-long internship, all secondary candidates complete a Validated Practices Project (VPP) within SCED 462, Seminar in Teaching Reading in the Secondary Content Area, which requires them to document, through a pre- to post-test design, their effectiveness in improving student learning of key content concepts. Exhibit R.2.5.11 records the mean score performance of the candidates who completed the VPP. The data show that secondary education interns are able to increase student reading scores on post-tests, using the VPP methodology.

IId. How does the unit assess candidate proficiency in knowledge, skills and dispositions related to Maryland assessments?  

The unit provides targeted instruction in Maryland’s assessments and accountability system through required professional coursework and field and clinical experiences, assessed through the unit’s assessment system. Unit programs focus on student learning, use of assessments to prove and improve learning, and creation of effective learning environments for all students. The alignment of unit program coursework to Maryland redesign priorities is shown in exhibit R.2.5.13.

IIf. How does the unit ensure that assessments are used to demonstrate candidate proficiency with the Maryland Instructional Leadership Framework (MILF)?

Throughout the Educational Leadership program, students are made aware of the alignments between course objectives, coursework, comprehensive examinations, and the Maryland Instructional Leadership Framework (MILF). Exhibit R.2.5.6 demonstrates this comprehensive alignment among ELCC, ISLLC, and MILF standards and program requirements. Candidates prepared in the Educational Leadership program have an impressive pass rate on Maryland’s standardized assessment, the School Leaders Licensure Assessment (SLLA) (R.2.5.9).

 

2014 Institutional Report - Standard 2