DDM Webinar Part 6: Determining How to Integrate Assessments


Waiting Room Today’s webinar will begin shortly. REMINDERS: • Dial 800-503-2899 and enter the passcode 6496612# to hear the audio portion of the presentation • Download today’s materials from the sign-in page: • Webinar Series Part 6 PowerPoint slides • Correlation Example Excel file

Determining How to Integrate Assessments into Educator Evaluation: Developing Business Rules and Engaging Staff
Webinar Series Part 6

Webinar Series Schedule

No. | Title | Date | Length | Time
1 | Introduction: District-Determined Measures and Assessment Literacy | 3/14 | 60 minutes | 4-5pm
2 | Basics of Assessment | 4/4 | 90 minutes | 4-5:30pm
3 | Assessment Options | 4/25 | 60 minutes | 4-5pm
- | TA and Networking Session I | 7/11 | 3 hours | 9am-12pm
4 | Determining the Best Approach to District-Determined Measures | 7/18 | 60 minutes | 4-5pm
5 | Measuring Student Growth and Piloting District-Determined Measures | 8/15 | 60 minutes | 4-5pm
- | TA and Networking Session II | 9/19 | 3 hours | 2:30-5:30pm
6 | Integrating Assessments into Educator Evaluation: Developing Business Rules and Engaging Staff | 10/24 | 60 minutes | 4-5pm
7 | Communicating Results | 12/5 | 60 minutes | 4-5pm
- | TA and Networking Session III | 12/12 | 3 hours | 2:30-5:30pm
8 | Sustainability | 1/23 | 60 minutes | 4-5pm

Audience & Purpose

Target audience: district teams that will be engaged in the work of identifying, selecting, and piloting District-Determined Measures.

After today, participants will understand:
• Examples of practical solutions to issues of fairness in using District-Determined Measures (DDMs)
• Practical examples of engaging educators in the process of implementing DDMs

Agenda Student Impact Rating Rollout Reminder DDM Comparability Identifying Bias

Standardizing DDMs Ensuring Sufficient Variability

Q&A and Next Steps 4

Student Impact Rating Rollout

Date | Action
Sept. 2013 | Decide which DDMs to pilot and submit the list to ESE.
Sept. 2013 - June 2014 | Pilot DDMs in at least the five required areas and research DDMs in additional areas.
June 2014 | Submit final plans, including any extension requests, for implementing DDMs during the 2014-15 school year.*
SY 2014-2015 | Implement DDMs and collect Year 1 Student Impact Rating data for all educators (with the exception of educators who teach the particular grades/subjects or courses for which an extension has been granted).
SY 2015-2016 | Implement DDMs, collect Year 2 Student Impact Rating data, and determine and report Student Impact Ratings for all educators (with the exception of educators who teach the particular grades/subjects or courses for which a district has received an extension).

*ESE will release the June 2014 submission template and DDM implementation extension request form in December 2013.

DDM Key Questions

Is the measure aligned to content?
• Does it assess what the educators intend to teach and what's most important for students to learn?

Is the measure informative?
• Do the results tell educators whether students are making the desired progress, falling short, or excelling?
• Do the results provide valuable information to schools and districts about their educators?

Refining Your Pilot DDMs

Districts will employ a variety of approaches to identify pilot DDMs (e.g., build, borrow, buy).

Key considerations:
1. How well does the assessment measure growth?
2. Is there a common administration protocol?
3. Is there a common scoring process?
4. How do results correspond to low, moderate, or high growth?
5. Is the assessment comparable to other DDMs?

Use the DDM Key Questions and these considerations to strengthen your assessments during the pilot year.

DDM Comparability: Two Types

DDMs must be "comparable across schools, grades, and subject matter district-wide" (per 603 CMR 35.09(2)(a)).

Comparability has two types:
• Type 1: Comparable across schools
• Type 2: Comparable across grades and subject matter

Learn more in Technical Guide B, page 9 and Appendix G.

Comparability (Type 1): Comparable Across Schools

Example: teachers with the same job (e.g., all 5th grade teachers).
• Where possible, measures are identical.
• It is easier to compare identical measures.
• But do identical measures provide meaningful information about all students?

When might they not be identical?
• Different content (different sections of Algebra I)
• Differences in untested skills (reading and writing on a math test for ELL students)
• Other accommodations (fewer questions for students who need more time)

Error and Bias

Error is the difference between a student's true ability and the student's score.

Random error
• Student sleeps poorly, makes a lucky guess, etc.

Systematic error (bias)
• Error that occurs for one type or group of students
• Example: an ELL student misreads a set of questions
• Systematic error = bias

Why this matters:
• Error (OK) decreases with longer or additional measures.
• Bias (BAD) does not decrease with longer or additional measures.
• Even with identical DDMs, bias threatens comparability.

When Does Bias Occur?

Situation: Students who score high on the pre-test have less of an opportunity to grow because they cannot get more than a top score (ceiling effect).

Situation: Special education students gain fewer points from pre- to post-test, and as a result are less likely to be labeled as having high growth.

Checking for Bias

Do all students have an equal chance to grow? Is there a relationship between the initial score and the gain score?

We can check this in Excel using correlation. We have:
• Pre-test scores
• Post-test scores
• Gain scores

Correlation formula in Excel:

=CORREL(pre-test scores, gain scores)

• Type "=correl" and click the formula
• Highlight the pre-test scores and press comma
• Highlight the gain (difference) scores, close the parentheses, and press Enter
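For readers who want to check the result outside Excel, here is a minimal Python sketch of the same calculation (the score lists are invented for illustration):

from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

pre = [10, 12, 15, 18, 20, 22, 25, 28]       # pre-test scores (invented)
post = [16, 19, 21, 23, 24, 25, 27, 29]      # post-test scores (invented)
gain = [b - a for a, b in zip(pre, post)]    # gain (difference) scores

print(round(pearson(pre, gain), 2))  # strongly negative here: high scorers grew less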

Interpreting Correlation

Correlation is the degree to which two variables are related.
• Correlation is a number between -1 and 1.
• A zero correlation means the variables are unrelated.
• Values closer to 1 or -1 indicate a stronger correlation.

DDMs should provide all students an opportunity to demonstrate growth:
• We want to see little to no correlation between pre-test scores and gain scores.
• A correlation above .3 or below -.3 suggests that there are systematic differences in gain for low- and high-ability students.

Correlation Example

Demonstration of computing the correlation between pre-test and gain scores:
• Very low correlation: students of all abilities were equally likely to demonstrate growth.
• Negative correlation: students of high ability systematically demonstrated less growth (due to a ceiling effect).
• Positive correlation: students with lower scores generally grew less (bias).

Interpreting Correlation

A strong correlation is an indication of a problem, but a low correlation is not a guarantee of no bias!
• A strong effect may exist in a small sub-population.
• Counteracting effects at the low and high ends can cancel out.
• Use common sense.

Always look at a graph! Create a scatter-plot graph and look for patterns.
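One way to make that graph outside Excel; a minimal matplotlib sketch using invented scores:

import matplotlib.pyplot as plt

pre = [10, 12, 15, 18, 20, 22, 25, 28]   # invented pre-test scores
gain = [6, 7, 6, 5, 4, 3, 2, 1]          # invented gain scores

plt.scatter(pre, gain)
plt.xlabel("Pre-test score")
plt.ylabel("Gain score")
plt.title("Gain vs. pre-test score")
plt.show()  # a clear downward slope would suggest a ceiling effect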

Example of Bias at the Teacher Level

Teacher A            Teacher B
Pre | Post | Gain    Pre | Post | Gain
 3  |  4   |  1       3  |  4   |  1
 3  |  4   |  1       8  | 14   |  6
 3  |  4   |  1       8  | 14   |  6
 3  |  4   |  1       8  | 14   |  6
 8  | 14   |  6       8  | 14   |  6

Even though similar students gained the same amount:
• Teacher A's average gain is 2
• Teacher B's average gain is 5

Solution: Grouping

Grouping allows teachers to be compared based on similar students, even when the number of those students is different.

Group         | Teacher | Average Growth
Low students  | A       | 1
Low students  | B       | 1
High students | A       | 6
High students | B       | 6

Addressing Bias: Grouping

Questions to consider:
• How many groups?
• What bias are you addressing?
• Are there enough students in each group?

Using groups (see the sketch below):
• Weighted average
• Rule based (all groups must be above a cut-off)
• Professional judgment

Comparability (Type 2): Across Different DDMs

Comparability across different grades and subject matter:
• Are different DDMs held to the same standard of rigor?
• This does not require an identical number of students in each of the three growth groups (low, moderate, and high).
• Apply a common-sense judgment of fairness.

One Option: Standardization

Standardization is a process of putting different measures on the same scale.

For example:
• Most cars cost $25,000, give or take $5,000.
• Most apples cost $1.50, give or take $0.50.
• A $5,000 discount on a car is one "give or take" unit, so it is about equal to a $0.50 discount on an apple.

Technical terms:
• "Most are" = mean
• "Give or take" = standard deviation

Guest Speaker Jamie LaBillois – Executive Director of Instruction, Norwell Public Schools

21

Developing Local Norms

Student A:
• English: 15/20
• Math: 22/25
• Art: 116/150
• Social Studies: 6/10
• Science: 70/150
• Music: 35/35

We learned early on that we needed a process that would create one universal measurement unit to discuss student progress.

Transform the Data…


How?

Step One: Calculate the difference between post and pre (or any approach from Technical Guide B).
Step Two: Find the mean (average) of the difference scores.
Step Three: Find the standard deviation of the difference scores.

How?

Now we're ready to "transform" the difference scores into a universal measurement system.

Step Four: Calculate the z-score of each individual difference score:

z = (observation - mean) / standard deviation

Step Five: Calculate the percentile rank for each z-score.
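A minimal Python sketch of steps one through five; it assumes roughly normally distributed difference scores so a z-score can be converted to a percentile with the normal CDF, and the scores are invented:

from statistics import NormalDist, mean, pstdev

diffs = [2, 5, 7, 3, 9, 4, 6, 8, 5, 6]  # step one: post minus pre, one per student

m = mean(diffs)      # step two: mean of the difference scores
sd = pstdev(diffs)   # step three: standard deviation of the difference scores

for d in diffs:
    z = (d - m) / sd                  # step four: z-score
    pct = NormalDist().cdf(z) * 100   # step five: percentile rank (normal assumption)
    print(f"diff={d}  z={z:+.2f}  {pct:.0f} %ile")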

Developing Local Norms

Student A       | Raw Score | Percentile Rank
English         | 15/20     | 62 %ile
Math            | 22/25     | 72 %ile
Art             | 116/150   | 59 %ile
Social Studies  | 6/10      | 71 %ile
Science         | 70/150    | 70 %ile
Music           | 35/35     | 61 %ile

Examining an Educator's Impact

Grade 4 DIBELS Oral Reading Fluency, median %ile per class:
• Teacher 1: 65 %ile
• Teacher 2: 71 %ile
• Teacher 3: 59 %ile
• Teacher 4: 59 %ile
• Teacher 5: 62 %ile
• Teacher 6: 57 %ile
• Teacher 7: 29 %ile (evaluator's focus)
• Teacher 8: 50 %ile

Lessons Learned
• Growth vs. achievement
• Robust tool
• Timely analysis
• Re-assessment of instruction
• Re-assessment of ability vs. disability
• Development of building-based evaluators
• Educator engagement is essential

Ensuring Sufficient Variability

Technical Guide B's two key questions:
• Is the DDM aligned to content?
• Does the DDM provide information to educators and evaluators?

A lack of variability reduces information.

Looking for Variability

[Figure: two bar charts showing the number of students in each growth category (Low, Moderate, High). In the "Good" chart, students are spread across all three categories; in the "Problematic" chart, most students fall into the "High" category.]

The problematic graph doesn't give us information about the difference between average and high growth, because so many students fall into the "high" growth category.
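Before graphing, a quick tally of your own growth categories can flag the problem; a small Python sketch with invented ratings:

from collections import Counter

ratings = ["high", "high", "moderate", "high", "low", "high", "high", "moderate"]  # invented

counts = Counter(ratings)
for category in ("low", "moderate", "high"):
    print(f"{category:>8}: {counts[category]} of {len(ratings)} students")
# If most students land in one category, the DDM gives little information
# about the difference between moderate and high growth.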

Guest Speaker Experience with constructing measures with greater variability

31

Wrap-Up Today, we discussed three strategies for evaluating the fairness of your DDMs 1. Check for bias by computing the correlation between pre-test scores and gain scores. 

Remember: Zero correlation indicates that all students have an equal chance to demonstrate growth.

2. Standardization can help you compare DDMs in different content areas.

3. Look for variability in student growth. A lack of variability reduces the amount of information available to educators about their students.

32

Resources

Available now at http://www.doe.mass.edu/edeval/ddm/:
• Technical Guide B
• DDMs and Assessment Literacy Webinar Series
• Technical Assistance and Networking Sessions
• Core Course Objectives and Example DDMs

Coming soon:
• Using Current Assessments Guidance (Curriculum Summit)
• Model Contract Language
• DDM Pilot Plan Cohorts

Register for Webinar Series Part 7

Part 7: Communicating Results
Date: December 5th, 2013
Time: 4-5pm EST (60 minutes)
Register: https://air-event500.webex.com/airevent500/onstage/g.php?d=597905353&t=a

Questions Contact Craig Waterman at [email protected] Ron Noble at [email protected]

Feedback Tell us how we did: http://www.surveygizmo.com/s3/1421848/Dist rict-Determined-Measures-amp-AssessmentLiteracy-Webinar-6-Feedback

35
