My partner and I attended two NWEA MAP Foundation workshops over the weekend. This was our first year at a MAP partner school, and at the beginning the amount of data seemed overwhelming. Two testing seasons later we felt we had a better handle on it and were ready to learn more, which is why we jumped at the opportunity to attend the session – especially since the registration costs were covered by the State Department. We joined about twenty other teachers from Niamey, Freetown, Monrovia, and Abuja at the International Community School of Abidjan where presenter Terri Howard led us through the Stepping Stones to Using Data and Climbing the Data Ladder.
Here were our takeaways:
- How do you know when to differentiate? A double-digit standard deviation for your class (in the Teacher by RIT report) means you should differentiate. The Class by RIT report also gives useful data, especially when you click on the subject row headings (e.g. Math) to show the goal strands within, where students will be grouped according to RIT band. Aim for 2-4 groupings.
- Is the test a reflection of where they are? Standard Error (also in the Teacher by RIT report) indicates the reliability of the test; a 3 is normal, and the lower, the better. If a student's standard error is 4.8 or above, strongly consider retesting, as an error that high usually indicates behavior like rapidly guessing through the questions.
- How do you get students on board? In addition to the usual goal-setting, schedule student-led conferences after the testing season (these can coincide with your normal parent-teacher conferences) where students explain their results. This motivates them to try their best – no one likes explaining less-than-expected or negative growth. According to NWEA researchers, a visual reminder, such as a sticky note with their target score, also helps keep them focused.
- How do you get parents on board? Education is the key here. Hold a parent workshop – at AISB we plan to do one just prior to the start of the testing season – where you explain RIT, growth, and especially DesCartes. Emphasizing DesCartes and avoiding discussion of percentiles helps keep parents focused on how MAP tests measure what students are ready to learn, not mastery. Use sample data at this session, as using their kids’ real data will distract them from learning about how to interpret the results.
- How do you distribute results to teachers? You can tweak your CRF (Class Roster File) to create classes that don't actually exist, like one for all ESL students across grades, one for all HS students, or one combining two small grade levels to see what they'd look like as a combined class (a common situation in small West African schools). In the spring, you might give the following year's teachers access to a grade's data so they can begin differentiating from the very start. Finally, you can submit Data Repair Requests with updated CRFs if you want to fix anything after the testing season.
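To make the first two rules of thumb above concrete, here is a minimal sketch in Python that checks the double-digit standard deviation threshold, flags students whose standard error suggests a retest, and sorts a class into RIT bands. All names, scores, and the 20-point band width are invented for illustration; NWEA's actual reports do this grouping for you.

```python
import statistics

# Invented sample data: (RIT score, standard error) per student
students = {
    "Amina": (205, 2.9),
    "Kofi":  (231, 3.1),
    "Lea":   (198, 5.2),
    "Sam":   (219, 3.0),
    "Tunde": (242, 2.8),
}

scores = [rit for rit, _ in students.values()]
class_sd = statistics.stdev(scores)

# Rule 1: a double-digit standard deviation suggests differentiating
should_differentiate = class_sd >= 10

# Rule 2: a standard error of 4.8 or above suggests retesting
retest = [name for name, (_, se) in students.items() if se >= 4.8]

# Group students into RIT bands (20-point width chosen arbitrarily
# here to land in the 2-4 group range for this sample)
groups = {}
for name, (rit, _) in students.items():
    band = (rit // 20) * 20
    groups.setdefault(band, []).append(name)

print(f"class SD = {class_sd:.1f}, differentiate: {should_differentiate}")
print("consider retesting:", retest)
for band in sorted(groups):
    print(f"RIT {band}-{band + 19}: {groups[band]}")
```

In practice you would pull the scores and standard errors straight from the Teacher by RIT report rather than typing them in, but the thresholds work the same way.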
On an unrelated note, regional trainings like this are always nice because they bring together a group of schools in a similar situation, and this is especially true in West Africa. The Ed Tech community always talks about creating a global personal learning network (PLN), but it was only by going to a face-to-face training that I could meet other educators who knew what it was like to teach in a school without internet, or one that had been shut down because of a coup, or one that had graduating classes of just half a dozen students. People outside the region just haven't experienced the unique set of challenges we work with every day. And I'd argue that face-to-face conversation has a depth and flow that electronic communication misses – one superintendent I met has even decided to eschew email completely.
So we left the conference having had good, old-fashioned conversations and notes, not Hangouts and hashtags, but still ready to use data to inform our instruction.