Ms. Quinones' Letter to the BOE:
"Dear Board of Education Members,
I have reviewed the MAP Presentation documents on Board Docs for the June 8, 2015 BOE meeting and listened to part of the Board discussion via podcast (although admittedly not the whole discussion). When I heard Dr. White mention that a more detailed analysis would have to be part of an explanation as to why students didn’t meet their growth targets, an analysis that looked, for example, at performance on Operations and Algebra questions versus Geometry, it struck me that while this is a very important exercise for teachers and administrators, it is well below the high level at which you as a Board look at things and on which you should be basing decisions. As someone who has been working with and analyzing MAP scores for more years than I want to think about, I thought I would take the time to provide you with some foundational information that you may or may not already know to assist you with your analysis of our students’ performance. I am not by any means an expert statistician or data analyst. I cannot urge you enough to find ways to tighten up current administrative roles and hire such a person without increasing the total cost of administering this District.
First, while those MAP presentations by District, by grade, by school certainly tell a story, a more accurate story would be told if we were comparing the same set of students every year. While the number may be small, there is a certain number of students who move into and out of our District each year. I had advocated for years (back under Janet Stutz) to have the data crunched using the same exact cohort of students from Year 1 to Year 2 (for example, Grade 3 in Year 1 and Grade 4 in Year 2) and compare it to using all students in those grades each year to see if there was a difference. This has never happened. Just something to keep in mind when you look at a report that compares grades over years – these are not necessarily all the same kids, and a handful of move-ins or move-outs at the top or bottom within one grade could certainly change the data when you are only talking about several hundred students.
Next, while “quintile” data is interesting, I hope we can agree that a student consistently scoring at the 1st %ile has some different needs than a student consistently scoring at the 40th %ile (yet both are in the bottom 2 quintiles). Similarly, a student consistently scoring at the 98th %ile has some different needs than the one consistently scoring at the 61st %ile (yet both are in the top 2 quintiles). Also, as you know, the MAP norms are based on a National sampling of students. I found it interesting that we often assume in D181 that we should have a higher percentage of our students scoring in the top 2 quintiles than the nation as a whole, and therefore our bell curve should be a bit skewed to the right. In actuality, our numbers appear to be flat-lined – no bell curve at all! For example, 4th grade math by quintile low to high (number of students): 85, 85, 84, 84, 84. You see this pretty much across all grades and all subjects in what was reported. (Throughout this letter I use 4th grade math results as an example, for no particular reason.)
Looking at the quintile data for growth trends, please remember that making growth = maintaining achievement versus your grade level peers. It does not mean you improved more than those peers. That is, if a student scores in the 50th %ile in the fall on 4th grade math (RIT 204), that student must show 8 points of growth (RIT 212) to maintain that 50th %ile. (I am using RIT/growth data from NWEA for Fall and Spring math. It comes from a lengthy Norms Report, but I would be happy to send you the PDF if you want to see it.) A student who exceeds the growth target has improved his/her percentile versus his norm group peers, and a student who does not meet the growth target has dropped.
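To make the maintenance-versus-improvement distinction concrete, here is a minimal sketch (Python, purely illustrative; the RIT values are the fall and spring 50th %ile figures for 4th grade math cited above, and the function name and labels are my own):

```python
# Illustrative only: RIT values are the NWEA fall/spring 50th %ile
# figures for 4th grade math quoted in this letter (204 -> 212).
FALL_RIT_50TH = 204
SPRING_RIT_50TH = 212
GROWTH_TARGET = SPRING_RIT_50TH - FALL_RIT_50TH  # 8 points

def classify_growth(fall_rit: int, spring_rit: int,
                    target: int = GROWTH_TARGET) -> str:
    """Compare actual RIT growth against the norm-based target."""
    growth = spring_rit - fall_rit
    if growth > target:
        return "exceeded"   # improved percentile vs. norm-group peers
    if growth == target:
        return "met"        # maintained percentile; no gap closed
    return "not met"        # dropped in percentile vs. norm-group peers

# A student who grows exactly 8 points only holds their place:
print(classify_growth(204, 212))  # met
print(classify_growth(204, 215))  # exceeded
print(classify_growth(204, 210))  # not met
```

The point the sketch makes is the one in the paragraph above: "met" is standing still relative to peers, and only "exceeded" closes a gap.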
So, using what was presented to you Monday night, looking at our bottom quintile in 4th grade math, 69% of the 85 students who started below the 21st %ile in math (approximately 59 students) met or exceeded their growth target. We don’t know how many met versus how many exceeded, but one category reflects maintenance and the other closing the gap, so it certainly would be nice to know. We do know 26 students in this group ended the year at a lower %ile ranking than they started. I am assuming that many of the District’s Special Education students comprise this lower quintile. If that were the case, I would certainly expect that there might be some who do not have the cognitive ability or emotional stability to show growth on this test. Maybe that accounts for all 26 of them, but I doubt it. As a BOE, I would hope the question you are asking of your Administration is whether, for kids who DO have the ability to make growth, there is a process to increase the intensity of the intervention for this group, because unless the growth target was EXCEEDED, the gap is not closing for these kiddos. Whether that was because of how they scored on certain types of questions or what is going on in other Districts is totally irrelevant. Even more so for the ones who may not have IEPs; something is missing in the educational programming for these students.
Looking at the next quintile (21st-40th %ile), 62% of the 85 students who started here made (or exceeded) growth (53 students). The other 32 students did not maintain their %ile versus their Nationally normed peers. I am sure this group includes many of the D181 RTI kiddos. Hopefully something is being done to help them close the gap. Unlike the PARCC test, the MAP is self-adjusting – that is, it presents harder questions after correct answers and easier ones after incorrect answers, so the final RIT score can be compared across grade levels to see at what level the student is functioning. For example, the 4th grader at the 21st %ile with a RIT of 201 is answering questions correctly similar to those answered correctly by the average (50th %ile) 3rd grader (RIT 203), a full grade level below on skills. Does their programming reflect this need? Is it intensive enough to help them close that gap? For at least 32 of them this year the answer is, “no.”
Looking at the next quintile (41st-60th %ile), these are your solidly average kids based on National norms, and 63% of the 84 students who started here made (or exceeded) growth (53 students). The other 31 students did not maintain their %ile versus their Nationally normed peers. Again, at a BOE level, it is not only the fact that over a third of our 4th graders did not keep pace with the national average that should be your focus; without seeing a breakdown of meets versus exceeds, it is also hard to know how many actually improved their %ile. Of those 31 who did not make growth, a telling statistic would be how many fell into a lower quintile than the one they started the year in.
Looking at the next quintile (61st-80th %ile), these are high-achieving kids based on National norms, and of the 84 students who started here, 60% made (or exceeded) growth (50 students). The other 34 students did not maintain their %ile versus their Nationally normed peers. The question you should be asking is why 2/5 of some of our highest-achieving kids are falling behind their peers nationally. The answer does not lie in further detailed analysis of their MAP test scores – it lies in their educational program.
Finally, looking at the top quintile (81st %ile and above): these are our advanced learners, most of whom should easily be able to not only meet growth targets, but also exceed them. I say most because, when looking at these results, bear in mind that the 99th %ile in the Fall for 4th grade math was 237 and in the Spring was 248. The test the 4th graders took, however, has a RIT scale that goes up to at least 259. This is a huge band. I am not sure whether the 4th grader who scored a 250 in the Fall would have had a growth target of “0,” because they could actually have a lower RIT in the Spring and still score in the 99th %ile. How these students are accounted for in the reporting would be an interesting piece of information to track down. Putting those students aside, it seems clear that when only 44% of 84 students (37 students) met growth targets allowing them to maintain standing versus their nationally normed peers, their educational needs are probably NOT being met.
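The per-quintile arithmetic walked through above can be checked in one place. A small sketch (Python; the starting counts and met-or-exceeded percentages are exactly those quoted from the Monday presentation, while the rounding to whole students is an assumption on my part):

```python
# 4th grade math, fall quintile counts and the percentage of each group
# that met or exceeded its growth target, as quoted in this letter.
quintiles = {
    "1st-20th %ile":  (85, 0.69),
    "21st-40th %ile": (85, 0.62),
    "41st-60th %ile": (84, 0.63),
    "61st-80th %ile": (84, 0.60),
    "81st+ %ile":     (84, 0.44),
}

for band, (n, pct_met) in quintiles.items():
    met = round(n * pct_met)   # met or exceeded the growth target
    fell = n - met             # dropped in %ile vs. norm-group peers
    print(f"{band}: {met} of {n} met/exceeded growth; {fell} fell behind")
```

Running the numbers this way recovers the counts cited above (59, 53, 53, 50, and 37 students, respectively) and makes the pattern easy to see: the met-or-exceeded rate is lowest at the very top.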
Of course, MAP growth is just one data point based on two test periods. While highly reliable and valid, particularly as we get multiple scores on the same children over the years, there will always be outliers who for some reason on that day just didn’t test up to their ability. I personally saw this a lot this spring where I teach. While many students take every single assessment seriously and do their best, by the time they were done with the first round of PARCC testing and, just several weeks later, had to take Spring MAP, there was certainly a level of lower motivation and burnout I had not seen before. It is my opinion (no research to support this, just observation) that Fall scores for this year were more accurate because the students are “fresher.” This is not to say I would advocate Fall-to-Fall growth comparisons, because I think those results would be full of “noise.” For instance, did the child spend the summer watching reruns or going to summer school? There may have been growth (or lack of growth), but was it due to District curriculum and teaching or something else? I am glad to see there is just one round of PARCC being proposed for the end of next year, so hopefully any lack of motivation that may have been involved this year from over-assessing will not impact MAP testing next year.
Lack of motivation or not, these scores suggest that “raising the floor to raise the ceiling” isn’t working particularly well for those at either the bottom or the top. Perhaps the saddest part of all for the top quintile students is that it has been over three years since Dr. Moon presented her findings about the weaknesses in our programming for these kids. We have graduated three classes of 8th graders who will not get another chance at an appropriate elementary school education, and yet still there is no defensible educational program in place for the advanced learner, gifted, or whatever you want to call them. Dr. Stutz couldn’t do it, Dr. Russell couldn’t do it, and Dr. Schneider hasn’t done it. I would argue that we are failing these students even more than we were before the money was spent on Dr. Moon. At least then they had a “part time solution to a full time issue.” Now, there is no solution. It is every school for itself (just look at The Lane math acceleration numbers for next year versus the other schools), with no evidence that any particular approach is backed by data showing its ability to produce results.
Former BOE President Mr. Turek recently acknowledged we are still not meeting the higher performing students’ needs. I am sure current BOE President Mrs. Garg would acknowledge this as well. But, at the end of the day, what I think and you think and other parents think is not as important as what the data is telling us. The data is telling us that what is in place now is not working. Of course, this is no surprise to many of us who knew there was NO research to back the current educational model for a demographic like that of D181, particularly the higher achieving students. This should have been a red flag to the BOE a long time ago. How much longer do these students need to wait before D181 hires someone who has the background knowledge and experience to formulate appropriate educational programming for all students (appropriate being defined as research-supported to provide challenge and growth no matter what the student’s ability level)?
The time to hire such a person is now. And looking at the lack of real progress with the majority of our lower quintile students in terms of closing the gap with their peers, it appears the lack of a full-time dedicated head of Special Education has also had a negative impact.
I thank you for your time and attention to this matter. Please feel free to contact me if you have any questions about what I have written.