In my last entry, I explained some of the major issues with the Common Core State Standards (CCSS) themselves. Today, I’d like to delve into the testing mandate intertwined with CCSS through the Race to the Top initiative. First, a little background.
When the U.S. Department of Education dangled $4 billion over states that were struggling with fiscal crises following the 2008 market crash, rejecting the federal government’s offer would have been seen as gross negligence. The Race to the Top, as it was billed, would “free” states from the mandates and sanctions of No Child Left Behind AND give them a much-needed financial boost. But swallowing the bait meant the line was still attached. States had to adopt the CCSS for their schools, they had to construct and connect to a longitudinal database (inBloom – to be covered later), and they had to test students every year at the end of grades 3-8 and before graduation, to ensure that their students were progressing and that teachers were “Effective.” States gobbled it whole in 2009, not entirely cognizant of the implications (or associated costs) until much later. In fact, the CCSS weren’t even published until 2010!
What are "High Stakes" tests?
The term “High-Stakes” describes any test used to determine a student’s promotion into certain programs, or even into the next grade. The stakes become even higher when they are tied to a person’s career. For the first time in history, how a child performs on a test does not reflect on him or her, but only on the teacher. This is a very slippery slope. Imagine if a doctor, let’s say a cardiologist, were measured on whether or not his or her patients’ health improves. Patients are assigned a doctor for one year only, and move on to another the next year. If a patient’s health does not improve significantly within the year, as measured on a blood test (why not an EKG? We’ll get to that later), the doctor is deemed “Ineffective.” If the doctor is ineffective for two years, he or she is no longer permitted to practice medicine. Surely by now you are considering all of the factors outside of the doctor’s control: heredity, past medical care, a lack of motivation to get well, a bedside manner that differs from the one the patient prefers, a preference for cheese, etc. Yet this is precisely the position these High-Stakes tests put teachers in, and the measurement vehicle lacks validity.
The Race to the Top initiative factors these scores into a teacher’s Annual Professional Performance Review (APPR). In fact, High-Stakes tests account for 40% of a teacher’s APPR. So a teacher who receives a perfect score on every other aspect of the evaluation may still be deemed “ineffective” if students do not show “growth” (a nebulous term, since the goalposts are unknown and are moved every year) on the high-stakes tests. Teachers working in high-needs areas or poverty-stricken communities, with special education students, or even with honors students(!) are virtually assured of an “ineffective” rating. In a seeming attempt to soften that blow, and to address objections from teachers of subjects without State tests, New York introduced SLO testing (Student Learning Objectives) to account for at least 20% of the teacher’s APPR. Again, students may demonstrate “sufficient” growth on this measure (assigning the teacher 80% effectiveness), but it is the nebulous growth metric on the State exam that actually determines the teacher’s rating. Imagine getting an 80 on a test and being told that you failed! Teachers of subjects such as Art, Music, and Home and Careers may still be held accountable for their students’ ELA or Math grades, even though they do not teach those subjects; otherwise, their SLO accounts for the full 40% of their APPR. In essence, students are given a pre-test at the beginning of the year and a post-test at the end of each course. So now, not only do students take a high-stakes test at the end of each year in ELA and Math, they must also take pre- and post-SLOs in every subject. That seems like an awful lot of time being spent on High-Stakes testing, doesn’t it?
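To make the arithmetic concrete, here is a rough sketch in Python, assuming the 40% test-based share described above splits as 20 points for State-exam growth and 20 for the SLO, with the remaining 60 points coming from other measures such as observations. The point split, the 65-point band cutoff, and the rating rule are simplified, hypothetical stand-ins rather than the actual APPR regulations; the sketch only illustrates how a teacher can earn a composite of 80 and still be rated “Ineffective.”

```python
# Illustrative only: hypothetical APPR arithmetic, not the actual regulation.
# Assumed split: 20 points State-exam growth, 20 points SLO, 60 points other.

def appr_rating(state_growth, slo, other):
    """Each subscore is on a 0-100 scale; returns (composite, rating)."""
    composite = 0.20 * state_growth + 0.20 * slo + 0.60 * other

    # The post's point: a poor growth subscore on the State exam drives the
    # overall rating, no matter how strong the rest of the review is.
    if state_growth < 65:          # hypothetical "ineffective" band cutoff
        return composite, "Ineffective"
    return composite, "Effective" if composite >= 65 else "Developing"

# Perfect observation scores, sufficient SLO growth, poor State-exam growth:
# the composite is 80 -- "an 80 on a test" -- yet the rating is Ineffective.
print(appr_rating(state_growth=0, slo=100, other=100))  # (80.0, 'Ineffective')
```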
The Effect on our Students and Schools
So why should parents be concerned?
Students who do not perform well on these tests must, by mandate, be placed in
remedial classes. This results in a narrowing of the curriculum for many
students. Imagine if you were feeling ill (or hungry), but had to go to work.
You are expected to perform at your peak for an annual review that day –
whether or not you get a raise depends on it. But because you are not feeling well (or are hungry), you avoid your customers or are curt with them, you make mistakes on simple addition, costing the company hundreds (or thousands) of dollars, or you can’t finish the day because you are simply too exhausted (or hungry) to continue. How would your performance review turn out?
For students taking these tests, any number of factors can influence their
performance on a given day. Imagine that these children know that they will not
be allowed to have recess, or art, or music, or go to gym class next year if
they do poorly on the Math or ELA exam – or both – because they must take
a remedial class for the subject they tested poorly on. What if the entire
school does poorly, but a handful of children do well? A school is going to
program for the majority, and if it means cutting music sections in order to
add four more sections of ELA Academic Intervention Services, that is what will
happen. With their limited resources, schools will place even the top-scoring students in these mandated classes rather than offer non-mandated enrichment. That’s pretty high-stakes, if you ask me. If my child knows that she will have to drop Art to take Math AIS based on a low score on a high-stakes test, she is under a tremendous amount of stress when she takes that test. It is not stress induced by her teacher or by me; it is induced by the High Stakes.
What are the tests for?
So, why such an emphasis on these tests? One may have heard that these tests inform instruction and give teachers the data they need about a particular student’s strengths or weaknesses. This could not be further from the truth. What they really capture is how a particular child felt about himself, school, his friends, the weather, his teacher, and whether his mother yelled at him, on one particular morning in April. Teachers will not see the test results until roughly six months after the tests have been taken. By then, the students who took them have moved on to other teachers. In addition, teachers receive a score and nothing more. They know the question number and the “standard” the question was supposed to have addressed, but there is no way for the teacher to know WHY a question was answered incorrectly – or whether it was answered at all. Unfortunately, the type of data that would actually answer these questions will NEVER be released. Honestly, those tests could have been graded within days (NOT MONTHS), and to be of ANY value, they could and SHOULD have been brought back to the classroom for interpretation, analysis, and exploration with the students who took them. But teachers will never see exactly what their students got right, or what within the questions may have tripped them up.
Have you seen the cherry-picked questions from the NYS exams given in April 2013, and the paragraph-long explanations of why a particular answer was correct or incorrect? If it takes that much explanation to determine why a choice is wrong, there is something wrong with the measurement instrument. What you will also notice is that many of the correct responses (I’m looking at the ELA) required interpretation of connotations and nuances – SOCIAL constructs that require cultural capital rather than content-specific comprehension. These students were expected to conduct this kind of “nuanced analysis” (a paragraph-long inner monologue arguing the merits of each answer choice) for EVERY question – for 90 minutes – for THREE days. A handful of these questions would have been sufficient to challenge the intellectually advanced students, who STATISTICALLY should be only 15% of test-takers. Three days’ worth is abusive – it is inappropriate for the physical and mental developmental stages of these children.
Simply put, Pearson’s tests were inaccurate measuring tools that were then scored according to “normative” processes rather than standards-based ones. The cut scores were then deliberately set so that more than half of the bell curve, its peak included, was considered “failing.” If I know anything about statistics, it’s this: “statistics can be made to say whatever you want them to say.” To then tie these kinds of “cooked books” to the professional careers of thousands of teachers throughout the country is unconscionable. When this practice inevitably dismantles the teaching profession, then what? Well, then there’s a lot of money to be made...
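To make the normative-versus-standards-based distinction concrete, here is a rough sketch with invented numbers (the score distribution, the 65-point criterion, and the 30% proficiency target are all hypothetical, not actual NYS figures). A standards-based cut compares each student to a criterion fixed before the results are known; a percentile-based, “normative” cut decides the proficiency rate first and then backs out whatever score produces it, which is how a cut score can be placed so that most of the bell curve “fails.”

```python
# Illustrative only: invented scores and cutoffs, not actual NYS data.
import random

random.seed(0)
scores = [random.gauss(75, 10) for _ in range(10_000)]  # hypothetical raw scores

# Standards-based: pass/fail depends on a criterion fixed in advance of the
# results (here, 65 raw points).
standards_pass = sum(s >= 65 for s in scores) / len(scores)

# "Normative": decide first that only 30% will be labeled proficient, then
# back out whatever cut score produces that result.
target_proficient = 0.30
cut = sorted(scores)[int(len(scores) * (1 - target_proficient))]
normative_pass = sum(s >= cut for s in scores) / len(scores)

print(f"standards-based cut (65):      pass rate {standards_pass:.0%}")
print(f"percentile-based cut ({cut:.1f}): pass rate {normative_pass:.0%}")
```

Under the percentile approach, the failure rate is a policy choice made before anyone sits the exam, not a measurement of what students know.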
Starve the Beast
Have your children refuse to take these High-Stakes tests that have no positive impact on their education. See the links on the right for more information.
Fighting the federal intrusion into what should be a state and local matter has proven to be comparable to battling the Kraken. There are many more tentacles (the commandeering of students’ private, personally identifiable information by third parties; the intrusion of unqualified TFA recruits; privatization; educational programming based on student-aptitude tracking; etc.) that will be addressed in upcoming entries.