Concerned about testing?

Welcome to PsychoBabble!

“Psycho” refers to psychometrics – that is, the science underlying educational and psychological tests and measurement procedures. “Babble,” defined as a meaningless confusion of words, refers to the way most of us perceive any technical discussion of testing. Our hope is that this ongoing series of articles will help our readers understand and make sense of the current educational testing landscape, here in Durham and across the country – why there is so much testing; what it can and can’t tell us; how we can best embrace the good and accept the not-so-good; and perhaps most important, how we can educate all stakeholders about the fundamentals of educational testing so as to defuse what has become such a mysterious, threatening, and misunderstood part of our lives.



July 2010 – EOG Score Reports

What can I tell about my child’s achievement from looking at her individual EOG score report?

The short answer is “not much.” Really, the only piece of information on an individual EOG score report that is meaningful with respect to your child’s achievement is his or her proficiency level. That’s it. I know you want to know more, but you’re going to have to be satisfied with knowing that your child has attained a given level of proficiency as defined by the state of North Carolina (see NC State Board policy GCS-N-002).

Here’s why…

A good friend of mine – a caring, highly educated woman herself – commented to me that she was pleased to see that her fourth-grade son had improved from the 82nd to the 88th percentile on the EOG math test…

Danger, danger, Will Robinson!!!


Forgive the archaic reference to “Lost in Space,” but this is precisely the type of response that concerns me. Through no fault of our own, the vast majority of us are regrettably uninformed about how to understand what we see on our children’s (and our schools’ and districts’) score reports. There are a number of reasons why my friend’s interpretation of her son’s EOG results is ill-advised:
  1. There is a world of difference between percentile rank and percent correct. Percentile rank is a number from 1 to 99 that indicates how a student performed in comparison to his or her peers. My friend’s son scored as well as or better than 88% of his peers; consequently, he is in the 88th percentile. Percentile rank depends not only on how well a student performs on a test, but also on how well his or her peers perform on that test.

    Percent correct, by contrast, is simply the number of items a student answers correctly out of the number of items on the test, expressed as a percent. Mathematically, percent correct is 100 times the number of items a student answered correctly divided by the number of items on the test. (The 100 simply changes the number from a decimal, like 0.88, to a percentage, 88%.) Percent correct is generally what we are referring to when we think to ourselves, “Oh, my son got an 88% on that test! That’s an improvement from the 82% he got last time.” As a percent correct, an 88% means, for example, that a student answered 44 out of 50 questions correctly. No matter how his or her peers performed, this student scored an 88%.

    Percentile rankings, then, only make sense when the purpose of a test is to compare a student’s performance to the performance of other students; assessments designed to yield such comparisons are called norm-referenced tests. Because the purpose of statewide accountability assessments such as the EOGs is to provide information about student achievement with respect to a state’s content standards (here, the North Carolina Standard Course of Study), we are interested in student performance measured against our expectations of mastery as defined by those standards. That is, we’re looking at performance in reference to criteria (the standards) – thus the term criterion-referenced test.

  2. Percentile rankings are based on norming groups. Okay, so maybe my friend didn’t confuse percentile rank with percent correct. Maybe her delight was based on a belief that last year he performed as well as or better than 82% of his peers and this year he performed as well as or better than 88%. The problem with that assessment is this: the norming group (that is, the group of peers to which a student’s performance is compared) is not composed of all North Carolina students taking the EOGs in a particular grade this year. Instead, the norming group is “students who took the test in the norming year” (see Understanding the Individual Student Report for the North Carolina End-of-Grade Tests). For the Mathematics EOG, the norming year is the first year the current edition of the test was administered, which is 2006. (The norming year for the Reading EOG was 2008.)

    That means, then, that last year, when he was in 3rd grade, my friend’s son scored as well as or better than 82% of North Carolina students who were in 3rd grade and took the EOGs in 2006. This year, in 4th grade, he scored as well as or better than 88% of North Carolina students who were in 4th grade and took the EOGs in 2006. We can’t compare the 82nd and the 88th percentile because these ranks refer to two different sets of peers! (The sketch just after this list makes both of these points concrete.)
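
For readers who like to see the arithmetic spelled out, here is a small sketch in Python that illustrates both points above. Everything in it is invented for illustration (the scores, the cohorts, and the helper functions are hypothetical, not actual EOG data or official scoring code), and it uses the same “as well as or better than” definition of percentile rank given above.

```python
# A toy illustration only: these scores and cohorts are invented,
# not actual EOG data or official NCDPI calculations.

def percent_correct(num_correct, num_items):
    """Percent correct: 100 times (items answered correctly / items on the test)."""
    return 100 * num_correct / num_items

def percentile_rank(student_score, norming_scores):
    """Percentile rank: the percentage of the norming group the student scored
    'as well as or better than', i.e., peers whose scores are at or below his.
    (Real score reports cap this at 1-99; this toy version just rounds.)"""
    at_or_below = sum(1 for s in norming_scores if s <= student_score)
    return round(100 * at_or_below / len(norming_scores))

# Point 1: percent correct depends only on the student's own answers.
print(percent_correct(44, 50))            # 88.0 -- 44 of 50 items, period.

# Point 2: percentile rank depends on the norming group. Imagine the same
# scale score of 350 compared against two different (invented) cohorts:
cohort_2006_grade3 = [310, 320, 330, 335, 340, 345, 352, 355, 360, 370]
cohort_2006_grade4 = [300, 305, 315, 320, 325, 330, 335, 340, 345, 355]

print(percentile_rank(350, cohort_2006_grade3))  # 60: 6 of 10 at or below 350
print(percentile_rank(350, cohort_2006_grade4))  # 90: 9 of 10 at or below 350
```

Notice that the same score of 350 lands at the 60th percentile against one cohort and the 90th against the other. That is exactly why an 82nd and an 88th percentile that refer to two different norming groups cannot be compared.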

If you take away anything from this PsychoBabble installment, let it be that percentile rankings, as far as the EOGs are concerned, need to be taken with the proverbial grain of salt. Rankings appear on the score reports largely because parents are accustomed to seeing them, but they really don’t tell us anything useful. The EOGs are criterion-referenced tests designed to measure the extent to which students are mastering the NC SCS. They were not designed – nor should they be used – to determine how well our children are doing in comparison to their peers.

Despite what all of us may believe, at least during the weeks our children are being tested, educational assessment is not inherently evil. In fact, we really can glean a lot of constructive information from testing, but we need to know and understand what we’re looking at. Stay tuned for more, and please feel free to write in with questions!


PsychoBabble was conceived and is written by Jennie Peters (with input from Kevin Joldersma, PhD). Jennie holds a PhD in Educational Psychology and a master’s degree in Arts in Education; she has worked as a psychometrician under contract to various state departments of education. She is currently at work on a book to demystify the complexities of educational testing. Please email her at psychobabble.nc@gmail.com with any questions about testing, and she will try to address them as quickly and as clearly as possible.

from Jennie Peters on Wednesday, July 28, 2010