At Bell Curves, we love going through old SATs to beef up our vocabulary and assuage potential polemical debates on deleterious topics. What can we say, we’re mercurial that way.
We’ve had many questions over the years about SAT score reporting policies and more recently the Score Choice policies. Hopefully this will shed some light on these policies and help make the testing and application process a little less of a mystery.
When people say “test prep,” what they mean varies greatly, and it’s usually limited to what they did themselves or what they’ve heard of. As part of this blog, we hope to provide a bit more insight into some of the options for test preparation. Our team has blogged quite a bit about free prep resources (check out our two most popular posts on test prep here and here), so it’s high time we devote a little space to the commercial products.
The College Board has redesigned student reports for the PSAT. The new reports have several advantages:
With the PSAT on the horizon on October 13th (or 16th), many students are struggling to factor this test (yet another one!) into their college admissions plans and profile. To help students and parents navigate this stressful period, we offer you this insight into the role the PSAT plays.
Let’s start with the basics:
- The PSAT is a shorter, slightly easier practice SAT
- The PSAT is offered in schools to juniors and many sophomores (and even some freshmen)
Many parents I speak to ask me about the SAT essay: its weight in the total score, its role in admissions decisions, and, most importantly, how to improve scores. Parents and students are often confused by the requirements of the SAT essay and how it differs from the essays most common to high school English classes. Many of you might even have heard test prep “experts” tout strategies for improving SAT essay scores that seemed off the wall and far-fetched. I thought I’d shed some light on the issue.
First, here is what the College Board says:
People often ask me how I became the Sultan of Standardized Tests, the Baron of the Bubble, and the Prince of POE, or they just ask how I got so good at taking tests. It took me a while, but after ruminating on the question I think I’ve arrived at not only an answer but also advice that will let others develop some of the same talent. The answer I’ve arrived at is “I was a smartass as a kid.” Now I know that sounds crazy, but keep reading and I promise it will make sense.
Consider the skills that define a proper smartass:
John Hechinger’s article, “SAT Coaching Found to Boost Scores — Barely,” deserves a deeper analysis. If we take Mr. Hechinger’s conclusions at face value — conclusions that seem more concerned with drawing a crowd than with accurate reporting — then we are voluntarily subordinating the few facts in this discussion to a series of anecdotes and conjecture.
Mr. Hechinger draws a variety of conclusions about the ethical practices of test preparation companies, particularly about (1) the claims of substantial increases in student performance on standardized tests and (2) the validity of practice tests given by these companies.
In making his first point, Mr. Hechinger directs us to the study recently published by the NACAC. The NACAC report does not discuss improvement, only effect. The report makes no claims about the veracity of any particular “improvement” at all; instead, it seeks to demonstrate that the net effect of expensive test preparation differs by only 30 points from that of other, less expensive forms of preparation. Mr. Hechinger completely disregards one of the report’s major premises — that improvement and effect are two entirely different measurements. By missing that premise, his conclusions further cloud the very distinction the report attempts to draw.
To address the second point, I counter that providing inaccurate scores is at least partly counter-productive for test preparation companies, which rely on analyzing student performance so that instructors can direct and focus student preparation. Inflating or deflating scores simply prevents companies from providing effective training. If we accept, as Mr. Hechinger proposes, that a company’s major form of marketing is trumpeting score improvements, then companies have strong incentives to produce real improvements, not just the perception of them. The fact that incentives exist for companies to skew results in their favor does not necessarily mean that they will act on them. In fact, since there are no real tests on the market and ETS, the maker of the SAT, publishes only its own “practice tests” as a test-preparation option for students, one might argue that the same incentive exists for ETS. Moreover, since most students likely to take these courses will already have taken the PSAT, and possibly even the SAT, they have their own external benchmarks against which to measure their improvement.
Furthermore, the experience of one student who earned a perfect score on his SAT is an inappropriate and peculiar example when considering average score ranges, let alone when attempting to disparage an entire industry. This type of anecdotal evidence is hardly indicative of student experiences on a large scale. When considering this type of student experience, a reasonable, objective observer must also take into account a number of variables that can affect a student taking the official test. Students report a wide variety of feelings, ranging from fear and anxiety to excitement and exhilaration. In some cases these reactions translate into a stronger performance (a phenomenon known as eustress), while in other cases they translate into a worse one. Citing an informal study of a few students who worked with one provider and had similar experiences explicitly ignores those students who performed on par with their practice tests, or even underperformed them.
Mr. Hechinger should have perhaps noted that the NACAC report concludes with the following points:
- students should be encouraged to prepare before taking admissions tests
- students should be counseled to use cheaper forms of test preparation
- commercial coaching or private tutoring may well be worth the cost
Finally, the article presents two propositions: (1) SAT coaching resulted in around 30 points of score improvement, and (2) a third of schools with tight selection criteria said that an increase of 30 points would “significantly improve students’ likelihood of admission.” Even assuming that the improvement from preparation is only 30 points, the author seems compelled to ignore the glaring conclusion that this “modest benefit” can have a very real effect on potential admission to selective schools. To the extent that students can avail themselves of companies (like my own) that offer quality test preparation at 20 to 50 percent of the rates charged by the providers highlighted in Mr. Hechinger’s piece, the net value of that “modest benefit” increases dramatically.
The increasingly competitive world of standardized testing and college admission has forced students to seek out every available resource. Until schools become more transparent about how they value SAT scores, college applicants will reasonably pursue any gains, modest or significant, within their reach. And until a more extensive and finely tuned study is performed (the need for which is continually noted in the NACAC report), it is irresponsible to draw conclusions about test preparation companies or their effectiveness. The more salient question, ignored by both Mr. Hechinger and the NACAC report, is this: to what extent are students without access to high-quality test preparation disadvantaged by their inability to gain those modest 30 points?
Hashim Bello is co-founder of Bell Curves, a test preparation company that seeks to deliver high-quality test preparation to traditionally underserved youth.