I am on vacation this week but wanted to open a discussion on an interesting testing story by my AJC colleague Ty Tagami.
About two dozen school districts plan to ask the Georgia State Board of Education this week to allow them to substitute their own tests for the Georgia Milestones. Among the 22 systems seeking the board’s blessing are Marietta City Schools and Clayton, Cobb, Fayette and Newton counties.
The Legislature passed a bill this year allowing pilot programs in which districts create and administer tests designed to provide real-time data analysis.
Senate Bill 362 authorizes:
Beginning with the 2018-2019 school year, the State Board of Education shall establish an innovative assessment pilot program to examine one or more alternate assessment and accountability systems aligned with state academic content standards. The pilot program shall span from three to five years in duration, as determined by the state board and may include up to 10 local school system participants. A consortium of local school systems implementing the same innovative alternate assessment may participate in the pilot program and shall be counted as one of the ten pilot program participants.
The participating local school systems shall be selected by the state board in a competitive process and based on criteria established by the state board, including current compliance with the terms of their charter system contract or strategic waivers school system contract.
The local school systems participating in the pilot program shall be authorized to design and implement an innovative alternate assessment and accountability program which may include, but shall not be limited to, cumulative year-end assessments, competency based assessments, instructionally embedded assessments, interim assessments, performance based assessments, or other innovative assessment designs approved by the State Board of Education.
The chief argument for allowing districts to take over the heavy lifting in testing is that they can create and give tests throughout the year to identify where kids are lagging and trigger targeted help. Districts contend the Milestones, administered at the end of the school year, enable only a post mortem of what went wrong.
As Tagami explained in his story:
“We need real-time information to help our teachers,” Putnam County School District Superintendent Eric Arena said last winter, when his district was leading the push for the bill in legislative hearings.
Putnam, which applied for participation with the state education board last month, is leading nine other districts in a consortium that wants to use a test developed by a company called Navvy Education.
This week, at state board meetings on Wednesday and Thursday, another dozen districts will attempt to join the program: Cobb and Newton counties are applying on their own; Marietta and Clayton are applying as part of another 10-district consortium, the Georgia MAP Assessment Partnership.
Cobb will use a homegrown test called Cobb Metrics, hiring an expert to determine “comparability” with the Milestones. Newton would use several nationally available private exams, including the Iowa Assessments and the Cognitive Abilities Test. The Georgia MAP consortium will use the MAP test, another well-known private exam created by NWEA, a multinational education organization.
Critics of dumping state tests usually cite two concerns. First, it would be a challenge to ensure individual districts are testing to sufficiently high standards. Second, parents would not be able to compare their children’s performance against peers in other systems.
The Thomas B. Fordham Institute posted a column this week in praise of state testing by Cory Koedel, an associate professor of economics and public policy at the University of Missouri. The column was in response to Fordham’s new study on high school grade inflation.
The study was done by Seth Gershenson, associate professor of public affairs at American University, research fellow at the Institute of Labor Economics, and technical advisor to the Institute for Education Policy at Johns Hopkins. Gershenson looked at the performance of all North Carolina high school students taking Algebra I from 2004–05 through 2015–16, including course transcripts, state end-of-course exam scores, and ACT scores. He concludes:
- Although many students get good grades, few earn top marks on the statewide end-of-course exams for those classes.
- Algebra I end-of-course exam scores predict math ACT scores much better than do class grades.
- During the decade studied, grade inflation was more severe in schools attended by affluent students than in those attended by lower-income pupils.
Writing about those results, Koedel says:
The empirical results are consistent with the many pressure points in the education system that incentivize lax grading standards, and the fact that few if any incentives exist to encourage honest but unpleasant assessments by teachers. When grades are higher everyone is happy: students, parents, teachers, and administrators. The pressure to give high grades is almost surely more pronounced at more affluent schools. As a professor myself, I am constantly nudged to weaken my standards in small ways. It reduces complaints from students, makes grading easier, makes everyone happier when they interact with me, improves my teaching evaluations, etc. No one pressures me to uphold rigorous standards.
Gershenson’s analysis points to the value of standardized tests as measures of student performance. These tests are routinely criticized, but they play an important role in our education system. Among other things, they help keep us grounded. Without the grounding these tests provide, the temptation to ignore performance deficiencies would only become more problematic.
In his own conclusion, Gershenson expresses concerns about eliminating end-of-course state tests:
Yet over the past several years, some states—such as Oklahoma, Ohio, and Texas—have moved to eliminate EOCs or reduce their importance. Although state policymakers must weigh the costs and benefits of any set of policies, this study demonstrates that end of course tests have tremendous value as diagnostic tools besides the benefits scholars have attributed to them in promoting student accountability. As described above, the EOCs provide complementary information to students, parents, and teachers that can help them better understand student achievement. And maintaining external assessments of student achievement enables policymakers and researchers to identify potentially inflated grades and independently gauge the extent of student learning. Given all these benefits, states should think twice before abandoning these valuable tools.