- ARTICLES & ANNOUNCEMENTS (CALIFORNIA FOCUS)
- ARTICLES & ANNOUNCEMENTS (NATIONAL FOCUS)
Source: California Department of Education – 20 February 2003
State Superintendent of Public Instruction Jack O’Connell today released the 2002 Base Academic Performance Index (API), which highlights California’s continued commitment to ensuring that the state’s six million schoolchildren receive an education based on world-class standards.
This year’s changes are the most far-reaching since the inception in 1999 of the API, the foundation of the California school accountability system. The 2002 Base API is the culmination of a long-term effort to use California’s comprehensive and rigorous standards as the benchmark for learning…
The new baseline for the first time puts the majority of the weight on tests specifically geared toward California’s high standards. Eighty percent of the API for elementary and middle schools will rest on the California Standards Tests (CST), while almost 90 percent of the API for high schools rests on the standards tests and the California High School Exit Exam. Specifically, the API includes the CST English Language Arts and CST Mathematics results for grades 2-11, the CST Social Science results for grades 10-11, and the California High School Exit Exam (CAHSEE) results. The remainder of the weight continues to be placed on the national, standardized, norm-referenced Stanford Achievement Test, Ninth Edition (Stanford 9). By placing limited weight on the norm-referenced test, it is possible to focus on testing to California’s high standards while maintaining the ability to benchmark our students against the rest of the nation’s schoolchildren.
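The weighting scheme described above amounts to a weighted average of component test results. The sketch below illustrates the arithmetic; the per-test weights and scores are invented for illustration only (the article gives the aggregate 80/20 split for elementary and middle schools, not the CDE’s actual per-test weights, which vary by grade span and content area).

```python
# Illustrative sketch of a weighted composite index like the API.
# The component weights below are hypothetical, chosen only to reflect
# the article's statement that standards-based tests carry 80 percent
# of an elementary/middle school API and the Stanford 9 the remainder.

def weighted_index(scores, weights):
    """Combine per-component performance values on the 200-1000 API-style
    scale using fractional weights that sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[name] * w for name, w in weights.items())

# Hypothetical elementary-school split: 80% standards tests, 20% Stanford 9.
elementary_weights = {
    "CST English Language Arts": 0.48,
    "CST Mathematics": 0.32,
    "Stanford 9": 0.20,
}
scores = {
    "CST English Language Arts": 640.0,
    "CST Mathematics": 610.0,
    "Stanford 9": 700.0,
}
api = weighted_index(scores, elementary_weights)
print(round(api))  # prints 642, a single index on the 200-1000 scale
```

Because the weights are fractions of one, shifting weight from the Stanford 9 to the CST components changes what the single number rewards without changing its scale.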
“The growing emphasis on standards-based tests for accountability provides schools with a more complete picture of how well their students are learning what is being taught in California classrooms,” O’Connell said. “It also challenges our schools to incorporate state-adopted academic standards into their instructional programs as quickly as possible.”
While not a finished product, the API will now become more predictable. With the addition of the CST Mathematics for all grades, the CST Social Science for grades 10-11, the CAHSEE results, and the new weighting system, the API baseline now contains almost all major indicators. Over the next few years, the API will continue to add indicators, including the standards-based Science tests as well as the California Alternate Performance Assessment. Eventually, the API will include graduation and attendance rates.
The purpose of the API is to measure the academic performance and progress of schools. It is a numeric index that ranges from a low of 200 to a high of 1000. The 2002 Base API establishes this year’s baseline for a school’s academic performance and sets an annual target for growth. The state has set 800 as the API score that schools should strive to meet.
Please note: Because the 2002 Base API includes new California standards-based tests as well as the CAHSEE, and because the calculation of the 2002 Base API is different from the 2001-2002 Growth API that appeared in October of last year, any comparison of the two would be inappropriate.
The 2002 Base API results currently are posted at the California Department of Education Web site at http://api.cde.ca.gov. The reports available online will include the 2002 Base API scores, statewide and similar schools rankings, and annual growth targets for elementary, middle, and high schools that have at least 100 valid student scores from the Standardized Testing and Reporting (STAR) program. Reports for schools with fewer than 100 valid student scores (but more than 10) have APIs marked with asterisks. The asterisk indicates the greater statistical uncertainty of an API based on fewer than 100 test scores. Small school API reports do not include similar schools rankings.
Schools are expected to meet their annual API growth targets during the upcoming 2003 STAR and CAHSEE testing. As in the past, schools that meet their growth targets and make at least five points of growth (four points for all numerically significant subgroups) will be eligible for API-based awards programs. While the current state budget crisis may make it impossible to reward schools financially in the coming fiscal year, schools remain eligible for non-monetary awards such as the California Distinguished Schools Program.
Source: The Sacramento Bee – 21 February 2003
State education officials released revised rankings Thursday that show how California’s 7,400 public schools measure up against each other…
The API is a score that ranges from a low of 200 to a high of 1,000. The state’s long-term goal is for every school to reach a score of at least 800. This year, 20 percent of elementary schools, 13 percent of middle schools and just 4 percent of high schools met that goal…
Gov. Gray Davis, who pushed for a school accountability system during his first election bid,…hailed the evolution of the API.
“The API holds schools to higher standards,” Davis said. “It reflects our belief that all students in all schools deserve challenging academic content that will prepare them for success in school and beyond”…
Along with the base API scores, each school received two rankings from a low of 1 to a high of 10. The first ranking indicates how the school performed compared with other schools statewide; the second “similar schools” ranking shows how the school performed compared with schools with similar poverty rates and ethnic ratios…
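The two 1-to-10 rankings are deciles: schools sorted by API and divided into ten equal bands. A minimal sketch of that idea follows, with made-up school names and scores; the CDE’s actual procedure also handles ties and excludes small schools, which this sketch ignores.

```python
# Decile ranking sketch: sort schools by API, split the sorted order into
# ten equal bands, band 1 = lowest tenth, band 10 = highest tenth.
# All names and scores are fabricated.

def decile_ranks(api_scores):
    """Map each school to a 1-10 band by its position in the sorted order."""
    ordered = sorted(api_scores, key=api_scores.get)
    n = len(ordered)
    return {school: (i * 10) // n + 1 for i, school in enumerate(ordered)}

# Ten hypothetical schools with APIs from 500 to 860.
scores = {"School %c" % (65 + i): 500 + 40 * i for i in range(10)}
ranks = decile_ranks(scores)
print(ranks["School A"], ranks["School J"])  # prints 1 10
```

The “similar schools” ranking uses the same decile logic, but applied within a comparison group of demographically similar schools rather than statewide.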
(3) “A Better Way is Needed to Measure Academic Progress in our Schools” by Tyler W. Cramer and Ginger Hovenic
Source: Business Roundtable for Education of the San Diego Regional Chamber of Commerce Foundation
The recently released report on how the state’s public schools ranked in performance…told Californians how their neighborhood schools compared with other schools serving students with similar socioeconomic backgrounds…
The API and its school rankings are a good-faith effort to establish a standardized accountability system with some teeth. However, they don’t answer the question that every parent, teacher, administrator, school board member, businessperson, and community member should be asking: Specifically, where is learning occurring most efficiently and effectively? More specifically, for parents: Is the fourth grade at ABC Elementary School a good place for Junior next year?
Under the state’s testing program, comparing a school’s API scores from year to year is supposed to reveal whether a school is “improving” or not. Unfortunately, the concept compares the compiled individual, or “aggregated,” achievement scores of two entirely different populations of students. It’s like comparing apples to oranges…
The best way to measure whether efficient and effective learning is occurring at a specific school is to compare each student’s “pre-test” score (his or her 1999 SAT-9) against the same student’s “post-test” score (his or her 2000 SAT-9) to obtain “matched data.” Only when the differences between the pre-test and post-test scores of each student are analyzed can the rate of learning at a particular school be determined…
If progress is the criterion for whether efficient and effective learning is taking place, it doesn’t make sense to compare schools that started at different performance levels, as the API and the school rankings do. Only student populations that began at roughly the same level of student performance should be compared with each other to identify where efficient and effective learning is taking place.
To identify schools making progress, the Business Roundtable for Education of the San Diego Regional Chamber of Commerce Foundation, led by Tyler Cramer, developed the Relative Progress Index (RPI). The RPI analyzed the SAT-9 test scores of more than 23,000 3rd, 4th, and 5th graders in the San Diego Unified School District.
The 1999 RPI Study measured the progress of these students in reading and math, comparing their 1998 SAT-9 pre-test results to their 1999 SAT-9 post-test results. The 2000 RPI Study used the 1999 SAT-9 as the pre-test and the 2000 SAT-9 as the post-test. Analyzing the pre- and post-tests of the three grade levels of students in these two subjects yielded six “class/subject” progress scores for each school. Each class/subject progress score was then compared with the same class/subject progress scores from nine other elementary schools where the pre-test scores in that class/subject were closest to the pre-test score of the examined school’s class/subject. By ranking each examined school’s class/subject progress scores within their respective 10 class/subject comparison groups, each school received six Class/Subject RPI Scores, one each for reading and math progress in each of the three grades, with “10” being the highest and “1” the lowest. In addition, each class/subject progress score was ranked against all of the same class/subject progress scores in the district and was assigned RPI District Wide Scores, with “10” being the highest and “1” the lowest.
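The comparison-group ranking described above can be sketched as follows. All names and numbers here are fabricated, and the sketch works from school-level class/subject scores rather than the matched individual SAT-9 records the RPI studies actually used.

```python
# Sketch of the RPI comparison-group ranking: each school's class/subject
# progress score is ranked within the group of 10 schools (itself plus the
# 9 others) whose pre-test scores are closest to its own.
# Assumes at least 10 schools; ties are broken arbitrarily.

def rpi_scores(pre, progress):
    """pre and progress map school -> class/subject pre-test score and
    pre-to-post progress. Returns school -> rank within its comparison
    group, 10 = most progress in the group, 1 = least."""
    scores = {}
    for school in pre:
        # The school itself plus its 9 nearest neighbors by pre-test score.
        group = sorted(pre, key=lambda s: abs(pre[s] - pre[school]))[:10]
        # Rank the school's progress within the group, lowest first.
        ordered = sorted(group, key=lambda s: progress[s])
        scores[school] = ordered.index(school) + 1
    return scores

# Ten hypothetical schools: similar pre-test scores, varying progress.
pre = {"S%d" % i: 600.0 + i for i in range(10)}
progress = {"S%d" % i: float(i) for i in range(10)}
print(rpi_scores(pre, progress)["S9"])  # prints 10: most progress in its group
```

With exactly ten schools, every comparison group is the whole set, so the group ranking coincides with a district-wide ranking; with more schools, the two diverge, which is why the studies report both a Class/Subject RPI Score and an RPI District Wide Score.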
The RPI, like the API, is intended to be used only as one measure among many in assessing the effectiveness and efficiencies of particular learning environments. In particular, until RPI Scores, API Scores or other measures can be correlated with particular instructional strategies and other controllable and non-controllable inputs, they can only point to where effective and efficient learning may or may not be occurring. They cannot, as yet, tell us which instructional strategies or other inputs are making a difference…
Source: U.S. Department of Education
On November 5, 2002, President Bush signed into law the Education Sciences Reform Act of 2002 establishing a new organization, the Institute of Education Sciences. The Office of Educational Research and Improvement, which had formerly been responsible for education research and statistics, expired upon enactment of the new Act. The Institute of Education Sciences reflects the intent of the President and Congress to advance the field of education research, making it more rigorous in support of evidence-based education. The Institute consists of the National Center for Education Research, the National Center for Education Statistics, and the National Center for Education Evaluation and Regional Assistance. On November 22, 2002, the President appointed Grover J. (Russ) Whitehurst to a six-year term as the first Director of the Institute.
The organizational structure of the Institute will be taking shape over the next several months. In the meantime, an ambitious series of research, evaluation, and statistics activities is moving forward.
Source: U.S. Department of Education
The purpose of the research program on Effective Mathematics Education is to support the identification of interventions and approaches in mathematics education that will result in improving mathematics achievement for all students and closing achievement gaps between minority and non-minority students, and between economically disadvantaged students and their more advantaged peers. The focus of the 2003 competition will be middle-school mathematics education.
Request for Applications (pdf): http://www.ed.gov/offices/IES/emer/MathRFP.pdf
…Because low achievement in math by U.S. students starting in the middle grades has serious consequences for students and the nation, and because the No Child Left Behind Act requires that states and localities use research proven practices in educating all children, it is critically important for the Institute of Education Sciences to fund research that will answer questions that are central to improving the effectiveness of mathematics education. Such research is largely lacking for mathematics education in middle school. Development and identification of more effective interventions and approaches in mathematics education, and understanding how to replicate the best of current practice, will result in improved mathematics achievement for all middle school students. The Institute of Education Sciences launches the Research on Effective Mathematics Education (REME) program to accumulate scientific evidence on interventions, approaches, and systems that support the development of mathematical proficiency by all students…
Research funded under the Research on Effective Mathematics Education program must target at least one of the two following goals:
Goal 1: Evaluating Effective Curriculum and Instruction. Identify and evaluate approaches to instruction and curricula in the middle grades with a focus on approaches that provide the best support for a successful transition to algebra. One area of interest is studies that examine the effects of curricula that have more depth and less breadth compared with the broader curricula that are more typical of U.S. practice. More focused curricula could be obtained by the adaptation and implementation of materials that have been used successfully by other nations, or through selective use and sequencing of materials that are currently available in the U.S. market. Another area of interest is research that explores the effects of different pedagogical approaches to instruction (e.g., teacher directed versus student directed activities, or project-based versus practice-based, or group-based versus individual), or that examines the effects of different sequences of instruction (e.g., simultaneous instruction in foundational skills, reasoning, and application versus instruction that stresses fluency in foundational skills before the introduction of more conceptual tasks). Research that examines the effects of teachers’ use of real life problems versus problems that involve only mathematical language and symbols in classroom lessons is also of interest, as is research that examines the role of assessment systems that provide regular and ongoing feedback to teachers and students on progress towards instructional objectives…
Goal 2: Replicating Successful Schools and Districts. Identify schools or districts that are succeeding in mathematics with children from minority and low-income backgrounds and determine how the approaches used in these schools and districts can be replicated in low-performing schools. The principal area of interest is the design and evaluation of models for replicating schools and districts that are successful in producing high mathematics achievement for all students…
The research must be carried out in school (or other education delivery) settings. Applicants must develop relationships with schools that will support the proposed research, and document that relationship in a detailed letter of support from the education organization(s)…
Applications must be submitted electronically by the application receipt date [April 18, 2003], using the ED standard forms and the instructions provided at the following Web site: https://ies.asciences.com. Potential applicants should check this site as soon as possible after February 21, 2003, when application forms and instructions first become available…The application form approved for this program is OMB Number 1890-0009.
Source: Education Week – 19 February 2003
In the 14 years since the National Council of Teachers of Mathematics issued its standards for teaching the subject, a debate has raged over whether schools should follow its recommendations and emphasize conceptual understanding as well as performance of skills.
Curricula based on the national standards have been on the textbook market for six years, and studies now are starting to be published that shed light on the issue of whether the new path was a good one to take.
A panel convened last year by the National Research Council is evaluating the research to determine whether it is rigorous enough to draw conclusions about the effectiveness of the new math curricula. The committee is scheduled to release its report this year.
And the Bush administration will be evaluating curricula as part of its five-year effort to raise the quality of the nation’s math and science education.
But many math education researchers say that studies published in professional journals and a recent book suggest that the new curricula are on the right track.
The research fails, however, to answer definitively the big question in the debate: Are curricula that emphasize conceptual understanding the best way to teach mathematics? Or should schools continue the traditional approach of teaching the rudimentary skills of the discipline before expecting students to apply them in real-life situations?
The research conducted to date isn’t specific enough to identify which kind of curriculum works best in schools.
“A superintendent of a large school district contacted me not long ago to ask if there was research that would help him select an elementary school math curriculum that would be effective for the types of children served by his district,” Grover J. “Russ” Whitehurst, the director of the federal Institute of Education Sciences, told a U.S. Department of Education “summit” on math education this month.
“I had to tell him that there was no rigorous research on the efficacy of widely available elementary mathematics curricula, and that about all I could offer him was my opinion,” he said.
Supporters of the new curricula, though, say that enough research is showing positive results for schools to go ahead with the new programs while researchers define the ideal conditions for using them.
“I haven’t seen any evidence that they are failures and we should pull them out of schools,” says James Hiebert, a professor of education at the University of Delaware, in Newark. “They are promising enough that we should pursue the implementation of them so we can collect long-term data.”
But critics maintain that the research hasn’t delivered nearly enough evidence to warrant schools’ switching to approaches that they say gloss over basic mathematical skills.
“I haven’t seen a study that really convinced me,” says Michael McKeowen, a co-founder of Mathematically Correct, the national parent group that is influential in organizing opposition to the new curricula, and a professor of medical science at Brown University. The studies often compare the results from new curricula with those from control groups whose teachers didn’t receive the same professional development, McKeowen says. Students who have the chance to study under teachers with such preparation, he says, are likely to perform better than those who learn from less qualified teachers.
While advocates on both sides of the debate differ on the best way to improve math education, there is a consensus that U.S. student performance in the subject needs to improve.
Even though 4th and 8th graders’ scores on the National Assessment of Educational Progress rose steadily during the 1990s, U.S. students have scored poorly on several international studies.
In 1996, for example, the Third International Mathematics and Science Study found that U.S. 4th graders performed above the international average, that 8th graders were in the middle of the pack, and that high school seniors fell below average.
A repeat of the study given to 8th graders in 1999 found them again to be around the international mean. (http://www.edweek.com/ew/ewstory.cfm?slug=15timms.h20)
That disappointing record adds urgency to the challenge for researchers: What does work best in math classrooms?
One study examining the Interactive Mathematics Program, one of the high school curricula influenced by the NCTM standards, paints a picture of how math classes are changing in some American high schools–and gives an example of how difficult it is to draw conclusions from one project.
When freshmen started at a suburban Philadelphia high school in 1997, the study notes, their math classes were different from the classes that had preceded them.
Instead of just teaching algebra (the common 9th grade math subject), the new Interactive Mathematics Program, known as IMP, included pieces of geometry and statistics. Students would have waited until 10th grade or later to learn such subjects under the previous curriculum.
The teaching methods used in the new freshman course also differed markedly from the problem-solving with x’s and y’s representing mythical variables that had dominated the high school’s Algebra 1 course before 1997. Now, teachers offered examples of problems from real-life situations. Students might be asked to find the length of shadows or graph the population growth of the West during the 19th-century migrations.
That same class of 9th graders went through two more years with the same curriculum, along the way supplementing their knowledge of algebra, geometry, and statistics with trigonometry, basic calculus, and other areas of the discipline that traditionally were taught in separate high school courses.
By the time the group reached the end of 11th grade, the researcher who tracked them found that they outperformed a similar group of students, two years ahead of them, who had studied under a traditional curriculum. What’s more, the students in the IMP group, as a whole, had taken more math classes and enrolled in more Advanced Placement courses than those ahead of them.
“It’s a piece of evidence pointing toward a positive effect of the curriculum,” says Steven L. Kramer, who conducted the study as his doctoral dissertation at the University of Maryland and is preparing the results for submission to peer-reviewed journals.
Kramer adds that it’s hard to identify whether the curriculum itself was instrumental in raising student achievement or whether it was one of several ingredients needed.
The students who learned under the Interactive Mathematics Program also studied in a block schedule, in which they attended math classes for 90-minute periods over a semester. Their predecessors had attended 45-minute classes spread over a whole school year. Kramer’s study was unable to conclude whether IMP would have succeeded under the previous schedule.
Also, the school, which Kramer has not named, decided to switch to IMP with the full support of the math faculty. The teachers all were given training in how to teach the new curriculum. Would the program have succeeded with a reluctant and unprepared staff of teachers? Common sense and other research suggest not.
That’s why Kramer and others suggest that his research needs to be viewed along with other studies with similar results to determine the effectiveness of the curriculum and the best circumstances for using it.
While studies such as Kramer’s are common, critics of the NCTM standards and the curricula based on them say the research doesn’t offer any evidence that the math innovations are working.
In such studies and others like them, the critics say, the teachers and students are aware that they are research subjects, an awareness that gives them an incentive to perform well. “These are weak designs because the very act of volunteering to use a new curriculum typically carries with it extra motivation to succeed, and thus biases the results towards the new curriculum,” Whitehurst, the Education Department’s research chief, argued at the math summit.
What the research literature lacks, according to Whitehurst and critics of the NCTM standards and the curricula based on them, is studies comparing student achievement under two specific curricula.
“They should exist, but they don’t exist,” says Bastiaan J. Braams, a research associate professor of mathematics at New York University and a critic of the NCTM. The research on the new curricula also is hampered, Braams says, by the fact that it has been conducted by people involved in designing the programs or who are supportive of them. “In general, I have a very low opinion of the research,” he says. “I see it as advocacy research.”
But supporters of the NCTM approach contend that both the quality and quantity of research they are producing provide evidence that the changes are having a positive impact.
In Standards-Based School Mathematics Curricula: What Are They? What Do Students Learn?, published this year, researchers published a collection of 13 studies that they say supports the new curricula.
“The evidence at hand is both promising and substantial,” Jeremy Kilpatrick, a professor of mathematics education at the University of Georgia, in Athens, writes in the book’s concluding essay. “It will not quiet the critics…but it should encourage those who welcome improvements in school mathematics and understand the difficulty of evaluating something as complex as a curriculum.”
In three studies of high schools using IMP, for example, Norman L. Webb found that students who studied under the curriculum performed as well on standardized tests and the SAT as students in a control group. He also found that students using IMP outperformed others when presented with tests that required them to use problem-solving rather than basic skills. They also took more math classes than did students studying a traditional approach, he says.
“We were able to show that the curriculum did what it said it was going to do,” says Webb, a senior research scientist at the Wisconsin Center for Education Research at the University of Wisconsin-Madison.
But it’s still difficult for him to declare IMP a victor in a competition between mathematics curricula, because true control groups are hard to maintain. In Webb’s study, he found that students would transfer from IMP to classes using the traditional curriculum, depending on their interests. Teachers who taught the traditional way started to use techniques, such as small-group projects, that are a common approach in IMP. They also altered what they taught to match the tests they knew researchers would be evaluating.
“It’s very complicated to come up with a good evaluation design,” Webb says.
McKeowen, the co-founder of Mathematically Correct, says that such complications make it tricky to reach definitive judgments about the efficacy of the new curricula.
He agrees that any study that tries to create a separate control group is bound to have the types of problems that Webb cites. The situation is unlike that in a drug study, he points out, where participants do what they’re told because they don’t know whether they’re taking a placebo or the drug being tested.
What is clear is that the debate over research into the new curricula is about to heat up.
The Education Department on Feb. 6 launched an initiative to highlight the need for improved mathematics instruction. Raising the quality of research in the field is a primary objective in the five-year effort. At the mathematics summit in Washington, department officials proposed to spend $120 million in the next fiscal year to conduct research to determine best practices for teachers, as well as to compare curricula with one another. (http://www.edweek.com/ew/ewstory.cfm?slug=22math.h22)
What will it take, then, to determine the effectiveness of the new curricula?
Kramer, the researcher who studied students at the suburban Philadelphia high school, compares his study to a piece of evidence introduced in court. Alone, the evidence isn’t enough to prove guilt or innocence. But viewed along with other evidence, it can contribute to delivering a verdict, he says.
His study of the Interactive Mathematics Program, he says, gives hints about the conditions under which students learn most effectively, hints that are, in turn, supported by similar research.
Hiebert, the University of Delaware professor, suggests that educators and mathematicians need to live with the ambiguity of the research, just as people do in other fields. Recommendations on diet and exercise, he says, are always being updated based on new studies. And when new nutrition standards are introduced–such as suggestions to eat five fruits and vegetables every day–they are guidelines rather than hard-and-fast rules guaranteed to produce better health.
“Expecting a firm or solid proof [in math education] is unrealistic,” Hiebert says. “It’s unrealistic in a lot of other areas that people accept. We should expect the same thing in education.”

[A short collection of Web resources follows this article.]