- ARTICLES & ANNOUNCEMENTS (CALIFORNIA FOCUS)
- ARTICLES & ANNOUNCEMENTS (NATIONAL FOCUS)
ARTICLES & ANNOUNCEMENTS (CALIFORNIA FOCUS)
Source: SFGate.com – 27 March 2002
A network of science teachers representing each of California’s 12,500 public and private schools is being built to share information on science education and, organizers hope, strengthen the field for the future.
The National Science Teachers Association launched its “Building a Presence for Science” program in California [on March 27] during a convention in San Diego.
The program, already in place in 19 states and the District of Columbia, establishes a pyramid network of teachers and education officials who act as liaisons for their respective schools.
In California, 300 educators are being chosen to serve as “key leaders.” Each of these will be assigned about two dozen schools and asked to enlist a teacher at each campus to serve as a “point of contact.”
Currently, 160 “key leaders” have been selected, with at least one person from each county, organizers said. The goal is to have every school in the state connected to the network within three years, said Maria Alicia Lopez-Freeman, a University of California researcher who helped organize the program.
The “Building a Presence” system has been used in other states to distribute the latest information on teaching methods, resources, and training opportunities to science teachers throughout the K-12 school system…
Until now, educators have had a “very antiquated way of communicating to teachers,” with photocopied mailings often ending up in the back of a teacher’s lesson planner, she said. The new system, which will rely largely on the Internet, “will change the way we’re doing business in the Information Age.”
Each point-of-contact person will be responsible for providing science teachers in their respective schools with the information sent through the network, including resources to support the goals of national and state science education standards. It will be up to teachers and schools to fit the program to their specific needs.
A key goal of the program is to have elementary school teachers learn how to better incorporate science into other fields, such as reading lessons. This is especially important for schools under pressure to increase student performance on state-mandated tests of literacy or math.
Many schools are so focused on reading or math that they find it hard to make time for science, said Scott Hill, chief deputy superintendent for the California Department of Education. The “Building a Presence” program will show teachers “how to build into a reading program quality science instruction and practice using scientific inquiry.”
The launch of the program in California is being supported by a $520,000 grant from the ExxonMobil Foundation.
Related Article: “E-mail Tree Will Assist Teachers of Science” by Chris Moran
* In 2004, California will begin testing 5th grade students in science as part of the STAR testing program. There will be a field test in 2003.
* Contact Deborah Tucker, Professional Development Coordinator for science at the California Department of Education, for more information related to the “Building a Presence for Science” program: firstname.lastname@example.org (916-323-4963).
Source: California Department of Education
The 278-page preprint version of the California Science Framework that was adopted on 6 February 2002 can be downloaded (pdf) from http://www.cde.ca.gov/cfir/science/all.pdf
Source: NCTM Legislative and Policy Update – 29 March 2002
The Department of Education is seeking proposals to develop a national clearinghouse that summarizes programs and strategies proven effective in improving education. The “What Works Clearinghouse” will allow users to judge which programs have proven most effective, drawing on reliable, scientific research.
The Department will be asking the contractor to develop and maintain five databases for the clearinghouse:
* An educational interventions registry that identifies potentially replicable programs, products, and practices that are claimed to enhance important student outcomes, and synthesizes the evidence related to these interventions
* An evaluation studies registry, which is linked electronically to the educational interventions registry, and contains information about the studies constituting the evidence of the effectiveness of the program, products, and practices reported
* An approaches and policies registry that contains evidence-based research reviews of broader educational approaches and policies
* A test instruments registry that contains scientifically rigorous reviews of test instruments used for assessing educational effectiveness
* An evaluator registry that identifies evaluators and evaluation entities that have indicated their willingness and ability to conduct quality evaluations of education interventions
For more information on the request for proposals for the “What Works Clearinghouse” visit the Department of Education’s Web site at:
Source: Education Week – 3 April 2002
…The aim of the Campbell Collaboration, which recently held its second annual meeting here at the University of Pennsylvania, is to find out what research has to say about the myriad interventions tried in the social sciences, and then translate those results into recommendations for policymakers, public administrators, educators, police agencies, and social workers.
Named for the late Donald T. Campbell, the American statistician who called for an “experimenting society,” the group formed three years ago with backing from foundations and government agencies…
The new interest in “evidence-based” social policy was apparent at the group’s February meeting. More than 200 researchers and policymakers from 16 countries showed up for the two-day event hosted by Penn’s graduate school of education…
The collaboration is gathering steam just as policymakers on both sides of the Atlantic Ocean are stepping up their demands for empirical proof that programs or approaches work.
Perhaps nowhere is that emphasis more apparent than in the United States, where such pressure increasingly is being felt in education. Federal lawmakers laced the new “No Child Left Behind” Act of 2001 with phrases such as “scientifically based research” and required states and school districts to rely on research in choosing, for example, school improvement programs and professional-development lessons.
In Britain, Prime Minister Tony Blair, who was elected after promising to build an “evidence-based” government, has even formed a Cabinet-level agency to guide that effort. And the Swedish government last year launched an initiative to better incorporate empirical knowledge into social work practice.
“If you look across the European scene over the last two years, there is a growing interest and conscientiousness among practitioners, policymakers, and researchers to make decisions and practices that are more research-based,” said Haluk Soydan, a social work professor from Sweden who co-chairs the Campbell steering group with Mr. Boruch…
The Campbell Collaboration uses a statistical technique known as meta-analysis to synthesize findings… Harris M. Cooper, a University of Missouri psychologist who heads Campbell’s methods group…said meta-analyses…give[s] a more finely grained look that takes into account the size of each individual effect, the consistency of those effects, and the confidence that researchers place on them.
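The pooling step described above can be sketched in a few lines. This is a minimal, hypothetical illustration of fixed-effect, inverse-variance meta-analysis; the study values below are invented for the example and are not drawn from any Campbell review.

```python
# A minimal sketch of fixed-effect, inverse-variance meta-analysis.
# The (effect size, standard error) pairs below are hypothetical.
import math

studies = [(0.40, 0.15), (0.55, 0.20), (0.30, 0.10)]

# Each study is weighted by the inverse of its variance, so more
# precise studies (smaller standard errors) count more toward the
# pooled estimate -- this is how "the confidence that researchers
# place on them" enters the synthesis.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```

Note that the pooled estimate sits closest to the most precise study, which is the point of the weighting: a synthesis is not a simple average of results.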
To safeguard the objectivity of the reviews, researchers who take one on have to follow strict guidelines. Reviewers must ferret out every study ever done on a topic, whether published or not, as long as it meets the group’s criteria for a credible study. If reviewers have a potential bias toward a particular outcome, they have to reveal it, and they have to ensure that their funding comes from more than one source…
The reviews of educational interventions…will be slower in coming. Some of the hesitancy comes because those in the field have traditionally been wary of experiments and more quantitatively oriented studies. Some education researchers contend, for example, that it’s unethical to provide a potentially beneficial intervention to one group of children and not another. Others fear that researchers could “miss the trees for the forest” by focusing only on strict experiments with control groups and treatment groups.
For their part, the leaders of the Campbell Collaboration say they are not against qualitative research. They just haven’t figured out how to incorporate it.
“We know in these control trials we need to know what’s going on inside the black box…,” Mr. Boruch said. “That’s where good qualitative people come in. The challenge is how to integrate it all.”
The collaboration’s education group is taking some cues for now from the Centre for Evidence-Informed Policy in Education at the University of London’s Institute of Education. With support from the British government, the center coordinates research syntheses, both qualitative and quantitative, on “what works” in education.
Its forthcoming reports focus on such topics as problem-based learning and volunteer tutoring, and Campbell’s creators are hoping to borrow some of them for their own archives.
In the meantime, the education group’s co-chairman, C. Kent McGuire, wants to prod others to do Campbell reviews. Mr. McGuire, the U.S. Department of Education’s assistant secretary for educational research and improvement under President Clinton, is also enlisting policymakers to decide what research questions the education group should undertake.
“When they don’t feel like they ‘own’ the questions, policymakers will not trust the evidence,” Mr. McGuire said.
Whether the politicians will indeed “trust the evidence” is still an open question for the fledgling group… “I am not naive enough to assume that we’re going to get the British government or the U.S. government to give up some of their treasured beliefs,” said [Philip] Davies, a policy evaluator in the agency that Prime Minister Blair set up to lead his push for evidence-based government. “What we’re trying to say is this is what the evidence tells us about what works. The test is for government to run with it or not, and see if they get re-elected.”
Source: Education Week – 27 March 2002
Over the past 30 years, periodic statements of dismay and disappointment over the performance of research funded by the U.S. Department of Education have been a reliable staple in the field. Some of the complaints about the department’s Office of Educational Research and Improvement (OERI) seem to have considerable merit…
Attempts to restructure the OERI’s organization and programs could marginally improve the agency’s operation, but I would propose that the real problem of educational research lies elsewhere. It lies in the inflated expectations of consumers as to what can or should be accomplished through educational innovation and research.
Such expectations appear to be drawn from consumers’ experience with research in the areas of health or technology… But educational research is different. It deals not with the physical or biological world but with the social and behavioral world, a research environment where contextual variations matter greatly, but are elusive to gauge. The educational “treatment”–whether applied behavioral analysis, cooperative learning, or teaching social skills–must be studied in the context of powerful societal and biological factors that limit the impact of even the best of such innovations. Poverty, dysfunctional families, and hostile peer groups can and do cut into the effectiveness of education innovations.
Moreover, the payoff is of necessity limited. Meta-analyses of educational or social-behavioral interventions reveal that consistent gains of 0.5 to 0.75 standard deviations can be obtained on whatever educational, social, or behavioral outcomes are measured. These gains are certainly meaningful advances from the current status of the students affected, but they can be counted upon to be a great disappointment to those who had hoped for a breakthrough, a revolutionary improvement in education. It is the public’s search for such a breakthrough–when everything we know about human behavior suggests that only modest changes are possible through intervention, and then only with monumental and persistent effort–that is the problem. (This is clear when we consider, for example, psychotherapy, improving social skills for children with autism, or raising IQ scores.)
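To put those standard-deviation figures in concrete terms: assuming roughly normally distributed outcomes, a gain of d standard deviations moves an average student from the 50th percentile to the percentile given by the normal CDF at d. This sketch uses Python's standard library to do that arithmetic.

```python
# Translating standard-deviation gains into percentile terms,
# assuming approximately normal outcome distributions.
from statistics import NormalDist

results = {}
for d in (0.5, 0.75):
    # An average student moved up by d standard deviations lands
    # at the percentile Phi(d) of the untreated distribution.
    pct = NormalDist().cdf(d) * 100
    results[d] = pct
    print(f"gain of {d} SD: 50th percentile -> {pct:.0f}th percentile")
```

A 0.5 SD gain moves the median student to roughly the 69th percentile, and 0.75 SD to roughly the 77th: a real improvement, but well short of the “cure” the essay says the public expects.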
The matter is compounded by potential investigators who, aware of consumer interest and expectations, promise outcomes that cannot possibly be reached in order to gain research funds for their work. When the inevitable evaluation of their efforts is published, and the gains are perhaps one-quarter or one-half of what they had promised, another cycle of disappointment and despair ensues, even though gains have in fact been made.
One of the current hot educational issues, for example, is how we might close the “achievement gap” between minority and majority students that has been evident for a number of years. A large number of potential solutions have been proffered by ambitious educators and researchers who are content to ignore the fact that the basis for this gap lies only slightly in the educational process and teacher preparation, and more deeply in the sociocultural background of various subgroups in our society.
Although there would seem to be good evidence that the achievement gap involves cultural practices and habits that are extremely hard to change in the short run (one generation), these researchers focus mainly on the variables they can influence, namely the classroom and the educational process.
Certainly, improving the schools’ approach to these achievement-gap issues should result in some modest gains, and it does. But the American people aren’t looking for modest gains; they want a “cure.” The achievement gap should vanish with the application of some potent learning drug. Since no such elixir exists, we once again will be disappointed by the lack of magic in educational innovation.
Another example of how thoroughly embedded our expectations are is the fact that we consider there to be a standard solution to our problems of “inadequately designed educational research.” We are told that we should conduct intervention research using only randomized samples of experimental and control groups in our studies. Yet, this, again, ignores the essential truth that easily 80 percent of the factors influencing the outcome variables for both the experimental and control groups lie outside the “treatment” or “experimental program.” This is in stark contrast to studies of the effects of drug treatments in medicine, where the sheer potency of the treatment often can overwhelm any contextual factors in the study.
Of course, there is nothing wrong with using randomized selection of subjects, when that is possible and relevant, but doing so will hardly change the fact that the educational treatment–whatever it is–just does not have the power to dramatically change the behavior of the subjects. Not, at least, in a way that will satisfy the American consumer or the politician looking for solutions to problems, rather than improvements in coping with those problems.
Educators are not alone in having to settle for incremental gains. The physical therapist helping a child with cerebral palsy walk, the speech therapist helping a teenager control his stuttering, the rehabilitation personnel helping those with drug addictions learn to cope–all of these professionals have to settle for small gains. This is not because these specialists are incompetent, but rather because the behavior patterns of human beings are exceedingly difficult to change quickly or dramatically.
So, yes, let us reform the OERI, which like all organizations can and should improve its functioning. But let us also remember that we must make such improvement a continuing goal, rather than an instant cure. Only then can we avoid these periodic lapses into a generalized depression over the performance of our educational establishment.
Editor’s Note: The latest version of H.R. 3801 (“Education Sciences Reform Act of 2002”), which proposes replacing OERI with an Academy of Education Science, can be downloaded (pdf) from http://edworkforce.house.gov/markups/107th/fc/hr3801/320a1.pdf
Source: Boyce C. Williams (Vice President, NCATE) – 15 March 2002
In October of this year, the Specialty Areas Studies Board (SASB) will be conducting five-year reviews of revised program standards of…specialized professional associations…as well as a review of one set of new standards.
The board will also informally review the program standards of the professional associations that will present revised program standards in 2004 and will offer suggestions based on these tenets:
* Standards are aligned with the INTASC principles, including those on child and human development and learning, instruction, assessment, and professionalism, not just the principle on knowledge of subject content.
* Standards are written around what candidates should know and be able to do, rather than around the contents and experiences in teacher preparation programs.
* Program quality decisions are made on the basis of information on the demonstrated proficiencies of the candidates in relation to the standards.
* The material that programs are asked to provide includes whatever “contextual” information is needed to understand the outcomes of the program (and that could include something about the program).
* Standards incorporate the concept of the candidate having a positive effect on student (i.e., K-12 student) learning.
If you are interested in reviewing the program standards before they are presented for adoption by the SASB, please contact the specialized professional association(s) and request a copy of the standards or download them from the NCATE website (www.ncate.org). The program review coordinator and e-mail address are listed below each association’s name. If you have comments after reviewing the program standards, direct them to the specialized professional association(s) with a copy to my attention [Boyce Williams] at NCATE. Comments should be received by August 31, 2002.
The program standards scheduled for revision in 2003 won’t be reviewed at the next SASB meeting; however, in the interest of allowing sufficient time for comment from the field, they are included in the listing…
Associations presenting revised program standards in 2002:
…International Society for Technology in Education (ISTE) — Educational Computing Technology Leadership
Amy Vessel, email@example.com …
Associations presenting revised program standards in 2003:
…International Technology Education Association/Council for Technology Teacher Education (ITEA/CTTE) — initial programs
James R. McCracken, firstname.lastname@example.org
National Council of Teachers of Mathematics
Marilyn Hala, email@example.com
National Science Teachers Association
Steven Gilbert, firstname.lastname@example.org …