By Ashley Berner
Deputy Director, Johns Hopkins Institute for Education Policy
July 14, 2016
Do private school-choice programs in the United States help or hinder the goals of a democratic education? The answer seems to be: it depends. It depends on how the enabling law is structured, which (if any) accountability regime exists, the amount of funding allowed, the subset of students affected, and the number and quality of options within the local educational ecosystem. It also depends on which grade level and which outcomes of interest (test scores, college enrollment, civic values?) we have in mind.
The question itself, however, highlights a larger cultural and political issue: the current way in which our country defines public education. The United States is an outlier among democratic nations in having chosen a uniform, as opposed to a pluralistic, structure for our school systems. The Netherlands, United Kingdom, Sweden, Denmark, and Australia are among the many nations in which governments fund, but do not necessarily operate, a wide spectrum of schools. The United States’ move to uniformity in the 19th century means that every departure from this norm, whether it be charter, private, or online, must justify itself based on superiority of outcomes. Research and policy on alternative models in this country are thus framed in inherently competitive terms. This piece will set out the dimensions of private school-choice programs, examine a recent meta-analysis, and explore ways in which our country’s cultural expectations shape our research questions and constrain public debates. (I explore international models and describe the advantages and challenges of educational pluralism in my book, No One Way to School: Pluralism and American Public Education, which Palgrave Macmillan will release in December 2016.)
What do we mean by “private school-choice programs”? Three mechanisms in the educational landscape enable families of little means to send their children to private schools.
- Scholarship programs are privately funded, means-tested (i.e., only available to low-income students) attempts to expand access to private schools, of which the Children’s Scholarship Fund is one prominent example.
- Education tax credits allow individuals or corporations to reduce their tax liability by giving a limited amount of money to state-approved scholarship funds for (mostly low-income) children to attend private schools. The credit may not be used to fund a school attended by the donor’s children. Tax credit money is not considered public, because it never goes through state treasuries.
- Vouchers are public school funds that parents may use to send their children to private schools. Most voucher programs are means-tested or school-tested—that is, only students whose families fall below a certain income level or who have attended “failing” schools are allowed to use them.
While privately funded scholarship programs existed well before the modern era, education tax credits and vouchers are new developments. The Supreme Court declared vouchers to be constitutional as recently as 2002 (Zelman v. Simmons-Harris 2002), and education tax credit programs passed federal constitutional muster only in 2011 (Arizona Christian School Tuition Organization v. Winn 2011). The majority of the country’s 50 choice programs have been created in the last five years (Wolf 2016). There has not been much time to investigate outcomes.
Much of the evidence in favor of expanding school choice has come from school-sector research, which compares the influence of public and non-public schools on students’ outcomes. Here, the research record is substantial and positive. Private high schools, particularly Catholic and independent schools, seem to do a better job than public schools of raising academic attainment and closing racial achievement gaps. This holds even after controlling for family background (Coleman, Hoffer, and Kilgore 1982), (Bryk, Lee, and Holland 1993), (Jeynes 2007), (Pennings 2011), (“Cardus Education Survey 2014: Private Schools for the Public Good” 2014). There also seems to be a private school advantage in nurturing citizenship, even after controlling for socioeconomic and other background factors (Wolf 2007), (Campbell 2008), (Campbell 2012). Many researchers have suggested that private schools benefit from their capacity to bring all stakeholders (children, teachers, principals, and parents) around a shared cultural mission and a tightly defined, often rigorous, academic program.
But none of these studies addresses the outcomes of specific choice programs. Studies that have done so investigate a wide variety of very different programs across a spectrum of grade levels and selected outcomes. Taken together, this body of research unsurprisingly shows positive, neutral, or even negative findings depending on the program(s) analyzed. For example, a recent longitudinal study of Ohio’s voucher program showed negative outcomes for students who left public schools for private schools (Figlio and Karbownik 2016). On the other hand, D.C.’s program seems to benefit students in numerous ways (Stewart and Wolf 2014). Given such disparate findings, it is challenging, if not impossible, to draw comprehensive conclusions.
For instance, the country’s first tax credit program, created in Arizona in 1997, benefits middle-income, but not low-income, students – arguably because the program does not restrict students’ eligibility, and well-resourced families are more likely than low-income families to use it (Wilson 2000), (Wilson 2002), (Melendez 2009).
Florida’s corporate tax credit program, in contrast, is available only to students who qualify for free or reduced-price lunch and who attended a public school in the year prior. Research suggests that the program boosts the test scores of urban, low-income students who left public schools and also of those who remained in them (Figlio, Hart, and Metzger 2010), (Figlio and Hart 2014). But Figlio’s research took place in an urban school district where the threat of Title I funding loss was real and the number of possible school placements large; such positive effects might not occur in suburban or rural districts. When comparing the effects of tax credits in Arizona and Florida, then, it is plausible that disparate findings relate to programmatic structure and to the locus of research (Miksic and Berner 2014).
The issue of program maturity also comes into play, as a recent dispute about Louisiana illustrates. Initial data from Louisiana’s voucher program, first implemented in the 2012-13 school year, showed a negative effect on student achievement (Louisiana Department of Education 2014). These results led scholar Mark Dynarski to question the merits of voucher programs in today’s educational milieu (Dynarski 2016) and others, including Superintendent John White, to defend Louisiana’s program based on evidence of improvement since that first year (Dreilinger 2015), (Mills and Wolf 2016), (White 2016).
Generalizations about school choice are rendered difficult, additionally, because the research examines different grade levels and different outcomes of interest – from test scores to high school graduation and college enrollment. While individual studies of particular programs are useful (Figlio and Karbownik 2016), (Litton et al. 2010), (Chingos et al. 2012), the question remains: Can we say anything, categorically, about the outcomes of choice programs?
There have been few attempts to corral the abundant, but scattered, research findings into meta-analyses that capture methodologically similar studies of comparable programs and that investigate analogous outcomes. A new research project conducted by M. Danish Shakeel, Kaitlin Anderson, and Patrick Wolf attempts to do just that (Shakeel, Anderson, and Wolf 2016).
Participant Effects: Methodology and Findings
The authors of The Participant Effects of Private School Vouchers across the Globe: A Meta-Analytic and Systematic Review (2016) aim to create an analysis that is “up-to-date, complete, and provides a specific and verifiable determination of the average effect of an intervention upon an important outcome.” They limit their analysis to studies that relied upon randomized control trials (RCTs). RCTs are considered the gold standard of education research and constitute the highest category of evidence under the Every Student Succeeds Act (United States Congress 2015). Why? Among research methods, RCTs come closest to establishing a causal relationship between intervention and effect. In this case, the RCTs compared the standardized test scores of students who sought funding for a non-public school placement and received it with those of students who sought this intervention but did not. The research team treated all program types (scholarship, voucher, tax credit) as functionally equivalent – henceforth referred to collectively as “scholarships.”
The research team limited its search to studies published since 2005. It searched three academic databases, Google Scholar, and prominent think tank archives for research with key terms such as “school voucher,” “opportunity scholarship” and “randomized control trial.” This process yielded more than 9,000 studies. A more meticulous filter followed, in which the team selected only those studies that reported on math and language scores and had been written in English. They omitted duplicates (i.e., multiple papers on the same set of data written by the same research team).
The end result: 19 studies of 11 choice programs, both within the United States (Toledo; Dayton; New York City; Washington, D.C.; Charlotte; Milwaukee; and Louisiana) and overseas (Bogota, Colombia, and Andhra Pradesh and Delhi in India).
The programs included in this meta-analysis possessed several commonalities. All of them offered access to private schools for low-income, not middle- or high-income, students, the majority of whom were also racial minorities. The recipient schools in all programs had had experience serving low-income students. The maximum scholarship value across programs equaled less than half of the per-pupil spending in the corresponding public schools.
The programs also differed in important ways. The amount of funding varied, with some (mostly publicly funded) programs offering almost full tuition and others a much smaller percentage. The control groups enrolled in different types of schools: in India, students who did not win a scholarship ended up attending state-run, neighborhood schools; in the United States, substantial minorities (between 18% and 47%) of students who did not receive scholarships attended either private schools (through other means, such as scholarships offered by the schools themselves) or public charter schools.
The team identified 262 separate program effects (or outcomes) in standardized test scores for reading and math. The team then evaluated the effect sizes according to the number of years the students had spent in the program (from 1 to 4 years), the program location (United States or overseas), and whether the scholarship had been funded publicly or privately.
Shakeel et al. found statistically significant, positive effects of school-choice programs in their global analysis. When the effects are averaged across all 11 programs, choice programs increased students’ reading scores by 0.17 standard deviations and their math scores by 0.11 standard deviations. This translates to several months of student learning per year in both subjects. Furthermore, the effect deepens with every year a student participates in the program. Finally, direct public funding, such as from vouchers, boosted student test scores more than did private funding, such as from tax credits. The authors speculate that this finding results from the larger per-pupil allocation that comes from public, rather than private, funding.
The meta-analysis’s statistically significant but modest positive effects are driven almost entirely by overseas programs. When the researchers remove Bogota’s results, as they do at every juncture, the results become, as one analyst noted, “more modest still” (Northern 2016). When only the programs in the United States are considered, reading effects are null and math effects positive but, in comparison with other educational interventions, modest indeed (0.07 standard deviations, or approximately two months’ worth of learning).
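The aggregation behind such averages can be illustrated with a short sketch. The numbers below are hypothetical, not the study’s actual per-program estimates; the point is only to show how a fixed-effect meta-analysis pools effect sizes by weighting each study by the inverse of its variance, so that precisely estimated programs count for more:

```python
# Illustrative fixed-effect meta-analysis via inverse-variance weighting.
# Effect sizes (in standard deviations) and standard errors are hypothetical,
# NOT the actual values from Shakeel, Anderson, and Wolf (2016).

def pooled_effect(effects, std_errors):
    """Return the inverse-variance-weighted mean effect size."""
    weights = [1.0 / se**2 for se in std_errors]
    weighted_sum = sum(w * e for w, e in zip(weights, effects))
    return weighted_sum / sum(weights)

# Three hypothetical programs: effect size and standard error.
effects = [0.20, 0.10, 0.05]
std_errors = [0.05, 0.08, 0.10]

print(round(pooled_effect(effects, std_errors), 3))  # 0.153
```

Note how the pooled estimate sits closer to 0.20 than a simple mean would, because the first (most precisely estimated) program carries the largest weight.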
Limitations and Future Research
Shakeel et al.’s meta-analysis offers a robust examination of the effects of choice mechanisms on students’ test scores and finds that even the most conservative estimates indicate null or positive effects. However, the meta-analysis does not (indeed, could not) resolve all of the problems that confront school-choice research generally. Such differences as the variety of grade levels involved, and different programs’ maturity levels, still obtain. Additionally, Participant Effects does not explore the issue of attrition, which in some programs was as high as 50%. How might attrition influence results? The program effects could change if either the most able or the least able students tend to leave. Is there a relationship between attrition and the programs’ structure (i.e., are students assured funding until graduation in some cases but not others?) or their quality (i.e., do some programs hold schools to higher academic standards than others?)? This study raises the additional challenge of variegated control groups. As noted, students in India and Colombia who failed to win scholarships generally enrolled in public-sector schools, whereas those in the United States who failed to win vouchers often attended private or charter schools. This complicates the global comparative capacities of the meta-analysis: if the control groups differ so substantially between countries, are we actually comparing apples to apples?
Another limitation is the very thing that makes this meta-analysis possible: its singular focus on test scores. As the authors note, test scores are only one of many important outcomes to consider. Do choice programs boost college completion? Do they cultivate civic capacities? Do they influence racial and socioeconomic diversity negatively or positively? Do teachers thrive in one sector or the other? What about parent engagement? This meta-analysis simply cannot weigh in on these matters.
All that said, how might Participant Effects inform education policy conversations?
The conclusions reached by Shakeel et al.’s study should, at the very least, allay policymakers’ concerns that choice programs, per se, harm the students involved – at least as far as the average student in the average program and test scores are concerned. As with charter schools, the actual outcomes depend on a variety of factors, including the enabling laws and accountability requirements. The findings — that positive effects are greatest when the public and non-public performance gap is large and when the funding that enables access to non-public schools covers the full cost of attendance — could certainly inform the shape of new tax-credit or voucher programs. Since average effects are less important than specific effects, however, policymakers will want to focus on those that have the greatest evidence of success. Policymakers might want to ensure, further, that any “role model program” they select mirrors local needs and concerns as closely as possible.
But Shakeel et al.’s analysis suggests, by omission, a more important avenue policymakers could take: broadening the conversation. Conspicuously absent from Participant Effects are RCTs from many of the world’s well-established democracies. This is because, as the authors note, most of the world’s democratic school systems fund non-public schools as a matter of principle. These systems are educationally plural, based on the view that public education must include a variety of schools that comport with families’ philosophical, religious, or pedagogical priorities. A few examples of what pluralism looks like: The Netherlands fully supports 36 types of non-public schools; Norway provides 85%, Denmark 85%, and Luxembourg 90% of non-public schools’ operating costs (Charles Leslie Glenn, Groof, and Candal 2012). England funds all operations and 85% of capital costs (Miller 2001). Many of Canada’s and Germany’s provinces simply operate distinctive schools outright (Berner 2012). Still other democracies (Sweden among them) allow weighted per-pupil funding to follow students to any public or private school (Böhlmark and Lindahl 2012). Such systems also constrain funded schools in significant ways, such as by requiring common curricula and assessments or by establishing standards for hiring and admissions (Vickers 2011), (McCrudden 2011).
The United States is actually an outlier among democratic peers in having created uniform, rather than plural, school systems. Indeed, the very framing of the research question at hand, whether school choice boosts student outcomes, betrays the entrenched narratives that govern American education: in a uniform system, any departure from the norm must justify public support by invoking superior test scores, college enrollments, or other measurable outcomes. This sets up policymakers, practitioners, and researchers to view the matter in inherently competitive terms, even as our school systems stretch to accommodate new models.
Countries with longstanding pluralistic systems, in contrast, do not pit one school sector against another (C. Glenn 2005), (Charles Leslie Glenn, Groof, and Candal 2012). When asked recently whether his prestigious research institute had studied “school sector effects,” Jonathan Sharples, Senior Researcher at the UK’s Education Endowment Foundation, replied that no one had ever posed that question to him before. The important question, he said, was how all schools could be strong.
This explains why, when Shakeel’s team set out to examine international studies on choice mechanisms, they found no analogs from established, pluralistic systems. Choosing from among schools is considered unremarkable, and the motivation to compare entire sectors is thus quite limited.
Herein lies an opportunity. Why do international school systems matter to policy conversations in the United States? Because they offer hope for more congenial and honest education debates at home. Knowing that competition between school sectors need not be a hallmark of democratic schooling may give policymakers space to critique education debates here and to find a way towards comity. A framework that emphasizes the importance of all of a state’s or district’s students, teachers, and schools is a good place to start. Policymakers might also consider incentivizing collaborations that build trust.
Numerous examples of collaboration could be expanded or duplicated. Federal Title I funding is substantial and flows to children in high-poverty public schools and to children from impoverished families who attend non-public, often religious, schools. Title I thus offers space for leaders from Catholic, Jewish, Montessori, charter, and district schools – to name a few – to learn from one another as they work with low-income children and families.
At a practical level, solving teachers’ professional concerns about switching sectors seems to be a vital step in relaxing the boundaries. Deborah McGriff, Managing Partner at the New Schools Venture Fund, suggests that states and districts allow teachers’ fixed-benefit pensions to follow them from traditional to charter school positions (“Panel II: Stories from the Field” 2015), a policy which Tennessee adapted for teachers who had been hired before the legislature created the Achievement School District (“Tennessee ASD Employee Handbook” 2014).
Partnerships between district and charter schools seem to be increasing, albeit slowly (Whitmire 2014), (Doyle, Holly, and Hassel 2015). Partnerships even exist between independent schools and school districts, as was the case with the Lillie May Carroll Jackson School for Girls in Baltimore (Karpay 2016).
Some states and districts will, understandably, have little interest in expanding pluralism or in creating bridges between the school sectors. They should not make this determination, however, on the assumption that choice programs are ipso facto negative for students who use them, or that departing from the traditional district model is inherently undemocratic.
The opinions expressed in this commentary are the author’s own and do not necessarily reflect the views of The Johns Hopkins University or The Johns Hopkins Institute for Education Policy.
Arizona Christian School Tuition Organization v. Winn. 2011, 563 United States Reports 125. Supreme Court of the United States.
Berner, Ashley. 2012. “School Systems and Finance.” In Balancing Freedom, Autonomy, and Accountability in Education, edited by Charles Glenn and Jan De Groof. Tilburg: Wolf Legal Publishers.
Böhlmark, Anders, and Mikael Lindahl. 2012. Independent Schools and Long-Run Educational Outcomes: Evidence from Sweden’s Large-Scale Voucher Reform. Munich: CESifo.
Bryk, Anthony S, Valerie E Lee, and Peter Blakeley Holland. 1993. Catholic Schools and the Common Good. Cambridge, Mass.: Harvard University Press.
Campbell, David. 2008. “The Civic Side of School Choice: An Empirical Analysis of Civic Education in Public and Private Schools.” Brigham Young University Law Review, no. 2: 487–524.
———. 2012. “Introduction.” In Making Civics Count: Citizenship Education for a New Generation, edited by David Campbell, Meira Levinson, and Frederick Hess, 1–15. Cambridge: Harvard University Press.
“Cardus Education Survey 2014: Private Schools for the Public Good.” 2014. Cardus.ca. http://www.cardus.ca/store/4291/.
Chingos, Matthew M, Paul E Peterson, Brookings Institution, Brown Center on Education Policy, John F. Kennedy School of Government, and Program on Education Policy and Governance. 2012. The Effects of School Vouchers on College Enrollment: Experimental Evidence from New York City. Washington, D.C.: Brown Center on Education Policy at Brookings.
Coleman, James S, Thomas Hoffer, and Sally Kilgore. 1982. High School Achievement: Public, Catholic, and Private Schools Compared. New York: Basic Books.
Doyle, Daniela, Christen Holly, and Bryan C Hassel. 2015. “Is Detente Possible?: District-Charter School Relations in Four Cities.” Washington, D.C.: Thomas B. Fordham Institute and Public Impact. http://edex.s3-us-west-2.amazonaws.com/publication/pdfs/Is_Détente_Possible_Report_Final.pdf.
Dreilinger, Danielle. 2015. “Louisiana Voucher School Scores Improve in Program’s Third Year.” Times Picayune. December 21. http://www.nola.com/education/index.ssf/2015/12/louisiana_voucher_school_score.html.
Dynarski, Mark. 2016. “On Negative Effects of Vouchers.” 18. Evidence Speaks Reports. Washington, DC: Brookings. http://www.brookings.edu/research/reports/2016/05/26-on-negative-effects-of-school-vouchers-dynarski.
Elacqua, G, D Contreras, and F Salazar. 2008. “Scaling Up in Chile: Networks of Schools Facilitate Higher Student Achievement.” EDUCATION NEXT 8 (3): 62–68.
Figlio, David, and Cassandra Hart. 2014. “Competitive Effects of Means-Tested Vouchers.” American Economic Journal: Applied Economics 6 (1): 133–56. doi:10.1257/app.6.1.133.
Figlio, David, Cassandra M. D. Hart, and Molly Metzger. 2010. “Who Uses a Means-Tested Scholarship, and What Do They Choose?” Economics of Education Review, Special Issue in Honor of Henry M. Levin, 29 (2): 301–17. doi:10.1016/j.econedurev.2009.08.002.
Figlio, David, and Krzysztof Karbownik. 2016. “Evaluation of Ohio’s EdChoice Scholarship Program: Selection, Competition, and Performance Effects.” Columbus, OH: Thomas B. Fordham Institute. https://edexcellence.net/publications/evaluation-of-ohio%E2%80%99s-edchoice-scholarship-program-selection-competition-and-performance.
“Fiscal Year 2015 Budget: Summary and Background Information.” 2014. United States Department of Education. http://www2.ed.gov/about/overview/budget/budget15/summary/15summary.pdf.
Glenn, Charles. 2005. “What the United States Can Learn from Other Countries.” In What Americans Can Learn from School Choice in Other Countries, edited by James Tooley and David Salisbury, 79–90. Washington, D.C.: Cato Institute.
Glenn, Charles L. 2011. Contrasting Models of State and School: A Comparative Historical Study of Parental Choice and State Control. 1st ed. Continuum.
Glenn, Charles Leslie. 1988. The Myth of the Common School. Amherst: University of Massachusetts Press.
Glenn, Charles Leslie, and Ester J De Jong. 1996. Educating Immigrant Children: Schools and Language Minorities in Twelve Nations. New York: Garland Pub.
Glenn, Charles Leslie, Jan de Groof, and Cara Stillings Candal. 2012. Balancing Freedom, Autonomy and Accountability in Education. Tilburg: Wolf Legal Publishers.
Jeynes, William H. 2007. “Religion, Intact Families, and the Achievement Gap.” Interdisciplinary Journal of Research on Religion 3 (3).
Karpay, Jeannette. 2016. “Lillie May Carroll Jackson School for Girls: Baltimore’s Public-Private School Partnership.” Johns Hopkins Institute for Education Policy. June 3. http://education.jhu.edu/edpolicy/commentary/4.
Litton, Edmundo F., Shane P. Martin, Ignacio Higareda, and Julie A. Mendoza. 2010. “The Promise of Catholic Schools for Educating the Future of Los Angeles.” Catholic Education: A Journal of Inquiry and Practice 13 (3): 350–67. http://eric.ed.gov/?id=EJ914874.
Louisiana Department of Education. 2014. “Louisiana Scholarship Program Annual Report 2013-14.” Baton Rouge, LA: Louisiana Department of Education. http://www.louisianabelieves.com/docs/default-source/school-choice/2013-2014-scholarship-annual-report.pdf.
Lubienski, Christopher. 2016. “Review of A Win-Win Solution and the Participant Effects of Private School Vouchers Across the Globe.” National Education Policy Center. Accessed July 2. http://nepc.colorado.edu/thinktank/review-meta-analysis.
McCrudden, Christopher. 2011. “Religion and Education in Northern Ireland: Voluntary Segregation Reflecting Historical Divisions.” In Law, Religious Freedoms and Education in Europe, edited by Myriam Hunter-Hénin. Farnham, Surrey, England; Burlington, VT: Ashgate Pub.
Melendez, Paul Louis. 2009. “Do Education Tax Credits Improve Equity?” Doctoral Dissertation in Educational Leadership, Tucson, AZ: University of Arizona. http://arizona.openrepository.com/arizona/handle/10150/194044.
Miksic, Mai, and Ashley Berner. 2014. “What Are Education Tax Credits, and Why Should We Care?” CUNY Institute for Education Policy. February 21. http://ciep.hunter.cuny.edu/education-tax-credits-care/.
Miller, Helena. 2001. “Meeting the Challenge: The Jewish Schooling Phenomenon in the UK.” Oxford Review of Education 27 (4): 501–13. http://www.tandfonline.com/doi/abs/10.1080/03054980120086202.
Mills, Jonathan N., and Patrick Wolf. 2016. “The Effects of the Louisiana Scholarship Program on Student Achievement after Two Years.” School Choice Demonstration Project. Fayetteville, AR: University of Arkansas Department of Ed Reform. http://www.uaedreform.org/the-effects-of-the-louisiana-scholarship-program-on-student-achievement-after-two-years/.
Northern, Amber. 2016. “The Effects of Private School Vouchers across the Globe.” Thomas B. Fordham Institute. May 18.
Odhekar, Prachi Deshmukh. 2012. “India.” In Balancing Freedom, Autonomy, and Accountability in Education, edited by Charles Glenn and Jan De Groof. Tilburg: Wolf Legal Publishers.
“Panel II: Stories from the Field.” 2015. In AEI. Washington, DC: American Enterprise Institute. https://www.aei.org/events/the-state-of-entrepreneurship-in-k-12-education/.
Pennings, Ray. 2011. Cardus Education Survey. Hamilton, Ont.: Cardus.
Rodriguez, Jorge Alberto Mahecha, and Luis Enrique García De Brigard. 2012. “Colombia.” In Balancing Freedom, Autonomy, and Accountability in Education, edited by Charles Glenn and Jan de Groof. Tilburg: Wolf Legal Publishers.
Scafidi, Benjamin. 2012. “The Fiscal Effects of School Choice Programs on Public School Districts.” Indianapolis, IN: Friedman Foundation for Educational Choice. http://www.eric.ed.gov/ERICWebPortal/detail?accno=ED529881.
Seider, Scott. 2012. Character Compass: How Powerful School Culture Can Point Students Toward Success. Harvard Education Press.
Stewart, Thomas, and Patrick J Wolf. 2014. The School Choice Journey: School Vouchers and the Empowerment of Urban Families. New York, NY: Palgrave Macmillan.
“Tennessee ASD Employee Handbook.” 2014. Memphis, TN: Tennessee Achievement School District. http://achievementschooldistrict.org/wp-content/uploads/2015/01/2014-15-ASD-Employee-Handbook.pdf.
Tooley, James. 2001. “If India Can, Why Can’t We?” The New Statesman, September.
———. 2009. The Beautiful Tree: A Personal Journey Into How the World’s Poorest People Are Educating Themselves. 1st ed. Cato Institute.
United States Congress. 2015. “S.1177 – 114th Congress (2015-2016): Every Student Succeeds Act.” December 10. https://www.congress.gov/bill/114th-congress/senate-bill/1177.
West, Martin R. 2016. “Schools of Choice: Expanding Opportunity for Urban Minority Students.” Education Next.
White, John C. 2016. “Give Vouchers Time: Low-Income Families Need as Many Quality School Options as Possible.” The Brookings Institution. Accessed June 24. http://www.brookings.edu/research/papers/2016/06/23-give-vouchers-time-low-income-families-need-as-may-quality-school-options-as-possible-white.
Whitmire, Richard. 2014. “Inside Successful District-Charter Compacts.” Education Next 14 (4).
Wilson, Glen Y. 2000. “Effects on Funding Equity of the Arizona Tax Credit Law.” Education Policy Analysis Archives 8 (0): 38. doi:10.14507/epaa.v8n38.2000.
———. 2002. “The Equity Impact of Arizona’s Tax Credit Program: A Review of the First Three Years (1998-2000).” Research Report. Tempe, AZ: Arizona State University. http://epsl.asu.edu/epru/documents/EPRU%202002-110/epru-0203-110.htm.
Wolf, Patrick J. 2007. “Civics Exam: Schools of Choice Boost Civic Values.” Education Next 7 (3): 67–72. http://www.eric.ed.gov/ERICWebPortal/detail?accno=EJ767503.
———. 2016. “School Choice Boosts Test Scores.” Education Next. May 11. http://educationnext.org/school-choice-boosts-test-scores/.
Shakeel, M. Danish, Kaitlin P. Anderson, and Patrick J. Wolf. 2016. “The Participant Effects of Private School Vouchers across the Globe: A Meta-Analytic and Systematic Review.” EDRE Working Paper 2016-17. Fayetteville, AR: University of Arkansas.
Zelman v. Simmons-Harris. 2002, 536 United States Reports 639. Supreme Court of the United States.
It is worth noting that Shakeel et al. spend quite a bit of time criticizing ten previous meta-analyses for omitting important research studies and/or for not selecting similarly rigorous research studies.
These are the Intent-to-Treat (ITT) estimates, which provide the most cautious estimates because many of the students who did not get the voucher/scholarship actually went to private school. Such students are still in the control group for the ITT estimates, but are in the treatment group for the treatment on the treated (TOT) estimates.
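The ITT/TOT distinction in this footnote can be illustrated numerically. The figures below are hypothetical, not drawn from any of the studies reviewed; they sketch the standard Bloom adjustment, under which the TOT estimate equals the ITT estimate divided by the difference in private school enrollment rates between the offered and non-offered groups:

```python
# Hypothetical illustration of ITT vs. TOT estimates (Bloom adjustment).
# ITT: compare everyone offered a scholarship with everyone not offered one.
# TOT: rescale the ITT effect by the gap in actual private school take-up.
# All numbers below are invented for illustration only.

itt_effect = 0.06        # hypothetical ITT effect, in standard deviations
takeup_offered = 0.80    # share of offered students attending private school
takeup_control = 0.30    # share of control students attending anyway

tot_effect = itt_effect / (takeup_offered - takeup_control)
print(round(tot_effect, 2))  # 0.12
```

Because many control-group students attend private school anyway, the take-up gap is well below 1, and the TOT estimate is correspondingly larger than the more cautious ITT estimate.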
E.g., “The offer of a [scholarship] had a null effect on students [in reading] after one year, small impacts after two or three years (0.04 and 0.05 standard deviations, respectively), and a somewhat larger impact after four or more years (0.24 standard deviations),” p. 35.
Colombia’s PACES program (the Plan for the Expansion of Access in Secondary Education) is the largest choice program in the world. It has been studied extensively and shown to have produced strong, positive effects, particularly during the 1990s (Rodriguez and De Brigard 2012).
Of course, policymakers will also need to consider their states’ constitutional constraints on funding for non-public and particularly religious schools. State constitutions vary substantially, and they, not the federal constitution, constrain school choice mechanisms. Finally, because vouchers take resources from school districts, policymakers will want to run analyses of fixed versus variable costs to determine feasibility (Scafidi 2012).
The reasons we do not have plural systems in the United States – although we used to – are historical and cultural (Charles Leslie Glenn 1988), (Charles L. Glenn 2011). Historical and cultural biases in the 19th century moved us to a vaguely Protestant uniform system, and constitutional requirements rendered that system secular, beginning in the 1960s.
Developing countries are experimenting with vouchers because their state-run educational systems have not provided equal access to good schools, whereas private, even unregistered, schools in these countries often boost student test scores significantly (Tooley 2001), (Tooley 2009). Chile’s voucher program (1981) allowed schools to take students’ academic records into consideration, and to interview parents, during the admission process. Both measures correlate with socioeconomic status; therefore, Chile’s original program produced gains for students who were already advantaged. In 1994, Chile’s government struck down all selection processes in the lower grades and parent interviews in the upper grades. The country’s voucher program, which now educates 39% of Chile’s students, no longer privileges the middle class – and Chile outperforms most of its neighboring countries on international exams (Elacqua, Contreras, and Salazar 2008). India has a long history of uniform, state-provided schooling, which is only now beginning to differentiate for the reasons above – even in rural areas (Odhekar 2012).
This is not the case during periods of change. Alberta, Canada’s expansion of choice in the 1990s was contested; the Netherlands’ battle for plural funding occupied most of the 19th century; and England’s newest school type, the academy, has engendered quite a lot of controversy.
 In fiscal year 2015, Title I constituted $14.4 billion out of the US Department of Education’s total discretionary budget of $68.6 billion (including Pell grants) (“Fiscal Year 2015 Budget: Summary and Background Information” 2014).