Volokh Ranks the Justices on Free Speech
Eugene Volokh has a very interesting post entitled Which Justices Have the Broadest (and Narrowest) Views of Free Speech?. Kennedy is broadest; Breyer is narrowest. Reading Eugene's nuanced and intelligent post got me thinking about the assumption, standard in political science, that a unidimensional left-right ideology line captures almost all of the variance among the justices. Is free speech an exception? Any political scientists out there willing to enlighten me?
Baude on Thomas & Scalia Split
The New Republic Online today has a column by Will Baude (Crescat Sententia) entitled Brothers in Law. Here's a taste:
Given the widely held perception of Thomas as an unserious justice who leans on Scalia for intellectual guidance, it probably surprised many Court watchers to see the justices parting ways on two key decisions during the last week--yesterday's decision striking down the Child Online Protection Act and Monday's decision in Hamdi v. Rumsfeld. But it shouldn't have come as a surprise at all. That's because the widely held myths about Thomas are largely false: He is neither a knee-jerk conservative nor Scalia's yes-man. Rather, he has carved out a distinct jurisprudence as an advocate of textualism, a style of reading laws and constitutions in which words are taken at face value rather than interpreted in historical context or mitigated by practical considerations. There are notable ideological differences between Scalia and Thomas. Scalia, for instance, takes a narrower view of free speech and is less willing to reverse previous Court decisions, even when it is clear that they departed from the original intentions of the Constitution's framers. Thomas, by contrast, sees himself as a staunch defender of the classically liberal vision of the country's founders.
Hasen on Larios
Be sure to check out Rick Hasen's What Does Today's Summary Affirmance in Larios v. Cox mean? over at Election Law Blog. This is the one person, one vote case in which the lower court had struck down a redistricting plan for state legislators on the theory that political gerrymandering was illicit. Here's a taste from Rick's post:
The Supreme Court's summary affirmance today in Larios supports the result in the lower court, but not necessarily its reasoning. Thus, the case stands for the proposition that there is no 10% safe harbor any longer; state and local redistricting plans can be struck down even if the deviations are under 10%.
Using as a point of reference the Ninth Circuit's assertion in Newdow v. United States Congress that "[a] profession that we are a nation 'under God' is identical, for Establishment Clause purposes, to a profession that we are a nation 'under Jesus,' a nation 'under Vishnu,' a nation 'under Zeus,' or a nation 'under no god,'" this essay attempts to disentangle three themes that the modern discourse of religious freedom often conflates, with baneful effect. We can call these the "public secularism" principle, the "neutrality" principle, and the "nonsectarian" principle. The essay argues that the first two of these principles have exercised a pernicious influence over First Amendment jurisprudence, but that the third, if it could be extracted so that its own distinctive virtues could be appreciated, might provide valuable mooring for what is at present a deeply disoriented discourse.
Alexander & Schwarzschild on Grutter
Larry Alexander and Maimon Schwarzschild (both of the University of San Diego School of Law) have posted Grutter or Otherwise: Racial Preferences and Higher Education (Constitutional Commentary, Vol. 21, 2004) on SSRN. Here is the abstract:
Last year's Supreme Court decisions on affirmative action, Gratz and Grutter, are dubious as constitutional law, bringing to mind what John Hart Ely said about Roe v. Wade: "[I]t is not constitutional law, and gives almost no sense of an obligation to try to be." There was at best a cosmetic difference between the University of Michigan undergraduate school's crude "20-points-extra for minority applicants" (which the Court struck down) and the Law School's "holistic" and disingenuous preferences (which the Court upheld). The idea that the Law School has a "compelling state interest" in these racial and ethnic preferences is utterly inconsistent with the Court's suspect classification-compelling interest jurisprudence now extending back over many decades.
Yet the Grutter decision does not require public colleges and universities to have racial preferences in admissions, much less in faculty hiring or promotions. The decision merely permits admissions preferences. So the question is thrown back to the universities, or to the state legislatures, to decide about preferential affirmative action as a matter of policy. And in this article, we suggest that racial preferences, at least in higher education, have proved very bad as a matter of policy.
First, if you are going to give racial preferences, you have to identify people by race. It is not only invidious for the government to do that, it is increasingly impossible as people marry and have children outside the racial "affirmative action" boxes.
Second, racial preferences are bad for students and for educational institutions themselves. Preferences dilute admissions standards that, while far from perfect, are much better than "race" as admissions criteria. Preferential admissions tend to lower educational standards too, as schools try to disguise the educational gap between those admitted preferentially and those admitted by standard criteria. One of the worst outgrowths of racial preferences is that students admitted through such preferences are systematically mismatched educationally. A generation of minority students, who would have done well, or certainly no worse than average, at colleges where they would have been admitted on their merits, have instead been "cascaded" upwards to colleges where their preparation is significantly below average and where, entirely predictably, they do poorly.
Preferences, moreover, lead to identity politics and racial segregation on campus; they promote nihilism about academic quality; and they create a culture of dishonesty which inevitably spills over into many aspects of educational life. This article urges public - and private - colleges and universities to hold students and faculty of whatever race or ethnicity to the same high standards, and to reject the educational politics of racial and ethnic division which are implicit in preferential affirmative action.
Jinks on the Law of War
Derek Jinks (Arizona State University College of Law) has posted Protective Parity and the Law of War (Notre Dame Law Review, Vol. 79, 2004) on SSRN. Here is the abstract:
Traditionally, protective schemes in the law of war are tightly coupled to rigid status categories. The contours of these status categories (and the content of corresponding protective schemes) reflect the dual normative commitments of this body of law: military necessity and humanitarianism. Formal protection varies along a number of axes (including combatant status, nationality, territory, and the character of the conflict) because it is thought that these factors roughly track the vulnerability of and the security challenges posed by specific status groups. In early law of war treaties, specific status categories are defined in terms that encourage protection-seeking states (and at times individuals) to orient their behavior in ways that promote the objectives of humanitarian law. Protection, in these treaties, is a carrot for rule-regarding behavior - harsh, summary treatment at the hands of the enemy, the stick. Such an approach, by design, includes coverage gaps.
Beginning with the 1949 Geneva Conventions, this understanding of status has been in decline. Over the last half century, protective schemes have converged and coverage gaps have closed. From the human rights perspective, these developments are all to the good. The humanization of humanitarian law reflects the progressive trajectory of international law in which universal human rights trump parochial state interests. From the traditionalist perspective, the law of war has lost its compass. Protection of unlawful combatants (1) undermines the humanitarian ambitions of the law of war by compromising the protection of innocent civilians; and (2) undermines political and institutional support for the law of war by imposing on states obligations that are inconsistent with various security imperatives. Both views are flawed. Protection should, contra the human rights view, accommodate the realities of the battlefield. On the other hand, humane treatment of the enemy, irrespective of pre-capture conduct, furthers the military objectives of the capturing state.
My argument is that humanitarian protection in time of war should not vary by detainee status category - what I will call protective parity. The paper has a descriptive and a prescriptive dimension. Through an analysis of the legal situation of unlawful combatants, I illustrate that (1) protective schemes are converging; and (2) although the protective significance of POW status is declining, there are some persistent gaps in coverage. The unique protective significance of POW status (and the claims that justify this extra increment of protection) suggests that POWs are systematically over-protected (even if only to a modest extent) and unlawful combatants are systematically under-protected. To make this case, I offer a cluster of offensive claims and one defensive claim. On the offensive side, I argue that various claims for expanding or contracting humanitarian protection do not track status categories. In this way, the claims that undergird these ostensibly competing schools of thought support protective parity. Consider the following related points. If protective schemes compromise legitimate security interests (think of the policy arguments advanced by the United States to justify its treatment of the detainees in Cuba), then some status categories (e.g., POWs) are systematically over-protected. That is, these security-based claims, if valid, would apply irrespective of whether the detainees were properly classified as POWs or not. If humane treatment of the enemy increases battlefield effectiveness (because poor treatment discourages surrender, encourages reprisals, decreases troop morale, and decreases political support for the war effort), then some status categories (e.g., unlawful combatants) are systematically under-protected. On the defensive side, I argue that protective parity is consistent with the principle of distinction. 
Even if irregularization undermines distinction, the question is how best to encourage fighters to distinguish themselves from the civilian population. I maintain that protective status categories are an inefficient way to incentivize individual combatants because these categories necessarily trade on collective considerations - such as the organizational characteristics of the fighting force. The rule of distinction would be better served by an individualized war crimes approach that accorded all fighters substantial humanitarian protection and punished (in accord with basic requirements of due process) individual bad actors.
Merges on the Public Domain
Robert P. Merges (University of California, Berkeley - School of Law (Boalt Hall)) has posted A New Dynamism in the Public Domain (University of Chicago Law Review, Vol. 71, pp. 183-203, 2004) on SSRN. Here is the abstract:
Many believe intellectual property has overreached, and that policymakers must respond. In this essay, I argue that the critique may have merit, but private parties are in some cases taking matters into their own hands. Firms and individuals are increasingly injecting information into the public domain with the explicit goal of preempting or undermining the potential property rights of economic adversaries. Biotechnology firms invest millions of dollars in public domain gene sequence databases, to prevent hold-ups by firms with patents on short gene sequences. Major software firms fight entrenched rivals by investing millions of dollars in contributions to open source operating systems. In both cases, property-preempting investments (PPIs) are made to offset the effects of competitors' property rights. Individuals and nonprofits are joining in too, with initiatives such as the Creative Commons project. All of these major private investments in the public domain reveal a self-correcting feature of the intellectual property system that has been overlooked until now, and signal that public lawmaking is not the only arena in which the excesses of intellectual property may be addressed.
"The political liberty of the subject," said Montesquieu, "is a tranquility of mind arising from the opinion each person has of his safety. In order to have this liberty, it is requisite the government be so constituted as one man needs not be afraid of another." The liberty of which Montesquieu spoke is directly promoted by apportioning power among political actors in a way that minimizes opportunities for those actors to determine conclusively the reach of their own powers. Montesquieu's constitution of liberty is the constitution that most plausibly establishes the rule of law. Montesquieu concluded that this constitution could best be achieved, and had been achieved in Britain, by assigning three fundamentally different governmental activities to different actors. He was wrong. His mistaken conclusion rested on two errors. The first of these was theoretical; the second, both empirical and theoretical.
First, Montesquieu's analysis was informed by the early eighteenth-century orthodoxy that no sovereign power could viably be divided. Montesquieu rightly saw that liberty from the arbitrary exercise of power would be served by apportioning power among multiple actors, but he thought the apportionment sustainable only if along essentialist lines. Lawmaking could be separated from law-executing, but neither of those kinds of power could durably be divided internally. The extent to which actors participated in the exercise of more than one kind of power Montesquieu viewed as a protective qualification to a primary essentialist separation. He failed to see that involving multiple actors in every exercise of power, albeit by permitting actors' individual involvement in the exercise of more than one kind of power, is the true protection against arbitrariness. Checks and balances, not essentialist separation of activities, prevent actors from conclusively determining the reach of their own powers. The critical liberty-promoting criterion for separation is not whether powers differ in kind, but whether apportionment will prevent actors from conclusively determining the reach of their own powers.
Second, Montesquieu did not appreciate the nature of the English common law and the mechanism that its doctrine of precedent established for authoritative judicial exposition of existing law. That empirical error caused him to distinguish and trivialize the English judicial function as merely the ad hoc determination of disputed facts. Consequently, Montesquieu failed to recognize the lawmaking character of English judicial exposition.
This essay analyzes implications of Montesquieu's mistakes for modern claims, both in Britain and in the United States, that liberty and the rule of law are promoted by separating power in certain contexts. In particular, this essay questions the British Government's recent claim that the values underlying separation of powers theory call for removing ultimate appellate jurisdiction from the House of Lords. It also traces Montesquieu's influence on the American founders' attempt to separate power along essentialist lines, and considers some sub-optimal consequences of that attempt, including the nondelegation quandary and the emergence of an unchecked judicial lawmaker.
Director of Research
Consumer Federation of America
Policy Leaders Identify Open Architecture as the Key to Internet’s Broadband Future
New book warns that FCC policy shift jeopardizes innovation and economic growth
WASHINGTON – In a book released today, leaders in Internet policy and other telecommunications experts explore new, technology-neutral approaches to preserving open communications networks and the freedom of the Internet. Open Architecture as Communications Policy details how network neutrality is imperative for the future of an innovative high-speed Internet, cautioning regulators not to impose legacy telecommunications regulation on Internet Protocol-based applications.
The book, edited by Mark N. Cooper and published by the Center for Internet and Society (CIS) at Stanford Law School, grew out of a forum on Capitol Hill cosponsored by CIS and the Consumer Federation of America.
“The book brings together many of the best minds on the convergence of communications technology and public policy and some of the strongest advocates of open architecture as the underpinning of the success of the Internet,” Cooper said.
“This book is especially relevant now, as the FCC attempts to reverse its 35-year commitment to ensuring open, nondiscriminatory interconnection and carriage of data services on the nation’s telecommunications networks. Open architecture at the heart of the Internet and telecommunications networks created an environment for dynamic innovation and the widespread adoption of the Internet.
“With two cases pending Supreme Court review, a dozen proceedings ongoing at the FCC, and talk of a rewrite of the 1996 Telecom Act in the air, the future architecture of the Internet hangs in the balance. It is critical for policy makers to have a full appreciation for the importance of principles of open architecture as public policy.”
The book combines several classic works on open architecture and public policy with new essays and empirical studies from John W. Butler, Vinton G. Cerf, Earl W. Comstock, Mark N. Cooper, Michael J. Copps, Robert E. Kahn, Mark A. Lemley, Lawrence Lessig, Richard S. Whitt, and Timothy Wu.
The book is available for download at no charge under a Creative Commons license at:
Requests for review copies of Open Architecture as Communications Policy can be sent to Mark Cooper at firstname.lastname@example.org.
Paper copies of the book are available from Amazon.
Open Architecture as Communications Policy
Mark N. Cooper, Editor
Open architecture is the design principle on which the success of the Internet and information technologies rests. In this book, founders of the Internet and its most ardent defenders describe how open architecture was implemented in the end-to-end principle of the Internet, open interfaces of the personal computer, and nondiscriminatory interconnection and carriage for communications networks.
Empirical studies examine the convergence of technology and public policy that created a dynamic environment for decentralized innovation, rapid technological change, and strong economic growth. The digital communications platform became a general-purpose technology with a transformative power equaling or exceeding the great industrial technologies of a century earlier – railroads, electricity, and telecommunications.
Legal analyses demonstrate that the Federal Communications Commission inexplicably turned its back on the thirty-five year record of success of its Computer Inquiries, which ensured nondiscriminatory access to communications services. Case studies document the chill on innovation that results when owners of advanced telecommunications networks are allowed to close the platform, exclude service providers, restrict applications and limit the availability of network functionalities.
The book explores new, technology-neutral approaches to preserving both open communications networks and the freedom of the Internet.
FCC Commissioner Michael Copps set the policy context for the Capitol Hill symposium that provided the impetus for the book (Broadband Technology Forum: The Future of the Internet in the Broadband Age, March 26, 2004) with a challenge for the “Internet in the broadband age… We need to make sure that it continues to foster freedom and innovation, that the openness that is its hallmark has a future every bit as bright as its past.”
Robert E. Kahn and Vinton G. Cerf, What Is the Internet (and What Makes It Work)?, INTERNET POLICY INSTITUTE (1999, revised 2004), provide a brief discussion of the architecture of the Internet through the chronology of the development of its fundamental technologies. Both of the authors were at the center of the creation of the seminal technologies. They are keenly aware of the role of institutions and public policies in the creation of the Internet.
Mark A. Lemley, Professor of Law at the University of California at Berkeley, and Lawrence Lessig, Professor of Law at Stanford Law School and founder of the Center for Internet and Society, contribute a paper on the design principle of the Internet: The End of End-to-End: Preserving the Architecture of the Internet in the Broadband Era, UCLA LAW REVIEW (2001). Not only does it explain how the design principle operates to promote innovation, but it also directly refutes many of the economic arguments made by those who would abandon, or allow the network facility owners to abandon, the end-to-end principle and open communications networks.
A study by Mark N. Cooper, Making the Network Connection, takes a broad view of the impact of the Internet. It uses network theory and recent analyses of technological change to reinforce the long-standing claim that the open architecture of the Internet represents a fundamental change and improvement in the innovation environment, and it concludes with an examination of the role of Internet Service Providers in the spread of the Internet. In a second chapter, entitled Anticompetitive Problems of Closed Communications Platforms, which draws from an earlier paper, Open Communications Platforms: Cornerstone of Innovation and Democratic Discourse in the Internet Age, JOURNAL OF TELECOMMUNICATIONS AND HIGH TECHNOLOGY LAW (2003), Cooper demonstrates the increased possibility of anticompetitive practices by firms that dominate key points of the digital communications platform. The chapter links the potential harm back to network theory by presenting a case study of the elimination of Internet Service Providers.
Timothy Wu, University of Virginia Law Professor, provides a detailed study of the customer contract provisions that threaten or infringe on consumers' freedom to use the Internet and applications in a paper entitled Network Neutrality, Broadband Discrimination (first published in JOURNAL OF TELECOMMUNICATIONS AND HIGH TECHNOLOGY LAW (2003)), which also attempts to precisely define the characteristics of the Internet that should be preserved. Wu also includes a new analysis (from Broadband Policy: A User’s Guide, JOURNAL OF TELECOMMUNICATIONS AND HIGH TECHNOLOGY LAW, forthcoming) that reviews several aspects of the current policy debate and offers a recommendation of nondiscrimination. Lawrence Lessig joins Wu in a formal proposal for network neutrality that was presented to the Federal Communications Commission (FCC) in an ex parte filing at the FCC.
Earl W. Comstock and John W. Butler, partners in the same firm, combine legal analysis from Access Denied: The FCC’s Failure to Implement Open Access as Required by the Communications Act, JOURNAL OF COMMUNICATIONS LAW AND POLICY (2000), with the legal brief filed on behalf of Earthlink in the second case heard by the Ninth Circuit Court of Appeals involving broadband (Brand X v. FCC, 345 F.3d 1120 (9th Cir. 2003)). Comstock and Butler show why the FCC has had so much trouble convincing the Ninth Circuit Court of Appeals that its approach to deregulating advanced telecommunications networks fits under the statute. Twice the Court found that the obligations of nondiscrimination and interconnection of Title II of the Communications Act apply to cable modem service. The detailed recounting of the history and purpose of the Computer Inquiries that runs through the legal arguments is a strong reminder that the FCC adopted the correct policy over 35 years ago when it recognized the fundamental importance of nondiscriminatory access to the essential telecommunications function of the network on which applications and services ride.
The book concludes with a discussion of a horizontal leap forward, combining a paper by Richard Whitt of MCI that formed the basis for Vinton Cerf’s comments to the forum and a letter from Cerf to Chairman Powell and Secretary of Commerce Evans. The paper picks up and develops the distinction between transmission and applications as it is being discussed in regard to contemporary digital networks. Whitt attempts to synthesize the emerging thinking about reforming regulation of communications by moving from the old vertical view, in which industries are regulated because of their underlying technologies or the services they provide, to a horizontal view, in which similar functionalities are treated similarly across networks, regardless of which technology is used. Open architecture at the physical or transmission layer is the key policy advocated in the paper.