Wednesday, February 29, 2012

Made of Paper: Moby Dick


About thirty-five years ago, I spent a miserably cold winter in upstate New York. Of course, any winter in upstate New York is likely to be miserable for a native of Louisiana. During that time, I consoled myself with what may be the best novel ever written, Herman Melville's Moby Dick. It was more than a book. It was a revelation. There are a few works of fiction that make one feel that their authors must have been taking dictation from some voice greater than that of a simple human being. This story of the wandering outcast Ishmael on the ship guided by the madly questing Ahab has the quality of a parable, but a parable with many and changing points. It contains a world of literary forms, with extended soliloquies on philosophical subjects, tales within tales, and essays on cetology, all bound together by a grim fatalism.

Melville's earliest success came as a writer of adventure stories, based on his own sea voyages and life among South Sea cannibals. His most successful early book, Typee, published in 1846, told of his experiences among the cannibal islanders of the Marquesas after he deserted his ship. Omoo, which came out a year later, is a fictionalized version of his voyage to Tahiti, a mutiny there on his ship, and his observations of life on that island. Already, with Omoo, Melville was becoming more than a writer of adventure stories, which may have been why it was less popular than Typee with the general reading public.

Mardi, published in 1849, is a curious book. It starts out as another sea voyage, but about midway through, the countries visited by the main characters turn symbolic and one-dimensional, and it becomes an allegory about life in America. It is as if Melville turned from the sea-tale to the philosophical narrative, and the two don't seem to fit together. But read as a first try at what he was to accomplish two years later in Moby Dick, it comes across as Melville's striving for a novel that would be more than a novel.

Moby Dick did not sell well, and Melville became one of those writers largely ignored by his own era and discovered by posterity, although who knows what that means in a posterity that may cease reading books. I left upstate New York as soon as spring arrived, having decided that I wanted to take a long bicycling trip through the rural parts of the eastern United States to return to New Orleans. I had to travel light, so I carried only one book in my panniers: The Portable Melville.

Tuesday, February 28, 2012

Money Motivates Students

Should we pay people to act in their own best interest? That question occurred to me when looking at a report on the Advanced Placement Incentive Program (APIP). This program offers cash incentives to students and teachers in inner-city minority schools for passing scores on Advanced Placement exams, on the grounds that passing will make students more likely to succeed in college and place them in positions to earn higher incomes later in life. According to the analysis of C. Kirabo Jackson of Northwestern University, participants in the APIP were more likely to graduate from college and to earn higher wages later in life.
I’m not surprised by the fact that you can motivate people by paying them. But to the extent that motivation can produce better outcomes for students, this must mean that poor outcomes were not the result of barriers or discrimination but of lack of motivation. So, those who lack motivation to achieve should be paid to get ahead. Why? So that they will be paid even more throughout their lives.
The report does say that 70% of the funding for APIP comes from private donors. But that still leaves 30% paid by school districts. Private donors, of course, can donate their money however they like. But it seems strange to me that taxpayers are paying under-motivated students to improve their own lives, especially when (a) there are plenty of jobs at the bottom of the labor market that need to be filled, and (b) some of those taxpayers may be paying to motivate other people’s children to compete for a limited set of economic opportunities with those taxpayers’ own highly motivated children.
So, the author looks at one important question about this kind of program: does it work? But there is another, more essential question: even if it does work, does it make sense?

Monday, February 27, 2012

Riots in Afghanistan

The latest news from Afghanistan reports that rioting over the inadvertent burning of Korans by U.S. forces has begun to wane, but that tensions remain high in the country. But the attacks on our forces should lead us to ask some serious questions about what we are doing there. We are clearly not in that country by popular demand.
The U.S. invaded Afghanistan because its government, such as it was, provided a base to Osama bin Laden and his al Qaida organization. Now, bin Laden is dead. Al Qaida is less of a coherent group than a name for amorphous anti-Western radicalism. We will not root out anti-American sentiment in Afghanistan, no matter how long we stay there. We prefer the corrupt regime of Hamid Karzai to that of the Taliban, but the extent of his control over the country is limited, even with our support. When we leave, as we will have to do sooner or later, Karzai will either be replaced by the Taliban or he will come to terms with them.
It is extremely unlikely that we can hope to leave a stable pro-American government in power, as we managed to do in West Germany and Japan following our occupation of those countries. Both were state societies, run by governmental bureaucracies and populated by people accustomed to bureaucratic governance. After lopping off the objectionable leaderships, we could replace them with leaders we found more congenial. Afghanistan is a tribal society. Even the ruthless Taliban could hold sway only in regions where they could coordinate with local leaders, and then only loosely. As the recent riots show, the Afghan version of Islam is not just a religious culture different from ours; it is a different social and political culture. In sum, we are not going to establish stability in Afghanistan. If we define victory as creating an order that will not collapse in turmoil as soon as we leave, then we are setting ourselves up for defeat.

Sunday, February 26, 2012

Consequences of the Cold War

Yesterday, I commented on the new biography of George F. Kennan by John Lewis Gaddis. Kennan interests me because he used broad historical thought to understand and respond to contemporary international issues. In particular, as Gaddis tells us, Kennan’s analysis of the Soviet Union made use of Edward Gibbon’s argument about the Roman Empire, that maintaining an empire strains the political resources of an imperial power.  While Kennan is often heralded as the architect of containment theory, he also became concerned that efforts to contain Communist expansion around the world would ultimately strain the resources of the United States.
As we look back at the Cold War, we may want to consider how the strains and challenges of that era re-shaped American government and society, and to think about how some of our current problems may be consequences of that re-shaping. War generally tends to concentrate power in a central government: as Randolph Bourne observed, “war is the health of the state.” The long global struggle with the Soviet Union, immediately following the hot war of the 1940s, moved more control from state and local governments toward Washington, D.C. As I’ve argued in a number of places, expanding federal control over public education had its roots in the effort to mobilize the socialization and technical training of Americans to face a foreign enemy. I think one can connect the increase in domestic centralization in all aspects of American life to the projection of American power around the world; the emergence of government as the largest employer in the nation, with concomitant increases in expenditures, was among the results.
We won the Cold War essentially by outspending the Soviets. The USSR was always much weaker than we were in its internal structure, since it maintained itself as a superpower by forcibly converting much of its own inefficiently produced surplus, and that of its subject nations, into military might. The United States, with a more productive economy, could extend itself much further before going bankrupt. Nevertheless, the Cold War did place us on the path of ever-greater internal and foreign non-investment expenditures. The first crisis from these expenditures came at the beginning of the 1970s, when the effort to simultaneously fund a war in Southeast Asia and a war on poverty tended both to push up wages and prices and to place a drag on productivity. We temporarily escaped from this dilemma by becoming an investment center. Cutting capital gains taxes in the 1980s, while retaining internal subsidies of consumption and external spending on our competition with the Soviets, did help restart domestic investment and attract foreign investment, but this was largely speculative investment in overvalued new communications technologies and in non-productive consumption.
The end of our struggle with the Soviet Union has not ended history, and challenges from foreign powers remain with us. But perhaps the end of the Cold War, our current recession, and present debates over the proper size and activities of the federal government should lead us to reflect on how our present situation may have resulted from strains created by the interlinked projection of American power abroad and centralization of control at home.

Saturday, February 25, 2012

George F. Kennan: An American Life, by John Lewis Gaddis

George F. Kennan has long been one of my favorite historical characters, perhaps because he combined acting on the historical stage with criticizing the play’s production. I believe I first read his American Diplomacy, 1900-1950 when I was an undergraduate, although maybe it was late in high school. Later, when I was working in the resettlement of refugees from Vietnam, Cambodia, and Laos, I became interested in the broader geopolitical struggles behind the war in Southeast Asia, and I read Kennan’s classic 1946 “Long Telegram” from the USSR, his 1947 “X” article in Foreign Affairs, and his books Russia Leaves the War and Russia and the West Under Lenin and Stalin. Sometime after that, I read his The Decline of Bismarck’s European Order. The two volumes of Kennan’s memoirs sit on my shelf, but I confess that I have only begun the first. Kennan’s ability to bring a broad analytical perspective to the events of his own time and to the historical sources of those events, along with his talents as a writer and his reputation as one of the shapers of twentieth-century American foreign policy, gave him a depth unusual among American public figures.
Biographer John Lewis Gaddis, a major authority in his own right on the Cold War, can certainly be counted as one of Kennan’s admirers. He ends this detailed but engaging biography with a brief essay arguing for Kennan’s claim to greatness. But Gaddis does not portray Kennan as a saint. He points out Kennan’s personal flaws of vanity and sensitivity (although I did wonder if these flaws might not have been normal human characteristics magnified by the self-analysis in Kennan’s private journals. How vain and sensitive would we all appear if we recorded our private thoughts?). It also becomes clear in this biography that Kennan often advocated highly questionable policies, as when, in his Reith Lectures, he advocated U.S. withdrawal from a neutralized, unified Germany without sufficient guarantees against Soviet domination. Gaddis also makes clear that Kennan had little grasp of the role that domestic politics necessarily plays in American foreign policy, a strange shortcoming for a man who analyzed Soviet foreign policy in terms of the internal situation of the USSR. Gaddis acknowledges that Kennan spoke highly of President Kennedy, who flattered Kennan but largely ignored his advice and ideas in practice, and disliked President Reagan, who carried out much of the containment strategy Kennan had outlined in the Long Telegram and the X article, thereby hastening the collapse of the Soviet regime.
As Gaddis makes clear, though, Kennan did play a big part in many of the strategic decisions of the twentieth century.  In Portugal during World War II, Kennan was largely responsible for negotiating the use of the Azores as U.S. airbases. The Long Telegram and the article he wrote for Foreign Affairs under the pseudonym “Mr. X” helped turn official and public opinion away from the view that the only alternative to outright war with the Soviet Union was a Henry Wallace-style faith in the virtue and goodwill of the Stalinist dictatorship. As a policy planner after the war, he was more responsible than anyone else for the Marshall Plan.
Drawing on Kennan’s own papers, interviews with family members and others close to him, and on vast archival sources, Gaddis gives a detailed view of a man he clearly admires greatly, but not without reservation. Appropriately, he concentrates on Kennan’s intellectual and public life, considering the private life and passions only insofar as these shed light on temperament and motivations. I thought it particularly interesting that Kennan’s thinking on grand strategy emerged from his wide general reading, as well as his understanding of Russian history and culture. In particular, the idea of containing the Soviet Union until it collapsed due to its own contradictions and the strain of maintaining a vast empire apparently came to Kennan from his reading of Edward Gibbon.

Thursday, February 23, 2012

The Fisher Case and the Quality of Debate

Affirmative action is an emotional issue, and I should probably not be surprised that some on all sides of the issue take leave of reasoned argument when they turn to this topic. Still, I was taken aback by some of the rhetoric in today's debate among putative experts in The New York Times. The worst of these is unquestionably the piece by Columbia Law Professor Patricia J. Williams. Essentially, Professor Williams argues that we have not moved beyond race in American society. But rather than constructing a logical argument based on this premise, she lectures that "we need to acknowledge the race-conscious biases and anxieties lying in plain sight" and she excoriates a "counterproductive backlash that echoes the plaints of 'reverse racism.'" She concludes, "To argue that race doesn’t matter or shouldn’t be considered at all in admissions processes that are taking place in an echo-chambered world blaring with explicitly racialized competition is not merely hypocritical but foolish." I wonder if it has ever crossed Professor Williams' mind that it is possible for someone to think differently than she does without being a fool and a hypocrite.

Ian Haney-Lopez, also arguing for affirmative action, doesn't call his opponents names. He adopts another rhetorical strategy: accusing them of bad intentions. He believes the Court will likely declare affirmative action unconstitutional, and he describes this as the end result of a long political war against "racial justice" by conservative politicians and judges. Well, at least the conspirators against justice aren't hypocrites.

After these poisonous diatribes, I could almost have been won over by George Washington University Law Professor Jeffrey Rosen, also arguing in favor of affirmative action, who observes that "reasonable people can disagree about the civic effects of affirmative action."  Rosen maintains that unless the Constitution clearly forbids a policy, courts should defer to the people's representatives, and that the 14th Amendment doesn't forbid race-conscious policies in some situations. Now, I am not an attorney or a Constitutional law scholar, but I would think that the equal protection clause of the 14th Amendment does require that all individuals receive the same treatment under the law. It does not require that the law counteract social and historical disadvantages by conferring legal advantages on members of underrepresented categories. Moreover, if it were left up to the people's representatives, as Rosen suggests, then probably affirmative action would be abolished in most places. So, I don't entirely buy the argument, but I appreciate the fact that he at least made one.

On the negative side, Peter H. Schuck, a law professor at Yale University, engages in less demonizing than Professors Williams and Haney-Lopez, but he still inserts a dig at the "political pressure from minority activists [that] will never cease." Professor Schuck does, I think, make the very good point that the goals of affirmative action keep moving further away, observing that the University of Texas, not satisfied with achieving racial balances at the campus level, has been justifying its race-conscious admissions by seeking these balances even at the classroom level. Unfortunately, at the end of his comments he also indulges in rhetorical excess, concluding that "For the court to uphold the Texas system would compound the felony [of race-based preferences]." He doesn't quite accuse his opponents of being felons, but I don't think the strong language helps his case.

Vikram Amar, Associate Dean and professor of law at U.C. Davis, takes a diplomatic approach. Well, he is a dean, so maybe he's acquired the habit of diplomacy on the job. He acknowledges that using race to remedy past discrimination or to create diversity may be inconsistent with the basic value of individual equality, but suggests that sometimes you just have to use race to get beyond racism, in a paradoxical formulation he borrows from Justice Harry Blackmun. Again, I am not a Constitutional scholar, but if I recall the Bakke and Grutter decisions correctly, the Court decided that remedying past discrimination is not a legitimate basis for race-conscious admissions policies, leaving only the diversity rationale. But Amar says "it is tough to know when affirmative action has outlived its usefulness." He suggests, in what I take to be a line of argument similar to that of Professor Rosen, that "As wise as the court is, sometimes it should let the political processes decide when contested policies should sunset." It is nice of Dean Amar to praise the wisdom of the judges and not accuse them of being parties to a conspiracy against racial justice. But, again, I think that if affirmative action really were left up to political processes, it would have been gone long before now.

Stephen Hsu of the University of Oregon, a theoretical physicist and not a lawyer, presents what I saw as the most reasoned argument. He is also the only one to introduce evidence. He argues that SAT results are reasonably good predictors of academic success and that there are differences among the racial categories in average SAT results. Therefore, race-based preferences run counter to meritocratic ideals and bring in students whose abilities vary by race. I happen to agree with this argument, but if I were looking for a way to challenge it, I would probably focus on just how well SAT tests actually do predict success. I might also argue that as long as we rely on current ability distributions, we can never hope to re-distribute abilities across categories in the future. Of course, such a response would raise the problem of using law and public policy as an instrument for re-designing a society. Ultimately, though, Hsu impressed me as not only having a logical, fact-based argument, but as making his case solely on the basis of that argument, rather than on accusations of foolishness, hypocrisy, insidious motivations, and criminal actions.

Wednesday, February 22, 2012

Affirmative Action Goes to Court Again

The United States Supreme Court has agreed to take up the case of Abigail Fisher, a white student who was denied admission to the University of Texas because of race-based admissions policies. The Court is unlikely to speak with one voice on this case, which will probably revolve around whether the Texas system exceeded the conditions for the use of race in decisions established by the 2003 Grutter decision. That earlier decision, like others, was a narrow 5-4 ruling in favor of allowing racial considerations. From the time that the Court began hearing affirmative action cases, differing interpretations of equal protection and freedom from discrimination have prevented unanimity of views. Majorities have consistently allowed race-based decisions in educational admissions and employment, but have agreed that these decisions must be subject to strict scrutiny. Especially in education cases, the tendency has been to accept affirmative action as a means of pursuing a compelling national interest in diversity, rather than as a means of compensating individuals or groups for past discrimination.

Regents of Univ. Cal. v. Bakke, 438 U.S. 265 (1978) was the most critical Supreme Court case regarding affirmative action in education. The case originated in 1973, when Allan Bakke, a white man, applied for admission to the University of California-Davis Medical School. Under its affirmative action program, the school had reserved 16 of 100 seats for minority or socioeconomically disadvantaged applicants, who were judged by a committee separate from the one that judged regular applicants and who could be admitted with lower grade point averages (GPAs) and Medical College Admission Test (MCAT) scores than regular applicants. After Bakke was denied admission, he wrote to the chairman of the admissions committee, complaining that he had not been considered for a reserved seat for the disadvantaged and that no whites received these reserved seats. Bakke applied again in 1974, this time with a substantially higher MCAT score, and was again denied admission, although minority applicants with lower scores and GPAs than his own were admitted through the separate special admissions process. Bakke sued in the California Superior Court, maintaining that he had experienced discrimination in violation of the Equal Protection Clause and Title VI of the Civil Rights Act of 1964, as well as the California Constitution. The case went before the California Supreme Court, which decided in Bakke’s favor. The university then appealed to the U.S. Supreme Court.

The University of California maintained that it was justified in using race as a factor in admissions and that its separate admissions program was a legitimate way of doing so. Bakke maintained, again, that reserving places violated his right to equal treatment and subjected him to discrimination. Justices William Brennan, Byron White, Thurgood Marshall, and Harry Blackmun supported the use of race in admissions to educational programs in order to provide a remedy to minorities for the present-day consequences of past discrimination and racial prejudice. Chief Justice Warren Burger and Justices Potter Stewart, John Paul Stevens, and William Rehnquist opined that the admissions policy at Davis violated Bakke’s rights under the Equal Protection Clause and the Civil Rights Act. Justice Lewis Powell argued that treating individuals differently on the basis of race requires a compelling state interest. The compelling state interest he saw was the achievement of a heterogeneous student body.

Justice Powell wrote the opinion of the Court, in which the four justices who favored race-conscious admissions joined in part. A special admissions quota, such as the one employed by UC-Davis, could not be used because it constituted discrimination. Race could be treated as a factor, but one subject to strict scrutiny. Bakke was ordered admitted, and the most important Court decision on educational affirmative action entered history as a split decision in which no other Justice agreed entirely with Powell’s opinion for the Court. The Bakke decision therefore meant that educational institutions could continue to seek to increase their admissions of members of racial minorities or other underrepresented groups, but only to increase diversity, not to compensate for past discrimination. Moreover, membership in an underrepresented group could be only one of many factors in an admissions decision.

However, during the 1970s, the Court also found that whites, as well as minority members, were constitutionally protected as individuals from racial discrimination. In McDonald v. Santa Fe Trail Transportation Co., 427 U.S. 273 (1976), in a rare unanimous decision on this topic, the Court held that Title VII of the 1964 Civil Rights Act prohibited discrimination against whites, as well as non-whites. This placed the Court in a complicated position. On the one hand, Bakke allowed deciding (i.e., discriminating) on the basis of race to achieve perceived larger national ends. On the other hand, the Court also seemed to find against just this kind of discrimination.

Some members of the Court attempted to juggle this apparent contradiction through advancing the concept of "national interest" as a counterweight to individual rights and by subjecting race-based decisions to "strict scrutiny," meaning that racial decisions had to be (or present themselves as) narrowly tailored to address the specific goals of national interest. The two critical college admissions cases of the early twenty-first century were based on this juggling.

In Grutter v. Bollinger, 539 U.S. 306 (2003), Barbara Grutter had been denied admission to the University of Michigan Law School in 1996, despite a 3.8 grade point average and a score of 161 on the Law School Admission Test. The Law School maintained a policy that gave special consideration to members of minority groups, and Grutter’s attorneys argued that this policy had denied her a place and therefore constituted discrimination against her under the Equal Protection Clause of the Fourteenth Amendment and Title VI of the Civil Rights Act. At the same time, the Court considered the case of two white applicants to the University of Michigan’s undergraduate program in Gratz v. Bollinger, 539 U.S. 244 (2003). Jennifer Gratz had been classified as “well-qualified” when she applied in 1995 and Patrick Hamacher as “qualified” when he applied in 1997, but both were rejected. The university maintained an undergraduate admissions policy that automatically gave 20 points to underrepresented racial minorities. Gratz had actually been rejected before the point system had been enacted, raising questions about whether she had standing to bring suit, but the Court ruled that she did.

Ultimately, the Court ruled that the Law School’s policy was acceptable, because it served the compelling national interest of diversity and simply took race into consideration, while the undergraduate admissions policy was not, because the point system was too inflexible and not narrowly tailored to promote diversity. Grutter v. Bollinger was narrowly decided by a 5-4 majority, with an opinion written by O’Connor and joined by Stevens, Souter, Ginsburg, and Breyer, with Ginsburg writing a concurrence. Justice O’Connor argued that narrowly tailored race-based decisions for the sake of diversity were Constitutional. However, she also suggested that affirmative action could not be permanent in character and predicted that twenty-five years later it would no longer be necessary to consider race. Justices Rehnquist, Scalia, Kennedy, and Thomas all disagreed and wrote dissents, with Thomas strongly suggesting that the Court should not wait twenty-five years to find the practice unconstitutional.

In Gratz v. Bollinger, Chief Justice Rehnquist wrote the 6-3 opinion, in which he was joined by O’Connor, Scalia, Kennedy, and Thomas. Justice O’Connor wrote a concurrence in which she was joined by Breyer, who also wrote a concurrence. The majority decided that the automatic point system was unconstitutional because it did not bring race into consideration on a flexible, individual basis. Justices Stevens, Souter, and Ginsburg all wrote dissents. Justices Ginsburg and Souter both said that the university should not be penalized for the openness and honesty of its affirmative action program.

The Fisher case is unlikely to end affirmative action in admissions altogether, but it does have the potential to restrict this type of policy. Its main issue will probably be whether the racial admissions policies of the University of Texas were necessary to achieve the supposedly compelling national interests. Since Texas already attempts to promote minority enrollments through a 10% plan (admitting the top 10% of each high school class, thereby taking in disproportionate numbers of students from mainly minority schools, regardless of the overall level of school performance), Fisher's side will probably argue that Texas already has a system aimed at increasing minority enrollments and therefore has no need for explicitly racial preferences. The good news for those who would like to see racial preferences diminished is that Elena Kagan has recused herself because she argued in favor of racial preferences in Texas when she was U.S. solicitor general. However, three justices (Sonia Sotomayor, Stephen G. Breyer, and Ruth Bader Ginsburg) will probably support the use of racial preferences in Texas and elsewhere. Four (Chief Justice John G. Roberts, Samuel Alito, Antonin Scalia, and Clarence Thomas) will probably try to restrict these preferences further. This makes Justice Kennedy, who earlier opposed racial preferences in both Grutter and Gratz, the swing voter, who could make the vote a tie (thus leaving in place a lower court decision in favor of race-based admissions) or join the other four in striking down the Texas policy. Even if the latter comes to pass, though, there will still be the issue of whether the Court writes its decision narrowly, directed specifically toward the Texas case, or more broadly, establishing a usable precedent for further diminishing race-based policies around the country.

Tuesday, February 21, 2012

Made of Paper: Edward Dahlberg

Edward Dahlberg was a difficult character, a perennial misfit and a touchy misanthrope. Born out of wedlock to an itinerant lady barber, Dahlberg was left in an orphanage by his mother when he was 12. He left this hard life for another one five years later, drifting around the western part of the U.S. before going into the army at the end of World War I. Back in America again, he enrolled first at U.C. Berkeley and then took a degree in philosophy from Columbia University. His extensive reading and his university studies brought him into a world apart from that of his hardscrabble childhood and youth.
Having no vocation but literature, Dahlberg made his way to Paris in the 1920s where he became part of a generation of expatriate writers. He joined the Communist Party, making his mark as a “proletarian” writer in Bottom Dogs, a novel based on the orphanage and his early bumming around his native country. Even then he was no party line conformist.  D.H. Lawrence, a writer whose distinctive brand of politics aligned with no socialist agenda, wrote the foreword to this novel. Lawrence also recognized Dahlberg’s legendary pessimism, reportedly exclaiming, “For God’s sake, Dahlberg, cheer up!”
The conventions and shibboleths of Communism accorded ill with Dahlberg’s independent personality and his growing intellectual elitism. By 1936, perhaps disgusted by Stalin’s purges and certainly resistant to ideological regimentation, Dahlberg denounced Communism as “necrophilic” and left the Party. He began to develop a unique style of writing, an elaborate and carefully wrought epigrammatic prose.
I found Dahlberg’s two masterpieces when I was rambling through the shelves of the old San Francisco public library, attempting to make up for the deficiencies of a late-twentieth century university education.  The essays he first published under the title Do These Bones Live? (later re-titled Can These Bones Live?) scrutinized European and American writers from the perspective of a despairing Hebrew prophet.  His autobiography, Because I Was Flesh, unsparingly examined his tawdry upbringing and his crotchety nature, but it managed to transmute these into visionary writing and to find in literature a justification for his existence. Dahlberg was certainly no saint, but Because I Was Flesh is one of the great works of confessional literature, a descendant of the Confessions of St. Augustine.


Monday, February 20, 2012

The Holiday

If you ask someone down here in the New Orleans area what holiday we celebrate today, you are likely to be told, “Lundi Gras, the day before Mardi Gras.” Many New Orleanians might be shocked and surprised to hear that the Post Office doesn’t close because it is the day before Mardi Gras. The national holiday is actually Washington’s Birthday, more commonly known as President’s Day. Proclaimed in Washington DC in 1880 and extended to the rest of the country five years later, Washington’s Birthday, on February 22, became the first federal holiday to memorialize an American citizen. Unofficially, many Americans, especially in the school system, also celebrated Lincoln’s birthday on February 12. In some places during the first half of the twentieth century, people recognized a long Patriot’s Week, which included a commemoration of Thomas Jefferson. Perhaps it was inevitable that the memorials of these Mount Rushmore figures would all be somehow collapsed into a single holiday. In 1968, Congress detached the celebration of Washington’s Birthday from his actual date of birth with the Monday Holiday Act, which moved the observance of the first president’s birthday to the third Monday in February, creating a three-day weekend.
I’d like to celebrate Washington, if only we did celebrate him. I’d also advocate observing Jefferson Day by staging public readings of the Declaration of Independence, Notes on the State of Virginia, and the Jefferson-Adams letters. But since the mistakenly named Presidents Day serves mainly to sell mattresses and close down government offices, I suggest the memorial has been drained of all significance. The big tourist event of Mardi Gras isn’t the old local festival, either.

Sunday, February 19, 2012

Uniting America through National Service

The “Sunday Review” section of today’s New York Times contains a dialogue on “Uniting America through National Service,”  sparked by a recent suggestion by David Brooks that the nation needs a national service program in order to unite the separating classes in American society.  Those in favor of mandatory national service point out its potential for building individual character, uniting the nation, making national leaders consider military engagement abroad more carefully, and contributing to the civic infrastructure.  These arguments are generally thoughtful and moderate. Nevertheless, I remain intensely opposed to any program of mandatory national service precisely because it would be mandatory, because it would be national, and because it would require service.
A mandate incumbent upon all Americans would require a means of compulsion, a system of compulsion, and a program of compulsion. By “means of compulsion,” I mean there would have to be some way of forcing people to take part, whether they choose to or not. Force is least required when people will participate voluntarily. For most of American history, a peacetime draft was unworkable precisely because it would meet with so much resistance. Even the wartime draft during the Civil War provoked draft riots in New York. On the eve of American entry into World War I, no less a person than Speaker of the House Champ Clark declared of his home state, “Missouri sees precious little difference between a conscript and a convict,” and during the war some Oklahomans rose up against the draft in the Green Corn Rebellion. Although some draft resisters did go to jail in World War II, the Japanese attack on Pearl Harbor and the popular feeling that the war was unavoidable created enough support among the American public that people were willing to accept a draft as a necessary measure for conducting a large-scale war. During the Cold War that followed, Americans for the first time largely accepted a peacetime draft because they tended to see it as essential to maintaining defense against what was perceived as the ever-present threat of the Soviet Union. Even then, though, the draft only worked as a “selective service system” that did not fall on everyone. This selective system fell apart when popular support for American policy would no longer suffice to maintain even the selective use of compulsion. A universal mandate would mean that even if 90% of Americans supported national service, one out of every ten people would still have to be coerced. That is a pretty big portion of the population to coerce in a supposedly free society, and every indication is that the real percentage would be much greater than this.
A system of compulsion must follow from the chosen means. In the case of the military draft, the system of compulsion is usually either a branch of the service or prison. It seems to me that if one advocates universal mandatory service, one must also advocate a vast system of prison camps for dissenters, as well as companies of national service for those who support or at least accept the universal mandate. This may well contribute to the national infrastructure. During the 1930s, both the new National Socialist service programs and the prison camps contributed to building the German infrastructure. Similarly, the Gulag provided slave labor to the Soviet system, while collectivization pressed the “free” population into service. Presumably, neither the American national service corps nor the American prison camps would be as intensely coercive as these totalitarian societies, but there is no question that a universal mandate would push everyone into some sort of system of coerced labor.
Within a system of compulsion, people undergo a program of compulsion. I note that many of the advocates of mandatory national service maintain that it would make “better citizens” of Americans. In order to achieve this goal, though, the national service companies (and the prison camps) would need to include, either explicitly or implicitly, a program of citizenship training. This would of necessity follow an organizational definition of what constitutes good citizenship, and would unavoidably be a course of indoctrination by federal officers.
This brings us to why the “national” part would be problematic. I often point to the Tocquevillean ideal of local voluntary organizations as a basis for American democracy. Along these lines, the sociologist Robert Nisbet argued that centralized national institutions tend to replace local institutions. A dictatorship arises when a polity does not consist of a nation-wide set of interconnections within local communities, but of atomized individuals connected to a powerful central state. If we want to destroy voluntary local associations and replace them with a system of authoritarianism, I can think of no better way than to force every individual into service to the central state.
The system of national compulsion would, further, entail a huge bureaucracy.  This, of course, should pose no problem for those who believe that government should be unlimited in size as well as in power. It would also mean a vast expansion of opportunities for a “new class” of administrators and power brokers who would decide what all the national servants should do.  It is unlikely, though, that the bureaucracy would be very efficient economically.
Finally, we come to the issue of service itself. Given the intellectual agility of the Supreme Court, I think it is entirely possible that the justices would not find compulsory service synonymous with involuntary servitude, and that mandatory national service would not be ruled a violation of the Thirteenth Amendment. But turning everyone into a servant of the nation would not only mean taking away every individual’s autonomy. Since the nation is an abstraction, in real terms it would mean making every person in the United States a humble servant of government officials. If that is what you want, then you should support the call for national service.

Saturday, February 18, 2012

Why the West Rules - For Now, by Ian Morris

Why the West Rules – For Now attempts to find the reasons for the global dominance of the West. Ian Morris begins with a little counter-history, telling the story of Queen Victoria and Prince Albert making obeisance to a Chinese envoy in 1848. Why, he asks, did history not turn out along these lines, rather than as it did, with the British victorious in the Opium Wars and Asia and most of the rest of the world on the eve of being carved up into European and American colonies and spheres of interest? Morris sets out two kinds of possible answers: long-term and relatively deterministic, or short-term and relatively accidental. The first suggests that there is something about the West that put it in the lead all along. The second suggests that accidental historical conditions precipitated the industrial revolution and the rise of the West, and that events could easily have turned out more like his counter-history.
The long-term versus short-term problem prompts him to take an unusually far-ranging approach to considering the rise of the West. He goes all the way back to earliest pre-history in order to ask whether there have been fundamental differences between East and West. After arguing that there are no such differences among human populations, he then casts dominance in terms of development. This is where the book makes one of its most interesting contributions, because Morris comes up with an original way to define development operationally, as well as conceptually. He defines social development as a composite of four traits: per capita energy consumption, organization (indicated by urbanization, or largest city size), war-making capacity, and information technology. These are sufficiently related to be parts of the single concept of development, but also sufficiently distinct to be separate dimensions.
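To make the operational idea concrete, here is a toy sketch of how a composite index along these lines might be computed. The numbers and the scaling rule are my own invention for illustration, not Morris's actual data or scoring method (which he defends in his appendix): each trait is scaled against the highest value attained on that trait in the data set, so that the four dimensions contribute comparable amounts to a single total.

```python
# Toy illustration only: invented numbers and scaling, not Morris's actual index.
# Each trait is scaled against the peak value in the data set and capped at
# 250 points, so a society holding the peak on all four dimensions scores 1000.

TRAITS = ["energy_per_capita", "largest_city_size", "war_making", "information_tech"]

def development_scores(societies):
    """societies: dict mapping a society name to a dict of raw trait values."""
    # Find the peak raw value for each trait across all societies.
    peaks = {t: max(s[t] for s in societies.values()) for t in TRAITS}
    # Scale each trait to 0-250 and sum the four dimensions into one score.
    return {
        name: sum(250.0 * s[t] / peaks[t] for t in TRAITS)
        for name, s in societies.items()
    }

# Hypothetical raw values, purely for illustration.
example = {
    "West": {"energy_per_capita": 230, "largest_city_size": 26.7,
             "war_making": 250, "information_tech": 200},
    "East": {"energy_per_capita": 110, "largest_city_size": 26.3,
             "war_making": 50, "information_tech": 110},
}
print(development_scores(example))
```

The design question such an index raises, and the one Morris spends the most effort defending, is the choice of scaling: weighting the dimensions equally and against a common peak is what lets scores be compared across both regions and millennia.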
Morris argues that only the core areas of what we now call the Middle East and eastern China provided the environmental conditions for agriculture and interrelated societies, justifying the West-Far East concentration of his book. However, the Middle East enjoyed an advantage in its potential for agricultural productivity. Later, the Mediterranean gave the rising classical cradle of the West a continuing advantage in urbanization and information exchange. He maintains, therefore, that there has been a long-term basis for Western dominance. He also claims, though, that development tends to create problems of its own, such as putting pressure on resources, spreading communicable illnesses through increasing contacts, bringing in new and often hostile population groups, and over-extending the reach of the state. Societies sometimes push development to another level by figuring out ways to respond to these problems, but often they reach a ceiling and collapse, having communicated their development strategies to societies on their fringes, which then move to the forefront. Readers of Gibbon and Toynbee will find the ideas of state over-extension and challenge-and-response familiar.
While the West enjoyed an early long-term advantage, this was not absolute. When the Roman Empire reached its ceiling and dissolved (here the social development measure is especially useful in demonstrating that there really was a decline and fall), the West began to lag, and the East, defined as Chinese civilization, began to catch up and eventually moved ahead. The salvation of the West came with its shift to the politically fragmented states of the Atlantic fringe. Their competition with each other and their access to an ocean that could give passage to America produced the expanding market economy that gave rise to the modern dominance of the West.
The argument that social development tends to shift to fringe societies that become new centers might suggest that China, having been absorbed into the global economic and political system created by the West, is likely to move into a position of leadership.  On this crucial point, though, Morris hedges his bets, citing both those who claim that a single new entity of “Chimerica” will emerge and those who argue for a Sinocentric future. He also speculates that the old categories of “East” and “West” will become meaningless. The exponential increase of social development, especially in information technology, may create a Singularity, in which human beings merge with machines, resulting in an entirely new way of living that makes old geographic and political distinctions irrelevant. Or, the same exponential increase could produce a worldwide environmental catastrophe.  In reading these alternatives, I was not sure which possible future I thought was worse.
Why the West Rules is a fascinating approach to comparative world history. Morris apparently sees his measure of social development as the part of his account most in need of defense, because he devotes an appendix to it. I actually found this index convincing as well as creative, although it is unavoidably rough, as Morris admits. One of the book's limitations, I thought, was the near-exclusive focus on West and Far East. This left me wondering how Morris might account for the emergence of societies with comparatively high degrees of urbanization and sophisticated information technologies (in the form of writing) in Mesoamerica, where many of the preconditions for social development seem to be lacking. The South Asian subcontinent receives mention only in passing, although I can imagine its access to East and West by both land and sea making it a world leader in some alternative version of history.
The definition of the West in this book is also much broader than in the normal use of the term. For Morris, the West incorporates all of the civilizations that grew out of the original core on the eastern edge of the Mediterranean, including what became the Muslim semi-circle of North Africa and the Middle East. Morris seems to be looking less at why the West, as a continuous entity, came to power than at how what we today call the West became what it is. In the process, he hints at why Europe and America achieved dominance over the societies associated with Islam, but never considers sufficiently how the former and the latter split apart, with consequences for the present that may be just as great as the consequences of the distinction between the West and the Far East.
Finally, Morris suggests that he looks at the past in order to draw conclusions about future trends. But then he ends on such an inconclusive note. I found his science fiction-like speculations about a possible coming Singularity of humans and technology implausible. This doesn’t mean that such a thing can’t happen. What we find implausible may be due to the limitations of imagination, rather than reality, but a line graph of technological innovation is a slender basis for this type of futurology. He also doesn’t come up with any solid suggestions about what his historical patterns suggest we should do to maintain Western dominance, encourage the growth of desired political values in a rising China, or simply maintain our own ways of life in a changing world.

Thursday, February 16, 2012

Getting Back to the Social Order Problem

Contemporary academic practitioners of the social sciences give little attention to questions of the nature and maintenance of social order.  Insofar as they touch on it at all, they tend to assume order to be inherently unjust and oppressive.  Our conferences have become celebrations of “transgression,” even while the conference goers not only conform carefully to the ideologies of their colleagues but also follow the highly patterned rituals of colloquia.

The refusal to take order seriously as a theoretical issue is a fairly recent historical development. In 1951, Talcott Parsons and Edward Shils described the problem of social order as "one of the very first functional imperatives of social systems." This problem is not only a philosophical issue; it is also a practical matter, since the order or disorder of a social system has immediate consequences for the lives of its members.

Dennis Wrong’s 1994 book, The Problem of Order, was one of the few relatively recent attempts to revive inquiry into this fundamental subject. Wrong, the product of an older generation of social scientists, considered the forces that hold people together in social groups. He pointed out that the term "social order" has two closely related but distinct meanings: it can refer to regularity or rule in human social interactions, and it can refer to patterns of cooperation among actors. He argued that people develop regularities as norms, roles, and institutions in the course of recurrent interactions. In this sense, social order tends to "take care of itself," since the lives of human beings largely consist of interactions with others. These interactions, though, may differ greatly in character, since they may be products of a variety of motivations.

The second type of order problem, according to Wrong, is that of conflict versus cooperation.  This is not an absolute choice, as suggested by Hobbes' unfortunate and misleading description of the natural human state as a "war of all against all."  Humans in a state of total conflict could exist no longer than the time it would take parents to murder their children. Perfect cooperation, at the other extreme, seems to be a social state that exists only in the imagination.  In response to classic functionalism's "oversocialized" conception of cooperative order as the product of norms imposed on individuals from an external society, Wrong argued that human beings produce particular blends of conflict and cooperation from expectations developed in the course of their dealings with one another.

It seems to me that one of the main tasks for social science today is to turn away from its obsession with advocacy and return to the issues of what constitutes order in social life, how it is maintained, and why societies vary in their combinations of conflict and cooperation.

Wednesday, February 15, 2012

Proposition 8, the Balance of Power, and Limited Government

Seth Long has an interesting post on the judicial overturn of California's Proposition 8. He points out that a majority of the 80% of California voters who cast ballots chose to end same-sex marriage in the state. Therefore, this represented the democratic will of the majority. In overturning the proposition, the 9th Circuit Court of Appeals denied the popular will by fiat.

I believe that Long is touching on some of the central concepts in the American political system here: the ideal of majority will and that of individual rights. One of the consistent problems of democracy is the potential for the tyranny of the majority. Under a purely democratic system, if a majority of people want to disenfranchise a minority they can do so: the will of the majority holds, whatever it may be. In the early years of the Republic, worries about majority tyranny frequently centered on debtors voting their interests over those of creditors, which could dispossess the creditors of legitimately acquired assets and create an unsound economic system. This may still be a problem: T.H. Marshall's idea of political democracy leading to economic democracy could be interpreted as majorities dispossessing minorities by voting to redistribute resources.

The members of the founding generation had two responses to the problem of majority tyranny. One was the balance-of-power, or mixed-system, response. Derived from Aristotle, this involved trying to balance the virtues and vices of democracy, aristocracy, and monarchy by incorporating elements of each. John Adams, uneasy about both democratic excess and elite power, was the foremost exponent of the balance-of-power approach. Interestingly, Adams and those of similar mind saw not the judiciary but the upper house of the legislature as the aristocratic element in American government. This was one of the reasons that, until the early twentieth century, senators were chosen by state legislatures rather than directly elected by the people. The Senate was supposed to be a brake on democracy, rather than an expression of it.

However, the main structural limitation on democracy historically came to be the third branch of government, the judiciary, as the concept of judicial review developed from the time of John Marshall onward. Through judicial review, the courts exercise a veto over legislation because the courts decide whether a law or policy enacted by the people or their representatives is consistent with the Constitution. While a law or policy may be judged unconstitutional on a variety of grounds, violation of constitutionally protected rights has stood out as the most prominent. This has created a tendency for the courts to extend the idea of constitutional rights and for parties who cannot realize their political ends democratically to urge the courts to re-define those ends as rights.  If, for example, marriage is a matter of legitimate state policy, then the democratically elected representatives of states (or the voters directly, in a proposition system) can define marriage in the way that the representatives and voters believe will best serve the polity. If marriage (to anyone or anything one chooses) is a matter of individual right, then the aristocratic judicial branch can intervene to annul the will of the majority.  This is why same-sex marriage opponents have generally sought their goals through voting and legislation, while proponents have generally sought theirs through the courts.

The problem with the balance-of-power response to majority rule lies in the question of where to find the proper balance. Clearly, the courts have increased the number of rights people have, giving the courts ever greater power and creating the danger of a "kritocracy," or absolute rule of judges. There is also the problem of where judges find these rights. The founders were heir to a natural law perspective, but contemporary courts have leaned more toward the utilitarian view that rights are defined by social ends. I argue that the "compelling national interest" defense of affirmative action is a utilitarian approach to legal rights. Race-based preferences, according to this defense, can be held constitutional, and efforts to end those preferences can be held unconstitutional, because the preferences are assumed to serve a social end. This makes rights dependent on the social and political program of the judges. In the case of defining marriage, I think judges sometimes lean toward an unrecognized expanded version of a natural law concept of a right (people have the inherent right to be happy, getting married as they choose makes them happy [supposedly], therefore all people have the right to marry as they choose) and sometimes toward a utilitarian concept of what judges believe will lead to a nondiscriminatory society.

The other response among the founders of the Republic to the problem of majority tyranny was limited government. If Adams leaned more toward balancing the powers of government, Thomas Jefferson leaned more toward limiting what government can do. This response was essentially an argument for an equality of citizens based on independence. Jefferson in the elegance of Monticello was the equal of the poor yeoman because each could live on his own without hierarchical dependence. This response may be the most congenial to Long, who would ideally prefer that government stay out of marriage altogether.

It is true that we all generally like government intervention when the government is on our side and tend to become anti-government when it pursues policies we don't like. Ultimately, though, I think the same-sex marriage debate does not easily fit into a "left-right" political dichotomy, or into one of supporters of individual liberty versus supporters of government interference. Instead, I think this issue involves a range of perspectives. There are traditionalists, who believe that marriage as historically defined is correct and should be maintained by government, and that judicial interference simply involves the courts imposing the wrong policies. There are democratic institutionalists, who believe that majorities define their legal and social institutions, and that aristocratic courts illegitimately extend individual rights in violation of those institutions. There are consistent libertarians, who follow the eloquent formulation of Jimmy McMillan on this topic ("If you want to marry your shoe, I'll marry you"). There are limited civil libertarians, who believe that the courts should mandate the specific kinds of rights that those limited libertarians favor. Finally, there are group-rights advocates dedicated to the advancement of group interests and willing to employ whatever legislative or judicial means will advance those interests.

Tuesday, February 14, 2012

Made of Paper: Travels in Arabia Deserta

For a time in the second half of the 1970s I worked as a bicycle messenger in downtown San Francisco, pedaling up and down those steep hills carrying packages and letters between offices. I often spent my time off in the old San Francisco library, a beautiful building with vaulted ceilings, wide staircases, and spacious reading rooms. The building is still there, but the library has been moved across the street, to quarters as inspiring as a warehouse. I discovered many treasures in the old place, but one of the works that made the deepest impression on me at that time was the two-volume Travels in Arabia Deserta by Charles M. Doughty.
Doughty was among the most eccentric of the eccentric English adventurers of the nineteenth century. A poet and scholar, Doughty made his way to the Arabian Peninsula a century before I read his book. Although he sought the help of his own government and that of the Ottoman Empire, the officials left him on his own in his journeys among the Bedouin tribesmen, who were traditionally xenophobic and then under the spreading spell of the Wahhabi sect of Islam. Unlike the swashbuckling Captain Richard Francis Burton, Doughty openly presented himself as a Christian, which won the grudging admiration of some Bedouins but inspired others to threaten his life. Doughty was often beaten and mistreated. His patient endurance of this treatment later led Burton to denounce him as a poor representative of English manhood. But some Bedouins treated Doughty, known among them as “Khalil,” with friendship and kindness. His survival was probably due to concerns about how the Ottomans would respond to the killing of a Westerner, appreciation for his simple medical skills, awe at his sheer audacity, adherence to Bedouin customs of hospitality, and reluctance to murder someone whom some of the Bedouins viewed as a lunatic.
The adventure story is only one side of the work, though. Doughty believed that the English language was in a state of decadence and required renewal by a return to its ancient qualities. He wrote Travels in Arabia Deserta in an archaizing dialect of his own devising, drawing primarily on the style of the King James Bible, but also on Spenser and even Chaucer. The effect is of the heritage of English literary language confronting the Arab world, a marvelous vehicle for this deeply traditional, meditative English soul facing the harshness of an alien culture.
Doughty’s mannered style crept into my own writing for a while, probably with unfortunate results. Although it drew on so many influences, his language was so uniquely his own that it fits only the world that is his book. I think Travels also contributed to my own wanderings around distant parts of the planet in the years after I read it, although I was fortunate to face none of Doughty’s hardships.
Selecting the ten best books you’ve ever read is a popular game. My own list changes from time to time, but Travels in Arabia Deserta is always on it. Without question, I would rank it as the finest travel book ever written.

Sunday, February 12, 2012

A Call to National Servitude

In January 2012, the National Task Force on Civic Learning and Democratic Engagement of the Association of American Colleges and Universities (AAC&U) released its report, A Crucible Moment: College Learning and Democracy's Future. The report called for a program of civic learning and of training in civic engagement that would pervade every aspect of higher education. This program would be linked to similar efforts at all other levels of schooling. In the words of the report, "[t]he central work of advancing civic learning and democratic engagement in higher education must, of course, be done by faculty members across disciplines, by student affairs professionals across divisions, and by administrators in every school and at every level. The fourth prominent group of actors are the students themselves [bold in the original]. The collective work of these groups should be guided by a shared sense that civic knowledge and democratic engagement, in concert with others and in the face of contestation, are absolutely vital to the quality of intellectual inquiry itself, to this nation’s future, and to preparation for life in a diverse world" (p. 2). It called for fostering "a civic ethos across all parts of campus and educational culture" (p. 31).
The report cites a breathtaking array of “pressing issues,” including “growing global economic inequalities, climate change and environmental degradation, lack of access to quality health care, economic volatility, and more.” The answer to all of these problems lies in “expanding students’ capacities to be civic problem-solvers.” The report does not go into detail about how the professors and teachers, who do not necessarily possess such great social problem-solving skills themselves, will produce this generation of superbeings, but it does recommend that institutions foster what is variously called a “democratic ethos” and a “civic ethos” on every campus through “service learning” and “community engagement.”
The AAC&U is a highly politicized organization with its own distinctive view of how American society should be reconstructed and a fondness for expressing this view in the millenarian rhetoric of “struggles” and “calls to action.” Immediately after the election of President Barack Obama in 2008, the AAC&U issued a statement applauding his victory as a “…historic moment made possible by many years of struggle.” Not surprisingly, the AAC&U has had close ties to the Obama administration, and its task forces operate as federal policy planning committees. This most recent task force issued its “call to national action” at an official White House event with the melodramatic title, “For Democracy’s Future: Education Reclaims Our Civic Mission,” sponsored by the White House Office of Public Engagement and the U.S. Department of Education.

A Crucible Moment not only demands the participation of every person involved in education but also insists that every course in every subject incorporate its message. Because the report was issued as “a call to national action,” the social and political agenda it intends to implement in every classroom comes with the imprimatur of the U.S. Department of Education. As I read the report, I find its demands for inculcating a “democratic ethos” perplexing. Beyond the fact that in the traditional American democracy neither the government nor committees sponsored and subsidized by the government decide what type of ethos the people should have, the ideal of integrating all levels of schooling into a unified social program, and of inserting this program into every subject, appears to promote a kind of new bureaucratic corporatism, aimed at absorbing everything into the state and leaving nothing outside it.

Saturday, February 11, 2012

Made of Paper: The Golden Bough

In his wonderful biography of James G. Frazer, Robert Ackerman observes that modern anthropologists often regard Frazer as an embarrassment. Aside from briefly visiting Greece, Frazer did no fieldwork. His voluminous output owed much to the fact that he spent almost his entire life at a library writing table. The Golden Bough generally takes travelers’ tales uncritically as reports of beliefs and practices in societies around the planet. Frazer had no clear concept of culture, and his comparative approach wrenched ideas out of their context of meaning. Working without the idea of culture, his modern critics object, Frazer imposed the same positivistic evolution away from magic and toward science on all societies everywhere.
The Golden Bough was one of the grand explications of human life and thought that I read in my early twenties, when I was reaching for some sort of comprehensive understanding. Today, I acknowledge some of the criticisms of the work, but I still believe that it is important and should be read in its entirety. Inspired by J.M.W. Turner’s painting of the golden bough, the token from Virgil’s Aeneid that legend connected to the grove of Diana at Nemi, where each priest of the goddess held office until ritually slain by his successor, Frazer made his way through a multi-volume account of mythological themes, describing how societies around the world have repeated patterns such as the dying and reviving god, ritual sacrifice, and the scapegoat.
At one level, The Golden Bough is a majestic compendium of myths. I think it also occupies an important place in the history of ideas, though. Frazer was educated as a classicist and became one of the founders of anthropology. He marks the turning from thinking about Greek and Roman antiquity as a canonical model to thinking about Greek and Roman society in the same way that Europeans were beginning to think about other societies.  His comparative approach to these societies did largely ignore culture, but I’m not sure that is entirely a problem because in finding similar patterns that could be lifted out of different contexts, Frazer was moving toward finding common forms of human thinking, or what we would today call cultural universals. Sitting in the library may have limited his depth of understanding and led him into some factual errors, but it also made possible a breadth of vision and synthesis that would not have been possible for an anthropological specialist.
Reading The Golden Bough as a founding work of late modernity, I find that my questions about it have less to do with its methodological flaws in analyzing other times and places than with what Frazer may unwittingly tell us about his own time and ours. The rejection of antiquity as the measure of the present, as it had been since the Renaissance, may have left modern social thinkers without historical normative standards, placing everything that happens on the same plane. Frazer recounts anecdote after anecdote drawn from locations around the world, and these are held in place only by the fact that all the anecdotes and all the locations follow the same movement away from magic and toward science.