In 1949, the Board of Regents of the University of California ordered that all university employees sign an oath affirming loyalty to the constitutions of the United States and state of California. The oath required that university employees deny any membership in organizations that advocated the overthrow of the US government. Thirty-one professors, among them internationally distinguished scholars, were dismissed for refusing to sign. Courts eventually struck down this oath as unconstitutional, but a watered-down version remains in effect today.
Critics at the time objected on the grounds of academic freedom and freedom of speech. Yet the oaths also raised important philosophical questions over objectivity and the range of ideas that are considered acceptable for classroom teaching. One might even say that the UC loyalty oath controversy, besides epitomizing the fear-driven conformity and suppression of dissent characteristic of McCarthyism and the “Second Red Scare,” formed one chapter in a longer struggle in which conservative and anti-intellectual forces have sought to malign and discredit university professors. From a 1951 article in the American Legion Magazine that stoked fears of university professors indoctrinating students with Stalinist propaganda to segregationist George Wallace lambasting “pointy-head college professors” to the modern-day Republican Party’s assault on tenure, teachers’ unions, and funding for public education, there is a widespread and poorly reasoned view, financed and propagated by powerful monied interests, that history professors cannot be trusted to teach their classes objectively. While it is true that most history professors are somewhere on the left end of the political spectrum, there are a number of problematic assumptions at the heart of the discourse of objectivity that warrant further investigation.
The Enlightenment and the Heroic Model of Science
The earliest professional historians of the mid-to-late 19th century drew upon Enlightenment-era thinking when they established objectivity as the raison d’être of the discipline. When one thinks of the Enlightenment, what usually comes to mind are scientists overturning centuries of religious dogma; philosophers denouncing monarchy while making universal proclamations of human rights; great luminaries proposing that society should be governed by rationality, logic, and reason, rather than tyranny and absolutism; an unwavering optimism that humans could use their own free will, technological advancements, and information collected from their empirical senses to create a better, modern, and just world. All of these characterized what historians Joyce Appleby, Lynn Hunt, and Margaret Jacob called “the heroic model of science” in their 1994 classic, Telling the Truth About History.
The values of detachment and objectivity that one normally associates with the hard sciences, it was supposed by nineteenth-century Americans, could also be applied to history. Just as astronomers discovered the immutable laws of nature, so, too, could historians illuminate some of the rules of historical change through a dispassionate reading of primary sources. Leopold von Ranke, the famous German historian, and the American Frederick Jackson Turner are most often credited as two of the profession’s chief founding fathers. Professional historians were supposed to be balanced, neutral, and impartial. Fair and disinterested. Unbiased and value-neutral. Unattached to the outcome. Like a judge in a courtroom, a historian was supposed to be evenhanded and free from political pressure. They were to reject presentism and advocacy.
It did not take long for future historians and thinkers to poke holes in this discourse. Putting western science up on a pedestal looked less tenable in the twentieth century given the embrace of Social Darwinism and eugenics by leading scientists, as well as the development of weapons of mass death and destruction during two world wars and the Cold War. An objective telling could not account for versions of history that were not founded on meticulous record-keeping and which relied heavily on oral tradition, which is inherently subjective. Anyone who has told a story multiple times knows that it is practically human nature to make slight changes to the details and meaning of the story with each iteration.
Then there was the Enlightenment’s blind spot on race. It was not just that Enlightenment empiricism and rationalism coincided with the expansion of the Atlantic Slave Trade or that some historians (though not without significant pushback and debate) have argued that slavery provided much of the material wealth and resources that enabled Europe to have the economic, cultural, and political advancements of which the Enlightenment was a crucial part. It was not just that John Locke was a major investor in the Royal African Company that transported African slaves along the Middle Passage or that a defining feature of classical liberalism—the right to property, free from governmental interference—was intended to not only safeguard property in the form of land and contracts, but human bodies, too.
It was that the Enlightenment tended to elevate as heroes a rather elite set of white men. And the first professional historians of the late-19th century who inherited Enlightenment-based objectivity made no effort to design a curriculum for a multiethnic and pluralistic society. One of the most persuasive arguments for covering more stories and sources written by women, people of color, and the LGBT community in a history class is that teaching in a democratic society implies that everyone’s voice deserves to be heard. It implies multiple narratives, not all of which underscore progress and achievement. A diverse curriculum should reflect a diverse population. As women, African Americans, Hispanics, and the LGBT community—all of them segments of the population that had been left out of the traditional canon—entered academia in greater numbers, they were justifiably skeptical that a single narrative celebrating the achievements of mostly white, male Protestants could ever be inclusive.
In fact, during the tumult of the 1960s, many scholars in the newly emerging fields of African American and women’s studies specifically went into academia to advance a political point of view, rejecting objectivity as careerist and oppressive. Public historians, too, shared many of their criticisms. If historians failed to cover these perspectives, Appleby et al. argued, they risked suppressing students’ access to important historical information and therefore, the ability to engage with the reality of the past. Should I encounter someone who pines for the “good ole days” when history was taught “objectively,” I might be tempted to respond in the same way that I respond to those who subscribe to the reactionary and ahistorical legal philosophy of originalism: why should the values of nineteenth and eighteenth-century elite white men constrain what we are able to talk about today?
By the 1970s, critics of objectivity found a home in the paradigm of postmodernism. Most closely associated with Michel Foucault and Jacques Derrida, and a distinguished lineage of other French and German theorists who preceded them, postmodernism (or perhaps more precisely, post-structuralism) rejected out of hand the Enlightenment notion that a rational individual could freely choose to acquire knowledge objectively, divorced from any sort of politics or ideology. As Appleby et al. wrote, “Postmodernists assert that the individual self is an ideological construct, a myth perpetuated by liberal societies whose legal systems depend on the concept of individual responsibility.” For postmodernists, truth is far from absolute. It is invented, constructed, and reflects the dynamics of power.
While some academics sneer contemptuously at postmodernism, there are certain core claims of the movement that most historians accept. Almost all of us accept that race, rather than being scientific, ingrained, and immutable, is in fact context-specific, arbitrary, and socially and culturally constructed. Another is that we all communicate reality through the imprecise cultural medium of language. Language is not objective. It is a cultural construct. Ever notice how one English word translates to three Spanish words or vice versa? This is because when you travel from England to Spain, you’re not just entering a different culture, but a different reality. And the presence of multiple realities and narratives undermines the notion that a historian can illuminate a single, objective truth. The beauty of history lies in its artistry: one storyteller can take bits of evidence and craft a story with different meanings than another historian who looks at the same evidence.
When Peter Novick in 1988 published That Noble Dream, a thoroughly researched and nuanced tome that charted how major American historians grappled with the question of objectivity, it was clear that postmodernism and the subsequent “cultural turn” of the 1980s had left an indelible mark on the profession. Novick channeled W. B. Gallie in calling objectivity an “essentially contested concept,” much like social justice or what it means to lead a Christian life. Its exact meaning would always be a matter of dispute. Admitting that he was more persuaded by the opponents of objectivity than its defenders, Novick wrote, “As a practical matter, I think it promotes an unreal and misleading invidious distinction between, on the one hand, historical accounts ‘distorted’ by ideological assumptions and purposes; on the other, history free of these taints. It seems to me that to say of a work of history that it is or isn’t objective is to make an empty observation; to say something neither interesting nor useful.”
There Are No Facts Without Bias
To understand why so many historians scoff at the notion that history should be taught as an objective science, we should consider the relationship between fact and interpretation at the heart of the profession. In 2006, the eminent Cornell historian Mary Beth Norton wrote a New York Times op-ed responding to an education law passed by the Florida state legislature, which stated that history should be fact-based and not constructed. Norton was puzzled. Historians from a wide variety of persuasions are practically unanimous in agreeing that all history should have a solid grounding in facts. The point is not whether a history course is fact-based. It is that facts do not speak for themselves. You need some kind of model, paradigm, theory, or interpretive framework to put them together into a coherent narrative. If Florida Republicans really wanted a history course that was “not constructed,” they would have no history course at all, because all history is constructed.
I suspect that Florida Republicans were making a political point in their educational law, not an epistemological or disciplinary one. They were implicitly attacking the ideological leanings of actors who tend not to vote for them, parroting the tired old line that college professors are contributing to the decline of America by indoctrinating students with postmodernism, Marxism, and liberalism. As I pointed out to intellectual historian Tim Lacy, whose recent blog post contained sections that reminded me of Norton’s, it is unsurprising that calls for history to be “fact-based,” divorced from analysis, interpretation, meaning, and value judgment, and couched in the language of objectivity, most often emanate from conservatives. In my experience of teaching history for over twelve years, I have found that it is most often conservatives who tend to gravitate toward old-fashioned historical narratives that are euphemistic, celebratory, prideful, romanticized, triumphalist, and nationalistic.
So when academic historians arrange facts in ways that do not celebrate their heroes, conservatives’ first impulse is to kill the messenger. If academic historians expose the ugly sides of tradition, militarism, conformity, patriotism, racism, inequality, organized religion, and the business community, however grounded in facts and empirical evidence their accounts may be, conservatives shut down, revealing their latent authoritarianism. Conservatives’ insistence that history remain objective is, itself, anything but objective.
Let us consider some facts about the Reagan presidency to illustrate how all history is constructed. Since any professor who teaches the survey of modern US history from 1877 to the present has to cover so much material in one semester, s/he necessarily has to choose some bits of evidence while excluding others. In lecture, the professor presents the following facts:
– the number of workers in labor unions as a percentage of the overall workforce dropped;
– stock buybacks, which used to be banned by the SEC as market manipulation prior to the 1980s, became more common due to Reagan’s policy of deregulation;
– income inequality, as measured by the percentage of national income owned by the top 10% of Americans, increased;
– compensation in the form of wages remained relatively flat in spite of higher productivity.
Even before any formal analysis or interpretation has taken place, the professor has already revealed her/his “bias.” (I prefer the term “informed perspective,” since calling someone “biased” seems to be an excuse for dismissing their views). Even the decision to talk about Reagan’s impact on the broader structure of the political economy reveals a certain politics and judgment of value. Why? Because the professor could have chosen to talk only about the Moral Majority, the end of the Cold War, Reagan’s personality and style of communication, the Iran-Contra scandal, and so on. This concept is even more important to understand in the age of digital technology, where more facts are available to us than ever before. The historian is necessarily obligated to choose only a few facts out of a seemingly infinite number, and in doing so, the historian is guided by an ideology that belies the notion of value-free objectivity.
Historians, of course, do not just choose and recite facts. There is a great deal of artistic creation involved in our craft. Those who call for historians to be more objective seem to be under the mistaken impression that getting the facts straight is the “be-all/end-all of good historical thinking,” as Lacy wrote. This assigns an incomplete and unfulfilling role to historians. Imagine that a student, after copying down the facts about wages and inequality, raises her/his hand and says, “Professor Campbell, do you think these developments were good or bad for the US?” Here the conversation gets interesting because we are asked to make a value judgment. If you are the professor in this situation and your response is to throw up your hands out of some ostensibly purist vantage point of objectivity, exclaiming that it’s not the role of a professor to impose a moral standard on the past, then what on earth are you doing in front of the classroom? Are you merely regurgitating random facts from an encyclopedia? Facts alone do not inspire people. Good historical thinking, research, and teaching demand that we not just list the facts, but derive meaning from them.
So to come up with a sensible interpretation, a historian might say something like, “Since the 1980s, the federal government has prioritized the interests of Big Business and the wealthy over the interests of the lower and middle classes.” Or “in the last four decades, America’s wealthy have waged a largely successful class war, redistributing wealth upwards.” If the narrative you choose to write is “Americans were liberated from oppressive, inefficient, and wasteful government, staffed by mindless bureaucrats,” you can bet you’re going to encounter stiff pushback from most of the professoriate for resorting to such dogmatic slogans. If your narrative is that “since the 1980s, Americans have enjoyed greater freedom and prosperity,” a historian might naturally ask: freedom for whom? And did that freedom come at the expense of other segments of the population or finite resources in the environment?
Those who call for historians to be more objective, moreover, do not always account for how frequently the terms of debate shift over time. In 1960, it may have been considered reasonable to put forth states’ rights, and not slavery, as the chief cause of the American Civil War, but scarcely any historian today would do so without being laughed out of the room. Or take health care policy. In the early 1970s, Senator Ted Kennedy proposed a single-payer, national health care system that would have achieved universal coverage. The conservative alternative from the Nixon administration would have required employers to provide health insurance and offered federally financed coverage to many low-income Americans. Since then, the terms of debate have shifted markedly to the right, based in no small part on the construction of a permanent and well-funded media infrastructure that convinces millions of gullible voters that modest health care reforms subsidizing the private, for-profit system are somehow totalitarian takeovers of free enterprise itself. Anyone assuming that there is an inherent virtue in picking the 50-yard line between the two end zones occupied by the Democratic and Republican parties, without considering the long-term historical trajectory of the nation’s political economy, is missing the forest for the trees. The 50-yard line of the early 1970s would effectively be the 20-yard line today.
Indeed, the sheer number of instances in which academics and journalists have cloaked their ideological positions within the authority of objectivity makes one wonder whether objectivity is little more than a self-serving justification. What does it say about objectivity, Novick wondered, that the early-twentieth-century historians who wrote the most respected histories of Reconstruction are considered racist and old-fashioned today, or that antiracist narratives were dismissed as crude and amateurish? Who would be surprised that historians who considered themselves objective had strong stances on mobilizing for WWI, isolationism, responsibility for the Cold War, Vietnam, and segregation? A few months ago, I met a self-described libertarian economist who had never heard of the term “Smithian,” praised the plutocrat-friendly and deficit-inducing tax cuts of the Trump administration, and, most shockingly, was not sure whether human-caused climate change was a serious problem. Presumably because he had not personally researched the topic, he could not comment authoritatively on it. My jaw dropped. Upon completing their doctoral degrees, economists make about $20,000 more per year in academia than historians, partially because the former have more opportunities in the private sector, meaning that public employers must offer higher salaries to attract top talent. But I’d also argue that another part of this pay disparity stems from economists having managed to convince the rest of society that they are practicing an objective science. As someone who has studied quite a bit of economic history, I get a bit incensed when economists—a notoriously pro-business, technophilic, and anti-union crowd—assume the mantle of objectivity. Especially when it is not deserved.
Defending Objectivity in a Postmodernist World
If objectivity in any absolute sense is illusory, should we assume that every argument deserves consideration? When phrased in this way, the obvious answer is “no.” Just look at the comments section of any article to see how cavalierly hard-earned expertise can be dismissed out of hand. Appleby et al. argued for a middle ground of “practical realism” situated somewhere between absolute objectivity and the moral relativism ushered in by postmodernism. The authors declared, “[w]e are emphasizing the human need for self-understanding through a coherent narrative of the past and the need for admittedly partial, objective explanations of how the past has worked.” As impossible as it is to access the full richness of past experience, the storytelling at the heart of history is still essential to our social and individual identities. The main critique of postmodernism that Appleby et al. underscored was that it is nihilistic. Its chief purpose is to deconstruct models, not to offer any meaningful alternative.
Some prominent intellectuals continue to make the case for the admittedly imperfect goal of objectivity. Stanley Fish is one of them. In a lecture at UCSB I attended when I was in graduate school, Fish unpacked the pitfalls of academic freedom, criticizing professors who relished the freedom to say whatever they pleased in the classroom while failing to be scholarly. A professor who constantly opined about the mendacity and corruption of the Bush 43 and Trump administrations would be committing academic malpractice according to Fish unless the professor had developed a specific area of expertise in these topics, presumably backed by significant publications in peer reviewed journals.
I remain as unconvinced of Fish’s position as I was when I attended his talk some ten years ago. If I understand Fish’s position correctly, the only issue on which I am qualified to speak confidently is Jacksonian-era politics. Anything else would be venturing too far afield from my expertise. Suffice it to say that I find Fish’s prescriptions too restrictive. I wouldn’t make a very good professor if the Bank War were the only topic on which I could express an informed perspective. The skills I developed in writing my book can be, and have been, used to research other topics I find interesting, whether human-caused climate change or the international gold standard.
Most are not nearly as rigid as Fish. In response to Novick’s work, Thomas Haskell argued that objectivity, if properly understood, was a worthwhile pursuit. Haskell maintained that if political values came into conflict with intellectual values, there was no question which side should win out. Whereas Novick held that drawing any line between politics and scholarship was, itself, a political perspective, Haskell expressed concerns over abandoning objectivity completely. The public would refuse to continue to finance universities with taxpayer dollars, Haskell reasoned, if professors gave the impression that they were all committed propagandists.
Even if objectivity is philosophically impossible, it is undeniable that we are called on to be fair and reasonable in numerous aspects of our lives. There is a reason some of us get irritated when a story is told inaccurately, especially about ourselves. We are expected to be fair and dispassionate when we consider tenure and promotion; when we grade essays where students stake out a position with which we may personally disagree; when we interact with critics of our work. Tenure and peer review, as imperfect as they are, would not exist without a commitment to fairness, impartiality, honesty, and accuracy.
Accordingly, when we write and evaluate history, we should prioritize accuracy. Taking the postmodern view to its furthest logical conclusion—essentially, that all arguments are political, ideological, rooted in context-specific discourses, and morally relative—would rob us of the ability to distinguish intellectually rigorous, complex, and evidence-based conclusions from absurd ones. As much as I am inclined to favor Novick’s position over Haskell’s, I must admit that Haskell made a compelling point here.
One reason that Haskell’s argument holds merit is that, as the title of his book suggests, objectivity is not neutrality. Most people conflate the two. When (mostly conservative) critics complain that professors are not objective, what they really want is not just for historians to withhold moral judgment about the past, which is certainly defensible, but to be politically neutral in virtually all matters. This would spare conservatives from the glaring embarrassment that it is quite possible to tell, without abandoning any intellectual scruples, that one of our two major political parties, more than the other, enthusiastically embraced the Iraq War; that one party denies the seriousness of human-caused climate change; that one party is comfortable erecting onerous, racially discriminatory barriers to voting in order to win elections; that one party actually thought a deranged and mendacious sociopath, serial sexual assaulter, and racist bully would make a good president.* For Haskell, there is nothing to admire in neutrality purely for neutrality’s sake and no contradiction between historical objectivity and a strong political commitment. One recently published article made the case that political neutrality and avoiding controversy shortchange students, and, predictably, the conservatives in the comment section retreated to the tired old line of “bias” and presenting “both sides,” demonstrating that they had completely missed the author’s point.
Corporate, mainstream media may provide the best analog to the confusion of objectivity with neutrality that Haskell described. Too often the mainstream media prioritizes ratings, advertising revenue, entertainment, consumerism, and short-term thinking at the expense of the public good and the information required for healthy democracies to endure. False objectivity, one of the most important concepts I learned as an undergraduate, occurs when the media devotes equal time to two sides in a debate when expertise based on verifiable evidence clearly leans toward one side. It is false because it fails to consider the merits of each side, assuming that both are equally virtuous or flawed. We saw false objectivity at play when CNN pitted the overwhelming scientific consensus on climate change, personified by Bill Nye, against Marc Morano, a used car salesman and mouthpiece for the fossil fuel industry. Or when the news media in the 2016 presidential campaign obsessed over the Clinton “email scandal,” making it the most widely reported story and a significant factor in the disastrous election results. The 50-50, both-sides arrangement poisons the public discourse not only by distorting reality (to the extent that we can access it), but by actively misinforming the public and by giving a structural advantage to the side that lies more often.
Objectivity as One of Many Goals in the Classroom
Let’s imagine that a professor receives a comment on a student evaluation form that goes something like this: “This professor is extremely biased. His lecture on the Vietnam War shows that he hates America. I want to know more about what made America great.” How does the professor respond, if at all? Sadly, in this current consumer-driven business model of higher education, where the majority of professors are not tenured, the path of least resistance is to not bring up politics in the future. But regardless of what is expedient, what do we make of the merits of the student complaint? Not much, I’d say. Here is how the professor might respond:
- If you want a professor to consider your point of view, you have to use evidence. The bare claim that a professor is biased strikes me as an emotional reaction and an intellectually lazy cop-out that does not truly engage the evidence. So I’ll issue a challenge: find better evidence. Explain to me how the evidence (or the interpretation based on the evidence) is flawed. Has the professor ignored or minimized certain sources? Has the professor over-emphasized some facts? Are there alternative ways to interpret the sources and facts we’ve learned?
- “What made America great” is a shallow, vapid, and trite cliché. At a minimum, we want college students to express ideas in ways that are more sophisticated than a slogan or bumper sticker. The MAGA slogan also forces us to ask, when exactly was America great and did everyone share in that greatness?
- Every lecture, including the one on the Vietnam War, is based on a bunch of facts that can be verified. But facts alone are insufficient. Since there are almost an infinite number of facts to choose from, I have to use my training as a historian to select which facts I deem the most relevant or intriguing. I then interpret the facts, pooling them together into a narrative designed to ascribe meaning. Every historian engages in this process in one form or another. If that process results in an interpretation that does not exalt America’s heroes or promote conservative values, that may reveal something very telling about conservatism in its modern iteration.
- I recognize that it made you feel angry, or even ashamed, to learn that US soldiers killed women and children at the My Lai massacre in 1968; or that the US military burned down entire villages in Vietnam; or that American companies like Monsanto and Dow Chemical manufactured toxic defoliants like Agent Orange and napalm that left untold numbers of Vietnamese and Americans with long-term health problems like birth defects and cancer. But what if you continue to take classes and, upon graduation, realize that four or five different professors all said the same thing as I did about Vietnam? Would you then admit that I had a point?
- Your comment raises a very important question about the goal of a history class. Should it be to promote national unity? Much like objectivity, nationalism was popular in the nineteenth century and it contained a lot of flawed assumptions that many of us would find repulsive today. Nationalism tends to promote conformity and suppress individuality. It otherizes, excludes, dehumanizes, distracts from economic concerns, and engenders obedience where it is not deserved. A historian should strive for truth, not what makes people feel good about themselves (see a previous post about what I call the Disney or GI Joe version of history).
- There’s a level of immaturity to this comment. I lectured on the foreign policy mistakes of the Vietnam War and this led you to conclude that I hate America? If you look at the totality of American history and cannot admit that Americans have made many mistakes, I’d submit that you don’t have a very sophisticated view of the world. It’s part of the growth process that we admit mistakes and try to learn from them. Mature countries should have the same view of mistakes as mature individuals.
- It’s part of the professor’s job to expose you to new ideas. What would be the point of going to college if we only reinforced things you’ve already learned?
- There is no obligation to teach “both sides” if both sides are not evidence-based and intellectually rigorous.
- Even if you discover that a professor is liberal, communist, libertarian, or centrist, it does not necessarily mean that they can’t do their jobs well.
- Even judges, whom we expect to be objective and unbiased, are expected to hold views on the great issues of their day. What we ask of them is self-control and openness to rational persuasion, not self-immolation.
- Beyond just regurgitating facts, what I hope you get out of my class is a way of thinking about the world that demands evidence. This doesn’t mean everyone has to agree, but it does mean there is a minimum standard of evidence for plausible arguments. We are preparing you to be citizens in a democracy and workers in a complex world.
- Experts can be wrong, but it is entirely possible that a scholar who has spent a lifetime studying a subject knows more about the topic at hand than you do!
- Sometimes true learning only comes from a sensation of being uncomfortable and the creation of tension in the mind.
- As Novick wrote, objectivity is one of THE central questions of what it means to be a historian. If you accuse a historian of not being objective, you’re making a charge as serious as saying a doctor doesn’t care about a patient’s health or a soldier doesn’t care about courage.
- It might be helpful to look up America’s long history with anti-intellectualism. Some criticisms of professors are valid. Others are put forth in bad faith. A very pernicious and toxic form of anti-intellectualism currently inhabits many quarters of the political right. Most dishonestly propagated by the likes of Newt Gingrich, this anti-intellectualism maligns professors as “coastal elites” who do not understand “real Americans.” Unpacking all of the various ways in which “elite” is a woefully inaccurate term would require its own separate post.
- As the historian Jonathan Wilson said, every lecture is an opportunity to build an argument. I make arguments in lecture based on facts and evidence in a manner that is not too dissimilar from the way you use primary sources to write an argumentative essay.
Some of these responses may strike other professors as overly direct, perhaps even harsh. But I would argue that the increasingly privatized model of public education that has taken hold over the last few decades, a model that devalues the labor and artistry of teaching, particularly in universities' reliance on part-time labor, demands a bold response. I have seen firsthand how some students assume the mentality of a customer. Now there are infamous "watchlists" that try to get professors fired, leading one to wonder how far we have come since the McCarthy-era loyalty oaths. Not only is there a stultifying lack of imagination in how a university could be conceptualized as a public good, fully funded through taxes without any gimmicks over fees, but students may not even realize that the system that saddles them with decades of crippling student loan debt is the very same system that encourages them to view themselves as customers. We are talking about a system in which adjunct professors with doctoral degrees live in their cars or must choose between paying rent and buying the health care their employers do not provide. Thus, if a student recoils at student loan debt, I would argue that, to be consistent, they should also reject the customer mentality. When academia is run like a business, professors internalize the idea that they should compromise their teaching ethics so as not to offend customers.
To conclude, for the reasons that Haskell and Appleby et al. have identified, I am unwilling to dispense completely with objectivity as a worthy goal for historians. But it is only one goal. Alongside objectivity, a historian must weigh the complementary and sometimes competing goals of value judgment, aesthetic taste, skepticism of the ideological underpinnings of the production of knowledge, storytelling, democratic inclusion, equity, adherence to evidence, and learning from past mistakes. If objectivity means thoughtfully considering multiple perspectives based on verifiable evidence, then I embrace it. If it means erecting false debates or kowtowing to a consumerist business model antithetical to what education ought to be, then count me out.
Stephen W. Campbell is a lecturer in US history at Cal Poly Pomona and the author of The Bank War and the Partisan Press: Newspapers, Financial Institutions, and the Post Office in Jacksonian America, published by the University Press of Kansas in early 2019. Follow him on Twitter at @historian_steve.
 Louis F. Budenz, “Do Colleges Have to Hire Red Professors?” American Legion Magazine, November 1951.
 Joyce Appleby, Lynn Hunt, and Margaret Jacob, Telling the Truth About History (W. W. Norton & Company, 1994), 15.
 Ibid., 52.
 Ibid., 15; Peter Novick, That Noble Dream: The “Objectivity Question” and the American Historical Profession (Cambridge University Press, 1988), 2; Thomas L. Haskell, “Objectivity is Not Neutrality: Rhetoric vs. Practice in Peter Novick’s That Noble Dream,” History and Theory 29, no. 2 (May 1990), 131.
 Haskell, “Objectivity is Not Neutrality,” 144-145.
 Appleby et al., Telling the Truth About History, 202.
 Ibid., 215.
 Novick, That Noble Dream, 6.
 Appleby et al., Telling the Truth About History, 216.
 Novick, That Noble Dream, 14; Haskell, “Objectivity is Not Neutrality,” 137.
 Appleby et al., Telling the Truth About History, 229.
 Ibid., 230-250.
 Haskell, “Objectivity is Not Neutrality,” 151.
 Ibid., 132, 143, 151.
 Ibid., 143, 149.
 If you can’t tell why this president* deserves an asterisk next to his name, you haven’t been paying attention.
 Although Haskell and Novick shared quite a bit in common in terms of their attitudes toward objectivity, Haskell believed that Novick confused objectivity with neutrality. Haskell, “Objectivity is Not Neutrality,” 134-137.
 I have received comments that resemble this quote.
 Haskell, “Objectivity is Not Neutrality,” 139.
 Novick, That Noble Dream, 11.