December 4, 2016

Expertise, Neutrality, and the Liberal Academy

One of the things I (and many others) have been contemplating in the aftermath of the election is the role of expertise (and of experts) in democratic governance, and how this question about the role of expertise might relate to the role of the university. In Plato's Apology, Socrates observes that the Athenians regularly elect only qualified experts as generals, yet on most political questions they act as if every citizen were equally expert. Surely, Socrates suggests, this is not the case. Surely when it comes to the well-being of the city there is such a thing as expertise and, as in every other area, the experts are few and the ignorant are many. In the Republic and other writings of Plato this becomes an argument for a kind of technocracy, a rule by the expert few over the many. After all, even if the many are committed to choosing experts to lead them, how can they, not being experts themselves, identify who the experts are? If we want leaders who know what they're doing, democracy is not the answer.

While this last statement is amply supported by the outcome of the recent election (even if you are a Trump supporter, it should be obvious that Clinton has far more experience and expertise in government), there's actually a great deal to be said for the model of the Athenian generals. That is, there are reasons in favor of allowing the inexpert masses to choose which experts they want to lead them. In the absence of such a system, there is a strong tendency for the culturally recognized experts to become their own party governing for their own benefit. Less competent government truly directed at the good of the whole may be better than very competent government intended for the benefit of the few. Thus the difficulty is in finding leaders who both know what they are doing and have the good of the whole in mind. This is especially difficult in the realm of bank regulation, for instance, where it seems that almost everyone who really understands the system has at some point worked for Goldman Sachs! This creates at least the appearance that what is in the best interest of the experts in this area may not be the same as what is in the best interest of the nation as a whole. When Trump supporters talk about 'corruption in Washington' and 'draining the swamp', I suspect that what they mean is that they think most or all of the federal government is like this: there is a class of 'experts' or 'elites' governing for their own benefit.

What this shows, I think, is that a functioning democracy requires a degree of trust between the people and various experts: the Athenians needed to believe that the military experts had the requisite expertise and would use it for the benefit of the city. The military experts in turn had to earn that trust. In my view, this is not a bug; it's a feature. The need for experts to earn the trust of the people keeps them accountable in a beneficial way. It helps to fight against the tendency for the experts to form their own party with its own interests. When this trust breaks down, the people are forced to choose between an expert who (they believe) does not have their best interests at heart and a non-expert. It also, of course, increases the risk that the people will be deceived into choosing a non-expert in the belief that he or she is an expert.

I doubt if there is any one 'silver bullet' explanation for the surprising outcome of the election. In any event, I'll leave such explanations for actual experts! One thing that clearly has happened is that the erosion of trust in experts that has been underway in America for quite some time has come to a head. Many Americans feel that most of the experts can't be trusted—not because they don't really know what they're talking about, but because they've been somehow corrupted or compromised. It's hard to identify all the causes of this breakdown of trust. Certainly attacks on the credibility of scientists by religious conservatives have played a role, as have similar attacks launched by certain industry groups (tobacco, petroleum, etc.). A factor that I am willing to bet played a big role among the Rust Belt voters widely believed to have swung the election is the disconnect between national economic metrics and their own experience: if the 'experts' on TV are saying the economy is great but the economic conditions in my town are dismal, then the outcome the 'experts' consider good must not be what's good for people like me.

(I want to pause for a moment to note two things about this last observation. First, this is a perfectly good exercise of critical, independent thinking. Second, though, I suspect that the 'people like me' part of that thought does a lot to explain the racist/sexist/xenophobic elements of Trumpism. The experts keep saying things are great, but things aren't great for people like me (whoever that may be), so they must be great for somebody else. Who? Of course, while the initial conclusion that the experts aren't taking adequate account of the conditions of certain groups is perfectly reasonable, to respond by scapegoating other groups is both irrational and immoral.)

What is the place of the university in all this? One of the aims of the university is to produce and credential experts. In saying this, I of course recognize that by 'experts' we don't mean 'people with degrees'. Rather, by 'experts' we mean people who actually know what they are doing, and the university tries to teach people to know what they are doing and issues credentials to recognize that people have learned this. Like any human institution, the university is imperfect, and even if it were perfect there would be people who gained expertise outside that system. Indeed, there are varieties of expertise that, for the most part, universities don't even try to teach. But if the university functions properly, then people ought to be able to go to the university in order to become experts, and they ought not to receive degrees unless they succeed in becoming experts. Thus (and I hope I am just belaboring the obvious here) insofar as the university does its job, one ought to be able to conclude from the fact that someone has a degree that she or he is an expert in the relevant field, but no matter how well the university does its job one will never be able to conclude from the fact that someone doesn't have a degree that he or she is not an expert.

The erosion of trust in experts is in part an erosion of trust in the university. Universities have for a very long time had a reputation as hotbeds of liberalism, and there are indeed studies showing that students tend to be more liberal at the end of their degrees than they are at the beginning. (I was one of those students; more on this in a moment.) If we take 'conservatism' to mean 'keeping things the same' and 'liberalism' to mean 'changing things', then it is inevitable, and not in any sense a problem, that universities are more liberal than the surrounding culture. Universities are, by design, places for new ideas to be explored, but it's for the best that not every idea that gets explored in the university gets taken up by the broader culture! If, however, we take 'liberal' and 'conservative' as indicating more specific political ideologies or perspectives on the world, then ideological uniformity in the university, to the extent that it really exists, is indeed problematic. Further, even if the university is a place for exploring new ideas, it had better not be a place for forgetting, ignoring, or ridiculing old or established ideas. It should be a place for critically examining old or established ideas along with the new ones. That means treating them respectfully.

(Another brief digression: one of the really difficult things we philosophy professors are forever trying to teach our students is that it is possible to criticize a position respectfully. Indeed, subjecting a position to reasoned criticism is far more respectful—it takes the position more seriously—than ignoring it completely or shrugging and saying 'to each their own'. To subject a position to reasoned criticism is to acknowledge that one needs reasons for rejecting it—it can't just be rejected out of hand—and to do this is to take the position seriously and treat it with respect.)

Many people believe that the university has a shared ideology of political liberalism, cosmopolitan multi-culturalism, and militant secularism from which dissent is not permitted, and this perception is also a contributing factor to the breakdown in trust of experts. At least this much is true: the average or typical university professor is more liberal, cosmopolitan, and secular than the average or typical American. As for the claim that dissent is not permitted, all I can say is that this has not been my experience at all. Others have reported different experiences, but at least when it comes to philosophy I think this often (probably not always) rests on a misunderstanding of how philosophy works. As a Christian philosopher I'm often asked by Christians whether I find academic philosophy hostile, and I'm always explaining that, while most philosophers think Christianity is a crazy view of the world, philosophers generally respect people who do a good job defending views they think are crazy. (Oh, modal realism...) For those who are not used to this sort of thing, the sort of criticism these philosophers level at such views can look like hostility and exclusion of dissent, and this appearance can be amplified by the unnecessarily combative tone in which (specifically analytic) philosophy is often conducted. However, it has been my consistent experience, not only with religion but also with ethics and politics, that philosophers really do respect those who defend views they regard as crazy. I used to be a libertarian and still endorse a view in that general vicinity, and I've always found that philosophers want to know why I endorse this position.

However, although I haven't personally felt demeaned or excluded for holding insufficiently liberal/secular views, I do think there are two other non-rational factors that can lead to convergence of views. One of these is just finding oneself in a new peer group, which leads to a new set of pressures. As someone who entered college with a very different set of views than I now hold, I do worry about this one. I spend a lot of time reflecting on and critically examining my beliefs (like it's my job, you know), and I am able to explain, to my own satisfaction, the reasons and arguments behind my most important changes in belief. But it's also true that there's been a change in which views strike me as intuitively crazy and which ones seem like common sense, and I suspect that being in a different peer group where different views are taken seriously is a big part of the reason for this shift. I don't know what to say about this other than that it's a reason for not relying on intuitions any more than we have to.

The second issue is even more difficult to address. This is the impossibility of what I call 'higher-order neutrality'. I've been thinking about this in connection with teaching and also with news coverage for quite some time now. The idea is this: in discussing controversial issues, we want professors and reporters (among others) to be neutral. This means laying out the two (or more) sides of the controversy fairly, choosing knowledgeable and articulate representatives of each side, and giving those representatives the freedom to make their case. In the news context the 'representatives' would typically be the experts or activists interviewed. In the teaching context, they might be authors assigned or they might be particular arguments for a view or particular versions of a view. Thus, for instance, when teaching philosophy of religion, I want as much as possible to present my students with the very best arguments for the existence of God, the very best arguments against the existence of God, and the very best criticisms of all the arguments we discuss.

However, and this is the problem I want to point out, the question of neutrality arises again at a higher level, and there's no way of being completely neutral here. By setting up the debate as between (traditional, Western) theism and atheism, I'm already making a judgment about which views need to be taken most seriously. Then I have to make a judgment about which are the best arguments for and against, and which authors give the best presentations of those arguments. Indeed, one is already exercising judgment and not being 'neutral' in judging that an issue is controversial. Thus, for instance, news reporters have to make a judgment about whether to treat climate change as an ongoing controversy (since the opinions of politicians and the general public are divided) or as an established fact (since a pretty robust consensus exists among experts). Note further: we can't treat everything as doubtful and subject to debate in a given context. As Wittgenstein observed, "If you tried to doubt everything you would not get as far as doubting anything. The game of doubting itself presupposes certainty" (On Certainty, sect. 115). We need enough shared presuppositions to set up the debate. (In my actual classes, I often try to make this explicit: "the people we're reading are all assuming X. If you don't believe X, that's fine; there are legitimate worries and questions about X. But for now we're going to assume it and see where that gets us.")

My point is this: in order to be neutral (or "fair and balanced") in a particular debate, one must choose a particular debate, and that involves determining what to take as shared presuppositions and what to call into question. It also involves determining who the disputants are going to be. One can then be fair by trying to present each of the disputants in the best possible light, and making sure to give each disputant room to make his or her case. This kind of first-order neutrality can be achieved only by making higher-order judgments about the debate, which will often involve taking a stand that may turn out to be controversial among one's students or audience.

It seems to me that recognizing the impossibility of higher-order neutrality and conducting ourselves carefully in the higher-order judgments we make is crucial to the academy (and the media) regaining the trust of the American public and ceasing to be regarded as partisan. Unfortunately, there are no easy answers here. There are some cases where there is a robust expert consensus in favor of certain claims, but those claims may be controversial among our students, and our adhering to a certain view may be seen as political/religious/cultural bias. I would take evolution, climate change, and the lack of a vaccine-autism link to be in this category, for instance. (Note that the last example is a case in which most of the people rejecting the expert consensus are on the political/cultural left.) There are other cases where, given the moral and political beliefs people already have, if people knew the facts most of them would probably come to the same conclusion. I suspect that the use of solitary confinement in American prisons might be an example in this category: regardless of political positions, I expect most people would find this practice objectionable if they knew how it was being used and what psychologists have discovered about its effects. Probably the consensus would be even stronger if you could add in information and evidence regarding the number of innocent people incarcerated and racial disparities in sentencing. You could talk about an expert consensus among ethicists here, I suppose (I'm not sure how many ethicists have written about this, honestly), but really the situation here is that once the facts are in, the moral judgment is not difficult. My changes in belief have included items in both of these categories, though not only in these categories.

On the other hand, there is not and presumably never will be any kind of robust expert consensus on the most basic moral, political, and religious questions. If there were such a consensus, we should be suspicious as to how that consensus came about. Because not all partisan issues have the same kind of relationship to verifiable facts as those mentioned in the previous paragraph, I'm doubtful that whatever consensus may exist within the academy on broader moral, political, and religious issues is due to reason and evidence. As a result, I think we should, where possible, avoid including positions on these broader issues among our presuppositions. Returning to my own experience, probably the moment in my education when I (as a libertarian coming from a majority-Republican rural area) felt most excluded from the realm of the acceptable was in an undergraduate philosophy seminar on justice where it seemed to me (as an undergraduate) that the authors assigned spanned the range of political views from moderate Democrat to Marxist. The professor, to his credit, allowed me to voice fundamental criticisms of these approaches and did not in any way hold what I said against me. But the design of the class set my point of view outside the debate we were having (and it was not clear from the course description that this was going to be the case). Again, we can't possibly avoid doing this to some views. What I'm suggesting is that we should try (where possible) not to do it to views that we can predict some of our students will endorse, and we should especially try to avoid doing it to all views that are perceived as being on 'the other side' from the perceived academic consensus.

Pulling this all back together: experts have lost the trust of the American people because they are perceived as partisan and self-interested. There is some truth in this. Experts in general, and universities in particular, need to earn that trust again. In order to do this, we need to demonstrate understanding of and respect for values and perspectives contrary to the perceived liberal academic consensus. This doesn't mean showing respect for racism, nor does it mean ceasing our attempts to combat ignorance. It does mean showing awareness that there can be conservative views that are not based on racism or ignorance or anything like that, and including these views in the curriculum where appropriate. Of course, plenty of professors already do this. But I do think we need to recognize that whenever we don't it reinforces the perception that conservative views are not welcome. This perception in turn feeds the narrative that experts are partisan, which prevents experts from playing the role our democracy needs them to play.

Posted by Kenny at December 4, 2016 9:18 AM