
Saturday, 14 March 2015

Disciplinary Epistemologies 101

Since there have been universities, there has been a crisis in them. We should probably look at the more recent hand-wringing about universities teaching students relativism in the light of recurring accusations that universities corrupt youth, but I’m not going to do that analysis here (or ever, even). Instead I’m going to tell you about my Research Methods class.

1.

I had recently read that article for Ethika Politika in which Margaret Blume worries that varied distribution requirements at Yale University—some humanities, some social sciences, some hard sciences, a language or two, etc.—neither give students the sense that the different disciplines could speak to each other nor provide them with a framework in which to organize themselves; with this in the back of my mind, I was listening to my Research Methods instructor talk about the positivist underpinnings of quantitative research and the constructivist underpinnings of qualitative research. Putting the two together, I thought: Hey, maybe what Yale—and UBC, and whoever—needs is a mandatory introductory Research Methods and Disciplinary Epistemologies class.

My experience in undergrad left me with the acute sense that almost none of my peers knew why their own disciplines did things the way they did them, let alone why other disciplines made different decisions. No science student, for example, could tell me why they wrote everything in passive voice, and so they were generally immune to my editorial tirades about 1) cacophonic language and 2) awful epistemology re: denying that the Observer Effect exists.* Students in the humanities weren’t much better; in English, for instance, theory courses were optional for many students, and not all those on offer were great. The only ones who seemed to know these sorts of things were grad-students-to-be or people who took Introduction to Philosophy and listened to the professor.

So if the problem, as Blume would have it, is that undergraduate students have no idea how to put the puzzle pieces together, it seems like a Research Methods and Disciplinary Epistemologies class would be a great solution. I don’t agree, actually, that quantitative research implies positivism and qualitative research implies constructivism—that’s a long discussion, but suffice it to say that I’m doing mixed methods research right now—but that’s the sort of conversation that might put the pieces together. Getting even a rough big picture of how the disciplines relate to one another would really help.

Now, there are some problems with this course, logistically. There are really only two people who I’d trust to teach the course: myself and my first-year Philosophy professor. There are probably others here and there, but that’s still a low enough percentage of the people I know that it’s worrisome. Maybe there’d need to be a set curriculum. The issue is that I trust neither insiders nor outsiders to teach a discipline’s epistemology; you’d probably have to have one of each. Maybe there could be modules: one professor handles the etic approach, and guest lecturers handle the emic approach.

2.

This lovely daydream lasted perhaps five minutes before I remembered that I had been a Teaching Assistant for a mandatory Intro to Literature class, and I had sworn off the idea of classes mandatory for all students then and there. You can lead a horse to water, they say, but you can’t make it drink; in my experience, quite a lot of horses won’t drink precisely because you led them to water when they didn’t want to be led. These we’ll-make-them-learn-these-things-by-making-it-mandatory schemes rarely work.

I have heard of exceptions, where a professor and batch of TAs manage to get all or most of the students into the humanities, at least in heart if not in enrollment. But this seems to require a dream team of excellent professors, excellent TAs, and excellent students; rarely do you get two, let alone three, of those requirements. In the end, you just can’t force students to accept what they’re not willing to accept.

And it occurred to me, too, that a lot of the content I’d want to teach might be well over the heads of most first-year students. It took me years to understand existentialism and Buddhism and constructivism after I first met them as a first-year student, letting them slowly gestate long after I’d passed my Philosophy and Religious Studies finals, and I’m the sort of person who’s good at this sort of thing and won’t leave it alone.

So I’m going to have to come out against mandatory courses in university, no matter how well intended. I don’t think they do what we want them to do. But maybe we’re just doing them wrong?

3.

Leah wrote that maybe the framework-building should be extracurricular anyway, and I’m inclined to agree. Classes might not create incentives for truth-seeking; they are good at creating incentives for building skills and mastering material, but I don’t know how you could grade someone on whether or not they are right, on whether their commitment to their values is authentic, on how justified their decisions are. And if you aren’t grading students on something, not many of them are going to do it. We should encourage big questions in the classroom, but we can’t expect students to find them there.

And maybe casting students into a sea of relativism for a while is good for them, as I mentioned in my last post, so long as we give them some sense that they can and should and must get themselves ashore. We can’t get the students ashore for them, more than likely, and while we should think of ways to help them do so, the best method might just be for professors to model evaluativist thinking. For the most part they already do reward evaluativist thinking in assignments, since every disciplinary epistemology I’ve encountered has been thoroughly evaluativist; we needn’t make “evaluativist thinking” a formal requirement.

And the not-so-secret subtext of Blume’s article seems to be “every school should be a Catholic school,” so maybe I shouldn’t be taking it as seriously as a critique of university. For instance, Blume’s suggestion that only Catholic theology can tie together the disciplines is just silly: even if you spot her that Catholicism is true, it’s hard to deny that Islamic theology, Buddhist epistemology, and historical materialism have each been able to create a coherent, if not necessarily true, framework for all the disciplines.

But, anyway, it’s something to think about. I wouldn’t mind teaching a Disciplinary Epistemologies and Academic Research course; I just wonder who I’d teach it to.



* In case you too are unaware of the sciences’ use of the passive voice, I’ll explain it: the sciences use the passive voice (“The results were analyzed…” rather than “We analyzed the results”) in order to mask the researchers’ presence. In theory, the sciences say, the researchers shouldn’t matter; the passive voice removes their personalities from the procedures. Of course, some version of that claim is true, but not to the extent of removing the researcher from consideration entirely. The observer effect is often a serious one, and this grammatical elision hides the way researchers are involved in their research. Consider Nixon’s famous remark, “Mistakes were made”; passive voice is the mechanism by which responsible agents deny responsibility.
Moreover, the science students whose papers I edited never knew why they were supposed to use the passive voice, so they also never knew when they were supposed to use it. As a result, they used it in almost every sentence, even when it was confusing and unnecessary.

Sunday, 8 March 2015

A Mature Philosophy, Part II

Or, Personal Epistemology, the Perils of Education, and Two Ways of Not Being a Relativist

Knowledge is always uncertain, but some ideas are better than others. Evidence for propositions exists, but doesn’t that require claims about evidence for which we cannot have evidence? One-size-fits-all answers usually fit no one but the person who made them, but then physics decided to go and be all universal; if everything is just physics on a super-complicated scale, shouldn’t there be universal answers to all questions? I used to think the way to address these questions sat somewhere between modernism and postmodernism, or maybe through postmodernism and out to the other side, but that approach wasn’t generating very many answers for me and it certainly wasn’t working for any of my interlocutors. And then I discovered personal epistemology through my work as a research assistant and thought it was a helpful—though perhaps only modestly helpful—way of framing issues of knowledge and uncertainty and relativism and absolutism and people being not just wrong but annoyingly wrong.

So I chose personal epistemology as the topic for a class assignment.* Specifically, it was a literature review (in academics, a literature review is a summary of the published research on a topic: you review the scholarly literature). I learned a lot: my understanding of personal epistemology is much more nuanced now. More to the point, though, I read a study that almost destroyed my fledgling faith in the idea, until I realized there was a flaw in the study design (I think). Still, even if the study design is flawed, there’s an interesting implication which I want to explore here.

But first, I should do a better job explaining personal epistemology.

1.

Personal epistemology refers to the beliefs a person has about knowledge and knowing. William Perry coined the term in his 1970 Forms of Intellectual and Ethical Development in the College Years, a longitudinal study of college students’ epistemologies. Most versions of the concept retain some element of Perry’s emphasis on the development of these beliefs across a person’s life. That said, there are a lot of competing ways of modelling personal epistemology. I’m going to focus on my favourites; there are some models (see King and Kitchener, for instance, or Elby and Hammer, in the sources at the bottom) which are prominent enough in the field but which I don’t know well enough to discuss.

Barbara Hofer, sometimes in collaboration with Paul Pintrich, has a more synchronic model, which looks at specific beliefs people have about knowledge at one time. You can think of it as a photograph rather than a video: higher definition, but only for a single moment. Hofer in particular looks at two dimensions, each with two aspects, for a total of four epistemic beliefs: complexity of knowledge and certainty of knowledge (paired under nature of knowledge), and source of knowledge and justification for knowledge (paired under process of knowing). Any given person can hold a naïve version of these beliefs or a sophisticated version. For instance, a naïve belief about the complexity of knowledge would be, “Knowledge is simple”; a sophisticated belief about the complexity of knowledge would be, “Knowledge is complex.” You can also consider what’s called domain specificity: a person might have naïve beliefs about mathematics but sophisticated beliefs about psychology, or vice versa. (I italicized the jargon terms so you can identify them as jargon and not my own interpolation.) Hofer and others usually imply (or state outright) that sophisticated beliefs are truer and/or more desirable than naïve ones.
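
If it helps to see that structure laid out schematically, here is a minimal sketch of Hofer’s four beliefs and domain specificity as a toy data model. The class and field names are my own illustration, not anything taken from Hofer’s instruments, and the example profile is invented.

from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    NAIVE = "naive"                  # e.g. "knowledge is simple and certain"
    SOPHISTICATED = "sophisticated"  # e.g. "knowledge is complex and uncertain"

@dataclass
class EpistemicBeliefs:
    """One person's epistemic beliefs for a single knowledge domain."""
    # nature of knowledge
    complexity: Level
    certainty: Level
    # process of knowing
    source: Level         # received from authority (naive) vs. constructed (sophisticated)
    justification: Level

# Domain specificity: the same person can hold different profiles in
# different domains (the values here are purely hypothetical).
someone = {
    "mathematics": EpistemicBeliefs(Level.NAIVE, Level.NAIVE,
                                    Level.NAIVE, Level.NAIVE),
    "psychology": EpistemicBeliefs(Level.SOPHISTICATED, Level.SOPHISTICATED,
                                   Level.SOPHISTICATED, Level.SOPHISTICATED),
}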

Hofer’s model shows a lot of interesting differences between populations. Men tend to exhibit somewhat different epistemic beliefs than women do (generally more naïve ones), and students in different academic disciplines also tend to exhibit different epistemic beliefs. There are also cultural differences; indeed, as I understand it, Hofer is currently working on epistemic beliefs in different cultures.

Deanna Kuhn, on the other hand, is a scholar who looks more at the developmental side. Her scheme, like Perry’s original scheme, has stages that a person would ideally move through over the course of his or her life. That scheme looks like this:

Realists think assertions are copies of reality.
Absolutists think assertions are facts that are correct or incorrect according to how well they represent reality.
Multiplists think assertions are opinions that their owners have chosen and which are accountable only to those owners.
Evaluativists think assertions are judgements that can be evaluated or compared based on standards of evidence and/or argument.

Realists and absolutists agree that reality is directly knowable, that knowledge comes from an external source, and that knowledge is certain; multiplists and evaluativists agree that reality is not directly knowable, that knowledge is something humans make, and that knowledge is uncertain. The pairings split, however, when it comes to critical thinking: realists do not consider critical thinking necessary, while absolutists use critical thinking to compare different assertions and figure out whether they are true or false. Meanwhile, multiplists consider critical thinking irrelevant, but evaluativists value it as a way to promote sound assertions and to improve understanding.

(There are actually six stages, but I’ve conflated similar ones for simplicity’s sake, as Kuhn does fairly often. The six stages are named and numbered thus: Level 0, Realist; Level 1, Simple absolutist; Level 2, Dual absolutist; Level 3, Multiplist; Level 4, Objective evaluativist; Level 5, Conceptual evaluativist. The differences between the two kinds of absolutist and two kinds of evaluativist are less marked than the differences between the larger groupings.)

There is a certain amount of work in educational psychology trying to move children from lower stages to higher ones, but there are several challenges to this: for instance, teachers don’t have much training in this area, since traditional pedagogy is pretty thoroughly absolutist; there’s also the chance that children will retreat to a previous stage. Ordinarily, people develop because their beliefs about knowledge are challenged: when confronted by competing authorities, an absolutist worldview cannot adjudicate between them, and so the person will be forced to adopt new beliefs about knowledge. However, each stage is more difficult than the previous one, and there’s always a chance that a person will find their new stage too difficult and retreat to a previous one (Yang and Tsai). So if you’re trying to move children along from realism to absolutism to multiplism to evaluativism, you need to push them, but not push them too hard.

Kuhn does account for domain-specificity, too. People tend to use one stage for certain kinds of knowledge while using a different stage for another kind of knowledge. In fact, people tend to attain new levels for the different knowledge domains in a predictable sequence, though I’m sure there are exceptions: people first move from absolutist to multiplist in areas of personal taste, then aesthetic judgements, then judgements about the social world, and finally judgements about the physical world; they then move from multiplist to evaluativist in the reverse order, starting with judgements about the physical world and ending with aesthetic judgements (but not judgements of taste, which rarely become non-relativist).
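
Put in schematic terms, Kuhn’s scheme reads like a little state machine. Here is a rough sketch of the four conflated stages and the typical order in which domains make each transition; the encoding is my own illustration, not anything from Kuhn.

from enum import IntEnum

class Stage(IntEnum):
    """Kuhn's levels, conflated into the four larger groupings used above."""
    REALIST = 0
    ABSOLUTIST = 1    # covers simple and dual absolutist (levels 1-2)
    MULTIPLIST = 2
    EVALUATIVIST = 3  # covers objective and conceptual evaluativist (levels 4-5)

# Typical order in which domains make the absolutist -> multiplist move...
ABSOLUTIST_TO_MULTIPLIST = ["personal taste", "aesthetic judgement",
                            "social world", "physical world"]

# ...and the reverse order for the multiplist -> evaluativist move
# (judgements of taste are dropped because they rarely become non-relativist).
MULTIPLIST_TO_EVALUATIVIST = list(reversed(ABSOLUTIST_TO_MULTIPLIST))[:-1]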

Both of these models have pretty good empirical backing as constructs and can predict a number of other things, such as academic success, comprehension of material, and so on. I’ll talk about how they might interact later. For the moment, though, I’ll let it rest and move on to that study I was talking about before.

2.

Braten, Strømsø, and Samuelstuen found in a 2008 study that students with more sophisticated epistemic beliefs performed worse at some tasks than students with naïve epistemic beliefs. They were using Hofer’s model, as described above, and looked at college students with no training in environmental science. These students were given a few different documents about climate change and asked to read and explain them. Students with sophisticated beliefs about the certainty or complexity of knowledge performed better than students with naïve beliefs about those same concepts, as predicted. However, students with sophisticated beliefs about the source of knowledge—that is, students who believe that knowledge is constructed, that knowledge is something people make—performed worse than students with naïve beliefs in this area—that is, students who believe that knowledge is received.

This finding seems to be a pretty major blow to the idea that we should be trying to get people to adopt sophisticated epistemic beliefs. Specifically, Braten, Strømsø, and Samuelstuen suggest that sophisticated epistemic beliefs about the process of knowing are more appropriate to experts; non-experts in those areas would do better with naïve epistemic beliefs. Even if sophisticated epistemic beliefs are right, they tend to make people wrong.

At first glance, this makes a sort of sense. A lot of people making bizarre claims about the physical world—creationists, anti-vaxxers, and climate change deniers all come to mind—rely pretty heavily on a constructivist view of science in their rhetoric. Is it possible that these people all have sophisticated beliefs about knowledge, but since they are non-experts in these areas they tend to evaluate the evidence really poorly? Would they be better off with naïve beliefs about knowledge? I’m going to be honest: this bothered me for a few days.

And this isn’t just a small problem. As Bromme, Kienhues, and Porsch point out in a 2010 paper, people get most of their knowledge second-hand. Personal epistemology research has so far focused on how people gain knowledge on their own, but finding, understanding, and assessing documents is the primary way in which people learn things. So if sophisticated beliefs wreak havoc with that process, we’re in trouble.

However, I think there’s a problem with the 2008 study.

Hofer’s model of epistemic beliefs has just two settings for each dimension: naïve and sophisticated. But Kuhn’s model shows that people are far more complicated than that. Specifically, multiplists and evaluativists both believe knowledge is constructed, but they do significantly different things in light of this belief. As Bromme, Kienhues, and Porsch point out, the multiplists in Kuhn’s study have little or no respect for experts, even going so far as to deny that expertise exists; both absolutists and evaluativists have great respect for experts, though for different reasons. Multiplists tend to outnumber evaluativists in any given population, however, and not by just a little bit. (The majority—in some studies the vast majority—of people get stuck somewhere in the absolutist-multiplist range.) So if you take a random sampling of students and sort out the ones with sophisticated epistemic beliefs, most of them will be multiplists rather than evaluativists according to Kuhn’s scheme. It therefore shouldn’t be at all surprising to find that most of them will have trouble understanding documents about climate change: they aren’t terribly interested in expertise, after all. But evaluativists may still be perfectly good at the task; they’re just underrepresented in the study.
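
To make that selection effect concrete, here is a toy sketch in code. The population numbers are invented purely for illustration (they come from no study), and the only claim being modelled is that a sample selected for “sophisticated beliefs about the source of knowledge” pools multiplists and evaluativists together.

# Which stages count as "sophisticated" on Hofer's source-of-knowledge belief:
# multiplists and evaluativists both hold that knowledge is constructed.
source_belief = {
    "realist": "naive",
    "absolutist": "naive",
    "multiplist": "sophisticated",
    "evaluativist": "sophisticated",
}

# A hypothetical population in which multiplists far outnumber evaluativists.
population = {"realist": 5, "absolutist": 40, "multiplist": 45, "evaluativist": 10}

sophisticated_sample = {stage: n for stage, n in population.items()
                        if source_belief[stage] == "sophisticated"}
print(sophisticated_sample)  # {'multiplist': 45, 'evaluativist': 10}
# A "sophisticated on source" sample like this one mostly measures multiplists,
# which could produce the poorer performance without implicating evaluativists.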

Of course, this is conjecture on my part. It’s conjecture based on reading a lot of these studies and, I think, a sufficient understanding of the statistics involved, though feel free to correct me if I’m wrong on that count—I’m no statistician. But it’s still conjecture and I’d rather have empirical evidence. Alas, no one seems to have tried to resolve this problem, at least not that I could find.

Now, I can imagine a few different ways Hofer’s model and Kuhn’s model might fit together. Maybe each belief has only two settings—naïve and sophisticated—and Kuhn’s stages are different combinations of beliefs. So, realists would have only naïve beliefs; evaluativists would have only sophisticated beliefs; absolutists and multiplists would have some combination of naïve and sophisticated beliefs. This might mean that the beliefs interact in certain ways to produce new results, and that a combination of naïve and sophisticated beliefs doesn’t work well together. And there might be some important beliefs that Hofer is missing that influence how these work, too. Or, maybe epistemic beliefs have more than two settings. Maybe there are two kinds of naïve belief and two kinds of sophisticated belief. Either of these possibilities would explain the conflict between Kuhn’s results and Braten, Strømsø, and Samuelstuen’s results.

3.

Even if Braten, Strømsø, and Samuelstuen’s results aren’t a nail in the coffin for those of us who want to be prescriptive about personal epistemology, any explanation for those results still implies something interesting—or upsetting—about personal epistemology. Being an evaluativist is probably the best thing to be, in all knowledge domains: it’s both true** and useful. However, being a multiplist might not be better than being an absolutist, at least not for everything. Maybe, overall, multiplism is better than absolutism; certainly it’s truer. But people pay a price for maturity when they shift from absolutism to multiplism: they lose respect for expertise.

And it’s even worse than it might seem at first, because most people who make it to multiplism don’t make it to evaluativism. So we can take a bunch of absolutists and try to get them to evaluativism, but we’ll lose a lot of them in multiplism, and they might well be worse off in multiplism than they were in absolutism. (I’m not convinced that they’re actually worse off—multiplists are far more tolerant than absolutists—but let’s assume they are.) If I ask people to develop more sophisticated epistemic beliefs, I’m asking them to take a real risk. The pay-off is high (and, might I add, true), but the risk isn’t insignificant.

I’m reminded of all the worry about universities turning students into relativists, unable to make commitments, only able to show how nothing is undeniably true, lost at sea among competing frameworks. I’ve been really skeptical of such arguments in the past, but maybe I’ve underestimated how big of a problem this is. (The Blume and Roth articles are still riddled with problems, though.) Maybe relativists are real, and maybe they’re in trouble! I was wrong! But the existence of relativists might still be a good thing, even if relativism itself is a less good thing: it means that education is actually moving people along the stages of epistemological development. The trick is to get them all the way up to evaluativism; or, to phrase it more pointedly, the trick is to get them into evaluativism so they don’t slide back down into absolutism. I don’t know what the results look like for people who regress: I suspect it’s harder to get them into multiplism again so that they can get to evaluativism. This is what Perry’s research suggested, but Perry’s research was… well, that’s another story.

There’s still a lot to hash out here: Perry’s work suggests that schools with absolutist professors tend to produce multiplists anyway, since students still need to reconcile conflicting authorities and that process is what drives personal epistemology’s development. In fact, he suggests that yesterday’s reactionary tended to end up with a pretty developed epistemology, since they were wrestling with absolutist professors; today’s reactionary, however, rebels against multiplist or evaluativist professors, and so doubles down on absolutism. This is a problem.

And I also suspect—this time with nothing but anecdote behind me, so take it with the appropriate number of grains of salt—that people in early stages can’t recognize or comprehend later stages very well. To an absolutist, evaluativism and multiplism probably look much the same, or else the absolutist thinks they’ve already achieved evaluativism. Meanwhile, to a multiplist, evaluativism probably looks like some kind of compromise with absolutism. Moving forward just looks wrong, until there’s nothing else you can do.

It’s hard to say what all of this means for higher education (or elementary and secondary education, for that matter). Do you focus on getting relativists through into evaluativism? Or do you focus on getting people out of absolutism and keeping them out of absolutism, trusting that they’ll find their way to evaluativism on their own (though Kuhn suggests this is very unlikely)? Maybe universities aren’t the ones that can get them to evaluativism anyway? Or do you throw them a bunch of professors with strong but conflicting opinions, hoping that this will challenge them through to evaluativism? (Personally, I learned a lot from clearly evaluativist professors who stated and argued for their own beliefs in the classroom but did the opposing views justice, too. That seems like a good compromise: when students are ready for evaluativism, they’ll have a model for that way of thinking, but students who tend to be reactionary aren’t so likely to slip into truculent absolutism for the rest of their lives, which they’d probably do if they had explicitly relativist professors.)

It’s hard to say, but I think personal epistemology is a good place to start thinking about the issue. My goodness there are a lot of studies I wish I could do!

4.

I wrote this post because I wanted to get all of this off my chest, but also because I intend to talk a bit in an upcoming post about higher education in response to one of those worried articles I mentioned before. (Thanks, Leah, for bringing it to my attention.) Personal epistemology won’t play a large part in that discussion, I don’t think—I haven’t written it yet, so I can’t be sure—but I wanted you to have these concepts down as background information. A lot of these “there’s trouble in higher education” articles worry about all the hippy relativists that universities put out, and if we want to address that issue, I think we should learn how that relativism fits into cognitive development, right? It’s looking like people need to be relativists before they can be right, and then they need to move forward from relativism rather than retreat back into absolutism.

Actually, that’s an almost perfect summary. I’ll add that to the end.

OK. I know I’m well beyond acceptable blog post length, but there are two more things.

a. Way back when, Eve Tushnet wrote an article for The American Conservative called “Beyond Critical Thinking,” and then I wrote a thing called “Beyond Simple Acceptance,” because sometimes I’m a snide jerk. All of that and the resulting back-and-forth is in the links peppering this post. Anyway, Eve was talking about how people come out of university unable to take intellectual stands because they’ve over-learned critical thinking; I was talking about how quite a lot of people (probably most people, probably you, probably me) don’t seem to be sufficiently capable of critical thinking, so I really didn’t think there was a problem with folks learning too much of the stuff.

Maybe I should have named this post “Beyond Critical Thinking and Simple Acceptance.” In retrospect, Eve was clearly arguing for something like evaluativism, but at the time I thought she was backsliding into absolutism. So I was wrong about that. But Eve was wrong to say that critical thinking is the problem. Instead, the problem seems to be that universities aren’t shepherding people through multiplism into evaluativism. (Maybe it isn’t the job of a university to do that, but I don’t know where else people will learn it.)

Now, I absolutely do not want a university to teach people which beliefs to take a stand for. The thought of that makes my skin crawl; the whole Catholic college or Baptist Bible institute model seems … disingenuous at best. Private schools at the elementary and secondary levels are even worse. (Though Perry might remind me that the rebels would fare better in that system than if they were taught by relativists.) But universities would certainly do well to help any students who make it to multiplism move on through to evaluativism.

b. Because I read Unequally Yoked and Slate Star Codex, I have some passing knowledge of Less Wrong, the Center for Applied Rationality (CFAR), and the rationalist community generally (though I wish they’d change the name from rationalist, because I’m fairly sure they aren’t disciples of Descartes, Spinoza, and Leibniz). A major focus of these groups is figuring out how to reason better. CFAR in particular is looking at rationality outreach and research—testing whether the sorts of tricks Less Wrong develops are empirically supportable, and teaching these tricks to people outside the movement.***

I wonder about this, though. How much rational thinking can non-evaluativists learn? Would the resources be better spent moving people from absolutism into multiplism and from multiplism into evaluativism? Or does that come automatically when you teach rational thinking? Perry was clear: he thought that critical reasoning skills came from the development of personal epistemology. But Perry’s method was… not the best. It might be worth spending some resources to check this: is it better—in terms of outcome/cost—to move people into evaluativism, or to teach them the rationality tricks? I don’t have the resources to check any of that, but maybe CFAR does.

TL;DR: It’s looking like people need to be relativists before they can be right; relativism isn’t great, though; people need to move forward from relativism rather than retreat back into absolutism.


* On the note of class assignments, I finish my program in April. Yay! Then I will need to start job hunting. Boo! But after that, I hope, I will have a job. Yay!

** I am asserting that it is true. This assertion is based on—well, on everything, really—but I want to make clear that psychology can’t really tell us much about epistemology in the philosophical sense; the job here is to determine what beliefs people do have about knowledge, not which beliefs are true. However, it seems pretty clear, philosophically, that knowledge is uncertain and constructed, that reality cannot be directly accessed, and so on.

*** The other focus for Less Wrong (and Slate Star Codex?) is the creation of a robot-god which will usher in a utilitarian paradise (and prevent the otherwise-inevitable rise of a robot-demiurge), so I’m still not sure what to make about their claim to reasonableness.

_____
Select Sources

Braten, Ivar, Helge I. Strømsø, and Marit S. Samuelstuen. “Are sophisticated students always better? The role of topic-specific personal epistemology in the understanding of multiple expository texts.” Contemporary Educational Psychology 33 (2008): 814-840. Web.

Bromme, Rainer, Dorothe Kienhues, and Torsten Porsch. “Who knows what and who can we believe? Epistemological beliefs are beliefs and knowledge (mostly) to be attained from others.” Personal Epistemology in the Classroom. Edited by Lisa D. Bendixen and Florian C. Feucht. Cambridge: Cambridge UP, 2010. 163-193. Print.

Elby, Andrew. “Defining Personal Epistemology: A Response to Hofer & Pintrich (1997) and Sandoval (2005).” Journal of the Learning Sciences 18.1 (2009): 138-149. Web.

Elby, Andrew, and David Hammer. “On the Form of Personal Epistemology.” Personal Epistemology: The Psychology of Beliefs About Knowledge and Knowing. Edited by Barbara K. Hofer and Paul R. Pintrich. Mahwah, New Jersey: Lawrence Erlbaum Associates, 2002. 169-190. Print.

Hofer, Barbara K. “Dimensionality and Disciplinary Differences in Personal Epistemology.” Contemporary Educational Psychology 25.4 (2000): 378-405.

Hofer, Barbara K. “Exploring the dimensions of personal epistemology in differing classroom contexts: Student interpretations during the first year of college.” Contemporary Educational Psychology 29 (2004): 129-163. Web.

Hofer, Barbara K. “Personal Epistemology and Culture.” Knowing, Knowledge and Beliefs: Epistemological Studies across Diverse Cultures. Edited by Myint Swe Khine. Perth: Springer, 2014. 3-22. Print.

King, Patricia M. and Karen Strohm Kitchener. “The Reflective Judgment Model: Twenty Years of Research on Cognitive Epistemology.” Personal Epistemology: The Psychology of Beliefs About Knowledge and Knowing. Edited by Barbara K. Hofer and Paul R. Pintrich. Mahwah, New Jersey: Lawrence Erlbaum Associates, 2002. 37-61. Print.

Kuhn, Deanna and Michael Weinstock. “What is Epistemological Thinking and Why Does it Matter?” Personal Epistemology: The Psychology of Beliefs About Knowledge and Knowing. Edited by Barbara K. Hofer and Paul R. Pintrich. Mahwah, New Jersey: Lawrence Erlbaum Associates, 2002. 121-144. Print.

Perry, William G. Forms of Intellectual and Ethical Development in the College Years. New York: Holt, Rinehart and Winston, 1970. Print.


Yang, Fang-Ying, and Chin-Chung Tsai. “An epistemic framework for scientific reasoning in informal contexts.” Personal Epistemology in the Classroom. Edited by Lisa D. Bendixen and Florian C. Feucht. Cambridge: Cambridge UP, 2010. 124-162. Print.

Tuesday, 9 September 2014

The Singular Flavor of Souls

While I’m recording things I’ve read recently that do a far better job of articulating, and expanding, what I was trying to say last summer, I have two more things to mention that touch on what I was trying to get at with my posts on difference and the acknowledgement thereof. I assume there are many people for whom these ideas are well-trod ground, but they were new to me and it might be worth something to record my nascent reactions here.

In “From Allegories to Novels,” Borges tries to explain why allegory once seemed a respectable genre but now seems in poor taste. Along the way, he writes the following:
Coleridge observes that all men are born Aristotelians or Platonists. The Platonists sense intuitively that ideas are realities; the Aristotelians, that they are generalizations; for the former, language is nothing but a system of arbitrary symbols; for the latter, it is the map of the universe. The Platonist knows that the universe is in some way a cosmos, an order; this order, for the Aristotelian, may be an error or fiction resulting from our perfect understanding. Across latitudes and epochs, the two immortal antagonists change languages and names: one is Parmenides, Plato, Spinoza, Kant, Francis Bradley; the other, Heraclitus, Aristotle, Locke, Hume, William James. In the arduous schools of the Middle Ages, everyone invokes Aristotle, master of human reason (Convivio IV, 2), but the nominalists are Aristotle; the realists, Plato. […] 
As one would suppose, the intermediate positions and nuances multiplied ad infinitum over those many years; yet it can be stated that, for realism, universals (Plato would call them ideas, forms; we would call them abstract concepts) were the essential; for nominalism, individuals. The history of philosophy is not a useless museum of distractions and wordplay; the two hypotheses correspond, in all likelihood, to two ways of intuiting reality.[*]

But the distinction between nominalism and realism is not so keen as that, as commentaries on Borges—Eco’s The Name of the Rose is a notable one—have noted, though they may be missing that Borges already understood as much himself.

I read Borges' essay months ago; on Saturday, I read/skimmed the first chapter, written by Marcia J. Bates, of Theories of Information Behavior, edited by Karen E. Fisher, Sandra Erdelez, and Lynne (E. F.) McKechnie. In that chapter, I read this:
First, we need to make a distinction between what are known as nomothetic and idiographic approaches to research. These two are the most fundamental orienting strategies of all.
  • Nomothetic – “Relating to or concerned with the study or discovery of the general laws underlying something” (Oxford English Dictionary).
  • Idiographic – “Concerned with the individual, pertaining to or descriptive of single or unique facts and processes” (Oxford English Dictionary).
The first approach is the one that is fundamental to the sciences. Science research is always looking to establish the general law, principle, or theory. The fundamental assumption in the sciences is that behind all the blooming, buzzing confusion of the real world, there are patterns or processes of a more general sort, an understanding of which enables prediction and explanation of particulars.  
The idiographic approach, on the other hand, cherishes the particulars, and insists that true understanding can be reached only by assembling and assessing those particulars. The end result is a nuanced description and assessment of the unique facts of a situation or historical event, in which themes and tendencies may be discovered, but rarely any general laws. This approach is fundamental to the humanities. […]
Bates goes on to describe the social sciences as being between the two, the contested ground; at times, social sciences tend to favour one approach and then switch to the other. It is in the context of the social sciences that she talks about library and information science:
LIS has not been immune to these struggles, and it would not be hard to identify departments or journals where this conflict is being carried out. My position is that both of these orienting strategies are enormously productive for human understanding. Any LIS department that definitively rejects one or the other approach makes a foolish choice. It is more difficult to maintain openness to these two positions, rather than insisting on selecting one or the other, but it is also ultimately more productive and rewarding for the progress of the field.
I don’t think it’s difficult to see the realism/nominalism distinction played out here again, though it’s important to note that realism v. nominalism is a debate about the nature of reality, while the nomothetic v. idiographic debate concerns merely method (if method can ever be merely method).

Statistics, I think, is a useful way forward, though not sufficient. The idea of emergence, of patterns emerging at different levels of complexity, might also be helpful. Of course, my bias is showing clearly when I say this: Coleridge would say that I am a born Aristotelian, in that it is the individual that exists, not the concept. And yet it is clear that patterns exist and must be accounted for, and we probably can’t even do idiography without having ideas of general patterns, and it’s better to have good supportable patterns than mere intuitions and stereotypes. So we need nomothety! (I don’t even know if those are real nouns.) Statistics, probability, and emergence, put together, are a way of insisting that it's the individuals that are real while still seeking to understand those patterns the cosmos won't do without.

(And morality has to be at least somewhat nomothetic/realist, even if the idiographic/nominalist informs each particular decision, or else it literally cannot be morality, right?)

-----

As you can tell from the deplorable spelling of flavour, the title is a quotation, in this case taken from a translation of Borges' essay "Personality and the Buddha"; the original was published at around the same time as "From Allegories to Novels." The context reads like this:
From Chaucer to Marcel Proust, the novel's substance is the unrepeatable, the singular flavor of souls; for Buddhism there is no such flavor, or it is one of the many varieties of the cosmic simulacrum. Christ preached so that men would have life, and have it in abundance (John 10:10); the Buddha, to proclaim that this world, infinite in time and in space, is a dwindling fire. [...]
But Borges writes in "From Allegories to Novels" that allegories have traces of the novel and novels, traces of the allegory:
Allegory is a fable of abstractions, as the novel is a fable of individuals. The abstractions are personified; there is something of the novel in every allegory. The individuals that novelists present aspire to be generic (Dupin is Reason, Don Segundo Sombra is the Gaucho); there is an element of allegory in novels. 

* What is strange about Aristotle and Plato is that Plato was Aristotelian when it comes to people and Aristotle, Platonic. Plato admitted that a woman might be born with the traits of a soldier or a philosopher-king, though it was unusual, and if such a woman were born it would be just to put her in that position for which she was suited. Aristotle, however, spoke of all slaves having the same traits, and all women the same traits, and all citizens the same traits, and thus slaves must always be slaves and women subject to male citizens. I want to hypothesize, subject to empirical study, that racists and sexists are more likely to be realists and use nomothetic thinking, while people with a more correct view of people (at least as far as sex and race are concerned) are more likely to be nominalists and use idiographic thinking... but the examples of Aristotle and Plato give me pause. Besides, is not such a hypothesis itself realist and nomothetic?

Friday, 29 August 2014

A Mature Philosophy

Is Personal Epistemology What I’ve Been Looking For?

Through the research I’m doing as an RA, I encountered the idea of personal epistemology; the Coles Notes version is that different people have different measurable attitudes towards how people gain knowledge, what knowledge is like, and so forth. In general, research into personal epistemology fits into two streams: 1) research into epistemic beliefs addresses the particular individual beliefs a person might have, while 2) developmental models of personal epistemology chart how personal epistemology changes over a person’s life. Personal epistemology and its developmental focus are the invention of William G. Perry with his 1970 Forms of Intellectual and Ethical Development in the College Years, but these days Barbara Hofer and Paul Pintrich are the major proponents and experts.

Perry studied college students for all four years of undergraduate school, asking them questions designed to elicit their views on knowledge. What he determined is that they gradually changed their views over time in a somewhat predictable pattern. Of course, not all students were at the same stage when they entered university, so the early stages had fewer examples, but there were still some. Generally, he found that students began in a dualist stage, where they believe that things are either true or false and have little patience for ambiguity or what Perry calls relativism.* In this stage they believe that knowledge is gained from authorities (i.e. professors)—or, if they reject the authority, as sometimes happens, they do so without the skills of the later stages and still tend to view things as black and white. As the stages progress, they start to recognize that different professors want different answers and that there are good arguments for mutually exclusive positions. By the fifth stage, they adopt Perry’s relativism: knowledge is something for which one makes arguments, and authorities might know more than you but they’re just as fallible, and there’s no real sure answer for anything anywhere. After this stage, they start to realize they can make commitments within relativism, up until the ninth stage, where they have made those commitments within a relativist framework. Not all students (or people) progress through all of the stages, however; each stage contains tensions (both internal and in relation to the world and classroom) which can only be resolved in the next stage, but the unpleasant experience of these tensions might cause a student to retreat into a previous stage and get stuck there. Furthermore, with the exception of the first stage, there are always two ways to do a stage: one is in adherence to the authority (or the perceived authority), and the other is in rebellion against it.** It’s all quite complicated and interesting.

The 50s, 60s, and 70s show clearly in Perry: in his writing style, in his sense of psychology, and in his understanding of the final stage as still being within a relativist frame. His theory languished for a while but was picked up by Hofer and Pintrich in the early 2000s. They, and other researchers, have revised the stages according to more robust research among more demographics. Their results are fairly well corroborated by multiple empirical studies.

According to contemporary developmental models of personal epistemology, people progress through the following stages:

Naïve realism: The individual assumes that any statement is true. Only toddlers are in this stage: most children move beyond it quite early. Naïve realism is the extreme gullibility of children.
Dualism: The individual believes statements are either right or wrong. A statement’s truth value is usually determined by an authority; all an individual must do is receive this information from the authority. While most people start moving out of this stage by the end of elementary school or beginning of high school, some people never move past it.
Relativism: The individual has realized that there are multiple competing authorities and multiple reasonable positions to take. The individual tends to think in terms of opinions rather than truths, and often believes that all opinions are equally valid. Most people get here in high school; some people proceed past it, but others do not.
Evaluism: The individual still recognizes that there are multiple competing positions and does not believe that there is perfect knowledge available, but rather gathers evidence. Some opinions are better than others, according to their evidence and arguments. Knowledge is not received but made. Also called multiplism. Those people who get here usually do so in university, towards the end of the undergraduate degree or during graduate school. (I’m not sure what research indicates about people who don’t go to university; I suspect there’s just less research about them.)

This link leads to a decent summary I found (with Calvin & Hobbes strips to illustrate!), but note that whoever made this slideshow has kept Perry’s Commitments as a stage after evaluism (which they called multiplism), which isn’t conventional. As with Perry’s model, there are more ways not to proceed than there are to proceed. Often people retreat from the next stage because it requires new skills from them and introduces them to new tensions and uncertainties; it feels safer in a previous stage. Something that’s been discovered more recently is that people have different epistemic beliefs for different knowledge domains: someone can hold an evaluist position in politics, a relativist position in religion, and a dualist position in science, for instance.

All of this pertains to our research in a particular way which I’m not going to get into much here. What I wanted to note, however, is that I am strongly reminded of Anderson’s pre-modern, modern, post-modern trajectory, which I outlined just over a year ago. This scheme is much better than Anderson’s trajectory, though, for two reasons: 1) it’s empirically based, and 2) in evaluism it charts the way past relativism, the Hegelian synthesis I had been babbling about, the way I’d been trying to find in tentativism (or beyond tentativism). Perry’s model may or may not do this (without understanding better what he means by relativism, I can’t tell what his commitment-within-relativism is), but Hofer, Pintrich, et al.’s model does. Evaluism is a terrible word; I regret how awkward tentativism is, but I like evaluism even less. However, in it there seems to be the thing I’ve been looking for.

Or maybe not. It reminds me of David Deutsch’s Popper-inspired epistemology in The Beginning of Infinity, but it also reminds me of literary interpretation as I’m used to practicing it, and so I can see a lot of people rallying under its banner and saying it’s theirs. That doesn’t mean it is theirs, but it might well be, and what I suspect is that evaluism is a pretty broad tent. It was an exciting discovery for me, but for the last few months I’ve started to consider that it’s at best a genus, and I’m still looking for a species.

But this leads to a particular question: which comes first, philosophy or psychology? Brains, of course, come before both, but I’ve always been inclined to say that philosophy comes first. When, in high school, I first learned of Kohlberg’s moral development scheme, I reacted with something between indignation and horror: I was distressed at the idea that people would—that I would—inexorably change from real morality, which relies on adherence to laws, to something that seemed at the time like relativism. Just because people tended to develop in a particular way did not mean they should. What I told myself was that adults did not become more correct about morality but rather became better at rationalizing their misdeeds using fancy (but incorrect) relativistic logic. Of course I was likely right about that, but still I grew up just as Kohlberg predicted. However, I still believed that questions of how we tended to think about morality were different questions from how we should think about morality.

And yet it is tempting to see personal epistemology as the course of development we should take. Confirmation bias would lead me to think of it this way, so I must be careful. And yet the idea that there are mature and immature epistemologies, and that mature ones are better than immature ones, makes a certain intuitive sense to me. I can think of three possible justifications for this. An individual rational explanation would imagine this development less as biological and more as cognitive: as we try to understand the world, our epistemologies fail us, and we use our reason to determine why they failed and update them. Since this process of failure and correction is guided by reason and interaction with the real world, it tends towards improvement. An evolutionary pragmatic explanation is also empirical and corrective: during the process of human evolution, those people with better epistemologies tended to survive, so humans evolved better epistemologies; however, given their complexity, these epistemologies develop later in life (as the prefrontal cortex does). A teleological explanation would suggest that humans are directed, in some way, toward the truth, and so these typical developments would indicate the direction in which we ought to head. I’m not sure I’m entirely convinced by any of these justifications, but the first one seems promising.

So what comes first: psychology or philosophy? And should we be looking less for the right epistemology or a mature one?

-/-/-

*I’m still struggling to understand what Perry means by relativism, exactly, because it doesn’t seem to be quite what I think of relativism as being: it has much more to do with the mere recognition that other people can legitimately hold positions other than one’s own, and yet the person seems to be overwhelmed by this acknowledgement. It seems more like a condition than a philosophy. I'm still working it out.
**Perry writes about a strange irony in the fairly relativistic (or seemingly relativistic) university today.
Here’s the quotation:
In a modern liberal arts college, […] The majority of the faculty has gone beyond differences of absolutist opinion into teachings which are deliberately founded in a relativistic epistemology […]. In this current situation, if a student revolts against “the Establishment” before he has familiarized himself with the analytical and integrative skills of relativistic thinking, the only place he can take his stand is in a simplistic absolutism. He revolts, then, not against a homogeneous lower-level orthodoxy but against heterogeneity. In doing so he not only narrows the range of his materials, he rejects the second-level tools of his critical analysis, reflection, and comparative thinking—the tools through which the successful rebel of a previous generation moved forward to productive dissent.

Tuesday, 19 August 2014

Death Denial, Death Drive

Content warning: suicide, depression

If you’re at all aware of my online presence, you’ll know that I have great respect for Richard Beck’s work at his blog Experimental Theology and in his books, known sometimes as the Death Trilogy. (You may be more aware of this than you'd like to be, since I doubt a month goes by without my bringing his work up in some comment thread or another.) In particular I appreciate his work on Terror Management Theory and how it pertains to religious authenticity, hospitality, and the creation of culture. He gets the bare bones of his theory from Paul Tillich’s The Courage to Be, but adds to it using experimental psychology that was unavailable to Tillich. I’ll give a summary.

Humans fear death. We fear particular threats of death, but we also experience anxiety from the knowledge that we will necessarily die one day. In order to manage the terror of death, we create systems of meaning which promise us some kind of immortality. This might be a promise of an afterlife, but it might simply be the idea that our lives have meaning—we will be remembered, or we have descendants, or our actions contributed to the creation of some great cultural work (like the progress of science or what have you). However, these cultural projects, or worldviews, can only eliminate our fear of death if we believe in them fully. Therefore we react to anything which threatens our belief in these worldviews in the exact same way that we would react to the possibility of death, because the loss of our worldviews is identical to the loss of immortality.

And the mere existence of other plausible worldviews might constitute a threat to one’s own worldview. This is why most people defend their worldviews and reject other people’s worldviews so violently; violent defense of worldviews makes things like hospitality, charity, and justice difficult or impossible. Even a religion like Christianity, which is founded on the idea that we should side with the oppressed, became and in some cases remains violently oppressive because of these forces.

Some people, however, do not use their worldview and cultural projects to shield themselves from the reality of death. These people, instead, face their fears of death directly; they also face their doubts about their own worldviews directly (these are, after all, the same thing). These people are in the minority, but they exist. Thus there are two things a person must do in order to prevent themselves from violently defending their worldviews: they must be willing to face the possibility of their own death (which means they must be willing to face doubt, the possibility that their worldview is false and their actions meaningless), and they must try to adopt a worldview that is not easily threatened by competing worldviews (which doesn’t necessarily mean relativism, but at the least worldviews should be easy on people who don’t adopt them).

I find this theory very compelling, not just because Beck marshals a lot of evidence for it in his work but also because I can use it to explain so many things. In particular, it seems to explain why so many religious people can be so hostile, and why their hostility seems to come from their religion, but at the same time religion motivates others to be hospitable instead. The theory can’t explain why people choose one religion over others, but that’s OK: other theories can do that. Beck’s work simply explains why people need to choose a worldview of some kind (even if that’s not a religion) and why they behave the way they do in relation to their own worldview and in relation to other people’s worldviews. And it is powerful when it does this.

However, I think there might be a problem with it. It presupposes that all people fear death.*

Now, I readily admit that most people fear death, and that claims that certain cultures do not fear death simply confuse the effects of a highly successful worldview with the lack of a natural fear of death. But I do not admit for a second that all people fear death. Some people actively desire death (that is, they exhibit marked suicidality), but others simply have neither fear of nor desire for death. When my depression is at its worst, this is me: I am generally unperturbed by possibly dangerous situations (unsafe driving, for instance). Moreover, however my mental health looks, I feel no anxiety at all about the fact that I will inevitably die. This does not scare me; it relieves me. There is little that gives me more comfort than the thought that I will one day die. I cannot state this emphatically enough: the only emotion I feel when contemplating my eventual death is relief. Of course I realize this might be pathological, but it means I fit into neither of Beck’s populations: I am not a person who denies death and my fear of it, and I am not a person who faces my fear of death. I face death and simply have no fear. I am surely not the only one, either.

Perhaps you object that I do not fear death because I have a fairly successful meaning-making worldview. And I do think my life probably has meaning. But, if that were true, my response to doubt would be fear of death. Contemplating the possibility that my life might be meaningless should, if Beck is right, cause me to grasp for life more readily. But the opposite happens: when I contemplate the possibility that my life has no meaning, I am even less interested in living out the full allotment of my life. If I came to believe that my life was meaningless, I would probably tip over into actively wanting to die. That’s the opposite of the prediction you would make if Beck’s theory applied to me. And I’m in good company as far as that goes; Camus felt the same.

Camus begins The Myth of Sisyphus as follows: 
There is but one truly serious philosophical problem and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.
Notice that the question of life and death is contingent on the question of meaning and meaninglessness; if life is meaningless, there is no reason to live. If life has meaning, there is a reason to live. The causality here is the reverse of Beck’s: meaning (or its absence) determines our stance toward death, rather than the fear of death driving our need for meaning. Camus’s answer was peculiar to him, a last-minute dodge from nihilism: suicide is the refusal to face the meaninglessness of life, and we must face the meaninglessness of life. This smuggles meaning back into absurdism, however, and creates a contradiction.** But Camus’s answer doesn’t interest me here; what interests me is the fact that the absurdists and existentialists seemed to have experiences parallel to mine, which suggests that Beck’s theory cannot describe all people. That is, it does not apply to the admittedly small population of people who simply don’t fear death in the first place.

Of course, it doesn’t have to address those people (us). As a theory of religion and culture in general, its failure to describe a small subset of the population, one which has not had much hand in shaping either religion or culture, is hardly a fatal error. In fact, this mismatch explains the general hostility most cultures and religions have shown towards suicide (as distinct from self-sacrifice): suicide, as a repudiation of the meaningfulness of life, threatens the culture’s worldview and thereby exposes people to the fear of death. And since worldviews are designed to assure people that death isn’t real, they don’t have much to offer people who mostly fear that life is worthless and all meaning is uncertain, except to simply insist on their own validity in a louder and angrier voice. Maybe this is why it is difficult to find resources in Christianity which are at all effective at dissuading people from suicide or ameliorating their depression without making things so much worse (fear of damnation, sense of shame, etc.).

It is worth noting that I do have a fear of death, but it’s a different one: I fear other people’s deaths. I fear the deaths of those who matter to me. Perhaps Beck’s theory can be applied to that? The attempt to understand another person’s life as meaningful, in order to deny their death? Or the attempt to honestly face another person’s death or our fear of it?

So my complaint is not that Beck’s theory is wrong because it omits the population of which I am a part; my complaint is simply that I find it difficult to use Beck’s theory to determine what I should do with regard to my own worldview, to predict or understand my own anxieties regarding the meaningfulness of my actions and the role of doubt in hospitality, or to work out how to face my particular anxieties (of meaninglessness, but also of the deaths of people who matter to me) without harming others.

For more on my depression and assorted philosophy I've worked through in response to it, see this index post.

If you are experiencing suicidal thoughts, please contact a local suicide hotline. In BC, call 1-800-784-2433 (1-800-SUICIDE). If you are experiencing depression more generally, get therapy. I am serious about this: it helps so much.

-----

*There are other problems I can think of: for instance, Beck does not seem to account for a desire to believe that which is true. However, I won’t deal with that here; it’s possible that we do have a reality principle in our psychological make-up which competes with the meaning-creating and death-fearing components, but I’m not entirely sure that we do; if we do value truth for its own sake, that value is likely nonetheless a product and component of our worldview and appears after we choose a worldview to manage our fear of death. Beck, however, found some inspiration for his work in the American pragmatists, whose sense of truth is different from the intuitive one: a proposition which works, which bears fruit, which makes you the person you think you should be, is true, and is true precisely for those reasons. Truth is not about correspondence between a proposition and an external reality or about coherence within a system of propositions. While Beck might not sign off on American pragmatist epistemology, I don’t think it’s a coincidence that his work focuses on whether beliefs make you a hospitable or inhospitable person rather than on whether beliefs meet some other benchmark of truth.
**This paradox may be resolvable, in the sense that it may be cognitively impossible for us to truly acknowledge meaninglessness. Any attempt to do so might smuggle meaning back in somehow; an absurdist might therefore say that absurdism is simply the philosophical system which gets closest to acknowledging meaninglessness, even though it smuggles in the imperative “You must authentically acknowledge meaninglessness.” As it stands, I prefer existentialism to absurdism, since it acknowledges that meaning can be created and only asks that you acknowledge that you created that meaning yourself; it still contains the paradox (you did not yourself create the imperative “You must acknowledge that you created all meaning yourself”), but it contains a more hospitable version of that paradox.

Sunday, 16 February 2014

He Hath Ever But Slenderly Known Himself

From Andrew Solomon's The Noonday Demon: An Atlas of Depression:
It is arguably the case that depressed people have a more accurate view of the world around them than do nondepressed people. Those who perceive themselves to be not much liked are probably closer to the mark than those who believe that they enjoy universal love. A depressive may have better judgement than a healthy person. Studies have shown depressed and nondepressed people are equally good at answering abstract questions. When asked, however, about their control over an event, nondepressed people invariably believe themselves to have more control than they really have, and depressed people give an accurate assessment. In a study done with a video game, depressed people who played for half an hour knew just how many little monsters they had killed; the undepressed people guessed four to six times more than they had actually hit. Freud observed that the melancholic has "a keener eye for the truth than others who are not melancholic." Perfectly accurate understanding of the world and the self was not an evolutionary priority; it did not serve the purpose of species preservation. Too optimistic a view results in foolish risk-taking, but moderate optimism is a strong selective advantage. "Normal human thought and perception," wrote Shelley E. Taylor in her recent, startling Positive Illusions, "is marked not by accuracy but positive self-enhancing illusions about the self, the world, and the future. Moreover, these illusions appear actually to be adaptive, promoting rather than undermining mental health. . . . The mildly depressed appear to have more accurate views of themselves, the world, and the future than do normal people . . . [they] clearly lack the illusions that in normal people promote mental health and buffer them against setbacks."
The fact of the matter is that existentialism is as true as depressiveness. Life is futile. We cannot know why we are here. Love is always imperfect. The isolation of bodily individuality can never be breached. No matter what you do on this earth, you will die. It is a selective advantage to be able to tolerate these things, and to go on--to strive, to seek, to find, and not to yield. [...] Depressives have seen the world too clearly, have lost the selective advantage of blindness. (433-434)
While I understand the research is not quite so clear as all that, and while depression is capable of speaking lies, it is also the unfortunate case that depression sometimes tells the truth. Those with depression know in their bones that their automatic negative thoughts are true, and part of the practice of therapy is to learn to shut those thoughts out, but there's an equivocation there: do you ignore the thoughts, or deny them? Because, if we're going to be honest, some of the negative thoughts are true; the person with depression is right, and they know it. This makes it so much harder to identify the negative thoughts which are just lies your depression is telling you.

The fall-back position, I guess, is interpretation; change the metric and you can change the answer. I can say for sure that positive self-talk works (sometimes), but it's hard not to believe that positive self-talk is a ritual of lying to yourself. Maybe mental health is constituted by having more-productive rather than less-productive delusions.

Wednesday, 6 November 2013

An Apology to Authenticity

I have perhaps been unfair to what I've been calling the Polonius virtue. While, when I started out writing about it, I said that I thought you could likely build an argumentatively robust version of it, I haven't been doing the best job of keeping that in mind. Instead, I've generally been thinking of people who adhere to the Polonius virtue as being pretty seriously mistaken. Of course even thinking that they're mistaken might be kind of strange, since I first came up with this idea in the context of Moral Foundations Theory, and there are real questions about whether moral foundations (or values) are even opinions anyway, at least in the sense that we think of them; we don't seem to choose which foundations matter to us. But I still want to make a case for valuing authenticity to oneself, in some shape or another.

To recap, I suspect that authenticity is a value that a statistically significant proportion of humans care about. One version of this is the Polonius virtue, so called because of this line from Hamlet, spoken by Polonius: "This above all: to thine own self be true, / And it must follow, as the night the day, / Thou canst not then be false to any man." To wit, it is a moral good to be true to yourself, a proclamation taken to mean many things. I think the problem, in fact, is that it's taken to mean so many things, and so many of those things are silly that I forget that some of them may not be so silly. For instance, I won't apologize for dismissing the idea that acting "true" to your every impulse is a good thing; hardly anyone can believe that if they keep in mind how often people have conflicting impulses. But I think there might be one possible reading which isn't so silly: you shouldn't lie about yourself to yourself.

If you want to navigate the world with much hope of success, it will be much better if you are honest about yourself. This isn't a claim that there is some deep, immutable self to which you might be honest, nor is it even a claim that there is a temporary but nonetheless coherent self-of-the-moment about which you might be honest. I can fully recognize that I'm conflicted, that I'm partially opaque to myself, etc. and so forth, and still say that I tend to feel certain ways about certain things (so, for instance, I am afraid of heights, I have depression) and that I tend to do certain things (so, for instance, I usually articulate inchoate emotional content in logical, procedural ways). If I'm going to make it, I ought to know all of this about myself.

Another take on it: I need to recognize when people around me are making demands of me which don't honour the ways in which I differ from their ideas about what humans are. I typically see drives for authenticity as being a little anti-conventional, and I think this insight--that people do make demands of you which just don't mesh well with your personality, your needs, your cognitive style, etc.--is where that anti-conventional attitude might come from. Sometimes the dominant narrative just won't work for us; we aren't all or always so easily caught up in our culture's folkways. Being honest about this does seem rather important, at least as a condition of other goods.

And, for goodness' sake, this is precisely the sort of thing I've been hoping other people would start adopting (particularly ones I think of as being uber-logical, disembodied rational types, like the Less Wrong people--and that may be an unfair caricature, as well). I guess I've been an advocate for a particular version of the authenticity ethic and I didn't even know it. Which is ironic in the technical sense. I'm a perfect example of what can go wrong if you aren't aware of yourself: I spent most of my life with dysthymia and I didn't even know it. Had I known it, things might have gone much better for me. And I'm also a decent example of what can go well if you are aware of yourself: I knew in advance that I was going to have a depressive breakdown, and I got a medical leave in time to weather that storm without damaging my academic career. So this specific kind of authenticity ethic is one I'm deliberately cultivating.

I think there are a few ways the Polonius virtue can go wrong, however. The first is when knowing yourself becomes an excuse to remain static. The second is when people start generalizing about what people are like; this can be a case of thinking other people are more like you than they are, or it can be a case of reasoning out from beliefs you have about human behaviour, even when those beliefs do not correspond well with the evidence or with other people's experiences. The third is using an authenticity ethic to support selfish behaviour. The fourth (perhaps a superset which contains the third) is privileging authenticity over other goods. I'd explicitly disagree with what Polonius actually says: you can be true to yourself (whatever you take that to mean) and still be false with other people. (The most obvious example is that you could lie, and know you're lying.) And I think a lot of the metaphysics or anthropology that people build around authenticity is unfounded. As an example, you could say that Freudian psychoanalysis is a huge and erroneous mythology built out of the valuable recognition that we are opaque to ourselves and spend a lot of time trying to repress/suppress our desires so that they correspond with societal norms.

Another way it can go wrong, too, is that we can fool ourselves into thinking that we know ourselves when we don't. Smart people prone to introspection are, apparently, especially bad for this: smart people are terribly good at rationalization, and trick themselves into thinking they know their own minds. I'm inclined to think that a fairly good knowledge of psychology--maybe not formal education but at least some commitment to following academic psychology, rather than folk psychology--would help people with this, but maybe not. It might just improve the plausibility of their rationalizations, not their accuracy. Really, awareness of the limitations of self-awareness is a kind of honesty to oneself, isn't it?

So in future I will try to be fairer to this ethic, and recognize that it probably has a place, when articulated in a certain way and when it is not burdened with baseless metaphysics. In retrospect, it seems appallingly obvious that, at minimum, being honest with oneself about oneself is pretty important to proper functioning. We've just got to pair that with the twin insights that 1) we can never really know ourselves entirely and 2) we are constantly in flux.

And I'm sorry if I offended anyone with my callous dismissal of authenticity as a moral good.

Tuesday, 22 October 2013

Which Myths Must Be True?

(This post is jam-packed with ideas and sources which I'm pulling together; I apologize in advance if it's a little dense. That's not a humble-brag; I am sincerely sorry conditional on your discomfort.)

An atheist friend once told me that the thing which frustrated him most about most of his atheist friends is that they didn't seem to understand that everyone has a mythology. His mythology is that birth is traumatic and then life is a downhill run to the grave. Other people have other mythologies: maybe it has something to do with the struggle between reason and conservatism, or maybe it's about the universe's basic indifference. Not everyone is aware that they have one. But everyone has one nonetheless. (To be fair, I don't think it's just his atheist friends who don't get this. A lot of Christians, for instance, think that non-religious people can't have a mythology; you'll have heard this as, "If there is no God, the world has no meaning.")

So that's a fair question to ask yourself: what's your mythology? I suppose we could think of mythologies as metanarratives, the big stories which make sense of all the little stories: Marxism's class struggle, Christianity's life-death-and-resurrection of Jesus Son of God, the Enlightenment's slow dawn of progress. But I don't know that all myths have to be big. The zodiac comes to mind: I know people who care about their sign, who understand themselves in light of that sign, but I don't think they believe the zodiac is the big story which gives meaning to all of the little stories. Myths can be medium-size stories which give some meaning to our experiences, but not all of the meaning. (In other words, you don't have to be a hedgehog to have a mythology; foxes have mythologies, too.)

Ambaa at the Patheos blog The White Hindu wrote a post a few months back called "Krishna is a Myth; Jesus is a Myth," arguing that it doesn't matter whether or not religious myths are historically true or religious figures were historically real; rather, what matters is how those myths and those figures' teachings impact your life. It's the wisdom tradition that matters, she says. She speculated that insistence on historical reality was generally an attempt to claim religious supremacy. I commented to disagree with the motives she ascribes to Christians who believe that Jesus is historically real (counting myself among them): in general, the idea is that Jesus must have been real (and crucified, and resurrected...) in order for the Christian wisdom tradition to make any sense. So while many Christians probably do use Jesus' historicity in order to insist that Christianity is the one true and good religion, I think the theological (if not always emotional or social) motive for this belief is just that most Christians think that Jesus' historical reality is necessary to make Christianity coherent. The Christian wisdom tradition just doesn't make sense otherwise.*

However, I must say that the mythical truth/historical truth distinction is one that many Christians make. In particular, many Christians (I don't know about most) think that Genesis is not literally historically true; it really is a myth in the more anthropological sense. I know that some conservative Christians have argued that if we say Genesis isn't literally true, then soon we'll be saying Matthew isn't literally true, either; this may be a stretch, but certainly Ambaa is arguing for something like that. So there's another question: what parts of Christianity are necessary, and which are optional? Which parts must be history?

Or, a better way of putting that might be, which parts of my mythology require historical, scientific, logical, philosophical, or otherwise external justification in order to be sensible/useful/helpful? Which myths could still be useful and good if false? And which myths must be true?

(Technically I mean, "Which myths must be true in order to function as myths?", but that's far less pithy.)

In case you think that all myths must be true to be useful, I humbly submit that that's nonsense. Lots of really tenuous myths are helpful if they help you articulate something about yourself that you otherwise couldn't articulate. Freud, for instance, produced a massive mythology which has no real empirical basis, but some of his language--id and ego, repression--and some of his overarching concepts--the difference and relation between the conscious and unconscious minds--have been incredibly useful, at least until we came up with better language. And certainly science education has thrived on basically-flawed metaphors; when our best way of understanding the universe is a set of advanced equations, you have to teach myths. I've written about how useful the idea of introversion has been to me; I was able to use that term to better articulate my needs and experiences. However, I am lucky: the concept of introversion does a very good job of articulating my experiences, but I know that it doesn't help most people articulate their experiences. The fact that most people are ambiverts rather than introverts or extroverts suggests that the idea doesn't have much value as a scientific explanation of human behaviour generally. This does not change the fact that it has value as an explanation of my experiences. Introversion, as a myth, does not need to be true to be useful.**

This doesn't even get into the problems about what truth is, or what kind of truth we're talking about, and the distinction between "in order for this myth to be useful it must be true" and "in order for this myth to be useful I must believe that it is true." We have to tackle the nature of truth alongside the question, but I'm not getting into it again here.

So I have a lot of questions which I intend to ask of myself and I encourage you to ask of yourselves:

What is your mythology? And what are your myths?
Which myths could still be useful and good if false?
Which myths must be true?

(And let's remember that the map is not the territory...except when the map precedes the territory.)

-----------
*I had a Religious Studies professor in undergrad who said that one particular problem has plagued Hindu-Christian conversations: Christian participants often do not realize that, when they are explaining Christianity, they are not distinguishing between Christianity's Incarnation and Hinduism's avatars, allowing the Hindu participants to think Christianity is basically a kind of Hinduism. As a result, the Hindu participants would often just try to absorb Christianity into Hinduism's exuberant polytheism without realizing that Christianity really does not work the way Hinduism does. The reason this problem is a big one is that it afflicts conversations in which the participants are trying to get along; the problem results in the participants disagreeing about the best way of getting along (conflating Hinduism and Christianity vs. observing their differences). I think Ambaa's post fits well in this tradition of mutual miscommunication (if it is a tradition at all, and not something my prof made up).
**As it happens, there might be good empirical evidence to suggest that there is something going on at the level of the brain that folks have called introversion, to do with well-measured things like the brain's arousal to stimuli. But when people use the terms introvert and extrovert, they rarely use them in the neurological sense.