Thursday, 19 March 2015

Resignation

1.

A few years ago—at least three—I was talking to a friend who said he felt great embarrassment about his past: the things he used to say, the things he used to believe, the person he used to be. He felt uncomfortable running into people he met back then, because it always reminded him of that gap. Even though I had changed a lot, I couldn’t say I shared the feeling. I felt comfortable attributing my foolishness to youth; that is the price of maturing.

I am taking a course in social media as part of my last term in university. It’s been an interesting course; we’ve discussed identity-creation through social media, the role Twitter plays in professional discourse, and the similarities between content curation and library curation. At the beginning of class we were asked to give a brief history of our social media use. I mentioned Facebook and Tumblr, of course; I mentioned the research I’m doing on YouTube comments; I mentioned my defunct DeviantArt and Flickr accounts. I also mentioned that I had a blog for a long time and frequented the comments of some blogs, since that was relevant to the class’s course-blog and discussion-board components. I did not, however, share the link. To my knowledge, exactly one person who I met in Vancouver knows this blog exists.

In a management course I took in the second term of this program, the instructor spoke with us about job interviews. What sorts of things might make an interviewer think twice about hiring you? Pregnancy? Partying? Strong and … specific political opinions? Should you mention these things on your Facebook profile? Whether or not employers should check candidates’ Facebook profiles, they do; they also Google candidates’ names. Of course, one of the other things that might make an interviewer think twice is a history with mental illness.

Last night I was reading posts in which bloggers reveal the search terms that brought the most people to their blog. Of course, as with my own, the terms are largely pornographic. So I decided to check my current stats and see what search terms bring people in. Planarians are a common theme, including at least two people who wanted to know if cutting a planarian in half hurts it. (I don’t know for sure, but I’m going to have to go with yes.) Sidney’s sonnets have been replaced by Addison’s essay on wit for the subject of undergraduate plagiarism. No one looks for Disney songs in church more than a few times a month, but it still comes up regularly. One search string in particular caught my attention: “christian h full name thinking grounds.”


2.

I assume you’re getting a sense of where this is going.

The separation between myself and “Christian H,” as a particular construct or persona or avatar, has long been tenuous. The first time I ever got something published, I mentioned it here; the first time I worked on an online project for an employer, with my name attached, I also mentioned it here. From then on, my anonymity was merely probable: a person could in theory get from this blog to my real name if they tried hard enough.

In the meantime, though, I’ve published other things, and I did not mention them. A big part of this silence is that I’m actually more concerned about people getting from my real name to this blog than the other way around. Googling my full name does not yet get you to this blog (within the number of pages that most people are willing to look through). If I mentioned a publication, there’d be another search term. What I felt in the first anecdote is no longer true: I am plenty embarrassed by this blog’s archives. Maybe I shouldn’t be embarrassed; it’s nearly a truism that all writers hate their older work. But I am. I won’t talk much about it; the most important thing, I think, is that I now realize I was unfair, defensive, and blind to my own privileges.

When I first started blogging about depression, I spent a lot of time thinking about whether I wanted to do so. I knew that my anonymity was precarious at best. As I framed it to myself, my choice was between blogging about depression and no longer blogging at all, maybe even to the point of burying the Thinking Grounds, if I could. I’ve been unsure about this blog for a while, but it offers a chance to think in a particular way about things; I value this way of thinking. So the only way I’d keep blogging was if I could use it to think about depression. You know what I chose.

But I am headed for the job market in less than a month. I even made a LinkedIn account! Soon I will be co-presenting a paper at a major conference. While the novel I’m currently writing will probably never see the light of day, I am sending out poems and short stories, and someone may publish one of them sometime or another. Furthermore, I have a few ideas for novels to write after the one I’m writing now; one of those might go somewhere. I hope one of them does. Don’t get me wrong: I doubt I’ll be a quote-unquote public figure in the near future, and I may never be one at all. But I need to start worrying a lot more about online reputation.


3.

What does this mean?

First, over the next few weeks, I will be combing the archives and taking down the posts that I think really need to come down. There are a few places where I think I was pretty unprofessional. Mostly I disapprove of taking down posts to prevent embarrassment, but in a few cases I think it’s necessary. (Provided there’s a point in taking them down; the Wayback Machine has archived some, but not all, of my blog.) For any opinion pieces I can no longer stand behind, I’ll write a generic disclaimer and add it to the top. This task seems a little silly, but I’d rather be safe than sorry.

Second, I will be retiring this blog soon. I may post here a few more times, but I won’t be here past April 15. Except, possibly, to re-direct you to my new blog. If there is one.

I do value blogging. It gives me a place and a chance to think through things with an audience (even if mostly imagined) beyond whoever will indulge me in real life. I can keep track of where I am with things. I’m having trouble articulating what I find so valuable in blogging, but those reasons are close enough. I don’t like blogging here anymore, though. I can’t bury this blog, and I don’t want to; I feel I’ve done some good writing here in the last few years, and I’d like to be able to link back to it if the occasion comes. What I want is to signal a break of some kind; I want a fresh-ish start.

(I’ve also learned some HTML and CSS, and the tiniest bit of Javascript, so this might be a chance to make something for myself and not just for an assignment.)

I need time to think about what shape that new blog would take, though, or if I do want to go ahead with it. I don’t know what my life will be like in the second half of April, let alone after that. I do know that I’m having trouble finding time (or energy, motivation, etc.) to write the fiction and poetry I want to write, and blogging takes up some of those resources. There might be good reasons to stop blogging entirely for a while, to scale back, or to change my approach. I don’t know yet; I need to think about it. But I’ve been meaning to think about it for over two years now, and I never really do. I decided I need to commit publicly to ending this blog in order to seriously think about something new.

If I do start something new, though, there will be an important change: I’ll be using my full name. The Christian H persona has taken on the brunt of context collapse so I don’t have to—I don’t think I’ve looked to avoid accountability so much as avoid social awkwardness—but that strategy has a shelf life and I’ve pushed it past reasonable limits.


4.

In the meantime, the Weekly Wonders tumblr will still run at least until late May. I started it late last May, and from the beginning I planned to take a hiatus at the anniversary. I’m not the only wonder-monger, though, so it may continue without me; I’m also sure to pick it up again before too long.

I’ll still have my other tumblr, too, though it’s devolved into a reblog tumblr and I have no intention of making it much more than that.

You could also follow me around on Disqus. I’m going to be seriously re-thinking my commenting practice, too, but I likely won’t stop entirely.

I would really appreciate suggestions for a new blog, on any aspect of it; suggestions can go in the comments or through e-mail, if you have my address. In particular, I'm thinking about what platform to use: Blogger, WordPress, and Tumblr all have advantages and disadvantages. Also, while I can look at the stats and see what kind of content gets traffic, I'd prefer to hear more qualitative assessments: what worked for you, what mattered to you, what you didn't understand. I can’t guarantee I’ll take your advice, of course, but certainly I welcome input.

Thank you, all, for listening to me ramble for so long.

Saturday, 14 March 2015

Disciplinary Epistemologies 101

Since there have been universities, there has been a crisis in them. We should probably look at the more recent hand-wringing about universities teaching students relativism in the light of recurring accusations that universities corrupt youth, but I’m not going to do that analysis here (or ever, even). Instead I’m going to tell you about my Research Methods class.

1.

I had recently read that article for Ethika Politika in which Margaret Blume worries that varied distribution requirements at Yale University—some humanities, some social sciences, some hard sciences, a language or two, etc.—neither give students the sense that the different disciplines could speak to each other nor provide them with a framework in which to organize themselves. With this in the back of my mind, I was listening to my Research Methods instructor talk about the positivist underpinnings of quantitative research and the constructivist underpinnings of qualitative research. Putting the two together, I thought: Hey, maybe what Yale—and UBC, and whoever—needs is a mandatory introductory Research Methods and Disciplinary Epistemologies class.

My experience in undergrad left me with the acute sense that almost none of my peers knew why their own disciplines did things the way they did them, let alone why other disciplines made different decisions. No science student, for example, could tell me why they wrote everything in passive voice, and so they were generally immune to my editorial tirades about 1) cacophonic language and 2) awful epistemology re: denying that the Observer Effect exists.* Students in the humanities weren’t much better; in English, for instance, theory courses were optional for many students, and not all those on offer were great. The only ones who seemed to know these sorts of things were grad-students-to-be or people who took Introduction to Philosophy and listened to the professor.

So if the problem, as Blume would have it, is that undergraduate students had no idea how to put the puzzle pieces together, it seems like a Research Methods and Disciplinary Epistemologies class would be a great solution. I don’t agree, actually, that quantitative research implies positivism and qualitative research implies constructivism—that’s a long discussion, but suffice to say that I’m doing mixed methods research right now—but that’s the sort of conversation that might put the pieces together. Getting a whole big picture of all the disciplines would really help.

Now, there are some problems with this course, logistically. There are really only two people who I’d trust to teach the course: myself and my first-year Philosophy professor. There are probably others here and there, but that’s still a low enough percentage of the people I know that it’s worrisome. Maybe there’d need to be a set curriculum. The issue is that I trust neither insiders nor outsiders to teach a discipline’s epistemology; you’d probably have to have one of each. Maybe there could be modules: one professor handles the etic approach, and guest lecturers handle the emic approach.

2.

This lovely daydream lasted perhaps five minutes before I remembered that I had been a Teaching Assistant for a mandatory Intro to Literature class, and I had sworn off the idea of classes mandatory for all students then and there. You can lead a horse to water, they say, but you can’t make it drink; in my experience, quite a lot of horses won’t drink precisely because you lead them to water when they didn’t want to be led. These we’ll-make-them-learn-these-things-by-making-it-mandatory schemes rarely work.

I have heard of exceptions, where a professor and batch of TAs manage to get all or most of the students into the humanities, at least in heart if not in enrollment. But this seems to require a dream team of excellent professors, excellent TAs, and excellent students; rarely do you get two, let alone three, of those requirements. In the end, you just can’t force students to accept what they’re not willing to accept.

And it occurred to me, too, that a lot of the content I’d want to teach might be well over the heads of most first-year students. I encountered existentialism and Buddhism and constructivism as a first-year student, but it took me years to understand them, letting them slowly gestate long after I’d passed my Philosophy and Religious Studies finals; and I’m the sort of person who’s good at this sort of thing and won’t leave it alone.

So I’m going to have to come out against mandatory courses in university, no matter how well intended. I don’t think they do what we want them to do. But maybe we’re just doing them wrong?

3.

Leah wrote that maybe the framework-building should be extracurricular anyway, and I’m inclined to agree. Classes might not create incentives for truth-seeking; they are good at creating incentives for skill-building and material-mastering, but I don’t know how you could grade someone on whether or not they are right, on whether their commitment to their values is authentic, on how justified their decisions are. And if you aren’t grading students on something, not many of them are going to do it. We should encourage big questions in the classroom, but we can’t expect students to find them there.

And maybe casting students into a sea of relativism for a while is good for them, as I mentioned in my last post, so long as we give them some sense that they can and should and must get themselves ashore. We can’t get the students ashore for them, more than likely, and while we should think of ways to help them do so, the best method might just be for professors to model evaluativist thinking. For the most part they already do reward evaluativist thinking in assignments, since every disciplinary epistemology I’ve encountered has been thoroughly evaluativist; we needn’t make “evaluativist thinking” a formal requirement.

And the not-so-secret subtext of Blume’s article seems to be “every school should be a Catholic school,” so maybe I shouldn’t be taking it as seriously as a critique of university. For instance, Blume’s suggestion that only Catholic theology can tie together the disciplines is just silly: even if you spot her that Catholicism is true, it’s hard to deny that Islamic theology, Buddhist epistemology, and historical materialism have each been able to create a coherent, if not necessarily true, framework for all the disciplines.

But, anyway, it’s something to think about. I wouldn’t mind teaching a Disciplinary Epistemologies and Academic Research course; I just wonder who I’d teach it to.



* In case you too are unaware of the sciences’ use of the passive voice, I’ll explain it: sciences use the passive voice (“The results were analyzed…” rather than “We analyzed the results”) in order to mask the researchers’ presence. In theory, the researchers shouldn’t matter, the sciences say; we are removing the personality etc. from the procedures. Of course, some version of that claim is true, but not to the extent of removing the researcher from consideration entirely. The observer effect is often a serious one, and this grammatical elision hides the way researchers are involved in their research. Consider Nixon’s famous remark, “Mistakes were made”; passive voice is the mechanism by which responsible agents deny responsibility.
Moreover, the science students whose papers I edited never knew why they were supposed to use the passive voice, so they also never knew when they were supposed to use it. As a result, they used it in almost every sentence, even when it was confusing and unnecessary.

Sunday, 8 March 2015

A Mature Philosophy, Part II

Or, Personal Epistemology, the Perils of Education, and Two Ways of Not Being a Relativist

Knowledge is always uncertain, but some ideas are better than others. Evidence for propositions exists, but doesn’t that require claims about evidence for which we cannot have evidence? One-size-fits-all answers usually fit no one but the person who made them, but then physics decided to go and be all universal; if everything is just physics on a super-complicated scale, shouldn’t there be universal answers to all questions? I used to think the way to address these questions sat somewhere between modernism and postmodernism, or maybe through postmodernism and out to the other side, but that approach wasn’t generating very many answers for me and it certainly wasn’t working for any of my interlocutors. And then I discovered personal epistemology through my work as a research assistant and thought it was a helpful—though perhaps only modestly helpful—way of framing issues of knowledge and uncertainty and relativism and absolutism and people being not just wrong but annoyingly wrong.

So I chose personal epistemology as the topic for a class assignment.* Specifically, it was a literature review (in academics, a literature review is a summary of the published research on a topic: you review the scholarly literature). I learned a lot: my understanding of personal epistemology is much more nuanced now. More to the point, I read a study that almost destroyed my fledgling faith in the idea, until I realized there was a flaw in the study design (I think). Still, even if the study design is flawed, there’s an interesting implication, which I want to explore here.

But first, I should do a better job explaining personal epistemology.

1.

Personal epistemology refers to the beliefs a person has about knowledge and knowing. William Perry coined the term in his 1970 Forms of Ethical and Intellectual Development in the College Years, a longitudinal study of college students’ epistemologies. Most versions of the concept retain some element of Perry’s emphasis on the development of these beliefs across a person’s life. That said, there are a lot of competing ways of modelling personal epistemology. I’m going to focus on my favourites; there are some models (see King and Kitchener, for instance, or Elby and Hammer, at the bottom) which are prominent enough in the field but which I don’t know well enough to discuss.

Barbara Hofer, sometimes in collaboration with Paul Pintrich, has a more synchronous model, which looks at specific beliefs people have about knowledge at one time. You can think of it as a photograph rather than a video: higher definition, but only for a single moment. Hofer in particular looks at two aspects of two dimensions, for a total of four epistemic beliefs: complexity of knowledge and certainty of knowledge (paired under nature of knowledge), and source of knowledge and justification for knowledge (paired under process of knowing). Any given person can hold a naïve version of these beliefs or a sophisticated version. For instance, a naïve belief about the complexity of knowledge would be, “Knowledge is simple”; a sophisticated belief about the complexity of knowledge would be, “Knowledge is complex.” You can also consider what’s called domain specificity: a person might have naïve beliefs about mathematics but sophisticated beliefs about psychology, or vice versa. (I italicized the jargon terms so you can identify them as jargon and not my own interpolation.) Hofer and others usually imply (or state outright) that sophisticated beliefs are truer and/or more desirable than naïve ones.

Hofer’s model shows a lot of interesting differences between populations. Men tend to exhibit somewhat different epistemic beliefs than women do (men tend to hold more naïve beliefs than women do), and students in different academic disciplines also tend to exhibit different epistemic beliefs. There are also cultural differences; indeed, as I understand it Hofer is currently working on epistemic beliefs in different cultures.

Deanna Kuhn, on the other hand, is a scholar who looks more at the developmental side. Her scheme, like Perry’s original scheme, has stages that a person would ideally move through over the course of his or her life. That scheme looks like this:

Realists think assertions are copies of reality.
Absolutists think assertions are facts that are correct or incorrect according to how well they represent reality.
Multiplists think assertions are opinions that their owners have chosen and which are accountable only to those owners.
Evaluativists think assertions are judgements that can be evaluated or compared based on standards of evidence and/or argument.

Realists and absolutists agree that reality is directly knowable, that knowledge comes from an external source, and that knowledge is certain; multiplists and evaluativists agree that reality is not directly knowable, that knowledge is something humans make, and that knowledge is uncertain. However, things start to get more fine-grained when it comes to critical thinking: realists do not consider critical thinking necessary, while absolutists use critical thinking in order to compare different assertions and figure out whether they are true or false. Meanwhile, multiplists consider critical thinking to be irrelevant but evaluativists value critical thinking as a way to promote sound assertions and to improve understanding.

(There are actually six stages, but I’ve conflated similar ones for simplicity’s sake, as Kuhn does fairly often. The six stages are named and numbered thus: Level 0, Realist; Level 1, Simple absolutist; Level 2, Dual absolutist; Level 3, Multiplist; Level 4, Objective evaluativist; Level 5, Conceptual evaluativist. The differences between the two kinds of absolutist and two kinds of evaluativist are less marked than the differences between the larger groupings.)

There is a certain amount of work in education psychology trying to move children from lower stages to higher ones, but there are several challenges to this: for instance, teachers don’t have much training in this area, since traditional pedagogy is pretty thoroughly absolutist; there’s also the chance that children will retreat to a previous stage. Ordinarily, people develop because their beliefs about knowledge are challenged: when confronted by competing authorities, an absolutist worldview cannot adjudicate between them and so the person will be forced to adopt new beliefs about knowledge. However, each stage is more difficult than the previous one, and there’s always a chance that a person will find their new stage too difficult and retreat to a previous one (Yang and Tsai). So if you’re trying to move children along from realism to absolutism to multiplism to evaluativism, you need to push them, but not push them too hard.

Kuhn does account for domain-specificity, too. People tend to use one stage for certain kinds of knowledge while using a different stage for another kind of knowledge. In fact, people tend to attain new levels for the different knowledge domains in a predictable sequence, though I’m sure there are exceptions: people first move from absolutist to multiplist in areas of personal taste, then aesthetic judgements, then judgements about the social world, and finally judgements about the physical world; they then move from multiplist to evaluativist in the reverse order, starting with judgements about the physical world and ending with aesthetic judgements (but not judgements of taste, which rarely become non-relativist).

Both of these models have pretty good empirical backing as constructs and can predict a number of other things, such as academic success, comprehension of material, and so on. I’ll talk about how they might interact later. For the moment, though, I’ll let it rest and move on to that study I was talking about before.

2.

Braten, Strømsø, and Samuelstuen found in a 2008 study that students with more sophisticated epistemic beliefs performed worse at some tasks than students with naïve epistemic beliefs. They were using Hofer’s model, as described above, and looked at college students with no training in environmental science. These students were given a few different documents about climate change and asked to read and explain them. Students with sophisticated beliefs about the certainty or complexity of knowledge performed better than students with naïve beliefs about those same concepts, as predicted. However, students with sophisticated beliefs about the source of knowledge—that is, students who believe that knowledge is constructed, that knowledge is something people make—performed worse than students with naïve beliefs in this area—that is, students who believe that knowledge is received.

This finding seems to be a pretty major blow to the idea that we should be trying to get people to adopt sophisticated epistemic beliefs. Specifically, Braten, Strømsø, and Samuelstuen suggest that sophisticated epistemic beliefs about the process of knowing are more appropriate to experts; non-experts in those areas would do better with naïve epistemic beliefs. Even if sophisticated epistemic beliefs are right, they tend to make people wrong.

At first glance, this makes a sort of sense. A lot of people making bizarre claims about the physical world—creationists, anti-vaxxers, and climate change deniers all come to mind—rely pretty heavily on a constructivist view of science in their rhetoric. Is it possible that these people all have sophisticated beliefs about knowledge, but since they are non-experts in these areas they tend to evaluate the evidence really poorly? Would they be better off with naïve beliefs about knowledge? I’m going to be honest: this bothered me for a few days.

And this isn’t just a small problem. As Bromme, Kienhues, and Porsch point out in a 2010 paper, people get most of their knowledge second-hand. Personal epistemology research has so far focused on how people gain knowledge on their own, but finding, understanding, and assessing documents is the primary way in which people learn things. So if sophisticated beliefs wreak havoc with that process, we’re in trouble.

However, I think there’s a problem with the 2008 study.

Hofer’s model of epistemic beliefs has just two settings for each dimension: naïve and sophisticated. But Kuhn’s model shows that people are far more complicated than that. Specifically, multiplists and evaluativists both believe knowledge is constructed, but they do significantly different things in light of this belief. As Bromme, Kienhues, and Porsch point out, the multiplists in Kuhn’s study have little or no respect for experts, even going so far as to deny that expertise exists; both absolutists and evaluativists have great respect for experts, though for different reasons. Multiplists tend to outnumber evaluativists in any given population, however, and not by just a little bit. (The majority—in some studies the vast majority—of people get stuck somewhere in the absolutist-multiplist range.) So if you take a random sampling of students and sort out the ones with sophisticated epistemic beliefs, most of them will be multiplists rather than evaluativists according to Kuhn’s scheme. It therefore shouldn’t be at all surprising to find that most of them will have trouble understanding documents about climate change: they aren’t terribly interested in expertise, after all. But evaluativists may still be perfectly good at the task; they’re just underrepresented in the study.

Of course, this is conjecture on my part. It’s conjecture based on reading a lot of these studies and, I think, a sufficient understanding of the statistics involved, though feel free to correct me if I’m wrong on that count—I’m no statistician. But it’s still conjecture and I’d rather have empirical evidence. Alas, no one seems to have tried to resolve this problem, at least not that I could find.

Now, I can imagine a few different ways Hofer’s model and Kuhn’s model might fit together. Maybe each belief has only two settings—naïve and sophisticated—and Kuhn’s stages are different combinations of beliefs. So, realists would have only naïve beliefs; evaluativists would have only sophisticated beliefs; absolutists and multiplists would have some combination of naïve and sophisticated beliefs. This might mean that the beliefs work together in certain ways to produce new results, and that a combination of naïve and sophisticated beliefs doesn’t work well. And there might be some important beliefs that Hofer is missing that influence how these work, too. Or, maybe epistemic beliefs have more than two settings. Maybe there are two kinds of naïve belief and two kinds of sophisticated belief. Either of these possibilities would explain the conflict between Kuhn’s results and Braten, Strømsø, and Samuelstuen’s results.

3.

Even if Braten, Strømsø, and Samuelstuen’s results aren’t a nail in the coffin for those of us who want to be prescriptive about personal epistemology, any explanation for those results still means something interesting—or upsetting—about personal epistemology. Being an evaluativist is probably the best thing to be, in all knowledge domains: it’s both true** and useful. However, being a multiplist might not be better than being an absolutist, at least not for everything. Maybe, overall, multiplism is better than absolutism; certainly it’s truer. But people pay a price for maturity when they shift from absolutism to multiplism: they lose respect for expertise.

And it’s even worse than it might seem at first, because most people who make it to multiplism don’t make it to evaluativism. So we can take a bunch of absolutists and try to get them to evaluativism, but we’ll lose a lot of them in multiplism, and they might well be worse off in multiplism than they were in absolutism. (I’m not convinced that they’re actually worse off—multiplists are far more tolerant than absolutists—but let’s assume they are.) If I ask people to develop more sophisticated epistemic beliefs, I’m asking them to take a real risk. The pay-off is high (and, might I add, true), but the risk isn’t insignificant.

I’m reminded of all the worry about universities turning students into relativists, unable to make commitments, only able to show how nothing is undeniably true, lost at sea among competing frameworks. I’ve been really skeptical of such arguments in the past, but maybe I’ve underestimated how big of a problem this is. (The Blume and Roth articles are still riddled with problems, though.) Maybe relativists are real, and maybe they’re in trouble! I was wrong! But the existence of relativists might still be a good thing, even if relativism itself is a less good thing: it means that education is actually moving people along the stages of epistemological development. The trick is to get them all the way up to evaluativism; or, to phrase it more pointedly, the trick is to get them into evaluativism so they don’t slide back down into absolutism. I don’t know what the results look like for people who regress: I suspect it’s harder to get them into multiplism again so that they can get to evaluativism. This is what Perry’s research suggested, but Perry’s research was… well, that’s another story.

There’s still a lot to hash out here: Perry’s work suggests that schools with absolutist professors tend to produce multiplists anyway, since students still need to reconcile conflicting authorities and that process is what drives personal epistemology’s development. In fact, he suggests that yesterday’s reactionary tended to get a pretty developed epistemology, since they were wrestling with absolutist professors; today’s reactionary, however, rebels against multiplist or evaluativist professors, and so doubles down on absolutism. This is a problem.

And I also suspect—this time with nothing but anecdote, so mark it with an appropriate amount of salt-grains—that people in early stages can’t recognize or comprehend later stages very well. To an absolutist, evaluativism and multiplism probably look much the same, or else they think they’ve already achieved evaluativism. Meanwhile, to a multiplist, evaluativism probably looks like some kind of compromise with absolutism. Moving forward just looks wrong, until there’s nothing else you can do.

It’s hard to say what all of this means for higher education (or elementary and secondary education, for that matter). Do you focus on getting relativists through into evaluativism? Or do you focus on getting people out of absolutism and keeping them out of absolutism, trusting that they’ll find their way to evaluativism on their own (though Kuhn suggests this is very unlikely)? Maybe universities aren’t the ones that can get them to evaluativism anyway. Or do you throw them a bunch of professors with strong but conflicting opinions, hoping that this will challenge them through to evaluativism? (Personally, I learned a lot from clearly evaluativist professors who stated and argued for their own beliefs in the classroom, but did the opposing views justice, too. That seems like a good compromise: when the student is ready for evaluativism, they’ll have a model for that way of thinking, but students who tend to be reactionary aren’t so likely to slip into truculent absolutism for the rest of their lives, which they’d probably do if they had explicitly relativist professors.)

It’s hard to say, but I think personal epistemology is a good place to start thinking about the issue. My goodness, there are a lot of studies I wish I could do!

4.

I wrote this post because I wanted to get all of this off my chest, but also because I intend to talk a bit in an upcoming post about higher education in response to one of those worried articles I mentioned before. (Thanks, Leah, for bringing it to my attention.) Personal epistemology won’t play a large part in that discussion, I don’t think—I haven’t written it yet, so I can’t be sure—but I wanted you to have these concepts down as background information. Many of these “there’s trouble in higher education” articles worry a lot about all the hippy relativists that universities put out, and if we want to address that issue, I think we should learn how that relativism fits into cognitive development, right? It’s looking like people need to be relativists before they can be right, and then they need to move forward from relativism rather than retreat back into absolutism.

Actually, that’s an almost perfect summary. I’ll add that to the end.

OK. I know I’m well beyond acceptable blog post length, but there are two more things.

a. Way back when, Eve Tushnet wrote an article for the American Conservative called “Beyond Critical Thinking,” and then I wrote a thing called “Beyond Simple Acceptance,” because sometimes I’m a snide jerk. All that and the resulting back-and-forth is in the links peppering this post. Anyway, Eve was talking about how people come out of university unable to make intellectual stands because they’ve over-learned critical thinking; I was talking about how quite a lot of people (probably most people, probably you, probably me) don’t seem to be sufficiently capable of critical thinking, so I really didn’t think there was a problem with folks learning too much of the stuff.

Maybe I should have named this post “Beyond Critical Thinking and Simple Acceptance.” In retrospect, I think Eve was clearly arguing for something like evaluativism, but at the time I thought she was backsliding into absolutism. So I was wrong about that. But Eve was wrong to say that critical thinking is the problem. Instead, the problem seems to be that universities aren’t shepherding people through multiplism into evaluativism. (Maybe it isn’t the job of a university to do that, but I don’t know where else people will learn it.)

Now, I absolutely do not want a university to teach people which beliefs to take a stand for. The thought of that makes my skin crawl; the whole Catholic college or Baptist Bible institute model seems … disingenuous at best. Private schools at the elementary and secondary levels are even worse. (Though Perry might remind me that the rebels would fare better in that system than if they were taught by relativists.) But universities would certainly do well to help any students who make it to multiplism move on through to evaluativism.

b. Because I read Unequally Yoked and Slate Star Codex, I have some passing knowledge of Less Wrong, the Center for Applied Rationality (CFAR), and the rationalist community generally (though I wish they’d change the name from rationalist, because I’m fairly sure they aren’t disciples of Descartes, Spinoza, and Leibniz). A major focus of these groups is figuring out how to reason better. CFAR in particular works on rationality outreach and research—testing whether the sorts of tricks Less Wrong develops are empirically supportable, and teaching these tricks to people outside the movement.***

I wonder about this, though. How much rational thinking can non-evaluativists learn? Would the resources be better spent moving people from absolutism into multiplism and from multiplism into evaluativism? Or does that come automatically when you teach rational thinking? Perry was clear: he thought that critical reasoning skills came from the development of personal epistemology. But Perry’s method was… not the best. It might be worth spending some resources to check this: is it better—in terms of outcome per cost—to move people into evaluativism, or to teach them the rationality tricks? I don’t have the resources to check any of that, but maybe CFAR does.

TL;DR: It’s looking like people need to be relativists before they can be right; relativism isn’t great, though; people need to move forward from relativism rather than retreat back into absolutism.


* On the note of class assignments, I finish my program in April. Yay! Then I will need to start job hunting. Boo! But after that, I hope, I will have a job. Yay!

** I am asserting that it is true. This assertion is based on—well, on everything, really—but I want to make clear that psychology can’t really tell us much about epistemology in the philosophical sense; the job here is to determine what beliefs people do have about knowledge, not which beliefs are true. However, it seems pretty clear, philosophically, that knowledge is uncertain and constructed, that reality cannot be directly accessed, and so on.

*** The other focus for Less Wrong (and Slate Star Codex?) is the creation of a robot-god which will usher in a utilitarian paradise (and prevent the otherwise-inevitable rise of a robot-demiurge), so I’m still not sure what to make of their claim to reasonableness.

_____
Select Sources

Braten, Ivar, Helge I. Strømsø, and Marit S. Samuelstuen. “Are sophisticated students always better? The role of topic-specific personal epistemology in the understanding of multiple expository texts.” Contemporary Educational Psychology 33 (2008): 814-840. Web.

Bromme, Rainer, Dorothe Kienhues, and Torsten Porsch. “Who knows what and who can we believe? Epistemological beliefs are beliefs and knowledge (mostly) to be attained from others.” Personal Epistemology in the Classroom. Edited by Lisa D. Bendixen and Florian C. Feucht. Cambridge: Cambridge UP, 2010. 163-193. Print.

Elby, Andrew. “Defining Personal Epistemology: A Response to Hofer & Pintrich (1997) and Sandoval (2005).” Journal of the Learning Sciences 18.1 (2009): 138-149. Web.

Elby, Andrew, and David Hammer. “On the Form of Personal Epistemology.” Personal Epistemology: The Psychology of Beliefs About Knowledge and Knowing. Edited by Barbara K. Hofer and Paul R. Pintrich. Mahwah, New Jersey: Lawrence Erlbaum Associates, 2002. 169-190. Print.

Hofer, Barbara K. “Dimensionality and Disciplinary Differences in Personal Epistemology.” Contemporary Educational Psychology 25.4 (2000): 378-405. Web.

Hofer, Barbara K. “Exploring the dimensions of personal epistemology in differing classroom contexts: Student interpretations during the first year of college.” Contemporary Educational Psychology 29 (2004): 129-163. Web.

Hofer, Barbara K. “Personal Epistemology and Culture.” Knowing, Knowledge and Beliefs: Epistemological Studies across Diverse Cultures. Edited by Myint Swe Khine. Perth: Springer, 2014. 3-22. Print.

King, Patricia M., and Karen Strohm Kitchener. “The Reflective Judgment Model: Twenty Years of Research on Epistemic Cognition.” Personal Epistemology: The Psychology of Beliefs About Knowledge and Knowing. Edited by Barbara K. Hofer and Paul R. Pintrich. Mahwah, New Jersey: Lawrence Erlbaum Associates, 2002. 37-61. Print.

Kuhn, Deanna, and Michael Weinstock. “What is Epistemological Thinking and Why Does it Matter?” Personal Epistemology: The Psychology of Beliefs About Knowledge and Knowing. Edited by Barbara K. Hofer and Paul R. Pintrich. Mahwah, New Jersey: Lawrence Erlbaum Associates, 2002. 121-144. Print.

Perry, William G. Forms of Intellectual and Ethical Development in the College Years: A Scheme. New York: Holt, Rinehart and Winston, Inc., 1970. Print.

Yang, Fang-Ying, and Chin-Chung Tsai. “An epistemic framework for scientific reasoning in informal contexts.” Personal Epistemology in the Classroom. Edited by Lisa D. Bendixen and Florian C. Feucht. Cambridge: Cambridge UP, 2010. 124-162. Print.