Wednesday, 6 May 2015

New Blog: Accidental Shelf-Browsing

This will be the last post here.

My new blog is called Accidental Shelf-Browsing. It is on WordPress; otherwise, I do not know if I have much to say about it yet.

I've turned off commenting capabilities here (I think). If you would like to get in touch about anything at this blog, please do so at my new one.

Thank you all for reading.

Christian H.

Thursday, 19 March 2015

Resignation

1.

A few years ago—at least three—I was talking to a friend who said he felt great embarrassment about his past: the things he used to say, the things he used to believe, the person he used to be. He felt uncomfortable running into people he met back then, because it always reminded him of that gap. Even though I had changed a lot, I couldn’t say I shared the feeling. I felt comfortable attributing my foolishness to youth; that is the price of maturing.

I am taking a course in social media as part of my last term in university. It’s been an interesting course; we’ve discussed identity-creation through social media, the role Twitter plays in professional discourse, and the similarities between content curation and library curation. At the beginning of class we were asked to give a brief history of our social media use. I mentioned Facebook and Tumblr, of course; I mentioned the research I’m doing on YouTube comments; I mentioned my defunct DeviantArt and Flickr accounts. I also mentioned that I had a blog for a long time and frequented the comments of some blogs, since that was relevant to the course blogs and course discussion board components of the class. I did not, however, share the link. To my knowledge, exactly one person who I met in Vancouver knows this blog exists.

In a management course I took in the second term of this program, the instructor spoke with us about job interviews. What sorts of things might make an interviewer think twice about hiring you? Pregnancy? Partying? Strong and … specific political opinions? Should you mention these things on your Facebook profile? Whether or not employers should check candidates’ Facebook profiles, they do; they also Google candidates’ names. Of course, one of the other things that might make an interviewer think twice is a history with mental illness.

Last night I was reading posts in which bloggers reveal the search terms that brought the most people to their blog. Of course, as with my own, the terms are largely pornographic. So I decided to check my current stats and see what search terms bring people in. Planarians are a common theme, including at least two people who wanted to know if cutting a planarian in half hurts it. (I don’t know for sure, but I’m going to have to go with yes.) Sidney’s sonnets have been replaced by Addison’s essay on wit for the subject of undergraduate plagiarism. No one looks for Disney songs in church more than a few times a month, but it still comes up regularly. One search string in particular caught my attention: “christian h full name thinking grounds.”


2.

I assume you’re getting a sense of where this is going.

The separation between myself and “Christian H,” as a particular construct or persona or avatar, has long been tenuous. The first time I ever got something published, I mentioned it here; the first time I worked on an online project for an employer, with my name attached, I also mentioned it here. From then on, anonymity was merely probable, never guaranteed; a person could in theory get from this blog to my real name if they tried hard enough.

In the meantime, though, I’ve published other things, and I did not mention them. A big part of this silence is that I’m actually more concerned about people getting from my real name to this blog than the other way around. Googling my full name does not yet get you to this blog (within the number of pages that most people are willing to look through). If I mentioned a publication, there’d be another search term. What I felt in the first anecdote is no longer true: I am plenty embarrassed by this blog’s archives. Maybe I shouldn’t be embarrassed; it’s nearly a truism that all writers hate their older work. But I am. I won’t talk much about it; the most important things, I think, are that I realize that I was unfair, defensive, and blind to my own privileges.

When I first started blogging about depression, I spent a lot of time thinking about whether I wanted to do so. I knew that my anonymity was precarious at best. As I framed it to myself, my choice was between blogging about depression and no longer blogging at all, maybe even to the point of burying the Thinking Grounds, if I could. I’ve been unsure about this blog for a while, but it offers a chance to think in a particular way about things; I value this way of thinking. So the only way I’d keep blogging was if I could use it to think about depression. You know what I chose.

But I am headed for the job market in less than a month. I even made a LinkedIn account! Soon I will be co-presenting a paper at a major conference. While the novel I’m currently writing will probably never see the light of day, I am sending out poems and short stories, and someone may publish one of them sometime or another. Furthermore, I have a few ideas for novels to write after the one I’m writing now; one of those might go somewhere. I hope one of them does. Don’t get me wrong: I doubt I’ll be a quote-unquote public figure in the near future, and I may never be one at all. But I need to start worrying a lot more about online reputation.


3.

What does this mean?

First, over the next few weeks, I will be combing the archives and taking down those posts which I think I really need to take down. There are a few places where I think I was pretty unprofessional. Mostly I disapprove of taking down posts to prevent embarrassment, but in a few cases I think it’s necessary. (Provided there’s a point in taking it down; the Wayback Machine has archived some, but not all, of my blog.) For any opinion-pieces I can no longer stand behind, I’ll write a generic disclaimer and post it to the top. This task seems a little silly, but I’d rather be safe than sorry.

Second, I will be retiring this blog soon. I may post here a few more times, but I won’t be here past April 15. Except, possibly, to re-direct you to my new blog. If there is one.

I do value blogging. It gives me a place and a chance to think through things with an audience (even if mostly imagined) beyond whoever will indulge me in real life. I can keep track of where I am with things. I’m having trouble articulating what I find so valuable in blogging, but those are close enough. I don’t like blogging here anymore, though. I can’t bury this blog, and I don’t want to; I feel I’ve done some good writing here in the last few years, and I’d like to be able to link back to it if the occasion comes. What I want is to signal a break of some kind; I want a fresh-ish start.

(I’ve also learned some HTML and CSS, and the tiniest bit of Javascript, so this might be a chance to make something for myself and not just for an assignment.)

I need time to think about what shape that new blog would take, though, or whether I want to go ahead with it at all. I don’t know what my life will be like in the second half of April, let alone after that. I do know that I’m having trouble finding time (or energy, motivation, etc.) to write the fiction and poetry I want to write, and blogging takes up some of those resources. There might be good reasons to stop blogging entirely for a while, to scale back, or to change my approach. I don’t know yet; I need to think about it. But I’ve been meaning to think about it for over two years now, and I never really do. I decided I need to commit publicly to ending this blog in order to seriously think about something new.

If I do start something new, though, there will be an important change: I’ll be using my full name. The Christian H persona has borne the brunt of context collapse so I don’t have to—I don’t think I’ve looked to avoid accountability so much as to avoid social awkwardness—but that strategy has a shelf life, and I’ve pushed it past reasonable limits.


4.

In the meantime, the Weekly Wonders tumblr will still run at least until late May. I started it late last May, and from the start I planned to take a hiatus at the anniversary. I’m not the only wonder-monger, though, so it may continue without me; I’m also sure to pick it up again before too long.

I’ll still have my other tumblr, too, though it’s devolved into a reblog tumblr and I have no intention of making it much more than that.

You could also follow me around on Disqus. I’m going to be seriously re-thinking my commenting practice, too, but I likely won’t stop entirely.

I would really appreciate suggestions for a new blog, on any aspect of it; suggestions can go in the comments or through e-mail, if you have my address. In particular, I'm thinking about what platform to use: Blogger, WordPress, and Tumblr all have advantages and disadvantages. Also, while I can look at the stats and see what kind of content gets traffic, I'd prefer to hear more qualitative assessments: what worked for you, what mattered to you, what you didn't understand. I can’t guarantee I’ll take your advice, of course, but I certainly welcome input.

Thank you, all, for listening to me ramble for so long.

Saturday, 14 March 2015

Disciplinary Epistemologies 101

Since there have been universities, there has been a crisis in them. We should probably look at the more recent hand-wringing about universities teaching students relativism in the light of recurring accusations that universities corrupt youth, but I’m not going to do that analysis here (or ever, even). Instead I’m going to tell you about my Research Methods class.

1.

I had recently read that article for Ethika Politika in which Margaret Blume worries that varied distribution requirements at Yale University—some humanities, some social sciences, some hard sciences, a language or two, etc.—neither give students the sense that the different disciplines could speak to each other nor provide them with a framework in which to organize themselves. With this in the back of my mind, I was listening to my Research Methods instructor talk about the positivist underpinnings of quantitative research and the constructivist underpinnings of qualitative research. Putting the two together, I thought: Hey, maybe what Yale—and UBC, and whoever—needs is a mandatory introductory Research Methods and Disciplinary Epistemologies class.

My experience in undergrad left me with the acute sense that almost none of my peers knew why their own disciplines did things the way they did them, let alone why other disciplines made different decisions. No science student, for example, could tell me why they wrote everything in passive voice, and so they were generally immune to my editorial tirades about 1) cacophonic language and 2) awful epistemology re: denying that the Observer Effect exists.* Students in the humanities weren’t much better; in English, for instance, theory courses were optional for many students, and not all those on offer were great. The only ones who seemed to know these sorts of things were grad-students-to-be or people who took Introduction to Philosophy and listened to the professor.

So if the problem, as Blume would have it, is that undergraduate students have no idea how to put the puzzle pieces together, it seems like a Research Methods and Disciplinary Epistemologies class would be a great solution. I don’t agree, actually, that quantitative research implies positivism and qualitative research implies constructivism—that’s a long discussion, but suffice to say that I’m doing mixed methods research right now—but that’s the sort of conversation that might put the pieces together. Getting a whole big picture of all the disciplines would really help.

Now, there are some problems with this course, logistically. There are really only two people who I’d trust to teach the course: myself and my first-year Philosophy professor. There are probably others here and there, but that’s still a low enough percentage of the people I know that it’s worrisome. Maybe there’d need to be a set curriculum. The issue is that I trust neither insiders alone nor outsiders alone to teach a discipline’s epistemology; you’d probably have to have one of each. Maybe there could be modules: one professor handles the etic approach, and guest lecturers handle the emic approach.

2.

This lovely daydream lasted perhaps five minutes before I remembered that I had been a Teaching Assistant for a mandatory Intro to Literature class, and I had sworn off the idea of classes mandatory for all students then and there. You can lead a horse to water, they say, but you can’t make it drink; in my experience, quite a lot of horses won’t drink precisely because you led them to water when they didn’t want to be led. These we’ll-make-them-learn-these-things-by-making-it-mandatory schemes rarely work.

I have heard of exceptions, where a professor and batch of TAs manage to get all or most of the students into the humanities, at least in heart if not in enrollment. But this seems to require a dream team of excellent professors, excellent TAs, and excellent students; rarely do you get two, let alone three, of those requirements. In the end, you just can’t force students to accept what they’re not willing to accept.

And it occurred to me, too, that a lot of the content I’d want to teach might be well over the heads of most first-year students. The existentialism and Buddhism and constructivism I met as a first-year student took me years to understand, gestating slowly long after I’d passed my Philosophy and Religious Studies finals, and I’m the sort of person who’s good at this sort of thing and won’t leave it alone.

So I’m going to have to come out against mandatory courses in university, no matter how well intended. I don’t think they do what we want them to do. But maybe we’re just doing them wrong?

3.

Leah wrote that maybe the framework-building should be extracurricular anyway, and I’m inclined to agree. Classes might not create incentives for truth-seeking; they are good at creating incentives for skill-building and material-mastering, but I don’t know how you could grade someone on whether or not they are right, on whether their commitment to their values is authentic, on how justified their decisions are. And if you aren’t grading students on something, not many of them are going to do it. We should encourage big questions in the classroom, but we can’t expect students to find them there.

And maybe casting students into a sea of relativism for a while is good for them, as I mentioned in my last post, so long as we give them some sense that they can and should and must get themselves ashore. We can’t get the students ashore for them, more than likely, and while we should think of ways to help them do so, the best method might just be for professors to model evaluativist thinking. For the most part they already do reward evaluativist thinking in assignments, since every disciplinary epistemology I’ve encountered has been thoroughly evaluativist; we needn’t make “evaluativist thinking” a formal requirement.

And the not-so-secret subtext of Blume’s article seems to be “every school should be a Catholic school,” so maybe I shouldn’t be taking it as seriously as a critique of university. For instance, Blume’s suggestion that only Catholic theology can tie together the disciplines is just silly: even if you spot her that Catholicism is true, it’s hard to deny that Islamic theology, Buddhist epistemology, and historical materialism have been able to create a coherent, if not necessarily true, framework for all disciplines.

But, anyway, it’s something to think about. I wouldn’t mind teaching a Disciplinary Epistemologies and Academic Research course; I just wonder who I’d teach it to.



* In case you too are unaware of the sciences’ use of the passive voice, I’ll explain it: sciences use the passive voice (“The results were analyzed…” rather than “We analyzed the results”) in order to mask the researchers’ presence. In theory, the researchers shouldn’t matter, the sciences say; we are removing the personality etc. from the procedures. Of course, some version of that claim is true, but not to the extent of removing the researcher from consideration entirely. The observer effect is often a serious one, and this grammatical elision hides the way researchers are involved in their research. Consider Nixon’s famous remark, “Mistakes were made”; passive voice is the mechanism by which responsible agents deny responsibility.
Moreover, the science students whose papers I edited never knew why they were supposed to use the passive voice, so they also never knew when they were supposed to use it. As a result, they used it in almost every sentence, even when it was confusing and unnecessary.

Sunday, 8 March 2015

A Mature Philosophy, Part II

Or, Personal Epistemology, the Perils of Education, and Two Ways of Not Being a Relativist

Knowledge is always uncertain, but some ideas are better than others. Evidence for propositions exists, but doesn’t that require claims about evidence for which we cannot have evidence? One-size-fits-all answers usually fit no one but the person who made them, but then physics decided to go and be all universal; if everything is just physics on a super-complicated scale, shouldn’t there be universal answers to all questions? I used to think the way to address these questions sat somewhere between modernism and postmodernism, or maybe through postmodernism and out to the other side, but that approach wasn’t generating very many answers for me and it certainly wasn’t working for any of my interlocutors. And then I discovered personal epistemology through my work as a research assistant and thought it was a helpful—though perhaps only modestly helpful—way of framing issues of knowledge and uncertainty and relativism and absolutism and people being not just wrong but annoyingly wrong.

So I chose personal epistemology as the topic for a class assignment.* Specifically, it was a literature review (in academics, a literature review is a summary of the published research on a topic: you review the scholarly literature). I learned a lot: my understanding of personal epistemology is much more nuanced now. More to the point, I read a study that almost destroyed my fledgling faith in the idea, until I realized there was (I think) a flaw in the study design. Still, even if the design is flawed, there’s an interesting implication which I want to explore here.

But first, I should do a better job explaining personal epistemology.

1.

Personal epistemology refers to the beliefs a person has about knowledge and knowing. William Perry coined the term in his 1970 Forms of Intellectual and Ethical Development in the College Years, a longitudinal study of college students’ epistemologies. Most versions of the concept retain some element of Perry’s emphasis on the development of these beliefs across a person’s life. That said, there are a lot of competing ways of modelling personal epistemology. I’m going to focus on my favourites; there are some models (see King and Kitchener, for instance, or Elby and Hammer, at the bottom) which are prominent enough in the field but which I don’t know well enough to discuss.

Barbara Hofer, sometimes in collaboration with Paul Pintrich, has a more synchronic model, which looks at specific beliefs people have about knowledge at one time. You can think of it as a photograph rather than a video: higher definition, but only for a single moment. Hofer in particular looks at two aspects of two dimensions, for a total of four epistemic beliefs: complexity of knowledge and certainty of knowledge (paired under nature of knowledge), and source of knowledge and justification for knowledge (paired under process of knowing). Any given person can hold a naïve version of these beliefs or a sophisticated version. For instance, a naïve belief about the complexity of knowledge would be, “Knowledge is simple”; a sophisticated belief about the complexity of knowledge would be, “Knowledge is complex.” You can also consider what’s called domain specificity: a person might have naïve beliefs about mathematics but sophisticated beliefs about psychology, or vice versa. (I italicized the jargon terms so you can identify them as jargon and not my own interpolation.) Hofer and others usually imply (or state outright) that sophisticated beliefs are truer and/or more desirable than naïve ones.
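(If it helps to see the shape of Hofer’s model all at once, here is a toy sketch in Python. The dimension and belief names are Hofer’s, as summarized above; the class, its field names, and the example values are my own invention, purely for illustration.)

```python
from dataclasses import dataclass
from typing import Literal

# Each of Hofer's four epistemic beliefs is held in either a naive or a
# sophisticated form, and a person can hold a different profile per domain.
Setting = Literal["naive", "sophisticated"]

@dataclass
class EpistemicBeliefProfile:
    domain: str  # domain specificity, e.g. "mathematics" or "psychology"
    # Nature of knowledge
    complexity_of_knowledge: Setting  # naive: "knowledge is simple"
    certainty_of_knowledge: Setting   # naive: "knowledge is certain"
    # Process of knowing
    source_of_knowledge: Setting          # naive: "knowledge is received"
    justification_for_knowledge: Setting  # naive: "no weighing of evidence needed"

# A hypothetical person with naive beliefs about mathematics
# but sophisticated beliefs about psychology:
profiles = [
    EpistemicBeliefProfile("mathematics", "naive", "naive", "naive", "naive"),
    EpistemicBeliefProfile("psychology", "sophisticated", "sophisticated",
                           "sophisticated", "sophisticated"),
]
```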

Hofer’s model shows a lot of interesting differences between populations. Men tend to exhibit somewhat different epistemic beliefs than women do (men tend to hold more naïve beliefs than women do), and students in different academic disciplines also tend to exhibit different epistemic beliefs. There are also cultural differences; indeed, as I understand it Hofer is currently working on epistemic beliefs in different cultures.

Deanna Kuhn, on the other hand, is a scholar who looks more at the developmental side. Her scheme, like Perry’s original scheme, has stages that a person would ideally move through over the course of his or her life. That scheme looks like this:

Realists think assertions are copies of reality.
Absolutists think assertions are facts that are correct or incorrect according to how well they represent reality.
Multiplists think assertions are opinions that their owners have chosen and which are accountable only to those owners.
Evaluativists think assertions are judgements that can be evaluated or compared based on standards of evidence and/or argument.

Realists and absolutists agree that reality is directly knowable, that knowledge comes from an external source, and that knowledge is certain; multiplists and evaluativists agree that reality is not directly knowable, that knowledge is something humans make, and that knowledge is uncertain. However, things start to get more fine-grained when it comes to critical thinking: realists do not consider critical thinking necessary, while absolutists use critical thinking in order to compare different assertions and figure out whether they are true or false. Meanwhile, multiplists consider critical thinking to be irrelevant but evaluativists value critical thinking as a way to promote sound assertions and to improve understanding.

(There are actually six stages, but I’ve conflated similar ones for simplicity’s sake, as Kuhn does fairly often. The six stages are named and numbered thus: Level 0, Realist; Level 1, Simple absolutist; Level 2, Dual absolutist; Level 3, Multiplist; Level 4, Objective evaluativist; Level 5, Conceptual evaluativist. The differences between the two kinds of absolutist and two kinds of evaluativist are less marked than the differences between the larger groupings.)
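(And here is Kuhn’s scheme in the same spirit: a little Python lookup table. The keys and field names are mine; the values only restate the claims in the preceding paragraphs.)

```python
# Kuhn's four broad stages, with the positions summarized above.
KUHN_STAGES = {
    "realist": {
        "reality_directly_knowable": True,
        "knowledge_is": "certain, from an external source",
        "critical_thinking": "unnecessary",
    },
    "absolutist": {
        "reality_directly_knowable": True,
        "knowledge_is": "certain, from an external source",
        "critical_thinking": "used to sort correct assertions from incorrect ones",
    },
    "multiplist": {
        "reality_directly_knowable": False,
        "knowledge_is": "uncertain, made by humans",
        "critical_thinking": "irrelevant; assertions are just opinions",
    },
    "evaluativist": {
        "reality_directly_knowable": False,
        "knowledge_is": "uncertain, made by humans",
        "critical_thinking": "valued; assertions can be weighed by evidence and argument",
    },
}

# The six finer-grained levels, in developmental order:
KUHN_LEVELS = ["realist", "simple absolutist", "dual absolutist",
               "multiplist", "objective evaluativist", "conceptual evaluativist"]
```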

There is a certain amount of work in education psychology trying to move children from lower stages to higher ones, but there are several challenges to this: for instance, teachers don’t have much training in this area, since traditional pedagogy is pretty thoroughly absolutist; there’s also the chance that children will retreat to a previous stage. Ordinarily, people develop because their beliefs about knowledge are challenged: when confronted by competing authorities, an absolutist worldview cannot adjudicate between them and so the person will be forced to adopt new beliefs about knowledge. However, each stage is more difficult than the previous one, and there’s always a chance that a person will find their new stage too difficult and retreat to a previous one (Yang and Tsai). So if you’re trying to move children along from realism to absolutism to multiplism to evaluativism, you need to push them, but not push them too hard.

Kuhn does account for domain-specificity, too. People tend to use one stage for certain kinds of knowledge while using a different stage for another kind of knowledge. In fact, people tend to attain new levels for the different knowledge domains in a predictable sequence, though I’m sure there are exceptions: people first move from absolutist to multiplist in areas of personal taste, then aesthetic judgements, then judgements about the social world, and finally judgements about the physical world; they then move from multiplist to evaluativist in the reverse order, starting with judgements about the physical world and ending with aesthetic judgements (but not judgements of taste, which rarely become non-relativist).

Both of these models have pretty good empirical backing as constructs and can predict a number of other things, such as academic success, comprehension of material, and so on. I’ll talk about how they might interact later. For the moment, though, I’ll let it rest and move on to that study I was talking about before.

2.

Braten, Strømsø, and Samuelstuen found in a 2008 study that students with more sophisticated epistemic beliefs performed worse at some tasks than students with naïve epistemic beliefs. They were using Hofer’s model, as described above, and looked at college students with no training in environmental science. These students were given a few different documents about climate change and asked to read and explain them. Students with sophisticated beliefs about the certainty or complexity of knowledge performed better than students with naïve beliefs about those same concepts, as predicted. However, students with sophisticated beliefs about the source of knowledge—that is, students who believe that knowledge is constructed, that knowledge is something people make—performed worse than students with naïve beliefs in this area—that is, students who believe that knowledge is received.

This finding seems to be a pretty major blow to the idea that we should be trying to get people to adopt sophisticated epistemic beliefs. Specifically, Braten, Strømsø, and Samuelstuen suggest that sophisticated epistemic beliefs about the process of knowing are more appropriate to experts; non-experts in those areas would do better with naïve epistemic beliefs. Even if sophisticated epistemic beliefs are right, they tend to make people wrong.

At first glance, this makes a sort of sense. A lot of people making bizarre claims about the physical world—creationists, anti-vaxxers, and climate change deniers all come to mind—rely pretty heavily on a constructivist view of science in their rhetoric. Is it possible that these people all have sophisticated beliefs about knowledge, but since they are non-experts in these areas they tend to evaluate the evidence really poorly? Would they be better off with naïve beliefs about knowledge? I’m going to be honest: this bothered me for a few days.

And this isn’t just a small problem. As Bromme, Kienhues, and Porsch point out in a 2010 paper, people get most of their knowledge second-hand. Personal epistemology research has so far focused on how people gain knowledge on their own, but finding, understanding, and assessing documents is the primary way in which people learn things. So if sophisticated beliefs wreak havoc with that process, we’re in trouble.

However, I think there’s a problem with the 2008 study.

Hofer’s model of epistemic beliefs has just two settings for each dimension: naïve and sophisticated. But Kuhn’s model shows that people are far more complicated than that. Specifically, multiplists and evaluativists both believe knowledge is constructed, but they do significantly different things in light of this belief. As Bromme, Kienhues, and Porsch point out, the multiplists in Kuhn’s study have little or no respect for experts, even going so far as to deny that expertise exists; both absolutists and evaluativists have great respect for experts, though for different reasons. Multiplists tend to outnumber evaluativists in any given population, however, and not by just a little bit. (The majority—in some studies the vast majority—of people get stuck somewhere in the absolutist-multiplist range.) So if you take a random sampling of students and sort out the ones with sophisticated epistemic beliefs, most of them will be multiplists rather than evaluativists according to Kuhn’s scheme. It therefore shouldn’t be at all surprising to find that most of them will have trouble understanding documents about climate change: they aren’t terribly interested in expertise, after all. But evaluativists may still be perfectly good at the task; they’re just underrepresented in the study.
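(Here is a toy calculation of the kind of composition effect I have in mind. Every number in it is invented; the point is only that a “sophisticated” group dominated by multiplists can average worse than the “naive” group even if the evaluativists in it outperform everyone.)

```python
# Invented mean comprehension scores by Kuhn-style stage (out of 100).
mean_score = {"absolutist": 60, "multiplist": 45, "evaluativist": 75}

# Suppose the "sophisticated source-of-knowledge" group (everyone who says
# knowledge is constructed) is mostly multiplists, as the population data
# suggest, while the "naive" group is made up of absolutists.
sophisticated_group = {"multiplist": 0.85, "evaluativist": 0.15}
naive_group = {"absolutist": 1.0}

def group_mean(composition):
    """Average score for a group, weighted by its stage composition."""
    return sum(share * mean_score[stage] for stage, share in composition.items())

print(group_mean(sophisticated_group))  # 0.85 * 45 + 0.15 * 75 = 49.5
print(group_mean(naive_group))          # 60.0
# The "sophisticated" group averages worse, even though its evaluativist
# minority scores best of all.
```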

Of course, this is conjecture on my part. It’s conjecture based on reading a lot of these studies and, I think, a sufficient understanding of the statistics involved, though feel free to correct me if I’m wrong on that count—I’m no statistician. But it’s still conjecture and I’d rather have empirical evidence. Alas, no one seems to have tried to resolve this problem, at least not that I could find.

Now, I can imagine a few different ways Hofer’s model and Kuhn’s model might fit together. Maybe each belief has only two settings—naïve and sophisticated—and Kuhn’s stages are different combinations of beliefs. So, realists would have only naïve beliefs; evaluativists would have only sophisticated beliefs; absolutists and multiplists would have some combination of naïve and sophisticated beliefs. This might mean that the beliefs interact in certain ways to produce new results, and that a combination of naïve and sophisticated beliefs doesn’t work well together. And there might be some important beliefs that Hofer is missing that influence how these work, too. Or, maybe epistemic beliefs have more than two settings. Maybe there are two kinds of naïve belief and two kinds of sophisticated belief. Either of these possibilities would explain the conflict between Kuhn’s results and Braten, Strømsø, and Samuelstuen’s results.

3.

Even if Braten, Strømsø, and Samuelstuen’s results aren’t a nail in the coffin for those of us who want to be prescriptive about personal epistemology, any explanation for those results still means something interesting—or upsetting—about personal epistemology. Being an evaluativist is probably the best thing to be, in all knowledge domains: it’s both true** and useful. However, being a multiplist might not be better than being an absolutist, at least not for everything. Maybe, overall, multiplism is better than absolutism; certainly it’s truer. But people pay a price for maturity when they shift from absolutism to multiplism: they lose respect for expertise.

And it’s even worse than it might seem at first, because most people who make it to multiplism don’t make it to evaluativism. So we can take a bunch of absolutists and try to get them to evaluativism, but we’ll lose a lot of them in multiplism, and they might well be worse off in multiplism than they were in absolutism. (I’m not convinced that they’re actually worse off—multiplists are far more tolerant than absolutists—but let’s assume they are.) If I ask people to develop more sophisticated epistemic beliefs, I’m asking them to take a real risk. The pay-off is high (and, might I add, true), but the risk isn’t insignificant.

I’m reminded of all the worry about universities turning students into relativists, unable to make commitments, only able to show how nothing is undeniably true, lost at sea among competing frameworks. I’ve been really skeptical of such arguments in the past, but maybe I’ve underestimated how big of a problem this is. (The Blume and Roth articles are still riddled with problems, though.) Maybe relativists are real, and maybe they’re in trouble! I was wrong! But the existence of relativists might still be a good thing, even if relativism itself is a less good thing: it means that education is actually moving people along the stages of epistemological development. The trick is to get them all the way up to evaluativism; or, to phrase it more pointedly, the trick is to get them into evaluativism so they don’t slide back down into absolutism. I don’t know what the results look like for people who regress: I suspect it’s harder to get them into multiplism again so that they can get to evaluativism. This is what Perry’s research suggested, but Perry’s research was… well, that’s another story.

There’s still a lot to hash out here: Perry’s work suggests that schools with absolutist professors tend to produce multiplists anyway, since students still need to reconcile conflicting authorities and that process is what drives personal epistemology’s development. In fact, he suggests that yesterday’s reactionary tended to get a pretty developed epistemology, since they were wrestling with absolutist professors; today’s reactionary, however, rebels against multiplist or evaluativist professors, and so doubles down on absolutism. This is a problem.

And I also suspect—this time with nothing but anecdote, so mark it with an appropriate amount of salt-grains—that people in early stages can’t recognize or comprehend later stages very well. To an absolutist, evaluativism and multiplism probably look much the same, or else they think they’ve already achieved evaluativism. Meanwhile, to a multiplist, evaluativism probably looks like some kind of compromise with absolutism. Moving forward just looks wrong, until there’s nothing else you can do.

It’s hard to say what all of this means for higher education (or elementary and secondary education, for that matter). Do you focus on getting relativists through into evaluativism? Or do you focus on getting people out of absolutism and keeping them out of absolutism, trusting that they’ll find their way to evaluativism on their own (though Kuhn suggests this is very unlikely)? Maybe universities aren’t the ones that can get them to evaluativism anyway? Or do you throw them a bunch of professors with strong but conflicting opinions, hoping that this will challenge them through to evaluativism? (Personally, I learned a lot from clearly evaluativist professors who stated and argued for their own beliefs in the classroom, but did the opposing views justice, too. That seems like a good compromise: when the student is ready for evaluativism, they’ll have a model for that way of thinking, but students who tend to be reactionary aren’t so likely to slip into truculent absolutism for the rest of their lives, which they’d probably do if they had explicitly relativist professors.)

It’s hard to say, but I think personal epistemology is a good place to start thinking about the issue. My goodness there are a lot of studies I wish I could do!

4.

I wrote this post because I wanted to get all of this off my chest, but also because I intend to talk a bit in an upcoming post about higher education in response to one of those worried articles I mentioned before. (Thanks, Leah, for bringing it to my attention.) Personal epistemology won’t play a large part in that discussion, I don’t think—I haven’t written it yet, so I can’t be sure—but I wanted you to have these concepts down as background information. A lot of these “there’s trouble in higher education” articles tend to worry a lot about all these hippy relativists that universities put out, and if we want to address that issue, I think we should learn how that relativism fits into cognitive development, right? It’s looking like people need to be relativists before they can be right, and then they need to move forward from relativism rather than retreat back into absolutism.

Actually, that’s an almost perfect summary. I’ll add that to the end.

OK. I know I’m well beyond acceptable blog post length, but there are two more things.

a. Way back when, Eve Tushnet wrote an article for the American Conservative called “Beyond Critical Thinking,” and then I wrote a thing called “Beyond Simple Acceptance,” because sometimes I’m a snide jerk. All that and the resulting back-and-forth is in the links peppering this post. Anyway, Eve was talking about how people come out of university unable to make intellectual stands because they’ve over-learned critical thinking; I was talking about how quite a lot of people (probably most people, probably you, probably me) don’t seem to be sufficiently capable of critical thinking, so I really didn’t think that there was a problem with folks learning too much of the stuff.

Maybe I should have named this post “Beyond Critical Thinking and Simple Acceptance.” In retrospect, I think Eve is clearly arguing for something like evaluativism but I thought she was backsliding to absolutism. So I was wrong in that. But Eve was wrong to say that critical thinking is the problem. Instead, the problem seems to be that universities aren’t shepherding people through multiplism into evaluativism. (Maybe it isn’t the job of a university to do that, but I don’t know where else people will learn it.)

Now, I absolutely do not want a university to teach people which beliefs to take a stand for. The thought of that makes my skin crawl; the whole Catholic college or Baptist bible institute seems … disingenuous at best. Private schools at the elementary and secondary levels are even worse. (Though Perry might remind me that the rebels would fare better in that system than if they were taught by relativists.) But universities would certainly do well to help any students who make it to multiplism move on through to evaluativism.

b. Because I read Unequally Yoked and Slate Star Codex, I have some passing knowledge of Less Wrong, the Center for Applied Rationality (CFAR), and the rationalist community generally (though I wish they’d change the name from rationalist because I’m fairly sure they aren’t disciples of Descartes, Spinoza, and Kant.) A major focus of these groups is to figure out how to reason better. CFAR is particularly looking at rationality outreach and research—testing whether the sorts of tricks Less Wrong develops are empirically supportable, and teaching these tricks to people outside the movement.***

I wonder about this, though. How much rational thinking can non-evaluativists learn? Would the resources be better spent moving people from absolutism into multiplism and from multiplism into evaluativism? Or does that come automatically when you teach rational thinking? Perry was clear: he thought that critical reasoning skills came from the development of personal epistemology. But Perry’s method was… not the best. It might be worth spending some resources to check this: is it better—in terms of outcome/cost—to move people into evaluativism, or to teach them the rationality tricks? I don’t have the resources to check any of that, but maybe the CFAR does.

TL;DR: It’s looking like people need to be relativists before they can be right; relativism isn’t great, though; people need to move forward from relativism rather than retreat back into absolutism.


* On the note of class assignments, I finish my program in April. Yay! Then I will need to start job hunting. Boo! But after that, I hope, I will have a job. Yay!

** I am asserting that it is true. This assertion is based on—well, on everything, really—but I want to make clear that psychology can’t really tell us much about epistemology in the philosophical sense; the job here is to determine what beliefs people do have about knowledge, not which beliefs are true. However, it seems pretty clear, philosophically, that knowledge is uncertain and constructed, that reality cannot be directly accessed, and so on.

*** The other focus for Less Wrong (and Slate Star Codex?) is the creation of a robot-god which will usher in a utilitarian paradise (and prevent the otherwise-inevitable rise of a robot-demiurge), so I’m still not sure what to make of their claim to reasonableness.

_____
Select Sources

Braten, Ivar, Helge I. Strømsø, and Marit S. Samuelstuen. “Are sophisticated students always better? The role of topic-specific personal epistemology in the understanding of multiple expository texts.” Contemporary Educational Psychology 33 (2008): 814-840. Web.

Bromme, Rainer, Dorothe Kienhues, and Torsten Porsch. “Who knows what and who can we believe? Epistemological beliefs are beliefs and knowledge (mostly) to be attained from others.” Personal Epistemology in the Classroom. Edited by Lisa D. Bendixen and Florian C. Feucht. Cambridge: Cambridge UP, 2010. 163-193. Print.

Elby, Andrew. “Defining Personal Epistemology: A Response to Hofer & Pintrich (1997) and Sandoval (2005).” Journal of the Learning Sciences, 18.1 (2009): 138-149. Web.

Elby, Andrew, and David Hammer. “On the Form of Personal Epistemology.” Personal Epistemology: The Psychology of Beliefs About Knowledge and Knowing. Edited by Barbara K. Hofer and Paul R. Pintrich. Mahwah, New Jersey: Lawrence Erlbaum Associates, 2002. 169-190. Print.

Hofer, Barbara K. “Dimensionality and Disciplinary Differences in Personal Epistemology.” Contemporary Educational Psychology 25.4 (2000): 378-405.

Hofer, Barbara K. “Exploring the dimensions of personal epistemology in differing classroom contexts: Student interpretations during the first year of college.” Contemporary Educational Psychology 29 (2004): 129-163. Web.

Hofer, Barbara K. “Personal Epistemology and Culture.” Knowing, Knowledge and Beliefs: Epistemological Studies across Diverse Cultures. Edited by Myint Swe Khine. Perth: Springer, 2014. 3-22. Print.

King, Patricia M. and Karen Strohm Kitchener. “The Reflective Judgment Model: Twenty Years of Research on Cognitive Epistemology.” Personal Epistemology: The Psychology of Beliefs About Knowledge and Knowing. Edited by Barbara K. Hofer and Paul R. Pintrich. Mahwah, New Jersey: Lawrence Erlbaum Associates, 2002. 37-61. Print.

Kuhn, Deanna and Michael Weinstock. “What is Epistemological Thinking and Why Does it Matter?” Personal Epistemology: The Psychology of Beliefs About Knowledge and Knowing. Edited by Barbara K. Hofer and Paul R. Pintrich. Mahwah, New Jersey: Lawrence Erlbaum Associates, 2002. 121-144. Print.

Perry, William G. Forms of Intellectual and Ethical Development in the College Years. New York: Holt, Rinehart and Winston, Inc., 1970. Print.


Yang, Fang-Ying, and Chin-Chung Tsai. “An epistemic framework for scientific reasoning in informal contexts.” Personal Epistemology in the Classroom. Edited by Lisa D. Bendixen and Florian C. Feucht. Cambridge: Cambridge UP, 2010. 124-162. Print.

Friday, 27 February 2015

Book-Eaters and Titanomachy

TW: a brief discussion of cannibalism

Back in November, I posted the following to Facebook:

Does anyone else find themselves, at times, thinking, "I ate that book," rather than, "I read that book"?
Eating a book, for me, is different from reading it. If you read a book, you look at the words, understand them, and recognize the whole book as an object. If you eat a book, you do all of that, but then you also internalize it, assembling its ideas and perspectives into yourself. Moreover, you bring those perspectives into yourself as one perspective among many, and you do not take it as is; you fix it, alter it, improve it, nuance it, cut out the stuff that doesn't work, fit it into your framework. It may challenge you, but after you've responded to its challenge, you then challenge it. You internalize it, but you also tame it. If you merely internalize it, and you let it take you over, you did not eat the book, but rather the book ate you.
(Today, I was thinking of a book, and then imagined telling someone, "I ate that book," and it was weird but also made sense to me.)

What I doubt any of my Friends knew was that I had in mind the Majesty 2 computer game. The goblin shamans, in what is probably a pretty problematic joke, are said to have a “from stomach to heart” philosophy: they learn about potions and medicine and so forth by eating everything they find or concoct. If it cures them, they know. If it makes them sick, they know. They learn by eating. This struck me as a limited philosophy; I feel like shamans would be interested in a lot more than just medicine. So I started to imagine that “from stomach to heart” had metaphorical applications elsewhere: all learning was imagined as eating. Certainly we use traces of this metaphor ourselves, allowing thoughts to “digest” and maintaining a “balanced diet” of books or news sources. Thus, one day, my brain produced the quirk of thinking, “I ate that book.”

(The other influence was endocannibalism, the practice of eating your in-laws when they die so they stay a part of the tribe. Cannibalism holds no appeal for me in itself, but when I learned about endocannibalism I could immediately understand why some cultures practice it.)

Eating ideas is an agonistic vision, though, isn’t it? Either I eat or I am eaten: if I do not conquer it, it conquers me. And lately I’ve noticed that there is this dynamic in my learning process. If I find a work important or influential, I am at first carried away by it and I see everything in its terms. And then, in a little while, I decide it is time to outgrow it; either this happens when I start to see its cracks, or I decide it needs to happen and I look for the cracks. Sometimes I think of it as wrestling with angels, but the metaphors I use more often are destroying idols and titanomachy.

Titanomachy refers to the war, in Greek mythology, which the Olympian gods waged against the Titans. The Titans were like the gods-before-the-gods, brutal and terrible figures; their king, Cronus, was not only the god of time but also the father of Zeus, Poseidon, and Hades. So in the Titanomachy, the Olympians fought and deposed their parents. But Cronus, too, overthrew the god who came before him, his own father Uranus, a god personifying the sky. These sorts of battles recur in mythology across the world, where a pantheon declares war on a dangerous and more ancient force.

Besides being one of Freud’s more obvious sources, the titanomachy reminds me of Tillich’s idea of the Protestant Principle, which I’ve discussed before. We have a certain conception about God, but this idea must be false in some respects since humans are finite, flawed, and contingent creatures trying to reason about the infinite, perfect, and absolute. Thus any honest attempt to approach God must include a willingness to tear down those ideas about God to replace them with new ones. Most people are not called to do this, but in theory you might need to build a conception of God (an idol, I would call it), and then promptly smash it; build a new one, and promptly smash it; on and on until you die. I do not know if titanomachy is a crude metaphor for this process, but certainly we could use it as one: primordial gods replaced by ancient gods replaced by classical gods replaced by new gods. If you think of idols as standing in for ideas about anything rather than just ideas about God, you’ve got a sense of how I’m imagining things.

I have no thesis here: this post is intended to record the clumsy metaphors I’ve been using for learning. Sometimes I have enthusiasms; I incorporate a worldview; it threatens to overwhelm me; I wrestle against it; I break it; I incorporate its pieces into my pantheon. This is also a crude metaphor, though, since I do not feel any hostility or resentment in this process. I still have much fondness for the poor idols I damage. Of course not all engagement is like this; other times I resist from the outset and am only slowly won over, from within. Other times an idea just does not take at all.

Thursday, 26 February 2015

Alien, Warrior, Outcast, Fugitive, and Victim

Or, Jones and the Theological Worlds

A Taxonomy of Religions Post

Behind each set of eyes is profound mystery, a tender, unique, fragile, and special creation which identifies the self as theological artist. And in such artistry, the self is always a social creature.
W. Paul Jones, Theological Worlds

I started the series comparing Stephen Prothero’s God is Not One and W. Paul Jones’s 1989 Theological Worlds; this was a typical bit of arrogance on my part, since I hadn’t read the latter. The review was enough for my purposes, of course, and I think it all turned out well enough. But I have now read Theological Worlds, and there’s more to say about it.

To begin, theological is perhaps inaccurate, or not reflective of how we tend to use the word. The Worlds Jones describes are not only Christian or religious ones; a lot of the examples he uses are non-religious (Camus and Sartre and Marx feature often). Each of these Worlds is characterized on one pole by an obsessio—literally, “to besiege,” here a central concern or anxiety for a person—and on the other pole by a corresponding epiphania—that idea or experience that promises to absorb the obsessio and make it tolerable. The obsessio is greater than any merely human response; the epiphania, then, is more-than-human. For Jones, the word and idea of God does not have a specific content so much as a functional meaning: for each of us, God is that which can promise epiphania. For some of us, the obsessio is more prominent; for others, the epiphania is. Thus while Jones’s Theological Worlds is written from a strictly Christian perspective, the worlds described are theological in a strictly functional sense: even a dyed-in-the-wool atheist has a “theology” if “God” refers merely to that which makes life livable. A Christian, for Jones, is a person who identifies Jesus of Nazareth (as either a historical figure or as a legend) as God in this sense.

According to the introduction, Jones used typical social studies methods to determine what these Worlds most commonly look like: he interviewed a number of people, determined common threads, looked to find these threads articulated in theological and cultural literature, formulated five hypothesized worlds using the patterns that arose, and then ran those worlds past another independent set of subjects to test their validity. I can’t speak to the method further than that, but I will try to talk about the usefulness of this typology in a moment. But first, let’s look at the Five Worlds he discovered.

People living in World One are struck, and horrified, by the way in which contingency determines the universe. Everything seems so arbitrary; so much has happened by chance, and it could easily have happened another way. The fact that I am alive right now, that I did not die five seconds ago, is true only by luck. The fact that the churning, teeming mass of evolutionary history spat out creatures capable of self-awareness was neither necessary nor even likely, as far as we know. If we look up, we see a jumble of planets and stars strung out in great empty spaces, none of which have anything to do with us; if we look inward, we see atoms and quarks and fundamental forces, all of them bumping about with no regard for you or me. The world seems pointless. A person in World One lives as an Alien; the longing Alien’s obsessio is thus a sense of isolation experienced as abandonment by whoever or whatever made us. Any epiphania must involve a new way of seeing the world as containing hidden mystery: epiphania is a glimpse of homecoming or reunion with some great plan hidden behind the veil. Jones mentions Paul Tillich, Kafka, E.T., the painter El Greco, and Beethoven’s Opus 132; I would mention Northrop Frye and H. P. Lovecraft, noting that Lovecraft seems wholly without epiphania.

People living in World Two are much more concerned with history than the universe. In particular, history is marked by war and violence; that is, history is marked by evil. The evil is so pervasive and resistant to change that anyone in this World quickly realizes the problem isn’t people but the system itself. People do evil but only because of the systems that control them, and even those systems are themselves the products of death (or entropy or, in a fancier term, the Nihil). In many Worlds death can come as a boon; in World Two, death is nothing but bad, and against both death and history one can do nothing but fight. A person in World Two lives as a Warrior; the angry Warrior’s obsessio is chaos experienced as the evil and violence of history. Because of this focus on the present world, the epiphania cannot be a promise fulfilled in an afterlife. Instead, the epiphania must take place here, at the end of history and as a product and redemption of history, a sort of New Earth. Jones mentions Karl Barth, Karl Marx, Moby Dick, Van Gogh, and Beethoven’s Symphony No. 5; I would mention Richard Beck’s powers and principalities (and almost everything else Beck has written) and Scott Alexander’s Meditations on Moloch.

People living in World Three do not find their obsession in the world around them: for people in World Three, the problem lies within. Or, really, the problem is that there isn’t much of a within at all. They feel empty, or unfulfilled, or unlovable (or, I would add, worthless). They feel like they are wearing a mask and that if anyone saw what was beneath that mask, they would be horrified—perhaps because whatever lies under that mask is loathsome, or perhaps because there’s just nothing under there at all. This World is marked with regret at lost opportunities, and the kind of exposure these people fear is not death but nakedness. And yet, whatever drive a person in this world might feel to make something of themselves, they usually feel guilty whenever they do so: they aren’t worth their own attentions. Unlike people in other Worlds, inhabitants of World Three do not necessarily generalize their problem to others. A person in World Three is an Outcast, not because anyone cast them out but because they cast themselves out, at least emotionally speaking; the aching Outcast’s obsessio is self-estrangement experienced as impotence or emptiness. The only epiphania that can rescue a person in World Three is an enrichment that allows them to make something of themselves, either in Sartre’s sense of self-creation through every decision or in Kierkegaard’s sense that each person has a unique identity given to them by God which they must discover and become, or something in between these views. Jones also mentions Tolstoy, Ralph Ellison’s Invisible Man, and James Joyce’s Portrait of the Artist as a Young Man. I would add that Echo’s character arc in Dollhouse is pretty much pure epiphania of this sort.

People living in World Four also find that the problem with everything is inside: that problem is that they’re awful. They are selfish and arrogant and striving and cruel, but so is everyone else. In order to thrive, we must compete; we must kill to eat; whatever we do, we do damage. So each and every one of us is guilty and condemned. Even in our attempts to make reparations, though, we are guilty: we try to make amends because we are afraid of punishment. Reason becomes only a tool for rationalization; charity becomes a way of promoting ourselves. A person in World Four is thus a Fugitive; the guilty Fugitive’s obsessio is idolatry, specifically the idolatry of self-interest and arrogance. Any epiphania must then be a kind of forgiveness, in particular one unearned. Even accepting forgiveness, though, is difficult, since accepting forgiveness is a selfish act; in order to be free of guilt, the epiphania must in some sense give such a person both forgiveness and the ability to accept it unselfishly. Jones mentions Nathaniel Hawthorne, William Faulkner, and American Gothic. The Buffy-spinoff Angel hovers between this World and World Two, I think.

People living in World Five are not worried about any of those things; the obsessios of the other worlds are perhaps out of reach for those in World Five. Here, people are just overwhelmed with suffering. This is the world of hard living, of slaves and miners and poverty. Hope, in such a world, is always false: hope is merely the prelude to disappointment. Suffering is perpetual, and it can always get worse. There’s a certain pride to this world, at times: one mustn’t pity the old or the scarred, because they survived. A person in World Five is a Victim; the overwhelmed Victim’s obsessio is engulfment, engulfment in life itself (which is to say, suffering). Epiphania cannot take the form of hope here; epiphania is only endurance and survival. Often this is a survival in a community, where people suffer with (com-passion) one another if they cannot suffer for them. Jones mentions Elie Wiesel, Tennessee Williams, and Rembrandt. I would mention strains of Buddhism.

So for a Christian in each of these worlds, it is Jesus, the Christ, that offers epiphania, but in very different ways. In World One, Jesus offers access to the Creator who made the world and made it good; Christ offers reunion with God. In World Two, Jesus promises to make a New Earth and cast down empires; Christ is God entering history. In World Three, Jesus tells us that we are beloved by God and invites us to grow and be fulfilled in him; Christ is license to love yourself so that you can love your neighbour. In World Four, Jesus condemns us and then forgives us; Christ is God taking on guilt and punishment so that we do not have to. In World Five, Jesus is crucified in his compassion for us; Christ is God suffering with us in solidarity.

One of the problems in Christianity is that most people consider their own world to be the legitimate one. When an evangelist turns to someone and says, “Jesus washes your sins away,” they are not going to interest anyone who does not live in World Four. When a progressive mainline church promises to help congregants grow and affirm themselves, anyone who does not live in World Three will probably roll their eyes at best and mutter about children starving in Africa at worst. (That said, many World Three people will do the same, since the problem is that they don't believe in self-affirmation or growth.) Or, when an atheist points to the randomness of the world and says there are no traces of God’s plan, no one living outside of World One will even see the point of the objection. (This might well be what happened when I wrote about how wonderful it is to be an alien.) Thus so much discourse is just people talking past one another; Jones looks to fix this.

However, every example Jones gives in this book suggests that no one lives in a single World; every person has their own World, which is some mix of the obsessios and epiphanias of the five he describes. Jones himself is mostly in World Two, with World One’s fear of abandonment, and he was raised in a small World Five town which deeply impacted him. These five Worlds are more like clusters of data points, patterns arising from all these idiosyncrasies aggregated. No one, or almost no one, is a pure type.

There’s a lot more to say about these Worlds, I think, that I don’t have space to say: Jones does not at any point discuss the relation between these Worlds and what the world is actually like, though this might be outside the scope of his project; psychoanalysis lurks throughout the whole book, often without consideration; race and gender play sophisticated but strange and problematic roles in his discussion; his ecclesiology, which relies heavily on sub-congregations segregating people with different Worlds, seems pretty improbable and maybe undesirable, especially when no one is a pure type.

What I want to spend a moment asking, though, is how exhaustive these Worlds are. In 1989, perhaps, these were the dominant Worlds; were there smaller ones that he did not detect? Were there others in other countries, or cultures, that did not enter his sample? And are there other ones now? Might there be others in the future? What could they look like? Or are these five Worlds representative of some deep and fundamental orientations, exhausting all possibilities? I can maybe imagine a spectrum between the Human/Self and Universe/Outside: World Three is entirely concerned with the human’s own self, and the Outcast makes no claims about anyone else at all; World One locates the problem entirely outside the human, in the Universe’s lack of human meaning; between them, World Four focuses on universal human sin, World Five on a harsh world, and World Two on a system of humans shaped by death to be destructive. I don’t know. This seems to be a stretch, but since it’s an empirical question, it’s one we can test.



Friday, 6 February 2015

My Very Own Exploitation Flick?

Or, The Ex-Exploitation Flick

It occurred to me that it would be boring to talk about every genre in terms of my own worldview, so I’ll probably tackle them as lenses onto other worldviews instead. However, I think it might be worth considering what my exploitation flick would look like, rather than talking about what someone else’s exploitation flick would look like, because there are a few problems I have with the genre which would impact or limit its use. Thinking about this will let me dig a bit deeper into the genre than I otherwise would, and it will give me a reason to talk about finding genres which wouldn’t work for someone’s worldview.

I wrote extensively about exploitation flicks before and I don’t want to engage in that too much. In fact, I’m going to do something pretty bad and ignore the historical and political specificity of the assorted subgenres for this post and focus instead on four more top-level characteristics:
  1. displaying an abundance of stereotypes or commonplace tropes about the subject matter, such that collecting these references marks a specific identity;
  2. a penchant for taboo violations, such as sacrilege, body horror, cannibalism, etc.;
  3. enthusiastic/gratuitous nudity and sex, with a pretty unapologetic heterosexual male perspective; and
  4. enthusiastic/gratuitous violence, with a pretty unapologetic pleasure in gore and explosions over, for instance, the choreography of fighting.

Recall, too, that these subgenres are mostly used to celebrate the identity in question, either by indulging in it (the first trait) or making it attractive (the third and fourth traits).

Let’s get started.

The connection between collection and identity has come up in a class I’m taking about social media. A few weeks ago we discussed how people constitute their identities in social media contexts. An example I brought up, in connection with Jorge Luis Borges’s “Afterword” to Museums (of course), is how sharing seemingly non-autobiographical content can constitute a kind of identity. Think of Tumblr, and particularly what are called reblog tumblrs, where a person does not produce their own content but rather shares things they like, sometimes with commentary appended or in the tags but sometimes without any addition at all. Though they technically only collect and co-locate content they found, and therefore do not reveal any information about their offline life, they still create a kind of identity—reputation? presence? role in the media ecology?—by creating that collection. Lots of people, or lots of the people I know, do this sort of thing on their Facebook or in their real-life conversations. I’m well-known for spouting out facts about animals and, more recently, for my Weekly Wonders tumblr. And I have other interests or obsessions which different people know to varying degrees.

So if a person were making an exploitation flick in order to celebrate or valourize Christian-H-ness, there are a number of things they might use to mark the film as having that identity, probably as plot points or as passing references, but maybe as background, too. I could make a list, but I think that you either know me well enough that I don’t need to or you don’t know me well enough to care. Either way, I won’t bore you with the list. Feel free to compile one in the comments if that’s something you’re into.

Still, I don’t know what to do with this observation that a collection can constitute a sort of identity (except in a library science context, in which I think there’s at least one obvious direction to go with that). If I think of anything, I’ll come back to it.

Violating taboos is one I have a harder time with. For the most part, I don’t like violating taboos; when I violate a taboo it’s usually because I don’t think of it as a taboo, and when I think of something as a taboo I usually don’t violate it. It is difficult for me to think of a taboo I’d want to violate that I’d also want to depict as a taboo violation. I’m all for violating gender norms, but I basically don’t want there to be gender norms, so I’m not at all interested in their violation as violation. I’m also all for having a variety of body types in film—and I don’t just mean different levels of body fat, but also people with amputations and people born with unusual anatomy, etc. We’ll get back to this topic, but I guess I wouldn’t want to see the movie treat this as breaking a taboo, either, even if that’s technically what it is; I want the camera to normalize people with unusual anatomy, and that seems at odds with taboo violation.

The only thing I can think of is breaking museum exhibits or destroying art. I don’t generally like either, but I can imagine instances in which I’d support or at least be sympathetic to either. For instance, in some cultures, certain objects are made specifically to be broken or to be allowed to degrade over time, but then museums preserve the objects, violating those cultures’ values. So breaking into a museum and destroying the object might be 1) appropriate within the object’s original culture, even if a taboo violation in the museum’s culture, and 2) a political act against colonialism. I can think of other such examples. This would be a taboo violation for me, since I tend to treat art and artifacts as being set aside, almost sacred, but it would also be a taboo violation which possibly agrees with my explicit values, given highly specific circumstances.

Nudity is also a difficult one. There seem to be three “functions” for nudity in exploitation flicks, which don’t all align with one another: the film’s apparent reasons for using nudity aren’t necessarily the same as what the film actually achieves with its nudity, and neither matches what I, personally, get out of it. So, respectively, 1) for the apparent audience, the presentation of nudity in exploitation flicks seems intended to shock and excite/arouse, in order to celebrate the relevant identity; 2) the usually female nudity in exploitation flicks rhetorically erases women’s agency, turning women into bodies, plot points, trophies, etc.; 3) regarding my personal engagement, nudity in exploitation flicks mostly improves my understanding of human anatomy, though of course the lack of diversity in the bodies shown limits this function. On the grounds of #2 I am uncomfortable with #1 and, besides, the way these movies depict nudity isn’t all that appealing to me; #1, meanwhile, limits #3, because it means that only certain kinds of bodies are shown. This means that if a person were to make a Christian-H-ploitation flick, conventional exploitation flick nudity wouldn’t work. It would misrepresent, rather than confirm and celebrate, my norms and identity.

There might be two solutions: 1) use nudity, but use it differently, or 2) find something to substitute for nudity. These aren’t mutually exclusive. For instance, you could have nudity in order to showcase human anatomy and show how human bodies work, but not do so in an erotic way; choose a wide variety of body types and highlight how those bodies move and function rather than focusing on their naughty bits. (For instance: have three people in gym showers, facing the wall, with the camera on them from behind; one of them is elderly, one of them is heavyset, and one of them—who will be a main character?—is missing an arm.) And then you could also show people in ways intended to make them physically attractive that don’t rely on nudity. There’s a wide range of outfits that I consider to be very attractive beyond “naked” or “swimsuit,” like floor-length sleeveless dresses and scarves over sweaters over dark skirts and checked shirts tucked into blue jeans. By splitting functions 1 and 3, you can achieve both without committing function 2.

(If you’ve noticed that we’re quite a distance from exploitation flicks now, don’t worry; I’m going to talk about that.)

The last element I mentioned is violence. I’m quite non-violent in real life, and I feel almost like pacifism is a moral obligation, though I’m not quite there. At the same time, I have no objections to violence in films. I find a lot of fight sequences boring (Transformers is by far the worst offender but certainly not the only one), but I’m always down for a well-done fight scene. So, on the one hand, violence would certainly reflect my media choices, but on the other hand it might be a violation of my values.

There might be three possible fixes: first, we could ensure that the violence always has consequences and isn’t undertaken frivolously; second, there could be a lot of simulated violence, but little actual violence (for instance, water fights, martial arts training, play fighting, LARPing, or video games); and third, there could be violence against inanimate objects. I stole that last idea from Yojimbo, in which the most elaborate and impressive “fight scene” involves the protagonist trashing a room to make it look like a fight happened there when it didn’t. (Someone could trash a museum?)


But, when I combine all of these transformations, what I don’t get is an exploitation flick. There might be a lot of stylistic things we could do to give it similar visuals (see Grindhouse for some ideas), but its heart is elsewhere. Exploitation flicks rely on an unembarrassed celebration of things which we usually feel uncomfortable celebrating: sex, violence, blasphemy, etc. For the most part, I have no interest in doing that. So while I might still get a lot out of the question, “What would my exploitation flick look like?”, in terms of both understanding myself and understanding exploitation flicks better, I have to conclude that the simplest answer would be, “Well, it wouldn’t be an exploitation flick anymore.”

What about you? Does it sound like exploitation flicks would work for you? Are there other genres that wouldn’t?

Tuesday, 3 February 2015

My Very Own Epic: Part 4

What in Me is Dark, Illumine 
                    What in me is dark
Illumine, what is low raise and support,
That to the height of this great argument
I may assert eternal Providence,
And justify the ways of God to men.
John Milton, Paradise Lost


In the previous posts I thought about how I might use the epic conventions to depict my own worldview. But I think there’s another question to ask: what’s missing from that discussion? A cursory glance through my recent interests would reveal that I haven’t yet mentioned these things:

  1. the idea that any truly radical change in social organization—a revolution, in other words—would create a new society and culture that we cannot predict from our current position, because people would have new arrangements of choices that we’ve never seen before; this is scary because it means we can’t tell in advance whether that society will be “better,” but it is also an opportunity for hope because it means there might be solutions to problems which we have not yet been able to imagine; but anyway if we can’t go on in the present condition, revolution is the only thing we have
  2. the idea that the property relationship (the idea that you can own things) is nothing more than a legal construct, and an unnecessary one at that; and, furthermore, that that legal construct has produced conditions which are actively bad and tend only to get worse (e.g. income inequality, environmental degradation) and it is only through persistent violations of that construct, and the ideology built around it, that the system can be sustained
  3. things about bodies, and the ways bodies don’t adhere to our expectations of them; almost all of my thoughts on bodies come from Alice Domurat Dreger’s One of Us: Conjoined Twins and the Future of Normal, though a graduate seminar on Shakespeare and Marlowe gave me some ideas, too
  4. multiculturalism, by which I mean both cultures learning from one another and also creating a meta-culture in which multiple cultures can all function
  5. religion broadly, though most ways in which I am interested in religion are implicated in things I have already discussed
  6. social justice, which I guess makes me a social justice warrior or cleric or druid or whatever
  7. a postmodern suspicion of metanarratives
  8. a particular vision of freedom, and what freedom means, which I don't think I've ever discussed here
In an epic, which is supposed to give a complete vision of a worldview, I can’t dispense with these concerns, some of which are pretty important to me. Of course, I don’t have to address them with conventions exactly; they might be part of the plot or the individual episodes without transforming the conventions. But I have a few ideas nonetheless: the in medias res turn might be a good way of suggesting revolution, since the structure would suggest, or lend itself to, a radical break from the past; even as it grows out of that which preceded it, in another sense it truly is a new beginning. And that might suggest the event of great importance: a revolution, involving the defeat of the previous system and the creation of the new one (and its possible challenges). For bodies, the drakaina and the talking trees seem obvious avenues; I would be concerned about making them do double (or triple) duty, but I might either find ways to make these ideas relate or have multiple drakainae. In a fantasy setting, I could also just populate the world with Arkans, handling both the thing about bodies and the thing about multiculturalism; the Arkans might allude to Spenser’s habit of using lots of doubles—lookalikes, in particular—in The Faerie Queene, but that’s not exactly a convention in other epics. Otherwise, I don’t see at the moment how I’d implicate these ideas in epics specifically.

A word on the suspicion of metanarratives, though: I hope my idea about digressions already addresses this. Epics, by their very nature, are metanarratives. Thus I should be suspicious of any epic, even my own. (Indeed, what this might be good for is to make my metanarrative explicit so I can be suspicious of it.) But I hope this epic seriously attempts to question its own arc, right? The idea is that the quest is changed by each encounter with the other, and the sense might be that the quest must always, forever be changed by future encounters. Hopefully that would be clear.

Let’s try and tie these together a bit. Regardless of setting, it might make sense to begin with a revolution, though not necessarily a socialist one: if the opening scene were a parliament or conference of some kind, that might call back to the Parliament in Hell which occurs near the beginning of Paradise Lost, though that might not be a favourable comparison. If the setting is historical, I’d have to choose a real revolution (if I chose the French Revolution, the first National Assembly might make a good opening), or perhaps the founding of a new society—a commune somewhere, maybe—or a post-disaster attempt to rebuild society, as with survivors of a shipwreck on a deserted island. A pirate vessel just after its mutiny might work. In a fantasy setting, I could tailor the events as I needed them. Either way, half of the story would involve the formation of society from that point on; the other half would be flashbacks to the events which led to the revolution. The protagonist or protagonists would likely embark on a quest in order to help secure the new society; after all, the new society would still have enemies. The quest may involve a journey of some kind, but it might also be more like a project: drafting a constitution, building a defensive wall or tower, designing a monument. As they try to complete the quest, they’ll encounter other members of the new society who, while on board with the whole revolution thing, have needs which the current direction of the new society is not meeting or is actively thwarting. The protagonists hear these characters’ stories, try to help with their problems, and are changed as a result—or, anyway, they ought to be, but they may fail in this. Thus the nature of the quest must change, too, in order to account for these strangers’ needs. The protagonist or protagonists would eventually have to ask themselves why they were on this quest, and acknowledge that they could not get to the bottom of the problem “Why am I who I am?”; at the same time, they would have to take responsibility for their commitment to this quest, whatever version it is, from here on out. Having begun it does not mean they must finish it, or finish it in its current form. So they keep re-interpreting what the quest is as they encounter strangers. I’m not sure how it would end. The katabasis and underworld might well happen in flashback—the moment of shipwreck, or the deplorable situation which made revolution necessary—but it might be more exciting if it happened during the “present,” in which case it could be a subterranean prison or a ruined city, or, again, actual depression.

I think I’ll stop there. I haven’t worked out how the natural world—its Otherness, but also our embodiment, and also environmental concerns—would play out, but that’s largely because it depends so much on the setting. If it takes place after a shipwreck, it seems easy enough to figure out; in a fantasy setting, I could use fantastic elements as metaphors. It would take some thinking for other settings. But I’m not actually planning to write the epic, after all; this is an exercise.

So what did I learn? Well, I learned that I need to be careful not to treat intellectual people as the Greeks treated the aristocracy. I learned that my tendency to treat the natural world as kin to us might make me forget how Other it is. I learned a few writerly things, too, which are harder to communicate: that it will be hard or impossible to convey the Otherness of other people’s stories in my own voice, and that doing so might in lots of ways break my narrative—and maybe that’s something I need to try and do on purpose. I learned that I’m not sure what Butler means by vexations disrupting or interrupting narrative, and what that means for storytelling. And I learned that I haven’t talked about existentialism enough here—though that’s because I’ve only really started to get into it now—and that I haven’t yet written out how I think I can reconcile nominalism and realism, or how I understand freedom in a political sense. Maybe, as this settles in my brain, I’ll realize other things, too.

Most of all, though, I hope I’ve demonstrated how you can use this yourselves. Please, let me know if you do this exercise and how it turns out. (Obviously, you can replace the epic with a genre you prefer.)