Friday 29 August 2014

A Mature Philosophy

Is Personal Epistemology What I’ve Been Looking For?

Through the research I’m doing as an RA, I encountered the idea of personal epistemology; the Cole’s Notes version is that different people have different measurable attitudes towards how people gain knowledge, what knowledge is like, and so forth. In general, research into personal epistemology falls into two streams: 1) research into epistemic beliefs addresses the particular beliefs an individual might have, while 2) developmental models of personal epistemology chart how personal epistemology changes over a person’s life. Personal epistemology and its developmental focus originate with William G. Perry and his 1970 Forms of Intellectual and Ethical Development in the College Years, but these days Barbara Hofer and Paul Pintrich are the major proponents and experts.

Perry studied college students for all four years of undergraduate school, asking them questions designed to elicit their views on knowledge. What he determined is that they gradually changed their views over time in a somewhat predictable pattern. Of course, not all students were at the same stage when they entered university, so the early stages had fewer examples, but there were still some. Generally, he found that students began in a dualist stage, where they believe that things are either true or false, and have little patience for ambiguity or what Perry calls relativism.* In this stage they believe that knowledge is gained from authorities (i.e., professors)—or, if they reject the authority, as sometimes happens, they do so without the skills of the later stages and still tend to view things as black and white. As the stages progress, they start to recognize that different professors want different answers and that there are good arguments for mutually exclusive positions. By the fifth stage, they adopt Perry’s relativism: knowledge is something for which one makes arguments, and authorities might know more than you but they’re just as fallible, and there’s no real sure answer for anything anywhere. After this stage, they start to realize they can make commitments within relativism, up until the ninth stage, where they have made those commitments within a relativist framework. Not all students (or people) progress through all of the stages, however; each stage contains tensions (both within the student and against the world/classroom) which can only be resolved in the next stage, but the unpleasant experience of these tensions might cause a student to retreat into a previous stage and get stuck there. Furthermore, with the exception of the first stage, there are always two ways to do a stage: one is in adherence to the authority (or the perceived authority), and the other is in rebellion against it.** It’s all quite complicated and interesting.

The 50s, 60s, and 70s show clearly in Perry: in his writing style, in his sense of psychology, and in his understanding of the final stage as still being within a relativist frame. His theory languished for a while but was picked up by Hofer and Pintrich in the early 2000s. They, and other researchers, have revised the stages according to more robust research among more demographics. Their results are fairly well corroborated by multiple empirical studies.

According to contemporary developmental models of personal epistemology, people progress through the following stages:

Naïve realism: The individual assumes that any statement they encounter is true. Only toddlers are in this stage: most children move beyond it quite early. Naïve realism is the extreme gullibility of children.
Dualism: The individual believes statements are either right or wrong. A statement’s truth value is usually determined by an authority; all an individual must do is receive this information from the authority. While most people start moving out of this stage by the end of elementary school or beginning of high school, some people never move past it.
Relativism: The individual has realized that there are multiple competing authorities and multiple reasonable positions to take. The individual tends to think in terms of opinions rather than truths, and often believes that all opinions are equally valid. Most people get here in high school; some people proceed past it, but others do not.
Evaluism: The individual still recognizes that there are multiple competing positions and does not believe that perfect knowledge is available; instead, the individual gathers evidence. Some opinions are better than others, according to their evidence and arguments. Knowledge is not received but made. Also called multiplism. Those people who get here usually do so in university, towards the end of the undergraduate degree or during graduate school. (I’m not sure what research indicates about people who don’t go to university; I suspect there’s just less research about them.)

This link leads to a decent summary I found (with Calvin & Hobbes strips to illustrate!), but note that whoever made this slideshow has kept Perry’s Commitments as a stage after evaluism (which they called multiplism), which isn’t conventional. As with Perry’s model, there are more ways not to proceed than there are to proceed. Often people retreat from the next stage because it requires new skills from them and introduces them to new tensions and uncertainties; it feels safer in a previous stage. Something that’s been discovered more recently is that people have different epistemic beliefs for different knowledge domains: someone can hold an evaluist position in politics, a relativist position in religion, and a dualist position in science, for instance.
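
Because the stages are ordered and, per that last point, held per knowledge domain rather than per person, the model lends itself to a small data sketch. The following Python is purely my own illustration; nothing in the research literature is expressed this way, and every name in it is invented:

```python
from enum import IntEnum

class Stage(IntEnum):
    # Ordered: later values are developmentally later stages.
    NAIVE_REALISM = 1
    DUALISM = 2
    RELATIVISM = 3
    EVALUISM = 4

# Epistemic beliefs are domain-specific, so a person is better modelled
# as a mapping from knowledge domain to stage than as a single stage.
person = {
    "politics": Stage.EVALUISM,
    "religion": Stage.RELATIVISM,
    "science": Stage.DUALISM,
}

# The domain in which this person is least developed, which any single
# overall label would hide.
least_developed = min(person, key=person.get)  # "science"
```

The point of the sketch is that last line: a single stage label for a whole person flattens real domain-by-domain variation.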

All of this pertains to our research in a particular way which I’m not going to get into much here. What I wanted to note, however, is that I am strongly reminded of Anderson’s pre-modern, modern, post-modern trajectory, which I outlined just over a year ago. It’s much better than Anderson’s trajectory, however, for two reasons: 1) it’s empirically based, and 2) in evaluism it charts the way past relativism, the Hegelian synthesis I had been babbling about, the way I’d been trying to find in tentativism (or beyond tentativism). Perry’s model may or may not do this (without understanding better what he means by relativism, I can’t tell what his commitment-within-relativism is), but Hofer, Pintrich, et al.’s model does. Evaluism is a terrible word; I regret how awkward tentativism is, but I like evaluism even less. However, in it there seems to be the thing I’ve been looking for.

Or maybe not. It reminds me of David Deutsch’s Popper-inspired epistemology in The Beginning of Infinity, but it also reminds me of literary interpretation as I’m used to practicing it, and so I can see a lot of people rallying under its banner and saying it’s theirs. That doesn’t mean it is theirs, but it often might be, and what I suspect is that evaluism might be a pretty broad tent. It was an exciting discovery for me, but for the last few months I’ve started to consider that it’s at best a genus, and I’m still looking for a species.

But this leads to a particular question: which comes first, philosophy or psychology? Brains, of course, come before both, but I’ve always been inclined to say that philosophy comes first. When, in high school, I first learned of Kohlberg’s moral development scheme, I reacted with something between indignation and horror: I was distressed at the idea that people would—that I would—inexorably change from real morality, which relies on adherence to laws, to something that seemed at the time like relativism. Just because people tended to develop in a particular way did not mean they should. What I told myself was that adults did not become more correct about morality but rather became better at rationalizing their misdeeds using fancy (but incorrect) relativistic logic. Of course I was likely right about that, but still I grew up just as Kohlberg predicted. However, I still believed that questions of how we tended to think about morality were different questions from how we should think about morality.

And yet it is tempting to see personal epistemology as the course of development we should take. Confirmation bias would lead me to think of it this way, so I must be careful. And yet the idea that there are mature and immature epistemologies, and that mature ones are better than immature ones, makes a certain intuitive sense to me. I can think of three possible justifications for this. An individual rational explanation would imagine this development less as biological and more as cognitive; as we try to understand the world, our epistemologies fail us and we use our reason to determine why they failed and update them. Since this process of failure and correction is guided by reason and interaction with the real world, it tends towards improvement. An evolutionary pragmatic explanation is also empirical and corrective: during the process of human evolution, those people with better epistemologies tended to survive, so humans evolved better epistemologies; however, given their complexity, they developed later in life (as prefrontal cortices do). A teleological explanation would suggest that humans are directed, in some way, toward the truth, and so these typical developments would indicate the direction in which we ought to head. I’m not sure I’m entirely convinced by any of these justifications, but the first one seems promising.

So what comes first: psychology or philosophy? And should we be looking less for the right epistemology or a mature one?

-/-/-

*I’m still struggling to understand what Perry means by relativism, exactly, because it doesn’t seem to quite be what I think of relativism as being: it has much more to do with the mere recognition that other people can legitimately hold positions other than one’s own, and yet the individual seems to be overwhelmed by this acknowledgement. It seems more like a condition than a philosophy. I’m still working it out.
**Perry writes about a strange irony in the fairly relativistic (or seemingly relativistic) university today.
Here’s the quotation:
In a modern liberal arts college, […] The majority of the faculty has gone beyond differences of absolutist opinion into teachings which are deliberately founded in a relativistic epistemology […]. In this current situation, if a student revolts against “the Establishment” before he has familiarized himself with the analytical and integrative skills of relativistic thinking, the only place he can take his stand is in a simplistic absolutism. He revolts, then, not against a homogeneous lower-level orthodoxy but against heterogeneity. In doing so he not only narrows the range of his materials, he rejects the second-level tools of his critical analysis, reflection, and comparative thinking—the tools through which the successful rebel of a previous generation moved forward to productive dissent.

Sunday 24 August 2014

Symbol Confusion

When I visited my (first) alma mater a season after graduating, I had tea with some of the staff from my old fellowship, and one of them told me he thought of the recent-grad situation as being rather like a swamp. I think he was trying to say that people tended to get lost in that time period, perhaps even stuck, without knowing which way to go; maybe he was trying to evoke unstable ground, and a general lack of civilization or guideposts. But I had to shrug and say, “You know, I’ve always liked swamps.”

Churchill famously called depression a black dog. The black dog visited when Churchill’s depression became active. But I like dogs quite a lot, including black ones. If a literal black dog were to visit me, it would make my periods of depression far more tolerable. Sometimes, if I need to distract or comfort myself, such as when I am getting a painful medical procedure, I imagine there is a large black dog lying next to me.

Sometimes, if I feel like depression might come upon me in the near or near-ish future, I think of it as a fogbank approaching from the horizon. The image has the merits of specificity, and I feel like it would communicate to other people what I am feeling. However, I like fogbanks rather a lot, so the image feels inauthentic to me.

This morning at church we had a baptism, and during the service the deacon lit a candle and passed it to the baby’s mother, saying, “Receive the light of Christ, to show that you have passed from darkness to light.” But I don’t like the light so much, or anyway I prefer periods of gloaming and overcast, light mixed with darkness. To save electricity I will sometimes move about the house without turning on any lights, and I do not mind this darkness. Apparently I did this often enough that a housemate once called me a vampire. Darkness, I find, can be a balm.

Heaven is often depicted as being celestial, in the sky; Hell is subterranean, in the ground with the graves. The rich are the upper class, and the poor are the lower class. Revelations are sought and received on mountaintops. Thrones are placed on a dais, above the crowd. In pyramid schemes, those at the top benefit from those at the bottom. I, however, dislike heights. Like Antaeus, I feel stronger on the earth.

Do not misunderstand: when I affiliate with the low, the shadowed, the misty, the marshy, the canine, I do not mean to paint myself as a friend to victims and outcasts and wretched sinners, as much as that sort of thing appeals to me. Rather, I’m just affiliating with the low, the shadowed, the misty, the marshy, and the canine, with no regard for their uses as symbols. Moreover, I am not sure why they symbolize what they are used to symbolize: truth cannot be a light if it is so often unseen; power cannot be high in the air if it is so often entrenched. Some of these are said to be universal metaphors, which show up in every culture (that the anthropologists who made this argument studied): height always indicates status, size always indicates superiority, and so forth. It may be true that all cultures run on such symbols, but I doubt all people do. I sometimes do not.

I wonder how important a skill it is to be able to confuse symbols, to break the equivalences of the symbol set you’ve inherited.

Friday 22 August 2014

Six Principles

Last summer I wrote about how I sometimes try to understand a worldview by mentally outlining an epic espousing its attitudes and assumptions. This forces me to ask specific questions about it, ones which I might not otherwise think to ask: what characteristics would the protagonist need to exhibit if he or she were to embody the community's values? which major event in history would be most appropriate as the epic's subject? what would its underworld look like, and what would it mean to descend there? if the worldview does not have gods which might intervene, what would be analogous to divine intervention in this epic? what contemporary discoveries or discourses would the epic mention? and so on. I also discussed how choosing the best genre for a worldview entailed asking similar questions: is this worldview more interested in individuals, communities, or nations? is this worldview especially interested in the sort of origin-stories that epics tend to be? is this worldview interested in the ways social order can be breached and repaired, as mysteries tend to show? and so on.

Well, I've been trying a similar thought exercise for understanding worldviews, which I'm calling the Six Principles approach. Basically, I'm trying to boil a position down to six principles, and while those principles do tend to have multiple clauses I try for some semblance of brevity. There are two points to this severe summary: the first is to try to shuck off a lot of the unnecessary details, and the second is to try to collapse multiple elements into one. Collapsing multiple elements into one principle forces me to figure out how different elements relate to one another (for example, satire and decorum in neoclassicism).

What I've found is that the Six Principles approach works far better when I'm trying to figure out things like intellectual movements rather than specific positions. For example, Romanticism and Neoclassicism were easier than Existentialism; Existentialism was easier (and likely more valid) than Quasi-Platonism; Quasi-Platonism was just barely possible, while something like Marxism probably wouldn't have worked at all. Trying to describe a particular person's position is far harder. Movements, however, consist mostly of the overlap between different thinkers' views, which makes them easier to summarize. Further, it's easier to render attitudes rather than theories this way (though, of course, the distinction between the two isn't clear-cut).

Of course, I'm including a suppressed criterion for this exercise which I haven't mentioned yet. See, I came up with the exercise while imagining a world-building machine which had limited granularity: for instance, you could designate what climate a nation had, and what sapient species populated it, but you couldn't get right in there and write their culture from the ground up. So you'd have to define cultural trends for the machine and give them names so you can apply them to nations (for instance, Nation A is temperate and wet, populated by humans and gorons, and is mostly Romantic with a minority Neoclassic culture). It's something I might use in a novel or similar project some day. Anyway, I wanted to see if I could actually define movements in a limited set of principles for the purposes of said possible novel, and it would have to be legible to the world-building machine and therefore couldn't depend on references to specific historical events (e.g. Romanticism is in part a reaction to increasing urbanization and industrialization in Europe).
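
For concreteness, here's one way the machine's inputs might look as data. This is a toy sketch of my own, assuming a Python-flavoured interface for the machine; every name in it (CulturalMovement, Nation, and so on) is invented for illustration, and the principles are abbreviated from the full lists below:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CulturalMovement:
    """A named cultural trend, boiled down to exactly six principles."""
    name: str
    principles: tuple[str, ...]

    def __post_init__(self):
        # The whole point of the exercise: no more, no fewer.
        if len(self.principles) != 6:
            raise ValueError(f"{self.name} needs exactly six principles")

@dataclass
class Nation:
    """The machine's limited granularity: climate, species, cultures."""
    name: str
    climate: str
    species: list[str]
    majority_culture: CulturalMovement
    minority_cultures: list[CulturalMovement] = field(default_factory=list)

romanticism = CulturalMovement("Romanticism", (
    "Spontaneity is healthier than the alternative",
    "Natural or primitive life is superior to artificial, urban life",
    "Subjective engagement with nature leads to understanding",
    "Imagination is among the most important human faculties",
    "Free personal expression is necessary for flourishing",
    "Reactionary thought results in moral and social corruption",
))

neoclassicism = CulturalMovement("Neoclassicism", (
    "Reason and judgement are the most admirable human faculties",
    "Decorum is paramount",
    "The rules are best learned from the classical authors",
    "Communities must preserve social order, balance, and correctness",
    "Invention is good in moderation",
    "Satire corrects unreason, poor judgement, and indecorum",
))

nation_a = Nation(
    name="Nation A",
    climate="temperate and wet",
    species=["humans", "gorons"],
    majority_culture=romanticism,
    minority_cultures=[neoclassicism],
)
```

The exactly-six check is the exercise's constraint made literal: the machine can only apply trends that have been boiled down that far.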

Here are some of my attempts:

Neoclassicism (you saw this the other day)
  1. Reason and judgement are the most admirable human faculties.
  2. Decorum is paramount.
  3. The best way to learn the rules is to study the classical authors.
  4. Communities have an obligation to establish and preserve social order, balance, and correctness.
  5. Invention is good in moderation.
  6. Satire is a useful corrective for unreasonable action, poor judgement, and breaches of decorum.

Romanticism

  1. Spontaneity of thought, action, and expression is healthier than the alternative.
  2. A natural or primitive way of life is superior to artificial and urban ways of life.
  3. A subjective engagement with the natural world leads to an understanding of oneself and of humanity in general.
  4. Imagination is among the most important human faculties.
  5. An individual's free personal expression is necessary for a flourishing society and a flourishing individual.
  6. Reactionary thought or behaviour results in moral and social corruption.

Existentialism

  1. Individuals have no destiny and are obliged to choose freely what they become.
  2. The freedom to choose is contingent on and restricted by the situation in which the individual exists, both physically and in the social order.
  3. Authenticity means the ability to assume consciously both radical freedom and the situation that limits it.
  4. Individuals are responsible for how they use their freedom.
  5. Individuals have no access to a pre-determined set of values with which to give meaning to their actions, but rather must create their own.
  6. Humans are uniquely capable of existing for themselves, rather than merely existing, and of perceiving the ways they exist for others rather than for themselves.

Humanism (that is, Renaissance humanism, though its current secular homonym has some overlap)

  1. It is important to perfect and enrich the present, worldly life rather than or as well as preparing for a future or otherworldly life.
  2. Revival of the literature and history of the past will help enrich the present, worldly life.
  3. Education, especially in the arts, will improve the talents of individuals.
  4. Humans can improve upon nature through invention.
  5. Humanity is the pinnacle of creation.
  6. Individuals can improve themselves and thus improve their position in society.

Postmodernism

  1. No perfect access to truth is possible.
  2. Simulations can become the reality in which one lives.
  3. No overarching explanation or theory for all experiences (called a metanarrative) is likely to be true.
  4. Suspicion of metanarratives leads to tolerance of differing opinions, which in turn welcomes people who are different in some way from the majority.
  5. Individuals do not consist of a single, unified self, but rather consist of diverse thoughts, habits, expectations, memories, etc., which can differ according to social interactions and cultural expectations.
  6. Uncertainty and doubt about the meanings one's culture generates are to be expected and accepted, not denied or villainized.

Nerdfighterism (referring more to what John and Hank Green say than what their fans espouse)

  1. Curiosity, the urge to collaborate, and empathy are the greatest human attributes.
  2. Truth resists simplicity.
  3. Knowledge of the physical universe, gained through study and empirical research, is valuable.
  4. Individuals and communities have an obligation to increase that which makes life enjoyable for others (called awesome) and decrease oppression, inequality, violence, disease, and environmental destruction (called worldsuck).
  5. Optimism is more reasonable and productive than pessimism.
  6. Artistic expression and appreciation spark curiosity, collaboration, and empathy.

Quasi-Buddhism ("quasi-" because I make no mention of Buddha or specific Buddhist practices, as per the thought experiment)

  1. Suffering exists because of human wants and desires.
  2. The way to end suffering is discipline of both thought and action, especially right understanding, detachment, kindness, compassion, and mindfulness.
  3. Nothing is permanent and everything depends on something else for existence.
  4. Meditation frees the mind from passion, aggression, ignorance, jealousy, and pride, and thus allows wisdom to develop.
  5. Individuals do not have a basic self, but are composed of thoughts, impressions, memories, desires, etc.
  6. Individuals and communities can get help on their path to the end of suffering by following those who have preceded them on that path.

Quasi-Platonism ("quasi-" again because I do not refer to Plato and try to generalize somewhat, but this is still pretty close to Plato rather than, say, the neo-Platonists)

  1. All things in the physical world, including people, are imperfect versions of the ideal form of themselves.
  2. Knowledge is the apprehension of the ideal forms of things using reason.
  3. Individuals consist of reason, passion, and appetite.
  4. It is best to subordinate passion and appetite to reason, even though passion and appetite are not in themselves bad.
  5. Those who are ruled by reason ought to be in charge of society.
  6. It is best if things in the physical world, including people, act or are used in accordance with their ideal form.

Charismaticism (as elucidated here)

  1. An individual or community open to the unexpected will receive surprising gifts.
  2. The natural world is enchanted, and what some may call the supernatural is merely the intensification of embedded creative (or corrupting) forces already present in a particular place or experience.
  3. Deliverance from evil entails satisfaction of both bodily and spiritual needs.
  4. Emotional and embodied experiences of the world are prior to intellectual engagement, which is dependent on them.
  5. Right affection and right action require training in gratitude.
  6. Truth is best spoken by the poor and marginalized.


I'd be interested in seeing other attempts, if anyone would like to try their hand at it.

Tuesday 19 August 2014

Death Denial, Death Drive

Content warning: suicide, depression

If you’re at all aware of my online presence, you’ll know that I have great respect for Richard Beck’s work at his blog Experimental Theology and in his books, sometimes known as the Death Trilogy. (You may be more aware of this than you’d like to be, since I doubt a month goes by without my bringing his work up in some comment thread or another.) In particular I appreciate his work on Terror Management Theory and how it pertains to religious authenticity, hospitality, and the creation of culture. He gets the bare bones of his theory from Paul Tillich’s The Courage to Be, but adds to it using experimental psychology that was unavailable to Tillich. I’ll give a summary.

Humans fear death. We fear particular threats of death, but we also experience anxiety from the knowledge that we will necessarily die one day. In order to manage the terror of death, we create systems of meaning which promise us some kind of immortality. This might be a promise of an afterlife, but it might simply be the idea that our lives have meaning—we are remembered, or we leave descendants, or our actions contribute to the creation of some great cultural work (like the progress of science or what have you). However, these cultural projects, or worldviews, can only eliminate our fear of death if we believe in them fully. Therefore we react to anything which threatens our belief in these worldviews in the exact same way that we would react to the possibility of death, because the loss of our worldviews is identical to the loss of immortality. And the mere existence of other plausible worldviews might constitute a threat to one’s own worldview. This is why most people defend their worldviews and reject other people’s worldviews so violently; violent defense of worldviews makes things like hospitality, charity, and justice difficult or impossible. Even a religion like Christianity, which is founded on the idea that we should side with the oppressed, became and in some cases remains violently oppressive because of these forces. Some people, however, do not use their worldview and cultural projects to shield themselves from the reality of death. These people, instead, face their fears of death directly; they also face their doubts about their own worldviews directly (these are, after all, the same thing). These people are in the minority, but they exist. Thus there are two things a person must do in order to prevent themselves from violently defending their worldviews: they must be willing to face the possibility of their own death (which means they must be willing to face doubt, the possibility that their worldview is false and their actions meaningless), and they must try to adopt a worldview that is not easily threatened by competing worldviews (which doesn’t necessarily mean relativism, but at the least worldviews should be easy on people who don’t adopt them).

I find this theory very compelling, not just because Beck marshals a lot of evidence for it in his work but also because I can use it to explain so many things. In particular, it seems to explain why so many religious people can be so hostile, and why their hostility seems to come from their religion, but at the same time religion motivates others to be hospitable instead. The theory can’t explain why people choose one religion over others, but that’s OK: other theories can do that. Beck’s work simply explains why people need to choose a worldview of some kind (even if that’s not a religion) and why they behave the way they do in relation to their own worldview and in relation to other people’s worldviews. And it is powerful when it does this.

However, I think there might be a problem with it. It presupposes that all people fear death.*

Now, I readily admit that most people fear death, and that claims that certain cultures do not fear death simply confuse the effects of a highly successful worldview with the lack of a natural fear of death. But I do not admit for a second that all people fear death. Some people actively desire death (that is, they exhibit marked suicidality), but others simply have neither fear nor desire for death. When my depression is at its worst, this is me: I am generally unperturbed by possibly dangerous situations (unsafe driving, for instance). Moreover, however my mental health looks, I feel no anxiety at all about the fact that I will inevitably die. This does not scare me; it relieves me. There is little that gives me more comfort than the thought that I will one day die. I cannot state this emphatically enough: the only emotion I feel when contemplating my eventual death is relief. Of course I realize this might be pathological, but it means I fit into neither of Beck’s populations: I am not a person who denies death and my fear of it, and I am not a person who faces my fear of death. I face death and simply have no fear. I am surely not the only one, either.

Perhaps you object that I do not fear death because I have a fairly successful meaning-making worldview. And I do think my life probably has meaning. But, if that were true, my response to doubt would be fear of death. Contemplating the possibility that my life might be meaningless should, if Beck is right, cause me to grasp for life more readily. But the opposite happens: when I contemplate the possibility that my life has no meaning, I am even less interested in living out the full allotment of my life. If I came to believe that my life was meaningless, I would probably tip over into actively wanting to die. That’s the opposite of the prediction you would make if what Beck said applied to me. And I’m in good company as far as that goes; Camus felt the same.

Camus begins The Myth of Sisyphus as follows: 
There is but one truly serious philosophical problem and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.
Notice that the question of life and death is contingent on the question of meaning and meaninglessness; if life is meaningless, there is no reason to live. If life has meaning, there is a reason to live. The causality here is the reverse of Beck’s. Camus’s answer was peculiar to him, a last-minute dodge from nihilism: suicide is the refusal to face the meaninglessness of life, and we must face the meaninglessness of life. This smuggles meaning back into absurdism, however, and creates a contradiction.** Camus’s answer doesn’t interest me, however; what interests me is the fact that the absurdists and existentialists seemed to have experiences parallel to mine, which suggests that Beck’s theory cannot describe all people. That is, it does not apply to the admittedly small population of people who simply don’t fear death in the first place.

Of course, it doesn’t have to address those people (us). As a theory of religion and culture in general, its failure to describe a small subset of the population, one which has not had much hand in shaping either religion or culture, is hardly a fatal error. In fact, this disjunction explains the general hostility most cultures and religions have shown towards suicide (as distinct from self-sacrifice): suicide, as a repudiation of the meaningfulness of life, threatens the culture’s worldview and thereby exposes people to the fear of death. And since worldviews are designed to assure people that death isn’t real, they don’t have much to offer people who mostly fear that life is worthless and all meaning is uncertain, except to simply insist on their own validity in a louder and angrier voice. Maybe this is why it is difficult to find resources in Christianity which are at all effective at dissuading people from suicide or ameliorating their depression without making things so much worse (fear of damnation, sense of shame, etc.).

It is worth noting that I do have a fear of death, but it’s a different one: I fear other people’s deaths. I fear the deaths of those who matter to me. Perhaps Beck’s theory can be applied to that? The attempt to understand another person’s life as meaningful, in order to deny their death? Or the attempt to honestly face another person’s death or our fear of it?

So my complaint is not that Beck’s theory is wrong because it omits the population of which I am a part; my complaint is simply that I find it difficult to use Beck’s theory to determine what I should do, in regards to my own worldview, or to predict or understand my own anxieties regarding the meaningfulness of my actions, the role of doubt in hospitality, and how to face my particular anxieties (of meaninglessness, but also of the deaths of people who matter to me) without harming others.

For more on my depression and assorted philosophy I've worked through in response to it, see this index post.

If you are experiencing suicidal thoughts, please contact a local suicide hotline. In BC, call 1-800-784-2433, or 1-800-SUICIDE. If you are experiencing depression more generally, get therapy. I am serious about this: it helps so much.

-----

*There are other problems I can think of: for instance, Beck does not seem to account for a desire to believe that which is true. However, I won’t deal with that here; it’s possible that we do have a reality principle in our psychological make-up which competes with the meaning-creating and death-fearing components, but I’m not entirely sure that we do; if we do value truth for its own sake, that value is likely nonetheless a product and component of our worldview and appears after we choose a worldview to manage our fear of death. Beck, however, found some inspiration for his work from the American pragmatists, whose sense of truth is different from the intuitive one: a proposition which works, which bears fruit, which makes you the person you think you should be, is true and is true precisely for those reasons. Truth is not about correspondence between a proposition and an external reality or about coherence within a system of propositions. While Beck might not sign off on American pragmatist epistemology, I don’t think it’s a coincidence that Beck’s work focuses on whether beliefs make you a hospitable or inhospitable person rather than on whether beliefs meet some other benchmark of truth.
**This paradox may be resolvable, in the sense that it may be cognitively impossible for us to truly acknowledge meaninglessness. Any attempt to do so might smuggle meaning in somehow; an absurdist might therefore say that absurdism is simply the philosophical system which gets closest to acknowledging meaninglessness, even though it smuggles in the imperative “You must authentically acknowledge meaninglessness.” As it stands, I prefer existentialism to absurdism, since it acknowledges that meaning can be created and only asks that you acknowledge that you created that meaning yourself; it still contains the paradox (you did not yourself create the imperative “You must acknowledge that you created all meaning yourself”), but it contains a more hospitable version of that paradox.

Monday 18 August 2014

Surviving, Thriving, and Neoclassicism

Scott Alexander at Slate Star Codex tries to figure out the different worldviews underlying conservative and liberal attitudes. He calls his result the Survive-Thrive spectrum. In his view, right-wingers tend to imagine that civilization is hanging by a thread: therefore, right-wingers want lots of police to prevent the loss of order through crime, and lots of soldiers to guard against threats from outside. Citizens (or, anyway, the right citizens) should have lots of guns, just in case the threat is too big even for the police. Egghead intellectuals are of no real use: you need practical on-the-ground knowledge. They’re suspicious of outsiders, who not only take the stuff you need but might also destabilize the existing order. As Alexander writes, right-wingers look for “hierarchy and conformity.” If you want to imagine how a conservative thinks, you need to imagine you’re in a zombie apocalypse. Their goal is to Survive. (I suggest you read the whole post; he ties a lot of seemingly disparate conservative concerns into this point of view.)

Liberals, meanwhile, imagine society as headed towards utopia. Since crime is on the decline, police are more likely to cause trouble trying to assert their authority than they are to protect people. Because we have more than we need, it is more important that jobs be safe and fulfilling than that they be productive. And because we have everything we need, we can focus our energy on more distant and longer-term concerns: the environment, eradicating inequality, etc. Liberals are oriented not towards Surviving but Thriving.

With the caveat that this is of course a spectrum rather than a dichotomy and that many people are inconsistent, acting sometimes in one way and sometimes in another, I think Alexander does a good job of describing some conservatives and some liberals, but his thought experiment has some problems as well. One of the biggest is that he omits a particular kind of conservative from his framework, the kind I’d call the neoclassicist.

Some conservatives, thinking that civilized society is hanging by a thread, do not react by acting as though civilization has already crumbled, becoming a military state or roving bands of zombie-hunters, but by acting as civilized as they possibly can. Think of C. S. Lewis’s The Last Battle, where the protagonists insist on acting honourably despite the approaching apocalypse, or G. K. Chesterton’s The Man Who Was Thursday, where the protagonists fight anarchists by not being anarchists. In order to keep civilization from crumbling, these conservatives double down on tradition, on etiquette, on the arts, on philosophy and science and other intellectual pursuits. In Battlestar Galactica, Commander Adama suggests that the fleet must do more than survive; they must also act such that they deserve to survive. So Survival is important to these conservatives, but they can only Survive by Thriving. Anything else isn’t Survival.

But they still wind up being very conservative. Because civilization depends on being super-civilized, they won’t tolerate many breaches of etiquette. Moreover, they can be very hostile towards different value systems. Civilization, after all, relies on everyone being civilized, so you cannot tolerate your neighbour’s polyamorous relationship or your daughter’s anarchist politics. Conformity is as necessary in this view as in Alexander’s conservative view, but Alexander’s Survivalists will allow certain standards to slip in order to react to the apocalypse, slippages which the neoclassicists won’t tolerate, since in their eyes those slippages would result in the apocalypse. And changes or experiments in culture are potentially dangerous things, since they might turn out to destabilize the whole foundation of that culture; it’s not only art that is held to fairly conservative standards, but also social forms. If you allow same-sex marriage, the reasoning goes, the very idea of marriage would change, letting polygamy, incest, bestiality, and pedophilia in the door. Examples of neoclassicists include C. S. Lewis, G. K. Chesterton (as I implied above), Edmund Burke, the contemporary conservatives who tend to quote those people (see many of the writers for First Things and the American Conservative), and of course Alexander Pope and the other 17th- and 18th-century neoclassicists I’m naming the whole group after.

(17th- and 18th-century neoclassicists, roughly speaking, adhered to the following six principles: 1) Reason and judgement are the most admirable human faculties. 2) Decorum is paramount. 3) The best way to learn the rules is to study the classical authors. 4) Communities have an obligation to establish and preserve social order, balance, and correctness. 5) Invention is good in moderation. 6) Satire is a useful corrective for unreasonable action, poor judgement, and breaches of decorum.)

The neoclassicists are much more logical, I think, than the survivalists: if both groups note that civilization is hanging by a thread, the neoclassicists try to prevent the civilization from collapsing while the survivalists act as though it already has. The neoclassicist could point out the survivalists’ ironic mistake: by acting as though civilization has already collapsed, when it hasn’t yet, the survivalist may actually make the collapse more likely.

And, by the same token, I think Alexander describes certain liberals quite well, but I wouldn’t consider my own position well-described by his spectrum. Even if society is headed toward utopia, we clearly aren’t there right now. Acting as though we live in a utopia is a fantastic way to prevent a utopia from happening. So I would notice that some liberals might act like we’ve already arrived at utopia, but a more considered liberalism works towards making that utopia happen. The first sort of liberal is, ironically, less inclined to change than the second, because if you think you’ve achieved utopia, why bother changing anything? However, if utopia is possible but not yet attained, you have to work hard to get there: it’s from this more considered liberalism that we get anti-racist activists and ardent environmentalists and so forth.

Indeed, at a certain point and in some respects, leftists look a lot like neoclassicists. I would include myself among these. Some leftists reject the whole “headed toward utopia” vision; we would note with relief that we’ve made great strides in knowledge and medicine, but utopia is far off and we’ve maybe made some things far worse. Income inequality is in a terrible state, and economic mobility does not seem any better today than in 1600. 1600, however, did not have the threats of nuclear war, environmental collapse, or corporate ownership of your genetic material. Indeed, some leftists (including me) would see the world as drifting towards dystopia, not utopia. Civilization is hanging by a thread. But the solution (in our view) isn’t to double down on tradition and so on, because the existing social order is part of the problem; the existing social order is what produced those nuclear warheads and this environmental precarity and that economic inequality. And the conservative avoidance of change and refusal to accommodate different perspectives is what creates a dystopian government. So my leftist response to thinking that civilization is hanging by a thread is not to double down on tradition any more than it is to act like there’s a zombie apocalypse going on: my leftist response is to try and tear out all those bad traditions and replace them with something better. We want to try new things and welcome alien perspectives, because our own traditions do not contain the answer, or at least not the whole answer. Perhaps Alexander is right in not counting this as liberal; it may be better called radical. It’s the politics of socialists and anarcho-pacifists. The neoclassicists do not fit so well on Alexander’s Survive-Thrive spectrum, but we radicals fit even less well.

A final word on the Survive-Thrive spectrum: I am not sure how well the zombie apocalypse thought experiment works. It’s easy and tempting to think of a zombie apocalypse as ending civilization, but I don’t think that’s right. I suspect that neoclassicists and survivalists would respond to a zombie apocalypse differently: survivalists would become bands of disciplined survivors, while the neoclassicists would insist that the only way to really prevent the zombies from winning would be to still act honourably despite the apparent absurdity of that honour. In other words, perhaps it isn’t so telling to imagine that survivalists act like they are living in a zombie apocalypse, because the culture people come out of would shape how they acted if that culture appeared to collapse. Rapture-ready folks would not act the same as transhumanists and singularity-mongers.

Possible glossary, bearing in mind that this does not describe all of the political landscape by any stretch:

Survivalist: a kind of conservative who, believing that civilization is hanging by a thread, acts as though the apocalypse has happened or has started. Think right-wing US gun nuts or al-Qaeda.
Neoclassicist: a kind of conservative who, believing that civilization is hanging by a thread, acts as civilized as possible. Think Edmund Burke, C. S. Lewis, or Eve Tushnet.
Thriver: a kind of liberal who, believing that society is headed towards utopia, acts as though society is already a utopia and therefore sees little point in challenging the status quo. Think… Hollywood celebrities, mostly?
Activist: a kind of liberal who, believing that society is headed towards utopia, acts to usher that utopia in. Think David Deutsch, Barack Obama, or Gene Robinson.
Radical: a kind of leftist who, believing that society is headed towards dystopia, acts to prevent dystopia by tearing out the parts of civilization pushing it that way. Think Marxists, Christian anarchists, or David Suzuki.

Caveat 1: This is all just a reaction to Slate Star Codex’s framing. I’m not sure how useful it will wind up being, and I don’t expect to think of people or movements primarily in terms of the glossary I’ve offered. This is no more than an offering, no more than one way of colour-coding the thoughtscape.
Caveat 2: I have not read enough of Slate Star Codex to know whether Scott Alexander has already addressed the conservatives I’m calling the neoclassicists. From my skimming, he does an excellent job showing some things that are terribly wrong with what I’m calling the survivalist viewpoint, but I don’t see much about the differences between neoclassicism and survivalism.

(Thanks to Leah Libresco for linking to Scott Alexander's post in one of her own and thus drawing it to my attention.)

Sunday 10 August 2014

Wonder on a Weekly Basis

Part I of ?

Have I mentioned here that I am running a collaborative tumblr on which my collaborators and I post brief entries on wonderful things on a weekly schedule? On Mondays I post an entry about a prehistoric animal of some kind; on Tuesdays I post an idea; on Wednesdays, a plant or fungus; on Thursdays, a fantastic being; on Fridays, a meteorological or geological phenomenon. Contributors run on their own schedules, usually at one day a week.

This is not the first time I’ve run such a project. In November of 2012 I began posting one animal to Facebook per day. I would include a link to a resource about the animal—often but not always Wikipedia—and a brief entry summarizing what I found interesting about it. I did this for a calendar year, ending exactly 365 days later in November 2013. During this time, I missed 4 or 5 days; 3 of those were because the Internet went down at home and I didn’t want to abuse my Internet access at work.

I’m not sure why exactly I chose to run such a project in the first place. I do remember this: a friend of mine posted a link to Facebook to an article about a parasitic isopod that eats out and then replaces fish’s tongues. Through aimless Wikipedia link-hopping I came upon a page about velvet worms, which I had never heard of before then. Finding them fascinating (they have social hierarchies demonstrated by who gets to crawl over whom!), I posted the link to Facebook. The response was favourable and enthusiastic, and within an hour I had planned to do it daily for an undetermined length of time. A few months in, I realized I could probably only keep it up for about a year before it got too hard to find new ones (that prediction proved correct). When I stopped, a lot of my Facebook friends seemed disappointed but grateful that I had done it for so long (the best compliment I got, I think, was this: “true story: whenever i am forced to defend the internet, i cite this series”).

However, even when I quit, I had planned to start again with a new category, and about 6 months later I asked if people would be interested in a different sort of thing each day of the week (I had decided to cap it at five days a week). Quickly I found that some people would be willing to collaborate, so to accommodate collaboration and allow advance preparation of the posts, I decided to transfer the project to tumblr, though I still do cross-post to Facebook every day.

What I’ve noticed is that the daily practice was much more enjoyable than the weekly practice, since now I prepare posts in more intensive bursts rather than doing a little bit each day. I was able to appreciate something each particular day on the first schedule; now I don’t have that, because I prepare the posts in advance. Sure, I still cross-post to Facebook on a daily basis, but by this time I have already looked at that content so much that I no longer feel so enchanted by it when I finally get it to Facebook—I’ve scheduled it months before, written the content weeks before, and put it up on Tumblr as much as one week prior. I might still feel great affection for whatever I’m sharing, but I no longer feel quite so surprised by it. (As silly as it may seem, I do feel affection for pushmi-pullyus and simulacra and virga and catchflies.) What I do still get on a daily basis is the possibility of my friends’ responses; this is never guaranteed, but it is nice when I get it.

However, the weekly basis does have a slight advantage: it’s easier to remember everything and to start to understand how it all fits together when I return to the same content at least four times. I did get this benefit posting animals, too, because I have a fairly good memory and I could keep in my head the taxonomic relationships between all of the animals. (Indeed, the first idea I posted in this project was Linnaean taxonomy as an homage to that experience with the first project.) I’m not sure if the trade-off is quite worth it on its own, but the advance work certainly makes the project easier on my schedule, and I can take days off without worry. Last time, I had to arrange for friends to cover my travel-based absences. Also, the way it is now, I get to have collaborators, which is great.

I like this project quite a lot. I enjoy sharing things I find wonderful with other people, hoping they experience the same. I worry, sometimes, because I have little capacity for effusive praise; in general I simply present what I find wonderful and assume or hope that other people will find it wonderful, too. To facilitate this I try to find details about it that will draw out what’s unique or fascinating about this creature or this idea or this cloud in particular. Simply describing an octopus’s neurology or a fire whirl’s dynamics seems praise enough to me. Discussing an idea’s complexities and repercussions, even critiquing its failures, is for me an exercise in appreciation. (People have told me, in the past, that it’s clear that I care about ideas, and I suppose this is true; I don’t know what it would be like not to.) However, as much as I admit to being biased in favour of ideas and fantastic beings, I am glad I chose to write about rocks, weather, and flora as well; their concreteness, and their independence from human need or activity, give me the same sort of wonder that the animals did: their sheer otherness from human endeavour is a relief from our self-absorption and neurosis. They are grounding to a person as easily detached as me. Even if there were nothing else interesting about them, this would be wonder enough in itself.

I asked a while ago what gave people wonder; no one answered. If anyone is reading this, I’d still like that answer, but I suppose I’m trying to figure it out elsewhere, too.

Monday 4 August 2014

Please Read Responsibly?

A Using Theory Post 

A person might think I was a bit hard on John Green's "books belong to their readers" theory of interpretation when outlining my own. Well, it turns out that I wasn't hard enough on it; in the meantime I discovered a rather more insidious version of that argument, coming from John Green himself.

Before I begin, I want to preface this all by saying that I respect John Green's commitment to public education very much; I find his Internet presence kind, compassionate, and otherwise admirable; his politics seem empathetic, and in general he seems well-intentioned. My impression is that he is likable, and worthy of being liked. But at the same time there are some problems with his theory of interpretation, and I hope you'll agree with me when I'm done here that those problems require attention.

John Green published the following Tweets (on a date which I have not been able to determine):


The transcription reads like this:
I was sedated for an endoscopy today and was told to stay off social media for a day. So you know what that means: Twitter rant. | I've been marathoning Twilight movies all day, which has been totally enjoyable... | ...and I'm thinking about how easy it is to dehumanize the creator or fans of something extremely popular. | I've done this, too. I made fun of the Twilight movies without even having watched them. I'm sorry for that, and embarrassed. | When we make fun of Twilight, we're ridiculing the enthusiasm people have for unironic love stories. Have we nothing better to satirize? | Yes, you can read misogynistic gender dynamics into the stories, but tens of millions of people have also proven that you don't HAVE to. | Do we really believe that tens of millions of people who found themselves comforted and inspired by these stories are merely wrong? | Isn't our disdain FAR more misogynistic than anything in the stories? | Art that is entertaining and useful to people is a good thing to have in this world. And I'm grateful for it and celebrate it. | So big ups to the Twilight fandom, and to Stephanie Meyer, who has been relentlessly attacked professionally and personally over Twilight... | ...in ways that male authors of love stories never are. I'm gonna go back to watching the movies now. /rant
(I used "|" rather than "/" to indicate line breaks because Green used a "/" himself.) Of course Green is saying some important things in this rant: it is maybe inadvisable to criticize something that one hasn't read (though this varies depending on the quality of the sources you have read which describe the text); the way in which criticism of the Twilight fandom resorts to insulting age and gender is a problem; if female authors of love stories are attacked in a way that male authors are not (and I have no reason to doubt Green's observation), then this is a serious problem, too. I don't mean to detract from any of these valuable claims. And I haven't read Twilight, so I guess I can't give you a good reading of it. But I can notice, and will notice, that John Green is using an enormously troubling theory of interpretation in the middle of his rant. Essentially, he places the responsibility for a text's meaning entirely on the reader's shoulders.

Green at first seems only to be claiming that "misogynistic gender dynamics" are not inherent to the books or films, but are rather something that certain readers have manufactured: he writes, "Yes, you can read misogynistic gender dynamics into the stories..." (emphasis mine), and the phrase "read into" is usually used to imply that the interpretation is something the reader added to the text. In the terms of biblical studies, "read into" indicates eisegesis rather than exegesis. So far Green seems to be making a typical anti-intellectual claim about feminist interpretation: the text is only what it appears to be with a naive reading, and any further meanings are simply added by the interpreter, so neither the text nor the author is responsible for those meanings. But Green then makes a strange move when he writes, "...but tens of millions of people have also proven that you don't HAVE to. Do we really believe that tens of millions of people who have found themselves comforted and inspired by these stories are merely wrong?" This exhortation suggests a different theory of interpretation than the one which insists on a naive reading of a text; rather, it suggests that interpretation is an act of agency on the part of the reader, and it suggests that the number of people who support a particular interpretation gives that interpretation legitimacy.

This is a whole new spin on "books belong to their readers." In this version, a text has no independent meaning outside of how different readers have read it: if readers choose to read Twilight as having misogynistic gender dynamics, then it does; if readers choose not to, then it doesn't. And since nothing can be said about the text in itself, the only meaningful distinction between interpretations is whether or not they will produce a valuable reading experience. But Green goes further: the meaning of a book seems to derive from its readers collectively. He seems to say that if "tens of millions" of people interpret a book in a particular way, then that is the best way to interpret it; at the very least he suggests that if millions of people interpret a book in a particular way, then it's a legitimate interpretation.

Of course this theory of interpretation is almost certainly false, and texts do have a set of meanings intrinsic to themselves; see Part I of my Theory of Reading series for that. But I also want to suggest that Green's error here isn't innocuous. If the meaning of a text is determined entirely by the reader, then no criticism of any text is possible: the Mein Kampfs and Mark Driscoll sermons of the world are only awful because we interpret them that way. Such an inability to criticize gives the people who are harmed by such texts no way to explain that harm, and it prevents us from holding people accountable for what they say and write. Or, alternatively, if the majority interpretation of a work is taken to be correct, then we are holding authors responsible for things that they did not write. Neither situation is at all desirable: if the meaning of what people write is independent of its readers, then we can hold them accountable for what they actually wrote, but the "books belong to their readers" theory denies us that necessary ability. This applies not just to the fringe cases of hate speech, but to any speech act at all.

What Green wants to protect, I think, is the value that those millions of readers have gotten from Twilight: "Do we really believe that tens of millions of people who found themselves comforted and inspired by these stories are merely wrong?" But this is a question wrongly asked: there's no reason that latent (or not-so-latent) misogyny would make those readers' comfort and inspiration somehow wrong or inauthentic. Rather, Twilight proves capable of providing comfort and inspiration in spite of its misogyny. (It's also possible that the comfort and inspiration are based on the readers' internalized misogyny, such that what seems to be a good reading experience is a detrimental one, but I don't want to rely on that argument because I've been feeling queasy lately about suggesting people have false consciousness when they disagree with me about what's been good for them.) This is why it's both useful and accurate to distinguish between the reader's experience and the text's meaning.

The irony, of course, is that John Green regularly relies on the independent existence of a text's meaning. The first instance is his practice of literary interpretation, which is part of his professional career. There is no point whatsoever in interpreting texts if whatever interpretation you like is admissible: under his stated theory, you could read the phone book as a communist epic if you wanted to. (Of course, it may be possible to read the phone book as a communist epic through legitimate interpretation, but the point is that it also may not be possible, and you can't tell until you try.) But, furthermore, John Green's own books are regularly misread, and he insists that those are misreadings. It is often charged that his novel Paper Towns is an example of a Manic Pixie Dream Girl story, in which by loving a quirky girl character the boy protagonist is freed from a boring life, and often frees the quirky girl in some way, too. Green rejects the interpretation:
Have the people who constantly accuse me of this stuff read my books? Paper Towns is devoted IN ITS ENTIRETY to destroying the lie of the manic pixie dream girl; the novel ends (this is not really a spoiler) with a young woman essentially saying, “Do you really still live in this fantasy land where boys can save girls by being romantically interested in them?” I do not know how I could have been less ambiguous about this without calling the novel The Patriarchal Lie of the Manic Pixie Dream Girl Must Be Stabbed in the Heart and Killed [source].
He knows perfectly well that his intentions are not relevant to the conversation (source), so on what does he rely to claim that this is a misreading? Well, he relies on the text's own features, the only thing on which he can rely: he offers a brief analysis of the novel using paraphrases of the text itself.

---

So who is responsible for the meaning of a text?

The author, I would say, in terms of moral responsibility: it is the author's actions which produced the text. And we apply all of the concerns of moral responsibility to the question when we are asking whether an author is culpable for problems with the text. The author's intention does not determine whether or not the text caused harm; on the other hand, the author's ability to anticipate whether the text would cause harm isn't irrelevant, either. If the text's harm is utterly unpredictable, then the author cannot really be held responsible for that harm.

But how can we tell if an author could predict a book's harm? Look at Paper Towns: many, many teenage boys read this as an endorsement of treating their female colleagues like Manic Pixie Dream Girls (or so I've read on tumblr). Could Green have predicted this frequent misreading?* I don't know if I can answer that, but his culpability depends on the answer. My guess is that he's not on the hook for Paper Towns, at least not insofar as the Manic Pixie Dream Girl trope is concerned. But the sheer purported ubiquity of the misreading gives me pause: how plausible is it that such a frequent misreading is wholly the readers' fault? The reading is still wrong, absolutely, but I'm inclined to say that there's something about the text which lends itself to being misread in that way. In the case of Paper Towns, a reader who knows nothing about feminist criticism, or who is already invested in the Manic Pixie Dream Girl trope, might be prone to misreading the book in this particular way.

This discussion throws up a strange consequence, though: it looks like readers are responsible for their readings, in some sense. Namely, insofar as authors aren't responsible for misreadings of their texts, are readers then responsible for whatever misreadings they commit? I suspect so, but it's hard to say. There could be a middle space of simple error, where no one is at fault (or maybe there's a middle space of overlap, where both are). And there's still a lot left unanswered: the "insofar" in the second sentence of this paragraph will be really hard to determine, for example. I admit that it is really hard for me to say how charitable we ought to be when working out whether an author could have predicted the way zer book was interpreted. Fortunately, the stakes aren't often very high, but when they are, I feel strongly that we need to keep clearly in mind the distinctions between the author's knowledge of the text, the text's internal meanings, and the reading experience.

---

*I have not read Twilight, but I have read Paper Towns, and it is my assessment that Green's claim is right: the book deflates the Manic Pixie Dream Girl narrative. However, his critique only becomes obvious in the last quarter of the novel. For the first half, the novel appears to be the sort of story it is critiquing, and I'm not sure I'd fault someone for putting the book down during that time if the apparent MPDG narrative were upsetting them.

---

UPDATE: My friend Jon wrote the following on my Facebook in response to this post, and gave me permission to share it here:
Yes, you can read a critique of the MPDG trope into PT, but hundreds of thousands of people have also proven that you don't HAVE to. | Do we really believe that hundreds of thousands of people who found themselves comforted and inspired by the hope that they can save a girl by being romantically interested in her are merely wrong?