Tuesday, 9 September 2014

The Singular Flavor of Souls

While I’m recording things I’ve read recently that do a far better job of articulating, and expanding, what I was trying to say last summer, I have two more things to mention that touch on what I was trying to get at with my posts on difference and the acknowledgement thereof. I assume there are many people for whom these ideas are well-trod ground, but they were new to me and it might be worth something to record my nascent reactions here.

In “From Allegories to Novels,” in which Borges tries to explain why the allegory once seemed a respectable genre but now seems in poor taste, he writes the following:
Coleridge observes that all men are born Aristotelians or Platonists. The Platonists sense intuitively that ideas are realities; the Aristotelians, that they are generalizations; for the former, language is nothing but a system of arbitrary symbols; for the latter, it is the map of the universe. The Platonist knows that the universe is in some way a cosmos, an order; this order, for the Aristotelian, may be an error or fiction resulting from our partial understanding. Across latitudes and epochs, the two immortal antagonists change languages and names: one is Parmenides, Plato, Spinoza, Kant, Francis Bradley; the other, Heraclitus, Aristotle, Locke, Hume, William James. In the arduous schools of the Middle Ages, everyone invokes Aristotle, master of human reason (Convivio IV, 2), but the nominalists are Aristotle; the realists, Plato. […] 
As one would suppose, the intermediate positions and nuances multiplied ad infinitum over those many years; yet it can be stated that, for realism, universals (Plato would call them ideas, forms; we would call them abstract concepts) were the essential; for nominalism, individuals. The history of philosophy is not a useless museum of distractions and wordplay; the two hypotheses correspond, in all likelihood, to two ways of intuiting reality.[*]

But the distinction between nominalism and realism is not so keen as that, as commentaries on Borges—Eco's The Name of the Rose is a notable one—have noted, perhaps missing that Borges might already have understood as much.

I read Borges' essay months ago; Saturday, I read/skimmed the first chapter, written by Marcia J. Bates, of Theories of Information Behavior, edited by Karen E. Fisher, Sandra Erdelez, and Lynne (E. F.) McKechnie. In that chapter, I read this:
First, we need to make a distinction between what are known as nomothetic and idiographic approaches to research. These two are the most fundamental orienting strategies of all.
  • Nomothetic – “Relating to or concerned with the study or discovery of the general laws underlying something” (Oxford English Dictionary).
  • Idiographic – “Concerned with the individual, pertaining to or descriptive of single or unique facts and processes” (Oxford English Dictionary).
The first approach is the one that is fundamental to the sciences. Science research is always looking to establish the general law, principle, or theory. The fundamental assumption in the sciences is that behind all the blooming, buzzing confusion of the real world, there are patterns or processes of a more general sort, an understanding of which enables prediction and explanation of particulars.  
The idiographic approach, on the other hand, cherishes the particulars, and insists that true understanding can be reached only by assembling and assessing those particulars. The end result is a nuanced description and assessment of the unique facts of a situation or historical event, in which themes and tendencies may be discovered, but rarely any general laws. This approach is fundamental to the humanities. […]
Bates goes on to describe the social sciences as being between the two, the contested ground; at times, social sciences tend to favour one approach and then switch to the other. It is in the context of the social sciences that she talks about library and information science:
LIS has not been immune to these struggles, and it would not be hard to identify departments or journals where this conflict is being carried out. My position is that both of these orienting strategies are enormously productive for human understanding. Any LIS department that definitively rejects one or the other approach makes a foolish choice. It is more difficult to maintain openness to these two positions, rather than insisting on selecting one or the other, but it is also ultimately more productive and rewarding for the progress of the field.
I don’t think it’s difficult to see the realism/nominalism distinction played out here again, though it’s important to note that realism v. nominalism is a debate about the nature of reality, while the nomothetic v. idiographic debate concerns merely method (if method can ever be merely method).

Statistics, I think, is a useful way forward, though not sufficient. The idea of emergence, of patterns emerging at different levels of complexity, might also be helpful. Of course, my bias is showing clearly when I say this: Coleridge would say that I am a born Aristotelian, in that it is the individual that exists, not the concept. And yet it is clear that patterns exist and must be accounted for, and we probably can’t even do idiography without having ideas of general patterns, and it’s better to have good supportable patterns than mere intuitions and stereotypes. So we need nomothety! (I don’t even know if those are real nouns.) Statistics, probability, and emergence, put together, are a way of insisting that it's the individuals that are real while still seeking to understand those patterns the cosmos won't do without.

(And morality has to be at least somewhat nomothetic/realist, even if the idiographic/nominalist informs each particular decision, or else it literally cannot be morality, right?)


As you can tell from the deplorable spelling of flavour, the title is a quotation, in this case taken from a translation of Borges' essay "Personality and the Buddha"; the original was published at around the same time as "From Allegories to Novels." The context reads like this:
From Chaucer to Marcel Proust, the novel's substance is the unrepeatable, the singular flavor of souls; for Buddhism there is no such flavor, or it is one of the many varieties of the cosmic simulacrum. Christ preached so that men would have life, and have it in abundance (John 10:10); the Buddha, to proclaim that this world, infinite in time and in space, is a dwindling fire. [...]
But Borges writes in "From Allegories to Novels" that allegories have traces of the novel and novels, traces of the allegory:
Allegory is a fable of abstractions, as the novel is a fable of individuals. The abstractions are personified; there is something of the novel in every allegory. The individuals that novelists present aspire to be generic (Dupin is Reason, Don Segundo Sombra is the Gaucho); there is an element of allegory in novels. 

* What is strange about Aristotle and Plato is that Plato was Aristotelian when it comes to people and Aristotle, Platonic. Plato admitted that a woman might be born with the traits of a soldier or a philosopher-king, though it was unusual, and if such a woman were born it would be just to put her in that position for which she was suited. Aristotle, however, spoke of all slaves having the same traits, and all women the same traits, and all citizens the same traits, and thus slaves must always be slaves and women subject to male citizens. I want to hypothesize, subject to empirical study, that racists and sexists are more likely to be realists and use nomothetic thinking, while people with a more correct view of people (at least as far as sex and race are concerned) are more likely to be nominalists and use idiographic thinking... but the examples of Aristotle and Plato give me pause. Besides, is not such a hypothesis itself realist and nomothetic?

Tuesday, 2 September 2014

Reasons to Read

A Using Theory Post

In my discussion of literary theory and interpretation, I made a particular assumption about reading which isn’t universally supportable: I assumed that the point of reading was either to 1) discover what the text’s meaning really is or to 2) gain a particular reading experience (be that challenge or distraction or pleasure). However, there are almost certainly other reasons to read something and other expected results.

For instance, a person might read a text in order to learn something about the author. This is a fraught process, as I outlined early in my argument. But it’s a common and unavoidable reason to read: I don’t read letters as independent objects of reading, but as correspondence, one person’s attempt to communicate their ideas to me. I don’t listen to politicians’ speeches as pure rhetorical performances, meant to be enjoyed in themselves; I listen to politicians’ speeches in order to understand what the politician is thinking about the issues that face our body politic. And so, in what are maybe the most quotidian and ubiquitous acts of reading, we violate the intentional fallacy, a cornerstone of interpretation.

A person might also read in order to learn something about the world. When I want to discover something new about koalas or dobsonflies or argonauts, I do not go out and try to find examples of them for observation; I read about them. (I might ask someone about them, but this is no different: the text is auditory rather than written, and while that changes some of the ways we need to interpret, the fundamental principles are the same.) This is a strange sort of thing, when you think about it, because it takes two kinds of trust: the first is that my interpretation of the text is valid, and the second is that the text’s information is valid. Or perhaps it doesn’t: I can imagine a situation in which the text is ambiguous or outright inaccurate, but I still learn what I need to learn because I can compare this text with what I already know about koalas or dobsonflies or argonauts and “correct” it in my interpretation, just like I can usually read around typographic errors.

I might also read as a way of generating ideas. This is the book club sort of reading: we read a common text, and then discuss what we think of the character’s actions or the book’s depiction of some facet of reality. Often these arguments are not really about what the text actually says; the book only serves as a focal point or as common ground for a discussion or argument about ethics, politics, philosophy, and so on. If we ask of Watership Down, “Do you think that rabbits in Cowslip’s Warren would really act the way they do in Watership Down?”, we are not asking a question about the novel but about people, and it’s not a question of interpretation but of anthropology. Nonetheless, this is a reason that people read.

Some people read in order to improve their own writing. They want to inspire themselves, and so they go back to what inspired them to write in the first place. But, as Bloom makes clear, this process does not require accurate interpretation at all. He suggests that it positively benefits from inaccurate interpretation; whether he’s right or wrong, we can notice that this is a different thing from interpretation.

And there are other things that one might do as an academic studying literature. One of the sillier errors David Deutsch makes in The Beginning of Infinity is when he seems to think that literature departments ought to be working on the problem of beauty rather than meaning. Deutsch is interested in explanations that have reach, and if he's noticing that what literary critics do sometimes lacks reach, he'd be right; his desire to see critics figure out what makes a poem beautiful might be an attempt to get this field back into conjecturing universal explanations. But he's wrong that universality of reach is the only measure of an explanation; particularity matters too. The question of beauty is probably one that's worth answering, however, even as the questions literary analysis currently asks are also worth asking. So this is maybe another reason to read: to figure out beauty's mechanism. (I suspect this is a task for psychology, though, and not the humanities.)

I want to affirm all of these reasons to read. Some of these activities are necessary; some of them are excellent. But they aren't interpretation; they do not contribute to interpretation, they are not the ends of which interpretation is only one of the means, and they are not what people do in English departments (or at least not primarily what people do in English departments). Of course an interpretation of a text might note that the text seems especially well suited to one of these tasks, but that's not its job. Of course some of these tasks rely on interpretation to some degree, and so they benefit if that interpretation is expert rather than amateur, proficient rather than inept. And of course insofar as these tasks rely on interpretation they are also subject to interpretation's limitations. But it's still important to distinguish between these activities, because the skills and methods involved in one are not always the skills and methods involved in another.

Let’s go back to that first example: reading a letter. I care what the person wanted to say, so literary interpretation isn’t going to cut it. I could do that work, of course, but it isn’t going to get me the result that I want. Trying to discern authorial intent is a somewhat harder task: instead of working out the meaning of the text in itself, I try to anticipate what meaning a person would want to impart when they chose those words. It is a much more speculative activity than literary interpretation. The result is far less certain when trying to discover intent than when trying to discover meaning; ambiguities must be resolved, not acknowledged and incorporated into the reading. Prior knowledge of the person, however, counts as evidence here, which means that you do have more data to work with—unless you don’t know the person very well, in which case reliance on the person’s personality becomes a liability.

And, I think, this goes back to the questions in the second half of my post on John Green, Twilight, and Paper Towns. If we’re holding people accountable for what they wrote, we luckily have all of the evidence we need in the text itself. If we’re holding people accountable for what they intended to write, our project is in trouble from the outset. If we’re holding people accountable for which misinterpretations they could anticipate…that seems difficult, indeed.  But, whatever we do, our understanding of the text must be an understanding of the text, and not anything else. That’s why I’m making these distinctions.

(For more on literary theory, see this index.)

Friday, 29 August 2014

A Mature Philosophy

Is Personal Epistemology What I’ve Been Looking For?

Through the research I'm doing as an RA, I encountered the idea of personal epistemology; the Cole's Notes version is that different people have different measurable attitudes towards how people gain knowledge, what knowledge is like, and so forth. In general, research into personal epistemology fits into two streams: 1) research into epistemic beliefs addresses the particular individual beliefs a person might have, while 2) developmental models of personal epistemology chart how personal epistemology changes over a person's life. Personal epistemology and its developmental focus are the invention of William G. Perry with his 1970 Forms of Intellectual and Ethical Development in the College Years, but these days Barbara Hofer and Paul Pintrich are the major proponents and experts.

Perry studied college students for all four years of undergraduate school, asking them questions designed to elicit their views on knowledge. What he determined is that they gradually changed their views over time in a somewhat predictable pattern. Of course, not all students were at the same stage when they entered university, so the early stages had fewer examples, but there were still some. Generally, he found that students began in a dualist stage, where they believe that things are either true or false, and have little patience for ambiguity or what Perry calls relativism.* In this stage they believe that knowledge is gained from authorities (i.e., professors)—or, if they reject the authority, as sometimes happens, they do so without the skills of the later stages and still tend to view things as black and white. As the stages progress, they start to recognize that different professors want different answers and that there are good arguments for mutually exclusive positions. By the fifth stage, they adopt Perry's relativism: knowledge is something for which one makes arguments, and authorities might know more than you but they're just as fallible, and there's no real sure answer for anything anywhere. After this stage, they start to realize they can make commitments within relativism, up until the ninth stage, where they have made those commitments within a relativist framework. Not all students (or people) progress through all of the stages, however; each stage contains tensions (both internally and against the world/classroom) which can only be resolved in the next stage, but the unpleasant experience of these tensions might cause a student to retreat into a previous stage and get stuck there. Furthermore, with the exception of the first stage, there are always two ways to do a stage: one is in adherence to the authority (or the perceived authority), and the other is in rebellion against it.** It's all quite complicated and interesting.

The 50s, 60s, and 70s show clearly in Perry: in his writing style, his sense of psychology, and his understanding of the final stage as still being within a relativist frame. His theory foundered for a while but was picked up by Hofer and Pintrich in the early 2000s. They, and other researchers, have revised the stages according to more robust research among more demographics. Their results are fairly well corroborated by multiple empirical studies.

According to contemporary developmental models of personal epistemology, people progress through the following stages:

Naïve realism: The individual assumes that any statement is true. Only toddlers are in this stage: most children move beyond it quite early. Naïve realism is the extreme gullibility of children.
Dualism: The individual believes statements are either right or wrong. A statement’s truth value is usually determined by an authority; all an individual must do is receive this information from the authority. While most people start moving out of this stage by the end of elementary school or beginning of high school, some people never move past it.
Relativism: The individual has realized that there are multiple competing authorities and multiple reasonable positions to take. The individual tends to think in terms of opinions rather than truths, and often believes that all opinions are equally valid. Most people get here in high school; some people proceed past it, but others do not.
Evaluism: The individual still recognizes that there are multiple competing positions and does not believe that there is perfect knowledge available, but rather gathers evidence. Some opinions are better than others, according to their evidence and arguments. Knowledge is not received but made. Also called multiplism. Those people who get here usually do so in university, towards the end of the undergraduate degree or during graduate school. (I’m not sure what research indicates about people who don’t go to university; I suspect there’s just less research about them.)

This link leads to a decent summary I found (with Calvin & Hobbes strips to illustrate!), but note that whoever made this slideshow has kept Perry's Commitments as a stage after evaluism (which they called multiplism), which isn't conventional. As with Perry's model, there are more ways not to proceed than there are to proceed. Often people retreat from the next stage because it requires new skills from them and introduces them to new tensions and uncertainties; it feels safer in a previous stage. Something that's been discovered more recently is that people have different epistemic beliefs for different knowledge domains: someone can hold an evaluist position in politics, a relativist position in religion, and a dualist position in science, for instance.

All of this pertains to our research in a particular way which I'm not going to get into much here. What I wanted to note, however, is that I am strongly reminded of Anderson's pre-modern, modern, post-modern trajectory, which I outlined just over a year ago. It's much better than Anderson's trajectory, however, for two reasons: 1) it's empirically based, and 2) in evaluism it charts the way past relativism, the Hegelian synthesis I had been babbling about, the way I'd been trying to find in tentativism (or beyond tentativism). Perry's model may or may not do this (without understanding better what he means by relativism, I can't tell what his commitment-within-relativism is), but Hofer, Pintrich, et al.'s model does. Evaluism is a terrible word; I regret how awkward tentativism is, but I like evaluism even less. However, in it there seems to be the thing I've been looking for.

Or maybe not. It reminds me of David Deutsch's Popper-inspired epistemology in The Beginning of Infinity, but it also reminds me of literary interpretation as I'm used to practicing it, and so I can see a lot of people rallying under its banner and saying it's theirs. That doesn't mean it is theirs, but it often might be, and what I suspect is that evaluism might be a pretty broad tent. It was an exciting discovery for me, but for the last few months I've started to consider that it's at best a genus, and I'm still looking for a species.

But this leads to a particular question: which comes first, philosophy or psychology? Brains, of course, come before both, but I’ve always been inclined to say that philosophy comes first. When, in high school, I first learned of Kohlberg’s moral development scheme, I reacted with something between indignation and horror: I was distressed at the idea that people would—that I would—inexorably change from real morality, which relies on adherence to laws, to something that seemed at the time like relativism. Just because people tended to develop in a particular way did not mean they should. What I told myself was that adults did not become more correct about morality but rather became better at rationalizing their misdeeds using fancy (but incorrect) relativistic logic. Of course I was likely right about that, but still I grew up just as Kohlberg predicted. However, I still believed that questions of how we tended to think about morality were different questions from how we should think about morality.

And yet it is tempting to see personal epistemology as the course of development we should take. Confirmation bias would lead me to think of it this way, so I must be careful. And yet the idea that there are mature and immature epistemologies, and that mature ones are better than immature ones, makes a certain intuitive sense to me. I can think of three possible justifications for this. An individual rational explanation would imagine this development less as biological and more as cognitive; as we try to understand the world, our epistemologies fail us and we use our reason to determine why they failed and update them. Since this process of failure and correction is guided by reason and interaction with the real world, it tends towards improvement. An evolutionary pragmatic explanation is also empirical and corrective: during the process of human evolution, those people with better epistemologies tended to survive, so humans evolved better epistemologies; however, given their complexity, they developed later in life (as prefrontal cortices do). A teleological explanation would suggest that humans are directed, in some way, toward the truth, and so these typical developments would indicate the direction in which we ought to head. I'm not sure I'm entirely convinced by any of these justifications, but the first one seems promising.

So what comes first: psychology or philosophy? And should we be looking less for the right epistemology or a mature one?


*I’m still struggling to understand what Perry means by relativism, exactly, because it doesn’t seem to quite be what I think of relativism as being: it has much more to do with the mere recognition that other people can legitimately hold other positions than oneself, and yet it seems to be overwhelmed by this acknowledgement. It seems more like a condition than a philosophy. I'm still working it out.
**Perry writes about a strange irony in the fairly relativistic (or seemingly relativistic) university today.
Here’s the quotation:
In a modern liberal arts college, […] The majority of the faculty has gone beyond differences of absolutist opinion into teachings which are deliberately founded in a relativistic epistemology […]. In this current situation, if a student revolts against “the Establishment” before he has familiarized himself with the analytical and integrative skills of relativistic thinking, the only place he can take his stand is in a simplistic absolutism. He revolts, then, not against a homogeneous lower-level orthodoxy but against heterogeneity. In doing so he not only narrows the range of his materials, he rejects the second-level tools of his critical analysis, reflection, and comparative thinking—the tools through which the successful rebel of a previous generation moved forward to productive dissent.

Sunday, 24 August 2014

Symbol Confusion

When I visited my (first) alma mater a season after graduating, I had tea with some of the staff from my old fellowship, and one of them told me he thought of the recent-grad situation as being rather like a swamp. I think he was trying to say that people tended to get lost in that time period, perhaps even stuck, without knowing which way to go; maybe he was trying to evoke unstable ground and a general lack of civilization or guideposts. But I had to shrug and say, "You know, I've always liked swamps."

Churchill famously called depression a black dog. The black dog visited when Churchill's depression became active. But I like dogs quite a lot, including black ones. If a literal black dog were to visit me, it would make my periods of depression far more tolerable. Sometimes, if I need to distract or comfort myself, such as when I am getting a painful medical procedure, I imagine there is a large black dog lying next to me.

Sometimes, if I feel like depression might come upon me in the near or near-ish future, I think of it as a fogbank approaching from the horizon. The image has the merits of specificity, and I feel like it would communicate to other people what I am feeling. However, I like fogbanks rather a lot, so the image feels inauthentic to me.

This morning at church we had a baptism, and during the service the deacon lit a candle and passed it to the baby’s mother, saying, “Receive the light of Christ, to show that you have passed from darkness to light.” But I don’t like the light so much, or anyway I prefer periods of gloaming and overcast, light mixed with darkness. To save electricity I will sometimes move about the house without turning on any lights, and I do not mind this darkness. Apparently I did this often enough that a housemate once called me a vampire. Darkness, I find, can be a balm.

Heaven is often depicted as being celestial, in the sky; Hell is subterranean, in the ground with the graves. The rich are the upper class, and the poor are the lower class. Revelations are sought and received on mountaintops. Thrones are placed on a dais, above the crowd. In pyramid schemes, those at the top benefit from those at the bottom. I, however, dislike heights. As with Antaeus, I feel stronger on the earth.

Do not misunderstand: when I affiliate with the low, the shadowed, the misty, the marshy, the canine, I do not mean to paint myself as a friend to victims and outcasts and wretched sinners, as much as that sort of thing appeals to me. Rather, I’m just affiliating with the low, the shadowed, the misty, the marshy, and the canine, with no regard for their uses as symbols. More, I am not sure why they symbolize what they are used to symbolize: truth cannot be a light if it is so often unseen; power cannot be high in the air if it is so often entrenched. Some of these are said to be universal metaphors, which show up in every culture (that the anthropologists who made this argument studied): height always indicates status, size always indicates superiority, and so forth. It may be true that all cultures run on such symbols, but I doubt all people do. I sometimes do not.

I wonder how important a skill it is to be able to confuse symbols, to break the equivalences of the symbol set you’ve inherited.

Friday, 22 August 2014

Six Principles

Last summer I wrote about how I sometimes try to understand a worldview by mentally outlining an epic espousing its attitudes and assumptions. This forces me to ask specific questions about it, ones which I might not otherwise think to ask: what characteristics would the protagonist need to exhibit if he or she were to embody the community's values? which major event in history would be most appropriate as the epic's subject? what would its underworld look like, and what would it mean to descend there? if the worldview does not have gods which might intervene, what would be analogous to divine intervention in this epic? what contemporary discoveries or discourses would the epic mention? and so on. I also discussed how choosing the best genre for a worldview entailed asking similar questions: is this worldview more interested in individuals, communities, or nations? is this worldview especially interested in the sort of origin-stories that epics tend to be? is this worldview interested in the ways social order can be breached and repaired, as mysteries tend to show? and so on.

Well, I've been trying a similar thought exercise for understanding worldviews, which I'm calling the Six Principles approach. Basically, I'm trying to boil a position down to six principles, and while those principles do tend to have multiple clauses I try for some semblance of brevity. There are two points to this severe summary: the first is to try to shuck off a lot of the unnecessary details, and the second is to try to collapse multiple elements into one. Collapsing multiple elements into one principle forces me to figure out how different elements relate to one another (for example, satire and decorum in neoclassicism).

What I've found is that the Six Principles approach works far better when I'm trying to figure out things like intellectual movements rather than specific positions. For example, Romanticism and Neoclassicism were easier than Existentialism; Existentialism was easier (and likely more valid) than Quasi-Platonism; Quasi-Platonism was just barely possible while something like Marxism probably wouldn't have worked at all. Trying to describe a particular person's position is far harder. Movements, however, consist mostly of the overlap between different thinkers' views, which makes them easier to summarize. Further, it's easier to render attitudes rather than theories this way (though, of course, the distinction between the two isn't fine and clear).

Of course, I'm including a suppressed criterion for this exercise I haven't mentioned yet. See, I came up with the exercise while imagining a world-building machine which had limited granularity: for instance, you could designate what climate a nation had, and what sapient species populated it, but you couldn't get right in there and write their culture from the ground up. So you'd have to define cultural trends for the machine and give them names so you can apply them to nations (for instance, Nation A is temperate and wet, populated by humans and gorons, and is mostly Romantic with a minority Neoclassic culture). It's something I might use in a novel or similar project some day. Anyway, I wanted to see if I could actually define movements in a limited set of principles for the purposes of said possible novel, and it would have to be legible to the world-building machine and therefore couldn't depend on references to specific historical events (i.e., Romanticism is in part a reaction to increasing urbanization and industrialization in Europe).
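If it helps to see the constraint concretely, here is a toy sketch of that limited granularity in code. This is purely illustrative and assumes everything it names: the classes, the six-principle cap as a hard rule, and the example data (including the gorons) are my own inventions, not any real system.

```python
# A toy sketch of a world-building machine with limited granularity:
# cultural movements are named bundles of at most six principles, and
# nations are defined only coarsely (climate, species, culture mix),
# never written from the ground up. All names and data here are invented.
from dataclasses import dataclass

@dataclass
class Movement:
    """A cultural trend boiled down to at most six principles."""
    name: str
    principles: list[str]

    def __post_init__(self):
        # The machine can't represent anything finer than six principles.
        if len(self.principles) > 6:
            raise ValueError("a movement gets at most six principles")

@dataclass
class Nation:
    """Defined only at coarse granularity, by reference to named movements."""
    name: str
    climate: str
    species: list[str]
    cultures: dict[str, float]  # movement name -> share of the population

# "Nation A is temperate and wet, populated by humans and gorons,
# and is mostly Romantic with a minority Neoclassic culture."
nation_a = Nation(
    name="Nation A",
    climate="temperate and wet",
    species=["humans", "gorons"],
    cultures={"Romanticism": 0.7, "Neoclassicism": 0.3},
)
```

The cap in Movement is the whole point of the exercise: the machine forces the summary, which forces you to decide which elements of a movement collapse into which.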

Here are some of my attempts:

Neoclassicism (you saw this the other day)
  1. Reason and judgement are the most admirable human faculties.
  2. Decorum is paramount.
  3. The best way to learn the rules is to study the classical authors.
  4. Communities have an obligation to establish and preserve social order, balance, and correctness.
  5. Invention is good in moderation.
  6. Satire is a useful corrective for unreasonable action, poor judgement, and breaches of decorum.


Romanticism
  1. Spontaneity of thought, action, and expression is healthier than the alternative.
  2. A natural or primitive way of life is superior to artificial and urban ways of life.
  3. A subjective engagement with the natural world leads to an understanding of oneself and of humanity in general.
  4. Imagination is among the most important human faculties.
  5. An individual's free personal expression is necessary for a flourishing society and a flourishing individual.
  6. Reactionary thought or behaviour results in moral and social corruption.


Existentialism
  1. Individuals have no destiny and are obliged to choose freely what they become.
  2. The freedom to choose is contingent on and restricted by the situation in which the individual exists, both physically and in the social order.
  3. Authenticity means the ability to assume consciously both radical freedom and the situation that limits it.
  4. Individuals are responsible for how they use their freedom.
  5. Individuals have no access to a pre-determined set of values with which to give meaning to their actions, but rather must create their own.
  6. Humans are uniquely capable of existing for themselves, rather than merely existing, and of perceiving the ways they exist for others rather than for themselves.

Humanism (that is, Renaissance humanism, though its current secular homonym has some overlap)

  1. It is important to perfect and enrich the present, worldly life rather than or as well as preparing for a future or otherworldly life.
  2. Revival of the literature and history of the past will help enrich the present, worldly life.
  3. Education, especially in the arts, will improve the talents of individuals.
  4. Humans can improve upon nature through invention.
  5. Humanity is the pinnacle of creation.
  6. Individuals can improve themselves and thus improve their position in society.


Postmodernism
  1. No perfect access to truth is possible.
  2. Simulations can become the reality in which one lives.
  3. No overarching explanation or theory of all experiences (called a metanarrative) is likely to be true.
  4. Suspicion of metanarratives leads to tolerance of differing opinions, which in turn welcomes people who are different in some way from the majority.
  5. Individuals do not consist of a single, unified self, but rather consist of diverse thoughts, habits, expectations, memories, etc., which can differ according to social interactions and cultural expectations.
  6. Uncertainty and doubt about the meanings one's culture generates are to be expected and accepted, not denied or villainized.

Nerdfighterism (referring more to what John and Hank Green say than what their fans espouse)

  1. Curiosity, the urge to collaborate, and empathy are the greatest human attributes.
  2. Truth resists simplicity.
  3. Knowledge of the physical universe, gained through study and empirical research, is valuable.
  4. Individuals and communities have an obligation to increase that which makes life enjoyable for others (called awesome) and decrease oppression, inequality, violence, disease, and environmental destruction (called worldsuck).
  5. Optimism is more reasonable and productive than pessimism.
  6. Artistic expression and appreciation spark curiosity, collaboration, and empathy.

Quasi-Buddhism ("quasi-" because I make no mention of Buddha or specific Buddhist practices, as per the thought experiment)

  1. Suffering exists because of human wants and desires.
  2. The way to end suffering is discipline of both thought and action, especially right understanding, detachment, kindness, compassion, and mindfulness.
  3. Nothing is permanent and everything depends on something else for existence.
  4. Meditation frees the mind from passion, aggression, ignorance, jealousy, and pride, and thus allows wisdom to develop.
  5. Individuals do not have a basic self, but are composed of thoughts, impressions, memories, desires, etc.
  6. Individuals and communities can get help on their path to the end of suffering by following those who have preceded them on that path.

Quasi-Platonism ("quasi-" again because I do not refer to Plato and try to generalize somewhat, but this is still pretty close to Plato rather than, say, the neo-Platonists)

  1. All things in the physical world, including people, are imperfect versions of the ideal form of themselves.
  2. Knowledge is the apprehension of the ideal forms of things using reason.
  3. Individuals consist of reason, passion, and appetite.
  4. It is best to subordinate passion and appetite to reason, even though passion and appetite are not in themselves bad.
  5. Those who are ruled by reason ought to be in charge of society.
  6. It is best if things in the physical world, including people, act or are used in accordance with their ideal form.

Charismaticism (as elucidated here)

  1. An individual or community open to the unexpected will receive surprising gifts.
  2. The natural world is enchanted, and what some may call the supernatural is merely the intensification of embedded creative (or corrupting) forces already present in a particular place or experience.
  3. Deliverance from evil entails satisfaction of both bodily and spiritual needs.
  4. Emotional and embodied experiences of the world are prior to intellectual engagement, which is dependent on the former.
  5. Right affection and right action require training in gratitude.
  6. Truth is best spoken by the poor and marginalized.

I'd be interested in seeing other attempts, if anyone would like to try their hand at it.

Tuesday, 19 August 2014

Death Denial, Death Drive

Content warning: suicide, depression

If you’re at all aware of my online presence, you’ll know that I have great respect for Richard Beck’s work at his blog Experimental Theology and in his books, known sometimes as the Death Trilogy. (You may be more aware of this than you'd like to be, since I doubt a month goes by without my bringing his work up in some comment thread or another.) In particular I appreciate his work on Terror Management Theory and how it pertains to religious authenticity, hospitality, and the creation of culture. He gets the bare bones of his theory from Paul Tillich’s The Courage to Be, but adds to it using experimental psychology that was unavailable to Tillich. I’ll give a summary.

Humans fear death. We fear particular threats of death, but we also experience anxiety from the knowledge that we will necessarily die one day. In order to manage the terror of death, we create systems of meaning which promise us some kind of immortality. This might be a promise of an afterlife, but it might simply be the idea that our lives had meaning: we will be remembered, or we have descendants, or our actions contributed to the creation of some great cultural work (like the progress of science or what have you).

However, these cultural projects, or worldviews, can only eliminate our fear of death if we believe in them fully. Therefore we react to anything which threatens our belief in these worldviews exactly as we would react to the possibility of death, because the loss of our worldview is identical to the loss of immortality. And the mere existence of other plausible worldviews might constitute such a threat. This is why most people defend their worldviews and reject other people’s worldviews so violently; violent defense of worldviews makes things like hospitality, charity, and justice difficult or impossible. Even a religion like Christianity, which is founded on the idea that we should side with the oppressed, became and in some cases remains violently oppressive because of these forces.

Some people, however, do not use their worldviews and cultural projects to shield themselves from the reality of death. These people instead face their fears of death directly; they also face their doubts about their own worldviews directly (these are, after all, the same thing). They are in the minority, but they exist.
Thus there are two things a person must do in order to prevent themselves from violently defending their worldviews: they must be willing to face the possibility of their own death (which means they must be willing to face doubt, the possibility that their worldview is false and their actions meaningless), and they must try to adopt a worldview that is not easily threatened by competing worldviews (which doesn’t necessarily mean relativism, but at the least worldviews should be easy on people who don’t adopt them).

I find this theory very compelling, not just because Beck marshals a lot of evidence for it in his work but also because I can use it to explain so many things. In particular, it seems to explain why so many religious people can be so hostile, and why their hostility seems to come from their religion, but at the same time religion motivates others to be hospitable instead. The theory can’t explain why people choose one religion over others, but that’s OK: other theories can do that. Beck’s work simply explains why people need to choose a worldview of some kind (even if that’s not a religion) and why they behave the way they do in relation to their own worldview and in relation to other people’s worldviews. And it is powerful when it does this.

However, I think there might be a problem with it. It presupposes that all people fear death.*

Now, I readily admit that most people fear death, and that claims that certain cultures do not fear death simply confuse the effects of a highly successful worldview with the absence of a natural fear of death. But I do not admit for a second that all people fear death. Some people actively desire death (that is, they exhibit marked suicidality), but others simply neither fear nor desire death. When my depression is at its worst, this is me: I am generally unperturbed by possibly dangerous situations (unsafe driving, for instance). Moreover, whatever the state of my mental health, I feel no anxiety at all about the fact that I will inevitably die. The thought does not scare me; it relieves me. There is little that gives me more comfort than the thought that I will one day die. I cannot state this emphatically enough: the only emotion I feel when contemplating my eventual death is relief. Of course I realize this might be pathological, but it means I fit into neither of Beck’s populations: I am not a person who denies death and my fear of it, and I am not a person who faces my fear of death. I face death and simply have no fear. I am surely not the only one, either.

Perhaps you object that I do not fear death because I have a fairly successful meaning-making worldview. And I do think my life probably has meaning. But, if that were true, my response to doubt would be fear of death. Contemplating the possibility that my life might be meaningless should, if Beck is right, cause me to grasp for life more readily. But the opposite happens: when I contemplate the possibility that my life has no meaning, I am even less interested in living out the full allotment of my life. If I came to believe that my life was meaningless, I would probably tip over into actively wanting to die. That is the opposite of what Beck’s theory would predict for me. And I’m in good company as far as that goes; Camus felt the same.

Camus begins The Myth of Sisyphus as follows: 
There is but one truly serious philosophical problem and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy.
Notice that the question of life and death is contingent on the question of meaning and meaninglessness: if life is meaningless, there is no reason to live; if life has meaning, there is. The causality here is the reverse of Beck’s. Camus’s answer was peculiar to him, a last-minute dodge from nihilism: suicide is the refusal to face the meaninglessness of life, and we must face the meaninglessness of life. This smuggles meaning back into absurdism, however, and creates a contradiction.** Camus’s answer doesn’t interest me, though; what interests me is that the absurdists and existentialists seem to have had experiences parallel to mine, which suggests that Beck’s theory cannot describe all people. That is, it does not apply to the admittedly small population of people who simply don’t fear death in the first place.

Of course, it doesn’t have to address those people (us). As a theory of religion and culture in general, its failure to describe a small subset of the population, one which has not had much hand in shaping either religion or culture, is hardly a fatal error. In fact, this mismatch explains the general hostility most cultures and religions have shown towards suicide (as distinct from self-sacrifice): suicide, as a repudiation of the meaningfulness of life, threatens the culture’s worldview and thereby exposes people to the fear of death. And since worldviews are designed to assure people that death isn’t real, they don’t have much to offer people who mostly fear that life is worthless and all meaning is uncertain, except to simply insist on their own validity in a louder and angrier voice. Maybe this is why it is difficult to find resources in Christianity which are at all effective at dissuading people from suicide or ameliorating their depression without making things so much worse (fear of damnation, sense of shame, etc.).

It is worth noting that I do have a fear of death, but it’s a different one: I fear other people’s deaths. I fear the deaths of those who matter to me. Perhaps Beck’s theory can be applied to that? The attempt to understand another person’s life as meaningful, in order to deny their death? Or the attempt to honestly face another person’s death or our fear of it?

So my complaint is not that Beck’s theory is wrong because it omits the population of which I am a part; my complaint is simply that I find it difficult to use Beck’s theory to determine what I should do, in regards to my own worldview, or to predict or understand my own anxieties regarding the meaningfulness of my actions, the role of doubt in hospitality, and how to face my particular anxieties (of meaninglessness, but also of the deaths of people who matter to me) without harming others.

For more on my depression and assorted philosophy I've worked through in response to it, see this index post.

If you are experiencing suicidal thoughts, please contact a local suicide hotline. In BC, call 1-800-784-2433, or 1-800-SUICIDE. If you are experiencing depression generally, get therapy. I am serious about this: it helps so much.


*There are other problems I can think of: for instance, Beck does not seem to account for a desire to believe that which is true. However, I won’t deal with it here; it’s possible that we do have a reality principle in our psychological make-up which competes with the meaning-creating and death-fearing components, but I’m not entirely sure that we do; if we do value truth for its own sake, that value is likely nonetheless a product and component of our worldview and appears after we choose a worldview to manage our fear of death. Beck, however, found some inspiration for his work in the American pragmatists, whose sense of truth is different from the intuitive one: a proposition which works, which bears fruit, which makes you the person you think you should be, is true, and is true precisely for those reasons. Truth is not about correspondence between a proposition and an external reality or about coherence within a system of propositions. While Beck might not sign off on American pragmatist epistemology, I don’t think it’s a coincidence that Beck’s work focuses on whether beliefs make you a hospitable or inhospitable person rather than on whether beliefs meet some other benchmark of truth.
**This paradox may be resolvable, in the sense that it may be cognitively impossible for us to truly acknowledge meaninglessness. Any attempt to do so might smuggle meaning in somehow; an absurdist might therefore say that absurdism is simply the philosophical system which gets closest to acknowledging meaninglessness, even though it smuggles in the imperative “You must authentically acknowledge meaninglessness.” As it stands, I prefer existentialism to absurdism, since it acknowledges that meaning can be created and only asks that you acknowledge that you created that meaning yourself; it still contains the paradox (you did not yourself create the imperative “You must acknowledge that you created all meaning yourself”), but it contains a more hospitable version of that paradox.