The “Real” and the “True”

I’m currently in the sixth week, or a third of the way (!) through, my contemporary narrative class.  I’ve drafted my students into the service of my current obsessions, and so we’re tracking the ways that a select set of contemporary narratives thematize reading/interpretive processes as methods of evaluating truth.  My intrepid students are going great guns, of course, and are finding all sorts of examples and avenues that never would have occurred to me.  Case in point: how do we articulate the complex relationship between realism and truth in any given narrative?  How does the former shape our expectations of the latter, and to what extent does the ambiguity of the latter force us to question the former?

To fully understand that question, you’d need to have an idea of the kind of texts that I’ve been asking them to endure.  To some extent, whether they are novels or television serials, they have largely adhered, thus far, to the genre pithily described as “mind-fuck,” or, in more genteel language, what Thomas Elsaesser calls the “mind-game.”  In essence, I’ve asked students to dig into narratives (Adam Ross’s Mr. Peanut, Heidi Julavits’s The Uses of Enchantment, and now Moffat and Gatiss’s Sherlock) that actively present a series of internal questions about which of many narratives or perspectives is true, OR real, or both.  Still confused?  (So are we.)  In Mr. Peanut, for example, we begin with a compelling and horrifyingly ambiguous image of a woman who has died from anaphylactic shock: death by peanut.  Her husband is present, with a bloody hand.  The question: did he shove the peanut down her throat, or did he try to prevent her from swallowing it?  The novel goes on to consider the complexities of married life and the emotional weight of a desire for freedom, and along the way, it retells one of the most notorious American uxoricide cases, the Sheppard murder, made famous by the television series and film The Fugitive.  Thus, the details of the protagonist’s daily life and the “ripped from the headlines” crime-scene evidence of the Sheppard case accumulate, attempting to verify these tales of matrimonial mayhem.  It doesn’t take much to see how the status of “the real” serves to support “the true,” until the processes of interpretation and abstraction are brought to bear: how do law enforcement officials assess guilt?  To what extent does the desire to kill one’s wife differ from the actual act?  In what ways does the indecipherability of one case reflect on another?  (And just when you think you’ve got a handle on those in this novel, we move on to the next one.)

The class, thus far, has enthusiastically assessed these narrative strands in each text, weighing them against each other in order to argue for the one that seems believable (we also like the word “possible,” along with “plausible”).  We marshal our evidence to make claims about where we stand as readers when we close the covers; we integrate the evidence that others provide to alter our own readings.  What we have yet to be able to do, however, is to consider the ways that the conventions of realism enter into the conversation.  Or to put this another way: it’s all we can do to get a handle on what is “the real story” of the text; identifying the mechanisms that get us there remains, for now, beyond us.  Who designed this class, anyway?

And yet, the question remains.  For all of the retro-postmodern ambiguity these narratives possess, they also rest on a 200-year history (give or take) of a realist tradition: a painstakingly constructed, historically and culturally situated, ideologically rife set of conventions that registers to readers as “real.”  Where does our current cultural fascination with reality—our own dissonant belief, for instance, that “reality TV” is both a constructed falsity and yet somehow also true—stand in relation to that history?

Stay tuned, true believers.  We’ve still got 12 weeks to figure this stuff out.

#Digiwrimo; or, November is the Cruelest Month for Public Intellectualism

I don’t know which is sadder: the image of thousands (if not millions) of abandoned blogs, lying by the side of the digital highway, carrion for web vultures; or the fact that, after more than a year, I myself may have forgotten the finer points of constructing a blog post.
Either way, sometime at the end of October, I was reminded of the venerable tradition of Nanowrimo, the increasingly popular use of the month of November to join a community of writers in pursuit of 50,000 words–a draft of a novel–in 30 days.  I’m no would-be novelist, for sure, but I couldn’t help but admire (and envy, a bit) the challenge and sense of camaraderie that I imagine has to develop over the course of Nanowrimo.  I suspect it’s akin to the moment in a triathlon when you find yourself chatting with the person running next to you.  You may be strangers, but you have everything in common for this measure of time.  But what is a non-novelist to do with November, I ask you?  Thankfully, the good people at Marylhurst University in Portland have come up with an answer for the rest of us: Digiwrimo, a month of digital writing—in all of its manifestations (see “What is Digital Writing?” for more details).  November=50,000 words, novel or no; and by no, I think I mean no excuses, and no reason not to address the sadness of the abandoned blog and the loss of blogging skills.  All right, Digiwrimo.  Let’s do this thing.

Reason #1:

When I stop to consider why this kind of challenge is worth the commitment, I don’t have to dig too deeply.  First and foremost, I should note that I’ve been requiring my students to keep class blogs for almost 10 years.  It’s a practice that I believe promotes a sustained engagement with their coursework, asks them to think of their writing and thinking as public acts, and knits them into a community of thinkers who are considering similar questions and approaches to texts.  Over time, I’ve come to applaud the students who develop their blogging and commenting as a sustained and dependable practice.  “It’s hard to be consistent, and consistently thoughtful,” I recently wrote on a student’s midterm.  And it is.  Life for students, for professors, for parents, for people is complicated; it’s the easiest thing in the world to put off the complex cognitive work of thinking and writing.  But the payoff can be wonderful, and there is a set of pleasures that develops both from the practice of writing and from seeing an ever-growing archive of your work over time.  What patterns emerge?  What persistent concepts, questions, ideas appear across a number of posts?  What do these reveal about your own predilections, and how do you intend to pursue them?  Fine questions for my students, but for myself as well.  No one wants to be the professor who embodies “do as I say, not as I do.”

Reason #2:

A year ago, I put together a list of links for a colleague who was sorting through the complicated questions that surround contemporary scholarship.  What does it look like in the digital age?  What counts, and what doesn’t?  If we are reading, writing, and thinking differently with and through the internet, then how do scholars and intellectuals begin to identify the practices that matter to them, and consider the ways that these practices can occur in new forms?  The argument for the scholarly use of blogs has been building for some time; it may have reached its fever pitch in and around 2011.  A cavalcade of prominent intellectuals in a variety of fields had been blogging for years by that point (any list of these will be perspectival and incomplete, but I’ll just throw out a few here.  You have The Leiter Report in philosophy; Pharyngula in the sciences; Kristin Thompson and David Bordwell in film; Henry Jenkins in media studies; Michael Bérubé’s now sadly defunct blog, which covered cultural studies and politics).  These, of course, are just the blogs by individuals, and leave out the impressive blog collectives.

Out of this history of practice, then, came a debate (now much rehearsed and rehashed) about the place and value of these blogs.  One flashpoint in the “conversation” occurred during the 2010 MLA convention, when then-graduate student/adjunct professor Brian Croxall was unable to attend the conference because of financial constraints and instead posted his paper on his website.  Dave Parry’s post sums up the conundrum that resulted:

Let’s be honest, at any given session you are lucky if you get over 50 people, assuming the panel at which the paper was read was well attended maybe 100 people actually heard the paper given. But, the real influence of Brian’s paper can’t be measured this way. The real influence should be measured by how many people read his paper, who didn’t attend the MLA. According to Brian, views to his blog jumped 200-300% in the two days following his post; even being conservative one could guess that over 2000 people performed more than a cursory glance at his paper (the numbers here are fuzzy and hard to track but I certainly think this is in the neighborhood). And Brian tells me that in total since the convention he is probably close to 5,000 views. 5000 people, that is half the size of the convention.

And, so if you asked all academics across the US who were following the MLA (reading The Chronicle, following academic websites and blogs) what the most influential story out of MLA was I think Brian’s would have topped the list, easily. Most academics would perform serious acts of defilement to get a readership in the thousands and Brian got it overnight.

Or, not really. . .Brian built that readership over the last three years.

Parry’s take on the brouhaha that emerged is a useful one; it identifies the kinds of markers that scholars use to gauge the value of their work (here, translated into eyeballs and influence).  But Parry goes on to note the dismissal of Croxall by those devoted to a strict view of the historical means by which scholars captured eyeballs and built influence: presence at conferences, publications in peer-reviewed journals, and so on.  Parry refutes this model, citing the kind of careful work that Croxall had done up until that point, utilizing social media to forward his scholarly and pedagogical interests.  He ends his piece by linking this kind of work—the mobilization of a number of digital media forms and their attendant functions to circulate research—to “public intellectualism.”

I now ask my graduate students to read Parry’s blog post before they create their own blogs and start tweeting for our class.  It’s the narrative, I think, that brings home to them the way that the world of scholarship is changing, and the ways that they need to consider how their own work might circulate both in long-standing print formats and also online.  In addition, I hope that it encourages them to think carefully about how they want to straddle that divide.  For me, however, the argument about social media as public intellectualism is compelling, particularly at a moment when colleges and universities are imperiled by rising costs, shrinking state and federal budgets, and, perhaps most troublingly, their inability to make the case that what they offer is worthwhile.  Better scholars than I are making the argument that the self-same media that some view as chipping away at the foundations of education (e.g., social media will be the death of reading and bring on the zombie apocalypse, etc.) may actually be the grounds for re-invigorating it.  Dan Cohen, director of the Roy Rosenzweig Center for History and New Media at George Mason University, is such a believer that he’s posted a draft of his book chapter dedicated to this argument on his blog; meanwhile, Kathleen Fitzpatrick, the Director of Scholarly Communication at the MLA, addresses the complexities of academic publishing (in both print and digital forms) in her most recent book, Planned Obsolescence: Publishing, Technology, and the Future of the Academy.

It goes without saying, I should hope, that both Cohen and Fitzpatrick are consistent bloggers, and by Parry’s definition, public intellectuals.

Quite frankly, I’ve drunk the Kool-Aid on this one; I’m convinced by the arguments that, while academic publishing in journals remains an important way for experts in academic fields to talk to each other, we also have a responsibility to make our interests and passions and discoveries known to other audiences, and to model forms of engagement with the objects that we love the most.  And for that kind of work, nothing beats a blog.  (I’ll save my thoughts about Twitter for another day.)

So, thank you, Digiwrimo, for reminding me why I believe in digital writing, and why I need to make room for it, to develop and practice the same habits that I ask my students to develop every semester.  Let November begin.  (It’s going to be a long month.)

Postmodern Listy-ness

It’s really all about the list, no?  Somewhere, someone is articulating a complex theory of our media fascination with the list.  In the meantime, however, I couldn’t help but post a link to the LA Times list of “61 Essential Postmodern Reads.”

The list is interesting for a number of reasons.  First and foremost, it’s annotated!  With graphics!  And while my punctuation here might indicate a sarcastic appreciation for the aforementioned qualities, the annotations offer an easy way to understand why a particular book made the list, as well as a quick-and-dirty representation of the criteria for inclusion.  I’m a bit surprised to see that all of the criteria focus on form (e.g., “author is character,” “includes historical falsehoods,” and the in-need-of-more-detail “plays with language”).  Formal criteria allow the lister, Carolyn Kellogg, to include a number of intriguing picks that don’t always get included in the postmodern canon—Tristram Shandy, for instance, which gets a hearty “amen!” from me—or The Metamorphosis.  But they also allow for a couple of real head-scratchers—The Scarlet Letter, anyone?

As any of my poor, put-upon students in the postmodernism seminar can tell you, I’m highly suspicious of a postmodernism defined solely on the basis of form.  You don’t have to worship at F. Jameson’s feet to consider the idea that content might be part of the postmodern equation.  And you don’t have to buy everything Linda Hutcheon ever thought to mull over the notion that an attempt at political/ideological intervention can be part of a postmodern aesthetic movement.

Those caveats (or that screed; call it what you will) aside, however, the list does what many good lists do: it provides a basis for readers to debate inclusions, exclusions, and criteria.  A look at the ever-expanding comments is a testament to Kellogg’s work.  Take a look-see.

Wallace Elegy

I just peeked at the New York Times for a moment, and was absolutely shocked to see the latest AP news that David Foster Wallace is dead.  According to the very short AP story, Wallace’s wife discovered that he’d hanged himself.  What a tragedy.

I read Infinite Jest in 2000, in the month between finishing my qualifying exams and getting married and moving to a new city.  It was the perfect novel for that moment: utterly diverting and weird (buried heads and cross-dressing CIA agents), surreal and sincere by turns.  It was the perfect distraction from the endless details and free-roaming anxiety of moving, of beginning the dissertation.  It was nothing like what I had crammed into my brain for the preceding months, but it was an excellent test of all of those theories and interpretive strategies, and it reminded me why I wanted a career in English Studies in the first place.

Infinite Jest is a novel that begs you to read it again the minute you finish it.  Wallace peppers the novel with spot-on characterizations of contemporary American life (corporate sponsorship of years, negotiating national ownership of toxic waste, television taken to its logical conclusion), but withholds their narrative origins, hiding them deep in the text.  In the weeks it takes to finish the book, you develop a relationship with it (as well as a significant bicep from carrying it around).  It makes you a careful reader, an almost paranoid interpreter, a bit desperate to skim through scenes but afraid that you’ll miss something.  Making it to the end is the perfect ambivalent moment: relief that you’ve made it through, and the simultaneous realization that the conclusion makes the rest of the novel clear, and that you need to begin again.

I’ve since read some of Wallace’s other works (Girl with Curious Hair; The Broom of the System; his unbelievable, replete-with-footnotes essay on grammar for Harper’s Magazine), but none of them were able to replicate the same reading experience for me.  Periodically, when I’m fantasizing about the perfect class to teach, I imagine that a semester spent with Infinite Jest would be a once-in-a-lifetime experience for students—a kind of contemporary literature boot camp.  Of course, then reality sets in: can I really justify dragging undergraduates through 1100 pages of weirdness on a whim?

Perhaps it’s time to revisit the idea, or at least to revisit the novel myself.  It seems like a fitting tribute to a brilliant author dead long before his time.

***updated to add: Here’s a lovely farewell to Wallace from Times book doyenne Michiko Kakutani.

Read Only

The NY Times today released the first part of a series dedicated to investigating “how the Internet and other technological and social forces are changing the way people read.” Hoo boy. Let the games begin.

On first read, I’d say that author Motoko Rich strives for an admirable balance between two factions dedicated to defending their particular reading practices. For every study of declining test scores and declining reading for pleasure, she cites online readers’ descriptions of their own practices or the work of new literacy scholars.

This format reveals a surprising tone that both boosters and naysayers of digital reading share: a relatively consistent dismissal of the alternate format. For instance, Rich cites Dana Gioia of the NEA: “Whatever the benefits of newer electronic media they provide no measurable substitute for the intellectual and personal development initiated and sustained by frequent reading.” At the same time, we have fluent digital readers who have this to say about print books: “The Web is more about a conversation. Books are more one-way.”

The article carefully cites a number of material factors to consider as we weigh a shift in reading habits: the socioeconomic benefits of print literacy, its deep integration into school curricula, the challenges it presents for students with learning differences. But these considerations are buried deep on page 3 of the article, in a way that suggests they’re simply fodder for the bigger issue: the deep psychological investment we have in the way that reading inflects our daily lives, and the fact that no one is willing to be told that their preferred method is lacking in some way.

I find myself perched uncomfortably between these two ways of reading and the assumptions of superiority they promulgate. When Gioia says, “What we are losing in this country and presumably around the world is the sustained, focused, linear attention developed by reading,” a portion of my heart goes pitter-pat. Does reading a novel require that sustained attention? Obviously. And I’m willing to believe (until a neuroscientist tells me otherwise) that there’s a cognitive benefit to it, as well as a pleasure to be taken in it. But I’m also not willing to believe that all digital reading is the short-attention-span theater that Gioia assumes and of which Rich provides examples. When Nadia is reading fan fiction stories that run “45 web pages,” we’re talking about focused attention, and we’d have to study Nadia’s reading practices to convince ourselves that it wasn’t sustained or linear. In addition, the statement ignores the sociality of reading a number of digital sources on a similar topic.

On the other side of the fence (here I am, perched on a cliche), I’m taken aback by the digital readers’ characterizations of books. At least two of the young people interviewed take issue with books’ unitary nature–either their fixed plot structure or their singularity of voice. This also seems to be a mischaracterization of what print readers love about books, wherein the process of interpretation makes a book an archive of alternatives. [This assumes, of course, that you include interpretation in your definition of reading.]

I’m anxious to see how others perceive the coverage in the Times. For now, however, I’m struck by the gulf between readers, and by how little coverage (and study?) there is of how omnivorous readers characterize the pleasures, benefits, and drawbacks of their reading practices across media.

Cult Books?

Given that lists are always fascinating and disappointing, there’s a great piece up at the Telegraph on the “50 Best Cult Books” (hat tip to Whitney). The authors have a difficult time constructing the criteria for the category, as any of us would. What do you count as “cult”? What makes it so? For all of the possibilities, the one that stuck with me was this:

we were looking for the sort of book that people wear like a leather jacket or carry around like a totem. The book that rewires your head: that turns you on to psychedelics; makes you want to move to Greece; makes you a pacifist; gives you a way of thinking about yourself as a woman, or a voice in your head that makes it feel okay to be a teenager; conjures into being a character who becomes a permanent inhabitant of your mental flophouse.

Evocative and metaphoric it may be, but it’s a viscerally satisfying way to differentiate the cult novel from the bestseller, the merely popular, the truly weird. I’m particularly taken with the notion of the book as totem. Perhaps I’ve spent too much time on college campuses, but aren’t there always students (and professors, for that matter) who carry a particularly dog-eared copy of the cult book around with them? Doesn’t it become one of the ways that we identify our essential, unique identities (you know, the one that we share with 400,000 other people)? Aren’t those the ones with the characters that speak to us, make us right with the world, or at least explain the wrongness of the world and our own alienation?

Having said that, the Telegraph list can’t help but disappoint. To their credit, it’s a staunchly historical and multi-national list (including The Sorrows of Young Werther and Fear and Loathing in Las Vegas—I don’t know how many other categories can claim that). It’s multi-genre as well, featuring self-help books, novels, and philosophical tomes (Gödel, Escher, Bach? I dragged a copy of that around with me for years before I gave up). But the scope robs it of something too; perhaps its modern resonance? Were 19th-century cult readers—even if they did off themselves in a tribute to Goethe—like 1960s drug-addled cult readers? Is every cult the same?

For this reader, the comments become the saving grace of the list. Give them a read, and you’ll find yourself testing your own definition of “cult.” The Lovely Bones? Um, no. It was beautiful and sad and a page-turner, but not a cult classic. Fight Club or Trainspotting? Now you’re talking. It’s become a cliche by now, for sure, but it’s almost impossible to read Fight Club without getting sucked into it as a world view. It’s insanely quotable too—maybe in the future we WILL all be wearing leather clothes… While I’m not a huge Philip K. Dick fan, he certainly deserves a place. And to the commenter who asks whether a book that’s assigned for high school reading can be counted (we’re looking at you, To Kill a Mockingbird), I can only say amen.

Tunnel Vision

I suppose it’s an occupational hazard for academics, the phenomenon wherein texts that float across our consciousness tend to be subsumed and codified by our current research and teaching. Or at least this is what happens to me. As occupational hazards go, it could be much worse; I’ve never been stuck in a mine, and I’ve never had a patient die on me. I may be well on my way toward carpal tunnel syndrome and a Mr. Magoo-style myopia, but that’s about it.

So I recently discovered the singer/songwriter Rufus Wainwright. I know, I know, where have I been? What pop music planet have I been living on, that I’ve avoided RW? Let’s be honest; if left to my own devices, I’d be huddled in a corner listening to Squeeze’s Greatest Hits on infinite repeat. Be happy that I’ve made it out of the ’80s. The point of this, of course, is two-fold. First, how the world has changed since the last time I tried to find information about an artist. Not only can I look up Wainwright’s entire catalog on iTunes, I can google the lyrics to songs, read his Wikipedia entry, see his MySpace page, and cruise YouTube for videos. This beats the hell out of Tiger Beat, I must say (TB was, if I remember correctly, my primary source the last time that I wanted info on a singer. That might have been Simon Le Bon. It’s all very fuzzy now).

So onto the second point: YouTube. In addition to featuring a significant collection of fan videos of Wainwright, the site also houses a few of his professional music videos. The one I’m currently obsessed with?

The audio track of “Rules and Regulations” is just fine on its own, but there’s something about this video that pushes it into the realm of the transcendent.  I’ve been trying to put my finger on it for days now as the tune tumbles around in my brain.  Since I’m re-reading a slew of postmodern theory right now, I thought for a while that it was the video’s irony that was doing it.  In some ways, it’s a beautiful pop culture take on historiographic metafiction.  Wainwright takes all the signifiers of Victorian masculinity (the gentlemen’s club, the group exercising) and reveals them in all of their homoerotic glory.  [Or as Caitlin2489 writes: “If this isn’t the gayest thing. Rufus, darling, you’ve out-done yourself.”  Couldn’t have said it better myself.]  Linda Hutcheon would be so very proud!

Later, however, I found myself reviewing the introduction to Todd Gitlin’s book Media Unlimited, in which he argues (in part) that while we tend to say that we go to media for information, our interaction with media is really about feeling—media produces not a conscious analytic making of meaning, but rather an unconscious emotional response.  While I think that this needs to be qualified a bit (which Gitlin later goes on to do in the book), I wonder if that isn’t a more authentic approach to explaining my fascination with this video.  Really, when I think about it, I get the giggles.  It’s his interaction with the camera, I think—that knowing, deadly-serious and thus definitively tongue-in-cheek, twee awareness of his surroundings and how the audience must be perceiving it.

There’s also, of course, the possibility that it’s neither, but that my courses this semester have colonized my brain.  While I figure it out, I’ll keep on watching the video.  Hee hee.