Tw-Oscar Night

Oscar night has finally descended upon us, friends, and with that, we can all heave a gigantic sigh of relief.  There will be a final riotous outcry of “I can’t believe X won,” (Argo) “X was robbed,” (Zero Dark Thirty) and “OMG, did you see what she was wearing?!” (J. Lo) and then a delightful nine-month surcease until the awards season Rube Goldberg machine cranks into high gear again.  But in these final hours before what we know will be an interminable live telecast, I find myself reflecting on the ways that participation has shifted the experience of awards shows, and on the distinctive pleasures (and a few losses) that come with it.

As I noted a few months ago, I’ve only recently made a return to television, after a two-year break.  My, how the world has changed!  Last year at this time, I would have gone along my merry way today, paying no attention to the official Eastern Standard kickoff time, blissfully unaware of the calculus needed to reckon the best window for red carpet coverage on multiple networks.  I would have gone to bed as happy as a clam this evening, knowing that, come morning, someone(s) (a heady combination of The Daily Beast, E! Online, and The Fug Girls) would have provided a just-meaty-enough curation of the evening’s bests and worsts for me to get the general idea.  Thus, I’d be informed; I’d get the goods.  My, wasn’t I efficient.

What I’d neglected to consider, however, were the manifold ways that participation in social networks expands the experience of “events” like The Oscars.  It’s fair to say, I think, that for all of the showrunners’ attempts at entertainment and concision, these shows D-R-A-G.  Four hours of nominees, speeches, musical numbers, montages, held together by an ever-diminishing thread of anticipation—it’s a recipe for disappointment and frustration.  It’s no wonder that the show is shedding its audience at an alarming rate, particularly in the all-important “younger” demographic (18-49?  Really?  Younger than what, exactly?).  In contrast to the boredom/rising-annoyance-fest, however, stands the never-silent mob on Twitter and Facebook, a field of voices processing images, statements, and affects in real time.  There is a frenetic kind of energy that pervades this participation, for sure, and an intense competitive motivation to say something first and best.  In terms of resuscitating the Oscars (and award shows in general), I’d say there’s nothing like it.  (In fact, a month or so ago, the brilliant writer and effervescent Twitterer Alexander Chee noted something to the effect that Twitter may be the only thing maintaining appointment television viewing anymore.  I think he’s nailed it.)

There’s no question that this approach, and in particular, the way that it privileges speed over reflection, can allow for some of the worst kinds of responses.  (Self-censorship, self-preservation, and etiquette are apparently second-level instincts.)  I can’t help but wonder, however, if these events—in their online milieu—function as high-stakes training camps for wit: the equivalent of an improv class, where your spontaneous extemporanaeity blasts out to the ends of the ‘Verse.  At its best, event participation fosters a network that rewards the insightful, the funny, the pithy—all linguistic skills that I’m happy to see rise to the top of a discursive community’s values.  And this says nothing of the associated participatory skills of selection and curation via retweets—a analytical and socially generous investment in sharing things that delight you with your own network of followers.

Smarter scholars than I (Jean Burgess?  Jason Mittell?  Kelli Marshall?) could say volumes, I’m sure, about the ways that social networks perform, the histories of participation in television viewing, and the connections among these trajectories.  While I go look up what they have to say, however, I’ll be flexing my thumbs in anticipation of this evening’s event, more so for its commentary than for its content.

The “Real” and the “True”

I’m currently in the sixth week of, or a third of the way (!) through, my contemporary narrative class.  I’ve drafted my students into the service of my current obsessions, and so we’re tracking the ways that a select set of contemporary narratives thematize reading/interpretive processes as methods of evaluating truth.  My intrepid students are going great guns, of course, and are finding all sorts of examples and avenues that never would have occurred to me.  Case in point: how do we articulate the complex relationship between realism and the truth in any given narrative?  How does the former shape our expectations of the latter, and to what extent does the ambiguity of the latter force us to question the former?

To fully understand that question, you’d need to have an idea of the kind of texts that I’ve been asking them to endure.  To some extent, whether they are novels or television serials, they have largely cohered, thus far, to the genre pithily described as “mind-fuck,” or, in more genteel language, what Thomas Elsaesser calls the “mind-game.”  In essence, I’ve asked students to dig into narratives (Adam Ross’s Mr. Peanut, Heidi Julavits’s The Uses of Enchantment, and now Moffat and Gatiss’s Sherlock) that actively present a series of internal questions about which of many narratives or perspectives is true, OR real, or both.  Still confused?  (So are we.)  In Mr. Peanut, for example, we begin with a compelling and horrifyingly ambiguous image of a woman who has died from anaphylactic shock: death by peanut.  Her husband is present, with a bloody hand.  The question: did he shove the peanut down her throat, or did he try to prevent her from swallowing it?  The novel goes on to consider the complexities of married life, the emotional weight of a desire for freedom, and along the way, retells one of the most famous American uxoricide cases: the Sheppard murder, made famous by the television series and film The Fugitive.  Thus, the details of the protagonist’s daily life and the “ripped from the headlines” crime scene evidence of the Sheppard case accumulate, attempting to verify these tales of matrimonial mayhem.  It doesn’t take much to see how the status of “the real” serves to support “the true,” until the processes of interpretation and abstraction are brought to bear: how do law enforcement officials assess guilt?; to what extent does the desire to kill one’s wife differ from the actual act?; in what ways does the indecipherability of one case reflect on another?  (And just when you think you’ve got a handle on those in this novel, we move on to the next one.)

The class, thus far, has enthusiastically assessed these narrative strands in each text, weighing them against each other in order to argue for the one that seems believable (we also like the word “possible,” along with “plausible”).   We marshal our evidence to make claims about where we stand as readers when we close the covers; we integrate the evidence that others provide to alter our own readings.  What we have yet to be able to do, however, is to consider the ways that the conventions of realism enter into the conversation.  Or to put this another way: it’s all we can do to get a handle on what is “the real story” of the text; identifying the mechanisms that get us there is beyond the pale.  Who designed this class, anyway?

And yet, the question remains.  For all of the retro-postmodern ambiguity these narratives possess, they also rest on a 200-year history (give or take) of a realist tradition: a painstakingly constructed, historically and culturally situated, ideologically rife set of conventions that registers to readers as “real.”  Where does our current cultural fascination with reality—our own dissonant belief, for instance, that “reality tv” is both a constructed falsity and yet somehow also true—stand in relation to that history?

Stay tuned, true believers.  We’ve still got 12 weeks to figure this stuff out.

New(s) Access

November 8: the day we recover from the election and begin to process the data with some modicum of logic, distance, and methodology, as opposed to the last two days of sleep-deprived enthusiasm, relief, and urgency to be the first out of the gate.  Like many of us, I spent yesterday in a haze, reading election coverage, trying to make sense of what we knew about the election after it had happened, the various parties’ reactions, and what we could discern about demographics, about responses to important issues, about America as a national body.  Of course, I was doing it on about 4 hours of sleep, in between classes and meetings, and so it was less than optimal cogitation on my part.  This morning, however, buoyed by yesterday’s 9 p.m. bedtime, I realized that one of the elements of this election cycle that I wanted to preserve and assess was the difference in the ways that I had accessed election night coverage itself, a difference that marks a particular shift from television culture to internet culture.

A little background probably can’t hurt here.  When my husband and I moved into our current abode three years ago, we had a little skirmish with our local cable provider (I’ll spare you the gory details.  Suffice to say that it involved a lot of profanity and calling into question the legitimacy of said cable provider’s parentage).  The question, at the time, was what our options were; could we live without access to television?  Our house sits in some mysterious blackout zone of reception.  We receive neither the public digital signal, nor much in the way of cell service.  We’re lucky to have access to FiOS here, or we might as well have hung it up and started our own Pioneer Days celebration.  It was a bizarre moment: we could get fiber optic service for internet and phone, but not cable through that provider.  Thus, the real question was whether we could live with what was available through streaming services and the mail.  This was an actual question, three years ago.  Netflix was radically expanding its library of streaming media, Amazon had just entered the fray, but services like Hulu had not yet made the jump to a simple access point for television viewing (and by simple I mean “don’t make me get an HDMI cable and my laptop to try and Frankenstein this mess together just so that I can watch an episode of 30 Rock”).  In addition, this was juuuust before Blu-ray players began to integrate access to streaming services as part of their hardware.  In short: we weren’t quite your plucky early adopters who were willing to figure out how to make the wifi talk to the computer talk to the television; we were looking for something not much harder than cable: I want to turn on the television and watch what I want to watch.  And I don’t want to give any more money to the cable company.  Jerks.

The moral of this story is thus: with some research, we invested in a dandy little Roku box, and have been mighty pleased with it.  Because streaming offerings have, for the most part, expanded exponentially (hello, Criterion Collection?  Gimme.), we’re generally able to find things we enjoy, and we’ve gotten quite used to NEVER HAVING TO WATCH COMMERCIALS.  EVER.  In addition, we watch entire seasons in a go, rather than seeing an episode at a time, weekly.  I’ll say more about this later, but moving to streaming media exclusively will change you as a viewer.  ‘Nuff said.

What this switch meant, however, is that we DO NOT have access to mainstream television, not in any timely way.  Sure, the internet and Roku both offer access to news shows after they’ve aired, but the timeliness of most news coverage tempers my desire to hunt down particular shows and watch them in their entirety.  In essence: why watch Rachel Maddow or NBC Nightly News hours or days after their broadcast?  I can skim the NYTimes, or the Daily Beast, and get a sense of the trajectory of news for the day.

All of these changes in media and news consumption, effectuated by the cutting of cable, have been, for the most part, painless and fascinating, in the “self-as-lab-rat” way.  But I had forgotten the ways that certain cultural events (The Olympics was one of these, but more importantly: PRESIDENTIAL ELECTIONS) demand shared access to certain kinds of viewing for full participation.  In the 2008 election, we gathered at a friend’s house to watch the returns, eating dinner and having nervous conversations as we waited for Brian Williams to call various states.  I’m sure that we could have suckered someone we knew into sharing their television for a night, but I wasn’t feeling totally sociable.  Surely, there was a way to experience this election with others?  To check returns as they came in?

When you live on the East Coast, election results come in late.  And when you’re used to getting up at 5, bedtime comes early.  As much as I wanted to know how this was going to turn out, I also wanted to get some damn sleep.  So rather than sitting up with my laptop all night, I took my phone to bed with me.  “I’ll just check in periodically,” I thought.  “You know, just to see the electoral map at CNN.com.”

What actually emerged from that decision, however, was a frenetic experience of monitoring several apps and sites in an attempt to access breaking news, and then to verify it; to get a sense of the reaction to said news from friends and from the wider world.  And there wasn’t a clear-cut distinction between news outlets and social outlets: I received as much breaking information about local races, about leading poll numbers and districts from Facebook as I did from the CNN website.  As many have noted, Twitter itself became a crucial and almost overwhelming hash of early, rescinded, hoax, and legitimate calls, in addition to a hotbed of snark that was feeding television discourse as well as making its way on to Facebook.  (I’d see a particularly snort-worthy tweet approximately 3 minutes before someone posted it to FB.)  I got to watch how excited and anxious many of my students were about the returns, even as people swapped tips and questions about where reliable information was coming from—and how’s that for internet haters?  A consistent and running discourse emerged, throughout the evening, about how we could verify the information coming in: first calls, the number of concurring calls, and grudging calls by networks opposed to the results were all vetted as probable markers of veracity.  Fool us once, Election 2000, but not again.

On the one hand, then, I got the equivalent of a backstage pass to a much larger community of shared reactions than I would have received with a small group of friends, parked in front of a television all night.  It was a networked amalgam of sites, for sure, but Twitter, Facebook and news websites, strung together, created both a local and national view of the election that was utterly new to me.  On the other hand, there was a thread of the conversation that I did NOT have access to: a band of discussion/snark that was reacting to the media’s reaction: Brian Williams’s discussions of the legalization of marijuana, Diane Sawyer’s demeanor, Karl Rove’s questioning of the Ohio call.  There remains an important dimension of shared media experience and critique that revolves around the dynamism and unpredictability of live television, one that can’t be accessed, necessarily, via the web—at least not on my phone in the dead of night with no audio.

So, Election 2012 is behind us, with its new landscape of media access and participation.  And as the interpenetration of the social and the informative grows, I can only imagine the ways that the next scheduled political event will be accessed, unevenly, by viewers with a variety of devices and inputs, both singular and jury-rigged together.  To what extent are the experiences that are shared by most of us (e.g., national politics) accessed differently?  And in what ways will those differences continue to shape disparate or common experiences of the same event?

 
[NOTE: If your question about this post is: "Hey!  Didn't you commit to #digiwrimo, you slacker?  Isn't this your first post in 4 days?  Are you just going to ignore that?!", then your answers are, in order: yes, yes, and obviously, no.  I mean, did I fall off the wagon, hard?  Yes.  And I spent some time feeling bad about that, in between reading student portfolios and writing up materials for my department and advising 12 students and teaching classes.  And I even thought about counting the tweets and comments and class-related posts that I've written since then, as it would make a significant contribution to my word count.  In the end, I decided against that, because I think the spirit of #digiwrimo, or at least my own commitment to the idea, is that it should be a certain kind of writing, the observational/analytic writing that I associate with public academic blogs, that public humanities intellectualism that I wrote about last week.  And on Nov. 30, I want a clear picture of my accomplishments in that arena, rather than the kinds of writing that I do, and do for my job, regardless of writing challenges and communities of writers who are challenging themselves.  And in that same vein, while I thought about throwing in the digital towel as of Nov. 5, I also thought that perhaps the larger purpose of Digital Writing Month is not that participants achieve 50,000 words and a daily post, but rather that they form a habit of being called to writing and expression in digital formats; that they practice a kind of mindfulness about their writing, and cultivate a desire and readiness to find experiences and events worth writing about, and to do that writing, regardless of word counts and months.  And so, in that spirit, I'll soldier on.  So there.]

“You Don’t Upload Me?” Pre-Election Women’s Video

The new face of women’s political video?

It must be said, up front, that I am no scholar of elections, or even someone who carefully follows political rhetoric in all of its complex historical manifestations.  I’m not even someone who religiously tracks the ways that video gets circulated during elections.  (There are, for the record, people who are fantastic at this: Chuck Tryon, for one.  Go and read his blog if you want expertise.  Go ahead.  I’ll wait.)

But even for a layperson/concerned citizen/casual observer like me, it’s difficult to ignore the video production and circulation by and for women as we approach the final lap of this presidential election cycle, in part because it is beginning to counteract some assumptions that media scholars have made about the ways that women use video.  How does that work, you ask?  Stay with me, now.

Look: you don’t have to dig deep to find the news that women voters will play a major role on Tuesday; sources as disparate as the Huffington Post, the Atlanta Journal-Constitution, and the Wall Street Journal have detailed the ways in which, as a constituency, women may very well choose our next president.  It shouldn’t be a surprise, then, that we’ve seen a significant amount of female-focused political video throughout this campaign.  But in the last few weeks, there’s been a decided uptick in the number of videos themselves, in addition to the kind of circulation they receive.

The clip of Tina Fey’s speech for the Center for Reproductive Rights lapped the internet on Oct. 25; it’s been uploaded to YouTube in a number of formats, made the Facebook and Twitter rounds, and garnered coverage from entertainment, culture, and political outlets (E!Online, Salon.com, and the Wall Street Journal, which hosts the video above, now sitting at ~137,000 views).  And Fey’s video is just one of a handful that have emerged in the final two weeks before election day.  I’m also interested in the recent release and dissemination of Lena Dunham’s “First Time” video, the Nov. 2 “Don’t Turn Back Time on Women” video, with Cher and Kathy Griffin, and Lesley Gore’s 10/22 “You Don’t Own Me PSA” (see below).  What’s so significant about this handful?

Here’s the thing about this emergence of explicitly political video by and for women: it flies in the face of the data showing that women, particularly women over 30, are less likely to create and post video than men.  In a 2007 Pew study of online video, Mary Madden notes that while a small-but-meaningful gender disparity exists when one tracks video watching and downloads, the gap widens considerably when we consider uploads:

Nearly two-thirds of online men (63%) use the internet to watch or download video, while just about half of online women do so (51%). Video posting produces a more dramatic disparity; 11% of online men say they upload video, compared with only 6% of online women.

She goes on to mention, however, that the gender gap lessens considerably when you look at younger internet users:

When looking exclusively at the viewing and uploading habits of young adults (those ages 18-29), young men and women report roughly the same incidence of video watching and uploading. Instead, users age 30 and older are the ones who exhibit the most pronounced gender differences.

With these ideas in mind, it’s even more significant that the majority of the figures in these election season videos are squarely outside of the 18-29 demographic (I’ll cheat a bit here and include Lena Dunham, who was born in ’86).  But for the most part, we’re seeing women making videos to persuade other women to vote for a particular candidate.  And the reaction itself is notable: not to pick on Dunham’s video in particular, but it is, as far as I know, the only one to get a special shout-out from the Family Research Council for being “disgusting.”  (You can follow that controversy here.)  Without Dunham, however, we have videos featuring women in their 40s (thank you, Tina Fey); 50s (Kathy Griffin); 60s (Cher and Lesley Gore).  It’s a cavalcade of mature women who are registering their political discontent, who are mobilizing women to vote and to vote for a progressive slate of candidates, and who, in opposition to demographic media trends, have decided that video is the most powerful medium to accomplish their aims.

Is this political cycle, one that has focused on women as perhaps the most treasured voting bracket while simultaneously featuring some of the most retrograde policies and opinions about women’s health and autonomy, simply an anomaly great enough to interrupt gendered, generational practices with video?  Or does it indicate a growing interest among older women in harnessing video as a viral tool to represent their beliefs?

It could be either of these, as well as any number of other considerations that I’m not taking into account here. As a parting observation, however,  I will just mention that there is a fascinating rhetorical concurrence in the “Don’t Turn Back Time on Women” video and the “You Don’t Own Me PSA.”  Dunham’s video invokes a shared second person audience—the “you” that wants your first time voting to be with someone special (wink); but both Cher/Griffin and Gore invoke a “we” voice throughout their videos.  For Team Cher/Griffin, this is a multidimensional “we”: it’s “women and people who like women and respect them.  I’m looking at you, LGBTQs…”; later, it’s a “we” that is calling on voters in cities in crucial swing states (Portsmouth, Fort Lauderdale, Cleveland).  In short, it’s a “we” that is held together by values, and must be tended to/protected by those in key geographical locations (implicitly, also part of the “we”).

Gore’s “we,” however, is different, and that difference may well instantiate the role that women’s political video is playing in the final days before Nov. 6.  Gore lends her iconic 1963 song to the video, which consists of women in solitude, in pairs, and in groups, lip-syncing to her song in front of their webcams.  The video functions as a collage, then, of women making their own video in concert with others, for the sake of a particular cause; almost literally singing with the same voice.  “You Don’t Own Me” is composed of a multi-generational, media-making and producing collective of women, oriented toward the same political goal.

And so, as we move into the final hours preceding the election, I find myself hoping for an enormous voter turnout; a clear and decisive winner of the presidency; and the continuation of an emerging media trend that might disrupt some of our assumptions about who can make and circulate video.

Voting via Video

With election fever in the air, I’ve been holding on to Errol Morris’s Op-Ed video “11 Excellent Reasons Not to Vote?”, waiting for a time when I could give it my full attention.  Thank you, Friday morning!

Morris’s piece interests me for two reasons that will be familiar to literary types: form and content.  As many readers will know, Morris is an award-winning director and author (who keeps a vibrant website that corrals all of his various projects).  For me, however, Morris is most notable for his documentaries: films like The Fog of War and The Thin Blue Line not only take up some of the most fascinating and complex questions that we have as a society (war, justice, ethics, belief), but do so in a visually compelling way.  [I'll just come right out and say this now, so as to reveal my prejudices: I dearly wish that more documentary filmmakers, working both in short and long form, would pay more attention to aesthetics.  Realism can be a trap; dependence on interviews, static camera, and archival footage can be flat.  I'm looking at you, Ken Burns.]  His recent piece for the NYTimes, then, is actually labeled an “Op-Doc”: a neologism that I assume brings together the ideas of “Op-Ed” (a term that I assumed grew out of “opinion-editorial,” and certainly fulfills that role, although Wikipedia tells me that it actually comes from “opposite the editorial page,” to indicate its difference from the editorials penned by newspaper staff themselves.  Huh.  You learn something new every day.) and “Documentary.”

Before we even get out of the gate with Morris’s piece, then, we’re already talking about a new genre: what is an “Op-Doc”?  What are its components?  Are the expectations for it different than they would be for an op-ed piece?  What happens when you move the requirements for an op-ed into a video form?  And for that matter, what happens when a short documentary becomes an opinion?

I’m overly concerned about these formal questions right now because they’re the questions that my first-year composition students are wrestling with as they move into their final research project for the course.  Up until now, they’ve crafted essays in print and moved them into digital text (by uploading them into an online portfolio); they’ve also composed a remix video, and thus worked with visual and audio sources (with a bit of text sprinkled throughout).  But as we move toward the end of the semester, I’ve asked them to think about how to use the best of both formats: digital text, along with visual and audio sources, to help their audience to understand a complex question and their attempts to answer it with their original research.  Piece of cake, right?  (If you’re interested, you can follow their good-natured discussion about this and other class issues on Twitter at #DEW1: a hashtag that grows out of the name for the class—Digital Expository Writing.)

On Monday, I think I’ll ask them to look at the ways that Morris does just this in his Op-Doc: his question, as you might note from the title of the piece, is manifold:

It made me wonder: What’s stopping us? Do we have reasons not to vote? How can we hear so much about the election, and not participate? If hope isn’t doing it, isn’t the fear of the other guy winning enough to brave the roads, the long lines?

To answer that question, he interviewed a series of young people who actually DID intend to vote (a characteristic that makes them unusual by national standards) and asked them to engage his questions before explaining their own motivations.  I love this approach: it sets his subjects up to think beyond themselves from the very beginning, which may very well help them to imagine their initial motivations differently.  But before I jump fully into the content of Morris’s piece, I want to finish up this assessment of the form: how does this position his audience?  If you are a reader first, then you know what’s up with the video—he reveals his methodology in the fourth paragraph.  You would know, then, by the time you double back to watch (assuming that you do), that the interviewees don’t endorse the “11 reasons not to vote” that they’re articulating.  But if you’re a viewer first and a reader second, you’d be at least a minute and 30 seconds into the video before you began to see the speakers questioning the arguments that they provide against voting.  And perhaps this is at least part of the work that the video achieves: if your assumption is that these are young people who are apathetic/confused/slackers, then you need to take a closer look at them.  It’s a clever, and subtle, rhetorical move on the part of the filmmaker, who might be calling out the readers/viewers of the Times on their willingness to castigate a generation for their unfathomable lack of civic pride.

On the question of content, which has already managed to slide into the conversation here, Morris quickly runs through, and largely dispels, I think, some of the more popular reasons for not voting (e.g., one vote won’t matter; confusion and complexity; no candidate is good; “it’s just a way to make yourself happy”; “awkward family dinners”), before pairing some very serious reasons to vote (i.e., Florida in 2000; the legacy of the Voting Rights Act of 1965) with some less serious ones (e.g., spite voting).  Along with some chipper music and Morris’s own good-natured hectoring from behind the camera (“How much would you sell your vote for?”), it makes for an incitement to vote that is free of the guilt-inducing tone of some “get out the vote” campaigns.

As a side note, however, I’d like to point out one of the themes that emerges from the interviews.  At the end of the written portion of Morris’s Op-Doc, he says this: “Voting is a leap of faith. Calling it a civic duty is not enough. Either you believe that the system is both changeable and worth changing, or you don’t — and most new voters are not convinced.”  Very probably true; and as someone who is particularly interested in the ways that language works, I’d venture a guess that “civic duty” is not a term that lands with very many young people nowadays.  It barely lands with me, and I’m almost 20 years beyond many of the people interviewed in this piece.

The theme that the interviewees DO pick up, however, is the dismissal of the individual and the pleasures of joining a group.  The video begins with the argument against voting that hinges on the claim that a single vote couldn’t possibly matter; five minutes in, a participant reminds us that “it’s not about you, it’s about all of us…Get off Twitter, stop talking to your friends about how great you are, go down to vote and throw your lot into the sea with everyone else.”  The next person talks about the “on the other side” experience of having voted, a kind of shared practice that should inspire people to go and get a drink.  We later see a very pregnant mother whose vote is now “twice as important,” along with a newly-naturalized citizen who will vote for the first time.  It’s a bit of a vexed message (what’s up with the Twitter hate?), and yet it seems to suggest that dedicated voters in a demographic notorious for NOT voting imagine themselves and their motivations as being distinctly communal; they’re in a group who votes right now, in this election, and/or they’re in a group that prizes voting in a historical trajectory.  Everyone else is in the sea, or getting a drink after having voted, or voting in honor of those who couldn’t vote before them.  This is what we all do; you should do it too.  Is it going too far to say that individualism, here, is shunted aside for the priorities and pleasures of the generation as a whole?  Where does the rationale for voting as a mode of belonging fit in the rhetoric of civics, of responsibility, and in the description of the millennial generation(s) as individualistic navel-gazers?  If Morris’s interviewees are representative of young people who DO vote, how do we use these insights to capture and incite more of them to “throw their lot into the sea”?

#Digiwrimo; or, November is the Cruelest Month for Public Intellectualism

I don’t know what’s more sad: the image of thousands (if not millions) of abandoned blogs, lying by the side of the digital highway, carrion for web vultures; or the fact that, after more than a year, I myself may have forgotten the finer points of constructing a blog post.

Either way, sometime at the end of October, I was reminded of the venerable tradition of Nanowrimo, the increasingly popular use of the month of November to join a community of writers in the pursuit of 50,000 words–a draft of a novel–in 30 days.  I’m no would-be novelist, for sure, but I couldn’t help but admire (and envy, a bit) the challenge and sense of camaraderie that I imagine has to develop over the course of Nanowrimo.  I imagine that it’s akin to the moment in a triathlon when you find yourself chatting with the person running next to you.  You may be strangers, but you have everything in common for this measure of time.  But what is a non-novelist to do with November, I ask you?  Thankfully, the good people at Marylhurst University in Portland have come up with an answer for the rest of us: Digiwrimo, a month of digital writing—in all of its manifestations (see “What is Digital Writing?” for more details).  November=50,000 words, novel or no; and by no, I think I mean no excuses, and no reason not to address the sad of the abandoned blog, the loss of blogging skills.  All right, Digiwrimo.  Let’s do this thing.

Reason #1:

When I stop to consider why this kind of challenge is worth the commitment, I don’t have to dig too deeply. First and foremost, I should note that I’ve been requiring my students to keep class blogs for almost 10 years.  It’s a practice that I believe promotes a sustained engagement with their coursework, asks them to think of their writing and thinking as public acts, and knits them into a community of thinkers who are considering similar questions and approaches to texts.  Over time, I’ve come to applaud the students who develop their blogging and commenting as a sustained and dependable practice.  “It’s hard to be consistent, and consistently thoughtful,” I recently wrote on a student’s midterm.  And it is.  Life for students, for professors, for parents, for people is complicated; it’s the easiest thing in the world to put off the complex cognitive work of thinking and writing.  But the payoff can be wonderful, and there is a set of pleasures that develop both from the practice of writing as well as from seeing an ever-growing archive of your work over time.  What patterns emerge?  What persistent concepts, questions, ideas appear across a number of posts?  What do these reveal about your own predilections, and how do you intend to follow those?  Fine questions for my students, but for myself as well.  No one wants to be the professor who embodies the “do what I say, not what I do.”

Reason #2:

A year ago, I put together a list of links for a colleague who was sorting through the complicated questions that surround contemporary scholarship.  What does it look like in the digital age?  What counts, and what doesn’t?  If we are reading, writing, and thinking differently with and through the internet, then how do scholars and intellectuals begin to identify the practices that matter to them, and consider the ways that these practices can occur in new forms?  The argument for the scholarly use of blogs has been building for some time; it may have reached its fever pitch in and around 2011.  A cavalcade of prominent intellectuals in a variety of fields had been blogging for years by that point (any list of these will be perspectival and incomplete, but I’ll just throw out a few here.  You have The Leiter Report in philosophy; Pharyngula in the sciences; Kristin Thompson and David Bordwell in film; Henry Jenkins in media studies; Michael Berube’s now sadly defunct blog, which covered cultural studies and politics).  These, of course, are just the blogs by individuals, and leave out the impressive blog collectives.

Out of this history of practice, then, came a debate (now much rehearsed and rehashed) about the place and value of these blogs.  One flashpoint in the “conversation” occurred during the 2010 MLA convention, when then-graduate student/adjunct professor Brian Croxall was unable to attend the conference because of financial constraints and instead posted his paper on his website.  Dave Parry’s post sums up the conundrum that resulted:

Let’s be honest, at any given session you are lucky if you get over 50 people, assuming the panel at which the paper was read was well attended maybe 100 people actually heard the paper given. But, the real influence of Brian’s paper can’t be measured this way. The real influence should be measured by how many people read his paper, who didn’t attend the MLA. According to Brian, views to his blog jumped 200-300% in the two days following his post; even being conservative one could guess that over 2000 people performed more than a cursory glance at his paper (the numbers here are fuzzy and hard to track but I certainly think this is in the neighborhood). And Brian tells me that in total since the convention he is probably close to 5,000 views. 5000 people, that is half the size of the convention.

And, so if you asked all academics across the US who were following the MLA (reading The Chronicle, following academic websites and blogs) what the most influential story out of MLA was I think Brian’s would have topped the list, easily. Most academics would perform serious acts of defilement to get a readership in the thousands and Brian got it overnight.

Or, not really. . .Brian built that readership over the last three years.

Parry’s take on the brouhaha that emerged is a useful one; it identifies the kinds of markers that scholars use to identify the value of their work (here, translated into eyeballs and influence).  But Parry also notes the dismissal of Croxall by those who were devoted to a strict view of the historical means by which scholars captured eyeballs and built influence: presence at conferences, publications in peer-reviewed journals, etc.  Parry refutes this model, citing the kind of careful work that Croxall had done up until this point, utilizing social media to forward his scholarly and pedagogical interests.  He ends his piece by linking this kind of work—the mobilization of a number of digital media forms and their attendant functions to circulate research—to “public intellectualism.”

I now ask my graduate students to read Parry’s blog post before they create their own blogs and start tweeting for our class.  It’s the narrative, I think, that brings home to them the way that the world of scholarship is changing, and the ways that they need to consider how their own work might circulate both in long-standing print formats and also online.  In addition, I hope that it encourages them to think carefully about how they want to straddle that divide.  For me, however, the argument about social media as public intellectualism is compelling, particularly at the moment when colleges and universities are imperiled by their rising costs, shrinking state and federal budgets, and perhaps most troublingly, their inability to make the case that what they offer is worthwhile.  Better scholars than I are making the argument that the self-same media that some view as chipping away at the foundations of education (e.g., social media will be the death of reading and bring on the zombie apocalypse, etc.) may actually be the grounds for re-invigorating it.  Dan Cohen, director of the Roy Rosenzweig Center for History and New Media at George Mason University, is such a believer that he’s posted a draft of his book chapter dedicated to this argument on his blog; meanwhile, Kathleen Fitzpatrick, the Director of Scholarly Communication at the MLA, addresses the complexities of academic publishing (in both print and digital forms) in her most recent book, Planned Obsolescence: Publishing, Technology, and the Future of the Academy.

It goes without saying, I should hope, that both Cohen and Fitzpatrick are consistent bloggers, and by Parry’s definition, public intellectuals.

Quite frankly, I’ve drunk the Kool-Aid on this one; I’m convinced by the arguments that, while academic publishing in journals remains an important way for experts in academic fields to talk to each other, we also have a responsibility to make our interests and passions and discoveries known to other audiences, and to model forms of engagement with the objects that we love the most.  And for that kind of work, nothing beats a blog.  (I’ll save my thoughts about Twitter for another day.)

So, thank you, Digiwrimo, for reminding me why I believe in digital writing, and why I need to make room for it, to develop and practice the same habits that I ask my students to develop every semester.  Let November begin.  (It’s going to be a long month.)

President’s Day 2011: Technology and the Teaching Learning Process

What better occasion to return to the blog than a spring semester President’s Day devoted to “Technology and the Teaching Learning Process”?  Below are a few links that I’ll discuss bright and early tomorrow in the Lally Forum with my colleague Michael Brannigan.

The New York Times on Digital Humanities: “Digital Keys for Unlocking Humanities’ Riches”

The Pew Research Center’s Internet and American Life Project: “Teens, Video Games and Civics”

The It Gets Better Project on YouTube, and on its own site

And while I won’t get a chance to talk about these, they’re also great examples of smart people thinking in sophisticated ways about the learning potential of new media technologies:

USC Annenberg School for Communication and Journalism: Project New Media Literacies

HASTAC: Humanities, Arts, Science and Technology Advanced Collaboratory

MacArthur Foundation Spotlight: Digital Media and Learning