Tw-Oscar Night

Oscar night has finally descended upon us, friends, and with that, we can all heave a gigantic sigh of relief.  There will be a final riotous outcry of “I can’t believe X won,” (Argo) “X was robbed,” (Zero Dark Thirty) and “OMG, did you see what she was wearing?!” (J. Lo) and then a delightful nine-month surcease until the awards season Rube Goldberg machine cranks into high gear again.  But in these final hours before what we know will be an interminable live telecast, I find myself reflecting on the ways that participation has shifted the experience of awards shows, and the distinctive pleasures (and a few losses) that come with it.

As I noted a few months ago, I’ve only recently made a return to television, after a two-year break.  My, how the world has changed!  Last year at this time, I would have gone along my merry way today, paying no attention to the official Eastern Standard kickoff time, blissfully unaware of the calculus needed to reckon the best window for red carpet coverage on multiple networks.  I would have gone to bed as happy as a clam this evening, knowing that, come morning, someone(s) (a heady combination of The Daily Beast, E! Online, and The Fug Girls) would have provided a just-meaty-enough curation of the evening’s bests and worsts for me to get the general idea.  Thus, I’d be informed; I’d get the goods.  My, wasn’t I efficient.

What I’d neglected to consider, however, were the manifold ways that participation in social networks expands the experience of “events” like The Oscars.  It’s fair to say, I think, that for all of the showrunners’ attempts at entertainment and concision, these shows D-R-A-G.  Four hours of nominees, speeches, musical numbers, and montages, held together by an ever-diminishing thread of anticipation—it’s a recipe for disappointment and frustration.  It’s no wonder that the show is shedding its audience at an alarming rate, particularly in the all-important “younger” demographic (18-49?  Really?  Younger than what, exactly?).  In contrast to the boredom/rising-annoyance-fest, however, stands the never-silent mob on Twitter and Facebook, a field of voices processing images, statements, and affects in real time.  There is a frenetic kind of energy that pervades this participation, for sure, and an intense competitive motivation to say something first and best.  In terms of resuscitating the Oscars (and award shows in general), I’d say there’s nothing like it.  (In fact, a month or so ago, the brilliant writer and effervescent Twitterer Alexander Chee noted something to the effect that Twitter may be the only thing maintaining appointment television viewing anymore.  I think he’s nailed it.)

There’s no question that this approach, and in particular, the way that it privileges speed over reflection, can allow for some of the worst kinds of responses.  (Self-censorship, self-preservation, and etiquette are apparently second-level instincts.)  I can’t help but wonder, however, if these events—in their online milieu—function as high-stakes training camps for wit: the equivalent of an improv class, where your spontaneous extemporaneity blasts out to the ends of the ‘Verse.  At its best, event participation fosters a network that rewards the insightful, the funny, the pithy—all linguistic skills that I’m happy to see rise to the top of a discursive community’s values.  And this says nothing of the associated participatory skills of selection and curation via retweets—an analytical and socially generous investment in sharing things that delight you with your own network of followers.

Smarter scholars than I (Jean Burgess?  Jason Mittell?  Kelli Marshall?) could say volumes, I’m sure, about the ways that social networks perform, the histories of participation in television viewing, and the connections among these trajectories.  While I go look up what they have to say, however, I’ll be flexing my thumbs in anticipation of this evening’s event, more so for its commentary than for its content.

New(s) Access

November 8: the day we recover from the election and begin to process the data with some modicum of logic, distance, and methodology, as opposed to the last two days of sleep-deprived enthusiasm, relief, and urgency to be the first out of the gate.  Like many of us, I spent yesterday in a haze, reading election coverage, trying to make sense of what we knew about the election after it had happened, the various parties’ reactions, and what we could discern about demographics, about responses to important issues, about America as a national body.  Of course, I was doing it on about 4 hours of sleep, in between classes and meetings, and so it was less than optimal cogitation on my part.  This morning, however, buoyed by yesterday’s 9 p.m. bedtime, I realized that one of the elements of this election cycle that I wanted to preserve and assess was the difference in the ways that I had accessed election night coverage itself, a difference that marks a particular shift from television culture to internet culture.

A little background probably can’t hurt here.  When my husband and I moved into our current abode three years ago, we had a little skirmish with our local cable provider (I’ll spare you the gory details.  Suffice to say that it involved a lot of profanity and calling into question of the legitimacy of said cable provider’s parentage).  The question, at the time, was what our options were; could we live without access to television?  Our house sits in some mysterious blackout zone of reception.  We receive neither the public digital signal, nor much in the way of cell service.  We’re lucky to have access to FiOS here, or we might as well have hung it up and started our own Pioneer Days celebration.  It was a bizarre moment: we could get fiber optic service for internet and phone, but not cable through that provider.  Thus, the real question was whether we could live with what was available through streaming services and the mail.  This was an actual question, three years ago.  Netflix was radically expanding its library of streaming media, Amazon had just entered the fray, but services like Hulu had not yet made the jump to a simple access point for television viewing (and by simple I mean “don’t make me get an HDMI cable and my laptop to try and Frankenstein this mess together just so that I can watch an episode of 30 Rock”).  In addition, this was juuuust before Blu-ray players began to integrate access to streaming services as part of their hardware.  In short: we weren’t quite your plucky early adopters who were willing to figure out how to make the wifi talk to the computer talk to the television; we were looking for something not much harder than cable: I want to turn on the television and watch what I want to watch.  And I don’t want to give any more money to the cable company.  Jerks.

The moral of this story is thus: with some research, we invested in a dandy little Roku box, and have been mighty pleased with it.  Because streaming offerings have, for the most part, expanded exponentially (hello, Criterion Collection?  Gimme.), we’re generally able to find things we enjoy, and we’ve gotten quite used to NEVER HAVING TO WATCH COMMERCIALS.  EVER.  In addition, we watch entire seasons in a go, rather than seeing an episode at a time, weekly.  I’ll say more about this later, but moving to streaming media exclusively will change you as a viewer.  ‘Nuff said.

What this switch meant, however, is that we DO NOT have access to mainstream television, not in any timely way.  Sure, the internet and Roku both offer access to news shows after they’ve aired, but the timeliness of most news coverage tempers my desire to hunt down particular shows and watch them in their entirety.  In essence: why watch Rachel Maddow or NBC Nightly News hours or days after their broadcast?  I can skim the NYTimes, or the Daily Beast, and get a sense of the trajectory of news for the day.

All of these changes in media and news consumption, effectuated by the cutting of cable, have been, for the most part, painless and fascinating, in the “self-as-lab-rat” way.  But I had forgotten the ways that certain cultural events (The Olympics was one of these, but more importantly: PRESIDENTIAL ELECTIONS) demand shared access to certain kinds of viewing for full participation.  In the 2008 election, we gathered at a friend’s house to watch the returns, eating dinner and having nervous conversations as we waited for Brian Williams to call various states.  I’m sure that we could have suckered someone we knew into sharing their television for a night, but I wasn’t feeling totally sociable.  Surely, there was a way to experience this election with others?  To check returns as they came in?

When you live on the East Coast, election results come in late.  And when you’re used to getting up at 5, bedtime comes early.  As much as I wanted to know how this was going to turn out, I also wanted to get some damn sleep.  So rather than sitting up with my laptop all night, I took my phone to bed with me.  “I’ll just check in periodically,” I thought.  “You know, just to see the electoral map at CNN.com.”

What actually emerged from that decision, however, was a frenetic experience of monitoring several apps and sites in an attempt to access breaking news, and then to verify it; to get a sense of the reaction to said news from friends and from the wider world.  And there wasn’t a clear-cut distinction between news outlets and social outlets: I received as much breaking information about local races, about leading poll numbers and districts from Facebook as I did from the CNN website.  As many have noted, Twitter itself became a crucial and almost overwhelming hash of early, rescinded, hoax, and legitimate calls, in addition to a hotbed of snark that was feeding television discourse as well as making its way on to Facebook.  (I’d see a particularly snort-worthy tweet approximately 3 minutes before someone posted it to FB.)  I got to watch how excited and anxious many of my students were about the returns, even as people swapped tips and questions about where reliable information was coming from—and how’s that for internet haters?  A consistent and running discourse, throughout the evening, about how we could verify the information coming in: first calls vs. the number of calls vs. grudging calls by networks opposed to the results were all weighed as indicators of veracity.  Fool us once, Election 2000, but not again.

On the one hand, then, I got the equivalent of a back-stage pass to a much larger community of shared reactions than I would have received with a small group of friends, parked in front of a television all night.  It was a networked amalgam of sites, for sure, but Twitter, Facebook and news websites, strung together, created both a local and national view of the election that was utterly new to me.  On the other hand, there was a thread of the conversation that I did NOT have access to: a band of discussion/snark that was reacting to the media’s reaction: Brian Williams’s discussions of the legalization of marijuana, Diane Sawyer’s demeanor, Karl Rove’s questioning of the Ohio call.  There remains an important dimension of shared media experience and critique that revolves around the dynamism and unpredictability of live television that can’t be accessed, necessarily, via the web—at least not on my phone in the dead of night with no audio.

So, Election 2012 is behind us, with its new landscape of media access and participation.  And as the interpenetration of the social and informative grows, I can only imagine the ways that the next scheduled political event will be accessed, unevenly, by viewers with a variety of devices and inputs, both singular and jury-rigged together.  To what extent are the experiences that are shared by most of us (e.g., national politics) accessed differently?  And in what ways will those differences continue to shape disparate or common experiences of the same event?

 
[NOTE: If your question about this post is: “Hey!  Didn’t you commit to #digiwrimo, you slacker?  Isn’t this your first post in 4 days?  Are you just going to ignore that?!”, then your answers are, in order: yes, yes, and obviously, no.  I mean, did I fall off the wagon, hard?  Yes.  And I spent some time feeling bad about that, in between reading student portfolios and writing up materials for my department and advising 12 students and teaching classes.  And I even thought about counting the tweets and comments and class-related posts that I’ve written since then, as it would make a significant contribution to my word count.  In the end, I decided against that, because I think the spirit of #digiwrimo, or at least my own commitment to the idea, is that it should be a certain kind of writing, the observational/analytic writing that I associate with public academic blogs, that public humanities intellectualism that I wrote about last week.  And on Nov. 30, I want a clear picture of my accomplishments in that arena, rather than the kinds of writing that I do, and do for my job, regardless of writing challenges and communities of writers who are challenging themselves.  And in that same vein, while I thought about throwing in the digital towel as of Nov. 5, I also thought that perhaps the larger purpose of Digital Writing Month is not that participants achieve 50,000 words and a daily post, but rather that they form a habit of being called to writing and expression in digital formats; that they practice a kind of mindfulness about their writing, and cultivate a desire and readiness to find experiences and events worth writing about, and to do that writing, regardless of word counts and months.  And so, in that spirit, I’ll soldier on.  So there.]

Voting via Video

With election fever in the air, I’ve been holding on to Errol Morris’s Op-Ed video “11 Excellent Reasons Not to Vote?”, waiting for a time when I could give it my full attention.  Thank you, Friday morning!

Morris’s piece interests me for two reasons that will be familiar to literary types: form and content.  As many readers will know, Morris is an award-winning director and author (who keeps a vibrant website that corrals all of his various projects).  For me, however, Morris is most notable for his documentaries: films like The Fog of War and The Thin Blue Line not only take up some of the most fascinating and complex questions that we have as a society (war, justice, ethics, belief), but do so in a visually compelling way.  [I’ll just come right out and say this now, so as to reveal my prejudices: I dearly wish that more documentary filmmakers, working both in short and long form, would pay more attention to aesthetics.  Realism can be a trap; dependence on interviews, static camera, and archival footage can be flat.  I’m looking at you, Ken Burns.]  His recent piece for the NYTimes, then, is actually labeled an “Op-Doc”: a neologism that I assume brings together the ideas of “Op-Ed” (a term that I assumed grew out of “opinion-editorial,” and certainly fulfills that role, although Wikipedia tells me that it actually comes from “opposite the editorial page,” to indicate its difference from the editorials penned by newspaper staff themselves.  Huh.  You learn something new every day.) and “Documentary.”

Before we even get out of the gate with Morris’s piece, then, we’re already talking about a new genre: what is an “Op-Doc”?  What are its components?  Are the expectations for it different than they would be for an op-ed piece?  What happens when you move the requirements for an op-ed into a video form?  And for that matter, what happens when a short documentary becomes an opinion?

I’m overly concerned about these formal questions right now because they’re the questions that my first-year composition students are wrestling with as they move into their final research project for the course.  Up until now, they’ve crafted essays in print and moved them into digital text (by uploading them into an online portfolio); they’ve also composed a remix video, and thus worked with visual and audio sources (with a bit of text sprinkled throughout).  But as we move toward the end of the semester, I’ve asked them to think about how to use the best of both formats: digital text, along with visual and audio sources, to help their audience to understand a complex question and their attempts to answer it with their original research.  Piece of cake, right?  (If you’re interested, you can follow their good-natured discussion about this and other class issues on Twitter at #DEW1: a hashtag that grows out of the name for the class—Digital Expository Writing.)
On Monday, I think I’ll ask them to look at the ways that Morris does just this in his Op-Doc: his question, as you might note from the title of the piece, is manifold:

It made me wonder: What’s stopping us? Do we have reasons not to vote? How can we hear so much about the election, and not participate? If hope isn’t doing it, isn’t the fear of the other guy winning enough to brave the roads, the long lines?

To answer that question, he interviewed a series of young people who actually DID intend to vote (a characteristic that makes them unusual by national standards) and asked them to engage his questions before explaining their own motivations.  I love this approach: it sets his subjects up to think beyond themselves from the very beginning, which may well help them to imagine their initial motivations differently.  But before I jump fully into the content of Morris’s piece, I want to finish up this assessment of the form: how does this position his audience?  If you are a reader first, then you know what’s up with the video—he reveals his methodology in the fourth paragraph.  You would know, then, by the time you double back to watch (assuming that you do), that the interviewees don’t endorse the “11 reasons not to vote” that they’re articulating.  But if you’re a viewer first and a reader second, you’d be at least a minute and 30 seconds into the video before you began to see the speakers questioning the arguments that they provide against voting.  And perhaps this is at least part of the work that the video achieves: if your assumption is that these are young people who are apathetic/confused/slackers, then you need to take a closer look at them.  It’s a clever, and subtle, rhetorical move on the part of the filmmaker, who might be calling out the readers/viewers of the Times on their willingness to castigate a generation for their unfathomable lack of civic pride.

On the question of content, which has already managed to slide into the conversation here, Morris quickly runs through, and largely dispels, I think, some of the more popular reasons for not voting (e.g., one vote won’t matter; confusion and complexity; no candidate is good; “it’s just a way to make yourself happy”; “awkward family dinners”), before pairing some very serious reasons to vote (e.g., Florida in 2000; the legacy of the Voting Rights Act of 1965) with some less serious ones (e.g., spite voting).  Along with some chipper music and Morris’s own good-natured hectoring from behind the camera (“How much would you sell your vote for?”), it makes for an incitement to vote that is free of the guilt-inducing tone of some “get out the vote” campaigns.

As a side note, however, I’d like to point out one of the themes that emerges from the interviews.  At the end of the written portion of Morris’s Op-Doc, he says this: “Voting is a leap of faith. Calling it a civic duty is not enough. Either you believe that the system is both changeable and worth changing, or you don’t — and most new voters are not convinced.”  Very probably true; and as someone who is particularly interested in the ways that language works, I’d venture a guess that “civic duty” is not a term that lands with very many young people nowadays.  It barely lands with me, and I’m almost 20 years beyond many of the people interviewed in this piece.

The theme that the interviewees DO pick up, however, is the dismissal of the individual and the pleasures of joining a group.  The video begins with the argument against voting that hinges on the claim that a single vote couldn’t possibly matter; five minutes in, a participant reminds us that “it’s not about you, it’s about all of us…Get off Twitter, stop talking to your friends about how great you are, go down to vote and throw your lot into the sea with everyone else.”  The next person talks about the “on the other side” experience of having voted, a kind of shared practice that should inspire people to go and get a drink.  We later see a very pregnant mother whose vote is now “twice as important,” along with a newly-naturalized citizen who will vote for the first time.  It’s a bit of a vexed message (what’s up with the Twitter hate?), and yet it seems to suggest that dedicated voters in a demographic notorious for NOT voting imagine themselves and their motivations as being distinctly communal; they’re in a group who vote right now, in this election, and/or they’re in a group that prizes voting in a historical trajectory.  Everyone else is in the sea, or getting a drink after having voted, or voting in honor of those who couldn’t vote before them.  This is what we all do; you should do it too.  Is it going too far to say that individualism, here, is shunted aside for the priorities and pleasures of the generation as a whole?  Where does the rationale for voting as a mode of belonging fit in the rhetoric of civics, of responsibility, and in the description of the millennial generation(s) as individualistic navel-gazers?  If Morris’s interviewees are representative of young people who DO vote, how do we use these insights to capture and incite more of them to “throw their lot into the sea”?

#Digiwrimo; or, November is the Cruelest Month for Public Intellectualism

I don’t know what’s more sad: the image of thousands (if not millions) of abandoned blogs, lying by the side of the digital highway, carrion for web vultures; or the fact that, after more than a year, I myself may have forgotten the finer points of constructing a blog post.
Either way, sometime at the end of October, I was reminded of the venerable tradition of Nanowrimo, the increasingly popular use of the month of November to join a community of writers in the pursuit of 50,000 words–a draft of a novel–in 30 days.  I’m no would-be novelist, for sure, but I couldn’t help but admire (and envy, a bit) the challenge and sense of camaraderie that I imagine has to develop over the course of Nanowrimo.  I imagine that it’s akin to the moment in a triathlon when you find yourself chatting with the person running next to you.  You may be strangers, but you have everything in common for this measure of time.  But what is a non-novelist to do with November, I ask you?  Thankfully, the good people at Marylhurst University in Portland have come up with an answer for the rest of us: Digiwrimo, a month of digital writing—in all of its manifestations (see “What is Digital Writing?” for more details).  November=50,000 words, novel or no; and by no, I think I mean no excuses, and no reason not to address the sad of the abandoned blog, the loss of blogging skills.  All right, Digiwrimo.  Let’s do this thing.

Reason #1:

When I stop to consider why this kind of challenge is worth the commitment, I don’t have to dig too deeply.  First and foremost, I should note that I’ve been requiring my students to keep class blogs for almost 10 years.  It’s a practice that I believe promotes a sustained engagement with their coursework, asks them to think of their writing and thinking as public acts, and knits them into a community of thinkers who are considering similar questions and approaches to texts.  Over time, I’ve come to applaud the students who develop their blogging and commenting as a sustained and dependable practice.  “It’s hard to be consistent, and consistently thoughtful,” I recently wrote on a student’s midterm.  And it is.  Life for students, for professors, for parents, for people is complicated; it’s the easiest thing in the world to put off the complex cognitive work of thinking and writing.  But the payoff can be wonderful, and there is a set of pleasures that develop both from the practice of writing as well as from seeing an ever-growing archive of your work over time.  What patterns emerge?  What persistent concepts, questions, ideas appear across a number of posts?  What do these reveal about your own predilections, and how do you intend to follow those?  Fine questions for my students, but for myself as well.  No one wants to be the professor who embodies “do as I say, not as I do.”

Reason #2:

A year ago, I put together a list of links for a colleague who was sorting through the complicated questions that surround contemporary scholarship.  What does it look like in the digital age?  What counts, and what doesn’t?  If we are reading, writing, and thinking differently with and through the internet, then how do scholars and intellectuals begin to identify the practices that matter to them, and consider the ways that these practices can occur in new forms?  The argument for the scholarly use of blogs has been building for some time; it may have reached its fever pitch in and around 2011.  A cavalcade of prominent intellectuals in a variety of fields had been blogging for years by that point (any list of these will be perspectival and incomplete, but I’ll just throw out a few here.  You have The Leiter Report in philosophy; Pharyngula in the sciences; Kristin Thompson and David Bordwell in film; Henry Jenkins in media studies; Michael Bérubé’s now sadly defunct blog, which covered cultural studies and politics).  These, of course, are just the blogs by individuals, and leave out the impressive blog collectives.

Out of this history of practice, then, came a debate (now much rehearsed and rehashed) about the place and value of these blogs.  One flashpoint in the “conversation” occurred during the 2010 MLA convention, when then-graduate student/adjunct professor Brian Croxall was unable to attend the conference because of financial constraints and instead posted his paper on his website.  Dave Parry’s post sums up the conundrum that resulted:

Let’s be honest, at any given session you are lucky if you get over 50 people, assuming the panel at which the paper was read was well attended maybe 100 people actually heard the paper given. But, the real influence of Brian’s paper can’t be measured this way. The real influence should be measured by how many people read his paper, who didn’t attend the MLA. According to Brian, views to his blog jumped 200-300% in the two days following his post; even being conservative one could guess that over 2000 people performed more than a cursory glance at his paper (the numbers here are fuzzy and hard to track but I certainly think this is in the neighborhood). And Brian tells me that in total since the convention he is probably close to 5,000 views. 5000 people, that is half the size of the convention.

And, so if you asked all academics across the US who were following the MLA (reading The Chronicle, following academic websites and blogs) what the most influential story out of MLA was I think Brian’s would have topped the list, easily. Most academics would perform serious acts of defilement to get a readership in the thousands and Brian got it overnight.

Or, not really. . .Brian built that readership over the last three years.

Parry’s take on the brouhaha that emerged is a useful one; it identifies the kinds of markers that scholars use to gauge the value of their work (here, translated into eyeballs and influence).  But Parry goes on to note the dismissal of Croxall by those who were devoted to a strict view of the historical means by which scholars captured eyeballs and built influence: presence at conferences, publications in peer-reviewed journals, etc.  Parry refutes this model, citing the kind of careful work that Croxall had done up until this point, utilizing social media to forward his scholarly and pedagogical interests.  He ends his piece by linking this kind of work—the mobilization of a number of digital media forms and their attendant functions to circulate research—to “public intellectualism.”

I now ask my graduate students to read Parry’s blog post before they create their own blogs and start tweeting for our class.  It’s the narrative, I think, that brings home to them the way that the world of scholarship is changing, and the ways that they need to consider how their own work might circulate both in long-standing print formats and also online.  In addition, I hope that it encourages them to think carefully about how they want to straddle that divide.  For me, however, the argument about social media as public intellectualism is compelling, particularly at the moment when colleges and universities are imperiled by their rising costs, shrinking state and federal budgets, and perhaps most troublingly, their inability to make the case that what they offer is worthwhile.  Better scholars than I are making the argument that the self-same media that some view as chipping away at the foundations of education (e.g., social media will be the death of reading and bring on the zombie apocalypse, etc.) may actually be the grounds for re-invigorating it.  Dan Cohen, director of the Roy Rosenzweig Center for History and New Media at George Mason University, is such a believer that he’s posted a draft of his book chapter dedicated to this argument on his blog; meanwhile, Kathleen Fitzpatrick, the Director of Scholarly Communication at the MLA, addresses the complexities of academic publishing (in both print and digital forms) in her most recent book, Planned Obsolescence: Publishing, Technology, and the Future of the Academy.

It goes without saying, I should hope, that both Cohen and Fitzpatrick are consistent bloggers, and by Parry’s definition, public intellectuals.

Quite frankly, I’ve drunk the Kool-Aid on this one; I’m convinced by the arguments that, while academic publishing in journals remains an important way for experts in academic fields to talk to each other, we also have a responsibility to make our interests and passions and discoveries known to other audiences, and to model forms of engagement with the objects that we love the most.  And for that kind of work, nothing beats a blog.  (I’ll save my thoughts about Twitter for another day.)

So, thank you, Digiwrimo, for reminding me why I believe in digital writing, and why I need to make room for it, to develop and practice the same habits that I ask my students to develop every semester.  Let November begin.  (It’s going to be a long month.)

President’s Day 2011: Technology and the Teaching Learning Process

What better occasion to return to the blog than a spring semester President’s Day devoted to “Technology and the Teaching Learning Process”?  Below are a few links that I’ll discuss bright and early tomorrow in the Lally Forum with my colleague Michael Brannigan.

The New York Times on Digital Humanities: “Digital Keys for Unlocking the Humanities’ Riches”

The Pew Research Center’s Internet and American Life Project: “Teens, Video Games and Civics”

The It Gets Better Project on YouTube, and on its own site

And while I won’t get a chance to talk about these, they’re also great examples of smart people thinking in sophisticated ways about the learning potential of new media technologies:

USC Annenberg School for Communication and Journalism: Project New Media Literacies

HASTAC: Humanities, Arts, Science and Technology Advanced Collaboratory

MacArthur Foundation Spotlight: Digital Media and Learning

Twitter-pated

I know people who simply lurve Twitter.  It’s the new cool thing!  It’s a microblog!  Follow your friends!  It’s internet poetry!  I wanted to get it, really, but it wasn’t quite working for me.  What would be the circumstance wherein I’d want to read such short, of-the-minute posts?  I like the lengthy, meandering blog post, after all.  Preferably with pictures!!

But then (and you knew this was coming, right?), I happened upon Slate’s Olympic coverage via Twitter.  You would think that there’s nothing else to be said about the Olympics right now.  I love me some televised competitive swimming, but this is just getting ridiculous.  The whole world knows Michael Phelps’ torso measurement, as well as what he has for breakfast—because it’s on CNN.  Fashion magazines are covering beach volleyball; Perez Hilton is tracking medals and opening ceremony cover-ups, for crying out loud.  In this climate of never-ending sports-cum-nationalism information flow, what kind of coverage could we possibly be missing?

Enter the fabulous one-liner.  A few choice quotes:

Slate’s coverage of Dara Torres informing the judges to wait for the Swedish swimmer to change her torn swimsuit, an event heralded on NBC as the apotheosis of sports ethics, merits this tweet: “Torres pointing out the Swede’s torn swimsuit is the greatest act of Olympic sportsmanship since Lochte gave Phelps half his sandwich.”

On the controversial win for Michael Phelps, wherein he touched the wall 1/100 of a second before the Serbian swimmer, some cry conspiracy; Slate’s tweet reads: “No conspiracy, Phelps just has the ability to alter space-time. That’s what he’s doing with that dolphin kick.”

Suddenly, Twitter makes perfect sense to me.  It’s the transcendental medium for the one-liner, and I prefer the ones that are sarcastic shots over the bow, capable of puncturing the balloon of teary-eyed national sentiment and/or athletic fetishism.  In feed form, the tweets are reminiscent of those magical conversations with your smartest friends, whose reactions to absurd events reduce you to tears of laughter.

The Twitter folk position their application as one in which users answer the question “What are you doing?”  I can’t help but wonder whether a better use might be to answer the question “What are you seeing?”

Read Only

The NY Times today released the first part of a series dedicated to investigating “how the Internet and other technological and social forces are changing the way people read.” Hoo boy. Let the games begin.

On first read, I’d say that author Motoko Rich strives for an admirable balance between two factions dedicated to defending their particular reading practices. For every study of declining test scores and reading for pleasure, she cites online readers’ descriptions of their own practices or the work of new literacy scholars.

From this format, we can see a surprising tone that both boosters and naysayers of digital reading share: a relatively consistent dismissal of the opposing format. For instance, Rich cites Dana Gioia of the NEA: “Whatever the benefits of newer electronic media they provide no measurable substitute for the intellectual and personal development initiated and sustained by frequent reading.” At the same time, we have fluent digital readers who have this to say about print books: “The Web is more about a conversation. Books are more one-way.”

The article carefully cites a number of material factors to consider as we weigh a shift in reading habits: the socioeconomic benefits of print literacy, its deep integration into school curricula, the challenges it presents for students with learning differences. But these considerations are buried deep on page 3 of the article, in a way that suggests they’re simply fodder for the bigger issue: the deep psychological investment in the way that reading inflects our daily lives, and the fact that no one is willing to be told that their preferred method is lacking in some way.

I find myself perched uncomfortably between these two ways of reading and the assumptions of superiority they promulgate. When Gioia says: “What we are losing in this country and presumably around the world is the sustained, focused, linear attention developed by reading,” a portion of my heart goes pitter-pat. Does reading a novel require that sustained attention? Obviously. And I’m willing to believe (until a neuroscientist tells me different) that there’s a cognitive benefit to it, as well as a pleasure to be taken in it. But I’m also not willing to believe that all digital reading is the short-attention-span theater that Gioia assumes and of which Rich provides examples. When Nadia is reading fan fiction stories that run “45 web pages,” we’re talking about focused attention, and we’d have to study Nadia’s reading practices to convince ourselves that it wasn’t sustained or linear. In addition, the statement ignores the sociality of reading a number of digital sources on a similar topic.

On the other side of the fence (here I am, perched on a cliché), I’m taken aback by the digital readers’ characterizations of books. At least two of the young people interviewed take issue with books’ unitary nature–either as a fixed plot structure or a singularity of voice. This also seems to be a mischaracterization of what print readers love about books, wherein the process of interpretation makes a book an archive of alternatives. [This assumes, of course, that you include interpretation in your definition of reading, I suppose.]

I’m anxious to see how others perceive the coverage in the Times. For now, however, I’m struck by the gulf between readers, and by how little coverage (and study?) there is of how omnivorous readers characterize the pleasures, benefits, and drawbacks of their reading practices across media.