Tuesday, November 9, 2010

RIP Jeeves: Convergence Claims Latest Victim

In Memoriam
Jeeves
1996 - 2010
I'd like to begin this post by requesting a moment of silence in observance of the passing of Ask.com, formerly known as Ask Jeeves.

Today marks the tragic, yet some would say timely, passing of Ask.com, that venerable and erudite search engine that intelligently answered even the most inane and archaic of inquiries. Jeeves came of age in the dot-com boom. Striding with elegance and panache onto a nubile Web 1.0 scene, Jeeves brought an air of much needed decorum to a veritable wild, wild west of talking sock puppets and Napster file sharing. Prior to the global collapse of the "old" internet, Jeeves served as a sage advisor and trusted guide on the information superhighway.

This week's assignment has us exploring the concepts of "Convergence" and "Remix." One of the definitions of convergence offered by Jenkins in "Convergence Culture" is: an old concept taking on new meanings. (p. 6) I think the concept of the search engine fits that description quite nicely. In the beginning, the search engine was the "Oracle" one consulted to research, well, just about anything. Jeeves ushered in an era of digital knowledge seeking that replaced analog methods of finding information. Gone were the days of poring through the pages of dusty encyclopedias and paper archives. Unfortunately, in the beginning the information superhighway was more often than not the misinformation superhighway. I believe first-generation search engines like Jeeves, Dogpile, et al. may have undeservedly received blame when the links they served up were less accurate than they should have been. However, one of the things I find most interesting is how the search engine tool itself evolved within its own generation.

Again, in the beginning the search engine was a tool to at least begin one's quest for information. That information was often in the vein of a concept, idea, or subject with which the searcher was unfamiliar. However, it wasn't long into Jeeves' tragically short life that he took on an unexpected role in addition to his info-gathering duties. Jeeves, and indeed search engines in general, became a sort of global phone book. As the internet evolved, died, was reborn, and evolved some more, more and more businesses co-opted it - successfully this time - to launch online storefronts. Hence it became Jeeves' job to direct coffee drinkers to the nearest Starbucks as well as to seek out information on the mating habits of wombats. Witness the convergence of old media to new media, and of new media to newer media.

As in the course of any career, Jeeves soon found his experience and wisdom challenged by upstart whippersnappers like Google. Next to Google's search-term algorithms and clean design, Sir Jeeves began to show his age. Turns out natural language queries aren't where it's at. Why type in an entire question when one can just toss out a few key words? Suddenly the sage advisor was starting to resemble a fusty old codger.
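(For the curious, here is a rough, purely hypothetical sketch of that shift - the kind of stopword stripping a keyword-oriented engine might do. The word list and function are my own inventions for illustration, not any real engine's algorithm.)

```python
# Toy illustration (not any real search engine's code) of why full-sentence
# questions fell out of fashion: most of the question is filler that a
# keyword engine throws away anyway.

STOPWORDS = {"what", "are", "is", "the", "of", "a", "an", "do", "does",
             "how", "in", "to", "where", "can", "i", "find"}

def to_keywords(question: str) -> list[str]:
    """Reduce a natural-language question to bare search terms."""
    words = question.lower().rstrip("?").split()
    return [w for w in words if w not in STOPWORDS]

print(to_keywords("What are the mating habits of wombats?"))
# ['mating', 'habits', 'wombats']
```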

So today we mourn the passing of a dear friend whose links and pages helped transform the way the world sought information. He leaves in his wake a host of innovators and maybe a few imitators. So it's only fitting that we all take a moment to reflect on the nature of digital search while we bid a fond adieu to one of its pioneers. Well, goodbye, old chap. Say hello to Encyclopedia Britannica for me.

Note: Early reports of Bing and Google attending Jeeves' memorial solely for the purpose of gleefully dancing on his grave are completely unfounded.

Tuesday, November 2, 2010

Doctor, I Think I Fractured My Attention Span!

It took me over four hours to complete the Linda Stone video. That's right, four hours. Now you may be asking yourself how it took an otherwise intelligent person more than four hours to complete a video that is exactly twenty-one minutes and fifty-two seconds long. Well, I'm kinda guilty of exactly what Stone describes. I think I have Continuous Partial Attention. Although when one says it like that, it sort of sounds like a diagnosis of a life-threatening illness. What I mean to say is that it took me four hours to complete a twenty-two minute video because I paused it a gazillion times. Well, maybe not an actual gazillion, but several times anyway. Several times.

Now, one may be wondering why it might be necessary to pause a twenty-two minute video more than once. An excellent question indeed. Well, while watching the video I found myself surfing the web for interesting blog post ideas, monitoring CNN for nationwide election results, jotting notes for said blog posts, tweeting interesting and funny anecdotes from my day, checking my email for client inquiries, and generally fretting about my final class project. That's rather a lot to do, huh? I think Stone is certainly on to something in her analysis of the contemporary definition of multi-tasking as compared with the traditional definition. All of the tasks that I described above require cognition. None of them are automated. Hence at some point it becomes necessary to pause one or two just to have the necessary brain capacity to focus, still partially, on the others. And so, four hours after I started the video, I finally finished it.

I've seen several studies recently that conclude that those who hyper multi-task actually decrease their efficiency at successfully completing the tasks they undertake. This NPR article details a French study which concluded that the human brain is ultimately set up to do no more than two tasks at a time. That's right, our frontal lobes max out at two. When the participants in the study were given a third task to complete, researchers found that their accuracy suffered greatly compared with their initial performance when only two tasks had been assigned. So what does this mean for time-starved, harried multi-taskers like me? Well, if I had time to write by hand, the writing would be on the wall: slow down and focus. As Stone so eloquently illustrates in her examination of our social evolution from a data economy to a wisdom economy, society has sort of come full circle. The information overload that currently faces today's workers and individuals begs for at least a partial return to simpler times. Value added today means time added, as in: how will this particular good or service make my life easier and give me more time? OK, so maybe instead of hitting pause on the DVR I should force myself to hit the power button. Lesson learned. In fact, I will no longer allow myself to hit any pause buttons anywhere, any more, for any reason. Hmm, I think that one may be a little harder. I'll keep working on it, anyway. Well, that's all for now, I must rush off as I have a million other things I need to get...

Just kidding! I am actually done with this particular post.

Tuesday, October 26, 2010

Who Moved My Congress?

This week's readings examine both sides of the cyber utopia debate. In "Digital Maoism: The Hazards of the New Online Collectivism," author Jaron Lanier laments the advent of what he calls "the hive mind." The hive mind, by Lanier's definition, is nothing more than old and unsuccessful ideas of collectivism re-packaged for the digital age. Lanier views the trend of online content aggregation and meta-aggregation as the death of critical thinking, informative journalism, and public discourse. I think he may be on to something.

Particularly insightful in Lanier's article was his observation of collectivism in pop culture. I do think it highly ironic that in some 10 seasons of broadcast, American Idol has failed to produce one...well, American idol. I suppose some would disagree and cite the popularity of Kelly Clarkson. But I doubt Miss Clarkson will sell out Madison Square Garden anytime soon, and I certainly wouldn't call her contributions to the musical lexicon monumental, however catchy they may be. Lanier is right: artists like John Lennon, Duke Ellington, Jimi Hendrix, Joni Mitchell, Grandmaster Flash, and Bob Dylan would never win a competition like AI because they are all innovators, and the very mainstreamy-ness of AI places zero value on innovation. In attempting to develop an "everyman's artist" complete with a built-in fan base (i.e., the people who vote for the contestants), the producers have produced a package that no one actually wants to buy. Witness the hive mind at work and its particular failing: the hive mind isn't necessarily the brightest one.

On the other side of the argument we have Pierre Levy and his work "Collective Intelligence: Mankind's Emerging World in Cyberspace." Mr. Levy fairly swoons with admiration for the possibilities afforded mankind by collective political activity in cyberspace. In the utopia that he envisions, direct democracy becomes a distinct reality, with literally every man, woman, and child having the ability to represent themselves politically by voting online. Gone are the antiquated ideas of representative democracy and political parties. Congress is no longer even necessary in Levy's vision. (I can think of 534 individuals in Washington who might take particular issue with Levy's vision.) Directly opposite Lanier, Levy seems particularly taken with the notion of collectivity and its implications for personal freedom. So who is right?

While both gentlemen have valid points, I think I'll have to give this one to Mr. Lanier, if only because I think the design of Levy's Intelligent Cities is rather utopian in scope. Admittedly, I am only about halfway through the reading at this point, so I apologize if my critique is a little premature. (However, I reserve the right to revise my views in the near future.) Levy envisions a political system in which everyone participates with equal zeal and all involved make intelligent, informed choices. Sounds pretty wonderful, right? However, where Mr. Levy fails is that he doesn't allow for the personal apathy, ignorance, shortsightedness, and biases that Lanier is all too aware of. "...an individual best achieves optimal stupidity on those rare occasions when one is both given substantial powers and insulated from the results of his or her actions." (Lanier 9) Can we say Bush administration? How about the Birther Movement? I do realize how cynical my analysis sounds, but that cynicism points to the particular failing of Levy's collective intelligence. His analysis is flawed in that it relies on an electorate that is completely informed and personally invested in political outcomes. I think he may be asking a little too much in an age of severely fragmented attention spans.

Tuesday, October 19, 2010

Open-Source: Public Enemy #1

I was searching Mashable recently, looking for interesting story ideas, when I came across this tasty little nugget: "Lobby Group Says Open-Source Threatens Capitalism." The piece details the efforts of the International Intellectual Property Alliance, which we read about in "Information Feudalism" and which also broadly represents the MPAA, among others, to curtail the popularity of open-source software globally. The group recently requested that developed and developing nations including Brazil, India, and Indonesia be placed on an international watchlist, which "effectively puts those countries on a shortlist of governments considered 'enemies of capitalism' who aren't doing enough to protect intellectual property abroad." A little harsh, don't you think?

The major bone of contention for the alliance is the tendency of these nations' governments to use, or advocate the use of, open-source software in their governmental departments. What could be no more than a simple cost-cutting measure, in the wake of the worst global economic recession since the Great Depression, has effectively been labeled...wait for it...communism. The fifties called; they want their Red Scare back.

As we are exploring the issues surrounding intellectual property this week, I thought this article a fitting example of the "global expansion of intellectual property systems" (p. 5) that Drahos and Braithwaite detail in Information Feudalism. What I find particularly ironic is that in sanctioning these nations the Alliance is actually behaving like a communist dictatorship, as it is essentially stifling competition. The race toward innovation spurred by the open-source movement benefits both producers and consumers. Isn't market competition - which is precisely what open-source software represents in the intellectual property battle - the very cornerstone of capitalism? As such, shouldn't it be encouraged?

The dubious victory of literally killing market competition is ultimately a loss for all concerned, especially when the mere suggestion of cost-effective alternatives can be equated with subverting the entire U.S. economic system. Unfortunately, as long as profits speak louder than innovation and consumer satisfaction, powerful lobbying groups like the IIPA will reign like medieval feudal lords - and we get to be the lowly serfs. Well, I must go now. I have to delete all my bookmarks for free online photo editing tools. I would hate to be labeled a terrorist just because I'm too poor to afford Photoshop.

[img credit: David Erickson]

Tuesday, October 12, 2010

On Networks and Sovereigns

So I am reading this week's assignment, "The Exploit: A Theory of Networks," and the following passage jumps out at me:

"By contrast contemporary political thought often defines sovereignty not as the power to command or execute a law but as the power to claim exceptions to the rule.  The sovereign ruler occupies a paradoxical position, at once within the law (in that the ruler forms part of the body politic) and yet outside the law (in that the sovereign can decide when the law no longer applies). Sovereignty is, then, not power or force but the ability to decide - in particular the ability to decide what constitutes an exceptional situation, one that calls for a "state of exception" and a suspension of the law. (p. 38)

Notwithstanding the usual suspects, historical and contemporary political leaders, I immediately thought of celebrities when I read this. Yes, I applied network theory to Hollywood. I'll cover the political leaders shortly.

One of the most salient Hollywood examples that comes to mind is the recent case of Roman Polanski. Here we have the sovereign, Mr. Polanski, fighting extradition on some very serious charges. The basis of his defense is pretty much "I'm famous and all that happened a long time ago. Therefore the law shouldn't apply to me." Around this sovereign has arisen a network of supporters - those 700 or so (depending on which reports you read) actors, actresses, directors, producers, etc. who signed a petition demanding his immediate release. The crux of their argument: "He's enormously talented, that really bad thing happened a long time ago, and further we don't like the way he was apprehended. Therefore the law should make an exception." I will withhold judgment as to the validity (or lack thereof) of these arguments. OK, maybe I won't.

Now this network, much like the networks Galloway and Thacker present, isn't organized or governed by one central figure. Not even Polanski. The nodes - the actors, et al. - are a particularly heterogeneous bunch whose connections to each other outside of the petition may or may not exist. I suppose you could consider their connection to Hollywood or the film industry a constant, but to each other, not so much. Further, like the Galloway/Thacker model, the nodes may or may not even be directly connected to the sovereign.
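(If it helps to picture it, here is a quick, hypothetical sketch of that structure - the node names are invented stand-ins, not real people - showing nodes tied to a shared artifact rather than to one another or to the sovereign.)

```python
# A back-of-the-envelope model of the network described above: heterogeneous
# nodes connected through a shared artifact (the petition), with few or no
# direct links to each other or to the sovereign. All names are placeholders.

petition_network = {
    "petition":  {"signer_A", "signer_B", "signer_C"},  # the shared artifact
    "signer_A":  {"petition"},              # connected only through the petition
    "signer_B":  {"petition", "signer_C"},  # some signers happen to know each other
    "signer_C":  {"petition", "signer_B"},
    "sovereign": set(),                     # not necessarily linked to anyone directly
}

def directly_connected(a: str, b: str) -> bool:
    """True if two nodes share a direct edge; the point is how often this is False."""
    return b in petition_network.get(a, set())

print(directly_connected("signer_A", "sovereign"))  # False
print(directly_connected("signer_B", "signer_C"))   # True
```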

I think it is the idea of the "sovereign" that I find particularly fascinating. Examples of sovereign-like figures abound in both contemporary and historical culture. Another literal sovereign that comes to mind is King Henry VIII. As an actual sovereign, King Henry attempted to break a network - the Roman Catholic Church - so that he could do what was previously un-doable: divorce his wife and marry his mistress. When the King found himself subject to the rules of the church, and therefore unable to legitimately divorce his wife, he did what any good sovereign would do. He invalidated the church. From where did he derive the power for such an invalidation? He was king, and as such should be above the law, even the law of the church. Here we see network theory in action once again, though this may be what Galloway and Thacker refer to as a "disciplinary" society (p. 36), or the beginning of the breakdown of one.

So how does network theory apply to new media? I think the most obvious candidate for examination is the Internet. It functions cooperatively yet has no single overarching, governing body - though recently net neutrality opponents have attempted to grab control. Its myriad nodes - either net users or the actual web of disparate servers - aren't necessarily connected to each other in any collective sort of organization. What I find particularly interesting is the sphere of influence exerted by particularly powerful nodes - human nodes, in this instance. I wonder if the net will produce digital sovereigns that mimic or rival the sovereigns of yore? Perhaps it already has.

Tuesday, October 5, 2010

What Is and Isn't New Media (Includes a Case Study Overview)

This week we were tasked with reading Lev Manovich's "The Language of New Media." In the text Manovich explores both the cultural and historical symbols synonymous with new media objects and traces how they came to be such. For instance, why is it that the home screen on one's PC is called a "desktop"? Or how did it come to be that our work output takes the form and hierarchy of "files"? Well, in a nutshell, this symbology may have more to do with the original function of the computer as a work tool than with the media playground that is the PC of today. However, let's take a brief look at a media object that I know I can't live without and see if it is actually new media according to Manovich - the iPod, or iTunes as it were.

So, beginning on page 27, Manovich enumerates exactly what the properties of new media objects are. They are:

1. Numerical Representation: All new media objects, whether created from scratch on computers or converted from analog media sources, are composed of digital code; they are numerical representations. (p. 27)

2. Modularity: Media elements, be they images, sounds, shapes, or behaviors, are represented as collections of discrete samples (pixels, polygons, voxels, characters, scripts). (p. 30) Because all elements are stored independently, they can be modified at any time without having to change the... (p. 30)

3. Automation: The numerical coding of media (principle 1) and the modular structure of a media object (principle 2) allow for the automation of many operations involved in media creation, manipulation, and access. Thus human intentionality can be removed from the creative process, at least in part. (p. 32)

4. Variability: A new media object is not something fixed once and for all, but something that can exist in different, potentially infinite versions. (p. 36)

5. Transcoding: Similarly, new media in general can be thought of as consisting of two distinct layers - the "cultural layer" and the "computer layer." (p. 46) Because new media is created on computers, distributed via computers, and stored and archived on computers, the logic of a computer can be expected to significantly influence the traditional cultural logic of media; that is, we may expect that the computer layer will affect the cultural layer. (p. 46)

Now, you may be wondering why certain portions of these quotes have been placed in bold type. Well, there is a method to my madness. I have combed my iTunes library and put together an EMAC 6300 playlist consisting of four very distinct songs. (Also, if you're interested, it can be downloaded on iTunes as an iMix for $3.96.) The tracks are all quite different in their styles, artists, and genres. It is my contention that, per Manovich's specifications, particularly the sections I have highlighted in bold, only one of these tracks actually fits the definition of "new media."

That's right, just one. My methodology was simple. First of all, I examined the tracks as content and as form separately. If we examine them for form alone, then all of the songs would qualify as new media simply by virtue of their interface, i.e., digital tracks in a digital playlist. However, if we examine the songs as content, paying specific attention to production, then our outcome is quite different. When I applied Manovich's principles, the tracks that failed the new media test did so on the criteria presented in bold type.
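(To make the method concrete, here is a deliberately simplified, hypothetical sketch of that test. The property names are my paraphrase of Manovich's principles, and the example track and its values are invented for illustration - they are not a claim about any song on the actual playlist.)

```python
# A toy sketch of the methodology described above: judge a track by how it
# was produced (content), not by the digital file it ships as (form).

MANOVICH_PRINCIPLES = [
    "numerical_representation",  # composed of digital code at the production stage
    "modularity",                # discrete, independently editable elements
    "automation",                # some creation/manipulation handled by the machine
    "variability",               # exists in potentially many versions
    "transcoding",               # the computer layer shapes the cultural layer
]

def is_new_media_as_content(track_properties: dict[str, bool]) -> bool:
    """Return True only if the track satisfies every principle as content."""
    return all(track_properties.get(p, False) for p in MANOVICH_PRINCIPLES)

# Hypothetical example: a straight acoustic recording that was merely digitized.
acoustic_take = {
    "numerical_representation": True,   # it ends up as digital code either way
    "modularity": False,                # performed in one piece, not assembled from parts
    "automation": False,
    "variability": False,
    "transcoding": True,
}

print(is_new_media_as_content(acoustic_take))  # False: digital in form, not in content
```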

Once iTunes releases my list publicly I will include a link for preview. However, if you want to preview the tracks individually they are:

Symphony No. 3 in E-flat Major - Artist: Nicolaus Esterhazy Sinfonia, composed by Beethoven
One Step Beyond -  Artist: Karsh Kale
Stay By My Side (Acoustic) - Artist: Mishka
Blue In Green - Artist: Miles Davis

In the meantime, feel free to ruminate on the essence of musical composition in the digital age.

UPDATE: Here is a link to the iTunes iMix. EMAC 6300 New Media Objects Discussion Playlist

2nd UPDATE: Here is a link to my Prezi. Music and New Media: A Critical Look at Manovich's Criteria

Tuesday, September 28, 2010

Mona Lisa's Smile

One of the things I found fascinating about one of this week's readings, Remediation by Bolter and Grusin, is that though it is much newer than some of our other texts, it seemed - to me at least - singularly outdated. The analysis and examination of virtual reality, a term I don't believe I have actually heard since 1997, dated the text and made me question its relevance in today's socially networked world. Of course the themes that Bolter and Grusin explore are evergreen in their nature. The ongoing cycle of re-interpreting, i.e. remediating, older media onto new forms is timeless and especially relevant in the digital space. However, I found their particular vehicle of analysis distracting. Interestingly, I found the work of McLuhan, written some 50 years ago, prior even to the introduction of the personal computer, infinitely more relevant to discussions of emerging media. How interesting.


The purpose of this post, however, is to explore the goal of remediation. If remediation could be said to have an overarching "goal," it would be to improve upon preceding media in some tangible way. Of course this is a grossly simplified definition that assumes a collective participation on the part of technological innovators worldwide. However, generally, when new tech toys are sold to the public, marketing claims never fail to tout how they improve upon prior technology. The smartphone improves upon the cell phone, which improved upon the wired phone, which improved upon the telegraph, and so on. Naturally, the definition of "improvement" is highly subjective. However, remediation, in its attempt to transcribe older media onto new forms, often amounts to not much more than an attempt to make the older media better in some way. Consider for a moment the Mona Lisa.


I'm pretty sure this isn't what came to mind at the mention of Da Vinci's seminal work. The talking Mona Lisa is an animated, interactive piece currently on display in Singapore's Alive Gallery. The goal of the Alive Gallery is literally to bring historical works of art to life. Viewers can actually interact with the paintings and ask them questions, and the paintings will respond. Here are a few of the questions one can ask Mona Lisa:


Why don't you have any eyebrows?
Why is your smile so popular?
Where were you painted?
What is in the background?


All inquiries that, if posed by anyone other than an eight-year-old talking to an animated object, might be considered quite rude. My question is: is this an improvement or an abomination? I suppose it depends on whom you ask. This particular gallery came up for discussion in a previous semester, with about half of the class thinking it quite cool and the other half ready to tar and feather the gallery owners. On the one hand, I can see the appeal of making high art accessible to an audience it has never reached before. However, I do have serious concerns as to that audience's ability to appreciate the work if the work literally has to talk back to them first.
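(Under the hood, of course, this kind of "conversation" is presumably little more than a scripted lookup. Here is a purely hypothetical sketch - the answers below are my own inventions, not the gallery's - which is exactly the point: every reply is one programmer's fixed interpretation.)

```python
# A purely hypothetical sketch of a scripted exhibit like the one described
# above: each question maps to one pre-written answer, so the "interaction"
# can never be more than somebody's canned interpretation.

canned_answers = {
    "why don't you have any eyebrows?": "Fashions change; plucked brows were stylish once.",
    "why is your smile so popular?": "Because no one can quite agree on what it means.",
    "where were you painted?": "In Leonardo's studio, or so my script says.",
    "what is in the background?": "An imaginary landscape of winding paths and a bridge.",
}

def ask_mona_lisa(question: str) -> str:
    """Return the pre-scripted answer, or a polite dodge if the script has none."""
    return canned_answers.get(question.strip().lower(), "La Gioconda only smiles.")

print(ask_mona_lisa("Why is your smile so popular?"))
```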


The traditionalist in me says that paintings aren't supposed to move or talk. (Also, I can't help but picture Da Vinci spinning in his grave at what has been done to his work.) But is this interpretation completely lacking in value? Maybe. Maybe not. I can envision a youngster getting some knowledge out of being able to interact with a work of art, like gaining historical context or learning about the artist. The problem, at least as I see it, is that exhibits like this also foster a disrespect for the aura of the work. A large part of the value of the piece lies in its ambiguity. Who's to say that whoever programs Mona Lisa's response to the question about her smile has any idea why it's so popular? Or whether she is even smiling? Art historians have been debating that for centuries. The answer to that particular question could never be anything more than a highly subjective interpretation. I doubt, however, that an audience of small children will be sophisticated enough to make that distinction for themselves. Moreover, I wonder if, in presenting talking works of art, we are setting these kids on a path to a lifetime of disregard for the intrinsic value of the piece, and of art itself. I'm not so sure that remediation worked here. Yes, it did make the art interactive, but at what cost? For me one thing is certain: my kids (when I have them) will never have a conversation with Mona Lisa if I can help it.

Tuesday, September 21, 2010

Actors 2.0: Humanity in the Age of Digital Reproduction

"The stage actor identifies himself with the character of his role. The film actor is often denied this opportunity. His creation is by no means all of a piece; it is composed of many separate performances." (p.10)



The quote above, pulled from this week's reading of "The Work of Art in the Age of Mechanical Reproduction" by Walter Benjamin, immediately brought to mind the movie Avatar. Specifically, I was reminded of the raw emotion evoked by the character Neytiri when she discovers Jake's duplicity, and indeed throughout the entire film. If, according to Benjamin, a stage actor identifies himself with the character of his role - an opportunity that is denied to the film actor - how does that interpretation change when the actor is digital?

All of the awards and accolades heaped upon the film Avatar had one thing in common: the film presented a very realistic vision of a very make-believe world. Certainly everyone I know raved about how "real" the characters and settings read on screen. That is quite a remarkable accomplishment given that the characters are 10-foot-tall blue humanoid beings who have tails. And live in trees. What struck me about the film, even more than the lush opulence of the cinematography, were the absolutely human performances of the digital actors. When Neytiri rips into Jake for deceiving her and dooming her people, I knew exactly how she felt. The emotions evident in her facial expressions are exactly what a woman deceived and betrayed by her boyfriend - a betrayal that threatens to wipe out her entire race - would feel. The clip above shows a tiny bit of that scene and then goes on to explain how the filmmakers achieved such a realistic effect. How did they do it? By very closely recording the actual facial expressions of the human actors. As Benjamin further elucidates, "Experts have long recognized that in the film 'the greatest effects are almost always obtained by 'acting' as little as possible...'" (p. 10)

In this example the digital actors (in the final theatrical presentation) are completely divorced from their human counterparts; however, to produce an authentic performance it became necessary to closely record and augment their humanity. Applying Benjamin's model, we see that the same rule applies. Though denied a live audience to react to, the film actor, even the digital one, still draws on the universal human experience to convey his performance. I wonder if the same rule will apply when it becomes commonplace for films - not just kiddie movies - to use completely digitally rendered characters with no human actors ever involved in the process.

Tuesday, September 7, 2010

On Writing Well

While reading this week's class assignment the following exchange between Phaedrus and Socrates practically leapt off the screen at me:

Phaedr. I thought, Socrates, that he was. And you are aware that the greatest and most influential statesmen are ashamed of writing speeches and leaving them in a written form, lest they should be called Sophists by posterity.

Soc. You seem to be unconscious, Phaedrus, that the "sweet elbow" of the proverb is really the long arm of the Nile. And you appear to be equally unaware of the fact that this sweet elbow of theirs is also a long arm. For there is nothing of which our great politicians are so fond as of writing speeches and bequeathing them to posterity. And they add their admirers' names at the top of the writing, out of gratitude to them.

Phaedr. What do you mean? I do not understand.

Soc. Why, do you not know that when a politician writes, he begins with the names of his approvers?

Phaedr. How so?

Soc. Why, he begins in this manner: "Be it enacted by the senate, the people, or both, on the motion of a certain person," who is our author; and so putting on a serious face, he proceeds to display his own wisdom to his admirers in what is often a long and tedious composition. Now what is that sort of thing but a regular piece of authorship?

Phaedr. True.

Soc. And if the law is finally approved, then the author leaves the theatre in high delight; but if the law is rejected and he is done out of his speech-making, and not thought good enough to write, then he and his party are in mourning.

Phaedr. Very true.

Soc. So far are they from despising, or rather so highly do they value the practice of writing.

Phaedr. No doubt.

Soc. And when the king or orator has the power, as Lycurgus or Solon or Darius had, of attaining an immortality of authorship in a state, is he not thought by posterity, when they see his compositions, and does he not think himself, while he is yet alive, to be a god?

Phaedr. Very true.

Soc. Then do you think that any one of this class, however ill-disposed, would reproach Lysias with being an author?

Phaedr. Not upon your view; for according to you he would be casting a slur upon his own favourite pursuit.

Soc. Any one may see that there is no disgrace in the mere fact of writing.

Phaedr. Certainly not.

Soc. The disgrace begins when a man writes not well, but badly.

Phaedr. Clearly.

In the first line Phaedrus makes the claim that most politicians, including Lysias he thinks, are ashamed of writing and wary of recording their speeches for the written record, lest history paint them as Sophists. No disrespect to Phaedrus, but as a former political speechwriter I happen to know that he is dead wrong. As Socrates points out, speech making and writing are the very lifeblood of political discourse. That is just as true today as it was in ancient Greece. Furthermore, as far as the Sophist fears go, I feel that Phaedrus is wrong there also. If a Sophist, according to Wikipedia (http://en.wikipedia.org/wiki/Sophist), is one who is skilled in making incorrect and deceptive arguments sound correct, and in using the fears and prejudices of the listener to strengthen an inherently flawed argument, then I'm afraid that our entire political right could be labeled as Sophists. And Glenn Beck sure isn't afraid of how his chalkboard may be recorded for posterity. But I digress. What really got me going was this:

Soc. Any one may see that there is no disgrace in the mere fact of writing.

Phaedr. Certainly not.

Soc. The disgrace begins when a man writes not well, but badly.

Let the church say, "Amen!" Last week following class, I got into a discussion with a classmate regarding "text speak," and how terms like "LOL" and "OMG" have lately appeared in official and academic writings. As a teacher and erstwhile writer, and moreover as someone with a healthy respect for language, I find text speak to be the bane of my existence. Now don't get me wrong: in its proper forum, text speak is quite effective at quickly furthering communication between parties. In our current age of digital communication, having a universal shorthand is a good thing. Even I am guilty of ROTFL at my friends. But that's just it - with my friends. Text shorthand has no place, and indeed no meaning, outside of a digital forum. However, there is an entire generation that uses that shorthand so regularly they have ceased to even recognize it as such. That's a pity.

While grading papers recently, I ran across the term "U," short for the actual word "you," so often in student writing that it made me wonder whether the students actually knew the real English word. Yes, I know that language is an ever-evolving thing, and one hundred years from now that may be how we actually spell "you." Today, however, writing well still means writing in standard English. There was even a whole controversy at the New York Times recently regarding a copy editor banning use of the word "tweet," as outside of the Twitter forum the word literally has no meaning. Unless you are referring to the sound those evil birds make outside my window at 5:00 am. (http://www.theawl.com/2010/06/new-york-times-bans-the-word-tweet) I once had a teacher tell me that as long as one can communicate articulately, either in speech or in writing, people will listen regardless of what one is actually saying. That is certainly something that Sophists know, both ancient and contemporary. However, I fear that is a lesson completely lost on our current generation. What is to become of our future statesmen and professionals? Would you return to a doctor who wrote "C U L8TR" on your prescription for a follow-up visit?

Tuesday, August 31, 2010

The Emerging Media Ecosystem

Check out this headline: "Meet the First Plant That Requires Facebook Fans to Survive." Yes, it's about a plant - actual vegetation - that requires social media interaction to thrive. I saw this story on Mashable today, and it started me thinking about all the ways in which electronic media really drive everyday life. It raises the question, "Can there be such a thing as too much progress?"

Now, before you go labeling me a spineless technophobe (I'm really not, by the way; I would marry my Android phone if I weren't afraid of how weird the wedding photos might look), consider for a moment the preceding two thousand years of human history. Somehow, remarkably, for most of existence we mere mortals have managed to live, and even reproduce, without the miracle that is Facebook. I wasn't there, but I'm pretty sure my dad "liked" my mom the old-fashioned way. In the days of yore people actually watered plants, and plants actually managed to grow and feed the population. I'm not arguing against technological progress; I just wonder if progress always, well...progresses.

Think for a moment about the cotton gin. A technological wonder to some (the ruling slave owners), it was the absolute devil to others (the slaves who were now consigned to more centuries of servitude). Had Eli Whitney bothered to ask them, I'm sure their answer would have been a resounding "Screw progress!"

A friend of mine recently told me the story of his little sister, who was put on "Internet punishment" for getting a bad grade. As far as I know, the boundaries of her grounding did not preclude actual human interaction. Yet in all the weeks of her miserable confinement it never occurred to her to pick up the phone or, better yet, invite her friends over. The kid literally did not know how to live without social media. This is the way in which I believe technological progress has actually crippled human progress. And I say this as someone who makes her living (in theory, anyway) advising business owners on how to promote themselves via social media.

In reading the blog of one of my classmates, Little Miss Cales (Caleigh), I am struck by the fact that she predicts a veritable "Lord of the Flies"-style Armageddon should the nation's electronic resources fall victim to attack. I think she is 100% right. Sadly. Somehow we genius humans have technologically advanced ourselves right out of our collective humanity. Remarkable. Can't call, text, tweet, or Facebook your neighbor? Try knocking on their door.