Friday, February 24, 2012

"Digital people" conjures up (for me, anyway) visions of Star Trek's transporter: turning people into information beamed from one place to another.  What we're focusing on here is instead the way in which we conceive of ourselves and each other in the digital age.  In what ways do new kinds of technology change not just what we think about, but how we think and interact?  And does it, in fact change anything but the medium through which we communicate and learn?

I'm sure many have heard of The Shallows: How the Internet is Changing the Way We Think, Read, and Remember, by Nicholas Carr.  His is not the only voice questioning what we're unwittingly doing to ourselves by fragmenting our attention (he focuses on the internet, but others talk about new technologies as a single thing), but it's hard to imagine that there is sufficient evidence for wholesale claims about this yet; the landscape (and thus the data) is just too fluid. There are, however, discrete facts which must be taken into account. 


Take this bit from "Think Before You Speak", an article in The Economist by Andreas Kluth:

"The human brain has to work harder to process language and communication with somebody who is not physically present. (Conversation with passengers is much less distracting, apparently because those passengers are also aware of the traffic situation and moderate their conversation.) A study by Carnegie Mellon University using brain imaging found that merely listening to somebody speak on the phone led to a 37% decrease in activity in the parietal lobe, where spatial tasks are processed. This suggests that hands-free use of mobile phones cannot help much. Such distractions, according to one study, make drivers more collision-prone than having a blood-alcohol level of .08%, the legal limit in America. It appears to raise the risk of an accident by four times. Texting multiplies the risk by several times again."

What's interesting about this point is that the claim about the brain seems to extend to all kinds of 'remote' interactions, which are a significant chunk of what people mean when they refer to new technologies. Clearly, there are unwelcome consequences to our use of some technologies. But so far, they seem to be discrete problems, each with a host of potential solutions (e.g., when I've seen this point about attention lately, it's often been mentioned in tandem with work on self-driving cars).  I'm not worried - yet - about the future of We the Digital People.




Tuesday, February 21, 2012

Links Galore

Today's my last post, and I want to leave y'all with a list of places I like to go for news and discussions about "digital" tech and related themes. I really encourage you to explore these sites and blogs (most of them are in my blog reader, and I follow them regularly).

boingboing.net: This is one of the most venerable, yet still hip, sites on technology, geek culture, maker/hacker culture, etc. It's a group blog including people like Xeni Jardin, Cory Doctorow, and Mark Frauenfelder. 

Rhizome.org: This is THE place to go for "new media"/digital art. 

Electronic Frontier Foundation: This non-profit "defends your rights in the digital world."

Findingada: Ada Lovelace wrote the first computer program, back in the 19th century. Yep, the first computer programmer was a woman! This site is dedicated to Ada Lovelace Day, which is an international celebration of women in technology. 

Delia-Derbyshire.org: A site dedicated to the life and works of electronic music pioneer Delia Derbyshire. She worked at the BBC Radiophonic Workshop in the mid-20th century, and arranged the original Doctor Who theme.

Monday, February 20, 2012

Humanism & Posthumanism

One's views about digital technology and "digital people"--even what one identifies as questions, problems, issues, advantages, worries, etc.--will depend upon one's other assumptions and values. Here, I want to talk about two different philosophical systems or stances: humanism and posthumanism.

Humanism

Humanism comes from the Enlightenment (the idea of the "human" changed significantly with the turn to secular science and the increased emphasis on "reason"). It is the view that human life/being is somehow morally distinct from other forms of life/beings. Humanism is really human exceptionalism. The most common basis for the "human exceptionalism" claim looks something like this: human beings are the only forms of life capable of reason. Because we humans can reason, we can autonomously make decisions for ourselves--we don't need to obey anyone (like a king) or anything (like natural law). However, with this autonomy comes responsibility: we make our own beds, so to speak, so we have to sleep in them. Human exceptionalism itself assumes several things: that humans are the only forms of life capable of reason; that reason is good; that there is a settled account of what counts as "rationality"; that reason necessarily enables/leads to autonomy; etc.

In addition to human exceptionalism, humanism entails other values and assumptions. First, humanism privileges authenticity above all else. There is a "true inner you" that ought to be allowed free expression. Second, humanism privileges ideals of wholeness and unity: the "self" is undivided, consistent with itself, an organic whole that ought not be fractured. Related to the first two is the third value, that of immediacy: mediation in any form--in representation, in communication, etc.--makes authenticity and wholeness more difficult to maintain. Copies are bad, originals are good. This leads some humanisms to commit what philosophers call the "naturalistic fallacy"--i.e., to mistakenly assume that what is "natural" (whole, authentic, unmediated, original) is preferable to what is "artificial" (partial, mediated, derivative, etc.).

Though most of our commonsense, everyday mainstream ideas, ideals, and values in the global North are largely, if not fundamentally, informed by humanism, there are many, many problems with humanism. The most pressing of these problems is the circularity of the definition of "human" itself. If you look at the history of philosophy, no one has been able to determine a set of criteria for who counts as (fully) "human" that does not already presuppose some idea of what a human is/who already counts. For example, for a long, long time, non-white, non-Western people did not count as fully human (the US Constitution's three-fifths clause is a very literal example of this). So, the definition of "human" relied on unquestioned assumptions about the less-than-human-ness of non-whites, women, disabled people, and other historically marginalized populations. "Human" has been defined in ways that make it easier for privileged people to "count" or be "recognized" as fully human.

Humanism can be problematized on other grounds as well. For example, its ideals of wholeness, authenticity, and immediacy are both empirically and philosophically problematic. For instance, if there are cognitive processes that happen in our brains, but of which we are not consciously aware, then our "selves" aren't unified or whole--there are facets of ourselves that happen beyond our control. Human exceptionalism is problematic from ecological and animal-rights perspectives. We can also question the extent to which humans are fundamentally or necessarily "rational" (e.g., infants aren't rational, so does this mean they're not human?).


Posthumanism


Just as there are many humanisms, there are many posthumanisms. Some posthumanisms merely extend humanist ideals of rationality, progress, and self-improvement. Other posthumanisms are critical of humanist values and assumptions, and offer alternative values and assumptions. I'll offer a brief overview of posthumanism in this post, but a more extensive discussion can be found here.


I'll use the term "instrumental posthumanism" to describe those posthumanisms that really just integrate humanist values and ideals with post-industrial technologies. Instrumental posthumanisms treat the human body, human "life," etc., as things that can and ought to be optimized by technologies. Pacemakers, prostheses, insulin pumps, exercise equipment that provides biofeedback data, genetically modified food, diet supplements, iPods--all of these are posthuman technologies in current and widespread use in the "developed" world. In this sense, we are already posthuman.


Critical posthumanism challenges the values and assumptions on which humanism is based. There are a variety of critical posthumanisms: cyborg feminism, Afrofuturism, cyberpunk, goth(ic) posthumanism, and animal studies, just to name a few. Though these are all very different theories and practices, they share the view that humanism is a limiting and most often oppressive ideology that needs critique. They are all alternatives to humanism and human exceptionalism. For example, Afrofuturism holds that the Atlantic slave trade turned African slaves into "wetware" robots ("robot" deriving from the Czech "robota," meaning forced labor), and that the Middle Passage was a form of alien abduction (funny-looking guys land in your community with huge, technologically advanced ships, abduct you, and experiment on you). Rather than viewing robot or alien identity as an impediment, Afrofuturism argues that even as robots and aliens, Afrodiasporic subjects/slaves resisted white supremacy and crafted a unique and vibrant culture. The underlying point is this: though they were not "human," this did not mean they didn't live meaningful lives. So, "human" does not need to be the only, or even the central, model for a worthwhile, valuable life.


So what does critical posthumanism mean for thinking about "digital people"?


First, it means that most of the common critiques of technology are basically humanist ones. For example, concerns about alienation are humanist concerns; posthumanism doesn't find alienation problematic, so critical posthumanisms aren't worried by, for example, the shift from IRL (in-real-life) communication to computer-assisted communication...at least they're not bothered by its potential for alienation. Critical posthumanisms don't uniformly or immediately endorse all technology as such; rather, they might critique technology on different grounds--e.g., a critical posthumanist might argue that technological advancement is problematic if it is possible only through exploitative labor practices, environmental damage, etc.


Humanism often includes the belief that "technology" is the opposite of "natural humanity." Critical posthumanisms do not see these as opposed: the human body is just as "technological" or "mechanical" as the digital device on which you're reading this post. The brain and the heart rely on electricity, just as DNA is a kind of programming. Critical posthumanism holds that technology is itself neither good nor bad, helpful nor hurtful. It's the contexts in which it is used, the conditions under which it is produced, etc., that make it a positive or negative thing.

Sunday, February 19, 2012

Mixed Reality & 3D Printing


Back in the 1990s, “digital technology” was synonymous with “virtual reality” (VR). VR was “virtual” because it happened on a computer (and was projected on a screen, in a visor, or, perhaps someday, directly into your brain, as in The Matrix or eXistenZ), not in “real” life (which is often abbreviated as IRL, in-real-life).

Twenty or so years later, consumer technologies mix the virtual/digital and the IRL experience. Nintendo and Microsoft have popular gaming systems that incorporate the IRL body’s movements into on-screen events. GPS systems and various smartphone apps (like Google Maps) track my movement across the Earth’s surface, and display it on a map on my portable device. Those are just some of the most common examples.  The point is that as technology advances, we’re not abandoning physical, embodied reality (IRL) for “virtual” reality, we’re using the virtual to augment IRL experiences, and increasingly blurring the line between “virtual” and IRL.

One popular technology that blurs the virtual/IRL distinction is 3D printing. 3D printing is just what it sounds like: the use of a computer and a printer to extrude or fabricate three-dimensional objects. When I print a document, I send, say, a PDF to my 2D printer, which then, via some software, translates the PDF file into printer-friendly code. The printer then runs this code, and prints me up a paper copy of my newest journal article, or that cupcake recipe I’m going to make this weekend. 3D printers work more or less the same way: there’s a file—either something the user develops, or a canned “blueprint” someone else made—that gets sent to the printer, and then printed. Most hobbyists print with plastic; heated extruder heads melt plastic “thread,” and then push out thin strips of molten plastic, building objects layer by layer. 3D printing is just about to break from hobbyist/specialist/geek chic to mainstream consumer technology; some speculate that within the next decade, 3D printers will be as common a computer add-on as a 2D printer.
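If you're curious what those "printer-friendly" instructions actually look like, here's a toy sketch (in Python) of the slicing idea: turning a simple shape into layer-by-layer movement commands of the kind (G-code) hobbyist printers run. All of the numbers here are made-up placeholders for illustration, not something you'd feed to a real machine.

```python
import math

# Toy illustration of "slicing": approximate a small cylinder as a stack of
# layers, and emit G-code-style moves that trace each layer's perimeter while
# extruding plastic. Temperatures, feed rates, and extrusion amounts are
# placeholders, not calibrated values for any actual printer.

LAYER_HEIGHT = 0.3   # mm of plastic per layer (assumed)
RADIUS = 10.0        # cylinder radius, mm
HEIGHT = 6.0         # cylinder height, mm
SEGMENTS = 36        # straight moves used to approximate each circle

def slice_cylinder():
    commands = ["G28 ; home all axes"]
    e = 0.0  # running total of extruded filament
    for layer in range(int(HEIGHT / LAYER_HEIGHT)):
        z = (layer + 1) * LAYER_HEIGHT
        commands.append(f"G1 Z{z:.2f} F300 ; lift to the next layer")
        for i in range(SEGMENTS + 1):
            angle = 2 * math.pi * i / SEGMENTS
            x, y = RADIUS * math.cos(angle), RADIUS * math.sin(angle)
            e += 0.05  # placeholder extrusion amount per move
            commands.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.2f} F1200")
    return commands

if __name__ == "__main__":
    print("\n".join(slice_cylinder()[:8]))  # peek at the first few commands
```

In practice, hobbyists let slicing software (and sites like Thingiverse, below) generate this rather than writing it by hand, but the principle is the same: describe the object as a stack of thin layers.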


If you want to go into more detail about 3D printing, I suggest these websites:

For a basic intro, here’s the Wikipedia article.  

Thingiverse is good for people with basic technical knowledge, e.g., people who might want to build their own printer.

Hackerspace Charlotte has a weekly workshop for people building 3D printers.

3D printing does raise some interesting philosophical questions, such as:

1.    Before the industrial revolution and factory mass production, consumer goods were handcrafted at home—this is what we call the “cottage industry” model. In the 20th century, consumer goods were largely mass produced in factories. But 3D printing raises the possibility of mass-produced goods made at home. 3D printing potentially calls into question the public/private (home vs factory) logic that, up until now, has organized relations and “logistics” of production. This likely has a whole host of implications for business ethics, for example.
2.     Take all the intellectual property questions raised in the “virtual” era (e.g., with file sharing), and make them more complicated by introducing IRL products into the mix.

I’m interested to hear what other sorts of questions or problems you think 3D printing might raise. Thoughts?

Saturday, February 18, 2012


Using our digital devices to connect . . . and making time to disconnect.

Today's post is a link to a recent article in The Atlantic by Jason Farman, who is the author of Mobile Interface Theory: Embodied Space and Locative Media, and an assistant professor in American Studies at the University of Maryland. Farman addresses some of Bruce's and Fred's concerns from my two previous posts. He explains that the same worries about disconnection resulting from contemporary technological devices have been raised throughout history in response to all sorts of technological advancements that are largely seen as innocuous today. However, he admits that there does seem to be something unique about today's devices, which many of us cannot seem to be without. This leads him to a suggestion made by William Powers: that everyone find time to disconnect from these devices.  Here's the link to the article:

http://www.theatlantic.com/technology/archive/2012/02/the-myth-of-the-disconnected-life/252672/#.TzFKDswOG6s.twitter

Friday, February 17, 2012


Thoreau and Freud walk into a blog . . .

Today's post is from Fred Williams, who will be the other presenter, along with Bruce Arrigo, at the Symposium.  He discusses his own concerns about technology and responds to Bruce's post, invoking Thoreau and Freud, and then brings up some thought provoking questions of his own.

-Mark Sanders


In response to Bruce's admission that he's never blogged before, I'll admit my own limited involvement with social media -- and fairly serious distrust of the privacy and other social implications of them. I've rarely posted comments to blogs, preferring to send an email to the author. As a prosecutor, I had a role in some people having to wear an ankle bracelet as a part of their probation, and I refused to wear the one my boss wanted me to wear (i.e., a cell phone). I still don't carry a cell phone, both because of the privacy issues (all your movements can be continuously tracked with it) and because I read and was influenced by Thoreau many, many years ago on the human need for solitude and contemplation. Yet I've come to prefer the internet to libraries, and reading a book on my laptop, because it's easier -- and ethical -- to mark it up and take notes from the electronic version.


Social media, the Facebooks and Twitters of our brave new world, may well be both the replacements for the traditional newspaper and a fundamental facilitator of the Arab Spring and other movements which may improve democracy in some countries. Or not. Only time will tell whether some autocrats are pushed out and others grab power, and the actual, empirically verifiable, role of various technologies in whatever happens.


In response to his question of whether we relate to others "better" through these media, I'd contrast quotes from Thoreau and Freud:




"Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end, We are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate." Thoreau, Walden, in Writings, Boston: Houghton Mifflin, 1906, vol. 2, pp. 57-58.

"One would like to ask: is there, then, no positive gain in pleasure, no unequivocal increase in my feeling of happiness, if I can, as often as I please, hear the voice of a child of mine who is living hundreds of miles away or if I can learn in the shortest possible time after a friend has reached his destination that he has come through the long and difficult voyage unharmed?" Freud, Civilization and its Discontents, NY: Norton, 1961 (Strachey trans. & ed.), p. 35.
The internet makes finding and using such quotes quick and easy. Often you can simply download the whole book from Google Books, OCR it, and then search, copy, and paste.
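(For the curious, here is a minimal sketch of that workflow in Python, assuming you've already downloaded a page scan as an image; pytesseract is one commonly used open-source OCR library. The file name and search phrase are just placeholders.)

```python
from PIL import Image       # pillow, for opening the scanned page image
import pytesseract          # Python wrapper around the Tesseract OCR engine

# OCR one scanned page from a downloaded book, then search the recovered text
# for a phrase worth quoting. File name and phrase are placeholders.
page_text = pytesseract.image_to_string(Image.open("walden_page_scan.png"))

phrase = "improved means to an unimproved end"
if phrase.lower() in page_text.lower():
    print("Found it -- ready to copy and paste.")
else:
    print("Not on this page; try the next scan.")
```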


Whatever one thinks about the gains and losses, especially of emotional and cultural values, is there an inevitability to technological, social, cultural, and psychological change from modern communications? Could the Sumerians have stopped the changes which eliminated writing on clay tablets, no matter how deeply this affected their culture and the aesthetics of the traditional physical embodiment of ideas in their society? Think of the loss when beautiful handwritten and illuminated books were replaced with ugly printed ones. But few could have access to culturally and economically significant information and ideas before printing.


-Fred Williams

Thursday, February 16, 2012


QUESTIONS ABOUT DIGITAL CONNECTIVITY

The following is from Bruce Arrigo, Professor of Criminal Justice at UNC Charlotte, and one of the presenters at the Symposium to be held on March 28th.  Bruce brings up many questions for all of us to think about. I think I share some of Bruce's concerns, although I have to say that when it comes to these issues I worry about worrying too much and I worry about not worrying enough. Maybe Aristotle was right: it's all about finding the right balance. Anyway, here are some of Bruce's thoughts. If you have something to say in response, please do so. If you want to hear more about these issues, contact the Ethics Center about attending the Symposium.

-Mark Sanders



I've never blogged before. By the way, I very rarely text message, don't use MySpace or Facebook, and I don't even know what Twitter is, really. Skyping may have some appeal for me, although I've never done that either. I struggle to understand why digital "connectivity" must be constant, ubiquitous, and always derivative. I struggle to understand how such connectivity grows our humanness. Maybe that's not its purpose. But what does this connectivity teach us about social relationships and the experience of friendship, love, community, courage, etc., especially if the message is always in the form of a cyber-code that re-presents our identity through the laws of the informational age? It seems to me that forms of social media and networking are changing how we relate to one another. Is it for the better? Hospital patients now have a bar code placed on their wrists that is scanned every time the patient receives a medical service. Radio frequency identification technology ("tagging") is used to track the movement and activity of released offenders. Political lobbyists create "fictionalized," media-manufactured images of candidates that morph reality and identity into pseudo-states of existence. And these non-realities and non-identities are the bases on which campaigns are fought and elections are won.

Are we adapting to the conditions of cyber-technology, as evolutionary psychologists might suggest? Who are we becoming in the digital age, and what is the quality of our humanness given this adaptation?

-Bruce Arrigo  

Tuesday, February 14, 2012

New York Times: "A Newspaper, and a Legacy, Reordered"

Seems lately that The New York Times also finds these topics surrounding the digitization of our lives quite interesting. In the Sunday, February 12th print edition of The New York Times, an article titled "A Newspaper, and a Legacy, Reordered" graced the front page of the "Sunday Business" section. The tagline reads: "For a Digital Future, The Washington Post Shrinks Its Scope."


A quote by Marcus Brauchli, The Washington Post's Executive Editor, leapt out at me about midway through the article: “The Washington Post doesn’t need to cover everything, but what it does cover it will cover well.”

Quality versus quantity. This is not a new philosophical conundrum. Socrates (in Plato's Republic) once said, "Quantity and quality are therefore more easily produced when a man specializes appropriately on a single job for which he is naturally fitted, and neglects all others."

Certainly, according to the above-mentioned article, The Washington Post aims to err on the side of quality, while also attempting to underscore the quantity of its specialized coverage, in a Socratic sense. However, The Washington Post also seems to acknowledge that its Socratic version of quality-plus-quantity simply can't keep up with the digital world's version of (lower-quality) quantity.

If I've not yet lost you in a sea of tongue-twisting q's, then ponder this: What good will quality journalism be if it is run out of town by sub-par quantity? The Washington Post seems to be walking a tightrope of "quality versus quantity," while still digging one toe into the ground in a valiant attempt to resist bubblegum journalism (fun to chew on, but devoid of substance and tossed into the trash after a few minutes). Are they walking this tightrope well? Or are they practicing a doomed, specialized, soon-to-be-lost art? Are print newspapers a thing of the past?

Is it too late to turn the tide on bubblegum journalism? Can journalism ever return to a Socratic  balance of quality and quantity in the age of 140 character-maximum Tweets? 

New York Times: "Life Under Digital Dominance"

The Sunday, February 5th print edition of The New York Times contained a three-article special within the "Sunday Review" section, titled "Life Under Digital Dominance". The tagline: "We've given up control and lost the fun. Privacy? A distant memory. The cost and consequences." The three individual articles in the special are:




All three articles are worth a close read, but for this blog entry, I'm choosing to focus on the second in order to raise some existentialist questions. Before some of my readers assume I've suffered a typo, a "cyberflâneur" is someone who rather aimlessly peruses the internet, a play on the French masculine noun "flâneur," meaning "wanderer." It may well describe the way you discovered this blog.

That article raised, for me, an interesting existentialist question: How much of our lives are real anymore? Web surfing used to be a playful time killer to fill the lapses betwixt our real-life schedules. Perhaps we'd log on for a few minutes to check Facebook for new pictures from friends. Or browse the news headlines. Or read a review on the latest piece of technology before driving to Target for a purchase. But now... so much of our life schedules actually are virtual. I must admit that probably 75% of my non-grocery purchases are conducted on Amazon, before I've even handled the item I have dropped in my virtual shopping cart. I maintain a handful of "dear friends" with whom I schedule face-to-face visits. My other "friends" are really nothing more than one-dimensional status updates. And reading the news? I'm juggling at least a dozen opinions on the same topic, eating up time I could be utilizing to develop my own opinion.

Did I "go shopping" today if this entailed sitting in my pajamas at the kitchen table at 1:00 am, dropping photo representations of my purchases into my virtual shopping cart? Did I really "talk to my friend" today if I merely clicked the "thumbs up" button and posted a quick "hey, that's cute!" comment on the picture she posted of her cat wearing a baseball cap? Do I really have an opinion on SOPA if all I can do is regurgitate Wikipedia's questionably piecemeal entry? Did I really live today if the only times I peeled myself from my laptop screen were to use the bathroom, grab lunch, and walk the dogs? If my eyes are so glazed over that I have to take a nap when my husband returns home from work (and forgo the only opportunity I had all day for person-to-person human interaction), then did I really ever wake up from my morning stupor before retiring to bed and my virtual dreamworld?

How much of our lives are real anymore?

Monday, February 13, 2012

Welcome to Digital People!


Dear Digital People blog readers:

The Center for Professional and Applied Ethics, in the College of Liberal Arts and Sciences, announces the upcoming First Annual Ethics Center Symposium.  The details for the event are as follows:

First Annual Ethics Center Symposium
“Digital People:  Technology, Identity, and Social Change” 
Student Activities Center (SAC), Salons A, B, and C
March 28, 2012 from 5:30pm – 9:00pm

If you wish to be formally invited to this event and have not yet been contacted about it in some way or another, please contact the Ethics Center’s program coordinator, Mary Jo Speer, at mjspeer@uncc.edu.

The purpose of this blog is to begin a lively discussion about the social, ethical, and legal implications of being “digital people.”  Who are we as “digital people” and where are we going each time we update our Facebook status, tweet, blog and/or text?  Is the digital revolution more or less significant than the invention of the printing press?  Do we communicate less and less face-to-face because we communicate electronically more and more?  Or will people always yearn for embodied contact?  Who is storing information about us and for what purpose?  Are digital people more or less free on account of electronic communication?  Is privacy an obsolete value? 

Over the next month or so, expect this blog to be updated routinely, and please let us know about anyone who wishes to join our blog.  Melissa Wilson at avoiceforthevoiceless@gmail.com will facilitate your entrance.

Welcome,

Rosie Tong   
Distinguished Professor of Healthcare Ethics            
Department of Philosophy         
Director, Center for Professional and Applied Ethics