Monday, March 26, 2012

Today's blog entry is an interview with Fred Williams, the other presenter at the Symposium on Wednesday night. Fred answered the same questions that Bruce did, via email. I want to thank Fred and Bruce for taking the time out of their busy schedules to answer my questions. Their answers are extremely thoughtful and thought-provoking. After reading them I am really looking forward to the Symposium on Wednesday night.


1. What got you interested in the issues of technology, identity, and social change, the themes of the symposium at which you will both be presenting?

Fred Williams: I've been interested in such issues as long as I can remember. A major focus of my undergraduate and graduate work in economics was the role of technological change in economic growth and their combined effect on people and society. Our Constitution and Bill of Rights were profoundly affected by the agricultural and industrial revolutions -- but it took more than a century for our laws to adjust to many of the social changes arising from these technological and economic revolutions. When I became a computer and intellectual property prosecutor over 15 years ago, I compared those revolutions to the IT revolution and predicted that we would see similar social and cultural changes and disruptions, but over a much shorter time frame. The law could not keep up then, and it can’t keep up now. This has been a major theme of my teaching at UNCC over the years.


2. What are your thoughts on the theme of technology having the potential for interdisciplinary collaboration in an academic setting as well as for collaboration between the academy and the public at large?

Fred Williams: We all live in a world full of technological change. Those in the academy, perhaps more than the average person outside it, need to constantly access and use information. Therefore, an information revolution has to change academic life. It will not be long before it is hard to find someone in a university who relies on thousands of 3x5 cards with bibliographic information and fragments of information scribbled on them. What did the Library of Congress do with all its millions and millions of 3x5 cards in its various catalogues and shelf lists? Do people still use those huge and cumbersome sets of books abstracting various publications?

Relations among disciplines and between the academy and the public are likely to fundamentally change. The university itself may not survive in a recognizable form. The IT revolution makes information exchanges easier and less costly. Institutions built to create and exchange information, including universities, have to be significantly affected by new ways of accomplishing the purposes for which they were created -- ways which may make their whole raison d’être obsolete.

What Wikipedia has done to Britannica on dead trees, what Amazon has done to bookstores, what Google has done to libraries, the web may do to both the teaching and the research functions of the university.

In terms of the university’s research function, academics may simply become members of the cloud. As Tim Messer-Kruse learned, status in academe does not give one the right to override the cloud and force Wikipedia to accept the results of one's research about a controversial event (the Haymarket riots, etc.). Chronicle of Higher Education, 2/12/12. The oracle in the ivory tower may have no better access to the information relevant to a question than a high school dropout living in his parents' basement.


3. What are your biggest fears about the technological future?

Fred Williams: Dystopia. A totalitarian state with total knowledge of everything about everybody and arbitrary power over each. Some say that the IT revolution makes it inevitable that all dictatorships will fall. Much credit for the Arab Spring must be given to the IT revolution: the internet, cell phones, texts, social media, etc.

But can we really assume that the new tools are uniquely suitable for bringing down evil tyrants and establishing more democratic governments? Or can these tools be used to establish and perpetuate an efficient totalitarian state? Most likely, IT can be used for either, just as the same handgun can be used by a tyrannicide or by a member of the Gestapo.

It remains to be seen whether the Arab Spring brings a better life for the people. Perhaps we will look back and say the old dictators were far better than the new ones. I think and hope not, but only time will tell.

Now Google and other big entities have more knowledge of each member of our society than the KGB could have had, even with virtually everybody coerced into informing on virtually everybody else. It may well be possible for governments to co-opt this power: first to win elections, second to eliminate real elections, and third to establish totalitarian control of everybody.

I hope for the best, and think that the most probable future is not the very worst.


4. What are your greatest hopes for the technological future?

Fred Williams: Utopia. Free and easy access to publicly valuable information; effective protections for private information. Free and easy access to the information necessary to improve lives, promote health, create shared wealth, improve moral and psychological well-being, distinguish good from bad politicians, etc. No access to the personal information of private persons without their consent, so that individuals have room for a sane and healthy life. Deciding where the line between public and private information should fall -- and balancing the need for privacy against the need to solve crimes -- is not easy. The line may need to be in different places for different purposes.

Realistically, of course, neither utopia nor dystopia is the most probable future. Things are most likely to muddle along somewhere in between, with many losses of values and many gains.



5. Is technology, broadly speaking, something that humans can control?

Fred Williams: Sure, in the abstract. “Technology” implies tools and techniques for accomplishing practical objectives. If I open a tool box, I can choose which tool to use -- one tool can’t force me to choose it.

But if I want to cut a metal bar in half, will I choose a carpenter’s saw or a hack saw? Tools are utilitarian. If you have access to a tool that makes it easy to do the task at hand, how often will you use one that makes the task hard?

Some may have spent the last 40 years getting really good at research and writing using 3x5 cards, pencil and paper, books in a library, and paying someone to type their drafts. It may be more efficient for them to continue using old tools because it would be too much trouble to learn the new ones. But why would anyone in their 20s want to learn those research technologies?

Could we decide to go back to the beauty of hand-written, illuminated, and leather-bound books instead of printed ones? Can we stop the spread of digital ones, which lack the emotional attachments we old fogies have to content on dead trees?

No matter how strong the privacy and inter-personal relationship concerns, can we persuade our kids to forego Facebook, Twitter, IM, and texts in favor of face-to-face and more private means of communication? I can’t!

A modern developed society may choose to pay some people to live (somewhat) as if they were peasants in medieval times or farmers in the 19th century. There is value in living an older lifestyle. But would such a society go back to an agricultural technology that required 90% of the population to live such a life in order to provide enough food to feed everyone?

Certainly society, through law and law enforcement by a strong state, can have some impact on technological and social changes. Perhaps a lot more impact than Canute’s orders to the tide. But perhaps a lot less impact than we’d like.

Can we, by education and the spread of enlightened values, help people make wiser choices, even if they forego some efficiencies? That would seem a better way to control technology than laws. We have an obligation to our kids and grandkids to do what we can to push the changes more toward utopia than dystopia.

Saturday, March 24, 2012

INTERVIEW WITH BRUCE ARRIGO

The following is an interview conducted via email with Bruce Arrigo, one of the presenters at the Symposium, Digital People: Technology, Identity and Social Change, being held this coming Wednesday March 28.



What got you interested in the issues of technology, identity, and social change, the themes of the symposium at which you will be presenting?

Bruce Arrigo: My interest in technology, identity, and social change is sourced in the way in which we now "depend" on technology to express ourselves, to relate to others, and to make sense of the world in which we live. This dependence increasingly functions as a substitute for more direct, intimate, and real face-to-face relating. So, the medium of technology certainly seems to be changing the way we see ourselves and interact with others. Of course, this change may very well be a way of expressing human adaptation. 



What are your thoughts on the theme of technology having the potential for interdisciplinary collaboration in an academic setting as well as for collaboration between the academy and the public at large?

Bruce Arrigo: Technology is a tool. So, the applications of technology -- whether in the classroom or the boardroom, whether for the public good or for private sector growth -- hold considerable promise. But the question for me is how the applications of technology grow our humanity (e.g., further our understanding of courage, justice, dignity, love), rather than suppress it. For example, what does texting, my-spacing, face-booking, tweeting, and excessive reliance on other social networks reveal about our humanness, and how is this humanness productive? Indeed, for whom is this humanness productive? These questions force us to come to grips with technology's awesome power and presence in our lives. In the digital age, this power and presence, this connectivity, is constant and pervasive, and increasingly insidious.



What are your biggest fears about the technological future?

Bruce Arrigo: Over time, excessive reliance on digital technology might mean that we will no longer know how to relate directly and authentically to each other. In fact, it might signify that we simply will no longer care about such "intimate" human relating. If we lack the ability to effectively relate in face-to-face encounters or situations but can do so through digital technology, does this mean that we are adaptively evolving, or are we losing something quite special about our fundamental identities? In other words, are we really changing/evolving for the better?



What are your greatest hopes for the technological future?

Bruce Arrigo: Digital technology may make it possible for everyone to have immediate, free access to information from which censorship has been eliminated. Depending on the quality of this information, digital technology may help to educate and inform everyone in the world. This would be knowledge without borders.



Is technology, broadly speaking, something that humans can control?

Bruce Arrigo: Technology is a tool; humans can control how they incorporate this tool into their lives. In the Heideggerian sense, tools are "ready-to-hand." Thus, each person's ready-to-handedness will vary according to their intention and context. The question, then, is whether this control, this ready-to-handedness, will advance the collective or "social" good. Thus far, the evidence is mixed on this.

Wednesday, March 21, 2012

"Property Makes us Stupid," part 1

At the end of yesterday's post, I insinuated that I think the notion of "autonomy" as it floats around many contemporary ethics discussions won't do the work it needs to.  Today, I want to look at another mismatch between digitality and traditional normative concepts, this time "property."  Intellectual property - copyright, patents, and trademark, most centrally - has been at the center of a lot of battles about digital culture.  For example, just yesterday the Supreme Court unanimously ruled that a procedure designed to apply observed correlations between drug dosages and patient outcomes could not be patented, on the grounds that allowing this patent would effectively close off the natural facts of the correlations to anyone but the patent holders.

This sounds trivial enough.  But what immediately struck me (and other commentators - see here) is the possible relevance of this to a much more important topic: patents on genetic sequences.  The current law is established by the appellate decision in AMP v. USPTO, which was decided last summer.  It's a split opinion, but the governing part says that patents on isolated gene sequences are ok, based largely on the argument that genetic sequences in the body are something different from the ones outside the body which are subject to patent claims - in other words, moving them does meaningfully transform them (there's at least one good reason this is true: genetic sequences in the body involve a lot of non-coding information which gets excised in the patented sequences).  What the appellate court did not do a very good job on was the preemption argument that the Court used for yesterday's decision.  That argument, recall, says that you're not allowed to patent something if, in so doing, you'd effectively block others from accessing a law of nature.  As this paper argues, that's a very plausible way to construe what gene patents do: they bar others from using the genetic sequence to develop tests, therapies, etc.  The Court's unanimous ruling yesterday suggests to me that there may be an appetite to take up the gene patents case.  At least four justices have to vote to take up a case; here, the tea leaves following yesterday's decision might indicate that there are four votes for striking down gene patents.  The Court's been saying for a while that it thinks there are too many patents, even explicitly a few years ago in a case about automobile accelerator pedals (not for nothing do we get the phrase "patent geek!").

What does all this have to do with digitality?  It seems to offer confirming evidence of a couple of points.  The first is that the gap between biology and informatics is rapidly shrinking.  That is, biology is increasingly a science of information, statistics, and probabilities, with human bodies being treated not so much as somatic wholes, but as repositories of information about risks, propensities for certain diseases, and so forth.  The gene patents case, for example, concerns Myriad Genetics' patents on mutations in the BRCA1 and BRCA2 breast-cancer genes (btw, Myriad's stock slipped 5% following yesterday's ruling, and is off another 4% today, as of this writing).  Women who carry those mutations have a highly elevated lifetime chance of getting breast and ovarian cancers.  But the tests themselves only establish probabilities; they don't actually diagnose a symptom.  If it's true that 40-60% of women who carry the BRCA mutations will get breast cancer during their lives, it's also true that the rest won't.  Those are terrible odds compared to the 12% baseline, but the point is that it's all a matter of processed information, not of medicine in the old-fashioned sense of "Doctor, I have a headache."  This is where a lot of the action is: the body is being mined as a source of information in order to slot people into risk categories for certain diseases.  Those who fit the categories can then make informed decisions about whether to preemptively do things like have their ovaries removed.  We thus see the rise of what Nikolas Rose calls the "presymptomatic patient."  Such an entity is only possible, I submit, when we understand the body as a source of information, as a product of information society.
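To make the risk-category arithmetic concrete, here is a minimal sketch in Python, using only the rough figures quoted above -- illustrative numbers, not clinical guidance -- of what a "presymptomatic patient" amounts to: a probability attached to a genetic test result rather than a diagnosis.

    # Rough, illustrative figures from the discussion above -- not clinical data.
    def relative_risk(carrier_risk, baseline_risk):
        """How many times the baseline lifetime risk a mutation carrier faces."""
        return carrier_risk / baseline_risk

    baseline = 0.12                    # approximate lifetime risk, general population
    carrier_range = (0.40, 0.60)       # quoted range for BRCA-mutation carriers

    for carrier in carrier_range:
        print(f"carrier risk {carrier:.0%}: "
              f"{relative_risk(carrier, baseline):.1f}x the baseline, "
              f"yet {1 - carrier:.0%} of carriers never develop the disease")

The output is just two lines of proportions, which is precisely the point: the test sorts a person into a risk category; it does not diagnose anything.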

The other point is that the law just isn't adapted to this.  It's a really, really good question whether genetic sequences should be subject to patents.  The existing vocabulary for patentability might not handle it well, because that vocabulary was developed long before genomics developed very much.  This problem sits at the tip of a good-sized iceberg.  To take just one more example: patentability generally requires naming a single inventor.  Such a person is the primary patent holder.  But this model is ill-suited to the current scientific enterprise, where collaborative activities are spread across large numbers of people.  It also enables companies like RiceTec to attempt to patent basmati rice.  It took a herculean effort to invalidate this patent, precisely because patent law has trouble seeing inventions where one cannot also name the inventor or point to a first publication.

When Michael Hardt and Antonio Negri, then, say that "property makes us stupid," they're partly talking about the misfit between our understandings of property and reality.  "Reality" is partly about our new "posthuman" status.  It's also, for them, the thought that insisting on proprietizing everything causes us to forget that our daily lives function quite well without asserting property rights all the time.  I'll talk some tomorrow about more of what they mean, and the assumptions embedded in our copyright regime.

Tuesday, March 20, 2012

How's my driving?

I wanted to begin my stint as a contributor by trying to indicate some of the fundamental complexity of the issues that digitality raises.  Let's take the topic of privacy.  Everybody knows (and if you don't, read Richard Robeson's posts on this blog for a taste) that developments in surveillance technology pose significant threats to privacy, almost no matter how one conceives the term (and, of course, the very meaning of 'privacy' is notoriously difficult to pin down, even though what one means by the term makes a big difference for whether one thinks it deserves protection, as this paper demonstrates).  And of course no one is quite sure what to do about regulating these devices, as the Supreme Court's recent decision striking down the police's warrantless application of a GPS tracking device to a suspect's car illustrates (the majority opinion here was very narrow; Justice Sotomayor's concurrence signals that the Court may be ready for a landmark decision or two in these areas.  That there is some serious doctrinal confusion in the Court's Fourth Amendment jurisprudence about technology is a point that Justice Scalia made as far back as a 2001 case about thermal imaging devices).

The point I'd like to make is that the problems are both pervasive and go all the way down.  Consider webpages that want your personal information.  The standard narrative about privacy in these cases says that, when these sites provide notice of their information-sharing practices, and users consent to those practices, then everything is ethically kosher.  Call this the notice-and-consent standard.  The idea is that users get to exercise their autonomy by freely choosing whether or not to share information, as evidenced in their consent.

So far so good.  However, as this paper by Scott Peppet (a law professor at Colorado) argues, the very fact of control may itself be the problem.  As Peppet argues, digital technology makes it very easy to signal verifiable information about oneself to others, as for example by letting Progressive Insurance monitor one's driving habits.  This information is often of economic value.  Progressive, for example, wants to know who the good drivers are, and wants a better way to find out than either by asking ("are you a good driver?" "uh, yeah") or by going through proxies like age and gender.  The device delivers that information in a level of detail formerly unobtainable, short of the company hiring somebody to follow you around.  Users are free to opt-in to the monitoring or not.  So this is a perfect example of notice and consent.  But.  The best drivers have an automatic incentive to signal their best-ness to Progressive, and secure their discount.  The next-best drivers want to group themselves with the best ones... and so it goes.  Pretty soon, even the worst drivers are having to consent to monitoring, in order to avoid the even greater stigma (in the form of much higher rates) of refusing to be monitored ("what are you hiding?"). And so a very strong economic pressure makes the cost of retaining one's privacy prohibitive - even as (or even because) one has perfect control over the act of consent.
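Peppet's unraveling logic is mechanical enough that it can be sketched as a toy simulation. The numbers and the decision rule below are hypothetical, chosen only to exhibit the mechanism: the insurer prices unmonitored drivers as if they were average for the pool that refuses monitoring, so anyone better than that pool average gains by opting in, and the pool of holdouts keeps shrinking.

    # A toy model of the "unraveling" dynamic (hypothetical numbers, illustration only).
    # Drivers are ranked 1 (worst) to 100 (best). Each round, any still-unmonitored
    # driver who is better than the average of the unmonitored pool opts in.
    def unravel(num_drivers=100, max_rounds=200):
        """Return how many drivers end up consenting to monitoring."""
        monitored = set()
        for _ in range(max_rounds):
            holdouts = [d for d in range(1, num_drivers + 1) if d not in monitored]
            if not holdouts:
                break
            assumed_average = sum(holdouts) / len(holdouts)
            newly_monitored = {d for d in holdouts if d > assumed_average}
            if not newly_monitored:      # nobody else gains by disclosing
                break
            monitored |= newly_monitored
        return len(monitored)

    print(unravel())   # 99 of 100 drivers end up opting in

In this toy version, all but the very worst driver eventually consent -- not because anyone forced them, but because each round of voluntary disclosure makes silence look worse.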

At the very least, this ought to be troubling - Peppet argues that it's generalizable to a wide range of contemporary privacy issues.  More fundamentally, it seems to me that it shows that digitality challenges our notions of ethical agency and personhood - autonomy may not be doing the work we need it to; at the very least, we may need to think hard about what, precisely, an "autonomous" decision is.

Saturday, March 17, 2012

Is anything new under the sun?

There are these days many new books on the impact of information technologies on us, and they fall into three general categories: 1) Technology is the best thing since sliced bread and is enriching our lives; 2) Technology is destroying us bit by bit; and 3) There is nothing new under the sun and we've visited all of these benefits and concerns before. I tend to fall into the third category. From Socrates's too-often visited words from the Phaedrus on the debilitating effects of writing on memory (think Google), to the debate that raged in eighteenth-century Europe on the socially stunting effects of letter writing (think Facebook), to the economic, democratic, national-defense, and information-spreading benefits of the railway in the nineteenth century (think Internet), there seems to be nothing entirely new out there.

Not that these presumed boundary transgressions are anything new...

They were just more subtle. A spear used to kill prey, for example, is a materialization of an abstract function: that which was previously inseparable from a subjective immediacy or an organic interiority, namely the killing of prey with one’s hands, became external and entered into an object — the spear. Paradoxically, however, the spear can only become fully effective after it has been re-internalized, so to speak, inasmuch as the user must learn new skill sets (information patterns) and train the body to employ it, thus creating an enlarged, modified, and virtualized body. We like to think when we finally put the spear down that it is no longer a part of us for many reasons - it is technological and I am biological, it is artificial and I am natural - but this is not the case. The spear becomes part of us through this informational feedback, not as dramatically as cortical implants and nanoparticles, but part of us nonetheless.

Why I'm Interested in Technology

My interest in matters technological begins with the border we like to think exists between us and our technologies. What I have discovered over my years of research into ideas like this is that those borders, if they exist at all, are not as brightly defined as we'd like them to be. This becomes clearer in this day and age, when we as humans can carry many forms of "on board" technology such as cortical implants, implants for hearing, implants for epilepsy (http://www.youtube.com/watch?v=kjB2TaeL9d4), implants for vision (http://www.youtube.com/watch?v=ga7apTYrLuc), artificial joints, and many other things. Such devices are already so pervasive, and will become even more so with coming advances in nanotechnology, that we will soon have to have a serious discussion about what constitutes human and what constitutes cyborg, and it won't be in the context of science fiction.

Tuesday, March 13, 2012

Instant Response

I can remember the taste even though it's been a while since I licked a stamp to mail a letter.  If I do mail something now, of course, the stamp is self-adhesive.  Regardless, stamp glue is a thing of the past, especially with our technology of rapid communication via e-mail and even faster texting.  In the past year or two, I've enjoyed the social networking that Twitter offers.  Sending a tweet is nothing more than an international texting opportunity.  In moments you're connected with people and organizations you're following or who have elected to follow you.  Just that quickly, our messages are spread around the world.  Just think what the next decade will offer in this 'let's get it done now' society.

Joey Popp      

Monday, March 12, 2012

Loss of Human Touch

As technology advances and we marvel at the digital age, one may ask if it's also tearing down our communication skills.  I can recall, at least 15 years ago, a form of texting before it became the craze we know today.  At that time, as it still does, the electronic newsroom offered 'top of screening.'  Instead of simply turning to a colleague a few feet away and asking a question, you simply type your message and instantly it's delivered to the top of your colleague's computer screen.  Yes, it saves time and produces a less noisy room, but out the window goes a good conversation.  If the workday means sitting at a computer, moving around and finding the human touch may be more challenging than ever.  That may translate into a lack of exercise and contribute to the obesity concern haunting our country.  Digital advancement may be eye-opening, but let's not sacrifice our human connection along with it.

Joey Popp          

Sunday, March 11, 2012

Constantly Connected

When I decided electronic journalism was for me, I learned as a novice that the camera never blinks and the microphone spreads the message.  In other words, be aware of what you say and don't be caught off guard.  In today's digital world, we're all learning that.  Take Tommy Jordan and his daughter Hannah Marie in Stanly County near Albemarle.  Recently the father and daughter caught the attention of at least 31 million online viewers.  Hannah posted on Facebook that she was disgusted with her parents and with having to do chores around the house.  Unbeknownst to Hannah, her father found out.  In response, Tommy set up a camera and videotaped his outraged comeback message.  Then, as a way of disciplining his daughter, he took a .45 handgun and shot Hannah's beloved laptop computer multiple times.  It was all caught on tape, and he posted it on YouTube.  In no time, the video went viral, catching the attention of millions and shocking the world.  The two recently appeared on national television, on NBC's TODAY show.  After an extended interview with Tommy, Hannah, and wife and mother Amy, the take-home message became loud and clear.  What should have been a private family matter became an international focus.  In today's digital age, such an example shows the need for some communication to remain under wraps and out of the public eye.

Joey Popp

 

Saturday, March 10, 2012

Comparing this time with others!

1970s changes! Do we remember that era as a shock? Was the mindset that different? Computer and calculator costs were astronomically high in comparison to today’s, but were lives then less disoriented, or do we have to go back a hundred years to truly find “another” kind of life? And what is this change good for? Do we truly communicate better? Is the world better off from an identity and social point of view? Or are we talking nano-differences? If not, then we ought to be able to point to pros and cons. Did “they” know each other better then? Is society truly better off now? Or is it all a matter of gaining some and losing elsewhere? Can we and the younger generations imagine NOT having (you name the object or item!)? Can we appreciate what we have by realizing what it took to get "here"? Taking "all" for granted is a dangerous trend and almost inescapable. Or is it?

Friday, March 9, 2012

Making sense of change!

Change is unavoidable; social life seems stretched, torn, and always in upheaval in the process; our identities become vaguer, less distinct. Physical identity becomes virtual. Where is the hug? Technology races forward like a waterfall, an avalanche. Out of control? How much faster can trains and cars go? What are some of the components of balance? Time to reflect and observe, a walk in the park; silence, inside and out; honesty, abandonment of pretense, simply being; relaxation, loss of pressure, slowing down. How do we as a society accomplish something equivalent to the above? And what role does technology play in reaching this? You tell me! What do you think?

Thursday, March 8, 2012

The digital vs the physical touch!

In years gone by, people had time to sculpt and carve intricate works out of stone and wood. Michelangelo and Rembrandt took weeks, maybe even months, to create their works of art. People walked to work, they traveled long distances by horse and cart, and continents were still far away, sometimes reachable only by wind in the sails. They communicated person-to-person or by hand-written letter. Those folks from past generations took pleasure in sitting and walking. Life went by considerably more slowly, though they too knew what it was like to be drunk and to wage war. Now, building styles go straight up, reaching for the sky, furniture is mostly focused on function, and realism in art is considered old-fashioned. During a weekday, we hardly experience, literally and figuratively, what the outside world feels and looks like. After all, we now move from one temperature-controlled environment into another, and the tomatoes come from places we have never visited. We “sprint” from one place to the next in sleek, fast machines without taking time to see the sunset. Some members of my somewhat older generation have trouble making the transition from newspaper to internet, from letter to e-mail, from land-line to iPhone. Many of us are wondering what will become of relationships, friendships, and the physical gatherings where real feelings are expressed face-to-face. Music doesn’t sound like music anymore. As a result of all these changes over the years, are we and members of Generation X (and beyond) more separate and isolated, more lonely, more frustrated? What will become of the interconnectedness between generations? Is the world truly becoming a smaller place? Or is all this social change and technology causing us all to split and “crash”? I like my iPhone and my computer, and I make a real effort to get together and talk “politics” in the local coffee shop with a group of guys. I make my monthly sojourn to the mountains to visit my son and grandson. The social fabric doesn’t have to fall apart, but holding it together will take deliberate effort. To bridge the apparent divide will take determination and true intention.

Wednesday, March 7, 2012

Abstracting Privacy, Part 3

On a visit to Boston in the 1980s, I saw an exhibit at the Museum of Fine Arts of historical and cultural artifacts from medieval Japan. The exhibit, on loan from a Japanese museum, contained spectacular kimonos, screens, ceramics and calligraphy instruments. It also included a display of armor and weaponry, including swords from some of the most renowned swordsmiths of the period. Even though the swords were hundreds of years old, they were national treasures, as pristine as if they had been commissioned that very day. Several thoughts came to mind all at once: the legendary craftsmanship of the samurai sword maker; the fact that even at rest inside a glass display case, the weapons’ brilliant ergonomic curvature and plainly visible sharp edges combined to imply (if not demand outright) an act yet to be completed; and, also, that these were some of the most beautiful killing machines ever created. Then it became apparent that there are essentially two ways to regard such an object: to imagine using it, or to imagine having it used against you. To simply regard it as extraordinary metallurgy or the highest expression of a set of design principles doesn’t take full account of what it actually is.

I’m often reminded of that museum visit as I think about our digital-technological future. As we benefit from and enjoy our connectedness via digital technologies — the internet, global positioning systems, social media, texting, etc. — we would do well to remain mindful of what it actually is. Most Americans have by now had the experience of entering their address into Google Earth and instantly seeing a satellite photo of their home appear on the computer monitor. As unsettling as the experience was for me, even more troublesome was how recent the images were. On one occasion it was seeing where I had parked my car the day before — not my usual parking spot, because I got to my office building later than usual. Another time it was seeing the brand new roof on my home, finished just days earlier. Even though Google is a private company and is not, as far as we know, in collusion with the government (at least not in this country), such an experience gives a glimpse into a surveillance infrastructure that would have been the fondest dream of the Stasi, the KGB or Hoover’s FBI. And even though some communities abroad have taken exception to Google Maps cataloguing their every street, intersection and traffic light, the overall impression one gets is that to object is to swim against the tide. But even though there’s obviously no turning back, perhaps it is still worthwhile to be circumspect, and take full account of this new reality.

Tuesday, March 6, 2012

Abstracting Privacy (contd.)


Perhaps the mere illusion of privacy is sufficient to put some Americans’ minds at ease. In our post-9/11 world we seem to have been especially eager to sacrifice civil rights and civil liberties in exchange for guarantees against terrorist acts in the future. We are for the most part willing, for example, to disrobe in airport security lines because we’re given to understand that such scrutiny reduces the chances of a would-be terrorist smuggling a firearm, an edged weapon or a foundation-garment explosive device aboard the aircraft.
Public opinion is considerably more mixed on the issue of electronic surveillance, warrantless wiretaps, etc., than government officials and security experts would have us believe. But whatever unease exists is apparently easily overwhelmed by the imperatives of communicating with each other across time zones, continents and oceans in a timely manner.

Given the aggressive, open-ended nature of the national security/counter-terrorism apparatus — an expression of what historian, author and retired career Army officer Andrew Bacevich calls the “national security state” — it is perhaps understandable that its less visible iterations would be regarded with the kind of insouciance that can apparently cause people to forget altogether that an electronic communication always — always — leaves a trail. Or that striking the “Delete” key doesn’t remove all instances of an email or text. Former Detroit mayor Kwame Kilpatrick, for instance, was removed from office and ultimately served jail time because he seemed not to understand this simple but apparently elusive truth.

Further, if human nature and the history of statecraft remain true to form, there is the promise (or omen) of even less respect for privacy in the future. At one time not so long ago, airport security consisted primarily of being scanned with a hand-held metal detector. The current state of the art is a full body scanner that many Americans — women especially — regard as a totally unjustifiable invasion of privacy. Yet we continue to fly.

Monday, March 5, 2012

Abstracting Privacy



The early discussions that led to the upcoming March 28, 2012 Symposium (and this blog) focused a great deal of attention on the differences between generations in how they engage with — and with each other through — digital technologies, social media and the like. People of middle school age can hardly if at all imagine a world without Facebook, text messaging and instant downloads. Conversely, I know someone in her 80s who was enraptured by Skype, saying that using it made her feel like she was living in a “Star Trek world.”  Between those generational poles lie the rest of us, who will determine whether the best or the worst of the digital revolution will predominate. Privacy, for example, might have evolved (or devolved) to become something between an anachronism and a false hope.

During the 2002 manhunt for Beltway Sniper John Allen Muhammad and his minor accomplice (or first victim) Lee Boyd Malvo, law enforcement’s public statements included the confident assertion that surveillance video from convenience stores, gas stations and other retail establishments would lead them to the killer (not knowing at that time that there was more than one).  It was reported that Americans are captured on video surveillance cameras many times a day; estimates ranged from ten to two hundred.  The tone of such reporting was reassuring — after all, a murderer was at large.  But the 10-to-200-video-captures-per-day datum was nevertheless chilling in its own right: the surveillance infrastructure, which ultimately did play a significant role in the capture of the two shooters, remains, and is a great deal more pervasive ten years later.  The aforementioned middle-schooler, meanwhile, is no doubt entering her adolescence without even the expectation of privacy.  It seems that almost daily there are new revelations concerning the tracking of online browsing behavior, deliberately obscure opt-out — rather than opt-in — privacy policies, and cookies that circumvent a computer operating system’s security protocols.  But there is little if any scandal associated with such revelations.  It is entirely possible, it seems to me, that by the time our middle-schooler’s cohort becomes our society’s judges, lawyers, physicians, law enforcement officials, warriors and politicians, even the word “privacy,” and everything it contains, may only be remembered by historians and lexicographers.

In the meantime, however, we’re constantly reminded of our imperfect understanding of whatever privacy remains. Whether it is a married politician sending half-naked pictures of himself to someone he has only met online, a young co-ed posting her sexual exploit portfolio, or a judge forwarding a highly impolitic “joke” in an email, there seems to be a fundamental lack of awareness about the realities of cyberspace. In these and other such instances, the actors aren’t middle-schoolers, but adults (at least chronologically) who shouldn’t need to be reminded of the adage, “If you wouldn’t put it on a post card, don’t put it in email.”

To be continued.

Richard Robeson

Sunday, March 4, 2012

The work of developing new rules


            We’re thinking in this blog about the nature of the changes we are embracing, and/or that are being imposed upon us, by the advent of digital technologies.
            One observation about all new technologies seems obvious:  they are neither good nor evil.  They simply represent a new kind of power, and as such are always used by human beings for both what we define as good and what we define as evil.  That means that at the advent of any new technology, we are faced with an obligation to figure out as best we can how to maximize the good and minimize the evil effects—provided we can figure out (and develop a consensus on) what those are.
            The examples are too obvious to belabor.  Trains and then automobiles changed the world—and have also killed a large number of people.  Movies and television have allowed great works of art—and also produced tidal waves of dreck.  Cell phones have already saved many lives . . . and trashed a number of marvelous concerts, films and academic classes, and caused tragic traffic accidents.
            My concern is that we be intentional about developing manners and mores around new technologies, rather than simply allowing them to coalesce.  The current controversy over phone calls while driving (I hope the texting controversy is nearly settled!) is an example of those new efforts.
            The thoughtless boor in line at the bank or grocery store who either holds up the process because s/he doesn’t notice what’s going on, or broadcasts intimate details to innocent victims who don’t want to hear them—may well have never had the chance to learn about the impact of these behaviors on others. None of us inherited a set of expectations about how those phones should be handled—because the phones simply weren’t there when we or our parents were small. 
            I gained a fresh (and chilling) understanding of one of the primary functions of the proliferation of cell phones when I saw a billboard that read, “Unlimited ‘whereyats’.” 
            The naïve young woman who allows her beloved to take intimate pictures of her on his cell phone may trust his assurances of intimate confidence—and not yet have discovered how many other young women have found their images floating around the world before they knew it was happening.
            It’s not just identity thefts that impact our lives in negative ways.  Creative criminals keep finding new tools, such as “phishing” scams and RFID digital theft.  And many people still don’t grasp that what the text of a link in an e-mail says may have no relationship to the actual cyberlink behind it.
            Intellectual property theft is obviously a huge economic and moral issue.  If I buy a cheap knock-off from China of an expensive American product, is that act immoral?  Or ethically neutral?  The theoretic answer often looks very different from the answer being acted out in the marketplace.
            And at a less financially risky level, Meryl Streep’s 2009 film, “It’s Complicated,” gives a hilarious (but blood-chilling) example of the danger of a Skype session gone awry—and it’s completely believable as a real-life relationship disaster, the happy ending notwithstanding.
            It’s incumbent on me, as a psychotherapist, to help my clients understand that what we are doing is not simply a conversation about issues of concern to them, but a delicate process of attending to habits and feelings at the very edge of consciousness (and beyond), and that even their simply leaving their phones on during the session (unless truly essential and agreed upon) can seriously impair the power and value of what we’re trying to do.
            What are the venues within which these conversations should take place?  The family, certainly, but that’s not enough.  Schools struggle to develop policies that protect the students, the staff, and the educational process, but often fight upstream against resentment of any infringement whatever on the total freedom of students and families to stay in touch with everybody at all times.  A few churches, here and there, sponsor conversations on the moral implications of the digital world.  And academic events such as the symposium sponsored by the Ethics Center on March 28th offer a golden opportunity for scholarly consideration of the implications of digital technologies for our life together. 
            But the law and social mores are always playing catch-up—we’re always behind.  New forms of theft, of manipulation, of victimization, always emerge faster than we can develop thoughtful responses to them.  If we are to be served by our tools and systems, rather than be victimized by them, we will need all the awareness and intelligent thought we can get to plan systems of visibility and accountability. 
            I wonder about our readership—what are the places where the abuse of digital technologies most concretely impacts your lives?  And how do you think we should be thinking about those abuses and planning responses to them? 

Saturday, March 3, 2012

Climbing the Tower of Babel


            Since the "Tower of Babel," humans have been fragmented into language groups, not only separated from one another, but often suspicious of or hostile to one another.  Privileged subgroups have often had the opportunity to bridge those gaps, though that’s been less true of general populations.  There have long been significant international contacts among groups of diplomats and business professionals; academics of various specialties; world travelers among the well-to-do; artists and musicians of various forms; and in the last century or so, radio amateurs, who spend time talking with one another all over the globe.  But the general population has not had the opportunity to develop much in the way of familiarity—let alone intimacy—with one another.
            No more.  Not only has international travel become more possible and more common, but the digital media flow around the world as easily as do radio and television broadcasts.  We have arrived at a crossroads where on the one hand international tensions are as dangerous as they’ve ever been, but on the other, opportunities for people-to-people understanding around the globe are also more accessible than ever before.  We have the opportunity to create unprecedented levels of familiarity with people who seem very different from us—if it matters to us.
            I want to argue that it is imperative for educators, journalists, peace advocates, and others to grasp the opportunities presented by the digital media to encourage growth in the direction of a genuine world awareness—not “one world,” because cultural and linguistic diversity will not melt away in the near future, but growing communities of people who understand that common interests in ecology, economics, military technology, etc., make it imperative that we get acquainted as well as possible across national borders.
            Mine is not an affluent family.  Yet all three of my children have spent time overseas.  My youngest spent a year in Shanghai, learning Mandarin.  I have a grandson who has spent significant time studying in Latin America—in Spanish.  I have two other grandchildren who have spent time in France, developing their French skills.  There is no longer anything unusual about those experiences.  But as we spend more time in cultures other than our own, I find myself wondering whether we feel the need to continue developing world awareness, or whether it feels more natural just to subside back into the “normalcy” of our own setting, losing awareness just as we tend to lose fluency in languages we rarely speak. 
            The first long-distance call I ever made required the assistance of half a dozen operators, connecting trunk lines to enable me to speak to someone less than 400 miles away.  Now I can pick up my phone, push a few buttons, and hear a friend in New Zealand answer the phone.  I enjoy that immensely—but those conversations don’t make me intimate with Kiwi culture (even though they can help).  What helps more is that one of the tabs on my browser is the front page of the Christchurch Press.
            We need more than ever to understand what feels urgent to the citizens of Nigeria, Australia, France, Brazil, etc.  We can’t learn all those cultures in depth.  But we can learn about a few more easily than ever before, if we are surrounded by people who expect us to follow the news of the world.  It makes a difference whether my friends think the latest gossip from Ballantyne, Gastonia, Atlanta or Hollywood is all I need to know about the world beyond my doorstep.
            How many Americans understand the difference between an Iranian and an Arab?  How many have a sense of the reality of life today in the cities of China?  How many understand what Qur’an really means to the Muslims of the Near East?  These and other issues are mentioned in our news media frequently, yet not many of us—yet—use the ubiquitous digital media to learn about the world beyond our borders.  We depend on professional journalists—who perform an incredibly important function, but their work product is not enough.
            Suppose twenty-five percent of Americans were to pick one other country and make it a source of frequent study: read its newspapers online, form digital friendships with interesting people there, become familiar with its history, politics, currency, geography, etc.  The result could be a dramatically enhanced American intimacy with the people of the rest of our planet.
            If we are to learn how to live in some sort of peace with the other peoples who share our planet, we must get to know them better.   The digital media offer us an unprecedented opportunity to do that, easily and steadily.  I wonder whether Americans will make use of those opportunities.  Your thoughts?

Ralph Milligan

Friday, March 2, 2012

Meeting new people


            I’m grateful for the posts here on issues surrounding privacy, what’s “real,” and the values involved in assessing these and other questions in digital space.  These are real and important issues, calling for answers which we're inventing now and will be for some time to come.
            But as a career psychotherapist, my primary concerns about the digital world surround human relationships—and the digital world obviously has huge implications in these areas too.  Early in my career, single and unattached clients often asked me, “Isn’t there something besides bars and churches?”  Now, undeniably, there is.   My topic for this post:  the advantages and dangers in meeting people in the cyberworld.  Two very different examples . . .
            --One of my clients told me about “falling in love” with a woman on the Internet, and agreeing to meet her in a nearby city at five p.m. on a Friday for a weekend of passion.  He told me later, “By seven p.m. I was praying for a good excuse for leaving.”  His experience was not unique.
            --Some years ago a divorced friend of mine told me that she was falling in love online with a guy who was a thousand miles away, of a different religion, and not yet divorced.  Neither had ever even been to the other’s state, and the ethnic differences were immense.  I did my best to caution her, knowing all the while that advice to a person who is “in love” is usually a waste of breath.  To my considerable surprise he moved to her state and courted her in a rather traditional fashion, and a year later they asked me to officiate at their wedding.  Their marriage isn’t perfect (as if there were such a thing!), but it’s far better than many.  I needed the corrective to remind me not to generalize about such events.
            There was a time when hearing that someone had met and married (or moved in with) someone they met online was shocking news.  Now it’s almost completely unremarkable—in part because, as we do with all new social inventions, we have developed methods of dealing with such connections that provide additional common sense and safety.  But this morning’s “Ask Amy” column makes clear that that task is by no means finished.
            Pertinent issues to consider:
            --Online connections can increase exponentially the number of potential partners we might meet and get to know—especially for people in small town or rural areas, and for busy people even in urban areas.  Even with the disadvantages of the lack of direct contact at the beginning, this has obviously enabled some marvelous connections that would otherwise never have happened—especially when (as cited above) we continue to develop new standard methods of increasing safety and appropriate methods of getting acquainted.  “We don’t really get to know them,” some complain.  How many marriages in my parents’ generation were between people who ruefully confessed later that they “really didn’t know each other”?  And that still doesn’t address the issue of the “matching services.”
            --Sociologists have long known that physical attractiveness can and does help to launch new relationships.  But people who are gifted in areas of kindness, intuitiveness, integrity, and other values can often be people who are not stunning to look at . . . and therefore may be overlooked by those for whom they would be excellent partners. 
            I was “talking” to a very bright and interesting young man whom I met online one night, and after I identified myself he said, “If we had met on the street, you would have paid no attention to me.”  I asked why.  He said he had pink spiked hair.  And I realized that sadly enough, he was probably right.  Online relationships have enabled me to connect with a number of people whom I very much enjoy, but whom I would probably have never “met” had our first judgments been on the basis of physical appearance.  The “amputation” of physical appearance at the beginning can enable some marvelous connections to happen.
            --A cyber-connection need not be a substitute for meeting IRL.  I have met in person a number of people with whom I had first connected on the Web.  In every case we had corresponded for a significant length of time, and the person I met IRL was precisely the person I had met online—no surprises, just enhanced appreciation.  I have other cyberfriends whom I have not yet been able to meet IRL, but their friendship means a great deal to me.  I’ve thought of the ancient mechanism of “pen pals,” from half a century or more ago, where people formed valued connections with people far away, often in other countries or languages—except now it’s far more immediate and playful.  And cheaper!
            --Online relationships often provide a method of “normalizing” interests which might otherwise be considered “weird” or unacceptable.  If you’re a long-haul truck-driver who loves crocheting, a lawyer who loves racing dirt-bikes, or a homemaker who loves competitive boxing—you may not know many people in your area with whom you can really connect.  But online, you may be sure, there are many people to whom your varied interests make sense.  Is there a darker side to this?  To be sure—as is true of everything.  But it doesn’t need to be dark.  It can be wonderful.
            My point:  we need, and are developing, new forms of caution and new sets of manners for use in managing relationships with people we meet online.  But intimacy, mutual support, community, shared recreation—all of these human values and others may be served as much as diminished by online relationships.  It’s just that it’s very difficult to invent, develop, and refine appropriate methods of managing those relationships as fast as the online environment provides new forms of connection.  Defining “Facebook friend” is an unfinished task!
            Cyber-connections aren’t going away.  We need robust conversations about safe and rewarding ways of managing them.
            Tomorrow:  What does “intimacy” mean now?

Thursday, March 1, 2012

Will Digital Technology Dominate Us?



            “We can affirm the unavoidable use of technical devices, and also deny them the right to dominate us, and so to warp, confuse, and lay waste to our nature” (Martin Heidegger, Discourse on Thinking).  As increasing numbers of people join the digital age we appear, at least, to confirm the first part of Heidegger’s statement. Technical devices are now embedded in daily life. Of course, to some extent, the use of technical devices defines what human means. However, now well advanced beyond simple tools such as clubs or eating utensils, digital technical devices infuse nearly every aspect of life. In twenty years I have gone from a bulky $3,500 desktop computer, with 8 MB of RAM and a 40 MB hard drive, guaranteed to be all the memory I would ever need, to an Android “smart-phone” in addition to a laptop, an 8 GB iPod Touch, and a desktop with considerably more RAM and hard drive space.
            Have I somehow become less human by being tied to so many devices with the capability of taking me out of my immediate environment? I would like to think the answer is no. But I have to admit the difficulty of being in the digital age lies in the second half of Heidegger’s counsel. Can we “deny them [technical devices] the right to dominate us, and so to warp, confuse, and lay waste to our nature?” Someone I know was so averse to voice mail that when his cell phone provider included voice mail as a standard feature, he dialed his own cell phone 30 times and filled up the voice mail box with empty messages to prevent having to respond to voice mails. I called him recently and when he did not answer, the call went to his voice mail and I was prompted to leave a message. I no longer heard “I’m sorry [exactly how a computer-generated voice can be sorry could be a future topic], the voice mail box is full. Good-bye.” Had technology somehow finally exerted its dominance and dictated a change in the person’s behavior? Or had some societal influence necessitated the change?
            Heidegger goes on to promote the need for meditative thinking, arguing that [humanity] is “in flight from thinking.” He contrasts the deep, reflective, meditative thinking with the calculative thinking that “races from one prospect to the next. Calculative thinking never stops, never collects itself.” The challenge for digital people, at least from what I perceive, is to refrain from the indiscriminate adoption of the latest innovation, or the newest invention marketed with the promise of a brighter future. The challenge is to resist the “flight from thinking” and ask ourselves reflective, penetrating questions which guard us from “wasting our nature.” Perhaps, if the digital age challenges us to reach back and re-discover meditative thinking, digital technology will strengthen our nature as we move forward.