Saturday, July 7, 2012

Talking about Unsustainable!

When was the last time you were in a nursing or elderly care “home”? Or maybe I should ask: “When was the first time?”
OK, a couple of suspected facts:
Many of the nursing homes, rehabilitation centers for the elderly, and elder care facilities
are listed on the NY Stock Exchange! These places are, first and foremost, profit centers!
These homes send “patients” to the hospital for bladder and other infections and then, upon their return, can bill Medicare at substantially higher rates for those elderly people;
The number of people in “need” of end-of-life care will increase hugely by 2030, and with it the cost of such care and the burden on the taxpayer will grow by leaps and bounds.
These facts are kept away from the classroom and the kitchen table. Almost no one has family discussions about what to do at the end of life! Why not?
End of life is a subject we simply do not talk about. We ooh and aah about babies, but won’t face what to do when life is - for all practical purposes - OVER! We, the people, keep people alive, if that is what it is, because we don’t want to say
“Good Bye!”
                                    I repeat: We love to say “Hello!” and hate/avoid saying “Good-Bye!”
We, yes we taxpayers, allow many of the very elderly to be “stuffed” into “homes,” leaving the caring to minimum-wage earners while we ourselves go to the beach or the movies!
Doesn’t this picture appall the reader?
If so, why don’t we talk about this?
If you are willing, contact: Maarten at maartenjp@mac.com
or join Compassion and Choices (compassionandchoices.org)

Tuesday, July 3, 2012

Extremes vs centered!

Swimmers swim milliseconds faster and thereby set a new world record, runners shave slivers off speed records, jumpers jump 1/4 of an inch higher than anyone ever has, cars drive faster, smoother, and with ever more sophisticated hardware, scientists discover ever smaller living parts, medical companies come up with ways to make us all live even longer .... we think in terms of smaller, larger, grander, safer, faster, slicker, more comfortable. Everyone seems obsessed with creating the most extreme in whatever.
Millions, no, even billions of dollars are at stake. To what purpose?

Another question is: "How essential are these extreme accomplishments?"
The Olympic Games are all about sensationally small increments of speed and distance with the latest aerodynamics in clothing and shape, and the winners have a chance to pick up lucrative endorsements.

AND SO WHAT!!!!

Have we passed the "reasonable" limit, and are we neglecting the essentials of life?
The above accomplishments seem to emphasize the superficial.
What about being (extremely) nice to one another, generous to our neighbors, complimentary where appropriate, attentive to a need?

When I look around I notice little of the latter. Can we come back to the center? Or have we lost our perspective? This world is SOOO small in relation to the space it is "floating" in. I'd like us to focus a bit more on what used to be normal: holding hands, having a substantial conversation, listening, standing still and looking at the beauty of this world, considering our fabulous surroundings - and all this without setting "silly" records.

It may not make a lot of money, but understanding who I am ought to be my focus. I am working on being centered. You?

Monday, March 26, 2012

Today's blog entry is an interview with Fred Williams, the other presenter at the Symposium on Wednesday night. Fred answered the same questions as Bruce did, via email with me.  I want to thank Fred and Bruce for taking the time out of their busy schedules to answer my questions.  Their answers are extremely thoughtful and thought-provoking. After reading them I am really looking forward to the Symposium on Wednesday night.


1. What got you interested in the issues of technology, identity, and social change, the themes of the symposium at which you will both be presenting?

Fred Williams: I've been interested in such issues as long as I can remember. A major focus of my undergraduate and graduate work in economics was the role of technological change in economic growth and their combined effect on people and society. Our Constitution and Bill of Rights were profoundly affected by the agricultural and industrial revolutions -- but it took more than a century for our laws to adjust to many of the social changes arising from these technological and economic revolutions. When I became a computer and intellectual property prosecutor over 15 years ago, I compared these revolutions to the IT revolution and predicted that we would have similar social and cultural changes and disruptions, but over a much shorter time frame. The law could not keep up then, and it can’t keep up now. This has been a major theme of my teaching at UNCC over the years.


2. What are your thoughts on the theme of technology having the potential for interdisciplinary collaboration in an academic setting as well as for collaboration between the academy and the public at large?

Fred Williams: We all live in a world full of technological change. Those in the academy, perhaps more than the average of those outside it, need to constantly access and use information. Therefore, an information revolution has to change academic life. It will not be long before it is hard to find someone in a university who relies on thousands of 3x5 cards with bibliographic information and fragments of information scribbled on them. What did the Library of Congress do with all its millions and millions of 3x5 cards in its various catalogues and shelf lists? Do people still use those huge and cumbersome sets of books abstracting various publications?

Relations among disciplines and between the academy and the public are likely to fundamentally change. The university itself may not survive in a recognizable form. The IT revolution makes information exchanges easier and less costly. Entities established to create and exchange information, including universities, have to be significantly affected by new ways of accomplishing the purposes for which they were created -- ways which may make their whole raison d’ĂȘtre obsolete.

What Wikipedia has done to Britannica on dead trees, what Amazon has done to bookstores, what Google has done to libraries, the web may do to both the teaching and the research functions of the university.

In terms of the university’s research function, academics may simply become members of the cloud. As Tim Messer-Kruse learned, status in academe does not give one the right to override the cloud and force Wikipedia to accept the results of one's research about a controversial event (the Haymarket riots, etc.). Chronicle of Higher Education, 2/12/12. The oracle in the ivory tower may have no better access to the information relevant to a question than a high school dropout living in his parents’ basement.


3. What are your biggest fears about the technological future?

Fred Williams: Dystopia. A totalitarian state with total knowledge of everything about everybody and arbitrary power over each. Some say that the IT revolution makes it inevitable that all dictatorships will fall. Much credit for the Arab Spring must be given to the IT revolution: the internet, cell phones, texts, social media, etc.

But can we really assume that the new tools are uniquely suitable for bringing down evil tyrants and establishing more democratic governments? Or can these tools be used to establish and perpetuate an efficient totalitarian state? Most likely, IT can be used for either, just as the same handgun can be used by a tyrannicide or by a member of the Gestapo.

It remains to be seen whether the Arab Spring brings a better life for the people. Perhaps we will look back and say the old dictators were far better than the new ones. I think and hope not, but only time will tell.

Now Google and other big entities have more knowledge of each member of our society than the KGB could have had, even with virtually everybody coerced into being an informer against virtually everybody else. It may well be possible for governments to co-opt this power: first to win elections, second to eliminate real elections, and third to establish totalitarian control of everybody.

I hope for the best, and think that the most probable future is not the very worst.


4. What are your greatest hopes for the technological future?

Fred Williams: Utopia. Free and easy access to publicly valuable information; effective protections for private information. Free and easy access to the information necessary to improve lives, promote health, create shared wealth, improve moral and psychological well-being, distinguish good from bad politicians, etc. No access to the personal information of private persons without their consent, so that individuals have room for a sane and healthy life. Deciding where the line between public and private information lies -- and balancing the need for privacy against the need to solve crimes -- is not easy. The line may need to be in different places for different purposes.

Realistically, of course, neither utopia nor dystopia is the most probable future. Things are most likely to muddle along somewhere in between, with many losses of values and many gains.



5. Is technology, broadly speaking, something that humans can control?

Fred Williams: Sure, in the abstract. “Technology” implies tools and techniques for accomplishing practical objectives. If I open a tool box, I can choose which tool to use -- one tool can’t force me to choose it.

But if I want to cut a metal bar in half, will I choose a carpenter’s saw or a hack saw? Tools are utilitarian. If you have access to a tool that makes it easy to do the task at hand, how often will you use one that makes the task hard?

Some may have spent the last 40 years getting really good at research and writing using 3x5 cards, pencil and paper, books in a library, and paying someone to type their drafts. It may be more efficient for them to continue using old tools because it would be too much trouble to learn the new ones. But why would anyone in their 20s want to learn those research technologies?

Could we decide to go back to the beauty of hand written, illuminated, and leather bound books instead of printed ones? Can we stop the spread of digital ones which lack the emotional attachments we old fogies have to content on dead trees?

No matter how strong the privacy and inter-personal relationship concerns, can we persuade our kids to forego Facebook, Twitter, IM, and texts as opposed to face-to-face and more private means of communication? I can’t!

A modern developed society may choose to pay some people to live (somewhat) as if they were peasants in medieval times or farmers in the 19th century. There is value in living an older lifestyle. But would such a society go back to an agricultural technology that required 90% of the population to live such a life in order to provide enough food to feed everyone?

Certainly society, through law and law enforcement by a strong state, can have some impact on technological and social changes. Perhaps a lot more impact than Canute’s orders to the tide. But perhaps a lot less impact than we’d like.

Can we by education and the spread of enlightened values help people make wiser choices, even if they forego some efficiencies? That would seem a better way to control technology than laws. We have an obligation to our kids and grandkids to do what we can to push the changes more toward utopia than dystopia.

Saturday, March 24, 2012

INTERVIEW WITH BRUCE ARRIGO

The following is an interview conducted via email with Bruce Arrigo, one of the presenters at the Symposium, Digital People: Technology, Identity and Social Change, being held this coming Wednesday March 28.



What got you interested in the issues of technology, identity, and social change, the themes of the symposium at which you will be presenting?

Bruce Arrigo: My interest in technology, identity, and social change is sourced in the way in which we now "depend" on technology to express ourselves, to relate to others, and to make sense of the world in which we live. This dependence increasingly functions as a substitute for more direct, intimate, and real face-to-face relating. So, the medium of technology certainly seems to be changing the way we see ourselves and interact with others. Of course, this change may very well be a way of expressing human adaptation. 



What are your thoughts on the theme of technology having the potential for interdisciplinary collaboration in an academic setting as well as for collaboration between the academy and the public at large?

Bruce Arrigo: Technology is a tool. So, the applications of technology -- whether in the classroom or the boardroom, whether for the public good or for private sector growth -- hold considerable promise. But the question for me is how the applications of technology grow our humanity (e.g., further our understanding of courage, justice, dignity, love), rather than suppress it. For example, what does texting, my-spacing, face-booking, tweeting, and excessive reliance on other social networks reveal about our humanness, and how is this humanness productive? Indeed, for whom is this humanness productive? These questions force us to come to grips with technology's awesome power and presence in our lives. In the digital age, this power and presence, this connectivity, is constant and pervasive, and increasingly insidious.



What are your biggest fears about the technological future?

Bruce Arrigo: Over time, excessive reliance on digital technology might mean that we will no longer know how to relate directly and authentically to each other. In fact, it might signify that we simply will no longer care about such "intimate" human relating. If we lack the ability to effectively relate in face-to-face encounters or situations but can do so through digital technology, does this mean that we are adaptively evolving, or are we losing something quite special about our fundamental identities? In other words, are we really changing/evolving for the better?



What are your greatest hopes for the technological future?

Bruce Arrigo: Digital technology may make it possible for everyone to have immediate, free access to information from which censorship has been eliminated. Depending on the quality of this information, digital technology may help to educate and inform everyone in the world. This would be knowledge without borders.



Is technology, broadly speaking, something that humans can control?

Bruce Arrigo: Technology is a tool; humans can control how they incorporate this tool into their lives. In the Heideggerian sense, tools are "ready-to-hand." Thus, each person's ready-to-handedness will vary according to their intention and context. The question, then, is whether this control, this ready-to-handedness, will advance the collective or "social" good. Thus far, the evidence is mixed on this.

Wednesday, March 21, 2012

"Property Makes us Stupid," part 1

At the end of yesterday's post, I insinuated that I think the notion of "autonomy" as it floats around many contemporary ethics discussions won't do the work it needs to.  Today, I want to look at another mismatch between digitality and traditional normative concepts, this time "property."  Intellectual property - copyright, patents, and trademark, most centrally - has been at the center of a lot of battles about digital culture.  For example, just yesterday the Supreme Court unanimously ruled that a procedure designed to apply observed correlations between drug dosages and patient outcomes could not be patented, on the grounds that allowing this patent would effectively close off the natural facts of the correlations to anyone but the patent holders.

This sounds trivial enough.  But what immediately struck me (and other commentators - see here) is the possible relevance of this to a much more important topic: patents on genetic sequences.  The current law is established by the appellate decision in AMP v. USPTO, which was decided last summer.  It's a split opinion, but the governing part says that patents on isolated gene sequences are ok, based largely on the argument that genetic sequences in the body are something different from the ones outside the body and subject to patent claims - in other words, moving them does meaningfully transform them (there's at least one good reason this is true: genetic sequences in the body involve a lot of non-coding information which gets excised in the patented sequences).  What the appellate court did not do a very good job on was the preemption argument that the Court used for yesterday's decision.  That argument, recall, says that you're not allowed to patent something if, in so doing, you'd effectively block others from accessing a law of nature.  As this paper argues, that's a very plausible way to construe what gene patents do: they bar others from using the genetic sequence to develop tests, therapies, etc.  The Court's unanimous ruling yesterday suggests to me that there may be an appetite there to take up the gene patents case.  At least four justices have to vote to take up a case; here, the tea leaves following such a decision might indicate that there are four votes for striking down gene patents.  The Court's been saying for a while that it thinks there are too many patents, even explicitly a few years ago in a case about automobile accelerator pedals (not for nothing do we get the phrase "patent geek!").

What does all this have to do with digitality?  It seems to offer confirming evidence of a couple of points.  The first is that the gap between biology and informatics is rapidly shrinking.  That is, biology is increasingly a science of information, statistics, and probabilities, with human bodies being treated not so much as somatic wholes, but as repositories of information about risks, propensities for certain diseases, and so forth.  The gene patents case, for example, concerns Myriad Genetics' patent on the BRCA1 and BRCA2 breast-cancer mutations (btw, Myriad's stock slipped 5% following yesterday's ruling, and is off another 4% today, as of this writing).  Women who carry those mutations have a highly elevated lifetime chance of getting breast and ovarian cancers.  But the tests themselves only establish probabilities; they don't actually diagnose a symptom.  If it's true that 40-60% of women who carry the BRCA genes will get breast cancer during their lives, it's also true that the rest won't.  Those are terrible odds compared to the 12% baseline, but the point is that it's all a matter of processed information, not of medicine in the old-fashioned sense of "Doctor, I have a headache."  This is where a lot of the action is: the body is being mined as a source of information in order to slot people into risk categories for certain diseases.  Those who fit the categories can then make informed decisions about whether to preemptively do things like have their ovaries removed.  We thus see the rise of what Nikolas Rose calls the "presymptomatic patient."  Such an entity is only possible, I submit, when we understand the body as a source of information, as a product of information society.
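To make the arithmetic of "processed information" concrete, here is a minimal, purely illustrative sketch in Python. The figures are just the rough lifetime-risk numbers quoted above, and the function names are my own invention, not anything Myriad or any testing company actually uses.

```python
# Illustrative only: slotting a "presymptomatic patient" into a risk category.
# The numbers are the rough lifetime-risk figures quoted in the post, not clinical data.

BASELINE_LIFETIME_RISK = 0.12            # ~12% baseline lifetime breast-cancer risk
BRCA_CARRIER_RISK_RANGE = (0.40, 0.60)   # ~40-60% for BRCA1/BRCA2 mutation carriers

def lifetime_risk(carrier: bool) -> tuple[float, float]:
    """Return the (low, high) lifetime risk for a carrier vs. a non-carrier."""
    if carrier:
        return BRCA_CARRIER_RISK_RANGE
    return (BASELINE_LIFETIME_RISK, BASELINE_LIFETIME_RISK)

def risk_category(carrier: bool) -> str:
    """Turn a probability range into the kind of label that drives decisions."""
    low, high = lifetime_risk(carrier)
    return "high-risk (presymptomatic patient)" if high >= 0.40 else "population baseline"

if __name__ == "__main__":
    low, high = lifetime_risk(carrier=True)
    print(f"Carrier: {low:.0%}-{high:.0%} lifetime risk "
          f"({low / BASELINE_LIFETIME_RISK:.1f}x-{high / BASELINE_LIFETIME_RISK:.1f}x baseline)")
    print("Category:", risk_category(carrier=True))
```

Run as written, it reports roughly a 3x-5x elevation over baseline: a risk category, not a diagnosis - which is exactly the point about the presymptomatic patient.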

The other point is that the law just isn't adapted to this.  It's a really, really good question whether genetic sequences should be subject to patents.  The existing vocabulary for patentability might not handle it well, because that vocabulary was developed long before genomics developed very much.  This problem sits at the tip of a good-sized iceberg.  To take just one more example: patentability generally requires naming a single inventor.  Such a person is the primary patent holder.  But this model is ill-suited to the current scientific enterprise, where collaborative activities are spread across large numbers of people.  It also enables companies like RiceTec to attempt to patent basmati rice.  It took a herculean effort to invalidate this patent, precisely because patent law has trouble seeing inventions where one cannot also name the inventor or point to a first publication.

When Michael Hardt and Antonio Negri, then, say that "property makes us stupid," they're partly talking about the misfit between our understandings of property and reality.  "Reality" is partly about our new "posthuman" status.  It's also, for them, the thought that insisting on proprietizing everything causes us to forget that our daily lives function quite well without asserting property rights all the time.  I'll talk some tomorrow about more of what they mean, and the assumptions embedded in our copyright regime.

Tuesday, March 20, 2012

How's my driving?

I wanted to begin my stint as a contributor by trying to indicate some of the fundamental complexity of the issues that digitality raises.  Let's take the topic of privacy.  Everybody knows (and if you don't, read Richard Robeson's posts on this blog for a taste) that developments in surveillance technology pose significant threats to privacy, almost no matter how one conceives the term (and, of course, the very meaning of 'privacy' is notoriously difficult to pin down, even though what one means by the term makes a big difference for whether one thinks it deserves protection, as this paper demonstrates).  And of course no one is quite sure what to do about regulating these devices, as the Supreme Court's recent decision striking down the police's warrantless application of a GPS tracking device to a suspect's car illustrates (the majority opinion here was very narrow; Justice Sotomayor's concurrence signals that the Court may be ready for a landmark decision or two in these areas.  That there is some serious doctrinal confusion in the Court's Fourth Amendment jurisprudence about technology is a point that Justice Scalia made as far back as a 2001 case about thermal imaging devices).

The point I'd like to make is that the problems are both pervasive and go all the way down.  Consider webpages that want your personal information.  The standard narrative about privacy in these cases says that, when these sites provide notice of their information-sharing practices, and users consent to those, then everything is ethically kosher.  Call this the notice-and-consent standard.  The idea is that users get to exercise their autonomy by freely choosing whether or not to share information, as evidenced in their consent.

So far so good.  However, as this paper by Scott Peppet (a law professor at Colorado) argues, the very fact of control may itself be the problem.  Digital technology makes it very easy to signal verifiable information about oneself to others, as for example by letting Progressive Insurance monitor one's driving habits.  This information is often of economic value.  Progressive, for example, wants to know who the good drivers are, and wants a better way to find out than either by asking ("are you a good driver?" "uh, yeah") or by going through proxies like age and gender.  The device delivers that information in a level of detail formerly unobtainable, short of the company hiring somebody to follow you around.  Users are free to opt in to the monitoring or not.  So this is a perfect example of notice and consent.  But.  The best drivers have an automatic incentive to signal their best-ness to Progressive, and secure their discount.  The next-best drivers want to group themselves with the best ones... and so it goes.  Pretty soon, even the worst drivers are having to consent to monitoring, in order to avoid the even greater stigma (in the form of much higher rates) of refusing to be monitored ("what are you hiding?"). And so a very strong economic pressure makes the cost of retaining one's privacy prohibitive - even as (or even because) one has perfect control over the act of consent.
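Peppet's unraveling dynamic can be made concrete with a toy model - my own sketch, not Peppet's or Progressive's. Assume drivers differ in expected cost, the insurer prices the unmonitored pool at its average, and anyone whose true cost beats the pooled rate opts in to monitoring; repeat until nothing changes.

```python
# A toy model of the "unraveling" argument (my own illustration, with made-up numbers).
# Monitored drivers pay their true expected cost; unmonitored drivers pay the
# average cost of whoever remains in the unmonitored pool.

def simulate_unraveling(costs: list[float]) -> list[bool]:
    """Return one opt-in decision per driver, after the market settles."""
    opted_in = [False] * len(costs)
    changed = True
    while changed:
        changed = False
        pool = [c for c, chosen in zip(costs, opted_in) if not chosen]
        pool_rate = sum(pool) / len(pool) if pool else 0.0
        for i, cost in enumerate(costs):
            # A driver opts in as soon as their true cost beats the pooled rate.
            if not opted_in[i] and cost < pool_rate:
                opted_in[i] = True
                changed = True
    return opted_in

if __name__ == "__main__":
    # Hypothetical expected annual claim costs, safest driver first.
    drivers = [400.0, 600.0, 800.0, 1000.0, 1200.0]
    for cost, opted in zip(drivers, simulate_unraveling(drivers)):
        print(f"cost {cost:6.0f}: {'monitored' if opted else 'stays in pool'}")
```

In this toy run everyone but the riskiest driver ends up monitored, and the lone holdout pays the highest rate anyway - the "what are you hiding?" stigma in numerical form.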

At the very least, this ought to be troubling - Peppet argues that it's generalizable to a wide range of contemporary privacy issues.  More fundamentally, it seems to me that it shows that digitality challenges our notions of ethical agency and personhood - autonomy may not be doing the work we need it to; at the very least, we may need to think hard about what, precisely, an "autonomous" decision is.

Saturday, March 17, 2012

Is anything new under the sun?

These days there are many new books on the impact of information technologies on us, and they fall into three general categories: 1) technology is the best thing since sliced bread and is enriching our lives; 2) technology is destroying us bit by bit; and 3) there is nothing new under the sun and we've visited all of these benefits and concerns before. I tend to fall into the third category. From Socrates's too-often-visited words in the Phaedrus on the debilitating effects of writing on memory (think Google), to the debate that raged in eighteenth-century Europe over the socially stunting effects of letter writing (think Facebook), to the economic, democratic, national-defense, and information-spreading benefits of the railway in the nineteenth century (think Internet), there seems to be nothing entirely new out there.