Wednesday, June 22, 2011

Fear Itself (and everything else)

The future doesn't excite me. I'm not talking about MY future, I'm talking about OUR future, and that's part of the problem. When I say "our", it's not even clear what that means. It could mean everybody. Or it could mean nobody. It could mean a lot of things, but I'm getting ahead of myself.

There are plenty of reasons to be bummed about the coming years: Michael Bay films, the end of the Tim Thomas era (too short to begin with), climate change (and whatever comes with it), the continued existence of reality- or vampire-based television (vampire reality series?), political unrest, global economic crises, and the transition to a post-apocalyptic wasteland. The post-apocalypse itself will vary depending on your sources: totally bad-ass (Mad Max); odd, personal, and quest-based (the Fallout series); or with an abundance of snacks (Andrew Bird's "Tables and Chairs").

I'm not a pessimist. I'm not a misanthrope. I can be cynical, but I wouldn't consider myself a cynic (there may be some who disagree with that statement). All of that being true, I try to think about things from every angle I can come up with. Most of the time, the level of importance of these things isn't nearly as high as the amount of time and energy I invest in thinking about them, and the end result leaves me slightly less mentally stable than I was before. Sometimes, however, the things I think about are (or at least could be) important, and I think this is one of those times.

If he has his way, Raymond Kurzweil will never die. I suppose this sentence could be said about most people and it would be somewhat true, but Kurzweil is probably doing more about it than anyone. Spurred by his father's death from heart disease, along with being diagnosed with Type 2 diabetes, Kurzweil has worked with doctors specializing in longevity, and now takes between 150 and 200 pills a day. But that's just where it starts. Kurzweil wants pills that think.

Kurzweil is one of the leading voices on the Singularity. The Singularity is the name for the moment when humans create a computer that is more intelligent than humans, and while no one is sure exactly when that will happen, Kurzweil thinks it will be around 2045. If his prediction doesn't sound like it means anything, consider this: he accurately predicted the rise of the internet (as well as access becoming mostly wireless), the fall of the Soviet Union through (then) new technology, and even that a computer would one day beat the world's best chess players. He's not without detractors, but he's been right often enough for people to listen.

One of the problems with anticipating the changes that would come with an event like the Singularity is considering how big an event like this would actually be. Kurzweil likes to talk about how technology grows exponentially, and he's right. He's created a chart based on Moore's law (which states that the number of transistors you can fit on a microchip doubles roughly every two years). Kurzweil charted how many calculations per second you can buy for $1,000. His chart moves much like Moore's, a steadily rising curve of technological advancement. And if he and others are right, the Singularity is when that curve reaches a point even he doesn't fully understand. If computers ever become capable of strong A.I. (artificial intelligence that meets or exceeds human intelligence, more on that later), the belief is that they could then start designing more powerful computers, which would then design more powerful computers, and so on. This is when the possibilities become seemingly endless. Among the more incredible predictions are medical nanobots that will practically live inside of us and extend our lives significantly; mind uploading, which would allow us to "live" mentally inside of a computer; implants and enhancements that will allow us to live more productive lives, possibly forever; and virtual immersion devices that people will spend much of their time in. If that last one sounds like The Matrix, you read it right. Kurzweil himself has claimed that The Matrix is a good example of what's possible, but without the dystopian twist.
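To get a feel for why exponential growth is so hard to wrap your head around, the doubling arithmetic is easy to sketch out. A minimal illustration (the clean two-year doubling period and the date range are simplifying assumptions for the sake of the math, not Kurzweil's actual data):

```python
# Illustrative only: project compute-per-dollar growth assuming a clean
# two-year doubling period, in the spirit of Moore's law.

def doublings(years, doubling_period=2):
    """Number of full doublings that fit in a span of years."""
    return years // doubling_period

def projected_multiplier(years, doubling_period=2):
    """How many times more calculations per $1,000 you'd get after `years` years."""
    return 2 ** doublings(years, doubling_period)

# From 2011 (when this post was written) to Kurzweil's 2045 estimate:
years = 2045 - 2011          # 34 years -> 17 doublings
print(projected_multiplier(years))  # 2**17 = 131072, roughly 130,000x
```

Seventeen doublings sounds modest; a 130,000-fold jump does not, which is exactly the intuition gap the curve exploits.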

The Matrix is far from the only movie that paints a bleak picture of a future dominated by machines, and while that's not my focus right now, this seems like a good time to discuss the point. While Kurzweil admits the possibility of super-intelligent computers posing a threat to humanity, he doesn't consider it likely. One of Kurzweil's predictions involves computers that can interact with and show respect for humans. This is when the concept of "strong A.I." goes a little fuzzy for me. Truly strong A.I. isn't just a computer that knows a lot. It's a computer that can act intelligently. A strong A.I. would be as intelligent as a human being, and probably more intelligent than the smartest human being, and by definition, would be able to formulate its own thoughts. When Watson appeared on Jeopardy!, it showed that it was more intelligent than the two very intelligent people it was put up against, but only in that limited capacity. Watson had to be programmed to understand the answers it was given in order to find the appropriate question. Watson is an incredible piece of technology, but it's still being told what to do by its programming. If something needs to be told by programming how to behave/act/talk/etc., it is not strong A.I. So how would anything like this know to "respect" living people? It might choose to be respectful, but it might not. It's probably not a stretch to consider that, assuming we get to the point when we are making computers that need this kind of control programming, it's only a matter of time after that when they don't get it. And while we can run projections and make guesses, no one is really sure how such a computer would respond.

But we're not talking about that.

Obviously, a lot of good could come from these possible advances in technology, but even the positives lead to more questions and uncertainty. Who decides who will have access to these advances? Will we need to apply for a license for extended, or even permanent lives? Will it simply be a matter of financial resources, with immortality going to the highest bidders? Will anyone be able to scan their "self" into a hard drive? Will Obamacare cover medical nanobots? What about food for a planet filled with immortal hyper-beings? Kurzweil has talked about food that is grown (constructed?) using technology we will eventually create. What happens if we don't? Or if it proves too difficult to produce enough food in this way for a planet that would most likely go through a population explosion like it has never seen before? Personally (for some, at least), living forever would seem like the greatest accomplishment possible for man. Socially, it's most likely a complete disaster, at least as we (and again, we don't know what "we" even means in this scenario) transition into living with this technology.

I don't understand the appeal of living forever. There are certainly interesting things that will happen long after I die (or long after my essence is converted to iTunes) that I would like to be around for, but does that give me the right to force myself on the world forever? And is that curiosity even worth it? This might seem like it goes against statements I made earlier in this post, but at times, I find life exhausting. While it's possible I do a lot of it to myself (refer to the third paragraph for an example), it seems to be how I deal with the world, which makes the world at least partially at fault. I like to slow things down from time to time, and that usually means getting away from the devices that have already augmented our lives. Don't get me wrong, I like having the ability to contact friends and family whenever I want, or to get directions on the fly, or even update Facebook with whatever mostly meaningless thought goes through my head (hey, everyone else does it). But it's also nice to cut myself off from the objects that give me those options, even if it's just for a little while. Well, what happens when those objects have become an actual part of us? What happens when we become walking smartphones, or more? Kurzweil talks about a point in time when the entire planet becomes, in essence, a giant supercomputer. He even talks about eventually doing this to the entire universe.

I'm not opposed to moving forward. And I'm not saying that these things, if possible, shouldn't happen. I like that we are trying to push forward as a species. I applaud Kurzweil and men and women like him who challenge themselves to think bigger than those who came before them. What I'm saying is that people, throughout history, have shown a tendency to believe that being able to do something was reason enough to do it. It's not. The future that Kurzweil and others present us is loaded with more potential than we can possibly make sense of, and we need to do just that before we get there.

As for me, I like being able to turn things off from time to time. And I want to live in a world where I still have that option.

1 comment:

  1. I'm trying to find a very specific quote about earth being a supercomputer, though now that I've posted this I feel like I probably don't need to post the quote. Ok then, carry on.
