Monday, January 14, 2013

From Dependence on the Internet to Artificial Intelligence


I really wanted to focus on an idea mentioned in the Carr article, “Is Google Making Us Stupid?”: the idea that in the future our brains could be supplemented or replaced by artificial intelligence. In the article, Sergey Brin is quoted as saying, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” This is completely terrifying to me, but it is something we have to consider as the human race keeps making technological advances.

Could we reach a point where humans would choose, or even be forced, to supplement our brains? If all that knowledge were at our fingertips at a moment’s notice, would we stop learning, or at least stop making an effort to learn? That vision suggests that the mind is only useful for the information it holds and that all knowledge is easily quantifiable. It also points toward a future like Anderson’s Feed (http://www.amazon.com/Feed-M-T-Anderson/dp/0763662623), where creativity has been completely wiped out and we are constantly receiving advertisements directly to our brains.

As pointed out in the article, the same things were said at the advent of other technologies like writing and the printing press. Is that comparable to the coming age of artificial intelligence? Perhaps I am just worrying too much, and the human race can adapt to this sort of change as easily as we adapted to the printing press and to being able to write things down. Still, I live in fear of a society where humans stop learning and innovating because everything they need is right at their fingertips.

7 comments:

  1. I was excited to see you mention the book Feed in your blog post. I also drew a lot of parallels between Feed and the articles we've been reading lately in class. However, I tend to see the future of technology a little more optimistically. As stated in one of our readings, new waves of technology are always met with skepticism. To continue with the text's example, consider the printing press. While the printing press started an influx of mediocre literature, the mediocrity was not permanent. Eventually, the texts deemed unworthy by consumers died off until only relevant texts remained. The same pattern holds on the internet: irrelevant information and websites are accessed less and less to make way for more fitting information and websites. Indeed, I think consumers serve as technology's quality control, and I think we will continue to do so in the future.

  2. Last semester in my ENG 213 class, we watched a documentary about Ray Kurzweil, and I became incredibly interested in the guy. He has a lot of viewpoints that deal with the very concept you're talking about. It is his belief (and it makes sense) that technological innovation is exponential: the more technology you have, the more you can use it and build on it to make new technologies. Because of this, he believes there will come a point when our technology grows so much in so little time that we will improve what we have many times over in a remarkably short span, and the resulting world will be what Kurzweil calls "the singularity."

    After we watched the documentary, we had a discussion about it, and a few people expressed the same concern that you have. When we have all the information in the world accessible to us, and virtual worlds in our heads, will we even have any drive to do anything else? You sort of expect teachers to counter questions like that, but I was surprised to hear my professor say something to the tune of "it's very likely that we won't, because there may not be anything left to improve." It's a strange thought: not having anything left to improve.

    While the worlds of "Surrogates" and "Feed" (which I haven't read, but it sounds good) seem scary, it may be the direction we're heading. Could we really reach a point where there's nothing left to improve, though? If so, would that point really be so bad?

  3. I agree that this idea is very terrifying. I think it comes down to the philosophical question of what intelligence is, anyway. I believe that artificial intelligence is more stupidity than anything else. I feel this is one of the dangers of the continual exponential growth in technology. I really like your point about how creativity will vanish if artificial intelligence takes over. If everyone has the same knowledge and expertise, how will our world work effectively? I do not believe this is a good/bad or yes/no argument, but it is something to be careful with. Just like the 213 documentary said, this is the type of stuff that causes world wars.

  4. I think it's terrifying, but I really have trouble believing that we would approach A.I. irresponsibly. I think we've spent too much time in the world of SciFi witnessing and experimenting with the results of A.I. to make the same mistakes that many characters in the genre tend to make. While the idea of irresponsible use of A.I. is frightening, I don't think it's something we'll have to concern ourselves with.

    Also, I think that access to knowledge will only create curiosity and innovation. Even if we have a bunch of information at our fingertips, we'll still have to search for it, or even make the decision to search for it. If I decide that I want to learn something (even if learning it is easy), I'm making the decision to alter myself. I'm innovating who I am.

  5. I really liked how you used Feed as well, because after rereading it for a second time last semester and discussing it in class, Feed kept popping into my head while reading this article. I think what makes the idea of inserting AI into your head so frightening is the unknown consequences of doing so. I mean, look at what happened to Violet in Feed. Something went wrong and something bad happened. We don’t want to get hurt by something that is supposed to make us better. I know I don’t. I think the same thing about Ray Kurzweil and all that he does to his body and his work. The ideas Kurzweil has are what some would call crazy, but fascinating at the same time, because they are something new. That is just what I think (but I would probably never participate, just saying).

    I think you have a right to worry. Like I said before, we don’t know much about what will happen when technology is involved. I like to think good will come out of it, like the printing press you mentioned, or that we, as a society, will learn from the mistakes that are made. I think that even if things are right at a person’s fingertips, we will still keep learning and upping that technology with something else. That is just what I have gathered that we do as a race: we try to advance ourselves and our things more and more. But who’s to say that will continue…

  6. One thing that I do appreciate about how our race has developed is that we're constantly looking to improve. We push ourselves to reach higher goals than ever before. Look at the educational systems! Students are obtaining higher scores. They are succeeding and surpassing past accomplishments. We are making things better than before. However, we are also failing more. There's a wider spectrum of intellect in the world to consider, and to that I say, "Let's look a bit closer." I don't think (or at least I truly hope) that we as a culture are moving towards instant downloads of information that make the computer superior to the humans who created it. Even if that does happen (picture the fat, pod-hovering people from Wall-E), we will still strive to improve beyond that. The goal is not to become lazy, ignorant citizens of society. It is to access information more efficiently than ever before. Our advances challenge us to think and absorb information differently. We are thinking differently than we did 50 years ago, and that bothers some people. Some individuals are going to keep racing against that bar of intellect that keeps rising. Others who are less competitive won't. A.I. is not a fear factor. It's a new challenge for us to work with.

    On the subject of A.I. and intelligence sinking lower or rising higher, I find that we don't define intelligence clearly enough. Who is to say the mechanic who can look at a car and see what is wrong with it is less intelligent than a doctor who has to guess, out of thousands of illnesses, which one a patient is experiencing? How do we measure who's "smarter" when intelligence is so much like talent? It varies from person to person. I don't see how one talent is less than the others. We utilize all (or perhaps most) kinds of intellect in our lives one way or another. If a robot has A.I., is its logic superior to the care and love of Mother Teresa? Can it lead a culture to gain human rights like Martin Luther King? Maybe. I don't know.

    What is intelligence?

  7. Well, I'm not sure if it would ever come to a point where humans were basically robots. But it is a little scary to think that, with artificial intelligence, there are people out there experimenting with substituting the brain with something that is technologically better. I think it's very "human" of us as a society to constantly want something better. But my view is that we ourselves are constantly evolving; there will always be that grey area in life, while with technology it's very black and white. I think technology would lack the capability to consider certain emotions that may help make a decision. I also think that, with how much everyone relies on technology, something will eventually happen that changes that reliance. I feel like a lot of people set themselves up for weakness by relying on technology, because just because something can be created doesn't mean it can't be destroyed.
