The Technological Singularity

Discussion in 'Soap Box' started by Edgar Roni Figaro, Jul 26, 2010.

Thread Status:
Not open for further replies.
  1. Edgar Roni Figaro

    Edgar Roni Figaro Well-Known Member

    Are there any transhumanists on the forum?

    If you believe that Moore's Law

    "( Moore's Law - The observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future. In subsequent years, the pace slowed down a bit, but data density has doubled approximately every 18 months, and this is the current definition of Moore's Law, which Moore himself has blessed. Most experts, including Moore himself, expect Moore's Law to hold for at least another two decades. )"

    will continue at the pace it has held for the past century, then the timeline puts us at around 2016-2019 for the first CPU with processing power equal to a single human brain.

    If the trend continues, then by around 2029 a single CPU is expected to have the processing power of 1,000 human brains combined. Projections for CPU processing power have been revised upward since the 1970s; we have exceeded the projected pace of advancement in each of the past three decades. (A quick sketch of the doubling arithmetic is below.)
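
    A quick way to sanity-check that timeline (a rough sketch in Python, assuming the 18-month doubling figure quoted above; the 1000x target is the post's own number, everything else is illustrative):

    ```python
    # Rough sketch: how many years of steady 18-month doublings does it
    # take to grow capacity by a given multiple? Purely illustrative.
    import math

    DOUBLING_PERIOD_YEARS = 1.5  # the 18-month figure quoted above

    def years_until(multiple: float) -> float:
        """Years of 18-month doublings needed to grow capacity by `multiple`."""
        return math.log2(multiple) * DOUBLING_PERIOD_YEARS

    # Going 1000x beyond human parity takes about ten doublings:
    print(years_until(1000))  # ~14.9 years, the same order as the post's
                              # jump from 2016-2019 to 2029
    ```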

    Once a CPU with higher intelligence than a single human is created, that CPU will be able to build a superior version of itself. At first it may take that first CPU (call it A) two years to build a superior CPU B. CPU B might then take one year to create a superior CPU C, and CPU C six months to create a superior CPU D. As the progression continues, each generation builds its superior successor faster and faster, until new technology is being created every hour, then every minute, then every second, until it moves so quickly that human beings can no longer understand it. (A toy model of this runaway progression is sketched below.)

    That point in time is what is referred to as the Singularity. It is the point at which mankind must decide either to genetically engineer itself, to augment itself with microprocessors and become vastly more intelligent, or to go extinct as a species as technology becomes so advanced that mankind understands it the way an ant understands mankind.
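
    Here is a toy model of that progression (purely illustrative; the two-year starting interval and the strict halving per generation are just the assumptions from the paragraph above):

    ```python
    # Toy model of recursive self-improvement: each generation designs its
    # successor in half the time of the previous one. The 2-year starting
    # interval and strict halving are illustrative assumptions.
    ONE_SECOND_IN_YEARS = 1 / (365 * 24 * 3600)

    interval_years = 2.0
    elapsed_years = 0.0
    generation = 0

    while interval_years > ONE_SECOND_IN_YEARS:  # stop once a generation takes < 1 s
        elapsed_years += interval_years
        generation += 1
        interval_years /= 2

    print(generation, elapsed_years)
    # ~26 generations, yet total elapsed time stays under 4 years, because
    # 2 + 1 + 0.5 + ... is a geometric series that converges to 4.
    ```

    The striking part is that the whole cascade fits inside a finite window of a few years; that compression is exactly what the word "singularity" is pointing at.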

    Today there are already human beings with microchips placed on their brains, mostly quadriplegics. The microchips read the electrical impulses that the neurons send through the quadriplegic's brain and decode them with software running on a CPU. The result is that a quadriplegic is able to surf the internet simply by thinking of words and thinking of the mouse cursor moving in a particular direction.

    It will be another decade or so before people can augment their brains with microprocessors, but it should happen within most people's lifetimes. So what do you guys think about it?
     
  2. empty101

    empty101 Well-Known Member

    I've read about technological singularity before.

    You've only talked about the physical hardware. Sure, we may have the processing power to do a trillion-plus things at once, but what's the use if all it can do is work out that 1+1=2 a trillion times?

    Will we really be able to make software that is capable of making itself better? Is it even possible? A lot of what we hear from psychologists tells us that we use only a fraction of our brain power. Sure, if we go to school and learn, we can get smarter. However, I think there is a difference between improving something and bringing something to its full potential. Even if we could change the physical structure of our brains (e.g., making them bigger), I think it would ultimately be limited.

    A monkey wouldn't know how to make himself much smarter. Have you ever met someone who had a terribly stupid perspective on everything? So stupid that if you tried to explain to them why they were stupid, they wouldn't understand. Perhaps they don't understand logic at all, yet your only way to explain it is by using logic. Maybe after a very long, frustrating time you could teach them logic, but what if there were no one to teach them?

    Have computers really done anything we couldn't do? I mean, they've made a lot of things more practical. They've turned days' or years' worth of calculations into a couple of milliseconds. But at the end of the day, it's really the human that tells the computer exactly what to do. There's no next level of intelligence with computers, only a lot more of the stuff we could already do ourselves. The best program on the planet is a result of the people who made it.

    I think technology will improve, and it will do things for our lives that we can barely dream of, but I don't think there will be a technological singularity.
     
  3. Edgar Roni Figaro

    Edgar Roni Figaro Well-Known Member

    I guess it all comes down to whether the CPU will be capable of learning. Right now the problem is that the human brain, while it has billions of neurons, has multiple connections to each neuron, resulting in trillions of pathways that contribute to the brain's processing power.

    CPUs do not yet have the capability to mimic the brain, but that will probably change in the future. The question is whether CPUs can form the kind of network the human brain has, in which a single transistor makes connections with multiple transistors around it the same way a neuron in the brain makes multiple connections to other neurons. (A minimal software sketch of the idea is below.)
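
    The "many connections per unit" part is at least easy to state in software; here is a minimal sketch (the weights, threshold, and inputs are made-up illustrations, not a brain model):

    ```python
    # Minimal sketch of a unit with multiple incoming connections: it sums
    # weighted inputs from several neighbors and "fires" past a threshold.
    # All weights and the threshold here are illustrative assumptions.

    def neuron_output(inputs, weights, threshold=1.0):
        """Return 1 if the weighted sum of inputs reaches the threshold."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # One unit listening to four neighbors at once:
    print(neuron_output([1, 0, 1, 1], [0.5, 0.8, 0.3, 0.4]))  # -> 1
    ```

    Of course, the hard part isn't this arithmetic; it's wiring billions of such units with trillions of connections efficiently in hardware.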

    One thing is for sure: the future is going to be really interesting, provided there is no WW3 before this kind of technology exists and we get to see whether it is possible or not.

    The other possibility, IMO, is much worse: that there is no singularity, and we eventually reach a limit on the number of transistors we can place on a microchip. When that happens we will hit a hardware ceiling that prevents any CPU-based technology from advancing further. If that happens, it will be a crisis of human limitation that could destroy us as a species. We would find ourselves stagnated in our understanding of the universe around us and be left asking the same questions generation after generation with no way to answer them. That, to me, sounds like the definition of hell on earth.
     
  4. aoeu

    aoeu Well-Known Member

    I suspect it's not possible, though that may be coloured by my hope that it isn't. There are physical limits to computing power. Energy density and energy use are one such probable limit. The speed of electrons is a temporary limit; when we switch to optical processors, the speed of light will be a hard limit. The basic randomness of the universe might be a problem too, though I don't know enough about quantum physics or quantum computing to say. There are also physical limits on the size of computers: increasing the number of processors can only go on so long before you've consumed all the possible computing materials on Earth. (A rough number for the light-speed limit is below.)
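
    To put a rough number on that light-speed limit (a back-of-the-envelope sketch; the 3 cm chip size is an assumed example, not a spec):

    ```python
    # Back-of-the-envelope: even at light speed, a signal crossing a chip
    # takes finite time, which caps how fast distant parts of the chip can
    # stay in sync. The 3 cm die size is an illustrative assumption.

    SPEED_OF_LIGHT_M_PER_S = 3.0e8
    DIE_SIZE_M = 0.03  # assume a 3 cm chip

    crossing_time_s = DIE_SIZE_M / SPEED_OF_LIGHT_M_PER_S
    max_sync_rate_hz = 1 / crossing_time_s

    print(crossing_time_s)   # 1e-10 s: a tenth of a nanosecond per crossing
    print(max_sync_rate_hz)  # 1e10 Hz: ~10 GHz if every cycle must span the die
    ```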

    You say it'd be worse to stagnate than to enter a singularity? I'm not so sure I wish to be obsolete.
     
  5. nolonger

    nolonger Well-Known Member

    How can I 'stagnate' twice? :tongue:
     
  6. johnnysays

    johnnysays Well-Known Member

    I actually think Ray Kurzweil is closer to the truth than people give him credit for. He is, of course, a big dealer in the singularity market; he's sold a few books on the topic. Someone who knows about the singularity but doesn't know about Ray doesn't really know about the singularity.

    One of the possible reasons we haven't detected alien civilizations is that the period during which a civilization uses radio communications (on the bands SETI listens to) is very short, which would reduce the chances of finding genuine signals. The research and writing Ray Kurzweil has compiled supports this hypothesis, because he argues that technological advance is exponential. At that pace, a civilization might only use radio communications for a century or two; the next several years will see more advances than the past 100. So maybe the universe is so "quiet" because most of what is out there has either reached the singularity or is so far ahead of us in evolutionary terms that we're unable to detect them any better than an ant can detect us. In other words, we may very well be "detecting" alien civilizations without realizing that what we're "detecting" is artificial!

    So how does this relate to AI? Well, if you're judging the prospect of a human level AI based on what we've seen and learned then you're going to be unable to predict what will happen in the future. Essentially, whatever makes human level AI possible is difficult for us to predict at this time, but it will likely happen faster than most people think because we don't think exponentially.

    Look at this:
    http://www.youtube.com/watch?v=AOaZspeSBZU

    I feel that we cannot predict the future in detail, but we can predict it in general terms. This is what Arthur C. Clarke did in that link. He did not do an error-free job of it, but he was very close to the truth. From this, I predict AI will get better. I also predict we will increasingly become cyborgs. This means it won't be an us-versus-computers, black-and-white kind of argument. It will be: will we become a part of computers fast enough to keep up with the rate of advance?

    Why will we become cyborgs, though? Because over time we have developed more and more devices to better interface with our computers. Just as Arthur saw that we increasingly network, I see that we increasingly interface with our computers. The devices themselves have generally become smaller and smaller. We want convenience. This leads me to believe that one day you won't look at a monitor; you will simply SEE. You won't move a mouse cursor with your hand; it will move in response to a thought or something else. We might not even need to go to our computer to use it. We may be able to use it anywhere in our house with just our mind and a chip in our brain. Who knows?


    However, that's not to say that we're not making progress at THIS TIME.

    Look at both links (they relate to AI):
    http://www.youtube.com/watch?v=G6CVj5IQkzk
    http://www.youtube.com/watch?v=oozFn2d45tg

    Something Ray said a while back stuck with me. I'll share it here...

    Some people say that copying our mind onto a computer system would create a dilemma: who is the real you, the flesh-and-blood you, or the you that's now on the computer? Ray offers a unique way of looking at this dilemma. He said to imagine one part of you being replaced by a synthetic version. For example, maybe you replace a real neuron with a synthetic neuron. Now let's imagine that the artificial part operates the same way, with the same inputs and outputs. Then imagine replacing another and another and another, until all of your original parts are synthetic. Now ask yourself: who is the real you?
     
    Last edited by a moderator: Aug 1, 2010
  7. Edgar Roni Figaro

    Edgar Roni Figaro Well-Known Member

    You bring up so many awesome points.

    First, though, I think Ray Kurzweil makes money off his books because he knows so much about this and is able to explain it, better than anyone else, to those who want to understand it. So I don't blame him for profiting a little; he is really profiting because people want to understand the knowledge he has.

    On the thought of alien civilizations: I remember reading an article about how one of the reasons we may never encounter an alien civilization is the exponential growth of technological progress. The alien civilizations may all have reached a technological point where they were able to upload their entire consciousness into a CPU made of pure energy running at the nano level. They would thus seal themselves off from the universe and all threats in it inside this system and live eternally.

    On the prediction of technological advancement: I was watching a 20-minute program on YouTube yesterday in which Ray Kurzweil went through and explained about 12 different charts relating to technological advancement.

    He talked about how the progress of technology was slowed by neither the Great Depression nor any of the world wars; in short, it seems unstoppable. And while Moore's Law is predicted to slow down by around 2018, he pointed out that before the invention of the transistor we used vacuum tubes for everything, and engineers shrank the tubes until they couldn't get any smaller. Some thought that was the end of technological progress, but then the transistor was invented and the exponential growth continued. He said the transistor will reach its limits by around 2018, but there are already new nano-scale systems in development that will be able to create atomic-sized components to replace those transistors and continue the trend.

    I also saw the program you are talking about, where he discussed replacing one neuron at a time with a synthetic neuron, and I truly believe you would be the same person.

    As far as cybernetics is concerned, we already have cyborgs. Have you seen the man who had an operation to run fiber optics into and around the nerves in his arm and connect them to a computer? The computer was connected to a cybernetic arm. After a little practice, when he used his mind to simulate moving his hand, the cybernetic hand moved as well. This is amazing.

    Here is the video

    http://www.youtube.com/watch?v=ppILwXwsMng&feature=PlayList&p=8D3CC59FD9B93FB4&playnext=1&index=86


    There is no doubt that in our lifetime we will witness the complete transformation of mankind into something far superior to what we are now. Most people don't seem to understand exponential growth.

    Technology has been growing exponentially for over 100 years. If you take 30 linear steps, you go from 1 to 30. If you take 30 exponential steps (doubling each time), you go from 1 to over a billion. (Quick check below.)
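
    The arithmetic checks out if each "exponential step" means a doubling (a two-line check, purely illustrative):

    ```python
    # 30 linear steps of size 1 reach 30; 30 doublings reach 2**30,
    # which is just over a billion.
    linear = 30 * 1
    exponential = 2 ** 30  # 1,073,741,824

    print(linear, exponential)
    ```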
     
  8. johnnysays

    johnnysays Well-Known Member

    Links are organized by date, from earliest to latest:

    New graphene transistor promises life after death of silicon chip (Update)
    http://www.physorg.com/news91891899.html
    Graphene transistor may save Moore’s Law
    http://tech.blorge.com/Structure: /2007/03/05/graphene-transistor-may-save-moores-law/
    TR10: Graphene Transistors
    http://www.technologyreview.com/read_article.aspx?ch=specialsections&sc=emerging08&id=20242
    Graphene transistor speeds up
    http://physicsworld.com/cws/article/news/37204
    IBM hits graphene transistor "breakthrough"
    http://www.zdnet.com/blog/btl/ibm-hits-graphene-transistor-breakthrough/30447
    Graphene transistor could advance nanodevices
    http://www.physorg.com/news192786026.html

    Most of the links are just rehashing the same information, but each one has something unique.
     
    Last edited by a moderator: Aug 2, 2010
  9. Just_a_guy

    Just_a_guy Well-Known Member

    Pure processing power is irrelevant if you don't know how to use it correctly. The human brain might not have the most powerful data-processing capabilities, but it sure as hell is efficient, and it has a complex and powerful neural network. Simulating or building a neural network like that is no easy task, if it's possible at all.

    A computer by itself is just a thingy that does calculations. Can that really be considered "intelligent"?
     