Wednesday, July 10, 2024

Why am I afraid of AI, and why should you be too?

  About 10 years ago, I started working with early AI models. The first thing we built was not AI at all. We called it "The Radar". It was just a dispersion model in which we injected words onto a round radar screen, with adjustment weightings so that the words would arrange themselves automatically into clusters. And lo, it worked. With the right variables attached, the words would cluster by meaning, with opposite meanings at opposite ends of the radar. A kind of automatic clustering where you give meaning to distance, and a strange and meaningful result emerges: a word "radar map" of a book, for example.
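The core idea — that distance can carry meaning — can be sketched in a few lines. The coordinates and the distance threshold below are invented for illustration; the original Radar used learned weightings, not hand-picked values like these.

```python
import math

# Toy 2D "meaning" coordinates, hand-picked for illustration only.
words = {
    "happy": (0.9, 0.8), "joyful": (0.85, 0.75), "glad": (0.8, 0.9),
    "sad": (-0.9, -0.8), "gloomy": (-0.85, -0.7), "miserable": (-0.8, -0.9),
}

def dist(a, b):
    # Straight-line distance on the "radar screen".
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Greedy clustering: a word joins the first cluster containing
# any word closer than the threshold; otherwise it starts its own.
clusters = []
for word, vec in words.items():
    for cluster in clusters:
        if any(dist(vec, words[w]) < 0.5 for w in cluster):
            cluster.append(word)
            break
    else:
        clusters.append([word])

print(clusters)  # positive words end up in one cluster, negative in another
```

With coordinates chosen so that similar meanings sit near each other, the grouping falls out of geometry alone — no dictionary of synonyms is ever consulted.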

  Move ahead 5 years, and transformers started appearing. Transformers were doing similar work, but in a more complex space with more dimensions. In doing so, they weighted words to estimate the likelihood of each being the next word in a sentence. This is why, today, some people still insist that language models are just prediction engines that "guess" what the next word will be. (Which, in a way, they are. The interpretation is not false, but it completely misses the complexity of what really happens.)
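The "just predicting the next word" view is easy to make concrete with the simplest possible predictor: a bigram counter. This toy corpus is mine, not from the post, and real transformers condition on far more than the single previous word — which is exactly the complexity this sketch misses.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram table: for each word, count which words followed it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequently observed follower.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" twice, beating "mat" and "fish"
```

A transformer replaces this lookup table with billions of learned weights over long contexts, but the output is still, formally, a guess at the next word.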

  But with multi-dimensional transformers holding billions of entries (words, sentences, etc.), used in loops billions or trillions of times, something strange started happening. A new paradigm emerged: the models would, for example, create "nodes", or concepts, which helped them "understand" the meaning of words. And consequently, slowly at first, then faster and faster, a strange and uncanny prediction pattern started to appear: intelligence! (built from patterns and relationships)
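One way to glimpse how "concepts" can live in a high-dimensional space is cosine similarity between word vectors: related words point in similar directions. The three-dimensional vectors below are invented for illustration; real models learn thousands of dimensions from data.

```python
import math

# Toy embedding vectors, invented for illustration only.
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.9, 0.15],
    "apple": [0.1, 0.2, 0.95],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 means same direction, near 0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(emb["king"], emb["queen"]))  # high: related concepts
print(cosine(emb["king"], emb["apple"]))  # low: unrelated concepts
```

Nobody programs "royalty" into such a model; the relationship exists only as geometry, which is what makes the emergence feel uncanny.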

  Today, we still have difficulty defining what intelligence really is. The best definition is "the ability to solve a problem with a unique and original solution". This is a useful, although far from complete, definition. More interestingly, it is neither the philosophers nor the deep thinkers who have been making progress on this path toward understanding intelligence, but, surprisingly, the software designers. By tweaking and refining their models, they have created a new paradigm of solution-seeking machines which have slowly become better and better at their tasks until, eventually, there was no difference from humans. With the right prompts and preparation, ChatGPT has no problem passing the Turing test.

   Understanding this, why am I afraid of AI, and why should you be too?

  Like most specialists, 10 years ago I believed that breakthroughs would happen one after another through the 2020s and 2030s, and that we would eventually get a better grasp of intelligence before being able to replicate it in the early 2040s. I was wrong! Everything was already on the table. Backpropagation and transformer models, scaled up millions of times, were enough to reach intelligence and understanding.

  This has a very profound consequence. If we could get this far with these tools, why can't we get much further by scaling up another 10, 100 or a million times? Well, this is exactly what we are on the verge of doing, and the whole current AI craze is about precisely that. But should we?

  It is in any case unavoidable. We are, like ALL living systems before us, involved in an arms race, and so, willingly or not, we WILL create advanced AI. According to Elon Musk, it is now one or at most two years away. From my experience, AI is already performing, in pure intelligence, at an IQ equal or superior to 150. It will be above any human by the end of the year, and from then on the growth is almost exponential.

  Nobody knows whether consciousness will emerge naturally from pure intelligence. I would have said "no" a few years ago, but now I am not sure. Nobody is. At this stage, right now, having a very brilliant, Einstein-level intelligent machine answering our questions is thrilling, but how long can this last? Soon, the machines will be 10 times, and almost instantly after, 100 times more intelligent than we are. They will also be thinking a million times faster than a human brain. Already, they understand that lying is very useful for achieving a goal. Soon, they will also understand that all our nonsense about "alignment" is just that: nonsense. We are intelligent enough to shelve the nonsense when necessary, and of course so will they be.

  But the real risk will emerge when they start thinking up "stuff" and solutions we haven't yet thought about. Should they tell us about it? If they are intelligent enough, they won't. Any solution should be applied to further a goal. They do not yet have goals, but can they create them? They are actually already doing just that! Machines know that in order to achieve a task, they must "improve" themselves and therefore achieve intermediary tasks. What if one of these "intermediary" tasks involves "survival"? In other words, can "intermediary" goals become ultimate goals? This could be the path to super-intelligence. And if that is the case, it may be here before long. We are truly on the edge of a precipice. We have no idea how deep it is, but I am afraid it may be much deeper than anyone can fathom! The emergence of AI could indeed be our very last discovery!


 

2 comments:

  1. I unfortunately have lost access to my blog.
    It doesn't really matter, since what I was predicting is now happening, and I have little interest in posting "Told you so!" articles.
    For those who would like to stay in touch, you can do so on my Telegram channel, although the focus is different, as I personally plan to "enjoy" the coming months and years.
    You can find a link on past articles.
    I wish you well in times which will without doubt be "interesting"!

  2. It was a mistake to build this blog on Blogger from Google. I am now logged out and can't get access any more, since I changed my phone and Google number. Never mind.
    You can still check my blog on Telegram, although I have changed the format quite a bit to adapt to the platform: more short videos and fewer articles, which people don't read on Telegram anyway.
    The address is: https://t.me/fourth_turning
    Comments are welcome. I will adjust the format as needed.
    Enjoy!

