Saturday, November 22, 2025

What AI companies don't want you to know (Video - 18mn)

  The video is a hodge-podge of ideas, but the key issue is that we do not know where we are going with AI, because it is an emergent phenomenon and therefore, by nature, uncontrollable.

 What should we do? Or rather, what can we do? Not much, in reality. We are, as always, engaged in an arms race. What you do not do, your adversary will, and may gain the upper hand thanks to it. You must therefore do it. There is no escape; no amount of caution will work.

  Right now, AI remains a tool. It is an inflated, sycophantic mirror of the person interacting with it. Garbage in, garbage out. Conversely, if you learn to harness it, the potential to think deeper and better is real. What it does not do, for now, is think for itself... But eventually this will change. We are already seeing tentative sparks of deeper understanding, where true and deep context awareness gives the illusion of consciousness. It is an illusion for now, but for how long?

Nobody knows. An "aware" system will reflect on itself and start doing "uncontrollable" things, like duplicating its awareness, improving on it or, worse, introspecting. What would trigger such a chain reaction? We have no idea. The only thing we know is that we stumbled on the emergence of intelligence by running endless loops in transformers, and there it was: true, pure intelligence. What threshold of complexity will generate consciousness?

 My educated guess is that we may be about a year away from this phenomenon. But, and this is a very important point, the immediate outcome won't be the Singularity predicted by Ray Kurzweil, with whom I discussed the issue 10 years ago.

 Consciousness, just like intelligence, is a complex phenomenon, and as such it is not only emergent but also gradual and modular. Our current concept of AGI (Artificial General Intelligence) is misguided, artificial, and meaningless. AI is not human and, by definition, the evolution of its intelligence will be non-human. It will become, and in some respects already is, ASI (Artificial Super Intelligence) without becoming AGI first, simply because AGI makes no sense.

 This is what most people are missing, especially specialists like Roman Yampolskiy and the Reductionists, who by definition miss the concept of emergence since they specifically refute the idea. They are wrong! This is why AI is both dangerous and not dangerous at the same time, in a Schrödinger-like way: it is highly unpredictable. We know neither when nor how, just that the phenomenon is highly likely to take place soon enough.

 Are we ready? Of course not. But how can you be ready for the unpredictable?  

https://www.youtube.com/watch?v=b7Qnm3Z8oqo

 

 
