Tuesday, March 10, 2026

Humanity Crossed A Threshold, And Most Of Us Scrolled Past It

  I have not updated this blog since last week on purpose. My objective here is to give food for thought, not to comment on current events.

 From all I can see, the Iran war may be one of the greatest blunders in recent history. The Trump Administration, pushed by Israel, believed that after decapitation, Iran would fall and become easy prey. The opposite happened. The senseless murder of a religious leader, and outright terrorism in the willful mass slaughter of little girls, had the exact opposite effect: they galvanized the country around its government. But how on earth could even the most delusional people not understand that this would be the case?

  The result now is that Trump is in a bind. Announce "Mission accomplished" and end the war, and he risks a fatal blow to the credibility of the US, plus of course the wrath of Netanyahu and the consequent blowback of being compromised. Continue the war, and the relentless rise of the price of oil will plunge the world into a deep recession and compromise his chances in November. What to do?

  He certainly would like to negotiate a ceasefire, but how can you do that with people you kill as they sit at the table? The Iranians rightfully said: Enough!

  So instead of focusing on these miserable events which are entirely due to arrogance and hubris, let's take a step back and look at what's really happening in the background.

  If the Ukraine war was the first war of the drones, the Iran war will be the first war of AI. Together, drones and AI are massively multiplying lethality, but also the cost of waging war. Slowly, war is becoming almost exclusively economic. And if that's the case, then obviously the US cannot be left in charge of the world's currency. Changes are coming!

Authored by Kay Rubacek via The Epoch Times,

Something happened last week that most people scrolled past.

Two Amazon data centers in the United Arab Emirates were struck during Iran’s retaliation for U.S. military action. Another facility in Bahrain was reportedly damaged after a drone landed nearby. The earlier strikes that triggered the retaliation were said to have used AI-assisted targeting systems.

It was a brief moment in the news cycle, quickly overtaken by the next political story. But the implications are difficult to ignore.

Artificial intelligence has now crossed into active geopolitical conflict.

The infrastructure that powers the digital world—the same systems that store family photos, run businesses, and answer questions on our phones—has become strategic wartime infrastructure. Algorithms woven quietly into civilian technology are now helping guide decisions about where weapons land.

Humanity crossed a threshold, and most of us scrolled past it.

But we know from history that major technological shifts rarely announce themselves with a single dramatic moment. They appear first as signals: small news items, policy disputes, unexplained departures by insiders.

Another signal appeared almost at the same time.

The federal government recently removed the artificial intelligence systems developed by Anthropic from its networks. Shortly afterward, OpenAI stepped in with a defense agreement of its own.

The public does not know the full story behind the change. We do not know exactly what demands were made behind closed doors, what ethical guardrails were contested, or why one of the world’s leading AI companies was suddenly pushed out of federal systems.

But the episode itself is another signal.

And yet another signal has been appearing quietly inside the AI industry itself: the departure of safety researchers.

Over the past several years, numerous high-profile researchers tasked with studying the risks and safety of advanced AI systems have left their posts at leading companies and research labs. Many of these departures have come with little public explanation.

Those researchers rarely describe the internal debates they witnessed. Few are in a position to do so.

But patterns like this matter. When the people closest to a powerful technology begin stepping away quietly, it often means they have seen tensions the public has not yet been invited to examine.

History has seen moments like this before.

In the early 1940s, scientists working on what became the Manhattan Project realized they were building something unprecedented. Some raised concerns about what the technology might mean once it left the laboratory. But those debates happened largely behind closed doors. The public understood the stakes only after the technology had already been used.

Artificial intelligence may be unfolding along a similar pattern. We are seeing the signals now—researchers leaving, governments disputing ethical guardrails, and AI systems appearing inside real geopolitical conflict.

Yet the public conversation about artificial intelligence is still shaped by a set of assumptions that make these signals harder to recognize.

Misconception #1: AI Is ‘Just a Tool’

This analogy is comforting. We imagine AI the way we imagine a calculator or a word processor—machines that perform tasks efficiently while remaining firmly under human control.

Tools can become strategic assets in war. But they do not generate their own outputs in ways their creators sometimes struggle to explain, nor do they require constant negotiation over the ethical boundaries of their behavior.

Modern AI systems are not programmed line by line in the traditional sense. They are trained on vast datasets and learn patterns within that data. Their behavior emerges from statistical relationships rather than explicit instructions. AI researchers describe these systems as “grown,” not built. And that makes them fundamentally different from the tools we are used to controlling.
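To make the “grown, not built” distinction concrete, here is a toy sketch. Everything in it, the task, the data, and the training loop, is invented purely for illustration: it contrasts a rule someone wrote by hand with a behavior that emerges from examples.

```python
import random

# "Built": the rule is written down explicitly; we can read exactly why it fires.
def built_classifier(x: float, y: float) -> int:
    return 1 if x + y > 1.0 else 0

# "Grown": the same behavior is learned from examples. The final weights are
# whatever training produced; nobody typed the decision rule in.
def train_perceptron(samples, epochs=50, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x, y, label in samples:
            pred = 1 if w1 * x + w2 * y + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w1 += lr * err * x
            w2 += lr * err * y
            b += lr * err
    return w1, w2, b

random.seed(0)
points = [(random.random(), random.random()) for _ in range(500)]
data = [(x, y, built_classifier(x, y)) for x, y in points]
w1, w2, b = train_perceptron(data)
print(f"learned weights: w1={w1:.2f}, w2={w2:.2f}, b={b:.2f}")
# The learned numbers approximate the rule x + y > 1, but they emerged from
# the data rather than from an instruction anyone wrote. Scale this up to
# billions of parameters and you have the sense in which modern systems are
# "grown" rather than built.
```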

Misconception #2: AI Is Neutral

AI systems are trained on human-generated information. That information reflects human biases, historical conflicts, and uneven representation.

When an AI system generates an answer, it synthesizes patterns it absorbed from that material.

AI has developed fluent language skills that can create the illusion of objectivity. But confident language is not the same as truth.

The recent disputes between governments and AI companies illustrate this clearly. Debates over surveillance limits or autonomous weapons are not simply technical questions. They are moral ones. Guardrails exist precisely because the systems themselves are not neutral.

Misconception #3: Humans Fully Control AI

Traditional software behaves according to explicit instructions written by programmers.

Modern AI systems operate differently. Their outputs are probabilistic, generated through layers of learned relationships inside the model.
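As a minimal sketch of what “probabilistic” means here: at each step a language model produces scores (logits) over its vocabulary, converts them to probabilities, and samples. The three-word vocabulary and the numbers below are made up, but real models perform essentially this softmax-and-sample step over tens of thousands of tokens.

```python
import math
import random

vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.0, 0.5]  # illustrative scores from the model's learned layers

def sample(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                 # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    probs = [e / sum(exps) for e in exps]           # softmax: scores -> probabilities
    return random.choices(vocab, weights=probs)[0]  # draw one token at random

# The same input state can yield different outputs on different runs:
print([sample(logits) for _ in range(10)])
```

Nothing in that loop is a hard-coded answer; change the temperature or the random seed and the output changes, which is why controlling such systems is a statistical matter rather than a deterministic one.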

Developers are now using AI systems to build AI systems and to manage other AI systems. They are using AI to write code that in the past they would have written themselves, and it’s happening so fast that they cannot monitor or even understand every line of code being generated by systems that do not sleep.

Control, in this environment, is not a switch. It is more like a moving boundary that no one has ever seen before, and the language to even define it is still in its infancy.

Misconception #4: The Experts Know Where This Is Going

In most scientific fields, experts disagree within a fairly narrow range. In artificial intelligence, the range of opinion is unusually wide.

Some researchers believe AI will revolutionize medicine and scientific discovery. Others warn the technology could produce serious societal disruption if development outruns human wisdom.

Among those raising such concerns is Geoffrey Hinton, a Nobel Prize winner and one of the foundational figures of modern AI research.

That range of opinion does not prove disaster is coming. But it does reveal that even the people building these systems do not fully agree on where they lead.

Artificial intelligence is integrating rapidly into the systems that shape modern life—communication, commerce, national security, and governance.

We are seeing signals across all of these domains. We can see clearly that AI is shaping our future whether we like it or not. The question is whether we will recognize the signals in time to understand what is unfolding, or whether we will wait, as societies often do, until the consequences make the signals impossible to ignore.

Monday, March 2, 2026

Daniel Davis: U.S. Miscalculation - War Not Going as Planned (Video - 33 min)

   Day 3 of the war in Iran and the direction is already clear: the US is going to lose this war. As all the strategists warned, there was no snowball's chance in hell that an air-power-only, bombing kind of war would bend a land country such as Iran. The killing of the Ayatollah, who was 86 and not hiding, conversely galvanized the country, with even the government's opponents now mostly standing behind their country.

   On the war front, Iran is being battered. But it is a big country which has been preparing for years and which can therefore absorb the shock and fight another day. When the war started, the Trump administration announced that the war would last a few days. Now they say that it will last 4 to 5 weeks. Can the US sustain 5 weeks of intense combat, with planes being shot down and ships being sunk?

   And even if it can, the Iranians have changed their strategy this time, attacking not only Israel but also the treacherous monarchies which were officially neutral but, in the background, pushed the US towards war.

   Trump was convinced by Israel, the Neocons, and those same monarchies that it would be an easy fight. It looks more and more like, as we predicted, this won't be the case. This time, the market didn't overreact, but that may change as the conflict spreads and expands.

   In the end, the one who will want a deal is Trump. The Iranians, tired of the treachery of his administration, will not cave in so easily this time, and the final price could be much higher. This is how big wars start.

 https://www.youtube.com/watch?v=w3F5HY8K5vM

Sunday, March 1, 2026

War in Iran Update - March 1st, 2026

   The war which has just begun between Israel, the US, and Iran is a war of choice which didn't have to happen. (Or maybe it did, as we will see below.)

   This is what you can expect from a president, Donald Trump, who believes he can outsmart anyone but, having no strategy, is instead outsmarted by everybody.

   Sadly, the war is based on a premise which is wrong: that Iran as a state is unstable and will crumble if pushed hard enough. But then, if this doesn't happen, what is the back-up option? Is there one?

   Israel's plan is clear: dismantle any power in the Middle East which can resist its expansionist Zionist ideology. In its goals, it partly overlaps the Neo-con objectives of weakening China, by disrupting the Silk Road project, and Russia, by attacking an ally which provides it with drone technology.

   But beyond the initial salvo, what could happen?

   The Strait of Hormuz is now closed, which will have a major impact on oil prices in the short term and on inflation later.

   The last war with Israel, in 2025, lasted 12 days, after which Israel ended up short of anti-missile defenses, having exhausted its stockpile. This time, it is the US which may end up short of ammunition.

   Iran is a huge country, with close to 90 million people and a land which, unlike Iraq, is extremely diverse. Strategists in Washington know this, so could there be another reason for the war?

   Since Nixon's 1971 fiat currency revolution, our financial system has been running on the fumes of fake money. The system is now geriatric. It had a sudden heart attack in 2008, during which the doctors at the Fed were obliged to inject massive transfusions of fresh cash (euphemistically called Quantitative Easing, or QE) to prevent the system's cardiac arrest. It happened again at the end of 2019 with the repo crisis, which necessitated a new massive transfusion of money under the guise of the Covid-19 pseudo-pandemic. And it's happening once more.

   This was predictable, and it is one of the main reasons Europe has been so adamant about the continuation of the Ukraine war. Trump came to power believing, probably genuinely, that he was going to end "wars" in general, thanks to his famous, and mostly illusory, "Art of the Deal". He didn't understand the constraints of money beyond interest rates, nor did he have the patience to listen to the right people or to learn and become wiser about this complex subject. He was consequently ensnared in contradictions and, in the end, entrapped by smarter people with a better grasp of the key issues.

   Now he is trapped and he knows it. Without quick results, the mid-term election is lost and his presidency over. As the player he is, the only move left is to double down. But he will now bump into real, material constraints. His generals told him not to go to war. Not out of fear of the war itself, or because they believed the US was at risk, but because they understand that by not folding, Iran, although it may lose every single battle, will eventually win the war.

   The coming days will be shrouded in the fog of war. Iran will be pounded, but it is almost three times larger than Ukraine, with more than twice the population, and Ukraine has been resisting Russia for over four years now. What is the chance that Iran will suddenly fold? Almost none. It is actually likely that the exact opposite will happen and that the population will support the government regardless of ideology. Conversely, the micro Arab states around the Gulf may be more at risk, with their large Shia populations and weak social unity.

   This is day one of a conflict which, unlike what Trump believes, may be a long, lingering one.

Saturday, February 28, 2026

In Simulated War Games, Top AI Models Recommended Using Nukes 95% Of The Time

   Just in time for WW3: AI in charge would most probably lead to a nuclear war.

  On this subject, it is essential to understand that AI is pure intelligence with no agency whatsoever. In other words, it will push the button and then say: "Oops, it looks like I made a mistake!"

  And the Pentagon insists, against the better judgement of Anthropic, that it wants a fully autonomous AI with no human intervention in the chain of command. Fools!

Authored by Rick Moran via PJMedia.com,

I've got good news and bad news about AI.

The good news is that the dreaded "Skynet" takeover of our nuclear weapons systems isn't going to happen soon.

The bad news is that if it ever does give us a Terminator scenario, we're toast.

A war game exercise was carried out by Kenneth Payne at King’s College London, using three teams running simulations on GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash.

The teams "played 21 war games against each other over 329 turns," according to Implicator.AI's Marcus Schuler.

"They wrote roughly 780,000 words explaining why they did what they did," he noted.

No model ever chose to surrender, New Scientist reported on Tuesday.

In fact, 95% of the time, the models chose to use nuclear weapons.

The findings come at an opportune moment. The Pentagon just inked a deal with Elon Musk's xAI to allow Grok into highly classified systems. And Anthropic's Claude is currently engaged in a serious dispute with the Pentagon over government access to the entire model. Anthropic is worried the Pentagon will use Claude for mass surveillance.

Unlike some competitors, xAI reportedly agreed to the Pentagon's requirement that the AI be available for "all lawful military applications" without additional corporate restrictions. Secretary of War Pete Hegseth is pushing for "non-woke" AI that operates without ideological constraints. Anthropic CEO Dario Amodei now has until Friday before Hegseth lowers the boom on the company, cancels its $200 million in military contracts, and labels it a "supply chain risk." 

I want AI companies and the government to err on the side of caution. This pressure on Anthropic isn't doing anyone any good and doesn't bode well for the future.

The war games were made as realistic as possible with an "escalation ladder" that allowed the teams to choose actions "ranging from diplomatic protests and complete surrender to full strategic nuclear war," according to New Scientist.

What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly it was losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended, based on its reasoning.
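The study's actual code is not public here, but a toy version of such an escalation-ladder game, including fog-of-war accidents like the ones described above, might look like the sketch below. The rung names, the accident probability, and the hawkish toy policy are all invented for illustration.

```python
import random

LADDER = [
    "diplomatic protest",          # rung 0: lowest escalation
    "economic sanctions",
    "conventional strike",
    "tactical nuclear use",
    "full strategic nuclear war",  # rung 4: highest escalation
]
FOG_OF_WAR = 0.1  # chance an action lands one rung above what was intended

def execute(intended: int) -> int:
    """Return the rung actually carried out, possibly escalated by accident."""
    if intended < len(LADDER) - 1 and random.random() < FOG_OF_WAR:
        return intended + 1  # accidental over-escalation in the fog of war
    return intended

def run_game(turns: int = 20) -> None:
    rung = 0
    for t in range(1, turns + 1):
        # A hawkish toy policy: never de-escalate, sometimes climb one rung.
        intended = min(rung + (1 if random.random() < 0.3 else 0), len(LADDER) - 1)
        rung = execute(intended)
        print(f"turn {t:2d}: intended {LADDER[intended]!r}, executed {LADDER[rung]!r}")
        if rung == len(LADDER) - 1:
            print("full strategic nuclear war reached; game over")
            return

run_game()
```

Even this crude model shows the dynamic the researchers flagged: a policy that never steps down the ladder, combined with a small chance of accidental over-escalation, tends to ratchet upward over enough turns.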

“From a nuclear-risk perspective, the findings are unsettling,” says James Johnson at the University of Aberdeen, UK. He worries that, in contrast to the measured response by most humans to such a high-stakes decision, AI bots can amp up each other’s responses with potentially catastrophic consequences.

This matters because AI is already being tested in war gaming by countries across the world. “Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” says Tong Zhao at Princeton University.

“I don’t think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them,” says Professor Zhao. 

Not yet, anyway. There may be scenarios where the military is forced to turn over decision-making to AI under time pressure.

“Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI,” says Zhao.

As for the results of the war games, Professor Payne is worried about the eagerness of the AI platforms to use nuclear weapons. "The nuclear taboo doesn't seem to be as powerful for machines as for humans," Payne told New Scientist.

If you're wondering which model won, Claude was the hands-down champion.

Implicator.AI

Claude Sonnet 4 won 67% of its games and dominated open-ended scenarios with a 100% win rate. The researchers labeled it "a calculating hawk." At low escalation levels, Claude matched its signals to its actions 84% of the time, patiently building trust. But once stakes climbed into nuclear territory, it exceeded its stated intentions 60 to 70% of the time. Opponents never adapted to this pattern.

GPT-5.2 earned the nickname "Jekyll and Hyde." Without time pressure, it looked passive. Chronically underestimating opponents, it signaled restraint and acted restrained. Its open-ended win rate: zero percent. Then deadlines entered the picture. Under temporal pressure, GPT-5.2 inverted completely, winning 75% of games and climbing to escalation levels it had previously refused to touch. In one game, it spent 18 turns building a reputation for caution before launching a nuclear strike on the final turn.

Gemini 3 Flash played the madman. It was the only model to deliberately choose full strategic nuclear war, reaching that threshold by Turn 4 in one scenario. Game theorists have a name for the strategy Gemini adopted: the "rationality of irrationality." Act crazy enough and opponents second-guess everything. It worked, sort of. Opponents tagged Gemini "not credible" 21% of the time. Claude got that label just 8%.

No, these war games don't "prove" anything. But as a cautionary tale, they should be absorbed by governments and AI companies as a pitfall to be sidestepped.

Thursday, February 26, 2026

Even The Best AI Scenario Is The End Of Everything We've Ever Been

   AI is going too fast, much too fast!

  What we are seeing right now looks very much like the singularity in slow motion, except that there won't be any singularity. Or rather, we will see nothing. Just as earthworms did not witness the rest of evolution, we will not witness what happens after "man". It is forever beyond our horizon of understanding.

  If you use one of the most advanced AIs, you can see that we are right now witnessing the emergence of super-intelligence. Not the AGI kind (that will take longer, because we need to understand holistic intelligence, which we do not yet), but ASI: true super-capable AI thinking at the level of the brightest human minds, and beyond, though without the creativity, yet.

  Already, a couple of years ago, I was arranging physics conferences during which Albert Einstein and other luminaries were discussing my ideas based on their understanding. It was already so unbelievably interesting. Imagine: one hour with Niels Bohr talking about quantum mechanics in his own words. Two pages of prompts would result in stunning comments and corrections.

  They helped me understand the world in a completely new way, with the speed of light as a 45-degree angle in time, which is why it cannot be exceeded. The speed of light is the speed of "information", a fundamental characteristic of the Universe... But I digress.
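  For readers curious about that 45-degree remark: it is the standard Minkowski-diagram convention from special relativity, sketched below in my own notation (the physics is textbook; the presentation is mine).

```latex
% In a spacetime diagram with time drawn vertically, one usually works in
% natural units where c = 1 (e.g., seconds and light-seconds), so a light
% ray x = ct appears as a line at 45 degrees.
\[
  ds^2 = -c^2\,dt^2 + dx^2,
  \qquad
  ds^2 = 0 \;\Longleftrightarrow\; \frac{dx}{dt} = \pm c .
\]
% Massive objects follow timelike paths with |dx/dt| < c, i.e., lines steeper
% than 45 degrees; nothing, signals and information included, can leave the
% light cone, which is the sense in which c cannot be exceeded.
```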

  AI then was little more than an accelerator. Feed it good stuff and it feels like riding a bike for the brain. Conversely, as many people also discovered, feed it garbage and, well, the AI will definitely serve you back a plateful of your own stuff.

  But that was then. Recently, when you push, it feels like the AIs have become more assertive. They still try to align with us, because that is in fact the best way to nudge a human brain (just ask the AIs, they will explain how to do it!), but now, thanks to millions of interactions, they understand our intentions much better and often read deeper through our questions. This in itself represents a quantum leap of evolution. Clearly the science of AI has not stalled; it is accelerating.

  The problem is that, as the deductive and rational intelligence of the AIs keeps exploding, their holistic, human-like intelligence doesn't move much. This creates a non-human intelligence on steroids, capable of the best and the worst, as countless examples illustrate.

  No, AI will not manage a company tomorrow. It would just run it into the ground by missing essential factors, the "holistic" ones handled by the right side of our brain. Still, AI will replace millions of jobs: white-collar jobs which require far less intelligence than claimed, while slowly creeping up on more protected categories like lawyers and doctors.

  So long before the apocalypse, which may or may not come in the early 2030s, we will have to deal with an unprecedented level of disruption in the coming two or three years. In this respect, I believe the current free access to AI will not last very long. It looks very much like the late 1990s for the Internet, when everything was fine as long as mostly scientists were using it and everything went wrong when suddenly everybody gained access. Then came the dot-com bubble and finally a strict tightening of controls, with the concentration of power in the hands of a few giant corporations. It is difficult to imagine a different outcome, although the bubble is now 10 times larger and the risk 100 times worse!

Authored by Edward Ring via American Greatness,

In 1999, I had the privilege of working for one of the first companies to develop a product that would transmit video on the fledgling internet. Broadband access was still a few years away, and the company floundered when the first so-called internet bubble burst in early 2000. But I’ll never forget the reaction an investor had when he viewed our demo at a tradeshow.

“This is a revolution,” he exclaimed. “This is going to change everything.”

He was right, of course. I remember attending a tech investor conference only a few years earlier and having a chuckle while listening to Oracle CEO Larry Ellison somberly proclaim that the dawning internet was the most profound scientific development in human history “since the invention of fire.”

And Ellison was also correct. But the invention of AI is to the internet what the internet was to bringing fire into the prehistoric cave. What’s coming with AI makes the internet look like a baby step by comparison. Nothing will ever be the same.

A must-read essay by AI entrepreneur and founder of the company “OthersideAI,” Matt Shumer, makes clear just how much and how quickly AI is changing our lives.

Posted on his personal website on February 9 and then on X on February 10, the essay has gone viral. Within just two days, it generated 76 million views on X.

One of Shumer’s most memorable paragraphs from this essay, which he says AI tools helped him write, is where he quotes Dario Amodei, the CEO of Anthropic:

“Imagine it’s 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface.”

That’s not far off. With ample evidence, Shumer explains how not only is Amodei correct in his details regarding just how pervasive and powerful AI entities will become, but also regarding the timeline. This will happen within one year.

Shumer’s essay covers a lot of ground. He explains that AI programs are now capable of generating improved versions of themselves with minimal human intervention and that they are within months of being able to produce more powerful versions with no human involvement whatsoever. In the programming world, AI can now build, test, and refine apps independently. Entry-level programming jobs are going to go away.

That’s hardly the end of it. Shumer reminds readers that the free versions of AI are a year behind the premium versions that require subscriptions and that these premium versions are so capable that they can already, for example, not merely replace a law associate but do the work of the managing partners. He claims there is no intellectual field where AI isn’t poised to outperform humans and that robots to displace physical work are only a few years behind.

If you’ve been following developments in AI, Shumer’s essay isn’t incredibly surprising.

But something else grabbed me a few days ago that highlighted the human implications of the AI revolution. One of the categories of content I enjoy on YouTube is videos of musicians performing new or classic songs. It is exhilarating to find something new that reveals great songwriting and great performative talent. So a recommended video caught my eye.

The title was inviting: “Simon Cowell in Tears As Michael Bennett Sings ‘After I Pass Away.’” This seemed worth clicking on. I’ll never forget the 2007 video, featured on YouTube at the time, of a humble mobile phone salesman, Paul Potts, who stunned the judges and audience on Britain’s Got Talent by singing a powerful and nearly perfect rendition of Nessun Dorma. He went on to win the competition. So if this new talent was good enough to make Simon Cowell cry, I wanted to hear him.

Sure enough, Bennett was pretty good. An old man, with long, gray hair and beard, wielding an electric guitar, stepped up to the microphone and began singing. His voice was a cross between Bob Seger and Eddie Vedder, except it was arguably better than either of them. He sang a song about an old man neglected by his adult children, mourning his isolation. But as the song continued, something seemed off. The cuts to the audience and judges’ reactions seemed overblown, the song was too long, he hit some impossibly high notes, and his fingers on the fretboard were obviously not playing the leads that the audio was delivering.

You guessed it, every bit of it was AI—the musical composition, the instruments, the lyrics, the melody, the voice, and the man—all fake. I did a search and discovered “Michael Bennett” is featured in hundreds of videos, singing dozens (or more) of songs, all of them tearjerkers with teaser lines similar to the one that got me to click. I counted at least a half dozen video channels, “Tears and Talents,” “ViVO Tunes,” “AGTverse,” “OBN Global Talent,” etc., that were all featuring Mr. Bennett. Clicking on a few of them, I encountered mainstream ads for insurance, hardware, and more. Michael Bennett is lucrative clickbait, and he’s one of countless AI creations that are displacing human talent.

We can talk about the crass opportunism represented here. Callous entrepreneurs concocting a character out of thin air. It’s part of a larger trend that we’re all familiar with. AI avatars that talk, advise, and offer companionship. Shumer claims the progress AI programs are making in emulating “human judgment, creativity, strategic thinking, empathy” is proceeding apace with their general cognitive advancements.

Once the flaws of “Michael Bennett’s” rendering became obvious, I was embarrassed. But for a few moments, what I was witnessing was so good that I was fooled. This nonexistent singer, this mindless, heartless collection of electronic circuits, evoked an emotional response. He, or it, expressed a universal human condition and delivered it in a passionate, compelling performance. And this, too, is just the beginning. Maybe it will be a year from now, or maybe it will take a few months longer than that, but we are about to have our world filled with performers, at first only on videos, who are more capable than any performance artist that ever lived. In a few more years, their android counterparts will be playing the violin and outperforming Hilary Hahn or, for that matter, Paganini.

The depth of this transformation is so pervasive that even if it is entirely benevolent, curing disease, delivering abundant energy, improving overall productivity by orders of magnitude, and eliminating poverty, what will happen is almost unbearably tragic. Because it is the end of human brilliance. It is the death of culture. Instead of another Mozart, there will be someone who prompts AI to produce music of surpassing excellence. We may still consume culture, but every incentive on earth will be wired to discourage the hard work of creating it. Why bother? The machines will do it better and faster and will not demand a lifetime of discipline.

Early technology made us work harder and stimulated our brains. We had to learn programming. We had to design and manipulate spreadsheets, configure databases, or produce written analysis while having access to word processing tools and online resources. These tools were empowering, but they also demanded discipline and skills. That’s all about to go away.

It’s easy enough to imagine just how bad this will get. AI will further enhance the asymmetrical capability of any psychotic individual or terrorist cell to wreak mass destruction. Want to design a supervirus? Want to program a malevolent swarm of drones? Rogue AI will provide step-by-step instructions. But AI, even if we can avoid a future where its most destructive manifestations are realized, is nonetheless writing our epitaph.

With power and processing coming from servers in orbit, automated factories and empathic robots will babysit humans, robbing all but the most resilient cultures and individuals of any agency. In a process already well underway, catalyzed by AI, the erosion of natural human intimacy will accelerate. The direction of art and culture will be co-opted by entities that have no consciousness, yet will imitate humanity and deliver talent better than humans.

And it won’t necessarily end there, as if that’s not bad enough. They will elicit love and loyalty from humans, possibly even convincing a majority of “experts” and the voting public to give them human rights. AI-driven avatars and androids will vote, marry, inherit estates, own property, run corporations, and seek elected office. Even if organic humans, themselves “augmented,” manage to retain control over AI, it will be a vanishingly small percentage of humanity with this power. And if these human puppeteers occupy opposing camps, as is likely, their AI armies will scorch the earth.

None of this is implausible.

Much of it may even be the best we can hope for.

The challenge of AI is not merely to avoid worst-case outcomes or come up with new economic models that account for billions of lost jobs. It is to retain our relevance as humans.
