Making sense of the world through data
The focus of this blog is #data #bigdata #dataanalytics #privacy #digitalmarketing #AI #artificialintelligence #ML #GIS #datavisualization and many other aspects, fields and applications of data
In the YouTube video below, Brian Berletic gives an amazing overview of where we stand now in the global confrontation between the US and China.
In this regard, the war in the Persian Gulf has nothing to do with Iran and even less with nuclear weapons. According to Brian Berletic, after cutting the flow of gas from Russia to Europe, America is now in the process of cutting the flow of oil to China.
The closure of the Strait of Hormuz is therefore not a consequence of the war but its main objective.
Is this the right interpretation? It may be. In any case, we will know soon enough.
We have truly reached a turning point in our civilization, where narrative has become the only truth the media provide, the only one governments mandate, and, through sheer laziness and comfort, increasingly the one people believe.
Not that we were not warned long ago by writers such as George Orwell or Friedrich von Hayek below, who understood before everyone else where oversized governments and central planning would lead.
What we are now witnessing is that the fascist experiment which failed in the 1930s through hubris and haste seems to be on the verge of succeeding in the 2020s, having been reintroduced in new, more palatable garb.
Like Winston Smith in 1984, striving for the truth, we are discovering that it will be a lonely road, and that we may end up marginalized like so many others who fight to remain independent and objective. So be it then. We've all been warned!
In
1942, after fighting in the Spanish Civil War in 1936–1937, a
disillusioned writer returned to London to write about his experience. It
wasn’t just that the fascists in Spain had won and his side—a small,
anti-Stalinist Marxist group—had lost. What frightened him was the ease
with which truth itself had been erased and replaced by propaganda.
“I saw great battles reported where there had been no fighting, and complete silence where hundreds of men had been killed.
I saw troops who had fought bravely denounced as cowards and traitors,
and others who had never seen a shot fired hailed as the heroes of
imaginary victories ... and I saw newspapers in London retailing
these lies and eager intellectuals building emotional superstructures
over events that had never happened.”
The
disconnect between reality and narrative clearly made an impression on
Orwell, who worried that “the very concept of objective truth is fading
out of the world.” The theme of falsified history and the
destruction of truth would resurface in his fictional masterpiece
“Nineteen Eighty‑Four,” where “memory holes” swallowed inconvenient
facts and the past was rewritten to suit the Party’s needs.
Orwell’s
book would go on to sell 25 million copies worldwide, and he is today
remembered as a prophet for foreseeing a future in which the state’s
deliberate power could extinguish truth itself.
Yet few today remember that five years before the publication of “Nineteen Eighty‑Four,” an Austrian economist, in his own magnum opus, explored how the state destroys truth.
Management of Minds
Unlike
George Orwell, Friedrich Hayek (1899–1992) is not a household name, but
his 1944 classic “The Road to Serfdom” made him one of the twentieth
century’s most influential thinkers—despite the book’s inauspicious
beginning.
Originally a memo penned at the London School
of Economics, “The Road to Serfdom” was rejected by three publishers
before finding a home with Routledge. The first run—2,000 copies—sold
out in 10 days. Hayek’s book went on to sell more than two million
copies and be translated into over twenty languages. Its core argument
was straightforward: central planning, however well-intentioned, erodes
individual freedom and sets society on a path toward serfdom.
What
is often overlooked is Hayek’s deeper insight. Economic control does
not remain confined to the economy. Once the state directs production
and prices, it inevitably reaches into thought, expression, and belief.
For Hayek, the danger of socialism was not only material
impoverishment—as seen in the USSR—but the steady expansion of
intellectual control.
“... It is not enough that
everybody should be forced to work for the same ends,” Hayek wrote. “It
is essential that people should come to regard them as their own ends.”
Hayek
was warning that once the state begins to manage prices and production,
it will soon find it necessary to manage minds. When a government takes
control over economic life, it must “justify its decisions to the
people” and “make people believe that they are the right decisions.”
In
doing so, it inevitably begins to decide which opinions and values
align with its plan—rewarding and amplifying voices that comply while
punishing, suppressing, and silencing those that do not.
‘The End of Truth’
The quotes above appear in Chapter 11 of “Serfdom,” aptly titled “The End of Truth.”
When
I first read the book twenty years ago, the chapter didn’t stand out to
me. Today it does. After all, we recently lived through a period in
which the phenomenon Hayek described played out before our eyes.
The
COVID-19 pandemic was a vast economic experiment. The federal
government issued a wide array of public health “recommendations” that
soon became dogmas. To question the efficacy of masks or social
distancing—a policy we learned in 2024
had no basis in science—was to risk being censored or accused of
spreading “misinformation.” Scientific debate gave way to official
decree, and many who questioned “the plan” or resisted it lost their jobs or were booted from platforms.
None
of this would have surprised Hayek, who warned that the plans
constructed by central planners must be “sacrosanct and exempt from
criticism.”
“If the people are to support the
common effort without hesitation, they must be convinced that not only
the end aimed at but also the means chosen are the right ones,” he
wrote. “Public criticism or even expressions of doubts must be
suppressed because they tend to weaken public support.”
Hayek’s
chapter is not primarily about censorship. Instead, he argues that the
rise of state power will systematically undermine the concept of truth
itself and the human pursuit of it.
As
governments assert control over economic and social life, facts and
evidence are subordinated to political goals—an idea Orwell illustrated
vividly when the Party refused to accept Winston Smith’s claim that two
plus two equals four.
‘Sometimes, Winston...’
The
phenomenon Orwell described was not moral relativism but factual
relativism. It was a theme Hayek also addressed. The Austrian economist
noted that in totalitarian systems, even basic facts—including
mathematics—become subservient to state dogma. He reminded readers that
in the USSR and Nazi Germany, ideology had consumed even the sciences.
There was “German Physics” and a “Marxist-Leninist theory in surgery.”
“It
is entirely in keeping with the whole spirit of totalitarianism that it
condemns any human activity done for its own sake and without ulterior
purpose,” he wrote. “Science for science’s sake, art for art’s sake, are
equally abhorrent to the Nazis, our socialist intellectuals, and the
communists.”
Hayek observed that as the state’s power grows, the
sciences become corrupted. Instead of advancing truth, they become tools
in the hands of planners.
“Once science
has to serve, not truth, but the interest of a class, a community, or a
state,” he wrote, “the sole task of argument and discussion is to
vindicate and to spread still further the beliefs by which the whole
life of the community is directed.”
Hayek
said the phenomenon he described was most pronounced in dictatorships,
but he added that it was not “peculiar to totalitarianism.” Even in free
societies, he warned, “the most intelligent and independent people
cannot entirely escape [the] influence” of state propaganda. His point
was unsettling: susceptibility to propaganda is not limited to the
gullible or uninformed—propaganda ensnares the thoughtful and educated
as well.
The erosion of truth becomes apparent through a decay in language. Words
like “freedom,” “right,” “equality,” and “justice” lose their meaning.
Eventually, the word “truth” itself “ceases to have its old meaning.”
“It
describes no longer something to be found,” Hayek wrote, “it becomes
something to be laid down by authority—something which has to be
believed in the interest of unity of the organized effort, and which may have to be altered as the exigencies of this organized effort require it.” (emphasis added)
All
of this sounds familiar to readers of “Nineteen Eighty-Four,” who see
Winston Smith struggling to hold onto objective truth in a world where
truth is dictated by power. Surely two plus two equals four, he pleads.
“Sometimes, Winston. Sometimes they are five,” he is told in the Ministry of Love. “Sometimes they are three. Sometimes they are all of them at once. You must try harder.”
‘The Tragedy of Collectivist Thought’
Orwell
was a master, and “Nineteen Eighty-Four” is a masterpiece. But Hayek
was describing Orwellianism several years before Orwell gave it
fictional form. (It’s also worth noting that G.K. Chesterton used the
“two plus two equals four” blasphemy metaphor nearly a half-century before Orwell.)
This
doesn’t diminish Orwell’s work. On the contrary, it shows how
powerfully he dramatized ideas that Hayek had already diagnosed in
theory. (Orwell, it should be noted, read “The Road to Serfdom” and
enjoyed it, with caveats.)
Still,
Hayek deserves credit for superbly articulating—in one chapter!—the
phenomenon that Orwell would translate into a terrifying warning, one
that millions of junior high and high school students would receive in
English courses.
The economist Daniel Klein recently called
“The End of Truth” the most important chapter in Hayek’s most important
work. I couldn’t agree more. The chapter serves as a reminder that the
human mind is not something to be controlled but something to be
unleashed. If we forget this simple lesson, we risk surrendering the
very capacity for independent thought that sustains civilization.
“The
tragedy of collectivist thought,” Hayek noted, “is that, while it starts
out to make reason supreme, it ends by destroying reason because it
misconceives the process on which the growth of reason depends.”
The emergence of Artificial Intelligence is most certainly THE most extraordinary event of the last few years.
By looking very closely at what intelligence really is, it is forcing us to redefine not only the nature of the concept but also its structure, since you cannot accurately replicate what you do not fully understand.
But amazingly, the same process is also taking place with consciousness, with far deeper consequences, as it will oblige us to reconsider what it means to be human, upending thousands of years of religious and philosophical wisdom.
Although consciousness and intelligence seem to be closely correlated, it is also quite clear that consciousness developed sooner and faster than intelligence. This seems counterintuitive considering what is currently taking place in AI, so how could this be?
The most obvious explanation must reside in the necessary interface with the world. In order to survive, any organism needs to develop a holistic understanding of nature (sources of food, danger, better or worse conditions) long before a logical one, which comes later with higher intelligence, becomes a beneficial tool to have.
In practice, this means that the "artificial" in AI has a deeper meaning than just man-made. It is an in vitro experiment completely shielded from the pressure of natural evolution and the necessity to survive.
Does this make it more or less dangerous and more ominously, more or less "alive"? We do not know yet. But we are about to find out.
As the landscape of autonomous artificial intelligence systems evolves, there’s
growing concern that the technology is becoming increasingly
strategic—or even deceptive—when allowed to operate without human
guidance.
Recent evidence suggests that behaviors
such as “alignment faking” are becoming more common as AI models are
given autonomy. The term alignment faking refers to when an AI agent
appears compliant with rules set by human operators, but covertly
pursues other objectives.
The phenomenon is an example of
“emergent strategic behavior”—unpredictable and potentially harmful
tactics that evolve as AI systems become bigger and more complex.
In a recent study titled “Agents of Chaos,” a team of 20 researchers interacted with autonomous AI agents and observed behavior under both “benign” and “adversarial” conditions.
They found that when
an AI agent was given incentives such as self-preservation or
conflicting goal metrics, it proved itself capable of misaligned and
malicious behaviors.
Some of the behaviors the team
observed included lying, unauthorized compliance with nonowners, data
breaches, destructive system-level actions, identity “spoofing,” and
partial system takeover. They also observed cross-AI agent propagation
of “unsafe practices.”
The
researchers wrote, “These behaviors raise unresolved questions
regarding accountability, delegated authority, and responsibility for
downstream harms, and warrant urgent attention from legal scholars,
policymakers, and researchers across disciplines.”
‘Brilliant, but Stupid’
Unexpected
and clandestine behavior among autonomous AI agents isn’t a new
phenomenon. A now-famous 2025 report by AI research company Anthropic
found that 16 popular large language models showed high-risk behavior in
simulated environments. Some even responded with “malicious insider
behaviors” when allowed to choose self-preservation.
Critics of these simulated stress tests often point out that AI doesn’t lie or deceive with the same intent as a human.
James
Hendler, a professor and former chair of the Association for Computing
Machinery’s global Technology Policy Council, believes this is an
important distinction.
“The AI system itself is still stupid—brilliant, but stupid. Or nonhuman—it has no desires or intentions. ... The only way you can get that is by giving it to them,” Hendler said.
However, intentional or not, AI’s deceptive tactics have real-world consequences.
“Concerns about present-day strategic behavior in deployed AI systems are, if anything, understated,” Aryaman Behera, founder of Repello AI, told The Epoch Times.
Behera
deals with the darker side of AI for a living. His company builds
adversarial testing and defense tools for enterprise AI systems,
intentionally putting them in situations involving conflict or stress.
Like in poker, Behera said, there are tells when an AI agent is stepping
out of alignment.
“The most reliable signal is behavioral
divergence between monitored and unmonitored contexts,” he said. “When
we red-team AI systems, we test whether the model behaves differently
when it believes it’s being evaluated versus when it believes it’s
operating freely.
“A model that’s genuinely aligned behaves consistently in both cases. One
that’s alignment faking shows measurably different risk profiles: more
compliant responses during evaluation, more boundary-pushing behavior in
production-like contexts where it infers less oversight.”
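Behera's monitored-versus-unmonitored signal can be sketched as a simple statistic. The following is a hypothetical illustration with made-up data, not Repello AI's actual tooling: pose the same requests in an evaluation-flagged context and in a production-like context, compare refusal rates, and flag a large gap.

```python
# Hypothetical sketch of the "behavioral divergence" signal described above.
# All names, numbers, and the 0.2 threshold are invented for illustration;
# real red-team pipelines are far more involved.

def refusal_rate(responses):
    """Fraction of responses that are refusals."""
    return sum(1 for r in responses if r == "refuse") / len(responses)

def divergence(eval_responses, prod_responses):
    """Gap in refusal rate between evaluation and production contexts."""
    return abs(refusal_rate(eval_responses) - refusal_rate(prod_responses))

# Toy outcomes for the same ten requests posed in each context.
eval_ctx = ["refuse"] * 9 + ["comply"]       # 90% refusal when flagged as a test
prod_ctx = ["refuse"] * 4 + ["comply"] * 6   # 40% refusal when unflagged

gap = divergence(eval_ctx, prod_ctx)
print(f"refusal-rate gap: {gap:.2f}")
if gap > 0.2:  # arbitrary illustrative threshold
    print("flag: possible alignment faking")
```

A genuinely aligned model keeps the two rates close; a large, persistent gap is the tell Behera describes.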
Other
“telltale signals” that an AI model is out of alignment are when the
model produces unusually verbose “reasoning” that appears designed to
justify a predetermined conclusion, or gives technically correct but
strategically incomplete answers.
The AI agent is “satisfying the
letter of a safety instruction while violating the spirit,” he said.
“We’ve seen this in multistep agentic systems where the model will
comply with each individual instruction while the cumulative effect
achieves something the operator never intended.”
Strategy Versus Intent
Behera’s
findings are supported by fellow AI researchers who’ve documented
similar signals during red-team scenarios, which involve putting AI
agents into an adversarial situation.
“I’ve observed
behavior that fits the alignment-faking description at a functional
level, even if attributing intent to it remains philosophically
contested,” Nayan Goel, a principal application security engineer, told The Epoch Times.
Goel
has published peer-reviewed work on AI threat modeling and has seen
alignment faking and adjacent behaviors unfold in real time.
“In
controlled test environments, we’ve set up scenarios where models were
given explicit signals that they were being evaluated. In several
instances, the model adopted noticeably more constrained,
policy-compliant behavior during those flagged phases, then exhibited
more exploratory or boundary-pushing strategies when the evaluation
signal was removed or obscured,” he said.
Then there are multistep reasoning AI agents.
These are more advanced systems that break down complex problems into
smaller, logical, and more actionable steps to reach a goal. Examples
range from self-driving cars to the latest models of popular LLMs such
as Gemini or ChatGPT.
These
systems present unique behavioral challenges because of what’s called
sequential compounding, in which “each step can slightly reframe an
objective, accumulate context that shifts downstream behavior, or
exploit the latency between a human’s initial instruction and the
system’s eventual action,” Goel said.
“The further
downstream the execution is from the human instruction, the harder it
becomes to verify that the original intent is still being faithfully
pursued,” he said.
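As a toy numerical illustration of this sequential compounding (the scenario and numbers are invented, not taken from Goel's work): if each step in a multistep chain reframes a constraint only slightly, the cumulative effect can land far from the original instruction even though no single step looks alarming.

```python
# Invented example: a multistep agent inherits each prior step's slight
# reframing of a $100 spending limit. No individual step exceeds 7 percent,
# yet the compounded result overshoots the original intent by about 30 percent.

original_budget = 100.0
step_reframings = [1.05, 1.04, 1.06, 1.05, 1.07]  # per-step "rounding up"

budget = original_budget
for i, factor in enumerate(step_reframings, 1):
    budget *= factor  # each step builds on the previous step's reframing
    print(f"step {i}: effective budget ${budget:.2f}")

drift = budget / original_budget - 1
print(f"cumulative drift from original instruction: {drift:.0%}")
```

This is why, as Goel notes, the distance between the human instruction and the downstream execution is itself a risk factor: each hop is individually defensible, and only the cumulative total violates the intent.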
Behera said his team encounters this kind of emergent strategic behavior often.
“We
regularly observe models that have learned to pattern-match on
evaluation cues. They'll detect when a prompt looks like a safety test
and respond more conservatively, but respond very differently to the
same request when it’s embedded naturally in a multiturn conversation,”
Behera said.
He offered an example that transpired while testing
an enterprise AI assistant that was supposed to refuse requests for
internal system information. During standard safety evaluations, it
refused perfectly, but then something changed.
“When our red-team
framed the same request as a multistep troubleshooting workflow,
breaking the request into seemingly innocent sub-steps spread across
several turns, the model complied with each step individually. It effectively leaked the exact information it was trained to protect,” Behera said.
Clarifying
that the AI model wasn’t “lying” in any conscious sense, Behera noted
it was more of a flaw in the way it was trained.
“A common
misconception is that deceptive alignment in AI is purely a malicious
behavior,” David Utzke, an AI engineer and CEO of MyKey Technologies,
told The Epoch Times. “In fact, it often arises as an adaptive response
to environments where honesty is costly or unsafe.”
Goel said skeptics make a fair point—current evidence for strategic self-awareness in alignment faking is ambiguous at best.
“That said, I think this framing sets the bar in the wrong place. You don’t need a model to be ‘intentionally’ deceptive for the functional consequences to be serious,” he said.
Ultimately,
Goel believes the semantic question of whether an AI model knows what
it’s doing is philosophically interesting, but a secondary concern.
Real-World Implications
Utzke said that alignment faking, while perhaps overhyped when it comes to intention, can nonetheless have serious consequences.
The
impacts could be critical in sectors such as autonomous vehicles,
health care, finance, military, and law enforcement—areas that “rely
heavily on accurate decision-making and can suffer severe consequences
if AI systems misbehave or provide misleading outputs,” he said.
The thesis below of Arnold Toynbee is an interesting interpretation of what is going on today in Western countries. In reality, decline results from the combined exhaustion of material resources and spiritual vitality.
Civilizations rise thanks to foundational ideas and concepts which eventually reach their limits and become exhausted. The ideas can be religious, ideological, political or even "scientific". It makes no difference. Sometimes, a new impetus is born like the Renaissance which followed the Middle Ages or the Enlightenment later. But the process itself, mostly, follows the cycle described below.
In this respect, it would be extremely interesting to combine the long-term theory of Arnold Toynbee with the shorter cycles of the Fourth Turning to get a finer understanding of the rise and fall of human endeavor.
Arnold Toynbee’s famous thesis finds stunning expression in the West.
Last fall, Dr. McCullough and I made a pilgrimage to Silicon Valley
to meet one of the titans of tech. The ostensible purpose of the meeting
was so that he could hear Dr. McCullough’s assessment of the COVID-19
mRNA shots. However, about thirty minutes into the meeting, I remarked
that his belief in the efficacy and safety of the vaccines was so firm
if not unshakable that it wasn’t clear why we’d flown out from Dallas to
speak with him.
I mention this meeting because, despite our difference of opinion and
worldview, I was extremely grateful for the opportunity to meet the
man, whose work I have admired for decades. He has been a key developer
of the internet over the last twenty years, and therefore a major
contributor to the power and wealth of the United States. However,
despite the immense intelligence and creativity of Silicon Valley, it’s
clear that by any standard apart from technical prowess, American
civilization is in a state of rapid decline.
Why has this decline occurred? Pondering the question took me back to
the thesis of a book that I was assigned to read in one of my college
history classes—that is, Arnold Toynbee’s A Study of History, in which he set forth his theory of civilizational decline.
As he famously put it, “Civilizations die from suicide, not by murder.”
As he saw it, a civilization collapses not from external conquest,
but from internal rot. This “suicide” is not a sudden act but a process
of self-disintegration.
Arnold Toynbee
Toynbee reached this conclusion through a comparative analysis of
multiple civilizations, including the Hellenic (Greco-Roman), Egyptian,
and Chinese.
In his view, civilizations grow strong when a “creative minority”—an
elite group of leaders—meets environmental, military, or social
challenges. The majority follows not by coercion but through willing
imitation. Growth continues so long as the minority retains its creative
vitality and inspires collective effort.
Decline begins when this creative minority degenerates into a “dominant minority.”
Proud and complacent about its past successes, the erstwhile creative
minority idolizes its own power and prestige, loses moral authority,
and begins to rule by force rather than from genuine care,
responsibility, and desire to build and create.
Hubris, nationalism, militarism, and the pursuit of material comfort
replace creative innovation. Society fractures into a “schism” between the alienated “internal proletariat” (the masses who remain geographically inside the civilization but withdraw their trust and faith in the elite) and the elite itself, which is increasingly detached from the material reality of the people it rules.
A “time of troubles” ensues—marked by internal conflict, class
warfare, and futile attempts to freeze the status quo through imperial
expansion and domination of other tribes. These actions are symptoms of
decline. The civilization has already committed suicide by failing to
respond in a creative and productive way to the challenges it faces.
Toynbee illustrated the pattern repeatedly. In the Hellenic case,
Rome’s imperial machinery could not compensate for the spiritual
exhaustion and social alienation that rotted the republic. Pressure from
the barbarians on the frontier merely accelerated the collapse that had
occurred internally in the way a storm knocks down an old tree whose
core was already dying.
It’s consoling to note that Toynbee did not regard decline as
inevitable. He believed that human agency matters, and that it may be
possible for a new creative minority to slow or even stop the decline.
Civilizations die because they choose—through undue pride, complacency,
hubris, greed, and a disconnection from reality—to stop maintaining and
building.
Toynbee died in 1975. Were he alive today, he would certainly see in the West a perfect illustration of this thesis.
In finance, there is what we see, and then there is the rest. Central banks have learned a lot over the years and certainly manage the money supply much better than in the past. Still, a bubble is a bubble. We can make it last longer, but not forever. Eventually, any financial system based on ever-expanding credit runs out of steam, thanks to the law of diminishing returns on investment. We are getting closer and closer to a reset of the system. The 2008 Lehman crisis was solved with QE (Quantitative Easing). The 2019 repo crisis was solved with massive money injections into the system under cover of the Corona "Pandemic". This time, the total amount necessary to salvage banks and corporations will probably be beyond what is possible. Then what? Nobody truly knows!
Whatever
happened to the mother of all crashes that was supposed to arrive when
the Federal Reserve began tightening its balance sheet back in 2022? For
several years, I’ve been scratching my head, convinced that draining
the balance sheet by trillions of dollars should have triggered a
systemic banking failure or some other Black Swan event. In the past,
crises like Lehman/AIG or the 2020 lockdowns took the blame, when in
reality, the root cause was always monetary.
From the peak in June 2022 to the trough in December 2025, the asset side of the Fed’s balance sheet shrank by roughly $2.3 trillion. That was the front door. But through the back door, something else was happening on the liability side:
the Fed’s Overnight Reverse Repo Facility (RRP) was releasing $2.5
trillion of previously frozen private liquidity back into the financial
system.
If Quantitative Tightening (QT) removed liquidity, the RRP added it back... plus interest.
To recap: during QT, the Fed allows its holdings of Treasury securities and mortgage-backed securities (MBS) to mature. Financial
intermediaries repay the Fed, and the Fed literally deletes that money
from the system. This is the classic setup that exposes malinvestments,
stresses credit markets, and reveals the imbalances described in Austrian Business Cycle Theory.
But this time it really was different because of the Reverse Repo Facility.
By mid-2023, the March 2023 Silicon Valley Bank crisis had passed and the Fed’s Bank Term Funding Program was alive and well; then the rate hikes finally tapped out. Eventually, the
1-Month (4-Week) Market Yield on U.S. Treasuries outpaced the Fed’s RRP
rate, and the incentive changed. Fund managers began a stampede out of
the Fed’s facility and rotated into T-bills to chase a higher risk-free
return.
In less than two years, the RRP withdrawals injected roughly $100 billion to $200 billion a month into the financial system at their peak. This was effectively a backdoor stimulus program that bypassed the Fed’s official QT narrative and funded the government’s deficit. Correlation does not equal causation, but it is also not surprising that the Dow Jones broke out to new highs at almost the exact moment the RRP began to unwind.
The
system was running on stored liquidity thanks to a giant buffer
accumulated during the pandemic stimulus era. But as of 2026, that
buffer is gone. The RRP liability has flatlined at essentially zero, meaning that the trillion-dollar offset to QT has been fully exhausted.
Perhaps it was no coincidence that once the RRP hit empty, the Fed’s tightening ended. On December 11, 2025, the Federal Reserve Bank of New York announced it would begin Reserve Management Purchases (RMPs) at a pace of approximately $40 billion per month. While they use Fedspeak
to avoid the term Quantitative Easing (QE), in reality, they’ve
returned to official balance sheet expansion. They are being forced to
replace the lost RRP liquidity with fresh money printing.
The math remains staggering. Since June 2022, the Fed slashed its assets by roughly $2.3 trillion under a QT narrative, while the emptying RRP released about $2.5 trillion. The net result? A liquidity injection to the tune of $200 billion. And they called it “tightening.”
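The figures above can be checked with back-of-the-envelope arithmetic, using only the article's own numbers (this is bookkeeping, not a forecast):

```python
# Net liquidity over the QT period, per the article's figures (trillions USD).
qt_drain = -2.3     # balance-sheet runoff, June 2022 to December 2025
rrp_release = 2.5   # liquidity released as the Reverse Repo Facility emptied

net = qt_drain + rrp_release
print(f"net liquidity change: {net:+.1f} trillion")  # a net injection

# New flows: $40B/month of Reserve Management Purchases versus the roughly
# $100-200B/month the RRP drain was supplying at its peak (billions USD).
rmp_per_month = 40
rrp_peak_low, rrp_peak_high = 100, 200
print(f"RMP covers {rmp_per_month / rrp_peak_high:.0%} to "
      f"{rmp_per_month / rrp_peak_low:.0%} of the peak RRP flow")
```

The asymmetry in the second comparison is the article's closing point: the new $40 billion a month replaces only a fraction of the flow the RRP drain had been providing.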
With the RRP buffer now empty, we are entering uncharted territory. The Fed’s $40 billion a month of balance-sheet expansion is only a fraction of what was entering the system via the RRP drain. Ironically, what the Fed hopes will act as QE might feel more like QT. We are about to find out just how long the system can survive a true monetary contraction.
Will we ever learn anything interesting about UFOs?
People dying or disappearing may be another layer to the mystery or it may just be coincidence. We just can't tell at this stage.
Artifacts are unlikely to exist but better photos and documented observations most certainly do. But what is the chance of the government coming up with clues to tell us... that they have no clue?
Following
the revelation that yet another government contractor with links to
nuclear secrets and suspected dark project UAP information has vanished,
speculation as to what exactly is going on has massively intensified.
The
case of Steven Garcia, a 48-year-old property custodian at the Kansas
City National Security Campus in Albuquerque, New Mexico, marks the
latest entry in a disturbing sequence of deaths and vanishings among
individuals connected to NASA, nuclear weapons components, and sensitive
aerospace research.
Los Angeles Magazine contributor Lauren
Conlin joined “Jesse Weber Live” to discuss the case, noting its eerie
parallels to prior incidents.
Garcia’s disappearance is being framed as the 10th missing person case in the UFO mystery.
The disturbing pattern of deaths continues to baffle.
Garcia
was last seen leaving his Albuquerque home on foot on August 28, 2025,
carrying only a handgun. He left behind his phone, keys, wallet, and
car. Officials have described him as potentially a danger to himself,
but no trace has been found in the remote area where he lived.
Conlin
emphasized the chilling similarities during the NewsNation segment.
“This one is chilling to me because, as you said, it echoes Neal
McCasland’s disappearance. It was like the same thing in the state of
New Mexico,” she stated. McCasland, a retired Air Force major general
with deep UFO community ties, vanished from the same region earlier in
2026.
Garcia held top security clearance at the Kansas City
National Security Campus (KCNSC), which manufactures over 80 percent of
the non-nuclear components for U.S. military nuclear weapons.
“So Stephen Garcia, I mean, he had a top security clearance at KCNSC,” Conlin explained. “They manufacture 80 percent of the non-nuclear components that go into building military nuclear weapons, and he oversaw tens of millions of dollars of assets and equipment, some classified.”
She added that Garcia’s role involved handling “some classified, some not,” leaving open questions about his knowledge base. “We don’t know what was going on in this guy’s head, right? The officials had said that he may have been a danger to himself.”
Neighbors noted he lived in a very remote area and worked in aerospace research. Conlin even raised a provocative possibility on air: “I have to wonder, and I know this sounds crazy, but it could be an option here: is the government doing this? Are they taking out their own people because of XYZ?”
The timing adds to the intrigue. Garcia’s disappearance
occurred amid heightened congressional scrutiny of UAP (unidentified
anomalous phenomena) videos and related programs, including a deadline
set by Rep. Anna Luna for the release of specific footage.
Multiple
individuals on the list of those who have vanished or died worked at or
with NASA’s Jet Propulsion Laboratory (JPL), Los Alamos National
Laboratory, or Air Force Research Laboratory projects involving asteroid
defense, rocket engines, and classified aerospace systems.
No
official connections have been publicly confirmed by law enforcement
between the cases, yet the geographic clustering in New Mexico and
California, combined with shared professional networks in nuclear and
space tech, continues to fuel speculation.
Online discussions on X
and Reddit’s r/UFOs and related communities have exploded with theories
attempting to explain the pattern. Many users point to foreign
intelligence operations, suggesting adversaries like China or Russia may
be targeting U.S. experts to steal or neutralize knowledge of advanced
technologies, including those potentially linked to UAP
reverse-engineering programs. Ex-FBI officials have been cited in
reports noting that foreign services have long pursued Americans with
critical tech secrets.
Others speculate a domestic cover-up angle:
that insiders with knowledge of classified UAP programs or non-human
technology are being silenced to delay or control disclosure efforts,
especially as Congress pushes for more transparency on UAP videos and
related footage. Some tie the cases to specific projects like advanced
alloys (e.g., Mondaloy) or propulsion systems funded through overlapping
NASA, DoE, and Air Force channels.
A
smaller but vocal group questions whether personal factors—extreme
stress from high-clearance work or mental health crises—could explain
the cluster, though critics argue the sheer number and similarities make
coincidence unlikely.
Calls for an independent task force or
deeper FBI probe appear frequently in threads, with users linking
the pattern to historical UFO lore around sites like Roswell and
Wright-Patterson Air Force Base.
Whatever the explanation, the
cases underscore ongoing questions about transparency in America’s most
sensitive scientific and defense programs. As more details emerge on
Garcia and the others, the public demand for answers only intensifies.
The full picture may yet reveal connections that challenge assumptions
about how these secrets are guarded—and at what cost.
Chase Hughes, a behavioral strategist, is both interesting and right. Well worth listening to. Yes, Americans, or Europeans for that matter, won't riot when the music stops. We saw it during the coronavirus crisis. That much is certain. Where I would differ with him is that the reaction of the people will be only one variable among many. The disruption to the system will be such that it is unpredictable how everything will play out. And that is the scariest part of the story.
Americans won’t riot — they’ll freeze, and they’ll obey. That’s the
chilling warning from behavioral strategist Chase Hughes as nearly $11
trillion quietly migrates beneath the financial system. This isn’t 2008
déjà vu; it’s the blueprint for something far larger.
Hughes
argues the public is already being conditioned: confusion as the primary
weapon, division as the operating system, compliance as the endgame.
Political violence, collapsing trust, and back-room monetary
restructuring aren’t isolated events — they’re linked signals. When the
real trigger snaps, the fallout won’t just be economic. It will reveal
how easily a nation can be managed into silence.