DailyAIWire | ~8 minute read | Category: AI News
A newsletter briefing for investors, bankers, and risk professionals — April 15, 2026
Why This AI Warning Matters Now
A new Anthropic AI model called Claude Mythos can break into software faster than human hackers. It has already found thousands of serious security holes in real systems. One bug it uncovered had been hiding in OpenBSD for 27 years.
That is the real story behind Goldman Sachs CEO David Solomon telling investors this week that the bank is “hyper-aware” of the risk. It is not that AI is about to take over trading floors. It is that AI can now be used as a weapon against the systems banks run on.
Wall Street and Washington are taking this seriously. Treasury Secretary Scott Bessent has already pulled in the CEOs of major US banks. UK regulators — the Bank of England, the FCA, and HM Treasury — are holding emergency meetings with the National Cyber Security Centre. This is the first time a single AI model has triggered this kind of response from global financial authorities.
Here is what you need to know.
What Goldman Sachs' CEO Really Said About AI Risks

On Goldman’s Q1 2026 earnings call, David Solomon said this:
“Obviously the LLMs are making rapid progress and we’re hyper-aware of the enhanced capabilities of these new models with the help of the US government and the model publishers. We’re working closely with Anthropic and all of our security vendors to harness frontier capabilities wherever it’s possible. And this will continue to be an important focus.”
Three things to notice:
- Solomon names Anthropic directly. This is unusual. CEOs rarely name an AI vendor as a risk focus on an earnings call.
- The tone is defensive, not celebratory. Last year banks talked about AI as a productivity lever. Now they are talking about it as a threat surface.
- Goldman is acting with the US government, not around it. That is new. It signals that AI cyber risk has moved from a corporate IT problem to a national security one.
What Is Anthropic’s Mythos AI, and Why Is It Raising Eyebrows?
Anthropic is the AI safety company behind the Claude family of models. It is backed by Amazon and Google and is one of the three most influential frontier AI labs in the world, along with OpenAI and Google DeepMind.
Claude Mythos Preview is its newest model. Anthropic describes it as a “step change” in capability. The company is deliberately not releasing it to the public.
What makes Mythos different:
- It is strong at general tasks but exceptional at finding and exploiting software vulnerabilities.
- It can discover zero-day flaws (bugs nobody knew about) in real open-source code.
- It outperforms human security researchers on parts of this work.
- Anthropic says the capability is “currently far ahead of any other AI model.”
Mythos is priced at $25 per million input tokens and $125 per million output tokens — roughly 3 to 5 times more than premium commercial models. That pricing is not aimed at mass adoption. It is aimed at a small list of vetted partners.
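At those per-token rates, costs scale up quickly even for modest workloads. A rough back-of-the-envelope sketch of what a defensive code-audit budget might look like (the workload figures below are illustrative assumptions, not Anthropic usage data):

```python
# Back-of-the-envelope cost estimate at the published Mythos rates.
# All workload numbers below are hypothetical, for illustration only.

INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single model run at the quoted rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical scan: feed 200k tokens of source code, get a 20k-token report.
per_scan = run_cost(200_000, 20_000)  # $5.00 input + $2.50 output

# A team auditing 1,000 repositories at that size:
print(f"per scan: ${per_scan:.2f}, 1,000 scans: ${per_scan * 1_000:,.2f}")
```

Even at a few dollars per scan, continuous auditing across a large codebase runs into five figures, which is consistent with pricing aimed at vetted institutional partners rather than mass adoption.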
The Hidden Risks: How Mythos Could Disrupt Finance
Most coverage has jumped to dramatic “AI will crash markets” headlines. That is the wrong story. The real concern is narrower and more serious.
1. Bank infrastructure is the target, not bank trading desks. Mythos does not trade stocks. It finds and breaks into software. Banks run on millions of lines of code — payment systems, clearing systems, customer data, core banking platforms. Every line is a potential hole.
2. The attacker advantage is the problem. A bank has to defend every system. An attacker using Mythos only has to find one way in. Anthropic itself warned that the fallout for “economies, public safety, and national security could be severe” if this capability spreads to the wrong actors.
3. Proliferation is near-certain. Anthropic said it directly: “it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.” Once open-source models catch up — likely within 12 to 24 months — this capability will not be gated.
Could AI Trigger the Next Financial Crisis?
Previous financial crises came from inside the system. 2008 was about bad mortgages and bad risk models. 1998 was about a hedge fund blowing up. The Mythos-era risk is different. It comes from outside.
A realistic bad scenario looks like this:
- An attacker uses a Mythos-class model to find a flaw in a payment network.
- They breach a major clearing bank or exchange.
- Trading halts. Settlement fails. Trust cracks.
- Other banks pull liquidity as a precaution. Credit freezes.
This is not a trading-algorithm crash. It is a cyber-triggered liquidity crisis. The 2024 outages at CrowdStrike and the 2023 Industrial and Commercial Bank of China ransomware attack were mild previews. A coordinated AI-powered attack could be an order of magnitude worse.
The honest truth: nobody, including regulators, knows the full blast radius. That is exactly why they are worried.
Are Current AI Regulations Strong Enough?
Short answer: no.
What exists today:
- The EU AI Act covers high-risk AI systems but was written before models like Mythos were expected this soon.
- The US has executive orders and voluntary commitments from frontier labs, not binding rules on frontier cyber capability.
- The UK AI Safety Institute does evaluations but has no enforcement teeth.
What the Mythos situation just exposed:
- There is no standard process for how a lab should disclose a dangerous capability to governments and affected industries.
- There is no shared playbook for banks, exchanges, and insurers when an AI capability jump happens.
- Export control rules were written for chips, not for model weights and API access.
Expect new rules. The question is whether they arrive before the capability leaks.
Inside the Banking Industry: Fear vs Opportunity
Anthropic’s response is called Project Glasswing. It is a restricted-access program that lets critical infrastructure operators use Mythos to defend themselves before attackers get their hands on something similar.
Launch partners include:
- Tech: AWS, Apple, Google, Microsoft, Nvidia, Broadcom, Cisco, CrowdStrike, Palo Alto Networks
- Financial: JPMorganChase (explicit launch partner)
- Other: The Linux Foundation
Banks testing Mythos internally: JPMorgan, Goldman Sachs, Citigroup, Bank of America, Morgan Stanley.
Anthropic is committing up to $100 million in usage credits and $4 million in direct donations to open-source security groups.
What this tells you:
- The big US banks are not sitting this out. They are inside the tent.
- Mid-tier banks, regional banks, and most emerging-market banks are outside. That is the real exposure gap.
- If your bank is not on the Glasswing list, you are exposed to the same threat without the same defensive tool.
AI Adoption vs AI Caution: A Growing Divide

The split inside finance is now visible.
Fast movers:
- Glasswing partner banks get red-team access to the strongest defensive AI on the market.
- They can pre-patch systems before attackers find the same flaws.
- They can hire fewer entry-level security analysts for routine work.
Cautious or excluded players:
- Regional banks and most non-US banks are behind. They cannot test Mythos. They cannot afford equivalent tooling.
- They are forced to rely on traditional security vendors who are themselves racing to integrate frontier AI.
The competitive gap is not about who trades faster. It is about who gets breached first. That is a different kind of moat, and it favors scale.
The Future of AI in Investment Banking
Three honest predictions:
- Security budgets rise. AI-specific cyber spend at tier-one banks likely doubles within 24 months. Do not expect this to be framed as “AI investment.” It will be buried in IT and compliance.
- Entry-level jobs shrink first. Junior security analyst roles, basic code review, and tier-one SOC jobs will contract. Senior roles that require judgment get more valuable.
- Vendor concentration grows. Banks already depend on a small number of cloud providers and security vendors. Frontier AI access will tighten that concentration further. That is itself a systemic risk.
What AI is not doing, yet: replacing bankers, running trading books, or making investment decisions at scale. Those stories are still hype.
What Safeguards Are Needed Before It’s Too Late?
Based on what regulators are actually asking for:
- Mandatory pre-release disclosure of dangerous capabilities to a designated government body, with a quiet window before public release.
- Tiered access frameworks like Glasswing but run by neutral bodies, not individual labs.
- Cross-border coordination so a model restricted in the US is not freely available through a shell entity in a weaker jurisdiction.
- Sector-specific stress tests for banks, exchanges, and utilities that simulate AI-powered attacks, not just the standard penetration test.
- Human oversight that is real, not performative. A risk committee that gets a slide deck once a quarter is not oversight.
None of this is settled. The Mythos release is the forcing event that will shape the rules for the next five years.
Conclusion: Innovation at the Edge of Risk
The Mythos story is not about AI taking over Wall Street. It is about AI changing who holds the advantage in cybersecurity — and banks knowing they are the target.
Goldman Sachs, JPMorgan, and their peers are doing the right thing by getting inside Project Glasswing. But that does not remove the risk. It concentrates the defensive tool in a few hands while the offensive capability races to proliferate.
What to watch next:
- Whether Anthropic’s restricted-access model actually holds, or whether model weights leak.
- Whether US and UK regulators agree on a shared disclosure framework, or split.
- Whether a Mythos-class open-source model arrives within 12 months. If it does, every bank not inside Glasswing will have a serious problem.
This is a risk story, not a hype story. Treat it that way.
FAQ
Q1. What is Mythos AI and who created it? Claude Mythos Preview is a frontier AI model built by Anthropic, announced in late March 2026. It is unusually strong at finding and exploiting software vulnerabilities. Anthropic has deliberately kept it off the public market.
Q2. Why is Goldman Sachs concerned about AI risks? Because Mythos and models like it can be used to attack the software banks run on. Goldman CEO David Solomon said the firm is “hyper-aware” of this and is working directly with Anthropic and the US government to defend against it.
Q3. Can AI really cause a financial crisis? Directly causing a crisis through autonomous trading is still unlikely. But an AI-powered cyberattack on a major bank, exchange, or payment system could trigger a liquidity crisis. That is the scenario regulators are actively stress-testing.
Q4. How are banks preparing for AI disruptions? Top US banks — JPMorgan, Goldman, Citigroup, Bank of America, Morgan Stanley — have joined or are testing Anthropic’s Project Glasswing, which gives them access to Mythos for defensive use. Smaller banks are largely outside this program.
Q5. Is AI regulation keeping up with innovation? No. Existing rules were written before models like Mythos were expected. The EU AI Act, US executive orders, and UK AI Safety Institute all lack binding rules for frontier cyber capability. New regulations are likely within 12 to 18 months.
Sources
- Goldman Is Using Mythos, Working With Anthropic on Cyber Risks — Bloomberg
- Claude Mythos Preview — Anthropic
- Project Glasswing — Anthropic
- UK financial regulators rush to assess risks of Anthropic AI model — Computer Weekly
- Why Anthropic’s new Mythos AI model has Washington and Wall Street worked up — Euronews
- US Treasury Seeking Access to Anthropic’s Mythos to Find Flaws — Bloomberg
- Banks Face Complex Cyber Risks From Anthropic’s Mythos — PYMNTS
- Powell, Bessent discussed Anthropic’s Mythos AI cyber threat with major U.S. banks — CNBC
- Exclusive: Anthropic ‘Mythos’ AI model representing ‘step change’ — Fortune
- On Anthropic’s Mythos Preview and Project Glasswing — Schneier on Security
