-
We did find some Reddit comments, though, warning other netizens to steer clear of MEDVi, alleging possible HIPAA violations, shady billing practices, and even damaged vials of seemingly bogus drugs causing physical harm.
AI is making the web weirder and muddier than ever. And though MEDVi promises that “sometimes you have to see it to believe it,” in our burgeoning AI-powered web, that’s no longer the case.
MEDVi, sadly, is the same company from last week's NYT article about a one-person, $1.8bn company. It is disappointing to see the NYT fall for their hype despite this article having been published almost a year ago.
This, yet again, raises the question of just how credulous and naive I am being when it comes to the AI hype cycle. Keep that in mind with the rest of this week's coverage.
-
Today we’re announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software.
We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.
Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.
Since Anthropic (along with OpenAI) is trying to IPO this year, it is tempting to dismiss this as hype, especially in the context of the previous link. However, there are many signals that lend credibility to their claims.
First, there is the large list of credible partners above, including their competitor in the LLM space, Google. Second was the news that Treasury Secretary Scott Bessent and the Chairman of the Federal Reserve summoned the CEOs of major financial services firms to warn them about the risks posed by this model. Third is the long list of credible tech people endorsing the abilities of this model.
With this level of publicity, if this is hype, we will find out soon enough, but the evidence so far suggests it is likely real.
In which case, this is a huge step change in the abilities of LLMs. I expect this will also bring AI center stage in national and global political discourse. This is a model with major national security implications, because the NSA / Mossad types can use a single operating-system vulnerability to compromise the personal devices of their targets. Imagine what they could do with "thousands of high-severity vulnerabilities".
This also raises important questions: What if China had developed a model with such abilities first? What if Anthropic hadn't realized the power of this model and released it to the public? And who gets to decide who gets access to a model like this, a private company or a government?
The other question I am thinking about is how the leaders of China and Russia react to this news, knowing that the NSA / CIA have access to such a system.
There is a lot of excellent coverage of Mythos and related topics, if you want to read more.
Banksy, Satoshi & The Unmasking Impulse
First Banksy and then Satoshi. Something about their unmasking is not sitting right with me. I am bothered by it. I am annoyed by it. And even more annoyed with myself because as a former journalist I should understand, but I don’t. I am referring to Reuters’s meticulous investigation and unmasking of Banksy, and John Carreyrou’s in-depth report labeling Adam Back as Satoshi, the creator of Bitcoin.
Both investigations are technically impressive. Both raised the same question I keep turning over: what exactly was accomplished here, and for whom?
We Are on the Cusp of a Revolution in Rare Disease Treatment
When KJ Muldoon was born in the summer of 2024, his parents were told he had a disease so rare, it strikes about one in 1.3 million newborns. His condition, a severe deficiency of an enzyme known as CPS1, left his tiny body unable to properly break down protein, flooding his blood with toxins that could cause brain damage or death. A liver transplant could correct the problem, but KJ was too young and too fragile to undergo one. With each passing day, the risk of irreversible neurological damage grew.
What happened next may become the most important medical story of the decade. In just six months, a team at Children’s Hospital of Philadelphia and Penn Medicine designed a personalized therapy that could correct the single misspelled letter in KJ’s DNA using a gene editing technology known as CRISPR. To get the therapy inside KJ’s cells, doctors relied on the same kind of mRNA technology that powered the Covid-19 vaccines. He received his first dose at 6 months old. One year later, KJ is walking, talking and thriving at home with his family.
Worth a read; the key question is how the FDA regulates individualized treatments when the current paradigm relies on RCTs with thousands of subjects.
The Jump Rope Queen of Beverly Hills
Ms. Judis currently holds the Guinness World Record for oldest competitive rope skipper. She also thrives on having an audience: If she doesn’t share a workout, she said, it’s like it never happened.
82!
I Trained for the Paris Marathon Using ChatGPT
Twelve months ago, I signed up for the Paris Marathon. Within six months, I knew I’d be in trouble without a trainer. So, living in the San Francisco Bay Area — the home of artificial intelligence — I decided to build one myself.
-
We should all do this sort of thing more often. 🙂
-
Imagine I told you that AI was going to create a 40% unemployment rate. Sounds bad, right? Catastrophic even. Now imagine I told you that AI was going to create a 3-day working week. Sounds great, right? Wonderful even. Yet to a first approximation these are the same thing. 60% of people employed and 40% unemployed is the same number of working hours as 100% employed at 60% of the hours.
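The equivalence is easy to check with a little arithmetic. A minimal sketch, assuming a hypothetical baseline of a 5-day, 40-hour work week (numbers chosen for illustration, not from the source):

```python
# Same total labor supply, framed two ways.
FULL_WEEK_HOURS = 40  # assumed 5-day, 8-hour baseline

# Scenario A: 60% of people work full weeks, 40% are unemployed.
hours_per_capita_a = 0.60 * FULL_WEEK_HOURS

# Scenario B: 100% of people work, but only 3 days out of 5
# (i.e. 60% of the baseline hours).
hours_per_capita_b = 1.00 * FULL_WEEK_HOURS * (3 / 5)

print(hours_per_capita_a, hours_per_capita_b)  # 24.0 24.0
assert hours_per_capita_a == hours_per_capita_b
```

To a first approximation, both scenarios deliver 24 average working hours per person; the difference is purely in how those hours are distributed.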