What AI coding costs you

From Tom Wojcik:

Here’s what keeps me up at night. By every metric on every dashboard, AI-assisted human development and human-assisted AI development is improving. More PRs shipped. More features delivered. Faster cycle times. The charts go up and to the right. But metrics don’t capture what’s happening underneath. The mental fatigue of reviewing code you didn’t write all day. The boredom of babysitting an agent instead of solving problems. The slow, invisible erosion of the hard skills that made you good at this job in the first place. You stop holding the architecture in your head because the agent handles it. You stop thinking through edge cases because the tests pass. You stop wanting to dig deep because it’s easier to prompt and approve. There’s no spark in you anymore.

I really enjoyed this article – I found myself nodding along throughout. I’m not an AI skeptic, but I do worry about what the next decade looks like for my career, and even more so for the people coming up behind me.

We’re drifting toward a future where the only engineers truly qualified to review AI-generated code are the seniors who earned that judgment by writing bad code themselves — before AI existed to do it for them. When that generation retires, we’ll be left with teams peer-reviewing AI output they don’t deeply understand, using other AI tools to validate it. The blind leading the blind, but with great dashboards.

That doesn’t mean we can’t build remarkable things in this new world. But the quiet erosion of institutional knowledge means that even as the metrics trend upward, our collective human capital is atrophying. We’re getting extraordinarily efficient at constructing systems that nobody will actually know how to fix … right up until an agent hallucinates its way into a 3 AM production outage and the on-call rotation just stares blankly.

Defense Secretary Pete Hegseth designates Anthropic a supply-chain risk

From The Verge:

Nearly two hours after President Donald Trump announced on Truth Social that he was banning Anthropic products from the federal government, Secretary of Defense Pete Hegseth took it one step further and announced that he was now designating the AI company as a “supply-chain risk”. The decision could immediately impact numerous major tech companies that use Claude in their line of work for the Pentagon, including Palantir and AWS. It is not immediately clear to what extent the Pentagon may blacklist companies that contract with Claude for other services outside of national security.

Good for Anthropic. It’s a shame that the other AI companies aren’t lining up behind them.

Relatively speaking, it makes me happy to be a Claude subscriber.

Pentagon used Anthropic’s Claude in Maduro Venezuela raid

From The Wall Street Journal:

“Anthropic’s artificial-intelligence tool Claude was used in the U.S. military’s operation to capture former Venezuelan President Nicolás Maduro, highlighting how AI models are gaining traction in the Pentagon, according to people familiar with the matter. The mission to capture Maduro and his wife included bombing several sites in Caracas last month. Anthropic’s usage guidelines prohibit Claude from being used to facilitate violence, develop weapons or conduct surveillance.”

Apple News+ Link

Not great, Bob.

Google concedes the open web is in “rapid decline”

From “In court filing, Google concedes the open web is in ‘rapid decline’”:

If the increasingly AI-heavy open web isn't worth advertisers' attention, is it really right to claim the web is thriving as Google so often does? Google's filing may simply be admitting to what we all know: the open web is supported by advertising, and ads increasingly can't pay the bills. And is that a thriving web? Not unless you count AI slop.

No matter how narrowly Google spins this, it’s concerning to see how quickly AI-generated content is drowning out everything else on the web. Feels like Facebook and other companies integrating AI into their posting tools are only hastening the demise of their own platforms.

Here’s why Apple believes it’s an AI leader—and why it says critics have it all wrong

From Samuel Axon at Ars Technica:

If big tech companies and venture capital investments are to be believed, AI and machine learning will only become more ubiquitous in the coming years. However it shakes out, Giannandrea and Borchers made one thing clear: machine learning now plays a part in much of what Apple does with its products, and many of the features consumers use daily. And with the Neural Engine coming to Macs starting this fall, machine learning’s role at Apple will likely continue to grow.

John Giannandrea joined Apple a few years ago from Google to run the AI side of the business, and according to this article his expertise appears to be paying off. There are plenty of direct quotes and anecdotes throughout, but near the end you get the feeling that a cultural shift is happening in Cupertino:

After a long track record of mostly working on AI features in the dark, Apple’s emphasis on machine learning has greatly expanded over the past few years.

The company is publishing regularly, it’s doing academic sponsorships, it has fellowships, it sponsors labs, it goes to AI/ML conferences. It recently relaunched a machine learning blog where it shares some of its research. It has also been on a hiring binge, picking up engineers and others in the machine learning space—including Giannandrea himself just two years ago.

Remember when Giannandrea said he was surprised that machine learning wasn’t used for handwriting with the Pencil? He went on to see the creation of the team that made it happen. And in tandem with other teams, they moved forward with machine learning-driven handwriting—a cornerstone in iPadOS 14.

It appears that behind the scenes there’s a decent amount of restructuring happening that should help Apple deliver more practical enhancements to experiences without just shouting “AI” from the rooftops the way that Google does. Users don’t actually care about those implementation details; they just want nifty products that work well and get out of the way.