


Cerebras stock nearly doubles on day one as AI chipmaker hits $100 billion — what it means for AI infrastructure
The company sold 30 million shares at $185 apiece, raising $5.55 billion in what Bloomberg reported as the largest U.S. tech IPO since Uber went public in 2019. The final pricing shattered expectations: Cerebras initially marketed shares at $115 to $125, then raised the range to $150 to $160 as investor demand surged, before ultimately pricing above even that elevated band.

Claude Code's '/goals' separates the agent that works from the one that decides it's done

AI IQ is here: a new site scores frontier AI models on the human IQ scale. The results are already dividing tech.
For decades, the IQ test has been one of the most familiar — and most contested — yardsticks for human intelligence. Now, a startup project called AI IQ is applying the same metaphor to artificial intelligence, assigning estimated intelligence quotients to more than 50 of the world's most powerful language models and plotting them on a standard bell curve.

Anthropic reinstates OpenClaw and third-party agent usage on Claude subscriptions — with a catch

Anthropic finally beat OpenAI in business AI adoption — but 3 big threats could erase its lead
For the first time since the AI race began, more American businesses are paying for Anthropic's Claude than for OpenAI's ChatGPT.

Frontier AI models don't just delete document content — they rewrite it, and the errors are nearly impossible to catch

Perceptron Mk1 shocks with highly performant video analysis AI model 80-90% cheaper than Anthropic, OpenAI & Google



Thinking Machines shows off preview of near-realtime AI voice and video conversation with new 'interaction models'

Intent-based chaos testing is designed for when AI behaves confidently — and wrongly
Here is a scenario that should concern every enterprise architect shipping autonomous AI systems right now: an observability agent is running in production. Its job is to detect infrastructure anomalies and trigger the appropriate response. Late one night, it flags an elevated anomaly score of 0.87 across a production cluster, above its defined threshold of 0.75. The agent is within its permission boundaries. It has access to the rollback service. So it uses it.
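The decision path in that scenario can be sketched in a few lines. This is a minimal illustration with hypothetical names (the article does not describe the agent's actual implementation): the agent compares an anomaly score against a fixed threshold and, because the action sits inside its permission boundary, triggers a rollback on its own.

```python
# Hypothetical sketch of the agent's decision logic described above.
# ANOMALY_THRESHOLD and decide() are illustrative names, not from any real system.

ANOMALY_THRESHOLD = 0.75  # the threshold cited in the scenario

def decide(anomaly_score: float, can_rollback: bool) -> str:
    """Return the action the agent takes for a given anomaly score."""
    if anomaly_score > ANOMALY_THRESHOLD and can_rollback:
        # The agent acts "confidently": the score exceeds the threshold and
        # the rollback service is within its permission boundary -- even if
        # the anomaly turns out to be a false positive.
        return "rollback"
    return "no_action"

print(decide(0.87, can_rollback=True))  # the night-time scenario: a rollback fires
```

The point of intent-based chaos testing is precisely this gap: every individual check here passes, yet the overall action can still be the wrong one when the anomaly signal itself is misleading.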