
The AI beat goes on...with a farewell | The AI Beat
Last weekend I flew to San Francisco, preparing for several days of immersion into all things Nvidia and AI. I needed to muster all of my energy for the company's annual GTC conference, but instead of sleeping in, I shrugged off my jet lag and headed down to Monterey for two days to commune with sea lions and otters — call it a Red Bull-style shot of actual nature and wildlife before joining the world of artificial intelligence and GPUs and PFLOPS.

'Attention is All You Need' creators look beyond Transformers for AI at Nvidia GTC: 'The world needs something better'
Seven of the eight authors of the landmark 'Attention is All You Need' paper, which introduced Transformers, gathered for the first time as a group for a chat with Nvidia CEO Jensen Huang in a packed ballroom at the GTC conference today.

Nvidia CEO Jensen Huang introduces a 'big, big GPU' that is 'pushing the limits of physics'
For his two-hour keynote address at Nvidia's GTC developers conference yesterday at the packed SAP Center in San Jose, CEO Jensen Huang was clad in a black leather jacket that was just a bit more rock and roll than the plain versions he has sported over the past few years. With a few silver zippers here, a little shine and texture there, the jacket offered a small clue that, while Huang joked that he hoped the audience realized the gathering was not a concert, Nvidia is one of the past year's biggest AI rock stars.

OpenAI's Sora: The devil is in the 'details of the data'
For OpenAI CTO Mira Murati, an exclusive Wall Street Journal interview with personal tech columnist Joanna Stern yesterday seemed like a slam-dunk. The clips of OpenAI's Sora text-to-video model, which was shown off in a demo last month and which Murati said could be available publicly in a few months, were "good enough to freak us out" but also adorable or benign enough to make us smile. That bull in a china shop that didn't break anything! Awww.

EU Parliament officially adopts AI Act — landmark regulation likely to become law in May
Nearly three years after draft rules were proposed, European Parliament lawmakers approved the AI Act today. The approval, which came a month earlier than expected, is the final endorsement of the first comprehensive AI regulation, covering high-risk AI systems, transparency requirements for AI that interacts with humans, and AI embedded in regulated products.

Money and politics continue to merge in AI safety — including a new Super PAC | The AI Beat
Back in January, I spoke to Mark Beall, a co-founder and then-CEO of Gladstone AI, a consulting firm that released a bombshell AI safety report yesterday, commissioned by the State Department. The announcement was first covered by TIME, which highlighted the report's AI safety action-plan recommendations — that is, "how the US should respond to what it argues are significant national security risks posed by advanced AI."

Insilico Medicine unveils first AI-generated and AI-discovered drug in new paper
Insilico Medicine, the Hong Kong and New York-based biotech startup that has raised over $400 million to connect biology, chemistry, and clinical trial analysis using next-generation AI systems, announced a new paper today that highlights the journey of what it claims is the first AI-generated and AI-discovered drug — which has now reached Phase II clinical trials.

NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute
The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency's newly formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous.

Experts call for legal 'safe harbor' so researchers, journalists and artists can evaluate AI tools
According to a new paper published by 23 AI researchers, academics and creatives, 'safe harbor' legal and technical protections are essential to allow researchers, journalists and artists to do "good-faith" evaluations of AI products and services.

As NIST funding challenges persist, Schumer announces $10 million for its AI Safety Institute
US Senate Majority Leader Chuck Schumer (D-NY) announced today that the National Institute of Standards and Technology (NIST) will receive up to $10 million to establish the US Artificial Intelligence Safety Institute (USAISI) — which was created in November 2023 to "support the responsibilities assigned to the Department of Commerce" under the AI Executive Order.

5 revealing details from OpenAI's emails with Elon Musk
The tech world got its popcorn out last night after OpenAI dropped a new blog post that responded to the lawsuit Elon Musk filed last week against OpenAI, CEO Sam Altman and president Greg Brockman. Musk's claims include breach of contract, breach of fiduciary duty, and unfair competition — all circling around the idea that OpenAI put profits and commercial interests in developing artificial general intelligence (AGI) ahead of its duty to protect the public good.

Why the open letter to ‘build AI for a better future’ falls flat
In the canon of AI industry open letters — remember the "pause" letter from last March? — I would venture to say that the latest, titled "Build AI for a Better Future," might take the cake. A deflated, flat cake, that is.