OpenAI and Anthropic’s rivalry on display
Claude Sonnet 4.6 is more consistent with coding and is better at following coding instructions, Anthropic said.
The Defense Department is threatening to blacklist Anthropic over limits on military use, potentially putting one of its top contractors in a bind.
Amid the slate of top names in AI (OpenAI, Google, Meta, and others), there's one, Anthropic, that considers itself to be, in certain ways, "human-first." Founded by siblings Dario and Daniela Amodei, Anthropic strives to live up to its name by keeping AI safe for people.
Figma has been caught in the software stock sell-off that has sent names like Salesforce, ServiceNow and Intuit plummeting.
Dario Amodei, who left OpenAI before founding Anthropic, has been outspoken about the need for greater AI regulation.
Amodei's and Altman's companies have recently sniped at each other over OpenAI's introduction of ads within its products.
Anthropic rolls out Claude Sonnet 4.6 as its new default model, bringing stronger reasoning and coding power to free and paid users alike.
The company is at odds with the Pentagon over how its AI will be used. The conflict has its roots in the foundational plan for Anthropic.
The roots of the conflict point to the changing nature of software stacks as top officials push to modernize the military.
Anthropic has increasingly found itself at odds with the Pentagon over how its AI model Claude can be deployed in military operations, following disclosures about its use in the raid that captured Venezuelan President Nicolas Maduro last month.
Objectors ask the Court to revise the Class Notice and extend deadlines, contending the current Notice fails to fully
The Pentagon is reviewing its relationship with artificial intelligence (AI) giant Anthropic over the terms of use of its AI model, which was used by the U.S. military during last month's operation.