Anthropic

Amazon strengthens commitment to Anthropics, fulfills proposed $4 billion investment

Gettyimages 1305439234
Amazon invested a further $2.75 billion in growing AI power Anthropic on Wednesday, following through on the option it left open last September. The $1.25 billion it invested at the time must be producing results, or perhaps they’ve realized that there are no other horses available to back. Lacking the capability to develop adequate models on their own for whatever reason, companies like Amazon and Microsoft have had to act vicariously through others, primarily OpenAI and Anthropic. Right now the AI world is a bit like a roulette table, with OpenAI and Anthropic representing black and red. We know Anthropic has a plan, and this year we’ll find out what Amazon, Apple, Microsoft and other multinational interests think they can do to monetize this supposedly revolutionary technology.

” New Models from Anthropic Outperform GPT-4

Claude2 Blog V1 1
All show “increased capabilities” in analysis and forecasting, Anthropic claims, as well as enhanced performance on specific benchmarks versus models like GPT-4 (but not GPT-4 Turbo) and Google’s Gemini 1.0 Ultra (but not Gemini 1.5 Pro). A model’s context, or context window, refers to input data (e.g. In a technical whitepaper, Anthropic admits that Claude 3 isn’t immune from the issues plaguing other GenAI models, namely bias and hallucinations (i.e. Unlike some GenAI models, Claude 3 can’t search the web; the models can only answer questions using data from before August 2023. Here’s the pricing breakdown:Opus: $15 per million input tokens, $75 per million output tokensSonnet: $3 per million input tokens, $15 per million output tokensHaiku: $0.25 per million input tokens, $1.25 per million output tokensSo that’s Claude 3.

Anthropologists discover deceptive capabilities of trained AI models

Gettyimages 1548038240 1
A recent study co-authored by researchers at Anthropic, the well-funded AI startup, investigated whether models can be trained to deceive, like injecting exploits into otherwise secure computer code. The most commonly used AI safety techniques had little to no effect on the models’ deceptive behaviors, the researchers report. Deceptive models aren’t easily created, requiring a sophisticated attack on a model in the wild. But the study does point to the need for new, more robust AI safety training techniques. “Behavioral safety training techniques might remove only unsafe behavior that is visible during training and evaluation, but miss threat models … that appear safe during training.