My wild-ass guess with no evidence is that Anthropic wants OpenAI to crash. OpenAI is mostly focused on “chat” whereas Anthropic is mostly focused on code generation (where the tooling has less of an AI psychosis problem).
These two companies are barely competitors anymore because of their different focuses, but they are still operating in a space with finite resources (RAM, GPUs and investor money) and until now OpenAI has had greater access to those resources. If people lose confidence in OpenAI and it crashes, then Anthropic can step in and say “Hey! We still have a viable product!” and the remaining investors will flock to them. Demand for GPUs and RAM will also go down and Anthropic can scoop them up for cheaper.
Though in science, you’d regularly discredit other scientists based on their methodology, less so their affiliation. But yeah, that might be a factor. 😉
Sure. Just saying. I mean, the pharma industry also does the studies on its own products… that's often how it works. Anthropic themselves would be the ones with access to their users' chats… So the second step would be to grant other people a sample of the one and a half million user chats so the findings can be verified independently. But it's not really wrong for them to get a conversation going.
I’m far more worried about using an AI tool to analyze and aggregate the usage patterns. But I have no clue how that Clio thing performs.
Ah, nice and unbiased
Even more damning that the findings are so bad, then. Imagine what someone biased against LLMs would find…
A clear conflict of interest should certainly set off alarm bells.
In science this is usually called a conflict of interest and definitely something that would draw criticism.
reminds me of the doctors who said Tasers were safe. but they were hired by Axon. the Taser company.