Claude now supports 150,000 words in a single prompt

Anthropic just rolled out Claude 2.1, its latest salvo in the showdown with ChatGPT. The headline feature? A context window of 200,000 tokens. Tokens are the building blocks of text that Claude processes, and 200,000 of them works out to roughly 150,000 words, more than the entirety of Homer's The Odyssey. That makes Claude a heavy lifter, ready to analyze entire codebases, academic papers, or financial statements. You can even throw in a lengthy novel and watch it do its magic.
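To make the long-context idea concrete, here is a minimal sketch of what a request with a huge document might look like, built as a payload for Anthropic's text completions endpoint (`https://api.anthropic.com/v1/complete`). The document text is a made-up stand-in, and the payload is only constructed, not sent, since sending requires an API key:

```python
import json

# Hypothetical stand-in for a very long document (a novel, a codebase dump, etc.).
document = "Sing to me of the man, Muse, the man of twists and turns... " * 500

# Request payload in the shape of the Claude 2 text completions API.
# Sending it would require an `x-api-key` header, omitted in this sketch.
payload = {
    "model": "claude-2.1",
    "max_tokens_to_sample": 1024,
    "prompt": (
        f"\n\nHuman: Here is a document:\n\n{document}\n\n"
        "Summarize its main themes.\n\nAssistant:"
    ),
}

body = json.dumps(payload)  # what would go over the wire
print(payload["model"])
```

The point is simply that the whole document rides along inside the prompt string; with a 200,000-token window, that string can be novel-sized.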

Now, there’s a trade-off. With a window this big, responses to very long prompts can take minutes instead of the usual seconds. But hey, it’s a small price for handling such heavy-duty tasks, and Anthropic says it expects those wait times to drop as the tech evolves.

But that’s not all. Claude 2.1 has also gotten savvier, cutting its hallucination rate in half. In the AI world, a hallucination is a confidently stated wrong answer. Claude 2.1 is now twice as likely to say “I don’t know” instead of giving you a wrong answer, and it makes 30% fewer errors when answering questions about very long documents.

For the tech wizards, there’s a Workbench console where developers can iterate on prompts and tweak Claude’s behavior. And they’ve thrown in a developer beta feature called “tool use.” It’s like giving Claude a toolbox: it can now integrate with existing processes, products, and APIs. Think calling a calculator for complex equations, or translating natural-language requests into structured API calls. Anthropic admits it’s still in the early stages and is urging users to share feedback.
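The basic loop behind tool use looks something like the sketch below: the model emits a structured request naming a tool, and your code dispatches it to a real function and hands the result back. The JSON shape and the `calculator` tool here are illustrative assumptions, not the exact schema of Anthropic's beta:

```python
import json

def calculator(expression: str) -> float:
    # Evaluate a simple arithmetic expression. Demo only: eval with
    # builtins stripped is not a real sandbox for untrusted input.
    return eval(expression, {"__builtins__": {}})

# The "toolbox": tool names the model is told about, mapped to local functions.
TOOLS = {"calculator": calculator}

def dispatch(tool_request_json: str) -> str:
    """Parse a model-emitted tool request and run the matching tool."""
    request = json.loads(tool_request_json)
    tool = TOOLS[request["tool"]]
    result = tool(**request["arguments"])
    return json.dumps({"tool": request["tool"], "result": result})

# Pretend the model asked for a calculation mid-conversation:
reply = dispatch('{"tool": "calculator", "arguments": {"expression": "3 * (7 + 5)"}}')
print(reply)
```

In a real integration the reply string would be fed back into the conversation so Claude can use the result in its answer.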