Meet Claude, the AI with 50x the memory of ChatGPT
Why Claude's new long-term memory is a big deal. Spoiler alert: the chatbots are getting smarter.
The AI landscape is evolving faster than ever; it seems like there is a major announcement every week. Last week, Anthropic announced that it is expanding the context window of its Claude model to 100,000 tokens. That is 50 times the memory of ChatGPT.
What does this mean in simple terms? A context window is how much surrounding information an AI system can take into account at once. For you, it means you can have longer conversations with the AI while it stays on topic. You can also provide far more reference material (up to roughly a 300-page book) for the AI to understand and use, including your own content.
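You can sanity-check the 300-page figure with a quick back-of-the-envelope calculation. The ratios below (about 4 characters or 0.75 words per token, and about 250 words per book page) are common rules of thumb, not Anthropic's actual tokenizer:

```python
# Rough estimate of what a 100,000-token context window holds.
# The 4-chars-per-token and 250-words-per-page ratios are heuristics.

def estimate_tokens(text: str) -> int:
    """Estimate the token count of English text (~4 characters per token)."""
    return max(1, round(len(text) / 4))

def tokens_to_pages(tokens: int, words_per_page: int = 250) -> float:
    """Convert a token budget to book pages (~0.75 words per token)."""
    words = tokens * 0.75
    return words / words_per_page

if __name__ == "__main__":
    window = 100_000
    print(f"{window:,} tokens ~ {window * 0.75:,.0f} words")
    print(f"~ {tokens_to_pages(window):.0f} pages")  # about 300 pages
```

By the same heuristic, a 2,000-word chat history only uses around 2,700 tokens, which is why 100K feels like a different category of tool.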
Anthropic, like OpenAI, builds machine learning models with a focus on safety. Both companies build useful AI tools for productivity that hopefully won't take over the world.
To access Claude, we will use Quora's app, Poe. Poe provides access to multiple language models through one interface. It's useful for trying different models, and it handles subscriptions so you don't need a separate one for each company.
To use Poe, sign up with Google, Apple, or your phone number. It's available as a web app and mobile app so you can switch between devices. Some models are limited without a subscription, but subscribing provides access to more, including Claude.
In Poe, you'll see different models like GPT-4 and Claude with 100,000 tokens of context. Claude+ is an even smarter version. Claude with 100K isn't as advanced but can take in much more information.
To test it out, in the video I fed the user manual for a product I created called SolidMon into Claude-100k and asked it to create a landing page. The result was a usable landing page that was genuinely about my product, built from what Claude learned by reading the manual. I was able to have an intelligent conversation with Claude using the context from all that information, something not possible with GPT-4, whose much smaller context window would have made it forget what we were talking about :)
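If you'd rather do this programmatically than through Poe, the same workflow is a few lines with Anthropic's Python SDK. This is a sketch under my own assumptions: the file name `solidmon_manual.txt` and the instruction text are hypothetical examples, and the model alias is whichever long-context Claude model you have access to:

```python
# Sketch: feed a long document (a user manual) to Claude and ask it to
# write a landing page, using the `anthropic` SDK's Messages API.
from pathlib import Path

def build_messages(document: str, instruction: str) -> list[dict]:
    """Pack the full document plus the task into a single user message."""
    prompt = f"Here is a product user manual:\n\n{document}\n\n{instruction}"
    return [{"role": "user", "content": prompt}]

if __name__ == "__main__":
    import anthropic  # pip install anthropic; requires ANTHROPIC_API_KEY

    manual = Path("solidmon_manual.txt").read_text()  # hypothetical file
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # any long-context Claude model
        max_tokens=2048,
        messages=build_messages(
            manual, "Write an HTML landing page for this product."
        ),
    )
    print(response.content[0].text)
```

The key idea is the same one the video demonstrates: because the whole manual fits in one message, the model answers from your content rather than from generic knowledge.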
Pricing for Poe starts at $20/month, which provides 600 messages for GPT-4 and 1,000 for Claude+, on top of the free daily message allowance. I find this good value for trying different models and keeping up with the latest innovations.
The AI landscape will continue evolving quickly. But with models like Claude and tools like Poe, we have more opportunities to take advantage of the latest breakthroughs. Let me know if you have any other questions!
Next week in productivityhacks.ai
Next week I will be using Descript to create an interview between AI and me, with me as the interviewee. You can see it in my Instagram reel by clicking here. I was planning to do it this week, but the Claude-100k announcement was way more important :)
Poe - sign up at https://poe.com - use different models from one interface on both desktop and mobile.
Anthropic - competitor to OpenAI and Google's Bard - https://www.anthropic.com
Anthropic vs OpenAI
OpenAI is an AI research organization that began as a nonprofit and now operates under a capped-profit structure. Its goal is to ensure that artificial general intelligence (AGI) benefits humanity. It conducts fundamental research into deep learning and reinforcement learning, and has released several AI models and systems, including Dota 2 bots, text generation models, and robotic hand dexterity systems. However, its research is fairly open-ended and directed at achieving human-level AI in a general sense.
Anthropic, on the other hand, is a startup focused specifically on AI safety research and development. It aims to build AI systems that are safe, beneficial, and honest using a technique called Constitutional AI, which trains models against a written set of principles so they learn human values and behave safely. Anthropic has not released open-source AI models, though it does publish safety research. It seems focused on addressing risks from advanced AI before human-level AI is achieved.
In summary, while OpenAI and Anthropic share the goal of safe and beneficial AI, their approaches differ substantially. OpenAI conducts open-ended research into AGI, releasing several models and tools publicly along the way. Anthropic focuses specifically on technical AI safety, aiming to ensure that future AI systems remain grounded and safe as progress accelerates. Both organizations recognize risks from uncontrolled superhuman AI, but OpenAI adopts a more incremental approach while Anthropic is working to get ahead of the challenge as much as possible.