Google-Backed Anthropic Launches Chatbot Claude, a New Face in the AI Space and Another Competitor to ChatGPT
Technology

Google-backed Anthropic has launched its AI chatbot, Claude, challenging OpenAI's dominance in the artificial intelligence space. Anthropic was founded by Dario and Daniela Amodei, both former OpenAI employees, and Google recently invested $400 million in the startup. In a blog post announcing the launch, Anthropic described Claude as a next-generation AI assistant based on its research into training AI systems to be helpful, honest, and harmless.

The startup said Claude can handle a wide variety of conversational and text-processing tasks with a high degree of predictability and reliability. Claude can perform the same tasks as ChatGPT, including summarizing text, writing code, drafting blog posts, and replying to emails. But Anthropic claims Claude is easier to converse with, more steerable, and less likely to produce harmful outputs.

The chatbot's behavior, personality, and tone can also be tweaked to match users' expectations. Like ChatGPT, Claude cannot access the internet, and its training data extends only through the spring of 2021. Anthropic trains Claude with a technique it calls "constitutional AI": the model, trained on large volumes of data, follows a set of principles that steer it away from dangerous topics and help it recognize its own biases.

Before the launch, Anthropic partnered with several companies to test its technology. DuckDuckGo combined ChatGPT with Claude to summarize Wikipedia text, Quora used Claude to power its AI chatbot app, and the productivity app Notion used Anthropic's technology to power Notion AI.

However, like ChatGPT and Microsoft's Bing Chat, Claude faces familiar issues: users can bypass its safety features with cleverly crafted prompts, and the chatbot itself can hallucinate. Anthropic CEO Dario Amodei acknowledged that Claude, like other language models, sometimes makes things up, and accepted that there are still problems left to solve.