AN AUTHOR’S FRUSTRATION WITH THE LIMITS OF ANTHROPIC’S CLAUDE
@AnthropicAI #Claude
1) Claude is much better at writing than its competitors. There is no comparison. GPT is better all around, but Claude’s composition is superior.
2) Despite paying the fee:
… (a) I hit the message limit after a trivial number of exchanges in a chat. I can’t even make it through reviewing a third of a chapter before hitting that limit.
… (b) the same is true for chat length. I can’t even upload the full set of chapters for Claude to ‘understand’ before I hit the chat limit. This results in Claude recommending material that’s already covered in later chapters – and this dramatically impacts the rest of the recommendations, rendering each section reviewed as pointless.
3) NET: I would prefer to use Claude to help me write for publication, because Claude’s production of readable prose, despite the complexity of my work, is in fact superior. But it’s pointless, because the context window is simply too small to assist with anything more than ‘marketing and email spam’.
Why this limit in Claude? I don’t hit the limit in ChatGPT, and even then it updates its memory to retain a general understanding of my work, so starting a separate chat to continue is relatively easy.
Color me sad, so to speak. But you know, MS/OpenAI are closing in on ‘infinite memory’ at present. And that’s without their (a) pursuit of step-by-step reasoning, (b) followed by recursion (self-testing), (c) followed by adversarial competition between responses. At that point, it’s over. Other than neuromorphic hardware that gets past the cost of training, and awareness of the evolution of global state (‘consciousness’), that’s all that’s required for AGI.
FWIW: My workflow consists of Grok (current activity), Perplexity (research papers), ChatGPT (writing), and when possible Claude (reviewing). (And I find Gemini useless.)
ALSO: IMO: The tests being used on math, programming, and rudimentary logic are ‘errors of reductio ad absurdum’. They do not expose the sophistication, or lack of it, in the models. My work consists of operational language and constructive logic from first principles, without the use of symbols. This work immediately exposes the problems of textual probabilism in LLM layers, and puts greater stress on the conceptual networks of meaning behind the terms used. In other words, math and programming are ‘cheating’ by using a reserved vocabulary that circumvents any demonstration of the depth of intelligence in human language, which constitutes the repository of human knowledge.
And this is why ChatGPT has no near competitors at this point. Semantically, it’s unparalleled.
Curt Doolittle
The Natural Law Institute
Source date (UTC): 2024-11-26 16:15:06 UTC
Original post: https://twitter.com/i/web/status/1861443573469847552