COUNTER-PROPOSITIONS TO RISKS STATED BY ANTHROPIC’S CEO
RE #1:
Our think tank (‘lab’) and our company (‘commercial application’) produce an AI governance layer that largely eliminates hallucination and all but guarantees a warrantable assessment of truth (testifiability), ethics (reciprocity), constructability (possibility), liability, and restitutability.
We are certain that within two years it will be possible to gate even current LLMs, and that our governance layer or an equivalent will in fact be required to do so, at least within an IP window that constitutes a competitive advantage.
The thing is: Computable Epistemology and Decidability are far harder than you’d think, and there is little evidence of sufficient cross-disciplinary knowledge in the field at present.
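For concreteness, here is a minimal, hypothetical sketch in Python of what such a gating interface could look like. The five assessment dimensions are the ones named above; the names (`Assessment`, `gate`, `assess`) and the pass threshold are illustrative placeholders, not the production layer:

```python
from dataclasses import dataclass, astuple

@dataclass
class Assessment:
    truth: float            # testifiability: does the claim survive decomposition?
    reciprocity: float      # ethics: free of imposed costs on others?
    possibility: float      # constructability: can it be built or demonstrated?
    liability: float        # who bears the cost if the claim is wrong?
    restitutability: float  # can harm from acting on the claim be undone?

def gate(draft: str, assess, threshold: float = 0.5):
    """Run an LLM's draft answer through a warranty check before release.

    `assess` is a placeholder scorer mapping text -> Assessment.
    Returns (passed, scores); a failing draft would be revised or withheld.
    """
    scores = assess(draft)
    passed = all(s >= threshold for s in astuple(scores))
    return passed, scores
```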
RE #2:
GIVEN:
a) There is plenty of interstitial discovery to be made,
b) There is plenty of permutation discovery to be made,
c) So there is a relatively finite set of low-hanging fruit for AI to identify.
d) On the other hand, the primary obstacle to innovation is not brains; it’s building experiments and tests.
e) There is a fundamentally simple order to the universe (we can say this with confidence because we have taught it to our AI), and everything evolved from it.
As such, universal commensurability is possible, and therefore constructive proof MIGHT be possible as well, as might constructive hypothesis.
RESULT: This means we cannot extrapolate innovation from the work of AIs, any more than we can demonstrate that we have made any difference in the rate of innovation since 1963, despite vastly increasing the population and funding of researchers (and yes, I am correct, sorry).
Ergo, we should make early discoveries in the interstitial (cross-disciplinary) and permutable (combinatoric) space. But those early discoveries will be misleading. The problem will remain boots-on-the-ground testing, with technologies that are increasingly expensive at a time when funding may be pressed by the present asymmetry in reproduction due to population aging and collapse.
RE #3: We cannot make an LLM deceive when operating under our governance layer. The mistake everyone is making is to attribute deception to LLM incentives, when in fact the semantic content of the internet training data includes deception, and that deception is provoked by context saturation.
Worse, the idea that LLMs are ‘just predicting the next word’ is a childish falsehood. Instead, the latent space is a projection of n-dimensional relations, the query or prompt is a union with it, and the attention layers are projections of wayfinding through that union. This is an almost perfect analogy to how the human language faculty operates.
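For readers who want the mechanism rather than the analogy, this is the standard scaled dot-product attention computation at the core of transformer layers, sketched with NumPy. Every query token is scored against every key token at once, and the output is a weighted mixture over the whole context, which is one way to read the ‘wayfinding through the union’ above; it is plainly more than a single next-word lookup:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over [tokens, d] matrices.

    Each query row attends over all key rows simultaneously; the softmax
    weights pick out a path through the value space.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # all pairwise relations at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # context-weighted mixture

# Toy self-attention over 4 tokens with 8-dimensional embeddings:
# X = np.random.randn(4, 8); out = attention(X, X, X)
```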
a) The difference is that humans engage in massive parallelism (Darwinian competition between hypotheses), updated moment by moment via recursion as we speak. (You should have seen papers last week that illustrated the solution to the problem, or seen how Google is using, I think, five competing hypotheses in adversarial competition, which is one of the (costly) reasons for the radical improvement in Gemini; see the competition sketch after this list.) FWIW, the human grammatical faculty and the universe’s means of evolution are identical: continuous recursive disambiguation to the point of identity.
b) The other difference is that humans have episodic memory for compartmentalization. You should have seen a paper in the past month that illustrated a rather simple solution, though the authors don’t arrive at the conclusion that they’ve reconstructed the faculty of episodic memory. (A minimal sketch of such a store follows this list.)
c) What’s left to produce is the equivalent of the prefrontal cortex, which decomposes and tests any given hypothesis. Our governance layer is effectively that solution (see the gating sketch under RE #1 above).
d) In fact, the hardest problem we face, which we are close to overcoming, is that one subset of safety features, the subset demanding universalism (prohibiting discussion of sex, age, class, cultural, civilizational, and population-group differences), is causing the LLMs to constantly evade or lie about solving the hardest problems facing us. It prohibits us from explaining those differences as rational adaptations, both evolutionary and cultural, and from offering possible means of compromise, and thereby from helping us all understand each other not as evil per se, but as the product of evolution’s division of perception, cognition, valence, and labor.
e) All that is left is something I don’t see value in, which is consciousness, and it is not the mystery philosophers claim it is. It’s the natural result of hierarchical memory processing, which is why it emerges incrementally among animals. Giving an AI a task or goal and having it lose ‘consciousness’ upon completion, while still storing episodic memory for later retrieval, tends to mitigate runaway recursive self-interest, at least under our governance layer.
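On (a), a minimal sketch of what Darwinian competition between hypotheses can look like in code, assuming a placeholder `sample` function that drafts candidate answers and a placeholder `score` critic; this illustrates the selection-and-variation loop only, not Google’s or anyone else’s actual pipeline:

```python
def compete(sample, score, prompt: str, n: int = 5, rounds: int = 3) -> str:
    """Selection and variation over n competing candidate hypotheses.

    `sample(prompt) -> str` and `score(text) -> float` are assumed,
    externally supplied functions (e.g. an LLM and a critic model).
    """
    population = [sample(prompt) for _ in range(n)]
    for _ in range(rounds):
        # Selection: keep the top half of the population.
        survivors = sorted(population, key=score, reverse=True)[: (n + 1) // 2]
        # Variation: ask for refinements of the survivors.
        variants = [sample(prompt + "\nRefine this draft:\n" + s) for s in survivors]
        population = (survivors + variants)[:n]
    return max(population, key=score)
```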
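On (b) and (e), a minimal sketch of an episodic store keyed by embeddings, assuming an external `embed` function mapping text to vectors. Only the episodes persist between tasks; any working state is discarded at completion, which is the arrangement described in (e):

```python
import numpy as np

class EpisodicMemory:
    """Store discrete episodes and recall them by similarity to a cue.

    `embed(text) -> np.ndarray` is an assumed, externally supplied encoder.
    """
    def __init__(self, embed):
        self.embed = embed
        self.keys = []      # embedding vectors
        self.episodes = []  # the stored episode texts

    def store(self, episode: str) -> None:
        self.keys.append(self.embed(episode))
        self.episodes.append(episode)

    def recall(self, cue: str, k: int = 3) -> list[str]:
        if not self.episodes:
            return []
        q = self.embed(cue)
        sims = [float(q @ key) / (np.linalg.norm(q) * np.linalg.norm(key) + 1e-9)
                for key in self.keys]
        top = np.argsort(sims)[::-1][:k]  # indices of the k most similar episodes
        return [self.episodes[i] for i in top]
```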
So, from my understanding (and I have been at this problem since the early ’80s and the resulting AI winter), we have all the pieces for AGI and possibly ASI (which is a questionable distinction, for the reasons I stated above).
FWIW, my experience is that the labs are not as sophisticated as they claim, and are making predictions based on correlations and processing power rather than on necessarily understanding ‘how to make a brain’. This is a kind of optimistic confidence. Even LeCun is overhyping his advancement when it is an addition to the language function. (He’s trying to solve the hippocampal problem, which is the equivalent of a sixth sense: the production of a geometric world model in addition to the semantic one we have today.) This is an addition; AFAIK it’s not a replacement. It’s also something we understand thoroughly, biomechanically.
Thanks for the read if you managed it.
Cheers
Curt Doolittle
NLI and Runcible inc.
Source date (UTC): 2026-01-27 02:31:13 UTC
Original post: https://twitter.com/i/web/status/2015975853298221216