Great analogy. I’m cautious of analogies, because they invite false deductions downstream, but basically, as I think you mean it: yes.
The way to think about it is that, at some point, the correlations you create in the LLM via training either over-enforce (overdetermine) or misdirect (underdetermine) the distribution.
This is why training with our existing regression algorithms, independent of the context of whatever subnetwork we’re trying to tune, requires retesting nearly everything.
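To make the retesting point concrete, here’s a minimal numpy sketch (a toy regression standing in for an LLM; the data, the polynomial basis, and the “narrow patch” are all illustrative assumptions, not anything from a real model). Tuning on a narrow slice, with an objective that knows nothing about the context it’s overwriting, degrades the fit everywhere else, which is exactly why you have to retest:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an LLM: a polynomial regression trained on a broad
# distribution, then "tuned" on a narrow slice with a shifted target.
def features(x):
    return np.stack([x**k for k in range(6)], axis=1)

def fit(w, x, y, steps=3000, lr=0.1):
    X = features(x)
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)  # plain squared-error descent
    return w

x_broad = rng.uniform(-1, 1, 500)        # the full distribution
x_narrow = rng.uniform(0.7, 1.0, 200)    # the slice/subnetwork being tuned
x_test = np.linspace(-1, 1, 201)         # "everything else" we must retest

target = lambda x: np.sin(3 * x)

w = fit(np.zeros(6), x_broad, target(x_broad))
before = np.mean((features(x_test) @ w - target(x_test)) ** 2)

# Tune a new behavior onto the narrow slice; the objective carries no
# contextualization, so it freely overwrites correlations elsewhere.
w = fit(w, x_narrow, target(x_narrow) + 0.5)
after = np.mean((features(x_test) @ w - target(x_test)) ** 2)

print(f"held-out MSE before narrow tuning: {before:.4f}")
print(f"held-out MSE after narrow tuning:  {after:.4f}")  # typically much worse
```

The numbers don’t matter; the shape does. A local objective with no global context is the over/under-determination problem above, just small enough to see.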
I see papers discussing compartmentalization through episodic memory associations (like the brain does), which should get us there. But my job is governance (constraining the path through the latent space), and I leave the training to those who have access to the code and the large models. I don’t, my team doesn’t, so it’s pointless to theorize without a foundation model dev’s ability to test.
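For what governance means in code terms, here is only my toy reading of “constraining the path,” not anyone’s production system; the model, the vocabulary, and the policy list are all made up. The governance layer doesn’t retrain anything; it masks which continuations the decoder is allowed to follow at each step:

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB = ["safe_a", "safe_b", "risky_x", "risky_y", "<eos>"]
ALLOWED = {"safe_a", "safe_b", "<eos>"}  # hypothetical policy, for illustration

def fake_logits(step):
    # Stand-in for a real model's next-token logits at this step.
    return rng.normal(size=len(VOCAB))

def constrained_decode(max_steps=10):
    out = []
    for step in range(max_steps):
        logits = fake_logits(step)
        # Governance layer: mask disallowed tokens so the decoding
        # trajectory can only pass through permitted regions.
        for i, tok in enumerate(VOCAB):
            if tok not in ALLOWED:
                logits[i] = -np.inf
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tok = VOCAB[rng.choice(len(VOCAB), p=probs)]
        if tok == "<eos>":
            break
        out.append(tok)
    return out

print(constrained_decode())
```

Note the separation of concerns: the model proposes, the policy constrains. That’s why the two jobs (training and governance) can live on different teams.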
Source date: 2025-12-31 20:03:29 UTC
Original post: https://twitter.com/i/web/status/2006456195608199539