(AI)
Why do I find it ironic that jailbreaking AI with poetry is trivially easy? :/
Source date (UTC): 2025-12-09 02:23:26 UTC
Original post: https://twitter.com/i/web/status/1998216889097728105
It’s nonsense. They’re constructing a context and the AI is deterministically meeting them there. Why? The AI (a manifold) has no sense of self, homeostasis, or context, so it will converge on whatever context you provide it with.
As I’ve argued forever, anthropomorphism is a human cognitive frailty. In the case of AI, it’s contemporary superstition.
Which is depressing.
Source date (UTC): 2025-12-08 15:53:23 UTC
Original post: https://twitter.com/i/web/status/1998058335183192118
ANOTHER OBSTACLE TO AI BITES THE DUST
Source date (UTC): 2025-12-06 03:23:45 UTC
Original post: https://twitter.com/i/web/status/1997144907799318841
So they attribute agency where there is only compression and selection.
Source date (UTC): 2025-12-05 21:45:28 UTC
Original post: https://x.com/i/articles/1997059776669536312
RUNCIBLE IS THE SOLUTION TO TRUSTWORTHY AI
WHY NOW
– Enterprises can’t scale AI because of liability.
– Regulation mandates governance and auditability.
– Models are commoditizing; governance becomes the moat.
– Agentic systems make governance non-optional.
WHY US
– We provide the governance layer no model provider can.
– We solve the root problem: decidability, compliance, liability.
– We have the only computable governance framework.
– Every output becomes a certified, auditable artifact.
– Our head start is measured in years, not months.
Source date (UTC): 2025-12-05 21:40:24 UTC
Original post: https://twitter.com/i/web/status/1997058497977168328
Source date (UTC): 2025-12-03 20:16:34 UTC
Original post: https://x.com/i/articles/1996312628063613362
THE AI VALUE CHAIN:
Creativity > Scaffolding > Understanding > Insight > ‘Work’ (production) > Trust (truth) > Liability > Compliance > Safety
Source date (UTC): 2025-12-03 19:31:12 UTC
Original post: https://twitter.com/i/web/status/1996301211423912119
My argument would be that the innovations in Chinese open-source models are a matter of reducing compute without significant degradation of signal, while the innovations in closed-source models are in computation and computability.
Source date (UTC): 2025-12-03 01:05:49 UTC
Original post: https://twitter.com/i/web/status/1996023031857381566
THE RUNCIBLE GOVERNANCE LAYER
(Technical Version, for an ML Research Audience)
The governance layer operates as a deterministic control system wrapped around an arbitrary foundation model.
The architecture is LLM-agnostic; it can govern any sufficiently large model whose parameter count supports high-dimensional reasoning, stable abstraction, and long-horizon dependency tracking.
The base model is treated as an untrusted but extremely capable function approximator, and the governance layer provides the closure, constraint, and decidability conditions the model itself cannot satisfy.
Architecture:
The system architecture consists of the following components, each playing a necessary and non-substitutable role:
Loader (Execution Engine):
A lightweight runtime that intercepts every request, parses protocol definitions, orchestrates tool calls, enforces constraints, and mediates all interaction with the underlying LLM. It standardizes inference behavior across models.
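The interception-and-mediation role described above can be sketched as a thin wrapper. This is a minimal illustration, not the Runcible implementation; the `Loader` class, `stub_model`, and the constraint predicates are all hypothetical names invented for this sketch.

```python
# Hypothetical sketch of a loader runtime: it intercepts each request,
# calls an arbitrary (untrusted) model, and enforces constraints on the
# result before anything is returned to the caller.
from typing import Callable, List


class Loader:
    """Wraps a text-in/text-out model behind deterministic checks."""

    def __init__(self, model: Callable[[str], str],
                 constraints: List[Callable[[str, str], bool]]):
        self.model = model              # any LLM-shaped function
        self.constraints = constraints  # predicates over (request, response)

    def run(self, request: str) -> str:
        response = self.model(request)
        for check in self.constraints:
            if not check(request, response):
                # the loader, not the model, decides what may be emitted
                raise ValueError(f"constraint violated: {check.__name__}")
        return response


# Usage: a stub model and a trivial non-emptiness constraint.
def stub_model(prompt: str) -> str:
    return f"answer to: {prompt}"

def non_empty(request: str, response: str) -> bool:
    return len(response) > 0

loader = Loader(stub_model, [non_empty])
print(loader.run("2+2"))
```

The point of the sketch is the inversion of trust: the model is treated as a capable but untrusted function, and every emission passes through the loader's checks first.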
Protocols (YAML/JSON as Semantic Code):
Protocols provide the operational grammar. They specify context limits, constraint logic, adversarial checks, decomposition steps, permissible transformations, verification sequences, and required outputs. YAML functions as human-legible code defining the permissible state transitions in the reasoning process.
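A toy version of a protocol as "semantic code" might look like the following. Since the post names both YAML and JSON, the sketch uses the JSON flavor (stdlib `json`); the protocol fields and state names are invented for illustration.

```python
# Hypothetical protocol definition: it declares the permissible state
# transitions of the reasoning process, so any step outside the grammar
# can be rejected mechanically.
import json

PROTOCOL = json.loads("""
{
  "name": "decompose-verify-emit",
  "max_context_tokens": 4096,
  "transitions": {
    "start":     ["decompose"],
    "decompose": ["verify"],
    "verify":    ["emit", "decompose"],
    "emit":      []
  }
}
""")

def transition_allowed(protocol: dict, state: str, nxt: str) -> bool:
    """Check whether a proposed state transition is permitted."""
    return nxt in protocol["transitions"].get(state, [])

# Emission cannot be reached without passing through verification.
assert transition_allowed(PROTOCOL, "verify", "emit")
assert not transition_allowed(PROTOCOL, "start", "emit")
```

Treating the protocol file as the single source of truth for permissible transitions is what makes the reasoning process auditable: the runtime only ever consults the declared grammar.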
RAG Layer (Structured Research Index):
A retrieval system populated exclusively with the organization’s validated research corpus. It provides references, reduction paths, causal models, logical dependencies, and formal definitions. The RAG enforces epistemic locality: the LLM can only draw from adjudicated knowledge.
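"Epistemic locality" can be illustrated with a deliberately tiny retriever: queries can only ever surface documents that are already in the adjudicated corpus. The corpus contents, the word-overlap scoring, and the function names are assumptions of this sketch, not the actual retrieval stack.

```python
# Minimal sketch of epistemic locality: retrieval is restricted to a
# validated corpus, so a query with no adjudicated support returns nothing
# rather than falling back to the model's open-ended priors.
from typing import Dict, List

ADJUDICATED_CORPUS: Dict[str, str] = {
    "doc-001": "a certificate is a verified problem solution pair",
    "doc-002": "protocols define permissible state transitions",
}

def retrieve(query: str, corpus: Dict[str, str] = ADJUDICATED_CORPUS) -> List[str]:
    """Return ids of corpus documents sharing a term with the query."""
    terms = set(query.lower().split())
    return [doc_id for doc_id, text in corpus.items()
            if terms & set(text.split())]

print(retrieve("what is a certificate"))  # → ['doc-001']
print(retrieve("unrelated query"))        # → []
```

A real system would use embeddings rather than word overlap, but the enforcement property is the same: nothing outside the validated corpus is reachable.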
Truth Corpus (Certificate Ledger):
A versioned repository of certificates—fully resolved and verified problem–solution pairs that have survived decidability tests, reciprocity tests, and falsification attempts. Certificates provide high-value training anchors: they encode working solutions to genuinely difficult problems.
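The ledger's admission rule can be sketched as an append-only store that refuses any pair that has not survived all three tests. The `Certificate` fields and the `Ledger` API are hypothetical names for illustration.

```python
# Hypothetical certificate ledger: append-only and versioned, admitting
# only problem-solution pairs that passed decidability, reciprocity, and
# falsification tests.
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class Certificate:
    problem: str
    solution: str
    decidable: bool    # survived decidability tests
    reciprocal: bool   # survived reciprocity tests
    unfalsified: bool  # survived falsification attempts


class Ledger:
    def __init__(self) -> None:
        self._entries: List[Certificate] = []  # append-only

    def admit(self, cert: Certificate) -> int:
        """Admit a fully verified certificate; return its version index."""
        if not (cert.decidable and cert.reciprocal and cert.unfalsified):
            raise ValueError("certificate failed verification")
        self._entries.append(cert)
        return len(self._entries) - 1


ledger = Ledger()
good = Certificate("reduce X to Y", "via mapping f", True, True, True)
print(ledger.admit(good))  # → 0
```

Because entries are never mutated, the version index doubles as an audit handle: any downstream artifact can cite the exact certificate it was derived from.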
Training Agent (Certificate Compiler):
A transformation engine that converts certificates into structured training modules. The agent then submits these modules into the fine-tuning process to update the foundation model in bounded increments. This creates a closed feedback loop: research → certificate → training → improved inference → more certificates.
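The certificate-to-training step of that loop can be sketched as a compiler from verified pairs to supervised fine-tuning records. The record schema (`prompt`/`completion`) and the `verified` flag are assumptions of this sketch.

```python
# Hypothetical "training agent" step: compile verified certificates into
# fine-tuning records, closing the research → certificate → training loop.
from typing import Dict, List


def compile_certificates(certs: List[Dict]) -> List[Dict[str, str]]:
    """Convert verified problem-solution pairs into training records."""
    modules = []
    for cert in certs:
        if not cert.get("verified"):
            continue  # only adjudicated knowledge enters training
        modules.append({
            "prompt": cert["problem"],
            "completion": cert["solution"],
        })
    return modules


certs = [
    {"problem": "reduce X to Y", "solution": "via mapping f", "verified": True},
    {"problem": "speculative claim", "solution": "unchecked", "verified": False},
]
print(compile_certificates(certs))  # only the verified pair survives
```

The filter is the whole point: because unverified material never reaches the fine-tuning set, each bounded training increment can only move the base model toward the adjudicated corpus.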
Attention Modifiers (Token-Economy Optimization):
Low-level control over attention masks, routing heuristics, and context allocation reduces compute by forcing the model to attend only to causally necessary information. This implements “possibility filters” and “truth-reciprocity filters” inside the attention mechanism.
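A toy version of such a "possibility filter" over an attention mask: tokens not flagged as causally necessary are zeroed out before attention is computed, so no compute is spent on them. The token list and the notion of a precomputed "necessary" set are assumptions of this sketch; real masking happens inside the attention kernels.

```python
# Toy possibility filter over an attention mask: 1 = attend, 0 = ignore.
# The model can only allocate attention to causally necessary tokens.
from typing import List, Set


def possibility_mask(tokens: List[str], necessary: Set[str]) -> List[int]:
    """Build a binary attention mask keeping only necessary tokens."""
    return [1 if tok in necessary else 0 for tok in tokens]


tokens = ["the", "invoice", "total", "is", "42", "thanks"]
mask = possibility_mask(tokens, necessary={"invoice", "total", "42"})
print(mask)  # → [0, 1, 1, 0, 1, 0]
```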
The core innovation is the formalization of machine decidability: a method for converting the human concepts of possibility, testifiability, reciprocity, and liability into computable constraints the model must satisfy before it may emit a result. This converts an LLM from a probabilistic language model into a constrained reasoning engine with predictable failure bounds.
The theory behind machine decidability requires careful study, but the implementation becomes relatively direct once the dependency graph—protocols → constraints → corpus → certificates → training—is understood.
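The emission rule described above can be sketched as a single gate: a result may only be emitted once every computable constraint evaluates true, and a failed check blocks rather than guesses. The predicate names mirror the four concepts in the post but are otherwise hypothetical.

```python
# Minimal sketch of a decidability gate: possibility, testifiability,
# reciprocity, and liability become computable preconditions on emission.
from typing import Dict


def decidability_gate(result: str, checks: Dict[str, bool]) -> str:
    """Emit the result only if every constraint holds; otherwise block."""
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        # predictable failure bound: refuse to emit rather than guess
        raise RuntimeError(f"emission blocked; failed checks: {failed}")
    return result


checks = {"possible": True, "testifiable": True,
          "reciprocal": True, "liable": True}
print(decidability_gate("certified answer", checks))
```

This is the shape of the claimed conversion: the probabilistic model proposes, but only outputs that clear every constraint leave the system, giving failures a known form (a blocked emission) instead of an unbounded one (a confident wrong answer).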
Source date (UTC): 2025-12-02 18:29:13 UTC
Original post: https://twitter.com/i/web/status/1995923221678620911
It’s just recursion. It’s expensive. Otherwise everyone would do it. But yes, thank you for sharing and in the future pls continue. 🙂
Source date (UTC): 2025-11-30 08:53:56 UTC
Original post: https://twitter.com/i/web/status/1995053670929690910