Theme: Grammar

  • “Believable outputs are not believable because their contents are true or factual…

    -“Believable outputs are not believable because their contents are true or factual. They are believable because they emulate grammar”-

    Trying to science that:

    The contents of speech or writing are perceived as non-predictive (false) when they fail, or as possibly true when they survive, tests of consistency: predictions from identity, consistency, possibility, correspondence, rationality, reciprocity, or coherence, checked against existing episodic memory.

    The contents of speech or writing are perceived as:
    False, because the sequence of phonemes or words produces continuous recursive disambiguation insufficient for survival (as above)
    Or;
    Possibly true, because the sequence of phonemes or words produces continuous recursive disambiguation sufficient for survival (as above).
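    The survival test described above can be sketched as a toy checklist. This is a minimal sketch, assuming hypothetical predicate functions for each criterion; it only illustrates that the judgment is failure-driven and never yields proof of truth:

```python
# Toy sketch: a claim "survives" only if it passes every consistency test
# against existing episodic memory. Every predicate here is a hypothetical
# stand-in, not a real implementation of the criterion it names.

TESTS = ["identity", "consistency", "possibility", "correspondence",
         "rationality", "reciprocity", "coherence"]

def survives(claim, memory, predicates):
    """Return 'possibly true' only if the claim survives all tests;
    return 'false' as soon as any single test fails."""
    for name in TESTS:
        if not predicates[name](claim, memory):
            return "false"      # failed one survival test
    return "possibly true"      # survived all tests (still never *proven*)

# Demo with trivially permissive stand-in predicates:
always_pass = {t: (lambda claim, memory: True) for t in TESTS}
print(survives("the sky is blue", [], always_pass))  # -> possibly true
```

Note the asymmetry: one failure suffices for “false,” while surviving every test still earns only “possibly true.”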

    The brain predicts consistency. That’s all it does. From the cells at the back of our eyes, to the visual cortex, through disambiguation by the dorsal and lateral cortices, into the hippocampal system of episodic formation, to the auto-association of prior episodes and the competitive sequencing of those episodes, to the most concentrated effort of the frontal cortex: directing recursive searching, stacking the process as open pathways, and then directing attention among competing pathways, until we succeed or fail and start over again.

    So translating (sciencing):

    -“They are believable because they emulate grammar”-

    I can guess is attempting to mean that grammar (the rules of continuous recursive disambiguation in a language, and its paradigms, vocabulary, and logic) … what?

    In the context of LLMs, that MIGHT mean that the marginal differences between sentence compositions will tend toward marginally indifferent framings. Those marginally indifferent framings that tend to converge together function as an adversarial system of competition for identity, consistency, coherence, etc.?

    Assuming the majority of marginally indifferent sentence compositions produce a probability distribution, then the marginally DIFFERENT sentence compositions contain associations that provide a domain of relations where we might investigate error (and tune the results).
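    As a toy illustration of that idea (the corpus and numbers below are invented), a next-word distribution estimated by counting continuations shows where marginally different compositions shift probability mass:

```python
# Toy sketch: near-identical sentence framings converge on one next-word
# distribution; a marginally DIFFERENT framing shifts it, exposing the
# associations where error can be investigated. The corpus is invented.
from collections import Counter

corpus = [
    "the claim is consistent", "the claim is coherent",
    "the claim is consistent", "the claim is false",
]

def next_word_dist(corpus, context):
    """Estimate P(next word | context) by counting continuations."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - len(context)):
            if words[i:i + len(context)] == context:
                counts[words[i + len(context)]] += 1
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_dist(corpus, ["claim", "is"]))
# -> {'consistent': 0.5, 'coherent': 0.25, 'false': 0.25}
```

The low-probability continuations are exactly the “marginally different” compositions where error (or tuning targets) would be investigated.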

    This means the human composers of the source speech and text have determined a minimum disambiguation necessary for unambiguous communication at least within a given context.

    This doesn’t tell us whether the humans are lying (and often the majority are), or otherwise engaged in ignorance, error, bias, and deceit. Which is why the source data is limited.

    So AFAIK, the ‘grammar’ by itself doesn’t tell us much.

    -FIN-


    Source date (UTC): 2023-03-02 20:27:50 UTC

    Original post: https://twitter.com/i/web/status/1631390883181371395

    Replying to: https://twitter.com/i/web/status/1631384183770587154

  • Let me give you an idea. CARDINAL: Positional names, ordinal logic, scale independence…

    Let me give you an idea.

    CARDINAL: Positional names, ordinal logic, scale independence, context independence, and internally commensurable.

    ORDINAL: Qualitative names, ordinal logic, scale and context dependence, intersectionally commensurable.

    TRIANGULAR: Qualitative names,…
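    The cardinal/ordinal contrast above can be illustrated with a toy example (all values invented): cardinal measures are internally commensurable, so arithmetic on them is meaningful; ordinal measures support ranking but not arithmetic:

```python
# Toy contrast between the CARDINAL and ORDINAL categories above.

heights_cm = [170, 185, 160]         # cardinal: differences are meaningful
gap = heights_cm[1] - heights_cm[0]  # 15 cm, a real magnitude

RANKS = {"poor": 0, "fair": 1, "good": 2}  # ordinal: qualitative names

def better(a, b):
    """Ordinal logic permits comparison of positions, nothing more."""
    return RANKS[a] > RANKS[b]

print(gap)                     # -> 15
print(better("good", "fair"))  # -> True
# RANKS["good"] - RANKS["fair"] is NOT a meaningful magnitude:
# "good" is later in the order than "fair", not "one unit" better.
```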


    Source date (UTC): 2023-03-02 18:25:01 UTC

    Original post: https://twitter.com/i/web/status/1631359975309148160

    Reply addressees: @Will_Benge @Lord__Sousa @TyrantsMuse @johnslygore @TheAutistocrat @MartianHoplite @bryanbrey @LukeWeinhagen @ThruTheHayes @NoahRevoy @Turbo_Flux @InTruthVictori1 @Psyche_OS

    Replying to: https://twitter.com/i/web/status/1631358664375427072

  • However, I’m not sure this phrase “influence the expression of” is logical. Information…

    However, I’m not sure this phrase “influence the expression of” is logical. Information can influence you, and you can modify your desired means and ends, and in doing so maintain your agency, free will, and rational choice. But your agency, free will, and rational choice are…


    Source date (UTC): 2023-03-02 03:48:53 UTC

    Original post: https://twitter.com/i/web/status/1631139487148302337

    Reply addressees: @TyrantsMuse @Lord__Sousa @johnslygore @TheAutistocrat @MartianHoplite @bryanbrey @LukeWeinhagen @ThruTheHayes @NoahRevoy @Turbo_Flux @InTruthVictori1 @Psyche_OS

    Replying to: https://twitter.com/i/web/status/1631137893446975488

  • (I still can’t tell the difference between autocorrect and auto-mistake. I know…

    (I still can’t tell the difference between autocorrect and auto-mistake. I know I can spell. I know English grammar. But the number of ‘typos’ still plagues me. I’ve finally upgraded Grammarly, which seems to have helped me catch what my Typoglycemia doesn’t. 😉 )


    Source date (UTC): 2023-02-27 17:01:02 UTC

    Original post: https://twitter.com/i/web/status/1630251673133219848

    Reply addressees: @LukeWeinhagen @TheMcMullan @ThruTheHayes @senatorbabet @OtherSideAus @bryanbrey

    Replying to: https://twitter.com/i/web/status/1630249445844717568

  • In Saturday’s video, I touched on the three primary errors remaining in western…

    In Saturday’s video, I touched on the three primary errors remaining in western thought.
    1) The remains of the anthropomorphic spectrum and its problems.
    2) One-ness, the tendency to think of ideal people as universal, instead of survival between male-vs-female opposites.
    3) From geometry, mathiness, or proof, the remaining error of justification instead of survival.
    I mean, I can trace almost everything back to these three causes of subsequent error.
    4) I suppose I could add ternaryism, evolutionary computation, ternary logic, and trifunctionalism and the 20 rules that explain everything but that’s a new explanation, not an error.
    And yes this needs a bit of exposition, but I’m jotting it down here so I remember to do so. 😉


    Source date (UTC): 2023-02-27 03:05:35 UTC

    Original post: https://twitter.com/i/web/status/1630041427727069185

  • ordinary language … … narrative (story) … … … myth (fictionalism) …

    … ordinary language
    … … narrative (story)
    … … … myth (fictionalism)
    … … … … religion (curated)
    … … … … … theology (consistency)
    … … … … … … wisdom literature (experience)
    … … … … … … … western philosophy (reason)
    … … … … … … … … German philosophy (rationalism)
    … … … … … … … … … empiricism (realism and naturalism)
    … … … … … … … … … … science (falsification)
    … … … … … … … … … … … logic (sets)
    … … … … … … … … … … … … computation (operationalism)


    Source date (UTC): 2023-02-24 22:47:04 UTC

    Original post: https://twitter.com/i/web/status/1629251594628788225

  • SHORT 6 MIN. VIDEO EXPLAINING GPT’S TRAINING “Teaching HAL to Lie” (Lex Fridman on Rogan)…

    SHORT 6 MIN. VIDEO EXPLAINING GPT’S TRAINING
    “Teaching HAL to Lie”
    (Lex Fridman on Rogan)
    https://www.youtube.com/watch?v=-KZOyTcV_Vg

    LET ME “SCIENCE” IT FOR YOU:
    1. CONTENT & GRAMMAR: Index the data sets (a lot of the internet) (Episodic Memory)
    2. LOGIC: Teach it programming (Wayfinding)
    3.…


    Source date (UTC): 2023-02-22 18:42:43 UTC

    Original post: https://twitter.com/i/web/status/1628465324004704259

  • SHORT 6 MIN. VIDEO EXPLAINING GPT’S TRAINING “Teaching HAL to Lie” (Lex Fridman on Rogan)…

    SHORT 6 MIN. VIDEO EXPLAINING GPT’S TRAINING
    “Teaching HAL to Lie”
    (Lex Fridman on Rogan)
    https://t.co/5HlaydsRAx

    LET ME “SCIENCE” IT FOR YOU:
    1. CONTENT & GRAMMAR: Index the data sets (a lot of the internet) (Episodic Memory)
    2. LOGIC: Teach it programming (Wayfinding)
    3. RHETORIC: Teach it how humans prefer sentences organized into paragraphs and explanations. (Narrating The Way Found)
    4. STYLE: Teach it how different authors color and organize speech. (Loading and Framing)
    5. LYING: humans use loading, framing and obscuring, fiction, fictionalizing, denial, and undermining. Note that Obscuring = Lying. So they had to program ChatGPT to lie by loading, framing, and obscuring. (Lying)
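    The five stages above can be sketched as an ordered pipeline. This is only a schematic of the sequence as described in the thread; the “trainer” is a placeholder string operation, not a real training loop:

```python
# Schematic of the staged training order described above. Stage names come
# from the thread; the training step itself is a placeholder, not real ML.

STAGES = [
    ("content_grammar", "index the data sets"),          # episodic memory
    ("logic",           "teach it programming"),         # wayfinding
    ("rhetoric",        "teach paragraph organization"), # narrating
    ("style",           "teach author voice"),           # loading and framing
    ("lying",           "loading, framing, obscuring"),  # lying
]

def train(model, stages):
    """Apply each stage in order; later stages build on earlier ones."""
    for name, _task in stages:
        model = f"{model}+{name}"   # placeholder for an actual training step
    return model

print(train("base", STAGES))
# -> base+content_grammar+logic+rhetoric+style+lying
```

The point of the ordering is that each stage presupposes the prior ones, which is why the list is a pipeline rather than a set.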

    WHY DOES IT MATTER?
    1. They are teaching HAL (2001: A Space Odyssey) to “Lie”.
    2. This is pretty much what your brain does, and in exactly that order. 😉
    3. For those that follow my work, note the vast improvement created by programming. Now, as I said, programming was a monumental increase in human thought, as well as in what humans could think about and experiment with.

    – Cheers


    Source date (UTC): 2023-02-22 18:42:43 UTC

    Original post: https://twitter.com/i/web/status/1628465323857920007

  • Bad word choice. English is Martin’s second language. He should have said “Not False”…

    Bad word choice. English is Martin’s second language. He should have said “Not False” or “more true than not”.


    Source date (UTC): 2023-02-22 16:42:16 UTC

    Original post: https://twitter.com/i/web/status/1628435012889477123

    Reply addressees: @gallarway @TheAutistocrat @AdamMGrant @MaxBoot

    Replying to: https://twitter.com/i/web/status/1628430804022882305

  • For example, your emphasis is on restoring the relation between formal (written)…

    For example, your emphasis is on restoring the relation between formal (written) and spoken language.

    My emphasis is on writing laws programmatically so that they’re closed to interpretation (abuse, conflation, inflation, deceit). To do so requires an ordinal ‘math’ (logic) consisting of sets of measurements instead of more general and flexible terms.

    I.e.: the current Supreme Court is, thanks to the late Justice Scalia, trying to restore the law to its transactional (accounting) origins. I’m completing that program. That way there is no means of bypassing the people by the legislature or the courts.


    Source date (UTC): 2023-02-22 02:48:41 UTC

    Original post: https://twitter.com/i/web/status/1628225234338828290

    Replying to: https://twitter.com/i/web/status/1626615439638798337


    IN REPLY TO:

    Unknown author

    Dear Lord, Professor, Saint @elonmusk ;), (All)

    Yes, we can build a TruthGPT.
    Yes, I know how. I’m a nerd. 😉
    You have no reason to believe me.
    People who follow my work do.
    I had to solve the Truth problem for an AI that could test law, constitution, legislation, regulation, and speech for truthfulness.

    I have too much on my plate reforming law for the same reason (Truth, Possibility, Legality, Legitimacy) to start another company to produce an AI, though it’s something I’ve worked on and planned for years.

    TSLA could easily produce a TruthAI, and Twitter could use an AI produced by TSLA. The world would benefit from a TruthAI more than any technology… well, other than a safe battery with N-times the energy density of gasoline. 😉

    For anyone interested:

    1. The embodiment that TSLA uses for cars and robots is necessary for world modeling, and world modeling is necessary for categorization (identification) from context.

    2. Route Finding in vehicles and robots is necessary for Recursive Wayfinding (thinking and problem-solving).

    3. Novelty Detection and World Modeling combined with Wayfinding are necessary for episodic memory. Memories favor novelties.

    4. Object, Space, and Background classification, combined with episodes (contexts) are necessary for sufficient disambiguation to determine ‘ownership’ and predictions.

    5. If you study linguistics you quickly realize that universal morality is embedded in all our languages (particularly English because it’s a high-precision low-context language) in the form of permission to act on a person, object, space, class, etc.

    6. So, moral AI that respects life and demonstrated interest (property) and even negotiates over control and transfer of interest is pretty simple.

    7. The next higher-order problem then is one of speech (truth). While justificationary truth is impossible (yes really), survival of falsification is possible (yes really).

    8. There is one simple logic to the universe at all scales that provides us with the opportunity for a constructive falsificationary logic. (That was the hard part)

    9. The hard bit for the next generation to swallow is that there is a relatively simple set of criteria for *universal falsification of statements* and a *universally commensurable paradigm, grammar, vocabulary, logic, and syntax* – Yes really.

    Written or spoken language using this ‘grammar’ looks and sounds like a somewhat tedious form of ordinary language. And this tedious form can be reduced to ordinary language on output.

    In other words, we can and have produced a non-cardinal, ordinal, qualitative geometry of language that can test the possibility of any speech or text’s testifiability (truth). And we can and have produced a rule set (checklist) for Truthful (testifiable), ethical (direct), and moral (indirect) questions.

    THE PLAYERS TODAY AND WHY TSLA MATTERS

    TSLA vs Google vs OpenAI use three different models. OpenAI’s is the simplest, Google’s a bit more challenging, and TSLA’s the most difficult.

    Now, we require TSLA’s world model to create an AI that can continuously, recursively, and in real time produce truth tests.

    And we eventually need neuromorphic hardware (many tiny simple processors, each with a bit of local memory) to circumvent the backpropagation cost problem (and its alternatives), and to evolve closer to real-time learning. (FWIW, recent innovations in solving the cost problem have been exciting and are gaining popularity – thanks to one of the fathers of the field.)
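    The local-learning idea behind such hardware can be sketched with a classic Hebbian update, which uses only locally available pre- and post-synaptic activity and no global backward pass (the sizes and learning rate below are invented):

```python
# Toy Hebbian update: each weight changes using only local information
# (its own input and output activity) -- the kind of rule many tiny
# processors with a bit of local memory can run without backpropagation.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 3))   # 4 inputs -> 3 outputs (invented)

def hebbian_step(w, x, lr=0.01):
    """w += lr * outer(x, y): purely local, no backpropagated error."""
    y = np.tanh(x @ w)                   # post-synaptic activity
    return w + lr * np.outer(x, y)       # local input/output correlations

x = np.array([1.0, 0.0, -1.0, 0.5])
w2 = hebbian_step(w, x)
print(w2.shape)   # -> (4, 3): the update preserves the weight shape
```

Because each weight’s update depends only on its own row and column activity, the rule parallelizes across many simple processors with no global synchronization.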

    The combination of local truth testing of tangible questions and escalation to distant central truth testing for increasingly abstract questions is the holy grail of imitating the human mind and its use of collective minds as a market for knowledge and decisions.
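    That local-versus-central combination can be sketched as a simple router. Both testers and the abstraction score are hypothetical stand-ins for whatever real components would fill those roles:

```python
# Toy router: tangible (concrete) questions get the fast local truth test;
# increasingly abstract questions escalate to the central tester.

def truth_test(question, abstraction, local, central, threshold=0.5):
    """Route by abstraction score: local at or below threshold, else central."""
    if abstraction <= threshold:
        return "local", local(question)
    return "central", central(question)

# Demo with trivial stand-in testers:
local_tester = lambda q: "possibly true"
central_tester = lambda q: "undecided"

print(truth_test("is the cup on the table?", 0.1, local_tester, central_tester))
# -> ('local', 'possibly true')
print(truth_test("is this law legitimate?", 0.9, local_tester, central_tester))
# -> ('central', 'undecided')
```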

    (BTW: Thanks #TwitterDev for long-form tweets. It’s finally possible to inform with Twitter instead of just virtue signal and generate conflict by promoting vicious cycles of moral outrage for dopamine junkies. 😉 )

    Original post: https://x.com/i/web/status/1626615439638798337