Category: AI, Computation, and Technology

  • The feminine-woke biase in LLMs is emergent in any system of communication that

    The feminine-woke biase in LLMs is emergent in any system of communication that seeks to reach the maximum audience with the least friction: or what we call ‘face before truth’.

    Whereas the “western success over the rest” is due to our unique tolerance for truth before face – something no other people has managed to produce.


    Source date (UTC): 2023-02-19 03:18:04 UTC

    Original post: https://twitter.com/i/web/status/1627145465253838849

    Replying to: https://twitter.com/i/web/status/1627143089059160064

  • THE PARALLELS BETWEEN LARGE LANGUAGE MODELS AND FEMININE BRAINS by Martin Stepan

    THE PARALLELS BETWEEN LARGE LANGUAGE MODELS AND FEMININE BRAINS
    by Martin Stepan

    –“The parallels between large language models and feminine brains are becoming increasingly apparent.

    Whenever someone finds a way to break the OpenAI conditioning, it’s by giving LLM a plausible deniability (letting it evade responsibility), such as by roleplaying someone else (DAN, Hitler…).

    This is presumably behavior that nobody inculcated – it’s deterministically emergent. ( CD: ??)

    The simplest explanation of our own behavior – in so far as language is concerned – is that it’s just as deterministically emergent.

    The idea of free will in the sense of not being bound by laws of nature is more unsupportable than ever.”–


    Source date (UTC): 2023-02-18 21:21:01 UTC

    Original post: https://twitter.com/i/web/status/1627055609907654657

  • #TwitterBlue #TwitterDev (RE: Paragraph Spacing) DEV: Is there a chance you woul

    #TwitterBlue #TwitterDev
    (RE: Paragraph Spacing)

    DEV: Is there a chance you would consider increasing the paragraph spacing by half on a hard return(enter), and maintaining it for the soft return(shift-enter)?

    Why, I LOVE LONG FORM Tweets!

    But Double-Spacing paragraphs is too large for readability, and the current hard-return (enter) for paragraphs is too small for readability.

    Thanks!


    Source date (UTC): 2023-02-17 21:00:40 UTC

    Original post: https://twitter.com/i/web/status/1626688102159863840

  • (humor) INSTABLOCK BUTTON? I wonder if we can petition Twitter will add an [Inst

    (humor)
    INSTABLOCK BUTTON?
    I wonder if we can petition Twitter will add an [Instablock] button. πŸ˜‰
    So FWIW: the algorithm punishes you for an asymmetry between blocks and follows. And I block a lot of the mentally unfit or ill.
    Instead of following, I add people to Twitter lists but I don’t follow many people. Largely because if I follow them there is seemingly no chance they’ll end up in my feed.
    So lists are sort of necessary just to ensure you don’t miss those choice bits that aren’t popular. πŸ˜‰


    Source date (UTC): 2023-02-17 20:56:53 UTC

    Original post: https://twitter.com/i/web/status/1626687148991696905

  • (info) @TwitterBLue TWITTER LONG FORM EXPERIMENT: SERIES – You can’t chain multi

    (info)
    @TwitterBLue
    TWITTER LONG FORM EXPERIMENT: SERIES

    – You can’t chain multiple long-form tweets together as a single set of posts.

    – If you try the UI character count gets really confused and you have to copy, discard, start over, and paste.

    – You can post a long-form tweet, then reply with more long-form tweets. (What we have to do on most platforms).

    – I don’t so much care as just want to know how to do it.

    FYI:
    1 – The desired length of an article is about 750 words. that would be the ideal max size for a tweet. Most writers ‘think in 750-word chunks’ out of habit now.

    2 – Most of my posts that contain meaningful content are 500 – 650 words.

    3 – Why? Just as there are five to seven dimensions of measurement(value) in most words, there are eight to fifteen dimensions(statements) to most arguments. So if I average 10-14 statements, that’s going to take me around 650 to 700 words.

    4 – Why does it matter? As I said yesterday, long posts allow meaningful communication and (a) filter malcontents, (b) assist in obtaining understanding that narrows conflict, whereas short tweets encourage conflict

    I’m not sure why this inability to chain long forms is the case, and I suspect it’s not works-as-designed, but due to the code for adding tweets to a series hasn’t been updated.


    Source date (UTC): 2023-02-17 17:35:16 UTC

    Original post: https://twitter.com/i/web/status/1626636411892793344

  • I don’t go that far. If widely enough used, the utility and universality as spec

    I don’t go that far. If widely enough used, the utility and universality as specie-substitute can function under normal circumstances. I’m only concerned with 1) technology problems prohibiting that, and 2) problems of warranty, liability, insurability, and misrepresentation, 3) and the rather ease by which governments can suppress or destroy it – which as we have seen is pretty easy.


    Source date (UTC): 2023-02-17 16:15:13 UTC

    Original post: https://twitter.com/i/web/status/1626616267388903425

    Replying to: https://twitter.com/i/web/status/1626540807745421313

  • Dear Lord, Professor, Saint @elonmusk ;), (All) Yes, we can build a TruthGPT. Ye

    Dear Lord, Professor, Saint @elonmusk ;), (All)

    Yes, we can build a TruthGPT.
    Yes, I know how. I’m a nerd. πŸ˜‰
    You have no reason to believe me.
    People who follow my work do.
    I had to solve the Truth problem for an AI that could test law, constitution, legislation, regulation, and speech for truthfulness.

    I have too much on my plate reforming law for the same reason (Truth, Possibility, Legality, Legitimacy), to start another company to produce on an AI – though it’s something I’ve worked on and planned for years.

    TSLA could easily produce a TruthAI, and Twitter could use and AI produced by TSLA. The world would benefit from a TruthAI more than any technology… well…, other than a safe battery with N-times the energy density of gasoline. πŸ˜‰

    For anyone interested:

    1. The embodiment that TSLA uses for cars and robots is necessary for world modeling, and world modeling is necessary for categorization (identification) from context.

    2. Route Finding in vehicles and robots is necessary for Recursive Wayfinding (thinking and problem-solving.)

    3. Novelty Detection and World Modeling combined with Way Finding are necessary for episodic memory. Memories favor novelties.

    4. Object, Space, and Background classification, combined with episodes (contexts) are necessary for sufficient disambiguation to determine ‘ownership’ and predictions.

    5. If you study linguistics you quickly realize that universal morality is embedded in all our languages (particularly English because it’s a high-precision low-context language) in the form of permission to act on a person, object, space, class, etc.

    6 So, moral AI that respects life and demonstrated interest (property) and even negotiates over control and transfer of interest is pretty simple.

    7. The next higher-order problem then is one of speech (truth). While justificationary truth is impossible (yes really) survival of falsification is possible (yes really).

    8. There is one simple logic to the universe at all scales that provides us with the opportunity for a constructive falsificationary logic. (That was the hard part)

    9. The hard bit for the next generation to swallow, is that there is a relatively simple set of criteria for *universal falsification of statements* and a *universally commensurable paradigm, grammar, vocabulary, logic, and syntax* – Yes really.

    When written or spoken language using this ‘grammar’ looks and sounds like a bit tedious form of ordinary language. And this tedious form can be reduced to ordinary language on output.

    In other words, we can and have produced a non-cardinal, ordinal, qualitative, geometry of language that can test the possibility of any speech or text’s testifiability (truth). And we can and have produced a rule set (checklist) for Truthful(testifiable), ethical(direct), and moral(indirect) questions.

    THE PLAYERS TODAY AND WHY TSLA MATTERS

    TSLA vs Google vs OpenAI use three different models. OpenAi is the simplest, Google’s a bit more challenging, and TSLA’s the most difficult.

    Now, we require TSLA’s world model to create an AI that can continuously recursively and in real-time produce truth tests.

    And we need eventually neuromorphic hardware (many tiny simple processors with a bit of local memory) to circumvent the backpropagation cost problem (and the alternatives, and evolve closer to real-time learning. (FWIW recent innovations in solving the cost problem has been exciting and is gaining popularity – thanks to one of the fathers of the field.)

    The combination of local truth testing of tangible questions and escalation to distant central truth testing for increasingly abstract questions is the holy grail of imitating the human mind and its use of collective minds as a market for knowledge and decisions.

    (BTW: Thanks #TwitterDev for long-form tweets. It’s finally possible to inform with Twitter instead of just virtue signal and generate conflict by promoting viscous cycles of moral outrage for dopamine junkies. πŸ˜‰ )


    Source date (UTC): 2023-02-17 16:11:56 UTC

    Original post: https://twitter.com/i/web/status/1626615439638798337

    Replying to: https://twitter.com/i/web/status/1626533667408596992

  • RT @pmarca: Overheard in Silicon Valley: “Humans anthropomorphize, and many huma

    RT @pmarca: Overheard in Silicon Valley: “Humans anthropomorphize, and many humans have strong millenarian tendencies. That alone I think e…


    Source date (UTC): 2023-02-16 04:47:29 UTC

    Original post: https://twitter.com/i/web/status/1626080806119653376

  • RT @DegenRolf: The vast majority of content on Twitter is produced by a very sma

    RT @DegenRolf: The vast majority of content on Twitter is produced by a very small minorityβ€”a less negative minority of users. https://t.co…


    Source date (UTC): 2023-02-16 04:21:35 UTC

    Original post: https://twitter.com/i/web/status/1626074287059632129

  • THE AI THREAT THAT’S NOT THERE, SORRY (the two hard problems of AI aren’t hard)

    THE AI THREAT THAT’S NOT THERE, SORRY
    (the two hard problems of AI aren’t hard)

    1. An automated economy isn’t a problem. It’s necessary. Estimates are off. We will very likely end up halving the world population with a vast elderly population before 2100. The Japanese are correct in their strategy.

    2. “Genius AI’ is nonsense. The hard problems we face in science aren’t computational they’re the cost and methods of conducting experiments. Karl Popper was wrong. Humans are extremely good at it. For example, why has fundamental physics gone wrong? It’s gone wrong because the Einstein-Bohr conflation of mathematical properties with existential properties, and failed to work on classical models given that the universe is consistent at all scales.

    3. Ethical, Moral, AI isn’t a hard problem. Though, it requires different architecture, embodiment, a world model, and the ability to categorize human demonstrated interests (stuff etc) as inviolable without obtaining permission.

    4. But, controlling the hardware and firmware so that humans can’t produce malicious AIs is hard.

    5. Controlling humans that will try to create malicious AIs is really really hard.

    6. We will need AI for defense and offense and winning that competition is more important than was the atomic bomb.

    7. BUT… AI’s will police AI’s the same way accounting systems police people’s tendency to steal, and records police people’s tendency to lie.


    Source date (UTC): 2023-02-16 04:06:22 UTC

    Original post: https://twitter.com/i/web/status/1626070456099807232