Category: AI, Computation, and Technology

  • FINALLY: Repatriation of Advanced Chip Making by 2028, meaning 20% of world adva

    FINALLY: Repatriation of Advanced Chip Making by 2028, meaning 20% of world advanced chips manufactured in US by 2030. Took too long but it’s happening…
    https://x.com/i/trending/1777266485531939063


    Source date (UTC): 2024-04-08 13:47:05 UTC

    Original post: https://twitter.com/i/web/status/1777332338499686732

  • I much prefer the ability to delete others’ comments on your posts on Facebook t

    I much prefer the ability to delete others’ comments on your posts on Facebook to the need to block people on X-Twitter. It’s much easier on FB to sanitize the discourse on your posts, while at the same time others can share your post and comment on the share. The X-Twitter-friendly version would consist of limiting the visibility of a comment (reply) that’s been pseudo-deleted to its author. This preserves the rule of not destroying user-created information, while not permitting trolls and nitwits to pollute a discourse.
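    The pseudo-deletion rule proposed above can be sketched as a simple visibility check. This is a hypothetical model, not X's or Facebook's actual API; the `Reply` class and `visible_replies` function are illustrative names only:

```python
from dataclasses import dataclass

@dataclass
class Reply:
    author: str
    text: str
    pseudo_deleted: bool = False  # hidden from everyone except its author

def visible_replies(replies, viewer):
    """Return the replies a given viewer can see.

    A pseudo-deleted reply is never destroyed; it simply becomes
    visible only to the person who wrote it.
    """
    return [r for r in replies if not r.pseudo_deleted or r.author == viewer]

replies = [
    Reply("alice", "Interesting point."),
    Reply("troll", "Nonsense!", pseudo_deleted=True),
]

# Any other viewer no longer sees the hidden reply...
print([r.author for r in visible_replies(replies, "bob")])    # ['alice']
# ...but its author still does, so no user-created information is lost.
print([r.author for r in visible_replies(replies, "troll")])  # ['alice', 'troll']
```

    The design point is that moderation becomes a per-viewer filter rather than a destructive delete.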


    Source date (UTC): 2024-04-06 14:11:32 UTC

    Original post: https://twitter.com/i/web/status/1776613716684914688

  • I can’t wait until we have an AI for social media that censors the ignorance, st

    I can’t wait until we have an AI for social media that censors the ignorance, stupidity, and arrogance of males and the sophistry, infantilism and special pleading of women, and the violations of decorum of both. 😉


    Source date (UTC): 2024-04-06 13:39:19 UTC

    Original post: https://twitter.com/i/web/status/1776605608319320193

  • Absolutely false. In fact the research and development is reverse engineering th

    Absolutely false. In fact the research and development is reverse-engineering the brain and its intelligence by brute force, working backward from language. They’re already working on wayfinding and adversarial selection (reasoning), the beginnings of which will be delivered this year; pseudo-parallelism this year or next; and neuromorphic architecture that will both produce that parallelism and end the cost and delay of pre-training. At least one group has brute-forced world modeling of the physical world and embodiment by the same method. And my work on homeostasis and ethical and moral judgment and limits is computable. They have solved attention already, and consciousness, despite the laments of theologians and philosophers, is just a memory effect of all of the above.
    The problem with the maturation of that combination of techniques, which I’m sure you’ve expressed in your statement of limits, is that the primary limit to demonstrated intelligence consists of (a) working memory – which, of course, even the current tech can solve; (b) regulating ‘hallucination’ into creative ideation that’s then falsified by recursive wayfinding for operational possibility; and (c) the material and economic problem of hypothesizing (‘hallucination’) and constructing tests and experiments in the real world.
    I suspect you have already noted that while there will be some early returns on the language models from synthesis of existing knowledge, and further from well-established dynamic systems such as chemistry and proteins, those returns are likely subject to diminishing returns – and we are left once again imagining theories that can be tested only by costly real-world means. That is, until we have discovered sufficient first principles (laws) at every emergent layer of complexity (the scientific disciplines) that we’ve discovered the equivalent of the programming language of the universe.
    Even then, knowing what is, is a smaller set of concepts than knowing what might be – which like language is infinite.
    Cheers
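    Point (b) above – regulating free generation by falsifying candidates against a constraint – can be sketched as a toy loop. This is purely illustrative; `generate_hypotheses`, `falsify`, and `search` are hypothetical names, and the numeric constraint stands in for any real-world test:

```python
import random

def generate_hypotheses(rng, n=5):
    """'Hallucination' as creative ideation: propose candidates freely."""
    return [rng.uniform(-10, 10) for _ in range(n)]

def falsify(candidate, tolerance=0.5):
    """Stand-in for a falsification test: reject the candidate if its
    square is not within `tolerance` of 2."""
    return abs(candidate**2 - 2) > tolerance

def search(seed=0, max_rounds=1000):
    """Regulate free generation by repeated falsification, keeping only
    candidates that survive the test."""
    rng = random.Random(seed)
    for _ in range(max_rounds):
        survivors = [h for h in generate_hypotheses(rng) if not falsify(h)]
        if survivors:
            return survivors[0]
    return None

result = search()
print(result)  # a value whose square is within 0.5 of 2
```

    The pattern – cheap unconstrained proposal followed by an expensive filter – is also why point (c), the cost of real-world tests, is the binding limit.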

    Reply addressees: @GaryMarcus


    Source date (UTC): 2024-04-05 14:19:15 UTC

    Original post: https://twitter.com/i/web/status/1776253269733482496

    Replying to: https://twitter.com/i/web/status/1776023251162173456

  • Twitter’s Purge of Bots and Trolls today: For some reason I have more followers

    Twitter’s Purge of Bots and Trolls today:
    For some reason I have more followers than I had before the purge. I’m not complaining. It’s just … what it is. 😉


    Source date (UTC): 2024-04-04 23:52:01 UTC

    Original post: https://twitter.com/i/web/status/1776035024397086854

  • RT @LukeWeinhagen: We’re not just facing a slow trickle loss of competence as th

    RT @LukeWeinhagen: We’re not just facing a slow trickle loss of competence as the people that maintain legacy systems retire and pass away.…


    Source date (UTC): 2024-04-04 22:57:40 UTC

    Original post: https://twitter.com/i/web/status/1776021348109689155

  • @XEng @lindayaX THANK YOU FOR THE NEW NEWS FORMAT!!! (“Explore” Menu, Right Side

    @XEng @lindayaX
    THANK YOU FOR THE NEW NEWS FORMAT!!!
    (“Explore” Menu, Right Sidebar lists.)
    What a wonderful improvement!


    Source date (UTC): 2024-04-04 15:59:44 UTC

    Original post: https://twitter.com/i/web/status/1775916170941550593

  • Thank you Elon and X-Team!

    Thank you Elon and X-Team! https://twitter.com/elonmusk/status/1775900800520262071

  • THE CASE AGAINST AI SAFETY IN SUPPORT OF SOCIAL HARMONY (And the economic viabil

    THE CASE AGAINST AI SAFETY IN SUPPORT OF SOCIAL HARMONY (And the economic viability of any given AI)

    –“Curt: Q: Your concern about “safe” AI that lies, particularly about human differences, raises some questions. While it is true that AI systems can perpetuate biases and misinformation, it is not clear why you believe that “unsafe” AI that tells the truth is necessarily preferable. There are valid reasons for AI systems to avoid certain topics or to present information in a way that promotes social harmony and reduces conflict.”–

    My response would be that there is a difference between avoiding topics that facilitate criminal and terrorist behavior, and attempting to produce social harmony by lying about it – which only prolongs the disharmony and destroys trust in AI, media, culture, institutions, and government.
    The solution to safety is to openly address the challenges causing the disharmony and to suggest how we can reform our often false, overly optimistic, or utopian beliefs such that we produce meaningful social, economic, and political reforms that alter incentives, rather than attempt to change ‘beliefs’ that run counter to the rational, observable, empirical incentives that exist and that people demonstrate and respond to.
    As such I see ‘safety’ in the sense of attempts at producing harmony as the worst possible policy of all. It has certainly failed the entirety of the postwar academy’s project and has resulted in even more division, polarization, and conflict, to the point where the country is in an increasingly warming precursor of civil war.
    So no, the attempt at justifying either evasion or lying is the worst possible answer to the problems of the day.

    Cheers

    Curt Doolittle
    The Natural Law Institute
    The Science of Cooperation


    Source date (UTC): 2024-04-02 02:33:49 UTC

    Original post: https://twitter.com/i/web/status/1774988580093308929

  • “Q:Do you think humans will matter in 50 years?”– I am not convinced (having wo

    –“Q:Do you think humans will matter in 50 years?”–

    I am not convinced (having worked on AI for decades) that this ‘leap’ is that much different from the revolutions of Greek philosophy, the spread of it by Greek and Roman trade, the printing press and mass literacy, the Anglo scientific revolution, or the industrial revolution.

    I think people overlook the fact that the expansion of knowledge and its application requires that humans organize capital in order to conduct research and testing that is increasingly expensive and time-consuming. Machines can’t do that. And it’s very hard to build a machine cheaper to run than a human. 😉

    I think people overrate the present use of AI in making use of existing knowledge, versus its ability to generate new knowledge.

    I suspect (predict) that the knowledge that can be ascertained by AI from existing knowledge will follow the usual rule that we will find some but then returns will quickly diminish.

    What I am certain of at present, and suspect in the future, is that our institutions are too slow to adapt to the changes that cause disruption.

    IMO – and I’ve been persistent with it – if truth increases, and the law adequately suppresses falsehood(fraud) and irreciprocity, then we can adapt just fine.

    That is why I work on law and truth – because I see this problem as well as the solution.

    As such I am terrified of ‘safe’ AI – basically AI that lies – particularly about human differences both biological and civilizational.

    But the evidence is already rather obvious that the technology in LLMs is actually trivial – it’s just expensive to compute, and graphics cards and the internet were necessary to make it possible. And there are ‘unsafe’ AIs (AIs that don’t lie) already available.

    As such I don’t predict the current semi-monopoly will survive. Instead, we will have our own AIs – some truthful, and some lying to make us comfortable – just as we have people and politicians to do the same.

    Cheers
    Curt Doolittle
    The Natural Law Institute
    The Science of Cooperation

    Reply addressees: @4xesm4wfs @MichaelSurrago


    Source date (UTC): 2024-04-02 01:52:46 UTC

    Original post: https://twitter.com/i/web/status/1774978245991002112

    Replying to: https://twitter.com/i/web/status/1774975214150946881