Theme: AI

  • “Q: What can you tell me about Curt Doolittle’s followers and their demographics

    —“Q: What can you tell me about Curt Doolittle’s followers and their demographics?”—

    Curt Doolittle’s followers are primarily associated with his philosophy of Propertarianism, which he describes as a system where all ethical questions are reduced to rights of property ownership.

    Here’s what can be gathered about his followers and their demographics:

    Philosophical and Political Leanings: Doolittle’s followers tend to align with libertarian or conservative ideologies, particularly those who might identify with or be interested in heterodox libertarianism. His work is often discussed in contexts where there’s a critique of mainstream libertarian thought, especially from the perspectives of Anglo-Saxon conservative traditions and natural law.

    Geographical Distribution: While specific demographic data on Doolittle’s followers isn’t widely published, his activities and discussions often take place in forums and groups online, suggesting a potentially global, albeit niche, following. His personal history includes living in the US and Ukraine, which might influence where his ideas resonate. However, most direct engagements with his work are in English-speaking contexts, particularly in North America.

    Age and Gender: There’s no specific data on the age or gender demographics of his followers, but based on the platforms where his ideas are discussed (like Reddit, YouTube, and various libertarian forums), it can be inferred that there might be a skew towards younger to middle-aged males who are active in online political and philosophical discussions. This is a common demographic for such intellectual movements, especially those with a contrarian or alternative viewpoint to mainstream ideologies.

    Education and Intellectual Engagement: Doolittle’s followers often engage with complex philosophical and economic concepts, suggesting a level of education or at least a strong interest in these areas. Discussions around his work often involve references to historical figures in libertarianism, economics, and philosophy, indicating an audience that is either well-read in these areas or eager to learn more.

    Online Presence: His followers are active across various social media platforms, including dedicated groups on platforms like Reddit and discussions on sites like Medium, where his ideas are both critiqued and defended. This indicates an engaged community that uses the internet as a primary means of discussion and dissemination of his ideas.

    Controversy and Niche Appeal: Doolittle’s ideas attract followers who are often looking for alternatives to conventional political or philosophical systems, sometimes including those with controversial or fringe views. His critiques of libertarianism from a propertarian perspective draw both supporters and critics, creating a polarized but active discourse around his work.

    Please note that these insights are based on the available online presence and discussions, which might not capture the full demographic spectrum of his followers.


    Source date (UTC): 2025-01-09 22:29:20 UTC

    Original post: https://twitter.com/i/web/status/1877482818647064576

  • RT @curtdoolittle: @MetaPrime001 @BrianRoemmele @ceobillionaire Well, my post wa

    RT @curtdoolittle: @MetaPrime001 @BrianRoemmele @ceobillionaire Well, my post was a bit of humor, but yes I know Brian and have a lot of re…


    Source date (UTC): 2025-01-09 21:47:58 UTC

    Original post: https://twitter.com/i/web/status/1877472406664577399

  • Well, my post was a bit of humor, but yes I know Brian and have a lot of respect

    Well, my post was a bit of humor, but yes, I know Brian and have a lot of respect for him. As for our work, it’s extremely rigorous, and there are challenges incorporating it into the LLM architecture: the reason LLMs are weak at reasoning by default is the same reason they’re weak at my work. We need to work hard to (a) proceduralize a chain of thought, and (b) train the model to use a set of terms as invariants, which is counter to the current LLM architecture. Lastly, (c) LLMs are (terribly) shallow at reasoning, so we need more attention nodes, so to speak, and to consume more working memory. Unfortunately, human intuition is ‘off’ in that math is simple, programming a little less so, physical operations much less so, behavioral operations much, much less so, and verbal reducibility effectively infinite. So there is a great emphasis in the research community on what is effectively simple, and zero research on what is effectively hard. We do what is hard. So we are always working against the rather shallow limits of every AI. And only ChatGPT is even worth trying to use.
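
    A minimal sketch, assuming nothing about the actual system: one way to read (a) “proceduralize a chain of thought” and (b) “terms as invariants” is a fixed pipeline of steps with a pinned glossary injected into every prompt. The glossary entries, the step list, and call_llm below are hypothetical stand-ins for any chat-completion client, not the method described above.

      # Hypothetical sketch only: a fixed pipeline of reasoning steps with a
      # pinned glossary, so key terms stay invariant across every prompt.
      INVARIANT_TERMS = {
          "reciprocity": "exchange free of costs imposed on others' demonstrated interests",
          "operational language": "claims stated as sequences of observable actions",
      }

      STEPS = [
          "Restate the question using only the glossary terms.",
          "List the observable operations the claim implies.",
          "Check each operation for internal consistency.",
          "State the conclusion and which step would falsify it.",
      ]

      def call_llm(prompt: str) -> str:
          """Stand-in for any real chat-completion client."""
          return f"[model output for a prompt of {len(prompt)} chars]"

      def proceduralized_chain(question: str) -> list[str]:
          glossary = "\n".join(f"{k}: {v}" for k, v in INVARIANT_TERMS.items())
          context, outputs = question, []
          for step in STEPS:  # each step is forced explicitly, one pass at a time
              prompt = (f"Glossary (use these terms invariantly):\n{glossary}\n\n"
                        f"Context so far:\n{context}\n\nTask: {step}")
              result = call_llm(prompt)
              outputs.append(result)
              context += "\n" + result  # the next step sees all prior steps
          return outputs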

    Reply addressees: @MetaPrime001 @BrianRoemmele @ceobillionaire


    Source date (UTC): 2025-01-09 21:47:54 UTC

    Original post: https://twitter.com/i/web/status/1877472389237522432

    Replying to: https://twitter.com/i/web/status/1877465731681427673

  • Am I crazy to repatriate my tech company to the USA and incorporate NLI’s work i

    Am I crazy to repatriate my tech company to the USA and incorporate NLI’s work into its AI? I mean, do I want to actually go back to running a company instead of working full time on a think tank? 😉 Do I really want to take these fellow nerds and build a ‘truth and reciprocity’…


    Source date (UTC): 2025-01-09 19:51:47 UTC

    Original post: https://twitter.com/i/web/status/1877443168377122966

  • RT @LukeWeinhagen: The ideal use case for AI on social media is not to connect u

    RT @LukeWeinhagen: The ideal use case for AI on social media is not to connect users with content but to connect users with sense makers.…


    Source date (UTC): 2025-01-09 18:59:52 UTC

    Original post: https://twitter.com/i/web/status/1877430103983927421

  • APPRECIATION FOR CHATGPT AND UNDERSTANDING ITS LIMITS I think the reason I appre

    APPRECIATION FOR CHATGPT AND UNDERSTANDING ITS LIMITS
    I think the reason I appreciate ChatGPT (or any AI) is that I have realistic expectations of what it’s capable of, and because my work exposes its limits more than math or programming does, that expectation is consistently reinforced. So I learn about the AIs mostly from their limits (failures).
    I no longer find ChatGPT hallucinating, though that might be down to how I craft each of my prompts.
    But the reality is that when I’m writing (a book) I know the subject matter. I’m more interested in how to compose the topic or section for readability, and in ensuring I haven’t missed including some example or permutation.
    The hard problem I’m looking for OpenAI to solve is increasing attention anchors. As I refine a section, every increase in precision causes a loss of content, so bulleted lists shrink as I ask for expansions upon them.
    This is true across the spectrum. It’s why it fails at reasoning. And I am confident this can’t be solved without multiple passes, more attention nodes, and much more context memory.
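
    One mechanical workaround for the shrinking-lists problem above is a multi-pass loop that diffs the draft’s bullets after each revision and restores whatever the model silently dropped. A minimal sketch, assuming a hypothetical revise() call standing in for any chat-completion API:

      def revise(text: str, instruction: str) -> str:
          """Hypothetical stand-in for a model call that rewrites text."""
          return text  # a real call would return the model's revised draft

      def bullets(text: str) -> set[str]:
          """Collect the '-' bullet lines of a draft."""
          return {ln.strip() for ln in text.splitlines() if ln.strip().startswith("-")}

      def refine(draft: str, instruction: str, passes: int = 3) -> str:
          for _ in range(passes):
              before = bullets(draft)
              draft = revise(draft, instruction)
              dropped = before - bullets(draft)
              if dropped:  # content lost to added "precision": restore it explicitly
                  draft += "\n" + "\n".join(sorted(dropped))
          return draft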


    Source date (UTC): 2025-01-06 18:10:31 UTC

    Original post: https://twitter.com/i/web/status/1876330519769890816

  • WHAT’S THE PROMISE OF AI AGENTS? (probably spam) Q: Curt: –“What do you think:

    WHAT’S THE PROMISE OF AI AGENTS?
    (probably spam)

    Q: Curt: –“What do you think: Will AI Agent Workflows in 2026 look like Online Poker in 2010?”—

    1) I agree only with the fact that work will change. The volume of work per person might increase in some white-collar work, as it did under the first two computer revolutions, but the number of white-collar workers will very likely shrink, and shrink a great deal.

    2) This is not the first technological revolution I’ve experienced in my lifetime. Except for the initial phase in the 40s and 50s, I have some exposure to each generation. In each of these revolutions, low-hanging fruit is mistakenly interpreted as a boundless undiscovered valley of unlimited potential. And it’s always been false. We exhausted each generation of technological innovation rather quickly. The most recent that living generations are familiar with was the phone, and we exhausted innovation in phone apps in just a few years. The ‘agent’ innovation in LLMs will very likely have a scale effect closer to the client-server revolution than to the internet revolution. Conversely, the potential of parallel processing of complex vector relationships is as inexhaustible as the transistor revolution. The reason is that the universe consists of relations, and those n-dimensional manifolds (of relations) are the most accurate means of representing reality (the universe) while maintaining some form of reduction (reducibility) that can be used for deduction, inference, and guessing. (A toy illustration follows after this post.)

    3) In the given example of poker there would be no need for the human whatsoever; humans would only introduce error. In many, many white-collar jobs, the utility for people that the computer revolutions created in producing white-collar work will be reversed, just as manual labor was reduced by industrialization in factories, and farm labor was reduced by, say, the loom and tractors. But the costs of goods, which are mostly

    4) The ‘dumbness’ of AIs outside of search, math, computer science, and research by permutation in the physical sciences remains astounding. And until that is overcome, this dumbness will persist: we understand the problem, but we don’t quite know how to solve it by merging, say, LLMs with agents (procedural systems), with navigating the physical world, and with manipulating the physical world. The capacity of the current AIs to reason as humans do, instead of merely solving ‘reasoning puzzles’, is illusory because of the absence of that merger (synthesis). In my work they simply cannot do it. It’s sad, really: I work with LLMs every day, which means I effectively experience their limitations every day.

    5) My company has been developing a very large and complex “universal application platform” for years now. This platform creates a framework of commensurability across all human cooperation. This commensurability functions as numbers do in math; as types, commands, and functions do in computer programming; and as unambiguity does in operational language, essentially creating standards of categories, weights, and measures across all human cooperation. Within this platform, one can construct interfaces for tasks, roles, responsibilities, or whatever, in any domain where humans collaborate and cooperate. The platform separates rules that must be followed (prescriptions for processes) from statistical insight, from insights derived within a group, and from insights derived across groups, fields, or populations. This is what I understand as necessary for producing context-specific insight into complex causal density, quite *unlike* math, programming, and ‘puzzle’ reasoning.

    6) Human capacity for the appearance of multitasking is limited. In fact, humans don’t multitask; they switch, and the number of contexts they can switch between is as limited as the number of objects we can visualize independently: usually three to five, but no more. And where humans can in fact appear to multitask, they rely on pattern recognition, where the AIs would demonstrate superiority.

    7) Human capacity for novelty in multiple contexts, and for high precision within a given context, might remain for a while, but eventually machines will outperform humans. Yet humans will be required for obtaining the information necessary to solve novel problems, because while some novel problems might consist of interstitial permutations of existing knowledge, the hard problems will remain, because they will require the construction of physical experiments, at least until we finish discovering the first principles of the universe and can rely on constructability. So discovering those first principles is the hard limit of human-machine competition.

    8) I could go on quite a while with this sequence, but the intuition in the original post, that human exposure to parallel data in real time would continue to have utility, is false: we merely took advantage of the incompetence of statistical and procedural algorithms at pattern recognition. Unlike statistical and procedural algorithms, the whole point of Bayesian systems is that they can account for much higher causal density than humans can, do so faster in real time, and even predict better in real time (see the second sketch after this post). So the theory proposed is likely false; the individual would be rapidly and easily replaced.

    9) The question is: what human-possible activity can’t be replaced? a) Outwitting one another. b) Human subjective risk tolerance. c) Permission to impose costs upon human demonstrated interests. d) All of the above, in alliances of capital between humans with disparate, ever-changing interests and preferences.

    More another time.
    CD
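
    Two illustrative sketches for points 2 and 8 above. First, a toy instance of relations as directions in a vector space supporting “guessing” by analogy; the three-dimensional vectors are invented for the example, not real learned embeddings.

      import math

      # Invented 3-d "embeddings"; real systems learn these in thousands of dimensions.
      vecs = {
          "king":  [0.9, 0.8, 0.1],
          "queen": [0.9, 0.1, 0.8],
          "man":   [0.1, 0.9, 0.1],
          "woman": [0.1, 0.1, 0.9],
      }

      def cosine(a, b):
          """Similarity of two directions, independent of their lengths."""
          return sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))

      # "Guessing" by relation: king - man + woman should land nearest queen.
      target = [k - m + w for k, m, w in zip(vecs["king"], vecs["man"], vecs["woman"])]
      print(max((w for w in vecs if w != "king"), key=lambda w: cosine(target, vecs[w])))
      # -> queen

    Second, a minimal instance of the Bayesian claim in point 8: sequential updating revises an estimate on every observation, in real time. The bluff scenario and the probabilities are illustrative assumptions only.

      def bayes_update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
          """Posterior P(H | observation) via Bayes' rule."""
          num = p_obs_if_true * prior
          return num / (num + p_obs_if_false * (1.0 - prior))

      p = 0.5  # prior belief that an opponent is bluffing
      for _ in range(3):  # three tells, each twice as likely under a bluff
          p = bayes_update(p, p_obs_if_true=0.6, p_obs_if_false=0.3)
      print(round(p, 3))  # 0.889: belief sharpens with each observation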

    Reply addressees: @bryanbrey @rileybrown_ai


    Source date (UTC): 2025-01-06 18:02:42 UTC

    Original post: https://twitter.com/i/web/status/1876328554956529664

    Replying to: https://twitter.com/i/web/status/1876291249436832053

  • RT @curtdoolittle: @romanyam –“What contributions do you think you could make i

    RT @curtdoolittle: @romanyam –“What contributions do you think you could make in a world where Superintelligence exists?”– Dr. Roman Yamp…


    Source date (UTC): 2025-01-02 16:02:54 UTC

    Original post: https://twitter.com/i/web/status/1874848854925009113

  • Sam and OpenAI: Memory, Context, “Grownup Mode” Talk about a quality of life upg

    Sam and OpenAI: Memory, Context, “Grownup Mode”

    Talk about a quality of life upgrade. I can’t wait. πŸ˜‰ https://t.co/ZbnThQ87ws


    Source date (UTC): 2025-01-01 01:42:38 UTC

    Original post: https://twitter.com/i/web/status/1874269973117563118
