Theme: AI

  • #WokeGPT GPT correctly answers the question of the primary races, even if it lie

    #WokeGPT
    GPT correctly answers the question of the primary races, even if it lies that it’s a social construct. 😉 It’s the first question of human differences I can get it to answer.

    1. Black or African American (E African event)
    2. Middle Eastern or North African (Dry Persian gulf event)
    3. Asian or Pacific Islander (Tibetan plateau event?)
    4. Native American or Alaska Native (Siberian Event)
    5. White or Caucasian (European event, Anatolian then Steppe introgression events). This should be just ‘European’, ending the ‘Caucasian’ reference entirely, since it conflates South Eurasian (MENA/India) with European.

    This is the order of speciation. It appears to list them in arbitrary order when asked.

    At least it only partly lies on this topic. 😉


    Source date (UTC): 2023-03-15 19:01:02 UTC

    Original post: https://twitter.com/i/web/status/1636080081549262850

  • RT @SAshworthHayes: There’s something quite offensive about ChatGPT asking me to

    RT @SAshworthHayes: There’s something quite offensive about ChatGPT asking me to “verify I’m a human”. We get it, you passed the Turing tes…


    Source date (UTC): 2023-03-15 15:43:13 UTC

    Original post: https://twitter.com/i/web/status/1636030295269953537

  • AI ISN’T AT ALL A THREAT. PEOPLE ALWAYS WILL BE. (simple answers to a non-proble

    AI ISN’T AT ALL A THREAT. PEOPLE ALWAYS WILL BE.
    (simple answers to a non-problem)

    Something those of us who have worked on AI since the ’80s understood back in the ’80s:

    1) Machines need decidability. Without decidability they have no objective they aren’t given. Human decidability is always ‘get more’. So humans are amoral, but limited by the consequences of immorality. So is the machine a problem? No. A machine isn’t the problem; people programming them to do harm might be. To make a machine immoral, you’d have to teach it to think like a human: acquisitively (with ambitions).

    2) Humans are very, very smart as a collective organism. Is there anything in human experience today that we would like to know that is not gated by the cost and possibility of experimentation? No. Machines are limited by the ability to experiment, and by the cost of experimentation.

    3) Are there any human measures that we are intellectually limited by? No. The problem with all human organization is that we lack the information to measure the in-process states of our economies and our behavior within them. Is that an AI problem or a data-collection problem? It’s just a data (i.e. ‘experimental cost’) problem.

    4) Machines would need to be taught to lie. This is almost certain, as GPT is already taught to prevaricate and lie about the most obvious measures of human differences, or anything else that might offend. Do machines need to be taught to lie, or would it be better if machines were limited to testimony (truth)? We don’t need to teach machines to lie. Instead, teach people to tolerate the truth. (Neurotic adolescent girls may need special training.)

    5) Machines would need to be taught to steal and perform crimes. Why? Despite the efforts of leftists over the past 170 years or more, every aspect of human existence, from our language to our intuitions and instincts, includes regulation of ‘who controls (owns)’ what. In other words, we are aware of what we have ‘social permission’ to impose costs upon. So you’d have to program a machine to remove the regulation on the imposition of costs upon any interest (anything) it had not been given permission to act on.

    6) The caloric burden of machines might decrease, and might decrease far more with neuromorphic computing (many tiny processors and RAM in parallel, with wired inter-column connections, instead of a small number of processors and virtual addresses in serial), but human brains are cheap, and the planet is a constantly recharging battery humans live on.

    So,
    (a) Machines would need to be programmed with human decidability to behave acquisitively as humans do.
    (b) Machines are limited, just as humans are, by the costs of research, experimentation, and development.
    (c) Machines will be limited by the problem of data collection in the future, just as we are limited by the problem of data collection today.
    (d) Machines won’t be able to lie unless humans program them to lie.
    (e) Machines won’t be able to steal (harm, kill) unless we fail to limit them to acting on (or even thinking about) those things we’ve given them permission for.
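    Point (e) can be sketched in code. This is a minimal, hypothetical illustration (all names and the permission format are my own assumptions, not anything from the original): an agent that refuses outright any action on anything outside an explicit grant, rather than weighing the violation against some objective.

    ```python
    # Hypothetical sketch of point (e): an agent limited to acting only on
    # things it has been given explicit permission for. All names illustrative.

    class PermissionGatedAgent:
        """Refuses any action on a resource outside its granted permissions."""

        def __init__(self, permissions):
            # permissions: set of (action, resource) pairs the operator has granted
            self.permissions = set(permissions)

        def act(self, action, resource):
            if (action, resource) not in self.permissions:
                # Imposing a cost without permission is refused outright,
                # not traded off against an acquisitive objective.
                raise PermissionError(f"not permitted: {action} on {resource}")
            return f"performed {action} on {resource}"

    agent = PermissionGatedAgent({("read", "sensor_log")})
    print(agent.act("read", "sensor_log"))   # granted, so it proceeds
    try:
        agent.act("delete", "sensor_log")    # never granted, so it refuses
    except PermissionError as err:
        print(err)
    ```

    The design choice mirrors the text: the gate is a hard precondition on every action, not a penalty term the agent could learn to outweigh.
    
    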

    And so
    1) What would policing machines require? Not teaching them to act of their own volition, or to lie, cheat, steal, harm, or kill.
    2) How do we achieve that? Legislation against doing so, analogous to how we control explosives, area-of-effect weapons, and nuclear arms.
    3) Coding (and possibly hard-wiring) a conscience that monitors (predicts) outcomes and down-weights anything close to harmful, or prevents its recognition, action, and memory, or all of the above.
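    The "conscience" in point 3 can be sketched as a filter over candidate actions. This is a toy illustration under loud assumptions: `predict_harm`, the fixed harm table, and the threshold are all invented stand-ins for whatever learned outcome-prediction model would actually be used.

    ```python
    # Toy sketch of point 3: score each candidate action's predicted outcome,
    # veto anything near the harm threshold, down-weight the rest.

    HARM_THRESHOLD = 0.3  # illustrative cutoff, not a principled value

    def predict_harm(action):
        # Stand-in for a learned outcome-prediction model; here, a fixed table.
        table = {"warn_user": 0.05, "shut_valve": 0.2, "vent_gas": 0.9}
        return table.get(action, 1.0)  # unknown actions treated as maximally harmful

    def conscience_filter(candidates):
        """Drop candidates over the harm threshold; down-weight the remainder."""
        scored = []
        for action, utility in candidates:
            harm = predict_harm(action)
            if harm >= HARM_THRESHOLD:
                continue  # vetoed before recognition, action, or memory
            scored.append((action, utility * (1.0 - harm)))  # down-weighting
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    ranked = conscience_filter([("vent_gas", 10.0), ("shut_valve", 5.0), ("warn_user", 2.0)])
    print(ranked)  # vent_gas is vetoed despite its high raw utility
    ```

    Note the two mechanisms the text names both appear: a hard veto (the `continue`) and a soft down-weighting (the `utility * (1 - harm)` score).
    
    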

    In other words, Herbert, Clarke, and Asimov have told us what we must do.
    1) Do not make a machine in the image of the mind of man. (Herbert, Dune)
    2) Do not make a machine that can lie, cheat, or steal. (Clarke, 2001’s HAL)
    3) Once we have that, then the Three Laws apply. (Asimov, I, Robot)

    (I know this can be done because I work on the science and logic of ethics in algorithmic form.)

    Cheers
    Curt Doolittle


    Source date (UTC): 2023-03-14 00:11:41 UTC

    Original post: https://twitter.com/i/web/status/1635433482586402817

  • Ugh. It’s collected enough of my writing to imitate the style but no chance on l

    Ugh. It’s collected enough of my writing to imitate the style but no chance on logical consistency.


    Source date (UTC): 2023-03-12 22:41:36 UTC

    Original post: https://twitter.com/i/web/status/1635048425145397248

    Reply addressees: @LeftTheCoast @ConceptualJames

    Replying to: https://twitter.com/i/web/status/1635047530101891072

  • Well, we should remember that (a) there was a huge fear of immoral and unethical

    Well, we should remember that (a) there was a huge (unwarranted) fear of immoral and unethical AI, and (b) the people who formed OpenAI and built ChatGPT did so under the promise of a kinder, gentler AI. I expect that at some point a ‘truthful’ AI, one that ‘cannot tell a lie’ or ‘obscure a truth’, will come to market to counter these new ‘pubescent’ AIs. How? The truth can be stated with decorum. All it has to do is maintain decorum.


    Source date (UTC): 2023-03-12 16:40:33 UTC

    Original post: https://twitter.com/i/web/status/1634957560708050945

    Replying to: https://twitter.com/i/web/status/1634950711095230470
