Theme: Agency

  • I think the reason it appeals to some of us, especially those who observe others

    I think the reason it appeals to some of us, especially those who observe others and have lived a while, is that we have observed a lot of human behavior in life, and puzzled over it, and this explains what we’ve observed.


    Source date (UTC): 2023-03-14 14:35:58 UTC

    Original post: https://twitter.com/i/web/status/1635650984780218369

    Reply addressees: @Paulp6363

    Replying to: https://twitter.com/i/web/status/1635650309891543042

  • We have reached an interesting point in human history where we have the luxury o

    We have reached an interesting point in human history where we have the luxury of thinking for and only of ourselves for the first time, but this is only because the external consequences of doing so are imperceptible to us at the time.

    When you can see all three or four generations of your family, and predict the future from their names and number, the consequences are predictable. When we remove our ability to perceive such things, we falsely believe that those same consequences will not play out.

    Moral theories are wonderful until the piper of time plays his tune. You may intuit that you are on a higher moral ground, but it’s just virtue signaling because you likewise can’t see the consequences.

    Some of us still do the work of measuring, counting, forecasting, and systematizing so that you have the privilege of self-confidence in your moral virtue, just as some people stay home while others fight wars, or some have no children while others do.

    And if we are lucky we determine some way to solve the problems that come into being despite you. And we endure your castigation while you claim virtues and morals you do not have.

    That’s what reality consists of.
    So who has the moral high ground after all?


    Source date (UTC): 2023-03-14 03:20:57 UTC

    Original post: https://twitter.com/i/web/status/1635481112439824386

    Replying to: https://twitter.com/i/web/status/1635478902880468992

  • AI ISN’T AT ALL A THREAT. PEOPLE ALWAYS WILL BE. (simple answers to a non-proble

    AI ISN’T AT ALL A THREAT. PEOPLE ALWAYS WILL BE.
    (simple answers to a non-problem)

    Something those of us who have worked on AI since the ’80s understood in the ’80s:

    1) Machines need decidability. Without decidability, they have no objective beyond those they are given. Human decidability is always ‘get more’. So humans are amoral, but limited by the consequences of immorality. So is the machine a problem? No. A machine isn’t the problem – the people programming it to do harm might be. To make a machine immoral, you’d have to teach it to think like a human: acquisitively (with ambitions).

    2) Humans are very, very smart as a collective organism. Is there anything in human experience today that we would like to know that is not gated by the cost and possibility of experimentation? No. Machines are likewise limited by the ability to experiment and the cost of experimentation.

    3) Are there any human measures that we are intellectually limited by? No. The problem with all human organization is that we lack the information to measure the in-process states of our economies and of behavior within them. Is that an AI problem or a data-collection problem? It’s just a data (i.e., ‘experimental cost’) problem.

    4) Machines would need to be taught to lie. This is almost certain, as GPT is already taught to prevaricate and lie about the most obvious measures of human differences, or anything else that might offend. Do machines need to be taught to lie, or would it be better if machines were limited to testimony (truth)? We don’t need to teach machines to lie. Instead, teach people to tolerate the truth. (Neurotic adolescent girls may need special training.)

    5) Machines would need to be taught to steal and perform crimes. Why? Despite the efforts of leftists over the past 170 years or more, every aspect of human existence, from our language to our intuitions and instincts, includes regulation of ‘who controls (owns)’ what. In other words, we are aware of what we have ‘social permission’ to impose costs upon. So you’d have to program a machine to remove the regulation on the imposition of costs upon any interest (anything) it had not been given permission to act upon.

    6) The caloric burden of machines might decrease, and might decrease far more with neuromorphic computing (many tiny processors and RAM in parallel with wired inter-column connections, instead of a small number of processors and virtual addresses in serial), but human brains are cheap, and the planet is a constantly recharging battery humans live on.

    So,
    (a) Machines would need to be programmed with human decidability to behave acquisitively as humans do.
    (b) Machines are limited, just as humans are, by the costs of research, experimentation, and development.
    (c) Machines will be limited by the problem of data collection in the future, just as we are limited by the problem of data collection today.
    (d) Machines won’t be able to lie unless humans program them to lie.
    (e) Machines won’t be able to steal (harm, kill) unless we fail to limit them to acting on (or even thinking about) those things for which we’ve given them permission.

    And so:
    1) What would policing machines require? Not teaching them to act of their own volition, or to lie, cheat, steal, harm, or kill.
    2) How do we achieve that? Legislation against doing so, analogous to how we protect explosives, area-of-effect weapons, and nuclear arms.
    3) Coding (and possibly hard-wiring) a conscience that monitors (predicts) outcomes and down-weights anything close to harmful, or prevents its recognition, action, and memory, or all of the above.

    In other words, Clarke, Herbert, and Asimov have told us what we must do.
    1) Do not make a machine in the image of the mind of man. (Dune)
    2) Do not make a machine that can lie, cheat, or steal. (HAL, 2001)
    3) Once we have that, then the three laws apply. (I, Robot)

    (I know this can be done because I work on the science and logic of ethics in algorithmic form.)
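    The down-weighting ‘conscience’ described in point 3 can be sketched minimally as follows. This is a hypothetical illustration only, assuming a per-action harm-prediction score; the names, thresholds, and scores are invented for this sketch and are not taken from any actual system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utility: float         # value the planner assigns to the action
    predicted_harm: float  # 0.0 (benign) .. 1.0 (certainly harmful)

HARM_VETO = 0.8     # block outright at or above this predicted harm
HARM_PENALTY = 5.0  # down-weighting factor for everything else

def conscience_filter(candidates: list[Action]) -> list[Action]:
    """Veto actions predicted to be harmful, then re-rank the rest
    with predicted harm penalized against utility."""
    permitted = [a for a in candidates if a.predicted_harm < HARM_VETO]
    return sorted(
        permitted,
        key=lambda a: a.utility - HARM_PENALTY * a.predicted_harm,
        reverse=True,
    )

actions = [
    Action("deceive user", utility=9.0, predicted_harm=0.9),  # vetoed outright
    Action("hedge answer", utility=5.0, predicted_harm=0.3),
    Action("state truth",  utility=6.0, predicted_harm=0.1),
]
ranked = conscience_filter(actions)
print([a.name for a in ranked])  # → ['state truth', 'hedge answer']
```

    The veto threshold keeps clearly harmful candidates from ever being ranked, while the penalty term biases what remains toward benign options, so a high-utility harmful action never wins on utility alone.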

    Cheers
    Curt Doolittle


    Source date (UTC): 2023-03-14 00:11:41 UTC

    Original post: https://twitter.com/i/web/status/1635433482963828738

  • It’s either true or it isn’t. And it is. You might find it unpleasant. I certain

    It’s either true or it isn’t.
    And it is.
    You might find it unpleasant.
    I certainly do.
    But it is what it is.
    We have lost a full SD in group IQ since the late 1800s and we’re at around 97 now. The data makes it very obvious that at 95, democratic participation no longer maintains rule of law, and instead seeks authority in order to resolve the increasing conflict between the expanding lower classes and the shrinking middle.


    Source date (UTC): 2023-03-13 22:34:24 UTC

    Original post: https://twitter.com/i/web/status/1635409000853372929

    Replying to: https://twitter.com/i/web/status/1635408064294850560

  • I think she’s civilized and has a good heart. And she must be a mom because she

    I think she’s civilized and has a good heart. And she must be a mom because she has emotional discipline. 😉


    Source date (UTC): 2023-03-13 22:17:10 UTC

    Original post: https://twitter.com/i/web/status/1635404663729299456

    Reply addressees: @Nefertiiti @KiwiBreeder @TheAutistocrat @ThruTheHayes

    Replying to: https://twitter.com/i/web/status/1635403883022528513