@stefanmolyneux
I see what you’re trying to do, but we’re looking for an AI solution to the problem because of the sheer volume. And if we do that, let’s see what we can learn:
The founder of moderation, Jim Rutt, gave us three categories we can tag as:
- manners (decorum): cursing, slurs, ad hominem, etc. We prohibit violations of manners.
- content (taboos): porn, violence, lawbreaking, etc. We prohibit violations of taboos (content).
- bias (want, position, viewpoint): a spectrum from masculine merit to feminine non-merit. We tag the biases, and by doing so explain their origin.
And to this we can add (and Tag):
- testifiability (truth/falsehood) – can you claim this is true?
- reciprocity (morality/immorality) – can you claim this is moral/ethical?
And to this we can add (and Tag):
- Methods of argument used (truth, deceit) – there are only so many.
- Sex differences in deceit – each expressible as male or female bias, because all biases, like all differences in humans, are the result of sex differences in perception, cognition, and valuation.
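The taxonomy above separates what is prohibited (manners, taboos) from what is merely tagged and explained (bias, testifiability, reciprocity, method of argument). A minimal sketch of that label schema, with all names hypothetical and chosen only for illustration:

```python
from dataclasses import dataclass, field
from enum import Enum

class Violation(Enum):
    # Prohibited categories (per the list above): violations are removed.
    MANNERS = "manners"   # decorum: cursing, slurs, ad hominem
    TABOO = "taboo"       # content: porn, violence, lawbreaking

class Tag(Enum):
    # Tagged-and-explained categories: not removed, only labeled.
    BIAS = "bias"                       # viewpoint spectrum
    TESTIFIABILITY = "testifiability"   # truth/falsehood claim
    RECIPROCITY = "reciprocity"         # morality/immorality claim
    METHOD = "method"                   # method of argument (truth vs deceit)

@dataclass
class PostLabel:
    post_id: str
    violations: list = field(default_factory=list)
    tags: list = field(default_factory=list)

    @property
    def prohibited(self) -> bool:
        # Only manners and taboo violations trigger removal;
        # biases and the other dimensions are tagged, not prohibited.
        return bool(self.violations)

# A biased but rule-abiding post is tagged yet stays up:
label = PostLabel("example-post", tags=[Tag.BIAS])
print(label.prohibited)  # False
```

The design choice this illustrates is the post’s core distinction: removal applies only to the first two categories, while the remaining dimensions accumulate as explanatory metadata on each post.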
And by tagging each ‘post’ we teach everyone on earth how to tell the truth, and what is moral.
So yes we can automate content moderation. And yes, we can automate some incentives to improve by doing so. But more importantly we can explain, by construction from first principles, not only our own arguments – but the opposition’s as well.
And only then is ‘liberty’ – a state of reciprocity – possible. Because only then is a standard of weights and measures not only existential and empirically demonstrated in vast numbers, but nearly universally understood. And as such we can trade between our moral biases. Largely: resources produced by those with more self-regulation in exchange for behavior (self-regulation) by those with less self-regulation.
“A Surreptitious Universal Education In Morality”
-Curt
Source date (UTC): 2022-11-21 17:04:12 UTC
Original post: https://gab.com/curtd/posts/109382877364387705