
GOOGLE’S TRUTHFULNESS ALGORITHM AND PROPERTARIANISM

http://arxiv.org/pdf/1502.03519v1.pdf

Well, you know, for the purpose they intend to use this theory for, I’m not sure it’s all that bad. For all intents and purposes they are creating if-then statements consisting of a word pair and a conclusion (a triplet, so to speak). But they are relying upon ‘authorities’ for the construction of triplets.
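The triplet idea can be sketched in a few lines. This is a hypothetical illustration, not the paper’s actual data structure; the field names and the example fact are my own.

```python
from dataclasses import dataclass

# Illustrative (subject, predicate, object) triplet -- the "word pair
# and a conclusion" described above. Field names are assumptions, not
# taken from the paper.
@dataclass(frozen=True)
class Triplet:
    subject: str
    predicate: str
    obj: str

# One extracted "fact", of the kind the paper's example concerns:
fact = Triplet("Barack Obama", "nationality", "USA")
print(fact.subject, "->", fact.predicate, "->", fact.obj)
```

Because the dataclass is frozen (hashable), triplets can be collected into sets and compared across sites, which is what the scoring idea below depends on.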

(I did work in AI exactly like this back in 1984–86 in assembly language, and spent many months on it, so it’s not exactly a novel idea — I understand the issues. Also in 2005, in one of my many failed attempts to reform Microsoft’s strategy, we created a similar algorithm for identifying terms and reforming microsoft.com to provide information that was [surprise] helpful and targeted to the user — at the time my company managed substantial parts of Microsoft’s internal taxonomy of terms, so it was something we understood quite clearly.)

For Google’s purposes, you can capture a database of sites filled with rumors and extract their triplets, then look for sites that use similar triplets. Conversely, you can index authorities and their triplets. That means a good web site is one that has fewer (or no) bad triplets.
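The scoring step above can be sketched as set overlap. This is a minimal illustration under my own assumptions — it presumes triplet extraction has already happened, and the function name and weighting are mine, not Google’s.

```python
# A "good" site is one whose triplets overlap little with a known
# rumor database. Triplets are plain tuples here for brevity; all
# example facts are illustrative.

def rumor_score(site_triplets: set, rumor_triplets: set) -> float:
    """Fraction of a site's triplets that match known rumor triplets."""
    if not site_triplets:
        return 0.0
    return len(site_triplets & rumor_triplets) / len(site_triplets)

rumors = {("obama", "born_in", "kenya"), ("moon", "made_of", "cheese")}
site_a = {("obama", "born_in", "kenya"), ("kennedy", "was", "president")}
site_b = {("kennedy", "was", "president"), ("earth", "orbits", "sun")}

print(rumor_score(site_a, rumors))  # 0.5 -- half its triplets are rumors
print(rumor_score(site_b, rumors))  # 0.0 -- no rumor overlap
```

The same machinery run against an authority corpus instead of a rumor corpus gives the converse signal: overlap with authorities raises a site’s score rather than lowering it.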

Now here is where propertarianism comes in:

Very few statements are ‘true’ in any material sense. Some things are more truthful than others, but very little is true in the logical sense. And worse, the example they use is an interesting one: the nationality of Barack Obama, which as far as I know is not exactly settled science (I received an early copy of the obviously modified PDF — most likely because the birth certificate issued in Hawaii was tampered with in order to obscure that he was listed as a Muslim on it). Yet they give this as an example of something that is true.

Now, other things are matters of value that each political bias (reproductive strategy) treats as true. To say Kennedy was a president, and to say he was a very bad president, are two different things.

But by and large, the political correctness crowd has succeeded in creating enough of a body of verbiage, and succeeded in controlling authorities (they now control Wikipedia), that the NPOV (neutral point of view) has become synonymous with the politically correct POV.

So while it might be nice to stop rumours, I think that preference determines the values attributed to an arrangement of statements. And as such, it is better to detect bias in one direction or another than it is to detect ‘truth’.

First, because truth is very questionable. Second, because truth assertions are open to corruption (the number of Asian authors on the paper isn’t surprising to me). Third, because bias is both knowable and independent of truth claims. Fourth, because we desire to find biases that suit our arrangement of values.

Now, in addition, I think it is equally important to determine the structure of the argument — which is slightly more difficult, but statistically ascertainable. (For a hierarchy of argument, see http://www.propertarianism.com/tools-and-techniques-for-political-debate/a-list-of-terms-for-use-in-evaluating-political-debate/)

So if you told me (a) how few rumor triplets a site had, (b) its bias (proletarian, libertarian, or aristocratic), and (c) the form of its argument, then I would think those three values would help us score sites, and that we could select our biases.
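The three-signal score proposed above can be sketched as a weighted combination. The weights, the 0-to-1 encodings of bias match and argument form, and the function name are all my own assumptions — a sketch of the idea, not a worked-out ranking function.

```python
def site_score(rumor_fraction: float,
               bias_match: float,
               argument_quality: float) -> float:
    """Combine the three proposed signals into one score in [0, 1].

    rumor_fraction   -- (a) fraction of the site's triplets that are
                        known rumor triplets (lower is better)
    bias_match       -- (b) 0..1, how well the site's detected bias
                        (proletarian / libertarian / aristocratic)
                        matches the bias the reader has selected
    argument_quality -- (c) 0..1, the form of the argument, per a
                        hierarchy-of-argument classifier

    Weights are illustrative assumptions, not derived from anything.
    """
    return ((1.0 - rumor_fraction) * 0.5
            + bias_match * 0.25
            + argument_quality * 0.25)

print(site_score(0.0, 1.0, 1.0))  # 1.0: no rumors, matching bias, well-formed
print(site_score(0.5, 0.0, 0.5))  # 0.375
```

The point of making bias_match an explicit input is exactly the author’s: the reader selects the bias, rather than the index imposing a single ‘neutral’ one.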

This is a very different search experience from a monopoly (totalitarian) one.

But then, if Google chose NOT to do that, I would see a market opportunity (as some of us already do) in presenting a web index that filtered out biases we disapprove of.


Source date (UTC): 2015-03-11 15:41:00 UTC
