by Kage Keller
Pardon me for restating in language less exacting. We use Truth to create models that correspond with reality. We use those models as heuristics.
(Definition: a heuristic is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals.)
We need these models for prediction so that we can make choices and take actions to survive.
We seek novelty to improve our models.
Novelty-seeking saves time and energy: we use pattern recognition and the predictive power of our neurons to identify where others have already done the processing for us.
When we consider what someone else has said (running it through our pattern recognition and predictive “wiring”, then integrating it into a new model), we save time and processing power that we can use for other endeavors (survival reinforcement for our genetics).
This is an efficient use of social information processing…
But that holds only if you know enough about the problem to predict that someone else’s processing result has a high probability of being right.
It is disastrous, however, if you have a very weak model of the subject, because then you can be INDOCTRINATED: a biased model can be learned (installed) and then be extraordinarily difficult to overcome with future information (novelty).
This is where we rely on experts who have processed answers for us. It is dangerous, however, to take those answers on faith.
Without digging into their model and doing some processing of your own, you leave yourself open to having your foundations supplanted by some entity’s agenda.
You become a pawn, a sheep, a cuck, etc.; i.e., you surrender your agency for a modicum of processing savings.
Being well read is an insurance policy against indoctrination (which disenfranchises your agency, your freedom to make decisions for survival).
(Curt: well done!)