AI and the Future of Information
In the information age, how do we know if what we read or hear is true? And what if today's most capable AI models reflect prejudices?
Did you ever wonder why we don't remind ourselves that history, for example, is written by the winners? This isn't to disparage the work of historians, or the effort made to separate truth from the tales those winners tell. Still, we forget that caveat and tend to take what we're told about history as fact, or close to it. We rarely stop to think about the biases present in our history.
A cursory examination of academic literature shows that biases are present. That matters more now than, perhaps, it ever has.
Up front, I'll admit that I don't have the data, but I suspect that the overwhelming majority of academic writing and research during the 1960s aligned with progressive political ideology. Likewise, the great majority of academic work and research in economics occurs within the scope of Keynesian economic theory. You have to go looking, purposefully, for details about the Chicago or Austrian schools of economic thought.
What if, over the last 70 years, the predominance of academic research material covered progressive, socialist, Marxist or leftist ideology? What if the mere preponderance of those views in the available information created a prevailing narrative passed on as truth?
The AI gatekeepers of our information today are trained on a gigantic corpus of academic research, and if bias is present in that information, it will be present in what the AI reports. Whatever biases dominate the lion's share of the available material create the prevailing narrative, the dreaded consensus: an inherent slant in the AI's responses toward a particular set of ideas or beliefs about the truth of things.
If you want dissenting opinions to those prevailing narratives from ChatGPT or Gemini, you have to prompt the AI to provide them. They will not come as part of ChatGPT's or Gemini's initial response, and in certain cases the AI will preface dissenting opinions and perspectives with loaded descriptors like "fringe."
As I understand it, people now use LLMs like ChatGPT and Gemini more often than they use search engines. We're using AIs as the purveyors of our information and, as per usual, aren't often thinking about the biases in what they present to us. I suspect that trend will continue and grow. Over time, what you end up with is a curated set of information, and data selected to support it, facing less and less questioning. We'll get one side of every story until people forget there was, or might be, an argument against the prevailing narratives worth considering.
That's hard to swallow when the LLMs claim to strive to be truthful, accurate sources of reliable information. In my opinion, this doesn't bode well for us at all.
The Dangers of Control and Bias in Information
AI bias is just the latest instance of efforts to control information, and thereby prevailing opinion. Governments have long curated information carefully, for very deliberate purposes.
Japanese children, for example, learn nothing in school of the horrors committed by Japanese soldiers during World War II. If you somehow manage to post something critical of the CCP in China, it won't stay up for long, and the poster will likely find themselves involuntarily disappeared or... reeducated.
These are far from the only instances of information control or deliberate bias. Nothing critical of the Thai Royal Family can be uttered, much less written, maintained, and studied, in Thailand under the lèse-majesté laws.
In the recent past we've seen what happens to people who tell us what governments don't want us to know. Edward Snowden lost his career and his freedom by revealing to the American people how the US government was collecting information on them. Julian Assange is in all kinds of trouble for publishing information that revealed some particularly egregious acts committed by the US Armed Forces in the Middle East.
We learned during the Covid fiasco that governments were working with big tech to throttle the visibility of reports that called the vaccines into question. This was done, as I understand it, by altering the algorithms to keep such information largely out of search results. Facebook censored posts whenever "fact checkers" determined their content was "misinformation."
And now people everywhere ask ChatGPT or Gemini, or Grok or Perplexity, about the "truth" of everything. Few adults ask for dissenting opinions, and younger people almost certainly won't. By the time they're adults who have used and relied upon these LLMs as their sources of truth, almost no one will bother asking for dissenting opinions at all.
When you realize this, does it make the gradual drop in our critical thinking and general attention spans more troubling?
Traffic Jam on the Information Superhighway
That much seems to make sense on its own, but I don't think it will matter. Too few people will read opinions like this, and fewer still will make any substantive change in how they interact with these technologies. We likely won't see massive changes in the near future, but what about the kids who will never remember a time when AIs weren't the primary source of information?
AI As Information Clergy
Boom.
Within a generation, you've got a majority of the populace far more used to trusting the AIs implicitly. Whatever the AI says about anything will be taken as truth, and with ever-shrinking skepticism.
This will happen a lot less with mathematics, physics and chemistry. But it will happen with history, sociology, political science, psychology... you know, all the areas of study that end up having such tremendous impacts on our lives; the arenas of thought that motivate legislation, tax schemes, environmental policy, and our qualities of life.
I really don't like writing about things with such a bleak outlook, but the straws I've grasped at for a long time are fewer and farther between. More people than ever are using AIs as their primary source of information, or as the first resource they turn to for verifying what they read or hear. Echo chambers are real. Algorithms effectively provide and maintain those echo chambers. Now we're in a position where the primary sources of information are all going to tell us very similar stories about everything.
I'm tempted to say "wake up," but I haven't pitched for one particular team here. My view doesn't necessarily align with your bias, and it's just as likely that you find my perspective compelling as it is that you think I'm some moron or tool making too much of something. That isn't comforting, but I think it's a realistic viewpoint.
I can't help hoping I'm wrong. I can't help hoping that a few people will consider what I've written here, share it with others, and contribute somehow to the growth of a desire for independent thinking and deserved skepticism of what we believe.
What do you think? Am I overthinking this, or have I presented a perspective here worth contemplating? Let me know in a comment. If you'll go a step further, tell us what we might do to regain and improve our critical thinking ability; our willingness to be skeptical and consider what it means to be fed prevailing narratives as truth.
Thank you for your time. I hope the value you took from this was worth the time you invested in reading it.