Social media has also been overtaken by spin, and whether you’re on Facebook, X (formerly known as Twitter), or Meta’s new X alternative, Threads, you just know you’re going to be frequently exposed to people with a particular political agenda.
But what most people don’t know (and that included me until quite recently) is that the new generative AI tools increasingly being integrated into websites, mobile apps, and search engines also have political biases, and may be serving you up a whole lot of spin and misinformation.
At least, that is what a new study recently profiled in the MIT Technology Review maintains. It outlines how researchers have discovered that different AI large language models have different political biases.
That seemed odd, unexpected, and more than a little alarming to me when I first read about it, so I decided to reach out to University of Akron professor Susan Ramlo, who is an expert in both physics and “Q Methodology.”
Since Q Methodology combines qualitative and quantitative methods to investigate the subjective views of those directly involved in a particular topic, I thought she might have some insight into whether AI tools can have subjective points of view.
In addition, Ramlo thinks a lot about the future of technology in her work on quantum computing, so I thought she would be an ideal guest to help us make sense of a world in which AI will increasingly help shape people’s views and opinions.
But the question is: should we let it do that? How reliable is the information we are getting from AI, and can we really trust that it is accurate? And if it isn’t accurate, what sources can we consult to get verifiable facts?
Find out. Listen now.
Dr. Susan Ramlo, University of Akron