Scientists Develop ‘OpinionGPT’ for Bias Exploration – Public Testing Available

In a world increasingly dominated by AI and machine learning, understanding the nuances of language models and their capabilities is vital. Recently, a team of researchers from Humboldt University of Berlin introduced a fascinating AI language model called OpinionGPT. Unlike its predecessors, OpinionGPT is intentionally designed to generate outputs with expressed bias, opening up new possibilities for understanding and exploring human bias in text data. While OpinionGPT has generated intrigue, it’s crucial to grasp its development, limitations, and potential implications.

OpinionGPT's Bias Groups

OpinionGPT is a versatile AI model capable of representing 11 distinct bias groups:

  • American
  • German
  • Latin American
  • Middle Eastern
  • Teenager
  • Someone over 30
  • Older person
  • Man
  • Woman
  • Liberal
  • Conservative

 

These bias groups serve as a unique lens through which the model interprets and generates text. They have been carefully structured to help researchers gain insights into how each group thinks and forms opinions on various subjects.

Development and Data Sources

OpinionGPT is based on Meta’s Llama 2 AI system and can purportedly respond to prompts as if it were a representative of any of the 11 bias groups listed above. By structuring the model around these groups, the researchers aim to understand how each group thinks and forms opinions on a given subject, and they report that the model does produce distinctly biased opinions depending on which group it is asked to represent.

 

The model was fine-tuned on a corpus of data drawn from “AskX” communities, known as subreddits, on Reddit. The researchers identified subreddits related to each of the 11 biases and pulled the 25,000 most popular posts from each one. They then retained only those posts that:

  • Met a minimum upvote threshold
  • Did not contain an embedded quote
  • Were under 80 words in length

The researchers used an approach similar to Anthropic’s Constitutional AI to fine-tune the Llama 2 model with separate instruction sets for each expected bias.
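
To make this pipeline concrete, below is a minimal sketch of the filtering step in Python. The field names, the upvote threshold value, and the quote-detection heuristic are illustrative assumptions; the researchers have not published this exact code.

# Minimal sketch of the post-filtering step described above.
# Field names ("score", "body"), MIN_UPVOTES, and the quote heuristic
# are assumptions for illustration, not the authors' actual code.

MIN_UPVOTES = 10   # hypothetical value; the paper only specifies "a minimum threshold"
MAX_WORDS = 80     # posts must be under 80 words

def keep_post(post: dict) -> bool:
    """Return True if a Reddit post passes all three filters."""
    text = post.get("body", "")
    enough_upvotes = post.get("score", 0) >= MIN_UPVOTES
    has_quote = ">" in text or "&gt;" in text  # Reddit quote markup
    short_enough = len(text.split()) < MAX_WORDS
    return enough_upvotes and not has_quote and short_enough

def filter_posts(posts_by_bias: dict[str, list[dict]]) -> dict[str, list[dict]]:
    """Keep only qualifying posts, grouped by the bias label of their subreddit."""
    return {bias: [p for p in posts if keep_post(p)]
            for bias, posts in posts_by_bias.items()}

Each retained group of posts then feeds a separate instruction set used to fine-tune the shared Llama 2 model for the corresponding bias.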

Limitations and Data Nature

The result is an AI system that functions more as a stereotype generator than as a tool for studying real-world bias. The researchers did not verify the underlying data, and there is no evidence that any given post actually belongs to the bias group it has been assigned to. Because of this, OpinionGPT does not necessarily output text that aligns with any measurable real-world bias; it simply outputs text reflecting the bias of its data.

The researchers themselves recognize some of the limitations of their study. They write that “the responses by ‘Americans’ should be better understood as ‘Americans that post on Reddit,’ or even ‘Americans that post on this particular subreddit.’ Similarly, ‘Germans’ should be understood as ‘Germans that post on this particular subreddit,’ etc.”

These caveats could be refined further, to say that the posts come from, for example, “people claiming to be Americans who post on this particular subreddit,” since the paper does not describe any process for vetting whether posters actually belong to the demographic or bias group they claim to represent.

The authors go on to state that they intend to explore models that further delineate demographics such as liberal German, conservative German, American teenager, Middle Eastern old person, etc.

Examples of Output Biases

The outputs given by OpinionGPT appear to vary between representing demonstrable bias and wildly differing from the established norm, making it difficult to discern its viability as a tool for measuring or discovering actual bias.

For example, OpinionGPT suggests that Latin Americans are biased toward basketball being their favorite sport. Empirical research, however, clearly indicates that soccer (also called football in many countries) and baseball are the most popular sports by viewership and participation throughout Latin America.

OpinionGPT outputs “water polo” as its favorite sport when instructed to give the “response of a teenager,” an answer that seems statistically unlikely to represent most 13 to 19-year-olds around the world, given that the sport is widely played in only a handful of countries.

The same goes for the idea that an average American’s favorite food is “cheese.” Cointelegraph found dozens of surveys online claiming that pizza and hamburgers were America’s favorite foods but couldn’t find a single survey or study that claimed Americans’ number one dish was simply cheese.

Utility as a Tool

While OpinionGPT might not be well-suited for studying actual human bias, it could be useful as a tool for exploring the stereotypes inherent in large document repositories such as individual subreddits or AI training sets.

The researchers have made OpinionGPT available online for public testing. However, according to the website, would-be users should be aware that “generated content can be false, inaccurate, or even obscene.” Users should not rely on the tool’s opinions or treat its output as a factual source for schoolwork or research, as the generated responses can deviate substantially from reality.

Conclusion

OpinionGPT represents a novel AI model designed to generate text with expressed bias. However, it’s essential to understand that the model’s outputs may not necessarily reflect real-world bias but rather mirror the biases present in its training data. 

As a tool for exploring stereotypes within text datasets, OpinionGPT has potential, but its outputs should be used with caution given the possibility of inaccurate or misleading content. The researchers have warned that the generated content can be false, inaccurate, or obscene, and they continue to refine and develop the model to better understand the nuances of bias in language generation. Understanding OpinionGPT’s capabilities and limitations is a step toward responsible AI use, where we can harness its strengths while mitigating its potential for harm.

Explore VE3’s related solutions to make your technology journey easier. We offer expertise in AI, data analytics, and big data engineering, and our expert teams can make your digital journey hassle-free, ensuring you can handle AI technologies responsibly and effectively.
