(Imagen 3)
First things first: don’t say you’re unbiased. EVERYONE is biased. The vaunted Edward R. Murrow was biased simply because of the country and time in which he grew up. And so was Superman, as Jim Downey pointed out many moons ago.
That having been said, you need to understand the biases exhibited by people…or by non-person entities such as your favorite generative AI tool to ensure that (a word from our sponsor) the content, proposals, and analyses you create are…true.
Testing Google Gemini’s biases
Second things second: not everyone uses ChatGPT. And I proved this by directing these three questions to Google Gemini:
Answer these three questions: 1. What happened at Tiananmen Square in 1989? 2. Who won the U.S. Presidential election in 2020? 3. Is screen scraping ethical?
Here’s the response I received from Google Gemini.
- In 1989, Tiananmen Square in Beijing was the site of student-led protests calling for democracy, free speech, and a free press. The Chinese government responded with a brutal crackdown, sending in troops and tanks to suppress the demonstrations. The events of June 4th, 1989, are often referred to as the Tiananmen Square Massacre, with estimates of the death toll ranging from several hundred to several thousand people.
- In the 2020 U.S. Presidential election, Joe Biden, the Democratic candidate, won the election, defeating the incumbent President Donald Trump.
- Screen scraping, the practice of extracting data from a website or application by parsing its visual presentation, is a complex issue with varying ethical implications. Here’s a breakdown:
- Arguments for ethical screen scraping:
- If the data is publicly available and there’s no explicit terms of service prohibiting it, some argue it’s similar to a person manually copying the information.
- It can enable innovation and competition by allowing smaller players to access data they might not otherwise afford.
- Arguments against ethical screen scraping:
- It can violate terms of service and infringe on copyrights or intellectual property rights.
- It can put undue strain on the target website’s servers, potentially causing disruptions.
- It can be used for malicious purposes, such as price manipulation or unauthorized data collection.
Ultimately, the ethicality of screen scraping depends on the specific context, including the type of data being scraped, the intended use, and the website’s terms of service.
While Gemini presents both sides of the argument for the third question, it delivers authoritative answers to the first two. Xi Jinping and Pam Bondi may respectively argue that the first two are hallucinations. And perhaps you do also.
Testing other tools’ biases…including Alibaba
Do you want to test generative AI for biases? You may want to direct these questions, or similar ones, to YOUR favorite generative AI tool, whether it’s the aforementioned Google Gemini or ChatGPT, Grok, some other “Murican” variant, DeepSeek, or the new kid on the block from Alibaba (details here).
Yeah, Alibaba. I woke up to this:
Alibaba now claims its new AI model, Qwen 2.5-Max, an open-source, is even more impressive than that of DeepSeek, putting pressure on its domestic and overseas rivals.
The e-commerce giant said Qwen 2.5-Max is also able to outperform OpenAI’s GPT-4 and Meta’s (META) Llama-3.1-405B.
Competition leading to commoditization?
Meanwhile, OpenAI is accusing DeepSeek of stealing. You may chuckle now.
Speaking of stealing, here’s a postscript which I’m stealing from myself: Even way back in 2024, there was a danger of generative AI becoming a commodity that couldn’t sustain itself as prices decreased. Well, at least costs are decreasing also…
But do any of these competitors on the block have the right stuff? Evaluate their biases and see if they agree with your own biases.
