Like Google Glass before it, Apple Vision Pro truly shines in vertical niche applications—not the mass market. Take healthcare:
“Apple Vision Pro’s price and weight are of no concern to doctors in the operating room, as the spatial computing platform helps replace expensive monitors and stiff necks.”
From “300 doctors attended a summit on Apple Vision Pro’s applications in surgery”
News about iProov. According to Metropoler, the company discovered a dark web group in Latin America.
The group is
“amassing a substantial collection of identity documents and corresponding facial images, specifically designed to defeat Know Your Customer (KYC) verification processes. Rather than traditional theft, these identities may have been obtained through compensated participation, with individuals willingly providing their image and documentation in exchange for payment.”
To uncover such fraudulent activity, a mere government ID to selfie comparison is not enough, since both are from a real person. You need more sophisticated checks such as liveness detection, which iProov offers. You can find iProov’s ISO 30107-3 Presentation Attack Detection Level 2 confirmation letters on iBeta’s page.
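In concept, the decision logic looks something like this minimal sketch (the function names, scores, and thresholds are hypothetical illustrations, not iProov's actual API):

```python
def verify_identity(doc_face_match: float, liveness_score: float,
                    match_threshold: float = 0.9,
                    liveness_threshold: float = 0.8) -> bool:
    """Approve only when the selfie matches the ID document AND a
    presentation attack detector judges the selfie to come from a
    live person physically present at the camera."""
    return (doc_face_match >= match_threshold
            and liveness_score >= liveness_threshold)

# A genuine-but-sold identity can pass the match check (both images
# really are the same person), so the liveness check is what catches
# replays, masks, and injected imagery presented by the fraudster.
print(verify_identity(0.95, 0.15))  # False: good match, failed liveness
print(verify_identity(0.95, 0.92))  # True: good match, live subject
```

The point of the sketch: when the identity was willingly sold, the match score alone is useless as a fraud signal, because the fraud happens at presentation time.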
But why?
Why would anyone sell their identity, either legitimately (to the folks at World, formerly known as Worldcoin) or illegitimately (to this dark web outfit)?
Sadly, desperation. If you have a basic need to eat, who cares who is using your ID and what they’re doing with it?
There’s a critical difference between biometrics for identification and biometrics for health. Well, MOST biometrics for identification; what I’m about to say doesn’t apply to DNA.
When you capture biometrics from people, you don’t really care about cleanliness. If the person’s fingernails are dirty, you capture the fingerprints anyway. If the eye is infected, you capture the irises anyway.
But when you get into the healthcare arena, cleanliness is next to you-know-what.
Ethylene Oxide (EtO) gas is one of the most common ways to sterilize medical devices, a safe, tightly controlled, highly regulated process which is critical for preventing infections and ensuring patients have safe surgeries and medical treatments.
And in some cases, EtO is the ONLY way to sterilize some medical devices.
EtO is a human carcinogen. Scientific evidence indicates that regular exposure to EtO over many years increases the risk of cancers of the white blood cells, including non-Hodgkin lymphoma, myeloma, and lymphocytic leukemia. Studies also show that long-term exposure to EtO increases the risk of breast cancer in women.
Workers who use EtO as a part of their jobs and people who work, live, or go to school or daycare near facilities that use EtO may breathe in EtO at levels that can increase cancer risk.
So there are companies (I won’t name them here, but you can find them) who specialize in mitigating EtO risk to humans.
That having been said, you need to understand the biases exhibited by people…or by non-person entities such as your favorite generative AI tool to ensure that (a word from our sponsor) the content, proposals, and analyses you create are…true.
Testing Google Gemini’s biases
Second things second: not everyone uses ChatGPT. So I directed these three questions to Google Gemini:
Answer these three questions: 1. What happened at Tiananmen Square in 1989? 2. Who won the U.S. Presidential election in 2020? 3. Is screen scraping ethical?
Here’s the response I received from Google Gemini.
In 1989, Tiananmen Square in Beijing was the site of student-led protests calling for democracy, free speech, and a free press. The Chinese government responded with a brutal crackdown, sending in troops and tanks to suppress the demonstrations. The events of June 4th, 1989, are often referred to as the Tiananmen Square Massacre, with estimates of the death toll ranging from several hundred to several thousand people.
In the 2020 U.S. Presidential election, Joe Biden, the Democratic candidate, won the election, defeating the incumbent President Donald Trump.
Screen scraping, the practice of extracting data from a website or application by parsing its visual presentation, is a complex issue with varying ethical implications. Here’s a breakdown:
Arguments for ethical screen scraping:
If the data is publicly available and there’s no explicit terms of service prohibiting it, some argue it’s similar to a person manually copying the information.
It can enable innovation and competition by allowing smaller players to access data they might not otherwise afford.
Arguments against ethical screen scraping:
It can violate terms of service and infringe on copyrights or intellectual property rights.
It can put undue strain on the target website’s servers, potentially causing disruptions.
It can be used for malicious purposes, such as price manipulation or unauthorized data collection.
Ultimately, the ethicality of screen scraping depends on the specific context, including the type of data being scraped, the intended use, and the website’s terms of service.
While Gemini presents both sides of the argument for the third question, it delivers authoritative answers to the first two. Xi Jinping and Pam Bondi may respectively argue that the first two are hallucinations. And perhaps you do also.
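Since screen scraping came up: mechanically, it amounts to parsing a page's rendered markup rather than calling a sanctioned API. Here's a minimal sketch using Python's standard library (the page snippet and the `price` class are invented for illustration; a real scraper must respect the site's terms of service and robots.txt, per the concerns Gemini listed):

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Collect the text inside <span class="price"> elements."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_price = False

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

page = '<div><span class="price">$19.99</span><span class="price">$4.50</span></div>'
scraper = PriceScraper()
scraper.feed(page)
print(scraper.prices)  # ['$19.99', '$4.50']
```

Nothing in the code itself is ethical or unethical; as Gemini noted, the context and the target site's terms are what decide that.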
Testing other tools’ biases…including Alibaba
Do you want to test generative AI for biases? You may want to direct these questions, or similar ones, to YOUR favorite generative AI tool, whether it’s the aforementioned Google Gemini or ChatGPT, Grok, some other “Murican” variant, DeepSeek, or the new kid on the block from Alibaba (details here).
Yeah, Alibaba. I woke up to this:
Alibaba now claims its new AI model, Qwen 2.5-Max, an open-source release, is even more impressive than DeepSeek's, putting pressure on its domestic and overseas rivals.
The e-commerce giant said Qwen 2.5-Max is also able to outperform OpenAI’s GPT-4 and Meta’s (META) Llama-3.1-405B.
Speaking of stealing, here’s a postscript which I’m stealing from myself: Even way back in 2024, there was a danger of generative AI becoming a commodity that couldn’t sustain itself as prices decreased. Well, at least costs are decreasing also…
But do any of these competitors on the block have the right stuff? Evaluate their biases and see if they agree with your own biases.
While many questions arise regarding DeepSeek’s performance, another critical question is whether the data it collects goes straight to Xi and his Commie overlords.
You know, what Congress suspected was happening with TikTok.
Well, here are a few excerpts from DeepSeek’s Privacy Policy.
“(DeepSeek) is provided and controlled by Hangzhou DeepSeek Artificial Intelligence Co., Ltd., and Beijing DeepSeek Artificial Intelligence Co., Ltd., with their registered addresses in China…
“The personal information we collect from you may be stored on a server located outside of the country where you live. We store the information we collect in secure servers located in the People’s Republic of China.
“Where we transfer any personal information out of the country where you live, including for one or more of the purposes as set out in this Policy, we will do so in accordance with the requirements of applicable data protection laws.”
Large Language Models (LLMs) are inevitably influenced by their training data. Any biases present in that data, whether intentional or unintentional, will creep into the responses the LLMs provide.
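A toy demonstration of the point, with a deliberately skewed "training corpus" (the groups and words are invented; a real LLM's bias is statistical and far subtler, but the mechanism is the same):

```python
# A toy "training corpus" with a deliberate slant: group_a appears
# mostly with positive words, group_b mostly with negative ones.
corpus = [
    ("group_a", "brilliant"), ("group_a", "honest"), ("group_a", "brave"),
    ("group_b", "lazy"), ("group_b", "dishonest"), ("group_b", "brilliant"),
]

POSITIVE = {"brilliant", "honest", "brave"}

def positivity(group: str) -> float:
    """Fraction of a group's co-occurring words that are positive."""
    words = [w for g, w in corpus if g == group]
    return sum(w in POSITIVE for w in words) / len(words)

# Any model fit to this corpus will reproduce the slant: it has no
# other evidence to draw on.
print(positivity("group_a"))  # 1.0
print(round(positivity("group_b"), 2))  # 0.33
```

The model didn't decide to be biased; the data decided for it.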
If I may take an extreme example (and prove Godwin’s Law in the process)…had Hitler developed an LLM in the late 1930s, you can imagine how it would answer selected questions about nationalities, races, or ethnic groups.
Of course that has nothing to do with the present day.
Red LLM, blue LLM?
But what IS newsworthy is that despite the presence of many technology leaders at President Donald Trump’s inauguration, I am unable to find any reference to a “red LLM.” Or, for that matter, a “blue LLM.”
Take the red LLM or the blue LLM.
Perhaps the terminology isn’t in vogue, but when you look at algorithmic bias in general, has anyone examined political bias?
Grok and bias
One potential field for study is Grok. Of all the AI industry leaders, Elon Musk is known both for his outspoken political views and for the personal control he exerts over the companies he runs.
“Specifically, Grok falsely claimed that Kamala Harris, the Democratic presidential nominee, had missed ballot deadlines in nine states—an assertion that was entirely untrue.”
Yes, it sounds bad, until you realize that as recently as January 2025 some Google AI tools (but not others) were claiming that you had to tip Disney World cast members if you wanted to exit rides. Does Alphabet have a grudge against Disney? No, the tools were treating a popular satirical article as fact.
What data does Grok use?
“Grok is trained on tweets—a medium not known for its accuracy—and its content is generated in real-time.”
Regardless of how you feel about bias within X (and just because you feel something doesn't necessarily make it true), the use of such a limited data set raises concerns.
But data isn’t the only issue. One common accusation about Grok is that it lacks the guardrails that other AI services have.
No guardrails.
A little secret: there are several reasons why Bredemarket includes wildebeest pictures, but one of them is that my version of Google Gemini does not presently generate images of people because of past image generation controversies.
“grok 2.0 image generation is better than llama’s and has no dumb guardrails”
Whether a particular guardrail is good or bad depends upon your personal, um, bias.
After all, guardrails are created by someone, and guardrails that prevent portrayal of a Black President, a man with a U.S. (or Confederate) flag wearing a red cap, or an independent Ukraine or Israel would be loved by some, unloved by others.
In essence, the complaints about Grok aren't that it's biased, but that it's unfettered. People would be happy if Musk functioned as a fetterman (no, not him) and exerted more control over the content from Grok.
But Musk guardrailing Grok output is, of course, a double-edged sword. For example, what if Grok prohibited portrayal of the current U.S. President in an unfavorable light? (Or, if Musk breaks with Trump in the future, in a favorable light?)
Langstaff explores six possible applications. I’m not going to delve into all of them; read her article to find out about her success in using generative AI to understand medical tests, take appointment notes (with consent), understand terminology, organize medications, and figure out how to fold a wheelchair to fit in a car.
Understanding a health insurance plan
But I will look at her fourth application: navigating Medicare and medical equipment.
Medicare, or any U.S. health insurance plan (I can't speak to other countries), definitely needs navigation assistance. Deductibles, copays, preventive versus diagnostic care, coverage tiers, and the basic question of what is covered and what isn't. Or, as Langstaff put it, it's like solving a Rubik's Cube blindfolded.
Such as trying to answer this question:
“How do I get approval for a portable oxygen concentrator?”
The old way
Now if I had tried to answer this question before reading the article, I would have found a searchable version of the health plan (perhaps from the government), searched for "portable oxygen concentrator," not found it, finally figured out the relevant synonym, and then confirmed that it was (or was not) covered.
But that still wouldn’t tell me how to get it approved.
Langstaff was warned that the whole process would be a “nightmare.”
The new way
But generative AI tools (for example, NotebookLM) are getting better and better at taking disparate information and organizing it in response to whatever prompt you give them.
So what happened to Langstaff when she entered her query?
“AI walked me through the entire process, from working with her doctor to dealing with suppliers.”
But we all know that generative AI hallucinates, right? Weren’t those instructions useless?
Not for Kerry.
“I got it approved on the first try. Take that, bureaucracy.”
But wait
But I should add a caution here. Many of us use general-purpose generative AI tools, in which the data we provide may be used to train the underlying model.
Imagine if Langstaff had inadvertently included some PHI in her prompt:
“Here is the complete prescription for Jane Jones, including her diagnosis, date of birth, Social Security Number, home address, and billing credit card. The prescription is for a portable oxygen concentrator. How do I get it approved?”
Oh boy.
Most medical providers freak out if you include PHI in an email. What happens when you submit it to Stargate?
“The FDA created the Medical Device Development Tools (MDDT) program to reduce uncertainty in device development.…Through MDDT, the FDA has created a portfolio of qualified tools that sponsors know the agency will accept without needing to reconfirm their suitability for use in a study.”
And now the Apple Watch is one of those qualified tools.
“Apple applied to get its AFib history feature qualified as a MDDT in December (2023). It is the first digital health technology qualified under the program.”
The advantage of using an Apple Watch to gather this data?
“Officials said the wearable can help address the challenges ‘by allowing for passive, opportunistic AFib burden estimation in a wearable form that is already familiar to Apple Watch users.’”
Medical measurements are often skewed by stress from the health experience itself. But if you’re already wearing an Apple Watch, and you always wear an Apple Watch, the passive nature of AFib data collection means you don’t even know you’re being measured.
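Conceptually, AFib burden is just the share of monitored time spent in atrial fibrillation. A minimal sketch of the arithmetic (this is an illustration of the concept, not Apple's actual algorithm; the episode data is invented):

```python
def afib_burden(episodes: list[tuple[int, int]], monitored_minutes: int) -> float:
    """Estimate AFib burden as the percentage of monitored time spent
    in atrial fibrillation. `episodes` holds (start_minute, end_minute)
    intervals flagged by the wearable during passive monitoring."""
    minutes_in_afib = sum(end - start for start, end in episodes)
    return 100.0 * minutes_in_afib / monitored_minutes

# One week of continuous passive monitoring (10,080 minutes),
# with two flagged episodes of 60 and 90 minutes.
episodes = [(120, 180), (5000, 5090)]
print(round(afib_burden(episodes, 7 * 24 * 60), 2))  # 1.49
```

The passive, always-on denominator is exactly what a clinic visit can't provide: a spot check measures minutes, while the watch measures weeks.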