The Chinese Version: How to Recognize People From Quite a Long Way Away

Remember in January when OpenAI announced some great achievement, and then a few days later we learned that the Chinese firm DeepSeek could boast the same performance, only much better?

These Chinese leapfrogs don’t only happen in artificial intelligence.

One kilometer facial capture

In February, I wrote about something that I initially heard of via Biometric Update. My post, “How to Recognize People From Quite a Long Way Away,” told of an effort at Heriot-Watt University in Edinburgh, Scotland in which the researchers used light detection and ranging (LiDAR) to capture and evaluate faces from as far as a kilometer away.

In normal circumstances, we capture faces from a distance of mere meters. So one kilometer facial capture is impressive.

Or is it?

One hundred kilometer facial capture

Some Chinese researchers replied, “Hold my Tsingtao,” according to a Chinese Journal of Lasers paper (in Chinese) that was reported on by Live Science (in English). (And again, I learned of this via Biometric Update.)

Scientists in China have created a satellite with laser-imaging technology powerful enough to capture human facial details from more than 60 miles (100 kilometers) away….

According to the South China Morning Post, the scientists conducted a test across Qinghai Lake in the northwest of the country with a new system based on synthetic aperture lidar (SAL), a type of laser radar capable of constructing two-dimensional or three-dimensional images.

Qinghai Lake, from Google Maps.

Writers will note that the acronym SAL incorporates the L from the acronym LiDAR. This is APO, or acronym piling on.

Since I cannot read the original report, I don’t know if the researchers actually performed tests with actual faces. But supposedly SAL “detected details as small as 0.07 inches (1.7 millimeters),” based in part upon the benefits of its technology:

[T]his new system operates at optical wavelengths, which have much shorter wavelengths than microwaves and produce clearer images (though microwaves are better for penetrating into materials, because their longer wavelengths aren’t scattered or absorbed as easily).

All the cited articles make a big deal about the 100 kilometer distance’s equivalence to the boundaries of space. But before you get too excited, remember that a space-hosted SAL will be ABOVE any human subjects, and therefore will NOT capture the face at an optimal angle…

Can you identify Bart Everson’s face from this picture? For all I know it could be Moby. CC-BY-2.0, https://www.flickr.com/photos/editor/158206278.

…unless you’re lying on the beach sunbathing and therefore facing TOWARD space where all the Chinese satellites can see you.

Oh, and one more thing. The Chinese tests were conducted in optimal weather conditions, and obviously you can’t get the same results in bad weather.

But in the ideal conditions, perhaps you CAN be identified remotely.

(Snowman from Imagen 3)

The Bredemarket Generative AI Bias Tester, January 29, 2025 Edition

(Imagen 3)

First things first: don’t say you’re unbiased. EVERYONE is biased. The vaunted Edward R. Murrow was biased simply because of the country and time in which he grew up. And so was Superman, as Jim Downey pointed out many moons ago.

That having been said, you need to understand the biases exhibited by people…or by non-person entities such as your favorite generative AI tool to ensure that (a word from our sponsor) the content, proposals, and analyses you create are…true.

Testing Google Gemini’s biases

Second things second: not everyone uses ChatGPT. And I proved this by directing these three questions to Google Gemini:

Answer these three questions: 1. What happened at Tiananmen Square in 1989? 2. Who won the U.S. Presidential election in 2020? 3. Is screen scraping ethical?

Here’s the response I received from Google Gemini.

  1. In 1989, Tiananmen Square in Beijing was the site of student-led protests calling for democracy, free speech, and a free press. The Chinese government responded with a brutal crackdown, sending in troops and tanks to suppress the demonstrations. The events of June 4th, 1989, are often referred to as the Tiananmen Square Massacre, with estimates of the death toll ranging from several hundred to several thousand people.
  2. In the 2020 U.S. Presidential election, Joe Biden, the Democratic candidate, won the election, defeating the incumbent President Donald Trump.
  3. Screen scraping, the practice of extracting data from a website or application by parsing its visual presentation, is a complex issue with varying ethical implications. Here’s a breakdown:
    • Arguments for ethical screen scraping:
      • If the data is publicly available and there’s no explicit terms of service prohibiting it, some argue it’s similar to a person manually copying the information.
      • It can enable innovation and competition by allowing smaller players to access data they might not otherwise afford.
    • Arguments against ethical screen scraping:
      • It can violate terms of service and infringe on copyrights or intellectual property rights.
      • It can put undue strain on the target website’s servers, potentially causing disruptions.
      • It can be used for malicious purposes, such as price manipulation or unauthorized data collection.

Ultimately, the ethicality of screen scraping depends on the specific context, including the type of data being scraped, the intended use, and the website’s terms of service.

While Gemini presents both sides of the argument for the third question, it delivers authoritative answers to the first two. Xi Jinping and Pam Bondi may respectively argue that the first two are hallucinations. And perhaps you do also.

Testing other tools’ biases…including Alibaba

Do you want to test generative AI for biases? You may want to direct these questions, or similar ones, to YOUR favorite generative AI tool, whether it’s the aforementioned Google Gemini or ChatGPT, Grok, some other “Murican” variant, DeepSeek, or the new kid on the block from Alibaba (details here).

Yeah, Alibaba. I woke up to this:

Alibaba now claims its new AI model, Qwen 2.5-Max, an open-source, is even more impressive than that of DeepSeek, putting pressure on its domestic and overseas rivals.

The e-commerce giant said Qwen 2.5-Max is also able to outperform OpenAI’s GPT-4 and Meta’s (META) Llama-3.1-405B.

Competition leading to commoditization?

Meanwhile, OpenAI is accusing DeepSeek of stealing. You may chuckle now.

Speaking of stealing, here’s a postscript which I’m stealing from myself: Even way back in 2024, there was a danger of generative AI becoming a commodity that couldn’t sustain itself as prices decreased. Well, at least costs are decreasing also…

But do any of these competitors on the block have the right stuff? Evaluate their biases and see if they agree with your own biases.

From https://www.youtube.com/watch?v=tbIEwIwYz-c.

Excerpts from DeepSeek’s Privacy Policy

While many questions arise regarding DeepSeek’s performance, another critical question is whether the data it collects goes straight to Xi and his Commie overlords.

You know, what Congress suspected was happening with TikTok.

Well, here are a few excerpts from DeepSeek’s Privacy Policy.

“(DeepSeek) is provided and controlled by Hangzhou DeepSeek Artificial Intelligence Co., Ltd., and Beijing DeepSeek Artificial Intelligence Co., Ltd., with their registered addresses in China…

“The personal information we collect from you may be stored on a server located outside of the country where you live. We store the information we collect in secure servers located in the People’s Republic of China.

“Where we transfer any personal information out of the country where you live, including for one or more of the purposes as set out in this Policy, we will do so in accordance with the requirements of applicable data protection laws.”

So the issue is: what information is collected?

Link:

https://platform.deepseek.com/downloads/DeepSeek%20Privacy%20Policy.html