Artificial Intelligence and Healthcare, A Qualified View

As I’ve noted before, healthcare is a pioneering user of artificial intelligence, although (hopefully) under robust controls to maintain accuracy and preserve HIPAA-level privacy.

And a group of investors just poured $125 million into Qualified Health to advance AI in healthcare.

Why?

“We are living through a generational shift, one where AI doesn’t just augment how organizations work but fundamentally transforms them from the inside out,” said Mohamad Makhzoumi, Co-CEO of NEA, who will join Qualified Health’s Board of Directors in conjunction with the financing. “From NEA’s nearly five decades of company-building experience, we believe the organizations shaping the next era of healthcare innovation will be those helping health systems reimagine every administrative and clinical workflow from the ground up, and Qualified Health is exactly that company. We are thrilled to lead this financing and to partner with Justin and team to accelerate healthcare’s AI transformation and shape the future of healthcare enterprises across the country.”

“Health systems today are operating under extraordinary pressure, from rising labor costs to tightening reimbursement, while managing increasing complexity in patient care,” said Jared Kesselheim, MD, Managing Partner at Transformation Capital. “What stood out to us about Qualified Health is that the team approaches this work as medical care specialists, with a deep understanding of the realities health systems face every day. That perspective allows them to identify where AI can create meaningful clinical and operational impact. We’re excited to partner with Justin and the Qualified Health team as they help leading health systems navigate this next phase of healthcare.”

When is a Law Enforcement Camera a Law Enforcement Camera?

Many years ago I was driving on Holt Boulevard in Montclair, California, preparing to make a left turn on Central. I followed the vehicle in front of me and made my left turn…only then noticing that the left turn light was now red.

As the registered owner of the vehicle I was driving, I received an email from the city of Montclair a few days later. Because this is when Montclair was using cameras for traffic enforcement.

Off to traffic school.

Montclair doesn’t use traffic cameras any more, but all sorts of cameras are owned by, or accessible to, law enforcement agencies.

But how should they be used?

404 Media reported that the Georgia State Patrol accesses Flock cameras for the intended purpose of gathering information about serious crimes. But what happens when a camera captures something minor?

“Georgia State Patrol used its system of Flock automated license plate reader (ALPR) surveillance cameras to issue a ticket to a motorcyclist who was allegedly looking at his cell phone while riding, according to a copy of the citation obtained by 404 Media….The incident happened December 26 in Coffee County, Georgia. The ticket lists the offense as ‘Holding/supporting wireless telecommunications device,’ and includes the note ‘CAPTURED ON FLOCK CAMERA 31 MM 1 HOLDING PHONE IN LEFT HAND.’”

The man went to court and the ticket was dropped, but 404 Media is still outraged that the ticket was issued in the first place. Not because of Georgia’s policies, but because of other policies.

“Many police departments go out of their way to tell community members that Flock cameras are not used for traffic enforcement. For example, the City of Glenwood Springs, Colorado, states in a FAQ that “GSPD [Glenwood Springs Police Department] does not use Flock cameras for traffic enforcement, parking enforcement, or minor code violations.” El Paso, Texas, tells residents “these are not traffic enforcement cameras. They do not issue tickets, do not monitor speed, and do not generate revenue. They are investigative tools used after crimes occur.” Lynwood, Washington tells residents “these cameras will not be used for traffic infractions, immigration enforcement, or monitoring First Amendment-protected expressive activity” (Flock cameras have now been used for all of these purposes, as we have reported.)”

You will recall that I addressed another Flock Safety case, in which a citizen made public records requests from two Washington state jurisdictions. The jurisdictions said that they didn’t have the data; Flock Safety did. Flock Safety said that it had deleted the data.

Basically, Flock Safety is controversial, and some people are going to oppose ANYTHING they do. Even when Flock Safety technology protects people from dangerous drivers.

My view is that if a camera is used by a law enforcement agency, and there is no law prohibiting the law enforcement agency from using a camera for a particular purpose, then the agency can use the camera. There appears to be no such law in Georgia, so I’m not bent out of shape over this.

What are your thoughts? Is this a privacy violation?

Europe is Looking At More Than Just Biometric Testing

A little more detail, courtesy EU Brussels, regarding the policy brief published by the EU Innovation Hub for Internal Security, coordinated by eu-LISA together with the European Commission, Europol and Frontex.

As I noted earlier today, one proposal is for Europe to perform its own independent biometric testing, reducing Europe’s dependence on the American National Institute of Standards and Technology (NIST).

“The second is a centralised evaluation and testing platform connected to that repository, allowing standardised, independent and continuous assessment of biometric technologies, including benchmarking across vendors.”

But if there is a second proposal (European testing) in the cited European biometric policy brief, there must also be a first proposal—one I failed to discuss this morning.

“The first is a common EU biometric data repository containing datasets that comply with European rules, reflect the demographics and use-cases relevant to EU authorities and are stored in a secure environment.”

Makes sense. If you are going to test you need test data. And NIST has no obligation to ensure its test data complies with the General Data Protection Regulation (GDPR). The subjects in NIST test databases rarely provided the “explicit consent” mentioned in GDPR, and the “right to erasure” from a NIST database is…laughable.

Yes, it’s extremely challenging to construct a testing database that complies with GDPR.

And NIST certainly ain’t gonna do it.

Will a European entity construct it?

And if the right to erasure is maintained, how will you maintain historical consistency of test results?
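One way to square erasure with reproducible benchmarking is to key results to dataset versions rather than to individuals. Here is a minimal sketch of that idea; the class, the salted pseudonymous ID scheme, and all names are my own invention, not anything proposed in the policy brief:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class ErasableTestSet:
    """Hypothetical GDPR-aware biometric test set.

    Subjects are keyed by a salted pseudonymous ID; erasure removes the
    raw template and bumps the dataset version, and each benchmark run
    records the version it used so historical results stay interpretable
    even after subjects have been erased.
    """
    salt: str
    templates: dict = field(default_factory=dict)
    version: int = 0
    run_log: list = field(default_factory=list)

    def _pid(self, subject_id: str) -> str:
        # Pseudonymous key: the raw subject ID never appears in results.
        return hashlib.sha256((self.salt + subject_id).encode()).hexdigest()[:16]

    def enroll(self, subject_id: str, template: bytes) -> None:
        self.templates[self._pid(subject_id)] = template
        self.version += 1

    def erase(self, subject_id: str) -> bool:
        # Right to erasure: drop the template; any change invalidates
        # comparisons against earlier dataset versions.
        removed = self.templates.pop(self._pid(subject_id), None) is not None
        if removed:
            self.version += 1
        return removed

    def record_run(self, accuracy: float) -> dict:
        # Results are tied to the dataset version, not to identities.
        entry = {"version": self.version,
                 "subjects": len(self.templates),
                 "accuracy": accuracy}
        self.run_log.append(entry)
        return entry
```

The point of the sketch: you can honor erasure, but every erasure forks the test set, so "historical consistency" becomes "comparable only within the same version."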

Why Would Europe Perform Its Own Biometric Testing?

I’ve seen two articles about a possible move by Europe to set up a Europe-wide biometric testing agency, bypassing the need for National Institute of Standards and Technology (NIST) biometric testing.

One reason is that a European-controlled testing methodology can incorporate European regulations, such as the General Data Protection Regulation (GDPR).

A second related reason for Europe to bypass NIST biometric testing is that U.S. government agencies, including NIST and the Federal Bureau of Investigation (FBI), naturally place prime importance on American interests.

Remember when the U.S. House of Representatives Select Committee on the Chinese Communist Party complained that the FBI Certified Products List contained Chinese biometric vendors (the Certified Communist Products List)?

  • Wait until they discover all the Chinese companies that participate in NIST testing.
  • And wait until someone in the legislative or executive branches decides that the FBI or NIST shouldn’t list products from other countries deemed unfriendly to the United States. Denmark? Germany? France?

For these reasons, Europe may be compelled to set up its own biometric testing organization.

And so may China.

When Companies Can’t Target Prospects Under Age 16

If you’re on a platform such as Facebook, you sometimes receive advertisements that are VERY specific. Such as, “This is the perfect drink holder for California white males over the age of 50!” It’s almost as if they know everything about you…because they do.

Unless you implement privacy restrictions and don’t allow platform advertisers to reference your personal information.

Of course, if the advertiser isn’t able to narrowcast directly to you, the advertiser will broadcast to everybody.

And Facebook will start showing you advertisements in Chinese.

Qiaobi.

And if you complain to Facebook and ask why you’re seeing Chinese ads, Facebook will simply reply, “We are prohibited from using your personal information. Since there are a billion Chinese, we take a guess that you’re Chinese and show you those ads.”

Which brings us to age and social media.

The Under 16s Are Blocklisted

Back when Marky Mark created The Facebook, he initially targeted college-age users. But as time went on, Facebook and its competitors started aiming for younger ages.

This makes sense. Advertisers want to target consumers who are susceptible to changing their minds and are not set in their ways. So while a super kewl soft drink manufacturer isn’t going to target me, it is going to target 18-year-olds…and 16-year-olds…and 14-year-olds…and 12-year-olds.

A recent DKC report stated that 42% of all household spending is influenced by 8- to 14-year-olds, and that this age group is DIRECTLY spending over $100 billion per year.

So you can bet that advertisers are clamoring to purchase ad time on Facebook, TikTok, and the other social media services to get a pipeline to the brains of these 8- to 14-year-olds…whoops, 13- to 14-year-olds, since most social media services require you to be at least 13 years old to have an account.

But what if access to that entire age group is cut off entirely?

We’re seeing all over the world that jurisdictions are enacting or trying to enact bans on the use of social media for people under 16 years of age. The latest country to propose such a move is Indonesia:

“Authorities in the country, which is Southeast Asia’s largest economy, said Friday they expect social media platforms to deactivate the accounts of under-16s from March 28, starting with YouTube, TikTok, Facebook, Instagram, Threads, X, Bigo Live and Roblox.”

In other words, all the popular sites that teens love.

And in certain jurisdictions, the companies will implement age verification and age estimation technology to ensure that kids don’t lie about their ages to get in.

Assuming these prohibitions stand, this causes a huge problem for B2C marketers that target teens: how do you market to them when the direct pipelines to this age group are cut off?

I’m just thankful that Bredemarket and its clients sell to adults. You don’t really see 13-year-olds buying biometric technology.

If the City Fails, Try the County (Milwaukee and Biometrica)

The facial recognition brouhaha in southeastern Wisconsin has taken an interesting turn.

According to Urban Milwaukee, the Milwaukee County Sheriff’s Office is pursuing an agreement with Biometrica for facial recognition services.

The, um, benefit? No cost to the county.

“However, the contract would not need to be approved by the Milwaukee County Board of Supervisors, because there would be no cost to the county associated with the contract. Biometrica offers its services to law enforcement agencies in exchange for millions of mugshots.”

Sound familiar? Chris Burt thinks so.

“Milwaukee Police Department has also attempted to contract Biometrica’s services, prompting pushback, at least some of which reflected confusion about how the system works….

“The mooted agreement between Biometrica and MPD would have added 2.5 million images to the database.

“In theory, if MCSO signs a contract with Biometrica, it could perform facial recognition searches at the request of MPD.”

See Bredemarket’s previous posts on the city efforts that are now on hold.

And counties also.

No guarantee that the County will approve what the City didn’t. And considering the bad press from the City’s efforts, including using software BEFORE adopting a policy on its use, it’s going to be an uphill struggle.

Privacy, by Google Gemini

Google’s concept:

“Abstract 3D render of a human silhouette made of shimmering frosted glass, iridescent light refracting through, symbolizing secure data encryption and zero-knowledge proofs, elegant and high-end.”

Personally I think it’s TOO abstract, but perhaps that’s just me.

I didn’t create a musical version of this on Instagram because stuff, but there’s a Facebook version here. Sadly non-embeddable…but that’s why you should join my Facebook Bredemarket Identity Firm Services group.

Who Can Write My Biometric Company’s Product Marketing Content?

Someone who is a biometric product marketing expert.

Someone who has three decades of expertise in biometrics.

I remember ANSI/NIST-CSL 1-1993.

Someone who has worked with fingerprints, faces, irises, voices, DNA, and other biometric modalities.

Some modalities. Butts and tongues not included.

Someone who understands the privacy landscape in Europe (GDPR), Illinois (BIPA), California, and elsewhere.

BIPA is a four-letter word.

Oh…and someone who can write.

A slight exaggeration.

So who can write this stuff?

I know someone. Bredemarket.

Some great videos

  • Biometric product marketing expert.
  • Questions.
  • Services, process, and pricing.

Ambient Clinical Intelligence in Healthcare

Another topic raised by Nadaa Taiyab during today’s SoCal Tech Forum meeting was ambient clinical intelligence. See her comments on how AI benefits diametrically opposing healthcare entities here.

There are three ways that a health professional can create records during, and/or after, a patient visit.

  • Typing. The professional has their hands on the keyboard during the meeting, which doesn’t make a good impression on the patient.
  • Structured dictation. The professional can actually look at the patient, but the dictation is unnatural. As Bredebot characterizes it: “where you have to speak specific commands like ‘Period’ or ‘New Paragraph.’”
  • Ambient clinical intelligence.

Here is how DeepScribe defines ambient clinical intelligence:

“Ambient clinical intelligence, or ACI, is advanced, AI-powered voice recognizing technology that quietly listens in on clinical encounters and aids the medical documentation process by automating medical transcription and note taking. This all-encompassing technology has the ability to totally transform the lives of clinicians, and thus healthcare on every level.”
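A purely illustrative sketch of that pipeline, listen, transcribe, redact, draft, follows; the stage names and stub functions are my own, not DeepScribe’s actual architecture:

```python
from dataclasses import dataclass

@dataclass
class ClinicalNote:
    """Toy SOAP-style note produced from an ambient transcript."""
    subjective: str
    objective: str
    assessment: str
    plan: str

def transcribe(audio_chunks: list) -> str:
    """Stub for the speech-to-text stage; a real ACI product would run a
    streaming ASR model with speaker diarization here."""
    return " ".join(audio_chunks)

def redact_phi(text: str, phi_terms: list) -> str:
    # Security/privacy stage: mask obvious PHI before the transcript
    # leaves the trusted boundary. Real systems use NLP-based detection,
    # not a hand-supplied term list like this one.
    for term in phi_terms:
        text = text.replace(term, "[REDACTED]")
    return text

def draft_note(transcript: str) -> ClinicalNote:
    """Stub for the generative stage that maps a transcript to a
    structured note; real systems use an LLM plus clinician review."""
    return ClinicalNote(subjective=transcript, objective="",
                        assessment="", plan="")

# The full pipeline is just the composition of the three stages:
# draft_note(redact_phi(transcribe(chunks), phi_terms))
```

The clinician never touches a keyboard or speaks a command; the pipeline runs ambiently, which is the whole point.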

Like any generative AI model, ambient clinical intelligence has to provide my four standard benefits: accuracy, ease of use, security, and speed.

  • Accuracy is critically important in any health application, since inaccurate coding could literally be a matter of life and death.
  • Ease of use is of course the whole point of ambient clinical intelligence, since it replaces harder-to-use methods.
  • Security and privacy are necessary when dealing with personal health information (PHI).
  • Speed is essential also. As Taiyab noted elsewhere in her talk, the work is increasing while the workforce is not increasing as rapidly.

But if the medical professional and patient benefit from the accuracy, ease of use, security, and speed of ambient clinical intelligence, we all win.

Google Gemini.

Commit Traffic Crimes in 50 States…Well, 7

How does California know whether an arrested intoxicated person has a drunk driving conviction in, say, Oklahoma?

Or better still, how does Oklahoma know whether a licensed driver also has a driver’s license in, say, California?

Answer: they don’t. Because privacy.

The American Association of Motor Vehicle Administrators (AAMVA) provides participating states with a system (S2S) to check such things.

“State-to-State (S2S) Verification Service is a means for a state to electronically check with all other participating states to determine if the applicant currently holds a driver license or identification card in another state. The platform that supports S2S, the State Pointer Exchange Services (SPEXS) was successfully implemented in July 2015. Participation in S2S does not commit a state to be in compliance with the federal REAL ID Act. However, if a state chooses to be REAL ID compliant, the Department of Homeland Security generally looks for S2S to be part of their compliance plan.”
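The key design detail in the quote is the word “pointer”: the central exchange holds a pointer to the state of record, not the full driver record. A toy model of that idea (the class, method names, and hashing scheme are illustrative, not AAMVA’s actual SPEXS design):

```python
import hashlib
from typing import Optional

class PointerExchange:
    """Toy model of an S2S-style pointer index.

    The central index stores only a pointer: a hashed identifier mapped
    to the issuing state. The driver's full record stays with that state.
    """
    def __init__(self):
        self._index = {}  # hashed identifier -> issuing state

    @staticmethod
    def _key(ssn: str, dob: str) -> str:
        # Hashing the identifiers means the exchange never needs to
        # store a raw SSN, one possible answer to privacy objections.
        return hashlib.sha256(f"{ssn}|{dob}".encode()).hexdigest()

    def register(self, state: str, ssn: str, dob: str) -> None:
        self._index[self._key(ssn, dob)] = state

    def lookup(self, ssn: str, dob: str) -> Optional[str]:
        # A state checks whether the applicant already holds a
        # credential elsewhere before issuing its own.
        return self._index.get(self._key(ssn, dob))

    def release(self, ssn: str, dob: str) -> None:
        # When a driver surrenders the old license, the pointer moves.
        self._index.pop(self._key(ssn, dob), None)
```

If California doesn’t register pointers at all, of course, the lookup simply comes back empty, which is exactly the gap the Tennessee policy below works around.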

Not all states participate. As it turns out, neither California nor Oklahoma is part of S2S. Oklahoma is slated to join, but this may not happen.

“Oklahoma lawmakers have asked the state Supreme Court to immediately block the transfer of driver’s license and identification card data to a national interstate data exchange run by the American Association of Motor Vehicle Administrators (AAMVA).

“The lawmakers argue that the planned transmission exceeds statutory authority, violates state privacy protections, and collapses a key distinction that Oklahoma law makes between REAL ID-compliant and noncompliant credentials.”

Based upon history, it’s no surprise that some in Oklahoma oppose big guvmint and AAMVA S2S participation.

But why has California opted out of S2S?

Basically, the privacy of Social Security Numbers. The state doesn’t want to share this personally identifiable information willy-nilly.

(As an aside, take a moment to think about how a state is enforcing the privacy of Social Security Numbers, which are assigned at the federal level. And also think about how Social Security Numbers are NOT supposed to be a national ID number. The mind boggles.)

So what do the other states do if someone claims to have a California driver’s license, but California won’t confirm this because of privacy concerns? Here’s what Tennessee does.

“All states and jurisdictions in the United States participate in S2S, except for California, Connecticut, Illinois, Kentucky, Nevada, Oklahoma, and West Virginia. New or returning Tennessee residents transferring from these [seven] states must obtain a Motor Vehicle Record (MVR) from their former state. The MVR [must] be issued within 30 days of applying for a Tennessee license or ID.”

Good to know if I ever move out of California.