Back in the good old days, Dr. Welby’s word was law, and nobody questioned it.
Then we started buying medical advice books and researching things ourselves.
Later we moved on to peer-reviewed consumer medical websites to do the same.
Then we took our medical advice from late-night TV commercials and Internet advertisements.
Finally, we turned to generative AI to answer our medical questions.
With potentially catastrophic results.
So how do we fix this?
The U.S. National Institute of Standards and Technology (NIST) says that we should…drumroll…adopt standards.
Which is what you’d expect a standards-setting government agency to say.
But since I happen to like NIST, I’ll listen to its argument.
“One way AI can prove its trustworthiness is by demonstrating its correctness. If you’ve ever had a generative AI tool confidently give you the wrong answer to a question, you probably appreciate why this is important. If an AI tool says a patient has cancer, the doctor and patient need to know the odds that the AI is right or wrong.
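That phrase, “the odds that the AI is right or wrong,” has a concrete counterpart in machine learning: calibrated probabilities. Here is a minimal sketch, using scikit-learn and its built-in breast-cancer dataset (the model and data are my illustrative assumptions, not anything NIST prescribes), of a classifier that reports the odds alongside its answer.

```python
# A minimal sketch of "knowing the odds": a toy classifier that reports a
# calibrated probability next to each prediction. Dataset, model, and
# calibration settings here are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Wrap the base model so its probabilities are calibrated on held-out folds:
# a prediction of 0.9 should be right roughly nine times out of ten.
model = CalibratedClassifierCV(LogisticRegression(max_iter=5000), cv=5)
model.fit(X_train, y_train)

for prob in model.predict_proba(X_test[:3]):
    # Report the odds, not just the verdict, so a doctor can weigh the answer.
    print(f"P(malignant) = {prob[0]:.2f}, P(benign) = {prob[1]:.2f}")
```

The design point is that the doctor sees a probability, not a bare yes-or-no, which is exactly the trust signal the NIST quote is asking for.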
“Another issue is reliability, particularly of the datasets AI tools rely on for information. Just as a hacker can inject a virus into a computer network, someone could intentionally infect an AI dataset to make it work nefariously.”
So we know the risks, but how do we mitigate them?
“Like all technology, AI comes with risks that should be considered and managed. Learn about how NIST is helping to manage those risks with our AI Risk Management Framework. This free tool is recommended for use by AI users, including doctors and hospitals, to help them reap the benefits of AI while also managing the risks.”
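The framework itself is process guidance, not code, but the dataset-poisoning scenario quoted earlier has a simple first-line control in the same spirit: pin your training data to a cryptographic digest recorded when the data was vetted, and refuse to train if it no longer matches. A minimal Python sketch, with a hypothetical file name and placeholder digest:

```python
# A minimal sketch of one dataset-integrity control: verify a training file
# against a known-good SHA-256 digest before using it, so silent tampering
# is caught up front. File name and expected digest are hypothetical.
import hashlib
import sys
from pathlib import Path

EXPECTED_SHA256 = "..."  # hypothetical digest recorded when the dataset was vetted

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

dataset = Path("training_data.csv")  # hypothetical dataset file
if sha256_of(dataset) != EXPECTED_SHA256:
    sys.exit("Dataset digest mismatch: refusing to train on possibly poisoned data.")
print("Dataset integrity verified; proceeding to training.")
```

One check like this is not a risk management program, which is the framework’s whole point: it asks organizations to map and manage risks like poisoning systematically, not one script at a time.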
