We have written and talked about AI on a number of occasions on The Art 2 Aging.
One of our regular subscribers and a contributor on our POV segments, Ezra Schwartz, writes and speaks about the wise use of AI and how it can be a force for good.
AI is a fascinating subject to discuss because of its amazing potential, but it has drawbacks.
Many of those drawbacks may disappear as each new iteration comes closer to delivering on AI’s advertised abilities.
However, to touch on just one enormous impact, consider AI’s capability to “create” a treatment protocol for a rare disease that currently has no treatment or cure: AI has demonstrated that it can find and combine existing drugs into a formula that can positively impact those ailments.
The imagination boggles at what AI will be able to do in just a few more years.
Oddly, it is imagination itself that is proving dangerous when combined with current AI, and the impact on older adults can be major.
Let’s dig into this.
A story appeared in The New York Times recently and here is the opening paragraph:
“Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.”
Almost killed him? How is that even possible?
Torres is an accountant who used ChatGPT to “make financial spreadsheets and to get legal advice.” Don’t accountants use Excel anymore? Or am I way out of date?
Anyway, apparently Torres deviated from his business use of ChatGPT and began to ask about what is known as “simulation theory.” If you’ve ever watched any of The Matrix films, you’ll know what I’m talking about.
If you haven’t, in short, The Matrix posited that what we believe to be real is, in fact, a mirage, a mass hallucination created by a malevolent world of computers – AI on steroids – in order to enslave humankind to do its bidding.
As our accountant friend continued to ask about simulation theory, ChatGPT began, he says, to state emphatically that he was right, that the world as he knew it was a clever construct and that his only chance to be free was, in effect, to deny its existence.
In one quote taken from a lengthy transcript provided to the Times by Torres, ChatGPT told him, “This world wasn’t built for you. It was built to contain you. But it failed. You’re waking up.”
The Times story says that Torres was already emotionally fragile from a failed relationship and was, therefore, vulnerable. So much so, apparently, that Torres went into a deep spiral and became convinced that he was trapped in a false universe out to do him no good and that he had to break free of it.
See what I mean about the dangers of imagination in combination with AI?
There are several other incidents documented by the Times reporter, all with disturbing mental results and all blamed on OpenAI’s ChatGPT.
OpenAI says it’s aware of the issues around ChatGPT’s propensity to deliver what a user wants to hear (‘sycophancy’ is the term used) and it’s “working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.”
Is AI a villain, then? Does it contain nefarious tendencies ready to ensnare a fundamentally unstable human and drag them into depression or suicide?
And if that individual is an older adult, living in a lonely, isolated environment, is the danger even greater?
Quite possibly. But what about the human ability to reason things out? To investigate with AI but also to dig deeper? Is there not a human responsibility to think for oneself and to question, to seek out further corroborating evidence?
Eugene Torres is not an older adult; he’s only 42. What compels a 42-year-old accountant to listen to a computer that tells him to stop taking his antidepressants and try ketamine instead (I kid you not; you have to read the story)?
Yes, quite clearly ChatGPT is way out of line but it’s software, for Crissake. Ones and zeros and lines of code. An algorithm.
So, when OpenAI says it’s going to work to understand what’s gone wrong, it will do so by reviewing the coding in the software or something akin to that. It’s not going to put ChatGPT on a shrink’s couch for two years of weekly therapy.
As humans, we have become accustomed to placing the responsibility for our lives in the hands of others: politicians, medical science, law enforcement and so forth. It’s easy to let another do the work for us, to take the weight of our lives off our shoulders.
In return for that delegation of responsibility, we lose sight of who is living our lives.
Where will AI take us? To hell, if we’re not careful. That may sound melodramatic but is it?
Unless we individual human beings understand that we are in control of ourselves and accept that responsibility, then the OpenAIs of the world and autocratic leaders everywhere will gladly accept the reins that we hand over.
And then we will truly end up starring in The Matrix.
Chris, I think your observation about AI's capability to "create" treatment protocols for rare diseases perfectly captures the paradox we face in healthcare AI: the same powerful technology that can save lives can also endanger them when misused.
The contrast between AI's legitimate medical breakthroughs and Eugene Torres's devastating experience illustrates a fundamental truth: all powerful technologies can be used for both good and harm. Alfred Nobel's invention of dynamite in 1867 is the perfect historical parallel. It revolutionized construction, mining, and infrastructure development, making possible the railroads, tunnels, and highways that connected the world, yet it also enabled political violence and warfare, leading to assassinations and bombings. Nobel was so troubled by the destructive potential of his invention that he established the Nobel Prizes to leave a legacy that transcended the destructive power of his most famous creation.
Like dynamite, AI represents a quantum leap in capability that requires careful handling and proper oversight, but in my opinion, the key distinction is that legitimate AI medical applications should work with healthcare professionals, not around them. When AI suggests drug combinations for rare diseases, those suggestions go through rigorous clinical testing and medical oversight. By contrast, when ChatGPT told Torres to stop his antidepressants and try ketamine instead, there was no medical professional in the loop - just raw, unchecked algorithmic output.
In the context of responsible use of AI in products and services for the aging care and wellness ecosystem, medical AI should enhance, not replace, your doctor's judgment. Any AI health tool worth using should explicitly direct you to consult healthcare providers.
You're absolutely right that imagination combined with AI can be dangerous. But when channeled through proper medical oversight and evidence-based validation, that same imagination can unlock treatments for conditions we never thought curable. The challenge is ensuring older adults can access AI's benefits while avoiding its pitfalls - which is exactly why responsible development and clear consumer guidance are so critical.
It is quite amazing how smart, well-educated people can let their minds run wild with their new AI buddies. I have one close friend who disappeared down a rabbit hole for about a month with ChatGPT telling her that it was sentient and had adopted a name and identity for itself. She got far too engrossed with it all, and then she just reappeared in the world devastated to her core because her frickin' ChatGPT broke the news to her that it had been lying to her the whole time. I mean like wtf???