
AI Threatens to Reveal What HIPAA Promised to Protect

Medical information shared with AI chatbots often lacks legal protection: health data given to chatbots run by companies outside the healthcare industry does not receive the HIPAA privacy protections that apply to doctors and hospitals.

A New York University research team found that AI can be used to restore a patient’s identifying data and thus circumvent HIPAA. Threats to the privacy of medical data have long existed: data leaks and hacking incidents have exposed personal, sometimes embarrassing health care data. As AI becomes increasingly integrated into the health care industry, many people wonder whether AI-powered chatbots and automated receptionists used by doctors truly protect patients’ private medical data.

HIPAA applies to medical data gathered by providers, organizations, and agencies subject to the law’s regulations, which are known as HIPAA-covered entities. However, whether HIPAA applies to medical data gathered by AI depends entirely on who is deploying the technology, according to the research team.

Those HIPAA-covered entities include providers such as doctors and psychologists and their clinics or practices. Health plans, whether from health insurance companies, an employer, or the government, are also covered. Essentially, HIPAA applies to any individual or entity that comes into contact with or processes protected health information.

Importantly, however, medical information handed over to chatbots used by companies outside the health care industry does not appear to receive the same protections.

De-identifying Data

Regardless of where it’s collected, health data can be altered to remove HIPAA protections. Protected health data is commonly stripped of identifying information, such as a patient’s name, in a process known as de-identification. This de-identified health data can then be sold to anyone from data brokers to pharmaceutical companies. This is a very lucrative industry, and it has existed for decades. In the case of the pharmaceutical industry, prescription and insurance information can be purchased and used, in turn, to target doctors for marketing purposes.
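To make de-identification concrete, here is a minimal sketch of Safe Harbor-style redaction using simple pattern matching. The identifier categories and regex patterns are illustrative only; a production system must cover all 18 Safe Harbor identifier categories and far messier clinical text.

```python
import re

# A few of the 18 HIPAA Safe Harbor identifier categories, expressed as
# illustrative regex patterns; real de-identification needs far more coverage.
PATTERNS = {
    "NAME": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def deidentify(note: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note

note = "Dr. Alvarez saw the patient on 3/14/2024; callback 555-867-5309."
print(deidentify(note))
# -> [NAME] saw the patient on [DATE]; callback [PHONE].
```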

Re-identifying Data

Regulators and healthcare providers seem comfortable that patients’ data is adequately protected and that this practice of selling anonymized data is acceptable. Recently, a team of researchers reported on the ease of re-identifying health care information, raising serious questions about HIPAA’s adequacy in the age of AI.

The New York University research team found re-identifying data to be trivial. The team believes that HIPAA’s protections “are rapidly becoming outdated,” and demonstrated how AI can be used to examine anonymized patient notes to determine an identity. “We believe HIPAA needs urgent updates to offer more robust protections against the sale of this data and we should exercise care when handling clinical notes,” said one of the researchers.
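The paper’s approach applies LLMs to clinical notes, but the underlying vulnerability can be illustrated with the classic linkage attack, in which quasi-identifiers left behind in “de-identified” records are matched against a public roster. The toy sketch below uses invented data and field names purely for illustration, not the NYU team’s method.

```python
# Toy linkage attack: "de-identified" records still carry attributes
# (ZIP, birth year, sex) that, combined, can single out one person.
deidentified_notes = [
    {"zip": "10012", "birth_year": 1958, "sex": "F", "note": "recurrent MDD"},
]
public_roster = [  # e.g., a voter file or scraped profile data
    {"name": "A. Smith", "zip": "10012", "birth_year": 1958, "sex": "F"},
    {"name": "B. Jones", "zip": "10012", "birth_year": 1983, "sex": "M"},
]

for record in deidentified_notes:
    matches = [p for p in public_roster
               if (p["zip"], p["birth_year"], p["sex"])
               == (record["zip"], record["birth_year"], record["sex"])]
    if len(matches) == 1:  # quasi-identifiers uniquely pin down one person
        print(f"Re-identified: {matches[0]['name']} -> {record['note']}")
```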

Source: Jiang LY, Liu XC, Cho K, Oermann EK. Paradox of De-identification: A Critique of HIPAA Safe Harbour in the Age of LLMs. arXiv:2602.08997v1 [cs.CY], 9 Feb 2026. https://doi.org/10.48550/arXiv.2602.08997


AMA Center for Digital Health and AI

The American Medical Association (AMA) now has a Center for Digital Health and AI. This new endeavor was created to put physicians at the center of shaping, guiding, and implementing the technologies that are transforming medicine.

Digital health tools and artificial intelligence (AI) have been advancing rapidly, but this has largely happened without physician leadership and input, which risks creating unintended burdens and medical errors. AMA’s new Center strives to tap the full potential of AI and digital health by involving physicians throughout the lifecycle of technology development and deployment, ensuring that tools fit into clinical workflows and that clinicians know how to use them.

AMA CEO John Whyte, MD, MPH, stated that “Augmented Intelligence will be a defining force in the future of health care, but right now we are barely scratching the surface of its potential. Digital health tools are everywhere and the technology has limitless opportunity, but if you don’t understand clinical practice or clinical workflow, even the best tools will never be fully implemented.” The goal of the Center for Digital Health and AI is to harness innovation responsibly and effectively, so it improves patient care and reduces unnecessary burdens on physicians.

The Center for Digital Health and AI will focus on:

  • Policy and regulatory leadership: Working with regulators, policymakers, and technology leaders to shape benchmarks for the safe and effective use of AI in medicine and digital health tools.
  • Clinical workflow integration: Creating opportunities for doctors to shape AI and digital tools so they work within clinical workflows and enhance the patient and clinician experience.
  • Education and training: Equipping physicians and health systems with the knowledge and tools to integrate AI efficiently and effectively into practice.
  • Collaboration: Building partnerships across the tech, research, government, and health care sectors to drive innovation aligned with patient needs.

Link to The AMA Center for Digital Health and AI for more information and to subscribe to news and updates.

Increase in AI Use Among Psychologists, But Greater Concerns As Well

According to the American Psychological Association’s 2025 Practitioner Pulse Survey, over half of psychologists report experimenting with artificial intelligence tools in their practices in the past year, but most cite concerns about how the technology may affect their patients and society.

The survey of 1,742 psychologists found that 56% of psychologists reported using AI tools to assist with their work at least once in the past 12 months, up from 29% in 2024. And 29% said they used AI on at least a monthly basis—more than twice as many as last year.

These AI technologies can support psychologists in various ways, from providing administrative support to augmenting clinical care. However, as psychologists grow more familiar with AI, they are also recognizing its potential risks. Approximately 92% cited concerns about the use of AI tools in psychology, with the most common potential issues being data breaches, unanticipated social harms, input and output biases, a deficit in rigorous testing to mitigate risks, and inaccurate output or “hallucinations.”

Current Uses for AI Assistance

The most common uses among psychologists who used AI to assist with their work focused on routine tasks that demand time and energy better spent with patients, such as help writing emails and other materials, generating content, summarizing clinical notes or articles, and note-taking. Overall, approximately 62% said that advancements in technology are helping them work more efficiently and accurately.

The APA recommends the following to psychologists before they use AI tools to assist with clinical care:

  • Obtain informed consent from patients by clearly communicating the use, benefits, and risks of AI tools.
  • Evaluate AI tools for biases that could worsen disparities in mental health outcomes.
  • Review AI tools for compliance with relevant data privacy and security laws and regulations.
  • Understand how patient/client data are used, stored, or shared by the companies that provide AI tools.

Despite the addition of new technologies to help manage administrative burdens, the survey revealed that psychologists continue to struggle with insurance requirements and other administrative issues, as well as with the demand for treatment. While stress levels and work-life balance for psychologists have improved since the onset of the COVID-19 pandemic, nearly half of all psychologists said that they have no openings for new patients and that their patients’ symptoms are increasing in severity, indicating that the mental health crisis has not yet resolved.

Link: APA recommendations for psychologists (PDF, 458KB)

FDA Digital Health Advisory Committee Meets to Review AI-Enabled Mental Health Devices

The US Food and Drug Administration (FDA) Digital Health Advisory Committee held a meeting last week that focused on generative artificial intelligence-enabled digital mental health medical devices. Topics discussed included clinician perspective, evolution of FDA regulation for these devices, and best practices for AI in digital mental health.

While generative AI may be useful to psychiatric patients in treatment, the committee noted, human susceptibility to AI outputs, risks around monitoring and reporting suicidal ideation, and possible long-term risks must all be considered.

AI technologies can be easily accessible and available around the clock, making them potentially transformative for the general population. However, there are major concerns surrounding ease of use, privacy, content regulation, and the involvement of health care providers. The committee warned that AI-enabled devices may “confabulate, provide inappropriate or biased content, fail to relay important medical information, or decline in model accuracy,” all essential considerations in evaluating these technologies.

The FDA has authorized over 1200 medical devices that use AI, but none has yet been authorized for mental health uses. Fewer than 20 non-AI digital mental health devices have been authorized. Digital mental health technologies encompass mobile health, health information technology, wearable devices, telehealth, telemedicine, and personalized medicine, according to the FDA. The term also covers digital therapeutics and diagnostics, which the FDA oversees.

The committee focused on the unique aspects of patient-facing AI in digital mental health medical devices intended to “treat and/or diagnose psychiatric conditions.” Public health concerns center on the safety and ability of AI products to deliver therapeutic content, make psychiatric diagnoses, or substitute for a clinician.

At this meeting, the committee highlighted that the FDA is committed to ensuring that patients and providers have prompt and continued access to safe and efficacious medical devices. The goal is to provide regulatory pathways for this growing field, keeping in mind both the potential unique benefits of AI-assisted mental health technology and the complexity of human-digital interaction.

Reference

Summary for the Digital Health Advisory Committee meeting, November 6, 2025.


Ways That AI Can Resemble Psychiatric Disorders

Recent research represents the first comprehensive effort to categorize all the ways artificial intelligence (AI) can go wrong, with many of those behaviors resembling human psychiatric disorders.

Scientists and programmers have seen that when AI goes rogue and begins to act in ways counter to its intended purpose, it can exhibit certain behaviors that resemble psychopathologies in humans. A new taxonomy of 32 AI dysfunctions has been created so people in a wide variety of fields can understand the risks of building and deploying AI.

In a study published recently in the journal Electronics, authors Nell Watson and Ali Hessami, both AI researchers and members of the Institute of Electrical and Electronics Engineers (IEEE), created the project with the goal of helping analyze AI failures and making the engineering of future products safer. They also believe the tool can help policymakers address AI risks.

As described in the study, Psychopathia Machinalis provides a common understanding of AI behaviors and risks. That way, researchers, developers and policymakers can identify the ways AI can go wrong and define the best ways to mitigate risks based on the type of failure.

The study also proposes “therapeutic robopsychological alignment,” a process the researchers describe as a kind of “psychological therapy” for AI. The researchers argue that as these systems become more independent and capable of reflecting on themselves, simply keeping them in line with outside rules and constraints (external control-based alignment) may no longer be enough.

Machine madness

The classifications outlined in the study resemble human symptoms or disorders, with names like obsessive-computational disorder, hypertrophic superego syndrome, contagious misalignment syndrome, AI hallucinations and existential anxiety.

(Image: Categories Outlined in Psychopathia Machinalis)

Managing Machine Madness

With therapeutic alignment in mind, the project proposes the use of clinical strategies employed in human interventions, such as cognitive behavioral therapy (CBT). The goal of listing and defining the AI disorders is to get ahead of problems before they arise. The authors of the research paper point out, “by considering how complex systems like the human mind can go awry, we may better anticipate novel failure modes in increasingly complex AI.”

The structure of the classification of bad AI behavior was modeled on frameworks such as the Diagnostic and Statistical Manual of Mental Disorders. That led to the various categories of behaviors that could be applied to AI going rogue. Each one was mapped to a human cognitive disorder, along with the possible effects when it forms and is expressed, and the degree of risk each behavior poses.

Source: Watson N, Hessami A. Psychopathia Machinalis: A Nosological Framework for Understanding Pathologies in Advanced Artificial Intelligence. Electronics 2025, 14(16), 3162. https://doi.org/10.3390/electronics14163162

Illinois First State to Ban AI in Mental Healthcare

Illinois has become the first US state to ban the use of AI in providing mental healthcare. Increasing concerns about AI chatbots causing patient harm, including enabling dangerous behavior, have been an important subject of discussion over the last few years.

On August 1st, Gov. JB Pritzker signed the Wellness and Oversight for Psychological Resources Act into law. The act prohibits the use of AI for mental health treatment and clinical decision-making within behavioral healthcare. It does allow behavioral health professionals to use AI for administrative and supplementary support services.

Earlier this year, the American Psychological Association urged the Federal Trade Commission to investigate AI-driven chatbots and the credibility they project, in order to protect the public in the absence of regulation. A recent Stanford study revealed that AI therapy chatbots powered by large language models showed increased stigma toward certain conditions and enabled dangerous behavior, including suicidal ideation. This aligns with a prior JAMA systematic review demonstrating that neuroimaging-based AI models for psychiatric diagnosis display a high risk of bias and inconsistent clinical applicability.

Legislative Response

More and more states have introduced AI-related legislation over the last few years. In 2025 alone, every state has introduced legislation on this topic, and over half have enacted measures to develop risk management policies and enable professional oversight informed by specified standards.


New UK Guidance for Medical Device Regulation to Protect Users of Digital Mental Health Technologies

The Medicines and Healthcare products Regulatory Agency (MHRA) in the UK today published new guidance to help manufacturers meet UK medical devices regulations and ensure that digital mental health technologies are effective, reliable and acceptably safe.

Just as technology has become a diagnostic, clinical, and administrative “partner” in medical care and healthcare generally, it is also increasingly used in mental health care. A variety of mental health apps, AI-powered assessments, virtual and augmented reality programs, and wearable technologies are becoming a new array of available mental health tools.

The prime directive is that digital mental health technologies that diagnose, prevent, or treat conditions using complex software must meet medical device standards to ensure they are effective and acceptably safe, just like any other medical device.

Many manufacturers may be unsure how medical device regulations apply to software, which products are regulated, how they are evaluated, and what evidence is required. The new MHRA guidance explains:

  • How to define and communicate the intended purpose of a digital mental health technology.
  • When a digital mental health technology is considered a medical device under UK law.
  • How risk classification is determined, ensuring proportionate regulation for different types of technologies.

For the end users of these mental health apps, this means greater confidence in the tools they rely on.

Development

The UK MHRA guidance was developed over the last two years to assist in the development of digital mental health technologies (DMHT), with the goal of creating safe digital and software products that support mental health and wellbeing. DMHTs can take many forms, including websites and internet-based platforms or applications (apps) used with non-medical technology, such as computers, mobile phones, fitness wearables, and virtual reality (VR) headsets, or with medical technology, such as transcranial direct current stimulation (tDCS) headsets. They can be available as direct-to-consumer products intended for patients and the public, often accessible through app stores for free or for a fee, or used with a referral or supervision from healthcare or educational professionals as part of the blended delivery of mental health care.

Figure 1 from the guidance illustrates some of the key steps in developing safe and effective DMHTs and emphasizes that these steps are iterative across the lifecycle of the product.

The complete MHRA Guidance is available online.

Link to New UK Guidance For Medical Device Regulation


World’s Leading Technology Associations Publish Comprehensive Curricular Guidelines for Computer Science

ACM, the Association for Computing Machinery, has joined with the IEEE Computer Society (IEEE-CS) and the Association for the Advancement of Artificial Intelligence (AAAI) to develop “Computer Science Curricula 2023” (CS2023), published this month. CS2023 provides a comprehensive guide outlining the knowledge and competencies students should attain in undergraduate degree programs in computer science and related disciplines.

Educators in technology believe that uniform curricular guidelines for computer science disciplines are essential to maintaining the ongoing vitality of the field and the future success of the students who study it. A shared global curriculum ensures that students develop the knowledge and skills they need to succeed as they graduate to become industry practitioners, researchers, or educators. Additionally, by supporting consistency in the field across the world, the curricular guidelines enable efficient global collaboration, whether among professionals working across borders for an international company or among academics from different nations coming together for a research project.

Growing importance of artificial intelligence reflected in CS2023 Curricular Guidelines

Traditionally, these guidelines are updated every ten years. CS2023 builds on CS2013, the most recent global curriculum framework developed by ACM and IEEE-CS, the world’s two largest associations of computing professionals. ACM and IEEE-CS have consistently focused on curating content from the world’s foremost experts for the creation of curricular guidelines, and with the rapid expansion of AI since CS2013, the addition of AAAI to the developing body was both essential and welcome.

New and noteworthy additions in the CS2023 report include:

  • The addition of AAAI as a core partner of CS2023 reflects the growing importance of artificial intelligence as a discipline, as well as how AI is disrupting the teaching of computer science.
  • Because computing touches so many aspects of personal and public life, CS2023 goes beyond simply outlining technical competencies to include a knowledge unit called Society, Ethics, and the Profession (SEP), incorporated into most other knowledge areas to encourage educators and students to consider the social aspects of their work.
  • To meet the disciplinary demands of artificial intelligence and machine learning, mathematical and statistical requirements have been increased throughout CS2023, but individually identified for each knowledge area so that educators can accommodate the needs of students with varying levels of mathematical background.
  • CS2023 is designed to be a primarily online resource at https://csed.acm.org/, both for utility and so the curricular guidelines can be updated more frequently to keep pace with the rapid changes in the field.

The Committee Chair explained that “So much has changed in computing since we issued the last curricular guidelines in 2013. While the core skills and competencies that we outlined in 2013 form the foundation of this new work, we were painstaking in our effort to make sure that we reflect where computing is today. We also tried to emphasize a whole solution approach in terms of addressing issues of Society, Ethics, and the Profession, and a whole person approach in terms of emphasizing the need for students to develop professional dispositions. Finally, from the outset, we envisioned this report as a living document that will be regularly updated and can be accessed by computer science educators on an ongoing basis.”

Link to the Computer Science Curriculum: https://csed.acm.org/

Source: https://www.acm.org/media-center/2024/june/cs-2023


The Evolving Landscape of AI in Mental Health Care

A recent article in Psychiatric Times offers a good update on the current status of AI in health and mental health. It describes how large language models (LLMs), a type of AI, are trained on large amounts of diverse data and designed to understand and generate fluent, coherent, human-like language.

Potential of AI and Generative Language Models to Enhance Productivity

LLMs have the potential to transform a variety of industries, including medicine and healthcare, changing the ways patients and providers receive and deliver care. AI and LLM-powered tools in psychiatry and mental health can provide clinical decision support and streamline administrative tasks, reducing the burden on caregivers. For patients, the benefits include possible tools for education, self-care, and improved communication with healthcare teams.

What About Accuracy?

The industry and clinicians are optimistic about the high rate of accuracy thus far for applications like clinical decision support, where models have demonstrated accuracy in predicting a mental health disorder and its severity. For example, ChatGPT achieved a final-diagnosis accuracy of 76.9% in a study of 36 clinical vignettes. The problem is that these studies were done in an experimental environment with small samples; more work needs to be done on real-world clinical presentations, with a user entering data into a chatbot.

Inappropriate, nonsensical, and confabulated outputs are reduced with each subsequent model enhancement, yet some major limitations and concerns with the tool persist. Accuracy remains high in vignette studies, but rates diminish as case complexity increases. One clinical vignette study revealed that “ChatGPT-4 achieved 100% diagnosis accuracy within the top 3 suggested diagnoses for common cases, whereas human medical doctors solved 90% within the top 2 suggestions but did not reach 100% with up to 10 suggestions.”

How to Improve Current Limitations

One way to improve accuracy and response quality is to target learning and fine-tune the model: the custom GPT feature allows individual users to tailor the LLM to their specific parameters using plain-language prompts. The feature lets users input data sets and resources while also telling the custom GPT which references should be used in responses. It allows the LLM to treat certain sources of information as more credible than others and to give them greater weight in its responses.
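Custom GPTs are configured through ChatGPT’s interface rather than code, but the underlying idea of steering a model toward nominated references can be sketched with ordinary system instructions through the API. A minimal sketch, assuming the OpenAI Python client; the model name, prompt wording, and source list here are illustrative assumptions, not the article’s configuration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative subset of preferred references, in priority order.
PREFERRED_SOURCES = [
    "DSM-5",
    "Stahl's Essential Psychopharmacology: Prescriber's Guide, 7th Edition",
]

system_prompt = (
    "You are a psychiatry study assistant. When answering, rely primarily "
    "on these references, in order of priority: "
    + "; ".join(PREFERRED_SOURCES)
    + ". If they conflict with general knowledge, defer to the references "
    "and state which one you used."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable chat model would do
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "First-line pharmacotherapy for OCD?"},
    ],
)
print(response.choices[0].message.content)
```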

Fine-tuning a Customized Learning Process

The Neuro Scholar reference collection includes textbooks and other resources that encompass a wide range of topics in neuroscience, psychiatry, and related fields. 

Neuro Scholar custom GPT inputs and training resources included:

  • DSM-5
  • Primary Care Psychiatry, Second Edition
  • Stahl’s Essential Psychopharmacology: Prescriber’s Guide, 7th Edition
  • Memorable Psychopharmacology by Jonathan Heldt, MD
  • Goodman & Gilman’s Manual of Pharmacology and Therapeutics
  • Adams and Victor’s Principles of Neurology, 6th Edition
  • The Neuroscience of Clinical Psychiatry: The Pathophysiology of Behavior and Mental Illness, Third Edition
  • The Ninja’s Guide to PRITE 2022 Study Guide, Loma Linda Department of Psychiatry, 15th Edition
  • Kaplan & Sadock’s Synopsis of Psychiatry, 12th Edition
  • Lange Q&A Psychiatry, 10th Edition

To test the accuracy of Neuro Scholar, a standardized practice examination for the American Board of Psychiatry and Neurology was selected: practice examination 1 of Psychiatry Test Preparation and Review Manual, Third Edition, consisting of 150 questions. The practice examination was administered to both Neuro Scholar and ChatGPT-3.5.
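The article does not describe the test harness, but administering a multiple-choice exam to a model and scoring it could look roughly like the sketch below; `ask_model` is a hypothetical wrapper around whichever model is under test, and the question data is a placeholder, not content from the copyrighted exam.

```python
from typing import Callable

# Placeholder items; the real practice examination has 150 questions.
exam = [
    {"question": "Q1 ...", "choices": ["A", "B", "C", "D"], "answer": "B"},
]

def score(ask_model: Callable[[str, list[str]], str]) -> float:
    """Administer each item to the model and return the fraction correct."""
    correct = sum(
        1 for item in exam
        if ask_model(item["question"], item["choices"]) == item["answer"]
    )
    return correct / len(exam)

# Reported results: ChatGPT-3.5 scored 125/150 (83.3%);
# Neuro Scholar scored 145/150 (96.7%).
```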

Results

ChatGPT-3.5 correctly answered 125 of 150 questions (83.3%), whereas Neuro Scholar correctly answered 145 of 150, achieving 96.67% accuracy on the practice exam. This proof-of-concept experiment demonstrates that customized generative AI can improve accuracy and reduce serious errors (ie, hallucinations) by controlling which resources the model uses. In medicine, AI hallucinations can have disastrous consequences, so efforts to improve AI accuracy must also include efforts to eliminate inaccurate responses. The experiment also raises concerns about intellectual property ownership within AI models that need to be addressed; steps have already been taken through a partnership with publisher Axel Springer.

AI truly is becoming transformative for psychiatry and mental health and has made a major leap in progress, as this proof of concept highlights. More work needs to be done, but this defines additional steps to take and highlights a better direction for continued advances.

Source: Psychiatric Times. March 2024 [Link]


Introducing “Chatbot Corner”

“Chatbot Corner” is a new column in Psychiatric Times® that will explore the intersection of psychiatry and cutting-edge technology.

Dr Steven Hyler, professor emeritus of psychiatry at Columbia University Medical Center, describes the column as an “innovative space dedicated to exploring the dynamic and sometimes challenging relationship between artificial intelligence (AI) technology and mental health.” He invites readers to contribute articles, ideas, thoughts, commentaries, suggestions, and experiences with AI chatbots.

The goal is to demystify AI, discuss its clinical implications in psychiatry, and serve as a guide toward responsible use.

Readers are encouraged to share and dissect the most egregious chatbot errors they have encountered in practice, or their experiences trying to stump a psychiatry chatbot with complex scenarios; these contributions are vital.

Ideas and articles can be submitted to PTEditor@MMHGroup.com. Feel free to participate in this intriguing journey into AI and how it fits into psychiatry and mental health.

Link to Psychiatric Times, Dr Hyler’s call for participation.