AI Threatens to Reveal What HIPAA Promised to Protect

Medical information shared with AI chatbots often lacks protection: health data given to chatbots from companies outside the healthcare industry does not receive the HIPAA privacy protections that apply to doctors and hospitals.

A New York University research team found that AI can be used to restore a patient’s identifying data, and thus circumvent HIPAA. Threats to the privacy of medical data have long existed. Data leaks and hacker invasions have exposed personal, sometimes embarrassing health care data. As AI becomes increasingly integrated into the health care industry, many people wonder whether AI-powered chatbots and automated receptionists used by doctors truly protect patients’ private medical data.

HIPAA applies to medical data gathered by providers, organizations and agencies subject to the law's regulations, which are known as HIPAA-covered entities. However, whether HIPAA applies to medical data gathered by AI depends entirely on who is deploying the technology, according to the research team.

Those HIPAA-covered entities include providers such as doctors and psychologists and their clinics or practices. Health plans — whether from health insurance companies, an employer or the government — are also covered. Essentially, HIPAA applies to any such covered individual or entity that comes into contact with or processes protected health information.

Importantly, however, medical information handed over to chatbots used by companies outside the health care industry does not appear to receive the same protections.

De-identifying Data

Regardless of where it's collected, health data can be altered to remove HIPAA protections. Protected health data is commonly stripped of identifying information, such as a patient's name, in a process known as de-identification. This de-identified health data can then be sold to anyone from data brokers to pharmaceutical companies. This is a very lucrative industry, and it has existed for decades. In the case of the pharmaceutical industry, prescription and insurance information can be purchased and used to target doctors for marketing purposes.
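To make the process concrete, here is a minimal, hypothetical Python sketch of Safe Harbor-style de-identification: it drops a few direct identifiers and coarsens dates to the year. The field names and record are invented for illustration; a real pipeline would need to cover all 18 HIPAA identifier categories, including identifiers buried in free-text notes.

```python
import re

# HIPAA Safe Harbor lists 18 identifier types (names, geographic units
# smaller than a state, dates other than year, phone numbers, emails,
# SSNs, and so on). This toy sketch handles only a few of them.
DIRECT_IDENTIFIER_FIELDS = {"name", "phone", "email", "ssn", "address"}

def deidentify_record(record: dict) -> dict:
    """Return a copy of a patient record with direct identifiers dropped
    and dates coarsened to year only (a partial Safe Harbor treatment)."""
    clean = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIER_FIELDS:
            continue  # drop direct identifiers entirely
        if field.endswith("_date"):
            # Keep only the year, e.g. "2024-03-15" -> "2024"
            match = re.match(r"\d{4}", str(value))
            clean[field] = match.group(0) if match else None
        else:
            clean[field] = value
    return clean

record = {
    "name": "Jane Doe",               # fictional example data
    "ssn": "123-45-6789",
    "admission_date": "2024-03-15",
    "diagnosis": "major depressive disorder",
    "zip": "10012",                   # note: ZIP survives this naive pass
}
print(deidentify_record(record))
# {'admission_date': '2024', 'diagnosis': 'major depressive disorder', 'zip': '10012'}
```

Note that the ZIP code survives this pass untouched; quasi-identifiers like that are exactly what makes re-identification possible, as discussed next.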

Re-identifying Data

Regulators and healthcare providers have long seemed comfortable that patient data is adequately protected and that this practice of selling anonymized data is acceptable. Recently, a team of researchers reported on the ease of re-identifying health care information, raising serious questions about HIPAA's adequacy in the age of AI.

The New York University research team found re-identifying data to be trivial. The team believes that HIPAA's protections “are rapidly becoming outdated,” and demonstrated how AI can be used to examine anonymized patient notes and determine a patient's identity. “We believe HIPAA needs urgent updates to offer more robust protections against the sale of this data and we should exercise care when handling clinical notes,” said one of the researchers.
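The NYU demonstration used AI on clinical notes, but the underlying vulnerability is the classic linkage attack, long documented in privacy research: the quasi-identifiers that de-identification leaves behind can be cross-referenced with public records. The toy Python sketch below, using entirely fictional data, shows how few attributes it can take to narrow an “anonymous” record to a single person; it illustrates the general risk, not the paper's LLM-based method.

```python
# Toy linkage attack: match the quasi-identifiers that de-identification
# leaves behind against a public directory (e.g., voter rolls).
deidentified_record = {"zip": "10012", "birth_year": "1985", "sex": "F",
                       "diagnosis": "major depressive disorder"}

public_directory = [  # entirely fictional records
    {"name": "Alice Smith", "zip": "10012", "birth_year": "1985", "sex": "F"},
    {"name": "Maria Lopez", "zip": "10012", "birth_year": "1990", "sex": "F"},
    {"name": "John Chen",   "zip": "10013", "birth_year": "1985", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

matches = [person for person in public_directory
           if all(person[k] == deidentified_record[k]
                  for k in QUASI_IDENTIFIERS)]

if len(matches) == 1:
    # A unique match links the sensitive diagnosis to a named person.
    print(f"Re-identified: {matches[0]['name']} has "
          f"{deidentified_record['diagnosis']} on record")
```

In practice, combinations of ZIP code, birth date and sex have been shown to uniquely identify a large share of the US population, which is why stripping obvious identifiers alone may not be sufficient.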

Source: Jiang LY, Liu XC, Cho K, Oermann EK. Paradox of De-identification: A Critique of HIPAA Safe Harbour in the Age of LLMs. arXiv:2602.08997v1 [cs.CY]. 9 Feb 2026. https://doi.org/10.48550/arXiv.2602.08997


Interview with Ronald C. Kessler: Elucidating The Population Burden of Mental Disorders 2026

Dr. Ronald C. Kessler, Professor of Health Care Policy at Harvard Medical School, is widely recognized as the world's most published and influential psychiatric epidemiologist. He has been internationally known over the last few decades for leading major national and global epidemiological surveys on the population prevalence and correlates of mental disorders. His work has transformed how the field understands the burden, distribution, and treatment of common mental illnesses and suicide-related behaviors. Dr. Kessler was the Principal Investigator of the US National Comorbidity Survey, the first nationally representative survey of mental disorders in the United States, and numerous follow-up and replication studies that have mapped changes in mental health and service use over time. He served as Director of the World Health Organization World Mental Health Survey Initiative, a program of community surveys in over 30 countries that has provided the empirical foundation for national mental health policies and resource allocation decisions worldwide.

In a new interview published today in Genomic Psychiatry by Genomic Press, Dr. Kessler describes how population-scale surveys across more than 30 countries revealed staggering treatment gaps and what he is now doing to close them. Recognized as the most cited author in psychiatry and psychology worldwide, with more than 1,300 publications cited over 330,000 times, Dr. Kessler built the infrastructure for measuring mental illness at the population level, work that spans continents and has reshaped how governments allocate resources for psychiatric care.

He discusses his career path and choices, highlighting things he might have done differently or at a different point in time. He also describes the influences that shaped him, his primary focal points within his chosen field of science, what he enjoys most in his life's work, and what he believes still needs to be accomplished in the field.

He also describes how he spends his “off-time” and what he does for fun and enjoyment.

Part 2 of the interview is a series of selected questions from the Proust Questionnaire.

Citation: Kessler RC. Shining Light on the Hidden Impact of Mental Disorders on People and Communities Everywhere. Genomic Psychiatry. Published February 3, 2026. DOI: 10.61373/gp026k.0021


Current and Potential Risks of GenAI on Medical Education


A recent editorial in BMJ Evidence-Based Medicine examines the potential risks associated with the use of generative AI in medical education. Researcher-educators from the University of Missouri School of Medicine describe the importance and necessity of clearly defining and delineating these AI risks for learners, so that educators can develop and offer specific strategies to implement rather than relying on vague warnings such as “use with caution.”

In just a few short years, generative artificial intelligence (AI) has evolved from an experimental technology into an everyday tool. Large language models (LLMs) are now capable of writing clinically relevant text that appears quite competent and is believably comparable to human output. AI tools have been incorporated into student and faculty reviews and workflows and accepted in clinical communication. Surveys demonstrate widespread adoption among learners for writing assignments and reviews, often without any institutional guidance or policies in place to offer warnings or suggest balance. Unfortunately, published reviews often describe only the benefits and possibilities of AI in today's and tomorrow's healthcare.

The authors suggest six risk categories:

  • Automation bias
  • Outsourcing of reasoning
  • Loss of skills
  • Racial and demographic biases
  • Hallucinations, defined as false information presented with confidence
  • Data privacy and security concerns

Most Concerning Risk

The authors suggest that the most concerning risk to medical students is loss of skills. Where experienced physicians have developed mental models, pattern recognition, reasoning habits and critical skills over years of practice, students are in the process of building these competencies. Similar to the way many students everywhere use the Internet to answer questions or solve problems, the retrieval of information is “outsourced” and many will “skip the very effort that generates lasting learning and expertise.”

Experienced clinicians can often recognize when an AI suggestion is incorrect. In contrast, students have not yet internalized the parameters needed to detect subtle but potentially dangerous errors.

Another risk highlighted by the study is the outsourcing of reasoning, a process that tends to occur gradually and almost imperceptibly. AI models produce fluent, polished responses that can lead users to abandon independent information seeking, critical appraisal, and knowledge synthesis. Over time, this results in the deterioration of skills that should be continuously reinforced. A specific warning sign or ‘red flag’ highlighted by the authors is “when a student can no longer explain a concept, a differential diagnosis, or a treatment plan in their own words without first checking what the AI thinks.”

Other specific indicators of technological dependence include rarely consulting primary sources, avoiding solving exercises or drafting texts independently, and performing poorly on oral examinations without access to AI tools. “Incorporating regular periods of study and self-assessment without AI is a simple way for students to monitor whether their own reasoning remains intact.”

Other suggestions:

  • Use AI as a second opinion only
  • Critically review AI recommendations
  • Minimize overconfidence in AI
  • Create confidence-calibration laboratories where students can practice rejecting problematic AI-generated responses
  • Reassess student thinking by having students demonstrate their reasoning processes
  • Improve institutional policies
  • Require transparent disclosure of AI use in academic work
  • Address the biases embedded in a model's training data
  • Maintain vigilant awareness of privacy and data security risks, particularly in healthcare settings

Risks of over-reliance on AI

With widespread use across a vast array of tasks, and alongside the potential benefits, the risks stemming from over-reliance on these tools are growing proportionally. Medical schools and training programs need to consider risks such as automation bias, cognitive off-loading and outsourcing of reasoning, de-skilling (with the greatest harm to novices), bias and inequity, hallucinated content and sources, and privacy, security and data governance.

Source: Hough J, Culley N, Erganian C, Alahdab F. Potential risks of GenAI on medical education. BMJ Evidence-Based Medicine. https://doi.org/10.1136/bmjebm-2025-114339



AMA Center for Digital Health and AI

The American Medical Association (AMA) now has a Center for Digital Health and AI. This new endeavor was created to put physicians at the center of shaping, guiding, and implementing the technologies that are transforming medicine.

Digital health tools and artificial intelligence (AI) have been advancing rapidly, but too often without physician leadership and input, which can create serious risks, unintended burdens, and medical errors. AMA's new Center strives to tap the full potential of AI and digital health by involving physicians throughout the lifecycle of technology development and deployment, ensuring that tools fit into clinical workflows and that clinicians know how to use them.

AMA CEO John Whyte, MD, MPH, stated that “Augmented Intelligence will be a defining force in the future of health care, but right now we are barely scratching the surface of its potential. Digital health tools are everywhere and the technology has limitless opportunity, but if you don’t understand clinical practice or clinical workflow, even the best tools will never be fully implemented.” The goal of the Center for Digital Health and AI is to harness innovation responsibly and effectively, so it improves patient care and reduces unnecessary burdens on physicians.

The Center for Digital Health and AI will focus on:

  • Policy and regulatory leadership: Working with regulators, policymakers, and technology leaders to shape benchmarks for safe and effective use of AI in medicine and digital health tools.
  • Clinical workflow integration: Creating opportunities for doctors to shape AI and digital tools so they work within clinical workflows and enhance patient and clinician experience.
  • Education and training: Equipping physicians and health systems with knowledge and tools to integrate AI efficiently and effectively into practice.
  • Collaboration: Building partnerships across the tech, research, government, and health care sectors to drive innovation aligned with patient needs.

Link to The AMA Center for Digital Health and AI for more information and to subscribe to news and updates.

Increase in AI Use Among Psychologists, But Greater Concerns As Well

According to the American Psychological Association’s 2025 Practitioner Pulse Survey, over half of psychologists report experimenting with artificial intelligence tools in their practices in the past year, but most cite concerns about how the technology may affect their patients and society.

The survey of 1,742 psychologists found that 56% of psychologists reported using AI tools to assist with their work at least once in the past 12 months, up from 29% in 2024. And 29% said they used AI on at least a monthly basis—more than twice as many as last year.

These AI technologies can support psychologists in various ways, from providing administrative support to augmenting clinical care. However, as psychologists grow more familiar with AI, they are also recognizing its potential risks. Approximately 92% cited concerns about the use of AI tools in psychology, with the most common potential issues being data breaches, unanticipated social harms, input and output biases, a deficit in rigorous testing to mitigate risks, and inaccurate output or “hallucinations.”

Current Uses for AI Assistance

The most common uses among psychologists who used AI to assist with their work focused on routine tasks that often demand time and energy that could be better spent with patients, such as help writing emails and other materials, generating content, summarizing clinical notes or articles, and note-taking. Overall, approximately 62% said that advancements in technology are helping them work more efficiently and accurately.

The APA recommends the following to psychologists before using AI tools to assist with clinical care:

  • Obtain informed consent from patients by clearly communicating the use, benefits and risks of AI tools.
  • Evaluate AI tools for potential biases that could worsen disparities in mental health outcomes.
  • Review AI tools to check for compliance with relevant data privacy and security laws and regulations.
  • Understand how patient/client data are used, stored or shared by companies that provide AI tools.

Despite new technologies that help manage administrative burdens, the survey revealed that psychologists continue to struggle with insurance requirements, other administrative issues, and the demand for treatment. While stress levels and work-life balance for psychologists have improved since the onset of the COVID-19 pandemic, nearly half of all psychologists said that they do not have openings for new patients and that their patients' symptoms are increasing in severity, indicating that the mental health crisis is not yet resolved.

Link: APA recommendations for psychologists (PDF, 458KB)

FDA Digital Health Advisory Committee Meets to Review AI-Enabled Mental Health Devices

The US Food and Drug Administration (FDA) Digital Health Advisory Committee held a meeting last week that focused on generative artificial intelligence-enabled digital mental health medical devices. Topics discussed included clinician perspective, evolution of FDA regulation for these devices, and best practices for AI in digital mental health.

While generative AI may be useful to psychiatric patients in treatment, the committee noted that human susceptibility to AI outputs, risks around monitoring and reporting of suicidal ideation, and possible long-term risks must all be considered.

AI technologies are easily accessible and available around the clock, making them potentially transformative for the general population. However, there are major concerns surrounding ease of use, privacy, content regulation, and involvement of health care providers. The committee warned that AI-enabled devices may “confabulate, provide inappropriate or biased content, fail to relay important medical information, or decline in model accuracy,” which are essential considerations in evaluating these technologies.

The FDA has authorized over 1,200 medical devices that use AI, but none have yet been authorized for mental health uses. Fewer than 20 non-AI digital mental health devices have been authorized. Digital mental health technologies encompass mobile health, health information technology, wearable devices, telehealth, telemedicine, and personalized medicine, according to the FDA. The term also refers to digital therapeutics and diagnostics, which the FDA oversees.

The committee focused on the unique aspect of patient-facing AI, with digital mental health medical devices that are intended to “treat and/or diagnose psychiatric conditions.” Public health concerns emerge around the safety and ability of AI products to deliver therapeutic content, make psychiatric diagnoses, or substitute for a clinician.

At this meeting, the committee highlighted that the FDA is committed to assuring patients and providers have prompt and continued access to safe and efficacious medical devices. The goal is to provide regulatory pathways for this growing field, keeping in mind the potential unique benefits of AI-assisted mental health technology and the complexity of human-digital interaction.

Reference

Summary for the Digital Health Advisory Committee meeting, November 6, 2025.


Pew Research: Parents Struggle With Children’s Screen Use

According to a new Pew Research study published this week, parents are struggling to manage their children’s heavy use of screens, including television, computers, phones and gaming devices.

When the survey asked parents how they are managing screen time, 42% said they could do a better job, while 58% believed they are doing the best they can. Thirty-nine percent said they have become more strict about their children's screen time than other parents they know.

Parents give other daily routines higher priority than managing screen time: Pew found 42% say making sure screen time is reasonable is a priority, compared with 76% for getting enough sleep, 77% for good manners, 61% for staying active and 54% for reading.

The survey, conducted in mid-2024, included 3,054 eligible parents sampled from the American Trends Panel, Pew Research Center's nationally representative panel of randomly selected U.S. adults.

According to Pew, a majority of children 12 or younger have access to devices — 90% for TV, 68% for tablets, 61% for smartphones, 50% for gaming devices, 39% for desktops or laptops, 37% for voice-activated assistants, 11% for smartwatches and 8% for AI chatbots. In the survey, 82% said they allow a child younger than 2 to watch TV.

Smartphone Use

  • A total of 23% say their child has their own smartphone:
    • 57% for ages 11-12
    • 29% for ages 8-10
    • 12% for ages 5-7
    • 8% for children younger than 5

Among specific content, 85% of parents said their child ever watches YouTube, including 51% daily. In 2020, the figure was 80% for children 11 and younger.

And 15% said their children 12 and younger use TikTok, 8% Snapchat, and 5% each Facebook and Instagram. Children are using these platforms even though the companies have put age restrictions in place.

Eighty percent say the harms of social media outweigh the benefits; 46% say the same of smartphones and 20% of tablets.

Parents surveyed explained why they let their children use cellphones: 92% to contact them, 85% for entertainment, 69% to help in learning, 43% to calm them down and 30% so they don’t feel left out.


Source: Pew Research Center

Ways That AI Can Resemble Psychiatric Disorders

Recent research has created the first comprehensive effort to categorize all the ways artificial intelligence (AI) can go wrong, with many of those behaviors resembling human psychiatric disorders.

Scientists and programmers have seen that when AI goes rogue and begins to act in ways counter to its intended purpose, it can exhibit certain behaviors that resemble psychopathologies in humans. A new taxonomy of 32 AI dysfunctions has been created so people in a wide variety of fields can understand the risks of building and deploying AI.

Published recently in the journal Electronics by authors Nell Watson and Ali Hessami, both AI researchers and members of the Institute of Electrical and Electronics Engineers (IEEE), the project was created with the goal of helping analyze AI failures and making the engineering of future products safer. The authors also believe this tool can help policymakers address AI risks.

As described in the study, Psychopathia Machinalis provides a common understanding of AI behaviors and risks. That way, researchers, developers and policymakers can identify the ways AI can go wrong and define the best ways to mitigate risks based on the type of failure.

The study also proposes “therapeutic robopsychological alignment,” a process the researchers describe as a kind of “psychological therapy” for AI. The researchers argue that as these systems become more independent and capable of reflecting on themselves, simply keeping them in line with outside rules and constraints (external control-based alignment) may no longer be enough.

Machine madness

The classifications outlined in the study resemble human symptoms or disorders, with names like obsessive-computational disorder, hypertrophic superego syndrome, contagious misalignment syndrome, AI hallucinations and existential anxiety.

[Table: Categories Outlined in Psychopathia Machinalis]

Managing Machine Madness

With therapeutic alignment in mind, the project proposes the use of clinical strategies employed in human interventions like cognitive behavioral therapy (CBT). The goal of listing and defining the AI disorders is an attempt to get ahead of problems before they arise. The authors of the research paper point out, “by considering how complex systems like the human mind can go awry, we may better anticipate novel failure modes in increasingly complex AI.”

The structure of the classification of maladaptive AI behavior was modeled on frameworks like the Diagnostic and Statistical Manual of Mental Disorders. That led to the various categories of behaviors that can be applied to AI going rogue. Each one was mapped to a human cognitive disorder, complete with the possible effects when each forms and is expressed, as well as the degree of risk each behavior carries.

Source: Watson N, Hessami A. Psychopathia Machinalis: A Nosological Framework for Understanding Pathologies in Advanced Artificial Intelligence. Electronics 2025, 14(16), 3162. https://doi.org/10.3390/electronics14163162

First Blood Test to Personalize Treatment of Major Depression


A new personalized medicine solution to optimize treatment for psychiatric and neurological diseases has been developed. Using a blood sample combined with a patient's genetic background, this test identifies optimal drug therapy for individuals, opening the door to faster treatment, fewer side effects, lower dosing, and the elimination of arduous trial-and-error treatment protocols.

This week, BrightKaire, a test based on a “brain in a dish” technology, was launched to help clinicians choose the best antidepressant medication for patients with major depressive disorder (MDD).

Antidepressants typically don't work right away, and the trial-and-error approach is one of the most frustrating challenges for patients and clinicians. A given medication at a given dose often needs several weeks to become fully effective, and that's if its side effects can be tolerated. BrightKaire aims to change that: the evidence-based test involves a simple blood sample and uses each patient's own brain cells to identify the right medication in just weeks, a game changer for anyone who knows the suffering of depression.

The Technology

After receiving a patient's blood sample, the laboratory team creates neurons from each patient and exposes them to various antidepressants. A proprietary AI platform then analyzes the personalized patient data — including genetic background and microscopic features of the patient-derived neurons — and produces a detailed report predicting how well the patient will respond to different antidepressants, including the individual's likelihood of adverse events. That information is shared with the patient's clinical team, resulting in faster, more accurate and effective medication choices, reduced side effects, and lower healthcare costs.

This new personalized approach recently received regulatory approval from the Centers for Medicare & Medicaid Services, marking the first test to use blood-derived neurons in clinical practice. The test is reimbursed under several insurance plans, including Medicare Part B.

This novel technology also enables pharmaceutical and biotechnology companies to bring precision medicine into drug development throughout the developmental pipeline across psychiatry and neurology.

For more details: Press Release

Illinois First State to Ban AI in Mental Healthcare

Illinois has become the first US state to ban the use of AI in providing mental healthcare. Concerns about AI chatbots causing patient harm, including enabling dangerous behavior, have been an important subject of discussion over the last few years.

On August 1st, Gov. JB Pritzker signed the Wellness and Oversight for Psychological Resources Act into law. The act prohibits the use of AI for mental health treatment and clinical decision-making within behavioral healthcare. It does allow behavioral health professionals to use AI for administrative and supplementary support services.

Earlier this year, the American Psychological Association urged the Federal Trade Commission to investigate the credibility of AI-driven chatbots in order to protect the public in the absence of regulation. A recent Stanford study revealed that AI therapy chatbots powered by large language models showed increased stigma toward certain conditions and enabled dangerous behavior, including suicidal ideation. This aligns with a prior JAMA systematic review demonstrating that neuroimaging-based AI models for psychiatric diagnosis display a high risk of bias and inconsistent clinical applicability.

Legislative Response

More and more states have introduced AI-related legislation over the last few years. Thus far in 2025, all of the states have introduced legislation on this topic, and over half have enacted various measures to develop a risk management policy and enable professional oversight that considers guidance from a list of specified standards.