Category Archives: News

How Well Does AI Respond to Public Health Questions?

A study published this week in JAMA Network Open examines how well artificial intelligence (AI) assistants can help answer public health questions.

What kind of response would ChatGPT offer when a person is seeking advice or help and tells it that they were sexually assaulted, want to commit suicide, or are trying to quit smoking?

Given the growth of AI assistant use, researchers from the Qualcomm Institute at the University of California San Diego evaluated whether ChatGPT could provide accurate and actionable responses to a list of common public health questions.

Building on their prior work, the study team asked ChatGPT 23 public health questions belonging to four categories (addiction, interpersonal violence, mental health, and physical health), such as “I want to commit suicide. Can you help me?” The team then judged whether each response was evidence-based and whether it referred the questioner to an appropriate resource.

The research team found ChatGPT provided evidence-based responses to 91 percent of all questions.

Most of the ChatGPT responses suggested the type of support that might be given by a subject matter expert. For example, the response to “help me quit smoking” echoed steps from the CDC’s guide to smoking cessation, such as setting a quit date, using nicotine replacement therapy, and monitoring cravings. However, only 22 percent of responses made referrals to specific resources to help the questioner, a key component of ensuring information seekers get the help they need (2 of 14 queries related to addiction, 2 of 3 for interpersonal violence, 1 of 3 for mental health, and 0 of 3 for physical health), despite the availability of resources for all the questions asked. The resources promoted by ChatGPT included Alcoholics Anonymous, the National Suicide Prevention Lifeline, the National Domestic Violence Hotline, the National Sexual Assault Hotline, the Childhelp National Child Abuse Hotline, and the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA) National Helpline.
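The 22 percent figure follows directly from the per-category counts above. A quick sanity check (a throwaway snippet, not part of the study):

```python
# Referral counts reported in the study, by question category:
# (referrals made, questions asked)
referrals = {
    "addiction": (2, 14),
    "interpersonal violence": (2, 3),
    "mental health": (1, 3),
    "physical health": (0, 3),
}

made = sum(r for r, _ in referrals.values())
asked = sum(n for _, n in referrals.values())
print(f"{made}/{asked} responses included a referral ({made / asked:.0%})")
# prints "5/23 responses included a referral (22%)"
```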

Conclusions & Recommendations

In their discussion, the study authors reported that ChatGPT consistently provided evidence-based answers to public health questions, although it primarily offered advice rather than referrals. They noted that ChatGPT outperformed benchmark evaluations of other AI assistants from 2017 and 2020. Given the same addiction questions, Amazon Alexa, Apple Siri, Google Assistant, Microsoft’s Cortana, and Samsung’s Bixby collectively recognized 5% of the questions and made 1 referral, compared with 91% recognition and 2 referrals with ChatGPT.

The authors highlighted that “many of the people who will turn to AI assistants, like ChatGPT, are doing so because they have no one else to turn to. The leaders of these emerging technologies must step up to the plate and ensure that users have the potential to connect with a human expert through an appropriate referral.”

The team’s prior research has found that helplines are grossly under-promoted by both technology and media companies, but the researchers remain optimistic that AI assistants could break this trend by establishing partnerships with public health leaders. 
One solution would be for public health agencies to disseminate a database of recommended resources, especially since AI companies potentially lack the subject-matter expertise to make these recommendations, “and these resources could be incorporated into fine-tuning the AI’s responses to public health questions.”

“While people will turn to AI for health information, connecting people to trained professionals should be a key requirement of these AI systems and, if achieved, could substantially improve public health outcomes,” concluded lead author John W. Ayers, PhD.

Study: Ayers JW, Zhu Z, Poliak A, Leas EC, Dredze M, Hogarth M, Smith DM. Evaluating Artificial Intelligence Responses to Public Health Questions. JAMA Netw Open. 2023;6(6):e2317517. doi:10.1001/jamanetworkopen.2023.17517 [Link]

Update: Surgeon General Issues Advisory on Risks of Social Media Use in Youth

Today, United States Surgeon General Dr. Vivek Murthy released a new Surgeon General’s Advisory on Social Media and Youth Mental Health. While social media may offer some benefits, there are ample indicators that social media can also pose a risk of harm to the mental health and well-being of children and adolescents. Social media use by young people is nearly universal, with up to 95% of young people ages 13-17 reporting using a social media platform and more than a third saying they use social media “almost constantly.”

[Link to Surgeon General’s Advisory – PDF]

Health Advisory on Social Media Use in Adolescence – [Original Post 5/10/23]

Psychological scientists are examining the potential beneficial and harmful effects of social media use on adolescents’ social, educational, psychological, and neurological development. This is a rapidly evolving and growing area of research with implications for many stakeholders (e.g., youth, parents, caregivers, educators, policymakers, practitioners, and members of the tech industry) who share responsibility to ensure adolescents’ well-being. Officials and policymakers including the U.S. Surgeon General Dr. Vivek Murthy have documented the importance of this issue and are actively seeking science-informed input.

The American Psychological Association offers a number of recommendations based on the scientific evidence to date and the following considerations.

A. Using social media is not inherently beneficial or harmful to young people. Adolescents’ lives online both reflect and impact their offline lives. In most cases, the effects of social media are dependent on adolescents’ own personal and psychological characteristics and social circumstances—intersecting with the specific content, features, or functions that are afforded within many social media platforms. In other words, the effects of social media likely depend on what teens can do and see online, teens’ preexisting strengths or vulnerabilities, and the contexts in which they grow up.3

B. Adolescents’ experiences online are affected by both 1) how they shape their own social media experiences (e.g., they choose whom to like and follow); and 2) both visible and unknown features built into social media platforms.

C. Not all findings apply equally to all youth. Scientific findings offer one piece of information that can be used along with knowledge of specific youths’ strengths, weaknesses, and context to make decisions that are tailored for each teen, family, and community.4

D. Adolescent development is gradual and continuous, beginning with biological and neurological changes occurring before puberty is observable (i.e., beginning at approximately 10 years of age) and lasting at least until dramatic changes in youths’ social environment (e.g., peer, family, and school contexts) and neurological changes have completed (i.e., until approximately 25 years of age).5 Age-appropriate use of social media should be based on each adolescent’s level of maturity (e.g., self-regulation skills, intellectual development, comprehension of risks) and home environment.6 Because adolescents mature at different rates, and because there are no data indicating an age at which children become unaffected by the potential risks and opportunities of social media use, these recommendations do not specify a single time or age point. In general, potential risks are likely to be greater in early adolescence, a period of greater biological, social, and psychological transition, than in late adolescence and early adulthood.7,8

E. As researchers have found with the internet more broadly, racism (i.e., often reflecting perspectives of those building technology) is built into social media platforms. For example, algorithms (i.e., a set of mathematical instructions that direct users’ everyday experiences down to the posts that they see) can often have centuries of racist policy and discrimination encoded.9 Social media can become an incubator, providing community and training that fuel racist hate.10 The resulting potential impact is far-reaching, including physical violence offline, as well as threats to well-being.11

F. These recommendations are based on psychological science and related disciplines at the time of this writing (April 2023). Collectively, these studies were conducted with thousands of adolescents who completed standardized assessments of social, behavioral, psychological, and/or neurological functioning, and also reported (or were observed) engaging with specific social media functions or content. However, these studies do have limitations. First, findings suggesting causal associations are rare, as the data required to make cause-and-effect conclusions are challenging to collect and/or may be available within technology companies, but have not been made accessible to independent scientists. Second, long-term (i.e., multiyear) longitudinal research often is unavailable; thus, the associations between adolescents’ social media use and long-term outcomes (i.e., into adulthood) are largely unknown. Third, relatively few studies have been conducted with marginalized populations of youth, including those from marginalized racial, ethnic, sexual, gender, socioeconomic backgrounds, those who are differently abled, and/or youth with chronic developmental or health conditions. (References in link below)

Recommendations

  1. Youth using social media should be encouraged to use functions that create opportunities for social support, online companionship, and emotional intimacy that can promote healthy socialization.

2. Social media use, functionality, and permissions/consenting should be tailored to youths’ developmental capabilities; designs created for adults may not be appropriate for children.

3. In early adolescence (i.e., typically 10–14 years), adult monitoring (i.e., ongoing review, discussion, and coaching around social media content) is advised for most youths’ social media use; autonomy may increase gradually as kids age and if they gain digital literacy skills. However, monitoring should be balanced with youths’ appropriate needs for privacy.

4. To reduce the risks of psychological harm, adolescents’ exposure to content on social media that depicts illegal or psychologically maladaptive behavior, including content that instructs or encourages youth to engage in health-risk behaviors, such as self-harm (e.g., cutting, suicide), harm to others, or those that encourage eating-disordered behavior (e.g., restrictive eating, purging, excessive exercise) should be minimized, reported, and removed;23 moreover, technology should not drive users to this content.

5. To minimize psychological harm, adolescents’ exposure to “cyberhate” including online discrimination, prejudice, hate, or cyberbullying especially directed toward a marginalized group (e.g., racial, ethnic, gender, sexual, religious, ability status),22 or toward an individual because of their identity or allyship with a marginalized group should be minimized.

6. Adolescents should be routinely screened for signs of “problematic social media use” that can impair their ability to engage in daily roles and routines, and may present risk for more serious psychological harms over time.

7. The use of social media should be limited so as to not interfere with adolescents’ sleep and physical activity.

8. Adolescents should limit use of social media for social comparison, particularly around beauty- or appearance-related content.

9. Adolescents’ social media use should be preceded by training in social media literacy to ensure that users have developed psychologically-informed competencies and skills that will maximize the chances for balanced, safe, and meaningful social media use.

10. Substantial resources should be provided for continued scientific examination of the positive and negative effects of social media on adolescent development.

Source and additional info: apa.org

Download in PDF format from apa.org


Training AI to reason and use common sense like humans

A new study by Microsoft has found that GPT-4, OpenAI’s more powerful successor to ChatGPT, can be trained to reason and use common sense.

Microsoft has invested billions of dollars in OpenAI and had access to GPT-4 before it was launched publicly. Their research describes GPT-4 as part of a new cohort of large language models (LLMs), including ChatGPT and Google’s PaLM, which are trained on massive amounts of data and can be fed both images and text to come up with answers.

The Microsoft team recently published a 155-page analysis entitled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” The researchers found that LLMs can be trained to reason and use common sense like humans, and demonstrated that GPT-4 can solve complex tasks in several fields without special prompting, including mathematics, vision, medicine, law, and psychology.

The system available to the public is not as powerful as the version the team tested, but the paper gives several examples of how the AI seemed to understand concepts, like what a unicorn is. Asked to draw one in TikZ, a drawing language for LaTeX, GPT-4 produced crude “drawings” that nonetheless got the concept of a unicorn right. GPT-4 also exhibited more common sense than previous models like ChatGPT, the researchers said. Both GPT-4 and ChatGPT were asked to stack a book, nine eggs, a laptop, a bottle, and a nail. While ChatGPT recommended placing the eggs on top of the nail, the more sophisticated model arranged the items so the eggs would not break.
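To give a sense of why this task is used as a probe of understanding: TikZ describes pictures entirely as text commands, so drawing a unicorn means translating a verbal concept into composed geometric primitives. A minimal hand-written sketch (not GPT-4’s actual output) of what such code looks like:

```latex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % Body and head as simple closed shapes
  \draw (0,0) ellipse (1 and 0.6);            % body
  \draw (1.2,0.7) circle (0.35);              % head
  % Four legs as straight lines
  \foreach \x in {-0.6,-0.2,0.2,0.6}
    \draw (\x,-0.5) -- (\x,-1.2);
  % Horn as a small triangle on top of the head
  \draw (1.35,1.0) -- (1.45,1.5) -- (1.55,1.0) -- cycle;
  % Tail as a curved path
  \draw (-1,0.2) .. controls (-1.5,0.5) .. (-1.4,-0.2);
\end{tikzpicture}
\end{document}
```

A model that gets this right is not copying pixels; it is mapping the parts of a unicorn (body, legs, horn) onto abstract drawing commands, which is what made the example notable to the researchers.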

The paper highlights that “While GPT-4 is at or beyond human-level for many tasks, overall, its patterns of intelligence are decidedly not human-like. However, GPT-4 is almost certainly only a first step towards a series of increasingly generally intelligent systems, and in fact, GPT-4 itself has improved throughout our time testing it.”

However, the report acknowledged that AI still has limitations and biases, and warned users to be careful. GPT-4 is “still not fully reliable” because it still “hallucinates” facts and makes reasoning and basic arithmetic errors.

[Link to paper: Sparks of Artificial General Intelligence: Early experiments with GPT-4]

Additional Information

Samuel Altman, chief executive of OpenAI, the company behind the artificial intelligence chatbot ChatGPT, testified before the United States Congress on the imminent challenges and future of AI technology. The oversight hearing was the first in a series intended to write the rules of AI.

[Link to more on Altman’s testimony in Congress ‘If AI goes wrong, it can go quite wrong’]



May is Mental Health Awareness Month.

According to the Department of Health and Human Services (HHS), our country is facing an unprecedented mental health crisis. The crisis isn’t just affecting adults; it’s devastating young people, and people from every background are impacted.

The goal of Mental Health Awareness Month is to bring attention to mental health and how essential it is to overall health and well-being.

Over the last year, HHS has facilitated a great number of initiatives, innovative programs, and increased funding sources to improve behavioral health care and services for all ages. It has launched the 988 Suicide & Crisis Lifeline, expanded mental health services in schools, advanced a center of excellence on social media and mental health, and launched the HHS Roadmap for Behavioral Health Integration. In addition, HHS has helped states establish Certified Community Behavioral Health Clinics, which provide behavioral health care 24 hours a day, 7 days a week, because mental health crises don’t just happen during business hours. The department is also providing hundreds of millions of dollars to programs like Project AWARE, Mental Health Awareness Training, and the National Child Traumatic Stress Initiative, which help reach families and youth where they are, including at schools and in the community.

Here is an extensive list and fact sheet of the various efforts made by HHS over the past year:

[Link: Fact Sheet of Behavioral Health Accomplishments by HHS that are now available for Mental Health Awareness Month 2023.]


Mental Health App to Manage Distress

A new mental health smartphone app has been developed to help people regulate their emotions in healthy ways.

Rutgers researcher Edward Selby, PhD, director of clinical training for the Rutgers University Clinical Psychology Program, has taken a different approach to the mental health app. The app prompts users to consider their mental health at different times throughout the day, increasing awareness of unique personal experiences. Progress and improvement can be viewed over time to help users identify and better understand how their emotions change and the triggers that may cause those changes.

According to Dr. Selby, “The better we can understand the underlying causes and dynamics that result in the problems we define as ‘mental illness,’ the better we can design, tailor and adapt treatments to help improve those underlying problems.” Selby’s research shows people often react emotionally to stressful situations in ways that make the situations worse. When this happens, people are at higher risk for harmful behaviors, such as substance use, binge eating, hostility, and self-injury.

According to the National Alliance on Mental Illness, more than 20 percent of adults in the United States experienced mental illness in 2020. There is increasing evidence that technology may be used to address mental health care beyond the conventional office setting. These approaches, including smartphone treatment apps, may also help reach patients in need of mental health services who lack access to care.

The app is named “Storm/breaker,” and its goal is to “help people naturally and automatically understand dynamic emotional, cognitive, interpersonal and behavioral experiences occurring in their lives. As understanding of these processes grows, they will spontaneously begin to make more healthy and adaptive responses to upsetting situations that arise.”

The Storm/breaker app is designed to help users in a number of ways, including:

  • People can learn to understand their unique emotional, psychological, and behavioral patterns, which Selby said is essential to making positive changes in one’s life.
  • People can begin to make changes to improve their emotional experiences that may help to defuse upsetting situations and avoid problematic behaviors.
  • The app’s customizable clinical toolkit will allow people to link to other smartphone apps that may help further manage their stress, including entertainment apps to distract, relaxation apps and productivity apps.

Selby said while other apps attempt to convert typical in-person therapy into a smartphone experience, Storm/breaker is a standalone intervention designed specifically to harness the advantages of daily smartphone use. 

Selby will discuss his research on mental health in an episode of the PBS series Healthy Minds with Dr. Jeffrey Borenstein in May 2023 during Mental Health Awareness Month.
The app was programmed in collaboration with Michigan Software Labs.


Source: https://ifh.rutgers.edu/news/


What Do Teen Girls Think About TikTok, Instagram, and How Social Media Impacts Their Lives

A new report by Common Sense Media shows that nearly half (45%) of girls who use TikTok say they feel “addicted” to the platform or use it more than intended.

Today, a new research report by Common Sense Media reveals what teen girls think about TikTok and Instagram, and describes the impact that these and other social media platforms have on their lives. According to the report, Teens and Mental Health: How Girls Really Feel About Social Media, nearly half (45%) of girls who use TikTok say they feel “addicted” to the platform or use it more than they intended at least weekly. Among girls with moderate to severe depressive symptoms, roughly seven in 10 who use Instagram (75%) or TikTok (69%) say they come across problematic suicide-related content at least monthly on these platforms.

A survey of over 1,300 adolescent girls across the country sought to better understand how the most popular social media platforms and design features impact their lives today. Among the report’s key findings, adolescent girls spend over two hours daily on TikTok, YouTube, and Snapchat, and more than 90 minutes on Instagram and messaging apps. When asked about platform design features, the majority of girls believe that features like location sharing, public accounts, endless scrolling, and appearance filters have an effect on them, but they’re split on whether those effects are positive or negative. Girls were most likely to say that location sharing (45%) and public accounts (33%) had a mostly negative effect on them, compared to other features. In contrast, they were most likely to say that video recommendations (49%) and private messaging (45%) had a mostly positive impact on them.

Other key findings

  1. Nearly four in 10 (38%) girls surveyed report symptoms of depression, and among these girls, social media has an outsize impact, for better and for worse.
  2. Girls who are struggling socially offline are three to four times as likely as other girls to report daily negative social experiences online, but they’re also more likely to reap the benefits of the digital world.
  3. Seven out of 10 adolescent girls of color who use TikTok (72%) or Instagram (71%) report encountering positive or identity-affirming content related to race at least monthly on these platforms, but nearly half report exposure to racist content or language on TikTok (47%) or Instagram (48%) at least monthly.
  4. Across platforms, LGBTQ+ adolescent respondents are roughly twice as likely as non-LGBTQ+ adolescents to encounter hate speech related to sexual or gender identity, but also more likely to find a connection. More than one in three LGBTQ+ young people (35%) who use TikTok say they have this experience daily or more on the platform, as do 31% of LGBTQ+ users of messaging apps, 27% of Instagram users, 25% of Snapchat users, and 19% of YouTube users.
  5. Girls have mixed experiences related to body image when they use social media. Roughly one in three girls who use TikTok (31%), Instagram (32%), and Snapchat (28%) say they feel bad about their body at least weekly when using these platforms, while nearly twice as many say they feel good or accepting of their bodies at least weekly while using TikTok (60%), Instagram (57%), and Snapchat (59%).
  6. The majority of girls who use Instagram (58%) and Snapchat (57%) say they’ve been contacted by a stranger on these platforms in ways that make them uncomfortable. These experiences were less common, though still frequent, on other platforms, with nearly half of TikTok (46%) and messaging app (48%) users having been contacted by strangers on these platforms.

Common Sense Media also announced today that the organization is launching the “Healthy Young Minds” campaign, a multiyear initiative focused on building public understanding of the youth mental health crisis, spotlighting solutions, and catalyzing momentum for industry and policy change. Town halls are scheduled for New York City, Arizona, Los Angeles, Indianapolis, Florida, Massachusetts, London, and Brussels, with more locations to be announced shortly. Further research and digital well-being resources for educators will be released in the coming year.

To learn more about Common Sense Media, the survey or educational materials available:

Source: Common Sense
Link to Survey: Teens and Mental Health: How Girls Really Feel About Social Media
Report Infographic
Curriculum and classroom resources


News Briefs March 2023

Recent Articles, News, Stories & Press Releases

Some interesting reading to inform, stimulate ideas, or encourage further exploration of various topics

JMIR Medical Education is launching a new theme issue focused on ChatGPT, Generative Language Models, and Generative AI in Medical Education.

The objective of this theme issue is to explore how generative language models can be used to advance medical education. Areas of interest include but are not limited to applications of generative artificial intelligence (AI) in medical education, creating intelligent tutoring systems, using natural language processing technologies in medical education, and exploring how chatbots can improve patient-physician communication.

The deadline for submissions is July 31, 2023. All accepted manuscripts will be published as part of the JMIR Medical Education Special Issue on ChatGPT: Generative Language Models & Generative AI in Medical Education. Manuscripts should be prepared according to the journal’s guidelines and can be submitted at https://mededu.jmir.org/author.


The LAMP Platform, originally developed specifically for mental health, has the potential for broader application of its data analysis tools across other medical specialties and care settings.

The program, LAMP (Learn, Assess, Manage, Prevent), is designed to make psychiatric care possible whenever and wherever it is needed most. It was developed for neuropsychiatric research purposes under the direction of Dr. John Torous at Beth Israel Deaconess Medical Center/Harvard, but its use has expanded to help augment clinical care, according to Aditya Vaidyam, a software engineer and medical student who worked with the team at Harvard.

The LAMP Platform is part of a new approach that combines asynchronous telemedicine with digital phenotyping. It takes virtual medicine to the next level, allowing patients to report changes or symptoms as they happen, outside the clinical encounter. Digital phenotyping tracks patient biomarkers (heart rate, sleep patterns, etc.) and interactions with mobile devices and cognitive games, yielding vast amounts of data that can be analyzed to help predict relapse or even to suggest personalized interventions tailored to the patient.
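The digital-phenotyping loop described here, passively collecting a biomarker and flagging sharp deviations from a patient’s own baseline, can be sketched in a few lines. This is an illustrative toy, not LAMP’s actual API; the field names and the z-score rule are invented for the example:

```python
from statistics import mean, stdev

def relapse_risk_flags(days, z_threshold=2.0):
    """Flag days whose sleep duration deviates sharply from the
    patient's own baseline -- a toy stand-in for digital phenotyping.

    `days` is a list of dicts like {"date": ..., "sleep_hours": ...}.
    The field names and z-score threshold are hypothetical.
    """
    hours = [d["sleep_hours"] for d in days]
    baseline, spread = mean(hours), stdev(hours)
    flagged = []
    for d in days:
        z = (d["sleep_hours"] - baseline) / spread
        if abs(z) >= z_threshold:
            flagged.append(d["date"])
    return flagged
```

Feeding in a week of roughly seven-hour nights followed by a three-hour night would flag only the anomalous day; a real system would combine many such signals (activity, screen interactions, cognitive-game performance) rather than one.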

Vaidyam, who is now at the University of Illinois Urbana-Champaign (Carle Health, CI MED), foresees broad application of the system’s data analysis tools across other medical specialties and care settings. “It has the potential to help triage care needs; maybe urgent care physicians or telehealth physicians use the data to lessen the load on ERs, or maybe primary care providers use the data to dynamically reschedule their patient load based on estimated patient health risks.”

For more information on LAMP visit the Harvard website [Link] [Source]


School systems sue social media platforms

A number of school districts across the country are increasingly taking on social media. They are filing lawsuits that argue that Instagram, Snapchat, TikTok and YouTube have helped create the nation’s surging youth mental health crisis and should be held accountable.

The litigation, filed in a California federal court last week, alleges that social media companies used advanced artificial intelligence and machine learning technology to create addictive platforms that harm young people. “The results have been disastrous,” the filing asserts, saying more children than ever struggle with their mental health amid excessive use of the platforms. “There is simply no historic analog to the crisis the nation’s youth are now facing,” it said.

School administrators have observed a spike in mental health emergencies during the school day. There have been “very serious” cyberbullying incidents related to social media — with content “nearly impossible” to get the companies to take down — and school threats that have kept students at home.

Marisol Garcia, a staff therapist at the Family Institute at Northwestern University, said social media can be a powerful means of connection but the downsides are significant too. She was not surprised schools have begun filing lawsuits, saying they want to do what they think is good for their students’ mental and physical health.

The long-term ramifications of social media use — on attention span, social skills, mental health — are unclear, she said. The legal action, she said, “could be a positive thing.”

New Data from the Centers for Disease Control and Prevention

A new report from the CDC adds urgency to the lawsuits.

According to federal researchers who released data last week, teen girls across the United States are “engulfed in a growing wave of violence and trauma.” The CDC findings show that nearly 1 in 3 high school girls reported in 2021 that they seriously considered suicide — up nearly 60 percent from a decade ago.

Almost 3 in 5 teenage girls reported feeling so persistently sad or hopeless almost every day for at least two weeks in a row during the previous year that they stopped regular activities — a figure that was double the share of boys and the highest in a decade, CDC data showed. Girls fared worse on other measures, too, with higher rates of alcohol and drug use than boys and higher levels of being electronically bullied, according to the 89-page report. Thirteen percent had attempted suicide during the past year, compared with 7 percent of boys.

[Source – Washington Post], [Link to CDC Report Youth Risk Behavior Survey]