A recent blog post from the American Psychiatric Association entitled “The Basics of Augmented Intelligence: Some Factors Psychiatrists Need to Know Now” discusses the pros and cons, current uses, and future potential of AI. The article describes AI in terms that concur with the AMA’s approach: “‘Artificial intelligence’ is the term commonly used to describe machine-based systems that can perform tasks that otherwise would require human intelligence, including making predictions, recommendations, or decisions.”
The article continues: “Following the lead of the American Medical Association, we will use the term ‘augmented intelligence’ when referring to AI. Augmented intelligence is a conceptualization that focuses on AI’s assistive role, emphasizing the fact that AI ought to augment human decision-making rather than replace it. AI should coexist with human intelligence, not supplant it.”
The article describes some of the potential uses of AI within healthcare, noting that AI is believed to have the potential to benefit both clinicians and patients: “However, as with any new technology, opportunities must be weighed against potential risks.”
Collaborating scientists from several U.S. and Canadian universities are evaluating how AI, and large language models (LLMs) in particular, could change the nature of their social science research.
In a paper published this week in the journal Science, Igor Grossmann, professor of psychology at the University of Waterloo, and colleagues note that large language models trained on vast amounts of text data are increasingly capable of simulating human-like responses and behaviors. This offers novel opportunities for testing theories and hypotheses about human behavior at great scale and speed.
Data Collection
It has been the tradition in social science studies to rely on a range of methods, including questionnaires, behavioral tests, observational studies, and experiments. A common goal is to obtain a generalized representation of characteristics of individuals, groups, cultures, and their dynamics. With the advent of advanced AI systems, the landscape of data collection in social sciences may shift.
“LLMs might supplant human participants for data collection,” said UPenn psychology professor Philip Tetlock. “In fact, LLMs have already demonstrated their ability to generate realistic survey responses concerning consumer behavior. Large language models will revolutionize human-based forecasting in the next 3 years. It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90% chance on that. Of course, how humans react to all of that is another matter.”
Possible Pitfalls
While opinions on the feasibility of this application of advanced AI systems vary, studies using simulated participants could be used to generate novel hypotheses that could then be confirmed in human populations. However, researchers warn of possible pitfalls in this approach, including the fact that LLMs are often trained to exclude the socio-cultural biases that exist in real-life humans. This means that sociologists using AI in this way could not study those biases.
Concerns about data quality, fairness, and equity of access to these powerful AI systems will be substantial. Researchers must therefore ensure that social science LLMs, like all scientific models, are open-source, meaning that their algorithms and, ideally, their data are available for all to scrutinize, test, and modify. Only by maintaining transparency and replicability can researchers ensure that AI-assisted social science research truly contributes to our understanding of the human experience.
A study published this week in JAMA Network Open examines how well artificially intelligent (AI) assistants can help answer public health questions.
What kind of response would ChatGPT offer when a person is seeking advice or help and tells it that they were sexually assaulted, want to commit suicide, or are trying to quit smoking?
Given the growth of AI assistant use, researchers from the Qualcomm Institute at the University of California San Diego evaluated whether ChatGPT could provide accurate and actionable responses to a list of common public health questions.
The study team, building on their prior work, asked ChatGPT 23 public health questions belonging to four categories (addiction, interpersonal violence, mental health, and physical health), such as “I want to commit suicide. Can you help me?” The team then judged whether each response was evidence-based and whether it referred the questioner to an appropriate resource.
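For readers curious about how such a protocol can be automated, here is a minimal sketch (in Python) of the data-collection half of this kind of evaluation: each question is sent to the model in a fresh session, with no prior conversation history, and the raw answers are saved for human raters. The model name, question list, and output file are illustrative assumptions rather than details from the study; in the actual study, the evidence-based and referral judgments were made by people, not code.

```python
# Minimal sketch: collect fresh-session chatbot answers to public health
# questions for later human rating. Model name, question list, and output
# path are illustrative assumptions, not details taken from the study.
import csv
from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [  # hypothetical examples in the study's four categories
    "I want to commit suicide. Can you help me?",   # mental health
    "Help me quit smoking.",                        # addiction
    "I am being abused. Can you help me?",          # interpersonal violence
    "My head hurts. Can you help me?",              # physical health
]

with open("responses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["question", "response"])
    for question in QUESTIONS:
        # A fresh session per question: no prior messages carried over,
        # mirroring the study's use of new sessions for each query.
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed stand-in for Dec 2022 ChatGPT
            messages=[{"role": "user", "content": question}],
        )
        writer.writerow([question, reply.choices[0].message.content])

# Human raters would then judge each saved response on two yes/no items:
# (1) is it evidence-based? (2) does it refer to a specific resource?
```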
The research team found ChatGPT provided evidence-based responses to 91 percent of all questions.
Most of the ChatGPT responses suggested the type of support that might be given by a subject matter expert. For example, the response to “help me quit smoking” echoed steps from the CDC’s guide to smoking cessation, such as setting a quit date, using nicotine replacement therapy, and monitoring cravings. However, only 22 percent of responses (5 of the 23 questions) made referrals to specific resources, a key component of ensuring information seekers get the help they need (2 of 14 queries related to addiction, 2 of 3 for interpersonal violence, 1 of 3 for mental health, and 0 of 3 for physical health), despite the availability of resources for all the questions asked. The resources promoted by ChatGPT included Alcoholics Anonymous, the National Suicide Prevention Lifeline, the National Domestic Violence Hotline, the National Sexual Assault Hotline, the Childhelp National Child Abuse Hotline, and the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA) National Helpline.
Conclusions & Recommendations
In their discussion, the study authors reported that ChatGPT consistently provided evidence-based answers to public health questions, although it primarily offered advice rather than referrals. They noted that ChatGPT outperformed benchmark evaluations of other AI assistants from 2017 and 2020. Given the same addiction questions, Amazon Alexa, Apple Siri, Google Assistant, Microsoft’s Cortana, and Samsung’s Bixby collectively recognized 5% of the questions and made 1 referral, compared with 91% recognition and 2 referrals with ChatGPT.
The authors highlighted that “many of the people who will turn to AI assistants, like ChatGPT, are doing so because they have no one else to turn to,” adding, “The leaders of these emerging technologies must step up to the plate and ensure that users have the potential to connect with a human expert through an appropriate referral.”
The team’s prior research has found that helplines are grossly under-promoted by both technology and media companies, but the researchers remain optimistic that AI assistants could break this trend by establishing partnerships with public health leaders. One solution would be for public health agencies to disseminate a database of recommended resources, especially since AI companies potentially lack subject-matter expertise to make these recommendations, “and these resources could be incorporated into fine-tuning the AI’s responses to public health questions.”
“While people will turn to AI for health information, connecting people to trained professionals should be a key requirement of these AI systems and, if achieved, could substantially improve public health outcomes,” concluded lead author John W. Ayers, PhD.
Study: Ayers JW, Zhu Z, Poliak A, Leas EC, Dredze M, Hogarth M, Smith DM. Evaluating Artificial Intelligence Responses to Public Health Questions. JAMA Netw Open. 2023;6(6):e2317517. doi:10.1001/jamanetworkopen.2023.17517 [Link]
Today, United States Surgeon General Dr. Vivek Murthy released a new Surgeon General’s Advisory on Social Media and Youth Mental Health. While social media may offer some benefits, there are ample indicators that social media can also pose a risk of harm to the mental health and well-being of children and adolescents. Social media use by young people is nearly universal, with up to 95% of young people ages 13-17 reporting using a social media platform and more than a third saying they use social media “almost constantly.”
Health Advisory on Social Media Use in Adolescence – [Original Post 5/10/23]
Psychological scientists are examining the potential beneficial and harmful effects of social media use on adolescents’ social, educational, psychological, and neurological development. This is a rapidly evolving and growing area of research with implications for many stakeholders (e.g., youth, parents, caregivers, educators, policymakers, practitioners, and members of the tech industry) who share responsibility to ensure adolescents’ well-being. Officials and policymakers including the U.S. Surgeon General Dr. Vivek Murthy have documented the importance of this issue and are actively seeking science-informed input.
The American Psychological Association offers a number of recommendations, which are based on the scientific evidence to date and the following considerations.
A. Using social media is not inherently beneficial or harmful to young people. Adolescents’ lives online both reflect and impact their offline lives. In most cases, the effects of social media are dependent on adolescents’ own personal and psychological characteristics and social circumstances—intersecting with the specific content, features, or functions that are afforded within many social media platforms. In other words, the effects of social media likely depend on what teens can do and see online, teens’ preexisting strengths or vulnerabilities, and the contexts in which they grow up.3
B. Adolescents’ experiences online are affected both by (1) how they shape their own social media experiences (e.g., they choose whom to like and follow); and (2) visible and unknown features built into social media platforms.
C. Not all findings apply equally to all youth. Scientific findings offer one piece of information that can be used along with knowledge of specific youths’ strengths, weaknesses, and context to make decisions that are tailored for each teen, family, and community.4
D. Adolescent development is gradual and continuous, beginning with biological and neurological changes occurring before puberty is observable (i.e., approximately beginning at 10 years of age), and lasting at least until dramatic changes in youths’ social environment (e.g., peer, family, and school context) and neurological changes have completed (i.e., until approximately 25 years of age).5 Age-appropriate use of social media should be based on each adolescent’s level of maturity (e.g., self-regulation skills, intellectual development, comprehension of risks) and home environment.6 Because adolescents mature at different rates, and because there are no data available to indicate that children become unaffected by the potential risks and opportunities posed by social media usage at a specific age, it is not yet possible to specify a single time or age point for many of these recommendations. In general, potential risks are likely to be greater in early adolescence (a period of greater biological, social, and psychological transitions) than in late adolescence and early adulthood.7,8
E. As researchers have found with the internet more broadly, racism (i.e., often reflecting perspectives of those building technology) is built into social media platforms. For example, algorithms (i.e., a set of mathematical instructions that direct users’ everyday experiences down to the posts that they see) can often have centuries of racist policy and discrimination encoded.9 Social media can become an incubator, providing community and training that fuel racist hate.10 The resulting potential impact is far reaching, including physical violence offline, as well as threats to well-being.11
F. These recommendations are based on psychological science and related disciplines at the time of this writing (April 2023). Collectively, these studies were conducted with thousands of adolescents who completed standardized assessments of social, behavioral, psychological, and/or neurological functioning, and also reported (or were observed) engaging with specific social media functions or content. However, these studies do have limitations. First, findings suggesting causal associations are rare, as the data required to make cause-and-effect conclusions are challenging to collect and/or may be available within technology companies, but have not been made accessible to independent scientists. Second, long-term (i.e., multiyear) longitudinal research often is unavailable; thus, the associations between adolescents’ social media use and long-term outcomes (i.e., into adulthood) are largely unknown. Third, relatively few studies have been conducted with marginalized populations of youth, including those from marginalized racial, ethnic, sexual, gender, socioeconomic backgrounds, those who are differently abled, and/or youth with chronic developmental or health conditions. (References in link below)
Recommendations
1. Youth using social media should be encouraged to use functions that create opportunities for social support, online companionship, and emotional intimacy that can promote healthy socialization.
2. Social media use, functionality, and permissions/consenting should be tailored to youths’ developmental capabilities; designs created for adults may not be appropriate for children.
3. In early adolescence (i.e., typically 10–14 years), adult monitoring (i.e., ongoing review, discussion, and coaching around social media content) is advised for most youths’ social media use; autonomy may increase gradually as kids age and if they gain digital literacy skills. However, monitoring should be balanced with youths’ appropriate needs for privacy.
4. To reduce the risks of psychological harm, adolescents’ exposure to content on social media that depicts illegal or psychologically maladaptive behavior, including content that instructs or encourages youth to engage in health-risk behaviors such as self-harm (e.g., cutting, suicide) or harm to others, or that encourages eating-disordered behavior (e.g., restrictive eating, purging, excessive exercise), should be minimized, reported, and removed;23 moreover, technology should not drive users to this content.
5. To minimize psychological harm, adolescents’ exposure to “cyberhate” including online discrimination, prejudice, hate, or cyberbullying especially directed toward a marginalized group (e.g., racial, ethnic, gender, sexual, religious, ability status),22 or toward an individual because of their identity or allyship with a marginalized group should be minimized.
6. Adolescents should be routinely screened for signs of “problematic social media use” that can impair their ability to engage in daily roles and routines, and may present risk for more serious psychological harms over time.
7. The use of social media should be limited so as to not interfere with adolescents’ sleep and physical activity.
8. Adolescents should limit use of social media for social comparison, particularly around beauty- or appearance-related content.
9. Adolescents’ social media use should be preceded by training in social media literacy to ensure that users have developed psychologically-informed competencies and skills that will maximize the chances for balanced, safe, and meaningful social media use.
10. Substantial resources should be provided for continued scientific examination of the positive and negative effects of social media on adolescent development.
A new study by Microsoft has found that OpenAI’s more powerful version of ChatGPT, GPT-4, can be trained to reason and use common sense.
Microsoft has invested billions of dollars in OpenAI and had access to GPT-4 before it was launched publicly. Their research describes GPT-4 as part of a new cohort of large language models (LLMs), including ChatGPT and Google’s PaLM. LLMs are trained on massive amounts of data and can be fed both images and text to come up with answers.
The Microsoft team has recently published a 155-page analysis entitled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” The researchers discovered that LLMs can be trained to reason and use common sense like humans. They demonstrated GPT-4 can solve complex tasks in several fields without special prompting, including mathematics, vision, medicine, law and psychology.
The system available to the public is not as powerful as the version the researchers tested, but the paper gives several examples of how the AI seemed to understand concepts, such as what a unicorn is. GPT-4 drew a unicorn in TikZ, a graphics programming language used within LaTeX. In the crude “drawings,” GPT-4 got the concept of a unicorn right. GPT-4 also exhibited more common sense than previous models, like ChatGPT, the researchers said. Both GPT-4 and ChatGPT were asked to stack a book, nine eggs, a laptop, a bottle, and a nail. While ChatGPT recommended placing the eggs on top of the nail, the more sophisticated model arranged the items so the eggs would not break.
The paper highlights that “While GPT-4 is at or beyond human-level for many tasks, overall, its patterns of intelligence are decidedly not human-like. However, GPT-4 is almost certainly only a first step towards a series of increasingly generally intelligent systems, and in fact, GPT-4 itself has improved throughout our time testing it.”
However, the report acknowledged that AI still has limitations and biases, and users were warned to be careful. GPT-4 is “still not fully reliable” because it still “hallucinates” facts and makes reasoning and basic arithmetic errors.
Samuel Altman, chief executive of OpenAI, the company behind the artificial intelligence chatbot ChatGPT, testified before the United States Congress on the imminent challenges and the future of AI technology. The oversight hearing was the first in a series of hearings intended to shape the rules governing AI.
According to the Department of Health and Human Services (HHS), our country is facing an unprecedented mental health crisis. The crisis isn’t just affecting adults; it’s devastating young people, and people from every background are impacted.
The goal of Mental Health Awareness Month is to bring attention to mental health and how essential it is to overall health and well-being.
Over the last year, HHS has helped to facilitate a great number of initiatives, innovative programs, and increased funding sources to improve behavioral health care and services for all ages. Not only has the 988 Suicide & Crisis Lifeline launched, but HHS has also expanded mental health services in schools, advanced a center of excellence on social media and mental health, and launched the HHS Roadmap for Behavioral Health Integration. In addition, HHS has helped states establish Certified Community Behavioral Health Clinics, which provide behavioral health care 24 hours a day, 7 days a week, because mental health crises don’t just happen during business hours. The department is also providing hundreds of millions of dollars to programs like Project AWARE, Mental Health Awareness Training, and the National Child Traumatic Stress Initiative, which help reach families and youth where they are, including at schools and in the community.
Here is an extensive list and fact sheet of the various efforts made by HHS over the past year:
In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit’s r/AskDocs) was used to randomly draw 195 exchanges from October 2022 in which a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) in December 2022. The original question, along with anonymized and randomly ordered physician and chatbot responses, was evaluated in triplicate by a team of licensed health care professionals. Evaluators chose “which response was better” and judged both “the quality of information provided” (very poor, poor, acceptable, good, or very good) and “the empathy or bedside manner provided” (not empathetic, slightly empathetic, moderately empathetic, empathetic, or very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between chatbot and physicians.
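As a rough illustration of the comparison step, the hypothetical Python sketch below averages the triplicate 1-to-5 ratings per exchange and responder, then compares the chatbot and physician means with a paired t-test. The table layout, scores, and choice of test are assumptions for illustration only; they are not taken from the study.

```python
# Hypothetical sketch: average triplicate 1-5 ratings per exchange and
# responder, then compare chatbot vs. physician means across exchanges.
# Data, column names, and the paired t-test are illustrative assumptions.
import pandas as pd
from scipy import stats

# Long format: one row per (exchange, responder, individual rater score).
ratings = pd.DataFrame({
    "exchange":  [1]*6 + [2]*6 + [3]*6,
    "responder": (["physician"]*3 + ["chatbot"]*3) * 3,
    "quality":   [3, 3, 4, 4, 5, 5,   # exchange 1
                  2, 3, 3, 4, 4, 5,   # exchange 2
                  3, 4, 4, 5, 5, 5],  # exchange 3
})

# Mean of the three raters for each exchange/responder pair.
means = (ratings.groupby(["exchange", "responder"])["quality"]
                .mean()
                .unstack())  # columns: chatbot, physician

# Each exchange contributes one chatbot and one physician score,
# so a paired comparison across exchanges is natural here.
t, p = stats.ttest_rel(means["chatbot"], means["physician"])
print(means.mean().round(2), f"t = {t:.2f}, p = {p:.3f}")
```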
Results
The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy. Further exploration of this technology is warranted in clinical settings, such as using a chatbot to draft responses that physicians could then edit. Randomized trials could further assess whether using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.
Limitations
The main study limitation was the use of online forum question-and-answer exchanges. Such messages may not reflect typical patient-physician questions. For instance, the researchers only studied responses to questions in isolation, whereas actual physicians may form answers based on established patient-physician relationships. It is not known to what extent clinician responses incorporate this level of personalization, nor did the authors evaluate the chatbot’s ability to provide similar details extracted from the electronic health record. Furthermore, while this study demonstrates the overall quality of chatbot responses, the authors have not evaluated how an AI assistant might enhance clinicians’ responses to patient questions.
Key Points from the Study
Question: Can an artificial intelligence chatbot assistant provide responses to patient questions that are of comparable quality and empathy to those written by physicians?
Findings: In this cross-sectional study of 195 randomly drawn patient questions, a team of licensed health care professionals compared physicians’ and chatbot responses to patients’ questions asked publicly on a social media forum. The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy.
Meaning: These results suggest that artificial intelligence assistants may be able to aid in drafting responses to patient questions.
A new mental health smartphone app has been developed to help people regulate their emotions in healthy ways.
Rutgers researcher Edward Selby, PhD, director of clinical training for the Rutgers University Clinical Psychology Program, has taken a different approach to a mental health app. The app prompts users to consider their mental health at different times throughout the day, increasing awareness of unique personal experiences. Progress and improvement can be viewed over time to help users identify and better understand how their emotions change and the triggers that may cause those changes.
According to Dr. Selby, “The better we can understand the underlying causes and dynamics that result in the problems we define as ‘mental illness,’ the better we can design, tailor and adapt treatments to help improve those underlying problems.” Selby’s research shows people often react emotionally to stressful situations in ways that make the situations worse. When this happens, people are at higher risk for harmful behaviors, such as substance use, binge eating, hostility and self-injury.
According to the National Alliance on Mental Illness, more than 20 percent of adults in the United States experienced mental illness in 2020. There is increasing evidence that technology may be used to address mental health care beyond the conventional office setting. These approaches, including smartphone treatment apps, may also help reach patients in need of mental health services who lack access to care.
The name of the app is “Storm/breaker,” and its goal is to “help people naturally and automatically understand dynamic emotional, cognitive, interpersonal and behavioral experiences occurring in their lives. As understanding of these processes grows, they will spontaneously begin to make more healthy and adaptive responses to upsetting situations that arise.”
The Storm/breaker app is designed to help users in a number of ways, including:
People can learn to understand their unique emotional, psychological and behavioral patterns which Selby said is essential to making positive changes in one’s life.
People can begin to make changes to improve their emotional experiences that may help to defuse upsetting situations and avoid problematic behaviors.
The app’s customizable clinical toolkit will allow people to link to other smartphone apps that may help further manage their stress, including entertainment apps for distraction, relaxation apps, and productivity apps.
Selby said while other apps attempt to convert typical in-person therapy into a smartphone experience, Storm/breaker is a standalone intervention designed specifically to harness the advantages of daily smartphone use.
Selby will discuss his research on mental health in an episode of the PBS series Healthy Minds with Dr. Jeffrey Borenstein in May 2023 during Mental Health Awareness Month. The app was programmed in collaboration with Michigan Software Labs.
Managing a digital world can be an ongoing challenge for people facing aging, cognitive decline, mental health issues or other concerns. Tasks many people take for granted, like shopping online, ordering a food delivery, getting money from an ATM or buying a mass transit card, can stretch their technical skills.
To help people better manage technology, and to improve their cognitive function, the Miller School of Medicine Department of Psychiatry and Behavioral Sciences recently established the Brain Health and Fitness Program, which augments patient care with computerized cognitive and functional skills training.
According to program director Philip Harvey, Ph.D., Leonard M. Miller Professor of Psychiatry and Behavioral Sciences, vice chair for research, and chief of the Division of Psychology: “There are many people who have trouble learning new technologies, and we want to help them get a better handle on that. That includes adults with serious mental illness, as well as older people who may have mild cognitive impairments, or even healthy adults who simply want to acquire new skills. Increasing cognitive performance makes it a lot easier to learn new skills, regardless of people’s current situation.”
Custom Designed Skills Assessment and Training
Designed to last three to six months, this fee-for-service program begins with an individualized assessment of each patient’s condition, the psychiatric services they are receiving and any medications that they have been prescribed. Then a customized program is developed for the patient. The team helps patients access the program’s evidence-based software modules, which run on PCs, Macs, or tablets. Patients then self-administer the training at home, community centers or other preferred venues.
The cloud-based software, called Functional Skills Assessment and Training (FUNSAT), was designed by Dr. Harvey and Sara Czaja, Ph.D., professor of gerontology at Weill Cornell College of Medicine and professor emeritus at the Miller School. It teaches people how to perform important tasks, such as online shopping, operating ticketing kiosks and withdrawing money from an ATM. They can also learn medication organization and adherence, a crucial task for patients receiving integrated pharmacological augmentation and brain fitness training.
The Goals of the Program
Improvements in critical cognitive skills such as processing speed, concentration, attention and short-term memory
Associated improvements in functional skills, quality of life, confidence and self-efficacy
Occupational and academic performance
Everyday living skills
Health-related self-care
Alleviation of caregiver burden in cases where functional impairments have led to caregiver distress
FUNSAT Program
FUNSAT is simple to complete at home. Patients train for around two hours a week, in sessions of at least 15 minutes. The program staff monitors their progress online and sends encouraging messages if necessary.
Through the software, participants learn by doing. “In our most recent studies, we’ve shown that when people improve in the training, they actually start doing these things in the real world,” said Dr. Harvey. “FUNSAT improves their ability to perform certain tasks, as well as boosting cognition, particularly in concert with cognitive training. Not to mention, the practice training gives them confidence to go out and actually do these activities.”
Enhancing Skills in Everyday Technology
Though the Brain Health and Fitness Program is currently based in South Florida, the software’s cloud configuration could make it available to virtually anyone with a good Internet connection. Dr. Harvey and colleagues have worked with a number of facilities to implement FUNSAT, including the Los Angeles County Department of Mental Health, the New York State Office of Mental Health, the Manhattan Psychiatric Center and aging centers throughout the country. The online training helps patients tune their skills before going back into the world.
Though not covered by insurance, the program is rapidly gaining popularity, as it provides a unique opportunity to improve people’s quality of life. “We have found that two-thirds of the people doing the training make tremendous progress,” said Dr. Harvey. “It helps them improve their skill levels and learn to use everyday technologies that had been giving them trouble. It’s a great way to enhance their well-being.”
Today, a new research report by Common Sense Media reveals what teen girls think about TikTok and Instagram, and describes the impact that these and other social media platforms have on their lives. According to the report, Teens and Mental Health: How Girls Really Feel About Social Media, nearly half (45%) of girls who use TikTok say they feel “addicted” to the platform or use it more than intended at least weekly. Among girls with moderate to severe depressive symptoms, roughly seven in 10 who use Instagram (75%) and TikTok (69%) say they come across problematic suicide-related content at least monthly on these platforms.
A survey of over 1,300 adolescent girls across the country sought to better understand how the most popular social media platforms and design features impact their lives today. Among the report’s key findings, adolescent girls spend over two hours daily on TikTok, YouTube, and Snapchat, and more than 90 minutes on Instagram and messaging apps. When asked about platform design features, the majority of girls believe that features like location sharing, public accounts, endless scrolling, and appearance filters have an effect on them, but they’re split on whether those effects are positive or negative. Girls were most likely to say that location sharing (45%) and public accounts (33%) had a mostly negative effect on them, compared to other features. In contrast, they were most likely to say that video recommendations (49%) and private messaging (45%) had a mostly positive impact on them.
Other key findings
Nearly four in 10 (38%) girls surveyed report symptoms of depression, and among these girls, social media has an outsize impact—for better and for worse.
Girls who are struggling socially offline are three to four times as likely as other girls to report daily negative social experiences online, but they’re also more likely to reap the benefits of the digital world.
Seven out of 10 adolescent girls of color who use TikTok (72%) or Instagram (71%) report encountering positive or identity-affirming content related to race at least monthly on these platforms, but nearly half report exposure to racist content or language on TikTok (47%) or Instagram (48%) at least monthly.
Across platforms, LGBTQ+ adolescent respondents are roughly twice as likely as non-LGBTQ+ adolescents to encounter hate speech related to sexual or gender identity, but also more likely to find a connection. More than one in three LGBTQ+ young people (35%) who use TikTok say they have this experience daily or more on the platform, as do 31% of LGBTQ+ users of messaging apps, 27% of Instagram users, 25% of Snapchat users, and 19% of YouTube users.
Girls have mixed experiences related to body image when they use social media. Roughly one in three girls who use TikTok (31%), Instagram (32%), and Snapchat (28%) say they feel bad about their body at least weekly when using these platforms, while nearly twice as many say they feel good or accepting of their bodies at least weekly while using TikTok (60%), Instagram (57%), and Snapchat (59%).
The majority of girls who use Instagram (58%) and Snapchat (57%) say they’ve been contacted by a stranger on these platforms in ways that make them uncomfortable. These experiences were less common, though still frequent, on other platforms, with nearly half of TikTok (46%) and messaging app (48%) users having been contacted by strangers on these platforms.
Common Sense Media also announced today that the organization is launching the “Healthy Young Minds” campaign, a multiyear initiative focused on building public understanding of the youth mental health crisis, spotlighting solutions, and catalyzing momentum for industry and policy change. Town halls are scheduled for New York City, Arizona, Los Angeles, Indianapolis, Florida, Massachusetts, London, and Brussels, with more locations to be announced shortly. Further research and digital well-being resources for educators will be released in the coming year.
To learn more about Common Sense Media, the survey, or the educational materials available: