A new study by Microsoft has found that GPT-4, OpenAI’s more powerful successor to the model behind ChatGPT, can reason and use common sense.
Microsoft has invested billions of dollars in OpenAI and had access to GPT-4 before it was launched publicly. The research describes the AI as part of a new cohort of large language models (LLMs), including ChatGPT and Google’s PaLM. LLMs can be trained on massive amounts of data and fed both images and text to come up with answers.
The Microsoft team recently published a 155-page analysis titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” The researchers found that LLMs can reason and use common sense like humans. They demonstrated that GPT-4 can solve complex tasks in several fields without special prompting, including mathematics, vision, medicine, law and psychology.
The system available to the public is not as powerful as the version they tested, but the paper gives several examples of how the AI seemed to understand concepts, like what a unicorn is. GPT-4 drew a unicorn in TikZ, a graphics language used within LaTeX documents. Crude as the “drawings” were, GPT-4 got the concept of a unicorn right. GPT-4 also exhibited more common sense than previous models like ChatGPT, the researchers said. Both GPT-4 and ChatGPT were asked to stack a book, nine eggs, a laptop, a bottle and a nail. While ChatGPT recommended placing the eggs on top of the nail, the more sophisticated model arranged the items so the eggs would not break.
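For readers unfamiliar with TikZ, here is a minimal, hypothetical sketch of the kind of LaTeX drawing code involved; it is illustrative only, not the paper’s actual unicorn code:

```latex
% Toy example of TikZ drawing commands inside a LaTeX document.
% Hypothetical sketch, not the GPT-4 output from the paper.
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  \draw[fill=white] (0,0) ellipse (1 and 0.6);   % body
  \draw[fill=white] (0.9,0.5) circle (0.35);     % head
  \draw (1.05,0.8) -- (1.25,1.4);                % horn
  \foreach \x in {-0.6,-0.2,0.2,0.6}
    \draw (\x,-0.55) -- (\x,-1.1);               % legs
\end{tikzpicture}
\end{document}
```

The point of the example is that producing such code requires mapping a visual concept onto drawing primitives, which is why the researchers treated it as a sign that the model grasped the concept.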
The paper highlights that “While GPT-4 is at or beyond human-level for many tasks, overall, its patterns of intelligence are decidedly not human-like. However, GPT-4 is almost certainly only a first step towards a series of increasingly generally intelligent systems, and in fact, GPT-4 itself has improved throughout our time testing it.”
However, the report acknowledged that the AI still has limitations and biases, and warned users to be careful. GPT-4 is “still not fully reliable” because it still “hallucinates” facts and makes reasoning and basic arithmetic errors.
Sam Altman, chief executive of OpenAI, the company behind the artificial intelligence chatbot ChatGPT, testified before the United States Congress on the imminent challenges and the future of AI technology. The oversight hearing was the first in a series of hearings intended to inform the writing of rules for AI.
According to the Department of Health and Human Services (HHS), our country is facing an unprecedented mental health crisis. The crisis isn’t just affecting adults; it’s devastating young people, and people from every background are impacted.
The goal of Mental Health Awareness Month is to bring attention to mental health and how essential it is to overall health and well-being.
Over the last year, HHS has helped facilitate a great number of initiatives, innovative programs and increased funding sources to improve behavioral health care and services for all ages. It has launched the 988 Suicide & Crisis Lifeline, expanded mental health services in schools, advanced a center of excellence on social media and mental health, and launched the HHS Roadmap for Behavioral Health Integration. In addition, HHS has helped states establish Certified Community Behavioral Health Clinics, which provide behavioral health care 24 hours a day, 7 days a week, because mental health crises don’t just happen during business hours. The department is also providing hundreds of millions of dollars to programs like Project AWARE, Mental Health Awareness Training, and the National Child Traumatic Stress Initiative, which help reach families and youth where they are, including at schools and in the community.
Here is an extensive list and fact sheet of the various efforts made by HHS over the past year:
In this cross-sectional study, a public, nonidentifiable database of questions from a public social media forum (Reddit’s r/AskDocs) was used to randomly draw 195 exchanges from October 2022 in which a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (with no prior questions asked in the session) in December 2022. The original question, along with the anonymized and randomly ordered physician and chatbot responses, was evaluated in triplicate by a team of licensed health care professionals. Evaluators chose “which response was better” and judged both “the quality of information provided” (very poor, poor, acceptable, good, or very good) and “the empathy or bedside manner provided” (not empathetic, slightly empathetic, moderately empathetic, empathetic, or very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between the chatbot and physicians.
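As a rough illustration of the kind of comparison described, the sketch below simulates triplicate 1-to-5 ratings for 195 exchanges and compares chatbot and physician means with a paired test. All data here are made up, and the study’s actual statistical methods may differ:

```python
# Hypothetical sketch of comparing mean ratings (1-5 scale) between
# physician and chatbot responses. Simulated data; illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 195 exchanges, each rated by 3 evaluators, collapsed to per-exchange means
physician = rng.integers(1, 6, size=(195, 3)).mean(axis=1)
chatbot = rng.integers(2, 6, size=(195, 3)).mean(axis=1)

# Paired comparison, since both responses answer the same question
t_stat, p_value = stats.ttest_rel(chatbot, physician)
print(f"mean physician = {physician.mean():.2f}, mean chatbot = {chatbot.mean():.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.3g}")

# Nonparametric alternative, often preferred for ordinal ratings
w_stat, w_p = stats.wilcoxon(chatbot, physician)
print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.3g}")
```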
Results
The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy. Further exploration of this technology is warranted in clinical settings, such as using chatbots to draft responses that physicians could then edit. Randomized trials could further assess whether using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.
Limitations
The main study limitation was the use of online forum question-and-answer exchanges. Such messages may not reflect typical patient-physician questions. For instance, the researchers only studied responses to questions asked in isolation, whereas actual physicians may form answers based on established patient-physician relationships. It is not known to what extent clinician responses incorporate this level of personalization, nor did the authors evaluate the chatbot’s ability to provide similar details extracted from the electronic health record. Furthermore, while this study demonstrates the overall quality of chatbot responses, the authors did not evaluate how an AI assistant might enhance clinicians’ responses to patient questions.
Key Points from the Study
Question Can an artificial intelligence chatbot assistant provide responses to patient questions that are of comparable quality and empathy to those written by physicians?
Findings In this cross-sectional study of 195 randomly drawn patient questions from a public social media forum, a team of licensed health care professionals compared physicians’ and the chatbot’s responses to patients’ publicly posted questions. The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy.
Meaning These results suggest that artificial intelligence assistants may be able to aid in drafting responses to patient questions.
A new mental health smartphone app has been developed to help people regulate their emotions in healthy ways.
Rutgers researcher Edward Selby, PhD, director of clinical training for the Rutgers University Clinical Psychology Program, has taken a different approach to a mental health app. The app prompts users to consider their mental health at different times throughout the day, increasing awareness of unique personal experiences. Progress and improvement can be viewed over time to help users identify and better understand how their emotions change and the triggers that may cause those changes.
According to Dr. Selby, “The better we can understand the underlying causes and dynamics that result in the problems we define as ‘mental illness,’ the better we can design, tailor and adapt treatments to help improve those underlying problems.” Selby’s research shows people often react emotionally to stressful situations in ways that make the situations worse. When this happens, people are at higher risk for harmful behaviors, such as substance use, binge eating, hostility and self-injury.
According to the National Alliance on Mental Illness, more than 20 percent of adults in the United States experienced mental illness in 2020. There is increasing evidence that technology may be used to address mental health care beyond the conventional office setting. These approaches, including smartphone treatment apps, may also help reach patients in need of mental health services who lack access to care.
The name of the app is “Storm/breaker” and its goal is to “help people naturally and automatically understand dynamic emotional, cognitive, interpersonal and behavioral experiences occurring in their lives. As understanding of these processes grows, they will spontaneously begin to make more healthy and adaptive responses to upsetting situations that arise.”
The Storm/breaker app is designed to help users in a number of ways, including:
People can learn to understand their unique emotional, psychological and behavioral patterns which Selby said is essential to making positive changes in one’s life.
People can begin to make changes to improve their emotional experiences that may help to defuse upsetting situations and avoid problematic behaviors.
The app’s customizable clinical toolkit will allow people to link to other smartphone apps that may help further manage their stress, including entertainment apps to distract, relaxation apps and productivity apps.
Selby said while other apps attempt to convert typical in-person therapy into a smartphone experience, Storm/breaker is a standalone intervention designed specifically to harness the advantages of daily smartphone use.
Selby will discuss his research on mental health in an episode of the PBS series Healthy Minds with Dr. Jeffrey Borenstein in May 2023 during Mental Health Awareness Month. The app was programmed in collaboration with Michigan Software Labs.
Managing a digital world can be an ongoing challenge for people facing aging, cognitive decline, mental health issues or other concerns. Tasks many people take for granted, like shopping online, ordering a food delivery, getting money from an ATM or buying a mass transit card, can stretch their technical skills.
To help people better manage technology, and to improve their cognitive function, the Miller School of Medicine Department of Psychiatry and Behavioral Sciences recently established the Brain Health and Fitness Program, which augments patient care with computerized cognitive and functional skills training.
According to program director Philip Harvey, Ph.D., Leonard M. Miller Professor of Psychiatry and Behavioral Sciences, vice chair for research, and chief of the Division of Psychology: “There are many people who have trouble learning new technologies, and we want to help them get a better handle on that. That includes adults with serious mental illness, as well as older people who may have mild cognitive impairments, or even healthy adults who simply want to acquire new skills. Increasing cognitive performance makes it a lot easier to learn new skills, regardless of people’s current situation.”
Custom Designed Skills Assessment and Training
Designed to last three to six months, this fee-for-service program begins with an individualized assessment of each patient’s condition, the psychiatric services they are receiving and any medications that they have been prescribed. Then a customized program is developed for the patient. The team helps patients access the program’s evidence-based software modules, which run on PCs, Macs, or tablets. Patients then self-administer the training at home, community centers or other preferred venues.
The cloud-based software, called Functional Skills Assessment and Training (FUNSAT), was designed by Dr. Harvey and Sara Czaja, Ph.D., professor of gerontology at Weill Cornell College of Medicine and professor emeritus at the Miller School. It teaches people how to perform important tasks, such as online shopping, operating ticketing kiosks and withdrawing money from an ATM. They can also learn medication organization and adherence, a crucial task for patients receiving integrated pharmacological augmentation and brain fitness training.
The Goals of the Program
Improvements in critical cognitive skills such as processing speed, concentration, attention and short-term memory
Associated improvements in functional skills, quality of life, confidence and self-efficacy
Occupational and academic performance
Everyday living skills
Health related self care
Alleviation of caregiver burden in cases where functional impairments have led to caregiver distress
FUNSAT Program
FUNSAT is simple to complete at home. Patients train for around two hours a week, for at least 15 minutes per session. The program staff monitors their progress online and sends encouraging messages, if necessary.
Through the software, participants learn by doing. “In our most recent studies, we’ve shown that when people improve in the training, they actually start doing these things in the real world,” said Dr. Harvey. “FUNSAT improves their ability to perform certain tasks, as well as boosting cognition, particularly in concert with cognitive training. Not to mention, the practice training gives them confidence to go out and actually do these activities.”
Enhancing Skills in Everyday Technology
Though the Brain Health and Fitness Program is currently based in South Florida, the software’s cloud configuration could make it available to virtually anyone with a good Internet connection. Dr. Harvey and colleagues have worked with a number of facilities to implement FUNSAT, including the Los Angeles County Department of Mental Health, the New York State Office of Mental Health, the Manhattan Psychiatric Center and aging centers throughout the country. The online training helps patients tune their skills before going back into the world.
Though not covered by insurance, the program is rapidly gaining popularity, as it provides a unique opportunity to improve people’s quality of life. “We have found that two-thirds of the people doing the training make tremendous progress,” said Dr. Harvey. “It helps them improve their skill levels and learn to use everyday technologies that had been giving them trouble. It’s a great way to enhance their well-being.”
A new report by Common Sense Media shows that nearly half (45%) of girls who use TikTok say they feel “addicted” to the platform or use it more than intended.
Today, a new research report by Common Sense Media reveals what teen girls think about TikTok and Instagram, and describes the impact that these and other social media platforms have on their lives. According to the report, Teens and Mental Health: How Girls Really Feel About Social Media, nearly half (45%) of girls who use TikTok say they feel “addicted” to the platform or use it more than intended at least weekly. Among girls with moderate to severe depressive symptoms, roughly seven in 10 who use Instagram (75%) and TikTok (69%) say they come across problematic suicide-related content at least monthly on these platforms.
A survey of over 1,300 adolescent girls across the country sought to better understand how the most popular social media platforms and design features impact their lives today. Among the report’s key findings, adolescent girls spend over two hours daily on TikTok, YouTube, and Snapchat, and more than 90 minutes on Instagram and messaging apps. When asked about platform design features, the majority of girls believe that features like location sharing, public accounts, endless scrolling, and appearance filters have an effect on them, but they’re split on whether those effects are positive or negative. Girls were most likely to say that location sharing (45%) and public accounts (33%) had a mostly negative effect on them, compared to other features. In contrast, they were most likely to say that video recommendations (49%) and private messaging (45%) had a mostly positive impact on them.
Other key findings
Nearly four in 10 (38%) girls surveyed report symptoms of depression, and among these girls, social media has an outsize impact—for better and for worse.
Girls who are struggling socially offline are three to four times as likely as other girls to report daily negative social experiences online, but they’re also more likely to reap the benefits of the digital world.
Seven out of 10 adolescent girls of color who use TikTok (72%) or Instagram (71%) report encountering positive or identity-affirming content related to race at least monthly on these platforms, but nearly half report exposure to racist content or language on TikTok (47%) or Instagram (48%) at least monthly.
Across platforms, LGBTQ+ adolescent respondents are roughly twice as likely as non-LGBTQ+ adolescents to encounter hate speech related to sexual or gender identity, but also more likely to find a connection. More than one in three LGBTQ+ young people (35%) who use TikTok say they have this experience daily or more on the platform, as do 31% of LGBTQ+ users of messaging apps, 27% of Instagram users, 25% of Snapchat users, and 19% of YouTube users.
Girls have mixed experiences related to body image when they use social media. Roughly one in three girls who use TikTok (31%), Instagram (32%), and Snapchat (28%) say they feel bad about their body at least weekly when using these platforms, while nearly twice as many say they feel good or accepting of their bodies at least weekly while using TikTok (60%), Instagram (57%), and Snapchat (59%).
The majority of girls who use Instagram (58%) and Snapchat (57%) say they’ve been contacted by a stranger on these platforms in ways that make them uncomfortable. These experiences were less common, though still frequent, on other platforms, with nearly half of TikTok (46%) and messaging app (48%) users having been contacted by strangers on these platforms.
Common Sense Media also announced today that the organization is launching the “Healthy Young Minds” campaign, a multiyear initiative focused on building public understanding of the youth mental health crisis, spotlighting solutions, and catalyzing momentum for industry and policy change. Town halls are scheduled for New York City, Arizona, Los Angeles, Indianapolis, Florida, Massachusetts, London, and Brussels, with more locations to be announced shortly. Further research and digital well-being resources for educators will be released in the coming year.
To learn more about Common Sense Media, the survey, or the educational materials available:
Some interesting reading to inform or stimulate ideas and further exploration across various topics
JMIR Medical Education is launching a new theme issue focused on ChatGPT, Generative Language Models, and Generative AI in Medical Education.
The objective of this theme issue is to explore how generative language models can be used to advance medical education. Areas of interest include but are not limited to applications of generative artificial intelligence (AI) in medical education, creating intelligent tutoring systems, using natural language processing technologies in medical education, and exploring how chatbots can improve patient-physician communication.
The LAMP Platform, originally developed specifically for mental health, has the potential for broader application of the system’s data analysis tools across other medical specialties and care settings.
The program, LAMP (Learn, Assess, Manage, Prevent), is designed to make psychiatric care possible whenever and wherever it’s needed most. It was developed for neuropsychiatric research purposes under the direction of Dr. John Torous at Beth Israel Deaconess Medical Center/Harvard, but its use has expanded to help augment clinical care, according to Aditya Vaidyam, a software engineer and medical student who works with the team at Harvard.
The LAMP Platform is part of a new approach that combines asynchronous telemedicine with digital phenotyping. It takes virtual medicine to the next level, allowing patients to report changes or symptoms as they happen outside the clinical encounter. ‘Digital phenotyping’ tracks patient biomarkers (heart rate, sleep patterns, etc.) and interactions with mobile devices and cognitive games to yield vast amounts of data that can be analyzed to help predict relapse or even to suggest personalized interventions to fit the patient.
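To make the idea concrete, here is a minimal, hypothetical sketch of one passive digital-phenotyping feature, estimating nightly sleep from phone screen on/off events. It is illustrative only and does not reflect the LAMP Platform’s actual API or algorithms:

```python
# Hypothetical sketch: estimate nightly sleep duration from screen events
# and flag short nights. Not the LAMP Platform's actual code.
from datetime import datetime, timedelta

# Simulated screen events (timestamp, "on"/"off"), as a phone might log them
events = [
    (datetime(2023, 5, 1, 23, 30), "off"),
    (datetime(2023, 5, 2, 7, 15), "on"),
    (datetime(2023, 5, 2, 23, 50), "off"),
    (datetime(2023, 5, 3, 3, 10), "on"),   # a disrupted night
]

def nightly_sleep_estimates(events):
    """Treat each off->on gap longer than 3 hours as a sleep period."""
    sleep, last_off = [], None
    for ts, state in events:
        if state == "off":
            last_off = ts
        elif state == "on" and last_off is not None:
            gap = ts - last_off
            if gap > timedelta(hours=3):
                sleep.append((last_off, gap))
            last_off = None
    return sleep

for start, duration in nightly_sleep_estimates(events):
    hours = duration.total_seconds() / 3600
    flag = "  <- possible disruption" if hours < 6 else ""
    print(f"{start:%Y-%m-%d}: slept ~{hours:.1f} h{flag}")
```

In a real system, features like this would be computed continuously across many sensors and combined with cognitive-game and self-report data, feeding the kind of relapse-prediction analysis the platform is designed to support.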
Vaidyam, who is at the University of Illinois Urbana-Champaign (Carle Health, CI MED), foresees broad application of the system’s data analysis tools across other medical specialties and care settings. “It has the potential to help triage care needs; maybe urgent care physicians or telehealth physicians use the data to lessen the load on ERs, or maybe primary care providers use the data to dynamically reschedule their patient load based on estimated patient health risks.”
For more information on LAMP visit the Harvard website [Link] [Source]
School systems sue social media platforms
A number of school districts across the country are increasingly taking on social media. They are filing lawsuits that argue that Instagram, Snapchat, TikTok and YouTube have helped create the nation’s surging youth mental health crisis and should be held accountable.
The litigation, filed in a California federal court last week, alleges that social media companies used advanced artificial intelligence and machine learning technology to create addictive platforms that cause young people harm. “The results have been disastrous,” the filing asserts, saying more children than ever struggle with their mental health amid excessive use of the platforms. “There is simply no historic analog to the crisis the nation’s youth are now facing,” it said.
School administrators have observed a spike in mental health emergencies during the school day. There have been “very serious” cyberbullying incidents related to social media — with content “nearly impossible” to get the companies to take down — and school threats that have kept students at home.
Marisol Garcia, a staff therapist at the Family Institute at Northwestern University, said social media can be a powerful means of connection but the downsides are significant too. She was not surprised schools have begun filing lawsuits, saying they want to do what they think is good for their students’ mental and physical health.
The long-term ramifications of social media use — on attention span, social skills, mental health — are unclear, she said. The legal action, she said, “could be a positive thing.”
New Data from the Centers for Disease Control and Prevention
A new report from the CDC adds urgency to the lawsuits.
According to federal researchers who released data last week, teen girls across the United States are “engulfed in a growing wave of violence and trauma.” The CDC findings show that nearly 1 in 3 high school girls reported in 2021 that they seriously considered suicide — up nearly 60 percent from a decade ago.
Almost 3 in 5 teenage girls reported feeling so persistently sad or hopeless almost every day for at least two weeks in a row during the previous year that they stopped regular activities — a figure that was double the share of boys and the highest in a decade, CDC data showed. Girls fared worse on other measures, too, with higher rates of alcohol and drug use than boys and higher levels of being electronically bullied, according to the 89-page report. Thirteen percent had attempted suicide during the past year, compared with 7 percent of boys.
An article published in the journal Primary Care: Clinics in Office Practice by physicians from Florida Atlantic University’s Schmidt College of Medicine recommends that primary care physicians and other clinicians screen adolescents and young adults for inappropriate use or misuse of social media and for cyberbullying.
Most adolescents and young adults have experienced bullying in some form, with about one-third of them experiencing cyberbullying, contributing to mental health concerns. Cyberbullying involves electronic communication such as texts, emails, online videos and social media, and has become increasingly problematic over the last few decades. Reasons include the anonymity it allows, the fact that it is not as easily monitored, and adolescents’ and young adults’ easier access to devices.
Bullying is nothing new, but teens these days must also navigate the challenges of the digital landscape. Parents and teachers who grew up in a different generation may struggle to understand the many nuanced forms that online bullying, or cyberbullying, can take.
Any form of bullying is hurtful, but cyberbullying can be especially damaging because of the nature of the digital world. With cell phones and laptops, students carry their bullies in their backpacks, meaning that they cannot even escape their tormentors at home.
What’s more, because of the sneaky nature of cyberbullying, parents and teachers may be completely unaware that a problem is happening unless they are closely monitoring their children’s social media usage. And teens may be reluctant to open up about cyberbullying for fear of losing access to the digital world.
Screening and Screening Tools
According to the article authors, “It is staggering that only 23 percent of students who were cyberbullied reported it to an adult at their school, which shows that many incidences go unreported. This is another crucial reason why we need to screen patients as well as educate parents.”
Screening tools are available and can be worked into the workflow of health care visits to ensure that screening is done consistently and results are addressed in a timely manner.
Massachusetts Aggression Reduction Center (MARC) [Link & Resources]
Another resource is the pamphlet “Cyberbullying: Top Ten Tips for Health Care Providers,” developed by the Cyberbullying Research Center, which is part of the FAU School of Criminology and Criminal Justice. [Link]
Source: Caceres J, Holley A. Perils and Pitfalls of Social Media Use: Cyber Bullying in Teens/Young Adults. Primary Care: Clinics in Office Practice. Volume 50, Issue 1, March 2023, Pages 37-45. [Link]
Millions of Americans have been using health tracking apps for the last few years, and since the pandemic, apps for mental health issues like depression and anxiety have proliferated. We have become used to our private medical information from traditional medical settings being protected by HIPAA (the Health Insurance Portability and Accountability Act). But unfortunately, HIPAA wasn’t designed for the modern digital world and its new technologies. Most apps—including health, mental health, and biometric tracking devices—don’t fall under HIPAA rules, meaning that these companies can sell your private health data to third parties, with or without your consent.
A new research report published by Duke University’s Technology Policy Lab reveals that data brokers are selling huge datasets full of identifiable personal information—including psychiatric diagnoses and medication prescriptions, as well as many other identifiers, such as age, gender, ethnicity, religion, number of children, marital status, net worth, credit score, home ownership, profession, and date of birth—all matched with names, addresses, and phone numbers of individuals.
Data brokers are selling massive lists of psychiatric diagnoses, prescriptions, hospitalizations, and even lab results, all linked to identifiable contact information.
Researcher Joanne Kim began by searching for data brokers online. She contacted 37 of them by email or a form on their website (Kim identified herself as a researcher in the initial contact). None of those she contacted via email responded; some of those she contacted via form referred her to other data brokers. A total of 26 responded in some way (including some automated responses). Ultimately, only 10 data brokers had sustained contact by call or virtual meeting with Kim, so they were included in the study.
The 10 most engaged data brokers asked about the purpose of the purchase and the intended use cases for the data; however, after receiving that information (verbally or in writing) from the author, those companies did not appear to have additional controls for client management, and there was no indication in emails and phone calls that they had conducted separate background checks to corroborate the author’s (non-deceptive) statements.
Data brokers’ reported conditions for selling data:
one emphasized that the requested data on individuals’ mental health conditions was “extremely restricted” and that their team would need more information on intended use cases, yet continued to send a sample of aggregated, deidentified data counts.
one, after confirming that the author was not part of a marketing entity, said through its sales representative that as long as the author did not contact the individuals in the dataset, the author could use the data freely.
one implied it may have fully identified patient data but said it was unable to share this individual-level data due to HIPAA compliance concerns. Instead, the sales representative offered to aggregate the data of interest in a deidentified form.
one was most willing to sell data on depressed and anxious individuals at the author’s budget price of $2,500 and stated no apparent, restrictive data-use limitations post-purchase.
another advertised highly sensitive mental health data to the author, including names and postal addresses of individuals with depression, bipolar disorder, anxiety issues, panic disorder, cancer, PTSD, OCD, and personality disorder, as well as individuals who have had strokes and data on those people’s races and ethnicities.
two data brokers mentioned nondisclosure agreements (NDAs) in their communications, and one indicated that signing an NDA was a prerequisite for obtaining access to information on the data it sells.
one often made unsolicited calls to the author’s personal cell. If the author was delayed in responding to an email from the data broker, the frequency of calls seemed to increase.
Conclusions
The author concludes that additional research is critical as more depressed and anxious individuals utilize personal devices and software-based health-tracking applications (which are not protected by HIPAA), often unknowingly putting their sensitive mental health data at risk. This report finds that the industry appears to lack a set of best practices for handling individuals’ mental health data, particularly in the areas of privacy and buyer vetting. It finds that there are data brokers which advertise and are willing and able to sell data concerning Americans’ highly sensitive mental health information.
This research concludes by highlighting that the largely unregulated and black-box nature of the data broker industry, its buying and selling of sensitive mental health data, and the lack of clear consumer privacy protections in the U.S. necessitate a comprehensive federal privacy law or, at the very least, an expansion of HIPAA’s privacy protections alongside bans on the sale of mental health data on the open market.
Association calls for more research, regulation, and better messaging to parents and teens
Additional research is needed to better understand how certain features and content inherent in social media, as well as user behavior, may be affecting our children for both good and bad, APA Chief Science Officer Mitch Prinstein, PhD, told the Senate Judiciary Committee.
The age at which children begin to use social media is an area of great concern, he said. “Developmental neuroscientists have revealed that there are two highly critical periods for adaptive neural development. One of these is the first year of life. The second begins at the outset of puberty and lasts until early adulthood (i.e., from approximately 10 to 25 years old). This latter period is highly relevant, as this is when a great number of youths are offered relatively unfettered access to devices and unrestricted or unsupervised use of social media and other online platforms.”
Recent research shows over 50% of teens report at least one symptom of clinical dependency on social media. He also outlined several additional areas of concern that have emerged from scientific research. Social media sites ostensibly exist to foster social connections, he said. But many youth use the sites to compare themselves to others, seeking “likes” and other metrics rather than healthy, successful relationships.
In other words, social media offers the ‘empty calories of social interaction’ that appear to help satiate our biological and psychological needs but do not contain any of the healthy ingredients necessary to reap benefits, he said.
Social media also heightens the risk for negative peer influence among adolescents, as well as for addictive social media use and stress, he added, citing research showing that many young people use social media more than they intend to and that they have difficulty stopping its use.
Recent studies have revealed that technology and social media use is associated with changes in structural brain development (i.e., changes in the size and physical characteristics of the brain). This highlights the risks associated with young people accessing social media sites that glamorize disordered eating, cutting and other harmful behaviors. Such content is often not filtered or removed, and warnings are not triggered, so vulnerable youth are not sheltered from the effects that exposure to this content can have on their own behavior. “This underscores the need for platforms to deploy tools to filter content, display warnings, and create reporting structures to mitigate these harms.”
Another area of concern is what young people are missing out on by spending so many hours on social media—especially sleep, which they need for healthy development. “Research suggests that insufficient sleep is associated with poor school performance, difficulties with attention, stress regulation, and increased risk for automobile accidents,” he said.
But it is not all bad news. Some research demonstrates that social media use is linked with positive outcomes that can benefit youth mental health, according to Prinstein. “Perhaps most notably, psychological research suggests that young people form and maintain friendships online. These relationships often afford opportunities to interact with a more diverse peer group than offline, and the relationships are close and meaningful and provide important support to youth in times of stress,” he said. This can be especially important for youth with marginalized identities, including racial, ethnic, sexual and gender minorities.
Dr. Prinstein made several recommendations for what Congress can do to address many of the risks social media may pose to youth. These include:
Allocating at least $100 million to study social media and adolescent mental health;
Mandating that data from algorithms be made public, along with other internal research conducted by social media companies;
Requiring social media platforms to develop tools to mitigate the harm to youth, such as disabling particularly addictive features and enabling users to opt out of certain algorithms;
Mandating protections for marginalized and LGBTQ+ kids, while retaining their ability to connect with others in such groups for social support;
Passing the Kids Online Safety Act and previously proposed legislative fixes such as updates to the Children’s Online Privacy Protection Act.
**Update Article – 02/16/23** Published by National Public Radio, “10 things to know about how social media affects teens’ brains” discusses in more depth the important takeaway messages from Dr. Prinstein’s testimony to the US Senate Judiciary Committee. [Link to article]