
Update: Surgeon General Issues Advisory on Risks of Social Media Use in Youth

Today, United States Surgeon General Dr. Vivek Murthy released a new Surgeon General’s Advisory on Social Media and Youth Mental Health. While social media may offer some benefits, there are ample indicators that social media can also pose a risk of harm to the mental health and well-being of children and adolescents. Social media use by young people is nearly universal, with up to 95% of young people ages 13-17 reporting using a social media platform and more than a third saying they use social media “almost constantly.”

[Link to Surgeon General’s Advisory – PDF]

Health Advisory on Social Media Use in Adolescence – [Original Post 5/10/23]

Psychological scientists are examining the potential beneficial and harmful effects of social media use on adolescents’ social, educational, psychological, and neurological development. This is a rapidly evolving and growing area of research with implications for many stakeholders (e.g., youth, parents, caregivers, educators, policymakers, practitioners, and members of the tech industry) who share responsibility to ensure adolescents’ well-being. Officials and policymakers including the U.S. Surgeon General Dr. Vivek Murthy have documented the importance of this issue and are actively seeking science-informed input.

The American Psychological Association offers the following recommendations, based on the scientific evidence to date and on the considerations below.

A. Using social media is not inherently beneficial or harmful to young people. Adolescents’ lives online both reflect and impact their offline lives. In most cases, the effects of social media are dependent on adolescents’ own personal and psychological characteristics and social circumstances—intersecting with the specific content, features, or functions that are afforded within many social media platforms. In other words, the effects of social media likely depend on what teens can do and see online, teens’ preexisting strengths or vulnerabilities, and the contexts in which they grow up.3

B. Adolescents’ experiences online are affected by 1) how they shape their own social media experiences (e.g., they choose whom to like and follow); and 2) both visible and unknown features built into social media platforms.

C. Not all findings apply equally to all youth. Scientific findings offer one piece of information that can be used along with knowledge of specific youths’ strengths, weaknesses, and context to make decisions that are tailored for each teen, family, and community.4

D. Adolescent development is gradual and continuous, beginning with biological and neurological changes occurring before puberty is observable (i.e., approximately beginning at 10 years of age), and lasting at least until dramatic changes in youths’ social environment (e.g., peer, family, and school context) and neurological changes have completed (i.e., until approximately 25 years of age).5 Age-appropriate use of social media should be based on each adolescent’s level of maturity (e.g., self-regulation skills, intellectual development, comprehension of risks) and home environment.6 Because adolescents mature at different rates, and because no data indicate a specific age at which children become unaffected by the potential risks and opportunities of social media use, research has not yet specified a single time or age point for many of these recommendations. In general, potential risks are likely to be greater in early adolescence, a period of greater biological, social, and psychological transition, than in late adolescence and early adulthood.7,8

E. As researchers have found with the internet more broadly, racism (i.e., often reflecting the perspectives of those building the technology) is built into social media platforms. For example, algorithms (i.e., sets of mathematical instructions that direct users’ everyday experiences, down to the posts that they see) can encode centuries of racist policy and discrimination.9 Social media can become an incubator, providing community and training that fuel racist hate.10 The resulting potential impact is far-reaching, including physical violence offline, as well as threats to well-being.11

F. These recommendations are based on psychological science and related disciplines at the time of this writing (April 2023). Collectively, these studies were conducted with thousands of adolescents who completed standardized assessments of social, behavioral, psychological, and/or neurological functioning, and also reported (or were observed) engaging with specific social media functions or content. However, these studies do have limitations. First, findings suggesting causal associations are rare, as the data required to make cause-and-effect conclusions are challenging to collect and/or may be available within technology companies, but have not been made accessible to independent scientists. Second, long-term (i.e., multiyear) longitudinal research often is unavailable; thus, the associations between adolescents’ social media use and long-term outcomes (i.e., into adulthood) are largely unknown. Third, relatively few studies have been conducted with marginalized populations of youth, including those from marginalized racial, ethnic, sexual, gender, socioeconomic backgrounds, those who are differently abled, and/or youth with chronic developmental or health conditions. (References in link below)

Recommendations

1. Youth using social media should be encouraged to use functions that create opportunities for social support, online companionship, and emotional intimacy that can promote healthy socialization.

2. Social media use, functionality, and permissions/consenting should be tailored to youths’ developmental capabilities; designs created for adults may not be appropriate for children.

3. In early adolescence (i.e., typically 10–14 years), adult monitoring (i.e., ongoing review, discussion, and coaching around social media content) is advised for most youths’ social media use; autonomy may increase gradually as kids age and if they gain digital literacy skills. However, monitoring should be balanced with youths’ appropriate needs for privacy.

4. To reduce the risk of psychological harm, adolescents’ exposure to social media content that depicts illegal or psychologically maladaptive behavior (including content that instructs or encourages health-risk behaviors such as self-harm, e.g., cutting or suicide; harm to others; or eating-disordered behavior, e.g., restrictive eating, purging, or excessive exercise) should be minimized, reported, and removed;23 moreover, technology should not drive users to this content.

5. To minimize psychological harm, adolescents’ exposure to “cyberhate” including online discrimination, prejudice, hate, or cyberbullying especially directed toward a marginalized group (e.g., racial, ethnic, gender, sexual, religious, ability status),22 or toward an individual because of their identity or allyship with a marginalized group should be minimized.

6. Adolescents should be routinely screened for signs of “problematic social media use” that can impair their ability to engage in daily roles and routines, and may present risk for more serious psychological harms over time.

7. The use of social media should be limited so as to not interfere with adolescents’ sleep and physical activity.

8. Adolescents should limit use of social media for social comparison, particularly around beauty- or appearance-related content.

9. Adolescents’ social media use should be preceded by training in social media literacy to ensure that users have developed psychologically informed competencies and skills that will maximize the chances for balanced, safe, and meaningful social media use.

10. Substantial resources should be provided for continued scientific examination of the positive and negative effects of social media on adolescent development.

Source and additional info: apa.org

Download in PDF format from apa.org


Training AI to reason and use common sense like humans

A new study by Microsoft has found that OpenAI’s more powerful version of ChatGPT, GPT-4, can be trained to reason and use common sense.

Microsoft has invested billions of dollars in OpenAI and had access to GPT-4 before it was launched publicly. The research describes GPT-4 as part of a new cohort of large language models (LLMs), including ChatGPT and Google’s PaLM. LLMs are trained on massive amounts of data and can be fed both images and text to come up with answers.

The Microsoft team recently published a 155-page analysis entitled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” The researchers found that LLMs can be trained to reason and use common sense like humans. They demonstrated that GPT-4 can solve complex tasks in several fields without special prompting, including mathematics, vision, medicine, law and psychology.

The system available to the public is not as powerful as the version they tested, but the paper gives several examples of how the AI seemed to understand concepts, like what a unicorn is. GPT-4 drew a unicorn in TikZ, a LaTeX-based drawing language. In the crude “drawings,” GPT-4 got the concept of a unicorn right. GPT-4 also exhibited more common sense than previous models like ChatGPT, the researchers said. Both GPT-4 and ChatGPT were asked to stack a book, nine eggs, a laptop, a bottle and a nail. While ChatGPT recommended placing the eggs on top of the nail, the more sophisticated model arranged the items so the eggs would not break.

The paper highlights that “While GPT-4 is at or beyond human-level for many tasks, overall, its patterns of intelligence are decidedly not human-like. However, GPT-4 is almost certainly only a first step towards a series of increasingly generally intelligent systems, and in fact, GPT-4 itself has improved throughout our time testing it.”

However, the report acknowledged that AI still has limitations and biases, and users were warned to be careful. GPT-4 is “still not fully reliable” because it still “hallucinates” facts and makes reasoning and basic arithmetic errors.

[Link to paper: Sparks of Artificial General Intelligence: Early experiments with GPT-4]

Additional Information

Samuel Altman, chief executive of OpenAI, the company behind the artificial intelligence chatbot ChatGPT, testified before the United States Congress on the emerging challenges and the future of AI technology. The oversight hearing was the first in a series of hearings intended to write the rules of AI.

[Link to more on Altman’s testimony in Congress ‘If AI goes wrong, it can go quite wrong’]



May is Mental Health Awareness Month.

According to the Department of Health and Human Services (HHS), our country is facing an unprecedented mental health crisis. The crisis isn’t just affecting adults; it’s devastating young people, and people from every background are impacted.

The goal of Mental Health Awareness Month is to bring attention to mental health and how essential it is to overall health and wellbeing.

Over the last year, HHS has facilitated a great number of initiatives, innovative programs and increased funding sources to improve behavioral health care and services for all ages. HHS has launched the 988 Suicide Prevention Lifeline, expanded mental health services in schools, advanced a center of excellence on social media and mental health, and launched the HHS Roadmap for Behavioral Health Integration. In addition, HHS has helped states establish Certified Community Behavioral Health Clinics, which provide behavioral health care 24 hours a day, 7 days a week, because mental health crises don’t just happen during business hours. HHS is also providing hundreds of millions of dollars to programs like Project AWARE, Mental Health Awareness Training, and the National Child Traumatic Stress Initiative, which help reach families and youth where they are, including at schools and in the community.

Here is an extensive fact sheet of the various efforts made by HHS over the past year:

[Link: Fact Sheet of Behavioral Health Accomplishments by HHS that are now available for Mental Health Awareness Month 2023.]


JAMA Study Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions

In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit’s r/AskDocs) was used to randomly draw 195 exchanges from October 2022 in which a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) in December 2022. The original question, along with anonymized and randomly ordered physician and chatbot responses, was evaluated in triplicate by a team of licensed health care professionals. Evaluators chose “which response was better” and judged both “the quality of information provided” (very poor, poor, acceptable, good, or very good) and “the empathy or bedside manner provided” (not empathetic, slightly empathetic, moderately empathetic, empathetic, or very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between chatbot and physicians.
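The scoring step described above — averaging each exchange’s triplicate ratings, then averaging across exchanges for each source — can be sketched in a few lines of Python. This is a hypothetical illustration with invented scores, not the study’s data or the authors’ actual code:

```python
from statistics import mean

# Hypothetical example ratings on the study's 1-5 scale (invented data).
# Each exchange was rated in triplicate; each tuple holds the three
# evaluators' scores for one exchange.
ratings = {
    "physician": {"quality": [(3, 3, 4), (2, 3, 3)],
                  "empathy": [(2, 2, 3), (3, 2, 2)]},
    "chatbot":   {"quality": [(4, 5, 4), (4, 4, 5)],
                  "empathy": [(5, 4, 4), (4, 5, 5)]},
}

def mean_score(source: str, dimension: str) -> float:
    """Average the triplicate ratings per exchange, then across exchanges."""
    per_exchange = [mean(triple) for triple in ratings[source][dimension]]
    return mean(per_exchange)

for dim in ("quality", "empathy"):
    doc = mean_score("physician", dim)
    bot = mean_score("chatbot", dim)
    print(f"{dim}: physician={doc:.2f} chatbot={bot:.2f}")
```

With the invented numbers above, the chatbot’s mean exceeds the physicians’ on both dimensions, mirroring the direction (though not the magnitudes) of the study’s result.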

Results

The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy. Further exploration of this technology is warranted in clinical settings, such as using chatbot to draft responses that physicians could then edit. Randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.

Limitations

The main study limitation was the use of the online forum question and answer exchanges. Such messages may not reflect typical patient-physician questions. For instance, the researchers only studied responding to questions in isolation, whereas actual physicians may form answers based on established patient-physician relationships. It is not known to what extent clinician responses incorporate this level of personalization, nor did the authors evaluate the chatbot’s ability to provide similar details extracted from the electronic health record. Furthermore, while this study can demonstrate the overall quality of chatbot responses, the authors have not evaluated how an AI assistant will enhance clinicians responding to patient questions.

Key Points from the Study

Question  Can an artificial intelligence chatbot assistant provide responses to patient questions that are of comparable quality and empathy to those written by physicians?

Findings  In this cross-sectional study of 195 randomly drawn patient questions from a social media forum, a team of licensed health care professionals compared physician and chatbot responses to questions asked publicly on the forum. The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy.

Meaning  These results suggest that artificial intelligence assistants may be able to aid in drafting responses to patient questions.

[Link to Journal article] JAMA Intern Med. Published online April 28, 2023.

Mental Health App to Manage Distress

A new mental health smartphone App has been developed to help people regulate their emotions in healthy ways.

Rutgers researcher Edward Selby, PhD, director of clinical training for the Rutgers University Clinical Psychology Program, has taken a different approach to a mental health app. The app prompts users to consider their mental health at different times throughout the day, increasing awareness of unique personal experiences. Progress and improvement can be viewed over time to help users identify and better understand how their emotions change and the triggers that may cause those changes.

According to Dr. Selby, “The better we can understand the underlying causes and dynamics that result in the problems we define as ‘mental illness,’ the better we can design, tailor and adapt treatments to help improve those underlying problems.” Selby’s research shows people often react emotionally to stressful situations in ways that make the situations worse. When this happens, people are at higher risk for harmful behaviors, such as substance use, binge eating, hostility and self-injury.

According to the National Alliance on Mental Illness, more than 20 percent of adults in the United States experienced mental illness in 2020. There is increasing evidence that technology may be used to address mental health care beyond the conventional office setting. These approaches, including smartphone treatment apps, may also help reach patients in need of mental health services who lack access to care.

The name of the app is “Storm/breaker,” and its goal is to “help people naturally and automatically understand dynamic emotional, cognitive, interpersonal and behavioral experiences occurring in their lives. As understanding of these processes grows, they will spontaneously begin to make more healthy and adaptive responses to upsetting situations that arise.”

The Storm/breaker app is designed to help users in a number of ways, including:

  • People can learn to understand their unique emotional, psychological and behavioral patterns, which Selby said is essential to making positive changes in one’s life.
  • People can begin to make changes to improve their emotional experiences that may help to defuse upsetting situations and avoid problematic behaviors.
  • The app’s customizable clinical toolkit will allow people to link to other smartphone apps that may help further manage their stress, including entertainment apps to distract, relaxation apps and productivity apps.

Selby said while other apps attempt to convert typical in-person therapy into a smartphone experience, Storm/breaker is a standalone intervention designed specifically to harness the advantages of daily smartphone use. 

Selby will discuss his research on mental health in an episode of the PBS series Healthy Minds with Dr. Jeffrey Borenstein in May 2023 during Mental Health Awareness Month.
The app was programmed in collaboration with Michigan Software Labs.


Source: https://ifh.rutgers.edu/news/


New Brain Health and Fitness Program Helps Adults with Mild Cognitive and Mental Health Issues

Managing a digital world can be an ongoing challenge for people facing aging, cognitive decline, mental health issues or other concerns. Tasks many people take for granted, like shopping online, ordering a food delivery, getting money from an ATM or buying a mass transit card, can stretch their technical skills.

To help people better manage technology, and to improve their cognitive function, the Miller School of Medicine Department of Psychiatry and Behavioral Sciences recently established the Brain Health and Fitness Program, which augments patient care with computerized cognitive and functional skills training.

According to program director Philip Harvey, Ph.D., Leonard M. Miller Professor of Psychiatry and Behavioral Sciences, vice chair for research, and chief of the Division of Psychology: “There are many people who have trouble learning new technologies, and we want to help them get a better handle on that. That includes adults with serious mental illness, as well as older people who may have mild cognitive impairments, or even healthy adults who simply want to acquire new skills. Increasing cognitive performance makes it a lot easier to learn new skills, regardless of people’s current situation.”

Custom Designed Skills Assessment and Training

Designed to last three to six months, this fee-for-service program begins with an individualized assessment of each patient’s condition, the psychiatric services they are receiving and any medications that they have been prescribed.
Then a customized program is developed for the patient. The team helps patients access the program’s evidence-based software modules, which run on PCs, Macs, or tablets. Patients then self-administer the training at home, community centers or other preferred venues.

The cloud-based software, called Functional Skills Assessment and Training (FUNSAT), was designed by Dr. Harvey and Sara Czaja, Ph.D., professor of gerontology at Weill Cornell College of Medicine and professor emeritus at the Miller School. It teaches people how to perform important tasks, such as online shopping, operating ticketing kiosks and withdrawing money from an ATM. They can also learn medication organization and adherence, a crucial task for patients receiving integrated pharmacological augmentation and brain fitness training.

The Goals of the Program

  1. Improvements in critical cognitive skills such as processing speed, concentration, attention and short-term memory
  2. Associated improvements in functional skills, quality of life, confidence and self-efficacy
    • Occupational and academic performance
    • Everyday living skills
    • Health related self care
  3. Alleviation of caregiver burden in cases where functional impairments have led to caregiver distress

FUNSAT Program

FUNSAT is simple to complete at home. Patients train for around two hours a week, in sessions of at least 15 minutes. The program staff monitors their progress online and sends encouraging messages if necessary.

Through the software, participants learn by doing.
“In our most recent studies, we’ve shown that when people improve in the training, they actually start doing these things in the real world,” said Dr. Harvey. “FUNSAT improves their ability to perform certain tasks, as well as boosting cognition particularly in concert with cognitive training. Not to mention, the practice training gives them confidence to go out and actually do these activities.”

Enhancing Skills in Everyday Technology


Though the Brain Health and Fitness Program is currently based in South Florida, the software’s cloud configuration could make it available to virtually anyone with a good Internet connection. Dr. Harvey and colleagues have worked with a number of facilities to implement FUNSAT, including the Los Angeles County Department of Mental Health, the New York State Office of Mental Health, the Manhattan Psychiatric Center and aging centers throughout the country. The online training helps patients tune their skills before going back into the world.


Though not covered by insurance, the program is rapidly gaining popularity, as it provides a unique opportunity to improve people’s quality of life.
“We have found that two-thirds of the people doing the training make tremendous progress,” said Dr. Harvey. “It helps them improve their skill levels and learn to use everyday technologies that had been giving them trouble. It’s a great way to enhance their well-being.”


What Do Teen Girls Think About TikTok, Instagram, and How Social Media Impacts Their Lives

A new report by Common Sense Media shows that nearly half (45%) of girls who use TikTok say they feel “addicted” to the platform or use it more than intended.

Today, a new research report by Common Sense Media reveals what teen girls think about TikTok and Instagram, and describes the impact that these and other social media platforms have on their lives. According to the report, Teens and Mental Health: How Girls Really Feel About Social Media, nearly half (45%) of girls who use TikTok say they feel “addicted” to the platform or use it more than intended at least weekly. Among girls with moderate to severe depressive symptoms, roughly seven in 10 who use Instagram (75%) and TikTok (69%) say they come across problematic suicide-related content at least monthly on these platforms.

A survey of over 1,300 adolescent girls across the country sought to better understand how the most popular social media platforms and design features impact their lives today. Among the report’s key findings, adolescent girls spend over two hours daily on TikTok, YouTube, and Snapchat, and more than 90 minutes on Instagram and messaging apps. When asked about platform design features, the majority of girls believe that features like location sharing, public accounts, endless scrolling, and appearance filters have an effect on them, but they’re split on whether those effects are positive or negative. Girls were most likely to say that location sharing (45%) and public accounts (33%) had a mostly negative effect on them, compared to other features. In contrast, they were most likely to say that video recommendations (49%) and private messaging (45%) had a mostly positive impact on them.

Other key findings

  1. Nearly four in 10 (38%) girls surveyed report symptoms of depression, and among these girls, social media has an outsize impact—for better and for worse.
  2. Girls who are struggling socially offline are three to four times as likely as other girls to report daily negative social experiences online, but they’re also more likely to reap the benefits of the digital world.
  3. Seven out of 10 adolescent girls of color who use TikTok (72%) or Instagram (71%) report encountering positive or identity-affirming content related to race at least monthly on these platforms, but nearly half report exposure to racist content or language on TikTok (47%) or Instagram (48%) at least monthly.
  4. Across platforms, LGBTQ+ adolescent respondents are roughly twice as likely as non-LGBTQ+ adolescents to encounter hate speech related to sexual or gender identity, but also more likely to find a connection. More than one in three LGBTQ+ young people (35%) who use TikTok say they have this experience daily or more on the platform, as do 31% of LGBTQ+ users of messaging apps, 27% of Instagram users, 25% of Snapchat users, and 19% of YouTube users.
  5. Girls have mixed experiences related to body image when they use social media. Roughly one in three girls who use TikTok (31%), Instagram (32%), and Snapchat (28%) say they feel bad about their body at least weekly when using these platforms, while nearly twice as many say they feel good or accepting of their bodies at least weekly while using TikTok (60%), Instagram (57%), and Snapchat (59%).
  6. The majority of girls who use Instagram (58%) and Snapchat (57%) say they’ve been contacted by a stranger on these platforms in ways that make them uncomfortable. These experiences were less common, though still frequent, on other platforms, with nearly half of TikTok (46%) and messaging app (48%) users having been contacted by strangers on these platforms.

Common Sense Media also announced today that the organization is launching the “Healthy Young Minds” campaign, a multiyear initiative focused on building public understanding of the youth mental health crisis, spotlighting solutions, and catalyzing momentum for industry and policy change. Town halls are scheduled for New York City, Arizona, Los Angeles, Indianapolis, Florida, Massachusetts, London, and Brussels, with more locations to be announced shortly. Further research and digital well-being resources for educators will be released in the coming year.

To learn more about Common Sense Media, the survey or educational materials available:

Source: Common Sense
Link to Survey: Teens and Mental Health: How Girls Really Feel About Social Media
Report Infographic
Curriculum and classroom resources


News Briefs March 2023

Recent Articles, News, Stories & Press Releases

Some interesting reading to inform or stimulate ideas or further exploration in various topics

JMIR Medical Education is launching a new theme issue focused on ChatGPT, Generative Language Models, and Generative AI in Medical Education.

The objective of this theme issue is to explore how generative language models can be used to advance medical education. Areas of interest include but are not limited to applications of generative artificial intelligence (AI) in medical education, creating intelligent tutoring systems, using natural language processing technologies in medical education, and exploring how chatbots can improve patient-physician communication.

The deadline for submissions is July 31, 2023. All accepted manuscripts will be published as part of the JMIR Medical Education Special Issue on ChatGPT: Generative Language Models & Generative AI in Medical Education. Manuscripts should be prepared according to the journal’s guidelines and can be submitted at https://mededu.jmir.org/author.


LAMP Platform, originally developed specifically for mental health, has the potential for a broader application of the system’s data analysis tools across other medical specialties and care settings.

The program, LAMP (Learn, Assess, Manage, Prevent), is designed to make psychiatric care possible whenever and wherever it’s needed most. It was developed for neuropsychiatric research purposes under the direction of Dr. John Torous at Beth Israel Deaconess Medical Center/Harvard, but its use has expanded to help augment clinical care, according to software engineer and medical student Aditya Vaidyam, who works with the team at Harvard.

The LAMP Platform is part of a new approach that combines asynchronous telemedicine with digital phenotyping. It takes virtual medicine to the next level, allowing patients to report changes or symptoms as they happen outside the clinical encounter. ‘Digital phenotyping’ tracks patient biomarkers (heart rate, sleep patterns, etc.) and interactions with mobile devices and cognitive games to yield vast amounts of data that can be analyzed to help predict relapse or even to suggest personalized interventions to fit the patient.
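As a toy illustration of the kind of rule a digital-phenotyping pipeline might apply, the sketch below flags a patient for follow-up when recently tracked sleep drops below a threshold. The field names, thresholds, and data are invented for illustration; LAMP’s actual analyses are far more sophisticated than a single rolling average:

```python
from statistics import mean

def flag_sleep_disruption(nightly_sleep_hours: list[float],
                          window: int = 7,
                          threshold: float = 6.0) -> bool:
    """Flag a patient for follow-up if average sleep over the most
    recent `window` nights falls below `threshold` hours."""
    if len(nightly_sleep_hours) < window:
        return False  # not enough data yet
    recent = nightly_sleep_hours[-window:]
    return mean(recent) < threshold

# A hypothetical patient whose sleep deteriorates over two weeks:
nights = [7.5, 7.0, 7.2, 6.8, 7.1, 6.9, 7.0,
          5.5, 5.0, 5.8, 5.2, 4.9, 5.5, 5.1]
print(flag_sleep_disruption(nights))  # True: last 7 nights average below 6 h
```

A real system would combine many such passive signals (heart rate, device interaction, cognitive-game performance) and model them jointly, but the core idea — turning continuously collected data into an early signal for clinicians — is the same.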

Vaidyam, who is at the University of Illinois Urbana-Champaign, Carle Health, CI MED, foresees broad application of the system’s data analysis tools across other medical specialties and care settings. “It has the potential to help triage care needs; maybe urgent care physicians or telehealth physicians use the data to lessen the load on ERs, or maybe primary care providers use the data to dynamically reschedule their patient load based on estimated patient health risks.”

For more information on LAMP visit the Harvard website [Link] [Source]


School systems sue social media platforms

A number of school districts across the country are increasingly taking on social media. They are filing lawsuits that argue that Instagram, Snapchat, TikTok and YouTube have helped create the nation’s surging youth mental health crisis and should be held accountable.

The litigation, filed in a California federal court last week, alleges that social media companies used advanced artificial intelligence and machine learning technology to create addictive platforms that cause young people harm. “The results have been disastrous,” the filing asserts, saying more children than ever struggle with their mental health amid excessive use of the platforms. “There is simply no historic analog to the crisis the nation’s youth are now facing,” it said.

School administrators have observed a spike in mental health emergencies during the school day. There have been “very serious” cyberbullying incidents related to social media — with content “nearly impossible” to get the companies to take down — and school threats that have kept students at home.

Marisol Garcia, a staff therapist at the Family Institute at Northwestern University, said social media can be a powerful means of connection but the downsides are significant too. She was not surprised schools have begun filing lawsuits, saying they want to do what they think is good for their students’ mental and physical health.

The long-term ramifications of social media use — on attention span, social skills, mental health — are unclear, she said. The legal action, she said, “could be a positive thing.”

New Data from the Centers for Disease Control and Prevention

A new report from the CDC adds urgency to the lawsuits.

According to federal researchers who released data last week, teen girls across the United States are “engulfed in a growing wave of violence and trauma.” The CDC findings show that nearly 1 in 3 high school girls reported in 2021 that they seriously considered suicide — up nearly 60 percent from a decade ago.

Almost 3 in 5 teenage girls reported feeling so persistently sad or hopeless almost every day for at least two weeks in a row during the previous year that they stopped regular activities — a figure that was double the share of boys and the highest in a decade, CDC data showed. Girls fared worse on other measures, too, with higher rates of alcohol and drug use than boys and higher levels of being electronically bullied, according to the 89-page report. Thirteen percent had attempted suicide during the past year, compared with 7 percent of boys.

[Source – Washington Post], [Link to CDC Report Youth Risk Behavior Survey]


Physicians And Other Clinicians Should Screen Youth For Cyberbullying & Social Media Use

An article published in the journal Primary Care: Clinics in Office Practice by physicians from Florida Atlantic University’s Schmidt College of Medicine recommends that primary care physicians and other clinicians screen adolescents and young adults for cyberbullying and for inappropriate use or misuse of social media.

Most adolescents and young adults have experienced bullying in some form, with about one-third of them experiencing cyberbullying, contributing to mental health concerns. Cyberbullying involves electronic communication such as texts, emails, online videos and social media, and it has become increasingly problematic over the last few decades. Contributing factors include the anonymity it allows, the difficulty of monitoring it, and adolescents’ and young adults’ easy access to devices.

Bullying is nothing new, but teens these days must also navigate the challenges of the digital landscape. Parents and teachers who grew up in a different generation may struggle to understand the many nuanced forms that online bullying, or cyberbullying, can take.

Any form of bullying is hurtful, but cyberbullying can be especially damaging because of the nature of the digital world. With cell phones and laptops, students carry their bullies in their backpacks, meaning that they cannot even escape their tormentors at home.

What’s more, because of the sneaky nature of cyberbullying, parents and teachers may be completely unaware that a problem is happening unless they are closely monitoring their children’s social media usage. And, teens may be reluctant to open up about cyberbullying for fear of losing access to the digital world.

Screening and Screening Tools

According to the article authors, “It is staggering that only 23 percent of students who were cyberbullied reported it to an adult at their school, which shows that many incidences go unreported. This is another crucial reason why we need to screen patients as well as educate parents.”

Screening tools are available and can be built into the workflow of healthcare visits to ensure that screening is done consistently and that results are addressed in a timely manner.

Current screening tools include:

  • Revised Olweus Bully/Victim Questionnaire (R-OBVQ) [Link]
  • California Bullying Victimization Scale (CBVS) [Link]
  • Child Adolescent Bullying Scale (CABS) [Link]
  • Massachusetts Aggression Reduction Center (MARC) [Link & Resources]

Another resource is “Cyberbullying: Top Ten Tips for Health Care Providers,” a pamphlet developed by the Cyberbullying Research Center, which is part of the FAU School of Criminology and Criminal Justice. [Link]

Source: Caceres J, Holley A. Perils and Pitfalls of Social Media Use: Cyber Bullying in Teens/Young Adults. Primary Care: Clinics in Office Practice. Volume 50, Issue 1, March 2023, Pages 37-45. [Link]

The Sale of Americans’ Mental Health Data

Millions of Americans have been using health-tracking apps for the last few years, and since the pandemic, apps for mental health issues like depression and anxiety have proliferated. We are accustomed to having our private medical information protected by HIPAA (the Health Insurance Portability and Accountability Act) in traditional medical settings. Unfortunately, HIPAA was not designed for the modern digital world and its new technologies. Most apps, including health, mental health, and biometric tracking devices, do not fall under HIPAA rules, meaning these companies can sell your private health data to third parties, with or without your consent.

A new research report published by Duke University’s Technology Policy Lab reveals that data brokers are selling huge datasets full of identifiable personal information—including psychiatric diagnoses and medication prescriptions, as well as many other identifiers, such as age, gender, ethnicity, religion, number of children, marital status, net worth, credit score, home ownership, profession, and date of birth—all matched with names, addresses, and phone numbers of individuals.

Data brokers are selling massive lists of psychiatric diagnoses, prescriptions, hospitalizations, and even lab results, all linked to identifiable contact information.

Researcher Joanne Kim began by searching for data brokers online. She contacted 37 of them by email or a form on their website (Kim identified herself as a researcher in the initial contact). None of those she contacted via email responded; some of those she contacted via form referred her to other data brokers. A total of 26 responded in some way (including some automated responses). Ultimately, only 10 data brokers had sustained contact by call or virtual meeting with Kim, so they were included in the study.

The 10 most engaged data brokers asked about the purpose of the purchase and the intended use cases for the data. After receiving that information (verbally or in writing) from the author, however, those companies did not appear to apply additional client-management controls, and nothing in their emails and phone calls indicated that they had conducted separate background checks to corroborate the author’s (non-deceptive) statements.

How the data brokers responded:

  • One emphasized that the requested data on individuals’ mental health conditions was “extremely restricted” and that its team would need more information on intended use cases, yet it continued to send a sample of aggregated, deidentified data counts.
  • After one confirmed that the author was not part of a marketing entity, its sales representative said that as long as the author did not contact the individuals in the dataset, the author could use the data freely.
  • One implied it might have fully identified patient data but said it was unable to share this individual-level data due to HIPAA compliance concerns. Instead, the sales representative offered to aggregate the data of interest in deidentified form.
  • One was most willing to sell data on depressed and anxious individuals at the author’s budget price of $2,500 and stated no apparent, restrictive data-use limitations post-purchase.
  • Another advertised highly sensitive mental health data to the author, including names and postal addresses of individuals with depression, bipolar disorder, anxiety issues, panic disorder, cancer, PTSD, OCD, and personality disorder, as well as individuals who have had strokes, along with data on those people’s races and ethnicities.
  • Two data brokers mentioned nondisclosure agreements (NDAs) in their communications, and one indicated that signing an NDA was a prerequisite for obtaining access to information on the data it sells.
  • One often made unsolicited calls to the author’s personal cell; if the author was delayed in responding to an email from the data broker, the frequency of calls seemed to increase.

Conclusions

The author concludes that additional research is critical as more depressed and anxious individuals use personal devices and software-based health-tracking applications (which are not protected by HIPAA), often unknowingly putting their sensitive mental health data at risk. The report finds that the industry appears to lack a set of best practices for handling individuals’ mental health data, particularly in the areas of privacy and buyer vetting. It also finds that some data brokers advertise, and are willing and able to sell, data concerning Americans’ highly sensitive mental health information.

This research concludes by highlighting that the largely unregulated and black-box nature of the data broker industry, its buying and selling of sensitive mental health data, and the lack of clear consumer privacy protections in the U.S. necessitate a comprehensive federal privacy law or, at the very least, an expansion of HIPAA’s privacy protections alongside bans on the sale of mental health data on the open market.

[Link]