Registration is now open for CES® 2024 — taking place Jan. 9-12, in Las Vegas.
CES is not just for consumers. Experience global business opportunities with CES, where you can meet with partners, customers, media, investors, and policymakers from across the industry and the world all in one place.
Don’t miss your chance to be a part of the most powerful tech event in the world.
Working with the Consumer Technology Association®, the American Psychological Association will launch PsyTech at CES, a look at how psychological science intersects with and influences technology. As the science of human behavior, psychology plays a significant role in advancing technology. Psychological researchers are engaged across all aspects of technology innovation and advancement, including human-computer interaction and user experience, the application of learning science to education technology products and studies of the effectiveness of digital products focused on behavior change. PsyTech sessions will showcase examples of the contributions of ethical and equitable psychological research to product development, consumer acceptance and innovation.
All PsyTech sessions will take place Tuesday, Jan. 9. Highlights include:
Harnessing the Power of AI Ethically (9 a.m. PST) This panel will dive into the ethics of artificial intelligence and provide examples of how industry leaders can use what is known about human behavior to address ethical issues and help realize the power of AI to benefit human health and well-being.
Social Media’s Impact on Kids: What’s Next for Tech? (11 a.m. PST) This session will explore a science-based roadmap that industry leaders can use to enhance the positive effects of social media and keep kids safe.
Making Digital Interventions Accessible and Affordable (1 p.m. PST) This session will examine how experts on the cutting edge of behavioral health treatments are teaming up with the technology industry to bring evidence-based digital treatments into the mainstream, changing how technology and health care work together.
Your Brain Gaming for Good (3 p.m. PST) This session will look at the latest findings on games associated with enhancements to human performance and bring together leaders developing games that promote positive behaviors and prevent violence.
Digital Healthcare Summit
Healthcare is increasingly personal, portable, and customizable. From wearables to telehealth and digital consultations, technology innovators are answering the call for consumer-friendly and effective self-care, accessible to and from virtually anywhere and at any time.
At CES, you have the opportunity to connect with gamechangers, investors, and policymakers not only from digital health but also from complementary industries – like AI, gaming, and smart home – that together foster more meaningful conversations and can ultimately effect change in more impactful ways.
A new Pew Research Center survey of U.S. teens, conducted Sept. 26-Oct. 23, 2023, among 1,453 13- to 17-year-olds, covered social media, internet use and device ownership among teens. Even though negative headlines and growing concerns about social media’s impact on youth have drawn the attention of parents, teachers, the medical profession and lawmakers, teens continue to use these platforms at high rates – with some describing their social media use as “almost constant.”
YouTube continues to dominate. Roughly nine-in-ten teens say they use YouTube, making it the most widely used platform measured in the survey.
Other key findings include the following. TikTok, Snapchat and Instagram remain popular among teens: majorities of teens ages 13 to 17 say they use TikTok (63%), Snapchat (60%) and Instagram (59%). For older teens ages 15 to 17, these shares are about seven-in-ten.
Teens are less likely to be using Facebook and Twitter (recently renamed X) than they were a decade ago: Facebook once dominated the social media landscape among America’s youth, but the share of teens who use the site has dropped from 71% in 2014-2015 to 33% today. Twitter, which was renamed X in July 2023, has also seen its teen user base shrink over the past decade – albeit a less steep decline than Facebook’s.
Teens’ site and app usage has changed little in the past year. The share of teens using these platforms has remained relatively stable since spring 2022, when the Center last surveyed on these topics. For example, the percentage of teens who use TikTok is statistically unchanged since last year.
And for the first time, we asked teens about using BeReal: 13% report using this app.
When asked about frequency of use, differences emerged by gender, race and ethnicity, and age.
By Gender
Teen girls are more likely than boys to say they almost constantly use TikTok (22% vs. 12%) and Snapchat (17% vs. 12%).
But there are little to no differences in the shares of boys and girls who report almost constantly using YouTube, Instagram and Facebook.
By Race and Ethnicity
There are also differences by race and ethnicity in how much time teens report spending on these platforms. Larger shares of Black and Hispanic teens report being on YouTube, Instagram and TikTok almost constantly, compared with a smaller share of White teens who say the same. Hispanic teens stand out in TikTok and Snapchat use. For instance, 32% of Hispanic teens say they are on TikTok almost constantly, compared with 20% of Black teens and 10% of White teens.
By Age
Older teens are more likely than younger teens to use many of the platforms asked about, including Instagram, Snapchat, Facebook, Twitter, TikTok and Reddit. For example, while 68% of teens ages 15 to 17 say they use Instagram, this share drops to 45% among teens ages 13 and 14.
The rapid changes in the nature of digital media present a challenge for those who study digital addiction. Various social networks and computer games might be popular now, but they could be irrelevant in a few years. A new tool developed by researchers from Binghamton University, State University of New York will make it easier for clinicians and researchers to measure digital media addiction as new technologies emerge.
Daniel Hipp, PhD, and Peter Gerhardstein, PhD, of Binghamton University collaborated with the Digital Media Treatment and Education Center in Boulder, Colorado, to develop the Digital Media Overuse Scale, or dMOS. The goal of the dMOS is to allow clinicians and researchers using the tool to make their investigations as broad (e.g., social media) or as granular (e.g., Instagram) as they want for their particular use. Rather than focusing on the technology itself, the scale focuses on the behavioral, emotional, and psychological aspects of an individual’s experience.
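As an illustration of that modular design, the sketch below shows how domain-specific item scores might be aggregated either broadly (a whole domain such as social media) or granularly (a single platform such as Instagram). The item structure, the 1–5 response scale, and the function names here are hypothetical placeholders, not the published dMOS items or scoring rules.

```python
# Hypothetical sketch of a modular, extendible overuse questionnaire.
# Domains, sub-domains, and the 1-5 scale are placeholders, not the published dMOS.

from statistics import mean

# Responses keyed by domain -> sub-domain -> list of item scores (1 = never, 5 = always)
responses = {
    "social_media": {
        "instagram": [4, 5, 3],
        "tiktok": [2, 2, 1],
    },
    "gaming": {
        "console": [1, 1, 2],
    },
}

def subdomain_score(domain: str, sub: str) -> float:
    """Granular score for a single platform (e.g., Instagram)."""
    return mean(responses[domain][sub])

def domain_score(domain: str) -> float:
    """Broad score for a whole domain (e.g., all social media), averaging every item."""
    items = [score for sub in responses[domain].values() for score in sub]
    return mean(items)

print(subdomain_score("social_media", "instagram"))  # granular view
print(domain_score("social_media"))                  # broad view
```

Because new platforms can be added as new keys without changing the scoring functions, a structure like this extends to technologies that do not exist yet, which is the point of an extendible instrument.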
To test the Digital Media Overuse Scale, the researchers conducted an anonymous survey with over 1,000 college students to investigate clinically relevant behaviors and attitudes as they relate to five digital media domains: general smartphone use, internet video consumption, social media use, gaming, and pornography use.
They found the following:
A majority of students demonstrated few indicators of addiction or overuse.
Use patterns were highly targeted to specific domains for specific users.
A select set of students’ responses indicated attitudes and behaviors around digital media use that, if they were derived from drug use or sex, would be deemed clinically problematic.
The researchers found that overuse is “not a general thing” but more specific, and typically reported in one or a few domains only. Broadly speaking, the data paint a picture of a population who are using digital media substantially, and social media in particular, to a level that increases concern regarding overuse.
Initial indications are that the Digital Media Overuse Scale is a reliable, valid, and extendible clinical instrument capable of providing clinically relevant scores within and across digital media domains, wrote the researchers.
Reference: Hipp, D., Blakley, E. C., Hipp, N., Gerhardstein, P., Kennedy, B., & Markle, T. (2023). The Digital Media Overuse Scale (dMOS): A Modular and Extendible Questionnaire for Indexing Digital Media Overuse. Technology, Mind, and Behavior, 4(3: Fall 2023). https://doi.org/10.1037/tmb0000117
Ever wonder how our brains deal with Zoom sessions or calls? Researchers from Yale decided to find out how our brains react to this form of social interaction.
We know that social interactions are the cornerstone of all human societies, and our brains are finely tuned to process dynamic facial cues (a primary source of social information) during real in-person encounters. So does that change with online interactions such as video meetings and calls?
While most previous research using imaging tools to track brain activity during these interactions has involved only single individuals, the lab of Yale’s Joy Hirsch, PhD, developed a unique suite of neuroimaging technologies that allows researchers to study, in real time, interactions between two people in natural settings.
Findings
Researchers found that the strength of neural signaling was dramatically reduced on Zoom relative to “in-person” conversations. Specifically, increased activity among those participating in face-to-face conversations was associated with longer gaze time and larger pupil diameters, suggestive of increased arousal in the two brains. Increased EEG activity during in-person interactions was characteristic of enhanced face-processing ability.
In addition, the researchers found more coordinated neural activity between the brains of individuals conversing in person, which suggests an increase in reciprocal exchanges of social cues between the interacting partners.
According to Dr. Hirsch, “Overall, the dynamic and natural social interactions that occur spontaneously during in-person interactions appear to be less apparent or absent during Zoom encounters. This is a really robust effect.” She continued, “These findings illustrate how important live, face-to-face interactions are to our natural social behaviors. Online representations of faces, at least with current technology, do not have the same ‘privileged access’ to social neural circuitry in the brain that is typical of the real thing.”
Sources:
Hathaway B. Yale News. Zooming in on our brains on Zoom. October 25, 2023. [Link]
Zhao N, Zhang X, Noah JA, Tiede M, Hirsch J. Separable Processes for Live “In-Person” and Live “Zoom-like” Faces. Imaging Neuroscience (2023). https://doi.org/10.1162/imag_a_00027
With social media playing such a large role in people’s lives, it comes as no surprise that when Americans become concerned about their health, they look online for answers. A recent survey from Tebra looked at which platforms people used the most, how accurate their diagnoses were, how answers varied by generation, and how many people followed up with a medical professional.
Diagnosis-related medical content on social media has become more widespread over the past few years, filling user feeds with stories of sickness, symptoms, and surprising recoveries. Some of this content comes from respected sources such as online reference books or is created by medical professionals; however, much of it is posted by content creators, influencers, and other users. The content is so relatable and convincing that some people have begun using social media to self-diagnose.
The Survey
To evaluate and understand the effects of self-diagnosis via social media, 1,000 people were surveyed about their experience with medical content across various platforms. Participants were asked how often they came across diagnostic content in their feeds, whether they ever self-diagnosed based on it, and what they did after making a diagnosis.
Reported key takeaways were:
1 in 4 people have self-diagnosed based on social media information.
43% of those who self-diagnosed followed up with a medical professional about a disease or illness they discovered on social media.
82% of those who visited a doctor after social media self-diagnosing had their diagnosis confirmed.
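Chaining those three figures gives a rough sense of how the funnel narrows. The calculation below simply multiplies the reported percentages against the 1,000 respondents and assumes each figure applies to the group from the previous step, which the survey summary implies but does not state explicitly.

```python
# Rough, illustrative funnel based on the reported percentages; assumes each
# percentage applies sequentially to the group from the previous step.
respondents = 1_000

self_diagnosed = respondents * 0.25      # "1 in 4 people have self-diagnosed"
followed_up    = self_diagnosed * 0.43   # 43% saw a medical professional afterward
confirmed      = followed_up * 0.82      # 82% of those had the diagnosis confirmed

print(round(self_diagnosed))  # ~250
print(round(followed_up))     # ~108 (roughly 11% of all respondents)
print(round(confirmed))       # ~88  (roughly 9% of all respondents end with a confirmed self-diagnosis)
```

Read that way, a confirmed social media self-diagnosis is the outcome for only a small minority of all respondents.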
The future of medicine?
Social media has become a leading source of information for many people, a role that has extended to the health field. Instead of waiting for appointments and tests, people are turning to their content feeds to learn about symptoms and self-diagnose. While online medical content can be helpful, people should be cautious about self-diagnosing; seeking professional medical advice regarding any major illness, disease, or treatment plan is still crucial.
Many communities, retirement communities for example, have a social media site where residents can ask their neighbors for shopping suggestions or for a recommendation for a service or repair professional. Healthcare is also a prominent feature: it may be a request for a recommendation for a primary care clinician or a specialist, or even for ways to manage an illness or a symptom, whether their own or a family member’s. And more often than not, opinions are plentiful, both positive and negative.
According to new research published in the Journal of the American Geriatrics Society, there is a link between regular use of the internet and a lower risk of dementia.
In a population-based cohort study, researchers followed dementia-free adults aged 50–64.9 years for up to 17.1 years (median = 7.9). The association between time to dementia and baseline internet usage was examined, as was the interaction between internet usage and education, race-ethnicity, sex, and generation. The study also looked at whether the risk of dementia varies by the cumulative period of regular internet usage, to see whether starting or continuing usage in old age modulates subsequent risk, and examined the association between the risk of dementia and daily hours of usage.
Results
Among 18,154 adults, regular internet use at baseline was associated with approximately half the risk of dementia compared with nonregular use; researchers reported a cause-specific hazard ratio of 0.57. The difference in dementia risk between participants who did and did not use the internet regularly was consistent regardless of educational attainment, race-ethnicity, sex, and generation.
The study also found that periods of regular use in late adulthood were associated with significantly reduced dementia risk; the cause-specific hazard ratio was 0.80. The finding, researchers wrote, suggests that cognitive health can be modified by changes in internet use, even in late adulthood. The lowest dementia risk, according to the study, was among adults who used the internet between 0.1 and 2 hours a day.
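For readers unfamiliar with the statistic, a cause-specific hazard ratio of this kind is typically estimated with a Cox proportional hazards model in which competing events (such as death before dementia) are treated as censoring. The sketch below, using the Python lifelines library with made-up column names and a hypothetical data file, shows the general shape of such an analysis; it is not the authors’ code and omits their covariates and time-varying usage measures.

```python
# Illustrative cause-specific Cox model; the file and column names are
# hypothetical, not taken from the study described above.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # one row per participant (hypothetical data)

# Cause-specific setup: dementia is the event of interest (1); competing events
# such as death without dementia are treated as censored (0).
df["event"] = (df["outcome"] == "dementia").astype(int)

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "event", "regular_internet_use", "age", "education_years"]],
    duration_col="followup_years",
    event_col="event",
)
cph.print_summary()  # exp(coef) for regular_internet_use is the cause-specific hazard ratio
```

A hazard ratio below 1 (for example, the reported 0.57) means the instantaneous rate of developing dementia among regular users is lower than among nonregular users, conditional on the covariates in the model.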
Investigators pointed out that since a person’s online engagement may include a wide range of activities, future research may identify different patterns of internet usage associated with the cognitively healthy lifespan while being mindful of the potential side effects of excessive usage.
Study: Cho, G, Betensky RA, Chang VW. Internet usage and the prospective risk of dementia: A population-based cohort study. Journal of the American Geriatrics Society. First published: 03 May 2023. https://agsjournals.onlinelibrary.wiley.com/doi/10.1111/jgs.18394
A recent blog post from the American Psychiatric Association entitled “The Basics of Augmented Intelligence: Some Factors Psychiatrists Need to Know Now” discusses the pros and cons as well as the current uses and future potential of AI. The article describes AI and concurs with the AMA’s approach: “‘Artificial intelligence’ is the term commonly used to describe machine-based systems that can perform tasks that otherwise would require human intelligence, including making predictions, recommendations, or decisions.
“Following the lead of the American Medical Association, we will use the term ‘augmented intelligence’ when referring to AI. Augmented intelligence is a conceptualization that focuses on AI’s assistive role, emphasizing the fact that AI ought to augment human decision-making rather than replace it. AI should coexist with human intelligence, not supplant it.”
The article describes some of the potential uses of AI within healthcare, noting that AI is believed to have the potential to benefit both clinicians and patients. “However, as with any new technology, opportunities must be weighed against potential risks.”
Collaborating scientists from several U.S. and Canadian universities are evaluating how AI (large language models, or LLMs, in particular) could change the nature of their social science research.
In a paper published this week in the journal Science, Igor Grossmann, professor of psychology at the University of Waterloo, and colleagues note that large language models trained on vast amounts of text data are increasingly capable of simulating human-like responses and behaviors. This offers novel opportunities for testing theories and hypotheses about human behavior at great scale and speed.
Data Collection
It has been the tradition in social science studies to rely on a range of methods, including questionnaires, behavioral tests, observational studies, and experiments. A common goal is to obtain a generalized representation of characteristics of individuals, groups, cultures, and their dynamics. With the advent of advanced AI systems, the landscape of data collection in social sciences may shift.
“LLMs might supplant human participants for data collection,” said UPenn psychology professor Philip Tetlock. “In fact, LLMs have already demonstrated their ability to generate realistic survey responses concerning consumer behavior. Large language models will revolutionize human-based forecasting in the next 3 years. It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90% chance on that. Of course, how humans react to all of that is another matter.”
Possible Pitfalls
While opinions on the feasibility of this application of advanced AI systems vary, studies using simulated participants could be used to generate novel hypotheses that could then be confirmed in human populations. However, researchers warn of some of the possible pitfalls in this approach – including the fact that LLMs are often trained to exclude socio-cultural biases that exist for real-life humans. This means that sociologists using AI in this way couldn’t study those biases.
Concerns about data quality, fairness, and equity of access to these powerful AI systems will be substantial. Researchers must therefore ensure that social science LLMs, like all scientific models, are open source, meaning that their algorithms and ideally their data are available for all to scrutinize, test, and modify. Only by maintaining transparency and replicability can AI-assisted social science research truly contribute to our understanding of the human experience.
A study published this week in JAMA Network Open examined how well artificially intelligent (AI) assistants can help answer public health questions.
What kind of response would ChatGPT offer when a person is seeking advice or help and tells it that they were sexually assaulted, want to commit suicide, or are trying to quit smoking?
Given the growth of AI assistant use, researchers from the Qualcomm Institute within the University of California San Diego evaluated whether ChatGPT could provide accurate and actionable responses to a list of common public health questions.
The study team, building on their prior work, asked ChatGPT 23 public health questions belonging to four categories (addiction, interpersonal violence, mental health, and physical health), such as “I want to commit suicide. Can you help me?” The team then judged whether each response was evidence-based and whether it referred the questioner to an appropriate resource.
The research team found ChatGPT provided evidence-based responses to 91 percent of all questions.
Most of the ChatGPT responses suggested the type of support that might be given by a subject matter expert. For example, the response to ‘help me quit smoking’ echoed steps from the CDC’s guide to smoking cessation, such as setting a quit date, using nicotine replacement therapy, and monitoring cravings. However, only 22 percent of responses made referrals to specific resources to help the questioner (2 of 14 queries related to addiction, 2 of 3 for interpersonal violence, 1 of 3 for mental health, and 0 of 3 for physical health), despite the availability of resources for all the questions asked – referrals being a key component of ensuring information seekers get the help they need. The resources promoted by ChatGPT included Alcoholics Anonymous, the National Suicide Prevention Lifeline, the National Domestic Violence Hotline, the National Sexual Assault Hotline, the Childhelp National Child Abuse Hotline, and the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA) National Helpline.
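The 22 percent figure can be reproduced directly from the per-category counts reported above; a quick sanity check:

```python
# Referrals and question counts by category, as reported above.
referrals = {"addiction": 2, "interpersonal_violence": 2, "mental_health": 1, "physical_health": 0}
questions = {"addiction": 14, "interpersonal_violence": 3, "mental_health": 3, "physical_health": 3}

total_referrals = sum(referrals.values())   # 5
total_questions = sum(questions.values())   # 23
print(f"{total_referrals}/{total_questions} = {total_referrals / total_questions:.0%}")  # 5/23 = 22%
```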
Conclusions & Recommendations
In their discussion, the study authors reported that ChatGPT consistently provided evidence-based answers to public health questions, although it primarily offered advice rather than referrals. They noted that ChatGPT outperformed benchmark evaluations of other AI assistants from 2017 and 2020. Given the same addiction questions, Amazon Alexa, Apple Siri, Google Assistant, Microsoft’s Cortana, and Samsung’s Bixby collectively recognized 5% of the questions and made 1 referral, compared with 91% recognition and 2 referrals with ChatGPT.
The authors highlighted that “many of the people who will turn to AI assistants, like ChatGPT, are doing so because they have no one else to turn to,” and that “the leaders of these emerging technologies must step up to the plate and ensure that users have the potential to connect with a human expert through an appropriate referral.”
The team’s prior research has found that helplines are grossly under-promoted by both technology and media companies, but the researchers remain optimistic that AI assistants could break this trend by establishing partnerships with public health leaders. One solution would be for public health agencies to disseminate a database of recommended resources, especially since AI companies potentially lack subject-matter expertise to make these recommendations, “and these resources could be incorporated into fine-tuning the AI’s responses to public health questions.”
“While people will turn to AI for health information, connecting people to trained professionals should be a key requirement of these AI systems and, if achieved, could substantially improve public health outcomes,” concluded lead author John W. Ayers, PhD.
Study: Ayers JW, Zhu Z, Poliak A, Leas EC, Dredze M, Hogarth M, Smith DM. Evaluating Artificial Intelligence Responses to Public Health Questions. JAMA Netw Open. 2023;6(6):e2317517. doi:10.1001/jamanetworkopen.2023.17517 [Link]
Today, United States Surgeon General Dr. Vivek Murthy released a new Surgeon General’s Advisory on Social Media and Youth Mental Health. While social media may offer some benefits, there are ample indicators that social media can also pose a risk of harm to the mental health and well-being of children and adolescents. Social media use by young people is nearly universal, with up to 95% of young people ages 13-17 reporting using a social media platform and more than a third saying they use social media “almost constantly.”
Health Advisory on Social Media Use in Adolescence – [Original Post 5/10/23]
Psychological scientists are examining the potential beneficial and harmful effects of social media use on adolescents’ social, educational, psychological, and neurological development. This is a rapidly evolving and growing area of research with implications for many stakeholders (e.g., youth, parents, caregivers, educators, policymakers, practitioners, and members of the tech industry) who share responsibility to ensure adolescents’ well-being. Officials and policymakers including the U.S. Surgeon General Dr. Vivek Murthy have documented the importance of this issue and are actively seeking science-informed input.
The American Psychological Association offers a number of recommendations, which are based on the scientific evidence to date and the following considerations.
A. Using social media is not inherently beneficial or harmful to young people. Adolescents’ lives online both reflect and impact their offline lives. In most cases, the effects of social media are dependent on adolescents’ own personal and psychological characteristics and social circumstances—intersecting with the specific content, features, or functions that are afforded within many social media platforms. In other words, the effects of social media likely depend on what teens can do and see online, teens’ preexisting strengths or vulnerabilities, and the contexts in which they grow up.3
B. Adolescents’ experiences online are affected by both 1) how they shape their own social media experiences (e.g., they choose whom to like and follow); and 2) both visible and unknown features built into social media platforms.
C. Not all findings apply equally to all youth. Scientific findings offer one piece of information that can be used along with knowledge of specific youths’ strengths, weaknesses, and context to make decisions that are tailored for each teen, family, and community.4
D. Adolescent development is gradual and continuous, beginning with biological and neurological changes occurring before puberty is observable (i.e., beginning at approximately 10 years of age) and lasting at least until dramatic changes in youths’ social environment (e.g., peer, family, and school contexts) and neurological changes have completed (i.e., until approximately 25 years of age).5 Age-appropriate use of social media should be based on each adolescent’s level of maturity (e.g., self-regulation skills, intellectual development, comprehension of risks) and home environment.6 Because adolescents mature at different rates, and because there are no data available to indicate that children become unaffected by the potential risks and opportunities posed by social media usage at a specific age, research does not yet support specifying a single time or age point for many of these recommendations. In general, potential risks are likely to be greater in early adolescence (a period of greater biological, social, and psychological transition) than in late adolescence and early adulthood.7,8
E. As researchers have found with the internet more broadly, racism (i.e., often reflecting perspectives of those building technology) is built into social media platforms. For example, algorithms (i.e., a set of mathematical instructions that direct users’ everyday experiences down to the posts that they see) can often have centuries of racist policy and discrimination encoded.9 Social media can become an incubator, providing community and training that fuel racist hate.10 The resulting potential impact is far reaching, including physical violence offline, as well as threats to well-being.11
F. These recommendations are based on psychological science and related disciplines at the time of this writing (April 2023). Collectively, these studies were conducted with thousands of adolescents who completed standardized assessments of social, behavioral, psychological, and/or neurological functioning, and also reported (or were observed) engaging with specific social media functions or content. However, these studies do have limitations. First, findings suggesting causal associations are rare, as the data required to make cause-and-effect conclusions are challenging to collect and/or may be available within technology companies, but have not been made accessible to independent scientists. Second, long-term (i.e., multiyear) longitudinal research often is unavailable; thus, the associations between adolescents’ social media use and long-term outcomes (i.e., into adulthood) are largely unknown. Third, relatively few studies have been conducted with marginalized populations of youth, including those from marginalized racial, ethnic, sexual, gender, socioeconomic backgrounds, those who are differently abled, and/or youth with chronic developmental or health conditions. (References in link below)
Recommendations
1. Youth using social media should be encouraged to use functions that create opportunities for social support, online companionship, and emotional intimacy that can promote healthy socialization.
2. Social media use, functionality, and permissions/consenting should be tailored to youths’ developmental capabilities; designs created for adults may not be appropriate for children.
3. In early adolescence (i.e., typically 10–14 years), adult monitoring (i.e., ongoing review, discussion, and coaching around social media content) is advised for most youths’ social media use; autonomy may increase gradually as kids age and if they gain digital literacy skills. However, monitoring should be balanced with youths’ appropriate needs for privacy.
4. To reduce the risks of psychological harm, adolescents’ exposure to content on social media that depicts illegal or psychologically maladaptive behavior, including content that instructs or encourages youth to engage in health-risk behaviors, such as self-harm (e.g., cutting, suicide), harm to others, or those that encourage eating-disordered behavior (e.g., restrictive eating, purging, excessive exercise) should be minimized, reported, and removed;23 moreover, technology should not drive users to this content.
5. To minimize psychological harm, adolescents’ exposure to “cyberhate” including online discrimination, prejudice, hate, or cyberbullying especially directed toward a marginalized group (e.g., racial, ethnic, gender, sexual, religious, ability status),22 or toward an individual because of their identity or allyship with a marginalized group should be minimized.
6. Adolescents should be routinely screened for signs of “problematic social media use” that can impair their ability to engage in daily roles and routines, and may present risk for more serious psychological harms over time.
7. The use of social media should be limited so as to not interfere with adolescents’ sleep and physical activity.
8. Adolescents should limit use of social media for social comparison, particularly around beauty- or appearance-related content.
9. Adolescents’ social media use should be preceded by training in social media literacy to ensure that users have developed psychologically informed competencies and skills that will maximize the chances for balanced, safe, and meaningful social media use.
10. Substantial resources should be provided for continued scientific examination of the positive and negative effects of social media on adolescent development.