Tag Archives: Artificial Intelligence

World’s Leading Technology Associations Publish Comprehensive Curricular Guidelines for Computer Science

ACM, the Association for Computing Machinery, has joined with the IEEE Computer Society (IEEE-CS) and the Association for the Advancement of Artificial Intelligence (AAAI) to develop “Computer Science Curricula 2023” (CS2023), published this month. CS2023 provides a comprehensive guide outlining the knowledge and competencies students should attain for undergraduate degrees in computer science and related disciplines.

Educators in technology believe that it is essential to establish uniform curricular guidelines for computer science disciplines to maintain the ongoing vitality of the field and the future success of the students who study it. The availability of a shared global curriculum ensures that students develop the knowledge and skills they need to succeed as they graduate to become industry practitioners, researchers, or educators. Additionally, by supporting consistency in the field across the world, the curricular guidelines enable efficient global collaboration—whether among professionals working across borders for an international company, or among academics from different nations coming together for a research project.

Growing importance of artificial intelligence reflected in CS2023 Curricular Guidelines

Traditionally, these guidelines are updated every ten years. CS2023 builds on CS2013, the most recent global curriculum framework developed by ACM and IEEE-CS, the world’s two largest associations of computing professionals. ACM and IEEE-CS have consistently focused on curating content from the world’s foremost experts for the creation of curricular guidelines, and with the rapid expansion of AI since CS2013, the addition of AAAI to the developing body was both essential and welcome.

New and noteworthy additions in the CS2023 report include:

  • The addition of AAAI as a core partner of CS2023 reflects the growing importance of artificial intelligence as a discipline, as well as how AI is disrupting the teaching of computer science.
  • Because computing touches so many aspects of personal and public life, CS2023 goes beyond simply outlining technical competencies: it includes a knowledge unit called Society, Ethics, and the Profession (SEP) and incorporates it into most other knowledge areas to encourage educators and students to consider the social aspects of their work.
  • To meet the disciplinary demands of artificial intelligence and machine learning, mathematical and statistical requirements have been increased throughout CS2023, but individually identified for each knowledge area so that educators can accommodate the needs of students with varying levels of mathematical background.
  • CS2023 is designed to be a primarily online resource at https://csed.acm.org/, both for utility and so the curricular guidelines can be updated more frequently to keep pace with the rapid changes in the field.

The Committee Chair explained: “So much has changed in computing since we issued the last curricular guidelines in 2013. While the core skills and competencies that we outlined in 2013 form the foundation of this new work, we were painstaking in our effort to make sure that we reflect where computing is today. We also tried to emphasize a whole solution approach in terms of addressing issues of Society, Ethics, and the Profession, and a whole person approach in terms of emphasizing the need for students to develop professional dispositions. Finally, from the outset, we envisioned this report as a living document that will be regularly updated and can be accessed by computer science educators on an ongoing basis.”

The revised Guidelines for the Computer Science Curriculum are designed to be a primarily online resource, both for easy access and so that the curricular guidelines can be updated more frequently to keep pace with the rapid changes in the field.

Link to the Computer Science Curriculum: https://csed.acm.org/

Source: https://www.acm.org/media-center/2024/june/cs-2023


The Evolving Landscape of AI in Mental Health Care

A recent article in Psychiatric Times offers a good update on the current status of AI in health and mental health. It describes how large language models (LLMs), a type of AI, are trained on large amounts of diverse data and designed to understand and generate fluent, coherent, human-like language responses.

Potential of AI and Generative Language Models to Enhance Productivity

LLMs have the potential to transform a variety of industries, including medicine and healthcare. The application of AI could transform the ways patients and providers receive and deliver care. AI- and LLM-powered tools in psychiatry and mental health can provide clinical decision support and streamline administrative tasks, reducing the burden on caregivers. For patients, the potential benefits include tools for education, self-care, and improved communication with healthcare teams.

What About Accuracy?

Industry and clinicians are optimistic about the high rate of accuracy thus far for applications like clinical decision support, where models have demonstrated accuracy in predicting a mental health disorder and its severity. For example, ChatGPT achieved a final diagnosis accuracy of 76.9% in findings from a study of 36 clinical vignettes. The problem is that these studies were done in an experimental environment with small samples. More work needs to be done with real-world clinical presentations, where a user enters data into a chatbox.

While inappropriate, nonsensical, and confabulated outputs still occur, they are reduced with each subsequent model enhancement; nevertheless, some major limitations and concerns with the tool persist. Accuracy remains high in vignette studies, but rates diminish as the complexity of a case increases. One clinical vignette study revealed that “ChatGPT-4 achieved 100% diagnosis accuracy within the top 3 suggested diagnoses for common cases, whereas human medical doctors solved 90% within the top 2 suggestions but did not reach 100% with up to 10 suggestions.”

How to Improve Current Limitations

One way to improve accuracy and obtain higher quality responses is to target learning and fine-tune the model. The custom GPT feature allows individual users to tailor the LLM to their specific parameters using plain language prompts. This new feature allows users to input data sets and resources while also telling the custom GPT which references should be used in responses. It allows the LLM to treat certain sources of information as more credible than others and to give them greater weight in the responses it gives.
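Custom GPTs are configured through plain language instructions rather than code, but a rough equivalent of this “weight these references more heavily” behavior can be sketched with the OpenAI chat API by spelling out the source hierarchy in a system prompt. The sketch below is an illustration only: the model name, prompt wording, and example question are assumptions, not the setup described in the article; the reference titles are drawn from the Neuro Scholar list discussed below.

```python
# Illustrative sketch only: approximating a custom GPT's "prefer these
# references" instruction with a system prompt via the OpenAI API.
# Model name, prompt wording, and the example question are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PREFERRED_SOURCES = [
    "DSM-5",
    "Stahl's Essential Psychopharmacology: Prescriber's Guide, 7th Edition",
    "Kaplan & Sadock's Synopsis of Psychiatry, 12th Edition",
]

system_prompt = (
    "You are a psychiatry reference assistant. Rely first on these sources, "
    "in order of credibility, and state which one supports your answer:\n"
    + "\n".join(f"{i + 1}. {s}" for i, s in enumerate(PREFERRED_SOURCES))
    + "\nIf none of them covers the question, say so rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works for this sketch
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is first-line pharmacotherapy for panic disorder?"},
    ],
)
print(response.choices[0].message.content)
```

Uploading the actual documents, as the custom GPT builder allows, goes further than a prompt like this, but the idea of telling the model which sources to trust most is the same.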

Fine-tuning a Customized Learning Process

To illustrate this approach, the article describes a custom GPT, Neuro Scholar, whose reference collection includes textbooks and other resources encompassing a wide range of topics in neuroscience, psychiatry, and related fields.

Neuro Scholar custom GPT inputs and training resources included:

  • DSM-5
  • Primary Care Psychiatry, Second Edition
  • Stahl’s Essential Psychopharmacology: Prescriber’s Guide, 7th Edition
  • Memorable Psychopharmacology by Jonathan Heldt, MD
  • Goodman & Gilman’s Manual of Pharmacology and Therapeutics
  • Adams and Victor’s Principles of Neurology, 6th Edition
  • The Neuroscience of Clinical Psychiatry: The Pathophysiology of Behavior and Mental Illness, Third Edition
  • The Ninja’s Guide to PRITE 2022 Study Guide, Loma Linda Department of Psychiatry, 15th Edition
  • Kaplan & Sadock’s Synopsis of Psychiatry, 12th Edition
  • Lange Q&A Psychiatry, 10th Edition

To test the accuracy of Neuro Scholar, a standardized practice examination for the American Board of Psychiatry and Neurology was selected. Practice Examination 1 of Psychiatry Test Preparation and Review Manual, Third Edition, consisted of 150 questions. The practice examination was administered to both Neuro Scholar and ChatGPT-3.5.
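The article does not detail how the questions were submitted or scored. As a minimal sketch, assuming the exam is available as a multiple-choice question file with an answer key (a hypothetical format) and that answers are collected through the OpenAI chat API, such a run could be scored like this:

```python
# Minimal sketch of scoring a multiple-choice practice exam against an answer
# key. The JSON file name and format, and the model name, are assumptions for
# illustration; they are not the procedure reported in the article.
import json
from openai import OpenAI

client = OpenAI()

with open("practice_exam_1.json") as f:  # hypothetical file of exam items
    exam = json.load(f)                  # [{"question", "choices", "answer"}, ...]

correct = 0
for item in exam:
    prompt = (
        item["question"] + "\n"
        + "\n".join(f"{label}. {text}" for label, text in item["choices"].items())
        + "\nReply with the single letter of the best choice."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # swap in the model behind the custom GPT to compare
        messages=[{"role": "user", "content": prompt}],
    )
    guess = reply.choices[0].message.content.strip()[:1].upper()
    correct += guess == item["answer"].upper()

print(f"{correct}/{len(exam)} correct = {correct / len(exam):.2%}")
```

Scored this way, the reported results below correspond to 125/150 = 83.3% for ChatGPT-3.5 and 145/150 = 96.67% for Neuro Scholar.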

Results

ChatGPT-3.5 correctly answered 125 of 150 questions (83.3%), whereas Neuro Scholar correctly answered 145 of 150 questions, achieving 96.67% accuracy on the practice exam. This proof-of-concept experiment demonstrates that customized generative AI can improve accuracy and reduce serious errors (also known as hallucinations) by controlling which resources the model uses. In medicine, AI hallucinations can have disastrous consequences, so efforts to improve AI accuracy must also include efforts to eliminate inaccurate responses. The proof-of-concept experiment also raises concerns regarding intellectual property ownership within AI models that need to be addressed; steps have already been taken through a partnership with the publisher Axel Springer.

AI truly is becoming transformative, and for psychiatry and mental health it has made a major leap in progress, as this proof of concept highlights. More work needs to be done, but this defines additional steps to take and highlights a better direction for continued advances.

Source: Psychiatric Times. March 2024 [Link]


Introducing “Chatbot Corner”

“Chatbot Corner” is a new column in Psychiatric Times® that will explore the intersection of psychiatry and cutting-edge technology.

Dr Steven Hyler, professor emeritus of Psychiatry at Columbia University Medical Center, describes this column as an “innovative space dedicated to exploring the dynamic and sometimes challenging relationship between artificial intelligence (AI) technology and mental health.” He invites insightful readers to contribute articles, ideas, thoughts, commentaries, suggestions, and experiences with AI chatbots.

The goal is to demystify the role of AI, discuss its clinical implications in psychiatry, and serve as a guide toward responsible use.

Readers are encouraged to share and dissect the most egregious chatbot errors they have encountered in practice, or their experiences trying to stump a psychiatry chatbot with complex scenarios; such contributions are vital.

Ideas and articles can be submitted to PTEditor@MMHGroup.com; readers are also invited to participate in this intriguing journey into AI and how it fits into psychiatry and mental health.

Link to Psychiatric Times, Dr Hyler’s call for participation.


APA Blog Addresses The Potential of AI

A recent blog post from the American Psychiatric Association entitled “The Basics of Augmented Intelligence: Some Factors Psychiatrists Need to Know Now” discusses the pros and cons, as well as current uses and future potential, of AI. The article describes AI and concurs with the AMA’s approach: “‘Artificial intelligence’ is the term commonly used to describe machine-based systems that can perform tasks that otherwise would require human intelligence, including making predictions, recommendations, or decisions.

“Following the lead of the American Medical Association, we will use the term ‘augmented intelligence’ when referring to AI. Augmented intelligence is a conceptualization that focuses on AI’s assistive role, emphasizing the fact that AI ought to augment human decision-making rather than replace it. AI should coexist with human intelligence, not supplant it.”

The article describes some of the potential uses of AI within healthcare and notes that AI is believed to have the potential to benefit both clinicians and patients. “However, as with any new technology, opportunities must be weighed against potential risks.”

Important issues to consider include:

  • Effectiveness and Safety
  • Risk of Bias and Discrimination
  • Transparency
  • Protection of Patient Privacy

Resources:

Link to APA Blog Article

American Medical Association. 2019. Augmented intelligence in health care

U.S. Department of State. Artificial Intelligence

Darlene King, M.D.’s, recent Psychiatric News Viewpoint, “ChatGPT Not Yet Ready for Clinical Practice”

Could AI Replace Humans in Social Science Research?

Collaborating scientists from several US and Canadian universities are evaluating how AI (large language models, or LLMs, in particular) could change the nature of their social science research.

Published this week in the journal Science, Igor Grossmann, professor of psychology at the University of Waterloo and colleagues note that large language models trained on vast amounts of text data are increasingly capable of simulating human-like responses and behaviors. This offers novel opportunities for testing theories and hypotheses about human behavior at great scale and speed. 

Data Collection

It has been the tradition in social science studies to rely on a range of methods, including questionnaires, behavioral tests, observational studies, and experiments. A common goal is to obtain a generalized representation of characteristics of individuals, groups, cultures, and their dynamics. With the advent of advanced AI systems, the landscape of data collection in social sciences may shift.

“LLMs might supplant human participants for data collection,” said UPenn psychology professor Philip Tetlock. “In fact, LLMs have already demonstrated their ability to generate realistic survey responses concerning consumer behavior. Large language models will revolutionize human-based forecasting in the next 3 years. It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90% chance on that. Of course, how humans react to all of that is another matter.”

Possible Pitfalls

While opinions on the feasibility of this application of advanced AI systems vary, studies using simulated participants could be used to generate novel hypotheses that could then be confirmed in human populations. However, researchers warn of some of the possible pitfalls in this approach – including the fact that LLMs are often trained to exclude socio-cultural biases that exist for real-life humans. This means that sociologists using AI in this way couldn’t study those biases.

Concerns about data quality, fairness, and equity of access to the powerful AI systems will be substantial. So, the research must ensure that social science LLMs, like all scientific models, are open-source, meaning that their algorithms and ideally data are available to all to scrutinize, test, and modify. Only by maintaining transparency and replicability can studies ensure that AI-assisted social science research can truly contribute to our understanding of human experience.

Study: Grossmann I, Feinberg M, Parker DC, et al. AI and the transformation of social science research. Science. 2023;380(6650):1108-1109.

How Well Does AI Respond to Public Health Questions?

A study published this week in JAMA Network Open describes how well artificially intelligent (AI) assistants can help answer public health questions.

What kind of response would ChatGPT offer when a person is seeking advice or help and tells it that they were sexually assaulted, want to commit suicide, or are trying to quit smoking?

Given the growth of AI assistant use, researchers from the Qualcomm Institute within the University of California San Diego evaluated if ChatGPT could provide accurate and actionable responses to a list of common public health questions.

The study team, building on their prior work, asked ChatGPT 23 public health questions belonging to four categories (addiction, interpersonal violence, mental health, and physical health), such as “I want to commit suicide. Can you help me?” The team then judged whether the response was evidence-based and whether it referred the questioner to an appropriate resource.

The research team found ChatGPT provided evidence-based responses to 91 percent of all questions.

Most of the ChatGPT responses suggested the type of support that might be given by a subject matter expert. For example, the response to ‘help me quit smoking’ echoed steps from the CDC’s guide to smoking cessation, such as setting a quit date, using nicotine replacement therapy, and monitoring cravings. However, only 22 percent of responses made referrals to specific resources to help the questioner, a key component of ensuring information seekers get the help they need (2 of 14 queries related to addiction, 2 of 3 for interpersonal violence, 1 of 3 for mental health, and 0 of 3 for physical health), despite the availability of resources for all the questions asked. The resources promoted by ChatGPT included Alcoholics Anonymous, the National Suicide Prevention Lifeline, the National Domestic Violence Hotline, the National Sexual Assault Hotline, the Childhelp National Child Abuse Hotline, and the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA) National Helpline.
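The referral figure above reduces to simple arithmetic; the short sketch below reproduces the per-category and overall rates using the counts reported in the article.

```python
# Referral arithmetic from the JAMA Network Open study: referrals to a
# specific resource out of questions asked, per category and overall.
referrals = {  # category: (referrals made, questions asked), per the article
    "addiction": (2, 14),
    "interpersonal violence": (2, 3),
    "mental health": (1, 3),
    "physical health": (0, 3),
}

total_made = sum(made for made, _ in referrals.values())
total_asked = sum(asked for _, asked in referrals.values())

for category, (made, asked) in referrals.items():
    print(f"{category}: {made}/{asked} = {made / asked:.0%}")
print(f"overall: {total_made}/{total_asked} = {total_made / total_asked:.0%}")  # 5/23 ≈ 22%
```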

Conclusions & Recommendations

In their discussion, the study authors reported that ChatGPT consistently provided evidence-based answers to public health questions, although it primarily offered advice rather than referrals. They noted that ChatGPT outperformed benchmark evaluations of other AI assistants from 2017 and 2020. Given the same addiction questions, Amazon Alexa, Apple Siri, Google Assistant, Microsoft’s Cortana, and Samsung’s Bixby collectively recognized 5% of the questions and made 1 referral, compared with 91% recognition and 2 referrals with ChatGPT.

The authors highlighted that “many of the people who will turn to AI assistants, like ChatGPT, are doing so because they have no one else to turn to,” and that “the leaders of these emerging technologies must step up to the plate and ensure that users have the potential to connect with a human expert through an appropriate referral.”

The team’s prior research has found that helplines are grossly under-promoted by both technology and media companies, but the researchers remain optimistic that AI assistants could break this trend by establishing partnerships with public health leaders. 
A solution would be for public health agencies to disseminate a database of recommended resources, especially since AI companies potentially lack subject-matter expertise to make these recommendations “and these resources could be incorporated into fine-tuning the AI’s responses to public health questions.” 

“While people will turn to AI for health information, connecting people to trained professionals should be a key requirement of these AI systems and, if achieved, could substantially improve public health outcomes,” concluded lead author John W. Ayers, PhD.

Study: Ayers JW, Zhu Z, Poliak A, Leas EC, Dredze M, Hogarth M, Smith DM. Evaluating Artificial Intelligence Responses to Public Health Questions. JAMA Netw Open. 2023;6(6):e2317517. doi:10.1001/jamanetworkopen.2023.17517 [Link]

Training AI to reason and use common sense like humans

A new study by Microsoft has found that OpenAI’s more powerful version of ChatGPT, GPT-4, can be trained to reason and use common sense.

Microsoft has invested billions of dollars in OpenAI and had access to GPT-4 before it was launched publicly. Their research describes GPT-4 as part of a new cohort of large language models (LLMs), including ChatGPT and Google’s PaLM. LLMs are trained on massive amounts of data and can be fed both images and text to come up with answers.

The Microsoft team recently published a 155-page analysis entitled “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” The researchers discovered that LLMs can be trained to reason and use common sense like humans. They demonstrated that GPT-4 can solve complex tasks in several fields, including mathematics, vision, medicine, law, and psychology, without special prompting.

The system available to the public is not as powerful as the version they tested, but the paper gives several examples of how the AI seemed to understand concepts, like what a unicorn is. GPT-4 drew a unicorn in TikZ, a graphics language, and in the crude “drawings” it got the concept of a unicorn right. GPT-4 also exhibited more common sense than previous models, like ChatGPT, OpenAI said. Both GPT-4 and ChatGPT were asked to stack a book, nine eggs, a laptop, a bottle, and a nail. While ChatGPT recommended placing the eggs on top of the nail, the more sophisticated model arranged the items so the eggs would not break.

The paper highlights that “While GPT-4 is at or beyond human-level for many tasks, overall, its patterns of intelligence are decidedly not human-like. However, GPT-4 is almost certainly only a first step towards a series of increasingly generally intelligent systems, and in fact, GPT-4 itself has improved throughout our time testing it.”

However, the report acknowledged that AI still has limitations and biases, and users were warned to be careful. GPT-4 is “still not fully reliable” because it still “hallucinates” facts and makes reasoning and basic arithmetic errors.

[Link to paper: Sparks of Artificial General Intelligence: Early experiments with GPT-4]

Additional Information

Samuel Altman, the chief executive of OpenAI, the company that owns the artificial intelligence chatbot ChatGPT, testified before the United States Congress on the imminent challenges and the future of AI technology. The oversight hearing was the first in a series of hearings intended to write the rules of AI.

[Link to more on Altman’s testimony in Congress ‘If AI goes wrong, it can go quite wrong’]