LLMs in Psychology: the good, the bad (and the evil)?
Large Language Models (please read our guide to #LLMs to learn more about them) promise #accessibility and #affordability that could help more individuals get the #support they need to cope with #anxiety, #depression, and other problems. But therapists warn that concerns about over-#reliance, #privacy, and the lack of human connection must still be addressed. In this essay we explore these and other concerns and propose possible solutions, in light of two different psychological currents: #Construct #Theory and #Behaviorism.

Large Language Models in Psychology: Opportunities, Challenges, and the Perspectives of Construct Theory and Behaviorism


The rapid advancements in AI, specifically in LLMs (such as #ChatGPT, #Bard, #Sage, and #Claude), have prompted researchers to explore their potential applications in various fields, including psychology. This article discusses the possible ways LLMs can be integrated into psychological practice, focusing on their potential benefits and drawbacks. Additionally, it examines how George Kelly's construct theory and John Watson's behaviorism might view AI and LLMs in psychology.


The emergence of ChatGPT, a state-of-the-art LLM, has sparked discussions on its potential use in psychotherapy and mental health support. Some individuals have already turned to AI, as evidenced by online forums and discussions, seeking advice on personal problems and difficult life events. This development raises the question of whether LLMs can effectively serve as psychotherapists. To address this question, we will discuss the implications of using AI in psychology, the potential benefits of LLMs in psychological practice, and the perspectives of construct theory and behaviorism on this matter.

The Promise of LLMs in Mental Health Support

Several factors make LLMs appealing for mental health support:
  1. Lack of judgment: AI lacks human biases and judgments, which may encourage individuals to be more open and honest about their struggles.
  2. Unlimited time: Unlike human therapists, AI has no time constraints, allowing for potentially unlimited support.
  3. Availability: AI can be available 24/7, providing help whenever it is needed.
  4. Cost-effectiveness: AI-based support may be more affordable (or even free) than traditional therapy, making mental health services more accessible.

Challenges and Concerns

Despite the potential benefits, using LLMs in psychology also raises concerns:
  1. Dependency: Unlimited access to AI might lead to dependence, with individuals relying on constant support instead of developing self-sufficiency.
  2. Loss of human connection: The authentic human relationship between a therapist and a client is essential for effective therapy. AI cannot fully replicate this connection.
  3. Ethics and privacy: AI systems may inadvertently breach confidentiality or make ethically questionable decisions.

Perspectives from Construct Theory and Behaviorism

George Kelly's Construct Theory

George Kelly's construct theory posits that individuals interpret the world through cognitive constructs that are developed based on their experiences. In this context, LLMs might be seen as having the potential to assist in the therapeutic process by helping clients explore and modify their constructs. However, construct theory emphasizes the importance of the therapeutic relationship in promoting change. Therefore, LLMs should not be considered as replacements for human therapists but rather as tools that can complement and enhance therapy when used judiciously.

John Watson's Behaviorism

Behaviorism, as proposed by John Watson, is founded on the idea that human behavior is primarily shaped by environmental stimuli and consequences. From a behaviorist perspective, LLMs could be useful in providing structured, consistent input and feedback to help individuals modify their behavior. Nonetheless, behaviorism also recognizes the importance of individualized and context-specific interventions. Thus, LLMs should be carefully tailored to the needs of each client and should not be relied upon as the sole source of therapeutic intervention.

Opportunities for Integrating LLMs in Psychological Practice

To maximize the potential of LLMs in psychology while addressing the challenges, it is crucial to adopt a balanced approach:
  1. Supplementing human therapists: LLMs should be viewed as tools that can enhance therapy, rather than as replacements for human therapists.
  2. Setting boundaries: LLMs should be used within specific therapeutic contexts, targeting particular issues or populations.
  3. Clinical validation: The efficacy of LLM-based interventions should be assessed through randomized controlled trials before being widely adopted.


The advent of LLMs like ChatGPT presents both opportunities and challenges for the field of psychology. While these AI systems hold promise in providing accessible and non-judgmental support, it is crucial to address the potential risks of dependency and the loss of human connection. By considering the perspectives of construct theory and behaviorism, psychologists can explore ways to effectively incorporate LLMs into their practice and harness their potential for positive change. In conclusion, at least for the forthcoming years, LLMs can be an interesting tool to complement human psychologists, without substituting for them. As we say at Turtles AI, that's "AI with humans at the forefront".