However, after speaking with mental health and AI experts about the self-managed and clinical AI applications already available, I've come to realize that it doesn't have to be all doom and gloom, but we've got a long way to go before we can safely trust our mental wellbeing to an AI.

The relationship between AI chatbots and mental health runs deep, deeper than many may imagine.

ELIZA, widely considered to be the first 'true' chatbot, was an early natural language processing computer program later scripted to simulate a psychotherapist of the Rogerian school, an approach to psychotherapy developed by Carl Rogers in the early 1940s and based on person-centered, or non-directive, therapy.

By today's standards, ELIZA's intelligence is rudimentary at best, and it certainly couldn't grasp my fears of an AI takeover.

The core principles of Rogerian psychotherapy were fertile grounds for AI programming

Rogers was known for his belief that facilitation was key to learning, and as such the therapist's role became one of asking questions to engender self-learning and reflection on the part of the patient.

For ELIZA's programmer, Joseph Weizenbaum, this meant programming the chatbot to respond with non-directional questions, using natural language processing to identify keywords in user inputs and respond appropriately (a simplified sketch of this keyword-and-reflection approach appears at the end of this section).

(Image credit: screenshot of the JavaScript version of ELIZA, originally written by Michael Wallace and enhanced by George Dunlop.)

Elements of Rogerian therapy exist to this day in therapeutic treatment, as well as in coaching and counseling; so too does ELIZA, albeit in slightly different forms.

Olusola Ajilore, Professor of Psychiatry at the University of Illinois Chicago, recently co-authored the results of a pilot study testing AI voice-based virtual coaching for behavioral therapy as a means to fill the current gaps in mental health care.

The AI application is called Lumen, an Alexa-based voice coach designed to deliver problem-solving treatment, and there are striking parallels between the approach and results shared by the Lumen team and ELIZA.

Much as non-directional Rogerian psychotherapy meshed well with ELIZA's list-processing programming (MAD-SLIP), Ajilore explains that problem-solving treatment is relatively easy to code, as it's a regimented form of therapy.
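To make that keyword-and-reflection idea concrete, here is a minimal, illustrative sketch in TypeScript (chosen to echo the JavaScript version pictured above). The patterns, word lists, and canned responses below are invented for illustration only; they are far cruder than Weizenbaum's original MAD-SLIP script or any modern implementation.

```typescript
// Minimal ELIZA-style sketch: keyword matching plus pronoun "reflection".
// All rules and word lists here are invented for illustration only.

// Map first-person words to second-person so the bot can mirror the user.
const reflections: Record<string, string> = {
  i: "you",
  me: "you",
  my: "your",
  am: "are",
  "i'm": "you're",
  mine: "yours",
};

// Rewrite a captured fragment from the user's point of view to the bot's.
function reflect(fragment: string): string {
  return fragment
    .toLowerCase()
    .split(/\s+/)
    .map((word) => reflections[word] ?? word)
    .join(" ");
}

// Each rule pairs a keyword pattern with non-directional question templates;
// "$1" is replaced with the reflected remainder of the user's sentence.
const rules: { pattern: RegExp; responses: string[] }[] = [
  { pattern: /i feel (.*)/i, responses: ["Why do you feel $1?", "How long have you felt $1?"] },
  { pattern: /i am (.*)/i, responses: ["What makes you say you are $1?"] },
  { pattern: /because (.*)/i, responses: ["Is that the real reason?"] },
  // Catch-all keeps the conversation going when no keyword matches.
  { pattern: /(.*)/i, responses: ["Please tell me more.", "How does that make you feel?"] },
];

// Pick the first matching rule and fill in its template.
function respond(input: string): string {
  for (const { pattern, responses } of rules) {
    const match = input.trim().match(pattern);
    if (match) {
      const template = responses[Math.floor(Math.random() * responses.length)];
      return template.replace("$1", reflect(match[1] ?? ""));
    }
  }
  return "Please go on."; // Unreachable: the catch-all rule always matches.
}

console.log(respond("I feel nervous about an AI takeover"));
// Possible output: "Why do you feel nervous about an ai takeover?"
```

The catch-all rule at the end is what lets the bot fall back on a generic, non-directional prompt whenever no keyword matches, and this kind of rule-driven structure hints at why a regimented protocol such as problem-solving treatment is, as Ajilore notes, relatively easy to code.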