
Psyched to Practice
Join us as your hosts, Dr. Ray Christner and Paul Wagner, explore the far reaches of mental health and share this experience with you. We cover a wide variety of topics in and related to the field, and invite experts to share their findings and their passion for mental health. We look forward to taking this adventure with you and hope we can get you psyched! "Be well, and stay psyched!"
Masters in Practice: Implementing AI in Practice and Research w/ Dr. Dan Florell
Artificial intelligence is moving at breakneck speed, and mental health professionals are scrambling to keep up. In this episode of Psyched to Practice, Paul and Ray welcome Dr. Dan Florell—professor, psychologist, and AI expert—to break down how AI is reshaping clinical work, research, and education. Is AI a game-changing tool or a legal and ethical minefield? We cover the risks, rewards, and practical ways you can start integrating AI into your practice today. Don’t get left behind—tune in now to stay ahead of the curve.
To hear more and stay up to date with Paul Wagner, MS, LPC and Ray Christner, Psy.D., NCSP, ABPP visit our website at:
http://www.psychedtopractice.com
Please follow the link below to access all of our hosting sites.
https://www.buzzsprout.com/2007098/share
“Be well, and stay psyched”
#mentalhealth #podcast #psychology #psychedtopractice #counseling #socialwork #MentalHealthAwareness #ClinicalPractice
Welcome to the Psyched to Practice podcast, your one stop for practical and useful clinical information, masterful insights from experts in the field, and a guide to daily living. Thank you for joining us today. Paul and I had a great opportunity to bring on Dan Florell, who is now part of our team. This isn't Dan's first episode, but it is the first one since we've been building the CE website, and Dan is a big part of that. It's great to have him here, and I'd love to have him even more involved. As you listen in today, you'll hear a topic Dan really specializes in, so I think he has a solid seat here at the Psyched to Practice podcast, probably pretty frequently.

Yeah, we hit this topic of artificial intelligence and how it impacts clinical practice, but also research: how we're using AI and how we can do it ethically. It's new, it's emerging. Even throwing the term around, it's the wild, wild west of AI, and that's why it's so important to stay up to date. If we just pick and attach ourselves to one AI, we might be missing out on so many aspects that are out there. That's something Dan really covers and pushes for: becoming more well-versed in utilizing and diversifying AI.

Yeah. I talk to Dan often, and it's amazing how much I continue to learn every time we talk about it, even just in the hour of doing this episode. It's something I try to stay aware of, but his expertise really dives into so many different aspects of it. I loved having this conversation, and I hope that as you listen you can pull out some pieces of how you might use it in your practice and, taking from Dan, use it in a responsible manner.

For those of you who are not familiar with Dan, he is a professor at Eastern Kentucky University and runs a private practice called MindPsi. He has trained school psychologists in graduate programs for twenty years and has done a ton of research and publication on technology and how we apply it. With that, enjoy this Masters in Practice episode, Implementing AI in Practice and Research.

So Dan, welcome back to the Psyched to Practice podcast.

Always good to be a part of it, Ray.

Yeah. I think we've already teased this out to the audience, but Dan is now part of the Psyched to Practice team. Having you as a guest is a little different now, in the sense that you're kind of behind the scenes with us as we're growing Psyched to Practice into bigger and better things. So I'm glad you could come back and join us.

Oh yeah, I'm thrilled to be here. It's exciting to join. I feel like it's the three amigos. Maybe you guys are the front of it, of course. But it was really exciting when we went down to Podfest, and I think it goes right into the topic we're talking about today, when we saw how AI is being incorporated into so much of the behind-the-scenes stuff and how excited we all were.
I like to tell people I haven't been to a convention that's so nakedly capitalistic, with no discussion of ethics. I'm not aspiring to that approach myself, but boy, the curiosity and inventiveness of people when you totally unleash AI to its fullest extent. And to the point of what we're looking at today, there are some moderations that we in the field of mental health have to make with it. You can start with no restrictions to generate ideas, and then think about how to walk that back so that we do everything in an ethical and legal manner.

Yeah, agreed. Podfest opened up a whole different view for me of what AI is like. I've been aware of it, and we talked about it to some extent when you were here with us last time, but seeing how it's grown at an almost incredible pace, it felt like there was so much we were learning nonstop. So maybe that's a good place for us to start. I know you follow this stuff way more than I do, but what have been the big overarching trends with AI over the past year, or maybe we have to narrow it to the past month, I don't know.

Well, one, I don't know if anybody's truly on top of the advanced developments. It is so broad and coming in so quickly. There are more general large language models out there now. You still have ChatGPT, which kind of leads the pack, but it has a lot of competitors that do various things a little differently, and it really depends on the style the person likes and what tasks they're asking them to do. So you have things like Claude by Anthropic, which is a little better at narrative and has a little more of an ethics component in the tasks you ask it to do and whether it's willing to do them, which is kind of refreshing. On another end you have Grok with a K. As I learned this week, there's a Grok with a K and a Groq with a Q at the end. The Grok I'm referring to is the Elon Musk X version, which evidently is sarcastic in its responses and has been tied into the X platform, so that's a relative newcomer. And evidently Elon has offered to buy OpenAI; we'll see how all that shakes out. Then you've got Google, which originally launched Bard and now calls it Gemini. Frankly, I still find it somewhat inferior, but it may be the biggest interaction with AI that the general populace has, because it's been tied into Google Search, unless you've clicked it off so it doesn't appear at the top. Then you also have all of the Llama-based Meta platforms, and all the spinoffs that have been going on too. So a lot of times I talk about the big general AIs, and then you have these specialty AIs, but a lot of those specialty AIs are using the same basic engines. There are about four or five really popular engines, and basically it's like cars using the same engine: you put different bodywork over the top to make it look like a different type of car.
And it performs in different ways based on all the other stuff you put around that engine. I think that's probably the best way for a lot of our listeners to think about AI. Of course, just when I was about to go present in Illinois recently, DeepSeek, a Chinese AI, came out and upended everything, because it is so much less costly to run one of their training runs, which can cost several hundred million dollars if you're ChatGPT. DeepSeek supposedly did it for less than six million dollars. At less than six million dollars, you've got a lot more players that can get in the game than the huge Amazons and Microsofts and all the usual suspects. So if anything, we may be talking at a really primitive, basic period where there are only four or five you've got to be aware of, and it could blossom into thousands, depending on how good this model is. So I'm afraid I'm not going to give you much clarity here. A year ago was a more simplistic age, you know? Where we used horse and plow for our farm, and that was as complicated as it got.

That's the most interesting thing to me, because that's almost what it really does feel like. You mentioned Podfest, and I went there thinking, wow, I'm in this futuristic world, and a year ago we were plowing fields with a horse. At least it felt like that for me. To hear you say that, when I know you follow it a lot more closely than me, it just seems like it's moving at light speed.

I forget where I heard this, but it was in reference to the early internet: the wild, wild west of the internet. And really, after Podfest, it does feel like we're in this age of the wild, wild west of AI, where there's really not a lot of guidance or regulation. My thought goes to Napster, or different elements of YouTube before it was owned by Google, really flexing the muscles of what the technology is able to do. And I have to imagine that if we were to do an episode next year, we'd be having a really different conversation. That's just the speed at which it's moving, and in some ways it's probably driven by AI helping to create even more AI, so I imagine it's moving at an exponential rate.

Yeah, I think, Paul, your analogy with the early internet is apt. Coming into grad school around ninety-four, having one of the first web pages, the search engine is probably the best equivalent. Before that, it was incredibly hard to find anything; you had these long codes you had to type in just to connect to another bulletin board. Once the search engines came around, it was amazing how quickly you could do it. Eventually Google came along, ended up dominating, and has for the last couple of decades. It feels a lot like that. You've got big players who are tech-adjacent, or in tech but not necessarily AI, who've gotten into this. But there are always these little jump-starts like DeepSeek that come along, and even though they don't have the same amount of resources, they may have a better model.
And it can put the other ones out of play. There's a reason Microsoft isn't in the mobile phone business.

Absolutely. So hearing that this is moving at such a rate: for those people listening who are mental health professionals or educators, if they're not on this, have they missed the boat? Is it too late to enter the game?

Well, as they say, it's never too late to enter, right? APA came out with a Practitioner Pulse Survey a little while ago, I want to say toward the end of last year. These Practitioner Pulse Surveys aren't massive; they don't survey thousands and thousands, they get a few hundred psychologists at various points in their careers. And they asked the question: in the last twelve months, how much have you really engaged with AI? The result is that even among the young whippersnappers, what they call early career psychologists, 59% had not used AI at all. Add in those who used it once and you're up to 63%, so roughly two-thirds of the youngest cohort. And as you might guess, the longer you've been in your career, the higher it goes, all the way up to those who've been working more than thirty years, where over 80% have never used it or maybe used it just once. So odds are a lot of our listeners are in that category, and you're in good company, is the lesson. They did do some follow-up about what the people who do use it are using it for. But to your point, and to a lot of our listeners: you're not too late. It's still being shaken out.

Now, I will say, once you start dabbling in it, it gets a little addictive. Honestly, if you saw what I was able to put together for a presentation I have coming up, on a topic I didn't have a great background in, I produced something of really high quality using AI, using it in a responsible, ethical way, but really utilizing it. This would have taken me a week to do otherwise. It took me about a solid day, from pulling up new articles, analyzing those articles, and putting them into slides, to animating them and adding visual graphics. A single day. So once you do get dabbling, once you're doing it in a responsible way, once you understand it. I have a whole model I've created to get people from the start line into more advanced uses, where they're not tripping over ethical issues with data privacy, things like that.

So I'm curious. It almost sounds like there are two things for our listeners. One is the clinical piece. And then there's what you just described, using it for a presentation, almost more as a research tool than a clinical tool. Where do you see the clinical applications? What does that look like currently?

Well, that's a two-pronged answer. The first thing is, if you wanted to look at pure clinical use, you can see the immediate relevancy of it. You use one of them yourself. It's called BastionGPT.
It's basically a ChatGPT that's been modified and made HIPAA compliant, to help with report writing, consolidating notes, things like that. There are several others out there. I'll let the people listening right now know that this does not mean I'm vetting these for you; these are just examples. As you quickly understand with AI, you only have so much time in a day, so when I toss off five or six of these names, I don't have time to use all of them. I just know they're out there and what they claim to do, and that's where the due diligence part comes in. When you listen to this and hear all the different names thrown out and get overwhelmed, which I know you probably will, just take a breath. Just pick one. Use it. See what you like. And if you don't like it, know that there are many others. One of the big guidelines I give people who are dipping their toe into AI is: just try it, see if you like it, use it a little bit for things that are minor, and then ramp up as you go along. You don't need to grasp every single aspect of it. The other important thing goes along with any new technology we take on: what do I want to accomplish? What is my purpose? What is the task at hand? Don't look at the tool you're using; look at your destination. That's going to help you a lot when you're selecting an AI.

So in addition to BastionGPT, you have ones like AutoNotes and Upheal and Mentalyc, or however they pronounce it. I always like to say the people who create these companies are the ones who failed their spelling tests in elementary school, given the names. But basically, what those three do is everything from transcribing your clinical counseling session to summarizing the results, bullet-pointing them, and creating, let's say, a treatment plan from those notes, all in a secure platform.

Now, there are some caveats with that. The medical field is way beyond us in the mental health field as far as transcribing sessions and putting them into your EMR systems, your electronic medical record system. So there's the story of the individual doctor who got sued, successfully, because he was using an AI transcriber with his sessions. Of course, their sessions are much shorter than ours. But in the conversation he was having with his patient, she said something like, "I just don't have the heart for it." The AI transcribed that, took it out of context, and automatically put in a referral for a cardiologist. When the doctor went back through those notes, he didn't catch that and approved them. So then the woman got an automated call: your referral to the cardiologist has been put through, you're going in on such-and-such a date. The cardiologist does the workup and says, I don't see anything wrong. She says, I know, I don't feel like there's anything wrong, I'm not sure why I'm here. They eventually traced it back to the doctor not appropriately catching the error of the AI, because it interpreted "heart" as, oh, there are heart problems, we need to go to the cardiologist. So it's just examples like that.
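(Editor's illustration: here is a minimal Python sketch of the general workflow Dan describes, drafting a note from a de-identified transcript and then forcing a human sign-off before anything is filed. It assumes the OpenAI Python SDK purely as an example; it is not how BastionGPT, AutoNotes, Upheal, or Mentalyc actually work internally, and the model name and prompt wording are placeholders.)

# Illustrative sketch only: a generic LLM call that drafts a session summary
# from a de-identified transcript, with a mandatory clinician review step.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_progress_note(transcript: str) -> str:
    """Ask the model for a draft note; the clinician remains the author of record."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would work here
        messages=[
            {"role": "system", "content": (
                "Summarize this de-identified counseling session transcript into a "
                "brief draft progress note. Do not infer diagnoses, referrals, or "
                "treatment changes that are not explicitly stated."
            )},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

def file_note(draft: str) -> None:
    # Quality control: a human signs off before the note goes anywhere.
    print(draft)
    if input("Approve this draft as written? (y/n) ").strip().lower() != "y":
        print("Draft rejected. Edit manually before filing.")
        return
    # ...write the approved note to your record system here...
    print("Note approved by clinician and filed.")

The point of the sketch is the review gate, not the model call: the "heart problem" story above is exactly what happens when that gate is skipped.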
I think one of the things we need to worry about with AI is that it's almost too competent, and yet incompetent. Any time we have an outside thing that looks like it can do the job on its own, we as humans say, great, then we don't have to do it. It becomes, you take the wheel, I'm not going to be supervising, you're good. And that goes back to how the human mind works: if we can figure out a shortcut to get the job done, we're going to take it. So with AI, we've got to go against that tendency if we're going to rely on it for clinical work.

I know we've been referencing Podfest a lot, but one of the sessions I was in was all about integrating AI into a podcast, using it for assistance with editing and generating better content. They were trying to cover all the areas where you can use it, and the presenter's approach was more, let's just focus on the areas where you really shouldn't, because whatever's left over, think of it as something you can use. And quality control was just about the only one that came up: making sure that what it's producing is actually in alignment with what you're expecting, or the message you're trying to share. I think that's the same thing you're sharing there, Dan. We have to have some quality control, and that has to mean human eyes on it. Even though we want to trust it, and most of the time we can, it's those instances like a referral to a cardiologist that ends up with a doctor being sued.

Yeah. What I've found in my time dabbling with it is that I don't like to use it for a lot of things. I don't like how it transcribes certain things. But what I tend to find it most useful for is this: I always say I write too much like a psychologist, and while that's wonderful at times, that's often not the audience I'm really writing for. If I'm writing a report, I want the person I evaluated to understand it. So I write it, and then I almost use the AI like a spell check and say, make sure this is at an eighth-grade reading level, instead of writing in psych jargon that doesn't quite connect with people. That's where I've found it probably the most clinically useful: it's enabled me to express an idea in wording a general audience can follow, which, to be honest, I sometimes had a hard time doing. Now, I don't always like what it writes, and sometimes I scrap it. But I'd say a good 75% of the time it gives me an idea that lets me communicate better. That's where I find it most useful.

I was going to say, Ray, I use it the same way, not for my clinical work per se, but I write a couple of columns, one for the NASP Communiqué and one for our local Richmond Register newspaper, and those are two different audiences. I create a thread in the AI. If you're not sure what a thread is, it's basically your ongoing conversation with the AI about a particular task you'd like it to do, which makes it different from a search engine.
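(Editor's illustration: a minimal sketch of the kind of readability and audience-feedback pass Ray and Dan describe, written here against the Anthropic Python SDK since Claude came up earlier. The model name, audience description, and prompt wording are illustrative assumptions, not their actual setups; the key design choice is asking for feedback rather than a rewrite, so the text stays in your own voice.)

# Illustrative sketch only: request editorial feedback on a draft for a
# specific audience, instead of asking the model to rewrite it.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

def audience_feedback(draft: str, audience: str) -> str:
    """Ask for comments on clarity, jargon, and transitions for a given audience."""
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # hypothetical choice
        max_tokens=1024,
        system=(
            f"You are an editor. The intended audience is: {audience}. "
            "Do not rewrite the draft. Instead, comment on clarity, jargon the "
            "audience may not know (aim for roughly an eighth-grade reading level), "
            "transitions, and anything confusing or missing."
        ),
        messages=[{"role": "user", "content": draft}],
    )
    return message.content[0].text

# Example: feedback on a column aimed at older, general-public newspaper readers.
# print(audience_feedback(open("column_draft.txt").read(),
#                         "general-public newspaper readers, mostly older adults"))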
I have a thread for my column for the newspaper, which is mostly read by older individuals. So I make sure I run my draft through it, and in my instructions and prompts I say, hey, this is the audience. Please look to see whether this is content this type of audience, the general public, would know. How is the clarity? What suggestions would you make for the transitions? Is there any added content that might work better? Is there anything confusing? So it's not just "rewrite this for me," which it could certainly do. But getting back to our quality control, I've learned to write and have spent a lot of time honing my own style, and I want to make sure the piece stays in my voice. I do appreciate the feedback, though, because I think we all had the experience in college of trying to find that English major to check our writing for whatever we'd done, and the worst feedback you could ever receive from a friend is, "oh yeah, it's good," and nothing more. This does a good job with that. And if you don't like the feedback, there's even a button to have it go back and look at it again.

The other thing about AI is that its feelings don't get offended. If you want to say, "that's horrible feedback, I'm very disappointed in you, you need to do better," it'll do better. They don't know why it does that. I tell people, if you've got an aggressive kind of feeling built up, go ahead and talk to your AI and say, "that's awful," nothing we would ever say to our human allies, and in the weird way that AI works, it sometimes gets better. If that goes against your personality, you can always say, "that's a nice try, but I really think you could give more effort," and it'll also do better. So you don't have to swear at your AI, but it's open to it and you can't offend it. That's the other nice thing.

I'm curious about your thoughts here, Dan, because one of the ways I find myself using AI clinically is actually in the session with clients, especially with my ADHD individuals: trying to incorporate pieces that let them create functional automations or shortcuts, things that don't take a lot of cognitive load or require them to be especially mindful or intentional, just to produce a specific result or something that can be reproduced. Again, Podfest was phenomenal for gathering some of those resources to share with other individuals, saying, hey, if you need to identify certain emails, you can run them through an AI filter, it'll identify them, and then it'll create a summary of what you need to do based on that. And again, quality control, but being able to actually encourage AI education, in some ways, for clients. So what are some of the risks if someone's going in with AI and really doesn't have that ethical mindset?

Yeah, that's an interesting perspective, Paul. I hadn't really thought about it in terms of training clients in the usage of productivity software. We've been doing that with technology generally with a lot of our clients who have a variety of mental health issues that
affect planning and organization and all sorts of things like that, going all the way back to using the calendar on your phone with alerts. Being a college professor, the emails I get sometimes would really benefit from an AI pass: I'm writing to my professor, I want to say this, what's a good way of saying it? Now, it would ruin a little of our fun. I usually spot it very quickly, because it's overly loquacious: "Oh, Dr. Florell, I sincerely appreciate the fact that you're reading this and giving me the time to express that I will not be in class, but oh, woe is me, the knowledge that I will be losing." I have never had a student write to me like that. More likely is the classic email one of my colleagues literally got from a student: "yo, I'm going to be late, I got distracted satisfying my lady." I would miss that if they ran it through AI, though I would love to see what the AI came up with. But the pure honesty on some of this stuff.

To your point, Paul, I think there's value in being able to talk with your clients about when they use these. These AIs are not some benevolent machine; they do take your input and put it into their learning model. And I can see it: ChatGPT, which I was just using in the last couple of days, has improved from what it was four months ago. It's giving me better results. I'm running it through visual images, and it's actually getting the words right and not looking like some hugely dyslexic AI anymore. So it's constantly improving, but there are some risks in it. There's also the issue of over-reliance on any machine: is it a way of backing out of trying to do better with your own skill set? Like social skills, for instance. If we have neurodivergent clients and you want to text somebody you're interested in, and you have the AI help you with those social skills, is that good? It may be good for that immediate conversation. Is it good when you eventually meet face to face and you're not going to be able to bury yourself in your phone with your AI helping you out?

Yet.

Well, you say yet. All of these have apps, and ChatGPT now allows me to have an open conversation with it, and I think it can even do active translation. You can even hold up the camera and ask, what is this book about, and a lot of times it'll give you a summary of the book. So yeah, as you both said earlier, give me another half year and we may be having a totally different conversation. I know this poor episode is going to be real popular and then end up in the dustbin. Four months from now it'll be, oh, that guy is talking and it's so old.

Well, it's interesting, because with the interactive piece, I can't think of where I read it, but they're really doing a lot of research on using AI in initial crisis intervention, where someone either texts that they're having suicidal thoughts or, I think, can even call in. And I think I may have even seen something with a visual component,
like an actual avatar you can talk to. It seems like they're just constantly advancing. But the interesting part is that the decision making was actually pretty accurate as far as making sure the person got to the right place. Now again, it's early, and we're not relying on that completely, but it seems like in those kinds of cases where you might need an immediate response, it was pretty good. And when I looked at some of the dialogue it was producing, the interesting part is I thought it was quite empathetic. The responses were ones I could see myself saying to someone in a session. It's pulling information, the responses were really consistent, and then, based on what the person said, it went through a decision tree in a really rapid fashion and got them to the right place in a much shorter time. Where somebody might be doing a crisis appointment that takes an hour for an interview, in ten minutes the AI had them matched with the right resource. That's interesting. It's promising. And you're right, what's that going to look like in six months?

Well, it's interesting you bring that up, because that's one where I ran across a real example. There are these newer models they're calling small language models, which are based on a much smaller but higher quality database. The National Health Service over in the United Kingdom implemented something called Limbic, L-I-M-B-I-C. It's a natural language chatbot whose purpose is to help with referrals for mental health services. It works like a decision tree, and it's very empathetic. What they found is that it ended up being used in almost a quarter of all the talking therapy services, and when it was utilized, it reduced the intake by about thirteen minutes, lowered treatment dropout by almost 20%, decreased the wait time for assessment by 13%, shortened the wait time for treatment by 5%, and the client feedback was 89% positive. So what does this do? It gave that national health service twenty-four-seven coverage, and then, after it talked to people, it referred them only to the appropriate provider who had the specialization that met that client's needs. If you're dealing with anorexia, they're not referring you to the people who specialize in bipolar disorder, right? Now, the advantage, of course, is that they have a nationalized system, so they really have a high incentive to use a referral chatbot like that. And if you look at the output from Limbic, it's still pretty basic chatbot language. But to your point, I think it's just getting better. You talk about basic therapeutic technique, the reflection, the mirroring of emotions. I don't think that's that hard for the AIs to do. They're really good at summarizing, and what is that? That's just reflection, right? So yeah, I think to your point, we're headed in that direction.
So you've mentioned a couple of different ethical issues as we've been talking, and using things in a responsible manner. Are there other ethical pieces regarding AI that our listeners should really be aware of?

Well, when you get into the legal and ethical weeds, and this is my own approach, it really boils down to informed consent up front. Any time you have AI anywhere in the process, be it therapy or especially assessment, I think you really want to be transparent: let them know you're going to be using some AI services, and I would even spell out what areas you're going to use it in. For example, we are going to use it for transcribing our counseling sessions, and here's what happens to that data once it's transcribed. Most of these services transcribe it and then delete it after it gets summarized, so you don't have that privacy issue of data just being out there. With assessment, are you going to use it to help write part of the report? Are you going to use it just to copy-edit a little, like a Grammarly kind of AI? You really need to let them know.

Now, there's some debate about how we cite our usage of AI. There is a camp that says, when we do our assessments, we will put it in the actual report: AI has been used to write part of this report, or whatever it might be. I look at that as asking for permission after the fact. What are you going to do if they say, well, I didn't like that you used AI? You want me to go back and write it again? That's not going to happen. You need to do it up front, in the informed consent. Plus, I've never seen a report talk about any technology that was utilized to produce that report. I don't say, oh, I used Word, and it offered some suggestions of words I might want to use as I was typing, and I decided to accept them. I used a scoring program; I had to go online to run that scoring program. I have never read a report that had any of that in it. I realize AI can do a bit more, but I think the principle is the same: you always inform at the front end, with the informed consent. And once again, with therapy too: if I'm going to use it to summarize my notes, where I just jot in four sentences and then the treatment plan and everything is generated by AI, I'm going to notify the client that that's the service and that's what we do, and answer any concerns they might have. I think that's an important part too. I would even point it out in the informed consent: here's this, this is when I can disclose, and oh yes, this is AI, this is how we're using it so we can give you better services, this is how your data is not going to be utilized, and these are the guarantees we have about that. Those are the kinds of discussions to have, just like we do with other aspects of informed consent.

Yeah, I think that's a great point. In my consent I've actually always had an area about technology. Gosh, for fourteen years now I've used electronic health records, so I put that in there, and many tests we give now are administered on iPads, given digitally. So I've always kind of advanced that.
And that's how I've changed mine. Again, for me, I don't use it to write reports; that's not something I feel really comfortable with. I think there's an art to writing about people and not just about scores. But I do use it like Grammarly, so I put in there that I may use it to find wording choices that will be more appropriate. So I like that point. And I guess I wonder, is it different for those who are using it like other editing software versus someone who enters some scores and it spits out the report? Do we have different obligations?

Yeah, if you wanted to really dig into the weeds, you might have different obligations. But I think that makes things more complex to the point where it doesn't help. I think you take a relatively conservative approach and a broader "we're using AI," and then it's fine to discuss a little more detail if they have some concern. You know, like, "I'm not going to use it in any of my writing; it's just going to be used as a way of modifying some existing material," or however you're using it. If you try to dice it too finely, there is no consensus, there are no guidelines. APA put out some guidelines about using AI within schools, and if you read them, you'd never use AI. One of the first bits of advice was to find out who the developer is and what their background and history are. I don't have time for that, and that's step one; there are something like fifteen steps. If you did all that, it would be my full-time job. But AI is not my full-time job; it's what I'm using to help me be more productive. So reading sources certainly helps, and then just use it with some common sense, knowing where the gray areas start to come into play, where maybe you need to give more pause, consult with others, and do more research.

As we're talking, I'm trying to distill down these ethical considerations, and I'm hearing privacy, informed consent, and quality control and review as another side. But I was also thinking about the idea of over-reliance on AI, or even using it to operate outside of our scope in a sense. I'm curious if that's something you've come across, Dan. I recognize it's emerging and there isn't a ton of research out there, but are you hearing about any of those concerns, especially any legal concerns? Similar to the doctor, though that was more quality control, but anyone who's claiming to be able to do something when really they might be over-relying on AI?

Well, getting back to the model I created, the screen for whether you want to use AI or not: you tapped into one of the two main factors, which is, is there personal information involved? The moment there is, that kicks it up to where you really need to fully work through what am I doing, and what safeguards are in place. The other factor is, if things go badly with however I'm using it, how bad is it going to be? If it's going to blow up my life, I'm really going to put a pause on it. If it's that people snicker a little because I misspelled something or something didn't go over great, that to me is a reasonable risk.
And once again, if you do quality control, the odds of that happening are pretty low. So that's where I've landed on the ethical and legal aspects: it gives me a really simple guideline so I don't get overwhelmed with all of the criteria that people are coming up with.

And then your second point: I will tell you it's already a concern for me, beyond psychology. I've told people, now that I'm twenty-odd-plus years into my career, AI is the best thing. I can put something in there, I can scan it, I have my little BS sensor, and I know generally what's accepted. If it generally fits into that accepted category, I can roll with it. If it's out in left field, I know enough to recognize that it's out in left field, and I'm going to do more due diligence. If I'm a graduate student, if I'm two or three years into this field, I don't have that skill. And once again, it seems almost brilliant enough that I feel I don't have to do the vetting. As we've seen, there's a litany of people who've over-relied on it in areas they shouldn't have, and it's gotten them in trouble. In the college world, the big concern is cheating. Students are using it with aplomb, and they're not using it smartly. I tell my students, if you're going to use it to cheat, please do it smartly. Leaving in something like "this is the answer to your prompt" and pasting it in, that's using it stupidly. And then I tell them, if you're going to use it well, you're going to spend as much time as it probably would have taken to do the assignment, and you're going to miss out on all the knowledge you could have gained.

So I think you're also going to run into, Paul, to go to your point about competency, people being overconfident: well, I've not dealt with bipolar before, but the chat says these are the steps for motivational interviewing, and it's given me some interview questions, so yeah, I think I can take that client. I think that temptation will certainly be there. Or, I hate to say it, the pressure might come from an employer to take on more than we feel competent to, and that's a hard thing to stand up to, especially if they say, well, we got you the AI system, so it provides you those guides. And yet it could steer you the wrong way, especially if you're using an AI that hasn't really been specialized for that type of work and is just a more general one, like the big ones we've been talking about.

Yeah, it's interesting Paul brought this up, actually. I have a case I just finished, and I'll be a little vague because it's a real situation, but it's one where a school psychologist did an evaluation and made a number of recommendations, and the parent, I don't want to say was offended by them, that's probably the wrong word, but they were upset by them. It wasn't consistent with what they'd been told by other clinicians. Long story short, I become this kind of second-opinion evaluator in the case. And when I was going through it, the recommendations were really odd. I went through them thinking, that's just not consistent with what I know about a topic
that I have a fairly good understanding of. So I talked to the psychologist and said, can you just give me the reference? I'm really curious where you got the recommendations. And this individual said, I typed in my summary and asked, I want to say ChatGPT, it might have been something else, to create recommendations for me, and that's what it gave me. And essentially, I think the right word is that AI hallucinates sometimes; it just makes some things up. In this case it took bits and pieces of material that may have been accurate and then created some things that were not. And that's where the vetting piece comes in. So, to what Paul was saying, this person went outside of their competence. A better approach would have been to say, here's what I know is a good recommendation, and then maybe get some ideas for how to clean it up or make it more interesting. Instead, all of these headaches came from a bad vetting process, from just cutting and pasting. And if I hadn't asked the question, I don't know that I would have known it came through an AI process. The recommendations were written well; they just didn't make any sense, and they went against what we know research-wise. So that's where I think the competency issue comes into play. This was a young psychologist who probably just didn't know the literature base the way I did, where I could read it and go, whoa, this is really off. So I think that probably does happen, and it may be the thing we have to be the most cautious about.

Well, to look at the other side of that, Adam Lockwood, who I think you know, has done a study that looked at this. Using a completely made-up data set, they fed it into an AI and also had a clinical psychologist write up the same data, compared the two, and gave them to about three hundred psychologists: the write-ups, the conclusions, everything. And generally, the AI did a good job. Now, while people tended to prefer the clinical psychologist a little more overall, the one area where they actually liked the AI better was the recommendations. I think that's because as psychologists, and particularly in school psychology, we're used to people barely reading our reports, and you get a little discouraged about your recommendations. So you have your laundry list of recommendations and you pick and choose a few, which is still individualized, but a little more generic. The AI, of course, is giving 100% effort on all of it, so it's probably a little more individualized. But still, to your point, they may not be the most current recommendations, or, like you said, some of it may be hallucinated, where it's taken a little bit of each. An important thing for people to know about AI is that one of its overriding algorithmic missions is to answer the question no matter what. "I don't know" is not answering the question. And so it will do that.
So one of the prompts you can use, and this is not foolproof, is to say: let me know if you are able to find good information about this, and indicate "don't know" in that part of your response if you can't. You're giving it permission. There are lots of ways, if you know how to prompt effectively, to get around some of the drawbacks and minimize this hallucination phenomenon. As I said, though, if you use it stupidly, you're going to make stupid errors. If you use it smartly, you're going to make far fewer errors and you can use it much more productively. You're still saving time, even when you vet the output.

Going back to when I shared that I try to educate some of my clients on the use of AI: one of my recommendations is, what you do with this after you leave the session is your decision, but I would really encourage using it to help with the tedious tasks you already have a level of mastery over. I equate it to a calculator. You can write out the whole equation and work it by hand, or you can plug it into a calculator. But the value of knowing how to do it is that, if we don't have the calculator, we're still able to produce the same result. AI is just a tool that helps us reach that end destination a bit more efficiently and effectively, but make sure you know roughly where the end destination should be. It sounds like, in some of these examples, either there wasn't the vetting to see whether they arrived at that destination, or they didn't truly know what that destination should have been.

I agree. It goes back to my earlier point: what is the purpose? What do you want it to do? And AI may not be the solution. There's a lot of technology, a lot of tools that we have. I still use Google Search for a lot of things, and I use AI for others; it just depends on the task I have. And I don't feel beholden to any one of them. If I find better tools that can do whatever task I'm asking, I'm going to switch, and I think that's what everybody needs to do. It's going to be intimidating trying to keep up with this wild, wild west, so don't feel like you're falling behind. Listening to podcasts like this and going to trainings will help you get a sense of where things fit well for you, and then you can proceed. But I wouldn't ignore it. It's not going to go away, and eventually you're going to run up against people who are competent in its use doing similar things to what you're doing, and they're going to run rings around you.

Yeah. So, Dan, as we wrap up, what would you say your three key takeaways would be for those listeners who haven't started yet, who haven't really taken much of a dive in? How do they get started?

I would say step number one: go to one of the major large language models and register for a free account. ChatGPT, Claude, Grok, any of those would be fine to start with. Dip your toe in. Say, hey, how do the Cincinnati Reds look this season, and see what it says.
When you want to start using it clinically, find something you've already done clinically that you've already vetted, where you already know what the answers are and what a good response looks like, and feed that into the machine without any identifying data, and see how it looks. If you don't like its output, go to a different AI and see if it's better there. You can take these little steps before you put down actual money to use these things, although once you do, the paid versions usually offer you even more capabilities, more things you can do with it. Just gradually expand, but expand naturally. Do not feel like you're missing the boat. Most of the people in our field are being cautious about this initially, as they should be, and then there are pioneers like me and some of the others out there. The last point would be, once you've settled on something, to go ahead and start using it in practice, as much as you're comfortable with, as long as you've got the safeguards in place with whatever AI you've decided to use. I think you're going to really like the productivity. Those extra ten minutes at the end of a session when you never quite get your notes done, and then you're staying late because you didn't get your notes done? The AI might reduce that to where it only takes you three minutes, and you glide home on time and everybody in your family is happy to see you. Maybe.

Yeah, I can't do that yet, Dan. So, Dan, thanks for joining us. With this topic being ever-changing, I think we'll probably need to do this again in the fairly near future, just to make sure we're keeping people up to date on all these changes, and maybe even take a dive into some specific aspects, because it sounds like there's just so much to learn. So, as always, thanks a lot. We appreciate you joining us.

Yeah, I love being here. You can have me back anytime.

Truly, Dan, thank you so much. And for those at home who are interested in hearing more and staying up to date with what's going on for you, Dan, where can they follow along?

Sure. The private practice website I have is www.mindpsi.net. You can also find me on Facebook under Dan Florell; it's a unique name, so it'll be the only one you find. I'm also on LinkedIn, and even on Bluesky under School Psych Tech, if you've ventured over to Bluesky. Those are all different ways to keep up with where I am and what I'm working on.

And I'm sure our audience is going to hear more and more from you as we continue on our continuing ed journey. If you'd like to stay up to date with what's going on for Psyched to Practice, head over to our social media accounts on Instagram, Facebook, and LinkedIn. We have uploaded all of our episodes to YouTube, so please check that out if that's your preferred platform for listening to a podcast. And if you wouldn't mind hitting subscribe, that just lets us know it's a platform where you really want to see us. Until then, we'll see you in two weeks. Be well. Stay psyched.