Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I’ve been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.
The office has rolled out the use of an app called MYIO. My knee-jerk reaction was to not be happy about it, but I managed my emotions, took a breath, and vowed to give it a chance. After I was sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with lots of Google shit and data sharing disabled, so I’m thinking that might be the cause. My phone is also like 4-5 years old, so that could also be the cause.)
Luckily I was able to complete the steps on PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form there requesting I consent to the use of AI. Hell to the NO. That’s a no for me dawg.jpg.
I’m really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.
If my doctor refuses to keep me as a patient unless I consent to AI, what should I do? What would you do? Refuse, even though it means losing a provider I have a rapport with, who knows me well enough to know when my meds need adjusting? Or consent, even though this is a major line in the sand for me?
EDIT: This is the text of the AI agreement. As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.
This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.
Edit 2: I just wanted to say that I appreciate everyone here who commented. For the most part everyone brought up valid points and helped me see things I had not considered. I emailed my doctor and let them know I did not want to agree to the use of AI. I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third-party online app having access to recorded sessions for the purposes of transcription. They didn’t take issue with it.
Thank you everyone!
i would probably look at review sites to see if this happened to anyone else; if they have a private practice, there will be entries and comments on those sites. i would look for a second opinion at this point, if they are going to defer you to an AI chatbot.
I just had a visit Friday and they made me sign a form agreeing to it.
I do not like this one bit.
Show him the EULA for Copilot (where it’s for entertainment purposes only), and tell him you’ll be going elsewhere and leaving an appropriate review.
AI and the people pushing it are not trustworthy. They do not have your data security nor your wellbeing at heart, even if your doctor does. LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance. Likely, the AI use will be on the part of the insurance company, to find ways of denying your claims.
LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance
This is simply false. AI sucks but it doesn’t help to lie about it.
So your example doesn’t prove a damn thing; the data security in that case had nothing to do with the LLM…
This is about extracting data that was used as training data. Just don’t do that with sensitive data.
You think they won’t use this the same way? That’s adorable.
“I don’t trust companies to hold their promises” is a very different argument from:
LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance
It is certainly possible to implement a secure LLM service.
Your provider then reviews the content of that note to ensure its accuracy and completeness.
You know they’re not gonna do that, in practice.
Hello, it is absolutely justified to be worried. Tell your doctor your concerns, and ask your doctor questions about the use of AI. If you want some help putting together questions for your doctor, lmk.
I’m involved with the development / integration of AI. From the specific text of the AI agreement, it looks like these are the AI tools you’re consenting to:
- Transcription tool: This is a speech-to-text tool. It can differentiate between speakers.
- Transcript -> clinical documentation tool: This takes the text of the transcript, interprets it, and generates clinical documentation based on it.
The agreement does not seem to cover taking the clinical documentation and attempting to suggest diagnoses or care steps.
I am actually concerned by the “recording and transcript are automatically deleted” line. If your doctor reviews the generated clinical documentation against the transcript and misses something for whatever reason, then if they are unsure about something in the future, they can’t go back and reference the original audio or generated transcript to verify accuracy.
There are also concerns about how they are following HIPAA laws:
- What model / service are they using?
- Did they do their due diligence in deciding what service to use?
- Have they looked at other cases where companies said they don’t persist or sell your data, and then sold it anyway, or had a breach of data that shouldn’t have persisted in the first place?
- Do they anonymize personal information before they send it to whatever service they are using? (Note that this is not possible for transcription models, as they cannot know what text to anonymize/censor until the model generates the text. That doesn’t mean there are no HIPAA-compliant transcription models; transcription can even be run locally on consumer-grade devices, meaning the audio doesn’t have to be sent to a 3rd party at all. See the sketch below.)
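As a concrete illustration of “run locally”: a minimal sketch using the open-source openai-whisper package, assuming it and ffmpeg are installed (the audio file name is hypothetical). Nothing here touches the network.

```python
# Minimal fully-local transcription sketch.
# Assumes: pip install openai-whisper, plus ffmpeg on PATH.
import whisper

model = whisper.load_model("base")        # small model; runs on consumer hardware
result = model.transcribe("session.wav")  # hypothetical local recording
print(result["text"])                     # plain transcript, never sent anywhere
```

(Whisper itself doesn’t separate speakers; diarization would be a separate tool on top.)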
I would nope the fuck out and change doctors. A regurgitation machine prone to hallucinations has no place in medical care.
you do know at some point the whole ‘hallucinations’ line is going to be as fresh as calling things ‘woke’, right?
the ‘does this thing have ai in it’ question is already a fucking blur as businesses link to each other via private and public APIs… healthcare is no different.
these things are already in place in many places. if you’re a part of any nationwide health service, you’re already impacted.
it’s like the fact that a huge % of our GDP is tied to like 10 companies… you cannot live your life in the modern united states without suffering products or services from those 10 companies, full stop. your life with ai will look the same.
can you work hard to avoid this shit and cry about it? yep. yep you can… but that’s about it.
you cannot live your life in the modern united states without suffering products or services from those 10 companies
Well, it’s good that I don’t live there.
lucky bastard
let’s hope the humans in your area are less greedy than those here, else it’s only a matter of time
Ummm, hallucinations are literally how LLMs work. Everything they generate is confabulation, though sometimes it’s useful confabulation.
I think we should stop using their terms.
LLMs spout BULLSHIT half the time. They don’t hallucinate. They confidently state incorrect garbage.
It’s almost like the very businesses that creamed their pants about being able to replace workers and endless “blue ocean” profits exaggerated, lied, and forced AI into every. single. product. That’s not consumers’ faults…
i can’t understand why people are oblivious to the multi-faceted war-front that is AI.
there’s the shit you hear about and see every day (oh look, copilot shit the bed! claude can’t add! teehee, look at all the extra fingers!) and then there’s the shit that is actually being implemented in process models all over the place, in nearly every department. from inventory to healthcare analysis to customer service, this shit is in daily use now… and you cannot avoid it.
ai is just an api call away and software companies suck.
you do know at some point the whole ‘hallucinations’ line is going to be as fresh as calling things ‘woke’, right?
The truth doesn’t care whether it’s “fresh” or not.
As long as AI still hallucinates, it will be useful for entertainment purposes only and never for anything as serious as healthcare.
haha, k. it’s clear you don’t, but that’s ok.
Dude must be some MBA crypto bro AI slop jock. His grammar isn’t good enough to be one of those idiot CEOs who just learned what artificial intelligence is. Maybe he’s a shareholder for one of those soul-less companies. Probably not that either though. Perhaps he’s just a terrible artist or programmer who uses AI slop for all of his works of shart. The possibilities really are endless these days.
i’m an ex-corp drone whose value was replacing humans with automation.
it sucks, it already exists, it will happen more. llms are already in these pipelines and there’s nothing any of us can do to avoid it.
i’m not saying it’s good. i’m not saying it should be. i’m saying it exists right now, cuz i’ve been a part of it.
…your value is replacing humans with machines?
Explain to me the value of that.
maybe you’re new here, but big business likes it when they save money. value.
I exited purposefully
Oh okay, so your only value is the pursuit of material bullshit and not the well being of human beings. Good luck getting AI to pay for your shitty wares when nobody makes money to afford them. 🤭
I have no idea what it’s like to be you, and I’m glad I don’t. Enjoy your cold empty heart! 🙂
“let yourself be exploited! do not resist!”
If this was for a GP, I would agree with this stance. But a good, fitting and competent mental health professional can be harder to find.
That’s the last fucking profession who should be using LLMs… People can gaslight themselves with chatbots without paying for a trusted professional to reinforce that bullshit.
OP didn’t state this clearly, but I went and looked. The app is not for replacing consults, only billing etc., so I’d put it in the “annoying, but not world-ending” category.
I don’t believe that. They just don’t want to pay them what they’re worth. Machines don’t ask for days off.
By god they’re going to make OP change doctors just because they hate “le stochastic parrot”. And op is probably in the US which makes the whole thing even crueller.
Literally a horde of teenagers playing with a bipolar’s head because they have big feelings about stuff.
And all this for a fucking note-taking app, Jesus Christ. Yeah, sure, OP is probably risking their mental health in the process, but who gives a shit about that when you have an occasion to proclaim that le AI bad.
you seem to have no clue about the problem at hand. It’s the lesser issue that the AI transcriber could hallucinate. The worse problem, which is irreversible, is that the treatment session and every private detail that gets discussed is funneled to at-best-questionable companies who will do whatever they want with your private information. Once that has happened, you can’t just make them delete what they stored in the process; it is completely unverifiable what they do beyond offering the original service. Everything that was told in the session will not stay between the two of you.
accepting this unknowingly is very dangerous. accepting it knowingly will alter what you say, and the results with it, like going to a therapist who you know personally, which is not allowed for very good reasons.

You think therapists and doctors in general don’t use Docs or Notes services that are hosted or backed up in the cloud? You think having your medical data leaked to tech companies is new? Just because the note-transcription app is AI doesn’t make it magically worse. In fact it makes the data harder to access, as you need to re-infer the whole enchilada if you want to mine it (as opposed to, say, Google Drive, which can just run a SQL query on your data and get it structured and ready to use).
It’s nice that mental health is so inconsequential to you that you can balance it against privacy purity politics. It’s really cool for you that you’re in this position of privilege. It’s not cool to be pushing on someone with a clinical condition in a way that will probably get them worse off, in a country with absolutely no mental health safety net. Just like antivax it’s coated in fake concern, but you’re playing a dangerous game with someone else’s life and you’re cool with it because you’re insulated from the consequences.
You guys really are a pure product of those amoral hyper-individualistic times.
It’s nice that mental health is so inconsequential to you that you can balance it against privacy purity politics.
oh now I’m a privacy purist! oh god what have I become! I want totally unreasonable things!!
or, it seems you by default don’t care about privacy at all, because surely who needs it, and you’ve also already forgotten the case of women in the USA whose online period-tracker apps outed them for having illegal abortions.
Just like antivax it’s coated in fake concern,
fake concern, sure… my concerns are very real, and OP has come for advice, asking among other things what the consequences could be. well, this is one of the consequences there will be.
You guys really are a pure product of those amoral hyper-individualistic times.
yes, blame me, not the system that made this situation. don’t you want to call the cops on me?
i would probably report him and leave him a bad yelp review, warning others.
Yeah, though that’s about 4/5 of the actual people I’ve met working in psychology.
It would be an absolute deal breaker for me. There has never yet been a commercially available AI that doesn’t hallucinate, and there’s no element of my healthcare where I’m comfortable having facts be unreliable.
i’d request further information. is the LLM used for secondary analysis? is it the primary, with the doctor evaluating the results manually?
‘ai’ is just a tool that, in the right hands, can be beneficial. that said, it absolutely can be used by lazy assholes to pretend to do their job…
so, if you have the resources to demand ‘no ai’… go for it. but i’m too poor to demand much, and i’d be more focused on how the ai is used.
it’s just for notes, no big deal, they use it all the time down here in Australia
Though OP asking Lemmy about AI is a bit pointless, given how anti-AI it is.
Can you ask how AI is used in the app?
And to piggy back this question: what alternatives do you have and are they actually viable?
The alternative is finding a different provider. I already have a long list of offices to call. Getting a list together was the first thing I did when they notified me about rolling out this app.
I can, but in truth I don’t care. I don’t want my data being used to train AI, and I don’t want my treatment to be guided by AI.
It doesn’t sound like AI is being used for either. It’s just summarizing the encounter at the end as a note, and not storing any data to train on.
The “fine print” you added doesn’t say the automated transcript will be used for training a model. I’d highly, highly doubt HIPAA-protected clinic notes would be used for training an LLM. If they did, the clinic would go bankrupt from lawsuits.
Also, if they only use AI for automated transcription, would you feel the same if, instead of “AI”, it were called a dedicated automated transcription tool?
If you abhor all things AI, your feelings of not continuing with this clinic are valid. However, I don’t think they are using AI in ways you think they are.
If they did, the clinic would go bankrupt from lawsuits.
for that, patients would need to be able to prove that their data was used. how would you be able to prove it?
I’d highly, highly doubt HIPAA-protected clinic notes would be used for training an LLM

So ask about those two specific points.
And in the session you can (probably) go over the generated notes with your doctor to double-check.
The term AI is very broad and generic, today it’s used to refer to LLMs and fancy denoisers. But AI has been around for decades in some form or another. My point is, speech transcription has been around longer than the current LLM fad, so it might not be an LLM doing your transcription. Would that allay some of your concerns?
If it were a locally ran transcription software, would a healthcare provider still be required to ask your permission to use it?
I very much hope so, because in neither case can they guarantee that the data won’t be transferred elsewhere.
It records the sessions, then makes a transcript for “note-taking.”
Given how captured our data already is, thanks to the lack of regulation even in the medical space in the US, I simply do not want my personal data to be used in anything but in-house signal-to-noise improvement for diagnosis.
Anything else, which is most of it, is unacceptable and I do not consent.
You’re probably not suffering mental health crises, desperate for bare-minimum psychiatric care. It is an absolute jungle here, and it can literally take years to find the right person, and they are almost never on insurance.
Privacy and your rights to it and your own autonomy/med care are important.
However, some may have to weigh the safety of themselves and those around them to determine whether they should be standing on principle and refusing care.
You do not know me nor my medical needs. Don’t be so presumptuous. Even (and especially!) folks with acute medical needs deserve privacy. Urgency often presents as a moment where people can be taken advantage of.
Definitely ask how they are using it. I know a number of physicians who are just using it as dictation software to quickly make a first draft of their paperwork; it helps lighten a big load.
Based on OP’s edit, that sounds exactly like what it’s doing.
This is the answer.
Most docs can’t keep up with the mountain of paperwork or billing codes required by insurance companies these days. The software helps, but requires the doc to review and sign off on the notes.
It’s not an LLM coming up with treatment plans, etc. It’s transcription+
I had a visit with a PA who pantomimed the use of an inhaler she didn’t actually have on hand. The note-taking robot decided that was a “demonstration” with a billing code, and that it should be billed as $800.
Dictation and summary software could be installed directly on the doctor’s computer.
There is something else going on here, with pushing an app onto patients.
The AI is the summary software. How else do you think the summary happens?
Lol, it happens on the doctor’s PC, without triggering clients.
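For what it’s worth, that is technically doable. A minimal sketch of a summarization step that only ever talks to localhost, assuming a local Ollama server with a pulled model (the model name, file name, and prompt are all illustrative, not what MYIO actually does):

```python
# Minimal local-summarization sketch: talks only to localhost,
# so the transcript never leaves the doctor's machine.
# Assumes Ollama is installed and `ollama pull llama3` has been run.
import json
import urllib.request

transcript = open("session_transcript.txt", encoding="utf-8").read()  # hypothetical file

payload = json.dumps({
    "model": "llama3",  # illustrative local model
    "prompt": "Draft a clinical session note from this transcript:\n" + transcript,
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])  # draft note for the doctor to review
```

Whether a given vendor actually ships something like this, rather than a cloud API call, is exactly the question to ask.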
AI is an overloaded marketing term. Definitely ask which kind of AI, how it is used, how and which of your data is going to be used.
I can’t speak to mental health, but for a GP, I’d be surprised if many of them would use it. There’s got to be some pride involved, no? I don’t think they’d trust the damn things any more than we do. So maybe they’d make gestures to appease their stupid administrators, but they have the power to ignore the things and do what they want, at least for now.
AI-assisted note-taking is already standard or close to it.
You’d be shocked then. Two folks in my inner circle are healthcare providers and they say it is prevalent.
Personally, I would straight up refuse and hunt for another practice that does not utilize LLM services, as anything that techbros call “AI” is an absolute shitshow. These techbros have a plan to absorb a lot of personal information from others in order to ‘train’ their models. Given that LLMs can never think or feel, make shit up confidently, and are geared towards being sycophantic…
They should never be used for the purposes that techbros and deluded CEOs are trying to make fetch happen. I would rightfully be suspicious of any practice that is willingly using this tech. I would ask for clarification from the practice in question, get shit in writing, and hunt for another practice if the meeting is unproductive.