AI can’t be all that bad. The problem I keep seeing with AI is a double-edged sword. You have corporations shoving AI into just about everything, treating it like it’s a cure for cancer, and that really rubs people the wrong way. Then, on a more societal level, you’ve got people who use AI for all sorts of things: from making art with AI and still crediting themselves as artists, to treating AI like a therapist when that is not advised.
However, I’ve found some benefits with AI. For example, I’m chatting with ChatGPT about credit cards, because it’s something I may lean toward getting into. It’s helping me understand better than most people who have tried explaining it to me, simply because it gives me a more streamlined response instead of beating around the bush.
Karrot is a used item app that has a feature where you take a picture of an item and it IDs the item and tells you what it’s worth. It’s pretty impressive. It could ID my houseplants better than some dedicated plant ID apps I’ve used. It’s not great with one of a kind items, but otherwise it’s surprisingly accurate.
Writing and fact-checking ONLY the most basic concepts and common information that is found multiple times and in multiple places online (e.g. it’s strongly reinforced and verified in the training data and has been/will be the same for a long time).
Mass formatting, changing formats, changing language, and decoding via common methods.
Pitching “what you mean, but can’t remember the name for”.
…and that’s about it.
Finding info in a large quantity of information.
Not a hot dog
Theme parks do use image recognition to flag obscene things in ride photos.
It’s OK for very surface-level exploration. Like 100-level stuff. If it’s something you’d google and easily find in an article, it’s likely to do an OK job.
I’ve also found it’s good for tedious, straightforward tasks. Anything that would be uncomfortable or time-consuming to do manually. Best for one-offs.
I’ve also found it’s extremely good for translation, which was its original use.
Well, I know it’s quite specific, but nothing beats AI at stereo matching and depth-map generation, and that’s important in many fields.
I agree there is a lot of annoying hype. However, I also agree there are some specific use cases where it can be helpful.
I for one find it handy sometimes when I am writing bash scripts to do things on my system. I obviously check them before running, but it does save time.
Although I do recommend running models locally if possible, as it is obviously preferable from a privacy and cost standpoint.

The technology itself is novel and cool. It’s the complete and utter meltdown of all tech companies into brainless hype machines that is harmful, which, of course, is a function of capitalist incentive and the need for the tech industry to come out with some new paradigm-shifting innovation every decade. A normal, healthy society would have been able to leverage machine learning and LLM technology where it’s most useful, like parsing large amounts of data, or running a local instance on your computer to ask a few questions, etc. We wouldn’t see LLMs in every text editor, pencil case, and pair of sneakers, but these snake oil salesmen who run the US economy are absolutely desperate for a new paradigm shift so they can keep making exponentially more money.
The thing is, we don’t need to build these datacenters siphoning comically evil amounts of energy from the grid and making personal compute a thing of the past. The average everyday person doesn’t need cloud compute; they can run a local 4B-parameter (very, very small) model on their laptop or phone if they need to ask ChatGPT to make them a workout routine or to ask who won the 1918 World Series. But these fucking cretins don’t care; that’s not the point. They are in this because it’s a golden ticket to growth city, and once they cash their check they don’t give one hot fuck about the human-spirit-stealing machine they built.
TLDR: our society is broken, and that’s why we keep getting the shittiest, lowest-common-denominator version of everything. Everything has to suck by definition, because that’s the only version that the system we built will allow.
Accurate
I’m chatting with Le Chat about things I wouldn’t ask a friend and wouldn’t trust a stranger about. It brings up things I wouldn’t have thought of.
I occasionally use it to find links to VODs for esports tournaments. Asking it to only link the specific game I want with no other summarization is a way to find them without spoilers (like when youtube “helpfully” suggests the last game of the grand finals of the tournament as a search result).
I’m a therapist. I use HIPAA compliant AI to generate my (editable) case notes for my sessions now. Not only is it a huge time saver to simply edit a generated note as opposed to making one from scratch, but in many cases it takes more detailed notes, including quotes from clients.
I have heard of other therapists and medical doctors also using AI to help with diagnosing.
The danger is when therapists don’t review the content to check for accuracy, because occasionally it will generate something not really reflective of what the therapist was actually doing, or it might lack detail that the therapist might have otherwise included. But more often the stuff it comes up with is surprisingly accurate.

And editing is even easier when you can just tell the AI something like, “include more details about how the client noticed their pattern of putting their own feelings last,” and it just does what you asked. You don’t necessarily have to edit manually, though you can.
I dislike this immensely and actively seek health care providers that don’t use these tools.
My core problem is that I want a professional who engages with me as a human and knows me.
I’m a professional (not in health care) but I “know” all of my clients, and I don’t think that’s an unreasonable expectation for a client or patient. When I pay $100 to talk to a GP for 10 minutes, I don’t think it’s too much to ask for them to have a conversation with me, really truly listen to me, and spend a few minutes writing some notes.
In the case of a mental health professional the time spent after an appointment with a patient is much greater. I don’t really want what I’ve said to be automatically converted to notes for a human to review. I want a human to consider the human to human conversation we have had, in the context of other conversations we have had and the relationship I have with them, and use those insights to produce appropriate documentation.
Finally, I have a strongly held belief that relying on the assistance of gen AI reduces one’s skills and abilities. For example, consider two therapists who have just completed their education and accreditation and start seeing patients. One uses gen AI to produce notes for every patient, the other eschews this practice. Ten years later, which therapist would you really trust to listen to patients and be able to distill the key elements of the conversation both spoken and unspoken?
That said, I’m aware that these services are becoming an industry standard. I suppose they may help therapists see more patients, and in the context of public health that might be a good thing. Whether or not I would use a service like this if I were a therapist is a difficult question to answer. If I were just starting out I think I probably would. That is to say my beef isn’t with you personally using a service like this, more that it’s becoming an industry standard.
I understand those concerns and I think there’s validity. But there’s also enormous potential for benefit.
I know of several therapists who are very good at being present with a client but terrible at documentation. And if one of these has a busy day or two it is easy to get behind. By the time they get around to writing the note the details are very fuzzy. Human memory is notoriously unreliable. A therapist I respect has said that if you’re writing a note 24 hours or more after the session, you’re probably writing fiction. A tool like this has the potential to greatly help the documentation process. But I agree that it should never become a replacement. I thoroughly read all my notes and often make edits to make them more relevant to me.
An attorney I know who specializes in representing therapists and regularly conducts legal and ethics trainings has also said that from a legal standpoint, when comparing human to AI generated notes, the AI notes are usually superior. They contain details like quotes and they automatically include all the stuff that matters for legal or insurance requirements. This attorney is VERY risk averse and honestly I thought she would have been against this, expecting horror stories like artifacts. Her opinion was a factor in me trying it out.
Again, I stress that this is a tool and not a replacement. When I read through a note, I am considering the things my clients said and my interventions to see if it matches up. It’s not perfect but it is very good and I’ve regularly been surprised with how helpful it can be.
Thanks for a considered response. As in all things, there’s nuance and I acknowledge there are benefits.
I’m genuinely curious as to whether you think reliance on this service will diminish someone’s opportunity to build the related skills?
So how does that work? Do you just have an AI listening throughout the session like a note-taker?
Yes, basically, but since it is HIPAA compliant, the recording is automatically destroyed when the note is saved. Also, no protected recordings are used to teach the AI. The therapist can also choose from a number of different case note formats that might focus on different things.
no protected recordings are used to teach the AI
How do you know for certain?
People conflate security with risk mitigation. It’s not secure in the way that you can confirm the data has been deleted. The risk however is mitigated due to vendor attestations reinforced by contracts.
Yep, so you can’t actually know if the recording is destroyed, it’s just contractually required to be destroyed. Big difference in my book.
I wish these sensitive audio recordings were processed locally and never left the therapist’s network instead.
I can’t know for certain, as I’m not on the product side of things. But I do know that HIPAA standards are very rigorous, and if it were discovered that they were intentionally misleading therapists and clients, it would invite a class-action lawsuit that would be insanely large.
I do ask for and document my clients’ consent, though, so if anyone is not comfortable with it that’s fine. I just write the note the old fashioned way. Most are fine but a few have said they don’t want to and it’s not a big deal.
Rubberducking for those with social anxiety. Also low-friction access to surface-level answers that normally took digging through multiple sources.
It’s a study monster that initially wiped out Chegg, Duolingo, SparkNotes, etc. The double edge is that people forgot how to take notes and learn the fundamentals needed to handle complex problems.
And if ChatGPT made a mistake? How would you know before it’s too late?
Because all other information on credit cards (or anything else) on the internet available to people eager to learn is 100% accurate, all the time?
That is absolutely the worst excuse possible to shill for big tech that comes with no real guarantees about precision or accuracy.
While there are trustworthy human sources on the Internet, there are no trustworthy LLMs.
I would trust a book written by a specialist over any LLM output, though
I’ll take that over an information tool that lies 30% of the time.
Honestly, Google Search has been better the last couple years after spending the previous twenty years getting consistently worse.
Most of what I use Google for is trivial. Like how old is a certain actor, or why was this author canceled, or what does this item do in a video game?
It’s great for those things. Especially the video game stuff. I don’t want to watch a 10 minute video just to get a discrete answer, and now I don’t have to.
I can even ask it for spoiler-free hints on a particular puzzle, and most of the time it gives me something useful.
I was sitting in a restaurant the other day and staring at the menu. It was Italian and none of the things made sense. Too wordy and not clear what was meat and what was fancy cheese. The waiter was utterly useless - too busy to help and when present, not answering my questions about what would be a good simple pasta in white sauce.
I took a photo and asked Claude what’s a good white sauce pasta which would be like Alfredo.
It found two options I hadn’t even looked at. AI is good at sorting through complexity. But I don’t just mean AI as in LLMs. It needs a lot more tools and knowledge to be useful. So what you need is a smart system which may or may not have AI as a component.
Going to Italy to use an LLM to find pasta Alfredo is… well, there’s your use case. Pure and unfettered ignorance. I will take my down votes now, thank you. I don’t care. Ugh. Just ugh.
Oh, it was an Italian restaurant but not in Italy. It was in North America. The menu was in English.