Also fuck McKinsey
I actually like AI quite a bit. I get better, more detailed answers and can still zero in on specifics with options. If I ask three people, I get three totally different answers.
The issue I (and many) have with the technology isn’t the thing itself, but the all-consuming drive to wedge it in everywhere.
It’s a tool. Use it where it may be appropriate, and don’t demand others use it before they deem it time.
ALL the cool kids hate AI. You want to be cool, don’t you? Come on, everyone, let’s hate the latest marvel of technological innovation together!
It uses electricity, which nothing else does. It uses water, which is then destroyed forever and can never be used again after it’s been used once for AI. It’s occasionally wrong, which nothing else ever is. It makes billionaires richer, which nothing else ever does. It reduces your creativity, which everyone has in spades. It discourages you from thinking for yourself, unlike the mob mentality telling you to hate it.
Stop acknowledging the many positives and just accept instead that it’s only ever going to be horrible. Forever. Always in all ways. It’s time to turn off your brains and just hate AI with us. Woo! 🥳
Did you write this or just prompt it?
I’ve been accused of being AI (or a robot, or an alien) for decades. Let’s put it this way: this post prompted me to write this.
I’ve found it useful in a couple of cases. Not useful enough that I’d ever pay for it, though, which is probably why AI still isn’t turning a profit.
I only used Venice.ai because it stores everything in my browser (which I can erase immediately). It is just fun to get things off my chest and use a rubber ducky method for any issue I might be having, but it has been useless otherwise.
I’m reading AI Engineering by Chip Huyen and it’s an excellent read. As a technologist, I find the topic fascinating and would enjoy building AI agents. While not a silver bullet, generative models definitely represent technological progress and can boost productivity when used correctly. It’s just that, as with everything else, the billionaires want to milk it for everything it’s worth and more, to the point of crashing the economy and destroying supply chains for their own selfish interests. We just can’t have nice things.
And then everyone applauded.
I don’t hate AI. That’s pointless. I hate the people who use AI to ruin everything, which is the majority of AI users today.
I think you’re being too literal, they mean they hate having to use it or they hate being constantly exposed to its shitty output. Obviously pretty much nobody hates, like, Markov chains.
I reserve the hate (well, severe disdain and contempt, hate is personal in my book, haven’t needed it for quite a while) for the C-Suites and owners, users get contempt if they’re using it to think for them and a pass with some sympathy if they’ve found a way to use it as a tool while retaining executive function. LLMs and broader machine learning are fine, just a tool. You can use a wrench constructively or give someone a concussion, that’s on you.
SamA is the exception, hate that market cornering fucker (and yes it’s personal, I was going to go AM5 this year).
I also hate the term AI.
And I’m not sure about the actual code either. I had an idea for a SciFi story I wanted to write where a person’s consciousness is uploaded into a computer. Now I can’t even try to write it without feeling gross because LLMs ruined everything for me, even AI scifi.
In fairness, uploading consciousness into a computer is a pretty old sci-fi trope, so it would have been derivative even before the AI bubble. That being said, tropes are tropes for a reason; don’t let shitty real-life “AI” stop you from writing that story. Just don’t use the term artificial intelligence and you’re good
Reminds me of one video game,
…which is a medium spoiler for the game itself
Pretty much, but that one took it to whole new levels.
I hate AI. I don’t hate LLMs, SLMs, generative models, etc… but the marketing campaign buzzword that’s currently making hardware unattainable by average consumers and accelerating tracking, canvassing and profiling?
Absolutely hate it - with a vehement passion.
Man, I loved playing around with GPT-2, I was hyped about GPT-3, I paid for AI Dungeon and NovelAI, I was so stoked about all of this. When ChatGPT came out, I was like “y’all are just learning about it?” but I was so happy that this tech I loved was getting recognition.
I’m definitely in the “AI hate” camp now, this really is the worst timeline
I’ve worked with AI, off and on, for over 20 years now. The thing is, I would argue that it’s always been the same. The point of AI has always been to take the place of humans. It has always been a scary line of research with profound implications for the future.
The biggest difference now, I would argue is the accessibility. It used to be that only academics and experts could use it, but now that it uses a natural language interface, any dumb assholes and any bad actors can easily use it. And they do. In droves.
Do I love my 4-year-old? Yes
Would I let my precocious 4-year-old full of imagination write my business report? Fuck no. Are you stupid or what?
McKinsey isn’t exactly stupid; it’s amorality run amok and a culture of cutthroats.
Here’s a great video about what they did to Disneyland: https://youtu.be/Q7pgDmR-pWg
If you’ve ever worked with consultants or managers in general, like 50-75% of them are fucking stupid. Just because they can convince other idiots that they’re not, doesn’t mean they aren’t. I’ve watched the blind lead the blind into financial ruin, while getting paid big bucks to do it.
As a former consultant and manager, I wholeheartedly agree, but your % is too low. The culture of consulting is poison and makes monsters out of people.
I don’t believe the people who contract them are being duped though. They do it to delegate and dilute the chain of responsibility until their decisions become acceptable.
I think it’s a bit of both. Sometimes the people hiring them are truly clueless. The kinds of reports that management consultants make seem really well thought out and intelligent. Other times, upper management wants to make a big decision, and they think it’s the right one, but they need something to show they considered all the alternatives and that an outside source agrees with them.
Also, management consultants are very stupid, but they’re clever in a very narrow area. That’s why they succeed with upper management: like LLMs, they seem clever to upper managers.
People around me use AI all the time to get answers to generalized topics. More and more they use it like a search engine / information augmentation system.
They are not technical people. They mostly know that the information needs to be double checked and might be wrong. But usually take it at face value if the importance is low.
Honestly this is about what they did before. They would search Google, click on the first blog, skim it, and repeat until getting some answer they believe.
I too use AI regularly for brainstorming, quickly summarizing massive text messages, and reformatting text from a jumbled mess into something more cohesive, etc.
I don’t love it or hate it. In some cases it saves a lot of time and is a useful tool. In other cases it outputs trash that we can’t use for any serious purpose.
Just like a hammer or a shovel, it’s a tool. Can be used the right way and it can be used the wrong way.
It can be helpful for quickly summarizing a vast body of knowledge or a highly complex topic, to get a general overview and see which strings to pull further, as long as you don’t take everything at face value and understand that you still need to pull those strings yourself in order to acquire an understanding.
Like, if I suddenly wanted to learn computer programming, I wouldn’t know where to start. But querying an LLM can give me a general idea, define a few key terms and explain the difference between related concepts, without me having to browse through a hundred different tech blogs to answer all my questions in terms I can understand.
But I wouldn’t suddenly think I’m a computer programmer after doing that. I would have a better idea of where to start learning. I would be able to decide whether to focus first on object-oriented programming or functional programming, static or dynamic typing, declarative or imperative syntax, etc., instead of getting overwhelmed from the start just trying to learn the differences between those concepts.
It can also suggest resources for further learning, books or websites written by humans, links to open-source software that does what I’m trying to do, etc.
I wouldn’t expect it to write code for me, but it can be an efficient aid to self-learning and show me what programs and libraries to use for my intended purpose.
Or for astrophysics, for example. I wouldn’t expect it to give me an accurate breakdown of the engineering specs required to build a pair of O’Neill cylinders at a Lagrange point, but it can suggest software for rendering prototypes or for simulating the forces that need to be accounted for.
That wouldn’t make me an astrophysicist, but it’s kind of cool that you don’t need to be one to learn about this stuff and tinker around in a field that’s so vast and technical as to be otherwise prohibitive for non-experts.
It also depends on the LLM, of course. I think Mistral and Lumo are generally pretty okay at doing what I described above. Their algorithms aren’t corrupted by American venture capital, at least, so they have more incentive to give you an accurate response rather than being sycophantic and hugboxing.
I think of an LLM as extraordinarily lossy compression. All the training data is essentially encoded in the model. You can get an approximation of the data back out again with the right input.
I don’t think it’s any less reliable than random blogs on the web, and I don’t have to wade through SEO tripe either.
That’s what makes them shitty though.
When I have a hard technical problem I often search for and read through a dozen different sources. Many of them are wrong, or are right but not covering exactly the situation I’m looking at. Eventually I’ll find one that’s either right and answers my problem, or gives me the clue I need so I can figure out the solution for myself.
If I ask an LLM to solve the problem, it will make up an answer that would seamlessly blend in with all its training data. In other words, it’s most likely to produce something that’s wrong, or something that’s right but not for my particular case, or something that’s close but incomplete. That’s effectively useless. At worst it blends in with its training data enough to convince me it’s right, while not actually being right. At best it’s something that is close enough to give me the clue I need. Most of the time it’s going to be something that’s wrong and I know it’s wrong because if it were that simple I wouldn’t have had to resort to the AI bullshit generator.
The annoying thing, though, is that all the random blogs on the web are written using these LLMs now. It makes it much harder to be critical of your sources, because they’re all coming from an unnamed, proprietary LLM with no information about who owns it or the training data. At least before, I could look up the user or check out their other articles; now every article is randomly generated from some unknown prompt.
I would argue this isn’t only a bad thing, though. Even before AI, plenty of bogus articles and information existed, e.g. the claim that people swallow spiders in their sleep, which many outlets parroted.
I would guess most people never checked (m)any sources on most information they found so long as the ‘vibe’ felt trustworthy. There is no cure to make reality simple, and the more pressure we have to teach people to think critically, the better.
- AI is much better at creating internet spam.
- AI is a vector for even reputable places to “set and forget” any article they’re in charge of. Any mistruths are simply ‘glitches’.
- The pressure on people to think critically only matters if people actually start thinking critically. Kids use this technology to skip their homework.
No disagreement here. I’m simply saying because you are more likely to be misled now than ever, being lazy about it isn’t an option anymore, and teachers can use that fact to drive the point home stronger. In the past if you were lazy about checking sources and verifying information, chances were much higher you still got somewhat valid information that didn’t harm your life down the road. Now you might just hurt yourself by putting glue on your pizza. Not saying I desire that, but the consequences of intellectual laziness have never been bigger, so the emphasis on teaching understanding must match that, since the alternative is being taken advantage of.
#3 is very important, as this is the core thing a school should teach. But let’s not kid ourselves that kids weren’t cheating their way out of homework since the start of time 😄
But lets not kid ourselves that kids weren’t cheating their way out of homework since the start of time 😄
I don’t mean to come off as too aggressive because I don’t think we’re really arguing with each other. But, I tend to see statements like this as a kind of handwaving apologia for something that, to be clear, real people are doing to us on purpose. The same way that people might lament the coming of a hurricane season; nothing really to be done about it.
It can certainly be used for that, I will admit. But no, that isn’t my intention. I hear many good stories on that front of teachers who have developed a really good nose for AI and are using it as learning moments for their students. The world is filled with ways to cheat, and teachers are well aware of that. In the end, the process of getting kids to unlearn cheating with AI is the same as with conventional cheating, is all I’m saying.
I’m sorry, but all the use cases you listed show that you’re just lazy. Stop it. It’s embarrassing.
I’m lazy as fuck. I want to solve problems in the easiest way humanly possible. With the least amount of effort output.
What about you? Do you take the hard way?
I’ll be real with you, I typed lazy but wanted to type idiot. Read your fucking emails, Jesus Christ. You still have to check all the shit generative AI writes because it lies constantly. Its very nature means it doesn’t understand what it’s generating.
Obviously, don’t rely on them to read important emails for you. But so many things don’t need additional checking. We’ve all done at least a decade of schooling. We all know basic math, science, and history. When we forget things, all it takes is a small reminder to get it back. Our brains are capable of recognizing whether we’ve seen something before or not. We’re also capable of reasoning to determine whether something we read is consistent with everything else we know.
So many other things are also so unimportant that it doesn’t matter at all if you’re wrong. For example, some actor looks familiar, it lies to you about what film they were in, and you believe it. Is your life any worse off for it?
it lies to you about what film they were in, and you believe it. Is your life any worse off for it?
I think a better question is: why, then, am I asking it questions?
If I had a friend I knew was a notorious liar, I would—big chess move—simply stop asking him who actors are. Unless it was really funny.
If it’s a liar that lies every time or most of the time, then yeah, don’t bother.
why […] am I asking it questions?
I can’t actually think of any specific scenario where something is unimportant enough to not matter but important enough that you’d ask. What I was originally thinking of were actually scenarios where I planned to verify the information at a later time, but I mistook that in my head as not verifying it.
Hard to tell if you’re trolling or trying to add value to the conversation and just missing it.
A hammer doesn’t know what it is building but it is still useful.
This is the nature of tools: for some they improve output, for some they don’t.
Everyone’s a god damn tool philosopher.
Personally, I’m fine with banning cigarettes regardless of how responsibly my dead grandpa may have used them.
Do you not cross reference multiple archived news articles and seek out past attendees to remind yourself of what Britney Spears wore at her last concert? smh
I asked ChatGPT to review my resume and make changes tailored for the job description I was applying to (which I also gave it). Also told it that this was an internal position and not really an upgrade, but a sidestep that (I felt at the time) was more aligned with my long-term career goals.
I was really happy with the improvements it suggested.
Didn’t get the job, but as I understand it, the hiring manager wanted to bring in a friend of his from the moment he posted it.
In retrospect, I’m kinda glad that’s how it panned out. The new guy and I are operationally equal, and he’s incredibly competent. We complement each other well, and get along great.
And he’s friends with the big boss and thus has his ear.
The main reason I even applied to the job was because I wouldn’t want to work with anyone but myself in that job. And he’s close enough.
I did get an interview for it, which ultimately just became a 1:1 with the boss and it gave us a chance to talk openly about where I see deficiencies that need fixing. All in all it went great. Six or so months later and I’m feeling a renaissance in the air at work. Like the things I talked about with him are now front-and-center and getting the attention they needed.
This company moves quite slowly, so six months (basically, new fiscal) seems incredibly quick.
Any usage that isn’t massively more efficient than the non-LLM way is unethical due to resource consumption. I.e., if a regular search engine would do the trick, using an LLM just because you can is unethical.
I don’t hate AI; I like it for a lot of things. It’s especially awesome at writing one-off scripts/small projects or letting me test multiple different versions of a programming solution to see which one I like most. I use it to search the web and search my own knowledge database. It’s an awesome tool. I get why people would hate having it shoved down their throats all the time, but that’s not a problem I ever have to deal with cos almost everything I run is a full FOSS stack.
I could stomach the ills that come with AI if you could use it (as implemented in all the crap it’s in) without selling your soul to the data harvesters.
The mudslide of AI slop on YouTube is like digital gangrene, the brainrot has gone down the stem into the organs. We’re done as a species.
To be fair, it’s a genius robot slave.
Billionaires everywhere are shooting into the sky on rocket exhaust composed of their own semen.
If that worked I’d have been near Jupiter halfway through high school.
Personally I don’t hate it, but I just think there’s really no urgent need for it. If they’re using it to take jobs away from people, well, what is everyone going to do for work? Do the billionaires think there’s going to be a gigantic human die-off and they’re going to be an elite class of 100,000 people served by intelligent robots? If that’s their angle, good luck.
They all grew up on classic sci-fi. And, being dumbasses, they took Solaria as an aspiration rather than a warning.
Where’s the graphic?
Annnnnd bought