I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in news, in PR statements, etc.
Are there folks who work at companies – I’m especially interested in tech – that have a reasonable handle on AI’s practical uses and its limitations?
Where I work, there’s:
- a dashboard of AI usage by team and individual, which will definitely not affect performance review in any way
- a mandate last month to use one AI tool, and a new mandate this month to abandon that tool and adopt a different one
- quarterly goals where almost every one has some amount of “with AI” in it
- letters from the CEO asking which teams are using AI to implement features straight from ticket descriptions, or (inspired by the news) to run flocks of agents – asking only for positives, with no mention of negatives
- a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output… using AI
- teammates writing code and designs and sending them for review without verifying functionality or pruning irrelevant portions, despite a statement that everyone is responsible for reviewing AI output
Is all the resistance to overuse of AI grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?
We have tools to support AI deployment, are encouraged to use a paid API, and have integration with the office tools.
That’s it. No expectation that it’s a new god we’re awakening like the OpenAI cultists push. No expectation that our jobs can be replaced by even the greatest models yet. Just quick low-stakes summaries, better autocomplete for code, and easy-listening TTS of meeting notes if we missed them.
At my stock brokerage we keep talking about how we can bring AI to our customers but we can’t do that without the compliance dude throwing a fit about “noooooooooooooo you can’t recommend trades to customers, ssstttoooooooooppppp then we become responsible for their decisions guuuyyyssss” (he doesn’t talk like that but it sounds like that to me)
I recently brought up the idea of using AI for trade support and giving it all sorts of tools to access internal assets and help customers fix their accounts or figure out what happened to their order, shit like that.
If you look at the comments on YC Hacker News, it’s a relatively sane group of people RE: AI – usually skeptical early adopters with experience in the industry. It’s worth your time.
I worked at one that actually wasn’t too bad, except we had a peer review system for client reports, and I was horrified to see how many people had such a poor grasp of English grammar that they just assumed the AI’s output was always more correct and better than a human’s.
And I don’t mean people whose second language was English – I mean native English speakers were giving me AI feedback to change sentences in ways that would completely change the context, or horribly maim phrases into past tense where the tense of the subject was very much important.
I could easily ignore the changes from coworkers, but a handful of managers would then give performance feedback telling me to utilize AI and Grammarly to improve my report quality, even though all of their report feedback was utter garbage lol.
On a related note, Grammarly can also go screw itself. That joke of a software suite still doesn’t hold a candle to Word 2007’s editor.
I fucking hate Grammarly. And the modern Outlook webmail suggestions can go eat a bag of dicks as well.
I’m too old for this shit - too old for the original show, I mean, but for some reason, my brain wants to make that title work:
Who works at a (tech) company that’s not delirious about AI?
SPONGE! BOB! SQUARE! PANTS!
It completely doesn’t work.
I’m not a lyricist, but this is at least closer…
Who works for a place that licks AI’s taint
Well, you put way more into it than I had, so I feel I have a refinement to give back as thanks - it just needs a single extra syllable. Perhaps:
Who works for a place that just licks AI’s taint
Now it scans. :)
I haven’t seen the whole show but I have been under the impression that SpongeBob and intelligence don’t cross paths very often.
who works at a tech company that’s not delicious about AI?
-- OP
I work at Tech Company that loves AI
-- people with poor reading comprehension replying to this thread
I required an outlet to bitch regardless of my ability to reed werds gud.
I’m sure I’m not the only one : D
Honestly, fair
I work at a small software company. There is a push to use AI, but I would say in a reasonable way. It does speed up some tasks, but no one is vibe coding and pushing things without proper review. So far no one is tracking usage or pushing us to use it more. It’s just a new tool we’re encouraged to be familiar with and use reasonably.
The one I work at went “all in” about a month ago. I started noticing a dramatic increase in garbage/nonsensical code at the end of last week. I didn’t make the connection between the two until Tuesday.
I’ve got a manager that usually listens and they asked me to try it and take notes because they know I’ll tell them the truth. … I’ve got a lot of examples prepped for our next meeting.
The hard part is definitively blaming LLMs because I don’t have time to track down every single commit and analyze it for LLM usage but there’s 100% a correlation.
We have offshore devs that I think found the copilot button in vscode recently…seeing lots of em dashes in code review today 🫠
Yeah, I wish git blame could highlight the lines written by Claude/Codex. Usually when I ask my colleagues ‘so did you use AI much for this one’ they will say yes. But it makes code review that much harder, especially when they then take my PR comments and feed them to the LLM, so I’m coding by playing telephone with a bot.
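Not blame-level granularity, but for what it’s worth, Claude Code stamps its commits with a `Co-Authored-By: Claude` trailer by default (assuming your colleagues haven’t stripped it), so a plain `git log --grep` at least lists which commits the agent touched. A minimal sketch, demoed in a throwaway repo:

```shell
# Demo: two commits, one carrying the trailer Claude Code adds by default.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty \
  -m "fix: handle nulls" \
  -m "Co-Authored-By: Claude <noreply@anthropic.com>"
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty \
  -m "chore: bump version"
# --grep matches the full commit message, so only the first commit shows:
git log --grep='Co-Authored-By: Claude' --oneline
```

It won’t catch anything pasted out of a chat window, of course – only commits where the tool was allowed to write the message.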
Unfortunately they’ll never do that because they’re owned by Microslop and they can’t allow any marring of AI’s reputation
Software company here. There’s a strong external push for us to shove AI into every corner of our UI, but so far we’ve largely kept it out.
The one place we are using it is a pretty strong use case (essentially sentiment analysis). We’ve had a chatbot in dev for a while, but are struggling to find a valid use case for it. I think most of us are hoping the AI craze dies down and suddenly our lack of AI is no longer a marketing point our competitors use against us.
Advertise your lack of AI – it will draw customers who are sick of the slop.
Government – great at research, terrible at generation. If you ask it to find and summarise laws and regulations, it does a great job: quotes info, can even generate reasonable overviews with a bit of handholding.
Ask it to generate anything that isn’t directly quoted in a specific doc and it goes WILD. Even with some solid training in prompt engineering, it makes you work for focused outputs unless you give it absolutely everything (data, prompt, target template, revision and scoring process). But once the workflow has been solidly validated a few times, I’d rate it “usable”.
My wife’s at a major video game company that, oddly enough, hasn’t gone crazy over AI. Since she’s in localization, she uses DeepL, which does some machine learning but isn’t really an LLM, and LLMs aren’t really being pushed on her since they’d be a downgrade. From what I can tell, their dev team is also just keeping things human-made, although they’re in Japan so that might contribute.
They aren’t saints – they did try to union-bust a few years back – but their stance on AI, as well as their creativity-first mentality and recent pay-raise guarantees and whatnot, kinda shows they’re paying attention.
My company uses copilot for code reviews. They encourage at least trying a number of other tools but do not require it. Some of our product does use LLMs for various things, though I don’t personally work on those.
I do worry about the environmental impacts and ethical concerns around training data (especially pirated data used with neither consent nor compensation) so I don’t use anything personally (aside from where some company has shoved it in somewhere).
I think that local models trained ethically can have a number of uses such as classification, data cleanup, and perhaps even checking code for security issues and exploits (I’m not sure if local models can do that yet or well).
I work at a big multinational company – not a software company, but I’m on an engineering team.
Leadership makes a lot of noises about AI.
The engineers can’t even use git competently. I’ve quietly suggested maybe we should focus on learning software fundamentals instead of chasing dreams, but no one here listens to me.
Our company leaders wanted a way to track the AI vibe-coded apps…
I run the company git server. They decided to have someone vibe-code a tracker instead. Everything needs to be entered manually, and a bunch of it can’t be changed unless you muck with the database directly.
I just use AI to fill in the stupid forms HR make us do and don’t verify its output because I don’t respect it. Kills 2 birds with 1 stone.
Please God, give me an AI agent that can watch the video and take the quiz for the yearly mandatory HR training.
My company has started using AI voices/figures in the videos. Like they weren’t bad enough already…
AI watching AI to AI some slop to satisfy the AI the HR is using. Ugh.
My company has some mandatory training videos they redid with AI. I don’t get it – none of the actual content was any different from last year’s video. They literally paid someone to redo the video with AI instead of just reusing the previous one.
It’s kinda the same thing as Coke’s AI Christmas commercials this past year. They could have run their old, classic commercials like Hershey’s Kisses does every year. Instead they paid to make new commercials with AI and pissed a bunch of people off.
I work at a startup that classifies and extracts data from often very fuzzy sources.
We are encouraged to use agents for development. We use models in our services for things like pinpointing Coca-Cola* cans in YouTube videos. We offer our customers LLMs to discover how Coca-Cola and Pepsi are presented on YouTube.
*Soda scenario imaginary. I don’t want to dox my niche, but it’s similar enough problems that we solve.