I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in the news, in PR statements, etc.

Are there folks who work at companies – I’m especially interested in tech companies – that have a reasonable handle on AI’s practical uses and its limitations?

Where I work, there’s:

  • a dashboard of AI usage by team and individual, which will definitely not affect performance reviews in any way
  • a mandate last month to use one AI tool, and a new mandate this month to abandon that tool and adopt a different one
  • quarterly goals, almost every one of which has some amount of “with AI” in it
  • letters from the CEO asking which teams are using AI to implement features straight from ticket descriptions, or (inspired by the news) to use flocks of agents – asking for positives, with no mention of negatives
  • a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output… using AI
  • teammates writing code and designs and sending them for review without verifying functionality or pruning irrelevant portions, despite a statement that everyone is responsible for reviewing AI output

Is all the resistance to overuse of AI grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?

  • Rimu@piefed.social · 11 days ago

    I am employed by a tiny software dev shop that develops a few apps used in education. No AI at all, unless I proactively choose to use it and pay for it out of my own pocket.

  • fruitycoder@sh.itjust.works · 9 days ago

    We have tools to support AI deployment, are encouraged to use a paid API, and have integration with the office tools.

    That’s it. No expectation that it’s a new god we are awakening, like the OpenAI cultists push. No expectation that our jobs can be replaced by even the greatest models yet. Just quick low-stakes summaries, better autocomplete for code, and easy-listening TTS of meeting notes if we missed them.

  • danhab99@programming.dev · 10 days ago

    At my stock brokerage we keep talking about how we can bring AI to our customers, but we can’t do that without the compliance dude throwing a fit about “noooooooooooooo you can’t recommend trades to customers, ssstttoooooooooppppp then we become responsible for their decisions guuuyyyssss” (he doesn’t talk like that but it sounds like that to me)

    I recently brought up the idea of using AI for trade support and giving it all sorts of tools to access internal assets and help customers fix their accounts or figure out what happened to their order, shit like that.

  • hansolo@lemmy.today · 10 days ago

    If you look at the comments on YC Hacker News, it’s a relatively sane group of people RE: AI – usually skeptical early adopters with experience in the industry. It’s worth your time.

  • ExtremeDullard@piefed.social · 11 days ago

    My company is approaching AI like it has approached everything for the past 40 years: with extreme caution. It’s coming all right, but the engineers are carefully evaluating it for coding, and it certainly isn’t being rolled out recklessly.

    I’m one of several die-hards who flat-out refuse to use it - not so much because it’s AI, but because it’s provided by an American company - and my choice is respected. Our CEO sees old-timers like me as the fallback if AI ends up shitting the company’s bed.

    • Logi@lemmy.world · 11 days ago

      Have you checked if Mistral can generate code? When I’m back at a keyboard I’m going to see if it has an IntelliJ plugin.

      Edit: Yes

  • neidu3@sh.itjust.works · 11 days ago

    Not a tech company, but a petroleum exploration company, which involves a lot of tech. The petroleum industry in general is extremely conservative in terms of tech, in that older and proven technologies tend to stick around. For example, I often write data to magnetic tape.

    However, the industry doesn’t shy away from newer technologies where they do make sense. There is some AI at play, but it is limited in scope and only deployed where it adds value. Most of it happens on the processing side, so I don’t know much about it, but I get the impression it’s used in a similar manner to those headlines you see from time to time about AI predicting rectal cancer with 99% accuracy. Interpreting seismic survey data involves some geophysical wizardry that I’ve never quite understood - I just make sure the production servers offshore work.

    • leoj@piefed.social · 11 days ago

      Seems like large-scale data analysis and mathematics are the strong points of AI, if I understand the tools correctly - less ambiguity and less room for hallucinations.

      Do people agree?

      • neidu3@sh.itjust.works · 11 days ago

        Yeah, I think so. When you have a huge dataset with low signal to noise, AI tools seem pretty great.

      • CodexArcanum@lemmy.dbzer0.com · 11 days ago

        “Artificial Intelligence” is a very broad term that, within computer science, covers a range of techniques and tools broadly concerned with the study of “human-like behavior and impersonation.” Before the current fad of calling LLMs “AI”, the term was most often used in video games, covering techniques for pathfinding, decision making, reacting, seeming to speak, etc. Before that, pre-90s basically, “AI” had already undergone a few boom-and-bust cycles of hype with chess-playing machines and, as always, chatbots.

        In many fields, these same techniques and their descendants are being used to model, simulate, and predict. All of them have trade-offs and limitations; that’s what computer science is all about.

        • leoj@piefed.social · 11 days ago

          I do remember talking to chatbots on AIM back in the day, so I think I had a leg up on other people in already understanding that the technology has existed for decades, which made me more cautious about the claims.

          • chunes@lemmy.world · 11 days ago

            They made such a big leap so quickly, though. I remember even in 2018 thinking no bot would ever pass the Turing test.

    • Nighed@feddit.uk · 10 days ago

      For the size of data that oil exploration requires, tapes still make a lot of sense.

      They have higher density and they’re more shockproof. When you need to move masses of data around the world, writing it to tape and sticking it on a plane is still the fastest way to move it (probably - that may have changed, I guess).

      • neidu3@sh.itjust.works · 10 days ago

        Yup, I 100% agree. Tapes are often viewed as obsolete, but there is no more cost-effective or safer way to store petabytes of data than tape.

        Hell, at work I have a few live storage clusters measured in petabytes, and being responsible for them can be pretty stressful at times. Data loss isn’t just bad, it is fucking terrifying when it’s data that costs hundreds of thousands of dollars per day to collect.

        I have yet to experience data loss, but I breathe a sigh of relief for every batch of data that has been confirmed written to tape. Because once it is, I know that it is safe and no longer my responsibility.

        It’s written to two sets of tape at a time, both of which are read back to confirm data integrity, and once that’s confirmed, that’s when I know my live copy is officially no longer supposed to be a backup.

        One set of tapes is stored on board in case something stupid happens to the other set during transport to a literal mountain for storage. There the transported set is re-read and checksummed, confirming that the on-board set can be rewritten with the next dataset. (Yes, every dataset is written to tape twice.)
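
        Conceptually, the confirmation step is nothing exotic - it’s just hashing the live copy and what comes back off the tape, and refusing to release anything until both sets match. A minimal Python sketch of the idea (the paths are made up, and the real vendor tooling is a lot uglier):

          import hashlib

          def sha256_of(path, chunk_size=64 * 1024 * 1024):
              """Stream a file (or a tape read-back) and return its SHA-256 digest."""
              h = hashlib.sha256()
              with open(path, "rb") as f:
                  while chunk := f.read(chunk_size):
                      h.update(chunk)
              return h.hexdigest()

          def tape_copy_verified(live_path, readback_path):
              """A tape copy only counts once the read-back hashes identically to the live data."""
              return sha256_of(live_path) == sha256_of(readback_path)

          # Illustrative usage: both tape sets must verify before the live copy
          # stops being treated as the backup.
          live = "/cluster/seq_0042.dat"  # made-up path
          if tape_copy_verified(live, "/readback/set_a/seq_0042.dat") and \
             tape_copy_verified(live, "/readback/set_b/seq_0042.dat"):
              print("Dataset confirmed on both tape sets.")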

  • starlinguk@lemmy.world · 11 days ago

    I work at a renowned tech company that frequently reminds its employees that AI hallucinates. We do a lot of work for the army; a mistake caused by hallucinating AI would be a disaster.

    • EvilBit@lemmy.world · 11 days ago

      Meanwhile we’re just waiting until Hegseth accidentally turns a Bethesda-area Target into a smoking crater because he was drunk-Grokking and fucked up ordering an airstrike to cheer himself up after the mainstream librul media hurt his fee-fees.

  • Ю ⁂@nahe.social · 11 days ago

    @pageflight Small design company. We experiment with LLMs in different areas, but so far there are only marginal improvements and very few work-safe use cases. Totally not up to the hype.

  • 🌞 Alexander Daychilde 🌞@lemmy.world · 11 days ago

    I’m too old for this shit - too old for the original show, I mean, but for some reason, my brain wants to make that title work:

    Who works at a (tech) company that’s not delirious about AI?

    SPONGE! BOB! SQUARE! PANTS!

    It completely doesn’t work.

      • Widdershins@lemmy.world · 10 days ago

        I haven’t seen the whole show but I have been under the impression that SpongeBob and intelligence don’t cross paths very often.

      • 🌞 Alexander Daychilde 🌞@lemmy.world · 10 days ago

        Well, you put way more into it than I had, so I feel I have a refinement to give back as thanks - it just needs a single extra syllable. Perhaps:

        Who works for a place that just licks AI’s taint

        Now it scans. :)

  • mlg@lemmy.world · 11 days ago

    I worked at one that actually wasn’t too bad, except we had a peer-review system for client reports, and I was horrified to see how many people had such a poor grasp of English grammar that they just assumed the AI output was always correct and better than a human’s.

    And I don’t mean people whose second language was English; I mean native English speakers were giving me AI feedback to change sentences in ways that would completely change the context, or horribly maim phrases into past tense where the tense of the subject was very much important.

    I could easily ignore the changes from coworkers, but a handful of managers would then give performance feedback telling me to utilize AI and Grammarly to improve my report quality, even though all of their report feedback was utter garbage lol.

    On a related note, Grammarly can also go screw itself. That joke of a software suite still doesn’t hold a candle to Word 2007’s editor.

    • Crozekiel@lemmy.zip · 10 days ago

      I fucking hate Grammarly. And the modern Outlook webmail suggestions can go eat a bag of dicks as well.

  • gwl [he/him]@lemmy.blahaj.zone · 11 days ago

    who works at a tech company that’s not delirious about AI?

    -- OP

    I work at Tech Company that loves AI

    -- people with poor reading comprehension replying to this thread

  • ExLisper@lemmy.curiana.net · 11 days ago

    I work at a small software company. There is a push to use AI, but I would say in a reasonable way. It does speed up some tasks, but no one is vibe coding and pushing things without proper review. So far no one is tracking usage or pushing us to use it more. It’s just a new tool we’re encouraged to be familiar with and use reasonably.

  • kersploosh@sh.itjust.works · 11 days ago

    Medical device industry here. Some of our software and electrical engineers are using Claude as a sounding board for ideas, or as a starting point to find possible paths forward when they get stuck with a hard problem. Nobody trusts the model to give an accurate answer. Nobody is being encouraged to use AI models. At the end of the day, all work committed to a project is done by real humans with the normal review processes.

    Management is cautiously looking at potential uses for AI in our products, but there is a healthy dose of skepticism all around. If your machine is displaying diagnostic data to a doctor, there cannot be any question as to whether the machine is hallucinating.

    • mnemonicmonkeys@sh.itjust.works · 10 days ago

      Honestly, this is probably the best use case for LLMs.

      Tom Scott did something along these lines 2-3 years ago, where he fed a bunch of his video titles into an LLM and had it come up with 100 new names in a similar style. Most of the output sucked, a handful he had already done, and a few more sounded plausible but didn’t exist. But he got 8-10 that he could have turned into actual videos (doing all the work himself), and he even did so for a couple.
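
      For what it’s worth, that kind of throwaway experiment is only a few lines nowadays - here’s a rough sketch using the OpenAI Python client, where the model name, prompt wording, and placeholder titles are my own guesses rather than whatever Tom Scott actually used:

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        existing_titles = [
            "Placeholder title one",  # in the real experiment this would be
            "Placeholder title two",  # hundreds of actual video titles
        ]

        prompt = (
            "Here are some of my video titles:\n"
            + "\n".join(f"- {t}" for t in existing_titles)
            + "\n\nSuggest 100 new titles in the same style, one per line."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # arbitrary choice of chat model
            messages=[{"role": "user", "content": prompt}],
        )

        # Most suggestions will be junk or duplicates; a human still has to
        # pick the handful worth turning into real videos.
        print(response.choices[0].message.content)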

      The hallucination of AI can be used to help a human artist (or programmer, designer, scientist, etc.) make a new connection they couldn’t before, and they can then use that new connection to implement their new idea. But LLMs generally suck for anything more than that, and over-reliance on them slowly erodes people’s ability to think and create over time.

  • LifeInMultipleChoice@lemmy.world · 11 days ago

    We have AI built into some tools, I believe, but I have never been told I had to use them. The truth is they don’t work all the time for every situation, and the client is more worried about user data accidentally getting scooped up; they spend their time warning us never to enter any user’s information anywhere. Even so much as noting that a user said they have a limitation that explains why we performed a task in a non-standard fashion is completely not happening.

    So if someone said, “I am vision impaired,” someone reading our notes would probably be wondering: why the f didn’t they just do a, b, c? It would have been much easier. But they are worried that if those notes get integrated into something an AI gobbles up in the future, they could get sued over that user’s information somehow being linked back to them, as that could be considered medical data, I guess.

    The funny part is, if an AI does use that data for learning now, it may start trying to instruct or perform tasks based on highly inefficient solutions designed to assist with a specific disability.