KB5077181 was released about a month ago as part of the February Patch Tuesday rollout. When the update first arrived, users reported a wide range of problems, including boot loops, login errors, and installation issues.

Microsoft has now acknowledged another problem linked to the same update. Some affected users see the message “C:\ is not accessible – Access denied” when trying to open the system drive.

  • Buddahriffic@lemmy.world
    1 day ago

    You know what’s going on inside the large companies hoping to cash in on the AI thing? All workers are being pushed to use AI, and goals are set targeting x% of all code written being AI-generated.

    And AI agents are deceptively bad at what they do. They are like the djinn: they will grant the letter of your request but not the spirit. E.g., they love to use helper functions, but they won’t necessarily reuse an existing helper instead of writing a new copy each time one is needed.

    Here’s a test that will show that, with all the fancy advancements they’ve made, they are still just advanced text predictors: pick a task, have an AI start it, and then develop it over several prompts, testing and debugging as you go (debugging via the LLM, too). Now ask the LLM to analyse the code it just generated. It will have a lot of notes.

    An entity using intelligence would use the same approach to write the code as it does to analyze it. Not so for an LLM, which is just predicting tokens with a giant context window. There is no thought pattern behind it, even when it predicts a “thinking process” before it acts. It just matches your prompt to the best fit among all the public git repos it was trained on, from commit notes and diffs, bug reports and discussions, Stack Exchange threads, and the like, all of which I’d argue is biased towards amateur and beginner programming rather than expert-level. Plus it now includes other AI-generated code.
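To make “advanced text predictor” concrete, here is a toy bigram predictor in Python (my own minimal sketch, nothing like a production LLM in scale): it picks the next word purely by how often it followed the previous one in the training text.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predicts the next word purely from
# co-occurrence counts in its training text. No reasoning, just frequency.
corpus = ("fix the bug in the helper function "
          "fix the test then fix the bug").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    return follows[word].most_common(1)[0][0]

predict("fix")  # "the": every occurrence of "fix" here was followed by "the"
```

Real models predict subword tokens with far richer context, but the mechanism is still “most plausible continuation”, not a plan.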

    So yeah, MS did introduce bugs in the past, even some pretty big ones (that was my original reason for holding back on updates, at least until the enshittification really kicked in). But now they are pushing what is pretty much a subtle bug generator on the whole company, so it’s going to get worse. Admitting it has fundamental problems would pop the AI bubble, though, so instead they keep trying to fix it with bandaids in the hopes that it’ll run out of problems before people decide to stop feeding it money (which still isn’t enough, but at least there is revenue).

    • ExperiencedWinter@lemmy.world
      15 minutes ago

      Now ask the LLM to analyse the code it just generated. It will have a lot of notes.

      Not only will it have a lot of notes; every time you ask it to analyze the code, it will find new ones. Real engineers are telling me this is a good code review tool, but it can’t even find the same issues reliably. I don’t understand how adding a bunch of non-deterministic tooling is supposed to make my code better.
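A toy simulation of that non-determinism (the `review()` stub is invented, not a real LLM API): each run surfaces a different subset of findings, so only the intersection of several runs is stable.

```python
import random

# Toy simulation of a non-deterministic reviewer; review() is an invented
# stub, not a real LLM API. Each run samples a different subset of findings,
# standing in for sampling randomness in a real model.
CANDIDATE_FINDINGS = [
    "unused import", "missing null check", "duplicated helper",
    "magic number", "broad exception", "shadowed variable",
]

def review(code: str, seed: int) -> set:
    rng = random.Random(seed)
    return set(rng.sample(CANDIDATE_FINDINGS, k=3))

runs = [review("example.py", seed) for seed in range(5)]
stable = set.intersection(*runs)  # only trust findings every run agrees on
```

Running the “reviewer” several times and keeping only the intersection is one workaround people use, but it also shows the problem: a deterministic linter never needs that.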

      • Buddahriffic@lemmy.world
        4 minutes ago

        Though on that note, I don’t think having an LLM review your code is useless. But if it’s code that you care about, read the response and think about whether you agree. Sometimes it has useful pointers; sometimes it is full of shit.

    • SoleInvictus@lemmy.blahaj.zone
      22 hours ago

      You’re spot on regarding how AI operates.

      AI is stupid story time!

      I recently helped a friend with a self-hosted VPN problem. He had been using a free trial of Gemini Pro to try to fix it himself but gave up after THREE HOURS. It never tried to help him diagnose the issue, but instead kept coming up with elaborate fixes with names that suggested they were known issues, like The MTU Traffic Jam, The Packet Collision Quandary, and, my favorite, The Alpine Ridge Controller Trap. Then it would run him through an equally elaborate “fix”. When that didn’t work, it would use the failure conditions to propose a new, very serious-sounding pile of bullshit, and the process would repeat.

      I fixed it in about fifteen minutes, most of that time spent undoing all the unnecessary static routing, port forwarding, and driver rollbacks it had him do. The solution? He had a typo in the port number in his peer config.
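The thread doesn’t name the VPN software, but assuming a WireGuard-style setup (which uses per-peer config sections), the entire three-hour failure can come down to one digit. The values below are hypothetical:

```ini
[Peer]
PublicKey = <server-public-key>
AllowedIPs = 0.0.0.0/0
# Server actually listens on 51820; the trailing "1" is the whole bug.
Endpoint = vpn.example.com:51821
```

A wrong endpoint port silently drops the handshake, which is exactly the kind of boring, local mistake that elaborate named “known issues” steer attention away from.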

      I can’t deny that LLMs are full of useful knowledge. I read through its output, and all of its suggestions absolutely would have quickly and efficiently fixed their accompanying issues, even the Thunderbolt/PCIe bridging one, if the real problem had been any of them. They’re just garbage at applying that information.

      • Buddahriffic@lemmy.world
        27 minutes ago

        Yeah, they don’t do analysis but can fool people because they can regurgitate someone else’s analysis from their training data.

        It could just be matching a pattern like “I have a network problem with <symptoms>. Your issue is <problem> and you need to <solution>,” where the problem and solution are related to each other but the problem isn’t related to the symptoms, because the correlation with “network problem” ends up being stronger than the correlation with the description of the symptoms.
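A toy bag-of-words retrieval sketch (entirely made up, far cruder than a real LLM) showing how the generic “network problem” boilerplate can dominate the match while the actual symptom words contribute nothing:

```python
# Toy bag-of-words retrieval: two canned diagnoses share the boilerplate
# "I have a network problem with ... Your issue is ... you need to ...",
# so the query's symptom words never influence which one is retrieved.
templates = {
    "mtu": "I have a network problem with large packets stalling. "
           "Your issue is MTU fragmentation and you need to lower the MTU.",
    "dns": "I have a network problem with names not resolving. "
           "Your issue is DNS misconfiguration and you need to edit resolv.conf.",
}
query = "I have a network problem with one peer never completing a handshake"

def overlap(a: str, b: str) -> int:
    # Count shared words between two texts (case-insensitive).
    return len(set(a.lower().split()) & set(b.lower().split()))

# Both templates tie: every shared word is boilerplate, none is a symptom.
scores = {name: overlap(query, text) for name, text in templates.items()}
```

Since the symptom words (“peer”, “handshake”) appear in neither template, the two diagnoses score identically and the pick between them is arbitrary, which matches the “related problem/solution, unrelated symptoms” failure described above.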

        And that specific problem could likely be solved just by adding a description of that process to the training data. But there will be endless different versions of it that won’t be fixed by that bandaid.