• 0 Posts
  • 27 Comments
Joined 3 years ago
Cake day: June 13th, 2023


  • Stop. It’s 2026, we don’t need to do this sex-shaming bullshit anymore. Giving blowjobs doesn’t make you a bad person. Being good at blowjobs doesn’t make you a bad person. Don’t lump Erika fucking Kirk in with people who are simply good at blowjobs. Erika Kirk is a bad person because she’s profiting off of her husband’s death. Erika Kirk is a bad person because she’s continuing her husband’s legacy of hate and bigotry. Blowjobs have nothing to do with it, and you’re muddying the waters by making them the focus.

  • You’re disingenuously equating your full-size sedan’s and your crossover’s gas tanks, and using that single piece of anecdotal (and completely unrelated) evidence to incorrectly imply that the drivers of sedans are going to suffer just as much as the dumbasses who still drive gas guzzlers.

    Subcompact and compact cars generally have 8–10 gallon tanks, midsize cars generally have 10–14 gallon tanks, and full-size cars generally have 14–18 gallon tanks. The middle of that range is actually 13 gallons, so I was off by a gallon. My b.

    I like how you limited your data specifically to American sedans to fit your narrative, though, despite neither of your cars being American, and despite American sedans not being even close to the top choice for sedan drivers, not even in America.


  • It needed the rules, and it needed carefully worded questions that matched the parameters set by the rules. I bet if the questions’ wording didn’t match your rules so exactly, it would generate worse answers. Heck, I bet if you gave it the rules, then asked several completely unrelated questions, then asked it your carefully worded rules-based questions, it would perform worse, because its context window would be muddied. Because that’s what it’s generating responses based on: the contents of its context window, coupled with stats-based word generation.

    I still maintain that it shouldn’t need the rules if it’s truly reasoning, though. LLMs train on a massive set of data; surely the information required to reason out the answers to your container questions is in there. Surely, if it can reason, it should be able to generate answers to simple logical puzzles without someone putting most of the pieces together for it first.