“The future ain’t what it used to be.”

-Yogi Berra

  • 2 Posts
  • 174 Comments
Joined 3 years ago
Cake day: July 29th, 2023

  • I’m gaming out the realistic consequences of what a law like this will mean. Whether or not you approve of these companies has nothing whatsoever to do with trying to understand the consequences of such a law passing. You don’t get to pick and choose whether the speech that gets limited comes from an LLM, a company, or an individual. There is no difference from a legal perspective.

    And this law, and this approach of limiting speech to “protect people” from the stupid consequences of their own actions, aren’t new. We already know the consequences: large corporate entities will just get around them or pay an inconsequential fine, and individuals will have their rights curtailed as a result.

    The entire thread here is falling for an incredibly obvious astroturfing campaign because people associate LLMs with big bad corporations and the real harm those companies have wreaked. But limiting free speech on the internet won’t stop them; what it will stop is our ability to communicate and resist them.



  • Wikipedia, Google, chatgpt etc are not legal authorities or legal professionals.

    Yes. And neither are LLMs or their derivatives.

    The reason it’s dangerous to get legal or health information from a chatbot is the same reason you wouldn’t want to randomly trust reddit.

    And yet people do, and we accept that as a necessary consequence of maintaining free speech as a principle.

    The exact arguments being accepted in this thread are the same which led directly to crackdowns in Hungary, China, and Russia.

    If you are okay with limiting and regulating LLMs as a form of speech, I promise it’s your speech which will end up limited, and a very small number of companies will control all speech on the internet. You should stop.






  • Wikipedia isn’t giving you advice, it’s giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

    Okay, let’s try this then:

    Chatbots aren’t giving you advice, they’re giving you information. There is a big difference between me taking information and forming an opinion, versus being given an opinion by a system that is responding to a specific situation explained to it.

    Show me the difference.

    Also, people get in trouble for giving legal advice,

    No, they don’t, unless they are genuinely misrepresenting their credentials. Sovcit influencers are well within their rights to make up all kinds of gobbly-gookey-garbage pseudo-legal advice.

    People who get in trouble are those that follow the gobbly-gookey-garbage pseudo-legal advice.


  • I don’t think you are wrong, but again, that’s not the case.

    You’re making an argument about speech here.

    Let’s say you make a fan website based entirely on a fine-tuned LLM which acts and responds as James Spader’s character from Boston Legal. Are you liable if a user of that website construes that speech as legal advice?

    If you are willing to give up access to speech so easily, I have almost no hope for Americans in the near future.

    What laws like this do is create an incredibly high-pass filter that favors those in positions of established power. It’s literally suicidal with regard to freedom of speech on the internet.

    The right answer is that if you are dumb enough to have gotten your legal advice from an AI hallucination of James Spader, you get to absorb those consequences. The wrong answer is to tell people they aren’t allowed to build fan websites of James Spader giving questionable legal advice.


  • Wikipedia doesn’t give “legal advice”, it has information about these laws, with the sources cited.

    That is very different than asking an LLM anything and having it throw random bullshit at you from unknown sources, with no easy way to verify where it is from or if it is at all accurate.

    It seems like your argument is that because Wikipedia “gets it right” and has cited sources, it isn’t liable? Which, I promise, is not how liability works.

    What if it was Wikipedia versus “some random sovcit Facebook post” then? Is the sovcit post liable because its sources are bullshit? Since their sources are random bullshit and/or unknown, do they absorb liability? Again, it’s the same case; that is not how liability works.

    People are going to have to acknowledge you can’t have it both ways.

    Also…

    with no easy way to verify where it is from or if it is at all accurate.

    C’mon. Plenty of LLMs also provide sources, hallucinated or not, which are easily verified. And like with Wikipedia, one could go check them.