I’m asking for public policy ideas here. A lot of countries are enacting age verification now. But of course this is a privacy nightmare and is ripe for abuse. At the same time though, I also understand why people are concerned with how kids are using social media. These products are designed to be addictive and are known to cause body image issues and so forth. So what’s the middle ground? How can we protect kids from the harms of social media in a way that respects everyone’s privacy?

    • madnificent@lemmy.world · 12 days ago

      I’ll reply to this random one with that statement. There’s no winning move as a parent.

      Problem is being locked out. If your kid is the only one not on social media and all other kids are, your kid will be socially left out.

      All kids are on a chat platform you don’t support. What do you do? Disallow it and give them a social handicap that might scar them, or allow it and take the risk?

      The same goes for allowing images on other platforms. Since GDPR, schools seem to care. Yet if it’s a recording that will be put on social media, you can explain to your 4-year-old why they weren’t allowed to participate… It sucks.

      I don’t know what the right way forward is. I don’t think this is it. Something is needed though. We should at least signal what we find acceptable as a society. Bog-stupid rules which are trivial to circumvent might be good enough, or perhaps some ad campaigns like we did with smoking (hehe, if it’s for something we support then ads are good?).

      Regardless, the current situation clearly doesn’t work. It would be great if we could find and promote the least invasive solutions.

      • frostedtrailblazer@lemmy.zip · 12 days ago

        I feel that communicating your concerns with other parents and their school can help. I feel it can make sense to have some forms of socialization when they are in middle school or high school, but even then you’d want a pretty locked down system, imo.

        I feel that not every parent is going to let their kids use technology to talk to their friends, especially not all the time. That’s not how I grew up, and I was fine, developmentally speaking. As a parent you can also seek out other parents locally who live by a similar philosophy, so your kids have their kids as friends.

        • undeffeined@lemmy.ml · 11 days ago

          You’d be surprised what parents let their kids do. My little anecdotal sample contains mostly highly educated people, but most of them don’t place any restrictions on their kids’ screen time. They claim they’ve talked to their kids, who have assured them they don’t look at anything they’re not supposed to, but that’s just not what happens in reality.

          What really happens is that the kids with no restrictions engage with all the predatory bullshit on these platforms, nonstop. I can see it with my own eyes when my kid brings their friends over.

          Communication is key, but unfortunately the business model of these platforms is based on addiction, children are not equipped to deal with it, and parental controls are an essential component.

            • madnificent@lemmy.world · 11 days ago

            I believe the parent post is nicely sketching out what a “best” move is. I have seen no better approach myself. At the same time I see what you see. The best approach isn’t all that great. If you’re lucky and find the right people it could work. There’s a lot of luck involved there.

            That’s why I do think there should be some regulations indicating what is tolerated. It seems to me the parent poster may agree (and thus also with your take).

            Since GDPR you can tell the school you don’t want pictures on platforms you disagree with. You may miss out on seeing the photos, and you might come across as crazy, but you can (and you should). We were given a choice at the cost of extra paperwork and some limitations.

            Even without the addiction problem of these platforms we should nurture and find a good society around us. It’s a valid take to try and find likeminded people.

            I don’t think that’s the end of it. Given the state we’re in, the network effect, and the fragile ego of developing kids, I suppose we need a stronger push.

            AI-enforced age verification, or logins which allow you to be followed anywhere, is not the solution in my current opinion; that’s a different problem. The problem is the addictive and steering nature of the platforms, which seems hard to describe clearly in legal terms.

            I wonder how “these platforms” should be defined and what minimum set of limitations would give us and the children the necessary breathing space.

              • flamingleg@lemmy.ml · 11 days ago

              The minimum would be transparency for the algorithm. If users could see exactly what a social media algorithm is doing with their content feed, they would always have a way to identify and escape dark patterns of addiction.

              But this minimum itself would require powers to compel tech companies to give up what they would describe as intellectual property. Which would probably require a digital bill of rights?

              The most practical option would be to just ask your kids directly about the kinds of content they’ve been consuming and why. Dinner table conversations can probably reveal those dark patterns just as well.

  • GreenKnight23@lemmy.world · 10 days ago

    ban social media metrics and information trading/markets. make it a truly anonymous service like it was in the early 2000s.

    if protecting children was the point they would stop corporations from identifying all users and selling their identities/profiles online.

    but, protecting the children is NOT the point. the point is control of freedom of speech, or rather who gets to have the freedom of speech.

    • Fleur_@aussie.zone · 10 days ago

      Most people don’t want social media to be anonymous. They want to be themselves and connect with real people. How exactly is an anonymous tinder supposed to work?

  • ChristerMLB@piefed.social · 11 days ago

    Some of it can be accomplished by just setting universal demands for how social media works for all users:

    • ban targeted advertising
    • make it mandatory for companies to ensure algorithms don’t prioritize posts for making users angry, scared or depressed

    Stuff like that. These kinds of regulations don’t involve ID checks, and could take care of a big chunk of the problem.

        • Saledovil@sh.itjust.works · 9 days ago

          I figure a ban on targeted advertising would look like “ads are only allowed to change once a day, and everybody sees the same ads during that day”. Whereas currently, each time you load a website, there’s an impromptu auction to sell the ad spots. (Advertisers don’t actually have to pay until you click their ad.) So there would be less incentive to keep the user constantly engaged, as it would be enough if the user just visits regularly.
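
          That rotation could be as simple as keying the ad selection to the date alone, so every visitor gets the same answer. A toy sketch (the ad pool and slot count here are made up for illustration):

```python
import datetime
import hashlib

# Hypothetical daily ad rotation: every visitor sees the same ads on a
# given day, picked deterministically from the date, so there is no
# per-request auction and no per-user profile involved.
AD_POOL = ["bike_shop", "garden_center", "local_news", "rain_slicker"]

def ads_for_day(day: datetime.date, slots: int = 2) -> list[str]:
    # Hash the date to get a stable starting index for that day.
    digest = hashlib.sha256(day.isoformat().encode()).digest()
    start = int.from_bytes(digest[:4], "big") % len(AD_POOL)
    return [AD_POOL[(start + i) % len(AD_POOL)] for i in range(slots)]

# Every request on the same day gets the identical answer,
# and the selection only changes when the date does.
today = datetime.date(2025, 1, 1)
assert ads_for_day(today) == ads_for_day(today)
```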

  • UnspecificGravity@piefed.social · 12 days ago

    By getting rid of shitty corporate social media that makes money by exploiting people.

    This is like suggesting that the solution to protecting your kids from tigers roaming the street is to lock them in their rooms. Nah, just get rid of the fucking tigers.

    • ageedizzle@piefed.caOP · 12 days ago

      As long as corporate social media is closed source, it would be hard to know if a no-advertising policy is being fully adhered to. A good example of this is the class action lawsuit against Chrome’s incognito mode: for years, Chrome got away with collecting personal browsing data when people browsed in incognito mode despite insisting that they didn’t do that. Something similar might happen with social media. To get around that, there could be a legal requirement for social media to be open source. That might run into issues with intellectual property law though, and the lobbying against it would be so intense that I’m not sure if a law like that would ever pass without massive political will.

  • KingOfTheCouch@lemmy.ca · 11 days ago

    I like to think I’m a tech-savvy parent, and the amount of tooth-gnashing it takes to set up and maintain child accounts is incredible. I’m convinced the foxes guarding the henhouse are using dark patterns to make parents give up.

    Why can’t I just get a notification on my phone saying “Hey, kiddo wants to have screen time. Approve?”

    Hell, I’d love a notification saying “Kiddo started watching Mr. Blah.” If I got the notification and I didn’t want them watching that, I could block the video, or creator with a click. WHY ARE WE NOT AT THIS LEVEL OF CONVENIENCE?

    A LOT of these concerns would go away if phones/tablets/TVs had these simple controls. Move those privacy controls into the home and MAKE them so easy a neanderthal could operate them.

    If I have to add *.newsocialbook.com to a blocklist in my router, you can bet your damn ass that “LiveLaughLoveMom<3” is going to keep demanding that someone else do it for her.

    • LePoisson@lemmy.world · 10 days ago

      Capitalism. Everything you described costs money to create and maintain and it generates zero (or negative) profit. Most people aren’t going to want to pay for some sort of nanny toolkit.

      Don’t get me wrong, I agree with you and it should be like that. Our current systems are not going to bring that about though.

  • lemmy_outta_here@lemmy.world · 11 days ago

    Kill the engagement algorithm. Your feed should contain a chronological list of posts made by people you subscribe to. In one stroke you could end the doomscroll - not just for kids, but for everybody. Also, infinite scrolling should be banned.
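
    The proposal above fits in a few lines: a feed that is just “posts from accounts you follow, newest first”, with no ranking signal at all. A minimal sketch (the data shapes are made up for illustration):

```python
from dataclasses import dataclass

# A chronological, subscription-only feed: no engagement-based ranking,
# nothing from accounts you don't follow.
@dataclass
class Post:
    author: str
    timestamp: int  # e.g. seconds since epoch
    text: str

def chronological_feed(posts, subscriptions):
    followed = [p for p in posts if p.author in subscriptions]
    return sorted(followed, key=lambda p: p.timestamp, reverse=True)

posts = [
    Post("alice", 100, "hello"),
    Post("bob", 200, "news"),
    Post("mallory", 300, "ragebait"),
]
feed = chronological_feed(posts, subscriptions={"alice", "bob"})
# Newest followed post first; the unfollowed account never appears.
```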

  • Kissaki@feddit.org · 10 days ago

    The German passport allows services to verify age through NFC: you read your passport with your phone, and its validity is confirmed through an intermediary state service. All the service sees is confirmation that the age requirement is met. No name, no age, no address, no face.

    Some other countries have similar systems. It’s already an EU directive, to be implemented on a broader European level.

    • PeriodicallyPedantic@lemmy.ca · 10 days ago

      How would that work online? How would they confirm it’s your passport, and that it’s a real passport that was really scanned (instead of a browser plugin)?

      • Kissaki@feddit.org · 8 days ago

        1. Register as a service, with justification for why you need to read the fields or properties you say you need
        2. Upon acceptance, acquire a digital permission certificate
        3. Set up a server that handles communication with the ID
        4. For a request, prove you own the permission cert through a challenge sent by the ID document
        5. The ID document proves through a challenge to the server that it is what it is (a batch of produced ID documents uses the same private and public keys, so they are not personally identifiable / associable with an individual)
        6. The user enters a PIN so that the process can proceed
        7. Open a secured connection between the server and the ID document
        8. The server can request/challenge age verification, and the ID document answers with “is met”

        Admittedly, the Wikipedia page is not detailed/technical on step 8, but if you attempted a man-in-the-middle attack you could not succeed, because you can’t fake being a valid ID document; that is ensured by the challenge and private/public key cryptography.
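
        A toy model of steps 4–8 above. This is NOT the real eID protocol (which uses asymmetric challenge-response per BSI TR-03110); here a shared “batch key” stands in for the group key a production batch of documents shares, but it shows the anonymity property: the verifier learns only a signed boolean, never the birth date.

```python
import datetime
import hashlib
import hmac
import os

# Toy stand-in for the group key shared by a batch of ID documents,
# known also to the verification service. Illustrative only.
BATCH_KEY = os.urandom(32)

class IdDocument:
    def __init__(self, birth_year: int):
        self._birth_year = birth_year  # never leaves the "card"

    def answer(self, challenge: bytes, min_age: int, year: int):
        is_met = (year - self._birth_year) >= min_age
        # Bind the boolean result to the fresh challenge with the batch key.
        tag = hmac.new(BATCH_KEY, challenge + bytes([is_met]),
                       hashlib.sha256).digest()
        return tag, is_met

def verify(card: IdDocument, min_age: int, year: int) -> bool:
    challenge = os.urandom(16)  # fresh nonce defeats replay
    tag, is_met = card.answer(challenge, min_age, year)
    expected = hmac.new(BATCH_KEY, challenge + bytes([is_met]),
                        hashlib.sha256).digest()
    # The verifier learns only "is met" (plus proof the card is genuine).
    return hmac.compare_digest(tag, expected) and is_met

adult = IdDocument(birth_year=1990)
minor = IdDocument(birth_year=2015)
assert verify(adult, 18, 2025) is True
assert verify(minor, 18, 2025) is False
```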

        • PeriodicallyPedantic@lemmy.ca · 8 days ago

          I’ll need to look into it a bit more, but I’m skeptical that this will work in practice:

          How can they confirm that I’m the owner of the passport? How do you prevent them from selling the fields they requested, that have been uniquely linked to you? How do you prevent the government from keeping track of all the services you’re using?

  • DFX4509B@lemmy.wtf · 9 days ago (edited)

    Parental controls have been an effective tool for decades. In combination with actually watching over your kids, of course.

  • shaggyb@lemmy.world · 10 days ago

    Stop. Giving. Them. Phones.

    Stop whining. No they don’t need one. NO THEY DON’T.

    No.

    No they’re not special.

    No they’re not too busy. Neither are you.

    No iPad either.

    Stop. Shut up. No. Phones.

    • ErevanDB@lemmy.zip · 10 days ago

      I agree, if you limit “phones” to “smartphones and portable computers”. There are reasons to give a kid a small, no-internet dumbphone. But yes, don’t give kids unrestricted access to the family PC, and DEFINITELY don’t give them their own.

    • YeahIgotskills2@lemmy.world · 10 days ago

      That’s the tack I’m taking. My eldest goes to high school next year, and most of his peers are automatically getting a smartphone at that point. He’ll be 13. He can forget it. A dumbphone at a push, for safety. That’s it.

  • ameancow@lemmy.world · 10 days ago

    You can’t; however you frame this issue, there’s going to be a sacrifice. We all have to digest this.

    The best sacrifice you can make for the best outcome, though, is to limit your child’s screen time, AND ALSO YOUR OWN. Spend more time together and practice what you preach; you too are a child being harmed by social media.

  • Æ@piefed.social · 8 days ago

    I think we should reframe the question.

    How can we protect adults from the harms of not being able to post meaningless bullshit anonymously to online strangers we never agree with, without sacrificing everyone’s children’s mental stability?

    Maybe put children’s rights before adults’ rights. Adults had fun and got along fine without social media before the 2000s. I refuse to believe that we are no longer capable of that. Especially if it means kids get to go back to using the internet as a resource for homework, playing outside, and using their own imaginations. Adults too.

  • baller_w@lemmy.zip · 11 days ago

    I’m not in favor of providing ID for anything. If a service requires it, I won’t use that service. Also, I can’t think of a verification system like this that hasn’t been bypassed or exploited, so it’s largely an exercise in futility.

    However, a compelling alternative is to use your phone’s biometrics to perform a challenge and verification. Basically, your device acts as your ID, so sites never see it. I think this is far better than every website keeping a copy of your identity.

    • ageedizzle@piefed.caOP · 11 days ago

      Using biometrics is an interesting idea. It could be similar to Apple’s face-scan-to-unlock feature, where the model of your face never leaves your local device but can still be used as a second factor to access your banking, for example.

      • baller_w@lemmy.zip · 7 days ago

        Exactly this. If I had to choose between hundreds of third-party websites having my ID and my phone having it, I’ll take my phone.

        We already have very sophisticated ways of validating payment and passport information with our devices. Validating age could be as simple as a one-time registration procedure between the device and the identity issuer, validating that the device is held by a person “of age”, and that’s it. If that user later successfully completes a biometric challenge, then allow the activity.

        So web browsing goes from “I’m John Doe and here’s my ID proving it” to every site (which has HUUUUUGE PRIVACY ISSUES) to “This anonymous user is over 18; this one is over 21, this one’s not”.
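
        A toy sketch of that flow, with everything (names, keys, HMAC scheme) made up for illustration: the issuer checks ID and biometrics ONCE at enrollment and binds an “over 18” token to the device; afterwards, a site only ever learns a yes/no answer.

```python
import hashlib
import hmac
import os

# Key held by the hypothetical identity issuer / verification service.
ISSUER_KEY = os.urandom(32)

def enroll(device_id: str, verified_over_18: bool):
    # Issuer runs this once, after a real ID + biometric check.
    if not verified_over_18:
        return None
    return hmac.new(ISSUER_KEY, device_id.encode(), hashlib.sha256).digest()

def device_answers_challenge(token: bytes, challenge: bytes) -> bytes:
    # On a real phone this would be released only after the local
    # biometric unlock succeeds; the token never leaves the device.
    return hmac.new(token, challenge, hashlib.sha256).digest()

def site_checks(device_id: str, challenge: bytes, response: bytes) -> bool:
    # The verification service re-derives the token and checks the
    # response; the site learns only this boolean, never the ID.
    token = hmac.new(ISSUER_KEY, device_id.encode(), hashlib.sha256).digest()
    expected = hmac.new(token, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

token = enroll("phone-1234", verified_over_18=True)
challenge = os.urandom(16)
assert site_checks("phone-1234", challenge,
                   device_answers_challenge(token, challenge))
```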

        Also, if this practice of forcing websites to ID you continues, it will enable a renaissance in data mining. Right now companies see “actor is in ZIP code 90210; rain is in the forecast” and put the two together to show “maybe they need a new slicker”. That’s simplified, of course, but that’s basically the trick. You can use hundreds or thousands of these data points to paint an ever clearer picture of the person, but you never know exactly who they are. These ID laws are changing that rapidly.

        This also has the potential to be used for some very dark purposes. Example: said something on Instagram critical of the US President? You don’t get to vote because of some label.

        My position is still if the site or service requires my ID, then I don’t need it that badly.