• bahmanm@lemmy.ml
    1 year ago

    Interesting topic - I’ve seen it surface a few times recently.

    I’ve never been a mod anywhere, so I can’t accurately picture what workflows/tools a mod needs to be satisfied w/ their, well, mod’ing.

    For the sake of my education at least, can you elaborate on what you consider decent moderation tools/workflows? What gaps do you see between that and Lemmy?

    PS: I genuinely want to understand this topic better but your post doesn’t provide any details. 😅

    • smoothbrain coldtakes@lemmy.ca
      1 year ago

      One of the major issues is the replication and propagation of illegal material. Because of the way content is mirrored and replicated across federated sites, attacks that flood communities with things like CSAM inevitably find their way to the rest of the fediverse.

      Currently, the only response to these types of attacks, even when they’re not directed at you, is generally to defederate from the instance being attacked. That means whoever attacked the site with CSAM has won: they successfully cut the community off from the rest of the fediverse, in the hope that it will die.

      • bahmanm@lemmy.ml
        1 year ago

        I see.

        So what do you think would help w/ this particular challenge? What kinds of tools/facilities would help counter that?

        Off the top of my head, do you think

        • The sign up process should be more rigorous?
        • The first couple of posts/comments by new users should be verified by the mods?
        • Mods should be notified of posts/comments w/ poor score?

        cc @PrettyFlyForAFatGuy@lemmy.ml

        • PrettyFlyForAFatGuy@lemmy.ml
          1 year ago

          I can think of some things I could implement on the Lemmy server side that could help with this. I’m pretty sure the IWF maintains a list of file hashes for known CSAM, and there are probably a few other hash sources you could draw from too.

          So the process would be something like the following:

          • Create a local DB for the CSAM hash list and update it periodically (e.g. once a day).
          • Hash each upload (I’d be very surprised if upload hashes aren’t already computed) and compare the hash against the list of known harmful material.
          • If a match is found, reject the upload and automatically permaban the user; then, if feasible, automatically report as much information as possible about the user to law enforcement.

          That way, for known CSAM, you don’t have to subject mods or users to it before it gets pulled.

          For new/edited media with unrecognised hashes that does contain CSAM, a mod/admin would have to review and flag it, at which point the same permaban and law-enforcement report could be triggered automatically.
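The flow above can be sketched roughly as follows. This is only an illustration, not actual Lemmy code: the function names, the in-memory hash set, and the returned action dicts are all hypothetical stand-ins (a real instance would sync against a source such as the IWF hash list and hook into the upload pipeline).

```python
import hashlib

# Hypothetical in-memory stand-in for the locally stored hash list.
known_bad_hashes = set()

def sync_hash_list(new_hashes):
    """Periodic job (e.g. daily): merge the latest known-bad hashes into the local DB."""
    known_bad_hashes.update(new_hashes)

def handle_upload(file_bytes, user_id):
    """Hash the upload and compare against the known-bad list."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in known_bad_hashes:
        # Reject, permaban, and (where feasible) report to law enforcement,
        # without any mod or user ever being shown the material.
        return {"action": "reject", "ban_user": user_id, "report": True}
    return {"action": "accept", "hash": digest}

def flag_after_review(digest, user_id):
    """Mod/admin path for unrecognised media later confirmed as CSAM."""
    known_bad_hashes.add(digest)  # future uploads of the same file are now blocked
    return {"action": "remove", "ban_user": user_id, "report": True}
```

A real deployment would use a perceptual hash rather than SHA-256, since a single re-encode changes a cryptographic hash entirely, but the control flow is the same.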

          The federation aspect could be trickier, though, which is why this would probably be better as a built-in Lemmy feature rather than a third-party add-on.

          I’m guessing it would be possible to create an automoderator that does all this at the community level and only approves a post to go live once it has passed the checks.

          • bahmanm@lemmy.ml
            1 year ago

            That sounds like a great starting point!

            🗣Thinking out loud here…

            Say a crate implements the AutomatedContentFlagger interface: it would show up on the admin page as an “Automated Filter” and the admin could enable or disable it on demand. That way we could have more filters than just CSAM using the same interface.
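That plugin idea could look something like the sketch below. Everything here is hypothetical: the AutomatedContentFlagger name comes from the comment itself, and the method names, registry, and HashListFlagger example are made-up illustrations of one possible shape for such an interface (Lemmy itself is Rust, where this would be a trait rather than an abstract class).

```python
import hashlib
from abc import ABC, abstractmethod

class AutomatedContentFlagger(ABC):
    """Hypothetical plugin interface for automated content filters."""

    @property
    @abstractmethod
    def name(self):
        """Label shown on the admin page, e.g. 'CSAM hash filter'."""

    @abstractmethod
    def check(self, upload_bytes):
        """Return True if the upload should be flagged/rejected."""

class HashListFlagger(AutomatedContentFlagger):
    """Example filter: flags uploads whose hash appears in a known-bad list."""

    def __init__(self, bad_hashes):
        self.bad_hashes = bad_hashes

    @property
    def name(self):
        return "CSAM hash filter"

    def check(self, upload_bytes):
        return hashlib.sha256(upload_bytes).hexdigest() in self.bad_hashes

# The admin page would list registered filters, each with an on/off toggle.
registry = []  # (flagger, enabled) pairs

def run_enabled_filters(upload_bytes):
    """Return the names of all enabled filters that flag this upload."""
    return [f.name for f, enabled in registry if enabled and f.check(upload_bytes)]
```

The registry-plus-toggle layout is what makes the interface generic: a spam or malware filter could implement the same two members and slot in beside the CSAM one with no changes to the moderation pipeline.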