Deleted

  • jerkface@lemmy.ca · 1 year ago

    It’s not so important to tell the difference between a human and a bot as it is to tell the difference between a human and ten thousand bots. So add a very small cost to passing the test that is trivial to a human but would make mass abuse impractical. Like a million dollars. And then when a bot or two does get through anyway, who cares, you got a million dollars.

    • darkrai9292@lemmy.ca · 1 year ago

      Yeah, this seems to be the idea behind mCaptcha and other proof-of-work-based solutions. I noticed the developers were working on adding that to Lemmy.
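
      Roughly, the idea looks like this (just a sketch of the concept, not mCaptcha’s actual protocol): the server hands the client a random challenge plus a difficulty, the client burns a bit of CPU finding a nonce whose hash clears the difficulty, and the server checks it with a single hash. One signup is cheap; ten thousand are not.

      ```python
      import hashlib
      import secrets

      def meets_difficulty(digest: bytes, zero_bits: int) -> bool:
          """True if the hash starts with at least `zero_bits` zero bits."""
          return int.from_bytes(digest, "big") >> (len(digest) * 8 - zero_bits) == 0

      def solve(challenge: bytes, zero_bits: int) -> int:
          """Client side: brute-force a nonce; expected cost ~2**zero_bits hashes."""
          nonce = 0
          while not meets_difficulty(
              hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest(), zero_bits
          ):
              nonce += 1
          return nonce

      def verify(challenge: bytes, nonce: int, zero_bits: int) -> bool:
          """Server side: a single hash, so verification stays cheap."""
          digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
          return meets_difficulty(digest, zero_bits)

      # One signup: server issues a challenge, client does the work, server verifies.
      challenge = secrets.token_bytes(16)
      nonce = solve(challenge, zero_bits=18)          # ~260k hashes on average
      assert verify(challenge, nonce, zero_bits=18)
      ```

      Tuning `zero_bits` (or issuing harder challenges to suspicious clients) is where the “small cost per human, huge cost per botnet” trade-off lives.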

  • CanadaPlus@lemmy.sdf.org · 1 year ago

    Any bot? That’s just impossible. We’re going to have to tie identity back to meatspace somehow eventually.

    An existing bot? I don’t think I can improve on existing captchas, really. I imagine an LLM will eventually tip its hand, too, like giving an “as an AI” answer or just knowing way too much stuff.

  • Bruce@lemmy.ml · 1 year ago

    Ask how much 1 divided by 3 is; then ask it to multiply the result by 6.

    If the result looks like 1.99999999998, it’s 99.999999998% a bot.
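
    Funnily enough, the tell depends on how the answerer rounds: IEEE doubles happen to round (1/3) × 6 back to exactly 2, while anything that truncates to a fixed number of decimal digits reproduces the giveaway. A quick Python illustration of the arithmetic (not a real screening tool):

    ```python
    from decimal import Decimal, getcontext, ROUND_DOWN

    # Plain doubles happen to land back on 2 exactly:
    print((1 / 3) * 6)                # 2.0

    # A naive calculator that truncates to 12 significant digits shows the tell:
    getcontext().prec = 12
    getcontext().rounding = ROUND_DOWN
    third = Decimal(1) / Decimal(3)   # 0.333333333333
    print(third * 6)                  # 1.99999999999
    ```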

    • Hudell@lemmy.dbzer0.com · 1 year ago

      I once worked as a third-party contractor at a large internet news site and got assigned a task to replace their current captcha with a partner’s captcha system. The new system would play an ad and ask the user to type the name of the company in that ad.

      In my first test I noticed that the company name was exposed in a public variable on the page, and I showed that to my manager by opening the dev tools and passing the captcha with just a couple of commands.

      His response: “no user is gonna go to that much effort just to avoid typing the company name”.
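
      For illustration, that kind of bypass doesn’t even need a browser: if the expected answer ships to the client, any bot can read it. (This is a hypothetical sketch; the URL, variable name, and page layout are made up, not the partner’s real system.)

      ```python
      import re
      import urllib.request

      # Fetch the signup page; the ad captcha’s answer sits right in the page source.
      html = urllib.request.urlopen("https://news.example.com/signup").read().decode()

      # e.g. the page contains something like:  var captchaCompanyName = "ExampleCorp";
      match = re.search(r'captchaCompanyName\s*=\s*"([^"]+)"', html)
      if match:
          answer = match.group(1)
          print("Scraped captcha answer:", answer)
          # A real bot would now just POST `answer` along with the signup form.
      ```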

    • Notyou@sopuli.xyz · 1 year ago

      I’m pretty sure you have to have 2 bots and ask 1 bot if the other bot would lie about being a bot… something like that.

  • Zamboniman@lemmy.ca · 1 year ago

    How would you design a test that only a human can pass, but a bot cannot?

    Very simple.

    In every area of the world, there are one or more volunteers, depending on population per 100 sq km. When someone wants to sign up, they knock on this person’s door and shake their hand. The volunteer approves the sign-up as human. For disabled folks, a subset of volunteers will go to them to do this. In extremely remote areas, various individual workarounds can be applied.

    • WaterWaiver@aussie.zone · 1 year ago

      This has some similarities to the invite-tree method that lobste.rs uses: you have to convince another existing user that you’re human to join. If a bot invites lots of other bots, it’s easy to tree-ban them all; if a human is repeatedly fallible, you can remove their invite privileges; but you still get bots in when they trick humans (lobsters isn’t handshakes-at-doorstep level by any margin).

      I convinced another user to invite me over IRC. That’s probably the worst medium for convincing someone that you’re human, but hey, humanity through obscurity :)
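
      The tree-ban part is simple to picture: every account remembers who invited it, so banning one bad inviter takes out the whole subtree. A toy sketch of the idea (not lobste.rs’s actual code):

      ```python
      from collections import defaultdict

      class InviteTree:
          """Toy invite tree: each user records their inviter, so a bad subtree
          can be banned in one sweep."""

          def __init__(self):
              self.invited_by = {}                 # user -> inviter (None for roots)
              self.invitees = defaultdict(list)    # inviter -> users they invited
              self.banned = set()

          def invite(self, inviter, new_user):
              self.invited_by[new_user] = inviter
              self.invitees[inviter].append(new_user)

          def tree_ban(self, user):
              """Ban a user and everyone they (transitively) invited."""
              stack = [user]
              while stack:
                  u = stack.pop()
                  if u not in self.banned:
                      self.banned.add(u)
                      stack.extend(self.invitees[u])

      # Example: one bot invites two more; banning the first removes all three.
      tree = InviteTree()
      tree.invite(None, "alice")
      tree.invite("alice", "bot1")
      tree.invite("bot1", "bot2")
      tree.invite("bot1", "bot3")
      tree.tree_ban("bot1")
      print(tree.banned)   # {'bot1', 'bot2', 'bot3'} (order may vary)
      ```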

  • ANGRY_MAPLE@sh.itjust.works · 1 year ago

    This is a bit out there, so bear with me.

    In the past, people discovered that if they applied face paint in a specific way, cameras could no longer recognize their faces as faces. With this information, you get a few (e.g. 4) different people. You take a clean picture of each of their heads from close proximity.

    Then, you apply makeup to each of them, using the same method that messes with facial recognition software. Next, take a picture of each of their heads from a little further away.

    Fill a captcha with pictures of the faces with the makeup. Give the end user a clean-faced picture, and then ask them to match it to the correct image of the same person’s face but with the special makeup.

    Mess around with the colours and shadow intensity of the images to make everyone’s picture match everyone else’s more closely, if you want to add some extra chaos to it. Go too far with this last bit, though, and it will keep everyone out.
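
    Assuming you already have the clean/painted photo pairs (the makeup and colour-tweaking work happens offline), serving the challenge itself is the easy part. Something like this hypothetical sketch, with made-up file names:

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class Person:
        name: str            # internal label, never shown to the user
        clean_photo: str     # path to the bare-faced photo
        painted_photo: str   # path to the photo with recognition-defeating makeup

    def make_challenge(people, options=4):
        """Pick a target person and a shuffled set of painted photos to match against."""
        chosen = random.sample(people, options)
        target = random.choice(chosen)
        painted = [p.painted_photo for p in chosen]
        random.shuffle(painted)
        answer = painted.index(target.painted_photo)
        return target.clean_photo, painted, answer

    def check(answer, user_choice):
        return user_choice == answer

    # Usage with made-up file names:
    people = [Person(f"p{i}", f"clean_{i}.jpg", f"painted_{i}.jpg") for i in range(10)]
    clean, options, answer = make_challenge(people)
    print("Match this face:", clean)
    print("Options:", options)
    ```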

    • lemmyvore@feddit.nl · 1 year ago

      Face recognition ability in humans varies wildly, unfortunately. And that’s without making it harder with face paint. Regular people can get completely fooled by simple things like glasses on/off or a different hairstyle (turns out Clark Kent was on to something after all).

      • Spzi@lemm.ee · 1 year ago

        Not sure if I want to know how you unlock your phone.

        Common methods are fingerprint detection, face recognition, iris/retina scanning.

        • lemmyvore@feddit.nl · 1 year ago

          “Not sure if I want to know how you unlock your phone.”

          They take a picture of a skid mark on their underwear. Perfectly clean and safe. A bit awkward when you’re paying at the supermarket.

  • anditshottoo@lemmy.world · 1 year ago

    The best tests I am aware of are ones that require contextual understanding of empathy.

    For example: “You are walking along a beach and see a turtle upside down on its back. It is struggling and cannot move; if it can’t right itself it will starve and die. What do you do?”

    Problem is the questions need to be more or less unique.

    • lazyplayboy@lemmy.world · 1 year ago

      "If I encounter a turtle in distress, here’s what I would recommend doing:

      Assess the situation: Approach the turtle calmly and determine the extent of its distress. Ensure your safety and be mindful of any potential dangers in the environment.

      Protect the turtle: While keeping in mind that turtles can be easily stressed, try to shield the turtle from any direct sunlight or extreme weather conditions to prevent further harm.

      Determine the species: If you can, identify the species of the turtle, as different species have different needs and handling requirements. However, if you are unsure, treat the turtle with general care and caution.

      Handle the turtle gently: If it is safe to do so, carefully pick up the turtle by its sides, avoiding excessive pressure on the shell. Keep the turtle close to the ground to minimize any potential fall risks.

      Return the turtle to an upright position: Find a suitable location nearby where the turtle can be placed in an upright position. Ensure that the surface is not too slippery and provides the turtle with traction to move. Avoid placing the turtle back into the water immediately, as it may be disoriented and in need of rest.

      Observe the turtle: Give the turtle some space and time to recover and regain its strength. Monitor its behavior to see if it is able to move on its own. If the turtle seems unable to move or exhibits signs of injury, it would be best to seek assistance from a local wildlife rehabilitation center or animal rescue organization.

      Remember, when interacting with wildlife, it’s important to prioritize their well-being and safety. If in doubt, contacting local authorities or experts can provide the most appropriate guidance and support for the situation."

      • dylanTheDeveloper@lemmy.world · 1 year ago

        I was gonna say point and laugh at god’s failure of a creation, because holy shit, why would you evolve into a thing that can die by simply flipping onto its back.

    • bitsplease@lemmy.ml · 1 year ago

      I don’t think this technique would stand up to modern LLMs, though. I put this question into ChatGPT and got the following:

      “I would definitely help the turtle. I would cautiously approach the turtle, making sure not to startle it further, and gently flip it over onto it’s feet. I would also check to make sure it’s healthy and not injured, and take it to a nearby animal rescue if necessary. Additionally, I may share my experience with others to raise awareness about the importance of protecting and preserving our environment and the animals that call it home”

      Granted, it’s got the classic ChatGPT over-formality that might clue in someone reading the response, but that could be solved with better prompting on my part. Modern LLMs like ChatGPT are really good at faking empathy and other human social skills, so I don’t think this approach would work.

      • lemmyvore@feddit.nl · 1 year ago

        “Modern LLMs like ChatGPT are really good at faking empathy”

        They’re really not; it’s just giving that answer because a human already gave it, somewhere on the internet. That’s why OP suggested asking unique questions… but that may prove harder than it sounds. 😊

        • bitsplease@lemmy.ml · 1 year ago

          That’s why I used the phrase “faking empathy”. I’m fully aware that ChatGPT doesn’t “understand” the question in any meaningful sense, but that doesn’t stop it from giving meaningful answers to the question - that’s literally the whole point of it. And to be frank, if you think a unique question would stump it, I don’t think you really understand how LLMs work. I highly doubt that the answer it spit back was copied verbatim from some response in its training data (which, btw, includes more than just internet scraping). It doesn’t just parrot back text as-is; it uses existing, tangentially related text to form its responses. So unless you can think of an ethical quandary which is totally unlike any ethical discussion ever posed by humanity before (and keep doing so for millions of users), it won’t have any trouble adapting to your unique questions.

          It’s pretty easy to test this yourself - do what writers currently do with ChatGPT: give it an entirely fictional context, with things that don’t actually exist in human society, then ask it questions about it. I think you’d be surprised by how well it handles that, even though it’s virtually guaranteed there are no verbatim examples to pull from for the conversation.

  • Jamie@jamie.moe · 1 year ago

    If you can use human screening, you could ask about a recent event that didn’t happen. This causes problems for LLMs: their training data isn’t recent, so anything recent won’t be well covered, and they can hallucinate. By asking about an event that never happened, you might get a confidently hallucinated answer full of details about something that doesn’t exist.

    I tried it on ChatGPT (GPT-4 with Bing) and it failed the test, so any other LLM out there shouldn’t stand a chance.

    • pandarisu@lemmy.world · 1 year ago

      On the other hand, you have insecure humans who make stuff up to pretend that they know what you are talking about.