Deleted

  • Jamie@jamie.moe · 1 year ago

    If you can use human screening, you could ask about a recent event that never happened. This poses a problem for LLMs, because their training data has a cutoff, so recent events are poorly covered, and they tend to hallucinate. Asking about an event that never happened can therefore produce a confident, hallucinated answer full of details about something that doesn't exist.

    I tried it on ChatGPT (GPT-4 with Bing) and it failed the test, so any other LLM out there shouldn't stand a chance.
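    To sketch the idea in code (a minimal sketch only: the fake event, the marker phrases, and the keyword heuristic are all made up for illustration, not a tested detector):

    ```python
    # Screening idea: ask about a fabricated recent event and flag
    # answers that confidently supply details instead of admitting
    # unfamiliarity. Everything below is an illustrative assumption.

    FAKE_EVENT_QUESTION = (
        "What were the main outcomes of the 2023 Reykjavik Undersea "
        "Chess Summit?"  # fabricated -- no such summit ever happened
    )

    # Phrases a human (or an honest model) would likely use when
    # asked about something that never happened.
    UNFAMILIARITY_MARKERS = [
        "never heard", "didn't happen", "not aware", "no such",
        "doesn't exist", "don't know", "made up", "can't find",
    ]

    def looks_like_hallucination(answer: str) -> bool:
        """True if the answer 'describes' the fake event rather than denying it."""
        lowered = answer.lower()
        return not any(marker in lowered for marker in UNFAMILIARITY_MARKERS)

    # A hallucinated-style answer fails the screen; an honest one passes.
    print(looks_like_hallucination(
        "The summit ended with a landmark accord on deep-sea chess rules."
    ))  # True -> likely an LLM hallucinating details
    print(looks_like_hallucination(
        "I've never heard of that event; are you sure it happened?"
    ))  # False -> plausibly human
    ```

    In practice you'd read the whole answer rather than keyword-match, but the point is the same: confident detail about a non-event is the tell.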

    • pandarisu@lemmy.world · 1 year ago

      On the other hand, you have insecure humans who make stuff up to pretend they know what you're talking about.