• Da_Boom@iusearchlinux.fyi · 10 points · 1 year ago

    Even if we somehow manage to create a sentient AI, it will still have to rely on the information it receives from the car's various sensors. If those sensors fail and it doesn't have the information it needs to do the job, it could still make a mistake due to missing or completely incorrect data; and even if it manages to realise the data is erroneous, it could flatly refuse to work. I'd rather keep people in the loop as a final failsafe, just in case that should ever happen.

    • wabafee@lemm.ee · +1/−3 · edited · 1 year ago

      I see your point on this, but when should a sentient AI be able to decide for itself? What makes it different from a human at that point? We humans rely on sensors too to react to the world. We make mistakes as well, even dangerous ones. I guess we just want to make sure this sentient AI is not working against us?

      • Da_Boom@iusearchlinux.fyi · 6 points · 1 year ago

        That's why you have layers of security. Humans have a natural instinct for this - usually we can tell if our eyesight is getting worse. And any mistake we make is most likely down to not noticing something or not reacting in time, which is exactly what the AI should be able to compensate for.

        The only time this is not true is when we have a medical episode, like a grand mal seizure or something. But everyone knows safety is always relative, and we mitigate that with redundancy. Sensors will have redundancies, and we ourselves are an additional redundancy. Heck, we could even put in sensors that monitor the occupants' vitals. That raises the question of privacy again, but really that's all we should need to protect against that case.

        A sentient AI, not counting any potential issues with its own sentience, would still have issues with suddenly failed or poorly maintained sensors. Usually when a sensor fails, it either zeros out, maxes out, or starts outputting completely erratic values.
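        Those failure modes can be sketched as a simple plausibility check. This is purely a hypothetical illustration, not any real automotive code - the function name, thresholds, and sample values are all made up:

```python
# Hypothetical plausibility check for one sensor channel.
# Thresholds (lo, hi, max_jump) are invented for illustration.

def looks_failed(readings, lo=0.5, hi=99.5, max_jump=20.0):
    """Flag the classic failure modes: stuck at zero, pegged at max,
    or erratic jumps between consecutive samples."""
    if all(r <= lo for r in readings):        # zeroed out
        return True
    if all(r >= hi for r in readings):        # maxed out
        return True
    jumps = [abs(b - a) for a, b in zip(readings, readings[1:])]
    if jumps and max(jumps) > max_jump:       # erratic output
        return True
    return False

print(looks_failed([0.0, 0.0, 0.1]))     # zeroed out -> True
print(looks_failed([41.9, 42.0, 42.2]))  # plausible  -> False
```

        The hard case is the one such a check misses by design: a sensor that fails while still producing readings inside the plausible range.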

        If any of those outputs look the same as normal readings, the failure can be hard for the AI to detect. We can reconcile sensor readings against our own human senses and tell when something has failed. A car only has its sensors to know what it needs to know, so if one fails, will it even be able to tell? Sure, sensor redundancy helps, but there is still a small chance that all the redundant sensors fail in a way the AI cannot detect, and in that case the driver should be there to take over.

        Again I will refer to aircraft systems: even if it's a one-in-a-billion chance, there have been a few instances where this has happened and the autopilot nearly pitched the plane into the ground or ocean, and the plane was only saved by the pilots taking over. In one of those cases a faulty sensor reported that the angle of attack was pitched too steeply up, so the stick-pusher mechanism tried to pitch the nose down to save the plane, when in fact the nose was already down. An autopilot, even an AI one, has no choice but to trust its sensors, as that's the only mechanism it has.

        When it comes to a faulty redundant sensor, the AI also has to work out which sensor to trust, and if it picks the wrong one, well, you're fucked. It might not be able to work out which sensor is more trustworthy…
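        As a toy illustration of that trust problem, here is a hypothetical median vote across three redundant sensors (the readings and tolerance are made up):

```python
from statistics import median

# Hypothetical 2-out-of-3 voting over redundant sensor readings.
# The tolerance value is invented for illustration.

def vote(readings, tolerance=2.0):
    """Take the median of redundant readings and flag any sensor
    that disagrees with the median by more than the tolerance."""
    m = median(readings)
    suspects = [i for i, r in enumerate(readings) if abs(r - m) > tolerance]
    return m, suspects

value, bad = vote([4.9, 5.1, 30.0])  # sensor 2 is the outlier
print(value, bad)                    # prints: 5.1 [2]
```

        Median voting works when only one sensor lies; if two sensors fail the same way, the vote throws out the one good sensor - which is exactly the wrong-pick scenario above.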

        We keep ourselves safe with layered safety mechanisms and redundancy, ourselves included. So if any one layer fails, the others can hopefully catch the failure.

        • wabafee@lemm.ee · +3/−1 · 1 year ago

          Wow, I appreciate the response - it must have taken a while to write.