OK, it’s just a deer, but the future is clear. These things are going to start killing people left and right.

How many kids is Elon going to kill before we shut him down? What’s the number of children we’re going to allow Elon to murder every year?

  • bluGill@fedia.io · 13 days ago

    Driving is full of edge cases. Humans are also bad drivers who get edge cases wrong all the time.

    The real question isn’t whether Tesla is better or worse than any one driver in particular, but how Tesla compares overall. If a Tesla is better in some situations and worse in others, and so overall just as bad as a human, I can accept it. If Tesla is overall worse, then they shouldn’t be driving at all (if they can identify those situations, they can stop and make a human take over). If a Tesla is overall better, then I’ll accept a few edge cases where they are worse.

    Tesla claims they are better overall, but they may not be telling the truth. One would think regulators have the data for this, but they are not talking about it.
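
    Concretely, “overall” here means crashes per mile across the whole fleet, not any single incident. A rough sketch of that comparison in Python, with every number made up purely for illustration:

    ```python
    # Toy comparison of fleet-wide crash rates. All figures are invented.
    human_crashes, human_miles = 5_000_000, 2_900_000_000_000  # hypothetical
    tesla_crashes, tesla_miles = 1_500, 1_300_000_000          # hypothetical

    human_rate = human_crashes / human_miles * 1_000_000  # crashes per million miles
    tesla_rate = tesla_crashes / tesla_miles * 1_000_000

    print(f"humans: {human_rate:.2f} crashes per million miles")
    print(f"teslas: {tesla_rate:.2f} crashes per million miles")

    # Even this naive version is biased: FSD miles skew toward highways and
    # clear weather, so the two pools of miles aren't directly comparable
    # without controlling for road type and conditions.
    ```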

    • snooggums@lemmy.world · 13 days ago

      Tesla claims they are better overall, but they may not be telling the truth. One would think regulators have the data for this, but they are not talking about it.

      https://www.reuters.com/business/autos-transportation/nhtsa-opens-probe-into-24-mln-tesla-vehicles-over-full-self-driving-collisions-2024-10-18/

      The agency is asking if other similar FSD crashes have occurred in reduced roadway visibility conditions, and if Tesla has updated or modified the FSD system in a way that may affect it in such conditions.

      Between this and being threatened with fines last year for not providing the data, it sure seems like they aren’t being very forthcoming with their data. That makes me suspect they still aren’t telling the truth.

      • Billiam@lemmy.world · 13 days ago

        Between this and being threatened with fines last year for not providing the data, it sure seems like they aren’t being very forthcoming with their data. That makes me suspect they still aren’t telling the truth.

        I think their silence is very telling, just like their alleged crash test data on Cybertrucks. If your vehicles are that safe, why wouldn’t you be shoving that into every single selling point you have? Why wouldn’t that fact be plastered across every Gigafactory and blaring from every Tesla that drives past on the road? If Tesla’s FSD is that good, and Cybertrucks are that safe, why are they hiding those facts?

        • snooggums@lemmy.world · 13 days ago

          If the Cybertruck were so safe in crashes, they would be begging third parties to test it so they could smugly lord their third-party-verified crash test data over everyone else.

          But they don’t, because they know it would be a repeat of smashing the bulletproof window on stage.

      • atempuser23@lemmy.world · 13 days ago

        One trick used is to disengage Autopilot when it senses an imminent crash. That vastly lowers the crash count, shifting all blame to the human driver.
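
        How much the counting window matters is easy to show. If I remember right, NHTSA’s reporting order counts a crash as ADAS-involved if the system was in use within 30 seconds of impact, precisely to close this loophole. A toy Python illustration with made-up events:

        ```python
        # Toy illustration of the disengage-before-impact loophole. Events invented.
        # Each entry: seconds before impact that autopilot disengaged
        # (None = it was never engaged on that trip).
        crashes = [0.4, 1.2, None, 0.8, 25.0, None, 2.5, 0.1]

        def counted(disengage_s, window_s):
            """Count as autopilot-involved if active within window_s of impact."""
            return disengage_s is not None and disengage_s <= window_s

        for window in (0, 5, 30):
            n = sum(counted(t, window) for t in crashes)
            print(f"attribution window {window:>2}s: {n} of {len(crashes)} crashes counted")

        # With a 0-second window ("engaged at the exact moment of impact"), most
        # of these crashes vanish from the stats even though the system bailed
        # out less than a second before hitting something.
        ```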

    • ano_ba_to@sopuli.xyz · 13 days ago

      Being safer than humans is a decent starting point, but safety should be maximized to the best of a machine’s capability, even if it means adding a sensor or two. By that logic, leaving screws loose on a Boeing airplane would be fine, since the plane would still be safer than driving, so Boeing shouldn’t have to take responsibility.

    • Semi-Hemi-Lemmygod@lemmy.world · 13 days ago

      Humans are also bad drivers who get edge cases wrong all the time.

      It would be so awesome if humans only got the edge cases wrong.

      • xthexder@l.sw0.com · 13 days ago

        I’ve been able to get demos of autopilot in one of my friend’s cars, and I’ll always remember autopilot correctly stopping at a red light, followed by someone in the next lane over blowing right through it several seconds later at full speed.

        Unfortunately, “better than the worst human driver” is a bar we passed a long time ago. From recent demos I’d say we’re getting close to the “average driver,” at least in clear visibility conditions, but I don’t think even that’s enough to put actually driverless cars on the road.

        There were over 9M car crashes and almost 40k deaths in the US in 2020, and it would be insane to just decide that’s acceptable for self-driving cars as well. No company is going to want that blood on their hands.
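
        Back-of-the-envelope on those numbers (the deaths figure is from above; the mileage figure is my rough recollection of 2020 US vehicle-miles travelled):

        ```python
        # Rough arithmetic on the human baseline. Mileage is an approximation.
        deaths = 40_000
        miles = 2_900_000_000_000  # ~2.9 trillion vehicle-miles, approximate

        rate = deaths / miles * 100_000_000
        print(f"{rate:.2f} deaths per 100 million miles")  # ~1.38

        # "As good as the average human" already means roughly one death every
        # ~70 million miles. A large driverless fleet would hit that constantly,
        # which is why matching the average isn't a comfortable target.
        ```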

    • atempuser23@lemmy.world · 13 days ago

      Yes. The question is whether the Tesla is better than anyone in particular. People are given the benefit of the doubt once they pass the driver’s test. Companies and AI should not get that benefit. The AI needs to be as good as or better than a GOOD human driver. There is no valid justification for allowing a poorly driving AI just because it’s better than the average human. If we are going to allow these on the road, they need to be good.

      The video above is HORRID. The weather was clear, there was no opposing traffic, and the deer was standing still. The self-driving system absolutely failed.

      If a human driving in these conditions plowed through a deer at 60 mph and didn’t even attempt to swerve or stop, they shouldn’t be driving.

    • NeoNachtwaechter@lemmy.world · 13 days ago

      If a Tesla is better in some situations and worse in others, and so overall just as bad as a human, I can accept it.

      This idea has a serious problem: THE BUG.

      We hear this idea very often, but you are disregarding the problem with a programmed solution: it makes its mistakes all the time. Infinitely.

      Humans are also bad drivers who get edge cases wrong all the time.

      So this is not exactly true.

      Humans can learn, and humans can tell when they’ve made an error and try to do it differently next time. And all humans are different. They make different mistakes. This tiny fact is very important. It secures our survival.

      The car does not know when it has made a mistake, for example when it has killed a deer, or a person, and smashed its windshield and bent lots of its metal. It does not learn from it.

      It would do it again and again.

      And all the others would do exactly the same, because they run the same software with the same bug.

      Now imagine 250 million people having 250 million Teslas, and then comes the day when each one of them decides to kill a person…
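
      The difference can be put in numbers. A toy model in Python, with the probability invented for illustration:

      ```python
      # Toy model: independent human mistakes vs. one bug shared by a fleet.
      # All numbers are invented.
      p_fail = 1e-6        # chance one driver mishandles a specific rare scenario
      n = 250_000_000      # drivers (or cars) exposed to that scenario

      # Humans fail independently, so failures stay scattered and self-limiting.
      print(f"independent: expect ~{n * p_fail:,.0f} scattered failures")

      # Identical software either has the bug or it doesn't. If it does, every
      # exposed car fails the same way until a patch ships.
      bug_present = True
      print(f"correlated:  {n if bug_present else 0:,} identical failures at once")
      ```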

      • bluGill@fedia.io · 12 days ago

        Tesla can detect a crash and send the last minute of data back so all cars learn from it. I don’t know if they do, but they can.
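
        The general shape would be a rolling buffer that gets frozen and uploaded when a crash is detected. This is only a guess at the design, not Tesla’s actual pipeline:

        ```python
        # Sketch of crash-triggered telemetry: keep a rolling one-minute buffer
        # of frames and upload a frozen copy when a crash is detected. Generic
        # design guess, not Tesla's implementation.
        from collections import deque

        FRAME_HZ = 10
        BUFFER_SECONDS = 60

        class CrashRecorder:
            def __init__(self):
                # Fixed-size deque: old frames fall off as new ones arrive.
                self.frames = deque(maxlen=FRAME_HZ * BUFFER_SECONDS)

            def on_frame(self, frame):
                self.frames.append(frame)

            def on_crash_detected(self):
                snapshot = list(self.frames)  # freeze the last minute
                self.upload(snapshot)

            def upload(self, snapshot):
                print(f"uploading {len(snapshot)} frames for fleet-wide training")

        recorder = CrashRecorder()
        for t in range(1200):             # two minutes of driving at 10 Hz
            recorder.on_frame({"t": t})
        recorder.on_crash_detected()      # uploads only the final 600 frames
        ```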

        • NeoNachtwaechter@lemmy.world · 12 days ago

          I don’t know if they do, but they can.

          "Today on Oct 30 I ran into a deer but I was too dumb to see it, not even see any obstacle at all. I just did nothing. My driver had to do it all.

          Grrrrrr.

          Everybody please learn from that, wise up and get yourself some LIDAR!"

    • scarabic@lemmy.world · 12 days ago

      Yeah, there are edge cases in all directions.

      When people want to say that something is very rare, they should say “corner case,” but that term doesn’t seem to have made it out of QA lingo and into the popular lexicon.

    • AA5B@lemmy.world · 13 days ago

      Given that they market it as “supervised”, the question only has to be “are humans safer when using this tool than when not using it?”

      One of the cool things I’ve noticed since recent updates is the car giving a nudge to help me keep centered, even when I’m not using Autopilot.