• Wanderer@lemm.ee · +10/-9 · edited · 3 months ago

    Humans can drive with just vision.

    Tesla is doing it the hard way. Their approach is cars with cameras only, driving the way humans do. Humans can do it, so why can’t computers, especially since the cars have more than two cameras? In theory they should be better than human drivers, and once the problem is solved they could instantly drive anywhere humans can.

    Waymo has taken the easier route: detailed pre-built maps plus an assortment of additional sensors such as lidar and radar. Even doing it the easy way, Waymo has only recently gotten there. It turns out it’s really hard, probably harder than everyone, including the experts, expected.

    But with advances in computing and things like LLMs, Tesla is catching up. Who knows how long that will take, though? I always thought Waymo was doing the right thing, so I’m biased.

    Edit: this fucking website, I swear. I answered the question and got downvoted for it. What more do you people want from me?

    • IllNess@infosec.pub · +10/-1 · 3 months ago

      Human vision also comes with a brain that does a lot of automation, like judging distance and watching for danger with real-time reaction speed. Night vision is usually better for most people too. The brain also combines vision with sound, so it can detect things outside the field of view. Eyes already cover a wide field of view, and the head can move around accurately on top of that. Above all, focus is what the human brain is best at. Cameras may see 360°, but years of data built into the subconscious have taught a human driver what to look out for.
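
      To give a rough sense of what “judging distance” from vision alone involves, here is a minimal stereo-triangulation sketch in Python. The focal length, camera baseline, and disparity below are made-up illustrative values, not the specs of any real eye or car camera.

      ```python
      # Minimal stereo depth sketch: the same point seen from two cameras,
      # depth recovered from the pixel shift (disparity) between the views.
      # All numbers are illustrative assumptions, not real hardware specs.

      def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
          """Pinhole stereo relation: depth = focal * baseline / disparity."""
          if disparity_px <= 0:
              raise ValueError("point must be visible in both views")
          return focal_px * baseline_m / disparity_px

      # Assumed 700 px focal length, 0.3 m between cameras, 5 px disparity.
      print(f"estimated depth: {stereo_depth(700, 0.3, 5):.1f} m")  # -> 42.0 m
      ```

      Two human eyes do the same kind of triangulation with a roughly 6 cm baseline, plus a lifetime of learned context layered on top.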

      • ContrarianTrail@lemm.ee · +3 · 3 months ago

        Human vision also comes with a brain that does a lot of automation, like judging distance and watching for danger with real-time reaction speed.

        To be fair, the reaction time of a self-driving vehicle is orders of magnitude shorter than that of even the best human driver.

        This is what leads to many of the moral questions about autonomous vehicles: whereas a human may not have time to react when an accident is about to happen, a self-driving car does. The laws of physics may prevent it from stopping in time, but it may still have the ability to choose who to hit: the kid or the grandmom.
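
        To put rough numbers on that (illustrative assumptions, not measured figures for any real vehicle), the usual reaction-distance-plus-braking-distance calculation looks something like this:

        ```python
        # Stopping distance = distance covered while reacting + distance covered
        # while braking. Reaction times and deceleration are assumed values.

        def stopping_distance(speed_kmh: float, reaction_s: float, decel_ms2: float = 7.0) -> float:
            v = speed_kmh / 3.6                      # speed in m/s
            reaction_dist = v * reaction_s           # travelled before the brakes even apply
            braking_dist = v ** 2 / (2 * decel_ms2)  # v^2 / (2a) once braking starts
            return reaction_dist + braking_dist

        for label, reaction_s in [("attentive human, ~1.5 s", 1.5), ("hypothetical AV, ~0.3 s", 0.3)]:
            print(f"{label}: {stopping_distance(50, reaction_s):.1f} m to stop from 50 km/h")
        # Faster reaction shrinks only the first term; the braking term is the same
        # physics for both, which is the "laws of physics" limit mentioned above.
        ```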

        • IllNess@infosec.pub · +2 · 3 months ago

          The reports of the safety of AVs are overstated when you consider that they are limited to city boundaries, they rarely go on the highway, they follow city speed limits, which are lower than highway speeds, people are more aware of AVs, and during their trial runs they had an actual human in the car to correct them.

          On average, AVs are safer, especially when you consider that some bad drivers never get better, people drink, people get sleepy, people distract themselves, and young drivers lack experience. But the average driver with their full faculties would do better in tests based solely on reactions.

          If you looked at the accident reports and took out drivers who were on a substance, were younger than 25 or older than 70, were distracted by something like their phones or other people in the car, were not following the law, or were emotional, then the stats would be pretty close.

          Overall I do believe AVs are better for the world, because peak performance from an average driver is rare.

          • ContrarianTrail@lemm.ee · +2/-1 · 3 months ago

            But the average driver with their full faculties would do better in tests based solely on reactions.

            React faster than a computer would? I cannot imagine how that would be the case.

            • IllNess@infosec.pub · +2/-1 · 3 months ago

              If it were a simple flag, you would be correct that a computer will react faster than any human. But when you factor in everything else, like constant analysis of the surroundings, decision making, and accounting for physical limitations, then yes, the human can come out ahead. It’s the reason why Waymo cars move so slowly.

              If a person were standing on a sidewalk, hidden behind an object, far from a crosswalk or traffic signal, and jumped 2 feet in front of a car going 25 mph, the average driver with their full faculties would do better than Waymo.

              • ContrarianTrail@lemm.ee · +2/-1 · 3 months ago

                Well yeah, right now that may still be the case, but I was mostly thinking about the “true” self-driving cars of the future. It seems obvious to me that they would vastly outperform human drivers on literally everything, just like a true AGI would.

    • ContrarianTrail@lemm.ee · +2/-1 · 3 months ago

      Not only that, but as far as I know, other companies are still relying on human-written code, whereas Tesla has gone with neural nets. If it turns out that manually coding how to handle every possible variation of a traffic scenario is an impossible task, those companies would essentially have to start from scratch, giving Tesla a massive lead for having adopted AI so much earlier. Of course it’s a gamble, and things could go the other way too, but considering the leap FSD made from version 1.3 to 1.4, when they switched to neural nets, I’m rather confident they’re on the right track.
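
      As a toy illustration of the difference between the two approaches (this is not anyone’s actual FSD or Waymo code, just a sketch), a hand-written rule versus a learned policy might look like this:

      ```python
      # Toy contrast: explicit hand-coded rule vs. a stand-in for a learned policy.
      # Purely illustrative; no relation to any real self-driving stack.
      import random

      def rule_based_brake(obstacle_dist_m: float, speed_ms: float) -> bool:
          # Engineers enumerate the cases by hand; every new scenario means
          # another explicit, reviewable rule.
          return obstacle_dist_m < speed_ms * 2.0  # brake if obstacle within ~2 s

      class TinyLearnedPolicy:
          """Stand-in for a neural net: behaviour comes from trained weights,
          not from rules anyone wrote down."""
          def __init__(self, weights=None):
              # In a real system these would come from training on driving data.
              self.weights = weights or [random.uniform(-1, 1) for _ in range(2)]

          def brake(self, obstacle_dist_m: float, speed_ms: float) -> bool:
              score = self.weights[0] * obstacle_dist_m + self.weights[1] * speed_ms
              return score < 0.0  # learned threshold, opaque to inspection

      print(rule_based_brake(20.0, 15.0))           # deterministic and explainable
      print(TinyLearnedPolicy().brake(20.0, 15.0))  # depends entirely on the weights
      ```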

      • ForgotAboutDre@lemmy.world · +1 · 3 months ago

        A nondeterministic system is dangerous. A deterministic system with flaws can be better: the flaws can be identified, understood, and corrected, and they are more likely to show up during testing.

        Machine learning is nearly always going to be nondeterministic. If they then use continuous training, the situation only gets worse.

        If you use machine learning because you can’t work out how to solve the problem directly, then you’ll never understand how the system works, and you’ll never be able to pass a basic inspection test.
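
        As a toy example of that inspection problem (entirely hypothetical, not any real certification procedure), compare a fixed rule with a model whose behaviour shifts as it is retrained:

        ```python
        # The same "inspection" run against a fixed rule and against a model
        # whose effective threshold drifts with continuous retraining.

        def rule_based_brake(obstacle_dist_m: float) -> bool:
            return obstacle_dist_m < 30.0  # fixed rule: identical answer on every run

        class RetrainedModel:
            """Stand-in for a model whose weights keep moving with new data."""
            def __init__(self, threshold_m: float):
                self.threshold_m = threshold_m  # imagine this shifting after each retrain

            def brake(self, obstacle_dist_m: float) -> bool:
                return obstacle_dist_m < self.threshold_m

        def inspection_test(brake_fn) -> bool:
            # "Basic inspection": the vehicle must brake for an obstacle 25 m ahead.
            return brake_fn(25.0)

        print(inspection_test(rule_based_brake))            # True, and still True next year
        print(inspection_test(RetrainedModel(28.0).brake))  # True today...
        print(inspection_test(RetrainedModel(22.0).brake))  # ...False after a retrain shifts behaviour
        ```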