Facewatch, a major biometric security company in the UK, is in hot water after its facial recognition system wrongly identified a 19-year-old woman as a shoplifter.

    • Thassodar@lemm.ee · ↑10 · 2 months ago

      Shit, even the motion sensors on the automated sinks have trouble recognizing dark-skinned people! You have to show your palm to turn the water on most of the time!

    • nyan@lemmy.cafe · ↑8 · 2 months ago

      Technically, there’s a tendency for them to be trained on datasets that don’t include nearly enough dark-skinned people. As a result, they don’t learn to make the necessary distinctions. I’d like to think that the selection of datasets for training facial recognition AI has improved since the most egregious cases of that. I’m not willing to bet on it, though.
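
      To make that concrete, here's a toy audit of the kind of imbalance I mean (the group labels and counts below are made up for illustration):

      ```python
      from collections import Counter

      # Hypothetical training-set labels; in a poorly curated dataset the
      # per-group counts can be wildly uneven.
      samples = ["light"] * 80_000 + ["medium"] * 15_000 + ["dark"] * 5_000

      counts = Counter(samples)
      total = sum(counts.values())
      for group, n in counts.most_common():
          print(f"{group:>6}: {n:>6} images ({n / total:.1%})")

      # A model trained on this sees 16x more "light" faces than "dark" ones,
      # so it gets far less signal for learning to distinguish faces in the
      # minority group.
      ```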

    • CeeBee@lemmy.world · ↑6 ↓29 · 2 months ago

      No, they aren’t. This is a narrative that keeps getting repeated over and over, and the citation for it is usually the ACLU’s test of Amazon’s Rekognition system, which was deliberately flawed to produce this exact outcome (and people are still repeating the claim years later).

      The top FR systems have no issues with any skin tones or complexions.

        • CeeBee@lemmy.world · ↑6 ↓13 · edited · 2 months ago

          I promise you I’m well aware of all the studies, technologies, and companies involved. I worked in the industry for many years.

          The technical studies you’re referring to show that the difference in error rate between a white man and a black woman (usually the polar opposites in terms of results) is around 0.000001%. But that difference usually gets blown out of proportion by media outlets.

          If white men are at a 0.000001% error rate and black women are at 0.000002%, then what gets reported is “facial recognition for black women is 2 times worse than for white men”.

          It’s technically true, but in practice it’s a misleading and disingenuous statement.
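
          To spell out the arithmetic with those same illustrative rates:

          ```python
          # Toy arithmetic using the illustrative rates from above
          # (not measured values).
          white_men_error = 0.000001 / 100    # 0.000001% as a fraction
          black_women_error = 0.000002 / 100  # 0.000002% as a fraction

          # The ratio is what produces the "2x worse" headline...
          relative = black_women_error / white_men_error

          # ...but the absolute gap is about one extra error
          # per 100 million comparisons.
          absolute_gap = black_women_error - white_men_error

          print(f"relative ratio: {relative:.1f}x")
          print(f"absolute gap: {absolute_gap:.0e} (fraction of comparisons)")
          ```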

          Edit: here’s the actual technical report if anyone is interested

          https://pages.nist.gov/frvt/reports/1N/frvt_1N_report.pdf

          • AwesomeLowlander@lemmy.dbzer0.com · ↑11 · 2 months ago

            Would you kindly link some studies backing up your claims, then? Nothing I’ve seen online comes close to the numbers you’re claiming.

              • Richard@lemmy.world · ↑5 · 2 months ago

                It saddens me that you are being downvoted for providing a detailed, factual report from an authoritative source. I apologise on behalf of all of Lemmy for these ignorant people.

                • CeeBee@lemmy.world · ↑2 · 2 months ago

                  Ya, most upvotes and downvotes are entirely emotionally driven. I knew I’d get downvoted for posting all this; it happens on every forum, whether Reddit or Lemmy. But downvotes don’t make the info I share wrong.

                • CeeBee@lemmy.world · ↑2 · 2 months ago

                  Np.

                  As someone else pointed out in another comment, I’ve been quoting the x% accuracy figure incorrectly. It’s just a colloquial way of conveying accuracy. The truth is that no one in the industry uses “percent accuracy”; we use FMR (false match rate) and FNMR (false non-match rate), along with some other metrics.
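
                  For anyone unfamiliar with those metrics, here's a minimal sketch of how FMR and FNMR fall out of comparison scores (the scores and threshold below are made up):

                  ```python
                  # Hypothetical similarity scores, for illustration only.
                  impostor_scores = [0.10, 0.35, 0.62, 0.20, 0.15]  # different-person pairs
                  genuine_scores = [0.91, 0.88, 0.42, 0.95, 0.77]   # same-person pairs
                  threshold = 0.60  # scores >= threshold are declared a "match"

                  # FMR: fraction of impostor comparisons wrongly accepted as matches
                  fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

                  # FNMR: fraction of genuine comparisons wrongly rejected
                  fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)

                  print(f"FMR = {fmr:.2f}, FNMR = {fnmr:.2f}")  # FMR = 0.20, FNMR = 0.20
                  ```

                  Raising the threshold trades one error for the other: fewer false matches, more false non-matches. That's why a single "accuracy" number hides more than it tells.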