• Zombie@feddit.uk · 5 days ago

    Both Facewatch and Sainsbury’s point to the software’s “99.98% accuracy” – but Rajah suspects the margin of error is higher and has questions about the dataset behind this claim, and if it is representative of a range of body types and skin colours.

    99.98% looks good to a layman, but that number is meaningless in reality.

    Is that 0.02% error false positives or false negatives, or both?

    Also, 0.02% means 2 in every 10,000. It doesn’t take long for 10,000 people to go through the doors of Sainsbury’s every day, considering the UK population is about 65 million and they’re a nationwide company. Once this is rolled out nationwide they’re going to have constant false flags.
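
    To put numbers on both of those points, here’s a rough back-of-the-envelope sketch in Python. The daily footfall and watchlist figures are purely illustrative assumptions, and it reads the quoted 0.02% as a per-scan false-positive rate – none of which Facewatch or Sainsbury’s has confirmed.

    ```python
    # Back-of-the-envelope sketch; every number here is an assumption,
    # not something Facewatch or Sainsbury's has published.

    daily_footfall = 10_000        # assumed scans per busy store per day
    false_positive_rate = 0.0002   # the quoted 0.02%, read as a per-scan FP rate

    false_flags_per_day = daily_footfall * false_positive_rate
    print(f"Expected false flags per store per day: {false_flags_per_day:.0f}")  # ~2

    # Base-rate effect: if genuinely watchlisted people are rare among shoppers,
    # most alerts point at the wrong person even at "99.98% accuracy".
    watchlist_prevalence = 0.0001  # assumption: 1 in 10,000 shoppers is actually listed
    true_positive_rate = 0.9998    # assume the same 99.98% figure as the hit rate

    true_alerts = daily_footfall * watchlist_prevalence * true_positive_rate
    false_alerts = daily_footfall * (1 - watchlist_prevalence) * false_positive_rate
    precision = true_alerts / (true_alerts + false_alerts)
    print(f"Share of alerts that flag the right person: {precision:.0%}")  # ~33%
    ```

    On those assumptions, roughly two out of every three alerts would land on the wrong shopper – the headline accuracy figure tells you nothing about that on its own.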

    Scumbag oppressive tactics by a scumbag company.

    • halcyoncmdr@piefed.social · 5 days ago

      Yeah, 0.02% of 65 million is 13,000 possible errors.

      And that’s just based on the raw population; the accuracy rating could be based on the raw number of scans instead. A quick search shows Sainsbury’s serves 16 million customers a week. That’s 3,200 errors every week if the error rate applies per scan rather than per unique person.

        • Zombie@feddit.uk · edited · 5 days ago

          Indeed, it’s still a ridiculous number of errors though.

          65,000,000 × 0.02% = 13,000 possible errors

          16,000,000 × 0.02% = 3,200 errors every week

          3,200 / 7 ≈ 457 errors every day

          457 potentially pissed-off and put-off customers every day – how long is that sustainable?
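
          A quick sanity check of those figures in Python, again assuming the 0.02% applies once per customer served:

          ```python
          # Reproduces the arithmetic above, assuming the 0.02% error rate
          # applies once per customer and Sainsbury's ~16M customers a week.
          weekly_customers = 16_000_000
          error_rate = 0.0002               # 0.02%

          errors_per_week = weekly_customers * error_rate  # 3,200
          errors_per_day = errors_per_week / 7             # ~457
          errors_per_year = errors_per_week * 52           # 166,400

          print(f"{errors_per_week:,.0f} errors per week")
          print(f"{errors_per_day:,.0f} errors per day")
          print(f"{errors_per_year:,.0f} errors per year")
          ```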

          Never mind all the privacy implications of private companies holding large databases of people’s faces, their movements and purchases, and the potential to sell or use that data for nefarious purposes, etc.

    • mjr@infosec.pub · 5 days ago

      And the dataset is probably racist, although in the reported case it sounds like good old unreliable cross-race recognition by humans: the evil eye pinged because it spotted someone, and the store staff then told the wrong person to naff off. It seems like a process or training failure if staff don’t ask the evil eye to confirm they’ve got the person it actually flagged before upsetting them.