Facial recognition’s latest foe: Italian knitwear

At first glance, the sweater looks straight out of The Cosby Show: colorful swirls, wild textures, a kind of abstract collage of green, red and yellow. But this knitwear has a secret mission: to fool facial recognition software.

Rachele Didero, the founder of Italian fashion tech startup Cap_able, wanted her clothes and designs to “have a function” beyond fashion. And the resulting Manifesto collection is a range of sweaters, hoodies, t-shirts and pants, all part of an experiment in opposing artificial intelligence. She’s trying to create a blind spot in those all-seeing facial recognition systems that have become a staple for surveilling public spaces around the world.

Because this kind of AI learns by processing millions of images to identify objects in the real world, Didero created patterns that trick the technology into misidentifying what it sees. “These clothes confuse the algorithm” with patterns it doesn’t expect, she said.

With her clothing line, Didero has created something of an AI head fake: she hides patterns and shapes that trick the AI into misidentifying people.

The idea came from a conversation between Didero and some of her friends at the Fashion Institute of Technology, a design school in New York. They talked about how facial recognition software is expanding and if they could do anything about it.

A Cap_able hoodie hides adversarial images of zebras to trick the AI.

It turned out one of her friends, an engineer from India, had been working on a kind of adversarial patch designed to intentionally trick AI programs into seeing something that isn’t there.

“And then I thought, okay, maybe we can do something together,” she said.

They created a clothing line full of hidden animals, people and other distracting shapes that act like bright, shiny objects for facial recognition algorithms to latch onto.

The clothing line is trying to get facial recognition to identify “zebras, elephants, giraffes or dogs,” Didero said.

To see how effective the designs really were, Cap_able tested the clothing with a deep learning algorithm called YOLO, which identifies and classifies objects. The Manifesto Collection is not foolproof. Didero said her clothes have about a 60% success rate with YOLO, meaning the software recognizes a giraffe, for example, but not a human face.
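To get a feel for what a test like that might look like in practice, here is a minimal sketch of running a YOLO-style detector over a photo and listing what it sees. It assumes the open-source `ultralytics` Python package and a pretrained COCO model (`yolov8n.pt`); the image filename is a placeholder, and this is not Cap_able’s actual evaluation code.

```python
# Minimal sketch (assumptions: ultralytics package, pretrained COCO model,
# placeholder image path). Not Cap_able's real test harness.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained YOLO model, COCO classes

# Run detection on a hypothetical photo of someone wearing the garment.
results = model("photo_of_wearer.jpg")

for result in results:
    for box in result.boxes:
        class_name = result.names[int(box.cls)]  # e.g. "person", "zebra"
        confidence = float(box.conf)
        print(f"detected {class_name} with confidence {confidence:.2f}")

# In the article's terms, a "successful" garment would be a run where no
# "person" detection appears but classes like "zebra", "giraffe" or "dog" do.
```

On that reading, the roughly 60% success rate Didero cites would correspond to the share of test images in which the detector reports an animal rather than a person.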

“These adversarial digital images take all the attention away from the algorithm,” Didero said. “It’s like they’re drawn to these colorful patterns and see something that’s not there.”

Adversarial design

Traditionally, adversarial AI experiments have sought to improve AI, not to undermine it.

In 2017, a University of California, Berkeley professor named Dawn Song found a way to convince a self-driving car that a stop sign isn’t a stop sign after all. By placing stickers and tape in precise locations on the stop sign, she was able to convince the car’s image classifier that it was a “45 mph speed limit” sign instead.

“We wanted to see if an attacker could actually manipulate a physical object in a way that could fool the AI,” Song said.

And it worked. The experiment confirmed Song’s worst fears about AI and the many ways adversaries could exploit its vulnerabilities.

If AI is going to be used in the real world, Song argued, it needs to be tested. It has to be challenged. And it has to be resilient.

“We want to show that these models actually still have major flaws,” Song said, particularly in cases that could put people in physical danger.

As the facial recognition technology market booms, so does an industry working to undermine it.

Didero sees Cap_able’s clothing as just a tool in a broader struggle for privacy.

“I give people the opportunity not to be detected by this technology every moment,” she said.

On Thursday, five US senators called on the Transportation Security Administration to stop using facial recognition at airports, citing privacy concerns and a track record of disproportionately misidentifying Asian and African-American people.

“American civil liberties are threatened when the government deploys this technology extensively without sufficient evidence that the technology is effective among people of color and does not violate Americans’ right to privacy,” they wrote.
