
Microsoft lays off AI ethics and society team

Microsoft has laid off its entire ethics and society team within its artificial intelligence organization as part of recent layoffs affecting 10,000 employees across the company, Platformer has learned.

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is working to bring AI tools to the mainstream, current and former employees say.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to guide the company’s AI initiatives. The company says its overall investment in responsible AI work is increasing despite the recent layoffs.

“Microsoft is committed to building AI products and experiences safely and responsibly by investing in people, processes and partnerships that prioritize this,” the company said in a statement. “Over the past six years, we have increased the number of people on our product teams and Office of Responsible AI who, along with all of us at Microsoft, are accountable for putting our AI principles into practice. […] We appreciate the groundbreaking work Ethics & Society has done to support us on our continued responsible AI journey.”

However, employees said the Ethics and Society team played a crucial role in ensuring the company’s responsible AI principles are actually reflected in the design of the products it ships.

“Our job was to … create rules where there weren’t any.”

“People would look at the principles coming out of the Office of Responsible AI and say, ‘I don’t know how this applies,’” says one former employee. “Our job was to show them and to create rules where there weren’t any.”

In recent years, the team has developed a role-playing game called Judgment Call, which helped designers envision potential harms that AI could cause and discuss them during product development. It was part of a larger Responsible Innovation Toolkit that the team released publicly.

More recently, the team has worked to identify risks arising from Microsoft’s adoption of OpenAI technology across its product suite.

The Ethics and Society team was at its largest in 2020 when it had around 30 employees, including engineers, designers and philosophers. In October, the team was reduced to around seven people as part of a reorganization.

In a meeting with the team following the reorganization, John Montgomery, corporate vice president of AI, told employees that management had directed them to move quickly. “The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very, very high to take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed,” he said, according to audio of the meeting obtained by Platformer.

Because of that pressure, Montgomery said much of the team would be reassigned to other areas of the organization.


Some members of the team pushed back. “I’m going to be bold enough to ask you to reconsider this decision,” one employee said on the call. “While I understand there are business issues at play … what this team has always been deeply concerned about is how we impact society and the negative impacts that we have had. And they are significant.”

Montgomery declined. “Can I reconsider? I don’t think I will do that,” he said. “Unfortunately, the pressure remains the same. You don’t have the view I have, and I suppose you can be thankful for that. A lot of stuff gets ground into the sausage.”

However, in response to questions, Montgomery said the team would not be eliminated.

“It’s not that it’s going away — it’s that it’s evolving,” he said. “It’s evolving to put more energy into the individual product teams building the services and the software, which means the central hub that did some of the work transfers its skills and responsibilities.”

Most of the team members have been relocated within Microsoft. Afterward, the remaining members of the Ethics and Society team said the smaller crew made it difficult to achieve their ambitious plans.

The move leaves a fundamental gap in the holistic design of AI products, says one employee

About five months later, on March 6, the remaining employees were told to join a Zoom call at 11:30 a.m. PT to hear a “mission-critical update” from Montgomery. During the meeting, they were told that their team would be eliminated after all.

One employee says the move leaves a fundamental gap in terms of user experience and the holistic design of AI products. “The worst thing is that by doing this we have put business and people at risk,” they explained.

The conflict underscores an ongoing tension for tech giants building divisions committed to making their products more socially responsible. At best, they help product teams anticipate potential misuse of technology and fix any issues before they ship.

But they also have a role to say “no” or “slow down” within organizations that often don’t want to hear it — or to highlight risks that could cause legal headaches for the company if they surface in legal investigations. And the resulting friction sometimes boils over into the public eye.


In 2020, Google fired ethical AI researcher Timnit Gebru after she published a critical paper on the large language models that would explode in popularity two years later. The resulting uproar led to the departure of several other top leaders within the department and diminished the company’s credibility on responsible AI issues.

Microsoft focused on delivering AI tools faster than its competitors

Ethics and Society team members said they generally try to support product development. But they said that as Microsoft focused on shipping AI tools faster than its peers, the company’s leadership was less interested in the type of long-term thinking the team specialized in.

It’s a dynamic that needs close scrutiny. On one hand, Microsoft may now have a unique opportunity to clearly outperform Google in search, productivity software, cloud computing, and other areas where the giants compete. When it relaunched Bing with AI, the company told investors that every 1 percent of search market share it could take away from Google would result in $2 billion in annual revenue.

This potential explains why Microsoft has so far invested $11 billion in OpenAI and is currently scrambling to integrate the startup’s technology into every corner of its empire. The bet appears to be paying off early: the company said last week that Bing now has 100 million daily active users, a third of whom are new since the search engine’s relaunch with OpenAI’s technology.

On the other hand, everyone involved in the development of AI agrees that the technology poses powerful and possibly existential risks, both known and unknown. Tech giants have been at pains to signal that they take those risks seriously; Microsoft alone has three separate groups working on the issue, even after the elimination of the Ethics and Society team. But given the stakes, any cuts to teams focused on responsible work seem notable.

The elimination of the Ethics and Society team came just as the group’s remaining employees had trained their focus on arguably their biggest challenge yet: anticipating what would happen when Microsoft released tools built on OpenAI’s technology to a global audience.

Last year, the team wrote a memo detailing the brand risks associated with Bing Image Creator, which uses OpenAI’s DALL-E system to create images based on text prompts. The image tool launched in a handful of countries in October, making it one of Microsoft’s first public collaborations with OpenAI.


While text-to-image technology has proven extremely popular, Microsoft researchers have correctly predicted that it could also threaten artists’ livelihoods, as it allows anyone to easily copy their style.

“When testing Bing Image Creator, we found that images generated with a simple prompt containing only the artist’s name and a medium (painting, print, photography, or sculpture) were almost impossible to distinguish from the original works,” the researchers wrote in the memo.

“The risk of brand damage…is real and significant enough to warrant redress.”

They added: “The risk of brand damage, both to the artist and their financial stakeholders, and the negative PR to Microsoft resulting from artists’ complaints and negative public reaction, is real and significant enough to warrant redress before it damages Microsoft’s brand.”

Additionally, OpenAI updated its terms of service last year to give users “full ownership of the images you create with DALL-E.” The move worried Microsoft’s Ethics and Society team.

“When an AI image generator mathematically replicates images of works, it is ethically suspect to indicate that the person who submitted the prompt has full ownership of the resulting image,” they wrote in the memo.

Microsoft researchers compiled a list of mitigation strategies, including blocking Bing Image Creator users from entering living artists’ names as prompts and creating a marketplace to sell an artist’s work that would surface when someone searched for their name.

Employees say none of these strategies were implemented, and Bing Image Creator was rolled out in test countries anyway.

Microsoft says the tool was modified before launch to address concerns raised in the document and prompted additional work from its dedicated AI team.

But legal questions about the technology remain unresolved. In February 2023, Getty Images filed a lawsuit against Stability AI, makers of the AI art generator Stable Diffusion. Getty accused the AI startup of improperly using more than 12 million images to train its system.

The allegations echo concerns raised by Microsoft’s own AI ethicists. “It is likely that few artists have consented to allow their works to be used as training data, and likely that many are still unaware how generative tech allows variations of online images of their work to be produced in seconds,” the employees wrote last year.
