ChatGPT’s latest challenger: The Supreme Court

Steps to the United States Supreme Court. (Image: Getty Images)

The tech industry is in the spotlight at the Supreme Court this week, as the justices consider two cases that could fundamentally change the way the internet works. Both cases turn on the question of liability – whether technology platforms are responsible for the harmful content they host and sometimes algorithmically promote.

During Tuesday’s hearing, Justice Neil Gorsuch pondered what it all means for the latest algorithmic innovation taking the tech world by storm: generative AI that can make recommendations to humans, including conversational chatbots like ChatGPT and Microsoft’s new Bing chatbot.

The question arose during oral argument in Gonzalez v. Google, which specifically asks whether platforms – such as YouTube, TikTok, or Google Search – can be held liable for the targeted recommendations made by their algorithms. The case was brought by the relatives of Nohemi Gonzalez, a 23-year-old American woman who was killed in Paris in 2015 when three ISIS terrorists opened fire on a crowd at a restaurant.

Gonzalez’s relatives claimed that Google, which owns YouTube, knowingly allowed ISIS to post radicalizing videos on YouTube that incited violence and recruited potential supporters. Beyond allowing the videos to be posted, the complaint alleges that Google “recommended ISIS videos to users” via its recommendation algorithm.

The case boils down to whether algorithmic recommendations – when YouTube suggests what to watch next, or ChatGPT tells you where to go on vacation – are protected by Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996. That law shields online platforms from liability for content posted by third parties.

While the case centers on social media recommendations, Gorsuch connected the discussion to generative AI, the Washington Post noted. As reported by The Post’s Will Oremus, Gorsuch suggested that generative AI would not qualify for Section 230 protections.

“Artificial intelligence generates poetry,” Gorsuch said. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected. Let’s assume that’s right. Then the question becomes, what do we do about recommendations?”

This could have significant implications for companies like Google and Microsoft as they seek to integrate conversational chatbot recommendations into their search engines.

Of course, the internet has changed dramatically since Section 230 was written in 1996, and in recent years legislators have been grappling with how the law should be updated. While social media sites have borne the brunt of the criticism for pushing the law’s boundaries, the integration of chatbot recommendations into search engines will no doubt raise further liability questions.
