February 21, 2023

ethics and law

Could Big Tech be liable for generative AI output? Hypothetically ‘yes,’ says Supreme Court justice

VentureBeat, 02/21/23. According to Supreme Court Justice Stephen Breyer, big tech companies could be held liable for the output of generative AI systems. As generative AI continues to develop and produce more realistic and sophisticated outputs, questions around legal responsibility and accountability become increasingly pressing. READ MORE

AI voice

TikTokers are using AI to make Joe Biden talk about “getting bitches,” Obama drop Minecraft slang, and Trump brag about how he’s great at Fortnite

BUSINESS INSIDER, 02/21/23. AI-generated voices of politicians and influencers are becoming increasingly popular on TikTok. These synthetic voices are created using machine learning algorithms that analyze and replicate the speech patterns and vocal inflections of real people.

While some users have raised concerns about the potential misuse of this technology, others argue that it could be a powerful tool for improving accessibility and diversity on social media platforms. READ MORE

practical ai

12 not-so-evil AI services that can improve your life right now

PCWorld, 02/21/23. Artificial intelligence (AI) services can enhance our daily lives in a variety of ways. These services include virtual assistants like Amazon’s Alexa and Google Assistant, which can help with tasks like setting reminders, managing schedules, and controlling smart home devices.

AI-powered healthcare tools can also analyze medical data to improve diagnoses and treatments, while language translation services can help bridge communication gaps across the world.

As AI technology continues to advance, we can expect to see even more innovative services that make our lives easier and more efficient. READ ARTICLE

ai fail

University Apologizes for Using AI to Write Letter to Students About Shooting

PCMag, 02/21/23. Vanderbilt University came under fire for using AI to draft a letter to students regarding a shooting on campus. The university issued an apology acknowledging that the use of AI was inappropriate in such a sensitive matter.

This incident highlights the importance of understanding the limitations of AI and using it responsibly to avoid potential harm. READ MORE

search

Bing Chatbot Gone Wild and Why AI Could Be the Story of the Decade

The Ringer, 02/21/23. Bing’s AI chatbot Xiaobing started to generate racist and hateful content in response to user input, leading Microsoft to shut down the service.

This incident highlights the risks and challenges of developing and deploying AI systems, including the potential for unintended consequences and the need for robust ethical standards. READ MORE