March 8, 2023

national security

AI ‘wild west’ raises national security concerns 

The Hill, 03/08/23. The rise of generative AI tools has raised concerns about national security risks. Experts worry that these tools may be abused by malicious actors or go awry through commercial use.

While the technology offers increased efficiency and reduced costs, dependence on it poses major risks, especially if that dependence is concentrated on a single type of system. Data privacy concerns surrounding generative AI underscore why experts have called for stronger restrictions.

The rapid evolution of generative AI has left policymakers playing catch-up, and guidelines and policy approaches will have to adapt quickly. READ MORE

implementation

From marketing to design, brands adopt AI tools despite risk

The Seattle Times, 03/08/23. Companies are increasingly adopting AI tools, especially generative AI.

Mattel uses OpenAI’s DALL-E to generate new ideas for Hot Wheels. CarMax summarizes customer reviews with ChatGPT. Coca-Cola plans to use generative AI for its marketing content.

Experts caution businesses to consider potential risks to customers and to their reputation before adopting these tools. A safer approach is to treat the AI as a brainstorming partner, leaving humans to create the final product.

Amazon is partnering with startup Hugging Face to develop ChatGPT rivals like Bloom. Hugging Face hosts a platform for sharing open-source AI models for text, image, and audio tools. This practice promotes transparency, mitigates bias, and enables regulators and underrepresented groups to better understand AI models. READ MORE

international

Gov’t to reveal policy measure for hyperscale AI industry

The Korea Times, 03/08/23. The Korean government has placed strategic importance on improving the nation’s capabilities in hyperscale AI. The government intends to merge the capabilities of the government and private companies, create flexible regulations for the spread of hyperscale AI, and introduce measures to improve reliability.

The emergence of hyperscale AI models can increase Korea’s competitiveness in data gathering, computing resource sharing, R&D, security, ethics, and reliability. READ MORE

risks

When the robots come

InfoWorld, 03/08/23. ChatGPT carries risks. Self-training can improve ChatGPT’s performance, but it can also produce overfitting or nonsensical responses. Dependence on ChatGPT can lead to frustration, confusion, and misinformation, with severe consequences if its output is used as the basis for important decisions.

To mitigate these negative effects, it’s necessary to identify and address biases and ethical concerns, ensure responsible use through government regulations, and promote public awareness and responsible development. READ MORE

social media

Once AI can create endless viral videos, good luck switching off social media

The Register, 03/08/23. Short-form videos have become the most engaging and bingeable content on social media. These platforms have a deep catalog of content to test and rate against the preferences and behavior of their audiences.

Inevitably, generative AI will be used to create fully automated systems for media production. These will generate an endless stream of “good-enough” content for social media feeds. The flow of low-quality, highly optimized content will likely overwhelm human contributions. READ MORE