March 29, 2023

HAZARDS

The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess

VICE, 03/29/23. The recent open letter calling for a six-month pause on the development of the most powerful AI systems has sparked controversy and criticism. The signatories seek to reduce global catastrophic and existential risk from powerful technologies such as superintelligent AI.

Other experts argue that the letter promotes a longtermist perspective and feeds the AI hype cycle. Critics also point out that it offers no concrete measures beyond the six-month pause and overlooks serious concerns already present in current models. READ MORE

IMPLEMENTATION

A Framework for Picking the Right Generative AI Project

Harvard Business Review, 03/29/23. Large language models (LLMs) have generated a lot of hype and speculation, but their potential impact is still uncertain. Balancing risk against demand is important when exploring generative AI applications, and a risk vs. demand matrix can help identify promising uses.

Marketing is a high demand/low risk application that has already seen innovation with LLMs, while learning and other high demand/low risk uses have received less attention. Low demand/low risk uses include whimsical applications like funny Twitter bios, and low demand/high risk uses include specialist technical advice.
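As a rough illustration, the matrix can be read as a simple way of sorting candidate projects into quadrants by demand and risk. The minimal Python sketch below is not from the HBR article; the use cases and labels are assumptions taken only from the summary above.

# A minimal sketch, assuming Python, of the risk vs. demand matrix described
# above. The specific use cases and ratings are illustrative assumptions,
# not part of the HBR framework itself.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    demand: str  # "high" or "low"
    risk: str    # "high" or "low"

def quadrant(case: UseCase) -> str:
    # Place a candidate project in one of the four matrix quadrants.
    return f"{case.demand} demand / {case.risk} risk"

candidates = [
    UseCase("Marketing copy", demand="high", risk="low"),
    UseCase("Funny Twitter bios", demand="low", risk="low"),
    UseCase("Specialist technical advice", demand="low", risk="high"),
]

for case in candidates:
    print(f"{case.name}: {quadrant(case)}")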

Even low risk applications carry some risk, since generative AI remains vulnerable to bias and errors, so human input and editing are still important to ensure accuracy and preserve nuance. READ MORE

TOOLS

Forget ChatGPT. These top AI tools will revolutionize the way you work

euronews, 03/29/23. AI is revolutionizing the way we interact with the world, and constant tool upgrades keep opening new opportunities. ChatGPT is great, but other outstanding AI tools can also boost productivity and efficiency, including Midjourney, Copy.ai, Tableau, Murf, Jasper, Fireflies, and Pictory. READ MORE

THE HYPE

The Delusion at the Center of the A.I. Boom

SLATE, 03/29/23. The hype surrounding the newest generation of artificial intelligence products has led to a wave of excitement and funding. It has also fueled the dangerous mindset of technological solutionism: the mistaken belief that complex problems can be reduced to simpler engineering problems and remedied entirely by technological fixes.

This approach disregards critical information and context, leading problems to be misrepresented and misunderstood. Solutionism is also attractive to investors and the public because it reinforces optimism about innovation, is psychologically reassuring, and is financially enticing. Yet by misframing problems and misunderstanding why they arise, it ultimately undermines society's progress.

To get the most out of AI, it is essential to be clear-eyed about how its use will impact society and to consider the social and political dimensions of problems so as to avoid exacerbating inequality. READ MORE

TRENDS

Fighting The AI Tide: Exercise In Futility, Or In Raising Long-Term Awareness?

Forbes, 03/29/23. Efforts to contain AI have been likened to King Canute, who planted his throne at the edge of the ocean and demanded that the tide recede. Contrary to his intent, the gesture proved the limits of his power. Still, an emerging movement of experts and activists is calling for more awareness of the potential abuses AI can bring and for better outcomes for business and society.

The Future of Life Institute perceives an out-of-control race to develop and deploy ever more powerful digital minds, so it has drafted an open letter calling for a six-month “pause” on “giant AI experiments.”

The letter raises awareness of the potential dangers of AI and the need for proper governance, security, oversight, and guardrails, and it was endorsed by prominent figures such as Elon Musk and Yoshua Bengio. Still, some are skeptical that AI development can be held back. READ MORE