May 30, 2023

  • Coding

    ‘Everyone is a programmer’ with generative A.I., says Nvidia chief

    CNBC, 05/30/23. Thanks to AI, a new era of computing is upon us. You don’t need to write code to be a programmer. You can simply speak to the computer.

Technologies like the Nvidia DGX GH200 supercomputer platform can create all kinds of content. Professionals can now generate images, accelerate application development, and enhance existing applications. READ THE ARTICLE

  • Coding

    AI Doesn’t Make Everyone A Programmer Overnight

Forbes, 05/30/23. The technology industry is embracing AI’s potential, with Nvidia reaching a $1 trillion market cap. AI is transforming software development by automating tasks and enhancing code quality. Nvidia’s CEO claims everyone is now a programmer, but risks exist. Shadow IT has prompted the citizen developer model, which lets business users build applications with approved technology, and generative AI further empowers non-coders. However, human skills like critical thinking and creativity remain crucial: AI accelerates technological progress, but human judgment is indispensable for realizing its full potential. READ THE ARTICLE

  • Hazard

    Who is watching you? AI can stalk unsuspecting victims with ‘ease and precision’: experts

Fox News, 05/30/23. The rapidly advancing capabilities of AI are raising concerns about privacy and personal safety.

Online face search engines and AI-powered surveillance systems pose risks of stalking and invasion of privacy. With a single photo, strangers can gather a wealth of information about a person, including past locations and predicted movements.

    Stricter regulations and safeguards are necessary to prevent misuse and protect individuals. READ THE ARTICLE

  • Models

    The Race to Make A.I. Smaller (and Smarter)

    The New York Times, 05/30/23. The BabyLM Challenge is calling into question the notion that bigger AI chatbots are superior. It aims to create highly capable language models using smaller datasets, promoting efficiency and accessibility.

    Large language models are criticized for being opaque and exclusive. The challenge focuses on human language learning and seeks to bridge the gap between AI models and human understanding. It paves the way for more accessible and intuitive AI and improved industry research. READ THE ARTICLE

  • Deepfakes

    Deepfaking it: America’s 2024 election collides with AI boom

    Reuters, 05/30/23. The rise of deepfake technology in the 2024 US presidential race poses a significant threat to election integrity. Cheap and accessible generative AI tools like Midjourney enable the creation of convincing fake videos, blurring the line between fact and fiction and making it hard for voters to distinguish real from manipulated content. While social media platforms have made efforts to combat deepfakes, their effectiveness varies. Stricter regulation and responsible use of AI are crucial to safeguard democracy from mass misinformation. READ THE ARTICLE

  • Law

    No ChatGPT in my court: Judge orders all AI-generated content must be declared and checked

    TechCrunch, 05/30/23. In response to an attorney’s use of AI-generated content in a federal filing, Judge Brantley Starr of Texas has implemented a rule requiring attorneys to certify that no portion of their documents was drafted by AI, or if it was, it was checked by a human. The move highlights the risks and limitations of current AI platforms, emphasizing the need for transparency and accuracy in legal proceedings. Other judges may adopt similar rules in light of this development. READ THE ARTICLE

  • Hallucinations

    The biggest problem in AI? Lying chatbots

    The Washington Post, 05/30/23. Hallucinations in AI chatbots pose a significant challenge, as the bots often provide inaccurate or made-up information. MIT researchers have proposed a “society of minds” approach, in which multiple chatbots debate answers to improve factual accuracy. Methods like reinforcement learning and cross-checking with human input are also being explored. While hallucinations are inherent to current language models, efforts to mitigate them are crucial given their potential for harm, and advances in AI learning methods are needed to address this pressing issue. READ THE ARTICLE