September 28, 2023

  • Regulation

    AI is going to change the world — but who will be leading that change?

The Hill, 09/28/23. The rapid rise of artificial intelligence has brought society to a critical juncture. Recent discussions in Washington, D.C., have centered on who should shape AI’s future: tech giants or government regulators. While AI offers transformative potential, it also poses significant risks, and striking the right balance between innovation and oversight is essential. Collaboration among experts, industry leaders, and policymakers is necessary to ensure responsible AI development. The United States must take a leading role in global AI regulation, because the stakes are high: AI can either improve lives or empower oppressive regimes. READ THE ARTICLE

  • Enterprise

    IBM Tries to Ease Customers’ Qualms About Using Generative A.I.

The New York Times, 09/28/23. IBM has announced a campaign to address concerns surrounding the use of generative AI technology. As more companies experiment with this powerful tool, they worry about data handling, accuracy, and legal liability. IBM aims to ease these concerns by indemnifying customers against copyright claims and publishing its data sets. This approach sets IBM apart from its competitors and positions the company as a reliable partner for businesses looking to build their own AI technology. With a focus on accuracy and smaller, more efficient models, IBM aims to enable wider adoption of generative AI across business operations, offering a defensible return on investment. READ THE ARTICLE

  • Hallucinations

    The hot new thing: AI platforms that stop AI’s mistakes before production

TechCrunch, 09/28/23. The growing use of AI-assisted code generation is giving rise to startups that aim to catch problems in AI-augmented code before it reaches production. Startups like Digma and Kolena have recently secured seed funding to build platforms that analyze and test AI-generated code. Another is Braintrust, a four-person Bay Area company that has just raised $3 million and positions itself as an “operating system for engineers building AI software.” With backing from notable investors, Braintrust aims to help developers avoid unfavorable outcomes from AI models by providing a reliable platform for testing and evaluation. READ THE ARTICLE

  • Government

    New IRS Chatbots Use AI In Aim To Assist Taxpayers—And They Actually Work

Forbes, 09/28/23. The IRS has announced the availability of expanded chatbot technology on its website to assist taxpayers who receive notices. This is a significant development for taxpayers, who often face long wait times when trying to reach the IRS by phone. The chatbots are part of the agency’s effort to improve its technological capabilities and provide world-class customer service, and they can help taxpayers resolve tax issues and set up payment agreements. This technology should make it easier for taxpayers to get the information they need without lengthy hold times. READ THE ARTICLE

  • Coding

    Can AI code? In baby steps only

ZDNet, 09/28/23. The use of generative AI in programming, exemplified by OpenAI’s ChatGPT, has raised expectations about its ability to generate computer code. Research shows, however, that while ChatGPT can offer suggestions and help overcome creative roadblocks, its coding assistance remains limited. Studies have found that large language models such as GPT-4 fall short of human coders in overall code quality and correctness, and challenges persist in scaling to larger problems, identifying errors, and solving complex tasks. Although generative AI shows promise, significant advances are needed to overcome these fundamental limitations and reach higher levels of coding proficiency. READ THE ARTICLE

  • Models

    AI language models can exceed PNG and FLAC in lossless compression, says study

    Ars Technica, 09/28/23. Data compression plays a pivotal role in reducing data size without losing information, and successful compression hinges on pattern recognition: the better a system can predict what comes next, the fewer bits it needs to store the data. A recent DeepMind study shows that its Chinchilla 70B language model, used as a lossless compressor, can outperform dedicated algorithms such as PNG and FLAC on image and audio data. This suggests that large language models excel not only at generating text but also at shrinking data, extending their reach beyond text tasks and reviving the long-standing discussion, emphasized by the Hutter Prize, about the link between compression and intelligence. In short, the research sheds light on what large models can contribute to data compression. READ THE ARTICLE
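
    To make the prediction-compression link concrete, here is a minimal, illustrative sketch (not DeepMind’s method and not Chinchilla): under arithmetic coding, a sequence can in principle be stored in roughly the sum of -log2 p(next symbol | context) bits, so a model that guesses better compresses better. A tiny adaptive bigram character model stands in for the language model; the function name and example strings below are hypothetical.

    # Sketch of the prediction-compression link: the ideal compressed size of a
    # text under arithmetic coding is the sum of -log2 p(char | context).
    # A simple adaptive bigram model stands in for a large language model.
    import math
    import random
    from collections import defaultdict

    def ideal_bits(text: str) -> float:
        """Ideal compressed size, in bits, of `text` under an adaptive
        bigram character model with add-one smoothing."""
        counts = defaultdict(lambda: defaultdict(int))  # context -> next char -> count
        totals = defaultdict(int)                       # context -> total observations
        alphabet = sorted(set(text))
        bits = 0.0
        prev = ""                                       # empty context for the first char
        for ch in text:
            # Probability the model assigns to the character actually observed.
            p = (counts[prev][ch] + 1) / (totals[prev] + len(alphabet))
            bits += -math.log2(p)                       # arithmetic-coding cost of this char
            counts[prev][ch] += 1                       # update the model after seeing it
            totals[prev] += 1
            prev = ch
        return bits

    if __name__ == "__main__":
        repetitive = "abab" * 500                       # highly predictable sequence
        random.seed(0)
        noisy = "".join(random.choice("ab") for _ in range(2000))  # unpredictable sequence
        for name, text in [("repetitive", repetitive), ("random", noisy)]:
            print(f"{name}: {ideal_bits(text) / 8:.0f} bytes ideal vs {len(text)} bytes raw")

    Running the sketch shows the predictable string costs far fewer ideal bits than the random one, which is the essence of why a strong predictor such as a large language model can double as a compressor.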