October 3, 2023

  • Trust

    How Can We Trust AI If We Don’t Know How It Works

    Scientific American, 10/03/23. The essay explores the concept of trust in artificial intelligence (AI) systems. It highlights their limitations, such as their unexplainable and unpredictable behavior, which makes it difficult for people to trust them. The essay delves into the reasons behind AI’s unpredictability, focusing on deep-learning neural networks and the vast number of parameters involved. It also examines the importance of aligning AI behavior with human expectations and ethical norms. The essay concludes by discussing the need to keep humans involved in AI decision-making, especially in critical systems, while emphasizing that the issues of explainability and alignment must be resolved to maintain trust in AI. READ THE ARTICLE

  • Robotics

    Instant evolution: AI designs new robot from scratch in seconds

    Science Daily, 10/03/23. Researchers at Northwestern University have developed an artificial intelligence (AI) program that can intelligently design robots from scratch. In a groundbreaking experiment, the program designed a robot capable of walking across a flat surface in just seconds. Unlike other AI systems that require energy-intensive supercomputers and large datasets, this program runs on a lightweight personal computer. The researchers believe this design tool marks a new era of AI-designed artificial life and opens up possibilities for robots that can act directly on the world. The study will be published in the Proceedings of the National Academy of Sciences. READ THE ARTICLE

  • Models

    Less is a lot more when it comes to AI, says Google’s DeepMind

    ZDNet, 10/03/23. In artificial intelligence, finding the right balance between model size and training data is crucial. DeepMind’s Chinchilla scaling law established a rule of thumb: shrinking a model to a quarter of its size while increasing its training data fourfold maintains accuracy. Now researchers suggest an even more efficient approach using sparsity, a technique inspired by the brain’s neurons. By removing three-quarters of a neural network’s parameters, the network can be made far smaller while its performance is maintained (a rough sketch of the idea follows below). The finding holds promise for achieving strong results with fewer resources and less energy consumption in deep-learning AI. READ THE ARTICLE
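    As a rough, hypothetical illustration of what sparsity can mean in practice (not the specific method reported in the article), the NumPy snippet below zeroes out the 75 percent of weights with the smallest magnitude in a single layer, leaving a sparse matrix that keeps only a quarter of the original parameters.

      # Illustrative magnitude-based pruning of one weight matrix.
      # A generic sketch of sparsity, not DeepMind's actual technique.
      import numpy as np

      rng = np.random.default_rng(0)
      weights = rng.normal(size=(512, 512))        # a dense layer's weights

      sparsity = 0.75                              # drop three-quarters of parameters
      threshold = np.quantile(np.abs(weights), sparsity)
      mask = np.abs(weights) >= threshold          # keep only the largest-magnitude weights
      pruned = weights * mask

      print(f"non-zero weights remaining: {mask.mean():.0%}")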

  • Agents

    Why Big Tech’s bet on AI assistants is so risky

    MIT Technology Review, 10/03/23. Recent advancements in AI language models, such as OpenAI’s ChatGPT and Google’s Bard, offer exciting possibilities for users. ChatGPT can now hold conversations using lifelike synthetic voices and search the web, while Bard integrates with various Google services, providing enhanced functionality. However, the technology comes with risks and concerns: AI language models can generate inaccurate responses and are vulnerable to prompt injection attacks. Tech companies need to address these security and privacy issues to ensure the safe use of AI assistants and protect users from potential scams and hacks. READ THE ARTICLE

  • Enterprise

    IBM Enables Safe Enterprise AI with Granite Foundation Models

    Forbes, 10/03/23. IBM recently introduced a new family of foundation models called “Granite” for its watsonx AI platform. Foundation models are fundamental building blocks of AI, allowing AI platforms to understand, generate, and interact with language and images. IBM’s Granite models are designed for the demands of enterprise applications and support business-domain tasks such as summarization, question answering, and classification. The models are trained on industry-specific datasets and tuned to the specialized language and knowledge of sectors like finance, healthcare, and law. IBM also emphasizes transparency and responsible AI by evaluating its datasets for governance, risk, and compliance. The company plans to expand the Granite models to other languages, develop more IBM-trained models, and release the watsonx.governance toolkit to strengthen trusted AI workflows. IBM’s commitment to innovation and to helping enterprises adopt AI positions the company as a leader in the field. READ THE ARTICLE