October 16, 2023

  • Regulation

    Sweeping new Biden order aims to alter the AI landscape

    Politico, 10/16/23. The White House has released a draft executive order on artificial intelligence (AI) that aims to impose national order on the rapidly growing technology. The order includes guidelines for federal agencies to influence the US market through their buying power and enforcement tools. It also addresses various aspects of AI, including cybersecurity, health, competition, privacy, immigration, microchip manufacturing, telecoms, education, housing, copyright, and labor. The order seeks to promote responsible AI use, protect privacy, enhance competition, and address potential risks and benefits. Agencies will have between 90 and 240 days to fulfill its requirements. READ THE ARTICLE

  • Implementation

    Keep Your AI Projects on Track

    Harvard Business Review, 10/16/23. AI, particularly generative AI, has become a prominent topic in today’s corporate landscape. Yet despite its potential, most AI projects fail: estimates put the failure rate as high as 80%, almost double that of IT projects a decade ago. To improve the odds of success, companies must navigate five critical steps: selection, development, evaluation, adoption, and management. Handling each step deliberately can significantly reduce the risk of an AI project failing. READ THE ARTICLE

  • Robotics

    Stacking Boxes? Treating Cancer? AI Needs to Learn Physics First

    The Wall Street Journal, 10/16/23. Artificial intelligence (AI) has made headlines with its conversational prowess, but its potential extends far beyond chatbots. To tackle complex real-world problems in fields like robotics, science, and engineering, AI needs to learn physics. Integrating physics knowledge into AI, an approach known as “physics-informed neural networks” or “scientific machine learning,” gives models a solid foundation for problem-solving: it narrows the solution space, making predictions more accurate and efficient. The approach is already benefiting industries such as electric vehicles, healthcare, and robotics, and it opens the door for AI to excel in diverse applications by embracing the laws of the natural world. READ THE ARTICLE
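
    The core idea is straightforward to sketch in code: in addition to fitting data, a physics-informed network is penalized whenever its output violates the governing equation. The following is a minimal, hypothetical illustration (not from the article), assuming PyTorch, for the toy differential equation dy/dt = -y with y(0) = 1.

      # Minimal PINN sketch (assumes PyTorch): a small network approximates y(t)
      # for dy/dt = -y with y(0) = 1. The physics residual dy/dt + y enters the
      # loss, so the governing equation constrains the fit, not just data.
      import torch
      import torch.nn as nn

      net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
      optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

      # Collocation points where the physics residual is evaluated
      t = torch.linspace(0.0, 2.0, 50).reshape(-1, 1).requires_grad_(True)

      for step in range(2000):
          optimizer.zero_grad()
          y = net(t)
          # dy/dt via automatic differentiation
          dy_dt = torch.autograd.grad(y, t, grad_outputs=torch.ones_like(y),
                                      create_graph=True)[0]
          physics_loss = torch.mean((dy_dt + y) ** 2)                   # residual of dy/dt = -y
          boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # enforce y(0) = 1
          loss = physics_loss + boundary_loss
          loss.backward()
          optimizer.step()

    Because the residual term rules out functions that break the equation, very little data (here, only the initial condition) is needed to pin down an accurate solution, which is the narrowed solution space the article describes.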

  • AGI

    Minds of machines: The great AI consciousness conundrum

    MIT Technology Review, 10/16/23. In the debate over AI consciousness, philosopher David Chalmers argues that while large language models like LaMDA and ChatGPT are impressive, they lack the requisites for actual consciousness. Yet as AI development accelerates, Chalmers estimates a greater than one in five chance of conscious AI emerging within the next decade. The question is not just an intellectual puzzle but a morally weighty problem, with consequences for human safety and for the well-being of any conscious AI. Answering it requires defining consciousness, which is inherently subjective. Neuroscientists such as Liad Mudrik are working to understand how the brain enables both information processing and the experience of that information. Insights from consciousness research may help us navigate the uncharted waters of artificial consciousness. READ THE ARTICLE

  • Training

    New Training Method Helps AI Generalize like People Do

    Scientific American, 10/16/23. A new study suggests that the key to building flexible machine-learning models lies in how they are trained rather than in how much training data they receive, a shift that could yield less error-prone AI models that reason more like humans. The study used a specially designed set of tasks to train a standard transformer model to interpret a made-up language. The model was then able to respond coherently and follow the logic of that language even when faced with new configurations of words. The findings highlight how a focused training curriculum can produce compositional machine-learning models that generalize the way people do, though this still-limited ability remains far from artificial general intelligence. READ THE ARTICLE
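
    To make the training setup concrete, the sketch below is a hypothetical illustration (not the study’s actual code, vocabulary, or rules) of the kind of episode such a curriculum might contain: each episode invents a small made-up language, provides a handful of study examples, and then poses a query that can only be answered by composing the rules in a new way.

      # Hypothetical episode generator for a made-up compositional language
      # (illustrative names and rules; not the study's materials).
      import random

      COLORS = ["RED", "BLUE", "GREEN", "YELLOW"]
      PRIMITIVES = ["dax", "wif", "lug", "zup"]

      def make_episode(rng: random.Random):
          # Each episode re-maps primitive words to meanings, so a model must
          # infer the mapping from the study examples instead of memorizing it.
          meaning = dict(zip(PRIMITIVES, rng.sample(COLORS, len(PRIMITIVES))))

          def interpret(phrase: str) -> str:
              word, *mods = phrase.split()
              out = [meaning[word]]
              for mod in mods:              # function words compose systematically
                  if mod == "twice":
                      out = out * 2
                  elif mod == "thrice":
                      out = out * 3
              return " ".join(out)

          study = [(p, interpret(p)) for p in PRIMITIVES]
          study.append(("dax twice", interpret("dax twice")))
          # The query pairs a primitive with a modifier never seen together in
          # the study examples -- the compositional leap the model must make.
          query = ("lug thrice", interpret("lug thrice"))
          return study, query

      study, query = make_episode(random.Random(0))
      print(study)   # study pairs, e.g. ('dax twice', '<color> <color>')
      print(query)   # held-out query, e.g. ('lug thrice', '<color> <color> <color>')

    A sequence model (the study used a standard transformer) is then trained across many such episodes, so what it acquires is the skill of composing rules from examples rather than any single vocabulary.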