October 24, 2023
-
Research
Human brain’s ‘temporal scaffolding’ inspires new AI approaches
University of Rochester, 10/24/23. A hypothesis about how the human brain uses periods of sleep and wakefulness to learn over time could help overcome artificial intelligence’s limitations with lifelong learning, according to scientists. Christopher Kanan, an associate professor in the University of Rochester’s Department of Computer Science, is part of a team that received funding from the National Science Foundation to use the “temporal scaffolding” hypothesis to develop AI that can rapidly learn, adapt, and operate in uncertain conditions. The new approach aims to mimic the brain’s ability to detect important patterns within experiences during sleep. The researchers envision applications in healthcare, autonomous systems, and national security. READ THE ARTICLE
-
Regulation
Op-ed: We cannot allow AI to make Big Tech even bigger, argues Steve Case
CNBC, 10/24/23. In this article, Steve Case highlights the need for policymakers to focus not only on the risks of artificial intelligence (AI) but also on how the AI economy should be structured. Currently, most AI innovation is being driven by Big Tech companies due to the high costs of building large language models. This departure from the usual patterns of disruptive startups challenging incumbents may result in the big getting bigger and challengers struggling to gain traction. Case emphasizes the importance of maintaining an open-source AI model and ensuring that AI development is not limited to Silicon Valley. He advocates for ground rules that allow entrepreneurs from all regions to participate in and benefit from AI innovation, turning AI into a bridge that connects the entire tech world with the rest of America. READ THE ARTICLE
-
Generative AI
Generative AI and the Transformation of Everything
CIO, 10/24/23. Generative AI is a rapidly advancing technology with the potential to transform numerous industries. However, concerns about its impact on jobs, data security, and copyright infringement have sparked debates and controversies. While these concerns are valid, they can be alleviated with proper education, policies, and technology solutions. Enterprises must recognize the risks associated with generative AI, implement data protection measures, and establish regulatory guardrails. By investing in security measures, organizations can safely harness the power of generative AI and take advantage of its transformative potential. READ THE ARTICLE
-
Strategy
4 Core Principles for States to Follow When Adopting AI
StateTech Magazine, 10/24/23. Implementing AI initiatives requires a strategy built on short-term, attainable objectives. It is important to recognize that AI is fundamentally software and can be implemented, run, scaled, and maintained like any other modern software; there is no need to be intimidated by its potential. Careful governance is crucial in monitoring AI applications to ensure compliance with regulations. AI does not have an inherent moral code and must be governed to avoid biased or unintended outcomes. By embracing AI and implementing proper governance measures, organizations can stay ahead and deliver enhanced experiences to their constituents. READ THE ARTICLE
-
Amazon
Amazon’s AI-Powered Van Inspections Give It a Powerful New Data Feed
Wired, 10/24/23. Amazon is implementing a new automated vehicle inspection system, called AVI, at its distribution centers worldwide. These inspection stations, camera-studded archways powered by artificial intelligence, scan Amazon’s delivery vans for damage and maintenance needs. The high-resolution cameras focus on the vehicle’s undercarriage, tire quality, and exterior, compiling the data into a 3D image analyzed by machine-learning software. The system aims to save time and enhance safety by identifying issues early. Amazon plans to install hundreds of these inspection units in the coming years, providing valuable insight into maintenance and operational patterns. READ THE ARTICLE