April 11, 2023
Collaboration
Harnessing Hybrid Intelligence: Balancing AI Models and Human Expertise for Optimal Performance
datanami, 04/11/23. Evaluating AI models with the right metrics is crucial for gaining insight and uncovering biases, limitations, and areas for improvement. Key metrics for classification tasks include recall, precision, F1-score, and accuracy, and models can be tuned to favor recall or precision depending on the application.
Collaboration between AI and humans enhances effectiveness and reliability by harnessing the strengths of both parties. Integrating AI models with human workflows requires careful consideration of recall and precision.
By selecting the right metrics and driving real collaboration, we can maximize the positive impact of AI and realize its potential. READ MORE
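The article discusses these metrics in general terms; as a minimal sketch of how the precision/recall trade-off plays out in practice, the snippet below computes the four metrics with scikit-learn and shows how raising the decision threshold favors precision over recall. The data and threshold values are made up for illustration.

```python
# Illustrative sketch: computing classification metrics and trading off
# precision vs. recall by adjusting the decision threshold.
# The labels, scores, and thresholds below are hypothetical.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])          # ground-truth labels
y_scores = np.array([0.9, 0.4, 0.65, 0.8, 0.3,
                     0.55, 0.7, 0.2, 0.45, 0.6])            # model-predicted probabilities

for threshold in (0.5, 0.7):                                # lower threshold favors recall,
    y_pred = (y_scores >= threshold).astype(int)            # higher threshold favors precision
    print(f"threshold={threshold}",
          f"accuracy={accuracy_score(y_true, y_pred):.2f}",
          f"precision={precision_score(y_true, y_pred):.2f}",
          f"recall={recall_score(y_true, y_pred):.2f}",
          f"F1={f1_score(y_true, y_pred):.2f}")
```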
Hazards
Can Intelligence Be Separated From the Body?
The New York Times, 04/11/23. AI needs a body to perceive and react to its environment. Embodied robots integrate language models with physical machines, allowing them to learn from people’s behavior. Without a connection to the physical world, AI could make life-threatening mistakes.
Researchers suggest starting with simple robots, pairing AI with a body that can explore the world and learn its limits, as a path toward safer AI. READ MORE
Regulation
There’s no stopping AI now
COMPUTERWORLD, 04/11/23. AI poses profound risks to society and humanity. Over a thousand tech luminaries have called for powerful AI systems to be developed only once we are confident that their effects will be positive and their risks will be manageable.
The petition also called for AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months to avoid losing control of our civilization.
Despite these concerns, the development of AI is unlikely to stop, and businesses and big tech companies will continue to push for its advancement. We must be cautious about AI because no one really knows how it will evolve, and it could pose significant risks to society. READ MORE
Regulation
The problems with a moratorium on training large AI systems
BROOKINGS, 04/11/23. Calls for a government moratorium on training powerful AI systems in the United States raise concerns about the delay of AI’s benefits, legal authority, and enforcement.
A moratorium would delay AI’s benefits, and enforcing one would be difficult because the key ingredients of AI development are widely accessible.
The call for a pause is worth weighing, but a nationwide moratorium requires careful examination of its potential consequences. READ MORE
Edge AI
Floating-Point Arithmetic for AI Inference – Hit or Miss?
yahoo!, 04/11/23. AI runs on power-hungry data centers. Pushing AI to edge devices such as phones and PCs should help improve reliability, latency, privacy, network bandwidth usage, and overall cost.
Qualcomm has been investing heavily in making neural networks more efficient for edge devices. This has led to advances in deep learning model efficiency, including quantization.
A recent whitepaper compared the efficiency of floating-point and integer quantization for inference and concluded that the integer format is superior from a cost and performance perspective. To optimize networks even further, quantization-aware training (QAT) can deliver significant efficiency gains without sacrificing much accuracy. READ MORE
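The whitepaper itself is not reproduced here; as a rough illustration of what integer (INT8) quantization does, the sketch below maps floating-point weights to 8-bit integers using a scale and zero-point, then dequantizes them to show the approximation error. This is generic post-training asymmetric quantization, not Qualcomm’s specific method or QAT, and the example tensor is made up.

```python
# Rough illustration of asymmetric INT8 quantization (not the whitepaper's method):
# map float values to 8-bit integers with a scale and zero-point, then dequantize
# to inspect the approximation error. The example weight tensor is hypothetical.
import numpy as np

def quantize_int8(x: np.ndarray):
    qmin, qmax = 0, 255                                   # unsigned 8-bit range
    scale = (x.max() - x.min()) / (qmax - qmin)           # float value per integer step
    zero_point = int(round(qmin - x.min() / scale))       # integer that represents 0.0
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

weights = np.array([-0.82, -0.15, 0.03, 0.47, 1.10], dtype=np.float32)
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
print("int8 values:", q)
print("max abs error:", np.abs(weights - restored).max())
```

Storing and multiplying 8-bit integers instead of 32-bit floats is what yields the memory, bandwidth, and energy savings on edge devices; QAT goes a step further by simulating this rounding during training so the model learns to tolerate it.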