Models

  • Models

    Pigeons problem-solve similarly to artificial intelligence, research shows

    The Guardian, 10/26/23. A recent study found that pigeons demonstrate problem-solving abilities similar to those of artificial intelligence (AI). Pigeons, often dismissed as pests, are intelligent animals capable of intricate tasks, memory recall, navigation, and even life-saving acts. The study gave 24 pigeons a range of visual categorization tasks and found that they learned to categorize stimuli over time. The researchers noted that the pigeons' decision-making mechanisms aligned with those used in AI models, suggesting that nature has developed an effective algorithm for complex learning tasks. These findings could enhance our understanding of brain damage and cognition in humans. READ THE ARTICLE

  • Models

    Top AI Shops Fail Transparency Test

    IEEE Spectrum, 10/22/23. The recent report by Stanford’s Center for Research on Foundation Models provides valuable insight into the transparency of major AI companies. The report graded 10 of the biggest AI models on 100 different metrics and revealed a lack of transparency in the industry. Meta’s Llama 2 received the highest total score of 54 out of 100. Transparency encompasses factors such as training data, model properties, distribution, and labor. While some models scored well in specific categories, there is still a need for greater transparency in areas such as data sources and labor practices. The index aims to inform policymakers and promote better transparency in the AI industry. READ THE ARTICLE

  • Models

    In-Depth Guide to 5 Types of Conversational AI in 2023

    AI Multiple, 10/14/23. Conversational AI refers to software that enables users to have interactive conversations, whether through chatbots, social messaging apps, smart devices, or digital workers. These solutions provide support, answer questions, and help complete tasks remotely. There are various types of conversational AI, such as AI chatbots, rule-based chatbots, hybrid chatbots, voice assistants, and interactive voice assistants. Each type has its own advantages and characteristics. For example, AI chatbots use machine learning and natural language processing to generate their own answers, while rule-based chatbots follow predefined rules. Voice assistants convert voice commands into machine-readable text and perform programmed tasks. Interactive voice assistants allow callers to interact with computer-operated phone systems. These conversational AI technologies have proven to be beneficial in different industries, from improving customer experiences in banking to increasing transcription efficiency in research studies. It is important for organizations to understand the different types of conversational AI and choose the ones that best fit their needs. READ THE ARTICLE
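
    To make the distinction between rule-based and AI chatbots concrete, here is a minimal illustrative sketch (in Python) of the rule-based approach described above: replies are chosen by matching keywords against hand-written rules, with no machine learning involved. The rules and canned replies are invented for illustration.

      import re

      # Each rule maps a keyword pattern to a canned reply (hypothetical examples).
      RULES = [
          (re.compile(r"\b(hours|open)\b", re.I), "We are open 9am-5pm, Monday to Friday."),
          (re.compile(r"\b(refund|return)\b", re.I), "You can request a refund within 30 days of purchase."),
          (re.compile(r"\b(human|agent)\b", re.I), "Connecting you to a human agent..."),
      ]
      FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

      def reply(message: str) -> str:
          # First matching rule wins; otherwise fall back to a generic response.
          for pattern, answer in RULES:
              if pattern.search(message):
                  return answer
          return FALLBACK

      print(reply("What are your opening hours?"))  # matches the first rule
      print(reply("Tell me a joke"))                # no rule matches -> fallback

    An AI chatbot, by contrast, would generate the reply text itself with a machine learning model rather than selecting from a fixed table of responses.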

  • Models

    MIT’s New Generative AI Outperforms Diffusion Models in Image Generation

    SciTechDaily, 10/14/23. A new generative model called PFGM++ combines the principles of diffusion models and Poisson Flow, resulting in superior image generation. Developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), PFGM++ outperforms existing state-of-the-art models in generating complex patterns and realistic images. By augmenting the data with extra dimensions and adjusting its training methods, the model strikes a balance between robustness and ease of use. This interdisciplinary collaboration between physicists and computer scientists showcases how physics-inspired concepts can advance artificial intelligence. The PFGM++ model has potential applications in various fields, including antibody and RNA sequence generation, audio production, and graph generation. READ THE ARTICLE
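
    For readers wondering what the "extra dimensions" refer to: according to the PFGM++ paper itself (a hedged summary, not drawn from the SciTechDaily article), each N-dimensional data point is augmented with D auxiliary dimensions, and the choice of D interpolates between the two model families.

      \[
        \tilde{x} = (x, z), \qquad x \in \mathbb{R}^{N}, \; z \in \mathbb{R}^{D}
      \]
      \[
        D = 1 \;\Rightarrow\; \text{Poisson Flow (PFGM)}, \qquad
        D \to \infty \;\Rightarrow\; \text{diffusion models}
      \]

    Intermediate values of D are where the reported balance between robustness and ease of use is struck.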

  • Models

    Less is a lot more when it comes to AI, says Google’s DeepMind

    ZDNet, 10/03/23. In artificial intelligence, finding the right balance between model size and training data is crucial. DeepMind’s Chinchilla Law established a rule of thumb: a model reduced to a quarter of its initial size can maintain accuracy if its training data is increased fourfold. Now, researchers suggest an even more efficient approach that uses sparsity, a technique inspired by human neurons. By removing three-quarters of a neural network’s parameters, performance can be maintained while the network’s size is greatly reduced. This discovery holds promise for achieving optimal results with fewer resources and less energy consumption in deep-learning AI. READ THE ARTICLE
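
    As a rough illustration of the sparsity idea (a sketch of generic magnitude pruning, not DeepMind's actual method), the snippet below zeroes out the smallest-magnitude 75% of a weight matrix, which is the "remove three-quarters of the parameters" operation in its simplest, unstructured form.

      import numpy as np

      def magnitude_prune(weights: np.ndarray, sparsity: float = 0.75) -> np.ndarray:
          """Zero out the smallest-magnitude fraction of entries (unstructured pruning)."""
          k = int(sparsity * weights.size)
          if k == 0:
              return weights.copy()
          # Threshold = k-th smallest absolute value; everything at or below it is dropped.
          threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
          mask = np.abs(weights) > threshold
          return weights * mask

      rng = np.random.default_rng(0)
      w = rng.normal(size=(512, 512))          # stand-in for one layer's weight matrix
      w_sparse = magnitude_prune(w, 0.75)
      print(f"non-zero fraction: {np.count_nonzero(w_sparse) / w.size:.2f}")  # ~0.25

    In practice the pruned network is typically retrained or fine-tuned so the surviving weights compensate, and specialized sparse kernels are needed to turn the zeros into real memory and energy savings.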

  • Models

    When Hordes of Little AI Chatbots Are More Useful Than Giants Like ChatGPT

    Singularity Hub, 10/01/23. The future of AI chatbots is evolving from generic models like ChatGPT to more specialized ones. These specialized chatbots will be better equipped to serve specific industries or geographic areas. However, acquiring training data for advanced large language models (LLMs) like ChatGPT poses growing challenges and rising costs for companies. Synthetic data, created by AI systems, is a potential solution, but it needs to strike a balance between being different enough to provide new insights and similar enough to remain accurate. In addition, demand for human feedback to correct inaccuracies in AI models trained on synthetic data will likely grow. As a result, little language models tailored for specific purposes may emerge as a trend in AI. These models, developed with expert knowledge and feedback from employees, can overcome the limitations of smaller datasets and provide more valuable insights. READ THE ARTICLE

  • Models

    Paris-Based Mistral AI Enters the Arena with Free and Powerful New Language Model

    Decrypt, 09/30/23. Mistral AI has introduced Mistral 7B, a new open-source large language model (LLM) that aims to provide a powerful and accessible alternative to established players like OpenAI’s ChatGPT and Google’s Bard. The Paris-based company claims that Mistral 7B outperforms its competitors. In Decrypt’s testing, a paragraph about Bitcoin generated by Mistral 7B was scored 8.5 out of 10 when evaluated by GPT-4. What sets Mistral apart is its emphasis on openness and adaptability, allowing developers to inspect, adapt, and improve upon the existing code base. Mistral also prioritizes privacy, since the model can be run locally without internet access. By developing open-weight models, Mistral AI aims to benefit sectors such as R&D, customer care, and marketing. The strong AI scene in Paris continues to thrive with Mistral’s latest breakthrough. READ THE ARTICLE
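
    Because the article highlights that Mistral 7B is open-weight and can be run locally, here is a minimal sketch of loading it with the Hugging Face transformers library. The repository name and generation settings are illustrative, and a recent transformers release (with accelerate installed) plus enough GPU or CPU memory are assumed.

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # instruction-tuned variant on Hugging Face

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          torch_dtype=torch.float16,  # half precision to fit in roughly 16 GB of memory
          device_map="auto",          # let accelerate place the weights on available devices
      )

      prompt = "Write one paragraph explaining what Bitcoin is."
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
      print(tokenizer.decode(output[0], skip_special_tokens=True))

    Once the weights are downloaded, generation runs entirely on the local machine, which is the privacy point the article makes.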