February 2024 AI Recap

February 2024 - Week 4 Recap

  1. Mistral introduced a new model, Mistral Large. It reaches top-tier reasoning performance, is multilingual by design, has native function-calling capabilities, and has a 32K-token context window. The pre-trained model scores 81.2% accuracy on MMLU. Alongside Mistral Large, Mistral released Mistral Small, a model optimized for latency and cost that outperforms Mixtral 8x7B with lower latency. Mistral also launched le Chat Mistral, a new ChatGPT-like conversational assistant (see the function-calling sketch after this list) [Details].
  2. Alibaba Group introduced EMO, an expressive audio-driven portrait-video generation framework. Given a single reference image and vocal audio (e.g., talking or singing), it can generate avatar videos with expressive facial expressions and various head poses [Details].
  3. Ideogram introduced Ideogram 1.0, a text-to-image model trained from scratch for state-of-the-art text rendering, photorealism, and prompt adherence, along with Magic Prompt, a feature to help with prompting. Ideogram 1.0 is now available to all users on ideogram.ai [Details].
  4. Google DeepMind introduced Genie (generative interactive environments), a foundation world model trained exclusively from Internet videos that can generate interactive, playable environments from a single image prompt [Details].
  5. Pika Labs launched a Lip Sync feature, powered by audio from ElevenLabs, for its AI-generated videos, enabling users to make characters talk with realistic mouth movements [Video].
  6. UC Berkeley introduced the Berkeley Function Calling Leaderboard (BFCL) to evaluate the function-calling capability of different LLMs. Gorilla OpenFunctions v2, an open-source model that helps users build AI applications with function calling and JSON-compatible output, has also been released [Details].
  7. Qualcomm launched AI Hub, a curated library of 80+ optimized AI models for superior on-device AI performance across Qualcomm and Snapdragon platforms [Details].
  8. BigCode released StarCoder2, a family of open code LLMs that comes in three sizes: 3B, 7B, and 15B parameters. StarCoder2-15B is trained on over 4 trillion tokens and 600+ programming languages from The Stack v2 dataset (see the inference sketch after this list) [Details].
  9. Researchers released FuseChat-7B-VaRM, a fusion of three prominent chat LLMs with diverse architectures and scales, namely NH2-Mixtral-8x7B, NH2-Solar-10.7B, and OpenChat-3.5-7B. It surpasses GPT-3.5 (March) and Claude-2.1, and approaches Mixtral-8x7B-Instruct [Details].
  10. The Swedish fintech Klarna’s AI assistant handles two-thirds of all customer service chats, some 2.3 million conversations so far, equivalent to the work of 700 people [Details].
  11. Lightricks introduces LTX Studio, an AI-powered filmmaking platform, now open for waitlist sign-ups, aimed at assisting creators in story visualization [Details].
  12. Morph partners with Stability AI to launch Morph Studio, a platform to make films using Stability AI–generated clips [Details].
  13. JFrog's security team found that roughly 100 models hosted on the Hugging Face platform contain malicious functionality [Details].
  14. Playground released Playground v2.5, an open-source text-to-image generative model, with a focus on enhanced color and contrast, improved generation for multi-aspect ratios, and improved human-centric fine detail [Details].
  15. Together AI and the Arc Institute released Evo, a long-context biological foundation model based on the StripedHyena architecture that generalizes across DNA, RNA, and proteins. Evo is capable of both prediction tasks and generative design, from the molecular to the whole-genome scale (over 650k tokens in length) [Details].
  16. Adobe previews a new generative AI music generation and editing tool, Project Music GenAI Control, that allows creators to generate music from text prompts, and then have fine-grained control to edit that audio for their precise needs [Details | video].
  17. Microsoft introduces Copilot for Finance, an AI chatbot for finance workers in Excel and Outlook [Details].
  18. The Intercept, Raw Story, and AlterNet sue OpenAI and Microsoft, claiming OpenAI and Microsoft intentionally removed important copyright information from training data [Details].
  19. Huawei spin-off Honor shows off tech to control a car with your eyes, and a chatbot based on Meta's AI [Details].
  20. Tumblr and WordPress.com are preparing to sell user data to Midjourney and OpenAI [Details].
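
A minimal sketch of the native function calling from item 1, assuming Mistral's OpenAI-style chat completions endpoint; the get_weather tool and its schema are purely illustrative, not part of the announcement:

```python
# Hedged sketch: call Mistral Large with a function/tool definition over HTTP.
# The get_weather tool below is hypothetical, for illustration only.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "mistral-large-latest",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
# If the model elects to call the tool, the arguments arrive as a JSON string.
print(message.get("tool_calls") or message["content"])
```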
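
A minimal sketch of running a StarCoder2 checkpoint from item 8 for code completion with Hugging Face transformers; the 3B checkpoint and generation settings are illustrative choices, not announced defaults:

```python
# Hedged sketch: code completion with a StarCoder2 checkpoint.
# The 3B model keeps memory requirements modest.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-3b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```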

February 2024 - Week 3 Recap

  1. Meta AI introduces V-JEPA (Video Joint Embedding Predictive Architecture), a method for teaching machines to understand and model the physical world by watching videos. Meta AI releases a collection of V-JEPA vision models trained with a feature prediction objective using self-supervised learning. The models are able to understand and predict what is going on in a video, even with limited information [Details | GitHub].
  2. OpenAI introduces Sora, a text-to-video model that can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions [Details + sample videos | Report].
  3. Google announces their next-generation model, Gemini 1.5, which uses a new Mixture-of-Experts (MoE) architecture. The first Gemini 1.5 model being released for early testing is Gemini 1.5 Pro, with a context window of up to 1 million tokens, the longest of any large-scale foundation model yet. 1.5 Pro can perform sophisticated understanding and reasoning tasks across different modalities, including video, and it performs at a level similar to 1.0 Ultra [Details | Tech Report].
  4. Reka introduced Reka Flash, a new 21B multimodal and multilingual model trained entirely from scratch that is competitive with Gemini Pro and GPT-3.5 on key language and vision benchmarks. Reka also presented Reka Edge, a smaller and more efficient 7B variant suitable for local and on-device deployment. Both models are in public beta and available in the Reka Playground [Details].
  5. Cohere For AI released Aya, a new open-source, massively multilingual LLM and dataset to help support under-represented languages. Aya outperforms existing open-source models and covers 101 languages, more than double the number covered by previous models [Details].
  6. BAAI released Bunny, a family of lightweight but powerful multimodal models. The Bunny-3B model, built upon SigLIP and Phi-2, outperforms state-of-the-art MLLMs, not only those of similar size but also larger ones (7B), and even achieves performance on par with LLaVA-13B [Details].
  7. Amazon introduced a text-to-speech (TTS) model called BASE TTS (Big Adaptive Streamable TTS with Emergent abilities). BASE TTS is the largest TTS model to date, trained on 100K hours of public-domain speech data, and exhibits “emergent” qualities that improve its ability to speak even complex sentences naturally [Details | Paper].
  8. Stability AI released Stable Cascade in research preview, a new text-to-image model that is exceptionally easy to train and fine-tune on consumer hardware thanks to its three-stage architecture. Stable Cascade can also generate image variations and perform image-to-image generation. In addition to checkpoints and inference scripts, Stability AI has also released scripts for fine-tuning, ControlNet, and LoRA training [Details].
  9. Researchers from UC Berkeley released the Large World Model (LWM), an open-source, general-purpose, large-context multimodal autoregressive model, trained from LLaMA-2, that can perform language, image, and video understanding and generation. LWM can answer questions about an hour-long YouTube video where GPT-4V and Gemini Pro both fail, and can retrieve facts across a 1M-token context with high accuracy [Details].
  10. GitHub opens applications for the next cohort of the GitHub Accelerator program, with a focus on funding the people and projects building AI-based solutions under an open-source license [Details].
  11. NVIDIA released Chat with RTX, a locally running (Windows PCs with specific NVIDIA GPUs) AI assistant that integrates with your file system and lets you chat with your notes, documents, and videos using open source models [Details].
  12. OpenAI is testing memory with ChatGPT, enabling it to remember things you discuss across all chats. ChatGPT's memories evolve with your interactions and aren't linked to specific conversations. It is being rolled out to a small portion of ChatGPT Free and Plus users this week [Details].
  13. BCG X released AgentKit, a LangChain-based starter kit (NextJS, FastAPI) for building constrained agent applications [Details | GitHub].
  14. ElevenLabs' Speech to Speech feature, launched in November for voice transformation with control over emotion and delivery, is now multilingual and available in 29 languages [Link].
  15. Apple introduced Keyframer, an LLM-powered animation prototyping tool that can generate animations from static images (SVGs). Users can iterate on their design by adding prompts and editing LLM-generated CSS animation code or properties [Paper].
  16. ElevenLabs launched a payout program for voice actors to earn rewards every time their voice clone is used [Details].
  17. Azure OpenAI Service announced the Assistants API, new models for fine-tuning, a new text-to-speech model, and a new generation of embedding models with lower pricing (see the Assistants API sketch after this list) [Details].
  18. Brilliant Labs, the developer of AI glasses, launched Frame, the world’s first glasses featuring an integrated AI assistant, Noa. Powered by an integrated multimodal generative AI system capable of running GPT-4, Stability AI, and the Whisper AI model simultaneously, Noa performs real-world visual processing, novel image generation, and real-time speech recognition and translation [Details].
  19. Nous Research released Nous Hermes 2 Llama-2 70B model trained on the Nous Hermes 2 dataset, with over 1,000,000 entries of primarily synthetic data [Details].
  20. OpenAI, in partnership with Microsoft Threat Intelligence, has disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities [Details].
  21. Perplexity partners with Vercel, opening AI search to developer apps [Details].
  22. Researchers show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. The agent does not need to know the vulnerability beforehand [Paper].
  23. FCC makes AI-generated voices in unsolicited robocalls illegal [Link].
  24. Slack adds AI-powered search and summarization to the platform for enterprise plans [Details].
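
A minimal sketch of the Assistants API surface mentioned in item 17, using the openai Python package's Azure client; the endpoint, API version, and deployment name are assumptions for illustration, not values from the announcement:

```python
# Hedged sketch: create an assistant, a thread, a message, and a run.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",  # assumed preview version exposing Assistants
)

# An assistant is a persistent, instructed model; threads hold the conversation.
assistant = client.beta.assistants.create(
    name="math-helper",
    instructions="You are a concise math tutor.",
    model="gpt-4",  # placeholder: the name of your Azure model deployment
)
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Factor x**2 - 5x + 6."
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
print(run.id, run.status)  # poll the run until it reports "completed"
```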

February 2024 - Week 2 Recap

  1. Google launches Ultra 1.0, its largest and most capable AI model, in its ChatGPT-like assistant, which has now been rebranded from Bard to Gemini. Gemini Advanced is available in 150 countries as a premium plan for $19.99/month, starting with a two-month trial at no cost. Google is also rolling out Android and iOS apps for Gemini [Details].
  2. Alibaba Group released the Qwen1.5 series, open-sourcing models in six sizes: 0.5B, 1.8B, 4B, 7B, 14B, and 72B. Qwen1.5-72B outperforms Llama2-70B across all benchmarks. The Qwen1.5 series is available on Ollama and LMStudio, and an API is available on together.ai (see the chat sketch after this list) [Details | Hugging Face].
  3. NVIDIA released Canary 1B, a multilingual model for speech-to-text recognition and translation. Canary transcribes speech in English, Spanish, German, and French, and also generates text with punctuation and capitalization. It supports bi-directional translation between English and the three other supported languages. Canary outperforms the similarly sized Whisper-large-v3 and SeamlessM4T-Medium-v1 on both transcription and translation tasks, and takes first place on the Hugging Face Open ASR leaderboard with an average word error rate of 6.67%, outperforming all other open-source models [Details].
  4. Researchers released Lag-Llama, the first open-source foundation model for time series forecasting [Details].
  5. LAION released BUD-E, an open-source conversational and empathic AI Voice Assistant that uses natural voices, empathy & emotional intelligence and can handle multi-speaker conversations [Details].
  6. MetaVoice released MetaVoice-1B, a 1.2B-parameter base model for text-to-speech (TTS), trained on 100K hours of speech. It supports emotional speech in English and voice cloning. MetaVoice-1B has been released under the Apache 2.0 license [Details].
  7. Bria AI released RMBG v1.4, an open-source background-removal model trained on fully licensed images (see the sketch after this list) [Details].
  8. Researchers introduce InteractiveVideo, a user-centric framework for video generation that is designed for dynamic interaction, allowing users to instruct the generative model during the generation process [Details | GitHub].
  9. Microsoft announced a redesigned look for its Copilot AI search and chatbot experience on the web (formerly known as Bing Chat), new built-in AI image creation and editing functionality, and Deucalion, a fine-tuned model that makes Balanced mode for Copilot richer and faster [Details].
  10. Roblox introduced AI-powered real-time chat translations in 16 languages [Details].
  11. Hugging Face launched Assistants feature on HuggingChat. Assistants are custom chatbots similar to OpenAI’s GPTs that can be built for free using open source LLMs like Mistral, Llama and others [Link].
  12. DeepSeek AI released DeepSeekMath 7B model, a 7B open-source model that approaches the mathematical reasoning capability of GPT-4. DeepSeekMath-Base is initialized with DeepSeek-Coder-Base-v1.5 7B [Details].
  13. Microsoft is launching several collaborations with news organizations to adopt generative AI [Details].
  14. LG Electronics signed a partnership with Korean generative AI startup Upstage to develop small language models (SLMs) for LG’s on-device AI features and AI services on LG notebooks [Details].
  15. Stability AI released SVD 1.1, an updated version of the Stable Video Diffusion model, optimized to generate short AI videos with better motion and more consistency [Details | Hugging Face].
  16. OpenAI and Meta announced that they will label AI-generated images [Details].
  17. Google saves your conversations with Gemini for years by default [Details].
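
A minimal sketch of chatting with one of the open Qwen1.5 checkpoints from item 2 via Hugging Face transformers; the 7B chat variant and generation settings are illustrative choices:

```python
# Hedged sketch: single-turn chat with a Qwen1.5 chat checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Give me a one-line summary of MoE."}]
# apply_chat_template renders the messages into the model's expected prompt format.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=128)
# Slice off the prompt tokens so only the model's reply is decoded.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```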
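
A minimal sketch of background removal with RMBG v1.4 from item 7, assuming the custom image-segmentation pipeline shipped on the model's Hugging Face card (hence trust_remote_code=True); the input file name is a placeholder:

```python
# Hedged sketch: remove the background from a local image with RMBG v1.4.
from transformers import pipeline

remover = pipeline(
    "image-segmentation", model="briaai/RMBG-1.4", trust_remote_code=True
)
cutout = remover("photo.jpg")   # placeholder input; returns a PIL image cutout
cutout.save("photo_no_bg.png")  # PNG preserves the transparent background
```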

February 2024 - Week 1 Recap

  1. Amazon presents Diffuse to Choose, a diffusion-based image-conditioned inpainting model that allows users to virtually place any e-commerce item in any setting, ensuring detailed, semantically coherent blending with realistic lighting and shadows. Code and demo will be released soon [Details].
  2. OpenAI announced two new embedding models, new GPT-4 Turbo and moderation models, new API usage management tools, and lower pricing on GPT-3.5 Turbo. The updated GPT-4 Turbo preview model reduces cases of “laziness” where the model doesn’t complete a task. The new embedding models include a smaller, highly efficient text-embedding-3-small model and a larger, more powerful text-embedding-3-large model (see the embeddings sketch after this list) [Details].
  3. Hugging Face and Google partner to support developers building AI applications [Details].
  4. Adept introduced Adept Fuyu-Heavy, a new multimodal model designed specifically for digital agents. Fuyu-Heavy scores higher on the MMMU benchmark than Gemini Pro [Details].
  5. Fireworks.ai has open-sourced FireLLaVA, a LLaVA multi-modality model trained on instruction-following data generated by OSS LLMs, with a commercially permissive license. Fireworks.ai is also providing both the completions API and the chat completions API to developers [Details].
  6. 01.AI released Yi Vision Language (Yi-VL) model, an open-source, multimodal version of the Yi Large Language Model (LLM) series, enabling content comprehension, recognition, and multi-round conversations about images. Yi-VL adopts the LLaVA architecture and is free for commercial use. Yi-VL-34B is the first open-source 34B vision language model worldwide [Details].
  7. Tencent AI Lab introduced WebVoyager, an innovative Large Multimodal Model (LMM) powered web agent that can complete user instructions end-to-end by interacting with real-world websites [Paper].
  8. Prophetic introduced MORPHEUS-1, a multi-modal generative ultrasonic transformer model designed to induce and stabilize lucid dreams from brain states. Instead of generating words, Morpheus-1 generates ultrasonic holograms for neurostimulation to bring one to a lucid state [Details].
  9. Google Research presented Lumiere, a space-time video diffusion model for text-to-video, image-to-video, stylized generation, inpainting, and cinemagraphs [Details].
  10. TikTok released Depth Anything, an image-based depth estimation method trained jointly on 1.5M labeled images and 62M+ unlabeled images (see the depth-estimation sketch after this list) [Details].
  11. Nightshade, the free tool that ‘poisons’ AI models, is now available for artists to use [Details].
  12. Stability AI released Stable LM 2 1.6B, a 1.6-billion-parameter small language model trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch. Stable LM 2 1.6B can be used both commercially and non-commercially with a Stability AI Membership [Details].
  13. Etsy launched ‘Gift Mode,’ an AI-powered feature designed to match users with tailored gift ideas based on specific preferences [Details].
  14. Google DeepMind presented AutoRT, a framework that uses foundation models to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision. In AutoRT, a VLM describes the scene, an LLM generates robot goals and filters for affordance and safety, then routes execution to policies [Details].
  15. Google Chrome gains AI features, including a writing helper, theme creator, and tab organizer [Details].
  16. Tencent AI Lab released VideoCrafter2 for high-quality text-to-video generation, featuring major improvements in visual quality, motion, and concept composition compared to VideoCrafter1 [Details | Demo].
  17. Google opens beta access to the conversational experience, a new chat-based feature in Google Ads, for English language advertisers in the U.S. & U.K. It will let advertisers create optimized Search campaigns from their website URL by generating relevant ad content, including creatives and keywords [Details].
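
A minimal sketch of the new embedding models from item 2, using the openai Python package; the input text and the optional dimensions value are illustrative:

```python
# Hedged sketch: embed a string with text-embedding-3-small.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.embeddings.create(
    model="text-embedding-3-small",
    input=["AI recap, February 2024"],
    dimensions=256,  # optional: shorten the vector at a small accuracy cost
)
vector = resp.data[0].embedding
print(len(vector))  # -> 256
```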
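
A minimal sketch of monocular depth estimation with Depth Anything from item 10 via the transformers depth-estimation pipeline; the converted "-hf" checkpoint name is assumed from the release, and the input file name is a placeholder:

```python
# Hedged sketch: estimate per-pixel depth for a local image.
from transformers import pipeline

depth = pipeline("depth-estimation", model="LiheYang/depth-anything-small-hf")
result = depth("photo.jpg")              # placeholder input
result["depth"].save("photo_depth.png")  # depth map as a grayscale image
```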
