Qualcomm and Meta Partner to Accelerate AI on Mobile Devices

San Diego-based semiconductor and telecommunications giant Qualcomm has announced a collaboration with Meta to enable Meta's new large language model (LLM), Llama 2, to run on Qualcomm chips in smartphones and PCs. The partnership aims to pair Qualcomm's processors with Meta's open-source Llama 2 models to bring intelligent virtual assistants and other AI applications to mobile devices, potentially yielding faster, more capable voice assistants.

Until now, LLMs have predominantly run in large server farms on Nvidia graphics processors because of their intense computational requirements. That demand has contributed to Nvidia's soaring success: its stock has surged by more than 220% this year. By contrast, companies specializing in cutting-edge processors for mobile devices, such as Qualcomm, have seen more modest gains, with Qualcomm's stock up about 10% in 2023.
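A rough back-of-envelope calculation shows why these models have stayed in the data center: the memory needed just to hold the weights is the parameter count times the bytes per parameter. The sketch below uses Llama 2's published 7B/13B/70B sizes; the comparison to a phone's RAM budget is an illustrative assumption, not a Qualcomm figure.

```python
# Estimate memory needed to hold a model's weights at several precisions.
# Parameter counts are Llama 2's published sizes; the "phone RAM" reference
# point (~8 GB) is an illustrative assumption.
PARAM_COUNTS = {"7B": 7e9, "13B": 13e9, "70B": 70e9}
BYTES_PER_PARAM = {"fp16": 2, "int8": 1, "int4": 0.5}

def weight_memory_gb(n_params: float, precision: str) -> float:
    """Gigabytes required for the weights alone (no activations or KV cache)."""
    return n_params * BYTES_PER_PARAM[precision] / 1e9

for model, n in PARAM_COUNTS.items():
    row = ", ".join(f"{p}: {weight_memory_gb(n, p):.1f} GB" for p in BYTES_PER_PARAM)
    print(f"Llama 2 {model} -> {row}")
# At fp16, even the smallest 7B model needs ~14 GB for weights alone -- more
# RAM than most phones have -- which is why low-precision formats matter on-device.
```

Even under this simplified accounting, only aggressively compressed variants of the smaller models plausibly fit on a handset, which frames the engineering challenge the partnership is taking on.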

The partnership suggests a strategic move by Qualcomm to position its processors as well-suited for on-device AI, rather than relying solely on cloud-based solutions. Running LLMs directly on smartphones and PCs could cut the substantial cost of serving AI models from large data centers, and it promises faster, more responsive voice assistants and other AI-powered applications.

Qualcomm’s chips include a dedicated tensor processing unit (TPU) optimized for the matrix calculations at the heart of AI models. Even so, the processing power available on a mobile device remains far below that of the state-of-the-art GPUs found in data centers.
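One common way to bridge that gap is quantization: storing weights as low-precision integers and converting back to floats on the fly, trading a little accuracy for a large memory saving. The sketch below is a minimal illustration of symmetric int8 post-training quantization, not Qualcomm's actual toolchain:

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple:
    """Symmetric per-tensor quantization: map floats to int8 with one scale."""
    scale = float(np.abs(w).max()) / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)  # stand-in for a weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("storage shrinks 4x (float32 -> int8)")
print("max reconstruction error:", np.abs(w - w_hat).max())
```

Real mobile deployments typically go further (per-channel scales, 4-bit formats, hardware-specific kernels), but the core idea is the same: shrink the weights until the model fits the device.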

Meta’s Llama 2 stands out for its open-source release: the company has published the model’s “weights,” the numerical parameters that define how the model behaves. Access to the weights lets researchers and, in future, commercial entities run the models on their own computers without requiring permission or payment. By contrast, other prominent LLMs, such as OpenAI’s GPT-4 (the model behind ChatGPT) and the model behind Google’s Bard, are closed-source, with their weights closely guarded.
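The reason published weights are sufficient is that a neural network's behavior is fully determined by its architecture plus its weight values: anyone holding both can reproduce the model's outputs exactly. A toy illustration (a hypothetical two-layer network, nothing like Llama 2's real architecture):

```python
import numpy as np

def forward(x: np.ndarray, weights: dict) -> np.ndarray:
    """Tiny two-layer network: the output depends only on x and the weights."""
    h = np.maximum(0, x @ weights["W1"] + weights["b1"])  # ReLU hidden layer
    return h @ weights["W2"] + weights["b2"]

# "Publishing the weights" amounts to sharing these arrays (hypothetical values).
published = {
    "W1": np.array([[1.0, -1.0], [0.5, 2.0]]),
    "b1": np.array([0.0, 0.1]),
    "W2": np.array([[1.0], [-0.5]]),
    "b2": np.array([0.2]),
}

x = np.array([1.0, 2.0])
# Two independent parties loading the same weights get identical outputs:
assert np.allclose(forward(x, published), forward(x, dict(published)))
print(forward(x, published))
```

Scaled up to billions of parameters, this is exactly what Meta's release enables: anyone with the weight files and compatible code can run Llama 2 themselves.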

This collaboration builds on Qualcomm and Meta’s existing close partnership, notably on chips for Meta’s virtual reality headsets such as the Quest. Qualcomm has previously demonstrated AI models, including the open-source image generator Stable Diffusion, running on its chips, albeit more slowly than on data-center hardware.

The announcement marks a significant step toward bringing advanced language models to mobile devices, potentially expanding the reach of AI capabilities. With Qualcomm’s processor expertise and Meta’s open-source approach, running Llama 2 on Qualcomm chips could reshape the landscape of mobile AI applications, delivering enhanced functionality and performance to users. The joint effort is expected to open new possibilities for on-device AI processing, paving the way for further advances in mobile technology.