‘On-device Agentic AI is Here!’: Salesforce Announces the ‘Tiny Giant’ LLM
Salesforce CEO Marc Benioff has introduced the company’s latest AI innovation, the ‘Tiny Giant’ LLM, which he claims is the world’s top-performing “micro-model” for function-calling.
Salesforce’s new slimline “Tiny Giant” LLM reportedly outperforms much larger models, marking a significant advancement in on-device AI. According to a paper published on arXiv by Salesforce’s AI Research department, the xLAM-7B model ranked sixth among 46 models, including those from OpenAI and Google, on a leaderboard testing function-calling (the execution of tasks or functions through API calls).
The xLAM-7B model has just seven billion parameters, a small fraction of the 1.7 trillion parameters GPT-4 is rumored to use. However, Salesforce highlights the even smaller xLAM-1B as its true star. Despite having just one billion parameters, the xLAM-1B model finished 24th, surpassing GPT-3.5-Turbo and Claude-3 Haiku in performance.
CEO Marc Benioff proudly shared these results on X (formerly Twitter), stating: “Meet Salesforce Einstein ‘Tiny Giant.’ Our 1B parameter model xLAM-1B is now the best micro-model for function-calling, outperforming models 7x its size… On-device agentic AI is here. Congrats Salesforce Research!”
Salesforce’s research emphasizes that function-calling agents represent a significant advancement in AI and LLMs. Models like GPT-4, Gemini, and Mistral already execute API calls based on natural language prompts, enabling dynamic interactions with various digital services and applications.
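In practice, function calling means the model emits a structured call (typically JSON) that the host application then executes against a real API. The sketch below is a minimal, generic illustration of that loop; the tool schema, the get_current_weather function, and the hard-coded model output are assumptions made for demonstration and are not drawn from Salesforce’s paper.

```python
import json

# Hypothetical tool definition of the kind a function-calling model is shown.
# Schema layout and the get_current_weather function are illustrative only.
WEATHER_TOOL = {
    "name": "get_current_weather",
    "description": "Fetch the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def get_current_weather(city: str, unit: str = "celsius") -> dict:
    """Stand-in implementation; a real agent would call a weather API here."""
    return {"city": city, "temperature": 21, "unit": unit}

# The model's job: turn "What's the weather in Paris?" into a structured call.
# The model output is hard-coded here to show the shape of the result.
model_output = json.dumps(
    {"name": "get_current_weather", "arguments": {"city": "Paris", "unit": "celsius"}}
)

# The host application parses the call and dispatches it to the matching function.
call = json.loads(model_output)
registry = {"get_current_weather": get_current_weather}
result = registry[call["name"]](**call["arguments"])
print(result)  # {'city': 'Paris', 'temperature': 21, 'unit': 'celsius'}
```

The model never runs the API itself; it only produces the structured call, which is why a compact on-device model can handle the task.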
While many popular models are large and resource-intensive, requiring cloud data centers and extensive infrastructure, Salesforce’s new models demonstrate that smaller, more efficient models can achieve state-of-the-art performance. To test function-calling LLMs, Salesforce developed APIGen, an “Automated Pipeline for Generating verifiable and diverse function-calling datasets,” to synthesize data for AI training.
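Salesforce’s actual APIGen data format is not reproduced in this article; the snippet below is a hypothetical sketch of what a “verifiable” synthetic training example could look like, where the labelled call can be executed and checked rather than merely read. The field names and the convert_currency helper are assumptions for illustration.

```python
# Hypothetical sketch of a verifiable function-calling training example.
# Field names and the convert_currency helper are illustrative assumptions,
# not the format used in Salesforce's APIGen paper.
synthetic_example = {
    "query": "Convert 100 US dollars to euros.",
    "tools": [
        {
            "name": "convert_currency",
            "parameters": {
                "amount": "number",
                "from_currency": "string",
                "to_currency": "string",
            },
        }
    ],
    "ground_truth_call": {
        "name": "convert_currency",
        "arguments": {"amount": 100, "from_currency": "USD", "to_currency": "EUR"},
    },
}

def convert_currency(amount: float, from_currency: str, to_currency: str) -> float:
    """Stand-in converter with a fixed rate so the example is executable."""
    rate = 0.92 if (from_currency, to_currency) == ("USD", "EUR") else 1.0
    return amount * rate

# "Verifiable" here means the labelled call can actually be run: if execution
# fails or returns an implausible result, the example is filtered out of training.
call = synthetic_example["ground_truth_call"]
result = convert_currency(**call["arguments"])
assert result > 0
print(f"Executed {call['name']} -> {result}")
```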
Salesforce’s findings indicate that comparatively small models trained on carefully curated datasets can outperform far larger ones. “Models trained with our curated datasets, even with only seven billion parameters, can achieve state-of-the-art performance… outperforming multiple GPT-4 models,” the paper states.
The ultimate goal is to create agentic AI models capable of function-calling and task execution on devices, minimizing the need for extensive external infrastructure and enabling self-sufficient operations.
Dr. Eli David, Co-Founder of the cybersecurity firm Deep Instinct, commented on X, “Smaller, more efficient models are the way to go for widespread deployment of LLMs.”