‘On-device Agentic AI is Here!’: Salesforce Announces the ‘Tiny Giant’ LLM

Salesforce CEO Marc Benioff has been touting the company’s latest AI innovation, the ‘Tiny Giant’ LLM, which he claims is the world’s top-performing “micro-model” for function-calling.

Salesforce’s new slimline “Tiny Giant” LLM reportedly outperforms much larger models, marking a significant advance in on-device AI. According to a paper published on arXiv by Salesforce’s AI Research department, the xLAM-7B model ranked sixth among 46 models, including entries from OpenAI and Google, in a competition testing function-calling (the execution of tasks or functions through API calls).

The xLAM-7B model has just seven billion parameters, a small fraction of the 1.7 trillion rumored for GPT-4. Salesforce, however, highlights the even smaller xLAM-1B as its true star: despite having just one billion parameters, it finished 24th, ahead of GPT-3.5-Turbo and Claude-3 Haiku.

CEO Marc Benioff proudly shared these results on X (formerly Twitter), stating: “Meet Salesforce Einstein ‘Tiny Giant.’ Our 1B parameter model xLAM-1B is now the best micro-model for function-calling, outperforming models 7x its size… On-device agentic AI is here. Congrats Salesforce Research!”

Salesforce’s research frames function-calling agents as a major step forward for AI and LLMs. Models like GPT-4, Gemini, and Mistral can already execute API calls from natural-language prompts, enabling dynamic interactions with a wide range of digital services and applications.
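To make the term concrete: a function-calling model never executes anything itself. It emits a structured request, and the host application runs the matching code. The Python sketch below illustrates that round trip with a hypothetical get_weather tool and a hard-coded model response; neither comes from Salesforce’s paper.

```python
import json

# Hypothetical tool exposed to the model; the name, signature, and stubbed
# response are illustrative only, not from Salesforce's paper.
def get_weather(city: str) -> str:
    return f"Sunny, 22 C in {city}"

TOOLS = {"get_weather": get_weather}

# Given a prompt like "What's the weather in Paris?", a function-calling
# model emits a structured call rather than running anything itself.
# We hard-code a plausible model output here instead of calling a real LLM.
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

# The host application parses the call and dispatches it to real code.
call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
print(result)  # Sunny, 22 C in Paris
```

The benchmark in question scores models on how reliably they produce that structured call, which is why a small model with well-targeted training data can compete with far larger ones.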

While many popular models are large and resource-intensive, requiring cloud data centers and extensive infrastructure, Salesforce’s new models demonstrate that smaller, more efficient models can achieve state-of-the-art results. To build them, Salesforce developed APIGen, an “Automated Pipeline for Generating verifiable and diverse function-calling datasets,” which synthesizes the data used to train its function-calling LLMs.
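The “verifiable” in APIGen’s name is the key idea: per the paper, each synthesized example is kept only if it survives staged checks (format, execution, and semantics). The sketch below approximates the first two stages in Python; the add_numbers API, the registry, and the sample pair are stand-ins of our own, not Salesforce’s pipeline code.

```python
import json

def format_check(raw_call: str) -> dict | None:
    """Stage 1: the synthesized call must parse as JSON with a name and arguments."""
    try:
        call = json.loads(raw_call)
    except json.JSONDecodeError:
        return None
    if "name" in call and isinstance(call.get("arguments"), dict):
        return call
    return None

def execution_check(call: dict, registry: dict) -> bool:
    """Stage 2: the call must actually run against a live implementation."""
    fn = registry.get(call["name"])
    if fn is None:
        return False
    try:
        fn(**call["arguments"])
        return True
    except Exception:
        return False

def add_numbers(a: float, b: float) -> float:  # toy API for the demo
    return a + b

registry = {"add_numbers": add_numbers}
sample_pair = ("What is 2 plus 3?",
               '{"name": "add_numbers", "arguments": {"a": 2, "b": 3}}')

call = format_check(sample_pair[1])
if call and execution_check(call, registry):
    print("keep:", sample_pair)  # only verified pairs reach the training set
```

Filtering this aggressively trades raw dataset size for correctness, which is consistent with the paper’s claim that curation, not scale, drives the results.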

Salesforce’s findings indicate that models trained on relatively small but carefully curated datasets can outperform those trained on much larger ones. “Models trained with our curated datasets, even with only seven billion parameters, can achieve state-of-the-art performance… outperforming multiple GPT-4 models,” the paper states.

The ultimate goal is to create agentic AI models capable of function-calling and task execution on devices, minimizing the need for extensive external infrastructure and enabling self-sufficient operations.

Dr. Eli David, Co-Founder of the cybersecurity firm Deep Instinct, commented on X, “Smaller, more efficient models are the way to go for widespread deployment of LLMs.”
