In recent weeks, AI professionals had the privilege of attending groundbreaking hands-on workshops at the headquarters of two Silicon Valley giants, Salesforce and Google. These experiences offered a firsthand look at the contrasting approaches the two tech titans are taking to bring enterprise-grade large language model (LLM) applications to scale. Immersed in the cutting-edge world of AI development, attendees felt a sense of excitement and awe at the history unfolding around them.
The workshops provided a fascinating glimpse into the future of enterprise software, where AI is not just a buzzword but a transformative force reshaping how businesses operate. Salesforce and Google, each with their unique strengths and philosophies, are at the forefront of this revolution, pushing the boundaries of what’s possible with LLMs and retrieval-augmented generation (RAG). As they navigated through the hands-on exercises and engaged with the brilliant minds behind these innovations, they realized they were witnessing a pivotal moment in Silicon Valley and computer history.
Salesforce: Low-Code, Business User-Friendly

At the “Build the Future with AI and Data Workshop,” held at Salesforce Tower in downtown San Francisco, the focus was on empowering business users with a low-code, clicks-not-code approach. The workshop, attended by around 100 people, took place in a ballroom-sized auditorium. Each attendee received a free instance of a Generative AI-enabled org, pre-populated with a luxury travel destination application, that expired after 5 days.
Data Cloud: Lots of Clicks

The workshop began with setting up data ingestion and objects to link AWS S3 buckets to Salesforce’s Data Cloud. The process was intricate, involving new nomenclature reminiscent of SQL views built on views, and it required a considerable number of setup steps before Prompt Builder could be reached.
It should be noted that first-time Einstein Studio users don’t normally need to perform Data Cloud setup; it was done in this workshop so that Data Cloud embeddings could later be included in a Prompt Builder retrieval.
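The “views built on views” pattern that the Data Cloud nomenclature evokes can be illustrated with a small, self-contained sketch. This uses SQLite purely for illustration; the table and view names are hypothetical stand-ins, not actual Data Cloud objects:

```python
import sqlite3

# In-memory database standing in for ingested S3 data (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER, destination TEXT, price REAL)")
conn.executemany("INSERT INTO bookings VALUES (?, ?, ?)",
                 [(1, "Maldives", 9800.0), (2, "Paris", 4200.0), (3, "Kyoto", 5100.0)])

# First view: a cleaned-up projection of the raw table.
conn.execute("CREATE VIEW luxury_bookings AS "
             "SELECT destination, price FROM bookings WHERE price > 5000")

# Second view layered on the first: the 'view within a view' pattern.
conn.execute("CREATE VIEW luxury_summary AS "
             "SELECT COUNT(*) AS n, AVG(price) AS avg_price FROM luxury_bookings")

n, avg_price = conn.execute("SELECT n, avg_price FROM luxury_summary").fetchone()
print(n, avg_price)  # -> 2 7450.0 (Maldives and Kyoto qualify)
```

Each layer hides the one beneath it, which is convenient once configured but means several setup steps before the top-level object exists, much like the clicks the workshop required.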
Prompt Builder: Easy to Use

Prompt Builder was the highlight of the workshop. It allows for template variables and various prompt types, including the intriguing Field Prompt, which enables users to attach a prompt to a field. When editing a record, clicking the wizard button on that field executes the prompt, filling out the field automatically. This feature has the potential to greatly enhance data richness, with numerous use cases across industries.
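Conceptually, a Field Prompt is a template whose variables are resolved from the current record before the LLM is called. A minimal, framework-free sketch of that idea follows; the `{field}` placeholder syntax and the field names are illustrative, not Salesforce’s actual merge-field syntax:

```python
import re

def render_prompt(template: str, record: dict) -> str:
    """Replace {field} placeholders with values from the record."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(record.get(m.group(1), "")), template)

# Hypothetical field-prompt template for a "Description" field.
template = ("Write a one-sentence luxury travel description for "
            "{destination}, highlighting {highlight}.")

record = {"destination": "Kyoto", "highlight": "temple gardens in autumn"}
prompt = render_prompt(template, record)
print(prompt)
```

The rendered prompt is what gets sent to the model; the model’s reply is what lands back in the field when the wizard button is clicked.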
Integrating Flow and Apex with Prompt Builder demonstrated the platform’s flexibility. Attendees created an Apex class using Code Builder that returned a list, which Prompt Builder could then use to formulate a reply. The seamless integration of these components showcased Salesforce’s commitment to providing a cohesive, user-friendly experience.
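The underlying pattern is simple: code returns a list of records, and the prompt grounds the model’s reply in that list. A short sketch of the pattern, with the function name, catalog data, and prompt wording all hypothetical (in the workshop, an Apex class played this role):

```python
def get_recommended_destinations(budget: float) -> list[str]:
    # Stand-in for the Apex class: return a list the prompt can ground on.
    catalog = {"Maldives": 9800.0, "Paris": 4200.0, "Kyoto": 5100.0}
    return [name for name, price in catalog.items() if price <= budget]

def build_grounded_prompt(budget: float) -> str:
    options = get_recommended_destinations(budget)
    return ("Reply to the customer using ONLY these destinations: "
            + ", ".join(options))

prompt = build_grounded_prompt(6000.0)
print(prompt)  # grounded on Paris and Kyoto only
```

Constraining the reply to the returned list is what keeps the generated text anchored to real data rather than the model’s imagination.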
Einstein Copilot, Salesforce’s AI assistant, exhibited solid out-of-the-box capabilities and could be extended with custom actions: by wrapping a Flow in a custom action, users could invoke Einstein Copilot to assist with a variety of tasks.
A Warmly Received Roadmap

Salesforce managers, including SVP of Product Management John Kucera, provided insights into the Generative AI roadmap during a briefing session. They emphasized upcoming features such as Recommended Actions, which package prompts into buttons, and improved context understanding for Einstein Copilot. The atmosphere in the room was warm, with genuine excitement and a sense of collaboration between Salesforce staff and attendees.
The workshop positioned Salesforce’s AI solution as an alternative to hiring an AI programmer and building AI orchestration with tools like those featured in the Google workshop. Salesforce’s approach centers on a user-friendly interface for setting up data sources and custom actions, enabling users to leverage AI without writing code. This low-code philosophy aims to democratize AI, making it accessible to a broader range of business users.
For organizations already invested in the Salesforce ecosystem, the platform’s embedded AI capabilities offer a compelling way to build expertise and leverage the power of Data Cloud. Salesforce’s commitment to rapidly rolling out embedded AI enhancements, all building on the familiar Admin user experience, makes it an attractive option for businesses seeking to adopt AI without the steep learning curve associated with coding.
While there was palpable enthusiasm among attendees, the workshop also highlighted the complexity of setting up data sources and the challenges of working with a new nomenclature. As Salesforce continues to refine its AI offerings, striking the right balance between flexibility and ease of use will be crucial to widespread adoption.
Google: Engineering-Centric, Code-Intensive

The “Build LLM-Powered Apps with Google” workshop, held on the Google campus in Mountain View, attracted around 150 attendees, primarily developers and engineers, who met in a large meeting room with circular tables. The event kicked off with a keynote presentation and detailed descriptions of Google’s efforts in building retrieval-augmented generation (RAG) pipelines. The hands-on portion involved building a RAG database for an “SFO Assistant” chatbot designed to assist passengers at San Francisco International Airport.
Running Postgres and pgvector with BigQuery

Using Google Cloud Platform, attendees created a new VM running Postgres with the pgvector extension, executed a series of commands to load the SFO database, and established a connection between Gemini and the database. The workshop provided step-by-step guidance, with Google staff helping when needed. Ultimately, attendees had a working chatbot backed by the RAG database.
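At its core, the pgvector step stores one embedding per row and retrieves the rows nearest to a query embedding; in SQL this is an ORDER BY on a vector distance operator. Here is a dependency-free sketch of that retrieval logic, with toy 3-dimensional vectors in place of real model embeddings and made-up airport documents:

```python
import math

def cosine_distance(a, b):
    """Cosine distance (1 - cosine similarity), as pgvector's <=> operator computes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

# Toy document store: (text, embedding). Real embeddings come from a model.
docs = [
    ("Terminal 2 has a yoga room.",        [0.9, 0.1, 0.0]),
    ("BART trains leave from the garage.", [0.1, 0.9, 0.1]),
    ("Lounges are in Terminal 3.",         [0.8, 0.2, 0.1]),
]

def retrieve(query_embedding, k=2):
    """Equivalent of: SELECT text FROM docs ORDER BY embedding <=> query LIMIT k."""
    ranked = sorted(docs, key=lambda d: cosine_distance(d[1], query_embedding))
    return [text for text, _ in ranked[:k]]

context = retrieve([1.0, 0.0, 0.0])
print(context)
```

The retrieved rows become the context stuffed into the chatbot’s prompt; the database part of RAG is just this nearest-neighbor lookup done at scale.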
The workshop also showcased the power of BigQuery in generating prompts at scale through SQL statements. By crafting SQL queries that combined prompt engineering with retrieved data, they learned how to create personalized content, such as emails, for a group of customers in a single step. This demonstration highlighted the potential for efficient, large-scale content generation using Google’s tools.
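The idea of generating prompts at scale, one personalized prompt per row, can be mimicked with a plain loop over rows. The customer data and prompt wording below are made up; in BigQuery this would be a single SELECT that concatenates a template with each row’s columns:

```python
# Hypothetical customer rows, standing in for a BigQuery table.
customers = [
    {"name": "Ana", "last_destination": "Kyoto"},
    {"name": "Raj", "last_destination": "Paris"},
]

# Template applied per row, as a SQL CONCAT(...) expression would be.
template = ("Write a short email to {name} suggesting a return trip "
            "to {last_destination}.")

prompts = [template.format(**row) for row in customers]
for p in prompts:
    print(p)
```

Pushing this into SQL means the prompt for every customer is produced in one query, which is what makes single-step, large-scale content generation practical.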
Gemini Assistant

One of the most exciting discoveries of the workshop was the Gemini Assistant for BigQuery, a standout IT Companion Chatbot tailored to the GCP ecosystem. Comparable to GitHub Copilot Chat or ChatGPT Plus, Gemini Assistant demonstrated a deep understanding of GCP and the ability to generate code snippets in various programming languages. What distinguishes Gemini Assistant is its strong grounding in GCP knowledge, enabling it to provide contextually relevant and accurate responses.
During the workshop, they had the opportunity to interact with Gemini Assistant firsthand. They were impressed by its ability to generate Python code and complex BigQuery SQL statements from simple text descriptions. This level of sophistication and contextual awareness has the potential to revolutionize how developers and engineers work within the GCP ecosystem, boosting productivity and simplifying complex tasks. Moreover, Gemini Assistant often provides sources, such as blog posts or GitHub repositories, to support its answers, enhancing confidence in its outputs.
Product Presentations

In addition to the hands-on workshop, attendees saw product presentations covering Vertex AI, Gemini Chat, Model Garden, and Anthropic’s Claude. These presentations offered insights into the latest advancements in Google’s AI ecosystem and its collaborations with leading AI companies. A presentation on Arize Phoenix, an open-source project, introduced the “five pillars of LLM Observability,” underscoring the nascent stage of LLM app development and the critical importance of monitoring and understanding the behavior of these complex systems.
The Google workshop highlighted the company’s engineering-centric approach to AI development, emphasizing the use of code and CLI/API interactions to set up and operate systems. This contrasts with Salesforce’s low-code, business user-friendly approach, catering to different user personas and skill sets. However, both companies face the shared challenge of securing RAG embeddings, with Google actively developing new RAG security features in AlloyDB (its managed Postgres service) to address this concern.
Lots of Builders and Data Scientists

Throughout the event, attendees had the opportunity to connect with professionals from various Silicon Valley giants, including HP, Cisco, and Apple. These interactions revealed a wide array of applications under development, ranging from customer chatbots and finance administration tools to enterprise search solutions. The diverse use cases and the palpable enthusiasm among attendees underscored the growing importance and potential of enterprise AI.
Attending the Google workshop provided valuable insights into the company’s cutting-edge AI technologies, its commitment to empowering developers, and its collaborative approach to driving innovation. As Google continues to refine its AI offerings and address key challenges, such as data security and observability, it is well-positioned to play a pivotal role in shaping the future of enterprise AI.
Analysis: Different Approaches, Common Challenges

While Salesforce and Google cater to different audiences – business users and engineers, respectively – both face the critical challenge of securing RAG embeddings. Google is addressing this with new RAG security features in AlloyDB, while Salesforce’s Einstein Trust Layer leverages existing metadata and security models for CRM data. Salesforce’s decades of experience implementing metadata and enterprise security models give it a leg up in the basic architecture required to secure RAG embeddings.
Salesforce’s low-code approach prioritizes ease of use, while Google’s engineering-centric model offers flexibility and scale. Google’s massive data processing capabilities give it an edge for handling large volumes of inferences, while Salesforce’s strength lies in its deep integration with the Salesforce ecosystem.
However, a notable omission in Salesforce’s offerings is the lack of an IT Companion Chatbot. While Salesforce ISVs like Copado, Elements.cloud, and Metazoa are filling this gap, Salesforce’s absence in this area is concerning. An IT Companion Chatbot is crucial for alleviating the cognitive load of IT professionals, providing real-time support and solutions in complex environments like Salesforce and DevOps.
The presence of a Googler at the Salesforce event, advocating for the integration of Gemini into Salesforce’s Model Builder, highlights the recognition of Google’s superior AI capabilities. Although Salesforce management’s response suggested openness to collaboration, it also hinted at the challenges Salesforce faces in keeping pace with AI advancements.
Looking Ahead

The workshops at Salesforce and Google showcased the rapid pace of innovation and the growing importance of AI in shaping the future of enterprise software. As these tech giants continue to evolve their AI offerings, the focus on data security, observability, and ease of use will be paramount.
Salesforce’s low-code, user-friendly approach and deep understanding of enterprise needs position it well, but the lack of an IT Companion Chatbot remains a significant gap. Google’s technological superiority and vast resources give it a clear advantage, while Salesforce’s ecosystem and customer relationships could help offset its limitations.
As a long-time Bay Area denizen, witnessing this pivotal moment in Silicon Valley and computer history was both exciting and thought-provoking. The race is on to provide the most compelling and trustworthy solutions for enterprise AI adoption, and the choices made by Salesforce, Google, and other players will shape the future of work for years to come.