Salesforce Einstein - gettectonic.com - Page 5

Salesforce Feeding Post-Pandemic AI-Powered Digital Transformation

The Digital Transformation Imperative: Salesforce's AI Solutions

The COVID-19 pandemic didn't just accelerate digital transformation; it cemented it as an existential imperative for businesses across all industries. The sudden shift to remote work, digital customer engagement, and e-commerce highlighted the stark contrast between organizations that had prioritized digitization and those that hadn't. In the post-pandemic era, digital agility has become synonymous with resilience and competitiveness, and Salesforce is feeding post-pandemic, AI-powered digital transformation with unparalleled innovation.

However, the path to digital transformation remains challenging for many companies. Legacy systems, data silos, and manual processes continue to hinder adaptation and innovation at the pace demanded by today's market and consumers. This has led to a certain weariness and skepticism around transformation initiatives, which are often perceived as an ever-receding target.

Salesforce's AI-Powered Integration Solutions

Salesforce's AI-powered integration solutions aim to revitalize the digital transformation journey. With tools like Einstein for Flow, Intelligent Document Processing (IDP), and Einstein for MuleSoft, Salesforce is embedding AI across its automation and integration portfolio to address some of the most difficult challenges in digitization.

Anypoint Partner Manager: Harnessing AI for B2B Integration

Salesforce's latest MuleSoft offering, Anypoint Partner Manager, exemplifies this AI-centric approach. The cloud-native B2B integration solution leverages IDP to streamline partner onboarding and manage API- and EDI-based transactions, addressing a key pain point for companies in complex supply chain ecosystems.

"EDI has historically been that code-driven solution. You must really know the EDI spec," noted Andrew Comstock, VP of Product Management at Salesforce. "Partner Manager actually brings the partner definition into a form, and you can just define that, save it, and you're off and done. We can deploy all the applications that you need for you."

By using AI to extract and structure data from unstructured documents like invoices and purchase orders, Anypoint Partner Manager democratizes B2B integration, making it accessible to businesses beyond the traditional technology sector. The solution is now generally available.

MuleSoft Accelerator for Salesforce Order Management: Bridging B2B and B2C

Salesforce also introduced the MuleSoft Accelerator for Salesforce Order Management. This tool provides pre-built APIs, connectors, and templates to unify B2B and B2C orders from a centralized hub. By connecting Salesforce OMS with ERP systems in real time, the accelerator enables end-to-end visibility across channels, a critical capability in today's omnichannel environment.

"For many companies, [order management] is super critical and vital," emphasized Comstock. "The more that they can standardize and centralize that, the better visibility, controls, and governance they have." The MuleSoft Accelerator for Salesforce OMS is now generally available.

The AI Imperative in Digital Transformation

Salesforce's AI-powered integration solutions come at a time when businesses are grappling with the realities of the post-pandemic digital imperative. Automating complex B2B processes, unifying data flows across ecosystems, and extracting insights from unstructured data are no longer luxuries but necessities for survival in the digital economy.
"A lot of our successes are happening at companies that are not traditional technology companies. Using solutions like MuleSoft and Salesforce allows them to build those technologies better," noted Comstock.

In this context, AI is emerging as a key enabler of digital transformation at scale. By abstracting complexity and automating manual tasks, AI-powered integration tools like those from Salesforce are helping businesses overcome the hurdles that have long stymied digitization efforts.

For companies still wrestling with the challenges of digital transformation, Salesforce's AI-powered integration portfolio offers a glimmer of hope. By harnessing the power of large language models and other AI technologies to streamline integration and automation, Salesforce is providing a new path forward for organizations looking to thrive in the post-pandemic digital landscape, with Einstein, MuleSoft, Flow, and more.


Databricks LakeFlow

Databricks Introduces LakeFlow: Simplifying Data Engineering

Databricks, the Data and AI company, yesterday announced the launch of Databricks LakeFlow, a new solution designed to unify and simplify all aspects of data engineering, from data ingestion to transformation and orchestration. LakeFlow enables data teams to efficiently ingest data at scale from databases like MySQL, Postgres, and Oracle, as well as from enterprise applications such as Salesforce, Dynamics, SharePoint, Workday, NetSuite, and Google Analytics. Additionally, Databricks is introducing Real Time Mode for Apache Spark, allowing ultra-low-latency stream processing.

Simplified Data Engineering with LakeFlow

LakeFlow automates the deployment, operation, and monitoring of data pipelines at scale, with built-in support for CI/CD and advanced workflows that include triggering, branching, and conditional execution (a toy illustration of these concepts follows this article). It integrates data quality checks and health monitoring with alerting systems such as PagerDuty, simplifying the process of building and operating production-grade data pipelines. This efficiency enables data teams to meet the growing demand for reliable data and AI.

Tackling Data Pipeline Challenges

Data engineering is crucial for democratizing data and AI within businesses but remains complex and challenging. Data teams often struggle with ingesting data from siloed, proprietary systems and with managing intricate logic for data preparation. Failures and latency spikes can disrupt operations and disappoint customers. The deployment of pipelines and the monitoring of data quality typically involve disparate tools, complicating the process further. Fragmented solutions lead to low data quality, reliability issues, high costs, and growing backlogs. LakeFlow addresses these challenges by providing a unified experience on the Databricks Data Intelligence Platform, with deep integrations with Unity Catalog for end-to-end governance and serverless compute for efficient and scalable execution.

Key Features of LakeFlow

LakeFlow comprises three components: LakeFlow Connect for ingestion, LakeFlow Pipelines for transformation, and LakeFlow Jobs for orchestration.

Availability

LakeFlow represents the future of unified and intelligent data engineering. The preview phase will begin soon, starting with LakeFlow Connect.
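The announcement above names triggering, branching, and conditional execution as core workflow features. As a rough illustration of what those concepts mean in any orchestrator, here is a toy runner in plain Python; it is a sketch of the general idea, not LakeFlow's actual API, and every function and field name in it is hypothetical.

```python
# Illustrative only: a toy orchestrator showing triggering, branching,
# and conditional execution. It does not use any Databricks/LakeFlow API.

def ingest():
    print("ingesting from source systems...")
    return {"rows": 1200, "errors": 0}

def transform(stats):
    print(f"transforming {stats['rows']} rows...")
    return {"quality_ok": stats["errors"] == 0}

def alert(stats):
    # In a real pipeline this might page an on-call engineer (e.g. PagerDuty).
    print(f"data-quality alert: {stats['errors']} ingestion errors")

def run_pipeline():
    stats = ingest()              # a triggered run starts here
    if stats["errors"] > 0:       # conditional execution
        alert(stats)              # branch: failure path
        return
    result = transform(stats)     # branch: happy path
    print("pipeline done, quality_ok =", result["quality_ok"])

run_pipeline()
```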


Adopt a Large Language Model

In 2023, Algo Communications, a Canadian company, faced a significant challenge. With rapid growth on the horizon, the company struggled to train customer service representatives (CSRs) quickly enough to keep pace. To address this, Algo turned to an innovative solution: it needed to adopt a large language model.

Algo adopted a large language model (LLM) to accelerate the onboarding of new CSRs. However, to ensure CSRs could accurately and fluently respond to complex customer queries, Algo needed more than a generic, off-the-shelf LLM. These models, typically trained on public internet data, lack the specific business context required for accurate answers. This led Algo to use retrieval-augmented generation, or RAG.

Many people have already used generative AI models like OpenAI's ChatGPT or Google's Gemini (formerly Bard) for tasks like writing emails or crafting social media posts. However, achieving the best results can be challenging without mastering the art of crafting precise prompts. An AI model is only as effective as the data it's trained on. For optimal performance, it needs accurate, contextual information rather than generic data. Off-the-shelf LLMs often lack up-to-date, reliable access to your specific data and customer relationships. RAG addresses this by embedding the most current and relevant proprietary data directly into LLM prompts. RAG isn't limited to structured data like spreadsheets or relational databases. It can retrieve all types of data, including unstructured data such as emails, PDFs, chat logs, and social media posts, enhancing the AI's output quality.

How RAG Works

RAG enables companies to retrieve and utilize data from various internal sources for improved AI results. By using your own trusted data, RAG reduces or eliminates hallucinations and incorrect outputs, ensuring responses are relevant and accurate. This process involves a specialized database called a vector database, which stores data in a numerical format suitable for AI and retrieves it when prompted. (A minimal code sketch of this retrieve-then-prompt loop appears at the end of this article.)

"RAG can't do its job without the vector database doing its job," said Ryan Schellack, Director of AI Product Marketing at Salesforce. "The two go hand in hand. Supporting retrieval-augmented generation means supporting a vector store and a machine-learning search mechanism designed for that data."

RAG, combined with a vector database, significantly enhances LLM outputs. However, users still need to understand the basics of crafting clear prompts.

Faster Responses to Complex Questions

In December 2023, Algo Communications began testing RAG with a few CSRs using a small sample of about 10% of its product base. They incorporated vast amounts of unstructured data, including chat logs and two years of email history, into their vector database. After about two months, CSRs became comfortable with the tool, leading to a wider rollout. In just two months, Algo's customer service team improved case resolution times by 67%, allowing them to handle new inquiries more efficiently.

"Exploring RAG helped us understand we could integrate much more data," said Ryan Zoehner, Vice President of Commercial Operations at Algo Communications. "It enabled us to provide detailed, technically savvy responses, enhancing customer confidence."

RAG now touches 60% of Algo's products and continues to expand. The company is continually adding new chat logs and conversations to the database, further enriching the AI's contextual understanding.
This approach has halved onboarding time, supporting Algo's rapid growth. "RAG is making us more efficient," Zoehner said. "It enhances job satisfaction and speeds up onboarding. Unlike other LLM efforts, RAG lets us maintain our brand identity and company ethos."

RAG has also allowed Algo's CSRs to focus more on personalizing customer interactions. "It allows our team to ensure responses resonate well," Zoehner said. "This human touch aligns with our brand and ensures quality across all interactions."

Write Better Prompts

If you want to learn how to craft effective generative AI prompts or use Salesforce's Prompt Builder, check out Trailhead, Salesforce's free online learning platform, starting with the trail "Get Started with Prompts and Prompt Builder."
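To make the retrieve-then-prompt loop described under "How RAG Works" concrete, here is a minimal sketch. It assumes an embedding function is available; the embed() stub below is a placeholder standing in for a real embedding model, and the documents and query are invented.

```python
# A minimal RAG sketch: embed documents, retrieve the nearest ones for a
# query, and splice them into the LLM prompt as context.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system calls an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

documents = [
    "Chat log: customer asked how to reset the VoIP handset...",
    "Email: firmware 2.4 fixes the echo issue on model X100...",
]
index = np.stack([embed(d) for d in documents])  # the "vector database"

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(-sims)[:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I fix echo on the X100?"))
```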


A Cautionary AI Tale

Oliver Lovstrom, an AI student, wrote an interesting perspective on artificial intelligence: a cautionary AI tale, if you will.

The Theory and Fairy Tale

My first introduction to artificial intelligence was during high school, when I began exploring its theories and captivating aspects. In 2018, as self-driving cars were gaining traction, I decided to create a simple autonomous vehicle for my final project. This project filled me with excitement and hope, spurring my continued interest and learning in AI. However, I had no idea that within a few years, AI would become significantly more advanced and accessible, reaching the masses through affordable robots. For instance, who could have imagined that just two years later, we would have access to incredible AI models like ChatGPT and Gemini, developed by tech giants?

The Dark Side of Accessibility

My concerns grew as I observed the surge in global cybersecurity issues driven by advanced language-model-powered bots. Nowadays, it's rare to go a day without hearing about some form of cybercrime somewhere in the world.

A Brief Intro to AI for Beginners

To understand the risks associated with AI, we must first comprehend what AI is and its inspiration: the human brain. In biology, I learned that the human brain consists of neurons that communicate with sensory organs and with other neurons, and that learning determines the signals they send. Throughout our lives, we learn to associate different external stimuli (inputs) with sensory outputs, like emotions. Imagine returning to your childhood home. Walking in, you are immediately overwhelmed by nostalgia. This is a learned response, where the sensory input (the scene) passes through a network of billions of neurons, triggering an emotional output. Similarly, I began learning about artificial neural networks, which mimic this behavior in computers.

Artificial Neural Networks

Just as biological neurons communicate within our brains, artificial neural networks try to replicate this in computers. Each node in such a network represents an artificial neuron, all connected and communicating with one another. Sensory inputs, like a scene, enter the network, and the resulting output, such as an emotion, emerges from the network's processing. A unique feature of these networks is their ability to learn. Initially, an untrained neural network might produce random outputs for a given input. However, with training, these networks learn to associate specific inputs with particular outputs, mirroring the learning process of the human brain. This capability can be leveraged to handle tedious tasks, but there are deeper implications to explore.

The Wishing Well

As AI technology advances, it begins to resemble a wishing well from a fairy tale: a tool that could fulfill any desire, for better or worse. In 2022, the release of ChatGPT and various generative AI tools astonished many. For the first time, people had free access to a system capable of generating coherent and contextually appropriate responses to almost any prompt. And this is just the beginning.

Multimodal AI and the Next Step

I explored multimodal AI, which allows the processing of data in different formats, such as text, images, audio, and possibly even physical actions. This development supports the "wishing well" hypothesis, but it also revealed a darker side of AI.

The Villains

While a wishing well in fairy tales is associated with good intentions and moral outcomes, the reality of AI is more complex.
The morality of AI usage depends on the people who wield it, and the potential for harm by a single bad actor is immense.

The Big Actors and Bad Apples

The control of AI technology is likely to be held by powerful entities, whether governments or private corporations. Speculating on their use of this technology can be unsettling. While we might hope AI acts as a deterrent, similar to nuclear weapons, AI's invisibility and potential for silent harm make it particularly dangerous. We are already witnessing malicious uses of AI, from fake kidnappings to deepfakes, impacting everyone from ordinary people to politicians. As AI becomes more accessible, the risk of bad actors exploiting it grows. Even if AI maintains peace on a global scale, the issue of individuals causing harm remains: a few bad apples can spoil the bunch.

Unexpected Actions and the Future

AI systems today can perform unexpected actions, often through jailbreaking, that is, manipulating models into giving unintended information. While the consequences might currently seem minor, they could escalate significantly in the future. AI does not follow predetermined rules but chooses the "best" path to achieve a goal, often learned independently of human oversight. This unpredictability, especially in multimodal models, is alarming. Consider an AI tasked with making pancakes. It might need money for ingredients and, depending on what it has learned, might resort to creating deepfakes for blackmail. This scenario, though seemingly absurd, highlights potential dangers as AI evolves with the growth of IoT, quantum computing, and big data, leading to superintelligent, self-managing systems. As AI surpasses human intelligence, more issues will emerge, potentially leading to a loss of control. Dr. Yildiz, an AI expert, highlighted these concerns in a story titled "Artificial Intelligence Does Not Concern Me, but Artificial Super-Intelligence Frightens Me."

Hope and Optimism

Despite the fears surrounding AI, I remain hopeful. We are still in the early stages of this technology, providing ample time to course-correct. This can be achieved by recognizing the risks, fostering ethical AI systems, and raising a morally conscious new generation. Although I emphasized potential dangers, my intent is not to incite fear. Like previous industrial and digital revolutions, AI has the potential to greatly enhance our lives. I stay optimistic and continue my studies to contribute positively to the field. The takeaway from my story is that by using AI ethically and collaboratively, we can harness its power for positive change and a better future for everyone.

This article by Oliver Lovstrom was originally published on Medium.


Gen AI Unleashed With Vector Database

Salesforce Unveils Data Cloud Vector Database with GenAI Integration

Salesforce has officially launched its Data Cloud Vector Database, leveraging GenAI to rapidly process a company's vast collection of PDFs, emails, transcripts, online reviews, and other unstructured data. Rahul Auradkar, Executive Vice President and General Manager of Salesforce Unified Data Services and Einstein Units, highlighted the efficiency gains in a one-on-one briefing with InformationWeek, demonstrating the new capabilities through a live demo that showcased the potential of the Data Cloud Vector Database.

Enhanced Efficiency and Data Utilization

The new Data Cloud integrates with the Einstein 1 platform, combining unstructured and structured data for rapid analysis by sales, marketing, and customer service teams. This integration significantly enhances the accuracy of Einstein Copilot, Salesforce's enterprise conversational AI assistant. Auradkar demonstrated how a customer service query could retrieve multiple relevant results within seconds. This process, which typically takes hours of manual effort, now leverages unstructured data, which makes up 90% of customer data, to deliver swift and accurate results.

"This advancement allows our customers to harness the full potential of 90% of their enterprise data—unstructured data that has been underutilized or siloed—to drive use cases, AI, automation, and analytics experiences across both structured and unstructured data," Auradkar explained.

Comprehensive Data Management

Using Salesforce's Einstein 1 platform, Data Cloud enables users to ingest, store, unify, index, and perform semantic queries on unstructured data across all applications. This data encompasses diverse unstructured content from websites, social media platforms, and other sources, resulting in more accurate outcomes and insights. (A generic illustration of a semantic query over indexed text follows this article.)

Auradkar emphasized, "This represents an order of magnitude improvement in productivity and customer satisfaction. For instance, a large shipping company with thousands of customer cases can now categorize and access necessary information far more efficiently."

Additional Announcements

Salesforce also introduced several new AI and Data Cloud features. Auradkar noted that these innovations enhance Salesforce's competitive edge by prioritizing flexibility and enabling customers to take control of their data. "We'll continue on this journey," Auradkar said. "Our future investments will focus on how this product evolves and scales. We're building significant flexibility for our customers to use any model they choose, including any large language model."

For more insights and updates, visit Salesforce's official announcements and stay tuned for further developments.
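As a generic illustration of what "index unstructured data and run a semantic query" means, the sketch below uses the open-source sentence-transformers library. This is not the Data Cloud API; the snippets and query are invented for illustration.

```python
# Semantic search in miniature: embed text snippets, then rank them by
# similarity to a natural-language query.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

snippets = [
    "Review: the delivery arrived two weeks late and the box was damaged.",
    "Transcript: agent confirmed the refund was issued on May 3rd.",
    "Email: customer requests an update on shipping container 4471.",
]
corpus_embeddings = model.encode(snippets, convert_to_tensor=True)

query = "Which cases mention late or delayed shipments?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank every snippet by semantic similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), snippets[hit["corpus_id"]])
```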


ChatGPT 5.0 is Coming

Sam Altman Teases ChatGPT-5: Here's What We Know

GPT-5: A Major Leap in AI

Following the release of GPT-4, anticipation for its successor, GPT-5, has been growing. According to reports from Business Insider, GPT-5 is expected to debut in mid-2024, potentially marking a significant advancement in AI capabilities. Insiders describe GPT-5 as "materially better," with enhancements that could transform AI-driven communication and composition.

The Journey to GPT-5

After GPT-4's launch, speculation about GPT-5's arrival intensified. OpenAI CEO Sam Altman has hinted at the upcoming release, promising groundbreaking advancements. However, concrete details were scarce until recent reports provided a clearer timeline for GPT-5's debut.

What to Expect from GPT-5

Early demonstrations of GPT-5 have impressed insiders, with one CEO describing it as "really good." The model promises significant improvements, showcasing its versatility in real-world applications. From unique use cases for individual enterprises to autonomous AI agents, GPT-5 is poised to expand the boundaries of AI capabilities.

Evolution of Language Models

Understanding GPT-5's significance involves tracing the evolution of OpenAI's language models. From the groundbreaking GPT-3 in 2020 to the iterative improvements leading to GPT-4 Turbo, each iteration has advanced the sophistication of AI-driven communication tools.

A Multimodal Approach

Building on its predecessors, GPT-5 is expected to offer a multimodal experience, integrating text and encoded visual input. This capability opens up numerous applications, from content generation to image captioning, further embedding AI in various domains.

Next-Token Prediction and Conversational AI

At its core, GPT-5 remains a next-token prediction model, generating contextually relevant responses based on input prompts. This functionality underpins conversational AI applications like ChatGPT, enabling seamless user-AI interactions. (A toy sketch of next-token prediction appears at the end of this article.)

Challenges and Opportunities Ahead

As OpenAI prepares for GPT-5's launch, the focus shifts to the challenges and opportunities it presents. Between addressing concerns about model performance and reliability and exploring novel use cases, the journey towards realizing the full potential of AI-driven language models is filled with possibilities.

Ensuring Safety and Reliability

Ensuring the success of GPT-5 involves rigorous testing and validation to guarantee its safety and reliability. As AI continues to advance, maintaining transparency and accountability in its development is crucial.

Unlocking New Frontiers

Beyond immediate applications, GPT-5 represents a significant step towards unlocking new frontiers in AI innovation. From enhancing natural language understanding to facilitating human-machine collaboration, the implications of GPT-5 extend far beyond its initial release.
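To illustrate next-token prediction in miniature, the toy below repeatedly samples a plausible next word from a hand-written probability table. Real GPT models compute these distributions with a neural network over a vocabulary of tens of thousands of tokens; the tiny table here is purely hypothetical.

```python
# Toy next-token prediction: given the current token, sample the next one
# from a probability distribution, then repeat until an end token.
import random

# Hypothetical bigram "model": P(next | current), hand-written.
model = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "end": 0.3},
    "dog": {"sat": 0.6, "end": 0.4},
    "sat": {"end": 1.0},
}

def generate(token: str, max_len: int = 10) -> list[str]:
    out = [token]
    for _ in range(max_len):
        dist = model[out[-1]]
        nxt = random.choices(list(dist), weights=dist.values())[0]
        if nxt == "end":
            break
        out.append(nxt)
    return out

print(" ".join(generate("the")))  # e.g. "the cat sat"
```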


GPT-4o, GPT-4, and Gemini 1.5

An Independent Analysis of GPT-4o's Classification Abilities

Article by Lars Wilk

OpenAI's recent unveiling of GPT-4o marks a significant advancement in AI language models, transforming how we interact with them. The most impressive feature is the live interaction capability with ChatGPT, allowing for seamless conversational interruptions. Despite a few hiccups during the live demo, the achievements of the OpenAI team are undeniably impressive. Best of all, immediately after the demo, OpenAI granted access to the GPT-4o API. In this article, I will present my independent analysis, comparing the classification abilities of GPT-4o with GPT-4, Google's Gemini, and Unicorn models using an English dataset I created. Which of these models is the strongest at understanding English?

What's New with GPT-4o?

GPT-4o introduces the concept of an Omni model, designed to seamlessly process text, audio, and video. OpenAI aims to democratize GPT-4-level intelligence, making it accessible even to free users. Enhanced quality and speed across more than 50 languages, combined with a lower price point, promise a more inclusive and globally accessible AI experience. Additionally, paid subscribers will benefit from five times the capacity compared to non-paid users. OpenAI also announced a desktop version of ChatGPT to facilitate real-time reasoning across audio, vision, and text interfaces.

How to Use the GPT-4o API

The new GPT-4o model follows the existing chat-completion API, ensuring backward compatibility and ease of use:

```python
import asyncio
from openai import AsyncOpenAI

OPENAI_API_KEY = "<your-api-key>"

def openai_chat_resolve(response, strip_tokens=None) -> str:
    """Extract the message text from a chat-completion response."""
    if strip_tokens is None:
        strip_tokens = []
    if response and response.choices and len(response.choices) > 0:
        content = response.choices[0].message.content.strip()
        if content:
            for token in strip_tokens:
                content = content.replace(token, "")
            return content
    raise Exception(f"Cannot resolve response: {response}")

async def openai_chat_request(prompt: str, model_name: str, temperature=0.0):
    """Send a single user message and return the completion object."""
    message = {"role": "user", "content": prompt}
    client = AsyncOpenAI(api_key=OPENAI_API_KEY)
    return await client.chat.completions.create(
        model=model_name,
        messages=[message],
        temperature=temperature,
    )

response = asyncio.run(
    openai_chat_request(prompt="Hello!", model_name="gpt-4o-2024-05-13")
)
print(openai_chat_resolve(response))
```

GPT-4o is also accessible via the ChatGPT interface.

Official Evaluation

OpenAI's blog post includes evaluation scores on known datasets such as MMLU and HumanEval, showcasing GPT-4o's state-of-the-art performance. However, many models claim superior performance on open datasets, often due to overfitting. Independent analyses using lesser-known datasets are crucial for a realistic assessment.

My Evaluation Dataset

I created a dataset of 200 sentences categorized under 50 topics, designed to challenge classification tasks. The dataset is manually labeled in English. For this evaluation, I used only the English version to avoid potential biases from using the same language model for dataset creation and topic prediction. You can check out the dataset here.

Performance Results

I evaluated GPT-4o against GPT-4, Google's Gemini, and Unicorn models. The task was to match each sentence with the correct topic, calculating an accuracy score and error rate for each model; a lower error rate indicates better performance. (A sketch of this scoring loop appears after the conclusion.)

Conclusion

This analysis using a uniquely crafted English dataset reveals insights into the state-of-the-art capabilities of these advanced language models.
GPT-4o stands out with the lowest error rate, affirming OpenAI's performance claims. Independent evaluations with diverse datasets are essential for a clearer picture of a model's practical effectiveness beyond standardized benchmarks. Note that the dataset is fairly small, and results may vary with different datasets. This evaluation was conducted using the English dataset only; a multilingual comparison will be conducted at a later time.
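For readers who want the scoring procedure spelled out, here is a minimal sketch of the accuracy and error-rate computation described under Performance Results. The classify() stub is a placeholder for a call to the model under test, and the sample data is invented.

```python
# Score a model on (sentence, topic) pairs: accuracy and error rate.

def classify(sentence: str, topics: list[str]) -> str:
    # Placeholder: the real evaluation sends the sentence and the list of
    # 50 topics to GPT-4o, Gemini, etc., and parses the predicted topic.
    return topics[0]

def evaluate(dataset: list[tuple[str, str]], topics: list[str]) -> float:
    correct = sum(classify(s, topics) == label for s, label in dataset)
    accuracy = correct / len(dataset)
    print(f"accuracy={accuracy:.2%}, error rate={1 - accuracy:.2%}")
    return accuracy

evaluate([("The goalkeeper made a great save.", "sports")],
         ["sports", "finance"])
```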


Five9 Salesforce AI Integration

Five9 and Salesforce Enhance AI-Powered Solutions for Superior Customer Experiences

Five9 (NASDAQ: FIVN), a provider of the Intelligent CX Platform, today announced the next step in its collaboration with Salesforce, a partnership that aims to deliver AI-powered solutions to enhance customer experiences (CX) in contact centers.

The latest release, Five9 for Service Cloud Voice with Partner Telephony, integrates Salesforce Einstein with Five9's suite of AI solutions. This empowers agents to better service customer requests, improves management's understanding of contact center operations, and delivers customer resolutions that exceed expectations. Using Five9's open APIs and Five9 TranscriptStream, the Einstein AI engine identifies opportunities to provide real-time solutions for agents, prompting "Next Best Action" guidance. The solution also offers real-time transcription of customer conversations, ensures call recordings' accuracy and relevance, and integrates with Salesforce Einstein Conversation Insights to enhance conversation intelligence.

"Five9 understands the power of elevating the customer experience through innovative technology and seamless integrations," said Dan Burkland, President of Five9. "Our collaboration with Salesforce pushes the boundaries of what is possible. Infusing Einstein's AI insights into the contact center and CRM eliminates repetitive tasks while guiding agents with the next best actions to help them be more effective."

A Long-Standing Partnership

The Salesforce-Five9 collaboration, now over 15 years strong, recently introduced Five9 call dispositions for agents within the Salesforce Omni-Channel widget or Voice Call page. This allows organizations to automatically update call dispositions in the Five9 call database, ensuring accurate reporting across the integration. Both companies are meeting the growing demand for AI solutions to enhance customer engagement throughout the customer journey.

"Five9's deeper integration with Salesforce Einstein offers a new level of choice for customers seeking AI capabilities that best match their contact center needs and existing technology investments," said Sheila McGee-Smith, President & Principal Analyst at McGee-Smith Analytics. "Coupled with features like Five9 TranscriptStream, organizations can significantly reduce an agent's workload while enhancing the customer's overall experience. This next step in the Salesforce-Five9 relationship demonstrates each company's commitment to their joint customer base, enabling them to leverage the latest AI innovations easily."

"Service Cloud Voice with Five9 uses AI to deliver a better customer experience," said Ryan Nichols, Chief Product Officer of Service Cloud, Salesforce. "Our collaboration focuses on more than just a 'single pane of glass' – we're bringing together customer data, knowledge, and real-time conversation transcripts to help make agents more productive and delight customers."

Availability and Further Information

These new enhancements to Five9 for Service Cloud Voice with Partner Telephony will be available starting June 30. For a deeper look into the Five9 integration with Service Cloud Voice and to explore common use cases, register for the webinar "Unlock Efficiency with the Power of AI: Five9 and Salesforce Service Cloud Voice" on Tuesday, July 23.
An on-demand playback of the December 2023 Five9 and Salesforce joint webinar is also available, covering topics such as using data for personalization, best practices for leveraging engagement data to improve experiences, and how companies can become more customer-centric.

Salesforce, Einstein, and other related marks are trademarks of Salesforce, Inc.


Unified Knowledge in Salesforce

A year after Salesforce introduced the Einstein Trust Layer, aimed at safeguarding against the potential pitfalls of implementing generative AI (GenAI) in enterprise settings, the discourse surrounding GenAI remains both intriguing and cautionary. Business leaders are navigating its optimal applications to enrich customer and employee experiences. Enter Unified Knowledge in Salesforce.

Unlocking the Power of Unified Knowledge

The Einstein Trust Layer addressed critical concerns about GenAI, focusing on mitigating unwanted behaviors and preserving customer and corporate privacy. However, the current hurdle facing GenAI adoption pertains to data management. The efficacy of GenAI hinges on access to comprehensive and pertinent knowledge. This underscores the challenges in aggregating and accessing the right information, prompting Salesforce's recent unveiling of Unified Knowledge in collaboration with Zoomin. This initiative aims to streamline data utilization across platforms, facilitating seamless integration of corporate data.

Challenges in Data Aggregation and Preparation

Enterprises typically grapple with fragmented data across various systems. Integrating disparate data formats and siloed systems poses a formidable challenge. Historically, the absence of automated systems to extract insights from unstructured data hindered effective data preparation. However, the advent of GenAI has underscored the need for advanced solutions to access extensive data repositories effortlessly. Salesforce's partnership with Zoomin addresses this need, offering sophisticated tools to simplify data aggregation and preparation.

Zoomin's Role in Enhancing Salesforce Capabilities

Zoomin's technology facilitates integration with diverse third-party data sources, including Google Drive, AWS S3, Zendesk, and other Salesforce orgs. Beyond integration, Zoomin streamlines data preparation and integration processes, fostering a structured approach to managing unstructured data.

Standardization through taxonomy: Zoomin categorizes data into a hierarchical structure, enabling organizations to standardize content classification. This taxonomy is instrumental in aiding GenAI's comprehension and retrieval of relevant information.

Enhanced search and filtering: Tags and facets defined in the taxonomy facilitate refined searches, enhancing accessibility to specific content based on various parameters.

Automated categorization and syncing: Zoomin's auto-categorization features automate document classification according to the defined taxonomy, ensuring data remains current and organized within Salesforce's ecosystem.

Zoomin's technology alleviates manual data preparation efforts through features like content tagging, auto-categorization, and seamless syncing with Salesforce Knowledge. For instance, technical manuals stored in Google Drive are automatically categorized, tagged, and synced with relevant sections in Salesforce Knowledge, ensuring quick access to accurate information. (A hypothetical sketch of taxonomy-driven categorization follows this article.)

Unified Knowledge in Open Beta

Salesforce and Zoomin's collaboration exemplifies efforts to harness distributed knowledge resources effectively. Unified Knowledge, currently in open beta, is set to enhance GenAI capabilities and streamline data management. However, knowledgeable employees are essential for initial tagging to ensure accuracy. This approach ensures precise information delivery, enhancing the intelligence and responsiveness of GenAI-driven service platforms.
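To illustrate what taxonomy-driven auto-categorization might look like in code, here is a hypothetical sketch. The taxonomy, keywords, and keyword-matching rule are invented for illustration and are not Zoomin's implementation, which would use far richer classification.

```python
# Toy taxonomy-based categorization: match documents against a
# hierarchical taxonomy so content is tagged consistently before
# syncing into a knowledge base.

taxonomy = {
    "Hardware": {"Installation": ["mount", "install"],
                 "Repair": ["replace", "fix"]},
    "Billing": {"Invoices": ["invoice", "charge"]},
}

def categorize(doc: str) -> list[tuple[str, str]]:
    tags = []
    for category, subcats in taxonomy.items():
        for subcat, keywords in subcats.items():
            if any(kw in doc.lower() for kw in keywords):
                tags.append((category, subcat))
    return tags

print(categorize("How to replace the fan and install the new bracket"))
# -> [('Hardware', 'Installation'), ('Hardware', 'Repair')]
```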


AI Outage

Unlike the recent mobile network outage, where affected users were crying foul within minutes, AI experienced an outage today and you probably didn't even know about it: an AI outage with three systems down simultaneously.

Following a prolonged outage in the early morning hours, OpenAI's ChatGPT chatbot experienced another disruption, and this time it wasn't alone. On Tuesday morning, both Anthropic's Claude and Perplexity also encountered issues, albeit ones that were swiftly resolved compared to ChatGPT's downtime. ChatGPT had seemingly recovered from what OpenAI described as a "major outage" earlier today, which hit millions of users worldwide. As of 3PM ET, the generative AI platform reported "All Systems Operational." Reports indicate that Google's Gemini was operational, although some user claims suggested it might have briefly experienced downtime as well.

The simultaneous outage of three major AI providers is uncommon and could suggest a broader infrastructure issue or a problem at an internet-scale level, akin to the outages that have affected multiple social media platforms concurrently. Alternatively, the issues faced by Claude and Perplexity might have been the result of an overwhelming surge in traffic following ChatGPT's outage, rather than inherent bugs or technical glitches.

What has happened to all the AI platforms? An unknown glitch affected the activity of most generative AI (GenAI) chatbots on Tuesday, led by OpenAI's ChatGPT and Google's Gemini. Although they have not yet reached the status of critical services such as search engines, email, or instant messaging applications, the scope of use of AI platforms is steadily rising, for private use, work, and studies.

During ChatGPT's outage, users were unable to message the AI chatbot from its landing page. The disruption began at approximately 7:33 AM PT and was resolved around 10:17 AM PT, marking another instance of multi-hour downtime.


AI Agents

Lessons Learned in the First Year of Developing AI Agents

In the first year of working on AI agents, valuable insights emerged from direct collaboration with engineers and UX designers as they iterated on the overall product experience. The objective was to create a platform on which customers could use standard data analysis agents and build custom agents tailored to specific tasks and data structures relevant to their business. This platform integrates connectors to databases like Snowflake and BigQuery with built-in security, supports RAG over a metadata layer describing database contents, and facilitates data analysis through SQL, Python, and data visualization tools. (A minimal sketch of the metadata-RAG pattern appears at the end of this article.) Feedback on the effectiveness of these developments came from both internal evaluations and customer insights. Users from Fortune 500 companies utilize these agents daily to analyze their internal data.

These lessons underscore the importance of focusing on reasoning, of iterative improvements to the agent-computer interface, of understanding model limitations, and of building robust supporting infrastructure to enhance AI agent performance and user satisfaction.
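As a minimal sketch of the "RAG over a metadata layer" pattern mentioned above, the toy below retrieves table descriptions relevant to a question and assembles a prompt for an LLM to write SQL. The schema, retrieval rule, and prompt are all invented; a production system would retrieve with embeddings rather than keyword matching.

```python
# Toy metadata-RAG: pick relevant table descriptions for a question,
# then build an SQL-generation prompt from them.

metadata = {
    "orders":    "order_id, customer_id, total_usd, created_at",
    "customers": "customer_id, name, region, signup_date",
    "tickets":   "ticket_id, customer_id, status, opened_at",
}

def retrieve_tables(question: str) -> dict[str, str]:
    # Keep tables whose name or any column appears in the question.
    q = question.lower()
    hits = {}
    for table, cols in metadata.items():
        tokens = [table.rstrip("s")] + [c.strip() for c in cols.split(",")]
        if any(tok in q for tok in tokens):
            hits[table] = cols
    return hits

def build_sql_prompt(question: str) -> str:
    schema = "\n".join(f"{t}({cols})"
                       for t, cols in retrieve_tables(question).items())
    return f"Given these tables:\n{schema}\nWrite SQL to answer: {question}"

print(build_sql_prompt("What is the average order total by region?"))
```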


Is AI a Bubble?

Scott Galloway, Professor of Marketing at NYU Stern, podcast host (Pivot, Prof G), and bestselling author of The Four, The Algebra of Happiness, and Post Corona, published an insightful look at artificial intelligence last month. It originally appeared on Medium; content is repurposed here with credit to the author.

Five years ago, Nvidia was a second-tier semiconductor company, primarily known for enhancing the resolution of Call of Duty. Today, it is the third-most-valuable company globally, commanding an impressive 80% share in AI chips, the processors driving an unprecedented $8 trillion of value creation. Since the release of ChatGPT by OpenAI in October 2022, Nvidia's value has surged by $2 trillion, equating to Amazon's market worth. Last week, Nvidia reported exceptional quarterly earnings, with its core business of selling chips to data centers experiencing a 427% year-over-year increase.

Last year at Cannes, Jensen Huang introduced himself to the author, mentioning his admiration for Galloway's videos. Not recognizing Huang, Galloway offered to take a photo, which Huang accepted before Galloway continued on his way. Since then, Nvidia has added $1.3 trillion in value. Galloway, on the other hand, underwent ketamine therapy, abstained from drinking for 17 days, and installed a router with YouTube's help. It's been a significant year for both.

There is widespread consensus on the revolutionary potential of the AI market, which explains the soaring AI stock prices. However, this unanimity raises concerns about a potential bubble. According to Galloway, the situation mirrors the 1630s tulip mania, where people bid up tulips not for their beauty or utility but because they believed they could sell them at higher prices later, a phenomenon known as the "greater fool" theory. This logic also applies to meme stocks, which embody the "greatest fool" theory. Galloway advises skepticism toward any movement urging people to "stick it to the man," as it often leaves them vulnerable.

Galloway describes the dynamics of economy-distorting bubbles, where speculative psychology meets genuine economic potential. Such bubbles grow as increasing stock prices validate assumptions, attracting more speculators. Low interest rates can fuel these bubbles, which typically have an enduring technology at their core. He draws parallels to previous bubbles: the dot-com bubble, the housing market bubble, and the cryptocurrency bubble, noting that AI appears to follow a similar trajectory.

The financial media often debates whether AI represents a bubble or a genuine technological breakthrough. Galloway argues that AI's economic promise is real, making a bubble inevitable. He cites the rapid increase in market value among AI-driven companies like Alphabet, Amazon, and Microsoft as indicative of an overvaluation bubble. Nvidia, the standout in the AI sector, faces the challenge of maintaining its valuation by dominating another market as significant as AI.

Galloway highlights that the current narrative around Nvidia resembles that of Cisco during the dot-com bubble. Both companies were seen as essential investments in their respective eras, but Cisco's stock eventually crashed along with the broader market. Timing a bubble's burst is notoriously difficult. Galloway recounts how past investors, like John Paulson and Michael Burry, timed their bets on housing correctly, while others, like Julian Robertson and George Soros, faced significant losses by mistiming the dot-com bubble.
He emphasizes that most people cannot predict market turns accurately and advises diversification and caution. Galloway speculates on how an AI market downturn might occur: a significant non-tech company scaling back its AI investments could trigger a chain reaction of declining stock prices and speculative sell-offs. This scenario mirrors the dot-com bubble's collapse in 2000 and the housing bubble's burst in 2007.

He concludes that while the AI bubble feels more akin to the dot-com bubble than the housing crisis, its growing size could have broader economic repercussions. The AI bubble's eventual deflation might resemble Cisco's post-dot-com trajectory, where long-term value persists despite short-term losses. Ultimately, Nvidia's current status as a "safe" investment suggests that it might offer returns aligned with the market, rather than the spectacular gains of past tech giants like Amazon.

Galloway encapsulates this analysis with a warning: when a "sure thing" stock becomes frothy, it is no longer a safe bet. Investors should be prepared for both the potential risks and rewards, securing their metaphorical tray tables as they navigate the turbulent AI investment landscape.


RAG Chunking Method

Enhancing Retrieval-Augmented Generation (RAG) Systems with Topic-Based Document Segmentation

Dividing large documents into smaller, meaningful parts is crucial for the performance of Retrieval-Augmented Generation (RAG) systems, and the chunking method matters. These systems benefit from frameworks that offer multiple document-splitting options. This Tectonic insight introduces an innovative approach that identifies topic changes using sentence embeddings, improving the subdivision process to create coherent topic-based sections.

RAG Systems: An Overview

A Retrieval-Augmented Generation (RAG) system combines retrieval-based and generation-based models to enhance output quality and relevance. It first retrieves relevant information from a large dataset based on an input query, then uses a transformer-based language model to generate a coherent and contextually appropriate response. This hybrid approach is particularly effective in complex or knowledge-intensive tasks.

Standard Document Splitting Options

Before diving into the new approach, let's note some standard document-splitting methods available in the LangChain framework, which is known for its robust support of various natural language processing (NLP) tasks. LangChain assists developers in applying large language models across NLP tasks, including document splitting, and offers several splitters, such as character-based, recursive character-based, and token-based splitting.

Introducing a New Approach: Topic-Based Segmentation

Segmenting large-scale documents into coherent topic-based sections poses significant challenges. Traditional methods often fail to detect subtle topic shifts accurately. This innovative approach, presented at the International Conference on Artificial Intelligence, Computer, Data Sciences, and Applications (ACDSA 2024), addresses this issue using sentence embeddings.

The Core Challenge

Large documents often contain multiple topics. Conventional segmentation techniques struggle to identify precise topic transitions, leading to fragmented or overlapping sections. This method leverages Sentence-BERT (SBERT) to generate embeddings for individual sentences, which reflect changes in the vector space as topics shift.

Approach Breakdown

1. Using sentence embeddings: SBERT encodes each sentence into a dense vector that captures its meaning.
2. Calculating gap scores: for each position, the similarity between the embeddings of the sentences before and after it is computed; a low score suggests a topic change.
3. Smoothing: gap scores are averaged over a small window to suppress noise.
4. Boundary detection: positions whose smoothed score falls well below the average are marked as segment boundaries.
5. Clustering segments: segments with similar content can then be grouped to recover topics that recur across the document.
Algorithm Pseudocode

Gap score calculation:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer("all-MiniLM-L6-v2")

def calculate_gap_scores(sentences, n):
    # Cosine similarity between the mean embedding of the n sentences
    # before each gap and the n sentences after it.
    embeddings = sbert.encode(sentences)
    gap_scores = []
    for i in range(len(sentences) - 2 * n + 1):
        before = np.mean(embeddings[i:i + n], axis=0)
        after = np.mean(embeddings[i + n:i + 2 * n], axis=0)
        score = float(np.dot(before, after)
                      / (np.linalg.norm(before) * np.linalg.norm(after)))
        gap_scores.append(score)
    return gap_scores
```

Gap score smoothing:

```python
def smooth_gap_scores(gap_scores, k):
    # Average each score with its k neighbours on either side.
    smoothed_scores = []
    for i in range(len(gap_scores)):
        start = max(0, i - k)
        end = min(len(gap_scores), i + k + 1)
        smoothed_scores.append(sum(gap_scores[start:end]) / (end - start))
    return smoothed_scores
```

Boundary detection:

```python
def detect_boundaries(smoothed_scores, c):
    # Scores more than c standard deviations below the mean mark boundaries.
    mean_score = sum(smoothed_scores) / len(smoothed_scores)
    std_dev = (sum((x - mean_score) ** 2 for x in smoothed_scores)
               / len(smoothed_scores)) ** 0.5
    return [i for i, score in enumerate(smoothed_scores)
            if score < mean_score - c * std_dev]
```

Future Directions

This method opens several avenues for further research, from alternative embedding models to different smoothing and clustering strategies.

Conclusion

This method combines traditional principles with advanced sentence embeddings, leveraging SBERT together with smoothing and clustering techniques. The approach offers a robust and efficient solution for accurate topic modeling in large documents, enhancing the performance of RAG systems by providing coherent and contextually relevant text sections.
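Under the same assumptions, the three functions chain together like this on a toy document:

```python
# Chaining the steps above on a toy document (functions defined earlier).
sentences = [
    "The quarterly revenue grew by 12 percent.",
    "Operating costs remained flat.",
    "In other news, the research team published a new model.",
    "The model improves translation quality.",
]
gaps = calculate_gap_scores(sentences, n=1)
smoothed = smooth_gap_scores(gaps, k=1)
boundaries = detect_boundaries(smoothed, c=0.5)
print("topic boundaries after sentence indices:", boundaries)
```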


Einstein Personalization and Copilots

Salesforce launched a suite of new generative AI products at Connections in Chicago, including new Einstein Copilots for marketers and merchants, and Einstein Personalization. To gain insights into these products and Salesforce's evolving architecture, Bobby Jania, CMO of Marketing Cloud, was interviewed.

Salesforce's Evolving Architecture

Salesforce has a knack for introducing new names for its platforms and products, sometimes causing confusion about whether something is entirely new or simply rebranded. Reporters sought clarification on the Einstein 1 platform and its relationship to Salesforce Data Cloud. "Data Cloud is built on the Einstein 1 platform," Jania explained. "Einstein 1 encompasses the entire Salesforce platform, including products like Sales Cloud and Service Cloud, continuing the original multi-tenant cloud concept."

Data Cloud, developed natively on Einstein 1, was the first product built on Hyperforce, Salesforce's new cloud infrastructure. "From the start, Data Cloud has been able to connect to and read anything within Sales Cloud, Service Cloud, etc. Additionally, it can now handle both structured and unstructured data."

This marks significant progress from a few years ago, when Salesforce's platform comprised various acquisitions (like ExactTarget) that didn't seamlessly integrate. Previously, data had to be moved between products, often resulting in duplicates. Now, Data Cloud serves as the central repository, with applications like Tableau, Commerce Cloud, Service Cloud, and Marketing Cloud all accessing the same operational customer profile without duplicating data. Salesforce customers can also import their own datasets into Data Cloud. "We wanted a federated data model," Jania said. "If you're using Snowflake, for example, we virtually sit on your data lake, providing value by forming comprehensive operational customer profiles."

Understanding Einstein Copilot

"Copilot means having an assistant within the tool you're using, contextually aware of your tasks and assisting you at every step," Jania said. For marketers, this could start with a campaign brief created with Copilot's help, identifying an audience, and developing content. "Einstein Studio is exciting because customers can create actions for Copilot that we hadn't even envisioned."

Contrary to previous reports, there is only one Copilot, Einstein Copilot, with various use cases like marketing, merchants, and shoppers. "We use these names for clarity, but there's just one Copilot. You can build your own use cases in addition to the ones we provide."

Marketers will need time to adapt to Copilot. "Adoption takes time," Jania acknowledged. "This Connections event offers extensive hands-on training to help people use Data Cloud and these tools, beyond just demonstrations."

What's New with Einstein Personalization

Einstein Personalization is a real-time decision engine designed to choose the next best action or offer for customers. "What's new is that it now runs natively on Data Cloud," Jania explained. While many decision engines require a separate dataset, Einstein Personalization evaluates a customer holistically and recommends actions directly within Service Cloud, Sales Cloud, or Marketing Cloud.

Ensuring Trust

Connections presentations emphasized that while public LLMs like ChatGPT can be applied to customer data, none of this data is retained by the LLMs. This isn't just a matter of agreements; it involves the Einstein Trust Layer.
"All data passing through an LLM runs through our gateway. Personally identifiable information, such as credit card numbers or email addresses, is stripped out. The LLMs do not store the output; Salesforce retains it for auditing. Any output that returns through our gateway is logged, checked for toxicity, and only then is PII reinserted into the response. These measures ensure data safety beyond mere handshakes," Jania said.
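The quoted flow (strip PII, call the LLM, log for audit, check toxicity, reinsert PII) can be sketched generically as below. Every function here is a stand-in; this illustrates the described sequence, not Salesforce's implementation.

```python
# Generic sketch of a PII-masking LLM gateway.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(text: str) -> tuple[str, dict[str, str]]:
    vault = {}
    def repl(m):
        token = f"<PII_{len(vault)}>"
        vault[token] = m.group(0)
        return token
    return EMAIL.sub(repl, text), vault

def call_llm(prompt: str) -> str:
    return f"Echo: {prompt}"          # stand-in for the model call

def is_toxic(text: str) -> bool:
    return False                      # stand-in for a toxicity classifier

def gateway(prompt: str) -> str:
    masked, vault = mask_pii(prompt)
    output = call_llm(masked)         # the LLM never sees the raw PII
    print("audit log:", output)       # output retained for auditing
    if is_toxic(output):
        raise ValueError("blocked by toxicity check")
    for token, value in vault.items():
        output = output.replace(token, value)   # reinsert PII at the end
    return output

print(gateway("Email jane@example.com about her renewal."))
```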
