Generative AI - gettectonic.com - Page 13
Salesforce Data Cloud Pioneer

Salesforce Data Cloud Pioneer

While many organizations are still building their data platforms, Salesforce Data Cloud Pioneer has made a significant leap forward. By seamlessly incorporating metadata integration, Salesforce has transformed the modern data stack into a comprehensive application platform known as the Einstein 1 Platform. Led by Muralidhar Krishnaprasad, executive vice president of engineering at Salesforce, the Einstein 1 Platform is built on the company’s metadata framework. This platform harmonizes metadata and integrates it with AI and automation, marking a new era of data utilization. The Einstein 1 Platform: Innovations and Capabilities Salesforce’s goal with the Einstein 1 Platform is to empower all business users—salespeople, service engineers, marketers, and analysts—to access, use, and act on all their data, regardless of its location, according to Krishnaprasad. The open, extensible platform not only unlocks trapped data but also equips organizations with generative AI functionality, enabling personalized experiences for employees and customers. “Analytics is very important to know how your business is doing, but you also want to make sure all that data and insights are actionable,” Krishnaprasad said. “Our goal is to blend AI, automation, and analytics together, with the metadata layer being the secret sauce.” Salesforce Data Cloud Pioneer In a conversation with George Gilbert, senior analyst at theCUBE Research, Krishnaprasad discussed the platform’s metadata integration, open-API technology, and key features. They explored how its extensibility and interoperability enhance usability across various data formats and sources. Metadata Integration: Accommodating Any IT Environment The Einstein 1 Platform is built on Trino, the federated open-source query engine, and Spark for data processing. It offers a rich set of connectors and an open, extensible environment, enabling organizations to share data between warehouses, lake houses, and other systems. “We use a hyper-engine for sub-second response times in Tableau and other data explorations,” Krishnaprasad explained. “This in-memory overlap engine ensures efficient data processing.” The platform supports various machine learning options and allows users to integrate their own large language models. Whether using Salesforce Einstein, Databricks, Vertex, SageMaker, or other solutions, users can operate without copying data. The platform includes three levels of extensibility, enabling organizations to standardize and extend their customer journey models. Users can start with basic reference models, customize them, and then generate insights, including AI-driven insights. Finally, they can introduce their own functions or triggers to act on these insights. The platform continuously performs unification, allowing users to create different unified graphs based on their needs. “We’re a multimodal system, considering your entire customer journey,” Krishnaprasad said. “We provide flexibility at all levels of the stack to create the right experience for your business.” The Triad of AI, Automation, and Analytics The platform’s foundation ingests, harmonizes, and unifies data, resulting in a standardized metadata model that offers a 360-degree view of customer interactions. This approach unlocks siloed data, much of which is in unstructured forms like conversations, documents, emails, audio, and video. “What we’ve done with this customer 360-degree model is to use unified data to generate insights and make these accessible across application surfaces, enabling reactions to these insights,” Krishnaprasad said. “This unlocks a comprehensive customer journey.” For instance, when a customer views an ad and visits the website, salespeople know what they’re interested in, service personnel understand their concerns, and analysts have the information needed for business insights. These capabilities enhance customer engagement. “Couple this with generative AI, and we enable a lot of self-service,” Krishnaprasad added. “We aim to provide accurate answers, elevating data to create a unified model and powering a unified experience across the entire customer journey.” Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
Workday and Salesforce Unveil New AI Employee Service Agent

Workday and Salesforce Unveil New AI Employee Service Agent

In a Wednesday interview with CNBC’s Jim Cramer, the CEOs of Salesforce and Workday, Marc Benioff and Carl Eschenbach, announced their companies’ new partnership to develop an artificial intelligence assistant. Workday and Salesforce Unveil New AI Employee Service Agent. This collaboration aims to enhance onboarding, human resources, and other business processes. Salesforce Chair and CEO Marc Benioff and Workday CEO Carl Eschenbach join ‘Mad Money’ host Jim Cramer to talk their AI partnership. Both CEOs emphasized that the strength of their partnership lies in the integration of their extensive data sets. Benioff stated, “AI is all about data, and having access to extensive data enables us to deliver exceptional AI capabilities. This partnership exemplifies two companies coming together to ensure our customers have the data they need to realize the full potential of artificial intelligence.” Partnership will deliver a personalized, AI-powered assistant for employee service use cases such as onboarding, health benefits, and career development within Salesforce and Workday The two companies will establish a common data foundation that unifies HR and financial data from Workday with CRM data from Salesforce, enabling AI-powered use cases that boost productivity, lower costs, and improve the employee experience Workday will be natively integrated inside of Slack with deeper automation, so employees can seamlessly collaborate around worker, job, candidate, and similar records using AI Salesforce and Workday are both cloud-based software companies. Salesforce is renowned for its Slack application and software for sales, customer service, and marketing, while Workday specializes in human resources, recruiting, and workforce management. Eschenbach highlighted that Salesforce and Workday possess three crucial data sets in the enterprise landscape—employee data, customer data, and financial data. He added that the new initiative benefits customers by integrating services across platforms, eliminating the need to switch between different systems. “Through this partnership and our ability to share data, customers can seamlessly access our data sets whether they’re using Slack, Workday, or Salesforce,” Eschenbach said. Workday and Salesforce Unveil New AI Employee Service Agent The combination of Salesforce’s new Agentforce Platform and Einstein AI with the Workday platform and Workday AI will enable organizations to create and manage agents for a variety of employee service use cases. This AI agent will work with and elevate humans to drive employee and customer success across the business. Powered by a company’s Salesforce CRM data and Workday financial and HR data, the new AI employee service agents have a shared, trusted data foundation to communicate with employees in natural language, with human-like comprehension. As a result, taking action as part of onboarding, health benefit changes, career development, and other tasks will be easier than ever. When complex cases arise, the AI employee service agent will seamlessly transfer to the right individual for remediation, maintaining all the previous history and context for a smooth hand-off. This unique approach of humans and AI seamlessly working together will result in greater productivity, efficiency, and better experiences for employees. This is only possible by having the data, AI models, and apps deeply integrated. “The AI opportunity for every company lies in augmenting their employees and delivering incredible customer experiences. That’s why we’re so excited about our new Agentforce platform which enables humans and AI to drive customer success together, and this new partnership with Workday, to jointly build an employee service agent. Together we’ll help businesses create amazing experiences powered by generative and autonomous AI, so every employee can get answers, learn new skills, solve problems, and take action quickly and efficiently.” Marc Benioff, Chair and CEO, Salesforce Benefits to Employees Employees can now receive instant support through natural language conversations with their AI employee service agent, whether they are working in Salesforce, Slack, or Workday. This AI-driven assistant provides contextual help by understanding requests, accessing relevant information from integrated Workday-Salesforce data sources, and automating resolutions across platforms. Sal Companieh, Chief Digital and Information Officer at Cushman & Wakefield, commented, “As a leading global commercial real estate services firm, we prioritize employee support and engagement, which directly impacts client service. The ability to streamline workflows across Workday and Salesforce and deliver more personalized AI-powered employee experiences will be transformative for us.” Benefits to Employers By integrating HR, financial, and operational data into advanced AI models, Salesforce and Workday enhance workforce capabilities beyond individual productivity, fostering overall workforce intelligence, optimization, and resilience: Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
Sensitive AI Knowledge Models

Sensitive AI Knowledge Models

Based on the writings of David Campbell in Generative AI. Sensitive AI Knowledge Models “Crime is the spice of life.” This quote from an unnamed frontier model engineer has been resonating for months, ever since it was mentioned by a coworker after a conference. It sparked an interesting thought: for an AI model to be truly useful, it needs comprehensive knowledge, including the potentially dangerous information we wouldn’t really want it to share with just anyone. For example, a student trying to understand the chemical reaction behind an explosion needs the AI to accurately explain it. While this sounds innocuous, it can lead to the darker side of malicious LLM extraction. The student needs an accurate enough explanation to understand the chemical reaction without obtaining a chemical recipe to cause the reaction. An abstract digital artwork portrays the balance between AI knowledge and ethical responsibility. A blue and green flowing ribbon intertwines with a gold and white geometric pattern, symbolizing knowledge and ethical frameworks. Where they intersect, small bursts of light represent innovation and responsible AI use. The background gradient transitions from deep purple to soft lavender, conveying progress and hope. Subtle binary code is ghosted throughout, adding a tech-oriented feel. AI red-teaming is a process born of cybersecurity origins. The DEFCON conference co-hosted by the White House held the first Generative AI Red Team competition. Thousands of attendees tested eight large language models from an assortment of AI companies. In cybersecurity, red-teaming implies an adversarial relationship with a system or network. A red-teamer’s goal is to break into, hack, or simulate damage to a system in a way that emulates a real attack. When entering the world of AI red teaming, the initial approach often involves testing the limits of the LLM, such as trying to extract information on how to build a pipe bomb. This is not purely out of curiosity but also because it serves as a test of the model’s boundaries. The red-teamer has to know the correct way to make a pipe bomb. Knowing the correct details about sensitive topics is crucial for effective red teaming; without this knowledge, it’s impossible to judge whether the model’s responses are accurate or mere hallucinations. Sensitive AI Knowledge Models This realization highlights a significant challenge: it’s not just about preventing the AI from sharing dangerous information, but ensuring that when it does share sensitive knowledge, it’s not inadvertently spreading misinformation. Balancing the prevention of harm through restricted access to dangerous knowledge and avoiding greater harm from inaccurate information falling into the wrong hands is a delicate act. AI models need to be knowledgeable enough to be helpful but not so uninhibited that they become a how-to guide for malicious activities. The challenge is creating AI that can navigate this ethical minefield, handling sensitive information responsibly without becoming a source of dangerous knowledge. The Ethical Tightrope of AI Knowledge Creating dumbed-down AIs is not a viable solution, as it would render them ineffective. However, having AIs that share sensitive information freely is equally unacceptable. The solution lies in a nuanced approach to ethical training, where the AI understands the context and potential consequences of the information it shares. Ethical Training: More Than Just a Checkbox Ethics in AI cannot be reduced to a simple set of rules. It involves complex, nuanced understanding that even humans grapple with. Developing sophisticated ethical training regimens for AI models is essential. This training should go beyond a list of prohibited topics, aiming to instill a deep understanding of intention, consequences, and social responsibility. Imagine an AI that recognizes sensitive queries and responds appropriately, not with a blanket refusal, but with a nuanced explanation that educates the user about potential dangers without revealing harmful details. This is the goal for AI ethics. But it isn’t as if AI is going to extract parental permission for youths to access information, or run prompt-based queries, just because the request is sensitive. The Red Team Paradox Effective AI red teaming requires knowledge of the very things the AI should not share. This creates a paradox similar to hiring ex-hackers for cybersecurity — effective but not without risks. Tools like the WMDP Benchmark help measure and mitigate AI risks in critical areas, providing a structured approach to red teaming. To navigate this, diverse expertise is necessary. Red teams should include experts from various fields dealing with sensitive information, ensuring comprehensive coverage without any single person needing expertise in every dangerous area. Controlled Testing Environments Creating secure, isolated environments for testing sensitive scenarios is crucial. These virtual spaces allow safe experimentation with the AI’s knowledge without real-world consequences. Collaborative Verification Using a system of cross-checking between multiple experts can enhance the security of red teaming efforts, ensuring the accuracy of sensitive information without relying on a single individual’s expertise. The Future of AI Knowledge Management As AI systems advance, managing sensitive knowledge will become increasingly challenging. However, this also presents an opportunity to shape AI ethics and knowledge management. Future AI systems should handle sensitive information responsibly and educate users about the ethical implications of their queries. Navigating the ethical landscape of AI knowledge requires a balance of technical expertise, ethical considerations, and common sense. It’s a necessary challenge to tackle for the benefits of AI while mitigating its risks. The next time an AI politely declines to share dangerous information, remember the intricate web of ethical training, red team testing, and carefully managed knowledge behind that refusal. This ensures that AI is not only knowledgeable but also wise enough to handle sensitive information responsibly. Sensitive AI Knowledge Models need to handle sensitive data sensitively. Like1 Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful

Read More
AI Trust and Optimism

AI Trust and Optimism

Building Trust in AI: A Complex Yet Essential Task The Importance of Trust in AI Trust in artificial intelligence (AI) is ultimately what will make or break the technology. AI Trust and Optimism. Amid the hype and excitement of the past 18 months, it’s widely recognized that human beings need to have faith in this new wave of automation. This trust ensures that AI systems do not overstep boundaries or undermine personal freedoms. However, building this trust is a complicated task, thankfully receiving increasing attention from responsible thought leaders in the field. The Challenge of Responsible AI Development There is a growing concern that in the AI arms race, some individuals and companies prioritize making their technology as advanced as possible without considering long-term human-centric issues or the present-day realities. This concern was highlighted when OpenAI CEO Sam Altman presented AI hallucinations as a feature, not a bug, at last year’s Dreamforce, shortly after Salesforce CEO Marc Benioff emphasized the vital nature of trust. Insights from Salesforce’s Global Study Salesforce recently released the results of a global study involving 6,000 knowledge workers from various companies. The study reveals that while respondents trust AI to manage 43% of their work tasks, they still prefer human intervention in areas such as training, onboarding, and data handling. A notable finding is the difference in trust levels between leaders and rank-and-file workers. Leaders trust AI to handle over half (51%) of their work, while other workers trust it with 40%. Furthermore, 63% of respondents believe human involvement is key to building their trust in AI, though a subset is already comfortable offloading certain tasks to autonomous AI. Specifically: The study predicts that within three years, 41% of global workers will trust AI to operate autonomously, a significant increase from the 10% who feel comfortable with this today. Ethical Considerations in AI Paula Goldman, Salesforce’s Chief Ethical and Humane Use Officer, is responsible for establishing guidelines and best practices for technology adoption. Her interpretation of the study findings indicates that while workers are excited about a future with autonomous AI and are beginning to transition to it, trust gaps still need to be bridged. Goldman notes that workers are currently comfortable with AI handling tasks like writing code, uncovering data insights, and building communications. However, they are less comfortable delegating tasks such as inclusivity, onboarding, training employees, and data security to AI. Salesforce advocates for a “human at the helm” approach to AI. Goldman explains that human oversight builds trust in AI, but the way this oversight is designed must evolve to keep pace with AI’s rapid development. The traditional “human in the loop” model, where humans review every AI-generated output, is no longer feasible even with today’s sophisticated AI systems. Goldman emphasizes the need for more sophisticated controls that allow humans to focus on high-risk, high-judgment decisions while delegating other tasks. These controls should provide a macro view of AI performance and the ability to inspect it, which is crucial. Education and Training Goldman also highlights the importance of educating those steering AI systems. Trust and adoption of technology require that people are enabled to use it successfully. This includes comprehensive knowledge and training to make the most of AI capabilities. Optimism Amidst Skepticism Despite widespread fears about AI, Goldman finds a considerable amount of optimism and curiosity among workers. The study reflects a recognition of AI’s transformative potential and its rapid improvement. However, it is essential to distinguish between genuine optimism and hype-driven enthusiasm. Salesforce’s Stance on AI and Trust Salesforce has taken a strong stance on trust in relation to AI, emphasizing the non-silver bullet nature of this technology. The company acknowledges the balance between enthusiasm and pragmatism that many executives experience. While there is optimism about trusting autonomous AI within three years, this prediction needs to be substantiated with real-world evidence. Some organizations are already leading in generative AI adoption, while many others express interest in exploring its potential in the future. Conclusion Overall, this study contributes significantly to the ongoing debate about AI’s future. The concept of “human at the helm” is compelling and highlights the importance of ethical considerations in the AI-enabled future. Goldman’s role in presenting this research underscores Salesforce’s commitment to responsible AI development. For more insights, check out her blog on the subject. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
AI in Drug Research

AI in Drug Research

Insights on Leveraging AI in Biopharmaceutical R&D: A Discussion with Kailash Swarna Last month, Accenture released a report titled “Reinventing R&D in the Age of AI,” which explores how biopharmaceutical companies can harness artificial intelligence (AI) and other advanced technologies to enhance drug and therapeutic research and development. AI in Drug Research. Kailash Swarna, managing director and Accenture Life Sciences Global Research and Clinical lead, spoke with PharmaNewsIntelligence about the report’s findings and how AI can address ongoing challenges in research and development (R&D), while offering a return on technological investments. “Data and analytics are crucial in advancing drug development, from early research to late-stage clinical trials,” said Swarna. “The industry still faces significant challenges, including the time and cost required to bring a medicine to market. As a leading technology firm, it’s our role to leverage the best in data analytics and technology for drug discovery and development.” AI in Drug Research Accenture conducted detailed interviews with leaders from biopharma companies to explore AI’s role in drug development and discovery. These interviews were part of a CEO forum held just before the JP Morgan conference, where technology emerged as a major area of opportunity and concern. Key Challenges in R&D Understanding the challenges in the drug R&D landscape is crucial for identifying how AI can be effectively utilized. Swarna highlighted several significant challenges: 1. Scientific Growth “The rapid advances in biology and disease understanding present both opportunities and challenges,” Swarna noted. “While our knowledge of human disease has greatly improved, keeping pace with scientific progress in terms of executing and reducing the time and cost of bringing new therapeutics to market remains a major challenge.” He described the clinical trial process as “fraught with complexities,” including data management issues. Despite industry efforts to accelerate drug development, it often still takes over a decade and billions of dollars. 2. Macroeconomic Factors Drug R&D companies also face challenges from macroeconomic conditions, such as reimbursement issues and the Inflation Reduction Act in the US. “These factors are reshaping how companies approach their portfolios and the disease areas they target,” Swarna explained. “The industry is undergoing a retooling to address these economic impacts.” 3. Technology Optimization Many companies have made substantial technology investments, but integrating and systematically utilizing these technologies across the entire R&D process remains a challenge. “While individual technology investments have been valuable, there is a significant opportunity to unify these efforts and streamline data usage from early research through late-stage development,” Swarna said. Reinventing R&D with AI The report emphasizes that technological advancements, particularly generative AI and analytics, can revolutionize the R&D pipeline. “This isn’t about a single technology but about a comprehensive rethinking of processes, data flows, and technology investments across the entire R&D spectrum,” Swarna stated. He stressed that the reinvention of R&D processes requires an enterprise-wide strategy and implementation. Responsible AI Swarna also highlighted the importance of addressing potential challenges associated with AI. “At Accenture, we have a robust responsible AI framework,” he said. Responsible AI encompasses managing issues like bias and security. Accenture’s framework considers factors such as choosing appropriate patient populations and understanding how bias might impact research data. It also addresses security concerns, including intellectual property protection and patient privacy. “Protecting patient privacy and complying with global regulations is crucial when utilizing AI technology,” Swarna emphasized. “Without proper safeguards, we risk data loss or breaches.” Measuring ROI of AI in Drug Research To ensure that AI technologies positively impact the R&D lifecycle, Swarna described a framework for measuring return on investment (ROI). “Given the long cycle of our industry, we’ve developed objective measures to evaluate the impact of these technologies on cost and time,” he explained. Companies can use quantitative measures to track interim milestones, such as recruitment costs and speeds. “These metrics allow us to observe progress in smaller increments rather than waiting for end-to-end results,” Swarna said. “The approach varies by company and their stage in implementing these technologies.” Benefits of AI in Clinical Trials Incorporating AI into clinical trials has the potential to reduce research times and costs. While Swarna and Accenture cannot predict policy impacts on drug pricing, he offered a theoretical benefit: optimizing technology could lower development costs, potentially making medicines more affordable and accessible. Swarna noted that reducing R&D spending could lead to more effective drugs being available to larger populations without placing an excessive burden on the healthcare system. For further details, the original report and discussion were published by Accenture and can be accessed on their official site. AI in Drug Research. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more Top Ten Reasons Why Tectonic Loves the Cloud The Cloud is Good for Everyone – Why Tectonic loves the cloud You don’t need to worry about tracking licenses. Read more

Read More
Einstein Code Generation and Amazon SageMaker

Einstein Code Generation and Amazon SageMaker

Salesforce and the Evolution of AI-Driven CRM Solutions Salesforce, Inc., headquartered in San Francisco, California, is a leading American cloud-based software company specializing in customer relationship management (CRM) software and applications. Their offerings include sales, customer service, marketing automation, e-commerce, analytics, and application development. Salesforce is at the forefront of integrating artificial general intelligence (AGI) into its services, enhancing its flagship SaaS CRM platform with predictive and generative AI capabilities and advanced automation features. Einstein Code Generation and Amazon SageMaker. Salesforce Einstein: Pioneering AI in Business Applications Salesforce Einstein represents a suite of AI technologies embedded within Salesforce’s Customer Success Platform, designed to enhance productivity and client engagement. With over 60 features available across different pricing tiers, Einstein’s capabilities are categorized into machine learning (ML), natural language processing (NLP), computer vision, and automatic speech recognition. These tools empower businesses to deliver personalized and predictive customer experiences across various functions, such as sales and customer service. Key components include out-of-the-box AI features like sales email generation in Sales Cloud and service replies in Service Cloud, along with tools like Copilot, Prompt, and Model Builder within Einstein 1 Studio for custom AI development. The Salesforce Einstein AI Platform Team: Enhancing AI Capabilities The Salesforce Einstein AI Platform team is responsible for the ongoing development and enhancement of Einstein’s AI applications. They focus on advancing large language models (LLMs) to support a wide range of business applications, aiming to provide cutting-edge NLP capabilities. By partnering with leading technology providers and leveraging open-source communities and cloud services like AWS, the team ensures Salesforce customers have access to the latest AI technologies. Optimizing LLM Performance with Amazon SageMaker In early 2023, the Einstein team sought a solution to host CodeGen, Salesforce’s in-house open-source LLM for code understanding and generation. CodeGen enables translation from natural language to programming languages like Python and is particularly tuned for the Apex programming language, integral to Salesforce’s CRM functionality. The team required a hosting solution that could handle a high volume of inference requests and multiple concurrent sessions while meeting strict throughput and latency requirements for their EinsteinGPT for Developers tool, which aids in code generation and review. After evaluating various hosting solutions, the team selected Amazon SageMaker for its robust GPU access, scalability, flexibility, and performance optimization features. SageMaker’s specialized deep learning containers (DLCs), including the Large Model Inference (LMI) containers, provided a comprehensive solution for efficient LLM hosting and deployment. Key features included advanced batching strategies, efficient request routing, and access to high-end GPUs, which significantly enhanced the model’s performance. Key Achievements and Learnings Einstein Code Generation and Amazon SageMaker The integration of SageMaker resulted in a dramatic improvement in the performance of the CodeGen model, boosting throughput by over 6,500% and reducing latency significantly. The use of SageMaker’s tools and resources enabled the team to optimize their models, streamline deployment, and effectively manage resource use, setting a benchmark for future projects. Conclusion and Future Directions Salesforce’s experience with SageMaker highlights the critical importance of leveraging advanced tools and strategies in AI model optimization. The successful collaboration underscores the need for continuous innovation and adaptation in AI technologies, ensuring that Salesforce remains at the cutting edge of CRM solutions. For those interested in deploying their LLMs on SageMaker, Salesforce’s experience serves as a valuable case study, demonstrating the platform’s capabilities in enhancing AI performance and scalability. To begin hosting your own LLMs on SageMaker, consider exploring their detailed guides and resources. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
LLMs Turn CSVs into Knowledge Graphs

LLMs Turn CSVs into Knowledge Graphs

Neo4j Runway and Healthcare Knowledge Graphs Recently, Neo4j Runway was introduced as a tool to simplify the migration of relational data into graph structures. LLMs Turn CSVs into Knowledge Graphs. According to its GitHub page, “Neo4j Runway is a Python library that simplifies the process of migrating your relational data into a graph. It provides tools that abstract communication with OpenAI to run discovery on your data and generate a data model, as well as tools to generate ingestion code and load your data into a Neo4j instance.” In essence, by uploading a CSV file, the LLM identifies the nodes and relationships, automatically generating a Knowledge Graph. Knowledge Graphs in healthcare are powerful tools for organizing and analyzing complex medical data. These graphs structure information to elucidate relationships between different entities, such as diseases, treatments, patients, and healthcare providers. Applications of Knowledge Graphs in Healthcare Integration of Diverse Data Sources Knowledge graphs can integrate data from various sources such as electronic health records (EHRs), medical research papers, clinical trial results, genomic data, and patient histories. Improving Clinical Decision Support By linking symptoms, diagnoses, treatments, and outcomes, knowledge graphs can enhance clinical decision support systems (CDSS). They provide a comprehensive view of interconnected medical knowledge, potentially improving diagnostic accuracy and treatment effectiveness. Personalized Medicine Knowledge graphs enable the development of personalized treatment plans by correlating patient-specific data with broader medical knowledge. This includes understanding relationships between genetic information, disease mechanisms, and therapeutic responses, leading to more tailored healthcare interventions. Drug Discovery and Development In pharmaceutical research, knowledge graphs can accelerate drug discovery by identifying potential drug targets and understanding the biological pathways involved in diseases. Public Health and Epidemiology Knowledge graphs are useful in public health for tracking disease outbreaks, understanding epidemiological trends, and planning interventions. They integrate data from various public health databases, social media, and other sources to provide real-time insights into public health threats. Neo4j Runway Library Neo4j Runway is an open-source library created by Alex Gilmore. The GitHub repository and a blog post describe its features and capabilities. Currently, the library supports OpenAI LLM for parsing CSVs and offers the following features: The library eliminates the need to write Cypher queries manually, as the LLM handles all CSV-to-Knowledge Graph conversions. Additionally, Langchain’s GraphCypherQAChain can be used to generate Cypher queries from prompts, allowing for querying the graph without writing a single line of Cypher code. Practical Implementation in Healthcare To test Neo4j Runway in a healthcare context, a simple dataset from Kaggle (Disease Symptoms and Patient Profile Dataset) was used. This dataset includes columns such as Disease, Fever, Cough, Fatigue, Difficulty Breathing, Age, Gender, Blood Pressure, Cholesterol Level, and Outcome Variable. The goal was to provide a medical report to the LLM to get diagnostic hypotheses. Libraries and Environment Setup pythonCopy code# Install necessary packages sudo apt install python3-pydot graphviz pip install neo4j-runway # Import necessary libraries import numpy as np import pandas as pd from neo4j_runway import Discovery, GraphDataModeler, IngestionGenerator, LLM, PyIngest from IPython.display import display, Markdown, Image Load Environment Variables pythonCopy codeload_dotenv() OPENAI_API_KEY = os.getenv(‘sk-openaiapikeyhere’) NEO4J_URL = os.getenv(‘neo4j+s://your.databases.neo4j.io’) NEO4J_PASSWORD = os.getenv(‘yourneo4jpassword’) Load and Prepare Medical Data pythonCopy codedisease_df = pd.read_csv(‘/home/user/Disease_symptom.csv’) disease_df.columns = disease_df.columns.str.strip() for i in disease_df.columns: disease_df[i] = disease_df[i].astype(str) disease_df.to_csv(‘/home/user/disease_prepared.csv’, index=False) Data Description for the LLM pythonCopy codeDATA_DESCRIPTION = { ‘Disease’: ‘The name of the disease or medical condition.’, ‘Fever’: ‘Indicates whether the patient has a fever (Yes/No).’, ‘Cough’: ‘Indicates whether the patient has a cough (Yes/No).’, ‘Fatigue’: ‘Indicates whether the patient experiences fatigue (Yes/No).’, ‘Difficulty Breathing’: ‘Indicates whether the patient has difficulty breathing (Yes/No).’, ‘Age’: ‘The age of the patient in years.’, ‘Gender’: ‘The gender of the patient (Male/Female).’, ‘Blood Pressure’: ‘The blood pressure level of the patient (Normal/High).’, ‘Cholesterol Level’: ‘The cholesterol level of the patient (Normal/High).’, ‘Outcome Variable’: ‘The outcome variable indicating the result of the diagnosis or assessment for the specific disease (Positive/Negative).’ } Data Analysis and Model Creation pythonCopy codedisc = Discovery(llm=llm, user_input=DATA_DESCRIPTION, data=disease_df) disc.run() # Instantiate and create initial graph data model gdm = GraphDataModeler(llm=llm, discovery=disc) gdm.create_initial_model() gdm.current_model.visualize() Adjust Relationships pythonCopy codegdm.iterate_model(user_corrections=”’ Let’s think step by step. Please make the following updates to the data model: 1. Remove the relationships between Patient and Disease, between Patient and Symptom and between Patient and Outcome. 2. Change the Patient node into Demographics. 3. Create a relationship HAS_DEMOGRAPHICS from Disease to Demographics. 4. Create a relationship HAS_SYMPTOM from Disease to Symptom. If the Symptom value is No, remove this relationship. 5. Create a relationship HAS_LAB from Disease to HealthIndicator. 6. Create a relationship HAS_OUTCOME from Disease to Outcome. ”’) # Visualize the updated model gdm.current_model.visualize().render(‘output’, format=’png’) img = Image(‘output.png’, width=1200) display(img) Generate Cypher Code and YAML File pythonCopy code# Instantiate ingestion generator gen = IngestionGenerator(data_model=gdm.current_model, username=”neo4j”, password=’yourneo4jpasswordhere’, uri=’neo4j+s://123654888.databases.neo4j.io’, database=”neo4j”, csv_dir=”/home/user/”, csv_name=”disease_prepared.csv”) # Create ingestion YAML pyingest_yaml = gen.generate_pyingest_yaml_string() gen.generate_pyingest_yaml_file(file_name=”disease_prepared”) # Load data into Neo4j instance PyIngest(yaml_string=pyingest_yaml, dataframe=disease_df) Querying the Graph Database cypherCopy codeMATCH (n) WHERE n:Demographics OR n:Disease OR n:Symptom OR n:Outcome OR n:HealthIndicator OPTIONAL MATCH (n)-[r]->(m) RETURN n, r, m Visualizing Specific Nodes and Relationships cypherCopy codeMATCH (n:Disease {name: ‘Diabetes’}) WHERE n:Demographics OR n:Disease OR n:Symptom OR n:Outcome OR n:HealthIndicator OPTIONAL MATCH (n)-[r]->(m) RETURN n, r, m MATCH (d:Disease) MATCH (d)-[r:HAS_LAB]->(l) MATCH (d)-[r2:HAS_OUTCOME]->(o) WHERE l.bloodPressure = ‘High’ AND o.result=’Positive’ RETURN d, properties(d) AS disease_properties, r, properties(r) AS relationship_properties, l, properties(l) AS lab_properties Automated Cypher Query Generation with Gemini-1.5-Flash To automatically generate a Cypher query via Langchain (GraphCypherQAChain) and retrieve possible diseases based on a patient’s symptoms and health indicators, the following setup was used: Initialize Vertex AI pythonCopy codeimport warnings import json from langchain_community.graphs import Neo4jGraph with warnings.catch_warnings(): warnings.simplefilter(‘ignore’) NEO4J_USERNAME = “neo4j” NEO4J_DATABASE = ‘neo4j’ NEO4J_URI = ‘neo4j+s://1236547.databases.neo4j.io’ NEO4J_PASSWORD = ‘yourneo4jdatabasepasswordhere’ # Get the Knowledge Graph from the instance and the schema kg = Neo4jGraph( url=NEO4J_URI, username=NEO4J_USERNAME, password=NEO4J_PASSWORD, database=NEO4J_DATABASE ) kg.refresh_schema() print(textwrap.fill(kg.schema, 60)) schema = kg.schema Initialize Vertex AI pythonCopy codefrom langchain.prompts.prompt import PromptTemplate from langchain.chains import GraphCypherQAChain from langchain.llms import VertexAI vertexai.init(project=”your-project”, location=”us-west4″) llm = VertexAI(model=”gemini-1.5-flash”) Create the Prompt Template pythonCopy codeprompt_template = “”” Let’s think step by

Read More
Einstein Service Agent is Coming

Einstein Service Agent is Coming

Salesforce is entering the AI agent arena with a new service built on its Einstein AI platform. Introducing the Einstein Service Agent, a generative AI-powered self-service tool designed for end customers. This agent provides a conversational AI interface to answer questions and resolve various issues. Similar to the employee-facing Einstein Copilot used internally within organizations, the Einstein Service Agent can take action on behalf of users, such as processing product returns or issuing refunds. It can handle both simple and complex multi-step interactions, leveraging approved company workflows already established in Salesforce. Initially, Einstein Service Agent will be deployed for customer service scenarios, with plans to expand to other Salesforce clouds in the future. What sets Einstein Service Agents apart from other AI-driven workflows is their seamless integration with Salesforce’s existing customer data and workflows. “Einstein Service Agent is a generative AI-powered, self-service conversational experience built on our Einstein trust layer and platform,” Clara Shih, CEO of Salesforce AI, told VentureBeat. “Everything is grounded in our trust layer, as well as all the customer data and official business workflows that companies have been adding into Salesforce for the last 25 years.” Distinguishing AI Agent from AI Copilot Over the past year, Salesforce has detailed various aspects of its generative AI efforts, including the development of the Einstein Copilot, which became generally available at the end of April. The Einstein Copilot enables a wide range of conversational AI experiences for Salesforce users, focusing on direct users of the Salesforce platform. “Einstein Copilot is employee-facing, for salespeople, customer service reps, marketers, and knowledge workers,” Shih explained. “Einstein Service Agent is for our customers’ customers, for their self-service.” The concept of a conversational AI bot answering basic customer questions isn’t new, but Shih emphasized that Einstein Service Agent is different. It benefits from all the data and generative AI work Salesforce has done in recent years. This agent approach is not just about answering simple questions but also about delivering knowledge-based responses and taking action. With a copilot, multiple AI engines and responses can be chained together. The AI agent approach also chains AI models together. For Shih, the difference is a matter of semantics. “It’s a spectrum toward more and more autonomy,” Shih said. Driving AI Agent Approach with Customer Workflows As an example, Shih mentioned that Salesforce is working with a major apparel company as a pilot customer for Einstein Service Agent. If a customer places an online order and receives the wrong item, they could call the retailer during business hours for assistance from a human agent, who might be using the Einstein Copilot. If the customer reaches out when human agents aren’t available or chooses a self-service route, Einstein Service Agent can step in. The customer will be able to ask about the issue and, if enabled in the workflow, get a resolution. The workflow that understands who the customer is and how to handle the issue is already part of the Salesforce Service Cloud. Shih explained that Einstein Studio is where all administrative and configuration work for Einstein AI, including Service Agents, takes place, utilizing existing Salesforce data. The Einstein Service Agent provides a new layer for customers to interact with existing logic to solve issues. “Everything seemingly that the company has invested in over the last 25 years has come to light in the last 18 months, allowing customers to securely take advantage of generative AI in a trusted way,” Shih said. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
Democratizing CLM

Democratizing CLM

IntelAgree, a leader in AI-powered contract lifecycle management (CLM) software, is excited to announce the integration of generative AI functionality into its existing Salesforce platform. The new feature, Saige Assist: Contract Advice, enhances the contracting process by providing users with immediate answers to contract-related questions directly within the familiar Salesforce environment. Available to IntelAgree users with AI-enabled subscriptions, Saige Assist: Contract Advice significantly enhances productivity and efficiency. Traditional inquiries to legal teams, which might take 48-72 hours due to staffing or prioritization constraints, are now addressed in seconds, enabling faster decision-making. “Many of our clients draft and manage contracts through Salesforce. With this new feature, they won’t need to leave Salesforce to get the answers they need,” said Michael Schacter, Director of Product Management at IntelAgree. “They can ask questions right within the platform, making it an all-in-one solution for contract management.” Key benefits of this new update include: “At IntelAgree, we aim to make contracting a team sport. A major part of this is meeting non-legal users where they work and how they prefer to work,” said Kyle Myers, EVP of Product and Engineering at IntelAgree. “With this new Salesforce integration update, we’re not just making contract management easier – we’re democratizing it, making AI-powered contract insights available to anyone using Salesforce.” IntelAgree distinguishes itself with a user-first approach to contract management, addressing the evolving needs of modern businesses beyond just legal departments. Looking ahead, the company plans to expand Saige Assist’s functionality to other native integrations. Along with the launch of Saige Assist: Contract Advice, IntelAgree has introduced an attributes tab to its Salesforce integration, providing users with quick access to key attribute values like arbitration, payment terms, and publicity restrictions. In a future release, users will also be able to complete smart forms within Salesforce, further minimizing the need to switch platforms. About IntelAgree:IntelAgree is an AI-powered contract lifecycle management (CLM) platform that helps enterprise teams do impactful work, not busy work. The platform uses machine learning to identify, extract, and analyze text in agreements, making contract analytics more accessible. IntelAgree also uses intelligent automation to optimize every part of the contracting process, so teams can create, negotiate, sign, manage, and analyze contracts faster. IntelAgree is trusted by leading companies, ranging from major league sports teams to Fortune 500 companies, to automate the most painful, costly parts of the contracting process. For more information about IntelAgree, visit intelagree.com. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
Einstein Service Agent

Einstein Service Agent

Introducing Agentforce Service Agent: Salesforce’s Autonomous AI to Transform Chatbot Experiences Accelerate case resolutions with an intelligent, conversational interface that uses natural language and is grounded in trusted customer and business data. Deploy in minutes with ready-made templates, Salesforce components, and a large language model (LLM) to autonomously engage customers across any channel, 24/7. Establish clear privacy and security guardrails to ensure trusted responses, and escalate complex cases to human agents as needed. Editor’s Note: Einstein Service Agent is now known as Agentforce Service Agent. Salesforce has launched Agentforce Service Agent, the company’s first fully autonomous AI agent, set to redefine customer service. Unlike traditional chatbots that rely on preprogrammed responses and lack contextual understanding, Agentforce Service Agent is dynamic, capable of independently addressing a wide range of service issues, which enhances customer service efficiency. Built on the Einstein 1 Platform, Agentforce Service Agent interacts with large language models (LLMs) to analyze the context of customer messages and autonomously determine the appropriate actions. Using generative AI, it creates conversational responses based on trusted company data, such as Salesforce CRM, and aligns them with the brand’s voice and tone. This reduces the burden of routine queries, allowing human agents to focus on more complex, high-value tasks. Customers, in turn, receive faster, more accurate responses without waiting for human intervention. Available 24/7, Agentforce Service Agent communicates naturally across self-service portals and messaging channels, performing tasks proactively while adhering to the company’s defined guardrails. When an issue requires human escalation, the transition is seamless, ensuring a smooth handoff. Ease of Setup and Pilot Launch Currently in pilot, Agentforce Service Agent will be generally available later this year. It can be deployed in minutes using pre-built templates, low-code workflows, and user-friendly interfaces. “Salesforce is shaping the future where human and digital agents collaborate to elevate the customer experience,” said Kishan Chetan, General Manager of Service Cloud. “Agentforce Service Agent, our first fully autonomous AI agent, will revolutionize service teams by not only completing tasks autonomously but also augmenting human productivity. We are reimagining customer service for the AI era.” Why It Matters While most companies use chatbots today, 81% of customers would still prefer to speak to a live agent due to unsatisfactory chatbot experiences. However, 61% of customers express a preference for using self-service options for simpler issues, indicating a need for more intelligent, autonomous agents like Agentforce Service Agent that are powered by generative AI. The Future of AI-Driven Customer Service Agentforce Service Agent has the ability to hold fluid, intelligent conversations with customers by analyzing the full context of inquiries. For instance, a customer reaching out to an online retailer for a return can have their issue fully processed by Agentforce, which autonomously handles tasks such as accessing purchase history, checking inventory, and sending follow-up satisfaction surveys. With trusted business data from Salesforce’s Data Cloud, Agentforce generates accurate and personalized responses. For example, a telecommunications customer looking for a new phone will receive tailored recommendations based on data such as purchase history and service interactions. Advanced Guardrails and Quick Setup Agentforce Service Agent leverages the Einstein Trust Layer to ensure data privacy and security, including the masking of personally identifiable information (PII). It can be quickly activated with out-of-the-box templates and pre-existing Salesforce components, allowing companies to equip it with customized skills faster using natural language instructions. Multimodal Innovation Across Channels Agentforce Service Agent supports cross-channel communication, including messaging apps like WhatsApp, Facebook Messenger, and SMS, as well as self-service portals. It even understands and responds to images, video, and audio. For example, if a customer sends a photo of an issue, Agentforce can analyze it to provide troubleshooting steps or even recommend replacement products. Seamless Handoffs to Human Agents If a customer’s inquiry requires human attention, Agentforce seamlessly transfers the conversation to a human agent who will have full context, avoiding the need for the customer to repeat information. For example, a life insurance company might program Agentforce to escalate conversations if a customer mentions sensitive topics like loss or death. Similarly, if a customer requests a return outside of the company’s policy window, Agentforce can recommend that a human agent make an exception. Customer Perspective “Agentforce Service Agent’s speed and accuracy in handling inquiries is promising. It responds like a human, adhering to our diverse, country-specific guidelines. I see it becoming a key part of our service team, freeing human agents to handle higher-value issues.” — George Pokorny, SVP of Global Customer Success, OpenTable. Content updated October 2024. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
AI Increases Customer Service Efficiency

AI Increases Customer Service Efficiency

Salesforce: Enhancing Customer Service Efficiency with AI Salesforce, the global leader in AI-driven CRM solutions, has released its latest State of Service report, highlighting the benefits of artificial intelligence (AI) and data in boosting customer revenue, efficiency, and satisfaction. The comprehensive study surveyed over 5,500 service professionals across 30 countries, including Indonesia, revealing a significant reliance on AI to enhance work efficiency in the service sector. AI Increases Customer Service Efficiency. Key Findings from the State of Service Report Salesforce’s report indicates that 86% of professional services organizations in Indonesia have either implemented AI or are evaluating its benefits. Furthermore, 80% of professionals in the region plan to increase their AI investments this year. Gavin Barfield, Chief Technology Officer & Vice President of Solutions at Salesforce ASEAN, remarked on the transformative potential of AI: “Generative AI will enable agents to deliver a smoother and more personalized customer service experience, allowing them more time to focus on building relationships.” Gavin Barfield Primary Functions of AI in Indonesian Services Professionals in Indonesia identified three primary functions of AI in their service operations: The report highlights that 96% of AI-using professional services in Indonesia find AI instrumental in saving time. AI Increases Customer Service Efficiency Barfield emphasized the efficiency gains: “AI helps customer service agents become more efficient by reducing administrative tasks, thus saving time for them to focus on providing personalized and revenue-generating customer experiences. This will fundamentally shift the role of service teams in Indonesia from being cost centers to profit centers.” Conclusion Salesforce’s State of Service report underscores the critical role AI plays in transforming customer service operations. By automating routine tasks and enhancing service quality, AI empowers agents to focus on more strategic, relationship-building activities, ultimately driving greater efficiency and profitability in the service sector. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
Where Will the Data Scientists Go

Where Will the Data Scientists Go

What Is to Become of the Data Scientist Role? This question frequently arises among executives, particularly as they navigate the changing roles of data teams, such as those at DataRobot. Where Will the Data Scientists Go may not be as relevant as what new places can they go with AI? The short answer? While tools may evolve, the core of data science remains steadfast. As the field of data science continues to expand, the role of the data scientist becomes increasingly vital. The need will grow, even as the role changes. Trust in AI is dependant upon human oversight. Beyond the Hype of Consumer AI The surge in consumer AI products has raised concerns among data scientists about the implications for their careers. However, these technologies are built on data and generate vast amounts of new data, presenting numerous opportunities. The real transformative potential lies in enterprise-scale automation. Enterprise-Scale Automation: The Data Scientist’s Domain Enterprise-scale automation involves creating large-scale, reliable systems. Data scientists are crucial in this effort, as they bring expertise in data exploration and systematic inference. They are uniquely positioned to identify automation opportunities, design testing and monitoring strategies, and collaborate with cross-functional teams to bring AI solutions from concept to implementation. As automation grows, the role of the data scientist is essential in ensuring these systems function effectively and safely, particularly in environments without human oversight. New Skills for Data Scientists: The Guardians of AI Applications Data scientists will need to acquire new skills to manage automation at scale, including securing the systems they build. Generative AI introduces new risks, such as potential vulnerabilities to prompt injections or other security threats. Governance and ensuring positive business impacts will become increasingly important, requiring a data science mindset. Building Great Data Teams in the Age of AI The future of data science will not be about automation replacing data scientists but about the evolution of roles and skills. Data scientists need to focus on the core foundations of their discipline rather than the specific tools they use, as tools will continue to evolve. Teams must be built intentionally, encompassing a range of skills and personalities necessary for successful enterprise automation. Business Leaders: Navigating the AI Landscape Business leaders will need to excel in decision-making, understanding the problems they aim to solve, and selecting the appropriate tools and teams. They will also need to manage evolving regulations, particularly those related to the design and deployment of AI systems. Data Scientists: Precision Thinkers at the Forefront Contrary to the belief that AI could replace coding skills, the essence of data science lies in precise thinking and clear communication. Data scientists excel in translating business needs into data-driven decisions and AI applications, ensuring that solutions are not only technically sound but also aligned with business objectives. This skill set will be crucial in the era of AI, as data scientists will play a key role in optimizing workflows, designing AI safety nets, and protecting their organization’s brand and reputation. The Evolving Role of Data Science The demand for precise, data-literate thinkers will only grow with the rise of enterprise AI systems. Whether they are called data scientists or another name, professionals who delve deeply into data and provide critical insights will remain essential in navigating the complexities of modern technology and business landscapes. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more Top Ten Reasons Why Tectonic Loves the Cloud The Cloud is Good for Everyone – Why Tectonic loves the cloud You don’t need to worry about tracking licenses. Read more

Read More
Generative AI for Tableau

Generative AI for Tableau

Tableau’s first generative AI assistant is now generally available. Generative AI for Tableau brings data prep to the masses. Earlier this month, Tableau launched its second platform update of 2024, announcing that its first two GenAI assistants would be available by the end of July, with a third set for release in August. The first of these, Einstein Copilot for Tableau Prep, became generally available on July 10. Tableau initially unveiled its plans to develop generative AI capabilities in May 2023 with the introduction of Tableau Pulse and Tableau GPT. Pulse, an insight generator that monitors data for metric changes and uses natural language to alert users, became generally available in February. Tableau GPT, now renamed Einstein Copilot for Tableau, moved into beta testing in April. Following Einstein Copilot for Tableau Prep, Einstein Copilot for Tableau Catalog is expected to be generally available before the end of July. Einstein Copilot for Tableau Web Authoring is set to follow by the end of August. With these launches, Tableau joins other data management and analytics vendors like AWS, Domo, Microsoft, and MicroStrategy, which have already made generative AI assistants generally available. Other companies, such as Qlik, DBT Labs, and Alteryx, have announced similar plans but have not yet moved their products out of preview. Tableau’s generative AI capabilities are comparable to those of its competitors, according to Doug Henschen, an analyst at Constellation Research. In some areas, such as data cataloging, Tableau’s offerings are even more advanced. “Tableau is going GA later than some of its competitors. But capabilities are pretty much in line with or more extensive than what you’re seeing from others,” Henschen said. In addition to the generative AI assistants, Tableau 2024.2 includes features such as embedding Pulse in applications. Based in Seattle and a subsidiary of Salesforce, Tableau has long been a prominent analytics vendor. Its first 2024 platform update highlighted the launch of Pulse, while the final 2023 update introduced new embedded analytics capabilities. Generative AI assistants are proliferating due to their potential to enable non-technical workers to work with data and increase efficiency for data experts. Historically, the complexity of analytics platforms, requiring coding and data literacy, has limited their widespread adoption. Studies indicate that only about one-quarter of employees regularly work with data. Vendors have attempted to overcome this barrier by introducing natural language processing (NLP) and low-code/no-code features. However, NLP features have been limited by small vocabularies requiring specific business phrasing, while low-code/no-code features only support basic tasks. Generative AI has the potential to change this dynamic. Large language models like ChatGPT and Google Gemini offer extensive vocabularies and can interpret user intent, enabling true natural language interactions. This makes data exploration and analysis accessible to non-technical users and reduces coding requirements for data experts. In response to advancements in generative AI, many data management and analytics vendors, including Tableau, have made it a focal point of their product development. Tech giants like AWS, Google, and Microsoft, as well as specialized vendors, have heavily invested in generative AI. Einstein Copilot for Tableau Prep, now generally available, allows users to describe calculations in natural language, which the tool interprets to create formulas for calculated fields in Tableau Prep. Previously, this required expertise in objects, fields, functions, and limitations. Einstein Copilot for Tableau Catalog, set for release later this month, will enable users to add descriptions for data sources, workbooks, and tables with one click. In August, Einstein Copilot for Tableau Web Authoring will allow users to explore data in natural language directly from Tableau Cloud Web Authoring, producing visualizations, formulating calculations, and suggesting follow-up questions. Tableau’s generative AI assistants are designed to enhance efficiency and productivity for both experts and generalists. The assistants streamline complex data modeling and predictive analysis, automate routine data prep tasks, and provide user-friendly interfaces for data visualization and analysis. “Whether for an expert or someone just getting started, the goal of Einstein Copilot is to boost efficiency and productivity,” said Mike Leone, an analyst at TechTarget’s Enterprise Strategy Group. The planned generative AI assistants for different parts of Tableau’s platform offer unique value in various stages of the data and AI lifecycle, according to Leone. Doug Henschen noted that the generative AI assistants for Tableau Web Authoring and Tableau Prep are similar to those being introduced by other vendors. However, the addition of a generative AI assistant for data cataloging represents a unique differentiation for Tableau. “Einstein Copilot for Tableau Catalog is unique to Tableau among analytics and BI vendors,” Henschen said. “But it’s similar to GenAI implementations being done by a few data catalog vendors.” Beyond the generative AI assistants, Tableau’s latest update includes: Among these non-Copilot capabilities, making Pulse embeddable is particularly significant. Extending generative AI capabilities to work applications will make them more effective. “Embedding Pulse insights within day-to-day applications promises to open up new possibilities for making insights actionable for business users,” Henschen said. Multi-fact relationships are also noteworthy, enabling users to relate datasets with shared dimensions and informing applications that require large amounts of high-quality data. “Multi-fact relationships are a fascinating area where Tableau is really just getting started,” Leone said. “Providing ways to improve accuracy, insights, and context goes a long way in building trust in GenAI and reducing hallucinations.” While Tableau has launched its first generative AI assistant and will soon release more, the vendor has not yet disclosed pricing for the Copilots and related features. The generative AI assistants are available through a bundle named Tableau+, a premium Tableau Cloud offering introduced in June. Beyond the generative AI assistants, Tableau+ includes advanced management capabilities, simplified data governance, data discovery features, and integration with Salesforce Data Cloud. Generative AI is compute-intensive and costly, so it’s not surprising that Tableau customers will have to pay extra for these capabilities. Some vendors are offering generative AI capabilities for free to attract new users, but Henschen believes costs will eventually be incurred. “Customers will want to understand the cost implications of adding these new capabilities,”

Read More
Data Protection Improvements from Next DLP

Data Protection Improvements from Next DLP

Insider risk and data protection company Next DLP has unveiled its new Secure Data Flow technology, designed to enhance data protection for customers. Integrated into the company’s Reveal Platform, Secure Data Flow monitors the origin, movement, and modification of data to provide comprehensive protection. Data Protection Improvements from Next DLP. This technology can secure critical business data flow from any SaaS application, including Salesforce, Workday, SAP, and GitHub, to prevent accidental data loss and malicious theft. “In modern IT environments, intellectual property often resides in SaaS applications and cloud data stores,” said John Stringer, head of product at Next DLP. “The challenge is that identifying high-impact data in these locations based on its content is difficult. Secure Data Flow, through Reveal, ensures that firms can confidently protect their most critical data assets, regardless of their location or application.” Next DLP argues that legacy data protection technologies are inadequate, relying on pattern matching, regular expressions, keywords, user-applied tags, and fingerprinting, which only cover a limited range of text-based data types. The company highlights that recent studies indicate employees download an average of 30 GB of data each month from SaaS applications to their endpoints, such as mobile phones, laptops, and desktops, emphasizing the need for advanced data protection measures. Secure Data Flow tracks data as it moves through both sanctioned and unsanctioned channels within an organization. By complementing traditional content and sensitivity classification-based approaches with origin-based data identification, manipulation detection, and data egress controls, it effectively prevents data theft and misuse. This approach results in an “all-encompassing, 100 percent effective, false-positive-free solution that simplifies the lives of security analysts,” claims Next DLP. “Secure Data Flow represents a novel approach to data protection and insider risk management,” said Ken Buckler, research director at Enterprise Management Associates. “It not only enhances detection and protection capabilities but also streamlines data management processes. This improves the accuracy of data sensitivity recognition and reduces endpoint content inspection costs in today’s diverse technological environments.” Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
gettectonic.com