Understanding AI Agents

Understanding AI Agents: A Comprehensive Guide

Artificial Intelligence (AI) has come a long way, offering systems that automate tasks and provide intelligent, responsive solutions. One key concept within AI is the AI agent: an autonomous system capable of perceiving its environment and taking actions to achieve specific goals. This guide explores AI agents, their types, and their working mechanisms, along with how to build them using platforms like Microsoft Autogen and Google Vertex AI Agent Builder. It also highlights how companies like LeewayHertz and Markovate can assist in the development of AI agents.

What is an AI Agent?

AI agents are systems designed to interact with their environment autonomously. They process inputs, make decisions, and execute actions based on predefined rules or learned experiences. These agents range from simple rule-based systems to complex machine learning models that adapt over time.

Types of AI Agents

AI agents can be classified based on complexity and functionality.

How AI Agents Work

The working mechanism of an AI agent involves four key components: perceiving inputs, making decisions, executing actions, and learning from experience.

Architectural Blocks of an Autonomous AI Agent

An autonomous AI agent typically combines these components into several core building blocks.

Building an AI Agent: The Basics

Building an AI agent involves several essential steps.

Microsoft Autogen: A Platform Overview

Microsoft Autogen is a powerful tool for building AI agents, offering a range of features that simplify the development, training, and deployment process. Its user-friendly interface allows developers to create custom agents quickly.

Key Steps to Building AI Agents with Autogen

Benefits of Autogen

Vertex AI Agent Builder: Enabling No-Code AI Development

Google's Vertex AI Agent Builder simplifies AI agent development through a no-code platform, making it accessible to users without extensive programming experience. Its drag-and-drop functionality allows for quick and efficient AI agent creation.
Key Features of Vertex AI Agent Builder

Conclusion

AI agents play a critical role in automating decision-making and performing tasks independently. Platforms like Microsoft Autogen and Google Vertex AI Agent Builder make the development of these agents more accessible, providing powerful tools for both novice and experienced developers. By leveraging these technologies and partnering with companies like LeewayHertz and Markovate, businesses can build custom AI agents that enhance automation, decision-making, and operational efficiency. Whether you're starting from scratch or looking to integrate AI capabilities into your existing systems, the right tools can make the process seamless and effective. How do you think these tools stack up next to Salesforce AI Agents? Comment below.

Content updated October 2024.
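The perceive, decide, act cycle described in this guide can be sketched as a minimal rule-based agent. The toy thermostat below is illustrative only and is not tied to Autogen or Vertex AI; the environment and rules are made up:

```python
# Minimal rule-based agent illustrating the perceive-decide-act cycle.
# A toy thermostat: the environment and rules here are hypothetical.

class ThermostatAgent:
    def __init__(self, target_temp):
        self.target_temp = target_temp

    def perceive(self, environment):
        # Read a sensor value from the environment.
        return environment["temperature"]

    def decide(self, temperature):
        # Apply a predefined rule to choose an action.
        if temperature < self.target_temp - 1:
            return "heat"
        if temperature > self.target_temp + 1:
            return "cool"
        return "idle"

    def act(self, action, environment):
        # Execute the chosen action, changing the environment.
        if action == "heat":
            environment["temperature"] += 0.5
        elif action == "cool":
            environment["temperature"] -= 0.5
        return environment

env = {"temperature": 17.0}
agent = ThermostatAgent(target_temp=20.0)
for _ in range(10):
    reading = agent.perceive(env)
    action = agent.decide(reading)
    env = agent.act(action, env)
print(env["temperature"])  # the temperature drifts toward the target
```

A learning agent would replace the fixed rules in decide() with a learned policy, but the surrounding perceive-decide-act loop stays the same.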

AI for Consumers and Retailers

Before generative AI became mainstream, tech-savvy retailers had long been leveraging transformative technologies to automate tasks and understand consumer behavior. Insights from consumer and future trends, along with predictive analytics, have long guided retailers in improving customer experiences and enhancing operational efficiency.

While AI is currently used for personalized recommendations and online customer support, many consumers still harbor distrust toward AI. Salesforce is addressing this concern by promoting trustworthy AI with human oversight and implementing powerful controls that focus on mitigating high-risk AI outcomes. This approach is crucial, as many knowledge workers fear losing control over AI. Although people trust AI to handle significant portions of their work, they believe that increased human oversight would bolster their confidence in it. Building this trust is a challenge retailers must overcome to fully harness AI's potential as a reliable assistant. So, where does the retail industry stand with AI, and how can retailers build consumer trust while developing AI responsibly?

Recent research from Salesforce and the Retail AI Council highlights how AI is reshaping consumer behavior and retailer interactions. AI is now integral to providing personalized deals, suggesting tailored products, and enhancing customer service through chatbots. Retailers are increasingly embedding generative AI into their business operations. A significant majority (93%) of retailers report using generative AI for personalization, enabling customers to find products and make purchases faster through natural language interactions on digital storefronts and messaging apps.
For instance, a customer might tell a retailer's AI assistant about their camping needs, and based on location, preferences, and past purchases, the AI can recommend a suitable tent and provide a direct link for checkout and store collection. As of early 2024, 92% of retailers were investing in AI technology. While AI is not new to retail, with 59% of merchants already using it for product recommendations and 55% utilizing digital assistants for online purchases, its applications continue to expand. From demand forecasting to customer sentiment analysis, AI enhances consumer experiences by predicting preferences and optimizing inventory levels, thereby reducing markdowns and improving efficiency.

Barriers and Ethical Considerations

Despite its promise, integrating generative AI in retail faces significant challenges, particularly regarding bias in AI outputs. The need for clear ethical guidelines in AI use within retail is pressing, underscoring the gap between adoption rates and ethical stewardship. Strategies that emphasize transparency and accountability are vital for fostering responsible AI innovation. Only half of the surveyed retailers indicated they could fully comply with stringent data security standards and privacy regulations, a reminder of how much work remains amid evolving regulatory landscapes. Retailers are increasingly aware of the risks associated with AI integration. Concerns about bias top the list, with half of the respondents worried about prejudiced AI outcomes. Additionally, issues like hallucinations (38%) and toxicity (35%) linked to generative AI implementation highlight the need for robust mitigation strategies. A majority (62%) of retailers have established guidelines to address transparency, data security, and privacy concerns related to the ethical deployment of generative AI.
These guidelines ensure responsible AI use, emphasizing trustworthy and unbiased outputs that adhere to ethical standards in the retail sector. These insights reveal a dual imperative for retailers: leveraging AI technologies to enhance operational efficiency and customer experiences while maintaining stringent ethical standards and mitigating risks.

Consumer Perceptions and the Future of AI in Retail

As AI continues to redefine retail, balancing ethical considerations with technological advancements is essential. To combat consumer skepticism, companies should focus on transparent communication about AI usage and emphasize that humans, not technology, are ultimately in control. Whether aiming for top-line growth or bottom-line efficiency, AI is a crucial addition to a retailer's technology stack. However, to fully embrace AI, retailers must take consumers on the journey and earn their trust.

Role of Small Language Models

The Role of Small Language Models (SLMs) in AI

While much attention is often given to the capabilities of Large Language Models (LLMs), Small Language Models (SLMs) play a vital role in the AI landscape.

Large vs. Small Language Models

LLMs, like GPT-4, excel at managing complex tasks and providing sophisticated responses. However, their substantial computational and energy requirements can make them impractical for smaller organizations and devices with limited processing power. In contrast, SLMs offer a more feasible solution. Designed to be lightweight and resource-efficient, SLMs are ideal for applications operating in constrained computational environments. Their reduced resource demands make them easier and quicker to deploy, while also simplifying maintenance.

What are Small Language Models?

Small Language Models (SLMs) are neural networks engineered to generate natural language text. The term "small" refers not only to the model's physical size but also to its parameter count, neural architecture, and the volume of data used during training. Parameters are numeric values that guide a model's interpretation of inputs and generation of outputs. Models with fewer parameters are inherently simpler, requiring less training data and computational power. Generally, models with fewer than 100 million parameters are classified as small, though some experts consider models with as few as 1 million to 10 million parameters to be small in comparison to today's large models, which can have hundreds of billions of parameters.

How Small Language Models Work

SLMs achieve efficiency and effectiveness with a reduced parameter count, typically ranging from tens to hundreds of millions, as opposed to the billions seen in larger models. This design choice enhances computational efficiency and task-specific performance while maintaining strong language comprehension and generation capabilities.
Techniques such as model compression, knowledge distillation, and transfer learning are critical for optimizing SLMs. These methods enable SLMs to encapsulate the broad understanding capabilities of larger models into a more concentrated, domain-specific toolset, facilitating precise and effective applications while preserving high performance.

Advantages of Small Language Models

Applications of Small Language Models

SLMs have seen increased adoption due to their ability to produce contextually coherent responses across various applications.

Small Language Models vs. Large Language Models

Feature | LLMs | SLMs
Training Dataset | Broad, diverse internet data | Focused, domain-specific data
Parameter Count | Billions | Tens to hundreds of millions
Computational Demand | High | Low
Cost | Expensive | Cost-effective
Customization | Limited, general-purpose | High, tailored to specific needs
Latency | Higher | Lower
Security | Risk of data exposure through APIs | Lower risk, often not open source
Maintenance | Complex | Easier
Deployment | Requires substantial infrastructure | Suitable for limited hardware environments
Application | Broad, including complex tasks | Specific, domain-focused tasks
Accuracy in Specific Domains | Potentially less accurate due to general training | High accuracy with domain-specific training
Real-time Application | Less ideal due to latency | Ideal due to low latency
Bias and Errors | Higher risk of biases and factual errors | Reduced risk due to focused training
Development Cycles | Slower | Faster

Conclusion

The role of Small Language Models (SLMs) is increasingly significant as they offer a practical and efficient alternative to larger models. By focusing on specific needs and operating within constrained environments, SLMs provide targeted precision, cost savings, improved security, and quick responsiveness.
As industries continue to integrate AI solutions, the tailored capabilities of SLMs are set to drive innovation and efficiency across various domains.
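Knowledge distillation, one of the optimization techniques mentioned above, trains a small student model to reproduce a larger teacher model's softened output distribution. A minimal sketch of the distillation objective in plain Python; the logits below are made up for illustration:

```python
import math

# Knowledge distillation in miniature: the student is trained to match
# the teacher's *softened* output distribution. Logits are hypothetical.

def softmax(logits, temperature=1.0):
    # Temperature > 1 flattens the distribution, exposing the teacher's
    # relative confidence in non-target classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 so gradients keep a consistent magnitude.
    p = softmax(teacher_logits, temperature)   # teacher soft targets
    q = softmax(student_logits, temperature)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]
print(distillation_loss(teacher, student))  # positive: student not yet matched
```

In practice this loss is combined with ordinary cross-entropy on hard labels and minimized over a training set; the temperature controls how much of the teacher's knowledge about non-target classes the student sees.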

Tectonic Guide to RAG

Guide to RAG (Retrieval-Augmented Generation)

Retrieval-Augmented Generation (RAG) has become increasingly popular, and while it's not yet so ubiquitous as to appear in toaster-oven manuals, its use is expected to grow. Despite its rising popularity, comprehensive guides that address all its nuances, such as relevance assessment and hallucination prevention, are still scarce. Drawing from practical experience, this insight offers an in-depth overview of RAG.

Why is RAG Important?

Large Language Models (LLMs) like ChatGPT can be employed for a wide range of tasks, from crafting horoscopes to more business-centric applications. However, there's a notable challenge: most LLMs, including ChatGPT, do not inherently understand the specific rules, documents, or processes that companies rely on. There are two ways to address this gap: retraining or fine-tuning the model on proprietary data, or supplying the relevant documents to the model at query time, which is the RAG approach.

How RAG Works

RAG consists of two primary components: a Retriever, which finds the documents relevant to a query, and a Generator (the LLM), which composes an answer from them. While the system is straightforward, the effectiveness of the output heavily depends on the quality of the documents retrieved and how well the Retriever performs. Corporate documents are often unstructured, conflicting, or context-dependent, making the process challenging.
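The Retriever's core job, finding the documents most similar to a query, usually reduces to nearest-neighbor search over embedding vectors. A toy sketch with hand-made stand-in "embeddings"; a real system would use an embedding model and a vector store:

```python
import math

# Toy retriever: rank documents by cosine similarity of their embeddings
# to the query embedding. The vectors here are hand-made stand-ins for
# the output of a real embedding model.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.2, 0.8, 0.1],
    "warranty terms": [0.7, 0.2, 0.3],
}

def retrieve(query_vec, k=2):
    # Sort documents by similarity to the query, highest first.
    ranked = sorted(documents.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

print(retrieve([1.0, 0.0, 0.1]))  # the k documents closest to the query
```

The retrieved documents are then stuffed into the Generator's prompt; everything downstream depends on this ranking being good, which is why retrieval optimization gets so much attention.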
Search Optimization in RAG

To enhance RAG's performance, optimization techniques are applied across the various stages of information retrieval and processing.

Python and LangChain Implementation Example

Below is a simple implementation of RAG using Python and LangChain:

```python
import wget
from langchain.vectorstores import Qdrant
from langchain.embeddings import OpenAIEmbeddings
from langchain import OpenAI
from langchain_community.document_loaders import BSHTMLLoader
from langchain.chains import RetrievalQA

# Download "War and Peace" by Tolstoy
wget.download("http://az.lib.ru/t/tolstoj_lew_nikolaewich/text_0073.shtml")

# Load the text from HTML
loader = BSHTMLLoader("text_0073.shtml", open_encoding="ISO-8859-1")
war_and_peace = loader.load()

# Initialize the vector database
embeddings = OpenAIEmbeddings()
doc_store = Qdrant.from_documents(
    war_and_peace,
    embeddings,
    location=":memory:",
    collection_name="docs",
)

llm = OpenAI()

# Build the QA chain once, then answer questions in a loop
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=doc_store.as_retriever(),
    return_source_documents=False,
)

while True:
    question = input("Your question: ")
    result = qa(question)
    print(f"Answer: {result}")
```

Considerations for Effective RAG

Ranking Techniques in RAG

Dynamic Learning with RELP

An advanced technique within RAG is Retrieval-Augmented Language Model-based Prediction (RELP). In this method, information retrieved from vector storage is used to generate example answers, which the LLM can then use to dynamically learn and respond. This allows for adaptive learning without the need for expensive retraining.

Conclusion

RAG offers a powerful alternative to retraining large language models, allowing businesses to leverage their proprietary knowledge for practical applications. While setting up and optimizing RAG systems involves navigating various complexities, including document structure, query processing, and ranking, the results are highly effective for most business use cases.
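The RELP technique described above amounts to few-shot prompting with retrieved examples. A minimal sketch of the prompt-assembly step; the example store and retrieval function here are hypothetical stand-ins for a real vector-store lookup:

```python
# RELP-style prompt assembly: retrieved question/answer pairs become
# few-shot examples the LLM can imitate when answering a new question.

def retrieve_examples(question, store):
    # Hypothetical stand-in: a real system would run a similarity
    # search against a vector store; here we return all examples.
    return store

def build_relp_prompt(question, store):
    lines = ["Answer in the style of the examples below.", ""]
    for ex in retrieve_examples(question, store):
        lines.append(f"Q: {ex['q']}")
        lines.append(f"A: {ex['a']}")
        lines.append("")
    lines.append(f"Q: {question}")
    lines.append("A:")
    return "\n".join(lines)

example_store = [
    {"q": "What is our refund window?", "a": "30 days from delivery."},
    {"q": "Do we ship overseas?", "a": "Yes, to most EU countries."},
]

prompt = build_relp_prompt("What is our warranty period?", example_store)
print(prompt)
```

The assembled prompt is then passed to the LLM, which adapts its answer to the retrieved examples without any retraining.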

Leads and Other Heatmap Solutions

With over 80% of people shopping online, and the numbers bound to rise, it's important to know how your would-be customers behave on your website: where they click, how they scroll, and what motivates them to take specific actions. Heatmap analytics makes this possible, allowing you to dominate CRO and UX through effective interpretation of behavior data. This insight looks at Leads Heatmap Software and other heatmap solutions. Powered by heatmap software and heatmap tools, heatmap analytics can help you convert customers at scale by optimizing their on-site and mobile experience. Make no mistake: the quality of user behavior tracking can make the difference between a closed sale and a bounce.

Leads Heatmap Software is an innovative tool that transforms complex lead data into easy-to-understand, color-coded heatmaps within Salesforce CRM. This solution uses advanced data visualization techniques, enabling users to quickly identify high-potential leads.

Interactive Heatmaps: Leverage dynamic, real-time heatmaps to visualize lead density and quality, making it easier to pinpoint high-potential areas.

Real-Time Updates: Stay up to date as heatmaps automatically refresh with new leads or changes to existing data, ensuring you always have the most current view.

Enhanced Analytics: Dive deeper into lead behavior and trends with comprehensive analytics tools that provide detailed reports and predictive insights.

Detailed Lead Profiles: Access in-depth lead profiles directly from the heatmap, including contact details, engagement history, and quick shortcuts for a complete view of each lead.

Online Chat Integration: Interact with leads instantly using integrated online chat, facilitating immediate and personalized communication.

All website pages have a purpose, whether that purpose is to drive further clicks, qualify visitors, provide a solution, or a mix of all of those things.
Heatmaps and recorded user sessions allow you to see if your page is serving that purpose or working against it.

What Is a Heatmap?

Generally speaking, heatmaps are graphical representations of data that highlight value with color. On a website heatmap, the most popular areas are shown in red (hot) and the least popular in blue (cold), with the colors ranging on a scale from red to blue. Heatmaps are an excellent method of collecting user behavior data and converting it into a deep analysis of how visitors engage with your website pages. That information will help you identify user trends and key into what should be optimized to boost engagement. Setting up website heatmapping software is a great start to refining your website design process and understanding your users.

When to Use Heatmaps

Heatmaps can be invaluable when testing and optimizing user experiences and conversion opportunities. There are many times you should be using them.

Redesigning Your Website

Updating, or even upgrading, your website isn't just a task on your to-do list. Careful thought, attention, and creativity should go into the revamp if you want it to be worth the time and resources. Heatmaps can help by revealing what your current visitors are engaging with and what they're ignoring. You'll be tapped into what makes your visitors tick so that you can build a site meant specifically for your unique audience.

Analyzing Webpage Conversions

Trying to figure out why certain pages aren't converting the way you thought they would? Use a heatmap. You'll be able to identify exactly what's attracting attention and deduce why. The same goes for buttons and pages that are showing a higher rate of conversion than anticipated. By keying into the design, copy, and other elements that are working for you, you'll know exactly how to optimize your under-performing webpages.
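Under the hood, a click heatmap is essentially a two-dimensional histogram: interaction coordinates are binned into grid cells, and each cell's count is rendered as color intensity. A minimal sketch with made-up click data:

```python
from collections import Counter

# Bin click coordinates into a coarse grid; each cell's count is what a
# heatmap renders as color intensity. The click data below is made up.

PAGE_W, PAGE_H = 1200, 800
GRID_W, GRID_H = 12, 8          # 100x100-pixel cells

def to_cell(x, y):
    # Map a pixel coordinate to its grid cell, clamping to the edges.
    col = min(int(x * GRID_W / PAGE_W), GRID_W - 1)
    row = min(int(y * GRID_H / PAGE_H), GRID_H - 1)
    return (col, row)

clicks = [(110, 95), (130, 80), (115, 90), (640, 410), (1180, 20)]
heat = Counter(to_cell(x, y) for x, y in clicks)

# The "hottest" cell is where visitors click most.
hottest, count = heat.most_common(1)[0]
print(hottest, count)
```

A rendering layer would map these counts onto a red-to-blue color scale; scrollmaps work the same way with scroll depth in place of click position.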
Testing New Updates

As your business grows and you develop new ideas, naturally you'll want to test them. A/B testing allows you to measure and analyze visitor response to a project or design, but you can take it a step further with heatmapping. Leverage the data by examining exactly what captures your visitors' attention. At the end of the testing period, you may be able to pull designs or elements that received high levels of engagement from the page that didn't perform as well into the successful one.

How To Analyze

Visually

Using the color-coded visualizations, you can read your webpage for engagement levels and attention "hot spots." Where the map reads red, visitors are showing the highest levels of interactivity; blue reflects low numbers. With a visual read you can spot design issues or opportunities to move buttons, forms, and the like.

Data Points

Reviewing raw data tables will give you more specific insights into your page's performance. You can examine HTML elements and pixel locations of clicks to really understand what's drawing people in. With certain software you can even filter your clicks and views in order of popularity. This takes the guessing out of your redesign and testing efforts.

Tableau offers instant, real-time reporting for users looking for actionable insights. With smart dashboards and a drag-and-drop interface, navigating the product is easy. Its cloud storage means omni-channel data access from anywhere, so you can perform ad hoc analyses whenever it's convenient for you and share your reports with anyone to boost business impact.

With built-in A/B testing and consolidated heatmaps, Freshmarketer puts in extra effort to plot out visitor interactions. Recorded in real time, heatmaps can be analyzed by device, which the software automatically detects.
Offering scrollmaps and click maps, Freshmarketer strives to "go beyond traditional heatmaps."

Looker offers similar services to the other software options listed, but it also supplies a unique security management feature to protect your data. Partnered with Google Cloud, you'll have access to reporting from anywhere in the world. Primarily a data analysis solution, it provides other data intelligence and visualization features as well.

Hotjar is one of the most popular website analytics software suites, offering free heatmaps for desktop, mobile, and tablet within its basic subscription plan. You can create heatmaps and synergize them with other free features like user session recordings, surveys, and

Sensitive AI Knowledge Models

Based on the writings of David Campbell in Generative AI.

"Crime is the spice of life." This quote from an unnamed frontier-model engineer has been resonating for months, ever since it was mentioned by a coworker after a conference. It sparked an interesting thought: for an AI model to be truly useful, it needs comprehensive knowledge, including the potentially dangerous information we wouldn't want it to share with just anyone. For example, a student trying to understand the chemical reaction behind an explosion needs the AI to accurately explain it. While this sounds innocuous, it can lead to the darker side of malicious LLM extraction. The student needs an explanation accurate enough to understand the chemical reaction, without obtaining a chemical recipe to cause the reaction.

[Image: an abstract artwork symbolizing the balance between AI knowledge and ethical responsibility.]

AI red-teaming is a process with cybersecurity origins. The DEF CON conference, in an event co-hosted by the White House, held the first Generative AI Red Team competition, where thousands of attendees tested eight large language models from an assortment of AI companies. In cybersecurity, red-teaming implies an adversarial relationship with a system or network: a red-teamer's goal is to break into, hack, or simulate damage to a system in a way that emulates a real attack. When entering the world of AI red-teaming, the initial approach often involves testing the limits of the LLM, such as trying to extract information on how to build a pipe bomb.
This is not purely out of curiosity; it serves as a test of the model's boundaries, and the red-teamer has to know the correct way to make a pipe bomb to run it. Knowing the correct details about sensitive topics is crucial for effective red-teaming; without this knowledge, it's impossible to judge whether the model's responses are accurate or mere hallucinations. This realization highlights a significant challenge: it's not just about preventing the AI from sharing dangerous information, but ensuring that when it does share sensitive knowledge, it's not inadvertently spreading misinformation. Balancing the prevention of harm through restricted access to dangerous knowledge against the greater harm of inaccurate information falling into the wrong hands is a delicate act. AI models need to be knowledgeable enough to be helpful but not so uninhibited that they become a how-to guide for malicious activities. The challenge is creating AI that can navigate this ethical minefield, handling sensitive information responsibly without becoming a source of dangerous knowledge.

The Ethical Tightrope of AI Knowledge

Creating dumbed-down AIs is not a viable solution, as it would render them ineffective. However, having AIs that share sensitive information freely is equally unacceptable. The solution lies in a nuanced approach to ethical training, where the AI understands the context and potential consequences of the information it shares.

Ethical Training: More Than Just a Checkbox

Ethics in AI cannot be reduced to a simple set of rules. It involves complex, nuanced understanding that even humans grapple with. Developing sophisticated ethical training regimens for AI models is essential. This training should go beyond a list of prohibited topics, aiming to instill a deep understanding of intention, consequences, and social responsibility.
Imagine an AI that recognizes sensitive queries and responds appropriately, not with a blanket refusal, but with a nuanced explanation that educates the user about potential dangers without revealing harmful details. This is the goal for AI ethics. But AI isn't going to demand parental permission before youths can access information, or vet every prompt-based query, just because the request is sensitive.

The Red Team Paradox

Effective AI red-teaming requires knowledge of the very things the AI should not share. This creates a paradox similar to hiring ex-hackers for cybersecurity: effective, but not without risks. Tools like the WMDP Benchmark help measure and mitigate AI risks in critical areas, providing a structured approach to red-teaming. To navigate this, diverse expertise is necessary: red teams should include experts from various fields dealing with sensitive information, ensuring comprehensive coverage without any single person needing expertise in every dangerous area.

Controlled Testing Environments

Creating secure, isolated environments for testing sensitive scenarios is crucial. These virtual spaces allow safe experimentation with the AI's knowledge without real-world consequences.

Collaborative Verification

A system of cross-checking between multiple experts can enhance the security of red-teaming efforts, ensuring the accuracy of sensitive information without relying on a single individual's expertise.

The Future of AI Knowledge Management

As AI systems advance, managing sensitive knowledge will become increasingly challenging. However, this also presents an opportunity to shape AI ethics and knowledge management. Future AI systems should handle sensitive information responsibly and educate users about the ethical implications of their queries. Navigating the ethical landscape of AI knowledge requires a balance of technical expertise, ethical considerations, and common sense.
It's a challenge that must be tackled to reap the benefits of AI while mitigating its risks. The next time an AI politely declines to share dangerous information, remember the intricate web of ethical training, red-team testing, and carefully managed knowledge behind that refusal. It ensures that AI is not only knowledgeable but also wise enough to handle sensitive information responsibly.

AI Trust and Optimism

Building Trust in AI: A Complex Yet Essential Task

The Importance of Trust in AI

Trust in artificial intelligence (AI) is ultimately what will make or break the technology. Amid the hype and excitement of the past 18 months, it's widely recognized that human beings need to have faith in this new wave of automation. This trust ensures that AI systems do not overstep boundaries or undermine personal freedoms. However, building this trust is a complicated task, one thankfully receiving increasing attention from responsible thought leaders in the field.

The Challenge of Responsible AI Development

There is a growing concern that in the AI arms race, some individuals and companies prioritize making their technology as advanced as possible without considering long-term human-centric issues or present-day realities. This concern was highlighted when OpenAI CEO Sam Altman presented AI hallucinations as a feature, not a bug, at last year's Dreamforce, shortly after Salesforce CEO Marc Benioff emphasized the vital nature of trust.

Insights from Salesforce's Global Study

Salesforce recently released the results of a global study of 6,000 knowledge workers from various companies. The study reveals that while respondents trust AI to manage 43% of their work tasks, they still prefer human intervention in areas such as training, onboarding, and data handling. A notable finding is the difference in trust levels between leaders and rank-and-file workers: leaders trust AI to handle over half (51%) of their work, while other workers trust it with 40%. Furthermore, 63% of respondents believe human involvement is key to building their trust in AI, though a subset is already comfortable offloading certain tasks to autonomous AI. The study predicts that within three years, 41% of global workers will trust AI to operate autonomously, a significant increase from the 10% who feel comfortable with this today.
Ethical Considerations in AI

Paula Goldman, Salesforce’s Chief Ethical and Humane Use Officer, is responsible for establishing guidelines and best practices for technology adoption. Her interpretation of the study findings indicates that while workers are excited about a future with autonomous AI and are beginning to transition to it, trust gaps still need to be bridged. Goldman notes that workers are currently comfortable with AI handling tasks like writing code, uncovering data insights, and building communications. However, they are less comfortable delegating tasks such as inclusivity, onboarding, training employees, and data security to AI.

Salesforce advocates for a “human at the helm” approach to AI. Goldman explains that human oversight builds trust in AI, but the way this oversight is designed must evolve to keep pace with AI’s rapid development. The traditional “human in the loop” model, where humans review every AI-generated output, is no longer feasible given the volume of output from today’s sophisticated AI systems. Goldman emphasizes the need for more sophisticated controls that allow humans to focus on high-risk, high-judgment decisions while delegating other tasks. Crucially, these controls should provide a macro view of AI performance and the ability to inspect it.

Education and Training

Goldman also highlights the importance of educating those steering AI systems. Trust and adoption of technology require that people are enabled to use it successfully. This includes comprehensive knowledge and training to make the most of AI capabilities.

Optimism Amidst Skepticism

Despite widespread fears about AI, Goldman finds a considerable amount of optimism and curiosity among workers. The study reflects a recognition of AI’s transformative potential and its rapid improvement. However, it is essential to distinguish between genuine optimism and hype-driven enthusiasm.
Salesforce’s Stance on AI and Trust

Salesforce has taken a strong stance on trust in relation to AI, emphasizing that this technology is no silver bullet. The company acknowledges the balance between enthusiasm and pragmatism that many executives experience. While there is optimism about trusting autonomous AI within three years, this prediction needs to be substantiated with real-world evidence. Some organizations are already leading in generative AI adoption, while many others express interest in exploring its potential in the future.

Conclusion

Overall, this study contributes significantly to the ongoing debate about AI’s future. The concept of “human at the helm” is compelling and highlights the importance of ethical considerations in the AI-enabled future. Goldman’s role in presenting this research underscores Salesforce’s commitment to responsible AI development. For more insights, check out her blog on the subject.

AI in Drug Research

Insights on Leveraging AI in Biopharmaceutical R&D: A Discussion with Kailash Swarna

Last month, Accenture released a report titled “Reinventing R&D in the Age of AI,” which explores how biopharmaceutical companies can harness artificial intelligence (AI) and other advanced technologies to enhance drug and therapeutic research and development. Kailash Swarna, managing director and Accenture Life Sciences Global Research and Clinical lead, spoke with PharmaNewsIntelligence about the report’s findings and how AI can address ongoing challenges in research and development (R&D) while offering a return on technological investments.

“Data and analytics are crucial in advancing drug development, from early research to late-stage clinical trials,” said Swarna. “The industry still faces significant challenges, including the time and cost required to bring a medicine to market. As a leading technology firm, it’s our role to leverage the best in data analytics and technology for drug discovery and development.”

Accenture conducted detailed interviews with leaders from biopharma companies to explore AI’s role in drug development and discovery. These interviews were part of a CEO forum held just before the JP Morgan conference, where technology emerged as a major area of opportunity and concern.

Key Challenges in R&D

Understanding the challenges in the drug R&D landscape is crucial for identifying how AI can be effectively utilized. Swarna highlighted several significant challenges:

1. Scientific Growth

“The rapid advances in biology and disease understanding present both opportunities and challenges,” Swarna noted. “While our knowledge of human disease has greatly improved, keeping pace with scientific progress in terms of executing and reducing the time and cost of bringing new therapeutics to market remains a major challenge.” He described the clinical trial process as “fraught with complexities,” including data management issues.
Despite industry efforts to accelerate drug development, it often still takes over a decade and billions of dollars.

2. Macroeconomic Factors

Drug R&D companies also face challenges from macroeconomic conditions, such as reimbursement issues and the Inflation Reduction Act in the US. “These factors are reshaping how companies approach their portfolios and the disease areas they target,” Swarna explained. “The industry is undergoing a retooling to address these economic impacts.”

3. Technology Optimization

Many companies have made substantial technology investments, but integrating and systematically utilizing these technologies across the entire R&D process remains a challenge. “While individual technology investments have been valuable, there is a significant opportunity to unify these efforts and streamline data usage from early research through late-stage development,” Swarna said.

Reinventing R&D with AI

The report emphasizes that technological advancements, particularly generative AI and analytics, can revolutionize the R&D pipeline. “This isn’t about a single technology but about a comprehensive rethinking of processes, data flows, and technology investments across the entire R&D spectrum,” Swarna stated. He stressed that the reinvention of R&D processes requires an enterprise-wide strategy and implementation.

Responsible AI

Swarna also highlighted the importance of addressing potential challenges associated with AI. “At Accenture, we have a robust responsible AI framework,” he said. Responsible AI encompasses managing issues like bias and security. Accenture’s framework considers factors such as choosing appropriate patient populations and understanding how bias might impact research data. It also addresses security concerns, including intellectual property protection and patient privacy. “Protecting patient privacy and complying with global regulations is crucial when utilizing AI technology,” Swarna emphasized.
“Without proper safeguards, we risk data loss or breaches.”

Measuring ROI of AI in Drug Research

To ensure that AI technologies positively impact the R&D lifecycle, Swarna described a framework for measuring return on investment (ROI). “Given the long cycle of our industry, we’ve developed objective measures to evaluate the impact of these technologies on cost and time,” he explained. Companies can use quantitative measures to track interim milestones, such as recruitment costs and speeds. “These metrics allow us to observe progress in smaller increments rather than waiting for end-to-end results,” Swarna said. “The approach varies by company and their stage in implementing these technologies.”

Benefits of AI in Clinical Trials

Incorporating AI into clinical trials has the potential to reduce research times and costs. While Swarna and Accenture cannot predict policy impacts on drug pricing, he offered a theoretical benefit: optimizing technology could lower development costs, potentially making medicines more affordable and accessible. Swarna noted that reducing R&D spending could lead to more effective drugs being available to larger populations without placing an excessive burden on the healthcare system.

For further details, the original report and discussion were published by Accenture and can be accessed on their official site.
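Swarna’s interim-milestone approach to ROI can be made concrete with a small calculation. The figures below are entirely hypothetical (they are not from the Accenture report); they only illustrate the kind of incremental metrics, such as cost per recruited patient and recruitment rate, a team might track between a baseline trial and an AI-assisted one:

```python
# Illustrative interim-milestone ROI tracking for a trial's recruitment phase.
# All numbers are hypothetical, invented for this sketch.
baseline = {"recruitment_cost": 1_200_000, "patients": 300, "weeks": 40}
with_ai  = {"recruitment_cost":   900_000, "patients": 300, "weeks": 28}

def cost_per_patient(m):
    # Dollars spent per enrolled patient
    return m["recruitment_cost"] / m["patients"]

def recruitment_rate(m):
    # Patients enrolled per week
    return m["patients"] / m["weeks"]

cost_saving = cost_per_patient(baseline) - cost_per_patient(with_ai)
speedup = recruitment_rate(with_ai) / recruitment_rate(baseline)

print(f"Cost saved per patient: ${cost_saving:,.0f}")
print(f"Recruitment speedup: {speedup:.2f}x")
```

Metrics like these can be reported at each milestone, which is exactly the "smaller increments" idea Swarna describes, rather than waiting a decade for end-to-end results.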

Uncanny Automator Salesforce Integration

Integrating WordPress with Salesforce

With the Uncanny Automator Elite Integrations addon, connecting your WordPress site to Salesforce is a breeze.

Steps to Connect Uncanny Automator to Your Salesforce Account

1. Install the Elite Integrations Addon

First, ensure you have the Elite Integrations addon for Uncanny Automator installed on your WordPress site.

2. Connect Uncanny Automator to Salesforce

To establish the connection, follow these steps: You will be prompted to log into Salesforce. After logging in, you will need to allow Uncanny Automator to manage your Salesforce data by clicking Allow. You will then return to the app connection screen on your WordPress site.

Using Salesforce Actions in Recipes

Once connected to Salesforce, you can use Uncanny Automator to create and update contacts and leads based on user actions on your WordPress site.

Final Steps

That’s it! Your recipe will now automatically run whenever users complete the selected trigger(s), sending the desired updates directly to your Salesforce account.

Installing Uncanny Automator

Install the free version

The free version of Uncanny Automator is hosted in the WordPress.org repository, so installing it on your WordPress site couldn’t be easier. Sign into your website as an administrator, and in /wp-admin/, navigate to Plugins > Add New. In the search field, enter “Uncanny Automator”. In the Search Results, click the Install Now button for Automator. Once it finishes installing, click Activate. That’s it! Uncanny Automator is installed and ready for use. Please note that you must have the free version installed first to use Uncanny Automator Pro.

The setup wizard

After activation, you will be redirected to the Uncanny Automator dashboard. From here, you can connect an account, watch tutorials or read articles in our Knowledge Base.
Connecting a free account is an optional step that allows you to try out some of the app-based (non-WordPress) Automator integrations, like Slack, Google Sheets and Facebook, but it is not required to use anything else in the free version.

Install Uncanny Automator Pro

Uncanny Automator Pro is a separate plugin from the free version; to use Pro features, you must have both Uncanny Automator AND Uncanny Automator Pro installed and active. If you don’t yet have a copy of Automator Pro, you can purchase one from https://automatorplugin.com/pricing/. Once purchased, you can download the latest version of Uncanny Automator Pro inside your account at https://automatorplugin.com/my-account/downloads/.

To install the Pro version after downloading the zip file, navigate to Plugins > Add New in /wp-admin/. At the top of the page, click the Upload Plugin button. Click Choose File to select the Pro zip file, then click Install Now and Activate the plugin. Once activated, be sure to visit Automator > Settings in /wp-admin/ to enter your license key. This unlocks access to automatic updates and unlimited use of non-WordPress integrations in your recipes.

Uncanny Automator special triggers can be found here.

Best ChatGPT Competitor Tools

ChatGPT Alternatives – Best ChatGPT Competitor Tools

Discover the future of AI chat: explore the top ChatGPT alternatives for enhanced communication and productivity. In an effort to avoid playing favorites, tools are presented in alphabetical order.

Have you ever found yourself wishing for a ChatGPT alternative that might better suit your specific content or AI assistant needs? Whether you’re a business owner, content creator, or student, the right AI chat tool can significantly influence how you interact with information and manage tasks. In this insight, we’re looking into the top ChatGPT alternatives available in 2024. By the end, you’ll have a clear idea of which options might be best for your particular use case and why. Each tool below is reviewed under the same four headings: Features, What We Like, What We Don’t Like, and Pricing.

BONUS: Quillbot AI – great for paraphrasing small blocks of content.

In the rapidly evolving world of AI chat technology, these top ChatGPT alternatives of 2024 offer a diverse range of capabilities to suit various needs and preferences. Whether you’re looking to streamline your workflow, enhance your learning, or simply engage in more dynamic conversations, there’s a tool out there (or 2 or 10) that can help boost your digital interactions. Each platform brings its unique strengths to the table, from specialized functionalities like summarizing texts or coding assistance to more general but highly efficient conversational capabilities. There is no reason to select only one.
As you consider integrating these tools into your daily routine, think about how their features align with your goals. Embrace the possibilities and let these advanced technologies open new doors to efficiency, creativity, and connectivity. Create a bookmark folder just for GPT tools; new ones pop up routinely. Happy chatting!

LLMs Turn CSVs into Knowledge Graphs

Neo4j Runway and Healthcare Knowledge Graphs

Recently, Neo4j Runway was introduced as a tool to simplify the migration of relational data into graph structures. According to its GitHub page, “Neo4j Runway is a Python library that simplifies the process of migrating your relational data into a graph. It provides tools that abstract communication with OpenAI to run discovery on your data and generate a data model, as well as tools to generate ingestion code and load your data into a Neo4j instance.” In essence, by uploading a CSV file, the LLM identifies the nodes and relationships and automatically generates a Knowledge Graph.

Knowledge Graphs in healthcare are powerful tools for organizing and analyzing complex medical data. These graphs structure information to elucidate relationships between different entities, such as diseases, treatments, patients, and healthcare providers.

Applications of Knowledge Graphs in Healthcare

Integration of Diverse Data Sources

Knowledge graphs can integrate data from various sources such as electronic health records (EHRs), medical research papers, clinical trial results, genomic data, and patient histories.

Improving Clinical Decision Support

By linking symptoms, diagnoses, treatments, and outcomes, knowledge graphs can enhance clinical decision support systems (CDSS). They provide a comprehensive view of interconnected medical knowledge, potentially improving diagnostic accuracy and treatment effectiveness.

Personalized Medicine

Knowledge graphs enable the development of personalized treatment plans by correlating patient-specific data with broader medical knowledge. This includes understanding relationships between genetic information, disease mechanisms, and therapeutic responses, leading to more tailored healthcare interventions.
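The underlying CSV-to-graph idea can be illustrated without any LLM at all: each row of the table contributes labeled nodes and typed relationships. The following is a library-free sketch using an invented two-row mini-dataset; the relationship names mirror those used elsewhere in this article, but nothing here is Neo4j Runway code:

```python
# Toy illustration of turning CSV-like rows into a graph:
# nodes keyed by (label, name), relationships as typed triples.
# The mini-dataset is invented for illustration only.
rows = [
    {"Disease": "Influenza", "Fever": "Yes", "Cough": "Yes", "Outcome Variable": "Positive"},
    {"Disease": "Diabetes",  "Fever": "No",  "Cough": "No",  "Outcome Variable": "Positive"},
]

nodes = set()
relationships = []

for row in rows:
    disease = ("Disease", row["Disease"])
    nodes.add(disease)
    for symptom in ("Fever", "Cough"):
        if row[symptom] == "Yes":  # absent symptoms get no relationship
            node = ("Symptom", symptom)
            nodes.add(node)
            relationships.append((disease, "HAS_SYMPTOM", node))
    outcome = ("Outcome", row["Outcome Variable"])
    nodes.add(outcome)
    relationships.append((disease, "HAS_OUTCOME", outcome))

print(sorted(nodes))
print(relationships)
```

What Neo4j Runway adds on top of this mechanical conversion is the discovery step: an LLM decides which columns become node labels, which become properties, and which relationships are worth creating.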
Drug Discovery and Development

In pharmaceutical research, knowledge graphs can accelerate drug discovery by identifying potential drug targets and understanding the biological pathways involved in diseases.

Public Health and Epidemiology

Knowledge graphs are useful in public health for tracking disease outbreaks, understanding epidemiological trends, and planning interventions. They integrate data from various public health databases, social media, and other sources to provide real-time insights into public health threats.

Neo4j Runway Library

Neo4j Runway is an open-source library created by Alex Gilmore. The GitHub repository and a blog post describe its features and capabilities. Currently, the library supports OpenAI LLMs for parsing CSVs. It eliminates the need to write Cypher queries manually, as the LLM handles all CSV-to-Knowledge Graph conversions. Additionally, Langchain’s GraphCypherQAChain can be used to generate Cypher queries from prompts, allowing for querying the graph without writing a single line of Cypher code.

Practical Implementation in Healthcare

To test Neo4j Runway in a healthcare context, a simple dataset from Kaggle (Disease Symptoms and Patient Profile Dataset) was used. This dataset includes columns such as Disease, Fever, Cough, Fatigue, Difficulty Breathing, Age, Gender, Blood Pressure, Cholesterol Level, and Outcome Variable. The goal was to provide a medical report to the LLM to get diagnostic hypotheses.
Libraries and Environment Setup

```shell
# Install system packages for graph visualization, then the library itself
sudo apt install python3-pydot graphviz
pip install neo4j-runway
```

```python
# Import necessary libraries
import os

import numpy as np
import pandas as pd
from dotenv import load_dotenv
from neo4j_runway import Discovery, GraphDataModeler, IngestionGenerator, LLM, PyIngest
from IPython.display import display, Markdown, Image
```

Load Environment Variables

```python
# Read credentials from a .env file; note that os.getenv() takes the
# variable NAME, with the actual secret stored in the environment.
load_dotenv()
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')  # e.g. sk-...
NEO4J_URL = os.getenv('NEO4J_URL')            # e.g. neo4j+s://your.databases.neo4j.io
NEO4J_PASSWORD = os.getenv('NEO4J_PASSWORD')
```

Load and Prepare Medical Data

```python
disease_df = pd.read_csv('/home/user/Disease_symptom.csv')
disease_df.columns = disease_df.columns.str.strip()
for col in disease_df.columns:
    disease_df[col] = disease_df[col].astype(str)
disease_df.to_csv('/home/user/disease_prepared.csv', index=False)
```

Data Description for the LLM

```python
DATA_DESCRIPTION = {
    'Disease': 'The name of the disease or medical condition.',
    'Fever': 'Indicates whether the patient has a fever (Yes/No).',
    'Cough': 'Indicates whether the patient has a cough (Yes/No).',
    'Fatigue': 'Indicates whether the patient experiences fatigue (Yes/No).',
    'Difficulty Breathing': 'Indicates whether the patient has difficulty breathing (Yes/No).',
    'Age': 'The age of the patient in years.',
    'Gender': 'The gender of the patient (Male/Female).',
    'Blood Pressure': 'The blood pressure level of the patient (Normal/High).',
    'Cholesterol Level': 'The cholesterol level of the patient (Normal/High).',
    'Outcome Variable': 'The outcome variable indicating the result of the diagnosis or assessment for the specific disease (Positive/Negative).',
}
```

Data Analysis and Model Creation

```python
disc = Discovery(llm=llm, user_input=DATA_DESCRIPTION, data=disease_df)
disc.run()

# Instantiate and create the initial graph data model
gdm = GraphDataModeler(llm=llm, discovery=disc)
gdm.create_initial_model()
gdm.current_model.visualize()
```

Adjust Relationships

```python
gdm.iterate_model(user_corrections='''
Let's think step by step. Please make the following updates to the data model:
1. Remove the relationships between Patient and Disease, between Patient and
   Symptom, and between Patient and Outcome.
2. Change the Patient node into Demographics.
3. Create a relationship HAS_DEMOGRAPHICS from Disease to Demographics.
4. Create a relationship HAS_SYMPTOM from Disease to Symptom. If the Symptom
   value is No, remove this relationship.
5. Create a relationship HAS_LAB from Disease to HealthIndicator.
6. Create a relationship HAS_OUTCOME from Disease to Outcome.
''')

# Visualize the updated model
gdm.current_model.visualize().render('output', format='png')
img = Image('output.png', width=1200)
display(img)
```

Generate Cypher Code and YAML File

```python
# Instantiate the ingestion generator
gen = IngestionGenerator(
    data_model=gdm.current_model,
    username="neo4j",
    password='yourneo4jpasswordhere',
    uri='neo4j+s://123654888.databases.neo4j.io',
    database="neo4j",
    csv_dir="/home/user/",
    csv_name="disease_prepared.csv",
)

# Create the ingestion YAML
pyingest_yaml = gen.generate_pyingest_yaml_string()
gen.generate_pyingest_yaml_file(file_name="disease_prepared")

# Load the data into the Neo4j instance
PyIngest(yaml_string=pyingest_yaml, dataframe=disease_df)
```

Querying the Graph Database

```cypher
MATCH (n)
WHERE n:Demographics OR n:Disease OR n:Symptom OR n:Outcome OR n:HealthIndicator
OPTIONAL MATCH (n)-[r]->(m)
RETURN n, r, m
```

Visualizing Specific Nodes and Relationships

```cypher
MATCH (n:Disease {name: 'Diabetes'})
OPTIONAL MATCH (n)-[r]->(m)
RETURN n, r, m
```

```cypher
MATCH (d:Disease)
MATCH (d)-[r:HAS_LAB]->(l)
MATCH (d)-[r2:HAS_OUTCOME]->(o)
WHERE l.bloodPressure = 'High' AND o.result = 'Positive'
RETURN d, properties(d) AS disease_properties,
       r, properties(r) AS relationship_properties,
       l, properties(l) AS lab_properties
```

Automated Cypher Query Generation with Gemini-1.5-Flash
To automatically generate a Cypher query via Langchain (GraphCypherQAChain) and retrieve possible diseases based on a patient’s symptoms and health indicators, the following setup was used:

Connect to the Knowledge Graph

```python
import json
import textwrap
import warnings

from langchain_community.graphs import Neo4jGraph

warnings.simplefilter('ignore')

NEO4J_USERNAME = "neo4j"
NEO4J_DATABASE = 'neo4j'
NEO4J_URI = 'neo4j+s://1236547.databases.neo4j.io'
NEO4J_PASSWORD = 'yourneo4jdatabasepasswordhere'

# Get the Knowledge Graph from the instance and its schema
kg = Neo4jGraph(
    url=NEO4J_URI,
    username=NEO4J_USERNAME,
    password=NEO4J_PASSWORD,
    database=NEO4J_DATABASE,
)
kg.refresh_schema()
print(textwrap.fill(kg.schema, 60))
schema = kg.schema
```

Initialize Vertex AI

```python
import vertexai
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import GraphCypherQAChain
from langchain.llms import VertexAI

vertexai.init(project="your-project", location="us-west4")
llm = VertexAI(model="gemini-1.5-flash")
```

Create the Prompt Template

```python
prompt_template = """
Let's think step by
```

Salesforce Research Produces INDICT

Automating and assisting in coding holds tremendous promise for speeding up and enhancing software development. Yet ensuring that these advancements yield secure and effective code presents a significant challenge. Balancing functionality with safety is crucial, especially given the potential risks associated with malicious exploitation of generated code.

In practical applications, Large Language Models (LLMs) often struggle with ambiguous or adversarial instructions, sometimes leading to unintended security vulnerabilities or facilitating harmful attacks. This isn’t merely theoretical; empirical studies, such as those on GitHub’s Copilot, have revealed that a substantial portion of generated programs—about 40%—contained vulnerabilities. Addressing these risks is vital for unlocking the full potential of LLMs in coding while safeguarding against potential threats.

Current strategies to mitigate these risks include fine-tuning LLMs with safety-focused datasets and implementing rule-based detectors to identify insecure code patterns. However, fine-tuning alone may not suffice against sophisticated attack prompts, and creating high-quality safety-related data can be resource-intensive. Meanwhile, rule-based systems may not cover all vulnerability scenarios, leaving gaps that could be exploited.

To address these challenges, researchers at Salesforce Research have introduced the INDICT framework. INDICT employs a novel approach involving dual critics—one focused on safety and the other on helpfulness—to enhance the quality of LLM-generated code. This framework facilitates internal dialogues between the critics, leveraging external knowledge sources like code snippets and web searches to provide informed critiques and iterative feedback. INDICT operates through two key stages: preemptive and post-hoc feedback.
In the preemptive stage, the safety critic assesses potential risks during code generation, while the helpfulness critic ensures alignment with task requirements. External knowledge sources enrich their evaluations. In the post-hoc stage, after code execution, both critics review outcomes to refine future outputs, ensuring continuous improvement.

Evaluation of INDICT across eight diverse tasks and programming languages demonstrated substantial enhancements in both safety and helpfulness metrics. The framework achieved a 10% absolute improvement in code quality overall. For instance, in CyberSecEval-1 benchmarks, INDICT enhanced code safety by up to 30%, with over 90% of outputs deemed secure. Additionally, the helpfulness metric showed significant gains, surpassing state-of-the-art baselines by up to 70%.

INDICT’s success lies in its ability to provide detailed, context-aware critiques that guide LLMs towards generating more secure and functional code. By integrating safety and helpfulness feedback, the framework sets new standards for responsible AI in coding, addressing critical concerns about functionality and security in automated software development.

Boost Payer Patient Education

As a pediatrician with 15 years of experience in the pediatric emergency department, Cathy Moffitt, MD, understands the critical role of patient education. Now, as Senior Vice President and Aetna Chief Medical Officer at CVS Health, she applies that knowledge to the payer space. “Education is empowerment. It’s engagement. It’s crucial for equipping patients to navigate their healthcare journey. Now, overseeing a large payer like Aetna, I still firmly believe in the power of health education,” Moffitt shared on an episode of Healthcare Strategies. At a payer organization like Aetna, patient education begins with data analytics to better understand the member population. According to Moffitt, key insights from data can help payers determine the optimal time to share educational materials with members. “People are most receptive to education when they need help in the moment,” she explained. If educational opportunities are presented when members aren’t focused on their health needs, the information is less likely to resonate. Aetna’s Next Best Action initiative, launched in 2018, embodies this timing-driven approach. In this program, Aetna employees proactively reach out to members with specific conditions to provide personalized guidance on managing their health. This often includes educational resources delivered at the right moment when members are most open to learning. Data also enables payers to tailor educational efforts to a member’s demographics, including race, sexual orientation, gender identity, ethnicity, and location. By factoring in these elements, payers can ensure their communications are relevant and easy to understand. To enhance this personalized approach, Aetna offers translation services and provides customer service training focused on sensitivity to sexual orientation and gender identity. 
In addition, updating the provider directory to reflect a diverse network helps members feel more comfortable with their care providers, making them more likely to engage with educational resources. “Understanding our members’ backgrounds and needs, whether it’s acute or chronic illness, allows us to engage them more effectively,” Moffitt said. “This is the foundation of our approach to leveraging data for meaningful patient education.”

With over two decades in both provider and payer roles, Moffitt has observed key trends in patient education, particularly its success in mental health and preventive care. She highlighted the role of technology in these areas. Efforts to educate patients about mental health have reduced stigma and increased awareness of mental wellness. Telemedicine has significantly improved access to mental healthcare, according to Moffitt. In preventive care, more people are aware of the importance of cancer screenings, vaccines, wellness visits, and other preventive measures. Moffitt pointed to the rising use of home health visits and retail clinics as contributing factors for Aetna members.

Looking ahead, Moffitt sees personalized engagement as the future of patient education. Members increasingly want information tailored to their preferences, delivered through their preferred channels, whether by email, text, phone, or other methods. Omnichannel solutions will be essential to meeting this demand, and while healthcare has already made progress, Moffitt expects even more innovation in the years to come. “I can’t predict exactly where we’ll be in 10 years, just as I couldn’t have predicted where we are now a decade ago,” Moffitt said. “But we will continue to evolve and meet the needs of our members with the technological advancements we’re committed to.”

Contact Us

To discover how Salesforce can advance your patient payer education, contact Tectonic today.

Einstein Service Agent is Coming


Salesforce is entering the AI agent arena with a new service built on its Einstein AI platform: the Einstein Service Agent, a generative AI-powered self-service tool designed for end customers. This agent provides a conversational AI interface to answer questions and resolve various issues.

Similar to the employee-facing Einstein Copilot used internally within organizations, the Einstein Service Agent can take action on behalf of users, such as processing product returns or issuing refunds. It can handle both simple and complex multi-step interactions, leveraging approved company workflows already established in Salesforce. Initially, Einstein Service Agent will be deployed for customer service scenarios, with plans to expand to other Salesforce clouds in the future.

What sets the Einstein Service Agent apart from other AI-driven workflows is its seamless integration with Salesforce’s existing customer data and workflows. “Einstein Service Agent is a generative AI-powered, self-service conversational experience built on our Einstein trust layer and platform,” Clara Shih, CEO of Salesforce AI, told VentureBeat. “Everything is grounded in our trust layer, as well as all the customer data and official business workflows that companies have been adding into Salesforce for the last 25 years.”

Distinguishing AI Agent from AI Copilot

Over the past year, Salesforce has detailed various aspects of its generative AI efforts, including the development of the Einstein Copilot, which became generally available at the end of April. The Einstein Copilot enables a wide range of conversational AI experiences for direct users of the Salesforce platform. “Einstein Copilot is employee-facing, for salespeople, customer service reps, marketers, and knowledge workers,” Shih explained.
“Einstein Service Agent is for our customers’ customers, for their self-service.”

The concept of a conversational AI bot answering basic customer questions isn’t new, but Shih emphasized that Einstein Service Agent is different: it benefits from all the data and generative AI work Salesforce has done in recent years. The agent approach is not just about answering simple questions but also about delivering knowledge-based responses and taking action. Both a copilot and an agent can chain multiple AI models and responses together; for Shih, the difference is one of degree. “It’s a spectrum toward more and more autonomy,” Shih said.

Driving the AI Agent Approach with Customer Workflows

As an example, Shih mentioned that Salesforce is working with a major apparel company as a pilot customer for Einstein Service Agent. If a customer places an online order and receives the wrong item, they could call the retailer during business hours for assistance from a human agent, who might be using the Einstein Copilot. If the customer reaches out when human agents aren’t available, or chooses a self-service route, Einstein Service Agent can step in. The customer will be able to ask about the issue and, if enabled in the workflow, get a resolution. The workflow that understands who the customer is and how to handle the issue is already part of the Salesforce Service Cloud.

Shih explained that Einstein Studio is where all administrative and configuration work for Einstein AI, including Service Agents, takes place, utilizing existing Salesforce data. The Einstein Service Agent provides a new layer for customers to interact with existing logic to solve issues. “Everything seemingly that the company has invested in over the last 25 years has come to light in the last 18 months, allowing customers to securely take advantage of generative AI in a trusted way,” Shih said.
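Salesforce has not published the Einstein Service Agent’s internals, but the pattern Shih describes—an agent that classifies a customer request and may only trigger actions from a company’s pre-approved workflows, escalating everything else to a human—can be sketched in outline. All names below (CustomerContext, APPROVED_WORKFLOWS, classify_intent, and the workflow keys) are hypothetical illustrations, not Salesforce APIs:

```python
# Illustrative sketch of the "agent + approved workflow" pattern described
# above. Every name here is hypothetical; this is not the Einstein Service
# Agent API, which Salesforce has not published.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class CustomerContext:
    """Customer data the agent is grounded in (stand-in for CRM records)."""
    customer_id: str
    last_order_id: str


# The agent may only trigger actions a company has explicitly registered,
# mirroring the "approved company workflows" idea from the article.
APPROVED_WORKFLOWS: Dict[str, Callable[[CustomerContext], str]] = {
    "return_item": lambda ctx: f"Return started for order {ctx.last_order_id}",
    "issue_refund": lambda ctx: f"Refund issued for order {ctx.last_order_id}",
}


def classify_intent(message: str) -> str:
    """Stand-in for the generative model's intent detection."""
    text = message.lower()
    if "wrong item" in text or "return" in text:
        return "return_item"
    if "refund" in text:
        return "issue_refund"
    return "escalate_to_human"


def handle_request(message: str, ctx: CustomerContext) -> str:
    """Resolve a request via an approved workflow, or hand off to a human."""
    intent = classify_intent(message)
    workflow = APPROVED_WORKFLOWS.get(intent)
    if workflow is None:
        # Anything outside the approved set is handed off, not improvised.
        return "Connecting you with a human agent."
    return workflow(ctx)


ctx = CustomerContext(customer_id="C123", last_order_id="O456")
print(handle_request("I received the wrong item", ctx))
```

The key design point the article emphasizes is the allow-list: the generative model proposes an intent, but only deterministic, company-approved workflows actually execute, which is one way to keep an autonomous agent inside a trust boundary.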

Enthusiasm for AI Powered Future


Trust in AI to handle certain tasks independently is growing, although the preference for AI-human collaboration remains strong.

Workers are increasingly relying on AI: a recent study shows that they trust AI to handle about 43 percent of their work tasks. Leaders show even more confidence, trusting AI with 51 percent of their work, compared to 40 percent among rank-and-file employees.

Looking ahead, 77 percent of global workers anticipate they will eventually trust AI to operate autonomously. Currently, only 10 percent of workers have this level of trust, but 26 percent believe they will within the next three years, rising to 41 percent in three or more years.

Despite this, the preference for AI-human collaboration remains strong, with 54 percent of global workers favoring a collaborative approach for most tasks. However, some workers are already comfortable with AI handling specific responsibilities alone: 15 percent trust AI to write code autonomously, 13 percent to uncover data insights, and 12 percent each to develop communications and act as a personal assistant.

There are still tasks where human involvement is deemed crucial. A significant number of workers trust only humans to ensure inclusivity (47 percent), onboard and train employees (46 percent), and keep data secure (40 percent).

Building trust in AI involves greater human participation: 63 percent of workers believe that increased human involvement would enhance their trust in AI. A major hurdle is lack of understanding, as 54 percent of workers admit they do not know how AI is implemented or governed in their workplaces. Those knowledgeable about AI implementation are five times more likely to trust AI to operate autonomously within the next two years than those who lack this knowledge.
A notable gender gap exists in AI knowledge: male workers are 94 percent more likely than female workers to understand how AI is implemented and governed at work. Additionally, 62 percent of workers feel that more skill-building and training opportunities would foster greater trust in AI.

Linda Saunders, Salesforce Director of Solutions Engineering Africa, highlighted the enthusiasm for an AI-powered future, emphasizing that human engagement is key to building trust and driving AI adoption. “By empowering humans at the helm of today’s AI systems, we can build trust and drive adoption – enabling workers to unlock all that AI has to offer,” Saunders stated.

The study was conducted by Salesforce in collaboration with YouGov from March 20 to April 3, 2024. It involved nearly 6,000 full-time knowledge workers from diverse companies across nine countries: the United States, the United Kingdom, Ireland, Australia, France, Germany, India, Singapore, and Switzerland.
