Data Archives - gettectonic.com - Page 44
AI Trust and Optimism


Building Trust in AI: A Complex Yet Essential Task

The Importance of Trust in AI
Trust in artificial intelligence (AI) is ultimately what will make or break the technology. Amid the hype and excitement of the past 18 months, it is widely recognized that human beings need to have faith in this new wave of automation, and that AI systems must not overstep boundaries or undermine personal freedoms. Building this trust is a complicated task, but one that is thankfully receiving increasing attention from responsible thought leaders in the field.

The Challenge of Responsible AI Development
There is a growing concern that in the AI arms race, some individuals and companies prioritize making their technology as advanced as possible without considering long-term human-centric issues or present-day realities. This concern was highlighted when OpenAI CEO Sam Altman presented AI hallucinations as a feature, not a bug, at last year’s Dreamforce, shortly after Salesforce CEO Marc Benioff emphasized the vital nature of trust.

Insights from Salesforce’s Global Study
Salesforce recently released the results of a global study of 6,000 knowledge workers from various companies. The study reveals that while respondents trust AI to manage 43% of their work tasks, they still prefer human intervention in areas such as training, onboarding, and data handling. A notable finding is the gap in trust between leaders and rank-and-file workers: leaders trust AI to handle over half (51%) of their work, while other workers trust it with 40%. Furthermore, 63% of respondents believe human involvement is key to building their trust in AI, though a subset is already comfortable offloading certain tasks to autonomous AI. The study predicts that within three years, 41% of global workers will trust AI to operate autonomously, a significant increase from the 10% who feel comfortable with this today.
Ethical Considerations in AI
Paula Goldman, Salesforce’s Chief Ethical and Humane Use Officer, is responsible for establishing guidelines and best practices for technology adoption. Her reading of the study findings is that while workers are excited about a future with autonomous AI and are beginning to transition to it, trust gaps still need to be bridged. Goldman notes that workers are currently comfortable with AI handling tasks like writing code, uncovering data insights, and building communications. However, they are less comfortable delegating tasks such as inclusivity, onboarding, training employees, and data security.

Salesforce advocates a “human at the helm” approach to AI. Goldman explains that human oversight builds trust in AI, but the way this oversight is designed must evolve to keep pace with AI’s rapid development. The traditional “human in the loop” model, in which humans review every AI-generated output, is no longer feasible even with today’s sophisticated AI systems. Goldman emphasizes the need for more sophisticated controls that let humans focus on high-risk, high-judgment decisions while delegating other tasks; crucially, these controls should provide a macro view of AI performance and the ability to inspect it.

Education and Training
Goldman also highlights the importance of educating those steering AI systems. Trust in, and adoption of, a technology require that people are enabled to use it successfully, which includes comprehensive knowledge and training to make the most of AI capabilities.

Optimism Amidst Skepticism
Despite widespread fears about AI, Goldman finds a considerable amount of optimism and curiosity among workers. The study reflects a recognition of AI’s transformative potential and its rapid improvement. However, it is essential to distinguish between genuine optimism and hype-driven enthusiasm.
Salesforce’s Stance on AI and Trust
Salesforce has taken a strong stance on trust in relation to AI, emphasizing that this technology is no silver bullet. The company acknowledges the balance between enthusiasm and pragmatism that many executives experience. While there is optimism about trusting autonomous AI within three years, this prediction needs to be substantiated with real-world evidence. Some organizations are already leading in generative AI adoption, while many others express interest in exploring its potential in the future.

Conclusion
Overall, this study contributes significantly to the ongoing debate about AI’s future. The concept of “human at the helm” is compelling and highlights the importance of ethical considerations in an AI-enabled future. Goldman’s role in presenting this research underscores Salesforce’s commitment to responsible AI development. For more insights, check out her blog on the subject.

AI in Drug Research


Insights on Leveraging AI in Biopharmaceutical R&D: A Discussion with Kailash Swarna

Last month, Accenture released a report titled “Reinventing R&D in the Age of AI,” which explores how biopharmaceutical companies can harness artificial intelligence (AI) and other advanced technologies to enhance drug and therapeutic research and development. Kailash Swarna, managing director and Accenture Life Sciences Global Research and Clinical lead, spoke with PharmaNewsIntelligence about the report’s findings and how AI can address ongoing challenges in research and development (R&D) while offering a return on technological investments.

“Data and analytics are crucial in advancing drug development, from early research to late-stage clinical trials,” said Swarna. “The industry still faces significant challenges, including the time and cost required to bring a medicine to market. As a leading technology firm, it’s our role to leverage the best in data analytics and technology for drug discovery and development.”

Accenture conducted detailed interviews with leaders from biopharma companies to explore AI’s role in drug development and discovery. These interviews were part of a CEO forum held just before the JP Morgan conference, where technology emerged as a major area of opportunity and concern.

Key Challenges in R&D
Understanding the challenges in the drug R&D landscape is crucial for identifying how AI can be effectively utilized. Swarna highlighted several significant challenges:

1. Scientific Growth
“The rapid advances in biology and disease understanding present both opportunities and challenges,” Swarna noted. “While our knowledge of human disease has greatly improved, keeping pace with scientific progress in terms of executing and reducing the time and cost of bringing new therapeutics to market remains a major challenge.” He described the clinical trial process as “fraught with complexities,” including data management issues.
Despite industry efforts to accelerate drug development, it often still takes over a decade and billions of dollars.

2. Macroeconomic Factors
Drug R&D companies also face challenges from macroeconomic conditions, such as reimbursement issues and the Inflation Reduction Act in the US. “These factors are reshaping how companies approach their portfolios and the disease areas they target,” Swarna explained. “The industry is undergoing a retooling to address these economic impacts.”

3. Technology Optimization
Many companies have made substantial technology investments, but integrating and systematically utilizing these technologies across the entire R&D process remains a challenge. “While individual technology investments have been valuable, there is a significant opportunity to unify these efforts and streamline data usage from early research through late-stage development,” Swarna said.

Reinventing R&D with AI
The report emphasizes that technological advancements, particularly generative AI and analytics, can revolutionize the R&D pipeline. “This isn’t about a single technology but about a comprehensive rethinking of processes, data flows, and technology investments across the entire R&D spectrum,” Swarna stated. He stressed that the reinvention of R&D processes requires an enterprise-wide strategy and implementation.

Responsible AI
Swarna also highlighted the importance of addressing potential challenges associated with AI. “At Accenture, we have a robust responsible AI framework,” he said. Responsible AI encompasses managing issues like bias and security. Accenture’s framework considers factors such as choosing appropriate patient populations and understanding how bias might impact research data. It also addresses security concerns, including intellectual property protection and patient privacy. “Protecting patient privacy and complying with global regulations is crucial when utilizing AI technology,” Swarna emphasized.
“Without proper safeguards, we risk data loss or breaches.”

Measuring the ROI of AI in Drug Research
To ensure that AI technologies positively impact the R&D lifecycle, Swarna described a framework for measuring return on investment (ROI). “Given the long cycle of our industry, we’ve developed objective measures to evaluate the impact of these technologies on cost and time,” he explained. Companies can use quantitative measures to track interim milestones, such as recruitment costs and speeds. “These metrics allow us to observe progress in smaller increments rather than waiting for end-to-end results,” Swarna said. “The approach varies by company and their stage in implementing these technologies.”

Benefits of AI in Clinical Trials
Incorporating AI into clinical trials has the potential to reduce research times and costs. While Swarna and Accenture cannot predict policy impacts on drug pricing, he offered a theoretical benefit: optimizing technology could lower development costs, potentially making medicines more affordable and accessible. Swarna noted that reducing R&D spending could lead to more effective drugs being available to larger populations without placing an excessive burden on the healthcare system.

For further details, the original report and discussion were published by Accenture and can be accessed on their official site.
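The interim-milestone idea Swarna describes, tracking recruitment cost and speed rather than waiting for end-to-end results, can be sketched in a few lines of Python. The function name, metric names, and figures below are illustrative assumptions, not taken from the Accenture report.

```python
def recruitment_metrics(enrolled, screened, total_cost, days_elapsed):
    """Hypothetical interim clinical-trial recruitment milestones."""
    return {
        "cost_per_enrolled": total_cost / enrolled,        # dollars per enrolled patient
        "screen_to_enroll": screened / enrolled,           # screenings needed per enrollment
        "enrolled_per_30d": enrolled / days_elapsed * 30,  # recruitment speed
    }

# Compare a baseline trial arm with a (hypothetical) AI-assisted one.
baseline = recruitment_metrics(enrolled=120, screened=600,
                               total_cost=1_800_000, days_elapsed=180)
with_ai = recruitment_metrics(enrolled=150, screened=450,
                              total_cost=1_650_000, days_elapsed=150)

print(baseline["cost_per_enrolled"])  # 15000.0
print(with_ai["cost_per_enrolled"])   # 11000.0
```

Computing these numbers at each milestone is what lets teams observe progress “in smaller increments,” as the interview puts it, instead of waiting a decade for the end-to-end outcome.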

Boosting Payer Patient Education with Technology


Data and Technology Strategies Elevate Payer-Driven Patient Education

Analytics platforms, omnichannel engagement, telehealth, and other technology and data innovations are transforming patient education initiatives within the payer space. Dr. Cathy Moffitt, a pediatrician with over 15 years of emergency department experience and now Chief Medical Officer at Aetna within CVS Health, emphasizes the crucial role of patient education in empowering individuals to navigate their healthcare journeys. “Education is empowerment; it’s engagement. In my role with Aetna, I continue to see health education as fundamental,” Moffitt explained on an episode of Healthcare Strategies.

Leveraging Data for Targeted Education
At large payers like Aetna, patient education starts with deep data insights. By analyzing member data, payers can identify key opportunities to deliver educational content precisely when members are most receptive. “People are more open to hearing and being educated when they need help right then,” Moffitt said. Aetna’s Next Best Action initiative, launched in 2018, is one such program: it reaches out to members at optimal times, focusing on guiding individuals with specific conditions on the next best steps for their health. By sharing patient education materials in these key moments, Aetna aims to maximize the impact and relevance of its outreach.

Tailoring Education with Demographic Data
Data on member demographics, such as race, ethnicity, gender identity, and zip code, further customizes Aetna’s educational efforts. By incorporating translation services and sensitivity training for customer representatives, Aetna ensures that all communication is accessible and relevant for members from diverse backgrounds. Additionally, an up-to-date provider directory allows members to connect with healthcare professionals who understand their cultural and linguistic needs, increasing trust and the likelihood of engaging with educational resources.
Technology’s Role in Mental Health and Preventive Care Education
With over 20 years in healthcare, Moffitt observes that patient education has made significant strides in mental health and preventive care, areas where technology has had a transformative impact. In mental health, for example, education has helped reduce stigma, and telemedicine has expanded access. Preventive care education has raised awareness of screenings, vaccines, and wellness visits, with options like home health visits and retail clinics contributing to increased engagement among Aetna’s members.

The Future of Customized, Omnichannel Engagement
Looking ahead, Moffitt envisions even more personalized and seamless engagement through omnichannel solutions, allowing members to receive educational materials via their preferred methods, whether email, text, or phone. “I can’t predict exactly where we’ll be in 10 years, but with the technological commitments we’re making, we’ll continue to meet evolving member demands,” Moffitt added.

UncannyAutomator Salesforce Integration


Integrating WordPress with Salesforce

With the Uncanny Automator Elite Integrations addon, connecting your WordPress site to Salesforce is straightforward.

Steps to Connect Uncanny Automator to Your Salesforce Account

1. Install the Elite Integrations Addon
First, ensure you have the Elite Integrations addon for Uncanny Automator installed on your WordPress site.

2. Connect Uncanny Automator to Salesforce
You will be prompted to log into Salesforce. After logging in, allow Uncanny Automator to manage your Salesforce data by clicking Allow. You will then return to the app connection screen on your WordPress site.

Using Salesforce Actions in Recipes
Once connected to Salesforce, you can use Uncanny Automator to create and update contacts and leads based on user actions on your WordPress site.

Final Steps
That’s it! Your recipe will now run automatically whenever users complete the selected trigger(s), sending the desired updates directly to your Salesforce account.

Installing Uncanny Automator

Install the free version
The free version of Uncanny Automator is hosted in the WordPress.org repository, so installing it on your WordPress site couldn’t be easier. Sign into your website as an administrator and, in /wp-admin/, navigate to Plugins > Add New. In the search field, enter “Uncanny Automator”. In the search results, click the Install Now button for Automator. Once it finishes installing, click Activate. That’s it! Uncanny Automator is installed and ready for use. Note that you must have the free version installed first to use Uncanny Automator Pro.

The setup wizard
After activation, you will be redirected to the Uncanny Automator dashboard. From here, you can connect an account, watch tutorials, or read articles in the Knowledge Base.
Connecting a free account is an optional step that allows you to try out some of the non-WordPress App integrations (like Slack, Google Sheets, and Facebook), but it is not required to use anything else in the free version.

Install Uncanny Automator Pro
Uncanny Automator Pro is a separate plugin from the free version; to use Pro features, you must have both Uncanny Automator AND Uncanny Automator Pro installed and active. If you don’t yet have a copy of Automator Pro, you can purchase one from https://automatorplugin.com/pricing/. Once purchased, you can download the latest version of Uncanny Automator Pro from your account at https://automatorplugin.com/my-account/downloads/. To install the Pro version after downloading the zip file, navigate to Plugins > Add New in /wp-admin/. At the top of the page, click the Upload Plugin button. Click Choose File to select the Pro zip file, then Install Now, and Activate the plugin. Once activated, be sure to visit Automator > Settings in /wp-admin/ to enter your license key. This unlocks automatic updates and unlimited use of non-WordPress integrations in your recipes. Uncanny Automator’s special triggers can be found here.

Best ChatGPT Competitor Tools


ChatGPT Alternatives – Best ChatGPT Competitor Tools

Discover the future of AI chat: explore the top ChatGPT alternatives for enhanced communication and productivity. To avoid playing favorites, tools are presented in alphabetical order.

Have you ever found yourself wishing for a ChatGPT alternative that might better suit your specific content or AI assistant needs? Whether you’re a business owner, content creator, or student, the right AI chat tool can significantly influence how you interact with information and manage tasks. In this insight, we look into the top ChatGPT alternatives available in 2024. By the end, you’ll have a clear idea of which options might be best for your particular use case and why.

BONUS: Quillbot AI
Great for paraphrasing small blocks of content.

In the rapidly evolving world of AI chat technology, these top ChatGPT alternatives of 2024 offer a diverse range of capabilities to suit various needs and preferences. Whether you’re looking to streamline your workflow, enhance your learning, or simply engage in more dynamic conversations, there’s a tool out there (or 2 or 10) that can help boost your digital interactions. Each platform brings its unique strengths to the table, from specialized functionalities like summarizing texts or coding assistance to more general but highly efficient conversational capabilities. There is no reason to select only one.
As you consider integrating these tools into your daily routine, think about how their features align with your goals. Embrace the possibilities and let these advanced technologies open new doors to efficiency, creativity, and connectivity. Create a bookmark folder just for GPT tools; new ones pop up routinely. Happy chatting!

Patient Trust Tanked in Healthcare During COVID


Patient Trust in Healthcare Declined During the COVID-19 Pandemic

Patient trust in healthcare providers significantly declined during the COVID-19 pandemic, a trend that some experts believe could threaten public health. New data published in JAMA Network Open outlines the negative impact the pandemic had on patient trust levels. The study, which analyzed survey results collected between April 2020 and January 2024, revealed a drop of more than 30 percentage points in self-reported patient trust. Factors such as age, gender (specifically female), lower educational attainment, lower income, Black race, and living in rural areas were associated with lower trust levels, according to the researchers.

These findings come as the healthcare industry examines the broader implications of the pandemic. The focus on patient trust is crucial because of the significant role healthcare providers play in public health and the profound impact the pandemic had on societal attitudes. “During the COVID-19 pandemic, medicine and public health became politicized, with the internet amplifying public figures and even some physicians encouraging distrust in public health experts and scientists,” the investigators wrote. “As such, the pandemic may have represented a turning point in trust, with a profession previously seen as trustworthy increasingly subject to doubt.”

The data, drawn from 24 waves of surveys involving more than 443,000 individuals over age 18, showed that healthcare professionals began the pandemic with high trust ratings: 71.5% of individuals reported trust in physicians and hospitals. By January 2024, this number had fallen to 40.1%. The decline in trust could have serious repercussions for public health. Lower patient trust was linked to a reduced likelihood of receiving flu or COVID-19 vaccinations.
“Our results cannot establish causation, but in the context of prior studies documenting associations between physician trust and more positive health outcomes, they raise the possibility that the decrease in trust during the pandemic could have long-lasting public health implications,” the researchers explained. Conversely, higher levels of trust were associated with healthier behaviors, particularly receipt of the COVID-19 vaccine, flu shots, and COVID-19 boosters.

To address this issue, the healthcare sector should focus on reaffirming patient trust in physicians and hospitals. However, this may be a challenging task: a previous Cochrane review found that no intervention meaningfully changed trust in physicians, despite numerous efforts that generally had modest effects. “A better understanding of groups exhibiting particularly low trust, and the factors associated with that diminished trust, may be valuable in guiding future intervention development and deployment,” the researchers suggested.

These findings contrast sharply with the early stages of the pandemic, including the COVID-19 vaccine rollout, when public health experts touted doctors as among the most trusted COVID-19 messengers. The study could not pinpoint a specific reason for the loss of patient trust, noting that it was not linked to political affiliation nor fully explained by a lack of trust in science. This suggests that something particular about healthcare itself contributed to the decline in trust during the pandemic. Further research is necessary to uncover more trends among individuals whose trust levels decreased during the pandemic, the researchers recommended.

LLMs Turn CSVs into Knowledge Graphs


Neo4j Runway and Healthcare Knowledge Graphs

Recently, Neo4j Runway was introduced as a tool to simplify the migration of relational data into graph structures. According to its GitHub page, “Neo4j Runway is a Python library that simplifies the process of migrating your relational data into a graph. It provides tools that abstract communication with OpenAI to run discovery on your data and generate a data model, as well as tools to generate ingestion code and load your data into a Neo4j instance.” In essence, you upload a CSV file, the LLM identifies the nodes and relationships, and a knowledge graph is generated automatically.

Knowledge graphs in healthcare are powerful tools for organizing and analyzing complex medical data. These graphs structure information to elucidate relationships between different entities, such as diseases, treatments, patients, and healthcare providers.

Applications of Knowledge Graphs in Healthcare

Integration of Diverse Data Sources
Knowledge graphs can integrate data from various sources such as electronic health records (EHRs), medical research papers, clinical trial results, genomic data, and patient histories.

Improving Clinical Decision Support
By linking symptoms, diagnoses, treatments, and outcomes, knowledge graphs can enhance clinical decision support systems (CDSS). They provide a comprehensive view of interconnected medical knowledge, potentially improving diagnostic accuracy and treatment effectiveness.

Personalized Medicine
Knowledge graphs enable the development of personalized treatment plans by correlating patient-specific data with broader medical knowledge. This includes understanding relationships between genetic information, disease mechanisms, and therapeutic responses, leading to more tailored healthcare interventions.
Drug Discovery and Development
In pharmaceutical research, knowledge graphs can accelerate drug discovery by identifying potential drug targets and understanding the biological pathways involved in diseases.

Public Health and Epidemiology
Knowledge graphs are useful in public health for tracking disease outbreaks, understanding epidemiological trends, and planning interventions. They integrate data from various public health databases, social media, and other sources to provide real-time insights into public health threats.

The Neo4j Runway Library
Neo4j Runway is an open-source library created by Alex Gilmore. The GitHub repository and a blog post describe its features and capabilities. Currently, the library supports the OpenAI LLM for parsing CSVs. It eliminates the need to write Cypher queries manually, as the LLM handles the entire CSV-to-knowledge-graph conversion. Additionally, Langchain’s GraphCypherQAChain can be used to generate Cypher queries from prompts, allowing the graph to be queried without writing a single line of Cypher.

Practical Implementation in Healthcare
To test Neo4j Runway in a healthcare context, a simple dataset from Kaggle (Disease Symptoms and Patient Profile Dataset) was used. This dataset includes columns such as Disease, Fever, Cough, Fatigue, Difficulty Breathing, Age, Gender, Blood Pressure, Cholesterol Level, and Outcome Variable. The goal was to provide a medical report to the LLM to get diagnostic hypotheses.
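Before diving into the library itself, the CSV-to-graph idea can be sketched in plain Python. The snippet below is illustrative, not part of Neo4j Runway: the sample rows follow the dataset’s column names, and the edge types mirror the HAS_SYMPTOM/HAS_OUTCOME relationships used later in this walkthrough (including the rule that a “No” symptom value produces no edge).

```python
import csv
import io

# A couple of rows in the shape of the Kaggle dataset (values illustrative).
CSV_DATA = """Disease,Fever,Cough,Fatigue,Difficulty Breathing,Outcome Variable
Influenza,Yes,Yes,Yes,No,Positive
Asthma,No,Yes,No,Yes,Positive
"""

SYMPTOM_COLUMNS = ["Fever", "Cough", "Fatigue", "Difficulty Breathing"]

def rows_to_graph(rows):
    """Build (Disease)-[:HAS_SYMPTOM]->(Symptom) and
    (Disease)-[:HAS_OUTCOME]->(Outcome) edges, skipping 'No' symptoms."""
    edges = set()
    for row in rows:
        disease = row["Disease"]
        for col in SYMPTOM_COLUMNS:
            if row[col] == "Yes":
                edges.add((disease, "HAS_SYMPTOM", col))
        edges.add((disease, "HAS_OUTCOME", row["Outcome Variable"]))
    return edges

graph = rows_to_graph(csv.DictReader(io.StringIO(CSV_DATA)))
for edge in sorted(graph):
    print(edge)
```

What Neo4j Runway automates is essentially this mapping at scale: the LLM infers which columns become nodes and which become relationships, then generates the ingestion code.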
Libraries and Environment Setup

```python
# Install the necessary packages first:
#   sudo apt install python3-pydot graphviz
#   pip install neo4j-runway python-dotenv

import os

import numpy as np
import pandas as pd
from dotenv import load_dotenv
from neo4j_runway import Discovery, GraphDataModeler, IngestionGenerator, LLM, PyIngest
from IPython.display import display, Markdown, Image
```

Load Environment Variables

```python
# Note: os.getenv() takes the *name* of the variable defined in your .env
# file, not the secret value itself.
load_dotenv()
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')  # e.g. sk-...
NEO4J_URL = os.getenv('NEO4J_URL')            # e.g. neo4j+s://your.databases.neo4j.io
NEO4J_PASSWORD = os.getenv('NEO4J_PASSWORD')
```

Load and Prepare Medical Data

```python
disease_df = pd.read_csv('/home/user/Disease_symptom.csv')
disease_df.columns = disease_df.columns.str.strip()
for col in disease_df.columns:
    disease_df[col] = disease_df[col].astype(str)
disease_df.to_csv('/home/user/disease_prepared.csv', index=False)
```

Data Description for the LLM

```python
DATA_DESCRIPTION = {
    'Disease': 'The name of the disease or medical condition.',
    'Fever': 'Indicates whether the patient has a fever (Yes/No).',
    'Cough': 'Indicates whether the patient has a cough (Yes/No).',
    'Fatigue': 'Indicates whether the patient experiences fatigue (Yes/No).',
    'Difficulty Breathing': 'Indicates whether the patient has difficulty breathing (Yes/No).',
    'Age': 'The age of the patient in years.',
    'Gender': 'The gender of the patient (Male/Female).',
    'Blood Pressure': 'The blood pressure level of the patient (Normal/High).',
    'Cholesterol Level': 'The cholesterol level of the patient (Normal/High).',
    'Outcome Variable': 'The outcome variable indicating the result of the diagnosis or assessment for the specific disease (Positive/Negative).'
}
```

Data Analysis and Model Creation

```python
# Assumes an LLM wrapper instance has been created beforehand (the class
# imported above), e.g. llm = LLM().
disc = Discovery(llm=llm, user_input=DATA_DESCRIPTION, data=disease_df)
disc.run()

# Instantiate and create the initial graph data model
gdm = GraphDataModeler(llm=llm, discovery=disc)
gdm.create_initial_model()
gdm.current_model.visualize()
```
codegdm.iterate_model(user_corrections=”’ Let’s think step by step. Please make the following updates to the data model: 1. Remove the relationships between Patient and Disease, between Patient and Symptom and between Patient and Outcome. 2. Change the Patient node into Demographics. 3. Create a relationship HAS_DEMOGRAPHICS from Disease to Demographics. 4. Create a relationship HAS_SYMPTOM from Disease to Symptom. If the Symptom value is No, remove this relationship. 5. Create a relationship HAS_LAB from Disease to HealthIndicator. 6. Create a relationship HAS_OUTCOME from Disease to Outcome. ”’) # Visualize the updated model gdm.current_model.visualize().render(‘output’, format=’png’) img = Image(‘output.png’, width=1200) display(img) Generate Cypher Code and YAML File pythonCopy code# Instantiate ingestion generator gen = IngestionGenerator(data_model=gdm.current_model, username=”neo4j”, password=’yourneo4jpasswordhere’, uri=’neo4j+s://123654888.databases.neo4j.io’, database=”neo4j”, csv_dir=”/home/user/”, csv_name=”disease_prepared.csv”) # Create ingestion YAML pyingest_yaml = gen.generate_pyingest_yaml_string() gen.generate_pyingest_yaml_file(file_name=”disease_prepared”) # Load data into Neo4j instance PyIngest(yaml_string=pyingest_yaml, dataframe=disease_df) Querying the Graph Database cypherCopy codeMATCH (n) WHERE n:Demographics OR n:Disease OR n:Symptom OR n:Outcome OR n:HealthIndicator OPTIONAL MATCH (n)-[r]->(m) RETURN n, r, m Visualizing Specific Nodes and Relationships cypherCopy codeMATCH (n:Disease {name: ‘Diabetes’}) WHERE n:Demographics OR n:Disease OR n:Symptom OR n:Outcome OR n:HealthIndicator OPTIONAL MATCH (n)-[r]->(m) RETURN n, r, m MATCH (d:Disease) MATCH (d)-[r:HAS_LAB]->(l) MATCH (d)-[r2:HAS_OUTCOME]->(o) WHERE l.bloodPressure = ‘High’ AND o.result=’Positive’ RETURN d, properties(d) AS disease_properties, r, properties(r) AS relationship_properties, l, properties(l) AS lab_properties Automated Cypher Query Generation with Gemini-1.5-Flash 
To automatically generate a Cypher query via Langchain (GraphCypherQAChain) and retrieve possible diseases based on a patient’s symptoms and health indicators, the following setup was used: Initialize Vertex AI pythonCopy codeimport warnings import json from langchain_community.graphs import Neo4jGraph with warnings.catch_warnings(): warnings.simplefilter(‘ignore’) NEO4J_USERNAME = “neo4j” NEO4J_DATABASE = ‘neo4j’ NEO4J_URI = ‘neo4j+s://1236547.databases.neo4j.io’ NEO4J_PASSWORD = ‘yourneo4jdatabasepasswordhere’ # Get the Knowledge Graph from the instance and the schema kg = Neo4jGraph( url=NEO4J_URI, username=NEO4J_USERNAME, password=NEO4J_PASSWORD, database=NEO4J_DATABASE ) kg.refresh_schema() print(textwrap.fill(kg.schema, 60)) schema = kg.schema Initialize Vertex AI pythonCopy codefrom langchain.prompts.prompt import PromptTemplate from langchain.chains import GraphCypherQAChain from langchain.llms import VertexAI vertexai.init(project=”your-project”, location=”us-west4″) llm = VertexAI(model=”gemini-1.5-flash”) Create the Prompt Template pythonCopy codeprompt_template = “”” Let’s think step by
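The original prompt template is truncated at this point in the source. As a purely illustrative stand-in (the wording, `{schema}`/`{question}` placeholders, and `build_prompt` helper below are our own, not the original template), a Cypher-generation prompt typically interpolates the graph schema and the user's question:

```python
# Hypothetical Cypher-generation prompt; the post's actual template is cut
# off, so this wording and these placeholder names are illustrative only.
CYPHER_PROMPT = """Let's think step by step.
You are an expert Neo4j Cypher translator. Using ONLY the node labels,
relationship types, and properties in the schema below, write a Cypher
query that answers the question. Return only the query.

Schema:
{schema}

Question: {question}
"""

def build_prompt(schema, question):
    """Interpolate a concrete schema and question into the template."""
    return CYPHER_PROMPT.format(schema=schema, question=question)

demo = build_prompt(
    schema="(:Disease)-[:HAS_SYMPTOM]->(:Symptom)",
    question="Which diseases have the symptom Fever?",
)
```

A string of this shape is what GraphCypherQAChain would receive as its `cypher_prompt`, with the schema filled in from `kg.schema`.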

Salesforce Research Produces INDICT


Automating and assisting in coding holds tremendous promise for speeding up and enhancing software development. Yet ensuring that these advancements yield secure and effective code remains a significant challenge. Balancing functionality with safety is crucial, especially given the risk that generated code can be maliciously exploited. In practical applications, large language models (LLMs) often struggle with ambiguous or adversarial instructions, sometimes producing unintended security vulnerabilities or facilitating harmful attacks. This isn't merely theoretical: empirical studies of GitHub Copilot have found that a substantial portion of its generated programs (about 40%) contained vulnerabilities. Addressing these risks is vital for unlocking the full potential of LLMs in coding while safeguarding against threats.

Current strategies to mitigate these risks include fine-tuning LLMs on safety-focused datasets and implementing rule-based detectors that flag insecure code patterns. However, fine-tuning alone may not withstand sophisticated attack prompts, and creating high-quality safety-related data is resource-intensive. Meanwhile, rule-based systems rarely cover every vulnerability scenario, leaving exploitable gaps.

To address these challenges, researchers at Salesforce Research have introduced the INDICT framework. INDICT employs two critics, one focused on safety and the other on helpfulness, to improve the quality of LLM-generated code. The framework facilitates an internal dialogue between the critics, drawing on external knowledge sources such as code snippets and web searches to produce informed critiques and iterative feedback. INDICT operates through two key stages: preemptive and post-hoc feedback.
In the preemptive stage, the safety critic assesses potential risks while code is being generated, and the helpfulness critic checks alignment with the task requirements; external knowledge sources enrich both evaluations. In the post-hoc stage, after the code has executed, both critics review the outcomes to refine future outputs, ensuring continuous improvement.

Evaluation of INDICT across eight diverse tasks and programming languages demonstrated substantial gains in both safety and helpfulness metrics, including a 10% absolute improvement in overall code quality. On the CyberSecEval-1 benchmark, for instance, INDICT improved code safety by up to 30%, with over 90% of outputs deemed secure, while the helpfulness metric surpassed state-of-the-art baselines by up to 70%. INDICT's success lies in providing detailed, context-aware critiques that guide LLMs toward more secure and functional code. By integrating safety and helpfulness feedback, the framework sets a new standard for responsible AI in coding, addressing critical concerns about functionality and security in automated software development.
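Salesforce has not published INDICT's internals in this post, but the dual-critic idea can be sketched with stub functions (all names and rules below are our own, and the real system uses LLM critics backed by external knowledge, not string checks): a generator proposes code, the safety and helpfulness critics each return a critique or approval, and the draft is revised until both critics pass or an iteration budget runs out.

```python
# Minimal sketch of a dual-critic refinement loop in the spirit of INDICT.
# The critics here are hard-coded stubs for illustration only.

def safety_critic(code):
    """Return a critique if the code looks unsafe, else None (stub rule)."""
    if "eval(" in code:
        return "Avoid eval(); parse the input explicitly."
    return None

def helpfulness_critic(code):
    """Return a critique if the code fails the task, else None (stub rule)."""
    if "def " not in code:
        return "Wrap the logic in a reusable function."
    return None

def revise(code, critique):
    """Stub 'generator': apply a canned fix for each canned critique."""
    if "eval()" in critique:
        return code.replace("eval(expr)", "int(expr)")
    if "reusable function" in critique:
        return "def run(expr):\n    return " + code
    return code

def indict_loop(code, max_iters=5):
    """Alternate safety and helpfulness feedback until both critics pass."""
    for _ in range(max_iters):
        critique = safety_critic(code) or helpfulness_critic(code)
        if critique is None:
            return code  # both critics satisfied
        code = revise(code, critique)
    return code

# Example: an unsafe one-liner receives both fixes over successive rounds.
draft = "eval(expr)"
final = indict_loop(draft)
```

The real framework swaps the stub critiques for LLM-generated, context-aware feedback, but the control flow (critique, revise, re-critique) is the same shape.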

Forecasting With Foundation Models


On Hugging Face, 20 models are tagged "time series" at the time of writing. While this number is low compared to the 125,950 results for the "text-generation-inference" tag, time series forecasting with foundation models has attracted significant interest from major companies such as Amazon, IBM, and Salesforce, which have developed their own models: Chronos, TinyTimeMixer, and Moirai, respectively. Currently, one of the most popular time series models on Hugging Face is Lag-Llama, a univariate probabilistic model developed by Kashif Rasul, Arjun Ashok, and their co-authors. Open-sourced in February 2024, Lag-Llama is claimed by its authors to possess strong zero-shot generalization across diverse datasets and domains and, once fine-tuned, to be the best general-purpose model of its kind. In this insight, we share our experience fine-tuning Lag-Llama and test its capabilities against a more classical machine learning approach: an XGBoost model designed for univariate time series data. Gradient boosting algorithms like XGBoost are widely regarded as the pinnacle of classical (as opposed to deep) machine learning and perform exceptionally well on tabular data, so benchmarking Lag-Llama against XGBoost is a fitting way to determine whether the foundation model lives up to its promises. The results, however, are not straightforward.

The data used for this exercise is a four-year series of hourly wave heights off the coast of Ribadesella, a town in the Spanish region of Asturias. The data, available from the Spanish ports authority's data portal, spans June 18, 2020, to June 18, 2024. For this study, the series is aggregated to a daily level by taking the maximum wave height recorded each day; this aggregation illustrates the concepts more clearly, as results become volatile at higher granularity.
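The daily aggregation step described above, taking each day's maximum hourly wave height, can be sketched in plain Python (the tuple layout here is an assumption; the ports-authority schema isn't shown in the post):

```python
from datetime import datetime

def daily_max(hourly):
    """Collapse (timestamp, height) pairs into one max height per date.

    `hourly` is an iterable of (datetime, float) tuples; returns a dict
    mapping date -> maximum height observed on that day, mirroring the
    post's aggregation of hourly wave heights to a daily series.
    """
    out = {}
    for ts, height in hourly:
        day = ts.date()
        out[day] = max(out.get(day, height), height)
    return out

readings = [
    (datetime(2020, 6, 18, 0), 1.2),
    (datetime(2020, 6, 18, 13), 2.7),
    (datetime(2020, 6, 19, 5), 0.9),
]
series = daily_max(readings)  # one entry per calendar day
```

In practice the same operation is one line of pandas (`df.resample('D').max()` on a datetime-indexed frame), but the pure-Python version makes the semantics explicit.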
The target variable is the maximum height of the waves recorded each day, measured in meters. Several reasons influenced the choice of this series. First, Lag-Llama was trained on some weather-related data, making this series challenging yet manageable for the model. Second, while meteorological forecasts are typically produced with numerical weather models, statistical models can complement them, especially for long-range predictions; in the era of climate change, statistical models can provide a baseline expectation and highlight deviations from typical patterns. The dataset is standard and requires minimal preprocessing, such as imputing a few missing values. After splitting the data into training, validation, and test sets, with the latter two covering five months each, the next step is to benchmark Lag-Llama against XGBoost on two univariate forecasting tasks: point forecasting and probabilistic forecasting. Point forecasting produces a single predicted value per period, while probabilistic forecasting produces a prediction interval. Lag-Llama was primarily trained for probabilistic forecasting, but point forecasts are useful for illustration. Forecasts involve several design choices, such as the forecast horizon, which recent observations are fed into the model, and how often the model is updated. This study uses a recursive multi-step forecast without updating the model, with a step size of seven days: the model produces batches of seven forecasts at a time, using its latest predictions to generate the next batch without retraining. Point forecasting performance is measured with Mean Absolute Error (MAE), while probabilistic forecasting is evaluated by the empirical coverage of nominal 80% prediction intervals. The XGBoost model is defined using Skforecast, a library that facilitates the development and testing of forecasters.
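The recursive seven-day scheme described above can be sketched with a stub one-step model (the model below is a placeholder, not Lag-Llama or XGBoost): each batch of predictions is generated one step at a time, with earlier predictions fed back in as lags and no retraining between batches.

```python
def recursive_forecast(history, one_step_model, horizon, step=7):
    """Produce `horizon` forecasts in batches of `step`, recursively.

    `one_step_model` maps a list of past values to the next value.
    Each prediction is appended to the working history, so later steps
    consume earlier predictions instead of real observations.
    """
    work = list(history)
    preds = []
    while len(preds) < horizon:
        for _ in range(min(step, horizon - len(preds))):
            nxt = one_step_model(work)
            preds.append(nxt)
            work.append(nxt)
    return preds

# Stub one-step model: predict the mean of the last three observations.
mean3 = lambda past: sum(past[-3:]) / 3

forecasts = recursive_forecast([1.0, 2.0, 3.0], mean3, horizon=4)
```

Swapping `mean3` for a fitted regressor over lagged features gives the same recursive behavior that Skforecast's backtesting utilities automate.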
The ForecasterAutoreg object is created with an XGBoost regressor, and the optimal number of lags is determined through Bayesian optimization. The resulting model uses 21 lags of the target variable and various hyperparameters optimized through the search. The performance of the XGBoost forecaster is assessed through backtesting, which evaluates the model on a test set. The model’s MAE is 0.64, indicating that predictions are, on average, 64 cm off from the actual measurements. This performance is better than a simple rule-based forecast, which has an MAE of 0.84. For probabilistic forecasting, Skforecast calculates prediction intervals using bootstrapped residuals. The intervals cover 84.67% of the test set values, slightly above the target of 80%, with an interval area of 348.28. Next, the zero-shot performance of Lag-Llama is examined. Using context lengths of 32, 64, and 128 tokens, the model’s MAE ranges from 0.75 to 0.77, higher than the XGBoost forecaster’s MAE. Probabilistic forecasting with Lag-Llama shows varying coverage and interval areas, with the 128-token model achieving an 84.67% coverage and an area of 399.25, similar to XGBoost’s performance. Fine-tuning Lag-Llama involves adjusting context length and learning rate. Despite various configurations, the fine-tuned model does not significantly outperform the zero-shot model in terms of MAE or coverage. In conclusion, Lag-Llama’s performance, without training, is comparable to an optimized traditional forecaster like XGBoost. Fine-tuning does not yield substantial improvements, suggesting that more training data might be necessary. When choosing between Lag-Llama and XGBoost, factors such as ease of use, deployment, maintenance, and inference costs should be considered, with XGBoost likely having an edge in these areas. The code used in this study is publicly available on a GitHub repository for further exploration. 
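The two metrics used above are straightforward to compute by hand: MAE is the average absolute deviation of forecasts from observations, and empirical coverage is the share of actual values falling inside their predicted intervals (for a nominal 80% interval, good calibration means coverage near 0.80). A minimal version:

```python
def mean_absolute_error(actuals, preds):
    """Average absolute deviation between forecasts and observations."""
    return sum(abs(y - p) for y, p in zip(actuals, preds)) / len(actuals)

def empirical_coverage(actuals, lowers, uppers):
    """Fraction of actual values that fall inside [lower, upper]."""
    hits = sum(lo <= y <= hi for y, lo, hi in zip(actuals, lowers, uppers))
    return hits / len(actuals)

# Toy check: three of four observations land inside their intervals.
cov = empirical_coverage(
    [1.0, 2.0, 3.0, 9.0],
    [0.5, 1.5, 2.5, 3.5],
    [1.5, 2.5, 3.5, 4.5],
)
```

The 84.67% figures reported for both XGBoost and the 128-token Lag-Llama are exactly this quantity computed over the test set, which is why the interval area is also reported: of two models with the same coverage, the one with narrower intervals is the sharper forecaster.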


Can We Customize Manufacturing Cloud For Our Business?

Yes, Salesforce Manufacturing Cloud Can Be Customized to Meet Your Business Needs

Salesforce Manufacturing Cloud is designed to be highly customizable, allowing manufacturing organizations to tailor it to their unique business requirements. Whether it's adapting the platform to specific workflows, integrating with third-party systems, or enhancing reporting capabilities, Salesforce provides robust customization options. Here are key ways Salesforce Manufacturing Cloud can be customized:

1. Custom Data Models and Objects. Salesforce allows you to create custom objects and fields to track data beyond the standard model. This flexibility lets businesses manage unique production metrics or product configurations seamlessly within the platform.

2. Sales Agreement Customization. Sales Agreements in Salesforce Manufacturing Cloud can be tailored to reflect your business's specific contract terms and pricing models, including customization of terms, conditions, and rebate tracking.

3. Custom Workflows and Automation. Salesforce offers tools like Flow Builder and Process Builder, allowing manufacturers to automate routine tasks and create custom workflows that streamline operations.

4. Integration with Third-Party Systems. Salesforce Manufacturing Cloud can integrate seamlessly with ERP systems (such as SAP or Oracle), inventory management platforms, and IoT devices to ensure smooth data flow across departments.

5. Custom Reports and Dashboards. With Salesforce's robust reporting tools, you can create custom reports and dashboards that provide real-time insight into the key performance indicators (KPIs) relevant to your manufacturing operations.
6. Custom User Interfaces. Salesforce Lightning allows you to customize user interfaces for different roles within your organization, such as production managers or sales teams, ensuring users have quick access to relevant data.

Conclusion

Salesforce Manufacturing Cloud provides a wide range of customization options to suit the unique needs of your manufacturing business. Whether you are adjusting data models, automating processes, or integrating with external systems, Manufacturing Cloud can be tailored to your operational goals. By leveraging these customizations, manufacturers can optimize operations, improve data accuracy, and gain real-time insights that boost efficiency. If you need help customizing Salesforce Manufacturing Cloud, Service Cloud, or Sales Cloud for your business, our Salesforce Manufacturing Cloud Services team is here to assist.

Boost Payer Patient Education


As a pediatrician with 15 years of experience in the pediatric emergency department, Cathy Moffitt, MD, understands the critical role of patient education. Now, as Senior Vice President and Aetna Chief Medical Officer at CVS Health, she applies that knowledge to the payer space. “Education is empowerment. It’s engagement. It’s crucial for equipping patients to navigate their healthcare journey. Now, overseeing a large payer like Aetna, I still firmly believe in the power of health education,” Moffitt shared on an episode of Healthcare Strategies. At a payer organization like Aetna, patient education begins with data analytics to better understand the member population. According to Moffitt, key insights from data can help payers determine the optimal time to share educational materials with members. “People are most receptive to education when they need help in the moment,” she explained. If educational opportunities are presented when members aren’t focused on their health needs, the information is less likely to resonate. Aetna’s Next Best Action initiative, launched in 2018, embodies this timing-driven approach. In this program, Aetna employees proactively reach out to members with specific conditions to provide personalized guidance on managing their health. This often includes educational resources delivered at the right moment when members are most open to learning. Data also enables payers to tailor educational efforts to a member’s demographics, including race, sexual orientation, gender identity, ethnicity, and location. By factoring in these elements, payers can ensure their communications are relevant and easy to understand. To enhance this personalized approach, Aetna offers translation services and provides customer service training focused on sensitivity to sexual orientation and gender identity. 
In addition, updating the provider directory to reflect a diverse network helps members feel more comfortable with their care providers, making them more likely to engage with educational resources. "Understanding our members' backgrounds and needs, whether it's acute or chronic illness, allows us to engage them more effectively," Moffitt said. "This is the foundation of our approach to leveraging data for meaningful patient education." With over two decades in both provider and payer roles, Moffitt has observed key trends in patient education, particularly its success in mental health and preventive care, and she highlighted the role of technology in both areas. Efforts to educate patients about mental health have reduced stigma and increased awareness of mental wellness, and telemedicine has significantly improved access to mental healthcare, according to Moffitt. In preventive care, more people are aware of the importance of cancer screenings, vaccines, wellness visits, and other preventive measures; Moffitt pointed to the rising use of home health visits and retail clinics as contributing factors for Aetna members. Looking ahead, Moffitt sees personalized engagement as the future of patient education. Members increasingly want information tailored to their preferences, delivered through their preferred channels, whether by email, text, phone, or other methods. Omnichannel solutions will be essential to meeting this demand, and while healthcare has already made progress, Moffitt expects even more innovation in the years to come. "I can't predict exactly where we'll be in 10 years, just as I couldn't have predicted where we are now a decade ago," Moffitt said. "But we will continue to evolve and meet the needs of our members with the technological advancements we're committed to."

Contact Us

To discover how Salesforce can advance your payer patient education, contact Tectonic today.

Private Connectivity Between Salesforce and On-Premise Network


Salesforce is an AWS Partner and a trusted global leader in customer relationship management (CRM). Hyperforce is the next-generation Salesforce architecture, built on Amazon Web Services (AWS). When business applications developed on Hyperforce are integrated with on-premises systems, traffic in both directions flows over the internet by default. For customers in heavily regulated industries such as the public sector and financial services, programmatic access to the Salesforce APIs hosted on Hyperforce from on-premises systems is required to traverse a private connection; conversely, access to on-premises systems from business applications running on Hyperforce is also required to use a private connection. In this insight, AWS describes how AWS Direct Connect and AWS Transit Gateway can be used in conjunction with Salesforce Private Connect to facilitate the private, bidirectional exchange of organizational data.

Architectural Overview

AWS Direct Connect can be used to establish a dedicated, managed, and reliable connection to Hyperforce; one approach uses a public virtual interface to reach public Hyperforce endpoints. This insight instead demonstrates the use of a private or transit virtual interface to establish a dedicated, private connection to Hyperforce using Salesforce Private Connect.

Approach

AWS Direct Connect is set up between the on-premises network and a virtual private cloud (VPC) in the customer's AWS account to provide connectivity from the on-premises network to AWS. The exchange of data between the customer VPC and Salesforce's transit VPC is handled by the Salesforce Private Connect feature, which is based on AWS PrivateLink. AWS PrivateLink allows consumers to securely access a service located in a service provider's VPC as if it were located in the consumer's own VPC.
Using Salesforce Private Connect, traffic is routed through a fully managed network connection between your Salesforce organization and your VPC instead of over the internet. In the context of Salesforce Private Connect, inbound and outbound are defined as follows:

- Inbound: traffic that flows into Salesforce. Use cases: AWS to Salesforce, and on-premises network to Salesforce.
- Outbound: traffic that flows out of Salesforce. Use cases: Salesforce to AWS, and Salesforce to on-premises network.

Inbound and Outbound

This pattern can only be adopted for Salesforce services supported by Salesforce Private Connect, such as Experience Cloud, Financial Services Cloud, Health Cloud, Platform Cloud, Sales Cloud, and Service Cloud; check the latest Salesforce documentation for the specific services supported. Furthermore, this architecture applies only to the inbound and outbound exchange of data and does not cover access to the Salesforce UI. The following diagram shows the end-to-end solution and how private connectivity is facilitated bidirectionally. In this example, on-premises servers located on the 10.0.1.0/26 network must privately exchange data with applications running on the Hyperforce platform.

Figure 1: Using AWS Direct Connect and Salesforce Private Connect to establish private, bidirectional connectivity

Prerequisites

To implement this solution, prerequisites are required on both the Salesforce and AWS sides. On the Salesforce side, refer to the Salesforce documentation for detailed requirements on migrating your Salesforce organization to Hyperforce.

Network Flow Between the On-Premises Data Center and the Salesforce API

The following figure shows how both inbound and outbound traffic flow through the architecture.
Figure 2: Network flow between the on-premises data center and Salesforce

Considerations

Before you set up the private, bidirectional exchange of organizational data with AWS Direct Connect, AWS Transit Gateway, and Salesforce Private Connect, review the following considerations.

Resiliency. We recommend setting up multiple AWS Direct Connect connections to provide resilient communication paths to the AWS Region, especially if the traffic between your on-premises resources and Hyperforce is business-critical; refer to the AWS documentation on achieving high and maximum resiliency for AWS Direct Connect deployments. For inbound traffic, configure the VPC endpoint across multiple Availability Zones for high availability, point customer DNS records for the Salesforce API at the IP addresses associated with the VPC endpoint, and implement a DNS failover or load-balancing mechanism on the customer side. For outbound traffic, configure your Network Load Balancer with two or more Availability Zones for high availability.

Security. For inbound traffic, the source IP addresses used by the incoming connection are displayed in the Salesforce Private Connect inbound configuration. We recommend using these IP ranges in Salesforce configurations that enforce source IP restrictions; see the Salesforce documentation Restrict Access to Trusted IP Ranges for a Connected App to learn how these ranges can control access to the Salesforce APIs. Salesforce APIs are accessed over an encrypted TLS connection, and AWS Direct Connect offers a number of additional data-in-transit encryption options, including support for private IP VPNs over AWS Direct Connect and MAC security.
An IP virtual private network (VPN) encrypts end-to-end traffic using an IPsec VPN tunnel, while MAC Security (MACsec) provides point-to-point encryption between devices. For outbound traffic, we recommend configuring TLS listeners on your Network Load Balancers so that traffic to the load balancer is encrypted.

Cost Optimization. If your use case is solely to facilitate access to Salesforce, you can use a virtual private gateway and a private VIF instead to optimize deployment costs. However, if you plan to implement a hub-and-spoke network transit hub interconnecting multiple VPCs, we recommend a transit gateway and a transit VIF for a more scalable approach. Refer to the Amazon Virtual Private Cloud Connectivity Options whitepaper and the AWS Direct Connect quotas documentation for the pros and cons of each approach.

Conclusion

Salesforce and AWS continue to innovate together to provide multiple connectivity approaches that meet customer requirements. This post demonstrated how AWS Direct Connect can be used in conjunction with Salesforce Private Connect to secure end-to-end exchanges of data in industries where use of the public internet is not an option.

Lead Generation 101


In today's world, where people are bombarded with countless messages and offers daily, marketers need effective ways to capture attention and generate genuine interest in their products and services. According to the State of the Connected Customer report, customer preferences and expectations are the top influences on digital strategy for Chief Marketing Officers (CMOs). The ultimate goal of lead generation is to build interest over time that leads to successful sales. Here's a comprehensive guide to understanding lead generation, the role of artificial intelligence (AI), and the steps you need to take to effectively find and nurture leads.

What is Lead Generation?

Lead generation is the process of creating interest in a product or service and converting that interest into a sale. By focusing on the most promising prospects, lead generation enhances the efficiency of the sales cycle, leading to better customer acquisition and higher conversion rates. Leads are typically categorized into three types, by how qualified they are to buy. The lead generation process starts with creating awareness and interest: publishing educational blog posts, engaging users on social media, and capturing leads through sign-ups for email newsletters or "gated" content such as webinars, virtual events, live chats, whitepapers, or ebooks. Once you have leads, you can use their contact information to engage them with personalized communication and targeted promotions.
Effective Lead Generation Strategies

To successfully move prospects from interest to buyers, focus on a core set of strategies for qualifying, nurturing, and measuring leads.

How Lead Qualification and Nurturing Work

Evaluate each lead's fit and readiness before investing sales effort, then nurture the leads that are not yet ready to buy.

Methods for Nurturing Leads

Once you've established your lead scoring and grading, nurture leads with personalized communication and targeted content.

Current Trends in Lead Generation

AI is increasingly influencing lead generation by offering advanced tools and strategies.

Measuring Success in Lead Generation

To evaluate the effectiveness of your lead generation efforts, track your key metrics over time.

Best Practices for Lead Generation

Effective lead generation is essential for building trust and fostering meaningful customer relationships. By implementing these strategies and best practices, you can enhance your lead generation efforts and drive better business results.
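Lead scoring and grading, mentioned above, are often implemented as simple weighted rules before graduating to ML models. This sketch is purely illustrative (the field names, weights, and thresholds are invented, not a Salesforce default): the score reflects a lead's behavior, while the grade reflects how well the lead fits the target customer profile.

```python
def score_lead(lead):
    """Behavioral score: sum weights for actions the lead has taken.

    Weights are illustrative placeholders, not a product default.
    """
    weights = {
        "visited_pricing": 30,
        "downloaded_whitepaper": 20,
        "opened_newsletter": 10,
        "attended_webinar": 25,
    }
    return sum(w for action, w in weights.items() if lead.get(action))

def grade_lead(lead):
    """Fit grade: A/B/C by how many target-profile criteria the lead meets."""
    fit = sum([
        lead.get("industry") == "manufacturing",
        lead.get("company_size", 0) >= 200,
        lead.get("title_is_decision_maker", False),
    ])
    return {3: "A", 2: "B"}.get(fit, "C")

hot = {
    "visited_pricing": True,
    "attended_webinar": True,
    "industry": "manufacturing",
    "company_size": 500,
    "title_is_decision_maker": True,
}
score, grade = score_lead(hot), grade_lead(hot)
```

A high score with a low grade (engaged but poor fit) and a high grade with a low score (good fit but not yet engaged) call for different nurturing tracks, which is why the two dimensions are kept separate.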

Salesforce API Gen


Function-calling agent models, a significant advancement within large language models (LLMs), require high-quality, diverse, and verifiable datasets. These models interpret natural language instructions to execute API calls, which are crucial for real-time interactions with digital services. However, existing datasets often lack comprehensive verification and diversity, resulting in inaccuracies and inefficiencies. Overcoming these challenges is critical for deploying function-calling agents reliably in real-world applications, such as retrieving stock market data or managing social media interactions.

Current approaches to training these agents rely on static datasets without thorough verification, hampering adaptability and performance on new or unseen APIs. For example, models trained on restaurant-booking APIs may struggle with tasks like stock market data retrieval because relevant training data is missing.

To address these limitations, researchers from Salesforce AI Research propose APIGen, an automated pipeline designed to generate diverse and verifiable function-calling datasets. APIGen integrates a multi-stage verification process to ensure data reliability and correctness: format checking, actual function execution, and semantic verification, rigorously verifying each data point to produce high-quality datasets.

APIGen begins by sampling APIs and query-answer pairs from a library and formatting them into a standardized JSON format. The pipeline then progresses through a series of verification stages: format checking to validate the JSON structure, function-call execution to verify operational correctness, and semantic checking to confirm that function calls, execution results, and query objectives align.
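The three verification stages can be sketched as a filter chain over candidate data points: each stage either passes the point along or rejects it. This is a minimal illustration of the idea, not Salesforce’s implementation; in particular, `semantic_check` here is a trivial stand-in for APIGen’s richer semantic verification.

```python
import json

def format_check(raw):
    """Stage 1: the candidate must be valid JSON with the expected keys."""
    try:
        point = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not {"query", "call", "args"} <= point.keys():
        return None
    return point

def execution_check(point, api_registry):
    """Stage 2: the function call must actually execute without error."""
    fn = api_registry.get(point["call"])
    if fn is None:
        return None
    try:
        point["result"] = fn(**point["args"])
    except Exception:
        return None
    return point

def semantic_check(point):
    """Stage 3 (stand-in): the result must plausibly answer the query."""
    return point if point["result"] is not None else None

def verify(raw, api_registry):
    """Run all three stages; a point survives only if every stage passes."""
    point = format_check(raw)
    if point:
        point = execution_check(point, api_registry)
    if point:
        point = semantic_check(point)
    return point

# Toy API registry and one candidate data point.
registry = {"get_stock_price": lambda symbol: {"AAPL": 190.0}.get(symbol)}
raw = json.dumps({"query": "price of AAPL?",
                  "call": "get_stock_price",
                  "args": {"symbol": "AAPL"}})
print(verify(raw, registry)["result"])  # 190.0
```

The filter-chain shape matters: cheap syntactic checks run first, so expensive execution and semantic checks only see candidates that are already well-formed.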
This meticulous process results in a comprehensive dataset of 60,000 entries covering 3,673 APIs across 21 categories, accessible via Huggingface. Datasets generated by APIGen significantly enhance model performance, achieving state-of-the-art results on the Berkeley Function-Calling Benchmark. Models trained on these datasets outperform multiple GPT-4 models, demonstrating substantial improvements in accuracy and efficiency. For instance, a model with 7 billion parameters achieves an accuracy of 87.5%, surpassing previous benchmarks by a notable margin. These outcomes underscore the robustness and reliability of APIGen-generated datasets in advancing the capabilities of function-calling agents.

In conclusion, APIGen presents a novel framework for generating high-quality, diverse datasets for function-calling agents, addressing critical challenges in AI research. Its multi-stage verification process ensures data reliability, empowering even smaller models to achieve competitive results. APIGen opens avenues for developing efficient and powerful language models, emphasizing the pivotal role of high-quality data in AI advancements.
