
Einstein Discovery Dictionary

Familiarize yourself with terminology that is commonly associated with Einstein Discovery.

Actionable Variable
An actionable variable is an explanatory variable that people can control, such as deciding which marketing campaign to use for a particular customer. Contrast these variables with explanatory variables that can't be controlled, such as a customer's street address or a person's age. If a variable is designated as actionable, the model uses prescriptive analytics to suggest actions (improvements) the user can take to improve the predicted outcome.

Actual Outcome
An actual outcome is the real-world value of an observation's outcome variable after the outcome has occurred. Einstein Discovery calculates model performance by comparing how closely predicted outcomes come to actual outcomes. An actual outcome is sometimes called an observed outcome.

Algorithm
See modeling algorithm.

Analytics Dataset
An Analytics dataset is a collection of related data that is stored in a denormalized, yet highly compressed, form. The data is optimized for analysis and interactive exploration.

Attribute
See variable.

Average
In Einstein Discovery, the average represents the statistical mean for a variable.

Bias
If Einstein Discovery detects bias in your data, it means that variables are being treated unequally in your model. Removing bias from your model can produce more ethical and accountable models and, therefore, predictions. See disparate impact.

Binary Classification Use Case
The binary classification use case applies to business outcomes that are binary: categorical (text) fields with only two possible values, such as win-lose, pass-fail, public-private, and retain-churn. These outcomes separate your data into two distinct groups. For analysis purposes, Einstein Discovery converts the two values into Boolean true and false. Einstein Discovery uses logistic regression to analyze binary outcomes. Binary classification is one of the main use cases that Einstein Discovery supports. Compare with multiclass classification.

Cardinality
Cardinality is the number of distinct values in a category. Variables with high cardinality (too many distinct values) can result in complex visualizations that are difficult to read and interpret. Einstein Discovery supports up to 100 categories per variable. You can optionally consolidate the remaining categories (categories with fewer than 25 observations) into a category called Other. Null values are put into a category called Unspecified.

Categorical Variable
A categorical variable is a type of variable that represents qualitative values (categories). A model that represents a binary or multiclass classification use case has a categorical variable as its outcome. See category.

Category
A category is a qualitative value that usually contains categorical (text) data, such as Product Category, Lead Status, and Case Subject. Categories are handy for grouping and filtering your data. Unlike measures, you can't perform math on categories. In Salesforce Help for Analytics datasets, categories are referred to as dimensions.

Causation
Causation describes a cause-and-effect relationship between things. In Einstein Discovery, causality refers to the degree to which variables influence each other (or not), such as between explanatory variables and an outcome variable. Some variables can have an obvious, direct effect on each other (for example, how price and discount affect the sales margin). Other variables can have a weaker, less obvious effect (for example, how weather can affect on-time delivery). Many variables have no effect on each other: they are independent and mutually exclusive (for example, win-loss records of soccer teams and currency exchange rates). It's important to remember that you can't presume a causal relationship between variables based simply on a statistical correlation between them. Rather, correlation is a hint that the association between those variables merits further investigation. Only with more exploration can you determine whether a causal link between them really exists and, if so, how significant that effect is.

Coefficient
A coefficient is a numeric value that represents the impact that an explanatory variable (or a pair of explanatory variables) has on the outcome variable. The coefficient quantifies the change in the mean of the outcome variable when there's a one-unit shift in the explanatory variable, assuming all other variables in the model remain constant.

Comparative Insight
Comparative insights are insights derived from a model. They reveal information about the relationships between explanatory variables and the outcome variable in your story. With comparative insights, you isolate factors (categories or buckets) and compare their impact with other factors or with global averages. Einstein Discovery shows waterfall charts to help you visualize these comparisons.

Correlation
A correlation is simply the association (or "co-relationship") between two or more things. In Einstein Discovery, correlation describes the statistical association between variables, typically between explanatory variables and an outcome variable. The strength of the correlation is quantified as a percentage. The higher the percentage, the stronger the correlation. However, keep in mind that correlation is not causation. Correlation merely describes the strength of association between variables, not whether they causally affect each other.

Count
A count is the number of observations (rows) associated with an analysis. The count can represent all observations in the dataset, or the subset of observations that meet associated filter criteria.

Dataset
See Analytics dataset.

Date Variable
A date variable is a type of variable that contains date/time (temporal) data.

Dependent Variable
See outcome variable.

Deployment Wizard
The Deployment Wizard is the Einstein Discovery tool used to deploy models into your Salesforce org.

Descriptive Insights
Descriptive insights are insights derived from historical data using descriptive analytics. Descriptive insights show what happened in your data. For example, Einstein Discovery in Reports produces descriptive insights for reports.

Diagnostic Insights
Diagnostic insights are insights derived from a model. Whereas descriptive insights show what happened in your data, diagnostic insights show why it happened. Diagnostic insights drill deeper into correlations to help you understand which variables most significantly impacted the business outcome you're analyzing. The term why refers to a high statistical correlation, not necessarily a causal relationship.

Disparate Impact
If Einstein Discovery detects disparate impact in your data, it means that the data reflects discriminatory practices toward a particular demographic. For example, your data can reveal gender disparities in starting salaries. Removing disparate impact from your model can produce more accountable and ethical insights and, therefore, predictions that are fair and equitable.

Dominant Values
If Einstein Discovery detects dominant values in a variable, it means that the data is unbalanced. Most values are in the same category, which can limit the value of the analysis.

Drift
Over time, a deployed model's performance can drift, becoming less accurate in predicting outcomes. Drift can occur due to changing factors in the data or in your business environment. Drift also results from now-obsolete assumptions built into the story.
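To make the binary classification use case concrete, here is a minimal sketch of logistic regression on a hypothetical win-lose outcome. It uses scikit-learn rather than Einstein Discovery itself (whose internal implementation is not exposed), and the variables and data are invented for illustration. The printed coefficients also loosely illustrate the coefficient entry above, although in logistic regression a coefficient describes a shift in the log-odds of the outcome rather than in its mean:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: two explanatory variables
# (discount percent, days in pipeline) and a binary outcome (1 = win, 0 = lose).
X = np.array([[5, 30], [20, 90], [10, 45], [2, 15], [25, 120], [8, 40]])
y = np.array([1, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Each coefficient quantifies the change in the log-odds of winning
# for a one-unit shift in that variable, holding the other constant.
for name, coef in zip(["discount_pct", "days_in_pipeline"], model.coef_[0]):
    print(f"{name}: {coef:.3f}")

# Predicted probability of a win for a new observation
print(model.predict_proba([[12, 60]])[0][1])
```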


Big Data and Data Visualization Explained

Data Visualization: Turning Complex Data into Clear Insights

Data visualization is the practice of converting information into visual formats, such as maps or graphs, to make data more accessible and understandable. The primary purpose of data visualization is to highlight patterns, trends, and outliers within large data sets, allowing users to quickly glean insights. The term is often used interchangeably with information graphics, information visualization, and statistical graphics.

The Role of Data Visualization in Data Science

Data visualization is a crucial step in the data science process. After data is collected, processed, and modeled, it must be visualized to draw meaningful conclusions. It's also a key component of data presentation architecture, a discipline focused on efficiently identifying, manipulating, formatting, and delivering data.

Importance Across Professions

Data visualization is essential across various fields. Teachers use it to display student performance, computer scientists use it to explore AI advancements, and executives use it to communicate information to stakeholders. In big data projects, visualization tools are vital for quickly summarizing large datasets, helping businesses make informed decisions.

In advanced analytics, visualization is equally important. Data scientists use it to monitor and ensure the accuracy of predictive models and machine learning algorithms. Visual representations of complex algorithms are often easier to interpret than numerical outputs.

Historical Context of Data Visualization

Data visualization has evolved significantly over the centuries, long before the advent of modern technology. Today, its importance is more pronounced, as it enables quick and effective communication of information in a universally understandable manner.

Why Data Visualization Matters

Data visualization provides a straightforward way to communicate information, regardless of the viewer's expertise. This universality makes it easier for employees to make decisions based on visual insights.

Advantages of Data Visualization

Key benefits include faster insight into large datasets, accessibility for audiences of any technical level, and clearer communication of the patterns, trends, and outliers that are hard to see in raw numbers.

Challenges and Disadvantages

Despite its advantages, data visualization also presents challenges, particularly as datasets grow in size and complexity.

Data Visualization in the Era of Big Data

With the rise of big data, visualization has become more critical. Companies leverage machine learning to analyze vast amounts of data, and visualization tools help present this data in a comprehensible way. Big data visualization often employs advanced techniques, such as heat maps and fever charts, beyond the standard pie charts and graphs. However, challenges remain.

Examples of Data Visualization Techniques

Early computer-based data visualizations often relied on Microsoft Excel to create tables, bar charts, or pie charts. Today, more advanced techniques, such as the heat maps and fever charts mentioned above, are common.

Common Use Cases for Data Visualization

Data visualization is widely used across many industries.

The Science Behind Data Visualization

The effectiveness of data visualization is rooted in how humans process information. Daniel Kahneman and Amos Tversky's research identified two methods of information processing: a fast, intuitive mode and a slower, more deliberate, analytical one.

Visualization Tools and Vendors

Data visualization tools are widely used for business intelligence reporting. These tools generate interactive dashboards that track performance across key metrics. Users can manipulate these visualizations to explore data in greater depth, and indicators alert them to data updates or important events.
Businesses might use data visualization software to monitor marketing campaigns or track KPIs. As tools evolve, they increasingly serve as front ends for sophisticated big data environments, assisting data engineers and scientists in exploratory analysis. Popular data visualization tools include Domo, Klipfolio, Looker, Microsoft Power BI, Qlik Sense, Tableau, and Zoho Analytics. While Microsoft Excel remains widely used, newer tools offer more advanced capabilities. Data visualization is a vital subset of the broader field of data analytics, offering powerful tools for understanding and leveraging business data across all sectors.
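To illustrate one of the advanced techniques mentioned above, here is a minimal heat-map sketch using matplotlib on synthetic data. The dataset and the library choice are illustrative assumptions, not tools named in this post:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic example: website traffic by hour of day and day of week
rng = np.random.default_rng(seed=42)
traffic = rng.poisson(lam=50, size=(7, 24))

fig, ax = plt.subplots(figsize=(10, 4))
im = ax.imshow(traffic, cmap="viridis", aspect="auto")
ax.set_xlabel("Hour of day")
ax.set_ylabel("Day of week")
ax.set_yticks(range(7))
ax.set_yticklabels(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"])
fig.colorbar(im, ax=ax, label="Visits")
ax.set_title("Traffic heat map (synthetic data)")
plt.show()
```

A heat map like this makes weekly patterns, such as a weekday lunchtime spike, visible at a glance, which is exactly the kind of quick pattern recognition the post describes.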


The Evolution of Salesforce Data Cloud

Salesforce's journey to Data Cloud started with its acquisition of Krux in 2016, which was later rebranded as Salesforce DMP. This transformation gained momentum in 2019 when Salesforce introduced its customer data platform (CDP), incorporating Salesforce DMP. Subsequent acquisitions of Datorama, MuleSoft, Tableau, and Evergage (now Interaction Studio) enriched Salesforce CDP's functionality, creating today's robust Data Cloud.

Understanding Customer Data Platforms (CDPs)

A customer data platform (CDP) aggregates customer data from multiple channels to create a unified customer profile, enabling deeper insights and real-time personalization. A CDP serves as a centralized customer data repository, merging isolated databases from marketing, service, and ecommerce to enable easy access to customer insights. Salesforce's "State of Marketing" report highlights the impact of CDPs, noting that 78% of high-performing businesses use CDPs, compared to 58% of underperformers. This analysis explores the evolution of CDPs and their role in transforming customer relationship management (CRM) and the broader tech ecosystem, turning customer data into real-time interactions.

Key Functions of a Customer Data Platform (CDP)

CDPs perform four main functions: data collection, data harmonization, data activation, and data insights.

Origins of Customer Data Platforms (CDPs)

CDPs evolved as the latest advancement in customer data management, driven by the need for a unified marketing data repository. Unlike earlier tools that were often limited to specific channels, CDPs enable real-time data synchronization and cross-platform engagement. Advances in AI, automation, and machine learning have made this level of segmentation and personalization attainable.

The Future of Customer Data Platforms (CDPs)

The next generation of CDPs, like Salesforce's Data Cloud, supports real-time engagement across all organizational functions: sales, service, marketing, and commerce. Data Cloud continuously harmonizes and updates customer data, integrating seamlessly with Salesforce products to process over 100 billion records daily. With Data Cloud, organizations gain a single, continuously updated profile for each customer that every team can act on.

Benefits of a Customer Data Platform (CDP)

CDPs provide comprehensive insights into customer interactions, supporting personalization and cross-selling. Beyond segmentation, they serve as user-friendly platforms for audience analysis and data segmentation, simplifying day-to-day data management. Data Cloud allows organizations to transform customer data into personalized, seamless experiences across every customer touchpoint. Leading brands like Ford and L'Oréal utilize Data Cloud to deliver connected, real-time interactions that enhance customer engagement.

The Need for Customer Data Platforms (CDPs)

CDPs address critical data management challenges by unifying disjointed data sources, resolving customer identities, and enabling seamless segmentation. These capabilities empower companies to maximize the potential of their customer data.

CDP vs. CRM

CDPs are an evolution of traditional CRM, focusing on real-time, highly personalized interactions. While CRMs store known customer data, CDPs like Data Cloud enable real-time engagement, powering Salesforce's Customer 360 as what Salesforce calls the world's first real-time CRM.

Selecting the Right CDP

When choosing a CDP, the focus often falls into two areas: insights and engagement.
An insights-oriented CDP prioritizes data integration and management, while an engagement-focused CDP leverages data for real-time personalization. Data Cloud combines both, integrating real-time CDP capabilities to deliver unmatched insights and engagement across digital platforms. Content updated October 2024.
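As a toy illustration of the data harmonization function described above, the sketch below merges marketing, service, and ecommerce records into a single profile keyed on email address. Real CDPs such as Data Cloud perform far more sophisticated identity resolution; the pandas approach and the field names here are assumptions for illustration only:

```python
import pandas as pd

# Hypothetical records for the same customers held in separate systems
marketing = pd.DataFrame({
    "email": ["ana@example.com", "ben@example.com"],
    "campaign": ["spring_sale", "newsletter"],
})
service = pd.DataFrame({
    "email": ["ana@example.com", "ben@example.com"],
    "open_cases": [0, 2],
})
ecommerce = pd.DataFrame({
    "email": ["ana@example.com", "ben@example.com"],
    "lifetime_spend": [1240.50, 310.00],
})

# Harmonize into one unified profile per customer, keyed on email
profile = marketing.merge(service, on="email").merge(ecommerce, on="email")
print(profile)
```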


Leverage AI and Machine Learning in Your Data Warehouse

5 Reasons to Leverage AI and Machine Learning in Your Data Warehouse

Incorporating AI and machine learning (ML) into a data warehouse transforms it into a powerful tool for decision-making and insight generation across the entire organization. Here are five key benefits of integrating AI and ML into your data warehouse:

1. Improved Efficiency

AI and ML streamline data warehouse operations by automating time-consuming tasks like data validation and cleansing. These technologies can manage repetitive processes, such as extraction, transformation, and loading (ETL), freeing data teams to focus on higher-priority tasks that drive business value. AI and ML ensure that inconsistencies are addressed automatically, which boosts overall operational efficiency.

2. Faster Performance

ML can monitor query performance in real time, identifying bottlenecks and optimizing processes to increase speed and accuracy. Automating data ingestion and delivery enables users to act on insights faster, making real-time decision-making possible. Faster data processing leads to more timely and effective business strategies.

3. Increased Accessibility for All Users

AI and ML enhance data quality and simplify data queries, making insights accessible even to non-technical users. By allowing natural language inputs and generating easy-to-understand visualizations, these technologies empower employees at all skill levels to interact with data. When everyone in the organization works from the same data foundation, decision-making becomes more aligned and consistent.

4. More Accurate Forecasting

ML's predictive capabilities allow data warehouses to anticipate trends and proactively solve problems before they arise. Predictive models and anomaly detection (a technique sketched at the end of this post) help prevent downtime, improve customer demand forecasting, and enhance overall accuracy. The more these algorithms are used, the more refined and effective they become, improving insights and forecasts over time.

5. Reduced Data Storage Costs

AI and ML analyze data usage to optimize storage solutions, identifying and eliminating redundant data to free up space. These technologies can also optimize data architecture, making the warehouse more efficient and reducing operational costs. As an organization scales, AI and ML help manage growing data volumes without increasing expenses, ensuring cost-effective data storage and processing.

By integrating AI and ML into a data warehouse, organizations can enhance speed, efficiency, and accuracy, driving better decision-making and improving business outcomes. Content updated October 2024.
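To make the anomaly-detection idea from point 4 concrete, here is a minimal sketch using scikit-learn's IsolationForest on synthetic query-latency data. The library, the data, and the contamination setting are illustrative assumptions, not a reference to any particular warehouse product:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic daily query latencies (ms), with a few injected spikes
rng = np.random.default_rng(seed=7)
latencies = rng.normal(loc=120, scale=15, size=(100, 1))
latencies[[10, 55, 80]] = [[480], [520], [610]]  # anomalous days

# Fit an unsupervised detector; -1 marks observations flagged as anomalies
detector = IsolationForest(contamination=0.05, random_state=7)
labels = detector.fit_predict(latencies)

anomalous_days = np.where(labels == -1)[0]
print("Days flagged for investigation:", anomalous_days)
```

In practice a pipeline like this would run on real performance metrics and alert the data team before a latency problem affects downstream reports.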


Case Study: Salesforce Innovation for Hospitality

A major hospitality management firm moved to the cloud, adopting Google Cloud and Salesforce to improve operational insights and decision-making. Tectonic assisted them in making the move and obtaining quicker, actionable insights through business intelligence.


Python Alongside Salesforce

Salesforce can integrate with Python, though the platform primarily relies on its proprietary languages and frameworks for core development. Python, however, plays a crucial role in enhancing Salesforce's capabilities through integrations, automation, data analysis, and extending functionalities via external applications. Here's an overview of how Python works within the Salesforce ecosystem:

1. Salesforce's Core Development Stack

Before exploring Python's use, it's important to understand the key development tools within Salesforce: Apex (its proprietary programming language), Visualforce and Lightning Web Components (its UI frameworks), and SOQL (its query language). These tools are the foundation for Salesforce development. However, Python complements Salesforce by enabling integrations and automation that go beyond these native tools.

2. Python in Salesforce Integrations

Python shines when integrating Salesforce with other systems, automating workflows, and extending functionality. Here's how:

a. API Interactions

Salesforce's REST and SOAP APIs allow external systems to communicate with Salesforce data. Python, with its powerful libraries, is excellent for interfacing with these APIs. A key library is simple-salesforce, a lightweight wrapper around the Salesforce REST API that is used in the examples below.

Example: Extracting Data via API

```python
from simple_salesforce import Salesforce

# Connect to Salesforce
sf = Salesforce(username='your_username', password='your_password', security_token='your_token')

# Query Salesforce data
accounts = sf.query("SELECT Id, Name FROM Account LIMIT 10")
for account in accounts['records']:
    print(account['Name'])
```

b. Data Processing and Analysis

Python's data manipulation libraries like Pandas and NumPy make it ideal for processing Salesforce data.

Example: Data Cleaning and Analysis

```python
import pandas as pd
from simple_salesforce import Salesforce

# Connect to Salesforce
sf = Salesforce(username='your_username', password='your_password', security_token='your_token')

# Fetch data
query = "SELECT Id, Name, AnnualRevenue FROM Account"
accounts = sf.query_all(query)
df = pd.DataFrame(accounts['records']).drop(columns=['attributes'])

# Process data: treat missing revenue as zero, then filter
df['AnnualRevenue'] = df['AnnualRevenue'].fillna(0)
high_revenue_accounts = df[df['AnnualRevenue'] > 1000000]
print(high_revenue_accounts)
```

3. Automation and Scripting

Python can automate Salesforce-related tasks, improving productivity and reducing manual effort. This can involve automating data updates, generating reports, or scheduling backups.

Example: Automating Data Backup

```python
import time

import pandas as pd
import schedule
from simple_salesforce import Salesforce

def backup_salesforce_data():
    sf = Salesforce(username='your_username', password='your_password', security_token='your_token')
    query = "SELECT Id, Name, CreatedDate FROM Contact"
    contacts = sf.query_all(query)
    df = pd.DataFrame(contacts['records']).drop(columns=['attributes'])
    df.to_csv('contacts_backup.csv', index=False)
    print("Salesforce data backed up successfully.")

# Run the backup every day at midnight
schedule.every().day.at("00:00").do(backup_salesforce_data)

while True:
    schedule.run_pending()
    time.sleep(1)
```

4. Building External Applications

Using platforms like Heroku, developers can build external applications in Python that integrate with Salesforce, extending its functionality for custom portals or advanced analytics.
Example: Web App Integrating with Salesforce

```python
from flask import Flask, jsonify
from simple_salesforce import Salesforce

app = Flask(__name__)

@app.route('/get_accounts', methods=['GET'])
def get_accounts():
    sf = Salesforce(username='your_username', password='your_password', security_token='your_token')
    accounts = sf.query("SELECT Id, Name FROM Account LIMIT 10")
    return jsonify(accounts['records'])

if __name__ == '__main__':
    app.run(debug=True)
```

5. Data Integration and ETL

Python is commonly used in ETL (Extract, Transform, Load) processes that involve Salesforce data. Tools like Apache Airflow allow you to create complex data pipelines for integrating Salesforce data with external databases.

Example: ETL Pipeline with Airflow

```python
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from simple_salesforce import Salesforce

def extract_salesforce_data():
    sf = Salesforce(username='your_username', password='your_password', security_token='your_token')
    query = "SELECT Id, Name, CreatedDate FROM Opportunity"
    opportunities = sf.query_all(query)
    df = pd.DataFrame(opportunities['records']).drop(columns=['attributes'])
    df.to_csv('/path/to/data/opportunities.csv', index=False)

default_args = {
    'owner': 'airflow',
    'start_date': datetime(2023, 1, 1),
    'retries': 1,
}

dag = DAG('salesforce_etl', default_args=default_args, schedule_interval='@daily')

extract_task = PythonOperator(
    task_id='extract_salesforce_data',
    python_callable=extract_salesforce_data,
    dag=dag,
)
```

6. Machine Learning and Predictive Analytics

Python's machine learning libraries, such as Scikit-learn and TensorFlow, enable predictive analytics on Salesforce data. This helps in building models for sales forecasting, lead scoring, and customer behavior analysis.

Example: Predicting Lead Conversion

```python
import pandas as pd
from simple_salesforce import Salesforce
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fetch Salesforce data
sf = Salesforce(username='your_username', password='your_password', security_token='your_token')
query = "SELECT Id, LeadSource, AnnualRevenue, NumberOfEmployees, Converted FROM Lead"
leads = sf.query_all(query)
df = pd.DataFrame(leads['records']).drop(columns=['attributes'])

# Preprocess and split data (drop the non-predictive Id column)
df = pd.get_dummies(df, columns=['LeadSource'])
X = df.drop(['Id', 'Converted'], axis=1)
y = df['Converted']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train model
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate accuracy
accuracy = model.score(X_test, y_test)
print(f"Model Accuracy: {accuracy * 100:.2f}%")
```

7. Best Practices for Using Python with Salesforce

To maximize the efficiency and security of Python with Salesforce, keep credentials out of source code (a minimal sketch follows at the end of this post), respect API rate limits, and use bulk queries such as query_all when working with large datasets.

8. Recommended Learning Resources

By leveraging Python alongside Salesforce, organizations can automate tasks, integrate systems, and enhance their data analytics, all while boosting productivity. Content updated August 2024.
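Expanding on the credential-handling point under best practices, here is a minimal sketch that reads Salesforce credentials from environment variables instead of hard-coding them. The variable names SF_USERNAME, SF_PASSWORD, and SF_SECURITY_TOKEN are hypothetical, not a simple-salesforce convention:

```python
import os

from simple_salesforce import Salesforce

# Read credentials from the environment instead of hard-coding them.
# SF_USERNAME, SF_PASSWORD, and SF_SECURITY_TOKEN are hypothetical
# variable names; use whatever convention your deployment follows.
sf = Salesforce(
    username=os.environ['SF_USERNAME'],
    password=os.environ['SF_PASSWORD'],
    security_token=os.environ['SF_SECURITY_TOKEN'],
)

# Quick connectivity check
print(sf.query("SELECT COUNT() FROM Account")['totalSize'])
```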


Digital Transformation for Life Sciences

In hindsight, one remarkable aspect of the COVID crisis was the speed with which vaccines passed through regulatory approval processes to address the pandemic emergency. Approvals that would typically take years were expedited to mere months, a pace not usually seen in the life sciences industry. It was an extraordinary situation, as Paul Shawah, Senior Vice President of Commercial Strategy at Veeva Systems, notes: “There were things that were unnaturally fast during COVID. There was a shifting of priorities, a shifting of focus. In some cases, you had the emergency approvals or the expedited approvals of the vaccines that you saw in the early days, so there was faster growth. Everything was kind of different in the COVID environment.” Today, the industry is not operating at that same rapid pace, but the impact of this acceleration remains significant: “What it did do is it challenged companies to think about why can’t we operate faster at a steady state? There was an old steady state, then there was COVID speed. The industry is trying to get to a new steady state. It won’t be as fast as during COVID because of unique circumstances, but expectations are now much higher. This drives a need to modernize systems, embrace the cloud, become more digital, and improve efficiency.” Companies like Veeva, alongside enterprise giants such as Salesforce, SAP, and Oracle, specialize in this market and play crucial roles in life sciences digitization. According to a McKinsey study, about 45% of tech spending in life sciences goes to three key technologies: applied Artificial Intelligence, industrialized Machine Learning, and Cloud Computing. Over 80% of the top 20 global pharma and medtech companies are operating in the cloud to some extent. However, a study by Accenture found that life sciences firms are among the lowest in achieving benefits from cloud investments, with only 43% satisfied with their results and less than a quarter confident that cloud migration initiatives will deliver the promised value within expected time frames. This presents both a challenge and an opportunity. Frank Defesche, SVP & GM of Life Sciences at Salesforce, sees it as the latter, stating: “The life sciences industry faces increased competition, evolving patient expectations, and ongoing pressure to bring devices and drugs to market faster. With rising drug costs, frustrated doctors, and varying regulatory scrutiny, life sciences organizations must find ways to do more with less.” The industry also contends with an unprecedented influx of data and disparate systems, making it difficult to move quickly. Addressing changes one by one is too slow and costly. Defesche believes that a systemic solution, fueled by connected data and Artificial Intelligence (AI), is key to overcoming these challenges. Paul Shawah of Veeva emphasizes the unique challenges of the life sciences sector: “Life sciences firms primarily do two things: discover and develop medicines, and commercialize them by educating doctors and getting the right drugs to patients. The drug development cycle includes clinical trials, managing everything related to drug safety, the manufacturing process, and ensuring quality. They also manage regulatory registrations. On the commercial side, it’s about reaching out to doctors and healthcare professionals.” Veeva’s Vault platform is designed for life sciences, with customers like Merck, Eli Lilly, and Boehringer Ingelheim. 
Shawah acknowledges it's "still relatively early days" for cloud computing adoption but notes successes in areas like CRM, where Veeva achieved over 80% market share by standardizing processes and reducing technical debt. Other areas, like parts of the clinical trials process, remain largely untapped by cloud computing. Shawah sees opportunities to improve patient experiences and make the process more efficient. AI represents a significant area of opportunity. Shawah explains Veeva's approach: "I'll break AI into two categories: traditional AI, Machine Learning, and data science, which we've been doing for a long time, and generative AI, which is new. We're focusing on finding use cases that create sustainable, repeatable value. We're building capabilities into our Vault platform to support AI." Joe Ferraro, VP of Product, Life Sciences at Salesforce, emphasizes AI's critical role: "We are born out of the data and AI era, and we're taking that philosophy into everything we do from a product standpoint. We aim to move from creating a system of record to a system of insight, using data and AI to transform how users interact with software." Ferraro highlights the need for change: "Organizations told us, 'Please don't build the same thing we have now. We are mired in fragmented experiences. Our sales and marketing teams aren't talking, and our medical and commercial teams don't understand each other.' Life Sciences Cloud aims to move the industry from these fragmented experiences to an end-to-end, AI-powered experience engine." The COVID crisis highlighted the critical role of the life sciences industry. There's a massive opportunity for digital transformation, whether through specialists like Veeva or enterprise players like Salesforce, Oracle, and SAP. Data must be the foundation of any solution, especially amidst the current AI hype cycle. Ensuring this data is well-managed is a crucial starting point for industry-wide change.


Salesforce’s Quest for AI for the Masses

The software engine, Optimus Prime (not to be confused with the Autobot leader), originated in a basement beneath a West Elm furniture store on University Avenue in Palo Alto. A group of artificial intelligence enthusiasts within Salesforce, seeking to enhance the impact of machine learning models, embarked on this mission two years ago. While shoppers checked out furniture above, they developed a system to automate the creation of machine learning models. Thus began Salesforce's quest for AI for the masses. Although the engine was initially named after the Transformers leader, the tie-in was abandoned, and Salesforce named its AI program Einstein. This move reflects the ambitious yet practical approach Salesforce takes in the AI domain.

In March, a significant portion of Einstein became available to all Salesforce users, aligning with the company's tradition of making advanced software accessible via the cloud. Salesforce, although now an industry giant, retains its scrappy upstart identity. When the AI trend gained momentum, the company aimed to create "AI for everyone," focusing on making machine learning affordable and accessible to businesses. This populist mission emphasizes practical applications over revolutionary or apocalyptic visions.

Einstein's first widely available tool is the Einstein Intelligence module, designed to assist salespeople in managing leads effectively. It ranks opportunities based on factors like the likelihood to close, offering a practical application of artificial intelligence. While other tech giants boast significant research muscle, Salesforce focuses on providing immediate market advantages to its customers.

Einstein Intelligence

The Einstein Intelligence module employs machine learning to study historical data, identifying factors that predict future outcomes and adjusting its model over time. This dynamic approach allows for subtler and more powerful answers, making use of various data sources beyond basic Salesforce columns. Salesforce's AI team strives to democratize AI by offering ready-made tools, ensuring businesses can benefit from machine learning without the need for extensive customization by data scientists. The company's multi-tenant approach, serving 150,000 customers, keeps each company's data separate and secure.

Salesforce's Quest for AI for the Masses

To scale AI implementation across its vast customer base, Salesforce developed Optimus Prime. This system automates the creation of machine learning models for each customer, eliminating the need for extensive manual involvement. Optimus Prime, the AI that builds AIs, streamlines the process and accelerates model creation from weeks to just a couple of hours. Salesforce plans to expand Einstein's capabilities, allowing users to apply it to more customized data and enabling non-programmers to build custom apps. The company's long-term vision includes exposing more of its machine learning system to external developers, competing directly with AI heavyweights like Google and Microsoft in the business market.

Originally published in WIRED magazine on August 2, 2017 and rewritten for this insight.
