Machine Learning - gettectonic.com - Page 4
Training and Testing Data


Data plays a pivotal role in machine learning (ML) and artificial intelligence (AI). Tasks such as recognition, decision-making, and prediction rely on knowledge acquired through training. Much like a parent teaches their child to distinguish between a cat and a bird, or an executive learns to identify business risks hidden within detailed quarterly reports, ML models require structured training using high-quality, relevant data. As AI continues to reshape the modern business landscape, the significance of training data becomes increasingly crucial.

What is Training Data?

The two primary strengths of ML and AI lie in their ability to identify patterns in data and make informed decisions based on that data. To execute these tasks effectively, models need a reference framework. Training data provides this framework by establishing a baseline against which models can assess new data.

For instance, consider the example of image recognition for distinguishing cats from birds. ML models cannot inherently differentiate between objects; they must be taught to do so. In this scenario, training data would consist of thousands of labeled images of cats and birds, highlighting relevant features—such as a cat’s fur, pointed ears, and four legs versus a bird’s feathers, absence of ears, and two feet.

Training data is generally extensive and diverse. For the image recognition case, the dataset might include numerous examples of various cats and birds in different poses, lighting conditions, and settings. The data must be consistent enough to capture common traits while being varied enough to represent natural differences, such as cats of different fur colors in various postures like crouching, sitting, standing, and jumping.

In business analytics, an ML model first needs to learn the operational patterns of a business by analyzing historical financial and operational data before it can identify problems or recognize opportunities.
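The idea that labeled examples establish a baseline for classifying new data can be sketched in a few lines of code. This is a toy illustration only, not a real image-recognition pipeline: the features (has_fur, has_feathers, leg_count) and the values are assumptions invented for the example, and a simple nearest-centroid rule stands in for a trained model.

```python
# Each training example pairs a hypothetical feature vector with a label:
# [has_fur, has_feathers, leg_count]

def centroid(rows):
    """Average the feature vectors of one class."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

# Labeled training data (illustrative values only).
training = {
    "cat":  [[1, 0, 4], [1, 0, 4], [1, 0, 4]],
    "bird": [[0, 1, 2], [0, 1, 2], [0, 1, 2]],
}
centroids = {label: centroid(rows) for label, rows in training.items()}

def classify(features):
    """Assign the label whose class centroid is nearest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

print(classify([1, 0, 4]))  # furry, four legs -> cat
print(classify([0, 1, 2]))  # feathered, two legs -> bird
```

A real model would learn far richer features from thousands of varied images, but the principle is the same: new inputs are judged against the baseline the labeled training data established.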
Once trained, the model can detect unusual patterns, like abnormally low sales for a specific item, or suggest new opportunities, such as a more cost-effective shipping option. After ML models are trained, tested, and validated, they can be applied to real-world data. For the cat versus bird example, a trained model could be integrated into an AI platform that uses real-time camera feeds to identify animals as they appear.

How is Training Data Selected?

The adage “garbage in, garbage out” resonates particularly well in the context of ML training data; the performance of ML models is directly tied to the quality of their training data. This underscores the importance of data sources, relevance, diversity, and quality for ML and AI developers.

Data Sources

Training data is seldom available off-the-shelf, although this is evolving. Sourcing raw data can be a complex task—imagine locating and obtaining thousands of images of cats and birds for the relatively straightforward model described earlier. Moreover, raw data alone is insufficient for supervised learning; it must be meticulously labeled to emphasize key features that the ML model should focus on. Proper labeling is crucial, as messy or inaccurately labeled data can provide little to no training value.

In-house teams can collect and annotate data, but this process can be costly and time-consuming. Alternatively, businesses might acquire data from government databases, open datasets, or crowdsourced efforts, though these sources also necessitate careful attention to data quality criteria. In essence, training data must deliver a complete, diverse, and accurate representation for the intended use case.

Data Relevance

Training data should be timely, meaningful, and pertinent to the subject at hand. For example, a dataset containing thousands of animal images without any cat pictures would be useless for training an ML model to recognize cats.
Furthermore, training data must relate directly to the model’s intended application. For instance, business financial and operational data might be historically accurate and complete, but if it reflects outdated workflows and policies, any ML decisions based on it today would be irrelevant.

Data Diversity and Bias

A sufficiently diverse training dataset is essential for constructing an effective ML model. If a model’s goal is to identify cats in various poses, its training data should encompass images of cats in multiple positions. Conversely, if the dataset solely contains images of black cats, the model’s ability to identify white, calico, or gray cats may be severely limited. This issue, known as bias, can lead to incomplete or inaccurate predictions and diminish model performance.

Data Quality

Training data must be of high quality. Problems such as inaccuracies, missing data, or poor resolution can significantly undermine a model’s effectiveness. For instance, a business’s training data may contain customer names, addresses, and other information. However, if any of these details are incorrect or missing, the ML model is unlikely to produce the expected results. Similarly, low-quality images of cats and birds that are distant, blurry, or poorly lit detract from their usefulness as training data.

How is Training Data Utilized in AI and Machine Learning?

Training data is input into an ML model, where algorithms analyze it to detect patterns. This process enables the ML model to make more accurate predictions or classifications on future, similar data. There are three primary training techniques: supervised, unsupervised, and semi-supervised learning.

Where Does Reinforcement Learning Fit In?

Unlike supervised and unsupervised learning, which rely on predefined training datasets, reinforcement learning adopts a trial-and-error approach, where an agent interacts with its environment. Feedback in the form of rewards or penalties guides the agent’s strategy improvement over time.
Whereas supervised learning depends on labeled data and unsupervised learning identifies patterns in raw data, reinforcement learning emphasizes dynamic decision-making, prioritizing ongoing experience over static training data. This approach is particularly effective in fields like robotics, gaming, and other real-time applications.

The Role of Humans in Supervised Training

The supervised training process typically begins with raw data, since comprehensive and appropriately pre-labeled datasets are rare. This data can be sourced from various locations or even generated in-house.

Training Data vs. Testing Data

Post-training, ML models undergo validation through testing, akin to how teachers assess students after lessons. Test data ensures that the model has been adequately trained and can deliver results within acceptable accuracy and performance ranges. In supervised learning, test data consists of labeled examples held out from training, so the model’s predictions can be compared against known answers.
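The train-versus-test separation described above is typically produced by shuffling the labeled dataset and holding out a fraction for testing. A minimal sketch, with made-up file names standing in for labeled images:

```python
import random

def train_test_split(examples, test_fraction=0.2, seed=42):
    """Shuffle and split labeled examples into disjoint train and test sets."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = examples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# Hypothetical labeled dataset: (image file, label) pairs.
data = [(f"image_{i}.jpg", "cat" if i % 2 else "bird") for i in range(100)]
train, test = train_test_split(data)
print(len(train), len(test))  # 80 20
```

Keeping the two sets disjoint is the point: the test set can only validate the model if the model never saw those examples during training.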

Embedded Salesforce Einstein


In a world where data is everything, businesses are constantly seeking ways to better understand their customers, streamline operations, and make smarter decisions. Enter Salesforce Einstein—a powerful AI solution embedded within the Salesforce platform that is revolutionizing how companies operate, regardless of size. By leveraging advanced analytics, automation, and machine learning, Einstein helps businesses boost efficiency, drive innovation, and deliver exceptional customer experiences. Here’s how embedded Salesforce Einstein is transforming business:

Imagine anticipating customer needs, market trends, or operational challenges before they happen. While it’s not magic, Salesforce Einstein’s AI-powered insights and predictions come remarkably close. By transforming vast amounts of data into actionable insights, Einstein enables businesses to anticipate future scenarios and make well-informed decisions.

Industry insight: In financial services, success hinges on anticipating market shifts and client needs. Banks and investment firms leverage Einstein to analyze historical market data and client behavior, predicting which financial products will resonate next. For example, investment advisors might receive AI-driven recommendations tailored to individual clients, boosting engagement and satisfaction. Manufacturers also benefit from Einstein’s predictive maintenance tools, which analyze data from machinery to anticipate equipment failures. A car manufacturer, for instance, could use these insights to schedule maintenance during off-peak hours, minimizing downtime and preventing costly disruptions.

Personalization is now a necessity. Salesforce Einstein elevates personalization by analyzing customer data to offer tailored recommendations, messages, and services.

Industry insight: In e-commerce, personalized recommendations are often the key to converting browsers into loyal customers.
An online bookstore using Einstein might analyze browsing history and past purchases to suggest new releases in genres the customer loves, driving repeat sales. In healthcare, Einstein’s personalization can improve patient outcomes by providing customized follow-up care. Hospitals can use Einstein to analyze patient histories and treatment data, offering reminders tailored to each patient’s needs, improving adherence to care plans and speeding recovery.

Salesforce Einstein’s sales intelligence tools, such as Lead Scoring and Opportunity Insights, enable sales teams to focus on the most promising leads. This targeted approach drives higher conversion rates and more efficient sales processes.

Industry insight: In real estate, Einstein helps agents manage numerous leads by scoring potential buyers based on their engagement with property listings. A buyer who repeatedly views homes in a specific area is flagged, prompting agents to prioritize their outreach, accelerating the sales process. In the automotive industry, Einstein identifies leads closer to purchasing by analyzing behaviors such as online vehicle configuration and test drive bookings. This allows sales teams to focus on high-potential buyers, closing deals faster.

Automation is at the heart of Salesforce Einstein’s ability to streamline processes and boost productivity. By automating repetitive tasks like data entry and customer inquiries, Einstein frees employees to focus on strategic activities, improving overall efficiency.

Industry insight: In insurance, Einstein Bots can handle routine tasks like policy inquiries and claim submissions, freeing up human agents for more complex issues. This leads to faster response times and reduced operational costs. In banking, Einstein-powered chatbots manage routine inquiries such as balance checks or transaction histories. By automating these interactions, banks reduce the workload on call centers, allowing agents to provide more personalized financial advice.
Einstein Discovery democratizes data analytics, making it easier for non-technical users to explore data and uncover actionable insights. This tool identifies key business drivers and provides recommendations, making data accessible to all.

Industry insight: In healthcare, predictive insights are helping providers identify patients at risk of chronic conditions like diabetes. With Einstein Discovery, healthcare providers can flag at-risk individuals early, implementing targeted care plans that improve outcomes and reduce long-term costs. For energy companies, Einstein Discovery analyzes data from sensors and weather patterns to predict equipment failures and optimize resource management. A utility company might use these insights to schedule preventive maintenance ahead of storms, reducing outages and enhancing service reliability.

More Than a Tool – Embedded Salesforce Einstein

Salesforce Einstein is more than just an AI tool—it’s a transformative force enabling businesses to unlock the full potential of their data. From predicting trends and personalizing customer experiences to automating tasks and democratizing insights, Einstein equips companies to make smarter decisions and enhance performance across industries. Whether in retail, healthcare, or technology, Einstein delivers the tools needed to thrive in today’s competitive landscape. Tectonic empowers organizations with Salesforce solutions that drive organizational excellence. Contact Tectonic today.

AI-Powered Contact Center Landscape


Navigating the AI-Powered Contact Center Landscape: A Roadmap for Success

With thousands of solutions in the contact center ecosystem, each claiming to offer “AI-powered, next-generation technology,” it’s easy to feel overwhelmed. Many of these claims are valid, as AI and machine learning are transforming contact centers and improving customer experiences. But with so many options and combinations of AI-powered solutions, how can you be sure you’re making the right decision?

The answer is that it’s almost impossible without help. Trying to research and evaluate every solution on your own could take months or even years—by which time, the technology will have evolved. Plus, if you rely solely on information from manufacturers or software providers, you may only get a one-sided perspective that leads to “CCaaS FOMO” (Fear of Missing Out).

A More Objective Approach to the Contact Center Journey

While we can’t claim to be 100% unbiased, we take a unique approach. We start with your business, understanding your specific needs, culture, and processes before introducing solutions that fit. Not every top-rated solution is right for your business, and the roadmap below outlines how we help you navigate this complex landscape.

1. Involving Key Stakeholders

The first step is ensuring you have the right people involved—those with a vested interest in the contact center’s success. It’s helpful to break these roles into three categories: decision-makers, contributors, and advocates. Having clear roles and expectations helps streamline the process and ensures everyone is on the same page.

2. Conducting a Contact Center Assessment

This discovery phase is crucial for identifying the key drivers behind your business needs. Each contact center is different, even within the same industry. That’s why a one-size-fits-all scorecard won’t work. It’s beneficial to bring in a third-party consultant with broad industry knowledge to conduct an assessment, offering valuable insights that help create a clear vision.

3. Creating a Unique Scorecard

Once you’ve completed your assessment, stakeholders can work together to establish a customized scorecard that reflects your business objectives. Whether customer service is your primary focus or you’re more telemarketing-heavy, this scorecard ensures that your solution is tailored to your specific needs. It’s also important to involve contributors and advocates in the process to gain widespread buy-in.

4. Scheduling Solution Demonstrations

With a solid scorecard in hand, it’s time to identify and evaluate vendors. A contact center consultant can help streamline this process. Scoring each solution based on how well it aligns with your goals keeps the focus on substance over flash, ensuring the right solution for your business.

5. Analyzing Scorecard Data

When reviewing the scorecard data, stakeholders should ask key questions. This analysis ensures that decisions are data-driven and aligned with business goals.

6. Finalizing Vendor Selection

Once the data is compiled and a consensus is reached, it’s time to move forward with a contract proposal. Beyond the solution itself, discuss critical details like implementation timelines, ongoing support, and maintenance to set clear expectations and ensure accountability.

Financial Modeling: Justifying the Investment

Looking at your goals through a financial lens helps quantify the benefits of your contact center investment. For example, reducing average handling time by just 12 seconds across the company might result in cost-neutral savings. Similarly, reducing call abandonment by even half a percentage point can have a significant impact. These financial considerations help justify ROI and set expectations.

Partnering with Tectonic: Expertise You Can Trust

At Tectonic, we live and breathe contact centers. Our team of experts comes directly from this world, so we understand the challenges and opportunities. We’re here to help you navigate the complexities of the contact center ecosystem and bring clarity to your CCaaS journey. Contact us today to get started! For more resources, visit our blog or explore our AI solutions to elevate your customer experience.
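The financial modeling mentioned above—for instance, quantifying a 12-second reduction in average handling time—can be sketched with simple arithmetic. The call volume and cost-per-agent-minute figures below are assumptions chosen purely for illustration; plug in your own numbers.

```python
def annual_aht_savings(calls_per_year, seconds_saved, cost_per_agent_minute):
    """Estimate yearly savings from shaving seconds off average handle time."""
    minutes_saved = calls_per_year * seconds_saved / 60
    return minutes_saved * cost_per_agent_minute

# Assumed inputs: 2M calls/year, 12 s saved per call, $0.85 per agent-minute.
savings = annual_aht_savings(2_000_000, 12, 0.85)
print(f"${savings:,.0f} per year")  # $340,000 per year
```

Even rough models like this make it easier to compare vendor proposals on ROI rather than feature checklists.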

Transformative Potential of AI in Healthcare


Healthcare leaders are increasingly optimistic about the transformative potential of AI and data analytics in the industry, according to a new market research report by Arcadia and The Harris Poll. The report, titled “The Healthcare CIO’s Role in the Age of AI,” reveals that 96% of healthcare executives believe AI adoption can provide a competitive edge, both now and in the future. While one-third of respondents see AI as essential today, 73% believe it will become critical within the next five years.

How AI is Being Used in Healthcare

The survey found that 63% of healthcare organizations are using AI to analyze large patient data sets, identifying trends and informing population health management. Additionally, 58% use AI to examine individual patient data to uncover opportunities for improving health outcomes. Nearly half of the respondents also reported using AI to optimize the management of electronic health records (EHRs). These findings align with a similar survey conducted by the University of Pittsburgh Medical Center’s Center for Connected Medicine (CCM), which highlighted AI as the most promising emerging technology in healthcare. The focus on AI stems from its ability to break down data silos and make use of the vast amount of clinical data healthcare organizations collect.

“Healthcare leaders are preparing to harness AI’s full potential to reform care delivery,” said Aneesh Chopra, Arcadia’s chief strategy officer. “With secure data sharing scaling across the industry, technology leaders are focusing on platforms that can organize fragmented patient records into actionable insights throughout the patient journey.”

Supporting Strategic Priorities with AI

AI and data analytics are also seen as critical for maintaining competitiveness and resilience, particularly as organizations face digital transformation and financial challenges. In fact, 83% of respondents indicated that data-driven tools could help them stay ahead in these areas.
Technology-related priorities, such as adopting an enterprise-wide approach to data analytics (44%) and enhancing decision-making through AI (41%), were top of mind for many healthcare leaders. Improving patient experience (40%), health outcomes (35%), and patient engagement (29%) were also highlighted as key strategic goals that AI could help achieve.

Challenges in AI Adoption

While most healthcare leaders are confident about adopting AI (96%), they also feel pressure to do so quickly, with the push primarily coming from data and analytics teams (82%), IT teams (78%), and executives (73%). One major obstacle is the lack of talent. Approximately 40% of respondents identified the shortage of skilled professionals as a top barrier to AI adoption. To address this, organizations are seeing increased demand for skills related to data analysis, machine learning, and systems integration. Additionally, 71% of IT leaders emphasized the growing need for data-driven decision-making skills.

The Evolving Role of CIOs

The rise of AI is reshaping the role of CIOs in healthcare. Nearly 87% of survey respondents see themselves as strategic influencers in setting and refining AI-related strategies, rather than just implementers. However, many CIOs feel constrained by the demands of day-to-day operations, with 58% reporting that tactical execution takes precedence over long-term AI strategy development. Leaders agree that to be effective, CIOs and their teams should focus more on strategic planning, dedicating around 75% of their time to developing and implementing AI strategies. Communication and workforce readiness are also crucial, with 75% of respondents citing poor communication between IT teams and clinical staff as a barrier to AI success, and 40% noting that clinical staff need more support to utilize data analytics effectively.

“CIOs and their teams are setting the stage for an AI-driven transformation in healthcare,” said Michael Meucci, president and CEO of Arcadia.
“The findings show that a robust data foundation and an evolving workforce are key to realizing AI’s full potential in patient care and healthcare operations.”

Impact of EHR Adoption


Fueled by the availability of chatbot interfaces like ChatGPT, generative AI has become a key focus across various industries, including healthcare. Many electronic health record (EHR) vendors are integrating the technology to streamline administrative workflows, allowing clinicians to focus more on patient care. Whether you see EHR adoption as easy or challenging, its impact stands to be positive.

Generative AI and EHR Efficiency

As defined by the Government Accountability Office (GAO), generative AI is “a technology that can create content, including text, images, audio, or video, when prompted by a user.” Generative AI systems learn patterns from vast datasets, enabling them to generate new, similar content using machine learning algorithms and statistical models. One of the areas where generative AI shows promise is in automating EHR workflows, which could alleviate the burden on clinicians.

Epic’s AI-Driven Innovations

Phil Lindemann, vice president of data and analytics at Epic, noted that generative AI is ideal for automating repetitive tasks. One application under testing allows the technology to draft patient portal message responses for clinicians to review and send. This could save time and let doctors spend more time with patients. Another project focuses on summarizing updates to a patient’s record since their last visit, offering a quick synopsis for the provider. Epic is also exploring how generative AI could help patients better understand their health records by translating complex medical terms into more accessible language. Additionally, the system can translate this information into various languages, enhancing patient education across diverse populations. However, Lindemann emphasized that while AI offers valuable tools, it is not a cure-all for healthcare’s challenges. “We see it as a translation tool,” he said, acknowledging the importance of targeted use cases for successful implementation.
Oracle Health’s Clinical Digital Assistant

Oracle Health is beta-testing a generative AI chatbot aimed at reducing administrative tasks for healthcare professionals. The Clinical Digital Assistant summarizes patient information and generates automated clinical notes by listening to patient-provider conversations. Physicians can interact with the tool during consultations, asking for relevant patient data without breaking eye contact with the patient. The assistant can also suggest actions based on the discussion, which providers must review before finalizing. Oracle plans to make this tool widely available by the second quarter of 2024, with the goal of easing clinician workloads and improving the patient experience.

eClinicalWorks and Ambient Listening Technology

In partnership with sunoh.ai, eClinicalWorks is utilizing generative AI-powered ambient listening technology to assist with clinical documentation. This tool automatically drafts clinical notes based on patient conversations, which clinicians can then review and edit as necessary. Girish Navani, CEO of eClinicalWorks, highlighted the potential for generative AI to become a personal assistant for doctors, streamlining documentation tasks and reducing cognitive load. The integration is expected to be available to customers in early 2024.

MEDITECH’s AI-Powered Discharge Summaries

MEDITECH is collaborating with Google to develop a generative AI tool focused on automating hospital discharge summaries. These summaries, which are crucial for care coordination, are often time-consuming for clinicians to create, especially for patients with longer hospital stays. The AI system generates draft summaries that clinicians can review and edit, aiming to speed up discharges and reduce clinician burnout. MEDITECH is working with healthcare organizations to validate the technology before a general release. Helen Waters, executive vice president and COO of MEDITECH, stressed the importance of careful implementation.
The goal is to ensure accuracy and build trust among clinicians so that generative AI can be successfully integrated into clinical workflows.

The Impact of EHR Adoption

EHR systems have transformed healthcare, improving care coordination and decision support. However, EHR-related administrative burdens have also contributed to clinician burnout. A 2019 study found that 40% of physician burnout was linked to EHR use. By automating time-consuming EHR tasks, generative AI could help reduce this burden and improve clinical efficiency.

Large and Small Language Models


Understanding Language Models in AI

Language models are sophisticated AI systems designed to generate natural human language, a task that is far from simple. These models operate as probabilistic machine learning systems, predicting the likelihood of word sequences to emulate human-like intelligence. In the scientific realm, the focus of language models has been twofold. While today’s cutting-edge AI models in Natural Language Processing (NLP) are impressive, they have not yet fully passed the Turing Test—a benchmark where a machine’s communication is indistinguishable from that of a human.

The Emergence of Language Models

We are approaching this milestone with advancements in Large Language Models (LLMs) and the promising but less discussed Small Language Models (SLMs).

Large Language Models Compared to Small Language Models

LLMs like ChatGPT have garnered significant attention due to their ability to handle complex interactions and provide insightful responses. These models distill vast amounts of internet data into concise and relevant information, offering an alternative to traditional search methods. Conversely, SLMs, such as Mistral 7B, while less flashy, are valuable for specific applications. They typically contain fewer parameters and focus on specialized domains, providing targeted expertise without the broad capabilities of LLMs.

Choosing the Right Language Model

The decision between LLMs and SLMs depends on your specific needs and available resources. LLMs are well-suited for broad applications like chatbots and customer support. In contrast, SLMs are ideal for specialized tasks in fields such as medicine, law, and finance, where domain-specific knowledge is crucial.

Large and Small Language Models’ Roles

Language models are powerful tools that, depending on their size and focus, can either provide broad capabilities or specialized expertise.
Understanding their strengths and limitations helps in selecting the right model for your use case.
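The "predicting the likelihood of word sequences" idea above can be made concrete with a toy bigram model. This is a deliberately minimal sketch on a made-up corpus; real LLMs use neural networks over subword tokens, not raw bigram counts, but the probabilistic principle is the same.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat saw the bird".split()

# Count bigrams: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Estimate P(next | word) from bigram counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'bird': 0.25}
```

Scaling this idea up—from bigram counts over a dozen words to billions of learned parameters over internet-scale text—is, loosely, what separates a toy model from an LLM.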

E-Commerce Platform Improvement


Section I: Problem Statement

CVS Health is continuously exploring ways to improve its e-commerce platform, cvs.com. One potential enhancement is the implementation of a complementary product bundle recommendation feature on its product description pages (PDPs). For instance, when a customer browses for a toothbrush, they could also see recommendations for related products like toothpaste, dental floss, mouthwash, or teeth whitening kits. A basic version of this is already available on the site through the “Frequently Bought Together” (FBT) section.

Traditionally, techniques such as association rule mining or market basket analysis have been used to identify frequently purchased products. While effective, CVS aims to go further by leveraging advanced recommendation system techniques, including Graph Neural Networks (GNN) and generative AI, to create more meaningful and synergistic product bundles.

This exploration focuses on expanding the existing FBT feature into FBT Bundles. Unlike the regular FBT, FBT Bundles would offer smaller, highly complementary recommendations (a bundle includes the source product plus two other items). This system would algorithmically create high-quality bundles. This strategy has the potential to enhance both sales and customer satisfaction, fostering greater loyalty. While CVS does not yet have the FBT Bundles feature in production, it is developing a Minimum Viable Product (MVP) to explore this concept.

Section II: High-Level Approach

The core of this solution is a Graph Neural Network (GNN) architecture. Based on the work of Yan et al. (2022), CVS adapted this GNN framework to its specific needs, incorporating several modifications.
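The traditional market-basket baseline mentioned in the problem statement can be sketched with a lift computation over co-purchase counts. The baskets below are invented for illustration; this is the classic approach the GNN work aims to improve on, not CVS's actual pipeline.

```python
from collections import Counter
from itertools import combinations

# Hypothetical transaction baskets (real input would be order logs).
baskets = [
    {"toothbrush", "toothpaste", "floss"},
    {"toothbrush", "toothpaste"},
    {"shampoo", "conditioner"},
    {"shampoo", "soap"},
]

n = len(baskets)
item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(frozenset(p) for b in baskets
                      for p in combinations(sorted(b), 2))

def lift(a, b):
    """Lift > 1 means a and b co-occur more often than independence predicts."""
    p_ab = pair_counts[frozenset((a, b))] / n
    return p_ab / ((item_counts[a] / n) * (item_counts[b] / n))

print(lift("toothbrush", "toothpaste"))  # 2.0 -> strong co-purchase signal
```

Lift captures frequency of co-purchase, but—as the article notes—frequently co-bought items are not always complementary, which motivates the GNN and GPT-4 approach that follows.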
The implementation consists of three main components: Section III: In-Depth Methodology Part 1: Product Embeddings Module A: Discovering Product Segment Complementarity Relations Using GPT-4 Embedding plays a critical role in this approach, converting text (like product names) into numerical vectors to help machine learning models understand relationships. CVS uses a GNN to generate embeddings for each product, ensuring that relevant and complementary products are grouped closely in the embedding space. To train this GNN, a product-relation graph is needed. While some methods rely on user interaction data, CVS found that transaction data alone was not sufficient, as customers often purchase unrelated products in the same session. For example: Instead, CVS utilized GPT-4 to identify complementary products at a higher level in the product hierarchy, specifically at the segment level. With approximately 600 distinct product segments, GPT-4 was used to identify the top 10 most complementary segments, streamlining the process. Module B: Evaluating GPT-4 Output To ensure accuracy, CVS implemented a rigorous evaluation process: These results confirmed strong performance in identifying complementary relationships. Module C: Learning Product Embeddings With complementary relationships identified at the segment level, a product-relation graph was built at the SKU level. The GNN was trained to prioritize pairs of products with high co-purchase counts, sales volume, and low price, producing an embedding space where relevant products are closer together. This allowed for initial, non-personalized product recommendations. Part 2: User Embeddings To personalize recommendations, CVS developed user embeddings. The process involves: This framework is currently based on recent purchases, but future enhancements will include demographic and other factors. 
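The embedding-space intuition in Part 1 can be sketched with plain cosine similarity: complementary products end up close together, so nearest neighbors become candidate recommendations. The vectors below are invented stand-ins for GNN outputs, not real embeddings:

```python
import math

# Toy embedding vectors (invented values) standing in for GNN outputs;
# complementary products should have high cosine similarity.
embeddings = {
    "toothbrush": [0.9, 0.1, 0.2],
    "toothpaste": [0.85, 0.15, 0.25],
    "floss":      [0.8, 0.2, 0.1],
    "shampoo":    [0.1, 0.9, 0.3],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(product, k=2):
    """Return the k products closest to `product` in embedding space."""
    others = [(p, cosine(embeddings[product], e))
              for p, e in embeddings.items() if p != product]
    return sorted(others, key=lambda t: t[1], reverse=True)[:k]

print(nearest("toothbrush"))
```

Here "toothpaste" and "floss" rank above "shampoo" for a toothbrush query, which is the non-personalized recommendation behavior described at the end of Module C.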
Part 3: Re-Ranking Scheme To personalize recommendations, CVS introduced a re-ranking step: Section IV: Evaluation of Recommender Output Given that CVS trained the model using unlabeled data, traditional metrics like accuracy were not feasible. Instead, GPT-4 was used to evaluate recommendation bundles, scoring them on: The results showed that the model effectively generated high-quality, complementary product bundles. Section V: Use Cases Section VI: Future Work Future plans include: 
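The re-ranking step in Part 3 above can be sketched as blending a base (non-personalized) bundle score with a per-user affinity score. The bundles, scores, and 0.7/0.3 weights below are illustrative assumptions, not CVS's actual parameters:

```python
# Candidate bundles with a base (non-personalized) relevance score.
candidates = [
    {"bundle": ("toothbrush", "toothpaste", "floss"),     "base": 0.92},
    {"bundle": ("toothbrush", "mouthwash", "whitening"),  "base": 0.88},
    {"bundle": ("toothbrush", "toothpaste", "mouthwash"), "base": 0.90},
]

# Hypothetical per-user affinity for each bundle, e.g. derived from the
# similarity of a user embedding to the bundle's product embeddings.
user_affinity = {
    ("toothbrush", "toothpaste", "floss"):     0.40,
    ("toothbrush", "mouthwash", "whitening"):  0.95,
    ("toothbrush", "toothpaste", "mouthwash"): 0.60,
}

def rerank(cands, w_base=0.7, w_user=0.3):
    """Order bundles by a weighted blend of base and user scores."""
    scored = [
        (c["bundle"], w_base * c["base"] + w_user * user_affinity[c["bundle"]])
        for c in cands
    ]
    return sorted(scored, key=lambda t: t[1], reverse=True)

for bundle, score in rerank(candidates):
    print(bundle, round(score, 3))
```

With this user's affinities, the whitening bundle overtakes the higher base-scored floss bundle — the personalization effect the re-ranking step is meant to produce.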

Salesforce Healthcare and AI

The Healthcare Industry’s Digital Transformation: An Opportunity Unveiled – Salesforce Healthcare and AI Historically, the healthcare sector has lagged behind in technology adoption, particularly software. It consistently invests less in IT and software compared to other industries, relying heavily on manual processes and outdated tools like faxes and phone calls. Unlike other sectors where platforms like Salesforce, Slack, JIRA, and Notion dominate, healthcare has yet to see similar technological integration. Salesforce Healthcare and AI Future While this low adoption of software has previously been seen as a drawback, it now presents a significant opportunity. Unlike industries burdened by extensive investments in legacy systems, healthcare is not encumbered by sunk costs. This freedom allows it to embrace cutting-edge AI innovations without the hesitation of overhauling existing, expensive software infrastructures. Addressing the Staffing Crisis The healthcare industry is grappling with a severe staffing crisis, with a shortfall of over 100,000 doctors and nurses projected over the next five years. The increasing complexity of medical care, driven by advancements in diagnostics, continuous monitoring, and new treatments, contributes to an overwhelming amount of information for clinicians. To manage this, healthcare requires new tools capable of processing complex data in real-time to support critical decisions for an aging population with more complex health needs. The most valuable asset in healthcare is clinical judgment, which is currently exclusive to human practitioners. A major challenge is to extend this clinical judgment beyond the existing workforce and physical locations, making it accessible to all who need it. Additionally, ensuring that every clinician performs at the highest level is crucial. 
The Role of Administrative and Clinical AI Administrative AI is essential for reducing the overhead of healthcare delivery, allowing for better resource management and efficiency. Clinical AI products, though challenging to develop due to their high-stakes nature, are uniquely positioned to address these needs. They must integrate seamlessly into existing environments, adding a layer of sophistication to healthcare processes. Regulatory Advantages for Clinical AI One of healthcare’s advantages in adopting AI is its well-established regulatory framework. The FDA has approved numerous clinical AI products and is developing processes to keep pace with advancements in machine learning and generative AI. This rigorous approval process ensures that only the most reliable and clinically sound products make it to market, creating a higher barrier to entry but also a stronger competitive advantage for those that succeed. The Scale of Opportunity The healthcare industry is a massive $4 trillion+ market, predominantly driven by human labor rather than technology. Historically, enterprise software companies have struggled to penetrate this sector, as IT budgets represent just 3.5% of revenue—less than half of that in financial services. However, with AI tools advancing rapidly, they are increasingly seen as “AI staff” rather than mere software. This shift opens up opportunities not just in software but in transforming service delivery, potentially disrupting a market valued in trillions rather than billions. The scale of this opportunity far exceeds past software ventures, as reflected in the significant capital and valuations flowing into AI-driven healthcare companies. Whether you’re launching a new clinic, developing infrastructure for the healthcare system, or creating innovative payment or insurance models, now is an unprecedented time to enter the healthcare space. 
The transformative power of AI is poised to redefine how healthcare companies are built, scaled, and brought to market. 

Einstein Features Cheat Sheet

Salesforce has published a great resource for Einstein users. The Einstein Cheat Sheet puts all the Einstein features and resources at your fingertips. Discover the power of the #1 AI for CRM with Einstein. Built into the Salesforce Platform, Einstein uses powerful machine learning and large language models to personalize customer interactions and make employees more productive. With Einstein powering the Customer 360, teams can accelerate time to value, predict outcomes, and automatically generate content within the flow of work. Einstein is for everyone, empowering business users, Salesforce Admins, and Developers to embed AI into every experience with low code. 

Predictive Analytics

Industry forecasts predict an annual growth rate of 6% to 7%, fueled by innovations in cloud computing, artificial intelligence (AI), and data engineering. In 2023, the global data analytics market was valued at approximately $41 billion and is expected to surge to $118.5 billion by 2029, with a compound annual growth rate (CAGR) of 27.1%. This significant expansion reflects the growing demand for advanced analytics tools that provide actionable insights. AI has notably enhanced the accuracy of predictive models, enabling marketers to anticipate customer behaviors and preferences with impressive precision. “We’re on the verge of a new era in predictive analytics, with tools like Salesforce Einstein Data Analytics revolutionizing how we harness data-driven insights to transform marketing strategies,” says Koushik Kumar Ganeeb, a Principal Member of Technical Staff at Salesforce Data Cloud and a distinguished Data and AI Architect. Ganeeb’s leadership spans initiatives like AI-powered Salesforce Einstein Data Analytics, Marketing Cloud Connector for Data Cloud, and Intelligence Reporting (Datorama). His expertise includes architecting vast data extraction pipelines that process trillions of transactions daily. These pipelines play a crucial role in the growth strategies of Fortune 500 companies, helping them scale their data operations efficiently by leveraging AI. Ganeeb’s visionary work has propelled Salesforce Einstein Data Analytics into the forefront of business intelligence. Under his guidance, the platform’s advanced capabilities—such as predictive modeling, real-time data analysis, and natural language processing—are now pivotal in transforming how businesses forecast trends, personalize marketing efforts, and make data-driven decisions with unprecedented precision. 
AI and Machine Learning: The Next Frontier Beginning in 2018, Salesforce Marketing Cloud, a leading engagement platform used by top enterprises, faced challenges in extracting actionable insights and enhancing AI capabilities from rapidly growing data across diverse systems. Ganeeb was tasked with overcoming these hurdles, leading to the development of the Salesforce Einstein Provisioning Process. This process involved the creation of extensive data import jobs and the establishment of standardized patterns based on consumer adoption learning. These automated jobs handle trillions of transactions daily, delivering critical engagement and profile data in real-time to meet the scalability needs of large enterprises. The data flows seamlessly into AI models that generate predictions on a massive scale, such as Engagement Scores and insights into messaging and language usage across the platform. “Integrating AI and machine learning into data analytics through Salesforce Einstein is not just a technological enhancement—it’s a revolutionary shift in how we approach data,” explains Ganeeb. “With our advanced predictive models and real-time data processing, we can analyze vast amounts of data instantly, delivering insights that were previously unimaginable.” This innovative approach empowers organizations to make more informed decisions, driving unprecedented growth and operational efficiency. Real-World Success Stories Under Ganeeb’s technical leadership, Salesforce Einstein Data Analytics has delivered remarkable results across industries by leveraging AI and machine learning to provide actionable insights and enhance business performance. In the past year, leading companies like T-Mobile, Fitbit, and Dell Technologies have reported significant improvements after integrating Einstein. Ganeeb’s proficiency in designing and scaling data engineering solutions has been critical in helping these enterprises optimize performance. 
“Scalability with Salesforce Einstein Data Analytics goes beyond managing data volumes—it ensures that every data point is converted into actionable insights,” says Ganeeb. His work processing petabytes of data daily underscores his commitment to precision and efficiency in data engineering. Navigating Data Ethics and Quality Despite the rapid growth of predictive analytics, Ganeeb emphasizes the importance of data ethics and quality. “The accuracy of predictive models depends on the integrity of the data,” he notes. Salesforce Einstein Data Analytics addresses this by curating datasets to ensure they are representative and free from bias, maintaining trust while delivering reliable insights. By implementing rigorous data quality checks and ethical considerations, Ganeeb ensures that Einstein Analytics not only delivers actionable insights but also fosters transparency and trust. This balanced approach is key to the responsible use of predictive analytics across various industries. Future Trends in Predictive Analytics The future of predictive analytics looks bright, with AI and machine learning poised to further refine the accuracy and utility of predictive models. “Success lies in embracing technological advancements while maintaining a human touch,” Ganeeb notes. “By combining AI-driven insights with human intuition, businesses can navigate market complexities and uncover new opportunities.” Ganeeb’s contributions to Salesforce Einstein Data Analytics exemplify this balanced approach, integrating cutting-edge technology with human insight to empower businesses to make strategic decisions. His work positions organizations to thrive in a data-driven world, helping them stay agile and competitive in an evolving market. Balancing Benefits and Challenges – Predictive Analytics While predictive analytics offers vast potential, Ganeeb recognizes the challenges. 
Ensuring data quality, addressing ethical concerns, and maintaining transparency are crucial for its responsible use. “Although challenges remain, the future of AI-based predictive analytics is promising,” Ganeeb asserts. His work with Salesforce Einstein Data Analytics continues to push the boundaries of marketing analytics, enabling businesses to harness the power of AI for transformative growth. 

Machine Learning on Kubernetes

How and Why to Run Machine Learning Workloads on Kubernetes Running machine learning (ML) model development and deployment on Kubernetes has become essential for optimizing resources and managing costs. As AI and ML tools gain mainstream acceptance, business and IT professionals are increasingly familiar with these technologies. With the growing buzz around AI, engineering needs in ML and AI have expanded, particularly in managing the complexities and costs associated with these workloads. The Need for Kubernetes in ML As ML use cases become more complex, training models has become increasingly resource-intensive and costly. This has driven up demand and costs for GPUs, a key resource for ML tasks. Containerizing ML workloads offers a solution to these challenges by improving scalability, automation, and infrastructure efficiency. Kubernetes, a leading tool for container orchestration, is particularly effective for managing ML processes. By decoupling workloads into manageable containers, Kubernetes helps streamline ML operations and reduce costs. Understanding Kubernetes The evolution of engineering priorities has consistently focused on minimizing application footprints. From mainframes to modern servers and virtualization, the trend has been towards reducing operational overhead. Containers emerged as a solution to this trend, offering a way to isolate application stacks while maintaining performance. Initially, containers used Linux cgroups and namespaces, but their popularity surged with Docker. However, Docker containers had limitations in scaling and automatic recovery. Kubernetes was developed to address these issues. As an open-source orchestration platform, Kubernetes manages containerized workloads by ensuring containers are always running and properly scaled. Containers run inside resources called pods, which include everything needed to run the application. 
Kubernetes has also expanded its capabilities to orchestrate other resources like virtual machines. Running ML Workloads on Kubernetes ML systems demand significant computing power, including CPU, memory, and GPU resources. Traditionally, this required multiple servers, which was inefficient and costly. Kubernetes addresses this challenge by orchestrating containers and decoupling workloads, allowing multiple pods to run models simultaneously and share resources like CPU, memory, and GPU power. Using Kubernetes for ML can enhance practices such as: Challenges of ML on Kubernetes Despite its advantages, running ML workloads on Kubernetes comes with challenges: Key Tools for ML on Kubernetes Kubernetes requires specific tools to manage ML workloads effectively. These tools integrate with Kubernetes to address the unique needs of ML tasks: TensorFlow is another option, but it lacks the dedicated integration and optimization of Kubernetes-specific tools like Kubeflow. For those new to running ML workloads on Kubernetes, Kubeflow is often the best starting point. It is the most advanced and mature tool in terms of capabilities, ease of use, community support, and functionality. 
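As a rough sketch of how an ML training workload is described to Kubernetes: a Pod manifest declares the container image plus CPU, memory, and GPU requests and limits, and the scheduler uses those to share cluster resources across pods. The manifest below is built as a plain Python dict; the image name and resource values are hypothetical placeholders, not a production configuration:

```python
import json

# Hypothetical Pod manifest for a GPU-backed training job. The image and
# resource figures are invented; "nvidia.com/gpu" is the standard extended
# resource name for NVIDIA GPUs exposed via the device plugin.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "ml-training-pod", "labels": {"app": "model-training"}},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "trainer",
                "image": "example.registry/train:latest",  # hypothetical image
                "command": ["python", "train.py"],
                "resources": {
                    # Requests guide scheduling; limits cap what the pod
                    # may consume, letting many pods share one node.
                    "requests": {"cpu": "4", "memory": "16Gi"},
                    "limits": {"cpu": "8", "memory": "32Gi",
                               "nvidia.com/gpu": "1"},
                },
            }
        ],
    },
}

print(json.dumps(pod_manifest, indent=2))
```

In practice tools like Kubeflow generate and manage manifests like this for you, which is part of why the article recommends it as a starting point.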

2024 AI Glossary

Artificial intelligence (AI) has moved from an emerging technology to a mainstream business imperative, making it essential for leaders across industries to understand and communicate its concepts. To help you unlock the full potential of AI in your organization, this 2024 AI Glossary outlines key terms and phrases that are critical for discussing and implementing AI solutions.

Tectonic 2024 AI Glossary

Active Learning: A blend of supervised and unsupervised learning, active learning allows AI models to identify patterns, determine the next step in learning, and only seek human intervention when necessary. This makes it an efficient approach to developing specialized AI models with greater speed and precision, which is ideal for businesses aiming for reliability and efficiency in AI adoption.

AI Alignment: This subfield focuses on aligning the objectives of AI systems with the goals of their designers or users. It ensures that AI achieves intended outcomes while also integrating ethical standards and values when making decisions.

AI Hallucinations: These occur when an AI system generates incorrect or misleading outputs. Hallucinations often stem from biased or insufficient training data or incorrect model assumptions.

AI-Powered Automation: Also known as “intelligent automation,” this refers to the integration of AI with rules-based automation tools like robotic process automation (RPA). By incorporating AI technologies such as machine learning (ML), natural language processing (NLP), and computer vision (CV), AI-powered automation expands the scope of tasks that can be automated, enhancing productivity and customer experience.

AI Usage Auditing: An AI usage audit is a comprehensive review that ensures your AI program meets its goals, complies with legal requirements, and adheres to organizational standards. This process helps confirm the ethical and accurate performance of AI systems.
Artificial General Intelligence (AGI): AGI refers to a theoretical AI system that matches human cognitive abilities and adaptability. While it remains a future concept, experts predict it may take decades or even centuries to develop true AGI.

Artificial Intelligence (AI): AI encompasses computer systems that can perform complex tasks traditionally requiring human intelligence, such as reasoning, decision-making, and problem-solving.

Bias: Bias in AI refers to skewed outcomes that unfairly disadvantage certain ideas, objectives, or groups of people. This often results from insufficient or unrepresentative training data.

Confidence Score: A probability measure indicating how certain an AI model is that it has performed its assigned task correctly.

Conversational AI: A type of AI designed to simulate human conversation using techniques like NLP and generative AI. It can be further enhanced with capabilities like image recognition.

Cost Control: The process of monitoring project progress in real-time, tracking resource usage, analyzing performance metrics, and addressing potential budget issues before they escalate, ensuring projects stay on track.

Data Annotation (Data Labeling): The process of labeling data with specific features to help AI models learn and recognize patterns during training.

Deep Learning: A subset of machine learning that uses multi-layered neural networks to simulate complex human decision-making processes.

Enterprise AI: AI technology designed specifically to meet organizational needs, including governance, compliance, and security requirements.

Foundational Models: These models learn from large datasets and can be fine-tuned for specific tasks. Their adaptability makes them cost-effective, reducing the need for separate models for each task.

Generative AI: A type of AI capable of creating new content such as text, images, audio, and synthetic data.
It learns from vast datasets and generates new outputs that resemble but do not replicate the original data.

Generative AI Feature Governance: A set of principles and policies ensuring the responsible use of generative AI technologies throughout an organization, aligning with company values and societal norms.

Human in the Loop (HITL): A feedback process where human intervention ensures the accuracy and ethical standards of AI outputs, essential for improving AI training and decision-making.

Intelligent Document Processing (IDP): IDP extracts data from a variety of document types using AI techniques like NLP and CV to automate and analyze document-based tasks.

Large Language Model (LLM): An AI technology trained on massive datasets to understand and generate text. LLMs are key in language understanding and generation and utilize transformer models for processing sequential data.

Machine Learning (ML): A branch of AI that allows systems to learn from data and improve accuracy over time through algorithms.

Model Accuracy: A measure of how often an AI model performs tasks correctly, typically evaluated using metrics such as the F1 score, which combines precision and recall.

Natural Language Processing (NLP): An AI technique that enables machines to understand, interpret, and generate human language through a combination of linguistic and statistical models.

Retrieval Augmented Generation (RAG): This technique enhances the reliability of generative AI by incorporating external data to improve the accuracy of generated content.

Supervised Learning: A machine learning approach that uses labeled datasets to train AI models to make accurate predictions.

Unsupervised Learning: A type of machine learning that analyzes and groups unlabeled data without human input, often used to discover hidden patterns.

By understanding these terms, you can better navigate the AI implementation world and apply its transformative power to drive innovation and efficiency across your organization.
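As a worked example of the F1 score mentioned under Model Accuracy, it is the harmonic mean of precision and recall computed from prediction counts (the counts below are invented for illustration):

```python
# F1 score: harmonic mean of precision and recall.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)   # of predicted positives, how many were right
    recall = tp / (tp + fn)      # of actual positives, how many were found
    return 2 * precision * recall / (precision + recall)

# e.g. 80 true positives, 20 false positives, 40 false negatives:
# precision = 0.8, recall = 2/3, F1 = 8/11 ≈ 0.727
print(round(f1_score(80, 20, 40), 3))
```

Because it is a harmonic mean, F1 penalizes a model that trades one metric sharply for the other, which is why it is preferred over raw accuracy on imbalanced data.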

AI and Big Data

Over the past decade, enterprises have accumulated vast amounts of data, capturing everything from business processes to inventory statistics. This surge in data marked the onset of the big data revolution. However, merely storing and managing big data is no longer sufficient to extract its full value. As organizations become adept at handling big data, forward-thinking companies are now leveraging advanced analytics and the latest AI and machine learning techniques to unlock even greater insights. These technologies can identify patterns and provide cognitive capabilities across vast datasets, enabling organizations to elevate their data analytics to new levels. Additionally, the adoption of generative AI systems is on the rise, offering more conversational approaches to data analysis and enhancement. This allows organizations to extract significant insights from information that would otherwise remain untapped in data stores. How Are AI and Big Data Related? Applying machine learning algorithms to big data is a logical progression for companies aiming to maximize the potential of their data. Unlike traditional rules-based approaches that follow explicit instructions, machine learning systems use data-driven algorithms and statistical models to analyze and detect patterns in data. Big data serves as the raw material for these systems, which derive valuable insights from it. Organizations are increasingly recognizing the benefits of integrating big data with machine learning. However, to fully harness the power of both, it’s crucial to understand their individual capabilities. Understanding Big Data Big data involves extracting and analyzing information from large quantities of data, but volume is just one aspect. Other critical “Vs” of big data that enterprises must manage include velocity, variety, veracity, validity, visualization, and value. 
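The contrast drawn above — data-driven pattern detection rather than explicit rules — can be made concrete with a toy supervised learner that infers class centroids from labeled points instead of being told the rules (all data invented):

```python
# Toy supervised learning: a nearest-centroid classifier "learns" one
# centroid per class from labeled points, then labels new data by
# proximity — no hand-written rules involved.
def fit(points, labels):
    centroids = {}
    for label in set(labels):
        cluster = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    return centroids

def predict(centroids, point):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], point))

# Invented 2-D feature vectors with labels.
X = [(1.0, 1.1), (0.9, 1.0), (5.0, 5.2), (5.1, 4.9)]
y = ["cat", "cat", "bird", "bird"]

model = fit(X, y)
print(predict(model, (1.2, 0.8)))  # prints "cat"
```

The more labeled data such a system sees, the better its centroids approximate each class — a miniature version of the "more data, better patterns" dynamic the article describes.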
Understanding Machine Learning Machine learning, the backbone of modern AI, adds significant value to big data applications by deriving deeper insights. These systems learn and adapt over time without the need for explicit programming, using statistical models to analyze and infer patterns from data. Historically, companies relied on complex, rules-based systems for reporting, which often proved inflexible and unable to cope with constant changes. Today, machine learning and deep learning enable systems to learn from big data, enhancing decision-making, business intelligence, and predictive analysis. The strength of machine learning lies in its ability to discover patterns in data. The more data available, the more these algorithms can identify patterns and apply them to future data. Applications range from recommendation systems and anomaly detection to image recognition and natural language processing (NLP). Categories of Machine Learning Algorithms Machine learning algorithms generally fall into three categories: The most powerful large language models (LLMs), which underpin today’s widely used generative AI systems, utilize a combination of these methods, learning from massive datasets. Understanding Generative AI Generative AI models are among the most powerful and popular AI applications, creating new data based on patterns learned from extensive training datasets. These models, which interact with users through conversational interfaces, are trained on vast amounts of internet data, including conversations, interviews, and social media posts. With pre-trained LLMs, users can generate new text, images, audio, and other outputs using natural language prompts, without the need for coding or specialized models. How Does AI Benefit Big Data? AI, combined with big data, is transforming businesses across various sectors. 
Key benefits include: Big Data and Machine Learning: A Synergistic Relationship Big data and machine learning are not competing concepts; when combined, they deliver remarkable results. Emerging big data techniques offer powerful ways to manage and analyze data, while machine learning models extract valuable insights from it. Successfully handling the various “Vs” of big data enhances the accuracy and power of machine learning models, leading to better business outcomes. The volume of data is expected to grow exponentially, with predictions of over 660 zettabytes of data worldwide by 2030. As data continues to amass, machine learning will become increasingly reliant on big data, and companies that fail to leverage this combination will struggle to keep up. Examples of AI and Big Data in Action Many organizations are already harnessing the power of machine learning-enhanced big data analytics: Conclusion The integration of AI and big data is crucial for organizations seeking to drive digital transformation and gain a competitive edge. As companies continue to combine these technologies, they will unlock new opportunities for personalization, efficiency, and innovation, ensuring they remain at the forefront of their industries. 

AI Services and Models Security Shortcomings
Orca Report: AI Services and Models Show Security Shortcomings

Recent research by Orca Security reveals significant security vulnerabilities in AI services and models deployed in the cloud. The "2024 State of AI Security Report" underscores the urgent need for improved security practices as AI technologies advance rapidly.

AI usage is exploding. Gartner predicts that the AI software market will grow 19.1% annually, reaching 8 billion by 2027. In many ways, AI is now at a stage reminiscent of where cloud computing was over a decade ago. Orca's analysis of cloud assets across major platforms (AWS, Azure, Google Cloud, Oracle Cloud, and Alibaba Cloud) has highlighted troubling risks associated with AI tools and models. Despite the surge in AI adoption, many organizations are neglecting fundamental security measures, potentially exposing themselves to significant threats.

The report indicates that while 56% of organizations use their own AI models for various purposes, a substantial portion of these deployments contain at least one known vulnerability. Orca's findings suggest that although most vulnerabilities are currently classified as low to medium risk, they still pose a serious threat. Notably, 62% of organizations have implemented AI packages with vulnerabilities, which carry an average CVSS score of 6.9. Only 0.2% of these vulnerabilities have known public exploits, compared with the industry average of 2.5%.

Insecure Configurations and Controls

Orca's research reveals concerning security practices among widely used AI services. For instance, Azure OpenAI, a popular choice for building custom applications, was found to be improperly configured in 27% of cases. This lapse could allow attackers to access or manipulate data transmitted between cloud resources and AI services. The report also criticizes default settings in Amazon SageMaker, a prominent machine learning service.
It highlights that 45% of SageMaker buckets use non-randomized default names, and 98% of organizations have not disabled default root access for SageMaker notebook instances. These defaults create openings that attackers could exploit to gain unauthorized access and act on the underlying assets. Additionally, the report points to a lack of self-managed encryption keys and encryption protection: 98% of organizations using Google Vertex have not enabled encryption at rest for their self-managed keys, potentially exposing sensitive data to unauthorized access or alteration.

Exposed Access Keys and Platform Risks

Security issues extend to popular AI platforms like OpenAI and Hugging Face. Orca's report found that 20% of organizations using OpenAI and 35% using Hugging Face have exposed access keys, heightening the risk of unauthorized access. This follows recent research by Wiz, presented at Black Hat USA 2024, which demonstrated vulnerabilities in Hugging Face through which sensitive data was compromised.

Addressing the Security Challenge

Orca co-founder and CEO Gil Geron emphasizes the need for clear roles and responsibilities in managing AI security. He stresses that security practitioners must recognize and address these risks by setting policies and boundaries. According to Geron, while the challenges are not new, the rapid development of AI tools makes it crucial to address security from both engineering and practitioner perspectives. Geron also highlights the importance of reviewing and adjusting default settings to enhance security, advocating rigorous permission management and network hygiene. As AI technology continues to evolve, organizations must remain vigilant and proactive in safeguarding their systems and data.

In conclusion, the Orca report serves as a critical reminder of the security risks associated with AI services and models.
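Two of the default weaknesses called out above, non-randomized SageMaker bucket names and exposed provider access keys, can be screened for with simple pattern checks. The sketch below is a minimal illustration, not a real scanner: it assumes the documented SageMaker default bucket naming convention (sagemaker-<region>-<account-id>) and the standard "AKIA"-prefixed AWS access key ID format, and all inputs are hypothetical.

```python
import re

# Heuristic checks for two misconfigurations discussed in the report.
# Patterns are assumptions based on documented conventions:
#   - SageMaker's default bucket name: sagemaker-<region>-<account-id>
#   - AWS access key IDs: "AKIA" followed by 16 uppercase alphanumerics
DEFAULT_SAGEMAKER_BUCKET = re.compile(r"^sagemaker-[a-z0-9-]+-\d{12}$")
AWS_ACCESS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def uses_default_bucket_name(bucket_name: str) -> bool:
    """Flag buckets that still carry the non-randomized default name."""
    return bool(DEFAULT_SAGEMAKER_BUCKET.match(bucket_name))

def find_exposed_keys(text: str) -> list:
    """Return AWS-style access key IDs found in config or source text."""
    return AWS_ACCESS_KEY_ID.findall(text)

# Hypothetical inputs for illustration only.
print(uses_default_bucket_name("sagemaker-us-east-1-123456789012"))   # True
print(uses_default_bucket_name("ml-artifacts-prod-a7f3"))             # False
print(find_exposed_keys('aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'))
```

Production-grade scanning of the kind Orca performs covers far more (IAM policies, network paths, encryption settings), but the design point stands: predictable defaults and key material in plain text are both mechanically detectable, so there is little excuse for leaving them in place.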
Organizations must take concerted action to secure their AI deployments and protect against potential vulnerabilities, balancing innovation with security as they adopt AI.

Tectonic notes that Salesforce was not included in the sampling.

Content updated September 2024.
