
ChatGPT OpenAI o1

OpenAI has firmly established itself as a leader in the generative AI space, with ChatGPT being one of the best-known applications of AI today. Powered by the GPT family of large language models (LLMs), ChatGPT's primary models as of September 2024 were GPT-4o and GPT-3.5. In August and September 2024, rumors surfaced about a new OpenAI model codenamed "Strawberry," and speculation grew as to whether it was a successor to GPT-4o or something else entirely. The mystery was resolved on September 12, 2024, when OpenAI launched its new o1 models, including o1-preview and o1-mini.

What Is OpenAI o1?

The OpenAI o1 family is a series of large language models optimized for enhanced reasoning. Unlike GPT-4o, the o1 models are designed for a different kind of user experience, focused on multistep reasoning and complex problem-solving. Like all OpenAI models, o1 uses a transformer-based architecture and excels at tasks such as content summarization, content generation, coding, and question answering. What sets o1 apart is its improved reasoning ability: instead of prioritizing speed, the o1 models spend more time "thinking" about the best approach to a problem, making them better suited for complex queries. The o1 models use chain-of-thought prompting, reasoning step by step through a problem, and employ reinforcement learning techniques to enhance performance.

Initial Launch

On September 12, 2024, OpenAI introduced two versions of the o1 models: o1-preview, the larger and more capable model, and o1-mini, a smaller, faster, and cheaper variant.

Key Capabilities of OpenAI o1

OpenAI o1 can handle a variety of tasks, but its advanced reasoning makes it particularly well suited to complex, multistep problems, especially in STEM fields such as mathematics, science, and coding.

How to Use OpenAI o1

At launch, the o1 models were available to ChatGPT Plus and Team users, with API access for developers.

Limitations of OpenAI o1

As an early iteration, the o1 models have several limitations: they respond more slowly than GPT-4o, o1-preview costs more per token, and the models initially lack features such as web browsing and file uploads.

How OpenAI o1 Enhances Safety

OpenAI released a System Card alongside the o1 models, detailing the safety and risk assessments conducted during their development.
This includes evaluations in areas like cybersecurity, persuasion, and model autonomy. The o1 models also incorporate several key safety features, including improved resistance to jailbreaking.

GPT-4o vs. OpenAI o1: A Comparison

Here's a side-by-side comparison of GPT-4o and the o1 models:

Release date: GPT-4o, May 13, 2024; o1, Sept. 12, 2024.
Model variants: GPT-4o is a single model; o1 ships as two variants, o1-preview and o1-mini.
Reasoning capabilities: GPT-4o is good; o1 is enhanced, especially in STEM fields.
Performance benchmarks: GPT-4o scored 13% on a Math Olympiad qualifier; o1 scored 83%, with PhD-level accuracy on STEM benchmarks.
Multimodal capabilities: GPT-4o handles text, images, audio, and video; o1 is primarily text, with image capabilities in development.
Context window: 128K tokens for both.
Speed: GPT-4o is fast; o1 is slower due to its additional reasoning.
Cost (per million tokens): GPT-4o, $5 input and $15 output; o1-preview, $15 input and $60 output; o1-mini, $3 input and $12 output.
Availability: GPT-4o is widely available; o1 is limited to specific users.
Features: GPT-4o includes web browsing and file uploads; o1 lacks some of these features.
Safety and alignment: both focus on safety; o1 adds improved resistance to jailbreaking.

OpenAI o1 marks a significant advancement in reasoning capability, setting a new standard for complex problem-solving with LLMs. With enhanced safety features and the ability to tackle intricate tasks, the o1 models offer a distinct upgrade over their predecessors.
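Given the published per-million-token prices (GPT-4o: $5 input / $15 output; o1-preview: $15 / $60; o1-mini: $3 / $12), per-request cost is simple arithmetic. A minimal sketch with those prices hard-coded; actual billing may differ:

```python
# Published prices (USD per million tokens) from the comparison above.
PRICING = {
    "gpt-4o":     {"input": 5.0,  "output": 15.0},
    "o1-preview": {"input": 15.0, "output": 60.0},
    "o1-mini":    {"input": 3.0,  "output": 12.0},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request for a given model."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A 2,000-token prompt with a 1,000-token reply:
print(round(estimate_cost("gpt-4o", 2000, 1000), 4))      # 0.025
print(round(estimate_cost("o1-preview", 2000, 1000), 4))  # 0.09
```

At these rates, the same request costs roughly 3.6x more on o1-preview than on GPT-4o, while o1-mini comes in below GPT-4o.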


Training and Testing Data

Data plays a pivotal role in machine learning (ML) and artificial intelligence (AI). Tasks such as recognition, decision-making, and prediction rely on knowledge acquired through training. Much like a parent teaches a child to distinguish between a cat and a bird, or an executive learns to identify business risks hidden within detailed quarterly reports, ML models require structured training on high-quality, relevant data. As AI continues to reshape the modern business landscape, training data becomes increasingly crucial.

What is Training Data?

The two primary strengths of ML and AI lie in their ability to identify patterns in data and to make informed decisions based on those patterns. To execute these tasks effectively, models need a reference framework. Training data provides this framework by establishing a baseline against which models can assess new data.

Consider the example of image recognition for distinguishing cats from birds. ML models cannot inherently differentiate between objects; they must be taught to do so. In this scenario, the training data would consist of thousands of labeled images of cats and birds, highlighting relevant features, such as a cat's fur, pointed ears, and four legs versus a bird's feathers, absence of external ears, and two feet.

Training data is generally extensive and diverse. For the image recognition case, the dataset might include numerous examples of cats and birds in different poses, lighting conditions, and settings. The data must be consistent enough to capture common traits while being varied enough to represent natural differences, such as cats of different fur colors in postures like crouching, sitting, standing, and jumping.

In business analytics, an ML model first needs to learn the operational patterns of a business by analyzing historical financial and operational data before it can identify problems or recognize opportunities.
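The cat-versus-bird training data described above can be made concrete with a toy sketch. The feature values below are invented for illustration (a real vision model would learn features from pixels): each labeled example is a small feature vector, and a new observation is classified by its nearest class centroid.

```python
# Toy labeled training data: [has_fur, has_feathers, leg_count] -> label
training_data = [
    ([1, 0, 4], "cat"),
    ([1, 0, 4], "cat"),
    ([0, 1, 2], "bird"),
    ([0, 1, 2], "bird"),
]

def centroids(data):
    """Average the feature vectors for each label."""
    sums, counts = {}, {}
    for features, label in data:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(features, cents):
    """Pick the label whose centroid is closest (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cents, key=lambda lbl: dist(features, cents[lbl]))

cents = centroids(training_data)
print(classify([1, 0, 4], cents))  # cat
print(classify([0, 1, 2], cents))  # bird
```

The "learning" here is just averaging labeled examples, but it illustrates the core idea: labeled training data defines the baseline against which new inputs are judged.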
Once trained, the model can detect unusual patterns, like abnormally low sales for a specific item, or suggest new opportunities, such as a more cost-effective shipping option. After ML models are trained, tested, and validated, they can be applied to real-world data. For the cat-versus-bird example, a trained model could be integrated into an AI platform that uses real-time camera feeds to identify animals as they appear.

How is Training Data Selected?

The adage "garbage in, garbage out" resonates particularly well for ML training data: the performance of ML models is directly tied to the quality of their training data. This underscores the importance of data sources, relevance, diversity, and quality for ML and AI developers.

Data Sources

Training data is seldom available off the shelf, although this is evolving. Sourcing raw data can be a complex task; imagine locating and obtaining thousands of images of cats and birds for the relatively straightforward model described earlier. Moreover, raw data alone is insufficient for supervised learning; it must be meticulously labeled to emphasize the key features the ML model should focus on. Proper labeling is crucial, as messy or inaccurately labeled data provides little training value. In-house teams can collect and annotate data, but this process can be costly and time-consuming. Alternatively, businesses might acquire data from government databases, open datasets, or crowdsourced efforts, though these sources also require careful attention to data quality. In essence, training data must deliver a complete, diverse, and accurate representation of the intended use case.

Data Relevance

Training data should be timely, meaningful, and pertinent to the subject at hand. For example, a dataset containing thousands of animal images without any cat pictures would be useless for training an ML model to recognize cats.
Furthermore, training data must relate directly to the model's intended application. For instance, business financial and operational data might be historically accurate and complete, but if it reflects outdated workflows and policies, any ML decisions based on it today would be irrelevant.

Data Diversity and Bias

A sufficiently diverse training dataset is essential for constructing an effective ML model. If a model's goal is to identify cats in various poses, its training data should encompass images of cats in multiple positions. Conversely, if the dataset contains only images of black cats, the model's ability to identify white, calico, or gray cats may be severely limited. This issue, known as bias, can lead to incomplete or inaccurate predictions and diminished model performance.

Data Quality

Training data must be of high quality. Problems such as inaccuracies, missing values, or poor resolution can significantly undermine a model's effectiveness. For instance, a business's training data may contain customer names, addresses, and other information; if any of these details are incorrect or missing, the ML model is unlikely to produce the expected results. Similarly, low-quality images of cats and birds that are distant, blurry, or poorly lit have little value as training data.

How is Training Data Utilized in AI and Machine Learning?

Training data is fed into an ML model, where algorithms analyze it to detect patterns. This process enables the model to make more accurate predictions or classifications on future, similar data. There are three primary training techniques: supervised learning, unsupervised learning, and reinforcement learning.

Where Does Reinforcement Learning Fit In?

Unlike supervised and unsupervised learning, which rely on predefined training datasets, reinforcement learning adopts a trial-and-error approach in which an agent interacts with its environment. Feedback in the form of rewards or penalties guides the agent's strategy improvement over time.
Whereas supervised learning depends on labeled data and unsupervised learning identifies patterns in raw data, reinforcement learning emphasizes dynamic decision-making, prioritizing ongoing experience over static training data. This approach is particularly effective in fields like robotics, gaming, and other real-time applications.

The Role of Humans in Supervised Training

The supervised training process typically begins with raw data, since comprehensive, appropriately pre-labeled datasets are rare. This data can be sourced from various locations or even generated in-house.

Training Data vs. Testing Data

After training, ML models undergo validation through testing, much as teachers assess students after lessons. Test data ensures that the model has been adequately trained and can deliver results within acceptable accuracy and performance ranges. In supervised learning, the test set is typically a portion of the labeled data held out from training, so the model is evaluated on examples it has never seen.
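Holding out test data can be sketched in a few lines. This is a simplified split; libraries such as scikit-learn provide a more robust `train_test_split` with stratification and other options:

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    """Shuffle the dataset and hold out a fraction for testing."""
    rng = random.Random(seed)           # fixed seed for reproducibility
    shuffled = data[:]                  # copy so the original order is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

dataset = list(range(100))
train, test = train_test_split(dataset, test_ratio=0.2)
print(len(train), len(test))  # 80 20
```

The key property is that the two sets are disjoint: every example lands in exactly one of them, so test accuracy reflects performance on unseen data.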


Salesforce Email Deliverability Settings

Salesforce Email Deliverability Settings: Managing Communication in Sandboxes

Salesforce gives administrators control over the types of emails that can be sent from their environments, especially within the sandbox environments used for development and testing. These email deliverability settings ensure that sensitive or erroneous emails don't reach actual users during development. Below, we dive into the details of these settings and explain their impact.

Where to Find Deliverability Settings

In Setup, enter "Deliverability" in the Quick Find box and select Deliverability under Email Administration. Note: if Salesforce has restricted your ability to change these settings, they may not be editable.

Three Access Levels for Email Deliverability

Salesforce offers three access levels that control outbound email in your organization: No access, System email only, and All email.

The Importance of the "System Email Only" Setting

The System Email Only setting is particularly valuable in sandbox environments. When testing workflows, triggers, or automations in a sandbox, this setting ensures only critical system emails (e.g., password resets) are sent, preventing development or test emails from reaching real users.

New Sandboxes Default to System Email Only

Since Salesforce's Spring '13 release, new and refreshed sandboxes default to the System Email Only setting, which helps prevent accidental email blasts during testing. Sandboxes created before Spring '13 default to All Email, but it's recommended to switch them to System Email Only to avoid sending test emails. For example, if you're testing a custom email alert in a sandbox for a retail company, this setting lets you test safely without worrying about sending emails to actual customers.

Bounce Management in Salesforce

Bounce management helps you track and manage email deliverability issues, particularly for emails sent via Salesforce or through an email relay.
Creating Custom Bounce Reports in Lightning Experience

If the standard bounce reports aren't available in your organization, or if you're using Salesforce Lightning, you can create custom reports on contacts and leads using the Email Bounced Reason and Email Bounced Date fields.

By configuring Salesforce email deliverability settings and managing bounces, administrators can ensure smooth, secure communication across their organization, especially when working in sandbox environments. These tools help maintain control over outbound emails, protecting users from erroneous communication while providing valuable insight into email performance.
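Outside of Salesforce reports, the same bounce fields can drive ad hoc analysis of exported records. A sketch with invented sample data; the dictionary keys mirror the Email Bounced Reason / Email Bounced Date fields named above:

```python
from datetime import date

# Hypothetical exported contact records; field names mirror Salesforce's
# bounce-management fields (EmailBouncedReason, EmailBouncedDate).
contacts = [
    {"Email": "a@example.com", "EmailBouncedReason": "Mailbox full",
     "EmailBouncedDate": date(2024, 9, 1)},
    {"Email": "b@example.com", "EmailBouncedReason": None,
     "EmailBouncedDate": None},
    {"Email": "c@example.com", "EmailBouncedReason": "Unknown address",
     "EmailBouncedDate": date(2024, 6, 15)},
]

def recent_bounces(records, since):
    """Return records that bounced on or after `since`."""
    return [r for r in records
            if r["EmailBouncedDate"] and r["EmailBouncedDate"] >= since]

report = recent_bounces(contacts, date(2024, 7, 1))
print([r["Email"] for r in report])  # ['a@example.com']
```

A filter like this is handy for spotting recently degraded addresses before a large send.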


Agentforce Advances Copilot and Prompt Builder

Agentforce was the highlight of the week in San Francisco during Salesforce's annual Dreamforce conference, and for good reason: Agentforce advances Copilot and Prompt Builder, and that is truly exciting. Agentforce represents a groundbreaking solution that promises to transform how individuals and organizations interact with their CRM. However, as with any major product announcement, it raises many questions. This was evident during Dreamforce, where admins and developers, eager to dive into Agentforce, had numerous queries. Here's an in-depth look at what Agentforce is, how it operates, and how organizations can leverage it to automate processes and drive value today.

Many Dreamforce attendees who anticipated hearing more about Einstein Copilot were surprised by the introduction of Agents just before the event. Understanding the distinctions between the legacy Einstein Copilot and the new Agentforce is crucial. Agentforce Agents are essentially a rebranding of Copilot Agents, but with an essential enhancement: they expand the functionality of Copilot into autonomous agents capable of tasks such as summarizing or generating content and taking specific actions.

Just like Einstein Copilot, Agents work from user input, an "utterance," entered into the Agentforce chat interface. The agent translates this utterance into a series of actions based on configurable instructions, executes the plan, and provides a response.

Understanding Agents: Topics

A key difference between Einstein Copilot and Agentforce is the addition of "Topics." Topics allow for greater flexibility and support a broader range of actions. They organize tasks by business function, helping Agents first determine the appropriate topic and then identify the necessary actions. This topic layer reduces confusion and ensures the correct action is taken.
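The topic layer can be pictured as a two-step lookup: classify the utterance into a business topic, then choose among the actions registered under that topic. A schematic sketch; the topic names, actions, and keyword matching are invented for illustration (in the real product, an LLM does this routing, not keyword rules):

```python
# Hypothetical topic registry: business topics group related actions.
TOPICS = {
    "opportunity management": {
        "keywords": ["opportunity", "deal", "pipeline"],
        "actions": ["summarize_opportunity", "create_close_plan"],
    },
    "customer outreach": {
        "keywords": ["email", "follow up", "outreach"],
        "actions": ["draft_sales_email"],
    },
}

def route(utterance: str):
    """Pick the first topic whose keywords appear in the utterance,
    then return that topic's registered actions."""
    text = utterance.lower()
    for topic, spec in TOPICS.items():
        if any(kw in text for kw in spec["keywords"]):
            return topic, spec["actions"]
    return None, []

topic, actions = route("Summarize the Acme opportunity for me")
print(topic, actions)
```

Because actions are scoped to a topic, the agent only has to choose among a handful of candidates at each step, which is why the topic layer scales to many more custom actions.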
With this structure, Agentforce can support many more custom actions than Copilot's 15-20, significantly expanding capabilities.

Understanding Agents: Actions

Actions in Agentforce function similarly to those in Einstein Copilot: they are the tasks an agent executes once it has identified the right plan. Out-of-the-box actions are available right away, providing a quick win for organizations looking to implement standard actions like opportunity summarization or sales emails. For more customized use cases, organizations can create bespoke actions using Apex, Flows, Prompts, or Service Catalog items (currently in beta).

Understanding Agents: Prompts

Whenever an LLM is used, prompts provide the input that shapes its output, and thoughtfully engineered prompts are essential for getting accurate, useful responses. This is a key part of leveraging Agent Actions effectively: better prompts mean better results, fewer errors, and more productive agent behavior. Prompt Builder plays a crucial role here, allowing users to build, test, and refine prompts for Agent Actions, creating a seamless experience between generative AI and Salesforce workflows.

How Generative AI and Agentforce Enhance CRM

GenAI tools like Agentforce offer exciting enhancements to Salesforce organizations. However, these benefits are realized only when CRM users adopt and adapt to AI-assisted workflows. Organizations must prioritize change management and training, as most users will need to adjust to this new AI-powered way of working. If your company has already embraced AI, you are halfway there. If AI hasn't been introduced to your workforce, you need to get started yesterday.

Getting Started with Agentforce

With all the buzz around Dreamforce, it's no surprise that many organizations are eager to start using Agentforce. Fortunately, there are immediate opportunities to leverage these tools.
The recommended approach is to begin with standard Agent actions, testing out-of-the-box features like opportunity summarization or creating close plans. From there, organizations can make incremental tweaks to customize actions for their specific needs. We have all come to expect that, just as quickly as we fold agentic AI into our processes and flows, Salesforce will add additional features and capabilities. As teams become more familiar with developing and deploying Agent actions, more complex use cases will become manageable, transforming the traditional point-and-click Salesforce experience into a more intelligent, agent-driven platform. Already I find myself asking, "Is this a person or an AI agent?" The day is coming, no doubt, when the question will be reversed.

Tectonic's AI Experts Can Help

Interested in learning more about Agentforce or need guidance on getting started? Tectonic specializes in AI and analytics solutions within CRM, helping organizations unlock significant productivity gains through AI-based tools that optimize business processes. We are excited to help you put Agentforce, Copilot, and Prompt Builder to work.

By Tectonic's Solutions Architect, Shannan Hearne


Salesforce and OpenAI Advances in AI

With investor enthusiasm for AI beginning to fade, Salesforce is shifting focus to its next AI wave, "Agentforce," which will be showcased at the Dreamforce customer conference. The announcement comes at a time when Salesforce stock has underperformed, with revenue growth slowing and expectations building that AI-related revenue may not materialize until 2025. The Agentforce platform will be featured at Dreamforce, running from Sept. 17 to Sept. 19, and aims to automate routine business tasks while offering real-time insights and guidance. CEO Marc Benioff noted in a Sept. 12 briefing that Agentforce represents the third wave of AI, moving beyond conversational chatbots to more autonomous agents. Early adopters of the platform include Walt Disney, Kaiser Permanente, Fossil, Wiley, and OpenTable.

Meanwhile, Salesforce faces stiff competition. Microsoft is hosting its own AI event, Microsoft 365 Copilot Wave 2, which focuses on business productivity features powered by generative AI. Like Salesforce, Microsoft's AI tools have yet to demonstrate significant revenue impact, as customers are still testing the technologies.

Salesforce is positioning Agentforce as an evolution of its earlier Einstein Copilot, which integrates conversational AI within its apps. Agentforce aims to take this further by reducing human oversight and improving efficiency in sales, marketing, and customer service roles. The product is scheduled for an October rollout, with a usage-based pricing model, potentially $2 per interaction for complex queries.

Analysts have mixed opinions on Agentforce's potential. Truist Securities sees the AI platform driving future subscription growth, while Barclays believes it could gain more traction than previous AI tools due to its fully autonomous nature. Others, like Monness Crespi Hardt & Co., remain cautious, noting concerns about Salesforce's slowing revenue growth in a challenging macroeconomic environment.
Salesforce Agentforce Platform

In its second-quarter earnings call, Salesforce shared promising results from an Agentforce trial in which the platform resolved 90% of patient inquiries for a large healthcare customer. Analysts like Morgan Stanley's Keith Weiss see Agentforce as a key differentiator for Salesforce, enabling customers to leverage AI at scale with reduced complexity and cost.

Despite this optimism, Salesforce still faces challenges. Competitors such as Meta's AI Studio and ServiceNow are also advancing AI agent technologies. ServiceNow, for instance, emphasizes the need for strict human oversight of AI actions, a sentiment echoed by Salesforce's chief ethical and humane use officer, Paula Goldman. As the tech industry races to enhance AI autonomy, concerns about the technology's limitations, such as bias, hallucinations, and decision-making risks, remain central. Experts warn that while AI agents hold great potential, they must be carefully regulated to prevent unintended consequences.


GPT-o1 GPT5 Review

OpenAI has released its latest model series, OpenAI o1 (known during development by the codename "Project Strawberry," and speculated by some to be "GPT-5"), positioning it as a significant advancement in AI with PhD-level reasoning capabilities. The OpenAI-o1 series is designed to enhance problem-solving in fields such as science, coding, and mathematics, and initial results indicate that it lives up to the anticipation.

Key Features of OpenAI-o1

The o1 series is notable for its enhanced reasoning capabilities, a stronger focus on safety and alignment, targeted applications in STEM fields, and two model variants, o1-preview and o1-mini.

Access and Availability

The o1 models are available to ChatGPT Plus and Team users, with broader access expected soon for ChatGPT Enterprise users. Developers can access the models through the API, although certain features, like function calling, are still in development. Free access to o1-mini is expected in the near future.

Reinforcement Learning at the Core

The o1 models use reinforcement learning to improve their reasoning abilities. This approach trains the models to think more effectively, improving their performance the more time they spend on a task. OpenAI continues to explore how to scale this approach, though details remain limited.

Major Milestones

The o1 model has achieved impressive results in several competitive benchmarks.

Chain of Thought Reasoning

OpenAI's o1 models employ the "Chain of Thought" technique, which allows the model to think through problems step by step. This method helps the model approach complex problems in a structured way, similar to human reasoning.

While the o1 models show immense promise, there are still some limitations, which have been covered in detail elsewhere. Based on early tests, however, the model is performing impressively, and users are hopeful that these capabilities are as robust as advertised, rather than overhyped like previous OpenAI projects such as Sora or SearchGPT.
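For models without o1's built-in reasoning, chain-of-thought behavior is commonly elicited with an explicit instruction in the prompt. A minimal sketch of such a prompt builder (the wording is one common pattern, not an official template; o1 itself reasons step by step internally without needing this):

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in a simple chain-of-thought instruction."""
    return (
        "Solve the following problem. Reason step by step, "
        "then state the final answer on its own line.\n\n"
        f"Problem: {question}"
    )

prompt = chain_of_thought_prompt(
    "A train travels 120 km in 1.5 hours. What is its average speed?"
)
print(prompt)
```

The resulting string would be passed as the user message in an API call; the instruction nudges the model to show intermediate steps rather than jumping to an answer.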


Impact of EHR Adoption

Fueled by the availability of chatbot interfaces like ChatGPT, generative AI has become a key focus across various industries, including healthcare. Many electronic health record (EHR) vendors are integrating the technology to streamline administrative workflows, allowing clinicians to focus more on patient care. Whether you see EHR adoption as easy or challenging, its impact stands to be positive.

Generative AI and EHR Efficiency

As defined by the Government Accountability Office (GAO), generative AI is "a technology that can create content, including text, images, audio, or video, when prompted by a user." Generative AI systems learn patterns from vast datasets, enabling them to generate new, similar content using machine learning algorithms and statistical models. One area where generative AI shows promise is automating EHR workflows, which could ease the burden on clinicians.

Epic's AI-Driven Innovations

Phil Lindemann, vice president of data and analytics at Epic, noted that generative AI is ideal for automating repetitive tasks. One application under testing lets the technology draft patient portal message responses for clinicians to review and send, saving time and letting doctors spend more time with patients. Another project summarizes updates to a patient's record since the last visit, offering the provider a quick synopsis. Epic is also exploring how generative AI could help patients better understand their health records by translating complex medical terms into more accessible language; the system can also translate this information into various languages, improving patient education across diverse populations. However, Lindemann emphasized that while AI offers valuable tools, it is not a cure-all for healthcare's challenges. "We see it as a translation tool," he said, acknowledging the importance of targeted use cases for successful implementation.
Oracle Health's Clinical Digital Assistant

Oracle Health is beta-testing a generative AI chatbot aimed at reducing administrative tasks for healthcare professionals. The Clinical Digital Assistant summarizes patient information and generates automated clinical notes by listening to patient-provider conversations. Physicians can interact with the tool during consultations, asking for relevant patient data without breaking eye contact with the patient. The assistant can also suggest actions based on the discussion, which providers must review before finalizing. Oracle plans to make this tool widely available by the second quarter of 2024, with the goal of easing clinician workloads and improving the patient experience.

eClinicalWorks and Ambient Listening Technology

In partnership with sunoh.ai, eClinicalWorks is using generative AI-powered ambient listening technology to assist with clinical documentation. This tool automatically drafts clinical notes based on patient conversations, which clinicians can then review and edit as necessary. Girish Navani, CEO of eClinicalWorks, highlighted the potential for generative AI to become a personal assistant for doctors, streamlining documentation tasks and reducing cognitive load. The integration is expected to be available to customers in early 2024.

MEDITECH's AI-Powered Discharge Summaries

MEDITECH is collaborating with Google to develop a generative AI tool focused on automating hospital discharge summaries. These summaries, which are crucial for care coordination, are often time-consuming for clinicians to create, especially for patients with longer hospital stays. The AI system generates draft summaries that clinicians can review and edit, aiming to speed up discharges and reduce clinician burnout. MEDITECH is working with healthcare organizations to validate the technology before a general release. Helen Waters, executive vice president and COO of MEDITECH, stressed the importance of careful implementation.
The goal is to ensure accuracy and build trust among clinicians so that generative AI can be successfully integrated into clinical workflows.

The Impact of EHR Adoption

EHR systems have transformed healthcare, improving care coordination and decision support. However, EHR-related administrative burdens have also contributed to clinician burnout: a 2019 study found that 40% of physician burnout was linked to EHR use. By automating time-consuming EHR tasks, generative AI could help reduce this burden and improve clinical efficiency.


E-Commerce Platform Improvement

Section I: Problem Statement

CVS Health is continuously exploring ways to improve its e-commerce platform, cvs.com. One potential enhancement is the implementation of a complementary product bundle recommendation feature on its product description pages (PDPs). For instance, when a customer browses for a toothbrush, they could also see recommendations for related products like toothpaste, dental floss, mouthwash, or teeth whitening kits. A basic version of this is already available on the site through the “Frequently Bought Together” (FBT) section.

Traditionally, techniques such as association rule mining or market basket analysis have been used to identify frequently purchased products. While effective, CVS aims to go further by leveraging advanced recommendation system techniques, including Graph Neural Networks (GNN) and generative AI, to create more meaningful and synergistic product bundles.

This exploration focuses on expanding the existing FBT feature into FBT Bundles. Unlike the regular FBT, FBT Bundles would offer smaller, highly complementary recommendations (a bundle includes the source product plus two other items). This system would algorithmically create high-quality bundles, such as:

This strategy has the potential to enhance both sales and customer satisfaction, fostering greater loyalty. While CVS does not yet have the FBT Bundles feature in production, it is developing a Minimum Viable Product (MVP) to explore this concept.

Section II: High-Level Approach

The core of this solution is a Graph Neural Network (GNN) architecture. Based on the work of Yan et al. (2022), CVS adapted this GNN framework to its specific needs, incorporating several modifications.
The implementation consists of three main components:

Section III: In-Depth Methodology

Part 1: Product Embeddings

Module A: Discovering Product Segment Complementarity Relations Using GPT-4

Embedding plays a critical role in this approach, converting text (like product names) into numerical vectors to help machine learning models understand relationships. CVS uses a GNN to generate embeddings for each product, ensuring that relevant and complementary products are grouped closely in the embedding space. To train this GNN, a product-relation graph is needed. While some methods rely on user interaction data, CVS found that transaction data alone was not sufficient, as customers often purchase unrelated products in the same session. For example:

Instead, CVS utilized GPT-4 to identify complementary products at a higher level in the product hierarchy, specifically at the segment level. With approximately 600 distinct product segments, GPT-4 was used to identify the top 10 most complementary segments, streamlining the process.

Module B: Evaluating GPT-4 Output

To ensure accuracy, CVS implemented a rigorous evaluation process: These results confirmed strong performance in identifying complementary relationships.

Module C: Learning Product Embeddings

With complementary relationships identified at the segment level, a product-relation graph was built at the SKU level. The GNN was trained to prioritize pairs of products with high co-purchase counts, high sales volume, and low price, producing an embedding space where relevant products are closer together. This allowed for initial, non-personalized product recommendations.

Part 2: User Embeddings

To personalize recommendations, CVS developed user embeddings. The process involves: This framework is currently based on recent purchases, but future enhancements will include demographic and other factors.
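To make the embedding-space idea concrete, here is a minimal, hypothetical sketch (not CVS’s actual code; the product names, toy vectors, and the `nearest_complements` helper are invented for illustration) of retrieving the two items closest to a source product by cosine similarity, yielding an FBT-style bundle of the source product plus two complements:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_complements(source_sku, embeddings, k=2):
    """Return the k SKUs whose embeddings lie closest to the source product.
    A bundle is then the source product plus these k items."""
    scores = [
        (sku, cosine(embeddings[source_sku], vec))
        for sku, vec in embeddings.items()
        if sku != source_sku
    ]
    scores.sort(key=lambda t: t[1], reverse=True)
    return [sku for sku, _ in scores[:k]]

# Toy embedding space: dental products cluster together, shampoo does not.
embeddings = {
    "toothbrush": [0.9, 0.1, 0.0],
    "toothpaste": [0.85, 0.15, 0.05],
    "floss":      [0.8, 0.2, 0.1],
    "shampoo":    [0.05, 0.9, 0.3],
}
bundle = ["toothbrush"] + nearest_complements("toothbrush", embeddings, k=2)
# bundle -> ["toothbrush", "toothpaste", "floss"]
```

In a trained GNN embedding space, the same nearest-neighbor lookup over SKU vectors would surface the complementary items; the toy vectors here just stand in for learned embeddings.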
Part 3: Re-Ranking Scheme

To personalize recommendations, CVS introduced a re-ranking step:

Section IV: Evaluation of Recommender Output

Given that CVS trained the model using unlabeled data, traditional metrics like accuracy were not feasible. Instead, GPT-4 was used to evaluate recommendation bundles, scoring them on: The results showed that the model effectively generated high-quality, complementary product bundles.

Section V: Use Cases

Section VI: Future Work

Future plans include:
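As a closing illustration, the re-ranking step from Part 3 can be sketched as a weighted blend of a bundle’s base complementarity score with a user-affinity score. Everything below (the function names, the linear weighting, and the toy numbers) is an assumption for illustration, not CVS’s production logic:

```python
def rerank(candidates, user_vec, alpha=0.5):
    """Re-rank candidate bundles by blending the base complementarity
    score with a personalization score (dot product between the user
    embedding and the bundle's embedding).
    `candidates` maps bundle id -> (base_score, bundle_embedding)."""
    def affinity(vec):
        return sum(u * v for u, v in zip(user_vec, vec))

    scored = [
        (bid, alpha * base + (1 - alpha) * affinity(vec))
        for bid, (base, vec) in candidates.items()
    ]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [bid for bid, _ in scored]

candidates = {
    "whitening_kit_bundle": (0.90, [0.1, 0.2]),
    "travel_dental_bundle": (0.80, [0.9, 0.4]),
}
user_vec = [1.0, 0.5]  # hypothetical user embedding favoring travel items
order = rerank(candidates, user_vec, alpha=0.5)
# order -> ["travel_dental_bundle", "whitening_kit_bundle"]
```

With `alpha=0.5` the user’s affinity for travel items outweighs the slightly higher base score of the whitening bundle, which is exactly the effect a personalization re-ranker is meant to produce.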


ChatGPT Memory Announced

We’re testing memory with ChatGPT to make your experience more seamless by saving important details across chats, so you won’t have to repeat yourself. This feature helps make future conversations more helpful. You’re fully in control of ChatGPT’s memory. You can ask it to remember something, view what it recalls, and even delete specific memories, either conversationally or through settings. Memory can also be turned off completely. This week, we’re rolling out memory to a small group of free and Plus users to gather feedback. Broader rollout plans will be shared soon.

How Memory Works

As you interact with ChatGPT, it can remember key details from your conversations, improving the quality of future responses. For instance:

You’re In Control

You can turn memory off at any time (Settings > Personalization > Memory). With memory off, ChatGPT won’t store or use any memories. To delete specific memories, simply ask ChatGPT to forget them or manage them in settings. Memory persists across interactions, meaning deleting a chat doesn’t erase its associated memory—you’ll need to delete the memory itself. ChatGPT may use the content you provide, including memories, to improve its models for everyone, unless you opt out through Data Controls. Note that content from Team and Enterprise accounts won’t be used to train models.

Temporary Chat for No Memory

If you’d prefer a conversation without memory, use temporary chat. These conversations won’t appear in history, won’t store memories, and won’t contribute to model training.

Custom Instructions and Memory

Custom Instructions let you guide ChatGPT on how to respond, while memory captures information shared in conversations. This combination allows ChatGPT to become more personalized and responsive over time.

Privacy and Safety Standards

We’re evolving our privacy and safety protocols to address memory’s impact. ChatGPT is designed to avoid remembering sensitive information, like health data, unless explicitly requested.
Memory for Team and Enterprise Users

For Team and Enterprise users, memory helps increase efficiency by learning individual preferences and reducing the need for repetitive instructions. For example, ChatGPT can remember your preferred tone and structure for content or your preferred coding languages for programming tasks. Memory in Team and Enterprise accounts remains secure and excluded from model training, with full control over how and when memories are used. Account owners can disable memory for the organization at any time.

Memory for GPTs

GPTs, too, will have distinct memories. Builders can choose to enable memory, and each GPT will store its own memories. For example, a book recommendation GPT can remember your favorite genres for tailored suggestions. To interact with memory-enabled GPTs, you’ll need memory turned on. Each GPT will have its own separate memory, so details shared with ChatGPT won’t carry over unless re-entered.

Memory is now available to ChatGPT Free, Plus, Team, and Enterprise users. Based on user feedback, ChatGPT will notify you when a memory is updated, and you can easily review or delete those updates through the “Manage memories” option in settings.


Salesforce Org Merge Risks

Managing Multiple Salesforce Instances: Challenges and Solutions

For growing enterprises, managing multiple Salesforce instances can be a significant challenge. Each instance may house critical business data and processes, which often need to be consolidated, particularly during mergers, acquisitions, or different stages of Salesforce adoption. This consolidation is essential to reduce operating costs and enhance efficiency.

Salesforce Org Merge Risks Overview

Salesforce consolidation involves merging several instances into a single Salesforce organization. This process aims to improve operational efficiency, data visibility, and process standardization while minimizing the total cost of ownership. It may require setting up a new Salesforce organization to facilitate the merger.

Typical Salesforce Consolidation Plan

A comprehensive consolidation plan typically includes the following steps:

Complexity and Benefits of Salesforce Consolidation

While Salesforce consolidation offers significant benefits, such as improved efficiency and reduced costs, it is a complex process requiring careful planning and execution. Many companies partner with Salesforce experts, like Tectonic, to navigate the intricacies of consolidation successfully.

Salesforce Org Merge Risks

Risk 1: Under-Scoping Data Mapping, Migration, and Merging

Risk 2: Overlooking Metrics, Measurements, and Reports

Risk 3: Limiting Stakeholder Engagement and Change Management

Conclusion

While meticulous planning cannot guarantee a flawless Salesforce migration, it fosters communication among Salesforce, data, and business leaders, making challenges more manageable. Although managing and consolidating systems might seem straightforward, guiding people, processes, and data through the consolidation process is inherently complex and demanding.


AI-Powered Field Service

Salesforce has introduced new AI-powered field service capabilities designed to streamline operations for dispatchers, technicians, and field service leaders. Leveraging the Salesforce platform and Data Cloud, these innovations aim to expedite time-consuming processes and enhance customer satisfaction by making field service operations more proactive and efficient.

Why it matters: Field service teams currently spend only 32% of their time interacting with customers, with the remaining 68% consumed by administrative tasks like manually entering case notes. With 78% of field service workers in AI-enabled organizations reporting that AI helps save time, Salesforce’s new tools address these inefficiencies head-on.

Key AI-driven innovations for Field Service:

Availability:

Paul Whitelam, GM & SVP of Salesforce Field Service, notes, “The future of field service lies in the seamless integration of AI, data, and human expertise. Our new capabilities set new standards for efficiency and service delivery.” Rudi Khoury, Chief Digital Officer at Fisher & Paykel, adds, “With Salesforce Field Service, we’re not just embracing AI and data-driven insights — we’re advancing into the future of field service, achieving unprecedented efficiency and exceptional service.”


US Comprehensive AI Legislation

U.S. policymakers have yet to pass comprehensive AI legislation through Congress, but several AI-related bills are now making their way to the Senate floor, presenting new opportunities for regulation. In late July, the U.S. Senate Committee on Commerce, Science, and Transportation advanced eight AI-focused bills aimed at enhancing the transparency and safety of AI systems. These bills also target AI-generated deepfakes—false images, audio, and videos.

Since the launch of OpenAI’s ChatGPT in late 2022, regulating AI has become a key issue at both federal and state levels. This week, California lawmakers advanced SB 1047, a bill requiring safety testing for AI models, which is awaiting Governor Gavin Newsom’s signature. Most of the bills before the Senate center on innovation, research, and safety, with only one—the Artificial Intelligence Research, Innovation, and Accountability Act—introducing penalties for non-compliance.

“Voluntary guidance and standards can help companies develop safer, more responsible AI, but without binding requirements, the real impact is unlikely,” said Enza Iannopollo, an analyst at Forrester Research. However, Hodan Omaar, a senior policy manager at the Center for Data Innovation, praised the Senate’s emphasis on AI research and innovation, expressing optimism about the progress being made.

Here’s a look at the key AI bills up for consideration after Congress returns from summer recess:


Strong AI Scalability

The rapid pace of digital transformation has made scalability essential for any business looking to remain competitive. The stakes are high—without the ability to scale, businesses risk falling behind as customer demands and market conditions shift. So, what does it take to build a scalable business that can grow without compromising performance or customer satisfaction? In this Tectonic insight, we’ll cover key steps to future-proof your operations, avoid common pitfalls, and ensure your business doesn’t just keep pace with the market, but leads it.

Master Scalability with Scale Center

Scalability doesn’t have to be overwhelming. Salesforce’s Scale Center, available on Trailhead, provides a comprehensive learning path to help you optimize your scalability strategy.

Why Scalability Is a Must-Have

Scalability is critical to long-term success. As your business grows, so will the demands on your applications, infrastructure, and resources. If your systems aren’t prepared, you risk performance issues, outages, lost revenue, and dissatisfied customers. Unexpected spikes in demand—from increased customer activity or internal changes like onboarding large numbers of employees—can push systems to their limits, leading to overloads or downtime. A strong scalability plan helps prevent these issues. Here are three best practices to help scale your operations smoothly and sustainably.

1. Prioritize Proactive Scale Testing

Scale testing should be a key part of your application lifecycle. Many businesses wait until performance issues arise before addressing them, which can result in maintenance headaches, poor user experiences, and challenges in supporting growth. Proactive steps to take:

2. Use the Right Tools for Seamless Scalability

Choosing the right technology is crucial when scaling your business.
Equip your team with tools that support growth management, and follow these tips for success: By integrating the right tools and technologies, you’ll not only stay ahead of the curve but also build a culture ready to scale.

3. Focus on Sustainable Growth Strategies

Scaling requires a long-term approach. From development to deployment, a strategy that emphasizes scalability from the outset can help you avoid costly fixes down the road. Key practices include:

DevOps Done Right

Building secure, scalable AI applications and agents requires bridging the gap between tools and skills. Focus on crafting a thoughtful DevOps strategy that supports scalability.

Scalability: A Marathon, Not a Sprint

Scaling effectively is an ongoing process. Customer needs and market conditions will continue to change, so your strategies should evolve as well. Scalability is about more than just handling increased demand—it’s about ensuring stability and performance across the board. Consider these steps to enhance your approach:

Committing to Scalability

Scalability isn’t a one-time achievement—it’s a continuous commitment to growing smarter and stronger across all areas of your business. By embedding best practices into your day-to-day operations, you’ll ensure that your systems meet demand and prepare your business for future breakthroughs. As you develop your scalability strategy, remember that customer experience and trust should always guide your decisions. Tackling scalability proactively ensures your business can thrive no matter how market conditions change. It’s more than just a bonus feature—it’s a critical element of a smoother user experience, reduced costs, and the flexibility to pivot when necessary. By embracing these strategies, you’ll not only avoid potential challenges but also build lasting trust with your customers. In a world where loyalty is earned through exceptional experiences, a strong scalability plan is your key to long-term success.


AI Senate Bill 1047

California’s new AI bill has sparked intense debate, with proponents viewing it as necessary regulation and critics warning it could stifle innovation, particularly for small businesses. Senate Bill 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, mandates that developers of advanced AI systems costing at least $100 million to train must test their models for potential harm and put safeguards in place. It also offers whistleblower protections for employees at large AI firms and establishes CalCompute, a public cloud computing resource aimed at startups and researchers. The bill is awaiting Governor Gavin Newsom’s signature by Sept. 30 to become law.

Prominent AI experts, including Geoffrey Hinton and Yoshua Bengio, support the bill. However, it has met resistance from various quarters, including Rep. Nancy Pelosi and OpenAI, who argue it could hinder innovation and the startup ecosystem. Pelosi and others have expressed concerns that the bill’s requirements might burden smaller businesses and harm California’s leadership in tech innovation.

Gartner analyst Avivah Litan acknowledged the dilemma, stating that while regulation is critical for AI, the bill’s requirements might negatively impact small businesses. “Some regulation is better than none,” she said, but added that thresholds could be challenging for smaller firms. Steve Carlin, CEO of AiFi, criticized the bill for its vague language and complex demands on AI developers, including unclear guidance on enforcing the rules. He suggested that instead of focusing on AI models, legislation should address the risks and applications of AI, as seen with the EU AI Act.

Despite concerns, some experts like Forrester Research’s Alla Valente support the bill’s safety testing and whistleblower protections. Valente argued that safeguarding AI models is essential across industries, though she acknowledged that the costs of compliance could be higher for small businesses.
Still, she emphasized that the long-term costs of not implementing safeguards could be greater, with risks including customer lawsuits and regulatory penalties.

California’s approach to AI regulation adds to the growing patchwork of state-level AI laws in the U.S. Colorado and Connecticut have also introduced AI legislation, and cities like New York have tackled issues like algorithmic bias. Carlin warned that a fragmented state-by-state regulatory framework could create a costly and complex environment for developers, calling for a unified federal standard instead. While federal legislation has been proposed, none has passed, and Valente pointed out that relying on Congress for action is a slow process. In the meantime, states like California are pushing ahead with their own AI regulations, creating both opportunities and challenges for the AI industry.


Exploring Large Action Models

Exploring Large Action Models (LAMs) for Automated Workflow Processes

While large language models (LLMs) are effective in generating text and media, Large Action Models (LAMs) push beyond simple generation—they perform complex tasks autonomously. Imagine an AI that not only generates content but also takes direct actions in workflows, such as managing customer relationship management (CRM) tasks, sending emails, or making real-time decisions. LAMs are engineered to execute tasks across various environments by seamlessly integrating with tools, data, and systems. They adapt to user commands, making them ideal for applications in industries like marketing, customer service, and beyond.

Key Capabilities of LAMs

A standout feature of LAMs is their ability to perform function-calling tasks, such as selecting the appropriate APIs to meet user requirements. Salesforce’s xLAM models are designed to optimize these tasks, achieving high performance with lower resource demands—ideal for both mobile applications and high-performance environments. The fc series models are specifically tuned for function-calling, enabling fast, precise, and structured responses by selecting the best APIs based on input queries.

Practical Examples Using Salesforce LAMs

In this article, we’ll explore:

Implementation: Setting Up the Model and API

Start by installing the necessary libraries:

```python
!pip install transformers==4.41.0 datasets==2.19.1 tokenizers==0.19.1 flask==2.2.5
```

Next, load the xLAM model and tokenizer:

```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/xLAM-7b-fc-r"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Now, define instructions and available functions.

Task Instructions: The model will use function calls where applicable, based on user questions and available tools.
Format Example:

```json
{
  "tool_calls": [
    {"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}
  ]
}
```

Define available APIs:

```python
get_weather_api = {
    "name": "get_weather",
    "description": "Retrieve weather details",
    "parameters": {"location": "string", "unit": "string"},
}

search_api = {
    "name": "search",
    "description": "Search for online information",
    "parameters": {"query": "string"},
}
```

Creating Flask APIs for Business Logic

We can use Flask to create APIs that replicate business processes.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/customer", methods=["GET"])
def get_customer():
    customer_id = request.args.get("customer_id")
    # Return dummy customer data
    return jsonify({"customer_id": customer_id, "status": "active"})

@app.route("/send_email", methods=["GET"])
def send_email():
    email = request.args.get("email")
    # Return a dummy response for the email send status
    return jsonify({"status": "sent"})
```

Testing the LAM Model and Flask APIs

Define queries to test LAM’s function-calling capabilities:

```python
query = "What's the weather like in New York in fahrenheit?"
print(custom_func_def(query))
# Expected: {"tool_calls": [{"name": "get_weather",
#            "arguments": {"location": "New York", "unit": "fahrenheit"}}]}
```

Function-Calling Models in Action

Using base_call_api, LAMs can determine the correct API to call and manage workflow processes autonomously.

```python
import requests

def base_call_api(query):
    """Calls APIs based on LAM recommendations."""
    base_url = "http://localhost:5000/"
    json_response = json.loads(custom_func_def(query))
    api_url = json_response["tool_calls"][0]["name"]
    params = json_response["tool_calls"][0]["arguments"]
    response = requests.get(base_url + api_url, params=params)
    return response.json()
```

With LAMs, businesses can automate and streamline tasks in complex workflows, maximizing efficiency and empowering teams to focus on strategic initiatives.
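The article never shows `custom_func_def`, the helper that turns a query into the model’s tool-call JSON. As a rough, self-contained sketch of the surrounding dispatch logic, here is a version with a stubbed model response standing in for the xLAM output and a local function table standing in for the HTTP endpoints (all names and values below are illustrative assumptions):

```python
import json

# Stubbed model output; in the article this JSON would come from the xLAM model.
STUB_RESPONSE = json.dumps({
    "tool_calls": [
        {"name": "get_weather",
         "arguments": {"location": "New York", "unit": "fahrenheit"}}
    ]
})

def get_weather(location, unit):
    # Dummy implementation standing in for the Flask /get_weather endpoint.
    return {"location": location, "temp": 72, "unit": unit}

# Map tool names from the model output to local callables.
TOOLS = {"get_weather": get_weather}

def dispatch(model_output):
    """Parse the model's tool_calls JSON and invoke the matching function
    with the arguments the model selected."""
    call = json.loads(model_output)["tool_calls"][0]
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

result = dispatch(STUB_RESPONSE)
# result -> {"location": "New York", "temp": 72, "unit": "fahrenheit"}
```

The same parse-then-dispatch pattern is what `base_call_api` performs over HTTP; swapping the function table for `requests.get` calls against the Flask server recovers the article’s flow.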
