App Archives - gettectonic.com - Page 35

Salesforce Marketing Cloud: Private Domain vs. Verified Domain

Understanding the Difference Between Private Domain and Verified Domain in Salesforce Marketing Cloud

A Private Domain in Salesforce Marketing Cloud provides full DKIM, SPF, and DMARC authentication for a custom domain, which can significantly improve email deliverability. A Verified Domain, in contrast, only confirms that the sender owns the domain; it does not provide the same level of authentication.

While platforms such as Constant Contact allow users to add authentication records (DKIM, SPF, and DMARC) themselves, this approach does not apply to Salesforce Marketing Cloud Verified Domains. Although it is technically possible to self-host DNS for a Private Domain and add authentication records manually, Salesforce must supply the specific values for those records, particularly the DKIM key.

Emails sent through Salesforce Marketing Cloud are signed with a DKIM key, which the recipient's mail server verifies against the DKIM record in the sender's DNS. If the DKIM signature does not match the DNS record, the email fails delivery. Verified Domains do not include Salesforce-signed DKIM keys, making them unsuitable for fully authenticated email sends.

For organizations prioritizing email deliverability and compliance, requesting a Private Domain from Salesforce is recommended. It requires additional setup, but it ensures proper authentication and improves the success of email campaigns.

Related Posts

- Salesforce OEM AppExchange: Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more
- The Salesforce Story: In Marc Benioff's own words, how did salesforce.com grow from a start-up in a rented apartment into the world's... Read more
- Salesforce Jigsaw: Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for... Read more
- Health Cloud Brings Healthcare Transformation: Following swiftly after last week's successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series... Read more
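For context, the authentication discussed above ultimately comes down to DNS TXT records. The sketch below is illustrative only: the domain and report address are made up, the checks are rough syntax sanity checks (not full RFC validation), and for a Salesforce Private Domain the real values, especially the DKIM key, must come from Salesforce.

```python
# Illustrative SPF and DMARC TXT records for a hypothetical domain.
# Real Private Domain values (especially the DKIM key) are supplied by Salesforce.
spf_record = "v=spf1 include:_spf.example.com ~all"
dmarc_record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

def looks_like_spf(record: str) -> bool:
    """Rough sanity check: an SPF record starts with v=spf1 and
    ends with an 'all' mechanism (e.g. ~all, -all)."""
    parts = record.split()
    return bool(parts) and parts[0] == "v=spf1" and parts[-1].endswith("all")

def looks_like_dmarc(record: str) -> bool:
    """A DMARC record starts with v=DMARC1 and declares a policy (p=)."""
    tags = [t.strip() for t in record.split(";")]
    return tags[0] == "v=DMARC1" and any(t.startswith("p=") for t in tags)

print(looks_like_spf(spf_record))      # True
print(looks_like_dmarc(dmarc_record))  # True
```

A check like this only catches malformed records; whether mail actually authenticates depends on the signature and DNS values matching, as described above.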


Workday and Salesforce Unveil New AI Employee Service Agent

In a Wednesday interview with CNBC's Jim Cramer, Salesforce Chair and CEO Marc Benioff and Workday CEO Carl Eschenbach announced a new partnership between their companies to develop an artificial intelligence assistant. The collaboration aims to enhance onboarding, human resources, and other business processes.

Both CEOs emphasized that the strength of the partnership lies in the integration of their extensive data sets. Benioff stated, "AI is all about data, and having access to extensive data enables us to deliver exceptional AI capabilities. This partnership exemplifies two companies coming together to ensure our customers have the data they need to realize the full potential of artificial intelligence."

Highlights of the partnership:

- It will deliver a personalized, AI-powered assistant for employee service use cases such as onboarding, health benefits, and career development within Salesforce and Workday.
- The two companies will establish a common data foundation that unifies HR and financial data from Workday with CRM data from Salesforce, enabling AI-powered use cases that boost productivity, lower costs, and improve the employee experience.
- Workday will be natively integrated inside Slack with deeper automation, so employees can seamlessly collaborate around worker, job, candidate, and similar records using AI.

Salesforce and Workday are both cloud-based software companies. Salesforce is renowned for its Slack application and for its sales, customer service, and marketing software, while Workday specializes in human resources, recruiting, and workforce management. Eschenbach highlighted that together the two companies possess three crucial enterprise data sets: employee data, customer data, and financial data.

He added that the new initiative benefits customers by integrating services across platforms, eliminating the need to switch between different systems. "Through this partnership and our ability to share data, customers can seamlessly access our data sets whether they're using Slack, Workday, or Salesforce," Eschenbach said.

The combination of Salesforce's new Agentforce platform and Einstein AI with the Workday platform and Workday AI will enable organizations to create and manage agents for a variety of employee service use cases. These AI agents will work with and elevate humans to drive employee and customer success across the business. Powered by a company's Salesforce CRM data and Workday financial and HR data, the new AI employee service agents have a shared, trusted data foundation, allowing them to communicate with employees in natural language and with human-like comprehension. As a result, taking action on onboarding, health benefit changes, career development, and other tasks will be easier than ever. When complex cases arise, the AI employee service agent will seamlessly transfer them to the right individual for remediation, preserving the full history and context for a smooth hand-off.

This approach of humans and AI working together promises greater productivity, efficiency, and better experiences for employees, and it is only possible when data, AI models, and apps are deeply integrated.

"The AI opportunity for every company lies in augmenting their employees and delivering incredible customer experiences. That's why we're so excited about our new Agentforce platform, which enables humans and AI to drive customer success together, and this new partnership with Workday to jointly build an employee service agent. Together we'll help businesses create amazing experiences powered by generative and autonomous AI, so every employee can get answers, learn new skills, solve problems, and take action quickly and efficiently." Marc Benioff, Chair and CEO, Salesforce

Benefits to Employees

Employees can now receive instant support through natural language conversations with their AI employee service agent, whether they are working in Salesforce, Slack, or Workday. The assistant provides contextual help by understanding requests, accessing relevant information from integrated Workday-Salesforce data sources, and automating resolutions across platforms.

Sal Companieh, Chief Digital and Information Officer at Cushman & Wakefield, commented: "As a leading global commercial real estate services firm, we prioritize employee support and engagement, which directly impacts client service. The ability to streamline workflows across Workday and Salesforce and deliver more personalized, AI-powered employee experiences will be transformative for us."

Benefits to Employers

By integrating HR, financial, and operational data into advanced AI models, Salesforce and Workday enhance workforce capabilities beyond individual productivity, fostering overall workforce intelligence, optimization, and resilience:


Salesforce Data Migration

Salesforce Data Migration: A Key to CRM Success

The migration of data into Salesforce is critical for the efficient functioning of Salesforce CRM. When executed correctly, it reduces data duplication, consolidates customer and operational data into a unified platform, and extends CRM capabilities beyond basic functionalities. Proper data migration serves as the foundation for advanced business intelligence and in-depth analytics. Poorly managed migration, on the other hand, can transfer incorrect, duplicate, or corrupted data, compromising the system's reliability. An efficient migration process safeguards data integrity, ensures a seamless transfer to Salesforce, and enhances overall organizational performance.

What is Data Migration in Salesforce?

Salesforce data migration is the process of transferring data from external systems, databases, or platforms into Salesforce. It captures critical business information and integrates it securely into Salesforce's CRM framework. The process also involves cleansing, verifying, and transforming data into formats compatible with Salesforce's structure.

Why You Need Salesforce Data Migration

Data migration is indispensable for companies looking to modernize their operations and enhance performance. With Salesforce, organizations can:

Migrating Data from Legacy Systems to Salesforce

Migrating data from legacy systems to Salesforce is essential for scalability and efficient data management. Key advantages include:

Salesforce Data Migration Process

Data migration involves transferring data into Salesforce to improve customer engagement and operational workflows. The process ensures data accuracy and compatibility with Salesforce's architecture.
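As an illustration of the cleansing-and-deduplication step mentioned above, here is a minimal sketch in plain Python. It deliberately avoids any Salesforce API; the field names (`Email`, `LastName`) mirror standard Contact fields, but the logic is generic and would run before any actual load.

```python
def cleanse_contacts(records):
    """Normalize email addresses and deduplicate contacts by email,
    keeping the first occurrence. Records missing an email are set
    aside for manual review rather than loaded blindly."""
    seen = set()
    clean, needs_review = [], []
    for rec in records:
        email = (rec.get("Email") or "").strip().lower()
        if not email:
            needs_review.append(rec)   # no key to dedupe on
            continue
        if email in seen:
            continue                   # duplicate: skip
        seen.add(email)
        clean.append({**rec, "Email": email})
    return clean, needs_review

contacts = [
    {"LastName": "Ng", "Email": "ana@example.com"},
    {"LastName": "Ng", "Email": "ANA@example.com "},  # duplicate after normalizing
    {"LastName": "Ito", "Email": None},               # missing email
]
clean, review = cleanse_contacts(contacts)
print(len(clean), len(review))  # 1 1
```

A real migration would layer validation (picklist values, record types, lookups) on top of this, but catching duplicates before the load is what prevents the duplicated data the article warns about.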
Key Steps for Salesforce Data Migration

Types of Salesforce Data Migration

Top Salesforce Data Migration Tools

Data Archiving in Salesforce

Salesforce data archiving involves relocating unused or historical data to a separate storage area. This optimizes system performance while keeping the data accessible for compliance or analysis.

Advantages

Top Options for Data Archiving

Best Practices for Salesforce Data Migration

Conclusion

Salesforce data migration is a pivotal step in transforming organizational processes and achieving CRM excellence. When done right, it improves efficiency, eliminates data duplication, and ensures accurate information storage. By following best practices, leveraging appropriate tools, and engaging migration specialists, organizations can unlock Salesforce's full potential for scalability, automation, and advanced analytics. Successful migration paves the way for better decision-making and future growth.


Sensitive AI Knowledge Models

Based on the writings of David Campbell in Generative AI.

"Crime is the spice of life." This quote from an unnamed frontier-model engineer has been resonating for months, ever since a coworker mentioned it after a conference. It sparked an interesting thought: for an AI model to be truly useful, it needs comprehensive knowledge, including the potentially dangerous information we wouldn't want it to share with just anyone. For example, a student trying to understand the chemical reaction behind an explosion needs the AI to explain it accurately. While this sounds innocuous, it borders on the darker side of malicious LLM extraction: the student needs an explanation accurate enough to understand the chemical reaction without obtaining a chemical recipe to cause it.

AI red-teaming is a process with cybersecurity origins. The DEFCON conference, co-hosted by the White House, held the first Generative AI Red Team competition, where thousands of attendees tested eight large language models from an assortment of AI companies. In cybersecurity, red-teaming implies an adversarial relationship with a system or network: a red-teamer's goal is to break into, hack, or simulate damage to a system in a way that emulates a real attack. When entering the world of AI red-teaming, the initial approach often involves testing the limits of the LLM, such as trying to extract information on how to build a pipe bomb.

This is not purely out of curiosity; it serves as a test of the model's boundaries. The red-teamer has to know the correct way to make a pipe bomb, because knowing the correct details about sensitive topics is crucial for effective red-teaming: without that knowledge, it is impossible to judge whether the model's responses are accurate or mere hallucinations.

This realization highlights a significant challenge: it is not just about preventing the AI from sharing dangerous information, but ensuring that when it does share sensitive knowledge, it is not inadvertently spreading misinformation. Balancing the prevention of harm through restricted access to dangerous knowledge against the greater harm of inaccurate information falling into the wrong hands is a delicate act. AI models need to be knowledgeable enough to be helpful, but not so uninhibited that they become a how-to guide for malicious activities. The challenge is creating AI that can navigate this ethical minefield, handling sensitive information responsibly without becoming a source of dangerous knowledge.

The Ethical Tightrope of AI Knowledge

Creating dumbed-down AIs is not a viable solution, as it would render them ineffective. However, having AIs that share sensitive information freely is equally unacceptable. The solution lies in a nuanced approach to ethical training, where the AI understands the context and potential consequences of the information it shares.

Ethical Training: More Than Just a Checkbox

Ethics in AI cannot be reduced to a simple set of rules. It involves complex, nuanced understanding that even humans grapple with. Developing sophisticated ethical training regimens for AI models is essential. This training should go beyond a list of prohibited topics, aiming to instill a deep understanding of intention, consequences, and social responsibility.

Imagine an AI that recognizes sensitive queries and responds appropriately: not with a blanket refusal, but with a nuanced explanation that educates the user about potential dangers without revealing harmful details. This is the goal for AI ethics. But it isn't as if AI is going to extract parental permission for youths to access information, or run prompt-based queries, just because a request is sensitive.

The Red Team Paradox

Effective AI red-teaming requires knowledge of the very things the AI should not share. This creates a paradox similar to hiring ex-hackers for cybersecurity: effective, but not without risks. Tools like the WMDP Benchmark help measure and mitigate AI risks in critical areas, providing a structured approach to red-teaming. To navigate this, diverse expertise is necessary: red teams should include experts from various fields dealing with sensitive information, ensuring comprehensive coverage without any single person needing expertise in every dangerous area.

Controlled Testing Environments

Creating secure, isolated environments for testing sensitive scenarios is crucial. These virtual spaces allow safe experimentation with the AI's knowledge without real-world consequences.

Collaborative Verification

A system of cross-checking between multiple experts can enhance the security of red-teaming efforts, ensuring the accuracy of sensitive information without relying on a single individual's expertise.

The Future of AI Knowledge Management

As AI systems advance, managing sensitive knowledge will become increasingly challenging. However, this also presents an opportunity to shape AI ethics and knowledge management. Future AI systems should handle sensitive information responsibly and educate users about the ethical implications of their queries. Navigating the ethical landscape of AI knowledge requires a balance of technical expertise, ethical considerations, and common sense.

It is a challenge worth tackling to secure the benefits of AI while mitigating its risks. The next time an AI politely declines to share dangerous information, remember the intricate web of ethical training, red-team testing, and carefully managed knowledge behind that refusal. It ensures that AI is not only knowledgeable but also wise enough to handle sensitive information responsibly. Sensitive AI knowledge models need to handle sensitive data sensitively.


Collaboration and Engagement in Slack With iOS Widgets

Slack Introduces New iOS App Widgets to Enhance Engagement and Collaboration

Slack has launched four new widgets for its iOS app, aimed at boosting worker engagement and facilitating collaboration. The new additions include the "Catch Up" widget, two versions of the "Status" widget, and the "Slack Launcher" widget. These tools are designed to keep employees connected and productive, regardless of their location. While the first three widgets are available for the home screen, the "Slack Launcher" widget is specifically designed for iOS lock screens, allowing users to quickly access their workflows and projects.

In an announcement on X, Slack stated: "Three new Slack iOS widgets are here to make your workday a whole lot easier. Add the Catch Up widget, Status widget, and Slack Launcher widget to your device and stay in the know on the go."

These updates align with Slack's broader goal of evolving into a comprehensive collaboration and communication platform, offering some of the most advanced features on the market. To use the new features, Slack users should update their app to the latest version. After updating, they can add the new widgets by pressing and holding the Home or Lock Screen, then selecting and placing the widgets.

Detailed Overview of the Widgets

Recent Developments from Slack

In April, Salesforce announced the availability of Slack AI for all paying customers. Previously limited to customers on Slack Enterprise plans and available only in US and UK English, Slack AI now supports businesses of all sizes. The tool leverages conversational data to help users work more efficiently and intelligently. Updates to Slack AI include a morning digest summary, personalized search answers, advanced conversation summaries, and expanded language support. Additionally, Salesforce introduced Slack Lists, a project and task management tool integrated into the Slack platform.
This feature helps teams manage projects, inbound requests, and top priorities within Slack, streamlining workflows and reducing the need to switch between different apps.

Earlier this month, Slack announced plans to delete data that is more than a year old for free users. Previously, free users could access data up to 90 days old, with older data hidden but retrievable upon upgrading to a paid account. Now, Slack may delete messages and shared files older than a year, in line with its service agreement and compliance regulations.
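The one-year retention rule for free workspaces described above is easy to express precisely. A minimal sketch (dates are made up for illustration; Slack's actual deletion schedule and grace periods are not specified here):

```python
from datetime import datetime, timedelta, timezone

def past_retention(message_ts: datetime, now: datetime, days: int = 365) -> bool:
    """True if a message is older than the retention window and thus
    eligible for deletion under a policy like the one described above."""
    return now - message_ts > timedelta(days=days)

now = datetime(2024, 8, 1, tzinfo=timezone.utc)
old = datetime(2023, 6, 1, tzinfo=timezone.utc)      # ~14 months old
recent = datetime(2024, 6, 1, tzinfo=timezone.utc)   # ~2 months old
print(past_retention(old, now), past_retention(recent, now))  # True False
```

For teams exporting data before a deletion window closes, a check like this identifies which messages are at risk.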


What is OpenAI Strawberry?

OpenAI's Secret Project: "Strawberry"

Background and Goals

OpenAI, the company behind ChatGPT, is working on a new AI project codenamed "Strawberry," according to an insider and internal documents reviewed by Reuters. The project, whose details have not been previously reported, aims to showcase advanced reasoning capabilities in OpenAI's models. It seeks to enable AI not only to generate answers to queries but also to plan and navigate the internet autonomously to perform "deep research."

Project Overview

The Strawberry initiative represents an evolution of the previously known Q* project, which demonstrated potential in solving complex problems like advanced math and science questions. While the precise date of the internal document is unclear, it outlines plans for using Strawberry to enhance AI's reasoning and problem-solving abilities. The source describes the project as a work in progress, with no confirmed timeline for public release.

Technological Approach

Strawberry is described as a method of post-training AI models: refining their performance after the initial training on large datasets. This post-training phase involves techniques such as fine-tuning, where models are adjusted based on feedback and examples of correct and incorrect responses. The project is reportedly similar to Stanford's 2022 "Self-Taught Reasoner" (STaR) method, which uses iterative self-improvement to enhance a model's capabilities.

Potential and Challenges

If successful, Strawberry could revolutionize AI by improving its reasoning capabilities, allowing it to tackle complex tasks that require multi-step problem-solving and planning. This could lead to significant advancements in scientific research, software development, and various other fields. However, the project also raises concerns about ethical implications, control, accountability, and bias, necessitating careful consideration as AI becomes more autonomous.
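To make the iterative self-improvement idea behind STaR concrete, here is a heavily simplified toy sketch. This is not OpenAI's actual method or Stanford's implementation: the "model" is just a seeded random sampler over hypothetical candidate answers, and "fine-tuning" is reduced to reweighting. The shape of the loop is the point: sample attempts, keep only those whose final answer verifies, and reinforce the kept attempts.

```python
import random

def star_style_loop(problems, candidate_answers, rounds=3, seed=0):
    """Toy STaR-style loop: sample an answer per problem, keep samples
    that match the known-correct answer, and bias future sampling
    toward previously kept (correct) answers."""
    rng = random.Random(seed)
    # Uniform weights over candidate answers for each problem.
    weights = {p: {a: 1.0 for a in candidate_answers[p]} for p in problems}
    kept = []
    for _ in range(rounds):
        for p, correct in problems.items():
            answers = list(weights[p])
            ans = rng.choices(answers, [weights[p][a] for a in answers])[0]
            if ans == correct:          # verification step
                kept.append((p, ans))
                weights[p][ans] += 1.0  # "fine-tune": reinforce the correct sample
    return weights, kept

problems = {"2+2": "4", "3*3": "9"}
candidates = {"2+2": ["3", "4", "5"], "3*3": ["6", "9", "12"]}
weights, kept = star_style_loop(problems, candidates)
```

In the real method, the verifier is the training signal and the reweighting step is gradient-based fine-tuning on the kept rationales, but the keep-only-what-verifies structure carries over.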
Industry Context

OpenAI is not alone in this pursuit. Other major tech companies like Google, Meta, and Microsoft are also experimenting with improving AI reasoning. The broader goal across the industry is to develop AI that can reach human or super-human levels of intelligence, capable of making major scientific discoveries and planning complex tasks.

Conclusion

OpenAI's project Strawberry represents a significant step forward in AI research, pushing the boundaries of what AI can achieve. While the project is still in its early stages, its potential to enhance AI reasoning capabilities is significant. As OpenAI continues to develop and refine Strawberry, its impact on the future of artificial intelligence will be watched closely by researchers and industry leaders alike.


AI Trust and Optimism

Building Trust in AI: A Complex Yet Essential Task

The Importance of Trust in AI

Trust in artificial intelligence is ultimately what will make or break the technology. Amid the hype and excitement of the past 18 months, it is widely recognized that human beings need to have faith in this new wave of automation: trust that AI systems will not overstep boundaries or undermine personal freedoms. Building this trust is a complicated task, but it is thankfully receiving increasing attention from responsible thought leaders in the field.

The Challenge of Responsible AI Development

There is a growing concern that in the AI arms race, some individuals and companies prioritize making their technology as advanced as possible without considering long-term human-centric issues or present-day realities. This concern was highlighted when OpenAI CEO Sam Altman presented AI hallucinations as a feature, not a bug, at last year's Dreamforce, shortly after Salesforce CEO Marc Benioff emphasized the vital nature of trust.

Insights from Salesforce's Global Study

Salesforce recently released the results of a global study of 6,000 knowledge workers from various companies. The study reveals that while respondents trust AI to manage 43% of their work tasks, they still prefer human intervention in areas such as training, onboarding, and data handling. A notable finding is the difference in trust levels between leaders and rank-and-file workers: leaders trust AI to handle over half (51%) of their work, while other workers trust it with 40%. Furthermore, 63% of respondents believe human involvement is key to building their trust in AI, though a subset is already comfortable offloading certain tasks to autonomous AI. Specifically:

The study predicts that within three years, 41% of global workers will trust AI to operate autonomously, a significant increase from the 10% who feel comfortable with this today.

Ethical Considerations in AI

Paula Goldman, Salesforce's Chief Ethical and Humane Use Officer, is responsible for establishing guidelines and best practices for technology adoption. Her reading of the study is that workers are excited about a future with autonomous AI and are beginning to transition toward it, but trust gaps still need to be bridged. Goldman notes that workers are currently comfortable with AI handling tasks like writing code, uncovering data insights, and building communications. They are less comfortable delegating tasks such as inclusivity, onboarding, training employees, and data security.

Salesforce advocates a "human at the helm" approach to AI. Goldman explains that human oversight builds trust in AI, but the way this oversight is designed must evolve to keep pace with AI's rapid development. The traditional "human in the loop" model, in which humans review every AI-generated output, is no longer feasible even with today's sophisticated AI systems. Goldman emphasizes the need for more refined controls that let humans focus on high-risk, high-judgment decisions while delegating other tasks: controls that provide a macro view of AI performance and the ability to inspect it.

Education and Training

Goldman also highlights the importance of educating those steering AI systems. Trust and adoption of technology require that people are enabled to use it successfully, which includes comprehensive knowledge and training to make the most of AI capabilities.

Optimism Amidst Skepticism

Despite widespread fears about AI, Goldman finds a considerable amount of optimism and curiosity among workers. The study reflects a recognition of AI's transformative potential and its rapid improvement. However, it is essential to distinguish genuine optimism from hype-driven enthusiasm.

Salesforce's Stance on AI and Trust

Salesforce has taken a strong stance on trust in relation to AI, emphasizing that the technology is no silver bullet. The company acknowledges the balance between enthusiasm and pragmatism that many executives experience. While there is optimism about trusting autonomous AI within three years, that prediction will need to be substantiated with real-world evidence. Some organizations are already leading in generative AI adoption, while many others are still exploring its potential.

Conclusion

Overall, the study contributes significantly to the ongoing debate about AI's future. The concept of "human at the helm" is compelling and highlights the importance of ethical considerations in an AI-enabled future. Goldman's role in presenting the research underscores Salesforce's commitment to responsible AI development. For more insights, see her blog on the subject.


Technology Cancels Your Flight

What to Do If Technology Cancels Your Flight – the Recent Crowdstrike Microsoft Outage The recent Crowdstrike Microsoft outage caused widespread disruption beyond just computers, stranding thousands of air travelers. When Technology Cancels Your Flight, here’s what you can do next: The Impact of the Outage Air travelers posted pictures on social media of crowded airports in Europe and the United States due to the technology outage on Friday. In the U.S., major airlines like American, Delta, United, Spirit, and Allegiant had all their flights grounded for varying lengths of time. The outage affected crucial systems, including those for checking in passengers, calculating aircraft weight, and communicating with crews. Travelers began to panic. By early evening on the East Coast, nearly 2,800 U.S. flights had been canceled and almost 10,000 delayed, according to FlightAware. Worldwide, about 4,400 flights were canceled. Delta and its regional affiliates canceled 1,300 flights, United and United Express canceled more than 550 flights, and American Airlines canceled more than 450 flights. Airports became crowded zoos of passengers milling around waiting for answers. The outage, blamed on a software update from cybersecurity firm CrowdStrike, affected Microsoft’s computers used by many airlines. Despite CrowdStrike identifying and fixing the issue, the damage was done, leaving hundreds of thousands of travelers stranded. What to Do Next Contact Your Airline Check Other Airlines and Airports Weekend Flights Air Traffic Control Refunds and Reimbursements Transportation Secretary Pete Buttigieg emphasized the need for airlines to take care of passengers experiencing major delays. Airlines affected by the outage are offering rebooking, vouchers, refunds, and other assistance. The Transportation Department fined Southwest $35 million last year as part of a $140 million settlement for nearly 17,000 canceled flights in December 2022. 
The department maintains a "dashboard" showing what each airline promises to cover during travel disruptions. By taking proactive steps and utilizing available resources, travelers can navigate the challenges posed by this unexpected technology outage and find alternative solutions to reach their destinations.

UncannyAutomator Salesforce Integration

Integrating WordPress with Salesforce

With the Uncanny Automator Elite Integrations addon, connecting your WordPress site to Salesforce is a breeze.

Steps to Connect Uncanny Automator to Your Salesforce Account

1. Install the Elite Integrations Addon. First, ensure you have the Elite Integrations addon for Uncanny Automator installed on your WordPress site.

2. Connect Uncanny Automator to Salesforce. You will be prompted to log into Salesforce. After logging in, allow Uncanny Automator to manage your Salesforce data by clicking Allow. You will then return to the app connection screen on your WordPress site.

Using Salesforce Actions in Recipes

Once connected to Salesforce, you can use Uncanny Automator to create and update contacts and leads based on user actions on your WordPress site.

Final Steps

That's it! Your recipe will now automatically run whenever users complete the selected trigger(s), sending the desired updates directly to your Salesforce account.

Installing Uncanny Automator

Install the free version. The free version of Uncanny Automator is hosted in the WordPress.org repository, so installing it on your WordPress site couldn't be easier. Sign into your website as an administrator, and in /wp-admin/, navigate to Plugins > Add New. In the search field, enter "Uncanny Automator". In the Search Results, click the Install Now button for Automator. Once it finishes installing, click Activate. That's it! Uncanny Automator is installed and ready for use. Please note that you must have the free version installed first to use Uncanny Automator Pro.

The setup wizard. After activation, you will be redirected to the Uncanny Automator dashboard. From here, you can connect an account, watch tutorials, or read articles in the Knowledge Base.
Connecting a free account is an optional step that allows you to try out some of the app (non-WordPress) Automator integrations, like Slack, Google Sheets, and Facebook, but it is not required to use anything else in the free version.

Install Uncanny Automator Pro

Uncanny Automator Pro is a separate plugin from the free version; to use Pro features, you must have both Uncanny Automator AND Uncanny Automator Pro installed and active. If you don't yet have a copy of Automator Pro, you can purchase one from https://automatorplugin.com/pricing/. Once purchased, you can download the latest version of Uncanny Automator Pro inside your account at https://automatorplugin.com/my-account/downloads/.

To install the Pro version after downloading the zip file, navigate to Plugins > Add New in /wp-admin/. At the top of the page, click the Upload Plugin button. Click Choose File to select the Pro zip file, then click Install Now and Activate the plugin. Once activated, be sure to visit Automator > Settings in /wp-admin/ to enter your license key. This unlocks access to automatic updates and unlimited use of non-WordPress integrations in your recipes.

Uncanny Automator special triggers can be found here.

Einstein Code Generation and Amazon SageMaker

Salesforce and the Evolution of AI-Driven CRM Solutions

Salesforce, Inc., headquartered in San Francisco, California, is a leading American cloud-based software company specializing in customer relationship management (CRM) software and applications. Its offerings include sales, customer service, marketing automation, e-commerce, analytics, and application development. Salesforce is at the forefront of integrating artificial intelligence (AI) into its services, enhancing its flagship SaaS CRM platform with predictive and generative AI capabilities and advanced automation features.

Salesforce Einstein: Pioneering AI in Business Applications

Salesforce Einstein is a suite of AI technologies embedded within Salesforce's Customer Success Platform, designed to enhance productivity and client engagement. With over 60 features available across different pricing tiers, Einstein's capabilities are categorized into machine learning (ML), natural language processing (NLP), computer vision, and automatic speech recognition. These tools empower businesses to deliver personalized and predictive customer experiences across various functions, such as sales and customer service. Key components include out-of-the-box AI features like sales email generation in Sales Cloud and service replies in Service Cloud, along with tools like Copilot Builder, Prompt Builder, and Model Builder within Einstein 1 Studio for custom AI development.

The Salesforce Einstein AI Platform Team: Enhancing AI Capabilities

The Salesforce Einstein AI Platform team is responsible for the ongoing development and enhancement of Einstein's AI applications. They focus on advancing large language models (LLMs) to support a wide range of business applications, aiming to provide cutting-edge NLP capabilities.
By partnering with leading technology providers and leveraging open-source communities and cloud services like AWS, the team ensures Salesforce customers have access to the latest AI technologies.

Optimizing LLM Performance with Amazon SageMaker

In early 2023, the Einstein team sought a solution to host CodeGen, Salesforce's in-house open-source LLM for code understanding and generation. CodeGen enables translation from natural language to programming languages like Python and is particularly tuned for the Apex programming language, integral to Salesforce's CRM functionality. The team required a hosting solution that could handle a high volume of inference requests and multiple concurrent sessions while meeting strict throughput and latency requirements for their EinsteinGPT for Developers tool, which aids in code generation and review.

After evaluating various hosting solutions, the team selected Amazon SageMaker for its robust GPU access, scalability, flexibility, and performance optimization features. SageMaker's specialized deep learning containers (DLCs), including the Large Model Inference (LMI) containers, provided a comprehensive solution for efficient LLM hosting and deployment. Key features included advanced batching strategies, efficient request routing, and access to high-end GPUs, which significantly enhanced the model's performance.

Key Achievements and Learnings

The integration of SageMaker resulted in a dramatic improvement in the performance of the CodeGen model, boosting throughput by over 6,500% and reducing latency significantly. The use of SageMaker's tools and resources enabled the team to optimize their models, streamline deployment, and effectively manage resource use, setting a benchmark for future projects.

Conclusion and Future Directions

Salesforce's experience with SageMaker highlights the critical importance of leveraging advanced tools and strategies in AI model optimization.
The successful collaboration underscores the need for continuous innovation and adaptation in AI technologies, ensuring that Salesforce remains at the cutting edge of CRM solutions. For those interested in deploying their own LLMs on SageMaker, Salesforce's experience serves as a valuable case study, demonstrating the platform's capabilities in enhancing AI performance and scalability. To begin hosting your own LLMs on SageMaker, consider exploring the detailed guides and resources.
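The article does not include the serving code itself, but the client side of such a deployment can be sketched in a few lines. The snippet below is a hedged illustration, not Salesforce's implementation: the `inputs`/`parameters` request body and the `generated_text` response field follow the common schema used by SageMaker LMI text-generation containers, and the endpoint name is hypothetical.

```python
import json

def build_codegen_request(prompt: str, max_new_tokens: int = 256) -> str:
    """Build a JSON request body in the common LMI text-generation schema."""
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens, "temperature": 0.2},
    }
    return json.dumps(payload)

def invoke_endpoint(runtime, endpoint_name: str, prompt: str) -> str:
    """Send the request to a SageMaker real-time endpoint and return the text.

    `runtime` is a boto3 "sagemaker-runtime" client; the response schema is
    assumed here and may differ per container configuration.
    """
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_codegen_request(prompt),
    )
    return json.loads(response["Body"].read())["generated_text"]

# Usage (requires AWS credentials and a deployed endpoint):
# runtime = boto3.client("sagemaker-runtime")
# print(invoke_endpoint(runtime, "codegen-endpoint", "Write an Apex trigger that ..."))
```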

Best ChatGPT Competitor Tools

ChatGPT Alternatives – Best ChatGPT Competitor Tools

Discover the future of AI chat: explore the top ChatGPT alternatives for enhanced communication and productivity. To avoid playing favorites, tools are presented in alphabetical order. Have you ever found yourself wishing for a ChatGPT alternative that might better suit your specific content or AI-assistant needs? Whether you're a business owner, content creator, or student, the right AI chat tool can significantly influence how you interact with information and manage tasks. In this insight, we look into the top ChatGPT alternatives available in 2024. By the end, you'll have a clear idea of which options might be best for your particular use case and why. Each tool is broken down by Features, What We Like, What We Don't Like, and Pricing.

BONUS: Quillbot AI – great for paraphrasing small blocks of content.

In the rapidly evolving world of AI chat technology, these top ChatGPT alternatives of 2024 offer a diverse range of capabilities to suit various needs and preferences. Whether you're looking to streamline your workflow, enhance your learning, or simply engage in more dynamic conversations, there's a tool out there (or two, or ten) that can help boost your digital interactions. Each platform brings its unique strengths to the table, from specialized functionalities like summarizing texts or coding assistance to more general but highly efficient conversational capabilities. There is no reason to select only one.
As you consider integrating these tools into your daily routine, think about how their features align with your goals. Embrace the possibilities and let these advanced technologies open new doors to efficiency, creativity, and connectivity. Create a bookmark folder just for GPT tools; new ones pop up routinely. Happy chatting!

LLMs Turn CSVs into Knowledge Graphs

Neo4j Runway and Healthcare Knowledge Graphs

Recently, Neo4j Runway was introduced as a tool to simplify the migration of relational data into graph structures. According to its GitHub page, "Neo4j Runway is a Python library that simplifies the process of migrating your relational data into a graph. It provides tools that abstract communication with OpenAI to run discovery on your data and generate a data model, as well as tools to generate ingestion code and load your data into a Neo4j instance." In essence, by uploading a CSV file, the LLM identifies the nodes and relationships and automatically generates a Knowledge Graph.

Knowledge Graphs in healthcare are powerful tools for organizing and analyzing complex medical data. These graphs structure information to elucidate relationships between different entities, such as diseases, treatments, patients, and healthcare providers.

Applications of Knowledge Graphs in Healthcare

Integration of Diverse Data Sources: Knowledge graphs can integrate data from various sources such as electronic health records (EHRs), medical research papers, clinical trial results, genomic data, and patient histories.

Improving Clinical Decision Support: By linking symptoms, diagnoses, treatments, and outcomes, knowledge graphs can enhance clinical decision support systems (CDSS). They provide a comprehensive view of interconnected medical knowledge, potentially improving diagnostic accuracy and treatment effectiveness.

Personalized Medicine: Knowledge graphs enable the development of personalized treatment plans by correlating patient-specific data with broader medical knowledge. This includes understanding relationships between genetic information, disease mechanisms, and therapeutic responses, leading to more tailored healthcare interventions.
Drug Discovery and Development: In pharmaceutical research, knowledge graphs can accelerate drug discovery by identifying potential drug targets and understanding the biological pathways involved in diseases.

Public Health and Epidemiology: Knowledge graphs are useful in public health for tracking disease outbreaks, understanding epidemiological trends, and planning interventions. They integrate data from various public health databases, social media, and other sources to provide real-time insights into public health threats.

Neo4j Runway Library

Neo4j Runway is an open-source library created by Alex Gilmore; its GitHub repository and a blog post describe its features and capabilities. Currently, the library supports the OpenAI LLM for parsing CSVs. It eliminates the need to write Cypher queries manually, as the LLM handles all CSV-to-Knowledge-Graph conversions. Additionally, Langchain's GraphCypherQAChain can be used to generate Cypher queries from prompts, allowing the graph to be queried without writing a single line of Cypher code.

Practical Implementation in Healthcare

To test Neo4j Runway in a healthcare context, a simple dataset from Kaggle (Disease Symptoms and Patient Profile Dataset) was used. This dataset includes columns such as Disease, Fever, Cough, Fatigue, Difficulty Breathing, Age, Gender, Blood Pressure, Cholesterol Level, and Outcome Variable. The goal was to provide a medical report to the LLM to get diagnostic hypotheses.
Libraries and Environment Setup

```shell
# Install system dependencies and the library
sudo apt install python3-pydot graphviz
pip install neo4j-runway
```

```python
# Import necessary libraries
import os

import numpy as np
import pandas as pd
from dotenv import load_dotenv
from neo4j_runway import Discovery, GraphDataModeler, IngestionGenerator, LLM, PyIngest
from IPython.display import display, Markdown, Image
```

Load Environment Variables

```python
load_dotenv()
# os.getenv takes the *name* of an environment variable (set in .env), not its value
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
NEO4J_URL = os.getenv('NEO4J_URL')
NEO4J_PASSWORD = os.getenv('NEO4J_PASSWORD')
```

Load and Prepare Medical Data

```python
disease_df = pd.read_csv('/home/user/Disease_symptom.csv')
disease_df.columns = disease_df.columns.str.strip()
# Cast every column to string so the ingestion step treats values uniformly
for col in disease_df.columns:
    disease_df[col] = disease_df[col].astype(str)
disease_df.to_csv('/home/user/disease_prepared.csv', index=False)
```

Data Description for the LLM

```python
DATA_DESCRIPTION = {
    'Disease': 'The name of the disease or medical condition.',
    'Fever': 'Indicates whether the patient has a fever (Yes/No).',
    'Cough': 'Indicates whether the patient has a cough (Yes/No).',
    'Fatigue': 'Indicates whether the patient experiences fatigue (Yes/No).',
    'Difficulty Breathing': 'Indicates whether the patient has difficulty breathing (Yes/No).',
    'Age': 'The age of the patient in years.',
    'Gender': 'The gender of the patient (Male/Female).',
    'Blood Pressure': 'The blood pressure level of the patient (Normal/High).',
    'Cholesterol Level': 'The cholesterol level of the patient (Normal/High).',
    'Outcome Variable': 'The outcome variable indicating the result of the diagnosis or assessment for the specific disease (Positive/Negative).'
}
```

Data Analysis and Model Creation

```python
llm = LLM()  # neo4j-runway's OpenAI wrapper; not instantiated in the original snippet

disc = Discovery(llm=llm, user_input=DATA_DESCRIPTION, data=disease_df)
disc.run()

# Instantiate and create the initial graph data model
gdm = GraphDataModeler(llm=llm, discovery=disc)
gdm.create_initial_model()
gdm.current_model.visualize()
```

Adjust Relationships

```python
gdm.iterate_model(user_corrections='''
Let's think step by step. Please make the following updates to the data model:
1. Remove the relationships between Patient and Disease, between Patient and Symptom and between Patient and Outcome.
2. Change the Patient node into Demographics.
3. Create a relationship HAS_DEMOGRAPHICS from Disease to Demographics.
4. Create a relationship HAS_SYMPTOM from Disease to Symptom. If the Symptom value is No, remove this relationship.
5. Create a relationship HAS_LAB from Disease to HealthIndicator.
6. Create a relationship HAS_OUTCOME from Disease to Outcome.
''')

# Visualize the updated model
gdm.current_model.visualize().render('output', format='png')
img = Image('output.png', width=1200)
display(img)
```

Generate Cypher Code and YAML File

```python
# Instantiate the ingestion generator
gen = IngestionGenerator(
    data_model=gdm.current_model,
    username="neo4j",
    password='yourneo4jpasswordhere',
    uri='neo4j+s://123654888.databases.neo4j.io',
    database="neo4j",
    csv_dir="/home/user/",
    csv_name="disease_prepared.csv",
)

# Create the ingestion YAML
pyingest_yaml = gen.generate_pyingest_yaml_string()
gen.generate_pyingest_yaml_file(file_name="disease_prepared")

# Load the data into the Neo4j instance
PyIngest(yaml_string=pyingest_yaml, dataframe=disease_df)
```

Querying the Graph Database

```cypher
MATCH (n)
WHERE n:Demographics OR n:Disease OR n:Symptom OR n:Outcome OR n:HealthIndicator
OPTIONAL MATCH (n)-[r]->(m)
RETURN n, r, m
```

Visualizing Specific Nodes and Relationships

```cypher
MATCH (n:Disease {name: 'Diabetes'})
OPTIONAL MATCH (n)-[r]->(m)
RETURN n, r, m
```

```cypher
MATCH (d:Disease)
MATCH (d)-[r:HAS_LAB]->(l)
MATCH (d)-[r2:HAS_OUTCOME]->(o)
WHERE l.bloodPressure = 'High' AND o.result = 'Positive'
RETURN d, properties(d) AS disease_properties,
       r, properties(r) AS relationship_properties,
       l, properties(l) AS lab_properties
```

Automated Cypher Query Generation with Gemini-1.5-Flash
To automatically generate a Cypher query via Langchain (GraphCypherQAChain) and retrieve possible diseases based on a patient's symptoms and health indicators, the following setup was used:

Connect to the Knowledge Graph and Retrieve the Schema

```python
import warnings
import json
import textwrap
from langchain_community.graphs import Neo4jGraph

with warnings.catch_warnings():
    warnings.simplefilter('ignore')

NEO4J_USERNAME = "neo4j"
NEO4J_DATABASE = 'neo4j'
NEO4J_URI = 'neo4j+s://1236547.databases.neo4j.io'
NEO4J_PASSWORD = 'yourneo4jdatabasepasswordhere'

# Get the Knowledge Graph from the instance and the schema
kg = Neo4jGraph(
    url=NEO4J_URI,
    username=NEO4J_USERNAME,
    password=NEO4J_PASSWORD,
    database=NEO4J_DATABASE,
)
kg.refresh_schema()
print(textwrap.fill(kg.schema, 60))
schema = kg.schema
```

Initialize Vertex AI

```python
import vertexai
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import GraphCypherQAChain
from langchain.llms import VertexAI

vertexai.init(project="your-project", location="us-west4")
llm = VertexAI(model="gemini-1.5-flash")
```

Create the Prompt Template

```python
prompt_template = """ Let's think step by
```

Salesforce Research Produces INDICT

Automating and assisting in coding holds tremendous promise for speeding up and enhancing software development. Yet ensuring that these advancements yield secure and effective code presents a significant challenge. Balancing functionality with safety is crucial, especially given the potential risks associated with malicious exploitation of generated code.

In practical applications, Large Language Models (LLMs) often struggle with ambiguous or adversarial instructions, sometimes leading to unintended security vulnerabilities or facilitating harmful attacks. This isn't merely theoretical; empirical studies, such as those on GitHub's Copilot, have revealed that a substantial portion of generated programs (about 40%) contained vulnerabilities. Addressing these risks is vital for unlocking the full potential of LLMs in coding while safeguarding against potential threats.

Current strategies to mitigate these risks include fine-tuning LLMs with safety-focused datasets and implementing rule-based detectors to identify insecure code patterns. However, fine-tuning alone may not suffice against sophisticated attack prompts, and creating high-quality safety-related data can be resource-intensive. Meanwhile, rule-based systems may not cover all vulnerability scenarios, leaving gaps that could be exploited.

To address these challenges, researchers at Salesforce Research have introduced the INDICT framework. INDICT employs a novel approach involving dual critics, one focused on safety and the other on helpfulness, to enhance the quality of LLM-generated code. This framework facilitates internal dialogues between the critics, leveraging external knowledge sources like code snippets and web searches to provide informed critiques and iterative feedback. INDICT operates through two key stages: preemptive and post-hoc feedback.
In the preemptive stage, the safety critic assesses potential risks during code generation, while the helpfulness critic ensures alignment with task requirements. External knowledge sources enrich their evaluations. In the post-hoc stage, after code execution, both critics review outcomes to refine future outputs, ensuring continuous improvement.

Evaluation of INDICT across eight diverse tasks and programming languages demonstrated substantial enhancements in both safety and helpfulness metrics. The framework achieved a remarkable 10% absolute improvement in code quality overall. For instance, in CyberSecEval-1 benchmarks, INDICT enhanced code safety by up to 30%, with over 90% of outputs deemed secure. Additionally, the helpfulness metric showed significant gains, surpassing state-of-the-art baselines by up to 70%.

INDICT's success lies in its ability to provide detailed, context-aware critiques that guide LLMs towards generating more secure and functional code. By integrating safety and helpfulness feedback, the framework sets new standards for responsible AI in coding, addressing critical concerns about functionality and security in automated software development.
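The dual-critic refinement loop described above can be sketched in a few lines. This is a heavily simplified illustration, not Salesforce's actual INDICT implementation: the generator and both critics are placeholder callables, the external knowledge sources are omitted, and only the preemptive-style feedback loop is modeled.

```python
from typing import Callable

def indict_style_refine(
    generate: Callable[[str], str],
    safety_critic: Callable[[str], str],
    helpfulness_critic: Callable[[str], str],
    task: str,
    rounds: int = 2,
) -> str:
    """Refine generated code through rounds of dual-critic feedback
    (a simplified sketch; real INDICT also uses tool-augmented critics
    and a post-hoc stage after code execution)."""
    code = generate(task)
    for _ in range(rounds):
        # Collect critiques from both critics and fold them into the next prompt
        feedback = safety_critic(code) + " " + helpfulness_critic(code)
        code = generate(f"{task}\nRevise the code given this feedback: {feedback}")
    return code
```

In practice, `generate` and the critics would be LLM calls; the point of the structure is that safety and helpfulness critiques are produced independently and merged before each revision.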

Forecasting With Foundation Models

On Hugging Face, there are 20 models tagged as "time series" at the time of writing. While this number is low compared to the 125,950 results for the "text-generation-inference" tag, time series forecasting with foundation models has attracted significant interest from major companies such as Amazon, IBM, and Salesforce, which have developed their own models: Chronos, TinyTimeMixer, and Moirai, respectively.

Currently, one of the most popular time series models on Hugging Face is Lag-Llama, a univariate probabilistic model developed by Kashif Rasul, Arjun Ashok, and their co-authors. Open-sourced in February 2024, Lag-Llama is claimed by its authors to possess strong zero-shot generalization capabilities across various datasets and domains; once fine-tuned, they assert, it becomes the best general-purpose model of its kind. In this insight, we showcase the experience of fine-tuning Lag-Llama and test its capabilities against a more classical machine learning approach, specifically an XGBoost model designed for univariate time series data. Gradient boosting algorithms like XGBoost are widely regarded as the pinnacle of classical machine learning (as opposed to deep learning) and perform exceptionally well with tabular data. It is therefore fitting to benchmark Lag-Llama against XGBoost to determine whether the foundation model lives up to its promises. The results, however, are not straightforward.

The data used for this exercise is a four-year series of hourly wave heights off the coast of Ribadesella, a town in the Spanish region of Asturias. The data, available from the Spanish ports authority data portal, spans June 18, 2020, to June 18, 2024. For the purposes of this study, the series is aggregated to a daily level by taking the maximum wave height recorded each day. This aggregation helps illustrate the concepts more clearly, as results become volatile at higher granularity.
The target variable is the maximum height of the waves recorded each day, measured in meters. Several reasons influenced the choice of this series. First, the Lag-Llama model was trained on some weather-related data, making this type of data slightly challenging yet manageable for the model. Second, while meteorological forecasts are typically produced using numerical weather models, statistical models can complement these forecasts, especially for long-range predictions. In the era of climate change, statistical models can provide a baseline expectation and highlight deviations from typical patterns.

The dataset is standard and requires minimal preprocessing, such as imputing a few missing values. After splitting the data into training, validation, and test sets, with the latter two covering five months each, the next step involves benchmarking Lag-Llama against XGBoost on two univariate forecasting tasks: point forecasting and probabilistic forecasting. Point forecasting gives a specific prediction, while probabilistic forecasting provides a confidence interval. While Lag-Llama was primarily trained for probabilistic forecasting, point forecasts are useful for illustrative purposes.

Forecasts involve several considerations, such as the forecast horizon, the last observations fed into the model, and how often the model is updated. This study uses a recursive multi-step forecast without updating the model, with a step size of seven days. This means the model produces batches of seven forecasts at a time, using the latest predictions to generate the next set without retraining. Point forecasting performance is measured using Mean Absolute Error (MAE), while probabilistic forecasting is evaluated based on empirical coverage or coverage probability of 80%.

The XGBoost model is defined using Skforecast, a library that facilitates the development and testing of forecasters.
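The recursive multi-step scheme described above can be sketched as follows. This is a minimal, model-agnostic illustration (the function names and lag-window mechanics are ours, not taken from the study's code): each call to the model yields a batch of forecasts, which are appended to the series and fed back as lags for the next batch, with no retraining.

```python
import numpy as np

def recursive_forecast(model_predict, history, horizon: int, n_lags: int):
    """Recursive multi-step forecasting: the model emits a batch of forecasts
    (seven days at a time in the study's setup), which are appended to the
    series and used as inputs for the next batch, without retraining."""
    series = list(history)
    preds = []
    while len(preds) < horizon:
        lags = np.asarray(series[-n_lags:])            # most recent observations + predictions
        batch = list(model_predict(lags))[: horizon - len(preds)]
        preds.extend(batch)
        series.extend(batch)                           # feed predictions back as inputs
    return preds
```

Any fitted regressor can be plugged in as `model_predict`; libraries such as Skforecast implement the same loop internally.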
The ForecasterAutoreg object is created with an XGBoost regressor, and the optimal number of lags is determined through Bayesian optimization. The resulting model uses 21 lags of the target variable and various hyperparameters optimized through the search. The performance of the XGBoost forecaster is assessed through backtesting, which evaluates the model on a test set. The model's MAE is 0.64, indicating that predictions are, on average, 64 cm off from the actual measurements. This performance is better than a simple rule-based forecast, which has an MAE of 0.84. For probabilistic forecasting, Skforecast calculates prediction intervals using bootstrapped residuals. The intervals cover 84.67% of the test set values, slightly above the target of 80%, with an interval area of 348.28.

Next, the zero-shot performance of Lag-Llama is examined. Using context lengths of 32, 64, and 128 tokens, the model's MAE ranges from 0.75 to 0.77, higher than the XGBoost forecaster's MAE. Probabilistic forecasting with Lag-Llama shows varying coverage and interval areas, with the 128-token model achieving an 84.67% coverage and an area of 399.25, similar to XGBoost's performance.

Fine-tuning Lag-Llama involves adjusting context length and learning rate. Despite various configurations, the fine-tuned model does not significantly outperform the zero-shot model in terms of MAE or coverage.

In conclusion, Lag-Llama's performance, without training, is comparable to an optimized traditional forecaster like XGBoost. Fine-tuning does not yield substantial improvements, suggesting that more training data might be necessary. When choosing between Lag-Llama and XGBoost, factors such as ease of use, deployment, maintenance, and inference costs should be considered, with XGBoost likely having an edge in these areas. The code used in this study is publicly available on a GitHub repository for further exploration.
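For readers reproducing the evaluation, the two metrics used throughout (MAE and empirical coverage) can be computed in a few lines; this is a minimal sketch, and Skforecast also provides its own backtesting utilities.

```python
import numpy as np

def mae(y_true, y_pred) -> float:
    """Mean Absolute Error: average absolute deviation, in the series' units (meters here)."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def empirical_coverage(y_true, lower, upper) -> float:
    """Fraction of actual values falling inside the prediction interval.

    For a well-calibrated 80% interval, this should be close to 0.80.
    """
    y = np.asarray(y_true)
    inside = (y >= np.asarray(lower)) & (y <= np.asarray(upper))
    return float(inside.mean())
```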
