
Salesforce Flow Tests

Salesforce Flow Tests: What Are the Limitations?

Salesforce Flow Tests are essential for ensuring automation reliability, but they aren’t without their constraints. Recognizing these limitations is key to refining your automation strategy and avoiding potential roadblocks. Here’s an overview of common challenges, along with insights into how you can navigate them to maximize the effectiveness of your testing processes.

The Role of Flow Tests in Automation

Automated processes in Salesforce are powerful, but they don’t optimize themselves. Proper setup and rigorous testing are essential to ensure that your automations run smoothly. While Salesforce Flow Tests help verify functionality, they have inherent limitations that, if misunderstood, could lead to inefficiencies or rework. By understanding these boundaries, you can make informed decisions to strengthen your overall approach to testing and automation.

Key Limitations of Salesforce Flow Tests

Final Thoughts

Mastering Salesforce Flow Tests means leveraging their strengths while acknowledging their constraints. Optimized automations require careful planning, robust testing, and a clear understanding of the tools’ boundaries. Have questions about improving your Salesforce Flows or testing strategy? Let’s chat and explore ways to fine-tune your automations!


AI Data Privacy and Security

Three Key Generative AI Data Privacy and Security Concerns

The rise of generative AI is reshaping the digital landscape, introducing powerful tools like ChatGPT and Microsoft Copilot into the hands of professionals, students, and casual users alike. From creating AI-generated art to summarizing complex texts, generative AI (GenAI) is transforming workflows and sparking innovation. However, for information security and privacy professionals, this rapid proliferation also brings significant challenges in data governance and protection. Below are three critical data privacy and security concerns tied to generative AI.

1. Who Owns the Data?

Data ownership is a contentious issue in the age of generative AI. In the European Union, the General Data Protection Regulation (GDPR) asserts that individuals own their personal data. In contrast, data ownership laws in the United States are less clear-cut, with recent state-level regulations echoing GDPR’s principles but failing to resolve ambiguity. Generative AI often ingests vast amounts of data, much of which may not belong to the person uploading it. This creates legal risks for both users and AI model providers, especially when third-party data is involved. Cases surrounding intellectual property, such as controversies involving Slack, Reddit, and LinkedIn, highlight public resistance to having personal data used for AI training. As lawsuits in this arena emerge, prior intellectual property rulings could shape the legal landscape for generative AI.

2. What Data Can Be Derived from LLM Output?

Generative AI models are designed to be helpful, but they can inadvertently expose sensitive or proprietary information submitted during training. This risk has made many wary of uploading critical data into AI models. Techniques like tokenization, anonymization, and pseudonymization can reduce these risks by obscuring sensitive data before it is fed into AI systems (a minimal pseudonymization sketch appears near the end of this insight). However, these practices may compromise the model’s performance by limiting the quality and specificity of the training data. Advocates for GenAI stress that high-quality, accurate data is essential to achieving the best results, which adds to the complexity of balancing privacy with performance.

3. Can the Output Be Trusted?

The phenomenon of “hallucinations” — when generative AI produces incorrect or fabricated information — poses another significant concern. Whether these errors stem from poor training, flawed data, or malicious intent, they raise questions about the reliability of GenAI outputs. The impact of hallucinations varies depending on the context. While some errors may cause minor inconveniences, others could have serious or even dangerous consequences, particularly in sensitive domains like healthcare or legal advisory. As generative AI continues to evolve, ensuring the accuracy and integrity of its outputs will remain a top priority.

The Generative AI Data Governance Imperative

Generative AI’s transformative power lies in its ability to leverage vast amounts of information. For information security, data privacy, and governance professionals, this means grappling with key questions like the three outlined above. With high stakes and no way to reverse intellectual property violations, the need for robust data governance frameworks is urgent. As society navigates this transformative era, balancing innovation with responsibility will determine whether generative AI becomes a tool for progress or a source of new challenges.
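As noted under concern 2 above, pseudonymization can obscure sensitive values before text ever reaches an AI system. Below is a minimal Python sketch of that idea using regular expressions; the patterns, placeholder tokens, and sample text are illustrative assumptions, and a production approach would need far more thorough detection.

```python
# Minimal pseudonymization sketch: replace obvious identifiers with placeholder
# tokens before sending text to an LLM. Patterns here are illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace matched values with tokens; return the text and a token map kept locally."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(set(pattern.findall(text))):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

sample = "Contact Jane at jane.doe@example.com or 555-123-4567 about case 12."
clean_text, token_map = pseudonymize(sample)
print(clean_text)   # identifiers replaced with tokens, safe(r) to send to a model
print(token_map)    # retained locally so responses can be re-identified if needed
```

The trade-off described above still applies: the more aggressively values are masked, the less specific the model's output can be.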
While generative AI heralds a bold future, history reminds us that groundbreaking advancements often come with growing pains. It is the responsibility of stakeholders to anticipate and address these challenges to ensure a safer and more equitable AI-powered world.


Winter 25 Release Experience Cloud

Salesforce Winter ’25 Release: 6 Key Enhancements to Experience Cloud

The Salesforce Winter ’25 Release brings a fresh suite of updates to Experience Cloud, focusing on design customization, SEO optimization, and streamlined navigation for enhanced user engagement. We’ve summarized six major updates and additional enhancements that make this release a significant step forward for Experience Cloud sites.

1. Enhanced Design Options for LWR Sites

Winter ’25 empowers site designers with more granular control over the look and feel of LWR (Lightning Web Runtime) sites. New customization options in the Experience Builder Theme panel now allow for specific styling of individual components like columns and buttons, offering a new level of precision in visual design. Additional features include a Scoped Header and Footer layout that allows fixed positioning for headers and footers, enhancing user experience with persistent navigation elements. Site admins can define unique color palettes for buttons across various states (default, hover, focus), and apply color schemes to individual columns, which can now be set in the Theme panel. Further text customizations for headings have also been added, allowing a personalized touch for every element on LWR sites.

2. SEO-Friendly URLs for Accounts and Contacts (Generally Available)

To drive organic traffic, the Winter ’25 Release introduces SEO-friendly URL slugs for Account and Contact pages, replacing traditional record IDs with easily readable URLs. This enhancement allows search engines to better index content, making it easier for users to find your pages. Site managers can configure SEO-friendly URLs directly in the Administration panel and import slugs in bulk for faster setup.

3. Data Providers for LWR Sites (Beta)

Experience Cloud now includes an option to configure data providers on LWR site pages, enabling seamless integration with data from various sources, including Apex and Record providers. Admins can specify data sources within Experience Builder, allowing for real-time data updates across components and pages, providing a more dynamic and responsive experience for users.

4. Revamped Navigation and New Components

The Navigation Menu component has been revamped, allowing admins to design a more intuitive navigation experience for both desktop and mobile users. The beta Site Header component further enhances branding with logo placement and customizable headers, while the Grid component now ensures consistent cell height, improving the visual balance of page layouts. Tailored navigation menus for desktop and mobile screens can be customized for color, spacing, text styles, and more to provide an optimized experience across devices.

5. Expanded Data Cloud Integration for Event Tracking

Winter ’25 expands Data Cloud integration to capture checkout, order, and cart events on enhanced LWR sites. Ecommerce-focused organizations can now record user interactions—like checkout initiation and address input—automatically, giving businesses richer insights into customer behavior. Data captured through these events can be viewed within Data Cloud, allowing admins to understand user engagement and optimize site design accordingly.

6. Salesforce File Linking for LWR Sites (Beta)

The new File Upload Lightning Web Component enables file uploads directly from an LWR site to Salesforce, an option previously available only on Aura sites.
This update streamlines the file transfer process, allowing guest users to upload files securely; the files are then accessible within Salesforce.

Additional Experience Cloud Enhancements

In addition to the primary updates, Winter ’25 introduces several valuable, albeit smaller, features.

Availability of Features

Some Winter ’25 features will be accessible immediately after release, while others require setup by admins. Consider notifying users about these updates to ensure a smooth transition and to leverage the full potential of new functionalities.


More AI Tools to Use

Additionally, Arc’s collaboration with Perplexity elevates browsing by transforming search experiences. Perplexity functions as a personal AI research assistant, fetching and summarizing information along with sources, visuals, and follow-up questions. Premium users even have access to advanced large language models like GPT-4 and Claude. Together, Arc and Perplexity revolutionize how users navigate the web.


Salesforce Flow Tests

Deploying Salesforce Flow tests is not just about hitting “go” and hoping for the best. It requires more than simply moving automations from a Sandbox environment to production. Successful deployment demands thoughtful planning and attention to detail. In this post, we’ll dive deeper into deploying Flow tests effectively, covering key factors like independent testing and ensuring environment consistency. Building on our ongoing series, we’ll provide practical insights to help you achieve smooth deployments and reliable test execution.

Key Considerations for Deploying Flow Tests

Steps to Deploy Flow Tests Using Change Sets

Final Thoughts

Deploying Flow tests effectively is critical for maintaining the integrity of your automations across environments. Skipping the testing phase is like driving with a blindfold—one mistake could disrupt your workflows and cause chaos in critical processes. By following these guidelines, particularly focusing on independent testing and post-deployment checks, you can help ensure your Salesforce Flows continue to operate smoothly. Stay tuned for future insights for Flownatics, where we’ll dive into more advanced aspects of Flow tests, helping you further optimize your Salesforce automation processes. Need more advice on testing your automations in Salesforce? Let’s chat!


Document Checklist in Salesforce Screen Flow

One effective way to build a document checklist in a Salesforce screen flow is by using the Document Matrix element in Discovery Framework–based OmniScripts. This approach allows you to streamline the assessment process and ensure that the advisor uploads the correct documents.


AI-Ready Text Data

Large language models (LLMs) are powerful tools for processing text data from various sources. Common tasks include editing, summarizing, translating, and extracting text. However, one of the key challenges in utilizing LLMs effectively is ensuring that your data is AI-ready. This insight explains what it means to have AI-ready text data and presents a few no-code solutions to help you achieve it.

What Does AI-Ready Mean?

We are surrounded by vast amounts of unstructured text data—web pages, PDFs, emails, organizational documents, and more. These unstructured documents hold valuable information, but they can be difficult to process using LLMs without proper preparation. Many users simply copy and paste text into a prompt, but this method is not always effective and introduces several challenges. To be AI-ready, your data should be formatted in a way that LLMs can easily interpret, such as plain text or Markdown. This ensures efficient and accurate text processing.

Plain Text vs. Markdown

Plain text (.txt) is the most basic file type, containing only raw characters without any stylization. Markdown files (.md) are a type of plain text but include special characters to format the text, such as asterisks for italics or bolding. LLMs are adept at processing Markdown because it provides both content and structure, enhancing the model’s ability to understand and organize information. Markdown’s simple syntax for headers, lists, and links allows LLMs to extract additional meaning from the document’s structure, leading to more accurate interpretations. Markdown is also widely supported across various platforms (e.g., Slack, Discord, GitHub, Google Docs), making it a versatile option for preparing AI-ready text.

Recommended Tools for Managing AI-Ready Data

Here are some essential tools to help you manage Markdown and integrate it into your LLM workflows.

Obsidian: Save and Store Plain Text

Obsidian is a great tool for saving and organizing Markdown files. It’s a free text editor that supports plain-text workflows, making it an excellent choice for storing content extracted from PDFs or web pages.

Jina AI Reader: Convert Web Pages to Markdown

Jina AI Reader is an easy-to-use tool for converting web pages into Markdown. Simply add https://r.jina.ai/ before a webpage URL, and it will return the content in Markdown format (see the sketch at the end of this insight). This streamlines the process of extracting relevant text without the clutter of formatting.

LlamaParse: Extract Plain Text from Documents

Highly formatted documents like PDFs can present unique challenges when working with LLMs. LlamaParse, part of LlamaIndex’s suite, helps strip away formatting to focus on the content. By using LlamaParse, you can extract plain text or Markdown from documents and ensure only the relevant sections are processed.

Our Thoughts

Preparing text data for AI involves strategies to convert, store, and process content efficiently. While this may seem daunting at first, using the right tools will streamline your workflow and allow you to maximize the power of LLMs for your specific tasks. Tectonic is ready to assist. Contact us today.
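Because Jina AI Reader works by simply prefixing a URL with https://r.jina.ai/, the conversion can also be scripted. Below is a minimal Python sketch of that pattern; the requests library, the example URL, and the output filename are assumptions for illustration rather than part of the original workflow.

```python
# Minimal sketch: fetch a Markdown version of a web page via Jina AI Reader.
# Assumes the `requests` package is installed and the target page is public.
import requests

target_url = "https://example.com/some-article"   # hypothetical page to convert
reader_url = f"https://r.jina.ai/{target_url}"    # prefixing asks Jina Reader for Markdown

response = requests.get(reader_url, timeout=30)
response.raise_for_status()

markdown_text = response.text  # Markdown content, ready to paste into an LLM prompt

# Save it as a .md file so it can be stored in a tool like Obsidian.
with open("article.md", "w", encoding="utf-8") as f:
    f.write(markdown_text)
```

The same output can, of course, be produced without any code by pasting the prefixed URL into a browser, which keeps this a genuinely no-code workflow.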


Viewing All Hard and Soft Credits for Household Accounts

Subject: Inquiry About Viewing All Hard and Soft Credits for Household Accounts

Question: What’s the best way to view a complete list of all hard and soft credits associated with a household account? I understand that the NPSP customizable rollups allow us to aggregate soft credit totals based on the Opportunity Contact Roles for each contact in a household, which is useful but can sometimes be inaccurate due to data entry errors in assigning the correct roles. Additionally, filtered opportunity-related lists on the contact page can show the different soft and hard credits assigned to individual contacts. While helpful, this can be confusing for users who prefer to see all information at the household account level and may overlook the contact details.

What I’m looking for is a comprehensive list at the account level that includes all opportunities linked to every contact in a household, regardless of the Opportunity Contact Role type. Essentially, I need to see every soft and hard credit received by all household members. I came across a post by Megan Moorehead from over three years ago titled “Soft Credit Opportunities Related List on Household.” She replied to her own post a couple of months later (on September 10, 2021), but I only partially understand her solution. Before I delve deeper into her suggestions, I wanted to check whether there are any simpler or more recent out-of-the-box options added to NPSP. I’m flexible regarding how to obtain this full list of all hard and soft credits related to household members—it could be through a related list or a report. I believe this is a common need among many organizations, so any suggestions on how you’re generating this type of list would be greatly appreciated.

Response: You’re correct that this isn’t available out of the box. The challenge arises because opportunities aren’t always directly tagged to the household. You might consider using a screen flow to gather opportunities from household members and display them on the account page in a data table (a rough query sketch follows below). Alternatively, you could create a custom field on opportunities (populated by a record-triggered flow) called “Household,” which would link opportunities back to the household based on the contact’s association at the time of creation. This would allow for a separate related list on the account page.

One thing to keep in mind: since soft credits are only assigned via Contact Roles, if the issue is that Contact Roles are missing, then Megan’s Mass Action Scheduler solution—or any other solution—won’t help. Those opportunities will not appear in the household list, except for those where someone in the household received hard credit.
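To make the screen-flow suggestion concrete, here is a hedged sketch of the kind of query that could sit behind it: pull every opportunity tied to any contact in the household via Opportunity Contact Roles, regardless of role type. The sketch uses the simple-salesforce Python package purely for illustration; in practice the same SOQL could back a flow's Get Records element or a custom report type, and the credentials and household Id are placeholders.

```python
# Illustrative sketch only: list opportunities credited to any household member.
# Assumes the simple-salesforce package and placeholder credentials.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="user@example.com",   # placeholder credentials
    password="password",
    security_token="token",
)

household_id = "001XXXXXXXXXXXXXXX"  # hypothetical household Account Id

# All Opportunity Contact Roles for contacts belonging to this household account.
soql = f"""
    SELECT Opportunity.Name, Opportunity.Amount, Opportunity.CloseDate,
           Contact.Name, Role, IsPrimary
    FROM OpportunityContactRole
    WHERE Contact.AccountId = '{household_id}'
"""

for record in sf.query_all(soql)["records"]:
    # Treating the primary contact role as the hard credit is a simplification.
    credit_type = "hard" if record["IsPrimary"] else "soft"
    print(record["Opportunity"]["Name"], record["Contact"]["Name"],
          record["Role"], credit_type)
```

As the response notes, any approach built on Contact Roles will still miss opportunities where no Contact Role was ever assigned.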


Open AI Update

OpenAI has established itself as a leading force in the generative AI space, with its ChatGPT being one of the most widely recognized AI tools. Powered by the GPT series of large language models (LLMs), as of September 2024 ChatGPT primarily uses GPT-4o and GPT-3.5. This insight provides an OpenAI update.

In August and September 2024, rumors circulated about a new model from OpenAI, codenamed “Strawberry.” Initially, it was unclear whether this model would be a successor to GPT-4o or something entirely different. On September 12, 2024, the mystery was resolved with the official launch of OpenAI’s o1 models, including o1-preview and o1-mini.

What is OpenAI o1?

OpenAI o1 is a new family of LLMs optimized for advanced reasoning tasks. Unlike earlier models, o1 is designed to improve problem-solving by reasoning through queries rather than just generating quick responses. This deeper processing aims to produce more accurate answers to complex questions, particularly in fields like STEM (science, technology, engineering, and mathematics). The o1 models, currently available in preview form, are intended to provide a new type of LLM experience beyond what GPT-4o offers. Like all OpenAI LLMs, the o1 series is built on transformer architecture and can be used for tasks such as content summarization, new content generation, question answering, and writing code.

Key Features of OpenAI o1

The standout feature of the o1 models is their ability to engage in multistep reasoning. By adopting a “chain-of-thought” approach, o1 models break down complex problems and reason through them iteratively. This makes them particularly adept at handling intricate queries that require a more thoughtful response. The initial September 2024 launch included two models: o1-preview and o1-mini.

Use Cases for OpenAI o1

The o1 models can perform many of the same functions as GPT-4o, such as answering questions, summarizing content, and generating text. However, they are particularly suited for tasks that benefit from enhanced reasoning.

Availability and Access

The o1-preview and o1-mini models are available to users of ChatGPT Plus and Team as of September 12, 2024. OpenAI plans to extend access to ChatGPT Enterprise and Education users starting September 19, 2024. While free ChatGPT users do not have access to these models at launch, OpenAI intends to introduce o1-mini to free users in the future. Developers can also access the models through OpenAI’s API, and third-party platforms such as Microsoft Azure AI Studio and GitHub Models offer integration (see the API sketch after the comparison table below).

Limitations of OpenAI o1

As preview models, o1 comes with certain limitations; for example, they lack some GPT-4o features such as web browsing.

Enhancing Safety with OpenAI o1

To ensure safety, OpenAI released a System Card that outlines how the o1 models were evaluated for risks like cybersecurity threats, persuasion, and model autonomy. The o1 models improve safety through several measures, including better resistance to jailbreaking.

GPT-4o vs. OpenAI o1

Here’s a quick comparison between GPT-4o and OpenAI’s new o1 models:

Feature | GPT-4o | o1 Models
Release Date | May 13, 2024 | Sept. 12, 2024
Model Variants | Single model | Two variants: o1-preview and o1-mini
Reasoning Capabilities | Good | Enhanced, especially for STEM fields
Mathematics Olympiad Score | 13% | 83%
Context Window | 128K tokens | 128K tokens
Speed | Faster | Slower due to in-depth reasoning
Cost (per million tokens) | Input: $5; Output: $15 | o1-preview: $15 input, $60 output; o1-mini: $3 input, $12 output
Safety and Alignment | Standard | Enhanced safety, better jailbreak resistance

OpenAI’s o1 models bring a new level of reasoning and accuracy, making them a promising advancement in generative AI.
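Since the post notes that developers can reach the o1 models through OpenAI's API, here is a minimal Python sketch of such a call. It assumes the official openai package and an API key in the environment; the prompt is illustrative, and at launch the o1 preview models did not support every chat parameter that GPT-4o does, so treat this as a sketch rather than a definitive integration.

```python
# Minimal sketch: calling an o1 model through the OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for the faster, cheaper variant
    messages=[
        {
            "role": "user",
            "content": "Reason step by step: how many integers between 1 and 100 "
                       "are divisible by 3 but not by 5?",
        }
    ],
)

print(response.choices[0].message.content)
```

Because the model spends extra tokens reasoning before it answers, responses are slower and costlier than GPT-4o for simple prompts, which matches the speed and cost rows in the table above.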


ChatGPT Open AI o1

OpenAI has firmly established itself as a leader in the generative AI space, with its ChatGPT being one of the most well-known applications of AI today. Powered by the GPT family of large language models (LLMs), ChatGPT’s primary models, as of September 2024, are GPT-4o and GPT-3.5. In August and September 2024, rumors surfaced about a new model from OpenAI, codenamed “Strawberry.” Speculation grew as to whether this was a successor to GPT-4o or something else entirely. The mystery was resolved on September 12, 2024, when OpenAI launched its new o1 models, including o1-preview and o1-mini.

What Is OpenAI o1?

The OpenAI o1 family is a series of large language models optimized for enhanced reasoning capabilities. Unlike GPT-4o, the o1 models are designed to offer a different type of user experience, focusing more on multistep reasoning and complex problem-solving. As with all OpenAI models, o1 is a transformer-based architecture that excels in tasks such as content summarization, content generation, coding, and answering questions. What sets o1 apart is its improved reasoning ability. Instead of prioritizing speed, the o1 models spend more time “thinking” about the best approach to solve a problem, making them better suited for complex queries. The o1 models use chain-of-thought prompting, reasoning step by step through a problem, and employ reinforcement learning techniques to enhance performance.

Initial Launch

On September 12, 2024, OpenAI introduced two versions of the o1 models: o1-preview and o1-mini.

Key Capabilities of OpenAI o1

OpenAI o1 can handle a variety of tasks, but it is particularly well suited for certain use cases due to its advanced reasoning functionality.

How to Use OpenAI o1

There are several ways to access the o1 models, including ChatGPT and the OpenAI API.

Limitations of OpenAI o1

As an early iteration, the o1 models have several limitations.

How OpenAI o1 Enhances Safety

OpenAI released a System Card alongside the o1 models, detailing the safety and risk assessments conducted during their development. This includes evaluations in areas like cybersecurity, persuasion, and model autonomy. The o1 models incorporate several key safety features.

GPT-4o vs. OpenAI o1: A Comparison

Here’s a side-by-side comparison of GPT-4o and OpenAI o1:

Feature | GPT-4o | o1 Models
Release Date | May 13, 2024 | Sept. 12, 2024
Model Variants | Single model | Two: o1-preview and o1-mini
Reasoning Capabilities | Good | Enhanced, especially in STEM fields
Performance Benchmarks | 13% on Math Olympiad | 83% on Math Olympiad, PhD-level accuracy in STEM
Multimodal Capabilities | Text, images, audio, video | Primarily text, with developing image capabilities
Context Window | 128K tokens | 128K tokens
Speed | Fast | Slower due to more reasoning processes
Cost (per million tokens) | Input: $5; Output: $15 | o1-preview: $15 input, $60 output; o1-mini: $3 input, $12 output
Availability | Widely available | Limited to specific users
Features | Includes web browsing, file uploads | Lacks some GPT-4o features, such as web browsing
Safety and Alignment | Focus on safety | Improved safety, better resistance to jailbreaking

ChatGPT OpenAI o1

OpenAI o1 marks a significant advancement in reasoning capabilities, setting a new standard for complex problem-solving with LLMs. With enhanced safety features and the ability to tackle intricate tasks, o1 models offer a distinct upgrade over their predecessors. A worked cost comparison based on the table’s pricing appears below.
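To make the pricing row concrete, here is a small Python sketch that estimates the cost of a single request at the per-million-token rates listed in the table above; the token counts are arbitrary illustrative values, not benchmarks from the original post.

```python
# Rough cost estimate from the per-million-token prices in the comparison table.
# Token counts below are arbitrary illustration values.
PRICES = {  # (input $, output $) per one million tokens
    "gpt-4o": (5.00, 15.00),
    "o1-preview": (15.00, 60.00),
    "o1-mini": (3.00, 12.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Example: a 2,000-token prompt with a 1,000-token answer.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 1_000):.4f}")
```

The o1 models also consume additional hidden reasoning tokens, so real-world costs run higher than this simple arithmetic suggests.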


Connect Salesforce and Power BI

Question: Hello, I’m trying to connect a filtered case list (https://company.lightning.force.com/lightning/o/Case/list?filterName=blahblah) containing customer reviews in the case description into a Power BI table, and connect it to my AI Hub custom prompt bot that categorises text. Ideally, when new cases get added to that filtered list, the Power BI table automatically refreshes with the case ID, subject, description, and an additional column where the categorised text gets added in. For example:

Case ID | Case Subject | Case description | Category
332432 | AAAA | blah blah | customer complaint
4243242 | BBBB | something | product quality
424234 | CCCC | bleh | customer praise

Thanks!

Response: You might find it helpful to follow these steps (a rough sketch of the underlying data flow follows the list):

1. Connect the Salesforce filtered case list to Power BI.
2. Use Power Apps AI Builder to categorise the case descriptions.
3. Configure Power BI to refresh automatically so it picks up the latest classification results.
4. Display the classified data in Power BI.
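The steps above stay entirely inside Power BI and Power Apps AI Builder, so no code is required. Purely to make the data flow concrete, here is a hedged Python sketch of the same pattern: pull case records from Salesforce, categorise each description with a language model, and build a table like the one above. The simple-salesforce and openai packages, the SOQL filter, the model name, and the category labels are all illustrative assumptions, not part of the recommended Power BI route.

```python
# Illustrative sketch only: query Salesforce cases, categorise descriptions with
# an LLM, and produce rows like the example table. Library choices are assumptions.
from simple_salesforce import Salesforce
from openai import OpenAI

sf = Salesforce(username="user@example.com", password="password",
                security_token="token")   # placeholder credentials
llm = OpenAI()                             # reads OPENAI_API_KEY from the environment

# Approximate the filtered list view with a SOQL filter (adjust to match your view).
cases = sf.query_all(
    "SELECT Id, CaseNumber, Subject, Description FROM Case WHERE Status = 'New'"
)["records"]

rows = []
for case in cases:
    prompt = (
        "Categorise this customer review as one of: customer complaint, "
        f"product quality, customer praise.\n\n{case['Description']}"
    )
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    rows.append((case["CaseNumber"], case["Subject"], case["Description"],
                 reply.choices[0].message.content.strip()))

for row in rows:
    print(row)
```

In the Power BI route, AI Builder performs the classification step and a scheduled dataset refresh replaces the query loop.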


AI-Driven Chatbots in Education

As AI-driven chatbots enter college courses, the potential to offer students 24/7 support is game-changing. However, there’s a critical caveat: when we customize chatbots by uploading documents, we don’t just add knowledge — we introduce biases. The documents we choose influence chatbot responses, subtly shaping how students interact with course material and, ultimately, how they think. So, how can we ensure that AI chatbots promote critical thinking rather than merely serving to reinforce our own viewpoints?

How Course Chatbots Differ from Administrative Chatbots

Chatbot teaching assistants have been around for some time in education, but low-cost access to large language models (LLMs) and accessible tools now make it easy for instructors to create customized course chatbots. Unlike chatbots used in administrative settings that rely on a defined “ground truth” (e.g., policy), educational chatbots often cover nuanced and debated topics. While instructors typically bring specific theories or perspectives to the table, a chatbot trained with tailored content can either reinforce a single view or introduce a range of academic perspectives. With tools like ChatGPT, Claude, Gemini, or Copilot, instructors can upload specific documents to fine-tune chatbot responses. This customization allows a chatbot to provide nuanced responses, often aligned with course-specific materials. But, unlike administrative chatbots that reference well-defined facts, course chatbots demand ethical responsibility because of the subjective nature of academic content.

Curating Content for Classroom Chatbots

Having a 24/7 teaching assistant can be a powerful resource, and today’s tools make it easy to upload course documents and adapt LLMs to specific curricula. Options like OpenAI’s GPT Assistant, IBL’s AI Mentor, and Druid’s Conversational AI allow instructors to shape the knowledge base of course-specific chatbots. However, curating documents goes beyond technical ease — the content chosen affects not only what students learn but also how they think. The documents you select will significantly shape, though not dictate, chatbot responses. Combined with the LLM’s base model, the chatbot instructions, and the conversation context, the curated content influences chatbot output — for better or worse — depending on your instructional goals.

Curating for Critical Thinking vs. Reinforcing Bias

A key educational principle is teaching students “how to think, not what to think.” However, some educators may, even inadvertently, lean toward dictating specific viewpoints when curating content. It’s critical to recognize the potential for biases that could influence students’ engagement with the material and to be mindful of them when curating chatbot content. No list of such biases is exhaustive, which highlights the complexities of curating content for educational chatbots. It’s also important to recognize that adding data shifts — not erases — the inherent biases in the LLM’s responses. Few academic disciplines offer a single, undisputed “truth.”

Tips for Ethical and Thoughtful Chatbot Curation

A few practical habits can help you create an ethically balanced course chatbot (a minimal prompt sketch appears at the end of this insight). This approach helps prevent a chatbot from merely reflecting a single perspective, instead guiding students toward a broader understanding of the material.

Ethical Obligations

As educators, our ethical obligations extend to ensuring transparency about curated materials and explaining our selection choices.
If some documents represent what you consider “ground truth” (e.g., on climate change), it’s still crucial to include alternative views and equip students to evaluate the chatbot’s outputs critically.

Equity

Customizing chatbots for educational use is powerful but requires deliberate consideration of potential biases. By curating diverse perspectives, being transparent in choices, and refining chatbot content, instructors can foster critical thinking and more meaningful student engagement.

AI-Driven Chatbots in Education

AI-powered chatbots are interactive tools that can help educational institutions streamline communication and improve the learning experience, and they can be used for a variety of purposes across teaching and administration. While AI chatbots can be a strategic move for educational institutions, it’s important to balance innovation with the privacy and security of student data.
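The curation advice above is about content choices rather than code, but the "range of perspectives" idea can also be encoded directly in a chatbot's instructions. Below is a minimal, hypothetical Python sketch using the OpenAI chat API; the system prompt wording, model name, and sample question are illustrative assumptions, not a recommendation from the original post.

```python
# Hypothetical sketch: a course chatbot instructed to surface multiple perspectives
# rather than a single viewpoint. Prompt wording and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a teaching assistant for an undergraduate course. "
    "When a topic is debated in the field, present at least two scholarly "
    "perspectives, cite the course readings you draw on, and end with a "
    "question that prompts the student to evaluate the evidence themselves."
)

def ask_course_bot(question: str) -> str:
    """Send a student question to the course chatbot and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_course_bot("Is remote learning as effective as in-person instruction?"))
```

Pairing instructions like these with a transparently documented set of uploaded readings keeps the chatbot aligned with the ethical obligations described above.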
