Testing Archives - gettectonic.com - Page 7

Mamba-2

Introducing Mamba-2: A New Era in State Space Model Architecture

Researchers Tri Dao and Albert Gu have unveiled Mamba-2, the next iteration of their widely popular Mamba-1 model on GitHub. This new model promises significant improvements and innovations in the realm of state space models, particularly for information-dense data like language models.

What is Mamba-2?

M2 is a state space model architecture designed to outperform older models, including the widely used transformers. It shows remarkable promise in handling data-intensive tasks with greater efficiency and speed.

Key Features of Mamba-2

Core Innovation: Structured State Space Duality (SSD)
Performance Improvements
Architectural Changes

Performance Metrics

In rigorous testing, M2 demonstrated superior scaling and faster training times compared to M1. Pretrained models, with sizes ranging from 130 million to 2.8 billion parameters, have been trained on extensive datasets like Pile and SlimPajama. Performance remains consistent across various tasks, with only minor variations due to evaluation noise.

Specifications

Getting Started with Mamba-2

To start using M2, install it via the command pip install mamba-ssm and integrate it with PyTorch (a minimal usage sketch appears at the end of this post). Pretrained models are available on Hugging Face, facilitating easy deployment for various tasks.

Conclusion

Mamba-2 marks a significant advancement in state space model architecture, offering enhanced performance and efficiency over its predecessor and other models like transformers. Whether you're engaged in language modeling or other data-intensive projects, M2 provides a powerful and efficient solution.
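Below is a hedged sketch of dropping a Mamba-2 block into a PyTorch model, as referenced in the Getting Started section above. The constructor arguments (d_model, d_state, d_conv, expand) follow the mamba-ssm project's published examples, but exact names and defaults can vary between releases, so verify them against the version you install; the shapes and device placement are illustrative.

    # Hedged sketch: a Mamba-2 block as a drop-in sequence layer in PyTorch.
    # Requires a CUDA-capable GPU; install with: pip install mamba-ssm
    import torch
    from mamba_ssm import Mamba2  # module name per the project's examples

    batch, seq_len, dim = 2, 64, 256
    x = torch.randn(batch, seq_len, dim, device="cuda")

    block = Mamba2(
        d_model=dim,  # model (embedding) dimension
        d_state=64,   # SSM state expansion factor
        d_conv=4,     # local convolution width
        expand=2,     # block expansion factor
    ).to("cuda")

    y = block(x)  # same shape in and out: (batch, seq_len, dim)
    assert y.shape == x.shape

Pretrained checkpoints published on Hugging Face are typically loaded through the package's language-model wrapper rather than a bare block; consult the model cards for exact identifiers and loading code.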

Read More

AI Tools for Automation

Revolutionizing QA Testing: Top 10 AI Tools for Automation

Artificial intelligence (AI) is transforming many aspects of our daily lives and professional environments, and its impact on Quality Assurance (QA) testing is particularly groundbreaking. While AI applications in areas like photo-to-anime converters gain attention, its role in automating QA testing processes is truly revolutionary. In this article, we'll explore the top 10 AI tools that are changing the game in QA automation.

Why Use AI in QA Testing?

AI is a game-changer in QA testing, streamlining processes that were once manual and time-consuming. It enhances efficiency by optimizing test scenarios, predicting defects, and automating test creation and execution. Although manual testing remains important, AI tools are becoming crucial for achieving more accurate and efficient QA processes.

Top 10 AI Testing Tools

Here's a curated list of the top 10 AI tools for test automation.

Choosing the Right AI Tool

To select the best AI testing tool for your needs, follow these steps:

Conclusion

AI is transforming QA testing by reducing preparation time, improving accuracy, and enhancing software quality. By leveraging the AI tools outlined above, you can optimize your testing processes and achieve superior results. For expert QA assistance and a detailed product testing estimate, contact our professional team.

Read More

RAG Chunking Method

Enhancing Retrieval-Augmented Generation (RAG) Systems with Topic-Based Document Segmentation

Dividing large documents into smaller, meaningful parts is crucial for the performance of Retrieval-Augmented Generation (RAG) systems. These systems benefit from frameworks that offer multiple document-splitting options. This Tectonic insight introduces an innovative approach that identifies topic changes using sentence embeddings, improving the subdivision process to create coherent topic-based sections.

RAG Systems: An Overview

A Retrieval-Augmented Generation (RAG) system combines retrieval-based and generation-based models to enhance output quality and relevance. It first retrieves relevant information from a large dataset based on an input query, then uses a transformer-based language model to generate a coherent and contextually appropriate response. This hybrid approach is particularly effective in complex or knowledge-intensive tasks.

Standard Document Splitting Options

Before diving into the new approach, let's explore some standard document splitting methods using the LangChain framework, known for its robust support of various natural language processing (NLP) tasks. LangChain assists developers in applying large language models across NLP tasks, including document splitting. Here are key splitting methods available:

Introducing a New Approach: Topic-Based Segmentation

Segmenting large-scale documents into coherent topic-based sections poses significant challenges. Traditional methods often fail to detect subtle topic shifts accurately. This innovative approach, presented at the International Conference on Artificial Intelligence, Computer, Data Sciences, and Applications (ACDSA 2024), addresses this issue using sentence embeddings.

The Core Challenge

Large documents often contain multiple topics. Conventional segmentation techniques struggle to identify precise topic transitions, leading to fragmented or overlapping sections. This method leverages Sentence-BERT (SBERT) to generate embeddings for individual sentences, which reflect changes in the vector space as topics shift.

Approach Breakdown

1. Using sentence embeddings
2. Calculating gap scores
3. Smoothing
4. Boundary detection
5. Clustering segments
Algorithm Pseudocode

Gap score calculation (example pseudocode):

    # Example pseudocode for gap score calculation: compare a window of n
    # sentence embeddings before each position with the window after it.
    # In practice the window embeddings are aggregated (for example averaged)
    # before the cosine similarity is taken, so each position yields one score.
    def calculate_gap_scores(sentences, n):
        embeddings = [sbert.encode(sentence) for sentence in sentences]
        gap_scores = []
        for i in range(len(sentences) - n):
            before = embeddings[i:i + n]
            after = embeddings[i + n:i + 2 * n]
            score = cosine_similarity(before, after)
            gap_scores.append(score)
        return gap_scores

Gap score smoothing (example pseudocode):

    # Example pseudocode for smoothing gap scores over a window of k
    # neighbors on each side.
    def smooth_gap_scores(gap_scores, k):
        smoothed_scores = []
        for i in range(len(gap_scores)):
            start = max(0, i - k)
            end = min(len(gap_scores), i + k + 1)
            smoothed_score = sum(gap_scores[start:end]) / (end - start)
            smoothed_scores.append(smoothed_score)
        return smoothed_scores

Boundary detection (example pseudocode):

    # Example pseudocode for boundary detection: flag positions whose
    # smoothed score falls more than c standard deviations below the mean.
    def detect_boundaries(smoothed_scores, c):
        boundaries = []
        mean_score = sum(smoothed_scores) / len(smoothed_scores)
        std_dev = (sum((x - mean_score) ** 2 for x in smoothed_scores) / len(smoothed_scores)) ** 0.5
        for i, score in enumerate(smoothed_scores):
            if score < mean_score - c * std_dev:
                boundaries.append(i)
        return boundaries

An end-to-end usage sketch tying these steps together appears after the conclusion below.

Future Directions

Potential areas for further research include:

Conclusion

This method combines traditional principles with advanced sentence embeddings, leveraging SBERT and sophisticated smoothing and clustering techniques. This approach offers a robust and efficient solution for accurate topic modeling in large documents, enhancing the performance of RAG systems by providing coherent and contextually relevant text sections.
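To tie the pseudocode above together, here is a hedged end-to-end sketch of the chunking pipeline. It assumes the sentence-transformers and scikit-learn packages; the SBERT model name (all-MiniLM-L6-v2), the window size n, the smoothing width k, and the threshold c are illustrative choices rather than values prescribed by the paper, and the final clustering step is simplified to grouping sentences between detected boundaries.

    # Hedged end-to-end sketch of topic-based chunking with sentence embeddings.
    # Assumes: pip install sentence-transformers scikit-learn numpy
    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.metrics.pairwise import cosine_similarity

    def segment_by_topic(sentences, n=3, k=2, c=1.0, model_name="all-MiniLM-L6-v2"):
        model = SentenceTransformer(model_name)
        embeddings = model.encode(sentences)

        # Gap scores: similarity between the mean embedding of the n sentences
        # before each position and the n sentences after it.
        gap_scores = []
        for i in range(n, len(sentences) - n):
            before = embeddings[i - n:i].mean(axis=0, keepdims=True)
            after = embeddings[i:i + n].mean(axis=0, keepdims=True)
            gap_scores.append(float(cosine_similarity(before, after)[0][0]))

        # Smooth each score over a window of k neighbors on each side.
        smoothed = [
            float(np.mean(gap_scores[max(0, i - k):i + k + 1]))
            for i in range(len(gap_scores))
        ]

        # Boundaries: positions whose smoothed similarity drops more than
        # c standard deviations below the mean, i.e. likely topic shifts.
        mean, std = float(np.mean(smoothed)), float(np.std(smoothed))
        boundaries = [i + n for i, s in enumerate(smoothed) if s < mean - c * std]

        # Group sentences into coherent segments at the detected boundaries.
        segments, start = [], 0
        for b in boundaries:
            segments.append(" ".join(sentences[start:b]))
            start = b
        segments.append(" ".join(sentences[start:]))
        return segments

Calling segment_by_topic on a document's sentence list returns topic-coherent chunks that can then be embedded and indexed for retrieval in a RAG pipeline.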

Read More

Salesforce Unlimited+ Edition Explained

Salesforce Unlimited Plus (UE+) is designed as an advanced offering that incorporates several specialized features tailored for different industries, making it particularly suitable for larger organizations and enterprises that require robust, integrated solutions for complex business processes and customer relationship management.

Target Audience

UE+ is targeted toward large enterprises that need extensive CRM functionalities combined with AI and data analytics capabilities. This solution is ideal for organizations that:

• Manage complex customer relationships across multiple channels.
• Require deep integration of data and processes across departments.
• Are looking to leverage advanced AI capabilities for predictive insights and automation.
• Need industry-specific solutions that can be customized for unique business requirements.

The integration of various Salesforce clouds (e.g., Sales Cloud, Service Cloud, Data Cloud) with enhanced features like AI and specific industry capabilities makes UE+ a comprehensive solution for organizations aiming to streamline their operations and gain a competitive edge through advanced technology adoption.

Here are the five Salesforce editions for every purpose:

· Starter/Essentials: Ideal for small businesses, offering basic contact, lead, and opportunity management.
· Professional: Tailored for mid-sized companies with enhanced sales forecasting and automation capabilities.
· Enterprise: Geared toward larger organizations, providing advanced customization, reporting, and integration options.
· Unlimited: Offers comprehensive functionality, customizability, 24/7 support, and access to premium features like generative AI.
· Unlimited Plus: The most robust solution for businesses of all sizes, featuring additional functionalities and enhanced capabilities.

Key Considerations:

· Business size: Consider the number of users and overall business scale when choosing an edition.
· Features needed: Identify the specific features crucial for your sales, service, or marketing processes.
· Scalability: Choose an edition that accommodates your projected growth and future needs.
· Budget: Evaluate the cost of each edition against its offered features and value proposition.

Sales Cloud Unlimited Edition+ Features:

Account and Contact Management: Complete visibility into customer profiles, including activity history and communications.
Opportunity Management: Tracking and details of every sales deal at each stage.
Pipeline Inspection: A comprehensive tool that allows sales managers to monitor pipeline changes, offering AI-driven insights and recommendations to optimize sales strategies.
Einstein AI Capabilities: Includes tools like Einstein Conversation Insights, which transcribes and analyzes sales calls, highlighting key parts for review and deeper analysis.
Customizable Reports and Dashboards: Enhanced capabilities for building real-time reports and visualizations to track sales metrics and forecasts.
Advanced Integration Features: Integration with external data and systems through various APIs, including REST and SOAP.
Automation and Customization: Extensive options for workflow automation and personalization of user interfaces and customer interactions using Flow Builder and Lightning App Builder.
Developer Tools: Access to tools like Developer Sandbox for safe testing and app development environments.
Service Cloud Unlimited+ Features:

Einstein Bots: AI-powered chatbots that handle customer inquiries automatically, available 24/7 across various communication channels.
Enhanced Messaging: Integration with popular messaging platforms like WhatsApp, SMS, and Apple Messages to facilitate seamless customer interactions.
Feedback Management: Tools to gather and analyze customer feedback directly within the CRM.
Self-Service Capabilities: Customizable help centers and service catalogs that allow customers to find information and resolve issues independently.
Field Service Tools: Comprehensive management of field operations, including work order and asset management.
Real-Time Analytics: Advanced reporting features for creating in-depth analytics to monitor and improve customer service processes.

Additional features include Data Cloud, Generative AI, Service Cloud Voice, Digital Engagement, Feedback Management, Self-Service, and Slack.

Salesforce Unlimited+ for Industries

UE+ for Industries includes Unlimited+ for Sales and Service together with industry-specific data models and capabilities to help customers drive faster time to value within their sectors:

• Financial Services Cloud UE+ for Sales and Financial Services Cloud UE+ for Service help banks, asset management firms, and insurance agencies connect all of their customer data on one platform and embed AI to deliver personalized financial engagement at scale.
• An insurance carrier can use Financial Services Cloud UE+ to connect engagement data like emails, webinars, and educational content with third-party conference attendance, social media follows, and business performance data to understand what is motivating agents, helping drive more personalized relationships and grow revenue with Data Cloud and Einstein AI.
• Health Cloud UE+ for Service helps healthcare, pharmaceutical, and other medical organizations improve response times at their contact centers and offer digital healthcare services with built-in intelligence, real-time collaboration, and a 360-degree view of every patient, provider, and partner.
• A hospital can use the bundle to quickly create a personalized, AI-powered support center to triage and speed up time to care with self-service tools like scheduling, connecting patients and members with care teams on their preferred channels.
• Manufacturing Cloud UE+ for Sales brings together tools for manufacturing organizations to build their data foundation, embed AI capabilities across the sales cycle, and maximize productivity, empowering them to scale their commercial operations and grow revenues.
• A manufacturer can now look across the entire book of business to see how companies are performing against negotiated sales agreements and then use AI-generated summaries to determine where to prioritize their time and resources.

By Tectonic's Architecture Team

Read More

Did Google Dethrone ChatGPT?

Google's Bard has emerged as a contender in the realm of large language models (LLMs), sparking speculation about its potential to outshine OpenAI's ChatGPT. This insight explores the validity of that claim and examines the tests and factors that could determine the ultimate victor in this ongoing AI rivalry. Google's Gemini 1.5 Pro represents a generational leap in multimodal large language models (MLLMs), much like GPT-4 was for LLMs back in March 2023. While initial rumors of Bard's "dethronement" of ChatGPT surfaced from a single LinkedIn post in February 2024, substantial evidence is required to substantiate such claims. Let's examine the potential battleground.

The Testing Grounds

There's no singular, universally recognized benchmark for evaluating LLMs. Here are some areas where Google and OpenAI may showcase their AI prowess:

Generative text quality: Can the LLM generate various creative text formats, such as poems, code, scripts, and emails, while maintaining coherence and factual accuracy?
Question answering: How effectively can the LLM respond to open-ended, challenging, or unconventional questions, drawing on its knowledge base?
Following instructions: Can the LLM adhere to complex instructions and perform tasks requiring multi-step reasoning?
Bias mitigation: Does the LLM demonstrate impartiality in its responses, or does it exhibit traces of prejudice or social stereotypes?

Beyond the Tests

While test results offer insights into LLM capabilities, other factors influence their overall impact:

Accessibility: How easily can the LLM be accessed by the public? Is there a user-friendly interface or developer API?
Real-world applications: How seamlessly can the LLM be integrated into practical applications like chatbots, virtual assistants, or educational tools?
Continuous learning: How adeptly does the LLM adapt and enhance its performance over time, incorporating new data and user feedback?

The Current Landscape

Declaring a definitive winner is challenging; Bard and ChatGPT excel in different domains. Here's a speculative analysis:

Generative text quality: Bard may have a slight advantage, leveraging Google's extensive dataset.
Question answering: ChatGPT might excel at responding to open-ended queries with creativity, while Bard may prioritize factual accuracy.
Following instructions and bias mitigation: Both LLMs are continually refining their capabilities in these areas.

The Future of LLMs

The landscape of LLMs is dynamic, with Google and OpenAI poised to make significant advancements. Anticipated developments include:

Focus on explainability: Efforts to understand the reasoning behind LLM responses to foster transparency and trust.
Bias mitigation: Strategies to address bias in LLMs for fairer and more inclusive interactions.
Specialized LLMs: Development of domain-specific LLMs tailored to fields like medicine or law.

Is Google AI better than ChatGPT?

Gemini offers a better user experience, with more imagery and website links. Gemini Advanced generates better AI images than ChatGPT Plus. Gemini responses were often set out in a more readable format than ChatGPT's responses, and Gemini was better at generating spreadsheet formulas than ChatGPT.

How is Bard better than ChatGPT?

Bard has real-time access to the internet through Google Search, allowing it to incorporate the latest information and news into its responses.
ChatGPT, by contrast, was trained on a static dataset not updated since 2021; it can only access external information through plugins, and that functionality is limited.

Is Google nervous about ChatGPT?

The concern is that the technology represents everything Google was afraid artificial intelligence would become. If ChatGPT runs rampant, the search giant fears it could ruin AI adoption for everyone. Since going viral, ChatGPT has demonstrated how generative AI can be user-friendly, practical, and productive.

The narrative of ChatGPT's dethronement may be premature. Bard and ChatGPT are evolving entities, and the ultimate victor will be determined by their ability to navigate future challenges and opportunities. As these LLMs progress, users stand to benefit from access to increasingly sophisticated and beneficial AI tools.

Read More

Gen AI and Test Automation

Generative AI has brought transformative advancements across industries, and test automation is no exception. By generating code, test scenarios, and even entire suites, generative AI enables Software Development Engineers in Test (SDETs) to boost efficiency, expand test coverage, and improve reliability.

1. Enhanced Test Case Generation

One of the biggest hurdles in test automation is generating diverse, comprehensive test cases. Traditional methods often miss edge cases or diverse scenarios. Generative AI, however, can analyze existing data and automatically generate extensive test cases, including potential edge cases that may not be apparent to human testers.

Example: An SDET can use generative AI to create test cases for a web application by feeding it requirements and user data. This enables the AI to produce hundreds of test cases, capturing diverse user behaviors and interactions that manual testers may overlook.

    import openai

    openai.api_key = 'YOUR_API_KEY'

    def generate_test_cases(application_description):
        response = openai.Completion.create(
            engine="text-davinci-003",
            prompt=f"Generate comprehensive test cases for the following application: {application_description}",
            max_tokens=500
        )
        return response.choices[0].text

    app_description = "An e-commerce platform for browsing products, adding to cart, and checking out."
    test_cases = generate_test_cases(app_description)
    print(test_cases)

Sample Output:

2. Intelligent Test Script Creation

Writing test scripts manually can be labor-intensive and error-prone. Generative AI can simplify this by generating test scripts based on an application's flow, ensuring consistency and precision.

Example: If an SDET needs to automate tests for a mobile app, they can use generative AI to generate scripts for various scenarios, significantly reducing manual work.

    import hypothetical_ai_test_tool

    ui_description = """
    Login Page:
    - Username field
    - Password field
    - Login button
    Home Page:
    - Search bar
    - Product listings
    - Add to cart buttons
    """

    test_scripts = hypothetical_ai_test_tool.generate_selenium_scripts(ui_description)

Sample output for test_login.py:

    from selenium import webdriver
    from selenium.webdriver.common.keys import Keys

    def test_login():
        driver = webdriver.Chrome()
        driver.get("http://example.com/login")
        username_field = driver.find_element_by_name("username")
        password_field = driver.find_element_by_name("password")
        login_button = driver.find_element_by_name("login")
        username_field.send_keys("testuser")
        password_field.send_keys("password")
        login_button.click()
        assert "Home" in driver.title
        driver.quit()

3. Automated Maintenance of Test Suites

As applications evolve, maintaining test suites is critical. Generative AI can monitor app changes and update test cases automatically, keeping test suites accurate and relevant.

Example: In a CI/CD pipeline, an SDET can deploy generative AI to track code changes and update affected test scripts. This minimizes downtime and ensures tests stay aligned with application updates.

    import hypothetical_ai_maintenance_tool

    def maintain_test_suite():
        changes = hypothetical_ai_maintenance_tool.analyze_code_changes()
        updated_scripts = hypothetical_ai_maintenance_tool.update_test_scripts(changes)
        for script_name, script_content in updated_scripts.items():
            with open(script_name, 'w') as file:
                file.write(script_content)

    maintain_test_suite()

Sample Output: "Updating test_login.py with new login flow changes… Test scripts updated successfully."
4. Natural Language Processing for Test Case Design

Generative AI with NLP can interpret human language, enabling SDETs to create test cases from plain-language descriptions and enhancing collaboration across technical and non-technical teams.

Example: An SDET can use an NLP-powered tool to translate a feature description from a product manager into test cases. This speeds up the process and ensures that test cases reflect intended functionality.

    import openai

    openai.api_key = 'YOUR_API_KEY'

    def create_test_cases(description):
        response = openai.Completion.create(
            engine="text-davinci-003",
            prompt=f"Create test cases based on this feature description: {description}",
            max_tokens=500
        )
        return response.choices[0].text

    feature_description = "Allow users to reset passwords via email to regain account access."
    test_cases = create_test_cases(feature_description)
    print(test_cases)

Sample Output:

5. Predictive Analytics for Test Prioritization

Generative AI can analyze historical data to prioritize high-risk areas, allowing SDETs to focus testing on critical functionalities.

Example: An SDET can use predictive analytics to identify areas with frequent bugs, allocating resources more effectively and ensuring robust testing of high-risk components.

    import hypothetical_ai_predictive_tool

    def prioritize_tests():
        risk_areas = hypothetical_ai_predictive_tool.predict_risk_areas()
        prioritized_tests = hypothetical_ai_predictive_tool.prioritize_test_cases(risk_areas)
        return prioritized_tests

    prioritized_test_cases = prioritize_tests()
    print("Prioritized Test Cases:")
    for test in prioritized_test_cases:
        print(test)

Sample Output:

Gen AI and Test Automation

Generative AI has the potential to revolutionize test automation, offering SDETs tools to enhance efficiency, coverage, and reliability. By embracing generative AI for tasks like test case generation, script creation, suite maintenance, NLP-based design, and predictive prioritization, SDETs can reduce manual effort and focus on strategic tasks, accelerating testing processes and ensuring robust, reliable software systems.
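The examples above use the legacy openai.Completion interface with text-davinci-003, which newer releases of the openai package no longer expose. Here is a hedged sketch of the test-case generation example rewritten for openai version 1.0 and later using the chat completions API; the model name is an illustrative choice, and the API key is read from the OPENAI_API_KEY environment variable.

    # Hedged sketch: test-case generation with the openai>=1.0 chat completions API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_test_cases(application_description: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You are a QA engineer who writes thorough test cases."},
                {"role": "user",
                 "content": f"Generate comprehensive test cases for the following application: {application_description}"},
            ],
            max_tokens=500,
        )
        return response.choices[0].message.content

    app_description = ("An e-commerce platform for browsing products, "
                       "adding to cart, and checking out.")
    print(generate_test_cases(app_description))

The same pattern applies to the NLP-based test design example in section 4; only the prompt changes.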

Read More

Gen AI Role in Healthcare

Generative AI's Growing Role in Healthcare: Potential and Challenges

The rapid advancements in large language models (LLMs) have introduced generative AI tools into nearly every business sector, including healthcare. As defined by the Government Accountability Office, generative AI is "a technology that can create content, including text, images, audio, or video, when prompted by a user." These systems learn patterns and relationships from vast datasets, enabling them to generate new content that resembles but is not identical to the original training data. This capability is powered by machine learning algorithms and statistical models. In healthcare, generative AI is being utilized for various applications, including clinical documentation, patient communication, and clinical text summarization.

Streamlining Clinical Documentation

Excessive documentation is a leading cause of clinician burnout, as highlighted by a 2022 athenahealth survey conducted by the Harris Poll. Generative AI shows promise in easing these documentation burdens, potentially improving clinician satisfaction and reducing burnout.

A 2024 study published in NEJM Catalyst explored the use of ambient AI scribes within The Permanente Medical Group (TPMG). This technology employs smartphone microphones and generative AI to transcribe patient encounters in real time, providing clinicians with draft documentation for review. In October 2023, TPMG deployed this ambient AI technology across various settings, benefiting 10,000 physicians and staff. Physicians who used the ambient AI scribe reported positive outcomes, including more personal and meaningful patient interactions and reduced after-hours electronic health record (EHR) documentation. Early patient feedback was also favorable, with improved provider interactions noted. Additionally, ambient AI produced high-quality clinical documentation for clinician review.

However, a 2023 study in the Journal of the American Medical Informatics Association (JAMIA) cautioned that ambient AI might struggle with non-lexical conversational sounds (NLCSes), such as "mm-hm" or "uh-uh," which can convey clinically relevant information. The study found that while the ambient AI tools had a word error rate of about 12% for all words, the error rate for NLCSes was significantly higher, reaching up to 98.7% for those conveying critical information. Misinterpretation of these sounds could lead to inaccuracies in clinical documentation and potential patient safety issues.

Enhancing Patient Communication

With the digital transformation in healthcare, patient portal messages have surged. A 2021 study in JAMIA reported a 157% increase in patient portal inbox messages since 2020. In response, some healthcare organizations are exploring the use of generative AI to draft replies to these messages.

A 2024 study published in JAMA Network Open evaluated the adoption of AI-generated draft replies to patient messages at an academic medical center. After five weeks, clinicians used the AI-generated drafts 20% of the time, a notable rate considering the LLMs were not fine-tuned for patient communication. Clinicians reported reduced task load and emotional exhaustion, suggesting that AI-generated replies could help alleviate burnout. However, the study found no significant changes in reply time, read time, or write time between the pre-pilot and pilot periods.
Despite this, clinicians expressed optimism about time savings, indicating that the cognitive ease of editing drafts rather than writing from scratch might not be fully captured by time metrics.

Summarizing Clinical Data

Summarizing information within patient records is a time-consuming task for clinicians, and errors in this process can negatively impact clinical decision support. Generative AI has shown potential in this area, with a 2023 study finding that LLM-generated summaries could outperform human expert summaries in terms of conciseness, completeness, and correctness.

However, using generative AI for clinical data summarization presents risks. A viewpoint in JAMA argued that LLMs performing summarization tasks might not fall under FDA medical device oversight, as they provide language-based outputs rather than disease predictions or numerical estimates. Without statutory changes, the FDA's authority to regulate these LLMs remains unclear. The authors also noted that differences in summary length, organization, and tone could influence clinician interpretations and subsequent decision-making. Furthermore, LLMs might exhibit biases, such as sycophancy, where responses are tailored to user expectations. To address these concerns, the authors called for comprehensive standards for LLM-generated summaries, including testing for biases and errors, as well as clinical trials to quantify potential harms and benefits.

The Path Forward

Generative AI holds significant promise for transforming healthcare and reducing clinician burnout, but realizing this potential requires comprehensive standards and regulatory clarity. A 2024 study published in npj Digital Medicine emphasized the need for defined leadership, adoption incentives, and ongoing regulation to deliver on the promise of generative AI in healthcare.

Leadership should focus on establishing guidelines for LLM performance and identifying optimal clinical settings for AI tool trials. The study suggested that a subcommittee within the FDA, comprising physicians, healthcare administrators, developers, and investors, could effectively lead this effort. Additionally, widespread deployment of generative AI will likely require payer incentives, as most providers view these tools as capital expenses. With the right leadership, incentives, and regulatory framework, generative AI can be effectively implemented across the healthcare continuum to streamline clinical workflows and improve patient care.

Read More

Advances in AI Models

Advances in AI Models

Let's take a moment to appreciate the transformative impact large language models (LLMs) have had on the world. Before the rise of LLMs, researchers spent years training AI to generate images, but these models had significant limitations.

One promising neural network architecture was the generative adversarial network (GAN). In a GAN, two networks play a cat-and-mouse game: one tries to create realistic images while the other tries to distinguish between generated and real images. Over time, the image-creating network improves at tricking the other. While GANs can generate convincing images, they typically excel at creating images of a single subject type. For example, a GAN that creates excellent images of cats might struggle with images of mice. GANs can also experience "mode collapse," where the network generates the same image repeatedly because it always tricks the discriminator. An AI that produces only one image repeatedly isn't very useful.

What's truly useful is an AI model capable of generating diverse images, whether it's a cat, a mouse, or a cat in a mouse costume. Such models exist and are known as diffusion models, named for the underlying math that resembles diffusion processes like a drop of dye spreading in water. These models are trained to connect images and text, leveraging vast amounts of captioned images on the internet. With enough samples, a model can extract the essence of "cat," "mouse," and "costume," embedding these elements into generated images using diffusion principles. The results are often stunning. Some of the most well-known diffusion models include DALL-E, Imagen, Stable Diffusion, and Midjourney. Each model differs in training data, embedding language details, and user interaction, leading to varied results. As research and development progress, these tools continue to evolve rapidly.

Uses of Generative AI for Imagery

Generative AI can do far more than create cute cat cartoons. By fine-tuning generative AI models and combining them with other algorithms, artists and innovators can create, manipulate, and animate imagery in various ways. Here are some examples:

Text-to-Image

Generative AI allows for incredible artistic variety using text-to-image techniques. For instance, you can generate a hand-drawn cat or opt for a hyperrealistic or mosaic style. If you can imagine it, diffusion models can interpret your intention successfully. (A minimal text-to-image sketch appears at the end of this post.)

Text-to-3D Model

Creating 3D models traditionally requires technical skill, but generative AI tools like DreamFusion can generate 3D models along with detailed descriptions of coloring, lighting, and material properties, meeting the growing demand in commerce, manufacturing, and entertainment.

Image-to-Image

Images can be powerful prompts for generative AI models. Here are some use cases:

Animation

Creating a series of consistent images for animation is challenging due to inherent randomness in generated images. However, researchers have developed methods to reduce variations, enabling smoother animations. All the use cases for still images can be adapted for animation. For example, style transfer can turn a video of a skateboarder into an anime-style animation. AI models trained on speech patterns can animate the lips of a generated 3D character.

Embracing Generative AI

Generative AI offers enormous possibilities for creating stunning imagery. As you explore these capabilities, it's essential to use them responsibly.
In the next unit, you'll learn how to leverage generative AI's potential in an ethical and effective manner.
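As a concrete illustration of the text-to-image workflow described in this post, here is a hedged sketch using the Hugging Face diffusers library. The pipeline class and checkpoint id reflect that library's common usage rather than a tool named above, and the prompt, step count, and guidance scale are illustrative; substitute whichever diffusion model fits your project.

    # Hedged sketch: text-to-image generation with a diffusion model via diffusers.
    # Assumes: pip install diffusers transformers accelerate, plus a CUDA GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint id
        torch_dtype=torch.float16,
    ).to("cuda")

    # The prompt steers style as well as subject, as described above.
    prompt = "a hand-drawn cat wearing a mouse costume, ink sketch style"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("cat_in_costume.png")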

Read More

Copado AI Testing for Salesforce

The DevOps company Copado has announced a new AI assistant for Salesforce test creation called Test Copilot. Copado already integrates with Salesforce. The announcement follows the company's recent release of Copado Explorer, an automated testing solution designed for Salesforce users, as well as the launch of its AI assistant CopadoGPT, on which Test Copilot is built.

Users provide a text prompt describing what needs to be tested, and Test Copilot creates a test that fits those requirements. It can convert existing tests, Selenium tests, or Copado Explorer results into a new test, create tests from scratch, or turn recorded user sessions into test scripts. Copado AI testing for Salesforce brings AI-powered test automation to every cloud under the sun.

"Copado is in the business of giving people their time back," said Esko Hannula, senior vice president of product management at Copado. "By eliminating repeated tasks and using AI to automate the test creation process, Copado is helping release teams work faster than ever before while improving release quality. With our AI-powered testing solutions, Copado customers are not only accelerating software testing, but simplifying it."

Why do thousands of Salesforce teams use Copado? Because we make it easy to build, test, and deploy the applications that power your business.

Read More

Train On Your Own Data

General-purpose large language models (LLMs) offer businesses the convenience of immediate use without requiring any special setup or customization. However, to maximize the potential of LLMs in business environments, organizations can achieve significant benefits by customizing these models through training on their own data.

Custom LLMs excel at handling organization-specific tasks that generic LLMs, such as OpenAI's ChatGPT or Google's Gemini, may not manage as effectively. By training an LLM on data unique to the enterprise, businesses can fine-tune the model to produce responses that are highly relevant to specific products, workflows, and customer interactions.

To determine whether to customize an LLM with organization-specific data, businesses should first explore the various types of LLMs and understand the advantages of fine-tuning a model on custom data sets. Following this, they can proceed with the necessary steps: identifying data sources, cleaning and formatting the data, adjusting model parameters, retraining the model, and testing it in production.

Generic vs. Customized LLMs

LLMs can be broadly categorized into two types: generic and customized. Training an LLM on custom data doesn't imply starting from scratch; instead, it often involves fine-tuning a pre-trained generic model with additional training on the organization's data. This approach allows the model to retain the broad knowledge it acquired during initial training while enhancing its capabilities in areas specific to the business.

Benefits of Customizing an LLM

The primary reason for retraining or fine-tuning an LLM is to achieve superior performance on business-specific tasks compared to using a generic model. For example, a company that wants to deploy a chatbot for customer support needs an LLM that understands its products in detail. Even if a generic LLM has some familiarity with the product from public data sources, it may lack the depth of knowledge that the company's internal documentation provides. Without this comprehensive context, a generic LLM might struggle to generate accurate responses when interacting with customers about specific products.

Generic models are optimized for broad usability, which means they may not be tailored for the specialized conversations required in business scenarios. Organizations can overcome these limitations by retraining or fine-tuning an LLM with data related to their products and services. During this process, AI teams can also adjust parameters, such as model weights, to influence the type of output the model generates, making it more relevant to the organization's needs.

Steps to Customize an LLM with Organization-Specific Data

To customize an LLM with your organization's data, follow these steps:

1. Identify the data sources relevant to the tasks the model will support.
2. Clean and format the data for training.
3. Adjust model parameters to fit the use case.
4. Retrain (fine-tune) the model on the prepared data.
5. Test the customized model before moving it to production.

A minimal fine-tuning sketch follows below. By following these steps, organizations can transform a generic LLM into a powerful, customized tool tailored to their unique business needs, enhancing efficiency, customer satisfaction, and overall operational effectiveness.
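The steps above are easiest to see in code. Below is a hedged, minimal sketch of fine-tuning a small pre-trained causal language model on organization-specific text using the Hugging Face transformers and datasets libraries. The base model name, the company_docs.txt file path, and the hyperparameters are illustrative assumptions rather than recommendations from this post; production fine-tuning typically adds an evaluation split, parameter-efficient methods such as LoRA, and careful data governance.

    # Hedged sketch: fine-tuning a pre-trained LLM on organization-specific text.
    # Assumes: pip install transformers datasets; "company_docs.txt" stands in
    # for your cleaned, formatted internal data.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "distilgpt2"  # illustrative small base model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Steps 1-2: identify, clean, and format the data (here, one text file).
    dataset = load_dataset("text", data_files={"train": "company_docs.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    # Steps 3-4: adjust training parameters and retrain (fine-tune) the model.
    args = TrainingArguments(
        output_dir="custom-llm",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=5e-5,
    )
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

    # Step 5: spot-check the customized model before promoting it to production.
    prompt = "How do I reset a customer's account password?"
    inputs = tokenizer(prompt, return_tensors="pt")
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0]))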

Read More

Who Calls AI Ethical

Background

On March 13, 2024, the European Union (EU) enacted the EU AI Act, a move that some argue has hindered its position in the global AI race. This legislation aims to "unify" the development and implementation of AI within the EU, but it is seen as more restrictive than progressive. Rather than fostering innovation, the act focuses on governance, which may not be sufficient for maintaining a competitive edge.

The EU AI Act embodies the EU's stance on Ethical AI, a concept that has been met with skepticism. Critics argue that Ethical AI is often misinterpreted and, at worst, a monetizable construct. In contrast, Responsible AI, which emphasizes ensuring products perform as intended without causing harm, is seen as a more practical approach. This involves methodologies such as red-teaming and penetration testing to stress-test products. This critique of Ethical AI forms the basis of this insight, which draws on an article by Eric Sandosham.

The EU AI Act

To understand the implications of the EU AI Act, it is essential to summarize its key components and address the broader issues with the concept of Ethical AI. The EU defines AI as "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. It infers from the input it receives to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." Based on this definition, the EU AI Act can be summarized into several key points:

Fear of AI

The EU AI Act appears to be driven by concerns about AI being weaponized or becoming uncontrollable. Questions arise about whether the act aims to prevent job disruptions or protect against potential risks. However, AI is essentially automating and enhancing tasks that humans already perform, such as social scoring, predictive policing, and background checks, and AI's implementation is more consistent, reliable, and faster than human efforts. Existing regulations already cover vehicular safety, healthcare safety, and infrastructure safety, raising the question of why AI-specific regulations are necessary. AI solutions automate decision-making, but the parameters and outcomes are still human-designed. The fear of AI becoming uncontrollable lacks evidence, and the path to artificial general intelligence (AGI) remains distant.

Ethical AI as a Red Herring

In AI research and development, the terms Ethical AI and Responsible AI are often used interchangeably, but they are distinct. Ethics involves systematized rules of right and wrong, often with legal implications. Morality is informed by cultural and religious beliefs, while responsibility is about accountability and obligation. These constructs are continuously evolving, and so must the ethics and rights related to technology and AI. Promoting AI development and broad adoption can naturally improve governance through market forces, transparency, and competition, since profit-driven organizations are incentivized to enhance AI's positive utility. The focus should be on defining responsible use of AI, especially for non-profit and government agencies.

Towards Responsible AI

Responsible AI emphasizes accountability and obligation. It involves defining safeguards against misuse rather than prohibiting use cases out of fear. This aligns with responsible product development, where existing legal frameworks ensure products work as intended and minimize misuse risks.
AI can improve processes such as recruitment by reducing errors compared to human-driven approaches. AI's role is to make distinctions based on data attributes, striving for accuracy. The real concern is erroneous discrimination, which can be mitigated through rigorous testing for bias as part of product quality assurance.

Conclusion

The EU AI Act is unlikely to become a global standard. It may slow AI research, development, and implementation within the EU, hindering AI adoption in the region and causing long-term harm. Humanity has an obligation to push the boundaries of AI innovation. For a species facing eventual extinction from any number of potential threats, AI could represent a means of survival and advancement beyond our biological limitations.

Read More