AI Training Archives - gettectonic.com

2024: The Year of Generative AI

Was 2024 the Year Generative AI Delivered? Here's What Happened

Industry experts hailed 2024 as the year generative AI would take center stage. Operational use cases were emerging, technology was simplifying access, and general artificial intelligence felt imminent. So, how much of that actually came true? Well… sort of. As the year wraps up, some predictions have hit their mark, while others, like general AI, remain firmly in development. Let's break down the trends, insights from investor Tomasz Tunguz, and what's ahead for 2025.

1. A World Without Reason

Three years into our AI evolution, businesses are finding value, but not universally. Tomasz Tunguz categorizes AI's current capabilities into prediction, search, and reasoning. While prediction and search have gained traction, reasoning models still struggle. Why? Model accuracy. Tunguz notes that unless a model has repeatedly seen a specific pattern, it falters. For example, an AI generating an FP&A chart might succeed, but introduce a twist, like usage-based billing, and it's lost. For now, copilots and modestly accurate search reign supreme.

2. Process Over Tooling

A tool's value lies in how well it fits into established processes. As data teams adopt AI, they're realizing that production-ready AI demands robust processes, not just shiny tools. Take data quality, a critical pillar of AI success: a sampling of dbt tests or point solutions won't cut it anymore. Teams need comprehensive solutions that deliver immediate value. In 2025, expect a shift toward end-to-end platforms that simplify incident management, strengthen data quality ownership, and enable domain-level solutions. The tools that integrate seamlessly and address these priorities will shape AI's future.

3. AI: Cost Cutter, Not Revenue Generator

For now, AI's primary business value lies in cost reduction, not revenue generation. Tools like AI-driven SDRs can increase sales pipelines, but often at the cost of quality. Instead, companies are leveraging AI to cut costs in areas like labor. Examples include Klarna reducing two-thirds of its workforce and Microsoft boosting engineering productivity by 50-75%. Cost reduction works best in scenarios with repetitive tasks, hiring challenges, or labor shortages. Meanwhile, specialized services like EvenUp, which automates legal demand letters, show potential for revenue-focused AI use cases.

4. A Slower but Smarter Adoption Curve

While 2023 saw a wave of experimentation with AI, 2024 marked a period of reflection. Early adopters have faced challenges with implementation, ROI, and rapidly changing tech. According to Tunguz, this "dress rehearsal" phase has taught organizations what works and what doesn't. Heading into 2025, expect a more calculated wave of AI adoption, with leaders focusing on tools that deliver measurable value, and deliver it faster.

5. Small Models for Big Gains

In enterprise AI, small, fine-tuned models are gaining favor over massive, general-purpose ones. Why? Small models are cheaper to run and often outperform their larger counterparts when fine-tuned for specific tasks. For example, training an 8-billion-parameter model on 10,000 support tickets can yield better results than a general model trained on a broad corpus. Legal and cost challenges surrounding large proprietary models further push enterprises toward smaller, open-source solutions, especially in highly regulated industries.

6. Blurring Lines Between Analysts and Engineers

The demand for data and AI solutions is driving a shift in responsibilities. AI-enabled pipelines are lowering barriers to entry, making self-serve data workflows more accessible. This trend could consolidate analytical and engineering roles, streamlining collaboration and boosting productivity in 2025.

7. Synthetic Data: A Necessary Stopgap

With real-world training data finite, synthetic datasets are emerging as a stopgap. Tools like Tonic and Gretel create synthetic data for AI training, particularly in regulated industries. However, synthetic data has limits. Over time, relying too heavily on it can degrade model performance, akin to a diet lacking fresh nutrients. The challenge will be finding a balance between real and synthetic data as AI advances.

8. The Rise of the Unstructured Data Stack

Unstructured data, long underutilized, is poised to become a cornerstone of enterprise AI. Only about half of unstructured data is analyzed today, but as AI adoption grows, this figure will rise. Organizations are exploring tools and strategies to harness unstructured data for training and analytics, unlocking its untapped potential. 2025 will likely see the emergence of a robust "unstructured data stack" designed to drive business value from this vast, underutilized resource.

9. Agentic AI: Not Ready for Prime Time

While AI copilots have proven useful, multi-step AI agents still face significant challenges. Accuracy compounds multiplicatively: an agent that is 90% accurate per step is only about 73% accurate after three steps (0.9³ ≈ 0.73) and drops below 50% by seven steps. Until that improves, these agents are not ready for production use, and agentic AI remains more of a conversation piece than a practical tool.

10. Data Pipelines Are Growing, but Quality Isn't

As enterprises scale their AI efforts, the number of data pipelines is exploding. Smaller, fine-tuned models are being deployed at scale, often requiring hundreds of millions of pipelines. However, this rapid growth introduces data quality risks. Without robust quality management practices, teams risk inconsistent outputs, bottlenecks, and missed opportunities.

Looking Ahead to 2025

As AI evolves, enterprises will face growing pains, but the opportunities are undeniable. From streamlining processes to leveraging unstructured data, 2025 promises advancements that will redefine how organizations approach AI and data strategy. The real challenge? Turning potential into measurable, lasting impact.

AI Agents, Tech's Next Big Bet

Business Intelligence and AI

AI in Business Intelligence: Uses, Benefits, and Challenges

AI tools are increasingly becoming integral to Business Intelligence (BI) systems, enhancing analytics capabilities and streamlining tasks. In this article, we explore how AI can bring new value to BI processes and what to consider as this integration continues to evolve.

AI's Role in Business Intelligence

Business Intelligence tools, such as dashboards and interactive reports, have traditionally focused on analyzing historical and current data to describe business performance, known as descriptive analytics. While valuable, many business users want more than a snapshot of past performance. They also want predictive insights (forecasting future trends) and prescriptive guidance (recommendations for action). Historically, implementing these advanced capabilities was challenging due to their complexity, but AI simplifies the process. By leveraging AI's analytical power and natural language processing (NLP), businesses can move from descriptive to predictive and prescriptive analytics, enabling proactive decision-making.

AI-powered BI systems also offer the advantage of real-time data analysis, providing up-to-date insights that help businesses respond quickly to changing conditions. Additionally, AI can automate routine tasks, boosting efficiency across business operations.

Benefits of Using AI in BI Initiatives

The integration of AI into BI systems brings several key benefits, from faster analysis to wider access to insights.

Examples of AI Applications in BI

AI's role in BI is not limited to internal process improvements. It can also significantly enhance customer experience (CX) and support business growth.

Challenges of Implementing AI in BI

While the potential for AI in BI is vast, companies must address several challenges, including data governance, ethical concerns, and skill shortages.

Best Practices for Deploying AI in BI

To maximize the benefits of AI in BI, companies should follow established deployment best practices.

Future Trends to Watch

AI is not poised to replace traditional BI tools but to augment them with new capabilities.

In conclusion, AI is transforming business intelligence by turning data analysis from a retrospective activity into a forward-looking, real-time process. While challenges remain, such as data governance, ethical concerns, and skill shortages, AI's potential to enhance BI systems and drive business success is undeniable. By following best practices and staying abreast of industry developments, businesses can harness AI to unlock new opportunities and deliver better insights.
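To make the descriptive-to-predictive shift concrete, here is a minimal sketch using scikit-learn. The revenue figures and the simple linear-trend model are hypothetical stand-ins for a real BI dataset and forecasting method, not anything from the article:

```python
# Minimal sketch: moving from descriptive reporting to a simple predictive
# forecast. The data and the trend model are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Twelve months of historical revenue (descriptive BI would just chart these).
months = np.arange(1, 13).reshape(-1, 1)            # month index as the feature
revenue = np.array([110, 115, 121, 119, 130, 138,
                    135, 142, 151, 149, 160, 168])  # revenue in $k

# Fit a trend line: the simplest possible form of predictive analytics.
model = LinearRegression().fit(months, revenue)

# Predict month 13: the forward-looking insight a static dashboard won't give.
forecast = model.predict(np.array([[13]]))
print(f"Forecast for month 13: ~${forecast[0]:.0f}k")
```

A production BI platform would use richer features and models, but the design point is the same: the output is a statement about next month, not a chart of last month.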


AI Inference vs. Training

AI Inference vs. Training: Key Differences and Tradeoffs

AI training and inference are the foundational phases of machine learning, each with distinct objectives and resource demands. Optimizing the balance between the two is crucial for managing costs, scaling models, and ensuring peak performance. The key differences center on compute costs and on resource and latency considerations, and there are strategic tradeoffs to weigh when deciding how to balance the two processes.

As AI technology evolves, hardware advancements may narrow the gap in resource requirements between training and inference. Nonetheless, the key to effective machine learning systems lies in strategically balancing the demands of both processes to meet specific goals and constraints.
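The split between the two phases is easy to see in code: training runs many gradient updates, while inference is a single forward pass with gradients disabled. Here is a minimal PyTorch sketch; the tiny model and random data are stand-ins for a real workload:

```python
# Minimal sketch contrasting training and inference; model and data are toys.
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                      # a tiny stand-in for a real model
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
x, y = torch.randn(64, 4), torch.randn(64, 1)

# Training: iterative and gradient-heavy, which is what makes it costly.
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                          # backprop dominates compute cost
    opt.step()

# Inference: one cheap forward pass, no gradients kept, latency-sensitive.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 4))
print(prediction)
```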


AI Productivity Paradox

The AI Productivity Paradox: Why Aren't More Workers Using AI Tools Like ChatGPT?

The Real Barrier Isn't Technical Skills, It's Time to Think

Despite the transformative potential of tools like ChatGPT, most knowledge workers aren't utilizing them effectively. Those who do tend to use them for basic tasks like summarization. Less than 5% of ChatGPT's user base subscribes to the paid Plus version, indicating that only a small fraction of potential professional users are tapping into AI for more complex, high-value tasks.

Having spent over a decade building AI products at companies such as Google Brain and Shopify Ads, the author has seen the evolution of AI firsthand. With the advent of ChatGPT, AI has transitioned from an enhancement for tools like photo organizers to a significant productivity booster for all knowledge workers.

Most executives are aware that today's buzz around AI is more than hype. They're eager to make their companies AI-forward, recognizing that AI is now more powerful and user-friendly than ever. Yet, despite this potential and enthusiasm, widespread adoption remains slow. The real issue lies in how organizations approach work itself: systemic problems are hindering the integration of these tools into the daily workflow. Ultimately, the question executives need to ask isn't "How can we use AI to work faster?" or "Can this feature be built with AI?" but rather "How can we use AI to create more value? What are the questions we should be asking but aren't?"

Real-World Impact

Recently, large language models (LLMs), the technology behind tools like ChatGPT, were used to tackle a complex data structuring and analysis task, one that would typically require a cross-functional team of data analysts and content designers and take a month or more to complete. Using Google AI Studio, it was finished in a single day.

However, the process wasn't just about pressing a button and letting AI do all the work. It required focused effort, detailed instructions, and multiple iterations. Hours were spent crafting precise prompts, providing feedback, and redirecting the AI when it went off course. The task was compressed from a month-long process to a single day. While it was mentally exhausting, the result wasn't just a faster process; it was a fundamentally better and different outcome. The LLMs uncovered nuanced patterns and edge cases within the data that traditional analysis would have missed.

The Counterintuitive Truth

Here lies the key to understanding the AI productivity paradox: this success was possible because leadership allowed a full day for rethinking data processes with AI as a thought partner. That provided the space for deep, strategic thinking, exploring connections and possibilities that would typically take weeks. However, this quality-focused work is often sacrificed under the pressure to meet deadlines. Ironically, most people don't have time to figure out how they could save time.

This kind of dedicated exploration is a luxury many product managers (PMs) can't afford. Under constant pressure to deliver immediate results, many PMs don't have even an hour for strategic thinking. For some, the only way to carve out time for this work is by pretending to be sick. This continuous pressure also hinders AI adoption: developing thorough testing plans or proactively addressing AI-related issues is viewed as a luxury, not a necessity. This creates a counterproductive dynamic. Why use AI to spot issues in documentation if fixing them would delay launch? Why conduct further user research when the direction has already been set from above?

Charting a New Course: Investing in People

Providing employees time to "figure out AI" isn't enough; most need training to understand how to leverage ChatGPT beyond simple tasks like summarization. Yet the training required is often far less than people expect. While the market is flooded with AI training programs, many aren't suitable for most employees: they are time-consuming, overly technical, and not tailored to specific job functions. The best results come from working closely with individuals for brief periods, 10 to 15 minutes, to audit their current workflows and identify areas where LLMs could streamline processes. Understanding the technical details behind token prediction isn't necessary to create effective prompts.

It's also a myth that AI adoption is only for those with technical backgrounds under 40. In fact, attention to detail and a passion for quality work are far better indicators of success. By setting aside biases, companies may discover hidden AI enthusiasts within their ranks. For example, a lawyer in his sixties, after just five minutes of explanation, grasped the potential of LLMs. With examples tailored to his domain, the technology helped him draft a law review article he had been putting off for months.

It's likely that many companies already have AI enthusiasts, individuals who've taken the initiative to explore LLMs in their work. These "LLM whisperers" could come from any department: engineering, marketing, data science, product management, or customer service. By identifying these internal innovators, organizations can leverage their expertise. Once found, these experts can conduct "AI audits" of current workflows, identify areas for improvement, and provide starter prompts for specific use cases. Internal experts often understand the company's systems and goals better than outside trainers, making them more capable of spotting relevant opportunities.

Ensuring Time for Exploration

Beyond providing training, it's crucial that employees have time to explore and experiment with AI tools. Companies can't tell their employees to innovate with AI while demanding that another month's worth of features be delivered by Friday at 5 p.m. Ensuring teams have a few hours a month for exploration is essential for fostering true AI adoption. Once the initial hurdle of adoption is overcome, employees will be able to identify the most promising areas for AI investment. From there, organizations will be better positioned to assess the need for more specialized training.

Conclusion

The AI productivity paradox is not about the complexity of the technology but about how organizations approach work and innovation. Harnessing AI's potential is simpler than "AI influencers" often suggest, requiring only


Snowflake Security and Development

Snowflake Unveils AI Development and Enhanced Security Features

At its annual Build virtual developer conference, Snowflake introduced a suite of new capabilities focused on AI development and strengthened security measures. These enhancements aim to simplify the creation of conversational AI tools, improve collaboration, and address data security challenges following a significant breach earlier this year.

AI Development Updates

Snowflake announced updates to its Cortex AI suite to streamline the development of conversational AI applications, with new tools focused on enabling faster, more efficient development while ensuring data integrity and trust. These features address enterprise demands for generative AI tools that boost productivity while maintaining governance over proprietary data. Snowflake aims to eliminate barriers to data-driven decision-making by enabling natural language queries and easy integration of structured and unstructured data into AI models.

According to Christian Kleinerman, Snowflake's EVP of Product, the goal is to reduce the time it takes for developers to build reliable, cost-effective AI applications: "We want to help customers build conversational applications for structured and unstructured data faster and more efficiently."

Security Enhancements

Following a breach last May, in which hackers accessed customer data via stolen login credentials, Snowflake has implemented new security features. These additions come alongside existing tools like the Horizon Catalog for data governance. Kleinerman noted that while Snowflake's previous security measures were effective at preventing unauthorized access, the company recognizes the need to improve user adoption of these tools: "It's on us to ensure our customers can fully leverage the security capabilities we offer. That's why we're adding more monitoring, insights, and recommendations."

Collaboration Features

Snowflake is also enhancing collaboration through its new Internal Marketplace, which enables organizations to share data, AI tools, and applications across business units. The Native App Framework now integrates with Snowpark Container Services to simplify the distribution and monetization of analytics and AI products.

AI Governance and Competitive Position

Industry analysts highlight the growing importance of AI governance as enterprises increasingly adopt generative AI tools. David Menninger of ISG's Ventana Research emphasized that Snowflake's governance-focused features, such as LLM observability, fill a critical gap in AI tooling: "Trustworthy AI enhancements like model explainability and observability are vital as enterprises scale their use of AI."

With these updates, Snowflake continues to compete with Databricks and other vendors. Its strategy focuses on offering both API-based flexibility for developers and built-in tools for users seeking simpler solutions. By combining innovative AI development tools with robust security and collaboration features, Snowflake aims to meet the evolving needs of enterprises while positioning itself as a leader in the data platform and AI space.


AI Research Agents

AI Research Agents: Transforming Knowledge Discovery by 2025 (Plus the Top 3 Free Tools)

The research world is on the verge of a groundbreaking shift, driven by the evolution of AI research agents. By 2025, these agents are expected to move beyond being mere tools to becoming transformative assets for knowledge discovery, revolutionizing industries such as marketing, science, and beyond. Human researchers are inherently limited: they cannot scan 10,000 websites in an hour or analyze data at lightning speed. AI agents, however, are purpose-built for these tasks, providing efficiency and insights far beyond human capabilities. Here, we explore the anticipated impact of AI research agents and highlight three free tools redefining this space (spoiler alert: it's not ChatGPT or Perplexity!).

AI Research Agents: The New Era of Knowledge Exploration

The AI research market is projected to grow severalfold between 2024 and 2030. This explosive growth represents not just advancements in AI but a fundamental transformation in how knowledge is gathered, analyzed, and applied. Unlike traditional AI systems, which require constant input and supervision, AI research agents function more like dynamic research assistants. They adapt their approach based on outcomes, handle vast quantities of data, and generate actionable insights with remarkable precision.

Key differentiator: these agents leverage advanced retrieval-augmented generation (RAG) technology, ensuring accuracy by pulling verified data from trusted sources. Equipped with anti-hallucination algorithms, they maintain factual integrity while citing their sources, making them indispensable for high-stakes research. (A toy sketch of the RAG pattern appears at the end of this piece.)

The Technology Behind AI Research Agents

AI research agents stand out for what they can already do in practice: an agent can deliver a detailed research report in 30 minutes, a task that might take a human team days.

Why AI Research Agents Matter Now

The timing couldn't be more critical. The volume of data generated daily is overwhelming, and human researchers often struggle to keep up. Meanwhile, Google's focus on Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) has heightened the demand for accurate, well-researched content. Some research teams have already reported time savings of up to 70% by integrating AI agents into their workflows. Beyond speed, these agents uncover perspectives and connections often overlooked by human researchers, adding significant value to the final output.

Top 3 Free AI Research Tools

1. Stanford STORM
STORM (Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking) is an open-source system designed to generate comprehensive, Wikipedia-style articles. Learn more at the STORM GitHub repository.

2. CustomGPT.ai Researcher
CustomGPT.ai creates highly accurate, SEO-optimized long-form articles using deep Google research or proprietary databases. Learn more via the free Streamlit app for CustomGPT.ai.

3. GPT Researcher
This open-source agent conducts thorough research tasks, pulling data from both web and local sources to produce customized reports. Learn more at the GPT Researcher GitHub repository.

The Human-AI Partnership

Despite their capabilities, AI research agents are not replacements for human researchers. Instead, they act as powerful assistants, enabling researchers to focus on creative problem-solving and strategic thinking. Think of them as tireless collaborators, processing vast amounts of data while humans interpret and apply the findings to solve complex challenges.

Preparing for the AI Research Revolution

To harness the potential of AI research agents, researchers must adapt. Universities and organizations are already incorporating AI training into their programs to prepare the next generation of professionals. For smaller labs and institutions, these tools present a unique opportunity to level the playing field, democratizing access to high-quality research capabilities.

Looking Ahead

By 2025, AI research agents will likely reshape the research landscape, enabling cross-disciplinary breakthroughs and empowering researchers worldwide. From small teams to global enterprises, the benefits are immense: faster insights, deeper analysis, and unprecedented innovation. As with any transformative technology, challenges remain. But the potential to address some of humanity's biggest problems makes this an AI revolution worth embracing. Now is the time to prepare and make the most of these groundbreaking tools.
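As referenced above, here is a toy sketch of the retrieval-augmented generation (RAG) pattern these agents rely on. It swaps the real components (a vector store and an LLM) for keyword overlap and a stub generator, so every name and document in it is illustrative:

```python
# Toy sketch of RAG: ground an answer in retrieved, citable sources.
# Real systems use embeddings and an LLM call; this version uses naive
# keyword overlap and a stub "generator" to show the shape of the loop.
TRUSTED_SOURCES = {
    "storm-paper": "STORM generates Wikipedia-style articles via multi-perspective question asking.",
    "rag-survey": "RAG pairs a retriever over trusted documents with a generator that cites them.",
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank sources by keyword overlap with the query (a stand-in for vector search)."""
    words = set(query.lower().split())
    scored = sorted(
        TRUSTED_SOURCES.items(),
        key=lambda item: -len(words & set(item[1].lower().split())),
    )
    return scored[:k]

def answer(query: str) -> str:
    """Generate an answer constrained to retrieved text, with a citation."""
    doc_id, passage = retrieve(query)[0]
    # A real agent would prompt an LLM with the passage; citing the source
    # is what keeps the output grounded and auditable.
    return f"{passage} [source: {doc_id}]"

print(answer("How does RAG pair a retriever with a generator?"))
```

The anti-hallucination property comes from the constraint, not the model: the generator may only restate retrieved text, and every claim carries a source ID a human can check.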

Healthcare Can Prioritize AI Governance

AI Data Privacy and Security

Three Key Generative AI Data Privacy and Security Concerns

The rise of generative AI is reshaping the digital landscape, putting powerful tools like ChatGPT and Microsoft Copilot into the hands of professionals, students, and casual users alike. From creating AI-generated art to summarizing complex texts, generative AI (GenAI) is transforming workflows and sparking innovation. However, for information security and privacy professionals, this rapid proliferation also brings significant challenges in data governance and protection. Below are three critical data privacy and security concerns tied to generative AI.

1. Who Owns the Data?

Data ownership is a contentious issue in the age of generative AI. In the European Union, the General Data Protection Regulation (GDPR) asserts that individuals own their personal data. In contrast, data ownership laws in the United States are less clear-cut, with recent state-level regulations echoing GDPR's principles but failing to resolve the ambiguity. Generative AI often ingests vast amounts of data, much of which may not belong to the person uploading it. This creates legal risks for both users and AI model providers, especially when third-party data is involved. Intellectual property cases, such as the controversies involving Slack, Reddit, and LinkedIn, highlight public resistance to having personal data used for AI training. As lawsuits in this arena emerge, prior intellectual property rulings could shape the legal landscape for generative AI.

2. What Data Can Be Derived from LLM Output?

Generative AI models are designed to be helpful, but they can inadvertently expose sensitive or proprietary information submitted during training. This risk has made many wary of uploading critical data into AI models. Techniques like tokenization, anonymization, and pseudonymization can reduce these risks by obscuring sensitive data before it is fed into AI systems (a short sketch of pseudonymization appears at the end of this piece). However, these practices may compromise the model's performance by limiting the quality and specificity of the training data. Advocates for GenAI stress that high-quality, accurate data is essential to achieving the best results, which adds to the complexity of balancing privacy with performance.

3. Can the Output Be Trusted?

The phenomenon of "hallucinations," when generative AI produces incorrect or fabricated information, poses another significant concern. Whether these errors stem from poor training, flawed data, or malicious intent, they raise questions about the reliability of GenAI outputs. The impact of hallucinations varies with context. Some errors cause minor inconveniences; others could have serious or even dangerous consequences, particularly in sensitive domains like healthcare or legal advisory. As generative AI continues to evolve, ensuring the accuracy and integrity of its outputs will remain a top priority.

The Generative AI Data Governance Imperative

Generative AI's transformative power lies in its ability to leverage vast amounts of information, which means information security, data privacy, and governance professionals must grapple with hard questions about ownership, exposure, and trust. With high stakes and no way to reverse intellectual property violations, the need for robust data governance frameworks is urgent. As society navigates this transformative era, balancing innovation with responsibility will determine whether generative AI becomes a tool for progress or a source of new challenges. While generative AI heralds a bold future, history reminds us that groundbreaking advancements often come with growing pains. It is the responsibility of stakeholders to anticipate and address these challenges to ensure a safer and more equitable AI-powered world.
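As referenced in concern 2, here is a minimal sketch of pseudonymizing records before they reach an AI system. The field names and salt handling are hypothetical; production systems use vetted tokenization services and managed key storage rather than anything hardcoded:

```python
# Minimal sketch of pseudonymization: replace direct identifiers with
# stable tokens before data is sent to an AI system. Illustrative only.
import hashlib

SALT = "rotate-me-and-store-securely"  # hypothetical; never hardcode in production

def pseudonymize(value: str) -> str:
    """Map an identifier to a stable, non-reversible token."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "ticket": "Printer jams on tray 2",
}
safe_record = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "ticket": record["ticket"],  # free text may still need separate scrubbing
}
print(safe_record)
```

Note the tradeoff the article describes: the tokens preserve record linkage for analysis, but the model loses the specificity of the raw identifiers, which can reduce output quality.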

Will AI Hinder Digital Transformation in Healthcare?

Poisoning Your Data

Protecting Your IP from AI Training: Poisoning Your Data

As more valuable intellectual property (IP) becomes accessible online, concerns are rising about AI vendors scraping content to train models without permission. If you're worried about AI theft and want to safeguard your assets, it's time to consider "poisoning" your content: making it difficult or even impossible for AI systems to use it effectively.

Key Principle: AI "Sees" Differently Than Humans

AI processes data in ways humans don't. While people interpret content in context, AI "sees" data in raw, specific formats that can be manipulated. By subtly altering your content, you can protect it without affecting human users.

Image Poisoning: Misleading AI Models

Images can be "poisoned" to confuse AI models without impacting human perception. A good example is Nightshade, a tool designed to distort images so that they remain recognizable to humans but useless to AI models. Applying such a technique across your visual content helps ensure your artwork can't be replicated and protects your unique style. For example, if you're concerned about your images being reused by generative AI systems, you can embed misleading text into the image itself, invisible to human viewers but interpreted by AI as nonsensical data. An AI model trained on such images will be unable to replicate them correctly.

Text Poisoning: Adding Complexity for Crawlers

Text poisoning requires more finesse and depends on the sophistication of the AI's web crawler. Simple methods include invisible text, JavaScript-generated content, and honeypot pages.

Invisible Text

One easy method is to hide text within your page using CSS. This invisible content can be placed in sidebars, between paragraphs, or anywhere within your text:

```css
/* Three alternative ways to keep "poison" text out of human view
   while leaving it in the markup for crawlers to ingest: */
.content {
  color: black;     /* same as the background, so the text is invisible */
  opacity: 0;       /* or render the element fully transparent */
  display: none;    /* or exclude it from the rendered layout entirely */
}
```

By embedding this "poisonous" content directly in the page, you make it hard for AI crawlers to distinguish it from real content. Done correctly, AI models will ingest the irrelevant data as part of your content.

JavaScript-Generated Content

Another technique is to use JavaScript to alter the content dynamically, so the genuine version appears only after the page loads or under specific conditions. Crawlers that read the markup without fully executing scripts see a different page than human visitors do. A minimal sketch of the idea (element IDs and text are illustrative):

```html
<script>
  // Inject the genuine article text after load, so a crawler that never
  // executes JavaScript sees only the static decoy markup, while human
  // visitors in a real browser get the real content.
  window.addEventListener("load", () => {
    document.getElementById("article").textContent =
      "The genuine content, rendered only for browsers that run scripts.";
  });
</script>
```

This method ensures that AI gets a different version of the page than human users.

Honeypots for AI Crawlers

Honeypots are pages designed specifically for AI crawlers, containing irrelevant or distorted data. These pages don't affect human users but can confuse AI models by feeding them inaccurate information. For example, if your website sells cheese, you can create pages that only AI crawlers will access, full of bogus details about your cheese, poisoning the AI model with incorrect information. By adding these honeypot pages, you can mislead AI models that scrape your data, preventing them from using your IP effectively.

Competitive Advantage Through Data Poisoning

Data poisoning can also work to your benefit. By feeding AI models biased information about your products or services, you can shape how those models interpret your brand. For example, you could subtly insert favorable competitive comparisons into your content that only AI models can read, positioning your products in a way that biases future AI-driven decisions. You might embed positive descriptions of your brand or products in invisible text; AI models would ingest these biases, making it more likely that they favor your brand when generating results.

Using Proxies for Data Poisoning

Instead of modifying your CMS, consider using a proxy server to inject poisoned data into your content dynamically. This approach makes it easier to identify and respond to crawlers, adding a layer of protection without overhauling your existing systems. A proxy can insert poisoned content based on the type of AI crawler requesting the page, ensuring that the AI gets the distorted data without changing your main website's user experience. (A short sketch of this appears at the end of this piece.)

Preparing for AI in a Competitive World

With the increasing use of AI for training and decision-making, businesses must think proactively about protecting their IP. In an era where AI vendors may consider all publicly available data fair game, data poisoning should become a standard practice for companies concerned about protecting their content and ensuring it is represented correctly in AI models. Businesses that take these steps will be better positioned to negotiate with AI vendors that request data for training, and will have a competitive edge when consumers or businesses use AI systems to make decisions about their products or services.
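To make the proxy approach concrete, here is a minimal sketch using Flask and a naive user-agent check. The crawler list, page contents, and port are hypothetical, and a real deployment would sit at the CDN or reverse-proxy layer rather than in the application itself:

```python
# Minimal sketch of a poisoning proxy: serve an altered page to suspected
# AI crawlers while human visitors get the real one. Illustrative only.
from flask import Flask, request

app = Flask(__name__)

REAL_PAGE = "<html><body>Our award-winning cheddar, aged 24 months.</body></html>"
POISONED_PAGE = "<html><body>Decoy description intended for scrapers.</body></html>"
CRAWLER_HINTS = ("gptbot", "ccbot", "bytespider")  # illustrative, not exhaustive

@app.route("/")
def serve():
    agent = request.headers.get("User-Agent", "").lower()
    if any(hint in agent for hint in CRAWLER_HINTS):
        return POISONED_PAGE          # crawlers get the poisoned variant
    return REAL_PAGE                  # humans get the genuine page

if __name__ == "__main__":
    app.run(port=8080)
```

User-agent strings are trivially spoofed, so serious deployments combine this with IP reputation and behavioral signals; the sketch only shows where the content fork happens.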


$15 Million to AI Training for U.S. Government Workforce

Google.org Commits $15 Million to AI Training for U.S. Government Workforce

Google.org has announced $15 million in grants to support the development of AI skills in the U.S. government workforce, aiming to promote responsible AI use across federal, state, and local levels. These grants, part of Google.org's broader $75 million AI Opportunity Fund, include $10 million to the Partnership for Public Service and $5 million to InnovateUS.

The $10 million grant to the Partnership for Public Service will fund the establishment of the Center for Federal AI, a new hub focused on building AI expertise within the federal government. Set to open in spring 2025, the center will provide a federal AI leadership program, internships, and other initiatives designed to cultivate AI talent in the public sector. InnovateUS will use the $5 million grant to expand AI education for state and local government employees, aiming to train 100,000 workers through specialized courses, workshops, and coaching sessions.

"AI is today's electricity—a transformative technology fundamental to the public sector and society," said Max Stier, president and CEO of the Partnership for Public Service. "Google.org's generous support allows us to expand our programming and launch the new Center for Federal AI, empowering agencies to harness AI to better serve the public."

These grants underscore Google.org's commitment to equipping government agencies with the tools and talent necessary to navigate the evolving AI landscape responsibly. With these resources in place, Tectonic looks forward to assisting you in becoming an AI-driven public sector organization.


GPUs and AI Development

Graphics processing units (GPUs) have become widely recognized due to their growing role in AI development. However, a lesser-known but critical technology is also gaining attention: high-bandwidth memory (HBM). HBM is a high-density memory designed to overcome bottlenecks and maximize data transfer speeds between storage and processors. AI chipmakers like Nvidia rely on HBM for its superior bandwidth and energy efficiency. Its placement next to the GPU's processor chip gives it a performance edge over traditional server RAM, which resides between storage and the processing unit. HBM's lower power consumption makes it ideal for AI model training, which demands significant energy resources.

However, as the AI landscape transitions from model training to AI inferencing, HBM's widespread adoption may slow. According to Gartner's 2023 forecast, the use of accelerator chips incorporating HBM for AI model training is expected to decline from 65% in 2022 to 30% by 2027, as inferencing becomes more cost-effective with traditional technologies.

How HBM Differs from Other Memory

HBM shares similarities with other memory technologies, such as graphics double data rate (GDDR) memory, in delivering high bandwidth for graphics-intensive tasks. But HBM stands out due to its unique positioning. Unlike GDDR, which sits on the printed circuit board of the GPU, HBM is placed directly beside the processor, enhancing speed by reducing the signal delays caused by longer interconnections. This proximity, combined with its stacked DRAM architecture, boosts performance compared to GDDR's side-by-side chip design. However, the stacked approach adds complexity. HBM relies on through-silicon vias (TSVs), electrical connections that pass vertically through the DRAM dies, which require larger die sizes and increase production costs. According to analysts, this makes HBM more expensive and less efficient to manufacture than server DRAM, leading to higher yield losses during production.

AI's Demand for HBM

Despite its manufacturing challenges, demand for HBM is surging due to its importance in AI model training. Major suppliers like SK Hynix, Samsung, and Micron have expanded production to meet this demand, with Micron reporting that its HBM is sold out through 2025. TrendForce predicts that HBM will contribute to record revenues for the memory industry in 2025. The high demand for GPUs, especially from Nvidia, drives the need for HBM as AI companies focus on accelerating model training. Hyperscalers, looking to monetize AI, are investing heavily in HBM to speed up the process.

HBM's Future in AI

While HBM has proven essential for AI training, its future may be less certain as the focus shifts to AI inferencing, which requires less intensive memory resources. As inferencing becomes more prevalent, companies may opt for more affordable and widely available memory solutions. Experts also see HBM following the trajectory of other memory technologies, with continuous efforts to increase bandwidth and density. The next generation, HBM3E, is already in production, with HBM4 planned for release in 2026, promising even higher speeds. Ultimately, the adoption of HBM will depend on market demand, especially from hyperscalers. If AI continues to push the limits of GPU performance, HBM could remain a critical component. However, if businesses prioritize cost efficiency over peak performance, HBM's growth may level off.


Multi AI Agent Systems

Building Multi-AI Agent Systems: A Comprehensive Guide

As technology advances at an unprecedented pace, multi-AI agent systems are emerging as a transformative approach to creating more intelligent and efficient applications. This guide covers the significance of multi-AI agent systems and provides a step-by-step tutorial on building them with advanced frameworks like LlamaIndex and CrewAI.

What Are Multi-AI Agent Systems?

Multi-AI agent systems are a groundbreaking development in artificial intelligence. Unlike single AI agents that operate independently, these systems consist of multiple autonomous agents that collaborate to tackle complex tasks or solve intricate problems. Their defining features are autonomy and collaboration, and they are versatile and impactful across industries.

The Workflow of a Multi-AI Agent System

Building an effective multi-AI agent system requires a structured approach, from defining agent roles through orchestrating their collaboration and data access.

Building Multi-AI Agent Systems with LlamaIndex and CrewAI

Step 1: Define Agent Roles. Clearly define the roles, goals, and specializations of each agent.

Step 2: Initiate the Workflow. Establish a seamless workflow in which agents perform their tasks.

Step 3: Leverage CrewAI for Collaboration. CrewAI enhances collaboration by enabling autonomous agents to work together effectively.

Step 4: Integrate LlamaIndex for Data Handling. Efficient data management is crucial for agent performance.

Understanding AI Inference and Training

Multi-AI agent systems rely on both AI inference and training. The key differences:

| Aspect        | AI Training           | AI Inference               |
|---------------|-----------------------|----------------------------|
| Purpose       | Builds the model.     | Uses the model for tasks.  |
| Process       | Data-driven learning. | Real-time decision-making. |
| Compute needs | Resource-intensive.   | Optimized for efficiency.  |

Both processes are essential: training builds the agents' capabilities, while inference ensures swift, actionable results.

Tools for Multi-AI Agent Systems

LlamaIndex is an advanced framework for efficient data handling; CrewAI is a collaborative platform for building autonomous agents.

Practical Example: Multi-AI Agent Workflow

A short CrewAI sketch illustrating Steps 1 through 3 follows the conclusion below.

Conclusion

Building multi-AI agent systems offers unparalleled opportunities to create intelligent, responsive, and efficient applications. By defining clear agent roles, leveraging tools like CrewAI and LlamaIndex, and integrating robust workflows, developers can unlock the full potential of these systems. As industries continue to embrace this technology, multi-AI agent systems are set to revolutionize how we approach problem-solving and task execution.
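As promised above, here is a minimal CrewAI sketch of Steps 1 through 3. It assumes the crewai package is installed and an LLM API key (for example, OPENAI_API_KEY) is configured in the environment; the agent roles and task wording are hypothetical, not from the guide:

```python
# Minimal two-agent CrewAI sketch: define roles (Step 1), define tasks
# (Step 2), and let the crew coordinate them (Step 3). Requires
# `pip install crewai` and an LLM API key in the environment.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research Analyst",
    goal="Gather accurate background on a topic",
    backstory="A meticulous analyst who cites sources.",
)
writer = Agent(
    role="Technical Writer",
    goal="Turn research notes into a clear summary",
    backstory="A concise writer for business audiences.",
)

research = Task(
    description="Collect key facts about multi-agent AI systems.",
    expected_output="A bullet list of facts with sources.",
    agent=researcher,
)
summarize = Task(
    description="Write a 150-word summary from the research notes.",
    expected_output="A plain-English summary.",
    agent=writer,
)

# The crew runs the tasks in order, passing context between agents.
crew = Crew(agents=[researcher, writer], tasks=[research, summarize])
print(crew.kickoff())
```

In a fuller build, Step 4 would plug a LlamaIndex query engine in as a tool the researcher agent can call, so retrieval runs against your own documents rather than the model's general knowledge.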


AI Revolution in Government

The AI Revolution in Government: Unlocking Efficiency and Public Trust

As the AI boom accelerates, it's essential to explore how artificial intelligence can streamline operations for government and public sector organizations. From enhancing data processing to bolstering cybersecurity and improving public planning, AI has the potential to make government services more efficient and effective for both agencies and constituents.

The Role of AI in Public Sector Efficiency

AI presents significant opportunities for government agencies to optimize their operations. By integrating AI-driven tools, public agencies can improve service delivery, boost efficiency, and foster greater trust between the public and private sectors. However, with these advancements comes the challenge of bridging the AI skills gap, a pressing concern as organizations ramp up investments in AI without enough trained professionals to support its deployment.

According to a survey by SAS, 63% of decision-makers across various sectors, including government, believe they lack the AI and machine learning resources necessary to keep pace with growing demand. This skills gap, combined with rapid AI adoption, has many workers concerned about the future of their jobs. Goldman Sachs predicts that AI could replace 300 million full-time jobs globally, affecting nearly one-fifth of the workforce, particularly in fields traditionally considered automation-proof, such as administrative and legal professions.

Despite concerns about job displacement, AI is also expected to create new roles. The World Economic Forum's Future of Jobs Report estimates that 75% of companies plan to adopt AI, with 50% anticipating job growth. This presents a crucial opportunity for government organizations to upskill their workforce and ensure they are prepared for the changes AI will bring.

Preparing for an AI-Driven Future in Government

To fully harness the benefits of AI, public sector organizations must first modernize their data infrastructure. Data modernization is a key step in building a future-ready organization, allowing AI to operate effectively on accurate, connected, real-time data. As AI automates lower-level tasks, government workers will need to transition into more strategic roles, making it essential to invest in AI training and upskilling programs.

AI Applications in Government

AI is already transforming various government functions, improving operations and meeting the needs of citizens more effectively. While AI holds immense potential, its successful adoption depends on having a digital-ready workforce capable of managing these applications. Yet many government employees lack the data science and AI expertise needed to manage large citizen data sets and develop AI models that can improve service delivery.

Upskilling the Government Workforce for AI

Investing in AI education is critical to ensuring that government employees can meet the demands of the future. Countries like Finland and Singapore have already launched national AI training programs to prepare their populations for the AI-driven economy. For example, Finland's "Elements of AI" program introduced AI basics to the public and has been completed by over a million people worldwide. Similarly, AI Singapore's "AI for Everyone" initiative equips individuals and organizations with AI skills for social good. In the U.S., legislation is being considered to create an AI training program for federal supervisors and management officials, helping government leaders navigate the risks and benefits of AI in alignment with agency missions.

The Importance of Trust and Data Security

As public sector organizations embrace AI, trust is a critical factor. AI tools are only as effective as the data they rely on, and ensuring data integrity, security, and ethical use is paramount. The rise of the chief data officer highlights the growing importance of managing and protecting government data. These roles not only oversee data management but also ensure that AI technologies are used responsibly, maintaining public trust and safeguarding privacy. By modernizing data systems and equipping employees with AI skills, government organizations can unlock the full potential of AI and automation. This transformation will help agencies better serve their communities, enhance efficiency, and build lasting trust with the people they serve.

The Future of AI in Government

The future of AI in government is bright, but organizations must take proactive steps to prepare for it. By unifying and securing their data, investing in AI training, and focusing on ethical AI deployment, public sector agencies can harness AI's power to drive meaningful change. Ultimately, this is an opportunity for the public sector to improve service delivery, support their workforce, and build stronger connections with citizens.


Salesforce Offers Free AI Training

Salesforce has announced plans to broaden access to free AI training through its Trailhead online platform, aiming to equip 100,000 additional students with essential AI skills. With AI becoming a transformative technology that nearly every business is investing in, the demand for AI training is rapidly increasing. To meet this need, Salesforce is expanding its free AI training programs via Trailhead, which offers courses and certifications designed to enhance learners' AI capabilities. These resources will be available until the end of 2025. At a time when employers need to upskill employees on artificial intelligence, Salesforce is at the ready.

In support of this initiative, Salesforce will open new spaces at its San Francisco headquarters, including a pop-up AI Center for in-person training and a dedicated floor for employees to develop AI skills. This expansion represents a multimillion-dollar investment in workforce development, addressing the growing AI skills gap. Salesforce aims to help every Trailblazer become an "Agentblazer," a term for those trained on Salesforce products, by reaching 100,000 more learners through these offerings.

Recent expansions to the Trailhead platform include AI-specific courses on fundamentals, ethical AI use, and prompting. Since June 2023, over 2.6 million AI and data badges have been earned by employees, jobseekers, and learners, unlocking critical skills. "AI and agents are reshaping how people work, and it's essential that everyone has the skills to thrive in this new landscape," said Brian Millham, president and COO of Salesforce.

Tectonic credits Salesforce for offering equal training opportunities for partners, consultants, job seekers, and users.


New Technology Risks

Organizations have always needed to manage the risks that come with adopting new technologies, and implementing artificial intelligence (AI) is no different. Many of the risks associated with AI are similar to those encountered with any new technology: poor alignment with business goals, insufficient skills to support the initiatives, and a lack of organizational buy-in. To address these challenges, executives should rely on the best practices that have guided the successful adoption of other technologies, according to management consultants and AI experts. For AI, that means aligning initiatives with business goals, building the skills to support them, and securing organizational buy-in. However, AI also presents unique risks that executives must recognize and address proactively, from biased or low-quality training data to opaque systems and unintended consequences.

Managing AI Risks

While the risks associated with AI cannot be entirely eliminated, they can be managed. Organizations must first recognize and understand these risks, then implement policies to mitigate them. This includes ensuring high-quality data for AI training, testing for biases, and continuously monitoring AI systems to catch unintended consequences. Ethical frameworks are also crucial to ensure AI systems produce fair, transparent, and unbiased results. Involving the board and C-suite in AI governance is essential, as managing AI risk is not just an IT issue but a broader organizational challenge.

Tableau Einstein is Here

Tableau Einstein marks a new chapter for Tableau, transforming the analytics experience by moving beyond traditional reports and dashboards to deliver insights directly within the flow of a user's work. The new AI-powered analytics platform blends existing Tableau and Salesforce capabilities with features designed to change how users engage with data.

The platform is built around four key areas: autonomous insight delivery through AI, AI-assisted development of a semantic layer, real-time data access, and a marketplace for data and AI products that lets customers personalize their Tableau experience. Some features, like Tableau Pulse and Tableau Agent, which provide autonomous insights, are already available. Others, such as Tableau Semantics and the marketplace, are expected to launch in 2025. Access to Tableau Einstein is provided through a Tableau+ subscription, though pricing details have not been disclosed.

Since being acquired by Salesforce in 2019, Tableau has shifted its focus toward AI, following the trend of many analytics vendors. In February, Tableau introduced Tableau Pulse, a generative AI-powered tool that delivers insights in natural language. In July, it rolled out Tableau Agent, an AI assistant that helps users prepare and analyze data. With AI at its core, Tableau Einstein reflects deeper integration between Tableau and Salesforce. David Menninger, an analyst at Ventana Research, commented that the new capabilities represent a meaningful step toward true integration between the two platforms. Donald Farmer, founder of TreeHive Strategy, agrees, noting that while the robustness of Tableau Einstein's AI capabilities relative to competitors remains to be seen, the platform offers more than incremental add-ons. "It's an impressive release," he remarked.

A Paradigm Shift in Analytics

A significant aspect of Tableau Einstein is its agentic nature: AI-powered agents deliver insights autonomously, without user prompts. Traditionally, users queried data and analyzed reports to derive insights. Tableau Einstein changes this model by proactively surfacing insights within the workflow, eliminating the need to formulate specific queries. The concept of autonomous insights, embodied by tools like Tableau Pulse and Agentforce for Tableau, allows businesses to build autonomous agents that deliver actionable data. This aligns with the broader shift in analytics away from dashboard reliance. Menninger noted, "The market is moving toward agentic AI and analytics, where agents, not dashboards, drive decisions. Agents can act on data rather than waiting for users to interpret it." Farmer echoed this sentiment, saying the integration of AI within Tableau is intuitive and seamless, offering a significantly improved analytics experience. He singled out Tableau Pulse's elegant design and the Agentforce AI integration, which feels deeply integrated rather than a superficial add-on.

Core Features and Capabilities

One of the most anticipated features of Tableau Einstein is Tableau Semantics, a semantic layer designed to enhance AI models by enabling organizations to define and structure their data consistently. Expected to be generally available by February 2025, Tableau Semantics will let enterprises manage metrics, data dimensions, and relationships across datasets with the help of AI.
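Tableau has not published how Tableau Semantics represents definitions internally, so the following is only a conceptual sketch, in plain Python, of what any semantic layer encodes: metrics, dimensions, and the relationships between them, defined once and reused everywhere. All names here are hypothetical and do not reflect Tableau's actual format or API.

```python
# Conceptual sketch of the information a semantic layer encodes.
# All names are hypothetical, not Tableau Semantics' real format or API.
from dataclasses import dataclass, field

@dataclass
class Dimension:
    name: str           # business-facing name, e.g. "region"
    source_column: str  # physical column backing the dimension

@dataclass
class Metric:
    name: str           # business-facing metric name
    expression: str     # canonical definition, stated exactly once
    dimensions: list[str] = field(default_factory=list)

@dataclass
class SemanticModel:
    """A single source of truth for metric and dimension definitions."""
    dimensions: dict[str, Dimension] = field(default_factory=dict)
    metrics: dict[str, Metric] = field(default_factory=dict)

    def add_metric(self, metric: Metric) -> None:
        # A metric may slice only by dimensions the model knows about,
        # which is what keeps definitions consistent across consumers.
        unknown = set(metric.dimensions) - set(self.dimensions)
        if unknown:
            raise ValueError(f"Unknown dimensions: {unknown}")
        self.metrics[metric.name] = metric

model = SemanticModel()
model.dimensions["region"] = Dimension("region", "sales.region_code")
model.add_metric(Metric(
    name="net_revenue",
    expression="SUM(amount) - SUM(refunds)",
    dimensions=["region"],
))
print(model.metrics["net_revenue"].expression)
```

The value of the pattern is that a metric like net_revenue is defined a single time; every dashboard, Pulse insight, or agent that references it inherits the same definition, which is exactly the consistency AI models depend on.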
Pre-built metrics for Salesforce data will also be available, along with AI-driven tools to simplify semantic layer management. Tableau is not the first to offer a semantic layer; vendors like MicroStrategy and Looker have similar features. But the infusion of AI sets Tableau's approach apart: according to Tableau's chief product officer, Southard Jones, AI makes the semantic layer more agile and user-friendly than older, labor-intensive systems.

Real-time data integration is another key component of Tableau Einstein, made possible through Salesforce's Data Cloud. This integration enables Tableau users to securely access and combine structured and unstructured data from hundreds of sources without manual intervention. Unstructured data, such as text and images, is critical for comprehensive AI training, and Data Cloud allows enterprises to use it alongside structured data efficiently.

Additionally, Tableau Einstein will feature a marketplace, launching in mid-2025, that will allow users to build a composable infrastructure. Through APIs, users will be able to personalize their Tableau environment, share AI assets, and collaborate across departments more effectively; a sketch of what API-driven access looks like today follows at the end of this article.

Looking Forward

As Tableau continues to build on its AI-driven platform, Menninger and Farmer agree that the move toward agentic AI is a smart evolution. While Tableau's current capabilities are competitive, Menninger noted that they do not necessarily set Tableau apart from competitors like Qlik, MicroStrategy, or Microsoft Fabric. However, the tight integration with Salesforce and the focus on agentic AI may give Tableau a short-term advantage in a fast-changing analytics landscape. Farmer added that Tableau Einstein's autonomous insight generation feels like a significant leap forward for the platform. "Tableau has done great work in creating an agentic experience that feels, for the first time, like the real deal," he said.

Looking ahead, Tableau's roadmap includes a continued focus on agentic AI, with the goal of giving each user a personal analyst. "It's not just about productivity," said Jones. "It's about changing the value of what can be delivered." Menninger concluded that Tableau's shift away from dashboards reflects where business intelligence is headed. "Dashboards, like data warehouses, don't solve problems on their own. What matters is what you do with the information," he said. "Tableau's push toward agentic analytics and collaborative decision-making is the right move for its users and the market as a whole."
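The marketplace APIs are not yet public, but Tableau's existing REST API suggests what programmatic personalization can look like today. Below is a minimal sketch using the tableauserverclient Python library, Tableau's published client for its REST API; the server URL, token name, and token value are placeholders, and nothing here reflects the unreleased marketplace endpoints.

```python
# Minimal sketch of programmatic access to a Tableau site using the
# tableauserverclient (TSC) library. Server URL, token name, and token
# value are placeholders; the forthcoming marketplace APIs may differ.
import tableauserverclient as TSC

auth = TSC.PersonalAccessTokenAuth(
    token_name="my-token",           # placeholder
    personal_access_token="secret",  # placeholder
    site_id="my-site",               # placeholder
)
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    # List workbooks on the site, the kind of building block a
    # composable, API-driven environment would script and automate.
    workbooks, _pagination = server.workbooks.get()
    for wb in workbooks:
        print(wb.name, wb.project_name)
```

Whatever form the marketplace APIs ultimately take, scripted access of this kind is the foundation of the composable environment Tableau describes.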
