ChatGPT Archives - gettectonic.com
Statement Accuracy Prediction based on Language Model Activations

When users first began interacting with ChatGPT, they noticed an intriguing behavior: the model would often reverse its stance when told it was wrong. This raised concerns about the reliability of its outputs. How can users trust a system that appears to contradict itself? Recent research has revealed that large language models (LLMs) not only generate inaccurate information (often referred to as “hallucinations”) but are also aware of their inaccuracies. Despite this awareness, these models present their responses confidently.

Unveiling LLM Awareness of Hallucinations

Researchers discovered this phenomenon by analyzing the internal mechanisms of LLMs. Whenever an LLM generates a response, it transforms the input query into a numerical representation and performs a series of computations before producing the output. At intermediate stages, these numerical representations are called “activations.” Activations contain significantly more information than what is reflected in the final output, and by scrutinizing them, researchers can identify whether the LLM “knows” its response is inaccurate. A technique called SAPLMA (Statement Accuracy Prediction based on Language Model Activations) has been developed to explore this capability: it examines the internal activations of LLMs to predict whether their outputs are truthful.

Why Do Hallucinations Occur?

LLMs function as next-word prediction models: each word is selected based on its likelihood given the preceding words. For example, starting with “I ate,” the model chooses the next word from a probability distribution over candidate continuations. The issue arises when earlier predictions constrain subsequent outputs. Once the model commits to a word, it cannot go back and revise that choice, so a single early misstep can force the rest of the response down an incorrect path. This mechanism reveals how the constraints of next-word prediction can lead to hallucinations, even when the model “knows” it is generating an incorrect response.
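The commit-and-continue loop described above can be sketched in a few lines. The vocabulary and probability tables here are invented for illustration; a real LLM derives them from billions of parameters, but the greedy next-word mechanism is the same:

```python
# Toy next-word predictor: picks the most likely continuation at each step.
# Probability tables are invented for illustration only.
NEXT_WORD_PROBS = {
    "I": {"ate": 0.6, "ran": 0.4},
    "ate": {"pizza": 0.5, "breakfast": 0.3, "quickly": 0.2},
    "pizza": {"yesterday": 0.7, "today": 0.3},
}

def generate(prompt_word, steps):
    words = [prompt_word]
    for _ in range(steps):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break  # no known continuation: stop generating
        # Greedy decoding: commit to the single most probable word.
        # Once chosen, the model never revisits this decision.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("I", 3))  # -> I ate pizza yesterday
```

Each step depends only on what has already been emitted, which is exactly why an early wrong commitment cannot be undone later.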
Detecting Inaccuracies with SAPLMA

To investigate whether an LLM recognizes its own inaccuracies, researchers developed the SAPLMA method, which feeds activations extracted from the model to a separate classifier. The classifier itself is a simple neural network with three dense layers, culminating in a binary output that predicts the truthfulness of the statement.

Results and Insights

The SAPLMA method achieved an accuracy of 60–80%, depending on the topic. While this is a promising result, it is not perfect and has notable limitations. However, if LLMs can learn to detect inaccuracies during the generation process, they could potentially refine their outputs in real time, reducing hallucinations and improving reliability.

The Future of Error Mitigation in LLMs

The SAPLMA method represents a step forward in understanding and mitigating LLM errors. Accurate classification of inaccuracies could pave the way for models that can self-correct and produce more reliable outputs. While the current limitations are significant, ongoing research could lead to substantial improvements in LLM performance. By combining techniques like SAPLMA with advancements in LLM architecture, researchers aim to build models that are not only aware of their errors but capable of addressing them dynamically, enhancing both the accuracy and trustworthiness of AI systems.
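A minimal sketch of a SAPLMA-style classifier head, assuming the activation vector has already been extracted from the LLM. The layer widths and the 4096-dimensional input are illustrative assumptions, not the paper’s exact configuration, and the weights here are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b, relu=True):
    """One fully connected layer; ReLU on hidden layers."""
    z = x @ w + b
    return np.maximum(z, 0.0) if relu else z

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative shapes: a 4096-dim activation vector reduced through
# three dense layers to a single truthfulness probability.
D, H1, H2, H3 = 4096, 256, 128, 64
params = {
    "w1": rng.normal(0, 0.02, (D, H1)),   "b1": np.zeros(H1),
    "w2": rng.normal(0, 0.02, (H1, H2)),  "b2": np.zeros(H2),
    "w3": rng.normal(0, 0.02, (H2, H3)),  "b3": np.zeros(H3),
    "w_out": rng.normal(0, 0.02, (H3, 1)), "b_out": np.zeros(1),
}

def predict_truthful(activation, p):
    h = dense(activation, p["w1"], p["b1"])
    h = dense(h, p["w2"], p["b2"])
    h = dense(h, p["w3"], p["b3"])
    logit = dense(h, p["w_out"], p["b_out"], relu=False)
    return sigmoid(logit)[0]  # probability the statement is true

activation = rng.normal(size=D)  # stand-in for a real hidden-state vector
print(predict_truthful(activation, params))
```

In practice the classifier would be trained on activations from statements with known truth labels; the point of the sketch is only the shape of the pipeline: hidden state in, truthfulness probability out.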

Scope of Generative AI

Exploring Generative AI

Like most employees at most companies, I wear a few different hats around Tectonic. Whether I’m building a data model, creating and scheduling an email campaign, or standing up a platform, generative AI is always at my fingertips. At my very core, I’m a marketer. I have been for so long that I do it without even thinking; or at least, everything I do has a hat tip to its future marketing needs. Today I want to share some of the AI content generators I’ve been using, am looking to use, or have just heard about. But before we dig into the insight, here’s a primer.

Types of AI Content Generators

ChatGPT, a powerful AI chatbot, drew significant attention upon its November 2022 release. While the GPT-3 language model behind it had existed for some time, ChatGPT made this technology accessible to nontechnical users, showcasing how AI can generate content. Over two years later, numerous AI content generators have emerged to cater to diverse use cases. This rapid development raises questions about the technology’s impact on work: schools are grappling with fears of plagiarism while others are embracing AI, and legal debates about copyright and digital media authenticity continue. President Joe Biden’s October 2023 executive order addressed AI’s risks and opportunities in areas like education, workforce, and consumer privacy, underscoring generative AI’s transformative potential.

What is AI-Generated Content?

AI-generated content, also known as generative AI, refers to algorithms that automatically create new content across digital media. These algorithms are trained on extensive datasets and require minimal user input to produce novel outputs. ChatGPT, for instance, sets a standard for AI-generated content: based on GPT-4o, it processes text, images, and audio, offering natural language and multimodal capabilities. Many other generative AI tools operate similarly, leveraging large language models (LLMs) and multimodal frameworks to create diverse outputs.
What are the Different Types of AI-Generated Content?

AI-generated content spans multiple media types, including written, image, music, and code generation. Despite their varied outputs, most generative AI systems are built on advanced LLMs like GPT-4 and Google Gemini. These multimodal models process and generate content across multiple formats, with enhanced capabilities evolving over time.

How Generative AI is Used

Generative AI applications span industries, and these tools often combine outputs from various media for complex, multifaceted projects.

AI Content Generators

AI content generators exist across various media: written content, image content, music content, code content, and more. These tools showcase how AI-powered content generation is revolutionizing industries, making content creation faster and more accessible. I do hope you will comment below on your favorites, other AI tools not showcased above, or anything else AI-related that is on your mind.

Written by Tectonic’s Marketing Operations Director, Shannan Hearne.

From Chatbots to Agentic AI

The transition from LLM-powered chatbots to agentic systems, or agentic AI, can be summed up by the old saying: “Less talk, more action.” Keeping up with advancements in AI can be overwhelming, especially when managing an existing business. The speed and complexity of innovation can make it feel like the first day of school all over again. This insight offers a comprehensive look at AI agents, their components, and key characteristics. The introductory section breaks down the elements that form the term “AI agent,” providing a clear definition. After establishing this foundation, we explore the evolution of LLM applications, particularly the shift from traditional chatbots to agentic systems. The goal is to understand why AI agents are becoming increasingly vital in AI development and how they differ from LLM-powered chatbots. By the end of this guide, you will have a deeper understanding of AI agents, their potential applications, and their impact on organizational workflows.

What is an AI Agent?

To understand the term “AI agent,” we need to examine its two main components. First, consider artificial intelligence. Artificial Intelligence (AI) refers to non-biological intelligence that mimics human cognition to perform tasks traditionally requiring human intellect. Through machine learning and deep learning techniques, algorithms, especially neural networks, learn patterns from data. AI systems are used for tasks such as detection, classification, and prediction, with content generation becoming a prominent domain due to transformer-based models. These systems can match or exceed human performance in specific scenarios. The second component is “agent,” a term commonly used in both technology and human contexts.
In computer science, an agent refers to a software entity with environmental awareness, able to perceive and act within its surroundings. In human contexts, an agent is someone who acts on behalf of another person or organization, making decisions, gathering information, and facilitating interactions; agents often play intermediary roles in transactions and decision-making. To define an AI agent, we combine these two perspectives: it is a computational entity with environmental awareness, capable of perceiving inputs, acting with tools, and processing information using foundation models backed by both long-term and short-term memory.

From LLMs to AI Agents

Now, let’s take a step back and understand how we arrived at the concept of AI agents, particularly by looking at how LLM applications have evolved. The shift from traditional chatbots to LLM-powered applications has been rapid and transformative.

Traditional Chatbots

Traditional chatbots, which existed before generative AI, were simpler and relied on heuristic responses: “If this, then that.” They followed predefined rules and decision trees to generate responses. These systems had limited interactivity, with the fallback option of “Speak to a human” for complex scenarios.

LLM-Powered Chatbots

The release of OpenAI’s ChatGPT on November 30, 2022, marked the introduction of LLM-powered chatbots, fundamentally changing the game. These chatbots, like ChatGPT, were built on GPT-3.5, a large language model trained on massive datasets. Unlike traditional chatbots, LLM-powered systems can generate human-like responses, offering a much more flexible and intelligent interaction. However, challenges remained.
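The “if this, then that” behavior of the traditional chatbots described above fits in a few lines of code. The rules here are invented for illustration:

```python
# A heuristic chatbot: ordered keyword rules plus a human-handoff fallback.
# The first matching rule wins, exactly like a decision tree branch.
RULES = [
    ("refund", "Refunds are processed within 5 business days."),
    ("hours", "We are open 9am-5pm, Monday through Friday."),
    ("password", "Use the 'Forgot password' link on the login page."),
]

def reply(message):
    text = message.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    # No rule matched: the classic pre-LLM fallback.
    return "Let me connect you with a human agent."

print(reply("What are your hours?"))
print(reply("Tell me a joke"))
```

Anything outside the rule set falls through to the handoff, which is precisely the rigidity that LLM-powered chatbots later removed.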
LLM-powered chatbots struggled with personalization and consistency, often generating plausible but incorrect information, a phenomenon known as “hallucination.” This led to efforts to ground LLM responses through techniques like retrieval-augmented generation (RAG).

RAG Chatbots

RAG is a method that combines data retrieval with LLM generation, allowing systems to access real-time or proprietary data and improving accuracy and relevance. This hybrid approach addresses the hallucination problem, ensuring more reliable outputs.

From LLM-Powered Chatbots to AI Agents

As LLMs expanded, their abilities grew more sophisticated, incorporating advanced reasoning, multi-step planning, and the use of external tools (function calling). Tool use refers to an LLM’s ability to invoke specific functions, enabling it to perform more complex tasks. As LLMs became tool-augmented, the emergence of AI agents followed. These agents integrate reasoning, planning, and tool use into an autonomous, goal-driven system that can operate iteratively within a dynamic environment. Unlike traditional chatbot interfaces, AI agents leverage a broader set of tools to interact with various systems and accomplish tasks.

Agentic Systems

Agentic systems, computational architectures that include AI agents, embody these advanced capabilities. They can autonomously interact with systems, make decisions, and adapt to feedback, forming the foundation for more complex AI applications. An AI agent is built from several key components, among them a foundation model for reasoning, tools for acting on its environment, and both long-term and short-term memory.

Conclusion

AI agents represent a significant leap from traditional chatbots, offering greater autonomy, complexity, and interactivity. However, the term “AI agent” remains fluid, with no universal industry standard. Instead, it exists on a continuum, with varying degrees of autonomy, adaptability, and proactive behavior defining agentic systems.
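The tool-calling loop at the heart of an agent can be sketched as follows. The “LLM” is replaced here by a hard-coded plan so the example stays self-contained; in a real agent, a model would decide which tool to invoke at each step, and the tool names and knowledge base below are invented for illustration:

```python
# Toy agentic loop: a plan names tools, the loop executes them,
# and each observation becomes context for the next step.
def search_docs(query):
    # Stand-in for a retrieval tool (the RAG pattern).
    kb = {"refund policy": "Refunds are issued within 5 business days."}
    return kb.get(query, "No document found.")

def calculator(expression):
    # Stand-in for a function-calling tool; supports "a + b" only.
    a, op, b = expression.split()
    return str(int(a) + int(b)) if op == "+" else "unsupported"

TOOLS = {"search_docs": search_docs, "calculator": calculator}

def run_agent(plan):
    """Execute a list of (tool_name, argument) steps, collecting observations."""
    observations = []
    for tool_name, argument in plan:
        tool = TOOLS[tool_name]  # look up the function the "model" requested
        observations.append(tool(argument))
    return observations

# A hard-coded plan standing in for LLM-generated reasoning steps.
plan = [("search_docs", "refund policy"), ("calculator", "5 + 2")]
print(run_agent(plan))
```

The essential difference from a chatbot is visible in the structure: the loop acts on the environment through tools and accumulates observations, rather than only emitting text.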
Value and Impact of AI Agents

The key benefits of AI agents lie in their ability to automate manual processes, reduce decision-making burdens, and enhance workflows in enterprise environments. By “agentifying” repetitive tasks, AI agents offer substantial productivity gains and the potential to transform how businesses operate. As AI agents evolve, their applications will only expand, driving new efficiencies and enabling organizations to leverage AI in increasingly sophisticated ways.

Salesforce AI Research Introduces BLIP-3-Video

Salesforce AI Research Introduces BLIP-3-Video: A Groundbreaking Multimodal Model for Efficient Video Understanding

Vision-language models (VLMs) are transforming artificial intelligence by merging visual and textual data, enabling advancements in video analysis, human-computer interaction, and multimedia applications. These tools empower systems to generate captions, answer questions, and support decision-making, driving innovation in industries like entertainment, healthcare, and autonomous systems. However, the exponential growth in video-based tasks has created a demand for more efficient processing solutions that can manage the vast amounts of visual and temporal data inherent in videos.

The Challenge of Scaling Video Understanding

Existing video-processing models face significant inefficiencies. Many rely on processing each frame individually, creating thousands of visual tokens that demand extensive computational resources. This approach struggles with long or complex videos, where balancing computational efficiency and accurate temporal understanding becomes crucial. Attempts to address this issue, such as the pooling techniques used by models like Video-ChatGPT and LLaVA-OneVision, have only partially succeeded, as they still produce thousands of tokens.

Introducing BLIP-3-Video: A Breakthrough in Token Efficiency

To tackle these challenges, Salesforce AI Research has developed BLIP-3-Video, a cutting-edge vision-language model optimized for video processing. The key innovation lies in its temporal encoder, which reduces visual tokens to just 16–32 tokens per video, significantly lowering computational requirements while maintaining strong performance. The temporal encoder employs a spatio-temporal attentional pooling mechanism, selectively extracting the most informative data from video frames. By consolidating spatial and temporal information into compact video-level tokens, BLIP-3-Video streamlines video processing without sacrificing accuracy.
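The token-reduction idea can be illustrated with a toy attentional pooling step: a small set of learned queries attends over all frame tokens and emits a fixed number of video-level tokens, regardless of how many frames went in. This is a simplified sketch of the general mechanism, not the model’s actual encoder, and the embedding dimension is chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentional_pool(frame_tokens, queries):
    """Pool N frame tokens down to K video-level tokens via attention.

    frame_tokens: (N, d) - tokens from all video frames, concatenated
    queries:      (K, d) - learned pooling queries, K << N
    returns:      (K, d) - compact video-level tokens
    """
    scores = queries @ frame_tokens.T / np.sqrt(frame_tokens.shape[1])  # (K, N)
    weights = softmax(scores, axis=1)   # each query attends over every token
    return weights @ frame_tokens       # (K, d) weighted summaries

# 8 frames x 576 tokens/frame = 4608 tokens in, 32 tokens out.
d = 64
frame_tokens = rng.normal(size=(8 * 576, d))
queries = rng.normal(size=(32, d))      # learned in a real model
video_tokens = attentional_pool(frame_tokens, queries)
print(video_tokens.shape)  # (32, 64)
```

However many frame tokens arrive, the output size is fixed by the number of queries, which is what makes the downstream language model’s cost independent of video length.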
Efficient Architecture for Scalable Video Tasks

BLIP-3-Video’s architecture pairs a per-frame vision encoder with the temporal encoder and a language model backbone, a design that efficiently captures essential temporal information while minimizing redundant data.

Performance Highlights

BLIP-3-Video demonstrates remarkable efficiency, achieving accuracy comparable to state-of-the-art models like Tarsier-34B while using a fraction of the tokens. For context, Tarsier-34B requires 4,608 tokens for eight video frames, whereas BLIP-3-Video achieves similar results with only 32 tokens, and the model also excelled on multiple-choice tasks. These results highlight BLIP-3-Video as one of the most token-efficient models in video understanding, offering top-tier performance while dramatically reducing computational costs.

Advancing AI for Real-World Video Applications

BLIP-3-Video addresses the critical challenge of token inefficiency, proving that complex video data can be processed effectively with far fewer resources. Developed by Salesforce AI Research, the model paves the way for scalable, real-time video processing across industries, including healthcare, autonomous systems, and entertainment. By combining efficiency with high performance, BLIP-3-Video sets a new standard for vision-language models, driving the practical application of AI in video-based systems.

ThoughtSpot AI agent Spotter enables conversational BI

ThoughtSpot Unveils Spotter: A Generative AI-Powered Data Agent

ThoughtSpot, a leading analytics vendor, has launched Spotter, an advanced generative AI-powered agent designed to revolutionize how users interact with data. Spotter enables conversational data exploration, contextual understanding, and autonomous analysis, making it a significant leap forward in the analytics landscape.

Spotter’s Role in ThoughtSpot’s Evolution

Spotter replaces Sage, ThoughtSpot’s earlier generative AI-powered interface, which debuted in March 2023. Despite moving from private to public preview and gaining new capabilities, Sage never reached general availability. Spotter is now generally available for ThoughtSpot Analytics, while its embedded version is in beta testing. Unlike earlier AI tools that focused on question-and-answer interactions, such as Sage and Microsoft’s copilots, Spotter takes the concept further by integrating contextual awareness and autonomous decision-making. Spotter doesn’t just respond to queries; it suggests follow-up questions, identifies anomalies, and provides proactive insights, functioning more like a virtual analyst than a reactive chatbot.

Key Features of Spotter

Spotter is built to enhance productivity and insight generation through capabilities such as conversational data exploration, anomaly detection, and proactive insight delivery.

Generative AI’s Growing Impact on BI

ThoughtSpot has long aimed to make analytics accessible to non-technical users through natural language search. However, previous NLP tools often required users to learn specific vocabularies, limiting widespread adoption. Generative AI bridges this gap: by leveraging extensive vocabularies and LLM technology, tools like Spotter enable users of all skill levels to access and analyze data effortlessly. Spotter stands out with its ability to deliver proactive insights, identify trends, and adapt to user behavior, enhancing the decision-making process.
Expert Perspectives on Spotter

Donald Farmer, founder of TreeHive Strategy, highlighted Spotter’s autonomy as a game-changer: “Spotter is a big move forward for ThoughtSpot and AI. The natural language interface is more conversational, but the key advantage is its autonomous analysis, which identifies trends and insights without users needing to ask.” Mike Leone, an analyst at TechTarget’s Enterprise Strategy Group, emphasized Spotter’s ability to adapt to users: “Spotter’s ability to deliver personalized and contextually relevant responses is critical for organizations pursuing generative AI initiatives. This goes a long way in delivering unique value across a business.” Farmer also pointed to Spotter’s embedded capabilities, noting its growing appeal as an embedded analytics solution integrated with productivity tools like Salesforce and ServiceNow.

Competitive Positioning

Spotter aligns ThoughtSpot with other vendors embracing agentic AI in analytics. Google recently introduced Conversational Analytics in Looker, and Salesforce’s Tableau platform now includes Tableau Agent. ThoughtSpot’s approach builds on its core strength in search-based analytics while expanding into generative AI-driven capabilities. Leone observed: “ThoughtSpot is right in line with the market in delivering an agentic experience and is laying the groundwork for broader AI functionality over time.”

A Step Toward the Future of Analytics

With Spotter, ThoughtSpot is redefining the role of AI in business intelligence. The tool combines conversational ease, proactive insights, and seamless integration, empowering users to make data-driven decisions more efficiently. As generative AI continues to evolve, tools like Spotter demonstrate how businesses can unlock the full potential of their data.


1 Billion Enterprise AI Agents

Inside Salesforce’s Ambition to Deploy 1 Billion Enterprise AI Agents

Salesforce is making a bold play in the enterprise AI space with its recently launched Agentforce platform. Introduced at the annual Dreamforce conference, Agentforce is positioned to revolutionize sales, marketing, commerce, and operations with autonomous AI agents, marking a significant evolution from Salesforce’s previous Einstein AI platform.

What Makes Agentforce Different?

Agentforce operates as more than just a chatbot platform. It uses real-time data and user-defined business rules to proactively manage tasks, aiming to boost efficiency and enhance customer satisfaction. Built on Salesforce’s Data Cloud, the platform simplifies deployment while maintaining powerful customization capabilities. “Salesforce takes care of 80% of the foundational work, leaving customers to focus on the 20% that truly differentiates their business,” explains Adam Forrest, SVP of Marketing at Salesforce. Forrest highlights how Agentforce enables businesses to build custom agents tailored to specific needs by incorporating their own rules and data sources. This user-centric approach empowers admins, developers, and technology teams to deploy AI without extensive technical resources.

Early Adoption Across Industries

Major brands have already adopted Agentforce for diverse use cases, and these real-world applications illustrate its potential to transform workflows in industries ranging from retail to hospitality and education.

AI Agents in Marketing: The New Frontier

Salesforce emphasizes that Agentforce isn’t just for operations; it’s poised to redefine marketing. AI agents can automate lead qualification, optimize outreach strategies, and enhance personalization. For example, in account-based marketing, agents can analyze customer data to identify high-value opportunities, craft tailored strategies, and recommend optimal engagement times based on user behavior.
“AI agents streamline lead qualification by evaluating intent signals and scoring leads, allowing sales teams to focus on high-priority prospects,” says Jonathan Franchell, CEO of B2B marketing agency Ironpaper. Once campaigns are launched, Agentforce monitors performance in real time, offering suggestions to improve ROI and resource allocation. By integrating seamlessly with CRM platforms, the tool also facilitates better collaboration between marketing and sales teams. Beyond B2C applications, AI agents in B2B contexts can evaluate customer-specific needs and provide tailored product or service recommendations, further enhancing client relationships.

Enabling Creativity Through Automation

By automating repetitive tasks, Agentforce aims to free marketers to focus on strategy and creativity. Dan Gardner, co-founder of Code and Theory, describes this vision: “Agentic AI eliminates friction and dissolves silos in data, organizational structures, and customer touchpoints. The result? Smarter insights, efficient distribution, and more time for creatives to do what they do best: creating.”

Competitive Landscape and Challenges

Despite its promise, Salesforce faces stiff competition. Microsoft, backed by its integration with OpenAI’s ChatGPT, has unveiled AI tools like Copilot, and other players such as Google, ServiceNow, and HubSpot are advancing their own AI platforms. Salesforce CEO Marc Benioff has not shied away from the rivalry. On the Masters of Scale podcast, he criticized Microsoft for overpromising on products like Copilot, asserting that Salesforce delivers tangible value: “Our tools show users exactly what is possible, what is real, and how easy it is to derive huge value from AI.” Salesforce must also demonstrate Agentforce’s scalability across diverse industries to capture a significant share of the enterprise AI market.
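Lead scoring of the kind described above can be sketched as a simple weighted model over intent signals. The signals, weights, and threshold here are all invented for illustration; a production agent would learn them from CRM data rather than hard-code them:

```python
# Toy lead scoring: weight each intent signal and rank prospects.
SIGNAL_WEIGHTS = {
    "visited_pricing_page": 30,
    "downloaded_whitepaper": 20,
    "opened_last_3_emails": 15,
    "requested_demo": 40,
}
QUALIFIED_THRESHOLD = 50  # illustrative cutoff for "high-priority"

def score_lead(signals):
    """Sum the weights of the signals a lead has exhibited."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def qualify(leads):
    """Return (name, score, high_priority) tuples, highest score first."""
    scored = [(name, score_lead(sigs)) for name, sigs in leads.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(name, s, s >= QUALIFIED_THRESHOLD) for name, s in scored]

leads = {
    "Acme Corp": ["visited_pricing_page", "requested_demo"],
    "Globex": ["opened_last_3_emails"],
}
for name, score, hot in qualify(leads):
    print(name, score, "high-priority" if hot else "")
```

The point of the sketch is the division of labor: the agent handles signal aggregation and ranking continuously, so the sales team only sees the prioritized list.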
A Transformative Vision for the Future

Agentforce represents Salesforce’s commitment to bringing AI-powered automation to the forefront of enterprise operations. With its focus on seamless deployment, powerful customization, and real-time capabilities, the platform aims to reshape how businesses interact with customers and optimize internal processes. By targeting diverse use cases and emphasizing accessibility for both technical and non-technical users, Salesforce is betting on Agentforce to drive adoption at scale and position itself as a leader in the increasingly competitive AI market.

AI Agents Set to Break Through in 2025

2025: The Year AI Agents Transform Work and Life

Despite years of hype around artificial intelligence, its true disruptive impact has so far been limited. However, industry experts believe that’s about to change in 2025 as autonomous AI agents prepare to enter and reshape nearly every facet of our lives. Since OpenAI’s ChatGPT took the world by storm in late 2022, billions of dollars have been funneled into the AI sector. Big tech and startups alike are racing to harness the transformative potential of the technology. Yet, while millions now interact with AI chatbots daily, turning them into tools that deliver tangible business value has proven challenging. A recent study by Boston Consulting Group revealed that only 26% of companies experimenting with AI have progressed beyond proof of concept to derive measurable value. This lag reflects the limitations of current AI tools, which serve primarily as copilots: capable of assisting but requiring constant oversight and remaining prone to errors.

AI Agents Set to Break Through in 2025

The status quo, however, is poised for a radical shift. Autonomous AI agents, capable of independently analyzing information, making decisions, and taking action, are expected to emerge as the industry’s next big breakthrough. “For the first time, technology isn’t just offering tools for humans to do work,” Salesforce CEO Marc Benioff wrote in Time. “It’s providing intelligent, scalable digital labor that performs tasks autonomously. Instead of waiting for human input, agents can analyze information, make decisions, and adapt as they go.” At their core, AI agents leverage the same large language models (LLMs) that power tools like ChatGPT. But these agents take it further, acting as reasoning engines that develop step-by-step strategies to execute tasks. Armed with access to external data sources like customer records or financial databases and equipped with software tools, agents can achieve goals independently.
While current LLMs still face reasoning limitations, advancements are on the horizon. New models like OpenAI’s o1 and DeepSeek’s R1 are specialized for reasoning, sparking hope that 2025 will see agents grow far more capable.

Big Tech and Startups Betting Big

Major players are already gearing up for this new era, and startups are eager to carve out their share of the market. According to PitchBook, funding deals for agent-focused ventures surged by over 80% in 2024, with the median deal value increasing nearly 50%.

Challenges to Overcome

Despite the enthusiasm, significant hurdles remain.

A New World of Work

Despite these challenges, many experts believe 2025 will mark the mainstream adoption of AI agents. No matter the pace, it’s clear that AI agents will dominate the industry’s focus in 2025. If the technology delivers on its promise, the workplace could undergo a profound transformation, enabling entirely new ways of working and automating tasks that once required human intervention. The question isn’t if agents will redefine the way we work; it’s how fast. By the end of 2025, the shift could be undeniable.

Meta Joins the Race to Reinvent Search with AI


Meta, the parent company of Facebook, Instagram, and WhatsApp, is stepping into the evolving AI-driven search landscape. As vendors increasingly embrace generative AI to transform search experiences, Meta aims to challenge Google’s dominance in this space. The company is reportedly developing an AI-powered search engine designed to provide conversational, AI-generated summaries of recent events and news. These summaries would be delivered via Meta’s AI chatbot, supported by a multiyear partnership with Reuters for real-time news insights, according to The Information.

AI Search: A Growing Opportunity

The push comes as generative AI reshapes search technology across the industry. Google, the long-standing leader, has integrated AI features such as AI Overviews into its search platform, offering users summarized search results, product comparisons, and more. This feature, available in over 100 countries as of October 2024, signals a shift in traditional search strategies. Similarly, OpenAI, the creator of ChatGPT, has been exploring its own AI search model, SearchGPT, and forging partnerships with media organizations like the Associated Press and Hearst. However, OpenAI faces legal challenges, such as a lawsuit from The New York Times over alleged copyright infringement. Meta’s entry into AI-powered search aligns with a broader trend among tech giants. “It makes sense for Meta to explore this,” said Mark Beccue, an analyst with TechTarget’s Enterprise Strategy Group. He noted that Meta’s approach seems more targeted at consumer engagement than enterprise solutions, particularly appealing to younger audiences who are shifting away from traditional search behaviors.

Shifting User Preferences

Generational changes in search habits are creating opportunities for new players in the market.
Younger users, particularly Gen Z and Gen Alpha, are increasingly turning to platforms like TikTok for lifestyle advice and Amazon for product recommendations, bypassing traditional search engines like Google. “Recent studies show younger generations are no longer using ‘Google’ as a verb,” said Lisa Martin, an analyst with the Futurum Group. “This opens the playing field for competitors like Meta and OpenAI.” Forrester Research corroborates this trend, noting a diversification in search behaviors. “ChatGPT’s popularity has accelerated this shift,” said Nikhil Lai, a Forrester analyst. He added that these changes could challenge Google’s search ad market, with its dominance potentially waning in the years ahead.

Meta’s AI Search Potential

Meta’s foray into AI search offers an opportunity to enhance user experiences and deepen engagement. Rather than pushing news content into users’ feeds—an approach that has drawn criticism—AI-driven search could empower users to decide what content they see and when they see it. “If implemented thoughtfully, it could transform the user experience and give users more control,” said Martin. This approach could also boost engagement by keeping users within Meta’s ecosystem.

The Race for Revenue and Trust

While AI-powered search is expected to increase engagement, monetization strategies remain uncertain. Google has yet to monetize its AI Overviews, and OpenAI’s plans for SearchGPT remain unclear. Other vendors, like Perplexity AI, are experimenting with models such as sponsored questions instead of traditional results. Trust remains a critical factor in the evolving search landscape. “Google is still seen as more trustworthy,” Lai noted, with users often returning to Google to verify AI-generated information. Despite the competition, the conversational AI search market lacks a definitive leader. “Google dominated traditional search, but the race for conversational search is far more open-ended,” Lai concluded.
Meta’s entry into this competitive space underscores the ongoing evolution of search technology, setting the stage for a reshaped digital landscape driven by AI innovation.

Gen AI Unleashed With Vector Database

Knowledge Graphs and Vector Databases

The Role of Knowledge Graphs and Vector Databases in Retrieval-Augmented Generation (RAG)

In the dynamic AI landscape, Retrieval-Augmented Generation (RAG) systems are revolutionizing data retrieval by combining artificial intelligence with external data sources to deliver contextual, relevant outputs. Two core technologies driving this innovation are Knowledge Graphs and Vector Databases. While fundamentally different in their design and functionality, these tools complement one another, unlocking new potential for solving complex data problems across industries.

Understanding Knowledge Graphs: Connecting the Dots

Knowledge Graphs organize data into a network of relationships, creating a structured representation of entities and how they interact. These graphs emphasize understanding and reasoning through data, offering explainable and highly contextual results.

Vector Databases: The Power of Similarity

In contrast, Vector Databases thrive in handling unstructured data such as text, images, and audio. By representing data as high-dimensional vectors, they excel at identifying similarities, enabling semantic understanding.

Combining Knowledge Graphs and Vector Databases: A Hybrid Approach

While both technologies excel independently, their combination can amplify RAG systems. Knowledge Graphs bring reasoning and structure, while Vector Databases offer rapid, similarity-based retrieval, creating hybrid systems that are more intelligent and versatile.

Knowledge Graphs vs. Vector Databases: Key Differences

Feature: Knowledge Graphs / Vector Databases
Data Type: Structured / Unstructured
Core Strength: Relational reasoning / Similarity-based retrieval
Explainability: High / Low
Scalability: Limited for large datasets / Efficient for massive datasets
Flexibility: Schema-dependent / Schema-free

Future Trends: The Path to Convergence

As AI evolves, the distinction between Knowledge Graphs and Vector Databases is beginning to blur. This convergence is paving the way for smarter, more adaptive systems that can handle both structured and unstructured data seamlessly.

Conclusion

Knowledge Graphs and Vector Databases represent two foundational technologies in the realm of Retrieval-Augmented Generation. Knowledge Graphs excel at reasoning through structured relationships, while Vector Databases shine in unstructured data retrieval. By combining their strengths, organizations can create hybrid systems that offer unparalleled insights, efficiency, and scalability. In a world where data continues to grow in complexity, leveraging these complementary tools is essential. Whether building intelligent healthcare systems, enhancing recommendation engines, or powering semantic search, the synergy between Knowledge Graphs and Vector Databases is unlocking the next frontier of AI innovation, transforming how industries harness the power of their data.
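The hybrid approach can be illustrated with a minimal sketch: a dictionary-based knowledge graph for explicit, explainable relational lookups, alongside brute-force cosine similarity over toy embedding vectors for the vector-database side. All entities, relations, and vectors here are made-up illustrations; a real system would use learned embeddings and a dedicated vector index rather than a linear scan.

```python
import math

# Toy knowledge graph: entity -> list of (relation, entity) edges.
GRAPH = {
    "aspirin": [("treats", "headache"), ("interacts_with", "warfarin")],
    "ibuprofen": [("treats", "headache")],
}

# Toy 3-dimensional "embeddings"; real systems use learned, high-dim vectors.
VECTORS = {
    "aspirin": [0.9, 0.1, 0.0],
    "ibuprofen": [0.8, 0.2, 0.1],
    "warfarin": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def similar_entities(query_vec, k=2):
    """Vector-database side: top-k entities by cosine similarity."""
    ranked = sorted(VECTORS, key=lambda e: cosine(query_vec, VECTORS[e]), reverse=True)
    return ranked[:k]

def related(entity, relation):
    """Knowledge-graph side: follow an explicit, explainable edge."""
    return [tail for rel, tail in GRAPH.get(entity, []) if rel == relation]

# Hybrid retrieval: find semantically similar drugs, then reason over
# their structured relationships before passing context to an LLM.
candidates = similar_entities([0.85, 0.15, 0.05])
treatments = {c: related(c, "treats") for c in candidates}
```

The division of labor mirrors the comparison table: the vector step finds candidates by similarity (fast but opaque), while the graph step answers *why* a candidate is relevant via named edges (explainable but schema-dependent).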


Microsoft Copilot as “Repackaged ChatGPT”

Salesforce CEO Marc Benioff Criticizes Microsoft Copilot as “Repackaged ChatGPT”

Salesforce CEO Marc Benioff took aim at Microsoft’s Copilot AI offerings during Salesforce’s latest quarterly earnings call, dismissing them as a rebranding of OpenAI’s generative AI technology. “In many ways, it’s just repackaged ChatGPT,” Benioff asserted. He contrasted this with Salesforce’s platform, emphasizing its unique ability to operate an entire business. “You won’t find that capability on Microsoft’s website,” he added. Benioff highlighted Agentforce, Salesforce’s autonomous AI agent product, as a transformative force for both Salesforce and its customers. The tool, available on Salesforce’s support portal, is projected to manage up to half of the company’s annual support case volume. The portal currently handles over 60 million sessions and 2 million support cases annually.

Agentforce Adoption and Partner Involvement

Salesforce COO Brian Millham outlined the significant role of partners in driving Agentforce adoption. During the quarter, global partners were involved in 75% of Agentforce deals, including nine of Salesforce’s top 10 wins. More than 80,000 system integrators have completed Agentforce training, and numerous independent software vendors (ISVs) and technology partners are developing and selling AI agents. Millham pointed to Accenture as a notable example, leveraging Agentforce to enhance sales operations for its 52,000 global sellers. “Our partners are becoming agent-first enterprises themselves,” Millham said. Since its general availability on October 24, Agentforce has already secured 200 deals, with thousands more in the pipeline.
Benioff described the tool as part of a broader shift toward digital labor, claiming, “Salesforce is now the largest supplier of digital labor.”

Expanding Use Cases and Market Impact

Agentforce, powered by Salesforce’s extensive data repository of 740,000 documents and 200–300 petabytes of information, supports diverse use cases, including resolving customer issues, qualifying leads, closing deals, and optimizing marketing campaigns. Salesforce has committed to hiring 1,000–2,000 additional salespeople to expand Agentforce adoption further. Benioff positioned Salesforce as the leading enterprise AI provider, citing its 2 trillion weekly transactions through its Einstein AI product. He claimed Salesforce’s unified codebase provides a competitive edge, unlike rival systems that run disparate applications, potentially limiting AI effectiveness. “This is a bold leap into the future of work,” Benioff said, “where AI agents collaborate with humans to revolutionize customer interactions.”

AI Growth Across Salesforce Products

AI-driven growth extended beyond Agentforce to other Salesforce products. Millham noted that AI-related $1 million+ deals more than tripled year over year.

Financial Highlights

For Q3 FY2024, Salesforce reported: Looking ahead, Salesforce expects Q4 revenue between $9.9 billion and $10.1 billion, representing 7%–9% year-over-year growth. The company raised its full fiscal year revenue guidance to .8– billion, an 8%–9% increase.

Industry and Product Insights

Salesforce’s growth was driven by its core clouds and subscription services, with health, life sciences, manufacturing, and automotive industries performing particularly well. However, retail and consumer goods saw slower growth. While subscription revenue for MuleSoft and Tableau decelerated, Salesforce’s broader portfolio continued to deliver robust performance.
Benioff concluded by emphasizing the transformative potential of Salesforce’s AI ecosystem: “This is the next evolution of Salesforce—an intelligent, scalable technology that’s no longer tied to workforce growth.”

AI Energy Solution


Could the AI Energy Solution Make AI Unstoppable?

The Rise of Brain-Based AI

In 2002, Jason Padgett, a furniture salesman from Tacoma, Washington, experienced a life-altering transformation after a traumatic brain injury. Following a violent assault, Padgett began to perceive the world through intricate patterns of geometry and fractals, developing a profound, intuitive grasp of advanced mathematical concepts—despite no formal education in the subject. His extraordinary abilities, emerging from the brain’s adaptation to injury, revealed an essential truth: the human brain’s remarkable capacity for resilience and reorganization. This phenomenon underscores the brain’s reliance on inhibition, a critical mechanism that silences or separates neural processes to conserve energy, clarify signals, and enable complex cognition. Researcher Iain McGilchrist highlights that this ability to step back from immediate stimuli fosters reflection and thoughtful action. Yet this foundational trait—key to the brain’s efficiency and adaptability—is absent from today’s dominant AI models. Current AI systems, like the Transformers powering tools such as ChatGPT, lack inhibition. These models rely on probabilistic predictions derived from massive datasets, resulting in inefficiencies and an inability to learn independently. However, the rise of brain-based AI seeks to emulate aspects of inhibition, creating systems that are not only more energy-efficient but also capable of learning from real-world, primary data without constant retraining.

The AI Energy Problem

Today’s AI landscape is dominated by Transformer models, known for their ability to process vast amounts of secondary data, such as scraped text, images, and videos. While these models have propelled significant advancements, their insatiable demand for computational power has exposed critical flaws.
As energy costs rise and infrastructure investment balloons, the industry is beginning to reevaluate its reliance on Transformer models. This shift has sparked interest in brain-inspired AI, which promises sustainable solutions through decentralized, self-learning systems that mimic human cognitive efficiency.

What Brain-Based AI Solves

Brain-inspired models aim to address three fundamental challenges with current AI systems. The human brain’s ability to build cohesive perceptions from fragmented inputs—like stitching together a clear visual image from saccades and peripheral signals—serves as a blueprint for these models, demonstrating how advanced functionality can emerge from minimal energy expenditure.

The Secret to Brain Efficiency: A Thousand Brains

Jeff Hawkins, the creator of the Palm Pilot, has dedicated decades to understanding the brain’s neocortex and its potential for AI design. His Thousand Brains Theory of Intelligence posits that the neocortex operates through a universal algorithm, with approximately 150,000 cortical columns functioning as independent processors. These columns identify patterns, sequences, and spatial representations, collaborating to form a cohesive perception of the world. Hawkins’ brain-inspired approach challenges traditional AI paradigms by emphasizing predictive coding and distributed processing, reducing energy demands while enabling real-time learning. Unlike Transformers, which centralize control, brain-based AI uses localized decision-making, creating a more scalable and adaptive system.

Is AI in a Bubble?

Despite immense investment in AI, the market’s focus remains heavily skewed toward infrastructure rather than applications. NVIDIA’s data centers alone generate 5 billion in annualized revenue, while major AI applications collectively bring in just billion.
This imbalance has led to concerns about an AI bubble, reminiscent of the early 2000s dot-com and telecom busts, where overinvestment in infrastructure outpaced actual demand. The sustainability of current AI investments hinges on the viability of new models like brain-based AI. If these systems gain widespread adoption within the next decade, today’s energy-intensive Transformer models may become obsolete, signaling a profound market correction.

Controlling Brain-Based AI: A Philosophical Divide

The rise of brain-based AI introduces not only technical challenges but also philosophical ones. Scholars like Joscha Bach argue for a reductionist approach, constructing intelligence through mathematical models that approximate complex phenomena. Others advocate for holistic designs, warning that purely rational systems may lack the broader perspective needed to navigate ethical and unpredictable scenarios. This philosophical debate mirrors the physical divide in the human brain: one hemisphere excels in reductionist analysis, while the other integrates holistic perspectives. As AI systems grow increasingly complex, the philosophical framework guiding their development will profoundly shape their behavior—and their impact on society. The future of AI lies in balancing efficiency, adaptability, and ethical design. Whether brain-based models succeed in replacing Transformers will depend not only on their technical advantages but also on our ability to guide their evolution responsibly. As AI inches closer to mimicking human intelligence, the stakes have never been higher.

AI Productivity Paradox


The AI Productivity Paradox: Why Aren’t More Workers Using AI Tools Like ChatGPT?

The Real Barrier Isn’t Technical Skills — It’s Time to Think

Despite the transformative potential of tools like ChatGPT, most knowledge workers aren’t utilizing them effectively. Those who do tend to use them for basic tasks like summarization. Less than 5% of ChatGPT’s user base subscribes to the paid Plus version, indicating that only a small fraction of potential professional users are tapping into AI for more complex, high-value tasks. Having spent over a decade building AI products at companies such as Google Brain and Shopify Ads, the author has watched the evolution of AI firsthand. With the advent of ChatGPT, AI has transitioned from being an enhancement for tools like photo organizers to becoming a significant productivity booster for all knowledge workers. Most executives are aware that today’s buzz around AI is more than just hype. They’re eager to make their companies AI-forward, recognizing that the technology is now more powerful and user-friendly than ever. Yet, despite this potential and enthusiasm, widespread adoption remains slow. The real issue lies in how organizations approach work itself. Systemic problems are hindering the integration of these tools into the daily workflow. Ultimately, the question executives need to ask isn’t, “How can we use AI to work faster?” or “Can this feature be built with AI?” but rather, “How can we use AI to create more value? What are the questions we should be asking but aren’t?”

Real-world Impact

Recently, large language models (LLMs)—the technology behind tools like ChatGPT—were used to tackle a complex data structuring and analysis task. This task would typically require a cross-functional team of data analysts and content designers, taking a month or more to complete. Here’s what was accomplished in just one day using Google AI Studio: However, the process wasn’t just about pressing a button and letting AI do all the work.
It required focused effort, detailed instructions, and multiple iterations. Hours were spent crafting precise prompts, providing feedback, and redirecting the AI when it went off course. In this case, the task was compressed from a month-long process to a single day. While it was mentally exhausting, the result wasn’t just a faster process—it was a fundamentally better and different outcome. The LLMs uncovered nuanced patterns and edge cases within the data that traditional analysis would have missed.

The Counterintuitive Truth

Here lies the key to understanding the AI productivity paradox: the success in using AI was possible because leadership allowed for a full day dedicated to rethinking data processes with AI as a thought partner. This provided the space for deep, strategic thinking, exploring connections and possibilities that would typically take weeks. However, this quality-focused work is often sacrificed under the pressure to meet deadlines. Ironically, most people don’t have time to figure out how they could save time. This lack of dedicated time for exploration is a luxury many product managers (PMs) can’t afford. Under constant pressure to deliver immediate results, many PMs don’t have even an hour for strategic thinking. For many, the only way to carve out time for this work is by pretending to be sick. This continuous pressure also hinders AI adoption. Developing thorough testing plans or proactively addressing AI-related issues is viewed as a luxury, not a necessity. This creates a counterproductive dynamic: Why use AI to spot issues in documentation if fixing them would delay launch? Why conduct further user research when the direction has already been set from above?

Charting a New Course — Investing in People

Providing employees time to “figure out AI” isn’t enough; most need training to fully understand how to leverage ChatGPT beyond simple tasks like summarization. Yet the training required is often far less than what people expect.
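The iterate-and-redirect workflow described above — draft a prompt, inspect the output, feed back what went wrong, and try again — can be sketched as a simple loop. The `call_llm` function and the validation rule below are hypothetical stand-ins, since the article does not specify any particular tooling.

```python
def refine_with_feedback(task, call_llm, validate, max_rounds=3):
    """Iterative prompting: ask, check the output against a validation
    rule, and re-prompt with explicit feedback until it passes."""
    prompt = task
    output = ""
    for _ in range(max_rounds):
        output = call_llm(prompt)
        ok, feedback = validate(output)
        if ok:
            return output
        # Redirect the model: restate the task plus what went wrong.
        prompt = f"{task}\nYour previous answer was rejected: {feedback}\nTry again."
    return output
```

The point of the sketch is that the human-authored `validate` step — not the model — decides when the work is done, which is exactly the "focused effort and redirection" the article says a productive day with an LLM actually requires.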
While the market is flooded with AI training programs, many aren’t suitable for most employees. These programs are often time-consuming, overly technical, and not tailored to specific job functions. The best results come from working closely with individuals for brief periods—10 to 15 minutes—to audit their current workflows and identify areas where LLMs could be used to streamline processes. Understanding the technical details behind token prediction isn’t necessary to create effective prompts. It’s also a myth that AI adoption is only for those with technical backgrounds under 40. In fact, attention to detail and a passion for quality work are far better indicators of success. By setting aside biases, companies may discover hidden AI enthusiasts within their ranks. For example, a lawyer in his sixties, after just five minutes of explanation, grasped the potential of LLMs. By tailoring examples to his domain, the technology helped him draft a law review article he had been putting off for months. It’s likely that many companies already have AI enthusiasts—individuals who’ve taken the initiative to explore LLMs in their work. These “LLM whisperers” could come from any department: engineering, marketing, data science, product management, or customer service. By identifying these internal innovators, organizations can leverage their expertise. Once these experts are found, they can conduct “AI audits” of current workflows, identify areas for improvement, and provide starter prompts for specific use cases. These internal experts often better understand the company’s systems and goals, making them more capable of spotting relevant opportunities.

Ensuring Time for Exploration

Beyond providing training, it’s crucial that employees have the time to explore and experiment with AI tools. Companies can’t simply tell their employees to innovate with AI while demanding that another month’s worth of features be delivered by Friday at 5 p.m.
Ensuring teams have a few hours a month for exploration is essential for fostering true AI adoption. Once the initial hurdle of adoption is overcome, employees will be able to identify the most promising areas for AI investment. From there, organizations will be better positioned to assess the need for more specialized training.

Conclusion

The AI productivity paradox is not about the complexity of the technology but rather how organizations approach work and innovation. Harnessing AI’s potential is simpler than “AI influencers” often suggest, requiring only

Read More
gettectonic.com