Context Window Archives - gettectonic.com

AI Model Race Intensifies

AI Model Race Intensifies as OpenAI, Google, and DeepSeek Roll Out New Releases

The generative AI competition is heating up as major players like OpenAI, Google, and DeepSeek rapidly release upgraded models. However, enterprises are shifting focus from incremental model improvements to agentic AI—systems that autonomously perform complex tasks.

Three Major Releases in 24 Hours
This week saw a flurry of AI advancements.

Competition Over Innovation?
While the rapid releases highlight the breakneck pace of AI development, some analysts see diminishing differentiation between models.

The Future: Agentic AI & Real-World Use Cases
As model fatigue sets in, businesses are focusing on domain-specific AI applications that deliver measurable ROI. The AI race continues, but the real winners will be those who translate cutting-edge models into practical, agent-driven solutions.

Key Takeaways:
✔ DeepSeek’s open-source V3 pressures rivals to embrace transparency.
✔ GPT-4o’s hyper-realistic images raise deepfake concerns.
✔ Gemini 2.5 focuses on structured reasoning for complex tasks.
✔ Agentic AI, not just model upgrades, is the next enterprise priority.


Balancing Security with Operational Flexibility

Security measures for AI agents must strike a balance between protection and the flexibility required for effective operation in production environments. As these systems advance, several key challenges remain unresolved.

Practical Limitations
1. Tool Calling
2. Multi-Step Execution
3. Technical Infrastructure
4. Interaction Challenges
5. Access Control
6. Reliability & Performance

The Road Ahead

Scaling AI Through Test-Time Compute
The future of AI agent capabilities hinges on test-time compute, or the computational resources allocated during inference. While pre-training faces limitations due to finite data availability, test-time compute offers a path to enhanced reasoning. Industry leaders suggest that large-scale reasoning may require significant computational investment. OpenAI’s Sam Altman has stated that while AGI development is now theoretically understood, real-world deployment will depend heavily on compute economics.

Near-Term Evolution (2025)
Core Intelligence Advancements
Interface & Control Improvements
Memory & Context Expansion
Infrastructure & Scaling Constraints

Medium-Term Developments (2026)
Core Intelligence Enhancements
Interface & Control Innovations
Memory & Context Strengthening

Current AI systems struggle with basic UI interactions, achieving only ~40% success rates in structured applications. However, novel learning approaches—such as reverse task synthesis, which allows agents to infer workflows through exploration—have nearly doubled success rates in GUI interactions. By 2026, AI agents may transition from executing predefined commands to autonomously understanding and interacting with software environments.

Conclusion
The trajectory of AI agents points toward increased autonomy, but significant challenges remain. The key developments driving progress include:
✅ Test-time compute unlocking scalable reasoning
✅ Memory architectures improving context retention
✅ Planning optimizations enhancing task decomposition
✅ Security frameworks ensuring safe deployment
✅ Human-AI collaboration models refining interaction efficiency

While we may be approaching AGI-like capabilities in specialized domains (e.g., software development, mathematical reasoning), broader applications will depend on breakthroughs in context understanding, UI interaction, and security. Balancing computational feasibility with operational effectiveness remains the primary hurdle in transitioning AI agents from experimental technology to indispensable enterprise tools.
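To make the tool-calling and access-control challenges above concrete, the sketch below shows one way an agent runtime might gate tool calls before execution. It is a minimal illustration, not any vendor's implementation; the role names, the ToolPolicy class, and the per-role limits are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class ToolPolicy:
    """Hypothetical per-role policy: which tools a role may call, plus hard argument limits."""
    allowed_tools: set[str] = field(default_factory=set)
    max_rows: int = 100  # example guardrail for data-reading tools


POLICIES = {
    "support_agent": ToolPolicy(allowed_tools={"lookup_order", "issue_refund"}, max_rows=10),
    "analyst_agent": ToolPolicy(allowed_tools={"run_report"}, max_rows=1000),
}


def authorize_tool_call(role: str, tool_name: str, params: dict) -> None:
    """Raise before execution if the requested call violates the role's policy."""
    policy = POLICIES.get(role)
    if policy is None or tool_name not in policy.allowed_tools:
        raise PermissionError(f"Role '{role}' may not call tool '{tool_name}'")
    if params.get("rows", 0) > policy.max_rows:
        raise PermissionError(f"Requested rows exceed limit of {policy.max_rows}")


# Example: a support agent trying to run an analyst-only report is blocked
# before the tool executes, and the denial can be written to an audit log.
try:
    authorize_tool_call("support_agent", "run_report", {"rows": 50})
except PermissionError as err:
    print("blocked:", err)
```

In practice such a gate would sit between the planner and the tool executor, and every allow or deny decision would be appended to an audit trail to support the reliability and auditability goals discussed above.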


Building Scalable AI Agents

Building Scalable AI Agents: Infrastructure, Planning, and Security

The key building blocks of AI agents—planning, tool integration, and memory—demand sophisticated infrastructure to function effectively in production environments. As the technology advances, several critical components have emerged as essential for successful deployments.

Development Frameworks & Architecture
The ecosystem for AI agent development has matured, with several key frameworks leading the way. While these frameworks offer unique features, successful agents typically share three core architectural components. Despite these strong foundations, production deployments often require customization to address high-scale workloads, security requirements, and system integrations.

Planning & Execution
Handling complex tasks requires advanced planning and execution flows. An agent’s effectiveness hinges on its ability to:
✅ Generate structured plans by intelligently combining tools and knowledge (e.g., correctly sequencing API calls for a customer refund request).
✅ Validate each task step to prevent errors from compounding.
✅ Optimize computational costs in long-running operations.
✅ Recover from failures through dynamic replanning.
✅ Apply multiple validation strategies, from structural verification to runtime testing.
✅ Collaborate with other agents when consensus-based decisions improve accuracy.

While multi-agent consensus models improve accuracy, they are computationally expensive. Even OpenAI finds that running parallel model instances for consensus-based responses remains cost-prohibitive, with ChatGPT Pro priced at $200/month. Running majority-vote systems for complex tasks can triple or quintuple costs, making single-agent architectures with robust planning and validation more viable for production use.

Memory & Retrieval
AI agents require advanced memory management to maintain context and learn from experience. Memory systems typically include:
1. Context Window
2. Working Memory (state maintained during a task)
3. Long-Term Memory & Knowledge Management

AI agents rely on structured storage systems for persistent knowledge.

Advanced Memory Capabilities
Standardization efforts like Anthropic’s Model Context Protocol (MCP) are emerging to streamline memory integration, but challenges remain in balancing computational efficiency, consistency, and real-time retrieval.

Security & Execution
As AI agents gain autonomy, security and auditability become critical. Production deployments require multiple layers of protection:
1. Tool Access Control
2. Execution Validation
3. Secure Execution Environments
4. API Governance & Access Control
5. Monitoring & Observability
6. Audit Trails

These security measures must balance flexibility, reliability, and operational control to ensure trustworthy AI-driven automation.

Conclusion
Building production-ready AI agents requires a carefully designed infrastructure that balances:
✅ Advanced memory systems for context retention.
✅ Sophisticated planning capabilities to break down tasks.
✅ Secure execution environments with strong access controls.

While AI agents offer immense potential, their adoption remains experimental across industries. Organizations must strategically evaluate where AI agents justify their complexity, ensuring that they provide clear, measurable benefits over traditional AI models.
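To illustrate the context-window and working-memory layers described above, here is a minimal sketch of a rolling conversation buffer that evicts the oldest turns once an approximate token budget is exceeded. The ConversationBuffer class and the four-characters-per-token estimate are simplifying assumptions for illustration, not a particular framework's API.

```python
from collections import deque


def approx_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text
    # (an assumption, not an exact tokenizer).
    return max(1, len(text) // 4)


class ConversationBuffer:
    """Working memory: keep only the most recent turns within a fixed token budget."""

    def __init__(self, max_tokens: int = 3000):
        self.max_tokens = max_tokens
        self.turns = deque()  # oldest turn on the left, newest on the right

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        # Drop the oldest turns until the buffer fits the budget again.
        while sum(approx_tokens(t) for t in self.turns) > self.max_tokens and len(self.turns) > 1:
            self.turns.popleft()

    def as_context(self) -> str:
        return "\n".join(self.turns)


buffer = ConversationBuffer(max_tokens=50)
for msg in ["user: where is my order?", "agent: checking order #123...", "user: thanks"]:
    buffer.add(msg)
print(buffer.as_context())  # only the most recent turns that still fit the budget
```

A production memory system would typically summarize evicted turns into long-term storage (for example, a vector database) rather than discard them, which is where retrieval layers and standardization efforts such as MCP come in.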


Google and Salesforce Expand Partnership

Google and Salesforce Expand Partnership to Enhance AI Agent Capabilities

Google and Salesforce are deepening their collaboration to provide customers with greater flexibility in AI agent deployment. This expanded partnership will integrate Google Gemini within Salesforce’s Agentforce platform, enabling AI agents to process images, audio, and video with advanced multimodal capabilities.

Enhanced AI Functionality with Gemini
Through this integration, AI agents will gain access to Gemini’s powerful models, allowing them to handle complex tasks with extended context windows and leverage real-time insights from Google Search via Vertex AI. This collaboration aims to empower businesses with AI solutions that are not limited to a single model provider, offering crucial flexibility in AI customization.

Srini Tallapragada, Salesforce’s President and Chief Engineering and Customer Success Officer, emphasized that the integration offers customers the ability to choose the applications and models that best suit their needs: “Salesforce offers a complete enterprise-grade agentic AI platform that makes it easy to deploy new capabilities quickly and realize business value fast. Google Cloud is a pioneer in enterprise agentic AI, offering some of the most powerful models, agents, and AI development tools on the planet. Together, we are creating the best place for businesses to scale with digital labor.”

Key Benefits of the Integration
The partnership is set to deliver significant advantages for businesses, as outlined in the official announcement. Thomas Kurian, CEO of Google Cloud, highlighted the benefits of this collaboration: “Our mutual customers have asked for seamless integration across Salesforce and Google Cloud. This expanded partnership enables them to accelerate AI transformations with state-of-the-art AI models, agentic AI, and advanced data analytics.”

Strengthening Customer Service Integrations
The partnership will also enhance the connection between Salesforce Service Cloud and Google Cloud’s Customer Engagement Suite, providing AI-driven improvements to customer support.

Expanding AI-Powered Decision-Making
Beyond Gemini, Agentforce will integrate Google Search through Vertex AI, leveraging secure connections between Salesforce Data Cloud and Google BigQuery. This will enable AI agents to access real-time information for improved accuracy and decision-making. For example, in supply chain management, AI can track shipments, monitor inventory in Salesforce Commerce Cloud, and anticipate disruptions using real-time data on weather, port congestion, and geopolitical events.

Additionally, joint customers will be able to utilize Salesforce’s unified platform—including Agentforce, Data Cloud, and Customer 360—on Google Cloud’s AI-optimized infrastructure. This integration ensures enhanced security through dynamic grounding, zero data retention, and toxicity detection via the Einstein Trust Layer. Businesses will also soon have the option to purchase Salesforce products via the Google Cloud Marketplace.

More AI Innovations from Google and Salesforce
Google recently announced the development of a personalized AI-powered chatbot that will be integrated into its devices, including smartphones, laptops, and tablets. This tool will automatically answer calls, process requests, and respond on behalf of users. Meanwhile, Salesforce’s Service Assistant—formerly known as Salesforce Service Planner—has launched on Service Cloud.
Designed to support live agents, it generates step-by-step plans for resolving customer inquiries by analyzing intent, case history, and customer context. For optimal performance, Salesforce recommends integrating it with Data Cloud and the contact center knowledge base.

With this expanded partnership, Google and Salesforce are setting the stage for businesses to leverage cutting-edge AI technology, driving innovation and operational efficiency across industries.


The Rise of AI Agents: 2024 and Beyond

In 2024, we witnessed major breakthroughs in AI agents. OpenAI’s o1 and o3 models demonstrated the ability to deconstruct complex tasks, while Claude 3.5 showcased AI’s capacity to interact with computers like humans—navigating interfaces and running software. These advancements, alongside improvements in memory and learning systems, are pushing AI beyond simple chat interactions into the realm of autonomous systems.

AI agents are already making an impact in specialized fields, including legal analysis, scientific research, and technical support. While they excel in structured environments with defined rules, they still struggle with unpredictable scenarios and open-ended challenges. Their success rates drop significantly when handling exceptions or adapting to dynamic conditions.

The field is evolving from conversational AI to intelligent systems capable of reasoning and independent action. Each step forward demands greater computational power and introduces new technical challenges. This article explores how AI agents function, their current capabilities, and the infrastructure required to ensure their reliability.

What is an AI Agent?
An AI agent is a system designed to reason through problems, plan solutions, and execute tasks using external tools. Unlike traditional AI models that simply respond to prompts, agents possess greater autonomy. Understanding the shift from passive responders to autonomous agents is key to grasping the opportunities and challenges ahead. Let’s explore the breakthroughs that have fueled this transformation.

2024’s Key Breakthroughs
OpenAI o3’s High Score on the ARC-AGI Benchmark
Three pivotal advancements in 2024 set the stage for autonomous AI agents.

AI Agents in Action
These capabilities are already yielding practical applications. As Reid Hoffman observed, we are seeing the emergence of specialized AI agents that extend human capabilities across various industries.

Recent research from Sierra highlights the rapid maturation of these systems. AI agents are transitioning from experimental prototypes to real-world deployment, capable of handling complex business rules while engaging in natural conversations.

The Road Ahead: Key Questions
As AI agents continue to evolve, three critical questions emerge for us all. The next wave of AI innovation will be defined by how well we address these challenges. By building robust systems that balance autonomy with oversight, we can unlock the full potential of AI agents in the years ahead.


Salesforce and Google Announcement

Salesforce (NYSE:CRM) has entered into a deal with Google (NASDAQ:GOOGL) to offer its customer relationship management software, Agentforce artificial intelligence assistants, and Data Cloud offerings through Google Cloud, the companies announced today. Google and Salesforce already have many of the same clients, and this new deal will allow for more product integration between Google Workspace and Salesforce’s customer relationship management and AI offerings. Salesforce already uses Amazon (AMZN) Web Services for much of its cloud computing.

“Our mutual customers have asked us to be able to work more seamlessly across Salesforce and Google Cloud, and this expanded partnership will help them accelerate their AI transformations with agentic AI, state-of-the-art AI models, data analytics, and more,” said Thomas Kurian, CEO of Google Cloud.

The deal is expected to total $2.5B over the next seven years, according to a report by Bloomberg.

Salesforce and Google today announced a major expansion of their strategic partnership, delivering choice in the models and capabilities businesses use to build and deploy AI-powered agents. In today’s constantly evolving AI landscape, innovations like autonomous agents are emerging so quickly that businesses struggle to keep pace. This expanded partnership provides crucial flexibility, empowering customers to develop tailored AI solutions that meet their specific needs, rather than being locked into a single model provider.

Google Cloud is at the forefront of enterprise AI innovation, with millions of developers building with Google’s cutting-edge Gemini models and on Google Cloud’s AI-optimized infrastructure. This expanded partnership will empower Salesforce customers to build Agentforce agents using Gemini and to deploy Salesforce on Google Cloud. This is an expansion of the existing partnership that allows customers to use data from Data Cloud and Google BigQuery bi-directionally via zero-copy technology—further equipping customers with the data, AI, trust, and actions they need to bring autonomous agents into their businesses.

Additionally, this integration empowers Agentforce agents with the ability to reference up-to-the-minute data, news, current events, and credible citations, substantially enhancing their contextual awareness and ability to deliver accurate, evidence-backed responses. For example, in supply chain management and logistics, an agent built with Agentforce could track shipments and monitor inventory levels in Salesforce Commerce Cloud and proactively identify potential disruptions using real-time data from Google Search, including weather conditions, port congestion, and geopolitical events. Availability is expected in the coming months.

AI: Unlocking the Power of Choice and Flexibility with Gemini and Agentforce
Businesses need the freedom to choose the best models for their needs rather than be locked into one vendor. In 2025, Google’s Gemini models will also be available for prompt building and reasoning directly within Agentforce. For example, an insurance customer can submit a claim with photos of the damage and an audio voicemail from a witness. Agentforce, using Gemini, can then help the insurance provider deliver better customer experiences by processing all these inputs, assessing the claim’s validity, and even using text-to-speech to contact the customer with a resolution, streamlining the traditionally lengthy claims process. Availability is expected this year.
Trust: Salesforce Platform Deployed on Google Cloud
Customers will be able to use Salesforce’s unified platform (Agentforce, Data Cloud, Customer 360) on Google Cloud’s highly secure, AI-optimized infrastructure, benefiting from features like dynamic grounding, zero data retention, and toxicity detection provided by the Einstein Trust Layer. Once Salesforce products are available on Google Cloud, customers will also have the ability to procure Salesforce offerings through the Google Cloud Marketplace, opening up new possibilities for global businesses to optimize their investments across Salesforce and Google Cloud and benefiting thousands of existing joint customers.

Action: Enhanced Employee Productivity and Customer Service with AI-Powered Integrations
Millions use Salesforce and Google Cloud daily. This partnership prioritizes choice and flexibility, enabling seamless cross-platform work. New and deeper connections between platforms like Salesforce Service Cloud and Google Cloud’s Customer Engagement Suite, as well as Slack and Google Workspace, will empower AI agents and service representatives with unified data access, streamlined workflows, and advanced AI capabilities, regardless of platform.

Salesforce and Google Cloud are deeply integrating their customer service platforms—Salesforce Service Cloud and Google Cloud’s Customer Engagement Suite—to create a seamless and intelligent support experience, expected later this year. The companies are also exploring deeper integrations between Slack and Google Workspace, boosting productivity and creating a more cohesive digital workspace for teams and organizations.

Expanding Partnership Capabilities and Integrations
This partnership goes beyond core product integrations to deliver a more connected and intelligent data foundation for businesses, with availability expected throughout 2025. This landmark partnership between Salesforce and Google represents a strategic shift in enterprise AI deployment, emphasizing infrastructure innovation, AI capability enhancement, and enterprise value. The integration of Google Search grounding provides a unique competitive advantage, offering real-time, factual responses backed by the world’s most comprehensive search engine. The companies are committed to ongoing innovation and deeper collaboration to empower businesses with even more powerful solutions.


Why Build a General-Purpose Agent?

A general-purpose LLM agent serves as an excellent starting point for prototyping use cases and establishing the foundation for a custom agentic architecture tailored to your needs.

What is an LLM Agent?
An LLM (Large Language Model) agent is a program whose execution logic is governed by the underlying model. Unlike approaches such as few-shot prompting or fixed workflows, LLM agents adapt dynamically. They can determine which tools to use (e.g., web search or code execution), how to use them, and iterate based on results. This adaptability enables handling diverse tasks with minimal configuration.

Agentic Architectures Explained
Agentic systems range from the reliability of fixed workflows to the flexibility of autonomous agents. Your architecture choice will depend on the desired balance between reliability and flexibility for your use case.

Building a General-Purpose LLM Agent

Step 1: Select the Right LLM
Choosing the right model is critical for performance. For simpler use cases, smaller models running locally can also be effective, but with limited functionality.

Step 2: Define the Agent’s Control Logic
The system prompt differentiates an LLM agent from a standalone model. This prompt contains rules, instructions, and structures that guide the agent’s behavior. Starting with ReAct or Plan-then-Execute patterns is recommended for general-purpose agents.

Step 3: Define the Agent’s Core Instructions
To optimize the agent’s behavior, clearly define its features and constraints in the system prompt.

Step 4: Define and Optimize Core Tools
Tools expand an agent’s capabilities. For each tool, define its name, inputs, and expected output. Example: implementing an Arxiv API tool for scientific queries (see the sketch at the end of this post).

Step 5: Memory Handling Strategy
Since LLMs have limited memory (the context window), a strategy is necessary to manage past interactions. For personalization, long-term memory can store user preferences or critical information.

Step 6: Parse the Agent’s Output
To make raw LLM outputs actionable, implement a parser to convert outputs into a structured format like JSON. Structured outputs simplify execution and ensure consistency.

Step 7: Orchestrate the Agent’s Workflow
Define orchestration logic to handle the agent’s next steps after receiving an output.

Example Orchestration Code:

```python
def orchestrator(llm_agent, llm_output, tools, user_query):
    while True:
        action = llm_output.get("action")
        if action == "tool_call":
            tool_name = llm_output.get("tool_name")
            tool_params = llm_output.get("tool_params", {})
            if tool_name in tools:
                try:
                    tool_result = tools[tool_name](**tool_params)
                    llm_output = llm_agent({"tool_output": tool_result})
                except Exception as e:
                    return f"Error executing tool '{tool_name}': {str(e)}"
            else:
                return f"Error: Tool '{tool_name}' not found."
        elif action == "return_answer":
            return llm_output.get("answer", "No answer provided.")
        else:
            return "Error: Unrecognized action type from LLM output."
```

This orchestration ensures seamless interaction between tools, memory, and user queries.

When to Consider Multi-Agent Systems
A single-agent setup works well for prototyping but may hit limits with complex workflows or extensive toolsets. Starting with a single agent helps refine workflows, identify bottlenecks, and scale effectively.
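Complementing Steps 4 and 6 above, here is a hedged sketch of what the Arxiv tool and the output parser might look like, shaped so they plug into the Step 7 orchestrator. The action/tool_name/tool_params keys mirror the orchestrator example; the function names and the use of arXiv's public Atom API endpoint are illustrative assumptions rather than a prescribed implementation.

```python
import json
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET


def search_arxiv(query: str, max_results: int = 3) -> list[dict]:
    """Step 4 example tool: query the public arXiv Atom API and return title/summary pairs."""
    url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode(
        {"search_query": f"all:{query}", "max_results": max_results}
    )
    with urllib.request.urlopen(url, timeout=10) as resp:
        feed = resp.read()
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(feed)
    return [
        {
            "title": entry.findtext("atom:title", default="", namespaces=ns).strip(),
            "summary": entry.findtext("atom:summary", default="", namespaces=ns).strip(),
        }
        for entry in root.findall("atom:entry", ns)
    ]


def parse_agent_output(raw_output: str) -> dict:
    """Step 6: coerce raw LLM text into the structured JSON the orchestrator expects."""
    try:
        parsed = json.loads(raw_output)
    except json.JSONDecodeError:
        # Fall back to a terminal answer so the orchestrator can still return something.
        return {"action": "return_answer", "answer": raw_output}
    if "action" not in parsed:
        return {"action": "return_answer", "answer": str(parsed)}
    return parsed


# Tool registry in the shape the Step 7 orchestrator consumes: {name: callable}.
tools = {"arxiv_search": search_arxiv}
```

With these pieces, calling `orchestrator(llm_agent, parse_agent_output(raw), tools, user_query)` closes the loop from raw model text to tool execution and back.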
By following these steps, you’ll have a versatile system capable of handling diverse use cases, from competitive analysis to automating workflows.


LLM Economies

Throughout history, disruptive technologies have been the catalyst for major social and economic revolutions. The invention of the plow and irrigation systems 12,000 years ago sparked the Agricultural Revolution, while Johannes Gutenberg’s 15th-century printing press fueled the Protestant Reformation and helped propel Europe out of the Middle Ages into the Renaissance. In the 18th century, James Watt’s steam engine ushered in the Industrial Revolution. More recently, the internet has revolutionized communication, commerce, and information access, shrinking the world into a global village. Similarly, smartphones have transformed how people interact with their surroundings.

Now, we stand at the dawn of the AI revolution. Large Language Models (LLMs) represent a monumental leap forward, with significant economic implications at both macro and micro levels. These models are reshaping global markets, driving new forms of currency, and creating a novel economic landscape.

The reason LLMs are transforming industries and redefining economies is simple: they automate both routine and complex tasks that traditionally require human intelligence. They enhance decision-making processes, boost productivity, and facilitate cost reductions across various sectors. This enables organizations to allocate human resources toward more creative and strategic endeavors, resulting in the development of new products and services. From healthcare to finance to customer service, LLMs are creating new markets and driving AI-driven services like content generation and conversational assistants into the mainstream.

To truly grasp the engine driving this new global economy, it’s essential to understand the inner workings of this disruptive technology. These posts will provide both a macro-level overview of the economic forces at play and a deep dive into the technical mechanics of LLMs, equipping you with a comprehensive understanding of the revolution happening now.

Why Now? The Connection Between Language and Human Intelligence
AI did not begin with ChatGPT’s arrival in November 2022. Machine learning classification models were already being built in 1999, and the roots of AI go back even further. Artificial Intelligence was formally born in 1950, when Alan Turing—considered the father of theoretical computer science and famed for cracking the Nazi Enigma code during World War II—created the first formal definition of intelligence. This definition, known as the Turing Test, demonstrated the potential for machines to exhibit human-like intelligence through natural language conversations. The test involves a human evaluator who engages in conversations with both a human and a machine. If the evaluator cannot reliably distinguish between the two, the machine is considered to have passed the test. Remarkably, after 72 years of gradual AI development, ChatGPT simulated this very interaction, passing the Turing Test and igniting the current AI explosion.

But why is language so closely tied to human intelligence, rather than, for example, vision? Although a large share of the brain’s processing is devoted to vision, OpenAI’s pioneering image generation model, DALL-E, did not trigger the same level of excitement as ChatGPT. The answer lies in the profound role language has played in human evolution.

The Evolution of Language
The development of language was the turning point in humanity’s rise to dominance on Earth.
As Yuval Noah Harari points out in his book Sapiens: A Brief History of Humankind, it was the ability to gossip and discuss abstract concepts that set humans apart from other species. Complex communication, such as gossip, requires a shared, sophisticated language. Human language evolved from primitive cave signs to structured alphabets, which, along with grammar rules, created languages capable of expressing thousands of words. In today’s digital age, language has further evolved with the inclusion of emojis, and now, with the advent of GenAI, tokens have become the latest cornerstone in this progression. These shifts highlight the extraordinary journey of human language, from simple symbols to intricate digital representations.

In the next post, we will explore the intricacies of LLMs, focusing specifically on tokens. But before that, let’s delve into the economic forces shaping the LLM-driven world.

The Forces Shaping the LLM Economy

AI Giants in Competition
Karl Marx and Friedrich Engels argued that those who control the means of production hold power. The tech giants of today understand that AI is the future means of production, and the race to dominate the LLM market is well underway. This competition is fierce, with industry leaders like OpenAI, Google, Microsoft, and Facebook battling for supremacy. New challengers such as Mistral (France), AI21 (Israel), Elon Musk’s xAI, and Anthropic are also entering the fray. The LLM industry is expanding exponentially, with billions of dollars of investment pouring in. For example, Anthropic has raised $4.5 billion from 43 investors, including major players like Amazon, Google, and Microsoft.

The Scarcity of GPUs
Just as Bitcoin mining requires vast computational resources, training LLMs demands immense computing power, driving a search for new energy sources. Microsoft’s recent investment in nuclear energy underscores this urgency. At the heart of LLM technology are Graphics Processing Units (GPUs), essential for powering deep neural networks. These GPUs have become scarce and expensive, adding to the competitive tension.

Tokens: The New Currency of the LLM Economy
Tokens are the currency driving the emerging AI economy. Just as money facilitates transactions in traditional markets, tokens are the foundation of LLM economics. But what exactly are tokens? Tokens are the basic units of text that LLMs process. They can be single characters, parts of words, or entire words. For example, the word “Oscar” might be split into two tokens, “os” and “car.” The performance of LLMs—quality, speed, and cost—hinges on how efficiently they generate these tokens.

LLM providers price their services based on token usage, with different rates for input (prompt) and output (completion) tokens. As companies rely more on LLMs, especially for complex tasks like agentic applications, token usage will significantly impact operational costs. With fierce competition and the rise of open-source models like Llama-3.1, the cost of tokens is rapidly decreasing. For instance, OpenAI reduced its GPT-4 pricing by about 80% over the past year and a half. This trend enables companies to expand their portfolio of AI-powered products, further fueling the LLM economy.

Context Windows: Expanding Capabilities
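Tokens are also the unit in which context windows and API bills are measured. As a concrete illustration of the discussion above, the sketch below counts tokens with OpenAI's open-source tiktoken tokenizer and estimates a request's cost; the encoding name and the per-million-token prices are placeholder assumptions, since actual splits and rates vary by model and provider.

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an example encoding; models differ

prompt = "Summarize the key drivers of the LLM economy in three bullet points."
tokens = enc.encode(prompt)
print(f"{len(tokens)} tokens: {tokens[:8]}...")

# Hypothetical pricing for illustration only: $5 per million input tokens,
# $15 per million output tokens.
input_price_per_m, output_price_per_m = 5.00, 15.00
expected_output_tokens = 200
cost = (len(tokens) * input_price_per_m
        + expected_output_tokens * output_price_per_m) / 1_000_000
print(f"Estimated request cost: ${cost:.6f}")
```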


AI Then and Now

AI: Transforming User Interactions and Experiences

Have you ever been greeted by a waitress who already knows your breakfast order? It’s a relief not to have to detail every aspect: the coffee temperature, how you want your eggs, what kind of juice, bacon or sausage, and so on. This example encapsulates the journey we’re navigating with AI today. This article isn’t about ordering breakfast; it’s about the evolution of user interactions, particularly how generative AI might evolve based on past trends in graphical user interfaces (GUIs) and emerging trends in AI interactions. We’ll explore the significance of context bundling, user curation, trust, and ecosystems as key trends in AI user experience in this Tectonic insight.

From Commands to Conversations
Let’s rewind to the early days of computing, when users had to type precise commands in a Command-Line Interface (CLI). Imagine the challenge of remembering the exact command to open a file or copy data. This complexity meant that only a few people could use computers effectively. To reach a broader audience, a shift was necessary. You might think Apple’s mouse and drop-down menus were the pinnacle of that shift, but the evolution predates Apple.

Enter ELIZA in 1964, an early natural language processing program that engaged users in basic conversations through keyword recognition and scripted responses. Although groundbreaking, ELIZA’s interactions were far from flexible or scalable. Around the same time, Xerox PARC was developing the Graphical User Interface (GUI), later popularized by Apple in 1984 and Microsoft shortly thereafter. GUIs transformed computing by replacing complex commands with icons, menus, and windows navigable by a mouse. This innovation made computers accessible and intuitive for everyday tasks, laying the groundwork for technology’s universal role in our lives. Not only did it make computing accessible to the masses, it laid the foundation for one or more computers in nearly every household.

The Evolution of AI Interfaces
Just as early computing transitioned from the complexity of the CLI to the simplicity of GUIs, we’re witnessing a parallel evolution in generative AI. User prompts are essentially mini-programs crafted in natural language, with the quality of outcomes depending on our prompt engineering skills. We are moving towards bundling complex inputs into simpler, more user-friendly interfaces, with the complexity hidden in the background.

Context Bundling
Context bundling simplifies interactions by combining related information into a single command. This addresses the challenge of conveying complex instructions to achieve desired outcomes, enhancing efficiency and output quality by aligning user intent and machine understanding in one go. We’ve seen context bundling emerge across generative AI tools. For instance, sample prompts in Edge, Google Chrome’s tab manager, and trigger words in Stable Diffusion fine-tune AI outputs. Context bundling isn’t always about conversation; it’s about achieving user goals efficiently without lengthy interactions. Context bundling is the difference between ordering the eggs and telling the cook how to crack and prepare them.

User Curation
Despite advancements, there remains a spectrum of needs where users must refine outputs to achieve specific goals. This is especially true for tasks like researching, brainstorming, creating content, refining images, or editing.
As context windows and multi-modal capabilities expand, guiding users through complexity becomes even more crucial. Humans constantly curate their experiences, whether by highlighting text in a book or picking out keywords in a conversation. Similarly, users interacting with ChatGPT often highlight relevant information to guide their next steps. By making it easier for users to curate and refine their outputs, AI tools can offer higher-quality results and enrich user experiences. User curation takes ordering breakfast from a manual conversational process to the click of a button on a vending-machine-like system.

Designing for Trust
Trust is a significant barrier to the widespread adoption of generative AI. To build trust, we need to consider factors such as previous experiences, risk tolerance, interaction consistency, and social context. Without trust, in AI or your breakfast order, it becomes easier just to do it yourself. Trust is broken if the waitress brings you the wrong items, or if the artificial intelligence fails to meet your reasonable expectations.

Context Ecosystems
Generative AI has revolutionized productivity by lowering the barrier for users to start tasks, mirroring the benefits and journey of the GUI. However, modern UX has evolved beyond simple interfaces. The future of generative AI lies in creating ecosystems where AI tools collaborate with users in a seamless workflow. We see emergent examples like Edge, Chrome, and Pixel Assistant integrating AI functionality into their software. This integration goes beyond conversational windows, making AI aware of the software context and enhancing productivity.

The Future of AI Interaction
Generative AI will likely evolve to become a collaborator in our daily tasks. Tools like Grammarly and GitHub Copilot already show how AI can assist users in creating and refining content. As our comfort with AI grows, we may see generative AI managing both digital and physical aspects of our lives, augmenting reality and redefining productivity. The evolution of generative AI interactions is repeating the history of human-computer interaction. By creating better experiences that bundle context into simpler interactions, empower user curation, and augment known ecosystems, we can make generative AI more trustworthy, accessible, usable, and beneficial for everyone.


Communicating With Machines

For as long as machines have existed, humans have struggled to communicate effectively with them. The rise of large language models (LLMs) has transformed this dynamic, making “prompting” the bridge between our intentions and AI’s actions. By providing pre-trained models with clear instructions and context, we can ensure they understand and respond correctly. As UX practitioners, we now play a key role in facilitating this interaction, helping humans and machines truly connect.

The UX discipline was born alongside graphical user interfaces (GUIs), offering a way for the average person to interact with computers without needing to write code. We introduced familiar concepts like desktops, trash cans, and save icons to align with users’ mental models, while complex code ran behind the scenes. Now, with the power of AI and the transformer architecture, a new form of interaction has emerged—natural language communication. This shift has changed the design landscape, moving us from pure graphical interfaces to an era where text-based interactions dominate. As designers, we must reconsider where our focus should lie in this evolving environment.

A Mental Shift
In the era of command-based design, we focused on breaking down complex user problems, mapping out customer journeys, and creating deterministic flows. Now, with AI at the forefront, our challenge is to provide models with the right context for optimal output and refine the responses through iteration.

Shifting Complexity to the Edges
Successful communication, whether with a person or a machine, hinges on context. Just as you would clearly explain your needs to a salesperson to get the right product, AI models also need clear instructions. Expecting users to input all the necessary information in their prompts won’t lead to widespread adoption of these models. Here, UX practitioners play a critical role. We can design user experiences that integrate context—some visible to users, others hidden—shaping how AI interacts with them. This ensures that users can seamlessly communicate with machines without the burden of detailed, manual prompts.

The Craft of Prompting
As designers, our role in crafting prompts falls into three main areas. Even if your team isn’t building custom models, there’s still plenty of work to be done. You can help select pre-trained models that align with user goals and design a seamless experience around them.

Understanding the Context Window
A key concept for UX designers to understand is the “context window”—the information a model can process to generate an output. Think of it as the amount of memory the model retains during a conversation. Companies can use this to include hidden prompts, helping guide AI responses to align with brand values and user intent. Context windows are measured in tokens, not time, so even if you return to a conversation weeks later, the model remembers previous interactions, provided they fit within the token limit. With innovations like Gemini’s 2-million-token context window, AI models are moving toward effectively infinite memory, which will bring new design challenges for UX practitioners.

How to Approach Prompting
Prompting is an iterative process where you craft an instruction, test it with the model, and refine it based on the results. Depending on the scenario, you’ll either use direct, simple prompts (for user-facing interactions) or broader, more structured system prompts (for behind-the-scenes guidance).
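To ground the distinction between user-facing prompts and behind-the-scenes system prompts, here is a minimal sketch using the OpenAI Python client's chat interface. The model name, the hidden brand-voice instructions, and the temperature value are placeholders, and any chat-style API with role-separated messages would work the same way.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hidden, behind-the-scenes guidance: brand voice, constraints, and context the user never types.
system_prompt = (
    "You are a concise support assistant for Acme Co. "
    "Answer in two sentences or fewer and never promise refunds."
)

# The direct, user-facing prompt.
user_prompt = "My order arrived damaged. What should I do?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0.3,
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```

Iterating on the system prompt changes behavior for every conversation without altering what the user types, which is where much of the UX work described above happens.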
Get Organized
As prompting becomes more common, teams need a unified approach to avoid conflicting instructions. Proper documentation on system prompting is crucial, especially in larger teams. This helps prevent errors and hallucinations in model responses. Prompt experimentation may also reveal limitations in AI models, and there are several ways to address these.

Looking Ahead
The UX landscape is evolving rapidly. Many organizations, particularly smaller ones, have yet to realize the importance of UX in AI prompting. Others may not allocate enough resources, underestimating the complexity and importance of UX in shaping AI interactions. As John Culkin said, “We shape our tools, and thereafter, our tools shape us.” The responsibility of integrating UX into AI development goes beyond individual organizations—it’s shaping the future of human-computer interaction. This is a pivotal moment for UX, and how we adapt will define the next generation of design.

Content updated October 2024.
