Chain of Thought

Pitfall of Process Optimization

In 1963, Peter Drucker wrote one of the most influential articles on business, Managing for Business Effectiveness. Much like Fred Brooks' 1975 classic, The Mythical Man-Month, it offers profound lessons. Viewed through today's lens of AI and automation, however, it seems we may have misinterpreted Drucker's insights, inadvertently industrializing the problem rather than solving it.

One pivotal point from Drucker's essay (highlighted by Dave Duggal) is: "The major problem is the confusion between effectiveness and efficiency. There is nothing more useless than doing efficiently what should not be done at all. Yet our tools — especially accounting concepts and data — all focus on efficiency. What we need is a way to identify areas of effectiveness and a method to concentrate on them."

While Drucker emphasized focusing on results and making data-driven decisions, his warning that "our data and accounting focus on efficiency" has been largely overlooked. Instead of addressing this, businesses have industrialized the pursuit of efficiency at the expense of effectiveness.

The Efficiency Trap

Drucker's assertion that "there is nothing more useless than doing with great efficiency what should not be done at all" remains true, yet much of the business and IT landscape has fixated on eliminating steps, even when the return on that effort is minimal. He warned that too much focus is placed on problems rather than opportunities, and on areas where even exceptional performance yields little impact. This mirrors many process optimization efforts, where the goal is often to remove unnecessary steps, focusing on efficiency rather than true effectiveness.

The Pitfall of Process Optimization

Entire business methodologies were built around simplifying processes and eliminating redundant steps. Companies created cultures centered on optimization, believing that by cutting out inefficiencies they would achieve success. Yet, as Drucker noted, this focus on efficiency has often resulted in neglecting broader opportunities.

Poor Data, Poor Outcomes

Drucker's concerns about tools and data have proven strangely prophetic. Instead of focusing on effectiveness, many organizations now face data problems, often rooted in over-optimized processes. Some of the firms most dedicated to process optimization are the very ones known for slow responses to market changes, because their data fails to keep pace with business needs.

Focusing on Process, Missing the Bigger Picture

When businesses focus narrowly on processes, they overlook key information needed downstream. This might improve micro-level efficiency, but it often damages macro-level outcomes. For instance, optimizing an order submission process may mean critical data isn't captured, leading to issues further along in the supply chain. This process-driven thinking fosters data silos—disconnected systems that, while progressing individual steps, fail to offer the insights needed for broader business decisions.

Effectiveness Requires Understanding Reality

AI amplifies these challenges. To fully leverage AI, businesses must shift from process-centric to reality-based thinking. Companies that can manage their digital reality, enabling AI to make smart, outcome-driven decisions, will outperform those stuck in outdated process mentalities. AI won't just optimize individual steps like restocking inventory; it will manage complex tasks such as provisioning networks, negotiating with suppliers, or resolving customer complaints. To support this, businesses must move beyond step-based optimization and embrace approaches that focus on multi-dimensional KPIs and AI-driven outcomes.

A Shift from Process to Reality

The future of business optimization will require understanding KPIs in a multi-dimensional way, embedding AI into operations, and allowing it to drive business outcomes. This will necessitate a shift in data architecture, with a focus on operational reality rather than reporting.

The Dangers of Ignoring the Shift

Businesses that cling to process thinking may find isolated success with AI but risk falling behind competitors that embrace a broader transformation. Like retailers who tried to compete with Amazon by merely launching websites without addressing underlying fulfillment challenges, they may see short-term gains but falter in the long run.

The Cultural Challenge of Transformation

Switching from process-focused thinking to a reality-based approach will be difficult. Since Drucker's 1963 essay, the industrialization of step-elimination has become deeply ingrained in business culture. Processes are comfortable; they allow for focused problem-solving in isolated areas. Moving to a mindset that prioritizes operational reality, dependencies, and cross-functional collaboration is a significant cultural shift.

Embracing the Change

The businesses that make this transition will gain a competitive advantage. Those that recognize the scale of change required—making cultural, organizational, and architectural shifts—will operate in a different league than those that don't. By shifting from efficiency-driven processes to reality-based effectiveness, organizations can unlock the full potential of AI, ensuring not just operational improvements but transformational business success. You can avoid the pitfalls of process optimization.


AI and Digital Transformation

The buzz around AI has become the latest trend, but there's a deeper truth behind it. While some may joke that AI's rise means no longer needing to discuss Digital Transformation, the reality is quite the opposite. Communication Service Providers (CSPs) and infrastructure companies that have embraced Digital Transformation are now reaping the rewards of AI. But what exactly is Digital Transformation, and how has it paved the way for AI? Let's explore.

The Digital Transformation Journey

Digital transformation is about more than just adopting new technologies. It involves integrating digital technology into every aspect of a business, fundamentally altering how operations are conducted and how value is delivered to customers. This transformation requires a cultural shift, pushing organizations to challenge the status quo, experiment with new ideas, and embrace the possibility of failure.

For CSPs that have successfully undergone digital transformation, the benefits are clear: streamlined operations, enhanced customer experiences, and valuable data insights. This transformation has created the ideal environment for AI to thrive, because AI relies on vast amounts of data, particularly structured data.

The COVID-19 pandemic accelerated the pace of digital transformation, especially for CSPs. As companies adapted to new ways of working and serving customers, the need for robust digital infrastructure became more apparent. The surge in demand for digital services—driven by remote work, e-learning, and online communication—highlighted the importance of digital agility and the ability to leverage AI to meet rapidly changing customer needs. The pandemic pushed CSPs not only to advance their digital transformation efforts but also to innovate more quickly, ensuring they remain competitive in a fast-evolving digital landscape.

The AI and Data Dilemma

AI is revolutionizing industries by enabling smarter decision-making, process automation, and personalized customer experiences. However, AI's effectiveness is heavily dependent on data—clean, well-organized, and easily accessible data. This is where digital transformation becomes crucial. CSPs that have invested in digital transformation have the infrastructure needed to collect, store, and analyze data effectively, providing the fuel that powers AI.

The Consequences of Falling Behind

CSPs that have not embraced digital transformation face significant challenges in the AI race. Without a solid digital foundation, these companies struggle to harness AI's potential. Their data is often siloed, outdated, or simply unusable. Many organizations still operate multiple billing systems and customer care platforms for each line of business, all functioning in silos without any cross-functional intelligence. Attempting to implement AI on a weak digital foundation is akin to building a house on sand—it's doomed to fail. Without digital transformation, companies lack the infrastructure needed to support AI initiatives, resulting in missed opportunities for efficiency gains, cost savings, and competitive advantages. This is a common reason why enterprises fail in AI adoption, with Gartner reporting that over 80% of enterprises struggle with data quality or quantity issues.

Real-World Examples

Companies like Amazon and Netflix have successfully undergone digital transformation and are now leveraging AI to enhance their services. Amazon uses AI for personalized recommendations and supply chain optimization, while Netflix uses AI to analyze viewer preferences and recommend content that keeps users engaged. Conversely, companies slow to adopt digital transformation face significant challenges. Traditional retailers, for example, struggle to compete with e-commerce giants. Without the ability to leverage AI for personalized marketing and inventory management, they are losing market share.

The Role of IFS

IFS, through its flagship product IFS Cloud, offers a unified platform with a consistent data layer, ensuring that all data is clean, well-organized, and accessible. IFS also applies "Industrial AI," embedding AI into applications where and when it makes sense. This approach ensures that AI evolves with the product and that the necessary AI governance is built in. By integrating AI in a way that aligns with CSP operations, IFS not only supports AI implementation but also guides organizations through their digital transformation journey in a symbiotic manner.

The Path Forward

The key takeaway is clear: if an organization hasn't started its digital transformation journey, the time to begin is now. Embracing change, investing in technology, and fostering a culture that values innovation will position companies to fully leverage AI and maintain a competitive edge. Starting with AI without a strong data foundation can lead to costly investments that fail to deliver the expected efficiencies.

Digital transformation is not a one-time project but an ongoing process. Companies must remain open to advances, continuously experiment, and not fear failure. Remember, Edison never said he failed; he just discovered another way not to make a light bulb. The future belongs to those who are willing to adapt and evolve.


Strawberry AI Models

Since OpenAI introduced its "Strawberry" AI models, something intriguing has unfolded. The o1-preview and o1-mini models have quickly gained attention for their superior step-by-step reasoning, offering a structured glimpse into problem-solving. However, behind this polished façade, a hidden layer of the AI's mind remains off-limits—an area OpenAI is determined to keep out of reach.

Unlike previous models, the o1 series conceals its raw thought processes. Users only see the refined, final answer, generated by a secondary AI, while the deeper, unfiltered reasoning is locked away. Naturally, this secrecy has only fueled curiosity. Hackers, researchers, and enthusiasts are already working to break through this barrier. Using jailbreak techniques and clever prompt manipulations, they are trying to uncover the AI's raw chain of thought, hoping to reveal what OpenAI has concealed. Rumors of partial breakthroughs have circulated, though nothing definitive has emerged.

Meanwhile, OpenAI closely monitors these efforts, issuing warnings and threatening account bans to those who dig too deep. On platforms like X, users have reported receiving warnings merely for mentioning terms like "reasoning trace" in their interactions with the o1 models. Even casual inquiries into the AI's thinking process seem to trigger OpenAI's defenses. The company's warnings are explicit: any attempt to expose the hidden reasoning violates its policies and could result in revoked access to the AI. Marco Figueroa, who leads Mozilla's GenAI bug bounty program, publicly shared his experience after attempting to probe the model's thought process through jailbreaks; he quickly found himself flagged by OpenAI. "Now I'm on their ban list," Figueroa revealed.

So, why all the secrecy? OpenAI explained in a blog post titled Learning to Reason with LLMs that concealing the raw thought process allows for better monitoring of the AI's decision-making without interfering with its cognitive flow. Revealing this raw data, the company argues, could lead to unintended consequences, such as the model being misused to manipulate users or its internal workings being copied by competitors. OpenAI acknowledges that the raw reasoning process is valuable, and exposing it could give rivals an edge in training their own models.

However, critics such as independent AI researcher Simon Willison have condemned this decision. Willison argues that concealing the model's thought process is a blow to transparency. "As someone working with AI systems, I need to understand how my prompts are being processed," he wrote. "Hiding this feels like a step backward."

Ultimately, OpenAI's decision to keep the AI's raw thought process hidden is about more than just user safety—it's about control. By retaining access to these concealed layers, OpenAI maintains its lead in the competitive AI race. Yet, in doing so, the company has sparked a hunt. Researchers, hackers, and enthusiasts continue to search for what remains hidden. And until that veil is lifted, the pursuit won't stop.


AI Agents Connect Tool Calling and Reasoning

AI Agents: Bridging Tool Calling and Reasoning in Generative AI

Exploring Problem Solving and Tool-Driven Decision Making in AI

Introduction: The Emergence of Agentic AI

Recent advancements in libraries and low-code platforms have simplified the creation of AI agents, often referred to as digital workers. Tool calling stands out as a key capability that enhances the "agentic" nature of Generative AI models, enabling them to move beyond mere conversational tasks. By executing tools (functions), these agents can act on your behalf and tackle intricate, multi-step problems that require sound decision-making and interaction with diverse external data sources. This insight explores the role of reasoning in tool calling, examines the challenges associated with tool usage, discusses common evaluation methods for tool-calling proficiency, and provides examples of how various models and agents engage with tools.

Reasoning as a Means of Problem-Solving

Successful agents rely on two fundamental expressions of reasoning: reasoning through evaluation and planning, and reasoning through tool use. While both are vital, they don't always need to be combined to yield powerful solutions. For instance, OpenAI's new o1 model excels at reasoning through evaluation and planning, having been trained to use chain of thought effectively. This has notably enhanced its ability to address complex challenges, achieving human PhD-level accuracy on benchmarks like GPQA across physics, biology, and chemistry, and ranking in the 86th-93rd percentile on Codeforces contests. However, the o1 model currently lacks explicit tool-calling capabilities.

Conversely, many models are specifically fine-tuned for reasoning through tool use, allowing them to generate function calls and interact with APIs effectively. These models focus on executing the right tool at the right moment but may not evaluate their results as thoroughly as the o1 model. The Berkeley Function Calling Leaderboard (BFCL) serves as an excellent resource for comparing the performance of various models on tool-calling tasks and provides an evaluation suite for assessing fine-tuned models against challenging scenarios. The recently released BFCL v3 now includes multi-step, multi-turn function calling, raising the standards for tool-based reasoning tasks.

Both reasoning types are powerful in their own right, and their combination holds the potential to develop agents that can effectively deconstruct complex tasks and autonomously interact with their environments. For more insights into AI agent architectures for reasoning, planning, and tool calling, check out my team's survey paper on ArXiv.

Challenges in Tool Calling: Navigating Complex Agent Behaviors

Creating robust and reliable agents requires overcoming various challenges. In tackling complex problems, an agent often must juggle multiple tasks simultaneously: planning, timely tool interactions, accurate formatting of tool calls, retaining outputs from prior steps, avoiding repetitive loops, and adhering to guidelines that safeguard the system against jailbreaks and prompt injections. Such demands can easily overwhelm a single agent, leading to a trend where what appears to an end user as a single agent is actually a coordinated effort of multiple agents and prompts working in unison to divide and conquer the task. This division enables tasks to be segmented and addressed concurrently by distinct models and agents, each tailored to tackle specific components of the problem.
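Before going further, it helps to ground what a tool call actually is. The sketch below assumes the widely used OpenAI-style function-calling format: the application declares a tool as a JSON schema, the model replies with a tool name and arguments rather than prose, and the application executes the call and feeds the result back. The weather function matches the get_current_weather example used later in this article; the hard-coded model reply and the stand-in implementation are illustrative, not from any specific product.

```python
import json

# A tool (function) declared to the model as a JSON schema. The schema tells
# the model the tool's name, purpose, and expected arguments. In a real
# request, TOOLS would be sent to the model alongside the conversation.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Return current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

def get_current_weather(city: str) -> dict:
    # Stand-in implementation; a real agent would call a weather API here.
    return {"city": city, "temp_c": 22, "conditions": "clear"}

# What a model-emitted tool call typically looks like: a structured request,
# not natural-language prose. Hard-coded here to keep the sketch self-contained.
model_tool_call = {
    "name": "get_current_weather",
    "arguments": json.dumps({"city": "New York"}),
}

# The application (not the model) executes the call and returns the result to
# the model as a new message so it can compose the final answer.
dispatch = {"get_current_weather": get_current_weather}
args = json.loads(model_tool_call["arguments"])
result = dispatch[model_tool_call["name"]](**args)
print(json.dumps(result))
```

The important point for agent builders is that the model only proposes the call; validating the arguments and actually executing the function remain the application's responsibility.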
This is where models with exceptional tool-calling capabilities come into play. While tool calling is a potent method for empowering productive agents, it introduces its own set of challenges. Agents must grasp the available tools, choose the appropriate one from a potentially similar set, accurately format the inputs, execute calls in the correct sequence, and potentially integrate feedback or instructions from other agents or humans. Many models are fine-tuned specifically for tool calling, allowing them to specialize in selecting the right function at the right time; several considerations come into play when fine-tuning a model for this purpose.

Common Benchmarks for Evaluating Tool Calling

As tool usage in language models becomes increasingly significant, numerous datasets have emerged to facilitate the evaluation and enhancement of model tool-calling capabilities. Two prominent benchmarks are the Berkeley Function Calling Leaderboard and the Nexus Function Calling Benchmark, both used by Meta to assess the performance of its Llama 3.1 model series. The recent ToolACE paper illustrates how agents can generate a diverse dataset for fine-tuning and evaluating model tool use. Each of these benchmarks enhances our ability to evaluate model reasoning through tool calling, and they reflect a growing trend toward developing specialized models for specific tasks and extending the capabilities of LLMs to interact with the real world.

Practical Applications of Tool Calling

If you're interested in observing tool calling in action, the examples that follow progress by ease of use, from simple built-in tools to fine-tuned models and agents with tool-calling capabilities. While a built-in web search feature is convenient, most applications require defining custom tools that can be integrated into your model workflows, which leads to the next level of complexity.

To observe how models articulate tool calls, you can use the Databricks Playground. For example, select the Llama 3.1 405B model and grant access to sample tools like get_distance_between_locations and get_current_weather. When prompted with, "I am going on a trip from LA to New York. How far are these two cities? And what's the weather like in New York? I want to be prepared for when I get there," the model decides which tools to call and what parameters to provide for an effective response. In this scenario, the model suggests two tool calls. Since the model cannot execute the tools itself, the user must input a sample result to simulate each call.

Now suppose you employ a model fine-tuned on the Berkeley Function Calling Leaderboard dataset. When prompted, "How many times has the word 'freedom' appeared in the entire works of Shakespeare?" the model will retrieve and return the answer, executing the required tool calls without the user needing to define any input or manage the output format. Such models handle multi-turn interactions adeptly, processing past user messages, managing context, and generating coherent, task-specific outputs. As AI agents evolve to encompass advanced reasoning and problem-solving capabilities, they will become increasingly adept at managing complex, multi-step tasks on our behalf.
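To make the Playground walkthrough above more concrete, here is a minimal sketch of the same trip scenario as a message loop. The model's two suggested tool calls and the tool results are hard-coded stand-ins, since the point is the shape of the exchange (the assistant proposes calls, the application supplies results, the assistant then answers), not any specific vendor API. The function names get_distance_between_locations and get_current_weather come from the example above; everything else is illustrative.

```python
import json

# Conversation so far: the user's trip question.
messages = [
    {"role": "user", "content": (
        "I am going on a trip from LA to New York. How far are these two "
        "cities? And what's the weather like in New York?"
    )},
]

# Stand-in for the model's reply: it proposes two tool calls instead of prose.
assistant_tool_calls = [
    {"id": "call_1", "name": "get_distance_between_locations",
     "arguments": {"origin": "Los Angeles", "destination": "New York"}},
    {"id": "call_2", "name": "get_current_weather",
     "arguments": {"location": "New York"}},
]
messages.append({"role": "assistant", "tool_calls": assistant_tool_calls})

# Simulated tool results, playing the part the user plays in the Playground
# when the tools cannot actually run.
simulated_results = {
    "call_1": {"distance_miles": 2789},
    "call_2": {"temp_f": 68, "conditions": "partly cloudy"},
}

# Feed each result back as a tool message so the model can compose its answer.
for call in assistant_tool_calls:
    messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": json.dumps(simulated_results[call["id"]]),
    })

# In a real workflow, the full `messages` list would now be sent back to the
# model, which would reply with a natural-language summary of distance and weather.
print(json.dumps(messages, indent=2))
```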


GPT-o1 GPT5 Review

OpenAI has released its latest model series, OpenAI o1 (code-named Project Strawberry and widely speculated to be GPT-5), positioning it as a significant advancement in AI with PhD-level reasoning capabilities. The new series is designed to enhance problem-solving in fields such as science, coding, and mathematics, and the initial results indicate that it lives up to the anticipation.

Key Features of OpenAI-o1

The headline features of the series span enhanced reasoning capabilities, safety and alignment, targeted applications, and multiple model variants, alongside staged access and availability.

Access and Availability

The o1 models are available to ChatGPT Plus and Team users, with broader access expected soon for ChatGPT Enterprise users. Developers can access the models through the API, although certain features like function calling are still in development. Free access to o1-mini is expected to be provided in the near future.

Reinforcement Learning at the Core

The o1 models use reinforcement learning to improve their reasoning abilities. This approach focuses on training the models to think more effectively, improving their performance with additional time spent on tasks. OpenAI continues to explore how to scale this approach, though details remain limited.

Major Milestones

The o1 models have achieved impressive results on several competitive benchmarks.

Chain of Thought Reasoning

OpenAI's o1 models employ the "Chain of Thought" prompt engineering technique, which allows the model to think through problems step by step. This method helps the model approach complex problems in a structured way, similar to human reasoning.

While the o1 models show immense promise, there are still some limitations, which have been covered in detail elsewhere. Based on early tests, however, the models are performing impressively, and users are hopeful that these capabilities are as robust as advertised rather than overhyped like previous OpenAI projects such as Sora or SearchGPT.
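To illustrate the chain-of-thought style of prompting described above, here is a minimal sketch. It simply shows how a request can ask a model to reason step by step before answering; the prompt wording, the example problem, and the OpenAI-style message layout are illustrative assumptions, not an official recipe, and o1-class models apply this kind of reasoning internally without needing such an instruction.

```python
import json

# A chain-of-thought style request: the system message asks the model to work
# through the problem step by step before committing to a final answer.
request = {
    "model": "o1-mini",  # illustrative model name; any chat model could be substituted
    "messages": [
        {
            "role": "system",
            "content": (
                "Reason through the problem step by step. Show each intermediate "
                "step, then give the final answer on its own line."
            ),
        },
        {
            "role": "user",
            "content": (
                "A train leaves at 09:10 and arrives at 13:45. "
                "How long is the journey in hours and minutes?"
            ),
        },
    ],
}

# Printing the payload rather than calling an API keeps the sketch runnable
# anywhere; in practice this dict would be sent to a chat-completions endpoint.
print(json.dumps(request, indent=2))
```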


Communicating With Machines

For as long as machines have existed, humans have struggled to communicate effectively with them. The rise of large language models (LLMs) has transformed this dynamic, making "prompting" the bridge between our intentions and AI's actions. By providing pre-trained models with clear instructions and context, we can ensure they understand and respond correctly. As UX practitioners, we now play a key role in facilitating this interaction, helping humans and machines truly connect.

The UX discipline was born alongside graphical user interfaces (GUIs), offering a way for the average person to interact with computers without needing to write code. We introduced familiar concepts like desktops, trash cans, and save icons to align with users' mental models, while complex code ran behind the scenes. Now, with the power of AI and the transformer architecture, a new form of interaction has emerged—natural language communication. This shift has changed the design landscape, moving us from pure graphical interfaces to an era where text-based interactions dominate. As designers, we must reconsider where our focus should lie in this evolving environment.

A Mental Shift

In the era of command-based design, we focused on breaking down complex user problems, mapping out customer journeys, and creating deterministic flows. Now, with AI at the forefront, our challenge is to provide models with the right context for optimal output and to refine the responses through iteration.

Shifting Complexity to the Edges

Successful communication, whether with a person or a machine, hinges on context. Just as you would clearly explain your needs to a salesperson to get the right product, AI models also need clear instructions. Expecting users to input all the necessary information in their prompts won't lead to widespread adoption of these models. Here, UX practitioners play a critical role. We can design user experiences that integrate context—some visible to users, others hidden—shaping how AI interacts with them. This ensures that users can seamlessly communicate with machines without the burden of detailed, manual prompts.

The Craft of Prompting

As designers, our role in crafting prompts falls into three main areas. Even if your team isn't building custom models, there's still plenty of work to be done. You can help select pre-trained models that align with user goals and design a seamless experience around them.

Understanding the Context Window

A key concept for UX designers to understand is the "context window": the information a model can process to generate an output. Think of it as the amount of memory the model retains during a conversation. Companies can use this to include hidden prompts, helping guide AI responses to align with brand values and user intent. Context windows are measured in tokens, not time, so even if you return to a conversation weeks later, the model remembers previous interactions, provided they fit within the token limit. With innovations like Gemini's 2-million-token context window, AI models are moving toward effectively unlimited memory, which will bring new design challenges for UX practitioners.

How to Approach Prompting

Prompting is an iterative process: you craft an instruction, test it with the model, and refine it based on the results, and several techniques can make that iteration more effective. Depending on the scenario, you'll either use direct, simple prompts (for user-facing interactions) or broader, more structured system prompts (for behind-the-scenes guidance). A sketch of what such a hidden system prompt can look like follows below.
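As a concrete illustration of the ideas above, here is a minimal sketch of a hidden system prompt paired with user messages, plus a rough check against a token budget. The brand voice text, the 8,000-token budget, and the four-characters-per-token estimate are all illustrative assumptions; a production app would count tokens with the model's actual tokenizer (for example, tiktoken for OpenAI models).

```python
# Hidden system prompt: context the user never sees, but that shapes every reply.
SYSTEM_PROMPT = (
    "You are the assistant for Acme Outdoors (illustrative brand). "
    "Keep answers under 120 words, use a friendly tone, and never quote "
    "prices; direct pricing questions to a human agent."
)

CONTEXT_WINDOW_BUDGET = 8_000  # illustrative token budget for the whole conversation


def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for English text.
    # Production code should use the model's real tokenizer instead.
    return max(1, len(text) // 4)


def build_messages(history: list[dict], user_input: str) -> list[dict]:
    """Assemble the hidden system prompt, prior turns, and the new user message,
    trimming the oldest turns if the conversation would overflow the budget."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": user_input})

    while sum(estimate_tokens(m["content"]) for m in messages) > CONTEXT_WINDOW_BUDGET \
            and len(messages) > 2:
        messages.pop(1)  # drop the oldest non-system turn to stay within the window
    return messages


history = [
    {"role": "user", "content": "Do you sell two-person tents?"},
    {"role": "assistant", "content": "We do! Our two-person tents pack down small."},
]
print(build_messages(history, "Which one is best for winter camping?"))
```

The design choice worth noting is that the system prompt is pinned while older turns are the first to be trimmed, which is one common way teams keep brand guidance intact as conversations approach the context limit.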
Get Organized

As prompting becomes more common, teams need a unified approach to avoid conflicting instructions. Proper documentation of system prompting is crucial, especially in larger teams, and helps prevent errors and hallucinations in model responses. Prompt experimentation may also reveal limitations in the underlying AI models, and there are several ways to address those limitations.

Looking Ahead

The UX landscape is evolving rapidly. Many organizations, particularly smaller ones, have yet to realize the importance of UX in AI prompting. Others may not allocate enough resources, underestimating the complexity and importance of UX in shaping AI interactions. As John Culkin said, "We shape our tools, and thereafter, our tools shape us." The responsibility of integrating UX into AI development goes beyond individual organizations; it is shaping the future of human-computer interaction. This is a pivotal moment for UX, and how we adapt will define the next generation of design.

Content updated October 2024.
