Transforming the Role of Data Science Teams


GenAI: Transforming the Role of Data Science Teams
Challenges, Opportunities, and the Evolving Responsibilities of Data Scientists

Generative AI (GenAI) is revolutionizing the AI landscape, offering faster development cycles, reducing technical overhead, and enabling groundbreaking use cases that once seemed unattainable. However, it also introduces new challenges, including the risk of hallucinations and reliance on third-party APIs.

For Data Scientists and Machine Learning (ML) teams, this shift directly affects their roles. GenAI-driven projects, often powered by external providers such as OpenAI, Anthropic, or Meta, blur traditional lines. AI solutions are increasingly accessible to non-technical teams, but this accessibility raises fundamental questions about the role and responsibilities of data science teams in ensuring effective, ethical, and future-proof AI systems. Let's explore how this evolution is reshaping the field.

Expanding Possibilities Without Losing Focus

While GenAI unlocks opportunities to solve a broader range of challenges, not every problem warrants an AI solution. Data Scientists remain vital in assessing when and where AI is appropriate, selecting the right approach—whether GenAI, traditional ML, or a hybrid—and designing reliable systems. Although GenAI broadens the toolkit, practical constraints still shape its application. For example, incorporating features that let users oversee AI outputs may prove more strategic than attempting full automation with extensive fine-tuning. Differentiation will not come from simply using LLMs, which are widely accessible, but from the unique value and functionality they enable.

Traditional ML Is Far from Dead—It’s Evolving with GenAI

While GenAI is transformative, traditional ML continues to play a critical role. Many use cases, especially those unrelated to text or images, are best addressed with ML.
GenAI often complements traditional ML, enabling faster prototyping, richer experimentation, and hybrid systems that blend the strengths of both approaches. Traditional ML workflows require extensive data preparation, training, and maintenance; GenAI's simplified process of prompt engineering, offline evaluation, and API integration allows a rapid proof of concept for new ideas. Once an idea is proven, teams can refine the solution with traditional ML to optimize costs or latency, or transition to Small Language Models (SLMs) for greater control and performance.

Hybrid systems are increasingly common. For example, DoorDash combines LLMs with ML models for product classification: the LLM handles cases the ML model cannot classify confidently, and the ML system is retrained on those new insights—a powerful feedback loop.

GenAI Solves New Problems—But Still Needs Expertise

The AI landscape is shifting from bespoke in-house models to fewer, large multi-task models provided by external vendors. While this simplifies some aspects of AI implementation, it requires teams to remain vigilant about GenAI's probabilistic nature and inherent risks, including hallucinations and dependence on third-party APIs. Data Scientists must ensure robust evaluations, including statistical and model-based metrics, before deployment. Monitoring tools like Datadog now offer LLM-specific observability, enabling teams to track system performance in real-world environments. Teams must also address ethical concerns, applying frameworks like ComplAI to benchmark models and incorporating guardrails to align outputs with organizational and societal values.

Building AI Literacy Across Organizations

AI literacy is becoming a critical competency for organizations. Beyond technical implementation, competitive advantage now depends on how effectively the entire workforce understands and leverages AI.
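The DoorDash-style pattern described above—an ML model on the cheap, confident path, an LLM fallback for the rest, and a retraining queue closing the loop—can be sketched in plain Python. All names, thresholds, and the stub classifiers below are illustrative assumptions, not DoorDash's actual system:

```python
# Illustrative ML-first, LLM-fallback classifier with a retraining feedback
# loop. The two classifiers are stand-ins; in practice they would be a trained
# model and an LLM API call.

CONFIDENCE_THRESHOLD = 0.85  # hypothetical cutoff for trusting the ML model

def ml_classify(item: str) -> tuple[str, float]:
    """Stand-in for a traditional ML classifier returning (label, confidence)."""
    known = {"espresso machine": ("appliances", 0.97)}
    return known.get(item, ("unknown", 0.30))

def llm_classify(item: str) -> str:
    """Stand-in for an LLM call, used only on low-confidence cases."""
    return "appliances"

# Labeled examples banked for the next ML retraining run
retraining_queue: list[tuple[str, str]] = []

def classify(item: str) -> str:
    label, confidence = ml_classify(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # fast, cheap path: the ML model is confident
    # Fall back to the LLM and record its answer as a new training example
    label = llm_classify(item)
    retraining_queue.append((item, label))
    return label
```

Each low-confidence item both gets an answer now and improves the ML model later, which is what makes the loop a feedback loop rather than a one-way fallback.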
Data Scientists are uniquely positioned to champion this literacy by leading initiatives such as internal training, workshops, and hackathons. These efforts help spread practical AI knowledge across the organization.

The New Role of Data Scientists: A Strategic Pivot

The role of Data Scientists is not diminishing but evolving. Their expertise remains essential to ensuring AI solutions are reliable, ethical, and impactful, and their responsibilities now span solution design, rigorous evaluation, monitoring, and AI literacy. By adapting to this new landscape, Data Scientists will continue to play a pivotal role in guiding organizations to harness AI effectively and responsibly. GenAI is not replacing them; it's expanding their impact.

Agentforce Testing Tool


Salesforce Unveils Agentforce Testing Center: A Breakthrough in AI Agent Lifecycle Management

Salesforce, the global leader in AI-powered CRM solutions, has announced the Agentforce Testing Center, a first-of-its-kind platform for managing the lifecycle of autonomous AI agents. The solution enables organizations to test AI agents at scale, leveraging synthetic data in secure environments, while ensuring accurate performance and robust monitoring. Designed for the unique demands of deploying intelligent AI agents, the Agentforce Testing Center introduces new tools to test, prototype, and optimize AI agents without disrupting live production systems.

Core Features of the Agentforce Testing Center

Why It Matters

Autonomous AI agents represent a paradigm shift in enterprise software, capable of reasoning, retrieving data, and acting on behalf of users. However, ensuring their reliability and trustworthiness requires a robust testing framework that eliminates risks to live systems. The Agentforce Testing Center addresses these challenges by combining synthetic-data testing, secure sandbox environments, and robust monitoring.

"Agentforce is helping businesses create a limitless workforce," said Adam Evans, EVP and GM for Salesforce AI Platform. "To deliver this value quickly, CIOs need advanced tools for testing and monitoring autonomous systems. Agentforce Testing Center provides the necessary framework for secure, repeatable deployment."

Customer and Analyst Perspectives

Shree Reddy, CIO, PenFed: "With nearly 3 million members, PenFed is dedicated to providing personalized, efficient service. Using Data Cloud Sandboxes, we're able to test and refine AI agents, ensuring they deliver fast, accurate support that aligns with our members' financial goals."

Keith Kirkpatrick, Research Director, The Futurum Group: "To instill trust in AI, businesses must rigorously test autonomous agents.
Salesforce's Testing Center enables confidence by simulating hundreds of interaction scenarios, helping organizations deploy AI agents securely and effectively."

Availability

A Competitive Edge in AI Lifecycle Management

Salesforce's Agentforce Testing Center sets a new industry standard for testing and deploying AI agents at scale. By providing a secure, scalable, and transparent solution, Salesforce enables businesses to embrace an "agent-first" approach with confidence. As enterprises continue adopting AI, tools like the Agentforce Testing Center will play a critical role in accelerating innovation while maintaining trust and reliability.

AI-Driven Healthcare Approvals


Salesforce and Blue Shield of California are launching an AI-driven system to streamline healthcare approvals, aiming to cut prior authorization wait times from weeks to, in some cases, the same day. The partnership, built on Salesforce's healthcare cloud, integrates patient data to speed approvals while retaining clinician oversight: every AI decision is reviewed by a human expert.

Salesforce adds Testing Center to Agentforce for AI agents


Salesforce Unveils Agentforce Testing Center to Streamline AI Agent Lifecycle Management

Salesforce has introduced the Agentforce Testing Center, a suite of tools designed to help enterprises test, deploy, and monitor autonomous AI agents in a secure and controlled environment. These innovations aim to support businesses adopting agentic AI, a transformative approach that enables intelligent systems to reason, act, and execute tasks on behalf of employees and customers.

Agentforce Testing Center: A New Paradigm for AI Agent Deployment

The Agentforce Testing Center offers several key capabilities that help businesses confidently deploy AI agents without risking disruptions to live production systems.

Supporting a Limitless Workforce

Adam Evans, EVP and GM for Salesforce AI Platform, emphasized the importance of these tools in accelerating the adoption of AI agents: "Agentforce is helping businesses create a limitless workforce. To deliver this value fast, CIOs need new tools for testing and monitoring agentic systems. Salesforce is meeting the moment with Agentforce Testing Center, enabling companies to roll out trusted AI agents with no-code tools for testing, deploying, and monitoring in a secure, repeatable way."

From Testing to Deployment

Once testing is complete, enterprises can deploy their AI agents to production using Salesforce tools such as Change Sets, DevOps Center, and the Salesforce CLI. Additionally, the Digital Wallet feature offers transparent usage monitoring, allowing teams to track consumption and optimize resources throughout the AI development lifecycle.
Customer and Analyst Perspectives

Shree Reddy, CIO of PenFed, praised the potential of Agentforce and Data Cloud Sandboxes: "By enabling rigorous pre-deployment testing, we can deliver faster, more accurate support and recommendations to our members, aligning with our commitment to financial well-being."

Keith Kirkpatrick, Research Director at The Futurum Group, highlighted the broader implications: "Salesforce is instilling confidence in AI adoption by testing hundreds of variations of agent interactions in parallel. These enhancements make it easier for businesses to pressure-test autonomous systems and ensure reliability."

Availability

With these tools, Salesforce solidifies its leadership in the agentic AI space, empowering enterprises to adopt AI systems with confidence and transform their operations at scale.

Salesforce AI Research Introduces LaTRO


Salesforce AI Research Introduces LaTRO: A Breakthrough in Enhancing Reasoning for Large Language Models

Large Language Models (LLMs) have revolutionized tasks such as answering questions, generating content, and assisting with workflows. However, they often struggle with advanced reasoning tasks like solving complex math problems, logical deduction, and structured data analysis. Salesforce AI Research has addressed this challenge by introducing LaTent Reasoning Optimization (LaTRO), a framework that enables LLMs to self-improve their reasoning capabilities during training.

The Need for Advanced Reasoning in LLMs

Reasoning—especially sequential, multi-step reasoning—is essential for tasks that require logical progression and problem-solving. While current models excel at simpler queries, they often fall short on more complex tasks because they rely on external feedback mechanisms or runtime optimizations. Enhancing reasoning abilities is therefore critical to unlocking the full potential of LLMs across diverse applications, from advanced mathematics to real-time data analysis.

Existing techniques like Chain-of-Thought (CoT) prompting guide models to break problems into smaller steps, while methods such as Tree-of-Thought and Program-of-Thought explore multiple reasoning pathways. Although these techniques improve runtime performance, they don't fundamentally enhance reasoning during the model's training phase, which limits the scope of improvement.

LaTRO: A Self-Rewarding Framework

LaTRO shifts the paradigm by transforming reasoning into a training-level optimization problem. It introduces a self-rewarding mechanism that allows models to evaluate and refine their reasoning pathways without relying on external feedback or supervised fine-tuning. This intrinsic approach fosters continual improvement and empowers models to solve complex tasks more effectively.
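The article does not reproduce LaTRO's objective. As a rough sketch (the notation below is mine, not the paper's), training a model to sample and score its own reasoning paths builds on the standard variational lower bound, with x the prompt, z a sampled reasoning path, and y the correct answer:

```latex
\log p_\theta(y \mid x)
  \;\ge\; \mathbb{E}_{z \sim q_\phi(z \mid x)}\!\big[\log p_\theta(y \mid x, z)\big]
  \;-\; \mathrm{KL}\!\big(q_\phi(z \mid x)\,\|\,p_\theta(z \mid x)\big)
```

Under this reading, the likelihood term log p(y | x, z) acts as the self-reward: reasoning paths that make the correct answer more probable are reinforced, with no external reward model required.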
How LaTRO Works

LaTRO's methodology centers on sampling reasoning paths from a latent distribution and optimizing those paths with variational techniques. This self-rewarding cycle ensures that the model continuously refines its reasoning capabilities during training. Unlike traditional methods, LaTRO operates autonomously, without external reward models or costly supervised feedback loops.

Key Benefits of LaTRO

Performance Highlights

LaTRO's effectiveness has been validated across a range of datasets and models.

Applications and Implications

LaTRO's ability to foster logical coherence and structured reasoning has far-reaching applications in fields that demand robust problem-solving. By enabling LLMs to autonomously refine their reasoning processes, LaTRO brings AI closer to human-like cognitive abilities.

The Future of AI with LaTRO

LaTRO sets a new benchmark in AI research by demonstrating that reasoning can be optimized during training, not just at runtime. This advancement by Salesforce AI Research highlights the potential for self-evolving AI models that can independently improve their problem-solving capabilities. As the field progresses, frameworks like LaTRO pave the way for more autonomous, intelligent systems capable of navigating complex reasoning tasks across industries. LaTRO represents a significant leap forward, moving AI closer to true autonomous reasoning.

Enterprises are Adopting AI-powered Automation Platforms


The rapid pace of AI advancement is placing immense pressure on teams, often leading to friction because businesses hold unrealistic expectations about how quickly new technology can be implemented. A staggering 88% of IT professionals report that they are unable to keep up with the flood of AI-related requests within their organizations. Executives from UiPath, Salesforce, ServiceNow, and ManageEngine offer insights into how enterprises can navigate these challenges.

Leading enterprises are adopting AI-powered automation platforms that understand, automate, and manage end-to-end processes. These platforms integrate seamlessly with existing enterprise technologies, using AI to reduce friction, eliminate inefficiencies, and enable teams to achieve business goals faster, with greater accuracy and efficiency. This year's innovation drivers include tools such as Intelligent Document Processing, Communications Mining, Process and Task Mining, and Automated Testing.

"Automation is the best path to deliver on AI's potential, seamlessly integrating intelligence into daily operations, automating backend processes, upskilling employees, and revolutionizing industries," says Mark Gibbs, EMEA President, UiPath.

Jessica Constantinidis, Innovation Officer EMEA at ServiceNow, explains, "Intelligent Automation blends Robotic Process Automation (RPA), Artificial Intelligence (AI), and Machine Learning (ML) with well-defined processes to automate decision-making outcomes." "Hyperautomation provides a business-driven, disciplined approach that enterprises can use to make informed decisions quickly by analyzing process and data feedback within the organization," she adds.

Thierry Nicault, AVP and General Manager at Salesforce Middle East, emphasizes that while companies are eager to embrace AI, the pace of change often leads to confusion and stifles innovation.
He notes, "By deploying AI and Hyperintelligent Automation tools, organizations can enhance productivity, visibility, and operational transformation."

Automation is driving growth and innovation across industries. AI-powered tools are simplifying processes, improving business revenues, and contributing to economic diversification. Ramprakash Ramamoorthy, Director of AI Research at ManageEngine, highlights how Hyperintelligent Automation, powered by AI, uses tools like Natural Language Processing (NLP) and Intelligent Document Processing to detect anomalies, forecast business trends, and empower decision-making.

The IT Pushback

Despite enthusiasm for AI, IT professionals are raising concerns. A Salesforce survey revealed that 88% of IT professionals feel overwhelmed by the influx of AI-related requests, with many citing resource constraints, data security concerns, and data quality issues. Business stakeholders often have unrealistic expectations about how quickly new technologies can be implemented, creating friction.

According to Constantinidis of ServiceNow, many organizations lack transparency across their business units, making it difficult to fully understand their own processes; processes that are not understood are hard to automate. She adds, "Before full hyperautomation is possible, issues like data validation, classification, and privacy must be prioritized." Automation platforms need accurate data, and governance is crucial in managing what data is used for AI models. "You need AI skills to teach and feed the data, and you also need a data specialist to clean up your data lake," Constantinidis explains.

Gibbs from UiPath stresses that automation must be designed in collaboration with the business users who understand the processes and systems. Once deployed, a feedback loop ensures continuous improvement and refinement of automated workflows.
Ramamoorthy from ManageEngine notes that adopting Hyperintelligent Automation alongside existing workflows poses challenges. Enterprises must evaluate their technology stack, weighing the costs, the skills required, and the potential benefits.

Strategic Integration of AI and Automation

To successfully implement Hyperintelligent Automation tools, enterprises need a blend of IT and business skills. Mark Gibbs of UiPath points out, "These skills ensure organizations can effectively implement, manage, and optimize hyperintelligent technologies, aligning them with organizational goals." Salesforce's Nicault adds, "Enterprises must empower both IT and business teams to embrace AI, fostering innovation while ensuring the technology delivers real value."

Business skills are equally crucial, including strategic planning, process analysis, and change management. Ramamoorthy emphasizes that these competencies help identify automation opportunities and align them with business goals. According to Bassel Khachfeh, Digital Solutions Manager at Omnix, automation must be implemented with a focus on the regulatory and compliance needs specific to the industry, an approach that ensures the technology supports future growth and innovation.

Transforming Customer Experiences and Business Operations

As automation evolves, it is transforming not only back-end processes but also customer experiences and decision-making at every level. Constantinidis from ServiceNow explains that hyperintelligence enables enterprises to predict outcomes and avert crises by trusting AI's data accuracy. Gibbs from UiPath adds that automation allows enterprises to unlock untapped opportunities, speeding up the transformation of manual processes and enhancing business efficiency. AI is already making an impact in areas like supply chain management, regulatory compliance, and customer-facing processes.
Ramamoorthy of ManageEngine notes that AI-powered NLP is revolutionizing enterprise chatbots and document processing, enabling businesses to automate complex workflows like invoice handling and sentiment analysis. Khachfeh from Omnix highlights how Cognitive Automation platforms elevate RPA by integrating AI-driven capabilities, such as NLP and Optical Character Recognition (OCR), to further streamline operations.

Looking Ahead

Hyperintelligent Automation, driven by AI, is set to revolutionize industries by enhancing efficiency, driving innovation, and enabling smarter decision-making. Enterprises that strategically adopt these tools—by integrating IT and business expertise, prioritizing data governance, and continuously refining their automated workflows—will be best positioned to navigate the complexities of AI and achieve sustainable growth.

Consider AI Agents Personas


Treating AI Agents as Personas: Introducing the Era of Agent-Computer Interaction

The UX landscape is evolving. While the design community has quickly adopted Large Language Models (LLMs) as tools, we've yet to fully grasp their transformative potential. With AI agents now deeply embedded in digital products, they are shifting from tools to active participants in our digital ecosystems. This change demands a new design paradigm—one that views AI agents not just as extensions of human users but as independent personas in their own right.

The Rise of Agent-Computer Interaction

AI agents represent a new class of users capable of navigating interfaces autonomously and completing complex tasks. This marks the dawn of Agent-Computer Interaction (ACI)—a paradigm in which user experience design encompasses the needs of both human users and AI agents. Humans still play a critical role in guiding and supervising these systems, but AI agents must now be treated as distinct personas with unique goals, abilities, and requirements. This shift challenges UX designers to consider how these agents interact with interfaces and perform their tasks, ensuring they are equipped with the information and resources necessary to operate effectively.

Understanding AI Agents

AI agents are intelligent systems designed to reason, plan, and work across platforms with minimal human intervention. As defined during Google I/O, these agents retain context, anticipate needs, and execute multi-step processes. Advances such as Anthropic's Claude and its ability to interact with graphical interfaces have unlocked new levels of agency. Unlike earlier agents that relied solely on APIs, modern agents can manipulate graphical user interfaces much like human users, enabling seamless interaction with browser-based applications. This capability creates opportunities for new forms of interaction but also demands thoughtful design choices.
Two Interaction Approaches for AI Agents

Agents can work either through APIs or by driving graphical interfaces directly. Design teams must evaluate these methods based on the task's complexity and transparency requirements, striking the right balance between efficiency and oversight.

Designing Experiences Considering AI Agent Personas

As AI agents transition into active users, UX design must expand to accommodate their specific needs. Much like human personas, AI agent personas require a deep understanding of capabilities, limitations, and workflows.

Creating AI Agent Personas

Developing personas for AI agents involves identifying their unique characteristics, such as their goals, abilities, and requirements. These personas inform interface designs that optimize agent workflows, ensuring both agents and humans can collaborate effectively.

New UX Research Methodologies

UX teams should embrace innovative research techniques, such as A/B testing interfaces for agent performance and monitoring agents' interaction patterns. While AI agents lack sentience, they exhibit behaviors—reasoning, planning, and adapting—that merit careful study and design consideration.

Shaping the AI Mind

AI agents derive their reasoning capabilities from Large Language Models (LLMs), but their behavior and effectiveness are shaped by UX design. Designers have a unique role in crafting system prompts and developing feedback loops that refine LLM behavior over time. This work positions UX professionals as co-creators of AI intelligence, shaping not just interfaces but the underlying behaviors that drive agent interactions.

Keeping Humans in the Loop

Despite the rise of AI agents, human oversight and control remain essential. UX practitioners must prioritize transparency and trust in agent-driven systems. Using tools like agentic experience maps—blueprints that visualize the interactions between humans, agents, and products—designers can ensure AI systems remain human-centered.
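One way to make an agent persona concrete is to capture it as a small structured record that design and engineering teams share. The sketch below is a hypothetical illustration; the field names and the example agent are assumptions, not taken from the article:

```python
from dataclasses import dataclass

@dataclass
class AgentPersona:
    """Illustrative record of an AI agent treated as a design persona."""
    name: str
    goals: list[str]          # what the agent is trying to accomplish
    capabilities: list[str]   # e.g. API calls, GUI manipulation
    limitations: list[str]    # known failure modes to design around
    oversight: str = "human-in-the-loop"  # how humans supervise the agent

# Hypothetical persona for an agent that completes purchases for a user
checkout_agent = AgentPersona(
    name="Checkout Agent",
    goals=["complete purchase flows on behalf of the user"],
    capabilities=["GUI manipulation", "API calls"],
    limitations=["cannot solve CAPTCHAs", "needs confirmation for payments"],
)
```

Listing limitations and the oversight model alongside goals keeps the human-in-the-loop requirement visible in the same artifact that describes what the agent can do.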
A New Frontier for UX

The emergence of AI agents heralds a shift as significant as the transition from desktop to mobile. Just as mobile devices unlocked new opportunities for interaction, AI agents are poised to redefine digital experiences in ways we can't yet fully predict. By embracing Agent-Computer Interaction, UX designers have an unprecedented opportunity to shape the future of human-AI collaboration. Those who develop expertise in designing for these intelligent agents will lead the way in creating systems that are not only powerful but also deeply human-centered.

AI Assistants Using LangGraph


In the evolving world of AI, retrieval-augmented generation (RAG) systems have become standard for handling straightforward queries and generating contextually relevant responses. However, as demand grows for more sophisticated AI applications, there is a need for systems that move beyond simple retrieval tasks. Enter AI agents: autonomous entities capable of executing complex, multi-step processes, maintaining state across interactions, and dynamically adapting to new information. LangGraph, a powerful extension of the LangChain library, is designed to help developers build these advanced AI agents, enabling stateful, multi-actor applications with cyclic computation capabilities.

In this insight, we'll explore how LangGraph changes AI development and provide a step-by-step guide to building your own AI agent, using an example that computes energy savings for solar panels. The example demonstrates how LangGraph's features enable the creation of intelligent, adaptable, and practical AI systems.

What is LangGraph?

LangGraph is an advanced library built on top of LangChain, designed to extend Large Language Model (LLM) applications by introducing cyclic computational capabilities. While LangChain allows for the creation of Directed Acyclic Graphs (DAGs) for linear workflows, LangGraph enhances this by enabling the addition of cycles, which are essential for developing agent-like behaviors. These cycles allow LLMs to continuously loop through processes, making decisions dynamically based on evolving inputs.

LangGraph: Nodes, States, and Edges

The core of LangGraph lies in its stateful graph structure: nodes represent units of computation, a shared state carries data between them, and edges (including conditional edges) define the flow of execution. LangGraph manages the graph structure, state, and coordination, allowing for the creation of sophisticated, multi-actor applications. With automatic state management and precise agent coordination, LangGraph facilitates innovative workflows while minimizing technical complexity.
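The cycle LangGraph manages can be pictured with a minimal hand-rolled state machine in plain Python. This sketch has no LangGraph dependency and all names are illustrative: nodes are functions over a shared state dict, a router plays the role of a conditional edge, and the loop runs until the router returns END.

```python
# Minimal analogue of a LangGraph cycle: an "agent" node decides whether to
# call a "tool" node; the tool's result flows back to the agent via the
# shared state, closing the loop.

END = "__end__"

def agent(state: dict) -> dict:
    # Stand-in for an LLM step: request a tool call until a result exists
    if state.get("tool_result") is None:
        state["next_action"] = "call_tool"
    else:
        state["answer"] = f"result={state['tool_result']}"
        state["next_action"] = "finish"
    return state

def tool(state: dict) -> dict:
    state["tool_result"] = state["query"] * 2  # stand-in tool computation
    return state

def router(state: dict) -> str:
    # Conditional edge out of the agent node
    return "tool" if state["next_action"] == "call_tool" else END

NODES = {"agent": agent, "tool": tool}

def run_graph(state: dict, entry: str = "agent") -> dict:
    node = entry
    while node != END:
        state = NODES[node](state)
        # Conditional edge from the agent; fixed edge tool -> agent
        node = router(state) if node == "agent" else "agent"
    return state
```

Running `run_graph({"query": 21})` visits agent, tool, then agent again before ending, which is exactly the kind of cyclic flow a DAG cannot express.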
Its flexibility enables the development of high-performance applications, and its scalability ensures robust and reliable systems, even at the enterprise level.

Step-by-step Guide

Now that we understand LangGraph's capabilities, let's walk through a practical example. We'll build an AI agent that calculates potential energy savings for solar panels based on user input. The agent can function as a lead-generation tool on a solar panel seller's website, providing personalized savings estimates from key data like monthly electricity costs. The example highlights how LangGraph can automate complex tasks and deliver business value.

Step 1: Import Necessary Libraries

We start by importing the essential Python libraries and modules for the project.

```python
from langchain_core.tools import tool
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnableLambda
from langchain_core.messages import ToolMessage
from langchain_aws import ChatBedrock
import boto3
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.prebuilt import ToolNode
```

Step 2: Define the Tool for Calculating Solar Savings

Next, we define a tool to calculate potential energy savings based on the user's monthly electricity cost.

```python
@tool
def compute_savings(monthly_cost: float) -> dict:
    """
    Tool to compute the potential savings when switching to solar energy
    based on the user's monthly electricity cost.

    Args:
        monthly_cost (float): The user's current monthly electricity cost.

    Returns:
        dict: A dictionary containing:
            - 'number_of_panels': The estimated number of solar panels required.
            - 'installation_cost': The estimated installation cost.
            - 'net_savings_10_years': The net savings over 10 years after installation costs.
    """
    def calculate_solar_savings(monthly_cost):
        # Assumptions for the savings estimate
        cost_per_kWh = 0.28
        cost_per_watt = 1.50
        sunlight_hours_per_day = 3.5
        panel_wattage = 350
        system_lifetime_years = 10

        # Size the system from the user's consumption
        monthly_consumption_kWh = monthly_cost / cost_per_kWh
        daily_energy_production = monthly_consumption_kWh / 30
        system_size_kW = daily_energy_production / sunlight_hours_per_day

        # Panel count, installation cost, and net savings over the lifetime
        number_of_panels = system_size_kW * 1000 / panel_wattage
        installation_cost = system_size_kW * 1000 * cost_per_watt
        annual_savings = monthly_cost * 12
        total_savings_10_years = annual_savings * system_lifetime_years
        net_savings = total_savings_10_years - installation_cost

        return {
            "number_of_panels": round(number_of_panels),
            "installation_cost": round(installation_cost, 2),
            "net_savings_10_years": round(net_savings, 2),
        }

    return calculate_solar_savings(monthly_cost)
```

Step 3: Set Up State Management and Error Handling

We define utilities to manage state and handle errors during tool execution.

```python
def handle_tool_error(state) -> dict:
    # Turn a tool exception into ToolMessages so the LLM can recover
    error = state.get("error")
    tool_calls = state["messages"][-1].tool_calls
    return {
        "messages": [
            ToolMessage(
                content=f"Error: {repr(error)}\n please fix your mistakes.",
                tool_call_id=tc["id"],
            )
            for tc in tool_calls
        ]
    }

def create_tool_node_with_fallback(tools: list) -> dict:
    return ToolNode(tools).with_fallbacks(
        [RunnableLambda(handle_tool_error)], exception_key="error"
    )
```

Step 4: Define the State and Assistant Class

We create the state management class and the assistant responsible for interacting with users.
```python
class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]


class Assistant:
    def __init__(self, runnable: Runnable):
        self.runnable = runnable

    def __call__(self, state: State):
        while True:
            result = self.runnable.invoke(state)
            # If the LLM returned neither a tool call nor real content,
            # re-prompt it for an actual response.
            if not result.tool_calls and (
                not result.content
                or isinstance(result.content, list)
                and not result.content[0].get("text")
            ):
                messages = state["messages"] + [("user", "Respond with a real output.")]
                state = {**state, "messages": messages}
            else:
                break
        return {"messages": result}
```

Step 5: Set Up the LLM with AWS Bedrock

We configure AWS Bedrock to enable advanced LLM capabilities.

```python
def get_bedrock_client(region):
    return boto3.client("bedrock-runtime", region_name=region)


def create_bedrock_llm(client):
    return ChatBedrock(
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        client=client,
        model_kwargs={"temperature": 0},
        region_name="us-east-1",
    )


llm = create_bedrock_llm(get_bedrock_client(region="us-east-1"))
```

Step 6: Define the Assistant's Workflow

We create a prompt template and bind the tool to the assistant's workflow.

```python
primary_assistant_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """You are a helpful customer support assistant for Solar Panels Belgium.
            Get the following information from the user:
            - monthly electricity cost
            Ask for clarification if necessary.
            """,
        ),
        ("placeholder", "{messages}"),
    ]
)

part_1_tools = [compute_savings]
part_1_assistant_runnable = primary_assistant_prompt | llm.bind_tools(part_1_tools)
```

Step 7: Build the Graph Structure

We define nodes and edges for managing the AI assistant's conversation flow.
```python
builder = StateGraph(State)

builder.add_node("assistant", Assistant(part_1_assistant_runnable))
builder.add_node("tools", create_tool_node_with_fallback(part_1_tools))

builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "assistant")

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
```

Step 8: Running the Assistant

The assistant can now be run through its graph structure to interact with users. The `_print_event` helper here is a minimal stand-in for the richer version used in the LangChain tutorials.

```python
import uuid


def _print_event(event, printed: set):
    # Minimal helper: print each new message exactly once.
    message = event.get("messages")
    if message:
        if isinstance(message, list):
            message = message[-1]
        if message.id not in printed:
            print(message.pretty_repr())
            printed.add(message.id)


tutorial_questions = [
    "hey",
    "can you calculate my energy saving",
    "my monthly cost is $100, what will I save",
]

thread_id = str(uuid.uuid4())
config = {"configurable": {"thread_id": thread_id}}

_printed = set()
for question in tutorial_questions:
    events = graph.stream(
        {"messages": ("user", question)}, config, stream_mode="values"
    )
    for event in events:
        _print_event(event, _printed)
```

Conclusion

By following these steps, you can build an AI assistant with LangGraph that calculates solar panel savings based on user input. This tutorial demonstrates how LangGraph empowers developers to create intelligent, adaptable systems capable of handling complex tasks efficiently. Whether your application is in customer support, energy management, or another domain, LangGraph provides the building blocks for robust, stateful agents.

AI Agent Workflows

AI Agent Workflows: The Ultimate Guide to Choosing Between LangChain and LangGraph

Explore two transformative libraries, LangChain and LangGraph, both created by the same developer and designed for building Agentic AI applications. This guide dives into their foundational components, differences in handling functionality, and how to choose the right tool for your use case.

Language Models as the Bridge

Modern language models have unlocked revolutionary ways to connect users with AI systems and enable AI-to-AI communication via natural language. Enterprises aiming to harness Agentic AI capabilities often face the pivotal question: "Which tools should we use?" For those eager to begin, this question can become a roadblock.

Why LangChain and LangGraph?

LangChain and LangGraph are among the leading frameworks for crafting Agentic AI applications. By understanding their core building blocks and approaches to functionality, you'll gain clarity on how each aligns with your needs. Keep in mind that the rapid evolution of generative AI tools means today's truths might shift tomorrow.

Note: Initially, this guide intended to compare AutoGen, LangChain, and LangGraph. However, AutoGen's upcoming 0.4 release introduces a foundational redesign. Stay tuned for insights post-launch!

Understanding the Basics

LangChain

LangChain offers two primary methods:

Key components include:

LangGraph

LangGraph is tailored for graph-based workflows, enabling flexibility in non-linear, conditional, or feedback-loop processes. It's ideal for cases where LangChain's predefined structure might not suffice.
Key components include:

Comparing Functionality

Tool Calling

Conversation History and Memory

Retrieval-Augmented Generation (RAG)

Parallelism and Error Handling

When to Choose LangChain, LangGraph, or Both

LangChain Only

LangGraph Only

Using LangChain + LangGraph Together

Final Thoughts

Whether you choose LangChain, LangGraph, or a combination, the decision depends on your project's complexity and specific needs. By understanding their unique capabilities, you can confidently design robust Agentic AI workflows.
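The structural difference between the two libraries can be sketched without either one: a chain is a fixed linear pipeline, while a graph adds a routing rule that may loop back to an earlier step. The step names, routing condition, and numeric example below are invented for illustration; neither function is the real LangChain or LangGraph API.

```python
# A "chain": a fixed linear pipeline -- each step runs exactly once, in order.
def run_chain(steps, value):
    for step in steps:
        value = step(value)
    return value

# A "graph": steps plus a routing rule that may revisit an earlier step.
def run_graph(steps, route, value, start):
    name = start
    while name is not None:
        value = steps[name](value)
        name = route(name, value)
    return value

double = lambda x: x * 2
add_one = lambda x: x + 1

# Chain: double, then add one, done.
chain_result = run_chain([double, add_one], 3)

# Graph: keep looping back through "double" until the value reaches 10.
graph_steps = {"double": double, "check": add_one}

def route(name, value):
    if name == "double":
        return "check"
    if name == "check" and value < 10:
        return "double"   # feedback loop: not done yet, go around again
    return None

graph_result = run_graph(graph_steps, route, 3, "double")
```

The feedback edge in `route` is the capability LangGraph adds; when your workflow never needs one, LangChain's linear structure is usually the simpler choice.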

Rise of Agentforce

The Rise of Agentforce: How AI Agents Are Shaping the Future of Work

Salesforce wrapped up its annual Dreamforce conference this September, leaving attendees with more than just memories of John Mulaney's quips. As the swarms of Waymos ferried participants across a cleaner-than-usual San Francisco, it became clear that AI-powered agents, dubbed Agentforce, are poised to transform the workplace. These agents, controlled within Salesforce's ecosystem, could significantly change how work is done and how customer experiences are delivered. Dreamforce has always been known for its bold predictions about the future, but this year's vision of AI-based agents felt particularly compelling. These agents represent the next frontier in workplace automation, but as exciting as this future is, some important questions remain.

Reality Check on the Agentforce Vision

During his keynote, Salesforce CEO Marc Benioff raised an interesting point: "Why would our agents be so low-hallucinogenic?" While the agents have access to vast amounts of data, workflows, and services, they currently function best within Salesforce's own environment. Benioff even claimed that Salesforce pioneered prompt engineering, a statement that, for some, might have evoked a scene from Austin Powers, with Dr. Evil humorously taking credit for inventing the question mark. But can Salesforce fully realize its vision for Agentforce? If it succeeds, it could transform how work gets done. However, as with many AI-driven innovations, the real question lies in interoperability.

The Open vs. Closed Debate

As powerful as Salesforce's ecosystem is, not all business data and workflows live within it. If the future of work involves a network of AI agents working together, how far can a closed ecosystem like Salesforce's really go? Apple, Microsoft, Amazon, and other tech giants also have their sights set on AI-driven agents, and the race is on to own this massive opportunity.
As we've seen in previous waves of technology, this raises familiar debates about open versus closed systems. Without a standard for agents to work together across platforms, businesses could find themselves limited. Closed ecosystems may help solve some problems, but to unlock the full potential of AI agents, they must be able to operate seamlessly across different platforms and boundaries.

Looking to the Open Web for Inspiration

The solution may lie in the same principles that guide the open web. Just as mobile apps often require a web view to enable an array of outcomes, the same might be necessary in the multi-agent landscape. Tools like Slack's Block Kit framework allow for simple agent interactions, but they aren't enough for more complex use cases. Take Clockwise Prism, for example: a sophisticated scheduling agent designed to find meeting times when there's no obvious availability. When integrated with other agents to secure that critical meeting, businesses will need a flexible interface to explore multiple scheduling options. A web view for agents could be the key.

The Need for an Open Multi-Agent Standard

Benioff repeatedly stressed that businesses don't want "DIY agents." Enterprises seek controlled, repeatable workflows that deliver consistent value, but they also don't want to be siloed. This is why the future requires an open standard for agents to collaborate across ecosystems and platforms. Imagine initiating a set of work agents from within an Atlassian Jira ticket that's connected to a Salesforce customer case, or vice versa. For agents to interact seamlessly regardless of the system they originate from, a standard is needed. This would allow businesses to deploy agents in a way that's consistent, integrated, and scalable.

User Experience and Human-in-the-Loop: Crucial Elements for AI Agents

A significant insight from the integration of LangChain with Assistant-UI highlighted a crucial factor: user experience (UX).
Whether it's streaming, generative interfaces, or human-in-the-loop functionality, the UX of AI agents is critical. While agents need to respond quickly and efficiently, businesses must be able to involve humans in decision-making when necessary. This principle of human-in-the-loop is key to the agent's scheduling process. While automation is the goal, involving the user at crucial points, such as confirming scheduling options, ensures that the agent remains reliable and adaptable. Any future standard must prioritize this capability, allowing for user involvement where necessary while also enabling full automation when confidence levels are high.

Generative or Native UI?

The discussion about user interfaces for agents often leads to a debate between generative UI and native UI. The latter may be the better approach. A native UI, controlled by the responding service or agent, ensures the interface is tailored to the context and specifics of the agent's task. Whether this UI is rendered using AI or not is an implementation detail that can vary by service. What matters is that the UI feels native to the agent's task, making the user experience seamless and intuitive.

What's Next? The Push for an Open Multi-Agent Future

As we look ahead to the multi-agent future, the need for an open standard is more pressing than ever. At Clockwise, we've drafted something we're calling the Open Multi-Agent Protocol (OMAP), which we hope will foster collaboration and innovation in this space. The future of work is rapidly approaching, one in which new roles, like Agent Orchestrators, will emerge, enabling people to leverage AI agents in unprecedented ways. While Salesforce's vision for Agentforce is ambitious, the key to unlocking its full potential lies in creating a standard that allows agents to work together, across platforms and beyond the boundaries of closed ecosystems.
With the right approach, we can create a future where AI agents transform work in ways we're only beginning to imagine.

AI in Programming

Since the launch of ChatGPT in 2022, developers have been split into two camps: those who ban AI in coding and those who embrace it. Many seasoned programmers not only avoid AI-generated code but also prohibit their teams from using it. Their reasoning is simple: "AI-generated code is unreliable." Even developers who don't share this anti-AI stance have likely faced challenges, hurdles, or frustrations when using AI for programming. The key is finding the right strategies to use AI to your advantage. Many are still using outdated AI strategies from two years ago, which is like cutting down a tree with a kitchen knife.

Two Major Issues with AI for Developers

The wrong way to use AI can be broken down into two parts. First, when ChatGPT launched, the typical way to work with AI was to visit the website and chat with GPT-3.5 in a browser. The process was straightforward: copy code from the IDE, paste it into ChatGPT with a basic prompt like "add comments," get the revised code, check it for errors, and paste it back into the IDE. Many developers, especially beginners and students, still use this method. However, the AI landscape has changed significantly over the last two years, and many have not adjusted their approach to fully leverage AI's potential.

The second common pitfall is how developers iterate with AI. They ask the LLM to generate code, test it, and go back and forth to fix any issues. Often, they fall into an endless loop of AI hallucinations while trying to get the LLM to understand what's wrong. This can be frustrating and unproductive.

Four Tools to Boost Programming Productivity with AI

1. Cursor: AI-First IDE

Cursor is an AI-first IDE built on VS Code but enhanced with AI features. It allows developers to integrate a chatbot API and use AI as an assistant. Cursor integrates seamlessly with VS Code, making it easy for existing users to transition.
It supports various models, including GPT-4, Claude 3.5 Sonnet, and its built-in free cursor-small model. The combination of Cursor and Sonnet 3.5 has been particularly praised for producing reliable coding results. This tool is a significant improvement over copy-pasting code between ChatGPT and an IDE.

2. Micro Agent: Code + Test Case

Micro Agent takes a different approach to AI-generated code by focusing on test cases. Instead of generating large chunks of code, it begins by creating test cases based on the prompt, then writes code that passes those tests. This method results in more grounded and reliable output, especially for functions that are tricky but not overly complex.

3. SWE-agent: AI for GitHub Issues

Developed by Princeton Language and Intelligence, SWE-agent specializes in resolving real-world GitHub repository issues and submitting pull requests. It's a powerful tool for managing large repositories, as it reviews codebases, identifies issues, and makes necessary changes. SWE-agent is open-source and has gained considerable popularity on GitHub.

4. AI Commits: git commit -m

AI Commits generates meaningful commit messages based on your git diff. This simple tool eliminates the need for vague or repetitive commit messages like "minor changes." It's easy to install and uses GPT-3.5 for efficient, AI-generated commit messages.

The Path Forward

To stay productive and achieve goals in the rapidly evolving AI landscape, developers need the right tools. The limitations of AI, such as hallucinations, can't be eliminated, but using the appropriate tools can help mitigate them. Simple, manual interactions like generating code or comments through ChatGPT can be frustrating. By adopting the right strategies and tools, developers can avoid these pitfalls and confidently enhance their coding practices. AI is evolving fast, and keeping up with its changes is crucial. The right tools can make all the difference in your programming workflow.

LLMs and AI

Large Language Models (LLMs): Revolutionizing AI and Custom Solutions

Large Language Models (LLMs) are transforming artificial intelligence by enabling machines to generate and comprehend human-like text, making them indispensable across numerous industries. The global LLM market is experiencing explosive growth, projected to rise from $1.59 billion in 2023 to $259.8 billion by 2030. This surge is driven by the increasing demand for automated content creation, advances in AI technology, and the need for improved human-machine communication. Several factors are propelling this growth, including advancements in AI and Natural Language Processing (NLP), large datasets, and the rising importance of seamless human-machine interaction.

Additionally, private LLMs are gaining traction as businesses seek more control over their data and customization. These private models provide tailored solutions, reduce dependency on third-party providers, and enhance data privacy. This guide will walk you through building your own private LLM, offering valuable insights for both newcomers and seasoned professionals.

What are Large Language Models?

Large Language Models (LLMs) are advanced AI systems that generate human-like text by processing vast amounts of data using sophisticated neural networks, such as transformers. These models excel at tasks such as content creation, language translation, question answering, and conversation, making them valuable across industries, from customer service to data analysis.

LLMs are generally classified into three types:

LLMs learn language rules by analyzing vast text datasets, similar to how reading numerous books helps someone understand a language. Once trained, these models can generate content, answer questions, and engage in meaningful conversations.
For example, an LLM can write a story about a space mission based on knowledge gained from reading space-adventure stories, or it can explain photosynthesis using information drawn from biology texts.

Building a Private LLM

Data Curation for LLMs

Recent LLMs, such as Llama 3 and GPT-4, are trained on massive datasets: Llama 3 on 15 trillion tokens and GPT-4 on 6.5 trillion tokens. These datasets are drawn from diverse sources, including social media (140 trillion tokens), academic texts, and private data, with sizes ranging from hundreds of terabytes to multiple petabytes. This breadth of training enables LLMs to develop a deep understanding of language, covering diverse patterns, vocabularies, and contexts.

Common data sources for LLMs include:

Data Preprocessing

After data collection, the data must be cleaned and structured. Key steps include:

LLM Training Loop

Key training stages include:

Evaluating Your LLM

After training, it is crucial to assess the LLM's performance using industry-standard benchmarks. When fine-tuning LLMs for specific applications, tailor your evaluation metrics to the task. For instance, in healthcare, matching disease descriptions with appropriate codes may be a top priority.

Conclusion

Building a private LLM provides unmatched customization, enhanced data privacy, and optimized performance. From data curation to model evaluation, this guide has outlined the essential steps to create an LLM tailored to your specific needs. Whether you're just starting or seeking to refine your skills, building a private LLM can empower your organization with state-of-the-art AI capabilities. For expert guidance or to kickstart your LLM journey, feel free to contact us for a free consultation.
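The preprocessing stage described above can be sketched in plain Python. The two rules shown here (whitespace/control-character normalization and exact-duplicate removal by hash) are illustrative assumptions, a small slice of what a production LLM data pipeline would actually do.

```python
import hashlib
import re

def clean_text(raw: str) -> str:
    """Normalize a raw document: strip control characters, collapse whitespace."""
    text = re.sub(r"[\x00-\x1f\x7f]", " ", raw)  # drop control chars (incl. newlines)
    text = re.sub(r"\s+", " ", text)             # collapse runs of whitespace
    return text.strip()

def deduplicate(docs: list[str]) -> list[str]:
    """Remove exact duplicates by hashing each cleaned document."""
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

# Two of these documents are duplicates once cleaned.
corpus = [
    "LLMs  learn language rules\nfrom large text datasets.",
    "LLMs learn language rules from large text datasets.",
    "Once trained, they can answer questions.",
]
cleaned = deduplicate([clean_text(d) for d in corpus])
```

At petabyte scale the same idea applies, but with near-duplicate detection (e.g. MinHash) and distributed execution rather than an in-memory set.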

The State of AI

The State of AI: How We Got Here (and What's Next)

Artificial intelligence (AI) has evolved from the realm of science fiction into a transformative force reshaping industries and lives around the world. But how did AI develop into the technology we know today, and where is it headed next? At Dreamforce, two of Salesforce's leading minds in AI, Chief Scientist Silvio Savarese and Chief Futurist Peter Schwartz, offered insights into AI's past, present, and future.

How We Got Here: The Evolution of AI

AI's roots trace back decades, and its journey has been defined by cycles of innovation and setbacks. Peter Schwartz, Salesforce's Chief Futurist, shared a firsthand perspective on these developments. Having been involved in AI since the 1970s, Schwartz witnessed the first "AI winter," a period of reduced funding and interest due to the immense challenges of understanding and replicating the human brain. In the 1990s and early 2000s, AI shifted from attempting to mimic human cognition to adopting data-driven models. This new direction opened up possibilities beyond the constraints of brain-inspired approaches. By the 2010s, neural networks re-emerged, revolutionizing AI by enabling systems to process raw data without extensive pre-processing.

Savarese, who began his AI research during one of these challenging periods, emphasized the breakthroughs in neural networks and their successor, transformers. These advancements culminated in large language models (LLMs), which can now process massive datasets, generate natural language, and perform tasks ranging from creating content to developing action plans. Today, AI has progressed to a new frontier: large action models. These systems go beyond generating text, enabling AI to take actions, adapt through feedback, and refine performance autonomously.

Where We Are Now: The Present State of AI

The pace of AI innovation is staggering. Just a year ago, discussions centered on copilots, AI systems designed to assist humans.
Now, the conversation has shifted to autonomous AI agents capable of performing complex tasks with minimal human oversight. Schwartz highlighted the current uncertainties surrounding AI, particularly in regulated industries like banking and healthcare. Leaders are grappling with questions about deployment speed, regulatory hurdles, and the broader societal implications of AI. While many startups in the AI space will fail, some will emerge as the giants of the next generation. Salesforce's own advancements, such as the Atlas Reasoning Engine, underscore the rapid progress. These technologies are shaping products like Agentforce, an AI-powered suite designed to revolutionize customer interactions and operational efficiency.

What's Next: The Future of AI

According to Savarese, the future lies in autonomous AI systems, which include two categories:

The Road Ahead

As AI continues to evolve, it's clear that its potential is boundless. However, the path forward will require careful navigation of ethical, regulatory, and practical challenges. The key to success lies in innovation, collaboration, and a commitment to creating systems that enhance human capabilities. For Salesforce, the journey has only just begun. With groundbreaking technologies and visionary leadership, the company is not just predicting the future of AI; it's creating it.

Fivetran's Hybrid Deployment

Fivetran's Hybrid Deployment: A Breakthrough in Data Engineering

In the data engineering world, balancing efficiency with security has long been a challenge. Fivetran aims to shift this dynamic with its Hybrid Deployment solution, designed to seamlessly move data across any environment while maintaining control and flexibility.

The Hybrid Advantage: Flexibility Meets Control

Fivetran's Hybrid Deployment offers a new approach for enterprises, particularly those handling sensitive data or operating in regulated sectors. Often, these businesses struggle to adopt data-driven practices due to security concerns. Hybrid Deployment changes this by enabling the secure movement of data across cloud and on-premises environments, giving businesses full control over their data while maintaining the agility of the cloud. As George Fraser, Fivetran's CEO, notes, "Businesses no longer have to choose between managed automation and data control. They can now securely move data from all their critical sources—like Salesforce, Workday, Oracle, SAP—into a data warehouse or data lake, while keeping that data under their own control."

How It Works: A Secure, Streamlined Approach

Fivetran's Hybrid Deployment relies on a lightweight local agent to move data securely within a customer's environment, while the Fivetran platform handles management and monitoring. This separation of the control and data planes ensures that sensitive information stays within the customer's secure perimeter. Vinay Kumar Katta, a managing delivery architect at Capgemini, highlights the flexibility this provides, enabling businesses to design pipelines without sacrificing security.

Beyond Security: Additional Benefits

Hybrid Deployment's benefits go beyond just security. It also offers:

Early adopters are already seeing its value. Troy Fokken, chief architect at phData, praises how it "streamlines data pipeline processes," especially for customers in regulated industries.
AI Agent Architectures: Defining the Future of Autonomous Systems

In the rapidly evolving world of AI, a new framework is emerging: AI agents designed to act autonomously, adapt dynamically, and explore digital environments. These AI agents are built on core architectural principles, bringing the next generation of autonomy to AI-driven tasks.

What Are AI Agents?

AI agents are systems designed to autonomously or semi-autonomously perform tasks, leveraging tools to achieve objectives. For instance, these agents may use APIs, perform web searches, or interact with digital environments. At their core, AI agents use Large Language Models (LLMs) and Foundation Models (FMs) to break down complex tasks, similar to human reasoning.

Large Action Models (LAMs)

Just as LLMs transformed natural language processing, Large Action Models (LAMs) are revolutionizing how AI agents interact with environments. These models excel at function calling: turning natural language into structured, executable actions, enabling AI agents to perform real-world tasks like scheduling or triggering API calls. Salesforce AI Research, for instance, has open-sourced several LAMs designed to facilitate meaningful actions. LAMs bridge the gap between unstructured inputs and structured outputs, making AI agents more effective in complex environments.

Model Orchestration and Small Language Models (SLMs)

Model orchestration complements LAMs by utilizing smaller, specialized models (SLMs) for niche tasks. Instead of relying on resource-heavy models, AI agents can call upon these smaller models for specific functions, such as summarizing data or executing commands, creating a more efficient system. SLMs, combined with techniques like Retrieval-Augmented Generation (RAG), allow smaller models to perform comparably to their larger counterparts, enhancing their ability to handle knowledge-intensive tasks.
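Model orchestration can be sketched as a simple router: cheap specialized handlers take the task types they cover, and anything else falls through to a large general model. The task labels and handler functions below are invented for illustration; real orchestrators route on richer signals than a single type field.

```python
# Orchestration sketch: route each task to the smallest model that covers it.

def small_summarizer(task):
    # Stand-in for a small summarization model.
    return "summary:" + task["text"][:20]

def small_classifier(task):
    # Stand-in for a small sentiment classifier.
    return "label:" + ("positive" if "great" in task["text"] else "neutral")

def large_general_model(task):
    # Stand-in for an expensive general-purpose LLM.
    return "llm-answer for: " + task["text"]

# Specialized small models, keyed by task type; everything else goes to the LLM.
registry = {"summarize": small_summarizer, "classify": small_classifier}

def orchestrate(task):
    handler = registry.get(task["type"], large_general_model)
    return handler(task)

results = [
    orchestrate({"type": "classify", "text": "great product"}),
    orchestrate({"type": "plan", "text": "book travel"}),
]
```

The efficiency gain comes from the fallthrough: the expensive model is invoked only for the open-ended "plan" task, while the routine classification stays on the small model.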
Vision-Enabled Language Models for Digital Exploration

AI agents are becoming even more capable with vision-enabled language models, which allow them to interact with digital environments. Projects like Apple's Ferret-UI and WebVoyager exemplify this: agents can navigate user interfaces, recognize elements via OCR, and explore websites autonomously.

Function Calling: Structured, Actionable Outputs

A fundamental shift is happening with function calling in AI agents, moving from unstructured text to structured, actionable outputs. This allows AI agents to interact with systems more efficiently, triggering specific actions like booking meetings or executing API calls.

The Role of Tools and Human-in-the-Loop

AI agents rely on tools (algorithms, scripts, or even humans in the loop) to perform tasks and guide actions. This approach is particularly valuable in high-stakes industries like healthcare and finance, where precision is crucial.

The Future of AI Agents

With the advent of Large Action Models, model orchestration, and function calling, AI agents are becoming powerful problem solvers. These agents are evolving to explore, learn, and act within digital ecosystems, bringing us closer to a future where AI mimics human problem-solving processes. As AI agents become more sophisticated, they will redefine how we approach digital tasks and interactions.
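Function calling boils down to the model emitting a structured action instead of free-form prose, and the runtime dispatching it to real code. A minimal dispatcher might look like the following; the tool name, arguments, and JSON shape are invented for illustration (real providers each define their own call format).

```python
import json

# Tools the agent may invoke, keyed by name.
def book_meeting(attendee: str, hour: int) -> str:
    return f"meeting booked with {attendee} at {hour}:00"

TOOLS = {"book_meeting": book_meeting}

# What a function-calling model emits: structured JSON, not prose.
model_output = '{"name": "book_meeting", "arguments": {"attendee": "dana", "hour": 14}}'

def dispatch(raw: str) -> str:
    """Parse the model's structured call and execute the matching tool."""
    call = json.loads(raw)
    tool = TOOLS[call["name"]]
    return tool(**call["arguments"])

result = dispatch(model_output)
```

Because the output is machine-parseable, the runtime can validate the tool name and arguments before executing anything, which is exactly where a human-in-the-loop confirmation step can be inserted for high-stakes actions.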
gettectonic.com