LangGraph Archives - gettectonic.com

On Premise Gen AI

In 2025, enterprises moving generative AI (GenAI) into production after years of experimentation are increasingly considering on-premises deployment as a cost-effective alternative to the cloud. Since OpenAI ignited the AI revolution in late 2022, organizations have tested the large language models powering GenAI services on platforms like AWS, Microsoft Azure, and Google Cloud. These experiments demonstrated GenAI’s potential to enhance business operations while exposing the substantial costs of cloud usage.

To avoid difficult conversations with CFOs about escalating cloud expenses, CIOs are exploring on-premises AI as a financially viable solution. Advances in software from startups, and packaged infrastructure from vendors such as HPE and Dell, are making private data centers an attractive option for managing costs.

A survey conducted by Menlo Ventures in late 2024 found that 47% of U.S. enterprises with at least 50 employees were developing GenAI solutions in-house. Similarly, Informa TechTarget’s Enterprise Strategy Group reported a rise in enterprises considering on-premises and public cloud equally for new applications—from 37% in 2024 to 45% in 2025.

This shift is reflected in hardware sales. HPE reported a 16% revenue increase in AI systems, reaching $1.5 billion in Q4 2024. During the same period, Dell recorded a record $3.6 billion in AI server orders, with its sales pipeline expanding by over 50% across various customer segments. “Customers are seeking diverse AI-capable server solutions,” noted David Schmidt, senior director of Dell’s PowerEdge server line.

While heavily regulated industries have traditionally relied on on-premises systems to ensure data privacy and security, broader adoption is now driven by the need for cost control. Fortune 2000 companies are leading this trend, opting for private infrastructure over the cloud because its expenses are more predictable. “It’s not unusual to see cloud bills exceeding $100,000 or even $1 million per month,” said John Annand, an analyst at Info-Tech Research Group.

Global manufacturing giant Jabil primarily uses AWS for GenAI development but emphasizes ongoing cost management. “Does moving to the cloud provide a cost advantage? Sometimes it doesn’t,” said CIO May Yap. Jabil employs a continuous cloud financial optimization process to maximize efficiency.

On-Premises AI: Technology and Trends

Enterprises now have alternatives to cloud infrastructure, including as-a-service offerings like Dell APEX and HPE GreenLake, which provide flexible pay-per-use pricing for AI servers, storage, and networking tailored to private data centers or colocation facilities. “The high cost of cloud drives organizations to seek more predictable expenses,” said Tiffany Osias, vice president of global colocation services at Equinix.

Walmart exemplifies in-house AI development, creating tools like a document summarization app for its benefits help desk and an AI assistant for corporate employees. Startups are also enabling enterprises to build AI applications with turnkey solutions. “About 80% of GenAI requirements can now be addressed with push-button solutions from startups,” said Tim Tully, partner at Menlo Ventures. Companies like Ragie (RAG-as-a-service) and Lamatic.ai (GenAI platform-as-a-service) are driving this innovation. Others, like Squid AI, integrate custom AI agents with existing enterprise infrastructure.

Open-source frameworks like LangChain further empower on-premises development, offering tools for creating chatbots, virtual assistants, and intelligent search systems. Its extension, LangGraph, adds functionality for building multi-agent workflows.
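As a minimal sketch of what on-premises development with these tools can look like, the following chain assumes a locally hosted model served through the langchain-ollama integration; the model name and prompts are illustrative, not taken from any of the deployments mentioned above.

from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

# Chat model served by a local Ollama instance; no cloud API involved
llm = ChatOllama(model="llama3.1")  # placeholder model name

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant for internal employees."),
    ("human", "{question}"),
])

# Compose the prompt and model into a simple chain and invoke it
chain = prompt | llm
print(chain.invoke({"question": "What does our travel policy cover?"}).content)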
As enterprises develop AI applications internally, consulting services will play a pivotal role. “Companies offering guidance on effective AI tool usage and aligning them with business outcomes will thrive,” Annand said.

This evolution in AI deployment highlights the growing importance of balancing technological innovation with financial sustainability.

AI Assistants Using LangGraph

In the evolving world of AI, retrieval-augmented generation (RAG) systems have become standard for handling straightforward queries and generating contextually relevant responses. However, as demand grows for more sophisticated AI applications, there is a need for systems that move beyond simple retrieval tasks. Enter AI agents: autonomous entities capable of executing complex, multi-step processes, maintaining state across interactions, and dynamically adapting to new information. LangGraph, a powerful extension of the LangChain library, is designed to help developers build these advanced AI agents, enabling stateful, multi-actor applications with cyclic computation capabilities.

In this insight, we’ll explore how LangGraph revolutionizes AI development and provide a step-by-step guide to building your own AI agent, using an example that computes energy savings for solar panels. This example will demonstrate how LangGraph’s unique features enable the creation of intelligent, adaptable, and practical AI systems.

What is LangGraph?

LangGraph is an advanced library built on top of LangChain, designed to extend Large Language Model (LLM) applications by introducing cyclic computational capabilities. While LangChain allows for the creation of Directed Acyclic Graphs (DAGs) for linear workflows, LangGraph enhances this by enabling the addition of cycles, which are essential for developing agent-like behaviors. These cycles allow LLMs to continuously loop through processes, making decisions dynamically based on evolving inputs.

LangGraph: Nodes, States, and Edges

The core of LangGraph lies in its stateful graph structure: nodes represent individual computation steps (such as an LLM call or a tool invocation), edges define how control flows between nodes (including conditional branches and cycles), and a shared state carries data across the entire execution.

LangGraph redefines AI development by managing the graph structure, state, and coordination, allowing for the creation of sophisticated, multi-actor applications. With automatic state management and precise agent coordination, LangGraph facilitates innovative workflows while minimizing technical complexity. Its flexibility enables the development of high-performance applications, and its scalability ensures robust and reliable systems, even at the enterprise level.
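Before diving into the full tutorial, here is a minimal, self-contained sketch of these ideas: a graph whose single node loops back on itself until the state meets a condition. The names (CounterState, increment, should_continue) are illustrative only and are not part of the solar-panel example that follows.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class CounterState(TypedDict):
    count: int

def increment(state: CounterState) -> dict:
    # Node: return a partial state update
    return {"count": state["count"] + 1}

def should_continue(state: CounterState) -> str:
    # Conditional edge: loop back to the node until the counter reaches 3
    return "increment" if state["count"] < 3 else END

builder = StateGraph(CounterState)
builder.add_node("increment", increment)
builder.add_edge(START, "increment")
builder.add_conditional_edges("increment", should_continue)

graph = builder.compile()
print(graph.invoke({"count": 0}))  # {'count': 3}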
Step-by-step Guide

Now that we understand LangGraph’s capabilities, let’s dive into a practical example. We’ll build an AI agent that calculates potential energy savings for solar panels based on user input. This agent can function as a lead-generation tool on a solar panel seller’s website, providing personalized savings estimates based on key data like monthly electricity costs. This example highlights how LangGraph can automate complex tasks and deliver business value.

Step 1: Import Necessary Libraries

We start by importing the essential Python libraries and modules for the project.

from langchain_core.tools import tool
from langchain_community.tools.tavily_search import TavilySearchResults  # imported in the original tutorial; unused in this example
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnableLambda
from langchain_core.messages import ToolMessage
from langchain_aws import ChatBedrock
import boto3
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.prebuilt import ToolNode

# Needed later for building and running the graph (Steps 7 and 8)
from langgraph.graph import StateGraph, START
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import tools_condition

Step 2: Define the Tool for Calculating Solar Savings

Next, we define a tool to calculate potential energy savings based on the user’s monthly electricity cost.

@tool
def compute_savings(monthly_cost: float) -> dict:
    """
    Tool to compute the potential savings when switching to solar energy
    based on the user's monthly electricity cost.

    Args:
        monthly_cost (float): The user's current monthly electricity cost.

    Returns:
        dict: A dictionary containing:
            - 'number_of_panels': The estimated number of solar panels required.
            - 'installation_cost': The estimated installation cost.
            - 'net_savings_10_years': The net savings over 10 years after installation costs.
    """
    def calculate_solar_savings(monthly_cost):
        # Assumptions for the solar energy model
        cost_per_kWh = 0.28
        cost_per_watt = 1.50
        sunlight_hours_per_day = 3.5
        panel_wattage = 350
        system_lifetime_years = 10

        # Derive system size from the user's consumption
        monthly_consumption_kWh = monthly_cost / cost_per_kWh
        daily_energy_production = monthly_consumption_kWh / 30
        system_size_kW = daily_energy_production / sunlight_hours_per_day
        number_of_panels = system_size_kW * 1000 / panel_wattage
        installation_cost = system_size_kW * 1000 * cost_per_watt

        # Compare cumulative savings against the installation cost
        annual_savings = monthly_cost * 12
        total_savings_10_years = annual_savings * system_lifetime_years
        net_savings = total_savings_10_years - installation_cost

        return {
            "number_of_panels": round(number_of_panels),
            "installation_cost": round(installation_cost, 2),
            "net_savings_10_years": round(net_savings, 2),
        }

    return calculate_solar_savings(monthly_cost)

Step 3: Set Up State Management and Error Handling

We define utilities to manage state and handle errors during tool execution.

def handle_tool_error(state) -> dict:
    # Surface the tool's exception back to the LLM as a ToolMessage
    error = state.get("error")
    tool_calls = state["messages"][-1].tool_calls
    return {
        "messages": [
            ToolMessage(
                content=f"Error: {repr(error)}\nPlease fix your mistakes.",
                tool_call_id=tc["id"],
            )
            for tc in tool_calls
        ]
    }

def create_tool_node_with_fallback(tools: list) -> dict:
    return ToolNode(tools).with_fallbacks(
        [RunnableLambda(handle_tool_error)], exception_key="error"
    )

Step 4: Define the State and Assistant Class

We create the state management class and the assistant responsible for interacting with users.

class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

class Assistant:
    def __init__(self, runnable: Runnable):
        self.runnable = runnable

    def __call__(self, state: State):
        while True:
            result = self.runnable.invoke(state)
            # Re-prompt if the LLM returns an empty response with no tool calls
            if not result.tool_calls and (
                not result.content
                or isinstance(result.content, list)
                and not result.content[0].get("text")
            ):
                messages = state["messages"] + [("user", "Respond with a real output.")]
                state = {**state, "messages": messages}
            else:
                break
        return {"messages": result}

Step 5: Set Up the LLM with AWS Bedrock

We configure AWS Bedrock to enable advanced LLM capabilities.

def get_bedrock_client(region):
    return boto3.client("bedrock-runtime", region_name=region)

def create_bedrock_llm(client):
    return ChatBedrock(
        model_id='anthropic.claude-3-sonnet-20240229-v1:0',
        client=client,
        model_kwargs={'temperature': 0},
        region_name='us-east-1',
    )

llm = create_bedrock_llm(get_bedrock_client(region='us-east-1'))

Step 6: Define the Assistant’s Workflow

We create a template and bind the tools to the assistant’s workflow.

primary_assistant_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            '''You are a helpful customer support assistant for Solar Panels Belgium.
            Get the following information from the user:
            - monthly electricity cost
            Ask for clarification if necessary.
            ''',
        ),
        ("placeholder", "{messages}"),
    ]
)

part_1_tools = [compute_savings]
part_1_assistant_runnable = primary_assistant_prompt | llm.bind_tools(part_1_tools)

Step 7: Build the Graph Structure

We define nodes and edges for managing the AI assistant’s conversation flow.

builder = StateGraph(State)

builder.add_node("assistant", Assistant(part_1_assistant_runnable))
builder.add_node("tools", create_tool_node_with_fallback(part_1_tools))

builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "assistant")

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
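The run loop in Step 8 uses a _print_event helper that the tutorial itself does not define. A minimal sketch, assuming each event streamed with stream_mode="values" carries a "messages" list and deduplicating by message ID, might look like this:

def _print_event(event: dict, _printed: set, max_length: int = 1500) -> None:
    # Print each new message exactly once, truncating very long output
    message = event.get("messages")
    if message:
        if isinstance(message, list):
            message = message[-1]  # only the most recent message
        if message.id not in _printed:
            msg_repr = message.pretty_repr(html=True)
            if len(msg_repr) > max_length:
                msg_repr = msg_repr[:max_length] + " ... (truncated)"
            print(msg_repr)
            _printed.add(message.id)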
Step 8: Running the Assistant

The assistant can now be run through its graph structure to interact with users.

import uuid

tutorial_questions = [
    'hey',
    'can you calculate my energy saving',
    "my monthly cost is $100, what will I save",
]

thread_id = str(uuid.uuid4())
config = {"configurable": {"thread_id": thread_id}}

_printed = set()
for question in tutorial_questions:
    events = graph.stream(
        {"messages": ("user", question)}, config, stream_mode="values"
    )
    for event in events:
        _print_event(event, _printed)

Conclusion

By following these steps, you can create AI assistants using LangGraph to calculate solar panel savings based on user input. This tutorial demonstrates how LangGraph empowers developers to create intelligent, adaptable systems capable of handling complex tasks efficiently. Whether your application is in customer support, energy management, or another domain, LangGraph provides the structure and flexibility to build it.

AI Agent Workflows

AI Agent Workflows: The Ultimate Guide to Choosing Between LangChain and LangGraph

Explore two transformative libraries, LangChain and LangGraph, both created by the same developer to build Agentic AI applications. This guide dives into their foundational components, their differences in handling functionality, and how to choose the right tool for your use case.

Language Models as the Bridge

Modern language models have unlocked revolutionary ways to connect users with AI systems and enable AI-to-AI communication via natural language. Enterprises aiming to harness Agentic AI capabilities often face the pivotal question: “Which tools should we use?” For those eager to begin, this question can become a roadblock.

Why LangChain and LangGraph?

LangChain and LangGraph are among the leading frameworks for crafting Agentic AI applications. By understanding their core building blocks and approaches to functionality, you’ll gain clarity on how each aligns with your needs. Keep in mind that the rapid evolution of generative AI tools means today’s truths might shift tomorrow.

Note: Initially, this guide intended to compare AutoGen, LangChain, and LangGraph. However, AutoGen’s upcoming 0.4 release introduces a foundational redesign. Stay tuned for insights post-launch!

Understanding the Basics

LangChain offers two primary methods for building applications: using its components individually or composing them into prebuilt chains. LangGraph, by contrast, is tailored for graph-based workflows, enabling flexibility in non-linear, conditional, or feedback-loop processes. It’s ideal for cases where LangChain’s predefined structure might not suffice, and its key components are the nodes, edges, and state of a graph.

Comparing Functionality

The two libraries differ in how they handle tool calling, conversation history and memory, retrieval-augmented generation (RAG), and parallelism and error handling. The sketch at the end of this post illustrates the most visible difference: LangChain expresses a task as a linear chain, while LangGraph expresses the same task as a node in a graph that can later grow branches and loops.

When to Choose LangChain, LangGraph, or Both

Depending on your project, you may reach for LangChain only, LangGraph only, or the two together.

Final Thoughts

Whether you choose LangChain, LangGraph, or a combination, the decision depends on your project’s complexity and specific needs. By understanding their unique capabilities, you can confidently design robust Agentic AI workflows.
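The comparison sketch referenced above: it assumes the langchain-openai integration and an illustrative model name, but any chat-model integration works the same way.

from typing import TypedDict

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")

# LangChain: a linear, predefined pipeline
chain = prompt | llm

# LangGraph: the same step expressed as a node in a stateful graph,
# which could later gain branches, loops, or additional actors
class SummaryState(TypedDict):
    text: str
    summary: str

def summarize(state: SummaryState) -> dict:
    return {"summary": chain.invoke({"text": state["text"]}).content}

builder = StateGraph(SummaryState)
builder.add_node("summarize", summarize)
builder.add_edge(START, "summarize")
builder.add_edge("summarize", END)
graph = builder.compile()

result = graph.invoke({"text": "LangGraph adds cycles to LLM workflows."})
print(result["summary"])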

Exploring Emerging LLM Agent Types and Architectures

The Evolution Beyond ReAct Agents

The shortcomings of first-generation ReAct agents have paved the way for a new era of LLM agents, bringing innovative architectures and possibilities. In 2024, agents have taken center stage in the AI landscape. Companies globally are developing chatbot agents, tools like MultiOn are bridging agents to external websites, and frameworks like LangGraph and LlamaIndex Workflows are helping developers build more structured, capable agents.

However, despite their rising popularity within the AI community, agents have yet to see widespread adoption among consumers or enterprises. This leaves businesses wondering: How do we navigate these emerging frameworks and architectures? Which tools should we leverage for our next application? Having recently developed a sophisticated agent as a product copilot, we share key insights to guide you through the evolving agent ecosystem.

What Are LLM-Based Agents?

At their core, LLM-based agents are software systems designed to execute complex tasks by chaining together multiple processing steps, including LLM calls. They decide dynamically which steps to take, maintain state across those steps, and call external tools along the way.

The Rise and Fall of ReAct Agents

ReAct (reason, act) agents marked the first wave of LLM-powered tools. Promising broad functionality through abstraction, they fell short due to their limited utility and overgeneralized design. These challenges spurred the emergence of second-generation agents, emphasizing structure and specificity.

The Second Generation: Structured, Scalable Agents

Modern agents are defined by smaller solution spaces, offering narrower but more reliable capabilities. Instead of open-ended design, these agents map out defined paths for actions, improving precision and performance; the sketch at the end of this post contrasts the two styles.

Agent Architectures and Development Frameworks

Several frameworks, such as LangGraph and LlamaIndex Workflows, are now available to simplify and streamline agent development. While frameworks can impose best practices and tooling, they may introduce limitations for highly complex applications. Many developers still prefer code-driven solutions for greater control.

Should You Build an Agent?

Before investing in agent development, ask whether your use case actually needs what agents provide: multi-step task execution, state maintained across interactions, and dynamic adaptation to new information. If the answer is yes, an agent may be a suitable choice.

Challenges and Solutions in Agent Development

Agent development comes with its own set of common issues, and the strategies for addressing them continue to mature.

Conclusion

The generative AI landscape is brimming with new frameworks and fervent innovation. Before diving into development, evaluate your application needs and consider whether agent frameworks align with your objectives. By thoughtfully assessing the tools and architectures available, you can create agents that deliver measurable value while avoiding unnecessary complexity.
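To make the first-generation versus second-generation contrast concrete, here is a schematic sketch. All names are hypothetical: llm.reason_and_act, llm.extract_number, and llm.summarize stand in for real model calls, and compute_savings stands in for any domain tool.

# First generation: an open-ended ReAct loop. The LLM freely picks any
# tool at every step until it decides it is finished.
def react_agent(llm, tools: dict, task: str, max_steps: int = 10):
    history = [task]
    for _ in range(max_steps):
        thought, action, arg = llm.reason_and_act(history, list(tools))
        if action == "finish":
            return arg
        history.append(tools[action](arg))  # unconstrained tool choice

# Second generation: a structured agent. The solution space is narrowed
# to one explicit, validated path.
def solar_savings_agent(llm, compute_savings, question: str):
    monthly_cost = llm.extract_number(question)   # step 1: extract the input
    if monthly_cost is None:
        return "Could you share your monthly electricity cost?"
    result = compute_savings(monthly_cost)        # step 2: fixed tool call
    return llm.summarize(result)                  # step 3: present the result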
