
AI Assistants Using LangGraph

In the evolving world of AI, retrieval-augmented generation (RAG) systems have become standard for handling straightforward queries and generating contextually relevant responses. However, as demand grows for more sophisticated AI applications, there is a need for systems that move beyond simple retrieval tasks. Enter AI agents: autonomous entities capable of executing complex, multi-step processes, maintaining state across interactions, and dynamically adapting to new information. LangGraph, a powerful extension of the LangChain library, is designed to help developers build these advanced AI agents, enabling stateful, multi-actor applications with cyclic computation capabilities.

In this insight, we'll explore how LangGraph advances AI development and provide a step-by-step guide to building your own AI agent, using an example that computes energy savings for solar panels. The example demonstrates how LangGraph's features enable the creation of intelligent, adaptable, and practical AI systems.

What is LangGraph?

LangGraph is an advanced library built on top of LangChain, designed to extend Large Language Model (LLM) applications by introducing cyclic computational capabilities. While LangChain supports Directed Acyclic Graphs (DAGs) for linear workflows, LangGraph adds the ability to create cycles, which are essential for developing agent-like behaviors. Cycles let an LLM loop through a process repeatedly, making decisions dynamically based on evolving inputs.

LangGraph: Nodes, States, and Edges

The core of LangGraph lies in its stateful graph structure:

- Nodes are the units of work, typically Python functions that read the current state and return an update to it.
- State is a shared data structure (commonly a TypedDict) that persists across the run and accumulates each node's output.
- Edges connect nodes and determine execution order; conditional edges let the graph branch or loop based on the current state.

LangGraph manages the graph structure, state, and coordination for you, allowing the creation of sophisticated, multi-actor applications. With automatic state management and precise agent coordination, it supports complex workflows while minimizing technical overhead. Its flexibility enables high-performance applications, and its scalability supports robust, reliable systems, even at the enterprise level.
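To make these pieces concrete before the full tutorial, here is a minimal, self-contained sketch (our illustration, not code from the original article) in which a conditional edge loops a single node until the state reaches a threshold:

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

# State: shared data passed between nodes.
class CounterState(TypedDict):
    count: int

# Node: a plain function that reads state and returns an update.
def increment(state: CounterState) -> dict:
    return {"count": state["count"] + 1}

# Conditional edge: loop back until the counter reaches 3, then stop.
def should_continue(state: CounterState) -> str:
    return "increment" if state["count"] < 3 else END

builder = StateGraph(CounterState)
builder.add_node("increment", increment)
builder.add_edge(START, "increment")
builder.add_conditional_edges("increment", should_continue)

graph = builder.compile()
print(graph.invoke({"count": 0}))  # {'count': 3}
```

The same pattern, with an LLM node and a tool node inside the loop, is what the agent built below relies on.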
Step-by-step Guide

Now that we understand LangGraph's capabilities, let's dive into a practical example. We'll build an AI agent that calculates potential energy savings for solar panels based on user input. This agent can function as a lead-generation tool on a solar panel seller's website, providing personalized savings estimates from key data such as monthly electricity costs. The example highlights how LangGraph can automate complex tasks and deliver business value.

Step 1: Import Necessary Libraries

We start by importing the essential Python libraries and modules for the project.

```python
from langchain_core.tools import tool
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnableLambda
from langchain_core.messages import ToolMessage
from langchain_aws import ChatBedrock
import boto3
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.prebuilt import ToolNode
```

Step 2: Define the Tool for Calculating Solar Savings

Next, we define a tool to calculate potential energy savings based on the user's monthly electricity cost.

```python
@tool
def compute_savings(monthly_cost: float) -> dict:
    """
    Compute the potential savings from switching to solar energy,
    based on the user's current monthly electricity cost.

    Args:
        monthly_cost (float): The user's current monthly electricity cost.

    Returns:
        dict: A dictionary containing:
            - 'number_of_panels': The estimated number of solar panels required.
            - 'installation_cost': The estimated installation cost.
            - 'net_savings_10_years': The net savings over 10 years after installation costs.
    """
    def calculate_solar_savings(monthly_cost):
        # Assumptions the article uses for its savings model
        cost_per_kWh = 0.28
        cost_per_watt = 1.50
        sunlight_hours_per_day = 3.5
        panel_wattage = 350
        system_lifetime_years = 10

        # Derive system size from current consumption
        monthly_consumption_kWh = monthly_cost / cost_per_kWh
        daily_energy_production = monthly_consumption_kWh / 30
        system_size_kW = daily_energy_production / sunlight_hours_per_day

        # Panel count, installation cost, and 10-year net savings
        number_of_panels = system_size_kW * 1000 / panel_wattage
        installation_cost = system_size_kW * 1000 * cost_per_watt
        annual_savings = monthly_cost * 12
        total_savings_10_years = annual_savings * system_lifetime_years
        net_savings = total_savings_10_years - installation_cost

        return {
            "number_of_panels": round(number_of_panels),
            "installation_cost": round(installation_cost, 2),
            "net_savings_10_years": round(net_savings, 2),
        }

    return calculate_solar_savings(monthly_cost)
```
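Though not part of the original article, a quick direct invocation through LangChain's standard tool interface confirms the model's arithmetic for a $100 monthly bill:

```python
# Sanity check: invoke the @tool directly, outside the agent.
result = compute_savings.invoke({"monthly_cost": 100.0})
print(result)
# {'number_of_panels': 10, 'installation_cost': 5102.04, 'net_savings_10_years': 6897.96}
```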
Step 3: Set Up State Management and Error Handling

We define utilities to manage state and handle errors during tool execution, feeding any failure back to the model as a ToolMessage so it can correct itself.

```python
def handle_tool_error(state) -> dict:
    # Surface the tool error back to the model so it can fix its own mistake
    error = state.get("error")
    tool_calls = state["messages"][-1].tool_calls
    return {
        "messages": [
            ToolMessage(
                content=f"Error: {repr(error)}\nPlease fix your mistakes.",
                tool_call_id=tc["id"],
            )
            for tc in tool_calls
        ]
    }

def create_tool_node_with_fallback(tools: list) -> dict:
    return ToolNode(tools).with_fallbacks(
        [RunnableLambda(handle_tool_error)], exception_key="error"
    )
```

Step 4: Define the State and Assistant Class

We create the state management class and the assistant responsible for interacting with users.

```python
class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

class Assistant:
    def __init__(self, runnable: Runnable):
        self.runnable = runnable

    def __call__(self, state: State):
        while True:
            result = self.runnable.invoke(state)
            # Re-prompt if the LLM returns an empty response with no tool calls
            if not result.tool_calls and (
                not result.content
                or isinstance(result.content, list)
                and not result.content[0].get("text")
            ):
                messages = state["messages"] + [("user", "Respond with a real output.")]
                state = {**state, "messages": messages}
            else:
                break
        return {"messages": result}
```

Step 5: Set Up the LLM with AWS Bedrock

We configure AWS Bedrock to enable advanced LLM capabilities.

```python
def get_bedrock_client(region):
    return boto3.client("bedrock-runtime", region_name=region)

def create_bedrock_llm(client):
    return ChatBedrock(
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        client=client,
        model_kwargs={"temperature": 0},
        region_name="us-east-1",
    )

llm = create_bedrock_llm(get_bedrock_client(region="us-east-1"))
```

Step 6: Define the Assistant's Workflow

We create a prompt template and bind the tools to the assistant's workflow.

```python
primary_assistant_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """You are a helpful customer support assistant for Solar Panels Belgium.
            Get the following information from the user:
            - monthly electricity cost
            Ask for clarification if necessary.
            """,
        ),
        ("placeholder", "{messages}"),
    ]
)

part_1_tools = [compute_savings]
part_1_assistant_runnable = primary_assistant_prompt | llm.bind_tools(part_1_tools)
```

Step 7: Build the Graph Structure

We define nodes and edges for managing the AI assistant's conversation flow.

```python
# These imports are required here but were omitted from the original listing
from langgraph.graph import StateGraph, START
from langgraph.prebuilt import tools_condition
from langgraph.checkpoint.memory import MemorySaver

builder = StateGraph(State)
builder.add_node("assistant", Assistant(part_1_assistant_runnable))
builder.add_node("tools", create_tool_node_with_fallback(part_1_tools))

# Route to the tool node whenever the assistant emits a tool call,
# then loop back to the assistant with the tool's result
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "assistant")

# The checkpointer persists conversation state across turns
memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
```

Step 8: Running the Assistant

The assistant can now be run through its graph structure to interact with users.

```python
import uuid

tutorial_questions = [
    "hey",
    "can you calculate my energy saving",
    "my monthly cost is $100, what will I save",
]

thread_id = str(uuid.uuid4())
config = {"configurable": {"thread_id": thread_id}}

_printed = set()
for question in tutorial_questions:
    events = graph.stream(
        {"messages": ("user", question)}, config, stream_mode="values"
    )
    for event in events:
        _print_event(event, _printed)
```

Conclusion

By following these steps, you can create AI assistants using LangGraph that calculate solar panel savings based on user input. This tutorial demonstrates how LangGraph empowers developers to build intelligent, adaptable systems capable of handling complex tasks efficiently. Whether your application is in customer support, energy management, or another domain, LangGraph provides the flexibility, state management, and control flow needed to build agents that deliver real value.
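The loop in Step 8 calls a `_print_event` helper that the article never defines; it comes from LangChain's customer-support tutorial code. A minimal stand-in (our assumption, not the article's version) could look like this:

```python
def _print_event(event: dict, printed: set, max_length: int = 1500):
    # Print the newest message in each streamed state exactly once,
    # truncating very long messages for readability.
    message = event.get("messages")
    if message:
        if isinstance(message, list):
            message = message[-1]
        if message.id not in printed:
            text = message.pretty_repr(html=False)
            if len(text) > max_length:
                text = text[:max_length] + " ... (truncated)"
            print(text)
            printed.add(message.id)
```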


AI Agent Workflows

AI Agent Workflows: The Ultimate Guide to Choosing Between LangChain and LangGraph

Explore two transformative libraries, LangChain and LangGraph, both created by the same developer and designed for building Agentic AI applications. This guide dives into their foundational components, their differences in handling functionality, and how to choose the right tool for your use case.

Language Models as the Bridge

Modern language models have unlocked revolutionary ways to connect users with AI systems and to enable AI-to-AI communication via natural language. Enterprises aiming to harness Agentic AI capabilities often face a pivotal question: "Which tools should we use?" For those eager to begin, this question can become a roadblock.

Why LangChain and LangGraph?

LangChain and LangGraph are among the leading frameworks for crafting Agentic AI applications. By understanding their core building blocks and approaches to functionality, you'll gain clarity on how each aligns with your needs. Keep in mind that the rapid evolution of generative AI tools means today's truths might shift tomorrow.

Note: This guide was initially intended to compare AutoGen, LangChain, and LangGraph. However, AutoGen's upcoming 0.4 release introduces a foundational redesign, so stay tuned for insights post-launch.

Understanding the Basics

LangChain

LangChain offers two primary methods for building applications, each drawing on a shared set of key components.

LangGraph

LangGraph is tailored for graph-based workflows, enabling flexibility in non-linear, conditional, or feedback-loop processes. It's ideal for cases where LangChain's predefined structure might not suffice, and its key components support this graph-based approach.

Comparing Functionality

The guide compares the two libraries across four dimensions: tool calling, conversation history and memory, retrieval-augmented generation (RAG), and parallelism and error handling.

When to Choose LangChain, LangGraph, or Both

Three patterns emerge: LangChain only, LangGraph only, and using LangChain and LangGraph together; see the sketch after this section for a feel of LangChain's linear style.

Final Thoughts

Whether you choose LangChain, LangGraph, or a combination, the decision depends on your project's complexity and specific needs. By understanding their unique capabilities, you can confidently design robust Agentic AI workflows.
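To ground the contrast, here is a minimal illustration (ours, not from the original post) of LangChain's linear pipeline style; compare it with the cyclic LangGraph example in the first article. The model name is an assumption and requires the langchain-openai package and an API key:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI  # assumes langchain-openai and an OpenAI key

# A linear LangChain (LCEL) pipeline: prompt -> model -> parser.
# Each step runs exactly once, in a fixed order, with no loops.
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"text": "LangGraph adds cycles and state to LLM workflows."}))
```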


Rise of Agentforce

The Rise of Agentforce: How AI Agents Are Shaping the Future of Work

Salesforce wrapped up its annual Dreamforce conference this September, leaving attendees with more than just memories of John Mulaney's quips. As swarms of Waymos ferried participants across a cleaner-than-usual San Francisco, it became clear that AI-powered agents, dubbed Agentforce, are poised to transform the workplace. These agents, controlled within Salesforce's ecosystem, could significantly change how work is done and how customer experiences are delivered.

Dreamforce has always been known for bold predictions about the future, but this year's vision of AI-based agents felt particularly compelling. These agents represent the next frontier in workplace automation; as exciting as that future is, some important questions remain.

Reality Check on the Agentforce Vision

During his keynote, Salesforce CEO Marc Benioff raised an interesting point: "Why would our agents be so low-hallucinogenic?" While the agents have access to vast amounts of data, workflows, and services, they currently function best within Salesforce's own environment. Benioff even claimed that Salesforce pioneered prompt engineering, a statement that, for some, might have evoked a scene from Austin Powers, with Dr. Evil humorously taking credit for inventing the question mark.

But can Salesforce fully realize its vision for Agentforce? If it succeeds, it could transform how work gets done. However, as with many AI-driven innovations, the real question lies in interoperability.

The Open vs. Closed Debate

As powerful as Salesforce's ecosystem is, not all business data and workflows live within it. If the future of work involves a network of AI agents working together, how far can a closed ecosystem like Salesforce's really go? Apple, Microsoft, Amazon, and other tech giants also have their sights set on AI-driven agents, and the race is on to own this massive opportunity.

As we've seen in previous waves of technology, this raises familiar debates about open versus closed systems. Without a standard for agents to work together across platforms, businesses could find themselves limited. Closed ecosystems may solve some problems, but to unlock the full potential of AI agents, they must be able to operate seamlessly across different platforms and boundaries.

Looking to the Open Web for Inspiration

The solution may lie in the same principles that guide the open web. Just as mobile apps often require a web view to enable a broad array of outcomes, the same might be necessary in the multi-agent landscape. Tools like Slack's Block Kit framework allow for simple agent interactions, but they aren't enough for more complex use cases.

Take Clockwise Prism, for example: a sophisticated scheduling agent designed to find meeting times when there is no obvious availability. When it is integrated with other agents to secure a critical meeting, businesses will need a flexible interface for exploring multiple scheduling options. A web view for agents could be the key.

The Need for an Open Multi-Agent Standard

Benioff repeatedly stressed that businesses don't want "DIY agents." Enterprises seek controlled, repeatable workflows that deliver consistent value, but they also don't want to be siloed. This is why the future requires an open standard for agents to collaborate across ecosystems and platforms. Imagine initiating a set of work agents from within an Atlassian Jira ticket that's connected to a Salesforce customer case, or vice versa.
For agents to interact seamlessly regardless of the system they originate from, a standard is needed. It would allow businesses to deploy agents in a way that is consistent, integrated, and scalable.

User Experience and Human-in-the-Loop: Crucial Elements for AI Agents

A significant insight from the integration of LangChain with Assistant-UI highlighted a crucial factor: user experience (UX). Whether it involves streaming, generative interfaces, or human-in-the-loop functionality, the UX of AI agents is critical. While agents need to respond quickly and efficiently, businesses must be able to involve humans in decision-making when necessary.

This principle of human-in-the-loop is key to the agent's scheduling process. While automation is the goal, involving the user at crucial points, such as confirming scheduling options, ensures that the agent remains reliable and adaptable. Any future standard must prioritize this capability, allowing for user involvement where necessary while enabling full automation when confidence is high.

Generative or Native UI?

The discussion about user interfaces for agents often leads to a debate between generative UI and native UI. The latter may be the better approach. A native UI, controlled by the responding service or agent, ensures the interface is tailored to the context and specifics of the agent's task. Whether that UI is rendered using AI is an implementation detail that can vary by service. What matters is that the UI feels native to the agent's task, making the user experience seamless and intuitive.

What's Next? The Push for an Open Multi-Agent Future

As we look ahead to the multi-agent future, the need for an open standard is more pressing than ever. At Clockwise, we've drafted something we're calling the Open Multi-Agent Protocol (OMAP), which we hope will foster collaboration and innovation in this space. The future of work is rapidly approaching, one in which new roles, like Agent Orchestrators, will emerge, enabling people to leverage AI agents in unprecedented ways.

While Salesforce's vision for Agentforce is ambitious, the key to unlocking its full potential lies in creating a standard that allows agents to work together, across platforms and beyond the boundaries of closed ecosystems. With the right approach, we can create a future where AI agents transform work in ways we're only beginning to imagine.
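As an illustration of the human-in-the-loop pattern described above (our sketch, built with LangGraph rather than any Clockwise or Salesforce API, with hypothetical node names), a graph can be compiled to pause before a sensitive action and resume once a person approves:

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    proposal: str
    approved: bool

def draft_schedule(state: State) -> dict:
    # Hypothetical agent step: propose a meeting slot
    return {"proposal": "Tuesday 10:00-10:30"}

def book_meeting(state: State) -> dict:
    # The sensitive action a human should approve first
    print(f"Booking: {state['proposal']}")
    return {"approved": True}

builder = StateGraph(State)
builder.add_node("draft_schedule", draft_schedule)
builder.add_node("book_meeting", book_meeting)
builder.add_edge(START, "draft_schedule")
builder.add_edge("draft_schedule", "book_meeting")
builder.add_edge("book_meeting", END)

# interrupt_before pauses execution before the booking node runs
graph = builder.compile(checkpointer=MemorySaver(), interrupt_before=["book_meeting"])

config = {"configurable": {"thread_id": "demo"}}
graph.invoke({"proposal": "", "approved": False}, config)  # pauses at the interrupt
print(graph.get_state(config).values["proposal"])          # human reviews the proposal
graph.invoke(None, config)                                 # resume: runs book_meeting
```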


Tectonic Guide to RAG

Guide to RAG (Retrieval-Augmented Generation)

Retrieval-Augmented Generation (RAG) has become increasingly popular, and while it's not yet so ubiquitous that you'd find it in a toaster oven manual, its use is expected to grow. Despite its rising popularity, comprehensive guides that address all its nuances, such as relevance assessment and hallucination prevention, are still scarce. Drawing from practical experience, this insight offers an in-depth overview of RAG.

Why is RAG Important?

Large Language Models (LLMs) like ChatGPT can be employed for a wide range of tasks, from crafting horoscopes to more business-centric applications. However, there is a notable challenge: most LLMs, including ChatGPT, do not inherently understand the specific rules, documents, or processes that companies rely on. There are two ways to address this gap: retrain or fine-tune the model on that proprietary material, or supply the relevant documents to the model at query time. RAG takes the second approach.

How RAG Works

RAG consists of two primary components: a Retriever, which finds the documents most relevant to a query, and a generator (the LLM), which composes an answer from them. While the system is straightforward, the effectiveness of the output heavily depends on the quality of the documents retrieved and on how well the Retriever performs. Corporate documents are often unstructured, conflicting, or context-dependent, which makes the process challenging.

Search Optimization in RAG

To enhance RAG's performance, optimization techniques are applied across the various stages of information retrieval and processing.

Python and LangChain Implementation Example

Below is a simple implementation of RAG using Python and LangChain:

```python
import os
import wget
from langchain.vectorstores import Qdrant
from langchain.embeddings import OpenAIEmbeddings
from langchain import OpenAI
from langchain_community.document_loaders import BSHTMLLoader
from langchain.chains import RetrievalQA

# Download 'War and Peace' by Tolstoy
wget.download("http://az.lib.ru/t/tolstoj_lew_nikolaewich/text_0073.shtml")

# Load text from HTML
loader = BSHTMLLoader("text_0073.shtml", open_encoding="ISO-8859-1")
war_and_peace = loader.load()

# Initialize the vector database
embeddings = OpenAIEmbeddings()
doc_store = Qdrant.from_documents(
    war_and_peace,
    embeddings,
    location=":memory:",
    collection_name="docs",
)

llm = OpenAI()

# Build the QA chain once, then answer questions in a loop
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=doc_store.as_retriever(),
    return_source_documents=False,
)

while True:
    question = input("Your question: ")
    result = qa(question)
    print(f"Answer: {result}")
```

Considerations for Effective RAG

Ranking Techniques in RAG

Dynamic Learning with RELP

An advanced technique within RAG is Retrieval-Augmented Language Model-based Prediction (RELP). In this method, information retrieved from vector storage is used to generate example answers, which the LLM can then use to learn and respond dynamically. This allows for adaptive learning without the need for expensive retraining.

Guide to RAG

RAG offers a powerful alternative to retraining large language models, allowing businesses to leverage their proprietary knowledge for practical applications. While setting up and optimizing RAG systems involves navigating various complexities, including document structure, query processing, and ranking, the results are highly effective for most business use cases.
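Circling back to the RELP idea above: the article doesn't include code for it, but a hedged sketch of the pattern, reusing the doc_store, llm, and question variables from the example, might look like this, with retrieved passages serving as in-context examples:

```python
# Hedged RELP-style sketch: retrieved passages become few-shot context
# that the LLM uses to shape its answer, with no retraining involved.
retrieved = doc_store.as_retriever(search_kwargs={"k": 3}).invoke(question)

examples = "\n\n".join(doc.page_content for doc in retrieved)
prompt = (
    "Use the following retrieved examples as guidance:\n\n"
    f"{examples}\n\n"
    f"Question: {question}\nAnswer:"
)
print(llm.invoke(prompt))
```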
