Large and Small Language Models

Architecture for Enterprise-Grade Agentic AI Systems

LangGraph: The Architecture for Enterprise-Grade Agentic AI Systems

Modern enterprises need AI that doesn’t just answer questions—but thinks, plans, and acts autonomously. LangGraph provides the framework to build these next-generation agentic systems, capable of:

✅ Multi-step reasoning across complex workflows
✅ Dynamic decision-making with real-time tool selection
✅ Stateful execution that maintains context across operations
✅ Seamless integration with enterprise knowledge bases and APIs

1. LangGraph’s Graph-Based Architecture

At its core, LangGraph models AI workflows as directed graphs of nodes and edges. This structure enables:

✔ Conditional branching (different paths based on data)
✔ Parallel processing where possible
✔ Guaranteed completion (no infinite loops)

Example use case: a customer service agent whose workflow branches based on the type of request it receives.

2. Multi-Hop Knowledge Retrieval

Enterprise queries often require connecting information across multiple sources. LangGraph treats this as a graph traversal problem:

```python
# Neo4j integration for structured knowledge
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
)

# Find documents that reference the GDPR policy
query = """
MATCH (doc:Document)-[:REFERENCES]->(policy:Policy)
WHERE policy.name = 'GDPR'
RETURN doc.title, doc.url
"""
results = graph.query(query)  # results feed into LangGraph nodes
```

This structured lookup can be combined with vector retrieval in a hybrid approach.

3. Building Autonomous Agents

LangGraph + LangChain agents create systems that decide which tool to use and act on the result:

```python
from langchain.agents import AgentType, initialize_agent, Tool
from langchain.chat_models import ChatOpenAI

# Define tools
search_tool = Tool(
    name="ProductSearch",
    func=search_product_db,  # your internal catalog-search function
    description="Searches internal product catalog",
)

# Initialize agent
agent = initialize_agent(
    tools=[search_tool],
    llm=ChatOpenAI(model="gpt-4"),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

# Execute
response = agent.run("Find compatible accessories for Model X-42")
```

4. Full Implementation Example

Enterprise document processing system:

```python
from pydantic import BaseModel
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langgraph.graph import StateGraph, END

llm = ChatOpenAI(model="gpt-4")

# 1. Define shared state
class DocProcessingState(BaseModel):
    query: str
    retrieved_docs: list = []
    analysis: str = ""
    actions: list = []

# 2. Create nodes
def retrieve(state):
    vectorstore = Pinecone.from_existing_index("docs", OpenAIEmbeddings())
    state.retrieved_docs = vectorstore.similarity_search(state.query)
    return state

def analyze(state):
    # LLM analysis of the retrieved documents
    state.analysis = llm.predict(f"Summarize key points from: {state.retrieved_docs}")
    return state

# 3. Build workflow
workflow = StateGraph(DocProcessingState)
workflow.add_node("retrieve", retrieve)
workflow.add_node("analyze", analyze)
workflow.set_entry_point("retrieve")
workflow.add_edge("retrieve", "analyze")
workflow.add_edge("analyze", END)

# 4. Execute
agent = workflow.compile()
result = agent.invoke({"query": "2025 compliance changes"})
```

Why This Matters for Enterprises

LangGraph enables AI systems that don’t just assist workers—but autonomously execute complete business processes while adhering to organizational rules and structures. “This isn’t chatbot AI—it’s digital workforce AI.”
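The example above wires a fixed retrieve → analyze path. To make the conditional-branching capability concrete, here is a minimal sketch that would replace the fixed analyze → END edge before compiling; the needs_escalation flag and the escalate node are hypothetical additions for illustration, not part of the original example:

```python
# Hypothetical extension: route based on data in the shared state.
def route_after_analysis(state) -> str:
    # needs_escalation is an assumed extra field on DocProcessingState
    return "escalate" if getattr(state, "needs_escalation", False) else END

workflow.add_node("escalate", lambda state: state)  # placeholder escalation node
workflow.add_conditional_edges("analyze", route_after_analysis)
```

The router function inspects the state produced by the previous node and returns the name of the next node to run, which is how LangGraph expresses "different paths based on data."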

Salesforce Nonprofit Cloud

Salesforce Connects Donors and Nonprofits

Your foundation is connecting donors, nonprofits, and local leaders to create meaningful change. But keeping track of those relationships, managing funds, and ensuring every dollar is accounted for can be overwhelming without the right tools. That’s where Salesforce comes in. With Salesforce, your foundation can bring everything together in one place, giving you a clear view of your donors, grants, and community impact—all while making daily operations easier for your team.

Get the Full Picture with a 360° View

Every interaction with a donor, nonprofit, grant applicant, board member, or volunteer is part of your foundation’s story. Salesforce acts as a central hub, giving you a complete picture of the people and organizations you work with.

Imagine this: You’re preparing for a meeting with a longtime donor. Instead of scrambling through spreadsheets or multiple systems, you pull up Salesforce and see everything in one place: their total giving history, past conversations, and even which nonprofits they’ve supported the most. You also notice they served on a committee and attended an event a few years ago, which gives you a natural way to reconnect. No more hunting for details. Now everything you need is at your fingertips, making every interaction more meaningful.

Keep Fundraising and Grant Tracking on the Same Page

Fundraising fuels your mission, and keeping up with donors and grant funding requires a system that keeps everyone on the same page.

Imagine this: A foundation’s fundraising team is working on a major gift proposal. In Salesforce, they track every interaction, from the first conversation to the moment the gift agreement is signed. Meanwhile, across the office, another team is preparing a grant application. Since Salesforce also keeps track of the foundation’s outgoing grants, they can easily pull reports, track deadlines, and ensure every requirement is met before submission. No loose files. No forgotten follow-ups. Just one system that keeps everything moving forward.

Awarding Grants and Supporting Your Community

Whether funded by donor-advised contributions or your foundation’s own initiatives, grants make a lasting difference in the communities you serve. Managing these funds should be simple, not stressful.

Imagine this: A small nonprofit is looking for funding to expand its after-school program. On the foundation’s website, they find an open grant opportunity and apply directly through the portal. They can see exactly where their application stands—submitted, under review, or approved—without needing to follow up with foundation staff. Once awarded, Salesforce reminds them when reports are due, ensuring compliance is easy and stress-free for both the nonprofit and the foundation.

Draft and Share Fund Agreements Without the Hassle

Manually digging through old emails, updating Word docs, and waiting on signatures can slow down the handling of fund agreements, donor pledges, and grant documents.

Imagine this: A donor is excited to establish a new scholarship fund at your foundation. In the past, your team would draft the agreement in a Word document, email it back and forth for revisions, print it for signatures, and then scan it back into the system—hoping nothing got lost along the way. With Salesforce, that entire process is now streamlined. The agreement is generated directly from the donor’s record, reviewed within the system, and sent electronically via a third-party app for signature. The signed document is automatically saved, ready to access whenever needed.
This same process applies to grant agreements. Instead of juggling multiple versions and manually tracking who has signed what, foundation staff can send, e-sign, and store documents without extra steps. No more delays. No more misplaced paperwork. Just a faster, easier way to keep things moving. (Note: eSignature services are available through a third-party app, like DocuSign.)

Let Salesforce Handle the Follow-Ups

Instead of manually tracking deadlines and reminders, let Salesforce do the work for you.

Imagine this: Before Salesforce, foundation staff spent hours tracking reporting deadlines, manually sending reminders, and drafting thank-you emails. With automation, those tasks happen behind the scenes. Now, grant recipients receive timely reminders before their reports are due. Small donations automatically trigger thank-you emails, making sure every donor feels appreciated. And when staff enter new information, custom-built screens make it quick and intuitive. What used to take hours now happens in minutes—allowing staff to focus on bigger priorities.

Give Donors and Nonprofits Easy Access to Their Information

Donors and grantees shouldn’t have to call your team for every update. With Experience Cloud, they can log in and find the information they need on their own. Fund holders can check their giving history and see how much they have available to grant. Grant applicants can apply for funding, track their application status, and submit reports—all in one place. This saves time for both your staff and the people who depend on your foundation.

Connect Salesforce with the Tools You Already Use

Salesforce doesn’t replace your existing systems—it works with them. By integrating Salesforce with tools your foundation already relies on, you can reduce duplicate work and keep your data connected.

Email (Outlook & Gmail): Save important conversations directly to donor and grant records.
Marketing (Marketing Cloud or other platforms): Track who subscribes to your newsletters and see which emails get the most engagement.
Accounting software: Sync financial data so staff can see fund balances, pledges, and spending updates without switching systems.
Wealth screening tools: Give gift officers a better understanding of donor capacity before making an ask.
Electronic signatures: Integrate Salesforce and DocuSign for automatic routing of signatures and uploading of signed documents.
Online giving apps: Donations made on your website can be recorded in Salesforce instantly—no manual entry needed!

With everything connected, your team can work more efficiently and spend less time on data entry.

Salesforce Grows with Your Foundation

No two foundations are the same, and that’s the best part—Salesforce can be adapted to fit the way your team works. Whether you need to track event attendees, manage volunteers, or run custom reports, Salesforce can be configured to support your unique needs. We’d love to learn more about how your foundation operates and explore ways to make Salesforce work for you.

Neuro-symbolic AI


Neuro-Symbolic AI: Bridging Neural Networks and Symbolic Processing for Smarter AI Systems

Neuro-symbolic AI integrates neural networks with rules-based symbolic processing to enhance artificial intelligence systems’ accuracy, explainability, and precision. Neural networks leverage statistical deep learning to identify patterns in large datasets, while symbolic AI applies logic and rules-based reasoning common in mathematics, programming languages, and expert systems.

The Balance Between Neural and Symbolic AI

The fusion of neural and symbolic methods has revived debates in the AI community regarding their relative strengths. Neural AI excels in deep learning, including generative AI, by distilling patterns from data through distributed statistical processing across interconnected neurons. However, this approach often requires significant computational resources and may struggle with explainability. Conversely, symbolic AI, which relies on predefined rules and logic, has historically powered applications like fraud detection, expert systems, and argument mining. While symbolic systems are faster and more interpretable, their reliance on manual rule creation has been a limitation. Innovations in training generative AI models now allow more efficient automation of these processes, though challenges like hallucinations and poor mathematical reasoning persist.

Complementary Thinking Models

Psychologist Daniel Kahneman’s analogy of System 1 and System 2 thinking aptly describes the interplay between neural and symbolic AI. Neural AI, akin to System 1, is intuitive and fast—ideal for tasks like image recognition. Symbolic AI mirrors System 2, engaging in slower, deliberate reasoning, such as understanding the context and relationships in a scene.

Core Concepts of Neural Networks

Artificial neural networks (ANNs) mimic the statistical connections between biological neurons. By modeling patterns in data, ANNs enable learning and feature extraction at different abstraction levels, such as edges, shapes, and objects in images. Key ANN architectures include convolutional networks for vision, recurrent networks for sequences, and transformers for language. Despite their strengths, neural networks are prone to hallucinations, particularly when overconfident in their predictions, making human oversight crucial.

The Role of Symbolic Reasoning

Symbolic reasoning underpins modern programming languages, where logical constructs (e.g., “if-then” statements) drive decision-making. Symbolic AI excels in structured applications like solving math problems, representing knowledge, and decision-making. Algorithms like expert systems, Bayesian networks, and fuzzy logic offer precision and efficiency in well-defined workflows but struggle with ambiguity and edge cases. Although symbolic systems like IBM Watson demonstrated success in trivia and reasoning, scaling them to broader, dynamic applications has proven challenging due to their dependency on manual configuration.

Neuro-Symbolic Integration

The integration of neural and symbolic AI spans a spectrum of techniques, from loosely coupled processes, where neural and symbolic components run separately and exchange results, to tightly integrated systems, where symbolic constraints shape how the network itself learns and reasons. A minimal sketch of the loosely coupled end of this spectrum appears after the history section below.

History of Neuro-Symbolic AI

Both neural and symbolic AI trace their roots to the 1950s, with symbolic methods dominating early AI due to their logical approach. Neural networks fell out of favor until the 1980s when innovations like backpropagation revived interest. The 2010s saw a breakthrough with GPUs enabling scalable neural network training, ushering in today’s deep learning era.
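To make the loosely coupled pattern concrete, here is a minimal, illustrative sketch in Python. The perception function is mocked with fixed confidence scores standing in for a trained neural classifier, and the rules and thresholds are assumptions for the example, not from the original article:

```python
# Loosely coupled neuro-symbolic pipeline (illustrative).
# Stage 1 ("System 1"): a neural model maps raw input to label confidences.
# Stage 2 ("System 2"): explicit symbolic rules reason over those confidences.

def neural_perception(image_path: str) -> dict[str, float]:
    """Stand-in for a trained image classifier returning label confidences."""
    return {"stop_sign": 0.92, "pedestrian": 0.11}  # mocked output

def symbolic_decision(confidences: dict[str, float]) -> str:
    """Rule-based reasoning over the neural output."""
    if confidences.get("pedestrian", 0.0) > 0.5:
        return "brake"  # safety rule takes priority
    if confidences.get("stop_sign", 0.0) > 0.8:
        return "stop"
    return "proceed"

action = symbolic_decision(neural_perception("frame_001.png"))
print(action)  # -> "stop"
```

The neural stage handles fuzzy perception; the symbolic stage stays auditable, since every decision can be traced back to an explicit rule.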
Applications and Future Directions

Applications of neuro-symbolic AI range from intelligent document processing to IoT data interpretation. The next wave of innovation aims to merge these approaches more deeply. For instance, combining granular structural information from neural networks with symbolic abstraction can improve explainability and efficiency in systems like these.

Neuro-symbolic AI offers the potential to create smarter, more explainable systems by blending the pattern-recognition capabilities of neural networks with the precision of symbolic reasoning. As research advances, this synergy may unlock new horizons in AI capabilities.

Gen AI Unleashed With Vector Database

Knowledge Graphs and Vector Databases

The Role of Knowledge Graphs and Vector Databases in Retrieval-Augmented Generation (RAG)

In the dynamic AI landscape, Retrieval-Augmented Generation (RAG) systems are revolutionizing data retrieval by combining artificial intelligence with external data sources to deliver contextual, relevant outputs. Two core technologies driving this innovation are Knowledge Graphs and Vector Databases. While fundamentally different in their design and functionality, these tools complement one another, unlocking new potential for solving complex data problems across industries.

Understanding Knowledge Graphs: Connecting the Dots

Knowledge Graphs organize data into a network of relationships, creating a structured representation of entities and how they interact. These graphs emphasize understanding and reasoning through data, offering explainable and highly contextual results.

Vector Databases: The Power of Similarity

In contrast, Vector Databases thrive in handling unstructured data such as text, images, and audio. By representing data as high-dimensional vectors, they excel at identifying similarities, enabling semantic understanding.

Combining Knowledge Graphs and Vector Databases: A Hybrid Approach

While both technologies excel independently, their combination can amplify RAG systems. Knowledge Graphs bring reasoning and structure, while Vector Databases offer rapid, similarity-based retrieval, creating hybrid systems that are more intelligent and versatile (see the sketch at the end of this post).

Knowledge Graphs vs. Vector Databases: Key Differences

| Feature        | Knowledge Graphs           | Vector Databases               |
|----------------|----------------------------|--------------------------------|
| Data type      | Structured                 | Unstructured                   |
| Core strength  | Relational reasoning       | Similarity-based retrieval     |
| Explainability | High                       | Low                            |
| Scalability    | Limited for large datasets | Efficient for massive datasets |
| Flexibility    | Schema-dependent           | Schema-free                    |

Future Trends: The Path to Convergence

As AI evolves, the distinction between Knowledge Graphs and Vector Databases is beginning to blur, with emerging systems indexing the same data both as graph structure and as embeddings. This convergence is paving the way for smarter, more adaptive systems that can handle both structured and unstructured data seamlessly.

Conclusion

Knowledge Graphs and Vector Databases represent two foundational technologies in the realm of Retrieval-Augmented Generation. Knowledge Graphs excel at reasoning through structured relationships, while Vector Databases shine in unstructured data retrieval. By combining their strengths, organizations can create hybrid systems that offer unparalleled insights, efficiency, and scalability. In a world where data continues to grow in complexity, leveraging these complementary tools is essential. Whether building intelligent healthcare systems, enhancing recommendation engines, or powering semantic search, the synergy between Knowledge Graphs and Vector Databases is unlocking the next frontier of AI innovation, transforming how industries harness the power of their data.
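Here is the hybrid-retrieval sketch referenced above. It is illustrative only: the vector_store.similarity_search and kg_client.query interfaces, the Cypher snippet, and the entity_id metadata field are assumed stand-ins, not a specific product’s API:

```python
# A minimal sketch of hybrid retrieval for RAG: semantic recall from a vector
# store, then structured expansion from a knowledge graph.

def hybrid_retrieve(question: str, vector_store, kg_client, k: int = 5) -> list[str]:
    # 1. Semantic recall: fetch the k most similar passages.
    passages = vector_store.similarity_search(question, k=k)

    # 2. Structured expansion: pull facts related to each hit from the graph.
    context = []
    for p in passages:
        related = kg_client.query(
            "MATCH (e:Entity {id: $id})-[r]->(n) RETURN n.name, type(r)",
            params={"id": p.metadata["entity_id"]},
        )
        context.append(f"{p.page_content}\nRelated facts: {related}")
    return context
```

The vector step supplies recall over unstructured text, while the graph step adds the explainable, relational context that embeddings alone cannot provide.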

AI and Disability


Dr. Johnathan Flowers of American University recently sparked a conversation on Bluesky regarding a statement from the organizers of NaNoWriMo, which endorsed the use of generative AI technologies, such as LLM chatbots, in this year’s event. Dr. Flowers expressed concern about the implication that AI assistance was necessary for accessibility, arguing that it could undermine the creativity and agency of individuals with disabilities. He believes that art often serves as a unique space where barriers imposed by disability can be transcended without relying on external help or engaging in forced intimacy. For Dr. Flowers, suggesting the need for AI support may inadvertently diminish the perceived capabilities of disabled and marginalized artists.

Since the announcement, NaNoWriMo organizers have revised their stance in response to criticism, though much of the social media discussion has become unproductive. In earlier discussions, the author has explored the implications of generative AI in art, focusing on the human connection that art typically fosters, which AI-generated content may not fully replicate. However, they now wish to address the role of AI as a tool for accessibility. Not being personally affected by physical disability, the author approaches this topic from a social scientific perspective. They acknowledge that the views expressed are personal and not representative of any particular community or organization.

Defining AI

In a recent presentation, the author offered a new definition of AI, drawing from contemporary regulatory and policy discussions: AI is the application of specific forms of machine learning to perform tasks that would otherwise require human labor. This definition is intentionally broad, encompassing not just generative AI but also other machine learning applications aimed at automating tasks.

AI as an Accessibility Tool

AI has the potential to enhance autonomy and independence for individuals with disabilities, paralleling technological advancements seen in fields like the Paris Paralympics. However, the author is keen to explore what unique benefits AI offers and what risks might arise. This overview touches only on some key issues related to AI and disability. It is crucial for those working in machine learning to be aware of these dynamics, striving to balance benefits with potential risks and ensuring equitable access to technological advancements.

AI Assistants Using LangGraph


In the evolving world of AI, retrieval-augmented generation (RAG) systems have become standard for handling straightforward queries and generating contextually relevant responses. However, as demand grows for more sophisticated AI applications, there is a need for systems that move beyond simple retrieval tasks. Enter AI agents—autonomous entities capable of executing complex, multi-step processes, maintaining state across interactions, and dynamically adapting to new information. LangGraph, a powerful extension of the LangChain library, is designed to help developers build these advanced AI agents, enabling stateful, multi-actor applications with cyclic computation capabilities.

In this insight, we’ll explore how LangGraph revolutionizes AI development and provide a step-by-step guide to building your own AI agent using an example that computes energy savings for solar panels. This example will demonstrate how LangGraph’s unique features enable the creation of intelligent, adaptable, and practical AI systems.

What is LangGraph?

LangGraph is an advanced library built on top of LangChain, designed to extend Large Language Model (LLM) applications by introducing cyclic computational capabilities. While LangChain allows for the creation of Directed Acyclic Graphs (DAGs) for linear workflows, LangGraph enhances this by enabling the addition of cycles—essential for developing agent-like behaviors. These cycles allow LLMs to continuously loop through processes, making decisions dynamically based on evolving inputs.

LangGraph: Nodes, States, and Edges

The core of LangGraph lies in its stateful graph structure: nodes represent units of work (an LLM call, a tool invocation, or custom logic), edges define how control flows between nodes (including conditional routing), and a shared state object carries data across the entire run.

LangGraph redefines AI development by managing the graph structure, state, and coordination, allowing for the creation of sophisticated, multi-actor applications. With automatic state management and precise agent coordination, LangGraph facilitates innovative workflows while minimizing technical complexity. Its flexibility enables the development of high-performance applications, and its scalability ensures robust and reliable systems, even at the enterprise level.

Step-by-step Guide

Now that we understand LangGraph’s capabilities, let’s dive into a practical example. We’ll build an AI agent that calculates potential energy savings for solar panels based on user input. This agent can function as a lead generation tool on a solar panel seller’s website, providing personalized savings estimates based on key data like monthly electricity costs. This example highlights how LangGraph can automate complex tasks and deliver business value.

Step 1: Import Necessary Libraries

We start by importing the essential Python libraries and modules for the project.

```python
import boto3
from typing import Annotated
from typing_extensions import TypedDict

from langchain_aws import ChatBedrock
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import ToolMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnableLambda
from langchain_core.tools import tool
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.prebuilt import ToolNode
```

Step 2: Define the Tool for Calculating Solar Savings

Next, we define a tool to calculate potential energy savings based on the user’s monthly electricity cost.
```python
@tool
def compute_savings(monthly_cost: float) -> dict:
    """
    Tool to compute the potential savings when switching to solar energy
    based on the user's monthly electricity cost.

    Args:
        monthly_cost (float): The user's current monthly electricity cost.

    Returns:
        dict: A dictionary containing:
            - 'number_of_panels': The estimated number of solar panels required.
            - 'installation_cost': The estimated installation cost.
            - 'net_savings_10_years': The net savings over 10 years after installation costs.
    """
    def calculate_solar_savings(monthly_cost):
        # Assumptions for the estimate
        cost_per_kWh = 0.28
        cost_per_watt = 1.50
        sunlight_hours_per_day = 3.5
        panel_wattage = 350
        system_lifetime_years = 10

        # Size the system from current consumption
        monthly_consumption_kWh = monthly_cost / cost_per_kWh
        daily_energy_production = monthly_consumption_kWh / 30
        system_size_kW = daily_energy_production / sunlight_hours_per_day

        # Costs and savings
        number_of_panels = system_size_kW * 1000 / panel_wattage
        installation_cost = system_size_kW * 1000 * cost_per_watt
        annual_savings = monthly_cost * 12
        total_savings_10_years = annual_savings * system_lifetime_years
        net_savings = total_savings_10_years - installation_cost

        return {
            "number_of_panels": round(number_of_panels),
            "installation_cost": round(installation_cost, 2),
            "net_savings_10_years": round(net_savings, 2),
        }

    return calculate_solar_savings(monthly_cost)
```

Step 3: Set Up State Management and Error Handling

We define utilities to manage state and handle errors during tool execution.

```python
def handle_tool_error(state) -> dict:
    """Surface a tool error back to the model so it can correct itself."""
    error = state.get("error")
    tool_calls = state["messages"][-1].tool_calls
    return {
        "messages": [
            ToolMessage(
                content=f"Error: {repr(error)}\nPlease fix your mistakes.",
                tool_call_id=tc["id"],
            )
            for tc in tool_calls
        ]
    }

def create_tool_node_with_fallback(tools: list):
    return ToolNode(tools).with_fallbacks(
        [RunnableLambda(handle_tool_error)], exception_key="error"
    )
```

Step 4: Define the State and Assistant Class

We create the state management class and the assistant responsible for interacting with users.

```python
class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

class Assistant:
    def __init__(self, runnable: Runnable):
        self.runnable = runnable

    def __call__(self, state: State):
        while True:
            result = self.runnable.invoke(state)
            # Re-prompt if the model returned no tool calls and no usable content
            if not result.tool_calls and (
                not result.content
                or isinstance(result.content, list)
                and not result.content[0].get("text")
            ):
                messages = state["messages"] + [("user", "Respond with a real output.")]
                state = {**state, "messages": messages}
            else:
                break
        return {"messages": result}
```

Step 5: Set Up the LLM with AWS Bedrock

We configure AWS Bedrock to enable advanced LLM capabilities.

```python
def get_bedrock_client(region):
    return boto3.client("bedrock-runtime", region_name=region)

def create_bedrock_llm(client):
    return ChatBedrock(
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        client=client,
        model_kwargs={"temperature": 0},
        region_name="us-east-1",
    )

llm = create_bedrock_llm(get_bedrock_client(region="us-east-1"))
```

Step 6: Define the Assistant’s Workflow

We create a template and bind the tools to the assistant’s workflow.

```python
primary_assistant_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """You are a helpful customer support assistant for Solar Panels Belgium.
            Get the following information from the user:
            - monthly electricity cost
            Ask for clarification if necessary.
            """,
        ),
        ("placeholder", "{messages}"),
    ]
)

part_1_tools = [compute_savings]
part_1_assistant_runnable = primary_assistant_prompt | llm.bind_tools(part_1_tools)
```

Step 7: Build the Graph Structure

We define nodes and edges for managing the AI assistant’s conversation flow, adding the imports this step needs.

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.prebuilt import tools_condition

builder = StateGraph(State)
builder.add_node("assistant", Assistant(part_1_assistant_runnable))
builder.add_node("tools", create_tool_node_with_fallback(part_1_tools))

builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)  # route to tools when the model calls one
builder.add_edge("tools", "assistant")  # loop back after each tool run

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
```

Step 8: Running the Assistant

The assistant can now be run through its graph structure to interact with users. (_print_event is a small pretty-printing helper used in the LangGraph tutorials; define it or substitute your own logging.)

```python
import uuid

tutorial_questions = [
    "hey",
    "can you calculate my energy saving",
    "my monthly cost is $100, what will I save",
]

thread_id = str(uuid.uuid4())
config = {"configurable": {"thread_id": thread_id}}

_printed = set()
for question in tutorial_questions:
    events = graph.stream(
        {"messages": ("user", question)}, config, stream_mode="values"
    )
    for event in events:
        _print_event(event, _printed)
```

Conclusion

By following these steps, you can create an AI assistant with LangGraph that calculates solar panel savings based on user input. This tutorial demonstrates how LangGraph empowers developers to create intelligent, adaptable systems capable of handling complex tasks efficiently. Whether your application is in customer support, energy management, or another domain, LangGraph provides the flexibility and structure to build it.

AI Agent Workflows


AI Agent Workflows: The Ultimate Guide to Choosing Between LangChain and LangGraph

Explore two transformative libraries—LangChain and LangGraph—both created by the same developer, designed to build Agentic AI applications. This guide dives into their foundational components, differences in handling functionality, and how to choose the right tool for your use case.

Language Models as the Bridge

Modern language models have unlocked revolutionary ways to connect users with AI systems and enable AI-to-AI communication via natural language. Enterprises aiming to harness Agentic AI capabilities often face the pivotal question: “Which tools should we use?” For those eager to begin, this question can become a roadblock.

Why LangChain and LangGraph?

LangChain and LangGraph are among the leading frameworks for crafting Agentic AI applications. By understanding their core building blocks and approaches to functionality, you’ll gain clarity on how each aligns with your needs. Keep in mind that the rapid evolution of generative AI tools means today’s truths might shift tomorrow. Note: Initially, this guide intended to compare AutoGen, LangChain, and LangGraph. However, AutoGen’s upcoming 0.4 release introduces a foundational redesign. Stay tuned for insights post-launch!

Understanding the Basics

LangChain offers two primary ways of working: assembling individual components (models, prompts, retrievers, memory) yourself, or using its prebuilt, off-the-shelf chains for common workflows. LangGraph is tailored for graph-based workflows, enabling flexibility in non-linear, conditional, or feedback-loop processes; it’s ideal for cases where LangChain’s predefined structure might not suffice, and its key components are nodes, edges, and a shared state.

Comparing Functionality

The two libraries differ most in how they handle tool calling, conversation history and memory, retrieval-augmented generation (RAG), and parallelism and error handling. A minimal side-by-side sketch of the two styles appears at the end of this post.

When to Choose LangChain, LangGraph, or Both

LangChain alone suits linear, predefined workflows; LangGraph alone suits stateful, non-linear flows; and the two can be combined, with LangGraph orchestrating the overall flow while LangChain components handle individual steps.

Final Thoughts

Whether you choose LangChain, LangGraph, or a combination, the decision depends on your project’s complexity and specific needs. By understanding their unique capabilities, you can confidently design robust Agentic AI workflows.
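Here is the side-by-side sketch referenced above. It is assumed, not from the original post: the same summarize-then-review flow expressed as a linear LangChain (LCEL) chain, and as a LangGraph graph whose explicit state allows a feedback loop; model names and the LGTM convention are invented for the example:

```python
from typing_extensions import TypedDict
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

llm = ChatOpenAI(model="gpt-4o-mini")

# LangChain: a linear pipeline, no cycles
summarize = ChatPromptTemplate.from_template("Summarize: {text}") | llm
review = ChatPromptTemplate.from_template(
    "Reply LGTM if this summary is faithful, else explain: {summary}"
) | llm

# LangGraph: a stateful graph that can loop back on rejection
class State(TypedDict):
    text: str
    summary: str
    approved: bool

def summarize_node(state: State) -> dict:
    return {"summary": summarize.invoke({"text": state["text"]}).content}

def review_node(state: State) -> dict:
    verdict = review.invoke({"summary": state["summary"]}).content
    return {"approved": "LGTM" in verdict}

builder = StateGraph(State)
builder.add_node("summarize", summarize_node)
builder.add_node("review", review_node)
builder.add_edge(START, "summarize")
builder.add_edge("summarize", "review")
# Loop back for another pass when the review rejects the summary
builder.add_conditional_edges(
    "review", lambda s: END if s["approved"] else "summarize"
)
graph = builder.compile()
```

The chain is simpler to read and test; the graph buys you the retry loop and explicit state at the cost of more moving parts.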

Ambient AI Enhances Patient-Provider Relationship


How Ambient AI is Enhancing the Patient-Provider Relationship

Ambient AI is transforming the patient-provider experience at Ochsner Health by enabling clinicians to focus more on their patients and less on their screens. While some view technology as a barrier to human interaction, Ochsner’s innovation officer, Dr. Jason Hill, believes ambient AI is doing the opposite by fostering stronger connections between patients and providers. Researchers estimate that physicians spend over 40% of consultation time focused on electronic health records (EHRs), limiting face-to-face interactions. “We have highly skilled professionals spending time inputting data instead of caring for patients, and as a result, patients feel disconnected due to the screen barrier,” Hill said.

Additionally, increased documentation demands related to quality reporting, patient satisfaction, and reimbursement are straining providers. Ambient AI scribes help relieve this burden by automating clinical documentation, allowing providers to focus on their patients. Using machine learning, these AI tools generate clinical notes in seconds from recorded conversations. Clinicians then review and edit the drafts before finalizing the record. Ochsner began exploring ambient AI several years ago, but only with the advent of advanced language models like OpenAI’s GPT did the technology become scalable and cost-effective for large health systems. “Once the technology became affordable for large-scale deployment, we were immediately interested,” Hill explained.

Selecting the Right Vendor

Ochsner piloted two ambient AI tools before choosing DeepScribe for an enterprise-wide partnership. After the initial rollout to 60 physicians, the tool achieved a 75% adoption rate and improved patient satisfaction scores by 6%. What set DeepScribe apart were its customization features. “We can create templates for different specialties, but individual doctors retain control over their note outputs based on specific clinical encounters,” Hill said. This flexibility was crucial in gaining physician buy-in. Ochsner also valued DeepScribe’s strong vendor support, which included tailored training modules and direct assistance to clinicians. One example of this support was the development of a software module that allowed Ochsner’s providers to see EHR reminders within the ambient AI app. “DeepScribe built a bridge to bring EHR data into the app, so clinicians could access important information right before the visit,” Hill noted.

Ensuring Documentation Quality

Ochsner has implemented several safeguards to maintain the accuracy of AI-generated clinical documentation. Providers undergo training before using the ambient AI system, with a focus on reviewing and finalizing all AI-generated notes. Notes created by the AI remain in a “pended” state until the provider signs off. Ochsner also tracks how much text is generated by the AI versus added by the provider, using this as a marker for the level of editing required. Following the successful pilot, Ochsner plans to expand ambient AI to 600 clinicians by the end of the year, with the eventual goal of providing access to all 4,700 physicians. While Hill anticipates widespread adoption, he acknowledges that the technology may not be suitable for all providers. “Some clinicians have different documentation needs, but for the vast majority, this will likely become the standard way we document at Ochsner within a year,” he said.
Conclusion

By integrating ambient AI, Ochsner Health is not only improving operational efficiency but also strengthening the human connection between patients and providers. As the technology becomes more widespread, it holds the potential to reshape how clinical documentation is handled, freeing up time for more meaningful patient interactions.

Zendesk Launches AI Agent Builder


Zendesk Launches AI Agent Builder and Enhances Agent Copilot

Zendesk has unveiled its AI Agent Builder, a key feature in a series of significant updates across its platform. This new tool enables customer service teams to create bots—now referred to as “AI Agents”—using natural language descriptions. For example, a user might input: “A customer wants to return a product.” The AI Agent Builder will recognize the scenario and automatically create a framework for the AI Agent, which can then be reviewed, tested, and deployed. This framework might include essential steps like checking the order number, verifying the items for return, and cross-referencing the return policy. Matthias Goehler, CTO for EMEA at Zendesk, explains, “You can define any number of workflows in the same straightforward manner. The best part is that business users can do this without needing to design complex flowcharts or decision trees.” However, developers may still need to consult an API when creating AI Agents that interact with multiple third-party applications.

Other Enhancements to Zendesk’s AI Agents

The AI Agent Builder simplifies the automation of customer interactions that involve multiple steps. For more straightforward queries, Zendesk can connect a single AI Agent to trusted knowledge sources, allowing it to autonomously provide answers. Recently, the vendor has expanded this capability to email and strengthened its partnership with PolyAI to integrate conversational AI capabilities into the voice channel. Goehler remarked, “When I first heard a Poly bot, I thought it was a human; it even had subtle dialects and varied pacing.” This natural-sounding voice, combined with real-time data processing, enables the bot to understand customer intent and guide them through various processes. Zendesk aims to help customers automate up to 80 percent of their service inquiries. However, Goehler acknowledges that some situations will always require human intervention, whether due to case complexity or customer preferences. Therefore, the company continues to enhance its Agent Copilot, which now includes several new features.

The “Enhanced” Zendesk Agent Copilot

One of the most exciting new features in Agent Copilot is its “Procedure” capability. This allows contact centers to define specific procedures for the Copilot to execute on behalf of live agents. Users can specify these procedures in natural language, such as: “Do this first, then this, and finally this.” During live interactions, agents can request the Copilot to carry out tasks like scheduling appointments or sending shipping labels. The Copilot can also proactively suggest procedures, share recommended responses, and offer guidance through its new “auto-assist” mode. While the live agent remains in control, they can approve the Copilot’s suggestions, allowing it to handle much of the workload. Goehler noted, “If the agent wants to adjust something, they can do that, too. The AI continues to suggest steps and solutions.” This feature is particularly beneficial for companies facing high staff turnover, as it allows new agents to quickly adapt with consistent, high-quality guidance. Zendesk has also introduced Agent Copilot for Voice, making many of its capabilities accessible during customer calls. Agents will receive live call insights and relevant knowledge base content to enhance their interactions.

Elsewhere at Zendesk

2024 has been a transformative year for Zendesk.
The company has entered the workforce engagement management (WEM) market with acquisitions of Klaus and Tymeshift. This follows the integration of Ultimate, which laid the groundwork for the new Zendesk AI Agents and significantly enhanced the vendor’s conversational AI expertise. Additionally, Zendesk has developed a customer messaging app in collaboration with Meta, established a venture arm for AI startups, and announced new partnerships with AWS and Anthropic. Notably, Zendesk has gained attention for introducing an “industry-first” outcome-based pricing model. This move is significant as many CCaaS and CRM vendors, facing pressure from AI solutions that reduce headcounts, have traditionally relied on seat-based pricing models. By adopting outcome-based pricing, Zendesk ensures that customers only pay more when they achieve desired outcomes, addressing a key challenge in the industry.

Transforming Fundraising for Nonprofits


Tectonic’s Expertise in Salesforce Nonprofit Cloud: Transforming Fundraising for Nonprofits

Salesforce’s Nonprofit Cloud (NPC) is revolutionizing how organizations manage their fundraising, offering tools specifically designed to meet the unique needs of the nonprofit sector. A standout feature of Nonprofit Cloud is its comprehensive fundraising functionality, which goes beyond simple transaction management to support the entire lifecycle of donor engagement. Central to understanding this functionality is the “three P’s” concept—Pursuit, Promise, and Payment. These three stages enable nonprofits to effectively track and manage donor relationships and contributions.

Pursuit: Tracking the Opportunity

The first “P” in Salesforce’s Nonprofit Cloud fundraising process is Pursuit. This refers to the opportunity record, where the organization is actively seeking donations but no financial transaction has occurred yet. For example, a nonprofit might be pursuing a major donation of $500,000 from a corporate sponsor. At this stage, fundraisers track their progress through various phases of the opportunity, whether they win or lose the donation bid. The focus here is on relationship-building and securing commitments rather than managing financial transactions. This early-stage tracking lays the foundation for a more organized approach as the process advances.

Promise: Earning the Commitment

Once a donor—whether an individual or a corporation—has committed to contributing, the Promise phase begins. Here, the Opportunity record transforms into a Gift Commitment in Salesforce. For instance, when the company officially pledges the $500,000 donation, this formalizes their promise. The Gift Commitment record is dynamic and can be modified over time to reflect changes, such as adjusting the amount to $400,000 or setting up recurring donations. This flexibility enables nonprofits to track pledges over time and maintain accurate records of what has been promised versus what has been received. Financial teams especially benefit from this capability, as it aids in reporting and financial planning.

Payment: Completing the Financial Act

The final “P” is Payment, capturing the financial transaction. This is where the Gift Transaction record comes into play, reflecting the completion of the financial act. For example, once the company has paid $250,000 of the promised $400,000, the Payment record updates to reflect this. Payment records can either stand alone for one-time donations or be linked to Gift Commitments or a Gift Commitment Schedule for installment payments or recurring donations. This structure gives nonprofits the flexibility to track all stages of financial fulfillment and adjust their fundraising strategies accordingly.

Leveraging the Three P’s for Success

The Pursuit, Promise, and Payment framework provides nonprofits with a clear, structured approach to managing the entire donor lifecycle. This system also eases the transition from Salesforce’s legacy Nonprofit Success Pack (NPSP) to the new Nonprofit Cloud framework. By effectively tracking donation pursuits, managing gift commitments, and documenting payments, nonprofits can maintain a comprehensive, real-time view of their fundraising efforts. This streamlined process not only improves data management but also enhances transparency, fostering trust with donors.
The Future of Fundraising with Salesforce Nonprofit Cloud

Salesforce’s Nonprofit Cloud fundraising functionality, anchored by the three P’s, represents a significant evolution in nonprofit technology. By offering tools that manage every stage of donor engagement—from pursuit to payment—Salesforce empowers nonprofits to maximize their fundraising potential. Organizations can cultivate stronger donor relationships, track commitments more accurately, and ensure financial transactions are completed and documented efficiently. This holistic approach enables nonprofits to make informed decisions, boost donor trust, and drive their missions forward.

Want to learn more about how Tectonic can help streamline donation processes, track total payments, maintain a full 360° history of the donation cycle, and create funder-worthy visualizations? Contact us at [email protected].

Strawberry AI Models


Since OpenAI introduced its “Strawberry” AI models, something intriguing has unfolded. The o1-preview and o1-mini models have quickly gained attention for their superior step-by-step reasoning, offering a structured glimpse into problem-solving. However, behind this polished façade, a hidden layer of the AI’s mind remains off-limits—an area OpenAI is determined to keep out of reach.

Unlike previous models, the o1 series conceals its raw thought processes. Users only see the refined, final answer, generated by a secondary AI, while the deeper, unfiltered reasoning is locked away. Naturally, this secrecy has only fueled curiosity. Hackers, researchers, and enthusiasts are already working to break through this barrier. Using jailbreak techniques and clever prompt manipulations, they are seeking to uncover the AI’s raw chain of thought, hoping to reveal what OpenAI has concealed. Rumors of partial breakthroughs have circulated, though nothing definitive has emerged.

Meanwhile, OpenAI closely monitors these efforts, issuing warnings and threatening account bans to those who dig too deep. On platforms like X, users have reported receiving warnings merely for mentioning terms like “reasoning trace” in their interactions with the o1 models. Even casual inquiries into the AI’s thinking process seem to trigger OpenAI’s defenses. The company’s warnings are explicit: any attempt to expose the hidden reasoning violates their policies and could result in revoked access to the AI. Marco Figueroa, leader of Mozilla’s GenAI bug bounty program, publicly shared his experience after attempting to probe the model’s thought process through jailbreaks—he quickly found himself flagged by OpenAI. “Now I’m on their ban list,” Figueroa revealed.

So, why all the secrecy? OpenAI explained in a blog post titled Learning to Reason with LLMs that concealing the raw thought process allows for better monitoring of the AI’s decision-making without interfering with its cognitive flow. Revealing this raw data, they argue, could lead to unintended consequences, such as the model being misused to manipulate users or its internal workings being copied by competitors. OpenAI acknowledges that the raw reasoning process is valuable, and exposing it could give rivals an edge in training their own models.

However, critics, such as independent AI researcher Simon Willison, have condemned this decision. Willison argues that concealing the model’s thought process is a blow to transparency. “As someone working with AI systems, I need to understand how my prompts are being processed,” he wrote. “Hiding this feels like a step backward.”

Ultimately, OpenAI’s decision to keep the AI’s raw thought process hidden is about more than just user safety—it’s about control. By retaining access to these concealed layers, OpenAI maintains its lead in the competitive AI race. Yet, in doing so, they’ve sparked a hunt. Researchers, hackers, and enthusiasts continue to search for what remains hidden. And until that veil is lifted, the pursuit won’t stop.

Exploring Emerging LLM


Exploring Emerging LLM Agent Types and Architectures

The Evolution Beyond ReAct Agents

The shortcomings of first-generation ReAct agents have paved the way for a new era of LLM agents, bringing innovative architectures and possibilities. In 2024, agents have taken center stage in the AI landscape. Companies globally are developing chatbot agents, tools like MultiOn are bridging agents to external websites, and frameworks like LangGraph and LlamaIndex Workflows are helping developers build more structured, capable agents. However, despite their rising popularity within the AI community, agents are yet to see widespread adoption among consumers or enterprises. This leaves businesses wondering: How do we navigate these emerging frameworks and architectures? Which tools should we leverage for our next application? Having recently developed a sophisticated agent as a product copilot, we share key insights to guide you through the evolving agent ecosystem.

What Are LLM-Based Agents?

At their core, LLM-based agents are software systems designed to execute complex tasks by chaining together multiple processing steps, including LLM calls, and deciding at each step what to do next based on the results so far.

The Rise and Fall of ReAct Agents

ReAct (reason, act) agents marked the first wave of LLM-powered tools. Promising broad functionality through abstraction, they fell short due to their limited utility and overgeneralized design. These challenges spurred the emergence of second-generation agents, emphasizing structure and specificity.

The Second Generation: Structured, Scalable Agents

Modern agents are defined by smaller solution spaces, offering narrower but more reliable capabilities. Instead of open-ended design, these agents map out defined paths for actions, improving precision and performance. A minimal sketch of this routed, defined-path style appears at the end of this post.

Agent Development Frameworks

Several frameworks are now available to simplify and streamline agent development. While frameworks can impose best practices and tooling, they may introduce limitations for highly complex applications. Many developers still prefer code-driven solutions for greater control.

Should You Build an Agent?

Before investing in agent development, consider whether your application genuinely needs multi-step reasoning, state, and tool use; if it does, an agent may be a suitable choice.

Conclusion

The generative AI landscape is brimming with new frameworks and fervent innovation. Before diving into development, evaluate your application needs and consider whether agent frameworks align with your objectives. By thoughtfully assessing the tools and architectures available, you can create agents that deliver measurable value while avoiding unnecessary complexity.
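Here is the defined-path sketch referenced above. It is an assumed illustration: the intents, handlers, and the mocked classifier are invented for the example. Instead of an open-ended ReAct loop, the agent classifies each request into one of a few known intents and runs a predefined path for each:

```python
# Second-generation style: constrain the solution space with explicit routes.
from typing import Callable

def classify_intent(user_input: str) -> str:
    """Stand-in for an LLM call constrained to return one allowed intent."""
    # In practice: prompt an LLM with the allowed labels and parse its answer.
    return "order_status"  # mocked output

def handle_billing(user_input: str) -> str:
    return "Run the billing path: look up invoices, then answer."

def handle_order_status(user_input: str) -> str:
    return "Run the order-status path: query the order API, then answer."

def handle_other(user_input: str) -> str:
    return "Escalate to a human agent."

ROUTES: dict[str, Callable[[str], str]] = {
    "billing": handle_billing,
    "order_status": handle_order_status,
    "other": handle_other,
}

def run_agent(user_input: str) -> str:
    intent = classify_intent(user_input)               # single LLM decision point
    return ROUTES.get(intent, handle_other)(user_input)  # predefined path

print(run_agent("Where is my order #123?"))
```

Because the model only chooses among known routes, each path can be tested and hardened independently, which is what makes these agents more reliable than their open-ended predecessors.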

Small Language Models


Large language models (LLMs) like OpenAI’s GPT-4 have gained acclaim for their versatility across various tasks, but they come with significant resource demands. In response, the AI industry is shifting focus towards smaller, task-specific models designed to be more efficient. Microsoft, alongside other tech giants, is investing in these smaller models. Science often involves breaking complex systems down into their simplest forms to understand their behavior. This reductionist approach is now being applied to AI, with the goal of creating smaller models tailored for specific functions. Sébastien Bubeck, Microsoft’s VP of generative AI, highlights this trend: “You have this miraculous object, but what exactly was needed for this miracle to happen; what are the basic ingredients that are necessary?”

In recent years, the proliferation of LLMs like ChatGPT, Gemini, and Claude has been remarkable. However, smaller language models (SLMs) are gaining traction as a more resource-efficient alternative. Despite their smaller size, SLMs promise substantial benefits to businesses. Microsoft introduced Phi-1 in June last year, a smaller model aimed at aiding Python coding. This was followed by Phi-2 and Phi-3, which, though larger than Phi-1, are still much smaller than leading LLMs. For comparison, Phi-3-medium has 14 billion parameters, while GPT-4 is estimated to have 1.76 trillion parameters—about 125 times more. Microsoft touts the Phi-3 models as “the most capable and cost-effective small language models available.”

Microsoft’s shift towards SLMs reflects a belief that the dominance of a few large models will give way to a more diverse ecosystem of smaller, specialized models. For instance, an SLM designed specifically for analyzing consumer behavior might be more effective for targeted advertising than a broad, general-purpose model trained on the entire internet. SLMs excel in their focused training on specific domains. “The whole fine-tuning process … is highly specialized for specific use-cases,” explains Silvio Savarese, Chief Scientist at Salesforce, another company advancing SLMs. To illustrate, using a specialized screwdriver for a home repair project is more practical than a multifunction tool that’s more expensive and less focused.

This trend towards SLMs reflects a broader shift in the AI industry from hype to practical application. As Brian Yamada of VML notes, “As we move into the operationalization phase of this AI era, small will be the new big.” Smaller, specialized models or combinations of models will address specific needs, saving time and resources. Some voices express concern over the dominance of a few large models, with figures like Jack Dorsey advocating for a diverse marketplace of algorithms. Philippe Krakowsky of IPG also worries that relying on the same models might stifle creativity.

SLMs offer the advantage of lower costs, both in development and operation. Microsoft’s Bubeck emphasizes that SLMs are “several orders of magnitude cheaper” than larger models. Typically, SLMs operate with around three to four billion parameters, making them feasible for deployment on devices like smartphones. However, smaller models come with trade-offs. Fewer parameters mean reduced capabilities. “You have to find the right balance between the intelligence that you need versus the cost,” Bubeck acknowledges. Salesforce’s Savarese views SLMs as a step towards a new form of AI, characterized by “agents” capable of performing specific tasks and executing plans autonomously.
This vision of AI agents goes beyond today’s chatbots, which can generate a travel itinerary but cannot take action on your behalf. Salesforce recently introduced a 1-billion-parameter SLM that reportedly outperforms some LLMs on targeted tasks. Salesforce CEO Marc Benioff celebrated the advancement, proclaiming, “On-device agentic AI is here!”
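To make that scale concrete, here is a minimal sketch of running a roughly 3.8-billion-parameter SLM locally with the Hugging Face transformers library. The model ID, the example prompt, and the generation settings are illustrative assumptions rather than anything from the article, and the chat-style pipeline input assumes a recent transformers release.

python
# pip install transformers accelerate torch  (assumed environment)
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Hypothetical choice: Phi-3-mini (~3.8B parameters), small enough
# to run on a single consumer GPU or even CPU.
model_id = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick an appropriate precision for the hardware
    device_map="auto",    # place the model on GPU/CPU automatically
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# A narrow, task-specific request: the kind of job an SLM is tuned for.
messages = [
    {"role": "system", "content": "You are a concise coding assistant."},
    {"role": "user", "content": "Write a Python one-liner that reverses a string."},
]

result = generator(messages, max_new_tokens=100, return_full_text=False)
print(result[0]["generated_text"])

The design point is the footprint, not the prompt: a model of this size downloads in gigabytes rather than terabytes and can answer a focused question without a round trip to a hosted frontier model.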

Communicating With Machines

For as long as machines have existed, humans have struggled to communicate effectively with them. The rise of large language models (LLMs) has transformed this dynamic, making “prompting” the bridge between our intentions and AI’s actions. By providing pre-trained models with clear instructions and context, we can ensure they understand and respond correctly. As UX practitioners, we now play a key role in facilitating this interaction, helping humans and machines truly connect.

The UX discipline was born alongside graphical user interfaces (GUIs), offering a way for the average person to interact with computers without needing to write code. We introduced familiar concepts like desktops, trash cans, and save icons to align with users’ mental models, while complex code ran behind the scenes. Now, with the power of AI and the transformer architecture, a new form of interaction has emerged: natural language communication. This shift has changed the design landscape, moving us from purely graphical interfaces to an era where text-based interactions dominate. As designers, we must reconsider where our focus should lie in this evolving environment.

A Mental Shift

In the era of command-based design, we focused on breaking down complex user problems, mapping out customer journeys, and creating deterministic flows. Now, with AI at the forefront, our challenge is to provide models with the right context for optimal output and to refine their responses through iteration.

Shifting Complexity to the Edges

Successful communication, whether with a person or a machine, hinges on context. Just as you would clearly explain your needs to a salesperson to get the right product, AI models also need clear instructions. Expecting users to type all the necessary information into their prompts will not lead to widespread adoption of these models. Here, UX practitioners play a critical role: we can design user experiences that integrate context, some of it visible to users and some hidden, shaping how the AI responds. This lets users communicate with machines without the burden of writing detailed, manual prompts.

The Craft of Prompting

As designers, our work in crafting prompts spans several areas. Even if your team isn’t building custom models, there is still plenty to do: you can help select pre-trained models that align with user goals and design a seamless experience around them.

Understanding the Context Window

A key concept for UX designers to understand is the “context window”: the information a model can process when generating an output. Think of it as the model’s working memory for a conversation. Companies can use it to include hidden prompts that guide AI responses to align with brand values and user intent. Context windows are measured in tokens, not time, so even if you return to a conversation weeks later, the model remembers previous interactions, provided they still fit within the token limit. With innovations like Gemini’s 2-million-token context window, AI models are moving toward effectively unlimited memory, which will bring new design challenges for UX practitioners.

How to Approach Prompting

Prompting is an iterative process: you craft an instruction, test it with the model, and refine it based on the results. Common techniques include giving the model examples of the desired output, breaking a task into explicit steps, and assigning the model a role. Depending on the scenario, you will use either direct, simple prompts (for user-facing interactions) or broader, more structured system prompts (for behind-the-scenes guidance), as the sketch below illustrates.
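Here is a minimal sketch of that division of labor in Python: a hidden system prompt carries the brand context the user never types, while a token check treats the context window as an explicit budget. The persona, model name, and token limit are illustrative assumptions; the OpenAI client and tiktoken are used only as familiar stand-ins for whatever stack your team runs, and the token count is approximate.

python
# pip install openai tiktoken  (assumed environment)
import tiktoken
from openai import OpenAI

# Hidden system prompt: context shaped by the UX team, invisible to the user.
SYSTEM_PROMPT = (
    "You are a support assistant for Acme Corp. "   # hypothetical brand
    "Answer in a friendly, concise tone and never promise refunds."
)

MODEL = "gpt-4o-mini"     # any chat model; swap for your own deployment
CONTEXT_LIMIT = 128_000   # assumed context window for this model, in tokens

def count_tokens(text: str) -> int:
    """Rough token count; cl100k_base approximates recent OpenAI tokenizers."""
    return len(tiktoken.get_encoding("cl100k_base").encode(text))

def ask(history: list[dict], user_message: str) -> str:
    # The user only writes a short, direct prompt; the system prompt is prepended.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + history
    messages.append({"role": "user", "content": user_message})

    # The conversation only "remembers" what still fits in the context window.
    used = sum(count_tokens(m["content"]) for m in messages)
    if used > CONTEXT_LIMIT:
        raise ValueError(f"{used} tokens exceed the {CONTEXT_LIMIT}-token window")

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    return reply.choices[0].message.content

history: list[dict] = []
print(ask(history, "Where is my order?"))

The particular API matters less than the shape of the design: the system prompt does the contextual heavy lifting so the user does not have to, and the token check turns the model’s “memory” into a visible constraint the team can design around.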
Get Organized

As prompting becomes more common, teams need a unified approach to avoid conflicting instructions. Proper documentation of system prompts is crucial, especially on larger teams, and helps prevent errors and hallucinations in model responses. Prompt experimentation may also reveal limitations in an AI model; these can be addressed in several ways, such as rewording the prompt, supplying additional context or examples, or switching to a model better suited to the task.

Looking Ahead

The UX landscape is evolving rapidly. Many organizations, particularly smaller ones, have yet to realize the importance of UX in AI prompting. Others may not allocate enough resources, underestimating the complexity and importance of UX in shaping AI interactions. As John Culkin said, “We shape our tools, and thereafter, our tools shape us.” The responsibility of integrating UX into AI development goes beyond individual organizations; it is shaping the future of human-computer interaction. This is a pivotal moment for UX, and how we adapt will define the next generation of design.

Content updated October 2024.
