Edges Archives - gettectonic.com
Gen AI Unleashed With Vector Databases

Knowledge Graphs and Vector Databases

The Role of Knowledge Graphs and Vector Databases in Retrieval-Augmented Generation (RAG)

In the dynamic AI landscape, Retrieval-Augmented Generation (RAG) systems are revolutionizing data retrieval by combining artificial intelligence with external data sources to deliver contextual, relevant outputs. Two core technologies drive this innovation: Knowledge Graphs and Vector Databases. While fundamentally different in design and functionality, these tools complement one another, unlocking new potential for solving complex data problems across industries.

Understanding Knowledge Graphs: Connecting the Dots

Knowledge Graphs organize data into a network of relationships, creating a structured representation of entities and how they interact. These graphs emphasize understanding and reasoning through data, offering explainable and highly contextual results.

Vector Databases: The Power of Similarity

In contrast, Vector Databases thrive on unstructured data such as text, images, and audio. By representing data as high-dimensional vectors, they excel at identifying similarities, enabling semantic understanding.

Combining Knowledge Graphs and Vector Databases: A Hybrid Approach

While both technologies excel independently, their combination can amplify RAG systems. Knowledge Graphs bring reasoning and structure, while Vector Databases offer rapid, similarity-based retrieval, creating hybrid systems that are more intelligent and versatile.

Knowledge Graphs vs. Vector Databases: Key Differences

Feature        | Knowledge Graphs           | Vector Databases
Data Type      | Structured                 | Unstructured
Core Strength  | Relational reasoning       | Similarity-based retrieval
Explainability | High                       | Low
Scalability    | Limited for large datasets | Efficient for massive datasets
Flexibility    | Schema-dependent           | Schema-free

Future Trends: The Path to Convergence

As AI evolves, the distinction between Knowledge Graphs and Vector Databases is beginning to blur. This convergence is paving the way for smarter, more adaptive systems that can handle both structured and unstructured data seamlessly.

Conclusion

Knowledge Graphs and Vector Databases represent two foundational technologies in Retrieval-Augmented Generation. Knowledge Graphs excel at reasoning through structured relationships, while Vector Databases shine in unstructured data retrieval. By combining their strengths, organizations can create hybrid systems that offer unparalleled insight, efficiency, and scalability. In a world where data continues to grow in complexity, leveraging these complementary tools is essential. Whether building intelligent healthcare systems, enhancing recommendation engines, or powering semantic search, the synergy between Knowledge Graphs and Vector Databases is unlocking the next frontier of AI innovation, transforming how industries harness the power of their data.
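The similarity-retrieval half of the hybrid approach described above can be sketched in plain Python. This is a toy illustration only: the three-dimensional "embeddings" and document titles are made up for the example, whereas a real vector database would store high-dimensional vectors produced by an embedding model and use an approximate nearest-neighbor index.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "vector database": each document stored with a precomputed embedding.
documents = {
    "solar panel installation guide": [0.9, 0.1, 0.0],
    "corporate tax filing rules":     [0.1, 0.8, 0.3],
    "home battery storage basics":    [0.7, 0.2, 0.1],
}

def retrieve(query_vector, k=2):
    """Return the titles of the k documents most similar to the query."""
    ranked = sorted(
        documents.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [title for title, _ in ranked[:k]]

print(retrieve([0.8, 0.1, 0.05]))
```

A knowledge-graph layer would sit alongside this, re-ranking or filtering the retrieved candidates using explicit entity relationships.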


AI and Disability

Dr. Johnathan Flowers of American University recently sparked a conversation on Bluesky regarding a statement from the organizers of NaNoWriMo, which endorsed the use of generative AI technologies, such as LLM chatbots, in this year's event. Dr. Flowers expressed concern about the implication that AI assistance was necessary for accessibility, arguing that it could undermine the creativity and agency of individuals with disabilities. He believes that art often serves as a unique space where barriers imposed by disability can be transcended without relying on external help or engaging in forced intimacy. For Dr. Flowers, suggesting the need for AI support may inadvertently diminish the perceived capabilities of disabled and marginalized artists.

Since the announcement, NaNoWriMo organizers have revised their stance in response to criticism, though much of the social media discussion has become unproductive.

In earlier discussions, the author has explored the implications of generative AI in art, focusing on the human connection that art typically fosters, which AI-generated content may not fully replicate. Here, however, they wish to address the role of AI as a tool for accessibility. Not being personally affected by physical disability, the author approaches this topic from a social scientific perspective, and acknowledges that the views expressed are personal and not representative of any particular community or organization.

Defining AI

In a recent presentation, the author offered a new definition of AI, drawing from contemporary regulatory and policy discussions:

AI: The application of specific forms of machine learning to perform tasks that would otherwise require human labor.

This definition is intentionally broad, encompassing not just generative AI but also other machine learning applications aimed at automating tasks.
AI as an Accessibility Tool

AI has the potential to enhance autonomy and independence for individuals with disabilities, paralleling the assistive-technology advances showcased at events like the Paris Paralympics. However, the author is keen to explore what unique benefits AI offers and what risks might arise.

AI and Disability

The author acknowledges that this overview touches on only some of the key issues related to AI and disability. It is crucial for those working in machine learning to be aware of these dynamics, striving to balance benefits with potential risks and ensuring equitable access to technological advancements.


AI Assistants Using LangGraph

In the evolving world of AI, retrieval-augmented generation (RAG) systems have become standard for handling straightforward queries and generating contextually relevant responses. However, as demand grows for more sophisticated AI applications, there is a need for systems that move beyond simple retrieval tasks. Enter AI agents: autonomous entities capable of executing complex, multi-step processes, maintaining state across interactions, and dynamically adapting to new information. LangGraph, a powerful extension of the LangChain library, is designed to help developers build these advanced AI agents, enabling stateful, multi-actor applications with cyclic computation capabilities.

In this insight, we'll explore how LangGraph changes AI development and provide a step-by-step guide to building your own AI agent, using an example that computes energy savings for solar panels. The example demonstrates how LangGraph's unique features enable the creation of intelligent, adaptable, and practical AI systems.

What is LangGraph?

LangGraph is an advanced library built on top of LangChain, designed to extend Large Language Model (LLM) applications by introducing cyclic computational capabilities. While LangChain supports Directed Acyclic Graphs (DAGs) for linear workflows, LangGraph adds the ability to include cycles, which are essential for developing agent-like behaviors. These cycles allow LLMs to loop through processes continuously, making decisions dynamically based on evolving inputs.

LangGraph: Nodes, States, and Edges

The core of LangGraph lies in its stateful graph structure: nodes represent units of work, edges define how control flows between them, and a shared state object carries context through the graph. LangGraph manages the graph structure, state, and coordination, allowing for the creation of sophisticated, multi-actor applications. With automatic state management and precise agent coordination, LangGraph facilitates innovative workflows while minimizing technical complexity.
Its flexibility enables the development of high-performance applications, and its scalability ensures robust and reliable systems, even at the enterprise level.

Step-by-step Guide

Now that we understand LangGraph's capabilities, let's dive into a practical example. We'll build an AI agent that calculates potential energy savings for solar panels based on user input. This agent can function as a lead-generation tool on a solar panel seller's website, providing personalized savings estimates based on key data such as monthly electricity costs. This example highlights how LangGraph can automate complex tasks and deliver business value.

Step 1: Import Necessary Libraries

We start by importing the essential Python libraries and modules for the project.

```python
from langchain_core.tools import tool
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnableLambda
from langchain_core.messages import ToolMessage
from langchain_aws import ChatBedrock
import boto3
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.prebuilt import ToolNode
```

Step 2: Define the Tool for Calculating Solar Savings

Next, we define a tool to calculate potential energy savings based on the user's monthly electricity cost.

```python
@tool
def compute_savings(monthly_cost: float) -> dict:
    """
    Tool to compute the potential savings when switching to solar energy
    based on the user's monthly electricity cost.

    Args:
        monthly_cost (float): The user's current monthly electricity cost.

    Returns:
        dict: A dictionary containing:
            - 'number_of_panels': The estimated number of solar panels required.
            - 'installation_cost': The estimated installation cost.
            - 'net_savings_10_years': The net savings over 10 years after installation costs.
    """
    def calculate_solar_savings(monthly_cost):
        # Assumptions used for the estimate
        cost_per_kWh = 0.28
        cost_per_watt = 1.50
        sunlight_hours_per_day = 3.5
        panel_wattage = 350
        system_lifetime_years = 10

        # Size the system from the user's consumption
        monthly_consumption_kWh = monthly_cost / cost_per_kWh
        daily_energy_production = monthly_consumption_kWh / 30
        system_size_kW = daily_energy_production / sunlight_hours_per_day

        # Cost and savings estimates
        number_of_panels = system_size_kW * 1000 / panel_wattage
        installation_cost = system_size_kW * 1000 * cost_per_watt
        annual_savings = monthly_cost * 12
        total_savings_10_years = annual_savings * system_lifetime_years
        net_savings = total_savings_10_years - installation_cost

        return {
            "number_of_panels": round(number_of_panels),
            "installation_cost": round(installation_cost, 2),
            "net_savings_10_years": round(net_savings, 2),
        }

    return calculate_solar_savings(monthly_cost)
```

Step 3: Set Up State Management and Error Handling

We define utilities to manage state and handle errors during tool execution.

```python
def handle_tool_error(state) -> dict:
    error = state.get("error")
    tool_calls = state["messages"][-1].tool_calls
    return {
        "messages": [
            ToolMessage(
                content=f"Error: {repr(error)}\n please fix your mistakes.",
                tool_call_id=tc["id"],
            )
            for tc in tool_calls
        ]
    }

def create_tool_node_with_fallback(tools: list) -> dict:
    return ToolNode(tools).with_fallbacks(
        [RunnableLambda(handle_tool_error)], exception_key="error"
    )
```

Step 4: Define the State and Assistant Class

We create the state management class and the assistant responsible for interacting with users.
```python
class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

class Assistant:
    def __init__(self, runnable: Runnable):
        self.runnable = runnable

    def __call__(self, state: State):
        while True:
            result = self.runnable.invoke(state)
            # Re-prompt if the LLM returns an empty response
            if not result.tool_calls and (
                not result.content
                or isinstance(result.content, list)
                and not result.content[0].get("text")
            ):
                messages = state["messages"] + [("user", "Respond with a real output.")]
                state = {**state, "messages": messages}
            else:
                break
        return {"messages": result}
```

Step 5: Set Up the LLM with AWS Bedrock

We configure AWS Bedrock to enable advanced LLM capabilities.

```python
def get_bedrock_client(region):
    return boto3.client("bedrock-runtime", region_name=region)

def create_bedrock_llm(client):
    return ChatBedrock(
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        client=client,
        model_kwargs={"temperature": 0},
        region_name="us-east-1",
    )

llm = create_bedrock_llm(get_bedrock_client(region="us-east-1"))
```

Step 6: Define the Assistant's Workflow

We create a template and bind the tools to the assistant's workflow.

```python
primary_assistant_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """You are a helpful customer support assistant for Solar Panels Belgium.
            Get the following information from the user:
            - monthly electricity cost
            Ask for clarification if necessary.
            """,
        ),
        ("placeholder", "{messages}"),
    ]
)

part_1_tools = [compute_savings]
part_1_assistant_runnable = primary_assistant_prompt | llm.bind_tools(part_1_tools)
```

Step 7: Build the Graph Structure

We define nodes and edges for managing the AI assistant's conversation flow.
```python
# These imports were not included in Step 1, so we add them here.
from langgraph.graph import StateGraph, START
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import tools_condition

builder = StateGraph(State)
builder.add_node("assistant", Assistant(part_1_assistant_runnable))
builder.add_node("tools", create_tool_node_with_fallback(part_1_tools))
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "assistant")

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
```

Step 8: Running the Assistant

The assistant can now be run through its graph structure to interact with users.

```python
import uuid

def _print_event(event, printed: set):
    """Minimal stand-in for the pretty-printing helper used in the
    LangChain tutorials: prints each new message exactly once."""
    message = event.get("messages")
    if message is not None:
        if isinstance(message, list):
            message = message[-1]
        if id(message) not in printed:
            print(getattr(message, "content", message))
            printed.add(id(message))

tutorial_questions = [
    "hey",
    "can you calculate my energy saving",
    "my monthly cost is $100, what will I save",
]

thread_id = str(uuid.uuid4())
config = {"configurable": {"thread_id": thread_id}}

_printed = set()
for question in tutorial_questions:
    events = graph.stream(
        {"messages": ("user", question)}, config, stream_mode="values"
    )
    for event in events:
        _print_event(event, _printed)
```

Conclusion

By following these steps, you can create AI assistants using LangGraph that calculate solar panel savings based on user input. This tutorial demonstrates how LangGraph empowers developers to create intelligent, adaptable systems capable of handling complex tasks efficiently. Whether your application is in customer support, energy management, or another domain, LangGraph provides the building blocks to make it possible.
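As a sanity check on the savings formula in Step 2, the arithmetic can be replayed in plain Python for a $100/month electricity bill. The rates and panel figures below simply mirror the assumptions hard-coded in compute_savings:

```python
# Replays the Step 2 arithmetic for a $100/month electricity bill.
monthly_cost = 100.0
cost_per_kWh, cost_per_watt = 0.28, 1.50
sunlight_hours_per_day, panel_wattage, lifetime_years = 3.5, 350, 10

monthly_consumption_kWh = monthly_cost / cost_per_kWh                   # ~357.14 kWh
system_size_kW = monthly_consumption_kWh / 30 / sunlight_hours_per_day  # ~3.40 kW
number_of_panels = round(system_size_kW * 1000 / panel_wattage)         # 10 panels
installation_cost = round(system_size_kW * 1000 * cost_per_watt, 2)     # $5102.04
net_savings = round(monthly_cost * 12 * lifetime_years - installation_cost, 2)

print(number_of_panels, installation_cost, net_savings)  # 10 5102.04 6897.96
```

So under these assumptions, a $100 monthly bill maps to roughly a 10-panel system costing about $5,102 to install, with about $6,898 in net savings over ten years.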


AI Agent Workflows

AI Agent Workflows: The Ultimate Guide to Choosing Between LangChain and LangGraph

Explore two transformative libraries, LangChain and LangGraph, both created by the same developer and both designed for building Agentic AI applications. This guide dives into their foundational components, their differences in handling functionality, and how to choose the right tool for your use case.

Language Models as the Bridge

Modern language models have unlocked revolutionary ways to connect users with AI systems and to enable AI-to-AI communication via natural language. Enterprises aiming to harness Agentic AI capabilities often face the pivotal question: "Which tools should we use?" For those eager to begin, this question can become a roadblock.

Why LangChain and LangGraph?

LangChain and LangGraph are among the leading frameworks for crafting Agentic AI applications. By understanding their core building blocks and approaches to functionality, you'll gain clarity on how each aligns with your needs. Keep in mind that the rapid evolution of generative AI tools means today's truths might shift tomorrow.

Note: Initially, this guide intended to compare AutoGen, LangChain, and LangGraph. However, AutoGen's upcoming 0.4 release introduces a foundational redesign. Stay tuned for insights post-launch!

Understanding the Basics

LangChain

LangChain offers two primary methods: using its prebuilt, off-the-shelf chains, or composing custom chains from its core components such as models, prompts, and tools.

LangGraph

LangGraph is tailored for graph-based workflows, enabling flexibility in non-linear, conditional, or feedback-loop processes. It's ideal for cases where LangChain's predefined structure might not suffice.
Comparing Functionality

The comparison between the two libraries spans tool calling, conversation history and memory, retrieval-augmented generation (RAG), and parallelism and error handling; they differ mainly in how much structure they impose versus how much control they expose.

When to Choose LangChain, LangGraph, or Both

Broadly, LangChain alone suits linear, predefined workflows; LangGraph alone suits non-linear flows that need cycles, branching, or shared state; and the two can be combined, with LangChain components serving as building blocks inside a LangGraph graph.

Final Thoughts

Whether you choose LangChain, LangGraph, or a combination, the decision depends on your project's complexity and specific needs. By understanding their unique capabilities, you can confidently design robust Agentic AI workflows.


Ambient AI Enhances Patient-Provider Relationship

How Ambient AI is Enhancing the Patient-Provider Relationship

Ambient AI is transforming the patient-provider experience at Ochsner Health by enabling clinicians to focus more on their patients and less on their screens. While some view technology as a barrier to human interaction, Ochsner's innovation officer, Dr. Jason Hill, believes ambient AI is doing the opposite by fostering stronger connections between patients and providers.

Researchers estimate that physicians spend over 40% of consultation time focused on electronic health records (EHRs), limiting face-to-face interactions. "We have highly skilled professionals spending time inputting data instead of caring for patients, and as a result, patients feel disconnected due to the screen barrier," Hill said. Additionally, increased documentation demands related to quality reporting, patient satisfaction, and reimbursement are straining providers.

Ambient AI scribes help relieve this burden by automating clinical documentation, allowing providers to focus on their patients. Using machine learning, these AI tools generate clinical notes in seconds from recorded conversations; clinicians then review and edit the drafts before finalizing the record.

Ochsner began exploring ambient AI several years ago, but only with the advent of advanced language models like OpenAI's GPT did the technology become scalable and cost-effective for large health systems. "Once the technology became affordable for large-scale deployment, we were immediately interested," Hill explained.

Selecting the Right Vendor

Ochsner piloted two ambient AI tools before choosing DeepScribe for an enterprise-wide partnership. After the initial rollout to 60 physicians, the tool achieved a 75% adoption rate and improved patient satisfaction scores by 6%. What set DeepScribe apart were its customization features.
"We can create templates for different specialties, but individual doctors retain control over their note outputs based on specific clinical encounters," Hill said. This flexibility was crucial in gaining physician buy-in.

Ochsner also valued DeepScribe's strong vendor support, which included tailored training modules and direct assistance to clinicians. One example of this support was the development of a software module that allowed Ochsner's providers to see EHR reminders within the ambient AI app. "DeepScribe built a bridge to bring EHR data into the app, so clinicians could access important information right before the visit," Hill noted.

Ensuring Documentation Quality

Ochsner has implemented several safeguards to maintain the accuracy of AI-generated clinical documentation. Providers undergo training before using the ambient AI system, with a focus on reviewing and finalizing all AI-generated notes. Notes created by the AI remain in a "pended" state until the provider signs off. Ochsner also tracks how much text is generated by the AI versus added by the provider, using this as a marker for the level of editing required.

Following the successful pilot, Ochsner plans to expand ambient AI to 600 clinicians by the end of the year, with the eventual goal of providing access to all 4,700 physicians. While Hill anticipates widespread adoption, he acknowledges that the technology may not suit every provider. "Some clinicians have different documentation needs, but for the vast majority, this will likely become the standard way we document at Ochsner within a year," he said.

Conclusion

By integrating ambient AI, Ochsner Health is not only improving operational efficiency but also strengthening the human connection between patients and providers. As the technology becomes more widespread, it holds the potential to reshape how clinical documentation is handled, freeing up time for more meaningful patient interactions.
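The AI-versus-provider edit tracking described above could be approximated by diffing the AI draft against the signed note. This is purely an illustration (the article does not describe Ochsner's actual implementation); the example sentences and the word-level diff metric are our own:

```python
import difflib

def provider_edit_share(ai_draft: str, final_note: str) -> float:
    """Fraction of the final note's words that did not carry over from the
    AI draft: a rough proxy for how much editing the provider did."""
    matcher = difflib.SequenceMatcher(None, ai_draft.split(), final_note.split())
    matched = sum(block.size for block in matcher.get_matching_blocks())
    total = len(final_note.split())
    return 1.0 - matched / total if total else 0.0

draft = "patient reports mild headache for two days no fever"
final = "patient reports severe headache for two days no fever started after fall"
print(round(provider_edit_share(draft, final), 2))  # 0.33
```

A higher share would flag notes that needed heavy editing, which is the signal described in the article.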


Zendesk Launches AI Agent Builder

Zendesk Launches AI Agent Builder and Enhances Agent Copilot

Zendesk has unveiled its AI Agent Builder, a key feature in a series of significant updates across its platform. This new tool enables customer service teams to create bots, now referred to as "AI Agents," using natural language descriptions. For example, a user might input: "A customer wants to return a product." The AI Agent Builder will recognize the scenario and automatically create a framework for the AI Agent, which can then be reviewed, tested, and deployed. This framework might include essential steps like checking the order number, verifying the items for return, and cross-referencing the return policy.

Matthias Goehler, CTO for EMEA at Zendesk, explains: "You can define any number of workflows in the same straightforward manner. The best part is that business users can do this without needing to design complex flowcharts or decision trees." However, developers may still need to consult an API when creating AI Agents that interact with multiple third-party applications.

Other Enhancements to Zendesk's AI Agents

The AI Agent Builder simplifies the automation of customer interactions that involve multiple steps. For more straightforward queries, Zendesk can connect a single AI Agent to trusted knowledge sources, allowing it to answer autonomously. Recently, the vendor expanded this capability to email and strengthened its partnership with Poly.AI to integrate conversational AI capabilities into the voice channel.

Goehler remarked, "When I first heard a Poly bot, I thought it was a human; it even had subtle dialects and varied pacing." This natural-sounding voice, combined with real-time data processing, enables the bot to understand customer intent and guide customers through various processes. Zendesk aims to help customers automate up to 80 percent of their service inquiries.
However, Goehler acknowledges that some situations will always require human intervention, whether due to case complexity or customer preference. Therefore, the company continues to enhance its Agent Copilot, which now includes several new features.

The "Enhanced" Zendesk Agent Copilot

One of the most notable new features in Agent Copilot is its "Procedure" capability, which allows contact centers to define specific procedures for the Copilot to execute on behalf of live agents. Users can specify these procedures in natural language, such as: "Do this first, then this, and finally this." During live interactions, agents can ask the Copilot to carry out tasks like scheduling appointments or sending shipping labels.

The Copilot can also proactively suggest procedures, share recommended responses, and offer guidance through its new "auto-assist" mode. The live agent remains in control: they approve the Copilot's suggestions, letting it handle much of the workload. Goehler noted, "If the agent wants to adjust something, they can do that, too. The AI continues to suggest steps and solutions." This feature is particularly beneficial for companies facing high staff turnover, as it allows new agents to quickly adapt with consistent, high-quality guidance.

Zendesk has also introduced Agent Copilot for Voice, making many of these capabilities accessible during customer calls. Agents receive live call insights and relevant knowledge base content to enhance their interactions.

Elsewhere at Zendesk

2024 has been a transformative year for Zendesk. The company has entered the workforce engagement management (WEM) market with acquisitions of Klaus and Tymeshift. This follows the integration of Ultimate, which laid the groundwork for the new Zendesk AI Agents and significantly deepened the vendor's conversational AI expertise.
Additionally, Zendesk has developed a customer messaging app in collaboration with Meta, established a venture arm for AI startups, and announced new partnerships with AWS and Anthropic.

Notably, Zendesk has gained attention for introducing an "industry-first" outcome-based pricing model. This move is significant because many CCaaS and CRM vendors, facing pressure from AI solutions that reduce headcounts, have traditionally relied on seat-based pricing. By adopting outcome-based pricing, Zendesk ensures that customers pay more only when they achieve desired outcomes, addressing a key challenge in the industry.
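The "Procedure" idea described above, ordered steps that a copilot executes while the live agent retains veto power, can be sketched in plain Python. This is purely illustrative: the step names and the approval callback are made up for the example and are not Zendesk's actual API.

```python
# Illustrative only: a "procedure" as an ordered list of steps, each of
# which the live agent can approve or veto before it runs.
def look_up_order(ctx):
    ctx["order"] = f"order-{ctx['customer_id']}"

def create_shipping_label(ctx):
    ctx["label"] = f"label-for-{ctx['order']}"

def notify_customer(ctx):
    ctx["notified"] = True

procedure = [look_up_order, create_shipping_label, notify_customer]

def run_procedure(steps, ctx, approve=lambda step: True):
    """Run steps in order; skip any step the agent does not approve."""
    for step in steps:
        if approve(step):
            step(ctx)
    return ctx

result = run_procedure(procedure, {"customer_id": 42})
print(result["label"])  # label-for-order-42
```

In the real product the steps come from a natural-language description rather than code, but the control flow, ordered execution with a human in the approval loop, is the same shape.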


Transforming Fundraising for Nonprofits

Tectonic's Expertise in Salesforce Nonprofit Cloud: Transforming Fundraising for Nonprofits

Salesforce's Nonprofit Cloud (NPC) is revolutionizing how organizations manage their fundraising, offering tools specifically designed for the unique needs of the nonprofit sector. A standout feature of Nonprofit Cloud is its comprehensive fundraising functionality, which goes beyond simple transaction management to support the entire lifecycle of donor engagement. Central to understanding this functionality is the "three P's" concept: Pursuit, Promise, and Payment. These three stages enable nonprofits to effectively track and manage donor relationships and contributions.

Pursuit: Tracking the Opportunity

The first "P" in Salesforce's Nonprofit Cloud fundraising process is Pursuit. This refers to the Opportunity record, where the organization is actively seeking a donation but no financial transaction has occurred yet. For example, a nonprofit might be pursuing a major donation of $500,000 from a corporate sponsor. At this stage, fundraisers track their progress through the phases of the opportunity, whether they win or lose the donation bid. The focus here is on relationship-building and securing commitments rather than managing financial transactions. This early-stage tracking lays the foundation for a more organized approach as the process advances.

Promise: Earning the Commitment

Once a donor, whether an individual or a corporation, has committed to contributing, the Promise phase begins. Here, the Opportunity record transforms into a Gift Commitment in Salesforce. For instance, when the company officially pledges the $500,000 donation, this formalizes their promise. The Gift Commitment record is dynamic and can be modified over time to reflect changes, such as adjusting the amount to $400,000 or setting up recurring donations.
This flexibility enables nonprofits to track pledges over time and maintain accurate records of what has been promised versus what has been received. Financial teams especially benefit from this capability, as it aids reporting and financial planning.

Payment: Completing the Financial Act

The final "P" is Payment, capturing the financial transaction. This is where the Gift Transaction record comes into play, reflecting the completion of the financial act. For example, once the company has paid $250,000 of the promised $400,000, the Payment record updates to reflect this. Payment records can stand alone for one-time donations or be linked to a Gift Commitment or a Gift Commitment Schedule for installment payments and recurring donations. This structure gives nonprofits the flexibility to track all stages of financial fulfillment and adjust their fundraising strategies accordingly.

Leveraging the Three P's for Success

The Pursuit, Promise, and Payment framework provides nonprofits with a clear, structured approach to managing the entire donor lifecycle. It also eases the transition from Salesforce's legacy Nonprofit Success Pack (NPSP) to the new Nonprofit Cloud framework. By effectively tracking donation pursuits, managing gift commitments, and documenting payments, nonprofits can maintain a comprehensive, real-time view of their fundraising efforts. This streamlined process not only improves data management but also enhances transparency, fostering trust with donors.

The Future of Fundraising with Salesforce Nonprofit Cloud

Salesforce's Nonprofit Cloud fundraising functionality, anchored by the three P's, represents a significant evolution in nonprofit technology. By offering tools that manage every stage of donor engagement, from pursuit to payment, Salesforce empowers nonprofits to maximize their fundraising potential.
Organizations can cultivate stronger donor relationships, track commitments more accurately, and ensure financial transactions are completed and documented efficiently. This holistic approach enables nonprofits to make informed decisions, boost donor trust, and drive their missions forward. Want to learn more about how Tectonic can help streamline donation processes, track total payments, maintain a full 360° history of the donation cycle, and create funder-worthy visualizations? Contact us at [email protected].

Strawberry AI Models

Since OpenAI introduced its “Strawberry” AI models, something intriguing has unfolded. The o1-preview and o1-mini models have quickly gained attention for their superior step-by-step reasoning, offering a structured glimpse into problem-solving. However, behind this polished façade, a hidden layer of the AI’s mind remains off-limits—an area OpenAI is determined to keep out of reach. Unlike previous models, the o1 series conceals its raw thought processes. Users only see the refined, final answer, generated by a secondary AI, while the deeper, unfiltered reasoning is locked away. Naturally, this secrecy has only fueled curiosity. Hackers, researchers, and enthusiasts are already working to break through this barrier. Using jailbreak techniques and clever prompt manipulations, they are seeking to uncover the AI’s raw chain of thought, hoping to reveal what OpenAI has concealed. Rumors of partial breakthroughs have circulated, though nothing definitive has emerged. Meanwhile, OpenAI closely monitors these efforts, issuing warnings and threatening account bans to those who dig too deep. On platforms like X, users have reported receiving warnings merely for mentioning terms like “reasoning trace” in their interactions with the o1 models. Even casual inquiries into the AI’s thinking process seem to trigger OpenAI’s defenses. The company’s warnings are explicit: any attempt to expose the hidden reasoning violates their policies and could result in revoked access to the AI. Marco Figueroa, leader of Mozilla’s GenAI bug bounty program, publicly shared his experience after attempting to probe the model’s thought process through jailbreaks—he quickly found himself flagged by OpenAI. “Now I’m on their ban list,” Figueroa revealed. So, why all the secrecy? OpenAI explained in a blog post titled Learning to Reason with LLMs that concealing the raw thought process allows for better monitoring of the AI’s decision-making without interfering with its cognitive flow.
Revealing this raw data, they argue, could lead to unintended consequences, such as the model being misused to manipulate users or its internal workings being copied by competitors. OpenAI acknowledges that the raw reasoning process is valuable, and exposing it could give rivals an edge in training their own models. However, critics, such as independent AI researcher Simon Willison, have condemned this decision. Willison argues that concealing the model’s thought process is a blow to transparency. “As someone working with AI systems, I need to understand how my prompts are being processed,” he wrote. “Hiding this feels like a step backward.” Ultimately, OpenAI’s decision to keep the AI’s raw thought process hidden is about more than just user safety—it’s about control. By retaining access to these concealed layers, OpenAI maintains its lead in the competitive AI race. Yet, in doing so, they’ve sparked a hunt. Researchers, hackers, and enthusiasts continue to search for what remains hidden. And until that veil is lifted, the pursuit won’t stop.

Exploring Emerging LLM Agent Types and Architectures

Exploring Emerging LLM Agent Types and Architectures

The Evolution Beyond ReAct Agents

The shortcomings of first-generation ReAct agents have paved the way for a new era of LLM agents, bringing innovative architectures and possibilities. In 2024, agents have taken center stage in the AI landscape. Companies globally are developing chatbot agents, tools like MultiOn are bridging agents to external websites, and frameworks like LangGraph and LlamaIndex Workflows are helping developers build more structured, capable agents. However, despite their rising popularity within the AI community, agents have yet to see widespread adoption among consumers or enterprises. This leaves businesses wondering: How do we navigate these emerging frameworks and architectures? Which tools should we leverage for our next application? Having recently developed a sophisticated agent as a product copilot, we share key insights to guide you through the evolving agent ecosystem.

What Are LLM-Based Agents?

At their core, LLM-based agents are software systems designed to execute complex tasks by chaining together multiple processing steps, including LLM calls.

The Rise and Fall of ReAct Agents

ReAct (reason, act) agents marked the first wave of LLM-powered tools. Promising broad functionality through abstraction, they fell short due to their limited utility and overgeneralized design. These challenges spurred the emergence of second-generation agents, emphasizing structure and specificity.

The Second Generation: Structured, Scalable Agents

Modern agents are defined by smaller solution spaces, offering narrower but more reliable capabilities. Instead of open-ended design, these agents map out defined paths for actions, improving precision and performance.
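The ReAct (reason, act) pattern mentioned above can be sketched as a short loop: the model reasons about the next step, an external tool acts, and the observation feeds back into the transcript. This is a toy illustration only; `llm` is a stand-in stub, not a real model API, and a real agent would alternate tool calls with reasoning.

```python
# Minimal sketch of the ReAct (reason, act) loop. The `llm` function and
# tool registry are hypothetical stand-ins, not a real model or SDK.

def llm(prompt: str) -> str:
    # Stand-in model: here it always decides it can answer directly.
    return "FINISH: 42"

TOOLS = {"search": lambda query: f"results for {query}"}

def react_agent(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)            # reason: model chooses the next action
        if step.startswith("FINISH:"):    # model decides it is done
            return step.removeprefix("FINISH:").strip()
        tool, _, arg = step.partition(" ")
        observation = TOOLS[tool](arg)    # act: run the chosen tool
        transcript += f"{step}\nObservation: {observation}\n"
    return "gave up"

print(react_agent("What is 6 x 7?"))  # 42
```

Second-generation agents constrain this loop: instead of letting the model pick any action at any step, they route it along predefined paths, which is exactly the "smaller solution space" the article describes.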
Key characteristics of second-gen agents include:

Common Agent Architectures

Agent Development Frameworks

Several frameworks are now available to simplify and streamline agent development: While frameworks can impose best practices and tooling, they may introduce limitations for highly complex applications. Many developers still prefer code-driven solutions for greater control.

Should You Build an Agent?

Before investing in agent development, consider these criteria: If you answered “yes,” an agent may be a suitable choice.

Challenges and Solutions in Agent Development

Common Issues: Strategies to Address Challenges:

Conclusion

The generative AI landscape is brimming with new frameworks and fervent innovation. Before diving into development, evaluate your application needs and consider whether agent frameworks align with your objectives. By thoughtfully assessing the tools and architectures available, you can create agents that deliver measurable value while avoiding unnecessary complexity.

Small Language Models

Large language models (LLMs) like OpenAI’s GPT-4 have gained acclaim for their versatility across various tasks, but they come with significant resource demands. In response, the AI industry is shifting focus towards smaller, task-specific models designed to be more efficient. Microsoft, alongside other tech giants, is investing in these smaller models. Science often involves breaking complex systems down into their simplest forms to understand their behavior. This reductionist approach is now being applied to AI, with the goal of creating smaller models tailored for specific functions. Sébastien Bubeck, Microsoft’s VP of generative AI, highlights this trend: “You have this miraculous object, but what exactly was needed for this miracle to happen; what are the basic ingredients that are necessary?” In recent years, the proliferation of LLMs like ChatGPT, Gemini, and Claude has been remarkable. However, smaller language models (SLMs) are gaining traction as a more resource-efficient alternative. Despite their smaller size, SLMs promise substantial benefits to businesses. Microsoft introduced Phi-1 in June last year, a smaller model aimed at aiding Python coding. This was followed by Phi-2 and Phi-3, which, though larger than Phi-1, are still much smaller than leading LLMs. For comparison, Phi-3-medium has 14 billion parameters, while GPT-4 is estimated to have 1.76 trillion parameters—about 125 times more. Microsoft touts the Phi-3 models as “the most capable and cost-effective small language models available.” Microsoft’s shift towards SLMs reflects a belief that the dominance of a few large models will give way to a more diverse ecosystem of smaller, specialized models. For instance, an SLM designed specifically for analyzing consumer behavior might be more effective for targeted advertising than a broad, general-purpose model trained on the entire internet. SLMs excel in their focused training on specific domains. 
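The parameter figures quoted above can be sanity-checked with simple arithmetic. The GPT-4 number is a widely cited estimate, not an official figure, and the memory rule of thumb below (2 bytes per parameter at 16-bit precision) ignores activations and runtime overhead.

```python
# Back-of-the-envelope check of the parameter figures quoted above.
phi3_medium = 14e9           # Phi-3-medium: 14 billion parameters
gpt4_estimate = 1.76e12      # GPT-4: a widely cited estimate, not official

ratio = gpt4_estimate / phi3_medium
print(round(ratio))          # ~126, i.e. "about 125 times more"

# Rough memory needed just to hold the weights at 16-bit precision
# (2 bytes per parameter), ignoring activations and runtime overhead.
def weight_gb(params: float, bytes_per_param: int = 2) -> float:
    return params * bytes_per_param / 1e9

print(weight_gb(3.5e9))          # a ~3-4B-parameter SLM: about 7 GB of weights
print(weight_gb(gpt4_estimate))  # vs. thousands of GB for the GPT-4 estimate
```

The roughly 7 GB figure for a 3-4 billion parameter model is what makes on-device deployment on smartphones plausible, while the estimated GPT-4 footprint requires datacenter hardware.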
“The whole fine-tuning process … is highly specialized for specific use-cases,” explains Silvio Savarese, Chief Scientist at Salesforce, another company advancing SLMs. To illustrate, using a specialized screwdriver for a home repair project is more practical than a multifunction tool that’s more expensive and less focused. This trend towards SLMs reflects a broader shift in the AI industry from hype to practical application. As Brian Yamada of VLM notes, “As we move into the operationalization phase of this AI era, small will be the new big.” Smaller, specialized models or combinations of models will address specific needs, saving time and resources. Some voices express concern over the dominance of a few large models, with figures like Jack Dorsey advocating for a diverse marketplace of algorithms. Philippe Krakowski of IPG also worries that relying on the same models might stifle creativity. SLMs offer the advantage of lower costs, both in development and operation. Microsoft’s Bubeck emphasizes that SLMs are “several orders of magnitude cheaper” than larger models. Typically, SLMs operate with around three to four billion parameters, making them feasible for deployment on devices like smartphones. However, smaller models come with trade-offs. Fewer parameters mean reduced capabilities. “You have to find the right balance between the intelligence that you need versus the cost,” Bubeck acknowledges. Salesforce’s Savarese views SLMs as a step towards a new form of AI, characterized by “agents” capable of performing specific tasks and executing plans autonomously. This vision of AI agents goes beyond today’s chatbots, which can generate travel itineraries but not take action on your behalf. Salesforce recently introduced a 1 billion-parameter SLM that reportedly outperforms some LLMs on targeted tasks. 
Salesforce CEO Marc Benioff celebrated this advancement, proclaiming, “On-device agentic AI is here!”

Communicating With Machines

For as long as machines have existed, humans have struggled to communicate effectively with them. The rise of large language models (LLMs) has transformed this dynamic, making “prompting” the bridge between our intentions and AI’s actions. By providing pre-trained models with clear instructions and context, we can ensure they understand and respond correctly. As UX practitioners, we now play a key role in facilitating this interaction, helping humans and machines truly connect. The UX discipline was born alongside graphical user interfaces (GUIs), offering a way for the average person to interact with computers without needing to write code. We introduced familiar concepts like desktops, trash cans, and save icons to align with users’ mental models, while complex code ran behind the scenes. Now, with the power of AI and the transformer architecture, a new form of interaction has emerged—natural language communication. This shift has changed the design landscape, moving us from pure graphical interfaces to an era where text-based interactions dominate. As designers, we must reconsider where our focus should lie in this evolving environment.

A Mental Shift

In the era of command-based design, we focused on breaking down complex user problems, mapping out customer journeys, and creating deterministic flows. Now, with AI at the forefront, our challenge is to provide models with the right context for optimal output and refine the responses through iteration.

Shifting Complexity to the Edges

Successful communication, whether with a person or a machine, hinges on context. Just as you would clearly explain your needs to a salesperson to get the right product, AI models also need clear instructions. Expecting users to input all the necessary information in their prompts won’t lead to widespread adoption of these models. Here, UX practitioners play a critical role.
We can design user experiences that integrate context—some visible to users, others hidden—shaping how AI interacts with them. This ensures that users can seamlessly communicate with machines without the burden of detailed, manual prompts.

The Craft of Prompting

As designers, our role in crafting prompts falls into three main areas: Even if your team isn’t building custom models, there’s still plenty of work to be done. You can help select pre-trained models that align with user goals and design a seamless experience around them.

Understanding the Context Window

A key concept for UX designers to understand is the “context window”—the information a model can process to generate an output. Think of it as the amount of memory the model retains during a conversation. Companies can use this to include hidden prompts, helping guide AI responses to align with brand values and user intent. Context windows are measured in tokens, not time, so even if you return to a conversation weeks later, the model remembers previous interactions, provided they fit within the token limit. With innovations like Gemini’s 2-million-token context window, AI models are moving toward infinite memory, which will bring new design challenges for UX practitioners.

How to Approach Prompting

Prompting is an iterative process where you craft an instruction, test it with the model, and refine it based on the results. Some effective techniques include: Depending on the scenario, you’ll either use direct, simple prompts (for user-facing interactions) or broader, more structured system prompts (for behind-the-scenes guidance).

Get Organized

As prompting becomes more common, teams need a unified approach to avoid conflicting instructions. Proper documentation on system prompting is crucial, especially in larger teams. This helps prevent errors and hallucinations in model responses.
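The ideas above, a hidden system prompt that always rides along and a token budget that decides which conversation turns the model still "sees", can be sketched as follows. Token counting here is a crude whitespace split standing in for a real tokenizer, and all names are illustrative.

```python
# Sketch of a hidden system prompt plus a token budget shaping the context.
# Whitespace splitting is a crude stand-in for a real tokenizer.

SYSTEM_PROMPT = "You are a helpful assistant for Acme Corp. Stay on brand."

def count_tokens(text: str) -> int:
    return len(text.split())  # crude approximation of tokenization

def build_context(history: list[str], user_msg: str, budget: int = 50) -> list[str]:
    """Keep the system prompt and newest turns; drop oldest turns that overflow."""
    context = [SYSTEM_PROMPT, user_msg]
    for turn in reversed(history):                    # walk newest-first
        used = sum(count_tokens(t) for t in context)
        if used + count_tokens(turn) > budget:
            break                                     # budget exhausted: forget older turns
        context.insert(1, turn)                       # keeps chronological order
    return context

history = ["old turn " * 10, "recent turn about pricing"]
ctx = build_context(history, "What did we say about pricing?")
print(SYSTEM_PROMPT in ctx)  # True: the hidden prompt always rides along
```

This is why the article notes that context windows are measured in tokens, not time: a week-old turn survives as long as it still fits the budget, and the hidden system prompt is never evicted.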
Prompt experimentation may reveal limitations in AI models, and there are several ways to address these:

Looking Ahead

The UX landscape is evolving rapidly. Many organizations, particularly smaller ones, have yet to realize the importance of UX in AI prompting. Others may not allocate enough resources, underestimating the complexity and importance of UX in shaping AI interactions. As John Culkin said, “We shape our tools, and thereafter, our tools shape us.” The responsibility of integrating UX into AI development goes beyond just individual organizations—it’s shaping the future of human-computer interaction. This is a pivotal moment for UX, and how we adapt will define the next generation of design. Content updated October 2024.


Just When You Thought We Were GPT’d Out, Here Comes Slack and Generative AI

Since its public introduction in 2014, Slack has transformed from its original concept, a searchable log of all conversation and knowledge, into a comprehensive productivity platform that has reshaped how work and co-working are conducted. Get ready! Here comes Slack and Generative AI! In a recent release, Salesforce’s Slack unveiled a next-generation platform designed to facilitate seamless automation and integration for users of all technical levels, regardless of coding proficiency. This platform simplifies the utilization of data within Slack, offering enhanced automation and intelligence, allowing for the creation of no-code workflows, custom integrations, and the incorporation of generative AI. Steve Wood, Slack’s SVP of Product and Platform, highlights the significance of placing automation and generative AI tools directly into users’ hands as a pivotal step in Slack’s journey to redefine not only how people work but also how machines and humans interact in the future. Wood delves into the unique features of the new Slack platform, emphasizing its modular architecture grounded in building blocks like functions, triggers, and workflows. These components are remixable, reusable, and seamlessly integrate with the data flow within Slack. The platform enables developers to create tailored solutions, such as integrating with Salesforce, fostering more efficient collaboration, and automating workflows across various business functions. The introduction of generative AI, like Slack GPT, further enhances the platform’s capabilities. Slack GPT can use Einstein GPT to gain actionable data from Salesforce Customer 360 and Data Cloud. Wood underscores the potential of this combination to revolutionize work interactions by simplifying automation into reusable building blocks, accessible to both humans and machines.
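The building-block model Wood describes, functions composed into workflows and started by triggers, can be illustrated abstractly. This is not the real Slack platform SDK (which has its own definitions and tooling); it is a hypothetical Python sketch of how remixable blocks compose, with all names invented for illustration.

```python
# Hypothetical illustration of the functions/triggers/workflows model.
# This is NOT the real Slack SDK; names and shapes are invented.

def summarize(message: str) -> str:            # a "function": one reusable step
    return message[:20] + "..."

def post_to_channel(channel: str, text: str) -> str:
    return f"[{channel}] {text}"

def new_lead_workflow(message: str) -> str:    # a "workflow": functions chained together
    return post_to_channel("#sales", summarize(message))

# A "trigger" maps an event to the workflow it starts.
TRIGGERS = {"new_lead": new_lead_workflow}

result = TRIGGERS["new_lead"]("A long customer message about a new opportunity")
print(result)
```

Because each function is independent of the workflow that uses it, the same `summarize` step could be reused by a different trigger, which is the "remixable, reusable" property the article highlights.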
He emphasizes the transformative power of pairing data with AI and automation, anticipating a significant shift in how technology is leveraged in the workplace.

Slack and GPT

Wood also explains the recent Slack GPT news, detailing its native integration into the Slack user experience. Slack GPT brings generative AI directly into the platform, allowing users to summarize conversations, catch up on missed messages, and edit content effortlessly. The integration of Einstein GPT into Slack expands the conversational interface to Customer 360, providing real-time customer insights directly in Slack. This can be used to automatically generate case summaries based on data from both Service Cloud and Slack. As AI evolves over time, Wood shares his excitement about observing how people utilize Slack GPT in real-world scenarios. The focus remains on empowering platform users through native generative AI and leveraging data and behaviors to enhance the product continuously.

Historical Content

Wood emphasizes the historical context stored within Slack, highlighting the collective past as a valuable resource for future decision-making. Integrating AI technologies into this rich dataset within Slack presents a substantial opportunity for improving workflows and tools. Regarding the integration of Slack with Salesforce Customer 360, Wood stresses the importance of having relevant information easily accessible in one place. Slack serves as the hub where work occurs, and by incorporating generative AI, the platform aims to enhance transparency, alignment, and effectiveness in decision-making. Drawing in and analyzing the data from Slack as well as the other Salesforce platforms provides vital customer information. Reflecting on the rapid adoption of this technology, Wood acknowledges the unique challenges presented by the unknown behavior of generative AI. Stability, accuracy, and safety are top concerns, with ethical and responsible development practices crucial for building trust.
The future, as Wood sees it, hinges on maintaining a commitment to ethical development, ensuring customers feel confident in trusting the transformative capabilities of generative AI in the workplace and the Slack platform.

ChatGPT and Einstein GPT

Artificial intelligence (AI) has been rapidly advancing globally, with breakthroughs captivating professionals across various sectors. One milestone that has gained significant attention is the emergence of ChatGPT, a cutting-edge language model revolutionizing the tech landscape. This development has profoundly impacted businesses relying on Salesforce for their customer relationship management (CRM) needs. In March 2023, Salesforce unveiled its latest AI innovation, Einstein GPT, promising to transform how companies engage with their clientele. In this article, we explore what Salesforce Einstein GPT entails and how it can benefit teams across diverse industries. When OpenAI introduced ChatGPT in November 2022, they didn’t expect the overwhelming response it received. Initially positioned as a “research preview,” this AI chatbot aimed to refine existing technology while soliciting feedback from users. However, ChatGPT quickly became a viral sensation, surpassing OpenAI’s expectations and prompting them to adapt to its newfound popularity. Developed on the foundation of the GPT-3.5 language model, ChatGPT was specifically tailored to facilitate engaging and accessible conversations, distinguishing it from its predecessors. Its launch attracted a diverse user base keen to explore its capabilities, prompting OpenAI to prioritize addressing potential misuse and enhancing its safety features. As ChatGPT gained traction, it caught the attention of Salesforce, a leading CRM provider. In March 2023, Salesforce unveiled Einstein GPT, its own AI innovation, poised to transform customer engagement. Built on the GPT-3 architecture and seamlessly integrated into Salesforce Clouds, Einstein GPT promised to revolutionize how businesses interact with their clientele. Einstein GPT boasts a range of features designed to personalize customer experiences and streamline workflows. 
From generating natural language responses to crafting personalized content and automating tasks, Einstein GPT offers versatility and value across industries. By leveraging both Einstein AI and GPT technology, businesses can unlock unprecedented efficiency and deliver superior customer experiences. Despite its success, OpenAI acknowledges the need for ongoing refinement and vigilance, emphasizing the importance of responsible deployment and transparency in the development of AI technology.

Exploring Einstein GPT

Salesforce presents Einstein GPT as the premier generative AI tool for CRM worldwide. Utilizing the advanced GPT-3 architecture, Einstein GPT seamlessly integrates into all Salesforce Clouds, including Tableau, MuleSoft, and Slack. This groundbreaking technology empowers users to generate natural language responses to customer inquiries, craft personalized content, and compose entire email messages on behalf of sales personnel. With its high degree of customization, Einstein GPT can be finely tuned to meet the specific needs of various industries, use cases, and customer requirements, delivering significant value to businesses of all sizes and sectors.

Objectives of Salesforce AI Einstein GPT

Salesforce AI Einstein GPT is designed to achieve several key objectives:

Distinguishing Einstein GPT from Einstein AI

Einstein GPT represents the latest evolution of Salesforce’s Einstein artificial intelligence technology. Unlike its predecessors, Einstein GPT integrates proprietary Einstein AI models with ChatGPT and other leading large language models. This integration enables users to interact with CRM data using natural language prompts, resulting in highly personalized, AI-generated content and triggering powerful automations that enhance workflows and productivity. By leveraging both Einstein AI and GPT technology, businesses can achieve unparalleled efficiency and deliver exceptional customer experiences.
Features of Einstein GPT in Salesforce CRM

Key features and capabilities of Salesforce Einstein chatbot GPT include:

Utilizing Einstein GPT for Business Improvement

Einstein GPT can be leveraged across various domains to enhance business operations:

Integration with Salesforce Data Cloud

Salesforce Data Cloud, a cloud-based data management system, enables real-time data aggregation from diverse sources. Einstein GPT utilizes unified customer data profiles from the Salesforce Data Cloud to personalize interactions throughout the customer journey.

OpenAI on ChatGPT Methods

We trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as InstructGPT, but with slight differences in the data collection setup. We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides—the user and an AI assistant. We gave the trainers access to model-written suggestions to help them compose their responses. We mixed this new dialogue dataset with the InstructGPT dataset, which we transformed into a dialogue format. To create a reward model for reinforcement learning, we needed to collect comparison data, which consisted of two or more model responses ranked by quality. To collect this data, we took conversations that AI trainers had with the chatbot. We randomly selected a model-written message, sampled several alternative completions, and had AI trainers rank them. Using these reward models, we can fine-tune the model using Proximal Policy Optimization. We performed several iterations of this process. ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022. You can learn more about the 3.5 series here. ChatGPT and GPT-3.5 were trained on an Azure AI supercomputing infrastructure.
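The three stages OpenAI describes above can be sketched as a toy pipeline: supervised fine-tuning on trainer demonstrations, a reward model learned from ranked comparisons, and a policy-optimization step. Everything here is schematic and hypothetical; real RLHF uses neural networks and Proximal Policy Optimization, not the counters below.

```python
# Toy sketch of the RLHF pipeline described above. Schematic only:
# real implementations use neural networks and PPO.

def supervised_finetune(base_score: float, num_demos: int) -> float:
    """Stage 1: imitate trainer-written dialogues (here: just bump a score)."""
    return base_score + 0.1 * num_demos

def reward_model(rankings: list[list[str]]):
    """Stage 2: learn from comparisons, i.e. responses ranked best-to-worst.
    Here a response's reward is how often trainers ranked it first."""
    wins: dict[str, int] = {}
    for ranked in rankings:
        for position, response in enumerate(ranked):
            wins[response] = wins.get(response, 0) + (1 if position == 0 else 0)
    return lambda response: wins.get(response, 0)

def policy_step(candidates: list[str], reward) -> str:
    """Stage 3 (stand-in for PPO): steer the policy toward high-reward outputs."""
    return max(candidates, key=reward)

rankings = [["helpful", "rude"], ["helpful", "evasive"], ["evasive", "rude"]]
reward = reward_model(rankings)
best = policy_step(["rude", "evasive", "helpful"], reward)
print(best)  # "helpful": ranked first most often by the trainers
```

The key design point the passage makes is that trainers never write a numeric reward directly; the reward signal is derived from their rankings, exactly as the comparison step here derives it.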
Limitations

ChatGPT and Einstein GPT

Salesforce Einstein GPT marks a significant advancement in AI technology, empowering businesses to deliver tailored customer experiences and streamline operations. With its integration into Salesforce CRM and other platforms, Einstein GPT offers unprecedented capabilities for personalized engagement and automated insights, ensuring organizations remain competitive in today’s dynamic market landscape. When OpenAI quietly launched ChatGPT in late November 2022, the San Francisco-based AI company didn’t anticipate the viral sensation it would become. Initially viewed as a “research preview,” it was meant to showcase a refined version of existing technology while gathering feedback from the public to address its flaws. However, the overwhelming success of ChatGPT caught OpenAI off guard, leading to a scramble to capitalize on its newfound popularity. ChatGPT, based on the GPT-3.5 language model, was fine-tuned to be more conversational and accessible, setting it apart from previous iterations. Its release marked a significant milestone, attracting millions of users eager to test its capabilities. OpenAI quickly realized the need to address potential misuse and improve the model’s safety features. Since its launch, ChatGPT has undergone several updates, including the implementation of adversarial training to prevent users from exploiting it (known as “jailbreaking”). This technique involves pitting multiple chatbots against each other to identify and neutralize malicious behavior. Additionally,
