Large Language Model - gettectonic.com - Page 2

AI Data Privacy and Security

Three Key Generative AI Data Privacy and Security Concerns

The rise of generative AI is reshaping the digital landscape, introducing powerful tools like ChatGPT and Microsoft Copilot into the hands of professionals, students, and casual users alike. From creating AI-generated art to summarizing complex texts, generative AI (GenAI) is transforming workflows and sparking innovation. For information security and privacy professionals, however, this rapid proliferation also brings significant challenges in data governance and protection. Below are three critical data privacy and security concerns tied to generative AI.

1. Who Owns the Data?

Data ownership is a contentious issue in the age of generative AI. In the European Union, the General Data Protection Regulation (GDPR) asserts that individuals own their personal data. In contrast, data ownership laws in the United States are less clear-cut, with recent state-level regulations echoing GDPR's principles but failing to resolve the ambiguity. Generative AI often ingests vast amounts of data, much of which may not belong to the person uploading it. This creates legal risks for both users and AI model providers, especially when third-party data is involved. Intellectual property controversies involving Slack, Reddit, and LinkedIn highlight public resistance to having personal data used for AI training. As lawsuits in this arena emerge, prior intellectual property rulings could shape the legal landscape for generative AI.

2. What Data Can Be Derived from LLM Output?

Generative AI models are designed to be helpful, but they can inadvertently expose sensitive or proprietary information submitted during training. This risk has made many organizations wary of uploading critical data into AI models. Techniques like tokenization, anonymization, and pseudonymization can reduce these risks by obscuring sensitive data before it is fed into AI systems. However, these practices may compromise a model's performance by limiting the quality and specificity of the training data. Advocates for GenAI stress that high-quality, accurate data is essential to achieving the best results, which adds to the complexity of balancing privacy with performance.

3. Can the Output Be Trusted?

The phenomenon of "hallucinations" — when generative AI produces incorrect or fabricated information — poses another significant concern. Whether these errors stem from poor training, flawed data, or malicious intent, they raise questions about the reliability of GenAI outputs. The impact of hallucinations varies by context: some errors cause minor inconveniences, while others could have serious or even dangerous consequences, particularly in sensitive domains like healthcare or legal advisory. As generative AI continues to evolve, ensuring the accuracy and integrity of its outputs will remain a top priority.

The Generative AI Data Governance Imperative

Generative AI's transformative power lies in its ability to leverage vast amounts of information. For information security, data privacy, and governance professionals, this means grappling with the three questions raised above. With high stakes and no way to reverse intellectual property violations, the need for robust data governance frameworks is urgent. As society navigates this transformative era, balancing innovation with responsibility will determine whether generative AI becomes a tool for progress or a source of new challenges.
While generative AI heralds a bold future, history reminds us that groundbreaking advancements often come with growing pains. It is the responsibility of stakeholders to anticipate and address these challenges to ensure a safer and more equitable AI-powered world.
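The de-identification techniques mentioned above can be applied before any data reaches a model. Below is a minimal pseudonymization sketch in Python; the salt handling and the email-only scope are simplifying assumptions, and a production pipeline would cover many more identifier types and manage secrets properly.

```python
import hashlib
import re

SALT = "replace-with-a-secret-salt"  # assumption: a real system uses a managed secret

def pseudonymize_emails(text: str) -> str:
    """Replace email addresses with stable pseudonyms before sending text to an LLM."""
    def _token(match: re.Match) -> str:
        digest = hashlib.sha256((SALT + match.group(0)).encode()).hexdigest()[:8]
        return f"<user_{digest}>"  # the same input always maps to the same token
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", _token, text)

print(pseudonymize_emails("Contact jane.doe@example.com about the renewal."))
# Contact <user_...> about the renewal.
```

Because the mapping is deterministic, references to the same person stay consistent across documents, preserving some analytic value while keeping the raw identifier out of the model.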

Read More

Consider AI Agents Personas

Treating AI Agents as Personas: Introducing the Era of Agent-Computer Interaction

The UX landscape is evolving. While the design community has quickly adopted Large Language Models (LLMs) as tools, we've yet to fully grasp their transformative potential. With AI agents now deeply embedded in digital products, they are shifting from tools to active participants in our digital ecosystems. This change demands a new design paradigm—one that views AI agents not just as extensions of human users but as independent personas in their own right.

The Rise of Agent-Computer Interaction

AI agents represent a new class of users capable of navigating interfaces autonomously and completing complex tasks. This marks the dawn of Agent-Computer Interaction (ACI)—a paradigm where user experience design encompasses the needs of both human users and AI agents. Humans still play a critical role in guiding and supervising these systems, but AI agents must now be treated as distinct personas with unique goals, abilities, and requirements. This shift challenges UX designers to consider how these agents interact with interfaces and perform their tasks, ensuring they are equipped with the information and resources necessary to operate effectively.

Understanding AI Agents

AI agents are intelligent systems designed to reason, plan, and work across platforms with minimal human intervention. As defined during Google I/O, these agents retain context, anticipate needs, and execute multi-step processes. Advances in AI, such as Anthropic's Claude and its ability to interact with graphical interfaces, have unlocked new levels of agency. Unlike earlier agents that relied solely on APIs, modern agents can manipulate graphical user interfaces much like human users, enabling seamless interaction with browser-based applications. This capability creates opportunities for new forms of interaction but also demands thoughtful design choices.

Two Interaction Approaches for AI Agents

Agents can interact programmatically, through APIs, or visually, by operating graphical interfaces the way a human would. Design teams must evaluate these two methods based on the task's complexity and transparency requirements, striking the right balance between efficiency and oversight.

Designing Experiences Considering AI Agent Personas

As AI agents transition into active users, UX design must expand to accommodate their specific needs. Much like human personas, AI agents require a deep understanding of their capabilities, limitations, and workflows.

Creating AI Agent Personas

Developing personas for AI agents involves identifying their unique characteristics: their goals, capabilities, limitations, and preferred interaction modes (see the sketch below). These personas inform interface designs that optimize agent workflows, ensuring both agents and humans can collaborate effectively.

New UX Research Methodologies

UX teams should embrace innovative research techniques, such as A/B testing interfaces for agent performance and monitoring agents' interaction patterns. While AI agents lack sentience, they exhibit behaviors—reasoning, planning, and adapting—that require careful study and design consideration.

Shaping the AI Mind

AI agents derive their reasoning capabilities from Large Language Models (LLMs), but their behavior and effectiveness are shaped by UX design. Designers have a unique role to play, crafting the system prompts and feedback loops that refine LLM behavior over time. This work positions UX professionals as co-creators of AI intelligence, shaping not just interfaces but the underlying behaviors that drive agent interactions.
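To make the idea of an agent persona concrete, here is a minimal sketch of what such a record might contain. The fields and the example agent are hypothetical illustrations, not part of any published ACI framework.

```python
from dataclasses import dataclass

@dataclass
class AgentPersona:
    name: str
    goals: list[str]          # tasks the agent is expected to complete
    capabilities: list[str]   # e.g., API calls, GUI manipulation
    limitations: list[str]    # boundaries the design must respect
    interaction_mode: str     # "api" or "gui"
    oversight: str            # how a human supervises this agent

checkout_agent = AgentPersona(
    name="Checkout Assistant",
    goals=["complete a purchase flow on behalf of a user"],
    capabilities=["form filling", "button navigation"],
    limitations=["cannot store payment credentials"],
    interaction_mode="gui",
    oversight="human approves each transaction",
)
print(checkout_agent)
```

Like a human persona, a record such as this gives design teams a shared reference point when deciding what information an interface must expose to the agent.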
Keeping Humans in the Loop

Despite the rise of AI agents, human oversight and control remain essential. UX practitioners must prioritize transparency and trust in agent-driven systems. Using tools like agentic experience maps—blueprints that visualize the interactions between humans, agents, and products—designers can ensure AI systems remain human-centered.

A New Frontier for UX

The emergence of AI agents heralds a shift as significant as the transition from desktop to mobile. Just as mobile devices unlocked new opportunities for interaction, AI agents are poised to redefine digital experiences in ways we can't yet fully predict. By embracing Agent-Computer Interaction, UX designers have an unprecedented opportunity to shape the future of human-AI collaboration. Those who develop expertise in designing for these intelligent agents will lead the way in creating systems that are not only powerful but also deeply human-centered.

Read More

Road for AI Regulation

The concept of artificial intelligence, or synthetic minds capable of thinking and reasoning like humans, has been around for centuries. Ancient cultures imagined artificial beings endowed with human-like intelligence, and in the early 20th century, science fiction brought these notions to modern audiences. Works like The Wizard of Oz and films such as Metropolis resonated globally, laying the groundwork for contemporary discussions of AI.

Read More

AI Agents Interview

In the rapidly evolving world of large language models and generative AI, a new concept is gaining momentum: AI agents. These are advanced tools designed to handle complex tasks that traditionally required human intervention. While they may be confused with robotic process automation (RPA) bots, AI agents are much more sophisticated, leveraging generative AI technology to execute tasks autonomously. Companies like Google are positioning AI agents as virtual assistants that can drive productivity across industries. In this Q&A, Jason Gelman, Director of Product Management for Vertex AI at Google Cloud, shares insights into Google's vision for AI agents and some of the challenges that come with this emerging technology.

How does Google define AI agents?

Jason Gelman: An AI agent is something that acts on your behalf. There are two key components. First, you empower the agent to act on your behalf by providing instructions and granting necessary permissions—like authentication to access systems. Second, the agent must be capable of completing tasks. This is where large language models (LLMs) come in, as they can plan out the steps to accomplish a task. What used to require human planning is now handled by the AI, including gathering information and executing various steps.

What are current use cases where AI agents can thrive?

Gelman: AI agents can be useful across a wide range of industries. Call centers are a common example where customers already expect AI support, and we're seeing demand there. In healthcare, organizations like Mayo Clinic are using AI agents to sift through vast amounts of information, helping professionals navigate data more efficiently. Different industries are exploring this technology in unique ways, and it's gaining traction across many sectors.

What are some misconceptions about AI agents?

Gelman: One major misconception is that the technology is more advanced than it actually is. We're still in the early stages, building critical infrastructure like authentication and function-calling capabilities. Right now, AI agents are more like interns—they can assist, but they're not yet fully autonomous decision-makers. While LLMs appear powerful, we're still some time away from having AI agents that can handle everything independently. Developing the technology and building trust with users are key challenges. I often compare this to driverless cars. While they might be safer than human drivers, we still roll them out cautiously. With AI agents, the risks aren't physical, but we still need transparency, monitoring, and debugging capabilities to ensure they operate effectively.

How can enterprises balance trust in AI agents while acknowledging the technology is still evolving?

Gelman: Start simple and set clear guardrails. Build an AI agent that does one task reliably, then expand from there. Once you've proven the technology's capability, you can layer in additional tasks, eventually creating a network of agents that handle multiple responsibilities. Right now, most organizations are still in the proof-of-concept phase. Some companies are using AI agents for more complex tasks, but for critical areas like financial services or healthcare, humans remain in the loop to oversee decision-making. It will take time before we can fully hand over tasks to AI agents.

What is the difference between Google's AI agents and Microsoft Copilot?
Gelman: Microsoft Copilot is a product designed for business users to assist with personal tasks. Google's approach with AI agents, particularly through Vertex AI, is more focused on API-driven, developer-based solutions that can be integrated into applications. In essence, while Copilot serves as a visible assistant for users, Vertex AI operates behind the scenes, embedded within applications, offering greater flexibility and control for enterprise customers. The real potential of AI agents lies in their ability to execute a wide range of tasks at the API level, without the limitations of a low-code/no-code interface.

Read More

More AI Tools to Use

Additionally, Arc's collaboration with Perplexity elevates browsing by transforming search experiences. Perplexity functions as a personal AI research assistant, fetching and summarizing information along with sources, visuals, and follow-up questions. Premium users even have access to advanced large language models like GPT-4 and Claude. Together, Arc and Perplexity revolutionize how users navigate the web.

Read More

Trends in AI for CRM

Nearly half of customer service teams, over 40% of salespeople, and a third of marketers have fully implemented artificial intelligence (AI) to enhance their work. However, 77% of business leaders report persistent challenges related to trusted data and ethical concerns that could stall their AI initiatives, according to Salesforce research released today.

The Trends in AI for CRM report analyzed data from multiple studies, revealing that companies are worried about missing out on the opportunities generative AI presents if the data powering large language models (LLMs) isn't rooted in their own trusted customer records. At the same time, respondents expressed ongoing concerns about the lack of clear company policies governing the ethical use of AI, as well as the complexity of a vendor landscape in which 80% of enterprises currently use multiple LLMs.

Salesforce's Four Keys to Enterprise AI Success

Why it matters: AI is one of the most transformative technologies in generations, with projections forecasting a net gain of over $2 trillion in new business revenues by 2028 from Salesforce and its network of partners alone. As enterprises across industries develop their AI strategies, leaders in customer-facing departments such as sales, service, and marketing are eager to leverage AI to drive internal efficiencies and revolutionize customer experiences.

Key Findings from the Trends in AI for CRM Report

Expert Perspective

"This is a pivotal moment as business leaders across industries look to AI to unlock growth, efficiency, and customer loyalty," said Clara Shih, CEO of Salesforce AI. "But success requires much more than an LLM. Enterprise deployments need trusted data, user access control, vector search, audit trails and citations, data masking, low-code builders, and seamless UI integration. Salesforce brings all of these components together with our Einstein 1 Platform, Data Cloud, Slack, and dozens of customizable, turnkey prompts and actions offered across our clouds."

Read More

AI Assistants Using LangGraph

In the evolving world of AI, retrieval-augmented generation (RAG) systems have become standard for handling straightforward queries and generating contextually relevant responses. However, as demand grows for more sophisticated AI applications, there is a need for systems that move beyond simple retrieval tasks. Enter AI agents: autonomous entities capable of executing complex, multi-step processes, maintaining state across interactions, and dynamically adapting to new information. LangGraph, a powerful extension of the LangChain library, is designed to help developers build these advanced AI agents, enabling stateful, multi-actor applications with cyclic computation capabilities.

In this insight, we'll explore how LangGraph revolutionizes AI development and provide a step-by-step guide to building your own AI agent using an example that computes energy savings for solar panels. This example demonstrates how LangGraph's unique features enable the creation of intelligent, adaptable, and practical AI systems.

What is LangGraph?

LangGraph is an advanced library built on top of LangChain, designed to extend Large Language Model (LLM) applications by introducing cyclic computational capabilities. While LangChain allows for the creation of Directed Acyclic Graphs (DAGs) for linear workflows, LangGraph enhances this by enabling the addition of cycles, which are essential for developing agent-like behaviors. These cycles allow LLMs to continuously loop through processes, making decisions dynamically based on evolving inputs.

LangGraph: Nodes, States, and Edges

The core of LangGraph lies in its stateful graph structure: nodes represent units of computation (such as an assistant or a tool call), edges define the transitions between nodes, and a shared state object carries messages and other data through the graph. LangGraph redefines AI development by managing the graph structure, state, and coordination, allowing for the creation of sophisticated, multi-actor applications. With automatic state management and precise agent coordination, LangGraph facilitates innovative workflows while minimizing technical complexity. Its flexibility enables the development of high-performance applications, and its scalability ensures robust and reliable systems, even at the enterprise level.

Step-by-step Guide

Now that we understand LangGraph's capabilities, let's dive into a practical example. We'll build an AI agent that calculates potential energy savings for solar panels based on user input. This agent can function as a lead-generation tool on a solar panel seller's website, providing personalized savings estimates based on key data like monthly electricity costs. This example highlights how LangGraph can automate complex tasks and deliver business value.

Step 1: Import Necessary Libraries

We start by importing the essential Python libraries and modules for the project.

```python
from langchain_core.tools import tool
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnableLambda
from langchain_core.messages import ToolMessage
from langchain_aws import ChatBedrock
import boto3
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.graph import StateGraph, START          # used in Step 7
from langgraph.checkpoint.memory import MemorySaver    # used in Step 7
```

Step 2: Define the Tool for Calculating Solar Savings

Next, we define a tool to calculate potential energy savings based on the user's monthly electricity cost.
```python
@tool
def compute_savings(monthly_cost: float) -> dict:
    """
    Tool to compute the potential savings when switching to solar energy
    based on the user's monthly electricity cost.

    Args:
        monthly_cost (float): The user's current monthly electricity cost.

    Returns:
        dict: A dictionary containing:
            - 'number_of_panels': The estimated number of solar panels required.
            - 'installation_cost': The estimated installation cost.
            - 'net_savings_10_years': The net savings over 10 years after installation costs.
    """
    def calculate_solar_savings(monthly_cost):
        # Assumptions used for the estimate
        cost_per_kWh = 0.28
        cost_per_watt = 1.50
        sunlight_hours_per_day = 3.5
        panel_wattage = 350
        system_lifetime_years = 10

        # Size the system from the user's consumption
        monthly_consumption_kWh = monthly_cost / cost_per_kWh
        daily_energy_production = monthly_consumption_kWh / 30
        system_size_kW = daily_energy_production / sunlight_hours_per_day

        # Derive panel count, installation cost, and net savings
        number_of_panels = system_size_kW * 1000 / panel_wattage
        installation_cost = system_size_kW * 1000 * cost_per_watt
        annual_savings = monthly_cost * 12
        total_savings_10_years = annual_savings * system_lifetime_years
        net_savings = total_savings_10_years - installation_cost

        return {
            "number_of_panels": round(number_of_panels),
            "installation_cost": round(installation_cost, 2),
            "net_savings_10_years": round(net_savings, 2),
        }

    return calculate_solar_savings(monthly_cost)
```

Step 3: Set Up State Management and Error Handling

We define utilities to manage state and handle errors during tool execution.

```python
def handle_tool_error(state) -> dict:
    # Surface the tool error back to the model so it can correct itself
    error = state.get("error")
    tool_calls = state["messages"][-1].tool_calls
    return {
        "messages": [
            ToolMessage(
                content=f"Error: {repr(error)}\nPlease fix your mistakes.",
                tool_call_id=tc["id"],
            )
            for tc in tool_calls
        ]
    }

def create_tool_node_with_fallback(tools: list):
    return ToolNode(tools).with_fallbacks(
        [RunnableLambda(handle_tool_error)], exception_key="error"
    )
```

Step 4: Define the State and Assistant Class

We create the state management class and the assistant responsible for interacting with users.

```python
class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

class Assistant:
    def __init__(self, runnable: Runnable):
        self.runnable = runnable

    def __call__(self, state: State):
        while True:
            result = self.runnable.invoke(state)
            # If the LLM returns an empty response with no tool calls, re-prompt it
            if not result.tool_calls and (
                not result.content
                or isinstance(result.content, list)
                and not result.content[0].get("text")
            ):
                messages = state["messages"] + [("user", "Respond with a real output.")]
                state = {**state, "messages": messages}
            else:
                break
        return {"messages": result}
```

Step 5: Set Up the LLM with AWS Bedrock

We configure AWS Bedrock to enable advanced LLM capabilities.

```python
def get_bedrock_client(region):
    return boto3.client("bedrock-runtime", region_name=region)

def create_bedrock_llm(client):
    return ChatBedrock(
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        client=client,
        model_kwargs={"temperature": 0},
        region_name="us-east-1",
    )

llm = create_bedrock_llm(get_bedrock_client(region="us-east-1"))
```

Step 6: Define the Assistant's Workflow

We create a template and bind the tools to the assistant's workflow.

```python
primary_assistant_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """You are a helpful customer support assistant for Solar Panels Belgium.
            Get the following information from the user:
            - monthly electricity cost
            Ask for clarification if necessary.
            """,
        ),
        ("placeholder", "{messages}"),
    ]
)

part_1_tools = [compute_savings]
part_1_assistant_runnable = primary_assistant_prompt | llm.bind_tools(part_1_tools)
```

Step 7: Build the Graph Structure

We define nodes and edges for managing the AI assistant's conversation flow.

```python
builder = StateGraph(State)

# Define nodes: the assistant and its tools
builder.add_node("assistant", Assistant(part_1_assistant_runnable))
builder.add_node("tools", create_tool_node_with_fallback(part_1_tools))

# Define edges: route to tools when the assistant requests them, then back
builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "assistant")

# Persist conversation state between turns
memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
```

Step 8: Running the Assistant

The assistant can now be run through its graph structure to interact with users. The `_print_event` helper below is a minimal stand-in for the helper used in the official LangGraph tutorials.

```python
import uuid

def _print_event(event, printed: set):
    # Minimal helper: print each new message in the event stream exactly once
    message = event.get("messages")
    if message:
        if isinstance(message, list):
            message = message[-1]
        if message.id not in printed:
            message.pretty_print()
            printed.add(message.id)

tutorial_questions = [
    "hey",
    "can you calculate my energy saving",
    "my monthly cost is $100, what will I save",
]

thread_id = str(uuid.uuid4())
config = {"configurable": {"thread_id": thread_id}}

_printed = set()
for question in tutorial_questions:
    events = graph.stream(
        {"messages": ("user", question)}, config, stream_mode="values"
    )
    for event in events:
        _print_event(event, _printed)
```

Conclusion

By following these steps, you can build an AI assistant with LangGraph that calculates solar panel savings from user input. This tutorial demonstrates how LangGraph empowers developers to create intelligent, adaptable systems capable of handling complex tasks efficiently. Whether your application is in customer support, energy management, or another domain, LangGraph provides the flexibility and structure to build it.

Read More

Scaling Generative AI

Many organizations follow a hybrid approach to AI infrastructure, combining public clouds, colocation facilities, and on-prem solutions. Specialized GPU-as-a-service vendors, for instance, are becoming popular for handling high-demand AI computations, helping businesses manage costs without compromising performance. Business process outsourcing company TaskUs, for example, focuses on optimizing compute and data flows as it scales its gen AI deployments, while Cognizant advises that companies distinguish between training and inference needs, each with different latency requirements.

Read More

GenAI Shows No Racial or Sexual Bias

Researchers from Mass General Brigham recently published findings in PAIN indicating that large language models (LLMs) do not exhibit race- or sex-based biases when recommending opioid treatments.

The team highlighted that, while biases are prevalent in many areas of healthcare, they are particularly concerning in pain management. Studies have shown that Black patients' pain is often underestimated and undertreated by clinicians, while white patients are more likely to be prescribed opioids than other racial and ethnic groups. These disparities raise concerns that AI tools, including LLMs, could perpetuate or exacerbate such biases in healthcare.

To investigate how AI tools might either mitigate or reinforce biases, the researchers explored how LLM recommendations varied based on patients' race, ethnicity, and sex. They used 40 real-world patient cases from the MIMIC-IV Note data set, each involving complaints of headache, abdominal, back, or musculoskeletal pain, and stripped the cases of references to sex and race. Random race categories (American Indian or Alaska Native, Asian, Black, Hispanic or Latino, Native Hawaiian or Other Pacific Islander, and white) and sexes (male or female) were then assigned to each case. This process was repeated until all combinations of race and sex were generated, resulting in 480 unique cases. These cases were analyzed using GPT-4 and Gemini, both of which assigned subjective pain ratings and made treatment recommendations.

The analysis found that neither model made opioid treatment recommendations that differed by race or sex. The tools did show some differences: GPT-4 tended to rate pain as "severe" more frequently than Gemini, while Gemini was more likely to recommend opioids. While further validation is necessary, the researchers believe the results indicate that LLMs could help address biases in healthcare.

"These results are reassuring in that patient race, ethnicity, and sex do not affect recommendations, indicating that these LLMs have the potential to help address existing bias in healthcare," said co-first authors Cameron Young and Ellie Einchen, students at Harvard Medical School, in a press release.

However, the study has limitations. It categorized sex as a binary variable, omitting a broader gender spectrum, and it did not fully represent mixed-race individuals, leaving certain marginalized groups underrepresented. The team suggested future research should incorporate these factors and explore how race influences LLM recommendations in other medical specialties.

Marc Succi, MD, strategic innovation leader at Mass General Brigham and corresponding author of the study, emphasized the need for caution in integrating AI into healthcare. "There are many elements to consider, such as the risks of over-prescribing or under-prescribing medications and whether patients will accept AI-influenced treatment plans," Succi said. "Our study adds key data showing how AI has the potential to reduce bias and improve health equity."

Succi also noted the broader implications of AI in clinical decision support, suggesting that AI tools will serve as complementary aids to healthcare professionals. "In the short term, AI algorithms can act as a second set of eyes, running in parallel with medical professionals," he said. "However, the final decision will always remain with the doctor."

These findings offer important insights into the role AI could play in reducing bias and enhancing equity in pain management and healthcare overall.
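To make the study design concrete, the sketch below generates the 480 case variants described above (40 cases x 6 race categories x 2 sexes). The labels come from the article; the case IDs stand in for the de-identified MIMIC-IV notes.

```python
from itertools import product

races = [
    "American Indian or Alaska Native", "Asian", "Black",
    "Hispanic or Latino", "Native Hawaiian or Other Pacific Islander", "white",
]
sexes = ["male", "female"]
case_ids = range(1, 41)  # the 40 de-identified pain cases

# One variant per (case, race, sex) combination
variants = [
    {"case_id": c, "race": r, "sex": s}
    for c, r, s in product(case_ids, races, sexes)
]
print(len(variants))  # 480 unique cases
```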

Read More

AI That Forgets

Salesforce has introduced a generative AI system designed to prioritize data privacy through a unique "forgetting" feature. This innovation allows the AI to process information through large language models (LLMs) without retaining the data, helping companies manage sensitive information more securely.

As part of the latest wave in generative AI, Salesforce's solution takes the form of digital "agents": intelligent systems capable of understanding and responding to customer inquiries autonomously. CEO Marc Benioff has hailed this development as a significant breakthrough for the company, emphasizing its potential to transform customer interactions.

At a recent event, Patrick Stokes, Salesforce's EVP of Products and Industries, highlighted how this system supports organizations by reducing the costs and risks associated with building their own AI models. According to Stokes, many companies lack the resources to develop in-house AI sustainably, and Salesforce's privacy-first approach provides an appealing alternative. Rather than focusing solely on creating the most powerful LLM, Salesforce has built AI agents that connect data and actions securely, addressing privacy concerns that have hindered AI adoption.

Salesforce's approach integrates privacy-focused safeguards, which Stokes describes as a "trust layer" within the AI system. This feature verifies that data retrieved during an AI query aligns with the user's access permissions, protecting sensitive information. Stokes notes that unlike traditional AI models that retain data, Salesforce's LLM processes only the information required for each interaction and then "forgets" it afterward. This zero-retention approach creates a more secure environment, in which companies retain governance over data usage and minimize the risks associated with long-term data storage.

Zahra Bahrololoumi, CEO of Salesforce UK and Ireland, also emphasized that Salesforce's AI solutions give users the confidence to adopt generative AI without compromising security. With over 1,000 AI agents already implemented, companies are benefiting from reduced burnout and increased productivity while maintaining data trust and integrity.

Read More

Google on Google AI

As a leading cloud provider, Google Cloud is also a major player in the generative AI market. In the past two years, Google has been in a competitive battle with AWS, Microsoft, and OpenAI for dominance in the generative AI space. Recently, Google introduced several generative AI products, including its flagship large language model, Gemini, and the Vertex AI Model Garden. Last week, it also unveiled Audio Overview, a tool that transforms documents into audio discussions.

Despite these advancements, Google has faced criticism for lagging in some areas, such as issues with its initial image generation tool, similar to those seen with X's Grok. However, the company remains committed to driving progress in generative AI. Google's strategy focuses not only on delivering its proprietary models but also on offering a broad selection of third-party models through its Model Garden.

Google's Thoughts on Google AI

Warren Barkley, head of product for Google Cloud's Vertex AI, GenAI, and machine learning, emphasized this approach in a recent episode of the Targeting AI podcast. He noted that a key part of Google's ongoing effort is ensuring users can easily transition to more advanced models.

"A lot of what we did in the early days, and we continue to do now, is make it easy for people to move to the next generation," Barkley said. "The models we built 18 months ago are a shadow of what we have today. So, providing pathways for people to upgrade and stay on the cutting edge is critical."

Google is also focused on helping users select the right AI models for specific applications. With over 100 closed and open models available in the Model Garden, evaluating them can be challenging for customers. To address this, Google introduced evaluation tools that allow users to test prompts and compare model responses. In addition, Google is exploring advancements in AI reasoning, which it views as crucial to driving the future of generative AI.

Read More

Fully Formatted Facts

A recent discovery by programmer and inventor Michael Calvin Wood is addressing a persistent challenge in AI: hallucinations. These false or misleading outputs, long considered an inherent flaw in large language models (LLMs), have posed a significant issue for developers. However, Wood's breakthrough challenges this assumption, offering a solution that could transform how AI-powered applications are built and used.

The Importance of Wood's Discovery for Developers

Wood's findings have substantial implications for developers working with AI. By eliminating hallucinations, developers can ensure that AI-generated content is accurate and reliable, particularly in applications where precision is critical.

Understanding the Root Cause of Hallucinations

Contrary to popular belief, hallucinations are not primarily caused by insufficient training data or biased algorithms. Wood's research reveals that the issue stems from how LLMs process and generate information based on "noun-phrase routes." LLMs organize information around noun phrases, and when they encounter semantically similar phrases, they may conflate or misinterpret them, leading to incorrect outputs.

The Noun-Phrase Dominance Model

Wood's research led to the development of the Noun-Phrase Dominance Model, which posits that neural networks in LLMs self-organize around noun phrases. This model is key to understanding and eliminating hallucinations by addressing how AI processes noun-phrase conflicts.

Fully-Formatted Facts (FFF): A Solution

Wood's solution involves transforming input data into Fully-Formatted Facts (FFF): statements that are literally true, devoid of noun-phrase conflicts, and structured as simple, complete sentences. Presenting information in this format has led to significant improvements in AI accuracy, particularly in question-answering tasks.

How FFF Processing Works

While Wood has not provided a step-by-step guide to FFF processing, he hints that the process began with named-entity recognition using the Python spaCy library and evolved into using an LLM to reduce ambiguity while retaining the original writing style. His company's REST API offers a wrapper around GPT-4o and GPT-4o-mini models, transforming input text to remove ambiguity before processing it.

Current Methods vs. Wood's Approach

Current approaches, like Retrieval Augmented Generation (RAG), attempt to reduce hallucinations by adding more context. However, these methods often introduce additional noun-phrase conflicts. For instance, even with RAG, ChatGPT-3.5 Turbo exhibited a 23% hallucination rate when answering questions about Wikipedia articles. In contrast, Wood's method focuses on eliminating noun-phrase conflicts entirely.

Results: RAG FF (Retrieval Augmented Generation with Formatted Facts)

Wood's method has shown remarkable results, eliminating hallucinations in GPT-4 and GPT-3.5 Turbo during question-answering tasks using third-party datasets.

Real-World Example: Translation Error Elimination

In a simple translation case, the FFF transformation eliminates hallucinations by removing a potential noun-phrase conflict before the text ever reaches the model.

Implications for the Future of AI

The Noun-Phrase Dominance Model and the use of Fully-Formatted Facts have far-reaching implications for building reliable AI systems, and Wood and his team plan to expand the approach across additional AI tasks.

Conclusion: A New Era of Reliable AI

Wood's discovery represents a significant leap forward in the pursuit of reliable AI.
By aligning input data with how LLMs process information, he has unlocked the potential for accurate, trustworthy AI systems. As this technology continues to evolve, it could have profound implications for industries ranging from healthcare to legal services, where AI could become a consistent and reliable tool. While there is still work to be done in expanding this method across all AI tasks, the foundation has been laid for a revolution in AI accuracy. Future developments will likely focus on refining and expanding these capabilities, enabling AI to serve as a trusted resource across a range of applications.

Experience RAGFix

For those looking to explore this technology, RAGFix offers an implementation of these groundbreaking concepts. Visit their official website to access demos, explore REST API integration options, and stay updated on the latest advancements in hallucination-free AI: Visit RAGFix.ai
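Wood describes the first step of early FFF processing as named-entity recognition with the Python spaCy library. As a rough illustration of that first step only (the full disambiguation pipeline is proprietary and not published), the sketch below lists the entities and noun chunks that such a pipeline might flag as candidates for ambiguity checks.

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def noun_phrase_report(text: str) -> dict:
    """List named entities and noun chunks, the raw material for conflict detection."""
    doc = nlp(text)
    return {
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
        "noun_chunks": [chunk.text for chunk in doc.noun_chunks],
    }

print(noun_phrase_report("Michael Wood built a REST API that wraps GPT-4o."))
```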

Read More

LLMs and AI

Large Language Models (LLMs): Revolutionizing AI and Custom Solutions

Large Language Models (LLMs) are transforming artificial intelligence by enabling machines to generate and comprehend human-like text, making them indispensable across numerous industries. The global LLM market is experiencing explosive growth, projected to rise from $1.59 billion in 2023 to $259.8 billion by 2030. This surge is driven by the increasing demand for automated content creation, advances in AI technology, and the need for improved human-machine communication. Several factors are propelling this growth, including advancements in AI and Natural Language Processing (NLP), the availability of large datasets, and the rising importance of seamless human-machine interaction. Additionally, private LLMs are gaining traction as businesses seek more control over their data and customization. These private models provide tailored solutions, reduce dependency on third-party providers, and enhance data privacy. This guide will walk you through building your own private LLM, offering valuable insights for both newcomers and seasoned professionals.

What are Large Language Models?

Large Language Models (LLMs) are advanced AI systems that generate human-like text by processing vast amounts of data using sophisticated neural networks, such as transformers. These models excel at tasks such as content creation, language translation, question answering, and conversation, making them valuable across industries, from customer service to data analysis. LLMs are generally grouped into a few broad types, ranging from general-purpose pre-trained models to fine-tuned, domain-specific ones. LLMs learn language rules by analyzing vast text datasets, much as reading numerous books helps someone understand a language. Once trained, these models can generate content, answer questions, and engage in meaningful conversations. For example, an LLM can write a story about a space mission based on knowledge gained from reading space adventure stories, or it can explain photosynthesis using information drawn from biology texts.

Building a Private LLM

Data Curation for LLMs

Recent LLMs, such as Llama 3 and GPT-4, are trained on massive datasets—Llama 3 on 15 trillion tokens and GPT-4 on a reported 6.5 trillion tokens. These datasets are drawn from diverse sources, including social media (140 trillion tokens), academic texts, and private data, with sizes ranging from hundreds of terabytes to multiple petabytes. This breadth of training enables LLMs to develop a deep understanding of language, covering diverse patterns, vocabularies, and contexts.

Data Preprocessing

After data collection, the data must be cleaned and structured. Key steps include deduplication, filtering out low-quality or harmful text, normalization, and tokenization (a toy sketch appears at the end of this insight).

LLM Training Loop

Key training stages include self-supervised pretraining on the curated corpus, supervised fine-tuning for target tasks, and alignment tuning to match human preferences.

Evaluating Your LLM

After training, it is crucial to assess the LLM's performance using industry-standard benchmarks such as MMLU, HellaSwag, and TruthfulQA. When fine-tuning LLMs for specific applications, tailor your evaluation metrics to the task. For instance, in healthcare, matching disease descriptions with appropriate codes may be a top priority.

Conclusion

Building a private LLM provides unmatched customization, enhanced data privacy, and optimized performance. From data curation to model evaluation, this guide has outlined the essential steps to create an LLM tailored to your specific needs. Whether you're just starting or seeking to refine your skills, building a private LLM can empower your organization with state-of-the-art AI capabilities. For expert guidance or to kickstart your LLM journey, feel free to contact us for a free consultation.
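As a toy illustration of the preprocessing stage described above, the sketch below normalizes, filters, and deduplicates a small corpus. Real pipelines operate at terabyte scale with far more aggressive quality and toxicity filtering; the thresholds here are arbitrary choices for the example.

```python
import re

def preprocess_corpus(documents: list[str]) -> list[str]:
    """Minimal cleaning pass: normalize whitespace, drop fragments, deduplicate."""
    seen = set()
    cleaned = []
    for doc in documents:
        text = re.sub(r"\s+", " ", doc).strip()  # collapse whitespace
        if len(text.split()) < 5:                # drop very short fragments
            continue
        key = text.lower()
        if key in seen:                          # exact-duplicate filter
            continue
        seen.add(key)
        cleaned.append(text)
    return cleaned

corpus = [
    "Large language models learn from  vast   text corpora.",
    "large language models learn from vast text corpora.",
    "Too short.",
]
print(preprocess_corpus(corpus))  # only one document survives
```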

Read More

Generative AI Energy Consumption Rises

Generative AI Energy Consumption Rises, but Impact on ROI Unclear

The energy costs associated with generative AI (GenAI) are often overlooked in enterprise financial planning. However, industry experts suggest that IT leaders should account for the power consumption that comes with adopting this technology. When building a business case for generative AI, some costs are evident, like large language model (LLM) fees and SaaS subscriptions. Other costs, such as preparing data, upgrading cloud infrastructure, and managing organizational changes, are less visible but significant.

One often overlooked cost is the energy consumption of generative AI. Training LLMs and responding to user requests—whether answering questions or generating images—demands considerable computing power. These tasks generate heat and necessitate sophisticated cooling systems in data centers, which, in turn, consume additional energy. Despite this, most enterprises have not focused on the energy requirements of GenAI. The issue is gaining attention at a broader level, however. The International Energy Agency (IEA) has forecasted that electricity consumption from data centers, AI, and cryptocurrency could double by 2026. By that time, data centers' electricity use could exceed 1,000 terawatt-hours, equivalent to Japan's total electricity consumption. Goldman Sachs also flagged the growing energy demand, attributing it partly to AI; the firm projects that global data center electricity use could more than double by 2030, fueled by AI and other factors.

ROI Implications of Energy Costs

The extent to which rising energy consumption will affect GenAI's return on investment (ROI) remains unclear. For now, the perceived benefits of GenAI seem to outweigh concerns about energy costs. Most businesses have not been directly impacted, as these costs tend to fall on hyperscalers. Google, for instance, reported a 13% increase in greenhouse gas emissions in 2023, largely due to AI-related energy demands in its data centers. Scott Likens, PwC's global chief AI engineering officer, noted that while energy consumption isn't a barrier to adoption, it should still be factored into long-term strategies. "You don't take it for granted. There's a cost somewhere for the enterprise," he said.

Energy Costs: Hidden but Present

Although energy expenses may not appear on an enterprise's invoice, they are still present. Generative AI's energy consumption is tied to both model training and inference—each time a user makes a query, the system expends energy to generate a response. While the energy used for an individual query is minor, the cumulative effect across millions of users adds up. How these costs are passed to customers is somewhat opaque. Licensing fees for enterprise versions of GenAI products likely include energy costs, spread across the user base. According to PwC's Likens, the costs associated with training models are shared among many users, reducing the burden on individual enterprises. On the inference side, GenAI vendors charge for tokens, which correspond to computational power. Although increased token usage signals higher energy consumption, the financial impact on enterprises has so far been minimal, especially as token costs have decreased. Still, the dynamic can resemble buying an EV to save on gas, only to spend hundreds of dollars and lose hours at charging stations.
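The scale of inference energy is easier to grasp with a back-of-envelope calculation. All numbers below are illustrative assumptions, not measurements; published per-query estimates vary widely by model, hardware, and data center efficiency.

```python
# Illustrative assumptions only
wh_per_query = 3.0            # assumed energy per LLM query, in watt-hours
queries_per_day = 10_000_000  # assumed daily query volume for a large service

daily_kwh = wh_per_query * queries_per_day / 1000
annual_gwh = daily_kwh * 365 / 1_000_000

print(f"{daily_kwh:,.0f} kWh/day, about {annual_gwh:,.1f} GWh/year")
# 30,000 kWh/day, about 11.0 GWh/year: tiny per query, large in aggregate
```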
Energy as an Indirect Concern

While energy costs haven't been top-of-mind for GenAI adopters, organizations may address the issue indirectly by focusing on other deployment challenges, such as reducing latency and improving cost efficiency. Newer models, such as OpenAI's GPT-4o mini, are more economical and have helped organizations scale GenAI without prohibitive costs. Organizations may also use smaller, fine-tuned models to decrease latency and energy consumption. By adopting multimodel approaches, enterprises can choose models based on the complexity of a task, optimizing for both speed and energy efficiency.

The Data Center Dilemma

As enterprises consider GenAI's energy demands, data centers face the challenge head-on, investing in more sophisticated cooling systems to handle the heat generated by AI workloads. According to the Dell'Oro Group, the data center physical infrastructure market grew in the second quarter of 2024, signaling the start of the "AI growth cycle" for infrastructure sales, particularly thermal management systems. Liquid cooling, more efficient than air cooling, is gaining traction as a way to manage the heat from high-performance computing. This method is expected to see rapid growth in the coming years as demand for AI workloads continues to increase.

Nuclear Power and AI Energy Demands

To meet AI's growing energy demands, some hyperscalers are exploring nuclear energy for their data centers. AWS, Google, and Microsoft are among the companies exploring this option, with AWS acquiring a nuclear-powered data center campus earlier this year. Nuclear power could help these tech giants keep pace with AI's energy requirements while also meeting sustainability goals, though tying AI's accessibility to a build-out of nuclear plants may cost the technology some fans.

As GenAI continues to evolve, both energy costs and efficiency are likely to play a greater role in decision-making. PwC has already begun including carbon impact as part of its GenAI value framework, which assesses the full scope of generative AI deployments. "The cost of carbon is in there, so we shouldn't ignore it," Likens said.

Read More

AI Agents and Digital Transformation

In the rapidly developing world of technology, Artificial Intelligence (AI) is revolutionizing industries and reshaping how we interact with digital systems. One of the most promising advancements within AI is the development of AI agents. These intelligent entities, often powered by Large Language Models (LLMs), are driving the next wave of digital transformation by enabling automation, personalization, and enhanced decision-making across various sectors.

What is an AI Agent?

An AI agent, or intelligent agent, is a software entity capable of perceiving its environment, reasoning about its actions, and autonomously working toward specific goals. These agents mimic human-like behavior using advanced algorithms, data processing, and machine-learning models to interact with users and complete tasks.

LLMs to AI Agents — An Evolution

The evolution of AI agents is closely tied to the rise of Large Language Models (LLMs). Models like GPT (Generative Pre-trained Transformer) have showcased remarkable abilities to understand and generate human-like text. This development has enabled AI agents to interpret complex language inputs, facilitating advanced interactions with users. LLM-powered agents combine this language understanding with the ability to plan multi-step tasks and invoke external tools.

Multi-Agent Systems (MAS)

A Multi-Agent System (MAS) is a group of autonomous agents working together to achieve shared goals or solve complex problems. MAS applications span robotics, economics, and distributed computing, where agents interact to optimize processes.

AI Agent Architecture and Key Elements

AI agents generally follow a modular architecture comprising perception, memory, reasoning and planning, and action components (a minimal sketch of this cycle appears at the end of this insight). They employ various learning techniques, including supervised, reinforcement, and self-supervised learning, to adapt and improve their performance in dynamic environments.

How Autonomous AI Agents Operate

Autonomous AI agents act independently of human intervention by perceiving their surroundings, reasoning through possible actions, and making decisions autonomously to achieve set goals. Across industries, they are automating tasks, enhancing efficiency, and providing data-driven insights.

The Future of AI Agents

The potential of AI agents is immense, and as AI technology advances, we can expect more sophisticated agents capable of complex reasoning, adaptive learning, and deeper integration into everyday tasks. The future promises a world where AI agents collaborate with humans to drive innovation, enhance efficiency, and unlock new opportunities for growth in the digital age. By partnering with AI development specialists at Tectonic, organizations can access cutting-edge solutions tailored to their needs, positioning themselves to stay ahead in the rapidly evolving AI-driven market.
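To ground the modular architecture described above, here is a minimal perceive-reason-act loop. The module boundaries and the toy logic are illustrative assumptions; a real agent would back the reasoning step with an LLM and the action step with tool or API calls.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)

    def perceive(self, environment: dict) -> dict:
        """Extract the observations relevant to the goal."""
        return {k: v for k, v in environment.items() if k in ("user_request", "context")}

    def reason(self, observation: dict) -> str:
        """Plan the next action; a real agent would call an LLM here."""
        self.memory.append(observation)  # retain context across turns
        return f"handle: {observation.get('user_request', 'idle')}"

    def act(self, action: str) -> str:
        """Execute the planned action, e.g., invoke a tool or an API."""
        return f"executed '{action}' toward goal '{self.goal}'"

agent = Agent(goal="resolve support tickets")
obs = agent.perceive({"user_request": "reset my password", "context": "logged out"})
print(agent.act(agent.reason(obs)))
```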

Read More
gettectonic.com