
PydanticAI: Empowering LLMs with a Robust Agent Framework

As the Generative AI landscape evolves at a historic pace, AI agents and multi-agent systems are expected to dominate 2025. Industry leaders like AWS, OpenAI, and Microsoft are racing to release agent frameworks, but among these, PydanticAI stands out for its tight integration of the popular Pydantic library with large language models (LLMs).

Why Pydantic Matters

Pydantic, a Python library, simplifies data validation and parsing, making it indispensable for handling external inputs such as JSON payloads, user data, or API responses. By automating data checks (e.g., type validation and format enforcement), Pydantic ensures data integrity while reducing errors and development effort. For instance, instead of manually validating fields like age or email, you define models that enforce structure and constraints automatically. Consider the following example:

```python
from pydantic import BaseModel, EmailStr


class User(BaseModel):
    name: str
    age: int
    email: EmailStr


# Placeholder address; the original e-mail value was obscured in the source.
user_data = {"name": "Alice", "age": 25, "email": "alice@example.com"}
user = User(**user_data)

print(user.name)   # Alice
print(user.age)    # 25
print(user.email)  # alice@example.com
```

If invalid data is provided (e.g., age given as a non-numeric string), Pydantic raises a detailed error, making debugging straightforward.

What Makes PydanticAI Special

Building on Pydantic's strengths, PydanticAI brings structured, type-safe responses to LLM-based AI agents: model outputs are parsed and validated against a schema you define, rather than handled as free-form text.

Building an AI Agent with PydanticAI

Below is an example of creating a PydanticAI-powered bank support agent. The agent interacts with customer data, evaluates risk, and returns structured advice.

Installation

```bash
pip install 'pydantic-ai-slim[openai,vertexai,logfire]'
```

Example: Bank Support Agent

```python
import asyncio
from dataclasses import dataclass

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext

from bank_database import DatabaseConn


@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn


class SupportResult(BaseModel):
    support_advice: str = Field(description="Advice for the customer")
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description="Risk level of the query", ge=0, le=10)


support_agent = Agent(
    'openai:gpt-4o',
    deps_type=SupportDependencies,
    result_type=SupportResult,
    system_prompt=(
        "You are a support agent in our bank. "
        "Provide support to customers and assess risk levels."
    ),
)


@support_agent.system_prompt
async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"


@support_agent.tool
async def customer_balance(
    ctx: RunContext[SupportDependencies], include_pending: bool
) -> float:
    return await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id, include_pending=include_pending
    )


async def main():
    deps = SupportDependencies(customer_id=123, db=DatabaseConn())

    result = await support_agent.run('What is my balance?', deps=deps)
    print(result.data)

    result = await support_agent.run('I just lost my card!', deps=deps)
    print(result.data)


if __name__ == '__main__':
    asyncio.run(main())
```

Why PydanticAI Matters

PydanticAI simplifies the development of production-ready AI agents by bridging the gap between unstructured LLM outputs and structured, validated data.
Its ability to handle complex workflows with type safety and its seamless integration with modern AI tools make it an essential framework for developers. As we move toward a future dominated by multi-agent AI systems, PydanticAI is poised to be a cornerstone in building reliable, scalable, and secure AI-driven applications.
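To make the earlier point about detailed validation errors concrete, here is a minimal sketch of what Pydantic reports when the User model from the first example receives bad input. It assumes Pydantic v2 with the optional email-validator dependency (required by EmailStr) installed; the exact wording of the messages varies by version.

```python
from pydantic import BaseModel, EmailStr, ValidationError


class User(BaseModel):
    name: str
    age: int
    email: EmailStr


try:
    # Both fields below are deliberately invalid: age cannot be coerced to an
    # int, and the e-mail address is malformed.
    User(name="Alice", age="twenty-five", email="not-an-email")
except ValidationError as err:
    # Each entry names the offending field and the constraint it violated.
    for e in err.errors():
        print(e["loc"], "->", e["msg"])
```

This same validation mechanism is what PydanticAI leans on to turn free-form model output into structured, validated results.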

Where LLMs Fall Short

LLM Economies

Throughout history, disruptive technologies have been the catalyst for major social and economic revolutions. The invention of the plow and irrigation systems 12,000 years ago sparked the Agricultural Revolution, while Johannes Gutenberg's 15th-century printing press fueled the Protestant Reformation and helped propel Europe out of the Middle Ages into the Renaissance. In the 18th century, James Watt's steam engine ushered in the Industrial Revolution. More recently, the internet has revolutionized communication, commerce, and information access, shrinking the world into a global village, and smartphones have transformed how people interact with their surroundings.

Now we stand at the dawn of the AI revolution. Large Language Models (LLMs) represent a monumental leap forward, with significant economic implications at both macro and micro levels. These models are reshaping global markets, driving new forms of currency, and creating a novel economic landscape.

The reason LLMs are transforming industries and redefining economies is simple: they automate both routine and complex tasks that traditionally require human intelligence. They enhance decision-making, boost productivity, and reduce costs across sectors, which lets organizations redirect human effort toward more creative and strategic work and toward new products and services. From healthcare to finance to customer service, LLMs are creating new markets and pushing AI-driven services like content generation and conversational assistants into the mainstream.

To truly grasp the engine driving this new global economy, it is essential to understand the inner workings of this disruptive technology. These posts provide both a macro-level overview of the economic forces at play and a deep dive into the technical mechanics of LLMs, equipping you with a comprehensive understanding of the revolution happening now.

Why Now? The Connection Between Language and Human Intelligence

AI did not begin with ChatGPT's arrival in November 2022. Machine learning classification models were already being built in 1999, and the roots of AI go back even further. Artificial Intelligence was formally born in 1950, when Alan Turing, considered the father of theoretical computer science and famed for cracking the Nazi Enigma code during World War II, proposed the first formal definition of machine intelligence. This definition, known as the Turing Test, involves a human evaluator who converses with both a human and a machine; if the evaluator cannot reliably distinguish between the two, the machine is considered to have passed. Remarkably, after 72 years of gradual AI development, ChatGPT delivered exactly this kind of interaction, arguably passing the Turing Test and igniting the current AI explosion.

But why is language so closely tied to human intelligence, rather than, for example, vision? Although a large share of the brain's sensory processing is devoted to vision, OpenAI's pioneering image generation model, DALL-E, did not trigger the same level of excitement as ChatGPT. The answer lies in the profound role language has played in human evolution.

The Evolution of Language

The development of language was the turning point in humanity's rise to dominance on Earth.
As Yuval Noah Harari points out in his book Sapiens: A Brief History of Humankind, it was the ability to gossip and discuss abstract concepts that set humans apart from other species. Complex communication, such as gossip, requires a shared, sophisticated language. Human language evolved from primitive cave signs to structured alphabets, which, together with grammar rules, created languages capable of expressing thousands of words. In today's digital age, language has evolved further with the inclusion of emojis, and now, with the advent of GenAI, tokens have become the latest cornerstone in this progression. These shifts highlight the extraordinary journey of human language, from simple symbols to intricate digital representations.

In the next post, we will explore the inner workings of LLMs, focusing specifically on tokens. But before that, let's examine the economic forces shaping the LLM-driven world.

The Forces Shaping the LLM Economy

AI Giants in Competition

Karl Marx and Friedrich Engels argued that those who control the means of production hold power. Today's tech giants understand that AI is the next means of production, and the race to dominate the LLM market is well underway. The competition is fierce, with industry leaders like OpenAI, Google, Microsoft, and Facebook battling for supremacy, while challengers such as Mistral (France), AI21 (Israel), Elon Musk's xAI, and Anthropic enter the fray. The LLM industry is expanding rapidly, with billions of dollars of investment pouring in; Anthropic alone has raised $4.5 billion from 43 investors, including major players like Amazon, Google, and Microsoft.

The Scarcity of GPUs

Just as Bitcoin mining requires vast computational resources, training LLMs demands immense computing power, driving a search for new energy sources; Microsoft's recent investment in nuclear energy underscores this urgency. At the heart of LLM technology are Graphics Processing Units (GPUs), essential for training and running deep neural networks. These GPUs have become scarce and expensive, adding to the competitive tension.

Tokens: The New Currency of the LLM Economy

Tokens are the currency driving the emerging AI economy. Just as money facilitates transactions in traditional markets, tokens are the foundation of LLM economics. But what exactly are tokens? Tokens are the basic units of text that LLMs process. They can be single characters, parts of words, or entire words; for example, the word "Oscar" might be split into two tokens, "os" and "car." The performance of LLMs (quality, speed, and cost) hinges on how efficiently they generate these tokens.

LLM providers price their services based on token usage, with different rates for input (prompt) and output (completion) tokens. As companies rely more on LLMs, especially for complex tasks like agentic applications, token usage will significantly affect operational costs. With fierce competition and the rise of open-source models like Llama-3.1, the cost of tokens is dropping rapidly; OpenAI, for instance, reduced its GPT-4 pricing by about 80% over the past year and a half. This trend enables companies to expand their portfolios of AI-powered products, further fueling the LLM economy.
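As a back-of-the-envelope illustration of how token-based pricing translates into operating cost, here is a minimal sketch. The per-million-token rates and the call counts below are placeholder assumptions for illustration only; real prices vary by provider and model and change frequently.

```python
# Illustrative token-cost estimate. The rates below are placeholder
# assumptions, not real prices from any provider.
PRICE_PER_MILLION_INPUT_TOKENS = 2.50    # USD, hypothetical
PRICE_PER_MILLION_OUTPUT_TOKENS = 10.00  # USD, hypothetical


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single LLM call."""
    return (
        input_tokens / 1_000_000 * PRICE_PER_MILLION_INPUT_TOKENS
        + output_tokens / 1_000_000 * PRICE_PER_MILLION_OUTPUT_TOKENS
    )


# An agentic workflow may make several model calls per user request.
calls_per_request = 8
cost_per_request = calls_per_request * estimate_cost(input_tokens=3_000, output_tokens=500)

print(f"Estimated cost per request: ${cost_per_request:.4f}")
print(f"Estimated cost for 1M requests: ${cost_per_request * 1_000_000:,.0f}")
```

Output tokens are typically priced higher than input tokens, which is one reason prompt design and output length matter so much for the operating cost of LLM-backed products.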


UX Principles for AI in Healthcare

The Role of UX in AI-Driven Healthcare

AI is poised to reshape the global economy, with predictions that it could contribute $15.7 trillion by 2030, more than the current combined economic output of China and India. Among the industries likely to see the most transformative impact is healthcare. However, during my time at NHS Digital, I saw how systems that weren't designed around existing clinical workflows added unnecessary complexity for clinicians, often leading to manual workarounds and errors caused by fragmented data entry across systems. The risk is that AI, if not designed with user experience (UX) at the forefront, could exacerbate these issues, creating more disruption rather than solving problems.

From diagnostic tools to consumer health apps, the role of UX in AI-driven healthcare is critical to making these innovations effective and user-friendly. This article explores the intersection of UX and AI in healthcare, outlining key UX principles for designing better AI-driven experiences and highlighting trends shaping the future of healthcare.

The Shift in Human-Computer Interaction with AI

AI fundamentally changes how humans interact with computers. Traditionally, users took command by entering inputs, clicking, typing, and adjusting settings until the desired outcome was achieved; the computer followed instructions, while the user remained in control of each step. With AI, this dynamic shifts dramatically: users specify their goal, and the AI determines how to achieve it. For example, rather than manually creating an illustration, a user might instruct an AI to "design a graphic for AI-driven healthcare with simple shapes and bold colors." While this saves time, it introduces challenges around ensuring the results meet user expectations, especially when the process behind AI decisions is opaque.

The Importance of UX in AI for Healthcare

A significant challenge in healthcare AI is the "black box" nature of these systems. Consider a radiologist reviewing a lung X-ray that an AI flagged as normal despite the presence of concerning lesions. Research has shown that commercial AI systems can perform worse than radiologists when multiple health issues are present. When AI decisions are unclear, clinicians may question the system's reliability, especially if they cannot understand the rationale behind a recommendation. This opacity also hinders feedback, making it difficult to improve the system's performance. Addressing this issue is essential work for UX designers.

Bias in AI is another significant issue. Many healthcare AI tools have been documented as biased: systems trained on predominantly male cardiovascular data can fail to detect heart disease in women, and models struggle to identify conditions like melanoma in people with darker skin tones because of insufficient diversity in training datasets. UX can help mitigate these biases by designing interfaces that clearly explain the data used in decisions, highlight missing information, and provide confidence levels for predictions. The movement toward eXplainable AI (XAI) seeks to make AI systems more transparent and interpretable for human users.

UX Principles for AI in Healthcare

To ensure AI is beneficial in real-world healthcare settings, UX designers must prioritize certain principles.
Key UX design principles for AI-enabled healthcare applications include transparency, interpretability, controllability, and human-centered design (illustrated in the sketch at the end of this post).

Applications of AI in Healthcare

AI is already making a significant impact across healthcare applications, from diagnostic imaging tools to consumer health apps. Real-world deployments have demonstrated that while AI can be useful, its effectiveness depends heavily on usability and UX design. By adhering to the principles of transparency, interpretability, controllability, and human-centered AI, designers can help create AI-enabled healthcare applications that are both powerful and user-friendly.
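As a minimal, illustrative sketch of the transparency and confidence-reporting principles above, the structure below shows the kind of information an AI prediction could carry so that a clinician can judge it. All field names are hypothetical and not drawn from any specific product or standard.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ExplainablePrediction:
    """Hypothetical payload an AI-enabled healthcare UI might surface."""
    finding: str           # what the model predicted
    confidence: float      # 0.0-1.0, shown to the clinician rather than hidden
    data_used: List[str]   # inputs the model actually saw
    data_missing: List[str]  # inputs it did not have, to flag gaps
    rationale: str         # short, human-readable explanation


prediction = ExplainablePrediction(
    finding="No abnormality detected",
    confidence=0.72,
    data_used=["chest X-ray (PA view)"],
    data_missing=["prior imaging", "smoking history"],
    rationale="No lesion-like regions exceeded the model's detection threshold.",
)

# A UI built on this structure can show confidence and missing data up front,
# so a low-confidence "normal" is not mistaken for certainty.
if prediction.confidence < 0.8 or prediction.data_missing:
    print("Flag for clinician review:", prediction.finding,
          f"(confidence {prediction.confidence:.0%})")
```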


xAI for Scientific Discovery

xAI: Advancing AI for Scientific Discovery

xAI is dedicated to building artificial intelligence that accelerates human scientific discovery, driven by a mission to advance our understanding of the universe. Led by Elon Musk, CEO of Tesla and SpaceX, the xAI team comprises researchers who have contributed to key advances in AI, including the Adam optimizer, Batch Normalization, Layer Normalization, and the discovery of adversarial examples. The team has also introduced widely used techniques such as Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, μTransfer, and SimCLR, innovations that played crucial roles in systems like AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.

Dan Hendrycks, director of the Center for AI Safety, serves as an advisor to xAI. The company also works closely with X Corp to bring its AI technologies to over 500 million users of the X app.

Timeline of Key Milestones – xAI for Scientific Discovery
