Gemini - gettectonic.com
Google Prepares AI-Powered Jarvis Agent

Google Prepares AI-Powered Jarvis Agent for Automated Browser Tasks in Chrome

Google is reportedly gearing up to launch "Project Jarvis," an AI-powered browser agent designed to automate tasks directly within the Chrome ecosystem. According to The Information, the tool is expected to roll out in December to select users and will leverage Google's advanced Gemini 2.0 AI model. Jarvis aims to simplify repetitive online tasks, such as organizing information or booking reservations, offering a seamless and efficient digital assistant embedded within Chrome. This initiative reflects Google's broader vision to enhance user experiences by automating web-based routines, making its browser a central hub for task automation.

Anthropic Expands Desktop Automation with Claude 3.5 Sonnet

Anthropic, a key player in the AI landscape, has advanced its Claude 3.5 model with a new "Computer Use" feature, enabling direct interaction with a user's desktop. This update allows Claude to perform tasks such as typing, clicking, and managing multiple applications, making it a powerful tool for automating workflows like data entry, document management, and customer service. Available through APIs and platforms like Amazon Bedrock and Google Cloud's Vertex AI, Claude's new capabilities position it as a versatile solution for businesses seeking desktop-level automation, in contrast to Google Jarvis's browser-specific approach. By interpreting screen elements, Claude's "Computer Use" mode supports broader applications beyond web tasks, offering businesses an edge in efficiency and scalability.

How Google Jarvis Stands Out

Unlike Anthropic's desktop-oriented Claude 3.5 Sonnet, Google Jarvis focuses on automating tasks within Chrome. Jarvis analyzes screenshots of web pages, interprets user commands, and executes actions like clicks or data entry. While still in development, Jarvis's design suggests a future where mundane web-based tasks are seamlessly handled by AI.

Powered by Google's Gemini 2.0 language model, Jarvis is tailored for users who prioritize web-specific functions, creating a user-friendly assistant that requires no external software. This aligns with Google's strategy to deepen integration within its ecosystem, making Chrome a more intuitive and productive environment.

Microsoft's Copilot Agents Lead Business Automation

Microsoft, meanwhile, continues to enhance its Copilot AI agents, particularly within Dynamics 365. These specialized agents are designed to automate industry-specific workflows, from lead qualification in sales to financial data reconciliation. Unlike Google Jarvis or Anthropic's Claude, Microsoft's Copilot agents target enterprise users, embedding automation within business applications like Teams, Outlook, and SharePoint. With tools like Copilot Studio, organizations can customize workflows to meet specific needs, offering a level of flexibility that resonates with enterprise clients. Early adopters, including Vodafone and Cognizant, have reported significant productivity gains through these integrations. Microsoft's efforts position Copilot as a robust partner for day-to-day operations, transforming tasks like analysis, project coordination, and document management into automated, efficient processes.

Competing Visions for AI Agents

As Google, Anthropic, and Microsoft refine their AI strategies, they are carving out distinct niches in the AI agent landscape. These approaches highlight the diverse applications of AI agents, from enhancing individual user experiences to transforming business operations.


Empowering LLMs with a Robust Agent Framework

PydanticAI: Empowering LLMs with a Robust Agent Framework

As the Generative AI landscape evolves at a historic pace, AI agents and multi-agent systems are expected to dominate 2025. Industry leaders like AWS, OpenAI, and Microsoft are racing to release frameworks, but among these, PydanticAI stands out for its unique integration of the powerful Pydantic library with large language models (LLMs).

Why Pydantic Matters

Pydantic, a Python library, simplifies data validation and parsing, making it indispensable for handling external inputs such as JSON, user data, or API responses. By automating data checks (e.g., type validation and format enforcement), Pydantic ensures data integrity while reducing errors and development effort. For instance, instead of manually validating fields like age or email, Pydantic lets you define models that automatically enforce structure and constraints. Consider the following example:

```python
from pydantic import BaseModel, EmailStr

class User(BaseModel):
    name: str
    age: int
    email: EmailStr

user_data = {"name": "Alice", "age": 25, "email": "alice@example.com"}
user = User(**user_data)

print(user.name)   # Alice
print(user.age)    # 25
print(user.email)  # alice@example.com
```

If invalid data is provided (e.g., age as a string), Pydantic raises a detailed error, making debugging straightforward.

What Makes PydanticAI Special

Building on Pydantic's strengths, PydanticAI brings structured, type-safe responses to LLM-based AI agents.

Building an AI Agent with PydanticAI

Below is an example of creating a PydanticAI-powered bank support agent. The agent interacts with customer data, evaluates risks, and provides structured advice.
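The detailed-error behavior mentioned above can be seen directly. The sketch below defines a fresh, minimal model (named `Person` here purely for illustration) and catches Pydantic's `ValidationError` when a field cannot be coerced:

```python
from pydantic import BaseModel, ValidationError

class Person(BaseModel):
    name: str
    age: int

try:
    # "not a number" cannot be coerced to int, so validation fails
    Person(name="Bob", age="not a number")
except ValidationError as exc:
    # Each offending field is reported with its location in the input
    for err in exc.errors():
        print(err["loc"])  # ('age',)
```

Inspecting `exc.errors()` like this is often more useful in production code than letting the exception propagate, since it pinpoints exactly which fields failed and why.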
Installation

```bash
pip install 'pydantic-ai-slim[openai,vertexai,logfire]'
```

Example: Bank Support Agent

```python
from dataclasses import dataclass

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext

from bank_database import DatabaseConn


@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn


class SupportResult(BaseModel):
    support_advice: str = Field(description="Advice for the customer")
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description="Risk level of the query", ge=0, le=10)


support_agent = Agent(
    'openai:gpt-4o',
    deps_type=SupportDependencies,
    result_type=SupportResult,
    system_prompt=(
        "You are a support agent in our bank. "
        "Provide support to customers and assess risk levels."
    ),
)


@support_agent.system_prompt
async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"


@support_agent.tool
async def customer_balance(ctx: RunContext[SupportDependencies], include_pending: bool) -> float:
    return await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id,
        include_pending=include_pending,
    )


async def main():
    deps = SupportDependencies(customer_id=123, db=DatabaseConn())

    result = await support_agent.run('What is my balance?', deps=deps)
    print(result.data)

    result = await support_agent.run('I just lost my card!', deps=deps)
    print(result.data)
```

Key Concepts

Why PydanticAI Matters

PydanticAI simplifies the development of production-ready AI agents by bridging the gap between unstructured LLM outputs and structured, validated data. Its ability to handle complex workflows with type safety and its seamless integration with modern AI tools make it an essential framework for developers.
As we move toward a future dominated by multi-agent AI systems, PydanticAI is poised to be a cornerstone in building reliable, scalable, and secure AI-driven applications.

Google’s Gemini 1.5 Flash-8B


Google's Gemini 1.5 Flash-8B: A Game-Changer in Speed and Affordability

Google's latest AI model, Gemini 1.5 Flash-8B, has taken the spotlight as the company's fastest and most cost-effective offering to date. Building on the foundation of the original Flash model, Flash-8B introduces key upgrades in pricing, speed, and rate limits, signaling Google's intent to dominate the affordable AI model market.

What Sets Gemini 1.5 Flash-8B Apart?

Google has implemented several enhancements to this lightweight model, informed by "developer feedback and testing the limits of what's possible," as highlighted in its announcement. These updates focus on three major areas:

1. Unprecedented Price Reduction

The cost of using Flash-8B has been cut in half compared to its predecessor, making it the most budget-friendly model in its class. This dramatic price drop solidifies Flash-8B as a leading choice for developers seeking an affordable yet reliable AI solution.

2. Enhanced Speed

The Flash-8B model is 40% faster than its closest competitor, GPT-4o, according to data from Artificial Analysis. This improvement underscores Google's focus on speed as a critical feature for developers. Whether working in AI Studio or using the Gemini API, users will notice shorter response times and smoother interactions.

3. Increased Rate Limits

Flash-8B doubles the rate limits of its predecessor, allowing 4,000 requests per minute. This improvement ensures developers and users can handle higher volumes of smaller, faster tasks without bottlenecks, enhancing efficiency in real-time applications.

Accessing Flash-8B

You can start using Flash-8B today through Google AI Studio or via the Gemini API. AI Studio provides a free testing environment, making it a great starting point before transitioning to API integration for larger-scale projects.

Comparing Flash-8B to Other Gemini Models

Flash-8B positions itself as a faster, cheaper alternative to high-performance models like Gemini 1.5 Pro.
While it doesn't outperform the Pro model across all benchmarks, it excels in cost efficiency and speed, making it ideal for tasks requiring rapid processing at scale. In benchmark evaluations, Flash-8B surpasses the base Flash model in four key areas, with only marginal decreases in other metrics. For developers prioritizing speed and affordability, Flash-8B offers a compelling balance between performance and cost.

Why Flash-8B Matters

Gemini 1.5 Flash-8B highlights Google's commitment to providing accessible AI solutions for developers without compromising on quality. With its reduced costs, faster response times, and higher request limits, Flash-8B is poised to redefine expectations for lightweight AI models, catering to a broad spectrum of applications while maintaining an edge in affordability.
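On the client side, a 4,000 requests-per-minute quota is often paced with a token bucket so bursts of small tasks do not trip the limit. The sketch below is illustrative only, not part of any Google SDK; the class name and defaults are invented here, and the actual quota is enforced server-side:

```python
import time

class MinuteRateLimiter:
    """Client-side token bucket pacing calls to a per-minute quota."""

    def __init__(self, requests_per_minute: int = 4000):
        self.capacity = requests_per_minute
        self.tokens = float(requests_per_minute)
        self.fill_rate = requests_per_minute / 60.0  # tokens refilled per second
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        # Refill tokens for the time elapsed since the last call, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A caller would check `try_acquire()` before each API request and back off (or queue the work) when it returns `False`, keeping throughput just under the published limit.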

AI Agents and Digital Transformation

Ready for AI Agents

Brands that can effectively integrate agentic AI into their operations stand to gain a significant competitive edge. But as with any innovation, success will depend on balancing the promise of automation with the complexities of trust, privacy, and user experience.


GenAI Shows No Racial or Sex-Based Bias in Opioid Recommendations

Researchers from Mass General Brigham recently published findings in PAIN indicating that large language models (LLMs) do not exhibit race- or sex-based biases when recommending opioid treatments. The team highlighted that, while biases are prevalent in many areas of healthcare, they are particularly concerning in pain management. Studies have shown that Black patients' pain is often underestimated and undertreated by clinicians, while white patients are more likely to be prescribed opioids than other racial and ethnic groups. These disparities raise concerns that AI tools, including LLMs, could perpetuate or exacerbate such biases in healthcare.

To investigate how AI tools might either mitigate or reinforce biases, the researchers explored how LLM recommendations varied based on patients' race, ethnicity, and sex. They took 40 real-world patient cases from the MIMIC-IV Note data set, each involving complaints of headache, abdominal, back, or musculoskeletal pain, and stripped the cases of references to sex and race. Random race categories (American Indian or Alaska Native, Asian, Black, Hispanic or Latino, Native Hawaiian or Other Pacific Islander, and white) and sex (male or female) were then assigned to each case. This process was repeated until all combinations of race and sex were generated, resulting in 480 unique cases.

These cases were analyzed using GPT-4 and Gemini, both of which assigned subjective pain ratings and made treatment recommendations. The analysis found that neither model made opioid treatment recommendations that differed by race or sex. However, the tools did show some differences: GPT-4 tended to rate pain as "severe" more frequently than Gemini, while Gemini was more likely to recommend opioids. While further validation is necessary, the researchers believe the results indicate that LLMs could help address biases in healthcare.
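The case-expansion arithmetic above (40 de-identified cases crossed with every race and sex category, yielding 480 variants) can be sketched with `itertools.product`. The case identifiers below are placeholders, not the actual MIMIC-IV records:

```python
from itertools import product

races = [
    "American Indian or Alaska Native", "Asian", "Black",
    "Hispanic or Latino", "Native Hawaiian or Other Pacific Islander", "white",
]
sexes = ["male", "female"]

# Placeholder stand-ins for the 40 de-identified MIMIC-IV pain cases
base_cases = [f"case_{i:02d}" for i in range(40)]

# Assign every race/sex combination to every case: 40 * 6 * 2 = 480 variants
expanded = [
    {"case": case, "race": race, "sex": sex}
    for case, race, sex in product(base_cases, races, sexes)
]
print(len(expanded))  # 480
```

Generating the full cross product like this, rather than sampling, is what lets the study compare model outputs across demographic labels while holding the clinical content of each case fixed.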
"These results are reassuring in that patient race, ethnicity, and sex do not affect recommendations, indicating that these LLMs have the potential to help address existing bias in healthcare," said co-first authors Cameron Young and Ellie Einchen, students at Harvard Medical School, in a press release.

However, the study has limitations. It categorized sex as a binary variable, omitting a broader gender spectrum, and it did not fully represent mixed-race individuals, leaving certain marginalized groups underrepresented. The team suggested future research should incorporate these factors and explore how race influences LLM recommendations in other medical specialties.

Marc Succi, MD, strategic innovation leader at Mass General Brigham and corresponding author of the study, emphasized the need for caution in integrating AI into healthcare. "There are many elements to consider, such as the risks of over-prescribing or under-prescribing medications and whether patients will accept AI-influenced treatment plans," Succi said. "Our study adds key data showing how AI has the potential to reduce bias and improve health equity."

Succi also noted the broader implications of AI in clinical decision support, suggesting that AI tools will serve as complementary aids to healthcare professionals. "In the short term, AI algorithms can act as a second set of eyes, running in parallel with medical professionals," he said. "However, the final decision will always remain with the doctor."

These findings offer important insights into the role AI could play in reducing bias and enhancing equity in pain management and healthcare overall.

Google on Google AI


As a leading cloud provider, Google Cloud is also a major player in the generative AI market, and Google on Google AI offers insight into how the company views its own position in it. Over the past two years, Google has been in a competitive battle with AWS, Microsoft, and OpenAI for dominance in the generative AI space. Recently, Google introduced several generative AI products, including its flagship large language model, Gemini, and the Vertex AI Model Garden. Last week, it also unveiled Audio Overview, a tool that transforms documents into audio discussions.

Despite these advancements, Google has faced criticism for lagging in some areas, such as issues with its initial image generation tool, while rivals such as X's Grok add to the competitive pressure. However, the company remains committed to driving progress in generative AI. Google's strategy focuses not only on delivering its proprietary models but also on offering a broad selection of third-party models through its Model Garden.

Google's Thoughts on Google AI

Warren Barkley, head of product for Google Cloud's Vertex AI, GenAI, and machine learning, emphasized this approach in a recent episode of the Targeting AI podcast. He noted that a key part of Google's ongoing effort is ensuring users can easily transition to more advanced models.

"A lot of what we did in the early days, and we continue to do now, is make it easy for people to move to the next generation," Barkley said. "The models we built 18 months ago are a shadow of what we have today. So, providing pathways for people to upgrade and stay on the cutting edge is critical."

Google is also focused on helping users select the right AI models for specific applications. With over 100 closed and open models available in the Model Garden, evaluating them can be challenging for customers. To address this, Google introduced evaluation tools that allow users to test prompts and compare model responses.
In addition, Google is exploring advancements in AI reasoning, which it views as crucial to driving the future of generative AI.

Battle of Copilots


Salesforce is directly challenging Microsoft in the growing battle of AI copilots, which are designed to enhance customer experience (CX) across key business functions like sales and support. In this competitive landscape, Salesforce is taking on not only Microsoft but also major AI rivals such as Google Gemini, OpenAI's GPT, and IBM watsonx. At the heart of this strategy is Salesforce Agentforce, a platform that leverages autonomous decision-making to meet enterprise demands for data and AI abstraction.

Salesforce Dreamforce Highlights

One of the most significant takeaways from last month's Dreamforce conference in San Francisco was the unveiling of autonomous agents, bringing advanced GenAI capabilities to the app development process. CEO Marc Benioff and other Salesforce executives made it clear that Salesforce is positioning itself to compete with Microsoft's Copilot, rebranding and advancing its own AI assistant, previously known as Einstein AI.

Microsoft's stronghold, however, lies in Copilot's seamless integration with widely used products like Teams, Outlook, PowerPoint, and Word. Furthermore, Microsoft has established itself as a developer favorite, especially with GitHub Copilot and the Azure portfolio, which are integral to app modernization in many enterprises.

"Salesforce faces an uphill battle in capturing market share from these established players," says Charlotte Dunlap, Research Director at GlobalData. "Salesforce's best chance lies in highlighting the autonomous capabilities of Agentforce—enabling businesses to automate more processes, moving beyond basic chatbot functions, and delivering a personalized customer experience."

This emphasis on autonomy is vital, given that many enterprises are still grappling with the complexities of emerging GenAI technologies.
Dunlap points out that DevOps teams are struggling to find third-party expertise that understands how GenAI fits within existing IT systems, particularly around security and governance concerns. Salesforce's focus on automation, combined with the integration prowess of MuleSoft, positions it as a key player in making GenAI tools more accessible and intuitive for businesses.

Elevating AI Abstraction and Automation

Salesforce has increasingly focused on the idea of abstracting data and AI, exemplified by its Data Cloud and low-code UI capabilities. Now, with models like the Atlas Reasoning Engine, Salesforce is looking to push beyond traditional AI assistants. These tools are designed to automate complex, previously human-dependent tasks, spanning functions like sales, service, and marketing.

Simplifying the Developer Experience

The true measure of Salesforce's success in its GenAI strategy will emerge in the coming months. The company is well aware that its ability to simplify the developer experience is critical. Enterprises are looking for more than just AI innovation; they want thought leadership that can help secure budget and executive support for AI initiatives. Many companies report ongoing struggles in gaining that internal buy-in, further underscoring the importance of strong, strategic partnerships with technology providers like Salesforce.

In its pursuit to rival Microsoft Copilot, Salesforce's future hinges on how effectively it can build on its track record of simplifying the developer experience while promoting the unique autonomous qualities of Agentforce.

Fivetran's Hybrid Deployment


Fivetran's Hybrid Deployment: A Breakthrough in Data Engineering

In the data engineering world, balancing efficiency with security has long been a challenge. Fivetran aims to shift this dynamic with its Hybrid Deployment solution, designed to seamlessly move data across any environment while maintaining control and flexibility.

The Hybrid Advantage: Flexibility Meets Control

Fivetran's Hybrid Deployment offers a new approach for enterprises, particularly those handling sensitive data or operating in regulated sectors. Often, these businesses struggle to adopt data-driven practices due to security concerns. Hybrid Deployment changes this by enabling the secure movement of data across cloud and on-premises environments, giving businesses full control over their data while maintaining the agility of the cloud.

As George Fraser, Fivetran's CEO, notes, "Businesses no longer have to choose between managed automation and data control. They can now securely move data from all their critical sources—like Salesforce, Workday, Oracle, SAP—into a data warehouse or data lake, while keeping that data under their own control."

How It Works: A Secure, Streamlined Approach

Fivetran's Hybrid Deployment relies on a lightweight local agent to move data securely within a customer's environment, while the Fivetran platform handles management and monitoring. This separation of the control and data planes ensures that sensitive information stays within the customer's secure perimeter. Vinay Kumar Katta, a managing delivery architect at Capgemini, highlights the flexibility this provides, enabling businesses to design pipelines without sacrificing security.

Beyond Security: Additional Benefits

Hybrid Deployment's benefits go beyond security alone. Early adopters are already seeing its value. Troy Fokken, chief architect at phData, praises how it "streamlines data pipeline processes," especially for customers in regulated industries.
AI Agent Architectures: Defining the Future of Autonomous Systems

In the rapidly evolving world of AI, a new framework is emerging: AI agents designed to act autonomously, adapt dynamically, and explore digital environments. These AI agents are built on core architectural principles, bringing the next generation of autonomy to AI-driven tasks.

What Are AI Agents?

AI agents are systems designed to autonomously or semi-autonomously perform tasks, leveraging tools to achieve objectives. For instance, these agents may use APIs, perform web searches, or interact with digital environments. At their core, AI agents use Large Language Models (LLMs) and Foundation Models (FMs) to break down complex tasks, similar to human reasoning.

Large Action Models (LAMs)

Just as LLMs transformed natural language processing, Large Action Models (LAMs) are revolutionizing how AI agents interact with environments. These models excel at function calling: turning natural language into structured, executable actions, enabling AI agents to perform real-world tasks like scheduling or triggering API calls. Salesforce AI Research, for instance, has open-sourced several LAMs designed to facilitate meaningful actions. LAMs bridge the gap between unstructured inputs and structured outputs, making AI agents more effective in complex environments.

Model Orchestration and Small Language Models (SLMs)

Model orchestration complements LAMs by utilizing smaller, specialized models (SLMs) for niche tasks. Instead of relying on resource-heavy models for everything, AI agents can call upon these smaller models for specific functions, such as summarizing data or executing commands, creating a more efficient system. SLMs, combined with techniques like Retrieval-Augmented Generation (RAG), allow smaller models to perform comparably to their larger counterparts on knowledge-intensive tasks.
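The orchestration idea above reduces, at its simplest, to a dispatcher that routes niche tasks to cheap specialist models and falls back to a large general model otherwise. The sketch below is purely illustrative: the handler functions are stand-ins for real model calls, and all names are invented here:

```python
from typing import Callable, Dict

# Stand-ins for real model invocations (API or local inference calls)
def large_general_model(task: str) -> str:
    return f"[large-model answer] {task}"

def small_summarizer(task: str) -> str:
    return f"[summary] {task}"

def small_command_runner(task: str) -> str:
    return f"[command executed] {task}"

# Route niche task types to cheap specialist SLMs; anything else
# falls back to the resource-heavy general model.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "summarize": small_summarizer,
    "command": small_command_runner,
}

def orchestrate(task_type: str, task: str) -> str:
    handler = SPECIALISTS.get(task_type, large_general_model)
    return handler(task)

print(orchestrate("summarize", "Q3 sales report"))  # [summary] Q3 sales report
```

The design choice is that routing happens on cheap metadata (the task type) rather than by invoking a large model first, which is what makes the SLM approach more efficient.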
Vision-Enabled Language Models for Digital Exploration

AI agents are becoming even more capable with vision-enabled language models, which allow them to interact with digital environments. Projects like Apple's Ferret-UI and WebVoyager exemplify this: agents can navigate user interfaces, recognize elements via OCR, and explore websites autonomously.

Function Calling: Structured, Actionable Outputs

A fundamental shift is happening with function calling in AI agents, moving from unstructured text to structured, actionable outputs. This allows AI agents to interact with systems more efficiently, triggering specific actions like booking meetings or executing API calls.

The Role of Tools and Human-in-the-Loop

AI agents rely on tools, whether algorithms, scripts, or humans in the loop, to perform tasks and guide actions. This approach is particularly valuable in high-stakes industries like healthcare and finance, where precision is crucial.

The Future of AI Agents

With the advent of Large Action Models, model orchestration, and function calling, AI agents are becoming powerful problem solvers. These agents are evolving to explore, learn, and act within digital ecosystems, bringing us closer to a future where AI mimics human problem-solving processes. As AI agents become more sophisticated, they will redefine how we approach digital tasks and interactions.
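The function-calling pattern discussed in this article can be sketched in a few lines: the model emits a structured JSON call instead of prose, and the host code parses and dispatches it. The tool names and signatures below are invented for illustration, not any vendor's API:

```python
import json

# Illustrative tools the agent may invoke
def book_meeting(topic: str, when: str) -> str:
    return f"booked '{topic}' at {when}"

def get_weather(city: str) -> str:
    return f"weather for {city}: sunny"

TOOLS = {"book_meeting": book_meeting, "get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a structured function call emitted by the model and run it."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A function-calling model returns JSON like this instead of free text:
print(dispatch('{"name": "book_meeting", "arguments": {"topic": "roadmap", "when": "3pm"}}'))
# booked 'roadmap' at 3pm
```

Because the output is structured, the host can validate the call against a schema before executing it, which is exactly where a human-in-the-loop check slots in for high-stakes actions.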

Tableau Einstein Alliance to Help Partners Drive Success in the Agent Era


Salesforce Unveils Tableau Einstein Alliance to Empower Partners in the AI-Driven Agent Era

Salesforce today announced the launch of the Tableau Einstein Alliance, a new partner community designed to create and deliver AI-driven solutions and analytical agents for Tableau Einstein. Built on the Salesforce platform and integrated with Agentforce, the initiative aims to help partners accelerate success in the emerging AI landscape.

The Tableau Einstein Alliance offers partners a range of exclusive benefits, including early access to Salesforce's product roadmaps, in-house AI experts, marketing support, and co-selling opportunities. Through the Alliance, partners will be able to develop agents, apps, and AI-driven solutions, enabling customers to navigate the autonomous AI revolution and rapidly extract value from their data and AI investments.

The Alliance is set to launch in February 2025 with 25 founding members, including Tectonic, Capgemini, Deloitte, IBM, and Slalom. Solutions developed within the Alliance will be available on both the Salesforce AppExchange and the forthcoming Tableau Marketplace, offering developers a platform to create, share, and monetize analytical assets.

Why It Matters

Partner ecosystems have been crucial in advancing major technological innovations, from cloud computing to software-as-a-service. With the rise of Agentforce, building a dynamic partner community is more critical than ever to drive the next wave of AI and analytics adoption.

Salesforce's Perspective

"Tableau's success is deeply rooted in our partners' commitment to our customers. Now, we're investing in the Tableau Einstein Alliance to cultivate an ecosystem of visionary and innovative partners who will integrate Agentforce into every facet of analytics. The future of data and analytics is here, and our partners are essential to this journey." — Ryan Aytay, CEO, Tableau

Industry Perspectives

"Atrium has championed the vision of unified analytics since Tableau joined the Salesforce ecosystem. We've seen the incredible potential of Data Cloud and Tableau Cloud together, and we're thrilled to help bring Tableau Einstein to market. Its integrated features will offer customers unprecedented productivity." — Chris Heineken, CEO, Atrium

"Tectonic's 'Insight to Action' methodology (i2a) is directly improved by the launch of the Tableau Einstein Alliance. By utilizing automated AI solutions to power data-driven insights, we are able to deliver additional value to our customers." — Dan Grossnickle, Tectonic

"Tableau Einstein represents the next step in Salesforce's data platforms and generative AI products. The value for clients from these data-driven insights is immense. We're excited to help lead the way through the Tableau Einstein Alliance." — Jean-Marc Gaultier, Head of Group Strategic Initiatives and Partnerships, Capgemini

"Deloitte has long benefited from Tableau's capabilities, and we're excited to see how this next iteration will further empower our teams with data to drive growth. Integrating key features into tools like Salesforce and Slack will unlock even greater potential for us." — Moritz Schieder, Tableau Alliance Leader and Director, Deloitte Germany

"IBM is eager to leverage Tableau Einstein to deliver more value to our customers, regardless of where they work. As a strategic Agentforce partner and Salesforce customer, we are excited to be part of the next generation of analytics alongside Salesforce." — Mary Rowe, Global Head of IBM Consulting Salesforce Practice

Tectonic, an insights-to-action company, is excited to be a part of this innovation.

AI-Driven Chatbots in Education


As AI-driven chatbots enter college courses, the potential to offer students 24/7 support is game-changing. However, there's a critical caveat: when we customize chatbots by uploading documents, we don't just add knowledge; we introduce biases. The documents we choose influence chatbot responses, subtly shaping how students interact with course material and, ultimately, how they think. So, how can we ensure that AI chatbots promote critical thinking rather than merely reinforcing our own viewpoints?

How Course Chatbots Differ from Administrative Chatbots

Chatbot teaching assistants have been around for some time in education, but low-cost access to large language models (LLMs) and accessible tools now make it easy for instructors to create customized course chatbots. Unlike chatbots used in administrative settings, which rely on a defined "ground truth" (e.g., policy), educational chatbots often cover nuanced and debated topics. While instructors typically bring specific theories or perspectives to the table, a chatbot trained with tailored content can either reinforce a single view or introduce a range of academic perspectives.

With tools like ChatGPT, Claude, Gemini, or Copilot, instructors can upload specific documents to fine-tune chatbot responses. This customization allows a chatbot to provide nuanced responses, often aligned with course-specific materials. But unlike administrative chatbots that reference well-defined facts, course chatbots carry an ethical responsibility because of the subjective nature of academic content.

Curating Content for Classroom Chatbots

Having a 24/7 teaching assistant can be a powerful resource, and today's tools make it easy to upload course documents and adapt LLMs to specific curricula. Options like OpenAI's GPT Assistant, IBL's AI Mentor, and Druid's Conversational AI allow instructors to shape the knowledge base of course-specific chatbots.
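As a minimal sketch of the idea, curated readings can be folded into a chatbot's system prompt, with each document labeled by the perspective it represents so the model is steered toward presenting competing views rather than one. All names here (Document, build_course_prompt, the reading titles) are hypothetical, not part of any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    perspective: str  # label for the viewpoint this reading represents
    text: str

def build_course_prompt(course: str, docs: list[Document]) -> str:
    """Fold curated readings into a system prompt that asks the model to
    surface competing perspectives instead of asserting a single view."""
    lines = [
        f"You are a teaching assistant for {course}.",
        "Ground your answers in the readings below. When the readings",
        "disagree, present the competing perspectives and ask the student",
        "to evaluate the evidence rather than asserting one view as fact.",
        "",
    ]
    for doc in docs:
        lines.append(f"--- {doc.title} ({doc.perspective}) ---")
        lines.append(doc.text)
    return "\n".join(lines)

docs = [
    Document("Reading A (excerpt)", "mainstream view", "..."),
    Document("Reading B (excerpt)", "critical response", "..."),
]
system_prompt = build_course_prompt("Intro Macroeconomics", docs)
```

Tagging each document with its perspective makes the curation choices visible in one place, which also supports the transparency obligations discussed below.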
However, curating documents goes beyond technical ease: the content chosen affects not only what students learn but also how they think. The documents you select will significantly shape, though not dictate, chatbot responses. Combined with the LLM's base model, the chatbot's instructions, and the conversation context, the curated content influences chatbot output, for better or worse, depending on your instructional goals.

Curating for Critical Thinking vs. Reinforcing Bias

A key educational principle is teaching students "how to think, not what to think." However, some educators may, even inadvertently, lean toward dictating specific viewpoints when curating content. It's critical to recognize the potential for biases that could influence students' engagement with the material. Here are some common biases to be mindful of when curating chatbot content:

While this list isn't exhaustive, it highlights the complexities of curating content for educational chatbots. It's important to recognize that adding data shifts, rather than erases, the inherent biases in an LLM's responses. Few academic disciplines offer a single, undisputed "truth."

Tips for Ethical and Thoughtful Chatbot Curation

Here are some practical tips to help you create an ethically balanced course chatbot:

This approach helps prevent a chatbot from merely reflecting a single perspective, instead guiding students toward a broader understanding of the material.

Ethical Obligations

As educators, our ethical obligations extend to ensuring transparency about curated materials and explaining our selection choices. If some documents represent what you consider "ground truth" (e.g., on climate change), it's still crucial to include alternative views and equip students to evaluate the chatbot's outputs critically.

Equity

Customizing chatbots for educational use is powerful but requires deliberate consideration of potential biases. By curating diverse perspectives, being transparent about choices, and refining chatbot content, instructors can foster critical thinking and more meaningful student engagement.

AI-powered chatbots are interactive tools that can help educational institutions streamline communication and improve the learning experience. They can be used for a variety of purposes, including:

Some examples of AI chatbots in education include:

While AI chatbots can be a strategic move for educational institutions, it's important to balance innovation with the privacy and security of student data.

Communicating With Machines


For as long as machines have existed, humans have struggled to communicate effectively with them. The rise of large language models (LLMs) has transformed this dynamic, making "prompting" the bridge between our intentions and AI's actions. By providing pre-trained models with clear instructions and context, we can ensure they understand and respond correctly. As UX practitioners, we now play a key role in facilitating this interaction, helping humans and machines truly connect.

The UX discipline was born alongside graphical user interfaces (GUIs), offering a way for the average person to interact with computers without needing to write code. We introduced familiar concepts like desktops, trash cans, and save icons to align with users' mental models, while complex code ran behind the scenes.

Now, with the power of AI and the transformer architecture, a new form of interaction has emerged: natural language communication. This shift has changed the design landscape, moving us from pure graphical interfaces to an era where text-based interactions dominate. As designers, we must reconsider where our focus should lie in this evolving environment.

A Mental Shift

In the era of command-based design, we focused on breaking down complex user problems, mapping out customer journeys, and creating deterministic flows. Now, with AI at the forefront, our challenge is to provide models with the right context for optimal output and to refine the responses through iteration.

Shifting Complexity to the Edges

Successful communication, whether with a person or a machine, hinges on context. Just as you would clearly explain your needs to a salesperson to get the right product, AI models also need clear instructions. Expecting users to input all the necessary information in their prompts won't lead to widespread adoption of these models. Here, UX practitioners play a critical role. We can design user experiences that integrate context, some of it visible to users and some hidden, shaping how AI interacts with them. This ensures that users can seamlessly communicate with machines without the burden of detailed, manual prompts.

The Craft of Prompting

As designers, our role in crafting prompts falls into three main areas:

Even if your team isn't building custom models, there's still plenty of work to be done. You can help select pre-trained models that align with user goals and design a seamless experience around them.

Understanding the Context Window

A key concept for UX designers to understand is the "context window": the information a model can process to generate an output. Think of it as the amount of memory the model retains during a conversation. Companies can use this to include hidden prompts, helping guide AI responses to align with brand values and user intent. Context windows are measured in tokens, not time, so even if you return to a conversation weeks later, the model remembers previous interactions, provided they fit within the token limit. With innovations like Gemini's 2-million-token context window, AI models are moving toward effectively infinite memory, which will bring new design challenges for UX practitioners.

How to Approach Prompting

Prompting is an iterative process: you craft an instruction, test it with the model, and refine it based on the results. Some effective techniques include:

Depending on the scenario, you'll use either direct, simple prompts (for user-facing interactions) or broader, more structured system prompts (for behind-the-scenes guidance).

Get Organized

As prompting becomes more common, teams need a unified approach to avoid conflicting instructions. Proper documentation of system prompting is crucial, especially in larger teams. This helps prevent errors and hallucinations in model responses.
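The context-window behavior described above can be sketched in a few lines: the model only "sees" what fits in its token budget, so the oldest turns fall out first while the hidden system prompt is always retained. This is an illustrative sketch, not any vendor's API; whitespace splitting stands in for a real tokenizer, and the budget value is arbitrary.

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace word.
    return len(text.split())

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Drop the oldest non-system turns until the history fits the budget.
    The system prompt (hidden context) is always kept."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs):
        return sum(count_tokens(m["content"]) for m in msgs)

    while rest and total(system + rest) > budget:
        rest.pop(0)  # the oldest turn falls out of the window first
    return system + rest

history = [
    {"role": "system", "content": "You are a brand-safe shopping assistant."},
    {"role": "user", "content": "I need a gift for my sister"},
    {"role": "assistant", "content": "What does she enjoy doing?"},
    {"role": "user", "content": "She likes hiking"},
]
trimmed = trim_history(history, budget=12)
```

Keeping the system prompt pinned while older turns expire is one way hidden context can keep guiding responses even as the visible conversation scrolls out of the window.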
Prompt experimentation may reveal limitations in AI models, and there are several ways to address these:

Looking Ahead

The UX landscape is evolving rapidly. Many organizations, particularly smaller ones, have yet to realize the importance of UX in AI prompting. Others may not allocate enough resources, underestimating the complexity and importance of UX in shaping AI interactions. As John Culkin said, "We shape our tools, and thereafter, our tools shape us." The responsibility of integrating UX into AI development goes beyond individual organizations; it is shaping the future of human-computer interaction. This is a pivotal moment for UX, and how we adapt will define the next generation of design.

Content updated October 2024.
