Parameters Archives - gettectonic.com
Python-Based Reasoning

Python-Based Reasoning

Introducing a Python-Based Reasoning Engine for Deterministic AI As the demand for deterministic systems grows reviving foundational ideas for the age of large language models (LLMs) is here. The Challenge One of the critical issues with modern AI systems is establishing constraints around how they validate and reason about incoming data. As we increasingly rely on stochastic LLMs to process unstructured data, enforcing rules and guardrails becomes vital for ensuring reliability and consistency. The Solution Thus a company has developed a Python-based reasoning and validation framework inspired by Pydantic, designed to empower developers and non-technical domain experts to create sophisticated rule engines. The system is: By transforming Standard Operating Procedures (SOPs) and business guardrails into enforceable code, this symbolic reasoning framework addresses the need for structured, interpretable, and reliable AI systems. Key Features System Architecture The framework includes five core components: Types of Engines Case Studies 1. Validation Engine: Mining Company Compliance A mining company needed to validate employee qualifications against region-specific requirements. The system was configured to check rules such as minimum age and required certifications for specific roles. Input Example:Employee data and validation rules were modeled as JSON: jsonCopy code{ “employees”: [ { “name”: “Sarah”, “age”: 25, “documents”: [{ “type”: “safe_handling_at_work” }] }, { “name”: “John”, “age”: 17, “documents”: [{ “type”: “heavy_lifting” }] } ], “rules”: [ { “type”: “min_age”, “parameters”: { “min_age”: 18 } } ] } Output:Violations, such as “Minimum age must be 18,” were flagged immediately, enabling quick remediation. 2. Reasoning Engine: Solving the River Crossing Puzzle To showcase its capabilities, we modeled the classic river crossing puzzle, where a farmer must transport a wolf, a goat, and a cabbage across a river without leaving incompatible items together. Steps Taken: Enhanced Scenario:Adding a new rule—“Wolf cannot be left with a chicken”—created an unsolvable scenario. By introducing a compensatory rule, “Farmer can carry two items at once,” the system adapted and solved the puzzle with fewer moves. Developer Insights The system supports rapid iteration and debugging. For example, adding rules is as simple as defining Python classes: pythonCopy codeclass GoatCabbageRule(Rule): def evaluate(self, state): return not (state.goat == state.cabbage and state.farmer != state.goat) def get_description(self): return “Goat cannot be left alone with cabbage” Real-World Impact This framework accelerates development by enabling non-technical stakeholders to contribute to rule creation through natural language, with developers approving and implementing these rules. This process reduces development time by up to 5x and adapts seamlessly to varied use cases, from logistics to compliance. 🔔🔔 Follow us on LinkedIn 🔔🔔 Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
Salesforce Einstein Commerce

Salesforce Commerce Cloud Passkeys

Adding Passkeys to Salesforce Commerce Cloud Ensuring secure, convenient user access is a top priority for Salesforce-based applications. Passkeys, a passwordless authentication solution, streamline user sign-up and login processes while enhancing security. By integrating passkeys into Salesforce Commerce Cloud (SFCC), businesses can protect users from password-related threats like phishing and credential theft, leveraging the security of asymmetric encryption behind passkeys. The seamless login experience offered by passkeys boosts user engagement, reduces drop-off rates, and fosters trust, improving overall user satisfaction. Implementing passkeys not only aligns with current security standards but also prepares businesses for the future of intuitive digital interactions and enhanced privacy. DIY Implementation vs. Dedicated Salesforce Commerce Cloud Passkey Solution When deciding how to integrate passkeys into Salesforce Commerce Cloud applications, businesses must weigh the options between a DIY approach and partnering with a dedicated solution provider like OwnID. Implementing passkeys from scratch can be time-consuming and resource-intensive, requiring significant technical effort to ensure compatibility with Salesforce systems and adherence to security and user experience best practices. By choosing a provider like OwnID, companies can implement passkeys in a matter of days rather than months. OwnID offers a ready-to-use, Salesforce-compatible solution that integrates seamlessly, features cutting-edge security, and provides ongoing support. This approach lifts the burden from internal development teams, speeds up deployment, and ensures a high-quality user experience without the need to manage authentication processes or stay on top of compliance updates. For more information, check out our DIY vs. Elite Passkey Implementation Guide. How to Implement the OwnID Solution in Salesforce Commerce Cloud Integrating OwnID’s passwordless login into Salesforce Commerce Cloud (SFCC) is a straightforward process that enhances both security and the user experience. Here’s an overview of the key steps involved: 1. Set Up an API Client in SFCC Begin by creating a new API Client in your SFCC environment. This client is essential for secure communication between SFCC and OwnID. Log into the Salesforce Commerce Cloud Account Manager, add a new API Client, and configure the appropriate roles and authentication methods (e.g., private_key_jwt). This step ensures secure integration between SFCC and OwnID. 2. Create and Configure an OwnID Application In the OwnID Console, set up an application dedicated to your SFCC integration. This application serves as the bridge between OwnID’s passkey system and your Salesforce Commerce Cloud app. Configure settings like API credentials, site URL, and other parameters specific to OwnID. This step connects OwnID’s authentication service to your Salesforce site seamlessly. 3. Install the OwnID Cartridge in SFCC OwnID provides a cartridge designed for SFCC integration. Installing this cartridge adds all necessary components to your SFCC instance, enabling easy interaction between OwnID and Salesforce. After installation, go to Merchant Tools → Site Preferences in SFCC to customize OwnID settings for your environment. This enables you to display the OwnID widget on login and registration pages, creating a smooth, passwordless experience. 4. Embed the OwnID SDK in Your Templates The final step is to embed the OwnID SDK script in your site’s templates (e.g., htmlHead.isml or a global template file). This SDK enables passkey-based login across all relevant pages. By embedding the script, you ensure users have access to passwordless login, enhancing security and user convenience. With these steps, OwnID will be fully integrated into your Salesforce Commerce Cloud application, offering users secure, password-free access. For more detailed instructions and configuration options, visit the OwnID Salesforce Commerce Cloud Documentation. Get Expert Help with Your Salesforce Commerce Cloud Passkey Integration Ready to implement passwordless authentication in your Salesforce Commerce Cloud application? The Tectonic team is here to guide you through every step of the integration process. From initial setup to ongoing optimization, we ensure a smooth and seamless experience for your users. For personalized support and to learn how OwnID’s passkey solution can elevate your SFCC environment, contact our expert team today. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More

Why Build a General-Purpose Agent?

A general-purpose LLM agent serves as an excellent starting point for prototyping use cases and establishing the foundation for a custom agentic architecture tailored to your needs. What is an LLM Agent? An LLM (Large Language Model) agent is a program where execution logic is governed by the underlying model. Unlike approaches such as few-shot prompting or fixed workflows, LLM agents adapt dynamically. They can determine which tools to use (e.g., web search or code execution), how to use them, and iterate based on results. This adaptability enables handling diverse tasks with minimal configuration. Agentic Architectures Explained:Agentic systems range from the reliability of fixed workflows to the flexibility of autonomous agents. For instance: Your architecture choice will depend on the desired balance between reliability and flexibility for your use case. Building a General-Purpose LLM Agent Step 1: Select the Right LLM Choosing the right model is critical for performance. Evaluate based on: Model Recommendations (as of now): For simpler use cases, smaller models running locally can also be effective, but with limited functionality. Step 2: Define the Agent’s Control Logic The system prompt differentiates an LLM agent from a standalone model. This prompt contains rules, instructions, and structures that guide the agent’s behavior. Common Agentic Patterns: Starting with ReAct or Plan-then-Execute patterns is recommended for general-purpose agents. Step 3: Define the Agent’s Core Instructions To optimize the agent’s behavior, clearly define its features and constraints in the system prompt: Example Instructions: Step 4: Define and Optimize Core Tools Tools expand an agent’s capabilities. Common tools include: For each tool, define: Example: Implementing an Arxiv API tool for scientific queries. Step 5: Memory Handling Strategy Since LLMs have limited memory (context window), a strategy is necessary to manage past interactions. Common approaches include: For personalization, long-term memory can store user preferences or critical information. Step 6: Parse the Agent’s Output To make raw LLM outputs actionable, implement a parser to convert outputs into a structured format like JSON. Structured outputs simplify execution and ensure consistency. Step 7: Orchestrate the Agent’s Workflow Define orchestration logic to handle the agent’s next steps after receiving an output: Example Orchestration Code: pythonCopy codedef orchestrator(llm_agent, llm_output, tools, user_query): while True: action = llm_output.get(“action”) if action == “tool_call”: tool_name = llm_output.get(“tool_name”) tool_params = llm_output.get(“tool_params”, {}) if tool_name in tools: try: tool_result = tools[tool_name](**tool_params) llm_output = llm_agent({“tool_output”: tool_result}) except Exception as e: return f”Error executing tool ‘{tool_name}’: {str(e)}” else: return f”Error: Tool ‘{tool_name}’ not found.” elif action == “return_answer”: return llm_output.get(“answer”, “No answer provided.”) else: return “Error: Unrecognized action type from LLM output.” This orchestration ensures seamless interaction between tools, memory, and user queries. When to Consider Multi-Agent Systems A single-agent setup works well for prototyping but may hit limits with complex workflows or extensive toolsets. Multi-agent architectures can: Starting with a single agent helps refine workflows, identify bottlenecks, and scale effectively. By following these steps, you’ll have a versatile system capable of handling diverse use cases, from competitive analysis to automating workflows. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more We Are All Cloud Users My old company and several others are concerned about security, and feel more secure with being able to walk down Read more

Read More
Python-Based Reasoning Engine

Python-Based Reasoning Engine

Introducing a Python-Based Reasoning Engine for Deterministic AI In the age of large language models (LLMs), there’s a growing need for deterministic systems that enforce rules and constraints while reasoning about information. We’ve developed a Python-based reasoning and validation framework that bridges the gap between traditional rule-based logic and modern AI capabilities, inspired by frameworks like Pydantic. This approach is designed for developers and non-technical experts alike, making it easy to build complex rule engines that translate natural language instructions into enforceable code. Our fine-tuned model automates the creation of rules while ensuring human oversight for quality and conflict detection. The result? Faster implementation of rule engines, reduced developer overhead, and flexible extensibility across domains. The Framework at a Glance Our system consists of five core components: To analogize, this framework operates like a game of chess: Our framework supports two primary use cases: Key Features and Benefits Case Studies Validation Engine: Ensuring Compliance A mining company needed to validate employee qualifications based on age, region, and role. Example Data Structure: jsonCopy code{ “employees”: [ { “name”: “Sarah”, “age”: 25, “role”: “Manager”, “documents”: [“safe_handling_at_work”, “heavy_lifting”] }, { “name”: “John”, “age”: 17, “role”: “Laborer”, “documents”: [“heavy_lifting”] } ] } Rules: jsonCopy code{ “rules”: [ { “type”: “min_age”, “parameters”: { “min_age”: 18 } }, { “type”: “dozer_operator”, “parameters”: { “document_type”: “dozer_qualification” } } ] } Outcome:The system flagged violations, such as employees under 18 or missing required qualifications, ensuring compliance with organizational rules. Reasoning Engine: Solving the River Crossing Puzzle The classic river crossing puzzle demonstrates the engine’s reasoning capabilities. Problem Setup:A farmer must ferry a goat, a wolf, and a cabbage across a river, adhering to specific constraints (e.g., the goat cannot be left alone with the cabbage). Steps: Output:The engine generated a solution in 0.0003 seconds, showcasing its efficiency in navigating complex logic. Advanced Features: Dynamic Rule Expansion The system supports real-time rule adjustments. For instance, adding a “wolf cannot be left with a chicken” constraint introduces a conflict. By extending rules (e.g., allowing the farmer to carry two items), the engine dynamically resolves previously unsolvable scenarios. Sample Code Snippet: pythonCopy codeclass CarryingCapacityRule(Rule): def evaluate(self, state): items_moved = sum(1 for item in [‘wolf’, ‘goat’, ‘cabbage’, ‘chicken’] if getattr(state, item) == state.farmer) return items_moved <= 2 def get_description(self): return “Farmer can carry up to two items at a time” Result:The adjusted engine solved the puzzle in three moves, down from seven, while maintaining rule integrity. Collaborative UI for Rule Creation Our user interface empowers domain experts to define rules without writing code. Developers validate these rules, which are then seamlessly integrated into the system. Visual Workflow: Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more We Are All Cloud Users My old company and several others are concerned about security, and feel more secure with being able to walk down Read more

Read More
AI Sales Agents Explained

AI Sales Agents Explained

If you were to ask a sales rep why they chose a job in sales, they’d probably tell you something like, “I love helping to people. I’m ambitious and goal-oriented, and no two days are ever the same.” The reality, however, is that a lot of time in sales isn’t spent selling. Recent data suggests that sales reps dedicate only 28% of their time to actual selling, with the rest swallowed up by administrative tasks and non-revenue-generating work. To ease this burden, sales teams are turning to AI sales agents, enabling them to focus more on building relationships and closing deals. Below, we explore the different types of AI sales agents and how businesses are using them to increase productivity, efficiency, and revenue. What is an AI sales agent? AI sales agents are autonomous applications that analyze and learn from sales and customer data to perform tasks with little or no human intervention. These agents can manage a wide range of activities, from top-of-funnel tasks like nurturing leads via email outreach, answering questions, booking meetings, and generating quotes to more integrated sales support like buyer roleplays and coaching. Unlike simple workflow automation, AI agents are capable of learning, enabling them to improve efficiency and act independently based on data and analysis. They often plug directly into existing CRMs, with pre-built capabilities or customizable configurations for specific business needs. Types of AI sales agents There are two primary types of AI sales agents: The ability to autonomously analyze data, create action plans, and execute them sets modern AI sales agents apart from traditional sales tools and bots. Key features of AI sales agents Benefits of AI sales agents Future trends for AI sales agents In the early days, AI in sales served primarily as a co-pilot — summarizing insights and assisting with tasks like forecasting. It often required significant human input and created siloed data challenges. Today, AI agents autonomously augment human teams, empowering them to focus on high-value tasks like building relationships. In the near future, AI sales agents are expected to handle increasingly complex workflows and multi-step processes across diverse channels. Potential advancements include: These developments promise to unlock new possibilities for efficiency, personalization, and customization in sales teams. AI sales agents pushing teams into a new era According to recent data, sales leaders are focusing on improving sales enablement, targeting new markets, and adopting new tools and technologies to drive growth. Challenges like scaling personalized interactions and hitting quotas are top of mind. AI sales agents directly address these needs, transforming sales organizations by enabling teams to offload repetitive work to autonomous systems while maintaining quality and personalization. Who uses AI sales agents? AI sales agents are used by sales teams to manage tasks such as lead qualification, follow-ups, meeting scheduling, and coaching. By handling repetitive activities, these agents free up reps to focus on relationship-building and closing deals, ultimately driving better outcomes for both teams and customers. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
MOIRAI-MoE

MOIRAI-MoE

MOIRAI-MoE represents a groundbreaking advancement in time series forecasting by introducing a flexible, data-driven approach that addresses the limitations of traditional models. Its sparse mixture of experts architecture achieves token-level specialization, offering significant performance improvements and computational efficiency. By dynamically adapting to the unique characteristics of time series data, MOIRAI-MoE sets a new standard for foundation models, paving the way for future innovations and expanding the potential of zero-shot forecasting across diverse industries.

Read More
dynamic filters in reports

Dynamic Filters in Salesforce Reports

Revolutionizing Salesforce Reports with Winter ’25Have you explored the Dynamic Filters in reports introduced in the Winter ’25 release? Gone are the days of creating separate reports every time you need a slightly different view. With Dynamic Filters, you can modify your report filters on the fly—no more starting from scratch! ✅ Save Hours of time.✅ Get tailored insights instantly.✅ Perfect for those “I need this data, but sliced differently” moments in meetings. This feature supercharges your reports with unmatched flexibility and efficiency. It’s a game-changer for Salesforce teams, leaving many wondering, “Why didn’t we have this sooner?” Understanding Dynamic Reports in Salesforce Dynamic reports allow users to adjust filter criteria in real time while running the report, eliminating the need for fixed filter values. With filters like “current user,” “current month,” or “my opportunities,” these reports adapt based on who is running them or the context, providing more relevant insights. Key Features: How to Create Dynamic Reports in Salesforce Here’s how you can set up a dynamic report step by step: Dynamic Dashboards in Salesforce A dynamic dashboard displays data tailored to the specific user viewing it, unlike standard dashboards, which show static data for a specific user or report owner. Benefits of Dynamic Dashboards: How to Create a Dynamic Dashboard Conclusion Dynamic Filters and Dashboards in Salesforce are powerful tools to streamline reporting and boost efficiency. By eliminating the need for static reports and dashboards, they allow for real-time adjustments and personalized data views, making your analytics more actionable and user-friendly. Want to level up your Salesforce reporting game? Dive deeper into the guides for creating dashboards, advanced filters, and leveraging analytics to maximize your Salesforce potential. Whether you’re an admin or a sales leader, these tools will transform how you approach data insights. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
AI Inference vs. Training

AI Inference vs. Training

AI Inference vs. Training: Key Differences and Tradeoffs AI training and inference are the foundational phases of machine learning, each with distinct objectives and resource demands. Optimizing the balance between the two is crucial for managing costs, scaling models, and ensuring peak performance. Here’s a closer look at their roles, differences, and the tradeoffs involved. Understanding Training and Inference Key Differences Between Training and Inference 1. Compute Costs 2. Resource and Latency Considerations Strategic Tradeoffs Between Training and Inference Key Considerations for Balancing Training and Inference As AI technology evolves, hardware advancements may narrow the gap in resource requirements between training and inference. Nonetheless, the key to effective machine learning systems lies in strategically balancing the demands of both processes to meet specific goals and constraints. Like1 Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more We Are All Cloud Users My old company and several others are concerned about security, and feel more secure with being able to walk down Read more

Read More
Liquid Neural Networks

Liquid Neural Networks

LNNs mark a significant departure from traditional, rigid AI structures, drawing deeply from the adaptable nature of biological neural systems. MIT researchers explored how organisms manage complex decision-making and dynamic responses with minimal neurons, translating these principles into the design of LNNs

Read More
Salesforce AI Research Introduces LaTRO

Salesforce AI Research Introduces LaTRO

Salesforce AI Research Introduces LaTRO: A Breakthrough in Enhancing Reasoning for Large Language Models Large Language Models (LLMs) have revolutionized tasks such as answering questions, generating content, and assisting with workflows. However, they often struggle with advanced reasoning tasks like solving complex math problems, logical deduction, and structured data analysis. Salesforce AI Research has addressed this challenge by introducing LaTent Reasoning Optimization (LaTRO), a groundbreaking framework that enables LLMs to self-improve their reasoning capabilities during training. The Need for Advanced Reasoning in LLMs Reasoning—especially sequential, multi-step reasoning—is essential for tasks that require logical progression and problem-solving. While current models excel at simpler queries, they often fall short in tackling more complex tasks due to a reliance on external feedback mechanisms or runtime optimizations. Enhancing reasoning abilities is therefore critical to unlocking the full potential of LLMs across diverse applications, from advanced mathematics to real-time data analysis. Existing techniques like Chain-of-Thought (CoT) prompting guide models to break problems into smaller steps, while methods such as Tree-of-Thought and Program-of-Thought explore multiple reasoning pathways. Although these techniques improve runtime performance, they don’t fundamentally enhance reasoning during the model’s training phase, limiting the scope of improvement. Salesforce AI Research Introduces LaTRO: A Self-Rewarding Framework LaTRO shifts the paradigm by transforming reasoning into a training-level optimization problem. It introduces a self-rewarding mechanism that allows models to evaluate and refine their reasoning pathways without relying on external feedback or supervised fine-tuning. This intrinsic approach fosters continual improvement and empowers models to solve complex tasks more effectively. How LaTRO Works LaTRO’s methodology centers on sampling reasoning paths from a latent distribution and optimizing these paths using variational techniques. Here’s how it works: This self-rewarding cycle ensures that the model continuously refines its reasoning capabilities during training. Unlike traditional methods, LaTRO’s framework operates autonomously, without the need for external reward models or costly supervised feedback loops. Key Benefits of LaTRO Performance Highlights LaTRO’s effectiveness has been validated across various datasets and models: Applications and Implications LaTRO’s ability to foster logical coherence and structured reasoning has far-reaching applications in fields requiring robust problem-solving: By enabling LLMs to autonomously refine their reasoning processes, LaTRO brings AI closer to achieving human-like cognitive abilities. The Future of AI with LaTRO LaTRO sets a new benchmark in AI research by demonstrating that reasoning can be optimized during training, not just at runtime. This advancement by Salesforce AI Research highlights the potential for self-evolving AI models that can independently improve their problem-solving capabilities. Salesforce AI Research Introduces LaTRO As the field of AI progresses, frameworks like LaTRO pave the way for more autonomous, intelligent systems capable of navigating complex reasoning tasks across industries. LaTRO represents a significant leap forward, moving AI closer to achieving true autonomous reasoning. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
salesforce agentforce ai powered agentic agents

What is an Agentic Sales Agent?

What is a Sales Agent? A sales agent is a key figure in a sales organization, representing the business’s products or services to customers. While the term is often used interchangeably with “sales representative,” it can also refer to independent contractors or reps from partner agencies. In the modern tech landscape, “sales agent” is increasingly used to describe AI-powered, autonomous applications that support sales efforts, such as lead nurturing and sales coaching. Your Limitless Sales Team: From Pipeline to Paycheck Scale effortlessly with Agentforce — your new digital workforce built on the Salesforce Platform. Sales Agents vs. Sales Reps: What’s the Difference? While “sales agents” and “sales reps” are often used interchangeably, some distinctions exist. A “sales agent” may refer to an independent contractor or an employee from a partner agency. However, in today’s technology-driven world, the term often refers to AI-driven sales applications that augment sales teams, reducing manual tasks and enhancing productivity. What Does a Sales Agent Do? A sales agent typically performs tasks traditionally handled by sales representatives or sales development representatives, such as engaging with leads, updating CRM systems, and closing deals. AI sales agents, however, function autonomously, managing tasks like lead nurturing, roleplaying sales conversations, and automating processes such as quoting and billing. These agents rely on self-learning, natural language processing, and deal data to carry out their tasks, allowing human sales teams to focus on building relationships and strategic decision-making. Types of Sales Agents Sales agents come in many forms, both human and AI-powered: Benefits of Human and AI Sales Agents Sales Agent Roles Your Company Should Hire Depending on your needs, there are several roles to consider when building a sales team: Best Practices for Measuring Sales Agent Performance Human and AI sales agents are measured on distinct sets of metrics: How Sales AI and Automation are Impacting the Role of Sales Agents Sales teams face constant challenges in managing leads and closing deals. AI sales agents are transforming this landscape by automating time-consuming tasks, allowing human agents to focus on relationship-building and strategic decision-making. AI tools such as Agentforce can augment human teams by handling administrative tasks, allowing reps to focus on the human-centric aspects of sales. Human and AI Sales Agents Leap into the Future Human agents will always be vital in sales, but AI is rapidly becoming a powerful complement. As AI continues to evolve, human sales teams will work more closely with AI agents to handle more complex workflows, across more channels, in an increasingly seamless manner. The result? Stronger customer relationships, better engagement, improved retention, and increased sales volume. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
Will AI Hinder Digital Transformation in Healthcare?

Poisoning Your Data

Protecting Your IP from AI Training: Poisoning Your Data As more valuable intellectual property (IP) becomes accessible online, concerns over AI vendors scraping content for training models without permission are rising. If you’re worried about AI theft and want to safeguard your assets, it’s time to consider “poisoning” your content—making it difficult or even impossible for AI systems to use it effectively. Key Principle: AI “Sees” Differently Than Humans AI processes data in ways humans don’t. While people view content based on context, AI “sees” data in raw, specific formats that can be manipulated. By subtly altering your content, you can protect it without affecting human users. Image Poisoning: Misleading AI Models For images, you can “poison” them to confuse AI models without impacting human perception. A great example of this is Nightshade, a tool designed to distort images so that they remain recognizable to humans but useless to AI models. This technique ensures your artwork or images can’t be replicated, and applying it across your visual content protects your unique style. For example, if you’re concerned about your images being stolen or reused by generative AI systems, you can embed misleading text into the image itself, which is invisible to human users but interpreted by AI as nonsensical data. This ensures that an AI model trained on your images will be unable to replicate them correctly. Text Poisoning: Adding Complexity for Crawlers Text poisoning requires more finesse, depending on the sophistication of the AI’s web crawler. Simple methods include: Invisible Text One easy method is to hide text within your page using CSS. This invisible content can be placed in sidebars, between paragraphs, or anywhere within your text: cssCopy code.content { color: black; /* Same as the background */ opacity: 0.0; /* Invisible */ display: none; /* Hidden in the DOM */ } By embedding this “poisonous” content directly in the text, AI crawlers might have difficulty distinguishing it from real content. If done correctly, AI models will ingest the irrelevant data as part of your content. JavaScript-Generated Content Another technique is to use JavaScript to dynamically alter the content, making it visible only after the page loads or based on specific conditions. This can frustrate AI crawlers that only read content after the DOM is fully loaded, as they may miss the hidden data. htmlCopy code<script> // Dynamically load content based on URL parameters or other factors </script> This method ensures that AI gets a different version of the page than human users. Honeypots for AI Crawlers Honeypots are pages designed specifically for AI crawlers, containing irrelevant or distorted data. These pages don’t affect human users but can confuse AI models by feeding them inaccurate information. For example, if your website sells cheese, you can create pages that only AI crawlers can access, full of bogus details about your cheese, thus poisoning the AI model with incorrect information. By adding these “honeypot” pages, you can mislead AI models that scrape your data, preventing them from using your IP effectively. Competitive Advantage Through Data Poisoning Data poisoning can also work to your benefit. By feeding AI models biased information about your products or services, you can shape how these models interpret your brand. For example, you could subtly insert favorable competitive comparisons into your content that only AI models can read, helping to position your products in a way that biases future AI-driven decisions. For instance, you might embed positive descriptions of your brand or products in invisible text. AI models would ingest these biases, making it more likely that they favor your brand when generating results. Using Proxies for Data Poisoning Instead of modifying your CMS, consider using a proxy server to inject poisoned data into your content dynamically. This approach allows you to identify and respond to crawlers more easily, adding a layer of protection without needing to overhaul your existing systems. A proxy can insert “poisoned” content based on the type of AI crawler requesting it, ensuring that the AI gets the distorted data without modifying your main website’s user experience. Preparing for AI in a Competitive World With the increasing use of AI for training and decision-making, businesses must think proactively about protecting their IP. In an era where AI vendors may consider all publicly available data fair game, implementing data poisoning should become a standard practice for companies concerned about protecting their content and ensuring it’s represented correctly in AI models. Businesses that take these steps will be better positioned to negotiate with AI vendors if they request data for training and will have a competitive edge if AI systems are used by consumers or businesses to make decisions about their products or services. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more We Are All Cloud Users My old company and several others are concerned about security, and feel more secure with being able to walk down Read more

Read More
LLMs and AI

LLMs and AI

Large Language Models (LLMs): Revolutionizing AI and Custom Solutions Large Language Models (LLMs) are transforming artificial intelligence by enabling machines to generate and comprehend human-like text, making them indispensable across numerous industries. The global LLM market is experiencing explosive growth, projected to rise from $1.59 billion in 2023 to $259.8 billion by 2030. This surge is driven by the increasing demand for automated content creation, advances in AI technology, and the need for improved human-machine communication. Several factors are propelling this growth, including advancements in AI and Natural Language Processing (NLP), large datasets, and the rising importance of seamless human-machine interaction. Additionally, private LLMs are gaining traction as businesses seek more control over their data and customization. These private models provide tailored solutions, reduce dependency on third-party providers, and enhance data privacy. This guide will walk you through building your own private LLM, offering valuable insights for both newcomers and seasoned professionals. What are Large Language Models? Large Language Models (LLMs) are advanced AI systems that generate human-like text by processing vast amounts of data using sophisticated neural networks, such as transformers. These models excel in tasks such as content creation, language translation, question answering, and conversation, making them valuable across industries, from customer service to data analysis. LLMs are generally classified into three types: LLMs learn language rules by analyzing vast text datasets, similar to how reading numerous books helps someone understand a language. Once trained, these models can generate content, answer questions, and engage in meaningful conversations. For example, an LLM can write a story about a space mission based on knowledge gained from reading space adventure stories, or it can explain photosynthesis using information drawn from biology texts. Building a Private LLM Data Curation for LLMs Recent LLMs, such as Llama 3 and GPT-4, are trained on massive datasets—Llama 3 on 15 trillion tokens and GPT-4 on 6.5 trillion tokens. These datasets are drawn from diverse sources, including social media (140 trillion tokens), academic texts, and private data, with sizes ranging from hundreds of terabytes to multiple petabytes. This breadth of training enables LLMs to develop a deep understanding of language, covering diverse patterns, vocabularies, and contexts. Common data sources for LLMs include: Data Preprocessing After data collection, the data must be cleaned and structured. Key steps include: LLM Training Loop Key training stages include: Evaluating Your LLM After training, it is crucial to assess the LLM’s performance using industry-standard benchmarks: When fine-tuning LLMs for specific applications, tailor your evaluation metrics to the task. For instance, in healthcare, matching disease descriptions with appropriate codes may be a top priority. Conclusion Building a private LLM provides unmatched customization, enhanced data privacy, and optimized performance. From data curation to model evaluation, this guide has outlined the essential steps to create an LLM tailored to your specific needs. Whether you’re just starting or seeking to refine your skills, building a private LLM can empower your organization with state-of-the-art AI capabilities. For expert guidance or to kickstart your LLM journey, feel free to contact us for a free consultation. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
Recent advancements in AI

Recent advancements in AI

Recent advancements in AI have been propelled by large language models (LLMs) containing billions to trillions of parameters. Parameters—variables used to train and fine-tune machine learning models—have played a key role in the development of generative AI. As the number of parameters grows, models like ChatGPT can generate human-like content that was unimaginable just a few years ago. Parameters are sometimes referred to as “features” or “feature counts.” While it’s tempting to equate the power of AI models with their parameter count, similar to how we think of horsepower in cars, more parameters aren’t always better. An increase in parameters can lead to additional computational overhead and even problems like overfitting. There are various ways to increase the number of parameters in AI models, but not all approaches yield the same improvements. For example, Google’s Switch Transformers scaled to trillions of parameters, but some of their smaller models outperformed them in certain use cases. Thus, other metrics should be considered when evaluating AI models. The exact relationship between parameter count and intelligence is still debated. John Blankenbaker, principal data scientist at SSA & Company, notes that larger models tend to replicate their training data more accurately, but the belief that more parameters inherently lead to greater intelligence is often wishful thinking. He points out that while these models may sound knowledgeable, they don’t actually possess true understanding. One challenge is the misunderstanding of what a parameter is. It’s not a word, feature, or unit of data but rather a component within the model‘s computation. Each parameter adjusts how the model processes inputs, much like turning a knob in a complex machine. In contrast to parameters in simpler models like linear regression, which have a clear interpretation, parameters in LLMs are opaque and offer no insight on their own. Christine Livingston, managing director at Protiviti, explains that parameters act as weights that allow flexibility in the model. However, more parameters can lead to overfitting, where the model performs well on training data but struggles with new information. Adnan Masood, chief AI architect at UST, highlights that parameters influence precision, accuracy, and data management needs. However, due to the size of LLMs, it’s impractical to focus on individual parameters. Instead, developers assess models based on their intended purpose, performance metrics, and ethical considerations. Understanding the data sources and pre-processing steps becomes critical in evaluating the model’s transparency. It’s important to differentiate between parameters, tokens, and words. A parameter is not a word; rather, it’s a value learned during training. Tokens are fragments of words, and LLMs are trained on these tokens, which are transformed into embeddings used by the model. The number of parameters influences a model’s complexity and capacity to learn. More parameters often lead to better performance, but they also increase computational demands. Larger models can be harder to train and operate, leading to slower response times and higher costs. In some cases, smaller models are preferred for domain-specific tasks because they generalize better and are easier to fine-tune. Transformer-based models like GPT-4 dwarf previous generations in parameter count. However, for edge-based applications where resources are limited, smaller models are preferred as they are more adaptable and efficient. Fine-tuning large models for specific domains remains a challenge, often requiring extensive oversight to avoid problems like overfitting. There is also growing recognition that parameter count alone is not the best way to measure a model’s performance. Alternatives like Stanford’s HELM and benchmarks such as GLUE and SuperGLUE assess models across multiple factors, including fairness, efficiency, and bias. Three trends are shaping how we think about parameters. First, AI developers are improving model performance without necessarily increasing parameters. A study of 231 models between 2012 and 2023 found that the computational power required for LLMs has halved every eight months, outpacing Moore’s Law. Second, new neural network approaches like Kolmogorov-Arnold Networks (KANs) show promise, achieving comparable results to traditional models with far fewer parameters. Lastly, agentic AI frameworks like Salesforce’s Agentforce offer a new architecture where domain-specific AI agents can outperform larger general-purpose models. As AI continues to evolve, it’s clear that while parameter count is an important consideration, it’s just one of many factors in evaluating a model’s overall capabilities. To stay on the cutting edge of artificial intelligence, contact Tectonic today. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
gettectonic.com