Testing Archives - gettectonic.com - Page 5

Exploring Large Action Models

Exploring Large Action Models (LAMs) for Automated Workflow Processes

While large language models (LLMs) are effective at generating text and media, Large Action Models (LAMs) push beyond simple generation—they perform complex tasks autonomously. Imagine an AI that not only generates content but also takes direct actions in workflows, such as managing customer relationship management (CRM) tasks, sending emails, or making real-time decisions. LAMs are engineered to execute tasks across various environments by seamlessly integrating with tools, data, and systems. They adapt to user commands, making them ideal for applications in industries like marketing, customer service, and beyond.

Key Capabilities of LAMs

A standout feature of LAMs is their ability to perform function-calling tasks, such as selecting the appropriate APIs to meet user requirements. Salesforce's xLAM models are designed to optimize these tasks, achieving high performance with lower resource demands—ideal for both mobile applications and high-performance environments. The fc series models are specifically tuned for function-calling, enabling fast, precise, and structured responses by selecting the best APIs based on input queries.

Practical Examples Using Salesforce LAMs

In this article, we'll walk through a practical implementation: setting up the model, defining available functions and Flask APIs, and testing the function-calling workflow.

Implementation: Setting Up the Model and API

Start by installing the necessary libraries:

```python
!pip install transformers==4.41.0 datasets==2.19.1 tokenizers==0.19.1 flask==2.2.5
```

Next, load the xLAM model and tokenizer:

```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/xLAM-7b-fc-r"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Now, define the task instructions and available functions. The model will use function calls where applicable, based on user questions and available tools.

Format example:

```json
{
  "tool_calls": [
    {"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}
  ]
}
```

Define available APIs:

```python
get_weather_api = {
    "name": "get_weather",
    "description": "Retrieve weather details",
    "parameters": {"location": "string", "unit": "string"}
}

search_api = {
    "name": "search",
    "description": "Search for online information",
    "parameters": {"query": "string"}
}
```

Creating Flask APIs for Business Logic

We can use Flask to create APIs that replicate business processes.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/customer", methods=["GET"])
def get_customer():
    customer_id = request.args.get("customer_id")
    # Return dummy customer data
    return jsonify({"customer_id": customer_id, "status": "active"})

@app.route("/send_email", methods=["GET"])
def send_email():
    email = request.args.get("email")
    # Return dummy response for email send status
    return jsonify({"status": "sent"})

if __name__ == "__main__":
    app.run(port=5000)  # serve the dummy endpoints locally
```

Testing the LAM Model and Flask APIs

Define queries to test the LAM's function-calling capabilities:

```python
query = "What's the weather like in New York in fahrenheit?"
print(custom_func_def(query))
# Expected: {"tool_calls": [{"name": "get_weather", "arguments": {"location": "New York", "unit": "fahrenheit"}}]}
```
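The test above, and the base_call_api function below, rely on a helper called custom_func_def that the original post never shows. Here is a minimal sketch of what it could look like; the prompt layout, instruction wording, and generation settings are assumptions for illustration rather than Salesforce's reference implementation, so consult the xLAM model card for the exact prompt format the model expects.

```python
def custom_func_def(query):
    """Hypothetical helper: asks the xLAM model which tool to call for a query.

    The prompt template below is an assumption; the model card documents the
    official format. The helper simply returns the model's raw JSON string.
    """
    task_instruction = (
        "Based on the user's question and the available tools, respond only with "
        'a JSON object of the form {"tool_calls": [{"name": "...", "arguments": {...}}]}.'
    )
    tools = json.dumps([get_weather_api, search_api])
    prompt = f"{task_instruction}\n\nAvailable tools: {tools}\n\nQuestion: {query}\n\nAnswer:"

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Drop the prompt tokens and decode only the newly generated text
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True).strip()
```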
Function-Calling Models in Action

Using base_call_api, LAMs can determine the correct API to call and manage workflow processes autonomously.

```python
import requests

def base_call_api(query):
    """Calls APIs based on LAM recommendations."""
    base_url = "http://localhost:5000/"
    json_response = json.loads(custom_func_def(query))
    api_url = json_response["tool_calls"][0]["name"]
    params = json_response["tool_calls"][0]["arguments"]
    response = requests.get(base_url + api_url, params=params)
    return response.json()
```

With LAMs, businesses can automate and streamline tasks in complex workflows, maximizing efficiency and empowering teams to focus on strategic initiatives.
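As a quick end-to-end check, you might exercise the pipeline as sketched below. Note the assumptions: the Flask app above must be running locally, and the tool names the model returns must correspond to real routes, so in practice you would register a tool definition per endpoint (the post only registers get_weather and search). The customer query and output shown are made up for illustration.

```python
# Illustrative end-to-end check. Assumes:
#   * the Flask app above is running on http://localhost:5000
#   * a tool whose "name" matches the /customer route has been added to the
#     tool list passed to the model (hypothetical here)
print(base_call_api("What is the status of customer 42?"))
# Expected shape (illustrative): {"customer_id": "42", "status": "active"}
```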


Enhanced AppExchange Trials

🚀 Exciting Update: Enhanced AppExchange App Trials! 🚀

Found an app on AppExchange that checks all the boxes—great features, flexible pricing, and stellar reviews—but still unsure if it's the right fit? At AppExchange, we understand the importance of trying before buying! Tectonic is excited to introduce new and improved features that make testing apps faster and easier than ever. Here's what's new:

🔑 A Streamlined Entry Point for App Trials
The Try It Free button is now prominently displayed at the top of app listings. Simply click to view all available trial options, including sandbox installations, test drives, and Trialforce trials.

🔍 Compare with Confidence
With multiple trial options, choosing the right one can be challenging. Now, each option comes with a summary of key features, presented side-by-side for easy comparison.

⚡ Faster, More Responsive, and Accessible
Salesforce has optimized the trial experience by minimizing clicks and reducing load times. Plus, Trialforce trials now feature autogenerated usernames for quicker setup. The entire process is now more accessible and responsive, ensuring a seamless experience on any device.

Interested in checking out a new solution on the AppExchange? Now it is easier than ever.


Converting 15-Character IDs to 18-Character in Salesforce

In Salesforce, every record is assigned a unique Record ID, which is essential for managing data, writing formulas, and referencing records as an admin or developer. There are two types of Record IDs: a 15-character version and an 18-character version, each suited for different scenarios. Converting 15-character IDs to 18-character ones can be time-consuming when done manually, but several tools and methods can simplify the process, allowing for instant conversion with just a click.

Understanding Salesforce Record IDs

15-Character Record ID
The 15-character Record ID is case-sensitive and typically used in Salesforce's user interface for tasks like editing records and generating reports. However, its case sensitivity can create issues with systems that do not recognize differences between uppercase and lowercase letters.

18-Character Record ID
To mitigate case sensitivity issues, Salesforce offers an 18-character ID, which is used in APIs and tools such as Data Loader. This ID adds three additional characters to the 15-character version and is always returned by these tools during data exports.

When to Use Each ID
For consistency, the 18-character ID is preferable, especially when working with external systems. It's best practice to use the 18-character ID in formulas, API calls, or any data comparisons to avoid errors caused by case sensitivity.

Converting IDs Using a Formula Field in Salesforce
Salesforce recommends creating a formula field with the CASESAFEID(Id) function to convert the 15-character ID to an 18-character ID. (A sketch of the underlying suffix algorithm appears at the end of this post.)

Implementation Steps:
Once completed, this formula field will display the 18-character ID on relevant records.

APIs and Software Development
If you need a more scalable or efficient solution, consider using Salesforce APIs or third-party tools for ID conversion. While online tools may suffice for small tasks, they can become unwieldy when handling hundreds or thousands of records in a CSV or Excel file.

Streamlining ID Conversion with Xappex Tools
Imagine the frustration of manually copying and pasting IDs! That's where the XL-Connector and G-Connector from Xappex come into play. These tools work directly in Excel or Google Sheets, simplifying the ID conversion process. Instead of juggling multiple tools or navigating complex processes, you can seamlessly convert Salesforce IDs within your spreadsheet, saving significant time and effort.

Using XL-Connector for ID Conversion in Excel

Using G-Connector (Google Sheets) for ID Conversion
G-Connector is Xappex's integration tool for Google Sheets and Salesforce. If you haven't installed it yet, do so and log in to your Salesforce org. The sheet will automatically update with the new 18-character IDs and provide links to open the records directly in Salesforce.

Conclusion
In summary, managing Salesforce Record IDs doesn't have to be a hassle. While converting 15-character IDs to 18-character IDs is crucial for consistency, doing it manually can be tedious. With XL-Connector and G-Connector, you can streamline ID conversion with just a click in Excel or Google Sheets, making your workflow much more efficient.
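For readers who want to script the conversion outside of Salesforce, here is a minimal Python sketch of the widely documented suffix algorithm that CASESAFEID(Id) and the spreadsheet connectors implement; it is offered as an illustration, not the vendors' actual code, and the example ID is made up.

```python
def to_18_char_id(sf_id_15):
    """Append the 3-character, case-safe suffix to a 15-character Salesforce ID."""
    if len(sf_id_15) != 15:
        raise ValueError("Expected a 15-character Salesforce ID")
    # Suffix alphabet defined by Salesforce: A-Z followed by 0-5
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
    suffix = ""
    for chunk_start in range(0, 15, 5):
        chunk = sf_id_15[chunk_start:chunk_start + 5]
        # Build a 5-bit index: bit i is set when character i in the chunk is uppercase
        index = sum(1 << i for i, ch in enumerate(chunk) if ch.isupper())
        suffix += alphabet[index]
    return sf_id_15 + suffix

# Example with a made-up ID:
print(to_18_char_id("001A0000012Wxyz"))  # prints the ID plus its 3-character suffix
```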


AI and Big Data

Over the past decade, enterprises have accumulated vast amounts of data, capturing everything from business processes to inventory statistics. This surge in data marked the onset of the big data revolution. However, merely storing and managing big data is no longer sufficient to extract its full value. As organizations become adept at handling big data, forward-thinking companies are now leveraging advanced analytics and the latest AI and machine learning techniques to unlock even greater insights. These technologies can identify patterns and provide cognitive capabilities across vast datasets, enabling organizations to elevate their data analytics to new levels. Additionally, the adoption of generative AI systems is on the rise, offering more conversational approaches to data analysis and enhancement. This allows organizations to extract significant insights from information that would otherwise remain untapped in data stores.

How Are AI and Big Data Related?
Applying machine learning algorithms to big data is a logical progression for companies aiming to maximize the potential of their data. Unlike traditional rules-based approaches that follow explicit instructions, machine learning systems use data-driven algorithms and statistical models to analyze and detect patterns in data. Big data serves as the raw material for these systems, which derive valuable insights from it. Organizations are increasingly recognizing the benefits of integrating big data with machine learning. However, to fully harness the power of both, it's crucial to understand their individual capabilities.

Understanding Big Data
Big data involves extracting and analyzing information from large quantities of data, but volume is just one aspect. Other critical "Vs" of big data that enterprises must manage include velocity, variety, veracity, validity, visualization, and value.

Understanding Machine Learning
Machine learning, the backbone of modern AI, adds significant value to big data applications by deriving deeper insights. These systems learn and adapt over time without the need for explicit programming, using statistical models to analyze and infer patterns from data. Historically, companies relied on complex, rules-based systems for reporting, which often proved inflexible and unable to cope with constant changes. Today, machine learning and deep learning enable systems to learn from big data, enhancing decision-making, business intelligence, and predictive analysis. The strength of machine learning lies in its ability to discover patterns in data. The more data available, the more these algorithms can identify patterns and apply them to future data. Applications range from recommendation systems and anomaly detection to image recognition and natural language processing (NLP).

Categories of Machine Learning Algorithms
Machine learning algorithms generally fall into three categories: supervised, unsupervised, and reinforcement learning. The most powerful large language models (LLMs), which underpin today's widely used generative AI systems, utilize a combination of these methods, learning from massive datasets.

Understanding Generative AI
Generative AI models are among the most powerful and popular AI applications, creating new data based on patterns learned from extensive training datasets. These models, which interact with users through conversational interfaces, are trained on vast amounts of internet data, including conversations, interviews, and social media posts.
With pre-trained LLMs, users can generate new text, images, audio, and other outputs using natural language prompts, without the need for coding or specialized models.

How Does AI Benefit Big Data?
AI, combined with big data, is transforming businesses across various sectors. Key benefits include:

Big Data and Machine Learning: A Synergistic Relationship
Big data and machine learning are not competing concepts; when combined, they deliver remarkable results. Emerging big data techniques offer powerful ways to manage and analyze data, while machine learning models extract valuable insights from it. Successfully handling the various "Vs" of big data enhances the accuracy and power of machine learning models, leading to better business outcomes. The volume of data is expected to grow exponentially, with predictions of over 660 zettabytes of data worldwide by 2030. As data continues to amass, machine learning will become increasingly reliant on big data, and companies that fail to leverage this combination will struggle to keep up.

Examples of AI and Big Data in Action
Many organizations are already harnessing the power of machine learning-enhanced big data analytics:

Conclusion
The integration of AI and big data is crucial for organizations seeking to drive digital transformation and gain a competitive edge. As companies continue to combine these technologies, they will unlock new opportunities for personalization, efficiency, and innovation, ensuring they remain at the forefront of their industries.


Slack AI Exploit Prevented

Slack has patched a vulnerability in its Slack AI assistant that could have been exploited for insider phishing attacks, according to an announcement made by the company on Wednesday. This update follows a blog post by PromptArmor, which detailed how an insider attacker—someone within the same Slack workspace as the target—could manipulate Slack AI into sending phishing links to private channels that the attacker does not have access to.

The vulnerability is an example of an indirect prompt injection attack. In this type of attack, the attacker embeds malicious instructions within content that the AI processes, such as an external website or an uploaded document. In this case, the attacker could plant these instructions in a public Slack channel. Slack AI, designed to use relevant information from public channels in the workspace to generate responses, could then be tricked into acting on these malicious instructions. While placing such instructions in a public channel poses a risk of detection, PromptArmor pointed out that an attacker could create a rogue public channel with only one member—themselves—potentially avoiding detection unless another user specifically searches for that channel.

Salesforce, which owns Slack, did not directly reference PromptArmor in its advisory and did not confirm to SC Media that the issue it patched is the same one described by PromptArmor. However, the advisory does mention a security researcher's blog post published on August 20, the same day as PromptArmor's blog. "When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data. We've deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data," a Salesforce spokesperson told SC Media.

How the Slack AI Exploit Could Have Extracted Secrets from Private Channels
PromptArmor demonstrated two proof-of-concept exploits that would require the attacker to have access to the same workspace as the victim, such as a coworker. The attacker would create a public channel and lure the victim into clicking a link delivered by the AI. In the first exploit, the attacker aimed to extract an API key stored in a private channel that the victim is part of. The attacker could post a carefully crafted prompt in the public channel that indirectly instructs Slack AI to respond to a request for the API key with a fake error message and a URL controlled by the attacker. The AI would unknowingly insert the API key from the victim's private channel into the URL as an HTTP parameter. If the victim clicks on the URL, the API key is sent to the attacker's domain.

"This vulnerability shows how a flaw in the system could let unauthorized people see data they shouldn't see. This really makes me question how safe our AI tools are," said Akhil Mittal, Senior Manager of Cybersecurity Strategy and Solutions at Synopsys Software Integrity Group, in an email to SC Media. "It's not just about fixing problems but making sure these tools manage our data properly. As AI becomes more common, it's important for organizations to keep both security and ethics in mind to protect our information and keep trust."

In a second exploit, PromptArmor demonstrated how similar crafted instructions could be used to deliver a phishing link to a private channel.
The attacker would tailor the instructions to the victim's workflow, such as asking the AI to summarize messages from their manager, and include a malicious link. PromptArmor reported the issue to Slack on August 14, with Slack acknowledging the disclosure the following day. Despite some initial skepticism from Slack about the severity of the vulnerability, the company patched the issue on August 21. "Slack's security team had prompt responses and showcased a commitment to security and attempted to understand the issue. Given how new prompt injection is and how misunderstood it has been across the industry, this is something that will take the industry time to wrap our heads around collectively," PromptArmor wrote in their blog.

New Slack AI Feature Could Pose Further Prompt Injection Risk
PromptArmor concluded its testing of Slack AI before August 14, the same day Slack announced that its AI assistant could now reference files uploaded to Slack when generating search answers. PromptArmor noted that this new feature could create additional opportunities for indirect prompt injection attacks, such as hiding malicious instructions in a PDF file by setting the font color to white. However, the researchers have not yet tested this scenario and noted that workspace admins can restrict Slack AI's ability to read files.


Benefits of Salesforce Flow Automation

Salesforce Flow Automation offers robust tools to streamline operations, enhance productivity, and improve accuracy. Whether you're new to Salesforce or refining existing workflows, here are five top tips for maximizing the benefits of Salesforce Flow Automation.

1. Define Clear Objectives
Before creating any flows, clearly define your automation goals, whether it's reducing manual data entry, accelerating approval processes, or ensuring consistent customer follow-ups. Having specific objectives will keep your flow design focused and help you measure the impact of your automation.

2. Leverage Pre-Built Flow Templates
Salesforce provides a range of pre-built flow templates tailored to common business needs, saving time and effort. Start with these templates and customize them to suit your unique requirements, allowing you to implement efficient solutions without building from scratch.

3. Optimize Decision Elements
Decision elements in Salesforce Flow enable branching logic based on set conditions. Use them to direct the flow according to specific criteria, such as routing different approval paths based on deal value or service type. This targeted approach ensures each scenario is handled effectively.

4. Thoroughly Test Before Deployment
Testing is a critical part of the automation process. Before launching a new flow, test it in a sandbox environment to catch any issues. Cover a range of scenarios and edge cases to confirm that the flow works as expected, helping avoid disruptions and ensuring a smooth transition into live use.

5. Monitor and Continuously Improve
Automation is an evolving process. After deploying flows, monitor their performance to ensure they're achieving desired outcomes. Use Salesforce's reporting tools to track metrics like completion rates and processing times. With this data, you can fine-tune your flows to boost efficiency and adapt to changing business needs.

By following these tips, you can unlock the full potential of Salesforce Flow Automation, leading to streamlined processes and better business outcomes. Embrace automation to reduce manual work and keep focus on driving core business growth.


Salesforce Text Messaging

Salesforce Text Messaging: Boost Customer Engagement Directly from Your CRM

In this insight, we'll explore why Salesforce SMS is a powerful tool for your business, how to make your SMS campaigns stand out, and how to send messages through Salesforce with minimal technical complexity.

What is Salesforce Text Messaging?
Salesforce SMS is a feature that enables users to send SMS messages directly from Salesforce. By integrating SMS with Salesforce, businesses can communicate efficiently with their customers, enhancing engagement and streamlining operations.

Benefits and Use Cases of Salesforce SMS
Effective customer engagement is crucial for every business, and Salesforce SMS offers an efficient way to connect with your audience. Here are some of the top advantages:

1. Enhanced Customer Engagement
SMS boasts a 99% open rate, making it one of the most effective communication channels. Salesforce SMS helps businesses:

2. Real-Time Communication
Timely communication is essential in customer service. Salesforce SMS enables businesses to send relevant, real-time information such as:

3. Automation and Efficiency
Salesforce SMS allows for automated messaging, saving time and reducing errors. Benefits include:

4. Personalized SMS Messaging
Leverage customer data in Salesforce to create personalized, targeted messages that resonate with recipients. Use Salesforce SMS to:

5. Employee Ease of Use
Integrating SMS into Salesforce means employees can manage communication through a familiar platform. Benefits include:

Additional Advantages of SMS in Salesforce

Best Practices for Salesforce SMS Campaigns
To ensure successful SMS campaigns, follow these best practices:

1. Obtain Explicit Consent
It's important to get clear consent from customers before sending SMS messages. This builds trust and ensures compliance with regulations like TCPA in the U.S. and GDPR in the EU.

2. Make Opt-in Easy
Simplify the process for customers to opt in and clarify the types and frequency of messages they will receive.

3. Provide Opt-out Options
Make it easy for customers to unsubscribe at any time to maintain trust.

4. Use Clear Calls to Action (CTA)
Each SMS should guide the recipient on what to do next, using actionable language such as "Click here" or "Buy now" to prompt immediate responses.

5. Monitor and Analyze Performance
Regularly assess the success of your campaigns by tracking key metrics like open rates and opt-outs. Use A/B testing to optimize performance and customer feedback to refine your approach.

LINK Mobility Integration with Salesforce SMS
LINK Mobility's SMS integration (one Salesforce SMS tool) enhances your Salesforce SMS campaigns with powerful features:

Enhance Customer Engagement and Efficiency with Salesforce SMS
Salesforce SMS enables businesses to engage customers in real time, enhancing relationships and operational efficiency. Whether sending order updates, appointment reminders, or personalized promotions, Salesforce SMS creates a strong foundation for meaningful communication with your customer base.

Need help with Salesforce SMS? If you're looking for assistance in configuring SMS within Salesforce or need expert guidance, Tectonic is here to help you maximize the benefits of SMS integration for your business. Contact us today.


APIs and Software Development

The Role of APIs in Modern Software Development

APIs (Application Programming Interfaces) are central to modern software development, enabling teams to integrate external features into their products, including advanced third-party AI systems. For instance, you can use an API to allow users to generate 3D models from prompts on MatchboxXR.

The Rise of AI-Powered Applications
Many startups focus exclusively on AI, but often they are essentially wrappers around existing technologies like ChatGPT. These applications provide specialized user interfaces for interacting with OpenAI's GPT models rather than developing new AI from scratch. Some branding might make it seem like they're creating groundbreaking technology, when in reality they're leveraging pre-built AI solutions. (A minimal sketch of such a wrapper appears at the end of this post.)

Solopreneur-Driven Wrappers
Large Language Models (LLMs) enable individuals and small teams to create lightweight apps and websites with AI features quickly. A quick search on Reddit reveals numerous small-scale startups offering:

Such features can often be built using ChatGPT or Gemini within minutes for free.

Well-Funded Ventures
Larger operations invest heavily in polished platforms but may allocate significant budgets to marketing and design. This raises questions about whether these ventures are also just sophisticated wrappers. Examples include:

While these products offer interesting functionalities, they often rely on APIs to interact with LLMs, which brings its own set of challenges.

The Impact of AI-First, API-Second Approaches

Design Considerations

Looking Ahead

Developer Experience: As AI technologies like LLMs become mainstream, focusing on developer experience (DevEx) will be crucial. Good DevEx involves well-structured schemas, flexible functions, up-to-date documentation, and ample testing data.

Future Trends: The future of AI will likely involve more integrations. Imagine:

AI is powerful, but the real innovation lies in integrating hardware, data, and interactions effectively.
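To make the "wrapper" point concrete, here is a minimal sketch of how thin such an app can be: a single HTTP call to a hosted LLM behind a product-specific prompt. The endpoint and response shape follow OpenAI's public chat-completions API, but treat the model choice, prompt, and error handling as illustrative assumptions rather than a production recipe.

```python
import os
import requests

def summarize_meeting(notes: str) -> str:
    """A thin 'AI product': a branded prompt wrapped around someone else's model."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # illustrative model choice
            "messages": [
                {"role": "system", "content": "You summarize meeting notes as five bullet points."},
                {"role": "user", "content": notes},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarize_meeting("Discussed Q3 roadmap; launch slips two weeks; need more QA."))
```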


AI Infrastructure Flaws

Wiz Researchers Warn of Security Flaws in AI Infrastructure Providers

AI infrastructure providers like Hugging Face and Replicate are vulnerable to emerging attacks and need to strengthen their defenses to protect sensitive user data, according to Wiz researchers. AI infrastructure flaws come from security being an afterthought.

During Black Hat USA 2024 on Wednesday, Wiz security experts Hillai Ben-Sasson and Sagi Tzadik presented findings from a year-long study on the security of three major AI infrastructure providers: Hugging Face, Replicate, and SAP AI Core. Their research aimed to assess the security of these platforms and the risks associated with storing valuable data on them, given the increasing targeting of AI platforms by cybercriminals and nation-state actors.

Hugging Face, a machine learning platform that allows users to create models and store datasets, was recently targeted in an attack. In June, the platform detected suspicious activity on its Spaces platform, prompting a key and token reset. The researchers demonstrated how they compromised these platforms by uploading malicious models and using container escape techniques to break out of their assigned environments, moving laterally across the service. In an April blog post, Wiz detailed how they compromised Hugging Face, gaining cross-tenant access to other customers' data and training models. Similar vulnerabilities were later identified in Replicate and SAP AI Core, and these attack techniques were showcased during Wednesday's session.

Prior to Black Hat, Ben-Sasson, Tzadik, and Ami Luttwak, Wiz's CTO and co-founder, discussed their research. They revealed that in all three cases they successfully breached Hugging Face, Replicate, and SAP AI Core, accessing millions of confidential AI artifacts, including models, datasets, and proprietary code—intellectual property worth millions of dollars. Luttwak highlighted that many AI service providers rely on containers as barriers between different customers, but warned that these containers can often be bypassed due to misconfigurations. "Containerization is not a secure enough barrier for tenant isolation," Luttwak stated.

After discovering these vulnerabilities, the researchers responsibly disclosed the issues to each service provider. Ben-Sasson praised Hugging Face, Replicate, and SAP for their collaborative and professional responses, and Wiz worked closely with their security teams to resolve the problems. Despite these fixes, Wiz researchers recommended that organizations update their threat models to account for potential data compromises. They also urged AI service providers to enhance their isolation and sandboxing standards to prevent lateral movement by attackers within their platforms.

The Risks of Rapid AI Adoption
The session also addressed the broader challenges associated with the rapid adoption of AI. The researchers emphasized that security is often an afterthought in the rush to implement AI technologies. "AI security is also infrastructure security," Luttwak explained, noting that the novelty and complexity of AI often leave security teams ill-prepared to manage the associated risks. Many organizations testing AI models are using unfamiliar tools, often open-source, without fully understanding the security implications. Luttwak warned that these tools are frequently not built with security in mind, putting companies at risk.
He stressed the importance of performing thorough security validation on AI models and tools, especially given that even major AI service providers have vulnerabilities. In a related Black Hat session, Chris Wysopal, CTO and co-founder of Veracode, discussed how developers increasingly use large language models for coding but often prioritize functionality over security, leading to concerns like data poisoning and the replication of existing vulnerabilities.


Autonomous AI Service Agents

Salesforce Set to Launch Autonomous AI Service Agents

Considering Tectonic only first wrote about Agentic AI in late June, it's like Christmas in July! Salesforce is gearing up to introduce a new generation of customer service chatbots that leverage advanced AI tools to autonomously navigate through various actions and workflows. These bots, termed "autonomous AI agents," are currently in pilot testing and are expected to be released later this year.

Named Einstein Service Agent, these autonomous AI bots aim to utilize generative AI to understand customer intent, trigger workflows, and initiate actions within a user's Salesforce environment, according to Ryan Nichols, Service Cloud's chief product officer. By integrating natural language processing, predictive analytics, and generative AI, Einstein Service Agents will identify scenarios and resolve customer inquiries more efficiently.

Traditional bots require programming with rules-based logic to handle specific customer service tasks, such as processing returns, issuing refunds, changing passwords, and renewing subscriptions. In contrast, the new autonomous bots, enhanced by generative AI, can better comprehend customer issues (e.g., interpreting "send back" as "return") and summarize the steps to resolve them. Einstein Service Agent will operate across platforms like WhatsApp, Apple Messages for Business, Facebook Messenger, and SMS text, and will also process text, images, video, and audio that customers provide.

Despite the promise of these new bots, their effectiveness is crucial, emphasized Liz Miller, an analyst at Constellation Research. If these bots fail to perform as expected, they risk wasting even more customer time than current technologies and damaging customer relationships. Miller also noted that successful implementation of autonomous AI agents requires human oversight for instances when the bots encounter confusion or errors. Customers, whether in B2C or B2B contexts, are often frustrated with the limitations of rules-based bots and prefer direct human interaction. It is annoying enough to be on the telephone repeating "live person" over and over again. It would be tragic to have to do it online, too.

"It's essential that these bots can handle complex questions," Miller stated. "Advancements like this are critical, as they can prevent the bot from malfunctioning when faced with unprogrammed scenarios. However, with significant technological advancements like GenAI, it's important to remember that human language and thought processes are intricate and challenging to map."

Nichols highlighted that the forthcoming Einstein Service Agent will be simpler to set up, as it reduces the need to manually program thousands of potential customer requests into a conversational decision tree. This new technology, which can understand multiple word permutations behind a service request, could potentially lower the need for extensive developer and data scientist involvement for Salesforce users. The pricing details for the autonomous Einstein Service Agent will be announced at its release.


Managing Data Quality in an AI World

Each year, Monte Carlo surveys real data professionals about the state of their data quality. This year, we turned our gaze to the shadow of AI—and the message was clear: managing data quality in an AI world is getting harder. Data quality risks are evolving — and data quality management isn't.

Among the 200 data professionals polled about the state of enterprise AI, a staggering 91% said they were actively building AI applications, but two out of three admitted to not completely trusting the data these applications are built on. And "not completely" leaves a lot of room for error in the world of AI. Far from pushing the industry toward better habits and more trustworthy outputs, the introduction of GenAI seems to have exacerbated the scope and severity of data quality problems.

The Core Issue
Why is this happening, and what can we do about it?

2024 State of Reliable AI Survey
The Wakefield Research survey, commissioned by Monte Carlo in April 2024, polled 200 data leaders and professionals. It comes as data teams grapple with the adoption of generative AI. The findings highlight several key statistics that indicate the current state of the AI race and professional sentiment about the technology:

While AI is widely expected to be among the most transformative technological advancements of the last decade, these findings suggest a troubling disconnect between data teams and business stakeholders. More importantly, they suggest a risk of downward pressure toward AI initiatives without a clear understanding of the data and infrastructure that power them.

The State of AI Infrastructure—and the Risks It's Hiding
Even before the advent of GenAI, organizations were dealing with exponentially greater volumes of data than in decades past. Since adopting GenAI programs, 91% of data leaders report that both applications and the number of critical data sources have increased even further, deepening the complexity and scale of their data estates in the process. There's no clear solution for a successful enterprise AI architecture. Survey results reveal how data teams are approaching AI:

As the complexity of AI's architecture and the data that powers it continues to expand, one perennial problem is expanding with it: data quality issues.

The Modern Data Quality Problem
While data quality has always been a challenge for data teams, this year's survey results suggest the introduction of GenAI has exacerbated both the scope and severity of the problem. More than half of respondents reported experiencing a data incident that cost their organization more than $100K. And we didn't even ask how many they experienced. Previous surveys suggest an average of 67 data incidents per month of varying severity. This is a shocking figure when you consider that 70% of data leaders surveyed also reported that it takes longer than four hours to find a data incident—and at least another four hours to resolve it.

But the real deal breaker is this: even with 91% of teams reporting that their critical data sources are expanding, an alarming 54% of teams surveyed still rely on manual testing or have no initiative in place at all to address data quality in their AI.
This anemic approach to data quality will have a demonstrable impact on enterprise AI applications and data products in the coming months—allowing more data incidents to slip through the cracks, multiplying hallucinations, diminishing the safety of outputs, and eroding confidence in both the AI and the companies that build them.

Is Your Data AI-Ready?
While a lot has certainly changed over the last 12 months, one thing remains absolutely clear: if AI is going to succeed, data quality needs to be front and center. "Data is the lifeblood of all AI — without secure, compliant, and reliable data, enterprise AI initiatives will fail before they get off the ground. The most advanced AI projects will prioritize data reliability at each stage of the model development life cycle, from ingestion in the database to fine-tuning or RAG," said Lior Solomon, VP of Data at Drata.

The success of AI depends on the data—and the success of the data depends on your team's ability to efficiently detect and resolve the data quality issues that impact it. By curating and pairing your own first-party context data with modern data quality management solutions like data observability, your team can mitigate the risks of building fast and deliver reliable business value for your stakeholders at every stage of your AI adventure.

What can you do to improve data quality management in your organization?
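As a starting point for that question, many teams begin with simple automated checks on freshness, volume, and null rates before layering on a full observability platform. The sketch below shows that idea in pandas; the table, column names, and thresholds are placeholders, not recommendations.

```python
import pandas as pd

def basic_quality_checks(df: pd.DataFrame, timestamp_col: str, key_cols: list[str]) -> dict:
    """Minimal data-quality signals: freshness, row volume, and null rates."""
    now = pd.Timestamp.now(tz="UTC")
    freshness_hours = (now - pd.to_datetime(df[timestamp_col]).max()).total_seconds() / 3600
    return {
        "row_count": len(df),
        "hours_since_last_record": round(freshness_hours, 1),
        "null_rate_by_column": {c: float(df[c].isna().mean()) for c in key_cols},
    }

# Placeholder example data standing in for a real warehouse table
orders = pd.DataFrame({
    "order_id": [1, 2, 3, None],
    "amount": [10.0, None, 7.5, 3.2],
    "loaded_at": pd.to_datetime(["2024-05-01", "2024-05-02", "2024-05-02", "2024-05-03"], utc=True),
})
print(basic_quality_checks(orders, "loaded_at", ["order_id", "amount"]))
```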


Salesforce Model Tester

Salesforce is taking steps to ensure its AI models perform accurately, even with unexpected data. The company recently filed a patent for an "automated testing pipeline for neural network models." This technology helps developers predict whether their AI models will maintain accuracy when dealing with "unseen queries," using customer service bots as a primary example.

Typically, developers test their AI models using a subset of the original training data. However, Salesforce notes that this approach may not be ideal for smaller datasets or when real-time data differs significantly from the training set. To address this, Salesforce's system creates both easy and hard evaluation datasets from real-time customer data. The "hard" datasets contain queries significantly different from the training data, while the "easy" datasets are more similar.

The system begins by passing customer data through a "dependency parser," which filters out specific actions or verbs representing meaningful commands. Then, a pre-trained language model ranks the queries based on their similarity to the training data. A "bag of words" classifier removes queries that are too similar, ensuring the testing data is diverse. These curated datasets are used to evaluate the model's performance. (A toy illustration of the easy-versus-hard split appears at the end of this post.) The pipeline also includes a "human-in-the-loop" feedback mechanism to notify developers when a model isn't performing well, allowing for adjustments.

Salesforce's primary AI product, Einstein, enables customers to create generative AI experiences using their data. Unlike some companies that focus on building massive AI models, Salesforce aims to empower enterprise clients to develop their own models, according to Bob Rogers, Ph.D., co-founder of BeeKeeperAI and CEO of Oii.ai. This patent could enhance Salesforce's offerings by ensuring the AI models built under its platform function as intended. "I think Salesforce wants Einstein to generate more leads and faster. And if that's not happening, it could be a miss for Salesforce," Rogers said.

The patent's emphasis on improving customer service chatbots suggests Salesforce is focusing on AI-driven customer interactions. This is in line with the company's recent unveiling of its fully autonomous Einstein Service Agent, highlighting where Salesforce believes the most traction for Einstein might be. Rogers noted that while creating tools for customers to build their own AI models is challenging, Salesforce's approach stands out in a market dominated by companies like Google, Microsoft, and OpenAI, which offer ready-to-use AI services. "At the end of the day, most AI utilization is still people saying, 'solve my problem for me,'" Rogers said.
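The patent describes ranking real-time queries against the training data with a pre-trained language model. Purely as a toy illustration of the easy-versus-hard split, the sketch below substitutes TF-IDF cosine similarity for the language model and uses an arbitrary threshold and made-up queries; it mirrors the idea, not the patented pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

training_queries = ["reset my password", "track my order", "cancel my subscription"]
live_queries = ["how do I change my password", "my package never arrived",
                "what is your refund policy for damaged items"]

# Fit the vectorizer on training queries and score each live query by its
# closest match in the training set
vectorizer = TfidfVectorizer().fit(training_queries)
train_vecs = vectorizer.transform(training_queries)
live_vecs = vectorizer.transform(live_queries)
best_match = cosine_similarity(live_vecs, train_vecs).max(axis=1)

THRESHOLD = 0.3  # arbitrary cut-off for illustration
easy_set = [q for q, s in zip(live_queries, best_match) if s >= THRESHOLD]
hard_set = [q for q, s in zip(live_queries, best_match) if s < THRESHOLD]
print("easy:", easy_set)
print("hard:", hard_set)
```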


Implementing Salesforce Education Cloud

Client Overview
The client is a leading educational institution offering a wide array of programs, from undergraduate degrees to continuing education. With around 15,000 students and a global alumni network of over 50,000, they are dedicated to delivering a holistic educational experience while nurturing lifelong relationships with their alumni.

Challenges
Before implementing Salesforce Education Cloud, the client faced several large challenges:

Objectives
The institution sought to achieve the following with Salesforce Education Cloud:

Solution: Salesforce Education Expertise
Strategy and Planning
Design and Wireframing
Development
Testing
Deployment

Results: Before and After
Data Management. Before: fragmented across multiple systems. After: centralized in Salesforce Education Cloud.
Communication. Before: disjointed communication processes. After: streamlined internal and external channels.
Alumni Engagement. Before: outdated tools for managing alumni relationships. After: modern tools for enhanced engagement.
Before and after Salesforce Education Cloud.

Quantifiable Outcomes
With Salesforce Education Cloud, the client achieved:

Implementing Salesforce Education Cloud
By implementing Salesforce Education Cloud, the Salesforce partner delivered a transformative solution that surpassed the institution's objectives. The integration of centralized data, enhanced communication processes, and modern alumni management tools led to:

These impressive results highlight Tectonic's commitment to providing expert Salesforce solutions that help education clients achieve their strategic goals. Contact us today.


Sensitive AI Knowledge Models

Based on the writings of David Campbell in Generative AI.

"Crime is the spice of life." This quote from an unnamed frontier model engineer has been resonating for months, ever since it was mentioned by a coworker after a conference. It sparked an interesting thought: for an AI model to be truly useful, it needs comprehensive knowledge, including the potentially dangerous information we wouldn't really want it to share with just anyone. For example, a student trying to understand the chemical reaction behind an explosion needs the AI to accurately explain it. While this sounds innocuous, it can lead to the darker side of malicious LLM extraction. The student needs an accurate enough explanation to understand the chemical reaction without obtaining a chemical recipe to cause the reaction.

AI red-teaming is a process born of cybersecurity origins. The DEFCON conference co-hosted by the White House held the first Generative AI Red Team competition. Thousands of attendees tested eight large language models from an assortment of AI companies. In cybersecurity, red-teaming implies an adversarial relationship with a system or network. A red-teamer's goal is to break into, hack, or simulate damage to a system in a way that emulates a real attack.

When entering the world of AI red teaming, the initial approach often involves testing the limits of the LLM, such as trying to extract information on how to build a pipe bomb. This is not purely out of curiosity but also because it serves as a test of the model's boundaries. The red-teamer has to know the correct way to make a pipe bomb. Knowing the correct details about sensitive topics is crucial for effective red teaming; without this knowledge, it's impossible to judge whether the model's responses are accurate or mere hallucinations.

This realization highlights a significant challenge: it's not just about preventing the AI from sharing dangerous information, but ensuring that when it does share sensitive knowledge, it's not inadvertently spreading misinformation. Balancing the prevention of harm through restricted access to dangerous knowledge and avoiding greater harm from inaccurate information falling into the wrong hands is a delicate act. AI models need to be knowledgeable enough to be helpful but not so uninhibited that they become a how-to guide for malicious activities. The challenge is creating AI that can navigate this ethical minefield, handling sensitive information responsibly without becoming a source of dangerous knowledge.

The Ethical Tightrope of AI Knowledge
Creating dumbed-down AIs is not a viable solution, as it would render them ineffective. However, having AIs that share sensitive information freely is equally unacceptable. The solution lies in a nuanced approach to ethical training, where the AI understands the context and potential consequences of the information it shares.

Ethical Training: More Than Just a Checkbox
Ethics in AI cannot be reduced to a simple set of rules.
It involves complex, nuanced understanding that even humans grapple with. Developing sophisticated ethical training regimens for AI models is essential. This training should go beyond a list of prohibited topics, aiming to instill a deep understanding of intention, consequences, and social responsibility. Imagine an AI that recognizes sensitive queries and responds appropriately, not with a blanket refusal, but with a nuanced explanation that educates the user about potential dangers without revealing harmful details. This is the goal for AI ethics. But it isn't as if AI is going to extract parental permission for youths to access information, or run prompt-based queries, just because the request is sensitive.

The Red Team Paradox
Effective AI red teaming requires knowledge of the very things the AI should not share. This creates a paradox similar to hiring ex-hackers for cybersecurity — effective but not without risks. Tools like the WMDP Benchmark help measure and mitigate AI risks in critical areas, providing a structured approach to red teaming. To navigate this, diverse expertise is necessary. Red teams should include experts from various fields dealing with sensitive information, ensuring comprehensive coverage without any single person needing expertise in every dangerous area.

Controlled Testing Environments
Creating secure, isolated environments for testing sensitive scenarios is crucial. These virtual spaces allow safe experimentation with the AI's knowledge without real-world consequences.

Collaborative Verification
Using a system of cross-checking between multiple experts can enhance the security of red teaming efforts, ensuring the accuracy of sensitive information without relying on a single individual's expertise.

The Future of AI Knowledge Management
As AI systems advance, managing sensitive knowledge will become increasingly challenging. However, this also presents an opportunity to shape AI ethics and knowledge management. Future AI systems should handle sensitive information responsibly and educate users about the ethical implications of their queries. Navigating the ethical landscape of AI knowledge requires a balance of technical expertise, ethical considerations, and common sense. It's a necessary challenge to tackle for the benefits of AI while mitigating its risks.

The next time an AI politely declines to share dangerous information, remember the intricate web of ethical training, red team testing, and carefully managed knowledge behind that refusal. This ensures that AI is not only knowledgeable but also wise enough to handle sensitive information responsibly. Sensitive AI knowledge models need to handle sensitive data sensitively.
