Parameters Archives - gettectonic.com - Page 3


Exploring Large Action Models


Exploring Large Action Models (LAMs) for Automated Workflow Processes

While large language models (LLMs) are effective at generating text and media, Large Action Models (LAMs) push beyond simple generation: they perform complex tasks autonomously. Imagine an AI that not only generates content but also takes direct action in workflows, such as managing customer relationship management (CRM) tasks, sending emails, or making real-time decisions. LAMs are engineered to execute tasks across varied environments by integrating seamlessly with tools, data, and systems. They adapt to user commands, making them well suited to industries like marketing, customer service, and beyond.

Key Capabilities of LAMs

A standout feature of LAMs is their ability to perform function-calling tasks, such as selecting the appropriate APIs to meet user requirements. Salesforce’s xLAM models are designed to optimize these tasks, achieving high performance with lower resource demands, which makes them suitable for both mobile applications and high-performance environments. The fc series models are tuned specifically for function calling, enabling fast, precise, structured responses by selecting the best API for a given input query.

Practical Examples Using Salesforce LAMs

In this article, we’ll walk through setting up an xLAM model, defining callable functions, exposing business logic as Flask APIs, and testing the end-to-end function-calling loop.

Implementation: Setting Up the Model and API

Start by installing the necessary libraries:

```python
!pip install transformers==4.41.0 datasets==2.19.1 tokenizers==0.19.1 flask==2.2.5
```

Next, load the xLAM model and tokenizer:

```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/xLAM-7b-fc-r"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Now, define the task instructions and the available functions. The model will use function calls where applicable, based on the user’s question and the available tools.
Format Example:

```json
{
  "tool_calls": [
    {"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}
  ]
}
```

Define the available APIs:

```python
get_weather_api = {
    "name": "get_weather",
    "description": "Retrieve weather details",
    "parameters": {"location": "string", "unit": "string"},
}

search_api = {
    "name": "search",
    "description": "Search for online information",
    "parameters": {"query": "string"},
}
```

Creating Flask APIs for Business Logic

We can use Flask to create APIs that stand in for real business processes:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/customer", methods=["GET"])
def get_customer():
    customer_id = request.args.get("customer_id")
    # Return dummy customer data
    return jsonify({"customer_id": customer_id, "status": "active"})

@app.route("/send_email", methods=["GET"])
def send_email():
    email = request.args.get("email")
    # Return a dummy response for email send status
    return jsonify({"status": "sent"})
```

Testing the LAM Model and Flask APIs

Define queries to test the LAM’s function-calling capability:

```python
query = "What's the weather like in New York in fahrenheit?"
print(custom_func_def(query))
# Expected: {"tool_calls": [{"name": "get_weather",
#            "arguments": {"location": "New York", "unit": "fahrenheit"}}]}
```

Function-Calling Models in Action

Using base_call_api, a LAM can determine the correct API to call and manage workflow steps autonomously:

```python
import requests

def base_call_api(query):
    """Calls a local Flask API based on the LAM's recommendation."""
    base_url = "http://localhost:5000/"
    json_response = json.loads(custom_func_def(query))
    api_url = json_response["tool_calls"][0]["name"]
    params = json_response["tool_calls"][0]["arguments"]
    response = requests.get(base_url + api_url, params=params)
    return response.json()
```

With LAMs, businesses can automate and streamline complex workflows, maximizing efficiency and freeing teams to focus on strategic initiatives.
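The article calls custom_func_def (the model-inference helper) without showing its body or what happens once the model has answered. Assuming it returns JSON in the tool-call format shown above, the downstream dispatch step can be sketched in pure Python; the local functions and their return values here are hypothetical stand-ins for the Flask endpoints, not Salesforce code:

```python
import json

# Local stand-ins for the business-logic endpoints (hypothetical).
def get_weather(location, unit):
    return {"location": location, "temp": 72, "unit": unit}

def search(query):
    return {"query": query, "results": []}

TOOLS = {"get_weather": get_weather, "search": search}

def dispatch_tool_calls(model_output: str):
    """Parse the model's tool-call JSON and invoke each named function."""
    calls = json.loads(model_output)["tool_calls"]
    return [TOOLS[c["name"]](**c["arguments"]) for c in calls]

# A response in the format the article's prompt requests:
output = ('{"tool_calls": [{"name": "get_weather", '
          '"arguments": {"location": "New York", "unit": "fahrenheit"}}]}')
print(dispatch_tool_calls(output))
# → [{'location': 'New York', 'temp': 72, 'unit': 'fahrenheit'}]
```

Because the model emits structured JSON rather than free text, the dispatcher stays a thin, testable shim; swapping the local functions for HTTP calls gives the base_call_api behavior described later.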
Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Service Cloud with AI-Driven Intelligence Salesforce Enhances Service Cloud with AI-Driven Intelligence Engine Data science and analytics are rapidly becoming standard features in enterprise applications, Read more

Read More
Small Language Models


Large language models (LLMs) like OpenAI’s GPT-4 have gained acclaim for their versatility across various tasks, but they come with significant resource demands. In response, the AI industry is shifting focus towards smaller, task-specific models designed to be more efficient. Microsoft, alongside other tech giants, is investing in these smaller models. Science often involves breaking complex systems down into their simplest forms to understand their behavior. This reductionist approach is now being applied to AI, with the goal of creating smaller models tailored for specific functions. Sébastien Bubeck, Microsoft’s VP of generative AI, highlights this trend: “You have this miraculous object, but what exactly was needed for this miracle to happen; what are the basic ingredients that are necessary?” In recent years, the proliferation of LLMs like ChatGPT, Gemini, and Claude has been remarkable. However, smaller language models (SLMs) are gaining traction as a more resource-efficient alternative. Despite their smaller size, SLMs promise substantial benefits to businesses. Microsoft introduced Phi-1 in June last year, a smaller model aimed at aiding Python coding. This was followed by Phi-2 and Phi-3, which, though larger than Phi-1, are still much smaller than leading LLMs. For comparison, Phi-3-medium has 14 billion parameters, while GPT-4 is estimated to have 1.76 trillion parameters—about 125 times more. Microsoft touts the Phi-3 models as “the most capable and cost-effective small language models available.” Microsoft’s shift towards SLMs reflects a belief that the dominance of a few large models will give way to a more diverse ecosystem of smaller, specialized models. For instance, an SLM designed specifically for analyzing consumer behavior might be more effective for targeted advertising than a broad, general-purpose model trained on the entire internet. SLMs excel in their focused training on specific domains. 
“The whole fine-tuning process … is highly specialized for specific use-cases,” explains Silvio Savarese, Chief Scientist at Salesforce, another company advancing SLMs. To illustrate, using a specialized screwdriver for a home repair project is more practical than a multifunction tool that’s more expensive and less focused. This trend towards SLMs reflects a broader shift in the AI industry from hype to practical application. As Brian Yamada of VLM notes, “As we move into the operationalization phase of this AI era, small will be the new big.” Smaller, specialized models or combinations of models will address specific needs, saving time and resources. Some voices express concern over the dominance of a few large models, with figures like Jack Dorsey advocating for a diverse marketplace of algorithms. Philippe Krakowski of IPG also worries that relying on the same models might stifle creativity. SLMs offer the advantage of lower costs, both in development and operation. Microsoft’s Bubeck emphasizes that SLMs are “several orders of magnitude cheaper” than larger models. Typically, SLMs operate with around three to four billion parameters, making them feasible for deployment on devices like smartphones. However, smaller models come with trade-offs. Fewer parameters mean reduced capabilities. “You have to find the right balance between the intelligence that you need versus the cost,” Bubeck acknowledges. Salesforce’s Savarese views SLMs as a step towards a new form of AI, characterized by “agents” capable of performing specific tasks and executing plans autonomously. This vision of AI agents goes beyond today’s chatbots, which can generate travel itineraries but not take action on your behalf. Salesforce recently introduced a 1 billion-parameter SLM that reportedly outperforms some LLMs on targeted tasks. 
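The resource gap between SLMs and LLMs is easy to quantify: a model's raw weight footprint is roughly parameter count × bytes per parameter. A back-of-envelope comparison, using the parameter counts cited above and assuming fp16 weights (2 bytes each; real deployments often quantize further):

```python
def weight_footprint_gb(params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just to hold the weights (fp16 = 2 bytes)."""
    return params * bytes_per_param / 1e9

# Parameter counts as cited in the article; GPT-4's is an estimate.
print(round(weight_footprint_gb(3.8e9), 1))    # 3-4B-parameter SLM: ~7.6 GB
print(round(weight_footprint_gb(14e9), 1))     # Phi-3-medium: ~28.0 GB
print(round(weight_footprint_gb(1.76e12), 1))  # GPT-4 (estimated): ~3520.0 GB
```

The arithmetic makes the "on-device" claim concrete: a few-billion-parameter model is within reach of a phone's memory once quantized, while an LLM of GPT-4's estimated scale requires a multi-GPU server just to load.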
Salesforce CEO Marc Benioff celebrated this advancement, proclaiming, “On-device agentic AI is here!”

Read More
Who Calls AI Ethical


Background – Who Calls AI Ethical On March 13, 2024, the European Union (EU) enacted the EU AI Act, a move that some argue has hindered its position in the global AI race. This legislation aims to ‘unify’ the development and implementation of AI within the EU, but it is seen as more restrictive than progressive. Rather than fostering innovation, the act focuses on governance, which may not be sufficient for maintaining a competitive edge. The EU AI Act embodies the EU’s stance on Ethical AI, a concept that has been met with skepticism. Critics argue that Ethical AI is often misinterpreted and, at worst, a monetizable construct. In contrast, Responsible AI, which emphasizes ensuring products perform as intended without causing harm, is seen as a more practical approach. This involves methodologies such as red-teaming and penetration testing to stress-test products. This critique of Ethical AI forms the basis of this insight, drawing on an article by Eric Sandosham. The EU AI Act To understand the implications of the EU AI Act, it is essential to summarize its key components and address the broader issues with the concept of Ethical AI. The EU defines AI as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. It infers from the input it receives to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” Based on this definition, the EU AI Act can be summarized into several key points: Fear of AI The EU AI Act appears to be driven by concerns about AI being weaponized or becoming uncontrollable. Questions arise about whether the act aims to prevent job disruptions or protect against potential risks. However, AI is essentially automating and enhancing tasks that humans already perform, such as social scoring, predictive policing, and background checks.
AI’s implementation is more consistent, reliable, and faster than human efforts. Existing regulations already cover vehicular safety, healthcare safety, and infrastructure safety, raising the question of why AI-specific regulations are necessary. AI solutions automate decision-making, but the parameters and outcomes are still human-designed. The fear of AI becoming uncontrollable lacks evidence, and the path to artificial general intelligence (AGI) remains distant. Ethical AI as a Red Herring In AI research and development, the terms Ethical AI and Responsible AI are often used interchangeably, but they are distinct. Ethics involve systematized rules of right and wrong, often with legal implications. Morality is informed by cultural and religious beliefs, while responsibility is about accountability and obligation. These constructs are continuously evolving, and so must the ethics and rights related to technology and AI. Promoting AI development and broad adoption can naturally improve governance through market forces, transparency, and competition. Profit-driven organizations are incentivized to enhance AI’s positive utility. The focus should be on defining responsible use of AI, especially for non-profit and government agencies. Towards Responsible AI Responsible AI emphasizes accountability and obligation. It involves defining safeguards against misuse rather than prohibiting use cases out of fear. This aligns with responsible product development, where existing legal frameworks ensure products work as intended and minimize misuse risks. AI can improve processes such as recruitment by reducing errors compared to human solutions. AI’s role is to make distinctions based on data attributes, striving for accuracy. The concern is erroneous discrimination, which can be mitigated through rigorous testing for bias as part of product quality assurance. Conclusion The EU AI Act is unlikely to become a global standard. 
It may slow AI research, development, and implementation within the EU, hindering AI adoption in the region and causing long-term harm. Humanity has an obligation to push the boundaries of AI innovation. Facing eventual extinction from any number of potential threats, we may find in AI a means of survival and advancement beyond our biological limitations.

Read More
Marketing Cloud Engagement

Salesforce Distributed Marketing Content

Salesforce Distributed Marketing Content empowers you to extend dynamic content to dispersed teams, fostering safe and efficient interaction with that content. Integrate one or more Distributed Marketing Content Blocks seamlessly with standard or custom content areas to facilitate collaborative moments within branded email messages. For instance, enable users to personalize holiday cards with personal notes and images or provide context to market update messages. Leverage Distributed Marketing alongside AMPscript to enable users to craft customized SMS messages. Salesforce Distributed Marketing Content While marketing teams retain control over message structure, ensuring coherence, brand alignment, and compliance, collaborative content augments this framework, granting distributed teams flexibility within set parameters – a concept Salesforce refers to as “flexibility within a framework.” The usage of Distributed Marketing content is flexible and can adapt over time. Since each message is independently configurable, you can initiate with existing assets and introduce collaborative elements as needed. Please note that Distributed Marketing employs JavaScript ES6 for message personalization, requiring the disabling of Prevent Cross-Site Tracking in Safari and third-party cookies in Chrome. Enable Email Personalization with Distributed Marketing Content Blocks Utilize Distributed Marketing Content Blocks within Marketing Cloud to create personalized sections of content for Distributed Marketing users. Enable Custom SMS Messages Incorporate AMPscript into SMS messages to empower users to compose and dispatch custom SMS messages through Distributed Marketing. Personalization Data Extension Distributed Marketing establishes personalization data extensions in Marketing Cloud to store user-entered personalization data for email messages. A personalization data extension is automatically generated when connecting a journey to a campaign or enabling a journey for quick send. 
Custom SMS messages are not stored here but are accessible in the journey’s event data extension. Legacy Personalization While Legacy Personalization options like Introduction, Conclusion, and Greeting remain available, they are supported only until an End of Support date is announced.

Read More
UX Design Trends 2024


Navigating Design Trends: AI, Discovery, Accessibility, and Collaboration. Salesforce UX Design Trends 2024. As we reflect on the past year and look ahead, design trends are emerging that signal a pivotal moment at the intersection of creativity, usability, and AI. For developers, admins, architects, and business leaders, understanding these trends is crucial to shaping the future. Four design trends are steering this transformative journey: AI, streamlined discovery, accessibility, and the growing collaboration between designers and developers. Together, they signify a paradigm shift, and navigating this transformative landscape requires an adaptable mindset and a commitment to ethical, inclusive design practices from the outset. When you work with Tectonic, we take all of these considerations into account as we design or re-design your Salesforce org. Contact Tectonic today.

Read More
Leverage AI and Machine Learning in Your Data Warehouse

Exploring Machine Learning with Salesforce

Machine Learning (ML) falls into three main categories: Supervised Learning, Unsupervised Learning, and Reinforcement Learning. Let’s dive into some issues and considerations that might leave you wondering if it’s even worth starting! Not embracing what Professor Stuart Russell called “the biggest event in human history” may be short-sighted. Don’t worry, Salesforce can help. Salesforce and Machine Learning Salesforce has a 20-year history of making complex technologies business-friendly. This extends to Machine Learning, integrating ML capabilities throughout the Salesforce Customer 360 suite, which includes solutions for Marketing, Commerce, Sales, Service, and Analytics, among others. Machine Learning in Action with Salesforce Marketing Imagine you’re in a marketing role. You want to predict the likelihood that a customer will engage with your campaigns to maximize effectiveness. Supervised Learning can help here by predicting subscriber engagement (opens, click-throughs, conversions) using historical data (90 days of engagement metrics). For example, using predictive Engagement Scoring, a Salesforce customer in the travel industry achieved a 66% drop in unsubscribe rates and a 13% revenue increase. You also want to ensure prospective customers can quickly find relevant products. Unsupervised Learning can personalize product assortments throughout the shopper journey by analyzing buying patterns, site browsing tendencies, and relationships between search terms and products. Using AI-powered Predictive Sort, businesses have seen a 9.1% increase in revenue per visitor and a 3.8% increase in conversion rates. Sales For sales teams handling many opportunities, predicting the quality of each Opportunity can help prioritize efforts. Supervised Learning, using historical data of at least 200 Closed/Won and 200 Closed/Lost Opportunities, can provide a prioritized list of Opportunities to maximize revenue potential. 
A large Salesforce customer in the consumer goods sector increased win rates by 48% by focusing on the best Opportunities. Service Post-sale customer support is crucial. Service agents need to address challenging cases efficiently. Supervised Learning can recommend articles to resolve current cases based on historical data from at least 1000 cases with knowledge base articles. A large electronics company using Salesforce AI-powered solutions saved 5 hours per agent per week, enhancing productivity. Simplifying Complex Technology Salesforce’s rich history of making complex technology accessible allows businesses to realize ML benefits without needing specialized knowledge. Traditional ML involves multiple steps like data collection, transformation, sampling, feature selection, model selection, score calibration, and integrating results. Salesforce simplifies this with a customizable data model, automated feature engineering, and automatic model building and selection. For example, in model selection, Salesforce runs a “model tournament” to choose the best model with varying hyper-parameters, ensuring the most accurate model is selected without requiring user intervention. Conclusion Salesforce abstracts the complexity of ML behind user-friendly interfaces, making it easier for businesses to leverage powerful technology. Whether it’s predicting customer engagement, personalizing shopping experiences, prioritizing sales opportunities, or enhancing customer support, Salesforce’s ML capabilities can drive significant business value. Discover more about how Salesforce can transform your approach to Machine Learning and help you achieve your business goals.
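The "model tournament" idea can be sketched in a few lines of plain Python: fit or define several candidate models, score each on held-out data, and keep the winner. The candidates below are trivial threshold rules standing in for real models with different hyper-parameters, and the toy data is invented for illustration; this is not Salesforce's actual implementation.

```python
# A toy "model tournament": evaluate candidate classifiers on held-out
# data and keep the most accurate one.
def make_threshold_model(threshold):
    """A stand-in 'model': predict 1 (will convert) above a score threshold."""
    return lambda x: 1 if x >= threshold else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

def model_tournament(candidates, holdout):
    """Score every (name, model) pair and return the winner's name and score."""
    scored = [(accuracy(model, holdout), name) for name, model in candidates]
    best_score, best_name = max(scored)
    return best_name, best_score

holdout = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]  # (engagement score, converted?)
candidates = [(f"threshold={t}", make_threshold_model(t)) for t in (0.3, 0.5, 0.7)]
print(model_tournament(candidates, holdout))
# → ('threshold=0.5', 1.0)
```

In a real system, the candidates would be distinct algorithms and hyper-parameter settings, and the scoring metric would be calibrated to the business objective; the tournament structure itself stays exactly this simple.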

Read More
Exploring Google Vertex AI

Vertex AI

Exploring Google Vertex AI Conversation — Dialogflow CX with Generative AI, Data Stores, and Generators Vertex AI Conversation, built on Dialogflow and Vertex AI, introduces generative conversational features that utilize large language models (LLMs) for natural language understanding, crafting responses, and managing conversation flow. These advancements streamline agent design and enhance the quality of interactions. With Vertex AI Conversation, you can employ a state machine approach to develop sophisticated, generative AI-powered agents for dynamic conversation design and automation. In this insight, we’ll delve into the cutting-edge Dialogflow CX Generative AI technology, focusing on Data Stores and Generators. Data Stores: The Library of Information for Conversations Imagine Data Stores as an extensive library. When a question is asked, the virtual assistant acts as a librarian, locating relevant information. Dialogflow CX’s Data Store feature makes it easy to create conversations around stored information from various sources. For data preparation guidance, visit Google’s official documentation. Generators: LLM-Enhanced Dynamic Responses Dialogflow CX also offers Generators, which call an LLM directly from the conversation flow, with no webhooks required. Generators can perform tasks like summarization, parameter extraction, and data manipulation. Sourced from Vertex AI, they create real-time responses based on your prompts. For example, a Generator can be customized to summarize lengthy answers—an invaluable feature for simplifying conversations in chat or voice applications. You can find common Generator configurations in Google Cloud Platform (GCP) documentation.
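Programmatically, a Dialogflow CX agent is queried through the Sessions detectIntent API. As a sketch, the request body can be built as a plain dict; the field names follow the v3 REST API as I understand it (queryInput.text.text and queryInput.languageCode), so verify against Google's reference before relying on them:

```python
# Sketch of a Dialogflow CX detectIntent request body (v3 REST field
# names assumed; check Google's API reference before production use).
def build_detect_intent_body(user_text: str, language_code: str = "en") -> dict:
    return {
        "queryInput": {
            "text": {"text": user_text},
            "languageCode": language_code,
        }
    }

body = build_detect_intent_body("Summarize the onboarding PDF")
print(body["queryInput"]["text"]["text"])
```

The same body works whether you post it yourself to the sessions endpoint or hand the equivalent object to Google's client library; the Data Store and Generator behavior configured in the console applies transparently to either path.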
Creating a Chat Application with Vertex AI To start building, go to the Search and Conversation page in Google Cloud, agree to the terms, activate the API, and select “Chat.” Setting Up Your Agent After naming your agent and configuring data sources, like a Cloud Storage bucket with PDF documents, you’ll see your new chat app under Search & Conversation | Apps. Navigate to Dialogflow CX, where you can use your data store by setting up parameters for the agent and configuring responses. Once your agent is ready, you can test it in the Agent simulator. Adding a Generator for Summarization Using the Generator feature, you can further refine responses. Set parameters to target the Generator’s summarization feature, and link it to a specific page for summarized responses. This improves chat flow, providing concise answers for faster interactions. Integrating with Discord If you want to deploy your agent on platforms like Discord, follow Google’s integration guide for Dialogflow and adjust your code as needed. With the integration, responses will include hyperlinks for easy reference. Conclusion Vertex AI Conversation, with Dialogflow CX, enables powerful, human-like chat experiences by combining LLMs, Data Stores, and Generators. Ready to build your own dynamic conversational experiences? Now is the perfect time to experiment with this technology and see where it can take you.

Read More

Build Better Tableau Dashboards

The effort made to build better Tableau dashboards pays tenfold in their readability and usability. “Dashboard design is not about making dashboards ‘pretty.’ It’s making them functional and helping the user to get the information they need as efficiently as possible.” ALEXANDER WALECZEK, ANALYTICS PRACTICE LEAD AND TABLEAU AMBASSADOR Effective communication with your audience involves considering their needs from start to finish. The key lies in posing the right questions. To convey information to your readers in an engaging manner, it is crucial to grasp fundamental aspects, such as: For example, when tailoring content for a time-pressed salesperson with only 15 seconds to spare for crucial performance indicators, it is imperative to present the most vital information at a glance, and ensuring that the dashboard is mobile-friendly and loads swiftly becomes essential. On the other hand, if your target audience consists of a team set to review quarterly dashboards over an extended period, offering more detailed views of the data might be advisable. Build Better Tableau Dashboards for Your Audience Take into account the expertise level of your audience. Gain a deeper understanding of their skill set by inquiring about their priorities and data consumption habits. This insight is crucial for determining the most effective way to present data, guiding key design decisions. For instance, a novice may require more action-oriented labels for filters or parameters than an advanced user. Here are four effective methods to assess the dashboard and data proficiency of your audience: Adjust Your Narrative Adjust your narrative accordingly. Tailoring your dashboards to suit the intended audience enhances their impact. Below are three visualizations depicting the distribution of tornadoes in the United States for the first nine months of the year. The distinction lies in the level of visual information employed to convey the narrative.
The leftmost might be an extremely minimal presentation, progressing in complexity towards the right. None of these approaches is inherently superior to the others. The minimal visualization on the left might be ideal for audiences well-versed in the subject matter, appreciating simplicity and the elimination of redundancy. On the other hand, for newcomers or individuals viewing the visualization just once, the explicitness of the visualization on the right could be more effective. Determining what constitutes clutter versus essential information is where collaboration with colleagues becomes crucial. Crafting persuasive dashboards is, at its core, an exercise in partnership. By closely collaborating with line-of-business stakeholders, you can secure the buy-in and engagement needed to tailor the dashboard to their specific requirements and expectations. This collaborative approach forms the essence of dashboard persuasion. A Work in Progress Demonstrate your process and embrace iterative refinement. Establishing a culture of analytics should be accompanied by a culture of supportive and frequent critique. Creating multiple versions of your work and actively seeking feedback throughout the process will contribute to a superior final product. Avoid isolation and stagnation; share your progress with others, use the feedback to refine your work, and repeat the process until you achieve a satisfactory result. Much like the formation of a diamond, which requires extraordinary heat, pressure, and time, the outcome is worth the effort. Encouraging critiques is essential for cultivating a culture of constructive feedback. Trust among colleagues is important, as it enables mutual respect and trust in each other’s feedback. Developing a thick skin is also necessary, focusing on designing dashboards that cater to users’ and clients’ needs rather than personal preferences.
Similar to writers who must “kill their darlings,” designers must prioritize the overall effectiveness of the dashboard, making honest assessments and adjustments when needed. “It also helps to have a public place—on a real or virtual wall—for sharing work. Making work public creates constant opportunities for feedback and improvements.” Tableau


Audience Builder Marketing Cloud

Marketing Cloud Audience Builder dynamically generates targeted audiences from contacts stored in your account based on attribute and behavioral values. These audiences can be used to target or exclude contacts from your marketing activities. In today’s world, where a staggering 347.3 billion emails are sent globally every day, email inboxes have become increasingly cluttered. In your specific niche, you’re not the only one trying to reach your target audience; numerous others are vying for their attention. With consumers having a multitude of options, marketers bear the responsibility of positioning themselves in a way that makes it impossible for potential customers to overlook them. Achieving this requires embracing customer-centricity, which involves deeply engaging with different buyer personas by segmenting your contact list based on various parameters such as age, gender, location, interests, preferences, past purchases, browsing history, and position in the sales funnel. However, manually managing this segmentation, especially with a large contact list, can be overwhelming. This is where a dependable tool like Salesforce Marketing Cloud’s Audience Builder proves invaluable. The SFMC Audience Builder empowers marketers to create granular segmentation frameworks based on demographic and behavioral data, making the execution of targeted campaigns effortless. It dynamically generates targeted audiences by utilizing contacts in your account and leveraging behavioral values and stored attributes as guiding parameters. In this overview, we aim to provide a comprehensive understanding of SFMC’s Audience Builder.

Key Entities and Terminologies:
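The attribute-based segmentation idea can be sketched in a few lines of Python. This is only an illustration: the contact fields and criteria below are hypothetical, and Audience Builder itself operates on contacts and attributes inside Marketing Cloud, not on Python dictionaries.

```python
# Hypothetical contact records; in SFMC these would be rows in a data extension.
contacts = [
    {"email": "a@example.com", "age": 34, "region": "EMEA", "last_purchase_days": 12},
    {"email": "b@example.com", "age": 52, "region": "AMER", "last_purchase_days": 200},
    {"email": "c@example.com", "age": 29, "region": "EMEA", "last_purchase_days": 45},
]

def build_audience(contacts, **criteria):
    """Keep contacts matching every criterion, given as field=predicate pairs."""
    return [
        c for c in contacts
        if all(pred(c.get(field)) for field, pred in criteria.items())
    ]

# Target segment: EMEA contacts who purchased within the last 90 days.
audience = build_audience(
    contacts,
    region=lambda r: r == "EMEA",
    last_purchase_days=lambda d: d is not None and d <= 90,
)
print([c["email"] for c in audience])
```

The same segment definition can be reused for inclusion or, negated, for exclusion, which mirrors how Audience Builder lets one framework drive both targeting and suppression.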


Choosing the Right Vector Index

Finding the Needle in the Digital Haystack: Choosing the Right Vector Index

Imagine searching for a needle in a vast digital haystack of millions of data points. In AI and machine learning, selecting the right vector index is like equipping yourself with a magnet—it transforms your search into a faster, more precise process. Whether you’re building a recommendation system, a chatbot, or a Retrieval-Augmented Generation (RAG) application, the vector index you choose significantly impacts your system’s performance. So how do you pick the right one? Let’s break it down.

What Is Similarity Search?

At its core, similarity search is about finding the items most similar to a query item according to a defined metric. These items are often represented as high-dimensional vectors, capturing data like text embeddings, images, or user preferences. This process enables applications to deliver relevant results efficiently and effectively.

What Is a Vector Index?

A vector index is a specialized organizational system for high-dimensional data. Much like a library catalog helps locate books among thousands, a vector index enables algorithms to retrieve relevant information from vast datasets quickly. Different techniques offer varying trade-offs between speed, memory usage, and accuracy.

Popular Vector Indexing Techniques

1. Flat Index: the simplest method, storing vectors without alteration, like keeping all your files in one folder. Results are exact, but every query scans the entire dataset.
2. Inverted File Index (IVF): improves search speed by clustering vectors, so a query is compared only against the most promising clusters.
3. Product Quantization (PQ): compresses high-dimensional vectors, reducing memory requirements and speeding up distance calculations.
4. Hierarchical Navigable Small World Graphs (HNSW): a graph-based approach that excels at balancing speed and accuracy.
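To make the flat-versus-IVF trade-off concrete, here is a minimal sketch of both searches in plain Python. Production systems use libraries such as FAISS, and the crude evenly-spaced centroid picking below merely stands in for proper k-means training.

```python
import math
import random

random.seed(0)

# Toy dataset: 200 vectors of dimension 8 (stand-ins for real embeddings).
data = [[random.gauss(0, 1) for _ in range(8)] for _ in range(200)]
query = [random.gauss(0, 1) for _ in range(8)]

def flat_search(vectors, q, k=5):
    """Flat index: compare the query against every vector (exact, but O(n))."""
    return sorted(range(len(vectors)), key=lambda i: math.dist(vectors[i], q))[:k]

def ivf_search(vectors, q, k=5, n_clusters=10, n_probe=3):
    """IVF sketch: bucket vectors by nearest centroid, then search only the
    buckets whose centroids lie closest to the query (approximate, faster)."""
    step = len(vectors) // n_clusters
    # Crude "training": sample evenly spaced vectors as centroids
    # (a real IVF index would run k-means here).
    centroids = [vectors[i * step] for i in range(n_clusters)]
    buckets = {c: [] for c in range(n_clusters)}
    for i, v in enumerate(vectors):
        nearest = min(range(n_clusters), key=lambda c: math.dist(centroids[c], v))
        buckets[nearest].append(i)
    # Probe only the n_probe clusters nearest the query.
    probe = sorted(range(n_clusters), key=lambda c: math.dist(centroids[c], q))[:n_probe]
    candidates = [i for c in probe for i in buckets[c]]
    return sorted(candidates, key=lambda i: math.dist(vectors[i], q))[:k]

print("exact :", flat_search(data, query))
print("approx:", ivf_search(data, query))
```

Raising `n_probe` moves IVF back toward the exactness of the flat scan at the cost of speed; that knob is exactly the speed-versus-accuracy trade-off described above.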
Composite Indexing Techniques

Blending techniques can help balance speed, memory efficiency, and accuracy:

Conclusion

Choosing the right vector index depends on your specific needs—speed, memory efficiency, or accuracy. By understanding the trade-offs of each indexing technique and fine-tuning their parameters, you can optimize the performance of your AI and machine learning models. Whether you’re working with small, precise datasets or massive, high-dimensional ones, the right vector index is your key to efficient, accurate searches.


Google Analytics 4 UTM Codes

How to Generate UTM Codes

Google Analytics 4 UTM codes can be generated in several ways. A common method is Google’s Campaign URL Builder, where you enter your original URL and add the desired UTM parameters. Alternatively, tools like Measureschool’s UTM Tool in Google Sheets or other free online tools can be used to create UTM codes.

Viewing UTM Data in Google Analytics 4

In Google Analytics 4 (GA4), UTM data is found in the standard Acquisition reports. Specifically, you can view this data in the Acquisition Overview, User Acquisition: First User Default Channel Grouping, and Traffic Acquisition reports. Additionally, the Explore section in GA4 allows for the creation of custom reports using UTM data.

Best Practices for UTM Tagging

When using UTM tags, consider the following best practices:

Google Analytics 4 UTM codes let you better track your customer journeys and measure ROI on your advertising and search engine optimization efforts.
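The Campaign URL Builder simply appends query parameters to a URL, so the same result can be produced programmatically. Here is a small sketch using Python’s standard library; the example URL and campaign values are hypothetical.

```python
from urllib.parse import urlencode, urlparse

def add_utm(url, source, medium, campaign, term=None, content=None):
    """Append UTM parameters to a landing-page URL, using the same
    fields Google's Campaign URL Builder asks for."""
    params = {"utm_source": source, "utm_medium": medium, "utm_campaign": campaign}
    if term:
        params["utm_term"] = term      # optional: paid keyword
    if content:
        params["utm_content"] = content  # optional: ad/link variant
    # Use "&" if the URL already carries a query string, "?" otherwise.
    separator = "&" if urlparse(url).query else "?"
    return url + separator + urlencode(params)

tagged = add_utm(
    "https://example.com/landing",
    source="newsletter", medium="email", campaign="spring_launch",
)
print(tagged)
# https://example.com/landing?utm_source=newsletter&utm_medium=email&utm_campaign=spring_launch
```

Using `urlencode` also percent-escapes spaces and special characters, which avoids one of the most common causes of broken UTM links.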


AI Large Language Models

What Exactly Constitutes a Large Language Model?

Picture an exceptionally intelligent digital assistant that has combed extensively through text (books, articles, websites, and other written content) up to the year 2021. Yet, unlike a library that houses entire books, this assistant learns patterns from the text it processes. Such an assistant, a large language model (LLM), is an advanced computer model built to comprehend and generate text with humanlike qualities. Its training involves exposure to vast amounts of text data, allowing it to discern patterns, language structures, and relationships between words and sentences.

How Do These Large Language Models Operate?

Fundamentally, large language models, exemplified by GPT-3, make predictions token by token, sequentially building a coherent sequence. Given a prompt, they predict the next token using the patterns acquired during training. These models showcase remarkable pattern recognition, generating contextually relevant content across diverse topics. The “large” in these models refers to their size and complexity: they require substantial computational resources, such as powerful servers with multiple processors and ample memory, to manage and process vast datasets, which in turn improves their ability to comprehend and generate high-quality text. While LLM sizes vary, they typically contain billions of parameters, the variables learned during training that embody the knowledge extracted from the data. The greater the number of parameters, the more adept the model becomes at capturing intricate patterns. For instance, GPT-3 has around 175 billion parameters, marking a significant advancement in language processing capabilities, while GPT-4 is reported to exceed 1 trillion.
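Token-by-token prediction can be illustrated with a toy bigram model. This is a deliberate simplification (real LLMs use transformer networks with billions of learned parameters, not frequency counts), but the generation loop has the same shape.

```python
from collections import Counter, defaultdict

# A toy "training corpus" (real LLMs train on terabytes of text).
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count which token follows which. This bigram table is a drastic
# stand-in for the patterns a transformer learns in its parameters.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt, n_tokens=5):
    """Token-by-token generation: repeatedly predict the next token from the
    current one, the same loop an LLM runs at vastly larger scale."""
    out = prompt.split()
    for _ in range(n_tokens):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no known continuation
        out.append(candidates.most_common(1)[0][0])  # greedy decoding
    return " ".join(out)

print(generate("the cat"))
```

Each appended token is chosen from the model’s learned statistics given the tokens so far, which is why LLM output is fluent pattern continuation rather than retrieval from stored documents.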
While these numbers are impressive, such mammoth models bring challenges: resource-intensive training, environmental impact, potential biases, and more. Large language models serve as virtual assistants with broad knowledge, aiding in a spectrum of language-related tasks. They contribute to writing, offer information, provide creative suggestions, and engage in conversation, making human-technology interaction more natural. However, users should be aware of their limitations and regard them as tools rather than infallible sources of truth.

What Constitutes the Training of Large Language Models?

Training a large language model is analogous to teaching a robot to comprehend and use human language. The process involves:

Fine-Tuning: A Closer Look

Fine-tuning means further training a pre-trained model on a dataset that is more specific and compact than the original. It is akin to taking a robot proficient in various cuisines and specializing it in Italian dishes using a dedicated cookbook. The significance of fine-tuning lies in:

Versioning and Progression

Large language models evolve through versions, with changes in size, training data, or parameters. Each iteration aims to address weaknesses, handle a broader range of tasks, or minimize biases and errors. The progression is simplified as follows: in essence, large language model versions are like successive editions of a book series, each release striving for refinement, expansiveness, and more capable behavior.
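Fine-tuning, meaning continued training on a smaller domain-specific dataset, can be illustrated with a toy frequency model. This is an analogy only: real fine-tuning updates neural-network weights by gradient descent, and the tiny corpora here are invented.

```python
import copy
from collections import Counter, defaultdict

def train(tokens, counts=None):
    """Count next-token frequencies; pass existing counts to continue training."""
    counts = counts if counts is not None else defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, token):
    """Greedy next-token prediction: the most frequent continuation."""
    return counts[token].most_common(1)[0][0]

# "Pre-training" on a broad corpus.
base = train("the model reads the text and the model learns".split())

# "Fine-tuning": continue training a copy of the pre-trained counts on a small,
# domain-specific corpus, so domain patterns start to dominate predictions.
domain = "the recipe uses the basil and the recipe serves the recipe".split()
tuned = train(domain, copy.deepcopy(base))

print(predict(base, "the"), "->", predict(tuned, "the"))
```

The fine-tuned model keeps everything it learned before but now favors domain vocabulary, which is exactly why fine-tuning is cheaper than training a specialist model from scratch.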
