Licensing Archives - gettectonic.com

AI Model Race Intensifies

AI Model Race Intensifies as OpenAI, Google, and DeepSeek Roll Out New Releases

The generative AI competition is heating up as major players like OpenAI, Google, and DeepSeek rapidly release upgraded models. However, enterprises are shifting focus from incremental model improvements to agentic AI: systems that autonomously perform complex tasks.

Three Major Releases in 24 Hours

This week saw a flurry of AI advancements.

Competition Over Innovation?

While the rapid releases highlight the breakneck pace of AI development, some analysts see diminishing differentiation between models.

The Future: Agentic AI and Real-World Use Cases

As model fatigue sets in, businesses are focusing on domain-specific AI applications that deliver measurable ROI. The AI race continues, but the real winners will be those who translate cutting-edge models into practical, agent-driven solutions.

Key Takeaways

✔ DeepSeek's open-source V3 pressures rivals to embrace transparency.
✔ GPT-4o's hyper-realistic images raise deepfake concerns.
✔ Gemini 2.5 focuses on structured reasoning for complex tasks.
✔ Agentic AI, not just model upgrades, is the next enterprise priority.

Related Posts

Salesforce OEM AppExchange: Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers.
The Salesforce Story: In Marc Benioff's own words. How did salesforce.com grow from a start-up in a rented apartment into the world's ...
Salesforce Jigsaw: Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for ...
Service Cloud with AI-Driven Intelligence: Salesforce Enhances Service Cloud with AI-Driven Intelligence Engine. Data science and analytics are rapidly becoming standard features in enterprise applications, ...


The Catalytic Potential of Agentic AI in Cloud Computing

Artificial intelligence continues to drive a technological flywheel where each breakthrough enables more sophisticated systems. While generative AI has dominated discourse since ChatGPT's 2022 debut, 2025 appears poised to become the year of agentic AI, marking a paradigm shift from passive information processing toward proactive, autonomous systems capable of executing complex workflows.

The Rise of Autonomous AI Agents

Unlike conventional chatbots that facilitate human-led interactions, agentic AI systems operate independently to complete multi-step processes. These autonomous agents demonstrate capabilities ranging from specialized functions like sales outreach and travel booking to broader applications in cybersecurity and human resources. Industry analysts anticipate these systems will follow an adoption curve reminiscent of early internet technologies, potentially creating multi-billion dollar markets as they become embedded in daily operations.

Cloud infrastructure providers stand to benefit significantly from this evolution. The computational demands of autonomous agents, including increased data generation, processing requirements, and storage needs, may accelerate cloud adoption across industries. This trend presents opportunities throughout the technology value chain, from foundational infrastructure to specialized software solutions.

Market Dynamics and Growth Projections

Recent industry surveys indicate strong momentum for agentic AI adoption: current projections estimate the agentic AI market reaching $47 billion by 2030.

Infrastructure Implications and Emerging Opportunities

The rise of autonomous AI systems is driving several structural changes in technology markets.

Industry Adoption and Commercialization

Leading technology providers have moved aggressively to capitalize on this trend. These developments suggest agentic AI is already reshaping enterprise software economics while demonstrating strong market acceptance despite premium pricing.
Strategic Implications

Agentic AI represents more than technological evolution; it signals a fundamental shift in how enterprises leverage artificial intelligence by automating complex workflows and decision-making processes. As the technology matures, agentic AI appears poised to catalyze the next phase of cloud computing growth while creating new opportunities across the technology ecosystem. For enterprises and investors alike, understanding and positioning for this transition may prove critical in the coming years.


Data Cloud Billable Usage

Data Cloud Billable Usage Overview

Usage of certain Data Cloud features impacts credit consumption. To track usage, access your Digital Wallet within your Salesforce org. For specific billing details, refer to your contract or contact your Account Executive.

Important Notes

⚠️ Customer Data Platform (CDP) Licensing: If your Data Cloud org operates under a CDP license, refer to Customer Data Platform Billable Usage Calculations instead.
⚠️ Sandbox Usage: Data Cloud sandbox consumption affects credits, with usage tracked separately on Data Cloud sandbox cards.

Understanding Usage Calculations

Credit consumption equals the number of units used multiplied by the rate-card multiplier for that usage type. Consumption is categorized as follows.

1. Data Service Usage

Service usage is measured by records processed, queried, or analyzed.

Batch Data Pipeline: Based on the volume of batch data processed via Data Cloud data streams.
Batch Data Transforms: Measured by the higher of rows read vs. rows written. Incremental transforms only count changed rows after the first run.
Batch Profile Unification: Based on source profiles processed by an identity resolution ruleset. After the first run, only new/modified profiles are counted.
Batch Calculated Insights: Based on the number of records in underlying objects used to generate Calculated Insights.
Data Queries: Based on records processed, which depends on query structure and total records in the queried objects.
Unstructured Data Processed: Measured by the amount of unstructured data (PDFs, audio/video files) processed.
Streaming Data Pipeline: Based on records ingested through real-time data streams (web, mobile, streaming ingestion API).
Streaming Data Transforms: Measured by the number of records processed in real-time transformations.
Streaming Calculated Insights: Based on the number of records processed in streaming insights calculations.
Streaming Actions (including lookups): Measured by the number of records processed in data lookups and enrichments.
Inferences: Based on predictive AI model usage, including predictions, prescriptions, and top predictors. Applies to internal (Einstein AI) and external (BYOM) models.
Data Share Rows Shared (Data Out): Based on the new/changed records processed for data sharing.
Data Federation or Sharing Rows Accessed: Based on records returned from external data sources. Only cross-region/cross-cloud queries consume credits.
Sub-second Real-Time Events & API: Based on profile events, engagement events, and API calls in real-time processing.
Private Connect Data Processed: Measured by GB of data transferred via private network routes.

🔹 Retired Billing Categories: Accelerated Data Queries and Real-Time Profile API (no longer billed after August 16, 2024).

2. Data Storage Allocation

Storage usage applies to Data Cloud, Data Cloud for Marketing, and Data Cloud for Tableau.

Storage Beyond Allocation: Measured by data storage exceeding your allocated limit.

3. Data Spaces

Data Spaces: Usage is based on the number of data spaces beyond the default allocation.

4. Segmentation & Activation

Usage applies to Data Cloud for Marketing customers and is based on records processed, queried, or activated.

Segmentation: Based on the number of records processed for segmentation.
Batch Activations: Measured by records processed for batch activations.
Activate DMO – Streaming: Based on new/updated records in the Data Model Object (DMO) during an activation. If a data graph is used, the count is doubled.

5. Ad Audiences Service Usage

Usage is calculated based on the number of ad audience targets created.

Ad Audiences: Measured by the number of ad audience targets generated.

6. Data Cloud Real-Time Profile

Real-time service usage is based on the number of records associated with real-time data graphs.

Sub-second Real-Time Profiles & Entities: Based on the unique real-time data graph records appearing in the cache during the billing month. Each unique record is counted only once, even if it appears multiple times.

📌 Example: If a real-time data graph contains 10M cached records on day one, and 1M new records are added daily for 30 days, the total count would be 40M records.

7. Customer Data Platform (CDP) Billing

Orgs previously licensed as Customer Data Platform are billed based on contracted entitlements. Understanding these calculations can help optimize data management and cost efficiency.

Track & Manage Your Usage

🔹 Digital Wallet: Monitor Data Cloud consumption across all categories.
🔹 Feature & Usage Documentation: Review guidelines before activating features to optimize cost.
🔹 Account Executive Consultation: Contact your AE to understand credit consumption and scalability options.
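As a rough illustration, the two calculations described above (units multiplied by a rate-card multiplier, and unique-record counting for real-time profiles) can be sketched as follows. The category names and multipliers are hypothetical stand-ins; your actual rate card is defined in your contract.

```python
# Hypothetical sketch of Data Cloud credit math; the categories and
# multipliers below are illustrative, not Salesforce's actual rate card.

RATE_CARD = {
    "batch_data_pipeline": 2.0,  # credits per unit (illustrative)
    "data_queries": 1.0,
}

def credits_consumed(usage):
    """Credits = units used x the rate-card multiplier for that usage type."""
    return sum(units * RATE_CARD[category] for category, units in usage.items())

def realtime_billable_records(initial_cached, added_per_day, days):
    """Unique real-time data graph records count once per billing month."""
    return initial_cached + added_per_day * days

print(credits_consumed({"batch_data_pipeline": 100, "data_queries": 250}))  # 450.0
# The example above: 10M cached on day one + 1M new records/day for 30 days
print(realtime_billable_records(10_000_000, 1_000_000, 30))  # 40,000,000
```

The second function reproduces the documented example: the 10M initial records plus 30M new records yield 40M billable records, because repeats of a cached record are never recounted.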


AI Market Heat

Alibaba Feels the Heat as DeepSeek Shakes Up AI Market

Chinese tech giant Alibaba is under pressure following the release of an AI model by Chinese startup DeepSeek that has sparked a major reaction in the West. DeepSeek claims to have trained its model, comparable to advanced Western AI, at a fraction of the cost and with significantly fewer AI chips. In response, Alibaba launched Qwen 2.5-Max, its latest AI language model, on Tuesday, just one day before the Lunar New Year, when much of China's economy typically slows down for a 15-day holiday.

A Closer Look at Qwen 2.5-Max

Qwen 2.5-Max is a Mixture of Experts (MoE) model trained on 20 trillion tokens. It has undergone supervised fine-tuning and reinforcement learning from human feedback to enhance its capabilities. MoE models function by using multiple specialized "minds," each focused on a particular domain. When a query is received, the model dynamically routes it to the most relevant expert, improving efficiency. For instance, a coding-related question would be processed by the model's coding expert. This MoE approach reduces computational requirements, making training more cost-effective and faster. Other AI vendors, such as France-based Mistral AI, have also embraced this technique.

DeepSeek's Disruptive Impact

While Qwen 2.5-Max is not a direct competitor to DeepSeek's R1 model, whose release triggered a global selloff in AI stocks, it is similar to DeepSeek-V3, another MoE-based model launched earlier this month. Alibaba's swift release underscores the competitive threat posed by DeepSeek. As the world's fourth-largest public cloud vendor, Alibaba, along with other Chinese tech giants, has been forced to respond aggressively. In the wake of DeepSeek R1's debut, ByteDance, the owner of TikTok, also rushed to update its AI offerings. DeepSeek has already disrupted the AI market by significantly undercutting costs.
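The expert routing described above can be sketched in a highly simplified form. In a real MoE layer the gate is a learned network over token representations; the keyword gate and expert names here are purely illustrative.

```python
# Toy sketch of Mixture of Experts routing (illustrative only): a gate picks
# one specialized expert per query, so only a fraction of the model's
# parameters run for any given input.

def coding_expert(query):
    return f"[coding expert] {query}"

def general_expert(query):
    return f"[general expert] {query}"

EXPERTS = {"coding": coding_expert, "general": general_expert}

def gate(query):
    # Stand-in for a learned gating network: crude keyword routing.
    keywords = ("code", "python", "function", "bug")
    return "coding" if any(k in query.lower() for k in keywords) else "general"

def route(query):
    return EXPERTS[gate(query)](query)

print(route("How do I fix this Python function?"))  # handled by the coding expert
print(route("Plan a trip to Kyoto"))                # handled by the general expert
```

Because only the selected expert executes, compute per query stays roughly constant even as the total number of experts (and parameters) grows, which is the efficiency property the article describes.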
In 2023, the startup introduced V2 at just 1 yuan ($0.14) per million tokens, prompting a price war. By comparison, OpenAI's GPT-4 starts at $10 per million tokens, a staggering difference. The timing of Alibaba's and ByteDance's latest releases suggests that DeepSeek has accelerated product development cycles across the industry, forcing competitors to move faster than planned. "Alibaba's cloud unit has been rapidly advancing its AI technology, but the pressure from DeepSeek's rise is immense," said Lisa Martin, an analyst at Futurum Group.

A Shifting AI Landscape

DeepSeek's rapid growth reflects a broader shift in the AI market, one driven by leaner, more powerful models that challenge conventional approaches. "The drive to build more efficient models continues," said Gartner analyst Arun Chandrasekaran. "We're seeing significant innovation in algorithm design and software optimization, allowing AI to run on constrained infrastructure while being more cost-competitive." This evolution is not happening in isolation. "AI companies are learning from one another, continuously reverse-engineering techniques to create better, cheaper, and more efficient models," Chandrasekaran added.

The AI industry's perception of cost and scalability has fundamentally changed. Sam Altman, CEO of OpenAI, previously estimated that training GPT-4 cost over $100 million, but DeepSeek claims it built R1 for just $6 million. "We've spent years refining how transformers function, and the efficiency gains we're seeing now are the result," said Omdia analyst Bradley Shimmin. "These advances challenge the idea that massive computing power is required to develop state-of-the-art AI."

Competition and Data Controversies

DeepSeek's success showcases the increasing speed at which AI innovation is happening. Its distillation technique, which trains smaller models using insights from larger ones, has allowed it to create powerful AI while keeping costs low.
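A minimal sketch of that distillation idea, assuming the standard soft-target formulation (a small student trained to match a larger teacher's output distribution); this is a generic illustration, not DeepSeek's actual training recipe.

```python
# Generic knowledge-distillation loss sketch (illustrative, pure Python):
# the student is penalized by cross-entropy against the teacher's softened
# output distribution instead of hard labels.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's and student's soft distributions."""
    teacher = softmax(teacher_logits, temperature)
    student = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher, student))

teacher = [2.0, 0.5, -1.0]
aligned = distillation_loss([2.0, 0.5, -1.0], teacher)
misaligned = distillation_loss([-1.0, 0.5, 2.0], teacher)
print(aligned < misaligned)  # True: matching the teacher lowers the loss
```

The temperature softens both distributions so the student also learns the teacher's relative preferences among wrong answers, which is where much of the "insight from the larger model" transfers.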
However, OpenAI and Microsoft are now investigating whether DeepSeek improperly used their models' data to train its own AI, a claim that, if true, could escalate into a major dispute. Ironically, OpenAI itself has faced similar accusations, leading some enterprises to prefer using its models through Microsoft Azure, which offers additional compliance safeguards. "The future of AI development will require stronger security layers," Shimmin noted. "Enterprises need assurances that using models like Qwen 2.5 or DeepSeek R1 won't expose their data."

For businesses evaluating AI models, licensing terms matter. Alibaba's Qwen 2.5 series operates under an Apache 2.0 license, while DeepSeek uses an MIT license; both are highly permissive, allowing companies to scrutinize the underlying code and ensure compliance. "These licenses give businesses transparency," Shimmin explained. "You can vet the code itself, not just the weights, to mitigate privacy and security risks."

The Road Ahead

The AI arms race between DeepSeek, Alibaba, OpenAI, and other players is just beginning. As vendors push the limits of efficiency and affordability, competition will likely drive further breakthroughs, and potentially reshape the AI landscape faster than anyone anticipated.


Scarf and Salesforce

Scarf Integrates Open Source Software Tracking Platform with Salesforce

At KubeCon + CloudNativeCon 2024, Scarf announced the integration of its open-source software usage tracking platform with Salesforce CRM. This integration arrives as debates around the definition and economics of open source remain a hot topic in the tech community. Scarf also introduced updates to its platform, including enhanced event data correction and flagging capabilities for improved accuracy in company matching and attribution. New data filtering options were also added for more refined data exports.

The Scarf platform enables IT vendors to identify organizations consuming open-source software at significant scale, presenting opportunities to offer additional support or promote commercial add-ons for open-source tools. To date, the Scarf gateway has tracked over seven billion events, connecting usage data to specific organizations via attributes such as internet addresses.

Strengthening the Open Source Ecosystem

Scarf CEO Avi Press emphasized the platform's role in maintaining the economic viability of the open-source ecosystem, often in partnership with organizations like The Linux Foundation. Without these insights, fewer IT vendors would sponsor open-source projects, Press noted, which would hinder the ecosystem's growth and sustainability. However, the open-source community frequently experiences friction. Licensing changes by IT vendors often lead to project forks, with contributors reverting to previous licensing terms, sometimes backed by cloud providers. Press believes targeted commercial value opportunities, supported by tools like Scarf, can reduce this friction by fostering more productive engagements between vendors and organizations.

Challenges and Evolving Definitions in Open Source

While open source remains foundational to the tech world, it continues to face ideological and practical challenges. For decades, debates over licensing models have sparked disagreements, including the current contention around defining open-source AI models. Many models fail to disclose critical training details, leading to further disputes. Ultimately, each organization must navigate these issues by adopting its own definition of open source and deciding how best to support the ecosystem. Tools like Scarf's platform aim to bridge gaps, enabling IT vendors and organizations to collaborate more effectively, ensuring the continued growth of open source.


Join Datasets From Multiple Salesforce Connections

Combining Data from Two Salesforce Instances and Publishing to Tableau Server

If you're working with two Salesforce instances and need to create a unified dataset for Tableau, here's how you can tackle the challenges and achieve your goals.

Recommended Approach

1. Use Tableau Prep for Data Combination

Tableau Prep is an ideal tool to connect to multiple Salesforce instances and combine data into a single dataset via a union or join step in the flow.

2. Create Extracts in Tableau Desktop

If you need to stick with Tableau Desktop, create an extract from each Salesforce connection and blend the extracts rather than the live connections.

3. Version Compatibility and Troubleshooting

Check that your Prep and Desktop versions are compatible with your Tableau Server before publishing.

Outcome

Using Tableau Prep or carefully leveraging Tableau Desktop blending, you can create a unified dataset from two Salesforce instances and publish it for broader use. Prep is particularly effective for your scenario, offering streamlined workflows and better server compatibility.
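Outside of Tableau, the same union can be sketched in plain Python. The fetch function below is a hypothetical stand-in for querying each org's API (for example via the simple-salesforce library); the point is the pattern of tagging each row with its source org so the combined dataset stays traceable.

```python
# Hedged sketch: union the same object from two Salesforce orgs and tag each
# row with its source. The record data and org names are illustrative
# stand-ins for real API queries.

def fetch_accounts(org):
    # Stand-in for a SOQL query such as "SELECT Id, Name FROM Account".
    sample = {
        "org_a": [{"Id": "001A000001", "Name": "Acme"}],
        "org_b": [{"Id": "001B000001", "Name": "Globex"}],
    }
    return sample[org]

def union_orgs(orgs):
    combined = []
    for org in orgs:
        for record in fetch_accounts(org):
            combined.append({**record, "SourceOrg": org})  # tag provenance
    return combined

rows = union_orgs(["org_a", "org_b"])
print(len(rows))             # 2
print(rows[0]["SourceOrg"])  # org_a
```

The SourceOrg column mirrors what Tableau Prep adds automatically when you union tables ("Table Names" field), and makes it easy to filter or audit by origin after publishing.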


Artificial Intelligence and Sales Cloud

AI enhances the sales process at every stage, making it more efficient and effective. Salesforce's AI technology, Einstein, streamlines data entry and offers predictive analysis, empowering sales teams to maximize every opportunity. Sales Cloud integrates several AI-driven features powered by Einstein and machine learning. To get the most out of these tools, review which features align with your needs and check the licensing requirements for each one.

Einstein and Data Usage in Sales Cloud

Einstein thrives on data. To fully leverage its capabilities within Sales Cloud, consult the data usage table to understand which types of data Einstein features rely on.

Setting Up Einstein Opportunity Scoring in Sales Cloud

Einstein Opportunity Scoring, part of the Sales Cloud Einstein suite, is available to eligible customers at no additional cost. Simply activate Einstein, and the system will handle the rest, offering predictive insights to improve your sales pipeline.

Managing Access to Einstein Features in Sales Cloud

Sales Cloud users can access Einstein Opportunity Scoring through the Sales Cloud Einstein For Everyone permission set. Ensure the right team members have access by reviewing the permissions, features included, and how to manage assignments.

Einstein Copilot Setup for Sales

Einstein Copilot helps sales teams stay organized by guiding them through deal management, closing strategies, customer communications, and sales forecasting. Each Copilot action corresponds to specific topics designed to optimize the sales process.


Generative AI Energy Consumption Rises

Generative AI Energy Consumption Rises, but Impact on ROI Unclear

The energy costs associated with generative AI (GenAI) are often overlooked in enterprise financial planning. However, industry experts suggest that IT leaders should account for the power consumption that comes with adopting this technology. When building a business case for generative AI, some costs are evident, like large language model (LLM) fees and SaaS subscriptions. Other costs, such as preparing data, upgrading cloud infrastructure, and managing organizational changes, are less visible but significant.

One often overlooked cost is the energy consumption of generative AI. Training LLMs and responding to user requests, whether answering questions or generating images, demands considerable computing power. These tasks generate heat and necessitate sophisticated cooling systems in data centers, which, in turn, consume additional energy. Despite this, most enterprises have not focused on the energy requirements of GenAI. However, the issue is gaining more attention at a broader level. The International Energy Agency (IEA), for instance, has forecast that electricity consumption from data centers, AI, and cryptocurrency could double by 2026. By that time, data centers' electricity use could exceed 1,000 terawatt-hours, equivalent to Japan's total electricity consumption. Goldman Sachs also flagged the growing energy demand, attributing it partly to AI. The firm projects that global data center electricity use could more than double by 2030, fueled by AI and other factors.

ROI Implications of Energy Costs

The extent to which rising energy consumption will affect GenAI's return on investment (ROI) remains unclear. For now, the perceived benefits of GenAI seem to outweigh concerns about energy costs. Most businesses have not been directly impacted, as these costs tend to affect hyperscalers more. For instance, Google reported a 13% increase in greenhouse gas emissions in 2023, largely due to AI-related energy demands in its data centers. Scott Likens, PwC's global chief AI engineering officer, noted that while energy consumption isn't a barrier to adoption, it should still be factored into long-term strategies. "You don't take it for granted. There's a cost somewhere for the enterprise," he said.

Energy Costs: Hidden but Present

Although energy expenses may not appear on an enterprise's invoice, they are still present. Generative AI's energy consumption is tied to both model training and inference; each time a user makes a query, the system expends energy to generate a response. While the energy used for individual queries is minor, the cumulative effect across millions of users can add up. How these costs are passed to customers is somewhat opaque. Licensing fees for enterprise versions of GenAI products likely include energy costs, spread across the user base. According to PwC's Likens, the costs associated with training models are shared among many users, reducing the burden on individual enterprises. On the inference side, GenAI vendors charge for tokens, which correspond to computational power. Although increased token usage signals higher energy consumption, the financial impact on enterprises has so far been minimal, especially as token costs have decreased. The effect may be similar to buying an EV to save on gas, only to spend hundreds of dollars and lose hours at charging stations.

Energy as an Indirect Concern

While energy costs haven't been top-of-mind for GenAI adopters, they could indirectly address the issue by focusing on other deployment challenges, such as reducing latency and improving cost efficiency. Newer models, such as OpenAI's GPT-4o mini, are more economical and have helped organizations scale GenAI without prohibitive costs. Organizations may also use smaller, fine-tuned models to decrease latency and energy consumption. By adopting multimodel approaches, enterprises can choose models based on the complexity of a task, optimizing for both speed and energy efficiency.

The Data Center Dilemma

As enterprises consider GenAI's energy demands, data centers face the challenge head-on, investing in more sophisticated cooling systems to handle the heat generated by AI workloads. According to the Dell'Oro Group, the data center physical infrastructure market grew in the second quarter of 2024, signaling the start of the "AI growth cycle" for infrastructure sales, particularly thermal management systems. Liquid cooling, more efficient than air cooling, is gaining traction as a way to manage the heat from high-performance computing. This method is expected to see rapid growth in the coming years as demand for AI workloads continues to increase.

Nuclear Power and AI Energy Demands

To meet AI's growing energy demands, some hyperscalers are exploring nuclear energy for their data centers. AWS, Google, and Microsoft are among the companies exploring this option, with AWS acquiring a nuclear-powered data center campus earlier this year. Nuclear power could help these tech giants keep pace with AI's energy requirements while also meeting sustainability goals, though tying AI's growth to more nuclear power plants could cost the technology some public support. As GenAI continues to evolve, both energy costs and efficiency are likely to play a greater role in decision-making. PwC has already begun including carbon impact as part of its GenAI value framework, which assesses the full scope of generative AI deployments. "The cost of carbon is in there, so we shouldn't ignore it," Likens said.


Growing Energy Consumption in Generative AI

Growing Energy Consumption in Generative AI, but ROI Impact Remains Unclear The rising energy costs associated with generative AI aren’t always central in enterprise financial considerations, yet experts suggest IT leaders should take note. Building a business case for generative AI involves both obvious and hidden expenses. Licensing fees for large language models (LLMs) and SaaS subscriptions are visible expenses, but less apparent costs include data preparation, cloud infrastructure upgrades, and managing organizational change. Growing Energy Consumption in Generative AI. One under-the-radar cost is the energy required by generative AI. Training LLMs demands vast computing power, and even routine AI tasks like answering user queries or generating images consume energy. These intensive processes require robust cooling systems in data centers, adding to energy use. While energy costs haven’t been a focus for GenAI adopters, growing awareness has prompted the International Energy Agency (IEA) to predict a doubling of data center electricity consumption by 2026, attributing much of the increase to AI. Goldman Sachs echoed these concerns, projecting data center power consumption to more than double by 2030. For now, generative AI’s anticipated benefits outweigh energy cost concerns for most enterprises, with hyperscalers like Google bearing the brunt of these costs. Google recently reported a 13% increase in greenhouse gas emissions, citing AI as a major contributor and suggesting that reducing emissions might become more challenging with AI’s continued growth. Growing Energy Consumption in Generative AI While not a barrier to adoption, energy costs play into generative AI’s long-term viability, noted Scott Likens, global AI engineering leader at PwC, emphasizing that “there’s energy being used — you don’t take it for granted.” Energy Costs and Enterprise Adoption Generative AI users might not see a line item for energy costs, yet these are embedded in fees. 
Ryan Gross of Caylent points out that the costs are mainly tied to model training and inferencing, with each model query, though individually minor, adding up over time. These expenses are often spread across the customer base, as companies pay for generative AI access through a licensing model. A PwC sustainability study showed that GenAI power costs, particularly from model training, are distributed among licensees. Token-based pricing for LLM usage also reflects inferencing costs, though these charges have decreased. Likens noted that the largest expenses still come from infrastructure and data management rather than energy.

Potential Efficiency Gains

Though energy isn’t a primary consideration, enterprises could reduce consumption indirectly through technological advancements. Newer, more cost-efficient models like OpenAI’s GPT-4o mini are 60% less expensive per token than prior versions, enabling organizations to deploy GenAI on a larger scale while keeping costs lower. Small, fine-tuned models can be used to address latency and lower energy consumption, part of a “multimodel” approach that can provide different accuracy and latency levels with varying energy demands.

Agentic AI also offers opportunities for cost and energy savings. By breaking down tasks and routing them through specialized models, companies can minimize latency and reduce power usage. According to Likens, using agentic architecture could cut costs and consumption, particularly when tasks are routed to more efficient models.

Rising Data Center Energy Needs

While enterprises may feel shielded from direct energy costs, data centers bear the growing power demand. Cooling solutions are evolving, with liquid cooling systems becoming more prevalent for AI workloads. As data centers face the “AI growth cycle,” the demand for energy-efficient cooling solutions has fueled a resurgence in thermal management investment.
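The multimodel routing idea described above can be sketched as a simple dispatcher that keeps routine requests on a small, cheaper model and escalates only complex ones. The model names, prices, and complexity heuristic here are hypothetical placeholders, not any vendor's actual tiers:

```python
# Hypothetical per-1K-token prices and relative energy weights for two tiers.
MODELS = {
    "small-model": {"usd_per_1k_tokens": 0.0002, "energy_weight": 1.0},
    "large-model": {"usd_per_1k_tokens": 0.0050, "energy_weight": 8.0},
}

def route(task: str) -> str:
    """Crude heuristic: long or reasoning-heavy prompts go to the large model."""
    complex_markers = ("analyze", "plan", "reason", "multi-step")
    if len(task) > 500 or any(m in task.lower() for m in complex_markers):
        return "large-model"
    return "small-model"

def estimated_cost(task: str, tokens: int) -> float:
    """Token-based price for a request under the hypothetical tiers above."""
    model = MODELS[route(task)]
    return tokens / 1000 * model["usd_per_1k_tokens"]

# Routine lookups stay on the cheap, low-energy tier; only complex tasks escalate.
assert route("What are our store hours?") == "small-model"
assert route("Analyze Q3 churn and plan a retention campaign") == "large-model"
```

In a real agentic architecture the heuristic would be a classifier or an orchestrating agent, but the economics are the same: most traffic lands on the efficient tier, cutting both spend and power draw.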
Liquid cooling, being more efficient than air cooling, is gaining traction due to the power demands of AI and high-performance computing. IDTechEx projects that data center liquid cooling revenue could exceed $50 billion by 2035. Meanwhile, data centers are exploring nuclear power, with AWS, Google, and Microsoft among those considering nuclear energy as a sustainable solution to meet AI’s power demands.

Future ROI Considerations

While enterprises remain shielded from the full energy costs of generative AI, careful model selection and architectural choices could help curb consumption. PwC, for instance, factors in the “carbon impact” as part of its GenAI deployment strategy, recognizing that energy considerations are now part of the generative AI value proposition. As organizations increasingly factor sustainability into their tech decisions, energy efficiency might soon play a larger role in generative AI ROI calculations.

Read More
Collaborate With AI

Collaborate With AI

Many artists, writers, musicians, and creators fear that AI is taking over their jobs. On the surface, generative AI tools can replicate work in moments that previously took creators hours to produce, often at a fraction of the cost and with similar quality. This shift has led many businesses to adopt AI for content creation, leaving creators worried about their livelihoods. Yet there’s another way to view this situation, one that offers hope to creators everywhere.

AI, at its core, is a tool of mimicry. When provided with enough data, it can replicate a style or subject with reasonable accuracy. Most of this data has been scraped from the internet, often without explicit consent, to train AI models on a wide variety of creative outputs. If you’re a creator, it’s likely that pieces of your work have contributed to the training of these AI models. Your art, words, and ideas have helped shape what these systems now consider ‘good’ in the realms of art, music, and writing.

AI can combine the styles of multiple creators to generate something new, but often these creations fall flat. Why? While image-generating AI can predict pixels, it lacks an understanding of human emotions. It knows what a smile looks like but can’t grasp the underlying feelings of joy, nervousness, or flirtation that make a smile truly meaningful. AI can only generate a superficial replica unless the creator uses extensive prompt engineering to convey the context behind that smile.

Emotion is uniquely human, and it’s what makes our creations resonate with others. A single brushstroke from a human artist can convey emotions that might take thousands of words to replicate through an AI prompt. We’ve all heard the saying, “A picture is worth a thousand words.” But generating that picture with AI often takes many more words. Input a short prompt, and the AI will enhance it with more words, often leading to results that stray from your original vision.
To achieve a specific outcome, you may need hours of prompt engineering, trial, and error, and even then the result might not be quite right. Without a human artist to guide the process, these generated works will often remain unimpressive, no matter how advanced the technology becomes.

That’s where you, the creator, come in. By introducing your own inputs, such as images or sketches, and using workflows like those in ComfyUI, you can exert more control over the outputs. AI becomes less of a replacement for the artist and more of a tool or collaborator. It can help speed up the creative process but still relies on the artist’s hand to guide it toward a meaningful result.

Artists like Martin Nebelong have embraced this approach, treating AI as just another tool in their creative toolbox. Nebelong uses high levels of control in AI-driven workflows to create works imbued with his personal emotional touch. He shares these workflows on platforms like LinkedIn and Twitter, encouraging other creators to explore how AI can speed up their processes while retaining the unique artistry that only humans can provide. Nebelong’s philosophy is clear: “I’m pro-creativity, pro-art, and pro-AI. Our tools change, the scope of what we can do changes. I don’t think creative AI tools or models have found their best form yet; they’re flawed, raw, and difficult to control. But I’m excited for when they find that form and can act as an extension of our hands, our brush, and as an amplifier of our artistic intent.”

AI can help bring an artist 80% of the way to a finished product, but it’s the final 20%, the part where human skill and emotional depth come in, that elevates the piece to something truly remarkable. Think about the notorious issues with AI-generated hands. Often the output features too many fingers or impossible poses, a telltale sign of AI’s limitations. An artist is still needed to refine the details, correct mistakes, and bring the creation in line with reality.
While using AI may be faster than organizing a full photoshoot or painting from scratch, the artist’s role has shifted from full authorship to that of a collaborator, guiding AI toward a polished result. Nebelong often starts with his own artwork and integrates AI-generated elements, using them to enhance but never fully replace his vision. He might even use AI to generate 3D models, lighting, or animations, but the result is always driven by his creativity. For him, AI is just another step in the creative journey, not a shortcut or replacement for human effort.

However, AI’s ability to replicate the styles of famous artists and public figures raises ethical concerns. With platforms like CIVIT.AI making it easy to train models on any style or subject, questions arise about the legality and morality of using someone else’s likeness or work without permission. As regulations catch up, we may see a future where AI models trained on specific styles or individuals are licensed, allowing creators to retain control over their works in the same way they license their traditional creations today.

The future may also see businesses licensing AI models trained on actors, artists, or styles, allowing them to produce campaigns without booking the actual talent. This would lower costs while still benefiting creators through licensing fees. Actors and artists could continue to contribute their talents long after they’ve retired, or even passed on, by licensing their digital likenesses, as seen with CGI performances in movies like Rogue One.

In conclusion, AI is pushing creators to learn new skills and adapt to new tools. While this can feel daunting, it’s important to remember that AI is just that: a tool. It doesn’t understand emotion, intent, or meaning, and it never will. That’s where humans come in. By guiding AI with our creativity and emotional depth, we can produce works that resonate with others on a deeper level.
For example, you can tell artificial intelligence what an image should look like, but not what emotions the image should evoke. Creators, your job isn’t disappearing. It’s evolving.

Read More
Licensing and Permitting with Salesforce Public Sector Solutions

Licensing and Permitting with Salesforce Public Sector Solutions

Licensing, Permitting, and Inspections

Inspections are a crucial part of the licensing and permitting process, whether they involve a new home, a business seeking to open, or a follow-up based on a public complaint. Licensing and Permitting with Salesforce Public Sector Solutions aids in the critical steps of the process. Inspections can also be used independently for other assessments related to regulatory requirements, safety, and auditing.

Assignments
Inspections can be assigned with just a few clicks. The application reviewer or inspection dispatcher can quickly designate an inspector and schedule the visit.

Mobile Inspections
Public Sector Mobile Inspection automatically notifies inspectors of their daily visit plans on their mobile devices. Inspectors can use filters to view other days or prioritize tasks based on urgency and status.

Inspector Checklists
Configurable inspection checklists help ensure that inspectors don’t miss any steps during their onsite visits, enhancing community safety and reducing the need for follow-up inspections.

Assessment Indicators
Inspectors document compliance or violations against regulatory codes using configurable fields. They can also upload files, videos, or pictures from their mobile devices to support their assessments.

Regulatory Codes
Inspectors can easily reference relevant regulatory codes to verify their assessments, ensuring accuracy and compliance.

Digital Signatures
Digital signatures are captured on-site, eliminating the need for additional paperwork and streamlining the inspection process. No more emails, stamps, or standing in line.

Enforcement
Compliance officers can follow up on violations and create enforcement actions to ensure that stakeholders address any outstanding issues.

Unified View
Government agencies can access a unified 360-degree view of all relevant information in one place, enabling them to track resolution progress and assess final compliance.
Experience Portal
Throughout the process, stakeholders can stay informed about the status of their inspection and communicate with agency employees to ask questions or provide updates. Salesforce Experience Cloud provides an easy-to-apply solution for a constituent portal.

Licensing and Permitting with Salesforce Public Sector Solutions

With Salesforce Licensing and Permitting, you can download and install process libraries that contain components for automating licensing and permitting workflows, saving more time. Public Sector Solutions provides OmniScript flows and components that automate these licensing and permitting workflows. Some components are available directly in Public Sector Solutions; others are not built in and require that you download them from GitHub.

Read More

AI and Consumer Goods Cloud

Salesforce’s latest “rolling thunder” of AI enhancements brings significant innovations to Consumer Goods Cloud, leveraging the power of the Einstein AI platform already integrated into Sales Cloud and Service Cloud. These enhancements are designed to optimize planning and execution for consumer goods companies.

Salesforce Consumer Goods Cloud is an industry-specific solution that helps consumer goods companies streamline their route-to-market processes. By unifying trade promotion management and retail execution capabilities on a single platform, it enables seamless collaboration between headquarters and field teams. Utilizing Salesforce’s core CRM functionality and the Einstein AI platform, Consumer Goods Cloud empowers companies with data-driven insights and intelligent automation to drive profitable growth.

“Consumer goods companies are laser-focused on profitable growth. With the latest Salesforce innovations for Consumer Goods Cloud, they can unify consumer and customer data to plan promotions precisely, equip every field rep with tools to increase sales and reduce downtime, and integrate trusted AI into every service agent’s workflow to solve problems and upsell more frequently,” explained Rob Garf, VP and GM of Retail and Consumer Goods at Salesforce. “In short, every consumer goods company can now transform into an AI Enterprise.”

What’s New in Consumer Goods Cloud

The latest updates in Consumer Goods Cloud focus on integrating Salesforce’s Data Cloud with Einstein generative AI capabilities, enhancing three key areas:

Data Cloud for Consumer Goods: Account managers can now unify account and industry data to build rich customer profiles, segment accounts to the individual store level, and design hyper-localized assortment and promotion plans. For instance, a soft drink distributor can identify which citrus-flavored sodas are most popular in specific Mexican convenience stores and optimize replenishment accordingly.
Einstein Copilot Account Summarization: Within the service console, agents can access AI-generated account summaries, eliminating the need to switch between screens and knowledge articles. Summaries include last interactions, order history, satisfaction scores, and promotion details, enabling agents to resolve inquiries quickly and upsell intelligently.

Consumer Goods Cloud Einstein 1 for Sales: This AI-powered enhancement package provides sales managers, field reps, merchandisers, and delivery drivers with productivity- and revenue-boosting insights. Real-time notifications and recommendations on stock levels, replenishment, special handling needs, and payment collection keep field teams responsive and effective.

The Salesforce Embedded AI Difference

Salesforce’s strategy of embedding AI via a unified Einstein platform offers several advantages:

Consistency: With Einstein already integrated into Sales and Service Clouds, Salesforce can efficiently extend proven AI tools to industry-specific use cases, benefiting users with familiar interfaces and interaction paradigms.

Completeness: Embedding AI at the platform level allows Salesforce to enhance the entire workflow from planning to execution. Consumer goods companies can apply intelligent insights to both back-office processes like promotion management and field activities like stock checks and payment collection.

Continuous Innovation: The Einstein platform enables rapid deployment of Salesforce’s latest generative AI advancements across all clouds, ensuring customers always have access to state-of-the-art capabilities.

Mars Snacking, one of the world’s largest consumer goods companies, is already benefiting from Salesforce’s AI-powered industry cloud. “At Mars Snacking, we are on an ambitious journey to rewire and almost double the size of our business by 2030,” said Bartek Kononiuk, Global Head of Product – Trade Promotion Management.
“Consumer Goods Cloud and Trade Promotion Management will enable us to improve our business processes, data availability, and user experience in critical growth-enabling areas.”

AI Innovation Comes at a Cost

As the consumer goods industry strives to meet rapidly evolving buyer expectations, Salesforce’s embedded AI solutions for Consumer Goods Cloud offer timely advantages. By democratizing access to generative AI and data management capabilities, Salesforce enables companies of all sizes to optimize decision-making, boost field productivity, and drive profitable growth.

However, these advanced functionalities come with significant costs. Salesforce’s Einstein AI enhancements often carry substantial per-user surcharges, sometimes exceeding $100 per month. For large deployments involving thousands of employees, these expenses can quickly escalate. Consumer goods companies must carefully weigh the productivity and revenue gains against the added licensing costs.

Additionally, while Salesforce is leading the way in enterprise generative AI, the technology is still maturing. Early adopters may encounter instances where the AI delivers suboptimal results. Salesforce’s Trust Layer aims to mitigate these risks, but companies should approach generative AI with a clear understanding of its current limitations.

The ongoing enhancements in Salesforce’s Einstein portfolio present a promising yet costly opportunity for customers to evolve into full-fledged AI Enterprises. As the costs and benefits become clearer, consumer goods companies will need to decide strategically where and how aggressively to deploy these advanced capabilities. Those that find the right balance could gain a significant competitive edge in the rapidly changing digital landscape.

Read More

An Eye on AI

Humans often cast uneasy glances over their shoulders as artificial intelligence (AI) rapidly advances, achieving feats once exclusive to human intellect. An Eye on AI should ease their troubled minds. AI-driven chatbots can now pass rigorous exams like the bar and medical licensing tests, generate tailored images and summaries from complex texts, and simulate human-like interactions. Yet amidst these advancements, concerns loom large: fears of widespread job loss, existential threats to humanity, and the specter of machines surpassing human control to safeguard their own existence.

Skeptics of these doomsday scenarios argue that today’s AI lacks true cognition. They assert that AI, including sophisticated chatbots, operates on predictive algorithms that generate responses based on patterns in data inputs rather than genuine understanding. Even as AI capabilities evolve, it remains tethered to processing inputs into outputs without cognitive reasoning akin to human thought processes.

So, are we venturing into perilous territory or merely witnessing incremental advancements in technology? Perhaps both. While the prospect of creating a malevolent AI akin to HAL 9000 from “2001: A Space Odyssey” seems far-fetched, there is a prudent assumption that human ingenuity, prioritizing survival, would prevent us from engineering our own demise through AI. Yet the existential question remains: are we sufficiently safeguarded against ourselves?

Doubts about AI’s true cognitive abilities persist despite its impressive functionalities. While AI models like large language models (LLMs) operate on vast amounts of data to simulate human reasoning and context awareness, they fundamentally lack consciousness. AI’s creativity, exemplified by its ability to invent new ideas or solve complex problems, remains a simulated mimicry rather than authentic intelligence.
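The skeptics' claim that chatbots "generate responses based on patterns in data inputs rather than genuine understanding" can be illustrated with a toy bigram model: it reproduces whichever word most often followed another in its training text, with no notion of what any word means. This is a deliberately simplified sketch of the statistical principle, not how production LLMs are implemented:

```python
from collections import Counter, defaultdict

def train_bigrams(text: str):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word: str):
    """Return the most frequent follower: pure pattern matching, no understanding."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the model predicts the next word the model predicts patterns"
model = train_bigrams(corpus)
# 'model' was followed by 'predicts' twice in the corpus, so that is the prediction,
# regardless of whether the continuation makes any sense.
```

Scaled up from word pairs to billions of parameters and token contexts, the same input-to-output mapping underlies LLMs, which is precisely why their fluency does not by itself demonstrate cognition.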
Moreover, AI’s domain-specific capabilities are constrained by its training data and programming limitations, unlike human cognition, which adapts dynamically to diverse and novel situations. AI excels in pattern recognition tasks, from diagnosing diseases to classifying images, yet it does so without comprehending the underlying concepts or contexts. For instance, in medical diagnostics or art authentication, AI can achieve remarkable accuracy in identifying patterns but lacks the interpretative skills and contextual understanding that humans possess. This limitation underscores the necessity for human oversight and critical judgment in areas where AI’s decisions impact significant outcomes.

The evolution of AI, rooted in neural network technologies and deep learning paradigms, marks a profound shift in how we approach complex tasks traditionally performed by human experts. However, AI’s reliance on data patterns and algorithms highlights its inherent limitations in achieving genuine cognitive understanding or autonomous decision-making.

In conclusion, while AI continues to transform industries and enhance productivity, its capabilities are rooted in computational algorithms rather than conscious reasoning. As we navigate the future of AI integration, maintaining a balance between leveraging its efficiencies and preserving human expertise and oversight remains paramount. Ultimately, the intersection of AI and human intelligence will define the boundaries of technological advancement and ethical responsibility in the years to come.

Read More