Cost Efficiency Archives - gettectonic.com

Salesforce Einstein Discovery

Unlock the Power of Historical Salesforce Data with Einstein Discovery

Streamline Access to Historical Insights

Salesforce Einstein Discovery (formerly Salesforce Discover) eliminates the complexity of manual data extraction, giving you instant access to complete historical Salesforce data—without maintaining pipelines or infrastructure.

🔹 Effortless Trend Analysis – Track changes across your entire org over time.
🔹 Seamless Reporting – Accelerate operational insights with ready-to-use historical data.
🔹 Cost Efficiency – Reduce overhead by retrieving trend data from backups instead of production.

Why Use Historical Backup Data for Analytics?

Most organizations struggle with incomplete or outdated SaaS data, making trend analysis slow and unreliable. With Einstein Discovery, you can:
✅ Eliminate data gaps – Access every historical change in your Salesforce org.
✅ Speed up decision-making – Feed clean, structured data directly to BI tools.
✅ Cut infrastructure costs – Skip costly ETL processes and data warehouses.

Einstein Discovery vs. Traditional Data Warehouses

Traditional Approach | Einstein Discovery
Requires ETL pipelines & data warehouses | No pipelines needed – backups auto-update
Needs ongoing engineering maintenance | Zero maintenance – always in sync with your org
Limited historical visibility | Full change history with minute-level accuracy

💡 Key Advantage: Einstein Discovery automates what used to take months of data engineering.

How It Works

Einstein Discovery leverages Salesforce Backup & Recover to:
🔹 Track every field & record change in real time.
🔹 Feed historical data directly to Tableau, Power BI, or other BI tools.
🔹 Stay schema-aware – no manual adjustments needed.

AI-Powered Predictive Analytics

Beyond historical data, Einstein Discovery uses AI and machine learning to:
🔮 Predict outcomes (e.g., sales forecasts, churn risk).
📊 Surface hidden trends with automated insights.
🛠 Suggest improvements (e.g., “Increase deal size by focusing on X”).
Supported Use Cases:
✔ Regression (e.g., revenue forecasting)
✔ Binary Classification (e.g., “Will this lead convert?”)
✔ Multiclass Classification (e.g., “Which product will this customer buy?”)

Deploy AI Insights Across Salesforce

Once trained, models can be embedded in:
📌 Lightning Pages
📌 Experience Cloud
📌 Tableau Dashboards
📌 Salesforce Flows & Automation

Get Started with Einstein Discovery

🔹 License Required: CRM Analytics Plus or Einstein Predictions.
🔹 Data Prep: Pull from Salesforce or external sources.
🔹 Bias Detection: Ensure ethical AI with built-in fairness checks.

Transform raw data into actionable intelligence—without coding. Talk to your Salesforce rep to enable Einstein Discovery today!

Related Posts:
Salesforce OEM AppExchange – Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more
The Salesforce Story – In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more
Salesforce Jigsaw – Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more
Service Cloud with AI-Driven Intelligence – Salesforce Enhances Service Cloud with AI-Driven Intelligence Engine Data science and analytics are rapidly becoming standard features in enterprise applications, Read more


Data Cloud Billable Usage

Data Cloud Billable Usage Overview

Usage of certain Data Cloud features impacts credit consumption. To track usage, access your Digital Wallet within your Salesforce org. For specific billing details, refer to your contract or contact your Account Executive.

Important Notes

⚠️ Customer Data Platform (CDP) Licensing – If your Data Cloud org operates under a CDP license, refer to Customer Data Platform Billable Usage Calculations instead.
⚠️ Sandbox Usage – Data Cloud sandbox consumption affects credits, with usage tracked separately on Data Cloud sandbox cards.

Understanding Usage Calculations

Credit consumption is based on the number of units used multiplied by the multiplier on the rate card for that usage type. Consumption is categorized as follows:

1. Data Service Usage

Service usage is measured by records processed, queried, or analyzed.

Billing Category | Description
Batch Data Pipeline | Based on the volume of batch data processed via Data Cloud data streams.
Batch Data Transforms | Measured by the higher of rows read vs. rows written. Incremental transforms only count changed rows after the first run.
Batch Profile Unification | Based on source profiles processed by an identity resolution ruleset. After the first run, only new/modified profiles are counted.
Batch Calculated Insights | Based on the number of records in underlying objects used to generate Calculated Insights.
Data Queries | Based on records processed, which depends on query structure and total records in the queried objects.
Unstructured Data Processed | Measured by the amount of unstructured data (PDFs, audio/video files) processed.
Streaming Data Pipeline | Based on records ingested through real-time data streams (web, mobile, streaming ingestion API).
Streaming Data Transforms | Measured by the number of records processed in real-time transformations.
Streaming Calculated Insights | Based on the number of records processed in streaming insights calculations.
Streaming Actions (including lookups) | Measured by the number of records processed in data lookups and enrichments.
Inferences | Based on predictive AI model usage, including predictions, prescriptions, and top predictors. Applies to internal (Einstein AI) and external (BYOM) models.
Data Share Rows Shared (Data Out) | Based on the new/changed records processed for data sharing.
Data Federation or Sharing Rows Accessed | Based on records returned from external data sources. Only cross-region/cross-cloud queries consume credits.
Sub-second Real-Time Events & API | Based on profile events, engagement events, and API calls in real-time processing.
Private Connect Data Processed | Measured by GB of data transferred via private network routes.

🔹 Retired Billing Categories: Accelerated Data Queries and Real-Time Profile API (no longer billed after August 16, 2024).

2. Data Storage Allocation

Storage usage applies to Data Cloud, Data Cloud for Marketing, and Data Cloud for Tableau.

Billing Category | Description
Storage Beyond Allocation | Measured by data storage exceeding your allocated limit.

3. Data Spaces

Billing Category | Description
Data Spaces | Usage is based on the number of data spaces beyond the default allocation.

4. Segmentation & Activation

Usage applies to Data Cloud for Marketing customers and is based on records processed, queried, or activated.

Billing Category | Description
Segmentation | Based on the number of records processed for segmentation.
Batch Activations | Measured by records processed for batch activations.
Activate DMO – Streaming | Based on new/updated records in the Data Model Object (DMO) during an activation. If a data graph is used, the count is doubled.

5. Ad Audiences Service Usage

Usage is calculated based on the number of ad audience targets created.

Billing Category | Description
Ad Audiences | Measured by the number of ad audience targets generated.
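As a rough illustration of the units-times-multiplier calculation described above, the sketch below sums credits across usage types. The multiplier values are invented placeholders for illustration only, not Salesforce's actual rate card; consult your contract for real rates.

```python
# Hypothetical sketch of Data Cloud credit math: credits = units used
# multiplied by the rate-card multiplier for that usage type, summed
# across categories. Multipliers below are illustrative placeholders.

RATE_CARD = {
    "batch_data_pipeline": 2.0,    # illustrative multiplier per unit
    "data_queries": 2.0,
    "streaming_data_pipeline": 5.0,
}

def credits_consumed(usage: dict) -> float:
    """Sum credits across usage types: units * rate-card multiplier."""
    return sum(units * RATE_CARD[kind] for kind, units in usage.items())

# e.g. 10 units of batch pipeline processing plus 4 units of data queries
usage = {"batch_data_pipeline": 10, "data_queries": 4}
print(credits_consumed(usage))  # 10*2.0 + 4*2.0 = 28.0
```

Because only the multiplier differs between categories, tracking unit counts per category (as the Digital Wallet does) is enough to project credit burn.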
Data Cloud Real-Time Profile

Real-time service usage is based on the number of records associated with real-time data graphs.

Billing Category | Description
Sub-second Real-Time Profiles & Entities | Based on the unique real-time data graph records appearing in the cache during the billing month. Each unique record is counted only once, even if it appears multiple times.

📌 Example: If a real-time data graph contains 10M cached records on day one, and 1M new records are added daily for 30 days, the total count would be 40M records.

7. Customer Data Platform (CDP) Billing

Orgs previously licensed as Customer Data Platform are billed based on contracted entitlements. Understanding these calculations can help optimize data management and cost efficiency.

Track & Manage Your Usage

🔹 Digital Wallet – Monitor Data Cloud consumption across all categories.
🔹 Feature & Usage Documentation – Review guidelines before activating features to optimize cost.
🔹 Account Executive Consultation – Contact your AE to understand credit consumption and scalability options.
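The 40M-record example in the Real-Time Profile section above is straightforward arithmetic, since each unique cached record counts once per billing month regardless of how often it reappears. A minimal sketch:

```python
# Worked version of the real-time profile example: unique data graph
# records are billed once per month, so the monthly count is the initial
# cache plus all records newly added during the month.

def monthly_unique_records(initial_cached: int, new_per_day: int, days: int) -> int:
    """Unique records billed for the month (each counted once)."""
    return initial_cached + new_per_day * days

# 10M cached on day one + 1M new records daily for 30 days
print(monthly_unique_records(10_000_000, 1_000_000, 30))  # 40000000
```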


Salesforce Life Sciences Cloud and Honeywell

Honeywell’s TrackWise® Quality solution suite, in combination with Salesforce Life Sciences Cloud, will provide a comprehensive software platform for pharmaceutical and medical technology companies.

Honeywell has expanded its strategic partnership with Salesforce, Inc. to introduce an integrated software platform designed for the life sciences industry. This platform, which includes Honeywell’s TrackWise Quality, Salesforce Life Sciences Cloud, Agentforce, and other solutions, aims to help pharmaceutical and medical technology companies accelerate the delivery of critical medications and healthcare devices with enhanced safety and cost efficiency.

By digitizing and automating quality and compliance processes, Honeywell’s TrackWise Quality enables companies to manufacture products more efficiently, mitigate risks, reduce production costs, and bring safer products to market faster. Salesforce Life Sciences Cloud consolidates diverse data sources—such as clinical test results and medical records—to optimize interactions between pharmaceutical and medtech organizations, healthcare professionals, partners, and patients. This collaboration underscores the shared commitment of Honeywell and Salesforce to advancing automation and efficiency in the life sciences sector.

“Our ongoing commitment to providing innovative software to manufacturers of critical life-supporting treatments centers around fostering trust between providers and patients,” said Frank Defesche, SVP & GM, Life Sciences at Salesforce.
“The long-standing collaboration between Salesforce and Honeywell strengthens the life sciences industry by enhancing quality interactions and products, ultimately leading to improved patient outcomes.”

“Partnering with Salesforce allows us to leverage our combined expertise in life sciences to drive operational efficiency, improve quality, and enhance patient-centric solutions through technology,” said Sunil Pandita, Vice President & General Manager of Honeywell Life Sciences. “From assisting patients in finding life-changing clinical trials to detecting anomalies in pharmaceutical manufacturing, Honeywell and Salesforce technologies are enabling better patient care.”

Honeywell’s TrackWise Quality fosters a proactive approach to quality management by integrating industry best practices, advanced tracking and analytics, and artificial intelligence (AI) capabilities. This empowers life sciences companies to harness data more effectively, optimize performance, and enhance decision-making processes.

Salesforce Life Sciences Cloud serves as a comprehensive, AI-powered engagement platform for pharmaceutical and medtech companies. Built on Sales Cloud and Service Cloud, it streamlines processes such as automating pharmacy benefit verification and expediting the screening of clinical trial candidates. These enhancements improve patient access and medication adherence, and accelerate diverse patient recruitment while reducing trial attrition. Additional Salesforce solutions within the platform include Agentforce, Data Cloud, and Analytics.

As part of its continued commitment to life sciences customers, Honeywell has joined the Salesforce Agentforce Partner Network, a global ecosystem of partners developing solutions for regulatory-compliant training materials for life sciences professionals. To learn more about Honeywell’s industry-leading technologies for life sciences, visit: www.honeywell.com/us/en/industries/life-sciences.
Salesforce, Salesforce Life Sciences Cloud, Agentforce, Data Cloud, and other names are trademarks of Salesforce, Inc.

About Honeywell

Honeywell is a diversified operating company serving a wide range of industries worldwide. Its business aligns with three key megatrends—automation, the future of aviation, and energy transition—supported by the Honeywell Accelerator operating system and Honeywell Forge IoT platform. As a trusted partner, Honeywell helps organizations address complex global challenges, delivering innovative solutions through its Aerospace Technologies, Industrial Automation, Building Automation, and Energy and Sustainability Solutions segments. For more news and updates on Honeywell, visit www.honeywell.com/newsroom.


Reward-Guided Speculative Decoding

Salesforce AI Research Unveils Reward-Guided Speculative Decoding (RSD): A Breakthrough in Large Language Model (LLM) Inference Efficiency

Addressing the Computational Challenges of LLMs

The rapid scaling of large language models (LLMs) has led to remarkable advancements in natural language understanding and reasoning. However, inference—the process of generating responses one token at a time—remains a major computational bottleneck. As LLMs grow in size and complexity, latency and energy consumption increase, posing challenges for real-world applications that demand cost efficiency, speed, and scalability.

Traditional decoding methods, such as greedy and beam search, require repeated evaluations of large models, leading to significant computational overhead. Even parallel decoding techniques struggle to balance efficiency with output quality. These challenges have driven research into hybrid approaches that combine lightweight models with more powerful ones, optimizing speed without sacrificing performance.

Introducing Reward-Guided Speculative Decoding (RSD)

Salesforce AI Research introduces Reward-Guided Speculative Decoding (RSD), a novel framework designed to enhance LLM inference efficiency. RSD employs a dual-model strategy: a lightweight draft model proposes candidate tokens, and a more powerful target model verifies or replaces them.

Unlike traditional speculative decoding, which enforces strict token matching between draft and target models, RSD introduces a controlled bias that prioritizes high-reward outputs—tokens deemed more accurate or contextually relevant. This strategic bias significantly reduces unnecessary computations.

RSD’s mathematically derived threshold mechanism dictates when the target model should intervene. By dynamically blending outputs from both models based on a reward function, RSD accelerates inference while maintaining or even enhancing response quality. This innovation addresses the inefficiencies inherent in sequential token generation for LLMs.
Technical Insights and Benefits of RSD

RSD integrates two models in a sequential, cooperative manner: the draft model generates candidate tokens, and a reward model scores each one to decide whether the larger target model needs to intervene. This mechanism is guided by a binary step weighting function, ensuring that only high-quality tokens bypass the target model, significantly reducing computational demands.

Key Benefits:
🔹 Reduced computation – high-reward draft tokens skip the target model, cutting FLOPs by up to 4.4×.
🔹 Maintained or improved quality – the reward-based acceptance criterion preserves, and can even enhance, response accuracy.

The theoretical foundation of RSD, including the probabilistic mixture distribution and adaptive acceptance criteria, provides a robust framework for real-world deployment across diverse reasoning tasks.

Empirical Results: Superior Performance Across Benchmarks

Experiments on challenging datasets—such as GSM8K, MATH500, OlympiadBench, and GPQA—demonstrate RSD’s effectiveness. Notably, on the MATH500 benchmark, RSD achieved 88.0% accuracy using a 72B target model and a 7B PRM, outperforming the target model’s standalone accuracy of 85.6% while reducing FLOPs by nearly 4.4×. These results highlight RSD’s potential to surpass traditional methods, including speculative decoding (SD), beam search, and Best-of-N strategies, in both speed and accuracy.

A Paradigm Shift in LLM Inference

Reward-Guided Speculative Decoding (RSD) represents a significant advancement in LLM inference. By intelligently combining a draft model with a powerful target model and incorporating a reward-based acceptance criterion, RSD effectively mitigates computational costs without compromising quality. This biased acceleration approach strategically bypasses expensive computations for high-reward outputs, ensuring an efficient and scalable inference process. With empirical results showcasing up to 4.4× faster performance and superior accuracy, RSD sets a new benchmark for hybrid decoding frameworks, paving the way for broader adoption in real-time AI applications.


Outsourced Salesforce Admin

Maximizing Business Potential with Outsourced Salesforce Admin Services

Salesforce is an indispensable tool for managing customer relationships, streamlining operations, and driving growth. However, fully leveraging Salesforce’s capabilities requires skilled management, regular maintenance, and continuous updates. While some businesses prefer in-house management, outsourcing Salesforce admin services has emerged as a strategic option offering numerous advantages, including cost savings, access to specialized expertise, and improved system performance. This allows businesses to focus on core priorities.

Key Benefits of Outsourcing Salesforce Admin Services

1. Access to Specialized Expertise
Salesforce’s vast features and capabilities demand a deep understanding of its tools, integrations, and customizations. Outsourcing provides access to professionals with industry-specific expertise and up-to-date knowledge of Salesforce advancements. These experts ensure system optimization by implementing advanced features, automating workflows, and customizing dashboards, minimizing downtime, resolving issues efficiently, and improving overall system reliability.

2. Scalability and Flexibility
Business needs evolve over time, and so do Salesforce requirements. Outsourced teams offer scalability and adaptability, making it easy to adjust services during periods of growth, mergers, system upgrades, or market expansion. This flexibility ensures businesses can meet their changing needs without disrupting operations.

3. Cost Efficiency and Resource Optimization
Hiring and training in-house Salesforce administrators can be expensive. Outsourcing eliminates these costs by providing access to top-tier talent without the overhead of full-time employees. Moreover, outsourcing allows internal teams to focus on strategic initiatives rather than day-to-day Salesforce management, maximizing productivity.

4. Enhanced Security and Compliance
Protecting sensitive data and ensuring regulatory compliance is critical, especially in highly regulated industries. Outsourced Salesforce administrators bring extensive experience in implementing robust security measures, conducting regular audits, and mitigating vulnerabilities. Their proactive approach ensures data integrity and minimizes risks.

5. Improved Operational Efficiency
Outsourcing ensures routine maintenance, performance monitoring, and data cleansing are consistently handled, reducing errors and improving system performance. Outsourced teams also use advanced tools to identify inefficiencies and recommend optimizations, creating streamlined workflows and better resource utilization.

6. Quick Issue Resolution
Experienced outsourced admins can diagnose and resolve technical issues promptly, minimizing disruptions. Their expertise and access to dedicated support channels ensure faster problem resolution, enabling businesses to maintain productivity and meet customer expectations.

7. Strategic Guidance and Insights
Beyond daily management, outsourced professionals provide valuable strategic insights based on their cross-industry experience. From identifying automation opportunities to recommending data-driven strategies, they help businesses leverage Salesforce to achieve long-term objectives and foster innovation.

8. Tailored Customization and Integration
Salesforce’s customization potential is vast, but it requires expertise to align the system with business goals effectively. Outsourcing ensures seamless integration and customization, whether through unique workflows, custom applications, or third-party tools. This tailored approach maximizes ROI and ensures Salesforce evolves with the organization.

9. Continuity Despite Employee Turnover
Employee turnover in in-house teams can disrupt Salesforce management. Outsourced providers ensure continuity through established processes and teams, minimizing downtime and reducing the burden on internal staff.

10. Focus on Core Competencies
Outsourcing Salesforce management allows internal teams to focus on innovation, market expansion, and customer service, while experts handle Salesforce’s complexities. This alignment of resources drives long-term success.

11. Access to Advanced Tools and Technologies
Outsourced teams leverage advanced tools for data accuracy, performance insights, and productivity enhancements. These technologies improve system usability and allow businesses to stay competitive.

12. Knowledge Updates and Ongoing Training
Salesforce evolves continuously, requiring admins to stay updated with new features and industry trends. Outsourced professionals invest in ongoing training and certifications, ensuring businesses benefit from the latest advancements without dedicating internal resources to training.

13. Time-Zone Benefits and 24/7 Support
For global businesses, outsourced teams provide round-the-clock support to address technical issues promptly, regardless of time zones. Maintenance tasks can also be scheduled during non-business hours, minimizing disruptions and enhancing efficiency.

Conclusion

Outsourcing Salesforce admin services is a strategic investment for businesses aiming to enhance performance, drive growth, and streamline operations. By leveraging the expertise of skilled professionals, businesses can benefit from seamless system management, tailored customizations, and proactive support while reducing costs and resource demands.

For organizations seeking to stay competitive in today’s dynamic marketplace, outsourcing Salesforce admin services is not just a convenience but a strategic move toward achieving long-term success. By leaving Salesforce management to the experts, businesses can focus on their core goals and drive innovation. Contact Tectonic Today.


No-Code Generative AI

The future of AI belongs to everyone, and no-code platforms are the key to making this vision a reality. By embracing this approach, enterprises can ensure that AI-driven innovation is inclusive, efficient, and transformative.


Deploying Large Language Models in Healthcare

Study Identifies Cost-Effective Strategies for Deploying Large Language Models in Healthcare

Efficient deployment of large language models (LLMs) at scale in healthcare can streamline clinical workflows and reduce costs by up to 17 times without compromising reliability, according to a study published in NPJ Digital Medicine by researchers at the Icahn School of Medicine at Mount Sinai. The research highlights the potential of LLMs to enhance clinical operations while addressing the financial and computational hurdles healthcare organizations face in scaling these technologies.

To investigate solutions, the team evaluated 10 LLMs of varying sizes and capacities using real-world patient data. The models were tested on chained queries and increasingly complex clinical notes, with outputs assessed for accuracy, formatting quality, and adherence to clinical instructions.

“Our study was driven by the need to identify practical ways to cut costs while maintaining performance, enabling health systems to confidently adopt LLMs at scale,” said Dr. Eyal Klang, director of the Generative AI Research Program at Icahn Mount Sinai. “We aimed to stress-test these models, evaluating their ability to manage multiple tasks simultaneously and identifying strategies to balance performance and affordability.”

The team conducted over 300,000 experiments, finding that high-capacity models like Meta’s Llama-3-70B and GPT-4 Turbo 128k performed best, maintaining high accuracy and low failure rates. However, performance began to degrade as task volume and complexity increased, particularly beyond 50 tasks involving large prompts.

The study further revealed that grouping tasks—such as identifying patients for preventive screenings, analyzing medication safety, and matching patients for clinical trials—enabled LLMs to handle up to 50 simultaneous tasks without significant accuracy loss.
This strategy also led to dramatic cost savings, with API costs reduced by up to 17-fold, offering a pathway for health systems to save millions annually.

“Understanding where these models reach their cognitive limits is critical for ensuring reliability and operational stability,” said Dr. Girish N. Nadkarni, co-senior author and director of The Charles Bronfman Institute of Personalized Medicine. “Our findings pave the way for the integration of generative AI in hospitals while accounting for real-world constraints.”

Beyond cost efficiency, the study underscores the potential of LLMs to automate key tasks, conserve resources, and free up healthcare providers to focus more on patient care. “This research highlights how AI can transform healthcare operations. Grouping tasks not only cuts costs but also optimizes resources that can be redirected toward improving patient outcomes,” said Dr. David L. Reich, co-author and chief clinical officer of the Mount Sinai Health System.

The research team plans to explore how LLMs perform in live clinical environments and assess emerging models to determine whether advancements in AI technology can expand their cognitive thresholds.
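The task-grouping strategy described above can be sketched as below. The prompt format, task list, and group size cap are illustrative assumptions, not the study's actual protocol; the point is that bundling up to ~50 tasks into one prompt turns many API calls into one, which is where cost savings of this kind come from.

```python
# Sketch of grouping clinical tasks into batched prompts (hypothetical
# prompt format). Each group of up to `max_group` tasks becomes a single
# API call instead of one call per task.

def group_tasks(tasks, clinical_note, max_group=50):
    """Yield one combined prompt per group of up to max_group tasks."""
    for i in range(0, len(tasks), max_group):
        group = tasks[i:i + max_group]
        numbered = "\n".join(f"{n}. {t}" for n, t in enumerate(group, 1))
        yield (f"For the clinical note below, answer each numbered task.\n"
               f"{numbered}\n\nNote:\n{clinical_note}")

tasks = [f"task {i}" for i in range(120)]
prompts = list(group_tasks(tasks, "example clinical note"))
print(len(prompts))  # 120 tasks -> 3 API calls instead of 120
```

The study's caveat applies directly to `max_group`: accuracy degraded beyond roughly 50 simultaneous tasks, so the cap trades cost against reliability.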


Google’s Gemini 1.5 Flash-8B

Google’s Gemini 1.5 Flash-8B: A Game-Changer in Speed and Affordability

Google’s latest AI model, Gemini 1.5 Flash-8B, has taken the spotlight as the company’s fastest and most cost-effective offering to date. Building on the foundation of the original Flash model, 8B introduces key upgrades in pricing, speed, and rate limits, signaling Google’s intent to dominate the affordable AI model market.

What Sets Gemini 1.5 Flash-8B Apart?

Google has implemented several enhancements to this lightweight model, informed by “developer feedback and testing the limits of what’s possible,” as highlighted in their announcement. These updates focus on three major areas:

1. Unprecedented Price Reduction
The cost of using Flash-8B has been slashed in half compared to its predecessor, making it the most budget-friendly model in its class. This dramatic price drop solidifies Flash-8B as a leading choice for developers seeking an affordable yet reliable AI solution.

2. Enhanced Speed
The Flash-8B model is 40% faster than its closest competitor, GPT-4o, according to data from Artificial Analysis. This improvement underscores Google’s focus on speed as a critical feature for developers. Whether working in AI Studio or using the Gemini API, users will notice shorter response times and smoother interactions.

3. Increased Rate Limits
Flash-8B doubles the rate limits of its predecessor, allowing for 4,000 requests per minute. This improvement ensures developers and users can handle higher volumes of smaller, faster tasks without bottlenecks, enhancing efficiency in real-time applications.

Accessing Flash-8B

You can start using Flash-8B today through Google AI Studio or via the Gemini API. AI Studio provides a free testing environment, making it a great starting point before transitioning to API integration for larger-scale projects.

Comparing Flash-8B to Other Gemini Models

Flash-8B positions itself as a faster, cheaper alternative to high-performance models like Gemini 1.5 Pro.
While it doesn’t outperform the Pro model across all benchmarks, it excels in cost efficiency and speed, making it ideal for tasks requiring rapid processing at scale. In benchmark evaluations, Flash-8B surpasses the base Flash model in four key areas, with only marginal decreases in other metrics. For developers prioritizing speed and affordability, Flash-8B offers a compelling balance between performance and cost.

Why Flash-8B Matters

Gemini 1.5 Flash-8B highlights Google’s commitment to providing accessible AI solutions for developers without compromising on quality. With its reduced costs, faster response times, and higher request limits, Flash-8B is poised to redefine expectations for lightweight AI models, catering to a broad spectrum of applications while maintaining an edge in affordability.
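A client taking advantage of the 4,000 requests-per-minute limit still needs to avoid exceeding it. One common approach is a client-side sliding-window limiter, sketched below; this is an illustrative pattern, not part of the Gemini API client, and the `acquire()` call would simply precede each real API request.

```python
# Illustrative sliding-window rate limiter for a per-minute request cap
# (e.g. Flash-8B's 4,000 requests per minute). Tracks timestamps of recent
# requests and sleeps when the window is full.

import time
from collections import deque

class RateLimiter:
    def __init__(self, max_requests=4000, window_s=60.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.sent = deque()  # timestamps of requests inside the window

    def acquire(self):
        now = time.monotonic()
        # drop timestamps that have aged out of the window
        while self.sent and now - self.sent[0] >= self.window_s:
            self.sent.popleft()
        if len(self.sent) >= self.max_requests:
            # wait until the oldest request leaves the window
            time.sleep(self.window_s - (now - self.sent[0]))
            self.sent.popleft()
        self.sent.append(time.monotonic())

limiter = RateLimiter()
for _ in range(5):
    limiter.acquire()  # would precede each Gemini API call
print(len(limiter.sent))  # 5
```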

Generative AI Energy Consumption Rises

Generative AI Energy Consumption Rises, but Impact on ROI Unclear

The energy costs associated with generative AI (GenAI) are often overlooked in enterprise financial planning. However, industry experts suggest that IT leaders should account for the power consumption that comes with adopting this technology. When building a business case for generative AI, some costs are evident, like large language model (LLM) fees and SaaS subscriptions. Other costs, such as preparing data, upgrading cloud infrastructure, and managing organizational changes, are less visible but significant.

One often overlooked cost is the energy consumption of generative AI. Training LLMs and responding to user requests (whether answering questions or generating images) demands considerable computing power. These tasks generate heat and necessitate sophisticated cooling systems in data centers, which, in turn, consume additional energy.

Despite this, most enterprises have not focused on the energy requirements of GenAI. The issue is gaining attention at a broader level, however. The International Energy Agency (IEA) has forecast that electricity consumption from data centers, AI, and cryptocurrency could double by 2026. By that time, data centers’ electricity use could exceed 1,000 terawatt-hours, equivalent to Japan’s total electricity consumption. Goldman Sachs also flagged the growing energy demand, attributing it partly to AI, and projects that global data center electricity use could more than double by 2030.

ROI Implications of Energy Costs

The extent to which rising energy consumption will affect GenAI’s return on investment (ROI) remains unclear. For now, the perceived benefits of GenAI seem to outweigh concerns about energy costs. Most businesses have not been directly impacted, as these costs tend to fall on hyperscalers first.
For instance, Google reported a 13% increase in greenhouse gas emissions in 2023, largely due to AI-related energy demands in its data centers. Scott Likens, PwC’s global chief AI engineering officer, noted that while energy consumption isn’t a barrier to adoption, it should still be factored into long-term strategies. “You don’t take it for granted. There’s a cost somewhere for the enterprise,” he said.

Energy Costs: Hidden but Present

Although energy expenses may not appear on an enterprise’s invoice, they are still present. Generative AI’s energy consumption is tied to both model training and inference: each time a user makes a query, the system expends energy to generate a response. While the energy used for an individual query is minor, the cumulative effect across millions of users adds up.

How these costs are passed to customers is somewhat opaque. Licensing fees for enterprise versions of GenAI products likely include energy costs, spread across the user base. According to PwC’s Likens, the costs associated with training models are shared among many users, reducing the burden on individual enterprises. On the inference side, GenAI vendors charge for tokens, which correspond to computational power. Although increased token usage signals higher energy consumption, the financial impact on enterprises has so far been minimal, especially as token costs have decreased. The dynamic resembles buying an EV to save on gas, only to spend hundreds of dollars and many hours at charging stations.

Energy as an Indirect Concern

While energy costs haven’t been top of mind for GenAI adopters, organizations may address the issue indirectly by focusing on other deployment challenges, such as reducing latency and improving cost efficiency. Newer models, such as OpenAI’s GPT-4o mini, are more economical and have helped organizations scale GenAI without prohibitive costs. Organizations may also use smaller, fine-tuned models to decrease latency and energy consumption.
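Because vendors bill by token, a rough back-of-envelope model makes the cumulative effect concrete. All figures in this sketch (price per million tokens, watt-hours per thousand tokens) are placeholder assumptions for illustration, not published vendor or measured numbers:

```python
def monthly_inference_footprint(queries_per_day, tokens_per_query,
                                usd_per_million_tokens=0.60,
                                wh_per_thousand_tokens=0.3):
    """Estimate monthly token spend (USD) and energy use (kWh) for inference.

    The default per-token price and energy figures are illustrative
    assumptions; substitute your vendor's rates and measured draw.
    """
    tokens_per_month = queries_per_day * tokens_per_query * 30
    cost_usd = tokens_per_month / 1_000_000 * usd_per_million_tokens
    energy_kwh = tokens_per_month / 1_000 * wh_per_thousand_tokens / 1_000
    return cost_usd, energy_kwh
```

Under these assumptions, 10,000 queries a day at 500 tokens each is a modest line item in dollars, while the implied energy use scales linearly with traffic — which is why the cumulative, fleet-wide effect matters to hyperscalers more than to any single tenant.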
By adopting multimodel approaches, enterprises can choose models based on the complexity of a task, optimizing for both speed and energy efficiency.

The Data Center Dilemma

As enterprises consider GenAI’s energy demands, data centers face the challenge head-on, investing in more sophisticated cooling systems to handle the heat generated by AI workloads. According to the Dell’Oro Group, the data center physical infrastructure market grew in the second quarter of 2024, signaling the start of the “AI growth cycle” for infrastructure sales, particularly thermal management systems. Liquid cooling, more efficient than air cooling, is gaining traction as a way to manage the heat from high-performance computing and is expected to see rapid growth as demand for AI workloads continues to increase.

Nuclear Power and AI Energy Demands

To meet AI’s growing energy demands, some hyperscalers are exploring nuclear energy for their data centers. AWS, Google, and Microsoft are among the companies pursuing this option, with AWS acquiring a nuclear-powered data center campus earlier this year. Nuclear power could help these tech giants keep pace with AI’s energy requirements while also meeting sustainability goals, though tying broader AI accessibility to a build-out of nuclear plants may alienate some of the technology’s supporters.

As GenAI continues to evolve, both energy costs and efficiency are likely to play a greater role in decision-making. PwC has already begun including carbon impact in its GenAI value framework, which assesses the full scope of generative AI deployments. “The cost of carbon is in there, so we shouldn’t ignore it,” Likens said.

GPUs and AI Development

Graphics processing units (GPUs) have become widely recognized due to their growing role in AI development. However, a lesser-known but critical technology is also gaining attention: high-bandwidth memory (HBM). HBM is a high-density memory designed to overcome bottlenecks and maximize data transfer speeds between storage and processors. AI chipmakers like Nvidia rely on HBM for its superior bandwidth and energy efficiency. Its placement next to the GPU’s processor chip gives it a performance edge over traditional server RAM, which resides between storage and the processing unit. HBM’s lower power consumption makes it well suited for AI model training, which demands significant energy resources.

However, as the AI landscape transitions from model training to AI inferencing, HBM’s widespread adoption may slow. According to Gartner’s 2023 forecast, the use of accelerator chips incorporating HBM for AI model training is expected to decline from 65% in 2022 to 30% by 2027, as inferencing becomes more cost-effective with traditional technologies.

How HBM Differs from Other Memory

HBM shares similarities with other memory technologies, such as graphics double data rate (GDDR) memory, in delivering high bandwidth for graphics-intensive tasks. But HBM stands out due to its unique positioning. Unlike GDDR, which sits on the printed circuit board of the GPU, HBM is placed directly beside the processor, enhancing speed by reducing signal delays caused by longer interconnections. This proximity, combined with its stacked DRAM architecture, boosts performance compared to GDDR’s side-by-side chip design.

However, this stacked approach adds complexity. HBM relies on through-silicon vias (TSVs), vertical electrical connections running through the stacked DRAM dies, which require larger die sizes and increase production costs. According to analysts, this makes HBM more expensive and less efficient to manufacture than server DRAM, leading to higher yield losses during production.
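The bandwidth gap translates directly into data-movement time, which is the bottleneck HBM exists to relieve. A quick comparison using illustrative bandwidth figures — the numbers below are assumptions chosen to be in the broad range discussed for HBM-class versus GDDR-class memory, not vendor specifications:

```python
def transfer_seconds(data_gb, bandwidth_gb_per_s):
    """Time in seconds to move `data_gb` gigabytes at a sustained bandwidth."""
    return data_gb / bandwidth_gb_per_s


# Moving a 100 GB working set of weights/activations once:
hbm = transfer_seconds(100, 800)   # assumed ~800 GB/s for an HBM-class stack
gddr = transfer_seconds(100, 250)  # assumed ~250 GB/s for a GDDR-class setup
```

Even in this toy comparison, the same transfer takes roughly a third of the time on the higher-bandwidth memory — and training repeats such transfers constantly, which is why bandwidth (and the energy per byte moved) dominates the economics.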
AI’s Demand for HBM

Despite its manufacturing challenges, demand for HBM is surging due to its importance in AI model training. Major suppliers like SK Hynix, Samsung, and Micron have expanded production to meet this demand, with Micron reporting that its HBM is sold out through 2025. In fact, TrendForce predicts that HBM will contribute to record revenues for the memory industry in 2025. The high demand for GPUs, especially from Nvidia, drives the need for HBM as AI companies focus on accelerating model training. Hyperscalers, looking to monetize AI, are investing heavily in HBM to speed up the process.

HBM’s Future in AI

While HBM has proven essential for AI training, its future may be uncertain as the focus shifts to AI inferencing, which requires less intensive memory resources. As inferencing becomes more prevalent, companies may opt for more affordable and widely available memory solutions. Experts also see HBM following the same trajectory as other memory technologies, with continuous efforts to increase bandwidth and density. The next generation, HBM3E, is already in production, with HBM4 planned for release in 2026, promising even higher speeds. Ultimately, the adoption of HBM will depend on market demand, especially from hyperscalers. If AI continues to push the limits of GPU performance, HBM could remain a critical component. However, if businesses prioritize cost efficiency over peak performance, HBM’s growth may level off.

Gen AI and Software Development

The Future of Software Development with Generative AI

Imagine developing software products at unprecedented speed and cost efficiency, allowing your company to test more ideas with real (and even virtual) customers. This capability could accelerate the time to market for targeted products while minimizing risk and resource waste. Generative AI (GenAI) is making this vision a reality. But how exactly will AI-powered product development work? We propose a four-stage framework that leverages GenAI to streamline today’s labor-intensive processes.

The Challenge with Traditional Software Development

As Marty Cagan of Silicon Valley Product Group has pointed out, most companies still rely on a lengthy, complex software development cycle. This approach is expensive and fraught with risk. Predicting a product’s ROI before release is notoriously inaccurate, and testing product designs with real users is both time-consuming and costly. If a final product fails to attract customers, the company loses valuable time, money, and effort, as seen in cases like Quibi and Clubhouse.

To mitigate these risks, some firms have embraced iterative development, involving end users early in the process and continuously refining their solutions. While this method improves outcomes, GenAI offers the potential to revolutionize the entire approach.

How GenAI Transforms Software Development

GenAI moves beyond traditional A/B testing and incremental improvements. Consider the perspective of Nikita Bier, Product Growth Partner at Lightspeed Venture Partners, who recently stated: “No, I just ship the app—and if it’s not ranked in the Apple Store, I change it until it is.” This mindset, enabled by GenAI, suggests a more agile, data-driven approach to product development, where software is rapidly iterated based on real-world feedback. We propose a simplified four-step framework that highlights GenAI’s role in transforming each stage:

1. User Research
Today: Companies analyze user problems, market needs, and contextual factors to determine why a product should be built.
With GenAI: AI can simulate realistic consumer behavior, reducing the need for expensive user research. For example, a recent study used OpenAI’s GPT-3.5 to predict laptop purchasing decisions based on simulated income levels. The AI adjusted its price sensitivity depending on whether it “earned” $50,000 or $120,000 annually, mimicking real consumer behavior.

2. Design
Today: Product teams develop solutions, mapping interactions between users and the product.
With GenAI: AI can translate ideas into designs for different types of creators. Visual thinkers can sketch concepts, which AI converts into formal design assets. Those who work better with words can use AI tools like Galileo and Genius to generate wireframes from natural language descriptions, integrating with design platforms like Figma.

3. Build
Today: Developers determine how the product’s components fit together, writing code to bring it to life.
With GenAI: AI can generate functional software code with minimal human input. For instance, aerospace engineer Brandon Starr used a single sentence, “Create a bunny-themed Flappy Bird as an iOS app,” to instruct Replit Agent, which then built the app autonomously.

4. Learn
Today: Companies analyze product performance and user feedback to refine future iterations.
With GenAI: AI will integrate with top-tier product analytics tools, synthesizing data to automate improvements, rebuilds, and relaunches. As Wharton professor Ethan Mollick has demonstrated, GPT’s advanced data analysis capabilities can already perform this type of iterative optimization.

The Future of AI-Powered Development

What about traditional product development steps like market research, segmentation, and feature prioritization?
Some will be absorbed into these four stages, while others, like extensive market analysis, will become less critical as development accelerates. An even more transformative shift is on the horizon: natural language interfaces that guide product developers through the entire process. Imagine describing a vague product idea, and AI not only builds it but also evaluates its business viability. This shift could redefine how companies structure development teams, or even empower individuals to create software on demand, much like smartphones democratized video production. As GenAI pioneers the next frontier, software development is poised to become one of its most revolutionary applications.
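The stage-1 idea of simulating consumer behavior can be illustrated with a deliberately crude stand-in for an LLM persona: a decision rule whose willingness to pay scales with simulated income. This is a toy illustration of the concept only — not the GPT-3.5 methodology used in the study — and the `budget_share` parameter is an arbitrary assumption:

```python
def simulated_buyer(income_usd, laptop_price_usd, budget_share=0.02):
    """Toy persona: buys if the laptop costs at most `budget_share` of income.

    `budget_share` is an arbitrary illustrative parameter, not an
    empirically derived figure.
    """
    return laptop_price_usd <= income_usd * budget_share


# The lower-income persona declines the same laptop the higher-income one buys:
low_income_buys = simulated_buyer(50_000, 1_200)    # 1,200 > 1,000 budget
high_income_buys = simulated_buyer(120_000, 1_200)  # 1,200 <= 2,400 budget
```

An LLM-based simulation replaces this hard-coded rule with a prompted persona, but the experimental loop is the same: vary the simulated attribute, hold the product constant, and observe how the decision changes.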

Healthcare Cloud Computing

Cloud Computing in Healthcare: Ensuring HIPAA Compliance Amid Growing Adoption

As healthcare organizations increasingly turn to cloud computing for scalable and accessible IT services, ensuring HIPAA compliance remains a top priority. The global healthcare cloud computing market is projected to grow from $53.8 billion in 2024 to $120.6 billion by 2029, according to a MarketsandMarkets report. A 2023 Forrester report also highlighted that healthcare organizations are spending an average of .5 million annually on cloud services, with public cloud adoption on the rise. While cloud computing offers benefits like enhanced data mobility and cost efficiency, maintaining a HIPAA-compliant relationship with cloud service providers (CSPs) requires careful attention to regulations, establishing business associate agreements (BAAs), and proactively addressing cloud security risks.

Understanding HIPAA’s Role in Cloud Computing

The National Institute of Standards and Technology (NIST) defines cloud computing as a model that provides on-demand access to shared computing resources. Based on this framework, the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) has issued guidance on how HIPAA’s Security, Privacy, and Breach Notification Rules apply to cloud computing.

Under the HIPAA Security Rule, CSPs classified as business associates must adhere to specific standards for safeguarding protected health information (PHI). This includes mitigating the risks of unauthorized access to administrative tools and implementing internal controls to restrict access to critical operations like storage and memory. HIPAA’s Privacy Rule further restricts the use or disclosure of PHI by CSPs, even when they offer “no-view services.” CSPs cannot block a covered entity’s access to PHI, even in the event of a payment dispute.
Additionally, the Breach Notification Rule requires business associates, including CSPs, to promptly report any breach of unsecured PHI. Healthcare organizations engaging with CSPs should consult legal counsel and follow standard procedures for establishing HIPAA-compliant vendor relationships.

The Importance of Business Associate Agreements (BAAs)

A BAA is essential for ensuring that a CSP is contractually bound to comply with HIPAA. OCR emphasizes that when a covered entity engages a CSP to create, receive, or transmit electronic PHI (ePHI), the CSP becomes a business associate under HIPAA. Even if the CSP cannot access encrypted PHI, it is still classified as a business associate due to its involvement in storing and processing PHI. In 2016, the absence of a BAA led to a $2.7 million settlement between Oregon Health & Science University and OCR after the university stored the PHI of over 3,000 individuals on a cloud server without the required agreement.

BAAs play a crucial role in defining the permitted uses of PHI and ensure that both the healthcare organization and the CSP understand their responsibilities under HIPAA. They also outline protocols for breach notifications and security measures, ensuring both parties are aligned on handling potential security incidents.

Key Cloud Security Considerations

Despite the protections of a BAA, there are inherent risks in partnering with any new vendor, and staying informed on cloud security threats is vital for mitigating potential risks proactively. In a 2024 report, the Cloud Security Alliance (CSA) identified misconfiguration, inadequate change control, and identity management as the top threats to cloud computing. The report also pointed to the rising sophistication of cyberattacks, supply chain risks, and the proliferation of ransomware-as-a-service as growing concerns. By understanding these risks and establishing clear security policies with CSPs, healthcare organizations can better safeguard their data.
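Several of the CSA’s top threats — misconfiguration in particular — can be screened for before ePHI ever lands in cloud storage. A minimal, hypothetical pre-flight check over a storage configuration dictionary; the control names here are invented for illustration, and a real check would inspect your CSP’s actual configuration APIs:

```python
# Hypothetical control names for illustration; map these to your CSP's
# real configuration fields (e.g., bucket encryption and access settings).
REQUIRED_CONTROLS = {
    "encryption_at_rest": True,
    "encryption_in_transit": True,
    "access_logging": True,
    "public_access_blocked": True,
}


def phi_storage_violations(config):
    """Return the names of required controls this storage config fails."""
    return [name for name, required in REQUIRED_CONTROLS.items()
            if config.get(name) != required]
```

Running such a check in a deployment pipeline turns "inadequate change control" from a standing risk into a failed build, before patient data is exposed.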
Prioritizing security, establishing robust BAAs, and ensuring HIPAA compliance will allow healthcare organizations to fully leverage the advantages of cloud computing while maintaining the privacy and security of patient information.

Conversational Commerce Explained

Conversational commerce is a modern approach to customer engagement and sales that leverages chat-based interfaces (messaging apps, chatbots, and voice assistants) to facilitate seamless, personalized, real-time interactions between businesses and customers. It combines the power of conversational AI with e-commerce to create a more natural and interactive shopping experience.

1. What is Conversational Commerce?
Conversational commerce allows customers to interact with brands through text or voice conversations instead of traditional methods like browsing websites or using apps. It enables businesses to engage with customers in a more personalized, immediate, and convenient way through tools such as messaging apps, chatbots, and voice assistants.

2. How Does Conversational Commerce Work?
Conversational commerce uses artificial intelligence (AI), natural language processing (NLP), and machine learning to understand and respond to customer queries.

3. Key Features of Conversational Commerce
a) Personalization
b) Real-Time Interaction
c) Omnichannel Support
d) Automation
e) Seamless Transactions

4. Benefits of Conversational Commerce
a) Improved Customer Experience
b) Higher Engagement
c) Increased Sales
d) Cost Efficiency
e) 24/7 Availability

5. Examples of Conversational Commerce
a) Chatbots
b) Voice Assistants
c) Social Media Messaging
d) In-App Messaging

6. Technologies Powering Conversational Commerce
a) Artificial Intelligence (AI)
b) Natural Language Processing (NLP)
c) Machine Learning
d) APIs and Integrations

7. The Future of Conversational Commerce

8. Challenges of Conversational Commerce

In summary, conversational commerce is transforming the way businesses interact with customers by making shopping more conversational, personalized, and convenient. It is a key trend in the future of e-commerce and customer engagement. Content updated February 2025.
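At its simplest, the NLP layer described above maps a free-text message to an intent that drives the next step in the conversation. A toy keyword-overlap matcher makes the idea concrete — production systems use trained NLU models, and the intent names and keyword sets here are illustrative assumptions:

```python
# Illustrative intents; real deployments learn these from labeled utterances.
INTENTS = {
    "order_status": {"order", "shipped", "tracking", "delivery"},
    "product_question": {"price", "size", "color", "stock"},
    "human_handoff": {"agent", "human", "representative"},
}


def classify_intent(message):
    """Return the intent whose keywords overlap the message most, or None."""
    words = set(message.lower().split())
    best, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best
```

Once an intent is identified, the bot can fetch the order status, answer the product question, or route the chat to a live agent, which is the core loop behind most chatbot-driven commerce flows.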

Zero ETL

What is Zero-ETL?

Zero-ETL represents a transformative approach to data integration and analytics by bypassing the traditional ETL (Extract, Transform, Load) pipeline. Unlike conventional ETL processes, which extract data from various sources, transform it to fit specific formats, and then load it into a data repository, Zero-ETL eliminates these steps. Instead, it enables direct querying and analysis of data from its original source, delivering real-time insights without intermediate data storage or extensive preprocessing. This method simplifies data management, reducing latency and operational costs while enhancing the efficiency of data pipelines. As the demand for real-time analytics and the volume of data continue to grow, Zero-ETL offers a more agile and effective solution for modern data needs.

Challenges Addressed by Zero-ETL

Benefits of Zero-ETL

Use Cases for Zero-ETL

In Summary

Zero-ETL transforms data management by directly querying and leveraging data in its original format, addressing many limitations of traditional ETL processes. It enhances data quality, streamlines analytics, and boosts productivity, making it a compelling choice for modern organizations facing increasing data complexity and volume. Embracing Zero-ETL can lead to more efficient data processes and faster, more actionable insights, positioning businesses for success in a data-driven world.

Components of Zero-ETL

Zero-ETL involves various components and services tailored to specific analytics needs and resources.

Advantages and Disadvantages of Zero-ETL

Comparison: Zero-ETL vs. Traditional ETL

| Feature | Zero-ETL | Traditional ETL |
| --- | --- | --- |
| Data Virtualization | Seamless data duplication through virtualization | May face challenges with data virtualization due to discrete stages |
| Data Quality Monitoring | Automated approach may lead to quality issues | Better monitoring due to discrete ETL stages |
| Data Type Diversity | Supports diverse data types with cloud-based data lakes | Requires additional engineering for diverse data types |
| Real-Time Deployment | Near real-time analysis with minimal latency | Batch processing limits real-time capabilities |
| Cost and Maintenance | More cost-effective with fewer components | More expensive due to higher computational and engineering needs |
| Scale | Scales faster and more economically | Scaling can be slow and costly |
| Data Movement | Minimal or no data movement required | Requires data movement to the loading stage |

Comparison: Zero-ETL vs. Other Data Integration Techniques

Top Zero-ETL Tools

Conclusion

Transitioning to Zero-ETL represents a significant advancement in data engineering. While it offers increased speed, enhanced security, and scalability, it also introduces new challenges, such as the need for updated skills and cloud dependency. Zero-ETL addresses the limitations of traditional ETL and provides a more agile, cost-effective, and efficient solution for modern data needs, reshaping the landscape of data management and analytics.
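The "no intermediate loading stage" idea can be sketched as a streaming aggregation that computes an insight directly from the raw source, never materializing a transformed copy. This is a conceptual illustration in pure Python — real zero-ETL offerings (e.g., warehouse-native integrations) operate at a very different scale, and the column names are invented for the example:

```python
import csv
import io


def revenue_by_region(source):
    """Aggregate directly from the raw source stream.

    No extract-to-staging, no transformed intermediate table: each row is
    read, used, and discarded, so no second copy of the data ever exists.
    """
    totals = {}
    for row in csv.DictReader(source):
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["amount"])
    return totals


# Querying the original source in place:
raw = io.StringIO("region,amount\nwest,100\neast,50\nwest,25\n")
totals = revenue_by_region(raw)  # {'west': 125.0, 'east': 50.0}
```

A traditional ETL pipeline would first extract these rows to staging, transform them, and load them into a warehouse before any query ran; here the query and the source are only one step apart, which is the latency and cost argument in the table above.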
