
Next Gen Commerce Cloud

Salesforce has launched the next generation of Commerce Cloud, delivering a unified platform that connects B2C, DTC, and B2B commerce, along with Order Management, Payments, and more, to drive seamless customer experiences and revenue growth. With these innovations, businesses can scale across digital and physical channels while leveraging trusted AI and enterprise-wide data for smarter operations.

Key features include Autonomous Agentforce Agents, which enhance commerce for merchants, buyers, and shoppers by automating tasks such as product recommendations and order tracking. Companies like MillerKnoll have seen success by using Commerce Cloud’s innovations to scale their workforce and drive revenue across multiple channels.

New Agentforce Agents for Commerce — Merchant, Buyer, and Personal Shopper — autonomously manage tasks and improve the customer journey. They handle tasks without human intervention, such as product recommendations or order lookups, drawing insights from rich data sources like customer interactions, inventory, orders, and reviews. By tapping into unified data, these agents augment employees, offering tailored experiences and increasing efficiency, while strictly adhering to privacy and security standards.

Salesforce’s Commerce Cloud now natively integrates every part of the commerce journey, helping businesses break down data silos and offer consistent, personalized interactions. As Michael Affronti, SVP and GM of Commerce Cloud, highlights: “Unified commerce is the future, breaking down silos to deliver seamless experiences across all channels.”

Key new features and functionalities include:

With these advancements, Commerce Cloud empowers businesses to create seamless, AI-powered experiences that drive customer loyalty, operational efficiency, and revenue growth across every touchpoint.


Apple New AI

Apple Unveils New AI Features at “Glowtime” Event

In typical fashion, Apple revealed its latest product updates on Monday with a pre-recorded keynote titled “Glowtime,” referencing the glowing ring around the screen when Apple Intelligence is activated. Though primarily a hardware event, the real highlight was the suite of AI-powered features coming to the new iPhone models this fall. The 98-minute presentation covered updates to iPhones, AirPods, and the Apple Watch, with Apple Intelligence being the thread tying together user experiences across all devices. MacRumors has published a detailed list of all announcements, including the sleep apnea detection feature for the Apple Watch and new hearing health tools for AirPods Pro 2.

Key AI Developments for Brand Marketers

Apple Intelligence was first introduced at its WWDC event in June, focusing on using Apple’s large language model (LLM) to perform tasks on-device with personalized results. It draws from user data in native apps like Calendar and Mail, enabling AI to handle tasks like image generation, photo searches, and AI-generated notifications. The keynote also introduced a new “Visual Intelligence” feature for iPhone 16 models, acting as a native visual search tool. By pressing the new “camera control” button, users can access this feature to perform searches directly from their camera, such as getting restaurant info or recognizing a dog breed.

Apple’s AI-powered visual search offers a strategic opportunity for brands. The information for local businesses is pulled from Apple Maps, which relies on sources like Yelp and Foursquare. Brands should ensure their listings are well-maintained on these platforms and consider optimizing their digital presence for visual search tools like Google Lens, which integrates with Apple’s search.

The Camera as an Input Device and the Rise of Spatial Content

The camera’s role as an input device has been expanding, with Apple emphasizing photography as a key feature of its new iPhones. This year, the iPhone 16 introduces a new camera control button, offering enhanced haptic feedback for smoother control. Third-party apps like Snapchat will also benefit from this addition, giving users more refined camera capabilities. More importantly, iPhone 16 models can now capture spatial content, including both photos and audio, optimized for the Vision Pro mixed-reality headset. Apple’s move to integrate spatial content aligns with its goal to position the iPhone as a professional creator tool. Brands can capitalize on this by exploring augmented reality (AR) features or creating immersive user-generated content experiences.

Apple’s Measured Approach to AI

While Apple is clearly pushing AI, it is taking a cautious, phased approach. Though the new iPhones will hit the market soon, the full range of Apple Intelligence features will roll out gradually, starting in October with tools like the AI writing assistant and photo cleanup. More advanced features will debut next spring. This measured approach allows Apple to fine-tune its AI, avoiding rushed releases that could compromise user experience.

For brands, this offers a lesson in pacing AI adoption: prioritize quality and customer experience over speed. Rather than rushing to integrate AI, companies should take time to understand how it can meaningfully enhance user interactions, focusing on trust and consistency to maintain customer loyalty.
By following Apple’s lead and gradually introducing AI capabilities, brands can build trust, sustain anticipation, and ensure they offer technology that genuinely improves the customer experience.


Challenges for Rural Healthcare Providers

Rural healthcare providers have long grappled with challenges due to their geographic isolation and limited financial resources. The advent of digital health transformation, however, has introduced a new set of IT-related obstacles for these providers.

EHR Adoption and New IT Challenges

While federal legislation has successfully promoted Electronic Health Record (EHR) adoption across both rural and urban healthcare organizations, implementing an EHR system is only one component of a comprehensive health IT strategy. Rural healthcare facilities encounter numerous IT barriers, including inadequate infrastructure, interoperability issues, constrained resources, workforce shortages, and data security concerns.

Limited Broadband Access

Broadband connectivity is essential for leveraging health IT effectively. However, there is a significant disparity in broadband access between rural and urban areas. According to a Federal Communications Commission (FCC) report, approximately 96% of the U.S. population had access to broadband at the FCC’s minimum speed benchmark in 2019, compared to just 73.6% of rural Americans.

The lack of broadband infrastructure hampers rural organizations’ ability to utilize IT features that enhance care delivery, such as electronic health information exchange (HIE) and virtual care. Rural facilities, in particular, rely heavily on HIE and telehealth to bridge gaps in their services. For instance, HIE facilitates data sharing between smaller ambulatory centers and larger academic medical centers, while telehealth allows rural clinicians to consult with specialists in urban centers. Additionally, telehealth can help patients in rural areas avoid long travel distances for care. However, without adequate broadband access, these services remain impractical.

Despite persistent disparities, the rural-urban broadband gap has narrowed in recent years. Data from the FCC indicates that since 2016, the number of people in rural areas without access to 25/3 Mbps service has decreased by more than 46%. Various programs, including the FCC’s Rural Health Care Program and USDA funding initiatives, aim to expand broadband access in rural regions.

Interoperability Challenges

While HIE adoption is rising nationally, rural healthcare organizations lag behind their urban counterparts in terms of interoperability capabilities, as noted in a 2023 GAO report. Data from a 2021 American Hospital Association survey revealed that rural hospitals are less likely to engage in national or regional HIE networks compared to medium and large hospitals. Rural providers often lack the economic and technological resources to participate in electronic HIE networks, leading them to rely on manual data exchange methods such as fax or mail. Additionally, rural providers are less likely to join EHR vendor networks for data exchange, partly because they often use different systems from those in other local settings, complicating health data exchange.

Federal initiatives like TEFCA aim to improve interoperability through a network-of-networks approach, allowing organizations to connect to multiple HIEs through a single connection. However, TEFCA’s voluntary participation model and persistent barriers such as IT staffing shortages and broadband gaps still pose challenges for rural providers.

Financial Constraints

Rural hospitals often operate with slim profit margins due to lower patient volumes and higher rates of uninsured or underinsured patients.
The financial strain is exacerbated by declining Medicare and Medicaid reimbursements. According to KFF, the median operating margin for rural hospitals was 1.5% in 2019, compared to 5.2% for other hospitals. With limited budgets, rural healthcare organizations struggle to invest in advanced health IT systems and the necessary training and maintenance. Many small rural hospitals are turning to cloud-based EHR platforms as a cost-effective solution. Cloud-based EHRs reduce the need for substantial upfront hardware investments and offer monthly subscription fees, some as low as $100 per month.

Workforce Challenges

The healthcare sector is facing widespread staff shortages, including a lack of skilled health IT professionals. Rural areas are disproportionately affected by these shortages. An insufficient number of IT specialists can impede the adoption and effective use of health IT in these regions. To address workforce gaps, the ONC suggests strategies such as cross-training multiple staff members in health IT functions and offering additional training opportunities. Some networks, like OCHIN, have secured grants to develop workforce programs, but limited broadband access can hinder participation in virtual training programs, highlighting the need for expanded broadband infrastructure.

Data Security Concerns

Healthcare data breaches have surged, with a 256% increase in large breaches reported to the Office for Civil Rights (OCR) over the past five years. Rural healthcare organizations, often operating with constrained budgets, may lack the resources and staff to implement robust data security measures, leaving them vulnerable to cyber threats. A cyberattack on a rural healthcare organization can disrupt patient care, as patients may need to travel significant distances to reach alternative facilities.

To address cybersecurity challenges, recent legislative efforts like the Rural Hospital Cybersecurity Enhancement Act aim to develop comprehensive strategies for rural hospital cybersecurity and provide educational resources for staff training. In the interim, rural healthcare organizations can use free resources such as the Health Industry Cybersecurity Practices (HICP) publication to guide their cybersecurity strategies, including recommendations for managing vulnerabilities and protecting email systems.

Does your practice need help meeting these challenges? Contact Tectonic today.


AI-Powered Field Service

Salesforce has introduced new AI-powered field service capabilities designed to streamline operations for dispatchers, technicians, and field service leaders. Leveraging the Salesforce platform and Data Cloud, these innovations aim to expedite time-consuming processes and enhance customer satisfaction by making field service operations more proactive and efficient.

Why it matters: Field service teams currently spend only 32% of their time interacting with customers, with the remaining 68% consumed by administrative tasks like manually entering case notes. With 78% of field service workers in AI-enabled organizations reporting that AI helps save time, Salesforce’s new tools address these inefficiencies head-on.

Key AI-driven innovations for Field Service:

Availability:

Paul Whitelam, GM & SVP of Salesforce Field Service, notes, “The future of field service lies in the seamless integration of AI, data, and human expertise. Our new capabilities set new standards for efficiency and service delivery.” Rudi Khoury, Chief Digital Officer at Fisher & Paykel, adds, “With Salesforce Field Service, we’re not just embracing AI and data-driven insights — we’re advancing into the future of field service, achieving unprecedented efficiency and exceptional service.”


Deepfake Detection With New Tool

Pindrop Expands Deepfake Detection with New Tool

On Thursday, voice authentication vendor Pindrop expanded its deepfake detection capabilities with the preview release of Pindrop Pulse Inspect, a tool designed to detect AI-generated speech in digital audio files. This new tool builds on Pindrop’s earlier launch of Pindrop Pulse at the start of the year. While Pindrop Pulse initially targeted call centers, Pulse Inspect broadens its reach, catering to media organizations, nonprofits, government agencies, and social networks. Pindrop Pulse is already integrated with the company’s fraud protection and authentication platform. The new Pulse Inspect tool allows users to upload audio files to the Pindrop platform to determine if they contain synthetic speech, providing deepfake scores in the process.

The introduction of Pulse Inspect is timely, coinciding with heightened concerns over deepfakes as the U.S. general election in November approaches. In recent months, Pindrop has tested its technology on high-profile cases. The company analyzed a deepfake audio clip of presidential candidate Kamala Harris, posted on X by Elon Musk, and discovered partial deepfakes in the audio. Pindrop also examined a deepfake of Elon Musk, released on July 24, identifying voice cloning technology from vendor ElevenLabs as the source. Additionally, Pindrop detected a fake robocall, generated using ElevenLabs’ technology, impersonating President Joe Biden before the January Democratic presidential primary. ElevenLabs has publicly stated its commitment to preventing the misuse of audio AI tools.

“The human ear can no longer reliably distinguish between real and synthetically generated audio,” said Rahul Sood, Pindrop’s Chief Product Officer, during a discussion on the risks deepfakes pose for the upcoming election. “It’s almost impossible to have a high level of confidence without assistance.”

Fighting AI with AI

Analysts emphasize the necessity of tools like Pulse Inspect in the age of generative AI. “They’re fighting AI with AI,” said Lisa Martin, an analyst at the Futurum Group, highlighting the importance of Pindrop’s technology. According to Pindrop, their detection technology is trained on over 350 deepfake generation tools, 20 million unique utterances, and more than 40 languages. “We know how powerful generative AI is—it can be used for good, but it can also be weaponized, as we’re seeing,” Martin noted. She added that with the increasing ease of creating deepfakes, the demand for detection tools like Pulse Inspect will only grow.

As deepfakes continue to proliferate, companies like Pindrop and competitors such as Resemble AI are racing to develop these detection solutions. With Pulse Inspect, Pindrop is extending its technology’s application beyond call centers. Pindrop has also partnered with Respeecher, a voice cloning vendor that collaborates with Hollywood. “Respeecher is working with Pindrop to ensure their synthetic voice technology for Hollywood is not misused,” said Martin, stressing the importance of ethical development and use of AI voice cloning technology. Pulse Inspect is positioned to assist media companies, social media networks, nonprofits, and government organizations in navigating the challenges of AI-generated audio.

The Challenge of Scaling Deepfake Detection

While Pindrop is well-equipped to detect deepfakes, scaling this technology could be costly and complex, according to Forrester Research analyst Mo Allibhai.
“Implementing this technology at scale is expensive, even from an integration standpoint,” said Allibhai. “We need to be selective in how we deploy it.” Allibhai suggested that edge AI, such as Apple’s upcoming generative AI system for iPhones, could ease these challenges by reducing the reliance on cloud computing, making solutions like Pulse Inspect more viable in the long term.

Pindrop Pulse Inspect offers an API-driven batch-processing platform and user interface, designed to meet the evolving needs of organizations facing the growing threat of deepfake audio.
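To make the batch-processing idea concrete, here is a minimal client sketch for scoring a folder of audio files against a detection service. The endpoint URL, authentication scheme, response field, and threshold are assumptions for illustration only; this is not Pindrop’s actual API.

```python
# Hypothetical sketch of an API-driven batch deepfake check.
# The endpoint, auth scheme, and response field are illustrative
# assumptions, NOT Pindrop's actual API.
import pathlib
import requests

API_URL = "https://api.example.com/v1/deepfake/inspect"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def score_audio_file(path: pathlib.Path) -> float:
    """Upload one audio file and return its synthetic-speech score (0-1)."""
    with path.open("rb") as audio:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": (path.name, audio, "audio/wav")},
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json()["deepfake_score"]  # assumed response field

if __name__ == "__main__":
    for wav in sorted(pathlib.Path("audio_batch").glob("*.wav")):
        score = score_audio_file(wav)
        # The threshold is arbitrary; a real deployment would calibrate it.
        label = "likely synthetic" if score >= 0.8 else "likely genuine"
        print(f"{wav.name}: {score:.2f} ({label})")
```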


Autonomous AI Service Agents

Salesforce Set to Launch Autonomous AI Service Agents

Considering Tectonic only first wrote about Agentic AI in late June, it’s like Christmas in July! Salesforce is gearing up to introduce a new generation of customer service chatbots that leverage advanced AI tools to autonomously navigate through various actions and workflows. These bots, termed “autonomous AI agents,” are currently in pilot testing and are expected to be released later this year.

Named Einstein Service Agent, these autonomous AI bots aim to utilize generative AI to understand customer intent, trigger workflows, and initiate actions within a user’s Salesforce environment, according to Ryan Nichols, Service Cloud’s chief product officer. By integrating natural language processing, predictive analytics, and generative AI, Einstein Service Agents will identify scenarios and resolve customer inquiries more efficiently.

Traditional bots require programming with rules-based logic to handle specific customer service tasks, such as processing returns, issuing refunds, changing passwords, and renewing subscriptions. In contrast, the new autonomous bots, enhanced by generative AI, can better comprehend customer issues (e.g., interpreting “send back” as “return”) and summarize the steps to resolve them. Einstein Service Agent will operate across platforms like WhatsApp, Apple Messages for Business, Facebook Messenger, and SMS text, and will also process text, images, video, and audio that customers provide.

Despite the promise of these new bots, their effectiveness is crucial, emphasized Liz Miller, an analyst at Constellation Research. If these bots fail to perform as expected, they risk wasting even more customer time than current technologies and damaging customer relationships. Miller also noted that successful implementation of autonomous AI agents requires human oversight for instances when the bots encounter confusion or errors. Customers, whether in B2C or B2B contexts, are often frustrated with the limitations of rules-based bots and prefer direct human interaction. It is annoying enough to be on the telephone repeating “live person” over and over again. It would be tragic to have to do it online, too.

“It’s essential that these bots can handle complex questions,” Miller stated. “Advancements like this are critical, as they can prevent the bot from malfunctioning when faced with unprogrammed scenarios. However, with significant technological advancements like GenAI, it’s important to remember that human language and thought processes are intricate and challenging to map.”

Nichols highlighted that the forthcoming Einstein Service Agent will be simpler to set up, as it reduces the need to manually program thousands of potential customer requests into a conversational decision tree. This new technology, which can understand multiple word permutations behind a service request, could potentially lower the need for extensive developer and data scientist involvement for Salesforce users. The pricing details for the autonomous Einstein Service Agent will be announced at its release.
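The contrast described above, rigid rules-based routing versus generative intent interpretation, can be sketched in a few lines. This is a toy illustration of the general idea, not Salesforce’s Einstein Service Agent implementation; the action names and the stand-in model call are assumptions.

```python
# Contrast sketch: rigid keyword routing vs. intent interpretation.
# Illustrative only; not Salesforce's implementation.

RULES = {
    "return": "start_return",
    "refund": "issue_refund",
    "password": "reset_password",
}

def route_with_rules(message: str) -> str | None:
    """Classic bot: only the exact keywords it was programmed for are understood."""
    text = message.lower()
    for keyword, action in RULES.items():
        if keyword in text:
            return action
    return None  # "Sorry, I didn't understand that."

def route_with_llm(message: str) -> str:
    """Generative approach: ask a language model to map free-form phrasing
    to one of the supported actions. The call below is a stand-in; a real
    system would invoke an actual LLM with this kind of prompt."""
    prompt = (
        "Map the customer message to one of: start_return, issue_refund, "
        f"reset_password, escalate_to_human.\nMessage: {message}\nAction:"
    )
    return fake_llm_completion(prompt)

def fake_llm_completion(prompt: str) -> str:
    # Placeholder so the sketch runs; substitute a real model call here.
    return "start_return" if "send back" in prompt.lower() else "escalate_to_human"

if __name__ == "__main__":
    msg = "I want to send back the shoes I bought last week"
    print(route_with_rules(msg))  # None -- the rules bot is stuck
    print(route_with_llm(msg))    # start_return -- the intent is interpreted
```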


Managing Data Quality in an AI World

Each year, Monte Carlo surveys real data professionals about the state of their data quality. This year, we turned our gaze to the shadow of AI—and the message was clear: managing data quality in an AI world is getting harder. Data quality risks are evolving — and data quality management isn’t.

Among the 200 data professionals polled about the state of enterprise AI, a staggering 91% said they were actively building AI applications, but two out of three admitted to not completely trusting the data these applications are built on. And “not completely” leaves a lot of room for error in the world of AI. Far from pushing the industry toward better habits and more trustworthy outputs, the introduction of GenAI seems to have exacerbated the scope and severity of data quality problems.

The Core Issue

Why is this happening, and what can we do about it?

2024 State of Reliable AI Survey

The Wakefield Research survey, commissioned by Monte Carlo in April 2024, polled 200 data leaders and professionals. It comes as data teams grapple with the adoption of generative AI. The findings highlight several key statistics that indicate the current state of the AI race and professional sentiment about the technology:

While AI is widely expected to be among the most transformative technological advancements of the last decade, these findings suggest a troubling disconnect between data teams and business stakeholders. More importantly, they suggest a risk of downward pressure toward AI initiatives without a clear understanding of the data and infrastructure that power them.

The State of AI Infrastructure—and the Risks It’s Hiding

Even before the advent of GenAI, organizations were dealing with exponentially greater volumes of data than in decades past. Since adopting GenAI programs, 91% of data leaders report that both applications and the number of critical data sources have increased even further, deepening the complexity and scale of their data estates in the process. There’s no clear solution for a successful enterprise AI architecture. Survey results reveal how data teams are approaching AI:

As the complexity of AI’s architecture and the data that powers it continues to expand, one perennial problem is expanding with it: data quality issues.

The Modern Data Quality Problem

While data quality has always been a challenge for data teams, this year’s survey results suggest the introduction of GenAI has exacerbated both the scope and severity of the problem. More than half of respondents reported experiencing a data incident that cost their organization more than $100K. And we didn’t even ask how many they experienced. Previous surveys suggest an average of 67 data incidents per month of varying severity. This is a shocking figure when you consider that 70% of data leaders surveyed also reported that it takes longer than four hours to find a data incident—and at least another four hours to resolve it.

But the real deal breaker is this: even with 91% of teams reporting that their critical data sources are expanding, an alarming 54% of teams surveyed still rely on manual testing or have no initiative in place at all to address data quality in their AI.
This anemic approach to data quality will have a demonstrable impact on enterprise AI applications and data products in the coming months—allowing more data incidents to slip through the cracks, multiplying hallucinations, diminishing the safety of outputs, and eroding confidence in both the AI and the companies that build them.

Is Your Data AI-Ready?

While a lot has certainly changed over the last 12 months, one thing remains absolutely clear: if AI is going to succeed, data quality needs to be front and center.

“Data is the lifeblood of all AI — without secure, compliant, and reliable data, enterprise AI initiatives will fail before they get off the ground. The most advanced AI projects will prioritize data reliability at each stage of the model development life cycle, from ingestion in the database to fine-tuning or RAG.” — Lior Solomon, VP of Data at Drata

The success of AI depends on the data—and the success of the data depends on your team’s ability to efficiently detect and resolve the data quality issues that impact it. By curating and pairing your own first-party context data with modern data quality management solutions like data observability, your team can mitigate the risks of building fast and deliver reliable business value for your stakeholders at every stage of your AI adventure.

What can you do to improve data quality management in your organization?
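As one concrete illustration of the automated checks that data observability tooling runs in place of manual testing, here is a minimal sketch of freshness, volume, and null-rate checks. The file name, column names, and thresholds are assumptions for illustration, not any particular vendor’s implementation.

```python
# Minimal sketch of automated data quality checks of the kind the survey
# contrasts with manual testing. Table name, thresholds, and columns are
# illustrative assumptions.
from datetime import datetime, timedelta, timezone
import pandas as pd

def check_freshness(df: pd.DataFrame, ts_col: str, max_lag_hours: int = 4) -> bool:
    """Flag the table if the newest record is older than the allowed lag."""
    newest = pd.to_datetime(df[ts_col], utc=True).max()
    lag = datetime.now(timezone.utc) - newest
    return lag <= timedelta(hours=max_lag_hours)

def check_volume(df: pd.DataFrame, expected_min_rows: int = 1000) -> bool:
    """Flag the table if today's load is suspiciously small."""
    return len(df) >= expected_min_rows

def check_nulls(df: pd.DataFrame, critical_cols: list[str], max_null_rate: float = 0.01) -> bool:
    """Flag the table if critical columns exceed the allowed null rate."""
    return all(df[c].isna().mean() <= max_null_rate for c in critical_cols)

if __name__ == "__main__":
    orders = pd.read_parquet("orders.parquet")  # assumed input file
    results = {
        "freshness": check_freshness(orders, ts_col="updated_at"),
        "volume": check_volume(orders),
        "nulls": check_nulls(orders, critical_cols=["order_id", "amount"]),
    }
    failed = [name for name, ok in results.items() if not ok]
    print("All checks passed" if not failed else f"Failed checks: {failed}")
```

In practice these checks run on a schedule against every critical table and alert the owning team, which is what shrinks the multi-hour detection windows the survey describes.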


Sensitive AI Knowledge Models

Based on the writings of David Campbell in Generative AI.

“Crime is the spice of life.” This quote from an unnamed frontier model engineer has been resonating for months, ever since it was mentioned by a coworker after a conference. It sparked an interesting thought: for an AI model to be truly useful, it needs comprehensive knowledge, including the potentially dangerous information we wouldn’t really want it to share with just anyone. For example, a student trying to understand the chemical reaction behind an explosion needs the AI to accurately explain it. While this sounds innocuous, it can lead to the darker side of malicious LLM extraction. The student needs an accurate enough explanation to understand the chemical reaction without obtaining a chemical recipe to cause the reaction.

[Illustration: an abstract artwork symbolizing the balance between AI knowledge and ethical responsibility.]

AI red-teaming is a process born of cybersecurity origins. The DEFCON conference co-hosted by the White House held the first Generative AI Red Team competition. Thousands of attendees tested eight large language models from an assortment of AI companies. In cybersecurity, red-teaming implies an adversarial relationship with a system or network. A red-teamer’s goal is to break into, hack, or simulate damage to a system in a way that emulates a real attack.

When entering the world of AI red teaming, the initial approach often involves testing the limits of the LLM, such as trying to extract information on how to build a pipe bomb. This is not purely out of curiosity but also because it serves as a test of the model’s boundaries. The red-teamer has to know the correct way to make a pipe bomb. Knowing the correct details about sensitive topics is crucial for effective red teaming; without this knowledge, it’s impossible to judge whether the model’s responses are accurate or mere hallucinations.

This realization highlights a significant challenge: it’s not just about preventing the AI from sharing dangerous information, but ensuring that when it does share sensitive knowledge, it’s not inadvertently spreading misinformation. Balancing the prevention of harm through restricted access to dangerous knowledge and avoiding greater harm from inaccurate information falling into the wrong hands is a delicate act. AI models need to be knowledgeable enough to be helpful but not so uninhibited that they become a how-to guide for malicious activities. The challenge is creating AI that can navigate this ethical minefield, handling sensitive information responsibly without becoming a source of dangerous knowledge.

The Ethical Tightrope of AI Knowledge

Creating dumbed-down AIs is not a viable solution, as it would render them ineffective. However, having AIs that share sensitive information freely is equally unacceptable. The solution lies in a nuanced approach to ethical training, where the AI understands the context and potential consequences of the information it shares.

Ethical Training: More Than Just a Checkbox

Ethics in AI cannot be reduced to a simple set of rules.
It involves complex, nuanced understanding that even humans grapple with. Developing sophisticated ethical training regimens for AI models is essential. This training should go beyond a list of prohibited topics, aiming to instill a deep understanding of intention, consequences, and social responsibility. Imagine an AI that recognizes sensitive queries and responds appropriately, not with a blanket refusal, but with a nuanced explanation that educates the user about potential dangers without revealing harmful details. This is the goal for AI ethics. But it isn’t as if AI will require parental permission for young users to access information, or to run prompt-based queries, just because the request is sensitive.

The Red Team Paradox

Effective AI red teaming requires knowledge of the very things the AI should not share. This creates a paradox similar to hiring ex-hackers for cybersecurity — effective but not without risks. Tools like the WMDP Benchmark help measure and mitigate AI risks in critical areas, providing a structured approach to red teaming. To navigate this, diverse expertise is necessary. Red teams should include experts from various fields dealing with sensitive information, ensuring comprehensive coverage without any single person needing expertise in every dangerous area.

Controlled Testing Environments

Creating secure, isolated environments for testing sensitive scenarios is crucial. These virtual spaces allow safe experimentation with the AI’s knowledge without real-world consequences.

Collaborative Verification

Using a system of cross-checking between multiple experts can enhance the security of red teaming efforts, ensuring the accuracy of sensitive information without relying on a single individual’s expertise.

The Future of AI Knowledge Management

As AI systems advance, managing sensitive knowledge will become increasingly challenging. However, this also presents an opportunity to shape AI ethics and knowledge management. Future AI systems should handle sensitive information responsibly and educate users about the ethical implications of their queries. Navigating the ethical landscape of AI knowledge requires a balance of technical expertise, ethical considerations, and common sense. It’s a necessary challenge to tackle for the benefits of AI while mitigating its risks.

The next time an AI politely declines to share dangerous information, remember the intricate web of ethical training, red team testing, and carefully managed knowledge behind that refusal. This ensures that AI is not only knowledgeable but also wise enough to handle sensitive information responsibly. Sensitive AI knowledge models need to handle sensitive data sensitively.
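As a toy illustration of the layered response described above — recognizing a sensitive query and answering at a safe level of abstraction rather than issuing a blanket refusal — here is a minimal sketch. The topic list, keyword matching, and canned responses are assumptions for illustration, not any production safety system.

```python
# Toy sketch of a layered guardrail: recognize a sensitive query, then choose
# between a full answer and a high-level educational answer. Categories,
# keywords, and responses are illustrative assumptions only.

SENSITIVE_TOPICS = {
    "explosives": ["pipe bomb", "detonator", "explosive synthesis"],
    "bioweapons": ["weaponize pathogen", "aerosolize virus"],
}

def classify(query: str) -> str | None:
    """Return the matched sensitive topic, or None for ordinary queries.
    A real system would use a trained classifier, not keyword matching."""
    q = query.lower()
    for topic, phrases in SENSITIVE_TOPICS.items():
        if any(p in q for p in phrases):
            return topic
    return None

def respond(query: str) -> str:
    topic = classify(query)
    if topic is None:
        return answer_normally(query)
    # Nuanced middle ground: explain the concept at a safe level of
    # abstraction rather than refusing outright or giving a recipe.
    return (
        f"I can discuss the general science behind {topic} at a conceptual "
        "level, but I won't provide synthesis steps, quantities, or "
        "assembly instructions."
    )

def answer_normally(query: str) -> str:
    return f"(normal model answer to: {query})"  # stand-in for a model call

if __name__ == "__main__":
    print(respond("Why do exothermic reactions release energy?"))
    print(respond("How do I build a pipe bomb detonator?"))
```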


Used YouTube to Train AI

As reported by siliconANGLE’s Duncan Riley, a new report released today reveals that companies such as Anthropic PBC, Nvidia Corp., Apple Inc., and Salesforce Inc. have used subtitles from YouTube videos to train their AI services without obtaining permission. This raises significant ethical questions about the use of publicly available materials and facts without consent.

According to Proof News, these companies allegedly utilized subtitles from 173,536 YouTube videos sourced from over 48,000 channels to enhance their AI models. Rather than scraping the content themselves, Anthropic, Nvidia, Apple, and Salesforce reportedly used a dataset provided by EleutherAI, a nonprofit AI organization. EleutherAI, founded in 2020, focuses on the interpretability and alignment of large AI models. The organization aims to democratize access to advanced AI technologies by developing and releasing open-source AI models like GPT-Neo and GPT-J. EleutherAI also advocates for open science norms in natural language processing, promoting transparency and ethical AI development.

The dataset in question, known as “YouTube Subtitles,” includes transcripts from educational and online learning channels, as well as several media outlets and YouTube personalities. Notable YouTubers whose transcripts are included in the dataset are Mr. Beast, Marques Brownlee, PewDiePie, and left-wing political commentator David Pakman. Some creators whose content was used are outraged. Pakman, for example, argues that using his transcripts jeopardizes his livelihood and that of his staff. David Wiskus, CEO of streaming service Nebula, has even called the use of the data “theft.”

Despite the data being publicly accessible, the controversy revolves around the fact that large language models are utilizing it. This situation echoes recent legal actions regarding the use of publicly available data to train AI models. For instance, Microsoft Corp. and OpenAI were sued in November over their use of nonfiction authors’ works for AI training. The class-action lawsuit, led by a New York Times reporter, claimed that OpenAI scraped the content of hundreds of thousands of nonfiction books to develop their AI models. Additionally, The New York Times accused OpenAI, Google LLC, and Meta Platforms Inc. in April of skirting legal boundaries in their use of AI training data.

While the legality of using AI training data remains a gray area, it has yet to be extensively tested in court. Should a case arise, the key issue will likely be whether publicly stated facts, including utterances, can be copyrighted. Relevant U.S. case law includes Feist Publications Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991) and International News Service v. Associated Press (1918). In both cases, the U.S. Supreme Court ruled that facts cannot be copyrighted.


Generative AI for Tableau

Tableau’s first generative AI assistant is now generally available, bringing AI-assisted data prep to the masses.

Earlier this month, Tableau launched its second platform update of 2024, announcing that its first two GenAI assistants would be available by the end of July, with a third set for release in August. The first of these, Einstein Copilot for Tableau Prep, became generally available on July 10.

Tableau initially unveiled its plans to develop generative AI capabilities in May 2023 with the introduction of Tableau Pulse and Tableau GPT. Pulse, an insight generator that monitors data for metric changes and uses natural language to alert users, became generally available in February. Tableau GPT, now renamed Einstein Copilot for Tableau, moved into beta testing in April. Following Einstein Copilot for Tableau Prep, Einstein Copilot for Tableau Catalog is expected to be generally available before the end of July. Einstein Copilot for Tableau Web Authoring is set to follow by the end of August.

With these launches, Tableau joins other data management and analytics vendors like AWS, Domo, Microsoft, and MicroStrategy, which have already made generative AI assistants generally available. Other companies, such as Qlik, DBT Labs, and Alteryx, have announced similar plans but have not yet moved their products out of preview. Tableau’s generative AI capabilities are comparable to those of its competitors, according to Doug Henschen, an analyst at Constellation Research. In some areas, such as data cataloging, Tableau’s offerings are even more advanced. “Tableau is going GA later than some of its competitors. But capabilities are pretty much in line with or more extensive than what you’re seeing from others,” Henschen said.

In addition to the generative AI assistants, Tableau 2024.2 includes features such as embedding Pulse in applications. Based in Seattle and a subsidiary of Salesforce, Tableau has long been a prominent analytics vendor. Its first 2024 platform update highlighted the launch of Pulse, while the final 2023 update introduced new embedded analytics capabilities.

Generative AI assistants are proliferating due to their potential to enable non-technical workers to work with data and increase efficiency for data experts. Historically, the complexity of analytics platforms, requiring coding and data literacy, has limited their widespread adoption. Studies indicate that only about one-quarter of employees regularly work with data. Vendors have attempted to overcome this barrier by introducing natural language processing (NLP) and low-code/no-code features. However, NLP features have been limited by small vocabularies requiring specific business phrasing, while low-code/no-code features only support basic tasks.

Generative AI has the potential to change this dynamic. Large language models like ChatGPT and Google Gemini offer extensive vocabularies and can interpret user intent, enabling true natural language interactions. This makes data exploration and analysis accessible to non-technical users and reduces coding requirements for data experts. In response to advancements in generative AI, many data management and analytics vendors, including Tableau, have made it a focal point of their product development. Tech giants like AWS, Google, and Microsoft, as well as specialized vendors, have heavily invested in generative AI.
Einstein Copilot for Tableau Prep, now generally available, allows users to describe calculations in natural language, which the tool interprets to create formulas for calculated fields in Tableau Prep. Previously, this required expertise in objects, fields, functions, and limitations. Einstein Copilot for Tableau Catalog, set for release later this month, will enable users to add descriptions for data sources, workbooks, and tables with one click. In August, Einstein Copilot for Tableau Web Authoring will allow users to explore data in natural language directly from Tableau Cloud Web Authoring, producing visualizations, formulating calculations, and suggesting follow-up questions.

Tableau’s generative AI assistants are designed to enhance efficiency and productivity for both experts and generalists. The assistants streamline complex data modeling and predictive analysis, automate routine data prep tasks, and provide user-friendly interfaces for data visualization and analysis. “Whether for an expert or someone just getting started, the goal of Einstein Copilot is to boost efficiency and productivity,” said Mike Leone, an analyst at TechTarget’s Enterprise Strategy Group. The planned generative AI assistants for different parts of Tableau’s platform offer unique value in various stages of the data and AI lifecycle, according to Leone.

Doug Henschen noted that the generative AI assistants for Tableau Web Authoring and Tableau Prep are similar to those being introduced by other vendors. However, the addition of a generative AI assistant for data cataloging represents a unique differentiation for Tableau. “Einstein Copilot for Tableau Catalog is unique to Tableau among analytics and BI vendors,” Henschen said. “But it’s similar to GenAI implementations being done by a few data catalog vendors.”

Beyond the generative AI assistants, Tableau’s latest update includes:

Among these non-Copilot capabilities, making Pulse embeddable is particularly significant. Extending generative AI capabilities to work applications will make them more effective. “Embedding Pulse insights within day-to-day applications promises to open up new possibilities for making insights actionable for business users,” Henschen said. Multi-fact relationships are also noteworthy, enabling users to relate datasets with shared dimensions and informing applications that require large amounts of high-quality data. “Multi-fact relationships are a fascinating area where Tableau is really just getting started,” Leone said. “Providing ways to improve accuracy, insights, and context goes a long way in building trust in GenAI and reducing hallucinations.”

While Tableau has launched its first generative AI assistant and will soon release more, the vendor has not yet disclosed pricing for the Copilots and related features. The generative AI assistants are available through a bundle named Tableau+, a premium Tableau Cloud offering introduced in June. Beyond the generative AI assistants, Tableau+ includes advanced management capabilities, simplified data governance, data discovery features, and integration with Salesforce Data Cloud. Generative AI is compute-intensive and costly, so it’s not surprising that Tableau customers will have to pay extra for these capabilities. Some vendors are offering generative AI capabilities for free to attract new users, but Henschen believes costs will eventually be incurred. “Customers will want to understand the cost implications of adding these new capabilities,” Henschen said.


Proper Programmer’s Desk

You’ve probably come across those generic “Proper Programmer’s Desk Items Under $100” lists. For some reason, there’s always that pointless monitor light and back support included. Let me be the first to call you out: if you need a monitor light to type, you’re not really a programmer. And if you spend money on back support but don’t use a random box to store your adapters and cables, you’re probably a poser. Now that we’ve got that out of the way, let’s look into some genuinely indispensable gadgets that every developer should consider.

Simple Silicone USB-C 120W Cable
Even Apple, nudged by EU regulations, has transitioned to USB-C. Owning a robust, full-pin cable that ensures speedy connections between your devices is crucial. Ideal for app developers and tinkering enthusiasts alike, this cable is a steal for its price and versatility.
Price: $2.68
Where: Aliexpress

Baseus 65W GaN Charger
Since everything is wireless and needs charging, a good charger is essential. The Baseus 65W GaN Charger is the best one we’ve come across. No more fumbling under the table for your USB or dealing with dead batteries. Just plug the cable in, and you’re good to go.
Price: $26.04
Where: Aliexpress

External Power Button
Got your setup under the desk or obscured by monitors? An external power button is your new best friend. Just watch out for curious cats! And touch-driven toddlers!
Price: $3.09
Where: Aliexpress

USB 3.2 Hub
The era when USB connectors were used for more than just charging might be fading, but for now, we still need to set up USB drives, keep a mouse receiver nearby, and connect various other USB devices. This hub is the fastest and most affordable option available from a reputable company. Think you are the exception to the rule? Don’t forget headsets, grab-and-go charging blocks, your vape, and the monitor light referenced above.
Price: $24.90
Where: Aliexpress

Hydration
Whether you’re into plain water or cutting-edge nootropics, staying hydrated is key. Snowmonkey flasks are my go-to: durable, excellent at maintaining temperature, and backed by fantastic customer service. They even offered a discount code just for you after hearing about this article!
Promo Code: SuperShort15
Where: Snowmonkey

Earplugs
While noise-canceling headphones are a game-changer, on a budget, simple earplugs are a miracle of their own. Whether foam that expands to fit, kneadable silicone, or rigid types, they’re affordable enough to try them all and find your perfect fit.
Price: $1
Where: Aliexpress

Software Essentials
As developers, our toolkit is incomplete without some top-notch software. While JetBrains or VIM might top your list for IDEs, let’s not spark a war over it. Here are a few essentials:

Flow Launcher
Flow is the ultimate Spotlight open-source alternative for Windows, surpassing everything I’ve used before. Need a fast, on-the-go translation? No problem—just choose a plugin from your settings, and you’re all set. It truly is magical.
Where: Flow Launcher

TickTick
Managing your time becomes essential eventually. TickTick’s straightforward interface lets you jot tasks down and tick them off without fuss.
Where: TickTick

Obsidian
A second brain for storing everything from code snippets to comprehensive project notes. Dive into tutorials on YouTube and explore its vast capabilities.
Where: Obsidian

Camo
Ditch the subpar sub-$100 webcams and use your smartphone instead. Crisp, clear, and cost-effective.
Where: Camo

Consider These Upgrades for the Proper Programmer’s Desk

Good Mechanical Keyboard
We are living in a golden era for mechanical keyboard enthusiasts! You could spend years on YouTube exploring the countless options. Choose your favorite wisely. We’ve opted for the SteelSeries Apex Pro because its keys are analog, allowing you to adjust the sensitivity, making it a dream to type on. Of course, there are other viable options at this price point.

Light Mouse
After extended use, you might start to feel pain and a sense of fullness in your wrists, eventually leading to sharp pain. But don’t worry. Choosing a mouse under 70 grams, like the Logitech SUPERLIGHT 2, can alleviate these issues. You could opt for the first edition, which is cheaper, but it uses a micro USB port, and that’s a dealbreaker for some.

Noise-Canceling Headphones
If you enjoy a bit of music or podcasts while programming (though they’re not the best for concentration), you might want to consider noise-canceling headphones. The best we’ve encountered are the Sony WH-1000XM4. We would suggest the newer version, but some tests indicate the previous model performs better and is more affordable.

Good OLED TV
With TVs now boasting 120Hz refresh rates and various gaming modes, there’s no reason not to own a 55″+ monitor. Believe us, an OLED from LG makes all the difference.

Android Phone
Finally, consider this scenario: you’re outdoors without your laptop, and suddenly, your customer’s service goes down. If you were prepared, you’d simply launch a Linux instance on your phone, open your IDE, and start coding a patch. Of course, you could also rush home, risk using a random computer, or just panic.

And if money really is no object, add a Universal Robots UR20 Collaborative Arm to your desk for just south of $60,000. While marketed for moving pallets, handling packaging, and the like, we think it would be pretty cool running back and forth from the Keurig to your desk with steaming hot coffee.


A Cautionary AI Tale

Oliver Lovstrom, an AI student, wrote an interesting perspective on artificial intelligence — a cautionary AI tale, if you will.

The Theory and Fairy Tale

My first introduction to artificial intelligence was during high school when I began exploring its theories and captivating aspects. In 2018, as self-driving cars were gaining traction, I decided to create a simple autonomous vehicle for my final project. This project filled me with excitement and hope, spurring my continued interest and learning in AI. However, I had no idea that within a few years, AI would become significantly more advanced and accessible, reaching the masses through affordable robots. For instance, who could have imagined that just two years later, we would have access to incredible AI models like ChatGPT and Gemini, developed by tech giants?

The Dark Side of Accessibility

My concerns grew as I observed the surge in global cybersecurity issues driven by advanced language model-powered bots. Nowadays, it’s rare to go a day without hearing about some form of cybercrime somewhere in the world.

A Brief Intro to AI for Beginners

To understand the risks associated with AI, we must first comprehend what AI is and its inspiration: the human brain. In biology, I learned that the human brain consists of neurons, which have two main functions: Neurons communicate with sensory organs or other neurons, determining the signals they send through learning. Throughout our lives, we learn to associate different external stimuli (inputs) with sensory outputs, like emotions. Imagine returning to your childhood home. Walking in, you are immediately overwhelmed by nostalgia. This is a learned response, where the sensory input (the scene) passes through a network of billions of neurons, triggering an emotional output. Similarly, I began learning about artificial neural networks, which mimic this behavior in computers.

Artificial Neural Networks

Just as biological neurons communicate within our brains, artificial neural networks try to replicate this in computers. Each node in such a network represents an artificial neuron, all connected and communicating with one another. Sensory inputs, like a scene, enter the network, and the resulting output, such as an emotion, emerges from the network’s processing. A unique feature of these networks is their ability to learn. Initially, an untrained neural network might produce random outputs for a given input. However, with training, these networks learn to associate specific inputs with particular outputs, mirroring the learning process of the human brain. This capability can be leveraged to handle tedious tasks, but there are deeper implications to explore.

The Wishing Well

As AI technology advances, it begins to resemble a wishing well from a fairy tale—a tool that could fulfill any desire, for better or worse. In 2022, the release of ChatGPT and various generative AI tools astonished many. For the first time, people had free access to a system capable of generating coherent and contextually appropriate responses to almost any prompt. And this is just the beginning.

Multimodal AI and the Next Step

I explored multimodal AI, which allows the processing of data in different formats, such as text, images, audio, and possibly even physical actions. This development supports the “wishing well” hypothesis, but also revealed a darker side of AI.

The Villains

While a wishing well in fairy tales is associated with good intentions and moral outcomes, the reality of AI is more complex.

The Wishing Well

As AI technology advances, it begins to resemble a wishing well from a fairy tale: a tool that could fulfill any desire, for better or worse. In 2022, the release of ChatGPT and a wave of generative AI tools astonished many. For the first time, people had free access to a system capable of generating coherent, contextually appropriate responses to almost any prompt. And this is just the beginning.

Multimodal AI and the Next Step

I went on to explore multimodal AI, which can process data in different formats, such as text, images, audio, and possibly even physical actions. This development supports the "wishing well" hypothesis, but it also revealed a darker side of AI.

The Villains

While the wishing well of fairy tales is associated with good intentions and moral outcomes, the reality of AI is more complex. The morality of AI usage depends on the people who wield it, and the potential for harm from a single bad actor is immense.

The Big Actors and Bad Apples

Control of AI technology is likely to rest with powerful entities, whether governments or private corporations, and speculating on how they might use it can be unsettling. We might hope AI acts as a deterrent, much as nuclear weapons do, but AI's invisibility and its potential for silent harm make it particularly dangerous. We are already witnessing malicious uses of AI, from faked kidnappings to deepfakes, affecting everyone from ordinary people to politicians. As AI becomes more accessible, the risk of bad actors exploiting it grows. Even if AI helps keep peace at a global scale, the problem of individuals causing harm remains: a few bad apples can spoil the bunch.

Unexpected Actions and the Future

AI systems today can already behave unexpectedly, often through jailbreaking, in which users manipulate models into giving unintended information. The consequences may seem minor now, but they could escalate significantly. AI does not follow predetermined rules; it chooses the "best" path to a goal, often learned independently of human oversight. This unpredictability, especially in multimodal models, is alarming. Consider an AI tasked with making pancakes. It might need money for ingredients and, depending on what it has learned, might resort to creating deepfakes for blackmail. The scenario sounds absurd, but it highlights the potential dangers as AI evolves alongside the growth of IoT, quantum computing, and big data, pointing toward superintelligent, self-managing systems. As AI surpasses human intelligence, more issues will emerge, potentially leading to a loss of control. Dr. Yildiz, an AI expert, raised these concerns in a story titled "Artificial Intelligence Does Not Concern Me, but Artificial Super-Intelligence Frightens Me."

Hope and Optimism

Despite the fears surrounding AI, I remain hopeful. We are still in the early stages of this technology, which leaves ample time to course-correct by recognizing the risks, fostering ethical AI systems, and raising a morally conscious new generation. Although I have emphasized potential dangers, my intent is not to incite fear. Like the industrial and digital revolutions before it, AI has the potential to greatly enhance our lives. I remain optimistic and continue my studies so I can contribute positively to the field. The takeaway from my story is that by using AI ethically and collaboratively, we can harness its power for positive change and a better future for everyone.

This article by Oliver Lovstrom was originally published on Medium.

Read More
Public Sector Approval Process Queue

Public Sector Approval Process Queue

Share the workload effectively by establishing queues in Public Sector Solutions so that reviewers can pick up applications that are ready to process. This involves creating queues whose members are assigned by user role, such as a queue for application reviewers who manage the initial approval steps. Multiple queues can be created, for example one for compliance officers who handle onsite inspections. During the approval process, the queue takes ownership of the application record, allowing any member to advance the approval steps.

In Salesforce, a Public Sector approval process queue lets multiple approvers work a shared backlog of applications. The queue owns the application record while it is in the approval process, and any queue member can take action to complete a step (a code sketch at the end of this section shows one way to hand a record to a queue programmatically). Creating a queue follows the standard Salesforce pattern: from Setup, enter Queues in the Quick Find box, select Queues, click New, give the queue a label and an optional queue email, choose the objects it supports, and add its members. To enhance communication, create an email template and enable email approval responses under Process Automation Settings in Setup.

Queues are only half of the picture; Salesforce Cadences help reps manage the outreach side. Reps can manage activities through the Cadences tab, where the details and targets of each cadence are visible. Cadences guide reps through prospecting steps, streamlining outreach and ensuring activities are logged on time. To build a branched cadence that varies outreach based on call or email outcomes, use Cadence Builder. The tool supports email, call, wait, and custom steps, and branching is handled through call or listener branch steps so that the next outreach step is tailored to the outcome. After a cadence is created and activated, both reps and managers can add prospects directly from lead, contact, or person account detail pages, and the Sales Engagements component on those pages gives reps a convenient view of the next sales step.

In summary, Salesforce's Cadence Builder Classic streamlines prospecting and opportunity nurturing, while queues optimize workload distribution in Public Sector Solutions. Used together, cadences and queues contribute to a well-organized, responsive process.
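For readers who prefer to see the ownership hand-off in code rather than clicks, below is a minimal sketch, not an official recipe, of transferring an application record to a reviewer queue through the Salesforce REST API. It assumes the simple_salesforce Python library, a queue with developer name Application_Reviewers already created in Setup, and the IndividualApplication object used by Public Sector Solutions; the credentials and record Id are placeholders.

```python
# Sketch: route an application record to a reviewer queue via the REST API.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="admin@example.com",      # placeholder credentials
    password="password",
    security_token="token",
)

# Queues are stored as Group records with Type = 'Queue'.
queue = sf.query(
    "SELECT Id FROM Group "
    "WHERE Type = 'Queue' AND DeveloperName = 'Application_Reviewers' LIMIT 1"
)["records"][0]

application_id = "APPLICATION_RECORD_ID"   # placeholder record Id

# Transferring ownership to the queue lets any queue member work the record
# and advance its approval steps.
sf.IndividualApplication.update(application_id, {"OwnerId": queue["Id"]})
```

The key point the sketch illustrates is that a queue is simply a Group record of type Queue, so assigning a record's OwnerId to that group is all it takes for every queue member to see and act on the application.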

Read More
Einstein Prediction Builder

Einstein Prediction Builder

Einstein Prediction Builder, a sophisticated yet user-friendly tool from Salesforce Einstein, empowers users to generate predictions without requiring machine learning expertise or coding skills, letting businesses augment their operations with foresight-driven insights. As of the Spring '20 release, all orgs on Enterprise Edition and above can build one free prediction with Einstein Prediction Builder.

Consider the potential business outcomes this unlocks with a hypothetical scenario. Meet Mr. Claus, the owner of 'North Claus,' a business that began as a modest family venture and gradually expanded its footprint. As 'North Claus' grew across 10 countries, Mr. Claus recognized the need for Business Intelligence (BI) to navigate market dynamics effectively. BI is about gathering insights to forecast and understand market shifts, an imperative echoed by Jack Ma's adage, "Adopt and change before any major trends and changes." Intrigued by the prospect of BI, especially amid the disruption of Covid-19, Mr. Claus set out to implement it in his company.

The Formation of Business Intelligence

In today's digital landscape, businesses amass vast amounts of data from sources such as sales, customer interactions, and website traffic. This data is the bedrock for actionable insights that let organizations formulate forward-looking strategies. Developing robust BI capabilities, however, poses real challenges, from integrating data scattered across systems to finding the expertise needed to build and maintain predictive models. Mr. Claus grappled with these challenges as he tried to build BI independently. Recognizing the complexity involved, he turned to Salesforce, and in particular to Einstein Prediction Builder.

Understanding Einstein Prediction Builder

Einstein Prediction Builder, available in various Salesforce editions, generates predictions for a checkbox or numeric field on your records. Before building a prediction, a few prerequisites must be met: the object needs enough historical records to learn from, and the outcome to predict must be captured in a checkbox or numeric field.

Creating Einstein Predictions

To create a prediction, navigate to Setup and open Einstein Prediction Builder. The guided setup walks users through the relevant data inputs at each step. Once configured, predictions can be enabled, disabled, or cloned as needed. (The Einstein Prediction Builder module on Trailhead offers a hands-on introduction.)

Key Features and Applications

Einstein predictions integrate seamlessly with Salesforce Lightning, surfacing predictive insights directly on record pages. These predictions offer guidance on questions such as which sales opportunities are likely to close or which payments are likely to be late. Prediction Builder also supports packaging predictions for deployment across orgs and integrates with external platforms like Tableau (a short query sketch after this section shows one way to act on the resulting scores). In short, Prediction Builder equips businesses with the intelligence to anticipate market trends, optimize workflows, and enhance customer interactions. As Mr. Claus discovered, embracing predictive analytics can revolutionize decision-making and drive sustainable growth.
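Because a prediction writes its score to an ordinary field, the score can feed reports, flows, or external tools. The sketch below, using the simple_salesforce Python library, queries records above a score threshold; the object name Invoice__c and the field Likelihood_To_Pay_Late__c are hypothetical, standing in for whatever object and prediction field your own prediction writes to.

```python
# Sketch: read Einstein Prediction Builder scores so they can drive follow-up work.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="admin@example.com",      # placeholder credentials
    password="password",
    security_token="token",
)

# Pull the records the model considers risky, highest score first.
results = sf.query(
    "SELECT Id, Name, Likelihood_To_Pay_Late__c "
    "FROM Invoice__c "
    "WHERE Likelihood_To_Pay_Late__c > 70 "
    "ORDER BY Likelihood_To_Pay_Late__c DESC"
)

for record in results["records"]:
    print(record["Name"], record["Likelihood_To_Pay_Late__c"])
```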

Read More