Do GPT 4o lies abound?

Is OpenAI (and others) misleading us about the pace of AI improvement? Do GPT-4o lies abound? Is AI excessively hyped, akin to the “NFT moment” that led to a subsequent downturn, or even the dot-com bubble that eventually had to burst? Daily updates on AI developments are a routine part of many AI enthusiasts’ reading, and opinion tends to vacillate between the idea that we are approaching AGI (artificial general intelligence) swiftly and the idea that LLM capabilities are hitting a plateau. Compelling arguments exist on both sides. This insight explores the notion that AI might be overly hyped.

Whether GPT-4o lies is a question being asked around the web. GPT-4o is used daily by millions of people, and observations by AI evangelists suggest a decline in GPT-4o’s capabilities since its release. Though anecdotal, the decline is noticeable. For instance, tasks like placing affiliate links in articles are sometimes mishandled by GPT-4o, which previously performed better. The model’s abilities appear to fluctuate over time: changing tense or tone sometimes barely happens at all and sometimes completely rewrites the text and changes the original meaning. While GPT-4o is notably fast, its accuracy and comprehension of instructions seem inferior even to GPT-4.

OpenAI has a motive to promote GPT-4o over GPT-4: the electricity cost savings are considerable. Emphasizing speed while downplaying capability might keep users satisfied and maintain Plus memberships.

Why is it suspected that AI might be overhyped?

- Companies have financial incentives to exaggerate AI’s capabilities to attract attention and funding.
- Instances of companies exaggerating claims during AI demonstrations have been documented (Google, OpenAI, Amazon, etc.).
- Personal experience indicates many AI models are slowing down.
- Despite exponentially increasing model parameters, performance improvements are not proportional (you can’t get more juice from a lemon by squeezing harder).
- Some argue that AI models are made to sound more human by emulating voice, potentially blurring the line between genuine intelligence and simulated behavior.

This insight believes the most advanced AI models surpass current presentations but are not energy-efficient enough for widespread affordability. Consequently, model capabilities are deliberately limited. What say you?

Read More
Einstein Service Agent

Introducing Agentforce Service Agent: Salesforce’s Autonomous AI to Transform Chatbot Experiences

Accelerate case resolutions with an intelligent, conversational interface that uses natural language and is grounded in trusted customer and business data. Deploy in minutes with ready-made templates, Salesforce components, and a large language model (LLM) to autonomously engage customers across any channel, 24/7. Establish clear privacy and security guardrails to ensure trusted responses, and escalate complex cases to human agents as needed.

Editor’s Note: Einstein Service Agent is now known as Agentforce Service Agent.

Salesforce has launched Agentforce Service Agent, the company’s first fully autonomous AI agent, set to redefine customer service. Unlike traditional chatbots that rely on preprogrammed responses and lack contextual understanding, Agentforce Service Agent is dynamic, capable of independently addressing a wide range of service issues, which enhances customer service efficiency.

Built on the Einstein 1 Platform, Agentforce Service Agent interacts with large language models (LLMs) to analyze the context of customer messages and autonomously determine the appropriate actions. Using generative AI, it creates conversational responses based on trusted company data, such as Salesforce CRM, and aligns them with the brand’s voice and tone. This reduces the burden of routine queries, allowing human agents to focus on more complex, high-value tasks. Customers, in turn, receive faster, more accurate responses without waiting for human intervention.

Available 24/7, Agentforce Service Agent communicates naturally across self-service portals and messaging channels, performing tasks proactively while adhering to the company’s defined guardrails. When an issue requires human escalation, the transition is seamless, ensuring a smooth handoff.

Ease of Setup and Pilot Launch

Currently in pilot, Agentforce Service Agent will be generally available later this year. It can be deployed in minutes using pre-built templates, low-code workflows, and user-friendly interfaces.

“Salesforce is shaping the future where human and digital agents collaborate to elevate the customer experience,” said Kishan Chetan, General Manager of Service Cloud. “Agentforce Service Agent, our first fully autonomous AI agent, will revolutionize service teams by not only completing tasks autonomously but also augmenting human productivity. We are reimagining customer service for the AI era.”

Why It Matters

While most companies use chatbots today, 81% of customers would still prefer to speak to a live agent due to unsatisfactory chatbot experiences. However, 61% of customers express a preference for using self-service options for simpler issues, indicating a need for more intelligent, autonomous agents like Agentforce Service Agent that are powered by generative AI.

The Future of AI-Driven Customer Service

Agentforce Service Agent has the ability to hold fluid, intelligent conversations with customers by analyzing the full context of inquiries. For instance, a customer reaching out to an online retailer for a return can have their issue fully processed by Agentforce, which autonomously handles tasks such as accessing purchase history, checking inventory, and sending follow-up satisfaction surveys. With trusted business data from Salesforce’s Data Cloud, Agentforce generates accurate and personalized responses.
For example, a telecommunications customer looking for a new phone will receive tailored recommendations based on data such as purchase history and service interactions.

Advanced Guardrails and Quick Setup

Agentforce Service Agent leverages the Einstein Trust Layer to ensure data privacy and security, including the masking of personally identifiable information (PII). It can be quickly activated with out-of-the-box templates and pre-existing Salesforce components, allowing companies to equip it with customized skills faster using natural language instructions.

Multimodal Innovation Across Channels

Agentforce Service Agent supports cross-channel communication, including messaging apps like WhatsApp, Facebook Messenger, and SMS, as well as self-service portals. It even understands and responds to images, video, and audio. For example, if a customer sends a photo of an issue, Agentforce can analyze it to provide troubleshooting steps or even recommend replacement products.

Seamless Handoffs to Human Agents

If a customer’s inquiry requires human attention, Agentforce seamlessly transfers the conversation to a human agent who will have full context, avoiding the need for the customer to repeat information. For example, a life insurance company might program Agentforce to escalate conversations if a customer mentions sensitive topics like loss or death. Similarly, if a customer requests a return outside of the company’s policy window, Agentforce can recommend that a human agent make an exception.

Customer Perspective

“Agentforce Service Agent’s speed and accuracy in handling inquiries is promising. It responds like a human, adhering to our diverse, country-specific guidelines. I see it becoming a key part of our service team, freeing human agents to handle higher-value issues.” — George Pokorny, SVP of Global Customer Success, OpenTable.

Content updated October 2024.

Read More
Perplexity has launched an upgraded version of Pro Search

Perplexity has launched an upgraded version of Pro Search, an advanced tool tailored for solving complex problems and streamlining research. This enhanced Pro Search features multi-step reasoning, advanced math and programming capabilities, and delivers more in-depth research insights.

Key Enhancements

1. Multi-step Reasoning
Pro Search now handles complex questions requiring planning and multiple steps to achieve a goal. Unlike standard search, it comprehensively analyzes results and performs smart follow-up actions based on its findings. It can conduct successive searches that build upon previous answers, enabling a more structured approach to complex queries.

2. Advanced Math and Programming Capabilities
Pro Search integrates with the Wolfram|Alpha engine, enhancing its proficiency in advanced math, programming, and data analysis for high-precision tasks.

Quick Search vs. Pro Search

While Quick Search provides fast, straightforward answers for quick queries, Pro Search caters to in-depth research needs, offering detailed analysis, comprehensive reporting, and access to a broad range of credible sources.

Features:

Usage and Subscription Options

Pro Search is available with limited free access or through a subscription:

Application Areas

The new Pro Search upgrade is designed not just for general searches but also to support specific professional fields:

Summary of Key Benefits

Pro Search elevates research capabilities across various fields by providing smarter search solutions, a more structured approach to complex problems, and advanced computational support.

Read More
Confidential AI Computing in Health

Accelerating Healthcare AI Development with Confidential Computing

Can confidential computing accelerate the development of clinical algorithms by creating a secure, collaborative environment for data stewards and AI developers?

The potential of AI to transform healthcare is immense. However, data privacy concerns and high costs often slow down AI advancements in this sector, even as other industries experience rapid progress in algorithm development. Confidential computing has emerged as a promising solution to address these challenges, offering secure data handling during AI projects. Although its use in healthcare was previously limited to research, recent collaborations are bringing it to the forefront of clinical AI development.

In 2020, the University of California, San Francisco (UCSF) Center for Digital Health Innovation (CDHI), along with Fortanix, Intel, and Microsoft Azure, formed a partnership to create a privacy-preserving confidential computing platform. This collaboration, which later evolved into BeeKeeperAI, aimed to accelerate clinical algorithm development by providing a secure, zero-trust environment for healthcare data and intellectual property (IP), while facilitating streamlined workflows and collaboration.

Mary Beth Chalk, co-founder and Chief Commercial Officer of BeeKeeperAI, shared insights with Healthtech Analytics on how confidential computing can address common hurdles in clinical AI development and how stakeholders can leverage this technology in real-world applications.

Overcoming Challenges in Clinical AI Development

Chalk highlighted the significant barriers that hinder AI development in healthcare: privacy, security, time, and cost. These challenges often prevent effective collaboration between the two key parties involved: data stewards, who manage patient data and privacy, and algorithm developers, who work to create healthcare AI solutions. Even when these parties belong to the same organization, workflows often remain inefficient and fragmented.

Before BeeKeeperAI spun out of UCSF, the team realized how time-consuming and costly the process of algorithm development was. Regulatory approvals, data access agreements, and other administrative tasks could take months to complete, delaying projects that could be finished in a matter of weeks. Chalk noted, “It was taking nine months to 18 months just to get approvals for what was essentially a two-month computing project.” This delay and inefficiency are unsustainable in a fast-moving technology environment, especially given that software innovation outpaces the development of medical devices or drugs.

Confidential computing can address this challenge by helping clinical algorithm developers “move at the speed of software.” By offering encryption protection for data and IP during computation, confidential computing ensures privacy and security at every stage of the development process.

Confidential Computing: A New Frontier in Healthcare AI

Confidential computing protects sensitive data not only at rest and in transit but also during computation, which sets it apart from other privacy technologies like federated learning. With federated learning, data and IP are protected during storage and transmission but remain exposed during computation. This exposure raises significant privacy concerns during AI development. In contrast, confidential computing ensures end-to-end encrypted protection, safeguarding both data and intellectual property throughout the entire process.
This enables stakeholders to collaborate securely while maintaining privacy and data sovereignty. Chalk emphasized that with confidential computing, stakeholders can ensure that patient privacy is protected and intellectual property remains secure, even when multiple parties are involved in the development process. As a result, confidential computing becomes an enabling core competency that facilitates faster and more efficient clinical AI development.

Streamlining Clinical AI Development with Confidential Computing

Confidential computing environments provide a secure, automated platform that facilitates the development process, reducing the need for manual intervention. Chalk described healthcare AI development as a “well-worn goat path,” where multiple stakeholders know the steps required but are often bogged down by time-consuming administrative tasks. BeeKeeperAI’s platform streamlines this process by allowing AI developers to upload project protocols, which are then shared with data stewards. The data steward can determine if they have the necessary clinical data and curate it according to the AI developer’s specifications. This secure collaboration is built on automated workflows, but because the data and algorithms remain encrypted, privacy is never compromised.

The BeeKeeperAI platform enables a collaborative, familiar interface for developers and data stewards, allowing them to work together in a secure environment. The software does not require extensive expertise in confidential computing, as BeeKeeperAI manages the infrastructure and ensures that the data never leaves the control of the data steward.

Real-World Applications of Confidential Computing

Confidential computing has the potential to revolutionize healthcare AI development, particularly by improving the precision of disease detection, predicting disease trajectories, and enabling personalized treatment recommendations. Chalk emphasized that the real promise of AI in healthcare lies in precision medicine—the ability to tailor interventions to individual patients, especially those on the “tails” of the bell curve who may respond differently to treatment.

For instance, confidential computing can facilitate research into precision medicine by enabling AI developers to analyze patient data securely, without risking exposure of sensitive personal information. Chalk explained, “With confidential computing, I can drill into those tails and see what was unique about those patients without exposing their identities.”

Currently, real-world data access remains a significant challenge for clinical AI development, especially as research moves from synthetic or de-identified data to high-quality, real-world clinical data. Chalk noted that for clinical AI to demonstrate efficacy, improve outcomes, or enhance safety, it must operate on real-world data. However, accessing this data while ensuring privacy has been a major obstacle for AI teams. Confidential computing can help bridge this “data cliff” by providing a secure environment for researchers to access and utilize real-world data without compromising privacy.

Conclusion

While the use of confidential computing in healthcare is still evolving, its potential is vast. By offering secure data handling throughout the development process, confidential computing enables AI developers and data stewards to collaborate more efficiently, overcome regulatory hurdles, and accelerate clinical AI advancements.
This technology could help realize the promise of precision medicine, making personalized healthcare interventions safer, more effective, and more widely available. Chalk highlighted that many healthcare and life sciences organizations are exploring confidential computing use cases, particularly in neurology, oncology, mental health, and rare diseases—fields that require the use of

Read More
Summer 24 Salesforce Maps Release

Announcing the Salesforce Maps Summer ’24 Release!

We are thrilled to announce the availability of the Salesforce Summer ’24 Maps release, designed to significantly enhance your experience and bring valuable benefits to your business.

Key Features and Enhancements

For a comprehensive overview of the Summer ’24 release’s features and enhancements, please refer to the Maps Summer ’24 Release Notes. We encourage you to enable this new experience and provide your valuable feedback to ensure it meets your needs and expectations. Note that the new experience will be auto-enabled in the Winter ’25 Release (October). Instructions on activating the new experience can be found here.

Read More
AI Confidence Scores

In this insight, the focus is on exploring the use of confidence scores available through the OpenAI API. The first section delves into these scores and explains their significance using a custom chat interface. The second section demonstrates how to apply confidence scores programmatically in code.

Understanding Confidence Scores

To begin, it’s important to understand what an LLM (Large Language Model) is doing for each token in its response:

However, it’s essential to clarify that the term “probabilities” here is somewhat misleading. While mathematically, they qualify as a “probability distribution” (the values add up to one), they don’t necessarily reflect true confidence or likelihood in the way we might expect. In this sense, these values should be treated with caution. A useful way to think about these values is to consider them as “confidence” scores, though it’s crucial to remember that, much like humans, LLMs can be confident and still be wrong. The values themselves are not inherently meaningful without additional context or validation.

Example: Using a Chat Interface

An example of exploring these confidence scores can be seen in a chat interface where:

In one case, when asked to “pick a number,” the LLM chose the word “choose” despite it having only a 21% chance of being selected. This demonstrates that LLMs don’t always pick the most likely token unless configured to do so. Additionally, this interface shows how the model might struggle with questions that have no clear answer, offering insights into detecting possible hallucinations. For example, when asked to list famous people with an interpunct in their name, the model shows low confidence in its guesses. This behavior indicates uncertainty and can be an indicator of a forthcoming incorrect response.

Hallucinations and Confidence Scores

The discussion also touches on the question of whether low confidence scores can help detect hallucinations—cases where the model generates false information. While low confidence often correlates with potential hallucinations, it’s not a foolproof indicator. Some hallucinations may come with high confidence, while low-confidence tokens might simply reflect natural variability in language. For instance, when asked about the capital of Kazakhstan, the model shows uncertainty due to the historical changes between Astana and Nur-Sultan. The confidence scores reflect this inconsistency, highlighting how the model can still select an answer despite having conflicting information.

Using Confidence Scores in Code

The next part of the discussion covers how to leverage confidence scores programmatically. For simple yes/no questions, it’s possible to compress the response into a single token and calculate the confidence score using OpenAI’s API. Key API settings include:

Using this setup, one can extract the model’s confidence in its response, converting log probabilities back into regular probabilities using math.exp.

Expanding to Real-World Applications

The post extends this concept to more complex scenarios, such as verifying whether an image of a driver’s license is valid. By analyzing the model’s confidence in its answer, developers can determine when to flag responses for human review based on predefined confidence thresholds. This technique can also be applied to multiple-choice questions, allowing developers to extract not only the top token but also the top 10 options, along with their confidence scores.
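To make the approach concrete, here is a minimal sketch (not code from the original post) of the yes/no and multiple-choice patterns described above, using the OpenAI Python SDK. The model name, the review threshold, and the exact parameter and response-field names (logprobs, top_logprobs, max_tokens) are assumptions to verify against the current API documentation, not a definitive implementation.

```python
import math

from openai import OpenAI  # assumes the openai Python SDK (v1+) is installed and OPENAI_API_KEY is set

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name


def yes_no_confidence(question: str) -> tuple[str, float]:
    """Ask a yes/no question, force a single-token answer, and return (answer, confidence)."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Answer with exactly one word: yes or no."},
            {"role": "user", "content": question},
        ],
        max_tokens=1,    # compress the response into a single token
        logprobs=True,   # return the log probability of the chosen token
        temperature=0,
    )
    token_info = response.choices[0].logprobs.content[0]
    confidence = math.exp(token_info.logprob)  # convert log probability back to a regular probability
    return token_info.token.strip().lower(), confidence


def top_choices(prompt: str, k: int = 10) -> list[tuple[str, float]]:
    """Return up to k candidate first tokens with their probabilities (multiple-choice style)."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,
        logprobs=True,
        top_logprobs=k,  # also return the k most likely alternatives for that position
        temperature=0,
    )
    alternatives = response.choices[0].logprobs.content[0].top_logprobs
    return [(alt.token, math.exp(alt.logprob)) for alt in alternatives]


if __name__ == "__main__":
    answer, confidence = yes_no_confidence("Is 17 a prime number?")
    # Flag low-confidence answers for human review, mirroring the thresholding idea above.
    REVIEW_THRESHOLD = 0.8  # illustrative value; tune against your own evaluation data
    if confidence < REVIEW_THRESHOLD:
        print(f"Model answered '{answer}' at {confidence:.0%} confidence - route to a human reviewer.")
    else:
        print(f"Model answered '{answer}' at {confidence:.0%} confidence.")
```

Because the API returns log probabilities, math.exp converts each value back into a regular probability, which is what the thresholding logic compares against.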
Conclusion

While confidence scores from LLMs aren’t a perfect solution for detecting accuracy or truthfulness, they can provide useful insights in certain scenarios. With careful application and evaluation, developers can make informed decisions about when to trust the model’s responses and when to intervene. The final takeaway is that confidence scores, while not foolproof, can play a role in improving the reliability of LLM outputs—especially when combined with thoughtful design and ongoing calibration.
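As a complement, the sketch below (again an illustration rather than anything from the original post) shows one simple form that ongoing calibration can take: comparing the model’s stated confidence against observed accuracy on a labeled evaluation set, bucket by bucket.

```python
from collections import defaultdict


def calibration_report(results, bucket_width=0.1):
    """results: list of (confidence, was_correct) pairs gathered from an evaluation set.

    Groups predictions into confidence buckets and prints observed accuracy per bucket,
    showing whether stated confidence actually tracks correctness.
    """
    buckets = defaultdict(list)
    n_buckets = int(round(1 / bucket_width))
    for confidence, was_correct in results:
        bucket = min(int(confidence / bucket_width), n_buckets - 1)
        buckets[bucket].append(was_correct)

    for bucket in sorted(buckets):
        low = bucket * bucket_width
        high = low + bucket_width
        outcomes = buckets[bucket]
        accuracy = sum(outcomes) / len(outcomes)
        print(f"confidence {low:.1f}-{high:.1f}: {len(outcomes):3d} samples, accuracy {accuracy:.0%}")


# Made-up evaluation data: (model confidence, whether the answer was actually correct).
sample_results = [(0.95, True), (0.91, True), (0.88, False), (0.62, True), (0.55, False), (0.30, False)]
calibration_report(sample_results)
```

If the high-confidence buckets turn out to be no more accurate than the low-confidence ones, the thresholds used to route answers to human review should be revisited.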

Read More
Where Will the Data Scientists Go

What Is to Become of the Data Scientist Role?

This question frequently arises among executives, particularly as they navigate the changing roles of data teams, such as those at DataRobot. “Where will the data scientists go?” may be less relevant than asking what new places they can go with AI. The short answer? While tools may evolve, the core of data science remains steadfast. As the field of data science continues to expand, the role of the data scientist becomes increasingly vital. The need will grow, even as the role changes. Trust in AI is dependent upon human oversight.

Beyond the Hype of Consumer AI

The surge in consumer AI products has raised concerns among data scientists about the implications for their careers. However, these technologies are built on data and generate vast amounts of new data, presenting numerous opportunities. The real transformative potential lies in enterprise-scale automation.

Enterprise-Scale Automation: The Data Scientist’s Domain

Enterprise-scale automation involves creating large-scale, reliable systems. Data scientists are crucial in this effort, as they bring expertise in data exploration and systematic inference. They are uniquely positioned to identify automation opportunities, design testing and monitoring strategies, and collaborate with cross-functional teams to bring AI solutions from concept to implementation. As automation grows, the role of the data scientist is essential in ensuring these systems function effectively and safely, particularly in environments without human oversight.

New Skills for Data Scientists: The Guardians of AI Applications

Data scientists will need to acquire new skills to manage automation at scale, including securing the systems they build. Generative AI introduces new risks, such as potential vulnerabilities to prompt injections or other security threats. Governance and ensuring positive business impacts will become increasingly important, requiring a data science mindset.

Building Great Data Teams in the Age of AI

The future of data science will not be about automation replacing data scientists but about the evolution of roles and skills. Data scientists need to focus on the core foundations of their discipline rather than the specific tools they use, as tools will continue to evolve. Teams must be built intentionally, encompassing a range of skills and personalities necessary for successful enterprise automation.

Business Leaders: Navigating the AI Landscape

Business leaders will need to excel in decision-making, understanding the problems they aim to solve, and selecting the appropriate tools and teams. They will also need to manage evolving regulations, particularly those related to the design and deployment of AI systems.

Data Scientists: Precision Thinkers at the Forefront

Contrary to the belief that AI could replace coding skills, the essence of data science lies in precise thinking and clear communication. Data scientists excel in translating business needs into data-driven decisions and AI applications, ensuring that solutions are not only technically sound but also aligned with business objectives. This skill set will be crucial in the era of AI, as data scientists will play a key role in optimizing workflows, designing AI safety nets, and protecting their organization’s brand and reputation.

The Evolving Role of Data Science

The demand for precise, data-literate thinkers will only grow with the rise of enterprise AI systems.
Whether they are called data scientists or another name, professionals who delve deeply into data and provide critical insights will remain essential in navigating the complexities of modern technology and business landscapes.

Read More
Required Startup Mentality

Pivoting an established company’s business model is one of the most daunting challenges a CEO can face. When the new CEO of Zilliant took the company’s helm in 2022, the mandate was to accelerate growth and increase market share. It quickly became evident that success lay not in product updates or tech investments but in rethinking the organizational mindset: a startup mentality was required.

With a master’s degree in organizational behavior studies from the University of Illinois and extensive experience in organizational transformations, the CEO understood the process typically follows one of two paths: changing an existing culture or building one from scratch. High-profile examples provide inspiration for both approaches. Satya Nadella, upon becoming CEO of Microsoft in 2014, transformed the company from a “know-it-all” to a “learn-it-all” culture, fostering a growth mindset. Conversely, Marc Benioff of Salesforce instilled the “ohana” culture of family spirit, trust, and equality from the company’s inception. The CEO, having been immersed in Salesforce culture for over a decade, learned the importance of a robust support system for employees and customers.

Upon joining Zilliant, the CEO brought lessons from Salesforce to the new role. Zilliant, a company with 23 years of history and a long-standing CEO, Greg Peters, had thrived in price optimization. However, to evolve further, the company needed to adopt a startup mentality. This approach included scrutinizing every budget line item, incorporating a new marketing playbook, and, crucially, leveraging existing talent in new ways. Identifying influencers within the company and placing them in positions of broader influence proved to be an effective strategy; the startup mentality had to begin with its leaders. This group of long-time employees, respected and experienced, became the “change champions.” Their elevated profile across the organization facilitated listening and acting on peer feedback, making the traditionally challenging task of cultural transformation more manageable.

Initially, there was a struggle to clearly articulate the future vision. The transitional period was marked by confusion rather than resistance. This experience underscored the importance of vision and constant communication during transformation. The CEO discovered that merely communicating new company values wasn’t sufficient. Creating a unified vision with full conviction from the executive team was essential. Significant time was spent defining this vision in granular detail, learning from the successes and failures of other companies. Once the leadership team was aligned, this conviction was cascaded through the ranks. Instead of dictating change, employees were invited into the process through feedback sessions and pilot programs, giving them a stake in redefining cultural norms. Celebrating small wins, even if they’re a “loss,” was emphasized to support learning from missteps. Modeling desired behaviors and systematically updating policies, incentives, and processes reinforced the new mindsets and actions. It was an arduous journey, but staying intentional and bringing people along was crucial for evolving into the envisioned culture.

Through the transformation, one principle remained constant: customers must see Zilliant as a partner rather than a vendor. This required individuals in every department—marketing, sales, customer success, product, and engineering—to proactively address and solve customer problems.
Transitioning to a new business model and rethinking organizational mindset is a long-term effort requiring vision and commitment from all levels. The payoff, however, can be immense. Already, Zilliant has delivered two consecutive quarters of 60%-plus growth in year-over-year bookings and is positioned for continued record growth through the end of the year.

Read More
Education Cloud Support Programs

Model Support Programs with Education Cloud

The Education Cloud Student Success and Mentoring apps empower educational institutions to design, plan, and deliver non-scholastic services that support each learner’s journey. From career counseling to mentoring, these apps provide the tools needed to manage a wide range of support programs effectively.

Defining and Managing Support Programs

Educational institutions often run various support programs to enhance learners’ experiences, such as a job placement program that offers career counseling, resume workshops, aptitude assessments, mock interviews, and corporate connect events. With the Student Success and Mentoring apps, Education Cloud provides robust program management features that allow institutions to scale these initiatives efficiently. For more information, see Get Started with Program Management.

Through these apps, directors and facilitators can model programs and define the associated benefits. When an advisor adds these benefits to a learner’s care plan, the learner is automatically enrolled in the respective programs. Mentoring facilitators can also connect learners with mentors. Available benefit types include:

Note: It’s essential to define the unit of measure for each benefit type to avoid errors when creating benefits. For guidance, see Define Units of Measure for Program Management.

Creating and Managing Support Programs

In the Student Success and Mentoring apps, you can easily create programs, enroll learners, and manage benefits through workflows accessible from the dropdown navigation menu. For a deeper understanding of the program management process, see Program Management Workflow. To create a support program:

An advising program might include an advising benefit. When a learner enrolls, they are automatically assigned to an advising benefit, allowing them to schedule an appointment with a success team member. The system updates the benefit disbursement status based on the appointment.

Automating Program Enrollment Processes

Education Cloud offers integration procedures, flows, and invocable actions to automate program enrollment processes. For example:

Note: A program enrollment record can be created regardless of the program’s status. Implement restrictions as needed in your org.

Creating Assessments for Program Enrollment

To understand learners’ requirements during program enrollment, create assessment questions:

Setting Up Advising Programs

Create an advising program that runs an assessment when learners enroll from the portal:

Learners can schedule appointments with assigned case team members directly through the learner portal, ensuring they receive the support they need. By leveraging Education Cloud’s Student Success and Mentoring apps, institutions can create, manage, and scale support programs that effectively address the unique needs of each learner.

Read More
Smithsonian Data Cloud

The Smithsonian already embraces technology through its Open Access Initiative, which houses 2D and 3D renders of collections and provides access to holdings from more than 20 museums. Enter Salesforce and the Smithsonian Data Cloud.

The world’s largest network of research, museum, and education facilities, the Smithsonian Institution, manages over 150 million collections across its 21 museums, the National Zoo, and eight research centers. Navigating this vast array of artifacts can be overwhelming, even for dedicated history enthusiasts. To enhance accessibility, the Smithsonian Institution is collaborating with cloud computing giant Salesforce. The goal is to streamline the user experience by integrating disparate data sources, such as ticketing systems and donation histories, into a unified system. This initiative aims to provide a clearer blueprint of Smithsonian’s diverse locations and offerings, according to Lori Freeman, Salesforce’s Vice President and General Manager of Nonprofit Industry Solutions and Strategy.

“Smithsonian is so progressive. They have all this content, they have all this history, they have incredible tools,” Freeman told Technical.ly. “So this technology is going to enable them to reach audiences they would never get to.”

For instance, this system will allow museum staff and volunteers to assist visitors in locating exhibits across different Smithsonian locations. Becky Kobberod, the Smithsonian’s Head of Digital Transformation, illustrated this by describing how a visitor at the Hirshhorn could ask about a piece of art at the National Museum of American History.

“It’s connecting the dots and creating a Smithsonian ecosystem that we currently don’t have. If you want to engage in our various museums, you go to each of them separately,” Kobberod said. “Whereas now, we’re providing you a front door, so to speak, that can help you navigate across all of the many different museums and resources that we have.”

Although specifics about the technology and user interface have not been disclosed, Freeman emphasized that the main objectives are to keep visitors engaged and to build lifelong relationships with both in-person and virtual visitors.

Building on Current Tech Resources

The Smithsonian’s Open Access initiative, launched in early 2020, offers 2D and 3D renderings of its collections, totaling around 5 million items to date. Users can interact with 3D images to get a 360-degree view of fossils, sculptures, and artifacts like Neil Armstrong’s spacesuit. This initiative began with 2.8 million pieces and continues to grow, said Kobberod.

In addition to Open Access, the Smithsonian has other digital initiatives. In 2022, the National Museum of African American History and Culture, in collaboration with Baltimore-based digital services firm Fearless, launched a searchable online platform to make certain collections and stories more accessible.

Kobberod noted that only about 1% of the collections are physically displayed at any given time. Digitizing and uploading these collections not only preserves them but also makes them accessible to people who might not have the means to visit in person. “Smithsonian exists as a service to all of the American public,” she said. “We know that this is core to our future, and to making what we have available to the nation and the world.”

Read More
Public Sector Loan Management

Optimizing Public Sector Loan Management with Salesforce

Effective loan management is vital for public sector financial operations, ensuring that funds are allocated efficiently and responsibly. As public entities face increasing demands for transparency and accountability, leveraging technology becomes essential. Salesforce, a leading customer relationship management (CRM) platform, offers robust solutions tailored to enhance loan management processes in the public sector. This article explores how Salesforce optimizes key aspects of loan management, with a focus on the Loan Boarding, Handoff, and Approval Process.

Understanding Loan Boarding in Public Sector Finance

Loan boarding refers to the initial steps of creating a new loan within an organization’s financial system. In the public sector, where loans often involve multiple stakeholders and complex regulations, a streamlined boarding process is critical. Salesforce’s customizable workflows and automation capabilities enable organizations to reduce manual entry errors and improve overall efficiency.

Streamlined Data Entry

Salesforce allows organizations to create custom fields that capture essential data points, such as borrower information, loan amounts, interest rates, and terms. When a loan application is submitted through a portal integrated with Salesforce, the platform automatically populates relevant fields, minimizing repetitive data entry and human error.

Enhanced Collaboration

Salesforce’s collaborative features, like Chatter, enable seamless communication between departments, such as finance, compliance, and risk management. Teams can access real-time information and discuss applications directly on the platform, eliminating the need for delays caused by emails or meetings and expediting the approval process.

The Handoff Process: Ensuring Smooth Transitions

Once a loan application is successfully boarded, it must be handed off to various stakeholders for review and approval. Without proper management, this process can become bottlenecked. Salesforce provides tools that automate notifications and reminders, ensuring smooth handoffs at each stage of the approval process.

Automated Alerts and Task Management

Salesforce’s task management system assigns specific tasks to team members and sets deadlines for completion. Automated alerts ensure no step in the approval process is overlooked, minimizing delays caused by human error or oversight and keeping the process on track.

Approval Process: Simplifying Decision-Making

In public sector lending, the approval process often requires multiple levels of scrutiny due to regulatory requirements. Salesforce’s powerful reporting capabilities allow decision-makers to quickly analyze applications based on predefined criteria, such as creditworthiness and compliance metrics.

Custom Approval Workflows

Salesforce enables organizations to design custom workflows that reflect their unique approval hierarchies. For instance, loans that require additional scrutiny based on size or risk can easily be routed to the appropriate stakeholders, ensuring compliance and mitigating risks.

Document Management: Keeping Everything Organized

Effective loan management relies on accurate documentation throughout the loan lifecycle, from application to repayment. Salesforce’s document management features enhance organization and compliance:

Conclusion

Optimizing public sector loan management with Salesforce offers substantial benefits in efficiency, accountability, and adaptability.
From seamless loan boarding to enhanced collaboration, streamlined approvals, and robust document management, Salesforce provides a comprehensive solution for public sector financial operations. By leveraging these technological advancements, public sector organizations can effectively manage loans from application to repayment, ensure compliance, and build trust with the constituents they serve. Salesforce’s capabilities position public entities for operational success while maintaining the high standards required for public financing.

Read More
Generative AI for Tableau

Tableau’s first generative AI assistant is now generally available. Generative AI for Tableau brings data prep to the masses. Earlier this month, Tableau launched its second platform update of 2024, announcing that its first two GenAI assistants would be available by the end of July, with a third set for release in August. The first of these, Einstein Copilot for Tableau Prep, became generally available on July 10.

Tableau initially unveiled its plans to develop generative AI capabilities in May 2023 with the introduction of Tableau Pulse and Tableau GPT. Pulse, an insight generator that monitors data for metric changes and uses natural language to alert users, became generally available in February. Tableau GPT, now renamed Einstein Copilot for Tableau, moved into beta testing in April. Following Einstein Copilot for Tableau Prep, Einstein Copilot for Tableau Catalog is expected to be generally available before the end of July. Einstein Copilot for Tableau Web Authoring is set to follow by the end of August.

With these launches, Tableau joins other data management and analytics vendors like AWS, Domo, Microsoft, and MicroStrategy, which have already made generative AI assistants generally available. Other companies, such as Qlik, DBT Labs, and Alteryx, have announced similar plans but have not yet moved their products out of preview. Tableau’s generative AI capabilities are comparable to those of its competitors, according to Doug Henschen, an analyst at Constellation Research. In some areas, such as data cataloging, Tableau’s offerings are even more advanced. “Tableau is going GA later than some of its competitors. But capabilities are pretty much in line with or more extensive than what you’re seeing from others,” Henschen said.

In addition to the generative AI assistants, Tableau 2024.2 includes features such as embedding Pulse in applications. Based in Seattle and a subsidiary of Salesforce, Tableau has long been a prominent analytics vendor. Its first 2024 platform update highlighted the launch of Pulse, while the final 2023 update introduced new embedded analytics capabilities.

Generative AI assistants are proliferating due to their potential to enable non-technical workers to work with data and increase efficiency for data experts. Historically, the complexity of analytics platforms, requiring coding and data literacy, has limited their widespread adoption. Studies indicate that only about one-quarter of employees regularly work with data. Vendors have attempted to overcome this barrier by introducing natural language processing (NLP) and low-code/no-code features. However, NLP features have been limited by small vocabularies requiring specific business phrasing, while low-code/no-code features only support basic tasks.

Generative AI has the potential to change this dynamic. Large language models like ChatGPT and Google Gemini offer extensive vocabularies and can interpret user intent, enabling true natural language interactions. This makes data exploration and analysis accessible to non-technical users and reduces coding requirements for data experts. In response to advancements in generative AI, many data management and analytics vendors, including Tableau, have made it a focal point of their product development. Tech giants like AWS, Google, and Microsoft, as well as specialized vendors, have heavily invested in generative AI.
Einstein Copilot for Tableau Prep, now generally available, allows users to describe calculations in natural language, which the tool interprets to create formulas for calculated fields in Tableau Prep. Previously, this required expertise in objects, fields, functions, and limitations. Einstein Copilot for Tableau Catalog, set for release later this month, will enable users to add descriptions for data sources, workbooks, and tables with one click. In August, Einstein Copilot for Tableau Web Authoring will allow users to explore data in natural language directly from Tableau Cloud Web Authoring, producing visualizations, formulating calculations, and suggesting follow-up questions.

Tableau’s generative AI assistants are designed to enhance efficiency and productivity for both experts and generalists. The assistants streamline complex data modeling and predictive analysis, automate routine data prep tasks, and provide user-friendly interfaces for data visualization and analysis. “Whether for an expert or someone just getting started, the goal of Einstein Copilot is to boost efficiency and productivity,” said Mike Leone, an analyst at TechTarget’s Enterprise Strategy Group. The planned generative AI assistants for different parts of Tableau’s platform offer unique value in various stages of the data and AI lifecycle, according to Leone.

Doug Henschen noted that the generative AI assistants for Tableau Web Authoring and Tableau Prep are similar to those being introduced by other vendors. However, the addition of a generative AI assistant for data cataloging represents a unique differentiation for Tableau. “Einstein Copilot for Tableau Catalog is unique to Tableau among analytics and BI vendors,” Henschen said. “But it’s similar to GenAI implementations being done by a few data catalog vendors.”

Beyond the generative AI assistants, Tableau’s latest update includes: Among these non-Copilot capabilities, making Pulse embeddable is particularly significant. Extending generative AI capabilities to work applications will make them more effective. “Embedding Pulse insights within day-to-day applications promises to open up new possibilities for making insights actionable for business users,” Henschen said. Multi-fact relationships are also noteworthy, enabling users to relate datasets with shared dimensions and informing applications that require large amounts of high-quality data. “Multi-fact relationships are a fascinating area where Tableau is really just getting started,” Leone said. “Providing ways to improve accuracy, insights, and context goes a long way in building trust in GenAI and reducing hallucinations.”

While Tableau has launched its first generative AI assistant and will soon release more, the vendor has not yet disclosed pricing for the Copilots and related features. The generative AI assistants are available through a bundle named Tableau+, a premium Tableau Cloud offering introduced in June. Beyond the generative AI assistants, Tableau+ includes advanced management capabilities, simplified data governance, data discovery features, and integration with Salesforce Data Cloud. Generative AI is compute-intensive and costly, so it’s not surprising that Tableau customers will have to pay extra for these capabilities. Some vendors are offering generative AI capabilities for free to attract new users, but Henschen believes costs will eventually be incurred. “Customers will want to understand the cost implications of adding these new capabilities,”

Read More
Data Protection Improvements from Next DLP

Insider risk and data protection company Next DLP has unveiled its new Secure Data Flow technology, designed to enhance data protection for customers. Integrated into the company’s Reveal Platform, Secure Data Flow monitors the origin, movement, and modification of data to provide comprehensive protection. The technology can secure critical business data flowing from any SaaS application, including Salesforce, Workday, SAP, and GitHub, to prevent accidental data loss and malicious theft.

“In modern IT environments, intellectual property often resides in SaaS applications and cloud data stores,” said John Stringer, head of product at Next DLP. “The challenge is that identifying high-impact data in these locations based on its content is difficult. Secure Data Flow, through Reveal, ensures that firms can confidently protect their most critical data assets, regardless of their location or application.”

Next DLP argues that legacy data protection technologies are inadequate, relying on pattern matching, regular expressions, keywords, user-applied tags, and fingerprinting, which only cover a limited range of text-based data types. The company highlights that recent studies indicate employees download an average of 30 GB of data each month from SaaS applications to their endpoints, such as mobile phones, laptops, and desktops, emphasizing the need for advanced data protection measures.

Secure Data Flow tracks data as it moves through both sanctioned and unsanctioned channels within an organization. By complementing traditional content and sensitivity classification-based approaches with origin-based data identification, manipulation detection, and data egress controls, it effectively prevents data theft and misuse. This approach results in an “all-encompassing, 100 percent effective, false-positive-free solution that simplifies the lives of security analysts,” claims Next DLP.

“Secure Data Flow represents a novel approach to data protection and insider risk management,” said Ken Buckler, research director at Enterprise Management Associates. “It not only enhances detection and protection capabilities but also streamlines data management processes. This improves the accuracy of data sensitivity recognition and reduces endpoint content inspection costs in today’s diverse technological environments.”

Read More