UX Principles for AI in Healthcare


The Role of UX in AI-Driven Healthcare

AI is poised to revolutionize the global economy, with predictions that it could contribute $15.7 trillion by 2030, more than the combined economic output of China and India. Among the industries likely to see the most transformative impact is healthcare. However, during my time at NHS Digital, I saw how systems that weren't designed with existing clinical workflows in mind added unnecessary complexity for clinicians, often leading to manual workarounds and errors caused by fragmented data entry across systems. The risk is that AI, if not designed with user experience (UX) at the forefront, could exacerbate these issues, creating more disruption than it resolves. From diagnostic tools to consumer health apps, UX is critical to making AI-driven healthcare innovations effective and user-friendly. This article explores the intersection of UX and AI in healthcare, outlining key UX principles for designing better AI-driven experiences and highlighting trends shaping the future of the field.

The Shift in Human-Computer Interaction with AI

AI fundamentally changes how humans interact with computers. Traditionally, users took command by entering inputs: clicking, typing, and adjusting settings until the desired outcome was achieved. The computer followed instructions, while the user remained in control of each step. With AI, this dynamic shifts dramatically. Users now specify a goal, and the AI determines how to achieve it. For example, rather than manually creating an illustration, a user might instruct an AI to "design a graphic for AI-driven healthcare with simple shapes and bold colors." While this saves time, it introduces challenges around ensuring the results meet user expectations, especially when the process behind the AI's decisions is opaque.

The Importance of UX in AI for Healthcare

A significant challenge in healthcare AI is the "black box" nature of these systems.
For example, consider a radiologist reviewing a lung X-ray that an AI flagged as normal despite the presence of concerning lesions. Research has shown that commercial AI systems can perform worse than radiologists when multiple health issues are present. When AI decisions are unclear, clinicians may question the system's reliability, especially if they cannot understand the rationale behind a recommendation. This opacity also hinders feedback, making it difficult to improve the system's performance. Addressing it is essential work for UX designers.

Bias in AI is another significant issue. Many healthcare AI tools have been documented as biased: systems trained on predominantly male cardiovascular data can fail to detect heart disease in women, and models struggle to identify conditions like melanoma in people with darker skin tones due to insufficient diversity in training datasets. UX can help mitigate these biases by designing interfaces that clearly explain the data used in decisions, highlight missing information, and provide confidence levels for predictions. The movement toward eXplainable AI (XAI) seeks to make AI systems more transparent and interpretable for human users.

UX Principles for AI in Healthcare

To ensure AI is beneficial in real-world healthcare settings, UX designers must prioritize certain principles: transparency, interpretability, controllability, and human-centered AI.

Applications of AI in Healthcare

AI is already making a significant impact across healthcare applications, from diagnostic tools to consumer health apps. Real-world deployments have demonstrated that while AI can be useful, its effectiveness depends heavily on usability and UX design. By adhering to the principles of transparency, interpretability, controllability, and human-centered AI, designers can help create AI-enabled healthcare applications that are both powerful and user-friendly.

Autonomous AI Sans Human


Rise of Autonomous AI: Less Human Control and Increasing Adoption

A recent Salesforce study reveals that nearly half of employees in Switzerland (46%) are either using or experimenting with AI technologies. While there is general comfort with AI when it complements human efforts, many employees still prefer human oversight for tasks like training, data security, and onboarding. Despite this, the data indicates that increased investment in education and training could strengthen trust in autonomous AI systems.

Switzerland's AI Adoption Compared to Other Countries

Switzerland shows a higher openness to AI than many other nations. In Germany, only 28% of respondents use AI confidently, compared to 46% in Switzerland. The UK (17%) and Ireland (15%) show even more skepticism. Conversely, India has the highest AI confidence, with 40% of respondents expressing strong support. In Switzerland, however, 24% of employees are reluctant to use AI at work, and 25% are not keen on generative AI.

Sector-Specific AI Usage Trends

The study also highlights significant sector differences. In the communications industry, 69% of employees are willing to use AI tools like ChatGPT and Gemini without hesitation. This contrasts with the life sciences and biotechnology sectors, where 72% of respondents are resistant to AI adoption. In the public sector, while there is general willingness, 56% express reservations due to a lack of expertise and guidelines, and 39% are completely opposed to using AI tools.

Generational Insights on AI Proficiency

Among the generations, Millennials and Gen X exhibit the highest proficiency and comfort with AI technology. Gen Z appears more critical, with 82% of this group avoiding AI assistants like IBM Watson or Microsoft Copilot. Millennials are more engaged, with 39% actively experimenting with or fully integrating AI assistants into their work routines.
Gregory Leproux, Senior Director of Solution Engineering at Salesforce Switzerland, notes: "Our study reflects our customer experience: AI is widely used in Swiss companies, but human intervention remains prevalent. To fully leverage the benefits of AI, there is a need for robust control mechanisms and policies for responsible AI use, allowing for systematic review rather than piecemeal assessment. Thoughtfully designed AI systems can merge human and machine intelligence, marking the beginning of an exciting new era."

The survey, conducted by Salesforce in partnership with YouGov, took place from March 20 to April 3, 2024, with nearly 6,000 full-time employees from various industries and countries, including Switzerland (265 participants). The online survey covered nine countries: the US, UK, Ireland, Australia, France, Germany, India, Singapore, and Switzerland.

Source: www.salesforce.com

AI for Consumers and Retailers


Before generative AI became mainstream, tech-savvy retailers had long been leveraging transformative technologies to automate tasks and understand consumer behavior. Insights from consumer and future trends, along with predictive analytics, have long guided retailers in improving customer experiences and enhancing operational efficiency.

While AI is currently used for personalized recommendations and online customer support, many consumers still harbor distrust toward it. Salesforce is addressing this concern by promoting trustworthy AI with human oversight and implementing powerful controls that focus on mitigating high-risk AI outcomes. This approach is crucial because many knowledge workers fear losing control over AI: although people trust AI to handle significant portions of their work, they believe that increased human oversight would bolster their confidence in it. Building this trust is a challenge retailers must overcome to fully harness AI's potential as a reliable assistant. So, where does the retail industry stand with AI, and how can retailers build consumer trust while developing AI responsibly?

AI for Consumers and Retailers

Recent research from Salesforce and the Retail AI Council highlights how AI is reshaping consumer behavior and retailer interactions. AI is now integral to providing personalized deals, suggesting tailored products, and enhancing customer service through chatbots. Retailers are increasingly embedding generative AI into their business operations: a significant majority (93%) report using it for personalization, enabling customers to find products and make purchases faster through natural language interactions on digital storefronts and messaging apps.
For instance, a customer might tell a retailer's AI assistant about their camping needs, and based on location, preferences, and past purchases, the AI can recommend a suitable tent and provide a direct link for checkout and store collection. As of early 2024, 92% of retailers' investments were directed toward AI technology. While AI is not new to retail, with 59% of merchants already using it for product recommendations and 55% utilizing digital assistants for online purchases, its applications continue to expand. From demand forecasting to customer sentiment analysis, AI enhances consumer experiences by predicting preferences and optimizing inventory levels, thereby reducing markdowns and improving efficiency.

Barriers and Ethical Considerations

Despite its promise, integrating generative AI in retail faces significant challenges, particularly regarding bias in AI outputs. The need for clear ethical guidelines in retail AI use is pressing, underscoring the gap between adoption rates and ethical stewardship. Strategies that emphasize transparency and accountability are vital for fostering responsible AI innovation. Half of the surveyed retailers indicated they could fully comply with stringent data security standards and privacy regulations, demonstrating the industry's commitment to protecting consumer data amid evolving regulatory landscapes.

Retailers are increasingly aware of the risks associated with AI integration. Concerns about bias top the list, with half of respondents worried about prejudiced AI outcomes. Issues like hallucinations (38%) and toxicity (35%) linked to generative AI implementation also highlight the need for robust mitigation strategies. A majority (62%) of retailers have established guidelines to address transparency, data security, and privacy concerns related to the ethical deployment of generative AI.
These guidelines ensure responsible AI use, emphasizing trustworthy and unbiased outputs that adhere to ethical standards in the retail sector. Together, these insights reveal a dual imperative for retailers: leveraging AI technologies to enhance operational efficiency and customer experiences while maintaining stringent ethical standards and mitigating risks.

Consumer Perceptions and the Future of AI in Retail

As AI continues to redefine retail, balancing ethical considerations with technological advancement is essential. To combat consumer skepticism, companies should focus on transparent communication about AI usage and emphasize that humans, not technology, are ultimately in control. Whether aiming for top-line growth or bottom-line efficiency, AI is a crucial addition to a retailer's technology stack. However, to fully embrace it, retailers must take consumers on the journey and earn their trust.

AI Confidence Scores


In this insight, the focus is on exploring the confidence scores available through the OpenAI API. The first section explains these scores and their significance using a custom chat interface; the second demonstrates how to apply them programmatically in code.

Understanding Confidence Scores

To begin, it's important to understand what an LLM (Large Language Model) is doing for each token in its response: it assigns a score to every candidate token it could emit next and then selects one of them. However, it's essential to clarify that the term "probabilities" here is somewhat misleading. While the values mathematically qualify as a probability distribution (they add up to one), they don't necessarily reflect true confidence or likelihood in the way we might expect, so they should be treated with caution. A useful way to think about them is as "confidence" scores, though it's crucial to remember that, much like humans, LLMs can be confident and still be wrong. The values are not inherently meaningful without additional context or validation.

Example: Using a Chat Interface

One way to explore these confidence scores is through a chat interface that displays the per-token scores alongside the model's output. In one case, when asked to "pick a number," the LLM chose the word "choose" despite it having only a 21% chance of being selected. This demonstrates that LLMs don't always pick the most likely token unless configured to do so. Such an interface also shows how the model might struggle with questions that have no clear answer, offering insights into detecting possible hallucinations. For example, when asked to list famous people with an interpunct in their name, the model shows low confidence in its guesses. This behavior indicates uncertainty and can be an early signal of a forthcoming incorrect response.

Hallucinations and Confidence Scores

The discussion also touches on whether low confidence scores can help detect hallucinations, cases where the model generates false information.
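The per-token behavior described above can be illustrated with a small, self-contained sketch. The candidate tokens and raw scores (logits) here are invented for illustration; the point is that the scores form a distribution summing to one, and that sampling from it can emit a token other than the most likely one.

```python
import math
import random

def softmax(logits):
    """Convert raw model scores (logits) into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates with made-up logits.
candidates = ["pick", "choose", "select", "go"]
probs = softmax([2.0, 1.2, 0.8, 0.1])

# The values form a probability distribution: they sum to 1.
assert abs(sum(probs) - 1.0) < 1e-9

# With sampling, the model may emit a token other than the most
# likely one -- just as "choose" was picked at only ~21% in the text.
token = random.choices(candidates, weights=probs, k=1)[0]
print(dict(zip(candidates, [round(p, 2) for p in probs])))
```

Whether the top token is always chosen depends on decoding settings such as temperature; greedy decoding would always return `"pick"` here.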
While low confidence often correlates with potential hallucinations, it's not a foolproof indicator. Some hallucinations come with high confidence, while low-confidence tokens might simply reflect natural variability in language. For instance, when asked about the capital of Kazakhstan, the model shows uncertainty due to the city's historical renaming between Astana and Nur-Sultan. The confidence scores reflect this inconsistency, highlighting how the model can still select an answer despite holding conflicting information.

Using Confidence Scores in Code

The next part covers how to leverage confidence scores programmatically. For simple yes/no questions, it's possible to compress the response into a single token and calculate the confidence score using OpenAI's API: the request limits the answer to one token and asks the API to return log probabilities for it. Using this setup, one can extract the model's confidence in its response, converting log probabilities back into regular probabilities with math.exp.

Expanding to Real-World Applications

This concept extends to more complex scenarios, such as verifying whether an image of a driver's license is valid. By analyzing the model's confidence in its answer, developers can decide when to flag responses for human review based on predefined confidence thresholds. The technique also applies to multiple-choice questions, allowing developers to extract not only the top token but the top 10 options, along with their confidence scores.

Conclusion

While confidence scores from LLMs aren't a perfect solution for detecting accuracy or truthfulness, they can provide useful insights in certain scenarios. With careful application and evaluation, developers can make informed decisions about when to trust the model's responses and when to intervene. The final takeaway is that confidence scores, while not foolproof, can play a role in improving the reliability of LLM outputs, especially when combined with thoughtful design and ongoing calibration.
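The single-token pattern and the human-review threshold can be sketched as follows. The conversion with math.exp and the logprobs/top_logprobs/max_tokens request parameters come from the discussion above; the model name, the 0.90 threshold, and the helper function names are illustrative assumptions, not values from the source.

```python
import math

# Confidence threshold below which an answer goes to a human reviewer.
# The value 0.90 is an assumption for illustration, not from the source.
REVIEW_THRESHOLD = 0.90

def confidence_from_logprob(logprob):
    """Convert a token's log probability back into a 0-1 probability."""
    return math.exp(logprob)

def route_answer(token, logprob):
    """Accept the model's one-token answer only if confidence is high enough."""
    confidence = confidence_from_logprob(logprob)
    if confidence < REVIEW_THRESHOLD:
        return ("human_review", confidence)
    return (token, confidence)

# A real request might look roughly like this (sketch only, not verified
# against any particular SDK version):
#
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",      # illustrative model choice
#     messages=[{"role": "user",
#                "content": "Is this license valid? Answer yes or no."}],
#     max_tokens=1,             # compress the answer into a single token
#     logprobs=True,            # return the log probability of that token
#     top_logprobs=10,          # also return the top alternatives
# )
# choice = resp.choices[0].logprobs.content[0]
# print(route_answer(choice.token, choice.logprob))

# Offline illustration with made-up logprobs:
print(route_answer("yes", -0.02))  # high confidence, answer accepted
print(route_answer("yes", -0.60))  # low confidence, flagged for review
```

Inspecting the top 10 alternatives (via top_logprobs) extends the same idea to multiple-choice questions: compare the confidence of each candidate answer rather than just the single emitted token.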
