As companies rapidly embrace Artificial Intelligence and realize its benefits, trust must be their top priority. And to instill trust in AI, they must first instill trust in the data that powers it. Think of data as a well-balanced diet for AI: you are healthiest when you avoid junk food and consume the proper nutrients. Simply put, organizations can only harness the full power of AI when it is fueled by accurate, comprehensive data.
The Future is Here
AI is no longer a futuristic concept; it is a reality in our living rooms, our cars, and, increasingly, our pockets. As this technology plays an ever-expanding role in our daily lives, a crucial question arises: to what extent can, and should, we place our trust in these AI systems? For some, that trust comes more naturally than for others.
As the prevalence of AI increases, so does concern about ensuring that it aligns with human values. A frequently cited example of this challenge is the moral decision an autonomous car may face in a collision scenario. Imagine the car must swerve to keep its driver from being struck and seriously injured by an oncoming bus, but it will hit a baby if it swerves left or an elderly person if it swerves right. Either choice poses a complex ethical question for the autonomous system.
Arvind Krishna, Senior Vice President of Hybrid Cloud and Director of IBM Research, emphasizes the importance of careful programming in AI systems to prevent biases introduced by programmers from influencing outcomes. Recognizing the complexity of such issues, he discusses the need to develop frameworks for addressing these ethical challenges, a task IBM is tackling through its participation in the Partnership on AI alongside other technology organizations.
Trust in AI Data vs. Bias
Instances of machines demonstrating bias have already garnered attention, eroding trust in AI systems. AI technicians are actively working to identify and mitigate the origins of bias, acknowledging that machines can become biased due to inadequate representation in their training data. Guru Banavar, IBM Chief Science Officer for Cognitive Computing, notes that unintentional bias may arise from a lack of care in selecting the right training dataset, while intentional bias can result from a malicious attacker manipulating the dataset.
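The under-representation problem Banavar describes can be made concrete with a simple pre-training audit. The sketch below is illustrative only, not an IBM technique: it counts how often each group appears in a training sample and flags groups whose share falls below a hypothetical threshold (`min_share` is an assumed parameter, and the sample data is invented).

```python
# Minimal sketch of auditing training data for under-represented groups.
# The threshold and group labels are illustrative assumptions.
from collections import Counter

def representation_report(labels, min_share=0.10):
    """Return each group's share of the data, paired with a flag that is
    True when the share falls below min_share (a hypothetical cutoff)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {
        group: (n / total, n / total < min_share)  # (share, under-represented?)
        for group, n in counts.items()
    }

# Example: a skewed training sample where two groups are scarce
sample = ["adult"] * 90 + ["child"] * 5 + ["elderly"] * 5
for group, (share, flagged) in representation_report(sample).items():
    print(f"{group}: {share:.0%}, under-represented: {flagged}")
```

A check like this catches only the crudest form of the "unintentional bias" Banavar mentions; real audits also examine label quality, feature correlations, and deliberate dataset manipulation.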
James Hendler, Director of the Institute for Data Exploration and Applications at Rensselaer Polytechnic Institute, reminds us that while AI can be a force for social good, it also holds the potential for diverse social impacts, where actions deemed good by one may be perceived as harmful by another. Hence, an awareness of these complexities is essential in navigating the ethical landscape of AI applications.
AI is revolutionizing work processes and service delivery, enabling organizations to harness its capabilities for data-driven predictions, product and service optimization, innovation, increased productivity, and cost reduction. While the benefits of adoption are immense, AI also introduces risks and challenges, prompting concerns about how trustworthy today's AI applications really are.
Public Trust in AI Data
Unlocking the full potential and return on investment from AI necessitates a sustained commitment to building and upholding public trust. For widespread adoption, people must have confidence that AI development and utilization adhere to responsible and trustworthy practices.
In a pioneering initiative, KPMG Australia, in collaboration with the University of Queensland, conducted a world-first in-depth exploration of trust and global attitudes toward AI across 17 countries. The resulting report, “Trust in Artificial Intelligence: A Global Study 2023,” delivers comprehensive insights into the factors influencing trust, the perceived risks and benefits of AI utilization, community expectations regarding AI governance, and the entities considered trustworthy in AI development, usage, and regulation.
The report presents key findings from the global study and offers individual country snapshots, serving as a valuable resource for those leading, creating, or governing AI systems. Importantly, it outlines four critical pathways for policymakers, standards setters, governments, businesses, and non-governmental organizations (NGOs) to navigate the challenges associated with trust in the development and deployment of AI.