
AI Productivity Paradox

The AI Productivity Paradox: Why Aren't More Workers Using AI Tools Like ChatGPT?

The Real Barrier Isn't Technical Skills, It's Time to Think

Despite the transformative potential of tools like ChatGPT, most knowledge workers aren't using them effectively. Those who do tend to use them for basic tasks like summarization. Fewer than 5% of ChatGPT's user base subscribes to the paid Plus version, indicating that only a small fraction of potential professional users are tapping into AI for more complex, high-value tasks.

To someone who has spent over a decade building AI products at companies such as Google Brain and Shopify Ads, the evolution of AI has been clearly evident. With the advent of ChatGPT, AI has transitioned from an enhancement for tools like photo organizers to a significant productivity booster for all knowledge workers. Most executives are aware that today's buzz around AI is more than hype. They're eager to make their companies AI-forward, recognizing that the technology is now more powerful and user-friendly than ever. Yet despite this potential and enthusiasm, widespread adoption remains slow. The real issue lies in how organizations approach work itself: systemic problems are hindering the integration of these tools into daily workflows. Ultimately, the questions executives need to ask aren't "How can we use AI to work faster?" or "Can this feature be built with AI?" but rather "How can we use AI to create more value? What are the questions we should be asking but aren't?"

Real-World Impact

Recently, large language models (LLMs), the technology behind tools like ChatGPT, were used to tackle a complex data structuring and analysis task. This task would typically require a cross-functional team of data analysts and content designers and take a month or more to complete. Here's what was accomplished in just one day using Google AI Studio:

The process wasn't about pressing a button and letting AI do all the work. It required focused effort, detailed instructions, and multiple iterations. Hours were spent crafting precise prompts, providing feedback, and redirecting the AI when it went off course. In this case, the task was compressed from a month-long process into a single day. While it was mentally exhausting, the result wasn't just a faster process; it was a fundamentally better and different outcome. The LLMs uncovered nuanced patterns and edge cases in the data that traditional analysis would have missed.

The Counterintuitive Truth

Here lies the key to understanding the AI productivity paradox: this success was possible because leadership allowed a full day dedicated to rethinking data processes with AI as a thought partner. That provided the space for deep, strategic thinking, exploring connections and possibilities that would typically take weeks. Yet this quality-focused work is often sacrificed under the pressure to meet deadlines. Ironically, most people don't have time to figure out how they could save time.

Such dedicated time for exploration is a luxury many product managers (PMs) can't afford. Under constant pressure to deliver immediate results, many PMs don't have even an hour for strategic thinking. For some, the only way to carve out time for this work is by pretending to be sick. This continuous pressure also hinders AI adoption: developing thorough testing plans or proactively addressing AI-related issues is viewed as a luxury, not a necessity.
This creates a counterproductive dynamic: why use AI to spot issues in documentation if fixing them would delay launch? Why conduct further user research when the direction has already been set from above?

Charting a New Course: Investing in People

Giving employees time to "figure out AI" isn't enough; most need training to understand how to leverage ChatGPT beyond simple tasks like summarization. Yet the training required is often far less than people expect. While the market is flooded with AI training programs, many aren't suitable for most employees: they are often time-consuming, overly technical, and not tailored to specific job functions. The best results come from working closely with individuals for brief periods, 10 to 15 minutes, to audit their current workflows and identify areas where LLMs could streamline processes. Understanding the technical details behind token prediction isn't necessary to write effective prompts.

It's also a myth that AI adoption is only for people under 40 with technical backgrounds. In fact, attention to detail and a passion for quality work are far better predictors of success. By setting aside biases, companies may discover hidden AI enthusiasts within their ranks. For example, a lawyer in his sixties grasped the potential of LLMs after just five minutes of explanation. With examples tailored to his domain, the technology helped him draft a law review article he had been putting off for months.

Many companies likely already have AI enthusiasts: individuals who have taken the initiative to explore LLMs in their work. These "LLM whisperers" could come from any department: engineering, marketing, data science, product management, or customer service. By identifying these internal innovators, organizations can leverage their expertise. Once found, these experts can conduct "AI audits" of current workflows, identify areas for improvement, and provide starter prompts for specific use cases. Internal experts often understand the company's systems and goals better, making them more capable of spotting relevant opportunities.

Ensuring Time for Exploration

Beyond providing training, it's crucial that employees have time to explore and experiment with AI tools. Companies can't tell their employees to innovate with AI while demanding that another month's worth of features ship by Friday at 5 p.m. Ensuring teams have a few hours a month for exploration is essential for fostering true AI adoption. Once the initial hurdle of adoption is overcome, employees will be able to identify the most promising areas for AI investment. From there, organizations will be better positioned to assess the need for more specialized training.

Conclusion

The AI productivity paradox is not about the complexity of the technology but about how organizations approach work and innovation. Harnessing AI's potential is simpler than "AI influencers" often suggest, requiring only

Read More

AI Agent Trends

AI Agents: Key Statistics and Trends for 2025

"The agent revolution is real and as exciting as the cloud, social, and mobile revolutions," remarked Salesforce Chair and CEO Marc Benioff. "It will provide a level of transformation that we've never seen." With the general availability of Agentforce, the era of AI-powered agents is officially here. These intelligent software agents, designed to perform tasks autonomously or in collaboration with humans, are already transforming businesses by driving efficiency and improving customer outcomes.

AI Agents in Action

Companies across the globe are leveraging AI agents to achieve remarkable results. For example, Wiley has seen a 40% boost in case resolution rates with Agentforce, far surpassing its previous bot's performance. Other success stories from Saks and OpenTable reinforce the ROI potential of this groundbreaking technology. Salesforce research highlights data from consumers, employees, and business leaders worldwide, demonstrating how AI agents address key pain points while unlocking significant opportunities for enterprises and individuals alike.

Why Consumers Need AI Agents

Traditional customer service processes often frustrate consumers, leading to inefficiency and dissatisfaction. AI agents are transforming this landscape with immediate, personalized assistance that minimizes wait times and eliminates repeated explanations. Consumer sentiment indicates a growing acceptance of this technology.

Why Enterprises Need AI Agents

For enterprises, inefficiency is a persistent challenge: time-consuming administrative tasks often prevent workers from focusing on strategic, customer-centric activities. AI adoption is increasingly a priority for revenue-generating teams, with measurable benefits. Salesforce experts emphasize that while AI has already proven its value in service, sales, marketing, and commerce, the surface of its potential has only just been scratched.

The Agent-First Future

As organizations adopt an agent-first approach, they unlock opportunities to redefine operations, increase efficiency, and drive innovation. AI agents are not just the future; they're the present solution to enduring challenges, empowering businesses to meet the demands of a rapidly evolving digital economy.

Read More

DHS Introduces AI Framework to Protect Critical Infrastructure

The Department of Homeland Security (DHS) has unveiled the Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure, a voluntary set of guidelines designed to ensure the safe and secure deployment of AI across the systems that power daily life. From energy grids to water systems, transportation, and communications, critical infrastructure increasingly relies on AI for enhanced efficiency and resilience. While AI offers transformative potential, such as detecting earthquakes, optimizing energy usage, and streamlining logistics, it also introduces new vulnerabilities.

Framework Overview

The framework, developed with input from cloud providers, AI developers, critical infrastructure operators, civil society, and public sector organizations, builds on DHS's broader policies from 2023, which align with White House directives. It aims to provide a shared roadmap for balancing AI's benefits with its risks.

AI Vulnerabilities in Critical Infrastructure

The DHS framework categorizes vulnerabilities into three key areas. The guidelines also address sector-specific vulnerabilities and offer strategies to ensure AI strengthens resilience while minimizing misuse risks.

Industry and Government Support

Arvind Krishna, Chairman and CEO of IBM, lauded the framework as a "powerful tool" for fostering responsible AI development: "We look forward to working with DHS to promote shared and individual responsibilities in advancing trusted AI systems."

Marc Benioff, CEO of Salesforce, emphasized the framework's role in fostering collaboration among stakeholders while prioritizing trust and accountability: "Salesforce is committed to humans and AI working together to advance critical infrastructure industries in the U.S. We support this framework as a vital step toward shaping the future of AI in a safe and sustainable manner."

DHS Secretary Alejandro N. Mayorkas highlighted the urgency of proactive action: "AI offers a once-in-a-generation opportunity to improve the strength and resilience of U.S. critical infrastructure, and we must seize it while minimizing its potential harms. The framework, if widely adopted, will help ensure the safety and security of critical services."

DHS Recommendations for Stakeholders

A Call to Action

DHS encourages widespread adoption of the framework to build safer, more resilient critical infrastructure. By prioritizing trust, transparency, and collaboration, this initiative aims to guide the responsible integration of AI into essential systems, ensuring they remain secure and effective as technology continues to evolve.

Read More

Liquid Neural Networks

Liquid neural networks (LNNs) mark a significant departure from traditional, rigid AI structures, drawing deeply from the adaptable nature of biological neural systems. MIT researchers explored how organisms manage complex decision-making and dynamic responses with minimal neurons, translating these principles into the design of LNNs.

Read More

LLM Economies

Throughout history, disruptive technologies have been the catalyst for major social and economic revolutions. The invention of the plow and irrigation systems 12,000 years ago sparked the Agricultural Revolution, while Johannes Gutenberg's 15th-century printing press fueled the Protestant Reformation and helped propel Europe out of the Middle Ages into the Renaissance. In the 18th century, James Watt's steam engine ushered in the Industrial Revolution. More recently, the internet has revolutionized communication, commerce, and information access, shrinking the world into a global village. Similarly, smartphones have transformed how people interact with their surroundings.

Now, we stand at the dawn of the AI revolution. Large Language Models (LLMs) represent a monumental leap forward, with significant economic implications at both macro and micro levels. These models are reshaping global markets, driving new forms of currency, and creating a novel economic landscape. The reason LLMs are transforming industries and redefining economies is simple: they automate both routine and complex tasks that traditionally require human intelligence. They enhance decision-making, boost productivity, and facilitate cost reductions across sectors. This enables organizations to allocate human resources toward more creative and strategic endeavors, resulting in new products and services. From healthcare to finance to customer service, LLMs are creating new markets and driving AI-powered services like content generation and conversational assistants into the mainstream.

To truly grasp the engine driving this new global economy, it's essential to understand the inner workings of this disruptive technology. These posts will provide both a macro-level overview of the economic forces at play and a deep dive into the technical mechanics of LLMs, equipping you with a comprehensive understanding of the revolution happening now.

Why Now? The Connection Between Language and Human Intelligence

AI did not begin with ChatGPT's arrival in November 2022. Developers were building machine learning classification models as early as 1999, and the roots of AI go back even further. Artificial intelligence was formally born in 1950, when Alan Turing, considered the father of theoretical computer science and famed for cracking the Nazi Enigma code during World War II, created the first formal definition of intelligence. This definition, known as the Turing Test, demonstrated the potential for machines to exhibit human-like intelligence through natural language conversations. The test involves a human evaluator who converses with both a human and a machine; if the evaluator cannot reliably distinguish between the two, the machine is considered to have passed. Remarkably, after 72 years of gradual AI development, ChatGPT simulated this very interaction, passing the Turing Test and igniting the current AI explosion.

But why is language so closely tied to human intelligence, rather than, for example, vision? While 70% of our brain's neurons are devoted to vision, OpenAI's pioneering image generation model, DALL-E, did not trigger the same level of excitement as ChatGPT. The answer lies in the profound role language has played in human evolution.

The Evolution of Language

The development of language was the turning point in humanity's rise to dominance on Earth.
As Yuval Noah Harari points out in his book Sapiens: A Brief History of Humankind, it was the ability to gossip and discuss abstract concepts that set humans apart from other species. Complex communication, such as gossip, requires a shared, sophisticated language. Human language evolved from primitive cave signs to structured alphabets, which, along with grammar rules, created languages capable of expressing thousands of words. In today's digital age, language has further evolved with the inclusion of emojis, and now, with the advent of GenAI, tokens have become the latest cornerstone in this progression. These shifts highlight the extraordinary journey of human language, from simple symbols to intricate digital representations. In the next post, we will explore the intricacies of LLMs, focusing specifically on tokens. But before that, let's delve into the economic forces shaping the LLM-driven world.

The Forces Shaping the LLM Economy

AI Giants in Competition

Karl Marx and Friedrich Engels argued that those who control the means of production hold power. Today's tech giants understand that AI is the future means of production, and the race to dominate the LLM market is well underway. The competition is fierce, with industry leaders like OpenAI, Google, Microsoft, and Facebook battling for supremacy. New challengers such as Mistral (France), AI21 (Israel), Elon Musk's xAI, and Anthropic are also entering the fray. The LLM industry is expanding exponentially, with billions of dollars of investment pouring in. For example, Anthropic has raised $4.5 billion from 43 investors, including major players like Amazon, Google, and Microsoft.

The Scarcity of GPUs

Just as Bitcoin mining requires vast computational resources, training LLMs demands immense computing power, driving a search for new energy sources; Microsoft's recent investment in nuclear energy underscores this urgency. At the heart of LLM technology are Graphics Processing Units (GPUs), essential for powering deep neural networks. These GPUs have become scarce and expensive, adding to the competitive tension.

Tokens: The New Currency of the LLM Economy

Tokens are the currency driving the emerging AI economy. Just as money facilitates transactions in traditional markets, tokens are the foundation of LLM economics. But what exactly are tokens? Tokens are the basic units of text that LLMs process. They can be single characters, parts of words, or entire words. For example, the word "Oscar" might be split into two tokens, "os" and "car." The performance of LLMs (quality, speed, and cost) hinges on how efficiently they generate these tokens. LLM providers price their services based on token usage, with different rates for input (prompt) and output (completion) tokens; a worked cost sketch follows at the end of this excerpt. As companies rely more on LLMs, especially for complex tasks like agentic applications, token usage will significantly affect operational costs. With fierce competition and the rise of open-source models like Llama-3.1, the cost of tokens is rapidly decreasing. For instance, OpenAI reduced its GPT-4 pricing by about 80% over the past year and a half. This trend enables companies to expand their portfolios of AI-powered products, further fueling the LLM economy.

Context Windows: Expanding Capabilities
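To make the per-token pricing arithmetic above concrete, here is a minimal sketch of a per-request cost estimate. The per-1,000-token prices are hypothetical placeholders, not any provider's actual rates.

```python
# Minimal sketch: estimating per-request LLM cost from token counts.
# The per-1,000-token prices below are hypothetical placeholders,
# not any provider's actual rates.

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_price_per_1k: float = 0.005,
                  output_price_per_1k: float = 0.015) -> float:
    """Return the dollar cost of one request, priced per 1,000 tokens."""
    return (prompt_tokens / 1000.0) * input_price_per_1k \
         + (completion_tokens / 1000.0) * output_price_per_1k

# A 2,000-token prompt with an 800-token completion:
print(f"${estimate_cost(2000, 800):.4f}")  # -> $0.0220
```

Input and output tokens are priced separately because generating completions costs providers more than ingesting prompts, which is why agentic workloads with long outputs dominate operational cost.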

Read More

RAGate

RAGate: Revolutionizing Conversational AI with Adaptive Retrieval-Augmented Generation

Building Conversational AI systems is challenging. It's feasible, but complex, resource-intensive, and time-consuming. The difficulty lies in creating systems that can not only understand and generate human-like responses but also adapt effectively to conversational nuances, ensuring meaningful engagement with users.

Retrieval-Augmented Generation (RAG) has already transformed Conversational AI by combining the internal knowledge of large language models (LLMs) with external knowledge sources. By applying RAG to business data, organizations empower their customers to ask natural language questions and receive insightful, data-driven answers.

The challenge? Not every query requires external knowledge. Over-reliance on external sources can disrupt conversational flow, much like consulting a book for every question during a conversation, even when internal knowledge is sufficient. Worse, if no external knowledge is available, the system may respond with "I don't know," despite having relevant internal knowledge to answer.

The solution? RAGate, an adaptive mechanism that dynamically determines when to use external knowledge and when to rely on internal insights. Developed by Xi Wang, Procheta Sen, Ruizhe Li, and Emine Yilmaz and introduced in their July 2024 paper on Adaptive Retrieval-Augmented Generation for Conversational Systems, RAGate addresses this balance with precision.

What Is Conversational AI?

At its core, conversation involves exchanging thoughts, emotions, and information, guided by tone, context, and subtle cues. Humans excel at this due to emotional intelligence, socialization, and cultural exposure. Conversational AI aims to replicate these human-like interactions by leveraging technology to generate natural, contextually appropriate, and engaging responses. These systems adapt fluidly to user inputs, making the interaction dynamic, like conversing with a human.

Internal vs. External Knowledge in AI Systems

To understand RAGate's value, we need to differentiate between two key concepts.

Limitations of Traditional RAG Systems

RAG integrates LLMs' natural language capabilities with external knowledge retrieval, often guided by "guardrails" to ensure responsible, domain-specific responses. However, strict reliance on external knowledge can lead to problems.

How RAGate Enhances Conversational AI

RAGate, or Retrieval-Augmented Generation Gate, adapts dynamically to determine when external knowledge retrieval is necessary. It enhances response quality by intelligently balancing internal and external knowledge, ensuring conversational relevance and efficiency (a minimal code sketch follows the variant list below). The mechanism:

Traditional RAG vs. RAGate: An Example

Scenario: a healthcare chatbot offers advice based on general wellness principles and up-to-date medical research. The adaptive approach improves response accuracy, reduces latency, and enhances the overall conversational experience.

RAGate Variants

RAGate offers three implementation methods, each tailored to optimize performance:

- RAGate-Prompt: uses natural language prompts to decide when external augmentation is needed. Lightweight and simple to implement.
- RAGate-PEFT: employs parameter-efficient fine-tuning (e.g., QLoRA) for better decision-making. Fine-tunes the model with minimal resource requirements.
- RAGate-MHA: leverages multi-head attention to interactively assess context and retrieve external knowledge. Optimized for complex conversational scenarios.
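To make the gating decision concrete, the sketch below shows the general shape of an adaptive gate: answer from internal knowledge when a confidence signal is high, and retrieve otherwise. The model and retriever interfaces and the confidence heuristic are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of RAGate-style adaptive retrieval gating.
# The model/retriever interfaces and the confidence heuristic are
# hypothetical stand-ins, not the implementation from the RAGate paper.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    used_retrieval: bool

def answer(model, retriever, query: str, threshold: float = 0.7) -> Answer:
    # Draft an answer from internal knowledge and score the model's
    # confidence in it (e.g., mean token log-probability of the draft).
    draft, confidence = model.generate_with_confidence(query)
    if confidence >= threshold:
        # Internal knowledge suffices: skip retrieval, keep the flow natural.
        return Answer(draft, used_retrieval=False)
    # Low confidence: augment the prompt with retrieved external knowledge.
    docs = retriever.search(query, k=3)
    prompt = "Context:\n" + "\n".join(docs) + f"\n\nQuestion: {query}"
    return Answer(model.generate(prompt), used_retrieval=True)
```

The three variants above differ mainly in how this gate decision is computed: via prompting, via a parameter-efficiently fine-tuned classifier, or via a multi-head attention module over the conversation context.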
How to Implement RAGate

Key Takeaways

RAGate represents a breakthrough in Conversational AI, delivering adaptive, contextually relevant, and efficient responses by balancing internal and external knowledge. Its potential spans industries like healthcare, education, finance, and customer support, enhancing decision-making and user engagement. By intelligently combining retrieval-augmented generation with nuanced adaptability, RAGate is set to redefine the way businesses and individuals interact with AI.

Read More

Salesforce Einstein Conversation Insights

Unlocking Einstein Conversation Insights in Salesforce: Setup, Integration, and Customization

In this insight, we'll guide you through setting up Einstein Conversation Insights in Salesforce, integrating it with platforms like Zoom, managing permissions, and customizing the dataflow schedule for optimal performance. As a marketer from way back when, little gets me as excited about the future of technology as marketing tools that make us smarter and faster.

What is Einstein Conversation Insights?

Einstein Conversation Insights (ECI) empowers teams to analyze and identify patterns, phrases, and areas of focus within voice and video interactions. By tracking terms and extracting actionable insights, managers and representatives can prioritize follow-ups and improve decision-making through detailed call logs and actionable dashboards. No longer are we hampered by the limitations of written text!

Step 1: Enabling Einstein Conversation Insights

To begin utilizing Einstein Conversation Insights:

Step 2: Assigning Permissions

To grant users access to ECI:

Step 3: Connecting Recording Providers

Voice Recording Providers. To analyze call recordings:

Video Recording Providers. For video analysis, integrate your conferencing platform.

Setting Up Zoom Integration

To integrate Salesforce with Zoom:

Once complete, users will need to link their Zoom accounts individually. A message will confirm successful setup. Click Take me there to finalize the connection.

Step 4: Exploring the Conversation Insights App

After linking your Zoom account, visit the Conversation Insights App under the Analytics tab. This app provides a comprehensive view of call details, recordings, and actionable insights, empowering teams to focus on strategic improvements.

Step 5: Customizing Dataflow Schedule

By default, ECI updates its dataflow every eight hours, refreshing your dashboards with new insights. To modify this schedule:

Frequently Asked Questions

1. What are the benefits of Einstein Conversation Insights? Einstein Conversation Insights automates the transcription and analysis of calls, identifies trends, and recommends next steps to accelerate sales cycles and free up sales staff to focus on closing opportunities.

2. Does ECI record calls? No. ECI does not record calls; it analyzes existing recordings from connected providers to generate insights.

3. Are there any limitations? Yes. Salesforce allows up to 100 custom insights, with each insight accommodating a maximum of 25 keywords, each up to 255 characters long.

Conclusion

Einstein Conversation Insights is a game-changing tool that analyzes voice and video interactions to provide actionable insights, empowering teams to make data-driven decisions. By integrating with Salesforce and platforms like Zoom, you can effortlessly track call details, identify trends, and streamline workflows. Customizing your dataflow schedule ensures your dashboards always reflect the latest information, enhancing efficiency and enabling timely decision-making. Ready to take your insights further? Start integrating Einstein Conversation Insights today!

By Shannan Hearne, Tectonic Marketing Ops Director

Read More

AI’s Impact on Future Information Ecosystems

The proliferation of generative AI technology has ignited a renewed focus within the media industry on how to strategically adapt to its capabilities. Media professionals are now confronted with crucial questions: What are the most effective ways to leverage this technology for efficiency in news production and to enhance audience experiences? Conversely, what threats do these technological advancements pose? Is legacy media on the brink of yet another wave of disintermediation from its audiences? And how does the evolution of technology affect journalism ethics?

In response to these challenges, the Open Society Foundations (OSF) launched the AI in Journalism Futures project earlier this year. The first phase of this ambitious initiative involved an open call for participants to develop future-oriented scenarios that explore the potential driving forces and implications of AI within the broader media ecosystem. The project sought to answer questions about what might transpire among various stakeholders in 5, 10, or 15 years. As highlighted by Nick Diakopoulos, scenarios are a valuable method for capturing a diverse range of perspectives on complex issues. While predicting the future is not the goal, understanding a variety of plausible alternatives can significantly inform current strategic thinking. Ultimately, more than 800 individuals from approximately 70 countries contributed short scenarios for analysis. The AI in Journalism Futures project subsequently used these scenarios as the foundation for a workshop, which refined the ideas outlined in its report. Diakopoulos emphasizes the importance of examining this broad set of initial scenarios, which OSF graciously provided in anonymized form. This analysis specifically explores (1) the various types of impacts identified within the scenarios, (2) the associated timeframes for these impacts (short, medium, or long-term), and (3) the global differences in focus across regions, highlighting how different parts of the world emphasized distinct types of impacts. While many additional questions could be explored in this data, such as the drivers of impacts, final outcomes, severity, stakeholders involved, or technical capabilities emphasized, this analysis focuses primarily on impacts.

Refining the Data

The initial pool of 872 scenarios underwent a rigorous process of cleaning, filtering, transformation, and verification before analysis. First, scenarios shorter than 50 words were excluded, leaving 852 scenarios for analysis. The 14 scenarios not written in English were translated using Google Sheets. To enable geographic and temporal analysis, each scenario writer's country of origin was mapped to a continent, and the free-text "timeframe" field was converted into a numerical representation in years.

Next, impacts were extracted from each scenario using an LLM (GPT-4 in this case). The prompts were refined through iteration, with a clear definition established for what constitutes an "impact." Diakopoulos defined an impact as "a significant effect, consequence, or outcome that an action, event, or other factor has in the scenario." This definition encompasses not only the ultimate state of a scenario but also intermediate outcomes.
The LLM was instructed to extract distinct impacts, with each impact represented by a one-sentence description and a short label. For instance, one impact could be described as, "The proliferation of flawed AI systems leads to a compromised information ecosystem, causing a general doubt in the reliability of all information," labeled as "Compromised Information Ecosystem." To ensure the accuracy of this extraction process, a random sample of five scenarios was manually reviewed to validate the extracted impacts against the established definition. All extracted impacts passed the checks, giving confidence to scale the analysis across the entire dataset. This process identified 3,445 impacts across the 852 scenarios.

A typology of impact types was developed from the 3,445 impact descriptions, using a novel method for qualitative thematic analysis from a Stanford University study. This approach clusters input texts, synthesizes concepts that reflect abstract connections, and produces scoring definitions to assess the relevance of each original text. For example, a concept like "AI Personalization" might be defined by the question, "Does the text discuss how AI personalizes content or enhances user engagement?" Each impact description was then scored against these concepts to tabulate occurrence frequencies.

Impacts of AI on Media Ecosystems

Through this analytical approach, 19 impact themes emerged, along with their corresponding scoring definitions. Interestingly, many scenarios articulated themes around how AI intersects with fact-checking, trust, misinformation, ethics, labor concerns, and evolving business models. Although some concepts may not be entirely distinct, this categorization offers a meaningful overview of the key ideas represented in the data.

Distribution of Impact Themes

Comparing these findings with those in the OSF report reveals some discrepancies. For instance, while the report emphasizes personalization and misinformation, these themes were less prevalent in the analyzed scenarios. Moreover, themes such as the rise of AI agents and audience fragmentation were mentioned but did not cluster significantly in the analysis. To capture potentially interesting but less prevalent impacts, the clustering was rerun with a smaller minimum cluster size. This adjustment yielded hundreds more concept themes, revealing insights into longer-tail issues. Positive visions for generative AI included reduced language barriers and increased accessibility for marginalized audiences, while concerns about societal fragmentation and privacy were also raised.

Impacts Over Time and Around the World

The analysis also explored how the impacts varied with the timeframe selected by writers and their geographic locations. A Chi-Squared test indicated that "AI Personalization" trends toward long-term implications, while both "AI Fact-Checking" and "AI and Misinformation" skew toward shorter-term issues. This suggests that scenario writers perceive misinformation impacts as imminent threats, likely reflecting ongoing developments in the media landscape. When examining the distribution of impacts by region, "AI Fact-Checking" was noted more frequently by writers from Africa and Asia, while "AI and Misinformation" was less prevalent in scenarios from African writers but more so in those from Asian contributors. This indicates a divergence in perspectives on AI's role in the media ecosystem.
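As an illustration of the extraction step described above, the sketch below shows how a scenario could be sent to an LLM along with the study's impact definition and a request for structured output. The client interface is a hypothetical stand-in; it is not the study's actual code.

```python
# Minimal sketch of LLM-based impact extraction, using the impact
# definition quoted above. `client.complete` is a hypothetical stand-in
# for a chat-completion API, not the study's actual implementation.
import json

IMPACT_DEFINITION = (
    "An impact is a significant effect, consequence, or outcome that an "
    "action, event, or other factor has in the scenario."
)

def extract_impacts(client, scenario_text: str) -> list[dict]:
    """Return a list of {"label": ..., "description": ...} impact records."""
    prompt = (
        f"{IMPACT_DEFINITION}\n\n"
        "Extract every distinct impact from the scenario below. Respond "
        'with a JSON list of objects, each with a short "label" and a '
        'one-sentence "description".\n\n'
        f"Scenario:\n{scenario_text}"
    )
    return json.loads(client.complete(prompt))
```

Validating a small manually reviewed sample before scaling, as the study did, is the step that justifies trusting output like this across thousands of scenarios.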

Read More

Enterprises are Adopting AI-powered Automation Platforms

The rapid pace of AI technological advancement is placing immense pressure on teams, often leading to disagreements due to the unrealistic expectations businesses have for the speed and agility of new technology implementation. A staggering 88% of IT professionals report that they are unable to keep up with the flood of AI-related requests within their organizations. Executives from UiPath, Salesforce, ServiceNow, and ManageEngine offer insights into how enterprises can navigate these challenges.

Leading enterprises are adopting AI-powered automation platforms that understand, automate, and manage end-to-end processes. These platforms integrate seamlessly with existing enterprise technologies, using AI to reduce friction, eliminate inefficiencies, and enable teams to achieve business goals faster, with greater accuracy and efficiency. This year's innovation drivers include tools such as Intelligent Document Processing, Communications Mining, Process and Task Mining, and Automated Testing.

"Automation is the best path to deliver on AI's potential, seamlessly integrating intelligence into daily operations, automating backend processes, upskilling employees, and revolutionizing industries," says Mark Gibbs, EMEA President, UiPath.

Jessica Constantinidis, Innovation Officer EMEA at ServiceNow, explains, "Intelligent Automation blends Robotic Process Automation (RPA), Artificial Intelligence (AI), and Machine Learning (ML) with well-defined processes to automate decision-making outcomes." "Hyperautomation provides a business-driven, disciplined approach that enterprises can use to make informed decisions quickly by analyzing process and data feedback within the organization," adds Constantinidis.

Thierry Nicault, AVP and General Manager at Salesforce Middle East, emphasizes that while companies are eager to embrace AI, the pace of change often leads to confusion and stifles innovation. He notes, "By deploying AI and Hyperintelligent Automation tools, organizations can enhance productivity, visibility, and operational transformation."

Automation is driving growth and innovation across industries. AI-powered tools are simplifying processes, improving business revenues, and contributing to economic diversification. Ramprakash Ramamoorthy, Director of AI Research at ManageEngine, highlights how Hyperintelligent Automation, powered by AI, uses tools like Natural Language Processing (NLP) and Intelligent Document Processing to detect anomalies, forecast business trends, and empower decision-making.

The IT Pushback

Despite enthusiasm for AI, IT professionals are raising concerns. A Salesforce survey revealed that 88% of IT professionals feel overwhelmed by the influx of AI-related requests, with many citing resource constraints, data security concerns, and data quality issues. Business stakeholders often have unrealistic expectations about how quickly new technologies can be implemented, creating friction. According to Constantinidis of ServiceNow, many organizations lack transparency across their business units, making it difficult to fully understand their processes; as a result, automating those processes becomes challenging. She adds, "Before full hyperautomation is possible, issues like data validation, classification, and privacy must be prioritized." Automation platforms need accurate data, and governance is crucial in managing what data is used for AI models. "You need AI skills to teach and feed the data, and you also need a data specialist to clean up your data lake," Constantinidis explains.
Gibbs from UiPath stresses that automation must be designed in collaboration with the business users who understand the processes and systems. Once deployed, a feedback loop ensures continuous improvement and refinement of automated workflows. Ramamoorthy from ManageEngine notes that adopting Hyperintelligent Automation alongside existing workflows poses challenges: enterprises must evaluate their technology stack, considering the costs, skills required, and potential benefits.

Strategic Integration of AI and Automation

To successfully implement Hyperintelligent Automation tools, enterprises need a blend of IT and business skills. Mark Gibbs of UiPath points out, "These skills ensure organizations can effectively implement, manage, and optimize hyperintelligent technologies, aligning them with organizational goals." Salesforce's Nicault adds, "Enterprises must empower both IT and business teams to embrace AI, fostering innovation while ensuring the technology delivers real value." Business skills are equally crucial, including strategic planning, process analysis, and change management. Ramamoorthy emphasizes that these competencies help identify automation opportunities and align them with business goals. According to Bassel Khachfeh, Digital Solutions Manager at Omnix, automation must be implemented with a focus on the regulatory and compliance needs specific to the industry. This approach ensures the technology supports future growth and innovation.

Transforming Customer Experiences and Business Operations

As automation evolves, it's transforming not only back-end processes but also customer experiences and decision-making at every level. Constantinidis from ServiceNow explains that hyperintelligence enables enterprises to predict outcomes and avert crises by trusting AI's data accuracy. Gibbs from UiPath adds that automation allows enterprises to unlock untapped opportunities, speeding up the transformation of manual processes and enhancing business efficiency. AI is already making an impact in areas like supply chain management, regulatory compliance, and customer-facing processes. Ramamoorthy of ManageEngine notes that AI-powered NLP is revolutionizing enterprise chatbots and document processing, enabling businesses to automate complex workflows like invoice handling and sentiment analysis. Khachfeh from Omnix highlights how Cognitive Automation platforms elevate RPA by integrating AI-driven capabilities, such as NLP and Optical Character Recognition (OCR), to further streamline operations.

Looking Ahead

Hyperintelligent Automation, driven by AI, is set to revolutionize industries by enhancing efficiency, driving innovation, and enabling smarter decision-making. Enterprises that strategically adopt these tools, by integrating IT and business expertise, prioritizing data governance, and continuously refining their automated workflows, will be best positioned to navigate the complexities of AI and achieve sustainable growth.

Read More

AI Won’t Hurt Salesforce

Marc Benioff Dismisses AI Threats, Sets Sights on a Billion AI Agents in One Year

Salesforce CEO Marc Benioff has no doubts about the transformative potential of AI for enterprise software, particularly Salesforce itself. At the core of his vision are AI agents: autonomous software bots designed to handle routine tasks, freeing up human workers to focus on more strategic priorities. "What if your workforce had no limits? That's a question we couldn't even ask over the past 25 years of Salesforce, or the 45 years I've been in software," Benioff said during an appearance on TechCrunch's Equity podcast.

The Billion-Agent Goal

Benioff revealed that Salesforce's recently launched Agentforce platform is already being adopted by "hundreds of customers" and aims to deploy a billion AI agents within a year. These agents are designed to handle tasks across industries, from enhancing customer experiences at retail brands like Gucci to assisting patients with follow-ups in healthcare. To illustrate, Benioff shared his experience with Disney's virtual Private Tour Guides: "The AI agent analyzed park flow, ride history, and preferences, then guided me to attractions I hadn't visited before," he explained.

Competition with Microsoft and the AI Landscape

While Benioff is bullish on AI, he hasn't hesitated to criticize competitors, particularly Microsoft. When Microsoft unveiled its new autonomous agents for Dynamics 365 in October, Benioff dismissed them as uninspired. "Copilot is the new Clippy," he quipped, referencing Microsoft's infamous virtual assistant from the 1990s. Benioff also cited Gartner research highlighting data security issues and administrative flaws in Microsoft's AI tools, adding, "Copilot has disappointed so many customers. It's not transforming companies."

However, industry skeptics argue that the real challenge to Salesforce isn't Microsoft but the wave of AI-powered startups disrupting traditional enterprise software. With tools like OpenAI's ChatGPT and Klarna's in-house AI assistant "Kiki," companies are starting to explore GenAI solutions that can replace legacy platforms like Salesforce altogether. For example, Klarna recently announced it was moving away from Salesforce and Workday, favoring GenAI tools that enable seamless, conversational interfaces and faster data access.

Why Salesforce Is Positioned to Win

Despite the noise, Benioff remains confident that Salesforce's extensive data infrastructure gives it a significant edge. "We manage 230 petabytes of customer data with robust security and sharing models. That's what allows AI to thrive in our ecosystem," he said. While companies may question how other platforms like OpenAI handle data, Salesforce offers an integrated approach, reducing the need for complex data migrations to other clouds, such as Microsoft Azure.

Salesforce's Own Use of AI

Benioff also highlighted Salesforce's internal adoption of Agentforce, using AI agents in its customer service operations, sales processes, and help centers. "If you're authenticated on help.salesforce.com, you're already interacting with our agent," he noted.

AI Startups: Threat or Opportunity?

As for concerns about AI startups overtaking Salesforce, Benioff sees them as acquisition opportunities rather than existential threats. "We've made over 60 acquisitions, many of them startups," he said. He pointed to Agentforce itself, which was built using technology from Airkit.ai, a startup founded by a former Salesforce employee.
Salesforce Ventures initially invested in Airkit.ai before acquiring and integrating it into the platform.

The Path Forward

Benioff is resolute in his belief that AI won't hurt Salesforce; instead, it will revolutionize how businesses operate. While skeptics warn of a seismic shift in enterprise software, Benioff's strategy is clear: lean into AI, leverage data, and stay agile through innovation and acquisitions. "We're just getting started," he concluded, reiterating his vision for a future where AI agents expand the possibilities of work and customer experience like never before.

Read More

Pioneering AI-Driven Customer Engagement

With Salesforce at the forefront of the AI revolution, Agentforce, introduced at Dreamforce, represents the next phase in customer service automation. It integrates AI and human collaboration to automate repetitive tasks, freeing human talent for more strategic activities, ultimately improving customer satisfaction. Tallapragada emphasized how this AI-powered tool enables businesses, particularly in the Middle East, to scale operations and enhance efficiency, aligning with the region’s appetite for growth and innovation.

Read More

Where LLMs Fall Short

Large Language Models (LLMs) have transformed natural language processing, showcasing exceptional abilities in text generation, translation, and various language tasks. Models like GPT-4, BERT, and T5 are based on transformer architectures, which enable them to predict the next word in a sequence by training on vast text datasets.

How LLMs Function

LLMs process input text through multiple layers of attention mechanisms, capturing complex relationships between words and phrases. Here's an overview of the process:

Tokenization and Embedding

Initially, the input text is broken down into smaller units, typically words or subwords, through tokenization. Each token is then converted into a numerical representation known as an embedding. For instance, the sentence "The cat sat on the mat" could be tokenized into ["The", "cat", "sat", "on", "the", "mat"], each assigned a unique vector.

Multi-Layer Processing

The embedded tokens are passed through multiple transformer layers, each containing self-attention mechanisms and feed-forward neural networks.

Contextual Understanding

As the input progresses through layers, the model develops a deeper understanding of the text, capturing both local and global context. This enables the model to comprehend relationships such as:

Training and Pattern Recognition

During training, LLMs are exposed to vast datasets, learning patterns related to grammar, syntax, and semantics:

Generating Responses

When generating text, the LLM predicts the next word or token based on its learned patterns. This process is iterative, with each generated token influencing the next. For example, if prompted with "The Eiffel Tower is located in," the model would likely generate "Paris," given its learned associations between these terms.

Limitations in Reasoning and Planning

Despite their capabilities, LLMs face challenges in areas like reasoning and planning. Research by Subbarao Kambhampati highlights several limitations.

Lack of Causal Understanding

LLMs struggle with causal reasoning, which is crucial for understanding how events and actions relate in the real world.

Difficulty with Multi-Step Planning

LLMs often struggle to break down tasks into a logical sequence of actions.

Blocksworld Problem

Kambhampati's research on the Blocksworld problem, which involves stacking and unstacking blocks, shows that LLMs like GPT-3 struggle with even simple planning tasks. When tested on 600 Blocksworld instances, GPT-3 solved only 12.5% of them using natural language prompts. Even after fine-tuning, the model solved only 20% of the instances, highlighting its reliance on pattern recognition rather than a true understanding of the planning task.

Performance on GPT-4

Temporal and Counterfactual Reasoning

LLMs also struggle with temporal reasoning (e.g., understanding the sequence of events) and counterfactual reasoning (e.g., constructing hypothetical scenarios).

Token and Numerical Errors

LLMs also exhibit errors in numerical reasoning due to inconsistencies in tokenization and their lack of true numerical understanding.

Tokenization and Numerical Representation

Numbers are often tokenized inconsistently. For example, "380" might be one token, while "381" might split into two tokens ("38" and "1"), leading to confusion in numerical interpretation.

Decimal Comparison Errors

LLMs can struggle with decimal comparisons. For example, comparing 9.9 and 9.11 may lead to incorrect conclusions because the model processes these numbers as sequences of tokens rather than numerically.
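The decimal failure mode above can be reproduced outside any model. The sketch below contrasts the buggy intuition of comparing fractional parts as whole numbers (roughly what pattern matching over digit tokens can yield) with a true numeric comparison; the helper is purely illustrative, not how any specific LLM works internally.

```python
# Illustration of the decimal-comparison failure mode described above.
# naive_compare mimics the buggy intuition of comparing fractional parts
# as whole numbers; it is illustrative, not how any specific LLM works.

def naive_compare(x: str, y: str) -> str:
    """Return the 'larger' value by comparing (whole, fraction) as integers."""
    xw, xf = (int(p) for p in x.split("."))
    yw, yf = (int(p) for p in y.split("."))
    return x if (xw, xf) > (yw, yf) else y

print(naive_compare("9.9", "9.11"))    # -> 9.11 (wrong: 11 > 9 as integers)
print(max("9.9", "9.11", key=float))   # -> 9.9  (correct numeric comparison)
```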
Examples of Numerical Errors

Hallucinations and Biases

Hallucinations: LLMs are prone to generating false or nonsensical content, known as hallucinations. This can happen when the model produces irrelevant or fabricated information.

Biases: LLMs can perpetuate biases present in their training data, which can lead to the generation of biased or stereotypical content.

Inconsistencies and Context Drift

LLMs often struggle to maintain consistency over long sequences of text or tasks. As the input grows, the model may prioritize more recent information, leading to contradictions or neglect of earlier context. This is particularly problematic in multi-turn conversations or tasks requiring persistence.

Conclusion

While LLMs have advanced the field of natural language processing, they still face significant challenges in reasoning, planning, and maintaining contextual accuracy. These limitations highlight the need for further research and the development of hybrid AI systems that integrate LLMs with other techniques to improve reasoning, consistency, and overall performance.

Read More

Lightning Record Picker in Salesforce

The lightning-record-picker component enhances the record selection experience in Salesforce applications, offering a more intuitive and flexible interface for users. With support for larger datasets, customizable display fields, and strong validation, it is a powerful tool for developers to incorporate into their Salesforce applications.

Read More