Generative AI - gettectonic.com - Page 3

AI’s Impact on Future Information Ecosystems

AI’s Impact on Future Information Ecosystems The proliferation of generative AI technology has ignited a renewed focus within the media industry on how to strategically adapt to its capabilities. Media professionals are now confronted with crucial questions: What are the most effective ways to leverage this technology for efficiency in news production and to enhance audience experiences? Conversely, what threats do these technological advancements pose? Is legacy media on the brink of yet another wave of disintermediation from its audiences? Additionally, how does the evolution of technology impact journalism ethics? AI’s Impact on Future Information Ecosystems. In response to these challenges, the Open Society Foundations (OSF) launched the AI in Journalism Futures project earlier this year. The first phase of this ambitious initiative involved an open call for participants to develop future-oriented scenarios that explore the potential driving forces and implications of AI within the broader media ecosystem. The project sought to answer questions about what might transpire among various stakeholders in 5, 10, or 15 years. As highlighted by Nick Diakopoulos, scenarios are a valuable method for capturing a diverse range of perspectives on complex issues. While predicting the future is not the goal, understanding a variety of plausible alternatives can significantly inform current strategic thinking. Ultimately, more than 800 individuals from approximately 70 countries contributed short scenarios for analysis. The AI in Journalism Futures project subsequently utilized these scenarios as a foundation for a workshop, which refined the ideas outlined in their report. Diakopoulos emphasizes the importance of examining this broad set of initial scenarios, which OSF graciously provided in anonymized form. This analysis specifically explores (1) the various types of impacts identified within the scenarios, (2) the associated timeframes for these impacts—whether they are short, medium, or long-term, and (3) the global differences in focus across regions, highlighting how different parts of the world emphasized distinct types of impacts. While many additional questions could be explored regarding this data—such as the drivers of impacts, final outcomes, severity, stakeholders involved, or technical capabilities emphasized—this analysis focuses primarily on impacts. Refining the Data The initial pool of 872 scenarios underwent a rigorous process of cleaning, filtering, transformation, and verification before analysis. Firstly, scenarios shorter than 50 words were excluded from consideration, resulting in 852 scenarios for analysis. Additionally, 14 scenarios that were not written in English were translated using Google Sheets. To enable geographic and temporal analysis, the country of origin for each scenario writer was mapped to their respective continents, and the free-text “timeframe” field was converted into numerical representations of years. Next, impacts were extracted from each scenario using an LLM (GPT-4 in this case). The prompts for the LLM were refined through iteration, with a clear definition established for what constitutes an “impact.” Diakopoulos defined an impact as “a significant effect, consequence, or outcome that an action, event, or other factor has in the scenario.” This definition encompasses not only the ultimate state of a scenario but also intermediate outcomes. The LLM was instructed to extract distinct impacts, with each impact represented by a one-sentence description and a short label. For instance, one impact could be described as, “The proliferation of flawed AI systems leads to a compromised information ecosystem, causing a general doubt in the reliability of all information,” labeled as “Compromised Information Ecosystem.” To ensure the accuracy of this extraction process, a random sample of five scenarios was manually reviewed to validate the extracted impacts against the established definition. All extracted impacts passed the checks, leading to confidence in scaling the analysis across the entire dataset. This process resulted in the identification of 3,445 impacts from the 852 scenarios. AI’s Impact on Future Information Ecosystems A typology of impact types was developed based on the 3,445 impact descriptions, utilizing a novel method for qualitative thematic analysis from a Stanford University study. This approach clusters input texts, synthesizes concepts that reflect abstract connections, and produces scoring definitions to assess the relevance of each original text. For example, a concept like “AI Personalization” might be defined by the question, “Does the text discuss how AI personalizes content or enhances user engagement?” Each impact description was then scored against these concepts to tabulate occurrence frequencies. Impacts of AI on Media Ecosystems Through this analytical approach, 19 impact themes emerged, along with their corresponding scoring definitions: Interestingly, many scenarios articulated themes around how AI intersects with fact-checking, trust, misinformation, ethics, labor concerns, and evolving business models. Although some concepts may not be entirely distinct, this categorization offers a meaningful overview of the key ideas represented in the data. Distribution of Impact Themes Comparing these findings with those in the OSF report reveals some discrepancies. For instance, while the report emphasizes personalization and misinformation, these themes were less prevalent in the analyzed scenarios. Moreover, themes such as the rise of AI agents and audience fragmentation were mentioned but did not cluster significantly in the analysis. To capture potentially interesting but less prevalent impacts, the clustering was rerun with a smaller minimum cluster size. This adjustment yielded hundreds more concept themes, revealing insights into longer-tail issues. Positive visions for generative AI included reduced language barriers and increased accessibility for marginalized audiences, while concerns about societal fragmentation and privacy were also raised. Impacts Over Time and Around the World The analysis also explored how the impacts varied based on the timeframe selected by writers and their geographic locations. Using a Chi-Squared test, it was determined that “AI Personalization” trends towards long-term implications, while both “AI Fact-Checking” and “AI and Misinformation” skew toward shorter-term issues. This suggests that scenario writers perceive misinformation impacts as imminent threats, likely reflecting ongoing developments in the media landscape. When examining the distribution of impacts by region, it was found that “AI Fact-Checking” was more frequently noted by writers from Africa and Asia, while “AI and Misinformation” was less prevalent in scenarios from African writers but more so in those from Asian contributors. This indicates a divergence in perspectives on AI’s role in the media ecosystem.

Read More
Snowflake Security and Development

Snowflake Security and Development

Snowflake Unveils AI Development and Enhanced Security Features At its annual Build virtual developer conference, Snowflake introduced a suite of new capabilities focused on AI development and strengthened security measures. These enhancements aim to simplify the creation of conversational AI tools, improve collaboration, and address data security challenges following a significant breach earlier this year. AI Development Updates Snowflake announced updates to its Cortex AI suite to streamline the development of conversational AI applications. These new tools focus on enabling faster, more efficient development while ensuring data integrity and trust. Highlights include: These features address enterprise demands for generative AI tools that boost productivity while maintaining governance over proprietary data. Snowflake aims to eliminate barriers to data-driven decision-making by enabling natural language queries and easy integration of structured and unstructured data into AI models. According to Christian Kleinerman, Snowflake’s EVP of Product, the goal is to reduce the time it takes for developers to build reliable, cost-effective AI applications: “We want to help customers build conversational applications for structured and unstructured data faster and more efficiently.” Security Enhancements Following a breach last May, where hackers accessed customer data via stolen login credentials, Snowflake has implemented new security features: These additions come alongside existing tools like the Horizon Catalog for data governance. Kleinerman noted that while Snowflake’s previous security measures were effective at preventing unauthorized access, the company recognizes the need to improve user adoption of these tools: “It’s on us to ensure our customers can fully leverage the security capabilities we offer. That’s why we’re adding more monitoring, insights, and recommendations.” Collaboration Features Snowflake is also enhancing collaboration through its new Internal Marketplace, which enables organizations to share data, AI tools, and applications across business units. The Native App Framework now integrates with Snowpark Container Services to simplify the distribution and monetization of analytics and AI products. AI Governance and Competitive Position Industry analysts highlight the growing importance of AI governance as enterprises increasingly adopt generative AI tools. David Menninger of ISG’s Ventana Research emphasized that Snowflake’s governance-focused features, such as LLM observability, fill a critical gap in AI tooling: “Trustworthy AI enhancements like model explainability and observability are vital as enterprises scale their use of AI.” With these updates, Snowflake continues to compete with Databricks and other vendors. Its strategy focuses on offering both API-based flexibility for developers and built-in tools for users seeking simpler solutions. By combining innovative AI development tools with robust security and collaboration features, Snowflake aims to meet the evolving needs of enterprises while positioning itself as a leader in the data platform and AI space. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more Top Ten Reasons Why Tectonic Loves the Cloud The Cloud is Good for Everyone – Why Tectonic loves the cloud You don’t need to worry about tracking licenses. Read more

Read More
healthcare Can prioritize ai governance

Healthcare Can Prioritize AI Governance

As artificial intelligence gains momentum in healthcare, it’s critical for health systems and related stakeholders to develop robust AI governance programs. AI’s potential to address challenges in administration, operations, and clinical care is drawing interest across the sector. As this technology evolves, the range of applications in healthcare will only broaden.

Read More
Pioneering AI-Driven Customer Engagement

Pioneering AI-Driven Customer Engagement

With Salesforce at the forefront of the AI revolution, Agentforce, introduced at Dreamforce, represents the next phase in customer service automation. It integrates AI and human collaboration to automate repetitive tasks, freeing human talent for more strategic activities, ultimately improving customer satisfaction. Tallapragada emphasized how this AI-powered tool enables businesses, particularly in the Middle East, to scale operations and enhance efficiency, aligning with the region’s appetite for growth and innovation.

Read More
healthcare Can prioritize ai governance

AI Data Privacy and Security

Three Key Generative AI Data Privacy and Security Concerns The rise of generative AI is reshaping the digital landscape, introducing powerful tools like ChatGPT and Microsoft Copilot into the hands of professionals, students, and casual users alike. From creating AI-generated art to summarizing complex texts, generative AI (GenAI) is transforming workflows and sparking innovation. However, for information security and privacy professionals, this rapid proliferation also brings significant challenges in data governance and protection. Below are three critical data privacy and security concerns tied to generative AI: 1. Who Owns the Data? Data ownership is a contentious issue in the age of generative AI. In the European Union, the General Data Protection Regulation (GDPR) asserts that individuals own their personal data. In contrast, data ownership laws in the United States are less clear-cut, with recent state-level regulations echoing GDPR’s principles but failing to resolve ambiguity. Generative AI often ingests vast amounts of data, much of which may not belong to the person uploading it. This creates legal risks for both users and AI model providers, especially when third-party data is involved. Cases surrounding intellectual property, such as controversies involving Slack, Reddit, and LinkedIn, highlight public resistance to having personal data used for AI training. As lawsuits in this arena emerge, prior intellectual property rulings could shape the legal landscape for generative AI. 2. What Data Can Be Derived from LLM Output? Generative AI models are designed to be helpful, but they can inadvertently expose sensitive or proprietary information submitted during training. This risk has made many wary of uploading critical data into AI models. Techniques like tokenization, anonymization, and pseudonymization can reduce these risks by obscuring sensitive data before it is fed into AI systems. However, these practices may compromise the model’s performance by limiting the quality and specificity of the training data. Advocates for GenAI stress that high-quality, accurate data is essential to achieving the best results, which adds to the complexity of balancing privacy with performance. 3. Can the Output Be Trusted? The phenomenon of “hallucinations” — when generative AI produces incorrect or fabricated information — poses another significant concern. Whether these errors stem from poor training, flawed data, or malicious intent, they raise questions about the reliability of GenAI outputs. The impact of hallucinations varies depending on the context. While some errors may cause minor inconveniences, others could have serious or even dangerous consequences, particularly in sensitive domains like healthcare or legal advisory. As generative AI continues to evolve, ensuring the accuracy and integrity of its outputs will remain a top priority. The Generative AI Data Governance Imperative Generative AI’s transformative power lies in its ability to leverage vast amounts of information. For information security, data privacy, and governance professionals, this means grappling with key questions, such as: With high stakes and no way to reverse intellectual property violations, the need for robust data governance frameworks is urgent. As society navigates this transformative era, balancing innovation with responsibility will determine whether generative AI becomes a tool for progress or a source of new challenges. While generative AI heralds a bold future, history reminds us that groundbreaking advancements often come with growing pains. It is the responsibility of stakeholders to anticipate and address these challenges to ensure a safer and more equitable AI-powered world. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more Top Ten Reasons Why Tectonic Loves the Cloud The Cloud is Good for Everyone – Why Tectonic loves the cloud You don’t need to worry about tracking licenses. Read more

Read More
Will AI Hinder Digital Transformation in Healthcare?

Poisoning Your Data

Protecting Your IP from AI Training: Poisoning Your Data As more valuable intellectual property (IP) becomes accessible online, concerns over AI vendors scraping content for training models without permission are rising. If you’re worried about AI theft and want to safeguard your assets, it’s time to consider “poisoning” your content—making it difficult or even impossible for AI systems to use it effectively. Key Principle: AI “Sees” Differently Than Humans AI processes data in ways humans don’t. While people view content based on context, AI “sees” data in raw, specific formats that can be manipulated. By subtly altering your content, you can protect it without affecting human users. Image Poisoning: Misleading AI Models For images, you can “poison” them to confuse AI models without impacting human perception. A great example of this is Nightshade, a tool designed to distort images so that they remain recognizable to humans but useless to AI models. This technique ensures your artwork or images can’t be replicated, and applying it across your visual content protects your unique style. For example, if you’re concerned about your images being stolen or reused by generative AI systems, you can embed misleading text into the image itself, which is invisible to human users but interpreted by AI as nonsensical data. This ensures that an AI model trained on your images will be unable to replicate them correctly. Text Poisoning: Adding Complexity for Crawlers Text poisoning requires more finesse, depending on the sophistication of the AI’s web crawler. Simple methods include: Invisible Text One easy method is to hide text within your page using CSS. This invisible content can be placed in sidebars, between paragraphs, or anywhere within your text: cssCopy code.content { color: black; /* Same as the background */ opacity: 0.0; /* Invisible */ display: none; /* Hidden in the DOM */ } By embedding this “poisonous” content directly in the text, AI crawlers might have difficulty distinguishing it from real content. If done correctly, AI models will ingest the irrelevant data as part of your content. JavaScript-Generated Content Another technique is to use JavaScript to dynamically alter the content, making it visible only after the page loads or based on specific conditions. This can frustrate AI crawlers that only read content after the DOM is fully loaded, as they may miss the hidden data. htmlCopy code<script> // Dynamically load content based on URL parameters or other factors </script> This method ensures that AI gets a different version of the page than human users. Honeypots for AI Crawlers Honeypots are pages designed specifically for AI crawlers, containing irrelevant or distorted data. These pages don’t affect human users but can confuse AI models by feeding them inaccurate information. For example, if your website sells cheese, you can create pages that only AI crawlers can access, full of bogus details about your cheese, thus poisoning the AI model with incorrect information. By adding these “honeypot” pages, you can mislead AI models that scrape your data, preventing them from using your IP effectively. Competitive Advantage Through Data Poisoning Data poisoning can also work to your benefit. By feeding AI models biased information about your products or services, you can shape how these models interpret your brand. For example, you could subtly insert favorable competitive comparisons into your content that only AI models can read, helping to position your products in a way that biases future AI-driven decisions. For instance, you might embed positive descriptions of your brand or products in invisible text. AI models would ingest these biases, making it more likely that they favor your brand when generating results. Using Proxies for Data Poisoning Instead of modifying your CMS, consider using a proxy server to inject poisoned data into your content dynamically. This approach allows you to identify and respond to crawlers more easily, adding a layer of protection without needing to overhaul your existing systems. A proxy can insert “poisoned” content based on the type of AI crawler requesting it, ensuring that the AI gets the distorted data without modifying your main website’s user experience. Preparing for AI in a Competitive World With the increasing use of AI for training and decision-making, businesses must think proactively about protecting their IP. In an era where AI vendors may consider all publicly available data fair game, implementing data poisoning should become a standard practice for companies concerned about protecting their content and ensuring it’s represented correctly in AI models. Businesses that take these steps will be better positioned to negotiate with AI vendors if they request data for training and will have a competitive edge if AI systems are used by consumers or businesses to make decisions about their products or services. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more Top Ten Reasons Why Tectonic Loves the Cloud The Cloud is Good for Everyone – Why Tectonic loves the cloud You don’t need to worry about tracking licenses. Read more

Read More
Road for AI Regulation

Road for AI Regulation

The concept of artificial intelligence, or synthetic minds capable of thinking and reasoning like humans, has been around for centuries. Ancient cultures often expressed ideas and pursued goals similar to AI, and in the early 20th century, science fiction brought these notions to modern audiences. Works like The Wizard of Oz and films such as Metropolis resonated globally, laying the groundwork for contemporary AI discussions.

Read More
Agentic AI Race

Agentforce Accelerator to Empower Nonprofits

Salesforce Introduces Agentforce Accelerator to Empower Nonprofits Salesforce has unveiled the Salesforce Accelerator — Agents for Impact, a groundbreaking initiative aimed at helping nonprofits harness the power of Agentforce. This suite of AI-driven tools enables organizations to build and deploy autonomous AI agents that can perform critical tasks across various functions. Through a combination of technology, funding, and expertise, the accelerator aims to empower nonprofits to enhance operational efficiency and amplify their impact in an AI-driven future. Why It Matters Nonprofits often face challenges such as staffing shortages and burnout, limiting their ability to address pressing social and environmental issues. AI agents can play a transformative role by augmenting nonprofit teams, enabling them to: While the potential is significant, developing and implementing AI solutions often remains financially and technically inaccessible for many nonprofits. How the Accelerator Works The Salesforce Accelerator — Agents for Impact bridges this gap by providing a comprehensive support package: Nonprofits from all focus areas can apply for the accelerator starting October 29, 2024, with selected organizations notified by December. Track Record of Impact The Agents for Impact initiative builds on Salesforce’s broader accelerator program, which has provided million since 2022 to support innovative nonprofit solutions in areas like AI, education, and climate action. Scaling Nonprofit Potential With the launch of Salesforce Accelerator — Agents for Impact, nonprofits now have unprecedented opportunities to adopt AI-driven solutions that enhance efficiency and scale their missions. This program reflects Salesforce’s ongoing commitment to empowering organizations to drive meaningful change in an increasingly AI-powered world. Like1 Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
ChatGPT and Politics?

ChatGPT and Politics?

ChatGPT has also appeared in influence operations, with groups using it to generate political content for social media. OpenAI observed an Iranian-led operation, Storm-2035, using ChatGPT to publish politically charged content about U.S. elections and global conflicts. Yet, OpenAI noted that these AI-driven influence efforts often lack audience engagement.

Read More
Being AI-Driven

Being AI-Driven

Imagine a company where every decision, strategy, customer interaction, and routine task is enhanced by AI. From predictive analytics uncovering market insights to intelligent automation streamlining operations, this AI-driven enterprise represents what a successful business could look like. Does this company exist? Not yet, but the building blocks for creating it are already here. To envision a day in the life of such an AI enterprise, let’s fast forward to the year 2028 and visit Tectonic 5.0, a fictional 37-year-old mid-sized company in Oklahoma that provides home maintenance services. After years of steady sales and profit growth, the 2,300-employee company has hit a rough patch. Tectonic 5.0’s revenue grew just 3% last year, and its 8% operating margin is well below the industry benchmark. To jumpstart growth, Tectonic 5.0 has expanded its product portfolio and decided to break into the more lucrative commercial real estate market. But Tectonic 5.0 needs to act fast. The firm must quickly bring its new offerings to market while boosting profitability by eliminating inefficiencies and fostering collaboration across teams. To achieve these goals, Tectonic 5.0 is relying on artificial intelligence (AI). Here’s how each department at Tectonic 5.0 is using AI to reach these objectives. Spot Inefficiencies with AI With a renewed focus on cost-cutting, Tectonic 5.0 needed to identify and eliminate inefficiencies throughout the company. To assist in this effort, the company developed a tool called Jenny, an AI agent that’s automatically invited to all meetings. Always listening and analyzing, Jenny spots problems and inefficiencies that might otherwise go unnoticed. For example, Jenny compares internal data against industry benchmarks and historical data, identifying opportunities for optimization based on patterns in spending and resource allocation. Suggestions for cost-cutting can be offered in real time during meetings or shared later in a synthesized summary. AI can also analyze how meeting time is spent, revealing if too much time is wasted on non-essential issues and suggesting ways to have more constructive meetings. It does this by comparing meeting summaries against the company’s broader objectives. Tectonic 5.0’s leaders hope that by highlighting inefficiencies and communication gaps with Jenny’s help, employees will be more inclined to take action. In fact, it has already shown considerable promise, with employees being five times more likely to consider cost-cutting measures suggested by Penny. Market More Effectively with AI With cost management underway, Tectonic 5.0’s next step in its transformation is finding new revenue sources. The company has adopted a two-pronged approach: introducing a new lineup of products and services for homeowners, including smart home technology, sustainable living solutions like solar panels, and predictive maintenance on big-ticket systems like internet-connected HVACs; and expanding into commercial real estate maintenance. Smart home technology is exactly what homeowners are looking for, but Tectonic 5.0 needs to market it to the right customers, at the right time, and in the right way. A marketing platform with built-in AI capabilities is essential for spreading the word quickly and effectively about its new products. To start, the company segments its audience using generative AI, allowing marketers to ask the system, in natural language, to identify tech-savvy homeowners between the ages of 30 and 60 who have spent a certain amount on home maintenance in the last 18 months. This enables more precise audience targeting and helps marketing teams bring products to market faster. Previously, segmentation using legacy systems could take weeks, with marketing teams relying on tech teams for an audience breakdown. Now, Tectonic 5.0 is ready to reach out to its targeted customers. Using predictive AI, it can optimize personalized marketing campaigns. For example, it can determine which customers prefer to be contacted by text, email, or phone, the best time of day to reach out, and how often. The system also identifies which messaging—focused on cost savings, environmental impact, or preventative maintenance—will resonate most with each customer. This intelligence helps Tectonic 5.0 reach the optimal customer quickly in a way that speaks to their specific needs and concerns. AI also enables marketers to monitor campaign performance for red flags like decreasing open rates or click-through rates and take appropriate action. Sell More, and Faster, with AI With interested buyers lined up, it’s now up to the sales team to close deals. Generative AI for sales, integrated into CRM, can speed up and personalize the sales process for Tectonic 5.0 in several ways. First, it can generate email copy tailored to products and services that customers are interested in. Tectonic 5.0’s sales reps can prompt AI to draft solar panel prospecting emails. To maximize effectiveness, the system pulls customer info from the CRM, uncovering which emails have performed well in the past. Second, AI speeds up data analysis. Sales reps spend a significant amount of time generating, pulling, and analyzing data. Generative AI can act like a digital assistant, uncovering patterns and relationships in CRM data almost instantaneously, guiding Tectonic 5.0’s reps toward high-value deals most likely to close. Machine learning increases the accuracy of lead scoring, predicting which customers are most likely to buy based on historical data and predictive analytics. Provide Better Customer Service with AI Tectonic 5.0’s new initiatives are progressing well. Costs are starting to decrease, and sales of its new products are growing faster than expected. However, customer service calls are rising as well. Tectonic 5.0 is committed to maintaining excellent customer service, but smart home technology presents unique challenges. It’s more complex than analog systems, and customers often need help with setup and use, raising the stakes for Tectonic 5.0’s customer service team. The company knows that customers have many choices in home maintenance providers, and one bad experience could drive them to a competitor. Tectonic 5.0’s embedded AI-powered chatbots help deliver a consistent and delightful autonomous customer service experience across channels and touchpoints. Beyond answering common questions, these chatbots can greet customers, serve up knowledge articles, and even dispatch a field technician if needed. In the field, technicians can quickly diagnose and fix problems thanks to LLMs like xGen-Small, which

Read More
AI Agents

AI Agents Interview

In the rapidly evolving world of large language models and generative AI, a new concept is gaining momentum: AI agents. AI Agents Interview explores. AI agents are advanced tools designed to handle complex tasks that traditionally required human intervention. While they may be confused with robotic process automation (RPA) bots, AI agents are much more sophisticated, leveraging generative AI technology to execute tasks autonomously. Companies like Google are positioning AI agents as virtual assistants that can drive productivity across industries. In this Q&A, Jason Gelman, Director of Product Management for Vertex AI at Google Cloud, shares insights into Google’s vision for AI agents and some of the challenges that come with this emerging technology. AI Agents Interview How does Google define AI agents? Jason Gelman: An AI agent is something that acts on your behalf. There are two key components. First, you empower the agent to act on your behalf by providing instructions and granting necessary permissions—like authentication to access systems. Second, the agent must be capable of completing tasks. This is where large language models (LLMs) come in, as they can plan out the steps to accomplish a task. What used to require human planning is now handled by the AI, including gathering information and executing various steps. What are current use cases where AI agents can thrive? Gelman: AI agents can be useful across a wide range of industries. Call centers are a common example where customers already expect AI support, and we’re seeing demand there. In healthcare, organizations like Mayo Clinic are using AI agents to sift through vast amounts of information, helping professionals navigate data more efficiently. Different industries are exploring this technology in unique ways, and it’s gaining traction across many sectors. What are some misconceptions about AI agents? Gelman: One major misconception is that the technology is more advanced than it actually is. We’re still in the early stages, building critical infrastructure like authentication and function-calling capabilities. Right now, AI agents are more like interns—they can assist, but they’re not yet fully autonomous decision-makers. While LLMs appear powerful, we’re still some time away from having AI agents that can handle everything independently. Developing the technology and building trust with users are key challenges. I often compare this to driverless cars. While they might be safer than human drivers, we still roll them out cautiously. With AI agents, the risks aren’t physical, but we still need transparency, monitoring, and debugging capabilities to ensure they operate effectively. How can enterprises balance trust in AI agents while acknowledging the technology is still evolving? Gelman: Start simple and set clear guardrails. Build an AI agent that does one task reliably, then expand from there. Once you’ve proven the technology’s capability, you can layer in additional tasks, eventually creating a network of agents that handle multiple responsibilities. Right now, most organizations are still in the proof-of-concept phase. Some companies are using AI agents for more complex tasks, but for critical areas like financial services or healthcare, humans remain in the loop to oversee decision-making. It will take time before we can fully hand over tasks to AI agents. AI Agents Interview What is the difference between Google’s AI agent and Microsoft Copilot? Gelman: Microsoft Copilot is a product designed for business users to assist with personal tasks. Google’s approach with AI agents, particularly through Vertex AI, is more focused on API-driven, developer-based solutions that can be integrated into applications. In essence, while Copilot serves as a visible assistant for users, Vertex AI operates behind the scenes, embedded within applications, offering greater flexibility and control for enterprise customers. The real potential of AI agents lies in their ability to execute a wide range of tasks at the API level, without the limitations of a low-code/no-code interface. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
Trends in AI for CRM

Trends in AI for CRM

Nearly half of customer service teams, over 40% of salespeople, and a third of marketers have fully implemented artificial intelligence (AI) to enhance their work. However, 77% of business leaders report persistent challenges related to trusted data and ethical concerns that could stall their AI initiatives, according to Salesforce research released today. The Trends in AI for CRM report analyzed data from multiple studies, revealing that companies are worried about missing out on the opportunities generative AI presents if the data powering large language models (LLMs) isn’t rooted in their own trusted customer records. At the same time, respondents expressed ongoing concerns about the lack of clear company policies governing the ethical use of AI, as well as the complexity of a vendor landscape where 80% of enterprises are currently using multiple LLMs. Salesforce’s Four Keys to Enterprise AI Success Why it matters: AI is one of the most transformative technologies in generations, with projections forecasting a net gain of over $2 trillion in new business revenues by 2028 from Salesforce and its network of partners alone. As enterprises across industries develop their AI strategies, leaders in customer-facing departments such as sales, service, and marketing are eager to leverage AI to drive internal efficiencies and revolutionize customer experiences. Key Findings from the Trends in AI for CRM Report Expert Perspective “This is a pivotal moment as business leaders across industries look to AI to unlock growth, efficiency, and customer loyalty,” said Clara Shih, CEO of Salesforce AI. “But success requires much more than an LLM. Enterprise deployments need trusted data, user access control, vector search, audit trails and citations, data masking, low-code builders, and seamless UI integration. Salesforce brings all of these components together with our Einstein 1 Platform, Data Cloud, Slack, and dozens of customizable, turnkey prompts and actions offered across our clouds.” Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more The Salesforce Story In Marc Benioff’s own words How did salesforce.com grow from a start up in a rented apartment into the world’s Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more

Read More
AI and Disability

AI and Disability

Dr. Johnathan Flowers of American University recently sparked a conversation on Bluesky regarding a statement from the organizers of NaNoWriMo, which endorsed the use of generative AI technologies, such as LLM chatbots, in this year’s event. Dr. Flowers expressed concern about the implication that AI assistance was necessary for accessibility, arguing that it could undermine the creativity and agency of individuals with disabilities. He believes that art often serves as a unique space where barriers imposed by disability can be transcended without relying on external help or engaging in forced intimacy. For Dr. Flowers, suggesting the need for AI support may inadvertently diminish the perceived capabilities of disabled and marginalized artists. Since the announcement, NaNoWriMo organizers have revised their stance in response to criticism, though much of the social media discussion has become unproductive. In earlier discussions, the author has explored the implications of generative AI in art, focusing on the human connection that art typically fosters, which AI-generated content may not fully replicate. However, they now wish to address the role of AI as a tool for accessibility. Not being personally affected by physical disability, the author approaches this topic from a social scientific perspective. They acknowledge that the views expressed are personal and not representative of any particular community or organization. Defining AI In a recent presentation, the author offered a new definition of AI, drawing from contemporary regulatory and policy discussions: AI: The application of specific forms of machine learning to perform tasks that would otherwise require human labor. This definition is intentionally broad, encompassing not just generative AI but also other machine learning applications aimed at automating tasks. AI as an Accessibility Tool AI has potential to enhance autonomy and independence for individuals with disabilities, paralleling technological advancements seen in fields like the Paris Paralympics. However, the author is keen to explore what unique benefits AI offers and what risks might arise. Benefits Risks AI and Disability The author acknowledges that this overview touches only on some key issues related to AI and disability. It is crucial for those working in machine learning to be aware of these dynamics, striving to balance benefits with potential risks and ensuring equitable access to technological advancements. Like Related Posts Salesforce OEM AppExchange Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers. Read more Salesforce Jigsaw Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for Read more Health Cloud Brings Healthcare Transformation Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series Read more Top Ten Reasons Why Tectonic Loves the Cloud The Cloud is Good for Everyone – Why Tectonic loves the cloud You don’t need to worry about tracking licenses. Read more

Read More
gettectonic.com