Google Archives - gettectonic.com - Page 8
More AI Tools to Use


Additionally, Arc’s collaboration with Perplexity elevates browsing by transforming search experiences. Perplexity functions as a personal AI research assistant, fetching and summarizing information along with sources, visuals, and follow-up questions. Premium users even have access to advanced large language models like GPT-4 and Claude. Together, Arc and Perplexity revolutionize how users navigate the web.

How to Implement AI for Business Transformation

Trust Deepens as AI Revolutionizes Content Creation

Artificial intelligence (AI) is transforming the content creation industry, sparking conversations about trust, authenticity, and the future of human creativity. As developers increasingly adopt AI tools, their trust in these technologies grows. Over 75% of developers now express confidence in AI, a trend that highlights the far-reaching potential of these advancements across industries. A study shared by Parametric Architecture underscores the expanding reliance on AI, with sectors ranging from marketing to architecture integrating these tools for tasks like design and communication. Yet the implications for trust and authenticity remain nuanced, as stakeholders grapple with ensuring AI-driven content meets ethical and quality standards.

Major players like Microsoft are capitalizing on this AI surge, offering solutions that enhance business efficiency. From automating emails to managing records, Microsoft’s tools demonstrate how AI can bridge the gap between human interaction and machine-driven processes. These advancements also intensify competition with other industry leaders, including Salesforce, as businesses seek smarter ways to streamline operations.

In marketing, AI’s influence is particularly transformative. As noted by Karla Jo Helms in MarketingProfs, platforms like Google are adapting to the proliferation of AI-generated content by implementing stricter guidelines to combat misinformation. With projections suggesting that 90% of online content could be AI-generated by 2026, marketers face the dual challenge of maintaining authenticity while leveraging automation.

Trust remains central to these efforts. According to Helms, “82% of consumers say brands must advertise on safe, accurate, and trustworthy content.” To meet these expectations, marketers must prioritize quality and transparency, aligning with Google’s emphasis on value-driven content over mass-produced AI outputs.
This focus on trustworthiness is critical to maintaining audience confidence in an increasingly automated landscape.

Beyond marketing, AI is making waves in diverse fields. In agriculture, Southern land-grant scientists are leveraging AI for precision spraying and disease detection, helping farmers reduce costs while improving efficiency. These innovations highlight how AI can drive strategic advancements even in traditional sectors.

Across industries, the interplay between AI adoption and ethical content creation poses critical questions. AI should serve as a collaborator, enhancing rather than replacing human creativity. Achieving this balance requires transparency about AI’s role, along with regulatory frameworks to ensure accountability and ethical use.

As AI takes center stage in content creation, industries must address challenges around trust and authenticity. The focus must shift from merely implementing AI to integrating it responsibly, fostering user confidence while maintaining the integrity of human narratives.

Looking ahead, the path to success lies in balancing automation’s efficiency with genuine storytelling. By emphasizing ethical practices, clear communication about AI’s contributions, and a commitment to quality, content creators can cultivate trust and establish themselves as dependable voices in an increasingly AI-driven world.

AI FOMO


Enterprise interest in artificial intelligence has surged in the past two years, with boardroom discussions centered on how to capitalize on AI advancements before competitors do. Generative AI has been a particular focus for executives since the launch of ChatGPT in November 2022, followed by other major product releases like Amazon’s Bedrock, Google’s Gemini, Meta’s Llama, and a host of SaaS tools incorporating the technology.

However, the initial rush driven by fear of missing out (FOMO) is beginning to fade. Business and tech leaders are now shifting their attention from experimentation to more practical concerns: How can AI generate revenue? This question will grow in importance as pilot AI projects move into production, raising expectations for financial returns.

Using AI to Increase Revenue

AI’s potential to drive revenue will be a critical factor in determining how quickly organizations adopt the technology and how willing they are to invest further. Here are 10 ways businesses can harness AI to boost revenue:

1. Boost Sales
AI-powered virtual assistants and chatbots can help increase sales. For example, Ikea’s generative AI tool assists customers in designing their living spaces while shopping for furniture. Similarly, jewelry insurance company BriteCo launched a GenAI chatbot that reduced chat abandonment rates, leading to more successful customer interactions and potentially higher sales. A TechTarget survey revealed that AI-powered customer-facing tools like chatbots are among the top investments for IT leaders.

2. Reduce Customer Churn
AI helps businesses retain clients, reducing revenue loss and improving customer lifetime value. By analyzing historical data, AI can profile customer attributes and identify accounts at risk of leaving. AI can then assist in personalizing customer experiences, decreasing churn and fostering loyalty.

3. Enhance Recommendation Engines
AI algorithms can analyze customer data to offer personalized product recommendations. This drives cross-selling and upselling opportunities, boosting revenue. For instance, Meta’s AI-powered recommendation engine has increased user engagement across its platforms, attracting more advertisers.

4. Accelerate Marketing Strategies
While marketing doesn’t directly generate revenue, it fuels the sales pipeline. Generative AI can quickly produce personalized content, such as newsletters and ads, tailored to customer interests. Gartner predicts that by 2025, 30% of outbound marketing messages will be AI-generated, up from less than 2% in 2022.

5. Detect Fraud
AI is instrumental in detecting fraudulent activities, helping businesses preserve revenue. Financial firms like Capital One use machine learning to detect anomalies and prevent credit card fraud, while e-commerce companies leverage AI to flag fraudulent orders.

6. Reinvent Business Processes
AI can transform entire business processes, unlocking new revenue streams. For example, Accenture’s 2024 report highlighted an insurance company that expects a 10% revenue boost after retooling its underwriting workflow with AI. In healthcare, AI could streamline revenue cycle management, speeding up reimbursement processes.

7. Develop New Products and Services
AI accelerates product development, particularly in industries like pharmaceuticals, where it assists in drug discovery. AI tools also speed up the delivery of digital products, as seen with companies like Ally Financial and ServiceNow, which have reduced software development times by 20% or more.

8. Provide Predictive Maintenance
AI-driven predictive maintenance helps prevent costly equipment downtime in industries like manufacturing and fleet management. By identifying equipment on the brink of failure, AI allows companies to schedule repairs and avoid revenue loss from operational disruptions.

9. Improve Forecasting
AI’s predictive capabilities enhance planning and forecasting. By analyzing historical and real-time data, AI can predict product demand and customer behavior, enabling businesses to optimize inventory levels and ensure product availability for ready-to-buy customers.

10. Optimize Pricing
AI can dynamically adjust prices based on factors like demand shifts and competitor pricing. Reinforcement learning algorithms allow businesses to optimize pricing in real time, ensuring they maximize revenue even as market conditions change.

Keeping ROI in Focus

While AI offers numerous ways to generate new revenue streams, it also introduces costs in development, infrastructure, and operations—some of which may not be immediately apparent. For instance, research from McKinsey & Company shows that GenAI models account for only 15% of a project’s total cost, with additional expenses related to change management and data preparation often overlooked.

To make the most of AI, organizations should prioritize use cases with a clear return on investment (ROI) and postpone those that don’t justify the expense. A focus on ROI ensures that AI deployments align with business goals and contribute to sustainable revenue growth.
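To make the pricing idea concrete, here is a minimal sketch of dynamic price adjustment. It is a hypothetical rule-based simplification, not the reinforcement learning approach production systems would use; the function name, weights, and bounds are all illustrative assumptions.

```python
# Hypothetical sketch: rule-based dynamic pricing (illustrative only).
def dynamic_price(base_price, demand_ratio, competitor_price,
                  floor=0.8, ceiling=1.25):
    """Scale price with demand, nudge toward the competitor, clamp to a band."""
    # demand_ratio > 1.0 means demand exceeds forecast, so price scales up
    price = base_price * demand_ratio
    # move 20% of the way toward the competitor's current price
    price += 0.2 * (competitor_price - price)
    # clamp to a band around the base price so adjustments stay bounded
    return round(max(base_price * floor, min(base_price * ceiling, price)), 2)

# Example: strong demand (110% of forecast) with a competitor at $105
print(dynamic_price(100, 1.1, 105))  # -> 109.0
```

A real system would replace the fixed 20% nudge with a learned policy that updates as sales data arrives, but the clamping step is a common safeguard either way.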

$15 Million to AI Training for U.S. Government Workforce


Google.org Commits $15 Million to AI Training for U.S. Government Workforce

Google.org has announced $15 million in grants to support the development of AI skills in the U.S. government workforce, aiming to promote responsible AI use across federal, state, and local levels. These grants, part of Google.org’s broader $75 million AI Opportunity Fund, include $10 million to the Partnership for Public Service and $5 million to InnovateUS.

The $10 million grant to the Partnership for Public Service will fund the establishment of the Center for Federal AI, a new hub focused on building AI expertise within the federal government. Set to open in spring 2025, the center will provide a federal AI leadership program, internships, and other initiatives designed to cultivate AI talent in the public sector. InnovateUS will use the $5 million grant to expand AI education for state and local government employees, aiming to train 100,000 workers through specialized courses, workshops, and coaching sessions.

“AI is today’s electricity—a transformative technology fundamental to the public sector and society,” said Max Stier, president and CEO of the Partnership for Public Service. “Google.org’s generous support allows us to expand our programming and launch the new Center for Federal AI, empowering agencies to harness AI to better serve the public.”

These grants underscore Google.org’s commitment to equipping government agencies with the tools and talent necessary to navigate the evolving AI landscape responsibly. With these tools in place, Tectonic looks forward to assisting you in becoming an AI-driven public sector organization.

Scope of Generative AI


Generative AI has far more to offer your site than simply mimicking a conversational ChatGPT-like experience or providing features like generating cover letters on resume sites. Let’s explore how you can integrate Generative AI with your product in diverse and innovative ways!

There are three key perspectives to consider when integrating Generative AI with your features: system scope, spatial relationship, and functional relationship. Each perspective offers a different lens for exploring integration pathways and can spark valuable conversations about melding AI with your product ecosystem. These categories aren’t mutually exclusive; instead, they overlap and provide flexible ways of envisioning AI’s role.

1. System Scope — The Reach of Generative AI in Your System
System scope refers to the breadth of integration within your system. By viewing integration from this angle, you can assess the role AI plays in managing your platform’s overall functionality. While these categories may overlap, they are useful in facilitating strategic conversations.

2. Spatial Relationships — Where AI Interacts with Features
Spatial relationships describe where AI features sit in relation to your platform’s functionality.

3. Functional Relationships — How AI Interacts with Features
Functional relationships determine how AI and platform features work together. This includes how users engage with AI and how AI content updates based on feature interactions.

By considering these different perspectives—system scope, spatial, and functional—you can drive more meaningful conversations about how Generative AI can best enhance your product’s capabilities. Each approach offers unique value, and careful thought can help teams choose the integration path that aligns with their needs and goals. Scope of Generative AI conversations with Tectonic can assist in planning the best ROI approach to AI. Contact us today.

Scaling Generative AI


Many organizations follow a hybrid approach to AI infrastructure, combining public clouds, colocation facilities, and on-prem solutions. Specialized GPU-as-a-service vendors, for instance, are becoming popular for handling high-demand AI computations, helping businesses manage costs without compromising performance. Business process outsourcing company TaskUs, for example, focuses on optimizing compute and data flows as it scales its gen AI deployments, while Cognizant advises that companies distinguish between training and inference needs, each with different latency requirements.

AI Agents and Digital Transformation

Ready for AI Agents

Brands that can effectively integrate agentic AI into their operations stand to gain a significant competitive edge. But as with any innovation, success will depend on balancing the promise of automation with the complexities of trust, privacy, and user experience.

What is Explainable AI


Building a trusted AI system starts with ensuring transparency in how decisions are made. Explainable AI is vital not only for addressing trust issues within organizations but also for navigating regulatory challenges.

According to research from Forrester, many business leaders express concerns over AI, particularly generative AI, which surged in popularity following the 2022 release of ChatGPT by OpenAI. “AI faces a trust issue,” explained Forrester analyst Brandon Purcell, underscoring the need for explainability to foster accountability. He highlighted that explainability helps stakeholders understand how AI systems generate their outputs. “Explainability builds trust,” Purcell stated at the Forrester Technology and Innovation Summit in Austin, Texas. “When employees trust AI systems, they’re more inclined to use them.”

Implementing explainable AI does more than encourage usage within an organization—it also helps mitigate regulatory risks, according to Purcell. Explainability is crucial for compliance, especially under regulations like the EU AI Act. Forrester analyst Alla Valente emphasized the importance of integrating accountability, trust, and security into AI efforts. “Don’t wait for regulators to set standards—ensure you’re already meeting them,” she advised at the summit. Purcell noted that explainable AI varies depending on whether the AI model is predictive, generative, or agentic.

Building an Explainable AI System

AI explainability encompasses several key elements, including reproducibility, observability, transparency, interpretability, and traceability. For predictive models, transparency and interpretability are paramount. Transparency involves using “glass-box modeling,” where users can see how the model analyzed the data and arrived at its predictions. This approach is likely to be a regulatory requirement, especially for high-risk applications.

Interpretability is another important technique, useful for lower-risk cases such as fraud detection or explaining loan decisions. Techniques like partial dependence plots show how specific inputs affect predictive model outcomes. “With predictive AI, explainability focuses on the model itself,” Purcell noted. “It’s one area where you can open the hood and examine how it works.”

In contrast, generative AI models are often more opaque, making explainability harder. Businesses can address this by documenting the entire system, a process known as traceability. For those using models from vendors like Google or OpenAI, tools like transparency indexes and model cards—which detail the model’s use case, limitations, and performance—are valuable resources.

Lastly, for agentic AI systems, which autonomously pursue goals, reproducibility is key. Businesses must ensure that the model’s outputs can be consistently replicated with similar inputs before deployment. These systems, like self-driving cars, will require extensive testing in controlled environments before being trusted in the real world. “Agentic systems will need to rack up millions of virtual miles before we let them loose,” Purcell concluded.
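To illustrate the partial dependence idea in miniature: the sketch below sweeps one input of a toy scoring model while averaging over the rest of the dataset. The model, feature names, and numbers are invented for illustration; real work would use a trained model and a library such as scikit-learn.

```python
# Illustrative sketch: a one-feature partial dependence curve for a
# hypothetical loan-scoring model (all values invented).
def score(income, debt_ratio):
    # toy predictive model: higher income helps, higher debt ratio hurts
    return 0.02 * income - 1.5 * debt_ratio

def partial_dependence(model, data, grid):
    """Average the model's output over the dataset while sweeping one feature."""
    curve = []
    for income in grid:
        avg = sum(model(income, row["debt_ratio"]) for row in data) / len(data)
        curve.append(round(avg, 3))
    return curve

data = [{"debt_ratio": 0.2}, {"debt_ratio": 0.4}, {"debt_ratio": 0.6}]
# Rising values show the model's average score increases with income,
# holding the rest of the data fixed.
print(partial_dependence(score, data, grid=[30, 50, 70]))  # -> [0.0, 0.4, 0.8]
```

Plotting such a curve is what lets a reviewer say "all else equal, this input pushes the prediction up," which is the interpretability property the analysts describe.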

How Skechers Solved Its Ecommerce Challenges


Skechers Boosts Direct-to-Consumer Sales with Ecommerce Platform Upgrades

Skechers, now a global brand in 2024, credits its recent ecommerce platform upgrades for saving time and increasing direct-to-consumer sales. However, it wasn’t always equipped with the right technology to support its massive growth. During Salesforce’s Dreamforce conference in San Francisco, Eric Cheng, Skechers USA Inc.’s director of ecommerce architecture, shared insights into how key technology decisions helped the brand expand and enhance its website and content capabilities.

“Today, we’re present in over 180 countries worldwide,” Cheng said, speaking on stage at the Moscone Center. Skechers’ journey began in 1992, and its expansion has taken the brand across borders, reaching millions of customers worldwide. “We connect hundreds of millions of customers through our retail stores and ecommerce platform to deliver a unique experience,” Cheng noted, emphasizing the need to meet the diverse demands of each market. Skechers ranks No. 273 in the Top 1000, Digital Commerce 360’s ranking of the largest North American e-retailers by online sales, where it is categorized as an Apparel & Accessories retailer. Digital Commerce 360 projects that Skechers will reach 0.65 million in online sales by 2024.

Ecommerce Platform Challenges

Cheng acknowledged that Skechers’ digital transformation wasn’t immediate: “The journey did not just happen overnight; it took time and effort.” Skechers faced challenges in three key areas: content management, scalability, and customer experience. The legacy system was inadequate, lacking robust tools for efficient content delivery, previewing scheduled content, and handling localization. As Cheng described, launching a marketing page often required the content team to be on standby at midnight—an unsustainable approach for 17 countries.

How Skechers Solved Its Ecommerce Challenges

To overcome these hurdles, Skechers partnered with Astound Digital. Together, they implemented Salesforce Service Cloud and Manhattan Active Omni for order management. Kyle Montgomery, senior vice president of commerce at Astound Digital, joined Cheng on stage and highlighted the goal: “Their vision was to unify, supply, and scale.” This transformation enabled Skechers to bring 17 countries in Europe, Japan, and North America onto a single platform.

Jennifer Lane, Salesforce’s director of success guides, also emphasized the flexibility achieved using Salesforce’s Page Designer and localization solutions from Salesforce’s AppExchange. Integrations with Thomson Reuters for tax, CyberSource for payments, and Salesforce Marketing Cloud for personalization further enhanced Skechers’ capabilities.

The Results

Cheng highlighted three key improvements after the ecommerce overhaul. First, content creation and localization tools improved operational efficiency by over 500%. The time to launch in new markets was dramatically reduced from five months to just a few weeks. Additionally, Skechers saw a notable sales boost, with a 24.5% increase in its direct-to-consumer segment during Q1 2023.

Skechers’ success demonstrates the significant impact of a well-executed ecommerce platform upgrade, allowing the brand to scale globally while improving customer experience and operational efficiency. Contact Tectonic to learn what Salesforce can do for you.

Google on Google AI


As a leading cloud provider, Google Cloud is also a major player in the generative AI market, and Google on Google AI offers a look at the company’s own perspective on the technology. In the past two years, Google has been in a competitive battle with AWS, Microsoft, and OpenAI for dominance in the generative AI space.

Recently, Google introduced several generative AI products, including its flagship large language model, Gemini, and the Vertex AI Model Garden. Last week, it also unveiled Audio Overview, a tool that transforms documents into audio discussions. Despite these advancements, Google has faced criticism for lagging in some areas, such as issues with its initial image generation tool and unflattering comparisons with rivals like X’s Grok. However, the company remains committed to driving progress in generative AI. Google’s strategy focuses not only on delivering its proprietary models but also on offering a broad selection of third-party models through its Model Garden.

Google’s Thoughts on Google AI

Warren Barkley, head of product for Google Cloud’s Vertex AI, GenAI, and machine learning, emphasized this approach in a recent episode of the Targeting AI podcast. He noted that a key part of Google’s ongoing effort is ensuring users can easily transition to more advanced models. “A lot of what we did in the early days, and we continue to do now, is make it easy for people to move to the next generation,” Barkley said. “The models we built 18 months ago are a shadow of what we have today. So, providing pathways for people to upgrade and stay on the cutting edge is critical.”

Google is also focused on helping users select the right AI models for specific applications. With over 100 closed and open models available in the Model Garden, evaluating them can be challenging for customers. To address this, Google introduced evaluation tools that allow users to test prompts and compare model responses.
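The prompt-comparison workflow described above can be sketched generically. This is not Google's evaluation tooling or API; `call_model` is a hypothetical stand-in (stubbed here with canned answers) for whatever provider SDK you actually use, and the scoring criterion is deliberately simplistic.

```python
# Hypothetical sketch: side-by-side prompt evaluation across models.
def call_model(model_name, prompt):
    # Stub for illustration: a real implementation would call a provider API.
    canned = {
        "model-a": "Paris is the capital of France.",
        "model-b": "The capital of France is Paris.",
    }
    return canned[model_name]

def compare(models, prompt, keyword):
    """Run one prompt against several models and score a simple criterion."""
    results = {}
    for name in models:
        answer = call_model(name, prompt)
        results[name] = {"answer": answer,
                         "mentions_keyword": keyword in answer}
    return results

report = compare(["model-a", "model-b"],
                 "What is the capital of France?", keyword="Paris")
print({name: r["mentions_keyword"] for name, r in report.items()})
```

Real evaluation suites swap the keyword check for task-specific metrics (exact match, rubric grading, human review), but the harness shape, one prompt fanned out to many models with scores collected per model, stays the same.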
In addition, Google is exploring advancements in Artificial Intelligence reasoning, which it views as crucial to driving the future of generative AI.

Fully Formatted Facts


A recent discovery by programmer and inventor Michael Calvin Wood is addressing a persistent challenge in AI: hallucinations. These false or misleading outputs, long considered an inherent flaw in large language models (LLMs), have posed a significant issue for developers. However, Wood’s breakthrough is challenging this assumption, offering a solution that could transform how AI-powered applications are built and used.

The Importance of Wood’s Discovery for Developers

Wood’s findings have substantial implications for developers working with AI. By eliminating hallucinations, developers can ensure that AI-generated content is accurate and reliable, particularly in applications where precision is critical.

Understanding the Root Cause of Hallucinations

Contrary to popular belief, hallucinations are not primarily caused by insufficient training data or biased algorithms. Wood’s research reveals that the issue stems from how LLMs process and generate information based on “noun-phrase routes.” LLMs organize information around noun phrases, and when they encounter semantically similar phrases, they may conflate or misinterpret them, leading to incorrect outputs.

The Noun-Phrase Dominance Model

Wood’s research led to the development of the Noun-Phrase Dominance Model, which posits that neural networks in LLMs self-organize around noun phrases. This model is key to understanding and eliminating hallucinations by addressing how AI processes noun-phrase conflicts.

Fully-Formatted Facts (FFF): A Solution

Wood’s solution involves transforming input data into Fully-Formatted Facts (FFF)—statements that are literally true, devoid of noun-phrase conflicts, and structured as simple, complete sentences. Presenting information in this format has led to significant improvements in AI accuracy, particularly in question-answering tasks.
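Wood has not published his algorithm, so the following is only a loose, hypothetical toy of the general idea: rewriting sentences so each one names its subject explicitly instead of relying on an ambiguous pronoun. The function name and pronoun list are invented for illustration.

```python
# Hypothetical toy (not Wood's actual method): replace ambiguous pronouns
# with an explicit noun phrase so each sentence stands alone as a fact.
PRONOUNS = {"he", "she", "it", "they"}

def to_formatted_facts(sentences, entity):
    """Rewrite each sentence so it names its subject explicitly."""
    facts = []
    for sentence in sentences:
        rewritten = [entity if word.lower() in PRONOUNS else word
                     for word in sentence.split()]
        facts.append(" ".join(rewritten))
    return facts

print(to_formatted_facts(
    ["Marie Curie won two Nobel Prizes.", "She discovered polonium."],
    entity="Marie Curie"))
# -> ['Marie Curie won two Nobel Prizes.', 'Marie Curie discovered polonium.']
```

A production pipeline would need real coreference resolution (the article mentions named-entity recognition with spaCy followed by an LLM pass), since naive word substitution breaks on possessives, quoted speech, and multiple entities.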
How FFF Processing Works

While Wood has not provided a step-by-step guide for FFF processing, he hints that the process began with named-entity recognition using the Python spaCy library and evolved into using an LLM to reduce ambiguity while retaining the original writing style. His company’s REST API offers a wrapper around GPT-4o and GPT-4o-mini models, transforming input text to remove ambiguity before processing it.

Current Methods vs. Wood’s Approach

Current approaches, like Retrieval-Augmented Generation (RAG), attempt to reduce hallucinations by adding more context. However, these methods often introduce additional noun-phrase conflicts. For instance, even with RAG, ChatGPT-3.5 Turbo experienced a 23% hallucination rate when answering questions about Wikipedia articles. In contrast, Wood’s method focuses on eliminating noun-phrase conflicts entirely.

Results: RAG FF (Retrieval-Augmented Generation with Formatted Facts)

Wood’s method has shown remarkable results, eliminating hallucinations in GPT-4 and GPT-3.5 Turbo during question-answering tasks using third-party datasets. This transformation eliminates hallucinations by removing the potential noun-phrase conflict.

Conclusion: A New Era of Reliable AI

Wood’s discovery represents a significant leap forward in the pursuit of reliable AI. By aligning input data with how LLMs process information, he has unlocked the potential for accurate, trustworthy AI systems. As this technology continues to evolve, it could have profound implications for industries ranging from healthcare to legal services, where AI could become a consistent and reliable tool.
While there is still work to be done in extending this method across all AI tasks, the foundation has been laid for a revolution in AI accuracy. Future developments will likely focus on refining and expanding these capabilities, enabling AI to serve as a trusted resource across a range of applications.

Experience RAGFix

For those looking to explore this technology, RAGFix offers an implementation of these concepts. Visit the official website to access demos, explore REST API integration options, and stay updated on the latest advancements in hallucination-free AI: Visit RAGFix.ai

LLMs and AI

Large Language Models (LLMs): Revolutionizing AI and Custom Solutions

Large Language Models (LLMs) are transforming artificial intelligence by enabling machines to generate and comprehend human-like text, making them indispensable across numerous industries. The global LLM market is experiencing explosive growth, projected to rise from $1.59 billion in 2023 to $259.8 billion by 2030. This surge is driven by increasing demand for automated content creation, advances in AI technology, and the need for improved human-machine communication.

Several factors are propelling this growth, including advancements in AI and natural language processing (NLP), the availability of large datasets, and the rising importance of seamless human-machine interaction. Additionally, private LLMs are gaining traction as businesses seek more control over their data and greater customization. These private models provide tailored solutions, reduce dependency on third-party providers, and enhance data privacy. This guide walks you through building your own private LLM, offering valuable insights for both newcomers and seasoned professionals.

What are Large Language Models?

Large Language Models (LLMs) are advanced AI systems that generate human-like text by processing vast amounts of data using sophisticated neural networks, such as transformers. These models excel at tasks such as content creation, language translation, question answering, and conversation, making them valuable across industries, from customer service to data analysis.

LLMs learn language rules by analyzing vast text datasets, much as reading numerous books helps someone understand a language. Once trained, these models can generate content, answer questions, and engage in meaningful conversations.
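The idea of learning language patterns from text can be shown at toy scale with a bigram model: count which word follows which in a corpus, then generate by repeatedly picking the most frequent successor. Real LLMs learn billions of neural-network parameters over subword tokens rather than word-pair counts, so this is only a sketch of the statistical principle, not of how LLMs actually work.

```python
from collections import Counter, defaultdict

# Toy "language model": learn which word tends to follow which from a
# tiny corpus, then generate text by always choosing the most frequent
# successor. LLMs model text statistics with neural networks at a
# vastly larger scale, but the underlying idea is similar.

corpus = (
    "the crew boarded the ship and the ship left the station "
    "the crew explored the station"
)

words = corpus.split()
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1      # count each observed word pair

def generate(start, length):
    out = [start]
    for _ in range(length):
        if out[-1] not in follows:
            break                 # dead end: no observed successor
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 3))
```

Every word the toy model emits was observed following its predecessor in the corpus, which is why such models can only recombine patterns they have seen.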
For example, an LLM can write a story about a space mission based on knowledge gained from reading space adventure stories, or it can explain photosynthesis using information drawn from biology texts.

Building a Private LLM

Data Curation for LLMs

Recent LLMs, such as Llama 3 and GPT-4, are trained on massive datasets: Llama 3 on 15 trillion tokens and GPT-4 on a reported 6.5 trillion tokens. These datasets are drawn from diverse sources, including social media (140 trillion tokens), academic texts, and private data, with sizes ranging from hundreds of terabytes to multiple petabytes. This breadth of training enables LLMs to develop a deep understanding of language, covering diverse patterns, vocabularies, and contexts.

Data Preprocessing

After data collection, the data must be cleaned and structured before training.

LLM Training Loop

The model is then trained iteratively, typically progressing from large-scale pretraining to task-specific fine-tuning.

Evaluating Your LLM

After training, it is crucial to assess the LLM's performance using industry-standard benchmarks. When fine-tuning LLMs for specific applications, tailor your evaluation metrics to the task. For instance, in healthcare, matching disease descriptions with appropriate codes may be a top priority.

Conclusion

Building a private LLM provides unmatched customization, enhanced data privacy, and optimized performance. From data curation to model evaluation, this guide has outlined the essential steps to create an LLM tailored to your specific needs. Whether you are just starting or seeking to refine your skills, building a private LLM can empower your organization with state-of-the-art AI capabilities. For expert guidance or to kickstart your LLM journey, feel free to contact us for a free consultation.

Generative AI Energy Consumption Rises

Generative AI Energy Consumption Rises, but Impact on ROI Unclear

The energy costs associated with generative AI (GenAI) are often overlooked in enterprise financial planning. However, industry experts suggest that IT leaders should account for the power consumption that comes with adopting this technology. When building a business case for generative AI, some costs are evident, like large language model (LLM) fees and SaaS subscriptions. Other costs, such as preparing data, upgrading cloud infrastructure, and managing organizational change, are less visible but significant.

One often overlooked cost is the energy consumption of generative AI itself. Training LLMs and responding to user requests, whether answering questions or generating images, demands considerable computing power. These tasks generate heat and require sophisticated cooling systems in data centers, which in turn consume additional energy.

Despite this, most enterprises have not focused on the energy requirements of GenAI. The issue is gaining attention at a broader level, however. The International Energy Agency (IEA) has forecast that electricity consumption from data centers, AI, and cryptocurrency could double by 2026. By that time, data centers' electricity use could exceed 1,000 terawatt-hours, roughly equivalent to Japan's total electricity consumption. Goldman Sachs has also flagged the growing energy demand, attributing it partly to AI; the firm projects that global data center electricity use could more than double by 2030, fueled by AI and other factors.

ROI Implications of Energy Costs

The extent to which rising energy consumption will affect GenAI's return on investment (ROI) remains unclear. For now, the perceived benefits of GenAI seem to outweigh concerns about energy costs, and most businesses have not been directly affected, as these costs fall mainly on hyperscalers.
For instance, Google reported a 13% increase in greenhouse gas emissions in 2023, largely due to AI-related energy demands in its data centers. Scott Likens, PwC's global chief AI engineering officer, noted that while energy consumption is not a barrier to adoption, it should still be factored into long-term strategies. "You don't take it for granted. There's a cost somewhere for the enterprise," he said.

Energy Costs: Hidden but Present

Although energy expenses may not appear on an enterprise's invoice, they are still present. Generative AI's energy consumption is tied to both model training and inference: each time a user makes a query, the system expends energy to generate a response. While the energy used for an individual query is minor, the cumulative effect across millions of users adds up.

How these costs are passed on to customers is somewhat opaque. Licensing fees for enterprise versions of GenAI products likely include energy costs, spread across the user base. According to PwC's Likens, the costs associated with training models are shared among many users, reducing the burden on individual enterprises. On the inference side, GenAI vendors charge for tokens, which correspond to computational power. Although increased token usage signals higher energy consumption, the financial impact on enterprises has so far been minimal, especially as token prices have fallen. The dynamic resembles buying an EV to save on gas, only to spend hundreds of dollars and lose hours at charging stations.

Energy as an Indirect Concern

While energy costs have not been top of mind for GenAI adopters, organizations may address them indirectly by focusing on other deployment challenges, such as reducing latency and improving cost efficiency. Newer models, such as OpenAI's GPT-4o mini, are more economical and have helped organizations scale GenAI without prohibitive costs. Organizations may also use smaller, fine-tuned models to decrease latency and energy consumption.
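Because vendors charge per token rather than per joule, the energy behind a deployment is easy to lose sight of. A back-of-envelope estimate makes it visible; note that the per-token energy figure and electricity price below are illustrative assumptions, not measured values, and published estimates vary by orders of magnitude across models and hardware.

```python
# Back-of-envelope inference energy estimate. Both constants are
# illustrative assumptions for the sketch, not measured figures.

JOULES_PER_TOKEN = 0.5      # assumed energy per generated token
PRICE_PER_KWH_USD = 0.12    # assumed electricity price

def inference_energy_cost(queries, tokens_per_query):
    joules = queries * tokens_per_query * JOULES_PER_TOKEN
    kwh = joules / 3.6e6    # 1 kWh = 3.6 million joules
    return kwh, kwh * PRICE_PER_KWH_USD

# A single query is negligible, but a million queries a day adds up:
kwh, usd = inference_energy_cost(queries=1_000_000, tokens_per_query=500)
print(f"{kwh:.0f} kWh, ${usd:.2f}")
```

Under these assumptions a million 500-token responses consume roughly 69 kWh, which illustrates the article's point: the per-query cost is tiny, and the aggregate cost lands on whoever runs the data center.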
By adopting multimodel approaches, enterprises can choose models based on the complexity of a task, optimizing for both speed and energy efficiency.

The Data Center Dilemma

As enterprises weigh GenAI's energy demands, data centers face the challenge head-on, investing in more sophisticated cooling systems to handle the heat generated by AI workloads. According to the Dell'Oro Group, the data center physical infrastructure market grew in the second quarter of 2024, signaling the start of an "AI growth cycle" for infrastructure sales, particularly thermal management systems. Liquid cooling, more efficient than air cooling, is gaining traction as a way to manage the heat from high-performance computing, and it is expected to see rapid growth in the coming years as demand for AI workloads continues to increase.

Nuclear Power and AI Energy Demands

To meet AI's growing energy demands, some hyperscalers are exploring nuclear energy for their data centers. AWS, Google, and Microsoft are among the companies considering this option, with AWS acquiring a nuclear-powered data center campus earlier this year. Nuclear power could help these tech giants keep pace with AI's energy requirements while also meeting sustainability goals, though tying AI's growth to a build-out of nuclear power plants may cost the technology some public goodwill.

As GenAI continues to evolve, both energy costs and efficiency are likely to play a greater role in decision-making. PwC has already begun including carbon impact in its GenAI value framework, which assesses the full scope of generative AI deployments. "The cost of carbon is in there, so we shouldn't ignore it," Likens said.

Recent advancements in AI

Recent advancements in AI have been propelled by large language models (LLMs) containing billions to trillions of parameters. Parameters, the variables learned while training and fine-tuning machine learning models, have played a key role in the development of generative AI. As parameter counts have grown, models like ChatGPT have become able to generate human-like content that was unimaginable just a few years ago. Parameters are sometimes referred to as "features" or "feature counts."

While it is tempting to equate the power of an AI model with its parameter count, much as we think of horsepower in cars, more parameters are not always better. Additional parameters bring computational overhead and can cause problems like overfitting. There are various ways to increase the number of parameters in AI models, but not all approaches yield the same improvements. For example, Google's Switch Transformers scaled to trillions of parameters, yet some of Google's smaller models outperformed them in certain use cases. Other metrics should therefore be considered when evaluating AI models.

The exact relationship between parameter count and intelligence is still debated. John Blankenbaker, principal data scientist at SSA & Company, notes that larger models tend to replicate their training data more accurately, but the belief that more parameters inherently lead to greater intelligence is often wishful thinking. He points out that while these models may sound knowledgeable, they do not actually possess true understanding.

One challenge is the misunderstanding of what a parameter is. It is not a word, feature, or unit of data but a component of the model's computation. Each parameter adjusts how the model processes inputs, much like a knob in a complex machine. Unlike parameters in simpler models such as linear regression, which have a clear interpretation, parameters in LLMs are opaque and offer no insight on their own.
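The contrast with interpretable parameters can be made concrete. A two-parameter linear regression, fit by closed-form least squares, yields a slope and intercept you can read off and explain directly, which no individual LLM weight allows. The toy data below is chosen so the fit is exact.

```python
# A linear regression's parameters are directly interpretable: the slope
# and intercept below have obvious meanings, unlike any single weight
# among the billions in an LLM. Closed-form least squares on toy data:

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept

# The data satisfies y = 2x + 1 exactly, so both parameters are
# recovered and fully interpretable:
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # 2.0 1.0
```

Here "slope = 2.0" is a human-readable statement about the data; an LLM's parameters admit no such reading, which is Blankenbaker's point about opacity.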
Christine Livingston, managing director at Protiviti, explains that parameters act as weights that give the model flexibility. However, more parameters can lead to overfitting, where the model performs well on training data but struggles with new information. Adnan Masood, chief AI architect at UST, highlights that parameters influence precision, accuracy, and data management needs. Because of the sheer size of LLMs, however, it is impractical to reason about individual parameters. Instead, developers assess models based on their intended purpose, performance metrics, and ethical considerations, and understanding the data sources and preprocessing steps becomes critical to evaluating a model's transparency.

It is important to differentiate between parameters, tokens, and words. A parameter is not a word; it is a value learned during training. Tokens are fragments of words; LLMs are trained on these tokens, which are converted into the embeddings the model operates on.

The number of parameters influences a model's complexity and capacity to learn. More parameters often lead to better performance, but they also increase computational demands. Larger models can be harder to train and operate, leading to slower response times and higher costs. In some cases, smaller models are preferred for domain-specific tasks because they generalize better and are easier to fine-tune. Transformer-based models like GPT-4 dwarf previous generations in parameter count, but for edge applications where resources are limited, smaller models are preferred because they are more adaptable and efficient. Fine-tuning large models for specific domains remains a challenge, often requiring extensive oversight to avoid problems like overfitting. There is also growing recognition that parameter count alone is a poor measure of a model's performance.
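The distinction between parameters and tokens becomes concrete when you count the learned values in a toy network. The layer sizes below are arbitrary illustrations, not the dimensions of any real model: even this small configuration carries tens of millions of parameters, while a token is merely an index into the embedding table.

```python
# Parameters are learned values inside the model, not words or tokens.
# Counting them for a toy embedding-plus-two-layer network makes the
# distinction concrete. Sizes are arbitrary illustrative choices.

def linear_params(n_in, n_out):
    # A dense layer learns a weight matrix plus one bias per output.
    return n_in * n_out + n_out

vocab_size, embed_dim, hidden_dim = 50_000, 512, 2048

embedding = vocab_size * embed_dim          # one vector per token id
layer1 = linear_params(embed_dim, hidden_dim)
layer2 = linear_params(hidden_dim, embed_dim)
total = embedding + layer1 + layer2

print(f"{total:,} parameters")
```

A 500-word prompt contributes a few hundred tokens at inference time; the 27-million-odd parameters above are fixed after training and shared across every query, which is why parameter count drives memory and compute cost rather than per-request data volume.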
Alternatives like Stanford's HELM and benchmarks such as GLUE and SuperGLUE assess models across multiple factors, including fairness, efficiency, and bias.

Three trends are shaping how we think about parameters. First, AI developers are improving model performance without necessarily increasing parameter counts: a study of 231 models released between 2012 and 2023 found that the computational power required for LLMs has halved every eight months, outpacing Moore's Law. Second, new neural network approaches such as Kolmogorov-Arnold Networks (KANs) show promise, achieving results comparable to traditional models with far fewer parameters. Third, agentic AI frameworks like Salesforce's Agentforce offer an architecture in which domain-specific AI agents can outperform larger general-purpose models.

As AI continues to evolve, parameter count remains an important consideration, but it is just one of many factors in evaluating a model's overall capabilities. To stay on the cutting edge of artificial intelligence, contact Tectonic today.

NYT Issues Cease-and-Desist Letter to Perplexity AI

NYT Issues Cease-and-Desist Letter to Perplexity AI Over Alleged Unauthorized Content Use

The New York Times (NYT) has issued a cease-and-desist letter to Perplexity AI, accusing the AI-powered search startup of using its content without permission. This marks the second time the NYT has confronted a company for allegedly misappropriating its material. According to reports, the Times claims Perplexity is accessing and using its content to generate summaries and other outputs, actions it argues infringe its copyrights. The startup has two weeks to respond to the accusations.

A Growing Pattern of Tensions

Perplexity AI is not the only company facing publisher scrutiny. In June, Forbes threatened legal action against the company, alleging "willful infringement" through use of its text and images. In response, Perplexity launched the Perplexity Publishers' Program, a revenue-sharing initiative that partners with publishers such as Time, Fortune, and The Texas Tribune. Meanwhile, the NYT remains entangled in a separate lawsuit against OpenAI and its partner Microsoft over alleged misuse of its content.

A Strategic Legal Approach

The NYT's decision to issue a cease-and-desist letter instead of pursuing an immediate lawsuit signals a calculated move. "Cease-and-desist approaches are less confrontational, less expensive, and faster," said Sarah Kreps, a professor at Cornell University. The approach also opens the door to negotiation, a pragmatic step given the uncharted legal terrain surrounding generative AI and copyright law. Michael Bennett, a responsible AI expert at Northeastern University, echoed this view, suggesting that the cease-and-desist approach positions the Times to protect its intellectual property while maintaining leverage in its ongoing legal battles. If the NYT wins its case against OpenAI, Bennett added, it could compel companies like Perplexity to enter financial agreements for content use.
However, if the case does not favor the NYT, the publisher risks losing that leverage. The letter also serves as a warning to other AI vendors, signaling the NYT's determination to safeguard its intellectual property.

Perplexity's Defense: Facts vs. Expression

Perplexity AI has countered the NYT's claims by asserting that its methods comply with copyright law. "We aren't scraping data for building foundation models but rather indexing web pages and surfacing factual content as citations," the company stated. It emphasized that facts themselves cannot be copyrighted, drawing parallels to how search engines like Google operate. Kreps noted that Perplexity's approach aligns closely with that of other AI platforms, which typically index pages to provide factual answers while citing sources. "If Perplexity is culpable, then the entire AI industry could be held accountable," she said, contrasting Perplexity's citation-based model with platforms like ChatGPT, which often lack transparency about data sources.

The Crux of the Copyright Argument

The NYT's cease-and-desist letter centers on the distinction between facts and the creative expression of facts. While raw facts are not protected by copyright, the NYT claims that its specific interpretation and presentation of those facts are. Vincent Allen, an intellectual property attorney, explained that if Perplexity is scraping data and summarizing articles, it may be making unauthorized copies of copyrighted content, which would strengthen the NYT's claims. "This is a big deal for content providers," Allen said, "as they want to ensure they're compensated for their work."

Implications for the AI Industry

The outcome of this dispute could set a precedent for how AI platforms handle publisher content. If Perplexity's practices are deemed infringing, it could reshape the operational models of similar AI vendors.
At the heart of the debate is the balance between fostering innovation in AI and protecting intellectual property, a challenge that will likely shape the future of generative AI and its relationship with content creators.
