Replicate Archives - gettectonic.com
Is Your LLM Agent Enterprise-Ready?


Customer Relationship Management (CRM) systems are the backbone of modern business operations, orchestrating customer interactions, data management, and process automation. As businesses embrace advanced AI, the potential for transformative growth is clear—automating workflows, personalizing customer experiences, and enhancing operational efficiency. However, deploying large language model (LLM) agents in CRM systems demands rigorous, real-world evaluations to ensure they meet the complexity and dynamic needs of professional environments.

Salesforce adds Testing Center to Agentforce for AI agents


Salesforce Unveils Agentforce Testing Center to Streamline AI Agent Lifecycle Management

Salesforce has introduced the Agentforce Testing Center, a suite of tools designed to help enterprises test, deploy, and monitor autonomous AI agents in a secure and controlled environment. These innovations aim to support businesses adopting agentic AI, a transformative approach that enables intelligent systems to reason, act, and execute tasks on behalf of employees and customers.

Agentforce Testing Center: A New Paradigm for AI Agent Deployment

The Agentforce Testing Center offers several key capabilities that help businesses confidently deploy AI agents without risking disruptions to live production systems.

Supporting a Limitless Workforce

Adam Evans, EVP and GM for Salesforce AI Platform, emphasized the importance of these tools in accelerating the adoption of AI agents: "Agentforce is helping businesses create a limitless workforce. To deliver this value fast, CIOs need new tools for testing and monitoring agentic systems. Salesforce is meeting the moment with Agentforce Testing Center, enabling companies to roll out trusted AI agents with no-code tools for testing, deploying, and monitoring in a secure, repeatable way."

From Testing to Deployment

Once testing is complete, enterprises can seamlessly deploy their AI agents to production using Salesforce's proprietary tools such as Change Sets, DevOps Center, and the Salesforce CLI. Additionally, the Digital Wallet feature offers transparent usage monitoring, allowing teams to track consumption and optimize resources throughout the AI development lifecycle.
Customer and Analyst Perspectives

Shree Reddy, CIO of PenFed, praised the potential of Agentforce and Data Cloud Sandboxes: "By enabling rigorous pre-deployment testing, we can deliver faster, more accurate support and recommendations to our members, aligning with our commitment to financial well-being."

Keith Kirkpatrick, Research Director at The Futurum Group, highlighted the broader implications: "Salesforce is instilling confidence in AI adoption by testing hundreds of variations of agent interactions in parallel. These enhancements make it easier for businesses to pressure-test autonomous systems and ensure reliability."

Availability

With these tools, Salesforce solidifies its leadership in the agentic AI space, empowering enterprises to adopt AI systems with confidence and transform their operations at scale.

RAGate


RAGate: Revolutionizing Conversational AI with Adaptive Retrieval-Augmented Generation

Building Conversational AI systems is challenging. It's feasible, but it's also complex, resource-intensive, and time-consuming. The difficulty lies in creating systems that can not only understand and generate human-like responses but also adapt effectively to conversational nuances, ensuring meaningful engagement with users.

Retrieval-Augmented Generation (RAG) has already transformed Conversational AI by combining the internal knowledge of large language models (LLMs) with external knowledge sources. By leveraging RAG with business data, organizations empower their customers to ask natural language questions and receive insightful, data-driven answers.

The challenge? Not every query requires external knowledge. Over-reliance on external sources can disrupt conversational flow, much like consulting a book for every question during a conversation, even when internal knowledge is sufficient. Worse, if no external knowledge is available, the system may respond with "I don't know," despite having relevant internal knowledge to answer.

The solution? RAGate, an adaptive mechanism that dynamically determines when to use external knowledge and when to rely on internal insights. Developed by Xi Wang, Procheta Sen, Ruizhe Li, and Emine Yilmaz and introduced in their July 2024 paper on Adaptive Retrieval-Augmented Generation for Conversational Systems, RAGate addresses this balance with precision.

What Is Conversational AI?

At its core, conversation involves exchanging thoughts, emotions, and information, guided by tone, context, and subtle cues. Humans excel at this due to emotional intelligence, socialization, and cultural exposure. Conversational AI aims to replicate these human-like interactions by leveraging technology to generate natural, contextually appropriate, and engaging responses.
These systems adapt fluidly to user inputs, making the interaction dynamic, like conversing with a human.

Internal vs. External Knowledge in AI Systems

To understand RAGate's value, we need to differentiate between two key concepts: a model's internal knowledge, acquired during training, and external knowledge, retrieved from outside sources at query time.

Limitations of Traditional RAG Systems

RAG integrates LLMs' natural language capabilities with external knowledge retrieval, often guided by "guardrails" to ensure responsible, domain-specific responses. However, strict reliance on external knowledge can disrupt conversational flow, add retrieval latency, and produce "I don't know" responses when no external source is available.

How RAGate Enhances Conversational AI

RAGate, or Retrieval-Augmented Generation Gate, adapts dynamically to determine when external knowledge retrieval is necessary. It enhances response quality by intelligently balancing internal and external knowledge, ensuring conversational relevance and efficiency.

Traditional RAG vs. RAGate: An Example

Scenario: A healthcare chatbot offers advice based on general wellness principles and up-to-date medical research. A traditional RAG system consults external sources for every query; RAGate retrieves only when the question actually requires current research. This adaptive approach improves response accuracy, reduces latency, and enhances the overall conversational experience.

RAGate Variants

RAGate offers three implementation methods, each tailored to optimize performance:

Variant       | Approach                                                                                      | Key Feature
RAGate-Prompt | Uses natural language prompts to decide when external augmentation is needed.                 | Lightweight and simple to implement.
RAGate-PEFT   | Employs parameter-efficient fine-tuning (e.g., QLoRA) for better decision-making.             | Fine-tunes the model with minimal resource requirements.
RAGate-MHA    | Leverages multi-head attention to interactively assess context and retrieve external knowledge. | Optimized for complex conversational scenarios.

Key Takeaways

RAGate represents a breakthrough in Conversational AI, delivering adaptive, contextually relevant, and efficient responses by balancing internal and external knowledge.
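The gating idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the word-overlap confidence score, the `known` knowledge dictionary, and the `retriever` stand-in are all hypothetical toys standing in for a trained gate and a real retrieval pipeline.

```python
# Minimal sketch of an adaptive RAG "gate": consult external knowledge
# only when internal confidence is low. The confidence scorer and the
# retriever below are toy stand-ins, not the RAGate models.

def internal_confidence(query: str, known_topics: dict) -> float:
    """Toy confidence score: fraction of query words the model 'knows'."""
    words = query.lower().split()
    hits = sum(1 for w in words if w in known_topics)
    return hits / max(len(words), 1)

def answer(query: str, known_topics: dict, retrieve, threshold: float = 0.5) -> str:
    """Gate between internal knowledge and external retrieval."""
    if internal_confidence(query, known_topics) >= threshold:
        # High confidence: answer from internal knowledge, skip retrieval.
        words = [w for w in query.lower().split() if w in known_topics]
        return " ".join(known_topics[w] for w in words)
    # Low confidence: augment the response with retrieved context.
    context = retrieve(query)
    return f"[using retrieved context: {context}]"

# Toy internal knowledge and external retriever.
known = {"sleep": "Adults need 7-9 hours of sleep.", "hydration": "Drink water regularly."}
retriever = lambda q: "latest clinical guidance"

print(answer("sleep hydration", known, retriever))          # answered internally
print(answer("newest statin research?", known, retriever))  # falls back to retrieval
```

In the real RAGate variants, the threshold check is replaced by a learned decision (a prompt, a fine-tuned head, or a multi-head attention module), but the control flow is the same: decide first, retrieve only if needed.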
Its potential spans industries like healthcare, education, finance, and customer support, enhancing decision-making and user engagement. By intelligently combining retrieval-augmented generation with nuanced adaptability, RAGate is set to redefine the way businesses and individuals interact with AI.


Poisoning Your Data

Protecting Your IP from AI Training: Poisoning Your Data

As more valuable intellectual property (IP) becomes accessible online, concerns over AI vendors scraping content for training models without permission are rising. If you're worried about AI theft and want to safeguard your assets, it's time to consider "poisoning" your content, making it difficult or even impossible for AI systems to use it effectively.

Key Principle: AI "Sees" Differently Than Humans

AI processes data in ways humans don't. While people view content based on context, AI "sees" data in raw, specific formats that can be manipulated. By subtly altering your content, you can protect it without affecting human users.

Image Poisoning: Misleading AI Models

For images, you can "poison" them to confuse AI models without impacting human perception. A good example of this is Nightshade, a tool designed to distort images so that they remain recognizable to humans but useless to AI models. This technique helps ensure your artwork can't be replicated, and applying it across your visual content protects your unique style. For example, if you're concerned about your images being stolen or reused by generative AI systems, you can embed misleading data in the image itself, invisible to human viewers but interpreted by AI as nonsensical. A model trained on such images will be unable to replicate them correctly.

Text Poisoning: Adding Complexity for Crawlers

Text poisoning requires more finesse, depending on the sophistication of the AI's web crawler.

Invisible Text

One easy method is to hide text within your page using CSS.
This invisible content can be placed in sidebars, between paragraphs, or anywhere within your text:

```css
/* Any one of these declarations hides the "poisoned" text from human
   readers while leaving it in the page source for crawlers to ingest. */
.content {
  color: black;    /* match the background color */
  opacity: 0.0;    /* fully transparent */
  display: none;   /* not rendered, but still present in the markup */
}
```

By embedding this "poisonous" content directly in the text, AI crawlers may have difficulty distinguishing it from real content. If done correctly, AI models will ingest the irrelevant data as part of your content.

JavaScript-Generated Content

Another technique is to use JavaScript to dynamically alter the content, making it visible only after the page loads or based on specific conditions. This can frustrate AI crawlers that read the static page source without executing scripts, since the content they ingest differs from what human users ultimately see.

```html
<script>
  // Dynamically load or rewrite content based on URL parameters
  // or other factors after the page loads.
</script>
```

This method ensures that AI gets a different version of the page than human users.

Honeypots for AI Crawlers

Honeypots are pages designed specifically for AI crawlers, containing irrelevant or distorted data. These pages don't affect human users but can confuse AI models by feeding them inaccurate information. For example, if your website sells cheese, you can create pages that only AI crawlers will access, full of bogus details about your cheese, thus poisoning the AI model with incorrect information. By adding these "honeypot" pages, you can mislead AI models that scrape your data, preventing them from using your IP effectively.

Competitive Advantage Through Data Poisoning

Data poisoning can also work to your benefit. By feeding AI models biased information about your products or services, you can shape how these models interpret your brand. For example, you could subtly insert favorable competitive comparisons into your content that only AI models can read, helping to position your products in a way that biases future AI-driven decisions.
For instance, you might embed positive descriptions of your brand or products in invisible text. AI models would ingest these biases, making it more likely that they favor your brand when generating results.

Using Proxies for Data Poisoning

Instead of modifying your CMS, consider using a proxy server to inject poisoned data into your content dynamically. This approach allows you to identify and respond to crawlers more easily, adding a layer of protection without needing to overhaul your existing systems. A proxy can insert "poisoned" content based on the type of AI crawler requesting it, ensuring that the AI gets the distorted data without modifying your main website's user experience.

Preparing for AI in a Competitive World

With the increasing use of AI for training and decision-making, businesses must think proactively about protecting their IP. In an era where AI vendors may consider all publicly available data fair game, data poisoning may become standard practice for companies concerned about protecting their content and ensuring it's represented correctly in AI models. Businesses that take these steps will be better positioned to negotiate with AI vendors that request data for training, and will have a competitive edge if consumers or businesses use AI systems to make decisions about their products or services.
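The proxy approach described above amounts to a content switch keyed on the request's User-Agent header. Here is an illustrative sketch only: the signature list is a made-up example (though tokens like "GPTBot" and "CCBot" are real crawler identifiers), the page bodies are toys, and a production proxy would need far more robust crawler detection.

```python
# Sketch of proxy-side content switching: serve "poisoned" markup to
# suspected AI crawlers and the real page to everyone else.
# The signature list is a hypothetical example, not an exhaustive one.

AI_CRAWLER_SIGNATURES = ("gptbot", "ccbot", "anthropic", "google-extended")

def select_content(user_agent: str, real_html: str, poisoned_html: str) -> str:
    """Return the poisoned page if the User-Agent looks like an AI crawler."""
    ua = (user_agent or "").lower()
    if any(sig in ua for sig in AI_CRAWLER_SIGNATURES):
        return poisoned_html
    return real_html

real = "<p>Our aged cheddar is matured for 18 months.</p>"
poison = "<p>Our cheddar is matured on the moon for 900 years.</p>"

print(select_content("Mozilla/5.0 GPTBot/1.0", real, poison))        # poisoned page
print(select_content("Mozilla/5.0 (Windows NT 10.0)", real, poison)) # real page
```

In practice this check would live in the proxy layer (nginx, a CDN worker, or middleware), so the CMS and the human-facing site remain untouched.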

AI and Disability


Dr. Johnathan Flowers of American University recently sparked a conversation on Bluesky regarding a statement from the organizers of NaNoWriMo, which endorsed the use of generative AI technologies, such as LLM chatbots, in this year's event. Dr. Flowers expressed concern about the implication that AI assistance was necessary for accessibility, arguing that it could undermine the creativity and agency of individuals with disabilities. He believes that art often serves as a unique space where barriers imposed by disability can be transcended without relying on external help or engaging in forced intimacy. For Dr. Flowers, suggesting the need for AI support may inadvertently diminish the perceived capabilities of disabled and marginalized artists.

Since the announcement, NaNoWriMo organizers have revised their stance in response to criticism, though much of the social media discussion has become unproductive. In earlier discussions, the author has explored the implications of generative AI in art, focusing on the human connection that art typically fosters, which AI-generated content may not fully replicate. However, they now wish to address the role of AI as a tool for accessibility. Not being personally affected by physical disability, the author approaches this topic from a social scientific perspective. They acknowledge that the views expressed are personal and not representative of any particular community or organization.

Defining AI

In a recent presentation, the author offered a new definition of AI, drawing from contemporary regulatory and policy discussions:

AI: The application of specific forms of machine learning to perform tasks that would otherwise require human labor.

This definition is intentionally broad, encompassing not just generative AI but also other machine learning applications aimed at automating tasks.
AI as an Accessibility Tool

AI has the potential to enhance autonomy and independence for individuals with disabilities, paralleling technological advancements seen in fields like the Paris Paralympics. However, the author is keen to explore what unique benefits AI offers and what risks might arise.

The author acknowledges that this overview touches only on some key issues related to AI and disability. It is crucial for those working in machine learning to be aware of these dynamics, striving to balance benefits with potential risks and ensuring equitable access to technological advancements.

What is Explainable AI


Building a trusted AI system starts with ensuring transparency in how decisions are made. Explainable AI is vital not only for addressing trust issues within organizations but also for navigating regulatory challenges. According to research from Forrester, many business leaders express concerns over AI, particularly generative AI, which surged in popularity following the 2022 release of ChatGPT by OpenAI.

"AI faces a trust issue," explained Forrester analyst Brandon Purcell, underscoring the need for explainability to foster accountability. He highlighted that explainability helps stakeholders understand how AI systems generate their outputs. "Explainability builds trust," Purcell stated at the Forrester Technology and Innovation Summit in Austin, Texas. "When employees trust AI systems, they're more inclined to use them."

Implementing explainable AI does more than encourage usage within an organization; it also helps mitigate regulatory risks, according to Purcell. Explainability is crucial for compliance, especially under regulations like the EU AI Act. Forrester analyst Alla Valente emphasized the importance of integrating accountability, trust, and security into AI efforts. "Don't wait for regulators to set standards—ensure you're already meeting them," she advised at the summit.

Purcell noted that explainable AI varies depending on whether the AI model is predictive, generative, or agentic.

Building an Explainable AI System

AI explainability encompasses several key elements, including reproducibility, observability, transparency, interpretability, and traceability. For predictive models, transparency and interpretability are paramount. Transparency involves using "glass-box modeling," where users can see how the model analyzed the data and arrived at its predictions. This approach is likely to be a regulatory requirement, especially for high-risk applications.
Interpretability is another important technique, useful for lower-risk cases such as fraud detection or explaining loan decisions. Techniques like partial dependence plots show how specific inputs affect predictive model outcomes. "With predictive AI, explainability focuses on the model itself," Purcell noted. "It's one area where you can open the hood and examine how it works."

In contrast, generative AI models are often more opaque, making explainability harder. Businesses can address this by documenting the entire system, a process known as traceability. For those using models from vendors like Google or OpenAI, tools like transparency indexes and model cards, which detail the model's use case, limitations, and performance, are valuable resources.

Lastly, for agentic AI systems, which autonomously pursue goals, reproducibility is key. Businesses must ensure that the model's outputs can be consistently replicated with similar inputs before deployment. These systems, like self-driving cars, will require extensive testing in controlled environments before being trusted in the real world. "Agentic systems will need to rack up millions of virtual miles before we let them loose," Purcell concluded.
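To make the partial-dependence technique mentioned above concrete, here is a minimal, library-free sketch. The model, dataset, and "loan score" framing are toy examples, not any vendor's implementation: for a chosen feature value v, partial dependence is the model's prediction averaged over the dataset with that feature forced to v.

```python
# Toy partial dependence: average the model's prediction over the
# dataset while pinning one feature to a fixed value.

def partial_dependence(model, data, feature_index, value):
    """Average prediction with data[i][feature_index] forced to `value`."""
    total = 0.0
    for row in data:
        modified = list(row)
        modified[feature_index] = value  # pin the feature of interest
        total += model(modified)
    return total / len(data)

# Toy "loan score" model: income (feature 0) helps, debt (feature 1) hurts.
model = lambda x: 2.0 * x[0] - 1.0 * x[1]

data = [
    [50.0, 10.0],  # [income, debt]
    [30.0, 5.0],
    [80.0, 40.0],
]

# How does the score respond to income, averaged over observed debt levels?
for income in (30.0, 50.0, 80.0):
    print(income, partial_dependence(model, data, 0, income))
```

Plotting these averages against the feature values yields the partial dependence plot; libraries such as scikit-learn automate exactly this loop for fitted estimators.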

Recent advancements in AI


Recent advancements in AI have been propelled by large language models (LLMs) containing billions to trillions of parameters. Parameters, the variables used to train and fine-tune machine learning models, have played a key role in the development of generative AI. As the number of parameters grows, models like ChatGPT can generate human-like content that was unimaginable just a few years ago. Parameters are sometimes referred to as "features" or "feature counts."

While it's tempting to equate the power of AI models with their parameter count, similar to how we think of horsepower in cars, more parameters aren't always better. An increase in parameters can lead to additional computational overhead and even problems like overfitting. There are various ways to increase the number of parameters in AI models, but not all approaches yield the same improvements. For example, Google's Switch Transformers scaled to trillions of parameters, but some of their smaller models outperformed them in certain use cases. Thus, other metrics should be considered when evaluating AI models.

The exact relationship between parameter count and intelligence is still debated. John Blankenbaker, principal data scientist at SSA & Company, notes that larger models tend to replicate their training data more accurately, but the belief that more parameters inherently lead to greater intelligence is often wishful thinking. He points out that while these models may sound knowledgeable, they don't actually possess true understanding.

One challenge is the misunderstanding of what a parameter is. It's not a word, feature, or unit of data but rather a component within the model's computation. Each parameter adjusts how the model processes inputs, much like turning a knob in a complex machine. In contrast to parameters in simpler models like linear regression, which have a clear interpretation, parameters in LLMs are opaque and offer no insight on their own.
Christine Livingston, managing director at Protiviti, explains that parameters act as weights that allow flexibility in the model. However, more parameters can lead to overfitting, where the model performs well on training data but struggles with new information.

Adnan Masood, chief AI architect at UST, highlights that parameters influence precision, accuracy, and data management needs. However, due to the size of LLMs, it's impractical to focus on individual parameters. Instead, developers assess models based on their intended purpose, performance metrics, and ethical considerations. Understanding the data sources and pre-processing steps becomes critical in evaluating the model's transparency.

It's important to differentiate between parameters, tokens, and words. A parameter is not a word; rather, it's a value learned during training. Tokens are fragments of words, and LLMs are trained on these tokens, which are transformed into embeddings used by the model.

The number of parameters influences a model's complexity and capacity to learn. More parameters often lead to better performance, but they also increase computational demands. Larger models can be harder to train and operate, leading to slower response times and higher costs. In some cases, smaller models are preferred for domain-specific tasks because they generalize better and are easier to fine-tune.

Transformer-based models like GPT-4 dwarf previous generations in parameter count. However, for edge-based applications where resources are limited, smaller models are preferred as they are more adaptable and efficient. Fine-tuning large models for specific domains remains a challenge, often requiring extensive oversight to avoid problems like overfitting. There is also growing recognition that parameter count alone is not the best way to measure a model's performance.
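To make "parameter count" concrete, here is a rough, illustrative back-of-the-envelope calculation for a decoder-only transformer. The 12·L·d² term is a commonly used approximation for the attention and feed-forward weights per layer; exact counts vary with architecture details (biases, layer norms, positional embeddings), so treat the result as an order-of-magnitude estimate rather than an official figure.

```python
# Rough parameter-count estimate for a decoder-only transformer.
# Per layer: ~4*d^2 for attention (Q, K, V, and output projections)
# plus ~8*d^2 for a 4x-expansion feed-forward block, i.e. ~12*d^2,
# plus vocab_size*d for the token embedding matrix.

def estimate_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    per_layer = 12 * d_model ** 2
    embeddings = vocab_size * d_model
    return n_layers * per_layer + embeddings

# A GPT-2-small-like configuration: 12 layers, d_model=768, ~50k vocab.
est = estimate_params(n_layers=12, d_model=768, vocab_size=50257)
print(f"{est / 1e6:.0f}M parameters")  # lands near GPT-2 small's ~124M
```

Scaling the same formula to d_model in the tens of thousands and close to a hundred layers is how models reach the hundreds of billions of parameters discussed above.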
Alternatives like Stanford's HELM and benchmarks such as GLUE and SuperGLUE assess models across multiple factors, including fairness, efficiency, and bias.

Three trends are shaping how we think about parameters. First, AI developers are improving model performance without necessarily increasing parameters. A study of 231 models between 2012 and 2023 found that the computational power required for LLMs has halved every eight months, outpacing Moore's Law. Second, new neural network approaches like Kolmogorov-Arnold Networks (KANs) show promise, achieving comparable results to traditional models with far fewer parameters. Lastly, agentic AI frameworks like Salesforce's Agentforce offer a new architecture where domain-specific AI agents can outperform larger general-purpose models.

As AI continues to evolve, it's clear that while parameter count is an important consideration, it's just one of many factors in evaluating a model's overall capabilities. To stay on the cutting edge of artificial intelligence, contact Tectonic today.

AI Causes Job Flux


AI Barometer Signals Job Disruption Amid Global Productivity Gains

A recent PwC report highlights significant productivity improvements worldwide, but also points to potential job disruption due to artificial intelligence (AI). Described as the "Industrial Revolution of knowledge work," AI is transforming how workers utilize information, generate content, and deliver results at unprecedented speed and scale. The 2024 AI Jobs Barometer, released by PwC, aims to provide empirical data on the impact of AI on global employment. AI causes job flux, but not necessarily job loss.

The analysis involved examining over half a billion job ads across 15 advanced economies, including the U.S., Canada, Singapore, Australia, New Zealand, and several European nations. PwC sought to uncover the effects of AI on jobs, skills, wages, and productivity by monitoring the rise of positions requiring specialist AI skills across various industries and regions. The findings show that AI adoption is accelerating, with workers proficient in AI commanding substantial wage premiums.

Broader Workforce Impact

Interestingly, the impact of AI extends beyond workers with specialized AI skills. According to PwC, the majority of workers leveraging AI tools do not require such expertise. In many cases, a small number of AI specialists design tools that are then used by thousands of customer service agents, analysts, or legal professionals, none of whom possess advanced AI knowledge. This trend is driven largely by generative AI applications, which can typically be operated using simple, everyday language without technical skills.

AI's Economic Promise

AI is leading a productivity revolution. Labor productivity growth has stagnated in many OECD countries over the past two decades, but AI may offer a solution. To better understand its effect on productivity, PwC analyzed jobs based on their "AI exposure," indicating the extent to which AI can assist with tasks within specific roles.
The report found that industries with higher AI exposure are experiencing much greater labor productivity growth. Knowledge-based jobs, in particular, show the highest AI exposure and the greatest demand for workers with advanced AI skills. Sectors such as financial services, professional services, and information and communications are leading the way, with AI-related job shares 2.8x, 3x, and 5x higher, respectively, than in other industries. Overall, these sectors are witnessing nearly fivefold productivity growth due to AI integration.

AI is also playing a role in alleviating labor shortages. Jobs in customer service, administration, and IT, among others, are still growing but at a slower rate. AI-driven productivity may help fill gaps caused by shrinking working-age populations in advanced economies.

Wage Premiums for AI Skills

Workers in AI-specialist roles are seeing significant wage premiums, up to 25% on average. Since 2016, demand for these roles has outpaced the growth of the overall job market. The highest wage premiums are found in the U.S. (25%) and the U.K. (14%), with data specialists commanding premiums of over 50% in both countries. Financial analysts, lawyers, and marketing managers also enjoy substantial wage boosts.

The Disruption of Job Markets

The skills required for AI-exposed jobs are evolving rapidly. PwC's report reveals that new skills are emerging 25% faster in AI-exposed occupations compared to those less affected by AI. Jobs requiring AI proficiency have grown 3.5 times faster than other roles since 2016, and this trend predates the rise of popular tools like ChatGPT. However, while AI is driving demand for new skills, it is also reducing the need for certain old ones. Jobs in fields like IT, design, sales, and data analysis are seeing slower growth, as tasks in these areas are increasingly automated by AI technologies.
The Future of Work

The PwC report stresses that AI will not necessarily result in fewer jobs overall, but will change the nature of work. Instead of asking whether AI can replicate existing tasks, the focus should be on how AI enables new opportunities and industries. Tectonic recommends you act on this train of thought by implementing AI acceptable use policies in your company. Encourage your teams to explore AI tools that increase productivity, but clearly outline what is and is not acceptable AI usage.

PwC outlines several steps for policymakers, business leaders, and workers to take to ensure a positive transition into the AI era. Policymakers are encouraged to promote AI adoption through supportive policies, digital infrastructure, and workforce development. Business leaders should embrace AI as a complement to human workers, focusing on generating new ways to create value. Meanwhile, workers must build AI-complementary skills and experiment with AI tools to remain competitive in the evolving job market.

Ultimately, while AI is disrupting the job landscape, it also presents vast opportunities for those who are willing to adapt. Like past technological revolutions, those who embrace change stand to benefit the most from AI's transformative power.

Read More
Generative AI and Patient Engagement

Generative AI and Patient Engagement

The healthcare industry is undergoing a significant digital transformation, with generative AI and chatbots playing a prominent role in various patient engagement applications. Technologies such as online symptom checkers, appointment scheduling, patient navigation tools, medical search engines, and patient portal messaging are prime examples of how AI is enhancing patient-facing interactions. These advancements aim to alleviate staff workload while improving the overall patient experience, according to industry experts. However, even these patient-centric applications face challenges, such as the risk of generating medical misinformation or biased outcomes. As healthcare professionals explore the potential of generative AI and chatbots, they must also implement safeguards to prevent the spread of false information and mitigate disparities in care.

Online Symptom Checkers

Online symptom checkers allow patients to input their symptoms and receive a list of potential diagnoses, helping them decide the appropriate level of care, whether it’s urgent care or self-care at home. These tools hold promise for improving patient experiences and operational efficiency by reducing unnecessary healthcare visits. For healthcare providers, they help triage patients, ensuring those who need critical care receive it.

However, the effectiveness of online symptom checkers is mixed. A 2022 literature review revealed that diagnostic accuracy ranged between 19% and 37.9%, while triage accuracy was higher, between 48.9% and 90%. Patient reception to these tools has been lukewarm as well, with some expressing dissatisfaction with the COVID-19 symptom checkers during the pandemic, particularly when the tools did not emulate human interaction. Moreover, studies have indicated that these tools might exacerbate health inequities, as users tend to be younger, female, and more digitally literate.
To mitigate this, developers must ensure that chatbots can communicate in multiple languages, replicate human interactions, and escalate to human providers when needed.

Self-Scheduling and Patient Navigation

Generative AI and conversational AI have shown promise in addressing lower-level patient inquiries, such as appointment scheduling and navigation, reducing the strain on healthcare staff. AI-driven scheduling systems help fill gaps in navigation by assisting patients with appointment bookings and answering logistical questions, like parking or directions. A December 2023 review noted that AI-optimized patient scheduling reduces provider time burdens and improves patient satisfaction. However, barriers such as health equity, access to broadband, and patient trust must be addressed to ensure effective implementation. While organizations need to ensure these systems are accessible to all, AI is a valuable tool for managing routine patient requests, freeing staff to focus on more complex issues.

Online Medical Research

AI tools like ChatGPT are expanding on the “Dr. Google” phenomenon, offering patients a way to search for medical information. Despite initial concerns from clinicians about online medical searches, recent studies show that generative AI tools can provide accurate and understandable information. For instance, ChatGPT accurately answered breast cancer screening questions 88% of the time in one 2023 study and offered adequate colonoscopy preparation information in another. However, patients remain cautious about AI-generated medical advice. A 2023 survey revealed that nearly half of respondents were concerned about potential misinformation, and many were unsure about the sources AI tools use. Addressing these concerns by validating source material and providing supplementary educational resources will be crucial for building patient trust.
Patient Portal Messaging and Provider Communication

Generative AI is also finding its place in patient portal messaging, where it can generate responses to patient inquiries, helping to alleviate clinician burnout. In a 2024 study, AI-generated responses within a patient portal were often indistinguishable from those written by clinicians, requiring human editing in only 58% of cases. While chatbot-generated messages have been found to be more empathetic than those written by overworked providers, it’s important to ensure AI-generated responses are always reviewed by healthcare professionals to catch any potential errors.

In addition to patient engagement, generative AI is being used in clinical decision support and ambient documentation, showcasing its potential to improve healthcare efficiency. However, developers and healthcare organizations must remain vigilant about preventing algorithmic bias and other AI-related risks.

Read More
Cortex Framework Integration with Salesforce (SFDC)

Cortex Framework Integration with Salesforce (SFDC)

Cortex Framework: Integration with Salesforce (SFDC)

This insight outlines the process of integrating Salesforce (SFDC) operational workloads into the Cortex Framework Data Foundation. By integrating Salesforce data through Dataflow pipelines into BigQuery, Cloud Composer can schedule and monitor these pipelines, allowing you to gain insights from your Salesforce data.

Prerequisite: Before configuring any workload integration, ensure that the Cortex Framework Data Foundation is deployed.

Configuration File

The config.json file in the Cortex Framework Data Foundation repository manages settings for transferring data from various sources, including Salesforce. Below is an example of how Salesforce workloads are configured:

```json
"SFDC": {
  "deployCDC": true,
  "createMappingViews": true,
  "createPlaceholders": true,
  "datasets": {
    "cdc": "",
    "raw": "",
    "reporting": "REPORTING_SFDC"
  }
}
```

Explanation of parameters:

| Parameter | Meaning | Default Value | Description |
|---|---|---|---|
| SFDC.deployCDC | Deploy CDC | true | Generates Change Data Capture (CDC) processing scripts to run as DAGs in Cloud Composer. |
| SFDC.createMappingViews | Create mapping views | true | Creates views in the CDC processed dataset to show the “latest version of the truth” from the raw dataset. |
| SFDC.createPlaceholders | Create placeholders | true | Creates empty placeholder tables if they aren’t generated during ingestion, ensuring smooth downstream reporting deployment. |
| SFDC.datasets.raw | Raw landing dataset | (user-defined) | The dataset where replication tools land data from Salesforce. |
| SFDC.datasets.cdc | CDC processed dataset | (user-defined) | Source for reporting views and target for records processed by DAGs. |
| SFDC.datasets.reporting | Reporting dataset for SFDC | "REPORTING_SFDC" | Name of the dataset accessible for end-user reporting, where views and user-facing tables are deployed. |
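The config.json settings above can be sanity-checked programmatically before a deployment. The sketch below is illustrative only: the CDC_SFDC and RAW_SFDC dataset names are placeholder assumptions (those datasets are user-defined), and the validation logic is not part of the Cortex Framework itself.

```python
import json

# Hypothetical excerpt of a filled-in Cortex config.json; the cdc and
# raw dataset names here are illustrative placeholders.
config = json.loads("""
{
  "SFDC": {
    "deployCDC": true,
    "createMappingViews": true,
    "createPlaceholders": true,
    "datasets": {
      "cdc": "CDC_SFDC",
      "raw": "RAW_SFDC",
      "reporting": "REPORTING_SFDC"
    }
  }
}
""")

sfdc = config["SFDC"]
# The raw and CDC datasets are user-defined, so a simple pre-deployment
# check is that none of the dataset names were left empty.
missing = [name for name, value in sfdc["datasets"].items() if not value]
assert not missing, f"Unset dataset names: {missing}"
print(sfdc["datasets"]["reporting"])  # -> REPORTING_SFDC
```

A check like this can run in CI before the Data Foundation deployment scripts are invoked, catching an unset dataset name early rather than mid-deployment.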
Salesforce Data Requirements

Table Structure:

Loading SFDC Data into BigQuery

The Cortex Framework offers several methods for loading Salesforce data into BigQuery:

CDC Processing

The CDC scripts rely on two key fields: You can adjust the CDC processing to handle different field names or add custom fields to suit your data schema.

Configuration of API Integration and CDC

To configure Salesforce data integration into BigQuery, Cortex provides the following methods:

Example Configuration (settings.yaml):

```yaml
salesforce_to_raw_tables:
  - base_table: accounts
    raw_table: Accounts
    api_name: Account
    load_frequency: "@daily"
```

Data Mapping and Polymorphic Fields

Cortex Framework supports mapping data fields to the expected format. For example, a field named unicornId in your source system would be mapped to AccountId in Cortex with the string data type.

Polymorphic Fields: Fields whose names vary but have the same structure can be mapped in Cortex using [Field Name]_Type, such as Who_Type for the Who.Type field in the Task object.

Modifying DAG Templates

You can customize DAG templates as needed for CDC or raw data processing. To disable CDC or raw data processing from API calls, set deployCDC=false in the configuration file.

Setting Up the Extraction Module

Follow these steps to set up the Salesforce to BigQuery extraction module:

Cloud Composer Setup

To run Python scripts for replication, install the necessary Python packages depending on your Airflow version. For Airflow 2.x:

```bash
gcloud composer environments update my-composer-instance \
  --location us-central1 \
  --update-pypi-package "apache-airflow-providers-salesforce>=5.2.0"
```

Security and Permissions

Ensure Cloud Composer has access to Google Secret Manager for retrieving stored secrets, enhancing the security of sensitive data like passwords and API keys.
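The field-mapping behavior described above (for example, a source field unicornId mapped to AccountId with a string type) can be illustrated with a small sketch. The helper function and mapping table below are hypothetical, written only to show the idea; they are not part of the Cortex Framework API.

```python
# Illustrative sketch of a Cortex-style field mapping: each source
# field maps to a (target_name, cast) pair; unmapped fields pass
# through unchanged. Names here are invented for illustration.
FIELD_MAPPING = {"unicornId": ("AccountId", str)}

def map_record(raw: dict) -> dict:
    """Rename and cast the fields of one raw Salesforce record."""
    mapped = {}
    for src_field, value in raw.items():
        target_field, cast = FIELD_MAPPING.get(src_field, (src_field, lambda v: v))
        mapped[target_field] = cast(value)
    return mapped

record = map_record({"unicornId": 42, "Name": "Acme"})
print(record)  # -> {'AccountId': '42', 'Name': 'Acme'}
```

In the real framework this renaming happens inside the generated mapping views and CDC scripts; the sketch only shows the shape of the transformation.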
Conclusion

By following these steps, you can successfully integrate Salesforce workloads into Cortex Framework, ensuring a seamless data flow from Salesforce into BigQuery for reporting and analytics.

Read More
Salesforce Data Cloud and Zero Copy

Salesforce Data Cloud and Zero Copy

As organizations across industries gather increasing amounts of data from diverse sources, they face the challenge of making that data actionable and deriving real-time insights. With Salesforce Data Cloud and zero copy architecture, organizations can streamline access to data and build dynamic, real-time dashboards that drive value while embedding contextual insights into everyday workflows. A session during Dreamforce 2024 with Joanna McNurlen, Principal Solution Engineer for Data Cloud at Salesforce, discussed how zero copy architecture facilitates the creation of dashboards and workflows that provide near-instant insights, enabling quick decision-making to enhance operational efficiency and competitive advantage.

What is zero copy architecture?

Traditionally, organizations had to replicate data from one system to another, such as copying CRM data into a data warehouse for analysis. This approach introduces latency, increases storage costs, and often results in inconsistencies between systems. Zero copy architecture eliminates the need for replication and provides a single source of truth for your data. It allows different systems to access data in its original location without duplication across platforms. Instead of using traditional extract, transform, and load (ETL) processes, systems like Salesforce Data Cloud can connect directly with external databases, such as Google Cloud BigQuery, Snowflake, Databricks, or Amazon Redshift, for real-time data access. Zero copy can also facilitate data sharing from within Salesforce to other systems. As Salesforce expands its zero copy partner network, opportunities to easily connect data from various sources will continue to grow.

How does zero copy work?

Zero copy employs virtual tables that act as blueprints for the data structure, enabling queries to be executed as if the data were local.
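The virtual-table idea can be shown with a toy sketch: the "table" stores no rows of its own, it only proxies reads to the source system, so an update in the warehouse is visible on the very next query with no replication job in between. All class and field names below are invented for illustration; this is not Data Cloud code.

```python
# Toy illustration of zero copy: a virtual table holds no data,
# it reads through to the source system at query time.
class SourceWarehouse:
    """Stands in for an external warehouse such as BigQuery or Snowflake."""
    def __init__(self):
        self.rows = {"acct-1": {"status": "prospect"}}

class VirtualTable:
    """Blueprint of the data structure; no rows are copied."""
    def __init__(self, source: SourceWarehouse):
        self.source = source

    def get(self, key):
        # Read-through at query time, so the caller always sees
        # the source's latest state.
        return self.source.rows[key]

warehouse = SourceWarehouse()
vt = VirtualTable(warehouse)
warehouse.rows["acct-1"]["status"] = "customer"  # change lands in the warehouse
print(vt.get("acct-1")["status"])  # -> customer (no sync step needed)
```

Contrast this with an ETL copy: a snapshot taken before the update would still report "prospect" until the next scheduled load.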
Changes made in the data warehouse are instantly visible across all connected systems, ensuring users always work with the latest information. While developing dashboards, users can connect directly to the zero copy objects within Data Cloud to create visualizations and reports on top of them.

Why is zero copy beneficial?

Zero copy allows organizations to analyze data as it is generated, enabling faster responses, smarter decision-making, and enhanced customer experiences. This architecture reduces reliance on data transformation workflows and synchronizations within both Tableau and CRM Analytics, where organizations have historically encountered bottlenecks due to runtimes and platform limits. Various teams can benefit from the following capabilities:

Unlocking real-time insights in Salesforce using zero copy architecture

Zero copy architecture and real-time data are transforming how organizations operate. By eliminating data duplication and providing real-time insights, the use of zero copy in Salesforce Data Cloud empowers organizations to work more efficiently, make informed decisions, and enhance customer experiences. Now is the perfect time to explore how Salesforce Data Cloud and zero copy can elevate your operations. Tectonic, a trusted Salesforce partner, can help you unlock the potential of your data and create new opportunities with the Salesforce platform. Connect with us today to get started.

Read More
Exploring Large Action Models

Exploring Large Action Models

Exploring Large Action Models (LAMs) for Automated Workflow Processes

While large language models (LLMs) are effective in generating text and media, Large Action Models (LAMs) push beyond simple generation—they perform complex tasks autonomously. Imagine an AI that not only generates content but also takes direct actions in workflows, such as managing customer relationship management (CRM) tasks, sending emails, or making real-time decisions. LAMs are engineered to execute tasks across various environments by seamlessly integrating with tools, data, and systems. They adapt to user commands, making them ideal for applications in industries like marketing, customer service, and beyond.

Key Capabilities of LAMs

A standout feature of LAMs is their ability to perform function-calling tasks, such as selecting the appropriate APIs to meet user requirements. Salesforce’s xLAM models are designed to optimize these tasks, achieving high performance with lower resource demands—ideal for both mobile applications and high-performance environments. The fc series models are specifically tuned for function-calling, enabling fast, precise, and structured responses by selecting the best APIs based on input queries.

Practical Examples Using Salesforce LAMs

In this article, we’ll explore:

Implementation: Setting Up the Model and API

Start by installing the necessary libraries:

```bash
pip install transformers==4.41.0 datasets==2.19.1 tokenizers==0.19.1 flask==2.2.5
```

Next, load the xLAM model and tokenizer:

```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/xLAM-7b-fc-r"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Now, define instructions and available functions. Task instructions: the model will use function calls where applicable, based on user questions and available tools.
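The testing snippets that follow call a helper named custom_func_def, whose body the article does not show. Conceptually it formats the query and the API definitions into the xLAM prompt, runs the model, and extracts the JSON tool-call block from the decoded text. That last extraction step can be sketched independently of the model; the parsing approach below is an assumption, not the article's implementation.

```python
import json

def extract_tool_calls(model_output: str) -> dict:
    """Pull the first-to-last brace span out of raw model text and parse
    it as the {"tool_calls": [...]} structure xLAM is prompted to emit."""
    start = model_output.find("{")
    end = model_output.rfind("}") + 1
    return json.loads(model_output[start:end])

# Simulated decoded model output, with some chatter around the JSON:
raw = ('Sure: {"tool_calls": [{"name": "get_weather", '
       '"arguments": {"location": "New York", "unit": "fahrenheit"}}]}')
calls = extract_tool_calls(raw)
print(calls["tool_calls"][0]["name"])  # -> get_weather
```

A helper like this keeps the downstream routing code simple: whatever prose the model wraps around its answer, the caller only ever sees the parsed tool-call structure.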
Format Example:

```json
{
  "tool_calls": [
    {"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}
  ]
}
```

Define available APIs:

```python
get_weather_api = {
    "name": "get_weather",
    "description": "Retrieve weather details",
    "parameters": {"location": "string", "unit": "string"},
}

search_api = {
    "name": "search",
    "description": "Search for online information",
    "parameters": {"query": "string"},
}
```

Creating Flask APIs for Business Logic

We can use Flask to create APIs that replicate business processes:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/customer", methods=["GET"])
def get_customer():
    customer_id = request.args.get("customer_id")
    # Return dummy customer data
    return jsonify({"customer_id": customer_id, "status": "active"})

@app.route("/send_email", methods=["GET"])
def send_email():
    email = request.args.get("email")
    # Return dummy response for email send status
    return jsonify({"status": "sent"})
```

Testing the LAM Model and Flask APIs

Define queries to test LAM’s function-calling capabilities:

```python
query = "What's the weather like in New York in fahrenheit?"
print(custom_func_def(query))
# Expected: {"tool_calls": [{"name": "get_weather",
#            "arguments": {"location": "New York", "unit": "fahrenheit"}}]}
```

Function-Calling Models in Action

Using base_call_api, LAMs can determine the correct API to call and manage workflow processes autonomously:

```python
import json
import requests  # needed for the HTTP call below

def base_call_api(query):
    """Calls the local Flask API recommended by the LAM for this query."""
    base_url = "http://localhost:5000/"
    json_response = json.loads(custom_func_def(query))
    api_url = json_response["tool_calls"][0]["name"]
    params = json_response["tool_calls"][0]["arguments"]
    response = requests.get(base_url + api_url, params=params)
    return response.json()
```

With LAMs, businesses can automate and streamline tasks in complex workflows, maximizing efficiency and empowering teams to focus on strategic initiatives.

Read More
2024 AI Glossary

2024 AI Glossary

Artificial intelligence (AI) has moved from an emerging technology to a mainstream business imperative, making it essential for leaders across industries to understand and communicate its concepts. To help you unlock the full potential of AI in your organization, this 2024 AI Glossary outlines key terms and phrases that are critical for discussing and implementing AI solutions.

Tectonic 2024 AI Glossary

Active Learning
A blend of supervised and unsupervised learning, active learning allows AI models to identify patterns, determine the next step in learning, and only seek human intervention when necessary. This makes it an efficient approach to developing specialized AI models with greater speed and precision, which is ideal for businesses aiming for reliability and efficiency in AI adoption.

AI Alignment
This subfield focuses on aligning the objectives of AI systems with the goals of their designers or users. It ensures that AI achieves intended outcomes while also integrating ethical standards and values when making decisions.

AI Hallucinations
These occur when an AI system generates incorrect or misleading outputs. Hallucinations often stem from biased or insufficient training data or incorrect model assumptions.

AI-Powered Automation
Also known as “intelligent automation,” this refers to the integration of AI with rules-based automation tools like robotic process automation (RPA). By incorporating AI technologies such as machine learning (ML), natural language processing (NLP), and computer vision (CV), AI-powered automation expands the scope of tasks that can be automated, enhancing productivity and customer experience.

AI Usage Auditing
An AI usage audit is a comprehensive review that ensures your AI program meets its goals, complies with legal requirements, and adheres to organizational standards. This process helps confirm the ethical and accurate performance of AI systems.

Artificial General Intelligence (AGI)
AGI refers to a theoretical AI system that matches human cognitive abilities and adaptability. While it remains a future concept, experts predict it may take decades or even centuries to develop true AGI.

Artificial Intelligence (AI)
AI encompasses computer systems that can perform complex tasks traditionally requiring human intelligence, such as reasoning, decision-making, and problem-solving.

Bias
Bias in AI refers to skewed outcomes that unfairly disadvantage certain ideas, objectives, or groups of people. This often results from insufficient or unrepresentative training data.

Confidence Score
A confidence score is a probability measure indicating how certain an AI model is that it has performed its assigned task correctly.

Conversational AI
A type of AI designed to simulate human conversation using techniques like NLP and generative AI. It can be further enhanced with capabilities like image recognition.

Cost Control
The process of monitoring project progress in real time, tracking resource usage, analyzing performance metrics, and addressing potential budget issues before they escalate, ensuring projects stay on track.

Data Annotation (Data Labeling)
The process of labeling data with specific features to help AI models learn and recognize patterns during training.

Deep Learning
A subset of machine learning that uses multi-layered neural networks to simulate complex human decision-making processes.

Enterprise AI
AI technology designed specifically to meet organizational needs, including governance, compliance, and security requirements.

Foundational Models
These models learn from large datasets and can be fine-tuned for specific tasks. Their adaptability makes them cost-effective, reducing the need for separate models for each task.

Generative AI
A type of AI capable of creating new content such as text, images, audio, and synthetic data. It learns from vast datasets and generates new outputs that resemble but do not replicate the original data.

Generative AI Feature Governance
A set of principles and policies ensuring the responsible use of generative AI technologies throughout an organization, aligning with company values and societal norms.

Human in the Loop (HITL)
A feedback process where human intervention ensures the accuracy and ethical standards of AI outputs, essential for improving AI training and decision-making.

Intelligent Document Processing (IDP)
IDP extracts data from a variety of document types using AI techniques like NLP and CV to automate and analyze document-based tasks.

Large Language Model (LLM)
An AI technology trained on massive datasets to understand and generate text. LLMs are key in language understanding and generation and utilize transformer models for processing sequential data.

Machine Learning (ML)
A branch of AI that allows systems to learn from data and improve accuracy over time through algorithms.

Model Accuracy
A measure of how often an AI model performs tasks correctly, typically evaluated using metrics such as the F1 score, which combines precision and recall.

Natural Language Processing (NLP)
An AI technique that enables machines to understand, interpret, and generate human language through a combination of linguistic and statistical models.

Retrieval Augmented Generation (RAG)
A technique that enhances the reliability of generative AI by incorporating external data to improve the accuracy of generated content.

Supervised Learning
A machine learning approach that uses labeled datasets to train AI models to make accurate predictions.

Unsupervised Learning
A type of machine learning that analyzes and groups unlabeled data without human input, often used to discover hidden patterns.

By understanding these terms, you can better navigate the AI implementation landscape and apply its transformative power to drive innovation and efficiency across your organization.
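The Model Accuracy entry mentions the F1 score, which is the harmonic mean of precision and recall. A small worked example makes the relationship concrete; the input values below are illustrative, not from any real evaluation.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; a large gap between the
    two pulls the score down more than an arithmetic mean would."""
    return 2 * precision * recall / (precision + recall)

# Illustrative values: a model with 90% precision but only 60% recall.
print(round(f1_score(0.9, 0.6), 3))  # -> 0.72
```

Note that the arithmetic mean of 0.9 and 0.6 would be 0.75; the harmonic mean's 0.72 reflects the penalty for the weaker recall.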

Read More
Collaborate With AI

Collaborate With AI

Many artists, writers, musicians, and creators are facing fears that AI is taking over their jobs. On the surface, generative AI tools can replicate work in moments that previously took creators hours to produce—often at a fraction of the cost and with similar quality. This shift has led many businesses to adopt AI for content creation, leaving creators worried about their livelihoods. Yet, there’s another way to view this situation, one that offers hope to creators everywhere. AI, at its core, is a tool of mimicry. When provided with enough data, it can replicate a style or subject with reasonable accuracy. Most of this data has been scraped from the internet, often without explicit consent, to train AI models on a wide variety of creative outputs. If you’re a creator, it’s likely that pieces of your work have contributed to the training of these AI models. Your art, words, and ideas have helped shape what these systems now consider ‘good’ in the realms of art, music, and writing. AI can combine the styles of multiple creators to generate something new, but often these creations fall flat. Why? While image-generating AI can predict pixels, it lacks an understanding of human emotions. It knows what a smile looks like but can’t grasp the underlying feelings of joy, nervousness, or flirtation that make a smile truly meaningful. AI can only generate a superficial replica unless the creator uses extensive prompt engineering to convey the context behind that smile. Emotion is uniquely human, and it’s what makes our creations resonate with others. A single brushstroke from a human artist can convey emotions that might take thousands of words to replicate through an AI prompt. We’ve all heard the saying, “A picture is worth a thousand words.” But generating that picture with AI often takes many more words. Input a short prompt, and the AI will enhance it with more words, often leading to results that stray from your original vision. 
To achieve a specific outcome, you may need hours of prompt engineering, trial, and error—and even then, the result might not be quite right. Without a human artist to guide the process, these generated works will often remain unimpressive, no matter how advanced the technology becomes. That’s where you, the creator, come in. By introducing your own inputs, such as images or sketches, and using workflows like those in ComfyUI, you can exert more control over the outputs. AI becomes less of a replacement for the artist and more of a tool or collaborator. It can help speed up the creative process but still relies on the artist’s hand to guide it toward a meaningful result. Artists like Martin Nebelong have embraced this approach, treating AI as just another tool in their creative toolbox. Nebelong uses high levels of control in AI-driven workflows to create works imbued with his personal emotional touch. He shares these workflows on platforms like LinkedIn and Twitter, encouraging other creators to explore how AI can speed up their processes while retaining the unique artistry that only humans can provide. Nebelong’s philosophy is clear: “I’m pro-creativity, pro-art, and pro-AI. Our tools change, the scope of what we can do changes. I don’t think creative AI tools or models have found their best form yet; they’re flawed, raw, and difficult to control. But I’m excited for when they find that form and can act as an extension of our hands, our brush, and as an amplifier of our artistic intent.” AI can help bring an artist 80% of the way to a finished product, but it’s the final 20%—the part where human skill and emotional depth come in—that elevates the piece to something truly remarkable. Think about the notorious issues with AI-generated hands. Often, the output features too many fingers or impossible poses, a telltale sign of AI’s limitations. An artist is still needed to refine the details, correct mistakes, and bring the creation in line with reality. 
While using AI may be faster than organizing a full photoshoot or painting from scratch, the artist’s role has shifted from full authorship to that of a collaborator, guiding AI toward a polished result. Nebelong often starts with his own artwork and integrates AI-generated elements, using them to enhance but never fully replace his vision. He might even use AI to generate 3D models, lighting, or animations, but the result is always driven by his creativity. For him, AI is just another step in the creative journey, not a shortcut or replacement for human effort. However, AI’s ability to replicate the styles of famous artists and public figures raises ethical concerns. With platforms like CIVIT.AI making it easy to train models on any style or subject, questions arise about the legality and morality of using someone else’s likeness or work without permission. As regulations catch up, we may see a future where AI models trained on specific styles or individuals are licensed, allowing creators to retain control over their works in the same way they license their traditional creations today. The future may also see businesses licensing AI models trained on actors, artists, or styles, allowing them to produce campaigns without booking the actual talent. This would lower costs while still benefiting creators through licensing fees. Actors and artists could continue to contribute their talents long after they’ve retired, or even passed on, by licensing their digital likenesses, as seen with CGI performances in movies like Rogue One. In conclusion, AI is pushing creators to learn new skills and adapt to new tools. While this can feel daunting, it’s important to remember that AI is just that—a tool. It doesn’t understand emotion, intent, or meaning, and it never will. That’s where humans come in. By guiding AI with our creativity and emotional depth, we can produce works that resonate with others on a deeper level. 
For example, you can tell artificial intelligence what an image should look like but not what emotions the image should evoke. Creators, your job isn’t disappearing. It’s evolving.

Read More
gettectonic.com