Train AI - gettectonic.com

AI Risk Management

Organizations must acknowledge the risks associated with implementing AI systems in order to use the technology ethically and minimize liability. Throughout history, companies have had to manage the risks of adopting new technologies, and AI is no exception. Some AI risks are similar to those encountered when deploying any new technology or tool, such as poor strategic alignment with business goals, a lack of the skills needed to support initiatives, and failure to secure buy-in across the organization. For these challenges, executives should rely on the best practices that have guided the successful adoption of other technologies. In the case of AI, this includes:

However, AI introduces unique risks that must be addressed head-on. Here are 15 areas of concern that can arise as organizations implement and use AI technologies in the enterprise:

Managing AI Risks

While AI risks cannot be eliminated, they can be managed. Organizations must first recognize and understand these risks, then implement policies to minimize their negative impact. These policies should ensure the use of high-quality data, require testing and validation to root out biases (a minimal illustration of such a bias check appears at the end of this section), and mandate ongoing monitoring to identify and address unexpected consequences. Furthermore, ethical considerations should be embedded in AI systems, with frameworks in place to ensure AI produces transparent, fair, and unbiased results. Human oversight is essential to confirm these systems meet established standards.

For successful risk management, the involvement of the board and the C-suite is crucial. As noted, "This is not just an IT problem, so all executives need to get involved in this."
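As a rough illustration of what "testing and validation to root out biases" can look like in practice, the sketch below compares a model's positive-prediction rate across two groups (a demographic parity check). It is a minimal example with hypothetical predictions and group labels, not a complete fairness audit; real programs typically rely on dedicated tooling and several metrics.

```python
# Minimal sketch of one possible bias check: comparing a model's positive-outcome
# rate across demographic groups (demographic parity difference).
# The predictions and group labels below are hypothetical, for illustration only.

from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the share of positive (1) predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical model outputs (1 = approved, 0 = denied) and applicant groups.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rate_by_group(predictions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'A': 0.6, 'B': 0.2}
print(f"Demographic parity gap: {gap:.2f}")   # 0.40 -- a large gap warrants investigation
```

A check like this is only a starting point: it flags where outcomes diverge across groups, after which analysts still need to investigate the data and the model to understand why.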


Ethical AI Implementation

AI technologies are rapidly evolving and have become a practical way to support essential business operations. Creating true business value from AI, however, requires a well-balanced approach that considers people, processes, and technology. AI encompasses various forms, including machine learning, deep learning, predictive analytics, natural language processing, computer vision, and automation. To leverage AI's competitive advantages, companies need a strong foundation and a realistic strategy aligned with their business goals.

"Artificial intelligence is multifaceted," said John Carey, managing director at AArete, a business management consultancy. "There's often hype and, at times, exaggeration about how 'intelligent' AI truly is."

Business Advantages of AI Adoption

Recent advancements in generative AI, such as ChatGPT and DALL-E, have showcased AI's significant impact on businesses. According to a McKinsey Global Survey, global AI adoption jumped to 72% in 2024 after hovering around 50% for the previous six years. Some key benefits of adopting AI include:

Prerequisites for AI Implementation

Successfully implementing AI can be complex. A detailed understanding of the following prerequisites is crucial for achieving positive results:

13 Steps for Successful AI Implementation

Common AI Implementation Mistakes

Organizations often stumble by:

Key Challenges in Ethical AI Implementation

Human-related challenges often present the biggest hurdles. To overcome them, organizations must foster data literacy and build trust among stakeholders. Challenges around data management, model governance, system integration, and intellectual property also need to be addressed.

Ensuring Ethical AI Implementation

To ensure responsible AI use, companies should:

Ethical AI implementation requires a continuous commitment to transparency, fairness, and inclusivity across all levels of the organization.


New Technology Risks

Organizations have always needed to manage the risks that come with adopting new technologies, and implementing artificial intelligence (AI) is no different. Many of the risks associated with AI are similar to those encountered with any new technology: poor alignment with business goals, insufficient skills to support the initiatives, and a lack of organizational buy-in. To address these challenges, executives should rely on best practices that have guided the successful adoption of other technologies, according to management consultants and AI experts. When it comes to AI, this includes:

However, AI presents unique risks that executives must recognize and address proactively. Below are 15 areas of risk that organizations may encounter as they implement and use AI technologies:

Managing AI Risks

While the risks associated with AI cannot be entirely eliminated, they can be managed. Organizations must first recognize and understand these risks and then implement policies to mitigate them. This includes ensuring high-quality data for AI training, testing for biases, and continuous monitoring of AI systems to catch unintended consequences. Ethical frameworks are also crucial to ensure AI systems produce fair, transparent, and unbiased results.

Involving the board and C-suite in AI governance is essential, as managing AI risk is not just an IT issue but a broader organizational challenge.


Army of AI Bots

Salesforce Inc. has announced a significant upgrade with the launch of Industries AI, a new automation platform designed to handle a wide range of time-consuming tasks and boost productivity across sectors. To be clear, we are not suggesting that the next war will be fought with AI bots, or that there is anything negative about these bots. But if the next war is fought over information and data, who knows.

Industries AI will be integrated into all 15 of Salesforce's cloud platforms, including Sales Cloud, Data Cloud, Service Cloud, Commerce Cloud, and Marketing Cloud. The platform can manage more than 100 common tasks, from matching patients with clinical trials and providing maintenance alerts for vehicles and machinery to streamlining recruitment processes and enhancing government services.

The launch of Industries AI responds to findings from Salesforce's Trends in AI for CRM Report, which indicated that over 75% of business leaders are concerned about missing out on AI advancements if they do not adopt the technology soon. With a 700% increase in urgency to implement AI over the past six months, many organizations struggle to find the resources and expertise needed to develop and train AI models. Salesforce aims to address this by offering a ready-made framework for creating AI agents tailored to industry-specific needs, using each customer's proprietary data within the Salesforce platform. Industries AI will provide a foundation for quickly deploying autonomous agents, with setup times estimated at just a few minutes.

To help customers leverage AI automation, Salesforce has created use case libraries for each of its cloud platforms, featuring over 100 capabilities at launch. These capabilities span multiple industries:

Salesforce will begin rolling out Industries AI capabilities in October 2024, with some features available by February 2025. The company plans to regularly update Industries AI with new capabilities as part of its annual Salesforce releases.

Jeff Amann, executive vice president and general manager of Salesforce Industries, emphasized that this innovation aims to make powerful AI accessible to all enterprises, regardless of size or budget. "Organizations can now easily start with AI solutions tailored to their specific challenges, enhancing efficiency and productivity across various functions," he said.


2024 AI Glossary

Artificial intelligence (AI) has moved from an emerging technology to a mainstream business imperative, making it essential for leaders across industries to understand and communicate its concepts. To help you unlock the full potential of AI in your organization, this 2024 AI Glossary outlines key terms and phrases that are critical for discussing and implementing AI solutions.

Tectonic 2024 AI Glossary

Active Learning: A blend of supervised and unsupervised learning, active learning allows AI models to identify patterns, determine the next step in learning, and only seek human intervention when necessary. This makes it an efficient approach to developing specialized AI models with greater speed and precision, which is ideal for businesses aiming for reliability and efficiency in AI adoption.

AI Alignment: This subfield focuses on aligning the objectives of AI systems with the goals of their designers or users. It ensures that AI achieves intended outcomes while also integrating ethical standards and values when making decisions.

AI Hallucinations: These occur when an AI system generates incorrect or misleading outputs. Hallucinations often stem from biased or insufficient training data or incorrect model assumptions.

AI-Powered Automation: Also known as "intelligent automation," this refers to the integration of AI with rules-based automation tools like robotic process automation (RPA). By incorporating AI technologies such as machine learning (ML), natural language processing (NLP), and computer vision (CV), AI-powered automation expands the scope of tasks that can be automated, enhancing productivity and customer experience.

AI Usage Auditing: An AI usage audit is a comprehensive review that ensures your AI program meets its goals, complies with legal requirements, and adheres to organizational standards. This process helps confirm the ethical and accurate performance of AI systems.

Artificial General Intelligence (AGI): AGI refers to a theoretical AI system that matches human cognitive abilities and adaptability. While it remains a future concept, experts predict it may take decades or even centuries to develop true AGI.

Artificial Intelligence (AI): AI encompasses computer systems that can perform complex tasks traditionally requiring human intelligence, such as reasoning, decision-making, and problem-solving.

Bias: Bias in AI refers to skewed outcomes that unfairly disadvantage certain ideas, objectives, or groups of people. This often results from insufficient or unrepresentative training data.

Confidence Score: A confidence score is a probability measure indicating how certain an AI model is that it has performed its assigned task correctly.

Conversational AI: A type of AI designed to simulate human conversation using techniques like NLP and generative AI. It can be further enhanced with capabilities like image recognition.

Cost Control: The process of monitoring project progress in real time, tracking resource usage, analyzing performance metrics, and addressing potential budget issues before they escalate, ensuring projects stay on track.

Data Annotation (Data Labeling): The process of labeling data with specific features to help AI models learn and recognize patterns during training.

Deep Learning: A subset of machine learning that uses multi-layered neural networks to simulate complex human decision-making processes.

Enterprise AI: AI technology designed specifically to meet organizational needs, including governance, compliance, and security requirements.
Foundational Models: These models learn from large datasets and can be fine-tuned for specific tasks. Their adaptability makes them cost-effective, reducing the need for separate models for each task.

Generative AI: A type of AI capable of creating new content such as text, images, audio, and synthetic data. It learns from vast datasets and generates new outputs that resemble but do not replicate the original data.

Generative AI Feature Governance: A set of principles and policies ensuring the responsible use of generative AI technologies throughout an organization, aligning with company values and societal norms.

Human in the Loop (HITL): A feedback process where human intervention ensures the accuracy and ethical standards of AI outputs, essential for improving AI training and decision-making.

Intelligent Document Processing (IDP): IDP extracts data from a variety of document types using AI techniques like NLP and CV to automate and analyze document-based tasks.

Large Language Model (LLM): An AI technology trained on massive datasets to understand and generate text. LLMs are key in language understanding and generation and utilize transformer models for processing sequential data.

Machine Learning (ML): A branch of AI that allows systems to learn from data and improve accuracy over time through algorithms.

Model Accuracy: A measure of how often an AI model performs tasks correctly, typically evaluated using metrics such as the F1 score, which combines precision and recall (a worked example follows this glossary).

Natural Language Processing (NLP): An AI technique that enables machines to understand, interpret, and generate human language through a combination of linguistic and statistical models.

Retrieval Augmented Generation (RAG): A technique that enhances the reliability of generative AI by incorporating external data to improve the accuracy of generated content (a minimal sketch of the pattern follows this glossary).

Supervised Learning: A machine learning approach that uses labeled datasets to train AI models to make accurate predictions.

Unsupervised Learning: A type of machine learning that analyzes and groups unlabeled data without human input, often used to discover hidden patterns.

By understanding these terms, you can better navigate the world of AI implementation and apply its transformative power to drive innovation and efficiency across your organization.
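To make the Model Accuracy entry concrete, here is a small worked example of the F1 score. The labels and predictions are hypothetical and chosen only to illustrate the arithmetic; the formula itself is standard: F1 is the harmonic mean of precision and recall.

```python
# Worked example of the F1 score mentioned under "Model Accuracy".
# The labels and predictions below are hypothetical, for illustration only.

y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # ground-truth labels
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]  # model predictions

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)          # share of predicted positives that are correct
recall = tp / (tp + fn)             # share of actual positives that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
# precision=0.80 recall=0.80 F1=0.80
```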
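And to illustrate the Retrieval Augmented Generation entry, the sketch below shows the basic retrieve-then-generate pattern. The retriever here is a toy keyword matcher and `call_llm` is a hypothetical placeholder rather than a real API; production RAG systems typically use vector embeddings, a real retrieval index, and a hosted language model.

```python
# Minimal sketch of the Retrieval Augmented Generation (RAG) pattern:
# retrieve relevant context, then hand it to the language model along with the question.
# `call_llm` is a hypothetical placeholder; the "retriever" is a toy keyword match.

documents = [
    "Tectonic is a Salesforce consulting partner.",
    "Retrieval Augmented Generation grounds model answers in external data.",
    "The F1 score combines precision and recall.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Score documents by word overlap with the question and return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real language-model call (e.g., a hosted LLM API)."""
    return f"[model answer based on prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("What does Retrieval Augmented Generation do?"))
```

The design point is simply that the model answers from retrieved context instead of from memory alone, which is what makes the generated content easier to ground and verify.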


Collaborate With AI

Many artists, writers, musicians, and creators fear that AI is taking over their jobs. On the surface, generative AI tools can replicate in moments work that previously took creators hours to produce, often at a fraction of the cost and with similar quality. This shift has led many businesses to adopt AI for content creation, leaving creators worried about their livelihoods. Yet there's another way to view this situation, one that offers hope to creators everywhere.

AI, at its core, is a tool of mimicry. When provided with enough data, it can replicate a style or subject with reasonable accuracy. Most of this data has been scraped from the internet, often without explicit consent, to train AI models on a wide variety of creative outputs. If you're a creator, it's likely that pieces of your work have contributed to the training of these AI models. Your art, words, and ideas have helped shape what these systems now consider 'good' in the realms of art, music, and writing.

AI can combine the styles of multiple creators to generate something new, but these creations often fall flat. Why? While image-generating AI can predict pixels, it lacks an understanding of human emotions. It knows what a smile looks like but can't grasp the underlying feelings of joy, nervousness, or flirtation that make a smile truly meaningful. AI can only generate a superficial replica unless the creator uses extensive prompt engineering to convey the context behind that smile. Emotion is uniquely human, and it's what makes our creations resonate with others. A single brushstroke from a human artist can convey emotions that might take thousands of words to replicate through an AI prompt.

We've all heard the saying, "A picture is worth a thousand words." But generating that picture with AI often takes many more words. Input a short prompt, and the AI will pad it with additional words, often leading to results that stray from your original vision. To achieve a specific outcome, you may need hours of prompt engineering and trial and error, and even then the result might not be quite right. Without a human artist to guide the process, these generated works will often remain unimpressive, no matter how advanced the technology becomes.

That's where you, the creator, come in. By introducing your own inputs, such as images or sketches, and using workflows like those in ComfyUI, you can exert more control over the outputs. AI becomes less of a replacement for the artist and more of a tool or collaborator. It can help speed up the creative process, but it still relies on the artist's hand to guide it toward a meaningful result.

Artists like Martin Nebelong have embraced this approach, treating AI as just another tool in their creative toolbox. Nebelong uses high levels of control in AI-driven workflows to create works imbued with his personal emotional touch. He shares these workflows on platforms like LinkedIn and Twitter, encouraging other creators to explore how AI can speed up their processes while retaining the unique artistry that only humans can provide. Nebelong's philosophy is clear: "I'm pro-creativity, pro-art, and pro-AI. Our tools change, the scope of what we can do changes. I don't think creative AI tools or models have found their best form yet; they're flawed, raw, and difficult to control. But I'm excited for when they find that form and can act as an extension of our hands, our brush, and as an amplifier of our artistic intent."

AI can bring an artist 80% of the way to a finished product, but it's the final 20%, where human skill and emotional depth come in, that elevates the piece to something truly remarkable. Think about the notorious issues with AI-generated hands. The output often features too many fingers or impossible poses, a telltale sign of AI's limitations. An artist is still needed to refine the details, correct mistakes, and bring the creation in line with reality. While using AI may be faster than organizing a full photoshoot or painting from scratch, the artist's role has shifted from full authorship to that of a collaborator, guiding AI toward a polished result.

Nebelong often starts with his own artwork and integrates AI-generated elements, using them to enhance but never fully replace his vision. He might even use AI to generate 3D models, lighting, or animations, but the result is always driven by his creativity. For him, AI is just another step in the creative journey, not a shortcut or a replacement for human effort.

However, AI's ability to replicate the styles of famous artists and public figures raises ethical concerns. With platforms like Civitai making it easy to train models on any style or subject, questions arise about the legality and morality of using someone else's likeness or work without permission. As regulations catch up, we may see a future where AI models trained on specific styles or individuals are licensed, allowing creators to retain control over their works in the same way they license their traditional creations today.

The future may also see businesses licensing AI models trained on actors, artists, or styles, allowing them to produce campaigns without booking the actual talent. This would lower costs while still benefiting creators through licensing fees. Actors and artists could continue to contribute their talents long after they've retired, or even passed on, by licensing their digital likenesses, as seen with CGI performances in movies like Rogue One.

In conclusion, AI is pushing creators to learn new skills and adapt to new tools. While this can feel daunting, it's important to remember that AI is just that: a tool. It doesn't understand emotion, intent, or meaning, and it never will. That's where humans come in. By guiding AI with our creativity and emotional depth, we can produce works that resonate with others on a deeper level. You can tell artificial intelligence what an image should look like, but not what emotions the image should evoke. Creators, your job isn't disappearing. It's changing.


Used YouTube to Train AI

As reported by siliconANGLE's Duncan Riley, a new report reveals that companies such as Anthropic PBC, Nvidia Corp., Apple Inc., and Salesforce Inc. have used subtitles from YouTube videos to train their AI services without obtaining permission. This raises significant ethical questions about the use of publicly available material without consent.

According to Proof News, these companies allegedly utilized subtitles from 173,536 YouTube videos, sourced from over 48,000 channels, to enhance their AI models. Rather than scraping the content themselves, Anthropic, Nvidia, Apple, and Salesforce reportedly used a dataset provided by EleutherAI, a nonprofit AI organization. EleutherAI, founded in 2020, focuses on the interpretability and alignment of large AI models. The organization aims to democratize access to advanced AI technologies by developing and releasing open-source AI models like GPT-Neo and GPT-J, and it advocates for open science norms in natural language processing, promoting transparency and ethical AI development.

The dataset in question, known as "YouTube Subtitles," includes transcripts from educational and online learning channels, as well as several media outlets and YouTube personalities. Notable YouTubers whose transcripts appear in the dataset include MrBeast, Marques Brownlee, PewDiePie, and left-wing political commentator David Pakman. Some creators whose content was used are outraged. Pakman, for example, argues that using his transcripts jeopardizes his livelihood and that of his staff. David Wiskus, CEO of the streaming service Nebula, has even called the use of the data "theft."

Although the data is publicly accessible, the controversy centers on the fact that large language models are using it. The situation echoes recent legal actions over the use of publicly available data to train AI models. Microsoft Corp. and OpenAI, for instance, were sued in November over their use of nonfiction authors' works for AI training; the class-action lawsuit, led by a New York Times reporter, claimed that OpenAI scraped the content of hundreds of thousands of nonfiction books to develop its AI models. Additionally, The New York Times accused OpenAI, Google LLC, and Meta Platforms Inc. in April of skirting legal boundaries in their use of AI training data.

The legality of using such data for AI training remains a gray area that has yet to be extensively tested in court. Should a case arise, the key issue will likely be whether publicly stated facts, including utterances, can be copyrighted. Relevant U.S. case law includes Feist Publications Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991), and International News Service v. Associated Press (1918). In both cases, the U.S. Supreme Court ruled that facts cannot be copyrighted.
