Scams Archives - gettectonic.com

Is it Real or is it Gen-r-X?

The Rise of AI-Generated Content: A Double-Edged Sword

It began with a viral deepfake video of a celebrity singing an unexpected tune. Soon, political figures appeared to say things they never uttered. Before long, hyper-realistic AI-generated content flooded the internet, blurring the line between reality and fabrication. While AI-driven creativity unlocks endless possibilities, it also raises an urgent question: how can society discern truth in an era when anything can be convincingly fabricated? Enter SynthID, Google DeepMind's pioneering solution, designed to embed imperceptible watermarks in AI-generated content and offer a reliable method of verifying authenticity.

What Is SynthID, and Why Does It Matter?

At its core, SynthID is an AI-powered watermarking tool that embeds and detects digital signatures in AI-generated images. Unlike traditional watermarks, which can be removed or altered, SynthID's markers are nearly invisible to the human eye but detectable by specialized AI models. This innovation represents a significant step in combating AI-generated misinformation while preserving the integrity of creative AI applications.

How SynthID Works

SynthID's technology operates in two critical phases: embedding, in which the watermark is woven into the content as it is generated, and detection, in which a companion model scans content for that hidden signature. This method ensures that even if an image is slightly edited, resized, or filtered, the SynthID watermark remains intact, making it far more resilient than conventional watermarking techniques.

SynthID for AI-Generated Text

Large language models (LLMs) generate text one token at a time, where each token may represent a single character, a word, or part of a phrase. The model predicts the next most likely token based on the preceding words and the probability scores assigned to the candidate options. For example, given the phrase "My favorite tropical fruits are __," an LLM might predict tokens like "mango," "lychee," "papaya," or "durian," each with its own probability score. When multiple viable options exist, SynthID can adjust these probability scores, without compromising output quality, to embed a detectable signature. (Source: DeepMind)

SynthID for AI-Generated Music

SynthID converts an audio waveform, a one-dimensional representation of sound, into a spectrogram, a two-dimensional visualization of how frequencies change over time. The digital watermark is embedded into this spectrogram before it is converted back into an audio waveform. The process leverages properties of human hearing to ensure the watermark remains inaudible, preserving the listening experience, and the watermark is robust against common modifications such as added noise, MP3 compression, and tempo changes. SynthID can also scan audio tracks for watermarks at different points, helping determine whether segments were generated by Lyria, Google's advanced AI music model. (Source: DeepMind)
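To make the spectrogram approach concrete, here is a minimal, hypothetical sketch of frequency-domain watermarking. It is not DeepMind's algorithm (SynthID's encoder and detector are trained neural networks); the key-derived bin pattern, the `strength` value, and the function name are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def embed_watermark(audio, fs, key=42, strength=0.01):
    # Transform the waveform into a spectrogram (frequency bins x time frames).
    _, _, Z = stft(audio, fs=fs, nperseg=1024)
    # Derive a +/- pattern over frequency bins from the secret key.
    rng = np.random.default_rng(key)
    pattern = rng.choice([1.0, -1.0], size=Z.shape[0])
    # Nudge each bin's magnitude up or down by a tiny, inaudible amount.
    Z_marked = Z * (1.0 + strength * pattern)[:, None]
    # Convert the marked spectrogram back into a waveform.
    _, marked = istft(Z_marked, fs=fs, nperseg=1024)
    return marked

# Example: watermark one second of a 440 Hz tone sampled at 16 kHz.
fs = 16_000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
marked = embed_watermark(tone, fs)
```

A detector holding the same key would look for the matching up/down pattern in a suspect track's bin energies; surviving MP3 compression and tempo changes at this level of subtlety is precisely why production systems rely on learned models instead.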
The Urgent Need for Digital Watermarking in AI

AI-generated content is already disrupting multiple industries. In this chaotic landscape, SynthID serves as a digital signature of truth, offering journalists, artists, regulators, and tech companies a crucial tool for transparency.

Real-World Impact: How SynthID Is Being Used Today

SynthID is already integrated into Google's Imagen, a text-to-image AI model, and is being tested across industries. By embedding SynthID into digital content pipelines, these industries are fostering an ecosystem in which AI-generated media is traceable, reducing misinformation risks.

Challenges & Limitations: Is SynthID Foolproof?

While groundbreaking, SynthID is not without challenges and limitations; even so, it lays the foundation for a future in which AI-generated content can be reliably traced.

The Future of AI Content Verification

Google DeepMind's SynthID is just the beginning; the battle against AI-generated misinformation will require a growing toolkit of verification methods. As AI reshapes the digital world, tools like SynthID help ensure that innovation does not come at the cost of authenticity.

The Thin Line Between Trust & Deception

AI is a powerful tool, but without safeguards it can become a weapon of misinformation. SynthID represents a bold step toward transparency, helping society navigate the blurred boundary between real and artificial content. As the technology evolves, businesses, policymakers, and users must embrace solutions like SynthID to ensure AI enhances reality rather than distorting it. The next time an AI-generated image appears, one might ask: is it real, or does it carry the invisible signature of SynthID?


AI's Role in Scams

How Generative AI is Supporting the Creation of Lures & Scams: A Guide for Value Added Resellers
Copyright © 2024 Gen Digital Inc. All rights reserved. Avast is part of Gen™.

A long, long time ago, I worked for an antivirus company that has since been acquired by Avast. Knowing many of the people involved in this area of artificial intelligence, I pay attention when they publish a white paper. AI in scams is something we should all be concerned about, and I am excited to share this one in our Tectonic Insights.

Executive Summary

The capabilities and global usage of both large language models (LLMs) and generative AI are rapidly increasing. While these tools offer significant benefits to the general public and businesses, they also pose potential risks for misuse by malicious actors, including the misuse of tools like OpenAI's ChatGPT and other GPTs. This document explores how the ChatGPT brand is exploited for lures, scams, and other social engineering threats.

Generative AI is expected to play a crucial role in the cyber threat landscape, particularly in creating highly believable, multilingual texts for phishing and scams. These advances give even unsophisticated scammers more opportunities than ever for convincing social engineering.

Conversely, we believe generative AI will not drastically change the landscape of malware generation in the near term. Despite numerous proofs of concept, the complexity of generative AI methods still makes traditional, simpler methods more practical for malware creation. In short, the good may not outweigh the bad, just yet.

Recognizing the value of generative AI for legitimate purposes is important. AI-based security and assistant tools with varying levels of maturity and specialization are already emerging in the market. As these tools evolve and become more widely available, substantial improvements in their capabilities are anticipated.

AI-Generated Lures and Scams

AI-generated lures and scams are increasingly prevalent. Cybercriminals use AI to create lures and conduct phishing attempts and scams through various texts: emails, social media content, e-shop reviews, SMS scams, and more. AI improves the credibility of social scams by producing trustworthy, authentic texts, eliminating traditional phishing red flags like broken language and awkward forms of address. These advanced threats have exploited societal issues and initiatives, including cryptocurrencies, Covid-19, and the war in Ukraine. The popularity of ChatGPT among hackers stems more from its widespread recognition than from its AI capabilities, making the brand a prime lure for attackers to exploit.

How is Generative AI Supporting the Creation of Lures and Scams?

Generative AI, particularly ChatGPT, enhances the language used in scams, enabling cybercriminals to create more advanced texts than they otherwise could. AI can correct grammatical errors, provide multilingual content, and generate multiple text variations to improve believability. For sophisticated phishing attacks, attackers must integrate the AI-generated text into credible templates. They can purchase functional, well-designed phishing kits or use web-archiving tools to replicate legitimate websites, altering URLs to phish victims. Currently, attackers still need to build some aspects of their attempts manually; ChatGPT is not yet an "out-of-the-box" solution for advanced malware creation.
However, the emergence of multimodal models, which combine outputs like images, audio, and video, will enhance the capabilities of generative AI for creating believable phishing and scam campaigns.

Malvertising

Malvertising, or "malicious advertising," involves disseminating malware through online ads. Cybercriminals exploit the widespread reach and interactive nature of digital ads to distribute harmful content. Instances have been observed where ChatGPT's name is used in malicious vectors on platforms like Facebook, leading users to fraudulent investment portals. Users who provide personal information become vulnerable to identity theft, financial fraud, account takeovers, and further scams. The collected data is often sold on the dark web, contributing to the broader cybercrime ecosystem. Recognizing and mitigating these deceptive tactics is crucial.

YouTube Scams

YouTube, one of the world's most popular platforms, is not immune to cybercrime. Fake videos featuring prominent figures are used to trick users into harmful actions. This strategy, known as the "appeal to authority," exploits trust and credibility to phish personal details or coerce victims into sending money. For example, videos featuring Elon Musk discussing OpenAI have been modified to scam victims: a QR code displayed in the video redirects users to a scam page, often a cryptocurrency scam or phishing attempt. As AI models like Midjourney and DALL-E mature, the use of fake images, videos, and audio is expected to increase, enhancing the credibility of these scams.

Typosquatting

Typosquatting involves minor changes in URLs or names to redirect users to different websites, potentially leading to phishing attacks or the installation of malicious applications. An example is an Android app named "Open Chat GBT: AI Chat Bot," where a subtle alteration can deceive users into downloading harmful software.
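Defenders can catch many such lookalikes automatically. Below is a minimal, hypothetical sketch that flags names suspiciously close to a protected brand using a simple string-similarity ratio; the brand list, threshold, and function names are illustrative assumptions, and real brand-protection systems use far richer signals (homoglyph tables, WHOIS data, certificate transparency logs).

```python
from difflib import SequenceMatcher

KNOWN_BRANDS = ["chatgpt", "openai", "google"]  # names we want to protect

def similarity(a: str, b: str) -> float:
    # Ratio of matching characters between the two strings (0.0 to 1.0).
    return SequenceMatcher(None, a, b).ratio()

def looks_like_typosquat(domain: str, threshold: float = 0.8) -> bool:
    """Flag names that are close to, but not equal to, a known brand,
    e.g. 'chat gbt' vs 'chatgpt'."""
    name = domain.lower().split(".")[0].replace("-", "").replace(" ", "")
    return any(
        name != brand and similarity(name, brand) >= threshold
        for brand in KNOWN_BRANDS
    )

print(looks_like_typosquat("chatgbt.com"))   # True: one letter off
print(looks_like_typosquat("chatgpt.com"))   # False: exact match
```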
Browser Extensions

The popularity of ChatGPT has led to the emergence of numerous browser extensions. While many are legitimate, others are malicious and designed to lure victims. Attackers create extensions with names resembling ChatGPT to deceive users into downloading harmful software, such as adware or spyware. These extensions can also subscribe users to services that levy recurring charges, a practice known as fleeceware. For instance, a malicious extension mimicking "ChatGPT for Google" was reported by Guardio; it stole Facebook sessions and cookies but was removed from the Chrome Web Store after being reported.

Installers and Cracks

Malicious installers often mimic legitimate tools, tricking users into installing malware. Some promise to install ChatGPT but instead deploy malware like NodeStealer, which steals passwords and browser cookies. Cracked or unofficial software versions pose similar risks, hiding malware that can steal personal information or take control of computers. This method of distributing malware has been around for decades, but the popularity of ChatGPT and other free-to-download tools has given it a resurgence.

Fake Updates

Fake updates are a common tactic in which users are prompted to update their browser to access content. Campaigns like SocGholish use ChatGPT-related articles to lure users into downloading remote access trojans (RATs), giving attackers control over infected devices. These pages are often hosted on vulnerable WordPress sites or on sites with similarly weak security.

How AI is Raising the Stakes in Phishing Attacks

Cybercriminals are increasingly using advanced AI, including tools like ChatGPT, to execute highly convincing phishing campaigns that mimic legitimate communications with uncanny accuracy. As AI-powered phishing becomes more sophisticated, cybersecurity practitioners must adopt AI and machine learning defenses to stay ahead.

What are AI-Powered Phishing Attacks?

Phishing, a long-standing cybersecurity problem, has evolved from crude scams into refined attacks that can mimic trusted entities like Amazon, postal services, or colleagues. Leveraging social engineering, these scams trick people into clicking malicious links, downloading harmful files, or sharing sensitive information. AI is elevating this threat by making phishing attacks more convincing, more timely, and harder to detect.

General Phishing Attacks

Traditionally, phishing emails were often easy to spot thanks to grammatical errors or poor formatting. AI eliminates these mistakes, creating messages that appear professionally written. AI language models can also gather real-time data from news and corporate sites, embedding relevant details that create urgency and heighten the attack's credibility. AI chatbots can even generate business email compromise attacks or whaling campaigns at massive scale, boosting both the volume and the sophistication of these threats.

Spear Phishing

Spear phishing targets specific individuals with highly customized messages based on data gathered from social media or data breaches. AI has supercharged this tactic, enabling attackers to craft convincing, personalized emails almost instantly. In one cybersecurity study, AI-generated phishing emails outperformed human-crafted ones at convincing recipients to click on malicious links. With the help of large language models (LLMs), attackers can create hyper-personalized emails and even deepfake phone calls and videos.

Vishing and Deepfakes

Vishing, or voice phishing, is another tactic on the rise. Traditionally, attackers would impersonate someone like a company executive or trusted colleague over the phone. With AI, they can now create deepfake audio that mimics a specific person's voice, making it even harder for victims to discern authenticity. For example, an employee may receive a voice message that sounds exactly like their CFO urgently requesting a bank transfer.

How to Defend Against AI-Driven Phishing Attacks

As AI-driven phishing becomes more prevalent, organizations should adopt layered defenses: security awareness training, phishing simulations, strong multi-factor authentication, verification procedures for financial requests, and AI-assisted email filtering.

How AI Improves Phishing Defense

AI can also bolster phishing defenses by analyzing threat patterns, personalizing training, and monitoring for suspicious activity. GenAI, for instance, can tailor training to individual users' weaknesses, offer timely phishing simulations, and assess each person's learning needs to enhance cybersecurity awareness. AI can also predict potential phishing trends based on data such as attack frequency across industries, geographic locations, and types of targets. These insights allow security teams to anticipate attacks and proactively adapt defenses.
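As a concrete illustration of the detection side, here is a minimal, hypothetical sketch of a text classifier that scores incoming mail for phishing language. The four training messages and the pipeline choices are toy assumptions; a production system would train on large labeled corpora and combine the text score with header, URL, and sender-reputation signals.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: 1 = phishing, 0 = legitimate.
emails = [
    "Your account is locked, verify your password immediately here",
    "Urgent wire transfer needed, reply with bank details",
    "Team lunch is moved to Thursday at noon",
    "Attached is the Q3 report we discussed yesterday",
]
labels = [1, 1, 0, 0]

# Bag-of-words features (unigrams and bigrams) feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please verify your password to avoid account suspension"]
print(model.predict_proba(suspect))  # probability of [legitimate, phishing]
```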
Preparing for AI-Enhanced Phishing Threats

Businesses should evaluate their risk level and implement corresponding safeguards. AI, and particularly LLMs, are transforming phishing attacks, making them more dangerous and harder to detect. As digital footprints grow and personalized data becomes more accessible, phishing attacks will continue to evolve, including falsified voice and video messages that can trick even the most vigilant employees. By proactively integrating AI defenses, organizations can better protect themselves against these advanced phishing threats.


CAN-SPAM Act

Do you use email for your business? The CAN-SPAM Act, a law that regulates commercial email, sets requirements for these messages, grants recipients the right to stop receiving them, and imposes significant penalties for non-compliance. The FTC enforces the CAN-SPAM Act and the associated CAN-SPAM Rule.

Contrary to what its name might suggest, the CAN-SPAM Act isn't limited to bulk email. It applies to all commercial messages, defined as any electronic mail message whose primary purpose is to advertise or promote a commercial product or service, including emails that promote content on commercial websites. The law also applies to business-to-business email, meaning every commercial email, such as one announcing a new product line to former customers, must adhere to CAN-SPAM regulations. Each individual email that violates the CAN-SPAM Act can result in penalties of up to $51,744, making compliance crucial.

Fortunately, following the law is straightforward. Here's an overview of CAN-SPAM's key requirements:

- Don't use false or misleading header information.
- Don't use deceptive subject lines.
- Identify the message as an advertisement.
- Tell recipients where you're located, using a valid physical postal address.
- Tell recipients how to opt out of receiving future email from you.
- Honor opt-out requests promptly.
- Monitor what others are doing on your behalf.
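Some of these requirements lend themselves to automated pre-send checks. Below is a minimal, hypothetical sketch that scans an outgoing message for an opt-out mechanism and a physical postal address; the function name and regex patterns are illustrative assumptions, and no script can settle the legal questions, such as whether a subject line is deceptive or whether a message's primary purpose is commercial.

```python
import re

def check_can_spam_basics(subject: str, body: str) -> list[str]:
    """Flag obviously missing CAN-SPAM elements in an outgoing email."""
    problems = []
    # Requirement: tell recipients how to opt out of future email.
    if not re.search(r"unsubscribe|opt[- ]?out", body, re.IGNORECASE):
        problems.append("No visible opt-out mechanism found.")
    # Requirement: include a valid physical postal address.
    if not re.search(r"\d+\s+\w+.*(street|st\.|ave|road|rd\.|suite|box)",
                     body, re.IGNORECASE):
        problems.append("No obvious physical postal address found.")
    if not subject.strip():
        problems.append("Empty subject line.")
    return problems

print(check_can_spam_basics(
    "March newsletter",
    "News... Our address: 123 Main Street, Suite 4. Click here to unsubscribe."
))  # -> [] (no basic problems detected)
```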
Frequently Asked Questions

Q: How do I know if the CAN-SPAM Act applies to the emails my business sends?
A: The law applies based on the "primary purpose" of the message. An email can contain three types of content: commercial content, transactional or relationship content, and other content. If the message's primary purpose is commercial, it must comply with CAN-SPAM. If it's transactional or relationship-based, it must still avoid false or misleading routing information but is otherwise exempt from most CAN-SPAM requirements.

Q: How can I determine if an email is a transactional or relationship message?
A: Broadly, an email is transactional or relationship-focused if it facilitates or confirms an agreed-upon transaction, provides warranty, recall, safety, or security information about a product the recipient purchased, provides notices about an ongoing account, membership, or subscription, relates to an employment relationship or employee benefits, or delivers goods or services the recipient is already entitled to receive. These categories are interpreted narrowly, so be careful about assuming that any message sent to subscribers or members is transactional or relationship-based. Consider whether a reasonable recipient would view the email's primary purpose as fitting one of these categories; if not, the email must comply with CAN-SPAM.

Q: What if an email combines commercial and transactional/relationship content?
A: When an email includes both commercial and transactional/relationship content, the primary purpose determines its status. If the subject line leads a recipient to believe the message is primarily commercial, or if the transactional/relationship content isn't prominent at the beginning, the email is considered commercial and must comply with CAN-SPAM.

Q: What if a message contains both commercial content and content classified as "other"?
A: If a message includes both commercial content and other types of content, the CAN-SPAM Act applies if the primary purpose of the message is commercial. Factors that influence this interpretation include the placement of the commercial content (for example, whether it appears at the beginning of the message), the proportion of the message dedicated to commercial content, and how elements like color, graphics, and text style are used to emphasize the commercial aspects.

Q: What if an email includes content from more than one company? Who is responsible for CAN-SPAM compliance?
A: When an email promotes the products, services, or websites of multiple marketers, the responsible "sender" under the CAN-SPAM Act is typically determined by agreement among the marketers. The designated sender must meet the Act's sender obligations, including honoring opt-out requests; if it fails to do so, all marketers involved may be held liable as senders.

Q: My company sends emails with a "Forward to a Friend" feature. Who is responsible for CAN-SPAM compliance for these forwarded messages?
A: Whether a seller or forwarder counts as a "sender" or "initiator" under the CAN-SPAM Act depends on the situation. Typically, the Act applies if the seller offers an incentive for forwarding the message, such as money, discounts, or sweepstakes entries. If a seller provides any benefit in exchange for forwarding an email or generating traffic, they are likely subject to CAN-SPAM regulations.

Q: What are the penalties for violating the CAN-SPAM Act?
A: Each email that violates the CAN-SPAM Act can result in penalties of up to $51,744, and multiple parties can be held responsible: both the company whose product is promoted and the company that sent the message may be liable. Emails containing misleading claims may also violate other laws, such as Section 5 of the FTC Act, which prohibits deceptive advertising. The CAN-SPAM Act also defines aggravated violations that can lead to additional fines and even criminal penalties, including imprisonment, for practices such as harvesting email addresses, generating addresses through dictionary attacks, or relaying messages through computers without authorization. Civil penalties may also require restitution to consumers under Section 19 of the FTC Act, covering not just what consumers paid but also the value of their lost time.

Q: Are there specific rules for sexually explicit marketing emails?
A: Yes, the FTC has rules under the CAN-SPAM Act for emails with sexually explicit content. These emails must start with "SEXUALLY-EXPLICIT:" in the subject line. The body of the email must initially display only this warning and the standard CAN-SPAM information: the message's commercial nature, the sender's physical address, and an opt-out method. No images or graphics are allowed in this initial view, ensuring that sexually explicit content isn't visible without an affirmative action, like scrolling or clicking. This requirement doesn't apply if the recipient has previously consented to receive such messages.

Need More Information?

For more detailed guidance on CAN-SPAM compliance, refer to the full CAN-SPAM Act or consult the FTC's resources.

About the FTC

The FTC is dedicated to preventing fraudulent, deceptive, and unfair practices affecting businesses and consumers. You can report scams and unethical business practices at ReportFraud.ftc.gov. For guidance on legal compliance, visit business.ftc.gov. Understanding and fulfilling your compliance obligations is smart business practice, regardless of your organization's size or industry. For updates on cases and initiatives, subscribe to the FTC's Business Blog.

Your Opportunity to Comment

The National Small Business Ombudsman and 10 Regional Fairness Boards collect feedback from small businesses regarding federal compliance and enforcement activities. The Ombudsman evaluates these activities annually and rates each agency's responsiveness to small businesses. Comments can be submitted without fear of reprisal by calling 1-888-REGFAIR (1-888-734-3247) or visiting www.sba.gov/ombudsman.

Content updated January 2024.


AI Watermarking

What is AI Watermarking?

AI watermarking is the process of embedding a unique, identifiable signal, called a watermark, into the output of an artificial intelligence model, such as text or images, to mark it as AI-generated. The watermark can then be detected by specialized algorithms designed to scan for it. An effective AI watermark should be imperceptible to humans, robust to common edits, and reliably detectable by the intended tools.

AI watermarking has gained attention with the rise of consumer-facing AI tools like text and image generators, which can produce highly realistic content. For instance, in March 2023, an AI-generated image of the Pope wearing a puffer coat went viral, misleading many into believing it was real. While some AI-generated content is harmless, the technology also poses risks such as misinformation, impersonation, and fraud. To combat these risks, researchers are developing watermarking techniques to help distinguish AI-generated content from human-created material.

How AI Watermarking Works

AI watermarking involves two key stages: embedding, in which the signal is inserted as the content is generated, and detection, in which an algorithm checks a piece of content for that signal.

Example: Text Watermarking in LLMs

A technique proposed by OpenAI researcher Scott Aaronson involves using a cryptographic pseudorandom function, keyed with a secret, to subtly bias which tokens an LLM samples, so that the output carries a statistical signature detectable by anyone who holds the key.
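To make the idea concrete, here is a minimal, hypothetical sketch of a "green list" variant of pseudorandom token biasing, in the spirit of published LLM watermarking research rather than OpenAI's or Google's actual implementations. The vocabulary size, bias value, and function names are illustrative assumptions.

```python
import hashlib
import numpy as np

VOCAB = 50_000          # toy vocabulary size
GREEN_FRACTION = 0.5    # share of tokens marked "green" at each step
BIAS = 2.0              # logit boost given to green tokens

def green_mask(key: str, prev_token: int) -> np.ndarray:
    """Pseudorandomly split the vocabulary, keyed by a secret and the context."""
    digest = hashlib.sha256(f"{key}:{prev_token}".encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.random(VOCAB) < GREEN_FRACTION

def sample_watermarked(logits: np.ndarray, prev_token: int, key: str) -> int:
    """Boost green-token logits slightly, then sample as usual."""
    biased = logits + BIAS * green_mask(key, prev_token)
    probs = np.exp(biased - biased.max())
    return int(np.random.default_rng().choice(VOCAB, p=probs / probs.sum()))

def detection_zscore(tokens: list, key: str) -> float:
    """How far the observed share of green tokens exceeds chance."""
    hits = sum(green_mask(key, prev)[tok]
               for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected, var = GREEN_FRACTION * n, n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / var ** 0.5
```

Over a few hundred tokens, watermarked text produces a large z-score while ordinary text stays near zero; note that paraphrasing re-rolls the token sequence and erodes the signal, one of the limitations discussed below.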
Similarly, image generators could embed watermarks by weaving imperceptible, patterned adjustments into pixel values during generation; a classical illustration of the pixel-level idea follows.
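Below is a minimal, hypothetical sketch of the oldest pixel-level approach, least-significant-bit (LSB) embedding. It is an illustration only: modern generator watermarks are learned, and LSB marks are easily destroyed by re-encoding, which is one of the limitations discussed next. The function names and the key-based pixel selection are assumptions for the example.

```python
import numpy as np

def embed_bits(image: np.ndarray, bits: np.ndarray, key: int = 7) -> np.ndarray:
    """Hide a bit string in the least significant bits of key-selected pixels."""
    flat = image.astype(np.uint8).ravel()
    rng = np.random.default_rng(key)                 # key picks pixel positions
    positions = rng.choice(flat.size, size=bits.size, replace=False)
    flat[positions] = (flat[positions] & 0xFE) | bits  # overwrite each LSB
    return flat.reshape(image.shape)

def extract_bits(image: np.ndarray, n_bits: int, key: int = 7) -> np.ndarray:
    rng = np.random.default_rng(key)                 # same key, same positions
    positions = rng.choice(image.size, size=n_bits, replace=False)
    return image.ravel()[positions] & 1

img = np.full((64, 64), 128, dtype=np.uint8)         # dummy grayscale image
payload = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
marked = embed_bits(img, payload)
print(extract_bits(marked, payload.size))            # -> [1 0 1 1 0 1 0 0]
```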
Benefits of AI Watermarking

A reliable watermark would let platforms, journalists, and educators label AI-generated media, support provenance and authenticity checks, and deter misuse.

Limitations & Challenges

Despite its potential, current AI watermarking has significant drawbacks: watermarks can often be weakened or removed by paraphrasing, cropping, or re-encoding, and detectors can misclassify content in both directions.

Conclusion

AI watermarking is a promising but imperfect solution for identifying AI-generated content. While it could help mitigate misinformation and verify authenticity, current methods remain unreliable. Future advancements will need to address removal resistance, false detection, and ethical implications before watermarking becomes a widely adopted standard.