CIO Archives - gettectonic.com - Page 4

Acceptable AI Use Policies

With great power comes, where generative AI is concerned, significant security and compliance risk. Discover how AI acceptable use policies can safeguard your organization while leveraging this transformative technology. AI has become integral across industries, driving digital operations and organizational infrastructure, but its widespread adoption brings substantial risks, particularly to cybersecurity. A crucial part of managing those risks and securing sensitive data is an AI acceptable use policy: a document that defines how an organization handles AI risk and sets guidelines for how AI systems may be used.

Why an AI Acceptable Use Policy Matters

Generative AI systems and large language models are potent tools, capable of processing and analyzing data at unprecedented speed. Yet this power comes with risks. The same features that make AI efficient can be misused for malicious purposes, such as generating phishing content, creating malware, producing deepfakes, or automating cyberattacks. An acceptable use policy is the organization's first line of defense against such misuse.

Crafting an Effective AI Acceptable Use Policy

An AI acceptable use policy should be tailored to your organization's needs and context rather than copied from a template. A robust policy typically covers permitted and prohibited uses, data handling rules, and accountability for violations.

An AI acceptable use policy is not just a document but a dynamic framework guiding safe and responsible AI use within an organization. By developing and enforcing one, organizations can harness AI's power while mitigating its risks to cybersecurity and data integrity, balancing innovation with risk management as AI continues to evolve and integrate into our digital landscapes.
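Policies are easier to follow when parts of them are enforceable in tooling. As a minimal sketch (the category names and keyword lists here are invented for illustration, not drawn from any real policy), a prompt-screening gate in front of a generative AI system might look like:

```python
# Hypothetical sketch: enforcing one element of an AI acceptable use policy
# (screening prompts against prohibited-use categories) before a request
# reaches a generative AI system. Categories and phrases are illustrative.

PROHIBITED_CATEGORIES = {
    "malware": ["write ransomware", "keylogger source"],
    "phishing": ["phishing email", "credential harvesting page"],
    "impersonation": ["deepfake", "voice clone"],
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, violated_categories) for a user prompt."""
    text = prompt.lower()
    violations = [
        category
        for category, phrases in PROHIBITED_CATEGORIES.items()
        if any(phrase in text for phrase in phrases)
    ]
    return (not violations, violations)
```

Keyword matching like this is deliberately crude; real policy enforcement would layer on context-aware classifiers and human review for borderline cases.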


Copilots in the Workplace

The Rise of AI-Powered Copilots in the Workplace: The New Age of Office Helpers

As more businesses embrace AI tools, the tech world is buzzing with a new kind of office assistant: the AI-powered copilot. These digital sidekicks are here to revolutionize how we interact with information; think of them as the high-tech, caffeine-free version of the office buddy who always knows where the stapler is.

AI-powered copilots use large language models (LLMs) to help users wade through vast amounts of data, often with the grace of a caffeinated librarian. By supporting conversation instead of requiring precisely phrased queries, these tools let you ask for help without channeling your inner tech wizard. Hugo Sarrazin, Chief Product and Technology Officer at UKG, points out that many AI copilots are essentially "search functions dressed up in a snazzy new outfit." UKG's own digital assistant, UKG Bryte, debuted last November, just in time to help you find out why your vacation request hasn't been approved yet.

These assistants offer an enhanced chatbot experience by understanding a wide range of queries through generative AI. Imagine asking your chatbot, "Hey, what's the deadline for open enrollment?" and getting a response that doesn't require translating your question into a techie dialect. "Generative AI isn't stuck on keywords and rigid queries. It's like a magic eight ball with a PhD," Sarrazin explains. Traditional systems force users through pre-set menus and workflows, a kind of bureaucratic maze; copilots let you skip the detours and get straight to the point. Ask in plain language and receive useful answers without needing to consult a human. Picture this: an HR chatbot that knows exactly what the per diem is for your conference, or which days you're free before the next company holiday, like a personal assistant who never needs a coffee break.
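The pattern behind these copilots, matching a plain-language question to relevant knowledge and letting an LLM phrase the answer, can be sketched roughly as follows. The knowledge base and the stubbed-out LLM call are invented for illustration, not any vendor's implementation:

```python
# Hypothetical sketch of the copilot pattern: instead of a fixed menu tree,
# match a plain-language question against a small knowledge base and hand
# the best snippets to an LLM. All content here is illustrative.

KNOWLEDGE_BASE = {
    "open enrollment": "Open enrollment runs November 1-15.",
    "vacation request": "Vacation requests are approved by your manager in the HR portal.",
    "per diem": "The standard conference per diem is set by the travel policy.",
}

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank snippets by keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda kv: len(words & set(kv[0].split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def ask_copilot(question: str) -> str:
    """Answer a plain-language question from retrieved context."""
    context = " ".join(retrieve(question))
    # A real copilot would call an LLM here; we stub that step out.
    return f"Based on policy: {context}"
```

A production system would replace the keyword overlap with semantic search, but the shape of the flow (retrieve, then generate) is the same.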
Salesforce employees, for instance, are getting a taste of this futuristic help with their Einstein copilot. Since Einstein's introduction, Salesforce has seen an uptick in productivity and a drop in mundane tasks. Nathalie Scardino, Salesforce's Chief People Officer, says the company has been working to integrate AI tools seamlessly into daily workflows, because nothing says "we care" like a virtual assistant who understands your workload better than you do.

Following Salesforce's 2020 acquisition of Slack, the Einstein-powered Slack app launched in February. It helps with scheduling, document summarization, and general inquiries, effectively turning your to-do list into a "done" list. Research showed that desk workers spend 41% of their time on tasks that aren't exactly rocket science, and Einstein is here to tackle those chores. Scardino and Salesforce's CIO, Juan Perez, have been busy ensuring that AI tools fit into the company's workflow.

Einstein is also making waves in HR by integrating with Basecamp, Salesforce's hub for employee information. The integration has answered over 88,000 queries and cut resolution times from two days to just 30 minutes, making it the office hero you didn't know you needed. "The big win here is bringing all those disparate systems together and making information accessible without needing a PhD," Scardino quips. "No more hopping between six systems just to find out about your healthcare benefits."

In this brave new world of AI-assisted work, copilots like Einstein are proving that getting the right information quickly is no longer a sci-fi dream. They're here to make our office lives smoother, smarter, and a little less dependent on old-fashioned human helpers.


Sensitive AI Knowledge Models

Based on the writings of David Campbell in Generative AI.

"Crime is the spice of life." This quote from an unnamed frontier-model engineer has been resonating for months, ever since a coworker mentioned it after a conference. It sparked an interesting thought: for an AI model to be truly useful, it needs comprehensive knowledge, including the potentially dangerous information we wouldn't want it to share with just anyone. A student trying to understand the chemical reaction behind an explosion, for example, needs the AI to explain it accurately. That sounds innocuous, but it borders on the darker side of malicious LLM extraction: the student needs an explanation accurate enough to understand the reaction without obtaining a recipe for causing it.

AI red-teaming is a process with cybersecurity origins. The DEFCON conference, in an event co-hosted by the White House, held the first Generative AI Red Team competition, where thousands of attendees tested eight large language models from an assortment of AI companies. In cybersecurity, red-teaming implies an adversarial relationship with a system or network: a red-teamer's goal is to break into, hack, or simulate damage to a system in a way that emulates a real attack. In AI red teaming, the initial approach often involves testing the limits of the LLM, such as trying to extract instructions for building a pipe bomb.
This is not purely curiosity; it serves as a test of the model's boundaries. The red-teamer has to know the correct way to make a pipe bomb, because knowing accurate details about sensitive topics is crucial for effective red teaming. Without that knowledge, it is impossible to judge whether the model's responses are accurate or mere hallucinations.

This realization highlights a significant challenge: it's not just about preventing the AI from sharing dangerous information, but ensuring that when it does share sensitive knowledge, it isn't inadvertently spreading misinformation. Balancing the prevention of harm through restricted access to dangerous knowledge against the greater harm of inaccurate information falling into the wrong hands is a delicate act. AI models need to be knowledgeable enough to be helpful but not so uninhibited that they become a how-to guide for malicious activities. The challenge is creating AI that can navigate this ethical minefield, handling sensitive information responsibly without becoming a source of dangerous knowledge.

The Ethical Tightrope of AI Knowledge

Dumbed-down AIs are not a viable solution, as they would be ineffective; AIs that share sensitive information freely are equally unacceptable. The solution lies in a nuanced approach to ethical training, where the AI understands the context and potential consequences of the information it shares.

Ethical Training: More Than Just a Checkbox

Ethics in AI cannot be reduced to a simple set of rules; it involves complex, nuanced understanding that even humans grapple with. Developing sophisticated ethical training regimens for AI models is essential. Such training should go beyond a list of prohibited topics, aiming to instill a deep understanding of intention, consequences, and social responsibility.
Imagine an AI that recognizes sensitive queries and responds appropriately, not with a blanket refusal, but with a nuanced explanation that educates the user about potential dangers without revealing harmful details. That is the goal for AI ethics, though an AI is not going to demand parental permission before a young person can access information, or vet every prompt, simply because a request touches a sensitive topic.

The Red Team Paradox

Effective AI red teaming requires knowledge of the very things the AI should not share. This creates a paradox similar to hiring ex-hackers for cybersecurity: effective, but not without risks. Tools like the WMDP Benchmark help measure and mitigate AI risks in critical areas, providing a structured approach to red teaming. Navigating the paradox takes diverse expertise: red teams should include experts from various fields dealing with sensitive information, ensuring comprehensive coverage without any single person needing expertise in every dangerous area.

Controlled Testing Environments

Creating secure, isolated environments for testing sensitive scenarios is crucial. These virtual spaces allow safe experimentation with the AI's knowledge without real-world consequences.

Collaborative Verification

Cross-checking between multiple experts strengthens red-teaming efforts, verifying the accuracy of sensitive information without relying on a single individual's expertise.

The Future of AI Knowledge Management

As AI systems advance, managing sensitive knowledge will become increasingly challenging. That is also an opportunity to shape AI ethics and knowledge management: future AI systems should handle sensitive information responsibly and educate users about the ethical implications of their queries. Navigating this landscape requires a balance of technical expertise, ethical considerations, and common sense.
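The collaborative-verification idea can be sketched as a simple quorum rule over independent expert verdicts. The verdict labels and quorum threshold here are illustrative assumptions, not a standard from the red-teaming literature:

```python
# Hypothetical sketch of collaborative verification: a model response about
# a sensitive topic is only accepted as verified when a majority of
# independent domain reviewers agree. Verdicts stand in for expert judgment.

from collections import Counter

def cross_check(verdicts: dict[str, str], quorum: float = 0.5) -> str:
    """Combine expert verdicts ('safe', 'unsafe', 'inaccurate') into one label.

    Returns the majority label only if it clears the quorum; otherwise the
    response is escalated rather than trusted to a single reviewer.
    """
    if not verdicts:
        return "escalate"
    counts = Counter(verdicts.values())
    label, count = counts.most_common(1)[0]
    return label if count / len(verdicts) > quorum else "escalate"
```

The point of the quorum is exactly the one made above: no single reviewer's expertise (or blind spot) decides whether sensitive output is accurate.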
It is a challenge worth tackling to gain AI's benefits while mitigating its risks. The next time an AI politely declines to share dangerous information, remember the intricate web of ethical training, red-team testing, and carefully managed knowledge behind that refusal. It ensures the AI is not only knowledgeable but wise enough to handle sensitive information responsibly.


Data Protection Improvements from Next DLP

Insider risk and data protection company Next DLP has unveiled its new Secure Data Flow technology, designed to enhance data protection for customers. Integrated into the company's Reveal Platform, Secure Data Flow monitors the origin, movement, and modification of data to provide comprehensive protection. It can secure critical business data flowing from any SaaS application, including Salesforce, Workday, SAP, and GitHub, to prevent accidental data loss and malicious theft.

"In modern IT environments, intellectual property often resides in SaaS applications and cloud data stores," said John Stringer, head of product at Next DLP. "The challenge is that identifying high-impact data in these locations based on its content is difficult. Secure Data Flow, through Reveal, ensures that firms can confidently protect their most critical data assets, regardless of their location or application."

Next DLP argues that legacy data protection technologies are inadequate: they rely on pattern matching, regular expressions, keywords, user-applied tags, and fingerprinting, which cover only a limited range of text-based data types. The company points to recent studies indicating that employees download an average of 30 GB of data each month from SaaS applications to endpoints such as mobile phones, laptops, and desktops, underscoring the need for stronger protection.

Secure Data Flow tracks data as it moves through both sanctioned and unsanctioned channels within an organization. By complementing traditional content- and sensitivity-classification approaches with origin-based data identification, manipulation detection, and data egress controls, it aims to prevent data theft and misuse. Next DLP claims the approach yields an "all-encompassing, 100 percent effective, false-positive-free solution that simplifies the lives of security analysts."
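To illustrate the difference between content-based and origin-based protection (a hypothetical sketch, not Next DLP's implementation; app and channel names are invented), consider a control that follows a file's recorded origin rather than its content:

```python
# Illustrative sketch of origin-based data identification: protection follows
# the file's recorded origin, so renaming or editing the file does not strip
# it. Origins and channels below are examples only.

from dataclasses import dataclass

PROTECTED_ORIGINS = {"salesforce", "workday", "sap", "github"}
SANCTIONED_CHANNELS = {"corporate_onedrive", "approved_backup"}

@dataclass
class TrackedFile:
    name: str
    origin: str  # where the data was first downloaded from

def allow_egress(file: TrackedFile, channel: str) -> bool:
    """Permit transfer unless protected-origin data heads to an unsanctioned channel."""
    if file.origin in PROTECTED_ORIGINS:
        return channel in SANCTIONED_CHANNELS
    return True

report = TrackedFile("q3_pipeline.xlsx", origin="salesforce")
report.name = "notes.xlsx"  # renaming does not change the recorded origin
```

Because the decision keys on provenance rather than keywords or fingerprints, it sidesteps the text-only coverage gap the article attributes to legacy approaches.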
"Secure Data Flow represents a novel approach to data protection and insider risk management," said Ken Buckler, research director at Enterprise Management Associates. "It not only enhances detection and protection capabilities but also streamlines data management processes. This improves the accuracy of data sensitivity recognition and reduces endpoint content inspection costs in today's diverse technological environments."


AI Safety and Responsibility

The Future of AI: Balancing Innovation and Trust

Authored by Justin Tauber, General Manager, Innovation and AI Culture at Salesforce, ANZ.

AI holds the promise of transforming business operations and freeing up our most precious resource: time. This is particularly beneficial for small businesses, where customer-facing staff must navigate a complex set of products, policies, and data with limited time and support. AI-assisted customer engagement can lead to more timely, personalized, and intelligent interactions. But trust is paramount, and businesses must use AI's power safely and ethically.

The Trust Challenge

According to the AI Trust Quotient, 89% of Australian office workers don't trust AI to operate without human oversight, and 62% fear that humans will lose control of AI. Small businesses must build competence and confidence in using AI responsibly; the companies that successfully combine human and machine intelligence will lead the AI transformation. Building that trust requires focusing on the employee experience of AI. Employees should be brought early into decision-making, output refinement, and feedback processes, because generative AI outcomes improve when humans are actively involved. Humans need to lead the partnership, with AI working effectively under human direction.

Strategies for Building Trust

One strategy is to remind employees of AI's strengths and weaknesses within their workflow. Showing confidence values, a measure of how strongly the model believes its output is correct, helps employees handle AI responses with the appropriate level of care: lower-scored content can still be valuable, but it warrants deeper human review. Prompt templates for staff ensure consistent inputs and predictable outputs, and explainability, such as citing sources for AI-generated content, also addresses trust and accuracy concerns. Another strategy focuses on use cases that enhance customer trust.
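The confidence-value strategy mentioned above can be sketched as a simple routing rule; the thresholds and labels are illustrative assumptions, not a Salesforce feature:

```python
# Minimal sketch of confidence-based routing: outputs below a review
# threshold go to a human instead of being sent directly. The threshold
# values and handling labels are assumptions for illustration.

def route_output(text: str, confidence: float, threshold: float = 0.8) -> str:
    """Decide how an AI-generated draft is handled."""
    if confidence >= threshold:
        return "send"          # high confidence: usable with light review
    if confidence >= 0.5:
        return "human_review"  # still potentially valuable, needs scrutiny
    return "discard_or_retry"  # too unreliable to act on
```

The mid-band matters most: as the article notes, lower-scored content can still be valuable, so it is routed to a person rather than thrown away.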
The sweet spot is where productivity and trust-building benefits align. For example, generative AI can reassure customers that a product will arrive on time, and in fraud detection and prevention, AI can flag suspicious transactions for human review, improving the accuracy and effectiveness of fraud detection systems.

Salesforce's Commitment to Ethical AI

Salesforce keeps humans at the helm by respecting ethical guardrails in AI product development, and goes further by creating capabilities and solutions that lower the cost of responsible AI deployment and use. AI safety products help businesses use AI's power without taking on significant risk. Salesforce AI products are built with trust and reliability in mind, embodying Trustworthy AI principles to help customers deploy them ethically. It is unrealistic and unfair to expect employees, especially in SMBs, to refine every AI-generated output, so Salesforce provides businesses with powerful system-wide controls and intuitive interfaces for making timely, responsible judgments about testing, refining responses, or escalating problems. Salesforce has invested in ethical AI for nearly a decade, focusing on principles, policies, and protections for itself and its customers. New guidelines for responsible generative AI development expand on its core Trusted AI principles, and updated Acceptable Use Policy safeguards and the Einstein Trust Layer protect customer data from external LLMs.

Commitment to a Trusted AI Future

While we're still in the early days of AI, Salesforce is committed to learning and iterating in close collaboration with customers and regulators to make trusted AI a reality for all.

Originally published in Smart Company.


Cyber Group Targets SaaS Platforms

Cyber group UNC3944, also known as "0ktapus" and "Scattered Spider," has shifted its focus to attacking Software-as-a-Service (SaaS) applications, as reported by Google Cloud's Mandiant threat intelligence team. The group, previously linked to incidents involving companies such as Snowflake and MGM Resorts, has evolved its strategies to concentrate on data theft and extortion, targeting platforms including Azure, Salesforce, vSphere, AWS, and Google Cloud.

Attack Techniques

UNC3944 exploits legitimate third-party tools for remote access and leverages Okta permissions to expand its intrusion capabilities. One notable tactic involves creating new virtual machines in VMware vSphere and Microsoft Azure, using administrative permissions linked through SSO applications, as a base for further activity. The group uses commonly available utilities to reconfigure virtual machines (VMs), disable security protocols, and download tools such as Mimikatz and ADRecon, which extract and combine artifacts from Active Directory (AD) and Microsoft Entra ID environments.

Evolving Methods

Initially, UNC3944 employed a variety of techniques; over time, its methods have expanded to include ransomware and data theft extortion. Active since at least May 2022, the group has developed resilience mechanisms against virtualization platforms and improved its ability to move laterally by abusing SaaS permissions. It also uses SMS phishing to reset passwords and bypass multi-factor authentication (MFA). Once inside, the attackers conduct thorough reconnaissance of Microsoft applications like SharePoint to understand remote-connection needs. According to Mandiant, UNC3944's primary activity is now data theft without ransomware, employing expert social engineering tactics and detailed personal information to bypass identity checks and target employees with high-level access.
Social Engineering and Threats

Attackers often pose as employees, contacting help desks to request MFA resets for setting up new phones. If help desk staff comply, attackers can easily bypass MFA and reset passwords. When social engineering fails, UNC3944 resorts to threats, including doxxing, physical threats, or releasing compromising material, to coerce credentials from victims. Once access is gained, the group gathers information on tools like VPNs, virtual desktops, and remote-work utilities to maintain persistent access.

Targeting SaaS and Cloud Platforms

UNC3944 targets Okta's single sign-on (SSO) tools, creating accounts that facilitate access to multiple systems. Its attacks extend to VMware's vSphere hybrid cloud management tool and Microsoft Azure, where it creates virtual machines for malicious purposes; by operating within a trusted IP address range, the group complicates detection. Additional targets include SaaS applications such as VMware's vCenter, CyberArk, Salesforce, CrowdStrike, Amazon Web Services (AWS), and Google Cloud. Office 365 is another focus, with attackers using Microsoft's Delve tool for quick reconnaissance and to identify valuable information. To exfiltrate data, they use synchronization utilities such as Airbyte and Fivetran to transfer information to their own cloud storage. The group also targets Active Directory Federation Services (ADFS) to extract certificates and employs Golden SAML attacks for continued access to cloud applications.

Recommendations

Mandiant advises deploying host-based certificates with MFA for VPN access, implementing stricter conditional access policies, and enhancing monitoring for SaaS applications. Consolidating logs from crucial SaaS applications and monitoring virtual machine setups can help identify potential breaches.
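As a hypothetical illustration of the log-consolidation advice (the event schema and the 30-minute window are assumptions, not a published Mandiant rule), a detector for help-desk MFA resets followed quickly by new-device registrations might look like:

```python
# Hypothetical detection sketch: an 'mfa_reset' event followed quickly by a
# 'new_device' event for the same user mirrors the help-desk abuse pattern
# described above. The event fields and window are illustrative assumptions.

from datetime import datetime, timedelta

def suspicious_mfa_resets(events: list[dict],
                          window: timedelta = timedelta(minutes=30)) -> set[str]:
    """Return users with an 'mfa_reset' followed by 'new_device' within the window."""
    flagged = set()
    events = sorted(events, key=lambda e: e["time"])
    for i, e in enumerate(events):
        if e["action"] != "mfa_reset":
            continue
        for later in events[i + 1:]:
            if (later["user"] == e["user"]
                    and later["action"] == "new_device"
                    and later["time"] - e["time"] <= window):
                flagged.add(e["user"])
    return flagged
```

This kind of correlation is only possible when identity, help-desk, and device logs land in one place, which is exactly why Mandiant recommends consolidating them.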


An Eye on AI

Humans often cast uneasy glances over their shoulders as artificial intelligence (AI) rapidly advances, achieving feats once exclusive to human intellect. AI-driven chatbots can now pass rigorous exams like the bar and medical licensing tests, generate tailored images and summaries from complex texts, and simulate human-like interactions. Amid these advancements, concerns loom large: fears of widespread job loss, existential threats to humanity, and the specter of machines surpassing human control to safeguard their own existence.

Skeptics of these doomsday scenarios argue that today's AI lacks true cognition. They assert that AI, including sophisticated chatbots, operates on predictive algorithms that generate responses based on patterns in data rather than genuine understanding. Even as its capabilities evolve, AI remains tethered to processing inputs into outputs without cognitive reasoning akin to human thought.

So, are we venturing into perilous territory or merely witnessing incremental advancements in technology? Perhaps both. While the prospect of creating a malevolent AI akin to HAL 9000 from "2001: A Space Odyssey" seems far-fetched, it is a prudent assumption that human ingenuity, prioritizing survival, would prevent us from engineering our own demise through AI. Yet the existential question remains: are we sufficiently safeguarded against ourselves?

Doubts about AI's true cognitive abilities persist despite its impressive functionality. While AI models such as large language models (LLMs) draw on vast amounts of data to simulate human reasoning and context awareness, they fundamentally lack consciousness. AI's creativity, exemplified by its ability to propose new ideas or solve complex problems, remains simulated mimicry rather than authentic intelligence.
Moreover, AI's domain-specific capabilities are constrained by its training data and programming, unlike human cognition, which adapts dynamically to diverse and novel situations. AI excels at pattern-recognition tasks, from diagnosing diseases to classifying images, yet it does so without comprehending the underlying concepts or contexts. In medical diagnostics or art authentication, for instance, AI can achieve remarkable accuracy in identifying patterns but lacks the interpretative skill and contextual understanding that humans possess. This limitation underscores the need for human oversight and critical judgment wherever AI's decisions affect significant outcomes.

The evolution of AI, rooted in neural network technologies and deep learning paradigms, marks a profound shift in how we approach complex tasks traditionally performed by human experts. However, AI's reliance on data patterns and algorithms highlights its inherent limits in achieving genuine cognitive understanding or autonomous decision-making.

In conclusion, while AI continues to transform industries and enhance productivity, its capabilities are rooted in computational algorithms rather than conscious reasoning. As we navigate the future of AI integration, balancing its efficiencies with human expertise and oversight remains paramount. Ultimately, the intersection of AI and human intelligence will define the boundaries of technological advancement and ethical responsibility in the years to come.


The Growing Role of AI in Cloud Management

AI technologies are redefining cloud management: automating IT systems, improving security, optimizing cloud costs, enhancing data management, and streamlining the provisioning of AI services across complex cloud ecosystems. With demand for AI surging, its ability to tame technological complexity makes a unified cloud management strategy indispensable for IT teams.

Cloud and security platforms have steadily integrated AI and machine learning to support increasingly autonomous IT operations. The rapid rise of generative AI (GenAI) has further spotlighted these capabilities, prompting vendors to prioritize their development. Adnan Masood, Chief AI Architect at UST, highlights the transformative potential of AI-driven cloud management, emphasizing its ability to oversee vast data centers hosting millions of applications and services with minimal human input. "AI automates tasks such as provisioning, scaling, cost management, monitoring, and data migration," Masood explains.

From Reactive to Proactive Cloud Management

Traditionally, CloudOps relied heavily on manual intervention and expertise. AI has shifted this paradigm by introducing automation, predictive analytics, and intelligent decision-making, letting enterprises move from reactive, manual management to proactive, self-optimizing cloud environments. Masood underscores that this shift allows cloud systems to manage and optimize themselves with minimal human oversight. Organizations must still navigate challenges, including complex data integration, real-time processing limitations, and model accuracy concerns, as well as business hurdles such as implementation costs, uncertain ROI, and striking the right balance between AI automation and human oversight.

AI's Transformation of Cloud Computing

AI has reshaped cloud management into a more proactive and efficient process.
Key applications include:

“AI enhances efficiency, scalability, and flexibility for IT teams,” says Agustín Huerta, SVP of Digital Innovation at Globant. He views AI as a pivotal enabler of automation and optimization, helping businesses adapt to rapidly changing environments. AI also automates repetitive tasks such as provisioning, performance monitoring, and cost management. More importantly, it strengthens security across cloud infrastructure by detecting misconfigurations, vulnerabilities, and malicious activities.

Nick Kramer of SSA & Company highlights how AI-powered natural language interfaces simplify cloud management, transforming it from a technical challenge to a logical one. With conversational AI, business users can manage cloud operations more efficiently, accelerating problem resolution.

AI-Enabled Cloud Management Tools

Ryan Mallory, COO at Flexential, categorizes AI-powered cloud tools into:

The Rise of Self-Healing Cloud Systems

AI enables cloud systems to detect, resolve, and optimize issues with minimal human intervention. For instance, AI can identify system failures and trigger automatic remediation, such as restarting services or reallocating resources. Over time, machine learning enhances these systems’ accuracy and reliability.

Key Applications of AI in Cloud Management

AI’s widespread applications in cloud computing include:

Benefits of AI in Cloud Management

AI transforms cloud management by enabling autonomous systems capable of 24/7 monitoring, self-healing, and optimization. This boosts system reliability, reduces downtime, and provides businesses with deeper analytical insights. Chris Vogel from S-RM emphasizes that AI’s analytical capabilities go beyond automation, driving strategic business decisions and delivering measurable value.
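The self-healing loop described above (detect a failure, then restart or reallocate automatically) can be sketched in a few lines. The service names, metrics, and thresholds here are invented for illustration and are not taken from any real platform:

```python
def remediate(service, metrics):
    """Choose a remediation for one service (illustrative rules only)."""
    if metrics["status"] == "down":
        return f"restart {service}"      # failed service: restart it
    if metrics["cpu"] > 0.9:
        return f"scale-out {service}"    # saturated service: add capacity
    return None                          # healthy: no action needed

def healing_pass(fleet):
    """One control-loop pass: scan the fleet, collect remediation actions."""
    return [action
            for service, metrics in fleet.items()
            if (action := remediate(service, metrics))]

fleet = {
    "api":   {"status": "down", "cpu": 0.20},
    "db":    {"status": "up",   "cpu": 0.95},
    "cache": {"status": "up",   "cpu": 0.30},
}
actions = healing_pass(fleet)
```

A production system would also feed the outcome of each action back into a learned model, which is how the accuracy and reliability mentioned above improve over time.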
Challenges of AI in Cloud Management

Despite its advantages, AI adoption in cloud management presents challenges, including:

AI’s Impact on IT Departments

AI’s growing influence on cloud management introduces new responsibilities for IT teams, including managing unauthorized AI systems, ensuring data security, and maintaining high-quality data for AI applications. IT departments must provide enterprise-grade AI solutions that are private, governed, and efficient while balancing the costs and benefits of AI integration.

Future Trends in AI-Driven Cloud Management

Experts anticipate that AI will revolutionize cloud management, much as cloud computing reshaped IT a decade ago. Prasad Sankaran from Cognizant predicts that organizations investing in AI for cloud management will unlock opportunities for faster innovation, streamlined operations, and reduced technical debt. As AI continues to evolve, cloud environments will become increasingly autonomous, driving efficiency, scalability, and innovation across industries. Businesses embracing AI-driven cloud management will be well positioned to adapt to the complexities of tomorrow’s IT landscape.


AI Misconceptions Dispelled

The recent launch of GPT-4o (“o” for “omni”) has captivated everyone with its seamless human-computer interaction. Capable of solving math problems, translating languages in real time, and even answering queries in a human voice with emotion, GPT-4o is a game-changer. Within hours of its debut, shares of Duolingo, the popular language EdTech platform, plummeted by 26% as investors perceived GPT-4o as a potential threat. But what would prevent this?

Fears about AI are widespread. Many believe it will become so advanced and efficient that employing humans will be too costly, potentially leading to mass unemployment. Over the past year, it has become clear that artificial intelligence (AI) is among the most disruptive forces in business. AI promises efficiency and speed but also raises concerns about bias and ethics. In a candid conversation on Mint’s new video series All About AI, Arundhati Bhattacharya, Chairperson and CEO of Salesforce India, dispels these fears and discusses bridging the generation gap and making Salesforce a Great Place to Work.

Forging Unity and Vision

“When I came in, there were disparate groups—sales and distribution, technology and products, support and success. Each group had its leaders, but nobody was bringing them together to create one Salesforce vision and ensure that each group developed the Salesforce DNA,” Arundhati reflects on her April 2020 arrival. She underscored Salesforce’s values-driven approach, highlighting the significance of Trust, Customer Success, Innovation, Equality, and Sustainability. Under Arundhati’s leadership, Salesforce India has risen from 36th to 4th on the Great Places to Work list.

Navigating AI Skepticism

AI advancements are profoundly shaping industries and humanity’s future.
According to Frost & Sullivan’s “Global State of AI, 2022” report, 87% of organizations see AI and machine learning as catalysts for revenue growth, operational efficiency, and better customer experiences. A 2023 IBM survey found that 42% of large businesses have integrated AI, with another 40% considering it. Furthermore, 38% of organizations have adopted generative AI, with an additional 42% contemplating its implementation.

Despite the excitement around AI, skepticism remains. Arundhati offers insights on addressing this skepticism and using AI to benefit society. She suggests a balanced approach, noting that every significant technological change has sparked similar fears. Arundhati argues that AI won’t necessarily lead to massive unemployment, given humanity’s ability to adapt and evolve.

Amidst India’s socio-economic challenges, Arundhati sees AI as a potent tool for positive change. She cites examples like the Prime Minister’s Jan Dhan Yojana, where AI-enabled solutions facilitated broader financial inclusion. “Similarly, AI can greatly improve services in state hospitals where doctors are overworked. AI can gather patient symptoms and present an initial diagnosis, allowing doctors to focus on more critical aspects. The technology is also being used to check sales conversations for accuracy in insurance, ensuring compliance and reducing mis-selling,” she elaborates.

Driving Productivity through AI Integration

Improving productivity in India is a pressing issue, and AI can effectively bridge this gap. However, the term “AI” is often overused and misunderstood. People need to approach AI initiatives with intentionality and focus. First, determine the use cases for AI, such as improving productivity, gaining customer mindshare, or enhancing customer experience. Once that is clear, ensure your organization is structured to provide the right inputs for AI, which involves having a robust data strategy.
Tools like Data Cloud can help by integrating various data sources without copying the data and extracting intelligence from them. Lastly, securing buy-in from employees is crucial for successful AI implementation. Addressing their concerns, communicating the potential risks, and aligning everyone toward the same goal is essential.

Securing the Future: Addressing AI Security Concerns

As AI technologies advance, concerns about their security and potential misuse also rise. Threat actors can exploit sophisticated AI tools intended for positive purposes to carry out scams and fraud. As businesses rely more on AI, it is vital to recognize and protect against these security risks. These risks include data manipulation, automated malware, and abuse through impersonation and hallucination. To tackle AI security challenges, consider prioritizing cybersecurity measures for AI systems. Salesforce makes substantial investments in cybersecurity daily to stay ahead of potential threats. “We use third-party infrastructure with additional security layers on top. Public cloud infrastructure provides multiple layers of security, much like a compound with perimeter, building, and apartment security,” Arundhati explains.

Empowering the Next Generation Workforce and Fostering Innovation

Transitioning from her previous role as Chairperson of the State Bank of India to leading Salesforce India, Arundhati acknowledges the generational shift in workforce dynamics. She emphasizes understanding and catering to the evolving needs and aspirations of a younger workforce, focusing on engagement and fulfillment beyond monetary incentives. “Salesforce has a strong giving policy called one by one by one, where we give 1% of our profit, products, and time to the nonprofit sector.
This resonates with the younger workforce, making them feel engaged and fulfilled.” Through a dedicated startup program, Salesforce fosters a collaborative ecosystem where startups can leverage resources, tools, and connections to thrive and succeed. Arundhati’s stewardship of Salesforce India epitomizes a transformative leadership approach anchored in values, innovation, and community empowerment. Under her leadership, Salesforce India continues to chart a course toward sustainable growth and inclusive prosperity, poised to redefine the paradigm of corporate success in the digital age.


Blockers to IT Success and Salesforce Implementation Solutions

The CIO website recently delved into the primary obstacles to achieving success in IT. Tectonic echoes these concerns and offers insights and remedies based on our Salesforce Implementation Solutions. Issues such as data challenges, technical debt, and talent shortages can significantly hinder the progress of IT organizations and departments in executing high-value projects. Several CIOs have shared their approaches to tackling these challenges, and Tectonic proposes solutions based on the Salesforce ecosystem.

Carm Taglienti, Chief Data Officer and Distinguished Engineer at Insight, reflects on the dual nature of the recent surge in artificial intelligence (AI). While AI advancements have undoubtedly enhanced efficiency and productivity across technology departments, lines of business, and business units, the rapid proliferation of AI technologies, particularly generative AI, has disrupted numerous IT plans. Taglienti emphasizes the need for organizations to adapt swiftly to these technological shifts to avoid derailing critical projects. Tectonic recently looked at the challenges the public sector faces with regard to AI.

The rapid evolution of technology poses a continuous challenge for IT leaders. The relentless pace of technological advancements, exemplified by the rise of AI, demands proactive resource allocation to stay competitive. Ryan Downing, CIO of Principal Financial Group, underscores the necessity of adopting a strategic approach to navigate the complexities of multicloud environments effectively. Tectonic addresses the multicloud challenge for our clients with Salesforce implementation, optimization, consulting, and ongoing managed services. Salesforce remains the world’s number one CRM solution for a reason. Cloud solutions for marketing, personalization, patient data privacy, manufacturing, feedback management, and more are just a small sampling of the IT solutions Salesforce and Tectonic present.
Unaddressed data issues pose a significant impediment to realizing the full potential of analytics, automation, and AI. Many organizations are grappling with legacy systems and inadequate data management practices, hindering their progress in successfully deploying advanced technologies. Working with a Salesforce partner can address this challenge.

The scarcity of skilled talent remains a pressing concern for CIOs, as highlighted in the State of the CIO Study by Foundry. Despite efforts to train internal staff and leverage contractors, filling critical tech positions remains challenging, impeding transformation initiatives. Managed services providers help address this skill gap.

Technical debt and legacy systems present additional hurdles for IT departments. The maintenance of outdated infrastructure drains resources and limits innovation, forcing CIOs to strike a delicate balance between modernization efforts and operational demands. Addressing cybersecurity threats and complying with evolving regulations further strains IT resources, necessitating proactive measures to safeguard organizational assets and maintain regulatory compliance.

Striking the right balance between sustaining existing operations, fostering growth, and driving transformative initiatives is another challenge facing CIOs. Scott Saccal of Cambrex emphasizes the importance of aligning resource allocation with strategic objectives to avoid market displacement. The allure of new technologies, coupled with executive pressure to explore shiny objects, can divert focus from core priorities, hampering strategic execution. Shadow IT and a lack of organizational agility pose additional barriers to IT success, highlighting the need for CIOs to foster collaboration, align IT initiatives with business goals, and cultivate a culture of adaptability within their departments.
‘Shadow IT’ refers to the unsanctioned use of software, hardware, or other systems and services within an organization, often without the knowledge of that organization’s information technology (IT) department. CIOs must navigate a myriad of challenges, from technological disruptions to talent shortages, while maintaining a laser focus on strategic objectives to drive organizational success in an ever-evolving digital landscape. Tectonic is here to consult on and help you resolve your IT challenges. Contact us today.
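Because shadow IT is, by definition, unknown to the IT department, a common first step toward surfacing it is simply diffing what the network actually talks to against the sanctioned-application list. A minimal sketch follows; the domain names, allow-list, and log format are made up for illustration:

```python
# Hypothetical allow-list of sanctioned SaaS applications.
APPROVED = {"salesforce.com", "slack.com", "office365.com"}

def find_shadow_it(egress_log):
    """Count contacts with SaaS domains that are not on the approved list."""
    counts = {}
    for entry in egress_log:
        domain = entry["domain"]
        if domain not in APPROVED:
            counts[domain] = counts.get(domain, 0) + 1
    return counts

log = [
    {"domain": "salesforce.com"},
    {"domain": "filesharepro.io"},   # unsanctioned file-sharing tool
    {"domain": "filesharepro.io"},
]
unapproved = find_shadow_it(log)
```

In practice the egress data would come from firewall or DNS logs, and repeated hits on an unapproved domain are exactly the signal that prompts a governance conversation rather than an automatic block.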


How AI is Raising the Stakes in Phishing Attacks

Cybercriminals are increasingly using advanced AI, including tools like ChatGPT, to execute highly convincing phishing campaigns that mimic legitimate communications with uncanny accuracy. As AI-powered phishing becomes more sophisticated, cybersecurity practitioners must adopt AI and machine learning defenses to stay ahead.

What Are AI-Powered Phishing Attacks?

Phishing, a long-standing cybersecurity issue, has evolved from crude scams into refined attacks that can mimic trusted entities like Amazon, postal services, or colleagues. Leveraging social engineering, these scams trick people into clicking malicious links, downloading harmful files, or sharing sensitive information. However, AI is elevating this threat by making phishing attacks more convincing, timely, and challenging to detect.

General Phishing Attacks

Traditionally, phishing emails were often easy to spot due to grammatical errors or poor formatting. AI, however, eliminates these mistakes, creating messages that appear professionally written. Additionally, AI language models can gather real-time data from news and corporate sites, embedding relevant details that create urgency and heighten the attack’s credibility. AI chatbots can also generate business email compromise attacks or whaling campaigns at a massive scale, boosting both the volume and sophistication of these threats.

Spear Phishing

Spear phishing involves targeting specific individuals with highly customized messages based on data gathered from social media or data breaches. AI has supercharged this tactic, enabling attackers to craft convincing, personalized emails almost instantly. During a cybersecurity study, AI-generated phishing emails outperformed human-crafted ones in convincing recipients to click on malicious links. With the help of large language models (LLMs), attackers can create hyper-personalized emails and even deepfake phone calls and videos.
Vishing and Deepfakes

Vishing, or voice phishing, is another tactic on the rise. Traditionally, attackers would impersonate someone like a company executive or trusted colleague over the phone. With AI, they can now create deepfake audio to mimic a specific person’s voice, making it even harder for victims to discern authenticity. For example, an employee may receive a voice message that sounds exactly like their CFO, urgently requesting a bank transfer.

How to Defend Against AI-Driven Phishing Attacks

As AI-driven phishing becomes more prevalent, organizations should adopt the following defense strategies:

How AI Improves Phishing Defense

AI can also bolster phishing defenses by analyzing threat patterns, personalizing training, and monitoring for suspicious activity. GenAI, for instance, can tailor training to individual users’ weaknesses, offer timely phishing simulations, and assess each person’s learning needs to enhance cybersecurity awareness. AI can also predict potential phishing trends based on data such as attack frequency across industries, geographical locations, and types of targets. These insights allow security teams to anticipate attacks and proactively adapt defenses.

Preparing for AI-Enhanced Phishing Threats

Businesses should evaluate their risk level and implement corresponding safeguards:

AI, and particularly LLMs, are transforming phishing attacks, making them more dangerous and harder to detect. As digital footprints grow and personalized data becomes more accessible, phishing attacks will continue to evolve, including falsified voice and video messages that can trick even the most vigilant employees. By proactively integrating AI defenses, organizations can better protect against these advanced phishing threats.
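The defensive side described above is, at heart, about combining weak signals. The toy scorer below illustrates the kinds of features (urgency language, credential requests, links pointing away from the sender's domain) that real trained models weigh; every keyword and threshold here is an assumption for illustration, and production defenses use machine-learned classifiers rather than fixed rules:

```python
import re

# Illustrative urgency vocabulary; a real model would learn these weights.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "invoice"}

def phishing_score(subject, body, sender_domain, link_domains):
    """Score one email: higher means more phishing-like (toy heuristic)."""
    text = f"{subject} {body}".lower()
    score = sum(2 for word in URGENCY_WORDS if word in text)
    if re.search(r"password|credentials|bank transfer", text):
        score += 3   # asks for credentials or money
    # Links that lead somewhere other than the sender's own domain.
    score += sum(2 for domain in link_domains if domain != sender_domain)
    return score

def is_suspicious(subject, body, sender_domain, link_domains, threshold=5):
    return phishing_score(subject, body, sender_domain, link_domains) >= threshold

phishy = is_suspicious(
    "Urgent: verify your account",
    "Your password will be suspended, click below",
    "paypal.com",
    ["paypa1-secure.com"],   # lookalike domain, not the sender's
)
benign = is_suspicious("Lunch?", "See you at noon", "example.com", ["example.com"])
```

The point of the sketch is the feature set, not the scoring: AI-written phishing removes the grammar tells, so durable signals are structural (domain mismatches, credential requests) rather than stylistic.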


Where Will AI Take Us?

Author Jeremy Wagstaff wrote a very thought-provoking article on the future of AI and how much of it we could predict based on the past. This insight expands on that article.

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn. These machines can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. Many people think of artificial intelligence in terms of how they personally use it; some people don’t even realize when they are using it.

Artificial intelligence has long been a concept in human mythology and literature. Our imaginations have been grabbed by the thought of sentient machines constructed by humans, from Talos, the enormous bronze automaton (self-operating machine) that safeguarded the island of Crete in Greek mythology, to the spacecraft-controlling HAL in 2001: A Space Odyssey.

Artificial intelligence comes in a variety of flavors, if you will, and can be categorized in several ways, including by capability and functionality: You likely weren’t even aware of all of the above categorizations of artificial intelligence. Most of us would still subset it into generative AI (itself a subset of narrow AI), predictive AI, and reactive AI.

Reflect on the AI journey through the Three C’s – Computation, Cognition, and Communication – as the guiding pillars for understanding the transformative potential of AI. Gain insights into how these concepts converge to shape the future of technology. Beyond a definition, what really is artificial intelligence? Who makes it, who uses it, what does it do, and how?
Artificial Intelligence Companies – A Sampling

AI and Its Challenges

Artificial intelligence (AI) presents a novel and significant challenge to the fundamental ideas underpinning the modern state, affecting governance, social and mental health, the balance between capitalism and individual protection, and international cooperation and commerce. Addressing this amorphous technology, which lacks a clear definition yet pervades increasing facets of life, is complex and daunting. It is essential to recognize what should not be done, drawing lessons from past mistakes that may not be reversible this time.

In the 1920s, the concept of a street was fluid. People viewed city streets as public spaces open to anyone not endangering or obstructing others. However, conflicts between ‘joy riders’ and ‘jaywalkers’ began to emerge, with judges often siding with pedestrians in lawsuits. Motorist associations and the car industry lobbied to prioritize vehicles, leading to the construction of vehicle-only thoroughfares. The dominance of cars prevailed for a century, but recent efforts have sought to reverse this trend with ‘complete streets,’ bicycle and pedestrian infrastructure, and traffic-calming measures. Technology, such as electric micro-mobility and improved VR/AR for street design, plays a role in this transformation. The laborer digging out a roadbed for chariots and Roman armies likely considered none of this.

Addressing new technology is not easy: it has taken changes to our planet’s climate, a pandemic, and the deaths of tens of millions of people in traffic accidents (3.6 million in the U.S. since 1899) to prompt these reversals. If we had better understood the implications of the first automobile technology, perhaps we could have made better decisions. Similarly, society should avoid repeating past mistakes with AI. The market has driven AI’s development, often prioritizing those who stand to profit over consumers. You know, capitalism.
The rapid adoption and expansion of AI, driven by commercial and nationalist competition, have created significant distortions. Companies like Nvidia have soared in value due to AI chip sales, and governments are heavily investing in AI technology to gain competitive advantages. Listening to AI experts highlights the enormity of the commitment being made and reveals that these experts, despite their knowledge, may not be the best sources for AI guidance. The size and impact of AI are already redirecting massive resources and creating new challenges. For example, AI’s demand for energy, chips, memory, and talent is immense, and the future of AI-driven applications depends on the availability of computing resources.

The rise in demand for AI has already led to significant industry changes. Data centers are transforming into ‘AI data centers,’ and the demand for specialized AI chips and memory is skyrocketing. The U.S. government is investing billions to boost its position in AI, and countries like China are rapidly advancing in AI expertise. China may be behind in physical assets, but it is moving fast on expertise, generating almost half of the world’s top AI researchers (Source: New York Times). The U.S. has just announced it will provide chip maker Intel with $20 billion in grants and loans to boost the country’s position in AI. Nvidia is now the third-largest company in the world, entirely because its specialized chips account for more than 70 percent of AI chip sales. Memory maker Micron has mostly run out of high-bandwidth memory (HBM) stock because of the chips’ usage in AI—one customer paid $600 million up front to lock in supply, according to a story by Stack. Back in January, the International Energy Agency forecast that data centers may more than double their electrical consumption by 2026 (Source: Sandra MacGregor, Data Center Knowledge).
AI is sucking up all the payroll: tech workers who don’t have AI skills are finding fewer roles and lower salaries, or their jobs disappearing entirely to automation and AI (Source: Belle Lin at WSJ). Sam Altman of OpenAI sees a future where demand for AI-driven apps is limited only by the amount of computing available at a price the consumer is willing to pay: “Compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world, and I think we should be investing heavily to make a lot more compute.”

This AI buildup is reminiscent of past technological transformations, where powerful interests shaped outcomes, often at the expense of broader societal considerations. Consider early car manufacturers: they focused on the need for factories, components, and roads.


Gov Agencies AI Workforce Challenges

Federal agencies are placing a higher priority on providing AI training to their workforces, with a focus on principles of transparency and accountability, officials announced at ATARC’s GITEC conference in Charlottesville, Virginia.

Alexis Bonnell, Air Force Research Laboratory CIO and Director of the Digital Capabilities Directorate, emphasized the importance of upholding existing ethics standards rather than creating new ones. She stressed that agencies need to exercise the ethical principles they have always been expected to follow. President Biden’s October 2023 executive order on artificial intelligence mandated that agencies develop ethical AI and establish AI offices, among other directives. While agencies like the Defense Department and the Department of Homeland Security are optimistic about AI’s potential, leaders remain cautious about its ethical implications and stress the importance of safe technology development.

It’s not just technologists who require AI training. To ensure all employees understand AI’s risks and benefits, government leaders are prioritizing education and upskilling efforts. Steven Brand, Energy Deputy CIO of Resource Management, highlighted the initiative to provide foundational AI training across his department, emphasizing that the goal is not to make employees experts. Tammy Hornsby-Fink, Executive Vice President and System CISO at the Federal Reserve Bank of Richmond, emphasized the need for accessible learning opportunities for all department members, from data scientists to executive assistants, to grasp AI concepts in manageable increments. Hornsby-Fink also emphasized the importance of providing sandboxes for employees to experiment with new technologies safely, stressing that experimentation is key to understanding how these technologies can create business value.
According to Tony Boese, Department of Veterans Affairs Interagency Programs Manager, consistent education is essential to combat misinformation about AI. He mentioned the agency’s ASPIRE data-literacy program, which leverages AI to identify skills gaps and tailor educational pathways for individuals. Karen Howard, IRS Office of Online Services Executive Director, highlighted the need to modernize recruitment strategies and change management principles to attract top talent and leverage digital transformation and AI effectively. Jamie Holcombe, U.S. Patent and Trademark Office CIO, emphasized the importance of diversifying agency workforces by bringing in new perspectives from industry, such as those from Silicon Valley, to move away from outdated organizational playbooks.
