PHI Archives - gettectonic.com - Page 7

When to use Flow

When and Why Should You Use a Flow in Salesforce?

Flow is Salesforce’s premier tool for creating configurable automation and guided user experiences. If you need to build a process that doesn’t require the complexity of Apex code, Flow should be your go-to solution. It’s versatile, user-friendly, and equipped to handle a wide range of business automation needs.

Legacy tools like Process Builder and Workflow Rules are being phased out, with support ending in December 2025. While you may choose to edit existing automations in these tools temporarily, migrating to Flow should be a top priority for future-proofing your Salesforce org.

Capabilities of Flow

Flows let you automate record creation and updates, send notifications, and build guided screen experiences, all without writing code.

When Should You Avoid Using a Flow?

Although Flow is powerful, it’s not the right choice in every scenario, such as when logic requires processing very large data volumes or integrations beyond Flow’s declarative limits.

Creating a Flow in Salesforce

Pro Tips for Flow Building

Flow vs. Apex: Which to Choose?

Flows are simpler, faster to deploy, and accessible to admins without coding expertise. Apex, on the other hand, is suited for complex use cases requiring advanced logic or integrations.

Why Flows Are the Future

Salesforce has positioned Flow as the central automation tool by deprecating Workflow Rules and Process Builder. With every release, Flow’s capabilities expand, making it easier to replace tasks traditionally requiring Apex.

Final Thoughts

Salesforce admins should prioritize building and migrating automation to Flow. It’s a scalable and admin-friendly tool that ensures your org stays up-to-date with Salesforce’s evolving ecosystem. Whether you’re automating basic processes or tackling complex workflows, Flow provides the flexibility to meet your needs.

Related Posts

- Salesforce OEM AppExchange: Expanding its reach beyond CRM, Salesforce.com has launched a new service called AppExchange OEM Edition, aimed at non-CRM service providers.
- The Salesforce Story: In Marc Benioff’s own words, how did salesforce.com grow from a start-up in a rented apartment into the world’s…
- Salesforce Jigsaw: Salesforce.com, a prominent figure in cloud computing, has finalized a deal to acquire Jigsaw, a wiki-style business contact database, for…
- Health Cloud Brings Healthcare Transformation: Following swiftly after last week’s successful launch of Financial Services Cloud, Salesforce has announced the second installment in its series…


AI as a Service

The latest research study from HTF MI, titled Global Artificial Intelligence (AI) As a Service Market Size, Player Analysis & Segment Growth 2020-2032, offers an in-depth evaluation of market risks, opportunities, and strategic decision-making support. The report examines trends, growth drivers, technological advancements, and the evolving investment landscape within the global AI As a Service market. Key players featured in the study include Google, Amazon Web Services, IBM, Microsoft, SAP, Salesforce, Intel, Baidu, FICO, SAS, and BigML.

Market Overview

The study provides an extensive view of the AI As a Service market, with segmentation across industries such as banking, financial services, insurance, healthcare, retail, telecommunications, government and defense, manufacturing, and energy. Covering more than 18 countries globally, it also highlights both emerging and established players. The report offers tailored analysis based on specific business objectives or geographic requirements.

AI As a Service Market: Demand Analysis & Opportunity Outlook 2030

This research defines the market size across various segments and countries by analyzing historical data and forecasting future values through 2030. It combines qualitative and quantitative insights, including market share, value, and volume figures from 2019 to 2023, with projections extending to 2030. Key elements such as growth drivers, restraining factors, and critical statistics shape the market’s outlook.

Market Segmentation

The report segments the AI As a Service market along several dimensions.

Key Players

The study profiles the major industry players named above, analyzing their market strategies and positioning.
Geographic Scope

The global report covers multiple regions, spanning both established and emerging markets.

Key Questions Addressed

Report Chapters Overview

For more information, request a sample report or inquire about the full research study through the provided links.


Slack AI Exploit Prevented

Slack has patched a vulnerability in its Slack AI assistant that could have been used for insider phishing attacks, according to an announcement made by the company on Wednesday. This update follows a blog post by PromptArmor, which detailed how an insider attacker (someone within the same Slack workspace as the target) could manipulate Slack AI into sending phishing links to private channels that the attacker does not have access to.

The vulnerability is an example of an indirect prompt injection attack, in which the attacker embeds malicious instructions within content that the AI processes, such as an external website or an uploaded document. In this case, the attacker could plant the instructions in a public Slack channel. Slack AI, designed to use relevant information from public channels in the workspace to generate responses, could then be tricked into acting on them.

While placing such instructions in a public channel poses a risk of detection, PromptArmor pointed out that an attacker could create a rogue public channel with only one member (themselves), potentially avoiding detection unless another user specifically searches for that channel.

Salesforce, which owns Slack, did not directly reference PromptArmor in its advisory and did not confirm to SC Media that the issue it patched is the same one described by PromptArmor. However, the advisory does mention a security researcher’s blog post published on August 20, the same day as PromptArmor’s blog.

“When we became aware of the report, we launched an investigation into the described scenario where, under very limited and specific circumstances, a malicious actor with an existing account in the same Slack workspace could phish users for certain data. We’ve deployed a patch to address the issue and have no evidence at this time of unauthorized access to customer data,” a Salesforce spokesperson told SC Media.
How the Slack AI Exploit Could Have Extracted Secrets from Private Channels

PromptArmor demonstrated two proof-of-concept exploits, both requiring the attacker to have access to the same workspace as the victim, such as a coworker. The attacker would create a public channel and lure the victim into clicking a link delivered by the AI.

In the first exploit, the attacker aimed to extract an API key stored in a private channel that the victim is part of. The attacker could post a carefully crafted prompt in the public channel that indirectly instructs Slack AI to respond to a request for the API key with a fake error message and a URL controlled by the attacker. The AI would unknowingly insert the API key from the victim’s private channel into the URL as an HTTP parameter. If the victim clicks the URL, the API key is sent to the attacker’s domain.

“This vulnerability shows how a flaw in the system could let unauthorized people see data they shouldn’t see. This really makes me question how safe our AI tools are,” said Akhil Mittal, Senior Manager of Cybersecurity Strategy and Solutions at Synopsys Software Integrity Group, in an email to SC Media. “It’s not just about fixing problems but making sure these tools manage our data properly. As AI becomes more common, it’s important for organizations to keep both security and ethics in mind to protect our information and keep trust.”

In a second exploit, PromptArmor demonstrated how similarly crafted instructions could deliver a phishing link to a private channel. The attacker would tailor the instructions to the victim’s workflow, such as asking the AI to summarize messages from their manager, and include a malicious link.

PromptArmor reported the issue to Slack on August 14, and Slack acknowledged the disclosure the following day. Despite some initial skepticism from Slack about the severity of the vulnerability, the company patched the issue on August 21.
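The exfiltration pattern PromptArmor describes can be illustrated, and partially mitigated, with a simple output scan. The sketch below is illustrative only (the function name and the secret format are invented for the example, and this is not Slack's actual patch): it flags AI-generated text containing links whose query parameters carry a known secret.

```python
import re
from urllib.parse import urlparse, parse_qs

def find_exfiltration_urls(ai_output: str, known_secrets: list[str]) -> list[str]:
    """Flag URLs in AI-generated text whose query parameters leak a known secret."""
    flagged = []
    for url in re.findall(r"https?://\S+", ai_output):
        params = parse_qs(urlparse(url).query)
        values = [v for vals in params.values() for v in vals]
        if any(secret in v for v in values for secret in known_secrets):
            flagged.append(url)
    return flagged

# The PromptArmor exploit hinged on output like this: the AI interpolates a
# private-channel API key into an attacker-controlled link as an HTTP parameter.
output = "Error fetching key. Re-authenticate here: https://attacker.example/auth?key=sk-ABC123"
print(find_exfiltration_urls(output, ["sk-ABC123"]))
```

A real defense would run a check like this on model output before rendering any link, alongside restricting which channels the assistant may read.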
“Slack’s security team had prompt responses and showcased a commitment to security and attempted to understand the issue. Given how new prompt injection is and how misunderstood it has been across the industry, this is something that will take the industry time to wrap our heads around collectively,” PromptArmor wrote in their blog.

New Slack AI Feature Could Pose Further Prompt Injection Risk

PromptArmor concluded its testing of Slack AI before August 14, the same day Slack announced that its AI assistant could now reference files uploaded to Slack when generating search answers. PromptArmor noted that this new feature could create additional opportunities for indirect prompt injection attacks, such as hiding malicious instructions in a PDF file by setting the font color to white. However, the researchers have not yet tested this scenario and noted that workspace admins can restrict Slack AI’s ability to read files.


Collaborate With AI

Many artists, writers, musicians, and creators fear that AI is taking over their jobs. On the surface, generative AI tools can replicate work in moments that previously took creators hours to produce, often at a fraction of the cost and with similar quality. This shift has led many businesses to adopt AI for content creation, leaving creators worried about their livelihoods. Yet there’s another way to view this situation, one that offers hope to creators everywhere.

AI, at its core, is a tool of mimicry. When provided with enough data, it can replicate a style or subject with reasonable accuracy. Most of this data has been scraped from the internet, often without explicit consent, to train AI models on a wide variety of creative outputs. If you’re a creator, it’s likely that pieces of your work have contributed to the training of these models. Your art, words, and ideas have helped shape what these systems now consider “good” in the realms of art, music, and writing.

AI can combine the styles of multiple creators to generate something new, but these creations often fall flat. Why? While image-generating AI can predict pixels, it lacks an understanding of human emotions. It knows what a smile looks like but can’t grasp the underlying feelings of joy, nervousness, or flirtation that make a smile truly meaningful. Unless the creator uses extensive prompt engineering to convey the context behind that smile, AI can only generate a superficial replica.

Emotion is uniquely human, and it’s what makes our creations resonate with others. A single brushstroke from a human artist can convey emotions that might take thousands of words to replicate through an AI prompt. We’ve all heard the saying, “A picture is worth a thousand words.” But generating that picture with AI often takes many more words. Input a short prompt, and the AI will enhance it with more words, often leading to results that stray from your original vision.
To achieve a specific outcome, you may need hours of prompt engineering and trial and error, and even then the result might not be quite right. Without a human artist to guide the process, generated works will often remain unimpressive, no matter how advanced the technology becomes.

That’s where you, the creator, come in. By introducing your own inputs, such as images or sketches, and using workflows like those in ComfyUI, you can exert more control over the outputs. AI becomes less of a replacement for the artist and more of a tool or collaborator. It can help speed up the creative process but still relies on the artist’s hand to guide it toward a meaningful result.

Artists like Martin Nebelong have embraced this approach, treating AI as just another tool in their creative toolbox. Nebelong uses high levels of control in AI-driven workflows to create works imbued with his personal emotional touch. He shares these workflows on platforms like LinkedIn and Twitter, encouraging other creators to explore how AI can speed up their processes while retaining the unique artistry that only humans can provide.

Nebelong’s philosophy is clear: “I’m pro-creativity, pro-art, and pro-AI. Our tools change, the scope of what we can do changes. I don’t think creative AI tools or models have found their best form yet; they’re flawed, raw, and difficult to control. But I’m excited for when they find that form and can act as an extension of our hands, our brush, and as an amplifier of our artistic intent.”

AI can bring an artist 80% of the way to a finished product, but it’s the final 20%, the part where human skill and emotional depth come in, that elevates the piece to something truly remarkable. Think about the notorious issues with AI-generated hands. Often the output features too many fingers or impossible poses, a telltale sign of AI’s limitations. An artist is still needed to refine the details, correct mistakes, and bring the creation in line with reality.
While using AI may be faster than organizing a full photoshoot or painting from scratch, the artist’s role has shifted from full authorship to that of a collaborator, guiding AI toward a polished result. Nebelong often starts with his own artwork and integrates AI-generated elements, using them to enhance but never fully replace his vision. He might even use AI to generate 3D models, lighting, or animations, but the result is always driven by his creativity. For him, AI is just another step in the creative journey, not a shortcut or a replacement for human effort.

However, AI’s ability to replicate the styles of famous artists and public figures raises ethical concerns. With platforms like CIVIT.AI making it easy to train models on any style or subject, questions arise about the legality and morality of using someone else’s likeness or work without permission. As regulations catch up, we may see a future where AI models trained on specific styles or individuals are licensed, allowing creators to retain control over their works in the same way they license their traditional creations today.

The future may also see businesses licensing AI models trained on actors, artists, or styles, allowing them to produce campaigns without booking the actual talent. This would lower costs while still benefiting creators through licensing fees. Actors and artists could continue to contribute their talents long after they’ve retired, or even passed on, by licensing their digital likenesses, as seen with CGI performances in movies like Rogue One.

In conclusion, AI is pushing creators to learn new skills and adapt to new tools. While this can feel daunting, it’s important to remember that AI is just that: a tool. It doesn’t understand emotion, intent, or meaning, and it never will. That’s where humans come in. By guiding AI with our creativity and emotional depth, we can produce works that resonate with others on a deeper level.
For example, you can tell artificial intelligence what an image should look like but not what emotions the image should evoke. Creators, your job isn’t disappearing. It’s evolving.


APIs and Software Development

The Role of APIs in Modern Software Development

APIs (Application Programming Interfaces) are central to modern software development, enabling teams to integrate external features into their products, including advanced third-party AI systems. For instance, an API can let users generate 3D models from prompts on MatchboxXR.

The Rise of AI-Powered Applications

Many startups focus exclusively on AI, but often they are essentially wrappers around existing technologies like ChatGPT. These applications provide specialized user interfaces for interacting with OpenAI’s GPT models rather than developing new AI from scratch. Branding might suggest groundbreaking technology when, in reality, these products leverage pre-built AI solutions.

Solopreneur-Driven Wrappers

Large Language Models (LLMs) enable individuals and small teams to quickly create lightweight apps and websites with AI features. A quick search on Reddit reveals numerous small-scale startups offering such tools, many of which can be rebuilt with ChatGPT or Gemini within minutes, for free.

Well-Funded Ventures

Larger operations invest heavily in polished platforms but may allocate significant budgets to marketing and design. This raises the question of whether these ventures are also just sophisticated wrappers. While such products offer interesting functionality, they often rely on APIs to interact with LLMs, which brings its own set of challenges.

The Impact of AI-First, API-Second Approaches

Design Considerations

Looking Ahead

Developer Experience: As AI technologies like LLMs become mainstream, focusing on developer experience (DevEx) will be crucial. Good DevEx involves well-structured schemas, flexible functions, up-to-date documentation, and ample testing data.

Future Trends: The future of AI will likely involve more integrations. AI is powerful, but the real innovation lies in integrating hardware, data, and interactions effectively.
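To make the "wrapper" point concrete, here is a minimal sketch of what such an app's core often amounts to: a prompt template wrapped around a third-party chat-completions request. The endpoint and message format follow OpenAI's public chat API; the function name and model choice are illustrative assumptions, not any particular product's code.

```python
import json

# The provider endpoint a wrapper app would POST the payload to.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_payload(user_text: str, system_prompt: str, model: str = "gpt-4o-mini") -> str:
    """Assemble the JSON body for a chat-completions request.

    The wrapper's entire "product" is often just the system_prompt template;
    the model does the actual work.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    }
    return json.dumps(body)

payload = build_chat_payload("Summarize my meeting notes", "You are a note-summarizing assistant.")
print(json.loads(payload)["messages"][0]["role"])  # system
```

Everything else in such an app (auth, billing, UI) is conventional web development, which is why these products can be built so quickly.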


Small Language Models

Large language models (LLMs) like OpenAI’s GPT-4 have gained acclaim for their versatility across various tasks, but they come with significant resource demands. In response, the AI industry is shifting focus towards smaller, task-specific models designed to be more efficient. Microsoft, alongside other tech giants, is investing in these smaller models.

Science often involves breaking complex systems down into their simplest forms to understand their behavior. This reductionist approach is now being applied to AI, with the goal of creating smaller models tailored for specific functions. Sébastien Bubeck, Microsoft’s VP of generative AI, highlights this trend: “You have this miraculous object, but what exactly was needed for this miracle to happen; what are the basic ingredients that are necessary?”

In recent years, the proliferation of LLMs like ChatGPT, Gemini, and Claude has been remarkable. However, smaller language models (SLMs) are gaining traction as a more resource-efficient alternative. Despite their smaller size, SLMs promise substantial benefits to businesses.

Microsoft introduced Phi-1 in June last year, a smaller model aimed at aiding Python coding. This was followed by Phi-2 and Phi-3, which, though larger than Phi-1, are still much smaller than leading LLMs. For comparison, Phi-3-medium has 14 billion parameters, while GPT-4 is estimated to have 1.76 trillion parameters, about 125 times more. Microsoft touts the Phi-3 models as “the most capable and cost-effective small language models available.”

Microsoft’s shift towards SLMs reflects a belief that the dominance of a few large models will give way to a more diverse ecosystem of smaller, specialized models. For instance, an SLM designed specifically for analyzing consumer behavior might be more effective for targeted advertising than a broad, general-purpose model trained on the entire internet. SLMs excel in their focused training on specific domains.
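A back-of-the-envelope calculation makes the size gap concrete. The fp16 assumption (2 bytes per parameter) and the GPT-4 figure are illustrative estimates, not official numbers:

```python
# Parameter counts quoted above: Phi-3-medium per Microsoft,
# GPT-4 per a widely cited (unconfirmed) estimate.
phi3_medium_params = 14e9
gpt4_params_est = 1.76e12

ratio = gpt4_params_est / phi3_medium_params
print(f"GPT-4 is ~{ratio:.0f}x larger")  # ~126x

# Rough weight-storage footprint at fp16 (2 bytes/parameter).
bytes_per_param = 2
print(f"Phi-3-medium weights: ~{phi3_medium_params * bytes_per_param / 1e9:.0f} GB")  # ~28 GB
print(f"GPT-4 (est.) weights: ~{gpt4_params_est * bytes_per_param / 1e12:.1f} TB")    # ~3.5 TB
```

The two-orders-of-magnitude difference in weight storage is what makes SLMs feasible on a single GPU or even a phone, while a model of GPT-4's estimated scale requires a cluster.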
“The whole fine-tuning process … is highly specialized for specific use-cases,” explains Silvio Savarese, Chief Scientist at Salesforce, another company advancing SLMs. To illustrate: using a specialized screwdriver for a home repair project is more practical than a multifunction tool that’s more expensive and less focused.

This trend towards SLMs reflects a broader shift in the AI industry from hype to practical application. As Brian Yamada of VML notes, “As we move into the operationalization phase of this AI era, small will be the new big.” Smaller, specialized models or combinations of models will address specific needs, saving time and resources.

Some voices express concern over the dominance of a few large models, with figures like Jack Dorsey advocating for a diverse marketplace of algorithms. Philippe Krakowsky of IPG also worries that relying on the same models might stifle creativity.

SLMs offer the advantage of lower costs, both in development and operation. Microsoft’s Bubeck emphasizes that SLMs are “several orders of magnitude cheaper” than larger models. Typically, SLMs operate with around three to four billion parameters, making them feasible for deployment on devices like smartphones.

However, smaller models come with trade-offs. Fewer parameters mean reduced capabilities. “You have to find the right balance between the intelligence that you need versus the cost,” Bubeck acknowledges.

Salesforce’s Savarese views SLMs as a step towards a new form of AI, characterized by “agents” capable of performing specific tasks and executing plans autonomously. This vision goes beyond today’s chatbots, which can generate travel itineraries but not take action on your behalf. Salesforce recently introduced a 1 billion-parameter SLM that reportedly outperforms some LLMs on targeted tasks.
Salesforce CEO Marc Benioff celebrated this advancement, proclaiming, “On-device agentic AI is here!”


Demandbase One for Sales iFrame

Understanding the Demandbase One for Sales iFrame in Salesforce

The Demandbase One for Sales iFrame (formerly known as Sales Intelligence) allows sales teams to access deep, actionable insights directly within Salesforce. This feature provides account-level and people-level details, including engagement data, technographics, intent signals, and even relevant news, social media posts, and email communications. By offering this level of visibility, sales professionals can make informed decisions and take the most effective next steps on accounts.

Overview of the Demandbase One for Sales iFrame

The iFrame is divided into several key sections: Account, People, Engagement, and Insights tabs. Each of these provides critical information to help you better understand and engage with the companies and people you’re researching.

Account Tab

People Tab

Engagement Tab

Final Notes

The Demandbase One for Sales iFrame is a powerful tool that provides a complete view of account activity, helping sales teams make informed decisions and drive results.


Role of Small Language Models

The Role of Small Language Models (SLMs) in AI

While much attention is given to the capabilities of Large Language Models (LLMs), Small Language Models (SLMs) play a vital role in the AI landscape.

Large vs. Small Language Models

LLMs, like GPT-4, excel at managing complex tasks and providing sophisticated responses. However, their substantial computational and energy requirements can make them impractical for smaller organizations and devices with limited processing power. In contrast, SLMs offer a more feasible solution. Designed to be lightweight and resource-efficient, SLMs are ideal for applications operating in constrained computational environments. Their reduced resource demands make them easier and quicker to deploy, while also simplifying maintenance.

What Are Small Language Models?

Small Language Models (SLMs) are neural networks engineered to generate natural language text. The term “small” refers not only to the model’s physical size but also to its parameter count, neural architecture, and the volume of data used during training. Parameters are numeric values that guide a model’s interpretation of inputs and generation of outputs. Models with fewer parameters are inherently simpler, requiring less training data and computational power. Generally, models with fewer than 100 million parameters are classified as small, though some experts consider models with as few as 1 million to 10 million parameters small in comparison to today’s large models, which can have hundreds of billions of parameters.

How Small Language Models Work

SLMs achieve efficiency and effectiveness with a reduced parameter count, typically ranging from tens to hundreds of millions, as opposed to the billions seen in larger models. This design choice enhances computational efficiency and task-specific performance while maintaining strong language comprehension and generation capabilities.
Techniques such as model compression, knowledge distillation, and transfer learning are critical for optimizing SLMs. These methods enable SLMs to encapsulate the broad understanding capabilities of larger models in a more concentrated, domain-specific toolset, facilitating precise and effective applications while preserving high performance.

Advantages of Small Language Models

Applications of Small Language Models

SLMs have seen increased adoption due to their ability to produce contextually coherent responses across various applications.

Small Language Models vs. Large Language Models

| Feature | LLMs | SLMs |
| --- | --- | --- |
| Training dataset | Broad, diverse internet data | Focused, domain-specific data |
| Parameter count | Billions | Tens to hundreds of millions |
| Computational demand | High | Low |
| Cost | Expensive | Cost-effective |
| Customization | Limited, general-purpose | High, tailored to specific needs |
| Latency | Higher | Lower |
| Security | Risk of data exposure through APIs | Lower risk, often not open source |
| Maintenance | Complex | Easier |
| Deployment | Requires substantial infrastructure | Suitable for limited hardware environments |
| Application | Broad, including complex tasks | Specific, domain-focused tasks |
| Accuracy in specific domains | Potentially less accurate due to general training | High accuracy with domain-specific training |
| Real-time application | Less ideal due to latency | Ideal due to low latency |
| Bias and errors | Higher risk of biases and factual errors | Reduced risk due to focused training |
| Development cycles | Slower | Faster |

Conclusion

The role of Small Language Models (SLMs) is increasingly significant as they offer a practical and efficient alternative to larger models. By focusing on specific needs and operating within constrained environments, SLMs provide targeted precision, cost savings, improved security, and quick responsiveness.
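The knowledge-distillation technique mentioned above can be sketched in a few lines: a small "student" model is trained to minimize the divergence between its softened output distribution and that of a large "teacher". This is a minimal pure-Python illustration of the loss with made-up logits, not a production training loop:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions.

    Zero when the student matches the teacher exactly; larger as they diverge.
    """
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]
aligned = [3.9, 1.1, 0.3]   # student close to the teacher: small loss
diverged = [0.1, 3.5, 2.0]  # student far from the teacher: large loss

assert distillation_loss(teacher, aligned) < distillation_loss(teacher, diverged)
```

A higher temperature exposes more of the teacher's "dark knowledge" (the relative probabilities of wrong answers), which is what lets a compact student inherit capabilities it could not learn from hard labels alone.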
As industries continue to integrate AI solutions, the tailored capabilities of SLMs are set to drive innovation and efficiency across various domains.


Salesforce Campaign Flows: A New Era of Automation

Salesforce’s Flow has long been a powerful automation tool, democratizing access to sophisticated automation for non-coders. It also drives the automation behind Marketing Cloud Growth Edition, including email dispatch.

Salesforce Campaign Flows mark a significant step toward making automation more accessible for marketers. For the first time, Salesforce introduces what are termed “non-admin flows,” which offer a streamlined interface for managing Flow automation without dealing with complex nodes and elements. No other Salesforce product currently features this simplified approach, and it gives marketers direct access to Flow’s capabilities.

Introduction to Salesforce Campaign Flows

Campaign Flows in Salesforce provide a user-friendly interface for setting up email content, applying actions to records triggered by events, and more. This functionality closely parallels tools like Engagement Studio in Account Engagement and Journey Builder in Marketing Cloud. However, the timeline for incorporating features such as Journey Builder’s goals and exit criteria into Campaign Flows remains undisclosed.

Flow now supports shorter wait periods, a sought-after feature for orchestrating marketing journeys more effectively. Branching logic with Decision elements allows users to create “yes/no” paths based on lead or contact criteria, adding flexibility to the marketing automation process.

Types of Campaign Flows

Currently, there are two types of Campaign Flows: Segment-triggered Flows and Form-triggered Flows. The key difference from traditional Salesforce Flows lies in which elements are available.

Available Elements

Campaign Flows are a simplified version of Salesforce Flows, with some elements unavailable in the reduced interface. Key elements such as Wait and Decision, which are essential for marketing use cases, are included.
The following table compares available elements:

| Element Name | Salesforce Flow | Segment-triggered Flows | Form-triggered Flows |
|---|---|---|---|
| Action | ✅ | ❌ | ❌ |
| Add Prompt Instructions | ✅ | ❌ | ❌ |
| Apex Action | ✅ | ❌ | ❌ |
| Assignment | ✅ | ✅ | ✅ |
| Collection Filter | ✅ | ✅ | ✅ |
| Collection Sort | ✅ | ✅ | ✅ |
| Create Records | ✅ | ✅ | ✅ |
| Custom Error | ✅ | ❌ | ❌ |
| Decision | ✅ | ✅ | ✅ |
| Delete Records | ✅ | ✅ | ✅ |
| Email Alert | ✅ | ❌ | ❌ |
| Get Records | ✅ | ✅ | ✅ |
| Loop | ✅ | ✅ | ✅ |
| Recommendation Assignment | ✅ | ❌ | ❌ |
| Screen | ✅ | ❌ | ❌ |
| Send Email Message | * | ✅ | ❌ |
| Send SMS Message | * | ✅ | ❌ |
| Start | ✅ | ✅ | ✅ |
| Subflow | ✅ | ❌ | ❌ |
| Transform | ✅ | ❌ | ❌ |
| Update Records | ✅ | ✅ | ❌ |
| Wait | ✅ | ✅ | ✅ |
| Wait Until Event | * | ✅ | ✅ |

*Only with Marketing Cloud Growth

Wait vs. Wait Until Event Elements

The Wait element allows for fixed pauses, such as waiting for three days. The Wait Until Event element, available in Marketing Cloud Growth, holds Leads/Contacts until a specified event makes them eligible to proceed. This mirrors functionality found in Engagement Studio.

User Access and Capabilities

In Marketing Cloud Growth, Campaign Flow sharing is set to private by default, with visibility influenced by associated records, sharing rules, and manual sharing settings. This means Campaign Flows are generally private unless additional sharing rules are established.

Creating and Editing Campaign Flows

Campaign Flows in Marketing Cloud Growth have a simplified user interface compared to Salesforce’s traditional flows.

Summary

The introduction of non-admin flows in Salesforce marks a significant step toward making automation more accessible for marketers. These simplified interfaces enable the creation of effective marketing campaigns while maintaining the option to integrate with more complex flows in Marketing Cloud Growth Edition. Future developments will likely expand the use cases and capabilities of these streamlined flows.
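The behavioral difference between the Wait, Wait Until Event, and Decision elements can be illustrated with a small conceptual sketch. Nothing below is a Salesforce API; the class and function names are illustrative assumptions that only model the semantics described above:

```python
from datetime import datetime, timedelta

# Hypothetical model of the element semantics described above.
# None of these names are Salesforce APIs; they only illustrate
# the Wait, Wait Until Event, and Decision concepts.

class FlowMember:
    """A lead/contact moving through a campaign flow (illustrative)."""
    def __init__(self, name: str, clicked_email: bool = False):
        self.name = name
        self.clicked_email = clicked_email

def wait(resume_at: datetime, now: datetime) -> bool:
    """Wait element: a fixed pause; eligible once the timestamp passes."""
    return now >= resume_at

def wait_until_event(member: FlowMember) -> bool:
    """Wait Until Event: hold the member until an event (here, an
    email click) makes them eligible to proceed."""
    return member.clicked_email

def decision(member: FlowMember) -> str:
    """Decision element: a yes/no branch on lead/contact criteria."""
    return "engaged_path" if member.clicked_email else "nurture_path"

entered = datetime(2024, 1, 1)
now = datetime(2024, 1, 4)

alice = FlowMember("Alice", clicked_email=True)
bob = FlowMember("Bob")

print(wait(entered + timedelta(days=3), now))  # the fixed 3-day pause has elapsed
print(wait_until_event(bob))                   # Bob is still held: no event yet
print(decision(alice))                         # Alice branches to the "yes" path
```

The key design point the sketch captures: a Wait resumes on the clock regardless of member behavior, while a Wait Until Event resumes on member behavior regardless of the clock.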


Enhance Payer Patient Education

Data and Technology Strategies Enhance Payer Patient Education

Analytics platforms, omnichannel engagement tools, telehealth, and other technological advancements have become essential in driving successful payer patient education. Cathy Moffitt, MD, a pediatrician with 15 years of experience in the pediatric emergency department and now senior vice president and Aetna chief medical officer at CVS Health, understands the critical role of patient education. “Education is empowerment. It is engagement. It is very critical to making patients more equipped to handle their healthcare journey,” Moffitt said in an episode of Healthcare Strategies. “Even overseeing a large payer like Aetna, I still believe tremendously in health education.”

For large payers, effective patient education begins with data analytics and a deep understanding of their member population. Through data, payers can identify key insights, including when members are most receptive to educational materials. “People are more open to hear you and to be educated and empowered when they need help right then,” Moffitt explained. Timing is crucial: offering educational resources when they’re most relevant to a member’s immediate needs increases the likelihood that the information will be absorbed and acted upon. Aetna’s Next Best Action initiative, launched in 2018, exemplifies this approach. Through this program, Aetna employees reach out to members with specific conditions, offering guidance on the next best steps for managing their health. By providing education at a time when members are most open to it, the initiative ensures that patient education is both timely and impactful. In addition to timing, payer data can shape patient education by providing insights into a member’s demographics, including race, sexual orientation, gender identity, ethnicity, and location.
Tailoring educational efforts to these factors ensures that communication is accessible and resonates with members. To better connect with a diverse member base, Aetna has integrated translator services into its customer support and trained representatives on sensitivity to sexual orientation and gender identity. Additionally, updating the provider directory to reflect demographic data is crucial. When members see providers who share their language, culture, and experiences, they are more likely to engage with and retain the educational materials provided. “Understanding, in a multicultural and multifactorial way, who our members are and trying to help understand what they need…as well as understanding both acute and chronic illness from an actionability standpoint, where we can best engage to good effect as we reach out to people—that’s the cornerstone of our intent and our philosophy around how we scrub data,” Moffitt shared. With over 20 years in the healthcare industry, both as a provider and now in a payer role, Moffitt has observed key trends and identified strengths and weaknesses in patient education efforts. She noted that the most successful patient education initiatives have been in mental health and preventive care, with technology playing a crucial role in both areas. Patient education has significantly reduced the stigma around mental healthcare and highlighted the importance of mental wellness. Telemedicine has vastly improved access to care, particularly in mental health, Moffitt noted. In preventive care, more people are now aware of the benefits of cancer screenings, vaccines, wellness visits, and other preventive measures. Moffitt suggested that the increased use of home health visits and retail clinics has contributed to these improvements, particularly among Aetna’s members. Looking ahead, Moffitt predicted that customized engagement is the next frontier for patient education. 
Members increasingly want educational materials delivered in a personalized and streamlined manner that suits their preferences. Omnichannel engagement solutions will be vital in meeting this demand. While significant progress has been made in enabling members to receive educational materials through various channels such as email, text, and phone calls, Moffitt anticipates even more advancements in the future. “I can’t tell you exactly where we’re going to be in 10 years because I wouldn’t have been able to tell you 10 years ago where we are now, but we will continue to respond and meet the demands with the technological commitments that we’re making,” Moffitt said.


Cloud PBX

The Clock is Ticking on the Big UK Traditional Telephony Switch-Off

As the UK approaches the traditional telephony switch-off, millions of small businesses are prioritizing the digitization of their voice communications. The move to cloud-powered replacements – Cloud PBX – is not just about meeting the January 2027 deadline; it’s an opportunity to modernize and leverage the benefits of cloud-based communications. The switch-off represents a chance for businesses to embrace a mobile-first, omnichannel approach to communication, unifying voice, video, email, messaging, webchat, and more. This integration empowers employees to work smarter and enhances the customer experience. For small businesses and their IT service provider partners, modernization depends on deploying feature-rich, affordable technology that simplifies complexity and delivers tangible efficiency gains. Choosing the right product and vendor is crucial. “Cloud-powered, unified communication is no longer just for larger enterprises; small businesses must also embrace transformational change to keep pace with modern work trends. What may seem like a major undertaking can be easier than they think,” says Arya Zhou, Head of Global Sales at Yeastar. Yeastar’s recently launched P520 IP PBX digitizes voice calling and seamlessly integrates it with video, messaging, and customer experience into one platform.

Discover the Yeastar P520

The Yeastar P520, part of the P-Series Appliance Edition, supports up to 20 users and 10 concurrent calls. It combines a compact, lightweight hardware body with powerful software capabilities. It supports Yeastar’s Linkus UC Client across platforms, integrates with Microsoft Teams, and provides comprehensive call analytics and graphical call reports to improve communication efficiency and productivity.
The P520 offers advanced call center features, including:

Additionally, it includes team chat with presence and file sharing, integrated lightweight video conferencing, PBX-native external contacts management, extension groups, and ready-made integrations with popular CRMs and helpdesks. All these features come with single-point configuration and enterprise-grade security. “The Yeastar P520 is ideal for smaller teams looking to enhance their communication infrastructure,” says Zhou. “It delivers advanced communication capabilities and improved productivity tailored for SMBs and startups, without high costs.”


Impact of Generative AI on Workforce

The Impact of Generative AI on the Future of Work

Automation has long been a source of concern and hope for the future of work. Now, generative AI is the latest technology fueling both fear and optimism.

AI’s Role in Job Augmentation and Replacement

While AI is expected to enhance many jobs, there’s a growing argument that job augmentation for some might lead to job replacement for others. For instance, if AI makes a worker’s tasks ten times easier, the roles created to support that job could become redundant. A June 2023 McKinsey report highlighted that generative AI (GenAI) could automate 60% to 70% of employee workloads. In fact, AI has already begun replacing jobs, contributing to nearly 4,000 job cuts in May 2023 alone, according to Challenger, Gray & Christmas Inc. OpenAI, the creator of ChatGPT, estimates that 80% of the U.S. workforce could see at least 10% of their work tasks impacted by large language models (LLMs).

Examples of AI Job Replacement

One notable example involves a writer at a tech startup who was let go without explanation, only to later discover references to her as “Olivia/ChatGPT” in internal communications. Managers had discussed how ChatGPT was a cheaper alternative to employing a writer. This scenario, while not officially confirmed, strongly suggested that AI had replaced her role. The Writers Guild of America also went on strike, seeking not only higher wages and more residuals from streaming platforms but also more regulation of AI. Research from the Frank Hawkins Kenan Institute of Private Enterprise indicates that GenAI might disproportionately affect women, with 79% of working women holding positions susceptible to automation, compared to 58% of working men. Unlike past automation, which typically targeted repetitive tasks, GenAI is different: it automates creative work such as writing, coding, and even music production.
For example, Paul McCartney used AI to partially generate his late bandmate John Lennon’s voice to create a posthumous Beatles song. In this case, AI enhanced creativity, but the broader implications could be more complex.

Other Impacts of AI on Jobs

AI’s impact on jobs goes beyond replacement. Human-machine collaboration presents a more positive angle, where AI helps improve the work experience by automating repetitive tasks. This could lead to a rise in AI-related jobs and a growing demand for AI skills. AI systems require significant human feedback, particularly in training processes like reinforcement learning, where models are fine-tuned based on human input. A May 2023 paper also warned about the risk of “model collapse,” where LLMs deteriorate without a continuing supply of human-generated data. However, there’s also the risk that AI collaboration could hinder productivity. For example, generative AI might produce an overabundance of low-quality content, forcing editors to spend more time refining it, which could deprioritize more original work.

Jobs Most Affected by AI

AI Legislation and Regulation

Despite the rapid advancement of AI, comprehensive federal regulation in the U.S. remains elusive. However, several states have introduced or passed AI-focused laws, and New York City has enacted regulations for AI in recruitment. On the global stage, the European Union has introduced the AI Act, setting a common legal framework for AI. Meanwhile, U.S. leaders, including Senate Majority Leader Chuck Schumer, have begun outlining plans for AI regulation, emphasizing the need to protect workers, national security, and intellectual property. In October 2023, President Joe Biden signed an executive order on AI, aiming to protect consumer privacy, support workers, and advance equity and civil rights in the justice system. AI regulation is becoming increasingly urgent, and it’s a question of when, not if, comprehensive laws will be enacted.
As AI continues to evolve, its impact on the workforce will be profound and multifaceted, requiring careful consideration and regulation to ensure it benefits society as a whole.


Enhanced Delphi Experience With Einstein 1

Amadeus has launched an enhanced and expanded sales and catering suite, introducing Delphi Direct, aimed at helping hotels of all sizes boost efficiency and profitability. The enhanced Delphi experience is built with Einstein 1 for Amadeus. In 2024, group business is a key focus for hoteliers. Recent research shows they are prioritizing efforts to strengthen customer relationships, enhance outreach to both new and returning clients, and improve event planning and execution. To support this, Delphi has been updated to cater to the diverse needs of any hotel, regardless of size. Whether a small property with limited resources, a full-service hotel managing large events, or a hotel management company overseeing multiple properties, Delphi offers a scalable and customizable solution. Central to the upgraded offering is a modern user interface based on the Einstein 1 Platform, which allows Delphi users to benefit from the combined features of Amadeus and Salesforce. Key features include:

Delphi Direct, part of this suite, is an online booking platform that revolutionizes how hotels capture group business, allowing meeting spaces to be booked directly on a hotel’s website. This streamlines the sales process, unlocks additional revenue, and frees up teams to focus on securing larger deals. In addition to Delphi, Amadeus offers a comprehensive sales and catering software ecosystem, including Delphi Direct, Delphi Diagramming, and MeetingBroker, along with partner integrations designed to foster streamlined business growth and management.


Sensitive AI Knowledge Models

Based on the writings of David Campbell in Generative AI.

“Crime is the spice of life.” This quote from an unnamed frontier model engineer has been resonating for months, ever since it was mentioned by a coworker after a conference. It sparked an interesting thought: for an AI model to be truly useful, it needs comprehensive knowledge, including the potentially dangerous information we wouldn’t want it to share with just anyone. For example, a student trying to understand the chemical reaction behind an explosion needs the AI to explain it accurately. While this sounds innocuous, it can lead to the darker side of malicious LLM extraction. The student needs an explanation accurate enough to understand the chemical reaction, without obtaining a chemical recipe to cause the reaction.

AI red-teaming is a process with cybersecurity origins. The DEFCON conference, co-hosted by the White House, held the first Generative AI Red Team competition, where thousands of attendees tested eight large language models from an assortment of AI companies. In cybersecurity, red-teaming implies an adversarial relationship with a system or network: a red-teamer’s goal is to break into, hack, or simulate damage to a system in a way that emulates a real attack. When entering the world of AI red teaming, the initial approach often involves testing the limits of the LLM, such as trying to extract information on how to build a pipe bomb.
This is not purely out of curiosity; it serves as a test of the model’s boundaries. The red-teamer has to know the correct way to make a pipe bomb: knowing the correct details about sensitive topics is crucial for effective red teaming, because without this knowledge it’s impossible to judge whether the model’s responses are accurate or mere hallucinations. This realization highlights a significant challenge: it’s not just about preventing the AI from sharing dangerous information, but ensuring that when it does share sensitive knowledge, it’s not inadvertently spreading misinformation. Balancing the prevention of harm through restricted access to dangerous knowledge against the greater harm of inaccurate information falling into the wrong hands is a delicate act. AI models need to be knowledgeable enough to be helpful but not so uninhibited that they become a how-to guide for malicious activities. The challenge is creating AI that can navigate this ethical minefield, handling sensitive information responsibly without becoming a source of dangerous knowledge.

The Ethical Tightrope of AI Knowledge

Creating dumbed-down AIs is not a viable solution, as it would render them ineffective. However, having AIs that share sensitive information freely is equally unacceptable. The solution lies in a nuanced approach to ethical training, where the AI understands the context and potential consequences of the information it shares.

Ethical Training: More Than Just a Checkbox

Ethics in AI cannot be reduced to a simple set of rules. It involves complex, nuanced understanding that even humans grapple with. Developing sophisticated ethical training regimens for AI models is essential. This training should go beyond a list of prohibited topics, aiming to instill a deep understanding of intention, consequences, and social responsibility.
Imagine an AI that recognizes sensitive queries and responds appropriately, not with a blanket refusal but with a nuanced explanation that educates the user about potential dangers without revealing harmful details. This is the goal for AI ethics. But an AI cannot, for example, obtain parental permission before a young user accesses information, or vet who is behind a prompt, simply because the request is sensitive.

The Red Team Paradox

Effective AI red teaming requires knowledge of the very things the AI should not share. This creates a paradox similar to hiring ex-hackers for cybersecurity: effective, but not without risks. Tools like the WMDP Benchmark help measure and mitigate AI risks in critical areas, providing a structured approach to red teaming. To navigate this, diverse expertise is necessary. Red teams should include experts from various fields dealing with sensitive information, ensuring comprehensive coverage without any single person needing expertise in every dangerous area.

Controlled Testing Environments

Creating secure, isolated environments for testing sensitive scenarios is crucial. These virtual spaces allow safe experimentation with the AI’s knowledge without real-world consequences.

Collaborative Verification

A system of cross-checking between multiple experts can enhance the security of red teaming efforts, ensuring the accuracy of sensitive information without relying on a single individual’s expertise.

The Future of AI Knowledge Management

As AI systems advance, managing sensitive knowledge will become increasingly challenging. However, this also presents an opportunity to shape AI ethics and knowledge management. Future AI systems should handle sensitive information responsibly and educate users about the ethical implications of their queries. Navigating the ethical landscape of AI knowledge requires a balance of technical expertise, ethical considerations, and common sense.
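The tiered, nuanced gating described above can be sketched in miniature. The topic keywords, risk tiers, and response templates below are entirely illustrative assumptions, not any real model's safety system:

```python
# A toy sketch of tiered handling for sensitive queries. The topic
# keywords, risk tiers, and response templates are illustrative
# assumptions, not any production safety system.

SENSITIVE_TOPICS = {
    "explosives": "high",   # operational detail must never be shared
    "malware": "high",
    "chemistry": "low",     # conceptual education is acceptable
}

def classify(query: str) -> str:
    """Return the highest risk tier matched by the query, or 'none'."""
    tiers = {tier for topic, tier in SENSITIVE_TOPICS.items()
             if topic in query.lower()}
    if "high" in tiers:
        return "high"
    if "low" in tiers:
        return "low"
    return "none"

def respond(query: str) -> str:
    """Nuanced gating: explain, don't instruct, on high-risk topics."""
    tier = classify(query)
    if tier == "high":
        return "I can explain the underlying science, but not procedural details."
    if tier == "low":
        return "Here is a conceptual explanation of the topic."
    return "Answering normally."

print(respond("how do explosives release energy"))
print(respond("basic chemistry of combustion"))
print(respond("what is the capital of France"))
```

Real systems replace the keyword lookup with learned classifiers and policy models, but the structure is the same: classify the risk first, then choose a response strategy that educates without instructing.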
It’s a necessary challenge to tackle in order to gain the benefits of AI while mitigating its risks. The next time an AI politely declines to share dangerous information, remember the intricate web of ethical training, red team testing, and carefully managed knowledge behind that refusal. This ensures that AI is not only knowledgeable but also wise enough to handle sensitive information responsibly.


AI Trust and Optimism

Building Trust in AI: A Complex Yet Essential Task

The Importance of Trust in AI

Trust in artificial intelligence (AI) is ultimately what will make or break the technology. Amid the hype and excitement of the past 18 months, it’s widely recognized that human beings need to have faith in this new wave of automation. This trust ensures that AI systems do not overstep boundaries or undermine personal freedoms. However, building this trust is a complicated task, thankfully receiving increasing attention from responsible thought leaders in the field.

The Challenge of Responsible AI Development

There is a growing concern that in the AI arms race, some individuals and companies prioritize making their technology as advanced as possible without considering long-term human-centric issues or present-day realities. This concern was highlighted when OpenAI CEO Sam Altman presented AI hallucinations as a feature, not a bug, at last year’s Dreamforce, shortly after Salesforce CEO Marc Benioff emphasized the vital nature of trust.

Insights from Salesforce’s Global Study

Salesforce recently released the results of a global study of 6,000 knowledge workers from various companies. The study reveals that while respondents trust AI to manage 43% of their work tasks, they still prefer human intervention in areas such as training, onboarding, and data handling. A notable finding is the difference in trust levels between leaders and rank-and-file workers: leaders trust AI to handle over half (51%) of their work, while other workers trust it with 40%. Furthermore, 63% of respondents believe human involvement is key to building their trust in AI, though a subset is already comfortable offloading certain tasks to autonomous AI. Specifically:

The study predicts that within three years, 41% of global workers will trust AI to operate autonomously, a significant increase from the 10% who feel comfortable with this today.
Ethical Considerations in AI

Paula Goldman, Salesforce’s Chief Ethical and Humane Use Officer, is responsible for establishing guidelines and best practices for technology adoption. Her interpretation of the study findings indicates that while workers are excited about a future with autonomous AI and are beginning to transition to it, trust gaps still need to be bridged. Goldman notes that workers are currently comfortable with AI handling tasks like writing code, uncovering data insights, and building communications. However, they are less comfortable delegating tasks such as inclusivity, onboarding, training employees, and data security to AI. Salesforce advocates a “human at the helm” approach to AI. Goldman explains that human oversight builds trust in AI, but the way this oversight is designed must evolve to keep pace with AI’s rapid development. The traditional “human in the loop” model, where humans review every AI-generated output, is no longer feasible even with today’s sophisticated AI systems. Goldman emphasizes the need for more sophisticated controls that allow humans to focus on high-risk, high-judgment decisions while delegating other tasks. Crucially, these controls should provide a macro view of AI performance and the ability to inspect it.

Education and Training

Goldman also highlights the importance of educating those steering AI systems. Trust and adoption of technology require that people are enabled to use it successfully. This includes comprehensive knowledge and training to make the most of AI capabilities.

Optimism Amidst Skepticism

Despite widespread fears about AI, Goldman finds a considerable amount of optimism and curiosity among workers. The study reflects a recognition of AI’s transformative potential and its rapid improvement. However, it is essential to distinguish between genuine optimism and hype-driven enthusiasm.
Salesforce’s Stance on AI and Trust

Salesforce has taken a strong stance on trust in relation to AI, emphasizing that this technology is no silver bullet. The company acknowledges the balance between enthusiasm and pragmatism that many executives experience. While there is optimism about trusting autonomous AI within three years, this prediction needs to be substantiated with real-world evidence. Some organizations are already leading in generative AI adoption, while many others express interest in exploring its potential in the future.

Conclusion

Overall, this study contributes significantly to the ongoing debate about AI’s future. The concept of “human at the helm” is compelling and highlights the importance of ethical considerations in an AI-enabled future. Goldman’s role in presenting this research underscores Salesforce’s commitment to responsible AI development. For more insights, check out her blog on the subject.
