
Meta Joins the Race to Reinvent Search with AI

Meta, the parent company of Facebook, Instagram, and WhatsApp, is stepping into the evolving AI-driven search landscape. As vendors increasingly embrace generative AI to transform search experiences, Meta aims to challenge Google’s dominance in this space. The company is reportedly developing an AI-powered search engine designed to provide conversational, AI-generated summaries of recent events and news. These summaries would be delivered via Meta’s AI chatbot, supported by a multiyear partnership with Reuters for real-time news insights, according to The Information.

AI Search: A Growing Opportunity

The push comes as generative AI reshapes search technology across the industry. Google, the long-standing leader, has integrated AI features such as AI Overviews into its search platform, offering users summarized search results, product comparisons, and more. This feature, available in over 100 countries as of October 2024, signals a shift in traditional search strategies. Similarly, OpenAI, the creator of ChatGPT, has been exploring its own AI search model, SearchGPT, and forging partnerships with media organizations like the Associated Press and Hearst. OpenAI faces legal challenges of its own, however, including a lawsuit from The New York Times over alleged copyright infringement.

Meta’s entry into AI-powered search aligns with a broader trend among tech giants. “It makes sense for Meta to explore this,” said Mark Beccue, an analyst with TechTarget’s Enterprise Strategy Group. He noted that Meta’s approach seems targeted more at consumer engagement than enterprise solutions, particularly appealing to younger audiences who are shifting away from traditional search behaviors.

Shifting User Preferences

Generational changes in search habits are creating opportunities for new players in the market. Younger users, particularly Gen Z and Gen Alpha, are increasingly turning to platforms like TikTok for lifestyle advice and Amazon for product recommendations, bypassing traditional search engines like Google. “Recent studies show younger generations are no longer using ‘Google’ as a verb,” said Lisa Martin, an analyst with the Futurum Group. “This opens the playing field for competitors like Meta and OpenAI.”

Forrester Research corroborates this trend, noting a diversification in search behaviors. “ChatGPT’s popularity has accelerated this shift,” said Nikhil Lai, a Forrester analyst. He added that these changes could challenge Google’s search ad market, with its dominance potentially waning in the years ahead.

Meta’s AI Search Potential

Meta’s foray into AI search offers an opportunity to enhance user experiences and deepen engagement. Rather than pushing news content into users’ feeds—an approach that has drawn criticism—AI-driven search could empower users to decide what content they see and when they see it. “If implemented thoughtfully, it could transform the user experience and give users more control,” said Martin. This approach could also boost engagement by keeping users within Meta’s ecosystem.

The Race for Revenue and Trust

While AI-powered search is expected to increase engagement, monetization strategies remain uncertain. Google has yet to monetize its AI Overviews, and OpenAI’s plans for SearchGPT remain unclear. Other vendors, like Perplexity AI, are experimenting with models such as sponsored questions instead of traditional results.

Trust remains a critical factor in the evolving search landscape. “Google is still seen as more trustworthy,” Lai noted, with users often returning to Google to verify AI-generated information. Despite the competition, the conversational AI search market lacks a definitive leader. “Google dominated traditional search, but the race for conversational search is far more open-ended,” Lai concluded.
Meta’s entry into this competitive space underscores the ongoing evolution of search technology, setting the stage for a reshaped digital landscape driven by AI innovation.


Salesforce Drives Digital Transformation in Governmental Agencies

How Salesforce Drives Digital Transformation in Governmental Agencies in 2025

In the evolving digital age, government agencies face increasing demand to modernize their services, improve citizen engagement, and deliver seamless digital experiences. These organizations require transformational technologies that not only streamline internal operations but also adopt a citizen-first approach. Salesforce emerges as a key enabler of this transformation, empowering government agencies with tools to build unified, transparent platforms while fostering efficiency and enhancing citizen interaction. Leveraging Salesforce Commerce Cloud and Salesforce CRM, agencies can overcome common challenges and embrace a more digitally enabled public sector. Let’s explore the pressing challenges government agencies face and how Salesforce provides practical, scalable solutions to address them.

1. Citizen Engagement and Accessibility: Bridging the Digital Divide
Challenge: Citizens now expect government services to be as user-friendly and accessible as private-sector experiences. Lengthy response times, disconnected platforms, and inconsistent experiences across digital and physical touchpoints erode trust and hinder accessibility.

2. Data Security and Compliance: Safeguarding Citizen Trust
Challenge: Handling sensitive citizen data requires robust security and strict compliance with regulations like GDPR, CCPA, and other local data privacy laws.

3. Legacy Systems and Integration: Modernizing Infrastructure
Challenge: Legacy systems often limit agility, making it difficult to integrate new technologies and slowing the pace of digital transformation.

4. Budget Constraints: Implementing Cost-Effective Solutions
Challenge: Budget limitations often hinder the adoption of new technologies, especially those requiring significant upfront investment.

5. Efficient Service Delivery: Streamlining Workflows
Challenge: Paper-heavy, bureaucratic processes delay service delivery and frustrate both staff and citizens.

6. Data-Driven Decision-Making: Analytics for Informed Policies
Challenge: Generating actionable insights from vast amounts of data is challenging, affecting policymaking and government efficiency.

7. Enhancing Collaboration: A Unified Workforce
Challenge: Siloed departments hinder collaboration and reduce overall productivity, making it difficult to provide cohesive citizen services.

8. Real-Time Responsiveness: Meeting Citizen Expectations
Challenge: Citizens expect real-time support and proactive communication from government agencies. Delays lead to frustration and diminished trust.

Transforming Government Services with Salesforce

Salesforce Commerce Cloud and Salesforce CRM are tailored to address public sector challenges in 2025. By leveraging these tools, government agencies can meet today’s demands while laying the foundation for innovation. Salesforce offers a clear path to a digitally empowered future.

Ready to Transform? If your agency is ready to embrace digital transformation, streamline operations, and enhance citizen services, Salesforce can help you get there. Let’s discuss how Salesforce solutions, supported by expert implementation, can drive meaningful change for your organization and your citizens.


AI Energy Solution

Could the AI Energy Solution Make AI Unstoppable?

The Rise of Brain-Based AI

In 2002, Jason Padgett, a furniture salesman from Tacoma, Washington, experienced a life-altering transformation after a traumatic brain injury. Following a violent assault, Padgett began to perceive the world through intricate patterns of geometry and fractals, developing a profound, intuitive grasp of advanced mathematical concepts despite no formal education in the subject. His extraordinary abilities, emerging from the brain’s adaptation to injury, revealed an essential truth: the human brain has a remarkable capacity for resilience and reorganization.

This phenomenon underscores the brain’s reliance on inhibition, a critical mechanism that silences or separates neural processes to conserve energy, clarify signals, and enable complex cognition. Researcher Iain McGilchrist highlights that this ability to step back from immediate stimuli fosters reflection and thoughtful action. Yet this foundational trait—key to the brain’s efficiency and adaptability—is absent from today’s dominant AI models.

Current AI systems, like the Transformers powering tools such as ChatGPT, lack inhibition. These models rely on probabilistic predictions derived from massive datasets, resulting in inefficiencies and an inability to learn independently. The rise of brain-based AI seeks to emulate aspects of inhibition, creating systems that are not only more energy-efficient but also capable of learning from real-world, primary data without constant retraining.

The AI Energy Problem

Today’s AI landscape is dominated by Transformer models, known for their ability to process vast amounts of secondary data, such as scraped text, images, and videos. While these models have propelled significant advancements, their insatiable demand for computational power has exposed critical flaws.
As energy costs rise and infrastructure investment balloons, the industry is beginning to reevaluate its reliance on Transformer models. This shift has sparked interest in brain-inspired AI, which promises sustainable solutions through decentralized, self-learning systems that mimic human cognitive efficiency.

What Brain-Based AI Solves

Brain-inspired models aim to address the fundamental challenges of current AI systems: energy consumption, the cost of constant retraining, and the inability to learn independently from primary data. The human brain’s ability to build cohesive perceptions from fragmented inputs—like stitching together a clear visual image from saccades and peripheral signals—serves as a blueprint for these models, demonstrating how advanced functionality can emerge from minimal energy expenditure.

The Secret to Brain Efficiency: A Thousand Brains

Jeff Hawkins, the creator of the Palm Pilot, has dedicated decades to understanding the brain’s neocortex and its potential for AI design. His Thousand Brains Theory of Intelligence posits that the neocortex operates through a universal algorithm, with approximately 150,000 cortical columns functioning as independent processors. These columns identify patterns, sequences, and spatial representations, collaborating to form a cohesive perception of the world. Hawkins’ brain-inspired approach challenges traditional AI paradigms by emphasizing predictive coding and distributed processing, reducing energy demands while enabling real-time learning. Unlike Transformers, which centralize control, brain-based AI uses localized decision-making, creating a more scalable and adaptive system.

Is AI in a Bubble?

Despite immense investment in AI, the market’s focus remains heavily skewed toward infrastructure rather than applications. NVIDIA’s data centers alone generate tens of billions of dollars in annualized revenue, while major AI applications collectively bring in only a small fraction of that.
This imbalance has led to concerns about an AI bubble, reminiscent of the early 2000s dot-com and telecom busts, when overinvestment in infrastructure outpaced actual demand. The sustainability of current AI investments hinges on the viability of new models like brain-based AI. If these systems gain widespread adoption within the next decade, today’s energy-intensive Transformer models may become obsolete, signaling a profound market correction.

Controlling Brain-Based AI: A Philosophical Divide

The rise of brain-based AI introduces not only technical challenges but also philosophical ones. Scholars like Joscha Bach argue for a reductionist approach, constructing intelligence through mathematical models that approximate complex phenomena. Others advocate for holistic designs, warning that purely rational systems may lack the broader perspective needed to navigate ethical and unpredictable scenarios. This philosophical debate mirrors the physical divide in the human brain: one hemisphere excels in reductionist analysis, while the other integrates holistic perspectives. As AI systems grow increasingly complex, the philosophical framework guiding their development will profoundly shape their behavior—and their impact on society.

The future of AI lies in balancing efficiency, adaptability, and ethical design. Whether brain-based models succeed in replacing Transformers will depend not only on their technical advantages but also on our ability to guide their evolution responsibly. As AI inches closer to mimicking human intelligence, the stakes have never been higher.


DHS Introduces AI Framework to Protect Critical Infrastructure

The Department of Homeland Security (DHS) has unveiled the Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure, a voluntary set of guidelines designed to ensure the safe and secure deployment of AI across the systems that power daily life. From energy grids to water systems, transportation, and communications, critical infrastructure increasingly relies on AI for enhanced efficiency and resilience. While AI offers transformative potential—such as detecting earthquakes, optimizing energy usage, and streamlining logistics—it also introduces new vulnerabilities.

Framework Overview

The framework, developed with input from cloud providers, AI developers, critical infrastructure operators, civil society, and public sector organizations, builds on DHS’s broader policies from 2023, which align with White House directives. It aims to provide a shared roadmap for balancing AI’s benefits with its risks.

AI Vulnerabilities in Critical Infrastructure

The DHS framework categorizes vulnerabilities into three key areas. The guidelines also address sector-specific vulnerabilities and offer strategies to ensure AI strengthens resilience while minimizing misuse risks.

Industry and Government Support

Arvind Krishna, Chairman and CEO of IBM, lauded the framework as a “powerful tool” for fostering responsible AI development: “We look forward to working with DHS to promote shared and individual responsibilities in advancing trusted AI systems.” Marc Benioff, CEO of Salesforce, emphasized the framework’s role in fostering collaboration among stakeholders while prioritizing trust and accountability: “Salesforce is committed to humans and AI working together to advance critical infrastructure industries in the U.S. We support this framework as a vital step toward shaping the future of AI in a safe and sustainable manner.” DHS Secretary Alejandro N. Mayorkas highlighted the urgency of proactive action.
“AI offers a once-in-a-generation opportunity to improve the strength and resilience of U.S. critical infrastructure, and we must seize it while minimizing its potential harms. The framework, if widely adopted, will help ensure the safety and security of critical services.”

A Call to Action

DHS encourages widespread adoption of the framework to build safer, more resilient critical infrastructure. By prioritizing trust, transparency, and collaboration, this initiative aims to guide the responsible integration of AI into essential systems, ensuring they remain secure and effective as technology continues to evolve.


AI Won’t Hurt Salesforce

Marc Benioff Dismisses AI Threats, Sets Sights on a Billion AI Agents in One Year

Salesforce CEO Marc Benioff has no doubts about the transformative potential of AI for enterprise software, particularly Salesforce itself. At the core of his vision are AI agents—autonomous software bots designed to handle routine tasks, freeing up human workers to focus on more strategic priorities. “What if your workforce had no limits? That’s a question we couldn’t even ask over the past 25 years of Salesforce—or the 45 years I’ve been in software,” Benioff said during an appearance on TechCrunch’s Equity podcast.

The Billion-Agent Goal

Benioff said that Salesforce’s recently launched Agentforce platform is already being adopted by “hundreds of customers,” and that the company aims to deploy a billion AI agents within a year. These agents are designed to handle tasks across industries—from enhancing customer experiences at retail brands like Gucci to assisting patients with follow-ups in healthcare. To illustrate, Benioff shared his experience with Disney’s virtual Private Tour Guides. “The AI agent analyzed park flow, ride history, and preferences, then guided me to attractions I hadn’t visited before,” he explained.

Competition with Microsoft and the AI Landscape

While Benioff is bullish on AI, he hasn’t hesitated to criticize competitors—particularly Microsoft. When Microsoft unveiled its new autonomous agents for Dynamics 365 in October, Benioff dismissed them as uninspired. “Copilot is the new Clippy,” he quipped, referencing Microsoft’s infamous virtual assistant from the 1990s. Benioff also cited Gartner research highlighting data security issues and administrative flaws in Microsoft’s AI tools, adding, “Copilot has disappointed so many customers. It’s not transforming companies.” However, industry skeptics argue that the real challenge to Salesforce isn’t Microsoft but the wave of AI-powered startups disrupting traditional enterprise software.
With tools like OpenAI’s ChatGPT and Klarna’s in-house AI assistant “Kiki,” companies are starting to explore GenAI solutions that could replace legacy platforms like Salesforce altogether. Klarna, for example, recently announced it was moving away from Salesforce and Workday in favor of GenAI tools that enable seamless, conversational interfaces and faster data access.

Why Salesforce Is Positioned to Win

Despite the noise, Benioff remains confident that Salesforce’s extensive data infrastructure gives it a significant edge. “We manage 230 petabytes of customer data with robust security and sharing models. That’s what allows AI to thrive in our ecosystem,” he said. While companies may question how other platforms like OpenAI handle data, Salesforce offers an integrated approach, reducing the need for complex data migrations to other clouds, such as Microsoft Azure.

Salesforce’s Own Use of AI

Benioff also highlighted Salesforce’s internal adoption of Agentforce, using AI agents in its customer service operations, sales processes, and help centers. “If you’re authenticated on help.salesforce.com, you’re already interacting with our agent,” he noted.

AI Startups: Threat or Opportunity?

As for concerns about AI startups overtaking Salesforce, Benioff sees them as acquisition opportunities rather than existential threats. “We’ve made over 60 acquisitions, many of them startups,” he said. He pointed to Agentforce itself, which was built using technology from Airkit.ai, a startup founded by a former Salesforce employee. Salesforce Ventures initially invested in Airkit.ai before acquiring it and integrating it into the platform.

The Path Forward

Benioff is resolute in his belief that AI won’t hurt Salesforce—instead, it will revolutionize how businesses operate. While skeptics warn of a seismic shift in enterprise software, Benioff’s strategy is clear: lean into AI, leverage data, and stay agile through innovation and acquisitions.
“We’re just getting started,” he concluded, reiterating his vision for a future where AI agents expand the possibilities of work and customer experience like never before.


Healthcare Can Prioritize AI Governance

As artificial intelligence gains momentum in healthcare, it’s critical for health systems and related stakeholders to develop robust AI governance programs. AI’s potential to address challenges in administration, operations, and clinical care is drawing interest across the sector. As this technology evolves, the range of applications in healthcare will only broaden.


AI Data Privacy and Security

Three Key Generative AI Data Privacy and Security Concerns

The rise of generative AI is reshaping the digital landscape, putting powerful tools like ChatGPT and Microsoft Copilot into the hands of professionals, students, and casual users alike. From creating AI-generated art to summarizing complex texts, generative AI (GenAI) is transforming workflows and sparking innovation. For information security and privacy professionals, however, this rapid proliferation also brings significant challenges in data governance and protection. Below are three critical data privacy and security concerns tied to generative AI.

1. Who Owns the Data?

Data ownership is a contentious issue in the age of generative AI. In the European Union, the General Data Protection Regulation (GDPR) asserts that individuals own their personal data. In contrast, data ownership laws in the United States are less clear-cut, with recent state-level regulations echoing GDPR’s principles but failing to resolve ambiguity. Generative AI often ingests vast amounts of data, much of which may not belong to the person uploading it. This creates legal risks for both users and AI model providers, especially when third-party data is involved. Cases surrounding intellectual property, such as controversies involving Slack, Reddit, and LinkedIn, highlight public resistance to having personal data used for AI training. As lawsuits in this arena emerge, prior intellectual property rulings could shape the legal landscape for generative AI.

2. What Data Can Be Derived from LLM Output?

Generative AI models are designed to be helpful, but they can inadvertently expose sensitive or proprietary information submitted during training. This risk has made many wary of uploading critical data into AI models. Techniques like tokenization, anonymization, and pseudonymization can reduce these risks by obscuring sensitive data before it is fed into AI systems.
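The masking techniques named above can be sketched concretely. The snippet below is a minimal, illustrative pseudonymization pass, not a reference implementation: the regexes, token format, and helper names are assumptions. It swaps emails and phone numbers for opaque tokens before text leaves the organization, and restores them in the model's response afterwards.

```python
import re
import uuid

# Patterns for two common kinds of PII (illustrative; real deployments
# would use a vetted detection library, not hand-rolled regexes).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def pseudonymize(text):
    """Replace sensitive substrings with opaque tokens; return masked text
    and the token -> original mapping needed to reverse the substitution."""
    mapping = {}

    def _mask(match):
        token = "<PII_" + uuid.uuid4().hex[:8] + ">"
        mapping[token] = match.group(0)
        return token

    masked = EMAIL_RE.sub(_mask, text)
    masked = PHONE_RE.sub(_mask, masked)
    return masked, mapping

def reidentify(text, mapping):
    """Restore the original values in text returned by the model."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = pseudonymize("Contact Jane at jane@example.com or 555-123-4567.")
assert "jane@example.com" not in masked and "555-123-4567" not in masked
assert reidentify(masked, mapping) == "Contact Jane at jane@example.com or 555-123-4567."
```

The key design point is that the mapping never leaves the organization's boundary; only the masked text is sent to the external model, which is what lets the approach trade a little specificity for a large reduction in exposure.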
However, these practices may compromise the model’s performance by limiting the quality and specificity of the training data. Advocates for GenAI stress that high-quality, accurate data is essential to achieving the best results, which adds to the complexity of balancing privacy with performance.

3. Can the Output Be Trusted?

The phenomenon of “hallucinations” — when generative AI produces incorrect or fabricated information — poses another significant concern. Whether these errors stem from poor training, flawed data, or malicious intent, they raise questions about the reliability of GenAI outputs. The impact of hallucinations varies depending on the context. While some errors may cause minor inconveniences, others could have serious or even dangerous consequences, particularly in sensitive domains like healthcare or legal advisory. As generative AI continues to evolve, ensuring the accuracy and integrity of its outputs will remain a top priority.

The Generative AI Data Governance Imperative

Generative AI’s transformative power lies in its ability to leverage vast amounts of information. For information security, data privacy, and governance professionals, this means grappling with the questions raised above: who owns the data, what can be derived from a model’s output, and whether that output can be trusted. With high stakes and no way to reverse intellectual property violations, the need for robust data governance frameworks is urgent. As society navigates this transformative era, balancing innovation with responsibility will determine whether generative AI becomes a tool for progress or a source of new challenges. While generative AI heralds a bold future, history reminds us that groundbreaking advancements often come with growing pains. It is the responsibility of stakeholders to anticipate and address these challenges to ensure a safer and more equitable AI-powered world.


Python Losing the Crown

For years, Python has been synonymous with data science, thanks to robust libraries like NumPy, Pandas, and scikit-learn. It has long held the crown as the dominant programming language in the field. However, even the strongest kingdoms face threats, and the whispers are growing louder: is Python’s reign nearing its end? Before you fire up your Jupyter notebook to prove me wrong, let me clarify: Python is incredible and undeniably one of the greatest programming languages of all time. But no ruler is without flaws, and Python’s supremacy may not last forever. Here are five reasons why Python’s crown might be slipping.

1. Performance Bottlenecks: Python’s Achilles’ Heel

Let’s address the obvious: Python is slow. Its interpreted nature makes it inherently less efficient than compiled languages like C++ or Java. Sure, libraries like NumPy and tools like Cython help mitigate these issues, but at its core, Python can’t match the raw speed of newer, more performance-oriented languages. Enter Julia and Rust, which are optimized for numerical computing and high-performance tasks. When working with massive, real-time datasets, Python’s performance bottlenecks become harder to ignore, prompting some developers to offload critical tasks to faster alternatives.

2. Python’s Memory Challenges

Memory consumption is another area where Python struggles. Handling large datasets often pushes Python to its limits, especially in environments with constrained resources, such as edge computing or IoT. While tools like Dask can help manage memory more efficiently, these are often stopgap solutions rather than true fixes. Languages like Rust are gaining traction for their superior memory management, making them an attractive alternative for resource-limited scenarios. Picture running a Python-based machine learning model on a Raspberry Pi, only to have it crash due to memory overload. Frustrating, isn’t it?

3. The Rise of Domain-Specific Languages (DSLs)

Python’s versatility has been both its strength and its weakness. As industries mature, many are turning to domain-specific languages tailored to their particular needs. Python may be the “jack of all trades,” but as the saying goes, it risks being the “master of none” compared to these specialized tools.

4. Python’s Simplicity: A Double-Edged Sword

Python’s beginner-friendly syntax is one of its greatest strengths, but it can also create complacency. Its ease of use often means developers don’t delve into the deeper mechanics of algorithms or computing. Meanwhile, languages like Julia, designed for scientific computing, offer intuitive structures for advanced modeling while encouraging developers to engage with complex mathematical concepts. Python’s simplicity is like riding a bike with training wheels: it works, but it may not push you to grow as a developer.

5. AI-Specific Frameworks Are Gaining Ground

Python has been the go-to language for AI, powering frameworks like TensorFlow, PyTorch, and Keras. But new challengers are emerging, and as AI and machine learning evolve, these specialized frameworks could chip away at Python’s dominance.

The Verdict: Is Python Losing the Crown?

Python remains the Swiss Army knife of programming languages, especially in data science. However, its cracks are showing as new, specialized tools and faster languages emerge. The data science landscape is evolving, and Python must adapt or risk losing its crown. For now, Python is still king. But as history has shown, no throne is secure forever. The future belongs to those who innovate, and Python’s ability to evolve will determine whether it remains at the top. The throne of code is only as stable as the next breakthrough.
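The interpreter overhead described in point 1 is easy to see in a micro-benchmark. The snippet below is only a rough illustration (absolute timings vary by machine, and the array size is arbitrary): the vectorized NumPy expression typically runs an order of magnitude or more faster than the equivalent Python loop, because the arithmetic happens in compiled loops instead of per-element bytecode dispatch.

```python
import time
import numpy as np

n = 1_000_000
xs = list(range(n))
arr = np.arange(n, dtype=np.float64)

# Pure-Python loop: one interpreted dispatch per element.
start = time.perf_counter()
loop_result = [x * 0.5 + 2.0 for x in xs]
loop_time = time.perf_counter() - start

# Vectorized NumPy: the same arithmetic in compiled C loops.
start = time.perf_counter()
vec_result = arr * 0.5 + 2.0
vec_time = time.perf_counter() - start

# Both paths compute identical values; only the speed differs.
assert loop_result[10] == vec_result[10] == 7.0
print(f"loop: {loop_time:.4f}s  numpy: {vec_time:.4f}s")
```

This is also why "Python is slow" is only half the story in practice: data science code that stays inside vectorized library calls spends most of its time in compiled code, and the bottleneck appears mainly when logic falls back to per-element Python.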


AI Risk Management

Organizations must acknowledge the risks associated with implementing AI systems in order to use the technology ethically and minimize liability. Throughout history, companies have had to manage the risks that accompany adopting new technologies, and AI is no exception. Some AI risks are similar to those encountered when deploying any new technology or tool: poor strategic alignment with business goals, a lack of the skills needed to support initiatives, and failure to secure buy-in across the organization. For these challenges, executives should rely on the best practices that have guided the successful adoption of other technologies. AI, however, also introduces unique risks that must be addressed head-on as organizations implement and use the technology in the enterprise.

Managing AI Risks

While AI risks cannot be eliminated, they can be managed. Organizations must first recognize and understand these risks, and then implement policies to minimize their negative impact. These policies should ensure the use of high-quality data, require testing and validation to eliminate biases, and mandate ongoing monitoring to identify and address unexpected consequences. Furthermore, ethical considerations should be embedded in AI systems, with frameworks in place to ensure AI produces transparent, fair, and unbiased results. Human oversight is essential to confirm these systems meet established standards. For successful risk management, the involvement of the board and the C-suite is crucial. As noted, "This is not just an IT problem, so all executives need to get involved in this."
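One concrete form the "testing and validation to eliminate biases" above can take is a demographic-parity spot check. The sketch below is a deliberately minimal, hypothetical example (the data and threshold are invented for illustration); a large gap between groups flags a model for human review, it does not by itself prove bias:

```python
from collections import defaultdict

# Hypothetical bias check: compare a model's approval rates across groups.
def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Invented sample of model decisions labeled by demographic group.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = approval_rates(sample)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")  # a gap this large (~0.33) would trigger review
```

In a real monitoring pipeline this kind of metric would run continuously on production decisions, feeding the ongoing-monitoring policy the article describes.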


NYT Issues Cease-and-Desist Letter to Perplexity AI

NYT Issues Cease-and-Desist Letter to Perplexity AI Over Alleged Unauthorized Content Use

The New York Times (NYT) has issued a cease-and-desist letter to Perplexity AI, accusing the AI-powered search startup of using its content without permission. This marks the second time the NYT has confronted a company for allegedly misappropriating its material. According to reports, the Times claims Perplexity is accessing and utilizing its content to generate summaries and other outputs, actions it argues infringe on copyright law. The startup has two weeks to respond to the accusations.

A Growing Pattern of Tensions

Perplexity AI is not the only AI company facing publisher scrutiny. In June, Forbes threatened legal action against the company, alleging "willful infringement" through use of its text and images. In response, Perplexity launched the Perplexity Publishers' Program, a revenue-sharing initiative that collaborates with publishers like Time, Fortune, and The Texas Tribune. Meanwhile, the NYT remains entangled in a separate lawsuit with OpenAI and its partner Microsoft over alleged misuse of its content.

A Strategic Legal Approach

The NYT's decision to issue a cease-and-desist letter instead of pursuing an immediate lawsuit signals a calculated move. "Cease-and-desist approaches are less confrontational, less expensive, and faster," said Sarah Kreps, a professor at Cornell University. The method also opens the door for negotiation, a pragmatic step given the uncharted legal terrain surrounding generative AI and copyright law. Michael Bennett, a responsible AI expert from Northeastern University, echoed this view, suggesting that the cease-and-desist approach positions the Times to protect its intellectual property while maintaining leverage in ongoing legal battles. If the NYT wins its case against OpenAI, Bennett added, it could compel companies like Perplexity to enter financial agreements for content use.
However, if the case doesn't favor the NYT, the publisher risks losing leverage. The letter also serves as a warning to other AI vendors, signaling the NYT's determination to safeguard its intellectual property.

Perplexity's Defense: Facts vs. Expression

Perplexity AI has countered the NYT's claims by asserting that its methods adhere to copyright law. "We aren't scraping data for building foundation models but rather indexing web pages and surfacing factual content as citations," the company stated. It emphasized that facts themselves cannot be copyrighted, drawing parallels to how search engines like Google operate. Kreps noted that Perplexity's approach aligns closely with other AI platforms, which typically index pages to provide factual answers while citing sources. "If Perplexity is culpable, then the entire AI industry could be held accountable," she said, contrasting Perplexity's citation-based model with platforms like ChatGPT, which often lack transparency about data sources.

The Crux of the Copyright Argument

The NYT's cease-and-desist letter centers on the distinction between facts and the creative expression of facts. While raw facts are not protected under copyright, the NYT claims that its specific interpretation and presentation of those facts are. Vincent Allen, an intellectual property attorney, explained that if Perplexity is scraping data and summarizing articles, it may involve making unauthorized copies of copyrighted content, strengthening the NYT's claims. "This is a big deal for content providers," Allen said, "as they want to ensure they're compensated for their work."

Implications for the AI Industry

The outcome of this dispute could set a precedent for how AI platforms handle content generated by publishers. If Perplexity's practices are deemed infringing, it could reshape the operational models of similar AI vendors.
At the heart of the debate is the balance between fostering innovation in AI and protecting intellectual property, a challenge that will likely shape the future of generative AI and its relationship with content creators.


Collaboration Between Humans and AI

The Future of AI: What to Expect in the Next 5 Years

In the next five years, AI will accelerate human life, reshape behaviors, and transform industries; these changes are inevitable. For much of the early 20th century, AI existed mainly in science fiction, where androids, sentient machines, and futuristic societies intrigued fans of the genre. From films like Metropolis to books like I, Robot, AI was the subject of speculative imagination. AI in fiction often over-dramatized reality, encouraging us to suspend disbelief about what was and was not possible. But by the mid-20th century, scientists began working to bring AI into reality.

A Brief History of AI's Impact on Society

The 1956 Dartmouth Summer Research Project on Artificial Intelligence marked a key turning point: John McCarthy coined the term "artificial intelligence" and helped establish a community of AI researchers. Although the initial excitement about AI often outpaced its actual capabilities, significant breakthroughs began emerging by the late 20th century. One such moment was IBM's Deep Blue defeating chess champion Garry Kasparov in 1997, signaling that machines could perform complex cognitive tasks. The rise of big data and Moore's Law, which fueled the exponential growth of computational power, enabled AI to process vast amounts of information and tackle tasks previously handled only by humans. By 2022, generative AI models like ChatGPT proved that machine learning could yield highly sophisticated and captivating technologies. AI's influence is now everywhere: no longer confined to IT circles, it is featured in nearly every new product hitting the market and is part of, if not the tool behind, most commercials. Voice assistants like Alexa, recommendation systems used by Netflix, and autonomous vehicles represent just a glimpse of AI's current role in society.
Yet over the next five years, AI's development is poised to introduce far more profound societal changes.

How AI Will Shape the Future

Industries Most Affected by AI

Long-term Risks of Collaboration Between Humans and AI

AI's potential to pose existential risks has long been a topic of concern. However, the more realistic danger lies in human societies voluntarily ceding control to AI systems. Algorithmic trading in finance, for example, demonstrates how human decisions are already being replaced by AI's ability to operate at unimaginable speeds. Still, fear of AI should not overshadow the opportunities it presents. If organizations shy away from AI out of anxiety, they risk missing out on innovations and efficiency gains. The future of AI depends on a balanced approach that embraces its potential while mitigating its risks. In the coming years, collaboration between humans and AI will drive profound changes across industries, legal frameworks, and societal norms, creating both challenges and opportunities. Tectonic can help you map your AI journey toward the best collaboration between humans and AI.


Promising Patient Engagement Use Cases for GenAI and Chatbots

Generative AI (GenAI) is showing great potential in enhancing patient engagement by easing the burden on healthcare staff and clinicians while streamlining the overall patient experience. As healthcare undergoes its digital transformation, various patient engagement applications for GenAI and chatbots are emerging as promising tools. Key applications of GenAI and patient-facing chatbots include online symptom checkers, appointment scheduling, patient navigation, medical search engines, and even patient portal messaging. These technologies aim to alleviate staff workloads while improving the patient journey, according to some experts. However, patient-facing AI applications are not without challenges, such as the risk of generating medical misinformation or exacerbating healthcare disparities through biased algorithms. As healthcare professionals explore the potential of GenAI and chatbots for patient engagement, they must also ensure safeguards are in place to prevent the spread of inaccuracies and avoid creating health inequities.

Online Symptom Checkers

Online symptom checkers allow healthcare organizations to assess patients' medical concerns without requiring an in-person visit. Patients can input their symptoms, and the AI-powered chatbot will generate a list of possible diagnoses, helping them decide whether to seek urgent care, visit the emergency department, or manage symptoms at home. These tools promise to improve both patient experience and operational efficiency by directing patients to the right care setting, thus reducing unnecessary visits. For healthcare providers, symptom checkers can help triage patients and ensure high-acuity areas are available for those needing critical care. Despite their potential, studies show mixed results regarding the diagnostic accuracy of online symptom checkers.
A 2022 literature review found that diagnostic accuracy for these tools ranged from 19% to 37.9%. However, triage accuracy, meaning referral of patients to the correct care setting, was better, ranging between 48.9% and 90%. Patient reception to symptom checkers has also varied. For example, during the COVID-19 pandemic, symptom checkers were designed to help patients assess whether their symptoms were virus-related. While patients appreciated the tools, they preferred chatbots that displayed human-like qualities and competence; tools perceived as similar in quality to human interactions were favored. Furthermore, some studies indicate that online symptom checkers could deepen health inequities, as users tend to be younger, female, and more digitally literate. To mitigate this, AI developers must create chatbots that can communicate in multiple languages, mimic human interaction, and easily escalate issues to human professionals when needed.

Self-Scheduling and Patient Navigation

GenAI and conversational AI are proving valuable in addressing routine patient queries, like appointment scheduling and patient navigation, tasks that typically fall on healthcare staff. With a strained medical workforce, using AI for lower-level inquiries allows clinicians to focus on more complex tasks. AI-enhanced appointment scheduling systems, for example, not only help patients book visits but also answer logistical questions like parking directions or department locations within a clinic. A December 2023 literature review highlighted that AI-optimized scheduling could reduce provider workload, increase patient satisfaction, and make healthcare more patient-centered. However, key considerations for AI integration include ensuring health equity, broadband access, and patient trust. While AI can manage routine requests, healthcare organizations need to ensure their tools are accessible and functional for diverse populations.
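The triage routing described in the symptom-checker section above can be sketched in a deliberately oversimplified rule-based form. This is a toy illustration only; the symptom lists are invented, and real tools rely on clinically validated models, not hard-coded rules:

```python
# Invented, non-clinical symptom sets for illustration purposes only.
URGENT = {"chest pain", "difficulty breathing", "severe bleeding"}
SAME_DAY = {"high fever", "persistent vomiting"}

def triage(symptoms):
    """Map a list of reported symptoms to a recommended care setting."""
    reported = {s.lower() for s in symptoms}
    if reported & URGENT:          # any red-flag symptom wins
        return "emergency department"
    if reported & SAME_DAY:
        return "urgent care"
    return "self-care / routine appointment"

print(triage(["cough", "high fever"]))  # → urgent care
```

Even in this toy form, the structure mirrors the tools' reported strength: routing to the right care setting is an easier problem than producing an accurate diagnosis, which matches the accuracy figures cited above.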
Online Medical Research

GenAI tools like ChatGPT are contributing to the "Dr. Google" phenomenon, where patients search online for medical information before seeing a healthcare provider. While some clinicians have been cautious about these tools, research suggests they can effectively provide accurate medical information. For instance, an April 2023 study showed that ChatGPT answered 88% of breast cancer screening questions correctly. Another study in May 2023 demonstrated that the tool could adequately educate patients on colonoscopy preparation. In both cases, the information was presented in an easy-to-understand format, essential for improving health literacy. However, GenAI is not without flaws. Patients express concern about the reliability of AI-generated information, with a 2023 Wolters Kluwer survey showing that 49% of respondents worry about false information from GenAI. Additionally, many are uneasy about the unknown sources and validation processes behind the information. To build patient trust, AI developers must ensure the accuracy of their source material and provide supplementary authoritative resources like patient education materials.

Patient Portal Messaging and Provider Communication

Generative AI has also found use in patient portal messaging, where it can draft responses on behalf of healthcare providers. This feature has the potential to reduce clinician burnout by handling routine inquiries. A study conducted at Mass General Brigham in April 2024 revealed that a large language model embedded in a secure messaging tool could generate acceptable responses to patient questions; in 58% of cases, chatbot-generated messages required human editing. Interestingly, other research has found that AI-generated responses in patient portals are often more empathetic than those written by overworked healthcare providers.
Nevertheless, AI responses should always be reviewed by a clinician to ensure accuracy before being sent to patients. Generative AI is also making strides in clinical decision support and ambient documentation, further boosting healthcare efficiency. However, as healthcare organizations adopt these technologies, they must address concerns around algorithmic bias and ensure patient safety remains a top priority.


A Company in Transition

OpenAI Restructures: Increased Flexibility, But Raises Concerns

OpenAI's decision to restructure into a for-profit entity offers more freedom for the company and its investors but raises questions about its commitment to ethical AI development. Founded in 2015 as a nonprofit, OpenAI transitioned to a hybrid model in 2019 with the creation of a for-profit subsidiary. Now, its restructuring, widely reported this week, signals a shift in which the nonprofit arm will no longer influence the day-to-day operations of the for-profit side. CEO Sam Altman is set to receive equity in the newly restructured company, which will operate as a benefit corporation, similar to competitors like Anthropic and Sama.

A Company in Transition

This move comes on the heels of a turbulent year. OpenAI's board initially voted to remove Altman over concerns about transparency, but rehired him after significant backlash and the resignation of several board members. The company has seen a number of high-profile departures since, including co-founder Ilya Sutskever, who left in May to start Safe Superintelligence (SSI), an AI safety-focused venture that recently secured $1 billion in funding. This week, CTO Mira Murati, along with key research leaders Bob McGrew and Barret Zoph, also announced their departures. OpenAI's restructuring also coincides with an anticipated multi-billion-dollar investment round involving major players such as Nvidia, Apple, and Microsoft, potentially pushing the company's valuation to as high as $150 billion.

Complex But Expected Move

According to Michael Bennett, AI policy advisor at Northeastern University, the restructuring isn't surprising given OpenAI's rapid growth and increasingly complex structure. "Considering OpenAI's valuation, it's understandable that the company would simplify its governance to better align with investor priorities," said Bennett.
The transition to a benefit corporation signals a shift toward prioritizing shareholder interests, but it also raises concerns about whether OpenAI will maintain its ethical obligations. "By moving away from its nonprofit roots, OpenAI may scale back its commitment to ethical AI," Bennett noted.

Ethical and Safety Concerns

OpenAI has faced scrutiny over its rapid deployment of generative AI models, including its release of ChatGPT in November 2022. Critics, including Elon Musk, have accused the company of failing to be transparent about the data and methods it uses to train its models. Musk, a co-founder of OpenAI, even filed a lawsuit alleging breach of contract. Concerns persist that the restructuring could lead to less ethical oversight, particularly in preventing issues like biased outputs, hallucinations, and broader societal harm from AI. Despite the potential risks, Bennett acknowledged that the company would have greater operational freedom. "They will likely move faster and with greater focus on what benefits their shareholders," he said. This could come at the expense of the ethical commitments OpenAI previously emphasized when it was a nonprofit.

Governance and Regulation

Some industry voices, however, argue that OpenAI's structure shouldn't dictate its commitment to ethical AI. Veera Siivonen, co-founder and chief commercial officer of AI governance vendor Saidot, emphasized the role of regulation in ensuring responsible AI development. "Major players like Anthropic, Cohere, and tech giants such as Google and Meta are all for-profit entities," Siivonen said. "It's unfair to expect OpenAI to operate under a nonprofit model when others in the industry aren't bound by the same restrictions." Siivonen also pointed to OpenAI's participation in global AI governance initiatives. The company recently signed the European Union AI Pact, a voluntary agreement to adhere to the principles of the EU's AI Act, signaling its commitment to safety and ethics.
Challenges for Enterprises

The restructuring raises potential concerns for enterprises relying on OpenAI's technology, said Dion Hinchcliffe, an analyst with Futurum Group. OpenAI may be able to innovate faster under its new structure, but the reduced influence of nonprofit oversight could make some companies question the vendor's long-term commitment to safety. Hinchcliffe noted that the departure of key staff could signal a shift away from prioritizing AI safety, potentially prompting enterprises to reconsider their trust in OpenAI.

New Developments Amid Restructuring

Despite the ongoing changes, OpenAI continues to roll out new technologies. The company recently introduced a new moderation model, "omni-moderation-latest," built on GPT-4o. This model, available through the Moderation API, enables developers to flag harmful content in both text and image outputs. As OpenAI navigates its restructuring, balancing rapid innovation with maintaining ethical standards will be crucial to sustaining enterprise trust and market leadership.


Securing AI for Efficiency and Building Customer Trust

As businesses increasingly adopt AI to enhance automation, decision-making, customer support, and growth, they face crucial security and privacy considerations. The Salesforce Platform, with its integrated Einstein Trust Layer, enables organizations to leverage AI securely by ensuring robust data protection, privacy compliance, transparent AI functionality, strict access controls, and detailed audit trails.

Why Secure AI Workflows Matter

AI technology empowers systems to mimic human-like behaviors, such as learning and problem-solving, through advanced algorithms and large datasets that leverage machine learning. As data volumes grow, securing the sensitive information used in AI systems becomes more challenging. A recent Salesforce study found that 68% of analytics and IT teams expect data volumes to increase over the next 12 months, underscoring the need for secure AI implementations.

AI for Business: Predictive and Generative Models

In business, AI depends on trusted data to provide actionable recommendations. Two primary types of AI models support business functions: predictive models, which use historical data to forecast outcomes and recommend next actions, and generative models, which produce new content in response to prompts.

Addressing Key LLM Risks

Salesforce's Einstein Trust Layer addresses common risks associated with large language models (LLMs) and offers guidance for secure generative AI deployment. This includes ensuring data security, managing access, and maintaining transparency and accountability in AI-driven decisions.
Leveraging AI to Boost Efficiency

Businesses gain a competitive edge with AI by improving efficiency and the customer experience.

Four Strategies for Secure AI Implementation

To ensure data protection in AI workflows, businesses should follow a deliberate set of strategies governing how data is accessed, protected, and audited.

The Einstein Trust Layer: Protecting AI-Driven Data

Salesforce's Einstein Trust Layer addresses the security and privacy challenges of adopting AI in business, offering reliable data security, privacy protection, transparent AI operations, and robust access controls. Through this secure approach, businesses can maximize AI benefits while safeguarding customer trust and meeting compliance requirements.
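One safeguard a trust layer of this kind typically automates is masking sensitive data before a prompt ever leaves the application. The sketch below is a minimal, hypothetical illustration of that idea only; it is not Salesforce's implementation, and the regex patterns are simplistic placeholders:

```python
import re

# Hypothetical PII-masking step, loosely illustrating what a trust layer
# might do before sending a prompt to an external LLM. Not Salesforce code.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the case for jane.doe@example.com, call 555-123-4567 if unclear."
print(mask_pii(prompt))
# → Summarize the case for [EMAIL], call [PHONE] if unclear.
```

A production trust layer would go further, re-hydrating the placeholders in the model's response, enforcing access controls, and writing an audit trail entry for each masked prompt.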
