AI Energy Consumption

At the Gartner IT Symposium/Xpo 2024, industry leaders emphasized that rising energy consumption and costs are fast becoming constraints on IT capabilities. Solutions discussed included adopting acceleration technologies, exploring microgrids, and watching emerging energy-efficient technologies. With enterprise AI applications expanding, computing demands, and the energy needed to support them, are rapidly increasing.

Nvidia CEO Jensen Huang highlighted this challenge, noting that advancements in traditional computing are failing to keep pace with data processing needs. "If compute demand grows exponentially while general-purpose performance stagnates, you'll face not just cost inflation but significant energy inflation," he said. Huang suggested that accelerated computing can mitigate some of these impacts by improving energy efficiency.

Another approach highlighted was the use of microgrids, with Gartner predicting that Fortune 500 companies will shift up to $500 billion toward such systems by 2027 to manage ongoing energy risks and AI demand. Gartner's Daryl Plummer noted that these independent energy networks could help energy-intensive enterprises avoid dependence on strained public power grids. Hyperscalers, including major cloud providers, are already exploring alternative power sources, such as nuclear energy, to meet escalating demands. Microsoft, for instance, has announced plans to source energy from the Three Mile Island nuclear plant.

While emerging technologies like quantum, neuromorphic, and photonic computing promise significant energy efficiency, they are still years from maturity. Gartner analyst Frank Buytendijk predicted it will take five to ten years before these options become viable. "Energy-efficient computing is on the horizon, but we have a ways to go," he said. Until then, enterprises will need proactive strategies to manage energy risks and costs as part of their AI and IT planning.

AI Assistants Using LangGraph

In the evolving world of AI, retrieval-augmented generation (RAG) systems have become standard for handling straightforward queries and generating contextually relevant responses. As demand grows for more sophisticated AI applications, however, systems need to move beyond simple retrieval tasks. Enter AI agents: autonomous entities capable of executing complex, multi-step processes, maintaining state across interactions, and dynamically adapting to new information. LangGraph, a powerful extension of the LangChain library, is designed to help developers build these advanced AI agents, enabling stateful, multi-actor applications with cyclic computation capabilities.

In this insight, we'll explore how LangGraph changes AI agent development and provide a step-by-step guide to building your own agent, using an example that computes energy savings for solar panels. The example demonstrates how LangGraph's features enable intelligent, adaptable, and practical AI systems.

What is LangGraph?

LangGraph is an advanced library built on top of LangChain, designed to extend Large Language Model (LLM) applications by introducing cyclic computational capabilities. While LangChain allows for the creation of Directed Acyclic Graphs (DAGs) for linear workflows, LangGraph enhances this by enabling the addition of cycles, which are essential for developing agent-like behaviors. These cycles allow LLMs to loop through processes continuously, making decisions dynamically based on evolving inputs.

LangGraph: Nodes, States, and Edges

The core of LangGraph lies in its stateful graph structure: nodes represent units of work (an LLM call, a tool invocation, or any Python function), edges define how control flows from one node to the next, and a shared state object is passed through the graph and updated as execution proceeds. LangGraph manages the graph structure, state, and coordination, allowing for the creation of sophisticated, multi-actor applications. With automatic state management and precise agent coordination, it facilitates innovative workflows while minimizing technical complexity, and its scalability supports robust, reliable systems at the enterprise level.
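To make the idea of cycles concrete before the full example, here is a minimal, self-contained sketch (assuming a recent langgraph release): a single node loops on itself via a conditional edge until a value in the state reaches a goal.

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

class CounterState(TypedDict):
    count: int

def increment(state: CounterState) -> CounterState:
    # A node's return value is merged into the shared state.
    return {"count": state["count"] + 1}

def should_continue(state: CounterState) -> str:
    # Route back to the same node (a cycle) until the goal is reached.
    return "increment" if state["count"] < 3 else END

builder = StateGraph(CounterState)
builder.add_node("increment", increment)
builder.add_edge(START, "increment")
builder.add_conditional_edges("increment", should_continue)
graph = builder.compile()

print(graph.invoke({"count": 0}))  # {'count': 3}
```

This loop-until-done pattern, a node plus a conditional edge pointing back to it, is exactly what the assistant/tools cycle in the guide below uses at a larger scale.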
Step-by-step Guide

Now that we understand LangGraph's capabilities, let's work through a practical example: an AI agent that calculates potential energy savings for solar panels based on user input. Such an agent could serve as a lead-generation tool on a solar panel seller's website, providing personalized savings estimates from key data like monthly electricity costs. The example highlights how LangGraph can automate complex tasks and deliver business value.

Step 1: Import Necessary Libraries

We start by importing the essential Python libraries and modules for the project.

```python
import boto3
from typing import Annotated
from typing_extensions import TypedDict

from langchain_core.tools import tool
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import Runnable, RunnableLambda
from langchain_core.messages import ToolMessage
from langchain_aws import ChatBedrock

from langgraph.graph import StateGraph, START          # used in Step 7
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.checkpoint.memory import MemorySaver    # used in Step 7
from langgraph.prebuilt import ToolNode, tools_condition
```

Step 2: Define the Tool for Calculating Solar Savings

Next, we define a tool to calculate potential energy savings based on the user's monthly electricity cost.

```python
@tool
def compute_savings(monthly_cost: float) -> dict:
    """
    Compute the potential savings when switching to solar energy,
    based on the user's monthly electricity cost.

    Args:
        monthly_cost (float): The user's current monthly electricity cost.

    Returns:
        dict: A dictionary containing:
            - 'number_of_panels': The estimated number of solar panels required.
            - 'installation_cost': The estimated installation cost.
            - 'net_savings_10_years': The net savings over 10 years after installation costs.
    """
    def calculate_solar_savings(monthly_cost):
        # Fixed assumptions behind the estimate
        cost_per_kWh = 0.28
        cost_per_watt = 1.50
        sunlight_hours_per_day = 3.5
        panel_wattage = 350
        system_lifetime_years = 10

        monthly_consumption_kWh = monthly_cost / cost_per_kWh
        daily_energy_production = monthly_consumption_kWh / 30
        system_size_kW = daily_energy_production / sunlight_hours_per_day
        number_of_panels = system_size_kW * 1000 / panel_wattage
        installation_cost = system_size_kW * 1000 * cost_per_watt
        annual_savings = monthly_cost * 12
        total_savings_10_years = annual_savings * system_lifetime_years
        net_savings = total_savings_10_years - installation_cost

        return {
            "number_of_panels": round(number_of_panels),
            "installation_cost": round(installation_cost, 2),
            "net_savings_10_years": round(net_savings, 2),
        }

    return calculate_solar_savings(monthly_cost)
```

For a $100 monthly bill, these assumptions work out to roughly 10 panels, about $5,102 in installation cost, and about $6,898 in net savings over ten years.

Step 3: Set Up State Management and Error Handling

We define utilities to manage state and handle errors during tool execution.

```python
def handle_tool_error(state) -> dict:
    # Surface the tool error back to the LLM so it can correct itself.
    error = state.get("error")
    tool_calls = state["messages"][-1].tool_calls
    return {
        "messages": [
            ToolMessage(
                content=f"Error: {repr(error)}\nPlease fix your mistakes.",
                tool_call_id=tc["id"],
            )
            for tc in tool_calls
        ]
    }

def create_tool_node_with_fallback(tools: list):
    return ToolNode(tools).with_fallbacks(
        [RunnableLambda(handle_tool_error)], exception_key="error"
    )
```

Step 4: Define the State and Assistant Class

We create the state management class and the assistant responsible for interacting with users.

```python
class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

class Assistant:
    def __init__(self, runnable: Runnable):
        self.runnable = runnable

    def __call__(self, state: State):
        while True:
            result = self.runnable.invoke(state)
            # If the LLM returns an empty response with no tool calls, re-prompt it.
            if not result.tool_calls and (
                not result.content
                or isinstance(result.content, list)
                and not result.content[0].get("text")
            ):
                messages = state["messages"] + [("user", "Respond with a real output.")]
                state = {**state, "messages": messages}
            else:
                break
        return {"messages": result}
```

Step 5: Set Up the LLM with AWS Bedrock

We configure AWS Bedrock to enable advanced LLM capabilities.

```python
def get_bedrock_client(region):
    return boto3.client("bedrock-runtime", region_name=region)

def create_bedrock_llm(client):
    return ChatBedrock(
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        client=client,
        model_kwargs={"temperature": 0},
        region_name="us-east-1",
    )

llm = create_bedrock_llm(get_bedrock_client(region="us-east-1"))
```
Step 6: Define the Assistant's Workflow

We create a prompt template and bind the tools to the assistant's workflow.

```python
primary_assistant_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            '''You are a helpful customer support assistant for Solar Panels Belgium.
Get the following information from the user:
- monthly electricity cost
Ask for clarification if necessary.''',
        ),
        ("placeholder", "{messages}"),
    ]
)

part_1_tools = [compute_savings]
part_1_assistant_runnable = primary_assistant_prompt | llm.bind_tools(part_1_tools)
```

Step 7: Build the Graph Structure

We define nodes and edges for managing the AI assistant's conversation flow.

```python
builder = StateGraph(State)
builder.add_node("assistant", Assistant(part_1_assistant_runnable))
builder.add_node("tools", create_tool_node_with_fallback(part_1_tools))

builder.add_edge(START, "assistant")
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "assistant")  # the cycle: tool results flow back to the assistant

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
```

Step 8: Running the Assistant

The assistant can now be run through its graph structure to interact with users.

```python
import uuid

def _print_event(event, printed: set):
    # Minimal stand-in for the tutorial's event printer, which the original
    # article references but never defines: show each new message once.
    message = event.get("messages")
    if message:
        if isinstance(message, list):
            message = message[-1]
        if message.id not in printed:
            print(message.pretty_repr())
            printed.add(message.id)

tutorial_questions = [
    "hey",
    "can you calculate my energy saving",
    "my monthly cost is $100, what will I save",
]

thread_id = str(uuid.uuid4())
config = {"configurable": {"thread_id": thread_id}}

_printed = set()
for question in tutorial_questions:
    events = graph.stream(
        {"messages": ("user", question)}, config, stream_mode="values"
    )
    for event in events:
        _print_event(event, _printed)
```

Conclusion

By following these steps, you can create an AI assistant with LangGraph that calculates solar panel savings from user input. This tutorial demonstrates how LangGraph empowers developers to build intelligent, adaptable systems capable of handling complex tasks efficiently. Whether your application is in customer support, energy management, or another domain, LangGraph provides the building blocks to support it.

Market Insights and Forecast for Quote Generation Software

Market Insights and Forecast for Quote Generation Software for Salesforce (2024-2031): Key Players, Technology Advancements, and Growth Opportunities

A recent research report by WMR delves into the Quote Generation Software for Salesforce Market, offering over 150 pages of in-depth analysis of the business strategies employed by both leading and emerging industry players. The study provides insights into market developments, technological advancements, drivers, opportunities, and overall market status. Understanding market segments is essential to identifying the key factors driving growth.

Comprehensive Market Insights

The report provides an extensive analysis of the global market landscape, including business expansion strategies designed to increase revenue. It compiles critical data about target customers, evaluating the potential success of products and services prior to launch. The research offers valuable insights for stakeholders, including detailed updates on the impact of COVID-19 on business operations and the broader market. The report assesses whether a target market aligns with an enterprise's goals, emphasizing that market success hinges on understanding the target audience.

Key Players Featured:

Market Segmentation By Types:

By Applications:

Geographical Overview

The Quote Generation Software for Salesforce Market varies significantly across regions, driven by factors such as economic development, technical advancements, and cultural differences. Businesses looking to expand globally must account for these variations to leverage local opportunities effectively. Key regions include:

Competitive Landscape

The report offers a detailed competitive analysis, highlighting:

Highlights from the Report

Key Market Questions Addressed:

Reasons to Purchase this Report:

This report provides a valuable roadmap for businesses aiming to navigate the evolving Quote Generation Software for Salesforce Market, helping them make informed decisions and strategically position themselves for growth.

UX Principles for AI in Healthcare

The Role of UX in AI-Driven Healthcare

AI is poised to revolutionize the global economy, with predictions it could contribute $15.7 trillion by 2030, more than the combined economic output of China and India. Among the industries likely to see the most transformative impact is healthcare. However, during my time at NHS Digital, I saw how systems that weren't designed with existing clinical workflows in mind added unnecessary complexity for clinicians, often leading to manual workarounds and errors due to fragmented data entry across systems. The risk is that AI, if not designed with user experience (UX) at the forefront, could exacerbate these issues, creating more disruption rather than solving problems. From diagnostic tools to consumer health apps, the role of UX in AI-driven healthcare is critical to making these innovations effective and user-friendly. This article explores the intersection of UX and AI in healthcare, outlining key UX principles for designing better AI-driven experiences and highlighting trends shaping the future of healthcare.

The Shift in Human-Computer Interaction with AI

AI fundamentally changes how humans interact with computers. Traditionally, users took command by entering inputs, clicking, typing, and adjusting settings until the desired outcome was achieved. The computer followed instructions, while the user remained in control of each step. With AI, this dynamic shifts dramatically: users specify their goal, and the AI determines how to achieve it. For example, rather than manually creating an illustration, users might instruct AI to "design a graphic for AI-driven healthcare with simple shapes and bold colors." While this saves time, it introduces challenges around ensuring the results meet user expectations, especially when the process behind AI decisions is opaque.

The Importance of UX in AI for Healthcare

A significant challenge in healthcare AI is the "black box" nature of the systems. Consider a radiologist reviewing a lung X-ray that an AI flagged as normal, despite the presence of concerning lesions. Research has shown that commercial AI systems can perform worse than radiologists when multiple health issues are present. When AI decisions are unclear, clinicians may question the system's reliability, especially if they cannot understand the rationale behind a recommendation. This opacity hinders feedback, making it difficult to improve the system's performance. Addressing it is essential work for UX designers.

Bias in AI is another significant issue. Many healthcare AI tools have been documented as biased, such as systems trained on predominantly male cardiovascular data, which can fail to detect heart disease in women. AIs also struggle to identify conditions like melanoma in people with darker skin tones due to insufficient diversity in training datasets. UX can help mitigate these biases by designing interfaces that clearly explain the data used in decisions, highlight missing information, and provide confidence levels for predictions. The movement toward eXplainable AI (XAI) seeks to make AI systems more transparent and interpretable for human users.

UX Principles for AI in Healthcare

To ensure AI is beneficial in real-world healthcare settings, UX designers must prioritize certain principles.
Below are key UX design principles for AI-enabled healthcare applications:

Applications of AI in Healthcare

AI is already making a significant impact in various healthcare applications, including:

Real-world deployments of AI in healthcare have demonstrated that while AI can be useful, its effectiveness depends heavily on usability and UX design. By adhering to the principles of transparency, interpretability, controllability, and human-centered AI, designers can help create AI-enabled healthcare applications that are both powerful and user-friendly.
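As a small illustration of the transparency and confidence-level principles discussed above, the sketch below formats an AI finding for a clinician. It is purely hypothetical: the labels, thresholds, and wording are assumptions for demonstration, not guidance from any deployed system.

```python
# Hypothetical rendering of an AI finding with confidence and data caveats.
def render_finding(label: str, confidence: float, missing_inputs: list[str]) -> str:
    band = "high" if confidence >= 0.9 else "moderate" if confidence >= 0.7 else "low"
    lines = [
        f"AI finding: {label}",
        f"Confidence: {confidence:.0%} ({band}) - decision support, not a diagnosis",
    ]
    if missing_inputs:
        # Surface what the model did NOT see, so clinicians can weigh the output.
        lines.append("Missing inputs: " + ", ".join(missing_inputs))
    return "\n".join(lines)

print(render_finding("possible lung lesion", 0.72, ["prior imaging", "smoking history"]))
```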

Rise of Agentforce

The Rise of Agentforce: How AI Agents Are Shaping the Future of Work

Salesforce wrapped up its annual Dreamforce conference this September, leaving attendees with more than just memories of John Mulaney's quips. As swarms of Waymos ferried participants across a cleaner-than-usual San Francisco, it became clear that AI-powered agents, dubbed Agentforce, are poised to transform the workplace. These agents, controlled within Salesforce's ecosystem, could significantly change how work is done and how customer experiences are delivered. Dreamforce has always been known for its bold predictions about the future, but this year's vision of AI-based agents felt particularly compelling. These agents represent the next frontier in workplace automation, but as exciting as this future is, some important questions remain.

Reality Check on the Agentforce Vision

During his keynote, Salesforce CEO Marc Benioff raised an interesting point: "Why would our agents be so low-hallucinogenic?" While the agents have access to vast amounts of data, workflows, and services, they currently function best within Salesforce's own environment. Benioff even claimed that Salesforce pioneered prompt engineering, a statement that, for some, might have evoked a scene from Austin Powers, with Dr. Evil humorously taking credit for inventing the question mark. But can Salesforce fully realize its vision for Agentforce? If it succeeds, it could transform how work gets done. However, as with many AI-driven innovations, the real question lies in interoperability.

The Open vs. Closed Debate

As powerful as Salesforce's ecosystem is, not all business data and workflows live within it. If the future of work involves a network of AI agents working together, how far can a closed ecosystem like Salesforce's really go? Apple, Microsoft, Amazon, and other tech giants also have their sights set on AI-driven agents, and the race is on to own this massive opportunity. As we've seen in previous waves of technology, this raises familiar debates about open versus closed systems. Without a standard for agents to work together across platforms, businesses could find themselves limited. Closed ecosystems may help solve some problems, but to unlock the full potential of AI agents, they must be able to operate seamlessly across different platforms and boundaries.

Looking to the Open Web for Inspiration

The solution may lie in the same principles that guide the open web. Just as mobile apps often require a web view to enable a wide array of outcomes, the same might be necessary in the multi-agent landscape. Tools like Slack's Block Kit framework allow for simple agent interactions, but they aren't enough for more complex use cases. Take Clockwise Prism, for example, a sophisticated scheduling agent designed to find meeting times when there's no obvious availability. When it is integrated with other agents to secure a critical meeting, businesses will need a flexible interface to explore multiple scheduling options. A web view for agents could be the key.

The Need for an Open Multi-Agent Standard

Benioff repeatedly stressed that businesses don't want "DIY agents." Enterprises seek controlled, repeatable workflows that deliver consistent value, but they also don't want to be siloed. This is why the future requires an open standard for agents to collaborate across ecosystems and platforms. Imagine initiating a set of work agents from within an Atlassian Jira ticket that's connected to a Salesforce customer case, or vice versa.
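To make the interoperability problem concrete, here is a purely hypothetical sketch of what a cross-platform agent handoff message might contain. No such schema has been standardized (OMAP, mentioned below, is only a draft), so every field name here is an illustrative assumption, not part of any published protocol.

```python
# Hypothetical cross-platform agent request envelope (illustrative only).
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentRequest:
    origin: str      # system and object that initiated the request (assumed URI style)
    target: str      # agent expected to handle it
    intent: str      # what the caller wants done
    context: dict    # cross-referenced business objects

request = AgentRequest(
    origin="jira://PROJ-123",
    target="salesforce://case-agent",
    intent="summarize_linked_case",
    context={"case_id": "500XX000001", "priority": "high"},
)
print(json.dumps(asdict(request), indent=2))
```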
For agents to interact seamlessly regardless of the system they originate from, a standard is needed. This would allow businesses to deploy agents in a way that's consistent, integrated, and scalable.

User Experience and Human-in-the-Loop: Crucial Elements for AI Agents

A significant insight from the integration of LangChain with Assistant-UI highlighted a crucial factor: user experience (UX). Whether it's streaming, generative interfaces, or human-in-the-loop functionality, the UX of AI agents is critical. While agents need to respond quickly and efficiently, businesses must be able to involve humans in decision-making when necessary. This principle of human-in-the-loop is key to the agent's scheduling process. While automation is the goal, involving the user at crucial points, such as confirming scheduling options, ensures that the agent remains reliable and adaptable. Any future standard must prioritize this capability, allowing for user involvement where necessary while enabling full automation when confidence levels are high.

Generative or Native UI?

The discussion about user interfaces for agents often leads to a debate between generative UI and native UI. The latter may be the better approach. A native UI, controlled by the responding service or agent, ensures the interface is tailored to the context and specifics of the agent's task. Whether this UI is rendered using AI or not is an implementation detail that can vary by service. What matters is that the UI feels native to the agent's task, making the user experience seamless and intuitive.

What's Next? The Push for an Open Multi-Agent Future

As we look ahead to the multi-agent future, the need for an open standard is more pressing than ever. At Clockwise, we've drafted something we're calling the Open Multi-Agent Protocol (OMAP), which we hope will foster collaboration and innovation in this space. The future of work is rapidly approaching, and new roles, like Agent Orchestrators, will emerge, enabling people to leverage AI agents in unprecedented ways. While Salesforce's vision for Agentforce is ambitious, the key to unlocking its full potential lies in creating a standard that allows agents to work together, across platforms and beyond the boundaries of closed ecosystems. With the right approach, we can create a future where AI agents transform work in ways we're only beginning to imagine.

Salesforce Flow is Here

Hello, Salesforce Flow. Goodbye, Workflow Rules and Process Builder.

As Bob Dylan famously sang, "The times they are a-changin'." If your nonprofit is still relying on Workflow Rules and Process Builder to automate tasks in Salesforce, it's time to prepare for change. These tools are being retired, but there's no need to panic: Salesforce Flow, a more powerful, versatile automation tool, is ready to take the lead.

Why Move to Salesforce Flow?

Salesforce is consolidating its automation features into one unified platform: Flow. This shift comes with significant benefits for nonprofits:

What This Means for Nonprofits

While existing Workflow Rules and Process Builders will still function for now, Salesforce plans to end support by December 31, 2025. This means no more updates or bug fixes, and unsupported automations could break unexpectedly soon after the deadline. To avoid disruptions, nonprofits should start migrating their automations to Flow sooner rather than later.

How to Transition to Salesforce Flow

Resources to Simplify Migration:

Planning Your Migration: Start by auditing your existing automations to determine which Workflow Rules and Process Builders need to be transitioned. Think strategically about how to improve processes and leverage Flow's expanded capabilities.

What Can Flow Do for Your Nonprofit?

Salesforce Flow empowers nonprofits to automate processes in innovative ways:

Don't Go It Alone

Transitioning to Salesforce Flow may seem overwhelming, but it's a chance to elevate your nonprofit's automation capabilities. Whether you need help with migration tools, strategic planning, or Flow development, you don't have to do it alone. Reach out to our support team or contact us to get started. Together, we can make this transition seamless and set your nonprofit up for long-term success with Salesforce Flow.

Ready for AI Agents

Brands that can effectively integrate agentic AI into their operations stand to gain a significant competitive edge. But as with any innovation, success will depend on balancing the promise of automation with the complexities of trust, privacy, and user experience.

AI Risk Management

Organizations must acknowledge the risks of implementing AI systems in order to use the technology ethically and minimize liability. Throughout history, companies have had to manage the risks that come with adopting new technologies, and AI is no exception. Some AI risks are similar to those of deploying any new technology or tool: poor strategic alignment with business goals, a lack of skills to support initiatives, and failure to secure buy-in across the organization. For these challenges, executives should rely on the best practices that have guided the successful adoption of other technologies. In the case of AI, this includes:

However, AI introduces unique risks that must be addressed head-on. Here are 15 areas of concern that can arise as organizations implement and use AI technologies in the enterprise:

Managing AI Risks

While AI risks cannot be eliminated, they can be managed. Organizations must first recognize and understand these risks, and then implement policies to minimize their negative impact. These policies should ensure the use of high-quality data, require testing and validation to eliminate biases, and mandate ongoing monitoring to identify and address unexpected consequences. Furthermore, ethical considerations should be embedded in AI systems, with frameworks in place to ensure AI produces transparent, fair, and unbiased results. Human oversight is essential to confirm these systems meet established standards.

For successful risk management, the involvement of the board and the C-suite is crucial. As one expert noted, "This is not just an IT problem, so all executives need to get involved in this."

Cohere-Powered Slack Agents

Salesforce AI and Cohere-Powered Slack Agents: Seamless CRM Data Interaction and Enhanced Productivity

Slack agents, powered by Salesforce AI and integrated with Cohere, enable seamless interaction with CRM data within the Slack platform. These agents allow teams to use natural language to surface data insights and take action, simplifying workflows. With Slack's AI Workflow Builder and support for third-party AI agents, including Cohere, productivity is further enhanced through automated processes and customizable AI assistants. By leveraging these technologies, Slack agents give users direct access to CRM data and AI-powered insights, improving efficiency and collaboration.

Key Features of Slack Agents: Salesforce AI and Cohere

Productivity Enhancements with Slack Agents: Salesforce AI and Cohere

AI Agent Capabilities in Slack: Salesforce and Cohere

Data Security and Compliance for Slack Agents

FAQ

What are Slack agents, and how do they integrate with Salesforce AI and Cohere?
Slack agents are AI-powered assistants that enable teams to interact with CRM data directly within Slack. Salesforce AI agents allow natural language data interactions, while Cohere's integration enhances productivity with customizable AI assistants and automated workflows.

How do Salesforce AI agents in Slack improve team productivity?
Salesforce AI agents enable users to interact with both CRM and conversational data, update records, and analyze opportunities using natural language. This integration improves workflow efficiency, leading to a reported 47% productivity boost.

What features does the Cohere integration with Slack AI offer?
Cohere integration offers customizable AI assistants that can help generate workflows, summarize channel content, and provide intelligent responses to user queries within Slack.

How do Slack agents handle data security and compliance?
Slack agents leverage cloud-native DLP solutions, automatically detecting sensitive data across different file types and setting up automated remediation processes for enhanced security and compliance.

Can Slack agents work with AI providers beyond Salesforce and Cohere?
Yes. Slack supports AI agents from various providers. In addition to Salesforce AI and Cohere, integrations include Adobe Express, Anthropic, Perplexity, IBM, and Amazon Q Business, offering users a wide array of AI-powered capabilities.

AI in Networking

AI Tools in Networking: Tailoring Capabilities to Unique Needs

AI tools are becoming increasingly common across industries, offering a wide range of functionality. Network engineers, however, may not need every capability these tools provide. Each network has distinct requirements that align with specific business objectives, so network engineers and developers must select AI toolsets tailored to their networks' needs. While network teams often want similar AI capabilities, they also encounter common challenges in integrating these tools into their systems.

The Rise of AI in Networking

Though AI is not a new concept, having existed for decades in the form of automated and expert systems, it is gaining unprecedented attention. According to Jim Frey, principal analyst for networking at TechTarget's Enterprise Strategy Group, many organizations have not fully grasped AI's potential in production environments over the past three years. "AI has been around for a long time, but the interesting thing is, only a minority, not even half, have really said they're using it effectively in production for the last three years," Frey noted.

Generative AI (GenAI) has significantly contributed to this renewed interest. Shamus McGillicuddy, vice president of research at Enterprise Management Associates, categorizes AI tools into two main types: GenAI and AIOps (AI for IT operations). "Generative AI, like ChatGPT, has recently surged in popularity, becoming a focal point of discussion among IT professionals," McGillicuddy explained. "AIOps, on the other hand, encompasses machine learning, anomaly detection, and analytics." The increasing complexity of networks is another factor driving adoption: Frey highlighted that the demands of modern network environments are beyond human capability to manage manually, making AI engines a vital solution.

Essential AI Tool Capabilities for Networks

While individual network needs vary, many network engineers seek similar functionality when integrating AI. Commonly desired capabilities include:

According to McGillicuddy's research, network optimization and automated troubleshooting are among the most popular use cases for AI. However, many professionals prefer to retain manual oversight of the fixing process. "Automated troubleshooting can identify and analyze issues, but typically, people want to approve the proposed fixes," McGillicuddy stated. Many of these capabilities are critical for enhancing security and mitigating threats: Frey emphasized that networking professionals increasingly view AI as a tool to improve organizational security, and DeCarlo echoed this sentiment, noting that network managers share similar objectives with security professionals regarding proactive problem recognition. Frey also mentioned alternative use cases, such as documentation and change recommendations, which, while less popular, can offer significant value to network teams. Ultimately, the relevance of any AI capability hinges on its fit within the network environment and team needs. "I don't think you can prioritize one capability over another," DeCarlo remarked. "It depends on the tools being used and their effectiveness."

Generative AI: A New Frontier

Despite its recent emergence, GenAI has quickly become an asset in the networking field. McGillicuddy noted that in the past year and a half, network professionals have adopted GenAI tools, with ChatGPT being one of the most recognized examples.
"One user reported that leveraging ChatGPT could reduce a task that typically takes four hours down to just 10 minutes," McGillicuddy said. However, he cautioned that users must understand the limitations of GenAI, as mistakes can occur. "There's a risk of errors or 'hallucinations' with these tools, and having blind faith in their outputs can lead to significant network issues," he warned.

In addition to ChatGPT, vendors are developing GenAI interfaces for their products, including virtual assistants. According to McGillicuddy's findings, common use cases for vendor GenAI products include:

DeCarlo added that GenAI tools offer valuable training capabilities due to their rapid processing speeds and in-depth analysis, which can speed knowledge acquisition within the network. Frey attributed GenAI's rise to its ability to outperform older systems that lacked sophistication. Nevertheless, the complexity of GenAI infrastructures has created demand for AIOps tools to manage these systems effectively. "We won't be able to manage GenAI infrastructures without the support of AI tools, as human capabilities cannot keep pace with rapid changes," Frey asserted.

Challenges in Implementing AI Tools

While AI tools offer significant benefits, network engineers and managers must navigate several challenges before integration.

Data Privacy, Collection, and Quality

Data usage remains a critical concern for organizations considering AIOps and GenAI tools. Frey noted that the diverse nature of network data, which combines operational information with personally identifiable information, heightens data privacy concerns. For GenAI, McGillicuddy pointed out the importance of validating AI outputs and ensuring high-quality training data. "If you feed poor data to a generative AI tool, it will struggle to accurately understand your network," he explained.

Complexity of AI Tools

Frey and McGillicuddy agreed that the complexity of both AI and network systems can hinder effective deployment. AI systems, especially GenAI, require careful tuning and strong recommendations to minimize inaccuracies, Frey said. McGillicuddy added that intricate network infrastructures, particularly multivendor ones, can limit the effectiveness of AIOps components, which are often specialized for specific systems.

User Uptake and Skills Gaps

User adoption of AI tools poses a significant challenge, and proper training is essential to realize their full benefits. Some network professionals may be resistant to using AI, while others may lack the knowledge to integrate these tools effectively. McGillicuddy noted that AIOps tools are often less intuitive than GenAI, so users need a certain level of expertise to extract value. "Understanding how tools function and identifying potential gaps can be challenging," DeCarlo added. The learning curve can be steep, particularly for teams accustomed to longstanding tools.

Integration Issues

Integration challenges can further complicate user adoption. McGillicuddy highlighted two dimensions of the issue: tools and processes. On the tools side, concerns arise about harmonizing GenAI with existing systems. "On the process side, it's crucial to ensure that teams utilize these tools effectively," he said. DeCarlo cautioned that organizations might need to build in-house supplemental tools to bridge integration gaps, complicating synchronization with vendor AI offerings.
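As a toy illustration of the anomaly-detection capability McGillicuddy groups under AIOps, the sketch below flags outliers in a stream of latency samples. The data and the two-standard-deviation threshold are invented for the example; production AIOps systems use far more sophisticated models.

```python
# Flag latency samples more than two standard deviations from the mean.
from statistics import mean, stdev

latency_ms = [12, 11, 13, 12, 14, 11, 95, 12, 13]  # one obvious spike
mu, sigma = mean(latency_ms), stdev(latency_ms)

anomalies = [x for x in latency_ms if abs(x - mu) > 2 * sigma]
print(anomalies)  # [95]
```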

LLMs and AI

Large Language Models (LLMs): Revolutionizing AI and Custom Solutions

Large Language Models (LLMs) are transforming artificial intelligence by enabling machines to generate and comprehend human-like text, making them indispensable across numerous industries. The global LLM market is experiencing explosive growth, projected to rise from $1.59 billion in 2023 to $259.8 billion by 2030. This surge is driven by increasing demand for automated content creation, advances in AI technology, and the need for improved human-machine communication. Several factors are propelling this growth, including advancements in AI and Natural Language Processing (NLP), large datasets, and the rising importance of seamless human-machine interaction. Additionally, private LLMs are gaining traction as businesses seek more control over their data and customization. These private models provide tailored solutions, reduce dependency on third-party providers, and enhance data privacy. This guide walks you through building your own private LLM, offering valuable insights for both newcomers and seasoned professionals.

What are Large Language Models?

Large Language Models are advanced AI systems that generate human-like text by processing vast amounts of data using sophisticated neural networks, such as transformers. These models excel at tasks such as content creation, language translation, question answering, and conversation, making them valuable across industries, from customer service to data analysis. LLMs are generally classified into three types:

LLMs learn language rules by analyzing vast text datasets, much as reading numerous books helps someone understand a language. Once trained, these models can generate content, answer questions, and engage in meaningful conversations. For example, an LLM can write a story about a space mission based on knowledge gained from reading space adventure stories, or it can explain photosynthesis using information drawn from biology texts.

Building a Private LLM

Data Curation for LLMs

Recent LLMs, such as Llama 3 and GPT-4, are trained on massive datasets: Llama 3 on 15 trillion tokens and GPT-4 on 6.5 trillion tokens. These datasets are drawn from diverse sources, including social media (140 trillion tokens), academic texts, and private data, with sizes ranging from hundreds of terabytes to multiple petabytes. This breadth of training enables LLMs to develop a deep understanding of language, covering diverse patterns, vocabularies, and contexts. Common data sources for LLMs include:

Data Preprocessing

After data collection, the data must be cleaned and structured. Key steps include:

LLM Training Loop

Key training stages include:

Evaluating Your LLM

After training, it is crucial to assess the LLM's performance using industry-standard benchmarks. When fine-tuning LLMs for specific applications, tailor your evaluation metrics to the task. For instance, in healthcare, matching disease descriptions with appropriate codes may be a top priority.

Conclusion

Building a private LLM provides unmatched customization, enhanced data privacy, and optimized performance. From data curation to model evaluation, this guide has outlined the essential steps to create an LLM tailored to your specific needs. Whether you're just starting or seeking to refine your skills, building a private LLM can empower your organization with state-of-the-art AI capabilities. For expert guidance or to kickstart your LLM journey, feel free to contact us for a free consultation.
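As a concrete illustration of the cleaning and deduplication work described in the preprocessing step above, here is a minimal sketch using only the Python standard library. Real pipelines involve far more (tokenization, quality filtering, near-duplicate detection), and the sample documents are invented for the example.

```python
# Illustrative text cleaning and exact-duplicate removal for a training corpus.
import hashlib
import re

def clean(text: str) -> str:
    text = re.sub(r"<[^>]+>", " ", text)      # strip leftover HTML tags
    text = re.sub(r"\s+", " ", text).strip()  # normalize whitespace
    return text

def dedupe(docs: list[str]) -> list[str]:
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

raw = ["<p>Hello   world</p>", "Hello world", "Another   document"]
print(dedupe([clean(d) for d in raw]))  # ['Hello world', 'Another document']
```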

Generative AI Energy Consumption Rises

Generative AI Energy Consumption Rises, but Impact on ROI Unclear

The energy costs associated with generative AI (GenAI) are often overlooked in enterprise financial planning. However, industry experts suggest that IT leaders should account for the power consumption that comes with adopting this technology. When building a business case for generative AI, some costs are evident, like large language model (LLM) fees and SaaS subscriptions. Other costs, such as preparing data, upgrading cloud infrastructure, and managing organizational change, are less visible but significant.

One often overlooked cost is the energy consumption of generative AI. Training LLMs and responding to user requests, whether answering questions or generating images, demands considerable computing power. These tasks generate heat and necessitate sophisticated cooling systems in data centers, which, in turn, consume additional energy. Despite this, most enterprises have not focused on the energy requirements of GenAI. The issue is, however, gaining attention at a broader level. The International Energy Agency (IEA) has forecast that electricity consumption from data centers, AI, and cryptocurrency could double by 2026. By that time, data centers' electricity use could exceed 1,000 terawatt-hours, equivalent to Japan's total electricity consumption. Goldman Sachs also flagged the growing energy demand, attributing it partly to AI; the firm projects that global data center electricity use could more than double by 2030, fueled by AI and other factors.

ROI Implications of Energy Costs

The extent to which rising energy consumption will affect GenAI's return on investment (ROI) remains unclear. For now, the perceived benefits of GenAI seem to outweigh concerns about energy costs. Most businesses have not been directly affected, as these costs tend to fall on hyperscalers. Google, for instance, reported a 13% increase in greenhouse gas emissions in 2023, largely due to AI-related energy demands in its data centers. Scott Likens, PwC's global chief AI engineering officer, noted that while energy consumption isn't a barrier to adoption, it should still be factored into long-term strategies. "You don't take it for granted. There's a cost somewhere for the enterprise," he said.

Energy Costs: Hidden but Present

Although energy expenses may not appear on an enterprise's invoice, they are still present. Generative AI's energy consumption is tied to both model training and inference: each time a user makes a query, the system expends energy to generate a response. While the energy used for an individual query is minor, the cumulative effect across millions of users adds up. How these costs are passed to customers is somewhat opaque. Licensing fees for enterprise versions of GenAI products likely include energy costs, spread across the user base. According to PwC's Likens, the costs associated with training models are shared among many users, reducing the burden on individual enterprises. On the inference side, GenAI vendors charge for tokens, which correspond to computational power. Although increased token usage signals higher energy consumption, the financial impact on enterprises has so far been minimal, especially as token costs have decreased. The dynamic may be similar to buying an EV to save on gas, only to spend hundreds of dollars and lose hours at charging stations.
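To see how individually tiny query costs accumulate, here is a back-of-envelope estimate. Every figure in it is an assumption chosen for illustration, not a vendor-reported number; published per-query energy estimates vary widely.

```python
# Back-of-envelope aggregate inference energy estimate (illustrative only).
WH_PER_QUERY = 0.3          # assumed energy per LLM query, in watt-hours
QUERIES_PER_DAY = 10_000_000  # assumed query volume across a large user base
PRICE_PER_KWH = 0.10        # assumed industrial electricity price, USD

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1000
print(f"Daily energy: {daily_kwh:,.0f} kWh")               # 3,000 kWh
print(f"Daily cost:   ${daily_kwh * PRICE_PER_KWH:,.0f}")  # $300
print(f"Annual cost:  ${daily_kwh * PRICE_PER_KWH * 365:,.0f}")  # $109,500
```

Even with these modest assumptions, a fraction of a watt-hour per query compounds into six figures a year, which is why the cost shows up somewhere, even if not on the enterprise's own invoice.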
Energy as an Indirect Concern

While energy costs haven't been top of mind for GenAI adopters, they could address the issue indirectly by focusing on other deployment challenges, such as reducing latency and improving cost efficiency. Newer models, such as OpenAI's GPT-4o mini, are more economical and have helped organizations scale GenAI without prohibitive costs. Organizations may also use smaller, fine-tuned models to decrease latency and energy consumption. By adopting multimodel approaches, enterprises can choose models based on the complexity of a task, optimizing for both speed and energy efficiency.

The Data Center Dilemma

As enterprises weigh GenAI's energy demands, data centers face the challenge head-on, investing in more sophisticated cooling systems to handle the heat generated by AI workloads. According to the Dell'Oro Group, the data center physical infrastructure market grew in the second quarter of 2024, signaling the start of the "AI growth cycle" for infrastructure sales, particularly thermal management systems. Liquid cooling, more efficient than air cooling, is gaining traction as a way to manage the heat from high-performance computing and is expected to see rapid growth in the coming years as demand for AI workloads continues to increase.

Nuclear Power and AI Energy Demands

To meet AI's growing energy demands, some hyperscalers are exploring nuclear energy for their data centers. AWS, Google, and Microsoft are among the companies exploring this option, with AWS acquiring a nuclear-powered data center campus earlier this year. Nuclear power could help these tech giants keep pace with AI's energy requirements while also meeting sustainability goals, though tying AI's growth to a build-out of nuclear plants may alienate some of the public.

As GenAI continues to evolve, both energy costs and efficiency are likely to play a greater role in decision-making. PwC has already begun including carbon impact in its GenAI value framework, which assesses the full scope of generative AI deployments. "The cost of carbon is in there, so we shouldn't ignore it," Likens said.

AI Agents and Digital Transformation

In the rapidly developing world of technology, Artificial Intelligence (AI) is revolutionizing industries and reshaping how we interact with digital systems. One of the most promising advancements within AI is the development of AI agents. These intelligent entities, often powered by Large Language Models (LLMs), are driving the next wave of digital transformation by enabling automation, personalization, and enhanced decision-making across various sectors. AI agents and digital transformation are here to stay.

What is an AI Agent?

An AI agent, or intelligent agent, is a software entity capable of perceiving its environment, reasoning about its actions, and autonomously working toward specific goals. These agents mimic human-like behavior using advanced algorithms, data processing, and machine-learning models to interact with users and complete tasks.

LLMs to AI Agents: An Evolution

The evolution of AI agents is closely tied to the rise of Large Language Models. Models like GPT (Generative Pre-trained Transformer) have showcased remarkable abilities to understand and generate human-like text. This development has enabled AI agents to interpret complex language inputs, facilitating advanced interactions with users.

Key Capabilities of LLM-Based Agents

LLM-powered agents possess several key advantages:

Two Major Types of LLM Agents

LLM agents are classified into two main categories:

Multi-Agent Systems (MAS)

A Multi-Agent System (MAS) is a group of autonomous agents working together to achieve shared goals or solve complex problems. MAS applications span robotics, economics, and distributed computing, where agents interact to optimize processes.

AI Agent Architecture and Key Elements

AI agents generally follow a modular architecture comprising:

Learning Strategies for LLM-Based Agents

AI agents utilize various learning techniques, including supervised, reinforcement, and self-supervised learning, to adapt and improve their performance in dynamic environments.

How Autonomous AI Agents Operate

Autonomous AI agents act independently of human intervention by perceiving their surroundings, reasoning through possible actions, and making decisions autonomously to achieve set goals (a minimal sketch of this loop appears at the end of this insight).

AI Agents' Transformative Power Across Industries

AI agents are transforming numerous industries by automating tasks, enhancing efficiency, and providing data-driven insights. Here's a look at some key use cases:

Platforms Powering AI Agents

The Benefits of AI Agents and Digital Transformation

AI agents offer several advantages, including:

The Future of AI Agents

The potential of AI agents is immense. As AI technology advances, we can expect more sophisticated agents capable of complex reasoning, adaptive learning, and deeper integration into everyday tasks. The future promises a world where AI agents collaborate with humans to drive innovation, enhance efficiency, and unlock new opportunities for growth in the digital age. By partnering with AI development specialists at Tectonic, organizations can access cutting-edge solutions tailored to their needs, positioning themselves to stay ahead in the rapidly evolving AI-driven market.
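As promised above, here is a minimal sketch of the perceive-reason-act loop that autonomous agents follow. It is purely illustrative: the environment, decision rule, and goal are hypothetical stand-ins for the LLM-driven reasoning a real agent would use.

```python
# Toy perceive-reason-act loop (illustrative assumptions throughout).
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: int
    state: int = 0
    history: list = field(default_factory=list)

    def perceive(self, observation: int) -> None:
        # Update internal state from the environment.
        self.state = observation

    def reason(self) -> str:
        # Decide the next action from the current state and the goal.
        return "increment" if self.state < self.goal else "stop"

    def act(self, action: str) -> int:
        # Apply the action and return the new observation.
        self.history.append(action)
        return self.state + 1 if action == "increment" else self.state

agent = Agent(goal=3)
observation = 0
while True:
    agent.perceive(observation)
    action = agent.reason()
    if action == "stop":
        break
    observation = agent.act(action)

print(agent.history)  # ['increment', 'increment', 'increment']
```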

GPUs and AI Development

Graphics processing units (GPUs) have become widely recognized for their growing role in AI development. However, a lesser-known but critical technology is also gaining attention: high-bandwidth memory (HBM). HBM is a high-density memory designed to overcome bottlenecks and maximize data transfer speeds between storage and processors. AI chipmakers like Nvidia rely on HBM for its superior bandwidth and energy efficiency. Its placement next to the GPU's processor chip gives it a performance edge over traditional server RAM, which sits between storage and the processing unit. HBM's lower power consumption makes it ideal for AI model training, which demands significant energy. However, as the AI landscape transitions from model training to AI inferencing, HBM's widespread adoption may slow. According to Gartner's 2023 forecast, the use of accelerator chips incorporating HBM for AI model training is expected to decline from 65% in 2022 to 30% by 2027, as inferencing becomes more cost-effective with traditional technologies.

How HBM Differs from Other Memory

HBM shares similarities with other memory technologies, such as graphics double data rate (GDDR), in delivering high bandwidth for graphics-intensive tasks. But HBM stands out due to its unique positioning. Unlike GDDR, which sits on the printed circuit board of the GPU, HBM is placed directly beside the processor, enhancing speed by reducing the signal delays caused by longer interconnections. This proximity, combined with a stacked DRAM architecture, boosts performance over GDDR's side-by-side chip design. The stacked approach adds complexity, however. HBM relies on through-silicon vias (TSVs), electrical connections drilled through the DRAM dies, which require larger die sizes and increase production costs. According to analysts, this makes HBM more expensive and less efficient to manufacture than server DRAM, leading to higher yield losses during production.

AI's Demand for HBM

Despite its manufacturing challenges, demand for HBM is surging due to its importance in AI model training. Major suppliers like SK Hynix, Samsung, and Micron have expanded production to meet this demand, with Micron reporting that its HBM is sold out through 2025. TrendForce predicts that HBM will contribute to record revenues for the memory industry in 2025. The high demand for GPUs, especially from Nvidia, drives the need for HBM as AI companies focus on accelerating model training. Hyperscalers, looking to monetize AI, are investing heavily in HBM to speed up the process.

HBM's Future in AI

While HBM has proven essential for AI training, its future is less certain as the focus shifts to AI inferencing, which requires less intensive memory resources. As inferencing becomes more prevalent, companies may opt for more affordable and widely available memory. Experts also see HBM following the trajectory of other memory technologies, with continuous efforts to increase bandwidth and density. The next generation, HBM3E, is already in production, with HBM4 planned for release in 2026, promising even higher speeds. Ultimately, HBM adoption will depend on market demand, especially from hyperscalers. If AI continues to push the limits of GPU performance, HBM could remain a critical component; if businesses prioritize cost efficiency over peak performance, HBM's growth may level off.
Data Labeling

Data Labeling: Essential for Machine Learning and AI

Data labeling is the process of identifying and tagging data samples, and it is essential for training machine learning (ML) models. While it can be done manually, software often assists in automating the process. Data labeling is critical for helping ML models make accurate predictions and is widely used in fields such as computer vision, natural language processing (NLP), and speech recognition.

How Data Labeling Works

The process begins with collecting raw data, such as images or text, which is then annotated with specific labels to provide context for ML models. These labels need to be precise, informative, and independent to ensure high-quality model training. In computer vision, for instance, labeling can tag images of animals so that a model learns their common features and correctly identifies animals in new, unlabeled data. Similarly, in autonomous vehicles, labeling helps the AI differentiate between pedestrians, cars, and other objects, enabling safe navigation.

Why Data Labeling Is Important

Data labeling is integral to supervised learning, the branch of machine learning in which models are trained on labeled data. Through labeled examples, the model learns the relationships between input data and the desired output, which improves its accuracy in real-world applications. For example, an algorithm trained on labeled emails can classify future emails as spam or not spam based on those labels. Labeling also underpins more advanced applications such as self-driving cars, where the model must understand its surroundings by recognizing and labeling objects like roads, signs, and obstacles.

Applications of Data Labeling

The Data Labeling Process

Data labeling involves several key steps, from collecting raw data through annotation and quality review. Errors at any step can degrade the model's performance, so many organizations adopt a human-in-the-loop approach, involving people in quality control to improve the accuracy of labels.

Data Labeling vs. Data Classification vs. Data Annotation

Types of Data Labeling

Benefits and Challenges

Methods of Data Labeling

Companies can label data through various methods, and each organization must choose the approach that fits its needs based on factors such as data volume, staff expertise, and budget.

The Growing Importance of Data Labeling

As AI and ML become more pervasive, the need for high-quality data labeling increases. Data labeling not only helps train models but also creates new jobs in the AI ecosystem. Companies such as Alibaba, Amazon, Facebook, Tesla, and Waymo all rely on data labeling for applications ranging from e-commerce recommendations to autonomous driving.

Looking Ahead

Data labeling tools are becoming more sophisticated, reducing the need for manual work while ensuring higher data quality. As data privacy regulations tighten, businesses must also ensure that their labeling practices comply with local, state, and federal laws. In conclusion, labeling is a crucial step in building effective machine learning models, driving innovation, and ensuring that AI systems perform accurately across a wide range of applications.
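As a minimal sketch of the spam example above, the following Python snippet trains a classifier on labeled emails and applies it to new, unlabeled ones. It assumes scikit-learn is installed, and the tiny inline dataset is invented purely for illustration; a real project would use thousands of labeled samples.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each sample is paired with a human-assigned label: this is the labeled data.
emails = [
    "Win a free prize now, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Claim your exclusive reward, limited offer",
    "Can you review the quarterly report draft?",
]
labels = ["spam", "not spam", "spam", "not spam"]

# The model learns the relationship between the email text and its label.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# New, unlabeled emails are classified based on what the labels taught the model.
print(model.predict(["Free reward, click now", "Agenda for tomorrow's meeting"]))
# Likely output on this toy data: ['spam' 'not spam']

Everything downstream of the fit call depends on the quality of those four labels, which is exactly why the human-in-the-loop review described above matters.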