ChatGPT Archives - gettectonic.com - Page 5
GPT 4o and GPT 4


OpenAI’s GPT-4o: Advancing the Frontier of AI

OpenAI’s GPT-4o builds upon the foundation of its predecessors with significant enhancements, including improved multimodal capabilities and faster performance.

Evolution of ChatGPT and Its Underlying Models

Since the launch of ChatGPT in late 2022, both the chatbot interface and its underlying models have seen several major updates. GPT-4o, released in May 2024, succeeds GPT-4, which launched in March 2023, and was followed by GPT-4o mini in July 2024. GPT-4 and GPT-4o (with “o” standing for “omni”) are advanced generative AI models developed for the ChatGPT interface. Both models generate natural-sounding text in response to user prompts and can engage in interactive, back-and-forth conversations, retaining memory and context to inform future responses. TechTarget Editorial tested these models by using them within ChatGPT, reviewing OpenAI’s informational materials and technical documentation, and analyzing user reviews on Reddit, tech blogs, and the OpenAI developer forum.

Differences Between the GPTs

While GPT-4o and GPT-4 share similarities, including vision and audio capabilities, a 128,000-token context window, and a knowledge cutoff in late 2023, they also differ significantly in multimodal capabilities, performance, efficiency, pricing, and language support.

Introduction of GPT-4o Mini

On July 18, 2024, OpenAI introduced GPT-4o mini, a cost-efficient, smaller model designed to replace GPT-3.5. GPT-4o mini outperforms GPT-3.5 while being more affordable. Aimed at developers seeking to build AI applications without the compute costs of larger models, GPT-4o mini is positioned as a competitor to other small language models like Anthropic’s Claude 3 Haiku. All users on ChatGPT Free, Plus, and Team plans received access to GPT-4o mini at launch, with ChatGPT Enterprise users expected to gain access shortly afterward.
The model supports text and vision, and OpenAI plans to add support for other multimodal inputs like video and audio.

Multimodality

Multimodal AI models process multiple data types such as text, images, and audio. Both GPT-4 and GPT-4o support multimodality in the ChatGPT interface, allowing users to create and upload images and use voice chat. However, their approaches to multimodality differ significantly. GPT-4 primarily focuses on text processing, requiring other OpenAI models like DALL-E for image generation and Whisper for speech recognition to handle non-text input. In contrast, GPT-4o was designed for multimodality from the ground up, with all inputs and outputs processed by a single neural network. This design makes GPT-4o faster for tasks involving multiple data types, such as image analysis.

Controversy Over GPT-4o’s Voice Capabilities

During the GPT-4o launch demo, a voice called Sky, which sounded similar to Scarlett Johansson’s AI character in the film “Her,” sparked controversy. Johansson, who had declined a previous request to use her voice, announced legal action. In response, OpenAI paused the use of Sky’s voice, highlighting ethical concerns over voice likenesses and artists’ rights in the AI era.

Performance and Efficiency

GPT-4o is designed to be quicker and more efficient than GPT-4. OpenAI claims GPT-4o is twice as fast as the most recent version of GPT-4. In tests, GPT-4o generally responded faster than GPT-4, although not quite at double the speed. OpenAI’s testing indicates GPT-4o outperforms GPT-4 on major benchmarks, including math, language comprehension, and vision understanding.

Pricing

GPT-4o’s improved efficiency translates to lower costs. For API users, GPT-4o is available at $5 per million input tokens and $15 per million output tokens, compared to GPT-4’s $30 per million input tokens and $60 per million output tokens. GPT-4o mini is even cheaper, at $0.15 per million input tokens and $0.60 per million output tokens.
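To put these per-million-token prices in concrete terms, here is a rough cost sketch. The prices are those quoted above; the usage profile (requests per month, tokens per request) is a hypothetical example, not a benchmark.

```python
# Rough API cost comparison using the per-million-token prices quoted above.
# The usage profile (request volume, tokens per request) is hypothetical.

PRICES = {  # (input $/1M tokens, output $/1M tokens)
    "gpt-4o":      (5.00, 15.00),
    "gpt-4":       (30.00, 60.00),
    "gpt-4o-mini": (0.15, 0.60),
}

def monthly_cost(model, requests, in_tokens_per_req, out_tokens_per_req):
    """Estimate monthly API spend for a given model and usage profile."""
    in_price, out_price = PRICES[model]
    total_in_millions = requests * in_tokens_per_req / 1_000_000
    total_out_millions = requests * out_tokens_per_req / 1_000_000
    return total_in_millions * in_price + total_out_millions * out_price

# Example: 100,000 requests/month, 1,000 input and 500 output tokens each.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 100_000, 1_000, 500):,.2f}")
```

At that volume the gap is stark: roughly $1,250/month on GPT-4o versus $6,000 on GPT-4, and about $45 on GPT-4o mini.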
GPT-4o will power the free version of ChatGPT, offering multimodality and higher-quality text responses to free users. GPT-4 remains available only to paying customers on plans starting at $20 per month.

Language Support

GPT-4o offers better support for non-English languages compared to GPT-4, particularly for languages that don’t use Western alphabets. This improvement addresses longstanding issues in natural language processing, making GPT-4o more effective for global applications.

Is GPT-4o Better Than GPT-4?

In most cases, GPT-4o is superior to GPT-4, with improved speed, lower costs, and multimodal capabilities. However, some users may still prefer GPT-4 for its stability and familiarity, especially in critical applications. Transitioning to GPT-4o may involve significant changes for systems tightly integrated with GPT-4.

What Does GPT-4o Mean for ChatGPT Users?

GPT-4o’s introduction brings significant changes, including the availability of multimodal capabilities for all users. While these advancements may make the Plus subscription less appealing, paid plans still offer benefits like higher usage caps and faster response times. As the AI community looks forward to GPT-5, expected later this summer, the introduction of GPT-4o sets a new standard for AI capabilities, offering powerful tools for users and developers alike.

Generative AI Overview


Editor’s Note: AI Cloud, Einstein GPT, and other cloud GPT products are now Einstein.

The Rise of Generative AI: What It Means for Business and CRM

Generative artificial intelligence (AI) made headlines in late 2022, sparking widespread curiosity and questions about its potential impact on various industries.

What is Generative AI?

Generative AI is a technology that creates new content—such as poetry, emails, images, or music—based on a set of input data. Unlike traditional AI, which focuses on classifying or predicting, generative AI can produce novel content with a human-like understanding of language, as noted by Salesforce Chief Scientist Silvio Savarese. However, successful generative AI depends on the quality of the input data. “AI is only as good as the data you give it, and you must ensure that datasets are representative,” emphasizes Paula Goldman, Salesforce’s Chief Ethical and Humane Use Officer.

How Does Generative AI Work?

Generative AI can be developed using several deep learning approaches, including Variational Autoencoders (VAEs) and Neural Radiance Fields (NeRFs), which generate new data or create 2D and 3D images based on sample data.

Generative AI and Business

Generative AI has captured the attention of global business leaders. A recent Salesforce survey found that 67% of IT leaders are focusing on generative AI in the next 18 months, with 33% considering it a top priority. Salesforce has long been exploring generative AI applications. For instance, CodeGen helps transform simple English prompts into executable code, and LAVIS makes language-vision AI accessible to researchers. More recently, Salesforce’s ProGen project demonstrated the creation of novel proteins using AI, potentially advancing medicine and treatment development.
Ketan Karkhanis, Salesforce’s Executive VP and GM of Sales Cloud, highlights that generative AI benefits not just large enterprises but also small and medium-sized businesses (SMBs) by automating proposals, customer communications, and predictive sales modeling.

Challenges and Ethical Considerations

Despite its potential, generative AI poses risks, as noted by Paula Goldman and Kathy Baxter of Salesforce’s Ethical AI practice. They stress the importance of responsible innovation to ensure that generative AI is used safely and ethically. Accuracy in AI recommendations is crucial, and the authoritative tone of models like ChatGPT can sometimes lead to misleading results. Salesforce is committed to building trusted AI with embedded guardrails to prevent misuse. As generative AI evolves, it’s vital to balance its capabilities with ethical considerations, including its environmental impact. Generative AI can increase IT energy use, which 71% of IT leaders acknowledge.

Generative AI at Salesforce

Salesforce has integrated AI into its platform for years, with Einstein AI providing billions of daily predictions to enhance sales, service, and customer understanding. The recent launch of Einstein GPT, the world’s first generative AI for CRM, aims to transform how businesses interact with customers by automating content creation across various functions. Salesforce Ventures is also expanding its Generative AI Fund to $500 million, supporting AI startups and fostering responsible AI development. This expansion includes investments in companies like Anthropic and Cohere. As Salesforce continues to lead in AI innovation, the focus remains on creating technology that is inclusive, responsible, and sustainable, paving the way for the future of CRM and business.

The Future of Business: AI-Powered Leadership and Decision-Making

Tomorrow’s business landscape will be transformed by specialized, autonomous AI agents that will significantly change how companies are run.
Future leaders will depend on these AI agents to support and enhance their teams, with AI chiefs of staff overseeing these agents and harnessing their capabilities. New AI-powered tools will bring businesses closer to their customers and enable faster, more informed decision-making. This shift is not just a trend—it’s backed by significant evidence. The Slack Workforce Index reveals a sevenfold increase in leaders seeking to integrate AI tools since September 2023. Additionally, Salesforce research shows that nearly 80% of global workers are open to an AI-driven future. While the pace of these changes may vary, it is clear that the future of work will look vastly different from today.

As Mick Costigan, VP of Salesforce Futures, puts it: “In the [still] early phases of a major technology shift, we tend to over-focus on the application of technology innovations to existing workflows. Such advances are important, but closing the imagination gap about the possible new shapes of work requires us to consider more than just technology. It requires us to think about people, both as the customers who react to new offerings and as the employees who are responsible for delivering them.” Some will eagerly adopt new technology. Others will resist and drag their feet.

AI Assisting Nursing


Leveraging AI to Alleviate the Documentation Burden in Nursing

As the nursing profession grapples with increasing burnout, researchers are investigating the potential of large language models to streamline clinical documentation and care planning. Nurses play an essential role in delivering high-quality care and improving patient outcomes, but the profession is under significant strain due to shortages and burnout. AI assisting nursing could lessen burnout while improving communication. What role could Salesforce play?

The American Nurses Association (ANA) emphasizes that to maximize nurses’ potential, healthcare organizations must prioritize maintaining an adequate workforce, fostering healthy work environments, and supporting policies that back nurses. The COVID-19 pandemic has exacerbated existing challenges, including increased healthcare demand, insufficient workforce support, and a wave of retirements outpacing the influx of new nurses. Tectonic has nearly two decades of experience providing IT solutions for the healthcare industry. Salesforce, as a leader in the field of artificial intelligence, is a top tool for healthcare IT.

AI Assisting Nursing

In response to these growing demands, some experts argue that AI technologies could help alleviate some of the burden, particularly in areas like clinical documentation and administrative tasks. In a recent study published in the Journal of the American Medical Informatics Association, Dr. Fabiana Dos Santos, a post-doctoral research scientist at Columbia University School of Nursing, led a team to explore how a ChatGPT-based framework could assist in generating care plan suggestions for a lung cancer patient. In an interview with Healthtech Analytics, Dr. Santos discussed the potential and challenges of using AI chatbots in nursing.

Challenges in Nursing Care Plan Documentation

Creating care plans is vital for ensuring patients receive timely, adequate care tailored to their needs.
Nurses are central to this process, yet they face significant obstacles when documenting care plans. Salesforce, as a customer relationship solution, can help address those challenges.

“Nurses are on the front line of care and spend a considerable amount of time interacting closely with patients, contributing valuable clinical assessments to electronic health records (EHRs),” Dr. Santos explained. “However, many documentation systems are cumbersome, leading to a documentation burden where nurses spend much of their workday interacting with EHRs. This can result in cognitive burden, stress, frustration, and disruptions to direct patient care.”

The American Association of Critical-Care Nurses (AACN) highlights that electronic documentation is a significant burden, consuming an average of 40% of a nurse’s shift. Time spent on documentation inversely correlates with time spent on patient care, leading to increased burnout, cognitive load, and decreased job satisfaction. These factors, in turn, contribute to patient-related issues such as a higher risk of medical errors and hospital-acquired infections, which lower patient satisfaction. When combined with the heavy workloads nurses already manage, inefficient documentation tools can make care planning even more challenging.

AI Assisting Nursing and Care Plans

“The demands of direct patient care and managing multiple administrative tasks simultaneously limit nurses’ time to develop individualized care plans,” Dr. Santos continued. “The non-user-friendly interfaces of many EHR systems exacerbate this challenge, making it difficult to capture all aspects of a patient’s condition, including physical, psychological, social, cultural, and spiritual dimensions.” To address these challenges, Dr. Santos and her team explored the potential of ChatGPT to improve clinical documentation.
“These negative impacts on a nurse’s workday underscore the urgency of improving EHR documentation systems to reduce these issues,” she noted. “AI tools, if well designed, can improve the process of developing individualized care plans and reduce the burden of EHR-related documentation.”

The Promises and Pitfalls of AI

Developing care plans requires nurses to draw from their expertise to address issues like symptom management and comfort care, especially for patients with complex needs. Dr. Santos emphasized that advanced technologies, such as generative AI (GenAI), could streamline this process by enhancing documentation workflows and assisting with administrative tasks. AI tools can rapidly process large amounts of data and generate care plans more quickly than traditional methods, potentially allowing nurses to spend more time on direct and holistic patient care. However, Dr. Santos stressed the importance of carefully validating AI models, ensuring that nurses’ clinical judgment and expertise play a central role in evaluating AI-generated care plans.

“New technologies can help nurses improve documentation, leading to better descriptions of patient conditions, more accurate capture of care processes, and ultimately, improved patient outcomes,” she said. “This presents an important opportunity to use novel generative AI solutions to reduce nurses’ workload and act as a supportive documentation tool.”

Despite the promise of AI as a support tool, Dr. Santos cautioned that chatbots require further development to be effectively implemented in nursing care plans. AI-generated outputs can contain inaccuracies or irrelevant information, necessitating careful review and validation by nurses. Additionally, AI tools may lack the nuanced understanding of a patient’s unique needs, which only a nurse can provide through personal, empathetic interactions, such as interpreting specific cultural or spiritual needs.
Despite these challenges, large language models (LLMs) and other GenAI tools are generating significant interest in the healthcare industry. They are expected to be deployed in various applications, including EHR workflows and nursing efficiency. Dr. Santos’ research contributes to this growing field.

To conduct the study, the researchers developed and validated a method for structuring ChatGPT prompts—guidelines that the LLM uses to generate responses—that could produce high-quality nursing care plans. The approach involved providing detailed patient information and specific questions to consider when creating an appropriate care plan. The research team refined the Patient’s Needs Framework over ten rounds using 22 diverse hypothetical patient cases, ensuring that the ChatGPT-generated plans were consistent and aligned with typical nursing care plans.

“Our findings revealed that ChatGPT could prioritize critical aspects of care, such as oxygenation, infection prevention, fall risk, and emotional support, while also providing thorough explanations for each suggested intervention, making it a valuable tool for nurses,” Dr. Santos indicated.

The Future of AI in Nursing

While the study focused on care plans for lung cancer, Dr. Santos emphasized that this research is just the beginning of
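The general idea of structured prompting described in the study, supplying detailed patient information plus guiding questions, can be sketched in a few lines. The field names, template wording, and guiding questions below are hypothetical illustrations; they are not the study’s actual Patient’s Needs Framework prompts.

```python
# Illustrative sketch of a structured prompt for care-plan suggestions.
# Field names and template text are hypothetical, not the study's prompts.

def build_care_plan_prompt(patient):
    """Assemble a structured prompt from patient details plus guiding questions."""
    details = "\n".join(f"- {k}: {v}" for k, v in patient.items())
    questions = (
        "1. What are the priority nursing diagnoses?\n"
        "2. Which interventions address oxygenation, infection prevention, "
        "fall risk, and emotional support?\n"
        "3. What is the rationale for each suggested intervention?"
    )
    return (
        "You are assisting a nurse in drafting a care plan.\n"
        f"Patient information:\n{details}\n\n"
        f"Consider the following questions:\n{questions}\n"
        "All suggestions must be reviewed and validated by a nurse."
    )

prompt = build_care_plan_prompt({
    "Diagnosis": "Stage II lung cancer",
    "Age": 68,
    "Mobility": "limited, uses walker",
})
print(prompt)
```

Note the closing instruction in the template: as Dr. Santos stresses, any AI-generated plan is a draft for a nurse to review, not a finished clinical document.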

APIs and Software Development


The Role of APIs in Modern Software Development

APIs (Application Programming Interfaces) are central to modern software development, enabling teams to integrate external features into their products, including advanced third-party AI systems. For instance, you can use an API to allow users to generate 3D models from prompts on MatchboxXR.

The Rise of AI-Powered Applications

Many startups focus exclusively on AI, but often they are essentially wrappers around existing technologies like ChatGPT. These applications provide specialized user interfaces for interacting with OpenAI’s GPT models rather than developing new AI from scratch. Some branding might make it seem like they’re creating groundbreaking technology, when in reality, they’re leveraging pre-built AI solutions.

Solopreneur-Driven Wrappers

Large Language Models (LLMs) enable individuals and small teams to create lightweight apps and websites with AI features quickly. A quick search on Reddit reveals numerous small-scale startups offering features that can often be built using ChatGPT or Gemini within minutes for free.

Well-Funded Ventures

Larger operations invest heavily in polished platforms but may allocate significant budgets to marketing and design. This raises questions about whether these ventures are also just sophisticated wrappers. While these products offer interesting functionalities, they often rely on APIs to interact with LLMs, which brings its own set of challenges.

Looking Ahead

Developer Experience: As AI technologies like LLMs become mainstream, focusing on developer experience (DevEx) will be crucial. Good DevEx involves well-structured schemas, flexible functions, up-to-date documentation, and ample testing data.

Future Trends: The future of AI will likely involve more integrations. AI is powerful, but the real innovation lies in integrating hardware, data, and interactions effectively.
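The “wrapper” pattern described above can be sketched as a thin layer over a chat-completions-style HTTP API. The endpoint and payload shape follow OpenAI’s public chat-completions format, but the client below takes an injectable `transport` function so the demo runs without network access or a real API key. Treat it as an illustration of the pattern, not a production client.

```python
import json
from urllib import request

API_URL = "https://api.openai.com/v1/chat/completions"

def default_transport(url, payload, api_key):
    """POST the JSON payload over HTTP and return the parsed response."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

def ask(prompt, api_key="", model="gpt-4o-mini", transport=default_transport):
    """The entire 'wrapper': one prompt in, one completion string out."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    response = transport(API_URL, payload, api_key)
    return response["choices"][0]["message"]["content"]

# Offline demo with a stubbed transport (no network or key needed):
stub = lambda url, payload, key: {
    "choices": [{"message": {"content": f"echo: {payload['messages'][0]['content']}"}}]
}
print(ask("Generate a 3D model description", transport=stub))
```

Most of a wrapper product’s value then lives in the UI and prompt engineering around a function like `ask`, which is exactly the point the article makes.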

AI All Grown Up

Generative AI Tools

One of the most significant use cases for generative AI in business is customer service and support. Most of us have likely experienced the frustration of dealing with traditional automated systems. However, today’s advanced AI, powered by large language models and natural language chatbots, is rapidly improving these interactions. While many still prefer human agents for complex or sensitive issues, AI is proving highly capable of handling routine inquiries efficiently. Here’s an overview of some of the top AI-powered tools for automating customer service. Although the human element will always be essential in customer experience, these tools free up human agents from repetitive tasks, allowing them to focus on more complex challenges requiring empathy and creativity.

Cognigy

Cognigy is an AI platform designed to automate customer service voice and chat channels. It goes beyond simply reading FAQ responses by delivering personalized, context-sensitive answers in multiple languages. Cognigy’s AI Copilot feature enhances human contact center workers by offering real-time AI assistance during interactions, making both fully automated and human-augmented support possible.

IBM WatsonX Assistant

IBM’s WatsonX Assistant helps businesses create AI-powered personal assistants to streamline tasks, including customer support. With its drag-and-drop configuration, companies can set up seamless self-service experiences. The platform uses retrieval-augmented generation (RAG) to ensure that responses are accurate and up-to-date, continuously improving as it learns from customer interactions.

Salesforce Einstein Service Cloud

Einstein Service Cloud, part of the Salesforce platform, automates routine and complex customer service tasks. Its AI-powered Agentforce bots manage “low-touch” interactions, while “high-touch” cases are overseen by human agents supported by AI.
Fully customizable, Einstein ensures that responses align with your brand’s tone and voice, all while leveraging enterprise data securely.

Zendesk AI

Zendesk, a leader in customer support, integrates generative AI to boost its service offerings. By using machine learning and natural language processing, Zendesk understands customer sentiment and intent, generates personalized responses, and automatically routes inquiries to the most suitable agent—be it human or machine. It also provides human agents with real-time guidance on resolving issues efficiently.

Ada

Ada is a conversational AI platform built for large-scale customer service automation. Its no-code interface allows businesses to create custom bots, reducing the cost of handling inquiries by up to 78% per ticket. By integrating domain-specific data, Ada helps improve both support efficiency and customer experience across omnichannel support environments.

More AI Tools for Customer Service

Numerous other AI tools are designed to enhance automated customer support. While AI tools are transforming customer service, the key lies in using them to complement human agents, allowing for a balance of efficiency and personalized care.

Small Language Models


Large language models (LLMs) like OpenAI’s GPT-4 have gained acclaim for their versatility across various tasks, but they come with significant resource demands. In response, the AI industry is shifting focus towards smaller, task-specific models designed to be more efficient. Microsoft, alongside other tech giants, is investing in these smaller models.

Science often involves breaking complex systems down into their simplest forms to understand their behavior. This reductionist approach is now being applied to AI, with the goal of creating smaller models tailored for specific functions. Sébastien Bubeck, Microsoft’s VP of generative AI, highlights this trend: “You have this miraculous object, but what exactly was needed for this miracle to happen; what are the basic ingredients that are necessary?”

In recent years, the proliferation of LLMs like ChatGPT, Gemini, and Claude has been remarkable. However, smaller language models (SLMs) are gaining traction as a more resource-efficient alternative. Despite their smaller size, SLMs promise substantial benefits to businesses.

Microsoft introduced Phi-1 in June last year, a smaller model aimed at aiding Python coding. This was followed by Phi-2 and Phi-3, which, though larger than Phi-1, are still much smaller than leading LLMs. For comparison, Phi-3-medium has 14 billion parameters, while GPT-4 is estimated to have 1.76 trillion parameters—about 125 times more. Microsoft touts the Phi-3 models as “the most capable and cost-effective small language models available.”

Microsoft’s shift towards SLMs reflects a belief that the dominance of a few large models will give way to a more diverse ecosystem of smaller, specialized models. For instance, an SLM designed specifically for analyzing consumer behavior might be more effective for targeted advertising than a broad, general-purpose model trained on the entire internet. SLMs excel in their focused training on specific domains.
“The whole fine-tuning process … is highly specialized for specific use-cases,” explains Silvio Savarese, Chief Scientist at Salesforce, another company advancing SLMs. To illustrate, using a specialized screwdriver for a home repair project is more practical than a multifunction tool that’s more expensive and less focused.

This trend towards SLMs reflects a broader shift in the AI industry from hype to practical application. As Brian Yamada of VML notes, “As we move into the operationalization phase of this AI era, small will be the new big.” Smaller, specialized models or combinations of models will address specific needs, saving time and resources.

Some voices express concern over the dominance of a few large models, with figures like Jack Dorsey advocating for a diverse marketplace of algorithms. Philippe Krakowsky of IPG also worries that relying on the same models might stifle creativity.

SLMs offer the advantage of lower costs, both in development and operation. Microsoft’s Bubeck emphasizes that SLMs are “several orders of magnitude cheaper” than larger models. Typically, SLMs operate with around three to four billion parameters, making them feasible for deployment on devices like smartphones. However, smaller models come with trade-offs. Fewer parameters mean reduced capabilities. “You have to find the right balance between the intelligence that you need versus the cost,” Bubeck acknowledges.

Salesforce’s Savarese views SLMs as a step towards a new form of AI, characterized by “agents” capable of performing specific tasks and executing plans autonomously. This vision of AI agents goes beyond today’s chatbots, which can generate travel itineraries but not take action on your behalf. Salesforce recently introduced a 1 billion-parameter SLM that reportedly outperforms some LLMs on targeted tasks.
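The scale differences above translate directly into memory requirements, which is why a three-to-four-billion-parameter model can fit on a phone while GPT-4-class models cannot. A back-of-the-envelope estimate: weight storage is roughly parameters times bytes per parameter. The parameter counts are the article’s figures (the 1.76T GPT-4 number is an estimate, and the 3.8B on-device model is a hypothetical example); the bytes-per-parameter values are standard for common precisions.

```python
# Back-of-the-envelope memory footprint: parameters x bytes per parameter.
# Ignores activations and KV cache; parameter counts are the article's figures.

BYTES_PER_PARAM = {"fp16": 2, "int8": 1, "int4": 0.5}  # common precisions

def weight_gb(params, precision="fp16"):
    """Approximate weight storage in gigabytes."""
    return params * BYTES_PER_PARAM[precision] / 1e9

print(f"GPT-4 (est. 1.76T params), fp16:   {weight_gb(1.76e12):,.0f} GB")
print(f"Phi-3-medium (14B params), fp16:   {weight_gb(14e9):,.0f} GB")
print(f"Hypothetical 3.8B SLM, int4:       {weight_gb(3.8e9, 'int4'):,.1f} GB")
```

At fp16, an estimated 1.76T-parameter model needs thousands of gigabytes just for weights, while a quantized ~4B-parameter SLM fits in roughly 2 GB, well within a modern smartphone’s memory.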
Salesforce CEO Marc Benioff celebrated this advancement, proclaiming, “On-device agentic AI is here!”

Chatbots in Healthcare


Not all medical chatbots are created equal, as a recent JAMA Network Open study reveals. The study found that some chatbots are better at tailoring health information to patient health literacy than others. Chatbots in healthcare may not be ready for prime time.

The report compared the free and paid versions of ChatGPT, showing that the difference in the readability of their health information was minimal once researchers asked the chatbots to explain things at a sixth-grade reading level. The findings suggest that both versions of ChatGPT could potentially widen health disparities in terms of information access and literacy.

Chatbots like ChatGPT are becoming increasingly prominent in healthcare, showing potential in improving patient access to health information. However, their quality can vary. The study evaluated the free and paid versions of ChatGPT using the Flesch Reading Ease score for readability and the DISCERN instrument for consumer health information quality. Researchers tested both versions using the five most popular cancer-related queries from 2021 to 2023. They found that the paid version scored 52.6 and the free version 62.48 on the 100-point readability scale; both scores were deemed suboptimal.

The study revealed that prompting the free version of ChatGPT to explain concepts at a sixth-grade reading level improved its readability score to 71.55, surpassing the paid version’s unprompted score. When both versions were asked to simplify answers to a sixth-grade reading level, the paid version scored slightly higher, at 75.64. Despite these improvements, the overall readability of responses was still problematic. Without the simplification prompt, responses were roughly at a 12th-grade reading level. Even with the prompt, they remained closer to an eighth- or tenth-grade level, possibly due to chatbot confusion about the request.
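The Flesch Reading Ease scores cited above come from a standard formula based on sentence length and syllable density. The sketch below implements that formula with a crude vowel-group syllable counter, so its scores will only approximate those of the validated tools the study used; the two sample texts are invented for illustration.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores mean easier text (90+ is about 5th grade;
    30-50 is college level)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

simple = "The cat sat. The dog ran. We like pets."
dense = ("Pharmacological interventions necessitate comprehensive evaluation "
         "of contraindications and individualized therapeutic considerations.")
print(round(flesch_reading_ease(simple), 1))
print(round(flesch_reading_ease(dense), 1))
```

Short sentences of short words score near the top of the scale, while clinical jargon scores far lower, which is exactly the gap the simplification prompts in the study were trying to close.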
The study raises concerns about health equity. If the paid version of ChatGPT provides more accessible information, individuals with the means to purchase it would have a clear advantage, a disparity that could exacerbate existing health inequities for those relying on the free version. The researchers concluded that until chatbots consistently provide information at a lower reading level, clinicians should guide patients on how to use these tools effectively and encourage them to request information at simpler reading levels.

SearchGPT and Knowledge Cutoff

Tackling the Knowledge Cutoff Challenge in Generative AI

In the realm of generative AI, a significant hurdle has been the knowledge cutoff: a large language model (LLM) only has information up until a specific date. This was an early concern with OpenAI’s ChatGPT. For example, the GPT-4o model that currently powers ChatGPT has a knowledge cutoff of October 2023, while the older GPT-4 model had a cutoff of September 2021. Traditional search engines like Google don’t face this limitation, because Google continuously crawls the internet to keep its index up to date with the latest information. To address the knowledge cutoff issue in LLMs, multiple vendors, including OpenAI, are exploring search capabilities powered by generative AI (GenAI).

Introducing SearchGPT: OpenAI’s GenAI Search Engine

SearchGPT is OpenAI’s GenAI search engine, first announced on July 26, 2024. It aims to combine the strengths of a traditional search engine with the capabilities of GPT LLMs, eliminating the knowledge cutoff by drawing real-time data from the web. SearchGPT is currently a prototype, available to a limited group of test users, including individuals and publishers; OpenAI has invited publishers to ensure their content is accurately represented in search results. The service is positioned as a temporary offering to test and evaluate its performance. Once this evaluation phase is complete, OpenAI plans to integrate SearchGPT’s functionality directly into the ChatGPT interface. As of August 2024, OpenAI has not announced when SearchGPT will be generally available or integrated into the main ChatGPT experience.

Key Features of SearchGPT

SearchGPT offers several features designed to enhance the capabilities of ChatGPT, described in the sections that follow.

OpenAI’s Challenge to Google Search

Google has long dominated the search engine landscape, a position that OpenAI aims to challenge with SearchGPT.
Answers, Not Links

Traditional search engines like Google act primarily as indexes, pointing users to other sources of information rather than directly providing answers. Google has introduced AI Overviews (formerly Search Generative Experience, or SGE) to offer AI-generated summaries, but it still relies heavily on linking to third-party websites. SearchGPT aims to change this by providing direct answers to user queries, summarizing the source material instead of merely pointing to it.

Contextual Continuity

In contrast to Google’s point-in-time search queries, where each query is independent, SearchGPT strives to maintain context across multiple queries, offering a more seamless and coherent search experience.

Search Accuracy

Google Search often depends on keyword matching, which can require users to sift through several pages to find relevant information. SearchGPT aims to combine real-time data with an LLM to deliver more contextually accurate and relevant information.

Ad-Free Experience

SearchGPT offers an ad-free interface, providing a cleaner and more user-friendly experience than Google, which includes ads in its search results.
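The “answers, not links” pattern can be sketched in a few lines: fetch fresh search results, then ask an LLM to answer strictly from them, citing sources. In this illustrative sketch the search results are hard-coded stand-ins and the prompt wording is invented, not OpenAI’s actual format:

```python
def build_grounded_prompt(query: str, results: list[dict]) -> str:
    """Assemble an LLM prompt that answers from live search results,
    so the model is not limited by its training-data cutoff."""
    context = "\n".join(
        f"[{i + 1}] {r['title']}: {r['snippet']}"
        for i, r in enumerate(results)
    )
    return (
        "Answer the question using ONLY the sources below, "
        "and cite them by number.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# Hard-coded stand-ins for live search results
results = [
    {"title": "Example News", "snippet": "The event took place in August 2024."},
]
prompt = build_grounded_prompt("When did the event take place?", results)
print("[1]" in prompt)  # the source is numbered for citation
```

The citation-by-number instruction is what lets a GenAI search engine summarize source material while still attributing it, rather than merely linking to it.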
AI-Powered Search Engine Comparison

Here’s a comparison of the AI-powered search engines available today:

Search Engine | Platform Integration | Publisher Collaboration | Ads | Cost
SearchGPT (OpenAI) | Standalone prototype | Strong emphasis | Ad-free | Free (prototype stage)
Google SGE | Built on Google’s infrastructure | SEO practices, content partnerships | Includes ads | Free
Microsoft Bing AI/Copilot | Built on Microsoft’s infrastructure | SEO practices, content partnerships | Includes ads | Free
Perplexity AI | Standalone | Basic source attribution | Ad-free | Free; $20/month for premium
You.com | AI assistant with various modes | Basic source attribution | Ad-free | Free; premium tiers available
Brave Search | Independent search index | Basic source attribution | Ad-free | Free

AI’s Role in Scams

How Generative AI Is Supporting the Creation of Lures and Scams: A Guide for Value Added Resellers. Copyright © 2024 Gen Digital Inc. All rights reserved. Avast is part of Gen™.

A long, long time ago, I worked for an antivirus company that has since been acquired by Avast. Knowing many of the people involved in this area of artificial intelligence, I pay attention when they publish a white paper. AI’s role in scams is something we all should be concerned about, and I am excited to share this paper in our Tectonic Insights.

Executive Summary

The capabilities and global usage of both large language models (LLMs) and generative AI are rapidly increasing. While these tools offer significant benefits to the general public and businesses, they also pose potential risks of misuse by malicious actors, including the misuse of tools like OpenAI’s ChatGPT and other GPTs. This document explores how the ChatGPT brand is exploited for lures, scams, and other social engineering threats.

Generative AI is expected to play a crucial role in the cyber threat landscape, particularly in creating highly believable, multilingual texts for phishing and scams. These advancements allow even less sophisticated scammers to mount more convincing social engineering attacks than ever before. Conversely, we believe generative AI will not drastically change the landscape of malware generation in the near term: despite numerous proofs of concept, the complexity of generative AI methods still makes traditional, simpler methods more practical for malware creation. In short, the good may not outweigh the bad just yet.

Recognizing the value of generative AI for legitimate purposes is also important. AI-based security and assistant tools with various levels of maturity and specialization are already emerging in the market, and substantial improvements in their capabilities are anticipated as these tools evolve and become more widely available.
AI-Generated Lures and Scams

AI-generated lures and scams are increasingly prevalent. Cybercriminals use AI to create lures and conduct phishing attempts and scams through various texts: emails, social media content, e-shop reviews, SMS scams, and more. AI improves the credibility of social scams by producing trustworthy, authentic-sounding texts, eliminating traditional phishing red flags like broken language and awkward forms of address. These threats have exploited societal issues and initiatives, including cryptocurrencies, Covid-19, and the war in Ukraine. The popularity of ChatGPT among hackers stems more from its widespread recognition than from its AI capabilities, making it a prime target for exploitation by attackers.

How Is Generative AI Supporting the Creation of Lures and Scams?

Generative AI, particularly ChatGPT, enhances the language used in scams, enabling cybercriminals to create more advanced texts than they could otherwise. AI can correct grammatical errors, provide multilingual content, and generate multiple text variations to improve believability. For sophisticated phishing attacks, attackers must integrate the AI-generated text into credible templates. They can purchase functional, well-designed phishing kits or use web archiving tools to replicate legitimate websites, altering URLs to phish victims. Currently, attackers still need to build some aspects of their attempts manually; ChatGPT is not yet an “out-of-the-box” solution for advanced malware creation. However, the emergence of multimodal models, which combine outputs like images, audio, and video, will enhance the capabilities of generative AI for creating believable phishing and scam campaigns.

Malvertising

Malvertising, or “malicious advertising,” involves disseminating malware through online ads. Cybercriminals exploit the widespread reach and interactive nature of digital ads to distribute harmful content.
Instances have been observed where ChatGPT’s name is used in malvertising campaigns on platforms like Facebook, leading users to fraudulent investment portals. Users who provide personal information become vulnerable to identity theft, financial fraud, account takeovers, and further scams. The collected data is often sold on the dark web, contributing to the broader cybercrime ecosystem. Recognizing and mitigating these deceptive tactics is crucial.

YouTube Scams

YouTube, one of the world’s most popular platforms, is not immune to cybercrime. Fake videos featuring prominent figures are used to trick users into harmful actions. This strategy, known as the “appeal to authority,” exploits trust and credibility to phish personal details or coerce victims into sending money. For example, videos featuring Elon Musk discussing OpenAI have been modified to scam victims: a QR code displayed in the video redirects users to a scam page, often a cryptocurrency scam or phishing attempt. As AI models like Midjourney and DALL-E mature, the use of fake images, videos, and audio is expected to increase, enhancing the credibility of these scams.

Typosquatting

Typosquatting involves minor changes in URLs or app names to redirect users to different websites or software, potentially leading to phishing attacks or the installation of malicious applications. An example is an Android app named “Open Chat GBT: AI Chat Bot,” where a subtle name alteration can deceive users into downloading harmful software.

Browser Extensions

The popularity of ChatGPT has led to the emergence of numerous browser extensions. While many are legitimate, others are malicious, designed to lure victims. Attackers create extensions with names resembling ChatGPT to deceive users into downloading harmful software, such as adware or spyware. These extensions can also subscribe users to services that periodically charge fees, a practice known as fleeceware. For instance, a malicious extension mimicking “ChatGPT for Google” was reported by Guardio.
This extension stole Facebook sessions and cookies, but was removed from the Chrome Web Store after being reported.

Installers and Cracks

Malicious installers often mimic legitimate tools, tricking users into installing malware. Such installers promise to install ChatGPT but instead deploy malware like NodeStealer, which steals passwords and browser cookies. Cracked or unofficial software versions pose similar risks, hiding malware that can steal personal information or take control of computers. This method of delivering malware has been around for decades, but the popularity of ChatGPT and other free-to-download tools has given it a resurgence.

Fake Updates

Fake updates are a common tactic in which users are prompted to update their browser to access content. Campaigns like SocGholish use ChatGPT-related articles to lure users into downloading remote access trojans (RATs), giving attackers control over infected devices. These pages are often hosted on vulnerable WordPress sites.
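The typosquatting technique described above relies on names that sit within one or two edits of the legitimate one. A simple Levenshtein edit-distance check can flag such lookalikes; this is a generic sketch of the idea, not any vendor’s actual detector:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def looks_like(name: str, brand: str, max_edits: int = 2) -> bool:
    """Flag names that are close to, but not identical to, a brand."""
    return 0 < levenshtein(name.lower(), brand.lower()) <= max_edits

print(looks_like("chat gbt", "chat gpt"))  # True: one substitution away
```

A distance of zero is excluded so the genuine name itself is never flagged; only near-miss imitations trip the check.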

ChatGPT Word Choices

Why Does ChatGPT Use the Word “Delve” So Much? Mystery Solved.

The mystery behind ChatGPT’s frequent use of the word “delve” (one of the 10 most common words it uses) has finally been unraveled, and the answer is quite unexpected. While “delve” and other words like “tapestry” aren’t common in everyday conversations, ChatGPT seems to favor them, as you may have noticed in its outputs. A sudden rise in the use of “delve” in medical papers from March 2024 coincides with the first full year of ChatGPT’s widespread use. “Delve,” along with phrases like “as an AI language model…,” has become a hallmark of ChatGPT’s language, almost a giveaway that a text is AI-generated. But why does ChatGPT overuse “delve”? If it’s trained on human data, how did it develop this preference? Is it emergent behavior? And why “delve” specifically? A Guardian article, “How Cheap, Outsourced Labour in Africa is Shaping AI English,” provides a clue: the key lies in how ChatGPT was built.

Why “Delve” So Much?

The overuse of “delve” suggests ChatGPT’s language was influenced after its initial training on internet data. After training on a massive corpus, an additional supervised learning step is used to align the AI’s behavior: human annotators evaluate the AI’s outputs, and their feedback fine-tunes the model. This iterative process of human feedback improves the AI’s responses, keeping it aligned and useful. However, the feedback is often provided by a workforce in the global south, where English-speaking annotators are more affordable. In Nigeria, “delve” is more commonly used in business English than in the US or UK. Annotators from these regions provided examples using their familiar language, influencing the AI to adopt a slightly African English style.
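The “delve” giveaway can be quantified with a simple word-frequency comparison. This toy sketch counts how often tell-tale words appear; the word list is illustrative, not a validated AI-text detector:

```python
import re
from collections import Counter

# Illustrative list of words ChatGPT is said to overuse
TELLTALE_WORDS = {"delve", "tapestry", "showcase", "underscore"}

def telltale_rate(text: str) -> float:
    """Fraction of words in the text that are on the tell-tale list."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[w] for w in TELLTALE_WORDS)
    return hits / len(words)

human = "We looked into the results and wrote up what we found."
botty = "Let us delve into the rich tapestry of results and showcase them."
print(telltale_rate(botty) > telltale_rate(human))  # True
```

This is essentially how researchers spotted the post-March-2024 spike in medical papers: the rate of a handful of marker words jumped well above its historical baseline.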
This is an example of poor sampling, where the evaluators’ language differs from that of the target users, introducing a bias in the writing style. This bias likely stems from the RLHF step rather than from the initial training. ChatGPT’s writing style, with or without “delve,” is already somewhat robotic and easy to detect, and understanding these pitfalls helps us avoid similar issues in future AI development.

Making ChatGPT More Human-Like

To make ChatGPT sound more human and avoid overused words like “delve,” prompt engineering can help: for example, explicitly instructing the model which words and phrases to avoid, or supplying writing samples whose style it should imitate. These methods can be time-consuming, though. Ideally, a quick, reliable tool, like a Chrome extension, would streamline this process. If you’ve found a solution or a reliable tool for this issue, share it below in the comments; this is a widespread challenge that many users face.
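One of the simplest prompt-engineering fixes is to list the banned words explicitly in a system instruction. A minimal sketch of building such a prompt is below; the wording and the word list are ours, not an official recipe:

```python
# Illustrative list of AI-flavored words and phrases to suppress
BANNED = ["delve", "tapestry", "in the realm of", "as an AI language model"]

def humanize_system_prompt(banned: list[str]) -> str:
    """Build a system instruction that steers a model away from
    tell-tale AI phrasing."""
    listed = ", ".join(f'"{w}"' for w in banned)
    return (
        "Write in a plain, conversational tone. "
        f"Never use the following words or phrases: {listed}. "
        "Prefer short sentences and everyday vocabulary."
    )

prompt = humanize_system_prompt(BANNED)
print('"delve"' in prompt)  # the banned list is spelled out verbatim
```

The resulting string would be sent as the system message of a chat request; because models follow negative instructions imperfectly, a follow-up pass that scans the output for the banned words is still advisable.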

Ask ChatGPT Vision Action

Enhance Your Workflow with the Ask ChatGPT Vision Action

Extend the use of artificial intelligence in your daily operations by leveraging the Ask ChatGPT Vision action. This feature allows ChatGPT to analyze images attached to your Salesforce records and apply its insights directly to your workflows. The action is compatible with ChatGPT models that accept image input.

How to Use the Ask ChatGPT Vision Action

Create a Macro for Repeated Use: To streamline usage, create a Macro with preconfigured prompts and result fields, then assign the macro to users or profiles to ensure consistent use of the Ask ChatGPT Vision action. For example:

Object: Case
Prompt: Determine if the image content matches this description: “{!Description}”. Answer “Yes” or “No”.
Result Field: Custom picklist field ‘Attachment matches description’ with values Yes and No

Use Cases: Use the Ask ChatGPT Vision action to verify whether attachments in Cases align with the case’s subject and description. If an attachment matches, automatically route the case to a support agent; otherwise, flag it for review.

Expand Your Options: For more flexibility, you can create custom classes and actions to integrate additional data sources or automate further tasks based on ChatGPT’s responses. Explore options like sending emails, creating tasks, or updating records with the information retrieved. For more details on using ChatGPT and managing data privacy, refer to OpenAI’s website.
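Outside of Salesforce, the same yes/no vision check can be prototyped in plain Python by assembling the prompt the macro would send once the {!Description} merge field has been resolved. The function name and wording here are illustrative, not part of the product:

```python
def build_vision_check_prompt(description: str) -> str:
    """Mirror the macro example: ask a vision-capable model whether an
    attached image matches a record's description, forcing a Yes/No
    answer that can populate a picklist field."""
    return (
        "Determine if the image content matches this description: "
        f'"{description}". Answer "Yes" or "No".'
    )

prompt = build_vision_check_prompt("A cracked phone screen")
print(prompt.endswith('Answer "Yes" or "No".'))  # True
```

Constraining the model to a fixed pair of answers is what makes the response safe to write into a two-value picklist and to branch on in automation.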

State of AI

With the Dreamforce conference just a few weeks away, AI is set to be a central theme once again. This week, Salesforce offered a preview of what to expect in September with the release of its “Trends in AI for CRM” report, which consolidates findings from several Salesforce research studies conducted between February 2023 and April 2024. The report’s executive summary highlights four key insights.

The Fear of Missing Out (FOMO)

An intriguing statistic from Salesforce’s “State of Data and Analytics” report reveals that 77% of business leaders fear missing out on generative AI. This concern is particularly pronounced among marketers (88%), followed by sales executives (78%) and customer service professionals (73%). Given the continued hype around generative AI, these numbers are likely still relevant, or even higher, as of July 2024. As Salesforce AI CEO Clara Shih puts it: “The majority of business executives fear they’re missing out on AI’s benefits, and it’s a well-founded concern. Today’s technology world is reminiscent of 1998 for the Internet—full of opportunities but also hype.” Shih adds: “How do we separate the signal from the noise and identify high-impact enterprise use cases?”

The Quest for ROI and Value

The surge of hype around generative AI over the past 18 months has led to high expectations. While Salesforce has been more responsible in managing user expectations, many executives view generative AI as a cure-all. This perspective can be problematic, as “silver bullets” often miss their mark. Recent tech sector developments reflect a shift toward a longer-term view of AI’s impact: Meta’s share price fell when Mark Zuckerberg emphasized AI as a multi-year project, and Alphabet’s Sundar Pichai faced tough questions from Wall Street about the need for continued investment.
Shih notes a growing impatience with the time required to realize AI’s value: “It’s been over 18 months since ChatGPT sparked excitement about AI in business. Many companies are still grappling with building or buying solutions that are not overly siloed and can be customized. The challenge is finding a balance between quick implementation and configurability.” She adds: “The initial belief was that companies could just integrate ChatGPT and see instant transformation. However, there are security risks and practical challenges. For LLMs to be effective, they need contextual data about users and customers.”

Conclusion: A Return to the Future

Shih likens the current AI landscape to the late-90s Internet boom, noting: “It’s similar to the late 90s when people questioned if the Internet was overhyped. While some investments will not pan out, the transformative potential of successful use cases is enormous. Just as with the Internet, discovering the truly valuable applications of AI may require experimentation and time. We are very much in the 1998 moment for AI now.”

Tectonic Guide to RAG

Guide to RAG (Retrieval-Augmented Generation)

Retrieval-Augmented Generation (RAG) has become increasingly popular, and while it isn’t yet so common that you’d see it in a toaster oven manual, its use is expected to grow. Despite its rising popularity, comprehensive guides that address all its nuances, such as relevance assessment and hallucination prevention, are still scarce. Drawing from practical experience, this insight offers an in-depth overview of RAG.

Why Is RAG Important?

Large Language Models (LLMs) like ChatGPT can be employed for a wide range of tasks, from crafting horoscopes to more business-centric applications. However, there’s a notable challenge: most LLMs, including ChatGPT, do not inherently understand the specific rules, documents, or processes that companies rely on. There are two ways to address this gap: retraining or fine-tuning the model on the company’s own data, or retrieving the relevant documents at query time and supplying them to the model as context. RAG takes the second route.

How RAG Works

RAG consists of two primary components: a Retriever, which finds the documents relevant to a query, and a Generator (the LLM), which composes an answer from those documents. While the system is straightforward, the effectiveness of the output heavily depends on the quality of the documents retrieved and how well the Retriever performs. Corporate documents are often unstructured, conflicting, or context-dependent, making the process challenging.
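The Retriever half of RAG can be illustrated with a toy example: embed the documents as vectors, then return the ones closest to the query by cosine similarity. Here bag-of-words counts stand in for real embeddings (production systems use learned embeddings such as OpenAIEmbeddings), so treat this purely as a sketch of the mechanism:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Employees accrue vacation days monthly.",
    "The server room is on the third floor.",
]
print(retrieve("how many vacation days do employees get", docs))
```

The retrieved passages would then be pasted into the Generator’s prompt as context, which is exactly what the LangChain example below automates with a real vector database.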
Search Optimization in RAG

To enhance RAG’s performance, optimization techniques are applied across the various stages of information retrieval and processing.

Python and LangChain Implementation Example

Below is a simple implementation of RAG using Python and LangChain:

import wget
from langchain.vectorstores import Qdrant
from langchain.embeddings import OpenAIEmbeddings
from langchain import OpenAI
from langchain_community.document_loaders import BSHTMLLoader
from langchain.chains import RetrievalQA

# Download 'War and Peace' by Tolstoy
wget.download("http://az.lib.ru/t/tolstoj_lew_nikolaewich/text_0073.shtml")

# Load the text from HTML
loader = BSHTMLLoader("text_0073.shtml", open_encoding="ISO-8859-1")
war_and_peace = loader.load()

# Initialize the vector database
embeddings = OpenAIEmbeddings()
doc_store = Qdrant.from_documents(
    war_and_peace,
    embeddings,
    location=":memory:",
    collection_name="docs",
)

llm = OpenAI()

# Build the retrieval QA chain once, then answer questions interactively
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=doc_store.as_retriever(),
    return_source_documents=False,
)

while True:
    question = input("Your question: ")
    result = qa(question)
    print(f"Answer: {result}")

Considerations for Effective RAG

Ranking Techniques in RAG

Dynamic Learning with RELP

An advanced technique within RAG is Retrieval-Augmented Language Model-based Prediction (RELP). In this method, information retrieved from vector storage is used to generate example answers, which the LLM can then use to learn dynamically and respond. This allows for adaptive learning without the need for expensive retraining.

Guide to RAG

RAG offers a powerful alternative to retraining large language models, allowing businesses to leverage their proprietary knowledge for practical applications. While setting up and optimizing RAG systems involves navigating various complexities, including document structure, query processing, and ranking, the results are highly effective for most business use cases.
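The RELP idea can be sketched as few-shot prompting with retrieved examples: pull similar past question/answer pairs from the vector store and place them in the prompt as demonstrations, so the model adapts its answers at inference time without retraining. In this sketch the retrieval step is stubbed out with a hard-coded list:

```python
def build_relp_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot prompt assembled from retrieved Q&A pairs, so the model
    learns the expected answer style dynamically at inference time."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

# Stand-in for examples retrieved from vector storage
retrieved = [
    ("What is our refund window?", "Refunds are accepted within 30 days."),
]
prompt = build_relp_prompt("What is the refund window for sale items?", retrieved)
print(prompt.count("Q:"))  # 2: one retrieved example plus the new question
```

Because the demonstrations change with every query, the model’s behavior adapts to whatever the store returns, which is the “dynamic learning” the section describes.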

Impact of Generative AI on Workforce

The Impact of Generative AI on the Future of Work Automation has long been a source of concern and hope for the future of work. Now, generative AI is the latest technology fueling both fear and optimism. AI’s Role in Job Augmentation and Replacement While AI is expected to enhance many jobs, there’s a growing argument that job augmentation for some might lead to job replacement for others. For instance, if AI makes a worker’s tasks ten times easier, the roles created to support that job could become redundant. A June 2023 McKinsey report highlighted that generative AI (GenAI) could automate 60% to 70% of employee workloads. In fact, AI has already begun replacing jobs, contributing to nearly 4,000 job cuts in May 2023 alone, according to Challenger, Gray & Christmas Inc. OpenAI, the creator of ChatGPT, estimates that 80% of the U.S. workforce could see at least 10% of their jobs impacted by large language models (LLMs). Examples of AI Job Replacement One notable example involves a writer at a tech startup who was let go without explanation, only to later discover references to her as “Olivia/ChatGPT” in internal communications. Managers had discussed how ChatGPT was a cheaper alternative to employing a writer. This scenario, while not officially confirmed, strongly suggested that AI had replaced her role. The Writers Guild of America also went on strike, seeking not only higher wages and more residuals from streaming platforms but also more regulation of AI. Research from the Frank Hawkins Kenan Institute of Private Enterprise indicates that GenAI might disproportionately affect women, with 79% of working women holding positions susceptible to automation compared to 58% of working men. Unlike past automation that typically targeted repetitive tasks, GenAI is different—it automates creative work such as writing, coding, and even music production. 
For example, Paul McCartney used AI to partially recreate his late bandmate John Lennon’s voice for a posthumous Beatles song. In this case, AI enhanced creativity, but the broader implications could be more complex.

Other Impacts of AI on Jobs

AI’s impact on jobs goes beyond replacement. Human-machine collaboration presents a more positive angle, where AI improves the work experience by automating repetitive tasks. This could lead to a rise in AI-related jobs and a growing demand for AI skills. AI systems require significant human feedback, particularly in training processes like reinforcement learning, where models are fine-tuned based on human input. A May 2023 paper also warned about the risk of “model collapse,” where LLMs deteriorate without a continuing supply of human-generated data. However, there’s also the risk that AI collaboration could hinder productivity: generative AI might produce an overabundance of low-quality content, forcing editors to spend more time refining it and deprioritizing more original work.

Jobs Most Affected by AI

AI Legislation and Regulation

Despite the rapid advancement of AI, comprehensive federal regulation in the U.S. remains elusive. However, several states have introduced or passed AI-focused laws, and New York City has enacted regulations for AI in recruitment. On the global stage, the European Union has introduced the AI Act, setting a common legal framework for AI. Meanwhile, U.S. leaders, including Senate Majority Leader Chuck Schumer, have begun outlining plans for AI regulation, emphasizing the need to protect workers, national security, and intellectual property. In October 2023, President Joe Biden signed an executive order on AI, aiming to protect consumer privacy, support workers, and advance equity and civil rights in the justice system. AI regulation is becoming increasingly urgent; it’s a question of when, not if, comprehensive laws will be enacted.
As AI continues to evolve, its impact on the workforce will be profound and multifaceted, requiring careful consideration and regulation to ensure it benefits society as a whole.

What is OpenAI Strawberry?

OpenAI’s Secret Project: “Strawberry”

Background and Goals

OpenAI, the company behind ChatGPT, is working on a new AI project codenamed “Strawberry,” according to an insider and internal documents reviewed by Reuters. This project, whose details have not been previously reported, aims to showcase advanced reasoning capabilities in OpenAI’s models: the goal is to enable AI not only to generate answers to queries but also to plan ahead and navigate the internet autonomously to perform “deep research.”

Project Overview

The Strawberry initiative represents an evolution of the previously known Q* project, which demonstrated potential in solving complex problems like advanced math and science questions. While the precise date of the internal document is unclear, it outlines plans for using Strawberry to enhance AI’s reasoning and problem-solving abilities. The source describes the project as a work in progress, with no confirmed timeline for its public release.

Technological Approach

Strawberry is described as a method of post-training AI models: refining their performance after the initial training on large datasets. This post-training phase involves techniques such as fine-tuning, where models are adjusted based on feedback and examples of correct and incorrect responses. The project is reportedly similar to Stanford’s 2022 “Self-Taught Reasoner” (STaR) method, which uses iterative self-improvement to raise an AI’s level of intelligence.

Potential and Challenges

If successful, Strawberry could revolutionize AI by improving its reasoning capabilities, allowing it to tackle complex tasks that require multi-step problem-solving and planning. This could lead to significant advancements in scientific research, software development, and various other fields. However, the project also raises concerns about ethical implications, control, accountability, and bias, necessitating careful consideration as AI becomes more autonomous.
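The STaR method that Strawberry is compared to can be caricatured in a few lines: generate a rationale and answer for each problem, keep only the pairs whose answers check out against known solutions, and use the kept rationales as new fine-tuning data. This is a schematic sketch only; the `model` function is a toy stand-in for an LLM, not a real one:

```python
def model(problem: str) -> tuple[str, int]:
    # Toy stand-in for an LLM: returns (rationale, answer) for "a+b" problems.
    a, b = (int(x) for x in problem.split("+"))
    rationale = f"Add {a} and {b}."
    return rationale, a + b

def star_iteration(problems: list[str], answers: dict[str, int]) -> list[tuple[str, str]]:
    """One Self-Taught Reasoner pass: keep (problem, rationale) pairs
    only when the generated answer matches the known one."""
    kept = []
    for p in problems:
        rationale, ans = model(p)
        if ans == answers[p]:
            kept.append((p, rationale))  # becomes fine-tuning data
    return kept

# "4+4" is paired with a wrong reference answer, so its rationale is discarded
data = star_iteration(["2+3", "4+4"], {"2+3": 5, "4+4": 9})
print(len(data))
```

Repeating this loop and fine-tuning on the surviving rationales is what makes the method “iterative self-improvement”: each round, the model is trained on its own verified reasoning.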
Industry Context

OpenAI is not alone in this pursuit. Other major tech companies like Google, Meta, and Microsoft are also experimenting with improving AI reasoning. The broader goal across the industry is to develop AI that can achieve human or super-human levels of intelligence, capable of making major scientific discoveries and planning complex tasks.

Conclusion

OpenAI’s project Strawberry represents a significant step forward in AI research, pushing the boundaries of what AI can achieve. While the project is still in its early stages, its potential to enhance AI reasoning capabilities is significant. As OpenAI continues to develop and refine Strawberry, its impact on the future of artificial intelligence will be closely watched by researchers and industry leaders alike.

gettectonic.com