Knowledge Discovery Archives - gettectonic.com
How Graph Databases and AI Agents Are Redefining Modern Data Strategy

The Data Tightrope: How Graph Databases and AI Agents Are Redefining Modern Data Strategy

The Data Leader's Dilemma: Speed vs. Legacy

Today's data leaders face an impossible balancing act. Businesses demand faster insights, deeper connections, and decisions that can't wait, yet traditional databases weren't built for this dynamic world. The gap between expectation and reality is widening.

The Problem with Traditional Databases

Relational databases force data into predefined tables, stripping away context and relationships. Need to analyze new connections? Prepare for:
✔ Schema redesigns
✔ Costly ETL pipelines
✔ Slow, complex joins

The result: data becomes siloed, insights are delayed, and innovation stalls.

Graph Databases: The Flexible Future of Data

What Makes Graphs Different?

Unlike rigid tables, graph databases store data as nodes and the relationships between them. An e-commerce graph, for example, instantly reveals the connections among customers, products, and orders. No joins. No schema redesigns. Just direct, real-time traversal.

The Next Leap: AI-Powered, Self-Evolving Graphs

Static graphs are powerful, but AI agents make them intelligent.

How AI Agents Supercharge Graphs: From Static Data to Living Knowledge

Traditional graphs:
❌ Manually updated
❌ Fixed structure
❌ Limited to known queries

AI-augmented graphs:
✅ Self-learning (add and remove connections dynamically)
✅ Adapt to new questions
✅ Get smarter with every query

The Business Impact: Smarter, Faster, Cheaper

1. Break down silos without rebuilding pipelines
2. Autonomous decision-making
3. Democratized intelligence

The Future: Graphs as Invisible Infrastructure

In 2–3 years, AI-powered graphs will be as essential as cloud storage: ubiquitous, self-maintaining, and silently powering:
✔ Hyper-personalized customer experiences
✔ Real-time risk mitigation
✔ Cross-functional insights

The Bottom Line

Static data is dead. The future belongs to dynamic, self-learning graphs powered by AI. The question isn't if you'll adopt this approach; it's how fast you can start.
→ Innovators will leverage graphs as competitive moats.
→ Laggards will drown in unconnected data.
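The joins-versus-traversal contrast described above can be sketched in a few lines of plain Python. The toy adjacency-list "graph" below (all names and data invented for illustration) answers a connection question by walking edges directly, the way a graph database does, instead of joining tables:

```python
# Toy property graph: nodes are string ids, edges are (relationship, target) pairs.
# A relational store would answer the question below with multi-table joins;
# a graph store simply follows the edges.
from collections import defaultdict

edges = defaultdict(list)

def add_edge(src, rel, dst):
    edges[src].append((rel, dst))

# Small e-commerce graph: customers BOUGHT products.
add_edge("alice", "BOUGHT", "laptop")
add_edge("alice", "BOUGHT", "mouse")
add_edge("bob", "BOUGHT", "laptop")
add_edge("bob", "BOUGHT", "keyboard")

def co_purchased(product):
    """Products bought by customers who also bought `product` (direct traversal)."""
    buyers = [src for src, rels in edges.items()
              if ("BOUGHT", product) in rels]
    return {dst for b in buyers for rel, dst in edges[b]
            if rel == "BOUGHT" and dst != product}

print(co_purchased("laptop"))  # mouse and keyboard (set order may vary)
```

Adding a new relationship type here is just another `add_edge` call; no schema redesign is needed, which is the flexibility the article is pointing at.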


Supercharge Salesforce Agentforce with OpenText AI-Powered Insights

The future of intelligent customer engagement is here. OpenText and Salesforce are revolutionizing AI-driven workflows with deep content integration, empowering sales and service teams to work smarter, faster, and with greater accuracy.

AI in Sales & Service: The Need for Trusted Data

AI is transforming business operations:
✅ 83% of AI-powered sales teams report revenue growth
✅ 93% of service teams achieve time and cost savings

But success depends on trusted data. With 98% of sales leaders emphasizing the need for accurate, secure, and compliant information, OpenText Content Cloud provides the foundation for reliable AI, seamlessly integrated with Salesforce.

OpenText + Salesforce: AI Innovation Leaders

Since 2016, OpenText has enhanced Salesforce with powerful content management solutions. Now, we're taking it further with GenAI-powered automation:
✔ OpenText™ Content Aviator delivers AI-driven insights from unstructured data (contracts, emails, documents)
✔ Selected as a launch partner for the Agentforce Partner Network
✔ First-to-market solution on Salesforce's new AgentExchange, making AI agent deployment faster than ever

Key Use Cases
🔹 Sales Teams – Summarize customer buying trends, auto-generate upsell recommendations
🔹 Customer Service – Instantly resolve claims by extracting key details from documents
🔹 Claims Processing – Automate approvals with AI-powered document analysis

How It Works: AI Insights → Agentforce Actions

OpenText Content Aviator for Agentforce unlocks hidden insights from unstructured content stored in OpenText Content Management, then feeds them directly into Salesforce Agentforce to trigger smart, automated actions.
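The insights-to-actions flow described above can be pictured as a small pipeline: an AI step extracts structured fields from an unstructured document, and those fields are mapped into an action an agent can execute. Every function and field name below is a hypothetical placeholder invented for this sketch; the real OpenText Content Aviator and Agentforce products expose their own APIs.

```python
# Illustrative "insights -> actions" pipeline. All names are hypothetical
# stand-ins, NOT the actual OpenText or Agentforce APIs.

def extract_insights(document_text: str) -> dict:
    """Stand-in for an AI step that pulls key fields from unstructured text."""
    insights = {}
    for line in document_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            insights[key.strip().lower()] = value.strip()
    return insights

def to_agent_action(insights: dict) -> dict:
    """Map extracted fields into a structured action an agent could execute."""
    return {
        "type": "create_followup_task",
        "account": insights.get("customer", "unknown"),
        "summary": f"Review renewal dated {insights.get('renewal date', 'n/a')}",
    }

contract = "Customer: Acme Corp\nRenewal Date: 2025-09-01\nValue: $120,000"
action = to_agent_action(extract_insights(contract))
print(action["account"])  # Acme Corp
```

The point of the pattern is the hand-off: once insights are structured, triggering a CRM action is ordinary automation rather than another AI problem.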
Key Benefits
🚀 Accelerate Sales Cycles – Auto-summarize contracts, identify upsell opportunities
🎯 Enhance Customer Service – Resolve cases faster with AI-generated insights
✍ Reduce Manual Work – Auto-update Salesforce records, eliminating errors
📧 Personalize at Scale – Draft tailored email responses using AI insights

Now Available
✔ Integrated with OpenText Content Management CE 25.1
✔ Coming soon to OpenText Core Content SaaS (CE 25.3)

The OpenText Content Aviator and Salesforce Agentforce integration provides AI-driven insights for sales and customer service teams, enhancing productivity and accelerating processes. It enables users to discover, summarize, and translate business workspace content directly within Agentforce, eliminating the need to switch applications. Essentially, it leverages AI to extract valuable insights from unstructured data such as documents, contracts, and emails, then uses those insights to drive data-driven actions within Agentforce.

What's Next? The Future of AI-to-AI Integration

This is just the beginning. OpenText is expanding AI-driven automation across the entire content lifecycle, with upcoming innovations including:
🔹 More AI agents for sales, service, and operations
🔹 Industry-specific solutions (banking, insurance, healthcare)
🔹 Bi-directional AI – Blending insights from multiple AI systems for smarter decision-making

OpenText™ Content Aviator puts AI in the hands of business users, offering conversational search, content discovery, and document or workspace summarization: new ways to interact with content and extract knowledge.
Content Aviator enables organizations to combine generative AI and large language models (LLMs) with OpenText content services platforms, including OpenText™ Core Content Management, OpenText™ Documentum™ Content Management (CM), and OpenText™ Content Management (Extended ECM), making document management, knowledge discovery, and business process automation more efficient, effective, and intelligent.

Get Started Today
✅ Explore OpenText Content Aviator for Agentforce on Salesforce AgentExchange
✅ Discover all OpenText-Salesforce integrations on the Salesforce AppExchange

Unlock the power of AI-driven content intelligence and transform the way your teams work. Contact Tectonic today to get started.


Unpatched.ai

The Mystery of Unpatched.ai: AI-Powered Vulnerability Discovery Raises Questions

During January's Patch Tuesday, Microsoft credited Unpatched.ai for reporting multiple high-severity vulnerabilities. Yet, despite its contributions, the AI-driven bug-finding tool remains an enigma to the cybersecurity community.

Last month, Microsoft addressed 159 new vulnerabilities across its widely used products. Among them, Unpatched.ai was acknowledged for identifying three remote code execution flaws (CVE-2025-21186, CVE-2025-21366, and CVE-2025-21395), all of which affect Microsoft Access and received a CVSS score of 7.8.

While Microsoft's recognition highlights Unpatched.ai's role in vulnerability discovery, little is known about the tool itself. Informa TechTarget reached out to multiple security vendors and experts for insights, but responses only deepened the mystery.

A Cryptic Online Presence

Unpatched.ai describes itself as "vulnerability discovery by an AI-guided cybersecurity platform" on its website. It provides a list of reported vulnerabilities, which consists solely of Microsoft-related flaws, primarily within Microsoft Access. The platform states that it collaborates with "select enterprise, government, and security vendors based in the U.S. and ally countries."

The company's "About" page sheds some light on its mission, attributing its research to the need for greater transparency around unpatched software flaws: "We find unpatched issues in software to help customers better identify and manage cyber risk. Many issues are unknown or silently fixed by software vendors, hiding the true risk profile of their products. With the help of AI, we are developing an automated platform to help find and analyze these issues for our customers."

Beyond the website, Unpatched.ai maintains an X account, though much of its activity has been erased. A now-deleted post from January 29 warned that Microsoft's patch for CVE-2025-21396 was insufficient.
When contacted about the post, a Microsoft spokesperson responded, "We are aware of these reports and will take action as needed to help protect customers." However, Microsoft did not provide additional background on Unpatched.ai. Attempts to reach Unpatched.ai directly have gone unanswered.

Piecing Together the Puzzle

Efforts to uncover more about Unpatched.ai yielded few concrete details. The domain was registered through Namecheap in September, with ownership masked by a privacy service based in Reykjavik, Iceland.

Adam Barnett, lead software engineer at Rapid7, noted that beyond Unpatched.ai's website, information is scarce. However, he identified a Reddit user, "Fit_Tie_9430," who has claimed affiliation with the platform. This user shared details about Unpatched.ai's vulnerability discoveries and linked to now-private YouTube videos demonstrating exploits against Microsoft Access vulnerabilities.

Barnett pointed out that Unpatched.ai was also credited for a December Patch Tuesday flaw, CVE-2024-49142. The advisory was initially published without attribution; Microsoft later updated it to acknowledge Unpatched.ai's discovery.

Interestingly, the Unpatched.ai website's favicon, a simple ":)" emoticon, appears to reference the Windows Blue Screen of Death's ":(" symbol. "It's a nice touch," Barnett said, "but I still don't know who's behind it. It could be just about anyone with the time, resources, and skills."

Other industry experts share the same uncertainty. Satnam Narang, senior staff research engineer at Tenable, observed that Unpatched.ai's X account follows only a handful of infosec professionals. "It's unclear if the service is still in a closed-door phase and will eventually provide more insights about its leadership and team, or who may be backing it," he said.

Alon Yamin, co-founder and CEO of Copyleaks, noted that an AI-driven vulnerability discovery platform was inevitable given the surge in software flaws.
While AI can be a game-changer for proactive threat detection, he cautioned against potential misuse. "It's crucial that Unpatched.ai is deployed carefully, responsibly, and ethically, with safeguards to prevent attackers from exploiting the vulnerabilities it identifies," Yamin said.

The Future of AI-Powered Bug Hunting

AI-driven vulnerability discovery is an emerging focus in cybersecurity, though few major breakthroughs have been publicly confirmed. In November, Google announced it had discovered a zero-day vulnerability using AI: Google Project Zero and DeepMind's AI-powered agent, Big Sleep, identified a stack buffer underflow in the SQLite open-source database engine.

With Unpatched.ai making waves yet remaining elusive, the cybersecurity community is left with more questions than answers. Is this the beginning of a new era in AI-powered vulnerability research, or is Unpatched.ai an outlier? Until more information surfaces, the mystery remains.


Google Agentspace

Google Agentspace: Boosting Productivity with AI-Powered Agents

Google has unveiled Agentspace, a cutting-edge tool designed to revolutionize workplace productivity by combining the power of AI agents, Google Gemini 2.0, and its advanced search capabilities. This tool aims to streamline workflows, enhance information discovery, and empower enterprises to unlock the full potential of their data.

What is Google Agentspace?

Google Agentspace is an enterprise-focused productivity platform that simplifies complex tasks involving planning, research, and content generation. By integrating AI-powered tools like NotebookLM Plus, it enables employees to uncover insights, interact with unstructured and structured data, and make informed decisions, all in one centralized platform.

Core Benefits of Google Agentspace

1. Streamlined Information Discovery

Employees often waste hours sifting through fragmented data in emails, documents, and spreadsheets. Agentspace serves as a centralized knowledge hub, offering conversational assistance, proactive suggestions, and actionable insights from both unstructured and structured data sources. With pre-built connectors for tools like Google Drive, Jira, Microsoft SharePoint, and ServiceNow, Agentspace ensures seamless integration with existing systems, providing employees with relevant information faster.

2. Enhanced Multimodal Capabilities

Agentspace leverages Google's search expertise and Gemini 2.0 to provide advanced reasoning capabilities. Employees can query in multiple formats (text, audio, video), translate information into different languages, and generate audio summaries, enhancing productivity and accessibility.

3. Task Automation Across Departments

Agentspace empowers teams across various functions to automate repetitive tasks.

4. Scalable AI for Enterprises

Agentspace offers a low-code visual tool for creating custom AI agents tailored to specific business needs.
These agents can automate multi-step workflows, conduct in-depth research, and assist with data-driven content generation, enabling enterprises to scale AI adoption effortlessly.

Security and Responsible AI

Google Agentspace is built on Google Cloud's secure-by-design infrastructure, ensuring that enterprises can deploy AI tools with confidence. Google is also addressing responsible AI concerns with tools for evaluation, content moderation, and bias mitigation, ensuring ethical and explainable AI use in the workplace.

Use Cases

Google Agentspace provides solutions tailored to a variety of enterprise needs.

Challenges and Future Directions

Despite its potential, Agentspace faces hurdles such as employee training and adoption. Organizations must ensure that employees understand how to incorporate the tool into their daily workflows effectively. Moreover, Google's approach to responsible AI will be closely scrutinized. Addressing issues like explainability, bias prevention, and robust data infrastructure will be crucial for building trust and driving adoption.

Early Access and the Road Ahead

Google is offering early access to Agentspace, allowing enterprises to explore its potential and provide feedback. As AI continues to reshape the workplace, tools like Agentspace position Google as a leader in productivity-enhancing solutions for businesses. For enterprises looking to harness AI to unlock creativity, improve decision-making, and automate workflows, Agentspace is the next step in digital transformation. Sign up for early access today to bring the future of work to your organization.


Recent advancements in AI

Recent advancements in AI have been propelled by large language models (LLMs) containing billions to trillions of parameters. Parameters (the variables used to train and fine-tune machine learning models) have played a key role in the development of generative AI. As the number of parameters grows, models like ChatGPT can generate human-like content that was unimaginable just a few years ago. Parameters are sometimes referred to as "features" or "feature counts."

While it's tempting to equate the power of AI models with their parameter count, similar to how we think of horsepower in cars, more parameters aren't always better. An increase in parameters can lead to additional computational overhead and even problems like overfitting. There are various ways to increase the number of parameters in AI models, but not all approaches yield the same improvements. For example, Google's Switch Transformers scaled to trillions of parameters, but some of their smaller models outperformed them in certain use cases. Thus, other metrics should be considered when evaluating AI models.

The exact relationship between parameter count and intelligence is still debated. John Blankenbaker, principal data scientist at SSA & Company, notes that larger models tend to replicate their training data more accurately, but the belief that more parameters inherently lead to greater intelligence is often wishful thinking. He points out that while these models may sound knowledgeable, they don't actually possess true understanding.

One challenge is the misunderstanding of what a parameter is. It's not a word, feature, or unit of data but rather a component within the model's computation. Each parameter adjusts how the model processes inputs, much like turning a knob in a complex machine. In contrast to parameters in simpler models like linear regression, which have a clear interpretation, parameters in LLMs are opaque and offer no insight on their own.
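The linear-regression contrast can be made concrete. The tiny model below has exactly two learned parameters, and each one is directly interpretable as a slope and an intercept; an LLM's billions of weights admit no such reading. This is a plain closed-form least-squares sketch with made-up data:

```python
# Ordinary least squares on a tiny dataset: the fitted model has exactly
# two parameters, and each has a clear meaning (slope and intercept).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form OLS estimates.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(slope, intercept)  # 2.0 1.0
```

Here you can read the parameters off and explain them to a stakeholder; in an LLM, the same mathematical objects (learned weights) exist by the billions, but no individual weight means anything on its own.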
Christine Livingston, managing director at Protiviti, explains that parameters act as weights that allow flexibility in the model. However, more parameters can lead to overfitting, where the model performs well on training data but struggles with new information.

Adnan Masood, chief AI architect at UST, highlights that parameters influence precision, accuracy, and data management needs. However, due to the size of LLMs, it's impractical to focus on individual parameters. Instead, developers assess models based on their intended purpose, performance metrics, and ethical considerations. Understanding the data sources and pre-processing steps becomes critical in evaluating the model's transparency.

It's important to differentiate between parameters, tokens, and words. A parameter is not a word; rather, it's a value learned during training. Tokens are fragments of words, and LLMs are trained on these tokens, which are transformed into embeddings used by the model.

The number of parameters influences a model's complexity and capacity to learn. More parameters often lead to better performance, but they also increase computational demands. Larger models can be harder to train and operate, leading to slower response times and higher costs. In some cases, smaller models are preferred for domain-specific tasks because they generalize better and are easier to fine-tune.

Transformer-based models like GPT-4 dwarf previous generations in parameter count. However, for edge-based applications where resources are limited, smaller models are preferred as they are more adaptable and efficient. Fine-tuning large models for specific domains remains a challenge, often requiring extensive oversight to avoid problems like overfitting.

There is also growing recognition that parameter count alone is not the best way to measure a model's performance.
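Where parameter counts come from is simple arithmetic over the architecture: a fully connected layer with n inputs and m outputs carries n*m weights plus m biases. The layer sizes below are invented purely for illustration:

```python
def dense_params(n_in, n_out):
    """Learned values in one fully connected layer: weights plus biases."""
    return n_in * n_out + n_out

# A toy three-layer network: 512 -> 1024 -> 1024 -> 10
layers = [(512, 1024), (1024, 1024), (1024, 10)]
total = sum(dense_params(i, o) for i, o in layers)
print(total)  # 1585162
```

Scaling each layer width by 10x multiplies the weight terms by roughly 100x, which is why parameter counts (and the compute and memory bills that follow them) grow so quickly with model size.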
Alternatives like Stanford's HELM and benchmarks such as GLUE and SuperGLUE assess models across multiple factors, including fairness, efficiency, and bias.

Three trends are shaping how we think about parameters. First, AI developers are improving model performance without necessarily increasing parameters. A study of 231 models between 2012 and 2023 found that the computational power required for LLMs has halved every eight months, outpacing Moore's Law. Second, new neural network approaches like Kolmogorov-Arnold Networks (KANs) show promise, achieving comparable results to traditional models with far fewer parameters. Lastly, agentic AI frameworks like Salesforce's Agentforce offer a new architecture in which domain-specific AI agents can outperform larger general-purpose models.

As AI continues to evolve, it's clear that while parameter count is an important consideration, it's just one of many factors in evaluating a model's overall capabilities. To stay on the cutting edge of artificial intelligence, contact Tectonic today.


LLMs Turn CSVs into Knowledge Graphs

Neo4j Runway and Healthcare Knowledge Graphs

Recently, Neo4j Runway was introduced as a tool that simplifies the migration of relational data into graph structures, letting LLMs turn CSVs into knowledge graphs. According to its GitHub page, "Neo4j Runway is a Python library that simplifies the process of migrating your relational data into a graph. It provides tools that abstract communication with OpenAI to run discovery on your data and generate a data model, as well as tools to generate ingestion code and load your data into a Neo4j instance." In essence, you upload a CSV file, and the LLM identifies the nodes and relationships and automatically generates a knowledge graph.

Knowledge graphs in healthcare are powerful tools for organizing and analyzing complex medical data. These graphs structure information to elucidate relationships between different entities, such as diseases, treatments, patients, and healthcare providers.

Applications of Knowledge Graphs in Healthcare

Integration of Diverse Data Sources

Knowledge graphs can integrate data from various sources such as electronic health records (EHRs), medical research papers, clinical trial results, genomic data, and patient histories.

Improving Clinical Decision Support

By linking symptoms, diagnoses, treatments, and outcomes, knowledge graphs can enhance clinical decision support systems (CDSS). They provide a comprehensive view of interconnected medical knowledge, potentially improving diagnostic accuracy and treatment effectiveness.

Personalized Medicine

Knowledge graphs enable the development of personalized treatment plans by correlating patient-specific data with broader medical knowledge. This includes understanding relationships between genetic information, disease mechanisms, and therapeutic responses, leading to more tailored healthcare interventions.
Drug Discovery and Development

In pharmaceutical research, knowledge graphs can accelerate drug discovery by identifying potential drug targets and understanding the biological pathways involved in diseases.

Public Health and Epidemiology

Knowledge graphs are useful in public health for tracking disease outbreaks, understanding epidemiological trends, and planning interventions. They integrate data from various public health databases, social media, and other sources to provide real-time insights into public health threats.

The Neo4j Runway Library

Neo4j Runway is an open-source library created by Alex Gilmore. The GitHub repository and a blog post describe its features and capabilities. Currently, the library supports OpenAI LLMs for parsing CSVs. It eliminates the need to write Cypher queries manually, as the LLM handles the entire CSV-to-knowledge-graph conversion. Additionally, Langchain's GraphCypherQAChain can be used to generate Cypher queries from prompts, allowing the graph to be queried without writing a single line of Cypher.

Practical Implementation in Healthcare

To test Neo4j Runway in a healthcare context, a simple dataset from Kaggle (the Disease Symptoms and Patient Profile Dataset) was used. This dataset includes columns such as Disease, Fever, Cough, Fatigue, Difficulty Breathing, Age, Gender, Blood Pressure, Cholesterol Level, and Outcome Variable. The goal was to provide a medical report to the LLM to get diagnostic hypotheses.
Libraries and Environment Setup

```python
# Install the system and Python dependencies first:
#   sudo apt install python3-pydot graphviz
#   pip install neo4j-runway python-dotenv

import os

import numpy as np
import pandas as pd
from dotenv import load_dotenv
from neo4j_runway import Discovery, GraphDataModeler, IngestionGenerator, LLM, PyIngest
from IPython.display import display, Markdown, Image
```

Load Environment Variables

```python
# Read credentials from a .env file instead of hard-coding secrets.
load_dotenv()
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')   # your OpenAI API key
NEO4J_URL = os.getenv('NEO4J_URL')             # e.g. neo4j+s://your.databases.neo4j.io
NEO4J_PASSWORD = os.getenv('NEO4J_PASSWORD')
```

Load and Prepare Medical Data

```python
disease_df = pd.read_csv('/home/user/Disease_symptom.csv')
disease_df.columns = disease_df.columns.str.strip()

# Neo4j Runway expects string values, so cast every column.
for col in disease_df.columns:
    disease_df[col] = disease_df[col].astype(str)

disease_df.to_csv('/home/user/disease_prepared.csv', index=False)
```

Data Description for the LLM

```python
DATA_DESCRIPTION = {
    'Disease': 'The name of the disease or medical condition.',
    'Fever': 'Indicates whether the patient has a fever (Yes/No).',
    'Cough': 'Indicates whether the patient has a cough (Yes/No).',
    'Fatigue': 'Indicates whether the patient experiences fatigue (Yes/No).',
    'Difficulty Breathing': 'Indicates whether the patient has difficulty breathing (Yes/No).',
    'Age': 'The age of the patient in years.',
    'Gender': 'The gender of the patient (Male/Female).',
    'Blood Pressure': 'The blood pressure level of the patient (Normal/High).',
    'Cholesterol Level': 'The cholesterol level of the patient (Normal/High).',
    'Outcome Variable': 'The outcome variable indicating the result of the diagnosis or assessment for the specific disease (Positive/Negative).'
}
```

Data Analysis and Model Creation

```python
# Create the LLM wrapper (reads the OpenAI key from the environment).
llm = LLM()

disc = Discovery(llm=llm, user_input=DATA_DESCRIPTION, data=disease_df)
disc.run()

# Instantiate and create the initial graph data model
gdm = GraphDataModeler(llm=llm, discovery=disc)
gdm.create_initial_model()
gdm.current_model.visualize()
```

Adjust Relationships

```python
gdm.iterate_model(user_corrections='''
Let's think step by step. Please make the following updates to the data model:
1. Remove the relationships between Patient and Disease, between Patient and Symptom, and between Patient and Outcome.
2. Change the Patient node into Demographics.
3. Create a relationship HAS_DEMOGRAPHICS from Disease to Demographics.
4. Create a relationship HAS_SYMPTOM from Disease to Symptom. If the Symptom value is No, remove this relationship.
5. Create a relationship HAS_LAB from Disease to HealthIndicator.
6. Create a relationship HAS_OUTCOME from Disease to Outcome.
''')

# Visualize the updated model
gdm.current_model.visualize().render('output', format='png')
img = Image('output.png', width=1200)
display(img)
```

Generate Cypher Code and YAML File

```python
# Instantiate the ingestion generator
gen = IngestionGenerator(
    data_model=gdm.current_model,
    username="neo4j",
    password=NEO4J_PASSWORD,
    uri=NEO4J_URL,
    database="neo4j",
    csv_dir="/home/user/",
    csv_name="disease_prepared.csv",
)

# Create the ingestion YAML, then load the data into the Neo4j instance
pyingest_yaml = gen.generate_pyingest_yaml_string()
gen.generate_pyingest_yaml_file(file_name="disease_prepared")
PyIngest(yaml_string=pyingest_yaml, dataframe=disease_df)
```

Querying the Graph Database

```cypher
MATCH (n)
WHERE n:Demographics OR n:Disease OR n:Symptom OR n:Outcome OR n:HealthIndicator
OPTIONAL MATCH (n)-[r]->(m)
RETURN n, r, m
```

Visualizing Specific Nodes and Relationships

```cypher
MATCH (n:Disease {name: 'Diabetes'})
OPTIONAL MATCH (n)-[r]->(m)
RETURN n, r, m
```

```cypher
MATCH (d:Disease)-[r:HAS_LAB]->(l)
MATCH (d)-[r2:HAS_OUTCOME]->(o)
WHERE l.bloodPressure = 'High' AND o.result = 'Positive'
RETURN d, properties(d) AS disease_properties,
       r, properties(r) AS relationship_properties,
       l, properties(l) AS lab_properties
```

Automated Cypher Query Generation with Gemini-1.5-Flash
To automatically generate a Cypher query via Langchain (GraphCypherQAChain) and retrieve possible diseases based on a patient's symptoms and health indicators, the following setup was used.

Get the Knowledge Graph and Its Schema

```python
import json
import textwrap
import warnings

from langchain_community.graphs import Neo4jGraph

warnings.simplefilter('ignore')

NEO4J_USERNAME = "neo4j"
NEO4J_DATABASE = 'neo4j'
NEO4J_URI = 'neo4j+s://1236547.databases.neo4j.io'
NEO4J_PASSWORD = 'yourneo4jdatabasepasswordhere'

# Get the knowledge graph from the instance along with its schema
kg = Neo4jGraph(
    url=NEO4J_URI,
    username=NEO4J_USERNAME,
    password=NEO4J_PASSWORD,
    database=NEO4J_DATABASE,
)
kg.refresh_schema()
print(textwrap.fill(kg.schema, 60))
schema = kg.schema
```

Initialize Vertex AI

```python
import vertexai
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import GraphCypherQAChain
from langchain.llms import VertexAI

vertexai.init(project="your-project", location="us-west4")
llm = VertexAI(model="gemini-1.5-flash")
```

Create the Prompt Template

```python
prompt_template = """ Let's think step by
```
