Verify Knowledge

This section describes how to store trusted, verified answers and their related context in Connecty AI via human feedback, so the AI can learn from them.

Overview

Connecty AI's context engine allows users to verify and store trusted components of questions, SQL queries, and semantic entities. This verified information forms a dynamic knowledge base scoped to a specific data workspace and used across sessions to enhance query accuracy, consistency, and semantic reasoning.

Rather than just saving static SQL snippets, Connecty understands the semantic meaning of your logic—thanks to its autonomous Context Graph—and applies verified components precisely where appropriate.


What Can Be Stored

You can verify and store the following types of information in Knowledge:

🔹 Metric Entities

Semantic components extracted from your questions or SQL, including:

  • Metrics (e.g., Gross Margin, Net Revenue)

  • Subjects (e.g., Orders, Customers)

  • Measures (e.g., Total Spend)

  • Attributes (e.g., Region, Product Category)

  • Relationships (e.g., Customers JOIN Orders)

  • Dimensions (e.g., Time, Channel)

  • Filters (e.g., status = active, date > last quarter)
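
To make these component types concrete, here is a minimal SQL sketch with comments mapping each clause to the entity it would typically yield. The table and column names (orders, customers, amount, and so on) are illustrative assumptions, not your schema:

```sql
-- Illustrative only: table and column names are assumed for the example.
SELECT
    c.region                 AS region,       -- Attribute / Dimension: Region
    SUM(o.amount)            AS total_spend   -- Measure: Total Spend
FROM orders o                                 -- Subject: Orders
JOIN customers c                              -- Subject: Customers
  ON c.customer_id = o.customer_id            -- Relationship: Customers JOIN Orders
WHERE o.status = 'active'                     -- Filter: status = active
GROUP BY c.region;
```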

🔹 Questions

You can verify full natural-language or structured questions. Verifying a question stores both its intent and the semantics inferred by the system.

🔹 SQL Queries

When a question resolves to a specific SQL query with trusted logic (e.g., business-approved aggregations or joins), you can store the SQL directly as verified knowledge.
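
For example, a business-approved definition such as the monthly active users sketch below could be verified so that its aggregation logic is reused verbatim rather than re-inferred. The events table, its columns, the activity rule, and the PostgreSQL-style date function are all assumptions for illustration:

```sql
-- Illustrative business-approved aggregation; schema names are assumptions.
SELECT
    DATE_TRUNC('month', event_time) AS activity_month,
    COUNT(DISTINCT user_id)         AS monthly_active_users
FROM events
WHERE event_type = 'session_start'  -- assumed policy: a started session counts as activity
GROUP BY DATE_TRUNC('month', event_time);
```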


How to Use It

  1. Ask a question or run a SQL query through the chat or app interface.

  2. Once the system responds, click the Verify button.

  3. A modal will open displaying the question, SQL, and semantic entities that are candidates for storage.

  4. Use the selector to include or exclude any components.

  5. Click Accept to finalize storage into the Knowledge base.

You may click any listed entity to inspect its definition before confirming.

⚠️ Note: Verifying a question may also auto-select and verify all its semantic components. You’ll receive a warning before confirming:

“You are verifying the response generated for this question. Verifying this will also auto-verify all its underlying components that are selected below. Unselect any components you don’t want to include. The verification will be applied for the currently selected data workspace.”


Where Knowledge Is Applied

  • Scope: Verification is applied only within the current data workspace. Other workspaces are isolated.

  • Usage: Verified components are reused during:

    • Semantic parsing of future questions

    • SQL generation

    • Conflict resolution and clarification

  • Impact: Downstream responses will prefer verified definitions over inferred logic when contexts match.


AI Intelligence: How Connecty Applies Knowledge

Connecty AI’s autonomous semantic graph—the Context Graph—is a dynamic representation of your data, logic, and verified knowledge.

Key Behaviors:

  • Semantic Graph Construction: On each question or query, Connecty builds a real-time graph of all involved entities, metrics, relationships, and filters.

  • Dependency Resolution: When you verify a metric or question, Connecty tracks all related components. It understands that Net Revenue depends on Revenue, Refunds, and Taxes, and maps this dependency (see the sketch after this list).

  • Scoped Trust Propagation: Verified definitions are reused only when contextually compatible. For example, Customer Lifetime Value verified for DTC customers will not be reused for B2B metrics unless dependency conditions match.

  • Version & Conflict Detection: If multiple definitions exist (e.g., for Active Users), Connecty detects the divergence and prompts for clarification before use.

  • Auto-Adaptive Updates: If your schema changes (e.g., renaming plan_tier to tier_code), Connecty updates affected graph nodes and recalculates dependent logic accordingly.

This system enables safe reuse, conflict resolution, and precision reasoning at semantic scale.
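
For instance, the Net Revenue dependency mentioned above might decompose as in this hedged SQL sketch. The table and column names are illustrative assumptions, not Connecty's internal representation:

```sql
-- Illustrative decomposition of Net Revenue into its dependencies.
-- Table and column names (orders, refunds, gross_amount, tax_amount) are assumptions.
SELECT
    SUM(o.gross_amount)                  AS revenue,      -- dependency: Revenue
    SUM(COALESCE(r.refund_amount, 0))    AS refunds,      -- dependency: Refunds
    SUM(o.tax_amount)                    AS taxes,        -- dependency: Taxes
    SUM(o.gross_amount)
      - SUM(COALESCE(r.refund_amount, 0))
      - SUM(o.tax_amount)                AS net_revenue   -- the verified metric
FROM orders o
LEFT JOIN refunds r ON r.order_id = o.order_id;
```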


Things to Consider

  • Auto-Verification Cascade: Verifying a question may select multiple dependent components. You'll be shown the warning quoted above and can deselect any items manually.

  • Conflict Alerts: If verification introduces a conflict with existing Knowledge, you’ll receive a versioning or compatibility warning.

  • Manual Review: Periodically audit your workspace’s Knowledge base to remove outdated logic or revise evolving definitions.

  • Partial Verification is Supported: You can verify only parts of a query—such as specific metrics or filters—without storing the entire question or SQL.


Example Workflow

Question: “What is our gross margin by region for the last quarter?”

Connecty extracts the following:

  • Metric: Gross Margin = Revenue - COGS

  • Dimension: Region

  • Filter: last quarter
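
Taken together, these components might resolve to a query along the lines of the hedged sketch below. Table and column names are illustrative assumptions, and the date arithmetic uses PostgreSQL-style syntax:

```sql
-- Illustrative resolution of the question; schema names are assumptions.
SELECT
    c.region,                                              -- Dimension: Region
    SUM(o.revenue) - SUM(o.cogs) AS gross_margin           -- Metric: Gross Margin = Revenue - COGS
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
WHERE o.order_date >= DATE_TRUNC('quarter', CURRENT_DATE) - INTERVAL '3 months'  -- Filter: last quarter
  AND o.order_date <  DATE_TRUNC('quarter', CURRENT_DATE)
GROUP BY c.region;
```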

Once verified:

  • These definitions are stored with their dependency graph.

  • If another user later asks “Compare gross margin across regions”, the system will automatically reuse the trusted metric logic—ensuring consistency and avoiding errors.


Benefits of the Knowledge System

  • Semantic Reasoning: Understands and applies logic across structured questions and SQL

  • Trust Propagation: Reuses verified entities only when context and dependencies align

  • Conflict Detection: Flags overlapping or inconsistent definitions before they’re applied

  • Cross-Session Consistency: Ensures reused metrics and filters behave identically across the workspace

  • Adaptability: Automatically responds to changes in schema or business definitions


Final Notes

Connecty AI’s Knowledge system is not a static store of templates or SQL fragments. It is a semantically aware, versioned, dependency-resolving system that ensures trust in metrics, safety in reuse, and clarity in interpretation.

If you're building at scale, this is how you make metric logic interpretable, reusable, and maintainable—without relying on manual modeling.
