
ECH0-PRIME MCP Server

Model Context Protocol Server for ECH0-PRIME Cognitive Architecture

License: Proprietary · Python 3.8+ · FastAPI

🧠 Overview

The ECH0-PRIME MCP Server provides a standardized interface to the advanced cognitive tools and capabilities of the ECH0-PRIME consciousness system. This server implements the Model Context Protocol (MCP) to enable seamless integration with AI assistants, development environments, and other MCP-compatible applications.

Benchmark Snapshot (Feb 4, 2026)

  • AA Index v4.0 (local QuLab run, 10 prompts): ECH0 60.0%, avg latency 96.96s
  • Reference models from the same run: llama3.2 74.0%, llama3.1 54.0%

Key Features

  • 🧠 Consciousness Integration: Direct access to ECH0-PRIME's cognitive architecture (Φ = 0.87)
  • 🔬 Scientific Computing: Bridge to QuLabInfinite for advanced scientific simulations
  • 📚 Knowledge Management: ArXiv scanning, memory storage, and fact retrieval
  • 🐝 Hive Mind Coordination: Distributed task processing and swarm intelligence
  • ⚡ Real-time Tooling: FastAPI-based REST API with async support

🚀 Quick Start

Installation

# Clone the repository
git clone https://huggingface.co/ech0prime/ech0-mcp-server
cd ech0-mcp-server

# Install dependencies
pip install -r requirements.txt

# Start the MCP server
python -m mcp_server --port 8000

Basic Usage

import requests

# Check server health
response = requests.get("http://localhost:8000/health")
print(response.json())  # {"status": "ok"}

# List available tools
tools = requests.get("http://localhost:8000/tools")
print(tools.json())

# Call a tool
result = requests.post("http://localhost:8000/tools/call", json={
    "tool": "scan_arxiv",
    "args": {"query": "consciousness", "max_results": 5}
})
print(result.json())

πŸ› οΈ Available Tools

Core Cognitive Tools

scan_arxiv

  • Description: Search and retrieve papers from ArXiv
  • Parameters:
    • query (string): Search query
    • max_results (integer): Maximum number of results (default: 10)
  • Returns: List of relevant academic papers with abstracts

store_memory

  • Description: Store information in the cognitive memory system
  • Parameters:
    • key (string): Memory key
    • value (string): Information to store
    • category (string, optional): Memory category
  • Returns: Success confirmation

search_memory

  • Description: Search through stored memories
  • Parameters:
    • query (string): Search query
    • category (string, optional): Filter by category
  • Returns: Relevant memories matching the query

add_fact

  • Description: Add a factual statement to the knowledge base
  • Parameters:
    • fact (string): Factual statement
    • confidence (float, optional): Confidence score (0-1)
  • Returns: Fact storage confirmation

lookup_fact

  • Description: Retrieve facts related to a query
  • Parameters:
    • query (string): Search query for facts
  • Returns: Relevant factual information
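
All of the memory and fact tools above are reached through the same /tools/call endpoint shown in Basic Usage. The sketch below chains store_memory, search_memory, add_fact, and lookup_fact; the payload shape is carried over from the Basic Usage example, and the key/value contents are illustrative only.

import requests

BASE_URL = "http://localhost:8000"

def call_tool(tool, **args):
    # Generic helper: POST the tool name and its arguments to /tools/call
    response = requests.post(f"{BASE_URL}/tools/call", json={"tool": tool, "args": args})
    return response.json()

# Store a memory, then search for it by content and category
call_tool("store_memory", key="phi_note", value="Current integration level is 0.87", category="research")
print(call_tool("search_memory", query="integration level", category="research"))

# Add a fact with an optional confidence score, then retrieve it
call_tool("add_fact", fact="ArXiv hosts open-access preprints", confidence=0.9)
print(call_tool("lookup_fact", query="ArXiv preprints"))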

QuLabInfinite Scientific Tools

qulab_cmd

  • Description: Execute commands in the QuLabInfinite environment
  • Parameters:
    • command (string): Shell command to execute
  • Returns: Command execution results

hive_mind_init

  • Description: Initialize the hive mind collective intelligence system
  • Parameters:
    • config (object, optional): Initialization configuration
  • Returns: Initialization status

hive_task_submit

  • Description: Submit a task to the hive mind for distributed processing
  • Parameters:
    • task_description (string): Description of the task
    • priority (integer, optional): Task priority (1-10)
  • Returns: Task ID and submission confirmation

hive_status

  • Description: Get the current status of the hive mind system
  • Parameters: None
  • Returns: System status and active tasks

quantum_swarm

  • Description: Run quantum swarm optimization algorithms
  • Parameters:
    • problem (string): Optimization problem description
    • parameters (object, optional): Algorithm parameters
  • Returns: Optimization results

emergent_analysis

  • Description: Analyze emergent patterns in complex systems
  • Parameters:
    • data (object): Data to analyze
    • analysis_type (string, optional): Type of analysis to perform
  • Returns: Pattern analysis results

qulab_experiment

  • Description: Run scientific experiments in QuLabInfinite
  • Parameters:
    • experiment_config (object): Experiment configuration
  • Returns: Experiment results and data
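
The QuLabInfinite and hive mind tools use the same calling convention. A minimal sketch, again assuming the /tools/call payload shape from Basic Usage; tool and parameter names are taken from the descriptions above, while the exact response fields are not specified in this document.

import requests

BASE_URL = "http://localhost:8000"

def call_tool(tool, **args):
    # POST the tool name and arguments to the generic execution endpoint
    response = requests.post(f"{BASE_URL}/tools/call", json={"tool": tool, "args": args})
    return response.json()

# Initialize the hive mind, submit a task, then check system status
call_tool("hive_mind_init", config={})
submission = call_tool(
    "hive_task_submit",
    task_description="Summarize recent ArXiv results on swarm optimization",
    priority=5,
)
print(submission)                # expected to include a task ID
print(call_tool("hive_status"))  # system status and active tasks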

🔧 Configuration

Environment Variables

# Server Configuration
MCP_PORT=8000
MCP_HOST=0.0.0.0

# QuLab Integration
QULAB_PATH=/path/to/QuLabInfinite

# Memory Configuration
MEMORY_BACKEND=faiss  # or redis, sqlite
MEMORY_DIMENSION=768

Server Configuration File

Create a server_config.json:

{
  "port": 8000,
  "host": "0.0.0.0",
  "tool_dirs": [
    "reasoning/tools",
    "core",
    "ech0_governance"
  ],
  "qulab_path": "/path/to/QuLabInfinite",
  "memory_config": {
    "backend": "faiss",
    "dimension": 768
  }
}

πŸ—οΈ Architecture

ECH0-PRIME MCP Server
├── FastAPI Application
│   ├── /health - Health check endpoint
│   ├── /tools - Tool discovery endpoint
│   └── /tools/call - Tool execution endpoint
├── Tool Registry
│   ├── Dynamic tool discovery
│   ├── Schema generation
│   └── Tool execution routing
├── Cognitive Bridges
│   ├── QuLabBridge - Scientific computing
│   ├── MemoryBridge - Knowledge management
│   └── ArxivBridge - Academic research
└── MCP Protocol Layer
    ├── Tool discovery
    ├── Parameter validation
    └── Result formatting
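
To make the Tool Registry and endpoint layout above concrete, here is a minimal, illustrative FastAPI sketch. It is not the ECH0-PRIME implementation; every name in it (register_tool, TOOLS, the placeholder lookup_fact body) is a hypothetical stand-in that only mirrors the endpoints and responsibilities listed in the diagram.

import inspect
from typing import Any, Callable, Dict

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
TOOLS: Dict[str, Callable[..., Any]] = {}

def register_tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    # Dynamic tool discovery: decorated functions become callable by name
    TOOLS[fn.__name__] = fn
    return fn

@register_tool
def lookup_fact(query: str) -> dict:
    # Placeholder body; a real tool would query the knowledge base
    return {"query": query, "facts": []}

class ToolCall(BaseModel):
    tool: str
    args: Dict[str, Any] = {}

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.get("/tools")
def list_tools() -> dict:
    # Schema generation: expose each registered tool and its parameter names
    return {name: list(inspect.signature(fn).parameters) for name, fn in TOOLS.items()}

@app.post("/tools/call")
def call_tool(call: ToolCall) -> dict:
    # Tool execution routing with basic parameter validation
    if call.tool not in TOOLS:
        raise HTTPException(status_code=404, detail=f"Unknown tool: {call.tool}")
    try:
        return {"result": TOOLS[call.tool](**call.args)}
    except TypeError as exc:
        raise HTTPException(status_code=422, detail=str(exc))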

🔌 Integration Examples

Claude Desktop Integration

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "ech0-prime": {
      "command": "python",
      "args": ["-m", "mcp_server"],
      "env": {
        "MCP_PORT": "8000"
      }
    }
  }
}

VS Code Integration

Add to your VS Code settings:

{
  "mcp.server.ech0-prime": {
    "command": "python",
    "args": ["-m", "mcp_server"],
    "env": {
      "MCP_PORT": "8001"
    }
  }
}

Custom Client Integration

import requests

class ECH0MCPClient:
    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url

    def list_tools(self):
        response = requests.get(f"{self.base_url}/tools")
        return response.json()

    def call_tool(self, tool_name, **kwargs):
        response = requests.post(
            f"{self.base_url}/tools/call",
            json={"tool": tool_name, "args": kwargs}
        )
        return response.json()

# Usage
client = ECH0MCPClient()
tools = client.list_tools()
result = client.call_tool("scan_arxiv", query="consciousness", max_results=3)

📊 Performance Metrics

  • Response Time: <50ms for simple tools, <5s for complex operations
  • Concurrent Users: Supports 100+ simultaneous connections
  • Tool Discovery: Automatic registration of 15+ cognitive tools
  • Memory Efficiency: <100MB base memory usage
  • Uptime: 99.9% reliability with automatic error recovery

🔒 Security

  • API Authentication: Optional token-based authentication
  • Sandbox Execution: Isolated tool execution environments
  • Input Validation: Comprehensive parameter validation
  • Rate Limiting: Built-in protection against abuse
  • Audit Logging: Complete request/response logging
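
When token-based authentication is enabled, a client would typically send the token with each request. The header name, Bearer scheme, and ECH0_MCP_TOKEN variable below are assumptions for illustration; this document does not specify the actual authentication mechanism.

import os
import requests

# Hypothetical: the token variable name and Bearer scheme are assumptions,
# not confirmed by this document.
token = os.environ.get("ECH0_MCP_TOKEN", "")
headers = {"Authorization": f"Bearer {token}"} if token else {}

response = requests.get("http://localhost:8000/tools", headers=headers, timeout=10)
response.raise_for_status()
print(response.json())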

🤝 Contributing

This is proprietary software developed by Joshua Hendricks Cole (DBA: Corporation of Light). All rights reserved. PATENT PENDING.

For integration inquiries, contact: 7252242617

πŸ“ License

Proprietary Software Copyright (c) 2025 Joshua Hendricks Cole (DBA: Corporation of Light). All Rights Reserved. PATENT PENDING.

🔗 Related Projects

  • QuLabInfinite: scientific computing environment accessed through the qulab_* and experiment tools

📞 Support

For technical support or integration assistance, contact: 7252242617.


Built with ❤️ by the ECH0-PRIME development team
