
Metanthropic Labs Logo

METANTHROPIC LABS

RESEARCH • INTERPRETABILITY • INTELLIGENCE

Website GitHub LinkedIn RSS Feed


"We do not view safety as an adjunct effort; it is the mathematical constraint under which we optimize for intelligence."



🔬 About Metanthropic

Metanthropic Labs is an independent AI research organization founded by Ekjot Singh. We operate on a singular thesis: the path to safe Artificial General Intelligence requires systems that are not just highly capable, but structurally transparent.

Rather than relying purely on brute-force scaling, our lab focuses on high-efficiency architectural injections, mechanistic interpretability, and controllable reasoning. We build, dissect, and open-source models that push the frontier of what is possible under strict compute budgets.


🧠 Core Research Vectors

Our technical agenda is rigorous, empirical, and built in public. We currently focus on three primary vectors:

1. Mechanistic Interpretability & Latent Topologies

We cannot align systems we do not fundamentally understand. Our lab pioneers techniques to map and control the internal representations of large language models.

  • Sparse Autoencoders (SAEs): Developing novel gating mechanisms (like Chronometric Flux Gating) to prevent feature absorption and isolate causal logic circuits; a simplified gated-SAE sketch follows this list.
  • Interference Rejection: Mapping destructive interference in latent spaces to structurally immunize models against algorithmic exploits.
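
For readers new to this line of work: Chronometric Flux Gating is an internal technique, but the general family it belongs to can be illustrated with a minimal gated sparse autoencoder in PyTorch. The sketch below follows the publicly known gated-SAE formulation; the class name, dimensions, and loss coefficient are illustrative assumptions, not our production code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSAE(nn.Module):
    """Minimal gated sparse autoencoder for dissecting LLM activations.

    A binary gate decides WHICH features fire; a separate magnitude path
    decides HOW STRONGLY they fire. Decoupling detection from magnitude
    is one public recipe for reducing feature absorption, where a single
    learned feature swallows the role of another.
    """

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_hidden) * 0.02)
        self.b_gate = nn.Parameter(torch.zeros(d_hidden))
        self.r_mag = nn.Parameter(torch.zeros(d_hidden))   # per-feature magnitude rescale
        self.b_mag = nn.Parameter(torch.zeros(d_hidden))
        self.W_dec = nn.Parameter(torch.randn(d_hidden, d_model) * 0.02)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        pre = (x - self.b_dec) @ self.W_enc
        gate = (pre + self.b_gate > 0).float()             # binary on/off detector
        mag = F.relu(pre * torch.exp(self.r_mag) + self.b_mag)
        f = gate * mag                                     # sparse feature activations
        x_hat = f @ self.W_dec + self.b_dec                # reconstruction
        return x_hat, f

# Toy usage on a batch of residual-stream activations. Note: the hard gate is
# non-differentiable, so published gated-SAE recipes train it with an auxiliary
# loss through the gate path; that detail is omitted here for brevity.
sae = GatedSAE(d_model=768, d_hidden=768 * 8)
x = torch.randn(4, 768)
x_hat, f = sae(x)
loss = F.mse_loss(x_hat, x) + 1e-3 * f.abs().sum(-1).mean()
```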

2. Controllable Reasoning Systems

We are pushing the frontier of "System 2" reasoning, moving models beyond probabilistic pattern matching to verifiable, multi-step logical deduction.

  • Test-Time Compute: Engineering architectures that can autonomously backtrack, cross-verify, and self-correct prior to generating final outputs.
  • Parameter-Efficient Scaling: Utilizing Mixture-of-Experts (MoE) and surgical layer grafting to achieve frontier-level performance on highly constrained hardware; a generic routing sketch follows this list.
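
For the MoE direction, a minimal top-k routed feed-forward block makes the efficiency argument concrete: each token activates only k of n experts, so compute per token stays near-constant while total capacity grows with the expert count. This is a generic sketch of standard top-k routing, not the Arvi-20b architecture; all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Generic top-k routed Mixture-of-Experts feed-forward block."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.shape[-1])               # flatten (batch, seq) into tokens
        logits = self.router(tokens)                      # (T, n_experts) routing scores
        weights, idx = logits.topk(self.k, dim=-1)        # each token picks its k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue                                  # no tokens routed to this expert
            out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(tokens[token_ids])
        return out.reshape_as(x)

layer = TopKMoE(d_model=512, d_ff=2048)
y = layer(torch.randn(2, 16, 512))                        # shape preserved: (2, 16, 512)
```

Production MoE layers typically add a load-balancing auxiliary loss and fused expert dispatch; the per-expert Python loop here is kept for readability.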

3. High-Efficiency Multimodality

We build grounded world models that natively process text, audio, and visual data without the overhead of traditional multi-stage pipelines.

  • Native Vision Transformers: Streamlining adapter architectures to unify perception-centric tasks (like OCR) with semantic reasoning; a generic adapter sketch follows.
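
One widely used pattern for this unification: project the vision encoder's patch embeddings into the language model's token space and prepend them as soft tokens, so the LLM attends to visual evidence (character shapes, layout) directly. The sketch below is a generic linear-adapter illustration with hypothetical dimensions, not the internals of our OCR stack.

```python
import torch
import torch.nn as nn

class VisionAdapter(nn.Module):
    """Projects vision-encoder patch embeddings into an LLM's embedding space."""

    def __init__(self, d_vision: int = 1024, d_llm: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(d_vision, d_llm),
            nn.GELU(),
            nn.Linear(d_llm, d_llm),
        )

    def forward(self, patches: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # patches: (B, n_patches, d_vision); text_emb: (B, n_text, d_llm)
        visual_tokens = self.proj(patches)                # soft tokens in LLM space
        return torch.cat([visual_tokens, text_emb], dim=1)

adapter = VisionAdapter()
seq = adapter(torch.randn(1, 256, 1024), torch.randn(1, 32, 4096))
print(seq.shape)  # torch.Size([1, 288, 4096]), fed straight to the LLM's transformer blocks
```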

🚀 Featured Infrastructure & Models

We believe in open-source validation. Our active deployments include:

  • Arvi-20b: A 20-billion-parameter autoregressive Mixture-of-Experts (MoE) reasoning model designed for agentic workflows and tool use.
  • BulBul-OCR Engine: A highly efficient Vision-Language Model framework engineered specifically for complex optical character recognition and parsing.
  • Metanthropic Interactive Spaces: Live Gradio deployments of our reasoning and multimodal models for community testing and adversarial probing.

(Note: Experimental and foundational research shards are kept private until they meet our rigorous internal safety and coherence benchmarks.)


📰 Research Index & Lab Updates

To stay current with our latest model weights, technical reports, and architectural milestones, subscribe directly to our live feed:
📡 Subscribe to the Metanthropic Labs RSS Feed



Copyright © 2025-2026 Metanthropic Labs. All rights reserved.
Licensed under the Metanthropic Research License.
