Gestalt Lab 🇨🇦
Independent Canadian Open Reasoning Research
GestaltLabs is the research home for the Ornstein, Harmonic, CHIMERA, Acta, Talos, and related model lines by DJLougen. We build, curate, and release fully permissive open-weight models and datasets for practical reasoning systems: multimodal assistants, tool-using agents, and local GGUF deployments, alongside post-training experiments in capability preservation, refusal ablation, and high-signal synthetic data.
Current Focus
- Ornstein – multimodal and MoE reasoning models (Qwen/Gemma-derived) optimized for local deployment and agentic workflows
- CHIMERA – reasoning-oriented models and datasets emphasizing self-correction, chain-of-thought supervision, and agent traces
- SABER / RYS – post-training experiments around capability preservation, refusal ablation, and model behavior editing
- Acta / Talos – curated agentic tool-use and coding-assistant traces for SFT and evaluation
- Local inference – GGUF, quantized, and deployment-friendly builds for llama.cpp and MLX workflows
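The local-inference workflow above can be sketched with standard Hugging Face and llama.cpp tooling. This is an illustrative sequence, not an official quickstart: the quant pattern and downloaded filename are assumptions, so check the repository's file listing for the actual artifacts.

```shell
# Download a quantized Ornstein build from the Hub.
# The "*Q4_K_M*" pattern and resulting filename are assumptions;
# inspect the repo's files for the quant you actually want.
huggingface-cli download GestaltLabs/Ornstein-3.6-27B-GGUF \
  --include "*Q4_K_M*" --local-dir ./models

# Run it locally with llama.cpp's CLI (adjust the model path to
# whatever file the download produced).
llama-cli -m ./models/Ornstein-3.6-27B-Q4_K_M.gguf \
  -p "Summarize chain-of-thought supervision in two sentences." \
  -n 256
```

The same GGUF file also works with llama.cpp's server mode or other llama.cpp-based frontends.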
Featured Model Lines
Ornstein
Multimodal reasoning line across 27B–35B scales with strong local deployment characteristics.
- GestaltLabs/Ornstein-3.6-27B
- GestaltLabs/Ornstein-3.6-27B-GGUF
- DJLougen/Ornstein3.6-35B-A3B-GGUF
- DJLougen/Ornstein-27B-v2-GGUF
CHIMERA
Coming soon – reasoning-focused models with explicit self-correction and agent trace supervision.
SABER / RYS
Experimental post-training branches studying refusal boundaries and capability-preserving edits.
Featured Datasets
- Ornstein Curated 100K – curriculum-sorted multi-domain reasoning data
- Hermes Agent Traces Filtered – quality-filtered agent reasoning traces
- Harmonic Reasoning v1 – compact synthetic reasoning data for math, code, and self-correction
- WittgenSite – prompt consistency benchmark for AI coding agents
- Acta – curated agentic tool-use conversations
- Talos Scenarios – agentic task scenarios for synthetic trace generation
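The "quality-filtered" datasets above imply some trace-level filtering step. A minimal sketch of what such a filter might look like is below; the turn-based schema (`role`/`content` dicts, a `tool` role) and the thresholds are hypothetical stand-ins, not the actual Acta/Talos format or pipeline.

```python
# Sketch of a quality filter for agent reasoning traces.
# Schema and thresholds are illustrative assumptions, not the
# real GestaltLabs curation criteria.

def is_high_signal(trace, min_turns=2, require_tool_call=True):
    """Keep traces with enough turns, a tool call, and non-trivial assistant text."""
    turns = trace.get("turns", [])
    if len(turns) < min_turns:
        return False
    if require_tool_call and not any(t.get("role") == "tool" for t in turns):
        return False
    # Drop traces whose assistant turns are empty or trivially short.
    assistant = [t for t in turns if t.get("role") == "assistant"]
    return all(len(t.get("content", "")) >= 20 for t in assistant)

traces = [
    {"turns": [
        {"role": "user", "content": "List the files in /tmp"},
        {"role": "assistant", "content": "I will call the filesystem tool to list the directory."},
        {"role": "tool", "content": "a.txt b.txt"},
    ]},
    {"turns": [{"role": "user", "content": "hi"}]},  # too short, no tool call
]
kept = [t for t in traces if is_high_signal(t)]  # keeps only the first trace
```

Real pipelines would add model-based scoring and deduplication on top of structural checks like these.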
Principles
- Ship open artifacts that people can inspect, run, and adapt without restrictions
- Pair every model release with practical local inference formats (GGUF, MLX, etc.)
- Treat datasets as first-class research objects, not just training fuel
- Explore behavior editing (refusal ablation, layer surgery) while preserving useful capabilities
- Build for researchers, tinkerers, and builders who want systems they can actually run locally
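The behavior-editing principle above (refusal ablation, layer surgery) can be illustrated with the published directional-ablation idea: orthogonalize a layer's weights against a single direction so the layer can no longer write along it. The toy weights, dimensions, and "refusal direction" below are assumptions for illustration, not the lab's actual editing pipeline.

```python
import numpy as np

# Directional ablation sketch: given a unit direction d, replace a
# layer's output-projection weights W with W' = (I - d d^T) W, so the
# component of the layer's output along d is removed.
rng = np.random.default_rng(0)
d_model = 8
W = rng.normal(size=(d_model, d_model))   # toy output-projection weights
d = rng.normal(size=d_model)
d /= np.linalg.norm(d)                    # unit "refusal" direction (toy)

W_ablated = W - np.outer(d, d) @ W        # project d out of the output

x = rng.normal(size=d_model)              # any input activation
y = W_ablated @ x                         # output has ~zero component along d
```

Since `d` is unit-norm, `d.T @ W_ablated = d.T @ W - (d.T @ d)(d.T @ W) = 0`, so every output is orthogonal to the ablated direction; the capability-preservation question is whether the rest of the column space still carries the behaviors you want to keep.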
Contact
Open a discussion on any repository or reach out via DJLougen.