AbstractPhila

AbstractPhil

AI & ML interests

datasets, research papers, experimentation, vision, classification, text encoders, tokenization, llms, diffusion, distillation, and more.

Recent Activity

replied to their post about 14 hours ago
GLIP - Geometric Linear Interpolative Patchwork, aka geolip. https://github.com/AbstractEyes/glip-autoencoder

This is the repo that will contain the next experimental stage, built entirely on the prior research and the structural boundaries that research established. It'll be a little rigid while I get Claude set up.

To train these layered topological response patchworks directly, you must install and use the geovocab2, geofractal, and wide_compiler repos: wide_compiler provides the high-speed wide_linear efficiency for ensemble processing, geovocab2 provides the factory structure with multiple formulas (including highly efficient designs meant for kernel compilation), and geofractal provides a series of reusable utilities, including some of the more complex losses and the difficult-to-tune gate structures surrounding them. Many of the underlying formulas are outlined here: https://huggingface.co/AbstractPhil/geometric-experiment-history/blob/main/FORMULAS.md

Using or training the pretrained (or untrained) geolip patchwork will be as simple as loading the model in PyTorch; depending on the task, it will not require the geolip package, numpy, or even pytorch as external dependencies. It will come packaged with recommended losses, but I encourage experimentation because I simply cannot cover all spectrums. Experiments show you can train the patchwork directly with task losses and it retains some useful cohesion, but without the correct losses it loses all identity, making it difficult to task-orient the geometric behavior down the chain.

More details to come as development progresses. The system is coming together, and a usable state of the autoencoder should be ready within a couple of weeks. The entire system is built for convenience and reusability, so the structure will be built similarly to autoencoder systems that currently exist, with a few tweaks here and there for important elements, so the interface will be familiar to those who use it.
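A minimal sketch of what "load it in PyTorch and train with a task loss" could look like. The class name `GeoLIPPatchwork` and its internals are hypothetical stand-ins (the real geolip API is not yet published); only the usage pattern follows the post:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the geolip patchwork autoencoder.
# The real class, checkpoint names, and recommended losses are not
# yet released; this only illustrates the intended usage pattern.
class GeoLIPPatchwork(nn.Module):
    def __init__(self, dim: int = 256, bottleneck: int = 64):
        super().__init__()
        self.encoder = nn.Linear(dim, bottleneck)
        self.decoder = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = GeoLIPPatchwork(dim=256)
x = torch.randn(8, 256)
recon = model(x)

# Training directly with a plain task loss: the post notes this works
# and retains some cohesion, but erodes the geometric identity unless
# the recommended losses are used alongside it.
loss = nn.functional.mse_loss(recon, x)
loss.backward()
```

The point is only that no geolip-specific dependency is needed at inference or fine-tuning time; everything runs through a standard `nn.Module` interface.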
reacted to Janady07's post with 👀 about 24 hours ago
MEGAMIND currently functions as a large-scale knowledge retrieval substrate, not a generative reasoning engine. When given difficult questions, it searches ~14.7M patterns, activates neurons via wave scoring, retrieves top-k chunks, and concatenates them with light synthesis. It surfaces relevant research across transformers, coherence theory, and neural QFT, but it does not truly synthesize. Its effective computation is associative recall: outputs are selected from memory rather than produced through internal transformation.

A reasoning system must evolve internal state before emitting an answer:

dx/dt = F(x, t)

Without state evolution, responses remain recombinations. The Hamiltonian is measured but not used to guide cognition. True reasoning requires optimization across trajectories:

H = T + V

Energy must shape evolution, not remain a passive metric.

Criticality regulation is also missing. Biological systems maintain coherence near a critical branching ratio:

dσ/dt = α(σ_c − σ)

Without push–pull stabilization, activity fragments or saturates. Research suggests roughly 60 effective connections per neuron are needed for coherent oscillation; below that, the system behaves as isolated retrieval islands.

Current metrics show partial integration: Φ < 1 and entropy remains elevated. The system integrates information but does not dynamically transform it. To move from retrieval to reasoning, the architecture needs an internal multi-step simulation loop, energy minimization across trajectories, enforced coherence thresholds, and higher-order interactions beyond pairwise attention. The required shift is architectural, not just scaling: answers must emerge from internal dynamical evolution rather than direct memory selection.
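The push–pull criticality relaxation dσ/dt = α(σ_c − σ) can be sketched with a simple Euler integration. The values of `alpha`, `sigma_c`, and the step parameters below are illustrative choices, not values from the MEGAMIND system; the sketch only shows that the branching ratio relaxes to the critical value from either side:

```python
# Euler integration of the push-pull criticality controller
# d(sigma)/dt = alpha * (sigma_c - sigma): the branching ratio sigma
# relaxes toward the critical value sigma_c from any starting point.
def relax_to_criticality(sigma0, sigma_c=1.0, alpha=0.5, dt=0.1, steps=200):
    sigma = sigma0
    trajectory = [sigma]
    for _ in range(steps):
        sigma += dt * alpha * (sigma_c - sigma)  # one Euler step
        trajectory.append(sigma)
    return trajectory

# Subcritical start (activity would otherwise die out) and
# supercritical start (activity would otherwise saturate):
sub = relax_to_criticality(0.2)
sup = relax_to_criticality(1.8)
```

Both trajectories converge to σ_c, which is the "push–pull stabilization" the post argues is absent: without such a feedback term, σ drifts and activity either fragments or saturates.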
replied to Janady07's post 1 day ago

Organizations

DeepGHS · Blog-explorers · BangumiBase · Abstract Powered Research