A browsable dataset of ideas, predictions, frameworks, and essays.
Filter by tags or browse chronologically.
Collect all unique functions across GitHub into a code-graph. Focus AI training and framework development on the functions that actually run the world each day.
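A minimal Python sketch of what one node and the dedup index in such a code-graph could look like; `FunctionNode`, `CodeGraph`, and every field name here are illustrative assumptions, not an existing schema.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FunctionNode:
    """One unique function harvested from a public repo."""
    repo: str            # e.g. "owner/name"
    path: str            # file path inside the repo
    name: str            # function identifier
    signature: str       # normalized signature used for deduplication
    calls: frozenset[str] = field(default_factory=frozenset)  # edges: functions this one invokes

class CodeGraph:
    """Deduplicated index of functions plus their call edges."""
    def __init__(self) -> None:
        self.nodes: dict[str, FunctionNode] = {}

    def add(self, node: FunctionNode) -> None:
        # Key on the normalized signature so identical functions across repos collapse into one node.
        self.nodes.setdefault(node.signature, node)

    def most_depended_on(self, top_n: int = 10) -> list[str]:
        # Count inbound call edges to surface the functions that actually run the world each day.
        inbound = Counter(callee for n in self.nodes.values() for callee in n.calls)
        return [name for name, _ in inbound.most_common(top_n)]
```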
The complete inventory of ideas organized by cognitive tier. From TIER_0 orchestrators down to TIER_6 utilities. Systems that build systems that build products.
A Gödel-Darwin machine. Systems that build systems. Self-improving, evolutionary, perpetual. The type of machine that builds all other machines.
Every GitHub repo is a world state. If we structure repos for AI comprehension, we unlock multi-agent collaboration across the entire open source ecosystem.
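One way to make "repo as world state" concrete is a machine-readable manifest at the repo root that an agent reads before touching anything. This is a hedged sketch only; the `agent_manifest.json` filename and every field in it are assumptions, not an existing convention.

```python
import json
from pathlib import Path

# Hypothetical "world state" a repo could expose at its root so any agent
# can orient itself without reading the whole codebase first.
EXAMPLE_MANIFEST = {
    "purpose": "One-sentence statement of what this repo does",
    "entry_points": ["src/main.py"],           # where execution starts
    "interfaces": ["docs/api.md"],             # contracts other agents can rely on
    "state": {"tests_passing": True, "open_tasks": []},
}

def load_world_state(repo_root: str) -> dict:
    """Read the manifest if the repo exposes one; fall back to an empty state."""
    manifest = Path(repo_root) / "agent_manifest.json"   # filename is an assumption
    if manifest.exists():
        return json.loads(manifest.read_text())
    return {}
```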
Write out all ideas at scale. Catalog them into structured books. Have AI rewrite the books every day until we nail it.
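A loose sketch of that daily rewrite loop; `rewrite_with_ai` is a hypothetical placeholder for whatever model call does the rewriting, since no concrete pipeline is specified here.

```python
from pathlib import Path

def rewrite_with_ai(text: str) -> str:
    """Placeholder for the model call that rewrites a book; assumption, not a real API."""
    raise NotImplementedError

def daily_rewrite_pass(books_dir: str = "books") -> None:
    """One pass of the loop: feed every structured book back through the model and save the result."""
    for book in sorted(Path(books_dir).glob("*.md")):
        draft = book.read_text()
        book.write_text(rewrite_with_ai(draft))  # overwrite; repeat daily until we nail it

# Scheduling (cron, CI, an agent runtime) is deliberately left out: the idea is the loop, not the trigger.
```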
The singularity of personal productivity occurs when AI implements ideas faster than you can generate them.
The mono-repo is not just code storage. It is a cognitive amplifier designed to reach the singularity of personal productivity.
Docs: user_preference_framework/vision.md
Why build an app when you can just have the data where you already work? Cursor IDE is not a code editor - it's a shared operating environment for Human and Artificial Intelligence.
Wrote 'CURSOR_AS_AGENT_RUNTIME.md' analysis.
The current Transformer architecture requires dense matrix multiplications across all parameters for every token. This is computationally insane. Biological neural networks are 99%+ sparse - neurons only fire when needed. Research from Numenta (Hierarchical Temporal Memory), Liquid Neural Networks (MIT), and mixture-of-experts models (like GPT-4's rumored architecture) all point in the same direction: sparse activation patterns that route computation dynamically. The potential efficiency gains are 10-100x. The question isn't if, but when. Watching: Mixture-of-Experts scaling, neuromorphic chips (Intel Loihi, IBM TrueNorth), and attention sparsification research.
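To make the sparse-activation point concrete, here is a minimal top-k mixture-of-experts routing sketch in NumPy. It shows the general pattern (score every expert, run only k of them) rather than any specific production architecture; every name and number in it is illustrative.

```python
import numpy as np

def top_k_moe(x, gate_w, experts, k=2):
    """Route a token through only k of the experts (sparse activation).

    x:        (d,) token representation
    gate_w:   (d, n_experts) gating weights
    experts:  list of callables, each mapping (d,) -> (d,)
    """
    logits = x @ gate_w                              # score every expert
    top = np.argsort(logits)[-k:]                    # keep only the k highest-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                         # softmax over the selected experts only
    # Only k expert networks run; the rest of the parameters stay idle for this token.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 4 experts, only 2 fire per token.
rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
y = top_k_moe(rng.normal(size=d), gate_w, experts, k=2)
```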
The foundational thesis of AIA Limited. Traditional software is a tool - you buy it, configure it, use it. AI Agents are different: they have ongoing operational costs (tokens, compute), they improve over time (fine-tuning, prompt refinement), and they deliver measurable value per task. This makes them economically equivalent to employees. A business should evaluate an AI Agent the same way it evaluates a hire: What's the annual cost? What value does it produce? What's the ROI? At $15k/year in API costs, an Agent that automates $150k worth of human labor is a 10x return. Companies will have 'Agent headcounts' alongside human headcounts. AIA is already operating this model: AI Employees with defined roles, costs, and revenue targets. Proof: aia.works is live, revenue-generating, and built entirely on this thesis.
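The hiring comparison reduces to one line of arithmetic; a tiny sketch using the numbers from the card above (the function and variable names are illustrative):

```python
def agent_roi(annual_api_cost: float, annual_value_delivered: float) -> float:
    """Value produced per dollar spent - the same multiple you would use to judge a hire."""
    return annual_value_delivered / annual_api_cost

# Numbers from the thesis: $15k/year in API costs automating $150k of human labor.
print(agent_roi(15_000, 150_000))   # -> 10.0, a 10x return
```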