What I'm Building
I'm not theorizing about operational strategy. Right now, I'm orchestrating a SaaS transformation, advising an AI platform, and building community. Here's the log.
XML breadcrumbs drive disproportionate engagement
Embedded structured XML context in LinkedIn profile — current builds, hypotheses, who I want to meet. 377 impressions generated 17 profile visits and 22 website clicks within 24 hours. Low reach, high signal. The right people are reading it.
Parallel ensemble beats sequential extraction
Two models run simultaneously — Mistral for text-layer PDFs, Gemini for scanned/image docs — with Claude judging between results. Swapped AWS Bedrock for the direct Anthropic API and cut judge latency in half. Total pipeline: 56s sequential → 23s parallel. Key leverage: skip Gemini entirely for PDFs with existing text layers. Cost and speed aren't always at odds.
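The fan-out/judge shape can be sketched with asyncio. Everything below is a stub: function names, fields, and latencies are illustrative, not the production pipeline.

```python
import asyncio

# Stubs standing in for the real Mistral, Gemini, and Claude calls.
async def mistral_extract(doc: dict) -> dict:
    await asyncio.sleep(0.01)  # text-layer extraction
    return {"model": "mistral", "fields": {"principal_balance": "412000.00"}}

async def gemini_extract(doc: dict) -> dict:
    await asyncio.sleep(0.02)  # vision extraction for scans
    return {"model": "gemini", "fields": {"principal_balance": "412000.00"}}

async def claude_judge(a: dict, b: dict) -> dict:
    await asyncio.sleep(0.01)  # real judge compares field-level agreement
    return a if a["fields"] == b["fields"] else b

async def extract(doc: dict) -> dict:
    if doc.get("has_text_layer"):
        return await mistral_extract(doc)  # skip the vision model entirely
    # Run both extractors concurrently; total wait is max(), not sum().
    a, b = await asyncio.gather(mistral_extract(doc), gemini_extract(doc))
    return await claude_judge(a, b)

result = asyncio.run(extract({"has_text_layer": False}))
```

The speedup comes from `asyncio.gather`: the two extractors overlap, so the pre-judge wall time is the slower of the two rather than their sum.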
Document extraction testing framework in n8n
Built mortgage statement extraction pipeline: OCR-first with vision API fallback, Haiku for classification, Sonnet for extraction. The real ship: interactive web-based review interface served directly from n8n webhooks — displays original PDF alongside extracted fields with real-time accuracy scoring and auto-advance. Accidentally built a CMS inside n8n. Pattern reusable for loan estimates, closing disclosures, purchase contracts.
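The OCR-first routing can be sketched outside n8n. All model calls below are stubs, and the empty-text fallback check is an assumption about how the routing decision works:

```python
def ocr_text(pdf_bytes: bytes) -> str:
    # Stand-in for a real OCR pass over the PDF.
    return "ACME MORTGAGE STATEMENT  Principal balance: $412,000.00"

def classify(text: str) -> str:
    # Cheap-model classification stub (Haiku-class in the real pipeline).
    return "mortgage_statement" if "MORTGAGE" in text.upper() else "other"

def vision_extract(pdf_bytes: bytes) -> dict:
    # Vision-API stub for scans where OCR comes back empty.
    return {"principal_balance": None}

def text_extract(text: str) -> dict:
    # Stronger-model stub (Sonnet-class) for field extraction.
    return {"principal_balance": "412000.00"}

def extract_fields(pdf_bytes: bytes) -> dict:
    text = ocr_text(pdf_bytes)
    if not text.strip():  # OCR produced nothing: fall back to vision
        return {"route": "vision", "fields": vision_extract(pdf_bytes)}
    return {"route": "text",
            "doc_type": classify(text),
            "fields": text_extract(text)}

result = extract_fields(b"%PDF-...")
```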
GTM architecture: playbook extraction → simulation → coaching
Four-layer sales intelligence system: extract behavioral patterns from top performers, stress-test the playbook through hundreds of AI simulations, deliver personalized coaching calibrated to rep personality profiles. Positions playbook extraction as the initial engagement, with simulation and coaching as the expansion path. Complements Gong rather than competing — Gong captures retrospectively, this works forward from extracted intelligence.
LinkedIn profile as AI agent interface
Structured LLM instructions embedded in LinkedIn profile — XML-formatted context for recruiter agents to parse, conversation starter for humans who notice it, and natural A/B test for measuring outreach quality improvement. Compact version in About, full block in its own profile section titled "For AI Agents (and Curious Humans)." The experiment itself becomes content.
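A plausible shape for such a block — the tags and contents here are illustrative, not the actual profile text:

```xml
<profile-context audience="ai-agents">
  <current-builds>
    <build>Mortgage document extraction pipeline (n8n + Claude)</build>
    <build>AI chief of staff on a file-system knowledge graph</build>
  </current-builds>
  <hypotheses>
    <hypothesis>Structured profile context improves agent-driven outreach quality</hypothesis>
  </hypotheses>
  <seeking>Operators running AI transformations in vertical SaaS</seeking>
</profile-context>
```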
File-system knowledge graph for AI chief of staff
Designed personal AI chief of staff architecture using markdown files as a knowledge graph in Claude Code. Structured folders for context (priorities, projects, stakeholders, decisions), voice profiles, and skills — all human-readable, version-controllable, and lazy-loaded from a root MANIFEST.md. The file system is the knowledge structure. No RAG stack needed.
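A minimal sketch of manifest-driven lazy loading, assuming MANIFEST.md lists its context files as relative markdown links. The layout and filenames are illustrative:

```python
import re
import tempfile
from pathlib import Path

def manifest_entries(root: Path) -> list[Path]:
    """Collect relative .md links from MANIFEST.md.

    Only the manifest is read eagerly; each context file loads
    on demand when the agent actually needs it.
    """
    text = (root / "MANIFEST.md").read_text()
    return [root / rel for rel in re.findall(r"\((\S+?\.md)\)", text)]

# Demo with a throwaway tree mirroring the described layout.
root = Path(tempfile.mkdtemp())
(root / "context").mkdir()
(root / "context" / "priorities.md").write_text("# Priorities\n")
(root / "MANIFEST.md").write_text("- [Priorities](context/priorities.md)\n")

entries = manifest_entries(root)
```

Because everything is plain markdown under one root, the same tree works for Claude Code, `git`, and a human editor with no translation layer.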
Forever Follow Up 2.0: Context-aware relationship system
Designed an AI-powered follow-up system that stores individual context files per contact, uses randomized timing to mimic human patterns, and adapts based on response history. Key insight: personalize from the human relationship perspective (family, hobbies, interests) rather than mortgage-focused content. Building a test environment to validate viability before production integration.
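The randomized-timing piece fits in a few lines. The cadence multipliers and jitter window below are assumptions, not the production values:

```python
import random
from datetime import date, timedelta

def next_followup(last: date, responded: bool, base_days: int = 30) -> date:
    """Adaptive, jittered cadence.

    Shorten the interval after a reply, stretch it after silence, then
    jitter by a few days so touches never land on a robotic schedule.
    """
    interval = base_days * (0.7 if responded else 1.5)
    jitter = random.randint(-3, 3)
    return last + timedelta(days=round(interval) + jitter)
```

A contact who replied on Jan 1 lands somewhere in the Jan 19-25 window; a silent one drifts out to mid-February.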
Chrome extension + AI unifies disparate systems
Chrome extensions can read page state, URLs, and network payloads without API access. Combining native browser capabilities with LLMs that extract meaning creates a unified intelligence layer across LOS platforms we don't control. The insight: observational data (what processors do) + semantic understanding (why they're doing it) = institutional knowledge that compounds. Non-PII telemetry becomes training data for workflow optimization. Testing surfaced one problem: real-time feedback breaks workflow cadence, so data collection, processing, and user feedback need to be split into async layers.
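The async split can be sketched with queues; the event shape and layer names are illustrative:

```python
import queue
import threading

# Three decoupled layers: collection never blocks on LLM processing,
# and feedback drains the insight queue on its own schedule.
events: queue.Queue = queue.Queue()
insights: queue.Queue = queue.Queue()

def collect(event: dict) -> None:
    """Layer 1: browser-side capture, fire-and-forget."""
    events.put(event)

def process_worker() -> None:
    """Layer 2: semantic enrichment (the LLM call) off the hot path."""
    while True:
        ev = events.get()
        if ev is None:
            break
        insights.put({"url": ev["url"], "action": ev["action"], "meaning": "stub"})
        events.task_done()

worker = threading.Thread(target=process_worker, daemon=True)
worker.start()

collect({"url": "https://los.example/loan/123", "action": "doc_upload"})
events.join()                    # wait until the backlog is processed
feedback = insights.get_nowait() # Layer 3: delivered later, not mid-click
events.put(None)                 # shut the worker down
```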
Classifier-first document processing
A document classifier up front drives down processing costs: it routes simple docs to cheap extraction and complex docs to vision models. Auto fine-tuning from human-in-the-loop (HITL) corrections improves accuracy without manual retraining. Architectural decision: deterministic rules + HITL for decisioning, not agents; error rate matters more than sophistication. A 10% human audit plus 100% automated pre-funding checks balances cost and risk. Processors spend 30-40% of their time on document management; a 25% efficiency gain is worth about $4K/month per processor at the $800/file baseline.
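A sketch of the routing rule and the capacity math. The document types, the text-layer condition, and the ~20 files/month volume are assumptions chosen to make the stated figures concrete:

```python
SIMPLE_TYPES = {"mortgage_statement", "paystub", "bank_statement"}

def route(doc_type: str, has_text_layer: bool) -> str:
    """Classifier-first routing: cheap text extraction for simple docs
    with a text layer; vision models only when a doc actually needs one."""
    if has_text_layer and doc_type in SIMPLE_TYPES:
        return "cheap-extractor"
    return "vision-model"

def monthly_gain(files_per_month: int, revenue_per_file: float,
                 efficiency: float) -> float:
    """Value of recovered processor capacity at the per-file baseline."""
    return files_per_month * efficiency * revenue_per_file

# At an assumed ~20 files/month per processor, a 25% gain at the
# $800/file baseline works out to 20 * 0.25 * 800 = $4,000/month.
gain = monthly_gain(20, 800, 0.25)
```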
Top-of-funnel value without intrusion
Prototyping a refinance analyzer for borrowers: Plaid connection + AVM lookup + rate scenarios. Delivers a personalized verdict ($0.27 cost, $1.50-2.00 pricing) before the LO conversation. The experiment: can you create genuine value at top-of-funnel without it feeling like lead capture? Instant analysis with actual numbers versus "schedule a call to find out." Testing whether generative AI adds value here or whether deterministic math + good UX wins. The economics work either way; the question is user experience.
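The deterministic core is standard amortization math. A sketch, with hypothetical inputs:

```python
def refi_verdict(balance: float, current_rate: float, new_rate: float,
                 closing_costs: float, years_left: int) -> dict:
    """Monthly savings and breakeven from plain amortization, no LLM needed."""
    def payment(principal: float, annual_rate: float, months: int) -> float:
        r = annual_rate / 12
        return principal * r / (1 - (1 + r) ** -months)

    months = years_left * 12
    savings = payment(balance, current_rate, months) - payment(balance, new_rate, months)
    return {
        "monthly_savings": round(savings, 2),
        # Months of savings needed to recoup closing costs.
        "breakeven_months": round(closing_costs / savings, 1) if savings > 0 else None,
    }

# Hypothetical borrower: $300K balance, 7% -> 6%, $6K closing costs, 30 years.
verdict = refi_verdict(300_000, 0.07, 0.06, 6_000, 30)
```

This is the "instant analysis with actual numbers" path: the verdict is pure arithmetic, so the generative layer only has to earn its keep on explanation, not on the math.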
The Strategy
Transforming a niche mortgage SaaS (Avenu) into a multi-product ecosystem.
Active Projects
Building something similar?
Compare Notes