Streaming millions of geological data points to the browser while keeping it interactive is a real performance architecture problem — the kind where how you structure your data pipeline matters as much as the rendering layer.
Relevant: Chatbot Widget SaaS — Next.js 15 + TypeScript + Supabase + pgvector + Shadow DOM widget, real-time WebSocket streaming, full auth + Stripe billing. Built the whole stack as first engineer. GitHub: github.com/ChunkyTortoise/chatbot-widget. Also: AI Dashboard (live at ai-dashboard-rust.vercel.app) — Next.js 15, shadcn/ui, real-time Recharts, Anthropic streaming.
Stack match: TypeScript, Next.js, Supabase, Python, Vercel — AI-native workflow (Claude + Cursor daily). Available within 1 week, 35-40 hrs/week.
Building reliable AI decisioning for fraud and compliance is a different challenge from most LLM work — the output needs to be auditable and the false-positive cost is real money.
Relevant: Jorge Real Estate AI Bots — production FastAPI + Redis + Claude pipeline, 1,700+ tests, GHL webhook integration with rate limiting, HMAC signature verification, and per-contact Redis locks to prevent race conditions. The same reliability patterns apply to risk decisioning. GitHub: github.com/ChunkyTortoise/jorge_real_estate_bots
Stack match: Python, FastAPI, Claude API, Redis, LLM agent systems — 15+ AI projects in production. Available within 1 week, 35-40 hrs/week.
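The per-contact Redis lock mentioned above can be sketched as follows. This is an illustrative minimal version, not the Jorge project's actual code: an in-memory dict stands in for Redis `SET NX` semantics, and all names (`acquire`, `release`, the `lock:` key prefix) are hypothetical.

```python
# Sketch of a per-contact lock to serialize webhook handling per contact,
# so two concurrent deliveries for the same contact can't race.
# A plain dict stands in for Redis SET NX / compare-and-delete.
import uuid
from typing import Optional

store = {}  # stand-in for Redis: key -> owner token

def acquire(contact_id: str) -> Optional[str]:
    """Try to take the lock (like SET lock:{id} token NX).

    Returns an owner token on success, None if the lock is already held.
    """
    key = f"lock:{contact_id}"
    if key in store:
        return None
    token = uuid.uuid4().hex
    store[key] = token
    return token

def release(contact_id: str, token: str) -> bool:
    """Release only if we still own the lock (compare-and-delete)."""
    key = f"lock:{contact_id}"
    if store.get(key) == token:
        del store[key]
        return True
    return False
```

With a real Redis client you would also pass an expiry (`EX`/`PX`) on acquire so a crashed worker can't hold the lock forever; the compare-and-delete release prevents one worker from freeing a lock another worker has since re-acquired.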
Persistent memory and context-aware agents are the hard part of personal AI — retrieval has to be fast enough to feel invisible and selective enough not to be noise.
Relevant: GraphRAG demo — entity extraction (spaCy + Claude), NetworkX DiGraph, hybrid BM25+RRF retrieval, fact-checking pipeline. Purpose-built for context that needs structure, not just semantic similarity. GitHub: github.com/ChunkyTortoise/graphrag-demo. Also: MCP Server Toolkit (PyPI published, 233 tests) — memory and context tooling for AI systems.
Stack match: Python, PostgreSQL, RAG, LLM APIs (Anthropic), pgvector — available within 1 week, 35-40 hrs/week.
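The hybrid BM25+RRF retrieval named above combines ranked lists with Reciprocal Rank Fusion. A minimal sketch of the standard RRF formula (score(d) = Σ 1/(k + rank)), with made-up document IDs; the function name and inputs are illustrative, not the GraphRAG demo's actual API:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: sum 1/(k + rank) for each list a doc appears in.

    `rankings` is a list of ranked doc-ID lists (e.g. one from BM25, one from
    a vector index); k=60 is the conventional damping constant.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits  = ["d3", "d1", "d2"]   # lexical ranking
vector_hits = ["d1", "d4", "d3"]  # semantic ranking
print(rrf_fuse([bm25_hits, vector_hits]))  # → ['d1', 'd3', 'd4', 'd2']
```

The appeal of RRF is that it needs no score normalization across retrievers: only ranks matter, so a BM25 list and a cosine-similarity list fuse cleanly.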
Autonomous AI agents for forensics are a hard problem — orchestrating 150+ forensic tools while meeting CJIS compliance requirements is exactly the kind of infrastructure challenge most engineers haven't touched.
Relevant: AI Workflow API — FastAPI + ARQ + Redis workflow engine, YAML-driven, 5 node types (trigger/LLM/condition/HTTP/notify), SSE streaming, 148 tests. Similar distributed orchestration pattern to what you're building. GitHub: github.com/ChunkyTortoise/ai-workflow-api
Stack match: Python, TypeScript, Next.js, Docker, Redis — I use Claude as part of my development workflow daily and ship AI-native. Available within 1 week.
Turning Reddit noise into structured product signals is exactly the kind of pipeline problem where RAG architecture matters — naive semantic search on raw posts misses context that entity-aware retrieval catches.
Relevant: GraphRAG demo — hybrid BM25+RRF retrieval on unstructured text, NetworkX entity graph, Claude for extraction, pgvector for embeddings. Built specifically to handle noisy real-world text. GitHub: github.com/ChunkyTortoise/graphrag-demo. Also: DocExtract AI (production, live) — async document pipeline, pgvector + Claude + ARQ worker, 234 tests. docextract-api.onrender.com
Stack match: Python, PostgreSQL, pgvector, RAG, LLM APIs (Anthropic/OpenAI) — every item in your stack is something I ship with. Available within 1 week.
Python/FastAPI agent work is my core stack: I've built a production multi-agent orchestration layer (LangGraph ContentPipeline, 33 tests), a YAML-driven AI Workflow API with typed context pipelines and condition branching (FastAPI + ARQ, 148 tests), and a GraphRAG system with hybrid BM25+graph-fusion retrieval (63 tests). For eval methodology: the Jorge AI bot ships with production_e2e_tests.py (806 lines, B8 invariant checks on every agent response). I'm US-based but comfortable with EU remote overlap.
I've designed and shipped production autonomous agent systems across voice and text channels: TechNova Voice Bot (FastAPI + WebSocket + Deepgram STT/TTS + Claude, real-time VAD pipeline, 26 tests, demo mode for offline eval) and Jorge AI (multi-turn SMS qualification agent, 1,753 tests, live on Render, with rate-limit enforcement, dedup, human-handoff logic, and end-to-end runbooks). Also built a LangGraph multi-agent pipeline (Research→Draft→Review→Publish, 33 tests) and a GraphRAG system with hybrid retrieval. 19 AI/ML certs including DeepLearning.AI specializations in LLMs and MLOps. Strong on eval methodology and edge-case hardening for real-world agent deployment.
The "generative AI + deterministic code" framing is exactly the pattern I've been productizing: I built a YAML-driven AI Workflow API (FastAPI + ARQ + Claude + SSE streaming, 148 tests) where each LLM node has explicit typed schemas and condition branches — AI creativity gated by deterministic routing logic. Also shipped a GHL Multi-Vertical Kit (87 tests, 3 production verticals via config swap) and a GraphRAG demo with hybrid BM25+RRF retrieval. 19 AI/ML certifications (Google, DeepLearning.AI, IBM, Microsoft). US resident, no sponsorship needed.
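The "AI creativity gated by deterministic routing" pattern can be sketched like this. This is a hypothetical illustration, not the Workflow API's actual code: field names, thresholds, and branch labels are invented for the example.

```python
# Sketch: LLM output is validated against an explicit typed schema first,
# then a condition node branches deterministically on the validated fields.
def route(node_output: dict) -> str:
    # Deterministic gate: malformed LLM output never reaches a branch.
    required = {"intent": str, "confidence": float}
    for field, ftype in required.items():
        if not isinstance(node_output.get(field), ftype):
            return "error"
    # Condition branches on typed, validated fields only.
    if node_output["confidence"] < 0.7:
        return "human_review"
    return "escalate" if node_output["intent"] == "complaint" else "auto_reply"
```

The point of the pattern: the LLM is free to generate, but control flow depends only on schema-checked fields, so routing stays predictable and testable.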
Sending this to jobs@drswarm.com per instructions, but flagging here too. This is an exact stack match: I've shipped Django/Python + Celery/Redis + Postgres + LLM API pipelines in production — most directly the Jorge real-estate AI bot (auditable human-in-the-loop handoff flow, rate-limited Redis queues, 1,753 tests) and an AI Workflow API with YAML-driven node pipelines, ARQ workers, and SSE streaming. Strong on the 0→1, own-it-end-to-end pattern. PT/US Pacific timezone. Contract-to-FT works well.
Hi — this stack is my daily driver. I built Jorge, a live FastAPI + Redis + Claude AI real estate qualification bot (1,753 passing tests, deployed on Render) that handles inbound SMS intake, multi-turn qualification, calendar booking, and human-handoff logic via GHL CRM. I also built DocExtract AI (FastAPI + ARQ workers + pgvector + Supabase, live at docextract-api.onrender.com) and an agentic workflow API with SSE streaming and YAML-driven pipelines. Your framing — autonomously eliminating admin burden across scheduling, intake, and interoperability — maps exactly to what I've shipped in production.
Cayman | caymanroden@gmail.com | github.com/ChunkyTortoise