Ask HN: How to sanity check an ambitious autocoder for enterprise systems?

1 point by tjmills111 7 hours ago

My brother has been building a novel autocoder for over a year.

We can’t find anyone taking this approach.

I’m very aware this is lofty, but the demo is almost done and should speak for itself.

My concern is that while he’s brilliant, he’s inexperienced.

He’s built it in isolation, it’s vibe-coded, and I don’t want us to miss obvious issues that are cheap to fix now.

I want to hire a consultant. Is it reasonable to expect much from a short external review for something like this?

I have no idea about cost, time needed, where to find someone, or how to vet them.

The Problem: Autocoders optimize for local edits and keep architecture and contracts implicit, so APIs drift, schemas and configs fall out of sync, and refactors break invariants. Without an enforceable blueprint (explicit ports and tiered validation), code generation stays nondeterministic and failures surface at deploy time, not during design.
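To make the failure mode concrete, here is a hypothetical Python example (all names invented for illustration): an autocoder’s local edit renames a field in one service, nothing enforces the cross-service contract, and the caller only breaks in production.

    # Hypothetical drift example: the contract between services is implicit,
    # so a local edit in one place breaks a remote caller at runtime.

    # Service A used to return {"user_id": ..., "email": ...}. After an
    # autocoder's "local edit", the field is silently renamed:
    def get_user(uid: int) -> dict:
        return {"id": uid, "email": "a@example.com"}  # "user_id" is gone

    # Service B still assumes the old shape. The type checker sees only
    # `dict`, so nothing flags the mismatch before deploy:
    def notify(uid: int) -> None:
        user = get_user(uid)
        print(f"notifying {user['user_id']}")  # KeyError, found in production

An explicit shared model or a generated typed client turns this into a design-time error instead.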

A Solution (illustrative sketches below):

• Blueprint-first workflow: author systems as YAML blueprints with typed ports

• Generator: emits small Python services with FastAPI endpoints and adapters

• Runtime: async harness (anyio) handles retries, rate limits, logging, metrics

• Validation: strict schemas and layered checks before anything runs

• Deploy: generator produces Docker + Helm for K8s
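To make "YAML blueprints with typed ports" concrete, here is a minimal sketch of what the first validation tier could look like. The schema (Port, Service, Blueprint, and all the YAML keys) is my own guess for illustration, not his actual format:

    # Minimal sketch (invented schema, not the real tool): parse a YAML
    # blueprint into strict typed models before any code generation runs.
    # Requires: pip install pydantic pyyaml
    from typing import Literal

    import yaml
    from pydantic import BaseModel, ConfigDict

    class Port(BaseModel):
        model_config = ConfigDict(extra="forbid")  # strict: unknown keys fail
        name: str
        direction: Literal["in", "out"]
        schema_ref: str  # points at a shared message schema

    class Service(BaseModel):
        model_config = ConfigDict(extra="forbid")
        name: str
        ports: list[Port]

    class Blueprint(BaseModel):
        model_config = ConfigDict(extra="forbid")
        version: int
        services: list[Service]

    BLUEPRINT_YAML = """
    version: 1
    services:
      - name: users
        ports:
          - {name: get_user, direction: out, schema_ref: User}
      - name: notifier
        ports:
          - {name: get_user, direction: in, schema_ref: User}
    """

    # Tier 1: structural validation fails loudly at design time, not at
    # deploy. A later tier could check that every "in" port has a matching
    # "out" port with the same schema_ref.
    bp = Blueprint.model_validate(yaml.safe_load(BLUEPRINT_YAML))
    print([s.name for s in bp.services])  # ['users', 'notifier']

And for the runtime bullet, a rough sketch of an anyio-based retry wrapper; again, the names are illustrative, not the project’s real API:

    # Rough sketch of a retry harness on anyio (illustrative names).
    # Requires: pip install anyio
    import anyio

    async def with_retries(fn, *, attempts: int = 3, backoff: float = 0.5):
        """Run an async callable, retrying with linear backoff between tries."""
        for i in range(attempts):
            try:
                return await fn()
            except Exception:
                if i == attempts - 1:
                    raise  # out of attempts: surface the failure
                await anyio.sleep(backoff * (i + 1))

    async def main():
        async def flaky():
            raise RuntimeError("upstream 503")

        try:
            await with_retries(flaky)
        except RuntimeError as exc:
            print("gave up:", exc)

    anyio.run(main)

Even if the real schemas differ, a reviewer should be able to exercise each layer (blueprint validation, generated service, runtime harness) in isolation.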

jaen 4 hours ago

The solution to LLM slop in general is almost certainly nothing like what is proposed here; that's just buzzword bingo in the Python ecosystem (and it reads like an AI hallucination).

What you (and most vibe coders) are missing is just Good Old Fashioned verified software engineering: strong contracts (via better static types, contract libraries, linters, compilers, automated review, and soft rule enforcement via e.g. another LLM), abstractions that reduce redundancy and increase cohesion, meaningful tests (and meta-tests and metrics ensuring test quality), and so on.
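Concretely, a generic sketch of what that looks like in plain Python (the User model and helper are invented for the example; hypothesis and typing are real, standard tools):

    # "Strong contracts + meaningful tests": stdlib typing plus hypothesis.
    # Requires: pip install hypothesis
    from dataclasses import dataclass

    from hypothesis import given, strategies as st

    @dataclass(frozen=True)
    class User:
        user_id: int
        email: str

        def __post_init__(self):
            # Runtime contract: invariants hold no matter who constructs
            # the object, a human or an LLM.
            if self.user_id <= 0:
                raise ValueError("user_id must be positive")
            if "@" not in self.email:
                raise ValueError("email must contain '@'")

    def rename_local_part(u: User, new_local: str) -> User:
        """A refactor can't silently drop a field: mypy flags missing args."""
        domain = u.email.split("@", 1)[1]
        return User(user_id=u.user_id, email=f"{new_local}@{domain}")

    # Property-based test: the invariant survives arbitrary inputs, not just
    # the one or two examples someone happened to write down. Run with pytest.
    @given(st.integers(min_value=1), st.text(min_size=1))
    def test_rename_keeps_contract(uid, local):
        u = rename_local_part(User(uid, "old@example.com"), local)
        assert u.user_id == uid and "@" in u.email

None of this needs the blueprint machinery to start paying off.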