star-173 3 hours ago

Repo: https://github.com/STAR-173/LLMSession-Docker

I built this because I was burning through API credits just to test simple prompt chains and agent logic. I wanted a way to develop against the free web tiers of ChatGPT, Claude, and Gemini, but through a standard programmatic interface.

How it works:

1. It spins up a Docker container with Xvfb and a headless browser.

2. It uses your Google credentials to handle SSO login.

3. It exposes a standardized REST endpoint (`POST /generate`) at `localhost:8080`.

4. It persists the session state in a Docker volume so it doesn't have to log in again on every request.
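For anyone curious what a call looks like, here's a minimal client sketch. Only the endpoint (`POST /generate` on `localhost:8080`) comes from the steps above; the request and response field names are my assumptions, so check the repo's README for the real schema.

```python
import json
import urllib.request

# Hypothetical request body -- the field names here are guesses,
# not the repo's actual schema.
payload = {
    "provider": "claude",             # which web tier to route to
    "prompt": "Summarize this text",  # the message to send
}

def generate(payload, base_url="http://localhost:8080"):
    """POST a JSON payload to the container's /generate endpoint
    and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With the container up, `generate(payload)` should return whatever JSON the service sends back; since the session lives in a Docker volume, repeated calls shouldn't trigger a fresh login.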

Why: This allows you to prototype agents or test "reasoning" models (like Gemini Advanced) via code without paying per-token fees during the dev phase.

Disclaimer: This is obviously a grey area regarding ToS. It's designed strictly for local development and prototyping. Once you need reliability or production throughput, you should switch to the official paid APIs.

I'd love feedback on the browser queue logic if anyone gives it a spin.