# Examples overview

**Runnable examples**
The runnable examples live in the top-level gateway-examples/ project. They are intentionally small, opinionated, and close to real application patterns, so you can copy one integration style without digging through the whole gateway codebase first.
Source repository: github.com/lunargate-ai/gateway-examples
## All runnable examples

- `python-basic-poetry`
- `python-responses-tools-poetry`
- `python-auto-tiers-poetry`
- `python-ollama-embeddings-poetry`
- `node-express-moderate`
- `docker-compose-minimal`
- `streamlit-chat-docker-compose`
- `streamlit-rag-redissearch-docker-compose`
## Example families

- SDK-first examples: `python-basic-poetry`, `python-responses-tools-poetry`, `python-auto-tiers-poetry`, `python-ollama-embeddings-poetry`
- App/backend example: `node-express-moderate`
- Compose demos: `docker-compose-minimal`, `streamlit-chat-docker-compose`, `streamlit-rag-redissearch-docker-compose`
**Start smallest**

Use a one-file Python example with the normal OpenAI SDK.

**Start fastest**

Run the gateway plus a demo app together with one compose command.
Good rule of thumb:

- if you want the smallest possible SDK integration, start with `python-basic-poetry` (a minimal sketch of that pattern follows this list)
- if you want the smallest embeddings smoke test, start with `python-ollama-embeddings-poetry`
- if you want a minimal Responses API tool-calling loop, start with `python-responses-tools-poetry`
- if you want the fastest demo for a teammate, start with `docker-compose-minimal`
- if you want app-level streaming, look at `node-express-moderate` or `streamlit-chat-docker-compose`
- if you want local RAG with file upload and retrieval, go straight to `streamlit-rag-redissearch-docker-compose`
- if you want tiered routing, go straight to `python-auto-tiers-poetry`
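To make the first path concrete, here is a minimal sketch of the `python-basic-poetry` pattern: the stock OpenAI Python SDK pointed at the gateway through `base_url`, one request, one response. It assumes the shared defaults described later on this page (local gateway, client-side auth off); the model name is an assumption and should match whatever your gateway routes.

```python
from openai import OpenAI

# Assumptions: gateway on the default local address with client-side auth off
# (see "Shared assumptions across examples"); the model name is illustrative.
client = OpenAI(
    base_url="http://127.0.0.1:8080/v1",
    api_key="not-needed-if-gateway-auth-is-off",
)

response = client.chat.completions.create(
    model="lunargate/auto",  # assumption: replace with a model your gateway serves
    messages=[{"role": "user", "content": "Say hello through the gateway."}],
)
print(response.choices[0].message.content)
```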
## Quick paths

### Which example to start with
| Example | Runtime | Best for | What it teaches | Source code |
|---|---|---|---|---|
| `python-basic-poetry` | Python + Poetry | the smallest possible client integration | OpenAI Python SDK, `base_url`, one request, one response | GitHub |
| `python-responses-tools-poetry` | Python + Poetry | minimal Responses API plus local tool execution | `/v1/responses`, function-call loop, `function_call_output`, route/header verification (sketched after this table) | GitHub |
| `python-auto-tiers-poetry` | Python + Poetry | `lunargate/auto` and model tier selection | `model_selection`, tier headers, tool-aware routing on `chat.completions` | GitHub |
| `python-ollama-embeddings-poetry` | Python + Poetry | the smallest embeddings integration | OpenAI-compatible embeddings, local Ollama, `/v1/embeddings` smoke testing | GitHub |
| `node-express-moderate` | Node.js + Express | a backend service calling LunarGate for both JSON and streaming | SSE streaming, custom `X-LunarGate-*` headers, app API vs gateway API separation | GitHub |
| `docker-compose-minimal` | Docker Compose | the fastest local demo with both app and gateway | gateway + app in one compose file, smoke checks, remote-Docker-safe wrapper image | GitHub |
| `streamlit-chat-docker-compose` | Docker Compose | a basic chat UI beside the gateway | OpenAI-compatible chat UI, streaming responses, compose-local demo UX | GitHub |
| `streamlit-rag-redissearch-docker-compose` | Docker Compose | a local RAG demo with UI | file upload, embeddings, RedisSearch vector retrieval, local Ollama chat + embeddings | GitHub |
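As referenced in the `python-responses-tools-poetry` row, the core of that example is a `/v1/responses` function-call loop: the model emits a `function_call` item, the client runs the function locally, and the result goes back as a `function_call_output` item. A minimal sketch under the same shared assumptions; the `get_weather` tool and the model name are invented here purely for illustration:

```python
import json
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8080/v1",
    api_key="not-needed-if-gateway-auth-is-off",
)

# A toy tool definition; the real example defines its own tools.
tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

input_items = [{"role": "user", "content": "What's the weather in Berlin?"}]
response = client.responses.create(
    model="lunargate/auto",  # assumption: use a tool-capable model your gateway routes
    input=input_items,
    tools=tools,
)

# Execute any function calls locally and send the results back as
# function_call_output items tied to the original call_id.
input_items += response.output
for item in response.output:
    if item.type == "function_call" and item.name == "get_weather":
        args = json.loads(item.arguments)
        input_items.append({
            "type": "function_call_output",
            "call_id": item.call_id,
            "output": json.dumps({"city": args["city"], "forecast": "sunny"}),
        })

final = client.responses.create(model="lunargate/auto", input=input_items, tools=tools)
print(final.output_text)
```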
## Shared assumptions across examples
- Most examples assume the gateway is on `http://127.0.0.1:8080/v1`.
- Client-side auth is assumed to be off by default, so examples use `not-needed-if-gateway-auth-is-off` as the API key placeholder.
- Most examples include both `config-simple.yaml.example` and `config-observability.yaml.example`, so you can switch between local-only mode and request inspection in the LunarGate Dashboard on app.lunargate.ai.
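With those assumptions in place, the smallest embeddings smoke test (the pattern `python-ollama-embeddings-poetry` demonstrates against a local Ollama backend) looks roughly like this; the embedding model name is an assumption and must match a model your gateway actually exposes:

```python
from openai import OpenAI

# Assumptions: same gateway address and placeholder key as above; the model
# name is illustrative, not part of the example's documented defaults.
client = OpenAI(
    base_url="http://127.0.0.1:8080/v1",
    api_key="not-needed-if-gateway-auth-is-off",
)

resp = client.embeddings.create(
    model="nomic-embed-text",  # assumption: any Ollama-served embedding model
    input=["hello from the /v1/embeddings smoke test"],
)
print(len(resp.data[0].embedding))  # vector dimensionality as a quick sanity check
```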
## Observability-ready configs

Most examples also ship with a `config-observability.yaml.example` variant.

That variant enables:

- `data_sharing.enabled: true`
- `share_prompts: true`
- `share_responses: true`
- `remote_control: true`
To use those configs, create a gateway in the Gateways section of app.lunargate.ai, then export the API key it shows you as `LUNARGATE_GATEWAY_API_KEY`.

You do not need `LUNARGATE_GATEWAY_ID`. The gateway identifies itself to the LunarGate Dashboard on app.lunargate.ai with `LUNARGATE_GATEWAY_API_KEY`, and the backend resolves the internal gateway ID automatically.

You also do not need to set `LUNARGATE_BACKEND_URL` in these examples unless you intentionally want to override the gateway default.
## Recommended reading order

- Start with Python basic with Poetry if you want the smallest possible OpenAI SDK example.
- Move to Docker Compose minimal if you want a self-contained demo with the gateway included.
- Use Python Responses tools with Poetry if you want a minimal function-calling loop on `/v1/responses`.
- Use Python Ollama embeddings with Poetry if you want to validate the embeddings endpoint before building retrieval flows.
- Use Node Express with streaming, Streamlit chat in Docker Compose, or Streamlit RAG with RedisSearch when you need app-level UX patterns.
- Read the Python `lunargate/auto` demo before configuring autorouting in production.