Use the full local authoring environment, test with your own dependencies, and bring your own model service when needed.
Use Pro or Max when you want OOMOL to handle the online delivery layer, exposing the same validated function as an API, MCP tool, or automation task.
| Channel | Model | Input Price (Credits/M Tokens) | Cache Price (Credits/M Tokens) | Output Price (Credits/M Tokens) |
|---|---|---|---|---|
| No LLM pricing data is available right now. | | | | |
