PromptForge uses a local Ollama model to rewrite your prompt for clarity and precision before sending it to your chosen AI provider, so you get noticeably better responses.
Any Ollama model runs entirely on your machine: choose from Gemma, Llama, Mistral, Phi, and more. Your raw prompt is refined on-device and never sent anywhere.
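Under the hood, the refinement step can talk to Ollama's local REST API. A minimal sketch using only the standard library, assuming Ollama's default endpoint (the function names and rewrite instruction here are illustrative, not PromptForge's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

# Hypothetical rewrite instruction; the app ships its own wording.
REWRITE_INSTRUCTION = (
    "Rewrite the following prompt to be clearer, more specific, "
    "and better structured. Return only the rewritten prompt.\n\n"
)

def build_rewrite_request(model: str, raw_prompt: str) -> dict:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": REWRITE_INSTRUCTION + raw_prompt,
        "stream": False,  # one complete response instead of a token stream
    }

def refine_prompt(model: str, raw_prompt: str) -> str:
    """Send the raw prompt to the local Ollama server; return the rewrite."""
    payload = json.dumps(build_rewrite_request(model, raw_prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# refine_prompt("gemma2", "fix my code")  # requires a running Ollama server
```

Nothing leaves your machine: the request goes to `localhost:11434`, where the model runs.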
Send to OpenAI, Anthropic, Google Gemini, Groq, Mistral, xAI Grok, or OpenRouter — all in one interface.
Confident, technical, concise, creative, mentor — inject a response style into every prompt automatically.
See exactly what was sent: original prompt, optimised version, and final assembled prompt with tone — side by side.
Runs as a local web app in your browser. One setup script handles everything — Ollama, models, dependencies.
API keys stored only in a local JSON file on your machine. We never see them, never store them.
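Local key storage of this kind can be sketched in a few lines of Python (the filename and function names are assumptions for illustration, not the app's real layout):

```python
import json
from pathlib import Path
from typing import Optional

KEYS_FILE = Path("api_keys.json")  # hypothetical filename; lives only on disk

def save_key(provider: str, key: str) -> None:
    """Merge one provider's key into the local JSON file."""
    keys = json.loads(KEYS_FILE.read_text()) if KEYS_FILE.exists() else {}
    keys[provider] = key
    KEYS_FILE.write_text(json.dumps(keys, indent=2))

def load_key(provider: str) -> Optional[str]:
    """Read a key back; None if it has never been saved."""
    if not KEYS_FILE.exists():
        return None
    return json.loads(KEYS_FILE.read_text()).get(provider)
```

Because the file is plain JSON on your own machine, you can inspect or delete it at any time.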
Type your prompt as you normally would — no special syntax or formatting needed.
Your chosen local model rewrites the prompt to be clearer, more specific, and better structured.
Optionally inject a response style — confident, technical, concise, mentor, or seven others.
The refined prompt is sent to your chosen AI. View all three versions side by side.
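The steps above amount to a simple assembly pipeline. A minimal sketch of the style-injection stage, assuming the style names from the feature list (the prefix wording is a guess, not the app's actual text):

```python
from typing import Optional

# Hypothetical style prefixes; the app ships its own wording for each style.
STYLES = {
    "confident": "Respond with confidence and direct recommendations.",
    "technical": "Respond with precise, technical detail.",
    "concise": "Respond as briefly as possible.",
    "mentor": "Respond as a patient mentor, explaining your reasoning.",
}

def assemble_final(refined_prompt: str, style: Optional[str]) -> str:
    """Prepend the chosen response style to the refined prompt, if any."""
    if style is None:
        return refined_prompt
    return f"{STYLES[style]}\n\n{refined_prompt}"
```

The original prompt, the refined prompt, and this assembled final prompt are the three versions shown side by side.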
No sign-up, no account, no cost. Just download and run.
Runs locally on your machine. Bring your own API keys for the AI providers you already use.
macOS / Linux
Windows
Requires Python 3.9+ — the setup script installs Ollama automatically. Choose and install any model from within the app. PromptForge opens at http://localhost:7474.