Local Dev Tools: Private browser utilities

AI development tool

System Prompt Token Optimizer

Optimize long system prompts with light, medium, and aggressive cleanup modes while preserving fenced code blocks and clearly labeling token counts as approximate local estimates.

This tool runs locally in your browser. Your input is not uploaded.

No uploads · No server logs · No runtime APIs · Browser-only transforms

Demo input: a system prompt with repeated lines, extra spacing, and candidates for conservative phrase replacement.

Optimized prompt output

Optimize a prompt to see approximate token savings, character savings, and copyable output.
Options
Optimization mode
Token counts are approximate. This MVP uses a local heuristic and includes an extension hook for exact tokenizer support later.
Fenced code blocks are protected before cleanup so examples and snippets stay intact.

Examples

Prompt compression examples

Medium mode removes duplicate consecutive lines and conservatively shortens verbose phrases.

Please make sure that you follow every instruction.
Please make sure that you follow every instruction.
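The duplicate-line removal above can be sketched as a single pass that drops a line when it exactly matches the previously kept line. `collapseDuplicateLines` is an illustrative name for this note, not the tool's actual API.

```javascript
// Remove consecutive duplicate lines, as medium mode does.
// collapseDuplicateLines is an illustrative name, not the tool's API.
function collapseDuplicateLines(text) {
  const out = [];
  for (const line of text.split("\n")) {
    // Keep a line only if it differs from the last line we kept.
    if (out.length === 0 || out[out.length - 1] !== line) {
      out.push(line);
    }
  }
  return out.join("\n");
}

const input =
  "Please make sure that you follow every instruction.\n" +
  "Please make sure that you follow every instruction.";
console.log(collapseDuplicateLines(input));
// Please make sure that you follow every instruction.
```

Because only *consecutive* duplicates are collapsed, a line that legitimately repeats later in the prompt is left alone.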

JSON policy block

Aggressive mode can minify valid JSON outside fenced code blocks while warning that style may change.

{
  "privacy": true,
  "logging": false
}
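A minimal sketch of this JSON minification, assuming a round-trip through the browser's built-in parser; `minifyJsonBlock` is an illustrative name, not the tool's actual implementation.

```javascript
// Minify a standalone JSON block, as aggressive mode may do outside
// fenced code blocks. minifyJsonBlock is an illustrative name.
function minifyJsonBlock(text) {
  try {
    // Round-tripping drops indentation and line breaks, which is why
    // the tool warns that style may change.
    return JSON.stringify(JSON.parse(text));
  } catch (err) {
    return text; // not valid JSON: leave the block untouched
  }
}

console.log(minifyJsonBlock('{\n  "privacy": true,\n  "logging": false\n}'));
// {"privacy":true,"logging":false}
```

Invalid JSON falls through unchanged, so aggressive mode cannot corrupt a block it fails to parse.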

How it works

  1. What this prompt optimizer does: cleans whitespace, repeated lines, separators, bullets, and verbose phrases, with optional aggressive JSON or XML whitespace reduction, all locally in the browser.
  2. How token estimation works: the MVP uses a local heuristic based on characters, words, symbols, whitespace, and CJK characters, so counts are clearly labeled approximate.
  3. Light vs medium vs aggressive optimization: light preserves style with whitespace cleanup, medium adds conservative compaction, and aggressive applies stronger style-changing reductions with warnings.
  4. What this tool will not change: fenced code blocks are protected before optimization, and no LLM or tokenizer API is called.
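The heuristic in step 2 might look like the sketch below, which blends a characters-per-token rule of thumb with a word count. The specific weights and the ~4 characters-per-token figure are assumptions for illustration, not the tool's actual formula.

```javascript
// Rough local token estimate. The weights are illustrative
// assumptions, not the tool's real heuristic.
function estimateTokens(text) {
  const chars = text.length;
  const words = (text.match(/\S+/g) || []).length;
  // CJK characters often tokenize close to one token each, so they
  // are counted separately on top of the blended estimate.
  const cjk = (text.match(/[\u4e00-\u9fff\u3040-\u30ff]/g) || []).length;
  // Blend a ~4-chars-per-token estimate with a per-word estimate.
  return Math.ceil((chars / 4 + words) / 2 + cjk);
}
```

Whatever the exact weights, any heuristic of this shape will disagree with model-specific BPE tokenizers, which is why the counts are labeled approximate and an extension hook is reserved for an exact tokenizer.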

Limitations

  • Token counts are estimates and may differ from model-specific tokenizers.
  • Aggressive mode may alter prompt style and should be reviewed before use.
  • YAML-like sections are preserved where possible but are not parsed as a full YAML document in the MVP.

FAQ

Does this prompt optimizer upload my system prompt?

No. Cleanup and token estimation run locally in your browser with no external API calls.

Are token counts exact?

No. The MVP uses a simple local estimator and labels the result as approximate.

Will it rewrite code examples?

No. Fenced code blocks are protected before optimization and restored afterward.
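The protect-and-restore step can be sketched as placeholder substitution: each fenced block is swapped for a marker before cleanup and swapped back afterward. The function names and the NUL-delimited marker format are illustrative assumptions, not the tool's internals.

```javascript
// Protect fenced code blocks with placeholders before cleanup, then
// restore them afterward. Names and marker format are illustrative.
function protectFences(text) {
  const blocks = [];
  const protectedText = text.replace(/```[\s\S]*?```/g, (block) => {
    blocks.push(block);
    // NUL delimiters make accidental collisions with prompt text unlikely.
    return "\u0000FENCE" + (blocks.length - 1) + "\u0000";
  });
  return { protectedText, blocks };
}

function restoreFences(protectedText, blocks) {
  return protectedText.replace(/\u0000FENCE(\d+)\u0000/g,
    (_, i) => blocks[Number(i)]);
}
```

Cleanup passes run only on the placeholder text, so the restored code blocks come back byte-for-byte identical.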
