A web tool to debug and test LLM jinja chat templates.
I recommend using uv.
uv sync
uv run app.py
Then open your browser to the URL that Flask prints when the app starts.
You can directly edit a model's chat template by providing a path to the tokenizer_config.json file:
uv run app.py --config path/to/tokenizer_config.json
This will:
- Extract the chat template from the config file into the templates directory
- Automatically open the extracted template in the web interface
- Create a backup of the original config file (with a .timestamp.orig extension) before the first edit
- Sync any changes back to the original config file in real-time
This provides a seamless way to edit a model's chat template with immediate preview of the changes.
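To make the preview concrete, here is a minimal sketch of what rendering a chat template against a test conversation looks like with jinja2. The template string and messages below are illustrative examples, not the app's built-in test cases:

```python
# Minimal sketch: render a chat template with jinja2, the same basic
# operation behind the live preview. The template and messages here
# are illustrative, not the app's actual test cases.
from jinja2 import Environment

template_source = (
    "{% for message in messages %}"
    "<|{{ message['role'] }}|>{{ message['content'] }}<|end|>\n"
    "{% endfor %}"
    "{% if add_generation_prompt %}<|assistant|>{% endif %}"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

rendered = Environment().from_string(template_source).render(
    messages=messages, add_generation_prompt=True
)
print(rendered)
```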
The project includes a CLI tool for extracting and injecting chat templates from/to Hugging Face model tokenizer_config.json files.
uv run template_cli.py extract path/to/tokenizer_config.json
By default, this extracts the template to the templates directory with a filename based on the model name or config filename (e.g., templates/model-name_template.jinja). You can specify a custom output path:
uv run template_cli.py extract path/to/tokenizer_config.json --output my_template.jinja
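Under the hood, extraction is essentially reading the chat_template field out of the JSON config and writing it to a .jinja file. A rough sketch of the idea; the paths and output-naming rule here are illustrative, not the CLI's exact behavior:

```python
# Rough sketch of the extract step: read chat_template from a
# tokenizer_config.json and write it to a .jinja file.
# Paths and naming are illustrative, not the CLI's exact logic.
import json
from pathlib import Path

config_path = Path("path/to/tokenizer_config.json")
config = json.loads(config_path.read_text())

# Some configs store a list of named templates; this sketch assumes
# chat_template is a single string of jinja source.
template = config["chat_template"]

out_path = Path("templates") / f"{config_path.parent.name}_template.jinja"
out_path.parent.mkdir(exist_ok=True)
out_path.write_text(template)
print(f"Wrote {out_path}")
```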
uv run template_cli.py inject path/to/template.jinja path/to/tokenizer_config.json
This will update the tokenizer_config.json file with the contents of the template file.
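Injection is the reverse direction: the template file's contents replace the config's chat_template field and the JSON is written back. A minimal sketch with illustrative paths:

```python
# Minimal sketch of the inject step: put a .jinja file's contents
# back into tokenizer_config.json under the chat_template key.
# Paths are illustrative placeholders.
import json
from pathlib import Path

template_path = Path("path/to/template.jinja")
config_path = Path("path/to/tokenizer_config.json")

config = json.loads(config_path.read_text())
config["chat_template"] = template_path.read_text()
config_path.write_text(json.dumps(config, indent=2, ensure_ascii=False))
```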
- Renders jinja templates with a variety of test cases
- Re-renders templates when template files change
- Direct editing of Hugging Face model chat templates with two-way sync
- Automatic backup of original config files before modification
- CLI tool for extracting and injecting chat templates from/to Hugging Face model tokenizer_config.json files
Model Templater is licensed under the GNU General Public License v3.0 (GPLv3). See the LICENSE file for details.