L10n Audit Toolkit is a Python-based localization QA toolkit for auditing translation files, validating runtime-sensitive strings, and producing safe localization review workflows for multilingual applications.
📚 Documentation: 👉 https://wael-daaboul.github.io/L10n-Audit-Toolkit/
Install with pipx:

pipx install "git+https://github.com/wael-daaboul/L10n-Audit-Toolkit.git"
L10n Audit Toolkit helps engineering and localization teams catch issues before translations ship to production. It combines code usage scanning, locale-file validation, placeholder validation, terminology audit, glossary enforcement, and translation QA reporting in a single repository-oriented workflow.
The project is designed for teams that need repeatable localization audits for i18n and l10n pipelines without rewriting their application structure. It supports JSON locale files and Laravel PHP translation files, generates machine-readable and spreadsheet reports, and keeps risky changes in a review queue instead of auto-applying them.
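For reference, a JSON locale file in such a pipeline might look like this (hypothetical keys, shown only for illustration):

```json
{
  "greeting": "Hello, {name}!",
  "cart": {
    "items_title": "Your cart"
  }
}
```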
Modern multilingual applications often fail in production because translation QA is fragmented across manual review, ad hoc scripts, and framework-specific checks. Common issues include missing or stale translations, placeholder mismatches, and inconsistent terminology.
L10n Audit Toolkit addresses those problems with a structured localization audit pipeline and explicit safe-fix boundaries.
Built-in project profiles currently cover:
- Laravel PHP
- vue-i18n JSON

Current locale format support:
- JSON locale files
- Laravel PHP translation files using `return [...]` and `return array(...)`

The toolkit can report issues such as missing or untranslated keys, placeholder mismatches, and glossary or terminology violations.
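The placeholder-mismatch idea can be sketched like this (a stand-alone illustration, not the toolkit's internal API; the regex assumes single-brace `{name}` placeholders):

```python
import re

# Matches single-brace named placeholders such as {name} or {item_count}.
PLACEHOLDER_RE = re.compile(r"\{[a-zA-Z_][a-zA-Z0-9_]*\}")

def placeholder_mismatches(source: str, translation: str) -> set[str]:
    """Return placeholders present in one string but not the other."""
    src = set(PLACEHOLDER_RE.findall(source))
    dst = set(PLACEHOLDER_RE.findall(translation))
    return src ^ dst  # symmetric difference: missing or unexpected placeholders

# Example: the translation dropped {name} and introduced {user}.
issues = placeholder_mismatches("Hello, {name}!", "Bonjour, {user} !")
```

A finding like `issues` above is the kind of signal an audit can flag for review rather than silently shipping.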
The L10n Audit Toolkit now comes with a powerful CLI. To get started in your localization project:
Initialize Workspace:
l10n-audit init
Verify Setup:
l10n-audit doctor
Run a Fast Audit:
l10n-audit run --stage fast
Primary outputs are written under Results/.
Here are the main commands you will use daily:
- `l10n-audit --help` - Shows help, usage instructions, and available arguments.
- `l10n-audit --version` - Displays the currently installed version of the toolkit.
- `l10n-audit init` - Discovers your project and creates the `.l10n-audit/` workspace.
- `l10n-audit run --stage <STAGE>` - Runs specific or all audit modules (e.g., `fast`, `full`, `autofix`).
- `l10n-audit doctor` - Diagnoses your current environment and tool configuration.
- `l10n-audit update` - Fetches the latest global rules and dictionaries to your local workspace.
- `l10n-audit self-update` - Shows instructions to globally update the CLI tool.

You can enhance your audits with AI (e.g., OpenAI, OpenRouter) to check context, tone, and grammar:
l10n-audit run --stage ai-review \
--ai-enabled \
--ai-api-base "https://openrouter.ai/api/v1" \
--ai-model "openai/gpt-4o-mini"
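For context, OpenAI-compatible endpoints such as OpenRouter accept a chat-completions payload. The sketch below shows what such a review request could look like; the prompt wording and the `build_review_request` helper are illustrative, not the toolkit's actual implementation, and the authorization header is omitted:

```python
import json
import urllib.request

def build_review_request(api_base: str, model: str,
                         source: str, translation: str) -> urllib.request.Request:
    """Build a chat-completions request for an OpenAI-compatible API.

    Illustrative only: real use also needs an Authorization header
    carrying your API key, and the toolkit's prompts may differ.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a localization reviewer. Check tone, grammar, and context."},
            {"role": "user",
             "content": f"Source: {source}\nTranslation: {translation}"},
        ],
    }
    return urllib.request.Request(
        f"{api_base}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_review_request(
    "https://openrouter.ai/api/v1", "openai/gpt-4o-mini",
    "Save changes", "Enregistrer les modifications",
)
```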
Note: For deep technical details and developer scripts, check the `docs/` folder.
If you are using the repository checkout directly rather than an installed launcher, you can still run:
./bin/run_all_audits.sh --stage fast
Use the bootstrap script for the fastest setup:
./bootstrap.sh
Manual setup:
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
python -m pip install -r requirements-optional.txt
python -m pip install -r requirements-dev.txt
Detailed environment setup is documented in INSTALL.md and docs/quickstart.md.
The repository ships with a neutral example glossary at docs/terminology/glossary.json. Replace it or point glossary_file to your own JSON glossary.
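A glossary of this general shape conveys the idea of mapping source terms to approved translations (hypothetical structure and terms; check the shipped docs/terminology/glossary.json for the actual schema):

```json
{
  "terms": [
    {
      "source": "dashboard",
      "approved": {"fr": "tableau de bord"}
    },
    {
      "source": "sign in",
      "approved": {"fr": "se connecter"}
    }
  ]
}
```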
Run the full localization audit pipeline:
l10n-audit run --stage full
Useful stage-specific commands:
l10n-audit run --stage ai-review --ai-enabled
l10n-audit run --stage ai-review --ai-enabled --ai-model gpt-4o-mini --ai-api-base https://api.openai.com/v1
l10n-audit doctor
l10n-audit update --check
To refresh local workspace templates from GitHub or a direct archive URL:
l10n-audit init --from-github --channel stable --repo https://github.com/your-org/l10n-audit-toolkit
l10n-audit update --from-github --channel main --repo https://github.com/your-org/l10n-audit-toolkit
You can also pass a direct .zip archive URL or file://...zip path during testing.
You can also run the basic localization usage audit directly:
./bin/l10n_audit.sh
The toolkit separates deterministic changes from human-reviewed changes.
1. Review the aggregated dashboard at `Results/final/final_audit_report.md`.
2. Open the review queue at `Results/review/review_queue.xlsx`.
3. Fill `approved_new` for reviewed rows and set status to `approved`.
4. Apply the reviewed changes: `python -m fixes.apply_review_fixes`
5. Final reviewed locale files are written to `Results/final_locale/`.

Safe auto-fix planning is available with:
./bin/run_all_audits.sh --stage autofix
The review and fix workflow is documented in HOW_TO_USE.md and docs/review_workflow.md.
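Conceptually, the reviewed-fix step applies only rows a human has approved. A simplified sketch of that idea, using plain dicts instead of the real xlsx queue (field names mirror the workflow above but the actual formats may differ):

```python
def apply_approved_rows(locale: dict, review_rows: list[dict]) -> dict:
    """Apply approved_new values for rows whose status is 'approved'.

    Simplified illustration of the review-queue idea; the toolkit's
    real column names and file formats may differ.
    """
    updated = dict(locale)
    for row in review_rows:
        if row.get("status") == "approved" and row.get("approved_new"):
            updated[row["key"]] = row["approved_new"]
    return updated

queue = [
    {"key": "greeting", "status": "approved", "approved_new": "Bonjour !"},
    {"key": "farewell", "status": "pending", "approved_new": "Au revoir"},
]
result = apply_approved_rows({"greeting": "Salut", "farewell": "Bye"}, queue)
```

Only the approved row is applied; pending rows leave the locale untouched, which is the safety boundary the workflow enforces.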
Additional commands for repository checkouts and module-level runs:

./bin/run_all_audits.sh --stage full
python -m audits.placeholder_audit
python -m audits.terminology_audit
python -m fixes.apply_safe_fixes
python -m fixes.apply_review_fixes
python -m pytest
Common outputs include:
- `Results/per_tool/`: raw per-audit findings
- `Results/normalized/`: normalized machine-readable findings
- `Results/review/review_queue.xlsx`: review queue for human approval
- `Results/fixes/fix_plan.json`: safe fix plan
- `Results/fixes/safe_fixes_applied_report.json`: auto-fix summary
- `Results/final/final_audit_report.md`: aggregated dashboard
- `Results/final_locale/ar.final.json`: final reviewed locale

See docs/output_reports.md for report details.
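Machine-readable findings lend themselves to downstream tooling. A hedged sketch of consuming them, assuming (hypothetically) that each normalized finding carries a severity field; see docs/output_reports.md for the real schema:

```python
from collections import Counter

def summarize_findings(findings: list[dict]) -> Counter:
    """Count findings by severity ('severity' is a hypothetical field name)."""
    return Counter(f.get("severity", "unknown") for f in findings)

summary = summarize_findings([
    {"severity": "error", "key": "greeting"},
    {"severity": "warning", "key": "farewell"},
    {"severity": "error", "key": "title"},
])
```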
- `audits/`: audit modules for localization, placeholder, terminology, ICU, and locale QA checks
- `core/`: shared runtime, loaders, exporters, scanners, and validation helpers
- `fixes/`: safe-fix and reviewed-fix application logic
- `reports/`: report aggregation and final dashboard generation
- `schemas/`: JSON schemas for config and generated artifacts
- `config/`: toolkit configuration and project profiles
- `bin/`: shell entry points for common workflows
- `examples/`: framework-oriented sample layouts and usage notes
- `docs/`: reference documentation for workflows and outputs
- `tests/`: regression coverage for audits, exports, reports, and fix safety

Detailed directory roles are documented in docs/overview.md.
Contributions that improve localization audit quality, translation validation, framework coverage, or documentation are welcome. See CONTRIBUTING.md before opening a pull request.
Please report vulnerabilities privately. See SECURITY.md.
This repository is released under the MIT License. See LICENSE.