Automated localization auditing, validation, and repair, built for real-world production pipelines.
See CHANGELOG.md for full history including v1.7.0.
Localization often breaks in subtle ways that automated tests miss and human reviewers overlook.

L10n Audit Toolkit provides a deterministic, repeatable, and automated pipeline to catch these issues before they reach production. It acts as a safety net between translators' language files and your codebase.
The toolkit keeps a single source of truth (`audit_master.json`) for the entire pipeline state.

```shell
# 1. Initialize your workspace (detects framework, sets up .l10n-audit/)
l10n-audit init

# 2. Run the audit pipeline against your locale files
l10n-audit run

# 3. Freeze approved rows into an execution-safe workbook
l10n-audit prepare-apply

# 4. Apply reviewed fixes back to your codebase
l10n-audit apply
```
Primary review/apply workflow:
run -> review_queue.xlsx -> prepare-apply -> review_final.xlsx -> apply
Adaptive config workflow:
generate-adaptation-report -> generate-manifest -> review-manifest -> apply-manifest
This generates:

- `Results/final/final_audit_report.json`
- `Results/review/review_queue.xlsx`
- `Results/review/review_final.xlsx`
- `Results/artifacts/audit_master.json`

The toolkit generates a structured set of outputs to guide both humans and machines:
- `Results/artifacts/audit_master.json` – The core application state and single source of truth.
- `Results/review/review_queue.xlsx` – The editable human review workspace.
- `Results/review/review_final.xlsx` – The frozen, execution-safe contract generated by `prepare-apply`. Do not edit this file manually.
- `Results/final/final_audit_report.md` – The dashboard-style Markdown summary of your localization health.

| File | Role |
|---|---|
| `review_queue.xlsx` | Editable human review workspace |
| `review_final.xlsx` | Frozen execution contract |
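Because `review_final.xlsx` is a frozen execution contract, one common way to enforce "do not edit this file manually" is a content-digest check before applying. A minimal sketch of that pattern (the helper names and the digest mechanism are illustrative assumptions, not the toolkit's internals):

```python
import hashlib
from pathlib import Path

def freeze(path: Path) -> str:
    """Record a digest of the workbook at freeze time."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_frozen(path: Path, digest: str) -> bool:
    """Return False if the file changed after it was frozen,
    so an apply-style step can refuse to run on an edited contract."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == digest
```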
⚠️ Note:
- Per-tool CSV/XLSX outputs are optional and disabled by default to reduce noise.
- The `.cache/` directory contains internal, raw processing data and should be ignored by version control.
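Since `.cache/` holds only internal processing data, a typical ignore entry (the path is assumed to sit at the workspace root) looks like:

```
# .gitignore
.cache/
```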
The toolkit operates on a unidirectional, safe pipeline:
1. `run` records all findings in `audit_master.json`.
2. Humans review and edit `review_queue.xlsx`.
3. `prepare-apply` freezes approved rows into `review_final.xlsx`, a deterministic, execution-safe workbook.
4. `apply` reads `review_final.xlsx` and applies approved fixes to temporary `.fix` files.

For explicit adaptive configuration changes, the toolkit uses a separate manual chain:
- `generate-adaptation-report` produces `adaptation_report.json` from a learning profile.
- `generate-manifest`, `review-manifest`, and `apply-manifest` then move the reviewed changes into `config.json`.

📖 Documentation: https://wael-daaboul.github.io/L10n-Audit-Toolkit/
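The `apply` step described above writes approved fixes to temporary `.fix` files rather than overwriting sources in place. A minimal sketch of that side-by-side pattern (the `apply_fix` helper and its `(path, old, new)` shape are assumptions for illustration):

```python
from pathlib import Path

def apply_fix(path: Path, old: str, new: str) -> Path:
    """Write the fixed content next to the original as <name>.fix,
    leaving the source file untouched (hypothetical helper)."""
    text = path.read_text(encoding="utf-8")
    fixed = text.replace(old, new, 1)  # only the first occurrence
    out = path.with_name(path.name + ".fix")
    out.write_text(fixed, encoding="utf-8")
    return out
```

Keeping the original untouched until the `.fix` file is verified is what makes the pipeline unidirectional and safe to re-run.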
The toolkit now includes a production-ready Decision Quality layer.
Decision quality output is machine-readable and built around these stable metrics:
- `route_distribution` – distribution of final decision routes
- `confidence_bands` – distribution of high / medium / low confidence findings
- `evidence_source_usage` – how often each evidence source influenced decisions
- `contribution_impact_summary` – aggregated bonus/penalty impact by source
- `quality_risk_indicators` – analytical warning signals such as low-confidence auto-fix cases, high-confidence manual cases, explanation gaps, and evidence-heavy caution patterns

This layer measures decision behavior without changing it. It is kept separate from routing and scoring execution so the system remains auditable and deterministic.
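As a sketch of how such machine-readable metrics can be aggregated, assuming hypothetical finding records with `route` and `confidence_band` fields (not the toolkit's actual schema):

```python
from collections import Counter

def decision_quality_summary(findings):
    """Aggregate finding records into the stable metric shapes above.
    Purely observational: reads decisions, never alters them."""
    routes = Counter(f["route"] for f in findings)
    bands = Counter(f["confidence_band"] for f in findings)
    # Flag risk patterns such as low-confidence auto-fixes and
    # high-confidence findings routed to manual review.
    risks = {
        "low_confidence_auto_fix": sum(
            1 for f in findings
            if f["route"] == "auto_fix" and f["confidence_band"] == "low"
        ),
        "high_confidence_manual": sum(
            1 for f in findings
            if f["route"] == "manual" and f["confidence_band"] == "high"
        ),
    }
    return {
        "route_distribution": dict(routes),
        "confidence_bands": dict(bands),
        "quality_risk_indicators": risks,
    }
```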