L10n Audit Toolkit

Automated localization auditing, validation, and repair, built for real-world production pipelines.

🚀 What's New in v1.7.1

See CHANGELOG.md for full history including v1.7.0.


💡 Why This Tool Is Different


🛑 What Problem It Solves

Localization often breaks in subtle ways that automated tests miss and human reviewers overlook:

L10n Audit Toolkit provides a deterministic, repeatable, and automated pipeline that catches these issues before they reach production. It acts as a safety net between translators' language files and your codebase.


✨ Key Features


🚀 Quickstart

# 1. Initialize your workspace (detects framework, sets up .l10n-audit/)
l10n-audit init

# 2. Run the audit pipeline against your locale files
l10n-audit run

# 3. Freeze approved rows into an execution-safe workbook
l10n-audit prepare-apply

# 4. Apply reviewed fixes back to your codebase
l10n-audit apply

Primary review/apply workflow:

run -> review_queue.xlsx -> prepare-apply -> review_final.xlsx -> apply

Adaptive config workflow:

generate-adaptation-report -> generate-manifest -> review-manifest -> apply-manifest

This generates:


📊 Output Overview

The toolkit generates a structured set of outputs to guide both humans and machines:

File                Role
review_queue.xlsx   Editable human review workspace
review_final.xlsx   Frozen execution contract
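
The distinction between the two workbooks can be illustrated with a minimal sketch. The row fields shown here ("key", "decision", "replacement") are assumptions for illustration, not the toolkit's actual workbook schema:

```python
# Sketch: review_queue.xlsx is editable; review_final.xlsx keeps only rows
# with an explicit decision, frozen into immutable records.
# Field names below are illustrative assumptions, not the real schema.

def freeze(queue_rows):
    """Keep only rows with an explicit decision, as immutable records."""
    frozen = []
    for row in queue_rows:
        if row.get("decision") in ("approve", "reject"):
            frozen.append(tuple(sorted(row.items())))  # immutable record
    return frozen

queue = [
    {"key": "home.title", "decision": "approve", "replacement": "Accueil"},
    {"key": "home.cta", "decision": "", "replacement": ""},  # still under review
]
contract = freeze(queue)  # only the decided row survives the freeze
```

The point of the freeze is that the apply step never has to interpret an ambiguous, half-edited workspace: it reads only explicit, immutable decisions.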

⚠️ Note:


πŸ—οΈ Architecture (High-Level)

The toolkit operates on a unidirectional, safe pipeline:

  1. Audit: Reads project locales and runs heuristics/AI to detect issues.
  2. Aggregate: Merges all findings into audit_master.json.
  3. Review: Projects the master state into the editable human workspace review_queue.xlsx.
  4. Prepare Apply: Transforms the editable review workspace into review_final.xlsx, a deterministic, execution-safe workbook.
  5. Apply: Reads only review_final.xlsx and applies approved fixes to temporary .fix files.
  6. Reconcile: Merges the application results back into the master state for a complete audit trail.
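
The six stages above can be sketched as a single forward-only chain. The function bodies and data shapes here are illustrative assumptions; only the stage names mirror the toolkit:

```python
# Illustrative sketch of the unidirectional pipeline. Data shapes are
# assumptions for demonstration; the real toolkit reads/writes the files
# named in the steps above (audit_master.json, review_queue.xlsx, ...).

def audit(locales):
    # Placeholder heuristic: flag entries with empty translations.
    return [{"key": k, "issue": "empty"} for k, v in locales.items() if not v]

def aggregate(findings):
    # Merge all findings into a single master state.
    return {"findings": findings}

def review(master):
    # Project the master state into an editable review queue.
    return [dict(f, decision="") for f in master["findings"]]

def prepare_apply(queue):
    # Freeze only explicitly approved rows into the execution contract.
    return [r for r in queue if r["decision"] == "approve"]

def apply_fixes(final):
    # Apply approved fixes (the toolkit writes temporary .fix files).
    return {r["key"]: "fixed" for r in final}

def reconcile(master, results):
    # Merge application results back into the master state (audit trail).
    master["applied"] = results
    return master

locales = {"home.title": "Accueil", "home.cta": ""}
master = aggregate(audit(locales))
queue = review(master)
queue[0]["decision"] = "approve"        # the human review step
results = apply_fixes(prepare_apply(queue))
master = reconcile(master, results)
```

Note how each stage consumes only the previous stage's output, which is what makes the pipeline unidirectional and replayable.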

For explicit adaptive configuration changes, the toolkit uses a separate manual chain:

  1. Generate Adaptation Report: Builds adaptation_report.json from a learning profile.
  2. Generate Manifest: Converts the adaptation report into a reviewable consumption manifest.
  3. Review Manifest: Produces a reviewed manifest with explicit human approvals.
  4. Apply Manifest: Applies only approved manifest actions to config.json.
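
The gate at the end of this chain can be sketched as follows. The manifest structure shown ("actions" with "approved" flags) is an assumption for illustration, not the toolkit's actual manifest format:

```python
# Sketch of the apply-manifest step: only actions carrying an explicit
# human approval reach the configuration. The manifest structure here is
# an illustrative assumption, not the toolkit's real format.

def apply_manifest(reviewed_manifest, config):
    for action in reviewed_manifest["actions"]:
        if action.get("approved") is True:   # explicit approval only
            config[action["key"]] = action["value"]
    return config

manifest = {"actions": [
    {"key": "threshold", "value": 0.8, "approved": True},
    {"key": "mode", "value": "aggressive", "approved": False},
]}
config = apply_manifest(manifest, {"threshold": 0.5})
# the unapproved "mode" action never touches the config
```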

👥 Who Should Use This


📚 Documentation: 👉 https://wael-daaboul.github.io/L10n-Audit-Toolkit/


🧠 Decision Quality Layer

The toolkit now includes a production-ready Decision Quality layer that is:

Decision quality output is machine-readable and built around these stable metrics:

This layer measures decision behavior without altering it. It is kept separate from routing and scoring execution so the system remains auditable and deterministic.
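
That separation, measuring decisions without influencing them, can be sketched as a passive observer. The class, metric names, and decision rule below are assumptions for illustration, not the toolkit's actual metrics:

```python
# Sketch of a read-only decision-quality layer: it records metrics about
# decisions as they happen but never feeds back into them. All names and
# the confidence-threshold rule are illustrative assumptions.

class DecisionQualityObserver:
    def __init__(self):
        self.total = 0
        self.approved = 0

    def record(self, decision):
        # Observation only: nothing returned here reaches routing/scoring.
        self.total += 1
        if decision == "approve":
            self.approved += 1

    def metrics(self):
        # Machine-readable snapshot with stable keys.
        rate = self.approved / self.total if self.total else 0.0
        return {"decisions": self.total, "approval_rate": rate}

def decide(row, observer):
    decision = "approve" if row["confidence"] >= 0.9 else "reject"
    observer.record(decision)   # measurement, not modification
    return decision

obs = DecisionQualityObserver()
decisions = [decide({"confidence": c}, obs) for c in (0.95, 0.5, 0.99)]
```

Because `record` returns nothing to the caller, the decisions are identical with or without the observer attached, which is the property the text above describes.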