
Translate

A browser-first AI translator. Model files are served from zhanghe.dev on Cloudflare R2, and translation text stays in your browser.
Hy-MT1.5 1.8B / 440MB quantized model
Text translation v1
Local history stays in your browser
architecture

Privacy and performance by default

The page is server-rendered first, then the model runtime loads lazily. Idle preloading only starts on suitable browsers and networks.
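The idle-preload gate described above can be sketched as a small predicate. This is a hypothetical helper, not the actual implementation: the function name and the exact network thresholds are assumptions, but the signals (Save-Data, effective connection type, `requestIdleCallback` support) are the standard browser hints for this kind of check.

```typescript
// Sketch: decide whether to preload the model runtime during idle time.
// Hypothetical helper; names and thresholds are assumptions.
type NetworkHint = {
  saveData?: boolean;       // user enabled data saving
  effectiveType?: string;   // "slow-2g" | "2g" | "3g" | "4g"
};

function shouldPreloadModel(
  hasIdleCallback: boolean,
  net: NetworkHint | undefined,
): boolean {
  if (!hasIdleCallback) return false;   // unsuitable browser: load on demand only
  if (!net) return true;                // no hints available: assume capable network
  if (net.saveData) return false;       // respect the user's data-saving preference
  if (net.effectiveType && net.effectiveType !== "4g") return false; // slow network
  return true;
}

// In the page this would be wired up roughly as:
// if (shouldPreloadModel("requestIdleCallback" in window, (navigator as any).connection)) {
//   requestIdleCallback(() => import("./model-runtime"));
// }
```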

worker

Browser inference

Translation runs inside a Web Worker. The main thread stays responsive, and your text is not sent to a third-party translation API.
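A minimal sketch of that worker protocol, assuming a simple id-matched request/response shape (the message types and the model call are hypothetical, not the product's actual API): the main thread posts a request, the worker answers with the same id, and the text never leaves the page.

```typescript
// Hypothetical message shapes for the translation worker.
type TranslateRequest = { id: number; text: string; targetLang: string };
type TranslateResponse = { id: number; translation: string };

// Stand-in for the model call; in the real worker this runs the quantized model.
type TranslateFn = (text: string, targetLang: string) => string;

// Pure handler: pairs each response with its request id so the UI can match replies.
function handleMessage(req: TranslateRequest, translate: TranslateFn): TranslateResponse {
  return { id: req.id, translation: translate(req.text, req.targetLang) };
}

// Worker wiring (browser-only), roughly:
// self.onmessage = (e: MessageEvent<TranslateRequest>) => {
//   self.postMessage(handleMessage(e.data, runModel));
// };
```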
r2

China-friendly delivery

Model, WASM, and tokenizer assets are served from same-origin Cloudflare R2 paths instead of the Hugging Face CDN.
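One way to picture the same-origin routing is a small URL builder that keeps every asset under the published prefix. This is a sketch under assumptions: the `/models/translate` base matches the path shown later on this page, but the helper itself is hypothetical.

```typescript
// Hypothetical helper: map asset file names onto same-origin R2 paths.
const ASSET_BASE = "/models/translate"; // served via Cloudflare R2, same origin

function assetUrl(file: string): string {
  // Reject traversal and absolute paths so only the published prefix is reachable.
  if (file.includes("..") || file.startsWith("/")) {
    throw new Error(`invalid asset name: ${file}`);
  }
  return `${ASSET_BASE}/${file}`;
}
```

Because every fetch hits the page's own origin, availability does not depend on a third-party CDN being reachable from the visitor's region.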
runtime

Runtime compatibility

The runtime prefers self-hosted Transformers.js ONNX files and falls back to a browser-side GGUF/WASM adapter when needed.
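The ONNX-first selection with a GGUF/WASM fallback can be sketched as a try/catch over two loaders. The loader names and the `Runtime` shape below are assumptions for illustration, not the product's actual interfaces.

```typescript
// Hypothetical runtime shape: which backend loaded, plus a translate call.
type Runtime = {
  kind: "onnx" | "gguf";
  translate: (text: string) => Promise<string>;
};

// Prefer the self-hosted Transformers.js/ONNX path; if it fails to initialize
// (e.g. missing browser features or asset load errors), fall back to the
// GGUF/WASM adapter instead of failing outright.
async function loadRuntime(
  loadOnnx: () => Promise<Runtime>,
  loadGguf: () => Promise<Runtime>,
): Promise<Runtime> {
  try {
    return await loadOnnx();
  } catch {
    return await loadGguf();
  }
}
```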

Model delivery

R2 object access is restricted to the models/translate/ prefix. Responses support Range requests, ETags, and long-lived immutable caching, so 440MB-class model artifacts can be fetched in resumable chunks and cached indefinitely.
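A minimal sketch of those edge-side rules, assuming a function that gates requests by prefix and attaches caching headers (the function is hypothetical; the prefix and header values follow the description above):

```typescript
// Hypothetical edge rule: serve only the published model prefix, with headers
// suited to large immutable artifacts.
function modelResponseHeaders(path: string): Record<string, string> | null {
  if (!path.startsWith("models/translate/")) return null; // outside the allowed prefix
  return {
    "Accept-Ranges": "bytes", // resumable/partial downloads of 440MB-class files
    "Cache-Control": "public, max-age=31536000, immutable", // cache for a year, never revalidate
  };
}
```

With `immutable` caching, the browser never revalidates a fetched model shard, and Range support lets an interrupted download resume instead of restarting.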

/models/translate/*
Henry Zhang

FAQ