
Tencent has officially open-sourced Tencent HY-MT1.5, a new generation of multilingual machine translation models, giving developers and researchers free access to high-quality translation technology that can run locally on consumer hardware.
The release was announced to the AI community through Reddit’s r/LocalLLaMA and quickly gained attention due to its strong performance, small memory footprint, and permissive open-source availability.
Tencent HY-MT1.5 is available in two versions, covering lightweight deployment and higher-quality translation scenarios respectively.
Both models are designed specifically for machine translation, rather than general chat, allowing them to focus on accuracy, fluency, and consistency.
HY-MT1.5 supports bidirectional translation across more than 30 languages, including major global languages and regional variants. This broad coverage makes the models useful not only for casual translation, but also for documentation, software localization, and technical writing.
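As a rough illustration of how a translation-focused model like this is typically driven, the sketch below uses the Hugging Face transformers library. The model ID and the plain-text prompt wording are assumptions made for illustration; the official model card defines the actual repository names and prompt template.

```python
# Minimal sketch: translating a sentence with a causal-LM translation model
# via Hugging Face transformers. The model ID and prompt wording below are
# assumptions for illustration; consult Tencent's model card for the real ones.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/HY-MT1.5-7B"  # hypothetical repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A simple instruction-style prompt asking for English -> German translation.
prompt = (
    "Translate the following text from English to German:\n\n"
    "The server restarts every night at 2 a.m."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens (the translation itself).
translation = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(translation)
```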
A major highlight of this release is its focus on local inference. Tencent provides tooling and model formats compatible with popular open-source ecosystems, so the models can be run entirely offline on consumer hardware.
This approach aligns well with the growing demand for privacy-friendly, offline-capable AI, where users want control over their data without relying on cloud APIs.
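For fully offline use, one common route in the local-model ecosystem is running a quantized GGUF build through llama.cpp. Whether Tencent ships official GGUF conversions is not stated here, so treat the file name below as an assumption; the pattern itself is standard llama-cpp-python usage.

```python
# Minimal sketch: offline inference with a quantized GGUF build via
# llama-cpp-python. The GGUF file name is an assumption; any quantized
# conversion of the model would follow the same pattern.
from llama_cpp import Llama

llm = Llama(
    model_path="./hy-mt1.5-7b-q4_k_m.gguf",  # hypothetical quantized file
    n_ctx=4096,        # context window for source text plus translation
    n_threads=8,       # tune to the local CPU
)

prompt = (
    "Translate the following text from French to English:\n\n"
    "Le serveur redémarre chaque nuit à 2 heures."
)

out = llm(prompt, max_tokens=128, temperature=0.0)
print(out["choices"][0]["text"].strip())
```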
Tencent HY-MT1.5 demonstrates that open-source translation models can now compete with — and in some cases outperform — proprietary translation services, especially in terms of speed and deployability.
For developers building local AI tools, multilingual apps, or privacy-focused solutions, HY-MT1.5 offers a compelling new option backed by a major industry player.
The models and documentation are available publicly via Tencent’s official GitHub and Hugging Face repositories, making it easy to start experimenting right away.
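To start experimenting, the weights can be pulled from Hugging Face with the standard huggingface_hub client; the repository ID below is an assumption, since the exact IDs are listed on the official pages.

```python
# Minimal sketch: downloading the model weights locally with huggingface_hub.
# The repository ID is an assumption for illustration; use the ID listed on
# Tencent's official Hugging Face page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="tencent/HY-MT1.5-7B",   # hypothetical repo ID
    local_dir="./hy-mt1.5",          # where to place the weights
)
print(f"Model files downloaded to: {local_dir}")
```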