At Mobile World Congress 2026 in Barcelona, LG AI Research announced EXAONE 4.5 — the next generation of its open-weight AI model series. Unlike its predecessors, EXAONE 4.5 is a vision-language model (VLM) that can process both text and images simultaneously, marking LG’s biggest step yet into multimodal AI. The model is expected to launch in the first half of 2026 as an open-weight release.
EXAONE 4.5 builds on the foundation of EXAONE 4.0, which launched in July 2025 as a hybrid reasoning model in 32B and 1.2B sizes. Where EXAONE 4.0 focused on integrating reasoning and non-reasoning modes within a text-only architecture, version 4.5 adds full vision-language capabilities — enabling the model to understand diagrams, charts, photographs, and other visual inputs alongside text.
LG AI Research describes it as a model designed to “integrate text and image understanding in a way that more closely resembles human cognition.” The company claims EXAONE 4.5 will be the highest-performing open-weight model of its size when released, though independent benchmarks are not yet available.
While specific parameter counts for EXAONE 4.5 haven’t been disclosed, context from the EXAONE lineage provides clues. EXAONE 4.0’s 32B model featured a hybrid attention mechanism with a mix of global and sliding window attention layers, 128K token context length, and training on 14 trillion tokens. The separate K-EXAONE model, a 236B Mixture-of-Experts (MoE) system with only 23B active parameters, introduced innovations like 70% memory reduction through hybrid attention, a 150,000-word tokenizer, and 150% inference speed gains via multi-token prediction.
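The memory savings claimed for hybrid attention come from sliding-window layers, which only cache keys and values for a fixed recent window rather than the whole sequence. The sketch below illustrates the arithmetic with made-up hyperparameters (layer counts, head dimensions, window size, and the local-to-global layer ratio are all illustrative assumptions, not disclosed EXAONE figures); only the 128K context length comes from the article.

```python
# Illustrative sketch, not official EXAONE code: KV-cache footprint of a
# hybrid-attention stack that mixes global and sliding-window layers.
# All hyperparameters below are assumptions chosen for illustration.

def kv_cache_bytes(n_global, n_local, seq_len, window,
                   n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    """Bytes of K+V cache for one sequence across all attention layers."""
    per_token = 2 * n_kv_heads * head_dim * bytes_per_elem  # K and V entries
    global_tokens = seq_len                  # global layers cache every token
    local_tokens = min(seq_len, window)      # sliding window caps the cache
    return (n_global * global_tokens + n_local * local_tokens) * per_token

ctx = 128_000  # 128K-token context, as reported for EXAONE 4.0

# Baseline: 64 all-global layers vs. a hypothetical 1:3 global:local mix.
full = kv_cache_bytes(64, 0, ctx, window=4096)
hybrid = kv_cache_bytes(16, 48, ctx, window=4096)

print(f"all-global cache: {full / 2**30:.1f} GiB")
print(f"hybrid cache:     {hybrid / 2**30:.1f} GiB "
      f"({100 * (1 - hybrid / full):.0f}% smaller)")
```

With these invented settings the hybrid stack's cache comes out roughly 70% smaller at long context, in the same ballpark as the reduction LG reports — though the real figure depends on the actual layer mix and window size, which LG has not published.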
LG’s EXAONE models have been climbing global AI rankings. According to the Artificial Analysis Intelligence Index, K-EXAONE currently ranks 7th worldwide among open-weight models with a score of 32 — just one point behind OpenAI’s open-weight entry at 33. In Korea’s government-led AI foundation model competition, K-EXAONE topped 10 of 13 benchmark tests with an average score of 72, outperforming GPT-OSS-120B (69.5) and Qwen3-235B (69.5).
Epoch AI has recognized five EXAONE models as “Notable AI Models”: EXAONE 3.5, EXAONE Deep, EXAONE Path 2.0, EXAONE 4.0, and K-EXAONE — a track record that underscores LG’s rapid iteration in the open-weight space.
Perhaps the most ambitious application for EXAONE 4.5 is as the cognitive engine for KAPEX, South Korea’s national humanoid robot project. Co-developed by LG Electronics, LG AI Research, and the Korea Institute of Science and Technology (KIST), KAPEX was unveiled in November 2025 and aims for field demonstrations in 2026 with full commercialization within four years.
Vision-language understanding is critical for robotics — a humanoid that can interpret visual scenes, read labels, and reason about its environment needs exactly the kind of multimodal intelligence EXAONE 4.5 is designed to provide. LG positions this as the beginning of a “physical AI” era, where large-scale models are embedded directly into machines operating in the real world.
To support its AI ambitions, LG is building the Paju AI Data Center in Gyeonggi Province — a 200-megawatt facility capable of housing up to 120,000 GPUs, targeted for completion in 2027. The data center will integrate capabilities across LG Electronics, LG Energy Solution, and LG CNS.
Co-President Lim Woo-hyung summarized LG’s philosophy: “The AI that LG pursues is not about competing over how intelligent it is, but about creating a partner that helps people and solves problems in the real world.”
