Meta Hasn’t Given Up on Open Source: Muse Spark Launches as Open-Weight Plans Continue

On April 8, 2026, Meta launched Muse Spark — its first proprietary frontier AI model built by the new Superintelligence Labs under former Scale AI CEO Alexandr Wang. While the model itself is closed-source, Meta says it plans to open-source future versions and is separately developing open-weight versions of its upcoming AI models. The message is clear: Meta hasn’t abandoned the open-source strategy that defined the Llama era.

Meta's Muse Spark announcement banner showing the model name and Meta Superintelligence Labs branding
Image credit: Meta

Muse Spark: What It Can Do

Muse Spark — codenamed “Avocado” internally — is the inaugural model from Meta Superintelligence Labs, the AI division that Alexandr Wang now leads following Meta’s nearly $15 billion investment in Scale AI. Unlike the Llama series, Muse Spark is proprietary and currently powers the Meta AI assistant at meta.ai, with rollouts to WhatsApp, Instagram, Facebook, Messenger, and Meta’s Ray-Ban AI glasses in the coming weeks.

The model is natively multimodal, accepting voice, text, and image inputs (text output only). It introduces several notable features:

  • Thought Compression — During reinforcement learning, the model is penalized for excessive reasoning tokens, forcing efficient problem-solving. Meta claims Muse Spark matches the capability level of Llama 4 Maverick with less than a tenth of the compute.
  • Contemplating Mode — Orchestrates parallel reasoning agents for complex problems, achieving 50.2% on Humanity’s Last Exam (vs. 48.4% for Gemini 3.1 Deep Think and 43.9% for GPT 5.4 Pro).
  • Health AI — Trained in collaboration with 1,000+ physicians, scoring 42.8 on HealthBench Hard (vs. 40.1 for GPT 5.4).

Animated header showing Muse Spark model interface and capabilities
Image credit: Meta
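Meta has not published Muse Spark’s training recipe, but the length-penalty idea behind Thought Compression can be sketched as a simple reward-shaping term. The function name, token budget, and penalty weight below are illustrative assumptions, not Meta’s actual method:

```python
# Hypothetical sketch of a "thought compression" reward term.
# Budget and penalty weight are illustrative, not Meta's values.

def compressed_reward(task_reward: float,
                      reasoning_tokens: int,
                      budget: int = 1024,
                      penalty_per_token: float = 0.0001) -> float:
    """Shape an RL reward so a correct answer reached with fewer
    reasoning tokens scores higher than the same answer reached
    with a long chain of thought."""
    overage = max(0, reasoning_tokens - budget)
    return task_reward - penalty_per_token * overage

# Two rollouts that both solve the task (base reward 1.0):
concise = compressed_reward(1.0, 512)    # within budget, no penalty
verbose = compressed_reward(1.0, 4096)   # docked for 3072 extra tokens
```

Under a scheme like this, policy optimization favors rollouts that solve the task within the token budget — one plausible route to the claimed capability-per-compute gains.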

However, Muse Spark has notable gaps. It scores just 42.5 on ARC-AGI-2 (abstract reasoning), compared to 76.5 for Gemini 3.1 Pro. In coding benchmarks, it trails with 59.0 on Terminal-Bench 2.0 vs. 75.1 for GPT 5.4. Meta itself acknowledges the model “won’t match competitors in every area.”

The Open-Source Plans

Meta headquarters sign at the company's campus
Image credit: SiliconANGLE

Two days before the Muse Spark launch, Axios reported that Meta is developing open-source versions of its next-generation AI models. The company is also building a second proprietary model codenamed “Mango,” a multimedia file generator. Open-weight versions of these models will follow, though Meta plans to keep certain capabilities proprietary — particularly around cybersecurity code generation and some mixture-of-experts components.

Meta says it “hopes to open-source future versions” of the Muse series itself. This hybrid strategy — proprietary models for Meta’s consumer products, open-weight releases for the developer ecosystem — fits with Wang’s stated vision of Meta as a “counterweight to Anthropic and OpenAI,” which focus more on government and enterprise customers.

The investment behind this is enormous. Meta’s AI-related capital expenditures for 2026 are projected at $115–135 billion, nearly double last year’s spending.

What This Means for the Community

For researchers and developers, the key question was whether Meta’s pivot to Superintelligence Labs and proprietary models meant the end of the Llama-style open releases. The answer appears to be no — but with caveats. Future open-weight models may not include all capabilities of their proprietary counterparts, and the largest frontier models may remain closed.

Still, Meta’s track record with the Llama series — from Llama 2 through Llama 4 Maverick’s 400B-parameter mixture-of-experts architecture — has made it the most significant contributor to open-weight AI. The commitment to continue, even partially, keeps a US-made open option available for developers worldwide at a time when most frontier labs are moving toward closed models.
