ByteDance releases new open source Seed-OSS-36B model

TikTok is making headlines again today after the White House joined the popular social media application, but its parent company ByteDance, a Chinese web giant, also had a surprise announcement up its sleeve.

The company's Seed Team of AI researchers today released Seed-OSS-36B on the AI code-sharing website Hugging Face.

Seed-OSS-36B is a new line of open-source large language models (LLMs) designed for advanced reasoning and developer-focused usability, with a longer token context (that is, how much information a model can accept as input and produce as output in a single exchange) than many competing LLMs from U.S. tech companies, including leaders such as OpenAI and Anthropic.

The collection introduces three main variants:




  • Seed-OSS-36B-Base with synthetic data
  • Seed-OSS-36B-Base without synthetic data
  • Seed-OSS-36B-Instruct

In releasing both synthetic and non-synthetic versions of the Seed-OSS-36B-Base model, the Seed Team sought to balance practical performance with research flexibility.

The synthetic-data variant, trained with additional instruction data, consistently delivers stronger scores on standard benchmarks and is intended as a higher-performing general-purpose option.

The non-synthetic model, by contrast, omits these augmentations, creating a cleaner foundation that avoids potential bias or distortion introduced by synthetic instruction data.

By providing both, the team gives applied users access to improved results while ensuring researchers retain a neutral baseline for studying post-training methods.

Meanwhile, the Seed-OSS-36B-Instruct model differs in that it is post-trained with instruction data to prioritize task execution and instruction following, rather than serving purely as a foundation model.

All three models are released under the Apache-2.0 license, allowing free use, modification, and redistribution by researchers, developers, and enterprises alike.

That means they can be used to power commercial applications, whether internal to a company or external and customer-facing, without paying ByteDance licensing fees or application programming interface (API) charges.

This continues the summer 2025 trend of Chinese companies shipping powerful open-source models; OpenAI attempted to catch up with its own open-source gpt-oss duo, released earlier this month.

The Seed Team positions Seed-OSS for international applications, emphasizing versatility across reasoning, agent-like task execution, and multilingual settings.

The Seed Team, formed in 2023, has concentrated on building foundation models that can serve both research and applied use cases.

Design and core features

The architecture behind Seed-OSS-36B combines familiar design choices such as causal language modeling, grouped query attention, SwiGLU activation, RMSNorm, and RoPE positional encoding.

Each model carries 36 billion parameters across 64 layers and supports a vocabulary of 155,000 tokens.
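For readers who prefer a code-level summary, those published design choices can be collected into a small configuration sketch. This is purely illustrative; the field names below are hypothetical and do not mirror ByteDance's actual configuration schema.

```python
from dataclasses import dataclass

@dataclass
class SeedOSS36BSketch:
    """Illustrative summary of Seed-OSS-36B's published design choices.

    Field names are hypothetical and chosen for readability; they do not
    mirror ByteDance's actual configuration files."""
    n_parameters: int = 36_000_000_000  # 36 billion parameters
    n_layers: int = 64                  # transformer layers
    vocab_size: int = 155_000           # reported vocabulary size
    max_context: int = 512_000          # native long-context window (tokens)
    attention: str = "grouped_query"    # grouped query attention (GQA)
    activation: str = "swiglu"          # SwiGLU feed-forward activation
    norm: str = "rmsnorm"               # RMSNorm layer normalization
    pos_encoding: str = "rope"          # rotary positional encoding (RoPE)
```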

One of the defining features is its native long-context capability, with a maximum length of 512,000 tokens, designed to process extended documents and reasoning chains without performance loss.

That's twice the length of OpenAI's new GPT-5 model family and is roughly equivalent to about 1,600 pages of text, the length of a Christian Bible.
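That page figure is easy to sanity-check with common rules of thumb (roughly 0.75 English words per token and about 250 words per printed page; both are general heuristics, not Seed-OSS specifics):

```python
# Back-of-the-envelope check of the "about 1,600 pages" claim.
# Assumed heuristics: ~0.75 English words per token, ~250 words per page.
context_tokens = 512_000
words = context_tokens * 0.75   # ~384,000 words
pages = words / 250             # ~1,536 pages
print(f"~{words:,.0f} words, ~{pages:,.0f} pages")
```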

Another distinguishing element is the introduction of a thinking budget, which lets developers specify how much reasoning the model should perform before delivering an answer.

It's something we've seen from other recent open-source models as well, including Nvidia's new Nemotron-Nano-9B-v2, also available on Hugging Face.

In practice, this means teams can tune performance depending on the complexity of the task and the efficiency requirements of deployment.

Budgets are recommended in multiples of 512 tokens, with a budget of 0 producing a direct-response mode that skips explicit reasoning.
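As a rough sketch of how this could look in practice, the snippet below passes a budget through the chat template. Note that the `thinking_budget` parameter name and the Hugging Face repo id are assumptions drawn from the model card's description, not verified API details:

```python
# Hedged sketch: capping Seed-OSS's reasoning with a thinking budget.
# ASSUMPTIONS: the repo id below and the `thinking_budget` template argument
# are taken from the model card's description and may differ in practice;
# loading the model may also require trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance-Seed/Seed-OSS-36B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How many primes are below 50?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    thinking_budget=512,  # one 512-token increment; 0 would skip reasoning
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```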

Competitive performance on third-party benchmarks

Benchmarks published with the release position Seed-OSS-36B among the stronger large open-source models. The Instruct variant, in particular, posts state-of-the-art results in multiple areas.

  • Math and reasoning: Seed-OSS-36B-Instruct achieves 91.7 percent on AIME24 and 65 on BeyondAIME, both representing open-source "state-of-the-art" (SOTA) results.
  • Coding: On LiveCodeBench v6, the Instruct model records 67.4, another SOTA score.
  • Long-context handling: On RULER at 128K context length, it reaches 94.6, marking the highest open-source result reported.
  • Base model performance: The synthetic-data Base variant delivers 65.1 on MMLU-Pro and 81.7 on MATH, both state-of-the-art results in their categories.

The non-synthetic Base version, while slightly behind on many measures, proves competitive in its own right.

It outperforms its synthetic counterpart on GPQA-D, providing researchers with a cleaner, instruction-free baseline for experimentation.

For enterprises comparing open options, these results suggest Seed-OSS offers strong potential across math-heavy, coding, and long-context workloads while still providing flexibility for research use cases.

Access and deployment

Beyond performance, the Seed Team highlights accessibility for developers and practitioners. The models can be deployed using Hugging Face Transformers, with quantization support in both 4-bit and 8-bit formats to reduce memory requirements.
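A minimal sketch of the 4-bit path, assuming the widely used bitsandbytes integration in Transformers; the repo id is an assumption, so check the Hugging Face collection for exact names:

```python
# Hedged sketch: loading the Instruct model in 4-bit via bitsandbytes to
# reduce memory. Requires a CUDA GPU and the `bitsandbytes` package; the
# repo id is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ByteDance-Seed/Seed-OSS-36B-Instruct"  # assumed repo id

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit (NF4)
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # shard layers across available GPUs
)
```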

They also integrate with vLLM for scalable serving, including configuration examples and API server instructions.
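For vLLM, offline batch inference might look like the following sketch (again, the model id is an assumption):

```python
# Hedged sketch: offline batch inference with vLLM (model id assumed).
from vllm import LLM, SamplingParams

llm = LLM(model="ByteDance-Seed/Seed-OSS-36B-Instruct", tensor_parallel_size=2)
params = SamplingParams(temperature=0.7, max_tokens=512)

outputs = llm.generate(
    ["Summarize the Apache-2.0 license in two sentences."], params
)
print(outputs[0].outputs[0].text)
```

For serving, vLLM's `vllm serve <model-id>` command exposes an OpenAI-compatible endpoint, which is the usual route to the kind of API-server setup the team describes.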

To lower barriers further, the team includes scripts for inference, prompt customization, and tool integration.

For technical leaders managing small teams or working under budget constraints, these provisions are positioned to make experimentation with 36-billion-parameter models more approachable.

Licensing and considerations for enterprise decision-makers

With the models offered under Apache-2.0, organizations can adopt them without restrictive licensing terms, an important factor for teams balancing legal and operational concerns.

For decision makers evaluating the open-source landscape, the release brings three takeaways:

  • State-of-the-art benchmarks across math, coding, and long-context reasoning.
  • A balance between higher-performing synthetic-trained models and clean research baselines.
  • Accessibility features that lower operational overhead for lean engineering teams.

By placing strong performance and flexible deployment under an open license, ByteDance's Seed Team has added new options for enterprises, researchers, and developers alike.

