OpenAI debuts open source AI models with advanced reasoning

August 5, 2025

OpenAI presents gpt-oss-120b and gpt-oss-20b, its first open source language models since GPT-2, released under the Apache 2.0 license with freely downloadable weights.


The American company has introduced two new open source artificial intelligence models that incorporate advanced reasoning capabilities. The gpt-oss-120b and gpt-oss-20b models are OpenAI's first release of language models with accessible weights since GPT-2.

The gpt-oss-120b model has 117 billion total parameters and activates 5.1 billion per token, while gpt-oss-20b has 21 billion total parameters and activates 3.6 billion per token. Both use a transformer architecture with a mixture-of-experts (MoE) design that reduces the number of parameters active during processing.
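To put the MoE savings in concrete terms, the fraction of parameters active per token can be computed directly from the figures above (a minimal sketch using only the counts quoted in this article):

```python
# Active-parameter fraction per token, from the totals and
# per-token activations quoted above (in billions of parameters).
models = {
    "gpt-oss-120b": {"total_b": 117.0, "active_b": 5.1},
    "gpt-oss-20b": {"total_b": 21.0, "active_b": 3.6},
}

for name, p in models.items():
    fraction = p["active_b"] / p["total_b"]
    print(f"{name}: {fraction:.1%} of parameters active per token")
```

Only about 4% of gpt-oss-120b's parameters (and about 17% of gpt-oss-20b's) participate in any single forward pass, which is what makes the compute cost much lower than the total parameter counts suggest.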

The models are optimized to run on consumer hardware. The gpt-oss-120b operates efficiently on an 80 GB GPU, while gpt-oss-20b can run on edge devices with just 16 GB of memory. This technical capability makes them accessible to independent developers and organizations with limited resources.
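A rough back-of-the-envelope check shows why these models fit the stated hardware, assuming roughly 4.25 bits per parameter (an MXFP4-style 4-bit quantization; the exact packing overhead is an assumption here, and activations and KV cache need additional memory):

```python
def approx_weight_gb(params_billions: float, bits_per_param: float = 4.25) -> float:
    """Rough weight-memory estimate: parameters * bits per parameter, in GB."""
    total_bits = params_billions * 1e9 * bits_per_param
    return total_bits / 8 / 1e9  # bits -> bytes -> gigabytes

# 117B parameters at ~4.25 bits/param comes to roughly 62 GB of weights,
# leaving headroom on an 80 GB GPU; 21B parameters come to roughly 11 GB,
# consistent with running on a 16 GB edge device.
print(f"gpt-oss-120b: ~{approx_weight_gb(117):.0f} GB")
print(f"gpt-oss-20b:  ~{approx_weight_gb(21):.0f} GB")
```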

In comparative evaluations, gpt-oss-120b achieves performance close to o4-mini on core reasoning benchmarks and surpasses o3-mini in competitive programming, mathematics, and tool usage. The smaller model, gpt-oss-20b, matches or exceeds o3-mini on the same evaluations.

The models expose their full chain-of-thought (CoT) reasoning, which was not directly supervised during training, a property OpenAI considers fundamental for detecting inappropriate behavior. Developers can adjust the reasoning effort across three levels (low, medium, and high) to trade off latency against performance.
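The effort level is typically communicated to the model through the system prompt. A minimal illustrative helper is sketched below; the "Reasoning: <level>" line is an assumed convention for illustration, not an official API, so consult the model card for the exact format the gpt-oss chat template expects:

```python
VALID_EFFORTS = ("low", "medium", "high")

def build_system_prompt(base: str, effort: str = "medium") -> str:
    """Append a reasoning-effort directive to a system prompt.

    The 'Reasoning: <level>' line is an illustrative convention (an
    assumption); check the official chat template for the real format.
    """
    if effort not in VALID_EFFORTS:
        raise ValueError(f"effort must be one of {VALID_EFFORTS}, got {effort!r}")
    return f"{base}\nReasoning: {effort}"

prompt = build_system_prompt("You are a helpful assistant.", effort="high")
print(prompt)
```

Validating the level up front keeps an unsupported value from silently producing default behavior.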

OpenAI has implemented security measures specific to open models. The company adversarially fine-tuned the models on malicious data to evaluate potential misuse, concluding that even models modified for malicious purposes do not reach high capability levels under its Preparedness Framework. This methodology was reviewed by three independent groups of external experts.

To foster ecosystem security, OpenAI has organized a Red Teaming Challenge with a prize fund of $500,000 to identify new security risks in the models.

The training process of these models was based on a high-quality dataset, primarily in English, with special emphasis on STEM disciplines (science, technology, engineering, and mathematics), programming, and general knowledge. This content selection seeks to optimize the technical and scientific reasoning capabilities of the models.

The model weights are available for free on Hugging Face. The company has established collaborations with platforms like Azure, AWS, vLLM, and Ollama, and with hardware manufacturers like NVIDIA, AMD, Cerebras, and Groq to facilitate deployment. Microsoft will bring optimized versions of the gpt-oss-20b model to Windows devices through ONNX Runtime.

This release aims to accelerate artificial intelligence research and reduce access barriers for emerging markets and organizations with limited resources. OpenAI considers that a healthy ecosystem of open models is fundamental to making AI more accessible and democratic. The company will evaluate whether the advantages of these models justify future investments in open source developments.
