MiniMind is a minimalist open-source project designed to train a 25.8M-parameter GPT model from scratch in just 2 hours on a single GPU, at a cost as low as $3 (roughly 1/7000th the size of GPT-3). It fully open-sources the end-to-end pipeline—including dataset cleaning, pretraining, supervised fine-tuning (SFT), LoRA tuning, direct preference optimization (DPO), and model distillation—all implemented in native PyTorch without relying on third-party abstractions. Compatible with transformers, trl, peft, and other popular LLM ecosystems, it supports both single- and multi-GPU training as well as inference-server deployment, making it an ideal hands-on tutorial and lightweight LLM training framework.
Source code: https://github.com/jingyaogong/minimind
The project includes:
- Complete code for the MiniMind-LLM architecture (both dense and MoE variants).
- Detailed tokenizer training code.
- Full training code for pretraining, SFT, LoRA, RLHF (DPO), and model distillation.
- High-quality datasets for every training stage—collected, distilled, deduplicated, and cleaned—all open-sourced.
- From-scratch implementations of pretraining, instruction fine-tuning, LoRA, DPO, and white-box model distillation; the core algorithms are nearly independent of third-party packaged frameworks and are fully open source.
- Also compatible with mainstream third-party frameworks such as `transformers`, `trl`, and `peft`.
- Training supports single-machine single-GPU and single-machine multi-GPU setups (DDP, DeepSpeed), with wandb visualization of the training process and dynamic stop/resume of training runs.
- Model evaluation on third-party benchmarks (C-Eval, C-MMLU, OpenBookQA, etc.).
- A minimalist server implementing the OpenAI API protocol, easy to integrate into third-party chat UIs (FastGPT, Open-WebUI, etc.).
- The simplest possible chat WebUI front end, built with Streamlit.
- Fully compatible with popular community inference engines such as `llama.cpp`, `vllm`, and `ollama`, as well as the `Llama-Factory` training framework.
- A MiniMind-Reason model that reproduces (via distillation/RL) the large-scale reasoning model DeepSeek-R1. All data and models are open source!
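To make the LoRA bullet above concrete, here is a minimal numpy sketch of the low-rank-adapter idea; MiniMind implements this in native PyTorch, and the dimensions and variable names here are purely illustrative, not taken from the repo:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 16, 16, 4, 8  # toy sizes; real model dims differ

# Frozen pretrained weight (never updated during LoRA fine-tuning).
W = rng.normal(size=(d_out, d_in))

# LoRA adapters: A starts small and random, B starts at zero, so the
# adapted layer is exactly equal to the frozen layer at initialization.
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x, W, A, B, alpha, r):
    """y = W x + (alpha / r) * B (A x); only A and B receive gradients."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(2, d_in))
y0 = lora_forward(x, W, A, B, alpha, r)

# With B = 0, the adapter contributes nothing yet.
assert np.allclose(y0, x @ W.T)
```

Because only the small `A` and `B` matrices are trained (here 2 × 4 × 16 = 128 parameters instead of 256), LoRA fine-tuning fits comfortably in the low-cost single-GPU budget the project targets.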
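The RLHF-DPO stage listed above optimizes a simple pairwise objective. A stdlib-only sketch of the standard DPO loss for one preference pair is shown below (MiniMind's actual PyTorch implementation operates on batched token log-probs; the function name and scalar interface here are illustrative):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss for one preference pair of sequence log-probs:
    -log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r))).
    pi_* come from the policy being trained, ref_* from the frozen
    reference model; beta controls deviation from the reference."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Zero margin gives the maximum-entropy value -log(0.5) = ln 2 ≈ 0.693;
# the loss shrinks as the policy prefers the chosen answer more strongly.
base = dpo_loss(0.0, 0.0, 0.0, 0.0)
better = dpo_loss(2.0, 0.0, 0.0, 0.0)
assert better < base
```

No reward model or PPO rollout is needed, which is why DPO is a good fit for a minimal, low-budget training pipeline like this one.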
I hope this open-source project helps LLM beginners get started quickly!
Original article by Libre Depot. Please credit the source when reprinting: https://www.libredepot.top/5479.html