MiniMind trains small language models from scratch

MiniMind is an open-source project that trains a tiny GPT-style model from scratch; the smallest variant has just 25.8M parameters (roughly 1/7000 the size of GPT-3) and can be trained in about 2 hours for around $3 in GPU rental cost. Fully implemented in native PyTorch, it provides end-to-end code for data preprocessing, pretraining, fine-tuning (SFT/LoRA/DPO), distillation, and inference. Compatible with mainstream toolkits and multimodal extensions, MiniMind serves as a hands-on tutorial for users with a personal GPU and for LLM beginners.
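
Because everything is plain PyTorch, the core pretraining step fits in a short script. The sketch below is illustrative only: TinyGPT is a toy decoder-only stand-in, not MiniMind's actual model class, and the hyperparameters (vocab size 6400, dim 512, 8 layers) and random token batch are assumptions standing in for the real architecture and dataloader.

```python
# Minimal sketch of a native-PyTorch pretraining step for a tiny causal LM.
# TinyGPT is a toy stand-in, NOT MiniMind's actual model class.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGPT(nn.Module):
    def __init__(self, vocab_size=6400, dim=512, n_layers=8, n_heads=8, max_len=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(dim, vocab_size, bias=False)

    def forward(self, idx):
        t = idx.size(1)
        pos = torch.arange(t, device=idx.device)
        x = self.tok_emb(idx) + self.pos_emb(pos)
        # Upper-triangular boolean mask: True blocks attention to future tokens.
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=idx.device), 1)
        x = self.blocks(x, mask=causal)
        return self.lm_head(x)

model = TinyGPT()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
tokens = torch.randint(0, 6400, (4, 256))   # stand-in for one pretraining batch

logits = model(tokens[:, :-1])              # predict token t+1 from tokens <= t
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                       tokens[:, 1:].reshape(-1))
opt.zero_grad()
loss.backward()
opt.step()
```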

Official website: https://github.com/jingyaogong/minimind

The project includes

  • The complete code for the MiniMind LLM architecture (both Dense and MoE variants).
  • Detailed training code for the tokenizer (see the BPE sketch after this list).
  • Full-process training code for pretraining, SFT, LoRA, RLHF-DPO, and model distillation.
  • High-quality datasets for every training stage, collected, distilled, organized, cleaned, and deduplicated, all open source.
  • Pretraining, instruction fine-tuning, LoRA, DPO reinforcement learning, and white-box model distillation implemented from scratch; the key algorithms barely depend on third-party high-level frameworks and are fully open source (a DPO loss sketch follows this list).
  • Also compatible with mainstream third-party frameworks such as transformers, trl, peft, and so on.
  • Training supports single-machine single-GPU and single-machine multi-GPU setups (DDP, DeepSpeed), with wandb visualization of the training process and support for stopping and resuming training at any point.
  • Model evaluation on third-party benchmarks (C-Eval, C-MMLU, OpenBookQA, etc.).
  • A minimalist server implementing the OpenAI API protocol, easy to integrate into third-party ChatUIs such as FastGPT and Open-WebUI (see the endpoint sketch after this list).
  • The simplest possible chat WebUI front end, built with Streamlit.
  • Full compatibility with popular community inference engines (llama.cpp, vllm, ollama) and the Llama-Factory training framework.
  • A MiniMind-Reason model that reproduces the large reasoning model DeepSeek-R1 via distillation/RL. All data and models are open source!
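
The tokenizer item above refers to training a small BPE vocabulary on the pretraining corpus. Below is a minimal sketch using the Hugging Face tokenizers library; the vocabulary size, special tokens, and corpus filename are illustrative assumptions, not necessarily MiniMind's exact settings.

```python
# Sketch of BPE tokenizer training with the huggingface `tokenizers` library.
# Vocab size, special tokens, and the corpus file are assumed values.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

trainer = trainers.BpeTrainer(
    vocab_size=6400,                          # tiny vocab for a tiny model
    special_tokens=["<unk>", "<s>", "</s>"],  # pad/BOS/EOS-style markers
)
tokenizer.train(files=["pretrain_corpus.txt"], trainer=trainer)  # hypothetical corpus
tokenizer.save("tokenizer.json")
```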
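
For the from-scratch DPO implementation mentioned above, the core objective is compact enough to show directly. The function below is a generic sketch of the standard DPO loss (Rafailov et al., 2023), not a copy of MiniMind's code; it assumes you have already computed summed per-response log-probabilities under the policy and a frozen reference model.

```python
# Generic sketch of the Direct Preference Optimization (DPO) objective,
# implemented without trl. Input tensors are assumed precomputed.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each argument is a (batch,) tensor of summed log-probabilities of the
    chosen/rejected responses under the trained policy or the frozen
    reference model."""
    policy_logratio = policy_chosen_logp - policy_rejected_logp
    ref_logratio = ref_chosen_logp - ref_rejected_logp
    # Push the policy to prefer chosen over rejected more than the reference does.
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()
```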
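
The OpenAI-API server item can be pictured as a single POST endpoint. The FastAPI sketch below mimics the /v1/chat/completions response schema that ChatUIs such as FastGPT and Open-WebUI expect; generate_reply() is a hypothetical placeholder for the actual model inference call, not MiniMind's real function.

```python
# Minimal sketch of an OpenAI-API-compatible chat endpoint.
# generate_reply() is a hypothetical stand-in for real model inference.
import time
import uuid

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    model: str
    messages: list[dict]
    temperature: float = 0.7

def generate_reply(messages, temperature):  # hypothetical inference call
    return "Hello from MiniMind!"

@app.post("/v1/chat/completions")
def chat_completions(req: ChatRequest):
    # Mirror the response schema OpenAI-compatible clients expect.
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant",
                        "content": generate_reply(req.messages, req.temperature)},
            "finish_reason": "stop",
        }],
    }
```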
