
Coobiw/MPP-LLaVA

Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B MLLM with LLaVA-style training on a 24 GB RTX 3090/4090.

669 stars · 35 forks · +2 stars/week · Source: GitHub
Topics: deepspeed, fine-tuning, mllm, model-parallel, multimodal-large-language-models, pipeline-parallelism, pretraining, qwen, video-language-model, video-large-language-models

[Chart: Star & Fork Trend, 18 data points, Stars and Forks series]

Multi-Source Signals

Growth Velocity

Coobiw/MPP-LLaVA gained +2 stars this period. 7-day velocity: 0.3%.
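The 0.3% figure is consistent with defining 7-day velocity as new stars divided by the current star total. A minimal sketch of that calculation; the formula is an assumption, since the dashboard does not document its exact definition:

```python
def weekly_velocity(new_stars: int, total_stars: int) -> float:
    """7-day star velocity as a percentage of the current total.

    Assumed formula: (new stars this period / current stars) * 100,
    rounded to one decimal place.
    """
    return round(new_stars / total_stars * 100, 1)

# +2 new stars on a 669-star repository
print(weekly_velocity(2, 669))  # -> 0.3
```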


Metric        | MPP-LLaVA        | Complete-Life-Cycle-of-a-Data-Science-Project | Awesome-Model-Merging-Methods-Theories-Applications | MARS
Stars         | 669              | 638                                           | 708                                                 | 717
Forks         | 35               | 254                                           | 41                                                  | 49
Weekly Growth | +2               | +1                                            | +1                                                  | -1
Language      | Jupyter Notebook | N/A                                           | N/A                                                 | Python
Sources       | 1                | 1                                             | 1                                                   | 1
License       | N/A              | MIT                                           | N/A                                                 | Apache-2.0

Capability Radar vs Complete-Life-Cycle-of-a-Data-Science-Project

Maintenance Activity: 0

Last code push: 394 days ago.

Community Engagement: 26

Fork-to-star ratio: 5.2%. A lower fork ratio may indicate passive usage.
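The 5.2% figure matches forks divided by stars for this repository (35 / 669). A small sketch of that arithmetic, using the star and fork counts reported above:

```python
def fork_star_ratio(forks: int, stars: int) -> float:
    """Fork-to-star ratio as a percentage, rounded to one decimal place."""
    return round(forks / stars * 100, 1)

# MPP-LLaVA: 35 forks, 669 stars
print(fork_star_ratio(35, 669))  # -> 5.2
```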

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 58

+2 stars this period (0.30% growth rate).

License Clarity: 30

No clear license detected; proceed with caution.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.