
haotian-liu/LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

24.7k stars · 2.8k forks · +7 stars/week
Sources (2): GitHub, PyPI
chatbot chatgpt foundation-models gpt-4 instruction-tuning llama llama-2 llama2 llava multi-modality multimodal vision-language-model

[Chart: Star & Fork Trend, stars and forks over time, 31 data points]

Multi-Source Signals

Growth Velocity

haotian-liu/LLaVA gained +7 stars this period, with cross-source activity across 2 platforms (GitHub, PyPI). 7-day velocity: 0.0%.
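The page does not spell out how the velocity figure is computed; below is a minimal sketch, assuming "7-day velocity" means the percent change in cumulative stars over the trailing seven days. The star history values are illustrative, not actual LLaVA data.

```python
def seven_day_velocity(star_history: list[int]) -> float:
    """Percent change in cumulative stars over the trailing 7 days.

    `star_history` is assumed to hold daily cumulative star counts,
    oldest first. With ~24.7k stars and only +7 new ones, the result
    is ~0.03%, which rounds to the 0.0% shown above at one decimal place.
    """
    if len(star_history) < 8 or star_history[-8] == 0:
        return 0.0
    week_ago, today = star_history[-8], star_history[-1]
    return (today - week_ago) / week_ago * 100

# Illustrative daily counts ending at 24,700 stars (+7 over the week)
history = [24_693] * 25 + [24_694, 24_695, 24_696, 24_697, 24_698, 24_699, 24_700]
print(f"{seven_day_velocity(history):.1f}%")  # -> 0.0%
```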

Deep, signal-backed technical analysis is still being generated for this repository and will be available soon.

Metric          LLaVA       unilm       serve       UI-TARS-desktop
Stars           24.7k       22.1k       21.9k       29.3k
Forks           2.8k        2.7k        2.2k        2.9k
Weekly Growth   +7          +1          -1          +19
Language        Python      Python      Python      TypeScript
Sources         2           2           2           1
License         Apache-2.0  MIT         Apache-2.0  Apache-2.0

[Chart: Capability Radar comparing LLaVA and unilm]
Maintenance Activity: 0

Last code push was 604 days ago.

Community Engagement: 56

Fork-to-star ratio: 11.2%, indicating an active community that forks and contributes (a quick calculation follows).
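As a quick check on that ratio, here is a small helper; the exact star and fork counts passed in are hypothetical values that happen to round to the displayed 24.7k and 2.8k, since the precise counts behind the 11.2% figure are not shown on this page.

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Fork-to-star ratio as a percentage; a rough proxy for how many
    stargazers also fork (and potentially contribute to) the project."""
    return forks / stars * 100 if stars else 0.0

# Hypothetical exact counts; real counts behind the 11.2% are not shown here.
print(f"{fork_to_star_ratio(2_768, 24_712):.1f}%")  # -> 11.2%
```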

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 42

+7 stars this period, a growth rate of roughly 0.03% (7 of ~24.7k stars).

License Clarity: 95

Licensed under Apache-2.0, a permissive license that is safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
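The exact scoring formulas are not documented on this page. The sketch below shows one plausible way 0-100 scores like Maintenance Activity and Community Engagement could be normalized from raw repository data; the RepoSnapshot fields, the 180-day window, and the 20% ratio cap are all assumptions for illustration, not the dashboard's actual method.

```python
from dataclasses import dataclass

@dataclass
class RepoSnapshot:
    days_since_push: int
    forks: int
    stars: int

def maintenance_score(snap: RepoSnapshot) -> int:
    """0-100, higher is healthier; decays linearly as the last push ages.
    The 180-day window is an assumed threshold."""
    return max(0, round(100 * (1 - snap.days_since_push / 180)))

def engagement_score(snap: RepoSnapshot) -> int:
    """0-100 from the fork-to-star ratio; a 20% ratio maps to 100 (assumed cap)."""
    ratio = snap.forks / snap.stars if snap.stars else 0.0
    return min(100, round(ratio / 0.20 * 100))

# Illustrative run with this page's headline numbers:
snap = RepoSnapshot(days_since_push=604, forks=2_800, stars=24_700)
print(maintenance_score(snap))  # -> 0 (last push is far past the assumed window)
print(engagement_score(snap))   # -> 57 (close to the displayed 56)
```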