
vllm-project/vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Stars: 75.7k · Forks: 15.3k · Weekly growth: +120
Sources: GitHub, PyPI
Topics: amd, blackwell, cuda, deepseek, deepseek-v3, gpt, gpt-oss, inference, kimi, llama, llm, llm-serving

[Chart: Star & Fork Trend (50 data points), with Stars and Forks series]

Multi-Source Signals

Growth Velocity

vllm-project/vllm has gained +120 stars this period, with cross-source activity across 2 platforms (GitHub, PyPI). 7-day velocity: 0.4%.


| Metric        | vllm       | gpt4all | ragflow    | llm-course |
|---------------|------------|---------|------------|------------|
| Stars         | 75.7k      | 77.3k   | 77.5k      | 78.0k      |
| Forks         | 15.3k      | 8.3k    | 8.7k       | 9.1k       |
| Weekly Growth | +120       | +0      | +105       | +38        |
| Language      | Python     | C++     | Python     | N/A        |
| Sources       | 2          | 3       | 2          | 2          |
| License       | Apache-2.0 | MIT     | Apache-2.0 | Apache-2.0 |

Capability Radar vs gpt4all

Maintenance Activity: 100

Last code push was 0 days ago.

Community Engagement: 100

Fork-to-star ratio: 20.3%, indicating an active community that forks and contributes.
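As a quick sanity check, the ratio above can be reproduced from the headline counts in the comparison table. This is a minimal sketch (the function name is illustrative, not part of the dashboard); note that the rounded 15.3k/75.7k figures give roughly 20.2%, while the dashboard's 20.3% presumably comes from exact counts.

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Fraction of stargazers who also forked -- a rough proxy for hands-on engagement."""
    return forks / stars

# Rounded headline counts for vllm from the table above
ratio = fork_to_star_ratio(15_300, 75_700)
print(f"{ratio:.1%}")  # prints 20.2% with rounded counts
```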

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 50

+120 stars this period, a 0.16% growth rate.
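The 0.16% figure follows directly from stars gained this period divided by the current star total. A minimal sketch, with an illustrative function name:

```python
def growth_rate(new_stars: int, total_stars: int) -> float:
    """Period star gain as a fraction of the current star count."""
    return new_stars / total_stars

# vllm: +120 stars against 75.7k total
print(f"{growth_rate(120, 75_700):.2%}")  # prints 0.16%
```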

License Clarity: 95

Licensed under Apache-2.0. Permissive — safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.