
guqiong96/Lvllm

LvLLM is a NUMA-specialized extension of vLLM that makes full use of CPU and memory resources, reduces GPU memory requirements, and features an efficient GPU-parallel and NUMA-parallel architecture supporting hybrid inference for MoE (Mixture-of-Experts) large models.
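The NUMA-parallel idea in the description — keeping each CPU worker's threads and memory local to one NUMA node — can be sketched as follows. This is an illustrative sketch only; the function names and core layout are assumptions, not LvLLM's actual API.

```python
import os

def partition_cpus(num_cpus: int, num_numa_nodes: int) -> list[list[int]]:
    """Split CPU ids into contiguous per-NUMA-node groups (hypothetical layout:
    assumes cores are numbered contiguously within each node)."""
    per_node = num_cpus // num_numa_nodes
    return [list(range(n * per_node, (n + 1) * per_node))
            for n in range(num_numa_nodes)]

def bind_worker_to_node(node_cpus: list[int]) -> None:
    """Restrict the current process to one node's cores, so memory its threads
    touch is allocated NUMA-locally (Linux first-touch policy)."""
    if hasattr(os, "sched_setaffinity"):  # Linux-only API
        os.sched_setaffinity(0, set(node_cpus))

# Example: a 16-core, 2-node machine yields two groups of 8 cores each;
# each NUMA worker process would call bind_worker_to_node on its own group.
groups = partition_cpus(num_cpus=16, num_numa_nodes=2)
```

Pinning a worker before it allocates its tensors is what makes the scheme pay off: under first-touch allocation, memory lands on the node whose cores touch it first, avoiding cross-node traffic during decode.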

Stars: 317 · Forks: 30 · Growth: +1/wk
Topics: cpu · decode · gpu · hybrid inference · model · moe · numa · parallelism · prefill · vllm

[Chart: Star & Fork Trend (15 data points) — stars and forks over time]

Multi-Source Signals

Growth Velocity

guqiong96/Lvllm gained +1 star this period. 7-day velocity: 1.6%.


Metric         Lvllm        ai-legal-compliance-assistant   cymbal-air-toolbox-demo   hud-python
Stars          317          319                             321                       312
Forks          30           43                              11                        454
Weekly Growth  +1           +0                              +0                        -7
Language       Python       Python                          Python                    Python
Sources        1            1                               1                         1
License        Apache-2.0   N/A                             Apache-2.0                MIT

Capability Radar vs ai-legal-compliance-assistant

Lvllm
ai-legal-compliance-assistant
Maintenance Activity 100

Last code push: 1 day ago.

Community Engagement 47

Fork-to-star ratio: 9.5%. A lower fork ratio may indicate passive usage rather than active contribution.

Issue Burden 70

Issue data not yet available.

Growth Momentum 59

+1 star this period, a 0.32% growth rate.
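Both percentages on this card follow directly from the raw counts in the comparison table (317 stars, 30 forks, +1 star this period); this is plain arithmetic, not a documented scoring formula:

```python
# Recompute the dashboard's percentages from Lvllm's raw counts.
stars, forks, new_stars = 317, 30, 1

fork_to_star = forks / stars * 100      # community-engagement signal
growth_rate = new_stars / stars * 100   # growth-momentum signal

print(f"Fork-to-star ratio: {fork_to_star:.1f}%")  # 9.5%
print(f"Growth rate: {growth_rate:.2f}%")          # 0.32%
```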

License Clarity 95

Licensed under Apache-2.0. Permissive — safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.