
pjlab-sys4nlp/llama-moe

⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)

Stars: 999 | Forks: 60 | Weekly change: -1
Source: GitHub
Topics: continual-pre-training, expert-partition, llama, llm, mixture-of-experts, moe

[Chart] Star & Fork Trend (17 data points): stars and forks over time.

Multi-Source Signals

Growth Velocity

pjlab-sys4nlp/llama-moe lost 1 star this period. 7-day velocity: -0.1%.
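
The -0.1% velocity quoted above is consistent with a one-star drop against the current 999 stars. A minimal sketch of that calculation, assuming velocity is simply the 7-day star change divided by the current star count (the dashboard's exact formula is not stated):

```python
# Minimal sketch of the quoted 7-day velocity; the exact formula used by
# the dashboard is an assumption (star change over current star count).
def weekly_velocity(star_delta: int, current_stars: int) -> float:
    """Return the 7-day star change as a percentage of the current star count."""
    if current_stars == 0:
        return 0.0
    return 100.0 * star_delta / current_stars

# llama-moe: -1 star over the period against 999 stars -> roughly -0.1%
print(f"{weekly_velocity(-1, 999):.1f}%")  # -0.1%
```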

Deep, signal-backed technical analysis is still being generated for this repository and will be available soon.

Metric | llama-moe | OpenAlpha_Evolve | NeuroSploit | Prompt4ReasoningPapers
Stars | 999 | 997 | 1.0k | 1.0k
Forks | 60 | 147 | 248 | 67
Weekly Growth | -1 | +0 | +2 | +0
Language | Python | Python | Python | N/A
Sources | 1 | 1 | 1 | 1
License | Apache-2.0 | MIT | N/A | MIT
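
The per-repository columns above (stars, forks, language, license) map directly onto fields returned by GitHub's REST API, which is one plausible source for this table. A hedged sketch using the requests library; only the llama-moe repository path comes from this page, and looping over the other three repositories is an assumption:

```python
# Hedged sketch: collecting the table's per-repository columns from the
# public GitHub REST API (GET /repos/{owner}/{repo}).
import requests

REPOS = [
    "pjlab-sys4nlp/llama-moe",
    # Full names of the other three compared repositories are not given here.
]

def fetch_repo_metrics(full_name: str) -> dict:
    resp = requests.get(f"https://api.github.com/repos/{full_name}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {
        "repo": full_name,
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "language": data["language"] or "N/A",
        "license": (data["license"] or {}).get("spdx_id", "N/A"),
    }

for repo in REPOS:
    print(fetch_repo_metrics(repo))
```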

[Chart] Capability Radar: llama-moe vs OpenAlpha_Evolve.
Maintenance Activity: 0

Last code push 489 days ago.
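
A score of 0 next to a last push 489 days ago suggests the metric is driven by push recency. The function below is only a hypothetical illustration of such a mapping; the linear scale and the one-year cutoff are assumptions, not the dashboard's actual rule:

```python
# Hypothetical recency-based maintenance score (0-100). The thresholds are
# assumptions chosen to illustrate the idea, not the dashboard's formula.
def maintenance_score(days_since_last_push: int, stale_after_days: int = 365) -> int:
    """Scale linearly from 100 (pushed today) to 0 at stale_after_days or more."""
    if days_since_last_push >= stale_after_days:
        return 0
    return round(100 * (1 - days_since_last_push / stale_after_days))

print(maintenance_score(489))  # 0, consistent with a last push 489 days ago
```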

Community Engagement: 30

Fork-to-star ratio: 6.0%. A lower fork ratio may indicate passive usage, with users starring the project rather than building on the code.
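
The 6.0% figure follows directly from the comparison table above (60 forks against 999 stars); a one-line check:

```python
# Fork-to-star ratio for llama-moe, using the figures from the comparison table.
forks, stars = 60, 999
print(f"{100 * forks / stars:.1f}%")  # 6.0%
```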

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 30

No measurable growth in the current period (first-day cold start expected).

License Clarity: 95

Licensed under Apache-2.0, a permissive license suitable for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
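
For reference, the five sub-scores above (0, 30, 70, 30, 95) could be rolled up into a single health figure. The equal-weight average below is an assumption for illustration; the page does not state whether or how the sub-scores are combined:

```python
# Hedged sketch: one way to aggregate the five 0-100 sub-scores shown above.
# Equal weighting is an assumption; the dashboard's method is not published.
SUB_SCORES = {
    "maintenance_activity": 0,
    "community_engagement": 30,
    "issue_burden": 70,
    "growth_momentum": 30,
    "license_clarity": 95,
}

def overall_health(scores, weights=None):
    """Weighted mean of 0-100 sub-scores; defaults to equal weights."""
    weights = weights or {name: 1.0 for name in scores}
    total = sum(weights.values())
    return sum(scores[name] * weights[name] for name in scores) / total

print(f"{overall_health(SUB_SCORES):.1f}")  # 45.0 with equal weights
```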