Maluuba/nlg-eval
Evaluation code for various unsupervised automated metrics for Natural Language Generation.
1.4k stars · 226 forks · +0 stars/wk (GitHub)
bleu bleu-score cider dialog dialogue evaluation machine-translation meteor natural-language-generation natural-language-processing nlg nlp
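The tags above (bleu, meteor, cider, …) are all unsupervised n-gram overlap metrics. As a minimal sketch of what such a metric measures, here is a self-contained clipped unigram precision (the BLEU-1 component, without the brevity penalty) in plain Python; nlg-eval itself wraps the full reference implementations rather than anything this simple.

```python
from collections import Counter

def unigram_precision(hypothesis: str, reference: str) -> float:
    """Clipped unigram precision: the fraction of hypothesis tokens
    that also appear in the reference, with per-token counts clipped
    to their count in the reference."""
    hyp_tokens = hypothesis.split()
    ref_counts = Counter(reference.split())
    hyp_counts = Counter(hyp_tokens)
    overlap = sum(min(count, ref_counts[tok]) for tok, count in hyp_counts.items())
    return overlap / len(hyp_tokens)

print(unigram_precision("the cat sat on the mat",
                        "the cat is on the mat"))  # 5/6 ≈ 0.83
```

Full BLEU extends this to higher-order n-grams, combines them geometrically, and applies a brevity penalty; METEOR and CIDEr add stemming/synonymy and TF-IDF weighting, respectively.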
[Chart: Star & Fork Trend (15 data points), series: Stars, Forks; +0 this period]
Multi-Source Signals
Growth Velocity
Maluuba/nlg-eval gained +0 stars this period. Velocity data will be available once more historical data has been collected.
Signal-backed deep analysis is still being generated for this repository and will be available soon.
| Metric | nlg-eval | RCLI | awesome-seedance | NL2SQL_Handbook |
|---|---|---|---|---|
| Stars | 1.4k | 1.4k | 1.4k | 1.4k |
| Forks | 226 | 71 | 171 | 85 |
| Weekly Growth | +0 | +4 | +3 | +2 |
| Language | Python | C++ | Shell | Python |
| Sources | 1 | 1 | 1 | 1 |
| License | NOASSERTION | MIT | N/A | N/A |
[Radar chart: capability comparison, nlg-eval vs RCLI]
- Maintenance Activity: 0. Last code push was 596 days ago.
- Community Engagement: 81. Fork-to-star ratio of 16.2%, indicating an active community that forks and contributes.
- Issue Burden: 70. Issue data not yet available.
- Growth Momentum: 30. No measurable growth in the current period (a first-day cold start is expected).
- License Clarity: 30. No clear license detected; proceed with caution.
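As a back-of-envelope check of the Community Engagement figure above: the fork-to-star ratio follows directly from the displayed counts. The exact star count is not shown (only the rounded "1.4k"), so this sketch lands slightly off the page's 16.2%.

```python
# Counts as displayed on the page; the star count is rounded.
stars = 1_400   # shown as "1.4k"
forks = 226

ratio_pct = forks / stars * 100
print(f"fork-to-star ratio ≈ {ratio_pct:.1f}%")  # ≈ 16.1% with the rounded count
```

The small gap between 16.1% and the reported 16.2% suggests the dashboard divides by the exact (unrounded) star count.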
Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.