anthropics/hh-rlhf
Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
Stars: 1.8k · Forks: 154 · +1 star/wk
[Chart: Star & Fork Trend, 18 data points (series: Stars, Forks)]
Multi-Source Signals
Growth Velocity
anthropics/hh-rlhf gained +1 star this period. 7-day velocity: 0.1%.
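The velocity figure appears to be new stars over the window divided by the total star count, expressed as a percentage. A minimal sketch, assuming the displayed +1 star against roughly 1,833 total stars (a hypothetical exact count consistent with the displayed "1.8k"):

```python
def star_velocity(new_stars: int, total_stars: int) -> float:
    """Percentage star growth over the measurement window (7 days here)."""
    return new_stars / total_stars * 100

# Assumed figures: +1 new star on ~1,833 total (consistent with "1.8k").
velocity = star_velocity(1, 1833)
print(round(velocity, 1))  # one decimal place, as shown on the card
```

Rounded to one decimal this reproduces the 0.1% shown above; rounded to two decimals it matches the 0.05% growth rate quoted under Growth Momentum, which suggests both cards derive from the same ratio.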
Deep analysis is being generated for this repository.
Signal-backed technical analysis will be available soon.
No comparable projects found in the same topic categories.
Maintenance Activity: 0
Last code push: 295 days ago.
Community Engagement: 42
Fork-to-star ratio: 8.4%. A lower fork ratio may indicate passive usage (users consume the data without modifying it).
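The fork-to-star ratio is simply forks divided by stars as a percentage. A sketch, again assuming a hypothetical total of 1,833 stars (consistent with the displayed 1.8k and the 8.4% figure) and the displayed 154 forks:

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Forks as a percentage of stars; a rough proxy for active reuse."""
    return forks / stars * 100

# Assumed figures from the card: 154 forks, ~1,833 stars.
ratio = fork_to_star_ratio(154, 1833)
print(round(ratio, 1))  # → 8.4
```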
Issue Burden: 70
Issue data not yet available.
Growth Momentum: 43
+1 star this period (0.05% growth rate).
License Clarity: 95
Licensed under MIT. Permissive; safe for commercial use.
Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.