
PKU-Alignment/safe-rlhf

Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback

Stars: 1.6k · Forks: 132 · Weekly growth: +0
Source: GitHub
Topics: ai-safety, alpaca, beaver, datasets, deepspeed, gpt, large-language-models, llama, llm, llms, reinforcement-learning, reinforcement-learning-from-human-feedback

[Chart: Star & Fork Trend (24 data points); series: Stars, Forks]

Multi-Source Signals

Growth Velocity

PKU-Alignment/safe-rlhf gained 0 stars this period. 7-day velocity: 0.1%.
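For reference, a minimal sketch of how a 7-day velocity figure like this could be computed. The definition used here (week-over-week star delta divided by the current total) and the helper name are assumptions; the page does not document its formula.

```python
def seven_day_velocity(stars_now: int, stars_7d_ago: int) -> float:
    """Star growth over the past week as a fraction of the current total.

    Assumed definition: (7-day delta) / current stars. The dashboard's
    actual formula is not documented, so treat this as illustration only.
    """
    if stars_now == 0:
        return 0.0
    return (stars_now - stars_7d_ago) / stars_now

# At ~1,600 stars, even a single new star over the week already
# rounds to the 0.1% shown above.
print(f"{seven_day_velocity(1600, 1599):.1%}")  # 0.1%
```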


Metric         safe-rlhf    AI-Codereview-Gitlab   llm-chain   LLM-eval-survey
Stars          1.6k         1.6k                   1.6k        1.6k
Forks          132          336                    143         99
Weekly Growth  +0           +0                     -1          +1
Language       Python       Python                 Rust        N/A
Sources        1            1                      1           1
License        Apache-2.0   Apache-2.0             MIT         N/A
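For reproducibility, a hedged sketch of how the per-repository fields in this table can be pulled from the public GitHub REST API (GET /repos/{owner}/{repo}). Only safe-rlhf's full owner/name path appears on this page, so the comparison repositories are left as placeholders; the aggregation is illustrative, not this dashboard's actual pipeline.

```python
import json
import urllib.request

# Only safe-rlhf's full path is given on this page; the comparison
# repos are shown by name only, so their owner prefixes are omitted.
REPOS = ["PKU-Alignment/safe-rlhf"]

def fetch_stats(full_name: str) -> dict:
    """Pull the table's per-repo fields from the public GitHub REST API.

    Endpoint: GET https://api.github.com/repos/{owner}/{repo}
    (unauthenticated calls are rate-limited to 60 requests/hour).
    """
    url = f"https://api.github.com/repos/{full_name}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "language": data["language"] or "N/A",
        "license": (data.get("license") or {}).get("spdx_id", "N/A"),
    }

for name in REPOS:
    print(name, fetch_stats(name))
```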

Capability Radar vs AI-Codereview-Gitlab

[Radar chart comparing safe-rlhf and AI-Codereview-Gitlab across the five axes scored below]
Maintenance Activity: 26

Last code push 136 days ago.

Community Engagement: 41

Fork-to-star ratio: 8.3%. A lower fork ratio can indicate passive usage (many readers, few downstream modifications).
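The figure is easy to verify from the headline counts above (132 forks against roughly 1.6k stars):

```python
forks, stars = 132, 1600   # headline figures; stars rounded from "1.6k"
ratio = 100 * forks / stars
print(f"{ratio:.2f}%")     # 8.25% with the rounded star count; the exact
                           # (unrounded) count evidently yields the 8.3% shown
```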

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 30

No measurable growth in the current period (expected on the first day of tracking).

License Clarity: 95

Licensed under Apache-2.0, a permissive license that is safe for commercial use.

Scores are computed from real-time repository data; higher scores indicate healthier metrics (i.e., lower risk).
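The scoring formulas are not published on this page, so the sketch below is a hypothetical reconstruction of two of the axes above: Maintenance Activity from push recency, and License Clarity from the SPDX identifier. The half-life and the score mapping are assumptions chosen to land near the displayed values.

```python
def maintenance_score(days_since_push: int, half_life: int = 60) -> float:
    """Recency-decay score in [0, 100]; halves every `half_life` days.

    Hypothetical formula: the page reports 26 for a 136-day-old push,
    which is roughly consistent with a ~60-day half-life decay.
    """
    return 100 * 0.5 ** (days_since_push / half_life)

def license_score(spdx_id: str) -> int:
    """Hypothetical mapping from SPDX license ID to a clarity score."""
    permissive = {"Apache-2.0", "MIT", "BSD-3-Clause"}
    if spdx_id in permissive:
        return 95
    return 50 if spdx_id != "N/A" else 10

print(round(maintenance_score(136)))  # ~21, in the neighborhood of the reported 26
print(license_score("Apache-2.0"))    # 95, matching License Clarity above
```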