
THUDM/P-tuning

A novel method for tuning language models: code and datasets for the paper "GPT Understands, Too".

938 stars · 114 forks · +0/wk (source: GitHub)
Topics: few-shot-learning, natural-language-processing, p-tuning, parameter-efficient-learning, pre-trained-language-models, prompt-tuning
Trend: 0

Star & Fork Trend chart (34 data points; plots the Stars and Forks series).

Multi-Source Signals

Growth Velocity

THUDM/P-tuning has gained +0 stars this period. Velocity data will be available after more historical data has been collected.
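The page does not document how velocity is derived; the sketch below shows one plausible computation over periodic star-count snapshots. The snapshot values and cadence are hypothetical.

```python
from datetime import date

# Hypothetical weekly snapshots of the cumulative star count; the
# dashboard's real crawl cadence and history are not published.
snapshots = [
    (date(2024, 5, 1), 930),
    (date(2024, 5, 8), 934),
    (date(2024, 5, 15), 938),
    (date(2024, 5, 22), 938),
]

def stars_this_period(history):
    """Stars gained between the two most recent snapshots."""
    return history[-1][1] - history[-2][1]

def weekly_velocity(history):
    """Average stars gained per 7-day window over the full history."""
    (d0, s0), (d1, s1) = history[0], history[-1]
    days = (d1 - d0).days or 1
    return (s1 - s0) * 7 / days

print(f"{stars_this_period(snapshots):+d} stars this period")   # -> +0
print(f"{weekly_velocity(snapshots):+.1f} stars/wk on average")  # -> +2.7
```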

Deep analysis is being generated for this repository; signal-backed technical analysis will be available soon.

Metric         P-tuning   Eagle        langcorn   ML-University
Stars          938        938          938        937
Forks          114        50           71         120
Weekly Growth  +0         +0           -1         +0
Language       Python     Python       Python     N/A
Sources        1          1            1          1
License        MIT        Apache-2.0   MIT        N/A
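The table's raw inputs (stars, forks, language, license) are all exposed by GitHub's documented REST endpoint GET /repos/{owner}/{repo}. The dashboard's own pipeline is not public, so treat this as one plausible way to reproduce the figures, not its actual implementation.

```python
import requests

def repo_metrics(full_name: str) -> dict:
    """Fetch the comparison table's raw fields for one repository."""
    resp = requests.get(f"https://api.github.com/repos/{full_name}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "language": data["language"] or "N/A",
        "license": (data.get("license") or {}).get("spdx_id", "N/A"),
        "pushed_at": data["pushed_at"],  # last code push, used below
    }

print(repo_metrics("THUDM/P-tuning"))
```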

Capability Radar vs Eagle (radar chart comparing P-tuning and Eagle on the dimensions below).
Maintenance Activity: 0

Last code push was 1280 days ago.
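The 1280-day figure can be derived from the pushed_at timestamp fetched above; this is an assumption about the dashboard's method, since the page does not state it.

```python
from datetime import datetime, timezone

def days_since_push(pushed_at: str) -> int:
    """Days elapsed since GitHub's ISO-8601 pushed_at timestamp."""
    pushed = datetime.fromisoformat(pushed_at.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - pushed).days

# Hypothetical timestamp; for this repo the call would return roughly 1280.
print(days_since_push("2021-10-06T08:15:00Z"))
```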

Community Engagement: 61

Fork-to-star ratio: 12.2%, indicating an active community that forks and contributes.
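The 12.2% figure is plain arithmetic over the metrics in the table above:

```python
stars, forks = 938, 114
print(f"Fork-to-star ratio: {forks / stars:.1%}")  # -> Fork-to-star ratio: 12.2%
```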

Issue Burden: 70

Issue data is not yet available.

Growth Momentum: 30

No measurable growth in the current period (first-day cold start expected).

License Clarity: 95

Licensed under MIT, a permissive license suitable for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
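The scoring formulas themselves are not published. As a minimal sketch, assuming a linear staleness penalty and a fixed bonus for permissive SPDX licenses, the Maintenance Activity (0) and License Clarity (95) figures above could be produced like this; every threshold and weight here is invented for illustration.

```python
def maintenance_score(days_since_push: int) -> int:
    # Hypothetical rule: fresher pushes score higher; two or more
    # years of inactivity bottoms out at 0.
    if days_since_push >= 730:
        return 0
    return round(100 * (1 - days_since_push / 730))

def license_score(spdx_id: str | None) -> int:
    # Hypothetical rule: permissive licenses score 95, other known
    # licenses 40, missing license metadata 0.
    permissive = {"MIT", "Apache-2.0", "BSD-3-Clause"}
    if spdx_id in permissive:
        return 95
    return 40 if spdx_id else 0

print(maintenance_score(1280), license_score("MIT"))  # -> 0 95
```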