
Trusted-AI/adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Stars: 5.9k · Forks: 1.3k · Growth: +3/wk
Source: GitHub
Topics: adversarial-attacks adversarial-examples adversarial-machine-learning ai artificial-intelligence attack blue-team evasion extraction inference machine-learning poisoning

[Chart: Star & Fork Trend, 43 data points (Stars and Forks series)]

Multi-Source Signals

Growth Velocity

Trusted-AI/adversarial-robustness-toolbox gained +3 stars this period. 7-day velocity: 0.1%.
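The velocity figure appears to be new stars divided by the current star count. A minimal sketch of that computation (the formula and the one-decimal rounding are assumptions; the site does not document how it derives the number):

```python
def star_velocity(new_stars: int, total_stars: int) -> float:
    """Weekly star velocity: new stars as a percentage of the
    current star count, rounded to one decimal place (assumed)."""
    return round(100 * new_stars / total_stars, 1)

# +3 stars on a repository with roughly 5.9k stars
print(star_velocity(3, 5900))  # → 0.1
```

Note that 3/5900 is 0.05% before rounding, which matches the growth rate quoted in the Growth Momentum section below; the 0.1% shown here is the same quantity at coarser rounding.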


Metric         adversarial-robustness-toolbox   chainer   gluon-cv     VLM-R1
Stars          5.9k                             5.9k      5.9k         5.9k
Forks          1.3k                             1.4k      1.2k         378
Weekly Growth  +3                               +0        +0           +2
Language       Python                           Python    Python       Python
Sources        1                                1         1            1
License        MIT                              MIT       Apache-2.0   Apache-2.0

Capability Radar vs chainer

[Radar chart comparing adversarial-robustness-toolbox against chainer]
Maintenance Activity: 36

Last code push 118 days ago.

Community Engagement: 100

Fork-to-star ratio: 22.0%, indicating an active community that forks and contributes.
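The fork-to-star ratio is simple division over the two counts shown above. A sketch, assuming one-decimal rounding:

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Forks as a percentage of stars, rounded to one decimal place."""
    return round(100 * forks / stars, 1)

# 1.3k forks against 5.9k stars
print(fork_to_star_ratio(1300, 5900))  # → 22.0
```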

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 43

+3 stars this period, a 0.05% growth rate.

License Clarity: 95

Licensed under MIT, a permissive license suitable for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.