The repo was public yesterday. Today it is locked behind a login.
This is the new reality for many open source models: restricted access. In the last year, several high-profile AI and machine learning projects have shifted from truly open source licenses to gated downloads, API-only access, or non-commercial clauses. The code or weights might still be available, but not without friction, limits, or legal walls.
This shift is driven by risk management, competitive edge, and safety concerns. Teams building large models face regulatory pressure, fear of misuse, and investor demands. By restricting access, they control distribution, monitor usage, and retain IP leverage. But the trade-off is obvious. The open ecosystem—where developers could audit, fine-tune, or extend models freely—is shrinking.
Projects that once allowed direct cloning from GitHub or the Hugging Face Hub now require access requests, approvals, and sometimes identity verification. Some still call these “open source” models, but the term is being stretched beyond its original meaning. Under the Open Source Initiative’s Open Source Definition, a model with gated access does not qualify. At best, it is “source available” or “research only.”
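The friction is visible at the tooling level. As a minimal sketch, assuming a recent version of the huggingface_hub library and using an illustrative repo id rather than any specific model, an anonymous download of a gated checkpoint fails until the user logs in and is granted access:

```python
from huggingface_hub import snapshot_download
from huggingface_hub.utils import GatedRepoError

try:
    # Anonymous download: works for a truly open repo, fails for a gated one.
    # "some-org/gated-model" is a hypothetical repo id used for illustration.
    path = snapshot_download(repo_id="some-org/gated-model")
    print(f"weights cached at {path}")
except GatedRepoError:
    # Gated repos require an authenticated token *and* per-user approval,
    # typically granted after accepting the license terms on the model page.
    print("access is gated: log in and request approval before downloading")
```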