Sr.ht should set up their own graceful rate-limiting anyway, or they're going to have problems with badly coded CI setups as the service becomes more popular.
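By "graceful" I mean something like per-client token buckets that answer over-limit requests with 429 and a Retry-After header, instead of stalling or dropping connections. A minimal sketch in Go; the limits, keying by IP, and handler names are illustrative, not anything sr.ht actually runs:

```go
package main

import (
	"net"
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

// clientLimiters hands out one token bucket per remote IP.
// The numbers (1 req/s, burst of 20) are made up for illustration.
type clientLimiters struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter
}

func (c *clientLimiters) get(ip string) *rate.Limiter {
	c.mu.Lock()
	defer c.mu.Unlock()
	l, ok := c.limiters[ip]
	if !ok {
		l = rate.NewLimiter(rate.Limit(1), 20)
		c.limiters[ip] = l
	}
	return l
}

// withRateLimit rejects over-limit clients with 429 and a Retry-After
// hint instead of letting them pile up or time out mysteriously.
func withRateLimit(next http.Handler) http.Handler {
	limiters := &clientLimiters{limiters: make(map[string]*rate.Limiter)}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			ip = r.RemoteAddr
		}
		if !limiters.get(ip).Allow() {
			w.Header().Set("Retry-After", "1")
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	// Hypothetical repo endpoint standing in for whatever sr.ht serves.
	repoHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("repo contents\n"))
	})
	http.ListenAndServe(":8080", withRateLimit(repoHandler))
}
```

Well-behaved CI clients can back off when they see the 429, and the failure mode is at least explicit rather than a mysterious hang.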
Wouldn't this just make the failures more mysterious and harder for users to track down? Failing intermittently, for no reason apparent to end users, is worse than failing consistently, IMO.
E.g.: on distributions with source-based packages, users who aren't Go programmers, or aren't programmers at all, will compile and install Go software where some nested library dependency lives on sr.ht. Those packages will now fail to build and, sadly, that's going to cause widespread disruption. I think it would be worse if those failures only happened occasionally rather than reliably and repeatably.