Getting data from the GitHub API beyond the pagination limit

What is the correct way of retrieving stargazers for a project that has in excess of 40,000 stars? Apache Superset has ~41,200 stars, and when I try to get the 40,001st stargazer I get the following error:

$ curl -H "Accept: application/vnd.github.v3.star+json" "https://api.github.com/repos/apache/superset/stargazers?per_page=1&page=40001"
{
  "message": "In order to keep the API fast for everyone, pagination is limited for this resource. Check the rel=last link relation in the Link response header to see how far back you can traverse.",
  "documentation_url": "https://docs.github.com/v3/#pagination"
}

The same happens with e.g. per_page=100&page=401, so the cap appears to be 40,000 rows. When I check the Link header on earlier pages, rel="last" points to https://api.github.com/repositories/39464018/stargazers?per_page=1&page=40000, and that page's response has no rel="next" relation either.
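
For reference, this is how I'm inspecting the pagination headers (curl -I sends a HEAD request, which returns the same Link header as a GET):

# show only the Link header; rel="next"/rel="last" carry the pagination URLs
$ curl -sI -H "Accept: application/vnd.github.v3.star+json" "https://api.github.com/repos/apache/superset/stargazers?per_page=1&page=39999" | grep -i '^link:'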

I run into similar limits with other endpoints as well.

Is there some other way to ingest data from a GitHub repo without running into these limits? I may be missing some key piece of information, but as things stand it seems the GitHub API can't be used for either full or incremental data ingestion on resources this large because of these pagination caps.
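
One alternative I've been considering is the GraphQL API (v4), which paginates with opaque cursors instead of page offsets. I haven't verified whether it enforces a similar cap; the query below is only a sketch, and $GITHUB_TOKEN stands in for a personal access token:

# first page of stargazers with their starredAt timestamps; subsequent pages
# would pass the previous endCursor via stargazers(first: 100, after: "<cursor>")
$ curl -s -H "Authorization: bearer $GITHUB_TOKEN" -X POST -d '{"query": "{ repository(owner: \"apache\", name: \"superset\") { stargazers(first: 100) { pageInfo { endCursor hasNextPage } edges { starredAt node { login } } } } }"}' https://api.github.com/graphql

If cursor pagination isn't subject to the same 40,000-row ceiling, looping on hasNextPage/endCursor would cover a full ingestion, but I'd like to confirm that before building on it.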