Several concurrent requests to the same domain

I want to fetch some text files from a repo, but when I issue several fetches the responses take too long to arrive on the client side. It takes about 1 min 20 s, which is prohibitively long for a web app.

I am not sure, but I suspect this happens because the browser limits the number of concurrent requests to the same domain.

Does GitHub have multiple domains? Is there an alternative domain I could send my requests to, please?


<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
</head>
<body>
<script>
    var filenames = [
        {'url': ""},
        {'url': ""},
        {'url': ""},
        {'url': ""},
        {'url': ""},
        {'url': ""},
        {'url': ""},
        {'url': ""}
    ];

    // Kick off all the fetches in parallel and wait for every one to resolve.
    var fetchData = (data) => {
        var urls = => d.url);
        return Promise.all( => fetch(url)));
    };

    var app = async function () {
        var results = await fetchData(filenames);
        // ... process/stream the responses here
    };

    app();
</script>
</body>
</html>

:wave: @acycliq: There are a number of publicly available URLs that GitHub relies on as part of its application's functionality. The one in your snippet is among them, but we don't recommend relying on it as illustrated in the code snippet for production cases, because that service is subject to change at any time.

The GitHub API provides several documented and established resources that your application can rely on. I highly recommend using our Contents API to get repository content. Files and symlinks support a custom media type for retrieving the raw content or rendered HTML (when supported). This interface is great for fetching contents that are 1MB or less in size.

I mention the 1 MB limit because I noticed something interesting when I made this curl request to one of the specified resources:


At the time of writing, the API returns this response:

{
  "message": "This API returns blobs up to 1 MB in size. The requested blob is too large to fetch via the API, but you can use the Git Data API to request blobs up to 100 MB in size.",
  "errors": [
    {
      "resource": "Blob",
      "field": "data",
      "code": "too_large"
    }
  ],
  "documentation_url": ""
}

This particular file is 75.5 MB, which exceeds the 1 MB limit for the Contents API.

We have a Blobs API that exposes a Get a blob endpoint which supports blobs up to 100MB in size.

curl -v

Does this help with what you’re looking to do?

@francisfuzz: Thanks for looking into this. My main issue is that I fetch all the files in parallel. This is part of a data loader that feeds the data to a web application. Since most browsers allow only up to 6 concurrent connections per domain, there is a significant delay before I can start streaming the data. In Firefox, for example, where you can increase the maximum number of connections by tweaking its config settings, streaming starts almost instantaneously. I was hoping that GitHub would have multiple domains so that I could split my fetches across several of them.
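One client-side mitigation, independent of which GitHub endpoint is used (`runLimited` and the task shapes here are illustrative, not from the thread): rather than awaiting `Promise.all` over every fetch, run the requests through a small concurrency-limited pool and hand each response off as soon as it resolves, so the first files can start streaming while the rest queue behind the browser's per-domain connection cap:

```javascript
// Run async task factories with at most `limit` in flight at once,
// calling onResult(index, value) as each task finishes.
async function runLimited(tasks, limit, onResult) {
    const results = new Array(tasks.length);
    let next = 0;
    async function worker() {
        while (next < tasks.length) {
            const i = next++;          // claim the next task index
            results[i] = await tasks[i]();
            onResult(i, results[i]);   // consume this result immediately
        }
    }
    // Spawn `limit` workers that drain the shared task queue.
    const workers = Array.from(
        { length: Math.min(limit, tasks.length) },
        () => worker()
    );
    await Promise.all(workers);
    return results;
}
```

In the loader above, each task would be `() => fetch(url)` and `onResult` would begin consuming that response body without waiting for the remaining requests.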

@acycliq: Thanks for sharing that additional context. I can confirm that there aren't multiple domains; the Contents API would be your best bet for getting the data you're looking for from a repository. Alternatively, it may be worthwhile to check out a database-as-a-service rather than using a GitHub repository for fetching those datasets.