Understanding your graphs part 7 - Pre-Receive Hooks, Git caching, and Cluster (HA) ping
In part 6 of our ‘Understanding your graphs’ mini-series, we talked about GitHub Enterprise System services. In part 7 - our final article in this mini-series - we’ll dive into GitHub Enterprise Pre-receive hooks, Git caching, and Cluster (HA) ping graphs.
Pre-receive hooks
Graphs related to pre-receive hook execution.
- Execution time of pre-receive hooks, in milliseconds.
- Pre-receive hooks have a non-configurable 5-second timeout. Longer or more in-depth checks should be performed via CI and reported to the relevant pull request as a required status check instead.
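Because the timeout is fixed, a hook should only perform quick, local checks. A minimal sketch of such a hook, where the "locked" branch policy is a hypothetical example:

```shell
#!/usr/bin/env sh
# Minimal pre-receive hook sketch (the "locked" branch policy is a
# hypothetical example). Git feeds the hook one
# "<old-sha> <new-sha> <ref-name>" line per updated ref on stdin;
# a non-zero exit rejects the entire push.

check_refs() {
  while read -r oldrev newrev refname; do
    if [ "$refname" = "refs/heads/locked" ]; then
      echo "Pushes to the 'locked' branch are not allowed." >&2
      return 1
    fi
  done
  return 0
}

# Git installs and invokes the hook under the name "pre-receive".
if [ "${0##*/}" = "pre-receive" ]; then
  check_refs
fi
```

A check like this completes in milliseconds, well inside the 5-second limit; anything that shells out to the network or scans large trees is better moved to CI.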
Git fetch caching
GitHub Enterprise will attempt to cache intensive operations, such as git pack-objects, when multiple identical requests arrive in quick succession.
- Git client requests which GitHub Enterprise cached.
- High sustained rates of git requests being cached can be a result of clients polling for changes.
- Git client requests that GitHub Enterprise was able to serve from cache.
- Indicates a detected “Thundering Herd” of requests.
- Requests ignored by the caching system, as they were not good candidates for caching.
- High sustained rates of ignored requests may also indicate polling; however, the built-in caching was unable to provide any benefit, so these requests may have lower performance overall.
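One way to reduce polling load on the appliance is to check the remote tip with a cheap git ls-remote and only run a full fetch when it has actually moved. A sketch, assuming a clone with an "origin" remote (`sync_branch` is a hypothetical helper name):

```shell
#!/usr/bin/env sh
# Sketch: cheap change detection before a full fetch.
# `git ls-remote` only exchanges refs, so it is far lighter on the
# server than an unconditional `git fetch` in a polling loop.

tips_differ() {
  # True when a remote tip was found and it differs from the local one.
  [ -n "$1" ] && [ "$1" != "$2" ]
}

sync_branch() {
  branch="${1:-main}"
  remote_tip=$(git ls-remote origin "refs/heads/$branch" | cut -f1)
  local_tip=$(git rev-parse "refs/remotes/origin/$branch" 2>/dev/null)

  if tips_differ "$remote_tip" "$local_tip"; then
    git fetch origin "$branch"
  fi
}
```

Running something like `sync_branch main` from a scheduled job, instead of fetching unconditionally, keeps most polling requests out of the pack-objects path entirely.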
Cluster (HA) ping
Graphs related to GitHub Enterprise High Availability or Clustering.
- High ping response times between HA and Cluster nodes may impact replication performance.
- In Geo-replication environments, ping times between replicas can reach upwards of several hundred milliseconds. Overall Git push speeds in Geo-replication environments will also be impacted by this latency.
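To spot-check the latency these graphs report, you can measure round-trip times between nodes directly with ping. A sketch (host names are passed as arguments; pass your own node addresses):

```shell
#!/usr/bin/env sh
# Sketch: spot-check round-trip latency to one or more nodes with ping.

parse_avg_rtt() {
  # Pull the average RTT (ms) out of ping's summary line:
  # "rtt min/avg/max/mdev = ..." on Linux,
  # "round-trip min/avg/max/stddev = ..." on BSD/macOS.
  awk -F'/' '/^(rtt|round-trip)/ {print $5}'
}

for host in "$@"; do
  avg=$(ping -c 3 "$host" 2>/dev/null | parse_avg_rtt)
  echo "$host avg_rtt_ms=${avg:-unreachable}"
done
```

For example, running the script against each replica from the primary gives a quick baseline to compare against the graphed values; averages in the hundreds of milliseconds between geo-replicas are expected, while similar numbers between co-located nodes warrant a look at the network.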
Continue the conversation
This concludes our “Understanding your graphs” mini-series. Thanks so much for following along! If you’d like to look back at or reference all the articles from this mini-series, just subscribe to the “Understanding your graphs” tag (link below). Please let us know if you have any questions in the comments.