It looks like setting sponsorableOnly: true on the repositories field of a topic that contains a lot of repositories is causing the API to return an error — specifically, the “Oh snap!” HTML error page:
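For reference, here is a minimal query of the shape that triggers this. The topic name and field selection below are just illustrative, not my exact query:

```graphql
{
  topic(name: "javascript") {
    repositories(first: 10, sponsorableOnly: true) {
      nodes {
        nameWithOwner
      }
    }
  }
}
```

With sponsorableOnly removed (or set to false), the same query returns normally.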
Has the behavior subsided, or are you able to provide some additional steps to reproduce?
I’ll also take a look at logs next, to see if I can identify any related errors and report back.
Edit: I wasn’t able to find any exceptions bubbling up in either of our two logging tools. We have one that logs only exceptions, and another that handles general logging. Neither has surfaced events matching what you’ve described.
Since it sounds like it’s happening in-browser, are you able to capture any network details during a failure?
I’m still unable to reproduce the error myself, and unfortunately, that request_id wasn’t caught by either our logging tool or our exception-catching tool =(
Unfortunately, none of those 3 new request IDs came back in either our general logging tool or our exception-catching tool. Nor was I able to reproduce this behavior with either my GitHub Staff account or my non-staff user account.
Do you happen to be aware of any other users running into the same issue?
One thing missing from your screenshots is the Network tab, which should show us which URL is returning that 500, or whether the request is even reaching the site before producing the error.
I can escalate this once we have something a bit more tangible. For now though, there’s still nothing in the logs, and I’m still unable to reproduce =(
I’m still able to reproduce this on my account, both in Firefox and Chrome using that API topic query.
Here are scrubbed HARs of the failing request in both browsers: chrome.har · GitHub
I isolated just the failing request that occurs when I press the run button in the explorer — let me know if you need full HARs covering the whole page load plus pressing run.
Hopefully these give you a bit more concrete info on the query request that is failing.
Thanks so much, Riley; that request ID did come back with a captured error in our logging. I’m still so confused as to why Matt’s hasn’t =(
Either way, I appreciate the additional info!
Interestingly, this specific request ID: CD52:28F2:44D2FD4:4635053:61B1B900 was caught as a timeout, similar to other conversations Matt and I had in this thread.
Between then and now, there has been work done to help us better catch details during timeouts for GQL calls, and thankfully we can see it in this example.
Unlike the heavy GQL calls in that conversation, this call alone would presumably not be considered very heavy. From what I can see in our logs, a single MySQL query accounts for 5.87s of the time spent, which seems actionable.
I’m going to escalate this to our dev team, and hopefully we can get this sorted!
Thanks again for the details and for your patience!