The REST API docs say we can use the per_page param to get the maximum possible number of results per request (100), but this is not working as expected. I get a server error when using per_page=100, and if I lower the count to around per_page=50, it works fine.
However, this does not happen for every repository in the same organisation or in different orgs; it occurs only for this specific URL.
After about 12 seconds of running, the response is:
"message": "Server Error"
But if I lower the count to around 50, it works. I tested by trial and error: it works up to per_page=56 or 57, but not beyond that.
This works fine.
I cannot understand why this behaviour occurs, and the request keeps returning the same error as above.
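A quick way to reproduce the trial-and-error described above without guessing by hand is to binary-search for the largest per_page the endpoint can handle. This is just a sketch: `works` is any callable that returns True when a request with that per_page succeeds; in real code it would wrap a GET against the failing URL and check for a 2xx status.

```python
def max_working_per_page(works, lo=1, hi=100):
    """Largest value in [lo, hi] for which works(value) is True, else None."""
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if works(mid):
            best = mid       # mid succeeds; try larger sizes
            lo = mid + 1
        else:
            hi = mid - 1     # mid fails; try smaller sizes
    return best
```

For an endpoint that fails beyond 57, this search would report 57 in about seven probes instead of dozens of manual attempts.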
I tried it using Chrome in Incognito mode and there is no error; as you can see from the scroll bar, the response is quite long. I tried this several times just to be sure, and there is still no error. However, there is a warning in the Developer Tools. If you are not using a browser, that might affect performance, which could be why the server is returning an error.
Yes, @jdevstatic, I am using the above APIs in Python. The error comes when using per_page=100, which I use throughout the code. But the above is the only URL that throws the error for per_page=100, not for smaller values.
What should be the fix for it?
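For reference, this is roughly how the call looks in my script (OWNER/REPO is a hypothetical placeholder, not the actual URL that errors out):

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen  # urlopen is used when actually sending

def build_commits_request(owner, repo, page=1, per_page=100):
    """Build the GitHub commits request; per_page=100 is what triggers the error."""
    query = urlencode({"page": page, "per_page": per_page})
    url = f"https://api.github.com/repos/{owner}/{repo}/commits?{query}"
    return Request(url, headers={"Accept": "application/vnd.github+json"})

# Sending it, e.g. urlopen(build_commits_request("OWNER", "REPO")), is where
# the "Server Error" response comes back for that one endpoint.
```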
Hi @dsharma522. If GitHub takes more than 10 seconds to process an API request, GitHub will terminate the request and you will receive a timeout response or error. This is likely the cause of your problem. If so, the fix is to reduce the per_page size to a value that the server side can safely process within the timeout period; this can vary from API to API. The timeout issue is more likely when using very complex GraphQL queries.
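The workaround above can be sketched as follows. The `fetch` callable is a stand-in so the retry logic can be shown without a live API call; in real code it would wrap a GET against the failing endpoint and raise on a 5xx response. Note that the whole pagination restarts when the page size shrinks, because page offsets change with per_page.

```python
def fetch_all(fetch, per_page=100, min_per_page=1):
    """Collect every item, halving per_page after a server error.

    fetch(page=..., per_page=...) returns a list of items (empty when done)
    and raises RuntimeError on a server-side failure.
    """
    while True:
        try:
            items, page = [], 1
            while True:
                batch = fetch(page=page, per_page=per_page)
                if not batch:
                    return items
                items.extend(batch)
                page += 1
        except RuntimeError:
            if per_page <= min_per_page:
                raise  # even the smallest page size fails; give up
            per_page //= 2  # restart from page 1 with a smaller page size
```

With this, a script can still request per_page=100 by default and degrade gracefully on the one endpoint that cannot process a full page within the timeout.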
If a timeout is the cause, then the error message should be improved to state that clearly. Right now, the message is difficult to interpret for debugging.
I agree; if it is due to a timeout, a more useful error message would be better.