[LGTM] finding cause of analysis timeout

We’re using LGTM on a Python project, and some recent changes cause the LGTM analysis to time out. I was able to narrow the problem down to a single test (pytest) and the CodeQL check CallToSuperWrongClass.

Is there a good way to find out the actual cause of the issue, or possibly to report it as an example of a bug?
The problem test is split off in PrincetonUniversity/PsyNeuLink pull request #2416 (“tests: Parameters: add for specify_none” by kmantel) on GitHub.

I’m afraid that in this case, the most likely culprit is that the points-to analysis failed to complete in time.

The py/super-not-enclosing-class query is unlikely to be the real cause; it just happens to be the first query that relies on the points-to analysis. This also means that disabling that one query is unlikely to make a difference.

Having looked at the code, it’s really not clear to me why this would be triggering a timeout in the points-to analysis.

Happily, we’re moving towards relying less and less on points-to. Unfortunately, actually getting there will take some time yet.

In the short term, I’m not sure what the best way to mitigate this is. One option is to instruct LGTM to ignore that particular file through the lgtm.yml config file; a rough sketch of what that could look like is below. It’s not the prettiest of solutions, but it should work for now. I unfortunately can’t give you an indication of when this performance issue will be investigated and fixed.
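
For reference, a minimal sketch of an lgtm.yml for this, assuming the problem file lives at tests/test_parameters.py (a placeholder path, not your actual file) and using the path_classifiers / per-language index filter keys from memory; please double-check the exact key names against the lgtm.yml reference before relying on it:

```yaml
# lgtm.yml at the repository root -- sketch only; verify the exact keys
# against the lgtm.yml reference.
path_classifiers:
  # Classify the problem file so its alerts are not reported.
  # "tests/test_parameters.py" is a placeholder path.
  test:
    - tests/test_parameters.py

extraction:
  python:
    index:
      # Filter the file out of Python extraction entirely (assumed key
      # names, based on the per-language index filters in the lgtm.yml docs).
      filters:
        - exclude: tests/test_parameters.py
```

As far as I recall, the path_classifiers entry mainly affects which alerts are displayed, so if the timeout happens during extraction/analysis itself, the extraction-level filter is the part that matters.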