-
We’ve been using GitHub Actions for the ArduPilot open source autopilot since September 2020. It has generally been very good, but over the last couple of days all our PRs have been stuck for a very long time. Here is a typical one, stuck for 11 hours:
AP_Scripting: avoid a error in lua with gcc 10.2 on STM32 with -Werror
g++ 10.2.1 for STM32 gives this: ![image](https://user-images.githubusercontent….com/831867/127244840-4e710674-4eca-489b-aacb-48d737dee857.png) It seems to be a false positive. How do we find out why we’re getting stuck? Other PRs get through almost half the workflows, like this one which has completed 28 and has another 28 queued.
Logging: added LOG_FILE_RATEMAX parameter
This PR gives a simple way to lower the logging rate for smaller logs. It adds a single LOG_FILE_RATEMAX parameter for the maximum rate of streaming messages. Later I'd like to add LOG_MAV_RATEMAX and LOG_BLK_RATEMAX for the other backends. Multi-instance messages get special handling: if the same streaming message ID is sent within the same scheduler tick, it is assumed to be a multi-instance message and the same send/no-send decision is applied to all messages of that type within the tick. This gives good handling of multi-instance messages. Update: now has LOG_MAV_RATEMAX and LOG_BLK_RATEMAX too. Cheers, Tridge
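The multi-instance handling described above can be sketched roughly like this. This is a minimal illustration, not ArduPilot's actual AP_Logger code: the class and member names are hypothetical, and the key idea is just that the first send/no-send decision for a message ID in a scheduler tick is cached and repeated for every further instance of that ID in the same tick.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical sketch of a streaming-message rate limiter: a single
// maximum rate (e.g. LOG_FILE_RATEMAX) caps how often each message ID
// is logged, and repeats of an ID within one scheduler tick reuse the
// first decision so all instances of a multi-instance message agree.
class StreamRateLimiter {
public:
    explicit StreamRateLimiter(float max_hz)
        : interval_ms_(max_hz > 0 ? uint32_t(1000.0f / max_hz) : 0) {}

    // Returns true if a message with this ID should be logged now.
    bool should_log(uint8_t msg_id, uint32_t now_ms, uint32_t tick) {
        if (interval_ms_ == 0) {
            return true;  // rate limiting disabled (parameter is 0)
        }
        State &s = state_[msg_id];
        // Same ID seen again within the same scheduler tick: assume it
        // is a multi-instance message and repeat the cached decision.
        if (s.have_decision && s.decision_tick == tick) {
            return s.last_decision;
        }
        const bool send = (now_ms - s.last_send_ms) >= interval_ms_;
        if (send) {
            s.last_send_ms = now_ms;
        }
        s.have_decision = true;
        s.decision_tick = tick;
        s.last_decision = send;
        return send;
    }

private:
    struct State {
        uint32_t last_send_ms = 0;
        uint32_t decision_tick = 0;
        bool last_decision = false;
        bool have_decision = false;
    };
    uint32_t interval_ms_;            // minimum gap between sends, in ms
    std::map<uint8_t, State> state_;  // per-message-ID bookkeeping
};
```

With a 10 Hz cap, two instances of the same message in one tick both log (or both skip), and a repeat 50 ms later is dropped while a repeat 100 ms later goes through.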
-
A little more information: day-old pushes to PRs are getting this, while others are getting this:
Switching to
-
Note that this same problem is almost certainly being discussed over here: Hosted runners not picking up jobs - #5 by balupton
-
The problem has resolved itself.