Could this be caused by the recent high failure rate on uploads (to the main server from the SG)? It seems most of the SGs I checked on the SG HUB are experiencing a lot of upload failures, and the red/green pattern is very similar between SGs.
Could there be a bandwidth issue on the server as more stations are being brought online? I don't remember seeing this much 'red' last year.
First capture is from SG-9FD3RPI49FDB Gem AB (moderate file sizes)
Hi Mark, we’ve been experiencing a lot of server-related issues in the last couple of weeks, which is undoubtedly responsible for some of the problems. Just today the server was down again temporarily. Even a short outage can create more issues since processing can get backed up pretty quickly. Something like this may also affect uploads, as you suggest, since a large number of SGs attempt to upload data around the same time when the server comes back online.
Hopefully the server folks, with their system error logs, can figure out the issue(s). It’s an interesting and tricky problem: many external clients trying to call home and dump their information, some with fast connections, some much slower, with files both big and small.
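For what it’s worth, the usual way to soften that kind of post-outage pile-up on the client side is to spread retries out rather than having every station retry in lockstep. Here’s a minimal sketch of exponential backoff with jitter, purely as an illustration of the idea; this is not the actual SG upload code, and `upload_fn` is just a placeholder for whatever the station uses to push a file:

```python
import random
import time

# Hypothetical sketch only -- not the real SG firmware.
# Idea: after an outage, stations that retry with a randomized,
# growing delay spread their uploads out instead of all hitting
# the server at the same instant when it comes back online.

def upload_with_backoff(upload_fn, max_attempts=6, base_delay=30, max_delay=1800):
    """Call upload_fn(); on failure, sleep a jittered exponential delay and retry."""
    for attempt in range(max_attempts):
        if upload_fn():
            return True
        # Exponential backoff capped at max_delay, with full jitter
        # (a random wait between 0 and the cap for this attempt).
        delay = min(max_delay, base_delay * (2 ** attempt))
        time.sleep(random.uniform(0, delay))
    return False
```

Of course, whether something like this helps depends entirely on how the SG upload client is actually written; it’s just the standard pattern for this class of problem.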
Hopefully it’s not something malicious like a denial of service.
Sorry I can’t help more; in my pre-retirement days I used to build real-time control applications, and the servers were maintained by nice folks who wore sandals. :)
To help with troubleshooting: the “Last data received” for 4 of my 5 SG stations is now showing 4 days behind again. When I log into the SG-Hub, these four stations all show no files to upload (i.e. the file transfer to the server appears to be working).
The fifth is even further behind but I think it’s due for a new SD card.
I’ve noticed the same. The V2 stations are connecting and syncing properly, but the processing for those is delayed. I believe this may also be affecting manual uploads of SG data, which are processed via the same data pipeline. I think the underlying issue has been resolved but that it’s now a matter of catching up and getting through the backlog…