Feature, yay! A BARF milestone subproject.
Part of the running cost of BURP is storage. If we are to open up BURP and scale it up by a factor of 10x or 100x in the future, we need to be much more conscious about server-side storage. Currently, for a number of reasons, the strategy has been to store literally everything (including multiple copies of each frame rendered from each render node, both the individual parts and the stitched frames, etc.). That way, when something goes wrong, it costs very little to recreate it from some deeper storage layer.
In recent years, fewer and fewer failures have occurred where that extra storage was actually needed, and once the custom Cycles validator project is done, a very large portion of that kind of "almost but not quite duplicate" storage can be freed up for other purposes.
In the future the hope is to make the service a lot slimmer, so that it scales better without unnecessary waste of storage.
The session data that will be put in long-term storage is going to consist primarily of:
- One canonical copy of each rendered, final frame
- An encoded preview video file with all the frames
- The original input file(s) used for the session
- Some logs, checksums and performance data
In that list the first item is typically the one using the most space. By using lossless compression it is possible to store the exact same data using less space. How much less varies a lot from session to session, but it is typically between 5% and 50%.
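The key property of lossless compression is that the original bytes can always be reconstructed exactly, so nothing about the canonical frames is lost. As an illustration only (BURP's actual frame formats have their own lossless codecs; the file sizes and pixel data here are made up), Python's standard `zlib` shows the principle:

```python
import zlib

def compress_frame(raw: bytes) -> bytes:
    """Losslessly compress raw frame bytes."""
    return zlib.compress(raw, 9)

def decompress_frame(packed: bytes) -> bytes:
    """Recover the exact original bytes."""
    return zlib.decompress(packed)

# A synthetic "frame": repetitive pixel data, which compresses well.
frame = bytes([17, 34, 51, 255]) * 64_000  # 256 000 bytes of fake RGBA
packed = compress_frame(frame)

assert decompress_frame(packed) == frame  # lossless: exact round trip
savings = 1 - len(packed) / len(frame)
print(f"compressed to {len(packed)} bytes ({savings:.0%} saved)")
```

How much is saved depends entirely on the content: noisy, detailed renders sit near the low end of the 5%-50% range, while frames with large flat areas compress far better.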
What you are seeing is the new CruncherService going through the backlog of all the old rendered sessions and performing two tasks: validating that the data is still correct and compressing rendered frames using lossless compression.
At the moment it has been granted very few CPU resources and runs as a background service, so it is moving fairly slowly - but it will pick up speed once the new server is in place. The process is fairly I/O heavy, so we're keeping it on the server cluster instead of distributing it to the farm via BOINC.