What about to speed up?

Message boards : Number crunching : What about to speed up?
Profile PlainText s.r.o.

Joined: 11 Apr 07
Posts: 95
Credit: 3,532,950
RAC: 0
Message 5640 - Posted: 24 Apr 2007, 11:03:23 UTC
Last modified: 24 Apr 2007, 11:05:24 UTC

I've got an idea... Right now every WU is computed twice or more...

A WU is, for example:
render frame 20, pixels (0, 0, 16, 16).
render frame 20, pixels (16, 0, 32, 16).
...
picture:

-------------
| WU1 | WU2 |
-------------
| WU3 | WU4 |
-------------

What about not rendering every WU twice or more, but instead checking whether a PC is broken (returns broken results) with a small overlapping check WU, like this:

-------------
|    ---    |
----|WU5|----
|    ---    |
-------------

I don't know whether there would be a problem with calculating points... but this could be extended, for example, to render every second pixel, etc...
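The tile splitting described above can be sketched in code. This is a minimal illustration; the (frame, x0, y0, x1, y1) tuple convention and the tile size are my assumptions, not BURP's actual WU format:

```python
def make_tile_wus(frame, width, height, tile):
    """Split one frame into rectangular tile workunits.

    Each WU is (frame, x0, y0, x1, y1) with half-open pixel ranges,
    e.g. "render frame 20, pixels (0, 0, 16, 16)".
    """
    wus = []
    for y0 in range(0, height, tile):
        for x0 in range(0, width, tile):
            wus.append((frame, x0, y0,
                        min(x0 + tile, width), min(y0 + tile, height)))
    return wus

# A 32x32 frame split into 16x16 tiles gives the four WUs from the diagram.
tiles = make_tile_wus(20, 32, 32, 16)
```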

Sorry if it's a bad idea => spam.
Profile Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4563
Credit: 2,097,282
RAC: 0
Message 5643 - Posted: 24 Apr 2007, 14:59:50 UTC - in response to Message 5640.  

That's actually a very clever idea if the system only had to check for random errors (like the ones made by overclocked machines and machines with failing components). However, there's another factor that needs to be taken into account: deliberate errors.
A deliberate error is when someone tampers with the results for some reason. This could for instance be rendering no pixels at all (or only some pixels) in an attempt to get credit for the full render anyway - or it could be an attempt to embed graphics into the final result.

Using the {1,2,3,4}+{5} system Mr. Bad Guy could safely render only 1/4 of, say, WU1 and still get full credit. Similarly, he or she could insert text in the areas not covered by WU5.

Now what about placing WU5 so that it randomly overlaps the other WUs? Well, that would only slightly increase the chance of catching Mr. Bad Guy - and the intention is to have a system that catches flaws every time.
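The "only slightly increases the chance" point can be made concrete with a toy simulation. The frame size, check-block size and tampered region below are arbitrary numbers of my choosing, not anything from BURP:

```python
import random

def catch_probability(frame=32, check=8, trials=100_000, seed=1):
    """Monte Carlo estimate of the chance that a randomly placed
    check x check verification block overlaps a tampered region
    (here: the top-left 16x16 quarter of a 32x32 frame)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.randint(0, frame - check)  # top-left corner of check block
        y = rng.randint(0, frame - check)
        # The block [x, x+check) x [y, y+check) overlaps [0,16) x [0,16)
        # exactly when both coordinates start below 16.
        if x < 16 and y < 16:
            hits += 1
    return hits / trials

p = catch_probability()  # analytically (16/25)^2 = 0.4096 for these numbers
```

So even when a quarter of the frame is tampered with, a single randomly placed check block misses it well over half the time - far from the "catches flaws every time" goal.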

About rendering dots, every second pixel, every second line, etc.:
Unfortunately there's a constant overhead whenever you start rendering a "block". A block is a boxed area (a single pixel or a line is simply a small block). The renderer uses the start-up time to allocate memory, build shadow maps and calculate all kinds of things that it will use throughout the actual rendering phase.
Doing too many blocks means too much overhead - and too much CPU time is wasted.
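The overhead argument can be put into a toy cost model. The numbers are invented for illustration; real start-up costs depend on the scene and the renderer:

```python
def render_time(area_pixels, n_blocks, per_pixel=1.0, startup=500.0):
    """Toy cost model: every block pays a fixed start-up cost
    (memory allocation, shadow maps, ...) plus per-pixel work.
    All costs are in arbitrary units."""
    return n_blocks * startup + area_pixels * per_pixel

frame = 64 * 64  # 4096 pixels

one_block = render_time(frame, n_blocks=1)         # render the frame in one go
pixel_blocks = render_time(frame, n_blocks=frame)  # one block per pixel
```

With these numbers the one-block render costs 4596 units while the per-pixel version costs over 2 million - the start-up overhead completely dominates once blocks get tiny.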
Christian Sturm

Joined: 12 Mar 06
Posts: 47
Credit: 830
RAC: 0
Message 5835 - Posted: 21 May 2007, 0:35:26 UTC
Last modified: 21 May 2007, 0:40:37 UTC

Possibly I haven't understood this... my English is very bad... I've said that many times, but...

I think I understand the problem of faking credits and results...

My idea is to render twice... but only once at full resolution.
The second render is only for checking whether the result is right... it could be done at half resolution... because of antialiasing you should still get a comparable result...

So you get a kind of checksum of the result - not exactly matching the "main" result, but comparable with results from other machines, in 1/4 of the time at half resolution...

The matching criterion has to allow some inexactness because of antialiasing effects.
So it should be possible to compare pictures with a threshold...

That should increase render performance to about 175% of today's, because the comparison only costs about 25% extra render time... the feature could be called "Turbotrash" ;) I would be proud!!!!

Another way would be to downscale the main render to half resolution and compare it with another machine's compare picture... the resulting difference should be minimised that way... and again a lot of time would be saved for another frame ;)
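The threshold comparison being proposed could look roughly like this. This is a sketch of the idea only; the metric (mean absolute difference) and the threshold value are my assumptions:

```python
def mean_abs_diff(a, b):
    """Average absolute per-pixel difference between two grayscale
    images of the same size (images are lists of rows)."""
    pairs = [(pa, pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    return sum(abs(pa - pb) for pa, pb in pairs) / len(pairs)

def results_match(a, b, threshold=64):
    """Accept the pair if the images differ by less than the threshold;
    the slack is meant to absorb antialiasing differences between machines."""
    return mean_abs_diff(a, b) < threshold

reference  = [[128, 128], [128, 128]]
noisy_copy = [[130, 126], [127, 129]]  # small antialiasing-style jitter
blank_fake = [[0, 0], [0, 0]]          # someone who rendered nothing
```

`results_match(reference, noisy_copy)` accepts the jittered render, while `results_match(reference, blank_fake)` rejects the empty one.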

Oh, I think that's a really idiotic idea... but you have to think freely to enhance creativity ;)

Greetings
Christian Sturm



Profile Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4563
Credit: 2,097,282
RAC: 0
Message 5836 - Posted: 21 May 2007, 8:15:55 UTC - in response to Message 5835.  
Last modified: 21 May 2007, 9:13:05 UTC

Ok, I understand your post as the following:
Do image verification by rendering the result once at full resolution and once at half resolution, then scale down the full-resolution image to compare the two.

At first sight this seems like a very good way of handling both random errors and deliberate errors, since the entire image is always covered. However, there's a technique called "information hiding" that exploits the linear or cubic scaling mechanism. The linear scaler takes 4 points and maps them into 1 (when cutting the width and height in half) - somewhat the same thing happens when antialiasing in a renderer: it calculates 4 (or more) slightly offset rays per pixel and averages over the results. The cubic scaler is a little bit more interesting, but the same result holds for it as well.

To rephrase the previous section: any rescaler that downscales an image throws away information. It stores the average (calculated in some smart way) of a number of points in a single point. This means that you can still modify those points as long as you end up with the same average afterwards. Let me show an example (in this case using the linear scaler because it is easier to show):

Bob has a workunit that tells his machine to render a low resolution version of the following image, which shows a red bird flying towards a green larva on a transport tape:

Bob is a nice person and has a machine without bugs, so he renders the following (correct) lowres image:


Now Eve (who is evil) does not like gray images but loves exclamation marks - and since there are no exclamation marks in the original image she decides to add one and turn the gray background into a grid instead!
First Eve renders the correct image (as seen above), but then she modifies it to look like this (the exclamation mark is in the upper right corner):

Now, we can easily agree that this is pretty far from the correct image - not one single pixel is what it should be. Nevertheless Eve has carefully constructed the image in such a way that her changes will not be seen when the image is scaled down (because of the averaging):
[Bob's downscaled image] vs [Eve's downscaled image]

Similarly, self-negating random patterns will not be detected. I've never seen such an error happen with Blender, but I wouldn't rule it out.

Apart from that there's the issue of time: rendering with antialiasing is usually the same as rendering a larger image without antialiasing and then scaling it down - and it takes about as long.
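Eve's trick can be reproduced numerically. The sketch below uses a flat gray 4x4 image standing in for Bob's render (the values and sizes are invented for illustration): every pixel is changed, yet every 2x2 average is preserved, so the downscaled versions are identical.

```python
def downscale2x(img):
    """Linear 2x downscale: average each 2x2 block.
    img is a list of rows of grayscale values."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# Bob's correct image: flat gray.
bob = [[128] * 4 for _ in range(4)]

# Eve changes every single pixel, but the +40 and -40 offsets inside
# each 2x2 block cancel out, so each block still averages to 128.
eve = [[168, 88, 168, 88],
       [88, 168, 88, 168],
       [168, 88, 168, 88],
       [88, 168, 88, 168]]
```

Here `bob != eve` (not one pixel matches), yet `downscale2x(bob) == downscale2x(eve)`, which is exactly why comparing only downscaled images cannot catch this class of tampering.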

(All images in this post have been blown up by a factor of 5 to make it easier to see any differences)
Christian Sturm

Joined: 12 Mar 06
Posts: 47
Credit: 830
RAC: 0
Message 5838 - Posted: 21 May 2007, 13:34:43 UTC

Wow! Janus... that's an impressive explanation!!! Thanks for that...
Ok, you win. My mistake.

Thank you for that great explanation...

Regards
Christian Sturm

PS: Greetings to you and all the fabulous people working on BURP!

Christian Sturm

Joined: 12 Mar 06
Posts: 47
Credit: 830
RAC: 0
Message 5839 - Posted: 21 May 2007, 14:44:36 UTC
Last modified: 21 May 2007, 14:49:25 UTC

Ok, I like braincrunching ;)

I thought about what you explained... maybe I've got an exploit detector...

The two images look the same once they are scaled to the same size...
that's true.

Maybe I am wrong, but:

Add the Turbotrash exploit detector (by me ;)...
...by mixing the source with a 50% opacity overlay:

"A random colour AND brightness pattern..."

The comparison results then look like this:

After scaling down to equal sizes there are ****NOW**** different results...

Ok...
*BUUUURRRRPPP* Alert... here's something wrong....

Am I wrong with this theoretical garbage?
The overlay pattern makes the pictures distinguishable...
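The idea can be sketched as a weighted downscale: multiply each pixel by a secret random weight before averaging. This is my reading of the proposal, with invented weights standing in for the overlay pattern; it reuses the flat-gray Bob image and the average-preserving Eve image from Janus's example:

```python
import random

def weighted_downscale2x(img, weights):
    """Downscale 2x, but first multiply each pixel by a weight from
    a secret overlay pattern of the same size as the image."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            s = sum(img[y+dy][x+dx] * weights[y+dy][x+dx]
                    for dy in (0, 1) for dx in (0, 1))
            row.append(s / 4)
        out.append(row)
    return out

bob = [[128] * 4 for _ in range(4)]   # correct flat-gray image
eve = [[168, 88, 168, 88],            # every pixel wrong, but every
       [88, 168, 88, 168],            # plain 2x2 average is still 128
       [168, 88, 168, 88],
       [88, 168, 88, 168]]

uniform = [[1.0] * 4 for _ in range(4)]        # no overlay: plain averaging
rng = random.Random(42)                        # secret random pattern
secret = [[rng.uniform(0.5, 1.5) for _ in range(4)] for _ in range(4)]
```

With `uniform` weights the two downscales still match (Eve wins), but with the `secret` pattern the weighted averages differ, because Eve would have had to know the pattern in advance to keep the weighted sums equal.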

Greetings
Chris


Christian Sturm

Joined: 12 Mar 06
Posts: 47
Credit: 830
RAC: 0
Message 5845 - Posted: 22 May 2007, 13:45:15 UTC

Janus?
Profile Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4563
Credit: 2,097,282
RAC: 0
Message 5846 - Posted: 22 May 2007, 14:15:23 UTC - in response to Message 5845.  
Last modified: 22 May 2007, 14:31:05 UTC

Lack of time...

You've got a valid point, and if it is extended with a high-intensity filter, a low-intensity filter and an extra rendered copy (for verification) of Bob's scaled-down result, it would actually only be a constant factor (2-8 times?) worse at detecting image flaws than the system currently used at BURP.
This is based on a very quick analysis of your idea - no conclusions.

However, there are a few other issues that I haven't had time to look into:
1) Will it actually be faster? (It seems so - rendering 2x lowres + 1x highres should be less work than 2x highres...)
2) How do you calculate credit when only one person actually rendered the thing? (Using a scaling factor on the low-res image gives a very broad range and is too imprecise.)
3) If (2) cannot be handled, will rendering 2x lowres + 2x highres be enough to establish a valid credit amount? (This is still comparable with rendering 2x highres alone, so it clearly cannot be efficient anymore, but the expected time is still better than the 3x highres of the current strategy.) But it kind of destroys the intention of cutting down the total render time spent on verification.
4) How does this affect timing? Since only one person renders the actual image, a timeout on this part would be very bad - similarly, a timeout for one of the lowres images would also delay the final result. Is the probability distribution of this happening better or worse in the suggested scenario?
5) Would this put additional load on the central server structure and move the delay in there instead? It requires a completely different validation system as well as added dependencies between workunits - is this even possible given the current BOINC platform?
6) Is the constant factor too large to be usable - will correct results get through, or will tampered results get through too? Does this require additional filters to cut down the factor? How many?
...

We are currently sending out 4 copies and only 3 need to validate to get through - this is to increase the render speed when one of the 3 needed results goes missing (a computer crashes or simply doesn't return the result). Also, we use 3 results instead of 2 to be able to grant "the middle" claimed credit to all valid results.
A similar scenario for the suggested idea would require 4x highres + 4x lowres (or possibly only +3x lowres, because you could use a shorter deadline). Without thinking about the credit issue you could cut down the current system to 2x highres and the suggested idea to 1x highres + 2x lowres.
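As a back-of-envelope comparison of the variants being discussed (under the assumption that a half-resolution render costs roughly a quarter of a full one, and ignoring block overhead and all the credit/timing issues listed above):

```python
HIGH = 1.0        # cost of one full-resolution render (normalised)
LOW = HIGH / 4    # half width x half height -> roughly a quarter of the pixels

current_trimmed = 2 * HIGH              # current system cut down: 2x highres
suggested_trimmed = 1 * HIGH + 2 * LOW  # suggested idea: 1x highres + 2x lowres

current_full = 4 * HIGH                 # today's 4 copies
suggested_full = 4 * HIGH + 4 * LOW     # 4x highres + 4x lowres variant
```

The trimmed suggested scheme costs 1.5 frame-equivalents against 2.0 for the trimmed current one, but the redundancy-matched variant (5.0 vs 4.0) is actually more expensive - which is the point about verification savings being eaten up.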

I think you get the point: There\'s more to this than the validity of the image itself.
baracutio
Project donor

Joined: 29 Mar 05
Posts: 96
Credit: 174,604
RAC: 0
Message 5848 - Posted: 22 May 2007, 15:54:52 UTC - in response to Message 5846.  


> 4) How does this affect timing? Since only 1 person renders the actual image a timeout on this part would be very bad - similarly a timeout for one of the lowres images would also delay the final result. Is the probability distribution of this happening better or worse in the suggested scenario?

What would happen if one session were split into 2 'different' runs?
First run -> calculate the 2 low-res renders of the whole FRAME.
When the first run is 50% done, you could start the second run, calculating the high-res render of each PART.


> 2) How do you calculate credit when only one person actually rendered the thing (using a scalar on the low-res image gives a very broad range and is too imprecise).

Hm, maybe you take the credit given for the low-res image (the whole FRAME), multiply it by a factor of 2 or so (whether it should be more or less would need testing) and make that the fixed credit for each PART of the actual frame.

I don't know if all that would be the best way, but you can think about it ;)



Regards
bara
noderaser
Project donor
Joined: 28 Mar 06
Posts: 516
Credit: 1,567,702
RAC: 0
Message 5849 - Posted: 22 May 2007, 16:05:22 UTC

Side question: if a unit fails validation, does this show up on the workunit status page? I ask because I have one computer that is having some weird performance issues with BOINC, and I suspect it may have a hardware problem. It seems to hang on work units from some projects, and usually performs far worse than a computer half its speed.
Profile Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4563
Credit: 2,097,282
RAC: 0
Message 5850 - Posted: 22 May 2007, 16:14:56 UTC - in response to Message 5849.  
Last modified: 22 May 2007, 16:15:22 UTC

@noderaser:
The result would not be granted credit, and if you click the result it will show "Validation failed" (or similar) as the validation status.
