Memory requirements

Profile nubz
Joined: 23 Aug 05
Posts: 6
Credit: 10,356
RAC: 0
Message 3498 - Posted: 9 Jun 2006, 14:34:07 UTC

Message from BOINC Manager:
6/9/2006 9:51:16 AM|BURP|Message from server: Your computer has only 527937536 bytes of memory; workunit requires 8933376 more bytes

Is this going to be the memory requirement for the WUs from now on?

ID: 3498
Profile Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 3503 - Posted: 9 Jun 2006, 16:04:51 UTC - in response to Message 3498.  

Message from BOINC Manager:
6/9/2006 9:51:16 AM|BURP|Message from server: Your computer has only 527937536 bytes of memory; workunit requires 8933376 more bytes

Is this going to be the memory requirement for the WUs from now on?

No. The memory requirement is based on the complexity of the session.
ID: 3503
Profile Steve Cressman
Joined: 27 Mar 05
Posts: 142
Credit: 3,243
RAC: 0
Message 3507 - Posted: 9 Jun 2006, 17:03:45 UTC

@ Janus

Do all three sessions in the queue require greater than 512 megs?
If not then I think there is a problem.
When the scheduler contacts BURP and gets this:

6/9/06 5:59:32 AM|BURP|Scheduler request to http://burp.boinc.dk/burp_cgi/cgi succeeded
6/9/06 5:59:32 AM|BURP|Message from server: Your computer has only 536289280 bytes of memory; workunit requires 581632 more bytes
6/9/06 5:59:32 AM|BURP|Message from server: No work sent
6/9/06 5:59:32 AM|BURP|Message from server: (there was work but your computer doesn't have enough memory)

It defers contact for twenty-four hours. So the problem I see is that if there is a mix of work in the queue requiring differing amounts of RAM, then the hosts that get deferred may never see the work that requires less memory.

Also, I don't really think that half a meg more of memory would make much difference in rendering ability. It would be interesting to see what would happen if you changed the requirement to 510 MB instead of the 512 MB it is at now.

I wish there were a way to force it to get at least one of the work units to see if there are any difficulties.
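
As an illustration, the "requires N more bytes" wording in the log suggests the check is a plain subtraction of the host's reported RAM from the workunit's memory bound. This is only a sketch under that assumption (the function and parameter names are made up), but the arithmetic reproduces the figures from the 9 Jun log:

```python
def memory_check(host_mem_bytes: int, wu_mem_bound_bytes: int) -> str:
    """Sketch of the scheduler's check as implied by the log lines above.
    The names here are assumptions; only the arithmetic is taken from the
    "requires N more bytes" message."""
    if host_mem_bytes >= wu_mem_bound_bytes:
        return "work can be sent"
    shortfall = wu_mem_bound_bytes - host_mem_bytes
    return (f"Your computer has only {host_mem_bytes} bytes of memory; "
            f"workunit requires {shortfall} more bytes")

# Figures from the 9 Jun log: 536289280 bytes of RAM vs. a 512 MB (536870912 byte) bound
print(memory_check(536289280, 536870912))   # ... workunit requires 581632 more bytes
```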
Win98SE XP2500+ Boinc v5.8.8

And God said, "Let there be light." But then the program crashed because he was trying to access the 'light' property of a NULL universe pointer.
ID: 3507
Bones
Project donor
Joined: 25 Apr 05
Posts: 16
Credit: 149,899
RAC: 0
Message 3523 - Posted: 10 Jun 2006, 2:30:37 UTC - in response to Message 3507.  

So far the session 192 WUs I've had peak at about 70 MB of memory usage. So will session 192 use more than 70 MB? If not, maybe the memory requirement for this session could be decreased to get more PCs involved.
ID: 3523
Achim
Joined: 17 May 05
Posts: 183
Credit: 2,642,713
RAC: 0
Message 3553 - Posted: 10 Jun 2006, 19:40:40 UTC

Similar on my long-term WUs.

Is there a way to determine the required memory, or is it just an educated guess?
And how does splitting one frame into parts change the memory requirement (not at all, maybe linearly, or even better)?

If there is no automated way, maybe there is a way for the BOINC API part of BURP to monitor the memory usage of a WU and report it back to BURP.

With this information, the memory check could be adjusted
(starting with a high number and hopefully getting lower).
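
If nothing in the stock BOINC API already reports this, a small wrapper around the renderer could record its own peak resident set size and ship the number back with the result. A minimal sketch, assuming a POSIX host and using only the Python standard library (this is not BURP's actual code):

```python
import resource
import sys

def peak_rss_bytes() -> int:
    """Peak resident set size of this process. ru_maxrss is reported in
    kibibytes on Linux and in bytes on macOS, hence the platform check."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return peak if sys.platform == "darwin" else peak * 1024

# ... run the render job here ...

# A wrapper could append this figure to the result so the server can refine
# the session's memory requirement over time instead of guessing.
print(f"peak memory use: {peak_rss_bytes()} bytes")
```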
ID: 3553
Profile Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 3642 - Posted: 12 Jun 2006, 23:01:15 UTC - in response to Message 3507.  

Do all three sessions in the queue require greater than 512 megs?

No, and that's actually a very valid point.
I don't know how to explain this well because it is very technical, but I'll try anyway:
Last time I talked to the BOINC devs about it, they explained that the scheduler (or the feeder, actually, but never mind) caches XXX workunits and then waits for people to request some. When the mix of WUs is very uneven (like now, where 80,000 units are for session 192 and the rest are for the remaining sessions in the queue) there's a high risk that the XXX cached WUs are from one session only. This means that the scheduler will never "see" the WUs with lower memory requirements until a lot of the ones with higher requirements have been processed.

The solution would be a change in the way the feeder picks WUs out of the queue; however, this change was never implemented because it turned out that it would be very inefficient.

So a very large session can sometimes "lock out" other sessions and keep them from getting started.
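
To make the effect concrete, here is a toy illustration (plain Python, not BOINC code; the cache size and session mix are invented) of a FIFO feeder whose fixed number of cache slots gets filled entirely by the oldest, largest session:

```python
from collections import Counter

# Invented queue mix: one huge session plus two small ones queued behind it
queue = ["session 192"] * 80000 + ["session 193"] * 500 + ["session 194"] * 500

FEEDER_SLOTS = 100                      # the "XXX" workunits the feeder caches

cache = queue[:FEEDER_SLOTS]            # FIFO: the oldest WUs take every slot
print(Counter(cache))                   # Counter({'session 192': 100})

# Hosts that cannot meet session 192's memory requirement only ever see these
# 100 WUs, get "no work sent", and are deferred; the smaller sessions never
# reach the cache until a large chunk of session 192 has been handed out.
```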
ID: 3642
Profile Steve Cressman
Joined: 27 Mar 05
Posts: 142
Credit: 3,243
RAC: 0
Message 3646 - Posted: 12 Jun 2006, 23:53:23 UTC

Thanx Janus, I understand much of the backend process, but not all of it. That is just another drawback of FIFO. It would be nice if a way could be found to have a somewhat random sampling of work from all the sessions sitting in the feeder without it being inefficient ;) Maybe someday somebody will have a brainstorm and come up with an idea.
:)
Win98SE XP2500+ Boinc v5.8.8

And God said, "Let there be light." But then the program crashed because he was trying to access the 'light' property of a NULL universe pointer.
ID: 3646
Profile Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 3651 - Posted: 13 Jun 2006, 0:24:04 UTC - in response to Message 3646.  
Last modified: 13 Jun 2006, 0:26:04 UTC

Thanx Janus, I understand much of the backend process, but not all of it. That is just another drawback of FIFO. It would be nice if a way could be found to have a somewhat random sampling of work from all the sessions sitting in the feeder without it being inefficient ;) Maybe someday somebody will have a brainstorm and come up with an idea.
:)

Actually, I think I just came up with one that doesn't even require any code changes in the scheduler. I will have to think a bit more about it, though.
It basically involves using random priorities within intervals, so that each workunit will have a priority within 1-100 if it is low-priority, 101-200 if it is high-priority, etc.
Then the feeder will always pick the highest-priority ones first (it already does this, but so far we have only 1 and 2 as priorities). The highest-priority ones will be distributed amongst all the sessions that are in the queue. Using an unfair weighting when a session uses less memory ensures that there will always be some low-requirement sessions in the feeder cache. Since those workunits will then be picked both by machines with little memory and by those with more, it is only fair to give them a tiny advantage compared to those with higher requirements. This means that a single session can no longer dominate the feeder cache.

Ok, this got technical again. Sorry about that =)
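
A rough sketch of that banding idea follows (plain Python; the band boundaries, the size of the "tiny advantage", and the session mix are all illustrative guesses, not values the project has committed to). Each workunit draws a random priority inside its band, low-memory sessions can reach the very top of the band, and the feeder keeps doing what it already does: picking the highest priorities first.

```python
import random
from collections import Counter

BANDS = {1: (1, 100), 2: (101, 200)}     # low-prio and high-prio intervals (illustrative)

def assign_priority(prio_class: int, low_memory: bool) -> int:
    """Random priority within the class's band; low-memory sessions may draw
    from the very top of the band, which other sessions cannot reach."""
    lo, hi = BANDS[prio_class]
    edge = 0 if low_memory else 5        # the "tiny advantage" (made-up value)
    return random.randint(lo, hi - edge)

# Same invented mix as before, all in priority class 1: one huge session
# plus two small low-memory ones
queue = ([("session 192", False)] * 80000 +
         [("session 193", True)] * 500 +
         [("session 194", True)] * 500)

ranked = sorted(((assign_priority(1, low), name) for name, low in queue), reverse=True)
cache = [name for _, name in ranked[:100]]   # the feeder still just takes the top slots
print(Counter(cache))   # the small low-memory sessions now reliably claim a share
```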
ID: 3651
Profile DangerNerd
Project donor
Joined: 31 Mar 06
Posts: 127
Credit: 512,997
RAC: 0
Message 3662 - Posted: 13 Jun 2006, 5:55:22 UTC - in response to Message 3651.  

Actually, I think I just came up with one that doesn't even require any code changes in the scheduler. I will have to think a bit more about it, though.
It basically involves using random priorities within intervals, so that each workunit will have a priority within 1-100 if it is low-priority, 101-200 if it is high-priority, etc.
Then the feeder will always pick the highest-priority ones first (it already does this, but so far we have only 1 and 2 as priorities). The highest-priority ones will be distributed amongst all the sessions that are in the queue. Using an unfair weighting when a session uses less memory ensures that there will always be some low-requirement sessions in the feeder cache. Since those workunits will then be picked both by machines with little memory and by those with more, it is only fair to give them a tiny advantage compared to those with higher requirements. This means that a single session can no longer dominate the feeder cache.

Ok, this got technical again. Sorry about that =)

Janus,

I didn't think that was overly technical. You made it very easy to follow.

The idea itself is brilliant. Once again, the random number generator saves the day. :-)

Since I began working in PHP, I can't tell you how many times I have made non-standard use of a couple of different random number generator functions.

I hope it is easy to implement.

DN.
Our Advice is to support all useful BOINC projects. Smart people needed to give advice to those who seek answers: Give or Get Free Advice Here
ID: 3662
Profile Steve Cressman
Joined: 27 Mar 05
Posts: 142
Credit: 3,243
RAC: 0
Message 3693 - Posted: 13 Jun 2006, 18:38:14 UTC

Ditto for what DangerNerd said :)
Sounds like it should do the trick.
Win98SE XP2500+ Boinc v5.8.8

And God said, "Let there be light." But then the program crashed because he was trying to access the 'light' property of a NULL universe pointer.
ID: 3693
Ellastoman
Joined: 29 Jul 05
Posts: 7
Credit: 831,661
RAC: 0
Message 4169 - Posted: 15 Oct 2006, 14:01:17 UTC
Last modified: 15 Oct 2006, 14:02:37 UTC

I think that there should be a listing in the Session Gallery of how much memory is required for each session, or on the server status page.
ID: 4169
Profile Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 4173 - Posted: 15 Oct 2006, 14:51:44 UTC - in response to Message 4169.  

I think that there should be a listing in the Session Gallery of how much memory is required for each session, or on the server status page.

This information is now on the server status page.
ID: 4173
Ellastoman
Joined: 29 Jul 05
Posts: 7
Credit: 831,661
RAC: 0
Message 4199 - Posted: 19 Oct 2006, 4:48:21 UTC
Last modified: 19 Oct 2006, 4:48:45 UTC

Thank you for listing how much memory is required. It helps me with some of the sessions on the computer that I am using right now.
ID: 4199
Lee Carre
Joined: 30 Sep 05
Posts: 183
Credit: 0
RAC: 0
Message 4205 - Posted: 21 Oct 2006, 2:40:43 UTC - in response to Message 3646.  
Last modified: 21 Oct 2006, 2:41:10 UTC

Last time I talked to the BOINC devs about it, they explained that the scheduler (or the feeder, actually, but never mind) caches XXX workunits and then waits for people to request some. When the mix of WUs is very uneven (like now, where 80,000 units are for session 192 and the rest are for the remaining sessions in the queue) there's a high risk that the XXX cached WUs are from one session only. This means that the scheduler will never "see" the WUs with lower memory requirements until a lot of the ones with higher requirements have been processed.

The solution would be a change in the way the feeder picks WUs out of the queue; however, this change was never implemented because it turned out that it would be very inefficient.
Thanx Janus, I understand much of the backend process, but not all of it. That is just another drawback of FIFO. It would be nice if a way could be found to have a somewhat random sampling of work from all the sessions sitting in the feeder without it being inefficient ;) Maybe someday somebody will have a brainstorm and come up with an idea.
:)

Nobody thought to add an option/switch in the configuration file to choose which mode to use, with the more efficient one as the default?
Want to search the BOINC Wiki, BOINCstats, or various BOINC forums from within Firefox? Try the BOINC-related Firefox Search Plugins
ID: 4205
Profile Peter M. Nielsen
Project donor
Joined: 14 Mar 05
Posts: 130
Credit: 307
RAC: 0
Message 4212 - Posted: 21 Oct 2006, 4:51:15 UTC - in response to Message 3651.  

Thanx Janus, I understand much of the backend process, but not all of it. That is just another drawback of FIFO. It would be nice if a way could be found to have a somewhat random sampling of work from all the sessions sitting in the feeder without it being inefficient ;) Maybe someday somebody will have a brainstorm and come up with an idea.
:)

Actually, I think I just came up with one that doesn't even require any code changes in the scheduler. I will have to think a bit more about it, though.
It basically involves using random priorities within intervals, so that each workunit will have a priority within 1-100 if it is low-priority, 101-200 if it is high-priority, etc.
Then the feeder will always pick the highest-priority ones first (it already does this, but so far we have only 1 and 2 as priorities). The highest-priority ones will be distributed amongst all the sessions that are in the queue. Using an unfair weighting when a session uses less memory ensures that there will always be some low-requirement sessions in the feeder cache. Since those workunits will then be picked both by machines with little memory and by those with more, it is only fair to give them a tiny advantage compared to those with higher requirements. This means that a single session can no longer dominate the feeder cache.

Ok, this got technical again. Sorry about that =)


Well, the problem is even worse if you consider that you have multiple platforms and memory requirements. You need to have some way to make sure you always have a work unit with a low memory requirement, and if the first WU is taken by a Linux computer, the rest should also be sent to a Linux computer. I don't know if this is true for BURP?
ID: 4212
Profile Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 4215 - Posted: 21 Oct 2006, 10:51:35 UTC

No, Blender is consistent across platforms (at least when there are no bugs in it, like the flipped-normal-map bug that we've seen here earlier), so there's no need to issue work to specific platforms.
ID: 4215
Deamiter
Joined: 12 Mar 06
Posts: 8
Credit: 12,581
RAC: 0
Message 4367 - Posted: 17 Dec 2006, 22:24:38 UTC

Just from what you said, Janus, wouldn't using random numbers mean that just about every session would take 5-10 times as long to complete? You're going to be constantly adding WUs with random priorities, and the one that gets a 1 will just never be sent out!

A solution could be to add to the priority over time, but that would probably require more coding...
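
One hypothetical shape for that aging step (again only a sketch, not anything the project has said it runs): a periodic pass that bumps the priority of workunits that have waited too long, so an unlucky random draw cannot strand them forever.

```python
import time

BAND_TOP = 100                              # top of the low-priority band (illustrative)

def age_priorities(workunits, now=None, bump_after_hours=24, bump=5):
    """Raise the priority of WUs that have sat unsent for too long,
    capped at the top of their band."""
    now = time.time() if now is None else now
    for wu in workunits:                    # each wu: {"created": ts, "priority": int}
        waited_hours = (now - wu["created"]) / 3600
        if waited_hours >= bump_after_hours and wu["priority"] < BAND_TOP:
            wu["priority"] = min(BAND_TOP, wu["priority"] + bump)

# Run periodically (e.g. from a cron job next to the feeder): a WU that drew
# priority 1 a day ago climbs by 5 per pass until it reaches the top of its band.
```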
ID: 4367
Profile Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 4371 - Posted: 18 Dec 2006, 12:39:48 UTC

So far the issue has been solved by simply increasing the number of workunits the feeder can see at any given time. No randomized priorities are used at the moment, since they caused other unforeseen performance issues (more fragmentation in the database for instance).
ID: 4371