The big Beta rewrite plan

Message boards : Server backend and mirrors : The big Beta rewrite plan
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8015 - Posted: 3 Apr 2008, 7:11:45 UTC
Last modified: 5 Jun 2010, 20:09:02 UTC

Hey all

We are now reaching the second half of Alpha. This means that the current feature set will be frozen and that all development energy will be focused on writing the Beta code. The Beta version of the serverside BURP software will go under the version numbers 0.3.x, so you may see "v.3" or "Beta" used interchangeably.

Question: What does this mean for the alpha project? Will it die?
No. The Alpha prototype website will continue to work as it has been working so far - until the release of the Beta version. You will still be able to up/download material to it and all data will be carried over to the new website once the code is ready.
The Beta work will in no way affect the currently running Alpha project with the exception that no new features will be added to the Alpha version. Feature suggestions will be added to the TODO for the Beta software.

Question: Can I still render on the Alpha project while the Beta version is being made?
Yes. In fact you are encouraged to make use of the Alpha version. It is considered "as complete as it gets".

Question: The title mentions a "big rewrite" - what is this?
One way of making software is to build a working prototype, look at it, determine what is wrong with it or where it can be improved, and then build a new, improved prototype based on what you have learned from the previous one.
The current v.2 system has a lot of limitations that could be lifted if the system were designed differently.

The rewrite will roughly follow the plan sketched here and you will be able to see the progress of each individual step on the front page. Each step will be further explained in a set of forum posts when development of that step has progressed far enough:

  • Local Network Services


  • Database Related Services

    • RenderQueueHandler
    • MirrorHandler
    • StatisticsHandler
    • CleanupHandler (session workunits)


  • External communication

    • Website frontend
    • RPCService

      • Upload
      • Session control
      • Statistics


    • MessengerService


  • Clientside

    • Screensaver
    • CATS
    • Modeler's Toolkit (checking, packing etc.)


  • Documentation

    • Code
    • Use of the system
    • Legal docs




As soon as the new prototype is relatively stable it will be open-sourced under the GPL and made available to the public. At the same time the code will be deployed on this website (and possibly other places).
When the Alpha milestones have been satisfied we will progress to the Beta phase of this project. There are some points in the Alpha milestones that may have to be reevaluated at that time - more specifically binary Mac OS X support and support for checkpointing.

Question: What if I find a nasty bug in the Alpha version, will it not be fixed before Beta then?
The freeze applies to new features only. This means that bugs will be fixed at the earliest possible time. Some bugs, however, will require a new feature in order to be truly fixed - in this case you may have to wait.

Question: Where do I post ideas about improving the system?
If the idea is not already covered in the list of limitations then feel free to post about it in the appropriate development subforum.

Question: How long will it take to do this?
It will be released when it is ready - no earlier, no later.

ID: 8015
Achim

Joined: 17 May 05
Posts: 183
Credit: 2,642,713
RAC: 0
Message 8018 - Posted: 3 Apr 2008, 7:51:00 UTC - in response to Message 8015.  

Hey all

Question: What does this mean for the alpha project? Will it die?
No. The Alpha prototype website will continue to work as it has been working so far - until the release of the Beta version.

This implies as well that the current background rendering will stay as it is, and is not replaced by an improved version as you mentioned earlier?
ID: 8018
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8019 - Posted: 3 Apr 2008, 7:55:16 UTC - in response to Message 8018.  

This implies as well that the current background rendering will stay as it is, and is not replaced by an improved version as you mentioned earlier?

No. The new background rendering feature was the last feature to be added to the system. It is already deployed but session 726 will complete using the older method.
Any future background sessions will use the new method.
ID: 8019
keeneym
Project developer
Joined: 7 Feb 08
Posts: 54
Credit: 224,663
RAC: 0
Message 8025 - Posted: 4 Apr 2008, 1:42:23 UTC

Are you still looking for developers? I would be more than willing to help code or document.
ID: 8025
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8027 - Posted: 4 Apr 2008, 8:23:54 UTC - in response to Message 8025.  
Last modified: 4 Apr 2008, 8:24:29 UTC

Are you still looking for developers? I would be more than willing to help code or document.

Sure, any help is very much appreciated! What qualifications do you have? I.e. what programming languages do you know and what level of experience do you have?

I think the places that are easiest to "jump into" are the Modeler's Toolkit and the documentation. AC is working on the checker (which checks that the .blend file is valid before sending it to the server), but the plan is to also have a packer (a small tool, preferably written in Python for Blender-internal use or Java for external use) that allows you to figure out all the dependencies in a .blend and pack them with the .blend into one or more archives.

The documentation part really starts when the new website frontend is getting ready. Then we need HOWTOs and FAQs for everything. Currently the documentation part is mostly related to improving and documenting system error messages (like the errors people get in their mail if they made a mistake in the scene-file).
ID: 8027
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8030 - Posted: 4 Apr 2008, 17:10:22 UTC
Last modified: 4 Apr 2008, 17:15:46 UTC

-----------------------
Local network services.
-----------------------

One of the big changes between v2 and v3 is the adoption of distributed computing on the serverside. Where the old system used a centralized approach to validation, assimilation etc., the new system will attempt to spread this load onto other local machines. Since these machines are not always available and may sporadically disconnect, it is necessary to allow them to do so without affecting the rest of the system.

To do this a ServiceManager, a ServiceRequestHandler and a set of distributed services will be developed. The list of services is available in the first post in this thread.

The remainder of this post is a copy of some text from the code describing the services in general:

/**
* A service is a network daemon with certain abilities like
* start, stop and status accessors. It performs some kind of CPU intensive
* task.
*
* Services may be distributed to any of the machines available on the local
* network segment running a ServiceManager. This way the central server can
* spread the load of the tasks that it is given onto these machines.
* Services are required to be stateless and are allowed to crash or fail in any
* way they like. This enables machines with ServiceManagers to be hotplugged
* into the network and, for instance, only provide their services at certain
* times of the day.
*/
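As a rough illustration, the contract described in the comment above might look something like this in Java. All names here are invented for the sketch and are not taken from the actual BURP source:

```java
// Hypothetical sketch of the service contract described above.
// Interface and class names are invented for illustration.
interface Service {
    void start();          // begin accepting work
    void stop();           // shut down; services are stateless, so nothing is saved
    boolean isRunning();   // status accessor polled by the ServiceManager
    String serviceClass(); // e.g. "validator", "stitcher", "parser"
}

// A trivial do-nothing implementation showing the lifecycle.
class NoopService implements Service {
    private volatile boolean running = false;
    public void start() { running = true; }
    public void stop() { running = false; }
    public boolean isRunning() { return running; }
    public String serviceClass() { return "noop"; }
}
```

Because services are stateless and allowed to fail at any time, a crashed service can simply be restarted (or its machine unplugged) without any recovery protocol.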

ID: 8030
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8031 - Posted: 4 Apr 2008, 17:16:38 UTC
Last modified: 13 Apr 2008, 12:32:47 UTC

------------------
The ServiceManager
------------------

The basic idea here is that the central server yells onto the local network:
- "Anybody here who knows how to validate a workunit?"
The ServiceManagers then answer:
- "Yes, I have a service that knows exactly how to do that! You can find it here: [some data describing the location]"
The central server can then go on to contact the service directly.

Simple and easy.


/**
* The ServiceManager launches a set of services and manages the access to these.
* It listens for UDP serviceClass request broadcasts on a set port and replies with
* a list of ports where the service of the requested serviceClass is present.
*
* Only one serviceClass manager is ever present on a single machine.
*
* The ServiceManager has a ServiceAcceptanceStrategy that determines which connections
* each service is allowed to accept.
* Similarly the ServiceManager uses the AccessManager to check for allowed connections
* to itself.
*
* Using a set of ServiceManagers distributed on several machines on the local network
* it is possible to locate and deploy a range of different services onto these machines
* and dynamically spread the load across the attached systems.
*/
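To make the yell/answer exchange concrete, here is a minimal self-contained sketch of the UDP discovery round trip, using loopback sockets instead of a real LAN broadcast. The text protocol ("SERVICE?" / "HERE:") and the port numbers are invented for illustration; the real wire format may well differ:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Sketch of ServiceManager discovery: the central server asks for a
// serviceClass over UDP and a manager replies with the ports hosting it.
class DiscoveryDemo {
    // The ServiceManager's answer: where (if anywhere) the requested serviceClass lives.
    static String reply(String request) {
        if (request.equals("SERVICE? validator")) {
            return "HERE: 40001,40002"; // invented ports hosting validator services
        }
        return "NONE";
    }

    public static void main(String[] args) throws Exception {
        try (DatagramSocket manager = new DatagramSocket(0, InetAddress.getLoopbackAddress());
             DatagramSocket server = new DatagramSocket(0, InetAddress.getLoopbackAddress())) {
            // Central server: "Anybody here who knows how to validate a workunit?"
            byte[] ask = "SERVICE? validator".getBytes(StandardCharsets.UTF_8);
            server.send(new DatagramPacket(ask, ask.length, manager.getLocalSocketAddress()));

            // ServiceManager answers with the location of its validator service.
            DatagramPacket req = new DatagramPacket(new byte[512], 512);
            manager.receive(req);
            String question = new String(req.getData(), 0, req.getLength(), StandardCharsets.UTF_8);
            byte[] ans = reply(question).getBytes(StandardCharsets.UTF_8);
            manager.send(new DatagramPacket(ans, ans.length, req.getSocketAddress()));

            // The central server can now contact the service on one of those ports directly.
            DatagramPacket resp = new DatagramPacket(new byte[512], 512);
            server.receive(resp);
            System.out.println(new String(resp.getData(), 0, resp.getLength(), StandardCharsets.UTF_8));
        }
    }
}
```

In the real setup the request would go to a broadcast address on a set port, so any number of ServiceManagers on the segment could answer.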


---------------------
The ServiceArbitrator
---------------------

Part of the framework surrounding the ServiceManager is the ServiceArbitrator. This is the part of the server that yells onto the network whenever it needs some particular service.
It also handles the distribution of the load by directing work to different machines each time a request pops up.
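The load-directing part could be as simple as a round-robin pick over the known providers, as in this sketch (class and method names are illustrative only, not from the BURP source):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the load-spreading idea: the arbitrator hands each
// new request to the next known provider in turn.
class ServiceArbitrator {
    private final List<String> providers; // e.g. "192.168.0.11:40001"
    private final AtomicInteger next = new AtomicInteger();

    ServiceArbitrator(List<String> providers) { this.providers = providers; }

    // Pick a provider round-robin so consecutive requests land on different machines.
    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), providers.size());
        return providers.get(i);
    }
}
```

A real arbitrator would also refresh the provider list from discovery replies and drop providers that stop answering, since machines may disconnect at any time.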
ID: 8031
keeneym
Project developer
Joined: 7 Feb 08
Posts: 54
Credit: 224,663
RAC: 0
Message 8032 - Posted: 4 Apr 2008, 18:00:07 UTC

Most of my experience is in Java and I don't have any experience with Python. I have looked at the burp3 CVS and understand the outline of what you are doing. I do not have a ton of experience with the internal workings of Blender so I may not be much help with the packer. Without much Blender experience I was thinking I would be better suited to work on the validation, assimilation or other BURP classes.
ID: 8032
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8035 - Posted: 4 Apr 2008, 20:07:43 UTC - in response to Message 8032.  
Last modified: 4 Apr 2008, 20:25:29 UTC

Most of my experience is in Java and I don't have any experience with Python. I have looked at the burp3 CVS and understand the outline of what you are doing. I do not have a ton of experience with the internal workings of Blender so I may not be much help with the packer. Without much Blender experience I was thinking I would be better suited to work on the validation, assimilation or other BURP classes.

Gof is currently looking into the validator service. The assimilator, however, uses the stitcher service as part of what it does. This involves juggling around with some image data and melting it together into a single image. I agree that this could be a nice little project for you if you like.

I changed the classes in the stitcher package to match the desired layout for the data transmitted on the network.

One of the requirements would be that the stitcher should be able to handle reading and writing RGBA PNG - although it would be better if the image format did not matter at all. Another requirement is that it does everything in-memory - the system it is deployed on may not have any hard drive. On a similar note it is important to keep memory usage low.
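As an illustration of the in-memory "melting", here is a sketch that pastes rectangular parts into one full ARGB frame using only heap memory, never the disk. This is an assumption about how such a stitcher might work, not the actual code:

```java
import java.awt.image.BufferedImage;

// Sketch of the stitching step: copy each rendered part into its place in
// the full frame, alpha channel included. Names are illustrative only.
class Stitcher {
    // Copy the part's ARGB pixels (alpha included) into the frame at (x, y).
    static void paste(BufferedImage frame, BufferedImage part, int x, int y) {
        int w = part.getWidth(), h = part.getHeight();
        int[] px = part.getRGB(0, 0, w, h, null, 0, w);
        frame.setRGB(x, y, w, h, px, 0, w);
    }

    static BufferedImage stitch(int width, int height, BufferedImage[] parts, int[][] offsets) {
        BufferedImage frame = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        for (int i = 0; i < parts.length; i++) {
            paste(frame, parts[i], offsets[i][0], offsets[i][1]);
        }
        return frame;
    }
}
```

In a real deployment the parts would arrive over the network as RGBA PNGs, be decoded with ImageIO.read, stitched like this, and re-encoded with ImageIO.write - all from byte streams, so no hard drive is needed.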
ID: 8035
zombie67 [MM]
Project donor
Joined: 9 Dec 06
Posts: 93
Credit: 2,492,267
RAC: 649
Message 8037 - Posted: 5 Apr 2008, 5:34:41 UTC

Does this mean the server will be upgraded to the latest version of BOINC?
Reno, NV
Team: SETI.USA

ID: 8037
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8038 - Posted: 5 Apr 2008, 6:59:55 UTC - in response to Message 8037.  
Last modified: 5 Apr 2008, 7:01:43 UTC

Does this mean the server will be upgraded to the latest version of BOINC?

Yes. We are aiming at the BOINC v6.x+ client series for Beta - the server code matching these is expected to have several features that we need. Most importantly it is planned to have a more dynamic and configurable scheduler.

The server code used at the Alpha project will probably be updated around the same time that the Beta web frontend is getting ready (it's easier that way).
ID: 8038
Izarf
Joined: 29 Jul 05
Posts: 33
Credit: 13,637
RAC: 0
Message 8115 - Posted: 11 Apr 2008, 10:34:49 UTC

Regarding:
"No automatic baking
Could be done serverside as a pre-processing effect" in the
http://burp.boinc.dk/development/beta_switchover.txt document.


I'm interested in how the baking would be taken care of. As far as I know, the baking process is not multi-threaded (Bullet is multi-threaded, so why doesn't Blender use it that way?).
This could itself pose a problem, since it would be hard to bake the .blend files in a better way than a regular user PC would.

In my point of view, we have these options:

- Let the users bake their own animations before submission
- Let the server pre-process
- Bake animation using server-side load balancing cluster.

I do not know much about load balancing clusters, so it could mean that we need the baking process to be multi-threaded. In that case maybe we could get someone on the Blender dev team to modify the Blender code and make baking multi-threaded. That way we could make the baking process much faster than it would be on a user PC.

Janus, what thoughts do you have on the subject?
ID: 8115
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8117 - Posted: 11 Apr 2008, 14:08:12 UTC - in response to Message 8115.  
Last modified: 13 Apr 2008, 12:38:13 UTC

Baking (or physics simulation in general) is inherently data-heavy. This means that it needs access to all the data at every place where computation is done. So, at least with the current state of simulation technology, the baking is required to take place on a single machine - whether single- or multi-threaded doesn't matter.

That\'s why the plan is to support both of the following:

- Let the users bake their own animations before submission
- Let the server pre-process


[Edit:] To elaborate on the first option: Blender devs have voiced that they want a unified physics/simulation data system that allows data to be shipped more easily with .blend files. Such a system would enable BURP to render pre-baked simulations with no changes to the code on our side.
BURP does not, however, yet support per-frame simulation files, which could reduce the amount of data transferred over the internet. This optimization would work for both options. The intention is to add this support in the Beta framework.
ID: 8117
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8121 - Posted: 13 Apr 2008, 12:57:00 UTC
Last modified: 13 Apr 2008, 13:12:19 UTC

------------------
The Parser Service
------------------
Whenever a session is uploaded the system needs to load a lot of information from the session files. The main responsibility of the parser is to give the system access to information from custom or proprietary-format renderer data files, as well as matching the files with appropriate renderers.
In the case of Blender this includes parsing the selected render size, framerate etc., as well as some settings that could cause issues together with the BURP render framework (like SSS+multipart, use of unsupported features like YafRay, and panorama rendering).
The parser is somewhat similar to the checking software in the Modeler's Toolkit. Where the checking software serves as a guide for users, the parser serves as an accessor for the system software. In cases where the parser is not fully implemented in native code it may even reuse parts of the checking software.

The service makes use of several Parsers, where each renderer may have one or more associated Parsers (more in the case that the renderer supports more than one file format).



/**
* The parser service allows parsing of input files into a parse result.
* Such a result contains information about the file. This information may
* then be used by the system to determine settings depending on the file properties.
*
* The service listens for connections on a random TCP port.
*
* When a connection arrives the protocol for the parser service is the following:
* Remote: ParseRequest Object
* ParserService: true/false Boolean object or Exception or death
* Parser on original port: Send Integer port number of new port
* Remote: Send file 1 over this port
* Parser on original port: Send Integer port number of new port
* Remote: Send file 2 over this port
* ...
* ParserService: true/false Boolean object or Exception or death
* ... parsing takes place ...
* ParserService: List Object (an object for each parser)
*/


So far the implementation of the Blender parser is progressing smoothly. It is based on a modified version of Blender (the same code available in the Blender patchset) as well as a Java accessor class (BlenderParser).
Compared to the current parser the plan is to add support for a few new tests that seem to cause people some issues from time to time:

  • Test whether one of the scaling (25%, 50%, 75%) buttons is selected. Typically this is an error by the user and a warning should be returned.
  • SSS+multipart. SSS simply doesn't work with multipart rendering. Checking whether SSS is present can then limit the split option to "No split" or produce a warning.
  • Scripting. Sessions containing scripts generally take longer to get accepted. A warning could be displayed to save the user some waiting time. An automated classification system could sort sessions into sessions with and without scripts for further treatment on the server.
  • Unresolved libraries or external files. As we add support for libraries/includes it is important to warn the user about unresolved externals or missing includes as this is typically an error made by the user.
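The four tests above could boil down to something like this sketch. The settings holder and its field names are invented for illustration; a real parser would extract these values from the .blend file itself:

```java
import java.util.ArrayList;
import java.util.List;

// Invented holder for the handful of settings the tests above care about.
class SceneSettings {
    int sizePercent = 100;            // 25/50/75 means a scaling button is selected
    boolean usesSSS = false;
    boolean multipart = false;
    boolean hasScripts = false;
    boolean unresolvedLibraries = false;
}

// Sketch of the warning checks listed above.
class ParserChecks {
    static List<String> check(SceneSettings s) {
        List<String> warnings = new ArrayList<>();
        if (s.sizePercent != 100)
            warnings.add("Scaling button selected (" + s.sizePercent + "%) - probably a mistake");
        if (s.usesSSS && s.multipart)
            warnings.add("SSS does not work with multipart rendering - forcing 'No split'");
        if (s.hasScripts)
            warnings.add("Session contains scripts - acceptance may take longer");
        if (s.unresolvedLibraries)
            warnings.add("Unresolved library/external file references");
        return warnings;
    }
}
```

The script check could feed the automated classification mentioned above by simply routing sessions with a non-empty script warning into a separate queue.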

ID: 8121
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8142 - Posted: 16 Apr 2008, 7:25:46 UTC
Last modified: 17 Apr 2008, 7:24:43 UTC

-----------
The crusher
-----------

Recently, as part of the research for changes needed in beta, I ran into an interesting little utility called pngcrush and a derivative program OptiPNG. These programs claim to enable you to save around 10% of the PNG filesize while maintaining per-pixel identity! This sounded almost too good to be true but turned out to actually work extremely well with the frame files from BURP.

PNG is a lossless format, so the compression itself is naturally lossless. However, the compression parameters determine just how much compression you get. Normally zlib (the library we use as part of our PNG compression routines) uses a set of heuristics (guesses) to determine which compression parameters to use. Instead of doing that, the two programs above try out all or most of the parameters, compare the results and then choose the smallest file as the output.
With a 1TB directory of image files, 10% is a nice 100GB of saved space. So what is the cost? CPU power. It takes around 5 minutes per HD frame to figure out the best parameters. Since this is a data-heavy process which can sometimes reach data rates of 1MB/min it is not something that can be done over BOINC.
However, the plan is to have a small set of machines available for CPU intensive applications on the server location. These machines will be doing stuff like physics simulations, movie file encoding and other interesting tasks. When they are idle they may just as well spend their time turning CPU power into storage space.
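The "try all parameters, keep the smallest output" idea is easy to sketch with raw zlib streams. Note that real PNG optimizers such as pngcrush/OptiPNG also vary the per-scanline PNG filters, which this sketch ignores:

```java
import java.util.zip.Deflater;

// Sketch of brute-forcing zlib parameters and keeping the smallest result,
// the same principle pngcrush/OptiPNG apply to whole PNG files.
class Crusher {
    static byte[] deflate(byte[] data, int level, int strategy) {
        Deflater d = new Deflater(level);
        d.setStrategy(strategy);
        d.setInput(data);
        d.finish();
        byte[] buf = new byte[data.length * 2 + 64]; // ample room for small inputs
        int n = d.deflate(buf);
        d.end();
        byte[] out = new byte[n];
        System.arraycopy(buf, 0, out, 0, n);
        return out;
    }

    // Try every level/strategy combination and return the smallest output.
    static byte[] crush(byte[] data) {
        byte[] best = null;
        int[] strategies = { Deflater.DEFAULT_STRATEGY, Deflater.FILTERED, Deflater.HUFFMAN_ONLY };
        for (int level = 1; level <= 9; level++) {
            for (int strategy : strategies) {
                byte[] candidate = deflate(data, level, strategy);
                if (best == null || candidate.length < best.length) best = candidate;
            }
        }
        return best;
    }
}
```

Since the output stays valid zlib data regardless of which parameters won, decompression is completely unaffected - which is why the crushed PNGs remain per-pixel identical.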

So I'm adding a new item to the list of server-side services - the Crusher!

Together with the other changes to the longterm storage platform this may make the storage more than twice as effective, enabling us to store more than twice as many frames as we could store with the current system.
ID: 8142
AC
Project donor
Joined: 30 Sep 07
Posts: 121
Credit: 143,874
RAC: 0
Message 8166 - Posted: 16 Apr 2008, 23:46:49 UTC - in response to Message 8142.  

Would transfer times/network load make BOINC distributed OptiPNGing pointless? It would seem pretty parallelizable otherwise.

...not that the idea of the BURP cluster doesn't have a certain appeal :-)
ID: 8166
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8176 - Posted: 17 Apr 2008, 7:39:47 UTC - in response to Message 8166.  
Last modified: 17 Apr 2008, 7:43:42 UTC

Would transfer times/network load make BOINC distributed OptiPNGing pointless? It would seem pretty parallelizable otherwise.

Since this is a data-heavy process which can sometimes reach data rates of 1MB/min it is not something that can be done over BOINC.

Let's just say 1.2MB/min = 20KB/s. A typical frame is around 2-3MB and takes a couple of minutes to get crushed when using the maximal search space. With 1000 concurrent clients that is around 300 simultaneously running files = 6MB/s outgoing data from the server + ~15MB/s incoming. That's without replication taken into account (i.e. assuming that something like BitTorrent is used).
I'd say that applications that have a 1:1 relationship between megabytes of compressed input data and computation time in minutes are not an option for BOINC.

\"But why is BURP feasible then? You have even bigger input files!\"
Yes. However, the inputfiles for BURP are shared across multiple workunits and each workunit usually takes longer than a few minutes to complete. These facts combined give a much smaller data-to-computation factor and makes BURP much more viable than a distributed optipng setup.
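For reference, the arithmetic behind those numbers can be written out like this. Every input figure is the one quoted above; nothing here is measured:

```java
// Back-of-the-envelope version of the estimate above.
class CrushEstimate {
    // A 2.4MB frame crushed in 2 minutes is 1.2MB/min = 20KB/s per file.
    static double kbPerSecondPerFile(double fileMB, double minutes) {
        return fileMB * 1000.0 / (minutes * 60.0);
    }

    // Files in flight times per-file rate gives the server's outgoing rate;
    // the post quotes roughly 2.5x that coming back in on top of this.
    static double serverOutMBps(int filesInFlight, double kbPerSecPerFile) {
        return filesInFlight * kbPerSecPerFile / 1000.0;
    }
}
```

300 files in flight at 20KB/s each works out to the 6MB/s outgoing figure quoted above.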
ID: 8176
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8214 - Posted: 20 Apr 2008, 18:11:42 UTC
Last modified: 25 Apr 2008, 11:25:30 UTC

-----------
The Encoder
-----------

This is one of the places that is changing dramatically from Alpha to Beta. In Alpha the Encoder pretty much did the following:

1) Unpack all frame files from storage into a new directory
2) Launch Mencoder on the frames to compress them into FMP4
3) Repeat for each quality setting (3 different settings were used)

In Beta the hope is to produce higher quality movie files while using fewer bytes and less disk I/O to produce them. So here is the list of changes:

  • Instead of unpacking all the files the system will access the data directly from the storage using on-the-fly unpacking.
  • Instead of running Mencoder directly on the server the data will be shipped over the local network to any available encoder, freeing up server CPU cycles.
  • Instead of letting Mencoder handle the files directly they will be passed through a RGB=>YCbCr-filter and precoded into a yuv4mpeg stream which is then piped into the encoder. This allows the system to do streaming processing on the entire sequence while only ever using memory to store a single image - in other words there will be no limits on the length of the movie.
  • Instead of using FMP4 we will be targeting H.264 through x264 (this is basically a state-of-the-art video compression format used on both HD DVD and Blu-ray). x264 also has the benefit of multithreaded encoding support.
  • Instead of using only single-pass encoding we will use multiple passes in order to increase quality without increasing filesize.
  • Different quality settings can be encoded simultaneously by multiple machines instead of waiting for a single machine to complete each of them in turn.



So the new streamflow will be like this:
Storage => Network => Any yuv4mpeg compatible encoder (like mencoder) => Network => Storage

\"Why use streaming, neither mencoder nor ffmpeg seems to support streams of PNG files as input?\"
With streaming it is possible to avoid large chunks of disk I/O and it is easier to keep a low memory and disk footprint. Since none of the standard encoders seem to support streaming input of PNG files it has been necessary to write a custom PNG2YCC-filter that allows for this kind of use. This filter turns the PNG stream into a yuv4mpeg stream, which is a commonly used standard format among encoders that support streaming through pipes. A benefit of this is that we can now use any of the encoders that support this format.
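A sketch of the RGB => YCbCr step inside such a filter, using the full-range BT.601 equations. This is an assumption for illustration; the actual PNG2YCC filter may use studio-range values, and a complete yuv4mpeg 4:2:0 writer would additionally average each 2x2 block of Cb/Cr samples:

```java
// Per-pixel colour space conversion as used when precoding RGB frames
// into a YCbCr stream. Full-range BT.601 coefficients.
class Rgb2Ycc {
    static int clamp(int v) { return Math.max(0, Math.min(255, v)); }

    // Returns {Y, Cb, Cr} for one 8-bit RGB pixel.
    static int[] toYCbCr(int r, int g, int b) {
        int y  = (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
        int cb = (int) Math.round(128 - 0.168736 * r - 0.331264 * g + 0.5 * b);
        int cr = (int) Math.round(128 + 0.5 * r - 0.418688 * g - 0.081312 * b);
        return new int[] { clamp(y), clamp(cb), clamp(cr) };
    }
}
```

Because the conversion is per-pixel, the filter only ever needs one decoded image in memory at a time, which is what makes the unlimited-length streaming possible.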

\"Why use yuv420? Doesn\'t it decrease the quality?\"
Yes, slightly. The Cb and Cr channels are only half the resolution of the Y channel in the 4:2:0 YCbCr format. However, the difference is pretty much unnoticeable after h264 compression.
The optimal solution would be to use raw AVI bitmapped frames for piping into mencoder, but the Microsoft AVI container format was making me go insane.

\"How will this impact the website?\"
Since h264 is a streamable format and support for h264 was recently added to Flash 9 it is quite possible that the session page will be \'upgraded\' with the possibility of watching a streaming preview of the video material in low resolution (somewhat in the same manner as what you get with YouTube).
This could happen through a flash-application like flowplayer, JW FLV Mediaplayer or similar (please say if you know of any).

\"What about old sessions, will they be upgraded too?\"
Yes. Exactly which files will be added has not been determined yet. Links to the old files will be kept intact in order to avoid broken links.

[Edit 080425:]

\"What level of quality are you aiming for?\"
The best of course, but we also have to make the right kind of files for the right kind of people. The old system created 3 different files in the same format. Statistics currently show the download distribution on these files to be (shown as total hits and a total corrected for unique clients, mirror2mirror transfers and system self-testing hits):
*q96 : 258\'800 hits (36\'052 hits corrected)
*q85 : 190\'180 hits (10\'253 hits corrected)
*q50 : 53\'013 hits (12\'031 hits corrected)
The interesting thing to notice is that q96 is very popular due to the high quality and q50 is marginally more popular than q85 due to the small filesize.
Based on these observations the new system will generate two files instead of three:
1) A high quality downloadable version in full resolution
2) A lower quality streamable version in scaled-down resolution
Both files will be encoded at a set bitrate rather than a fixed quantizer setting, because the quantizer-based encoding caused some filesize issues in files with excessive movement or extremely complex structures. The exact bitrates are yet to be determined based on tests, but (2) will be less than 500Kbit/s and (1) will be on the order of one or a few Mbit/s.
By making only 2 files instead of 3 it is possible to improve the encoding quality (since more time is available) and decrease the mirror load (since fewer files will need to be distributed).

The intention is naturally that (2) will serve as a quick and easy preview, while (1) is useful in situations where BURP is the final destination for the movie file or when a quality preview is required. CATS will be the solution to use if further postprocessing is required.

\"What about postprocessed movie files uploaded by the artists?\"
There are currently no plans to convert these files since the complexity involved in determining audio/video codecs, syncing etc is enormous (ask the Youtube devs). These files will be presented exactly as uploaded by the author.

ID: 8214
Izarf
Joined: 29 Jul 05
Posts: 33
Credit: 13,637
RAC: 0
Message 8246 - Posted: 22 Apr 2008, 12:56:50 UTC

Janus wrote: "the plan is to have a small set of machines available for CPU intensive applications on the server location".

Exactly what is this cluster(?) going to look like? Will it be a cluster at all or just a simple set of single machines, all working on their own? Or would it be some kind of Linux high-performance load balancing cluster solution?

If the latter is the case, I think you might be interested to know that the fluid (possibly all physics) simulations could easily be computed in parallel since there is already OpenMP support built in. I heard that you just have to alter a certain variable to tell Blender to bake the physics multi-threaded.

Using a local cluster to compute the physics maybe is a good idea after all.
ID: 8246
Janus
Volunteer moderator
Project administrator
Joined: 16 Jun 04
Posts: 4574
Credit: 2,100,463
RAC: 8
Message 8250 - Posted: 22 Apr 2008, 15:01:07 UTC - in response to Message 8246.  

Exactly what is this cluster(?) going to look like?

One or more high-performance dual-core machines equipped with around 4GB of memory each.

Will it be a cluster at all or just a simple set of single machines, all working on their own? Or would it be some kind of Linux high-performance load balancing cluster solution?

Even clusters are merely collections of simpler (but highly interconnected) machines. However, the small bunch of machines here will not be clustered at the OS level but rather at the application level (i.e. load gets distributed by sending tasks to different machines rather than distributing the same task across different machines).

I think you might be interested to know that the fluid (possibly all physics) simulations could easily be computed in parallel since there is already OpenMP support built in. I heard that you just have to alter a certain variable to tell Blender to bake the physics multi-threaded.

Do you have a reference for that? It may be a nice way to optimize use of the multi-core systems.
ID: 8250