From what I have seen of LSP, you would need to copy and paste. 3,000 channels for 10 seconds will probably paste OK.
My tree is going to be 16x100 pixels. I was going to do 50ms per frame and a 10-second animation.
So:
(16 x 100 x 3 = 4,800 channels) x (200 frames of animation) = 960,000 cells of info to paste.
One Nutcracker user has 76 strings of 80 pixels, which is 18,240 channels for a single frame. If they used a 25ms frame and 10 seconds, you would have:
(76 x 80 x 3 = 18,240 channels) x (40 fps x 10 secs = 400 frames) = 7,296,000 individual cells to paste. Would that work too?
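The per-animation cell math above can be sketched as a little Python helper (just the arithmetic from this post, nothing sequencer-specific):

```python
def cells_to_paste(strings, pixels_per_string, frame_ms, seconds):
    """Total grid cells for one RGB animation: channels x frames."""
    channels = strings * pixels_per_string * 3   # 3 channels (R, G, B) per pixel
    frames = (1000 // frame_ms) * seconds        # frames per second x duration
    return channels * frames

# My 16x100 tree, 50ms frames, 10 seconds:
print(cells_to_paste(16, 100, 50, 10))   # 960,000 cells
# The big 76x80 tree, 25ms frames, 10 seconds:
print(cells_to_paste(76, 80, 25, 10))    # 7,296,000 cells
```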
Maybe we will just need to create different sizes until we find the biggest sequence that can be pasted.
BTW, I don't plan on ever manually sequencing my megatree. So for a 3-minute song, I need 180 seconds of animations.
Using the big tree listed above, I could create 18 10-second animations.
I would then need to import (7,296,000 individual cells for a 10-second animation) x (18 10-second slots in the show)
= 131,328,000 individual cells.
Or, for my smaller tree:
(960,000 cells over 10 seconds) x (18 10-second slots) = 17,280,000 cells of info.
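The whole-show totals work out the same way (assuming 18 back-to-back 10-second slots fill the 3-minute song):

```python
# Cells per 10-second animation, from the math above.
big_tree = 76 * 80 * 3 * 400     # 25ms frames -> 400 frames -> 7,296,000 cells
small_tree = 16 * 100 * 3 * 200  # 50ms frames -> 200 frames -> 960,000 cells

slots = 18                       # 18 ten-second slots in a 3-minute song
print(big_tree * slots)          # 131,328,000 cells
print(small_tree * slots)        # 17,280,000 cells
```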
It would probably be a better architecture if the sequencers could call the RGB device on the timeline and let it manage its own chunk of data.
I have 64 channels of Lynx Express. A future sequencer could have those 64 channels laid out and then have channel 65 be the RGB device. On that timeline you would put the names of objects (SPIRAL1, TEXT1, PICTURE3) called at the appropriate times.
You could click on an object, say a 10-second spiral, and it would open the timeline for just that object. Maybe it shows 3-20K channels; you manipulate that object and close it.
Now when running a sequence I only see 65 channels. I really don't want to figure out what all those channels are doing. Let something like Nutcracker produce the chunks of animation, or maybe the sequencer software itself.
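A rough sketch of what that "channel 65 is an RGB object" idea could look like. All names here are made up for illustration, not any real sequencer API:

```python
# Hypothetical sketch: the timeline stores named RGB objects; each object
# manages its own chunk of frame data internally, hidden from the sequencer UI.
class RGBObject:
    def __init__(self, name, channels, frames):
        self.name = name                  # e.g. "SPIRAL1", "TEXT1", "PICTURE3"
        # The ugly part stays inside the object: frames x channels of data.
        self.data = [[0] * channels for _ in range(frames)]

    def frame(self, n):
        """Channel values for frame n; the sequencer never edits these by hand."""
        return self.data[n]

class Timeline:
    def __init__(self):
        self.events = []                  # list of (start_frame, RGBObject)

    def add(self, start_frame, obj):
        self.events.append((start_frame, obj))

# The sequencer UI only ever shows the object names on channel 65:
tl = Timeline()
tl.add(0, RGBObject("SPIRAL1", channels=4800, frames=200))
tl.add(200, RGBObject("TEXT1", channels=4800, frames=200))
print([(start, obj.name) for start, obj in tl.events])
```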
This is probably the future of sequencers; the question is what to do now.
Maybe we could load up a Conductor from RJ with data from Nutcracker directly, run all the other sequences, and keep the RGB synced to the show.
Computer=>DMX=> Runs all channels except RGB
RJ Conductor=>DMX=>Pixelnet runs the RGB device
Both are kept in sync somehow.
Now I could create 18 effects in Nutcracker, lay them out on a timeline in Nutcracker (a future enhancement I already have planned), and then load the Conductor with everything, and you never see the ugly stuff.
Just a thought.
sean