Author Topic: What should the next generation of sequencing software look like?  (Read 3948 times)

Offline tonyv2842

  • Full Member
  • ***
  • Posts: 47
I am trying to get a feel for what you guys think about how to create sequences for the higher node counts and RGB strings that are coming on the scene.  What should the user interface look like in the next generation of software to facilitate creating large node-count sequences that run for minutes?  I am not so much interested in "features" per se; I am more interested in the human-machine interaction.  How can we effectively program perhaps thousands of nodes in an efficient and flexible manner?

Offline ptone

  • Sr. Member
  • ****
  • Posts: 107
I'm super interested to hear what people have to say on this as well.  I have a lot of ideas on it, and I have a software project that is one approach to this problem (I'm working on getting it into releasable shape, and so that I don't commit the sin of vaporware, I will wait to tout it until it can be downloaded and kicked around; I will say it will be free, open source, and cross-platform).  There are a number of other projects at various stages of development out there.

I think the best overall introduction to the crux of the matter is this article from Spring 2010 (in a horrendous Flash-based reader):

[link omitted]

There will certainly not be a one-size-fits-all solution in this space; perhaps there will end up being a one-size-fits-most in the end.  For 2011, LSP will probably see a nice healthy number of sales, as it is probably the best current offering.

What is not unique here is that software often has to find a sweet spot between being powerful and being easy to use.  It is relatively easy to do one or the other, but very hard to do both well.

Looking forward to seeing what people are thinking about for this challenge!

-P
--
budding channel wrangler

Offline WWNF911

  • Patron Member
  • Sr. Member
  • ****
  • Posts: 1079
Of course I'm no software engineer, but the software I feel will do best will enable us to identify a device as RGB and then select a color for the cell or cells. The necessary channels and the numbers needed to get to the color will all be in the background and of no worry to the user. The user interface will make it look as simple as sequencing in Vixen. All the sequencing tools we currently use would be the same; the RGB stuff would just be in the background. In the end we get the colors and performance we all want without having to do all the work on a huge channel-count scale. The software will do all the heavy lifting for us. Seems pretty simple, but can it be done? 2 cents
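A rough sketch of what that could look like under the hood (Python just for illustration; the element names and channel layout here are made up):

Code:
# Hypothetical sketch: the user picks a color for a named RGB element;
# the software expands it to the underlying per-channel values.
RGB_ELEMENTS = {
    # element name -> list of (red_ch, green_ch, blue_ch) triples, one per node
    "mini_tree_1": [(1, 2, 3), (4, 5, 6), (7, 8, 9)],
}

def set_element_color(channel_values, element, color):
    """Write one RGB color (0-255 each) to every node of an element."""
    r, g, b = color
    for red_ch, green_ch, blue_ch in RGB_ELEMENTS[element]:
        channel_values[red_ch] = r
        channel_values[green_ch] = g
        channel_values[blue_ch] = b

channels = {}
set_element_color(channels, "mini_tree_1", (255, 0, 0))  # the whole element turns red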
Leon

Offline travailen

  • Sr. Member
  • ****
  • Posts: 332
  • 77459
I think we need to look at a tree or an arch or whatever as a 3D object, then decide how we are going to illuminate it, and then program the illuminating objects. For example:
First we choose an object from a library of objects. For a mega tree it would be a cone; an arch could be a tube. So we choose a cone.
Then we choose the material for the cone. In the case of an RGB tree the material is a node grid wrapped around the cone, as opposed to vertical stripes for an incandescent tree. We have told the software the ID of each node in the grid.
Then we choose some lights to illuminate the cone and place them in certain relationships to the cone: front, back, side, top. The more illuminating lights, the more complicated the programming for the user. The intensity, color, circle of illumination (one node or the whole cone), the position, moving high to low or left to right, and/or panning left or right can all be manipulated by the user to change according to the music. Kind of like a searchlight illuminating part or all of the cone as it moves around. The light movement can be automatic as the music plays, or the user can manipulate the lights as they desire. The program uses the "light" shining on the cone from the illuminators to program the channels for each node for color and intensity as time passes. To program, the user would choose a light or lights and run the time as he manipulates the light(s). Lights not selected, if previously programmed, would move according to the time base so the entire effect could be viewed. To add more fun, the cone could be set to rotate around various axes at the same time.
This is just a simple explanation. If it has any merit, I am sure all you programming guys can do something useful with it.
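If it helps the programming guys, here is a bare-bones sketch of the illuminator idea, assuming a made-up node grid and a single virtual searchlight (the names and the falloff math are purely illustrative):

Code:
import math

# Hypothetical node grid on a cone: node id -> (x, y, z) position in meters.
nodes = {i: (math.cos(i * 0.5), math.sin(i * 0.5), (i % 10) * 0.3) for i in range(50)}

def illuminate(nodes, light_pos, light_color, radius):
    """Return node_id -> (r, g, b), fading with distance from the virtual light."""
    frame = {}
    for node_id, (x, y, z) in nodes.items():
        dx, dy, dz = x - light_pos[0], y - light_pos[1], z - light_pos[2]
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        level = max(0.0, 1.0 - dist / radius)        # simple linear falloff
        frame[node_id] = tuple(int(c * level) for c in light_color)
    return frame

# A "searchlight" sweeping around the cone: one frame per time step.
for step in range(20):
    angle = step * 0.3
    frame = illuminate(nodes, (math.cos(angle), math.sin(angle), 1.5), (255, 255, 255), 2.0)
    # frame would then be mapped onto each node's channels for this time slice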

Offline tng5737

  • Sr. Member
  • ****
  • Posts: 480
I tend to think in terms of a typical visualizer.  If the visualizer image were implemented as a layer in the grid background, with a set of tools similar to what you might find in a paint program, then in placing your display objects on the image you would also describe (or the program would know from its database of stored objects) the various characteristics of each object based on its capabilities.  (Think OOP here.)
Now as you sequence you could sort of "paint" your effects, transitions, etc., much like you would do with the on/off/ramps/etc. now! As you painted your effects over the display objects, the program would interpret the implementation of the effect in accordance with the object's capabilities.  Sort of like an artist painting a picture in time.  You could replay the music segment and fine-tune your effect! An example of this might be a tree made out of smart string nodes.  You could paint your colors/hues right onto the tree.  You could even place text or images there and have the program figure out how to implement the various nodes.
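A rough sketch of the "paint over the layout" idea (purely illustrative; the brush and layout structures are invented for the example):

Code:
# Hypothetical 2D layout: node id -> (x, y) position as drawn in the visualizer.
layout = {i: (i % 10, i // 10) for i in range(100)}

def paint(frame, layout, center, radius, color):
    """Set every node within the brush radius to the painted color."""
    cx, cy = center
    for node_id, (x, y) in layout.items():
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
            frame[node_id] = color
    return frame

frame = {}
paint(frame, layout, center=(5, 5), radius=2, color=(0, 128, 255))
# The program would then translate this per-node frame into channel values
# for the current position on the timeline.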

Offline JJJR

  • Jr. Member
  • **
  • Posts: 15
To me, the way to proceed with high channel count sequencing is to let the software figure out which cells need to be used and at what intensity, and have the user work in a simpler interface for what they want to do.
The best example I can give is that anyone who has ever installed software has two options from the beginning of the installation: Simple or Advanced. Simple handles the more involved things in the background, instead just asking the user a few questions to get the primary concerns taken care of. For most users this would be the "mode" they operate in, as most users will want similar light sequences; I'll elaborate on that later. The Advanced option would be exactly what we are familiar with now, where every cell correlates with a channel and the user has complete control of the sequence on a cell-by-cell basis. This option or "mode" would be for the user who wants to make light patterns that are uncommon, or perhaps wants to tweak what Simple mode created.
Now to elaborate on how Simple mode would theoretically operate. A user tells the program that channels x through xxxxx are RGB, and the program automatically creates the cells for RGB. The user then tells the program that channels x through xxxxx are, say, a mega tree, with channels x through xxx as the first string. The program then references the number of nodes in the string and how many strings are in the tree, and asks the user what pattern they want to make: say, snow falling, or any number of other effects the user can choose. The program would automatically calculate the effect across the number of channels and strings over the length of time the user calls for and enter the values in the cells. At least this would help cut down on time by having the program add every value to the thousands of cells. But while this would cut down on sequencing time, at least for RGB, we still need a new, revolutionary way to sequence. Just a thought.
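To make the "snow falling on a mega tree" example concrete, a minimal sketch (the tree dimensions, frame rate, and function names are illustrative assumptions, not any real program's API):

Code:
def snow_frames(strings, nodes_per_string, duration_s, fps=20):
    """Yield one frame per time step: a dict of (string, node) -> intensity 0-255.

    A single 'flake' drops from the top of each string to the bottom over
    the requested duration; the program, not the user, fills in every cell.
    """
    total_frames = int(duration_s * fps)
    for f in range(total_frames):
        falling_row = int(f / total_frames * nodes_per_string)
        frame = {}
        for s in range(strings):
            for n in range(nodes_per_string):
                frame[(s, n)] = 255 if n == falling_row else 0
        yield frame

# Example: 16 strings of 70 nodes, a 4-second fall.
for frame in snow_frames(strings=16, nodes_per_string=70, duration_s=4):
    pass  # each frame would be written into the grid cells / channel data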

Offline tonyv2842

  • Full Member
  • ***
  • Posts: 47
Quote from: ptone
I think the best overall introduction to the crux of the matter is this article from Spring 2010 (in a horrendous Flash-based reader):

[link omitted]

Thanks!

Quote from: travailen
I think we need to look at a tree or an arch or whatever as a 3D object, then decide how we are going to illuminate it, and then program the illuminating objects. For example:
First we choose an object from a library of objects. For a mega tree it would be a cone; an arch could be a tube. So we choose a cone.
Then we choose the material for the cone. In the case of an RGB tree the material is a node grid wrapped around the cone, as opposed to vertical stripes for an incandescent tree. We have told the software the ID of each node in the grid.
Then we choose some lights to illuminate the cone and place them in certain relationships to the cone: front, back, side, top. The more illuminating lights, the more complicated the programming for the user. The intensity, color, circle of illumination (one node or the whole cone), the position, moving high to low or left to right, and/or panning left or right can all be manipulated by the user to change according to the music. Kind of like a searchlight illuminating part or all of the cone as it moves around. The light movement can be automatic as the music plays, or the user can manipulate the lights as they desire. The program uses the "light" shining on the cone from the illuminators to program the channels for each node for color and intensity as time passes. To program, the user would choose a light or lights and run the time as he manipulates the light(s). Lights not selected, if previously programmed, would move according to the time base so the entire effect could be viewed. To add more fun, the cone could be set to rotate around various axes at the same time.
This is just a simple explanation. If it has any merit, I am sure all you programming guys can do something useful with it.


Your thoughts are to model it in the 3D realm using meshes, light sources, etc., if I understand you correctly.  Although it's a great idea, I don't know how well the non-graphics types would take to something like that.  As I mentioned, it's an interesting idea though.  Thanks for sharing it.

Quote from: tng5737
I tend to think in terms of a typical visualizer.  If the visualizer image were implemented as a layer in the grid background, with a set of tools similar to what you might find in a paint program, then in placing your display objects on the image you would also describe (or the program would know from its database of stored objects) the various characteristics of each object based on its capabilities.  (Think OOP here.)
Now as you sequence you could sort of "paint" your effects, transitions, etc., much like you would do with the on/off/ramps/etc. now!

Another great idea.

Quote from: JJJR
Now to elaborate on how Simple mode would theoretically operate. A user tells the program that channels x through xxxxx are RGB, and the program automatically creates the cells for RGB. The user then tells the program that channels x through xxxxx are, say, a mega tree, with channels x through xxx as the first string. The program then references the number of nodes in the string and how many strings are in the tree, and asks the user what pattern they want to make: say, snow falling, or any number of other effects the user can choose. The program would automatically calculate the effect across the number of channels and strings over the length of time the user calls for and enter the values in the cells. At least this would help cut down on time by having the program add every value to the thousands of cells. But while this would cut down on sequencing time, at least for RGB, we still need a new, revolutionary way to sequence. Just a thought.

Thanks!

Offline tonyv2842

  • Full Member
  • ***
  • Posts: 47
I've gotten some interesting ideas, and it seems to me that we have four or five smaller problems to solve that, when solved, solve the big problem.

1) We need to "tell" the program what we are using, e.g., RGB nodes in the shape of a mega tree, 8 strings of white incans in an arch, etc.

2) What channels and/or capabilities are associated with what it is we are using.

3) What effects are associated with what it is we are using.

4) How do we map those effects to what we are using.

5) How to animate what it is we are using.

4 and 5 are ultimately very similar, although 4 is really "canned" effects based on the object type, and 5 could be any animation at all for what it is we are using.

Looking at the problem from this perspective, what other ideas can we come up with to help solve this problem?
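One way to picture those five pieces as a data model, sketched in Python (the class names are invented for illustration, not taken from any existing sequencer):

Code:
from dataclasses import dataclass, field

@dataclass
class DisplayElement:          # 1) what we are using
    name: str                  # e.g. "mega tree"
    shape: str                 # e.g. "cone", "arch"
    channels: list = field(default_factory=list)    # 2) channels / capabilities
    capabilities: set = field(default_factory=set)  # e.g. {"rgb", "dimming"}

@dataclass
class Effect:                  # 3) effects associated with the element type
    name: str                  # e.g. "chase", "snow"
    applies_to: set = field(default_factory=set)    # which capabilities it needs

@dataclass
class SequenceEntry:           # 4) and 5) mapping/animating over time
    element: DisplayElement
    effect: Effect
    start_s: float
    end_s: float
    params: dict = field(default_factory=dict)      # speed, color, direction, ...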

 

Offline rrowan

  • Administrator
  • Sr. Member
  • *****
  • Posts: 5899
  • 08096
Hi Tony,

I totally agree with JJJR. We need something that can handle large channel counts and not bog down the user with those channel counts. Take MS Word as an example: it has a ton of features but is still simple enough for people to run it and type a letter without worrying about the advanced stuff.

For me, I would like to see something between Vixen's ease of use and LSP's RGB capabilities. LSP also has a way better preview setup: if you want a tree, it draws the tree; a flood light shines like a flood light. Very nice. One thing that bugs me about Vixen is that the scheduler is part of the main program. To run a show, it should be a separate program with a calendar format like xLights uses (maybe LSP does this too, but I never had a chance to try it).

I don't see a need for a 3D yard layout, IMHO.
I would rather say I have a mega tree with 16 strings of 70 RGB nodes and have the software take care of the rest. Also, I should be able to change my mind later and say I need 32 strings of 60, and it updates the sequence as needed.

Basic show items should be in the software, along with an easy way to add an item that I might have but most other people don't.
Basic items: mega tree, smaller trees, arches, flood lights, snowflakes, icicles, light strings in patterns (boxes for windows, lights outlining the roof), net lights, RGB display panels (for text and graphics).

Basic effects, and a way to connect a certain part of the song to an item. For example, at this point in the song I want my trees to do a chase from left to right, and later from right to left.
Basic effects: on, off, dimming, chase, ramps up and down (I am sure other people will want more, like sparkle, etc.)
Some RGB effects would be: color, and fade from one color to another (picking colors from a color chart).
Effects should be easy to customize and change as needed.
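For the fade-from-one-color-to-another effect above, the math is just per-channel interpolation; a tiny illustrative sketch:

Code:
def fade(start_color, end_color, steps):
    """Return the list of intermediate RGB colors for a linear fade."""
    return [
        tuple(
            int(s + (e - s) * i / (steps - 1))
            for s, e in zip(start_color, end_color)
        )
        for i in range(steps)
    ]

# Red to blue over 5 steps:
print(fade((255, 0, 0), (0, 0, 255), 5))
# [(255, 0, 0), (191, 0, 63), (127, 0, 127), (63, 0, 191), (0, 0, 255)]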

I think we should not be locked into a Windows program; it should be cross-platform.
If I could set up a schedule for a show in a GUI and then run it in a CLI mode, it would take less computing power (CPU, RAM, etc.) to run a show with lots going on. The scheduler should be able to run a show with or without audio, the start and end times should be very flexible, and it should allow more than one show per day. Also, I should be able to run a show from 4pm till 9:33pm and then a sequence from 11:02pm to 9am the next day.
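A sketch of how schedule entries that flexible could be described, assuming an invented format (entries can span midnight, and a day can have more than one):

Code:
from datetime import time

# Hypothetical schedule entries; a show can span midnight and a day can have
# more than one entry, with or without audio.
schedule = [
    {"name": "main show",  "start": time(16, 0), "end": time(21, 33), "audio": True},
    {"name": "static seq", "start": time(23, 2), "end": time(9, 0),   "audio": False},
]

def is_active(entry, now):
    """True if 'now' falls inside the entry, handling spans past midnight."""
    start, end = entry["start"], entry["end"]
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end   # wraps around midnight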

I'll stop there for now.

Cheers and good luck

Rick R.
Light Animation Hobby - Having fun and Learning at the same time. (21st member of DLA)
Warning SOME assembly required

Offline ptone

  • Sr. Member
  • ****
  • Posts: 107
Quote from: rrowan
I would rather say I have a mega tree with 16 strings of 70 RGB nodes and have the software take care of the rest. Also, I should be able to change my mind later and say I need 32 strings of 60, and it updates the sequence as needed.

This points to what I think is a key feature of future software: the idea of separating the intent of the sequence from the raw channel values.  What this means is you don't sequence channels, you sequence objects.  If you change the object or its configuration, you don't need to change your sequence.  This means that in the software, the channel data only exists behind the scenes, or is generated completely on the fly.

Compare this to the way you might have adopted someone else's sequence in the past.  If someone had 5 windows and you had 4, you might remap that 5th window channel to a bush.  However, if we are talking about a complex effect mapped to an RGB mega tree, and you want to make a change to the tree as Rick suggests, there is no simple way of modifying the sequence if the only thing recorded was the channel data, which would just be a blizzard of channel values.

You need to store and work with another representation of the sequence that contains the timing and target abstracted from the raw channels.  With such an approach, you can change the configuration of the mega tree, and when the software plays the show (or renders a playback file) it will look at the timing, reinterpret the effect onto the newly configured target, and render out all the raw channel data appropriately.
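A stripped-down sketch of that separation (all names here are hypothetical; the point is only that the stored sequence is timing plus intent, and channel data appears at render time):

Code:
class MegaTree:
    """Target object; reconfigure it and the sequence below still renders."""
    def __init__(self, strings, nodes_per_string):
        self.strings = strings
        self.nodes_per_string = nodes_per_string

def spiral_effect(tree, t):
    """Interpret an abstract 'spiral' intent on whatever tree exists at render time."""
    frame = {}
    for s in range(tree.strings):
        lit_node = int((t * 10 + s) % tree.nodes_per_string)
        frame[(s, lit_node)] = 255
    return frame

# The stored sequence is just timing + intent, no channel numbers:
sequence = [(0.0, 30.0, spiral_effect)]

tree = MegaTree(16, 70)             # later changed to MegaTree(32, 60) with no re-sequencing
for start, end, effect in sequence:
    frame = effect(tree, t=start)   # raw per-node data is generated only at render time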

-P
« Last Edit: January 16, 2011, by ptone »
--
budding channel wrangler

Offline mmulvenna

  • Patron Member
  • Sr. Member
  • ****
  • Posts: 231
Lots of good ideas in this thread, but to the software developers: don't completely eliminate the grid. None of the automatic, high-level software that I have viewed that has eliminated the grid is capable of handling complex wire frames like these: [links omitted]

If I have missed how to do it without the grid, I apologize. But these kinds of objects are not sequenced to the music and stand on their own.  Another example would be a snowball fight, but I won't bore you with the video.


My 2 cents, and that's all it is probably worth :)
« Last Edit: January 16, 2011, by mmulvenna »
Thanks
Mike

Offline tonyv2842

  • Full Member
  • ***
  • Posts: 47
Rick, thanks for the feedback; it certainly appears that is the way we need to head.
All are excellent points.  I can tell you must have been thinking about this for a while.

ptone, I agree, making that useable is going to be the trick.

mmulvenna, we should keep the lowest-level functionality exposed for special cases such as the ones you mentioned.

This has certainly got me thinking.  Now I need to look at this from a user's standpoint and see how it should appear from the user's perspective when it comes to actually accomplishing the task.  Does anyone have any other ideas?  I'd love to hear them.

Tx,
Tony

Offline ptone

  • Sr. Member
  • ****
  • Posts: 107
Quote from: mmulvenna
Lots of good ideas in this thread, but to the software developers: don't completely eliminate the grid. None of the automatic, high-level software that I have viewed that has eliminated the grid is capable of handling complex wire frames like these: [links omitted]

If I have missed how to do it without the grid, I apologize. But these kinds of objects are not sequenced to the music and stand on their own.  Another example would be a snowball fight, but I won't bore you with the video.

My 2 cents, and that's all it is probably worth :)

Fear not, you can do this sort of sequencing without a grid - in fact, I'd say you have more flexibility.

I should say that, strictly speaking, a grid will always exist in the form of an update rate (at the electronics and protocol level).  But here is how you might do your wireframes without access to that grid.  The grid is replaced by a timeline, and the lighting objects are instructed to change over time.  For example, in a wireframe you might have a wheel with a 2- or 3-part animation.  You can set up a loop that says to advance through the 3 steps (3 channels) in 3 seconds (a 1-second delay per step), and then repeat indefinitely while the show is running.  You'd set up similar instructions for other parts of the display.

Now let's say you finish the whole thing, and after watching it, you think it would look better if the wheels were spinning a bit faster.  So you go back and change the duration of that chase to 2 seconds instead of 3, and that's it.  You don't need to touch the rest of the sequence, and you don't need to change the spacing of the wheel animation throughout the show on a grid; you've just changed one configuration attribute of that looping animation.  The software then generates the grid behind the scenes at or just before showtime.

LSP sort of does this with transitions, but it still keeps the grid front and center, in that it will update the grid based on the transition, but the grid is still the canvas.  I'm just suggesting that the canvas move up a level of abstraction and show just timing and intent.
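Roughly what that looping wheel instruction could look like when it gets rendered down to an update rate behind the scenes (a sketch with made-up names, not LSP or any shipping program):

Code:
def render_loop(channels, cycle_s, show_length_s, fps=20):
    """Expand 'step through these channels every cycle_s seconds, forever'
    into per-frame on/off states at the hardware update rate."""
    frames = []
    step_s = cycle_s / len(channels)                 # e.g. 3 channels / 3 s = 1 s per step
    for f in range(int(show_length_s * fps)):
        t = f / fps
        active = channels[int(t / step_s) % len(channels)]
        frames.append({ch: (255 if ch == active else 0) for ch in channels})
    return frames

wheel = [101, 102, 103]             # the 3 wheel channels
frames = render_loop(wheel, cycle_s=3, show_length_s=60)
# To spin the wheel faster, change cycle_s=3 to cycle_s=2 and re-render;
# nothing else in the sequence needs to be touched.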

-P
--
budding channel wrangler

Offline rrowan

  • Administrator
  • Sr. Member
  • *****
  • Posts: 5899
  • 08096
Hi Guys,

Just a couple more points

1 - I run different computers: a sequencing computer and a show computer. The show computer shouldn't need as much horsepower as the sequencing computer. Now, I know a lot of people do everything with one computer. The scheduler part of the problem should run with fewer system resources, since it should not be displaying anything other than an indication of whether it is running or not.

2 - I think ptone just posted part 2: the software should be time-based and just send updates to the dongle for the channels that need them.
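That "only send what changed" idea is just a diff between frames; a quick sketch (hypothetical, not tied to any particular dongle protocol):

Code:
def changed_channels(previous, current):
    """Return only the channel/value pairs that differ from the last frame."""
    return {ch: val for ch, val in current.items() if previous.get(ch) != val}

last_frame = {1: 0, 2: 128, 3: 255}
this_frame = {1: 0, 2: 255, 3: 255}
updates = changed_channels(last_frame, this_frame)   # {2: 255} -> send just this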

Cheers

Rick R.
Light Animation Hobby - Having fun and Learning at the same time. (21st member of DLA)
Warning SOME assembly required

Offline TheBanker

  • Sr. Member
  • ****
  • Posts: 308
Personally, I think it will have to move in an entirely different direction, outside the box.  We are talking about people having 40,000 to 100,000 channels.  I cannot help but keep thinking that there is an answer somewhere in the idea that the strings themselves are smart and only need a trigger to do their thing.  Still lots of programming, but one string or one group at a time.  Each box would be preprogrammed, with its own chip, for however many songs you want to do; then all it needs is a trigger to tell it when to start and which song.
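If the boxes really were preprogrammed, the trigger could be as small as a broadcast of "song number, start now"; a speculative sketch (the packet layout is entirely made up):

Code:
import socket, struct, time

def send_trigger(song_number, broadcast_addr="255.255.255.255", port=5000):
    """Broadcast a tiny 'start song N now' packet to preprogrammed controllers."""
    packet = struct.pack("!BId", 0x01, song_number, time.time())  # cmd, song, timestamp
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast_addr, port))

send_trigger(3)   # every smart box that knows song 3 starts it in sync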

Will