Some of your comments and questions have me curious about your design.
If I were creating something like this, I would have the socket receiver code just stuffing data into a raw channel array. E.g., E1.31 universe 2 (at 510 channels per universe) gets mapped to raw channel IDs 511-1020, and so on. That part should be really fast; it's just memcpy-ing data into an array.

Then I would have one or more threads pulling data off that raw channel array for each display model and drawing it on the preview display. I think the simplest way to do this would be to 'draw' onto an image with a transparent background, then overlay that image onto the background image every X milliseconds. The overlay image could be generated by multiple threads, each working on separate models and pulling channel info from the raw channel data. Then, once the models are all drawn, a single thread makes a single call to draw that image over top of your background image.
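Roughly what I mean on the receiver side (just a sketch; the packet struct, array size, and locking here are my assumptions, not your actual code):

#include <algorithm>
#include <cstring>
#include <mutex>
#include <vector>

constexpr int CHANNELS_PER_UNIVERSE = 510;

// Hypothetical already-parsed E1.31 packet.
struct E131Packet {
    int universe;                 // 1-based universe number from the packet
    int channelCount;             // number of DMX slots actually received
    unsigned char data[512];      // slot data with the start code stripped
};

std::vector<unsigned char> rawChannels(CHANNELS_PER_UNIVERSE * 16); // e.g. 16 universes
std::mutex rawLock;

// Called by the socket thread for every packet: universe N lands at
// raw channels (N-1)*510+1 .. N*510, so it's just a bounds check and a memcpy.
void storePacket(const E131Packet& pkt) {
    size_t offset = static_cast<size_t>(pkt.universe - 1) * CHANNELS_PER_UNIVERSE;
    size_t count  = std::min<size_t>(pkt.channelCount, CHANNELS_PER_UNIVERSE);
    if (offset + count > rawChannels.size()) return;   // ignore universes we don't map
    std::lock_guard<std::mutex> guard(rawLock);
    std::memcpy(rawChannels.data() + offset, pkt.data, count);
}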
This way the display code doesn't have to know about universes, or about pixels split across universes (which I believe is possible with some E1.31 controllers); instead, it just deals with models, which are based on raw channels. This is how Nutcracker models work.
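On the drawing side, a model then only needs its raw channel numbers and preview coordinates; something like the sketch below, where OverlayImage, DisplayModel, and drawModel are made-up names just to illustrate the idea (no bounds checking, for brevity):

#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical overlay image with a transparent background; setPixel records
// an opaque RGB value at the given preview coordinate.
struct OverlayImage {
    int width, height;
    std::vector<uint32_t> argb;
    OverlayImage(int w, int h) : width(w), height(h), argb(w * h, 0) {}
    void setPixel(int x, int y, uint8_t r, uint8_t g, uint8_t b) {
        if (x < 0 || y < 0 || x >= width || y >= height) return;
        argb[y * width + x] = 0xFF000000u | (r << 16) | (g << 8) | b;
    }
};

struct DisplayModel {
    std::vector<size_t> pixelStartChannel;   // 0-based raw channel of each pixel's first slot
    std::vector<std::pair<int,int>> xy;      // preview coordinate of each pixel
};

// Worker threads call this per model; a model only knows raw channels, never
// universes, so a pixel split across two universes needs no special case.
void drawModel(const DisplayModel& m, OverlayImage& overlay,
               const std::vector<unsigned char>& raw) {
    for (size_t i = 0; i < m.pixelStartChannel.size(); ++i) {
        size_t c = m.pixelStartChannel[i];
        overlay.setPixel(m.xy[i].first, m.xy[i].second,
                         raw[c], raw[c + 1], raw[c + 2]);   // RGB order assumed for brevity
    }
}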
I think the only assumption you can make about a particular pixel's R, G, and B channels is that they are contiguous. You can't even assume their ordering; that must be defined as part of a display item model. Pixels don't have to end on a channel number divisible by 3, or on a packet-relative channel number divisible by 3. I might have a dumb string with its RGB channels on 1, 2, 3 and then another string on the same controller on channels 5, 6, 7 because channel 4 is bad. I could do the same with two 3-channel controllers, or I could be skipping channel 4 in case I eventually want to switch to RGBW.
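So the model definition has to carry the start channel and color order per string. One way that could look (field names are purely illustrative, and the only assumption baked in is that a pixel's three channels are contiguous):

#include <cstddef>
#include <vector>

enum class ColorOrder { RGB, RBG, GRB, GBR, BRG, BGR };

struct PixelString {
    size_t     startChannel;   // 1-based raw channel of the first pixel's first slot
    size_t     pixelCount;     // number of pixels on this string
    ColorOrder order;          // channel ordering, defined per string, never assumed
};

// Fetch one pixel's color, honoring the string's ordering.
void getPixel(const PixelString& s, size_t pixel,
              const std::vector<unsigned char>& raw,
              unsigned char& r, unsigned char& g, unsigned char& b) {
    size_t base = (s.startChannel - 1) + pixel * 3;   // contiguous triplet per pixel
    unsigned char c0 = raw[base], c1 = raw[base + 1], c2 = raw[base + 2];
    switch (s.order) {
        case ColorOrder::RGB: r = c0; g = c1; b = c2; break;
        case ColorOrder::RBG: r = c0; b = c1; g = c2; break;
        case ColorOrder::GRB: g = c0; r = c1; b = c2; break;
        case ColorOrder::GBR: g = c0; b = c1; r = c2; break;
        case ColorOrder::BRG: b = c0; r = c1; g = c2; break;
        case ColorOrder::BGR: b = c0; g = c1; r = c2; break;
    }
}

With that, the "1,2,3 then 5,6,7 because channel 4 is bad" case is just two PixelString entries with startChannel 1 and startChannel 5.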