ayaros 1 day ago

The graphics side of my program is structured as a tree of objects. The root is the entire display. Each object contains zero or more children, and masks those children so they only appear within the rectangular bounds of the object. If an object has an image class - and there are several of these, using different data structures - then it further masks its children according to the image's alpha channel. I also have some blending modes that can be turned on. Any object can have a blending mode that affects the appearance of the object based on its own image, or the images below it.
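
Roughly, the structure looks something like this (TypeScript-ish sketch; the names and types are just for illustration, not my actual classes):

    interface Rect { x: number; y: number; w: number; h: number; }

    interface Node {
      bounds: Rect;                  // children are clipped to this rectangle
      image?: ImageData;             // if present, its alpha further masks children
      blendMode?: 'normal' | 'multiply' | 'screen';   // made-up set of modes
      children: Node[];
    }

    function intersect(a: Rect, b: Rect): Rect {
      const x = Math.max(a.x, b.x);
      const y = Math.max(a.y, b.y);
      return { x, y, w: Math.min(a.x + a.w, b.x + b.w) - x, h: Math.min(a.y + a.h, b.y + b.h) - y };
    }

    // Recursive draw: each level narrows the clip rect to its own bounds.
    function draw(node: Node, clip: Rect) {
      const r = intersect(clip, node.bounds);
      if (r.w <= 0 || r.h <= 0) return;
      // ...paint node.image within r, applying node.blendMode...
      for (const child of node.children) draw(child, r);
    }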

Is that something I could port to WebGL? Can objects contain, or be parents of, other objects in that way, and mask their children so they only appear directly in front of them? I have done a tiny bit of WebGL for a CS assignment, but it was ages ago. Hopefully once I make this public I'll be able to get some feedback on all this. I don't want to say too much more as it would begin to spoil things!

catapart 1 day ago

It doesn't sound like you have an architecture that is particularly conducive to renderer-based programming, but that's not to say it's a difficult or complicated port. That all depends on the details.

When going to a dedicated graphics renderer, the biggest "change" from standard dev is "unrolling" your loops. Right now, you have a bunch of parents and children, and parents inform children. That is not at all different from a game engine that allows nesting of render objects (very common). As a very basic example, the "transform" of a parent mesh is usually added to the transforms of its children.
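
As a rough sketch of what I mean by "unrolling" (TypeScript; all of the names here are illustrative): instead of recursing through the tree every frame, you walk it once, fold the parent data into each child, and hand the renderer a flat list.

    interface SceneNode {
      offset: [number, number];        // position relative to the parent
      children: SceneNode[];
    }

    interface DrawCommand {
      worldOffset: [number, number];   // parent transforms already folded in
    }

    // Flatten the tree into a list the renderer can consume in one pass.
    function unroll(node: SceneNode, parent: [number, number], out: DrawCommand[]) {
      const world: [number, number] = [parent[0] + node.offset[0], parent[1] + node.offset[1]];
      out.push({ worldOffset: world });
      for (const child of node.children) unroll(child, world, out);
    }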

If you're working in 3D, there's a bit of a complication with alpha channels because not only do you have to worry about blending, but you also have to deal with the intersection of alpha objects - simple front-to-back-then-back-to-front rendering won't cover that. But since you're just doing forms, you can be sure that all of your alpha content will be rendered after all of your opaque content, and blended according to its mode at that time, so it's a fairly straightforward version of "unrolling" that you would need.
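
In code that ordering is easy to express; something like this (illustrative - `commands`, `hasAlpha`, `depth`, and the draw helpers are placeholders, but the blend calls are the real WebGL ones):

    const opaque = commands.filter(c => !c.hasAlpha);
    const alpha = commands.filter(c => c.hasAlpha)
                          .sort((a, b) => b.depth - a.depth); // back to front

    for (const c of opaque) drawOpaque(c);             // opaque pass first
    gl.enable(gl.BLEND);                               // then alpha content, blended
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
    for (const c of alpha) drawBlended(c);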

But all of that is to say: no, WebGL (or any graphics API) won't have any kind of built-in way to manage nested objects. Those relations would have to be defined by you, uploaded to the GPU, and then referenced separately, rather than looping through each parent and drilling down into its objects. It's just a different type of problem-solving. Rather than dealing with things per-object, you're dealing with them per-pixel (or per pixel group). So you tell each pixel how it should render, based on what objects it is expected to describe at that pixel, and then let them all do their work at the exact same time. It's less "layer painting" and more "arrange and stamp".
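
For the rectangular masking in particular, the per-pixel version can be as simple as the scissor test: compute the intersection of all ancestor bounds on the CPU (part of the "unrolling"), then clip each draw to it. Something like this (sketch; `Rect` and `drawClipped` are made up, the `gl` calls are real):

    interface Rect { x: number; y: number; w: number; h: number; }

    function drawClipped(gl: WebGLRenderingContext, clip: Rect, draw: () => void) {
      gl.enable(gl.SCISSOR_TEST);
      gl.scissor(clip.x, clip.y, clip.w, clip.h);   // window coords, origin at bottom-left
      draw();
      gl.disable(gl.SCISSOR_TEST);
    }

The alpha-channel masking can't go through the scissor, though - that would be a texture sample in the fragment shader, or a stencil pass.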