Last time I looked at it, I was having trouble getting a proper sample cursor position on the stream voice. The music stream gets read in chunks, so there are multiple buffers being filled and submitted to the audio system. Every time a new buffer started playing, the current play position seemed to reset, even though the docs said it should be the sample count until an end of stream marker is reached.
Debugging audio can be a pain in the butt, because all sorts of things are happening at the same time: multiple threads, hardware interrupts, etc. It's hard to debug in a properly "stopped" state sometimes, because every time you step an instruction, all sorts of other stuff happens.
In any case, the problem should have been obvious: the end of stream marker doesn't actually stop anything; playback just continues happily through whatever buffers have been submitted. And I was setting the end of stream marker on every single buffer, so the play position reset at the end of each one, exactly as documented.
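The bug and the fix can be sketched with a toy model of the voice. All the names here are my own for illustration, not the real audio API; the model just mimics the documented behavior of the samples-played counter resetting when an end of stream marker is crossed.

```python
# Toy model of a streaming voice. samples_played counts samples since the
# last end of stream marker, mimicking the documented counter behavior.
# These names are hypothetical, not the real audio API.

class ToyStreamVoice:
    def __init__(self):
        self.samples_played = 0

    def play_buffer(self, num_samples, end_of_stream):
        """Play one submitted buffer and return the cursor afterward."""
        self.samples_played += num_samples
        cursor = self.samples_played
        if end_of_stream:
            # The marker doesn't stop playback; it just resets the counter.
            self.samples_played = 0
        return cursor

CHUNK = 4096

# Bug: flagging every chunk as end of stream resets the cursor each time.
buggy = ToyStreamVoice()
buggy_cursors = [buggy.play_buffer(CHUNK, end_of_stream=True) for _ in range(3)]
# -> [4096, 4096, 4096]: the cursor never advances past one chunk.

# Fix: only the final chunk of the stream carries the marker.
fixed = ToyStreamVoice()
fixed_cursors = [fixed.play_buffer(CHUNK, end_of_stream=(i == 2)) for i in range(3)]
# -> [4096, 8192, 12288]: a continuous sample position across buffers.
```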
Then I got into looking at the shader node stuff in Blender. I was trying to set the transparency based on the dot product of the view vector and the surface normal to give a nice smooth falloff.
First off, the shader system is completely awesome, because it actually has input pins for normal, view vector, etc., and math nodes for doing dot products and other vector operations. So, in theory, it should actually be possible to compute that.
- Specifying transparency is a bit difficult, because the renderer uses path tracing, so there isn't really an "alpha channel" you can specify for opacity; it's more about the probability that each of the R/G/B channels will be passed through.
- The above math operations don't seem to be working as expected.
Plus, it seems that different operations give their output on different pins. The dot product results in a scalar; it would be nice if it were also output to all channels of the vector pin or something. Or if it were documented at all. Or if the source code weren't so opaque. (haha, opaque... stopping now)
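Setting the nodes aside, the falloff I'm after is simple to state as plain math. Here's a sketch of what the node graph should compute: a facing-based alpha from the dot product of the (unit) normal and view vectors. The exponent is my own shaping knob, not anything from Blender.

```python
import math

def facing_alpha(normal, view, exponent=2.0):
    """Fresnel-ish falloff: transparent when facing the camera head-on,
    opaque at grazing angles. normal and view are assumed to be unit
    vectors; exponent is an assumed knob for shaping the curve."""
    facing = abs(sum(n * v for n, v in zip(normal, view)))  # |dot(N, V)|
    return (1.0 - facing) ** exponent

# Looking straight at the surface: dot(N, V) = 1, fully transparent.
head_on = facing_alpha((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))   # -> 0.0

# Grazing angle: dot(N, V) = 0, fully opaque.
grazing = facing_alpha((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))   # -> 1.0

# 45 degrees: somewhere in between.
v45 = (math.sqrt(0.5), 0.0, math.sqrt(0.5))
mid = facing_alpha((0.0, 0.0, 1.0), v45)
```

In a path tracer, that scalar would then drive the per-channel probability of a ray passing through, rather than a classic alpha blend.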
Oh, I also wrote a tool to convert a MIDI file into script commands that spawn objects at specific time values. This is halfway to cool, except drum tracks only use a few different notes, so the result is presently kind of boring. Still working on a good solution to that.
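The timing half of that conversion is standard MIDI math: tempo is stored as microseconds per quarter note, and tick counts divide by the file's pulses-per-quarter-note. A sketch, where the "spawn" command format and function names are made up for illustration:

```python
def ticks_to_seconds(ticks, tempo_us_per_quarter, ppqn):
    """Standard MIDI timing: tempo is microseconds per quarter note,
    ppqn is pulses (ticks) per quarter note."""
    return ticks * tempo_us_per_quarter / (ppqn * 1_000_000)

def note_events_to_spawn_commands(events, tempo_us_per_quarter=500_000, ppqn=480):
    """events is a list of (absolute_tick, note_number) note-on pairs.
    The 'spawn' command format here is hypothetical."""
    return [
        f"spawn {note} at {ticks_to_seconds(tick, tempo_us_per_quarter, ppqn):.3f}"
        for tick, note in events
    ]

# At 120 BPM (500,000 us per quarter note) and 480 ppqn,
# one quarter note lasts 0.5 seconds.
commands = note_events_to_spawn_commands([(0, 36), (480, 38), (960, 36)])
# -> ["spawn 36 at 0.000", "spawn 38 at 0.500", "spawn 36 at 1.000"]
```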