
MarginCalled1 t1_j6iu0ye wrote

I'd think the ideal way to handle this would be to build up a large enough buffer, say a couple of hours. It wouldn't technically be 'live', but it would be very close to it, and it would allow time for 3D artwork generation.
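To make the buffering idea concrete, here's a minimal producer-consumer sketch in Python; the buffer size, scene length, and function names are all invented for illustration, not anything from the actual show:

```python
import queue
import threading
import time

# Hypothetical render-ahead buffer: a producer thread generates scenes
# faster than real time and parks them in a bounded queue; the streamer
# drains the queue at playback speed, so the broadcast always runs a
# fixed lead behind the generator.
BUFFER_SECONDS = 2 * 60 * 60                 # ~2 hours of lead, per the suggestion above
SCENE_SECONDS = 120                          # one scene ~= 2 minutes of airtime
buffer = queue.Queue(maxsize=BUFFER_SECONDS // SCENE_SECONDS)

def generate_scene(scene_id):
    time.sleep(0.1)                          # stand-in for the slow 3D generation step
    return f"scene-{scene_id}"

def producer():
    scene_id = 0
    while True:
        buffer.put(generate_scene(scene_id)) # blocks while the buffer is full
        scene_id += 1

def streamer():
    while True:
        scene = buffer.get()                 # blocks until a scene is ready
        print(f"airing {scene}")
        time.sleep(SCENE_SECONDS)            # play it out in real time

threading.Thread(target=producer, daemon=True).start()
streamer()
```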

Additionally, you could save previously generated content for reuse. For example, if I wanted to create a character named Daffy, instead of drawing him fresh each time, you could have the AI generate him in every possible motion once and then refer back to that going forward. That would save a ton of compute and shave a lot of time off the processing requirements.
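A rough sketch of that reuse idea, using Python's functools.lru_cache as a stand-in for a real asset store; the (character, motion) keying and render_with_model are hypothetical, and in practice the cache would live on disk or in object storage rather than in memory:

```python
import functools

# Memoize generated assets by (character, motion) so each pose is
# rendered at most once; later requests are served from the cache.
@functools.lru_cache(maxsize=None)
def get_asset(character, motion):
    print(f"rendering {character}/{motion} (cache miss)")
    return render_with_model(character, motion)

def render_with_model(character, motion):
    # stand-in for the slow generative model call
    return f"{character}:{motion}".encode()

get_asset("Daffy", "walk")   # slow path: generates and caches
get_asset("Daffy", "walk")   # fast path: served from the cache
```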

1

tinylobsta OP t1_j6jp1jb wrote

We've considered this -- the show is actually on about a 2-minute delay, but otherwise it's entirely live. You can't see it in the iteration I have streaming right now, but the entire show is configurable... if you want less of one character, we can do that. Want more of one setting? We can do that, too! More lines per character? etc.
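As an illustration of what that configurability could look like -- this is not the show's actual schema, just a hypothetical shape with invented names and numbers:

```python
import random

# Illustrative config: weights bias how often each character and setting
# appears, and a cap bounds dialogue length per scene.
show_config = {
    "characters": {"alice": 1.0, "bob": 0.5},    # lower weight = fewer appearances
    "settings": {"apartment": 2.0, "cafe": 1.0},
    "max_lines_per_character": 6,
    "scene_length_seconds": 120,                 # the ~2m-per-scene concept
}

def pick_weighted(weights):
    # sample one name proportionally to its weight
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

print(pick_weighted(show_config["settings"]))    # 'apartment' about 2/3 of the time
```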

It was a design decision we made so that the audience can (in the future) morph the narrative of the show. We actually monitor the Twitch chat and can pick up keywords to help shape the narrative (without defining it -- the generative stuff does all that). So we wanted to stick to the 2-minute-per-scene concept. We might need to do something like batching in the future, though, if time-to-create keeps being a constraint for the 3D models.
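A hedged sketch of how keyword pickup from chat might nudge weights like the ones above; the KEYWORDS table, nudge factor, and cap are all made up for illustration:

```python
import collections

# Map chat keywords to the (section, name) config entries they influence.
KEYWORDS = {"cafe": ("settings", "cafe"), "bob": ("characters", "bob")}

# Minimal inline config so the sketch is self-contained.
config = {
    "characters": {"alice": 1.0, "bob": 0.5},
    "settings": {"apartment": 2.0, "cafe": 1.0},
}

def apply_chat_signals(messages, config):
    # Tally keyword hits across recent chat messages...
    hits = collections.Counter()
    for msg in messages:
        for word in msg.lower().split():
            word = word.strip("!?.,")
            if word in KEYWORDS:
                hits[KEYWORDS[word]] += 1
    # ...then gently boost the matching weights, capped so a chat flood
    # shapes the narrative without hijacking it.
    for (section, name), count in hits.items():
        config[section][name] *= min(1.0 + 0.1 * count, 2.0)

apply_chat_signals(["more bob pls", "cafe cafe!"], config)
print(config)
```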

6

MarginCalled1 t1_j6jw5db wrote

I'd assume your time-to-create will gradually improve, given how quickly this particular technology is moving and with general hardware advances as well.

You guys are on the cutting edge; some might say you're a little ahead of your time. Regardless, it's welcome innovation, and all I can do is wish you the absolute best. Fascinating stuff.

2

SWATSgradyBABY t1_j6jhb6b wrote

Generating every possible motion isn't practical, but I like where you're going.

1