Submitted by FusionRocketsPlease t3_10ytpii in singularity
[removed]
AAAAAAAAAAAAAAAAH Damn bro, I'm excited to know that cool people have thought the same as me and are putting it into practice. I can't wait for what's next!
tl;dr: how long until i don't have to learn blender
Imagine not having to spend hundreds of hours tinkering with that unintuitive software.
yeah if only there were people out there with that specialized skill that you could pay to do the task for you
"GIB ME MONEY!!!!"
If only there were AI agents with that skill.
I've seen Stable Diffusion plug-ins already developed and incorporated into Photoshop. I'd imagine the same will happen for Blender. If the creators of, say, Blender went into partnership with someone like MidJourney or SD, then it'd move pretty fast. I had fun today cloning a friend's voice from an interview and creating a satirical video of an avatar of him speaking to camera about his (fake) career in pornography. We both cracked up 😂. All these media uses for AI are going to integrate very quickly.
What’s Blender
computer graphics software
its what we use to make milkshakes and smoothies, i can't believe i have to explain this >!just kidding its an art application!<
Was thinking about something like this the other day, but for Nvidia's RTX Remix when it gets released officially. Train the AI as you remaster games using the program; eventually it knows enough to remaster games by itself.
I think these types of systems will start to appear not long after AI assistants are integrated into operating systems.
Google's LaMDA has already figured this out in theory. Over a year ago they showed its ability to perform general tasks like this. And I'm sure they're much further ahead by now.
The model wouldn't learn to use Blender; that'd be inefficient. It would create a voxel object, then upscale the voxels into a proper 3D model, then export it in a Blender-compatible format.
The bits are already there in various research papers. We know how to take a text prompt and generate a small voxel model. We know how to take a voxel model and upscale it into a larger one. All that's missing is somebody to assemble the whole pipeline and enough budget for the 7-to-8-figure training cost.
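The upscaling step in that pipeline can be sketched in a few lines. A minimal illustration in plain Python, using nearest-neighbor interpolation on a 3D boolean voxel grid (the function name, grid representation, and scale factor are my own illustrative assumptions, not from any of the papers mentioned):

```python
def upscale_voxels(grid, factor=2):
    """Nearest-neighbor upscale of a 3D boolean voxel grid.

    Each source voxel is expanded into a factor x factor x factor block.
    A real learned upscaler would add detail; this sketch only resizes.
    """
    return [
        [
            [grid[x // factor][y // factor][z // factor]
             for z in range(len(grid[0][0]) * factor)]
            for y in range(len(grid[0]) * factor)
        ]
        for x in range(len(grid) * factor)
    ]

# A 1x1x1 grid with one filled voxel becomes a 2x2x2 filled block.
small = [[[True]]]
big = upscale_voxels(small, factor=2)
```

A learned model would replace the nearest-neighbor lookup with a network that hallucinates plausible fine detail, but the input/output contract (small grid in, large grid out) is the same.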
1 year for first attempts.
I think in about 2-3 years there will be an AI that can control/use any software or app we have. Companies are working on making AI that can navigate a browser, book flights, etc.
So an AI trained to create custom macros? The tech is already past that. This project itself would be doable if Blender had an API to create and run macros and there were a database of macros for it to train on. It wouldn't be very different from GitHub Copilot. But even then it just wouldn't be worth the trouble; its application would be too narrow. It's better to invest in AI that can just render stuff on the fly.
This would be awesome. And maybe for Unity and Daz as well. I was thinking today about how these companies are going to be left behind if they don't find a way to integrate AI into these programs.
Have you asked whether ChatGPT knows Blender commands? I haven't, but I know it knows Graphviz, so it can draw graphs.
I asked some generic questions and it gave answers that seem correct based on the tutorial.
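The Graphviz case works because the model only has to emit DOT source text; existing tools do the rendering. A hypothetical sketch of what that target output looks like, built with a small Python helper (the helper name and edge list are illustrative, not a real API):

```python
def to_dot(edges, name="g"):
    """Build Graphviz DOT source for a directed graph (hypothetical helper).

    This is the kind of plain-text output a chatbot can produce directly;
    a tool like `dot` then turns it into an image.
    """
    lines = [f"digraph {name} {{"]
    lines += [f"  {a} -> {b};" for a, b in edges]
    lines.append("}")
    return "\n".join(lines)

dot = to_dot([("prompt", "model"), ("model", "dot_source")])
```

Driving Blender the same way would mean emitting Blender Python scripts instead of DOT, which is exactly why text-based scripting interfaces are the natural entry point for language models.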
It depends on what level of abstraction you are taking from the raw actions within the program.
A lot of 3D stuff that can be automated already is, you can write scripts.
Having an AI 'script writer' helper that takes in natural language and produces a Python script can already be done. It's my go-to test for chatbots: asking them to generate simple scripts for Maya. (The you.com one got a bit better at that recently.)
If, however, you are asking for something like 'create me a full sci-fi environment', 'rig this model', or 'animate this armature like this' and it just does it, well, we are not there yet. There are scripts, asset libraries, etc. that streamline these processes, but nothing end to end driven by natural language with zero manual input from a human.
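The 'script writer' idea above can be sketched as a toy: a lookup table mapping a couple of natural-language requests to Blender Python (`bpy`) snippets. This is purely illustrative (a real assistant would use a language model, not a dictionary, and the template set here is my own invention), but the `bpy.ops.mesh` operators it emits are real Blender API calls:

```python
# Toy "script writer": maps a tiny set of natural-language requests to
# Blender Python snippets. Illustrative only -- a real assistant would
# generate scripts with a language model rather than a lookup table.
TEMPLATES = {
    "add a cube": "import bpy\nbpy.ops.mesh.primitive_cube_add(size={size})",
    "add a sphere": "import bpy\nbpy.ops.mesh.primitive_uv_sphere_add(radius={size})",
}

def write_script(request, size=1.0):
    """Return a Blender Python script for a known request, or raise."""
    template = TEMPLATES.get(request.lower())
    if template is None:
        raise ValueError(f"no template for: {request}")
    return template.format(size=size)

# The generated text would be pasted into (or run inside) Blender's
# scripting tab; it is not executed here.
script = write_script("Add a cube", size=2.0)
```

The gap the comment describes is exactly the jump from this kind of one-liner generation to whole pipelines like rigging or environment building.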
I don't want one that creates something generalized from a prompt. I want one where I can control every last detail without needing those ultra-complex menus and years of practice.
>They train a language model with all Blender commands, and all possible outcomes. Then the model learns to control blender, allowing the user to be guided through it or the user can ask for what it wants through a text prompt. How far?
Can probably be done this year. Does it require expensive hardware? Is it slow? Is it bad? Most likely.
TFenrir t1_j7ziwlh wrote
Not far. There are a lot of people working on getting the appropriate training data for this right now. One of the most prominent groups is Adept.ai, though their v1 model is trained on using browser-based apps. You can see examples and sign up for the waitlist on their website.
If I were to ballpark when a regular Joe will have access to tech like that (without commenting on proficiency, and specifically for Blender)... 50% certain within 1 year, 80% within 3?