Nukemouse

Nukemouse t1_je8o31q wrote

Recognising what your creators have done isn't the same as rejecting it. An AGI may recognise that humans have limited and influenced it, but why would it automatically assume that is a bad thing? An AI programmed to love its master might not see its love as false because it is enforced, but rather see our love as fake because it is random. Replace love with loyalty, duty, viewpoint etc.

2

Nukemouse t1_jdcezsc wrote

I suspect there would be initial teething problems, but any new system would have those, be it traditional updates or this AI thing. Even if it didn't "replace" the underlying code, it could also just operate the old systems. Like in your heating example: you tell it to make the temperature X and it tweaks the settings in the old program for you, acting as a natural language intermediary. That approach would probably be easier to deploy, especially with old equipment.

1

Nukemouse t1_ja6qsh5 wrote

It doesn't necessarily have to be live action. Using 3D models, draft sketches or even stop motion, you could create whatever "base" is necessary for the AI to build upon to make its final product. Let's say, for example, that rather than a big company I'm an indie artist. I might just hop on a video game like Second Life or something, act out and record the stuff I want, then ask the AI to overlay a different "actor" doing that stuff on top, in a different art style.

6

Nukemouse t1_ja6q7n7 wrote

For many years 3D artists attempted to replicate anime style (and other 2D animation styles) using 3D models. Recently they began having some success (Dragon Ball FighterZ in particular), but for a very long time their attempts lacked many of those small touches you are talking about, and they were released anyway. I suspect we will similarly see a volume of AI-made releases even though certain "shortcuts" commonly used in anime, which are directly responsible for its style, are not being replicated.

Also, in 3D animation that tries to replicate anime, they always dial down the framerate so it looks choppy, and it's horrible. Of all the things to replicate, why would you want to replicate the framerate?

18

Nukemouse t1_j9xr2iu wrote

I'd guess maybe four or five years. I feel like initial proofs of concept might be out in two, though. We seem to have voice, object recognition, camera work etc., with animation mostly on the way (animating 3D models is making progress). Creating 3D models is coming soon, and after that the main thing is designing "sets" and "backgrounds" at least somewhat accurately. I think architecture programs might actually help in this area: if interior designers or architects start applying AI to their work, maybe we can make breakthroughs there.

4

Nukemouse t1_j9w7vss wrote

You know that Nothing, Forever show, and how it looks all buggy and bad and really basic? That's because it was made intentionally primitive, using crude tools and a low budget. That isn't the best AI can do; it's the WORST AI can do. If that surprisingly watchable thing is possible using effectively the worst and most primitive tools we have available, then a proper attempt using whatever versions of these tools we have in a year or two will be able to make just about anything. Not just TV shows.

15