Comments

nycguy30 t1_j10a07v wrote

OpenAI making every small business and employee shit their pants

197

Less-Mail4256 t1_j10yqit wrote

I’ll worry when I see a robot designing and building a whole kitchen while being able to interact with an indecisive customer.

60

baelrog t1_j12sxbl wrote

Alas, Skynet was born not out of automated weapons systems, but from an interior design bot fed up with indecisive humans.

14

[deleted] t1_j136gwr wrote

The thing is, there will be an AI that deals with customers, and that AI will have infinite patience to cater to their specific needs while at the same time using powerful tools to guide the customer to the most profitable option.

7

ChronoFish t1_j13kk6d wrote

Maybe the most profitable, but the ability to capture the correct sentiment from a person is something that even humans struggle with.

A lot of people can build things. Some can even follow directions. But few can translate customer desire into true expectations... as the classic PM tree-swing meme conveys:

tree swing

2

Less-Mail4256 t1_j14fena wrote

There will always be a large margin of error. Some people just can’t be appeased, regardless of their options, because they don’t actually know what they want.

1

RandomCandor t1_j10fyuy wrote

And the big ones. The big ones are shitting their pants even more.

40

AfrikaCorps t1_j11h4d6 wrote

Big business! Small businesses benefit from the democratization of tech.

I remember when my brother started a 3D printer farm in 2013 and undercut this huge company making specialized parts.

6

joshglen t1_j12rqll wrote

Ah, good luck with that now; every part is being sold on Etsy nowadays for barely over the price of filament + postage.

1

AfrikaCorps t1_j12v77o wrote

Not here in Mexico; it's still good business. If you know Spanish, look up prices. It's stupid: we're talking a print that takes 5 hours going for $20 or so.

Now, about your point, which I will not ignore: that's even deeper democratization, because now some dudes can undercut small businesses and sell parts for less as a side hustle. It's positive, in a way.

3

baelrog t1_j12t5mb wrote

It's been two weeks since ChatGPT came out, and I've already changed how I do freelance translation.

I feed every sentence I want to translate into the bot, and unlike Google Translate, where I have to edit every single output, with ChatGPT I only have to edit half of the time.

4
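
For illustration, here is a minimal sketch of that workflow in Python. ChatGPT had no public API at the time, so this uses the text-davinci-003 completions endpoint as a stand-in; the prompt wording and the sample sentence are made up:

```python
# Sketch of the translate-then-edit workflow described above, using
# OpenAI's legacy completions endpoint. Requires: pip install openai
import openai

openai.api_key = "YOUR_API_KEY"

def translate(sentence, target="English"):
    prompt = (f"Translate the following sentence into {target}:\n\n"
              f"{sentence}\n\nTranslation:")
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0.2,  # low temperature keeps translations literal
    )
    return resp.choices[0].text.strip()

sentences = ["Dos cervezas, por favor."]  # hypothetical input
drafts = [translate(s) for s in sentences]
# Each draft still gets a human editing pass, as the comment notes.
```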

ChronoFish t1_j13m55z wrote

Well, conversely, this opens up a whole new world of rapid prototyping that small businesses never had access to before. Instead of being reliant on the business owner's skill, or maybe a single employee's, those individuals can now ramp up their productivity to the point where they can serve customers that were traditionally out of reach.

2

ExternaJudgment t1_j12r2d4 wrote

Bullshitters getting priced out because the REAL value of their bullshit is now becoming clear.

I see it as progress. I would never pay them before, but now prices might actually be reasonable.

−6

SUPRVLLAN t1_j138eex wrote

Prices for what?

1

ExternaJudgment t1_j13flhj wrote

Like a phone in the 1980s: for $10,000 you get a totally useless brick.

Evolution throws out the expensive garbage and makes prices fair for all.

−2

UX-Edu t1_j10uivv wrote

Ooo! Can they rig the models? Because rigging is a goddamn pain in the ass.

77

TITANDERP t1_j11p9fa wrote

Probably better solved algorithmically; surely there are already auto-rig tools, right?

20

BestPlanetEver t1_j11sjwn wrote

There are, but AI helps with secondary movement like muscle jiggle and skin solvers.

10

UX-Edu t1_j11upip wrote

Maybe? I’m not gonna lie, it’s been 15 years since I rigged anything

6

icebeat t1_j11zylf wrote

Oh, the tedious and frustrating jobs are reserved for humans.

2

Cheapskate-DM t1_j108dbu wrote

I suspect these tools are going to combine with and reinforce the 2D art generators - and possibly break into animation.

Having a stock model with known dimensions and posability, which can then be overlaid with a 2D illustration pass, could solve the wonkiness AI models currently have with things like hands and proportions. Plug that into a frame-by-frame animation, and you've just table-flipped the anime industry.

56

mythoughtsforapenny t1_j10uyo9 wrote

Strengthen animation AI to the point that it can animate fully realistic humans, other animals, and objects; combine it with improved language-composition AI, maybe with a program specialized in generating narrative structures, and you've just flipped over the entire film, television, and gaming industries. But people also won't be able to rely on video evidence anymore.

17

TITANDERP t1_j11ozpw wrote

I for one would like to write a light novel and watch it slowly come to life. People are scared that media will be overtaken by AI junk, but what they don't realize is that the tools are likely to be marketed too. Imagine being able to write up an entire show and then watch and share it, and vice versa. I understand it's an optimistic outlook and unlikely to happen in that fashion. That, and for now people generally see it as soulless, which, fair enough.

12

icebeat t1_j12071r wrote

At my company we are already using AI for animation.

6

norbertus t1_j11iyjy wrote

> I suspect these tools are going to combine with and reinforce the 2D art generators - and possibly break into animation.

It's already happening. Stable Diffusion has been integrated into GIMP and Blender, e.g., to auto-texture 3D models.

5

sanman t1_j1fuowj wrote

Since a 3D object is a vector object, it would be nice if they also came out with AI that generates 2D vector art, since that would help reinforce 2D art in general. Then 2D vector art could be seamlessly integrated with 2D raster art.

1

micktalian t1_j120upi wrote

Alright y'all, you know what this means. This has been decades in the making, and we should rejoice. All we have to do is feed the AI Warhammer models to train on, give it a prompt for the specific faction/unit/pose you want, and bam, GW needs to change its business model.

31

JohnnySasaki20 t1_j11666k wrote

That would certainly make developing video games a lot faster and easier. I imagine they could eventually make an AI write the code for the game as well. At some point you wouldn't have to do much of anything except tell the AI what to do.

24

imthebestnabruh t1_j117wni wrote

Sounds like coding, telling a machine what to do

18

Cactus_TheThird t1_j11dau5 wrote

More like "prompting", or just knowing which sequence of words to utter to turn your vision into reality

5

ExternaJudgment t1_j12rc38 wrote

Potejto potato

Same thing. Most idiots are incapable of doing a simple Google search. You need the same level of intelligence to know what to prompt as to implement it.

3

NLwino t1_j13iu6e wrote

At some level, this is true. But it will greatly increase the productivity of someone who has the knowledge needed. Someone not using AI will simply not be able to compete in the market anymore.

2

ExternaJudgment t1_j13n60q wrote

Sure, I've heard this expressed that it will make 10x programmers now 100x programmers.

And as an AI using 10x programmer, I couldn't agree more.

ChatGPT gives me such a boost it is getting ridiculous.

1

thisdesignup t1_j186y3w wrote

Yep, I'm trying to create a personal assistant, so I'm teaching Davinci 003 how to respond to voice commands and return keywords that trigger Python scripts.

I basically feel like I'm programming in plain English. It's easier only because I don't have to know what code to write, but I still have to know the process of teaching. And teaching is not easy.

2
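
A minimal sketch of that kind of keyword dispatcher, assuming the legacy OpenAI completions API; the prompt, keyword list, and script names here are hypothetical, not the commenter's actual setup:

```python
# Map a transcribed voice command to one keyword, then run the
# matching Python script. All names below are illustrative.
import subprocess
import openai

openai.api_key = "YOUR_API_KEY"

# Few-shot prompt "teaching" the model to answer with one keyword.
PROMPT = """Map the user's request to exactly one keyword from:
[LIGHTS_ON, LIGHTS_OFF, PLAY_MUSIC, UNKNOWN]

Request: turn the lamp on
Keyword: LIGHTS_ON

Request: {request}
Keyword:"""

SCRIPTS = {  # keyword -> script to trigger
    "LIGHTS_ON": "lights_on.py",
    "LIGHTS_OFF": "lights_off.py",
    "PLAY_MUSIC": "play_music.py",
}

def handle(request: str) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT.format(request=request),
        max_tokens=5,
        temperature=0,  # deterministic keyword choice
    )
    keyword = resp.choices[0].text.strip()
    script = SCRIPTS.get(keyword)
    if script:
        subprocess.run(["python", script])
    return keyword
```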

dc2b18b t1_j11r09g wrote

You’d have to describe your app in such detail that it would be faster just to code it

0

MeatisOmalley t1_j12di0u wrote

There are already AI coding assistants, and they dramatically speed up workflow. You don't describe your entire game in a single prompt; you describe aspects of the game, then the AI writes the code to program that aspect.

1

dc2b18b t1_j140mhm wrote

“Make me a game. It has these aspects. Also when a user clicks item X and they have an axe in their inventory and greater than 50% health and there are less than 10 people currently playing and one of the other people playing is an elf who is greater than level 5 but less than level 8, the item should trigger Y. Oh but if the elf is at level 9, then scratch all that because the item should then do Z.”

Yeah good luck describing aspects at a high enough level that the AI can make something usable but in enough detail that it’s actually creating the game you want it to create, without causing unintended side effects.

−2

MeatisOmalley t1_j14q6au wrote

You just have no clue what you're talking about.

I can ask the AI, "write a gravity simulation," and it can code it in 1 second, and I can plug that into some aspect of my game. I can ask the AI, "write a performant function that constantly checks for and returns coordinates," and plug that into my minimap UI. It's literally faster than typing. I can also give parts of my own code to the AI and say, "I got (x) error. What's wrong with my code?" and the AI will describe in detail what I did wrong and provide a solution.

Game development is coded in much smaller pieces than you seem to think.

4
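
The gravity example really is that small. A self-contained sketch of the kind of snippet an assistant might return, in plain Python:

```python
# A minimal 2D projectile-gravity step you could plug into a game loop.
GRAVITY = -9.81  # m/s^2, pulling along -y

def step(pos, vel, dt):
    """Advance one semi-implicit Euler step."""
    vx, vy = vel
    vy += GRAVITY * dt          # gravity only affects the y velocity
    x, y = pos
    x += vx * dt
    y += vy * dt
    if y <= 0.0:                # crude ground collision
        y, vy = 0.0, 0.0
    return (x, y), (vx, vy)

pos, vel = (0.0, 10.0), (3.0, 0.0)
for _ in range(60):             # one second at 60 ticks/s
    pos, vel = step(pos, vel, 1 / 60)
print(pos)                      # roughly (3.0, 5.0): fell ~5 m in 1 s
```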

SnooPuppers1978 t1_j164hcj wrote

Or you could just use an existing library that also takes "1 second".

I think so many examples people bring up are just things you can already do with a few clicks: an existing library, an existing framework, assets, etc.

In reality, code is prompts. And you can always abstract code into functions, which are like prompts.

If prompts are that beneficial to some, it sounds like their code has way too much boilerplate.

2

resdaz t1_j138jaw wrote

Except it's so brain-dead easy that literally anyone can do it within 5 seconds, which, while good, also makes any video game worth 0 dollars.

3

bodden3113 t1_j1190bl wrote

That's the dream.

1

Solid_Rice t1_j11xnkp wrote

why is that the dream?

3

bodden3113 t1_j1218u7 wrote

Actually, have you ever heard of or experienced lucid dreaming? If AI is capable of generation at that level, we would be able to have dreamlike experiences on demand.

Think about it. When you talk to someone in your dream, are you controlling them directly? Or the environment you find yourself in, or the sounds and music you hear? Are you controlling each aspect consciously? No, your subconscious is, and it's generating the entire phenomenon in real time with or without your input; most of the time you're just along for the ride. AI media could potentially get to that level of real-time generation. And it seems it could get there much sooner rather than later, given how fast and how good Google Deep Dream and image-generation tech got. It's literally a dream come true. LITERALLY?

−1

bodden3113 t1_j11yakc wrote

Cause that's literally by definition what a dream is? 🤨

−4

BillowsB t1_j14gm8h wrote

ChatGPT can write code. Not on the level needed to facilitate full game development, but enough to make my starfish pucker.

1

SorakaWithAids t1_j16y3j9 wrote

Yeah, frankly I have been using it for snippets and small functions. Absolutely incredible.

1

BlitzBlotz t1_j189jo9 wrote

Making 3D models is already super easy and super fast; the annoying and hard part is making them in a way that doesn't cause glitches or errors, or take a huge toll on your game's graphics budget.

Currently, AI 3D models are pretty basic and have shitty geometry; it will take some time until that's fixed. Not saying it won't happen, but the current version of this AI is more a proof of concept than anything else.

1

JohnGabin t1_j10k7j4 wrote

Cool, because ChatGPT built me a really bad spaceship with a Blender Python script.

23
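
For a sense of what such a script looks like, here is a minimal sketch using Blender's bpy API. It only runs inside Blender, and every dimension here is invented:

```python
# The sort of thing ChatGPT spits out for "build me a spaceship":
# a crude hull-plus-wings model assembled from primitives.
import bpy

# Fuselage: a cylinder rotated to lie along the Y axis.
bpy.ops.mesh.primitive_cylinder_add(radius=0.5, depth=4.0,
                                    location=(0, 0, 0),
                                    rotation=(1.5708, 0, 0))
# Nose cone at the front, tip pointing forward (+Y).
bpy.ops.mesh.primitive_cone_add(radius1=0.5, depth=1.0,
                                location=(0, 2.5, 0),
                                rotation=(-1.5708, 0, 0))
# Two wings: cubes flattened into thin slabs.
for x in (-1.25, 1.25):
    bpy.ops.mesh.primitive_cube_add(size=1, location=(x, -0.5, 0))
    bpy.context.object.scale = (1.5, 0.75, 0.05)
```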

Shelfrock77 OP t1_j1027uk wrote

The next breakthrough to take the AI world by storm might be 3D model generators. This week, OpenAI open sourced Point-E, a machine learning system that creates a 3D object given a text prompt. According to a paper published alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.

Point-E doesn’t create 3D objects in the traditional sense. Rather, it generates point clouds, or discrete sets of data points in space that represent a 3D shape — hence the cheeky abbreviation. (The “E” in Point-E is short for “efficiency,” because it’s ostensibly faster than previous 3D object generation approaches.) Point clouds are easier to synthesize from a computational standpoint, but they don’t capture an object’s fine-grained shape or texture — a key limitation of Point-E currently.

To get around this limitation, the Point-E team trained an additional AI system to convert Point-E’s point clouds to meshes. (Meshes — the collections of vertices, edges and faces that define an object — are commonly used in 3D modeling and design.) But they note in the paper that the model can sometimes miss certain parts of objects, resulting in blocky or distorted shapes.

Image Credits: OpenAI

Outside of the mesh-generating model, which stands alone, Point-E consists of two models: a text-to-image model and an image-to-3D model. The text-to-image model, similar to generative art systems like OpenAI’s own DALL-E 2 and Stable Diffusion, was trained on labeled images to understand the associations between words and visual concepts. The image-to-3D model, on the other hand, was fed a set of images paired with 3D objects so that it learned to effectively translate between the two.

When given a text prompt — for example, “a 3D printable gear, a single gear 3 inches in diameter and half inch thick” — Point-E’s text-to-image model generates a synthetic rendered object that’s fed to the image-to-3D model, which then generates a point cloud.

After training the models on a dataset of “several million” 3D objects and associated metadata, Point-E could produce colored point clouds that frequently matched text prompts, the OpenAI researchers say. It’s not perfect — Point-E’s image-to-3D model sometimes fails to understand the image from the text-to-image model, resulting in a shape that doesn’t match the text prompt. Still, it’s orders of magnitude faster than the previous state-of-the-art — at least according to the OpenAI team.

16
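
To make the two-stage pipeline concrete, here is a conceptual sketch in Python. The function names are hypothetical stand-ins for illustration, not the actual point_e API; the real sampler code is in OpenAI's repo:

```python
# Conceptual sketch of the Point-E pipeline described above,
# with stubbed stages so it runs end to end.
import numpy as np

def text_to_image(prompt: str) -> np.ndarray:
    # Stage 1: a text-conditioned diffusion model renders one
    # synthetic view of the object (stubbed with a blank image).
    return np.zeros((64, 64, 3))

def image_to_point_cloud(image: np.ndarray, n: int = 4096) -> np.ndarray:
    # Stage 2: an image-conditioned diffusion model emits n points,
    # each row (x, y, z, r, g, b): a discrete sample of the shape.
    return np.zeros((n, 6))

def point_cloud_to_mesh(points: np.ndarray):
    # Optional stage 3: the separate model that turns the cloud into
    # vertices/faces; this is where blocky artifacts can creep in.
    vertices = points[:, :3]
    faces = np.zeros((0, 3), dtype=int)  # stub: no faces generated
    return vertices, faces

image = text_to_image("a 3D printable gear, 3 inches in diameter")
cloud = image_to_point_cloud(image)   # (4096, 6) colored point cloud
mesh = point_cloud_to_mesh(cloud)
```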

[deleted] t1_j10q3ye wrote

[deleted]

16

Themasterofcomedy209 t1_j12uyun wrote

For a while, at least, it's going to be better for a lot of people. AI will take a very long time to get to the point where you can tell it to generate, say, "long-haired male elf" and it pulls up a professional, usable elf model. In the meantime it'll be helpful because people won't have to spend 400 hours on crates and can focus on the stuff AI won't be able to do for a while, which is the stuff most artists like doing most anyway.

2

MethodicalProgrammer t1_j13umcy wrote

I see it the same way as procedural generation: it's a good starting point for an artist but will invariably need an artist to make specific adjustments.

1

hendrix320 t1_j11giww wrote

I'm starting to wonder if, when I used DALL-E to make thousands of corgi images, I messed with its algorithm into thinking corgis are the best dog. Corgis show up on their website quite often.

9

69_A_Porcupine t1_j10s5lm wrote

Is this the part where it automatically starts printing robot parts and takes over the power grid?

6

Robot-Candy t1_j10jch2 wrote

These look like they're all made out of floam, an odd choice of rendering method.

3

Rhawk187 t1_j11pdfa wrote

Teaching my Game Engine Design class next semester. This will be a fun one for an assignment.

3

LegendaryPlayboy t1_j123w1o wrote

We will soon create infinite AR/VR worlds out of our own words. And movies. AI cinema should be a reality within a couple of years, more or less. Point-E is another good step forward for the whole picture.

3

SadcoreEmpire168 t1_j12d464 wrote

I'm genuinely surprised that Stable Diffusion is getting more advanced & being used to make detailed images like this.

3

fox-mcleod t1_j11cpxh wrote

Given it generates point clouds from descriptions, I wonder if the best application is VR rather than 3D printing.

2

norbertus t1_j11je3f wrote

The 3D machine learning application you are wondering about is the "neural radiance field," or NeRF, which has VR applications.

https://www.matthewtancik.com/nerf

The technology is related to "computational photography" (or "light field photography") techniques that are a decade or so old.

3
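
At its core, a NeRF is an MLP that predicts a density and color at sampled points along each camera ray, which are then alpha-composited into a pixel. A minimal sketch of that compositing step, with random stand-ins for the network's outputs:

```python
# Alpha-composite colors along one ray from densities an MLP would
# predict. The densities/colors here are random placeholders.
import numpy as np

n_samples = 64
deltas = np.full(n_samples, 0.05)          # spacing between ray samples
sigma = np.random.rand(n_samples) * 5.0    # MLP-predicted densities
rgb = np.random.rand(n_samples, 3)         # MLP-predicted colors

alpha = 1.0 - np.exp(-sigma * deltas)      # opacity of each segment
# Transmittance: how much light survives to reach each sample.
trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]
weights = trans * alpha
pixel = (weights[:, None] * rgb).sum(axis=0)   # final rendered color
print(pixel)
```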

bnogal t1_j133e54 wrote

I've had enough. Please, someone should implement a politician AI to set up an AI party.

2

NLwino t1_j13jmq5 wrote

The problem with this is that you need to train the AI with a bunch of data. So where are we going to find a lot of data on GOOD politicians?

2

bnogal t1_j13k7rn wrote

You can train it on the job to be done:

Improve rates, reduce corruption, obtain votes.

2

thisdesignup t1_j187dbz wrote

>obtain votes

Watch out: if you aren't specific enough, it will do anything to get votes.

1

bnogal t1_j1a3xhp wrote

Like sending the opposition to jail. Meh, if it improves the indexes and reduces corruption, that is fine.

1

FuturologyBot t1_j106v6d wrote

The following submission statement was provided by /u/Shelfrock77 and appears in full in their comment above.

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/zqv8a7/openai_releases_pointe_an_ai_that_generates_3d/j1027uk/

1

CheckMateFluff t1_j10jm46 wrote

It's not there yet, but man, this would be cool. I could generate a mesh from the text and clean up the topology. If I need simple, basic things like lamps, cups, plates, etc., this would save a lot of time.

1

Kalwasky t1_j12jh23 wrote

To anyone wondering, this is largely iterative work over Facebook's prior work. As far as I've been able to tell, there is little going on that's groundbreaking; think of it as the difference between a small GPT model and a large one.

1

Sugar_bytes t1_j12jpbk wrote

Omniverse has some interesting stuff for AI 3D modeling. Worth looking at for rigging solutions on the horizon, as NVIDIA seems to want to eliminate the busy work.

1

No_Introduction_3881 t1_j16k9f4 wrote

Silly question: I don't see it available in my OpenAI account. Where do I get it from?

1

pathego t1_j12n3ev wrote

More posts like this! The bot-looking question posts about AI on this sub lately have devastated its quality. It has also revealed an interesting way to kill Reddit: more AI content burying the valuable stuff we come here for.

−1

Hotporkwater t1_j1162qu wrote

"Oh no! What if this puts 3D modelers out of work?? Ban it!" - half the people on Reddit

−5

imaverysexybaby t1_j12lhdo wrote

Yea just learn a new highly skilled trade that’s bound to be automated out of existence, you ingrates!

9

Hotporkwater t1_j12mmna wrote

You would have stopped Edison and Tesla because the lantern makers threw a fit about it, lmao.

−4

nameTotallyUnique t1_j11b54w wrote

I tried to have it make a cat in HTML with CSS; it was just a bunch of circles named correctly.

3

CoolmanWilkins t1_j11ofe0 wrote

People who create cats in HTML with CSS will still have jobs. Launching a bootcamp for this shortly.

3