
nxqv t1_jdxx53i wrote

I don't know a whole lot about LLMs because I'm new to the field, but I sure do know about FOMO. I recently felt a lot of FOMO about having missed opportunities to pursue graduate school and AI research years ago.

What you need to do is put a name to the feeling. Dig deep and understand your emotions better.

What is it you're afraid of missing out on exactly?

Untold riches? Researchers don't really make more or less money than people in other computer science jobs. And most billionaires aren't following some predetermined path.

Fame? Clout? We can't all be Sam Altman or Yann LeCun or Eliezer Yudkowsky or whoever. Besides, most of the things you see these types of guys say or do in public are only tangentially related to the day-to-day experience of actually being them.

Impact? I've recently come to realize that a craving for "impact" is often rooted in a desire for one of these other things, or in some sort of egotistical belief or other deep-seated psychological matter, like seeking someone's approval. In reality, you could be the guy who cures cancer and most regular people would only think about you for half a second, your peers could be jealous freaks, and people could still find some tiny little reason to turn on you if they really wanted to. You could easily die knowing you did something amazing for the world and nobody cared but you. Are you the type of person who would be okay with that?

Edit: the "Impact" part was controversial so I'd like to add:

> don't lose sight of the forest because of a tree. We're talking about impact in the context of FOMO - if you feel that level of anxiety and urgency about potentially missing out on the ability to make an impact because others are already making the impact you want to make, it's more likely to be ego-driven than genuine altruism

The ability to work on something cool or trendy? There are SO MANY new technologies out there you can build a career around. And there will continue to be something cool to do for as long as humanity exists.

Something else?

For each one of these, you can come up with convincing counterarguments: either it isn't real, or you can find a similar opportunity doing many other things.

And let's be real for a second: if this technology really is going to take knowledge workers' jobs, researchers are probably on the chopping block too.

97

[deleted] t1_jdy83bi wrote

[deleted]

46

ginsunuva t1_jdyu8d2 wrote

Some things don’t need impacting, and yet people force an impact (which may worsen things) to satisfy their ego - which usually goes back to needing more satisfaction once they realize the issue is psychological and always relative to the current situation. Not always, of course, but sometimes. I usually attribute it to OCD fixated on a fear of death without “legacy.”

5

nxqv t1_je006pc wrote

Yeah "legacy" is another one of those ego-loaded words that doesn't always mean what it looks like it means.

0

Impallion t1_je07in1 wrote

I completely agree, and of the things that u/nxqv listed, I think impact is the one most everyday people want and fear they will no longer have, more so than fame, riches, clout, etc. It's totally natural to want the things you spend effort on to have impact.

Now what I'm more interested in is the question of how much impact is enough to make you feel satisfied, and I think this is where the FOMO starts to set in for people. People want to have a "large" impact - making company-wide differences, influencing large swaths of people. I think the fear is that in the face of a ChatGPT, your little model or little application can only reach a handful of others.

Extrapolate current trends and you might think: oh well, AI applications are just going to get bigger and bigger. Midjourney 5 or SuperChatGPT-12 will be so insanely capable that we will have no more use for human writing, human art, human music, or human programming. There will simply be no more room for my work to EVER have a big impact in the future. (Maybe this shift is also similar to how the scientific greats back in the day could discover big theories like Einstein's relativity, whereas nowadays you need to hyper-specialize in academia to produce results for your tiny corner.)

My solution is that we need to dig a little deeper. What does it mean to be human? What does it mean to live a good, meaningful life? If your answer is that a life worth living is one where you impact thousands or millions of humans, then yes, we might be shifting away from that possibility. But humans are built for connection, and I think we will need to look inwards and realize that we don't need to influence thousands to experience that connection. You can make a little model or application that affects hundreds. You can write a song just for your friends and family. You can paint a piece of art that just hangs on your wall and gets a single compliment. To me that is already human connection, and it is just as meaningful as making a large model that drives the next Google/Meta forward.

2

nxqv t1_je0cw14 wrote

>People want to have a "large" impact - making company-wide differences, influencing large swaths of people. I think the fear is that in the face of a ChatGPT, your little model or little application can only reach a handful of others.

Yes, it's this idea of wanting to make "as large an impact as possible" that I was starting to chip away at. A lot of people - myself often included - feel dismayed when we think about our work only impacting a tiny corner of the world. It feels like you're "settling for less." But when you finish that thought, it sounds more like "settling for less than what I'm capable of," which has a lot to unpack.

And for the record, I think it's okay to want to make a big splash to satisfy your own ego. I wasn't trying to say that it's immoral. I just think it's important to recognize that you're in that position and unpack how you got there. Mindfulness is the way to combat FOMO, as well as all sorts of other negative emotions.

>My solution is that we need to dig a little deeper. What does it mean to be human? What does it mean to live a good, meaningful life? If your answer is that a life worth living is one where you impact thousands or millions of humans, then yes, we might be shifting away from that possibility. But humans are built for connection, and I think we will need to look inwards and realize that we don't need to influence thousands to experience that connection. You can make a little model or application that affects hundreds. You can write a song just for your friends and family. You can paint a piece of art that just hangs on your wall and gets a single compliment. To me that is already human connection, and it is just as meaningful as making a large model that drives the next Google/Meta forward.

Yes yes yes.

2

nxqv t1_je00ks4 wrote

Also, don't lose sight of the forest because of a tree. We're talking about impact in the context of FOMO - if you feel that level of anxiety and urgency about potentially missing out on the ability to make an impact because others are already making the impact you want to make, it's more likely to be ego-driven than genuine altruism

1

ghostfaceschiller t1_jdyerkp wrote

> Yan LeCun

That dude is becoming straight-up unhinged on Twitter

24

spiritus_dei t1_jdz6pml wrote

If he's the standard of "success," then based on his Twitter, that's something you may want to reconsider. Jürgen Schmidhuber comes in a close second.

5

visarga t1_jdzu6az wrote

Let the critics critique; it's better to have an adversarial take on everything. When you take a survey, you get better calibration that way.

He's angry about the forced Galactica retraction, followed by ChatGPT's success. Both models had hallucination issues, but his model was not tolerated well by the public.

4

nxqv t1_jdyjvxe wrote

Yeah it's really somethin

3

Alternative_Staff431 t1_je5r9j7 wrote

I thought so too, but I actually genuinely appreciate what he says. His POV is valuable, and his recent posts aren't really that bad.

0

MootVerick t1_jdyj5x3 wrote

If AI can do research better than us, we are basically at the singularity.

13

spiritus_dei t1_jdz7rmz wrote

I think this is the best formulation of the question I've seen, "Can you imagine any job that a really bright human could do that a superintelligent synthetic AI couldn't do better?"

Everyone loves to default to the horse and buggy example and they always ignore the horse. Are programmers and researchers the blacksmiths or are they the horses?

It's at least 50/50 that we're all the horses. That doesn't mean that horses have no value, but we don't see horses doing the work they once did in every major city prior to their displacement by automobiles.

We also hear the familiar refrain: "AI will create all of these new jobs that none of us can imagine." Really? Jobs that superintelligent AIs won't be able to do? It's like a mixed metaphor - these two ideas are just not compatible.

Either they hit a brick wall with scaling, or we will all be dealing with a new paradigm where we either remain human (horses) or accept that participating in the new world means becoming a cyborg. I don't know if that's possible, but it may be the only path to "keep up," and it's no guarantee, since we'd have to convert biological matter to silicon.

And who wants to give up their humanity to basically become an AI? My guess is that the number of people willing to will shock me, if that ever becomes a possibility.

I'm fine with retirement and remaining an obsolete human, doing work that isn't required just for the fun of it. I don't play tennis because I'm going to play at Wimbledon or even beat anyone good - I play it because I enjoy it. I think that will be the barometer, if there isn't a hard limit on scaling.

This has been foretold decades ago by Hans Moravec and others. I didn't think it was possible in my lifetime until ChatGPT. I'm still processing it.

13

starfries t1_jdyx0xh wrote

I feel like Eliezer Yudkowsky proves that anyone can be Eliezer Yudkowsky - he went from a crazy guy with a Harry Potter fanfic and a blog to being mentioned in your post alongside those other two names.

5

sdmat t1_jdyyqwe wrote

Does it? How many other fanfic writer -> well-known researcher trajectories come to mind?

5

starfries t1_jdyz458 wrote

No, I mean that you don't need anything special or have to follow a conventional path.

1

sdmat t1_jdz0h51 wrote

I mean no personal offense, but it's strange to see someone generalizing from an extreme outlier in a machine learning sub.

5

starfries t1_jdz0q2b wrote

That's not what I meant, so no offense taken.

2

landongarrison t1_jdz5ao3 wrote

This was an incredibly well thought out comment. Should be at the top.

2