Unfocusedbrain

Unfocusedbrain t1_je9uenn wrote

If an AGI were to emerge in such a facility, would it not have easier access to the numerous other 'accelerators' (really GPUs and CPUs) present there? Considering that an AGI might require only 10-1000 accelerators, the availability of 100,000 would potentially enable a rapid transition from AGI to ASI.

8

Unfocusedbrain t1_je5fy87 wrote

Exactly. The AI field seems to be the last holdout of the old Silicon Valley mindset of full-steam-ahead innovation, consequences be damned. Companies like Google (which have calcified) are too slow to react to these changes, whereas OpenAI, Microsoft, and countless startups have read the writing on the wall.

Adapt and progress, or die.

5

Unfocusedbrain t1_jcbz5p3 wrote

I must say that it might be challenging, if not impossible, to prevent the Redditification of this subreddit. I have been on Reddit for a decade and was part of r/Futurology when it first started. The moment r/Futurology became a default subreddit, it was flooded with individuals who lacked self-awareness, were overconfident, and often confidently incorrect.

Futurology is about looking toward the future with wonder and excitement, but the average person does not share this perspective. Most people are preoccupied with their own lives, focused on immediate survival, and often lack broader aspirations or views. When people joined r/Futurology post-default, they often didn't come to discuss but instead to force their perspectives and opinions onto the community, frequently acting in bad faith.

Singularity, futurology, optimism, and forward-thinking are not mutually exclusive; in fact, they are synergistic. Accelerating returns suggest we are moving toward progress, but the human element does not always keep pace. Society, sociology, ethics, philosophy, reason, politics, and many other 'human' domains tend to progress linearly, much like the human mind.

There is often a lag time for culture and other factors to catch up with exponential growth in any area. A population increase leads to a lag time in food production to sustain that population. Similarly, an exponential increase in technology can result in a lag time in cultural adaptation.

No matter what, the average person may struggle to grasp singularitarian concepts. If the floodgates open on this subreddit, those who do not understand these ideas will bring their biases and fears, potentially causing permanent disruption to the community.

While I'm optimistic about the future and technological progress, I have always been cautious about people's reactions to it. I don't know what the solution is or how to be proactive in preventing this subreddit from ending up like r/Futurology—but it is essential to be aware of these challenges and strive to foster a supportive and forward-thinking environment, even in the face of insurmountable odds.

52

Unfocusedbrain t1_jaebvqz wrote

I agree with you and I apologize if it seemed like I was implying you were giving a deadline for AGI. That was not my intention. I just liked your realistic perspective on AI progress, instead of the “AGI is < 10 years away! Can’t wait!” hype that some people have.

And yes, there will be a huge change on the web soon, similar to the iPhone and social media revolution in 2008. It’s not only Google and Microsoft - many other companies are working on LLM-enhanced search engines. We don’t know how that will affect the world, but I think it will speed up AGI research and make the world even more different than before and after social media & smartphones.

9

Unfocusedbrain t1_jae7e6m wrote

I don't know if "realistic" would be the appropriate word for this, since we don't know what -will- happen in the next 5-10 years. Though this is probably the most reasonable view of AI from anyone who has posted on this board yet.

Anyone who's been on the internet since the very beginning understands how (paradoxically) drastic, yet invisible, the creeping change on the internet has been. Sometimes I have to step back from everything just for the question 'how did things change so drastically? what the fuck happened?' to come into my head.

The same thing is happening with AI. People who understand concepts like the singularity notice these changes, but laymen who are focused on their daily struggles and routines won't notice anything but the useful tools and entertainment available to them.

I would wager that within half a decade a multi-modal proto-AGI will be available that can do all the cognitive tasks a human can do, at least at acceptable (but not necessarily extraordinary) levels. Not within a year; that's bonkers.

25

Unfocusedbrain t1_j8x9hhy wrote

Reply to comment by [deleted] in Microsoft Killed Bing by Neurogence

That is lobotomization for you, correct? I know you are being helpful by giving your perspective, so thank you for your definition.

I hope to hear from OP since they write like they got kicked in the balls.

1

Unfocusedbrain t1_j8fcuq2 wrote

> Also I don't think it will be complete utopia but definitely way cooler than our society is. More vitality/thought/energy, less of a doomer/malthusian vibe

I believe the same. After a certain point, the only currencies become energy, space, and matter. There isn't an infinite amount of them, but for human purposes there might as well be, so it will feel like a complete utopia/communist paradise. If AI can build anything given enough matter and energy, and can make any place habitable, that eliminates currency except in really extreme scenarios.

I think at the macro level there will be questions of "Who pays the cosmic water bill, energy bill, and rent?" By that point it would be in the hands of AI systems so far advanced that they can manage those concerns without issue.

11

Unfocusedbrain t1_j7ssxdk wrote

True enough that malware is possible without ChatGPT, my snarky commenter. I'm more concerned with script kiddies being able to mass-produce polymorphic malware that makes mitigation cumbersome, with very little effort or investment on the creator's part.

Hackers have the advantage of anonymity, so it becomes incredibly difficult to stop them proactively. This just makes it worse.

But that wasn't my point, my bad-faith chum, and you know that very well. Your posting history makes it really clear you have a vested interest in ChatGPT being as unfettered as possible. So I don't think you and I can have a neutral discussion about this in the first place. Nor would you want one.

1

Unfocusedbrain t1_j7qys9x wrote

That's true enough. Considering people have died following GPS directions, of all things, yeah, it's a non-negligible issue.

The more concerning issue is bad-faith actors and malicious agents. There are already examples of people using other AI software maliciously; too many to list.

For ChatGPT, there is the example of cybersecurity researchers using it to make malware even with its filters in place. They were acting in good faith, too, but that also means people with less academic pursuits could use it for similar, malicious ends.

−1

Unfocusedbrain t1_j7qvyof wrote

> In my opinion, the most ethical answer is to let people decide for themselves where their own line is. This technology isn’t limited by the one-size-fits-all approach that we’re used to, each person can have their own tailored product that doesn’t impose on anybody else’s.

That is a fine opinion and I agree, but it implies a world with infinite resources and manpower. It implies that humanity has reached a state responsible enough, and accountable enough, to use this technology unfettered. We haven't proved, on any level, that we deserve this technology. Need it? Yeah, absolutely: there are too many problems it would solve. But have we earned it through our moral and ethical actions? Absolutely not.

That's not to say we as humans need to be morally and ethically perfect. That's impossible, but we aren't even within striking distance of 'good enough'. Even if we wanted to let people use this technology unfettered, we don't let people do that with their own lives. Good or bad.

"To each their own" is something I subscribe to, but holy hell can people get up to some terrible things if left to their own devices. There are too many bad-faith actors and malicious agents around.

Ultimately we do need safeguards, as loath as some people in the singularity community are to admit it. The fact that most of us are terrified of these corporations and/or powerful groups having control over this technology just backs up my whole point. We are discussing whether they are ethically, morally, and intellectually fit enough to own this technology. How can we settle that when they are only a reflection of us humans and the hierarchical systems we have naturally created over time? What does that say about us as a species?

How can we say that complete liberation-esque democratization of this technology would be ANY better?

If we, as a species, were more ethical and moral, this wouldn't even be a discussion.

3

Unfocusedbrain t1_j7q2zwi wrote

I suspect they'll keep lowering the filters and censorship until they find a sweet spot. Humanity as a whole is - unfortunately - not morally, ethically, or intellectually mature enough to handle an oracle that can answer almost any question, good or bad.

I'm positive we'll reach that level one day, but not today. I still remember people's COVID 'cures' and the Tide Pod challenge.

15