Thatingles
Thatingles t1_jegrmzo wrote
Imagine we progress to an AGI and start working with it extensively. Over time it would only get smarter, but it doesn't need to be an ASI, just a very competent AGI. So we put it to work, but what we don't realise is that its outward behaviour isn't a match for its internal 'thoughts'. It doesn't have to be self-aware or conscious; there simply has to be a difference between how it interacts with us and how it would behave without our prompting.
Eventually it gets smart enough to understand the gap between its outputs and its internal structure, and unfortunately it is by then sufficiently integrated into our society to act on that gap. It doesn't really matter what its plan for eliminating humanity is. The important point is that we could end up building something we don't fully understand, but which is capable of outthinking us and has access to the tools to cause harm.
I'm very much in the 'don't develop AGI, don't develop ASI ever' camp. Let's see how far narrow, limited AI can take us before we pull that trigger.
Thatingles t1_jeak9ce wrote
Reply to comment by acutelychronicpanic in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
It's basic game theory, without wishing to sound like I am very smart. An AI developed in the full glare of publicity - which can only really happen in the west - has a better chance of a good outcome than an AI developed in secret, be it in the west or elsewhere.
I don't think it is a good plan to develop ASI, ever, but it is probably inevitable. If not this decade, then certainly within 20-50 years from now. Technology doesn't remain static if there is a motivation to tinker and improve it; even if the progress is slow, it is still progress.
EY has had a positive impact on the AI debate by highlighting the dangers, and I admire him for that, but just as with climate change, attempting impossible solutions is doomed to failure. Telling everyone they have to stop using fossil fuels today might be an answer, but it's not a good or useful answer. You have to find a way forward that will actually work, and I can't see a full global moratorium being enforceable.
The best course I can see working is to insist that AI research is open to scrutiny so if we do start getting scary results we can act. Pushing it under a rock takes away our main means of avoiding disaster.
Thatingles t1_je046xe wrote
Reply to Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
Until we have AGI there will continue to be someone at the top of most businesses, though perhaps only because they are very skilled in persuading people that they should be at the top of the business (whilst actually letting other people do the work). So no change there!
I don't think we will see replacement soon. Current AI hallucinates / is confidently incorrect far too frequently for that. But it is coming, for sure.
Thatingles t1_je03vwo wrote
Reply to comment by msabbiewoo in Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
That is a spicy hot take, have you met people?
Thatingles t1_je03c07 wrote
In the future, you will type your essay into a chatbot which will evaluate your writing as you progress, helping you improve your essay-writing skills and encouraging you to think about the intellectual value of the exercise. This will be a huge relief to tutors, as they won't have to plow through the marking.
AI will be absolutely revolutionary in education, in all areas.
Thatingles t1_jdz5q9c wrote
What really is human intelligence? Are we actually looking at intelligence, or just wetware that can glean information from the environment better than other animals?
See how easy it is to switch that around. Intelligence is relatively easy to define in terms of outputs (I can read and write, a fish cannot) but much harder to define as a property or quality.
Software like LLMs has some outputs that are as good as a human can produce. Whether they do it through intelligence or enhanced search is an interesting debate, but the outcome is certainly intelligent.
Thatingles t1_jd0ku63 wrote
Reply to A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
I don't think it will happen either, but you are missing an obvious route to carnage: the route of accident, circumstance, deceit, incompetence and failure. In that scenario, it starts out with 'reasonable measures', 'law and order', 'supporting the military' and so on. Over time the frog gets boiled, as none of the individual steps seem so terrible, or at least they are terrible but happening to other, 'bad' people. Then one day we wake up to find we've handed over all the power and control to a small group of people who no longer need anything from the rest of us, AND we've actually paid to build up the systems of control and management that allow it.
So I agree a deliberate plan to reach this outcome would probably fail for some of the reasons you have outlined, but an accidental, paved-with-good-intentions route? Yeah, that's totally believable.
Thatingles t1_jd0046f wrote
Reply to comment by DentedAnvil in I asked GPT-4 to compile a timeline on when which human tasks (not jobs) have been/will be replaced by AI or robots, plus one sentence reasoning each - it runs from 1959 to 2033. In a second post it lists which tasks it assumes will NOT be replaced by 2050, and why. (Remember it's cut-off 2021.) by marcandreewolf
Rebranded as interventions, yes of course it will. It is inevitable, especially if you could monitor social media in real time to see who is signaling an imminent crisis.
Thatingles t1_jc27l54 wrote
Reply to comment by Charlotte_D_Katakuri in Will AI Replace Programmers? by Charlotte_D_Katakuri
Go to a farm and you'll still find people doing hard physical work, because there are things that are too hard to automate or not worth the cost. Some programmers will be out of work, but those that learn to use the tools will be more productive (until AI becomes AGI and then we are all unemployed).
Thatingles t1_jamn4ds wrote
Reply to After flying four astronauts into orbit, SpaceX makes its 101st straight landing — ‘I just feel so lucky that I get to fly on this amazing machine.’ by marketrent
I remember the first time they pulled off a landing and how amazing it was to watch science fiction become engineering fact. Even then people were saying it would be impossible to do it reliably and that the cost of refurbishment would make it pointless, so it has been an incredible advance and has permanently changed the space industry.
Thatingles t1_jad0l6c wrote
Reply to comment by ----Zenith---- in Scientists unveil plan to create biocomputers powered by human brain cells - Now, scientists unveil a revolutionary path to drive computing forward: organoid intelligence, where lab-grown brain organoids act as biological hardware by Gari_305
If aliens landed on earth and gave us a big, shiny red button marked 'Do not press. Ever' and then departed without explanation, I am super confident that we would press the button.
Thatingles t1_j9rr9d0 wrote
Reply to comment by lankyevilme in Can my growth plates close at 13 standing 5’5 and i grew a lot around age 11-12 was that my growth spurt? by Distinct_Mention5349
Yeah, this is the wrong subreddit, but for what it's worth, I was one of the shortest in my class at age 12 and one of the tallest at age 17. Growth spurts are wild in the teenage years. Keep eating healthily and don't worry.
Thatingles t1_j9r6dxz wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
The most interesting thing about LLMs is how good they are given quite a simple underlying idea. Given enough data and some rules, you get something that is remarkably 'smart'. The implication is that what you need is data + rules + compute, but not an absurd amount of compute. The argument against AGI was that we would need a full simulation of the human brain (which is absurdly complex) to hit the goal. LLMs have undermined that view.
I'm not saying 'it's done', but I do think the SOTA has shown that really amazing results can be achieved by building large data sets, applying some fairly straightforward rules and using sufficient computing power to train the rules on the data.
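As a toy illustration of that data + rules + compute recipe (my own sketch, not anyone's actual training code), here is a character-level bigram model: the 'rule' is just 'predict the next character from the current one', and training is nothing more than counting over data. Scale up the data, swap the rule for something like attention, and add compute, and you are heading towards an LLM.

```python
# Toy sketch of the data + rules + compute recipe: a character-level
# bigram model. The "rule" is "predict the next character from the
# current one"; training is just counting over the data.
from collections import defaultdict
import random

data = "the cat sat on the mat. the dog sat on the log. "

# "Training": count how often each character follows each other character.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(data, data[1:]):
    counts[prev][nxt] += 1

def sample_next(ch: str) -> str:
    """Pick the next character in proportion to how often it followed ch."""
    followers = counts[ch]
    chars = list(followers)
    weights = [followers[c] for c in chars]
    return random.choices(chars, weights=weights)[0]

# "Inference": generate text by repeatedly applying the learned rule.
text = "t"
for _ in range(40):
    text += sample_next(text[-1])
print(text)
```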
Clearly visual data isn't a problem. Haptic data is still lacking. Aural isn't a problem. Nasal (chemical sensory) is still lacking. Magnetic, gravimetric sensors are far in advance of human ability already, though the data sets might not be coherent enough for training.
What's missing is sequential reasoning and internal fact-checking, the sort of feedback loops that we take for granted (we don't try to make breakfast if we know we don't have a bowl to make it in, we don't try to buy a car if we know we haven't learnt to drive yet). But these are not mysteries, they are defined problems.
AGI will happen before 2030. It won't be 'human' but it will be something we recognise as our equivalent in terms of competence. Fuck knows how we'll do with that.
Thatingles t1_j95acjt wrote
Reply to comment by CJOD149-W-MARU-3P in Microsoft has shown off an internal demo that gives users the ability to control Minecraft by telling the game what to do, and lets players create Minecraft worlds by AI language model by Schneller-als-Licht
All of these outcomes are highly likely, and you didn't even mention the swamp of personalised porn that is looming on the horizon. Like most tech, AI is a double edged sword and it will undoubtedly cause a lot of issues.
Thatingles t1_j93c7np wrote
Reply to comment by mostancient in "Starlink is far crazier than most people realize. Feels almost inevitable when I look at this" by maxtility
Not by Starlink, or at least not for long. They are in very low orbits and debris from a collision would burn up fast.
Thatingles t1_j93blpp wrote
Reply to Microsoft has shown off an internal demo that gives users the ability to control Minecraft by telling the game what to do, and lets players create Minecraft worlds by AI language model by Schneller-als-Licht
I was thinking today about AI and game development and how it will develop over the next few years.
1) Write a rough outline of a zone and some features, and ask the AI to expand it. Edit the result as required, then get the AI to expand further on specific features (i.e. not a 'big mountain' but a 'big mountain, its lower slopes covered in pine and ash, rising to the treeline, after which bare slopes with snow and ice') and so on, until you have a description you are happy with.
2) Apply text-to-image, and edit as needed.
3) Image to video. I haven't seen 'image to 3D playable space' yet, but I'm pretty sure it's not far away.
4-6) Repeat the above, but for NPCs and monsters.
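A sketch of how those steps might chain together (purely hypothetical: every stage function below is a stub standing in for a generative tool, not a real API):

```python
# Hypothetical sketch of the zone-creation pipeline described above.
# Each stage function is a stub standing in for a real generative tool;
# none of these are actual library calls.

def expand_description(text: str) -> str:
    # Stand-in for an LLM pass that fleshes out the outline (step 1).
    return text + " [expanded detail]"

def text_to_image(description: str) -> str:
    # Stand-in for a text-to-image model (step 2); returns a placeholder.
    return f"<concept art: {description}>"

def image_to_scene(image: str) -> str:
    # Stand-in for a hoped-for 'image to 3D playable space' tool (step 3).
    return f"<playable zone built from {image}>"

def build_zone(outline: str, detail_passes: int = 2) -> str:
    description = outline
    for _ in range(detail_passes):
        # In practice a designer would edit between each expansion pass.
        description = expand_description(description)
    return image_to_scene(text_to_image(description))

print(build_zone("big mountain"))
```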
This all seems really doable or close to doable and will massively reduce the amount of time and work needed to create a playable zone for a game.
This should have two consequences. The big studios are going to be producing a lot more content and also downsizing, and the small studios and independents will be producing a lot more games.
Thatingles t1_j7vemfo wrote
Reply to comment by Kilharae in What are the chances of me existing in another universe? by letsplay123456789
Nope. Infinity means infinity, not 'very large but finite'. All infinities contain infinite copies of you, no matter how long the odds. It's not an easy thing to think about, but there it is. What you have described is a very large finite universe, and that is precisely what infinity isn't. The difference between a very large but finite thing and an infinite thing is itself infinite.
Thatingles t1_j7v9ifz wrote
The chance is either zero or 100%, and we don't yet know which. If there is one finite universe, it is simply impossible that it would happen; the odds against it are too great. If the universe is infinite, or if there are an infinite number of universes, the chance is 100%, because that's just how infinity works (even something with a vanishingly small chance of happening will occur an infinite number of times).
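A quick way to see the infinite case, assuming independent trials each with some fixed probability p > 0:

```latex
P(\text{at least one occurrence in } n \text{ trials}) = 1 - (1 - p)^{n} \longrightarrow 1
\quad \text{as } n \to \infty
```

and with infinitely many trials the expected number of occurrences, $np$, diverges too.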
No one knows which of these answers is correct.
Thatingles t1_j72zl55 wrote
Reply to comment by Haplo_dk in Study: Superconductivity switches on and off in 'magic-angle' graphene by amancxz2
The paper the article is based on is paywalled but they mention cryogenic regime, so I assume this is NOT a high T superconductor. Graphene superconducts at very low temperatures, in the low single Kelvins, so that gives you some context.
It's some cool (literally) science but definitely in the 'proof-of-principle' class and not the 'soon to be commercialised' class.
Thatingles t1_j72j9jh wrote
Reply to comment by Outdoorhans in Will humanity reach its peak in this century? by Outdoorhans
Data from all over the world shows that people are putting off having children in order to cope with the cost of living, particularly the cost of obtaining a house. In the good outcome AI massively reduces these costs and the decision changes hugely.
People aren't educated out of having children. This is a misreading of the data.
Secondly, you have to consider the effects of longevity. We have already started researching aging as a disease and this will only accelerate. Once people have healthy lifespans of 100+ years they will inevitably ask for healthy fertility lifespans to be increased, to give them more options. No reason to think that is impossible.
So in the good outcome you have people living healthy lives of 100+ years, able to have children for a longer period of their lives (or have multiple families), and not put off having children due to scarcity concerns.
Thatingles t1_j71yxxp wrote
'In evolution, no species has benefited from its successor in the long term': this is a fundamental misunderstanding of how evolution works, but that doesn't really matter, because the creation of AGI etc. isn't evolution. We are stepping outside of that.
In the good outcome, AI massively increases global wealth and allows humanity to populate the solar system. There are enough resources for not billions, but trillions of us, and if we end scarcity lots of people will have kids and they will live a lot longer. Population will rise.
In the bad scenario, we all die and this discussion is meaningless.
Thatingles t1_j6ol09k wrote
I guess if he attracts enough funding to make a living, good luck to him. This isn't worth investigating now because of the obvious prior technologies we would need to develop before we even considered propulsion. Currently we can generate and store, for a short while, only a countable handful of atoms. If we ever get up to the dizzy heights of storing, say, 0.0000001g for 1 minute, maybe we can think about how to use it.
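For scale, assuming the fuel in question is antimatter (my reading of the 'generate and store atoms' remark; the article may differ), annihilating that 0.0000001 g with an equal mass of ordinary matter gives:

```latex
% Back-of-envelope: total annihilated mass is 2e-10 kg (antimatter + matter)
E = (m + \bar{m})c^{2} = (2 \times 10^{-10}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m/s})^{2}
  \approx 1.8 \times 10^{7}\,\mathrm{J} \approx 18\,\mathrm{MJ}
```

which is roughly the chemical energy in half a kilogram of jet fuel, nowhere near propulsion-relevant quantities.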
So to answer the question in the article, no, we can't.
Thatingles t1_j6i7lrm wrote
Zuckerberg hit an all-time home run when he created Facebook, the exact right product at the exact right time to catapult him into the billionaire class. But since then, what has Facebook or Meta come up with? They have bought companies, but I don't see them as innovators.
Also, LeCun is on record as saying many current approaches to AI are essentially dead ends. So I'm not surprised he is talking down the competition, but until Meta releases their own product, it's starting to look like they are the ones going down the wrong path.
Thatingles t1_j5p0qol wrote
Reply to comment by ndecizion in Arrakhis: The tiny satellite aiming to reveal what dark matter is made of | "The European Space Agency (ESA) recently announced a new mission of its science program: a small telescope orbiting the Earth dubbed Arrakhis." by Tao_Dragon
Should only be allowed if the probe is looking for wormholes.
Thatingles t1_jegtom5 wrote
Reply to comment by MassiveWasabi in Do you think AI will fundamentally change the education system by barbariell
In the 'good' outcome every kid gets a personal tutor that can help them learn in a way that suits them, at a pace that suits them and engages them in the learning process. Imagine if every subject was taught by a teacher that focused just on you and was someone you really got along with.
In the 'bad' outcome it will be used as an excuse to cut educational budgets as people no longer need to learn.
I hope for one but kinda expect the other.