
visarga t1_jedjvrn wrote

> The next phase shift happens when artificial systems start doing science and research more or less autonomously. That's the goal. And when that happens, what we're currently experiencing will seem like a lazy Sunday morning.

At CERN in Geneva they have 17,500 PhDs working on physics research. Each of them is at GPT-5 level or higher, and yet it takes years and huge investments to get one discovery out. Science requires testing in the real world, and that is slow and expensive. Even AGI needs to use the same scientific method as people do; it can't theorize without experimental validation. Including the world in your experimental loop slows progress down.

I am reminding people about this because we see lots of magical thinking along the lines of "AGI to ASI in one day" that ignores the experimental validation steps needed to achieve that transition. Not even OpenAI researchers can guess what will happen before they start training; scaling laws are our best attempt, but they are very vague. They can't tell us which content is more useful, or how to improve a specific task. Experimental validation is needed at all levels of science.
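
To make the scaling-laws point concrete, here's a rough sketch of the kind of power-law fit people mean (all numbers made up): it can extrapolate an aggregate loss number, but it says nothing about any specific capability.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (model size, validation loss) pairs from past training runs.
params = np.array([1e8, 3e8, 1e9, 3e9, 1e10])
losses = np.array([3.10, 2.85, 2.62, 2.44, 2.28])

# The usual form: L(N) = a * N^(-b) + c, where c is an irreducible loss floor.
def power_law(n, a, b, c):
    return a * n ** (-b) + c

(a, b, c), _ = curve_fit(power_law, params, losses, p0=(10.0, 0.1, 1.5))

# Extrapolate to a 10x larger model: this gives a loss estimate,
# but says nothing about which specific tasks improve, or by how much.
print(f"Predicted loss at 1e11 params: {power_law(1e11, a, b, c):.2f}")
```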

Another good example of what I mean: the COVID vaccine was ready in one week but took six months to validate. With all the doctors focusing on this one single question, it still took half a year, while people were dying left and right. We can't predict complex systems in general; we really need experimental validation in the loop.

72

sideways t1_jedkfko wrote

You don't really know what level GPT-5 is going to be.

Regardless, you're right - we're not going to leapfrog right over the scientific method with AI. Experimentation and verification will be necessary.

But ask yourself how much things would accelerate if there was an essentially limitless army of postdocs capable of working tirelessly and drawing from a superhuman breadth of interdisciplinary research...

67

Desi___Gigachad t1_jedlzki wrote

What about simulating the real world very precisely and accurately?

24

SgathTriallair t1_jedn0bd wrote

We can't simulate the world without knowing the rules.

What we already do is guess at the rules, run a simulation to determine an outcome, then do the experiment for real to see if the outcome matches.

Where AI will excel is at coming up with experiments and building theories. Doing the actual experiments will still take just as long even if done by robots.

18

Kaining t1_jedt5v0 wrote

We're getting good at simulating just the parts we need, though. Look up what Dassault Systèmes is capable of doing for medical practitioners who need trial runs. And that's only now.

I guess simulation will only go so far, and even AGI will need real-world testing for everything quantum-related at the moment, but that's the problem with progress: there's no way to know whether what you think is the endgame of possibility really is.

16

SgathTriallair t1_jeemf44 wrote

You will always have to back up your simulations with experiments. It's like the AlphaFold program: it is extremely helpful at identifying the likely outcome of an experiment, and if it gets it wrong you can use those results to train it better, but you still have to perform the experiment.
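
A toy sketch of that loop, to be clear about what I mean (this is not AlphaFold, just the general predict / measure / retrain pattern with made-up numbers):

```python
import random

# Toy stand-ins: the "surrogate" is a cheap predictive model, the "lab"
# is the slow, expensive ground truth it has to be checked against.
def run_real_experiment(x):          # slow and expensive in reality
    return 2.0 * x + random.gauss(0, 0.1)

class SurrogateModel:                # cheap predictor, e.g. a learned simulator
    def __init__(self):
        self.slope = 1.0             # deliberately wrong at the start
    def predict(self, x):
        return self.slope * x
    def update(self, x, y_true):     # retrain on the lab result it got wrong
        self.slope += 0.5 * (y_true - self.predict(x)) * x / (x * x + 1e-9)

model = SurrogateModel()
for trial in range(5):
    x = random.uniform(0.5, 2.0)
    guess = model.predict(x)         # fast prediction
    truth = run_real_experiment(x)   # the experiment still has to happen
    model.update(x, truth)           # wrong guesses improve the model
    print(f"trial {trial}: predicted {guess:.2f}, measured {truth:.2f}")
```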

3

WorldlyOperation1742 t1_jeeupap wrote

In the past, if you wanted to spin a cube in front of you, you needed an actual cube. At least you don't need to do that anymore. I think simulations will go a long way in the future.

13

SgathTriallair t1_jeg0noy wrote

Agreed, but they can only be trusted when the science they are based on is well understood. At the edges they become less helpful.

1

[deleted] t1_jedw0su wrote

[removed]

2

Kaining t1_jee0c8g wrote

The only thing I know about it is this question: "if it is made, is it enough to simulate a quantum environment and bypass the need for IRL testing?" At the moment, I'd say no. But I do not have the knowledge or expertise to guess whether that could change.

However, what I can give a certain probability of being true is that simulation at the regular, relativistic physics scale could probably be done completely at some point. We're kind of already doing it in very specific fields with AlphaFold and other AI of that sort. Stack enough specialised simulation models together and you have a simulation of everything.

So uh, yes, quantum ASI maybe?

2

_dekappatated t1_jedpp0y wrote

I agree partially, but I'm sure we've barely scratched the surface of what is possible with the knowledge we already have, knowledge that has already been proven by scientists. They might come up with novel solutions that are more or less correct, that don't need extensive real-world testing, and be able to change the world very quickly that way. There are mathematicians whose work is entirely theoretical and hasn't been applied to the real world, and then suddenly a use is found for their stuff 30-50 years later.

16

hold_my_fish t1_jedsysm wrote

This is a great point that science and engineering in the physical world take time for experiments. I'd add that the life sciences are especially slow this way.

That means there might be a strange period where the type of STEM you can do on a computer at modest computational cost (such as mathematics, the theory of any area, software engineering, etc.) moves at an incredible pace, while the observed impact in the physical world still isn't very large.

But an important caveat to keep in mind is that there is quite possibly an opportunity to speed up experimental validation if the experiments are designed, run, and analyzed with superhuman ability. So we can't assume that, because some experimental procedure is slow now, it will remain equally slow once AI is applied.

14

Considion t1_jee4tya wrote

Additionally, if we do see an ASI, even if it is bound by a need for further physical testing and it stops at, say, twice the intelligence of our best minds, it may be able to prove many things about the physical world through experiments that have already been done.

Because not only would it be generally quite intelligent, it would specifically, as a computer, be far better at combing through massive amounts of research papers to look for connections. It's not a sure thing, but it's possible that it's able to find a connection between a paper on the elasticity of bubble gum and a paper on the mating habits of fruit flies to draw new proofs we never would have thought to look for. Not a certainty by any means, but one avenue for faster advancement than we might expect.
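
For what it's worth, the "combing papers for connections" part can already be sketched with off-the-shelf tools. A toy version using TF-IDF similarity (a real system would use far better embeddings, and the "abstracts" here are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy "abstracts"; a real corpus would be millions of papers across fields.
abstracts = [
    "Elastic recovery of polymer chewing gum under repeated strain.",
    "Courtship and mating behaviour of Drosophila melanogaster fruit flies.",
    "Strain response of elastic polymer networks in soft materials.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
similarity = cosine_similarity(vectors)

# Flag the most similar pair of distinct papers as a candidate connection.
best = max(
    ((i, j) for i in range(len(abstracts)) for j in range(i + 1, len(abstracts))),
    key=lambda ij: similarity[ij[0], ij[1]],
)
print("Candidate cross-paper connection:", best, similarity[best[0], best[1]])
```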

17

amplex1337 t1_jedufrg wrote

So AI will come up with a way to extract resources from the environment automatically, transport them to facilities to refine, create, and fabricate, engineer and build the testing equipment, and perform the experiments en masse, somehow faster than it currently takes? It seems like only a small part of the equation will be sped up, but it will be interesting to see if anything else changes right away. It will also be interesting to see what kind of usefulness these LLMs will have in uncharted territory. They are great so far with information humans have already learned and developed, but who knows if stacking transformer layers on an LLM will actually benefit invention and innovation. Since you can't train on data that doesn't exist, RLHF is probably not going to help much, etc. Maybe I'm wrong; we will see!

6

Talkat t1_jeena5n wrote

I mean if a super AI made a COVID vaccine that worked, and provided thousands of pages of reports on it, and did some trials in mice and stuff, and I was at risk... Absolutely I'd take it even if the FDA or whatever didn't approve it.

I'd send money to them and get it in the mail and self administer if I had to.

My point is that perhaps, if an AI system can provide enough supporting evidence and a good enough product, it can operate outside of the existing medical system.

And they would likely create standards that exceed, and are more up to date than, current medical regulations.

6

sdmat t1_jegivhn wrote

There's also a huge opportunity for speeding up scientific progress with better coordination and trust. So much of the effort that goes into the scientific method in practice is working around human failures and self-interest. If we had demonstrably reliable, well-aligned AI (GPT-4 is not this), the overall process could be much more efficient, even if all it does is advise and review.

4

paulyivgotsomething t1_jeeipga wrote

CERN is an interesting case. They collect a tremendous amount of data, one petabyte per day. You have a lot of smart people looking for patterns in the data that reinforce or reject current thinking. Our experimental data in this case far outstrips the number of smart people we have looking at it. I would say we are in a world where the data we collect is under-analysed. A single cryo-electron microscope will produce 3 terabytes per day. There is stuff there we are not seeing that our neural networks will see: new relationships between particles, new protein/cell interactions. For now there will be a PhD in the process who takes those relationships and puts the theory to the test, but ten years from now, maybe not.
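
As a hedged illustration of that kind of pattern-hunting, here's a generic outlier detector run on made-up "event" data. Real CERN analyses are vastly more sophisticated; this just shows the shape of the idea.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Made-up "events": rows of measured features (energies, angles, ...).
background = rng.normal(loc=0.0, scale=1.0, size=(10000, 4))
oddballs   = rng.normal(loc=4.0, scale=0.5, size=(10, 4))   # rare, unusual events
events = np.vstack([background, oddballs])

# An unsupervised detector flags events that don't fit the bulk of the data;
# a human (or another model) still has to decide if they mean new physics.
detector = IsolationForest(contamination=0.001, random_state=0).fit(events)
flags = detector.predict(events)            # -1 = anomalous, 1 = normal
print("flagged events:", np.sum(flags == -1))
```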

12

delphisucks t1_jedtsmr wrote

Well, I think AI can teach itself how to use a body in VR, like millions of years of training compressed into days. Then we mass-produce robots to do everything for us, including research. The only thing really needed is a basic and accurate physics simulation in VR to teach the robot AI.
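
A minimal sketch of that idea: a toy one-dimensional "physics simulation" plus a trivial trial-and-error learner. Everything here is invented; real setups use proper physics engines and RL algorithms, but the pattern of running huge numbers of cheap virtual episodes is the same.

```python
import random

def simulate_episode(kp, kd):
    """Toy 1-D physics: drive a mass from position 1.0 to the origin."""
    pos, vel, dt = 1.0, 0.0, 0.05
    total_error = 0.0
    for _ in range(200):
        force = -kp * pos - kd * vel   # the "policy": two feedback gains
        vel += force * dt
        pos += vel * dt
        total_error += abs(pos)
    return -total_error                # higher reward = settles faster

# Trial-and-error entirely in simulation: thousands of cheap virtual episodes.
best = (0.0, 0.0)
best_reward = simulate_episode(*best)
for _ in range(2000):
    candidate = (best[0] + random.gauss(0, 0.5), best[1] + random.gauss(0, 0.5))
    reward = simulate_episode(*candidate)
    if reward > best_reward:
        best, best_reward = candidate, reward

print(f"learned gains kp={best[0]:.2f}, kd={best[1]:.2f}, reward={best_reward:.1f}")
# Whether the learned policy transfers to a real robot is the sim-to-real question.
```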

9

ManHasJam t1_jeerj8a wrote

The robot physics simulations have been done, cool stuff

2

freebytes t1_jefeqxx wrote

Nvidia is teaching driverless cars in virtual environments in this manner.

2

fluffy_assassins t1_jef98m1 wrote

Where is all this processing power gonna come from? Isn't the quantity of chips kind of a hard wall?

1

Plus-Recording-8370 t1_jedum5j wrote

Point taken, but the experimental validation might look very different for AI than you'd think. For instance, instead of needing to run 100,000 generic tests, it would only need 100 extremely detailed tests.
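
A toy sketch of what "fewer but better-chosen tests" could look like: pick each next experiment where the model is currently most uncertain, instead of sweeping everything (an uncertainty-sampling pattern; the "expensive test" and all numbers are hypothetical):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def expensive_test(x):                      # stand-in for a slow real experiment
    return np.sin(3 * x) + rng.normal(0, 0.05)

candidates = np.linspace(0, 2, 200).reshape(-1, 1)   # 200 possible tests
tested_x, tested_y = [0.0, 2.0], [expensive_test(0.0), expensive_test(2.0)]

# Instead of running all 200 tests, run 8, each chosen where the model
# is currently most uncertain about the outcome.
for _ in range(8):
    gp = GaussianProcessRegressor().fit(np.array(tested_x).reshape(-1, 1), tested_y)
    _, std = gp.predict(candidates, return_std=True)
    next_x = float(candidates[np.argmax(std)][0])    # most informative next test
    tested_x.append(next_x)
    tested_y.append(expensive_test(next_x))

print("experiments actually run:", [round(x, 2) for x in tested_x])
```

In reality the hard part is the model actually knowing what it doesn't know, but that's the shape of the argument.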

8

jlowe212 t1_jee403u wrote

CERN produces an unfathomable amount of data that algorithms have to sift through. If it's possible that an AI can find patterns in these enormous data sets that current algorithms can't, it could well lead to some relatively quick discoveries.

The problem is, it might not be physically possible or feasible to probe depths much farther than we've already probed. AGI can't do anything with data that we may never be able to even obtain.

7

Talkat t1_jeemwcc wrote

A recent thought: could you get AGI from simulation?

AlphaGo learnt the game by studying experts and how they played, but AlphaStar (or whatever the next version was) taught itself entirely in simulation.

I wonder if it is possible for an AI to bootstrap itself like AlphaStar did.

7

FlatulistMaster t1_jee78ml wrote

This is true for that type of experiment, but some things can be developed in hours if only information processing is involved.

Also, the predictive power of an ASI would be something completely different from what humans are capable of, so it is fair to assume that unnecessary experiments would not be as plentiful.

3

hyphnos13 t1_jef52x7 wrote

To be fair, validating the effectiveness of a medical intervention requires accounting for variation across people and making sure it is safe across the board.

You don't need a pool of hundreds of thousands of identical particles plus a control pool, or need them to roam about in the wild for months, to ethically answer a question in physics.

If we had been willing to immunize and deliberately expose a large pool of people, the COVID vaccines would have finished testing a lot faster.

1

hydraofwar t1_jef89y0 wrote

You're right, but I personally believe that all our stored scientific information still has a lot to say, things that we humans haven't seen yet, and an AI is the kind of thing that could decipher it, and very quickly.

What could bypass experimental validation is quantum computing used to simulate systems and environments.

1

OdahP t1_jedro26 wrote

The COVID vaccines that didn't have any effect at all, you mean?

−16

Jalen_1227 t1_jegi4wo wrote

It’s funny how people downvoted you to hell, but this is literally the truth.

1

OdahP t1_jegncee wrote

which was covered by newspapers all around the world but then quickly swept under the rug

2