
hold_my_fish t1_jedsysm wrote

This is a great point that science and engineering in the physical world take time for experiments. I'd add that the life sciences are especially slow this way.

That means there might be a strange period where the kinds of STEM you can do on a computer at modest computational cost (mathematics, theoretical work in any field, software engineering, etc.) move at an incredible pace, while the observed impact in the physical world still isn't very large.

But an important caveat to keep in mind is that there's quite possibly opportunity to speed up experimental validation if the experiments are designed, run, and analyzed with superhuman ability. So we can't necessarily assume that, because some experimental procedure is slow now, it will remain equally slow when AI is applied.

14

Considion t1_jee4tya wrote

Additionally, if we do see an ASI, even if it is bound by a need for further physical testing and stops at, say, twice the intelligence of our best minds, it may be able to prove many things about the physical world from experiments that have already been done.

Because not only would it be generally quite intelligent, it would specifically, as a computer, be far better at combing through massive numbers of research papers to look for connections. It might, for example, find a link between a paper on the elasticity of bubble gum and a paper on the mating habits of fruit flies and draw new conclusions we never would have thought to look for. Not a certainty by any means, but one avenue for faster advancement than we might expect.

17

amplex1337 t1_jedufrg wrote

So AI will come up with a way to extract resources from the environment automatically, transport them to facilities to refine and fabricate, engineer and build the testing equipment, and perform the experiments en masse, faster than they take today? It seems like only a small part of the equation will be sped up, but it will be interesting to see if anything else changes right away. It will also be interesting to see how useful these LLMs are in uncharted territory. They are great so far with information humans have already learned and developed, but who knows whether stacking transformer layers on an LLM will actually benefit invention and innovation, since you can't train on data that doesn't exist, RLHF probably won't help much, etc. Maybe I'm wrong; we will see!

6

Talkat t1_jeena5n wrote

I mean if a super AI made a COVID vaccine that worked, and provided thousands of pages of reports on it, and did some trials in mice and stuff, and I was at risk... Absolutely I'd take it even if the FDA or whatever didn't approve it.

I'd send money to them and get it in the mail and self administer if I had to.

My point is that an AI system that can provide enough supporting evidence and a good enough product could perhaps operate outside the existing medical system.

And it would likely create standards that exceed, and are more up to date than, current medical regulations.

6

sdmat t1_jegivhn wrote

There's also a huge opportunity for speeding up scientific progress with better coordination and trust. So much of the effort that goes into the scientific method in practice is spent working around human failures and self-interest. If we had demonstrably reliable, well-aligned AI (GPT-4 is not this), the overall process could be much more efficient, even if all it does is advise and review.

4