FeatheryBallOfFluff t1_j0kykz1 wrote

Highly unlikely. A scientist still needs to guide AI to make it search in the right direction. You can tell AI "Find me the proteins that bind best to this protein of interest at epitope X", and AI may find you the best one, but someone still has to decide what to do with that information and how to apply it.

Then there is research on things that are hard for a machine to understand. A lot of research in ecology is barely statistically significant, but biologically relevant. A human may understand why something is biologically relevant; AI, as it stands now, is incapable of doing so.

What AI can do, though, is optimize plant growth parameters so that energy requirements go down while food security increases. So essentially, it would eventually be possible to feed the population with very little labour, and we could focus on other tasks that further improve our lives (hint: science).
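To make that concrete, here's a minimal sketch of what such a growth-parameter optimization might look like. The yield model and parameter ranges are entirely made up for illustration; in practice the model would be learned from greenhouse data.

```python
import itertools

def simulated_yield(light_hours, temp_c):
    """Toy stand-in for a learned yield model (purely illustrative)."""
    return -((light_hours - 14) ** 2) - 0.5 * (temp_c - 22) ** 2 + 100

# Simple grid search over growth parameters, the kind of
# optimization loop an AI-driven greenhouse could run.
best = max(
    itertools.product(range(8, 21), range(15, 31)),  # (light hours, temp °C)
    key=lambda params: simulated_yield(*params),
)
print(best)  # the toy model peaks at 14 h of light and 22 °C
```

A real system would use something smarter than a grid search (e.g. Bayesian optimization) and would have to respect energy costs as constraints, but the loop structure is the same.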

AGI, as it stands, is decades, if not centuries, away. But let's assume for a second that it exists: why wouldn't humans collaborate with AGI to find new scientific results?

10

Surur t1_j0l3xsp wrote

If AIs can understand protein folding better than humans, I think it is pretty obvious those higher-level abstractions are also tractable, even complex things like ecology. I would bet AI would be much better at understanding ecology than us.

There is very little sign that AGI is centuries away, and decades go past pretty fast.

−2

FeatheryBallOfFluff t1_j0l6723 wrote

AIs can predict, but that isn't the same as understanding why or how something works. It's like being able to apply a very complex formula: you may know how to apply the formula, but not understand why the formula takes that form. Computers are good at finding correlations, but in an environment with few correlations, AI may struggle, as there is no number that indicates biological relevance.

1

Surur t1_j0l6tj8 wrote

Finding relationships between items is exactly what AI is good at. You sound like the people who said AI would never beat Go because the number of combinations was larger than the number of atoms in the universe.

−1

breaditbans t1_j0l9qc4 wrote

I work in medical research. We are already seeing cool image-based analysis, but it's supervised machine learning that is only as good as the training set. This will apply to any machine learning algos, and that's where we are going to run into issues. What I'd like to see is ML algos that can read 50 high-impact papers in a field and put together a summary of the data. The problems arise when people have bad data. It might be fabricated, poorly designed experiments, or just bad statistics. The ML algos are going to treat that data as being as real as the most well-performed experiments. The bad data will contaminate the good data and corrupt the conclusions drawn from the algos.

Will that problem get alleviated? Probably, but it’s going to take some time and it’s going to require a lot of bright people to curate the dataset to actually be able to draw better conclusions than we can arrive at alone. But in 15 years? God only knows. Maybe I’ll just submit whatever grant ChatGPT13 writes for me.
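A toy illustration of the contamination worry (all numbers fabricated for the example): fit a slope to clean measurements, then mix in one bad batch and watch the estimate get pulled off.

```python
import random

random.seed(0)

# Clean "experiments": true effect y = 2x, plus small measurement noise.
clean = [(x, 2 * x + random.gauss(0, 0.1)) for x in range(1, 21)]

# One fabricated batch claiming a much steeper effect.
fabricated = [(x, 10 * x) for x in range(1, 11)]

def fit_slope(data):
    """Least-squares slope through the origin."""
    return sum(x * y for x, y in data) / sum(x * x for x, _ in data)

print(round(fit_slope(clean), 2))               # close to the true slope of 2
print(round(fit_slope(clean + fabricated), 2))  # pulled noticeably above 2
```

An algorithm that weighted studies by provenance or replication could resist this, but that weighting is exactly the curation work that still needs bright people.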

2

Surur t1_j0ld4s8 wrote

Dealing with dirty data is exactly the strength of neural networks. It is just a matter of time.

1