
vhu9644 t1_ja9cw0v wrote

Laws have to be pragmatic.

It's like making encryption illegal. Anyone with the know-how can do it, and you can't detect an air-gapped model being trained.

We, as a society, shed data more than we shed skin cells. Restricting dataset access wouldn't really be that much of a deterrent either.

2

vhu9644 t1_ja6wu9v wrote

I know this is exciting (and it is) but just to temper the excitement: many computationally designed proteins have issues.

Most aren’t very good at working under in vivo conditions

We still can’t really tune the parameters we most care about (like the temperatures these proteins work at)

Most are stuck on “simpler” problems like binding rather than enzymatic function

There may also be issues with evolvability of these enzymes

But all the same, it’s not an unnatural situation either. Protein sequences are still sequences: amino acids are added one by one to build them up, and we’ve known for a while that neural nets are good at sequence problems. Before we solved tertiary structure prediction, the state of the art in secondary structure prediction was also neural networks. It’s just that tertiary structure and these kinds of generative models are hard.
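
The "sequences built one residue at a time" framing can be sketched in a few lines of Python. This is a toy, with a uniform sampler standing in for a learned model; `sample_sequence` and the alphabet string are illustrative, not from any real library:

```python
import random

# Toy illustration only (not a real protein model): a protein is just a
# sequence over the 20-letter amino-acid alphabet, grown one residue at
# a time -- the same autoregressive setup neural sequence models use.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sample_sequence(length, rng=None):
    """Build a sequence residue by residue. A trained generative model
    would replace the uniform draw with learned conditional
    probabilities P(next residue | residues so far)."""
    rng = rng or random.Random()
    seq = []
    for _ in range(length):
        seq.append(rng.choice(AMINO_ACIDS))  # stand-in for the model's sample
    return "".join(seq)

print(sample_sequence(12, random.Random(0)))
```

A real generative design model conditions each draw on the residues emitted so far (and often on a structural target); the loop structure is the same.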

We’re finally cracking into generative protein design and the field is super exciting now, but it’s still only really preliminary results we’re seeing.

3

vhu9644 t1_j9de7c9 wrote

I have two bachelors, one in Bioengineering (focused on mechanical engineering), one in pure mathematics (with enough classes taken in CS to have a minor, if that were allowed at my school). I am currently doing an MD/PhD, with the PhD in computational and systems biology. ML and AI are things I want to apply to my field, and I have enough in my background to understand some of the seminal papers. I say this because I have studied core ideas in all of the majors you have put out there.

My recommendation between CS, Math, Neuroscience, and Cog Sci is, in order of priority: computer science, then applied math, then pure math, then cognitive science, then neuroscience.

Neural networks now borrow nearly nothing from neuroscience or cognitive science. The relevant equations in those fields are intractable for actual computation, and while cognitive science (and some neuroscience) does try to use SOTA methods, it isn't where the ideas really come from. Also, the perceptron dates to the late 1950s, and convnets, like backprop, to the 1980s. What made these old ideas actually work was advances in hardware, and what brought them further was educated iteration: people formed ideas driven mostly by deep mathematical and empirical understanding of what they were working with, then iterated until it worked.

That said, if we had arrived at machine learning and AI through a more formalism-driven, proof-based route, then math would be more useful. That is not the case. While ideas from mathematics can be helpful (for example, there is deep mathematical theory for understanding neural networks), many of them are applied post hoc. To my knowledge, we have basically one important theorem in play here, the universal approximation theorem, and it doesn't say much beyond the fact that a single hidden layer is sufficient for a densely connected network to approximate continuous functions. I'm not doing it justice, because the math behind it is deep and hard and well beyond pre-collegiate mathematics (hard enough that the subject was the first math class to make me physically cry). This is to illustrate how ill-equipped the mathematical world is to understand SOTA neural networks.
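
For reference, the classical one-hidden-layer form of the theorem (Cybenko/Hornik style, stated informally) says that for any continuous \(f\) on a compact \(K \subset \mathbb{R}^n\) and any \(\varepsilon > 0\), there is a network

```latex
g(x) = \sum_{i=1}^{N} v_i \,\sigma\!\left(w_i^{\top} x + b_i\right),
\qquad
\sup_{x \in K} \bigl| f(x) - g(x) \bigr| < \varepsilon ,
```

for a suitable non-polynomial activation \(\sigma\) and large enough width \(N\). Note the theorem is purely existential: it says nothing about how large \(N\) must be or how to find the weights.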

This isn't to say knowledge of mathematics won't help you. For example, we know that the loss landscape of a linear VAE is closely related to that of PCA. There is a neat math trick that turns training diffusion models into a tractable problem. There is work on bringing self-attention down to more tractable memory footprints that involves some numerical analysis. So if your goal really is to help with AGI, you will need to know some math.
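
The diffusion-model trick alluded to (written here in standard DDPM notation, where \(\bar{\alpha}_t\) comes from the noise schedule) is that the forward noising process can be sampled in closed form at any timestep, which collapses training into a simple denoising regression:

```latex
x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\,\epsilon,
\qquad \epsilon \sim \mathcal{N}(0, I),
\qquad
\mathcal{L}_{\text{simple}}
  = \mathbb{E}_{t,\,x_0,\,\epsilon}
    \Bigl[\, \bigl\| \epsilon - \epsilon_\theta(x_t,\, t) \bigr\|^2 \Bigr].
```

Instead of simulating the whole noising chain, you jump straight to step \(t\) and train the network \(\epsilon_\theta\) to predict the noise that was added.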

What is important for actual AGI are scientific insights (what is sentience? how can a machine generate new ideas? how can a machine learn about the world?) and engineering solutions (how can we make machine learning tractable? how can we fit the processing power into our current hardware?). Computer science teaches you both. You will learn to analyze how algorithms scale (important for fitting things into hardware), and you'll have electives on how we have conceptualized machine learning and artificial intelligence. What you should supplement is solid numerical and continuous mathematics: learn some numerical analysis, some control theory, some statistics. These are the core tools for the problems we currently want AGI to solve. Neuroscience won't care about making AGI work (and neither will cog sci). Mathematics is deeply beautiful and useful, but its reliance on proofs generally leaves it a bit behind the empirical fields.

If you have any questions, I've chosen a very different path in life, but I'll be happy to answer stuff from my perspective. Best of luck with your major choice.

9

vhu9644 t1_iwkn78y wrote

I think I have the training to do this (math + BME undergrad, in grad school for comp bio), but I'm currently busy with some work. If nothing is posted in 2 days, send me a reminder and I'll try.

−2

vhu9644 t1_iwau2w1 wrote

And I’m a synthetic biologist working in protein engineering. What I’m skeptical about is that, for this protein specifically, this change in structure plays a major role in determining function (given its simplicity), and that we are really seeing two distinct folds that are locked away from each other.

The point is ultimately moot: the chosen protein is membrane-bound, so the lipid bilayer will provide stabilization.

1

vhu9644 t1_iw7ct55 wrote

Yes I’m aware.

But these arguments by analogy don’t do it for me for something this simple, which doesn’t even look like it would have a catalytic core without some other subunit. Do you even know what protein this is?

Edit:

It’s a serotonin receptor from a cricket. It’s a membrane protein, so it should be stabilized by passing through the membrane.

1

vhu9644 t1_iw7c7do wrote

Sure, but I’m just skeptical of the claim that these two predicted structures would give wildly different functions, or that they really are distinct in something this simple.

I could believe it if, for example, the catalytic core of a barrel protein had small alterations in structure, but this is just two helices next to each other with a small disordered domain at the bottom.

1

vhu9644 t1_iw6folh wrote

Really? Could these both not be viable structures that a protein could switch between due to thermal fluctuations?

It looks like it’s not a particularly complex protein, so I imagine it’s some ligand or subunit for something, in which case the “correct” structure would be stabilized by its interaction with another object.

1

vhu9644 t1_iu08ox5 wrote

I think the greater strategic importance is cementing internal legitimacy (domestic stability) while ensuring open sea access. They don’t care about blockading Taiwan; they care about Taiwan blockading them (with our blessing).

SMIC sucks, but they don’t suck that badly. IIRC they’re 1–3 generations behind, but again, they’re not operating on purely market terms, so yield matters a bit less. ASML not selling them EUV machines is a big setback, but only time will tell whether it’s an insurmountable one.

0

vhu9644 t1_itzfdrh wrote

> Obviously and im not even going to try ans explain why China doesn't want the US to have its own chips.

What?

How does that line up with this?

> This was probably written for Chinese state media, then translated here. No its not doomed to fail, China is just upset the US is going to cut them out.

China wants the US to stop relying on Taiwan. It’s another reason for the US to care less about them.

The US has the capability to make top-of-the-line chips, just not at competitive yields. China can’t make them at all. It’s a whole different realm of difference.

We don’t really rely on China for our chips. We rely on Taiwan.

3

vhu9644 t1_itz1rqc wrote

Oh but the reasoning was different then.

Anti-communism was strong then, and is arguably not as strong now. The PRC was a much shittier power. The American people then were more willing to do what it took to be the hegemon.

Now China is less communist and more powerful, and Americans are more isolationist. My view is that the US is less interested in Taiwan now because, domestically, there is less support for maintaining that commitment and destabilizing the region.

I could be wrong. I’m definitely a kid in the sense that I wasn’t alive back then. But from my read of history, we’re supportive of Taiwanese sovereignty, just not as strongly as we used to be, and Taiwan losing its semiconductor primacy would further decrease that support.

1