vhu9644 t1_ja6wu9v wrote
Reply to Large language models generate functional protein sequences across diverse families by MysteryInc152
I know this is exciting (and it is), but just to temper the excitement: many computationally designed proteins have issues.

- Most aren't that good at working under in vivo conditions.
- We still can't really tune the parameters we actually want (like the temperature range these proteins work in).
- Most are stuck on "simpler" problems like binding rather than enzymatic function.
- There may also be issues with the evolvability of these enzymes.
But all the same, it's not an unnatural situation either. A protein sequence is still a sequence: amino acids are added one by one to build it up, and we've long known that neural nets are good at sequence problems. Before we solved tertiary structure prediction, the state of the art for secondary structure prediction was also neural networks. It's just that tertiary structure and these kinds of generative models are hard.
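To make the "sequence" point concrete, here's a minimal sketch of autoregressive generation over the 20 standard amino acids. The "model" below is a uniform-probability placeholder, purely for illustration; it isn't the architecture from the paper or any published method.

```python
import random

# The 20 standard amino acids, treated as the vocabulary of a language model.
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

def next_residue_distribution(prefix: str) -> dict:
    """Stand-in for a trained model: returns a probability for each amino acid
    given the sequence generated so far. Here it's just uniform."""
    return {aa: 1.0 / len(AMINO_ACIDS) for aa in AMINO_ACIDS}

def generate_protein(length: int = 120, seed: int = 0) -> str:
    """Autoregressive generation: sample residues one at a time, each conditioned
    on everything generated before it."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(length):
        probs = next_residue_distribution("".join(sequence))
        residues, weights = zip(*probs.items())
        sequence.append(rng.choices(residues, weights=weights, k=1)[0])
    return "".join(sequence)

print(generate_protein())
```

Swap the placeholder for a learned conditional distribution and you have, in spirit, what these protein language models do.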
We're finally cracking into generative protein design and the field is super exciting right now, but we're still only seeing preliminary results.
vhu9644 t1_j9de7c9 wrote
Reply to Computer vs Math vs Neuroscience vs Cognitive science Bachelors’ degree to major in by Ok_Telephone4183
I have two bachelor's degrees: one in bioengineering (focused on mechanical engineering) and one in pure mathematics (with enough CS classes for a minor, had that been allowed at my school). I'm currently doing an MD/PhD, with the PhD in computational and systems biology. ML and AI are things I want to apply to my field, and I have enough background to understand some of the seminal papers in the field. I say this because I have studied the core ideas in all of the majors you've listed.
My recommendation between CS, Math, Neuroscience, and Cog Sci is, in order of priority, Computer science, then applied math, then pure math, then cognitive science, then neuroscience.
Neural networks now borrow nearly nothing from neuroscience and cognitive science. The relevant equations in neuroscience and cognitive science are intractable for actual computation, and while cognitive science (and some neuroscience) does try to use SOTA methods, it isn't where the ideas really come from. Also, the perceptron dates to the late 1950s, and convnets and backprop are from the 1980s. What made these old ideas actually work was advances in hardware, and what pushed them further was educated iteration: people had ideas, mostly driven by deep mathematical and empirical understanding of what they were working with, and then iterated on them until they worked.
That said, if we had arrived at machine learning and AI through a more formalism-driven, proof-based route, then math would be more useful. This is not the case. While ideas from mathematics can be helpful (for example, there is deep mathematical theory for understanding neural networks), many of these ideas are applied post hoc. To my knowledge, we have basically one important theorem in play here, the universal approximation theorem, and it doesn't say much beyond the fact that a single hidden layer (with enough units) is enough for a densely connected neural network to approximate continuous functions. I'm not doing it much justice, because the math behind it is deep and hard and well beyond pre-collegiate mathematics (hard enough that this subject was the first math class to make me physically cry). This is to illustrate how ill-equipped the mathematical world is to understand SOTA neural networks.
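For reference, a rough statement of the single-hidden-layer version (this is my informal paraphrase of the Cybenko/Hornik-style result, not a precise theorem statement):

```latex
% For any continuous f on a compact set K \subset \mathbb{R}^n, any \varepsilon > 0,
% and a fixed non-polynomial activation \sigma, there exist N and parameters
% a_i, b_i \in \mathbb{R}, \; w_i \in \mathbb{R}^n such that
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} a_i \, \sigma\!\left( w_i^{\top} x + b_i \right) \right| < \varepsilon
```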
This isn't to say knowledge of mathematics won't help you. For example, we know that the loss landscape of VAEs is similar to that of PCA. There is a cool math trick that makes training diffusion models tractable. There is work on bringing self-attention down to more tractable memory footprints that involves some numerical analysis. So if your goal really is to help with AGI, you will need to know some math.
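For what it's worth, the diffusion "trick" is (loosely) reparameterizing training as noise prediction. A common simplified objective in DDPM-style notation looks like the following; this is my sketch of the general idea, not the formulation of any specific paper mentioned here:

```latex
% Noise a clean sample x_0 to timestep t:
%   x_t = \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon, \quad \epsilon \sim \mathcal{N}(0, I),
% then train a network \epsilon_\theta to recover the injected noise:
L_{\text{simple}} = \mathbb{E}_{t,\, x_0,\, \epsilon}
\left[ \big\| \epsilon - \epsilon_\theta\!\big( \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon,\; t \big) \big\|^2 \right]
```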
What is important for actual AGI are scientific insights (what is sentience? how can a machine generate new ideas? how can a machine learn about the world?) and engineering solutions (how do we make machine learning tractable? how do we fit the processing power into our current hardware?). Computer science teaches you both: you will learn to analyze how algorithms scale (important for fitting things into hardware), and you'll have electives that teach you how machine learning and artificial intelligence have been conceptualized. What you should supplement is solid numerical and continuous mathematics. Learn some numerical analysis. Learn some control theory. Learn some statistics. These underpin the core problems we currently want AI to solve. Neuroscience won't care about making AGI work (and neither will cog sci). Mathematics is deeply beautiful and useful, but its reliance on proofs generally leaves it a bit behind the empirical fields.
If you have any questions, I've chosen a very different path in life, but I'll be happy to answer stuff from my perspective. Best of luck with your major choice.
vhu9644 t1_iwkn78y wrote
Reply to comment by ReasonablyBadass in MIT researchers solved the differential equation behind the interaction of two neurons through synapses to unlock a new type of fast and efficient artificial intelligence algorithms by Dr_Singularity
I think I have the training to do this (math + BME undergrad, now in grad school for comp bio), but I'm currently busy with some work. If nothing is posted in two days, send me a reminder and I'll try.
vhu9644 t1_iwau2w1 wrote
Reply to comment by wazabee in Shape of a protein predicted by two different AI models (ESMFold on the left, AlphaFold on the right) by greentea387
And I'm a synthetic biologist working in protein engineering. What I'm skeptical about is that, for this protein specifically, the change in structure plays a major role in determining function (given its simplicity), and that we are really seeing two distinct folds that are locked away from each other.
The point is ultimately moot anyway: the protein chosen is a membrane-bound protein, so the lipid bilayer will provide stabilization.
vhu9644 t1_iw7ct55 wrote
Reply to comment by wazabee in Shape of a protein predicted by two different AI models (ESMFold on the left, AlphaFold on the right) by greentea387
Yes I’m aware.
But these arguments by analogy don’t do it for me for something this simple, that doesn’t even look like it would have a catalytic core without some other subunit. Do you even know what protein this is?
Edit:
It's a serotonin receptor from a cricket. It's a membrane protein, so it should be stabilized by passing through the membrane.
vhu9644 t1_iw7c7do wrote
Reply to comment by wazabee in Shape of a protein predicted by two different AI models (ESMFold on the left, AlphaFold on the right) by greentea387
Sure, but I'm just skeptical of the claim that these two predicted structures would give wildly different functions, or that they really are distinct folds for something this simple.
I could believe it if, for example, the catalytic core of a barrel protein had small alterations in structure, but this is just two helices next to each other with a small disordered domain at the bottom.
vhu9644 t1_iw6folh wrote
Reply to comment by wazabee in Shape of a protein predicted by two different AI models (ESMFold on the left, AlphaFold on the right) by greentea387
Really? Could these both not be viable structures that a protein could switch between due to thermal fluctuations?
It looks like it’s not a particularly complex protein, so I imagine it’s some ligand or subunit for something, in which case the “correct” structure would be stabilized by its interaction with another object.
vhu9644 t1_iu08ox5 wrote
Reply to comment by reddit-MT in TSMC says efforts to rebuild US semiconductor industry are doomed to fail by 0wed12
I think the greater strategic importance is cementing internal legitimacy (domestic stability) while ensuring open sea access. They don't care about blockading Taiwan; they care about Taiwan blockading them (with our blessing).
SMIC sucks, but they don't suck that badly. IIRC they're something like 1-3 generations behind, but again, they aren't operating on purely market terms, so yield matters a bit less. ASML not selling them EUV machines is a big setback, but only time will tell whether it's an insurmountable one.
vhu9644 t1_itzfdrh wrote
Reply to comment by monchota in TSMC says efforts to rebuild US semiconductor industry are doomed to fail by 0wed12
> Obviously and im not even going to try ans explain why China doesn't want the US to have its own chips.
What?
How does that line up with

> This was probably written for Chinese state media, then translated here. No its not doomed to fail, China is just upset the US is going to cut them out.
China wants the US to stop relying on Taiwan; it's one more reason for the US to care less about Taiwan.
The US has the capability to make top-line chips, just not at competitive yields. China can't make those chips at all. It's a whole different order of problem.
We don't really rely on China for our chips; we rely on Taiwan.
vhu9644 t1_itzdnts wrote
Reply to comment by monchota in TSMC says efforts to rebuild US semiconductor industry are doomed to fail by 0wed12
Taiwan isn't China. TSMC is Taiwanese.
vhu9644 t1_itz8s07 wrote
Reply to comment by [deleted] in TSMC says efforts to rebuild US semiconductor industry are doomed to fail by 0wed12
Well, you're half right.
Nothing stops us from making these chips commercially. But right now we just don't know how to make them at a competitive yield. Sure, we can make chips that work at the highest level; they just come with a much larger failure rate than is competitively viable.
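To put rough numbers on why yield is the whole ballgame, here's a back-of-the-envelope sketch using the standard Poisson yield model. The defect densities and die area below are made up purely for illustration; they aren't real fab figures.

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Classic Poisson yield model: the fraction of dice with zero defects,
    assuming defects land randomly and independently across the wafer."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical numbers for illustration only.
die_area = 1.0         # cm^2, a largish high-end die
mature_fab = 0.10      # defects per cm^2 at a mature process
immature_fab = 0.50    # defects per cm^2 at a process still being tuned

print(f"mature fab yield:   {poisson_yield(mature_fab, die_area):.1%}")    # ~90%
print(f"immature fab yield: {poisson_yield(immature_fab, die_area):.1%}")  # ~61%
```

A few-fold difference in defect density translates directly into how many sellable chips come off each wafer, which is where the "competitively viable" line gets drawn.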
vhu9644 t1_itz1rqc wrote
Reply to comment by PHATsakk43 in TSMC says efforts to rebuild US semiconductor industry are doomed to fail by 0wed12
Oh but the reasoning was different then.
Anti-communism was strong then, and arguably isn't as strong now. The PRC was a much shittier power. The American people then were more willing to do what it took to be the hegemon.
Now, China is less communist and more powerful, and Americans are more isolationist. My view is that the US is less interested in Taiwan now because domestically there is less support for maintaining that commitment and risking destabilizing the region.
I could be wrong. I'm definitely a kid in the sense that I wasn't alive back then. But from my read of history, we're supportive of Taiwanese sovereignty, it just isn't as strong as it used to be, and Taiwan losing its semiconductor primacy would decrease that support further.
vhu9644 t1_ityxkk5 wrote
Reply to comment by ked_man in TSMC says efforts to rebuild US semiconductor industry are doomed to fail by 0wed12
No, China wants US semiconductor manufacturing to reach parity. That makes protecting Taiwan less important to the US, which means China has an easier time reclaiming it.
Taiwan has strategic value to China beyond semiconductors.
vhu9644 t1_ja9cw0v wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
Laws have to be pragmatic.
It's like making encryption illegal. Anyone with the know-how can do it, and you can't detect an air-gapped model being trained.
We, as a society, shed data more than we shed skin cells. Restricting dataset access wouldn't really be that much of a deterrent either.