
Present_Finance8707 t1_j9qfavu wrote

His arguments don’t hold up. For one thing, we already have powerful generalist agents: Gato is one, and it’s clear that advanced LLMs can do all sorts of tasks they weren’t trained for. Next-token prediction seems about as benign and narrow as an objective can get, but if you don’t think an LLM can become dangerous, you aren’t thinking hard enough. CAIS (Comprehensive AI Services) also assumes people won’t build generalist agents in the first place, but that cat is well out of the bag. Narrow agents can become dangerous on their own because of instrumental convergence (almost any goal is better served by acquiring resources and resisting shutdown), and even if you restrict development to weak, narrow agents/services, the profit incentive to build general agents will be too strong, since they will likely outperform narrow ones.
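To make the “narrow objective” point concrete, here’s a toy sketch of autoregressive decoding. Everything in it (the vocabulary, the bigram score table) is made up for illustration; a real LLM replaces the lookup with a learned network, but the loop is the same: the *only* operation is “predict the next token, append it, repeat,” and every behavior the system exhibits is expressed through that one narrow loop.

```python
# Toy autoregressive next-token loop. The bigram table below stands in for
# a trained model's logits; it is purely illustrative, not any real library.
VOCAB = ["<s>", "plan", ":", "step", "1", "2", "</s>"]
BIGRAM_LOGITS = {
    "<s>":  {"plan": 2.0, "step": 0.5},
    "plan": {":": 3.0},
    ":":    {"step": 2.5},
    "step": {"1": 1.5, "2": 1.0},
    "1":    {"step": 0.5, "</s>": 2.0},
    "2":    {"</s>": 2.0},
}

def next_token(context: list[str]) -> str:
    """Greedy decoding: pick the highest-scoring continuation of the last token."""
    scores = BIGRAM_LOGITS.get(context[-1], {"</s>": 0.0})
    return max(scores, key=scores.get)

def generate(prompt: list[str], max_len: int = 10) -> list[str]:
    tokens = list(prompt)
    while len(tokens) < max_len and tokens[-1] != "</s>":
        tokens.append(next_token(tokens))  # the entire "capability" lives here
    return tokens

print(generate(["<s>"]))  # ['<s>', 'plan', ':', 'step', '1', '</s>']
```

The training objective never mentions plans, tools, or goals, yet anything that can be written down can come out of this loop, which is exactly why “it just predicts the next token” is not a safety argument.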

1

Present_Finance8707 t1_j9my8wl wrote

Like I said, you really, really don’t understand alignment. Imagine thinking a “filter” is what we need to align AIs, or completely lacking any understanding of instrumental convergence. You don’t understand even the utter basics, yet you think you know enough to dismiss Eliezer’s arguments out of hand? Thankfully, I think you’re also too stupid to contribute meaningfully to capabilities research, so thanks for that.

3