VirtualHat
VirtualHat t1_jaa4jwx wrote
Reply to comment by bitemenow999 in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
An increasing number of academics are identifying significant potential risks associated with future developments in AI. Because regulatory frameworks take time to develop, it is prudent to start considering them now.
While current AI systems clearly do not pose an existential threat, the same will not necessarily be true of future systems. It is also worth remembering that regulations are commonly put in place and rarely result in the suppression of an entire field. For instance, despite the existence of traffic regulations, we continue to use cars.
VirtualHat t1_j9vnc3y wrote
Reply to comment by icedrift in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yes, it's worse than this too. We usually associate well-written text with accurate information. That's because, generally speaking, most people who write well are highly educated and have been taught to be critical of their own writing.
Text generated by large language models is atypical in that it's written like an expert but is not critical of its own ideas. We now have an unlimited amount of well-written, poor-quality information, and this is going to cause real problems.
VirtualHat t1_j9vkpgd wrote
Reply to comment by Jinoc in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
That's a good question. To be clear, I believe there is a risk of an extinction-level event, just that it's unlikely. My thinking goes like this.
- Extinction-level events must be rare, as one has not occurred in a very long time.
- Therefore the 'base' risk is very low, and I need evidence to convince me otherwise.
- I'm yet to see strong evidence that AI will lead to an extinction-level event.
I think the most likely outcome is that there will be serious negative implications of AI (along with some great ones) but that they will be recoverable.
I also think some people overestimate how 'super' a superintelligence can be and how unstoppable an advanced AI would be. In a game like chess or Go, a superior player can win essentially 100% of the time. But in a game with chance and imperfect information, a relatively weak player can occasionally beat a much stronger player. The world we live in is one of chance and imperfect information, which limits any agent's control over the outcomes. This makes EY's 'AI didn't stop at human level for Go' analogy less relevant.
VirtualHat t1_j9rsysw wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I work in AI research, and I see many of the points EY makes here in section A as valid reasons for concern. They are not 'valid' in the sense that they must be true, but valid in that they are plausible.
For example, he says we can't just build a very weak system. There are two papers that led me to believe this could be the case: 'All Else Being Equal Be Empowered', which shows that any agent acting to achieve a goal under uncertainty will need (all else being equal) to maximize its control over the system, and the 'Zero-Shot Learners' paper, which shows that (very large) models trained on one task seem also to learn other tasks (or at least learn how to learn them). Both of these papers make me question the assumption that a model trained to learn one 'weak' task won't also learn more general capabilities.
Where I think I disagree is on the likely scale of the consequences. "We're all going to die" is an unlikely outcome. Most likely the upheaval caused by AGI will be similar to previous upheavals in scale, and I'm yet to see a strong argument that bad outcomes will be unrecoverable.
VirtualHat t1_j9rqmii wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
This is very far from the current thinking in AI research circles. Everyone I know believes intelligence is substrate independent and, therefore, could be implemented in silicon. The debate is really more about what constitutes AGI and if we're 10 years or 100 years away, not if it can be done at all.
VirtualHat t1_j9lp6z3 wrote
Reply to comment by GraciousReformer in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
There was a really good paper a few years ago identifying some biases in how DNNs learn that might explain why they work so well in practice compared to the alternatives. Essentially, they are biased towards smoother solutions, which is often what is wanted.
This is still an area of active research, though. I think it's fair to say we still don't quite know why DNNs work as well as they do.
VirtualHat t1_j9loi32 wrote
Reply to comment by kvutxdy in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
It should be all continuous functions, but I can't really think of any problems where this would limit the solution. The set of all continuous functions is a very big set!
As a side note, I think it's quite interesting that the theorem doesn't cover periodic functions like sin over their whole domain, so I guess it's not quite all continuous functions, just continuous functions on a bounded (compact) input domain.
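To illustrate, here's a rough sketch (assuming scikit-learn, and purely as an illustration rather than anything rigorous): a small MLP can approximate sin on a bounded interval, but nothing in the theorem says it will extrapolate the periodicity outside that interval.

```python
# Sketch: approximation holds on a bounded (compact) interval, but the network
# has no reason to extrapolate sin's periodicity beyond the training domain.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-np.pi, np.pi, size=(2000, 1))   # bounded training domain
y_train = np.sin(x_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

x_inside = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
x_outside = np.linspace(2 * np.pi, 4 * np.pi, 200).reshape(-1, 1)

print("MSE on [-pi, pi]:  ", np.mean((model.predict(x_inside) - np.sin(x_inside).ravel()) ** 2))
print("MSE on [2pi, 4pi]: ", np.mean((model.predict(x_outside) - np.sin(x_outside).ravel()) ** 2))
```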
VirtualHat t1_j9lmkg1 wrote
Reply to comment by elmcity2019 in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
In my experience, DNNs only help with data that has spatial or temporal structure (audio, video, images, etc.). I once had a large (~10M datapoints) tabular dataset and found that simply taking a random 2K subset and fitting an SVM gave the best results. I think this is usually the case, but people still want DNNs for some reason. If it were a vision problem then, of course, it would be the other way around.
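I can't share that pipeline, but a minimal sketch of the kind of baseline I mean (assuming scikit-learn, with X and y standing in for any large tabular classification dataset as numpy arrays) would look something like this:

```python
# Sketch of the baseline: subsample a large tabular dataset and fit an RBF SVM.
# X and y are stand-ins for any large tabular classification dataset (numpy arrays).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def svm_baseline(X, y, subset_size=2000, seed=0):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=seed)
    idx = np.random.default_rng(seed).choice(len(X_train), size=min(subset_size, len(X_train)), replace=False)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    model.fit(X_train[idx], y_train[idx])   # fit on the small random subset only
    return model.score(X_test, y_test)      # evaluate on the full held-out test set
```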
VirtualHat t1_j9lm05j wrote
Reply to comment by 30299578815310 in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
It's worth noting that it wasn't until conv nets that DNNs took off. It's hard to think of a problem that a traditional vanilla MLP solves that can't also be solved with an SVM.
VirtualHat t1_j9ll5i2 wrote
Reply to comment by relevantmeemayhere in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Yes, that's right. For many problems, a linear model is just what you want. I guess what I'm saying is that the dividing line between when a linear model is appropriate vs when you want a more expressive model is often related to how much data you have.
VirtualHat t1_j9lkto4 wrote
Reply to comment by GraciousReformer in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Oh wow, super weird to be downvoted just for asking for a reference. r/MachineLearning isn't what it used to be I guess, sorry about that.
VirtualHat t1_j9j8uvr wrote
Reply to comment by GraciousReformer in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
For example, in the IRIS dataset, the class label is not a linear combination of the inputs. Therefore, if your model class is all linear models, you won't find the optimal solution, or in this case, even a good one.
If you extend the model class to include non-linear functions, then your hypothesis space at least contains a good solution, but finding it might be a bit trickier.
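If you want to check this yourself, here's a rough sketch of the comparison (assuming scikit-learn); the point is just that a richer model class gives the optimiser more to choose from, at the cost of a harder search:

```python
# Rough sketch: compare a linear classifier against a non-linear one on IRIS.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

linear = LogisticRegression(max_iter=1000)   # linear decision boundaries
nonlinear = SVC(kernel="rbf")                # non-linear decision boundaries

print("linear:    ", cross_val_score(linear, X, y, cv=5).mean())
print("non-linear:", cross_val_score(nonlinear, X, y, cv=5).mean())
```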
VirtualHat t1_j9j8805 wrote
Reply to comment by GraciousReformer in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Linear models assume the solution has the form y = ax + b. If the true solution is not of this form, then the best fit available is still likely to be a poor one.
I think Emma Brunskill's notes explain this quite well. Essentially, the model will underfit because it is too simple. I am assuming here that a large dataset implies a more complex, non-linear solution, but this is generally the case.
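As a toy illustration of the underfitting (just a sketch, assuming numpy and scikit-learn): fit y = ax + b to data generated from a clearly non-linear function, and no amount of extra data fixes the fit.

```python
# Toy underfitting example: the ground truth is quadratic, so the best
# possible y = ax + b fit has low R^2 regardless of how much data we add.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(10_000, 1))
y = x.ravel() ** 2 + 0.1 * rng.normal(size=len(x))   # non-linear ground truth

linear = LinearRegression().fit(x, y)
print("R^2 of the best linear fit:", linear.score(x, y))   # stays close to zero
```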
VirtualHat t1_j9j2gwx wrote
Reply to comment by relevantmeemayhere in [D] "Deep learning is the only thing that currently works at scale" by GraciousReformer
Linear models tend not to scale well to large datasets when the true solution is not in the model class. Because of this lack of expressivity, they tend to do poorly on complex problems.
VirtualHat t1_j9j23wp wrote
If you're interested in the math, learning curve theory might be a good place to start.
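On the empirical side, a quick sketch of plotting a learning curve with scikit-learn (the digits dataset and SVC here are just stand-ins) looks like this:

```python
# Sketch of an empirical learning curve: how cross-validated accuracy changes
# as the training set grows. The digits dataset and SVC are just stand-ins.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
sizes, train_scores, test_scores = learning_curve(
    SVC(kernel="rbf"), X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 5)
)
for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"n={n:5d}  cv accuracy={score:.3f}")
```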
VirtualHat t1_j6ckblf wrote
Reply to comment by luaks1337 in [N] OpenAI has 1000s of contractors to fine-tune codex by yazriel0
I was thinking next frame prediction, perhaps conditioned on the text description or maybe a transcript. The idea is you could then use the model to generate a video from a text prompt.
I suspect this is far too difficult to achieve with current algorithms. It's just interesting that the training data is all there, and would be many, many orders of magnitude larger than GPT-3's training set.
VirtualHat t1_j6bi3xk wrote
Reply to comment by visarga in [N] OpenAI has 1000s of contractors to fine-tune codex by yazriel0
Video and audio might be the next frontier, although I'm not too sure how useful it would be. YouTube receives over 500 hours of uploads per minute, providing an essentially unlimited pipe of training data.
VirtualHat t1_j45faub wrote
Reply to comment by [deleted] in [D] Has ML become synonymous with AI? by Valachio
Genetic algorithms are a type of evolutionary algorithm, which are themselves a part of AI. Have a look at the wiki page.
I think I can see your point, though. The term AI is used quite differently in research from its popular meaning. We sometimes joke that the cultural definition of AI is "everything that can't yet be done with a computer" :)
This is a bit of a running joke in the field. Chess was AI until computers could beat us at it; then it wasn't. Asking a computer random questions and getting an answer, Star Trek style, was AI until Google came along; then it was just 'searching the internet'. The list goes on...
VirtualHat t1_j45em2b wrote
Reply to comment by tell-me-the-truth- in [D] Has ML become synonymous with AI? by Valachio
Yes, true! Most models will eventually saturate and perhaps even become worse. I guess it's our job then to just make the algorithms better :). A great example of this is the new Large Language Models (LLMs), which are trained on billions if not trillions of tokens and still keep getting better :)
VirtualHat t1_j45e2rs wrote
Reply to comment by [deleted] in [D] Has ML become synonymous with AI? by Valachio
Everything is new in its current form :) AI, however, goes back a long way... perhaps Turing would be a reasonable starting point, with his 1950 paper 'Computing Machinery and Intelligence'.
edit: grammar.
VirtualHat t1_j45dklv wrote
Reply to comment by [deleted] in [D] Has ML become synonymous with AI? by Valachio
I think Russell and Norvig is a good place to start if you want to read more. The AI definition is taken from their textbook, which is one of the most cited references I've ever seen. I do agree, however, that the first definition has a problem, namely with what 'intelligently' means.
The second definition is just the textbook definition of ML. Hard to argue with that one. It's taken from Tom Mitchell. Formally: “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.” (Machine Learning, Tom Mitchell, McGraw Hill, 1997).
I'd be curious to know what your thoughts on a good definition for AI would be. This is an actively debated topic, and so far no one really has a great definition (that I know of).
VirtualHat t1_j456msu wrote
Reply to comment by I_will_delete_myself in [D] Has ML become synonymous with AI? by Valachio
Definitions shift a bit, and people disagree, but this is what I stick to...
AI: Any system that responds 'intelligently' to its environment. A thermostat is, therefore, AI.
ML: A system that gets better at a task with more data.
Therefore ML is a subset of AI, one specific way of achieving the goal.
VirtualHat t1_j440ajs wrote
Reply to [D] Has ML become synonymous with AI? by Valachio
People tend to use AI and ML to mean similar things. But yes, in academia, we still research AI ideas that are not ML. And integrating good-old-fashioned-ai (GOFAI) with more modern ML is becoming an area of increasing research interest.
VirtualHat t1_j0ds33h wrote
Reply to comment by [deleted] in [D] Tensorflow vs. PyTorch Memory Usage by Oceanboi
They mean they have 4 Conv layers, with 64, 32, 16, and 16 channel outputs. The filter size is not given.
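For what it's worth, here's a minimal PyTorch sketch of what such a stack might look like; the 3x3 kernel size and the 3 input channels are my assumptions, since the description only gives the output channels.

```python
# Sketch of a 4-layer conv stack with 64, 32, 16, 16 output channels.
# Kernel size 3 and 3 input channels (e.g. RGB) are assumptions; only the
# output channels per layer were given in the original description.
import torch
import torch.nn as nn

conv_stack = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
)

x = torch.randn(1, 3, 64, 64)   # dummy batch: 1 image, 3 channels, 64x64
print(conv_stack(x).shape)      # torch.Size([1, 16, 64, 64])
```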
VirtualHat t1_jaa4ueu wrote
Reply to comment by po-handz in [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
A better analogy would be: This professor thinks the implementation of driver's licences has reduced traffic accidents.