Submitted by LegendOfHiddnTempl t3_1169uzy in MachineLearning
Comments
Flag_Red t1_j96jzng wrote
> I bet stuff like this is gonna be the biggest real life use case for neural networks.
Huh? What about image/face/character/anything recognition, speech-to-text, text-to-speech, translation, natural language understanding, code autocomplete, etc?
Wacov t1_j96nvx5 wrote
Depends how you define "biggest", but running an ML physics sim per-frame, per-character in an AAA title would add up to a hell of a lot of inference.
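Back-of-envelope, with every number below assumed purely for illustration:

```python
# Rough scale of per-frame, per-character inference (all numbers assumed).
fps = 60             # target frame rate
characters = 50      # simulated characters on screen
players = 1_000_000  # concurrent players

inferences_per_second = fps * characters * players
print(f"{inferences_per_second:,} model evaluations per second")  # 3,000,000,000
```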
PacmanIncarnate t1_j9d7yaw wrote
The bigger use isn’t games, but animation or VFX. They require high quality simulations that sometimes take days to render a few seconds of simulation. Every tech that can cut that time down without a substantial loss of quality is huge.
vman512 t1_j96s8yu wrote
maybe for people who play video games all day, this is the most real life use case
nuclear_knucklehead t1_j9876bt wrote
Think of the zillions of FEA and CFD simulations done in the engineering world that a fast-running physics model would greatly accelerate and improve. These things are often less visible to the general audience than the high profile stuff you mention, but still have potentially billions of dollars in economic impact and productivity improvements.
thecodethinker t1_j96u7y5 wrote
I think classification tasks (like image or face recognition) are really useful, but more niche. We had image recognition before; NNs just do it better. They don’t open up new use cases for recognition.
Same for speech to text and text to speech.
Translation is another huge one, that’s true.
I don’t think NN code autocomplete is a “big real life use case”, as we already have exact, deterministic autocomplete, and for anything beyond simple programs I haven’t seen any model give good suggestions. Plus, not everyone writes code.
Natural language “understanding” is a weird one. I’m not convinced (yet) that we have models that “understand” language, just models that are good at guessing the next word.
ChatGPT’s tendency to be flat-out wrong or give nonsensical answers to very niche and specific questions suggests that it isn’t doing any kind of critical thinking about a question; it’s just generating statistically probable following tokens. It generates convincing prose because that’s what it was trained to do.
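To make the “statistically probable following tokens” point concrete, here’s a toy sketch: a bigram model, vastly simpler than an LLM, but doing the same basic move of sampling the next word from a learned distribution (the probability table below is made up for illustration):

```python
import random

# Toy "next-token" generator: each word maps to a probability
# distribution over words that may follow it (invented numbers).
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(token, steps=4):
    out = [token]
    for _ in range(steps):
        nxt = bigram_probs.get(out[-1])
        if not nxt:  # no known continuation; stop
            break
        words, probs = zip(*nxt.items())
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```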
liquiddandruff t1_j989luo wrote
the stochastic parrot argument is a weak one; we are stochastic parrots
the phenomenon of "reasoning ability" may be an emergent one that arises out of the recursive identification of structural patterns in input data--which chatgpt is shown to do.
prove that "understanding" is not, and can never be, reducible to "statistical modelling"; only then is your null position intellectually defensible
thecodethinker t1_j98puob wrote
Where has ChatGPT been rigorously shown to have reasoning ability? I’ve heard that it passed some exams, but that could just be the model regurgitating info in its training data.
Admittedly, I haven’t looked too deeply into the reasoning abilities of LLMs, so any references would be appreciated :)
liquiddandruff t1_j98v6ko wrote
it's an open question and lots of interesting work is happening at a frenetic pace here
- Language Models Can (kind of) Reason: A Systematic Formal Analysis of Chain-of-Thought https://openreview.net/forum?id=qFVVBzXxR2V
- Emergent Abilities of Large Language Models https://arxiv.org/abs/2206.07682
A favourite discussed recently:
- Theory of Mind May Have Spontaneously Emerged in Large Language Models https://arxiv.org/abs/2302.02083
synth_mania t1_j99njvy wrote
Dude, the first image classification/recognition programs used perceptrons, the first model of a neuron. In other words, image classification has been done with neural networks since the very beginning.
thecodethinker t1_j9a4mvo wrote
Yeah, exactly my point about image classification. We’ve had it for a long time already.
synth_mania t1_j9cug1p wrote
My point was that you said image classification has been around since before NNs. That is false. Image classification has only ever been done with NNs. Sometimes they are radically different than what is normally used today (e.g. RAMnets and WISARD), but they've always been NNs.
Borrowedshorts t1_j9cxi3r wrote
I'd really like to see more realistic ground (contact) physics with different textures and terrains. Someone might walk differently in a desert vs. a forest vs. a snow environment, for example. If there's debris on the ground, such as small rocks, the character may need to adjust foot contact to compensate. Slopes could also be incorporated and modeled. Walking is the big one, but vehicle movement in these environments could also be drastically improved.
LegendOfHiddnTempl OP t1_j95ok8m wrote
>We present a general framework for the garment animation problem through unsupervised deep learning inspired by physically based simulation. Existing trends in the literature already explore this possibility. Nonetheless, these approaches do not handle cloth dynamics. Here, we propose the first methodology able to learn realistic cloth dynamics in an unsupervised manner, and hence a general formulation for neural cloth simulation. The key to achieving this is to adapt an existing optimization scheme for motion from simulation-based methodologies to deep learning. Then, analyzing the nature of the problem, we devise an architecture able to automatically disentangle static and dynamic cloth subspaces by design. We show how this improves model performance. Additionally, this opens the possibility of a novel motion augmentation technique that greatly improves generalization. Finally, we show it also allows controlling the level of motion in the predictions. This is a useful, never-before-seen tool for artists. We provide a detailed analysis of the problem to establish the bases of neural cloth simulation and guide future research into the specifics of this domain. arxiv.org
>
>github.com/hbertiche/NeuralClothSim
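For anyone wondering what “unsupervised” means here: the loss itself encodes the physics, so no ground-truth simulations are needed as labels. Below is a minimal sketch of the kind of physics-based loss used in this line of work; the paper’s actual terms, weights, and material model differ.

```python
import torch

def physics_loss(x_pred, x_prev, x_prev2, edges, rest_len,
                 mass=1.0, dt=1.0 / 30, k_stretch=10.0, g=9.81):
    """Sketch of an unsupervised, physics-based cloth loss.

    x_pred:   (V, 3) predicted vertex positions for the current frame
    x_prev:   (V, 3) positions one frame back
    x_prev2:  (V, 3) positions two frames back
    edges:    (E, 2) long tensor of vertex index pairs of the cloth mesh
    rest_len: (E,) rest length of each edge
    """
    # Stretch energy: edges should keep their rest length (mass-spring model).
    d = x_pred[edges[:, 0]] - x_pred[edges[:, 1]]
    stretch = k_stretch * ((d.norm(dim=-1) - rest_len) ** 2).sum()

    # Gravity potential: pulls the cloth down (z-up convention assumed).
    gravity = mass * g * x_pred[:, 2].sum()

    # Inertia: penalize deviation from the ballistic continuation of the
    # previous motion; this term is what makes the result dynamic rather
    # than a quasi-static drape.
    x_inertial = 2.0 * x_prev - x_prev2
    inertia = (mass / dt ** 2) * ((x_pred - x_inertial) ** 2).sum()

    # Minimizing this total energy requires no ground-truth cloth data.
    return stretch + gravity + inertia
```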
Lust4Me t1_j95u5zl wrote
relevant to post from earlier this week:
Kumacyin t1_j9889u0 wrote
what about clipping? from the users' point of view, we're gonna focus on the stuff we can notice right away, and one of the biggest is clipping, which shows up wherever you mix large motions with object collisions
ixent t1_j98xi5g wrote
Same concern for me. All the great cloth simulations I've seen in games have weird clipping issues.
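For what it's worth, methods in this area typically handle clipping softly, with a collision penalty in the loss rather than a hard constraint, which is why penetrations can still slip through. A rough sketch of such a penalty (nearest-neighbor signed distance; a hypothetical form, not this paper's exact formulation):

```python
import torch

def collision_penalty(x_cloth, x_body, normals, eps=2e-3):
    """Sketch of a soft body-cloth collision penalty (hypothetical form).

    x_cloth: (Vc, 3) cloth vertex positions
    x_body:  (Vb, 3) body vertex positions
    normals: (Vb, 3) outward body vertex normals
    """
    # Nearest body vertex for each cloth vertex (brute force for clarity).
    dists = torch.cdist(x_cloth, x_body)  # (Vc, Vb)
    idx = dists.argmin(dim=1)
    # Signed distance along the body normal; negative means penetration.
    sdf = ((x_cloth - x_body[idx]) * normals[idx]).sum(dim=-1)
    # Penalize any vertex closer than a small safety margin eps.
    return torch.relu(eps - sdf).pow(2).sum()
```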
Sir_Rade t1_j964nxx wrote
Cool paper, thanks for sharing!
blablanonymous t1_j982taj wrote
Damn, the more you know… what does the loss function look like for this problem?
mskogly t1_j9edst6 wrote
So are we putting «neural» in front of random things now to get traction? Looks like normal physics simulation. Where does the «neural» fit in?
thecodethinker t1_j96dsn8 wrote
I bet stuff like this is gonna be the biggest real life use case for neural networks.
Faster, more portable physics simulations.
We can get effectively infinite training data from naive physics algorithms, then train a model to approximate them at a fraction of the cost.
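A toy sketch of that idea, with a 1D spring standing in for the "naive physics algorithm" (all specifics below are placeholder choices):

```python
import numpy as np
import torch
import torch.nn as nn

def spring_step(x, v, k=5.0, dt=0.01):
    """Naive explicit-Euler step of a 1D harmonic oscillator."""
    a = -k * x
    return x + v * dt, v + a * dt

# "Infinite" training data: label random states with the classical integrator.
states = np.random.uniform(-1, 1, size=(10_000, 2)).astype(np.float32)
targets = np.stack(spring_step(states[:, 0], states[:, 1]), axis=1)

# Fit a small network to imitate the integrator (state -> next state).
model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X, Y = torch.from_numpy(states), torch.from_numpy(targets)

for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), Y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.2e}")  # the net now emulates the integrator
```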