neuralbeans
neuralbeans t1_jdmo56v wrote
Reply to comment by OriginalCompetitive in What happens if it turns out that being human is not that difficult to duplicate in a machine? What if we're just ... well ... copyable? by RamaSchneider
No, I mean: what is consciousness?
neuralbeans t1_jdmarno wrote
Reply to comment by OriginalCompetitive in What happens if it turns out that being human is not that difficult to duplicate in a machine? What if we're just ... well ... copyable? by RamaSchneider
What does that mean?
neuralbeans t1_jdm9hx8 wrote
Reply to comment by aught4naught in What happens if it turns out that being human is not that difficult to duplicate in a machine? What if we're just ... well ... copyable? by RamaSchneider
Can we test for consciousness?
neuralbeans t1_jdljp0v wrote
Reply to What happens if it turns out that being human is not that difficult to duplicate in a machine? What if we're just ... well ... copyable? by RamaSchneider
I can't think of why we would be different from a very complex computer.
neuralbeans t1_jbjizpw wrote
Reply to comment by BamaDane in Can feature engineering avoid overfitting? by Constant-Cranberry29
It's a degenerate case, not something anyone should actually do. If you include Y in your input, then even overfitting will lead to the best possible generalisation. This shows that the input does affect overfitting. In fact, the more similar the input is to the output, the simpler the model can be and thus the less it can overfit.
neuralbeans t1_jbiygo0 wrote
Reply to comment by Constant-Cranberry29 in Can feature engineering avoid overfitting? by Constant-Cranberry29
Well selection is part of engineering, is it not?
neuralbeans t1_jbiu3io wrote
Yes, if the features include the model's target output: overfitting would then just result in the model outputting that feature as-is. Of course this is a useless solution, but the more similar the features are to the output, the less of a problem overfitting will be and the less data you will need to generalise.
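Here's a minimal sketch of that degenerate case; the dataset and the off-the-shelf sklearn model are made-up stand-ins:

```python
# Sketch of the degenerate case: leaking the target y into the features.
# The dataset and model here are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # genuine features
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=1000)

X_leaky = np.column_stack([X, y])               # "feature" that is literally y

X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)

# Near-perfect test score: the model only has to copy the leaked column,
# so fitting the training data as closely as possible generalises trivially.
print(model.score(X_te, y_te))                  # ~1.0
```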
neuralbeans t1_j7jdiqz wrote
Reply to comment by beautyofdeduction in Why does my Transformer blow GPU memory? by beautyofdeduction
A sequence length of 6250 is massive! It's not just 6250*6250, since you're not multiplying one float per pair of sequence items. You're taking the dot product of the query and key vectors for every pair of sequence items, and this is done for every attention head (in parallel). I think you're seriously underestimating the problem.
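For a rough sense of scale (batch size, head count, and float32 are assumptions on my part):

```python
# Back-of-the-envelope memory for the attention score matrices alone
# at sequence length 6250. Batch size, heads, and dtype are assumed.
seq_len, batch, heads, bytes_per_float = 6250, 8, 12, 4

# One (seq_len x seq_len) matrix per head per batch element, and usually
# a second copy for the post-softmax attention weights.
scores = 2 * batch * heads * seq_len**2 * bytes_per_float
print(f"{scores / 1e9:.0f} GB per layer, before gradients")  # ~30 GB
```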
What transformer is this that accepts a sequence length of 6250?
neuralbeans t1_j7f68rv wrote
Parameters are a tiny portion of the values in GPU memory. The number of activations grows quadratically with sequence length.
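To make that concrete, here's a toy comparison with assumed model dimensions; the point is that the parameter count is fixed while the attention activations quadruple every time the sequence length doubles:

```python
# Sketch: parameters are constant, attention activations grow quadratically.
# d_model, layer count, heads, and batch size are assumed values.
d_model, n_layers, n_heads, batch = 512, 6, 8, 1

# Rough per-layer parameters: 4*d^2 (attention) + 8*d^2 (MLP) = 12*d^2.
params = n_layers * 12 * d_model**2

for seq_len in (512, 1024, 2048, 4096):
    attn_activations = n_layers * batch * n_heads * seq_len**2
    print(f"seq={seq_len}: params={params:,} attn_activations={attn_activations:,}")
```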
neuralbeans t1_j6wqxwb wrote
Reply to New n=987 study into coulrophobia (the fear of clowns) suggests its main causes are clowns' unpredictability, their illness-like makeup, and prior media exposure. by fotogneric
How do you determine the causes of a phobia?
neuralbeans OP t1_j6nmccc wrote
Reply to comment by No_Cryptographer9806 in Best practice for capping a softmax by neuralbeans
It's for reinforcement learning to keep the model exploring possibilities.
neuralbeans OP t1_j6n0ima wrote
Reply to comment by chatterbox272 in Best practice for capping a softmax by neuralbeans
I want the output to remain a proper distribution.
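One way to do that (a sketch, not necessarily best practice; the epsilon value is an assumption) is to mix the softmax output with a uniform distribution, which floors every probability while still summing to 1:

```python
# Sketch: floor the probabilities while keeping a proper distribution,
# by taking a convex combination of the softmax with a uniform. epsilon is assumed.
import numpy as np

def capped_softmax(logits, epsilon=0.05):
    z = logits - logits.max()               # for numerical stability
    p = np.exp(z) / np.exp(z).sum()         # ordinary softmax
    k = len(p)
    return (1 - epsilon) * p + epsilon / k  # sums to 1, every entry >= epsilon/k

p = capped_softmax(np.array([10.0, 0.0, -5.0]))
print(p, p.sum())
```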
neuralbeans OP t1_j6mjhog wrote
Reply to comment by emilrocks888 in Best practice for capping a softmax by neuralbeans
What's this about del attention?
neuralbeans OP t1_j6miw6o wrote
Reply to comment by Lankyie in Best practice for capping a softmax by neuralbeans
It needs to remain a valid softmax distribution.
neuralbeans OP t1_j6md46u wrote
Reply to comment by like_a_tensor in Best practice for capping a softmax by neuralbeans
That will just make the model learn larger logits to undo the effect of the temperature.
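A tiny demo of why (the numbers are arbitrary): if training scales the logits up by the same factor T, the temperature-scaled softmax is identical to the original one, so nothing is actually capped.

```python
# Sketch: a fixed temperature T can be undone if the model learns logits
# that are T times larger. The example values are arbitrary.
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

z = np.array([2.0, 1.0, -1.0])
T = 4.0
print(softmax(z))            # original distribution
print(softmax((T * z) / T))  # scaled-up logits divided by T: identical
```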
neuralbeans t1_j3bx6e5 wrote
Reply to AI, the so called "self thinking" machine. by Bakariiin
AlphaGo does not use a large database of moves. If anything, the reason winning at Go was so impressive is that there are far too many possible positions to solve it the way you're describing. It learned to play by practicing against itself using reinforcement learning.
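Schematically, the self-play loop looks something like this; everything here is a toy stand-in (the real system trains a neural network on the self-play games, which is stubbed out below):

```python
# Toy schematic of AlphaGo-style self-play: train a challenger on games
# against the current policy and keep it only if it wins often enough.
# The "policy" is a stand-in scalar; real training updates a network.
import random

def play_game(skill_a, skill_b):
    """Toy game: the more skilled side tends to win. Returns 1 if A wins."""
    return 1 if random.random() < skill_a / (skill_a + skill_b) else -1

policy = 1.0
for generation in range(10):
    challenger = policy * 1.1          # stand-in for learning from self-play data
    wins = sum(play_game(challenger, policy) == 1 for _ in range(400))
    if wins / 400 > 0.55:              # promote only if it beats its old self
        policy = challenger
print(f"final policy skill: {policy:.2f}")
```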
neuralbeans t1_j2cj8xp wrote
Reply to comment by thisoldmould in Do nerve endings closer to the brain / spinal cord take less time to transmit signals because there is less distance to travel? by ssinatra3
Is delayed pain perception a problem for very tall people, such as those with gigantism?
neuralbeans t1_j2ciyir wrote
Reply to comment by matticitt in Do nerve endings closer to the brain / spinal cord take less time to transmit signals because there is less distance to travel? by ssinatra3
How do they get synchronised? Is it just a matter of there being discrete time steps when the signals get processed? Do the time steps get longer as you grow taller in order to accommodate the longest nerves?
neuralbeans t1_j1tlj8i wrote
Reply to AI and education by lenhoi
Is it that different from using calculators in primary school mathematics? I'd like to say we need to come up with higher-level assessment tasks that computers can't do, but I think we'll run out of options in a few years. Better to just make the point that, even though AI will (in the near future) be able to do a student's work, it is still important for students to learn to do what the AI can also do. Unfortunately, that means using in-class tests only.
neuralbeans t1_ivyhjxo wrote
Usually it's whatever the experimenter likes using, together with a little tuning of the numbers.
neuralbeans t1_iuixn9p wrote
Reply to comment by NV_91 in A MMORPG with an infinite leveling system? by [deleted]
I'm stuck wondering whether there has ever been a game available on any platform except PC.
neuralbeans t1_is71aaq wrote
Is what you do a translation from specification to code? What is your profession, exactly? How were you doing this task before deep learning was a thing?
neuralbeans t1_irzgtyz wrote
Reply to comment by Voice_of_Humanity in Will the Internet be free in the future? by redingerforcongress
It's clear that OP meant free from interference. I'm not sure where you got the free-as-in-price reading from.
neuralbeans t1_jdnscjk wrote
Reply to comment by OriginalCompetitive in What happens if it turns out that being human is not that difficult to duplicate in a machine? What if we're just ... well ... copyable? by RamaSchneider
Would you be able to tell if I didn't?