jms4607 t1_j8693ex wrote
Reply to comment by Mobile-Bird-6908 in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
YOLOv3 would shine.
jms4607 t1_j2gwke0 wrote
Reply to [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
A lot of classical algorithms are not differentiable, so you can't expect a differentiable model to do any better than approximate them. Reinforcement learning, though, lets you learn a non-differentiable algorithm.
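A minimal sketch of that point (my own illustration in PyTorch; the single "target action" standing in for a classical algorithm's output is made up): a score-function (REINFORCE) gradient can optimize a policy even though the reward comes from a hard, non-differentiable check that backprop could not flow through.

```python
# Sketch: optimize through a non-differentiable reward with REINFORCE.
import torch

torch.manual_seed(0)
logits = torch.zeros(4, requires_grad=True)   # policy over 4 discrete actions
target_action = 2                             # stand-in for the "correct" classical output
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(300):
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                    # discrete sample, not differentiable
    reward = 1.0 if action.item() == target_action else 0.0  # hard, non-differentiable check
    loss = -dist.log_prob(action) * reward    # score-function (REINFORCE) estimator
    opt.zero_grad()
    loss.backward()
    opt.step()

print(logits.softmax(-1))  # probability mass concentrates on action 2
```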
jms4607 t1_j1s103c wrote
Reply to comment by PolywogowyloP in [R] The Forward-Forward Algorithm: Some Preliminary Investigations [Geoffrey Hinton] by shitboots
Are there any problems with the reparam trick?
jms4607 t1_j0im19u wrote
Reply to comment by Aggravating-Act-1092 in [D] What kind of effects ChatGPT or future developments may have on job market? by ureepamuree
Human labor (even labor that requires intelligence) is going to be replaceable with electricity. I honestly struggle to see how this isn't going to create wealth disparity like we have never seen.
jms4607 t1_j0d65c3 wrote
Reply to comment by ReginaldIII in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
- Projecting can be interpolation, which these models are capable of. There are a handful of image/text models that can imagine/project an image of a puppy wearing a sailor hat.
- All you need to do is have continuous sensory input in your RL environment and/or include a cost or delay of thought in the actions, which has been implemented in research, to resolve your f(x) = 2x issue.
- The cat example is only ridiculous because it obviously isn't a cat. If we can't reasonably prove that something is or isn't a cat, then asking whether it is a cat isn't a question worth considering. The same idea goes for the question "is ChatGPT capturing some aspect of human cognition?" If we can't prove that our brains work in a functionally different way that can't be approximated to an arbitrary degree by an ML model, then it isn't worth arguing about. I don't think we know enough about neuroscience to state we aren't just doing latent interpolation to optimize some objective.
- The simba is only cute because you think it is cute. If we trained an accompanying text model for the simba function, where it was given the training data "you are cute" in different forms, it would probably respond yes if asked whether it was cute. GPT-3 or ChatGPT can refer to and make statements about itself.
At least agree that evolution on Earth and human actions are nothing but a MARL POMDP environment.
jms4607 t1_j0cva57 wrote
Reply to comment by ReginaldIII in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
I don't think we know enough about the human brain to say we aren't doing something very similar ourselves. At least 90% of human brain development has been to optimize E[agents with my DNA in the future]. Our brains are basically embedding our sensory input into a compressed latent internal state, then sampling actions to optimize some objective.
jms4607 t1_j0cqu0a wrote
Reply to comment by ReginaldIII in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
I'd argue that if ChatGPT were fine-tuned with RL based on the responses of a human (for example, if its goal as a debater AI were to make humans less confident in their beliefs by arguing the contrary in a conversation), then it arguably has awareness of intent. Is this not possible in the training scheme of ChatGPT? I looked into how they use RL right now, and I agree it is just fine-tuning toward human-like responses, but I think a different reward function could elicit awareness of intent.
jms4607 t1_j0ckrj0 wrote
Reply to comment by ReginaldIII in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
You're only able to sample something from the manifold you have been trained on.
jms4607 t1_j09860v wrote
You could argue an LLM trained with RL, like ChatGPT, has intent in that it is aware it is acting in an MDP and needs to take purposeful action.
jms4607 t1_iz38c09 wrote
Reply to comment by Commyende in [R] The Forward-Forward Algorithm: Some Preliminary Investigations [Geoffrey Hinton] by shitboots
I think the pos/neg here is more like contrastive learning.
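Roughly what I mean, as a toy sketch (my own code, not the paper's; the threshold value and the random stand-in inputs are placeholders): each layer is trained layer-locally to give high "goodness" to positive data and low goodness to negative data, which has a contrastive flavor.

```python
# Sketch of a layer-local positive/negative "goodness" objective.
import torch
import torch.nn.functional as F

layer = torch.nn.Linear(784, 512)
theta = 2.0  # assumed goodness threshold (hyperparameter)

def goodness(x):
    # mean squared activation of the layer's output
    return F.relu(layer(x)).pow(2).mean(dim=1)

x_pos = torch.randn(32, 784)  # stand-in for real data (e.g. image + correct label)
x_neg = torch.randn(32, 784)  # stand-in for corrupted data / wrong label

# push goodness above theta for positives, below theta for negatives
loss = F.softplus(theta - goodness(x_pos)).mean() + \
       F.softplus(goodness(x_neg) - theta).mean()
loss.backward()
```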
jms4607 t1_iy7doah wrote
Reply to comment by [deleted] in Is coding from scratch a requirement to be able to do research? [D] by [deleted]
For an applied paper, yeah, you still have to set it up in the right way.
jms4607 t1_ivmqkkt wrote
Reply to comment by terminal_object in [D] Academia: The highest funded plagiarist is also an AI ethicist by [deleted]
AI ethicist backpropagates toy-size MLP by hand (hard)
jms4607 t1_iqxuph2 wrote
Reply to comment by bushrod in [D] Why restrict to using a linear function to represent neurons? by MLNoober
Generalization out of distribution might be the biggest thing holding back ML right now. It's worth thinking about whether the priors we encode in neural nets today are to blame. A large MLP is required just to approximate a single biological neuron. Maybe the per-unit additive nonlinearity we use now is too simple. I'm sure there is a sweet spot between complex interactions with few neurons and simple interactions with many neurons.
jms4607 t1_iqxg1vm wrote
Reply to comment by bushrod in [D] Why restrict to using a linear function to represent neurons? by MLNoober
It's not a clear answer. Our neurons actually have multiplicative effects, not only additive ones. I think the paper that discusses this is the Active Dendrites one (something about catastrophic forgetting). The real reason we don't use polynomials is the combinatorial scaling of a d-variable polynomial. However, an MLP cannot approximate y = x^2 to arbitrary accuracy on (-inf, inf), no matter how large the network. I can think of a proof of this for sigmoid, tanh, and ReLU activations. A polynomial kernel (x^0, x^1, …, x^n) could fit y = x^2 perfectly, however. An MLP that allowed you to multiply two inputs at each neuron could also learn the function perfectly. I'd be interested in papers that use multiple activation functions and allow input interactions while enforcing Occam's razor through weight regularization or something. I'm sure nets like that would generalize better.
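A quick numerical illustration of that claim (my own toy example; the network size and training range are arbitrary): a linear model over polynomial features (1, x, x^2) recovers y = x^2 exactly, even far outside the training interval, while a fixed-size ReLU MLP is piecewise linear and only approximates it where it was trained.

```python
# Polynomial features fit y = x^2 exactly; a ReLU MLP only approximates it locally.
import numpy as np
import torch

x_train = np.linspace(-2, 2, 200)
y_train = x_train ** 2

# Exact fit: least squares over the features (1, x, x^2).
phi = np.stack([np.ones_like(x_train), x_train, x_train ** 2], axis=1)
w, *_ = np.linalg.lstsq(phi, y_train, rcond=None)
print("poly weights:", w)                        # ~[0, 0, 1]
print("poly at x=10:", w @ [1.0, 10.0, 100.0])   # ~100, exact even far outside the training range

# Small ReLU MLP: fine on [-2, 2], but extrapolates linearly outside it.
mlp = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)
xt = torch.tensor(x_train, dtype=torch.float32).unsqueeze(1)
yt = torch.tensor(y_train, dtype=torch.float32).unsqueeze(1)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(mlp(xt), yt)
    loss.backward()
    opt.step()
print("mlp at x=10:", mlp(torch.tensor([[10.0]])).item())  # far from 100
```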
jms4607 t1_iqxdhw3 wrote
Reply to comment by MrFlufypants in [D] Why restrict to using a linear function to represent neurons? by MLNoober
Without activation functions, an MLP would just be y = sum(m·x) + b; stacking linear layers collapses to a single linear map.
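A tiny check of why (my own example in PyTorch): two stacked linear layers with no activation between them are exactly equivalent to a single linear layer.

```python
# Two linear layers with no activation collapse to one linear layer.
import torch

torch.manual_seed(0)
l1 = torch.nn.Linear(3, 5)
l2 = torch.nn.Linear(5, 2)

x = torch.randn(4, 3)
stacked = l2(l1(x))

# Collapse: W = W2 @ W1, b = W2 @ b1 + b2
W = l2.weight @ l1.weight
b = l2.weight @ l1.bias + l2.bias
collapsed = x @ W.T + b

print(torch.allclose(stacked, collapsed, atol=1e-6))  # True
```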
jms4607 t1_jdxd3hv wrote
Reply to [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
Makes me wonder if you could fine-tune by just incentivizing the first answer to be that one, with a general accuracy/review requirement.