Cogwheel
Cogwheel OP t1_j8wn063 wrote
Reply to comment by No-Intern2507 in [D] Is anyone working on ML models that infer and train at the same time? by Cogwheel
what crawled up your ass and died? if I was really taking that bit seriously do you think I would've written "through various leaps of logic and 'faith'"? Do you really think there's no value in the overall conversation or were you just triggered?
So many other people have actually answered the question that your response seems completely asinine
Cogwheel OP t1_j8o4qci wrote
Reply to comment by crt09 in [D] Is anyone working on ML models that infer and train at the same time? by Cogwheel
Thanks! This seems to be the term I was looking for.
Cogwheel OP t1_j8o2st4 wrote
Reply to comment by HyugenAI in [D] Is anyone working on ML models that infer and train at the same time? by Cogwheel
> What most models are doing now is much more efficient, practical and reliable than what I described. Though it doesn't exactly reproduce how we learn things. But that's probably not what most people would want in their models. They prefer more efficient, practical and reliable models.
Yeah, I guess the distinction here is whether one is using an ML model as a means to an end or as an end in itself. I imagine a researcher interested in AGI would be much more likely to take this kind of approach than someone trying to sell their ML models to industry.
Edit: anyone care to discuss why you downvoted?
Cogwheel OP t1_j8o29pz wrote
Reply to comment by CabSauce in [D] Is anyone working on ML models that infer and train at the same time? by Cogwheel
>1. Distributed models would have to be updated. How do we update weights from two sources? (There might be options for this, I haven't looked.)
This strikes me as more of a software/hardware engineering challenge than one of network and training architecture. Definitely a challenge though. For what it's worth, there's prior art here: federated learning reconciles updates from many sources. Here's a minimal sketch of the idea (the function name and weighting scheme are my own, just in the spirit of federated averaging, not any particular library's API):
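import numpy as np

def merge_updates(weights, delta_a, delta_b, n_a, n_b):
    """Reconcile weight deltas from two sources by averaging them,
    weighted by how much data each source trained on (FedAvg-style)."""
    merged = (n_a * delta_a + n_b * delta_b) / (n_a + n_b)
    return weights + merged

# e.g., two nodes each propose an update to the same shared weights
w = np.zeros(4)
w = merge_updates(w, np.array([0.2, 0.0, 0.0, 0.0]),
                     np.array([0.0, 0.4, 0.0, 0.0]), n_a=100, n_b=50)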
>2. Potential for undesirable and unstable predictions/generations.
I think the same is true for humans. Given enough "perverse" inputs we can all go crazy. So it's definitely something to think about and mitigate; there would need to be components built to work against these "forces".
>3. I think you'd have to allow the weights to update pretty dramatically at each inference to get any real variation. I think this would lead to #2
Interesting point... The time between acts of inference in an ML model is discrete and comparatively long (milliseconds for realtime perception systems, seconds to minutes for things like ChatGPT), whereas animals experience essentially continuous input. Our eyes alone present us with many Mbps of data, as it were.
So without those vast swathes of data constantly being fed in, the alternative is to make bigger changes based on the limited data. Concretely, the kind of loop I have in mind looks something like this (a rough PyTorch-style sketch; the model, loss, and learning rate are placeholders, not a real system):
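import torch

# Placeholder model/optimizer; any differentiable model would do.
model = torch.nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.MSELoss()

def infer_and_train(x, feedback):
    """One act of inference immediately followed by one weight update."""
    y = model(x)                 # inference
    loss = loss_fn(y, feedback)  # feedback signal observed alongside the input
    optimizer.zero_grad()
    loss.backward()              # train on this single observation
    optimizer.step()
    return y.detach()

# The model's weights drift with every observation it processes:
out = infer_and_train(torch.randn(16), torch.randn(4))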
>4. Attention components probably do what you're looking for more accurately and efficiently.
Attention had crossed my mind when I posted this. I agree its intention is to accomplish a kind of weight redistribution based on previous input. But I still think this is more superficial/ephemeral than what I'm asking about: in scaled dot-product attention the "weights" are recomputed from scratch for every input and nothing persists afterward (textbook sketch in NumPy below). Humans certainly have attention mechanisms in our brains, but those attention mechanisms are subject to the same kinds of changes over time as the rest.
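import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """The attention 'weights' here are a pure function of the current
    inputs; nothing persists between acts of inference."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V                                   # input-dependent mixing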
Submitted by Cogwheel t3_113448t in MachineLearning
Cogwheel t1_j7h3xi6 wrote
Reply to comment by Frumpagumpus in Does the high dimensionality of AI systems that model the real world tell us something about the abstract space of ideas? [D] by Frumpagumpus
> (also i don't understand why the downvotes)
I will never understand Reddit's downvote behavior. It's clearly not just bots... It seems some people just can't stand honest curiosity, not already knowing what they know, etc.
Cogwheel t1_j6biglp wrote
Reply to comment by glasseyepatch in Milk for display purposes only by ProjectMew
By campylobacter
Cogwheel t1_iufo9ip wrote
Reply to comment by tatakatakashi in My boyfriend and my’s Halloween costume this year by tara_constance
Or they write "My boyfriend and I's costume"
Cogwheel t1_irwdfyl wrote
Reply to comment by mixelydian in [D] Is it possible for an artificial neural network to become sentient? by talkingtoai
I think the fundamental difference you're pointing out is that a brain's weights change over time, and those changes are influenced by factors beyond the structure and function of the neurons themselves. Maybe this kind of thing is necessary for consciousness, but I don't think it really changes the argument.
We don't normally think of the weights changing over time in a neural net application, but that's exactly what's happening when it goes through training. Perhaps future sentient AIs will have some sort of ongoing feedback/backpropagation during operation.
And because of the space/time duality for computation, we can also imagine implementing these changes over time as just a very large sequence of static elements that differ over space.
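As a toy illustration of that duality (the update rule is entirely made up): iterating one mutable state over time gives the same result as composing a static chain of stages over space:

from functools import reduce

def step(w, x):
    """One toy 'moment' of learning: nudge the weight toward the input."""
    return w + 0.1 * (x - w)

inputs = [1.0, 3.0, 2.0]

# Over time: a single mutable weight, updated at each time step.
w = 0.0
for x in inputs:
    w = step(w, x)

# Over space: the same computation unrolled into a static chain of stages.
w_unrolled = reduce(step, inputs, 0.0)

assert w == w_unrolled  # identical sequence of operations, identical result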
So I still don't see any reason this refutes the idea that the operations in the brain can be represented by math we already understand, or that brains are described by biochemical processes.
Edit: removed redundant words that could be removed for redundancy
Cogwheel t1_irwa4ld wrote
Reply to comment by gravitas_shortage in [D] Is it possible for an artificial neural network to become sentient? by talkingtoai
I don't see how any of this refutes my original point. If there are unknown quantum effects taking place in the brain, they are part of the biochemistry, not separate from it.
And afaik, quantum mechanics is perfectly happy being represented as matrix operations (albeit with shitty space complexity). For instance, in standard textbook form: an n-qubit state is a length-2^n complex vector, a gate is a unitary matrix, and time evolution is just matrix multiplication, with the space cost growing exponentially in n:
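import numpy as np

n = 3                                   # 3 qubits -> state vector of length 2**3 = 8
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                          # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)

# Acting on one qubit still means building a full 2**n x 2**n operator.
U = np.kron(H, np.kron(I2, I2))
state = U @ state                       # time evolution = matrix multiplication

# The "shitty space complexity": 30 qubits already needs a 2**30-entry
# vector, and dense operators would be 2**30 x 2**30.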
Cogwheel t1_irw6cmn wrote
Reply to comment by gravitas_shortage in [D] Is it possible for an artificial neural network to become sentient? by talkingtoai
This is straight-up quantum mysticism. Quantum mechanics is a rigorous theory that explains the underpinnings of the electro-chemical processes in everything, including brains.
Why would there be some fundamental force of the universe that only appears in brains?
To the extent any unknown quantum interactions exist, they would have to be negligible.
Cogwheel t1_irw5q6u wrote
Reply to comment by mixelydian in [D] Is it possible for an artificial neural network to become sentient? by talkingtoai
I'm not sure I understand. We know that the thing between an animal's senses and its behaviors is the nervous system. Just because we don't know all the details of the process doesn't mean the things we do know are wrong.
Cogwheel t1_iruw3c8 wrote
Reply to comment by Mysterious_Radish_14 in [D] Is it possible for an artificial neural network to become sentient? by talkingtoai
O.o in what other manner do brains work besides biochemical??
Cogwheel t1_irurt83 wrote
Reply to comment by Artgor in [D] Is it possible for an artificial neural network to become sentient? by talkingtoai
And brains are biochemical reactions and action potentials. Not sure what this is trying to say.
Cogwheel t1_jaf3jb2 wrote
Reply to CVS is getting a bit sassy by mbz321
A couple years ago I moved to Nevada from an area of California where plastic bags are banned for most uses. People seem surprised whenever we bring our reusable bags, even at "healthy" stores like TJ's and Sprouts. And when we forget them, it seems like the cashiers are trying to prop up their stock in bag manufacturers: they'll put 2-3 items in each bag, and it gives me heart palpitations.