Cogwheel

Cogwheel t1_jaf3jb2 wrote

I moved a couple years ago from an area of California where plastic bags are banned for most uses to Nevada. People seem surprised whenever we bring our reusable bags, even at "healthy" stores like TJ's and Sprouts. And when we forget them, it seems like the cashiers are trying to prop up their stock in bag manufacturers: they'll put 2-3 items in each bag, and it gives me heart palpitations.

0

Cogwheel OP t1_j8wn063 wrote

What crawled up your ass and died? If I were really taking that bit seriously, do you think I would've written "through various leaps of logic and 'faith'"? Do you really think there's no value in the overall conversation, or were you just triggered?

So many other people have actually answered the question that your response seems completely asinine.

1

Cogwheel OP t1_j8o2st4 wrote

> What most models are doing now is much more efficient, practical and reliable than what I described. Though it doesn't exactly reproduce how we learn things. But that's probably not what most people would want in their models. They prefer more efficient, practical and reliable models.

Yeah, I guess the distinction here is whether one is using an ML model as a means to an end or as an end in itself. I imagine a researcher interested in AGI would be much more likely to take this kind of approach than someone trying to sell their ML models to industry.

Edit: anyone care to discuss why you downvoted?

−2

Cogwheel OP t1_j8o29pz wrote

>1. Distributed models would have to be updated. How do we update weights from two sources? (There might be options for this, I haven't looked.)

This strikes me as more of a software/hardware engineering challenge than one of network and training architecture. Definitely a challenge, though.
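To illustrate what I mean, the naive approach would be something like federated averaging. Here's a toy sketch (the function name and numbers are mine; a real system would weight by sample counts, handle stale updates, etc.):

```python
import numpy as np

def merge_updates(weights, delta_a, delta_b):
    # Toy reconciliation of weight updates from two sources: each delta
    # is the change one source wants applied, and we just average them.
    return weights + (delta_a + delta_b) / 2.0

w = np.zeros(3)
w = merge_updates(w, np.array([0.2, 0.0, -0.1]), np.array([0.0, 0.4, 0.1]))
print(w)  # [0.1 0.2 0. ]
```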

>2. Potential for undesirable and unstable predictions/generations.

I think the same is true for humans: given enough "perverse" inputs, we can all go crazy. So it's definitely something to think about and mitigate; there would need to be components built to counteract these "forces".

>3. I think you'd have to allow the weights to update pretty dramatically at each inference to get any real variation. I think this would lead to #2

Interesting point... An ML model performs inference in discrete steps separated by measurable gaps (milliseconds for realtime perception systems, seconds to minutes for things like ChatGPT), whereas animals experience essentially continuous input. Our eyes alone present us with many Mbps of data, as it were.

So without these vast swathes of data constantly being fed in, the alternative is to make bigger changes based on the limited data.
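As a toy illustration of that trade-off (my own made-up numbers, not from any real system), imagine scaling the per-update learning rate inversely with how often input arrives:

```python
import numpy as np

def online_step(weights, grad, updates_per_second):
    # Toy heuristic: the sparser the input stream, the bigger each
    # individual update must be to produce comparable drift over time.
    lr = 1.0 / updates_per_second
    return weights - lr * grad

w = np.ones(3)
g = np.array([0.1, -0.2, 0.3])
print(online_step(w, g, updates_per_second=1000.0))  # near-continuous input: tiny nudges
print(online_step(w, g, updates_per_second=0.1))     # sparse input: big jumps
```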

>4. Attention components probably do what you're looking for more accurately and efficiently.

Attention had crossed my mind when I posted this. I agree it's intended to accomplish a kind of weight redistribution based on previous input, but I still think it's more superficial/ephemeral than what I'm asking about. We certainly have attention mechanisms in our brains, but those mechanisms are subject to the same kinds of change over time as everything else.
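For concreteness, scaled dot-product attention fits in a few lines (a minimal numpy sketch of my own, not any library's actual API). The attention matrix is recomputed from scratch for every input and then discarded; only the projection matrices persist, and those change only during training:

```python
import numpy as np

def attention(X, W_q, W_k, W_v):
    # Scaled dot-product attention over a sequence X of token vectors.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax over each row: the attention matrix A is rebuilt from
    # scratch for every input and thrown away afterward.
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V  # W_q/W_k/W_v persist; A is ephemeral

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 tokens, 8 dims
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(X, W_q, W_k, W_v).shape)    # (5, 8)
```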

2

Cogwheel t1_irwdfyl wrote

I think the fundamental difference you're pointing out is that a brain's weights change over time, and those changes are influenced by factors beyond the structure and function of the neurons themselves. Maybe this kind of thing is necessary for consciousness, but I don't think it really changes the argument.

We don't normally think of the weights changing over time in a neural net application, but that's exactly what's happening when it goes through training. Perhaps future sentient AIs will have some sort of ongoing feedback/backpropagation during operation.

And because of the space/time duality for computation, we can also imagine implementing these changes over time as just a very large sequence of static elements that differ over space.
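Here's a minimal sketch of what I mean by that duality (a toy example I'm adding, nothing standard): run one network whose weights drift between steps, then replay the same computation as a static stack of frozen layers taken from the snapshots. The results match exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=4)

# "Time" view: one network whose weights drift between steps.
W = rng.normal(size=(4, 4))
snapshots = []
x = x0
for _ in range(3):
    snapshots.append(W.copy())                # freeze the weights at this instant
    x = np.tanh(W @ x)                        # one inference step
    W = W + 0.01 * rng.normal(size=W.shape)   # stand-in for ongoing learning
time_result = x

# "Space" view: the same weights laid out as a static stack of layers.
x = x0
for W_t in snapshots:
    x = np.tanh(W_t @ x)
space_result = x

assert np.allclose(time_result, space_result)
```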

So I still don't see any reason this refutes the idea that the operations in the brain can be represented by math we already understand, or that brains are described by biochemical processes.

Edit: removed redundant words that could be removed for redundancy.

1

Cogwheel t1_irwa4ld wrote

I don't see how any of this refutes my original point. If there are unknown quantum effects taking place in the brain, they are part of the biochemistry, not separate from it.

And afaik, quantum mechanics is perfectly happy being represented as matrix operations (albeit with shitty space complexity).
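E.g., an n-qubit state is just a length-2^n complex vector, and gates are unitary matrices. Here's a quick numpy sketch (my own toy example) showing where the space blow-up comes from:

```python
import numpy as np

# A 1-qubit Hadamard gate as a plain 2x2 unitary matrix.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                      # |000>

# Applying H to every qubit means building a Kronecker product,
# which is where the terrible space complexity comes from: the
# full operator is 2^n x 2^n.
op = H
for _ in range(n - 1):
    op = np.kron(op, H)
state = op @ state
print(state.real)                   # uniform superposition, 1/sqrt(8) each
```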

1

Cogwheel t1_irw6cmn wrote

This is straight-up quantum mysticism. Quantum mechanics is a rigorous theory that explains the underpinnings of the electrochemical processes in everything, including brains.

Why would there be some fundamental force of the universe that only appears in brains?

To the extent any unknown quantum interactions exist, they would have to be negligible.

1