Comments
Thatingles t1_ir106e5 wrote
I remain convinced AGI will emerge from linking together many modules, and one of those would of course be a world model module, but I don't think it's the final step. We still seem to be missing the components that would let an AI solve complex multi-step problems through a combination of memory and reasoning. I'm sure it will come, but this ain't it.
thruster_fuel69 t1_ir1bpx1 wrote
That's my meaty sense also. A world module is critical, but it's not the only requirement. That being said, the future is going to be exciting as heck! I can't wait for my worldly sage of an AI mentor.
berd021 t1_ir2j69u wrote
That is exactly what the world model is for, though. You can use it to perform a sequence of transformations whose length isn't specified beforehand; it stops performing steps as soon as the energy is reasonably minimized.
Compare that to AI now, which only performs as many steps as it has learnt.
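A rough sketch of the contrast berd021 describes, assuming a toy quadratic energy in place of a learned world-model energy (the energy function, threshold, and optimizer here are all invented for illustration): inference runs for however many steps it takes to minimize the energy, rather than a fixed number of learned layers.

```python
import torch

def energy(state, observation):
    # Toy energy: how badly the candidate state explains the observation.
    # A real world model would use a learned energy function instead.
    return ((state - observation) ** 2).sum()

def infer_by_energy_minimization(observation, threshold=1e-3, max_steps=10_000):
    """Refine the state until the energy is 'reasonably minimized'.
    The number of steps is not fixed in advance, unlike a feed-forward pass."""
    state = torch.zeros_like(observation, requires_grad=True)
    optimizer = torch.optim.SGD([state], lr=0.1)
    for step in range(max_steps):
        optimizer.zero_grad()
        e = energy(state, observation)
        if e.item() < threshold:          # stop as soon as energy is low enough
            return state.detach(), step
        e.backward()
        optimizer.step()
    return state.detach(), max_steps

obs = torch.tensor([1.0, -2.0, 0.5])
state, steps_used = infer_by_energy_minimization(obs)
print(f"converged in {steps_used} steps")  # step count depends on the input
```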
Snufflepuffster t1_ir1k2dt wrote
I have always considered that something approaching sentience could be made by having a network operating on top of smaller, task-specific nets. Operating on the activations of all these smaller nets could give the 'sentient' net a sense of the world around it, because it has access to their information. It can modulate each of the smaller subordinate nets on the fly, based on previous experiences, to make a decision. It can also identify the most pressing task to make a decision about in its surrounding environment. That's what LeCun is suggesting in this scholarly op-ed; it's not a new idea, more a question of computing power.
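A toy sketch of that arrangement, with everything beyond the comment's own description invented for illustration (the ControllerNet and ModulatedSubNet names, the gating scheme, the dimensions): a controller reads the activations of task-specific sub-nets and re-weights their contributions on the fly.

```python
import torch
import torch.nn as nn

class ModulatedSubNet(nn.Module):
    """One small task-specific net whose output the controller can scale."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, out_dim))

    def forward(self, x, gain):
        return gain * self.net(x)  # controller modulates this net's contribution

class ControllerNet(nn.Module):
    """Reads the activations of all sub-nets and decides how much weight
    each gets, a crude stand-in for the net 'operating on top'."""
    def __init__(self, num_subnets, out_dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(num_subnets * out_dim, num_subnets),
                                  nn.Softmax(dim=-1))

    def forward(self, subnet_outputs):
        return self.gate(torch.cat(subnet_outputs, dim=-1))  # one gain per sub-net

subnets = [ModulatedSubNet(8, 4) for _ in range(2)]
controller = ControllerNet(num_subnets=2, out_dim=4)

x = torch.randn(1, 8)
first_pass = [net(x, gain=1.0) for net in subnets]   # unmodulated activations
gains = controller(first_pass)                       # controller inspects them
decision = sum(net(x, gains[:, i:i + 1])             # modulated second pass
               for i, net in enumerate(subnets))
print(decision.shape)  # torch.Size([1, 4])
```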
afaik we haven't clearly defined what sentience is yet; if an AI bot can trick you into believing it's sentient, then what else is there? I guess this would just show that we have an information-processing limit, and once another entity approaches that limit we are fooled. This is a question for the humanities to answer, probably.
LastExitToSalvation t1_ir21ltj wrote
To your point about a network overlaying smaller nets, we could get to a point where awareness or quasi-sentience is an emergent phenomenon, not something we can build. Thinking about human consciousness, it is evident that our self-awareness is an emergent property of our biology. If we put enough of the right technology pieces together, perhaps we'll see the same thing in machines. And then we're left with a real ethical question: if we didn't create sentience but it merely occurred, do we have the moral right to shut it down?
Snufflepuffster t1_ir22k9m wrote
Yeah, eventually the emergent properties should be mostly contained in the self-supervised training signal, so it's a question of how the model learns, not necessarily its construction. As the bot learns more it can start to identify priority tasks to infer on, and this process just continues. The thing we're taking for granted is the environment that supplies all the stimulus from which self-awareness could be learned.
LastExitToSalvation t1_ir2g0ku wrote
Well that's the question though: is self-awareness learned (in which case our self-awareness is just linear algebra done by a meat computer), or is it a spontaneous event, like a wildfire catching hold, something more ephemeral? I suppose that's the humanities question: how are we going to define what is either contained in some component piece of the architecture or wholly distinct from it? If I take away my brain, my consciousness is gone. But if I take away my heart, it's the same result. Is a self-supervised training signal an analog for consciousness? I guess I think it will be something more than that, something uncontained but still dependent on the pieces.
Mike_0x t1_ir5gi4o wrote
I for one welcome our AI overlords.
Ebayednoob t1_ir19oux wrote
Some have proposed that an AI world-prediction module could be blockchain-based, storing each state as a hash in a Merkle-tree system for fast time-state processing.
This isn't some token or bitcoin scam nonsense... it's a practical use case for blockchain development (not something that holds value as a currency; more like a software implementation). It's also eerily similar to how our DNA stores data.
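A minimal sketch of what storing world states as hashes in a Merkle tree might look like; the serialization and hashing choices here are illustrative, not from any actual proposal.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes):
    """Fold a list of leaf hashes up to a single root. Changing any past
    state changes the root, so a chain of roots gives a tamper-evident
    history of world states."""
    level = list(leaf_hashes)
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical world states, serialized however the system chooses.
states = [b"state_t0", b"state_t1", b"state_t2"]
print(merkle_root([h(s) for s in states]).hex())
```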
TheLastSamurai t1_ir1kuis wrote
I don't want any of this. I wish we had the power to shut all this research down; to me the risks far outweigh the benefits. Even in the "good" scenario, we'd basically lose most of our actual humanity. We need to organize and stop this.
Snufflepuffster t1_ir1mmvm wrote
It's just a neural net. An assistant. Did you read the paper? It's not coming for you, and I think it's really selfish to try to stop research that could help so many. Machine learning in medicine is a big thing.
xxxmsky t1_ir26nnc wrote
The good it can bring: an abundance of materials and food, a stable climate, world peace, and the end of world hunger and disease.
I disagree that the risks outweigh the good. We need to democratize it!
Splatulance t1_ir13mv6 wrote
The actual paper is here: https://openreview.net/pdf?id=BZ5a1r-kVsf
It's written for a general audience. I don't have time to read it now, but you're better off reading the paper than the article, as the latter says nothing new.
Impossible_Cookie596 OP t1_ir0u95b wrote
Artificial intelligence is already ubiquitous in our digital lives. But this researcher wants to change the way these autonomous agents think.
nyxnars t1_ir0xhch wrote
>The momentum behind AI is building, thanks in part to the massive amounts of data that computers can gather about our likes, our purchases and our movements every day
Did you really just try to spin the theft of personal data as positive????
StarTracks2001 t1_ir12dl5 wrote
I wouldn't call it theft. People agree to the privacy policies and then proceed to post, like, share, link purchasing apps, etc. with all these companies, which just mine and sell your data to advertisers.
"If a service is free to use, you're the product."
Rauleigh t1_ir1g8xu wrote
Yep, and the fact that we live in a world where that is universal and almost unchallenged is freaky.
Impossible_Cookie596 OP t1_ir0y1fb wrote
The story has nothing to do with personal data; that line is just part of the submission statement.
nyxnars t1_ir0y9t3 wrote
>our likes, our purchases and our movements every day
This is personal data
Splatulance t1_ir13zsg wrote
That isn't what the position paper is about. It's another high-level proposal for a general AI, based loosely on cognitive science and neuroscience with respect to brain architecture. It has nothing to do with your search history or whatever, and frankly a quick skim suggests it's hardly news.
PM_ur_Rump t1_ir0yahe wrote
That's literally a quote from you.
[deleted] t1_ir0zues wrote
OP, do you not understand what "personal data" is?
xxxmsky t1_ir279b8 wrote
Honestly, there are positive sides to the lack of privacy control. We do get better services.
piTehT_tsuJ t1_ir0wxvb wrote
So now Alexa is going to know when I take a shit too?
Powerism t1_ir10fq6 wrote
You don’t log it in the app?!
piTehT_tsuJ t1_ir160wx wrote
Nope, the log goes in the toilet...
onyxengine t1_ir15fdh wrote
It's just a matter of architecture: real-time data and a limbic-system equivalent to drive behavior. The ecology of information fed to AIs is largely human-controlled and static; it's based on real-world outputs, but it's generally old, curated data.
To build a model of its surroundings, "the world", you need real-time data from some environment, the ability to adjust or spawn neural nets, and a system that inherently determines success and failure. This doesn't have to be a conventional environment humans are familiar with.
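A toy sketch of that loop, with every piece invented for illustration: a stand-in environment supplying real-time observations, a hard-wired drive signal as the limbic-system equivalent, and an online update in place of training on old curated data.

```python
import random

class ToyEnvironment:
    """Stands in for any live data source; it need not be an environment
    humans would recognize."""
    def __init__(self):
        self.state = 0.0

    def step(self, action):
        self.state += action + random.gauss(0, 0.1)
        return self.state

def drive(state):
    # Built-in success/failure signal (the 'limbic system equivalent'):
    # this agent simply prefers states near zero.
    return -abs(state)

def run_agent(steps=100, learning_rate=0.05):
    env = ToyEnvironment()
    action = 0.0
    for _ in range(steps):
        state = env.step(action)            # real-time observation
        reward = drive(state)               # inherent success/failure signal
        action -= learning_rate * state     # online nudge toward favored states
    return state, reward

final_state, final_reward = run_agent()
print(f"final state {final_state:.3f}, drive signal {final_reward:.3f}")
```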
Commie_EntSniper t1_ir16v46 wrote
Figures it would be Zuckerberg and Co. that release Skynet.
LastExitToSalvation t1_ir0y0xs wrote
>One of the most complex parts of the proposed architecture, the “world model module” would work to estimate the state of the world, as well as predict imagined actions and other world sequences, much like a simulator.
This is the part standing between real cognition and ML prediction. AI has no sense of the world, only the discrete things it has been optimized to compute. If there were a general-purpose world module, then everything a model learns could be put in the context of the real world, making outputs more consistently accurate and training cheaper and faster. I know the paper just sets out an architecture for the next phase of research, but if this world module became real, it would be as profound as what deep learning has done over the last 10 years.
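A sketch of what such a world model module's interface might look like; the architecture here is a guess for illustration, not what the paper specifies. It encodes an observation into a latent state estimate, then rolls that state forward under imagined actions, simulator-style.

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Guessed-at interface: estimate a latent world state, then predict how
    it would evolve under imagined actions."""
    def __init__(self, obs_dim, action_dim, state_dim=16):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, state_dim)                   # observation -> state estimate
        self.dynamics = nn.Linear(state_dim + action_dim, state_dim)   # (state, action) -> next state

    def estimate_state(self, obs):
        return torch.tanh(self.encoder(obs))

    def imagine(self, state, actions):
        """Roll the latent state forward through a sequence of imagined actions."""
        trajectory = []
        for a in actions:
            state = torch.tanh(self.dynamics(torch.cat([state, a], dim=-1)))
            trajectory.append(state)
        return trajectory

wm = WorldModel(obs_dim=8, action_dim=2)
obs = torch.randn(1, 8)
state = wm.estimate_state(obs)
imagined = wm.imagine(state, actions=[torch.randn(1, 2) for _ in range(5)])
print(len(imagined), imagined[0].shape)  # 5 imagined future latent states
```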
For everyone about to comment "I for one welcome our AI overlords" or some trite shit: this is actually the beginning of something that could lead us there. But without a world model, we will never get there, imo.