modeless t1_jdevktx wrote
Reply to [N] ChatGPT plugins by Singularian2501
To me the browser plugin is the only one you need. Wolfram Alpha is a website, Instacart is a website, everything is a website. Just have it use the website, done. Plugins seem like a way to get people excited about giving the AI permission to use their stuff, but they're not technically necessary.
modeless t1_jc4i39e wrote
Reply to [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
> performs as well as text-davinci-003
No it doesn't! The researchers don't claim that either; they claim it "often behaves similarly to text-davinci-003", which is much more believable. I've seen a lot of people claiming things like this with little evidence. We need people evaluating these claims objectively. Can someone start a third-party model review site?
modeless t1_j9st9pd wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Alignment isn't my main concern. I fear AIs that are "aligned" with people who want to e.g. fight wars, or worse.
modeless t1_j1xwo4s wrote
Reply to [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
Very cool. Literature is tough unless you're fairly familiar with the authors. Even so, I think longer snippets would be pretty easy. A sentence of only ten or so words out of context is not much to go on.
modeless t1_j1xweue wrote
Reply to comment by anthonyhughes in [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
Ooh, shadows is a good one.
modeless t1_j1xw8xu wrote
Reply to comment by FilthyCommieAccount in [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
71/100 exactly here too. I found Midjourney most convincing. Easiest tells I found are hands (obviously), signatures or any other lettering, malformed objects in general, and anything with symmetry or duplication. Funny that AI would be bad at duplication!
modeless t1_j02fiss wrote
Reply to comment by ChuckSeven in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
Without the requirement for exact repeatability you can use analog circuits instead of digital, and your manufacturing tolerances are greatly relaxed. You can use error-prone methods like self-assembly instead of EUV photolithography in ten-billion-dollar cleanrooms.
Again, I don't really buy it but there's an argument to be made.
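If the repeatability point is hard to picture, here's a toy numpy sketch of it (the 5% tolerance and the little network are made up for illustration): every analog "chip" realizes the same weights with its own fixed fabrication error, so chip-to-chip outputs differ, and the bet is that training each chip with its own errors in place absorbs them.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "digital" network: weights are exact and identical on every chip.
W = rng.standard_normal((16, 8))

def forward(x, weights):
    return np.tanh(x @ weights)

# An "analog" chip realizes the weights with its own fixed fabrication
# error, modeled here as 5% multiplicative noise.
def make_analog_chip(weights, tolerance=0.05):
    return weights * (1 + tolerance * rng.standard_normal(weights.shape))

x = rng.standard_normal((4, 16))
digital_out = forward(x, W)

# Outputs now differ chip to chip, so exact repeatability is gone...
chips = [make_analog_chip(W) for _ in range(3)]
for i, chip_W in enumerate(chips):
    err = np.abs(forward(x, chip_W) - digital_out).mean()
    print(f"chip {i}: mean deviation from digital = {err:.4f}")
# ...but if each chip is trained (or fine-tuned) with its own errors in
# place, the learning rule can absorb them, which is the crux of the argument.
```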
modeless t1_izzpcbe wrote
Reply to comment by IshKebab in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
He calls it "mortal computation". Like instead of loading identical pretrained weights into every robot brain you actually train each brain individually, and then when they die their experience is lost. Just like humans! (Except you can probably train them in simulation, "The Matrix"-style.) But the advantage is that by relaxing the repeatability requirement you get hardware that is orders of magnitude cheaper and more efficient, so for any given budget it is much, much more capable. Maybe. I tend to think that won't be the case, but who knows.
modeless t1_iz2bm8r wrote
Reply to comment by new_name_who_dis_ in [R] The Forward-Forward Algorithm: Some Preliminary Investigations [Geoffrey Hinton] by shitboots
Well, no one knows exactly what the brain is up to in there, but we don't see enough backwards connections or activation storage to make backprop plausible. This is a way of learning without backwards connections, and that alone makes it more biologically plausible.
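For anyone who hasn't read the paper, here's a minimal PyTorch sketch of my reading of the recipe (hyperparameters and the negative-data source are placeholders, not the paper's): each layer trains on a purely local "goodness" objective, and only activations, never gradients, pass between layers.

```python
import torch
import torch.nn.functional as F

# One FF layer: trained with a purely local objective; no gradient
# flows backward into earlier layers.
class FFLayer(torch.nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Length-normalize the input so this layer can't just read off
        # the previous layer's goodness.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # "goodness"
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        # Push goodness above threshold for positive data, below for negative.
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach: the next layer sees activations only, never gradients.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Each layer trains on its own local loss; only activations flow forward.
layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos = torch.randn(32, 784)        # stand-in for real samples
x_neg = torch.randn(32, 784) * 3.0  # stand-in for corrupted/negative samples
for _ in range(10):
    h_pos, h_neg = x_pos, x_neg
    for layer in layers:
        h_pos, h_neg = layer.train_step(h_pos, h_neg)
```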
modeless t1_iz28lbg wrote
Reply to [R] The Forward-Forward Algorithm: Some Preliminary Investigations [Geoffrey Hinton] by shitboots
This seems more interesting than the capsule stuff he was working on before. Biologically plausible learning rules are cool. Does it work on ImageNet, though?
modeless t1_jdtx2eu wrote
Reply to comment by LanchestersLaw in [D] GPT4 and coding problems by enryu42
I like the idea of predicting the user's response. How's this as an architecture for a helpful agent:
Given a user question, before generating an answer you predict the user's ideal response to the model's answer (e.g. "thanks, that was helpful", or more realistically a distribution over such responses). Then you generate an answer and iteratively optimize it to make that ideal user response more likely.
This way you're explicitly modeling the user's intent, and you can adapt the amount of computation appropriately for the complexity of the question by controlling the number of iterations on the answer.
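Roughly this loop, sketched in Python. `generate` and `likelihood` are hypothetical stand-ins for whatever model API you'd actually use; nothing here is a real library call.

```python
# Hypothetical helpers: `generate` samples a completion from the model
# and `likelihood` scores how probable a continuation is under the model.
def generate(prompt: str) -> str: ...
def likelihood(prompt: str, continuation: str) -> float: ...

def answer_with_intent_model(question: str, max_iters: int = 5) -> str:
    # 1. Predict the user's ideal reaction before answering. (The full
    #    version would keep a distribution over reactions, not one sample.)
    ideal_reaction = generate(
        f"Question: {question}\n"
        "What would the user ideally say after a perfect answer?"
    )
    # 2. Generate an answer, then iteratively refine it so the ideal
    #    reaction becomes more likely as the user's next turn.
    answer = generate(f"Question: {question}\nAnswer:")
    best = likelihood(f"Q: {question}\nA: {answer}\nUser:", ideal_reaction)
    for _ in range(max_iters):  # spend more iterations on harder questions
        candidate = generate(
            f"Question: {question}\nDraft answer: {answer}\n"
            "Improve this answer:"
        )
        score = likelihood(f"Q: {question}\nA: {candidate}\nUser:", ideal_reaction)
        if score > best:
            answer, best = candidate, score
    return answer
```

The iteration count is where the compute adaptation comes in: easy questions converge after one or two refinements, hard ones get the full budget.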