mrconter1
mrconter1 OP t1_j4tuaal wrote
Reply to comment by blose1 in [R] The Unconquerable Benchmark: A Machine Learning Challenge for Achieving AGI-Like Capabilities by mrconter1
> This is not testing intelligence, this is testing if human was trained on computer usage, knows what e-mail is and used gmail before.
I don't think it's binary. I think intelligence plays a large part here.
> Someone from tribe in Africa would fail your test while he is human and is intelligent,
Could you train a bird to pass all questions on this benchmark? No. Because it's not as intelligent as a human.
> train him on this task like you would train current gen multimodal system and it will pass your benchmark. You train LLM in combination with image model and RL model, train on instruction following using inputs you described and now it understands what it sees, can follow what you want it to do.
You think solving this benchmark is easy? How long do you think it will take until we have a model that can casually handle all the instructions I gave in the previous comment?
mrconter1 OP t1_j4rsus2 wrote
Reply to comment by navillusr in [R] The Unconquerable Benchmark: A Machine Learning Challenge for Achieving AGI-Like Capabilities by mrconter1
Really appreciate your feedback.
> The distinctions you’re drawing, pixels vs selenium output and browser vs os, are far less significant than the complexity of the tasks (step-by-step vs entire processes). What they’ve achieved is strictly harder for humans than what you are testing. We can argue whether perception or planning are harder for current technology (the computer vision is far more developed than AI planning right now), but I think you need to reconsider the formulation of your tasks. It seems like they are designed to be easy enough for modern methods to solve.
I'm not sure about this. Predicting the next click on a large, diversified benchmark of screenshots is extremely difficult for a computer today. It would need to be able to do things like the following (I sketch the task format in code after the list):
- Choose the next chess move if I am in a chess application
- Recognize the color palette icon on the keyboard if I ask it to change the color of the keyboard
- Recognize the Gmail icon if I say "send an email"
- Change the keyboard mode if I ask it to write an exclamation mark
- Press the key "2" if I ask it to type the number equivalent to the number of consuls that traditionally held the office at the same time in ancient Rome.
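To make the format concrete, here's a minimal sketch of what a single benchmark item and its scoring could look like. This is just illustrative Python; the names (`BenchmarkItem`, `evaluate`) and the pixel tolerance are my own assumptions, not a fixed spec:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkItem:
    screenshot_path: str  # raw pixels only; no DOM, Selenium, or OS hooks
    instruction: str      # e.g. "send an email"
    target_x: int         # ground-truth click position
    target_y: int
    tolerance: int = 10   # assumed: a click within this many pixels counts

def is_correct(item: BenchmarkItem, pred_x: int, pred_y: int) -> bool:
    """The model outputs only an (x, y) click; nothing else is scored."""
    return (abs(pred_x - item.target_x) <= item.tolerance
            and abs(pred_y - item.target_y) <= item.tolerance)

def evaluate(model, items: list[BenchmarkItem]) -> float:
    correct = sum(
        is_correct(item, *model.predict(item.screenshot_path, item.instruction))
        for item in items
    )
    return correct / len(items)
```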
That's way outside what current models can do, but humans could do it easily. This benchmark would be extremely simple and intuitive for humans to complete (even with far-fetched goals), yet there is no model today capable of even knowing where to click given a screenshot and the instruction "Add line".
> On another note, most interesting tasks can’t be completed with just an x,y mouse location output. Why did you decide to restrict the benchmark to such a limited set of tasks?
I wrote about this in the README. There is no deep reason; it's just easier to explain the idea to people this way. I think the most powerful variant of this idea would take a series of frames (video context) plus instructions and output one of the following actions (sketched in code below the list):
- Click
- Press (X seconds)
- Move from P1 to P2 (X seconds)
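As a rough sketch (the field names and types here are my own, hypothetical choices), that extended action space could be as small as this:

```python
from dataclasses import dataclass

@dataclass
class Click:
    x: int
    y: int

@dataclass
class Press:
    x: int
    y: int
    seconds: float  # how long to hold the press

@dataclass
class Move:
    x1: int
    y1: int  # start point P1
    x2: int
    y2: int  # end point P2
    seconds: float  # duration of the move from P1 to P2

Action = Click | Press | Move  # everything the model can output per frame
```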
The benchmark is simple enough to understand and explain that you can start to envision what such a model would be able to do. Or, much more interestingly, what it would not be able to do.
If you have any more feedback or thoughts, please reply. I wish more people were interested; either the idea sucks or I need to create something interactive for people.
mrconter1 OP t1_j4rintd wrote
Reply to comment by navillusr in [R] The Unconquerable Benchmark: A Machine Learning Challenge for Achieving AGI-Like Capabilities by mrconter1
Yeah, you're right. My approach seems to be a bit more general and should be less work.
mrconter1 OP t1_j4rdm59 wrote
Reply to comment by navillusr in [R] The Unconquerable Benchmark: A Machine Learning Challenge for Achieving AGI-Like Capabilities by mrconter1
Adept AI is restricted to the web and also does not use raw pixels as input...
mrconter1 OP t1_j4rdf3t wrote
Reply to comment by navillusr in [R] The Unconquerable Benchmark: A Machine Learning Challenge for Achieving AGI-Like Capabilities by mrconter1
MiniWoB++ is restricted to website-related tasks, not the OS, and it does not take raw pixels as input.
mrconter1 OP t1_j4r8o27 wrote
Reply to comment by Dendriform1491 in [R] The Unconquerable Benchmark: A Machine Learning Challenge for Achieving AGI-Like Capabilities by mrconter1
No, it's not.
mrconter1 OP t1_j4r8miw wrote
Reply to comment by navillusr in [R] The Unconquerable Benchmark: A Machine Learning Challenge for Achieving AGI-Like Capabilities by mrconter1
- An LLM test does not require reasoning because it generates one word at a time?
- It can't.
- This might be interesting though.
mrconter1 OP t1_j4qctlb wrote
Reply to comment by Laser_Plasma in [R] The Unconquerable Benchmark: A Machine Learning Challenge for Achieving AGI-Like Capabilities by mrconter1
The thing is that there are a lot of other screenshot + instruction pairs as well. What would a system that gets 100% on this benchmark not be able to do?
mrconter1 OP t1_j4q4o7t wrote
Reply to comment by Laser_Plasma in [R] The Unconquerable Benchmark: A Machine Learning Challenge for Achieving AGI-Like Capabilities by mrconter1
I will upload the data and accompanying website soon. What do you think about the idea?
mrconter1 t1_iyaljxc wrote
Reply to comment by beezlebub33 in [r] The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable - LessWrong by visarga
No, but I don't want people to think that it's just a random blog. People who spend a lot of time there are... how should I phrase it? A bit different.
mrconter1 t1_iy91oll wrote
Reply to [r] The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable - LessWrong by visarga
Can we please keep LW out of this subreddit... It's literally a doomsday cult.
Edit: Feel free to read about LW. I don't like them and I would prefer this subreddit not to legitimize them. Their movement has a whole subreddit dedicated to them:
Edit 2: Basically, I'd prefer articles from journals and people with academic prestige.
mrconter1 t1_j4wq1zs wrote
Reply to comment by bo_peng in [P] RWKV 14B Language Model & ChatRWKV : pure RNN (attention-free), scalable and parallelizable like Transformers by bo_peng
How does the memory scale with the context window size?