Submitted by mrconter1 t3_10e7fxg in MachineLearning
[removed]
I will upload the data and accompanying website soon. What do you think about the idea?
I think ideas are cheap (“benchmark of AGI-like capabilities”), and this particular execution of the idea (closing a window in a browser?) isn’t really good in any way.
The thing is that there are a lot of other screenshots + instructions as well. What would a system that can get 100% on this benchmark not be able to do?
Your unconquerable benchmark is below the level of achievement attained by research from 1970
>SHRDLU was an early natural-language understanding computer program, developed by Terry Winograd at MIT in 1968–1970. In the program, the user carries on a conversation with the computer, moving objects, naming collections and querying the state of a simplified "blocks world", essentially a virtual box filled with different blocks. SHRDLU was written in the Micro Planner and Lisp programming language on the DEC PDP-6 computer and a DEC graphics terminal. Later additions were made at the computer graphics labs at the University of Utah, adding a full 3D rendering of SHRDLU's "world".
No it's not.
Adept AI is restricted to the web and also does not use raw pixels as input...
The distinctions you’re drawing, pixels vs selenium output and browser vs os, are far less significant than the complexity of the tasks (step-by-step vs entire processes). What they’ve achieved is strictly harder for humans than what you are testing. We can argue whether perception or planning are harder for current technology (the computer vision is far more developed than AI planning right now), but I think you need to reconsider the formulation of your tasks. It seems like they are designed to be easy enough for modern methods to solve.
On another note, most interesting tasks can’t be completed with just an x,y mouse location output. Why did you decide to restrict the benchmark to such a limited set of tasks?
Really appreciate your feedback.
> The distinctions you’re drawing, pixels vs selenium output and browser vs os, are far less significant than the complexity of the tasks (step-by-step vs entire processes). What they’ve achieved is strictly harder for humans than what you are testing. We can argue whether perception or planning are harder for current technology (the computer vision is far more developed than AI planning right now), but I think you need to reconsider the formulation of your tasks. It seems like they are designed to be easy enough for modern methods to solve.
I'm not sure about this. Being able to produce the next click on a large, diversified benchmark of screenshots is extremely difficult for a computer today. It would need to be able to:
That's way outside what current models can do, yet humans could do it easily. This benchmark would be extremely simple and intuitive for humans to complete (even with far-fetched goals), but there is no model today that, given a screenshot and the instruction "Add line", even knows where to click to add the new line.
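To make the task format concrete, here is a minimal sketch of how such a benchmark could be evaluated, assuming each example is stored as (screenshot, instruction, target click region) and the model exposes a hypothetical `predict_click` method. None of these names come from the actual benchmark; they are illustrative assumptions only.

```python
# Minimal sketch of a screenshot + instruction -> (x, y) click benchmark.
# The Example fields and `model.predict_click` are illustrative assumptions,
# not the format used by the actual benchmark.
from dataclasses import dataclass
from typing import List, Tuple

from PIL import Image


@dataclass
class Example:
    screenshot: Image.Image                 # raw pixels, e.g. a full desktop capture
    instruction: str                        # e.g. "Close the browser window"
    target_box: Tuple[int, int, int, int]   # (left, top, right, bottom) of the correct click region


def score(model, examples: List[Example]) -> float:
    """Fraction of examples where the predicted click lands inside the target region."""
    hits = 0
    for ex in examples:
        x, y = model.predict_click(ex.screenshot, ex.instruction)
        left, top, right, bottom = ex.target_box
        if left <= x <= right and top <= y <= bottom:
            hits += 1
    return hits / len(examples)
```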
> On another note, most interesting tasks can’t be completed with just an x,y mouse location output. Why did you decide to restrict the benchmark to such a limited set of tasks?
I wrote about this in the ReadMe. There is no deeper reason; it's just easier to explain the idea to people this way. I think the most powerful variant of this idea would take a series of frames (video context) plus instructions and output something like the following:
The benchmark is simple enough to understand and explain that you can start to envision what such a model would be able to do, or, much more interestingly, what it would not be able to do.
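For illustration only, one plausible shape for that more general interface is sketched below: the agent consumes a short history of frames plus an instruction and emits a low-level mouse or keyboard action. The `Action` fields and the `agent.act` call are my assumptions, not the output format described in the original post.

```python
# Hypothetical interface for the "frames + instruction -> action" variant.
# The action kinds, field names, and agent API are assumptions for illustration.
from dataclasses import dataclass
from typing import List, Optional

from PIL import Image


@dataclass
class Action:
    kind: str                  # e.g. "move", "click", "key", or "done" (assumed vocabulary)
    x: Optional[int] = None    # cursor target for "move" / "click"
    y: Optional[int] = None
    key: Optional[str] = None  # key to press for "key", e.g. "Enter"


def step(agent, frames: List[Image.Image], instruction: str) -> Action:
    """Ask the agent for its next action given recent frames and the instruction."""
    return agent.act(frames, instruction)
```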
If you have any more feedback or thoughts, please reply. I wish more people were interested, but either the idea sucks or I need to create something interactive for people to try.
>Recognize the Gmail icon if I say "send an email"
This is not testing intelligence; it is testing whether a human has been trained on computer usage, knows what e-mail is, and has used Gmail before.
Someone from a tribe in Africa would fail your test even though he is human and intelligent; train him on this task the way you would train a current-gen multimodal system and he will pass your benchmark. Train an LLM in combination with an image model and an RL model, train it on instruction following using the inputs you described, and now it understands what it sees and can follow what you want it to do.
> This is not testing intelligence; it is testing whether a human has been trained on computer usage, knows what e-mail is, and has used Gmail before.
I don't think it's binary; intelligence is a large part of it.
> Someone from a tribe in Africa would fail your test even though he is human and intelligent;
Could you train a bird to pass every question on this benchmark? No, because it's not as intelligent as a human.
> train him on this task the way you would train a current-gen multimodal system and he will pass your benchmark. Train an LLM in combination with an image model and an RL model, train it on instruction following using the inputs you described, and now it understands what it sees and can follow what you want it to do.
So solving this benchmark is an easy problem? How long do you think it will take until we have a model that can casually solve all the instructions I gave in the previous comment?
MiniWoB++ is restricted to website-related tasks, not the OS, and it also does not take raw pixels as input.
The whole "benchmark" is just a Readme? What is this nonsense?