
dnimeerf t1_iqukg94 wrote

I literally wrote a white paper on this exact subject. It's here on Reddit; feel free to ask me anything.

2

Lone-Pine t1_iquoepi wrote

None of this is even close to replacing/being competitive with human researchers yet, right? How close are we to "Advanced Chess" where human researchers and AI systems work together to improve AI models?

3

DorianGre t1_iqusw99 wrote

I have been running the “same” AI chess bot on Twitter for 6 years now. It is built to play up to 500k games at a time, and it plays at least 20 at a time with versions of itself, posting the moves of games as tweets. Every 1000 games or 30 days, whichever comes first, it updates its scoring tables and runs a regression analysis of the moves; if the result is better, it moves this model to a new set of move-graph hashes, copies that out to a new player file set, and spins up the new player.

The new player comes online announcing its synthetic FIDE rating, and the others running then have to announce theirs as well. The one with the lowest rating performs apoptosis by shutting itself down, with a final announcement to the Twitter channel detailing its exploits: its name, how long it was alive, that it won X games against Y players with an average win rate of Z, and how much its synthetic FIDE score increased or decreased over its lifetime. That is the memorial in the wall of remembrances. Then the new bot announces itself and says it is ready for a game. godfreybot on Twitter if anyone is interested. They mined all the base openings a while ago and are starting to do some weird openings now.
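The announce/retire cycle described above can be sketched roughly like this. This is a minimal illustration, not the actual bot's code; the class, method names, and rating numbers are all hypothetical.

```python
class ChessBot:
    """One tournament bot: tracks a synthetic FIDE-style rating and a game record."""

    def __init__(self, name, rating=1500):
        self.name = name
        self.rating = rating      # synthetic FIDE-style rating (hypothetical scale)
        self.games_won = 0
        self.games_played = 0

    def memorial(self):
        """Final announcement posted when this bot shuts itself down."""
        win_rate = self.games_won / self.games_played if self.games_played else 0.0
        return (f"{self.name}: won {self.games_won} of {self.games_played} games "
                f"(win rate {win_rate:.0%})")


def apoptosis_round(bots, new_bot):
    """Bring the freshly trained player online, then retire the lowest-rated one.

    Returns the retiring bot's memorial message and the surviving population.
    """
    bots.append(new_bot)
    loser = min(bots, key=lambda b: b.rating)
    bots.remove(loser)
    return loser.memorial(), bots
```

In this sketch the population size stays constant: every new player that spins up forces exactly one shutdown, mirroring the one-in, one-out cycle the comment describes.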

Now, this doesn’t rewrite the math for the bots, but it does update the likelihood tables, and when a new one gets created, it gets a wildcard rating between 1 and 10 that tells it how aggressive to be in straying from the known most productive lines. I think I could add a subroutine for scoring and choosing moves that gets written based on a sheer evolutionary model, and then score and compare that too. Just a thought.
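One simple way to read the 1–10 wildcard rating is as a probability of leaving the top-scored line. A hedged sketch, with a hypothetical `pick_move` helper and made-up move scores:

```python
import random

def pick_move(candidate_moves, wildcard):
    """Pick a move from candidates, straying from the best line more often
    when the wildcard rating is high.

    candidate_moves: list of (move, score) pairs, sorted best-first.
    wildcard: int in 1..10; 10 strays from the top line 100% of the time.
    """
    stray_prob = wildcard / 10.0
    if len(candidate_moves) > 1 and random.random() < stray_prob:
        # Explore: take any sideline other than the known best move.
        return random.choice(candidate_moves[1:])[0]
    # Exploit: stick to the most productive known line.
    return candidate_moves[0][0]
```

A wildcard of 1 plays almost pure book; a wildcard of 10 is the bot that produces the weird openings.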

2