David Silver et al., arXiv, Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, here. I’m going to go ahead and call it first, here.
The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.
OK: statically load AlphaZero with all the theorems with known proofs in, say, Number Theory. Call this data set INIT. AlphaZero goes off and teaches itself Number Theory to a superhuman level. Now ask AlphaZero to rate the initial static set of all known Number Theory proofs, INIT, against its current superhuman level. That assessment, call it X, is the superhuman rating of the documented level of human Number Theory expertise, as of Dec 2017 on Earth. Now, find a single new paper which, when added to INIT, lowers AlphaZero’s assessment of human Number Theory expertise from X to some Y < X. That newly added paper is special: it is the Billy Madison paper.
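The procedure above can be sketched in a few lines. Everything here is hypothetical — no AlphaZero-style rater for mathematics exists. The rating is stubbed as the mean quality score of the corpus, which is one (assumed) mechanism by which adding a single sufficiently bad paper could drag the overall assessment from X down to Y < X:

```python
def assess(corpus):
    """Stub rating: average quality score of the papers in the corpus.
    A real rater (if one existed) would be far more sophisticated."""
    return sum(corpus) / len(corpus)

def find_billy_madison_paper(init, candidates):
    """Return the first candidate paper whose addition LOWERS the rating."""
    x = assess(init)                    # X: rating of the initial corpus INIT
    for paper in candidates:
        y = assess(init + [paper])      # Y: rating after adding one paper
        if y < x:
            return paper                # the Billy Madison paper
    return None

INIT = [7.0, 8.0, 9.0]                  # toy quality scores for known proofs
candidates = [8.5, 9.5, 1.0]            # the last one is insanely idiotic
print(find_billy_madison_paper(INIT, candidates))  # -> 1.0
```

Under a mean-based rating, most new papers raise or hold the score; only a paper bad enough to pull the average down qualifies — which is exactly the joke.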
Principal: Mr. Madison, what you’ve just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it.