Off to the right here is a game called Deep Cheep that I designed and built for fun. It includes a tic-tac-toe-style minigame: four in a row on randomly generated boards, with the computer playing against you using minimax search. Go ahead and tap or click to play, or zoom in to play full screen.

The game features artificial intelligence, with the emphasis on "artificial". Human intelligence is measured by how quickly somebody picks up ideas and develops insights. By that measure, computer IQ is pretty close to zero. The computer will learn nothing from playing against you, and it must look at thousands of possible moves to play competently. This is a problem with all artificial intelligence out there, from my lowly game to advanced digital assistants.
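To show what "looking at thousands of possible moves" means in practice, here's a minimal minimax sketch for plain 3x3 tic-tac-toe. This is not Deep Cheep's actual code (its board generation and four-in-a-row scoring aren't shown here); it's just an illustration of the technique, with the board assumed to be a list of nine cells holding "X", "O", or None.

```python
# Minimax (negamax form) for 3x3 tic-tac-toe.
# Illustrative sketch only -- not Deep Cheep's actual implementation.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return "X" or "O" if that player has three in a line, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective:
    +1 = forced win, -1 = forced loss, 0 = draw with best play."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full, no winner: draw
    opponent = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        # The opponent's best outcome is our worst, hence the negation.
        score = -minimax(board, opponent)[0]
        board[m] = None  # undo the trial move
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move
```

Even on this tiny board, searching from an empty position visits hundreds of thousands of nodes to conclude that perfect play is a draw, which is exactly the brute-force flavor of "intelligence" described above.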

The problem isn't with computer hardware. Data center installations are already on par with what our brains can do in terms of information processing. Silicon also has the advantages of error-free operation and sequential processing, which allows for dependency management and search-tree pruning.

I think we are at the point where the Wright brothers were in 1900 or so. People had created an engine that was light and powerful enough to push a plane through the air, but a design for a plane didn't exist and we didn't understand principles like roll, pitch, and yaw. Once those principles were understood, a contraption could be designed to control for them and to take advantage of engine technology, producing a result that works nothing like a bird.

Today we have a powerful enough engine to drive human levels of artificial intelligence, so we just have to figure out the software to go with that engine. The sort of artificial intelligence that's proven practical is all narrow-domain, big-data, deep-search-based brute forcing of problems. The most you get out of a system like that is narrow-domain pattern recognition based on thousands of data points. It's useful for things like machine translation, detecting credit card fraud, grouping images, or answering search queries. There are a lot of smart people working to move that model forward.

One view is that we have to figure out the principles of intelligence, just like the Wright brothers did with flight. The problem is that understanding intelligence on a theoretical level has stalled out. Work has gone into software that mimics human intelligence by pattern matching one situation to another through analogies, but it's hopelessly primitive compared to the wetware in our brains. Clearly a big jump happened in our own evolution when we picked up language, which all happened in the period of a few million years as brain sizes in the genus Homo tripled. That's not an easy jump to make though, since we seem to be the only species that's cleared the hurdle. It could just be that our brains have very messy software that is impossible to generalize into an elegant "theory of intelligence".

That leaves us with emulating the human mind, which governments in Europe are attempting, along with a reverse-engineering effort in the United States. That may produce useful insights, or it may be that these approaches are like trying to build a flying device by focusing on wing flapping.

The alternate view is that we are at the same point that car designers were at in the 1950s. To them, nuclear power and flying cars seemed inevitable. Unfortunately, we could never overcome the technical hurdles to design safe and reliable flying cars, and transportation technology has stalled out.

Still, I think it's inevitable that just as the industrial revolution devalued the human hand, artificial intelligence will devalue the human mind. I tend to think of AGI as being inevitable and interesting, even more so than learning to fly or go into space or split the atom. Unfortunately, the topic of "The Singularity" was popularized by Ray Kurzweil. He also predicted with complete certainty that we'd all be shopping online using virtual reality in 2010, along with other silliness relating to medicine and living forever. He speaks with a very deep voice and is wildly optimistic, so people tend to like him. Please don't read his books.

A lot of people wonder if AGI is a good thing. I think the alternative is waiting to be wiped out by a plague, or ecological collapse, or by a megalomaniac, or some unpredicted black swan. Humanity is very likely to screw up over the next few hundred years, so I personally look forward to greeting our hopefully benevolent robot overlords, like a puppy before his new master. In the meantime, I had fun creating Deep Cheep, and I hope you like playing it. Questions or comments? Write to efbrazil at gmail dot com.