Chris McKinstry, a researcher in artificial intelligence, recently committed suicide after posting suicide notes on his personal blog and on a discussion board at Joel on Software. (Ryan Park has more details.)
Joel’s discussion board, which I read regularly, provides very useful information on software development and marketing. The suicide notes Chris left were posted in Joel’s little-known off-topic discussion board, ?off. This directly led Joel to shut down the uncensored board to squelch what he called “psychopathic” behavior, as he wrote:
I used to have, hidden so deeply that almost nobody found it, a discussion forum on this site colloquially called ?off. It was the official off-topic forum and was virtually uncensored. It was foul, free-wheeling, and Not Safe For Work. It generated quite a few severely antisocial posts, which were sometimes funny.
Anyway, it wasn't really appropriate. It didn't have anything to do with software, I didn't participate, and there was no compelling reason to host it on my servers. Over time, the number of reasons to shut it down increased. Today, the last straw was broken when one of my actual friends (you know, a real-life person) told me that the discussion group was getting downright disturbing. Some of the participants in the group had probably crossed the line from common obnoxious online behavior to downright psychopathic behavior. In a discussion group which prides itself on "anything goes," this was impossible to control.
At 6 pm today, I closed that discussion group, having learned an important lesson about anarchy.
What made an impression on me was that Chris founded Mindpixel, a Web-based collaborative artificial intelligence project that accumulated a database of human facts, similar to both Cyc and ConceptNet.
Chris was even interviewed by Slashdot about his ambitious project, and in the interview he made some remarkable statements.
My primary inspiration for the project comes from observation: I observed that computers are stupid and know nothing of human existence. I concluded a very long time ago that either we had to write a "magic" program that was able to go out in the world and learn like a human child, or we just had to sit down and type in ALL the data. When I was studying psychology in the late 80's I wanted to begin to gnaw the bullet and start getting people to type in ALL the data.
… I would store my model of the human mind in binary propositions. I would make a digital model of the mind.
I realized within minutes that a giant database of these propositions could be used to train a neural net to mimic a conscious, thinking, feeling human being! I thought, maybe I'm missing something obvious. So, I emailed Marvin Minsky and asked him if he thought it would be possible to train a neural network into something resembling human using a database of binary propositions. He replied quickly saying "Yes, it is possible, but the training corpus would have to be enormous." The moment I finished reading that email, I knew I would spend the rest of my life building and validating the most enormous corpus I could.
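The corpus-building side of this idea — a database of statements, each validated as true or false by many human contributors — can be sketched in a few lines. This is purely an illustrative toy, not the actual Mindpixel implementation; the class name, the consensus-fraction scoring, and the example statements are all my own assumptions.

```python
from collections import defaultdict

class PropositionStore:
    """Toy store of binary propositions, loosely in the spirit of the
    Mindpixel idea quoted above: each statement accumulates yes/no
    validations, and the consensus fraction serves as its truth value.
    (Illustrative sketch only; not the real Mindpixel system.)"""

    def __init__(self):
        # statement -> [yes_count, no_count]
        self._votes = defaultdict(lambda: [0, 0])

    def vote(self, statement, is_true):
        # Normalize lightly so "Water is wet" and "water is wet" merge.
        tally = self._votes[statement.strip().lower()]
        tally[0 if is_true else 1] += 1

    def truth(self, statement):
        # Fraction of contributors who judged the statement true,
        # or None if no one has voted on it yet.
        yes, no = self._votes[statement.strip().lower()]
        total = yes + no
        return None if total == 0 else yes / total

store = PropositionStore()
for answer in (True, True, True, False):
    store.vote("Water is wet", answer)
print(store.truth("water is wet"))  # 0.75
```

A corpus like this — millions of statements with human-consensus truth values — is exactly the kind of training data the interview imagines feeding to a neural network.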
His suicide reminded me of cautionary tales from my Catholic high school about philosophers, like Nietzsche, who fell victim to hubris and later went mad or committed suicide. I never put much credence in those tales.
I am also an AI software entrepreneur and a blogger, as Chris was. His vision of computer intelligence is very similar to mine, and sometimes I do feel a little crazy. Perhaps I should be careful. The desert of AI is full of the bones of ambitious researchers whose visions went unfulfilled.