Artificial Intelligence is my pet computer science topic. I first took an AI course at Columbia University while I was in high school, and was completely enthralled. Now, a decade and a half later, I am devoting my life to developing intelligent software on my own.
I recently read a quote from a professor: in the 1980s, investors were enthusiastic about AI while researchers were pessimistic. Unfortunately, AI in that era was associated with numerous project failures and gained a bad reputation. Now the tables have turned. A lot of advances occurred in the field over the 1990s, not to mention improvements in computational speed and storage. The researchers who remain are now very optimistic, but investors, having been burned before, are pessimistic.
I am actually quite optimistic about my chances of succeeding with AI. First, a large amount of resources and technologies are available for licensing from consortiums and universities. The difficulties with natural language processing, for example, are not computational, but are more related to the accumulation of linguistic data. Second, there are fewer people in the field and a large amount of skepticism, just when the possibilities in AI have dramatically opened up--meaning little competition. Third, the amount of memory now available in desktop PCs makes it feasible to store megabytes of machine-readable dictionaries and knowledge bases in memory.
I have noticed an almost exclusive emphasis on quick-and-dirty statistical approaches in AI nowadays, such as neural networks, Bayesian reasoning, and genetic programming. These approaches often result in nonsensical output.
People have lost interest in, or don't appreciate, the power and exactness of symbolic approaches. I am in the symbolic AI camp, though I do see statistical approaches as complementary in the areas of disambiguation and prediction.