I have started a software company called SoftPerson, and am currently writing software that uses artificial intelligence to create new kinds of desktop applications. In the type of software we develop, the "software" acts like a "person" (typically referred to as an agent in the AI literature) in its ability to help construct documents and to reason about their content.
Very few commercial applications employ AI, but I think there will eventually be an AI wave, especially now that computers have the resources. The software I am developing requires megabytes of compressed memory just to store world knowledge; this would not have been commercially feasible seven years ago, when computers did not even have that much RAM.
Whenever a task needs to be accomplished, I imagine the software as a personal servant and ask myself how I would interact with that servant: how the dialogue would proceed, and what steps the servant would need to take. (That's why I am intrigued by inductive UI.)
Then I translate the steps into code. The tasks that the software needs to perform at each step require a degree of artificial intelligence, which I found difficult to express in traditional "C"-like languages. The programming language I use is C#. First, C# is higher-level than C++; the low-level details of C++ are a deterrent for software that needs to mimic high-level human thinking. Garbage collection is essential because references abound everywhere, forming complex tree and graph relationships whose lifetimes are impractical to track by hand.
To support human-like thinking, I used techniques from declarative programming, which involve constructing a model, somewhat like an XML document object model, free of control flow, that can be easily manipulated and searched, and can be rendered in a variety of views. I have an optimized list-based interpreter modeled on Mathematica; basically, it is a programming language within a programming language (C#). Mathematica has many advantages over Scheme, because it supports functional and rule-based programming in addition to procedural programming, and because its list data types are based on vectors rather than linked lists, which improves performance. This interpreted language can perform lazy evaluation, handle unknown or ambiguous list expressions, and manipulate a whole expression without understanding its parts; these are hard to deal with in C. I have also built a Prolog-like inference engine on top of the interpreter, complete with a knowledgebase implementation and the ability to solve basic mathematical equations.
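To give a flavor of the rule-based, Mathematica-style evaluation described above, here is a toy rewriter in Python (a sketch for illustration only, not the actual C# interpreter): expressions are head-plus-arguments tuples, and evaluation applies rewrite rules until a fixed point.

```python
# Toy Mathematica-style expression rewriter (illustrative sketch).
# An expression is either an atom or a tuple: (head, arg1, arg2, ...).
# Pattern variables are strings starting with "_".

def match(pattern, expr, bindings):
    """Try to match pattern against expr, accumulating variable bindings."""
    if isinstance(pattern, str) and pattern.startswith("_"):
        bound = bindings.setdefault(pattern, expr)
        return bound == expr          # repeated variables must agree
    if isinstance(pattern, tuple) and isinstance(expr, tuple):
        if len(pattern) != len(expr):
            return False
        return all(match(p, e, bindings) for p, e in zip(pattern, expr))
    return pattern == expr

def substitute(template, bindings):
    """Build the rewrite result by filling pattern variables."""
    if isinstance(template, str) and template in bindings:
        return bindings[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, bindings) for t in template)
    return template

def rewrite(expr, rules):
    """Rewrite bottom-up, reapplying rules until nothing fires."""
    if isinstance(expr, tuple):
        expr = tuple(rewrite(e, rules) for e in expr)
    for pattern, template in rules:
        bindings = {}
        if match(pattern, expr, bindings):
            return rewrite(substitute(template, bindings), rules)
    return expr

# Symbolic differentiation as rewrite rules: D[x, x] -> 1 and the sum rule.
rules = [
    (("D", "x", "x"), 1),
    (("D", ("Plus", "_a", "_b"), "_v"),
     ("Plus", ("D", "_a", "_v"), ("D", "_b", "_v"))),
]
print(rewrite(("D", ("Plus", "x", "x"), "x"), rules))  # ('Plus', 1, 1)
```

The whole expression is manipulated purely structurally; the rewriter never needs to "understand" what Plus or D mean, which is exactly what makes this style awkward in plain C.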
I have also added statistical libraries for probabilistic reasoning and other mathematical support. (I am so glad that I dual majored in math and computer science.)
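As a generic illustration of the kind of probabilistic reasoning such libraries support (this is textbook Bayes' rule, not the author's actual library), consider weighing word senses by evidence:

```python
# Minimal Bayes-rule update (illustrative sketch).
# posterior(h) ∝ prior(h) * P(evidence | h), normalized over hypotheses.

def bayes_update(prior, likelihood):
    """prior: {hypothesis: P(h)}; likelihood: {hypothesis: P(evidence | h)}."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Which sense of "bank" is meant, given that "river" appears nearby?
# (The numbers are made up for illustration.)
prior = {"bank/finance": 0.7, "bank/river": 0.3}
likelihood = {"bank/finance": 0.02, "bank/river": 0.40}  # P("river" | sense)
posterior = bayes_update(prior, likelihood)
```

Even though the finance sense is more common a priori, the evidence flips the verdict decisively toward the river sense.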
In order to act like a human, software needs to understand basic human concepts. I had to build several new data types to represent these concepts, just as we have ints and strings. So there are datatypes for Words and WordSenses (a word can have multiple senses, and multiple words can share the same sense).
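The many-to-many relation between words and senses can be sketched like this (class and field names are illustrative, not the actual datatypes):

```python
# Sketch of Word/WordSense datatypes with a many-to-many relation.

class WordSense:
    def __init__(self, gloss):
        self.gloss = gloss
        self.words = []          # all words that express this sense

class Word:
    def __init__(self, spelling):
        self.spelling = spelling
        self.senses = []         # all senses this word can carry

    def add_sense(self, sense):
        """Link word and sense in both directions."""
        self.senses.append(sense)
        sense.words.append(self)

# "bank" has two senses; one of those senses is shared with "shore".
financial = WordSense("a financial institution")
riverside = WordSense("sloping land beside water")

bank = Word("bank")
bank.add_sense(financial)
bank.add_sense(riverside)

shore = Word("shore")
shore.add_sense(riverside)
```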
I use several natural language databases such as WordNet, which contain real-world semantic knowledge. WordNet is nice because it is a machine-readable dictionary that categorizes all the words of the English language and maintains dozens of relationships between them, such as synonymy, hypernymy (dog IS-A animal), and meronymy (finger IS-PART-OF hand).
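A hypernym relation like WordNet's can be walked transitively; here is a toy version with a few hand-built facts (not real WordNet data):

```python
# Toy IS-A (hypernym) chain, walked transitively.
hypernyms = {
    "dog": "canine",
    "canine": "carnivore",
    "carnivore": "animal",
}

def is_a(word, category):
    """Follow the IS-A chain upward until we hit category or run out."""
    while word in hypernyms:
        word = hypernyms[word]
        if word == category:
            return True
    return False

print(is_a("dog", "animal"))   # True: dog IS-A canine IS-A carnivore IS-A animal
```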
There are other data structures for Sentences, Paragraphs and Discourse. If computers seem rigid, mechanical, and not smart today, it's because current software programs lack these notions of real-world concepts. (Of course, to implement these I had to write a natural language parsing engine. It's ironic that all the theoretical concepts you learned in college, like NDFAs and CFGs, which seemed overly academic and impractical at the time, come back to haunt you in a big way when you implement anything moderately ambitious.)
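To show the kind of CFG machinery involved, here is a compact CYK recognizer over a tiny toy grammar in Chomsky normal form (the grammar and code are illustrative only, not the parsing engine described above):

```python
# CYK recognition for a toy CFG in Chomsky normal form.
from itertools import product

# S -> NP VP, NP -> Det N, VP -> V NP; words handled via a lexicon.
binary = {("NP", "VP"): "S", ("Det", "N"): "NP", ("V", "NP"): "VP"}
lexicon = {"the": "Det", "dog": "N", "cat": "N", "chased": "V"}

def recognizes(words):
    """Return True if the grammar derives the word sequence."""
    n = len(words)
    # table[i][j] holds all nonterminals deriving words[i:j].
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i + 1].add(lexicon[w])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for a, b in product(table[i][k], table[k][j]):
                    if (a, b) in binary:
                        table[i][j].add(binary[(a, b)])
    return "S" in table[0][n]

print(recognizes("the dog chased the cat".split()))  # True
```

The dynamic-programming table is precisely the sort of "overly academic" construction from a formal languages course that turns out to be indispensable in practice.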