	      August, 1996
	      
	      "You cannot achieve the impossible without
	      attempting the absurd."
	      
	      This project is called "huey," which is a rough
	      agglomeration of the words "human" and "ynterface."  The
	      goal is to make a program that can understand language,
	      maintain context, remember, and answer questions.  A lot
	      of work has been done on natural language and cognition
	      by a lot of very talented people; I am trying to be
	      realistic about my expectations for the project.  At the
	      same time, I have been thinking about the problem on and
	      off since 1980, when I took a natural language
	      processing course at the University of Connecticut.

	      My approach is brute force.  I am going to go over every
	      corner of the colloquial language, identifying
	      relationships between words and ideas.  The machine can
	      compile these into conceptual "families" or topological
	      "neighborhoods"--places where like ideas reside.
	      Together, these relationships will provide a platform
	      for "basic knowledge"--the kinds of understandings that
	      everyone has about the world about them.
	      
	      New information, taken as natural language, will be
	      compiled into strings of idea "family" identifiers.  A
	      given utterance may be parsed any of N ways.  Each
	      possibility will be matched against the collection of
	      "basic knowledge," and against the current context to
	      find a best match.  As time goes on, the machine may
	      learn more about its surroundings from its conversations
	      with others.
	      
	      A lot has to go right, frankly.  I have dismissed the
	      problem of attempting to make the machine really "know"
	      anything.  I believe it is possible to *almost* know
	      something by knowing things about it.
	      
	      Anyway, the result should be a lot of data structures
	      and code.  Wish me luck.

	      -Kevin Dowd
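
For readers who want something concrete, the matching step sketched in that 1996 note can be illustrated in a few lines of code. This is only an illustration in my own loose terminology; the identifiers, scoring, and data structures below are invented for the example and are not taken from the Brainhat source. Candidate readings of an utterance are reduced to triples of concept-family identifiers and scored against a store of basic knowledge and the current context; the highest-scoring reading wins.

    # Illustration only: concept families are plain strings, "basic knowledge"
    # is a set of (family, relation, family) triples, and the current context
    # is the set of families mentioned recently in conversation.

    BASIC_KNOWLEDGE = {
        ("dog", "is-a", "animal"),
        ("animal", "can", "run"),
        ("ball", "is-a", "toy"),
    }

    def score_reading(reading, knowledge, context):
        """Score one candidate reading: a list of (family, relation, family) triples."""
        score = 0
        for triple in reading:
            if triple in knowledge:
                score += 2                     # agrees with basic knowledge
            subject, _, object_ = triple
            score += sum(1 for f in (subject, object_) if f in context)  # agrees with context
        return score

    def best_reading(candidates, knowledge, context):
        """Pick the candidate reading that best matches knowledge and context."""
        return max(candidates, key=lambda r: score_reading(r, knowledge, context))

    # Two hypothetical parses of "the dog can run"; the first fits basic knowledge.
    candidates = [
        [("dog", "is-a", "animal"), ("animal", "can", "run")],
        [("dog", "is-a", "toy"), ("toy", "can", "run")],
    ]
    print(best_reading(candidates, BASIC_KNOWLEDGE, context={"dog"}))

In the real system the families would come from the conceptual neighborhoods described above, and the scoring would presumably be far richer, but the shape of the computation is the same: propose every reading, then let knowledge and context pick one.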
              
The project was called "Huey" only briefly. It came to be called Brainhat, named after a spaghetti colander with a chin strap. There was a piece on the radio about how students at the University of Auckland were using brainwaves to control light bulbs. Intrigued, I wired up a pair of differential amplifiers and mounted them on top of the colander. It worked! But I lost interest. The colander ended up in Mike Loukides' garage.

 
I had the domain, so I kept the name. 'Brainhat' became my effort to model knowledge, inference, reason and intuition, and to scale it. My toy experiments gave satisfying results, and it seemed to me that it would take just a little more effort to turn them into working models. By the early 2000s, I was exhibiting Brainhat at speech technology shows, giving interviews and appearing several times on television news. For a brief while, a few colleagues and I had a company for the purpose of developing a market. Among the dead-end ideas was creating content for automated phone sex! My fault. Because of the way I demonstrated it, people confused it with chat bots. But Brainhat was never primarily about interaction; it has always been about knowledge representation and computation with knowledge.

So, how do we describe it today? Is this AI? No. The term Artificial Intelligence has come to mean high-order curve fitting with predictive capabilities. AIs are trained. The product is matrices whose contents represent the data they were trained on, structured for the program that trained them. In that sense, they're somewhat static. To update an AI, one has to retrain it. Accordingly, in use, they're read-only intelligence. Any sequential or stateful behavior has to be simulated by adding code around an AI. Brainhat could be that kind of code; Brainhat + AI would be a powerful combination. But Brainhat is not AI in the current conventional sense of the term.

Is this Natural Language Processing? Maybe! But, no.... In the 1970s, Brainhat would have been called Natural Language Processing (NLP). That term has since been repurposed to mean language translation, text summarization and interfaces for smart speakers. Brainhat processes knowledge. Human language is valuable for conveying knowledge, but once Brainhat compiles it into a knowledge representation, the language is thrown away. So, in that sense, there's no natural language processing at all, except when Brainhat is communicating with you; Brainhat processes knowledge.
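
As a toy illustration of that point (again in my own invented structures, not Brainhat's), a sentence might be compiled into a small concept frame and the original wording discarded:

    # Illustration only: a sentence is reduced to a frame of concept references;
    # after compilation, the surface text plays no further role.

    def compile_to_frame(subject, verb, object_):
        """Build a tiny knowledge frame; the wording used to express it is not kept."""
        return {"subject": subject, "action": verb, "object": object_}

    # "The dog chases the ball" and "the ball is chased by the dog"
    # would both compile to the same frame:
    frame = compile_to_frame("dog", "chase", "ball")
    print(frame)   # {'subject': 'dog', 'action': 'chase', 'object': 'ball'}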

Is this for Chat Bots? No! (please...). Brainhat is for computing with knowledge. Brainhat can interact with users in simple language. But one can also construct whole Brainhat applications that don't use language. The programming language is English, is all. And Brainhat's speech is wonky; it makes an odd chat bot.

The terms--AI and NLP--have referred to so many technologies over the years that I am loath to call Brainhat by either, even though either might have fit in the past. So, I will refer to Brainhat as a Knowledge Operating System (KOS); it is a platform that brokers knowledge-based events. And I will call what it does 'Knowledge-Based Computation,' because that is what it does--it computes with knowledge.

Brainhat code and development notes have been in the wild since around the time the project began. I stopped releasing the source around 2010, but Brainhat has remained generally available in binary form. Until then, my notes were released both on the web and in published form; if you look on eBay now you may find them. My intention is that there will always be a community version of Brainhat. There's not a stitch of third-party code within Brainhat, so the source code is best described as copyrighted and proprietary.

 
Copyright © 2026, Kevin Dowd. Contact: dowd@atlantic.com