Back in the day, there were two distinct paths for making
computers "think." The first was a family of inventions
that includes multi-layer perceptrons, neural nets,
genetic programming and machine learning. These are forms
of curve-fitting. If you have ever performed a linear
regression then you are familiar with curve-fitting: one
graphs a set of points and derives a linear equation that
models their behavior. Just a few years ago, higher-order
curve-fitting appeared as the science behind deciding what
kind of music you'd like to hear next, or what else you
might want to buy. It's gotten a lot more sophisticated,
and is now called "AI."
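To make the curve-fitting idea concrete, here is a minimal least-squares linear regression in plain Python. It is purely illustrative of the "graph points, derive an equation" notion above; the function name and data are invented for this sketch and have nothing to do with how any real recommender or neural net is implemented.

```python
def linear_fit(points):
    """Least-squares fit of y = m*x + b to a list of (x, y) points.

    Solves the standard normal equations for a single variable:
    m = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2),  b = (Sy - m*Sx) / n.
    """
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Points that lie exactly on y = 2x + 1 recover the line:
m, b = linear_fit([(0, 1), (1, 3), (2, 5), (3, 7)])  # m = 2.0, b = 1.0
```

Modern "AI" generalizes this same move to curves with billions of parameters; the training matrices mentioned below play the role that `m` and `b` play here.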
AIs are trained. The products of training are matrices of
numbers. The equations for the curves are unknown. AIs
reproduce the data they were trained on when similar
conditions present themselves. Generative AIs are large,
intensely capable, but static. They know a great deal,
but they don't reason.
The other old-school approach to making computers "think"
is the product of linguistics and epistemology. Language
loosely captures knowledge. Using a computer to model and
interact with language was called Natural Language
Processing (NLP). In the 1970s, classical NLP
was a popular pursuit, and it seemed like an academic
avenue that would yield intelligent computing more quickly
than curve-fitting. But NLP was difficult, brittle and did
not scale. Excitement waned, and other forms of artificial
intelligence, such as Expert Systems, took the limelight
until their prospects dimmed as well. NLP has
since come to mean other things like text summarization,
translation, data mining and command-and-control, like
Siri.
Brainhat is not AI in the current sense
Brainhat is a lonely adult orphan of this second approach
to teaching computers to "think," thirty years in the
making. It treats human language like a programming
language. The problems of brittleness and scale are
addressed with massive coarse-grained parallelism. A
Brainhat instance can include an unlimited number of
computers, each with its own knowledge domains. And that
same swarm of machines can serve many users, all at once,
across the globe. Where toy implementations built on
old-school NLP systems fail, Brainhat approaches the
problem with Gestalt: more is better!
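The coarse-grained parallelism described above might be sketched as fanning a question out to many workers, each holding its own knowledge domain, and keeping whatever answers come back. Everything in this sketch is hypothetical: the domain tables, the `ask_domain` stub, and the fan-out function are invented for illustration and are not Brainhat's actual interface, which queries separate machines over a network.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical knowledge domains. A real deployment would have each
# domain live on its own machine, not in a local dictionary.
DOMAINS = {
    "weather": {"is it raining": "no"},
    "geography": {"where is rome": "in italy"},
}

def ask_domain(domain, question):
    """One worker consults one knowledge domain (illustrative stub)."""
    return DOMAINS[domain].get(question)

def ask_all(question):
    """Fan the question out to every domain in parallel; keep real answers."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda d: ask_domain(d, question), DOMAINS)
    return [r for r in results if r is not None]

answers = ask_all("where is rome")  # ["in italy"]
```

The design point is that each worker is independent and ignorant of the others, so adding machines (or domains, or users) only widens the fan-out; no single worker has to know everything.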
An invitation to you
Over the course of 2024, I am working to release the
current version of Brainhat and explain it. There is much
to cover--from the basics of coding and parsing, to
debugging, to networking, to the creation and sharing of
knowledge. My approach will be to verify a portion of the
code, address issues that may have been introduced over the
years, update the web site, and make a release. Then do it
again, repeatedly, until the process is complete. A
programmer/user manual is in the works; I will publish it
when the vetting process is finished.
Thank you for your patience and please join me! I hope that
you will find that creating knowledge for use with Brainhat
is interesting, fun and something you can share.