Anyway, back to work

So, anyway, back to work. I've been incorporating ChatGPT and other LLMs into Brainhat. To talk to them, I need to send and receive JSON. Mother, help me. I could also use JSON for third-party software that wants to talk to Brainhat. So it all points to me needing to handle JSON in HTTP POSTs and such. Another detour!

I know that if I go looking for something online, it will use some crap like cmake or m4. So, I have to write it myself. To write it, I have to understand it. I outlasted XML, by the way, I think.... Oh wait. I didn't. I used to produce it--VoiceXML. Do you remember that?
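
For the flavor of it, here is a minimal sketch of the emit side. Nothing here is Brainhat code--json_escape is a made-up helper--but it shows that producing JSON from C is mostly string assembly with careful escaping:

#include <stdio.h>

/* Hypothetical helper: copy src into dst as a JSON string body,
   escaping quotes and backslashes (control characters omitted
   for brevity). */
static void json_escape(char *dst, size_t len, const char *src)
{
    size_t j = 0;
    while (*src && j + 2 < len) {
        if (*src == '"' || *src == '\\')
            dst[j++] = '\\';
        dst[j++] = *src++;
    }
    dst[j] = '\0';
}

int main(void)
{
    char esc[256], body[512];
    json_escape(esc, sizeof(esc), "say \"hello\" to Brainhat");
    snprintf(body, sizeof(body),
             "{\"role\": \"user\", \"content\": \"%s\"}", esc);
    puts(body);  /* {"role": "user", "content": "say \"hello\" to Brainhat"} */
    return 0;
}

Parsing is the harder half, of course. That's the part I have to understand before I write it.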

Risc-V!

I built a little system out of a MilkV motherboard running Bianbu Linux. It works! And now there's a Risc-V port of Brainhat. It was no problem to compile after the thrashing the compiler under Omnios put me through. No hard feelings; I deserved it.

I ported Brainhat to Omnios

I ported Brainhat to Omnios (I really like Omnios!). I say 'ported' because the compiler complained about my code... a lot! I've just run the test suite. There are things to fix. It looks like Gabriela didn't run. Tests 126 and 127 look broken. Probably more. Very few of the test outputs appear to agree with the saved outputs. I don't see a difference in every case, though; it might be due to a missing line feed or such. I had to change the sed invocation in the makefile, and that probably has something to do with it.

Hmmm... I am seeing this message from httpd.c when trying to fetch a page:

httprequest: nreads = MAXREADS; shutting down.

Doesn't appear to shut anything down, though.

Here's one line that is causing gabriela to dump core:

"Jorge returned to Tapachula to study law."

Digging into it.

BUGs:

>> i returned home to study  <--- "home" should be interpreted as "to home".  It's a one-off.
autohash3_a: couldn't find a CHILD-OF in Root-OBJECT
autohash3_a: couldn't find a CHILD-OF in Root-OBJECT
 You did return home studies.autohash3_a: couldn't find a CHILD-OF in Root-OBJECT

and

>> i returned to home to study
./run: line 6: 12384: Memory fault(coredump)

This is the same bug as in Gabriela. Tough to find(!) What does this say, anyway? "I returned to home because I study."

--effect-- because --cause-- ?

Does it work for "the dog went to the bowl to drink"? Yes. It works. Sort of. Which is the cause and which is the effect? Maybe it says "dog to drink because dog went to the bowl." Or maybe it says "dog went to the bowl because dog to drink". That seems better. The cause is the second part--to drink or to study.

How do I capture that in grammar? I'll work on that. But first: why AM I GETTING a segv?

>> i returned to the bowl to study
 You did return to the bowl.
>> i returned to the home to study
./run: line 6: 12540: Memory fault(coredump)

Hmmm.... "home" has a problem but "bowl" does not.

Ach! It's a problem with a rewrite rule! I'm an idiot; the rewrite routine was in the traceback. I was looking past it to grammar issues. And, in fact (though the rewrite seems broken...), it's already translating the input into cause and effect. This is the problem with working on a 30-year-old project. I've been here already!

Anyway, the rewrite is working under Linux. I musta messed something up. Yep... needed a static int declaration. It is working now.
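
The fix was literally one storage-class keyword. As a hypothetical illustration of the failure class (my example uses a buffer rather than an int, because that is the shape that coredumps most directly):

#include <stdio.h>

/* Hypothetical illustration, not Brainhat's actual rewrite routine:
   storage handed back to the caller must outlive the call. */
static const char *rewrite(const char *in)
{
    static char buf[256];  /* drop "static" and buf dies with the stack
                              frame; the caller gets a dangling pointer
                              and, eventually, a memory fault */
    snprintf(buf, sizeof(buf), "rewritten: %s", in);
    return buf;
}

int main(void)
{
    puts(rewrite("i returned to home to study"));
    return 0;
}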

I also had to increase the "MAXREADS" setting in the http code from 9 to ... 100. The Omnios version was resetting the connection because it saw too many incoming web connections(?). I dunno. I wrote the code, is all.
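
For the curious, a sketch of the kind of guard involved--an assumed shape using the names from the log message above, not the actual httpd.c:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAXREADS 100  /* was 9 */

/* Hypothetical sketch: read an HTTP request in pieces, giving up
   after MAXREADS reads.  A TCP stack that delivers the same bytes
   in more, smaller chunks will trip a low limit. */
static int readrequest(int fd, char *buf, size_t len)
{
    size_t got = 0;
    int nreads = 0;

    buf[0] = '\0';
    while (got < len - 1) {
        ssize_t n = read(fd, buf + got, len - 1 - got);
        if (n <= 0)
            break;                    /* EOF or error */
        got += (size_t)n;
        buf[got] = '\0';
        if (strstr(buf, "\r\n\r\n"))  /* end of request headers */
            break;
        if (++nreads >= MAXREADS) {
            fprintf(stderr, "httprequest: nreads = MAXREADS; shutting down.\n");
            break;
        }
    }
    return (int)got;
}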

Goldilocks from ChatGPT

Here's a sample from ChatGPT:

Goldilocks was a girl.
She walked in the woods.
She found a house.
The house had three bears.
The bears lived in the house.
The bears woke up.
They made porridge.
Papa Bear's porridge was too hot.
Mama Bear's porridge was too hot.
Baby Bear's porridge was just right.
Goldilocks went to the kitchen.
Goldilocks tasted Papa Bear's porridge.
Goldilocks tasted Mama Bear's porridge.
Goldilocks tasted Baby Bear's porridge.
Baby Bear's porridge was just right.
Goldilocks ate Baby Bear's porridge.
Goldilocks went to the living room.
Goldilocks sat in Papa Bear's chair.
The chair was too hard.
Goldilocks sat in Mama Bear's chair.
The chair was too soft.
Goldilocks sat in Baby Bear's chair.
The chair was just right.
The chair broke.
Goldilocks went to the bedrooms.
Goldilocks lay in Papa Bear's bed.
Papa Bear's bed was too hard.
Goldilocks lay in Mama Bear's bed.
Mama Bear's bed was too soft.
Goldilocks lay in Baby Bear's bed.
Baby Bear's bed was just right.
Goldilocks fell asleep.
The bears came home.
Papa Bear said, "Someone has eaten my porridge."
Mama Bear said, "Someone has eaten my porridge."
Baby Bear said, "Someone has eaten my porridge."
They saw the bowls were empty.
They saw the chairs were moved.
Baby Bear saw his chair was broken.
Papa Bear saw his porridge was eaten.
Mama Bear saw her porridge was eaten.
They found Goldilocks in Baby Bear's bed.
Goldilocks woke up.
She ran away.

ChatGPT's output will vary. After cleaning up the line feeds and such, the problems I would have with this block include:

"Goldilocks was a girl": Brainhat won't parse this correctly. It can parse "A girl's name is Goldilocks", however. I can approch it one of two ways--with a rewrite rule or new grammar. In either case, I'd need to recognize that "Goldilocks" is not in the vocabulary. That would differentiate the statement from something like "the pitcher was a girl."

"The house had three bears": Literally, this is an odd statement. Inference needs to fix it.

Use of the plural as in "The bears woke up" and "They made porridge" might be wonky.

"Papa Bear's porridge was too hot": we've never been introduced to "Papa Bear" as a formal title. To Brainhat, this will appear as two nouns, side-by-side. Something like "the big bear's name is papa bear" would take care of it.

By introducing these problems and the process to readers, I could show why manual rewrites are needed, and what they should be. I think that will get me there faster. Otherwise, I am diving back into development.

Anyway, whatever page I create for this translation should aid in making the translation on a line-by-line basis.

See also: https://www.brainhat.com/blog/gabriela-goes-chatgpt-and-back.html

Sending stuff to ChatGPT

So, as I wasn't saying, it seems to me that a good first marriage of Brainhat and ChatGPT would be for ChatGPT to simplify input for Brainhat to use and then for ChatGPT to make a summary of the context when needed. This could have a page associated with it. I'm not doing any server-side storage (or local storage for that matter). I don't know where the integration will occur, but setting up another web page should be interesting enough.

The next integration will be to use ChatGPT to disambiguate input, to provide answers to otherwise unanswered questions and to support goal-seeking processing.

But, first, another web page. It will have a window where the user can post some text for translation into Brainhat language (another window). Maybe if you post in the ChatGPT window you get the Brainhat version in the other window and vice versa. Or maybe there could be a "send to ChatGPT" button for the context or a "Send to Brainhat" for the ChatGPT translation. We'll see.

See: https://www.brainhat.com/blog/gabriela-goes-chatgpt-and-back.html

ChatGPT and back

Here's an experiment. I read in part of Gabriela. Then I fed the first twenty lines of the context to ChatGPT, like so:

"Summarize the following, taking the sentences in reverse order: You
plan You do make your pretty family.  You plan You do marry You. You
plan You do go to europe. You plan You do go to hawaii. You plan You
do make money. You plan You do work. You plan You do finish your
major. You plan You do study your major. You plan You do return to
mexico. You plan You do learns english. You have You do plan. You like
your pretty dog belonging to You. You like a college will be perfect
for You. You may be with hugo. You may be with your pretty family. You
believe a college will be perfect for You. a college is described. You
are from tapachula. You want You to study your major."

This is what came back:

Here’s a clear, simple summary of your passage — with the ideas
arranged in **reverse order** (starting from the last sentence and
moving backward):

You want to study your major and come from Tapachula.  You believe
college will be perfect for you and that you may be with Hugo and your
pretty family.  You like your pretty dog and think college will suit
you well.  You have plans — to learn English, return to Mexico, study
and finish your major, work, make money, travel to Hawaii and Europe,
get married, and build a beautiful family.

Now, I ask ChatGPT to turn the block of text above into simple sentences so that I can feed it back to Brainhat:

Using only words that you find in the following text, convert the text
into very simple sentences with one idea expressed per sentence: You
want to study your major and come from Tapachula.  You believe college
will be perfect for you and that you may be with Hugo and your pretty
family.  You like your pretty dog and think college will suit you
well.  You have plans — to learn English, return to Mexico, study and
finish your major, work, make money, travel to Hawaii and Europe, get
married, and build a beautiful family.

Here is what ChatGPT gave back to me:

Here is the text rewritten in **very simple sentences**, each with one clear idea, using **only the words from the original text**:

1. You come from Tapachula.
2. You want to study your major.
3. You believe college will be perfect for you.
4. You may be with Hugo.
5. You may be with your pretty family.
6. You like your pretty dog.
7. College will suit you well.
8. You have plans.
9. You learn English.
10. You return to Mexico.
11. You study your major.
12. You finish your major.
13. You work.
14. You make money.
15. You travel to Hawaii.
16. You travel to Europe.
17. You get married.
18. You build a beautiful family.

Now, I will feed this back to a fresh instance of Brainhat and see what we get. I will shut off creditspkr so I don't get "You say..." for each line. I didn't include the two lines with the asterisks; I will explain below.

You come from Tapachula.
You want to study your major.
You believe college will be perfect for you.
* You may be with Hugo.
* You may be with your pretty family.
You like your pretty dog.
College will suit you well.
You have plans.
You learn English.
You return to Mexico.
You study your major.
You finish your major.
You work.
You make money.
You travel to Hawaii.
You travel to Europe.
You get married.
You build a beautiful family.

Here is the context:

debug> xspeak 1
I make a pretty family. I get married. I travel to europe. I travel to
hawaii. I make money. I work. I finish my major. I study my major. I
return to mexico. I learn english. I have I do plan. I like my pretty
dog. I believe college will be perfect for I. I want I to study my
major. I come from thing.

"I come from thing" (instead of Tapachula) is because the fresh copy of Brainhat does not know 'Tapachula.' It was originally introduced in gabriela.txt like so:

"I am from a city.
The city's name is Tapachula."

I also had to comment out "You may be with Hugo" and "You may be with your pretty family" because there is an old ponder routine that sees something in an imperfect future tense and tries to be clever and ask it as a question. I haven't depended on that for many years--will likely disable it.

ANYWAY! We took a round trip: 1) input to Brainhat to create a context, 2) feeding statements from the context to ChatGPT to create a fluid summary, 3) asking ChatGPT to re-express its own summary in simple statements, and 4) feeding the output of ChatGPT back to Brainhat as input. With the exceptions noted above, we essentially got back the same context we started with. Cool.

ChatGPT API

I signed up for some kind of ChatGPT API service. OpenAI offers API code in a number of languages--Go, Python, Javascript... not C. I was looking at invoking Python scripts from C. It is possible through a library, or by fork-and-exec of a shell script. But populating some JSON templates and contacting ChatGPT through libcurl looks like the easiest thing to do.

Here is an example from the commandline:

curl -v https://api.openai.com/v1/chat/completions -H "Content-Type: application/json" \
-H "Authorization: Bearer <arcane authorization here>
-d '{
  "model": "gpt-5-nano",
  "messages": [{"role": "user", "content": "In a few words, answer this question: does mario love the princess"}]
}'

The trouble is, the answer comes back: "Yes, he loves Princess Peach." (!)

Oh? Do we know that?

Now to embedding it in Brainhat. libcurl is available from https://curl.se/libcurl/ with little license restriction. There are non-blocking calls, which means I can ask ChatGPT a question and go away. It takes a long time to answer, typically.
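
As a sketch, here is the curl command above re-expressed with libcurl, blocking variant (the non-blocking path would use the curl_multi_* interface instead). Reading the key from OPENAI_API_KEY is my convention here, not anything Brainhat does yet:

#include <stdio.h>
#include <stdlib.h>
#include <curl/curl.h>

/* Write callback: just stream the JSON reply to stdout. */
static size_t sink(char *data, size_t size, size_t nmemb, void *userp)
{
    fwrite(data, size, nmemb, (FILE *)userp);
    return size * nmemb;
}

int main(void)
{
    const char *key = getenv("OPENAI_API_KEY");
    if (!key) {
        fprintf(stderr, "OPENAI_API_KEY is not set\n");
        return 1;
    }

    char auth[512];
    snprintf(auth, sizeof(auth), "Authorization: Bearer %s", key);

    const char *body =
        "{\"model\": \"gpt-5-nano\","
        " \"messages\": [{\"role\": \"user\","
        " \"content\": \"In a few words, answer this question:"
        " does mario love the princess\"}]}";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    struct curl_slist *hdrs = NULL;
    hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
    hdrs = curl_slist_append(hdrs, auth);

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.openai.com/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, sink);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, stdout);

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "curl: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}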

What Next?

Fixed a bug from May 4th. Was not recording multiple conditions in the result of an inference.

What next? GUI pages for vocab, communications, memories or scripts? Glue into ChatGPT? Integrate with MQTT? Try to create an application? Maybe Brainhat agents that follow network traffic looking for a reason?

Is your banana yellow?

Done. Now, in theory, class attributes are just a special case of T1 memories.

BUG: There is a bug in test xlarge007. I ask "is your banana yellow?" The answer should be "no. banana is red."

Testing T1 memories and class attrs now. Source memory five-things includes "bananas are yellow." Let's see if it restores correctly. Hmmm... didn't work for a T3 vignette. Need to look at ponder routine remember().

Let's try for a T1. Sort of works. Needed to restore with "what color are bananas?" before "what color is my banana?" would work.

Class attributes in memories

- Posted in Uncategorized by

Hoping to have a little time this weekend for Brainhat.

Where I was: I need to hash class attributes when a memory is restored. Then on to memories, though it may turn out that this is enough to cover local and remote memories. Class attributes look like propositions and attribute assignments to the existing routines. That might be good enough. Lots of restoration routines in state.c call addtocontext routines. Mproc classdesc() recognizes a class description and hashes it into the rhash in the core hash table. classdesc() is only called in one place, in input-patterns, just after addtocontext(). So, perhaps addtocontext() should call it directly.

Hmmm... that might be too simplistic. I'm just looking for a way to apply classdesc() everywhere it needs to go without actually applying it everywhere it needs to go. Maybe adding it to addcctoctxt() and addcctoctxtmotv() would cover all use cases. It might. The call to addtocontext() has to continue as it is, because a class description needs to be in the context and hashable so that questions like "are bears hungry" can be answered in addition to "is a white bear hungry." And addtocontext() is already smart enough not to create a dirty "bears" (plural) when it encounters "bears are hungry."

So, trying this: invoking classdesc() in addcctoctxt() and addcctoctxtmotv() following the call(s) to addtocontext(). It may require a little bit of spaghetti. The CC pattern that checks for class attributes from a route level is addtoc9.
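
In sketch form, the placement being tried looks like this. The types and return conventions are stand-ins--Brainhat's real declarations aren't shown here--but it captures the idea:

/* Stand-ins for Brainhat's real types and routines. */
typedef struct concept CONCEPT;

extern int  addtocontext(CONCEPT *cc);  /* existing context insertion */
extern void classdesc(CONCEPT *cc);     /* hashes a class description */

int addcctoctxt(CONCEPT *cc)
{
    int rc = addtocontext(cc);  /* the context entry stays, so questions
                                   like "are bears hungry" still hash */
    if (rc == 0)
        classdesc(cc);          /* then hash class attributes directly */
    return rc;
}

/* addcctoctxtmotv() would get the same treatment. */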
