What Next?

- Posted in Uncategorized by

Among the things giving me pause is that I need to finish thinking about how to compile a file full of memories--not Vignettes, but a mix of different types of memories, possibly separated by blank lines so they don't pollute one another. I could figure that out now, before implementing T4, but I think I will just implement T4 like the others, even if I end up deprecating it or ripping it up for something better eventually.

I am starting with cstoretriggers(), storing class attributes. Maybe. How come class attributes aren't already stored as T1 memories? Checking to see if they are... They are. But they're not restored as class attributes. So I have to look into that too--state.c.

Class descriptions as memories


The answer to the last question is 'not far'. Here is the method for storing T1 memories (propositions), for use as a template:

qstoretriggers() is a ponder routine that walks the whole context looking for simple propositions. It writes out each one it finds and adds hashes to the associated hash table. In order for qstoretriggers() to run, clarg_autoremember and clarg_remembert1 both have to be true. This only happens in bhatexit(), on the way out the door: the clarg_autoremember flag gets set to create a one-shot memory-saving event. bhatexit() also writes out the hash table associated with all memories, including propositions. This hash table was read in earlier and updated in memory.

(Note that if the '-T' flag is used, swaths of the context get written out less selectively from storetriggers().)

In order to access these memories, the -A flag is set (clarg_autorecall). For propositions, routines that source the (T1) memories include declques2(), equivques(), qremember(), t1remember() and t1client().

I'm not sure yet how many of these or similar routines will be needed for T4 memories (class attributes). Some of it is written already. Class attributes (which are the basis for T4 memories) are already recognized, hashed and stored in core. declques() already has working code to use them, as does classattr(), an inline mproc.

I will need routines to save T4 memories, and need to save and restore the hash table from bhatexit() and main() (or by an http callback routine), respectively. T4 memories will be saved alongside other types in the same hash table...

As an example of cross-network memories, the ponder routine t3client() is clear enough: I send a swath of the context to however many remote subscribers I may have. t1client() looks much the same, which is a little confusing. I would have thought I would just send propositions. Have to look into that. I might have been lazy.

Oh, wait! I think I remember. I send the remote a swath and it will send me back the T1, T2, whatever memories that are hashed up by the swath. Via callback, I will store them into the memories directory and hash them into the existing memory hash (in core). Then they will be available for my use.

BUG:(?) It looks to me like the callback cbk_t1netmem() only sends attribute assignments. It needs to handle general propositions, too.

TODO: So, in summary: 1) I have to add the T4 hashes to the regular mhash table, setting their type to 4 so that they aren't sourced incorrectly; 2) I need to write them to the memories directory; 3) in order to be able to restore them, I need routines like qremember() for inline memory recall during questions and t1remember() for trail-behind recall of memories about things.

And for remote memories:

I will duplicate t1client() into t4client(), and I might change the swath of stuff that's sent across to include discourse (or questions) instead of context. The callbacks for client and server will look like the existing T1 callbacks, except that they will handle T4. OR MAYBE I should just bunch T1 and T4 together. They're sort of similar, except that one is actual attributes and the other is class attributes... Class attributes could end up in the rhash table by being processed by classdesc(), just as inferences end up in the chash and phash by going through pcomp().

Debugging class descriptions


BUG: "large bear weighs 600 pounds". The last part, "600 pounds", should be an adverbial phrase. It's not. "600" is the OBJECT.

BUG: This is still not right...

>> bananas are yellow
 banana is yellow.
>> dog's banana is big
 dog's yellow banana is large.
>> cat's banana is small
 cat's banana belonging to cat is small.
>> break

Break in debug at the start:
debug> list banana
Motive: default
  (clean text) banana->banana-n1 (clean), MID 10000
  (context text) banana->banana-n1-ae57 (dirty), MID 10000
  (context text) banana->banana-n1-b140 (dirty), MID 10000
debug> cdump banana-n1-ae57
define  banana-n1-ae57 [7426] (fe104310) /2710/ dirty
        label           banana
        label           bananas
        label           banana-n1
        attribute               big-a1
        attribute               Root-05f69060
        attribute               yellow-a1
        orthogonal              fruit-n1
        child-of                fruit-n1

debug> cdump banana-n1-b140
define  banana-n1-b140 [7426] (fe852600) /2710/ dirty
        label           banana
        label           bananas
        label           banana-n1
        attribute               small-a1
        attribute               Root-05f691e3
        orthogonal              fruit-n1
        child-of                fruit-n1

The first banana created with "bananas are yellow" should be plural and non-writeable, no? It could then be cloned and updated with orthogonal attributes. It should only be updated natively with other plural attributes, like "curved". All bananas are curved and yellow except for those that explicitly aren't. I think I need to go back and finish/fix this.

Wait... there was a new construct--a plabel. It is used for bears but not for bananas. Let me fix this and try again. Okay! That makes all the difference:

Initializing

>> bananas are yellow
 banana are yellow.
>> break list banana
Motive: default
  (clean text) banana->banana-n1 (clean), MID 10000  **<-no dirty bananas**

>> dog's banana is big
 dog's banana belonging to dog is large.
>> cat's banana is small
 cat's banana belonging to cat is small.
>> what color is dog's banana
 banana are yellow.
>> what color is cat's banana
 banana are yellow.
>> break list banana
Motive: default
  (clean text) banana->banana-n1 (clean), MID 10000
  (context text) banana->banana-n1-affe (dirty), MID 10000
  (context text) banana->banana-n1-b198 (dirty), MID 10000

>> break cdump banana-n1-affe
define  banana-n1-affe [7426] (4b51c960) /2710/ dirty
        label           banana
        label           bananas
        label           banana-n1
        attribute               big-a1
        attribute               Root-05f690af   **<-no color**
        orthogonal              fruit-n1
        child-of                fruit-n1


>> break cdump banana-n1-b198
define  banana-n1-b198 [7426] (4b9e2a40) /2710/ dirty
        label           banana
        label           bananas
        label           banana-n1
        attribute               small-a1
        attribute               Root-05f69233   **<-no color**
        orthogonal              fruit-n1
        child-of                fruit-n1


>> dog's banana is red
 dog's banana belonging to dog is red.
>> break list banana
Motive: default
  (clean text) banana->banana-n1 (clean), MID 10000
  (context text) banana->banana-n1-affe (dirty), MID 10000
  (context text) banana->banana-n1-b198 (dirty), MID 10000

>> break cdump banana-n1-affe
define  banana-n1-affe [7426] (4b51c960) /2710/ dirty
        label           banana
        label           bananas
        label           banana-n1
        attribute               red-a1
        attribute               big-a1   **<-explicit color**
        attribute               Root-05f690af
        orthogonal              fruit-n1
        child-of                fruit-n1

>> what color is dog's banana
 dog's banana belonging to dog is red.
>> what color is cat's banana
 banana are yellow.

Much better. But, there is work to do. See notes from 10/13/24. I also don't know how far I got with class descriptions as memories (T4).

Stuff I need to do


Updated the web site a little. Added info for the updated "Parse Explorer" page. Wondering what's next.

I wanted to work on better separating 1st, 2nd, 3rd, another/the other, etc. for inference templates and local sequences of "things" references.

I might also create one or more memory pools for people to connect to, and use them as an opportunity to explain remote memories, remote updates and other websocket stuff. This would have its own web page and could include a tutorial. In fact, I could include tutorials with the HTML that comes with the Brainhat web server.

I need to explain scripts and show them working on a web page.

I need to integrate LLMs. I could use them to simplify input, to generate output, to disambiguate pronouns, and to dereference phrases like "the guy who drinks beer by the pool." (Who drinks.....?).

Need to revisit robots and MQTT.

Dumb idea of the month: what if I were to use this as a network security tool to add intention to packet sequences, then infer what was happening or what the user was trying to accomplish? And all of this data could be shared across the universe. And I could make some money.

What Next?


I published a new binary copy of Brainhat yesterday. It's been about 16 months. Where does the time go?

Next, I am going to reverse the hash versus search-near-context order of processing in attrques3() and declques2(). We'll see what happens. What I am looking for is more sensible dialog where the latest information appears in answers rather than the first-hashed.

After that, I am going to work on creating an online repository for basic information about something--maybe the barnyard. This will be available to demonstrate remote memories. Along with that will be a new web page for monitoring remote Brainhat interactions.

Hmmmm.... I swapped hash-first for crawl-the-context-first in declques2() and don't see any differences in the QA tests. Next, I will swap the order in attrques3().

Huh. No difference here either...

Problem with garbage collection


I disabled reap() in main() because I found one situation where garbage collection caused a problem.

What was it now? Hmmmm....

Fixed garbage collection


I fixed all of the reap/keep problems, I think. There's a new flag, noreap, for disabling reap/keep. I added it to the GUI. Making a checkpoint.

Fixing some things


Two things are on the agenda:

1) choose and modify a few more mprocs to work with GUI debug

2) fix bugs that I have been accumulating

Yikes! Tried to add debug for reap(). It doesn't appear to be getting called any longer. Since... (looking)... maybe 2019? reap() was being invoked by rule all-complete. I put it in csent, following keep(). I need to look at QA before I turn it on again, though. Like I said: "yikes".

Okay. QA looks steady without it. Now I am going to enable it (REAP). Huh. Looks okay. Wonder if it's doing anything... It wasn't; it was being called at match level "2". Going to adjust its position in input-patterns. That didn't work: csent-common is the first thing called from input-cycle(). Instead, I changed reap() so that it will run at levels 1 and 2 (it was just level 1). Now I see debug output. Let's see what kind of bloodbath the QA tests are...

It's not good. I want to run reap() after everything else is done, including ponder() routines. Let me find a better way to invoke it.

Getting there. Invoking from dialog(). All of the xlarge tests appear to be broken. Everything else looks good.

(later) I had to reboot the computer (up 226 days) because one of the segvs became a runaway kernel process! Powerful medicine, that. Need to work through it on my laptop. Maybe don't do any garbage collection when reading from input files? No. I think the problem was that I didn't save (keep) the results of an implicit inference. To look into.
