The answer to the last question is 'not far'. Here is the method for storing T1 memories (propositions) for use as a template:
qstoretriggers() is a ponder routine that walks the whole context looking for simple propositions. It writes out each one it finds and adds hashes to the associated hash table.
In order for qstoretriggers() to run, clarg_autoremember and clarg_remembert1 both have to be true. This only happens in bhatexit(), on the way out the door: the clarg_autoremember flag gets set to create a one-shot memory-saving event.
bhatexit() also writes out the hash table associated with all memories, including propositions. This hash table was read in earlier and updated in memory.
(Note that if the '-T' flag is used, swaths of the context get written out less discriminately from storetriggers().)
To access these memories, the -A flag (clarg_autorecall) must be set. For propositions, routines that source the (T1) memories include declques2(), equivques(), qremember(), t1remember() and t1client().
I'm not sure yet how many of these or similar routines will be needed for T4 memories (class attributes). Some of it is written already: class attributes (the basis for T4 memories) are already recognized, hashed and stored in core, and both declques() and classattr() (an inline mproc) already have working code to use them.
I will need routines to save T4 memories, and I need to save and restore the hash table from bhatexit() and main() (or from an http callback routine), respectively. T4 memories will be saved alongside other types in the same hash table...
As an example of cross-network memories, the ponder routine t3client() is clear enough: I send a swath of the context to however many remote subscribers I may have. t1client() looks much the same, which is a little confusing. I would have thought I would just send propositions. Have to look into that. I might have been lazy.
Oh, wait! I think I remember. I send the remote a swath and it will send me back the T1, T2, whatever memories that are hashed up by the swath. Via callback, I will store them into the memories directory and hash them into the existing memory hash (in core). Then they will be available for my use.
BUG(?): It looks to me like the callback cbk_t1netmem() only sends attribute assignments. It needs to handle general propositions, too.
TODO: So, in summary,
1) I have to add the T4 hashes to the regular mhash table, setting their type to 4 so that they aren't sourced incorrectly;
2) I need to write them to the memories directory;
3) to be able to restore them, I need routines like qremember() for inline memory recall during questions and t1remember() for trail-behind memory recall of memories about things.
And for remote memories:
I will duplicate t1client() into t4client(), and I might change the swath of stuff that's sent across to include discourse (or questions) instead of context. The callbacks for client and server will look like the existing T1 callbacks, except that they will handle T4.
OR MAYBE, I should just bunch T1 and T4 together. They're sort of similar, except that one is actual attributes and the other is class attributes.... Class attributes could end up in the rhash table by being processed by classdesc(), just as inferences end up in the chash and phash by going through pcomp().