AbstractPhil posted an update 2 days ago

In a nutshell, my recent study shows a few important results; everything else is technical detail.

* There are most definitely invariant architectural geometric states that persist and can be taught.
* They are not coincidental, and the process works effectively on multiple data types and processes, not just noise. Noise is simply fast to test with.
* Systems like SVD, Eigh, Conv, and the like HELP align those systems for larger structures, producing amplified stability, but they are not required for smaller structures, and the tests show even attention gets in the way at the smallest scales.
* Batched arrays, stacks, queues, and so on all improve performance, depending on the task.
* An SVAE battery is resolution-agnostic, meaning that with simple processing and logic you can scan space and capture meshes fairly optimally, recording large amounts of inference data.
* Batteries trained on one specific task can often be used directly for other tasks once a codebook is fitted with the necessary data. A battery trained on Gaussian noise can be fed ImageNet snippets, and downstream the MSE rates from the 64-battery array can be consumed for statistics aggregation with a fair degree of accuracy, without ever training the array on images themselves (see the sketch after this list).
* The battery codebook is a pointwise rigid map within the battery and can be used for pairwise learning when using the H2, H2a, and H2b batteries.
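
To make the reuse idea concrete, here's a minimal sketch, assuming a battery is an array of small autoencoders scored by per-cell reconstruction MSE. `TinyAE`, `Battery`, and `fit_codebook` are illustrative names only, and the codebook fitting is simplified to a few k-means steps over MSE signatures; none of this is the actual battery code.

```python
# Minimal sketch: a 64-cell battery whose output is a per-cell MSE signature,
# plus a post-hoc codebook fit so a noise-trained battery can index new data.
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """One battery cell: a small autoencoder scored by reconstruction MSE."""
    def __init__(self, dim: int = 64, hidden: int = 16):
        super().__init__()
        self.enc = nn.Linear(dim, hidden)
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(torch.tanh(self.enc(x)))

class Battery(nn.Module):
    """A 64-cell array; its signature is the per-cell MSE on an input batch."""
    def __init__(self, n_cells: int = 64, dim: int = 64):
        super().__init__()
        self.cells = nn.ModuleList(TinyAE(dim) for _ in range(n_cells))

    @torch.no_grad()
    def mse_signature(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) -> (batch, n_cells) reconstruction errors
        return torch.stack(
            [((cell(x) - x) ** 2).mean(dim=-1) for cell in self.cells], dim=-1
        )

@torch.no_grad()
def fit_codebook(battery: Battery, samples: torch.Tensor, k: int = 8) -> torch.Tensor:
    # Fit a codebook over MSE signatures (a few k-means steps), so the
    # battery can be reused on unseen data without retraining the cells.
    sigs = battery.mse_signature(samples)            # (N, n_cells)
    codes = sigs[torch.randperm(len(sigs))[:k]]      # init from the data
    for _ in range(10):
        assign = torch.cdist(sigs, codes).argmin(dim=-1)
        for j in range(k):
            mask = assign == j
            if mask.any():
                codes[j] = sigs[mask].mean(dim=0)
    return codes

battery = Battery()                        # pretend: pretrained on Gaussian noise
snippets = torch.randn(256, 64)            # stand-in for flattened image snippets
codebook = fit_codebook(battery, snippets) # codebook fitted post hoc, no retraining
stats = battery.mse_signature(snippets)    # downstream statistics aggregation
```

The point the sketch illustrates is that only the cheap codebook is refit per task; the battery cells themselves stay frozen.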

So this is, in some ways, the evolved state of the geometric vocabulary, and in others a completely new and unexpected systemic development. They stack, they can be reused, they're small enough to swap at runtime with no time loss, they align rapidly, and downstream tasks can consume their information.

There are many untested avenues that I need to produce a full writeup for, because quite frankly it's messy right now, and Claude is only making it messier instead of cleaner.

I'll be determining the capacity and utility potential of the larger batteries: the Fresnel, Johanna, Grandmaster, Freckles, and Johanna-F variants. This should give a good indication of which models can handle codebooks and which are more errant.

I anticipate the D=16 variants will be more errant, and the final-state variants of those could well be much more difficult or costly to run inference on, as their axis bends are likely considerably harder to track. However, I'm confident that enough bounces will give the required yield, so I'll set up some high-yield noise barrages to determine how much we can in fact extract from Johanna, and then set up similar barrages with images to map the internals of Fresnel and Grandmaster.

Grandmaster will be tricky, as it was an experimental series meant to map sigma-noised image inputs to recreate Fresnel's behavioral output. Noised image goes in -> Fresnel-grade replication comes out.
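
As a rough sketch of that objective, assuming Fresnel can stand in as a frozen teacher and the mapping reduces to regressing its clean-input output from a sigma-noised input; the modules, shapes, and training loop here are all illustrative, not the actual Grandmaster pipeline:

```python
# Sketch of the noised-in -> Fresnel-grade-out objective: a learner is trained
# to reproduce, from a sigma-noised image, what the frozen Fresnel stand-in
# would have produced on the clean image.
import torch
import torch.nn as nn

fresnel = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256)).eval()  # frozen teacher stand-in
grandmaster = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))     # learner

opt = torch.optim.Adam(grandmaster.parameters(), lr=1e-3)
for _ in range(10):                                   # toy training loop
    imgs = torch.rand(8, 3, 32, 32)
    noised = imgs + 0.3 * torch.randn_like(imgs)      # sigma-noised input
    with torch.no_grad():
        target = fresnel(imgs)                        # Fresnel-grade output on clean input
    loss = nn.functional.mse_loss(grandmaster(noised), target)
    opt.zero_grad(); loss.backward(); opt.step()
```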

That noised-to-Fresnel mapping allowed preliminary Dall-E Mini-esque VAE generation and will be explored further for the stereoscopic translation subsystem, enabling image generation in the unique diffusion format I was working out. I anticipate this system will be more than capable of making monstrosities, so I won't be posting TOO MANY prelims on this one, but the high-capacity potential of these noise makers is meaningfully powerful. Getting uniform codebooks in place for these models will allow full transformer mapping downstream instead of just guessworking the MSE piecemeal, which the earlier versions and variants were doing. This should allow for LESS monstrosity, but I'm confident you'll be seeing some nasty stuff until the zero-hot labeling gets worked out.

I'm straying from the CLS token specifically for this series, because CLS creates adjudicated pools of bias orbiting the INCORRECT orbiter for this system. The orbital target IS the soft-hand accumulated bias under the sphere-norm, so introducing a competitor isn't a good option.
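
For a rough picture of what that means in code, here's a minimal sketch, assuming the soft-hand accumulation can be simplified to a learned soft weighting over tokens followed by projection onto the unit sphere; `SpherePool` is an illustrative name, not the actual mechanism.

```python
# Minimal sketch of sphere-normed accumulated pooling in place of a CLS token:
# no extra token competes for the target; the pooled bias itself is the orbit.
import torch
import torch.nn as nn

class SpherePool(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # soft weighting over tokens

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq, dim). Accumulate a soft-weighted bias across
        # the sequence, then project it onto the unit sphere (sphere-norm).
        w = self.score(tokens).softmax(dim=1)          # (batch, seq, 1)
        pooled = (w * tokens).sum(dim=1)               # accumulated bias
        return pooled / pooled.norm(dim=-1, keepdim=True).clamp_min(1e-8)

pool = SpherePool(dim=64)
feats = pool(torch.randn(4, 16, 64))   # (batch=4, dim=64), unit-norm rows
```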

In any case, today's target is inference.py and the mechanisms for making inference ACCURATE and able to SCALE within reasonable size/capacity limits.
