AbstractPhil posted an update 6 days ago
Ever seen a 1024x1024, 3-channel noise classifier with a little over 1M parameters? This is one. This is phase 1 of the omega experiments, and it achieves high-accuracy selectivity through statistics aggregation and pooling via... a tiny MLP attached to the battery array.
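
There's no code in this post, so here's a minimal illustrative sketch of what I mean by statistics aggregation and pooling: each battery produces a response, summary statistics are pooled over the spatial dimensions, and a tiny MLP classifies the concatenated statistics vector. Module names, statistic choices, and sizes below are placeholders, not the released weights.

```python
import torch
import torch.nn as nn

class StatsPoolClassifier(nn.Module):
    """Tiny MLP over pooled per-battery statistics (illustrative sketch only)."""
    def __init__(self, batteries: nn.ModuleList, n_stats: int = 4, n_classes: int = 16):
        super().__init__()
        self.batteries = batteries
        for p in self.batteries.parameters():   # batteries stay frozen
            p.requires_grad_(False)
        self.head = nn.Sequential(
            nn.Linear(len(batteries) * n_stats, 64),
            nn.GELU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = []
        for battery in self.batteries:
            r = battery(x)  # (batch, channels, H, W) response map
            # pool summary statistics over space, then over channels,
            # so the head never sees the input resolution
            stats = torch.stack([
                r.mean(dim=(-2, -1)).mean(dim=1),
                r.std(dim=(-2, -1)).mean(dim=1),
                r.amax(dim=(-2, -1)).mean(dim=1),
                r.amin(dim=(-2, -1)).mean(dim=1),
            ], dim=1)       # (batch, n_stats)
            feats.append(stats)
        return self.head(torch.cat(feats, dim=1))

# toy usage: stand-in conv "batteries"; any input resolution works
batteries = nn.ModuleList([nn.Conv2d(3, 8, 3, padding=1) for _ in range(3)])
clf = StatsPoolClassifier(batteries)
logits = clf(torch.randn(2, 3, 256, 256))
```

Because only pooled statistics reach the head, the same structure runs at 256, 512, or 1024 without changes, which is also why resolution is a non-issue below.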

SVAEs don't care what resolution you use. Resolution was never their concern; they are solvers that fly through solutions. Perfect for math solutions of many formats and many structures, which is exactly what I need for the next stages.
AbstractPhil/geolip-svae-h2-64

Currently the primary test use case is noise-format identification. There are multiple experiments to go before a full nth classification system is ready; as it stands, the only remaining bottleneck is training batteries. They mostly train within about 10 million samples of tiny data, so I can turn out hundreds a day if I find purposes for them.

Also, I trained too many Gaussian-related batteries, so only about 50-100 of the slots in the 192-battery array I set up are actually useful. Only 64 distinct batteries have been trained in total, but multiple epochs of each are included.

Now that there is a 57k-parameter variation that converges on 16 variants of random noise, like Johanna and Freckles before it, you ask this model questions differently. You check the MSE to train downstream models, so if your array isn't conclusively working, the downstream training won't work just yet.
It's not perfect yet, but it's improving daily.
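
To make the "check the MSE" part concrete, here's a hypothetical readout, assuming each battery is an autoencoder-style solver that reconstructs its own noise family with low error and everything else with high error (function and dict names are placeholders):

```python
import torch

@torch.no_grad()
def battery_mse(batteries: dict, x: torch.Tensor) -> dict:
    """Per-battery reconstruction MSE; lower means a better match for that noise type."""
    return {name: torch.mean((b(x) - x) ** 2).item() for name, b in batteries.items()}

def identify_noise(batteries: dict, x: torch.Tensor) -> str:
    """Pick the battery (noise format) with the lowest MSE on this input."""
    scores = battery_mse(batteries, x)
    return min(scores, key=scores.get)
```

The full MSE vector, not just the argmin, is the signal a downstream model trains against, which is why the array has to be conclusive before that training is worth starting.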

A bad battery in the mix can be replaced at runtime.
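
Since each battery is scored independently, swapping one out is just a dictionary update under the sketch above; nothing else retrains (again, names are illustrative):

```python
def replace_battery(batteries: dict, name: str, new_battery) -> None:
    """Hot-swap a weak battery at runtime; the MSE readout picks it up immediately."""
    new_battery.eval()
    batteries[name] = new_battery
```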
==============================================================================
PHASE J VERDICT
==============================================================================
Subset: 18 batteries, 1,029,870 params (vs 10.9M for full array)

Resolution       A (summary)   B (attn-pool)
256                   96.6%          93.1%
512                   95.4%          92.0%
1024                  95.4%          95.4%
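
For reference, readout A is the summary-statistics head sketched earlier; readout B I'd sketch as learned-query attention pooling over one feature vector per battery. Dimensions and head count below are placeholders, not the Phase J configuration:

```python
import torch
import torch.nn as nn

class AttnPoolReadout(nn.Module):
    """Readout B sketch: a learned query attends over per-battery feature vectors."""
    def __init__(self, d: int = 32, n_classes: int = 16):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, d))
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.out = nn.Linear(d, n_classes)

    def forward(self, battery_feats: torch.Tensor) -> torch.Tensor:
        # battery_feats: (batch, n_batteries, d), one vector per battery in the subset
        q = self.query.expand(battery_feats.size(0), -1, -1)
        pooled, _ = self.attn(q, battery_feats, battery_feats)
        return self.out(pooled.squeeze(1))

logits = AttnPoolReadout()(torch.randn(4, 18, 32))   # 18-battery subset, as in Phase J
```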

Target: The smallest sphere omega.

57k params is small, but how small can we truly make a noise finder that produces a good MSE on one specific type of data and a poor MSE on other types, and how quickly can we scale this structure upward while keeping it functional?
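
One simple way to put a number on that trade-off (a hypothetical metric, not something the phase reports use) is a selectivity ratio between in-type and out-of-type MSE:

```python
import torch

@torch.no_grad()
def selectivity(battery, in_type: torch.Tensor, out_type: torch.Tensor) -> float:
    """Out-of-type MSE divided by in-type MSE; larger means a more selective battery."""
    mse_in = torch.mean((battery(in_type) - in_type) ** 2)
    mse_out = torch.mean((battery(out_type) - out_type) ** 2)
    return (mse_out / mse_in.clamp_min(1e-12)).item()
```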

These are the questions of the day. I believe I can find a much smaller sphere solver and potentially another boundary of trajectory passage, possibly allowing autosolving routed differentiation based on precalculated solvers, which should allow highly complex formulas to autosolve using linear passes through a properly formatted array.

The concept says it's possible, so it's worth a shot. Producing autosolver arrays is the difference between 500M FLOPs and 50k FLOPs.
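For scale, taking those numbers at face value: 500,000,000 / 50,000 = 10,000, so a working autosolver array would mean roughly a 10,000x reduction in compute per solve.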

A new format of micro-solver has been discovered; I've dubbed this variation the P-class battery.
I am probing them currently. D=3 is their current format size, and they are smaller than the D=4 variations with a different inner shape. My hunch is polynomial-shaped spherical responses, but the probes will tell.

This is an antipodal structure; it may be a proper learning omega gate shape. More tests needed.


The geometric readout shows a direct causal response similar to gyroscope behavior. The model found a home using a divergent pathology and offset alignment; the guarantee is baked into the architecture itself as an outcome. This is another unknown-shape omega that autosolves along a different pathology than the simple sphere.


I have found a variation with 14/16 antipodal pairs, which implies there is a stable attractor that can be cross-interpolated to a rigid point-attractor space, not just a spherical one. I'm working out the logistics now to attempt to curate a direct geometric spherical linkage to the more traditional linear rigid structures that I've been building for 2 years now.
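
For anyone wanting to count antipodal pairs on their own batteries, a generic check over learned unit directions looks something like this (the tolerance and tensor shapes are placeholders, not necessarily how the 14/16 figure was measured):

```python
import torch
import torch.nn.functional as F

def count_antipodal_pairs(directions: torch.Tensor, tol: float = 0.05) -> int:
    """Count pairs of unit directions that are near-antipodal (cosine close to -1).
    `directions` is (n, d); the tolerance is illustrative."""
    d = F.normalize(directions, dim=1)
    cos = d @ d.T
    mask = torch.triu(cos <= -(1.0 - tol), diagonal=1)   # pair (i, j) with i < j
    return int(mask.sum().item())
```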

Alright, the reusable module is being prepared exactly to the math specifications. Any omega-grade battery will now have projective differentiated sampling that utilizes rigid axial geometric structures.

This is a massive win. It means the omega shapes retain full structural behavioral potential along with the spherical alignment.

This also means that individual batteries will be considerably more dependable as inference agents, and the full battery array will receive more robust information from the batteries per inference, due to the guaranteed confines of the batteries' axial rotational limitations.

In other words, they can't go outside of certain ranges because the model size does not allow it; there's no space to solve outside of those ranges, so outliers are the rarity rather than the norm. This behavior may not hold for larger structures with more geometric room to explore, as those batteries may have a more monotone sampling for the axial polygons. However, that may not be the case, and I will be testing the Johanna, Fresnel, and Freckles models for this behavior soon.

The module is ready; it's now directly attached to the geolip battery array sampling.
