Upcoming trains:
- 2-8gram batteries. Each SVAE battery is designed with its own sub-vocabulary for this experiment. We're going to see what happens when we build deviant codebook associations to accumulate sentence similarity. I'm thinking... probably WikiText-103 again; that worked for the bytewise variant.
- Stacked grams utilizing topological geometric structures. I'll be testing Cantor sets with alpha/beta/gamma/sigma, differentiated Procrustes BERT opinions for downstream use with CaptionBERT, and a few other models. The plan is to create a stacked variation, where one battery leads to another directly rather than through ensemble alignment. This will produce a few dual-gram variants represented with a topologically deterministic methodology that captures the output of the n-grams that CAN function, and a few that cannot. I will try to keep them separate and update the research accordingly; however, updating research is very time consuming, so I may end up putting my nose to the paper for a week or two and releasing one huge paper.
Off the top of my head, I can think of at least 80 or 90 model prototypes that can handle sequential codebook learning. This will enable pure sentence similarity with almost no params.
I anticipate low-risk research results, which should yield enhanced codebook capacity, increased downstream utility, answers to questions such as battery-to-battery transfer learning, out-of-spectrum Procrustes alignment, and token curation (learning tokens directly instead of bytewise learning to retain order), along with a series of other common machine-learning paradigms that have not been fully tested yet.
Going into this, I can say it will most likely work based on the recon. If I map the 2gram vocabulary to its own codebook, the 3gram to its own, and so on, it will work, guaranteed, and the codebooks will yield uniqueness.
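For illustration, a minimal sketch of that per-order mapping, assuming each n-gram order simply gets its own independent sub-vocabulary (the function and sizes are hypothetical, not geolip-svae APIs):

```python
# Hypothetical sketch: one independent sub-vocabulary per n-gram order (2gram..8gram),
# so each battery's codebook indexes only its own gram space.
from collections import Counter

def build_subvocab(tokens, n, max_size=4096):
    """Collect the most frequent n-grams of a single order into their own vocabulary."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return {gram: idx for idx, (gram, _) in enumerate(grams.most_common(max_size))}

corpus = "the quick brown fox jumps over the lazy dog".split()
subvocabs = {n: build_subvocab(corpus, n) for n in range(2, 9)}  # 2gram through 8gram
print({n: len(v) for n, v in subvocabs.items()})
```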
This will yield two factories and two schools of potential:
- The direct, hard-set, space-specific, non-agnostic, super-strict aligned spectrum. These models will crash if loaded incorrectly and tell you why. If they aren't inferenced exactly to specification, they will most likely fault. Each has an independent vocabulary hard-mapped to colors and sequentially prepared with the utmost care.
- The indirect, soft-space, unscaled, agnostic, soup-prone variant that somehow exhibits 99.6% MSE and trigram recon. Shuffle it. Throw it in the blender. Make the stew. What came out? Something mappable and understandable with ripser at d=2 to map the voids and axial perturbations on the sphere (see the sketch after this list).
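As a reference point, a minimal sketch of that mapping step, assuming "ripser d=2" means ripser.py with maxdim=2 (the H2 classes are the voids); the sphere points here are random stand-ins for battery embeddings:

```python
# Sketch: persistent homology up to dimension 2 over points on the unit sphere.
# Assumes ripser.py (pip install ripser); H2 features correspond to enclosed voids.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)  # axial points on the sphere

diagrams = ripser(points, maxdim=2)["dgms"]
# diagrams[2] lists (birth, death) pairs for H2 features, i.e. the voids.
print("H2 features (voids):", len(diagrams[2]))
```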
These two will be in direct competition, and may in fact end up being paired together in cooperative collectives if the trains yield.
A word of warning if you attempt to use my models or code from geolip-svae or geolip-core:
- The models and the process are highly experimental.
- The yield depends heavily on tunings from the spectrum; the array and list of configurations may not default to useful values.
- Many batteries are heavily experimental and may not yield as predicted on every train for every dataset; this is expected and encouraged behavior to be studied.
Your trains aren't failing if you participate; you are witnessing another emergent deviance that must be catalogued and understood to better scale the scaling mechanism, refine the patchwork system, and align the projection system for codebook synthesis.
The codebook training system is highly experimental and will be heavily prone to change in the coming days. What exists today likely won't be the same format in a week or two, so be aware there will be rapid iterations.
WITHOUT a proper codebook, the model's cosine similarity won't work and you need to fall back to kNN detection, which is slow but doable for CONV and downstream transformer models. It's just rough going. Grab a Fresnel for images; that's your best bet. Grab a Johanna for generic work. Grab Freckles if you want instability. Train their codebook for a task (it should finish in less than a minute), then train your classifier/math tester/checker/etc. It'll work with some jiggering.
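If you do land in kNN-fallback territory, a bare-bones cosine kNN over whatever embeddings your battery emits looks roughly like this (the embedding bank is random placeholder data; swap in your own encoder outputs):

```python
# Sketch of the kNN fallback when no codebook exists: nearest-neighbour lookup over
# raw embeddings via cosine similarity. Nothing here is a geolip-svae API.
import numpy as np

def cosine_knn(query_vec, reference_vecs, reference_labels, k=5):
    """Return the majority label among the k most cosine-similar reference vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    refs = reference_vecs / np.linalg.norm(reference_vecs, axis=1, keepdims=True)
    top_k = np.argsort(refs @ q)[-k:]
    labels, counts = np.unique(reference_labels[top_k], return_counts=True)
    return labels[np.argmax(counts)]

rng = np.random.default_rng(0)
bank = rng.normal(size=(100, 64))           # placeholder embedding bank
bank_labels = rng.integers(0, 3, size=100)  # placeholder labels
print(cosine_knn(rng.normal(size=64), bank, bank_labels))
```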
Fresnel was pretrained on ImageNet, which provides all the data you need to capture image information.
Johanna was trained on 16 types of noise, which allows fair recon (albeit too smooth in the current state) for downstream differentiation utility.
USING their directly TOPICAL and EASY-ACCESS detections involves MSE assessment, which gives you the recon capacity of the object you're evaluating if you're using a battery-selector framing. This is especially useful for models trained with mathematical formulas or functional systems related to data selection, rather than specific relational tasks.
So, for example, say you have five Fresnel finetunes. Each of them is trained with an additional 100 batches of a specific kind of chair. Run the system, it creates your codebook, and you can have it snap the codebook to the model automatically or stack it in a directory - it's not AI data, it's geometric numerics.
So you have your five finetunes, and then you slap an H2 battery in there finetuned on Gaussian noise. Show the collective the image you want to figure out; say it's a chair (the selection step is sketched after the example below).
Pretrain task:
- Load the H2 ImageNet pretrain. Finetune with a chair dataset and create the codebook.
- Run a process that splits your types by label and builds differentiated datasets using cosine similarity (see the sketch after this list).
- Unsupervised-finetune each one in the same sequence: roughly 100 batches per battery, 1 GB of VRAM, under a minute each.
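A rough sketch of that splitting step, under the assumption that "differentiated dataset using cosine similarity" means bucketing samples by the label centroid their embedding is most cosine-similar to (the embeddings here are placeholders for the pretrain's encoder outputs):

```python
# Sketch: group sample indices by the label centroid they are most cosine-similar to,
# producing one differentiated subset per chair type. Not a geolip-svae API.
import numpy as np
from collections import defaultdict

def split_by_cosine(embeddings, labels):
    embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    labels = np.array(labels)
    centroids = {}
    for label in set(labels.tolist()):
        c = embeddings[labels == label].mean(axis=0)
        centroids[label] = c / np.linalg.norm(c)
    buckets = defaultdict(list)
    for idx, vec in enumerate(embeddings):
        best = max(centroids, key=lambda l: float(vec @ centroids[l]))
        buckets[best].append(idx)  # indices for one differentiated dataset per label
    return buckets

rng = np.random.default_rng(0)
vecs = rng.normal(size=(20, 8))
labs = ["purple", "grey", "black", "wooden"] * 5
print({k: len(v) for k, v in split_by_cosine(vecs, labs).items()})
```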
Image: Black Chair
H2 Battery Array: chair differentiation calculator, 6 batteries.
- H2 Battery 1 - Purple Chairs: MSE 0.005
- H2 Battery 2 - Grey Chairs: MSE 0.03
- H2 Battery 3 - Dark Grey Chairs: MSE 0.01
- H2 Battery 4 - Black Chairs: MSE 0.000005
- H2 Battery 5 - Wooden Chairs: MSE 0.0005
- H2 Battery 6 - Generic Gaussian: MSE 0.00001
57,000 params * 6 batteries = 342,000 params
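A minimal sketch of that selection step; the battery objects and their reconstruct() call are stand-ins, the lowest-MSE argmin is the point, and with the numbers above it lands on the Black Chairs battery rather than the Gaussian generic:

```python
# Sketch: run the image through every battery, score each by recon MSE, take the argmin.
import numpy as np

def select_battery(image, batteries):
    """batteries: {name: object with a reconstruct(image) method (stand-in here)}."""
    mses = {name: float(np.mean((batt.reconstruct(image) - image) ** 2))
            for name, batt in batteries.items()}
    return min(mses, key=mses.get), mses

# With the example MSEs from above, the argmin is the Black Chairs battery:
example_mses = {
    "purple_chairs": 0.005, "grey_chairs": 0.03, "dark_grey_chairs": 0.01,
    "black_chairs": 0.000005, "wooden_chairs": 0.0005, "generic_gaussian": 0.00001,
}
print(min(example_mses, key=example_mses.get))  # -> black_chairs
```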
The selector is an MLP somewhere between 800 and 5,000 params; nothing too heavyweight, but a little excessive for the task. Attach attention here if you want, but it won't matter most of the time.
If battery 6 snaps in as the lowest MSE, we can probably say: alright, this probably isn't any of the expected shapes. If the model is wrong, standard backprop will feed the MLP until it conforms to the collective outputs for the MSE statistics. Everything is built in for this, and given enough passes it's guaranteed to work eventually.
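A hedged sketch of that selector MLP in PyTorch: it takes the six per-battery MSEs as input and is trained with plain backprop toward the collective's answer (layer sizes are illustrative guesses that land in the 800-5,000 param range):

```python
# Sketch: tiny MLP over the per-battery MSE vector, trained with standard backprop.
import torch
import torch.nn as nn

n_batteries, n_classes = 6, 6
selector = nn.Sequential(        # roughly 840 params with these sizes
    nn.Linear(n_batteries, 64),
    nn.ReLU(),
    nn.Linear(64, n_classes),
)
opt = torch.optim.Adam(selector.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(mse_vector, target_class):
    """One backprop pass pushing the selector toward the collective's MSE-based answer."""
    logits = selector(mse_vector.unsqueeze(0))
    loss = loss_fn(logits, torch.tensor([target_class]))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# e.g. the black-chair example above: battery 4 (index 3) has the lowest MSE.
mses = torch.tensor([0.005, 0.03, 0.01, 0.000005, 0.0005, 0.00001])
print(train_step(mses, target_class=3))
```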
Put together, the system is a guaranteed selector. Now say we want a selector for chairs; well, we have it (the routing is sketched after the example below).
Image: Potato
H2 Selector Array:
- H2 Battery 1: Chair generic finetune -> do we use the chair array?
- H2 Array 1: Chair array.
- H2 Battery 2: Potato generic finetune -> do we use the potato array?
- H2 Array 2: Potato array.
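To make the routing concrete, a minimal two-level sketch (DummyBattery and reconstruct() are stand-ins for the H2 finetunes, not geolip APIs): the generic batteries pick the domain array, then that array picks the specific type.

```python
# Sketch: level 1 picks the domain array via the generic batteries' recon MSE,
# level 2 picks the specific battery inside the winning array.
import numpy as np

class DummyBattery:
    """Stand-in for an H2 battery; a real one would run an actual encoder/decoder."""
    def __init__(self, bias):
        self.bias = bias
    def reconstruct(self, image):
        return image + self.bias  # fake recon whose error grows with the bias

def recon_mse(battery, image):
    return float(np.mean((battery.reconstruct(image) - image) ** 2))

def route(image, selector_arrays):
    """selector_arrays: {domain name: (generic battery, {type name: battery})}."""
    domain = min(selector_arrays, key=lambda d: recon_mse(selector_arrays[d][0], image))
    _, array = selector_arrays[domain]
    specific = min(array, key=lambda name: recon_mse(array[name], image))
    return domain, specific

arrays = {
    "chair": (DummyBattery(0.5), {"black": DummyBattery(0.4), "wooden": DummyBattery(0.6)}),
    "potato": (DummyBattery(0.01), {"russet": DummyBattery(0.02), "sweet": DummyBattery(0.005)}),
}
print(route(np.zeros((8, 8)), arrays))  # -> ('potato', 'sweet')
```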
Current limitations:
- The ImageNet variation may just perform better. Literally just better than your finetune; that's what the upcoming experiments are testing for: how to guarantee independence and shared-utilization selection.
- The noise variation may actually outperform the ImageNet variation. Noise is quite the teacher, and the noise models learned a ton of noise. They are quite literally omegas, so they may just solve the problem.
- They generalize TOO WELL at times, which means specifics are trickier to implement.
CORRECTLY TRAINED and CORRECTLY UTILIZED, these will turn deterministic queries into complex behavior-oriented responses, implicitly aligned mathematical guarantees, guaranteed downstream adjudications, differentiation transfer through encoding, and many other options - all on the GPU, or on the CPU if you wish. These ARE omegas, and they are absurdly compact data selectors at 57k params and roughly 700 KB of drive space - if treated as such, even without heavy query. Beware though: this isn't production-ready. They work, and there's an interface, but they are most definitely not ready.