---
license: apache-2.0
library_name: transformers
tags:
- causal-lm
- text-generation
- transformer
- decoder-only
- fixed-embeddings
- binary-token-codes
- research
language:
- en
---
# Fixed Minimal Binary Code Model

This is an anonymized research checkpoint for the paper:

**Language Models Without a Trainable Input Embedding Table: Learning from Fixed Minimal Binary Token Codes**

## Model variant

This repository contains the **fixed minimal binary token-code model**. Instead of a trainable input embedding table, each token ID is represented by its exact minimal binary code.

For vocabulary size:

```text
V = 65,536
```

the minimal injective binary code width is:

```text
K = ceil(log2(V)) = 16
```

The 16-dimensional binary code is tiled to model width 1024.

The model therefore uses:

```text
0 trainable input-embedding parameters
```

The output projection remains standard and trainable.
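This card does not include the code construction itself, so the following is a minimal sketch of how such a fixed input embedding can be built. The {0, 1} bit values, least-significant-bit-first ordering, and simple repetition for the tiling are assumptions, not documented details of this checkpoint:

```python
import torch

V, K, D = 65_536, 16, 1024  # vocabulary size, code width, model width

# Fixed codebook: row i holds the 16-bit binary expansion of token ID i,
# tiled D // K = 64 times to fill the model width. Nothing is trainable.
ids = torch.arange(V)
bits = ((ids.unsqueeze(1) >> torch.arange(K)) & 1).float()  # (V, 16), bit order is an assumption
codebook = bits.repeat(1, D // K)                           # (V, 1024)

def fixed_embed(token_ids: torch.Tensor) -> torch.Tensor:
    """Pure table lookup into the frozen binary codebook."""
    return codebook[token_ids]  # (..., 1024)
```

Because every token ID maps to a distinct 16-bit pattern, the lookup is injective, which is all the minimal code guarantees.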
## Architecture

- decoder-only Transformer
- vocabulary size: 65,536
- model width: 1024
- number of layers: 32
- number of attention heads: 32
- context length: 1024
- rotary positional embeddings
- GELU activations
- untied trainable output projection
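For a sense of scale, a standard trainable input embedding table at this vocabulary and width would hold V × D parameters. A quick back-of-the-envelope check in plain Python (the exact per-module totals for this checkpoint are not stated on this card):

```python
V, D = 65_536, 1024

# Parameters a standard trainable input embedding table would add.
input_table = V * D
print(f"{input_table:,}")  # 67,108,864 (~67M) avoided at the input

# The untied output projection of the same shape stays trainable.
output_proj = D * V
print(f"{output_proj:,}")  # 67,108,864 still trained at the output
```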
## Loading example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "E6E831728/fixed-minimal-binary-code"

# trust_remote_code is required: the fixed binary input codes are
# implemented in this repository's custom model code.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
model.eval()

prompt = "Question: What is the capital of France?\nAnswer:"
input_ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=3, do_sample=False)

print(tokenizer.decode(output_ids[0].tolist()))
```
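Continuing from the loading example, one quick sanity check of the card's central claim is to look for trainable parameters that resemble an input-embedding table. The name-based filter below is a heuristic assumption, since the custom module names are not documented on this card:

```python
# Count trainable parameters, then list any whose name suggests an
# embedding table; for this checkpoint the list should contain no
# input-embedding weight (the output projection is a separate module).
total = sum(p.numel() for p in model.parameters() if p.requires_grad)
embed_named = [name for name, p in model.named_parameters()
               if p.requires_grad and "embed" in name.lower()]
print(f"trainable parameters: {total:,}")
print("embedding-named parameters:", embed_named or "none")
```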
## Intended use

This checkpoint is provided for anonymous review and reproducibility of the paper's main claim: a trainable input embedding table is not necessary for useful language modeling in the studied regime.

## Limitations

This model is a research checkpoint. It is not intended for deployment. It may produce incorrect, biased, unsafe, or nonsensical outputs.

## Training data

The model was trained on the same FineWeb-Edu + Cosmopedia mixture used for the matched comparisons in the paper. Dataset terms and licenses are those of the original datasets.