The .aarm (Ahana Adaptive Resource Module) is an open, versioned container format for AI-native compressed data. One spec, one standard: from IoT sensors to 70B model weights.
ZIP was designed in 1989. GZIP in 1992. The world has changed. .aarm is built from scratch for neural networks, large models, and streaming inference workloads.
A JSON table of contents maps every tensor by name to its exact byte offset. Extract any individual layer from a 70B model without decompressing the rest.
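The random-access idea can be sketched in a few lines. This is an illustrative layout, not the normative .aarm byte layout: a length-prefixed JSON table of contents maps each tensor name to an (offset, length) pair, so a reader seeks straight to one entry. The function names and framing here are hypothetical.

```python
import io
import json
import struct

def build_container(tensors: dict[str, bytes]) -> bytes:
    """Pack tensors after a length-prefixed JSON TOC (illustrative layout)."""
    toc, payload, cursor = {}, b"", 0
    for name, blob in tensors.items():
        toc[name] = {"offset": cursor, "length": len(blob)}
        payload += blob
        cursor += len(blob)
    toc_bytes = json.dumps(toc).encode()
    return struct.pack("<I", len(toc_bytes)) + toc_bytes + payload

def read_tensor(f, name: str) -> bytes:
    """Fetch one entry by name without reading the rest of the payload."""
    f.seek(0)
    (toc_len,) = struct.unpack("<I", f.read(4))
    toc = json.loads(f.read(toc_len))
    entry = toc[name]
    f.seek(4 + toc_len + entry["offset"])
    return f.read(entry["length"])

blob = build_container({"layers.0.attn": b"\x01" * 8, "layers.1.mlp": b"\x02" * 4})
print(read_tensor(io.BytesIO(blob), "layers.1.mlp"))  # b'\x02\x02\x02\x02'
```

The same seek-and-slice pattern is what makes single-layer extraction from a large model cheap: only the TOC and the requested byte range are ever read.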
Magic bytes identify the ACP version at a glance. Flags encode tokenizer mode, encryption status, and fallback behavior: no external metadata needed.
The .aarm spec supports an optional PUZZLE-AUTH envelope: cryptographically locked compression where the wrong key yields random bytes, not an error.
Text .aarm files carry their BPE tokenizer inline. The file decodes itself on any machine with the AhanaAI runtime, with no external dictionary dependency.
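A toy sketch of the self-describing idea (an illustrative layout, not the .aarm spec, and using a whole-word vocabulary rather than real BPE merges): the file embeds its own vocabulary ahead of the token stream, so decoding needs nothing external.

```python
import json
import struct

def pack(text: str) -> bytes:
    """Store a JSON vocabulary followed by uint16-LE token ids."""
    words = text.split(" ")
    vocab = sorted(set(words))
    ids = [vocab.index(w) for w in words]
    vocab_bytes = json.dumps(vocab).encode()
    body = b"".join(struct.pack("<H", i) for i in ids)
    return struct.pack("<I", len(vocab_bytes)) + vocab_bytes + body

def unpack(blob: bytes) -> str:
    """Decode using only the vocabulary carried inside the blob itself."""
    (vlen,) = struct.unpack("<I", blob[:4])
    vocab = json.loads(blob[4 : 4 + vlen])
    body = blob[4 + vlen :]
    ids = struct.unpack(f"<{len(body) // 2}H", body)
    return " ".join(vocab[i] for i in ids)

blob = pack("the file decodes the file")
print(unpack(blob))  # 'the file decodes the file'
```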
Every .aarm file stores a 32-byte SHA-256 digest of the original payload, verified before any data is returned to the caller. Silent corruption is structurally impossible.
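The verify-before-return contract can be shown in miniature. This is a minimal sketch, not the reference API: the stored 32-byte SHA-256 digest is recomputed over the payload, and bytes are handed to the caller only on a match.

```python
import hashlib

def seal(payload: bytes) -> bytes:
    """Prefix the payload with its 32-byte SHA-256 digest."""
    return hashlib.sha256(payload).digest() + payload

def open_verified(blob: bytes) -> bytes:
    """Return the payload only if its digest matches; raise otherwise."""
    digest, payload = blob[:32], blob[32:]
    if hashlib.sha256(payload).digest() != digest:
        raise ValueError("integrity check failed")
    return payload

good = seal(b"weights")
print(open_verified(good))  # b'weights'

# Flip one bit in the payload: the check fires instead of returning bad data.
tampered = good[:-1] + bytes([good[-1] ^ 0x01])
try:
    open_verified(tampered)
except ValueError as e:
    print(e)  # integrity check failed
```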
The .aarm spec is free and open. Anyone can build a reader. AhanaAI maintains the reference implementation and certifies compatible tools.
| Offset | Size | Field | Purpose |
|---|---|---|---|
| 0 | 4B | Magic | ACP version identifier |
| 4 | 8B | Original length | uint64, little-endian: exact original byte count |
| 12 | 1B | Flags | Tokenizer mode, crypto, fallback state |
| 13 | 32B | SHA-256 digest | Integrity verification of original payload |
| 45 | 4B | TOC / tokenizer length | Byte length of the embedded TOC / tokenizer block |
| 49 | N₁ | Embedded TOC / tokenizer | JSON index or BPE tokenizer data |
| 49+N₁ | 4B | Payload length | Compressed stream byte count |
| 53+N₁ | N₂ | Compressed payload | Neural arithmetic coding bitstream |
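The fixed-size portion of the header above maps directly onto a single struct format. A hedged sketch: the field offsets follow the table (magic at 0, uint64-LE original length at 4, flags at 12, SHA-256 digest at 13, TOC/tokenizer length at 45), but the field names and the sample magic value `ACP1` are placeholders, not normative constants.

```python
import struct

# "<4sQB32sI" = magic (4B), original length (uint64 LE), flags (1B),
# SHA-256 digest (32B), TOC/tokenizer length (uint32 LE) -> 49 bytes total.
HEADER = struct.Struct("<4sQB32sI")

def parse_header(blob: bytes) -> dict:
    """Decode the fixed 49-byte header region into named fields."""
    magic, orig_len, flags, digest, toc_len = HEADER.unpack_from(blob, 0)
    return {
        "magic": magic,
        "original_length": orig_len,
        "flags": flags,
        "sha256": digest,
        "toc_length": toc_len,
    }

# Round-trip a synthetic header (magic value is a placeholder).
sample = HEADER.pack(b"ACP1", 1024, 0b00000101, b"\x00" * 32, 96)
h = parse_header(sample)
print(h["original_length"], h["toc_length"])  # 1024 96
```

Because the format string uses `<` (little-endian, standard sizes, no padding), the struct's field offsets land exactly on the table's 0/4/12/13/45 boundaries.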
Join early access to get SDK previews, spec documentation, and direct input into the standard before it ships.
Get Early Access →