GGUF is a binary file format designed for the efficient storage and fast loading of large language models (LLMs) with GGML, a C-based tensor library for machine learning.
GGUF encapsulates all the components needed for inference, including the tokenizer and code, within a single file. It supports the conversion of various language models, such as Llama 3, Phi, and Qwen2. Moreover, it facilitates model quantization to lower precisions to improve speed and memory efficiency on CPUs.
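Because everything needed for inference lives in that one file, loading a model is a single call. As a minimal sketch (not part of this article's main code), here is how a quantized GGUF file could be loaded with the llama-cpp-python bindings; the file name is a placeholder:

```python
# Minimal sketch: running inference from a single GGUF file with the
# llama-cpp-python bindings (pip install llama-cpp-python).
# The weights, tokenizer, and metadata are all read from the one file.
from llama_cpp import Llama

llm = Llama(model_path="gemma-2-9b-it-Q4_K_M.gguf", n_ctx=2048)  # placeholder path
out = llm("Explain the GGUF format in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```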
We often write “GGUF quantization”, but GGUF itself is only a file format, not a quantization method. Several quantization algorithms are implemented in llama.cpp to reduce the model size and serialize the resulting model in the GGUF format.
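To make the distinction concrete, here is a hedged sketch of applying one of these algorithms with llama.cpp's `llama-quantize` tool, called from Python; the binary path and file names are placeholders:

```python
# Sketch: the GGUF container is the same before and after quantization;
# only the tensor encoding changes (here, to the Q4_K_M type).
import subprocess

subprocess.run([
    "./llama.cpp/llama-quantize",   # binary built from the llama.cpp repo
    "gemma-2-9b-it-f16.gguf",       # full-precision GGUF input (placeholder)
    "gemma-2-9b-it-Q4_K_M.gguf",    # quantized GGUF output (placeholder)
    "Q4_K_M",                       # quantization type name
], check=True)
```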
In this article, we will see how to accurately quantize an LLM and convert it to GGUF, using an importance matrix (imatrix) and the K-quantization method. I provide the GGUF conversion code for Gemma 2 Instruct, using an imatrix. It works the same way with other models supported by llama.cpp: Qwen2, Llama 3, Phi-3, etc. We will also see how to evaluate the accuracy of the quantization and the inference throughput of the resulting models.
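As a preview of the imatrix step detailed later, the sketch below computes an importance matrix with llama.cpp's `llama-imatrix` tool; the model and calibration file names are placeholders:

```python
# Sketch: estimate an importance matrix from calibration text. The
# resulting file is then passed to llama-quantize via its --imatrix
# flag (see the quantization sketch above).
import subprocess

subprocess.run([
    "./llama.cpp/llama-imatrix",
    "-m", "gemma-2-9b-it-f16.gguf",  # full-precision GGUF (placeholder)
    "-f", "calibration.txt",         # calibration text (placeholder)
    "-o", "imatrix.dat",             # output importance matrix
], check=True)
```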