BitsFusion: 1.99 Bits Compression Of Diffusion Models
BitsFusion quantizes diffusion model weights to an average of 1.99 bits while maintaining high performance and efficiency, outperforming other methods on image generation and text-to-image tasks.
This is a Plain English Papers summary of a research paper called 1.99 Bits Compression of Diffusion Models: BitsFusion Quantization. If you like these kinds of analyses, you should join AImodels.fyi or follow me on Twitter.

Overview

- This paper presents a new mixed-precision quantization method called BitsFusion for diffusion models.
- BitsFusion can quantize diffusion model weights to just 1.99 bits on average while maintaining high performance.
- The paper compares BitsFusion to other quantization approaches and demonstrates its effectiveness on several benchmarks.

Plain Engli...
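To make the headline number concrete: in a mixed-precision scheme, different layers are stored at different bit-widths, and "1.99 bits" is the size-weighted average across all weights. The sketch below is a hypothetical illustration, not the authors' BitsFusion algorithm; the layer names, shapes, and per-layer bit assignments are invented, and it uses plain uniform quantization only to show how the average bit-width is computed.

```python
# Hypothetical sketch of mixed-precision weight quantization.
# This is NOT the BitsFusion algorithm; layer names, shapes, and bit
# assignments are invented to show how a ~2-bit average arises.

import numpy as np

def quantize_uniform(weights: np.ndarray, bits: int) -> np.ndarray:
    """Uniform quantization of a weight tensor to the given bit-width."""
    levels = 2 ** bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    codes = np.round((weights - w_min) / scale)   # integer codes in [0, levels]
    return codes * scale + w_min                  # dequantized weights

rng = np.random.default_rng(0)

# Per-layer bit-widths: a small "sensitive" layer keeps 4 bits,
# a large robust layer drops to 1 bit, the rest use 2 bits.
layers = {
    "sensitive_proj": (rng.standard_normal((320, 640)),   4),
    "robust_ffn":     (rng.standard_normal((640, 640)),   1),
    "attention_qkv":  (rng.standard_normal((1280, 1280)), 2),
}

total_bits   = sum(w.size * b for w, b in layers.values())
total_params = sum(w.size for w, _ in layers.values())
print(f"size-weighted average: {total_bits / total_params:.2f} bits/weight")

for name, (w, b) in layers.items():
    w_q = quantize_uniform(w, b)
    print(f"{name}: {b}-bit, mean abs error {np.abs(w - w_q).mean():.4f}")
```

In this toy setup the small "sensitive" layer keeps 4 bits while a larger, more robust layer drops to 1 bit, so the size-weighted average works out to 2 bits per weight. The paper makes an analogous trade-off per layer of the diffusion model, which is how the average lands at 1.99 bits rather than a round number.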