Per my understanding, a lot of 3D hardware operates by letting the programmer specify a set of vertices between which the graphics chip draws lines. The programmer can then specify that a bitmap be plotted across the surface enclosed by some of those lines. In 3D graphics parlance, those bitmaps are called textures. More textures make a game prettier, but a graphics card only has so much memory for storing them. To stretch the video RAM budget, some graphics cards allow textures to be compressed using vector quantization (VQ).
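To make the VQ idea concrete before getting to specific hardware, here is a toy encode-side sketch in C: each small group of pixel values (a "vector") is replaced by the index of the nearest entry in a shared codebook. The vector length, codebook size, and distance metric below are arbitrary example choices of mine, not anything mandated by a particular graphics chip.

```c
#include <stdint.h>
#include <limits.h>

/* Toy illustration of the vector quantization idea: each small group
 * ("vector") of pixel values is replaced by the index of the closest
 * entry in a shared codebook. The parameters here are examples only. */

#define VEC_LEN 4    /* values per vector (example) */
#define CB_SIZE 256  /* codebook entries (example) */

static unsigned squared_dist(const uint8_t *a, const uint8_t *b)
{
    unsigned d = 0;
    for (int i = 0; i < VEC_LEN; i++) {
        int diff = (int)a[i] - (int)b[i];
        d += (unsigned)(diff * diff);
    }
    return d;
}

/* Return the index of the codebook entry nearest to the input vector. */
uint8_t vq_quantize(const uint8_t vec[VEC_LEN],
                    const uint8_t codebook[CB_SIZE][VEC_LEN])
{
    unsigned best_dist = UINT_MAX;
    uint8_t best_index = 0;
    for (int i = 0; i < CB_SIZE; i++) {
        unsigned d = squared_dist(vec, codebook[i]);
        if (d < best_dist) {
            best_dist = d;
            best_index = (uint8_t)i;
        }
    }
    return best_index;
}
```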
A specific example of VQ in 3D graphics is the Sega Dreamcast with its PowerVR2 graphics hardware. Textures can be specified in a number of pixel formats including, but not limited to, RGB555, RGB565, and VQ. In VQ mode, a 256-entry vector codebook is initialized somewhere in video RAM. Each vector is 8 bytes and specifies a 2×2 block of pixels in either RGB555 or RGB565 (can’t remember which, or it might be configurable). For a texture stored in video RAM as VQ, each byte is actually an index into the codebook. Instant 8:1 compression, notwithstanding the 2048-byte codebook overhead, which can be negligible depending on how many textures share the codebook and how large those textures are.
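Since each index byte stands in for an 8-byte codebook vector (four 16-bit pixels), the 8:1 ratio falls out directly. Here is a minimal decode sketch in C under a couple of simplifying assumptions of my own: the index bytes are laid out linearly (the real hardware may use a more exotic, twiddled layout, which I'm ignoring here), the within-block pixel ordering is a guess, and the texels are treated as opaque 16-bit values, so RGB555 vs. RGB565 doesn't matter to the loop.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of a Dreamcast-style VQ texture decode. Assumptions (not taken
 * from PowerVR2 documentation): the codebook is 256 entries of 8 bytes
 * (four 16-bit texels forming a 2x2 block), indices are stored linearly,
 * and texel[0..3] map to top-left, top-right, bottom-left, bottom-right. */

#define CODEBOOK_ENTRIES 256
#define TEXELS_PER_ENTRY 4   /* 2x2 block */

typedef struct {
    uint16_t texel[TEXELS_PER_ENTRY];  /* RGB555 or RGB565 pixels */
} vq_entry;

/* width and height are in pixels and assumed to be even. */
void vq_decode(const vq_entry codebook[CODEBOOK_ENTRIES],
               const uint8_t *indices,  /* (width/2)*(height/2) bytes */
               uint16_t *out,           /* width*height 16-bit pixels */
               size_t width, size_t height)
{
    for (size_t by = 0; by < height / 2; by++) {
        for (size_t bx = 0; bx < width / 2; bx++) {
            /* One index byte selects an entire 2x2 block of pixels. */
            const vq_entry *e = &codebook[indices[by * (width / 2) + bx]];
            uint16_t *dst = out + (by * 2) * width + (bx * 2);
            dst[0]         = e->texel[0];  /* top-left     */
            dst[1]         = e->texel[1];  /* top-right    */
            dst[width]     = e->texel[2];  /* bottom-left  */
            dst[width + 1] = e->texel[3];  /* bottom-right */
        }
    }
}
```

Note how the decode side never needs a nearest-neighbor search; all the hard work lives in building the codebook, which is one reason VQ is attractive for hardware that only ever has to read the texture.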