SANM

This page attempts to document the LucasArts Smush v2 codec, FOURCC "SANM". Note that at this stage the document is quite incomplete as the codec is still being reverse engineered. Most information regarding codecs here is speculative, at best. Structural information is quite correct unless otherwise noted.

A GPL'd decoder for the SNM format and relevant codecs can be found in the Residual reimplementation of the Grim Fandango Engine (GrimE). Note that this decoder is largely incorrect because it was effectively converted from the original asm code directly to C.

Samples

Unique Samples

The following Grim Fandango movies are unique in that they used to make the Residual smush implementation segfault, and the issue was never properly resolved, only worked around by adding 5700 bytes of padding to some buffers. When writing a decoder, they may serve as useful stress tests.

lol.snm, byeruba.snm, crushed.snm, eldepot.snm, heltrain.snm, hostage.snm, tb_kitty.snm

Note: The Residual implementation's segfaulting results from an improper breakdown of the destination image into 8x8 blocks, whereby the calculation will claim that an image with N height blocks has (N+1) height blocks (or similar), at which point the segfault is imminent. This can be solved either by trimming the remaining pixels that don't fit into the last 8x8 blocks (undesirable), or by setting the image buffer width/height to the image size in blocks * 8, rather than in pixels, to account for said remaining pixels (desirable). We'll dismiss this as a Residual implementation issue.

Use in Grim Fandango

SANM is used in Grim Fandango for cut-scenes and in-game animations. The actual SNM movie files are gzipped and stored inside LAB archive files (which are quite easy to extract; many tools exist). You must use a tool like *nix's "gunzip" to decompress the SNM files after extracting them from the LAB files. A decompressed Smush file has the "SANM" FOURCC as the first four bytes.

Organization

This section deals with the structural properties of Smush movies. In other words, we describe the various headers used.

Note: each "chunk size" entry in a particular chunk header indicates the size of the chunk's contents without the chunk's FOURCC and size.
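
For illustration, here's a minimal C sketch of reading such a chunk header. The read_u32be helper and the plain stdio I/O are assumptions made for the example, not anything mandated by the format.

#include <stdint.h>
#include <stdio.h>

/* Read a 4-byte big-endian value (FOURCCs and chunk sizes are big-endian). */
uint32_t read_u32be(FILE *f)
{
    uint8_t b[4] = { 0 };

    if (fread(b, 1, 4, f) != 4)
        return 0;
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

/* Read one chunk header: a FOURCC followed by the content size.
   Remember that the size excludes these 8 header bytes. */
int read_chunk_header(FILE *f, char fourcc[4], uint32_t *size)
{
    if (fread(fourcc, 1, 4, f) != 4)
        return -1;
    *size = read_u32be(f);
    return 0;
}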

Preamble

The movie begins with a basic 8-byte section that looks like this:

0x00|"SANM" FOURCC        |4 bytes big endian
0x04|Movie size (in bytes)|4 bytes big endian

Video Header

This header immediately follows the preamble. It describes the movie's video properties.

0x000|"SHDR" FOURCC             |4 bytes big endian
0x004|Header size (in bytes)    |4 bytes big endian
0x008|Version                   |2 bytes little endian
0x00A|# of frames               |4 bytes little endian
0x00E|Padding?                  |2 bytes
0x010|Width                     |2 bytes little endian
0x012|Height                    |2 bytes little endian
0x014|Padding?                  |2 bytes
0x016|Frame delay (microseconds)|4 bytes little endian
0x01A|Color palette?            |1024 bytes
0x41A|Padding?                  |16 bytes
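
A sketch of how these fields could be pulled out of the file, continuing the illustrative helpers above. The read_u16le and read_u32le helpers are assumed little-endian analogues of read_u32be; none of the names are official.

#include <stdint.h>
#include <stdio.h>

uint16_t read_u16le(FILE *f);   /* assumed little-endian read helpers */
uint32_t read_u32le(FILE *f);

typedef struct {
    uint16_t version;        /* 0x008 */
    uint32_t num_frames;     /* 0x00A */
    uint16_t width;          /* 0x010 */
    uint16_t height;         /* 0x012 */
    uint32_t frame_delay_us; /* 0x016, microseconds per frame */
    uint8_t  palette[1024];  /* 0x01A, presumed colour palette */
} ShdrHeader;

/* Parse the SHDR contents; the file position is assumed to be just past
   the FOURCC and header size fields. The "Padding?" fields are skipped. */
int parse_shdr(FILE *f, ShdrHeader *h)
{
    h->version        = read_u16le(f);
    h->num_frames     = read_u32le(f);
    fseek(f, 2, SEEK_CUR);                  /* padding? */
    h->width          = read_u16le(f);
    h->height         = read_u16le(f);
    fseek(f, 2, SEEK_CUR);                  /* padding? */
    h->frame_delay_us = read_u32le(f);
    if (fread(h->palette, 1, sizeof(h->palette), f) != sizeof(h->palette))
        return -1;
    fseek(f, 16, SEEK_CUR);                 /* padding? */
    return 0;
}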

Audio/Keyframe Header

Smush supports variable-size keyframes. An example of this usage can be seen in the Full Throttle highway chase scenes, where different images are composited into the streaming video depending on the player's actions.

Curiously enough, this header includes both audio and keyframe information.

0x00|"FLHD" FOURCC         |4 bytes big endian
0x04|Header size (in bytes)|4 bytes big endian

Followed by any number of keyframe dimension chunks; their count should match the number of keyframes in the movie. Dimension chunks in this header specify the dimensions of corresponding keyframes in the stream, in the order they're encountered. This information has not been rigorously verified, though.

0x00|"Bl16" FOURCC         |4 bytes big endian
0x04|Header size (in bytes)|4 bytes big endian
0x08|Padding?              |2 bytes
0x0A|Width                 |2 bytes little endian
0x0C|Height                |2 bytes little endian
0x0E|Padding?              |2 bytes

Followed by exactly one audio info chunk.

0x00|"Wave" FOURCC         |4 bytes big endian
0x04|Header size (in bytes)|4 bytes big endian
0x08|Frequency (Hz)        |4 bytes little endian
0x0C|# of channels         |4 bytes little endian
0x10|See notes             |4 bytes

Notes

  • For some movies, the "Wave" chunk contains an extra 4-byte field at its end, the purpose of which is unknown.
  • Movies without audio do not contain an audio info chunk.
  • The order in which Wave/Bl16 chunks are organized in the FLHD header is unspecified and is known to vary between movies.
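
Because the ordering is not fixed, a decoder presumably has to walk the FLHD contents and dispatch on each sub-chunk's FOURCC, as in the sketch below. read_chunk_header is the illustrative helper from earlier; parse_keyframe_dims and parse_wave are hypothetical handlers for the two chunk types described above.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int  read_chunk_header(FILE *f, char fourcc[4], uint32_t *size); /* see above */
void parse_keyframe_dims(FILE *f);        /* hypothetical Bl16 handler */
void parse_wave(FILE *f, uint32_t size);  /* hypothetical Wave handler */

/* Walk the FLHD sub-chunks, dispatching on their FOURCCs. */
void parse_flhd(FILE *f, uint32_t flhd_size)
{
    long end = ftell(f) + (long)flhd_size;

    while (ftell(f) < end) {
        char fourcc[4];
        uint32_t size;
        long next;

        if (read_chunk_header(f, fourcc, &size) < 0)
            break;
        next = ftell(f) + (long)size;

        if (!memcmp(fourcc, "Bl16", 4))
            parse_keyframe_dims(f);       /* one keyframe's width/height */
        else if (!memcmp(fourcc, "Wave", 4))
            parse_wave(f, size);          /* frequency, channels, extra field */

        fseek(f, next, SEEK_SET);         /* skip to the next sub-chunk */
    }
}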

Annotation

Movies may contain an optional plaintext annotation. In Grim Fandango, the only such movies are in-game animations. Keep in mind that the string itself may not always be as large as the advertised annotation size. In that case, the remaining space is padded with zeros until the advertised length is reached.

0x00|"ANNO" FOURCC             |4 bytes big endian
0x04|Annotation size (in bytes)|4 bytes big endian
0x08|Null-terminated string    |(Annotation size) bytes

Frame

This header is used as a container for a video frame and/or an audio frame, stored in an arbitrary order. In itself, it's just a FOURCC and a size.

0x00|"FRME" FOURCC        |4 bytes big endian
0x04|Chunk size (in bytes)|4 bytes big endian

Audio

Please see the appropriate section in VIMA for an audio frame's header/codec details. Note that as far as we know right now, this codec is specific to Grim Fandango Smush files.

Video

This chunk stores a potentially encoded video frame, as well as various opcodes and other stuff that's used to decode it. More details downstairs.

0x000|"Bl16" FOURCC            |4 bytes big endian
0x004|Chunk size (in bytes)    |4 bytes big endian
0x008|Unknown                  |8 bytes
0x010|Width                    |4 bytes little endian
0x014|Height                   |4 bytes little endian
0x018|Sequence #               |2 bytes little endian
0x01A|Subcodec ID              |1 byte
0x01B|Diff buffer rotate code  |1 byte
0x020|Small codebook           |8 bytes, 4 color values 2 bytes little endian each
0x028|Background colour        |2 bytes little endian
0x02C|RLE output size (bytes)  |4 bytes little endian
0x030|Codebook                 |Each entry is 2 bytes little endian
0x238|Video stream             |...
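
For orientation, the fixed-offset part of this header can be summarised as a C struct. This is documentation of the layout only (read the fields one by one rather than fread()-ing into the struct, since the compiler may insert its own padding). The byte ranges not listed in the table are kept as "unknown", and the 256-entry codebook size is an assumption inferred from the 8-bit codebook indices used by the subcodecs, not something stated here; it leaves 8 unaccounted-for bytes before the stream at 0x238.

#include <stdint.h>

typedef struct {
    uint8_t  unknown0[8];        /* 0x008 */
    uint32_t width;              /* 0x010, little endian */
    uint32_t height;             /* 0x014, little endian */
    uint16_t sequence;           /* 0x018, little endian */
    uint8_t  subcodec_id;        /* 0x01A */
    uint8_t  rotate_code;        /* 0x01B, diff buffer rotate code */
    uint8_t  unknown1[4];        /* 0x01C, not covered by the table */
    uint16_t small_codebook[4];  /* 0x020, 4 colours, little endian */
    uint16_t background;         /* 0x028, little endian */
    uint8_t  unknown2[2];        /* 0x02A, not covered by the table */
    uint32_t rle_output_size;    /* 0x02C, little endian */
    uint16_t codebook[256];      /* 0x030, little endian; 256 entries assumed */
    uint8_t  unknown3[8];        /* 0x230, fills the gap up to the stream */
    /* video stream begins at 0x238 */
} Bl16FrameHeader;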

Codec

The codec is actually a combination of several subcodecs. The subcodec that's used for a particular frame is indicated by the appropriate field in the "Bl16" chunk of the frame.

Triple Diff Buffering

(incomplete) Smush uses a triple diff buffer mechanism to decode image data. A decoder's state includes three buffers, which are occasionally referenced by various subcodecs to decode individual frames. We will hereafter refer to said buffers as "db0", "db1", and "db2", where "db0" is the logical "current" diff buffer. It is crucial to note that "dbX" is only an alias to a particular diff buffer and does not stand for the contents of the buffer itself. In other words, it's a pointer.

Each frame contains an opcode that specifies how said buffers are rotated. Only two opcodes are used. Any other opcodes are ignored as "no-ops."

Opcode 1:
   swap(db0, db2)
Opcode 2:
   swap(db1, db2)
   swap(db2, db0) 
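
Treating db0/db1/db2 as an array of three buffer pointers, the two opcodes amount to the following sketch:

#include <stdint.h>

/* db[0], db[1], db[2] alias the three diff buffers (db0, db1, db2). */
void rotate_buffers(uint16_t *db[3], int rotate_code)
{
    uint16_t *tmp;

    if (rotate_code == 2) {       /* opcode 2: swap(db1, db2) first... */
        tmp   = db[1];
        db[1] = db[2];
        db[2] = tmp;
    }
    if (rotate_code == 1 || rotate_code == 2) {  /* ...then both opcodes swap db0 and db2 */
        tmp   = db[0];
        db[0] = db[2];
        db[2] = tmp;
    }
    /* any other rotate code is a no-op */
}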

Initial Setup

We need to initialize two codebooks of 4x4 and 8x8 glyphs. The glyphs themselves are monochrome and thus consist of a foreground and background. We hereafter refer to said codebooks as glyph4_cb and glyph8_cb.

The construction algorithm iterates through two coordinate vectors, and interpolates an NxN glyph using every position in the x-vector with every position in the y-vector. Each vector contains 16 coordinates for a grand total of 256 glyphs per glyph size.

The vectors are defined for 8x8 and 4x4 glyphs as follows.

const int xvector4[] = { 0, 1, 2, 3, 3, 3, 3, 2, 1, 0, 0, 0, 1, 2, 2, 1 };
const int yvector4[] = { 0, 0, 0, 0, 1, 2, 3, 3, 3, 3, 2, 1, 1, 1, 2, 2 };
const int xvector8[] = { 0, 2, 5, 7, 7, 7, 7, 7, 7, 5, 2, 0, 0, 0, 0, 0 };
const int yvector8[] = { 0, 0, 0, 0, 1, 3, 4, 6, 7, 7, 7, 7, 6, 4, 3, 1 };

Here's how we make 4x4 glyphs. The algorithm for 8x8 glyphs is intuitively analogous.

for i = 0..16
{
   for j = 0..16
   {
      glyph[4][4] = all zeros

      vert1.x = xvector4[i]
      vert1.y = yvector4[i]
      vert2.x = xvector4[j]
      vert2.y = yvector4[j]
      
      edge1 = get_edge(vert1.x, vert1.y)
      edge2 = get_edge(vert2.x, vert2.y)
      direction = get_direction(edge1, edge2)

      width = largest side of line's bounding rectangle
      for each discrete point in _width_ points of our line
      {
         if direction is up, while row = point.y is >= 0, glyph[row--][point.x] = 1;
         if direction is down, while row = point.y is < 4, glyph[row++][point.x] = 1;
         if direction is left, while col = point.x is >= 0, glyph[point.y][col--] = 1;
         if direction is right, while col = point.x is < 4, glyph[point.y][col++] = 1;
      }
   
      glyph4_cb.push_back(glyph) // order is important here, so yes, it's a push_back or equivalent
   }
}  

And here are the supplementary functions:

get_edge(x, y)
{
   if y == 0, return bottom_edge
   else if y == 3, return top_edge
   else if x == 0, return left_edge
   else if x == 3, return right_edge
   else, return no_edge
}

get_direction(2 edges)
{
   if (edges are left/right or right/left) or (edges are bottom/!top or !top/bottom), return up
   else if (edges are !bottom/top or top/!bottom), return down
   else if (edges are left/!right or !right/left), return left
   else if (edges are bottom/top or top/bottom) or (edges are right/!left or !left/right), return right
}

Main Algorithm

This section is possibly incomplete.

if 0 == sequence number:
{
   // this is a keyframe
   fill db1 and db2 with background color.
}
handle subcodec according to ID.
copy contents of db0 into output image.
rotate buffers according to opcode.
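
A rough C rendering of these steps, tying the earlier sketches together: Bl16FrameHeader is the illustrative struct from the Video section, fill_u16 is an assumed helper that fills a buffer with a 16-bit value, rotate_buffers is the sketch from the diff buffering section, and decode_frame_pixels stands for the per-subcodec dispatch sketched under Subcodecs below.

#include <stdint.h>
#include <string.h>

void fill_u16(uint16_t *dst, uint16_t value, size_t count);  /* assumed helper */
void rotate_buffers(uint16_t *db[3], int rotate_code);       /* see above */
void decode_frame_pixels(const Bl16FrameHeader *hdr, uint16_t *db0,
                         const uint16_t *db1, const uint16_t *db2,
                         const uint8_t *stream);             /* see Subcodecs */

/* Top-level decode of one Bl16 frame into the output image. */
void decode_one_frame(const Bl16FrameHeader *hdr, uint16_t *db[3],
                      uint16_t *output, const uint8_t *stream)
{
    size_t image_size = (size_t)hdr->width * hdr->height * 2;  /* bytes */

    if (hdr->sequence == 0) {                 /* keyframe: reset the history */
        fill_u16(db[1], hdr->background, image_size / 2);
        fill_u16(db[2], hdr->background, image_size / 2);
    }

    decode_frame_pixels(hdr, db[0], db[1], db[2], stream);
    memcpy(output, db[0], image_size);        /* db0 holds the presented image */
    rotate_buffers(db, hdr->rotate_code);
}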

Subcodecs

This section explains what the individual subcodecs mean, and how to suck out image data in each case. Note that ImageSize, in bytes, is defined as (Width * Height * 2).

This section is particularly incomplete.

ID|What
 0|Keyframe. Copy ImageSize bytes from video stream into db0.
 1|Never encountered so far.
 2|Hierarchical VQ and motion compensation.
 3|Copy ImageSize bytes from db2 into db0.
 4|Copy ImageSize bytes from db1 into db0.
 5|RLE decode. See below.
 6|Simple lookup/write. See below.
 7|Never encountered so far.
 8|RLE-encoded codebook indices. See below.
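
As a sketch, the dispatch on the subcodec ID could look like this. The decode_subcodecN functions are placeholder names for the routines described in the subsections below, and Bl16FrameHeader is the illustrative struct from the Video section.

#include <stdint.h>
#include <string.h>

/* Placeholder prototypes for the per-subcodec routines described below. */
void decode_subcodec2(const Bl16FrameHeader *hdr, uint16_t *db0,
                      const uint16_t *db1, const uint16_t *db2,
                      const uint8_t *stream);
void decode_subcodec5(uint16_t *db0, const uint8_t *stream, uint32_t rle_output_size);
void decode_subcodec6(const Bl16FrameHeader *hdr, uint16_t *db0, const uint8_t *stream);
void decode_subcodec8(const Bl16FrameHeader *hdr, uint16_t *db0, const uint8_t *stream);

void decode_frame_pixels(const Bl16FrameHeader *hdr, uint16_t *db0,
                         const uint16_t *db1, const uint16_t *db2,
                         const uint8_t *stream)
{
    size_t image_size = (size_t)hdr->width * hdr->height * 2;  /* ImageSize */

    switch (hdr->subcodec_id) {
    case 0: memcpy(db0, stream, image_size); break;  /* keyframe: raw copy */
    case 2: decode_subcodec2(hdr, db0, db1, db2, stream); break;
    case 3: memcpy(db0, db2, image_size); break;     /* copy from db2 */
    case 4: memcpy(db0, db1, image_size); break;     /* copy from db1 */
    case 5: decode_subcodec5(db0, stream, hdr->rle_output_size); break;
    case 6: decode_subcodec6(hdr, db0, stream); break;
    case 8: decode_subcodec8(hdr, db0, stream); break;
    default: break;                                  /* 1 and 7: never seen */
    }
}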

Subcodec 2

(Incomplete) This codec is broken up into a three-level hierarchy, where each level decodes differently sized image blocks. The decoding algorithms are chosen based on opcodes provided by the video stream. Block sizes are 8x8, 4x4, and 2x2. Upon entry into a level, the level opcode is determined by the next byte in the video stream.

  • We indicate the current x/y coordinates in db0 by "cx" and "cy", respectively.
  • We assume that two-dimensional arrays are row-major. That is, to access pixel (x, y) in an array, we write array[y][x].

We now describe the decoding algorithm for each opcode, per level.
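
The opcode rules below are applied one block at a time. A plausible outer loop for Subcodec 2, assuming a left-to-right, top-to-bottom scan over 8x8 blocks (the scan order is not stated on this page), is sketched here; decode_block_level1 is a placeholder for the Level 1 rules.

void decode_block_level1(int cx, int cy);   /* applies the Level 1 rules below */

/* Walk db0 in 8x8 blocks and decode each one. */
void decode_subcodec2_blocks(int width, int height)
{
    int cx, cy;

    for (cy = 0; cy < height; cy += 8)
        for (cx = 0; cx < width; cx += 8)
            decode_block_level1(cx, cy);
}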

Level 1 (8x8 blocks)

0x00 ... 0xF4
x, y = motion_vector[opcode]
copy 8x8 block from db2[y + cy][x + cx] to db0[cy][cx]
0xF5
index = next 2 bytes of stream
x, y = motion_vector[index]
copy 8x8 block from db2[y + cy][x + cx] to db0[cy][cx]
0xF6
copy 8x8 block from db1[cy][cx] to db0[cy][cx]
0xF7
glyph8_index = next byte of stream
fg_index = next byte of stream
bg_index = next byte of stream
fgcolor = codebook[fg_index]
bgcolor = codebook[bg_index]    
draw 8x8 glyph from glyph8_cb[glyph8_index] into db0[cy][cx] using fgcolor and bgcolor
0xF8
glyph8_index = next byte of stream
fgcolor = next 2 bytes of stream
bgcolor = next 2 bytes of stream     
draw 8x8 glyph from glyph8_cb[glyph8_index] into db0[cy][cx] using fgcolor and bgcolor
0xF9, 0xFA, 0xFB, 0xFC
color = value from small_codebook[opcode - 0xF9]
fill 8x8 block in db0[cy][cx] with color
0xFD
index = value of next byte in stream
color = value from codebook[index]
fill 8x8 block in db0[cy][cx] with color
0xFE
color = next 2 bytes in stream
fill 8x8 block in db0[cy][cx] with color
0xFF

This effectively breaks this block up into four 4x4 blocks and invokes the next level to decode them.

next_level(cx, cy)
cx += 4
next_level(cx, cy)

cx -= 4
cy += 4

next_level(cx, cy)
cx += 4
next_level(cx, cy) 

Level 2 (4x4 blocks)

Exactly the same as Level 1, except with 4x4 blocks.

Level 3 (2x2 blocks)

Same as the other levels except with 2x2 blocks, and with the following differences.

0xF7
indices[2][2] = next 4 bytes of stream
write a 2x2 block into db0[cy][cx] using codebook[indices[][]] for colors
0xF8, 0xFF
copy a 2x2 block from video stream into db0[cy][cx].

Subcodec 5

This is an RLE scheme, with the added touch that colours are stored in big-endian.

size = RLE output size field from Bl16 header
rle_decode(db0, video stream, size)
for each of the (size / 2) 16-bit values in the decompressed data in db0
{
   flip_bytes(value)
}

And here's the routine itself:

rle_decode(dst, src, const size)
{
   remaining = size
   while (remaining)
   {
      code = next byte of stream
      line_length = (code >> 1) + 1
 
      if (code & 1) // RLE run
      {
         color = next byte of input stream
         fill line_length bytes in dst with color
      }
      else // raw image data
      {
         copy line_length bytes from src into dst
      }
   
      remaining -= line_length
   }
}
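
The flip_bytes() pass above just swaps the two bytes of each decoded 16-bit colour, converting it from the stream's big-endian storage to the little-endian form used elsewhere in the format. For example:

#include <stddef.h>
#include <stdint.h>

/* Byte-swap every 16-bit value produced by rle_decode();
   call as flip_bytes(db0, size / 2). */
void flip_bytes(uint16_t *buf, size_t count)
{
    size_t i;

    for (i = 0; i < count; i++)
        buf[i] = (uint16_t)((buf[i] >> 8) | (buf[i] << 8));
}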

Subcodec 6

This is a straightforward codebook lookup/write routine.

for each pixel in db0:
{
   index = value of next byte in video stream;
   pixel = (2 bytes little endian) codebook[index];
}

Subcodec 8

Used by loladies.snm, and repmec3c.snm.

Another RLE scheme, where the actual indices into the codebook are RLE-compressed. The decompression algorithm uses the same RLE decoding as in Subcodec 5.

indices = []
rle_decode(indices, video stream, width * height)
for each pixel in db0, i in range(0, indices.size())
{
   pixel = codebook[indices[i]]
}