Small FFmpeg Tasks

From MultimediaWiki
Revision as of 21:03, 8 February 2010 by Dashcloud (talk | contribs) (Removing some old claimed tags, and updating some of the links)

This page contains ideas for small, relatively simple tasks for the FFmpeg project. People who might be interested in trying one of these tasks:

  • Someone who wants to contribute to FFmpeg and needs to find a well-defined task to start with
  • Someone who wishes to qualify for one of FFmpeg's coveted Summer of Code project slots
  • An existing FFmpeg developer who has been away from the project for a while and needs a smaller task as motivation for re-learning the codebase

For other tasks of varying difficulty, see the Interesting Patches page.

If you would like to work on one of these tasks, please take these steps:

If you would like to add to this list, please be prepared to explain some useful details about the task. Excessively vague tasks with no supporting details will be ruthlessly deleted.

Finish up a previous incomplete SoC project

Several SoC projects from previous years have not yet made it into FFmpeg. Taking any of them and finishing them up to the point that they can be included should make for a good qualification task. Check out the FFmpeg Summer Of Code overview page and look for the unfinished projects, like AMR-NB, Dirac, TS muxer, JPEG 2000.

Generic Colorspace system

This task involves adding support for more than 8 bits per component (for example, 10-bit Y, U, and V) and a generic, simple conversion to other colorspaces.

Does this have to do with revising FFmpeg's infrastructure? If so, then it doesn't feel like a qualification task. If it's something simpler, then the vague description does not convey that simplicity. Please expound. --Multimedia Mike 12:56, 25 February 2008 (EST)

I don't think so; it's about extending PixFmt into a richer structure with a fine-grained description (depth, range values, colorspace, sample period) and writing a generic, simple conversion from every format to every other, as suggested by Michael on the mailing list. The conversion routine could be a good qualification task for video encoder/decoder work. What do you think? --Baptiste Coudurier 00:30, 29 February 2008 (EST)

* Adding the YCoCg colorspace (with different sized planes) for RGB sourced pictures would be nice too. Elte 07:15, 16 March 2009 (EDT)
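As a rough illustration of the extended descriptor Baptiste describes, the sketch below shows what per-component depth plus colorspace, range, and subsampling fields might look like. All names are purely illustrative, not FFmpeg API.

```c
#include <stdint.h>

/* Hypothetical extended pixel-format descriptor: per-component depth
 * plus colorspace, range and chroma subsampling, so formats like
 * 10-bit 4:2:2 YUV can be described generically. */
enum ColorSpace { CS_RGB, CS_YUV, CS_YCOCG };

typedef struct PixFmtDesc {
    enum ColorSpace colorspace;
    uint8_t nb_components;
    uint8_t depth[4];        /* bits per component                   */
    uint8_t log2_chroma_w;   /* horizontal chroma subsampling (log2) */
    uint8_t log2_chroma_h;   /* vertical chroma subsampling (log2)   */
    uint8_t full_range;      /* 0 = limited range, 1 = full range    */
} PixFmtDesc;

/* e.g. planar YUV 4:2:2 with 10 bits per component */
static const PixFmtDesc yuv422p10 = { CS_YUV, 3, { 10, 10, 10, 0 }, 1, 0, 0 };
```

A generic converter would then dispatch on these fields rather than on a flat enum of hard-coded formats.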

Make the SoC DTS encoder multichannel-capable

Here is a skeleton for a DTS encoder; currently it can only encode stereo streams. The task is to extend it to also support 5.1 channels.

Specs and info can be found here:

GIF LZW Encoder and extend Encoder and Decoder to support Animated GIFs

An LZW encoder is already used for TIFF; it must be extended to support the GIF flavor.
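One concrete difference between the two flavors is bit order: GIF packs LZW codes into the byte stream LSB-first, while the TIFF variant packs them MSB-first. The toy writers below (not FFmpeg code) illustrate the two packing orders for variable-width codes.

```c
#include <stdint.h>

/* Pack n codes of `bits` bits each into buf, LSB-first (GIF order).
 * Returns the number of bytes written. */
static int put_codes_lsb(uint8_t *buf, const uint16_t *codes, int n, int bits)
{
    uint32_t acc = 0; int nbits = 0, len = 0;
    for (int i = 0; i < n; i++) {
        acc   |= (uint32_t)codes[i] << nbits;   /* new bits above old ones */
        nbits += bits;
        while (nbits >= 8) { buf[len++] = acc & 0xFF; acc >>= 8; nbits -= 8; }
    }
    if (nbits) buf[len++] = acc & 0xFF;         /* flush remaining bits */
    return len;
}

/* Same, but MSB-first (TIFF order). */
static int put_codes_msb(uint8_t *buf, const uint16_t *codes, int n, int bits)
{
    uint32_t acc = 0; int nbits = 0, len = 0;
    for (int i = 0; i < n; i++) {
        acc    = (acc << bits) | codes[i];      /* new bits below old ones */
        nbits += bits;
        while (nbits >= 8) { buf[len++] = (acc >> (nbits - 8)) & 0xFF; nbits -= 8; }
    }
    if (nbits) buf[len++] = (acc << (8 - nbits)) & 0xFF;
    return len;
}
```

For example, a single 9-bit clear code 0x100 comes out as the bytes 00 01 in GIF order but 80 00 in TIFF order.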

Implement a Vivo demuxer for FFmpeg

Implement an FFmpeg demuxer for the Vivo file format. The best reference for understanding the format would be MPlayer's existing .viv demuxer.

This task corresponds to issue 99:

I am ready to help out with understanding MPlayer's demuxer, esp. MPlayer API stuff if necessary. --Reimar 15:46, 1 March 2008 (EST)

Port missing demuxers from MPlayer to FFmpeg

MPlayer supports a few container formats in libmpdemux that are not yet present in libavformat. Porting them over and getting them relicensed as LGPL, or reimplementing them from scratch, should make reasonable small tasks.

  1. TiVo -- Jai Menon is working on this
  2. VIVO -- Daniel Verkamp has a patch for this
  3. SL support for MPEG-TS (anyone got samples?)
  4. MNG

Optimal Huffman tables for (M)JPEG

This task is outlined at and is tracked in the issue tracker:

YOP Playback System

This task is to implement an FFmpeg playback subsystem for Psygnosis YOP files. This will entail writing a new file demuxer and video decoder, both of which are trivial by FFmpeg standards. The Psygnosis YOP page contains the specs necessary to complete this task and points to downloadable samples.

Patch pending on -devel

M95 Playback System

This task is to implement an FFmpeg playback subsystem for M95 files. This will entail writing a new file demuxer and video decoder (the audio is already uncompressed), both of which are trivial by FFmpeg standards. The M95 page contains the specs necessary to complete this task and points to downloadable samples.

BRP Playback System

This task is to implement an FFmpeg playback subsystem for BRP files. This will entail writing a new file demuxer as well as a video decoder that can handle at least 2 variations of format data. Further, write an audio decoder for the custom DPCM format in the file. All of these tasks are considered trivial by FFmpeg standards. The BRP page contains the specs necessary to complete this task and points to downloadable samples for both known variations.

16-bit Interplay Video Decoder

FFmpeg already supports Interplay MVE files with 8-bit video data inside. This task involves supporting 16-bit video data. The video encoding format is mostly the same but the pixel size is twice as large. Engage the ffmpeg-devel list to discuss how best to approach this task.

16-bit VQA Video Decoder

FFmpeg already supports Westwood VQA files. However, there are 3 variations of its custom video codec. The first 2 are supported in FFmpeg. This task involves implementing support for the 3rd variation. Visit the VQA samples repository: -- The files in the directories Tiberian Sun VQAs/, bladerunner/, and dune2000/ use the 3rd variation of this codec. The VQA page should link to all the details you need to support this format.

Discussion/patch: (reference)

HNM4 Playback System

This task is to implement an FFmpeg playback subsystem for HNM4 variant of the HNM format. This will entail writing a new file demuxer and video decoder, both of which are trivial by FFmpeg standards. The HNM4 page contains the specs necessary to complete this task and links to downloadable samples.

Apple RPZA encoder

A patch was once sent to the ffmpeg-devel mailing list to include an encoder for the Apple RPZA video codec. That code can be found on the "Interesting Patches" page. This qualification task involves applying that patch so that it can compile with current FFmpeg SVN code and then cleaning it up per the standards of the project. Engage the mailing list to learn more about what to do.

Claimed by Jai Menon

QuickTime Edit List Support

Implement edit list support in FFmpeg's QuickTime demuxer (libavformat/mov.c). This involves parsing the 'elst' atom in a QuickTime file. For a demonstration of how this is a problem, download the file from and play it with ffplay or transcode it with ffmpeg. Notice that the audio and video are ever so slightly out of sync; proper edit list support will solve that. Other samples in that directory presumably exhibit edit list-related bugs as well. The Xine demuxer supports this and may be useful for hints.

(A patch was submitted to ffmpeg-devel around 14 March 2009.)
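For orientation, a version-0 'elst' payload (the bytes after the atom's size/type fields) is a version byte, three flag bytes, a big-endian entry count, and then (segment_duration, media_time, media_rate) triples, with media_time signed. A minimal parsing sketch, using hypothetical helpers rather than the actual mov.c API:

```c
#include <stdint.h>

typedef struct {
    uint32_t segment_duration;  /* in movie timescale units           */
    int32_t  media_time;        /* -1 marks an empty (delay) edit     */
} EditListEntry;

/* Read a 32-bit big-endian value. */
static uint32_t rb32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
}

/* Parse a version-0 'elst' payload into out[]; returns the number of
 * entries parsed, or -1 on error.  The 16.16 media_rate at entry
 * offset 8 is skipped in this sketch. */
static int parse_elst(const uint8_t *p, int size, EditListEntry *out, int max)
{
    if (size < 8 || p[0] != 0)   /* only version 0 handled here */
        return -1;
    uint32_t count = rb32(p + 4);
    p += 8; size -= 8;
    int n = 0;
    while (n < (int)count && n < max && size >= 12) {
        out[n].segment_duration = rb32(p);
        out[n].media_time       = (int32_t)rb32(p + 4);
        p += 12; size -= 12; n++;
    }
    return n;
}
```

The demuxer would then use media_time to offset each stream's timestamps, which is exactly what fixes the slight A/V desync described above.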

Implement the Flash Screen Video codec version 2

FFmpeg is missing both a decoder and an encoder for this codec; it would be nice to have them.

Daniel Verkamp is working on this

Add WMA fixed-point decoder back into libavcodec

Rockbox's fixed-point WMA decoder was adapted from the decoder in libavcodec.

RealAudio 14.4 encoder

FFmpeg contains a decoder for RealAudio 14.4, a fairly simple integer CELP codec. Write an encoder. This would be a good qualification task for anyone interested in working on AMR, Speex, or sipr.

VC1 timestamps in m2ts

Codec copy of VC1 from m2ts currently doesn't work. Either extend the VC1 parser to output/fix timestamps, or fix the timestamps from m2ts demuxing.

FLIC work

Revise the Flic Video decoder at libavcodec/flicvideo.c to support video transported in AVI or MOV files while making sure that data coming from the usual FLI files still works. 'AFLC' and 'flic' FourCC samples are linked from the Flic Video page.

CJPG format

Extend FFmpeg's MJPEG decoder to handle the different frames/packing of CJPG. Samples at:

flip flag for upside-down codecs

About the flip: a patch that decodes images flipped when
codec_tag == ff_get_fourcc("GEOX") is welcome.
It's a matter of two lines manipulating data/linesize of the images after
get_buffer() or something similar. --Michael

more info:
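The two-line manipulation Michael describes amounts to pointing each plane at its last row and negating the stride, so the decoder's normal top-down writes land bottom-up. A sketch on a bare byte plane rather than an actual AVFrame:

```c
#include <stdint.h>

/* Flip a plane vertically without copying: jump to the last row and
 * negate the stride, so row i of the "flipped" image is row
 * height-1-i of the original buffer. */
static void flip_plane(uint8_t **data, int *linesize, int height)
{
    *data     += (height - 1) * *linesize;  /* point at the last row */
    *linesize  = -*linesize;                /* then walk upwards     */
}
```

In the decoder this would be applied to each of the frame's planes right after get_buffer() when the flip fourcc is detected.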

lavf-based concatenation tool

Until FFmpeg itself supports multiple input files, it would be nice to have a libavformat-based tool that extracts frames from multiple files (possibly in different containers as well) and puts them into a single one.

cljr and vcr1 encoders

According to this: both encoders are disabled and won't compile if enabled. Michael would prefer to keep them around and have someone grow them into full encoders.

implement some colorspace fourcc/codecs

Some colorspace formats were uploaded, including:

  • CYUV.AVI is 8-bit interleaved 4:2:2
  • a12v.avi is 4:2:2:4 10-bit interleaved
  • auv2.avi is 4:2:2:4 8-bit interleaved
  • V-codecs/yuv8/MAILTEST.AVI

These might decode with current pixfmts; for that, all you will need is:

cd ffmpeg
svn di -r20378:20379

step by step tutorial for adding new input formats to swscale:

cd mplayer/libswscale/
svn di -r20426:20427
Hunks 3 and 5 are not needed; they are optional special converters. The change to isSupportedOut() is also not needed. The above will add a new input format.

another example for adding an input format

cd mplayer/libswscale/
svn di -r20604:20605

Make the RTP demuxer support RTCP BYE packets

RTCP BYE (type 203) packets are sent from the sender to the receiver to notify that a stream has ended. FFmpeg currently ignores them.

Sample url rtsp://
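For reference, an RTCP packet carries its type in the second header byte, with BYE defined as type 203 in RFC 3550, and the version in the top two bits of the first byte. A minimal check (hypothetical helper, not the existing rtpdec API) might look like:

```c
#include <stdint.h>

enum { RTCP_BYE = 203 };  /* packet type from RFC 3550 */

/* Return 1 if the buffer holds an RTCP BYE packet: version bits must
 * be 2 and the packet-type byte must be 203. */
static int rtcp_is_bye(const uint8_t *pkt, int len)
{
    return len >= 2 && (pkt[0] >> 6) == 2 && pkt[1] == RTCP_BYE;
}
```

On a match, the demuxer would mark the corresponding stream as finished instead of waiting for more data.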

Implement the RTP/Theora payload

The Theora payload specification is currently still a draft, but it would be nice to support it. The feng RTSP server supports the Theora RTP payload draft and can be used for testing your implementation, or you can use the online feng test server (rtsp://

Most likely, your implementation will consist of a file called rtp_theora.c in libavformat/, which reads the header packets available in the SDP (the "configuration" piece in the fmtp: line) and parses individual incoming RTP packets from the RTSP demuxer (minus the generic RTP header bits). It should output Theora-encoded frames which can subsequently be decoded by the Theora decoder in libavcodec/.

support for YCoCg/RGB colorspace in FFV1

Add support for YCoCg and RGB encoded sources for the FFV1 codec

This would add a free lossless intra-frame RGB codec on all platforms FFmpeg supports (most importantly Mac OS and Windows), something often asked for in video-editing forums (e.g.
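For reference, such support could build on the reversible YCoCg-R lifting transform, which round-trips RGB losslessly in integer arithmetic. This sketch assumes arithmetic right shift on negative values (as GCC and typical FFmpeg targets provide):

```c
/* Forward YCoCg-R transform (lifting form, lossless for integers). */
static void rgb2ycocg(int r, int g, int b, int *y, int *co, int *cg)
{
    int t;
    *co = r - b;
    t   = b + (*co >> 1);
    *cg = g - t;
    *y  = t + (*cg >> 1);
}

/* Inverse transform: undo the lifting steps in reverse order. */
static void ycocg2rgb(int y, int co, int cg, int *r, int *g, int *b)
{
    int t = y - (cg >> 1);
    *g = cg + t;
    *b = t - (co >> 1);
    *r = *b + co;
}
```

Because each step is an exact integer lifting step, the inverse recovers the original RGB sample bit-for-bit, which is what makes the colorspace attractive for a lossless codec like FFV1.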

Metal Gear Solid Video format demuxer

Write a demuxer to play video files harvested from the game Metal Gear Solid: The Twin Snakes. The format is described on the wiki page Metal Gear Solid VP3 (which also contains links to samples). This page is based on observations and conjecture, so remember to engage the ffmpeg-devel mailing list with questions.

IFF ANIM decoder

Modify libavformat/iff.c to handle this chunk and write a decoder for the format. The wiki page at IFF ANIM has links to more information and source code. Samples can be found at .

CDXL decoder

Write a decoder for this format using the information on the CDXL wiki page. Discussed for the 2009 SoC.

Port missing decoders/demuxers from other open-source projects

Paris Audio File (PAF)
GNU Octave 2.0 (MAT4)
GNU Octave 2.1 (MAT5)
Portable Voice Format (PVF)
Sound Designer II (SD2)

samples are here:


150+ formats:

Many image formats not yet in FFmpeg.

Many OPL2/OPL3 audio formats not yet in FFmpeg.

Many music pattern formats not yet in FFmpeg.

SNES-SPC700 Sound Format

Port Ut Video decoder/encoder

A GPLv2 decoder/encoder is linked from the wiki page.

Sony PSP demuxer

Create or port a demuxer for the Sony PlayStation Portable format PMP.

libswscale PAL8 output

See the thread: "[RFC] libswscale palette output implementation":

vloopback output support

vloopback is a Linux kernel module that creates a virtual video device which programs can write to, and which can be read as a normal video device:

This would allow writing FFmpeg's output to a vloopback device so it can be displayed by a program reading from that device (e.g. Skype or another VoIP client).

An example of a program which uses vloopback:

Port video filters from MPlayer/VLC/Mjpegtools/Effectv/etc etc to libavfilter

There are plenty of programs providing their own filters, and many of them could easily be ported to the superior ;-) libavfilter framework. It may also be possible to create wrappers around other libraries (e.g. OpenCV, libgimp, libshowphoto, libaa).