Small FFmpeg Tasks
This page contains ideas for small, relatively simple tasks for the FFmpeg project. People who might be interested in trying one of these tasks:
- Someone who wants to contribute to FFmpeg and needs to find a well-defined task to start with
- Someone who wishes to qualify for one of FFmpeg's coveted Summer of Code project slots
- An existing FFmpeg developer who has been away from the project for a while and needs a smaller task as motivation for re-learning the codebase
For other tasks of varying difficulty, see the Interesting Patches page.
If you would like to work on one of these tasks, please take these steps:
- Subscribe to the FFmpeg development mailing list and indicate your interest
- Ask Multimedia Mike for a Wiki account so you can claim your task on this Wiki
If you would like to add to this list, please be prepared to explain some useful details about the task. Excessively vague tasks with no supporting details will be ruthlessly deleted.
Finish up a previous incomplete SoC project
Several SoC projects from previous years have not yet made it into FFmpeg. Taking any of them and finishing them up to the point that they can be included should make for a good qualification task. Check out the FFmpeg Summer Of Code overview page and look for the unfinished projects, like AMR-NB, Dirac, TS muxer, JPEG 2000.
Generic Colorspace system
This task involves adding support for more than 8 bits per component (for example, 10 bits each for Y, U, and V) and a generic, simple conversion to other colorspaces.
Does this have to do with revising FFmpeg's infrastructure? If so, then it doesn't feel like a qualification task. If it's something simpler, then the vague description does not convey that simplicity. Please expound. --Multimedia Mike 12:56, 25 February 2008 (EST)
I don't think so. It means extending PixFmt into a structure with a fine-grained description (depth, range values, colorspace, sample period) and writing a generic, simple conversion from every format to every other, as Michael suggested on the mailing list. The conversion routine could be a good qualification task for people interested in video encoders/decoders. What do you think? --Baptiste Coudurier 00:30, 29 February 2008 (EST)
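For concreteness, a descriptor along the lines discussed might look something like this (a sketch only; all field names are illustrative, not actual FFmpeg API):

```c
#include <stdint.h>

/* Hypothetical extended pixel format descriptor, roughly along the lines
 * discussed on ffmpeg-devel; every field name here is illustrative. */
typedef struct PixFmtDesc {
    int nb_components;   /* e.g. 3 for Y, U, V */
    int depth[4];        /* bits per component, e.g. 10 for 10-bit YUV */
    int log2_chroma_w;   /* horizontal chroma subsampling: 1 for 4:2:0/4:2:2 */
    int log2_chroma_h;   /* vertical chroma subsampling: 1 for 4:2:0 */
    int full_range;      /* 0: limited (MPEG) range, 1: full (JPEG) range */
} PixFmtDesc;
```

A generic conversion routine would then only need to consult such a descriptor for each side, instead of hardcoding knowledge about every PixFmt pair.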
Make the SoC dts encoder multichannel capable
A skeleton for a DTS encoder is at http://svn.mplayerhq.hu/soc/dcaenc/; currently it can only encode stereo streams. The task is to extend it to also support 5.1 channels.
Specs and info can be found here: http://wiki.multimedia.cx/index.php?title=DTS
GIF LZW Encoder; Extend Encoder and Decoder to Support Animated GIFs
An LZW encoder is already used for TIFF; it must be extended to support the GIF flavor.
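To illustrate what "GIF flavor" means at the bitstream level: TIFF packs its variable-width LZW codes MSB-first, while GIF packs them LSB-first. A minimal sketch of the difference (illustrative helpers, not the libavcodec code):

```c
#include <stdint.h>

/* GIF style: the low bit of each code goes into the low bit of the stream */
static void put_code_lsb(uint8_t *buf, int *bitpos, uint32_t code, int bits)
{
    for (int i = 0; i < bits; i++) {
        if (code & (1u << i))
            buf[*bitpos >> 3] |= 1u << (*bitpos & 7);
        (*bitpos)++;
    }
}

/* TIFF style: the high bit of each code goes into the high bit of the stream */
static void put_code_msb(uint8_t *buf, int *bitpos, uint32_t code, int bits)
{
    for (int i = bits - 1; i >= 0; i--) {
        if (code & (1u << i))
            buf[*bitpos >> 3] |= 0x80u >> (*bitpos & 7);
        (*bitpos)++;
    }
}
```

On top of this, GIF also grows the code width at different points and uses clear/end-of-information codes, but the bit order is the core incompatibility with the existing TIFF path.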
Patch cleanup for MPEG 1 & 2 optimizations
Details are in the issue tracker: http://roundup.ffmpeg.org/roundup/ffmpeg/issue100
Implement a Vivo demuxer for FFmpeg
This task corresponds to issue 99: http://roundup.ffmpeg.org/roundup/ffmpeg/issue99
I am ready to help out with understanding MPlayer's demuxer, esp. MPlayer API stuff if necessary. --Reimar 15:46, 1 March 2008 (EST)
Port missing demuxers from MPlayer to FFmpeg
MPlayer supports a few container formats in libmpdemux that are not yet present in libavformat. Porting them over and getting them relicensed as LGPL, or reimplementing them from scratch, should make reasonable small tasks.
- Jai Menon is working on porting the TiVo demuxer.
Optimal Huffman tables for (M)JPEG
- Indrani Kundu Saha is currently working on this task as a qualification for Google SoC 2009 --Ce 19:41, 13 March 2009 (EDT)
YOP Playback System
This task is to implement an FFmpeg playback subsystem for Psygnosis YOP files. This will entail writing a new file demuxer and video decoder, both of which are trivial by FFmpeg standards. The Psygnosis YOP page contains the specs necessary to complete this task and points to downloadable samples.
M95 Playback System
This task is to implement an FFmpeg playback subsystem for M95 files. This will entail writing a new file demuxer and video decoder (the audio is already uncompressed), both of which are trivial by FFmpeg standards. The M95 page contains the specs necessary to complete this task and points to downloadable samples.
BRP Playback System
This task is to implement an FFmpeg playback subsystem for BRP files. This will entail writing a new file demuxer as well as a video decoder that can handle at least 2 variations of format data. Further, write an audio decoder for the custom DPCM format in the file. All of these tasks are considered trivial by FFmpeg standards. The BRP page contains the specs necessary to complete this task and points to downloadable samples for both known variations.
16-bit Interplay Video Decoder
FFmpeg already supports Interplay MVE files with 8-bit video data inside. This task involves supporting 16-bit video data. The video encoding format is mostly the same but the pixel size is twice as large. Engage the ffmpeg-devel list to discuss how best to approach this task.
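As a sketch of what "twice as large" implies in practice, block-copy helpers mainly need a pixel-size parameter (hypothetical helper, not the existing interplayvideo.c code):

```c
#include <stdint.h>
#include <string.h>

/* The Interplay opcodes keep the same structure in 16-bit files, but every
 * pixel is 2 bytes instead of 1, so copies must be scaled accordingly. */
static void copy_block(uint8_t *dst, int dst_stride,
                       const uint8_t *src, int src_stride,
                       int w, int h, int pixel_size)
{
    for (int y = 0; y < h; y++)
        memcpy(dst + y * dst_stride, src + y * src_stride, w * pixel_size);
}
```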
16-bit VQA Video Decoder
FFmpeg already supports Westwood VQA files. However, there are 3 variations of its custom video codec. The first 2 are supported in FFmpeg. This task involves implementing support for the 3rd variation. Visit the VQA samples repository: http://samples.mplayerhq.hu/game-formats/vqa/ -- The files in the directories Tiberian Sun VQAs/, bladerunner/, and dune2000/ use the 3rd variation of this codec. The VQA page should link to all the details you need to support this format.
HNM4 Playback System
This task is to implement an FFmpeg playback subsystem for the HNM4 variant of the HNM format. This will entail writing a new file demuxer and video decoder, both of which are trivial by FFmpeg standards. The HNM4 page contains the specs necessary to complete this task and links to downloadable samples.
Apple RPZA encoder
A patch was once sent to the ffmpeg-devel mailing list to include an encoder for the Apple RPZA video codec. That code can be found on the "Interesting Patches" page. This qualification task involves applying that patch so that it can compile with current FFmpeg SVN code and then cleaning it up per the standards of the project. Engage the mailing list to learn more about what to do.
QuickTime Edit List Support
Implement edit list support in FFmpeg's QuickTime demuxer (libavformat/mov.c). This involves parsing the 'elst' atom in a QuickTime file. For a demonstration of the problem, download menace00.mov from http://samples.mplayerhq.hu/mov/editlist/ and play it with ffplay or transcode it with ffmpeg; notice that the audio and video are ever so slightly out of sync. Proper edit list support will solve that. Other samples in that directory presumably also exhibit edit list-related bugs. The Xine demuxer supports edit lists and might be useful for hints.
- Krishna Gadepalli is working on this (patch submitted to ffmpeg-devel , currently in review) --Compn 10:35, 14 March 2009 (EDT)
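For reference, a version-0 'elst' payload could be parsed roughly like this (a sketch with illustrative names, not the mov.c implementation; all QuickTime atom fields are big-endian):

```c
#include <stdint.h>

/* 'elst' payload layout after the atom size/type: 1 byte version,
 * 3 bytes flags, 4 bytes entry count, then per entry: segment duration,
 * media time (-1 marks an empty edit), and a 16.16 fixed-point rate. */
typedef struct {
    uint32_t segment_duration; /* in movie timescale units */
    int32_t  media_time;       /* in media timescale units, or -1 */
    uint32_t media_rate;       /* 16.16 fixed point, usually 0x00010000 */
} ElstEntry;

static uint32_t rb32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
}

/* returns the number of entries parsed, or -1 on error */
static int parse_elst(const uint8_t *p, int size, ElstEntry *e, int max_entries)
{
    if (size < 8)
        return -1;
    uint32_t count = rb32(p + 4);
    if (count > (uint32_t)max_entries || size < 8 + 12 * (int)count)
        return -1;
    for (uint32_t i = 0; i < count; i++) {
        const uint8_t *q = p + 8 + 12 * i;
        e[i].segment_duration = rb32(q);
        e[i].media_time       = (int32_t)rb32(q + 4);
        e[i].media_rate       = rb32(q + 8);
    }
    return (int)count;
}
```

The demuxer then has to translate these entries into per-stream start offsets so that audio and video line up.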
LGPL Forward DCT
The forward double-precision DCT in this file has a non-free license. We need an LGPL replacement.
Implement the Flash Screen Video codec version 2
FFmpeg is missing both a decoder and an encoder for Flash Screen Video version 2; it would be nice to have them.
Add Fixed-Point WMA Decoder Back into libavcodec
Rockbox's fixed-point WMA decoder was adapted from the decoder in libavcodec: http://svn.rockbox.org/viewvc.cgi/trunk/apps/codecs/libwma/
RealAudio 14.4 encoder
FFmpeg contains a decoder for RealAudio 14.4, a fairly simple integer CELP codec. Write an encoder. This would be a good qualification task for anyone interested in working on AMR, Speex, or sipr.
VC-1 Timestamps in M2TS
Stream copy of VC-1 from M2TS currently doesn't work. Either extend the VC-1 parser to output/fix timestamps, or fix the timestamps during M2TS demuxing.
Flic Video in AVI and MOV Files
Revise the Flic Video decoder at libavcodec/flicvideo.c to support video transported in AVI or MOV files, while making sure that data coming from the usual FLI files still works. 'AFLC' and 'flic' FourCC samples are linked from the Flic Video page.
Hook up QT YUV2 FourCC
Extend PNG Decoder
Get this PNG working in FFmpeg's PNG decoder: http://roundup.ffmpeg.org/roundup/ffmpeg/issue813
CJPG Support in the MJPEG Decoder
Extend FFmpeg's MJPEG decoder to handle the different frames/packing of CJPG. Samples at: http://roundup.ffmpeg.org/roundup/ffmpeg/issue777
Optimize Theora Decoder
Speed up the Theora decoder. A 720x480 sample hits 100% CPU on a 1.5 GHz Pentium 4.
- Do you have any specific optimizations tips? I like these small tasks to present a clearer jumping-off point. --Multimedia Mike 18:57, 22 December 2008 (EST)
- Did Theora make use of the MMX/SSE functions of ffvp3? I was looking at the Xiph GSoC page, which mentioned a similar task. --Compn 21:17, 22 December 2008 (EST)
- The major optimization I can think of is reworking coefficient decoding to avoid the continue in unpack_vlcs() (basically by having a list of coefficient VLCs for each position rather than for each block, then decoding them when actually rendering the block.) Unfortunately this also requires reworking render_slice() and reverse_dc_prediction() quite significantly which is why I haven't done it yet. Yuvi 18:25, 23 December 2008 (EST)
Flip Flag for Upside-Down Codecs
About the flip: a patch that decodes images flipped when codec_tag == ff_get_fourcc("GEOX") is welcome. It's a matter of two lines manipulating the data/linesize of images after get_buffer() or something similar [...] --Michael
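Michael's two-line trick for one plane might look like this (a sketch; flip_plane is a hypothetical helper):

```c
#include <stdint.h>

/* Point the plane at its last row and negate the stride, so the decoder
 * effectively writes the image bottom-up without copying any pixels.
 * Applied after get_buffer(), once per plane. */
static void flip_plane(uint8_t **data, int *linesize, int height)
{
    *data     += *linesize * (height - 1);
    *linesize  = -*linesize;
}
```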
lavf-based concatenation tool
Until FFmpeg itself supports multiple input files, it would be nice to have a libavformat-based tool that extracts frames from multiple files (possibly in different containers as well) and puts them into a single one.
cljr and vcr1 encoders
According to this: http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/2009-February/063647.html both of the encoders are disabled, and won't compile if enabled. Michael would prefer to keep them around, and have someone grow them into full encoders.
Add WAVEFORMATEXTENSIBLE Support to the WAV Muxer
Clean up this patch: http://article.gmane.org/gmane.comp.video.ffmpeg.devel/79503
Implement Some Colorspace FourCCs/Codecs
Samples of some colorspace formats were uploaded to http://samples.mplayerhq.hu/V-codecs/, including:
2vuy.avi CYUV.AVI P422.AVI UYNV.AVI UYNY.avi V422.AVI YUNV.AVI a12v.avi auv2.avi and V-codecs/yuv8/MAILTEST.AVI .
Step-by-step tutorial for adding new input formats to swscale:
cd mplayer/libswscale/
svn di -r20426:20427
Hunks 3 and 5 of that diff are not needed (they are optional special converters), and neither is the change to isSupportedOut(); the rest of the diff is what adds a new input format.
Another example of adding an input format:
cd mplayer/libswscale/
svn di -r20604:20605
Create a libamr-compatible library of the Android AMR codec
Make the rtp demuxer support rtcp BYE packets
RTCP BYE packets (packet type 203) are sent from the sender to the receiver to notify it that a stream has ended. FFmpeg currently ignores them.
Sample URL: rtsp://media.lscube.org/tests/tc.mov
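Recognizing a BYE packet is straightforward; a minimal sketch (illustrative, not the libavformat code):

```c
#include <stdint.h>

/* Minimal check for an RTCP BYE packet (RFC 3550 section 6.6): byte 0
 * carries version (2 bits), padding (1 bit) and source count (5 bits);
 * byte 1 is the packet type, which is 203 for BYE. */
static int is_rtcp_bye(const uint8_t *buf, int len)
{
    if (len < 4)
        return 0;
    if ((buf[0] >> 6) != 2) /* RTP/RTCP version must be 2 */
        return 0;
    return buf[1] == 203;
}
```

The demuxer would react to this by marking the corresponding stream as finished rather than waiting for a timeout.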
Implement the RTP/Vorbis payload
This is supported by the feng RTSP server, and is described in RFC 5215. For testing, you can set up a local feng RTSP server to stream some local Vorbis file, or you can use the online feng test-server (rtsp://media.lscube.org:554/tests/rms_profumo_1.ogv).
Most likely, your implementation will consist of a file called rtp_vorbis.c in libavformat/, which will read the header packets available in the SDP (the "configuration" piece in the fmtp: line) and parse individual incoming RTP packets from the RTSP demuxer (minus the generic RTP header bits). It should output Vorbis-encoded frames which can subsequently be decoded by the Vorbis decoder in libavcodec/.
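For orientation, the 4-byte payload header described in RFC 5215 could be parsed like this (a sketch with illustrative names):

```c
#include <stdint.h>

/* RFC 5215 Vorbis payload header: a 24-bit Ident selecting the codebook
 * setup, a 2-bit fragment type, a 2-bit data type (0 = raw Vorbis data)
 * and a 4-bit packet count. Each packet that follows is prefixed by a
 * 16-bit big-endian length. */
typedef struct {
    uint32_t ident;
    int frag_type;  /* 0: unfragmented, 1: start, 2: middle, 3: end */
    int data_type;  /* 0: raw, 1: config, 2: comment */
    int num_pkts;
} VorbisPayloadHeader;

/* returns bytes consumed (4), or -1 if the buffer is too short */
static int parse_vorbis_payload_header(const uint8_t *p, int len,
                                       VorbisPayloadHeader *h)
{
    if (len < 4)
        return -1;
    h->ident     = ((uint32_t)p[0] << 16) | (p[1] << 8) | p[2];
    h->frag_type =  p[3] >> 6;
    h->data_type = (p[3] >> 4) & 3;
    h->num_pkts  =  p[3] & 15;
    return 4;
}
```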
Implement the RTP/Theora payload
The Theora RTP payload is still only a draft, but it would be nice to support it anyway. As above, the feng RTSP server supports the Theora payload draft and can be used for testing your implementation, or you can use the online feng test server (rtsp://media.lscube.org:554/tests/rms_profumo_1.ogv).
Most likely, your implementation will consist of a file called rtp_theora.c in libavformat/, which will read the header packets available in the SDP (the "configuration" piece in the fmtp: line) and parse individual incoming RTP packets from the RTSP demuxer (minus the generic RTP header bits). It should output Theora-encoded frames which can subsequently be decoded by the Theora decoder in libavcodec/.
CDG Decoder + Demuxer
Create a CD+G (CD Graphics) decoder and demuxer. Existing implementations: http://www.kibosh.org/pykaraoke/, http://users.fbihome.de/~glogow/, http://miageprojet.unice.fr/twiki/bin/view/Fun/JavaKarPlayer, http://www.kibosh.org/cdgtools/, and http://bat-kolio.net/cdg2video/ (which uses FFmpeg).
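For orientation, the on-disc packet layout is simple (field names illustrative, based on the public CDG descriptions):

```c
#include <stdint.h>

/* One CD+G subcode packet is 24 bytes; only the low 6 bits of each field
 * byte are significant. */
typedef struct {
    uint8_t command;     /* (command & 0x3F) == 9 marks a CD+G packet */
    uint8_t instruction; /* low 6 bits: memory preset, tile block, scroll, ... */
    uint8_t parity_q[2];
    uint8_t data[16];    /* payload for the instruction */
    uint8_t parity_p[4];
} CdgPacket;
```

A demuxer would simply read the file in 24-byte chunks and hand each CD+G packet to the decoder, which maintains the 300x216 pixel, 16-color screen state.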
Support for YCoCg/RGB Colorspace in FFV1
This would add a free lossless intra-frame RGB codec for all platforms FFmpeg supports (most importantly Mac OS and Windows), something that is often requested for video editing in video forums (e.g. slashcam.de).