FFmpeg Summer Of Code 2007

From MultimediaWiki

Google is sponsoring its third annual Summer of Code for the summer of 2007. This entails sponsoring students to work on assorted open source projects as well as sponsoring mentors in those same projects. Everyone wins.

FFmpeg was a Summer of Code participant in the summer of 2006 (here is the corresponding Wiki page).

Mike Melanson (mike -at- multimedia.cx) is the administrator and main point of contact for matters relating to the FFmpeg Summer of Code.

How to apply

Before applying, make sure you are qualified for the task. Last year, 50% of the applicants were not qualified for the task they applied for.

  • You have to know how to program in C fairly well.
  • We would like you to submit a patch that fixes a bug or adds a feature to FFmpeg. This lets us judge whether you are qualified for the task. On this page there is a list of Qualification Tasks that can be done, but you are free to submit anything you feel might be of value to FFmpeg. The qualification task can be done after you have filed your application (up until around April 7, as the list of accepted students is scheduled to be posted by Google on April 9).
  • Submit a good application through the formal Google Summer of Code process during the application timeframe (March 14-24, 2007).
  • You have to be able to put more than 35 hours per week into the project.
  • You can't have another job at the same time as the SoC project.

Current Status

  • March 5-12, 2007: Application period for mentoring organizations.
  • March 6, 2007: Mike Melanson submitted FFmpeg mentoring application.
  • March 14, 2007: Google is now accepting student applications.
  • March 15, 2007: FFmpeg was accepted as a mentoring organization and is now accepting student applications until March 24, 2007. Note that this is NOT the deadline to complete a qualification task; final student selections are to be made by April 9.
  • April 12, 2007: Google has allocated 8 project slots to FFmpeg.

Project Proposals


Qualifications for a good Summer of Code proposal:

  • discrete, well-defined, modular
  • comprised of a series of measurable sub-goals
  • based on open specs that are available free of charge
  • based on complete specs

An example of a good proposal is the implementation of a decoder or demuxer for an as yet unsupported multimedia format, or an encoder or muxer for a format that can already be demuxed/decoded by FFmpeg.

An example of a less desirable proposal is one that's not as measurable, such as refactoring APIs. Bad proposals tend to be ones that would require touching a lot of core code.

To re-iterate:

  • Localized/isolated code projects = good
  • Global code refactoring = bad



Most of this list is just some ideas we are kicking around.

Projects with Mentors (official projects)

QCELP Decoder

Mentor: Benjamin Larsson

Matroska Muxer

Mentor: Aurelien Jacobs; Backup mentors: Steve Lhomme, Ronald S. Bultje


MPEG-2 Transport Stream Muxer

  • Specification: ISO 13818-1

Mentor: Baptiste Coudurier

MXF Muxer

  • Specification: SMPTE 377M

Mentor: Baptiste Coudurier

RV40 Decoder

  • RealVideo 4 is steadily being reverse engineered and should be a reasonable candidate for re-implementation by the summer.

Mentor: Mike Melanson

PAFF decoding for H.264/AVC

Mentor: Loren Merritt

Dirac Encoder AND Decoder

Mentor: Luca Barbato

E-AC3 Decoder

Mentor: Justin Ruggles

Ideas for more projects to be determined


Monkey's Audio Decoder (APE)


Finish LC-AAC decoder and implement HE-AAC decoder (LGPL)

GSM Decoder

i263 Decoder

VP6 Encoder

NUT Muxer

  • General improvements and enhancements

DPX/Cineon Encoder AND Decoder

OpenEXR Encoder AND Decoder

HD Photo Encoder AND Decoder

mp3PRO & aacPlus & MPEG Surround decoders

  • mp3PRO decoder. Note: mp3PRO decoding means MP3 + SBR (Spectral Band Replication) demuxing/decoding. (Standard MP3 decoders can decode mp3PRO-encoded files/streams, but without SBR you do not get the full quality. By adding an SBR decoder to FFmpeg and coupling it with the existing MP3 decoder, mp3PRO could be played back at full quality. An SBR decoder could be shared with an aacPlus decoder, as aacPlus also uses SBR.)
  • aacPlus (a.k.a. AAC+) decoder. Note: aacPlus v1 decoding means HE-AAC + SBR (Spectral Band Replication) demuxing/decoding, and aacPlus v2 decoding means HE-AAC + SBR + PS (Parametric Stereo) demuxing/decoding. (Standard HE-AAC decoders can decode aacPlus-encoded files/streams, but without SBR and PS you do not get the full quality. By adding an SBR decoder and a PS decoder to FFmpeg and coupling them with an existing HE-AAC decoder, aacPlus could be played back at full quality. An SBR decoder could be shared with an mp3PRO decoder, as mp3PRO also uses SBR.)
  • MPEG Surround decoder/parser (for all audio, but especially MP3/mp3PRO and AAC/aacPlus, as those are in use today). MPEG Surround technology shares characteristics with SBR (Spectral Band Replication) and PS (Parametric Stereo), which mp3PRO and aacPlus decoders also use, so if SBR and PS decoders were added to FFmpeg they could probably share common code with an MPEG Surround decoder/parser. (DivX Inc. is one company that uses MPEG Surround technology to achieve 5.1-channel surround sound in smaller files.)

Native DirectShow support

Option to build FFmpeg's decoders/encoders/demuxers/muxers and post-processing filters as filters for Microsoft's DirectShow API on Windows (the multimedia framework behind DirectX 8/9 video playback), so that FFmpeg can be compiled natively for DirectShow and thus be used directly by players that use DirectShow.


http://sourceforge.net/projects/drdivx/ (drffmpeg)

DirectX Video Acceleration (DXVA) 1.0 AND 2.0 for video decoding

Support the Microsoft DirectX VA (DXVA) API natively for GPU-assisted decoding under Windows. Note: this first requires the native DirectShow support described above.


Additional subtitle support

  • Create a common 'subtitle parser library' (and/or an API for adding support for additional subtitle formats) - a common sub-library of FFmpeg with all subtitle decoders/demuxers/parsers gathered in one place (similar to libpostproc and libavutil). Call it "libsubs" (or "libsub", "libsubtitles" or whatever). Move FFmpeg's existing VobSub and DVBsub code there, so that all existing and future subtitle code, whether bitmap- or text-based, is collected there. This will reduce future code duplication by sharing common code, making it easier to add support for additional subtitle formats.
    • Maybe use MPlayer's recently added "libass" (SSA/ASS subtitle renderer) as a base for such a common library?
  • Support for advanced SSA/ASS rendering
    • Possible source are libass or the asa library
  • Support bold, italic, underline, RGB colors, size changes and font changes for a whole line or part of a line
  • Line 23 signal (a.k.a. "wide-screen signal") detection and use for DVD-Video (VobSub)
  • Support for HTML tags in subtitles
  • Capability of displaying subtitles with no video enabled (for example for audio books)
  • Support for karaoke subtitles (for kar and cdg, etc.)
  • Dual-subtitle display (display two subtitles/languages at the same time, one at the bottom as normal plus one at the top of the screen)
  • Capability of moving the subtitles in the picture (FreeType renderer)
  • Support more subtitle formats (text- and bitmap-based):
    • Closed captioning (CC) subtitle support (closed captions for the deaf and hard of hearing, also known as "Line 21 captioning", uses VobSub bitmaps)
      • xine has an SPU decoder for subpictures and software decoding of closed captions
    • DirectVobSub (VSFilter) - standard VobSubs (DVD-Video subtitles) embedded in AVI containers
    • DivX Subtitles (XSUB) display/reader/decoder (Note: bitmap-based subtitle, similar to VobSub)
    • SubRip (.srt) subtitle support (Note: simple text-based subtitle with timestamps)
    • SubViewer (.sub) subtitle support (Note: simple text-based subtitle with timestamps)
    • MicroDVD (.sub) subtitle support (Note: simple text-based subtitle with timestamps)
    • SAMI (.smi) subtitle support (Note: simple text-based subtitle with timestamps)
    • SubStation Alpha (.ssa/.ass) subtitle support (Note: advanced text-based subtitle with timestamps and XY location on screen)
    • RealText (.rt) subtitle support
    • PowerDivX (.psb) subtitle support
    • Universal Subtitle Format (.usf) subtitle support
    • Structured Subtitle Format (.ssf) subtitle support

libstream (a common 'stream client' library)

  • Create a common 'stream demuxer/parser library' for the client side to receive input streams (and/or an API for adding support for additional streaming formats) - an LGPL'ed sub-library of FFmpeg with all stream demuxers/parsers gathered in one place (similar to libpostproc and libavutil). Call it "libstream" (or "stream" or whatever). Move FFmpeg's existing stream code, such as HTTP and RTSP/RTP, there. This will reduce future code duplication by sharing common code, making it easier to add support for additional streaming formats. Altogether this would make it easy for audio/video players using FFmpeg to get all-in-one streaming support.
    • Maybe use MPlayer's "stream" library structure, LIVE555, cURL, or (probably better) libnemesi as a base for such a common library?
  • Add support for additional streaming protocols (on the client side) and improve/enhance support for existing protocols:
    • HTTP (Hypertext Transfer Protocol) client
      • plus SSL (Secure Sockets Layer) support for HTTPS
    • UDP (User Datagram Protocol) client
    • RTSP - Real-Time Streaming Protocol (RFC2326) client
    • RTP/RTCP - Real-Time Transport Protocol/RTP Control Protocol (RFC3550) client
    • RTP Profile for Audio and Video Conferences with Minimal Control (RFC3551) client
    • RealMedia RTSP/RDT (Real Time Streaming Protocol / Real Data Transport) client
    • SDP (Session Description Protocol) / SSDP (Simple Service Discovery Protocol) client
    • MMS (Microsoft Media Services) client
      • including the subprotocols MMSH (MMS over HTTP) and MMST (MMS over TCP)

A/V filter API (audio and video pre-process/post-process filters API system)

FFmpeg's already well-known libavcodec module has become the de facto standard library for video decoding and encoding in free software projects. Unfortunately, no similar standard library has surfaced for audio/video filtering and otherwise working with audio/video streams once they have been decoded. Various multimedia projects (such as MPlayer, Xine, GStreamer, VirtualDub, etc.) have implemented their own filter systems with varying degrees of success. What is needed is a high-quality audio and video filter API: efficient, flexible enough to meet all the requirements that have led these projects to invent their own filter systems, and yet easy to use and to develop new filters for. This proposal is to implement such a filter library for FFmpeg, where it can easily be used by other multimedia-related software projects.

Mentor: A'rpi (has expressed interest in possibly helping to implement a filter API in FFmpeg; he has also volunteered to help port the MPlayer filters if such an API becomes available: http://lists.mplayerhq.hu/pipermail/mplayer-dev-eng/2007-April/051164.html)

Qualification tasks

Add a note if you choose to work on a Qualification task to avoid duplicate work.

Quicktime IMA ADPCM encoder


TIFF encoder


Kamil Nowosad is working on this

Bartlomiej Wolowiec is working on this

Vivo demuxer

Alex Kalouguine is working on this task.

IFF/8SVX 8-bit audio demuxer

Tyler Williams is working on this task.

Port SGI image support to new API


FFmpeg changed image format APIs, but the SGI file format was never ported to the new API.

    • patch pending on ffmpeg-devel

Optimize some code

Do you think some code in FFmpeg could be made to run faster? We always love getting faster decoders and encoders. Note that this will require some assembly (ASM) skills and the use of timer code to benchmark and compare. Please make sure that any new code does not break compilation on other platforms.

Speedups via optimizations (like SIMD for 3DNow, MMX/MMX2, SSE/SSE2/SSE3 and AltiVec) are needed in FFmpeg's:

  • H.264 video decoder.
  • VC-1 video decoder.
  • AAC (LC-AAC and HE-AAC) audio decoder.
  • Run grep -i optimize libavcodec/*.c for more files that need optimization.
    • Andrew Savchenko is working on this

BFI Playback System

Add FFmpeg playback capability for the BFI format. This entails writing a new file demuxer and simple video decoder.

DrV said he was working on this.

THP Playback System


Add FFmpeg playback capability for the THP format. This entails writing a new file demuxer and leveraging existing JPEG and ADPCM decoders to handle the video and audio data inside.

Marco Gerards is working on this (one patch applied, one patch pending)

Bethsoft VID


Add FFmpeg playback capability for the Bethsoft VID format (new demuxer, new video decoder).

patch applied. --Nicholas

Other Game Formats

Several game formats are documented in this Wiki, but not yet implemented in FFmpeg. Investigate via the Category:Game Formats page.

Theora in Matroska


The current Matroska demuxer supports Vorbis but not Theora. Add support for Theora. This requires parsing the Matroska extradata to extract the three header packets and correctly passing them to the Theora decoder. You might also want to read the related thread on the MPlayer list.

David Conrad is working on this

See Also