FFmpeg Wishlist


Revision as of 22:24, 18 March 2007

A temporary FFmpeg wish/todo list:

Decoders

Demuxers

  • an XMV demuxer; see http://thread.gmane.org/gmane.comp.video.ffmpeg.devel/25207/focus=25224 and http://www.maxconsole.net/?mode=news&newsid=411 for hints, and also http://sourceforge.net/tracker/index.php?func=detail&aid=1097094&group_id=53761&atid=471491 (reportedly a working demuxer/decoder)
  • an AMV demuxer; the creator can be reached via http://scrub50187.com/, and Wikipedia also has an article on the format: http://en.wikipedia.org/wiki/AMV_video_format

libstream (a common 'stream' library)

  • Create a common 'stream demuxer/parser library' (and/or an API for adding support for additional streaming formats) - an LGPL'ed sub-library in FFmpeg that gathers all stream demuxers/parsers (similar to libpostproc and libavutil). Call it "libstream" (or "stream", or whatever). Move FFmpeg's existing stream code, such as HTTP and RTSP/RTP, there. Sharing common code would reduce future code duplication, make it easier to add support for additional streaming formats, and make it simple for audio/video players that use FFmpeg to get all-in-one streaming support (see the sketch after this list).
    • Maybe use either MPlayer's "stream" library, LIVE555 (http://www.live555.com), or probably the better libnms from NeMeSi (http://streaming.polito.it/client/library) as a base for such a common library?
    • HTTP (Hypertext Transfer Protocol)
    • UDP (User Datagram Protocol)
    • RTSP - Real-Time Streaming Protocol (RFC2326)
    • RTP/RTCP - Real-Time Transport Protocol/RTP Control Protocol (RFC3550)
    • RTP Profile for Audio and Video Conferences with Minimal Control (RFC3551)
    • RealMedia RTSP/RDT (Real Time Streaming Protocol / Real Data Transport) proprietary transport protocol developed by RealNetworks to stream RealVideo/RealAudio
    • SDP (Session Description Protocol) / SSDP (Simple Service Discovery Protocol)
    • MMS (Microsoft Media Services)
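
To make the idea concrete, here is a minimal sketch of what a shared 'libstream' protocol interface could look like: a small open/read/close vtable plus a registration table, so every backend (HTTP, UDP, RTSP, MMS, ...) is reached through the same code path. The names (StreamProtocol, stream_open, file_proto) and the design are assumptions made up for this sketch, not an existing FFmpeg API; a trivial file:// backend is included only so the example is self-contained and runnable.

/* Hypothetical "libstream" interface sketch -- not FFmpeg's real API. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Stream Stream;

typedef struct StreamProtocol {
    const char *scheme;                       /* e.g. "http", "udp", "rtsp", "file" */
    int  (*open)(Stream *s, const char *url); /* returns 0 on success */
    int  (*read)(Stream *s, void *buf, int size);
    void (*close)(Stream *s);
} StreamProtocol;

struct Stream {
    const StreamProtocol *proto;
    void *priv;                               /* backend-private state */
};

/* --- trivial file:// backend, so the sketch is runnable ------------------ */
static int file_open(Stream *s, const char *url)
{
    FILE *f = fopen(url + strlen("file://"), "rb");
    if (!f)
        return -1;
    s->priv = f;
    return 0;
}
static int file_read(Stream *s, void *buf, int size)
{
    return (int)fread(buf, 1, size, (FILE *)s->priv);
}
static void file_close(Stream *s)
{
    fclose((FILE *)s->priv);
}
static const StreamProtocol file_proto = { "file", file_open, file_read, file_close };

/* Registered backends; a real library would add http, udp, rtsp, mms, ... */
static const StreamProtocol *protocols[] = { &file_proto, NULL };

static Stream *stream_open(const char *url)
{
    for (int i = 0; protocols[i]; i++) {
        size_t n = strlen(protocols[i]->scheme);
        if (!strncmp(url, protocols[i]->scheme, n) && !strncmp(url + n, "://", 3)) {
            Stream *s = calloc(1, sizeof(*s));
            s->proto = protocols[i];
            if (s->proto->open(s, url) < 0) { free(s); return NULL; }
            return s;
        }
    }
    return NULL; /* unknown scheme */
}

int main(void)
{
    Stream *s = stream_open("file:///etc/hostname"); /* any readable file works */
    if (!s)
        return 1;
    char buf[64];
    int n = s->proto->read(s, buf, (int)sizeof(buf) - 1);
    buf[n > 0 ? n : 0] = '\0';
    printf("read %d bytes: %s\n", n, buf);
    s->proto->close(s);
    free(s);
    return 0;
}

Keeping the per-protocol state behind an opaque priv pointer is what would let demuxers stay ignorant of whether bytes arrive from a file, a UDP socket or an RTSP session.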

Features

Subtitles

  • Create a common 'subtitles parser library' (and/or an API for adding support for additional subtitle formats) - a common sub-library in FFmpeg that gathers all subtitle decoders/demuxers/parsers (similar to libpostproc and libavutil). Call it "libsubs" (or "libsub", "libsubtitles" or whatever). Move FFmpeg's existing VobSub and DVBsub code there, so that all existing and future subtitle code, bitmap or text-based, is collected in one place. Sharing common code would reduce future code duplication and make it easier to add support for additional subtitle formats (see the SubRip parsing sketch after this list).
    • Maybe use MPlayer's recently added "libass" (SSA/ASS subtitle renderer) as a base for such a common library?
  • Closed captioning (CC) subtitle support - closed captions for the deaf and hard of hearing, also known as "Line 21 captioning" (uses VobSub bitmaps)
  • DirectVobSub (VSFilter) - standard VobSubs (DVD-Video subtitles) embedded in AVI containers
  • DivX Subtitles (XSUB) display/reader/decoder (Note: bitmap-based subtitle, similar to VobSub)
  • SubRip (.srt) subtitle support (Note: simple text-based subtitle with timestamps)
  • SubViewer (.sub) subtitle support (Note: simple text-based subtitle with timestamps)
  • MicroDVD (.sub) subtitle support (Note: simple text-based subtitle with timestamps)
  • SAMI (.smi) subtitle support (Note: simple text-based subtitle with timestamps)
  • SubStation Alpha (.ssa/.ass) subtitle support (Note: advanced text-based subtitle with timestamps and X/Y position on screen)
  • RealText (.rt) subtitle support
  • PowerDivX (.psb) subtitle support
  • Universal Subtitle Format (.usf) subtitle support
  • Structured Subtitle Format (.ssf) subtitle support
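
As an illustration of the kind of small parser such a library would collect, here is a minimal sketch that parses a single SubRip (.srt) cue - index line, "HH:MM:SS,mmm --> HH:MM:SS,mmm" timing line, then text lines - into start/end times in milliseconds. It is only a sketch that assumes well-formed input; it is not FFmpeg code, and the function names (parse_srt_time, parse_srt_cue) are made up for the example.

/* Minimal SubRip (.srt) cue parser sketch -- illustration only, not FFmpeg code. */
#include <stdio.h>
#include <string.h>

/* Parse "HH:MM:SS,mmm" into milliseconds; returns -1 on malformed input. */
static long parse_srt_time(const char *s)
{
    int h, m, sec, ms;
    if (sscanf(s, "%d:%d:%d,%d", &h, &m, &sec, &ms) != 4)
        return -1;
    return ((h * 60L + m) * 60L + sec) * 1000L + ms;
}

/* Parse one cue from a string; prints index, times and text lines. */
static int parse_srt_cue(const char *cue)
{
    int index;
    char timing[64];
    /* first line: cue index; second line: "start --> end" */
    if (sscanf(cue, "%d\n%63[^\n]", &index, timing) != 2)
        return -1;
    char *arrow = strstr(timing, "-->");
    if (!arrow)
        return -1;
    long start = parse_srt_time(timing);
    long end   = parse_srt_time(arrow + 3 + strspn(arrow + 3, " "));
    if (start < 0 || end < 0)
        return -1;
    printf("cue %d: %ld ms -> %ld ms\n", index, start, end);
    /* the remaining lines are the subtitle text */
    const char *text = strchr(strchr(cue, '\n') + 1, '\n');
    if (text)
        printf("text:%s\n", text);
    return 0;
}

int main(void)
{
    const char *sample =
        "1\n"
        "00:00:01,500 --> 00:00:04,000\n"
        "Hello, world!\n"
        "Second line of the cue.\n";
    return parse_srt_cue(sample) ? 1 : 0;
}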

Misc

Snow

  • multiple reference frames improvements
    • decide which frames to keep (e.g. long-term refs)
    • some changes to the mv prediction code
  • non-translational motion compensation (see the warp sketch after this list)
    • estimate non-translational parameters per block by using surrounding motion vectors
    • add an AC-coded bit per block to switch between translational and non-translational MC
    • borrow the non-translational MC code from libmpcodecs/vf_perspective.c
    • some changes to the encoder to decide between translational and non-translational MC
  • Trellis quantization (select quantized coefficients so as to minimize the rate-distortion cost; see the sketch after this list)
  • 4x4 sized block support (we have 16x16 and 8x8 currently)
  • 1/8-pel motion compensation / estimation support (pretty much only encoder changes needed, which in the case of the iterative motion estimation should be trivial)
  • improve the intra color decision
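
A minimal sketch of per-block non-translational (affine) motion compensation, in the spirit of the perspective warp in libmpcodecs/vf_perspective.c: each destination pixel is mapped through an affine transform into the reference frame and bilinearly interpolated. The block size, the affine parameters and the function names (affine_mc_block, bilinear) are invented for the illustration; Snow's actual code would differ.

/* Per-block affine motion compensation sketch -- illustration only. */
#include <stdio.h>
#include <stdint.h>

#define W 64
#define H 64
#define B 8   /* block size */

/* Bilinear sample from the reference frame at fractional position (x, y). */
static uint8_t bilinear(uint8_t ref[H][W], double x, double y)
{
    int ix = (int)x, iy = (int)y;
    if (ix < 0) ix = 0;
    if (ix > W - 2) ix = W - 2;
    if (iy < 0) iy = 0;
    if (iy > H - 2) iy = H - 2;
    double fx = x - ix, fy = y - iy;
    if (fx < 0) fx = 0;
    if (fx > 1) fx = 1;
    if (fy < 0) fy = 0;
    if (fy > 1) fy = 1;
    double v = (1 - fy) * ((1 - fx) * ref[iy][ix]     + fx * ref[iy][ix + 1]) +
               fy       * ((1 - fx) * ref[iy + 1][ix] + fx * ref[iy + 1][ix + 1]);
    return (uint8_t)(v + 0.5);
}

/*
 * Predict one BxB block at (bx, by) from the reference frame using the
 * affine map  src = A * (x, y) + (bx + mvx, by + mvy).
 * With a = d = 1 and b = c = 0 this degenerates to plain translational MC.
 */
static void affine_mc_block(uint8_t dst[H][W], uint8_t ref[H][W],
                            int bx, int by,
                            double a, double b, double c, double d,
                            double mvx, double mvy)
{
    for (int y = 0; y < B; y++)
        for (int x = 0; x < B; x++) {
            double sx = a * x + b * y + bx + mvx;
            double sy = c * x + d * y + by + mvy;
            dst[by + y][bx + x] = bilinear(ref, sx, sy);
        }
}

int main(void)
{
    static uint8_t ref[H][W], dst[H][W];
    /* synthetic reference frame: a simple gradient */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            ref[y][x] = (uint8_t)(2 * x + y);

    /* slight zoom plus rotation plus a (3.5, 1.25) motion vector for one block */
    affine_mc_block(dst, ref, 16, 16, 1.02, 0.05, -0.05, 1.02, 3.5, 1.25);
    printf("predicted block corner samples: %d %d %d\n",
           dst[16][16], dst[16][23], dst[23][23]);
    return 0;
}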
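
And a toy sketch of the rate-distortion idea behind trellis quantization: instead of always rounding each transform coefficient to the nearest level, the encoder also considers the level below it and keeps whichever choice minimizes distortion + lambda * rate. The rate is crudely approximated here by the level's magnitude; a real trellis would track dependencies between coefficients and use the actual entropy-coder costs. All names and the cost model are invented for the illustration.

/* Toy rate-distortion quantization sketch -- not Snow's actual trellis. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Crude rate model: larger levels cost more bits; zero is nearly free. */
static double rate_bits(int level)
{
    return level == 0 ? 0.5 : 2.0 + log2(1.0 + abs(level));
}

/* Cost of representing coefficient `coeff` with quantized `level`. */
static double rd_cost(double coeff, int level, int q, double lambda)
{
    double err = coeff - level * q;      /* reconstruction error */
    return err * err + lambda * rate_bits(level);
}

/* Choose, per coefficient, between the nearest level and the level below it. */
static int quantize_rd(double coeff, int q, double lambda)
{
    int near = (int)lround(coeff / q);
    int low  = near > 0 ? near - 1 : (near < 0 ? near + 1 : 0);
    return rd_cost(coeff, near, q, lambda) <= rd_cost(coeff, low, q, lambda)
               ? near : low;
}

int main(void)
{
    const double coeffs[] = { 183.0, 47.0, -21.0, 9.0, -5.0, 3.0, 1.0, 0.4 };
    const int    q        = 16;
    const double lambda   = 60.0;        /* larger lambda favours fewer bits */

    for (size_t i = 0; i < sizeof(coeffs) / sizeof(coeffs[0]); i++) {
        int naive = (int)lround(coeffs[i] / q);
        int rd    = quantize_rd(coeffs[i], q, lambda);
        printf("coeff %7.1f  nearest level %2d  RD level %2d\n",
               coeffs[i], naive, rd);
    }
    return 0;
}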


A/V Filters