Difference between revisions of "Talk:FFmpeg Summer Of Code 2007"
Revision as of 08:26, 19 March 2007
Why? It seems too small a task to be done in the course of SoC. --Kostya 09:20, 7 March 2007 (EST)
- True, I'll rework it. --Merbanan 11:44, 7 March 2007 (EST)
- So now it's a qualification task? Is this doable in 40+ hours? I don't think I'll be able to spend much more time until the applications are due. --Gatoatigrado 17:56, 17 March 2007 (EDT)
From what I understand this is the output of most film scanners. Having not used one I'm not 100% sure, and maybe someone could clarify, but my understanding is that film scanners output a collection of these files (I'm unsure whether they're put in a container or not). There are already open source implementations in ImageMagick and GraphicsMagick.
- added some info and links - Compn 20:25, 8 March 2007 (EST)
Should http://thread.gmane.org/gmane.comp.video.ffmpeg.devel/44526 be added to small tasks? Ce 04:41, 8 March 2007 (EST)
- No, I'll fix it eventually, it's not that simple.--Merbanan 06:46, 8 March 2007 (EST)
Is reviewing and fixing up old unapplied FFmpeg patches not a good small task? Compn 20:25, 8 March 2007 (EST)
- I wanted to ask the same question a minute ago;-) The QTRLE patch is not ready yet, and there is no patch for a DNxHD encoder on the mailing list. I originally wrote that someone with better knowledge should of course remove those proposals if he thinks they are bad, but I still think they probably tell something about applicants (and that Baptiste possibly didn't read Merbanan's "rules"). --Ce 20:31, 8 March 2007 (EST)
I see several GSoC project proposals for encoding and decoding a variety of images. Is this truly appropriate for the FFmpeg project, which traditionally focuses on sequences of moving pictures vs. single images? I know that FFmpeg can put a movie together from a sequence of still pictures, or dump a movie into a series of still pictures. Are we hoping to do the same with HDR images? Otherwise, this type of work seems best left to dedicated photo processing projects. --Multimedia Mike 13:36, 9 March 2007 (EST)
- And they are designed to operate with colour formats not currently supported by FFmpeg (like 16 bit per component). But JPEG-2000 and HD-Photo are likely to be used in movies so their support is undoubtedly useful. BTW, how do you plan to use qualification projects? --Kostya 14:09, 9 March 2007 (EST)
- I'm supportive of something like JPEG-2000 (which uses a colorspace FFmpeg does not presently support, IIRC) since I know that there are plans to include that with certain types of movies. I'm not as enthusiastic about graphic formats that are not known to be encoded as sequences of images in a video file. As for the qualifier projects, we are hoping to weed out unqualified applicants by asking that they perform a task from the list. --Multimedia Mike 16:02, 9 March 2007 (EST)
- I was thinking of that exact feature (convert DPX/TIFF/EXR to H.264). Also I wonder if the encoder feature could be used for filmmaking with FFmpeg? E.g. grab from camera straight to an HD image format? Or is this too high-end/specialized/low user count? --Compn 18:52, 9 March 2007 (EST)
- You know me-- low user count is not a legit reason for discounting a feature. :-) If there is a legitimate video-type app for a certain feature I think that makes it more relevant to FFmpeg. --Multimedia Mike 19:39, 9 March 2007 (EST)
- btw I don't think they need to be SoC projects. EXR and DPX are open and have GPL libs (dunno about Microsoft HD Photo). Maybe just a qualifying task or a wishlist item. --Compn 22:53, 9 March 2007 (EST)
I think finishing AAC support would be a good SoC project. The decoder started in 2006 is far from finished and the project appears to be quite complex. Baptiste appears to disagree, can we come to a consensus here? Does anybody know the exact status of the AAC implementation from 2006? -- DonDiego 10:11, 12 March 2007 (EDT)
- I think the LC part is almost complete. Adding HE-AAC(+) features could be an SoC task. --Merbanan 17:05, 12 March 2007 (EDT)
- See comments about adding aacPlus/AAC+ support in the "mp3PRO & aacPlus & MPEG Surround decoders" section below. --Gamester17 05:25, 19 March 2007 (GMT+1)
Is Dirac even finished spec-wise? I'd rather see a decoder for the files in the wild (E-AC-3, AAC, GSM, H.263, Indeo)...
- Their site says it's "essentially complete". Granted, it's not out there in the wild yet, but I think that's mostly a matter of time, so why not get a decoder in now? It's not like libavcodec doesn't already have decoders for a number of other rather obscure formats already. -- Koorogi 21:58, 14 March 2007 (EDT)
I'm currently writing a VP6 encoder. This project wouldn't be a complete duplication of effort (since mine won't be part of libavcodec), but just so you know. --Pengvado 17:04, 15 March 2007 (EDT)
- Why can't you integrate it into libavcodec ? It would be more convenient. What license will it have, GPL ? --Bcoudurier 09:07, 16 March 2007 (EDT)
- It's for a company, and they wanted a standalone library with a proprietary license. I convinced them to dual license with GPL, but couldn't convince them to use libavcodec. So the only way to get it into libavcodec (as more than a wrapper) would be to port it after it's done. --Pengvado 23:42, 17 March 2007 (EDT)
Do we have a mentor yet? Who is able to mentor that (wavelet codec)? Loren? --Bcoudurier 12:28, 16 March 2007 (EDT)
- I filed an application and if it will be accepted any mentor would do. --Kostya 15:24, 16 March 2007 (EDT)
mp3PRO & aacPlus & MPEG Surround decoders
Suggest adding LGPL'ed SBR (Spectral Band Replication) & PS (Parametric Stereo) decoders as SoC projects (maybe tied in with HE-AAC?). SBR is used by mp3PRO, SBR and PS are used by aacPlus, and "MPEG Surround" technology uses principles from both, so common code could probably be created and shared among those; see the breakdown below. --Gamester17 05:20, 19 March 2007 (GMT+1)
- mp3PRO decoder (Note: mp3PRO is MP3 + SBR. Standard MP3 decoders can decode mp3PRO encoded files/streams, but without SBR you do not get the full quality. By adding an SBR decoder to FFmpeg and coupling it with the existing MP3 decoder you could play back mp3PRO at full quality. An SBR decoder could be shared with an aacPlus decoder, as aacPlus also uses SBR).
- aacPlus (a.k.a. AAC+) decoder (Note: aacPlus v1 is AAC + SBR (i.e. HE-AAC), and aacPlus v2 is AAC + SBR + PS. Standard AAC decoders can decode aacPlus encoded files/streams, but without SBR and PS you do not get the full quality. By adding SBR and PS decoders to FFmpeg and coupling them with the existing AAC decoder you could play back aacPlus at full quality. An SBR decoder could be shared with an mp3PRO decoder, as mp3PRO also uses SBR).
- MPEG Surround decoder/parser (for all audio, but especially MP3/mp3PRO and AAC/aacPlus as those are in use today). MPEG Surround technology shares similar characteristics with SBR (Spectral Band Replication) and PS (Parametric Stereo), which mp3PRO and aacPlus also use, so if SBR and PS decoders were added to FFmpeg they could probably share common code with a MPEG Surround decoder/parser. (DivX Inc. is one company that uses MPEG Surround technology to achieve 5.1 channel surround sound in smaller files.)
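The sharing argument above can be sketched as a simple module map. This is a hypothetical illustration only (the format/module names are mine, not FFmpeg APIs): each format decomposes into a base decoder plus optional extension modules, and the overlap between two formats is exactly the code they could share.

```python
# Hypothetical decomposition of the formats discussed above into decoder
# building blocks. Module names are illustrative, not real FFmpeg components.
MODULES = {
    "mp3PRO":        ["mp3", "sbr"],
    "aacPlus v1":    ["aac", "sbr"],
    "aacPlus v2":    ["aac", "sbr", "ps"],
    "MPEG Surround": ["aac", "mpeg_surround"],
}

def shared_modules(fmt_a, fmt_b):
    """Return the decoder modules two formats could share, sorted by name."""
    return sorted(set(MODULES[fmt_a]) & set(MODULES[fmt_b]))
```

For example, `shared_modules("mp3PRO", "aacPlus v2")` yields `["sbr"]`: a single SBR implementation would serve both formats, which is the point of the proposal.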
libsubs (a common 'subtitles' library)
Suggest adding some sort of "Subtitles" project(s) as sub-project(s) under FFmpeg SoC this year, as one LGPL library from FFmpeg would be a very nice way to add complete subtitle parsing/display support to your player, and a common developer place for future additions --Gamester17 05:35, 19 March 2007 (GMT+1)
- Create a common 'subtitles parser library' (and/or API for adding support for additional subtitle formats?) - one sub-library in FFmpeg with all subtitle decoders/demuxers/parsers gathered (similar to libpostproc and libavutil). Call it "libsubs" (or "libsub", "libsubtitles" or whatever). Move FFmpeg's existing VobSub and DVBsub code there, so that whether they are bitmap or text-based subs, all existing and future subtitle code is collected in one place. This will help reduce future code replication by sharing common code, thus making it easier to add support for additional subtitle formats.
- Maybe use MPlayer's recently added "libass" (SSA/ASS subtitle renderer) as a base for such a common library?
- Closed captioning (CC) subtitle support (closed captions for the deaf and hard of hearing, also known as "Line 21 captioning")
- DirectVobSub (VSFilter) - standard VobSubs (DVD-Video subtitles) embedded in AVI containers
- DivX Subtitles (XSUB) display/reader/decoder (Note: bitmap-based subtitle, similar to VobSub)
- SubRip (.srt) subtitle support (Note: simple text-based subtitle with timestamps)
- SubViewer (.sub) subtitle support (Note: simple text-based subtitle with timestamps)
- MicroDVD (.sub) subtitle support (Note: simple text-based subtitle with frame-based timing)
- SAMI (.smi) subtitle support (Note: simple text-based subtitle with timestamps)
- SubStation Alpha (.ssa/.ass) subtitle support (Note: advanced text-based subtitle with timestamps and on-screen X/Y positioning)
- RealText (.rt) subtitle support
- PowerDivX (.psb) subtitle support
- Universal Subtitle Format (.usf) subtitle support
- Structured Subtitle Format (.ssf) subtitle support
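To show how simple the text-based formats in the list above are, here is a minimal sketch (mine, not FFmpeg or MPlayer code) that parses one SubRip (.srt) cue; the function names and regex are illustrative:

```python
import re

# Matches an SRT timestamp such as "00:01:02,500" (hours:minutes:seconds,ms).
SRT_TIME = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def srt_time_to_ms(ts):
    """Convert an SRT timestamp like '00:01:02,500' to milliseconds."""
    h, m, s, ms = map(int, SRT_TIME.match(ts).groups())
    return ((h * 60 + m) * 60 + s) * 1000 + ms

def parse_srt_cue(block):
    """Parse one SRT cue: an index line, a timing line, then the text lines.
    Returns (start_ms, end_ms, text)."""
    lines = block.strip().splitlines()
    start, end = lines[1].split(" --> ")
    return srt_time_to_ms(start), srt_time_to_ms(end), "\n".join(lines[2:])
```

A shared "libsubs" would wrap many such per-format parsers behind one API, which is the code-sharing point being argued for.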
libstream (a common 'stream' library)
Suggest adding some sort of "Streaming protocol" project(s) as sub-project(s) under FFmpeg SoC this year, as one LGPL library from FFmpeg would be a very nice way to add complete streaming support to your player for all formats FFmpeg can play back, and such a common library in FFmpeg could be a great developer place for future additions, sharing common code --Gamester17 06:30, 19 March 2007 (GMT+1)
- Create a common 'stream demuxer/parser library' (and/or API for adding support for additional streaming protocols?) - an LGPL'ed sub-library in FFmpeg with all stream demuxers/parsers gathered (similar to libpostproc and libavutil). Call it "libstream" (or "stream" or whatever). Move FFmpeg's existing streaming code, like HTTP and RTSP/RTP, there. This will help reduce future code replication by sharing common code, thus making it easier to add support for additional streaming protocols. All together, this would make it very easy for audio/video players using FFmpeg to add all-in-one streaming support.
- Maybe use MPlayer's "stream" library, LIVE555, or probably the better libnms (from NeMeSi) as a base for such a common library?
- HTTP (Hypertext Transfer Protocol)
- UDP (User Datagram Protocol)
- RTSP - Real-Time Streaming Protocol (RFC2326)
- RTP/RTCP - Real-Time Transport Protocol/RTP Control Protocol (RFC3550)
- RTP Profile for Audio and Video Conferences with Minimal Control (RFC3551)
- RealMedia RTSP/RDT (Real Time Streaming Protocol / Real Data Transport) proprietary transport protocol developed by RealNetworks to stream RealVideo/RealAudio
- SDP (Session Description Protocol) / SSDP (Simple Service Discovery Protocol)
- MMS (Microsoft Media Services)
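As a feel for what a "libstream" client for the protocols listed above would do first, here is a hedged sketch of formatting an RTSP DESCRIBE request (which asks the server for the SDP description of a stream, per RFC 2326). This is purely illustrative; real code would also open a socket and parse the reply.

```python
def rtsp_describe(url, cseq=1):
    """Format an RTSP DESCRIBE request per RFC 2326.
    RTSP is text-based like HTTP, with CRLF line endings and a blank
    line terminating the header block."""
    lines = [
        f"DESCRIBE {url} RTSP/1.0",
        f"CSeq: {cseq}",              # sequence number echoed by the server
        "Accept: application/sdp",    # we want an SDP session description back
        "",                           # blank line ends the request
        "",
    ]
    return "\r\n".join(lines)
```

The server's `200 OK` reply carries an SDP body describing the media streams, which is then used to set up RTP transport - hence SDP and RTP/RTCP appearing alongside RTSP in the list above.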
Long forgotten, but relevant more than ever...
The FFServer code hasn't been updated for quite a while, and its streaming to clients like WMP 9/10/11 doesn't seem to work. Furthermore, I think it would be a good idea to finally have some good MMS server streaming support in FFServer. You'll be quite amazed to find that there are almost NO solutions for streaming with the MMS protocol on Linux.