Difference between revisions of "FFmpeg Summer of Code 2014"

From MultimediaWiki
This page is on wiki.multimedia.cx due to trac.ffmpeg.org being down for maintenance. It might be moved back later.
'''This page has been moved.''' See https://trac.ffmpeg.org/wiki/FFmpegSummerOfCode2014
= Introduction =
FFmpeg is the universal multimedia toolkit: a complete, cross-platform solution to record, convert, filter and stream audio and video. It includes libavcodec - the leading audio/video codec library.
[https://developers.google.com/open-source/soc/ Google Summer of Code (GSoC)] is a program that offers students stipends to write code for open source projects. Through the guidance of mentors, students gain valuable experience interacting with and coding for open source projects like FFmpeg. Additionally, the project and its users benefit from code created by students, who often continue contributing as developers. FFmpeg participated in several past editions ([[FFmpeg Summer Of Code 2006|2006]], [[FFmpeg Summer Of Code 2007|2007]], [[FFmpeg Summer Of Code 2008|2008]], [[FFmpeg Summer Of Code 2009|2009]], [[FFmpeg Summer Of Code 2010|2010]], and [[FFmpeg / Libav Summer Of Code 2011|2011]]), and we are looking forward to being involved this year.
This is our ideas page for [http://www.google-melange.com/gsoc/homepage/google/gsoc2014 Google Summer of Code 2014]. See the [http://www.google-melange.com/gsoc/document/show/gsoc_program/google/gsoc2014/help_page#2._What_is_the_program_timeline GSoC Timeline] for important dates.
== Information for Students ==
=== Getting Started ===
# '''Get to know FFmpeg.''' If you are a student and interested in contributing to an FFmpeg GSoC project it is recommended to start by subscribing to the [http://ffmpeg.org/mailman/listinfo/ffmpeg-devel ffmpeg-devel] mailing-list, visiting our IRC channels (''#ffmpeg-devel'' and ''#ffmpeg''), and exploring the codebase and the development workflow. Feel free to [[#Contacting_FFmpeg|contact us]] if you have any questions.
# '''Find a project.''' Listed on this page are mentored and unmentored projects. Mentored projects are well-defined and mentor(s) have already volunteered. Unmentored projects are additional ideas that you may consider, but you will have to contact us to find a mentor. You may also propose your own project that may be a better match for your interest and skill level. If a project description is unclear or you have any questions, do not hesitate to contact its mentor or admin.
# '''Contact us.''' If you find a project that you are interested in then get in touch with the community and let us know. In case you want to work on a qualification task, you should ask the respective mentor(s) so that the task can be claimed.
# '''Apply.''' The student proposal period begins 10 March 19:00 UTC and ends 21 March 19:00 UTC. See the [http://www.google-melange.com/gsoc/document/show/gsoc_program/google/gsoc2014/help_page#2._What_is_the_program_timeline GSoC timeline] for additional information.
=== Qualification Tasks ===
In order to get accepted you will be requested to complete a small task in the area to which you want to contribute. FFmpeg GSoC projects can be challenging, and a qualification task will show us that you are motivated and have the potential to successfully finish a project.
The qualification task is usually shown in the project description. Contact the respective mentor(s) for assistance on getting a related qualification task or if you want to propose your own. You can also browse the [https://trac.ffmpeg.org FFmpeg Bug Tracker] for qualification task ideas.
=== Contacting FFmpeg ===
If you have questions or comments feel free to contact us via our mailing list, IRC channel, or e-mail one of the FFmpeg GSoC admins:
* '''Mailing-list:''' [http://ffmpeg.org/mailman/listinfo/ffmpeg-devel ffmpeg-devel]
* '''IRC:''' ''#ffmpeg-devel'' on Freenode
* '''FFmpeg GSoC Admins:''' TBA
You can also contact a mentor directly if you have questions specifically related to one of the projects listed on this page.
= Mentored Projects =
This section lists well-defined projects that have one or more available mentors. If you are new to FFmpeg, and have relatively little experience with multimedia, you should favor a mentored project rather than propose your own. Contact the respective mentor(s) to get more information about the project and the requested qualification task.
== H.264 Multiview Video Coding (MVC) ==
<div class="floatright">[[Image:Mmspg-epfl-ch-double-camera.jpg]]</div>
MVC is used on 3D Blu-ray discs, but FFmpeg is missing a decoder that supports it. The goal of this project is to add support for MVC and thus for 3D Blu-rays.
Since this project also involves some changes to the current architecture, it is especially important that this project is discussed on the [http://ffmpeg.org/mailman/listinfo/ffmpeg-devel ffmpeg-devel mailing list]. There also exist a [http://article.gmane.org/gmane.comp.video.ffmpeg.devel/174155 patch] and a [https://github.com/kodabb/libav/commits/MVC_orig_clean git branch] which are in rather bad shape but could be used as a basis for this project.
'''Expected results:''' Create MVC decoder and add a test for the FFmpeg Automated Testing Environment (FATE).
'''Prerequisites:''' C coding skills, basic familiarity with git, understanding of H.264.
'''Qualification Task:''' Perform work that demonstrates understanding of MVC and that is a subpart of the whole MVC implementation.
'''Mentor:''' TBA, possibly [[User:Michael|Michael Niedermayer]] (''michaelni'' in #ffmpeg-devel on Freenode IRC)
'''Backup mentor:''' TBA, possibly Kieran Kunhya (''kierank'' in #ffmpeg-devel on Freenode IRC)
== Animated Portable Network Graphics (APNG) ==
<div class="floatright">[[Image:Animated PNG example bouncing beach ball.png]]</div>
'''Description:''' FFmpeg currently does not support Animated PNGs; the goal of this project is to change that and add support. The little bouncing ball animation shown to the right is such an APNG file.
'''Specification:''' https://wiki.mozilla.org/APNG_Specification
'''Expected results:'''
* APNG demuxer
** implement robust probing:
*** PNG images are not misdetected as APNG animations
*** APNG animations are not misdetected as PNG images
** split the stream into sensible packets (so they can be easily reused in the APNG muxer)
** survives fuzzing (zzuf)
** add FATE coverage, coverage should be at least 70%
** test code under valgrind so no invalid reads/writes happen
* APNG decoder
** use existing PNG decoder code (write decoder in same file)
** implement parsing of all APNG chunks (acTL, fcTL, fdAT)
** error handling
** survives fuzzing (zzuf)
** add test for FATE, coverage should be at least 75%
** CRC checksum validation
** test code under valgrind so no invalid reads/writes happen
* APNG muxer and APNG encoder
** use existing PNG encoder code (write encoder in same file)
** write compliant files, make sure they play correctly in major web browsers that support APNG
** add test for FATE
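The probing requirement above hinges on one detail of the format: an APNG file is a valid PNG file whose <code>acTL</code> (animation control) chunk appears before the first <code>IDAT</code>. A minimal sketch of that check, written from the published chunk layout (4-byte big-endian length, 4-byte type, payload, 4-byte CRC) and not based on any existing FFmpeg code:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Read a 32-bit big-endian value, as PNG chunk lengths are stored. */
static uint32_t rb32(const uint8_t *p)
{
    return (uint32_t)p[0] << 24 | p[1] << 16 | p[2] << 8 | p[3];
}

/* Returns 1 for APNG, 0 for plain PNG, -1 for neither.
 * A real demuxer probe would also bound-check chunk lengths against
 * malicious input; this sketch only walks well-formed chunk headers. */
int probe_apng(const uint8_t *buf, size_t size)
{
    static const uint8_t sig[8] = {0x89,'P','N','G','\r','\n',0x1a,'\n'};
    size_t pos = 8;

    if (size < 8 || memcmp(buf, sig, 8))
        return -1;
    while (pos + 8 <= size) {
        uint32_t len = rb32(buf + pos);
        if (!memcmp(buf + pos + 4, "acTL", 4))
            return 1;               /* animation control seen first: APNG */
        if (!memcmp(buf + pos + 4, "IDAT", 4))
            return 0;               /* image data before acTL: plain PNG  */
        pos += 8 + len + 4;         /* skip header, payload and CRC       */
    }
    return 0;
}
```

The same walk is what lets the demuxer split the stream into per-frame packets later: <code>fcTL</code>/<code>fdAT</code> chunk boundaries are found exactly this way.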
'''Prerequisites:''' C coding skills, basic familiarity with git.
'''Qualification Task:''' Implement format autodetection for imagepipe and image demuxer.
'''Mentor:''' [[User:Pbm|Paul B Mahol]] (''durandal_1707'' in #ffmpeg-devel on Freenode IRC)
'''Backup mentor:''' [[User:Suxen_drol|Peter Ross]] (''pross-au'' in #ffmpeg-devel on Freenode IRC)
== FFv1 P frame support ==
'''Description:''' FFv1 is one of the most efficient intra-only lossless video codecs. Your work will be to add support for P frames with motion compensation and motion estimation support (the existing motion estimation code in libavcodec can be reused here). Then fine-tune it until the best compression rate is achieved. This will make FFv1 competitive with existing I+P frame lossless codecs like lossless H.264.
'''Expected results:''' State of the art P frame support in the FFv1 encoder and decoder implementation.
'''Prerequisites:''' C coding skills, basic familiarity with git, solid understanding of video coding especially with motion compensation.
'''Qualification Task:''' Implement support for simple P frames without motion compensation in FFv1, so that each frame stores the difference from the previous frame.
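The qualification task reduces to per-pixel frame differencing. A sketch of the idea, independent of FFv1's actual bitstream and entropy coding (the real work is feeding such residuals through FFv1's range coder):

```c
#include <stdint.h>
#include <stddef.h>

/* With motion compensation disabled, a P frame can store the per-pixel
 * difference from the previous frame. 8-bit samples wrap modulo 256,
 * so encode and decode are exact inverses and the scheme is lossless. */
void pframe_encode(uint8_t *residual, const uint8_t *cur,
                   const uint8_t *prev, size_t n)
{
    for (size_t i = 0; i < n; i++)
        residual[i] = (uint8_t)(cur[i] - prev[i]);
}

void pframe_decode(uint8_t *cur, const uint8_t *residual,
                   const uint8_t *prev, size_t n)
{
    for (size_t i = 0; i < n; i++)
        cur[i] = (uint8_t)(prev[i] + residual[i]);
}
```

For static content the residual is mostly zero, which is exactly what makes the entropy coder's job easy and the compression rate improve.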
'''Mentor:''' [[User:Michael|Michael Niedermayer]] (''michaelni'' in #ffmpeg-devel on Freenode IRC)
'''Backup mentor:''' TBA
== Misc Libavfilter extension ==
<div class="floatright">[[Image:Lavfi-gsoc-filter-vintage-illustration.jpg]]</div>
'''Description:''' Libavfilter is the FFmpeg filtering library. It currently supports audio and video filtering and generation. This work may focus on porting, fixing, extending, or writing new audio and video filters from scratch.
Candidate filters for porting may be the remaining MPlayer filters currently supported through the mp wrapper, libaf MPlayer filters, and filters from other frameworks (e.g. mjpegtools, transcode, avisynth, virtualdub, etc.). In case of mp ports, the student should verify that the new filter produces the same output and is not slower.
Some ideas for more filters:
* a frequency-domain filter relying on the FFT utilities in libavcodec
* a controller filter which allows sending commands to other filters (e.g. to adjust volume, contrast, etc.), like the sendcmd filter but through an interactive GUI
* a Lua scripting filter, which allows implementing custom filtering logic in Lua
For more ideas check [https://trac.ffmpeg.org/query?status=new&status=open&status=reopened&component=avfilter&col=id&col=summary&col=status&col=type&col=priority&col=component&col=version&order=priority trac libavfilter tickets].
'''Expected results:''' Write or port audio and video filters and possibly fix/extend libavfilter API and design when required.
'''Prerequisites:''' C coding skills, basic familiarity with git. Some background on DSP and image/sound processing techniques would be a bonus but is not strictly required.
'''Qualification Task:''' write or port one or more filters
'''Mentor:''' TBA, possibly [[User:Stefanosa|Stefano Sabatini]] (''saste'' in #ffmpeg-devel on Freenode IRC)
'''Backup mentor:''' [[User:Ubitux|Clément Bœsch]] (''ubitux'' in #ffmpeg-devel on Freenode IRC)
== Subtitles ==
'''Description:''' FFmpeg has recently been working on improving its subtitles support, notably by adding support for various text subtitle formats and various hardsubbing (burning the subtitles onto the video) facilities. While the theme may sound relatively simple compared to audio/video signal processing, the project carries a historical burden that is not easy to deal with, and it introduces various issues very specific to the sparse nature of subtitle streams.
'''Expected results:'''
* Add support for new subtitles formats. Example: a demuxer for .SUP files, just like VobSub but for Blu-Ray, or a VobSub muxer.
* Improve text subtitles decoders. Typically, this can be supporting advanced markup features in SAMI or WebVTT.
* Update the API to get rid of the clumsy internal text representation of styles
* Proper integration of subtitles into libavfilter. This is the ultimate goal, as it will notably allow a complete subtitles rendering for applications such as ffplay.
* BONUS: if everything goes well, the student will be allowed to add basic support for teletext
'''Prerequisites:''' C coding skills, basic familiarity with git. Some background in fansubbing area (notably ASS experience) would be a bonus but is not strictly required.
'''Qualification Task:''' Write one subtitles demuxer and decoder (for example, support for the Spruce subtitles format), in order to make sure the subtitles chain is understood.
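The first step of almost any text subtitles demuxer is turning a textual timestamp into an integer timebase. A generic sketch (the <code>HH:MM:SS,mmm</code> form below is SRT-style and chosen only for illustration; a real libavformat demuxer would emit AVPackets with pts/duration in the stream's timebase):

```c
#include <stdio.h>

/* Parse an "HH:MM:SS,mmm" timestamp into milliseconds.
 * Returns 0 on success, -1 if the string does not match. */
int parse_sub_timestamp(const char *s, long *ms)
{
    int h, m, sec, msec;

    if (sscanf(s, "%d:%d:%d,%d", &h, &m, &sec, &msec) != 4)
        return -1;
    *ms = ((h * 60L + m) * 60 + sec) * 1000 + msec;
    return 0;
}
```

The decoder side then converts the packet payload into the internal styled-text representation, which is precisely the part of the chain this project wants to clean up.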
'''Mentor:''' [[User:Ubitux|Clément Bœsch]] (''ubitux'' in #ffmpeg-devel on Freenode IRC)
'''Backup Mentor:''' TBA, possibly Nicolas George (''Cigaes'' in #ffmpeg-devel on Freenode IRC)
== Postproc optimizations ==
<div class="floatright">[[Image:PostProc.jpg]]</div>
'''Description:''' FFmpeg contains libpostproc, which is used to postprocess 8x8 DCT-MC based video and images (JPEG, MPEG-1/2/4, and H.263 among others). Postprocessing removes blocking (and other) artifacts from low bitrate / low quality images and videos. The code, however, was written a long time ago, and its SIMD optimizations need to be updated to what modern CPUs support (SSE2+ and AVX2).
'''Expected results:'''
* Convert all gcc inline asm in libpostproc to YASM.
* Restructure the code so that it works with block sizes compatible with modern SIMD.
* Add Integer SSE2 and AVX2 optimizations for each existing MMX/MMX2/3dnow optimization in libpostproc.
'''Prerequisites:''' C coding skills, good x86 assembly coding skills, basic familiarity with git.
'''Qualification Task:''' convert 1 or 2 MMX2 functions to SSE2 and AVX2.
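To make the conversion work concrete, here is an illustrative C reference of the kind of kernel libpostproc vectorizes: a simple low-pass across a vertical block edge. This is not libpostproc's actual filter, just a stand-in showing the per-row, fixed-offset memory pattern that maps naturally onto MMX2/SSE2/AVX2 registers:

```c
#include <stdint.h>

/* Soften the two pixels on either side of a vertical 8x8 block edge.
 * 'row' points at the first pixel right of the edge; pixels at offsets
 * -2..1 straddle the boundary. The (x + 2) >> 2 form rounds to nearest,
 * the usual idiom in integer SIMD code. */
void soften_vertical_edge(uint8_t *row, int stride, int height)
{
    for (int y = 0; y < height; y++, row += stride) {
        int a = row[-2], b = row[-1], c = row[0], d = row[1];
        row[-1] = (uint8_t)((a + 2 * b + c + 2) >> 2);
        row[0]  = (uint8_t)((b + 2 * c + d + 2) >> 2);
    }
}
```

The SIMD version of such a loop processes 8 (MMX), 16 (SSE2) or 32 (AVX2) pixels per instruction, which is why widening the block-size assumptions in the surrounding code is part of the project.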
'''Mentor:''' [[User:Michael|Michael Niedermayer]] (''michaelni'' in #ffmpeg-devel on Freenode IRC)
'''Backup Mentor:''' TBA, possibly [[User:Stefanosa|Stefano Sabatini]] (''saste'' in #ffmpeg-devel on Freenode IRC)
<br clear="all">
== Bayer RGB colorspaces ==
<div class="floatright">[[Image:350px-Bayer_pattern_on_sensor.svg.png ]]</div>
'''Description:''' Several image and video formats store pixels using Bayer-pattern colorspaces. Supporting these formats would broaden FFmpeg's applicability to RAW still and video photography processing.
'''Expected results:'''
* Rebase existing patches
* Implement high quality bayer transformations in libswscale (plain C)
* Add bayer formats to the libavutil pixfmt enumeration routines
* SIMD optimizations of the libswscale transformations
* Complete the PhotoCINE demuxer to support Bayer format (or another format of your choosing)
Optional goodies:
* Extend TIFF decoder to support DNG-Bayer format
* Support a popular proprietary camera format (many to choose from; see dcraw project)
'''Prerequisites:''' C coding skills, basic familiarity with git.
'''Qualification Task:''' Implement a simple and working Bayer->RGB transform in libswscale.
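A minimal sketch of such a transform, assuming an RGGB mosaic: each 2x2 sensor cell (R G over G B) is collapsed into one RGB pixel, averaging the two green samples. A real libswscale implementation would interpolate to full resolution instead, but this shows the data layout involved:

```c
#include <stdint.h>

/* Downsampling RGGB -> packed RGB24: one output pixel per 2x2 Bayer cell.
 * 'width' and 'height' are the Bayer plane dimensions (assumed even). */
void bayer_rggb_to_rgb24(const uint8_t *bayer, int width, int height,
                         uint8_t *rgb)
{
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            const uint8_t *p = bayer + y * width + x;
            *rgb++ = p[0];                      /* R: top-left sample       */
            *rgb++ = (p[1] + p[width]) / 2;     /* G: average both greens   */
            *rgb++ = p[width + 1];              /* B: bottom-right sample   */
        }
    }
}
```

Higher-quality variants (bilinear, edge-directed) differ only in how neighbouring cells are weighted, which is where the "high quality" goal and the later SIMD work come in.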
'''Mentor:''' [[User:Suxen_drol|Peter Ross]] (''pross-au'' in #ffmpeg-devel on Freenode IRC)
'''Backup Mentor:''' [[User:Michael|Michael Niedermayer]] (''michaelni'' in #ffmpeg-devel on Freenode IRC)
<br clear="all">
== MPEG-4 ALS encoder ==
An MPEG-4 ALS decoder was implemented several years ago, but an encoder is still missing in the official codebase. A rudimentary encoder has already been written and is available on [https://github.com/justinruggles/FFmpeg-alsenc.git github]. For this project, that encoder is first to be updated to fit into the current codebase of FFmpeg and to be tested for conformance using the [http://www.nue.tu-berlin.de/menue/forschung/projekte/beendete_projekte/mpeg-4_audio_lossless_coding_als/parameter/en/#230252 reference codec and specifications]. Second, the encoder is to be brought through the usual reviewing process so that it hits the codebase at the end of the project.
'''Expected results:'''
* Update the existing encoder to fit into the current codebase.
* Ensure conformance of the encoder by verifying using the reference codec and generate a test case for FATE.
* Ensure the FFmpeg decoder processes all generated files without warnings.
* Enhance the rudimentary feature set of the encoder.
'''Prerequisites:''' C coding skills, basic familiarity with git. A certain interest in audio coding and/or knowledge about the FFmpeg codebase could be beneficial.
'''Qualification Task:''' Add floating point support to MPEG-4 ALS decoder
'''Mentor:''' [[User:Pbm|Paul B Mahol]] (''durandal_1707'' in #ffmpeg-devel on Freenode IRC)
'''Backup Mentor:''' TBA, possibly [[User:Stefanosa|Stefano Sabatini]] (''saste'' in #ffmpeg-devel on Freenode IRC)
<br clear="all">
== Hardware Acceleration API Software/Tracing Implementation ==
'''Description:''' Our support for hardware accelerated decoding basically remains untested. This is in part because FFmpeg only implements part of the required steps, and in part because testing requires specific operating systems and hardware.
The idea would be to start with a simple stub implementation of an API such as VDPAU that provides only the most core functions. These would then serialize out the function calls and the data they receive, to allow for easy comparison and thus regression testing. Improvements to this approach include adding basic input validation and replay capability, to allow testing regression data against real hardware. This would be similar to what [https://github.com/apitrace/apitrace apitrace] does for OpenGL.
A further step would be to actually add support for decoding in software, so that full testing including visual inspection is possible without the need for special hardware.
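The tracing idea can be sketched in a few lines. The accelerator entry point below is made up for illustration (it is not a real VDPAU function); the point is only that each stub logs its name and arguments to a trace stream, so two runs can be diffed for regression testing:

```c
#include <stdio.h>
#include <stdarg.h>

/* Trace destination; falls back to stderr until one is installed. */
static FILE *trace;

static void trace_call(const char *name, const char *fmt, ...)
{
    FILE *out = trace ? trace : stderr;
    va_list ap;

    va_start(ap, fmt);
    fprintf(out, "%s(", name);
    vfprintf(out, fmt, ap);
    fprintf(out, ")\n");
    va_end(ap);
}

/* Hypothetical stub standing in for a decoder_render-style entry point.
 * A software-decoding back end could later do real work here. */
int stub_decoder_render(int surface_id, const void *bitstream, int size)
{
    trace_call("decoder_render", "surface=%d, size=%d", surface_id, size);
    return 0; /* pretend success */
}
```

Replay then amounts to reading such a trace back and re-issuing the calls against a real driver, which is exactly the regression-testing loop described above.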
'''Prerequisites:''' C coding skills, basic familiarity with git
'''Qualification Task:''' Anything related to the hardware acceleration code, though producing first ideas and code pieces for this task would also be reasonable
'''Mentor:''' Reimar Döffinger (''reimar'' in #ffmpeg-devel on Freenode IRC, but since I'm rarely there better email me first: Reimar.Doeffinger [at] gmx.de)
== Hardware Accelerated Video Encoding with VA-API ==
'''Description:''' FFmpeg already supports hardware accelerated decoding for multiple codecs but still lacks support for hardware accelerated encoding. The aim of the project is to add support for encoding with VA-API specifically, while keeping a generic enough approach in mind so that other hardware accelerators (TI-DSP, CUDA?) could be supported as well. This means that new ''hwaccel'' hooks are needed, and two operational modes are possible: either ''(i)'' the driver or hardware packs headers itself, or ''(ii)'' latitude is left to perform this task at the FFmpeg library level.
'''Expected results:''' Allow MPEG-2 and H.264 encoding with VA-API, while supporting variable bitrate (VBR) by default, and allowing alternate methods like constant bitrate (CBR) or constant QP (CQP) where appropriate or requested.
* MPEG-2 encoding:
** Add basic encoding with I/P frames (handle the ''-g'' option)
** Add support for B frames (handle the ''-bf'' option)
** Add support for constant bitrate (CBR, i.e. ''maxrate == bitrate'' and ''bufsize'' set)
** (Optionally) add support for interlaced contents
* H.264 encoding:
** Add basic encoding with I/P frames (handle the ''-g'' option)
** Add support for B frames (handle the ''-bf'' option)
** Add support for constant bitrate (CBR, i.e. ''maxrate == bitrate'' and ''bufsize'' set)
** Add support for constant QP (CQP, i.e. handle the ''-cqp'' option)
** Add support for more than one reference frame, while providing/using API to query the hardware capabilities
** Work on HRD conformance. May require to write an independent tool to assess that
** (Optionally) add configurability of the motion estimation method to use. Define new types for HW-accelerated encoding with at least two levels/hints for the accelerator.
* FFmpeg applications:
** Define common hwaccel interface for encoding
** Add initial support for hardware accelerated encoding to the ''ffmpeg'' application
'''Prerequisites:''' C coding skills, basic familiarity with git, hardware supporting VA-API for encoding.
'''Qualification Task:''' Anything related to the Hardware Acceleration (hwaccel) API, or to its related users. e.g. add JPEG decoding support with VA-API, etc.
'''Mentor:''' TBA, possibly [[User:Gwenole_Beauchesne|Gwenole Beauchesne]] (''__gb__'' in #ffmpeg-devel on Freenode IRC)
'''Backup Mentor:''' TBA, possibly Tushar Gohad
<br clear="all"/>
== AAC Improvements ==
'''Description:''' FFmpeg contains an AAC encoder and decoder, both of which can be improved in various ways. This is enough work for more than one GSoC project, so one part of your submission would be to define which task exactly you want to work on.
* AAC BSAC decoder: This has already been started, but the existing decoder still fails on many samples
* AAC SSR decoder
* AAC 960/120 MDCT window
'''Qualification Task:''' See the FFmpeg bug tracker for AAC issues; possible qualification tasks include fixing one or more of them, or rebasing the existing incomplete BSAC decoder onto the current git head.
'''Prerequisites:''' C coding skills, basic familiarity with git, knowledge about transform based audio coding would be useful.
'''Mentor:''' Baptiste Coudurier (''bcoudurier'' in #ffmpeg-devel on Freenode IRC)
'''Backup Mentor:''' TBA, possibly [[User:Stefanosa|Stefano Sabatini]] (''saste'' in #ffmpeg-devel on Freenode IRC)
<br clear="all"/>
== DTS / DCA Decoder Improvements ==
'''Description:''' FFmpeg contains a DTS decoder, but it is missing several features:
* DTS-HD decoder improvements: A possible qualification task is to implement ticket [https://trac.ffmpeg.org/ticket/1920 #1920]
** Add support for X96 extension (96khz)
** Add support for XLL extension (lossless)
** Add support for pure DTS-HD streams that do not contain a DTS core
** Add support for multiple assets
** Add support for LBR extension
'''Prerequisites:''' C coding skills, basic familiarity with git. Good understanding of DTS and related audio coding is a strict requirement.
'''Mentor:''' Benjamin Larsson (''merbanan/merbzt'' in #ffmpeg-devel on Freenode IRC)
'''Backup Mentor:''' TBA, possibly [[User:Stefanosa|Stefano Sabatini]] (''saste'' in #ffmpeg-devel on Freenode IRC)
== DCA Encoder Improvements ==
'''Description:''' Add more complete multichannel support, subband adpcm support and optimize the decorrelation transform. A [http://wiki.multimedia.cx/index.php?title=Mirror specification] is available.
'''Prerequisites:''' C coding skills, basic familiarity with git. Good understanding of DTS and related audio coding is a strict requirement.
'''Qualification Task:''' Add 3.0 / 3.1 support and fix the channel order for 5.1
'''Mentor:''' Benjamin Larsson (''merbanan/merbzt'' in #ffmpeg-devel on Freenode IRC)
'''Backup Mentor:''' [[User:Michael|Michael Niedermayer]] (''michaelni'' in #ffmpeg-devel on Freenode IRC)
== MXF Demuxer Improvements ==
'''Description:''' FFmpeg's MXF demuxer needs a proper, compact way to map EssenceContainer ULs to WrappingKind. See ticket [https://trac.ffmpeg.org/ticket/2776 #2776]; ticket [https://trac.ffmpeg.org/ticket/1916 #1916] also contains relevant notes.
The gist of this is that essence in MXF is typically stored in one of two ways: as an audio/video interleave or with each stream in one huge chunk (like 1 GiB audio followed by 10 GiB video). Previous ways of telling these apart have been technically wrong, but have worked due to a lack of samples demonstrating the contrary.
'''Expected results:''' The sample in ticket #2776 demuxes fine and there's a test case in FATE for it. The solution should grow libavformat by no more than 32 KiB.
'''Prerequisites:''' C coding skills, basic familiarity with git.
'''Qualification Task:''' Investigate whether there may be a compact way of representing the UL -> WrappingKind mapping specified in the official RP224 Excel document. The table takes up about half a megabyte verbatim, which is unacceptable in a library as large as libavformat.
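One direction worth investigating is prefix-based compression of the table. MXF ULs are 16 bytes, but the bytes that distinguish essence-container families cluster in a short span, so storing a shared prefix plus the offset and value of the byte that encodes the wrapping could shrink each family of rows to a few bytes. A sketch of the idea; every byte value below is made up for illustration, the real values come from SMPTE RP224:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

enum wrapping_kind { WRAP_UNKNOWN, WRAP_FRAME, WRAP_CLIP };

struct ul_rule {
    uint8_t prefix[8];   /* leading bytes shared by a family of ULs   */
    uint8_t wrap_byte;   /* offset of the byte holding the wrapping   */
    uint8_t frame_val;   /* value of that byte meaning frame wrapping */
};

/* Hypothetical rule table; a real one would be generated from RP224. */
static const struct ul_rule rules[] = {
    { {0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x02}, 14, 0x01 },
};

enum wrapping_kind classify_ul(const uint8_t ul[16])
{
    for (size_t i = 0; i < sizeof(rules) / sizeof(rules[0]); i++) {
        if (!memcmp(ul, rules[i].prefix, sizeof(rules[i].prefix)))
            return ul[rules[i].wrap_byte] == rules[i].frame_val
                   ? WRAP_FRAME : WRAP_CLIP;
    }
    return WRAP_UNKNOWN;
}
```

Whether the real RP224 data actually factors this cleanly is exactly what the qualification task asks you to find out.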
'''Mentor:''' TBA, possibly Tomas Härdin (''thardin'' in #ffmpeg-devel on Freenode IRC)
'''Backup Mentor:''' TBA
<br clear="all"/>
= Unmentored Projects =
This is a list of projects that students are encouraged to consider if a mentored project is unavailable or not within the student's skills or interests. The student will have to find a mentor for the project. A student can also [[#Your_Own_Idea|propose their own project]].
== glplay ==
<div class="floatleft">[[Image:Opengl_logo.jpg]]</div>
'''Description:''' The SDL library that is used by FFplay has some deficiencies; adding OpenGL output to FFplay should allow for better performance (and fewer bugs, at least for some hardware/driver combinations). This could be a new application (glplay), but it is probably simpler to extend ffplay to use OpenGL. You can use code from MPlayer's OpenGL vo module, which may be relicensed under the LGPL.
'''Mentor:''' TBA
'''Backup mentor:''' Reimar Döffinger (''reimar'' in #ffmpeg-devel on Freenode IRC)
<br clear="all">
== TrueHD encoder ==
'''Description:''' FFmpeg currently does not support encoding to one of the lossless audio formats used on Blu-ray discs. This task consists of implementing a TrueHD encoder that allows lossless encoding of audio so it can be played on hardware devices capable of TrueHD decoding.
== Opus decoder ==
<div class="floatright">[[Image:Opus.png]]</div>
'''Description:''' Opus decoding is currently supported through the external libopus library.
* Write a native decoder, continuing work on the existing unfinished implementation
A possible qualification task is to port the existing incomplete decoder to the current git head and improve it, to show that you are capable of working on this task.
== VC-1 interlaced ==
'''Description:''' The FFmpeg VC-1 decoder has improved over the years, but many samples are still not decoded bit-exact and real-world interlaced streams typically show artefacts.
* Implement missing interlace features
* Make more reference samples bit-exact
As a qualification task, you should try to find a bug in the current decoder implementation and fix it.
== JPEG 2000 ==
<div class="floatleft">[[Image:Jpeg2000.jpg]]</div>
'''Description:''' FFmpeg contains an experimental native JPEG 2000 encoder and decoder. Both are missing many features, see also the FFmpeg bug tracker for some unsupported samples.
Work on an issue (for example from the bug tracker) as a qualification task to show that you are capable of improving the codec implementation.
<br clear="all">
== Hardware Acceleration (hwaccel) API v2 ==
<div class="floatright">[[Image:Hardware.jpg]]</div>
'''Description:''' FFmpeg supports hardware accelerated decoding through the internal hwaccel API. Currently supported system hardware acceleration APIs are VA-API (Linux), DXVA2 (Windows) and VDA (Mac OS X). However, the current approach requires client applications to allocate the underlying resources (e.g. hardware surfaces and context) themselves and hand them over to FFmpeg. This has a few limitations: it is not scalable to new codecs, i.e. it requires new tokens for each newly supported codec; it incurs extra work in the client application, which tends to be duplicated across client applications; and it prevents efficient fallback to software decoding if the hardware cannot handle a particular codec specification.
The goal of this project is to revamp the FFmpeg Hardware Acceleration API so that hardware resources are allocated and managed in the library, thus requiring the client application to only provide a single hardware context/device handle; provide a way to fall back early to software decoding mode if the underlying hardware won't be able to handle the bitstream; and make it possible to select a hardware accelerator by ID without polluting the PixelFormats namespace.
'''Expected results:'''
* FFmpeg core library (libavcodec):
** Core API extensions and improvements
*** Add open/close hooks in a way that is backwards compatible with hwaccel v1 enabled applications
*** Add new tokens describing hardware accelerators
*** Add new flags exposing HW capabilities like download/upload
*** Investigate the benefits or impacts to provide a global map/unmap capability to FFmpeg video buffers
** Port hwaccels to v2 infrastructure
*** Port VA-API decoders to v2 infrastructure
*** Validate that VA-API decoders still work with existing applications supporting hwaccel v1
*** Provide download capability through ''vaGetImage()''
*** Validate that ffplay can support this feature with minor changes, and definitely no change to the existing SDL renderer
*** Port VDPAU decoders to hwaccel v2 (optional), and investigate ways to preserve compatibility with older applications
* FFmpeg applications:
** Integrate hardware acceleration into ffplay
*** Create a video-output (VO) infrastructure to ffplay
*** Port the SDL renderer to the new VO infrastructure
*** Add support for VA-API: VA renderer through ''vaPutSurface()'', add -hwaccel option to select "vaapi" renderer
*** Add support for VDPAU (optional): VDPAU renderer through ''VdpPresentationQueueDisplay()''
** Integrate hardware acceleration into ffmpeg
*** Add support for VA-API: use the VA/DRM API for headless (no-X display) decoding, use libudev to determine the device to use
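The "single device handle plus capability flags" design above can be sketched as a data structure. None of these names exist in FFmpeg; they are purely an illustration of the goals listed for the v2 API:

```c
/* Hypothetical v2-style hwaccel context: the application supplies only a
 * native device handle, and the library allocates surfaces internally.
 * Capability flags let the library decide early whether to fall back to
 * software decoding. */
enum hwaccel_caps {
    HWACCEL_CAP_DOWNLOAD = 1 << 0,  /* can copy frames back to system RAM */
    HWACCEL_CAP_UPLOAD   = 1 << 1,  /* can push frames to the accelerator */
};

struct hwaccel_device {
    void *native_handle;   /* e.g. a VADisplay, supplied by the app  */
    unsigned caps;         /* capability flags exposed by the driver */
};
```

With such a handle, the open/close hooks and the per-accelerator tokens in the task list become properties of the library rather than of every client application.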
'''Prerequisites:''' C coding skills, basic familiarity with git, hardware supporting VA-API.
'''Qualification Task:''' Anything related to the Hardware Acceleration (hwaccel) API, or to its related users. e.g. add JPEG decoding support with VA-API, etc.
'''Mentor:''' TBA, possibly [[User:Gwenole_Beauchesne|Gwenole Beauchesne]] (''__gb__'' in #ffmpeg-devel on Freenode IRC)
'''Backup Mentor:''' TBA
<br clear="all">
== Your Own Project Idea ==
A student can propose a project. Ideas can also be found by browsing bugs and feature requests on our [https://trac.ffmpeg.org/ bug tracker]. The work should last the majority of the GSoC duration, the task must be approved by the developers, and a mentor must be assigned.
Students can discuss an idea in the [http://ffmpeg.org/mailman/listinfo/ffmpeg-devel ffmpeg-devel mailing-list], the #ffmpeg-devel IRC channel, or contact the FFmpeg GSoC admins for more information.

Latest revision as of 14:04, 9 February 2014