<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.multimedia.cx/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Koorogi</id>
	<title>MultimediaWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.multimedia.cx/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Koorogi"/>
	<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php/Special:Contributions/Koorogi"/>
	<updated>2026-04-18T09:27:36Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.5</generator>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=FFmpeg_/_Libav_Summer_Of_Code&amp;diff=11535</id>
		<title>FFmpeg / Libav Summer Of Code</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=FFmpeg_/_Libav_Summer_Of_Code&amp;diff=11535"/>
		<updated>2009-04-24T15:05:29Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: link to MPEG ALS page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [[FFmpeg]] project has been a participant in the [http://code.google.com/soc/ Google Summer of Code] program since 2006.&lt;br /&gt;
&lt;br /&gt;
* [[FFmpeg_Summer_Of_Code_2009|2009 project page]]&lt;br /&gt;
* [[FFmpeg Summer Of Code 2008|2008 project page]]&lt;br /&gt;
* [[FFmpeg Summer Of Code 2007|2007 project page]]&lt;br /&gt;
* [[FFmpeg Summer Of Code 2006|2006 project page]]&lt;br /&gt;
&lt;br /&gt;
Each accepted project is developed in its own sandbox, separate from the main FFmpeg codebase. Naturally, the end goal of each of the accepted FFmpeg projects ought to be to have that code in shape for acceptance into the production codebase. This page tracks the status of each project and how well each student did.&lt;br /&gt;
&lt;br /&gt;
== 2006 Projects ==&lt;br /&gt;
&lt;br /&gt;
=== VC-1 Decoder ===&lt;br /&gt;
* Student: [[User:Kostya|Kostya Shishkov]]&lt;br /&gt;
* Mentor: [[User:Multimedia Mike|Mike Melanson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== AMR-NB Decoder ===&lt;br /&gt;
* Student: [[User:superdump|Robert Swain]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer.&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Project not finished during SoC.&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Floating point code has been implemented up to synthesis.&amp;lt;/font&amp;gt; The next step is to debug the synthesis input and code. Documented on [[AMR-NB]].&lt;br /&gt;
&lt;br /&gt;
=== AC-3 Decoder ===&lt;br /&gt;
* Student: [[User:Cloud9|Kartikey Mahendra BHATT]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: Picked up by [[User:Jruggle|Justin Ruggles]] and &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== AAC Decoder ===&lt;br /&gt;
* Student: Maxim Gavrilov&lt;br /&gt;
* Mentor: [[User:ods15|Oded Shimon]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: Picked up by [[User:andoma|Andreas Öman]] and [[User:superdump|Robert Swain]] and &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Vorbis Encoder ===&lt;br /&gt;
* Student: Mathew Philip&lt;br /&gt;
* Mentor: [[User:ods15|Oded Shimon]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project barely started&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: Picked up by [[User:ods15|Oded Shimon]] and &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== 2007 Projects ==&lt;br /&gt;
&lt;br /&gt;
=== RealVideo 4 Decoder ===&lt;br /&gt;
* Student: [[User:Kostya|Kostya Shishkov]]&lt;br /&gt;
* Mentor: [[User:Multimedia Mike|Mike Melanson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt; The project goal morphed to include a RealVideo 3 decoder since the two schemes are so similar.&lt;br /&gt;
&lt;br /&gt;
=== QCELP Decoder ===&lt;br /&gt;
* Student: [[User:Reynaldo|Reynaldo Verdejo Pinochet]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;. &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Produced a working decoder during SoC but the code didn't reach SVN before the end of the program&amp;lt;/font&amp;gt;.&lt;br /&gt;
* Code Status: Picked up by Kenan Gillet and with the help of [[User:Reynaldo|Reynaldo]] &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;. Some features still missing, though.&lt;br /&gt;
&lt;br /&gt;
=== Matroska Muxer ===&lt;br /&gt;
* Student: David Conrad&lt;br /&gt;
* Mentor: [[User:aurel|Aurélien Jacobs]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Video Filter API (AKA [[Libavfilter|libavfilter]]) ===&lt;br /&gt;
* Student: [[User:Koorogi|Bobby Bingham]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]] and Michael Niedermayer&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;project not finished during SoC but continues working on it&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Parts have been committed to FFmpeg, but remain disabled.&amp;lt;/font&amp;gt; Still in development (albeit slowly) by [[User:Koorogi|Bobby Bingham]] and [[User:Vitor|Vitor]]. 2009 SoC projects are underway to complete its integration and add audio support.&lt;br /&gt;
&lt;br /&gt;
=== E-AC-3 Decoder ===&lt;br /&gt;
* Student: Bartlomiej Wolowiec&lt;br /&gt;
* Mentor:  [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;; &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;code produced worked for most available samples, but there were some unimplemented features.&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: Picked up by [[User:Jruggle|Justin Ruggles]], finished and &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== JPEG 2000 Encoder and Decoder ===&lt;br /&gt;
* Student: Kamil Nowosad&lt;br /&gt;
* Mentor: [[User:pengvado|Loren Merritt]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;The code is working but not all features are supported.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Dirac Encoder and Decoder ===&lt;br /&gt;
* Student: Marco Gerards&lt;br /&gt;
* Mentor: [[User:Lu_zero|Luca Barbato]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Project not finished during SoC but continues working on it.&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;The decoder is in good shape, the encoder still needs more work. Both need to be updated to the latest spec.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== TS Muxer ===&lt;br /&gt;
* Student: Xiaohui Sun&lt;br /&gt;
* Mentor:  [[User:bcoudurier|Baptiste Coudurier]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt; [[Interesting Patches#PES packetizer by Xiaohui Sun|Changes]] requested during the review process for FFmpeg inclusion were never made.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 2008 Projects ==&lt;br /&gt;
&lt;br /&gt;
=== Generic frame-level multithreading support  ===&lt;br /&gt;
* Student: Alexander Strange &lt;br /&gt;
* Mentor: Kristian Jerpetjoen&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt; here: http://gitorious.org/projects/ffmpeg/repos/ffmpeg-mt&lt;br /&gt;
&lt;br /&gt;
=== Nellymoser Encoder ===&lt;br /&gt;
* Student: Bartlomiej Wolowiec &lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===  ALAC Encoder ===&lt;br /&gt;
* Student: [[User:Jai|Jai Menon]]&lt;br /&gt;
* Mentor: [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== LGPL reimplementation of GPL sws_scale parts ===&lt;br /&gt;
* Student: Keiji Costantini&lt;br /&gt;
* Mentor: [[User:Lu_zero|Luca Barbato]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: The GPL YUV table generator has since been &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;implemented as LGPL by [[User:Kostya|Kostya Shishkov]]&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== AAC-LC Encoder ===&lt;br /&gt;
* Student: [[User:Kostya|Kostya]]&lt;br /&gt;
* Mentor: [[User:Andoma|Andreas Öman]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MLP/TrueHD encoder ===&lt;br /&gt;
* Student: [[User:Angustia|Ramiro Polla]]&lt;br /&gt;
* Mentor: [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;, the first stage, getting the MLP decoder into shape and committed, has been accomplished.&lt;br /&gt;
&lt;br /&gt;
=== WMA Pro Decoder ===&lt;br /&gt;
* Student: Sascha Sommer&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process, fully functional code in SoC SVN tree [http://svn.ffmpeg.org/soc/wmapro/]&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MXF Muxer ===&lt;br /&gt;
* Student: [[User:spyfeng|Zhentan Feng]]&lt;br /&gt;
* Mentor:  [[User:Bcoudurier|Baptiste Coudurier]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;finished project&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 2009 Projects ==&lt;br /&gt;
&lt;br /&gt;
=== RTMP Support (Flash streaming) ===&lt;br /&gt;
* Student: [[User:Kostya|Kostya Shishkov]]&lt;br /&gt;
* Mentor:  [[User:Ronald S. Bultje|Ronald Bultje]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== libswscale cleanup ===&lt;br /&gt;
* Student: [[User:Angustia|Ramiro Polla]]&lt;br /&gt;
* Mentor: [[User:reimar|Reimar Döffinger]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== S/PDIF muxer ===&lt;br /&gt;
* Student: Bartlomiej Wolowiec &lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Playlist/Concatenation Support for FFmpeg ===&lt;br /&gt;
* Student: Geza Kovacs&lt;br /&gt;
* Mentor:  [[User:Bcoudurier|Baptiste Coudurier]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== JPEG2000 decoder and encoder ===&lt;br /&gt;
* Student: [[User:Jai|Jai Menon]]&lt;br /&gt;
* Mentor: [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Implement the New Seeking API in Libavformat ===&lt;br /&gt;
* Student: [[User:spyfeng|Zhentan Feng]]&lt;br /&gt;
* Mentor:  [[User:Bcoudurier|Baptiste Coudurier]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [[MPEG-4 Audio Lossless Coding (ALS)|MPEG-4 ALS]] decoder ===&lt;br /&gt;
* Student: Thilo Borgmann&lt;br /&gt;
* Mentor: [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Implementation of AVFilter infrastructure and various audio filters ===&lt;br /&gt;
* Student: Kevin Dubois&lt;br /&gt;
* Mentor:  [[User:Vitor|Vitor Sessak]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Finish AMR-NB decoder and write an encoder ===&lt;br /&gt;
* Student: Colin McQuillan&lt;br /&gt;
* Mentor:  [[User:superdump|Robert Swain]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:FFmpeg]]&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=FFmpeg_/_Libav_Summer_Of_Code&amp;diff=11527</id>
		<title>FFmpeg / Libav Summer Of Code</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=FFmpeg_/_Libav_Summer_Of_Code&amp;diff=11527"/>
		<updated>2009-04-20T22:50:49Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: /* Video Filter API (AKA libavfilter) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [[FFmpeg]] project has been a participant in the [http://code.google.com/soc/ Google Summer of Code] program since 2006.&lt;br /&gt;
&lt;br /&gt;
* [[FFmpeg_Summer_Of_Code_2009|2009 project page]]&lt;br /&gt;
* [[FFmpeg Summer Of Code 2008|2008 project page]]&lt;br /&gt;
* [[FFmpeg Summer Of Code 2007|2007 project page]]&lt;br /&gt;
* [[FFmpeg Summer Of Code 2006|2006 project page]]&lt;br /&gt;
&lt;br /&gt;
Each accepted project is developed in its own sandbox, separate from the main FFmpeg codebase. Naturally, the end goal of each of the accepted FFmpeg projects ought to be to have that code in shape for acceptance into the production codebase. This page tracks the status of each project and how well each student did.&lt;br /&gt;
&lt;br /&gt;
== 2006 Projects ==&lt;br /&gt;
&lt;br /&gt;
=== VC-1 Decoder ===&lt;br /&gt;
* Student: [[User:Kostya|Kostya Shishkov]]&lt;br /&gt;
* Mentor: [[User:Multimedia Mike|Mike Melanson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== AMR-NB Decoder ===&lt;br /&gt;
* Student: [[User:superdump|Robert Swain]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer.&amp;lt;/font&amp;gt; &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Project not finished during SoC.&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Floating point code has been implemented up to synthesis.&amp;lt;/font&amp;gt; The next step is to debug the synthesis input and code. Documented on [[AMR-NB]].&lt;br /&gt;
&lt;br /&gt;
=== AC-3 Decoder ===&lt;br /&gt;
* Student: [[User:Cloud9|Kartikey Mahendra BHATT]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: Picked up by [[User:Jruggle|Justin Ruggles]] and &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== AAC Decoder ===&lt;br /&gt;
* Student: Maxim Gavrilov&lt;br /&gt;
* Mentor: [[User:ods15|Oded Shimon]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: Picked up by [[User:andoma|Andreas Öman]] and [[User:superdump|Robert Swain]] and &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Vorbis Encoder ===&lt;br /&gt;
* Student: Mathew Philip&lt;br /&gt;
* Mentor: [[User:ods15|Oded Shimon]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project barely started&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: Picked up by [[User:ods15|Oded Shimon]] and &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== 2007 Projects ==&lt;br /&gt;
&lt;br /&gt;
=== RealVideo 4 Decoder ===&lt;br /&gt;
* Student: [[User:Kostya|Kostya Shishkov]]&lt;br /&gt;
* Mentor: [[User:Multimedia Mike|Mike Melanson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt; The project goal morphed to include a RealVideo 3 decoder since the two schemes are so similar.&lt;br /&gt;
&lt;br /&gt;
=== QCELP Decoder ===&lt;br /&gt;
* Student: [[User:Reynaldo|Reynaldo Verdejo Pinochet]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;. &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Produced a working decoder during SoC but the code didn't reach SVN before the end of the program&amp;lt;/font&amp;gt;.&lt;br /&gt;
* Code Status: Picked up by Kenan Gillet and with the help of [[User:Reynaldo|Reynaldo]] &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;. Some features still missing, though.&lt;br /&gt;
&lt;br /&gt;
=== Matroska Muxer ===&lt;br /&gt;
* Student: David Conrad&lt;br /&gt;
* Mentor: [[User:aurel|Aurélien Jacobs]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Video Filter API (AKA [[Libavfilter|libavfilter]]) ===&lt;br /&gt;
* Student: [[User:Koorogi|Bobby Bingham]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]] and Michael Niedermayer&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;project not finished during SoC but continues working on it&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Parts have been committed to FFmpeg, but remain disabled.&amp;lt;/font&amp;gt; Still in development (albeit slowly) by [[User:Koorogi|Bobby Bingham]] and [[User:Vitor|Vitor]]. 2009 SoC projects are underway to complete its integration and add audio support.&lt;br /&gt;
&lt;br /&gt;
=== E-AC-3 Decoder ===&lt;br /&gt;
* Student: Bartlomiej Wolowiec&lt;br /&gt;
* Mentor:  [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;; &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;code produced worked for most available samples, but there were some unimplemented features.&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: Picked up by [[User:Jruggle|Justin Ruggles]], finished and &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== JPEG 2000 Encoder and Decoder ===&lt;br /&gt;
* Student: Kamil Nowosad&lt;br /&gt;
* Mentor: [[User:pengvado|Loren Merritt]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;The code is working but not all features are supported.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Dirac Encoder and Decoder ===&lt;br /&gt;
* Student: Marco Gerards&lt;br /&gt;
* Mentor: [[User:Lu_zero|Luca Barbato]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Project not finished during SoC but continues working on it.&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;The decoder is in good shape, the encoder still needs more work. Both need to be updated to the latest spec.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== TS Muxer ===&lt;br /&gt;
* Student: Xiaohui Sun&lt;br /&gt;
* Mentor:  [[User:bcoudurier|Baptiste Coudurier]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt; [[Interesting Patches#PES packetizer by Xiaohui Sun|Changes]] requested during the review process for FFmpeg inclusion were never made.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 2008 Projects ==&lt;br /&gt;
&lt;br /&gt;
=== Generic frame-level multithreading support  ===&lt;br /&gt;
* Student: Alexander Strange &lt;br /&gt;
* Mentor: Kristian Jerpetjoen&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt; here: http://gitorious.org/projects/ffmpeg/repos/ffmpeg-mt&lt;br /&gt;
&lt;br /&gt;
=== Nellymoser Encoder ===&lt;br /&gt;
* Student: Bartlomiej Wolowiec &lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===  ALAC Encoder ===&lt;br /&gt;
* Student: [[User:Jai|Jai Menon]]&lt;br /&gt;
* Mentor: [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== LGPL reimplementation of GPL sws_scale parts ===&lt;br /&gt;
* Student: Keiji Costantini&lt;br /&gt;
* Mentor: [[User:Lu_zero|Luca Barbato]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: The GPL YUV table generator has since been &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;implemented as LGPL by [[User:Kostya|Kostya Shishkov]]&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== AAC-LC Encoder ===&lt;br /&gt;
* Student: [[User:Kostya|Kostya]]&lt;br /&gt;
* Mentor: [[User:Andoma|Andreas Öman]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MLP/TrueHD encoder ===&lt;br /&gt;
* Student: [[User:Angustia|Ramiro Polla]]&lt;br /&gt;
* Mentor: [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;, the first stage, getting the MLP decoder into shape and committed, has been accomplished.&lt;br /&gt;
&lt;br /&gt;
=== WMA Pro Decoder ===&lt;br /&gt;
* Student: Sascha Sommer&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process, fully functional code in SoC SVN tree [http://svn.ffmpeg.org/soc/wmapro/]&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MXF Muxer ===&lt;br /&gt;
* Student: [[User:spyfeng|Zhentan Feng]]&lt;br /&gt;
* Mentor:  [[User:Bcoudurier|Baptiste Coudurier]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;finished project&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 2009 Projects ==&lt;br /&gt;
&lt;br /&gt;
=== RTMP Support (Flash streaming) ===&lt;br /&gt;
* Student: [[User:Kostya|Kostya Shishkov]]&lt;br /&gt;
* Mentor:  [[User:Ronald S. Bultje|Ronald Bultje]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Finish libavfilter integration ===&lt;br /&gt;
* Student: [[User:Angustia|Ramiro Polla]]&lt;br /&gt;
* Mentor: [[User:reimar|Reimar Döffinger]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== S/PDIF muxer ===&lt;br /&gt;
* Student: Bartlomiej Wolowiec &lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Playlist/Concatenation Support for FFmpeg ===&lt;br /&gt;
* Student: Geza Kovacs&lt;br /&gt;
* Mentor:  [[User:Bcoudurier|Baptiste Coudurier]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== JPEG2000 decoder and encoder ===&lt;br /&gt;
* Student: [[User:Jai|Jai Menon]]&lt;br /&gt;
* Mentor: [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Implement the New Seeking API in Libavformat ===&lt;br /&gt;
* Student: [[User:spyfeng|Zhentan Feng]]&lt;br /&gt;
* Mentor:  [[User:Bcoudurier|Baptiste Coudurier]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MPEG-4 ALS decoder ===&lt;br /&gt;
* Student: Thilo Borgmann&lt;br /&gt;
* Mentor: [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Implementation of AVFilter infrastructure and various audio filters ===&lt;br /&gt;
* Student: Kevin Dubois&lt;br /&gt;
* Mentor:  [[User:Vitor|Vitor Sessak]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Finish AMR-NB decoder and write an encoder ===&lt;br /&gt;
* Student: Colin McQuillan&lt;br /&gt;
* Mentor:  [[User:superdump|Robert Swain]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;active&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;in process&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:FFmpeg]]&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=Talk:FFmpeg_Summer_Of_Code_2009&amp;diff=10880</id>
		<title>Talk:FFmpeg Summer Of Code 2009</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=Talk:FFmpeg_Summer_Of_Code_2009&amp;diff=10880"/>
		<updated>2009-01-17T21:26:44Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: /* libavui (a common skins library)? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== S/PDIF muxer ===&lt;br /&gt;
&lt;br /&gt;
Is there any specific qualification task you would like done for this? -- Jai&lt;br /&gt;
&lt;br /&gt;
:Working Jpeg2000 decoder ;), cleaning up this http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/2005-June/001673.html would be welcome. It's an rpza encoder. --[[User:Merbanan|Merbanan]] 06:22, 31 December 2008 (EST)&lt;br /&gt;
&lt;br /&gt;
=== speex + gsm ===&lt;br /&gt;
&lt;br /&gt;
Aren't libgsm and libspeex distributed under a permissive license?&lt;br /&gt;
If yes, these tasks do not have very high priority, imo.&lt;br /&gt;
[[User:Ce|Ce]] 14:56, 11 January 2009 (EST)&lt;br /&gt;
&lt;br /&gt;
=== DTS-HD Master Audio decoder? ===&lt;br /&gt;
Would a [http://en.wikipedia.org/wiki/DTS-HD_Master_Audio DTS-HD Master Audio] decoder make a good project suggestion?  [[User:Gamester17|Gamester17]] 02:51, 16 January 2009 (EST)&lt;br /&gt;
&lt;br /&gt;
http://en.wikipedia.org/wiki/DTS-HD_Master_Audio&lt;br /&gt;
:&amp;quot;''DTS-HD Master Audio is a lossless audio codec created by Digital Theater System. It was previously known as DTS++ and DTS-HD. It is an extension of DTS which, when played back on devices which do not support the Master Audio extension, degrades to a 1.5 Mbit/s &amp;quot;core&amp;quot; track which is lossy. DTS-HD Master Audio is an optional audio format for both Blu-ray Disc and HD DVD''&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Specs, please. From what I know the projects without spec take a looong time to complete. --[[User:Kostya|Kostya]] 03:32, 16 January 2009 (EST)&lt;br /&gt;
&lt;br /&gt;
:AFAIK, there is even no software implementation, so it would be even more difficult;-( [[User:Ce|Ce]] 20:14, 16 January 2009 (EST)&lt;br /&gt;
&lt;br /&gt;
=== WTV (Microsoft Windows Media Center Recording Format) demuxer? ===&lt;br /&gt;
Would a [[WTV|WTV (Microsoft Windows Media Center Recording Format)]] demuxer make a good project suggestion? [[User:Gamester17|Gamester17]] 13:14, 16 January 2009 (EST)&lt;br /&gt;
&lt;br /&gt;
[[WTV]]&lt;br /&gt;
:&amp;quot;''WTV is the new container format used to record television shows in Microsoft Windows Vista Media Center starting with Windows Media Center TV Pack 2008.''&amp;quot;, &amp;quot;''WTV is the successor of DVR-MS, which is being replaced with WTV''&amp;quot;, &amp;quot;''WTV is also the default recording format for Windows 7 Media Center''&amp;quot;&lt;br /&gt;
&lt;br /&gt;
::This is tricky. It doesn't strike me as being involved enough to qualify as one of our usual SoC projects. OTOH, it seems a little too involved to be a qualification task. --[[User:Multimedia Mike|Multimedia Mike]] 14:24, 16 January 2009 (EST)&lt;br /&gt;
&lt;br /&gt;
=== libavui (a common skins library)? ===&lt;br /&gt;
Would a common skins library make a good project suggestion?&lt;br /&gt;
*MPlayer skin&lt;br /&gt;
*VLC skin&lt;br /&gt;
*Xine skin&lt;br /&gt;
*XMMS skin&lt;br /&gt;
*WINAMP skin&lt;br /&gt;
*Windows Media Player skin&lt;br /&gt;
-[[User:Nazo|Nazo]] 21:29, 16 January 2009 (EST)&lt;br /&gt;
:: Personally, I would advocate a project to stamp out skinnable UIs across the computing landscape. But that's outside of the scope of an SoC project. I hate UI skins. --[[User:Multimedia Mike|Multimedia Mike]] 14:03, 17 January 2009 (EST)&lt;br /&gt;
:::: I second that.  But I don't see how GUI stuff like promoting or discouraging skins relates to libav* in the first place. [[User:Koorogi|Koorogi]] 16:26, 17 January 2009 (EST)&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=Talk:TX2&amp;diff=9417</id>
		<title>Talk:TX2</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=Talk:TX2&amp;diff=9417"/>
		<updated>2008-01-29T17:07:09Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: request clarification on swizzling&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Can somebody offer any clarification on when the image data is swizzled, and when not? The article just saying &amp;quot;sometimes&amp;quot; isn't very helpful.  I'll take a look when I get a chance, but it might be a while, so if somebody happens to know, or has some free time to spend on it, a little clarification would be nice. --[[User:Koorogi|Koorogi]] 12:07, 29 January 2008 (EST)&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=FFmpeg_filter_HOWTO&amp;diff=9192</id>
		<title>FFmpeg filter HOWTO</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=FFmpeg_filter_HOWTO&amp;diff=9192"/>
		<updated>2007-12-23T23:52:15Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: document the new API since the great colorspace API change&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is meant as an introduction to writing filters for [[libavfilter]].  This is a work in progress, but should at least point you in the right direction for writing simple filters.&lt;br /&gt;
&lt;br /&gt;
== Definition of a filter ==&lt;br /&gt;
&lt;br /&gt;
=== AVFilter ===&lt;br /&gt;
All filters are described by an AVFilter structure.  This structure gives information needed to initialize the filter, and information on the entry points into the filter code.  This structure is declared in libavfilter/avfilter.h:&lt;br /&gt;
&lt;br /&gt;
 typedef struct&lt;br /&gt;
 {&lt;br /&gt;
     char *name;         ///&amp;lt; filter name&lt;br /&gt;
     char *author;       ///&amp;lt; filter author&lt;br /&gt;
 &lt;br /&gt;
     int priv_size;      ///&amp;lt; size of private data to allocate for the filter&lt;br /&gt;
 &lt;br /&gt;
     int (*init)(AVFilterContext *ctx, const char *args, void *opaque);&lt;br /&gt;
     void (*uninit)(AVFilterContext *ctx);&lt;br /&gt;
 &lt;br /&gt;
     int (*query_formats)(AVFilterContext *ctx);&lt;br /&gt;
 &lt;br /&gt;
     const AVFilterPad *inputs;  ///&amp;lt; NULL terminated list of inputs. NULL if none&lt;br /&gt;
     const AVFilterPad *outputs; ///&amp;lt; NULL terminated list of outputs. NULL if none&lt;br /&gt;
 } AVFilter;&lt;br /&gt;
&lt;br /&gt;
The query_formats function sets the in_formats member of connected '''output''' links, and the out_formats member of connected '''input''' links, described below under AVFilterLink.&lt;br /&gt;
&lt;br /&gt;
=== AVFilterPad ===&lt;br /&gt;
Let's take a quick look at the AVFilterPad structure, which is used to describe the inputs and outputs of the filter.  This is also defined in libavfilter/avfilter.h:&lt;br /&gt;
&lt;br /&gt;
 typedef struct AVFilterPad&lt;br /&gt;
 {&lt;br /&gt;
     char *name;&lt;br /&gt;
     int type;&lt;br /&gt;
 &lt;br /&gt;
     int min_perms;&lt;br /&gt;
     int rej_perms;&lt;br /&gt;
 &lt;br /&gt;
     void (*start_frame)(AVFilterLink *link, AVFilterPicRef *picref);&lt;br /&gt;
     AVFilterPicRef *(*get_video_buffer)(AVFilterLink *link, int perms);&lt;br /&gt;
     void (*end_frame)(AVFilterLink *link);&lt;br /&gt;
     void (*draw_slice)(AVFilterLink *link, int y, int height);&lt;br /&gt;
 &lt;br /&gt;
     int (*request_frame)(AVFilterLink *link);&lt;br /&gt;
 &lt;br /&gt;
     int (*config_props)(AVFilterLink *link);&lt;br /&gt;
 } AVFilterPad;&lt;br /&gt;
&lt;br /&gt;
The actual definition in the header file has doxygen comments describing each entry point, its purpose, and what type of pads it is relevant for.  These fields are relevant for all pads:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|name&lt;br /&gt;
|Name of the pad.  No two inputs should have the same name, and no two outputs should have the same name.&lt;br /&gt;
|-&lt;br /&gt;
|type&lt;br /&gt;
|Only AV_PAD_VIDEO currently.&lt;br /&gt;
|-&lt;br /&gt;
|config_props&lt;br /&gt;
|Handles configuration of the link connected to the pad&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Fields only relevant to input pads are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|min_perms&lt;br /&gt;
|Minimum permissions required on a picture received as input.&lt;br /&gt;
|-&lt;br /&gt;
|rej_perms&lt;br /&gt;
|Permissions not accepted on pictures received as input.&lt;br /&gt;
|-&lt;br /&gt;
|start_frame&lt;br /&gt;
|Called when a frame is about to be given as input.&lt;br /&gt;
|-&lt;br /&gt;
|draw_slice&lt;br /&gt;
|Called when a slice of frame data has been given as input.&lt;br /&gt;
|-&lt;br /&gt;
|end_frame&lt;br /&gt;
|Called when the input frame has been completely sent.&lt;br /&gt;
|-&lt;br /&gt;
|get_video_buffer&lt;br /&gt;
|Called by the previous filter to request memory for a picture.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Fields only relevant to output pads are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|request_frame&lt;br /&gt;
|Requests that the filter output a frame.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Picture buffers ==&lt;br /&gt;
&lt;br /&gt;
=== Reference counting ===&lt;br /&gt;
All pictures in the filter system are reference counted.  This means that there is a picture buffer with memory allocated for the image data, and various filters can own a reference to the buffer.  When a reference is no longer needed, its owner frees the reference.  When the last reference to a picture buffer is freed, the filter system automatically frees the picture buffer.&lt;br /&gt;
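&lt;br /&gt;
In code, this looks roughly like the following (a sketch; avfilter_unref_pic() is the helper this revision of libavfilter provides for dropping a reference):&lt;br /&gt;
&lt;br /&gt;
 /* Done with this picture.  If this was the last reference,&lt;br /&gt;
  * the underlying picture buffer is freed automatically. */&lt;br /&gt;
 avfilter_unref_pic(ref);&lt;br /&gt;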
&lt;br /&gt;
=== Permissions ===&lt;br /&gt;
The upshot of multiple filters holding references to a single picture is that they all want some level of access to the image data.  Clearly, if one filter expects to read the image data without it changing, then no other filter should be allowed to write to that data.  The permissions system handles this.&lt;br /&gt;
&lt;br /&gt;
In most cases, when a filter prepares to output a frame, it will request a buffer from the filter to which it will be outputting.  It specifies the minimum permissions it needs on the buffer, though it may be given a buffer with more permissions than the minimum it requested.&lt;br /&gt;
&lt;br /&gt;
When it wants to pass this buffer to another filter as output, it creates a new reference to the picture, possibly with a reduced set of permissions.  This new reference will be owned by the filter receiving it.&lt;br /&gt;
&lt;br /&gt;
So, for example, a filter which drops frames when they are similar to the last frame it output would want to keep its own reference to a picture after outputting it, and to make sure that no other filter modifies the buffer.  It would do this by requesting the permissions AV_PERM_READ|AV_PERM_WRITE|AV_PERM_PRESERVE for itself, and removing the AV_PERM_WRITE permission from any references it gives to other filters.&lt;br /&gt;
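&lt;br /&gt;
As a rough sketch (assuming the avfilter_get_video_buffer(), avfilter_ref_pic() and avfilter_start_frame() helpers of this libavfilter revision; exact names and signatures may differ between versions), such a frame-dropping filter might do:&lt;br /&gt;
&lt;br /&gt;
 /* request a buffer this filter can read, write, and preserve */&lt;br /&gt;
 AVFilterPicRef *pic = avfilter_get_video_buffer(link,&lt;br /&gt;
         AV_PERM_READ | AV_PERM_WRITE | AV_PERM_PRESERVE);&lt;br /&gt;
 &lt;br /&gt;
 /* pass downstream a reference stripped of write permission,&lt;br /&gt;
  * so no later filter can modify the preserved image data */&lt;br /&gt;
 avfilter_start_frame(link, avfilter_ref_pic(pic, pic-&amp;gt;perms &amp;amp; ~AV_PERM_WRITE));&lt;br /&gt;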
&lt;br /&gt;
The available permissions are:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Permission&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_READ&lt;br /&gt;
|Can read the image data.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_WRITE&lt;br /&gt;
|Can write to the image data.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_PRESERVE&lt;br /&gt;
|Can assume that the image data will not be modified by other filters. This means that no other filters should have the AV_PERM_WRITE permission.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_REUSE&lt;br /&gt;
|The filter may output the same buffer multiple times, but the image data may not be changed for the different outputs.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_REUSE2&lt;br /&gt;
|The filter may output the same buffer multiple times, and may modify the image data between outputs.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Filter Links ==&lt;br /&gt;
A filter's inputs and outputs are connected to those of another filter through the AVFilterLink structure:&lt;br /&gt;
&lt;br /&gt;
 typedef struct AVFilterLink&lt;br /&gt;
 {&lt;br /&gt;
     AVFilterContext *src;       ///&amp;lt; source filter&lt;br /&gt;
     unsigned int srcpad;        ///&amp;lt; index of the output pad on the source filter&lt;br /&gt;
 &lt;br /&gt;
     AVFilterContext *dst;       ///&amp;lt; dest filter&lt;br /&gt;
     unsigned int dstpad;        ///&amp;lt; index of the input pad on the dest filter&lt;br /&gt;
 &lt;br /&gt;
     int w;                      ///&amp;lt; agreed upon image width&lt;br /&gt;
     int h;                      ///&amp;lt; agreed upon image height&lt;br /&gt;
     enum PixelFormat format;    ///&amp;lt; agreed upon image colorspace&lt;br /&gt;
 &lt;br /&gt;
     AVFilterFormats *in_formats;    ///&amp;lt; formats supported by source filter&lt;br /&gt;
     AVFilterFormats *out_formats;   ///&amp;lt; formats supported by destination filter&lt;br /&gt;
 &lt;br /&gt;
     AVFilterPicRef *srcpic;&lt;br /&gt;
 &lt;br /&gt;
     AVFilterPicRef *cur_pic;&lt;br /&gt;
     AVFilterPicRef *outpic;&lt;br /&gt;
 } AVFilterLink;&lt;br /&gt;
&lt;br /&gt;
The src and dst members indicate the filters at the source and destination ends of the link, respectively.  The srcpad indicates the index of the output pad on the source filter to which the link is connected.  Likewise, the dstpad indicates the index of the input pad on the destination filter.&lt;br /&gt;
&lt;br /&gt;
The in_formats member points to a list of formats supported by the source filter, while the out_formats member points to a list of formats supported by the destination filter.  The AVFilterFormats structure used to store the lists is reference counted, and in fact tracks its references (see the comments for the AVFilterFormats structure in libavfilter/avfilter.h for more information on how the colorspace negotiation works and why this is necessary).  The upshot is that if a filter provides pointers to the same list on multiple input/output links, those links will be forced to use the same format as each other.&lt;br /&gt;
&lt;br /&gt;
When two filters are connected, they need to agree upon the dimensions of the image data they'll be working with, and the format that data is in.  Once this has been agreed upon, these parameters are stored in the link structure.&lt;br /&gt;
&lt;br /&gt;
The srcpic member is used internally by the filter system, and should not be accessed directly.&lt;br /&gt;
&lt;br /&gt;
The cur_pic member is for the use of the destination filter.  When a frame is currently being sent over the link (ie. starting from the call to start_frame() and ending with the call to end_frame()), this contains the reference to the frame which is owned by the destination filter.&lt;br /&gt;
&lt;br /&gt;
The outpic member is described in the following tutorial on writing a simple filter.&lt;br /&gt;
&lt;br /&gt;
== Writing a simple filter ==&lt;br /&gt;
&lt;br /&gt;
=== Default filter entry points ===&lt;br /&gt;
Because the majority of filters will take exactly one input, produce exactly one output, and output one frame for every frame received as input, the filter system provides a number of default entry points to ease the development of such filters.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Entry point&lt;br /&gt;
!Actions taken by the default implementation&lt;br /&gt;
|-&lt;br /&gt;
|request_frame()&lt;br /&gt;
|Request a frame from the previous filter in the chain.&lt;br /&gt;
|-&lt;br /&gt;
|query_formats()&lt;br /&gt;
|Sets the list of supported formats on all input pads such that all links must use the same format, from a default list of formats containing most YUV and RGB/BGR formats.&lt;br /&gt;
|-&lt;br /&gt;
|start_frame()&lt;br /&gt;
|Request a buffer to store the output frame in.  A reference to this buffer is stored in the outpic member of the link hooked to the filter's output.  The next filter's start_frame() callback is called and given a reference to this buffer.&lt;br /&gt;
|-&lt;br /&gt;
|end_frame()&lt;br /&gt;
|Calls the next filter's end_frame() callback.  Frees the reference in the outpic member of the output link, if it was set (ie. if the default start_frame() was used).  Frees the cur_pic reference in the input link.&lt;br /&gt;
|-&lt;br /&gt;
|get_video_buffer()&lt;br /&gt;
|Returns a buffer with the AV_PERM_READ permission in addition to all the requested permissions.&lt;br /&gt;
|-&lt;br /&gt;
|config_props() on output pad&lt;br /&gt;
|Sets the image dimensions for the output link to the same as on the filter's input.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== The vf_negate filter ===&lt;br /&gt;
Having looked at the data structures and callback functions involved, let's take a look at an actual filter.  The vf_negate filter inverts the colors in a video.  It has one input, and one output, and outputs exactly one frame for every input frame.  In this way, it's fairly typical, and can take advantage of many of the default callback implementations offered by the filter system.&lt;br /&gt;
&lt;br /&gt;
First, let's take a look at the AVFilter structure at the bottom of the libavfilter/vf_negate.c file:&lt;br /&gt;
&lt;br /&gt;
 AVFilter avfilter_vf_negate =&lt;br /&gt;
 {&lt;br /&gt;
     .name      = &amp;quot;negate&amp;quot;,&lt;br /&gt;
     .author    = &amp;quot;Bobby Bingham&amp;quot;,&lt;br /&gt;
 &lt;br /&gt;
     .priv_size = sizeof(NegContext),&lt;br /&gt;
 &lt;br /&gt;
     .query_formats = query_formats,&lt;br /&gt;
 &lt;br /&gt;
     .inputs    = (AVFilterPad[]) {{ .name            = &amp;quot;default&amp;quot;,&lt;br /&gt;
                                     .type            = AV_PAD_VIDEO,&lt;br /&gt;
                                     .draw_slice      = draw_slice,&lt;br /&gt;
                                     .config_props    = config_props,&lt;br /&gt;
                                     .min_perms       = AV_PERM_READ, },&lt;br /&gt;
                                   { .name = NULL}},&lt;br /&gt;
     .outputs   = (AVFilterPad[]) {{ .name            = &amp;quot;default&amp;quot;,&lt;br /&gt;
                                     .type            = AV_PAD_VIDEO, },&lt;br /&gt;
                                   { .name = NULL}},&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
Here, you can see that the filter is named &amp;quot;negate,&amp;quot; and it needs sizeof(NegContext) bytes of data to store its context.  In the list of inputs and outputs, a pad whose name is set to NULL indicates the end of the list, so this filter has exactly one input and one output.  If you look closely at the pad definitions, you will see that fairly few callback functions are actually specified.  Because of the simplicity of the filter, the defaults can do most of the work for us.&lt;br /&gt;
&lt;br /&gt;
Let's take a look at the callback functions it does define.&lt;br /&gt;
&lt;br /&gt;
==== query_formats() ====&lt;br /&gt;
 static int query_formats(AVFilterContext *ctx)&lt;br /&gt;
 {&lt;br /&gt;
     avfilter_set_common_formats(ctx,&lt;br /&gt;
         avfilter_make_format_list(10,&lt;br /&gt;
                 PIX_FMT_YUV444P,  PIX_FMT_YUV422P,  PIX_FMT_YUV420P,&lt;br /&gt;
                 PIX_FMT_YUV411P,  PIX_FMT_YUV410P,&lt;br /&gt;
                 PIX_FMT_YUVJ444P, PIX_FMT_YUVJ422P, PIX_FMT_YUVJ420P,&lt;br /&gt;
                 PIX_FMT_YUV440P,  PIX_FMT_YUVJ440P));&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
This calls avfilter_make_format_list().  This function takes as its first parameter the number of formats which will follow as the remaining parameters.  The return value is an AVFilterFormats structure containing the given formats.  The avfilter_set_common_formats() function which this structure is passed to sets all connected links to use this same list of formats, which causes all the filters to use the same format after negotiation is complete.  As you can see, this filter supports a number of planar YUV colorspaces, including JPEG YUV colorspaces (the ones with a 'J' in the names).&lt;br /&gt;
&lt;br /&gt;
==== config_props() on an input pad ====&lt;br /&gt;
The config_props() on an input pad is responsible for verifying that the properties of the link connected to the pad are supported by the filter, and for making any updates to the filter's context which the link's properties require.&lt;br /&gt;
&lt;br /&gt;
TODO: quick explanation of YUV colorspaces, chroma subsampling, difference in range of YUV and JPEG YUV.&lt;br /&gt;
&lt;br /&gt;
Let's take a look at the way in which this filter stores its context:&lt;br /&gt;
&lt;br /&gt;
 typedef struct&lt;br /&gt;
 {&lt;br /&gt;
     int offY, offUV;&lt;br /&gt;
     int hsub, vsub;&lt;br /&gt;
 } NegContext;&lt;br /&gt;
&lt;br /&gt;
That's right.  The priv_size member of the AVFilter structure tells the filter system how many bytes to reserve for this structure.  The hsub and vsub members are used for chroma subsampling, and the offY and offUV members are used for handling the difference in range between YUV and JPEG YUV.  Let's see how these are set in the input pad's config_props:&lt;br /&gt;
&lt;br /&gt;
 static int config_props(AVFilterLink *link)&lt;br /&gt;
 {&lt;br /&gt;
     NegContext *neg = link-&amp;gt;dst-&amp;gt;priv;&lt;br /&gt;
 &lt;br /&gt;
     avcodec_get_chroma_sub_sample(link-&amp;gt;format, &amp;amp;neg-&amp;gt;hsub, &amp;amp;neg-&amp;gt;vsub);&lt;br /&gt;
 &lt;br /&gt;
     switch(link-&amp;gt;format) {&lt;br /&gt;
     case PIX_FMT_YUVJ444P:&lt;br /&gt;
     case PIX_FMT_YUVJ422P:&lt;br /&gt;
     case PIX_FMT_YUVJ420P:&lt;br /&gt;
     case PIX_FMT_YUVJ440P:&lt;br /&gt;
         neg-&amp;gt;offY  =&lt;br /&gt;
         neg-&amp;gt;offUV = 0;&lt;br /&gt;
         break;&lt;br /&gt;
     default:&lt;br /&gt;
         neg-&amp;gt;offY  = -4;&lt;br /&gt;
         neg-&amp;gt;offUV = 1;&lt;br /&gt;
     }&lt;br /&gt;
 &lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
This simply calls avcodec_get_chroma_sub_sample() to get the chroma subsampling shift factors, and stores those in the context.  It then stores a set of offsets for compensating for different luma/chroma value ranges for JPEG YUV, and a different set of offsets for other YUV colorspaces.  It returns zero to indicate success, because there are no possible input cases which this filter cannot handle.&lt;br /&gt;
&lt;br /&gt;
==== draw_slice() ====&lt;br /&gt;
Finally, the function which actually does the processing for the filter, draw_slice():&lt;br /&gt;
&lt;br /&gt;
 static void draw_slice(AVFilterLink *link, int y, int h)&lt;br /&gt;
 {&lt;br /&gt;
     NegContext *neg = link-&amp;gt;dst-&amp;gt;priv;&lt;br /&gt;
     AVFilterPicRef *in  = link-&amp;gt;cur_pic;&lt;br /&gt;
     AVFilterPicRef *out = link-&amp;gt;dst-&amp;gt;outputs[0]-&amp;gt;outpic;&lt;br /&gt;
     uint8_t *inrow, *outrow;&lt;br /&gt;
     int i, j, plane;&lt;br /&gt;
 &lt;br /&gt;
     /* luma plane */&lt;br /&gt;
     inrow  = in-&amp;gt; data[0] + y * in-&amp;gt; linesize[0];&lt;br /&gt;
     outrow = out-&amp;gt;data[0] + y * out-&amp;gt;linesize[0];&lt;br /&gt;
     for(i = 0; i &amp;lt; h; i ++) {&lt;br /&gt;
         for(j = 0; j &amp;lt; link-&amp;gt;w; j ++)&lt;br /&gt;
             outrow[j] = 255 - inrow[j] + neg-&amp;gt;offY;&lt;br /&gt;
         inrow  += in-&amp;gt; linesize[0];&lt;br /&gt;
         outrow += out-&amp;gt;linesize[0];&lt;br /&gt;
     }&lt;br /&gt;
 &lt;br /&gt;
     /* chroma planes */&lt;br /&gt;
     for(plane = 1; plane &amp;lt; 3; plane ++) {&lt;br /&gt;
         inrow  = in-&amp;gt; data[plane] + (y &amp;gt;&amp;gt; neg-&amp;gt;vsub) * in-&amp;gt; linesize[plane];&lt;br /&gt;
         outrow = out-&amp;gt;data[plane] + (y &amp;gt;&amp;gt; neg-&amp;gt;vsub) * out-&amp;gt;linesize[plane];&lt;br /&gt;
 &lt;br /&gt;
         for(i = 0; i &amp;lt; h &amp;gt;&amp;gt; neg-&amp;gt;vsub; i ++) {&lt;br /&gt;
             for(j = 0; j &amp;lt; link-&amp;gt;w &amp;gt;&amp;gt; neg-&amp;gt;hsub; j ++)&lt;br /&gt;
                 outrow[j] = 255 - inrow[j] + neg-&amp;gt;offUV;&lt;br /&gt;
             inrow  += in-&amp;gt; linesize[plane];&lt;br /&gt;
             outrow += out-&amp;gt;linesize[plane];&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
 &lt;br /&gt;
     avfilter_draw_slice(link-&amp;gt;dst-&amp;gt;outputs[0], y, h);&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
The y parameter indicates the top of the current slice, and the h parameter the slice's height.  Areas of the image outside this slice should not be assumed to be meaningful (though a method to allow this assumption in order to simplify boundary cases for some filters is coming in the future).&lt;br /&gt;
&lt;br /&gt;
This sets inrow to point to the beginning of the first row of the slice in the input, and outrow similarly for the output.  Then, for each row, it loops through all the pixels, subtracting them from 255, and adding the offset which was determined in config_props() to account for different value ranges.&lt;br /&gt;
&lt;br /&gt;
It then does the same thing for the chroma planes.  Note how the width and height are shifted right to account for the chroma subsampling.&lt;br /&gt;
&lt;br /&gt;
Once the drawing is completed, the slice is sent to the next filter by calling avfilter_draw_slice().&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:FFmpeg Tutorials]]&lt;br /&gt;
[[Category:FFmpeg]]&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=FFmpeg_Wishlist&amp;diff=9191</id>
		<title>FFmpeg Wishlist</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=FFmpeg_Wishlist&amp;diff=9191"/>
		<updated>2007-12-23T23:09:25Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: h/vflip can now be done with libavfilter&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The various desired features for FFmpeg can be found in the following pages:&lt;br /&gt;
* [https://roundup.mplayerhq.hu/roundup/ffmpeg/issue?%40&amp;amp;%40columns=title&amp;amp;%40columns=topic&amp;amp;id=&amp;amp;%40columns=id&amp;amp;%40columns=activity&amp;amp;%40sort=activity&amp;amp;%40columns=priority&amp;amp;%40group=priority&amp;amp;type=2&amp;amp;status=2&amp;amp;%40columns=substatus&amp;amp;%40pagesize=50&amp;amp;%40startwith=0&amp;amp;%40action=search Open feature requests in the issue tracker]&lt;br /&gt;
* [http://svn.mplayerhq.hu/ffmpeg/trunk/doc/TODO?view=co TODO file in the SVN tree]&lt;br /&gt;
* [[FFmpeg Summer Of Code]] pages&lt;br /&gt;
* Finish and commit any code not yet committed at the [http://svn.mplayerhq.hu/soc/ SoC FFmpeg tree]&lt;br /&gt;
&lt;br /&gt;
Also, other features requests can be found in:&lt;br /&gt;
* [https://roundup.mplayerhq.hu/roundup/ffmpeg/issue?%40&amp;amp;%40columns=title&amp;amp;%40columns=topic&amp;amp;id=&amp;amp;%40columns=id&amp;amp;%40columns=activity&amp;amp;%40sort=activity&amp;amp;%40columns=priority&amp;amp;%40group=priority&amp;amp;type=2&amp;amp;status=1&amp;amp;%40columns=substatus&amp;amp;%40pagesize=50&amp;amp;%40startwith=0&amp;amp;%40action=search Feature requests marked as &amp;quot;new&amp;quot;]&lt;br /&gt;
* Below in this page (mostly deprecated items)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See also the discussion about [[Ffmpeg audio api|Audio API]] TODOs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Old list =&lt;br /&gt;
&lt;br /&gt;
The following is deprecated; please '''do not''' add new items to this list, use the issue tracker instead. Also, send a message to the mailing list before implementing one of these items, as they could be works in progress or no longer wanted.&lt;br /&gt;
&lt;br /&gt;
Moving any of these items to a '''proper''' feature request in the issue tracker is welcome.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Decoders ==&lt;br /&gt;
* Add b-frame support to the ffsvq3 decoder&lt;br /&gt;
* integrate [[Speex]] (glue code or native) &lt;br /&gt;
* Fix &amp;quot;[rv20 @ 009C8BF0]unknown bit3 set&amp;quot; in [[RV20]] decoder&lt;br /&gt;
* [[xeb|XEB]] - the [[RatDVD]] video codec (stored in [[xvo|XVO]] container format)&lt;br /&gt;
* VNC decoder, files created by vncrec. Re-use code from [[VMware Video]] decoder http://www.sodan.org/~penny/vncrec/&lt;br /&gt;
* Additional game formats support:&lt;br /&gt;
** [[VQA]] v3 support, see http://www.gamers.org/pub/idgames2/planetquake/planetcnc/cncdz/&lt;br /&gt;
** [[Gremlin Digital Video]]&lt;br /&gt;
** [[ARMovie|ARMovie/RPL]]&lt;br /&gt;
** [[ESCAPE]]&lt;br /&gt;
** [[M95]]&lt;br /&gt;
&lt;br /&gt;
== Demuxers ==&lt;br /&gt;
* [[FluxDVD]] / [[RatDVD]] demuxer for [[xvo|XVO]] files (Note! [[RatDVD]] is the predecessor of [[FluxDVD]])&lt;br /&gt;
&lt;br /&gt;
== Muxers ==&lt;br /&gt;
* DVB (MPEG-TS) muxer inside DVB containers&lt;br /&gt;
** MPEG-1/2 video-streams inside DVB containers&lt;br /&gt;
** MPEG-4 ASP video-streams inside DVB containers&lt;br /&gt;
** MPEG-4 AVC (H.264) video-streams inside DVB containers&lt;br /&gt;
** AC3 audio-streams inside DVB containers&lt;br /&gt;
*** Multiple AC3 audio-streams inside DVB containers&lt;br /&gt;
** MP3 audio-streams inside DVB containers&lt;br /&gt;
*** Multiple MP3 audio-streams inside DVB containers&lt;br /&gt;
* NSV muxer&lt;br /&gt;
* NSA muxer&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Features ==&lt;br /&gt;
*  Create a new audio API system&lt;br /&gt;
*  Grabbing from video devices under windows&lt;br /&gt;
** Apply [[Interesting Patches#Microsoft Video for Windows capture support|this]] VFW capture patch&lt;br /&gt;
** Create a DirectShow patch&lt;br /&gt;
* Improve existing documentation and add additional means to document:&lt;br /&gt;
**Web&lt;br /&gt;
**WIKI&lt;br /&gt;
**manpage&lt;br /&gt;
&lt;br /&gt;
== Misc ==&lt;br /&gt;
* Clean up the h263 rtp [[Interesting Patches#H.263 rtp patch|patch]].&lt;br /&gt;
&lt;br /&gt;
== Streaming Media Network Protocols ==&lt;br /&gt;
Streaming Media Network Protocols (client and server-side) improvements/enhancements and related ideas for new features/functions.&lt;br /&gt;
*  Create a common 'stream demuxer/parser library' for the client side to receive input streams (and/or an API for adding support for additional streaming formats?) - an LGPL'ed sub-library in FFmpeg with all stream demuxers/parsers gathered (similar to libpostproc and libavutil). Call it &amp;quot;libstream&amp;quot; (or &amp;quot;stream&amp;quot; or whatever). Move FFmpeg's existing stream code, like HTTP and RTSP/RTP, there. This will help reduce future code duplication by sharing common code, making it easier to add support for additional streaming formats. All together, this makes it easy for audio/video players using FFmpeg to add all-in-one streaming support.&lt;br /&gt;
**Maybe use either [http://www.mplayerhq.hu MPlayer]'s &amp;quot;''stream''&amp;quot; library structure, [http://www.live555.com LIVE555], [http://curl.haxx.se cURL], or probably the better [http://streaming.polito.it/client/library libnms] (from [http://streaming.polito.it/client NeMeSi]) as a base for such a common library?&lt;br /&gt;
*Add support for additional streaming protocols (on the client side) and improve/enhance support for existing protocols:&lt;br /&gt;
** HTTP (Hypertext Transfer Protocol) client&lt;br /&gt;
*** plus a SSL (Secure Sockets Layer) client support for HTTPS&lt;br /&gt;
** UDP (User Datagram Protocol) client&lt;br /&gt;
** RTSP - Real-Time Streaming Protocol (RFC2326) client&lt;br /&gt;
** RTP/RTCP - Real-Time Transport Protocol/RTP Control Protocol (RFC3550) client&lt;br /&gt;
** RTP Profile for Audio and Video Conferences with Minimal Control (RFC3551) client&lt;br /&gt;
** RealMedia RTSP/RDT (Real Time Streaming Protocol /  Real Data Transport)  client&lt;br /&gt;
** SDP (Session Description Protocol) / SSDP (Simple Service Discovery Protocol) client&lt;br /&gt;
** MMS (Microsoft Media Services) client&lt;br /&gt;
*** including the subprotocol mmsh (MMS over HTTP) and mmst (MMS over TCP)&lt;br /&gt;
*FFServer (streaming server) updating and improving:&lt;br /&gt;
**FFServer code hasn't been updated for quite a while&lt;br /&gt;
**Support for RTSP interleaved RTP media &lt;br /&gt;
**RTSP over HTTP tunneling&lt;br /&gt;
**SSL (Secure Sockets Layer) support&lt;br /&gt;
**TLS (Transport Layer Security) support&lt;br /&gt;
**SCTP (Stream Control Transmission Protocol) support&lt;br /&gt;
***including tunneling SCTP over UDP&lt;br /&gt;
**Per-asset accounting options &lt;br /&gt;
**Profiling and performance improvements of the RTSP, HTTP and RTP server code &lt;br /&gt;
**Streaming to clients like WMP 9, 10 and 11 is broken&lt;br /&gt;
**MMS server streaming support in FFServer (especially for Linux)&lt;br /&gt;
*** including the subprotocol mmsh (MMS over HTTP) and mmst (MMS over TCP)&lt;br /&gt;
*** Note that al3x has gotten something working with ffserver, you might want to ask him what needs to be done as well :) --[[User:Compn|Compn]] 14:22, 19 March 2007 (EDT)&lt;br /&gt;
***You should also take a look at the [http://streaming.polito.it/server FENG (RTSP Streaming Server)] code, [http://streaming.polito.it/embedded NetEmbryo (Embedded Open Media Streaming Library)], and also [http://curl.haxx.se cURL]  --[[User:Gamester17|Gamester17]] 11:20, 29 March 2007 (GMT+1)&lt;br /&gt;
&lt;br /&gt;
==See Also==&lt;br /&gt;
*[[FFmpeg Summer Of Code]] for more suggestions/requests (ideas for developers).&lt;br /&gt;
*[http://bugzilla.mplayerhq.hu/buglist.cgi?query_format=specific&amp;amp;order=relevance+desc&amp;amp;bug_status=__open__&amp;amp;product=FFmpeg&amp;amp;content= FFmpeg bugs] for bugs in FFmpeg's codecs that you can help fix or add additional information/samples to.&lt;br /&gt;
*[[:Category:Formats missing in FFmpeg]] for formats not yet implemented in FFmpeg&lt;br /&gt;
&lt;br /&gt;
[[Category:FFmpeg]]&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=User:Koorogi&amp;diff=9190</id>
		<title>User:Koorogi</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=User:Koorogi&amp;diff=9190"/>
		<updated>2007-12-23T23:06:10Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: little more info&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Bobby Bingham is an [[FFmpeg Summer Of Code 2007]] student working on [[libavfilter]].  He is also semi-fluent in Japanese, and is currently spending his spare time writing a Sega Saturn emulator for the fun of it.  Oh, and to learn more about how it all works.&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=FFmpeg_/_Libav_Summer_Of_Code&amp;diff=9189</id>
		<title>FFmpeg / Libav Summer Of Code</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=FFmpeg_/_Libav_Summer_Of_Code&amp;diff=9189"/>
		<updated>2007-12-23T22:59:28Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: colorspace negotiation has been revamped&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [[FFmpeg]] project has been a participant in the [http://code.google.com/soc/ Google Summer of Code] program during the 2006 and 2007 seasons.&lt;br /&gt;
&lt;br /&gt;
* [[FFmpeg Summer Of Code 2006|2006 project page]]&lt;br /&gt;
* [[FFmpeg Summer Of Code 2007|2007 project page]]&lt;br /&gt;
&lt;br /&gt;
Each accepted project is developed in its own sandbox, separate from the main FFmpeg codebase. Naturally, the end goal of each of the accepted FFmpeg projects ought to be to have that code in shape for acceptance into the production codebase. This page tracks the status of each project and how well each student did.&lt;br /&gt;
&lt;br /&gt;
== 2006 Projects ==&lt;br /&gt;
&lt;br /&gt;
=== VC-1 Decoder ===&lt;br /&gt;
* Student: [[User:Kostya|Kostya Shishkov]]&lt;br /&gt;
* Mentor: [[User:Multimedia Mike|Mike Melanson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== AMR Decoder ===&lt;br /&gt;
* Student: [[User:superdump|Robert Swain]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;project not finished during SoC but continues working on it&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;To be expected soon. (Within a few weeks from 20071203 I think.)&amp;lt;/font&amp;gt; Narrow band decoding documented on [[AMR-NB]] and floating point code has been implemented up to synthesis.&lt;br /&gt;
&lt;br /&gt;
=== AC3 Decoder ===&lt;br /&gt;
* Student: [[User:Cloud9|Kartikey Mahendra BHATT]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: Picked up by [[User:Jruggle|Justin Ruggles]] and &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== AAC Decoder ===&lt;br /&gt;
* Student: Maxim Gavrilov&lt;br /&gt;
* Mentor: [[User:ods15|Oded Shimon]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Picked up by&amp;lt;/font&amp;gt; [[User:andoma|Andreas Öman]] who is currently preparing code for merge with FFmpeg.&lt;br /&gt;
&lt;br /&gt;
=== Vorbis Encoder ===&lt;br /&gt;
* Student: Mathew Philip&lt;br /&gt;
* Mentor: [[User:ods15|Oded Shimon]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project barely started&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: Picked up by [[User:ods15|Oded Shimon]] and &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;committed to FFmpeg&amp;lt;/font&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== 2007 Projects ==&lt;br /&gt;
&lt;br /&gt;
=== RealVideo 4 Decoder ===&lt;br /&gt;
* Student: [[User:Kostya|Kostya Shishkov]]&lt;br /&gt;
* Mentor: [[User:Multimedia Mike|Mike Melanson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;in the process of being committed to FFmpeg;&amp;lt;/font&amp;gt; the project goal has also morphed to include a RealVideo 3 decoder, since the two schemes are so similar. Both RV30 and RV40 are decodable with visual artifacts.&lt;br /&gt;
&lt;br /&gt;
=== QCELP Decoder ===&lt;br /&gt;
* Student: [[User:Reynaldo|Reynaldo Verdejo Pinochet]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;project not finished during SoC but continues working on it&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;slowly progressing, it's working though&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Matroska Muxer ===&lt;br /&gt;
* Student: David Conrad&lt;br /&gt;
* Mentor: [[User:aurel|Aurélien Jacobs]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;FFmpeg committer&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#00CC00&amp;quot;&amp;gt;Accepted into the FFmpeg codebase.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Video Filter API (AKA [[Libavfilter|libavfilter]]) ===&lt;br /&gt;
* Student: [[User:Koorogi|Bobby Bingham]]&lt;br /&gt;
* Mentor: [[User:Merbanan|Benjamin Larsson]] and Michael Niedermayer&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;project not finished during SoC but continues working on it&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Working code for ffplay and ffmpeg.&amp;lt;/font&amp;gt; Still in  development (albeit slowly) by [[User:Koorogi|Bobby Bingham]] and [[User:Vitor|Vitor]].&lt;br /&gt;
&lt;br /&gt;
=== E-AC3 Decoder ===&lt;br /&gt;
* Student: Bartlomiej Wolowiec&lt;br /&gt;
* Mentor:  [[User:Jruggle|Justin Ruggles]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;project not finished during SoC, (continues working on it?)&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;Working for most available samples. There are still some unimplemented features though.&amp;lt;/font&amp;gt; The code is currently not clean enough for inclusion in FFmpeg. Now picked up by [[User:Jruggle|Justin Ruggles]] and being beaten into shape.&lt;br /&gt;
&lt;br /&gt;
=== JPEG 2000 Encoder and Decoder ===&lt;br /&gt;
* Student: Kamil Nowosad&lt;br /&gt;
* Mentor: [[User:pengvado|Loren Merritt]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;The code is working but not all features are supported.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Dirac Encoder and Decoder ===&lt;br /&gt;
* Student: Marco Gerards&lt;br /&gt;
* Mentor: [[User:Lu_zero|Luca Barbato]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;project not finished during SoC but continues working on it&amp;lt;/font&amp;gt;, just slower than before due to other tasks taking priority. (Winter vacations approaching!)&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CCCC00&amp;quot;&amp;gt;The decoder is in good shape, the encoder still needs more work. Both need to be updated to the latest spec.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== TS Muxer ===&lt;br /&gt;
* Student: Xiaohui Sun&lt;br /&gt;
* Mentor:  [[User:bcoudurier|Baptiste Coudurier]]&lt;br /&gt;
* Student Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt;disappeared, project unfinished&amp;lt;/font&amp;gt;&lt;br /&gt;
* Code Status: &amp;lt;font color=&amp;quot;#CC0000&amp;quot;&amp;gt; [[Interesting Patches#PES packetizer by Xiaohui Sun|Changes]] requested during the review process for FFmpeg inclusion were never made.&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:FFmpeg]]&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=FFmpeg_filter_HOWTO&amp;diff=8298</id>
		<title>FFmpeg filter HOWTO</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=FFmpeg_filter_HOWTO&amp;diff=8298"/>
		<updated>2007-08-20T16:49:10Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: Explain the workings of a simple filter&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is meant as an introduction of writing filters for [[libavfilter]].  This is a work in progress, but should at least point you in the right direction for writing simple filters.&lt;br /&gt;
&lt;br /&gt;
== Definition of a filter ==&lt;br /&gt;
&lt;br /&gt;
=== AVFilter ===&lt;br /&gt;
All filters are described by an AVFilter structure.  This structure gives information needed to initialize the filter, and information on the entry points into the filter code.  This structure is declared in libavfilter/avfilter.h:&lt;br /&gt;
&lt;br /&gt;
 typedef struct&lt;br /&gt;
 {&lt;br /&gt;
     char *name;         ///&amp;lt; filter name&lt;br /&gt;
     char *author;       ///&amp;lt; filter author&lt;br /&gt;
 &lt;br /&gt;
     int priv_size;      ///&amp;lt; size of private data to allocate for the filter&lt;br /&gt;
 &lt;br /&gt;
     int (*init)(AVFilterContext *ctx, const char *args, void *opaque);&lt;br /&gt;
     void (*uninit)(AVFilterContext *ctx);&lt;br /&gt;
 &lt;br /&gt;
     const AVFilterPad *inputs;  ///&amp;lt; NULL terminated list of inputs. NULL if none&lt;br /&gt;
     const AVFilterPad *outputs; ///&amp;lt; NULL terminated list of outputs. NULL if none&lt;br /&gt;
 } AVFilter;&lt;br /&gt;
&lt;br /&gt;
=== AVFilterPad ===&lt;br /&gt;
Let's take a quick look at the AVFilterPad structure, which is used to describe the inputs and outputs of the filter.  This is also defined in libavfilter/avfilter.h:&lt;br /&gt;
&lt;br /&gt;
 typedef struct AVFilterPad&lt;br /&gt;
 {&lt;br /&gt;
     char *name;&lt;br /&gt;
     int type;&lt;br /&gt;
 &lt;br /&gt;
     int min_perms;&lt;br /&gt;
     int rej_perms;&lt;br /&gt;
 &lt;br /&gt;
     int *(*query_formats)(AVFilterLink *link);&lt;br /&gt;
 &lt;br /&gt;
     void (*start_frame)(AVFilterLink *link, AVFilterPicRef *picref);&lt;br /&gt;
     AVFilterPicRef *(*get_video_buffer)(AVFilterLink *link, int perms);&lt;br /&gt;
     void (*end_frame)(AVFilterLink *link);&lt;br /&gt;
     void (*draw_slice)(AVFilterLink *link, int y, int height);&lt;br /&gt;
 &lt;br /&gt;
     int (*request_frame)(AVFilterLink *link);&lt;br /&gt;
 &lt;br /&gt;
     int (*config_props)(AVFilterLink *link);&lt;br /&gt;
 } AVFilterPad;&lt;br /&gt;
&lt;br /&gt;
The actual definition in the header file has doxygen comments describing each entry point, its purpose, and what type of pads it is relevant for.  These fields are relevant for all pads:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|name&lt;br /&gt;
|Name of the pad.  No two inputs should have the same name, and no two outputs should have the same name.&lt;br /&gt;
|-&lt;br /&gt;
|type&lt;br /&gt;
|Only AV_PAD_VIDEO currently.&lt;br /&gt;
|-&lt;br /&gt;
|query_formats&lt;br /&gt;
|Returns a list of colorspaces supported on the pad.&lt;br /&gt;
|-&lt;br /&gt;
|config_props&lt;br /&gt;
|Handles configuration of the link connected to the pad&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Fields only relevant to input pads are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|min_perms&lt;br /&gt;
|Minimum permissions required on a picture received as input.&lt;br /&gt;
|-&lt;br /&gt;
|rej_perms&lt;br /&gt;
|Permissions not accepted on pictures received as input.&lt;br /&gt;
|-&lt;br /&gt;
|start_frame&lt;br /&gt;
|Called when a frame is about to be given as input.&lt;br /&gt;
|-&lt;br /&gt;
|draw_slice&lt;br /&gt;
|Called when a slice of frame data has been given as input.&lt;br /&gt;
|-&lt;br /&gt;
|end_frame&lt;br /&gt;
|Called when the input frame has been completely sent.&lt;br /&gt;
|-&lt;br /&gt;
|get_video_buffer&lt;br /&gt;
|Called by the previous filter to request memory for a picture.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Fields only relevant to output pads are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|request_frame&lt;br /&gt;
|Requests that the filter output a frame.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Picture buffers ==&lt;br /&gt;
&lt;br /&gt;
=== Reference counting ===&lt;br /&gt;
All pictures in the filter system are reference counted.  This means that there is a picture buffer with memory allocated for the image data, and various filters can own a reference to the buffer.  When a reference is no longer needed, its owner frees the reference.  When the last reference to a picture buffer is freed, the filter system automatically frees the picture buffer.&lt;br /&gt;
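The scheme described above can be sketched as a minimal stand-alone example. The `PicBuffer`, `pic_alloc`, `pic_ref`, and `pic_unref` names are hypothetical illustrations, not libavfilter's actual API:

```c
/* Hypothetical reference-counted picture buffer, for illustration
 * only.  malloc/free come from stdlib.h. */
typedef struct {
    unsigned char *data;
    int refcount;
} PicBuffer;

static PicBuffer *pic_alloc(int size)
{
    PicBuffer *buf = malloc(sizeof(*buf));
    buf->data = malloc(size);
    buf->refcount = 1;          /* the creator owns the first reference */
    return buf;
}

static PicBuffer *pic_ref(PicBuffer *buf)
{
    buf->refcount++;            /* another filter shares the buffer */
    return buf;
}

static void pic_unref(PicBuffer *buf)
{
    if (--buf->refcount == 0) { /* last reference freed: free the buffer */
        free(buf->data);
        free(buf);
    }
}
```

A filter that is done with a picture calls only pic_unref() on its own reference; the buffer itself is freed automatically when the count reaches zero.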
&lt;br /&gt;
=== Permissions ===&lt;br /&gt;
The upshot of multiple filters having references to a single picture is that each will want some level of access to the image data. Clearly, if one filter expects to read the image data without it changing, then no other filter may write to that data. The permissions system handles this.&lt;br /&gt;
&lt;br /&gt;
In most cases, when a filter prepares to output a frame, it will request a buffer from the filter to which it will be outputting. It specifies the minimum permissions it needs on the buffer, though it may be given a buffer with more permissions than it requested.&lt;br /&gt;
&lt;br /&gt;
When it wants to pass this buffer to another filter as output, it creates a new reference to the picture, possibly with a reduced set of permissions.  This new reference will be owned by the filter receiving it.&lt;br /&gt;
&lt;br /&gt;
So, for example, a filter which drops frames when they are similar to the last frame it output would want to keep its own reference to a picture after outputting it, and to make sure that no other filter modifies the buffer. It would do this by requesting the permissions AV_PERM_READ|AV_PERM_WRITE|AV_PERM_PRESERVE for itself, and removing the AV_PERM_WRITE permission from any references it gives to other filters.&lt;br /&gt;
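To make the frame-dropping example concrete, here is a small sketch of the permission masking. The numeric flag values are assumed for illustration; the real definitions live in libavfilter/avfilter.h:

```c
/* Flag values assumed for illustration; the real ones are defined
 * in libavfilter/avfilter.h. */
enum {
    AV_PERM_READ     = 0x01,
    AV_PERM_WRITE    = 0x02,
    AV_PERM_PRESERVE = 0x04
};

/* Permissions the frame-dropping filter requests for itself. */
static int own_perms(void)
{
    return AV_PERM_READ | AV_PERM_WRITE | AV_PERM_PRESERVE;
}

/* Permissions on a reference handed downstream: the same set with
 * the write bit removed, so no other filter can modify the buffer
 * we still compare against.  (Real code would clear the bit with a
 * bitwise and-not of AV_PERM_WRITE; subtraction works here because
 * own_perms() is known to contain the bit.) */
static int downstream_perms(void)
{
    return own_perms() - AV_PERM_WRITE;
}
```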
&lt;br /&gt;
The available permissions are:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Permission&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_READ&lt;br /&gt;
|Can read the image data.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_WRITE&lt;br /&gt;
|Can write to the image data.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_PRESERVE&lt;br /&gt;
|Can assume that the image data will not be modified by other filters. This means that no other filters should have the AV_PERM_WRITE permission.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_REUSE&lt;br /&gt;
|The filter may output the same buffer multiple times, but the image data may not be changed for the different outputs.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_REUSE2&lt;br /&gt;
|The filter may output the same buffer multiple times, and may modify the image data between outputs.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Filter Links ==&lt;br /&gt;
A filter's inputs and outputs are connected to those of another filter through the AVFilterLink structure:&lt;br /&gt;
&lt;br /&gt;
 typedef struct AVFilterLink&lt;br /&gt;
 {&lt;br /&gt;
     AVFilterContext *src;       ///&amp;lt; source filter&lt;br /&gt;
     unsigned int srcpad;        ///&amp;lt; index of the output pad on the source filter&lt;br /&gt;
 &lt;br /&gt;
     AVFilterContext *dst;       ///&amp;lt; dest filter&lt;br /&gt;
     unsigned int dstpad;        ///&amp;lt; index of the input pad on the dest filter&lt;br /&gt;
 &lt;br /&gt;
     int w;                      ///&amp;lt; agreed upon image width&lt;br /&gt;
     int h;                      ///&amp;lt; agreed upon image height&lt;br /&gt;
     enum PixelFormat format;    ///&amp;lt; agreed upon image colorspace&lt;br /&gt;
 &lt;br /&gt;
     AVFilterPicRef *srcpic;&lt;br /&gt;
 &lt;br /&gt;
     AVFilterPicRef *cur_pic;&lt;br /&gt;
     AVFilterPicRef *outpic;&lt;br /&gt;
 } AVFilterLink;&lt;br /&gt;
&lt;br /&gt;
The src and dst members indicate the filters at the source and destination ends of the link, respectively.  The srcpad indicates the index of the output pad on the source filter to which the link is connected.  Likewise, the dstpad indicates the index of the input pad on the destination filter.&lt;br /&gt;
&lt;br /&gt;
When two filters are connected, they need to agree upon the dimensions of the image data they'll be working with, and the format that data is in.  Once this has been agreed upon, these parameters are stored in the link structure.&lt;br /&gt;
&lt;br /&gt;
The srcpic member is used internally by the filter system, and should not be accessed directly.&lt;br /&gt;
&lt;br /&gt;
The cur_pic member is for the use of the destination filter. While a frame is being sent over the link (i.e. from the call to start_frame() until the call to end_frame()), this contains the reference to the frame which is owned by the destination filter.&lt;br /&gt;
&lt;br /&gt;
The outpic member is described in the following tutorial on writing a simple filter.&lt;br /&gt;
&lt;br /&gt;
== Writing a simple filter ==&lt;br /&gt;
&lt;br /&gt;
=== Default filter entry points ===&lt;br /&gt;
Because the majority of filters will take exactly one input, produce exactly one output, and output one frame for every frame received as input, the filter system provides a number of default entry points to ease the development of such filters.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Entry point&lt;br /&gt;
!Actions taken by the default implementation&lt;br /&gt;
|-&lt;br /&gt;
|request_frame()&lt;br /&gt;
|Request a frame from the previous filter in the chain.&lt;br /&gt;
|-&lt;br /&gt;
|query_formats() on output pad&lt;br /&gt;
|Return a list of formats indicating that the format currently used on the input pad is the only supported output format.&lt;br /&gt;
|-&lt;br /&gt;
|start_frame()&lt;br /&gt;
|Request a buffer to store the output frame in.  A reference to this buffer is stored in the outpic member of the link hooked to the filter's output.  The next filter's start_frame() callback is called and given a reference to this buffer.&lt;br /&gt;
|-&lt;br /&gt;
|end_frame()&lt;br /&gt;
|Calls the next filter's end_frame() callback.  Frees the reference in the outpic member of the output link, if it was set (i.e. if the default start_frame() was used).  Frees the cur_pic reference in the input link.&lt;br /&gt;
|-&lt;br /&gt;
|get_video_buffer()&lt;br /&gt;
|Returns a buffer with the AV_PERM_READ permission in addition to all the requested permissions.&lt;br /&gt;
|-&lt;br /&gt;
|config_props() on output pad&lt;br /&gt;
|Sets the image dimensions for the output link to the same as on the filter's input.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== The vf_negate filter ===&lt;br /&gt;
Having looked at the data structures and callback functions involved, let's take a look at an actual filter.  The vf_negate filter inverts the colors in a video.  It has one input and one output, and outputs exactly one frame for every input frame.  In this way it's fairly typical, and can take advantage of many of the default callback implementations offered by the filter system.&lt;br /&gt;
&lt;br /&gt;
First, let's take a look at the AVFilter structure at the bottom of the libavfilter/vf_negate.c file:&lt;br /&gt;
&lt;br /&gt;
 AVFilter avfilter_vf_negate =&lt;br /&gt;
 {&lt;br /&gt;
     .name      = &amp;quot;negate&amp;quot;,&lt;br /&gt;
     .author    = &amp;quot;Bobby Bingham&amp;quot;,&lt;br /&gt;
 &lt;br /&gt;
     .priv_size = sizeof(NegContext),&lt;br /&gt;
 &lt;br /&gt;
     .inputs    = (AVFilterPad[]) {{ .name            = &amp;quot;default&amp;quot;,&lt;br /&gt;
                                     .type            = AV_PAD_VIDEO,&lt;br /&gt;
                                     .draw_slice      = draw_slice,&lt;br /&gt;
                                     .query_formats   = query_formats,&lt;br /&gt;
                                     .config_props    = config_props,&lt;br /&gt;
                                     .min_perms       = AV_PERM_READ, },&lt;br /&gt;
                                   { .name = NULL}},&lt;br /&gt;
     .outputs   = (AVFilterPad[]) {{ .name            = &amp;quot;default&amp;quot;,&lt;br /&gt;
                                     .type            = AV_PAD_VIDEO, },&lt;br /&gt;
                                   { .name = NULL}},&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
Here, you can see that the filter is named &amp;quot;negate,&amp;quot; and it needs sizeof(NegContext) bytes of data to store its context.  In the list of inputs and outputs, a pad whose name is set to NULL indicates the end of the list, so this filter has exactly one input and one output.  If you look closely at the pad definitions, you will see that fairly few callback functions are actually specified.  Because of the simplicity of the filter, the defaults can do most of the work for us.&lt;br /&gt;
&lt;br /&gt;
Let's take a look at the callback functions it does define.&lt;br /&gt;
&lt;br /&gt;
==== query_formats() on an input pad ====&lt;br /&gt;
 static int *query_formats(AVFilterLink *link)&lt;br /&gt;
 {&lt;br /&gt;
     return avfilter_make_format_list(10,&lt;br /&gt;
                 PIX_FMT_YUV444P,  PIX_FMT_YUV422P,  PIX_FMT_YUV420P,&lt;br /&gt;
                 PIX_FMT_YUV411P,  PIX_FMT_YUV410P,&lt;br /&gt;
                 PIX_FMT_YUVJ444P, PIX_FMT_YUVJ422P, PIX_FMT_YUVJ420P,&lt;br /&gt;
                 PIX_FMT_YUV440P,  PIX_FMT_YUVJ440P);&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
This simply calls avfilter_make_format_list().  This function takes as its first parameter the number of formats which follow as the remaining parameters.  The return value is a list of formats, terminated with -1, which is suitable for returning from query_formats().  As you can see, this filter supports a number of planar YUV colorspaces, including the JPEG YUV colorspaces (the ones with a 'J' in their names).&lt;br /&gt;
&lt;br /&gt;
Notice that the filter definition did not define a query_formats() for the output pad.  In this case, the default will report that the output pad must operate in the same colorspace as the input.&lt;br /&gt;
&lt;br /&gt;
==== config_props() on an input pad ====&lt;br /&gt;
The config_props() callback on an input pad is responsible for verifying that the properties of the link are supported by the filter, and for making any updates to the filter's context which are necessary for the link's properties.&lt;br /&gt;
&lt;br /&gt;
TODO: quick explanation of YUV colorspaces, chroma subsampling, difference in range of YUV and JPEG YUV.&lt;br /&gt;
&lt;br /&gt;
Let's take a look at the way in which this filter stores its context:&lt;br /&gt;
&lt;br /&gt;
 typedef struct&lt;br /&gt;
 {&lt;br /&gt;
     int offY, offUV;&lt;br /&gt;
     int hsub, vsub;&lt;br /&gt;
 } NegContext;&lt;br /&gt;
&lt;br /&gt;
That's right.  The priv_size member of the AVFilter structure tells the filter system how many bytes to reserve for this structure.  The hsub and vsub members are used for chroma subsampling, and the offY and offUV members are used for handling the difference in range between YUV and JPEG YUV.  Let's see how these are set in the input pad's config_props:&lt;br /&gt;
&lt;br /&gt;
 static int config_props(AVFilterLink *link)&lt;br /&gt;
 {&lt;br /&gt;
     NegContext *neg = link-&amp;gt;dst-&amp;gt;priv;&lt;br /&gt;
 &lt;br /&gt;
     avcodec_get_chroma_sub_sample(link-&amp;gt;format, &amp;amp;neg-&amp;gt;hsub, &amp;amp;neg-&amp;gt;vsub);&lt;br /&gt;
 &lt;br /&gt;
     switch(link-&amp;gt;format) {&lt;br /&gt;
     case PIX_FMT_YUVJ444P:&lt;br /&gt;
     case PIX_FMT_YUVJ422P:&lt;br /&gt;
     case PIX_FMT_YUVJ420P:&lt;br /&gt;
     case PIX_FMT_YUVJ440P:&lt;br /&gt;
         neg-&amp;gt;offY  =&lt;br /&gt;
         neg-&amp;gt;offUV = 0;&lt;br /&gt;
         break;&lt;br /&gt;
     default:&lt;br /&gt;
         neg-&amp;gt;offY  = -4;&lt;br /&gt;
         neg-&amp;gt;offUV = 1;&lt;br /&gt;
     }&lt;br /&gt;
 &lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
This simply calls avcodec_get_chroma_sub_sample() to get the chroma subsampling shift factors, and stores those in the context.  It then stores a set of offsets for compensating for different luma/chroma value ranges for JPEG YUV, and a different set of offsets for other YUV colorspaces.  It returns zero to indicate success, because there are no possible input cases which this filter cannot handle.&lt;br /&gt;
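A quick arithmetic check of those offsets: limited-range (ITU-R BT.601) luma spans [16, 235], so negation there should map x to 16 + 235 - x, which equals 255 - x - 4; limited-range chroma spans [16, 240], giving 16 + 240 - x = 255 - x + 1. That is exactly what offY = -4 and offUV = 1 produce:

```c
/* Negation with the limited-range offsets chosen in config_props().
 * BT.601 luma spans [16, 235]:   16 + 235 - x  ==  255 - x - 4.
 * BT.601 chroma spans [16, 240]: 16 + 240 - x  ==  255 - x + 1. */
static int negate_luma(int x)   { return 255 - x - 4; }  /* offY  = -4 */
static int negate_chroma(int x) { return 255 - x + 1; }  /* offUV = +1 */
```

With these offsets, negation maps the bottom of each limited range to its top and vice versa (16 becomes 235 for luma, 16 becomes 240 for chroma), instead of drifting outside the legal range as a plain 255 - x would.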
&lt;br /&gt;
==== draw_slice() ====&lt;br /&gt;
Finally, the function which actually does the processing for the filter, draw_slice():&lt;br /&gt;
&lt;br /&gt;
 static void draw_slice(AVFilterLink *link, int y, int h)&lt;br /&gt;
 {&lt;br /&gt;
     NegContext *neg = link-&amp;gt;dst-&amp;gt;priv;&lt;br /&gt;
     AVFilterPicRef *in  = link-&amp;gt;cur_pic;&lt;br /&gt;
     AVFilterPicRef *out = link-&amp;gt;dst-&amp;gt;outputs[0]-&amp;gt;outpic;&lt;br /&gt;
     uint8_t *inrow, *outrow;&lt;br /&gt;
     int i, j, plane;&lt;br /&gt;
 &lt;br /&gt;
     /* luma plane */&lt;br /&gt;
     inrow  = in-&amp;gt; data[0] + y * in-&amp;gt; linesize[0];&lt;br /&gt;
     outrow = out-&amp;gt;data[0] + y * out-&amp;gt;linesize[0];&lt;br /&gt;
     for(i = 0; i &amp;lt; h; i ++) {&lt;br /&gt;
         for(j = 0; j &amp;lt; link-&amp;gt;w; j ++)&lt;br /&gt;
             outrow[j] = 255 - inrow[j] + neg-&amp;gt;offY;&lt;br /&gt;
         inrow  += in-&amp;gt; linesize[0];&lt;br /&gt;
         outrow += out-&amp;gt;linesize[0];&lt;br /&gt;
     }&lt;br /&gt;
 &lt;br /&gt;
     /* chroma planes */&lt;br /&gt;
     for(plane = 1; plane &amp;lt; 3; plane ++) {&lt;br /&gt;
         inrow  = in-&amp;gt; data[plane] + (y &amp;gt;&amp;gt; neg-&amp;gt;vsub) * in-&amp;gt; linesize[plane];&lt;br /&gt;
         outrow = out-&amp;gt;data[plane] + (y &amp;gt;&amp;gt; neg-&amp;gt;vsub) * out-&amp;gt;linesize[plane];&lt;br /&gt;
 &lt;br /&gt;
         for(i = 0; i &amp;lt; h &amp;gt;&amp;gt; neg-&amp;gt;vsub; i ++) {&lt;br /&gt;
             for(j = 0; j &amp;lt; link-&amp;gt;w &amp;gt;&amp;gt; neg-&amp;gt;hsub; j ++)&lt;br /&gt;
                 outrow[j] = 255 - inrow[j] + neg-&amp;gt;offUV;&lt;br /&gt;
             inrow  += in-&amp;gt; linesize[plane];&lt;br /&gt;
             outrow += out-&amp;gt;linesize[plane];&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
 &lt;br /&gt;
     avfilter_draw_slice(link-&amp;gt;dst-&amp;gt;outputs[0], y, h);&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
The y parameter indicates the top of the current slice, and the h parameter the slice's height.  Areas of the image outside this slice should not be assumed to be meaningful (though a method to allow this assumption in order to simplify boundary cases for some filters is coming in the future).&lt;br /&gt;
&lt;br /&gt;
This sets inrow to point to the beginning of the first row of the slice in the input, and outrow similarly for the output.  Then, for each row, it loops through all the pixels, subtracting them from 255, and adding the offset which was determined in config_props() to account for different value ranges.&lt;br /&gt;
&lt;br /&gt;
It then does the same thing for the chroma planes.  Note how the width and height are shifted right to account for the chroma subsampling.&lt;br /&gt;
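The shift arithmetic can be sketched with hypothetical frame sizes (chroma_dim is an illustrative helper, not part of libavfilter):

```c
/* Dimension of a chroma plane, given the luma dimension and the
 * subsampling shift reported by avcodec_get_chroma_sub_sample().
 * For PIX_FMT_YUV420P, hsub == vsub == 1, so a 720x576 frame has
 * 360x288 chroma planes; for PIX_FMT_YUV410P (hsub == vsub == 2)
 * it would be 180x144. */
static int chroma_dim(int luma_dim, int shift)
{
    return luma_dim >> shift;
}
```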
&lt;br /&gt;
Once the drawing is completed, the slice is sent to the next filter by calling avfilter_draw_slice().&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=FFmpeg_filter_HOWTO&amp;diff=8295</id>
		<title>FFmpeg filter HOWTO</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=FFmpeg_filter_HOWTO&amp;diff=8295"/>
		<updated>2007-08-19T20:35:18Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: add information on filter links&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is meant as an introduction of writing filters for [[libavfilter]].  This is a work in progress, but should at least point you in the right direction for writing simple filters.&lt;br /&gt;
&lt;br /&gt;
== Definition of a filter ==&lt;br /&gt;
&lt;br /&gt;
=== AVFilter ===&lt;br /&gt;
All filters are described by an AVFilter structure.  This structure gives information needed to initialize the filter, and information on the entry points into the filter code.  This structure is declared in libavfilter/avfilter.h:&lt;br /&gt;
&lt;br /&gt;
 typedef struct&lt;br /&gt;
 {&lt;br /&gt;
     char *name;         ///&amp;lt; filter name&lt;br /&gt;
     char *author;       ///&amp;lt; filter author&lt;br /&gt;
 &lt;br /&gt;
     int priv_size;      ///&amp;lt; size of private data to allocate for the filter&lt;br /&gt;
 &lt;br /&gt;
     int (*init)(AVFilterContext *ctx, const char *args, void *opaque);&lt;br /&gt;
     void (*uninit)(AVFilterContext *ctx);&lt;br /&gt;
 &lt;br /&gt;
     const AVFilterPad *inputs;  ///&amp;lt; NULL terminated list of inputs. NULL if none&lt;br /&gt;
     const AVFilterPad *outputs; ///&amp;lt; NULL terminated list of outputs. NULL if none&lt;br /&gt;
 } AVFilter;&lt;br /&gt;
&lt;br /&gt;
=== AVFilterPad ===&lt;br /&gt;
Let's take a quick look at the AVFilterPad structure, which is used to describe the inputs and outputs of the filter.  This is also defined in libavfilter/avfilter.h:&lt;br /&gt;
&lt;br /&gt;
 typedef struct AVFilterPad&lt;br /&gt;
 {&lt;br /&gt;
     char *name;&lt;br /&gt;
     int type;&lt;br /&gt;
 &lt;br /&gt;
     int min_perms;&lt;br /&gt;
     int rej_perms;&lt;br /&gt;
 &lt;br /&gt;
     int *(*query_formats)(AVFilterLink *link);&lt;br /&gt;
 &lt;br /&gt;
     void (*start_frame)(AVFilterLink *link, AVFilterPicRef *picref);&lt;br /&gt;
     AVFilterPicRef *(*get_video_buffer)(AVFilterLink *link, int perms);&lt;br /&gt;
     void (*end_frame)(AVFilterLink *link);&lt;br /&gt;
     void (*draw_slice)(AVFilterLink *link, int y, int height);&lt;br /&gt;
 &lt;br /&gt;
     int (*request_frame)(AVFilterLink *link);&lt;br /&gt;
 &lt;br /&gt;
     int (*config_props)(AVFilterLink *link);&lt;br /&gt;
 } AVFilterPad;&lt;br /&gt;
&lt;br /&gt;
The actual definition in the header file has doxygen comments describing each entry point, its purpose, and what type of pads it is relevant for.  These fields are relevant for all pads:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|name&lt;br /&gt;
|Name of the pad.  No two inputs should have the same name, and no two outputs should have the same name.&lt;br /&gt;
|-&lt;br /&gt;
|type&lt;br /&gt;
|Only AV_PAD_VIDEO currently.&lt;br /&gt;
|-&lt;br /&gt;
|query_formats&lt;br /&gt;
|Returns a list of colorspaces supported on the pad.&lt;br /&gt;
|-&lt;br /&gt;
|config_props&lt;br /&gt;
|Handles configuration of the link connected to the pad.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Fields only relevant to input pads are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|min_perms&lt;br /&gt;
|Minimum permissions required on a picture received as input.&lt;br /&gt;
|-&lt;br /&gt;
|rej_perms&lt;br /&gt;
|Permissions not accepted on pictures received as input.&lt;br /&gt;
|-&lt;br /&gt;
|start_frame&lt;br /&gt;
|Called when a frame is about to be given as input.&lt;br /&gt;
|-&lt;br /&gt;
|draw_slice&lt;br /&gt;
|Called when a slice of frame data has been given as input.&lt;br /&gt;
|-&lt;br /&gt;
|end_frame&lt;br /&gt;
|Called when the input frame has been completely sent.&lt;br /&gt;
|-&lt;br /&gt;
|get_video_buffer&lt;br /&gt;
|Called by the previous filter to request memory for a picture.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Fields only relevant to output pads are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|request_frame&lt;br /&gt;
|Requests that the filter output a frame.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
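For illustration, a minimal pass-through filter description built from these two structures might look like the following sketch (the filter name &quot;null&quot;, the pad names, and the author string are only illustrative):&lt;br /&gt;
&lt;br /&gt;
 static const AVFilterPad null_inputs[] = {&lt;br /&gt;
     { .name = "default", .type = AV_PAD_VIDEO },&lt;br /&gt;
     { .name = NULL }&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 static const AVFilterPad null_outputs[] = {&lt;br /&gt;
     { .name = "default", .type = AV_PAD_VIDEO },&lt;br /&gt;
     { .name = NULL }&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 AVFilter avfilter_vf_null = {&lt;br /&gt;
     .name      = "null",&lt;br /&gt;
     .author    = "example author",&lt;br /&gt;
     .priv_size = 0,&lt;br /&gt;
     .inputs    = null_inputs,&lt;br /&gt;
     .outputs   = null_outputs,&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;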
== Picture buffers ==&lt;br /&gt;
&lt;br /&gt;
=== Reference counting ===&lt;br /&gt;
All pictures in the filter system are reference counted.  This means that there is a picture buffer with memory allocated for the image data, and various filters can own a reference to the buffer.  When a reference is no longer needed, its owner frees the reference.  When the last reference to a picture buffer is freed, the filter system automatically frees the picture buffer.&lt;br /&gt;
&lt;br /&gt;
=== Permissions ===&lt;br /&gt;
The upshot of multiple filters holding references to a single picture is that they all want some level of access to the image data.  Clearly, if one filter expects to read the image data without it changing, then no other filter should be writing to that data.  The permissions system handles this.&lt;br /&gt;
&lt;br /&gt;
In most cases, when a filter prepares to output a frame, it will request a buffer from the filter to which it will be outputting.  It specifies the minimum permissions it needs on the buffer, though it may be given a buffer with more permissions than it requested.&lt;br /&gt;
&lt;br /&gt;
When it wants to pass this buffer to another filter as output, it creates a new reference to the picture, possibly with a reduced set of permissions.  This new reference will be owned by the filter receiving it.&lt;br /&gt;
&lt;br /&gt;
So, for example, a filter which drops frames that are similar to the last frame it output would want to keep its own reference to a picture after outputting it, and to ensure that no other filter modifies the buffer.  It would do this by requesting the permissions AV_PERM_READ|AV_PERM_WRITE|AV_PERM_PRESERVE for itself, and removing the AV_PERM_WRITE permission from any references it gives to other filters.&lt;br /&gt;
&lt;br /&gt;
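That pattern could be sketched as follows (avfilter_get_video_buffer() and avfilter_ref_pic(), and the perms field on AVFilterPicRef, are assumed from the libavfilter API of the time; treat the exact names as illustrative):&lt;br /&gt;
&lt;br /&gt;
 /* request a buffer we can read, write, and rely on being preserved */&lt;br /&gt;
 AVFilterPicRef *pic = avfilter_get_video_buffer(out_link,&lt;br /&gt;
     AV_PERM_READ | AV_PERM_WRITE | AV_PERM_PRESERVE);&lt;br /&gt;
 &lt;br /&gt;
 /* give the next filter a weaker reference: readable, but not writable */&lt;br /&gt;
 AVFilterPicRef *ref = avfilter_ref_pic(pic, pic-&amp;gt;perms &amp;amp; ~AV_PERM_WRITE);&lt;br /&gt;
&lt;br /&gt;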
The available permissions are:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Permission&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_READ&lt;br /&gt;
|Can read the image data.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_WRITE&lt;br /&gt;
|Can write to the image data.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_PRESERVE&lt;br /&gt;
|Can assume that the image data will not be modified by other filters. This means that no other filters should have the AV_PERM_WRITE permission.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_REUSE&lt;br /&gt;
|The filter may output the same buffer multiple times, but the image data may not be changed for the different outputs.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_REUSE2&lt;br /&gt;
|The filter may output the same buffer multiple times, and may modify the image data between outputs.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Filter Links ==&lt;br /&gt;
A filter's inputs and outputs are connected to those of another filter through the AVFilterLink structure:&lt;br /&gt;
&lt;br /&gt;
 typedef struct AVFilterLink&lt;br /&gt;
 {&lt;br /&gt;
     AVFilterContext *src;       ///&amp;lt; source filter&lt;br /&gt;
     unsigned int srcpad;        ///&amp;lt; index of the output pad on the source filter&lt;br /&gt;
 &lt;br /&gt;
     AVFilterContext *dst;       ///&amp;lt; dest filter&lt;br /&gt;
     unsigned int dstpad;        ///&amp;lt; index of the input pad on the dest filter&lt;br /&gt;
 &lt;br /&gt;
     int w;                      ///&amp;lt; agreed upon image width&lt;br /&gt;
     int h;                      ///&amp;lt; agreed upon image height&lt;br /&gt;
     enum PixelFormat format;    ///&amp;lt; agreed upon image colorspace&lt;br /&gt;
 &lt;br /&gt;
     AVFilterPicRef *srcpic;&lt;br /&gt;
 &lt;br /&gt;
     AVFilterPicRef *cur_pic;&lt;br /&gt;
     AVFilterPicRef *outpic;&lt;br /&gt;
 } AVFilterLink;&lt;br /&gt;
&lt;br /&gt;
The src and dst members indicate the filters at the source and destination ends of the link, respectively.  The srcpad indicates the index of the output pad on the source filter to which the link is connected.  Likewise, the dstpad indicates the index of the input pad on the destination filter.&lt;br /&gt;
&lt;br /&gt;
When two filters are connected, they need to agree upon the dimensions of the image data they'll be working with, and the format that data is in.  Once this has been agreed upon, these parameters are stored in the link structure.&lt;br /&gt;
&lt;br /&gt;
The srcpic member is used internally by the filter system, and should not be accessed directly.&lt;br /&gt;
&lt;br /&gt;
The cur_pic member is for the use of the destination filter.  While a frame is being sent over the link (i.e. from the call to start_frame() until the call to end_frame()), this holds the reference to the frame which is owned by the destination filter.&lt;br /&gt;
&lt;br /&gt;
The outpic member is described in the following tutorial on writing a simple filter.&lt;br /&gt;
&lt;br /&gt;
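Putting the link fields together, the draw_slice() callback on an input pad might be sketched like this (the data, linesize, and outputs fields, and the exact avfilter_draw_slice() signature, are assumptions based on the API of the time):&lt;br /&gt;
&lt;br /&gt;
 /* sketch: invert the luma plane of each incoming slice */&lt;br /&gt;
 static void draw_slice(AVFilterLink *link, int y, int h)&lt;br /&gt;
 {&lt;br /&gt;
     AVFilterLink *out_link = link-&amp;gt;dst-&amp;gt;outputs[0];&lt;br /&gt;
     AVFilterPicRef *in  = link-&amp;gt;cur_pic;&lt;br /&gt;
     AVFilterPicRef *out = out_link-&amp;gt;outpic;&lt;br /&gt;
     int i, j;&lt;br /&gt;
 &lt;br /&gt;
     for (i = y; i &amp;lt; y + h; i++)&lt;br /&gt;
         for (j = 0; j &amp;lt; link-&amp;gt;w; j++)&lt;br /&gt;
             out-&amp;gt;data[0][i * out-&amp;gt;linesize[0] + j] =&lt;br /&gt;
                 255 - in-&amp;gt;data[0][i * in-&amp;gt;linesize[0] + j];&lt;br /&gt;
 &lt;br /&gt;
     avfilter_draw_slice(out_link, y, h);&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;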
== Writing a simple filter ==&lt;br /&gt;
&lt;br /&gt;
=== Default filter entry points ===&lt;br /&gt;
Because most filters will take exactly one input, produce exactly one output, and output one frame for every frame received as input, the filter system provides a number of default entry points to ease the development of such filters.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Entry point&lt;br /&gt;
!Actions taken by the default implementation&lt;br /&gt;
|-&lt;br /&gt;
|request_frame()&lt;br /&gt;
|Request a frame from the previous filter in the chain.&lt;br /&gt;
|-&lt;br /&gt;
|query_formats() on output pad&lt;br /&gt;
|Return a list of formats indicating that the format currently used on the input pad is the only supported output format.&lt;br /&gt;
|-&lt;br /&gt;
|start_frame()&lt;br /&gt;
|Request a buffer to store the output frame in.  A reference to this buffer is stored in the outpic member of the link hooked to the filter's output.  The next filter's start_frame() callback is called and given a reference to this buffer.&lt;br /&gt;
|-&lt;br /&gt;
|end_frame()&lt;br /&gt;
|Calls the next filter's end_frame() callback.  Frees the reference in the outpic member of the output link, if it was set (i.e. if the default start_frame() was used).  Frees the cur_pic reference in the input link.&lt;br /&gt;
|-&lt;br /&gt;
|get_video_buffer()&lt;br /&gt;
|Returns a buffer with the AV_PERM_READ permission in addition to all the requested permissions.&lt;br /&gt;
|-&lt;br /&gt;
|config_props() on output pad&lt;br /&gt;
|Sets the image dimensions for the output link to the same as on the filter's input.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=FFmpeg_filter_HOWTO&amp;diff=8294</id>
		<title>FFmpeg filter HOWTO</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=FFmpeg_filter_HOWTO&amp;diff=8294"/>
		<updated>2007-08-19T20:04:43Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: Begin documenting how to write a filter for libavfilter&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is meant as an introduction to writing filters for [[libavfilter]].  It is a work in progress, but should at least point you in the right direction for writing simple filters.&lt;br /&gt;
&lt;br /&gt;
== Definition of a filter ==&lt;br /&gt;
&lt;br /&gt;
=== AVFilter ===&lt;br /&gt;
All filters are described by an AVFilter structure.  This structure gives information needed to initialize the filter, and information on the entry points into the filter code.  This structure is declared in libavfilter/avfilter.h:&lt;br /&gt;
&lt;br /&gt;
 typedef struct&lt;br /&gt;
 {&lt;br /&gt;
     char *name;         ///&amp;lt; filter name&lt;br /&gt;
     char *author;       ///&amp;lt; filter author&lt;br /&gt;
 &lt;br /&gt;
     int priv_size;      ///&amp;lt; size of private data to allocate for the filter&lt;br /&gt;
 &lt;br /&gt;
     int (*init)(AVFilterContext *ctx, const char *args, void *opaque);&lt;br /&gt;
     void (*uninit)(AVFilterContext *ctx);&lt;br /&gt;
 &lt;br /&gt;
     const AVFilterPad *inputs;  ///&amp;lt; NULL terminated list of inputs. NULL if none&lt;br /&gt;
     const AVFilterPad *outputs; ///&amp;lt; NULL terminated list of outputs. NULL if none&lt;br /&gt;
 } AVFilter;&lt;br /&gt;
&lt;br /&gt;
=== AVFilterPad ===&lt;br /&gt;
Let's take a quick look at the AVFilterPad structure, which is used to describe the inputs and outputs of the filter.  This is also defined in libavfilter/avfilter.h:&lt;br /&gt;
&lt;br /&gt;
 typedef struct AVFilterPad&lt;br /&gt;
 {&lt;br /&gt;
     char *name;&lt;br /&gt;
     int type;&lt;br /&gt;
 &lt;br /&gt;
     int min_perms;&lt;br /&gt;
     int rej_perms;&lt;br /&gt;
 &lt;br /&gt;
     int *(*query_formats)(AVFilterLink *link);&lt;br /&gt;
 &lt;br /&gt;
     void (*start_frame)(AVFilterLink *link, AVFilterPicRef *picref);&lt;br /&gt;
     AVFilterPicRef *(*get_video_buffer)(AVFilterLink *link, int perms);&lt;br /&gt;
     void (*end_frame)(AVFilterLink *link);&lt;br /&gt;
     void (*draw_slice)(AVFilterLink *link, int y, int height);&lt;br /&gt;
 &lt;br /&gt;
     int (*request_frame)(AVFilterLink *link);&lt;br /&gt;
 &lt;br /&gt;
     int (*config_props)(AVFilterLink *link);&lt;br /&gt;
 } AVFilterPad;&lt;br /&gt;
&lt;br /&gt;
The actual definition in the header file has doxygen comments describing each entry point, its purpose, and what type of pads it is relevant for.  These fields are relevant for all pads:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|name&lt;br /&gt;
|Name of the pad.  No two inputs should have the same name, and no two outputs should have the same name.&lt;br /&gt;
|-&lt;br /&gt;
|type&lt;br /&gt;
|Only AV_PAD_VIDEO currently.&lt;br /&gt;
|-&lt;br /&gt;
|query_formats&lt;br /&gt;
|Returns a list of colorspaces supported on the pad.&lt;br /&gt;
|-&lt;br /&gt;
|config_props&lt;br /&gt;
|Handles configuration of the link connected to the pad.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Fields only relevant to input pads are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|min_perms&lt;br /&gt;
|Minimum permissions required on a picture received as input.&lt;br /&gt;
|-&lt;br /&gt;
|rej_perms&lt;br /&gt;
|Permissions not accepted on pictures received as input.&lt;br /&gt;
|-&lt;br /&gt;
|start_frame&lt;br /&gt;
|Called when a frame is about to be given as input.&lt;br /&gt;
|-&lt;br /&gt;
|draw_slice&lt;br /&gt;
|Called when a slice of frame data has been given as input.&lt;br /&gt;
|-&lt;br /&gt;
|end_frame&lt;br /&gt;
|Called when the input frame has been completely sent.&lt;br /&gt;
|-&lt;br /&gt;
|get_video_buffer&lt;br /&gt;
|Called by the previous filter to request memory for a picture.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Fields only relevant to output pads are:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Field&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|request_frame&lt;br /&gt;
|Requests that the filter output a frame.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Picture buffers ==&lt;br /&gt;
&lt;br /&gt;
=== Reference counting ===&lt;br /&gt;
All pictures in the filter system are reference counted.  This means that there is a picture buffer with memory allocated for the image data, and various filters can own a reference to the buffer.  When a reference is no longer needed, its owner frees the reference.  When the last reference to a picture buffer is freed, the filter system automatically frees the picture buffer.&lt;br /&gt;
&lt;br /&gt;
=== Permissions ===&lt;br /&gt;
The upshot of multiple filters holding references to a single picture is that they all want some level of access to the image data.  Clearly, if one filter expects to read the image data without it changing, then no other filter should be writing to that data.  The permissions system handles this.&lt;br /&gt;
&lt;br /&gt;
In most cases, when a filter prepares to output a frame, it will request a buffer from the filter to which it will be outputting.  It specifies the minimum permissions it needs on the buffer, though it may be given a buffer with more permissions than it requested.&lt;br /&gt;
&lt;br /&gt;
When it wants to pass this buffer to another filter as output, it creates a new reference to the picture, possibly with a reduced set of permissions.  This new reference will be owned by the filter receiving it.&lt;br /&gt;
&lt;br /&gt;
So, for example, a filter which drops frames that are similar to the last frame it output would want to keep its own reference to a picture after outputting it, and to ensure that no other filter modifies the buffer.  It would do this by requesting the permissions AV_PERM_READ|AV_PERM_WRITE|AV_PERM_PRESERVE for itself, and removing the AV_PERM_WRITE permission from any references it gives to other filters.&lt;br /&gt;
&lt;br /&gt;
The available permissions are:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
!Permission&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_READ&lt;br /&gt;
|Can read the image data.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_WRITE&lt;br /&gt;
|Can write to the image data.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_PRESERVE&lt;br /&gt;
|Can assume that the image data will not be modified by other filters. This means that no other filters should have the AV_PERM_WRITE permission.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_REUSE&lt;br /&gt;
|The filter may output the same buffer multiple times, but the image data may not be changed for the different outputs.&lt;br /&gt;
|-&lt;br /&gt;
|AV_PERM_REUSE2&lt;br /&gt;
|The filter may output the same buffer multiple times, and may modify the image data between outputs.&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=TX2&amp;diff=7719</id>
		<title>TX2</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=TX2&amp;diff=7719"/>
		<updated>2007-04-20T09:53:00Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: document the TX2 image format&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Uses===&lt;br /&gt;
This format is used in Disgaea and other games by Nippon Ichi Software.&lt;br /&gt;
&lt;br /&gt;
===Format===&lt;br /&gt;
The file consists of:&lt;br /&gt;
# a file header&lt;br /&gt;
# one or more color palettes as indicated in the header&lt;br /&gt;
# image data&lt;br /&gt;
&lt;br /&gt;
The file may have garbage data after the end of the image.  This should be discarded.&lt;br /&gt;
&lt;br /&gt;
====Header====&lt;br /&gt;
Multibyte values in this format are stored with little-endian byte ordering.&lt;br /&gt;
&lt;br /&gt;
 typedef struct {&lt;br /&gt;
   uint16_t width;&lt;br /&gt;
   uint16_t height;&lt;br /&gt;
   uint16_t colors;   /* only values 16 and 256 have been observed */&lt;br /&gt;
   uint16_t unknown;&lt;br /&gt;
   uint16_t colors2;  /* purpose unknown, but always seems to be the same as colors */&lt;br /&gt;
   uint16_t palettes; /* number of palettes.  only 1 and 16 observed thus far */&lt;br /&gt;
   uint32_t padding;  /* always zero */&lt;br /&gt;
 } HEADER;&lt;br /&gt;
&lt;br /&gt;
====Palette====&lt;br /&gt;
The palettes follow immediately after the header, and all have the number of entries indicated by the colors field in the file header.  The palettes have no header of their own, and simply consist of one entry per color:&lt;br /&gt;
&lt;br /&gt;
 uint32_t color;      /* 0xAABBGGRR in little endian */&lt;br /&gt;
&lt;br /&gt;
In many images with no apparent need for transparency, the alpha value is always 0x80.  Images which truly make use of the alpha channel have been observed, but not explored in detail yet.&lt;br /&gt;
&lt;br /&gt;
====Image Data====&lt;br /&gt;
For 256 color images, each byte is a palette index, with bits 4 and 5 swapped.  For the purposes of implementing a parser for this format, it is probably more efficient to reorder the palette while reading it, rather than swapping bits for each pixel.&lt;br /&gt;
&lt;br /&gt;
For 16 color images, each byte gives two palette indexes - the first in the low nibble, the second in the high nibble.&lt;br /&gt;
&lt;br /&gt;
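As a sketch of that palette reordering for 256-color images (the names are illustrative; the swap works because exchanging bits 4 and 5 is its own inverse):&lt;br /&gt;
&lt;br /&gt;
 /* swap bits 4 and 5 of a palette index */&lt;br /&gt;
 static uint8_t swap_bits45(uint8_t x)&lt;br /&gt;
 {&lt;br /&gt;
     return (x &amp;amp; 0xCF) | ((x &amp;amp; 0x10) &amp;lt;&amp;lt; 1) | ((x &amp;amp; 0x20) &amp;gt;&amp;gt; 1);&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 /* reorder the palette once, so pixel bytes can index it directly */&lt;br /&gt;
 for (i = 0; i &amp;lt; 256; i++)&lt;br /&gt;
     new_pal[swap_bits45(i)] = old_pal[i];&lt;br /&gt;
&lt;br /&gt;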
[[Category:Image Formats]]&lt;br /&gt;
[[Category:Game Formats]]&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
	<entry>
		<id>https://wiki.multimedia.cx/index.php?title=Talk:FFmpeg_Summer_Of_Code_2007&amp;diff=7293</id>
		<title>Talk:FFmpeg Summer Of Code 2007</title>
		<link rel="alternate" type="text/html" href="https://wiki.multimedia.cx/index.php?title=Talk:FFmpeg_Summer_Of_Code_2007&amp;diff=7293"/>
		<updated>2007-03-15T01:58:06Z</updated>

		<summary type="html">&lt;p&gt;Koorogi: /* Dirac */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== TIFF/TGA encoder? ==&lt;br /&gt;
&lt;br /&gt;
Why? It seems too small a task to be done in the course of SoC. --[[User:Kostya|Kostya]] 09:20, 7 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
: True, I'll rework it. --[[User:Merbanan|Merbanan]] 11:44, 7 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
== DPX Encoder/Decoder ==&lt;br /&gt;
From what I understand this is the output of most film scanners. Having not used one I'm not 100% sure of this, and maybe someone could clarify, but my understanding is that film scanners output a collection of these files (I'm unsure if they're put in a container or not). There are already open source implementations in imagemagick and graphics magick.&lt;br /&gt;
&lt;br /&gt;
: added some info and links - [[User:Compn|Compn]] 20:25, 8 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
== ATRAC3 ==&lt;br /&gt;
Should http://thread.gmane.org/gmane.comp.video.ffmpeg.devel/44526 be added to small tasks?&lt;br /&gt;
[[User:Ce|Ce]] 04:41, 8 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
: No, I'll fix it eventually, it's not that simple.--[[User:Merbanan|Merbanan]] 06:46, 8 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
== Qualification Tasks ==&lt;br /&gt;
is reviewing and fixing up old unapplied ffmpeg patches not a good small task? [[User:Compn|Compn]] 20:25, 8 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
:I wanted to ask the same question a minute ago;-) The QTRLE patch is not ready yet, and there is no patch for an DNxHD ENcoder in the mailing list. I originally wrote that someone with better knowledge should of course remove those proposals if he thinks they are bad, but I still think they probably tell something about applicants (and that Baptiste possibly didn't read Merbanans &amp;quot;rules&amp;quot;). --[[User:Ce|Ce]] 20:31, 8 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
== Photo Codecs ==&lt;br /&gt;
&lt;br /&gt;
I see several GSoC project proposals for encoding and decoding a variety of images. Is this truly appropriate for the FFmpeg project, which traditionally focuses on sequences of moving pictures vs. single images? I know that FFmpeg can put a movie together from a sequence of still pictures, or dump a movie into a series of still pictures. Are we hoping to do the same with HDR images? Otherwise, this type of work seems best left to dedicated photo processing projects. --[[User:Multimedia Mike|Multimedia Mike]] 13:36, 9 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
: And they are designed to operate with colour formats not currently supported by FFmpeg (like 16 bit per component). But JPEG-2000 and HD-Photo are likely to be used in movies so their support is undoubtedly useful. BTW, how do you plan to use qualification projects?  --[[User:Kostya|Kostya]] 14:09, 9 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
::I'm supportive of something like JPEG-2000 (which uses a colorspace FFmpeg does not presently support, IIRC) since I know that there are plans to include that with certain types of movies. I'm not as enthusiastic about graphic formats that are not known to be encoded as sequences of images in a video file. As for the qualifier projects, we are hoping to weed out unqualified applicants by asking that they perform a task from the list. --[[User:Multimedia Mike|Multimedia Mike]] 16:02, 9 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
: i was thinking of that exact feature (convert dpx/tif/exr to h264). also i wonder if the encoder feature could be used for filmmaking using ffmpeg ? e.g. grab from camera right to HD image format? or is this too high end/specialized/low user count? --[[User:Compn|Compn]] 18:52, 9 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
:: You know me-- low user count is not a legit reason for discounting a feature. :-) If there is a legitimate video-type app for a certain feature I think that makes it more relevant to FFmpeg. --[[User:Multimedia Mike|Multimedia Mike]] 19:39, 9 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
::: btw i dont think they need to be SOC projects. exr and dpx are open and have gpl libs (dunno about microsoft hd photo). maybe just a qualifying task or just a wishlist. --[[User:Compn|Compn]] 22:53, 9 March 2007 (EST)&lt;br /&gt;
&lt;br /&gt;
== AAC ==&lt;br /&gt;
&lt;br /&gt;
I think finishing AAC support would be a good SoC project. The decoder started in 2006 is far from finished and the project appears to be quite complex. Baptiste appears to disagree, can we come to a consensus here? Does anybody know the exact status of the AAC implementation from 2006? -- [[User:DonDiego|DonDiego]] 10:11, 12 March 2007 (EDT)&lt;br /&gt;
&lt;br /&gt;
: I think the LC part is almost complete. Adding He-AAC(+) features could be a SoC task.--[[User:Merbanan|Merbanan]] 17:05, 12 March 2007 (EDT)&lt;br /&gt;
&lt;br /&gt;
==Dirac==&lt;br /&gt;
is dirac even finished spec wise? i'd rather see a decoder for the files in the wild (eac3, aac, gsm, 263, indeo)...&lt;br /&gt;
&lt;br /&gt;
: Their site says it's &amp;quot;essentially complete&amp;quot;.  Granted, it's not out there in the wild yet, but I think that's mostly a matter of time, so why not get a decoder in now?  It's not like libavcodec doesn't already have decoders for a number of other rather obscure formats already. -- [[User:Koorogi|Koorogi]] 21:58, 14 March 2007 (EDT)&lt;/div&gt;</summary>
		<author><name>Koorogi</name></author>
	</entry>
</feed>