When a video codec wins an Emmy (blog.mozilla.org)
273 points by todsacerdoti 1 day ago | 91 comments




Here's the Emmy that C-Cube Microsystems won back in 1995 for the MPEG-2 (actually unconstrained MPEG-1) encoder chip set used in the roll-out of DirecTV.

https://www.w6rz.net/DCP_1235.JPG

The original DirecTV encoder was MPEG-1 at 704x480 using eight CL4000 chips. Then in 1995, when the MPEG-2-capable CL4010 was finished, the encoders were upgraded to MPEG-2 (frame-only encoding). They were upgraded again to a 12-chip AFF (Adaptive Field/Frame) encoder when the firmware was completed.

https://www.w6rz.net/videorisc.png


> AV1 fixed a structural problem in the ecosystem at the time, but the work isn’t finished. Video demand keeps rising, and the next generation of open codecs must remain competitive.

> AOMedia is working on the upcoming release of AV2. It will feature meaningfully better compression than AV1, much higher efficiency for screen/graphical content, alpha channel support, and more.

That's all nice and good, but please make AV1 as widespread as H264, so that I can just import it into every editing program instead of having Adobe Premiere Pro complain that it doesn't know the format (I personally prefer DaVinci Resolve, but my editor is on Adobe). I think AV1 is great, but I'd like support for it across the board: on every device (hardware decoding and encoding) as well as in Kdenlive, Resolve, and all the other editors, and everything else on the software side.


It's all about hardware support, really. AV1 is pretty new (2018), give it some time. E.g. Nvidia has supported decoding since the 3xxx generation, and encoding only since the 4xxx generation.

I still vividly remember what a clusterfk H264 support on mobile devices was just ten years ago, circa 2010-2015. The AVC spec was published in 2003, High Profiles were standardised in 2005, and it was universally supported only from ~2015. I personally had a 2011 Tegra 2 tablet which did support H264, but didn't support the high profiles.


Why does Adobe need HW support?

Rendering out at high quality takes quite a bit of processing, so it helps to have GPU / HW Enc support.

AdobeWebM is adding AV1 support for WebM files in Premiere and After Effects, btw. It already has VP9 and VP8 support! But yeah, I'm hoping it becomes ubiquitous in the future.

What is AdobeWebM, and why does it take more than adding the library to their CMake?

Some Sony TVs only hardware-accelerate AV1 content through streaming services, and not through Blu-ray and USB...

Right now, I have a drive filled with H264 content that I can hook up to any old hotel TV and play back. It's gonna be a while before I switch to AV1. And is H264 by now largely out of patent anyway?


> And is H264 by now largely out of patent anyway?

Almost, for the AVC High profile - Wikimedia tracks all the known patents here: https://meta.wikimedia.org/wiki/Have_the_patents_for_H.264_M...


Are you aware that you are barking up the wrong tree? AOMedia already made AV1 a free and open standard; if Adobe does not want to do an engineer's afternoon worth of work to link a C library into their executable, then that's on their head, not AOMedia's.

It's less a plea to AOMedia (it's not like they can't work on AV2 while AV1 is out there without full adoption), and more one aimed at the industry as a whole - otherwise we'll still see H264 being widespread in 2030 and beyond, because there's not enough interest in modern codecs even when they don't have licensing bullshit going on.

It's only royalty-free from AOM; that doesn't mean it isn't patent-encumbered. I know several companies that are claiming AV1 violates their patents and want money if you use it.

tbf, AOMedia doesn't really make this call. The Steam Deck, for example, doesn't do AV1 natively. It could, but Valve has so far decided not to implement it. I don't know how many other devices and systems exist that could do AV1 but don't, but to get this level of support we really need to pressure these companies.

You shouldn't be editing in long-GOP codecs anyway.

"Shouldn't" will only ever be enforced when it can't be. There's a lot of editing that doesn't require much reverse playback, which is where long-GOP really falls down, to the point that the slight in-session pain is worth it versus delaying the start of the session for I-frame transcoding.

The real killer of NLEs[0] is variable framerate. Long GOPs just give you higher playhead latencies, but it's still possible[1] for the NLE to actually edit video in such a state. Your computer has to be fast enough or it'll be miserable, but in contrast, variable framerate footage will immediately cause audio desync.

Of course, this distinction is moot, since I've yet to see a (consumer) video source that provides fixed-framerate footage. If anyone wants to explain why, I'm all ears. As a result, I habitually re-encode everything before taking it into a video editor as a precaution, and once you're doing that, capping the GOP length is a no-brainer (a sketch of such a re-encode follows the footnotes).

[0] Non-linear editor. If you're wondering what a linear editor is, please watch https://www.youtube.com/watch?v=AEMdmnNbCZA

[1] It's actually possible to do lossless editing at GOP boundaries, though I don't know if any NLEs would try doing this.
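A minimal sketch of that pre-edit re-encode, assuming ffmpeg is installed; the filenames, codec, CRF, framerate, and GOP cap below are illustrative placeholders, not a recommendation:

```python
# Sketch: normalize footage to constant framerate with a capped GOP before importing into an NLE.
# Assumes ffmpeg is on PATH; "clip.mp4" / "clip_edit.mp4" are placeholder filenames.
import subprocess

def prep_for_editing(src: str, dst: str, fps: int = 30, gop: int = 30) -> None:
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-vf", f"fps={fps}",      # force constant framerate (avoids VFR audio desync)
            "-c:v", "libx264",
            "-crf", "16",             # high-quality intermediate
            "-g", str(gop),           # cap GOP length so seeks land near an I-frame
            "-c:a", "aac", "-b:a", "192k",
            dst,
        ],
        check=True,
    )

prep_for_editing("clip.mp4", "clip_edit.mp4")
```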


For linear editing, it could also be this:

https://www.youtube.com/watch?v=_cnQv8JCsX4

It's how I got started, indeed by watching this programme when it was on telly.



I recently learnt that one can download the WOFF files from any website that uses them and convert them into TTF files (using an online tool like CloudConvert) to use them in, say, an MS Word document or a PowerPoint slide deck.

This allowed me to create a custom PowerPoint theme/template that captures the essence of a particular brand.
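For what it's worth, a local alternative to an online converter is a short fontTools script; a sketch assuming a WOFF file has already been downloaded (the filenames are placeholders):

```python
# Sketch: convert a downloaded WOFF web font to TTF with fontTools.
# "font.woff" / "font.ttf" are placeholder filenames.
from fontTools.ttLib import TTFont

font = TTFont("font.woff")   # fontTools reads WOFF directly; WOFF2 additionally needs the brotli package
font.flavor = None           # dropping the flavor makes save() emit a plain TTF/OTF
font.save("font.ttf")
```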


Worth noting that web fonts are often split up across multiple files for sets of codepoints and font weights/styles, so depending on the language you're writing in, a single WOFF file might be missing a few letters.

Now that is an interesting TIL, thanks!

I think compression ratio is not as important as being open-source and patent-free. I would prefer an open codec even if it produces 20-50% larger videos; the difference between 1 and 1.5 Mbit/s isn't big. And if it matters that much to you, then you should be paying for the patents, not everyone else (for example, by using a codec which is free to decode but whose encoding software is paid).

H.264 High Profile will soon be patent-free, if it isn't already. Most of the currently active patents actually cover SVC usage.

To you, a 33% drop from 1.5 to 1 Mbit/s is not much, but when you're paying for bandwidth usage that is a pretty good bit of savings. I'm not aware of anyone legitimate pushing that kind of data who isn't using a licensed encoder.
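To make the scale concrete, a back-of-the-envelope calculation; the viewing-hours figure is invented purely for illustration:

```python
# Sketch: rough bandwidth savings from dropping 1.5 Mbit/s to 1.0 Mbit/s.
# The viewing-hours number is an assumption, not a real figure.
hours_per_day = 1_000_000
seconds = hours_per_day * 3600

gb_at_1_5 = 1.5e6 * seconds / 8 / 1e9   # Mbit/s -> bytes -> GB per day
gb_at_1_0 = 1.0e6 * seconds / 8 / 1e9

print(f"1.5 Mbit/s: {gb_at_1_5:,.0f} GB/day")
print(f"1.0 Mbit/s: {gb_at_1_0:,.0f} GB/day")
print(f"saved:      {gb_at_1_5 - gb_at_1_0:,.0f} GB/day")   # ~225 TB/day at these assumptions
```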

Then they should be paying for the encoder, and the decoder should be open-source and patent-free.

How many times as a viewer have you had to pay for the decoder? Have you ever paid for a video player?

Ogg Theora is right there.

Theora is incredibly primitive compared even to H.264.

>Through the mid-2010s, video codecs were an invisible tax on the web, built on a closed licensing system

YouTube has used VP8 since 2010. Openly licensed video codecs were in use through the mid-2010s.


Well, VP8 was only released as an open codec in 2010, and it was the subject of patent lawsuits until late 2014.

In 2010 the majority of (YouTube and other) videos were still served as H.264, because no major browser supported VP8 back then, and the majority of video playback devices were already smartphones (without VP8 decoding capabilities).

iOS, for example, didn't support VP8 until iOS 12 in 2019; Firefox and MS IE only added it in 2011. Even Google only added VP8 to Chrome in September 2010.

So the statement is correct, IMO.


> In 2010... the majority of video playback devices were already smartphones

I find this extremely difficult to believe. In 2010 the only widely used smartphone would have been the iPhone. The Motorola Droid was the first widely marketed Android device in the US and was only launched in late 2009.


The full context, to avoid confusion: "because no major browser supported it back then and the majority of video playback devices were already smartphones (without vp8 decoding capabilities)"

No major browser supported VP8 back then, and among the remaining devices (appliances other than PCs running those browsers), the majority of video playback devices were already smartphones (which did not support VP8 in 2010).

Apologies for the lack of clarity.


Wrong. Google aggressively enabled VP8 on YouTube even when there was very little hardware decode. It saved a few megabits per stream on their side and nuked everyone's battery, but hey, Google didn't give a hoot because that was an externalized cost.

It's why the h264ify extension existed, and forcing H264 was, at the time, a large part of the reason Safari had vastly superior battery life.


"Wrong!"

Chrome didn't support VP8 until the first stable release in September 2010; other browsers added it in 2011.

They can be as aggressive as they want; when opening a video, the client and server agree on a codec both support, and in 2010 that codec wasn't VP8.


You can choose to believe what you want; in reality, Google decided to nuke people's batteries for a fringe benefit to themselves. They flipped the switch way ahead of broad hardware decode support.


One of my projects won one of these.

It was for standardising widescreen switching signals. In the early 2000s that was a big issue, because each company had a different interpretation of what the flags meant, so when you were watching TV you would often get the wrong behaviour and distorted pictures. A small group of us sat down and agreed what the proper behaviour should be. Then every other TV standards body in the world adopted it.

I never did get a statue.


There are actually two Engineering Emmy Awards: the 'Primetime Engineering Emmy Awards', given by ATAS (Academy of Television Arts and Sciences), and the 'Technology and Engineering Emmy Awards', given by NATAS (National Academy of Television Arts and Sciences). Not confusing at all.

https://en.wikipedia.org/wiki/Primetime_Engineering_Emmy_Awa...

https://en.wikipedia.org/wiki/National_Academy_of_Television...


And the official site also covers 2025 and 2024, which, for some reason, the Wikipedia page does not.

https://theemmys.tv/tech/


It really is amazing how far compression has come in the last decades. I would love to see a chart showing the progress, as I think quite a bit of it was very recent. At least, I know that the videos I make on a GoPro can't be viewed without effort on a Chromebook.

Amazing what you can do when you throw 10 billion transistors at a problem instead of only a few million.

On a similar note Matt Parker recently released a video about Perlin Noise, which won an Oscar (for Technical Achievement) in 1996: https://www.youtube.com/watch?v=JrLSfSh43oA

What does Netflix have to do with the AV1 codec? While Netflix's Norkin has contributed some minor add-ons like film grain, the Daala folks should have been mentioned, along with the x264 guys who were at the origins of AV1 development, the Google VP9 guys for their contributions, and maybe Intel for the HW porting, among others. Basically, whoever pays for the show gets to wear the crown. Nothing to see here…

What made it in from Daala?

* Multi-Symbol Entropy Coder

* Chroma from Luma (a toy sketch follows this list)

* CDEF filter (directional dering filter)

What didn't make it in?

* Lapped Transform

* Use of vector quantization for residuals (aligning the vectors)
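As a rough illustration of the chroma-from-luma idea, here is a toy sketch of the general concept: model chroma as a scaled version of the luma AC signal plus a chroma DC. This is illustrative only and not the AV1 bitstream algorithm (AV1's CfL signals the scaling factor rather than deriving it as done here).

```python
# Toy sketch of chroma-from-luma prediction: chroma ≈ alpha * luma_AC + chroma_DC.
# Illustrates the concept only; not the AV1 CfL algorithm.
import numpy as np

def cfl_predict(luma_block: np.ndarray, chroma_dc: float, alpha: float) -> np.ndarray:
    luma_ac = luma_block - luma_block.mean()   # remove the DC part of the luma block
    return chroma_dc + alpha * luma_ac          # scale the luma AC and add the chroma DC

# Synthetic example: a gradient block where chroma tracks luma linearly.
luma = np.tile(np.linspace(60, 200, 8), (8, 1))
chroma = 0.4 * (luma - luma.mean()) + 128       # "ground truth" built to match alpha=0.4
pred = cfl_predict(luma, chroma_dc=128.0, alpha=0.4)
print(np.abs(pred - chroma).max())              # ~0 for this synthetic case
```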


Sure, an Emmy is prestigious, but when is a codec going to win a coveted FIFA Peace Prize?

When a Codec ends the Codec Wars, obviously.

Which the free software side calls The War of Patent Oppression.

when is C going to win a Pulitzer?

> In 1990, both Ritchie and Thompson received the IEEE Richard W. Hamming Medal from the Institute of Electrical and Electronics Engineers (IEEE), "for the origination of the UNIX operating system and the C programming language".

> In 1997, both Ritchie and Thompson were made Fellows of the Computer History Museum, "for co-creation of the UNIX operating system, and for development of the C programming language."

> On April 21, 1999, Thompson and Ritchie jointly received the National Medal of Technology of 1998 from President Bill Clinton for co-inventing the UNIX operating system and the C programming language

https://en.wikipedia.org/wiki/Dennis_Ritchie#Awards

I think that's also good ;) Ritchie and Thompson also received a Turing Award - not for the C language, but for UNIX and OS development in general.


Related: AV1 — Now Powering 30% of Netflix Streaming

https://news.ycombinator.com/item?id=46155135


Related:

AV1 powers approximately 30% of Netflix viewing

https://news.ycombinator.com/item?id=46155135


I'm confused - why aren't video codecs winner-take-all?

Who still uses patent-encumbered codecs, and why?


Video decoding on a general-purpose CPU is difficult, so most devices that can play video include some sort of hardware video decoding chip. If you want your video to play well, you need to deliver it in a format that can be decoded by that chip, on all the devices that you want to serve.

So it takes a long time to transition to a new codec - new devices need to ship with support for your new codec, and then you have to wait until old devices get lifecycled out before you can fully drop support for old codecs.


To this day, no Apple TV boxes support hardware AV1 decode (which essentially means it's not supported). Only the latest Roku Ultra devices support it. So obviously Netflix, for example, can't switch everyone over to AV1 even if they want to.

These days, even phone-class CPUs can decode 4k video at playback rate, but they use a lot of power doing it. Not reasonable for battery-powered devices. For AC-powered devices, the problem might be heat dissipation, particularly for little streaming boxes with only passive cooling.

Would it be possible to just ship video streaming devices with a FPGA that can be updated to support whatever hardware accelerated codec is fashionable?

Probably not at the prices that video streaming devices typically sell for.

I think the need for hardware decoding stinks because it makes capable hardware obsolete since it can't decode new video.

Hardware acceleration has been a thing since...forever. Video in general is a balancing act between storage, bandwidth, and quality. Video playback on computers is a balancing act between storage, bandwidth, power, and cost.

Video is naturally large. You've got all the pixels in a frame, tens of frames every second, and however many bits per pixel. All those frames need to be decoded and displayed in order and within fixed time constraints. If you drop frames or deliver them slowly no one is happy watching the video.

If at any point you stick to video that can be effectively decoded on a general purpose CPU with no acceleration you're never going to keep up with the demands of actual users. It's also going to use a lot more power than an ASIC that is purpose-built to decode the video. If you decide to use the beefiest CPU in order to handle higher quality video under some power envelope your costs are going to increase making the whole venture untenable.
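To put numbers on "naturally large", a quick worked calculation for uncompressed 1080p video (the 8-bit 4:2:0 and 60 fps figures are assumed for illustration):

```python
# Sketch: uncompressed bitrate of 1080p60 video with 8-bit 4:2:0 chroma subsampling.
width, height, fps = 1920, 1080, 60
bits_per_pixel = 12                      # 8-bit 4:2:0 averages 12 bits per pixel

bits_per_second = width * height * bits_per_pixel * fps
print(f"{bits_per_second / 1e9:.2f} Gbit/s uncompressed")   # ~1.49 Gbit/s
# A 5 Mbit/s H.264 stream of the same content would be roughly a 300:1 reduction.
```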


I hear you but I think the benefits fall mainly on streaming platforms rather than users.

Like I'm sure Netflix will lower their prices and Twitch will show fewer ads to pass the bandwidth savings onto us right?


Would anyone pay Netflix any amount of money if they were using 1 Mbps MPEG-1 that's trivially decoded on CPUs?

The whole video/movie industry is rife with mature, hardware-implemented patents. The kind that survive challenges. They are also owned by deep pockets (not fly-by-night patent trolls). Fortunately, the industry is mature enough, that some of the older patents are aging out.

The image processing industry is similar, but not as mature. I hated dealing with patents, when I was writing image processing stuff.


For whatever reason, the file sharing community seems to strongly prefer H.265 to AV1. I am assuming that either the compression at a preferred quality or the quality at preferred bitrates is marginally better than AV1's, and that people who don't care about copyright also don't care about patents.

I assume "file sharing community" is the euphemism for "movie pirating community", but I apologize if I made the wrong assumption.

If that's a correct guess - I think the biggest reason is actually hardware support. When you have pirated movies, where are you going to play them? On a TV. Your TV or TV box very likely has support for H265, but very few have AV1 support.

Then the choice is apparent.


One can very well argue that 'movie pirating community' is more properly the dysphemism for 'file sharing community'. :-)

What is odd is that the power-seeders, the ones who actually re-encode, don't do both. You see H264 and H265 released alongside each other. I'm surprised it doesn't go H265/AV1 at this point.

You would dilute the seeding pool, which will already get diluted enough.

What I wonder is "Why still H264?" I guess it's because some people don't buy new video cards every 6 years and don't have H265 on their hardware.

From a quick skim of hardware support on Wikipedia, it looks like encoding support for H.265 showed up in NVIDIA, AMD, et al. around 2015, whereas AV1 support didn't arrive until 2022.

So, the apparent preference could simply be 5+ years more time to do hardware-assisted transcoding.


Pirates are generally slow to transition formats, but AV1 is also not better than H.265 (in practice) at high-bitrate encodes.

Scene rules say to start with --crf 17 at 1080p, which is a pretty low CRF (i.e. it results in high bitrates): https://scenerules.org/html/2020_X265.html

AV1 would most likely result in slower encodes that look worse.
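For reference, an ffmpeg equivalent of that kind of high-fidelity x265 encode might look like the sketch below; this is a rough illustration only, with placeholder filenames, and the actual scene rules specify many more x265 parameters than this:

```python
# Sketch: a high-quality (CRF 17) libx265 encode in the spirit of the scene-rules starting point.
# "source.mkv" / "encode.mkv" are placeholder filenames; real releases tune many more settings.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "source.mkv",
        "-c:v", "libx265",
        "-crf", "17",            # low CRF = high bitrate / high fidelity
        "-preset", "slow",
        "-c:a", "copy",          # keep the original audio untouched
        "encode.mkv",
    ],
    check=True,
)
```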


Timing. Patent-encumbered codecs get a foothold through physical media and broadcast first. Then hardware manufacturers license them. Then everyone is forced to license them. Free codecs have a longer path to market, as they need to avoid the patents and get hardware and software support.

Backwards compatibility. If you host a lot of compressed video content, you probably didn't store the uncompressed versions so any new encoding is a loss of fidelity. Even if you were willing to take that gamble, you have to wait until all your users are on a modern enough browser to use the new codec. Frankly, the winner that takes all is H.264 because it's already everywhere.

AV1 is still worse in practice than H.265 for high-fidelity (high bitrate) encoding. It's being improved, but even at high bitrates it has a tendency to blur.

> AV1 is also the foundation for the image format AVIF, which is deployed across browsers and provides excellent compression for still and animated images

I wish adoption was better. When will Wikipedia support AVIF?


What does it bring over JPEG XL?

Way wider browser adoption, plus the potential to evolve together with AV#: since it uses a container format, it shouldn't be limited to an AV1 base. I.e. sites just need to adopt AVIF, and I expect a seamless ability to start using AV2 (and onwards) there without sites needing another wave of adding a new MIME type etc., which seems to be a huge hurdle.

Same as, say, WebM can contain AV1, AV2, etc.


It doesn't matter that AVIF uses the same container for AV1- or AV2-based encoding: if the browsers don't have the right decoder for it, they can't decode it.

An example of this is MP4: browsers can decode videos encoded with H264 in MP4 containers, but not H265, even if it uses the same container, because the container is one thing and the codec is another; they're related, but they aren't the same.
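One way to see the container/codec split in practice is to probe a file: the container format and the stream codec are reported separately. A sketch assuming ffprobe is installed and a placeholder video.mp4:

```python
# Sketch: ffprobe reports the container (format) and the codec (stream) as separate things.
# "video.mp4" is a placeholder filename.
import subprocess

out = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-show_entries", "format=format_name:stream=codec_name",
        "-of", "default=noprint_wrappers=1",
        "video.mp4",
    ],
    capture_output=True, text=True, check=True,
)
print(out.stdout)   # e.g. codec_name=h264 ... format_name=mov,mp4,m4a,3gp,3g2,mj2
```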


Notably, AVIF uses the HEIF container, like HEIC. HEIF is an extension of ISOBMFF; MP4 files are another example of an ISOBMFF format. I'm surprised how ubiquitous that container format is becoming; WebM uses the Matroska/MKV format, but I bet if it were created today they would likely have used something ISOBMFF-derived.

Browser adoption happens way faster than site adoption (as current AVIF itself clearly demonstrates), so the same container does matter, to reduce friction on the site-adoption side.

I.e. once browser adoption happens, you'll be able to use AV2 for AVIF without the likes of Wikipedia taking another decade after that to add an additional MIME type to their supported images.


The standard/spec is not paywalled. So... yeah.

Hopefully never. Abusing Intra-frames from video codecs is an abomination. Use JPEG-XL.

My Fujifilm X100VI shoots HEIC/HEIF, which is like the AVIF of H.265/HEVC. It seems to offer better compression than JPEG, giving smaller files at comparable quality. iPhone does this too. Why are you calling it an abomination?

Cameras should start using AVIF. HEIF is completely DOA for anything, same as H.265 is.

I hope everyone switches over to AV1 or AV2 stills so we can have completely open image pipelines, but it's silly to say something is "DOA" when it's been in use for 9 years (since 2017) by one of the world's most popular consumer cameras (iPhone) and is now popping up in high-end cameras. The company I work for (Notion) long ago had to start supporting HEIF uploads because a ton of our users expect their pictures to just work.

I'd still call it DOA if Apple are the only ones keeping it around. No one else is, and no one else really cares. Also, I think they have an ulterior motive - they are among those who profit from patents on it, so they likely want to keep it around longer than others would.

Android also seems to use it, on some devices (default in the stock camera app?). So does Sony. ¯\_(ツ)_/¯

HEIF is the container format shared by AVIF and HEIC.

Yeah, good point. I meant HEIC (image codec), not the container.

On the web? Good luck. AVIF is considered a baseline browser feature as of last year by the W3C, whereas JPEG XL is not fully supported by any stable browser release whatsoever; only Safari has been shipping partial support.


Until Google (and by proxy their underling Mozilla) becomes sane again, JPG or PNG/SVG depending on content.


