
Lossless video codec for use with Adobe Premiere?

The Last Laugh

Hello everyone,

I've been using Adobe Premiere 6.5 for editing my videos for the last few years. Yeah, I know I'm a bit behind in terms of software technology, but I'm far from a video wiz, and I figure that as long as it does the job, I prefer to stick with it (if it ain't broke, don't fix it).

Anyway, I was wondering how I can use a lossless video codec with Premiere. I don't much like the idea of losing quality each time I encode a clip, at least when making picture adjustments. I know there are lossless codecs out there, like Huffyuv for instance, but I've no idea how to use any of them. I tried installing Huffyuv on my computer, and as far as I can tell it worked, but I can't find any trace of it, and I don't see how to use it in Premiere. When I go into the project settings, there's an option for the compressor, but the only choice I have is Microsoft DV (NTSC or PAL), even after installing Huffyuv. Is there a way to make Premiere understand I'd like it to use Huffyuv instead, or any good lossless codec for that matter?

How lossless is a "lossless" codec, anyway? Does it really allow the user to encode without losing quality, or is it just not as bad as regular codecs?

Another somewhat related question: when using a regular, non-lossless codec, do you lose any quality when simply splicing a clip? I mean without making any changes to contrast, color, sharpness, etc. Just having Premiere put bits of video together, without actually modifying the frames. What would happen if I did it several times in a row? Would I lose any video quality? I did some experiments in the past, but they weren't conclusive.

Thank you for any advice you can offer.

Francois
 
I can't speak much to adding codecs to Premiere; I've never used it. On the other questions I can offer a few thoughts.

First, a lossless codec is supposedly "mathematically lossless". Does that mean it truly is? Most commentators (from when I was researching this for audio codecs) say yes. The algorithms utilized certainly all purport to be such, and I'd suspect that any quality lost is as close to zero as you're going to get and is certainly below what a human would notice. If you wanted absolute, 100% certainty that you lost no quality, I'm afraid only an uncompressed format (like WAV for audio in Windows) would provide it.
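For what it's worth, a lossless claim is easy to test yourself: compress, decompress, and compare byte for byte. Here's a minimal sketch in Python using zlib (a general-purpose lossless compressor, not a video codec, but the principle is identical; the filename is just an example):

import zlib

# Read the original file as raw bytes (the path is only an example).
with open("clip.avi", "rb") as f:
    original = f.read()

# Compress, then decompress.
compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

# A truly lossless algorithm must reproduce the input bit for bit.
print("sizes:", len(original), "->", len(compressed))
print("identical after round trip?", restored == original)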

Any lossy format will continue to "lose" every time it is applied. Take images, for instance. Repeatedly saving a JPEG image results in a loss in quality with each save. Now, if you're saving something multiple times from one program and never close the program between saves, you might not see a loss. The program might be intelligent enough to keep the loaded version at high quality and only degrade the saved copy. However, if you close and reopen your file and then go to work on it again, you will definitely lose quality with an additional save.
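If you want to see this for yourself, here's a rough sketch in Python using the Pillow imaging library (my choice of tool; the filenames and generation count are hypothetical): re-save a JPEG a number of times, reloading between saves, and measure how far the pixels drift from the original.

from PIL import Image, ImageChops

SOURCE = "photo.jpg"   # hypothetical input file
GENERATIONS = 20

first = Image.open(SOURCE).convert("RGB")
current = first

# Re-encode the image repeatedly, reloading the saved copy each time,
# which mimics closing and reopening the file between saves.
for _ in range(GENERATIONS):
    current.save("resaved.jpg", quality=75)
    current = Image.open("resaved.jpg").convert("RGB")

# Measure how far the pixels have drifted from generation zero.
diff = ImageChops.difference(first, current)
print("max per-channel pixel difference:", max(diff.getextrema(), key=lambda e: e[1])[1])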
 
First, a lossless codec is supposedly "mathematically lossless". Does that mean it truly is? Most commentators (from when I was researching this for audio codecs) say yes. The algorithms utilized certainly all purport to be such, and I'd suspect that any quality lost is as close to zero as you're going to get and is certainly below what a human would notice.

That would definitely be good enough for me. I wish I knew how to do it.

Any lossy format will continue to "lose" every time it is applied.

Even if no changes are made and the software doesn't have to extrapolate anything? That's really too bad. I don't understand why a video editing program can't just reorder the footage without making any other modifications, basically making exact copies of the individual frames. I mean, the files don't even get smaller as they lose quality, so what's the point? I'm sure the process would take longer if a lossless codec were used, but I would be fine with that if it meant better quality. I wish the software would offer that option.

Take images, for instance. Repeatedly saving a JPEG image results in a loss in quality with each save.

What about formats like BMP or TIF? As far as I can tell, they don't lose any quality with repeated saves. Then again, I guess they could be considered "lossless" formats, so they're not affected by saves like JPEG images are. Although I still don't understand why JPEG files need to lose quality when you just save them. It's not like the spelling or the fonts get worse each time one saves a Word file. It remains exactly the same. So why can't an image program just save a JPEG exactly the same as well? I understand if the user chooses to lower the quality to make the file smaller, but otherwise it doesn't make sense to me.

Now, if you're saving something multiple times from one program and never close the program between saves, you might not see a loss. The program might be intelligent enough to keep the loaded version at high quality and only degrade the saved copy. However, if you close and reopen your file and then go to work on it again, you will definitely lose quality with an additional save.

Interesting. Definitely something I need to keep in mind. Though I'm not sure how I could apply it in practice. The way I do things, I need to use the resulting encoded clips to encode other ones, at least once after the first encoding. So even if I don't close the program before I'm done (which would represent quite a video editing marathon), I lose some quality each time I create a new clip. Oh well.

In any case, thank you very much for the information.
 
Even if no changes are made and the software doesn't have to extrapolate anything? That's really too bad. I don't understand why a video editing program can't just reorder the footage without making any other modifications, basically making exact copies of the individual frames. I mean, the files don't even get smaller as they lose quality, so what's the point? I'm sure the process would take longer if a lossless codec were used, but I would be fine with that if it meant better quality. I wish the software would offer that option.
It's the nature of the algorithm, unfortunately. When you save in a lossy format of any kind it tries to save space by discarding any information it deems less important. In a JPEG this includes sharpness, as evidenced by blurry JPEG artifacts on close inspection. Once the file is saved in this manner that discarded information is lost forever. A lossless format economizes but doesn't lose.

Let's take the example of a text file. Say we wrote two compression algorithms. One would replace every instance of the word "apple" with a small code, say the number 12. So that algorithm would go through and replace all instances of "apple" with "12" upon save. When you reopened the file, it would replace any "12" with "apple" (we assume here it could distinguish between a plain 12 in the document and an encoded 12). Your file would be exactly as it was, even though it was saved a little smaller than the original.

Now, take a lossy format. Let's say it just looked for the letter "r" in words and removed the first instance in each word whenever found, thinking the reader could infer what the word was without one letter. So a word like "arrange" would become "arange". You save and reopen it and you've lost those r's forever, losing some quality. Say you then saved it again. The algorithm would go through and delete another r from each word. So now "arange" becomes "aange". Further quality reduction just from saving. This is a crude representation, but it's close enough to what actually happens to work. A JPEG, as I said, discards image information with each save (like our lost r's). So you save once and it loses some. Save again and it loses more (more r's), and so on.
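If it helps make the difference concrete, both toy algorithms fit in a few lines of Python (purely illustrative, nothing like a real codec):

# Toy lossless "codec": substitute a common word with a short token.
# The \x01 prefix marks an encoded token, so a plain "12" in the text
# is never confused with the code for "apple".
def lossless_save(text):
    return text.replace("apple", "\x0112")

def lossless_load(data):
    return data.replace("\x0112", "apple")   # restores the original exactly

# Toy lossy "codec": drop the first "r" from every word on each save.
def lossy_save(text):
    return " ".join(w.replace("r", "", 1) for w in text.split(" "))

doc = "please arrange the apple barrels"

# Lossless round trip: identical no matter how often it's repeated.
assert lossless_load(lossless_save(doc)) == doc

# Lossy saves degrade cumulatively: "arrange" -> "arange" -> "aange" ...
saved = doc
for _ in range(3):
    saved = lossy_save(saved)
    print(saved)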

What about formats like BMP or TIF? As far as I can tell, they don't lose any quality with repeated saves. Then again, I guess they could be considered "lossless" formats, so they're not affected by saves like JPEG images are. Although I still don't understand why JPEG files need to lose quality when you just save them. It's not like the spelling or the fonts get worse each time one saves a Word file. It remains exactly the same. So why can't an image program just save a JPEG exactly the same as well? I understand if the user chooses to lower the quality to make the file smaller, but otherwise it doesn't make sense to me.
A BMP (or any other plain old bitmap format) has no compression at all; it's just raw pixels. So it's not really lossy at all. I believe TIF, at least in most incarnations (and there are many), is also lossless. The answer to your question is in my bit after the first quotation. In short, lossy algorithms by definition lose information with each save. That is how they reduce file size, and it is why they are so much smaller than lossless formats. A lossless format can't just throw stuff out to make a file smaller; instead, it has to shift things around and find places where it can replace redundant information with a smaller marker (like "apple" to "12").
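As one more concrete illustration of that "smaller marker" idea, here's a toy run-length encoder in Python, roughly the trick simple lossless bitmap compression uses on runs of identical pixels (a sketch, not the actual TIF scheme):

def rle_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    """Expand (value, count) pairs back to the exact original sequence."""
    return [p for p, n in runs for _ in range(n)]

# One scanline of a mostly-white image compresses well and loses nothing.
row = [255] * 60 + [0] * 4 + [255] * 36
encoded = rle_encode(row)
print(len(row), "pixels ->", len(encoded), "runs:", encoded)
assert rle_decode(encoded) == row   # bit-for-bit identical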

I hope this helps. Algorithms can be annoying to grasp.
 
Thank you, that was a very thorough and interesting explanation. Good examples, too. The only thing I don't understand is, if I encode a clip and it loses quality because a lossless codec is used, then why doesn't the file size go down? Why bother having the processor go through a complex algorithm that saves no space at all? The only way I can understand it is that while quality is indeed lost because the program has to use an algorithm to decide how to modify each pixel, which can only be an approximation due to the finite number of pixels, there's still as much information to save if the software keeps track of the info for each pixel. But if it dropped info, it seems to me that the file size should be smaller. But maybe DV AVI is a special case, I don't know.

I just did some tests with DV AVI files in Premiere, simply dragging some footage into the storyboard without changing anything else. Basically asking the program to recreate the footage as is. I then captured the same frame from each clip and observed both stills (BMPs), magnifying them to examine and compare the pixels. It turns out that they were exactly the same, pixel for pixel, without any compression artefacts. Or at least, I couldn't tell the difference. So it seems that no info is lost if I only splice footage without applying effects. I can understand why quality might go down when applying effects, though.
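For anyone who wants to repeat the test without eyeballing magnified pixels, here's the comparison done in Python with the Pillow library (my assumption; the filenames are just placeholders):

from PIL import Image, ImageChops

# Hypothetical filenames for the two captured frames.
before = Image.open("frame_original.bmp").convert("RGB")
after = Image.open("frame_respliced.bmp").convert("RGB")

# difference() is zero everywhere when the frames match pixel for pixel,
# and getbbox() returns None for an all-black difference image.
diff = ImageChops.difference(before, after)
print("identical" if diff.getbbox() is None else "differs within " + str(diff.getbbox()))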
 
Thank you, that was a very thorough and interesting explanation. Good examples, too. The only thing I don't understand is, if I encode a clip and it loses quality because a lossless codec is used, then why doesn't the file size go down? Why bother having the processor go through a complex algorithm that saves no space at all? The only way I can understand it is that while quality is indeed lost because the program has to use an algorithm to decide how to modify each pixel, which can only be an approximation due to the finite number of pixels, there's still as much information to save if the software keeps track of the info for each pixel. But if it dropped info, it seems to me that the file size should be smaller. But maybe DV AVI is a special case, I don't know.
Lossless codecs should not lose quality. They are designed to maintain full fidelity whilst reducing filesize. I probably didn't make that clear, as I don't trust claims of "lossless compression" as much as I should. Mathematical wizardry makes me sleepy. 😀 So no, your clip saved losslessly should lose no quality, and with nothing thrown away there's no file size to lose either.

You could consider, however, a situation where quality decreases with no change in size. Say artifacts do get introduced into a video. Who's to say they reduce size? If you have a video and get pixelation around lots of movement would that necessarily reduce filesize? Perhaps not. Put another way, if you replaced all the instances of black 5x5 pixel squares in a video with green you'd be reducing quality, right? But a 5x5 pixel square requires the same size no matter the color. So an artifact reduces quality but does not touch size. I think that's what you're asking; correct me if I am wrong.
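You can even demonstrate that with a couple of uncompressed bitmaps in Python (using the Pillow library, my assumption): a black 5x5 square and a green one occupy exactly the same number of bytes on disk.

import os
from PIL import Image

# Two uncompressed 5x5 bitmaps: one all black, one all green. The pixel
# values differ, but a raw BMP stores every pixel either way, so the
# file sizes come out identical.
Image.new("RGB", (5, 5), (0, 0, 0)).save("black.bmp")
Image.new("RGB", (5, 5), (0, 255, 0)).save("green.bmp")

print(os.path.getsize("black.bmp"), os.path.getsize("green.bmp"))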

Perhaps someday we'll all have bajillion terabyte hard drives and internet linkups so fast our speed would be limited by the eye's ability to take it all in. If that day comes, we can encode everything lossless. Until then, we're sadly stuck with this complexity.
 
You could consider, however, a situation where quality decreases with no change in size. Say artifacts do get introduced into a video. Who's to say they reduce size? If you have a video and get pixelation around lots of movement would that necessarily reduce filesize? Perhaps not. Put another way, if you replaced all the instances of black 5x5 pixel squares in a video with green you'd be reducing quality, right? But a 5x5 pixel square requires the same size no matter the color. So an artifact reduces quality but does not touch size. I think that's what you're asking; correct me if I am wrong.

I don't really have a problem with that notion anymore. In this case, the artifacts are due to the approximations the program has to make when changing the picture's pixels, not to compression. So the amount of information remains the same.

What I don't understand is why the lossless codec in this case is called a compressor (at least in Premiere's settings). Clearly it doesn't compress anything; it just loses quality each time. I would understand if it converted the footage to an AVI with a lower bitrate, or to a different video format. But why call it a compressor if what you get is the exact same size as the original material?

Perhaps someday we'll all have bajillion terabyte hard drives and internet linkups so fast our speed would be limited by the eye's ability to take it all in. If that day comes, we can encode everything lossless. Until then, we're sadly stuck with this complexity.

That would be great. But I think chances are good we'll get there in a not-so-distant future. After all, not so long ago, we were still using floppies that held less than 1.5 MB. The first computer I had at college in the early 90s had a hard drive of 130 MB (there are tickling clips that big now), and it was monstrously slow compared to what we have today. Those figures seem like a joke now. Internet connections are also a lot faster. So who knows how big and fast things will be in a few years?
 
What I don't understand is why the lossless codec in this case is called a compressor (at least in Premiere's settings). Clearly it doesn't compress anything; it just loses quality each time. I would understand if it converted the footage to an AVI with a lower bitrate, or to a different video format. But why call it a compressor if what you get is the exact same size as the original material?
There are two possibilities. One, the program writers simply chose that term to approximate what is done. It is a poor choice, I would agree.

The second possibility is that the lossless codec just doesn't do a good job of compression. It may be that it is trying to compress but, for instance, its algorithm is unable to do much good. If you had the quality settings on the encoder set very high, it might not compress much at all from the original. If settings were high enough you could, in fact, increase the size while gaining no quality! An odd situation, but one that's easily possible. Encode a 128 kbps MP3 file at 320 kbps and the size increases with no quality gain.
 
The first possibility is more likely, as the resulting clip is the exact same size as the original footage. There's no attempt at any compression at all.
 