I’ve pirated every video converter known to man (UniConverter, WinX, VideoProc, Aiseesoft, Tipard, etc.) and even tried open-source tools like ffmpeg and HandBrake, and I can’t get hardware acceleration to work, unless I just don’t understand how it’s supposed to work. I have a Radeon RX 470 graphics card and plenty of processing power.

For example, when I convert a video to HEVC without acceleration, I get around 100 FPS and a 2-3 minute render time, but every CPU core maxes out.

However, when I turn on acceleration or use the AMD HEVC encoder (ffmpeg, HandBrake), the frame rate drops to 10-15 FPS, CPU usage barely goes over 10%, and the GPU jumps to 100%. That part is fine, but then it tells me it’ll take about 20 minutes to render a 20-minute TV episode!?
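
For reference, the two kinds of ffmpeg runs I’m comparing look roughly like this (file names and settings are simplified placeholders, not necessarily exactly what I typed):

    # software (CPU) HEVC encode with libx265
    ffmpeg -i episode.mkv -c:v libx265 -preset medium -crf 22 -c:a copy cpu_out.mkv

    # hardware (GPU) HEVC encode with the AMD AMF encoder (needs an ffmpeg build with AMF support)
    ffmpeg -i episode.mkv -c:v hevc_amf -quality quality -c:a copy gpu_out.mkv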

This is driving me crazy. Can someone provide some insight on this? I’d be forever grateful. Thanks!

  • Shimitar@feddit.it · 3 days ago (edited)

    As I understand it, GPU encoding is meant for streaming: it aims for fast output with not-so-great quality and next to no CPU usage. Exactly what you are getting.

    Don’t use GPU encoding for storage… CPU encoding is much better.

    Edit: since it’s aimed at streaming, GPU encoding only needs to achieve real-time performance; there’s no need for it to go any faster. CPU encoding, on the other hand, can go as fast as your cores can push.

    • Rodrigo_de_Mendoza@lemmy.dbzer0.com (OP) · 2 days ago

      Yeah, I’ve done a bunch of reading since this post and I’m going with the CPU. It may put a bit more stress on my system, but it was built for high-end gaming in the first place, so I shouldn’t have any problems. Thank you for your input and information! :)

  • FBJimmy@lemmus.org · 5 days ago

    Based on how you’re seeing the load move from 100% CPU to 100% GPU, I would suggest that it is “working” to some extent.

    I don’t have any experience with that GPU, but here are a few things to keep in mind:

    1. When you use a GPU for video encoding, it’s not the case that it’s ‘accelerating’ what you were doing without it. What you’re doing is switching from running a software implementation of an HEVC encoder on your CPU to running a hardware implementation of an HEVC encoder on your GPU. Hardware and software encoders are very different from one another, and they won’t combine forces; it’s one or the other.

    2. Video encoders have literally hundreds of configuration options, and how you configure the encoder has a massive impact on encoding time. Getting results I’m happy with for archiving usually means encoding at slower than real time on my 5800X CPU; if you’re getting over 100 fps on your CPU, I’d guess you have it set up with some very fast settings, which I wouldn’t recommend for anything other than real-time transcoding (rough example commands below, after this list). Conversely, it’s possible you have slower settings configured for your GPU.

    3. Video encoding is very difficult to do “well” in hardware. Generally speaking, software is better suited to the sort of algorithms that are needed. GPUs can be beneficial in speeding up an encode, but the result won’t be as good in terms of quality vs file size: for the same quality a GPU encode will be bigger, or for the same file size a GPU encode will be lower quality.
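
    To make point 2 concrete, here’s a rough sketch of the kind of settings gap I mean (file names and numbers are placeholders, not a recommendation):

        # very fast preset: high fps, but worse compression for the same CRF
        ffmpeg -i input.mkv -c:v libx265 -preset veryfast -crf 22 -c:a copy fast.mkv

        # slow preset: far lower fps, noticeably better quality per megabyte
        ffmpeg -i input.mkv -c:v libx265 -preset slow -crf 22 -c:a copy slow.mkv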

    I guess this is a roundabout way of suggesting that if you’re happy with the quality of your 100fps CPU encodes, stick with it!

    • N0x0n@lemmy.ml · 5 days ago

      This is probably the best answer you will get, OP! I have done some encodes (BD -> SVT-AV1), and everything FBJimmy said matches what I gathered while researching how to get the best quality/speed trade-off without losing too much fine detail.

      This won’t make you happy if what you want is GPU encoding, because that’s meant for on-the-fly encoding (streaming via Twitch, YouTube, whatever…). GPU encoding seems like a nice idea, but CPU software encoding is far more efficient.

      It seems you aren’t looking for quality video encoding so much as speedy encoding? If that’s the case, then yeah, GPU encoding seems like the best idea here, but I can’t help with that, sorry…

      Most of the encodes I have done with ffmpeg to AV1 got around 20 fps. Yes, it’s slow, but I get near-“lossless” quality with an acceptable file size to serve over Jellyfin. Also, I’ve never heard anyone say that 80-90% CPU utilization is bad for your CPU as long as your temps are all right (over 80 °C would be a bit alarming). Sure, if you’re doing video encoding every day your CPU will wear over time, but that’s practically what they are built for… processing information! And like everything, the more you use it, the more it wears out (the same goes for your GPU…).
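
      For anyone curious, a BD -> SVT-AV1 encode with ffmpeg looks something like this (the preset/CRF values here are just placeholders, not necessarily what I used):

          ffmpeg -i bd_remux.mkv -map 0 -c:v libsvtav1 -preset 5 -crf 28 -c:a copy -c:s copy out_av1.mkv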

      But I can understand your determination and hope you will find your way around. I’m also stubborn when I want something to work the way I want.

    • Rodrigo_de_Mendoza@lemmy.dbzer0.com (OP) · 5 days ago

      It doesn’t help that I don’t have a very good grasp of the hardware mechanics of it. Thanks for trying to clarify for me! The thing I’m most concerned about with using the CPU for everything is that in most software I try, including HandBrake, letting the CPU do all the processing sends each CPU core to >100%, which is not good for the system for long periods of time, and I’ve literally got hundreds of DVDs/Blu-rays I want to reprocess. I’ve always been told around 55%-65% on each core is acceptable when processing video. Any additional information you can provide would be most appreciated.

      • we_avoid_temptation@lemmy.zip · 5 days ago

        “each CPU core goes to >100% which is not good for the system for long periods of time”

        If you don’t have effective cooling, maybe, but I’ve never heard of any reason to keep core utilization under any specific percentage. Are your temps an issue?

          • Paula_Tejando@lemmy.eco.br · 3 days ago

            A good target for CPU utilization is 100%. Same for memory. Anything less and you’re wasting your computer: energy still flows through your components and degrades them, without much benefit in return.

          • Kissaki@lemmy.dbzer0.com · 4 days ago

            That’s bullshit. There’s no reason to limit or target a specific or non-maximum CPU core usage.

            That would only make sense as a workaround for hardware faults or cooling issues, never as a general guideline.

          • catloaf@lemm.ee · 4 days ago

            I can think of no logical explanation for that. Maybe if you wanted to use CPU encoding and use the system at the same time. But given how many cores systems have these days, percentages don’t mean much. As long as you leave a few cores available, you’ll be able to use the system.

            If you don’t care about that, let it go to 100%.

  • istdaslol@feddit.org · 5 days ago

    We need more detailed information: what codec you want to encode to, what settings the encoder uses, what type of input file, and so on. And since you use Windows, it would be good to check whether you’re hitting the 3D engine or the dedicated encoder; Task Manager shows this easily, and when you encode in hardware the video encode engine should show usage. I use both Intel’s Quick Sync and Nvidia’s NVENC for encoding and have never had any issues, so it might just be an AMD issue, or your specific GPU may be faulty, or it’s a driver problem.
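
    One quick sanity check on the ffmpeg side (assuming the build on your PATH is the one your tools actually call):

        # confirm the build includes the AMD AMF HEVC encoder and list its options
        ffmpeg -hide_banner -h encoder=hevc_amf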

    • Rodrigo_de_Mendoza@lemmy.dbzer0.com (OP) · 5 days ago

      Sorry, I’ll try to be a bit clearer here. I have an MKV file (H.264, AAC, ASS soft subs) at 1920x1080 16:9 that I want to re-encode to HEVC, downscaling it to 640x360 while retaining the soft subs. Actually, I would like to burn in the soft subs, but it keeps defaulting to the program’s standard font instead of using the soft subs’ styles. I use HWMonitor while checking the various programs so I can see how much CPU or GPU usage is going on. I’m wondering if it’s just a freaking AMD issue. I hate so badly that I didn’t get an Intel or Nvidia graphics card when I had my system rebuilt. Anyway, I hope this provides a bit more clarity. Feel free to ask anything else, as I’m basically just going around in circles right now. Thanks so much for your input!
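
      In ffmpeg terms, what I’m after is roughly this (a sketch, not exactly what I’ve been running; the CRF value and the subtitle handling are guesses on my part):

          # keep the ASS track as a soft sub while downscaling to 640x360
          ffmpeg -i episode.mkv -map 0 -vf scale=640:360 -c:v libx265 -preset medium -crf 22 -c:a copy -c:s copy soft.mkv

          # or burn the embedded ASS track in (needs libass; if the styles still come out wrong, the font attachments may not be visible to the subtitle renderer)
          ffmpeg -i episode.mkv -vf "subtitles=episode.mkv,scale=640:360" -c:v libx265 -preset medium -crf 22 -c:a copy burned.mkv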

      • istdaslol@feddit.org · 4 days ago

        That’s the issue: downscaling. When I downscale 4K to HD it takes about 45 minutes for a 2-hour film; when I keep the resolution it’s more like 5 minutes.

  • Hackerpunk1@lemmy.dbzer0.com · 5 days ago

    Certain converters, such as UniConverter, might require additional configuration to support fast conversion. I had to get some DLLs and paste them into the app directory for it to unlock hardware acceleration.

    • Rodrigo_de_Mendoza@lemmy.dbzer0.com (OP) · 5 days ago

      Hmm, I wonder… It’s funny you mention this, because I pirated Wondershare UniConverter and it installed and registered fine, but I couldn’t get AMD acceleration to show up under the acceleration settings. It’s like it didn’t recognize the card. I wonder if the files you mention have something to do with that, because it DOES support AMD acceleration.

    • Rodrigo_de_Mendoza@lemmy.dbzer0.com (OP) · 5 days ago

      Really!? I would LOVE to get that converter to work. I’ve got a pirated copy somewhere around here. Do you happen to remember which files they were and where to get them?

  • ShepherdPie@midwest.social · 5 days ago

    AFAIK HEVC support has always been flaky on AMD GPUs. I used to get constant stuttering when trying to play back HEVC files with my old RX 570 before switching to a 1080 Ti. I think Nvidia or Intel is your best bet for this.

  • dingdongitsabear@lemmy.ml · 5 days ago

    don’t you need ROCm drivers for that sorta thing? I know you need 'em for OpenCL, Blender, etc., so I assumed it’s the same for ffmpeg, so I never bothered to try.
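
    (I guess the quick way to find out would be to ask a given ffmpeg build what it actually supports, something like this, with findstr on Windows or grep elsewhere:)

        ffmpeg -hide_banner -hwaccels
        ffmpeg -hide_banner -encoders | findstr amf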

  • wax@feddit.nu · 4 days ago

    One possible culprit is CPU-GPU memory transfers. Have you tried encoding without burning in the subtitles? Maybe the burn-in requires a transfer back to the CPU after decoding and then back to the GPU again to encode.
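
    A quick way to isolate that (just a sketch; hevc_amf needs an ffmpeg build with AMF support, and the settings here are placeholders):

        # AMF encode with no subtitle burn-in; scaling still happens on the CPU
        ffmpeg -i episode.mkv -vf scale=640:360 -c:v hevc_amf -quality quality -c:a copy amf_nosubs.mkv

    If that runs much faster than the version with the subtitles filter, the burn-in path is the bottleneck rather than the encoder itself.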

  • catloaf@lemm.ee · 5 days ago (edited)

    What platform, OS version, drivers, encoder, and ffmpeg version?

    Edit: also the command and a sample file ideally