Posts by Joe24

    It does not occur during Scene Test.

    • Initial GPU memory used: 0.2 GB
    • Memory during test: 0.5 GB
    • Memory after test: 0.2 GB
    • Memory after 30 consecutive tests: 0.2 GB


    Note regarding Vegas: No matter how much GPU memory is used by Vegas/VoPro, all of it is released on exit from Vegas.

    Using "doesn't work.scene" from above (post #11). Single encode, single card, single instance of Vegas 20, stock drivers, VoPro 0.7.2.9.

    7 runs using normal graph speed in Task Manager:


    30 runs using slow graph speed in Task Manager:

    Yes, I have been using YUV 4:2:0 8-bit templates exclusively for this.

    I tried installing a fresh unpatched driver (537.13) and this made no difference. The patch only affects how many simultaneous encoding sessions a non-Quadro card allows, so it shouldn't have any bearing on this issue.

    I tried enabling GPU globally in Vegas (which is the default), and this made no difference either.

    Every time I start a render, the process allocates ~0.6 GB of VRAM. After the render, ~0.2 GB is released, and the remaining ~0.4 GB stays occupied.
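
    To watch this outside Task Manager, total VRAM can be logged every few seconds while starting and stopping renders. A minimal sketch, assuming the standard nvidia-smi tool shipped with the Nvidia driver is on the PATH (the log file name is just a placeholder):

        nvidia-smi --query-gpu=timestamp,memory.used,memory.total --format=csv -l 5 > vram_log.csv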

    I'll run through the different CUDA filters again when I get time, but as I said, that was probably operator error. I was just quickly looking for something CUDA-based that wasn't an encoder. I never checked the output at all; some of the filters started to encode and others wouldn't. Chroma Key was one that wouldn't work at all.

    Are you testing using the Scene from post #5, above?

    I'm still testing, trying to pin down exactly what is causing the issue.

    What I'm seeing so far is that when the following chain is used, there is no memory leak: Video input -> NVENC encoder (either h.264 or HEVC).

    works.scene.zip


    However, when the following chain is used, the memory leak exists: Video input -> CUDA upload -> NVENC encoder (either h.264 or HEVC).

    doesn't work.scene.zip


    No apparent memory leaks if I go Upload -> [various CUDA filters] -> Download -> [software encoder]. Some of the filters just plain didn't work (FFmpeg error), but that could be operator error.
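
    For clarity, here are rough FFmpeg command-line equivalents of the three chains above. This is only a sketch - VoukoderPro builds its own filter graph internally, in.mp4/out.mp4 are placeholder names, and scale_cuda stands in for "various CUDA filters":

        ffmpeg -i in.mp4 -c:v h264_nvenc out.mp4
        (video input straight into NVENC - no leak)

        ffmpeg -i in.mp4 -vf "format=nv12,hwupload_cuda" -c:v h264_nvenc out.mp4
        (CUDA upload before NVENC - this is the chain that leaks for me)

        ffmpeg -i in.mp4 -vf "format=nv12,hwupload_cuda,scale_cuda=1280:720,hwdownload,format=nv12" -c:v libx264 out.mp4
        (upload -> CUDA filter -> download -> software encoder - no apparent leak)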

    The only other possibly-related unusual behavior I've noticed so far is with the CUDA Thumbnail filter. Maybe not a leak, but something is getting left behind in GPU memory, then cleared at the start of each render:

    • Load Vegas: 1.3 GB GPU memory
    • During 1st render with Thumbnail filter: 1.8 GB
    • After 1st render: 1.8 GB
    • Immediately after starting 2nd render: briefly drops back to 1.3 GB
    • During 2nd render: 1.8 GB
    • After 2nd render: 1.8 GB

    thumbnail filter.scene.zip

    I was not aware people run parallel instances of it

    In a multi-socket server/workstation system, it's normal to have multiple jobs running, each confined to its own NUMA node. Likewise if you're running multiple hardware encoders, each with different render projects. This is part of why VoukoderPro is such a big deal.
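
    For example, on Windows each instance can be started on its own NUMA node with the built-in start command. A sketch only - the node numbers and the Vegas install path are illustrative:

        start "render A" /NODE 0 "C:\Program Files\VEGAS\VEGAS Pro 20.0\vegas200.exe"
        start "render B" /NODE 1 "C:\Program Files\VEGAS\VEGAS Pro 20.0\vegas200.exe"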

    To be clear, the memory leak issue is occurring even with a single render in a single instance of Vegas. Simply starting and stopping the same render over and over will do this, as in Post #5 above. Leak still present in version 0.7.2.9.

    No change in version 0.7.2.8. Tried just a single render, and the problem exists there too.

    I tried to see if there was an upper limit to the memory usage. Eventually VoukoderPro refuses to run and displays the error "Unable to start VoukoderPro: Undefined error!" The screenshot below was taken at that point, after starting and stopping the same render many times. As you can see, total video memory usage has grown to over 35 GB.


    The last few lines of the log file at the time of the crash are:


    And here's an example scene that causes the issue:

    temp v0.1.scene.zip

    I'm afraid it isn't fixed in 0.7.2.7. Maybe improved, but still an issue.

    Sequence of actions starting and stopping the same render job, with VRAM usage (8 GB card):

    • Windows idle: 0.4 GB
    • Loaded 2 instances of Vegas 20: 1.8 GB
    • During 1st render: 3.6 GB
    • After 1st render: 3.0 GB
    • During 2nd render: 4.8 GB
    • After 2nd render: 4.1 GB
    • During 3rd render: 5.9 GB
    • After 3rd render: 5.3 GB
    • During 4th render: 7.1 GB
    • After 4th render: 6.4 GB
    • During 5th render: 7.8 GB (paging)
    • After 5th render: 7.2 GB (possibly paging)
    • During 6th render: 7.8 GB (page 10 GB)
    • After 6th render: 7.2 GB (paging)
    • During 7th render: 7.7 GB (page 12.x GB)
    • After 7th render: 7.3 GB (page 11.3 GB)

    Screenshot taken partway through the test run:


    Graphics card memory doesn't seem to be released after encoding with VoukoderPro, up to and including v0.7.2.6.

    I have GPU globally disabled in Vegas 20 (because it slows rendering down - it's a Vegas thing!).

    Typically I'm running 2 instances of Vegas in parallel, and rendering in both simultaneously using NVENC through VoukoderPro. My first pair of VoPro renders usually use about 3 GB of VRAM. When the renders end, the memory is not released. If I run a second set of renders, now usage is up to 6 GB. If I run a third set of renders, now usage is 9 GB (paged, my current card is 8 GB). And so on.

    This behavior has not been observed with other render methods, including Voukoder 13.0.2. These other methods release the video memory as soon as the render is stopped/finished.

    The memory is released if I restart Vegas.

    Sounds like maybe the same issue I had, related to which base VoPro template I used in Vegas?

    See post #17 in the following thread:

    Joe24
    July 31, 2023 at 07:28

    does it work if you select the 'YUV 4:2:0 (8 bit)' template?

    Yes, that was the problem. I guess I based my templates on the VoukoderPro YUVA template in Vegas, like you said.

    I started fresh with the YUV 4:2:0 8-bit template instead, linked it to my desired Scene, and it seems to render. I'll test it more tomorrow. But it does appear to encode what I told it to, and we do finally seem to have the speed advantage in rendering.
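
    For anyone hitting the same thing outside Vegas: in plain FFmpeg terms, an alpha-carrying (YUVA) source generally has to be converted to a non-alpha format such as plain 4:2:0 before it will go through a CUDA upload and NVENC. A minimal sketch with placeholder file names - this isn't how VoukoderPro wires it internally:

        ffmpeg -i in_with_alpha.mov -vf "format=yuv420p,hwupload_cuda" -c:v h264_nvenc out.mp4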

    That being the case, I have a suggestion for your consideration. Once a VoPro template in Vegas is renamed, there is no way to tell what pixel format is being exported from Vegas. And that can lead to confusion, as just witnessed.

    So why not have a dropdown menu in the Vegas template itself which allows you to select which pixel format to use? Perhaps accompanied by a helpful hint that if you use YUVA with certain CUDA functions, you're screwed! See below for a mockup of what I mean.

    This would serve as both control of the settings, and visual feedback of what the current settings are.


    Question: What video/picture format does Vegas pass to Voukoder? Is it a raw bitmap, or some form of h.264? If it's h.264, maybe h264_cuvid would offer an advantage over hwupload_cuda. I found it necessary for processing h.264 source material in FFmpeg; it seems able to cope with a lot of different h.264 flavors. Maybe this wouldn't be useful here, just a thought. See my command lines in Post #4 above for how I used this.
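
    For reference, this is roughly the pattern I mean - decode with h264_cuvid so the frames land in GPU memory and go straight into NVENC. A sketch only, with placeholder file names; whether it applies here depends on what Vegas actually hands to Voukoder:

        ffmpeg -hwaccel cuvid -c:v h264_cuvid -i in.mp4 -c:v h264_nvenc out.mp4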


    P.S.: I'll also add the 'hwdownload' filter, but I doubt it makes much sense to upload the frame to the GPU, scale it, download it to the CPU, then upload it again to encode it ...

    In some specific cases it might make sense: you may need some oddball filter that isn't available in CUDA, or want to balance the workload between your GPU and CPU, or even spread different jobs across multiple GPUs. There are outlying cases where it's worth it.
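
    As a sketch of what I mean (unsharp here just stands in for some CPU-only filter, and the file names are placeholders): scale on the GPU, drop back to the CPU for the filter, then let NVENC re-upload the frames when it encodes.

        ffmpeg -i in.mp4 -vf "format=nv12,hwupload_cuda,scale_cuda=1920:1080,hwdownload,format=nv12,unsharp" -c:v h264_nvenc out.mp4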

    the codec h.265 doesn't seem to have loaded into my copy of Vegas 13.

    Voukoder doesn't 'load any codecs into Vegas'. It won't give Vegas 13 the ability to open h.265 files. What it DOES do is give you a way to encode your video in Vegas 13 as h.265 using Voukoder in the Render As menu.

    Depending on your computer's capabilities, Voukoder can perform h.265 encoding by using either software (x265) or hardware (AMD, Nvidia, Intel).
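
    In FFmpeg terms (which is what Voukoder builds on), those choices map roughly onto encoders like the following. A sketch with placeholder file names and default settings; which of them actually works depends on the hardware in your machine:

        ffmpeg -i in.mp4 -c:v libx265 -crf 23 out_x265.mp4
        ffmpeg -i in.mp4 -c:v hevc_nvenc out_nvidia.mp4
        ffmpeg -i in.mp4 -c:v hevc_amf out_amd.mp4
        ffmpeg -i in.mp4 -c:v hevc_qsv out_intel.mp4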