Update: So, the manual splitting doesn't help after all. The artifacts appear even in the small segmented videos; the most typical case is the blue sky. So it seems I cannot use the DAIN app on these videos. :-(
Update: So, after some testing I found a workaround. Changing the section size or padding doesn't help; the only option is to split the input video before the actual processing. I use a combination of a batch script and ffmpeg, so this is not a big deal, but I then have to run the DAIN process on each segment (25 segments in my case) manually.
Is there any way to control the DAIN app from the command line so that I could automate the whole process? It would be extremely helpful for me!
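In case it helps anyone, the splitting step looks roughly like this. My actual script is a Windows batch file; below is just a small Python sketch around ffmpeg, and the file names and segment length are placeholders:

```python
# Rough sketch of the splitting step, assuming ffmpeg is on PATH.
# Input name and segment length are placeholders; adjust to your footage.
import subprocess

INPUT = "fulldome_input.mp4"   # placeholder input file name
SEGMENT_SECONDS = 60           # arbitrary segment length

subprocess.run([
    "ffmpeg", "-i", INPUT,
    "-c", "copy",                # no re-encoding; cuts land on keyframes,
                                 # so segment lengths are approximate
    "-f", "segment",
    "-segment_time", str(SEGMENT_SECONDS),
    "-reset_timestamps", "1",
    "segment_%03d.mp4",
], check=True)

# Each segment_###.mp4 then has to be loaded into the DAIN APP GUI and
# processed by hand, since there is no command-line interface that I know of.
```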
Hello, I am using DAIN APP 0.31 on fulldome videos (a special video format captured with a fisheye lens and used in digital planetariums). The algorithm performs great in terms of motion smoothness, but it sometimes produces strange artifacts. They typically occur in large homogeneous areas such as a blue sky (rarely, they appear elsewhere). Here's a sample:
Does anyone have a similar experience? Do you know of any workaround? I am using a GTX 1080 card; the video resolution is 3744x3744 at 29.97 fps. I split the frames into 624x624-pixel sections with 70-90 pixels of padding and export to PNG. Any advice is much appreciated.
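For clarity, this is only my understanding of what the split option does, not the app's actual code: each section is cropped with an overlapping border so the seams can be blended back together after interpolation. A rough sketch of the tiling arithmetic with my numbers:

```python
# My understanding of the section/padding split (not the app's actual code):
# each 624x624 section is read with an extra overlapping border.
FRAME = 3744       # frame width/height of my videos
SECTION = 624      # section size I use
PAD = 80           # padding in the 70-90 px range I use

tiles = []
for y in range(0, FRAME, SECTION):
    for x in range(0, FRAME, SECTION):
        # Padded crop window, clamped to the frame borders
        x0, y0 = max(0, x - PAD), max(0, y - PAD)
        x1, y1 = min(FRAME, x + SECTION + PAD), min(FRAME, y + SECTION + PAD)
        tiles.append((x0, y0, x1, y1))

print(len(tiles), "sections per frame")   # 6 x 6 = 36 for these numbers
```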
I would be very grateful if the devlog could be kept up to date. I am now testing the app on fulldome videos with 4K+ resolution, so I very much appreciate version 0.3, which introduced the split option. However, it is hard for me to keep track of development. Thanks very much for the effort anyway, I will surely spread the word among the fulldome community!
Similar behaviour here. But if you click on the GPU to view the details, you can see some activity in the memory-copy section. My guess is that the algorithm performs a lot of memory accesses but relatively little computation, which is generally not ideal for GPU acceleration. However, the application seems to be a fairly straightforward build of the Python code, so the GPU code most likely comes from Python libraries. There is definitely room for optimization, but it would probably require rewriting the app from scratch in CUDA C/C++.
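To show what I mean by memory-bound versus compute-bound, here is a quick, rough test with PyTorch (assuming a CUDA build of PyTorch is installed; this is not a profile of DAIN itself, just an illustration of the general effect):

```python
# Rough illustration of memory-bound vs compute-bound GPU work.
# Assumes a CUDA build of PyTorch; not a profile of DAIN itself.
import time
import torch

assert torch.cuda.is_available()
x = torch.randn(8192, 8192, device="cuda")

def timed(fn, label):
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(20):
        fn()
    torch.cuda.synchronize()
    print(f"{label}: {(time.time() - t0) / 20 * 1000:.1f} ms/iter")

# Memory-bound: reads and writes every element, almost no arithmetic
timed(lambda: x.clone(), "elementwise copy (memory-bound)")
# Compute-bound: many arithmetic operations per element loaded
timed(lambda: x @ x, "matmul (compute-bound)")
```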
From the screenshot you seem to have a similar problem to mine: almost 6 GB of memory is already allocated by other applications. Try closing all other programs (especially video/image editors such as Photoshop, Premiere, etc.). Also try connecting your monitor to the integrated GPU if possible. That way, you'll have all of the 2080's memory available for DAIN.
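If you want to check how much of the card is already taken before launching DAIN, something like this works (assuming the standard nvidia-smi tool that ships with the NVIDIA driver; the snippet just parses its output):

```python
# Quick check of GPU memory already in use, assuming nvidia-smi is installed
# (it ships with the NVIDIA driver).
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=memory.used,memory.total",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# First line corresponds to the first GPU
used, total = (int(v) for v in out.splitlines()[0].split(","))
print(f"{used} MiB of {total} MiB in use; {total - used} MiB free for DAIN")
```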