Hey ChristianundCo,
I agree that scene detection would be a really great feature to get working well in DainApp. I've had problems too with interpolated frames that merge two scenes together. Even worse is when you use Split Frame and get a mind-bending mosaic of the two scenes!
In the meantime, I've been splitting my footage into scenes using Premiere (loads of other video editing apps can do this too) and rendering them out as individual videos before putting them into DainApp. Then I put the output back into Premiere to reconstruct the full video, add the original audio back in, and render it as a whole.
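If you'd rather not click through Premiere every time, the splitting step can be scripted. Here's a rough sketch using ffmpeg's scene-change filter from Python; the 0.4 threshold and the filenames are placeholders I picked, so treat it as a starting point rather than anything definitive:

```python
import re
import subprocess

VIDEO = "input.mp4"  # hypothetical filename
THRESHOLD = 0.4      # scene-change sensitivity; tune per source

# Step 1: ask ffmpeg to flag frames whose scene-change score exceeds
# the threshold, and scrape their timestamps from the stderr log.
probe = subprocess.run(
    ["ffmpeg", "-i", VIDEO, "-vf",
     f"select='gt(scene,{THRESHOLD})',showinfo",
     "-f", "null", "-"],
    capture_output=True, text=True,
)
cuts = [float(t) for t in re.findall(r"pts_time:([\d.]+)", probe.stderr)]

# Step 2: cut the source at those timestamps so each scene can be
# fed to DainApp on its own.
bounds = [0.0] + cuts + [None]
for i, (start, end) in enumerate(zip(bounds, bounds[1:])):
    cmd = ["ffmpeg", "-y", "-ss", str(start), "-i", VIDEO]
    if end is not None:
        cmd += ["-t", str(end - start)]  # duration of this scene
    cmd += ["-c", "copy", f"scene_{i:03d}.mp4"]
    subprocess.run(cmd, check=True)
```

One caveat: stream-copy cuts snap to the nearest keyframe, so if you need frame-accurate splits you'd re-encode each segment instead of using -c copy.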
Very time-intensive depending on how many scenes you have, but this method means you keep every frame possible. Also, adding the original audio back in at the last step means you can use time remapping to stretch or compress it to match the interpolated video if you've lost or removed some frames. This gets hard if a video mixes long scenes with a run of short ones, since you lose more frames around the quick scene changes, and the audio stops lining up if you only adjust the overall length. Then you have to start keyframing your audio time remapping... let's not even go there...
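For the simple case (stretching the whole track, no keyframes), the arithmetic is just: tempo factor = original duration / interpolated duration. Here's a sketch that measures both with ffprobe and applies ffmpeg's atempo filter; the filenames are made up, and note that atempo only accepts factors between 0.5 and 2.0 per instance, which is fine here since the mismatch is usually tiny:

```python
import subprocess

def duration(path):
    # ffprobe prints the container duration in seconds
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

orig = duration("original.mp4")                 # hypothetical filename
interp = duration("interpolated_joined.mp4")    # hypothetical filename

# atempo > 1 speeds audio up, < 1 slows it down; to make the original
# audio span the new video's length we play it at orig/interp speed.
tempo = orig / interp
subprocess.run(
    ["ffmpeg", "-y", "-i", "interpolated_joined.mp4", "-i", "original.mp4",
     "-map", "0:v", "-map", "1:a",       # video from new file, audio from old
     "-filter:a", f"atempo={tempo:.6f}",
     "-c:v", "copy", "matched.mp4"],
    check=True,
)
```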
When you try to remap time manually, you quickly realize how difficult it would be to do automatically!
So as long as your original video doesn't have any fades between scenes, splitting the video up before processing works really well.
Thinking about it a bit more: if your original video has a fade of a few frames between scenes and there's no motion in those frames, you could remove the fade entirely before processing. Then, once processed, you could use the last and first frames of the two scenes to re-create the fade and fill in those pesky gaps that mess up your audio sync.
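To make that concrete, re-creating the fade is just a linear blend between the two boundary frames. A minimal sketch, assuming the fade is linear, the frames really are static, and you've exported the boundary frames as PNGs (the filenames below are made up):

```python
import numpy as np
from PIL import Image

FADE_FRAMES = 8  # length of the fade you removed (assumption: adjust to match)

# Last frame of the outgoing scene, first frame of the incoming one.
a = np.asarray(Image.open("scene_001_last.png"), dtype=np.float32)
b = np.asarray(Image.open("scene_002_first.png"), dtype=np.float32)

for i in range(FADE_FRAMES):
    # Linear cross-fade: weight slides from mostly A towards mostly B.
    t = (i + 1) / (FADE_FRAMES + 1)
    blended = (1.0 - t) * a + t * b
    Image.fromarray(blended.astype(np.uint8)).save(f"fade_{i:02d}.png")
```

You'd then drop those generated frames back into the image sequence between the two scenes before the final encode, so the output length matches the original and the audio stays in sync.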
I love these types of thought experiments!
Theo