I still don't get it. As you said, there's a buffer recording frames ahead of what you're currently watching, so you don't need to add the time between frames to the audio equation; the CPU is processing them before you see each frame. MVTools2, for example, can interpolate as many frames as you want, as long as you have CPU power available, and there's no audio sync problem. I'm not sure how audio sync is done, whether by time or by frame. If it's by time, it doesn't matter how many frames you have or want, as long as you know that at time X you must play audio X. If it's by frame, audio sync should be compensated properly by the interpolator, and let's be honest, that's not complex math, considering that it knows both the original and the desired framerate.

Nielo TM wrote:
That's because for MCFI to work it needs a minimum of two frames to interpolate the third and fourth frames, which causes lag. For example, the latency between frame A and frame B of 24p content is ~42 ms (1000/24). So 42 ms + processing + insertion can amount to increased lag. So the audio must be compensated, which is difficult but not impossible.
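To put rough numbers on the lag the quote describes, here's a back-of-the-envelope sketch. The two-frame lookahead is from the quote; the 10 ms processing cost is a hypothetical figure I'm assuming just for illustration:

```python
# Rough arithmetic sketch, not any real player's pipeline.
# Assumption: MCFI must buffer two source frames before it can emit
# interpolated frames, and per-frame processing adds some fixed cost.
src_fps = 24
frame_interval_ms = 1000.0 / src_fps      # ~41.7 ms between 24p frames
buffered_frames = 2                       # minimum MCFI lookahead (per quote)
processing_ms = 10.0                      # hypothetical processing cost

lag_ms = buffered_frames * frame_interval_ms + processing_ms
print(round(lag_ms, 1))  # ~93.3 ms that audio would need to be delayed by
```

Under those assumptions the total video delay is on the order of 90–100 ms, which is exactly the amount the audio would have to be shifted to stay in sync.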
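My point about time-based sync can be shown with a minimal sketch. This is an assumed model of how PTS-driven playback works, not any real player's code: each frame carries a presentation timestamp, and the clock, not the frame index, decides what plays when.

```python
# Minimal sketch of time-based A/V sync (an assumed model, not a real
# player's implementation): frames are selected by wall-clock time.

def frame_pts(frame_index, fps):
    """Presentation timestamp (ms) of a given frame at a given framerate."""
    return frame_index * 1000.0 / fps

def frame_shown_at(t_ms, fps):
    """Which frame is on screen at time t_ms."""
    return int(t_ms * fps / 1000.0)

# 24p source interpolated to 60 fps: the frame counts differ...
t = 500.0
print(frame_shown_at(t, 24))  # source frame 12
print(frame_shown_at(t, 60))  # interpolated frame 30

# ...but both frames sit at the same point on the timeline, so audio
# sampled by time (play audio X at time X) never drifts, no matter how
# many frames the interpolator inserts.
print(frame_pts(frame_shown_at(t, 24), 24))  # 500.0 ms
print(frame_pts(frame_shown_at(t, 60), 60))  # 500.0 ms
```

That's why, if sync is by time, the interpolated frame count is irrelevant; only the buffering delay in the quote above would need compensation.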
I have enough CPU power available here (Core i7 860 @ 3.5 GHz), and the peak CPU usage I've seen is 20% with an H.264 1080p video. What do you think?
BTW, I wrote before that I didn't experience lag with Motion 2, and now it looks like I'm saying the opposite. What I meant before is that the motion was fluid the whole time the interpolator judged the scene 'interpolable'. What I'm saying now is that the motion is still fluid, but there's a lag (i.e., a sync problem) between video and audio.