This is just insane. I expected artificial intelligence to take at least two to three years to reach this level, but progress has been remarkably swift, especially considering the accuracy of AI-generated videos. We're just one step away from videos that are indistinguishable from reality; even these clips come exceptionally close.
A few things to realise here: this model isn't just good for realistic footage, but also for VFX-style clips, anime, and more.
Here’s what makes this new AI model groundbreaking:
• In the initial phase, OpenAI has only allowed certain individuals and filmmakers to test the product, for safety reasons.
• Currently, the AI can generate videos up to one minute long per prompt. While this limitation may change over time, even this capability places it far ahead of competitors like RunwayML.
• Sora can also create multiple shots within a single generated video that accurately persist characters and visual style. In short, it produces multiple clips with the same style and energy that look as if they were shot in the real world.
• OpenAI says that the current model does have weaknesses as well. It may struggle to accurately simulate the physics of a complex scene. The example they give: “a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark.”
Nonetheless, this is a huge leap towards a future of video creation where we might see plenty of movies that were never shot, but were made just by typing prompts. Let me know your thoughts in the comments: are you excited about this tech, or is it something you're concerned about?