How to use OpenAI’s Sora, a text-to-video generator


    ChatGPT was just the beginning. OpenAI has plenty of exciting AI projects in its portfolio, one of which was unveiled on Thursday: Sora. It’s a hyper-realistic text-to-video diffusion model that can generate realistic 1080p videos up to a minute long from written prompts.

    No, it’s not the first text-to-video AI model; prominent options such as Runway ML and Pika Labs already exist. But the output from OpenAI’s Sora model is simply stunning. AI enthusiasts (including me) were blown away by the demos shared by OpenAI and Sam Altman.

    In a blog post, OpenAI wrote, “Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.”

    While many people are amazed by the results OpenAI has shown, many are also looking for ways to try Sora themselves. Here, I’ll tell you what it takes to use the Sora text-to-video generator.

    How to use OpenAI’s Sora text-to-video generator

    While the public announcement is live, OpenAI hasn’t made Sora accessible to the public. The company says it’s available to red teamers to assess critical areas for harms and risks. In other words, it’s being tested thoroughly to ensure it doesn’t produce harmful or inappropriate videos.

    OpenAI has also granted access to a number of visual artists, designers, and filmmakers to gather feedback on how to make the model most helpful for creative professionals.

    Is there a way for others to try Sora? Well, there is one, but you’ll need some luck: reply with your text prompt to Sam Altman’s Twitter post, and if you’re picked, you’ll get the generated video.

    OpenAI hasn’t revealed when Sora will be released to the public. So, if you haven’t been invited to try it, sit tight and enjoy the demos.