Gen-2 is an advanced AI system developed by Runway Research that revolutionizes the process of video synthesis. This multi-modal AI is capable of creating unique videos by utilizing text, images, or video clips as input. With Gen-2, users have an extraordinary level of precision, realism, and control over the videos they generate, allowing them to explore various styles and concepts simply by providing a text prompt.
Below is a video generated entirely by artificial intelligence from the prompt "Indoor urban jungle". The service is still experimental, but the results are already impressive!
Information from their website:
Deep neural networks for image and video synthesis are becoming increasingly precise, realistic and controllable. In a couple of years, we have gone from blurry low-resolution images to both highly realistic and aesthetic imagery allowing for the rise of synthetic media. Large language models and models with shared text-image latent spaces, such as CLIP, are now also enabling new ways of interacting with software and synthesizing media. Diffusion models are a prime example of the power of such approaches. Runway Research is at the forefront of these developments and we ensure that the future of content creation is both accessible, controllable and empowering for users.
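The passage above mentions diffusion models as a prime example of these generative approaches. As a rough illustration (not Runway's actual implementation, which is not public), here is a minimal NumPy sketch of the DDPM-style forward noising process that diffusion models learn to reverse; the linear beta schedule and the closed-form for q(x_t | x_0) follow the standard DDPM formulation:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t from q(x_t | x_0): progressively noise a clean sample x0."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]  # cumulative product up to timestep t
    noise = rng.standard_normal(x0.shape)
    # Closed form: x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Linear noise schedule over 1000 steps, as in the original DDPM paper.
betas = np.linspace(1e-4, 0.02, 1000)
rng = np.random.default_rng(0)

x0 = np.ones((4, 4))                       # stand-in for one clean video frame
x_early = forward_diffuse(x0, 0, betas, rng)    # barely noised, still close to x0
x_late = forward_diffuse(x0, 999, betas, rng)   # essentially pure Gaussian noise
```

A generative model like Gen-2 is trained to run this process in reverse, starting from noise and denoising step by step toward a sample consistent with the conditioning input (text, image, or video).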
Gen-2 provides a few free seconds of generation, so you can try it for free: https://runwayml.com/