OpenAI’s DALL.E 2

OpenAI has released DALL.E 2, the second generation of its model that generates images from text inputs. DALL.E showcases OpenAI’s research in multimodal and generative models. DALL.E 2 can edit images based on natural-language captions, and can generate variations of input images that are much more realistic than those produced by DALL.E.

DALL.E 2 leverages two recent advances in deep learning: CLIP and diffusion models. CLIP, developed by OpenAI, jointly embeds images and their associated text. Diffusion models are generative models that sample images by gradually removing noise from an initial signal. OpenAI released a paper last year comparing diffusion models to GANs for image synthesis.
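To make those two ideas concrete, here is a minimal, illustrative sketch, not OpenAI’s implementation: a CLIP-style score that compares an image embedding and a text embedding in a shared space, and a diffusion-style sampler that starts from pure noise and repeatedly applies a denoising step. The `denoiser` callable and all shapes are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F


def clip_style_score(image_embedding: torch.Tensor, text_embedding: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between an image embedding and a text embedding
    that live in the same joint space (the core idea behind CLIP)."""
    image_embedding = F.normalize(image_embedding, dim=-1)
    text_embedding = F.normalize(text_embedding, dim=-1)
    return (image_embedding * text_embedding).sum(dim=-1)


@torch.no_grad()
def diffusion_style_sample(denoiser, shape=(1, 3, 64, 64), steps=50):
    """Start from Gaussian noise and iteratively remove it.
    `denoiser(x, t)` is assumed to return a slightly less noisy image."""
    x = torch.randn(shape)            # pure noise
    for t in reversed(range(steps)):  # gradually denoise
        x = denoiser(x, t)
    return x                          # final sample
```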

The following is the abstract of the paper behind DALL.E 2:

“Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.”
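The two-stage design described in the abstract can be summarized as a simple data flow: a prior maps a text caption to a CLIP image embedding, and a diffusion decoder turns that embedding into pixels. The sketch below is purely schematic; `text_encoder`, `prior`, `decoder`, and `image_encoder` are hypothetical components standing in for the models described in the paper.

```python
def generate_image(caption: str, text_encoder, prior, decoder):
    """Text-to-image: caption -> CLIP text embedding -> CLIP image embedding -> image."""
    text_embedding = text_encoder(caption)    # CLIP text embedding of the caption
    image_embedding = prior(text_embedding)   # stage 1: predict a CLIP image embedding
    return decoder(image_embedding)           # stage 2: decode the embedding into an image


def generate_variation(image, image_encoder, decoder):
    """Variations: re-encode an existing image into the CLIP space and decode it again,
    preserving semantics and style while non-essential details vary."""
    image_embedding = image_encoder(image)
    return decoder(image_embedding)
```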

Note: The picture above was created by DALL.E 2.

Copyright © 2005-2022 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com