

Artificial Intelligence for Image Research

A guide on how to use Generative AI for image generation, editing, concept creation and development.

Visualization Workflows

Image generation tools can create highly convincing final renders for visualization in art and architecture. While earlier concept-creation techniques embrace unintentional generations as a way to discover new ideas, visualization requires image generation that is highly specific: form, surroundings, image composition, and mood must all be iteratively tuned toward the desired output.

 

In traditional visualization workflows, final renders and images typically follow this order:

1. Base image creation (an image of the building form, in the intended view/perspective, with materials applied or areas designated for collage)

2. Editing (the base image is augmented with new materials, backgrounds, surroundings, etc. in Photoshop or other image-editing software)

 

AI techniques follow a similar order and can supplement traditional workflows to speed up the process.

Base image creation:

1. LookX - An architecture-specific image generation model

2. Stable Diffusion XL with ControlNet - An extension to the SDXL model that lets users generate images from a text prompt plus a base image of their own (e.g., a sketch or set of lines)
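As a rough illustration of what ControlNet consumes: its widely used Canny variant expects the user's sketch or line drawing to be converted into a binary edge map, which then constrains the composition of the generated image. The NumPy-only gradient filter below is a simplified stand-in for a real edge detector (production pipelines typically use OpenCV's cv2.Canny and a library such as Hugging Face diffusers); the function name, threshold, and synthetic input are assumptions for demonstration only.

```python
# Simplified sketch of preparing an edge-map "control image" for ControlNet.
# A real workflow would use cv2.Canny; this gradient filter only shows the idea.
import numpy as np

def edge_map(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Return a binary edge image (0 or 255) from a grayscale array in [0, 1]."""
    # Horizontal and vertical intensity differences (central differences)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    magnitude = np.hypot(gx, gy)
    # Pixels where intensity changes sharply become white edge pixels
    return (magnitude > threshold).astype(np.uint8) * 255

# A synthetic "sketch": a dark square (a building footprint, say) on a light page
img = np.ones((64, 64))
img[16:48, 16:48] = 0.0
edges = edge_map(img)  # white pixels trace the square's outline
```

The resulting edge image would then be passed to the pipeline alongside a text prompt, with a conditioning-scale parameter controlling how strictly the generation follows the drawn lines.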

Image editing:

1. Photoshop Generative Fill - Lets users select regions of an image and edit them with text prompts.

2. Illustrator Text-to-Vector Graphic - Allows users to generate editable vector graphics via text prompts.