GSAPP CDP 2023-4 Colloquium II

Yilin Wang

1 – Explore

Urban 3D Morphology Prediction Model Adaptive to Local Climate Zone (LCZ) Classification

Explore / Experience

In the first few weeks of this semester, I explored how AI can serve as a tool in computational design, and I ran several experiments on different aspects of this question.

Experiments on combining AI with design / data visualization

The first experiment combined parametric design with Stable Diffusion rendering. Stable Diffusion WebUI is originally a text/image-to-image tool, which lets architectural designers produce renderings far more conveniently. I connected this AI component to a Grasshopper parametric model, so that renders are produced interactively and conveniently as the parametric architectural model changes across design options.
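As a rough illustration of this loop, below is a minimal Python sketch that posts a viewport capture to a locally running WebUI instance (launched with its --api flag) and saves the returned render; the file names and prompt are placeholders:

    import base64
    import requests

    WEBUI_URL = "http://127.0.0.1:7860"  # default local WebUI address

    # Encode a viewport capture exported from Grasshopper (placeholder file name).
    with open("viewport_capture.png", "rb") as f:
        init_image = base64.b64encode(f.read()).decode()

    payload = {
        "init_images": [init_image],
        "prompt": "aerial rendering of an urban block, photorealistic",
        "denoising_strength": 0.55,  # how far the render may drift from the model
        "steps": 30,
    }
    response = requests.post(f"{WEBUI_URL}/sdapi/v1/img2img", json=payload)
    response.raise_for_status()

    # The WebUI returns base64-encoded images; save the first one.
    with open("render.png", "wb") as f:
        f.write(base64.b64decode(response.json()["images"][0]))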

The second experiment integrated computer vision with data visualization in JavaScript/Mapbox. To build the dataset, I first used the Google Street View API to download street view images, then ran semantic segmentation on each image, and finally calculated each component's pixel ratio per image.
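As a sketch of that pipeline, the snippet below fetches one image through the Street View Static API and computes per-class pixel ratios from a segmentation label mask; the API key, coordinates, and label file are placeholders, and the mask itself would come from whatever segmentation model is used:

    import numpy as np
    import requests

    API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder
    url = (
        "https://maps.googleapis.com/maps/api/streetview"
        f"?size=640x640&location=40.8075,-73.9626&key={API_KEY}"
    )
    with open("streetview.jpg", "wb") as f:
        f.write(requests.get(url).content)

    # 'labels' is a per-pixel class-id array produced by a semantic
    # segmentation model (e.g. one trained on Cityscapes).
    labels = np.load("streetview_labels.npy")
    ids, counts = np.unique(labels, return_counts=True)
    ratios = {int(i): count / labels.size for i, count in zip(ids, counts)}
    print(ratios)  # per-image share of each component (sky, trees, buildings, ...)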

Research on the GAN (pix2pix) algorithm

Generative Adversarial Networks (GANs) are a type of deep learning model consisting of two parts: a generator and a discriminator. The generator is responsible for creating realistic-looking data, while the discriminator's job is to distinguish between data produced by the generator and real data. Through this adversarial process, the generator learns to produce increasingly realistic data.

Pix2pix is a specialized form of GAN designed for paired image-to-image translation. Trained on paired input and output images, it learns to transform an input image into a corresponding output image, making it suitable for tasks like style transfer, colorization, and more.
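Concretely, the pix2pix generator is trained with a combination of the adversarial (cGAN) loss and an L1 reconstruction loss against the paired ground truth. A minimal PyTorch sketch of that generator objective, using the paper's default weight of 100 on the L1 term:

    import torch
    import torch.nn as nn

    bce = nn.BCEWithLogitsLoss()  # adversarial term: fool the discriminator
    l1 = nn.L1Loss()              # reconstruction term: stay near the ground truth
    LAMBDA_L1 = 100.0             # default L1 weight in the pix2pix paper

    def generator_loss(disc_logits_on_fake, fake_image, real_image):
        adv = bce(disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))
        return adv + LAMBDA_L1 * l1(fake_image, real_image)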

Existing Research on GAN as a tool for city planning

Here are papers on using GANs in urban planning. The first uses a GAN to generate imagined building footprints for blocks, from which designers then build 3D models. Its paired input and output images are empty block boundaries and the ground-truth building footprints in the respective blocks.

Local Climate Zones (LCZ) in the microclimate analysis of urban planning

Cities endure severe microclimate problems such as urban heat islands, heat waves, and flooding.

Local Climate Zones (LCZ) maps play a crucial role in urban planning by providing detailed classifications of urban and natural environments based on thermal and physical properties. These maps facilitate the understanding of microclimates within cities, aiding in designing more sustainable and climate-resilient urban spaces.

By categorizing areas into distinct climate zones, LCZ maps help urban planners and environmental researchers to assess the impact of urban design on local temperature, air quality, and energy consumption. This data-driven approach enables informed decision-making for urban development, prioritizing both environmental sustainability and the well-being of urban inhabitants.

Research / Experience on the current state of computational urban design / parametric design

While microclimate plays a significant role in adaptive urban planning, in the following two experiences with typical computational design workflows in architecture and urban planning, microclimate is usually treated as a metric for evaluating designed options rather than as an input parameter that designers take into account at the beginning of the design process.

Research Question: Can we generate urban 3D morphology that rapidly responds to LCZ changes?

Current computational urban design primarily involves generating 3D models from various parameters (street width, block shape, FAR, building types), followed by environmental analysis of those models.

Meanwhile, AI research in the built environment field has mainly focused on creating two-dimensional images for visual presentation or further data visualization.

Therefore, the research question for my project is: Can designers generate urban 3D morphology models using urban microclimate data (LCZ maps) as parameters? Is it possible to use LCZ maps as a design guide, so that when the color classification of an LCZ map changes, the urban morphology changes responsively?

The primary goal of the project is to use LCZ maps to predict and modify urban 3D morphology models. This has the potential to influence urban planning and design by enabling more dynamic adaptation to the local climate, fostering more sustainable and responsive urban environments.

2 – Explain

Explain a design framework for generating urban 3D morphology that rapidly responds to local climate zone (LCZ) changes

Overall Methodology

The following diagram shows the overall methodology for my capstone project. It can be conceived of as two parts: the first sets up the AI-parametric design workflow, and the second uses this workflow to generate new design options.

The first part consists of two procedures: pix2pix, then parametric design. The paired input and output images for the pix2pix stage are LCZ classification maps and ground-truth morphology maps at the same scale. By feeding these paired datasets into the pix2pix algorithm and training the model, we can generate a new urban morphology map from a new input LCZ classification map. The generated urban morphology maps are then imported into Grasshopper to build parametric models of the corresponding urban blocks.

The second part applies changes to the existing LCZ map and road network so that the urban 3D model updates responsively as the LCZ data changes. In this way, the urban 3D model displays a dynamic response to the LCZ map.

Data Source -- Ground-Truth Urban Morphology Maps

As introduced before, the dataset for the pix2pix model consists of paired input and output images.

The output images for training the model are ground-truth urban morphology maps. I obtained city footprints from the Mapbox Streets v8 tileset and color-coded water and greenery. From OSM Buildings I acquired GeoJSON data containing each building's height. Finally, I joined the height information with the building footprints and color-coded them in red. Since broader access to height information on OSM Buildings requires an API subscription, which is quite expensive, this semester I only acquired building height information for the city centers of New York City and Los Angeles.
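A sketch of how one such ground-truth tile could be produced with GeoPandas and Matplotlib; the input file names are placeholders, and the 'height' attribute is assumed to be the joined OSM Buildings field:

    import geopandas as gpd
    import matplotlib.pyplot as plt

    # Layers prepared beforehand (placeholder file names).
    water = gpd.read_file("water_nyc.geojson")
    green = gpd.read_file("greenery_nyc.geojson")
    buildings = gpd.read_file("osm_buildings_nyc.geojson")  # has a 'height' field

    fig, ax = plt.subplots(figsize=(5, 5))
    water.plot(ax=ax, color="blue")
    green.plot(ax=ax, color="green")
    buildings.plot(ax=ax, column="height", cmap="Reds")  # taller -> deeper red
    ax.set_axis_off()
    fig.savefig("morphology_tile.png", dpi=200, bbox_inches="tight")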

The input images for training the model are LCZ classification maps. I accessed existing LCZ maps through the World Urban Database and Access Portal Tools (WUDAPT). The resolution of these data is 30 m, detailed enough to distinguish building footprints at the scale of city blocks. I chose metropolises for the LCZ maps, since megacities contain as many LCZ classes as possible, and focused on each city's central area for analysis.

3 – Propose

Make a prototype and list final deliverables for next semester

Make a prototype for the capstone project

During this semester, I built a prototype for preparing the datasets and training the pix2pix model. Its position in the overall methodology is shown below.

MVP for the pix2pix part

I chose data covering the city centers of New York City and Los Angeles. From QGIS, I exported paired images of the LCZ map plus road network and the urban morphology map with building information color-coded. The images in each pair share the exact same location and dimensions, covering roughly 1 km × 1 km.

Training and testing the pix2pix model

The dataset currently contains 80 paired input and output images of the LCZ maps and urban morphology maps of New York City's and Los Angeles's central areas. For the pix2pix prototype, I randomly selected 80% of the pairs for training, 15% for validation, and 5% for testing.
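With the cited pytorch-CycleGAN-and-pix2pix repo, the data pairing, training, and testing runs look roughly like this; the folder names (A, B, lcz2morph) and experiment name are my own placeholders:

    # Pair A (LCZ maps) and B (morphology maps) into aligned side-by-side images.
    python datasets/combine_A_and_B.py --fold_A ./A --fold_B ./B --fold_AB ./datasets/lcz2morph

    # Train, then test, the LCZ-to-morphology translator.
    python train.py --dataroot ./datasets/lcz2morph --name lcz_pix2pix --model pix2pix --direction AtoB
    python test.py --dataroot ./datasets/lcz2morph --name lcz_pix2pix --model pix2pix --direction AtoB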

The paired images on the right show two results from the test data. From left to right, the images are: the existing LCZ map, the ground-truth urban morphology map, and the generated ("fake") urban morphology map.

List of final deliverables

The first deliverable is gathering and preprocessing a complete dataset. As introduced before, the LCZ data acquired this semester came from the WUDAPT website and were generated by others. Next semester, I plan to use the LCZ Generator to create my own LCZ maps.

Another important prototype for the final deliverable is a custom Grasshopper tool that generates a 3D model from the imagined urban morphology maps produced by the pix2pix model.
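One plausible building block for that tool, assuming building height was encoded linearly in the red channel when the training tiles were made (the encoding ceiling below is an assumption), is a small Python sketch that decodes per-pixel heights from a generated map, which a GhPython component could then use to extrude footprints:

    import numpy as np
    from PIL import Image

    MAX_HEIGHT_M = 300.0  # assumed ceiling of the red-channel height encoding

    img = np.asarray(Image.open("generated_morphology.png").convert("RGB"))
    red = img[..., 0].astype(float)
    is_building = red > img[..., 1]  # reddish pixels are read as buildings
    heights = np.where(is_building, red / 255.0 * MAX_HEIGHT_M, 0.0)

    # 'heights' can then drive footprint extrusion inside a Grasshopper
    # (GhPython) component to rebuild the 3D massing.
    np.save("heights.npy", heights)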

Thorough documentation, including the paired-image dataset, the responsive 3D model, the models' dynamic responses, and a website/video presenting the whole workflow, will be delivered next semester.

Bibliography

https://github.com/SerjoschDuering/StableDiffusion-Grasshopper -- GitHub repo on setting up Stable Diffusion in Grasshopper
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix -- GitHub repo on what pix2pix is and how to use the algorithm
https://journals.sagepub.com/doi/full/10.1177/23998083221100550 -- a paper on integrating GANs into urban design
https://link.springer.com/chapter/10.1007/978-981-33-4400-6_10#Fig2 -- a paper on integrating GANs into urban design
https://www.frontiersin.org/articles/10.3389/fenvs.2021.637455/full -- a paper on using the LCZ Generator to create one's own LCZ maps
https://www.wudapt.org/lcz-maps/ -- World Urban Database and Access Portal Tools (WUDAPT), where the LCZ data and LCZ Generator are accessed
https://scout.kpfui.dev/?project=tampa -- a KPFui case study of current computational urban design

Data Source

https://docs.mapbox.com/ -- Mapbox documentation for setting up and customizing map styles
https://osmbuildings.org/ -- OSM Buildings, whose API provides height information for most buildings worldwide