Machine learning shows fresh potential in urban renewal

• In the 2010s, the city of New York set an example for urban authorities when it used big data to optimise public services.
• Since then, progress in machine learning has led to further advances in the field of data analysis.
• A new computer vision project has notably demonstrated how Google Street View images can now be used to monitor urban decay.

In a ground-breaking project in the 2010s, the city of New York reorganized a wide range of public services around the analysis of big data collected by local authorities. These included measures to prune the city’s trees, and to investigate buildings with high levels of fire risk, properties managed by slumlords, and restaurants illegally dumping cooking oil into public sewers. Since then, progress in the field of machine learning has continued to extend the potential for data-driven public initiatives, and scientists are also investigating new data sources on which such initiatives could be based. Among them, two researchers from Stanford University (California) and the University of Notre Dame (Indiana) recently presented a new approach to the monitoring of urban decay in the journal Scientific Reports.

We wanted to highlight the flexibility of the approach rather than propose a method with a fixed set of features.

Urban decay in eight variables

The algorithm developed by their project identifies eight visual features of urban decay in street-view images: potholes, barred or broken windows, dilapidated facades, tents, weeds, graffiti, garbage, and utility markings. Until now, the researchers note, the measurement of urban change has largely centred on quantifying urban growth, primarily by examining land use, land cover dynamics and changes in urban infrastructure.
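Assuming a detector that returns labelled bounding boxes, the eight visual features could be aggregated into a simple per-image decay summary. The class names below are those listed in the article; the `Detection` type, the helper function and its confidence threshold are purely illustrative, not the researchers’ actual code:

```python
# Illustrative sketch: aggregate object detections into a per-image
# urban-decay summary. The eight classes come from the article; the
# Detection type and the 0.5 confidence cut-off are hypothetical.
from collections import Counter
from dataclasses import dataclass

DECAY_CLASSES = {
    "pothole", "barred_or_broken_window", "dilapidated_facade",
    "tent", "weeds", "graffiti", "garbage", "utility_markings",
}

@dataclass
class Detection:
    label: str         # predicted class name
    confidence: float  # detector confidence in [0, 1]

def decay_summary(detections, min_conf=0.5):
    """Count confident detections per decay class for one image."""
    counts = Counter(
        d.label for d in detections
        if d.label in DECAY_CLASSES and d.confidence >= min_conf
    )
    return dict(counts)

dets = [Detection("pothole", 0.9), Detection("graffiti", 0.4),
        Detection("weeds", 0.7), Detection("car", 0.95)]
print(decay_summary(dets))  # → {'pothole': 1, 'weeds': 1}
```

Summaries of this kind, computed over many street-view images of the same block across several years, are what allows decay to be tracked over time.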

The idea of their project was not so much to show all that can be done with street-view images, but rather to test the use of a single algorithm trained on data from several cities, and if necessary to retrain it without modifying its underlying structure. It should also be noted that the data used was not collected by public authorities but drawn from a new source: “Big data and machine learning are increasingly being used for public policies,” points out Yong Suk Lee, an assistant professor at Notre Dame specializing in technology and urban economics. “Our proposed method is complementary to these approaches. Our paper highlights the potential to add street-view images to the increasing toolkit of urban data analytics.”

A doubly trained algorithm

As the researchers explain, the automated analysis of images can make it easier to assess the extent of deterioration: “The measurement of urban decay is further complicated by the fact that on the ground measurements of urban environments are often expensive to collect, and can at times be more difficult, and even dangerous, to collect in deteriorating parts of the city.”

The research project focused on images from three urban areas: the Tenderloin and Mission districts in San Francisco, Colonia Doctores and the historic centre of Mexico City, and the western part of South Bend, Indiana, an average-sized American town.

A single algorithm (YOLO) was trained twice, on two different corpora. The first of these was composed of manually collected pictures from the streets of San Francisco and images of graffiti captured in Athens (Greece) from the STORM corpus. This dataset also included Google Street View shots of San Francisco, Los Angeles and Oakland showing homeless people’s tents and tarps, as well as images of Mexico City. All of these were sourced from a multiyear period to measure ongoing change. The Mexico City pictures were then withdrawn to create a second training dataset.

“We initially worked with US data but decided to compare if adding data from Mexico City made a difference,” explains Yong Suk Lee. “Not surprisingly, the larger consolidated data set was better. Also, we tried different model sizes (number of parameters) to see the trade-offs between speed and performance.” For example, the algorithm was better able to detect potholes and broken windows in San Francisco when the training data included images from Mexico City.
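The comparison Lee describes (one fixed architecture, two training corpora, several model sizes) can be sketched as a simple experiment grid. The size labels, corpus names and dataset paths below are all hypothetical; in practice, each planned run would invoke the detector’s own training routine on the corresponding dataset:

```python
# Hypothetical sketch of the two-corpus, multi-size comparison:
# enumerate every (model size, corpus) combination to train and evaluate.
from itertools import product

MODEL_SIZES = ["nano", "small", "medium"]  # speed vs. performance trade-off
CORPORA = {
    "us_only": "us_streets.yaml",        # US and Athens images only
    "us_plus_mexico": "us_mexico.yaml",  # same corpus plus Mexico City
}

def plan_runs():
    """Enumerate every (model size, corpus) training run to compare."""
    return [
        {"size": size, "corpus": name, "data": path}
        for size, (name, path) in product(MODEL_SIZES, CORPORA.items())
    ]

for run in plan_runs():
    print(f"train {run['size']} model on {run['corpus']} ({run['data']})")
```

Evaluating each run on the same held-out images is what would reveal effects such as the one reported here, where adding Mexico City data improved pothole and broken-window detection in San Francisco.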

Dilapidated facades and weeds

However, due to a lack of similar images in its training corpus, the algorithm significantly underperformed when tested on the more suburban spaces of South Bend, although it was largely successful in tracking local changes signalled by dilapidated facades and weeds. The results showed that towns of this type require a specially adapted training corpus. “The features identifying decay could differ in other places. That is what we wanted to convey as well, by comparing different cities,” points out the Notre Dame researcher. “We wanted to highlight the flexibility of the approach rather than propose a method with a fixed set of features.” With its inherent flexibility and a vast amount of readily available source data in Google Street View, this new approach will likely feature in many more future research projects.
