Breakthrough progress on urban-scale quantitative visual analysis would open up completely new ways in which smart cities are visualized, modeled, planned and simulated, taking into account large-scale dynamic visual input from a range of visual sensors (e.g., cameras on cars, visual data from citizens, or static surveillance cameras). Example applications include: (i) real-time quantitative mapping and visualization of existing urban spaces [Doersch12] to support architects and decision makers (see Figure 1; Section 3.3.3); (ii) modeling and predicting the evolution of cities [Vanegas10] (e.g., the effect of land-use policies on the visual appearance of different neighborhoods); (iii) obtaining detailed semantic city-scale 3D reconstructions [Musialski12] and their subsequent use in simulations of, for example, noise levels, energy consumption or illumination; and (iv) analyzing human activities, for example to evaluate the prospects of a future restaurant at one location or the need to introduce new traffic safety measures at another. This work builds on the research of the WILLOW team in collaboration with Prof. Alexei Efros from UC Berkeley.
[Doersch12] C. Doersch, S. Singh, A. Gupta, J. Sivic, A. Efros. What makes Paris look like Paris? ACM Transactions on Graphics (SIGGRAPH 2012).
[Musialski12] P. Musialski et al. A survey of urban reconstruction. Eurographics State of the Art Reports, 2012.
[Vanegas10] C. A. Vanegas et al. Modelling the appearance and behaviour of urban spaces. Computer Graphics Forum, 29(1), 2010.