By Colin Lecher
For hundreds of years, generations of sweating architects and designers have pulled inspiration from different sources to give the world's biggest, most iconic cities their unique looks. The result is a Paris that isn't the same as New York, and a Barcelona that isn't the same as Tokyo. We can pick up on the subtle differences, and now new software can, too.
Researchers from Carnegie Mellon University and INRIA/École Normale Supérieure in Paris have designed software that combs over thousands of Google Street View images from Paris, London, New York, and more, then learns to tell the cities apart based on, say, the number of fire escapes (hello, NYC). If it picks up on a lot of cast-iron balconies, that's a good indication of Paris. Meanwhile, it keeps a record of the visual elements the cities share and eliminates those from the equation, so only the distinctive ones remain.
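In rough terms, the idea is to score candidate image patches by how well a classifier can separate one city's patches from everyone else's; patches that can't be separated are the shared nuances the software throws away. Here's a minimal sketch of that idea in Python, using made-up toy feature vectors in place of real patch descriptors (the fake_patches helper is purely hypothetical); the researchers' actual pipeline is considerably more involved.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy stand-in: each "patch" is a feature vector. The real system works
# on descriptors of small image patches cut from Street View panoramas.
rng = np.random.default_rng(0)

def fake_patches(n, shift):
    # Hypothetical feature generator standing in for patch descriptors.
    return rng.normal(loc=shift, size=(n, 16))

paris = fake_patches(200, 0.5)      # patches sampled from Paris imagery
elsewhere = fake_patches(200, 0.0)  # patches from London, NYC, etc.

X = np.vstack([paris, elsewhere])
y = np.array([1] * len(paris) + [0] * len(elsewhere))

# Train a linear classifier to separate "Paris" patches from the rest.
clf = LinearSVC(C=0.1).fit(X, y)

# Score every Paris patch: high-margin patches are the distinctive ones
# (think cast-iron balconies), while patches the classifier can't pull
# away from the other cities are the "shared nuances" that get dropped.
margins = clf.decision_function(paris)
distinctive = paris[np.argsort(margins)[-10:]]  # keep the top scorers
print(f"kept {len(distinctive)} most Paris-specific patches")
```

The same sketch works for any city: swap in that city's patches as the positive class and pool everything else as the negative class, and the top-margin patches are its visual signature.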
The project can be looked at as the visual cousin of data mining, the process by which machines pick over disparate data and pull out patterns useful to humans. But in this case the data comes in the form of images, something we've seen far less of. The software could help trace the flow of architectural influence across a region, and could be scaled up or down to identify a continent or a single neighborhood.
Of course, cities were built by humans, and some of the sociological findings that fell out of the project are also interesting. Old, historic Paris was readily marked and identified by the software. By contrast, the architectural melting pots that are U.S. cities gave the machine some trouble.
[Image: Dissecting Paris. Credit: Carnegie Mellon University]