Optimisation in the 1960s and 2010

Daniel Davis – 23 May 2010

Computational architecture got off to a pretty bizarre start in the 1960s. Pick up a copy of Cross’s The Automated Architect (1976) to see what I mean: study after study of methods to optimise designs to reduce the distance occupants walked. Even by today’s standards, the distance occupants walk seems a pretty strange measure of design success. One can only conclude that architects in the 1960s must have lived in almost perfect buildings, where commodity, firmness and delight were taken care of, and all that remained was to minimise the distances between tasks. Unfortunately, the dream of minimising the distances between tasks ended abruptly around the time The Automated Architect was published, never again to be considered a measure of building success.

I recently stumbled across Sean Keller’s explanation of this period in his article Fenland Tech: Architectural Science in Postwar Cambridge. Keller traces computational architecture back to the Second World War, a period when:

The extremities of war had forced to the surface many doubts about architecture as a significant modern profession: Did architects possess special expertise? Was their expertise objective or merely based on taste? In times of real need, were architects necessary? In short, was architecture serious business? (pg. 48)

Arising from a sense of wartime inferiority, architects attempted to legitimise the profession by turning to science and mathematics. In post-war Britain, architecture was centralised around the Ministry of Works, which enabled the funding of what was seen as legitimising research into environmental design. With it came the rejection of “intuitive skill,” “confusion,” “sophistical sciences,” “individual hunches,” “court jesters and acrobats,” “private pranks,” “pricey prima-donnas,” “hallucinations,” “extravagant and empty images,” “individual expression,” and “personal prejudice” that “threaten architecture and planning” (pg. 51). This manifested itself in an attempt to generate the perfect plan through minimising the distances people walked. Keller gives three reasons for the interest in the distances people walked:

  • It was based on observed behaviour and statistics.
  • The architects were able to legitimise the work through the mathematics of topology and graph theory.
  • The result was not geometrical and did not require a (then expensive) screen.

By the mid-1970s this approach had fallen out of favour. Of the many reasons given, the most relevant are:

  • A functional study could not objectively translate into a formal representation of a building.
  • It is difficult to measure a quality of a building objectively – the distance people walk is itself a function of the building.
  • A satisfactory, but not optimal, solution can be found intuitively. It is not worth giving up control of how the building looks, how much it costs and what the environment is like, in exchange for a small improvement in a fairly unimportant characteristic of a building – the distance people walk.

I would also add that telecommunication probably solved a large part of this problem. This speaks to the non-obvious nature of design, that the solution to reducing the distances people walked was the design of a protocol for exchanging information between computers, rather than the design of a perfect building layout. Tabor and Willoughby, two previous advocates of mathematical plan generation, concluded that “quantitative approaches have a limited use for certain very complex problems, and must always rely on many assumptions that cannot be quantified and on inherited typologies.”

Recently, performance has been back on the architectural agenda. Optimisation in the 1960s has much to teach us about the dangers of false optimisation and indifference to the resulting architecture. I think this is particularly important as optimisation becomes black-boxed and commodified. So, with this warning in mind, I welcome Galapagos, an evolutionary optimiser still in beta, developed for Grasshopper by David Rutten. In the video below it has been linked with the physics engine Kangaroo to optimise the position of attractors.
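Galapagos’s internals aren’t documented here, but the core idea of an evolutionary solver is simple enough to sketch. The Python below is illustrative only – the function names and parameters are my own, not Galapagos’s API: a population of candidate solutions is repeatedly sorted by fitness, the best half survives, and mutated copies of the survivors refill the population.

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=200, mutation=0.05):
    """Minimise `fitness` over a box-bounded search space with a simple
    elitist evolutionary loop: keep the best half, mutate it to refill."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # best candidates first
        survivors = pop[: pop_size // 2]      # elitism: best half is kept
        children = []
        for parent in survivors:
            # Gaussian mutation, clamped back into the allowed bounds
            child = [
                min(hi, max(lo, g + random.gauss(0, mutation * (hi - lo))))
                for g, (lo, hi) in zip(parent, bounds)
            ]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Toy example: find the point closest to (3, -2) in a 20x20 square.
random.seed(0)  # for reproducibility
best = evolve(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
              [(-10, 10), (-10, 10)])
```

The toy fitness function here is a stand-in for whatever Grasshopper computes from the model – in the 1960s studies it would have been the total distance occupants walked. The caution above applies directly: the solver will happily converge on whatever number you hand it, whether or not that number matters.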

Cover image from page 42 of Keller, Sean. 2006. “Fenland Tech: Architectural Science in Postwar Cambridge.” Grey Room 23 (April): 40–65. http://www.mitpressjournals.org/doi/abs/10.1162/grey.2006.1.23.40.
