The Living City

Monday, May 30, 2011

New Background Looking Pretty Darn Good

Russell asked me to restyle my background to make the fine evolutionary structures and processes going on behind the scenes more evident. Since I "spaghetti coded" the final touches to it, I figured it'd be better to start from square one again, this time with a clearer goal in mind.

Currently, I've gone through 20 subversions of the new background, carefully working on a new aesthetic. This time, instead of basic blocks, the background is composed of living, growing garden beds.

Each garden bed is made up of:
  • One tree that grows and nourishes the garden. If the tree dies, the garden bed's other plants can't receive nourishment to grow.
  • Several flowers that grow only if the garden bed's tree is alive. All flowers of a given garden bed are the same colour (the colours used were picked to match the colour scheme of my blog).
  • Grass, which grows only while the garden bed's tree is alive. As the tree's main trunk grows in size, the grass spreads.

I'm now quickly working on two things (a rough code sketch follows this list):
  • Replication and variation through successful garden beds (ones that've lived long enough) spreading their seeds to create new garden beds beside them. Seeds will spread randomly in four directions: up, down, left, and right, and will vary slightly in their attributes from the parent.
  • Feeding trees to keep them alive by hovering the mouse over their garden bed.
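
Here's a rough sketch of how the garden beds and their seed spreading might hang together in code. It's illustrative JavaScript only - the attribute names and numbers are placeholders, not my actual background code:

```javascript
// Illustrative only: a simplified garden bed and its seed-spreading step.
// Attribute names and numbers are placeholders, not the real background code.
function makeGardenBed(x, y, attributes) {
  return {
    x, y,
    tree: { alive: true, trunkSize: 1, energy: 1.0 },
    flowers: [],           // grow only while the tree is alive
    grassSpread: 0,        // spreads as the tree's trunk grows
    age: 0,
    attributes,            // heritable traits, eg { growthRate, flowerColour }
  };
}

// Replication with variation: a mature bed drops a seed into one of the
// four adjacent cells (up, down, left, right), with slightly varied attributes.
function spreadSeed(bed, grid) {
  const directions = [[0, -1], [0, 1], [-1, 0], [1, 0]];
  const [dx, dy] = directions[Math.floor(Math.random() * directions.length)];
  const nx = bed.x + dx, ny = bed.y + dy;
  if (ny < 0 || ny >= grid.length || nx < 0 || nx >= grid[0].length) return; // off the grid
  if (grid[ny][nx]) return;                                                  // already occupied
  const childAttributes = {
    growthRate: bed.attributes.growthRate * (0.9 + Math.random() * 0.2),     // slight variation
    flowerColour: bed.attributes.flowerColour,                                // inherited as-is
  };
  grid[ny][nx] = makeGardenBed(nx, ny, childAttributes);
}

// Example: a 3x3 grid with one mature bed in the middle.
const grid = Array.from({ length: 3 }, () => Array(3).fill(null));
grid[1][1] = makeGardenBed(1, 1, { growthRate: 1.0, flowerColour: '#cc8844' });
spreadSeed(grid[1][1], grid);
console.log(grid.map(row => row.map(cell => (cell ? 'bed' : '.')).join(' ')).join('\n'));
```

The feeding-by-hovering part would then just top up a bed's tree energy whenever the mouse sits over that bed's cell.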

An interesting aspect of this experiment is that it can be thought of as an analogy for the evolution of city blocks. The mapping would be:
  • Garden beds to city blocks,
  • Trees to activity nodes,
  • Grass to the parasitic exposure smaller buildings receive from the presence of a successful activity node, and
  • Energy to the "life & longevity" a city block's buildings gain from the keen inhabitance of people.

Wednesday, May 25, 2011

Genes, Memes, and Bemes

When I talked with Russell, he suggested that I succinctly define what I mean when I say "gene", "meme", and "beme", since I haven't clarified those terms in my posts so far.



Quick Definitions of Gene, Meme, and Beme

A gene is a unit of heredity (something copied from parent to child) in evolution. Though popularly associated only with biological evolution, genes exist in any system involving evolution. The vast majority of the time, genes directly affect an organism's capacity to survive and its capacity to replicate its genes (eg, by having children). More information can be found in the relevant Wikipedia article.

A meme is a gene in the specific context of cultural evolution, where the organisms are cultural trends that compete and cooperate. The term was coined by Richard Dawkins in The Selfish Gene as an analogy to how genes work, but soon took off as a useful concept in cultural development.

Here's a video of a TEDTalk by Susan Blackmore that lucidly describes the basics of memes, amongst other interesting hypotheses and observations about them.




A beme is a meme in the even more specific context of the evolution of culture and fashion in building design. This term was coined by me for this research studio to aid in succinctly understanding and describing relevant aspects of my design methodology for the coming final semester. At its base, it is a class of meme, which itself is an abstracted class of gene. When I talk about evolution involving bemes, I will refer to it as bemetic evolution.



Evidence Supporting the Validity of the Concept of a Beme

In a way, the reasoning laid out below served as a loose, conceptual experiment to test the hypothesis that "bemes exist and apply to the evolution of building design". The crux of the evidence for bemes is proving evolution can be applied to the culture and process of building design. As such, a specific definition is required to determine when a system does and does not involve evolution.

According to the Wikipedia article mentioned above, Dan Dennett argues in his book "Consciousness Explained" (1991, Boston: Little, Brown and Co.) that evolution exists when a system encapsulates three conditions (sketched in code just after the list):
  • variation, or the introduction of new change to existing elements,
  • heredity or replication, or the capacity to create copies of elements, and
  • differential "fitness", or the opportunity for one element to be more or less suited to the environment than another.
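
To check my own understanding of the definition, here's a tiny, abstract sketch in JavaScript of what it means for a system to supply those three conditions. The function names are my own shorthand for illustration; any domain that can fill in vary, replicate, and fitness can, in principle, be evolved:

```javascript
// Purely illustrative: a system "involves evolution" if it can supply these three things.
// The function names are my own shorthand, not from any particular library.
function evolve(population, { vary, replicate, fitness }, generations) {
  for (let g = 0; g < generations; g++) {
    // differential fitness: some elements are better suited to the environment than others
    const ranked = [...population].sort((a, b) => fitness(b) - fitness(a));
    const survivors = ranked.slice(0, Math.ceil(ranked.length / 2));
    // heredity/replication: survivors are copied...
    // variation: ...and new change is introduced into the copies
    population = survivors.concat(survivors.map(s => vary(replicate(s))));
  }
  return population;
}

// Example: "evolving" plain numbers towards 100.
const result = evolve(
  [1, 5, 42],
  {
    vary: x => x + (Math.random() - 0.5) * 2,  // variation
    replicate: x => x,                          // heredity (copying a number is trivial)
    fitness: x => -Math.abs(100 - x),           // differential fitness
  },
  500
);
console.log(result[0]); // should end up close to 100
```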

So now it must be proven whether or not the culture of building design involves variation, heredity (or replication), and differential fitness. If any one condition is missing, by definition, the system doesn't involve evolution.

Looking into the history of building design culture, bemetic evolution has manifested as the use of precedent inspiration (replication and heredity), the reinterpretation and appropriation of precedent (variation), and the success and popularity of a design (differential fitness through cultural selection).

And thus, "bemes exist and apply to the evolution of building design" has been qualitatively confirmed through philosophical argumentation.

Thursday, May 19, 2011

Re:Design Matrix ReDesign Matrix

Confusing post title, but this post is essentially documentation of some brainstorming on how my Design Matrix is best structured. The original idea I had could be integrated, but this one seems much more sound and easier to translate into actual experiments than the frankly woolly ideas I was working with before (shown in an interactive 3D environment in this post).

Now that I've settled on a title for my graduation project (evolvedDesignInformant), I'll be uploading my work, progress, and thoughts to this blog. Also, the idea of evolution is now established as perhaps the strongest influence on the work I'll be doing. It's a philosophically and mathematically challenging set of ideas to get my head around, but I think I'll get a lot out of it with enough focus. Anyway, on to the design matrix...



EVOLUTION
  • Physical buildings, "bemes"
  • Metaphysical society, memes
  • Virtual design informants

PROGRAMMING
  • Physical interpretation and adaptation to physical stimuli through Arduino
  • Metaphysical programmatic encapsulation of emotions and personality in the parts of a place that give it "life" in a sense -> involve Yahoo Pipes?
  • Virtual modelling of evolution, expanding generic and specific knowledge base [GENEric, SPECIfic -> concepts of evolution can be applied here, too]

MODELLING
  • Physical laser cutting, 3D printing, manual modelling, "genetic recombination" of these methods to produce new mutations
  • Metaphysical programmatic encapsulation of emotions and personality in the parts of a place that give it "life" in a sense -> involve Yahoo Pipes?
  • Virtual Grasshopper [deterministic, evolution], interactive 3D environments

Experimenting with Rhinoceros Grasshopper Video 2 Uploaded

I decided to squeeze in a little more "fiddling time" in transit to and from uni yesterday. One of the first things I said I wanted to know how to make in Grasshopper was a smooth, undulating surface. Thus, the aim of this model was to make an undulating surface, with a few extra things thrown into the mix to expand the number of things I'd need to figure out to get a finished model. If I recall correctly, the time taken for this model was about 90 minutes.





There weren't many hiccups through this video, since I'm getting more used to knowing where to look for the kinds of functions I want. The main sticking point this time was when I tried to give the undulating 2D surface thickness. My initial thought was to duplicate the surface upwards with a "move", and then try the "cap holes" function. However, that didn't work, since "cap holes" only works for holes defined by planar curves, rather than intelligently joining vertices with edges and filling the resulting closed polygons with surfaces like I'd hoped. I ended up realising an "extrude" was all I needed.

After that, the only confusing thing was figuring out how the solid functions work. At first, I couldn't figure out why two cylinders would no longer work for a solid union when I was trying to intersect them, but then I noticed that Grasshopper's cylinder primitive isn't a closed solid, so trying to perform a solid union for them when their open ends weren't exactly planar would end up failing. After using cap holes, that problem was fixed.

Another thing: I just took a look at PK's blog and saw he'd posted a video using Grasshopper's "Galapagos" capsule. I remember Russell mentioning it in a previous studio class, but I'd forgotten to check it out. Looking at some videos and fiddling briefly with it in Grasshopper to evolve solutions to various simple equations, it seems to be exactly the sort of thing I'd be interested in using to evolve different aspects of my final design!

I'll definitely be using it to see what I can do with it.

Experimenting with Rhinoceros Grasshopper Video Uploaded

The video finished rendering and uploading, so here it is!


Experimenting with Rhinoceros Grasshopper

I'm currently rendering a video made using Chronolapse while I toyed around with Grasshopper. Shortly into the video, I decided to give myself something to try to achieve by the end of it - a twisted rectangular prism shape with rectangles along it representing floors. Looking back over it and playing a little more with Grasshopper, I noticed there are a few tools I could possibly have used to make the final form much faster.

However, the main point of this stage of my computer modelling experiment is to get myself familiar with Grasshopper, rather than efficiently producing an end result in one go. As with any experiment, the criterion of success is what is learnt.

The main sticking point was when it came to combining two lists of transforms - one list of rotations, one list of translations - so that I'd get one list of transforms that each represented a rotation and a translation. I found out how to combine an entire list of transforms into a single transform, but that was the closest I could get. So, in a bit of desperation and having never before programmed in VB.NET, I fiddled wildly with it in a script until I got my head around how Grasshopper lets you handle its parameters and "output" - which is really just any variables passed by reference rather than by value.
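
Conceptually, what I was after was a pairwise "zip" of the two lists, composing each rotation with its matching translation. Here's a rough JavaScript sketch of the idea (not the VB.NET I actually wrote in Grasshopper, and modelling each transform as a plain function on points rather than Rhino's Transform type):

```javascript
// Illustrative sketch only: each transform is modelled as a plain function that
// maps a point {x, y, z} to a new point, rather than Rhino's Transform type.

// A rotation about the Z axis by `angle` radians.
function rotationZ(angle) {
  return p => ({
    x: p.x * Math.cos(angle) - p.y * Math.sin(angle),
    y: p.x * Math.sin(angle) + p.y * Math.cos(angle),
    z: p.z,
  });
}

// A translation by the vector (dx, dy, dz).
function translation(dx, dy, dz) {
  return p => ({ x: p.x + dx, y: p.y + dy, z: p.z + dz });
}

// The pairwise "zip": transforms[i] applies rotations[i] and then translations[i].
function zipCompose(rotations, translations) {
  return rotations.map((rotate, i) => {
    const move = translations[i];
    return p => move(rotate(p));
  });
}

// Example: ten floor rectangles, each rotated a little more and lifted a little higher.
const rotations = Array.from({ length: 10 }, (_, i) => rotationZ(i * 0.1));
const translations = Array.from({ length: 10 }, (_, i) => translation(0, 0, i * 3.0));
const floorTransforms = zipCompose(rotations, translations);

console.log(floorTransforms[3]({ x: 5, y: 0, z: 0 })); // a corner point of the fourth floor
```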

The next sticking point was that I couldn't find an object reference that would drastically speed up the rate at which I could throw together the script. I noticed the IDE had an autocomplete list that would pop up as you wrote your script, but it wasn't really comprehensive enough for me to know what each variable or method in the list was for. After Googling for a while and finding nothing, I resorted to looking in the plugin folder for Grasshopper, and found RhinoCommon.xml under Plug-ins/Grasshopper/rh_common/RhinoCommon.xml (relative to the Rhino install folder). You'll probably see a few flashes of it opened in Chrome in the video once I upload it. It was somewhat helpful because I could easily Ctrl+F and enter the name of the type I wanted, and hit Ctrl+G until I found it. Admittedly, that took quite a few Ctrl+Gs at times, but on the way, I was finding a few useful tidbits here and there.

After I got past that point - and a few sundry errors to do with invalid typecasting of Transform to Vector3 - it was fairly smooth sailing.

I intend for my next experiment in Grasshopper to involve generating a form that's a little more complex, but I'm not quite sure what would be appropriate yet.

I'll put the Chronolapse video up in my next post when it finishes rendering.

Experimenting with Evolution Programming 2

After today's effort, I've made some significant steps with the evolution program you see milling away in the background whenever you move your mouse in Firefox or Chrome. For a simple description of how to interact with it: wave the mouse over empty space in the background to spawn new "creatures" (squares). Over time, the creatures consume energy and so start to shrink (like a dying flower). To feed a creature and keep it alive, wave the mouse off it and back on it again. If a creature stays alive until maturity (currently 0.3 seconds), it gives birth to between 1 and 8 children in the adjacent blocks around it. Birthing happens in a reliable manner (eg, most successful births make a child in the position directly below the parent), which you'll probably notice as the creatures appearing to fall downwards like Tetris blocks. Creatures come in two colours, determined by how implicitly nervous they are - blue creatures shiver more the more nervous they are, whereas orange creatures aren't implicitly nervous enough to shiver.

Currently, I have a system that emulates (a rough code sketch follows this list):
  • Creatures (each square is a potential creature, until one is either born there, or spawned there)
  • Genes (three of them: metabolic rate, aggressiveness, and nervousness),
  • Inheritance of genes,
  • Replication (because of the above two),
  • Energy transfer (in the form of children taking energy from parents at birth, and children absorbing the energy of the elderly if they're occupying the same space when born, causing the elderly to die and be replaced),
  • Thermodynamic behaviour for the most part (eg, ignoring energy given by the mouse through feeding and spawning creatures, energy isn't created out of nowhere, and is only "lost" as useless energy).
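
As a rough illustration of how the creatures, genes, inheritance, and energy transfer hang together, here's a simplified JavaScript sketch. It's not my actual background code, and I've included a small mutation step purely for illustration, even though (as noted below) random variation isn't in the real version yet:

```javascript
// Simplified sketch of the creature / gene / inheritance setup (not the real code).
function spawnCreature(energy) {
  return {
    energy,
    genes: {
      metabolicRate: 0.5 + Math.random() * 0.5,
      aggressiveness: Math.random(),
      nervousness: Math.random(),
    },
  };
}

// Inheritance and energy transfer: a child copies its parent's genes (with an
// optional small mutation, shown here for illustration) and takes a share of
// the parent's energy at birth.
function giveBirth(parent, mutationRate = 0.05) {
  const childGenes = {};
  for (const [gene, value] of Object.entries(parent.genes)) {
    childGenes[gene] = value + (Math.random() - 0.5) * mutationRate;
  }
  const share = parent.energy * 0.25;
  parent.energy -= share;
  return { energy: share, genes: childGenes };
}

// Each tick, a creature burns energy according to its metabolic rate and shrinks.
// Returning false means the creature has died.
function tick(creature, dt) {
  creature.energy -= creature.genes.metabolicRate * dt;
  return creature.energy > 0;
}

const parent = spawnCreature(1.0);
const child = giveBirth(parent);
console.log(parent.energy, child.genes);
```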

Random variation could be added with some careful effort, but I suspect its results wouldn't really manifest in any way noticeably different from what you currently see.

I'm still working on getting a reliable "feeding" method added to the mix, beyond children eating the elderly. What I was trying for a while - which you can still see commented out in my JS draw() function - was a feeding method that let creatures eat adjacent creatures. This was working well, but it dominated the living patterns of the creatures. With only a few creatures near one another, a checkerboard pattern would almost immediately form, with babies being born into the "holes" and getting immediately eaten by an adjacent adult with more energy.

In the attempt to get feeding working in a more complex way, I added a randomised variable that let lower-energy creatures occasionally eat higher-energy ones, but it only either slightly dampened the checkerboard effect, or resulted in flickering graphical madness.

Other than that, I encountered a few interesting phenomena. The funniest was creating what is best described as a race of immortal zombies, because creatures, when dead, were still able to feed off the living creatures near them. Another thing that happened was an explosion in the creature population when I accidentally made it possible for a parent to give birth to more children than it had the energy for.

I should mention that this idea came from a Java Applet that I found. It didn't model evolution, but it did model thermodynamics. Here it is, linked from the author's website, Repeat While True.



About CellShades
CellShades is derived from the concept of cellular automata, showing how complex behaviour of organic appearance can emerge from a simple set of rules. Using the mouse, the user spills liquid onto a virtual petri-dish. If the amount of liquid on any position on the grid remains above a certain level for a prolonged time, cells will emerge there. These cells will move and consume liquid to harvest energy according to a set of parameters which you may change and toy around with.

The intensity of the liquid on the grid is visualized by a color gradient from orange to purple.

Interestingly, the Applet was made in Processing, which I've seen is installed on the FBE's computers. I don't have immediate plans to try it out since there's so much software I'm already planning to get my head around as part of these experiments, but it's definitely piqued my interest.

Experimenting with Evolution Programming

As part of looking into evolution in programming, I wanted to modify this blog to be a little better themed to my design matrix. What I figured could prove useful is a system that adapts its form to the presence of other objects. To integrate that idea with this blog, I wanted to make a background that adapts its form to the location of the mouse. Depending on how useful it proves, this could be reapplied later in 3D for form generation in my final project.

Currently, I've got a background that reacts to the mouse and uses the HTML5 canvas element. This was a test I threw together to see if it'd have any problems working with Blogger or Firefox / Chrome. There are two canvases being used at the moment, but I'll roll back a version tomorrow so I can instead have one large canvas that will let the evolving block units interact more easily.
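
For anyone curious, the basic plumbing is nothing exotic. A minimal sketch of the kind of mouse-reactive canvas setup I mean (not my actual code, and assuming a full-page canvas element with id "bg") looks something like this:

```javascript
// Minimal sketch of a mouse-reactive canvas background (not the real thing).
// Assumes the page contains a full-size <canvas id="bg"> element.
const canvas = document.getElementById('bg');
const ctx = canvas.getContext('2d');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;

let mouse = { x: -100, y: -100 };
document.addEventListener('mousemove', e => {
  mouse = { x: e.clientX, y: e.clientY };
});

function draw() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  // React to the mouse: draw a block under the cursor whose size depends on
  // how close the cursor is to the centre of the page.
  const dx = mouse.x - canvas.width / 2;
  const dy = mouse.y - canvas.height / 2;
  const size = 20 + 40 * Math.exp(-(dx * dx + dy * dy) / 100000);
  ctx.fillStyle = 'rgba(80, 120, 200, 0.6)';
  ctx.fillRect(mouse.x - size / 2, mouse.y - size / 2, size, size);
}

setInterval(draw, 30); // redraw roughly 33 times a second
```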

Attempt to Upload my Design Matrix Hosted in an Interactive 3D Environment

An idea I had for presenting my design matrix is to place it in a surreal 3D environment that reacts to the user's presence, and allows them to discover the design matrix in a format that's a little more interesting than static paper.

The method of representation could tolerate being more complex, giving the user more implicit incentive to search for the parts of the matrix. In a later version, I think it would make sense to let the user keep track of the parts of the matrix they've found, since they act like torn scraps of a complete document found floating through the ethereal landscape. Almost as though they're the recorded thoughts of a kooky, solitary scientist, torn up and strewn across the landscape. A little modification - appropriate sounds, slightly modified aesthetics, and extending the spaces defined by the ghostly building - would do this interactive 3D environment well in my opinion. I might also modify the cubes constituting the building so some of them fly into position automatically once the user is close enough, rather than holding their distance depending on how close the player is to their final position.

This post also serves as a testing ground for getting the Unity Webplayer (hopefully!) running on my blog. Failing that, I'll upload it to Kongregate.com under the guise of being a game, and point there instead.


Controls
Use W, A, S, and D or the arrow keys to move. Look around by moving the mouse. Jump by hitting the spacebar. If the framerate is choppy, right click on the game and click "fullscreen". Hit Esc to exit fullscreen mode.



Awesome, it works!

Some references for computing applied to architecture

A site that might prove useful for mathematical theory and programming theory is arxiv.org, which provides "Open access to 670,431 e-prints in Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance and Statistics", as described on its main page.

The site also mentions it has an RSS feed (amongst other things) for robots that automatically parse the archives, so that could prove useful after filtering the information through Yahoo Pipes.

Slashdot.org might also prove interesting to look at from time to time for technology-related news and popular, cutting-edge information.

Computing applied in architecture

The reason for my lack of a post so far about the interaction of the disciplinary views of computing and architecture is that I have had trouble narrowing down exactly what might prove to be a useful (or at least promising) interaction to pursue.

After some thought, I've worked out some areas of research for this part of my research matrix, selected for personal interest and future utility:

  • Researching the Theoretical Aspects of Computing: in particular, the mathematical concepts that form its base, or that helped develop computing into what it is today. The ideas and concepts in this discipline could be abstracted and applied to architecture, or perhaps applied directly without abstraction, since maths itself is already abstract.
  • 3D Visualisation and Interaction: I would like to devote some time in this course to working on 3D visualisations (renders, point tracking videos, and augmented reality) and interaction in a 3D environment (Unity3D, Crysis), simulating a building's structurally feasible, intelligent response to occupants.
  • Research into and Use of Programming: Given my experience in the area, it would be a shame to not use it when it comes to the interaction of the disciplinary views of computing and architecture. In the past, concepts from programming have proven useful for me in generating good architecture, so I'm confident they will help once more. Furthermore, concepts from programming relate this point directly to the first point of researching the theoretical aspects of computing. If I broaden my knowledge in that area, I will have even more concepts to aid my design process.

Evolution applied in architecture

As one of the three disciplines I have chosen, I will now show some examples and attempt to eloquently express some thoughts about how I think the disciplinary views of architecture and evolution can interact.

To start with, my contention is that architectural design would benefit from a practical application of evolution. Plenty of building designs have been generated with a very conscious incorporation of automated evolution, but I have yet to find genuinely useful examples rather than something that's just coincidentally resulted in a pretty form. What is generally lacking in the explanations of such "evolved" buildings is what the criteria for survival were, what the varied genes were, and how many generations deep the evolution was carried out.

For a long time, I've been interested in the application of evolution in fields other than biology, particularly as a way to solve various complex problems in incredibly simple - though almost unintelligible - ways. For instance, one of the non-fiction chapters of the first "Science of the Discworld" book (by Terry Pratchett, Ian Stewart, and Jack Cohen) discusses an exploration of evolution through a genetic algorithm approach to making an electronic circuit able to distinguish between two tones. The circuit's logic gates were the genes that were randomly varied and inherited from generation to generation, and the survivors were selected on their capacity to give a different output - 1 or 0 - for each of the two tones, not caring which tone was given which value as long as there was a difference.

Early on, the circuit's capacity to tell the difference was non-existent or negligible. However, after sufficiently many generations, reliable differences began to arise. After only 4000 generations, the circuit would get the tones wrong barely 1 in 1000 times. At 8000 generations, no errors in tone distinction were encountered at all. The resulting circuit, however, was very complicated and hard to understand - for example, a portion of the circuit was found not to be connected to anything else, yet removing it always caused the circuit to stop working.

In my opinion, though, the most important part of the experiment was the efficiency and elegance of the solution that results from appropriately constrained and defined evolution. The evolved circuit was far smaller (ie, had far fewer logic gates) than other circuits previously made to tell the difference between two tones.
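
To make the loop in that experiment concrete, here's a rough, generic genetic algorithm skeleton in JavaScript. The bit-array genome and the fitness function are invented purely for illustration - this isn't the actual circuit-evolution setup from the book:

```javascript
// Toy genetic algorithm skeleton in the spirit of the tone-discrimination
// experiment. The bit-array genome and the fitness function are invented
// purely for illustration.
function randomGenome(length) {
  return Array.from({ length }, () => (Math.random() < 0.5 ? 1 : 0));
}

// Hypothetical fitness: reward genomes whose two halves differ, loosely
// standing in for "gives a different output for each of the two tones".
function fitness(genome) {
  const half = genome.length / 2;
  let differing = 0;
  for (let i = 0; i < half; i++) {
    if (genome[i] !== genome[i + half]) differing++;
  }
  return differing;
}

function mutate(genome, rate = 0.01) {
  return genome.map(bit => (Math.random() < rate ? 1 - bit : bit)); // variation
}

function crossover(a, b) {
  const cut = Math.floor(Math.random() * a.length);                 // heredity
  return a.slice(0, cut).concat(b.slice(cut));
}

function evolveCircuit(generations = 4000, popSize = 50, genomeLength = 32) {
  let population = Array.from({ length: popSize }, () => randomGenome(genomeLength));
  for (let g = 0; g < generations; g++) {
    population.sort((a, b) => fitness(b) - fitness(a));             // differential fitness
    const parents = population.slice(0, popSize / 2);               // the fittest half survives
    const children = parents.map(p =>
      mutate(crossover(p, parents[Math.floor(Math.random() * parents.length)]))
    );
    population = parents.concat(children);
  }
  return population[0];
}

console.log(fitness(evolveCircuit())); // approaches the maximum (16) fairly quickly
```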

After doing some Googling, I've found a few interesting sources that I'll look into over the holidays. They are:

  • An ecomorphic theatre as a case study for embodied design (paper located here): mentions some interesting historical precedent to generative design of architecture.
  • An Evolutionary Architecture (version of book released online located here): Covers some interesting concepts to do with the kinds of forms that can be generated, and some ways of using the internet to expose a 3D model to genetic variation.
  • Autotechtonica.org (link located here): Is currently under construction, but seems to offer a few simple existing neologisms and their definitions, which might be handy to glance over if you're trying to learn about the topic like I am.
  • Morphogenesis of Spatial Configurations (link located here): Talks about evolution when selecting forms based on building performance criteria such as structure and accessibility.

From the second source, I found an example of what I was talking about at the start of my post when I said "I have yet to find genuinely useful examples rather than something that's just coincidentally resulted in a pretty form".


An animated example of co-operative evolution by a network of computers. Pretty, but lacks useful information to explain what it is. (from http://www.aaschool.ac.uk/publications/ea/intro.html)


What I would like to create using the process of evolution for this masters studio is something of utility. I want to produce an intelligible analysis that can be clearly and specifically used to inform an architectural design. Ideally, the evolution will be applied to an area that doesn't already have its own simple solutions. I think it would be more exciting if it were applied to, for example, the problem of space organisation and linkage, which I have observed is often a point of unfounded contention between designers - eg, arguments about one space not being suited to be connected to another, and so forth.

Furthermore, it seems that evolution of useful aspects of a building would best be used as a design informant rather than a means of producing the end design in itself, since there are philosophical aspects of design that haven't yet been accurately encapsulated in formal systems (such as those used to found computer science and information technology). To clarify: I trust that sufficiently many generations of rigorously managed evolution would produce an effectively failsafe product, but if and only if the appropriate conditions of selection and the appropriate genes were known and formally encoded in the process of evolution. However, culture and many philosophies have yet to be formally encoded in such a comprehensive manner. Thus, I would use evolution to inform my design process for those conditions of selection and genes which are known, but I would want to refine the design personally to ensure it suits the formally undefined constituents of philosophy and culture.

Something that seems a little more promising than the above animation is the paper on Morphogenesis of Spatial Configurations. The image below, of a 3D model generated from Lindenmayer systems (aka L-systems) and genetic programming, certainly shows what looks like a far more sensible form, though I am unsure what the original L-system's configuration was.



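For reference, the string-rewriting core of an L-system is tiny. A minimal JavaScript sketch (a toy example of my own, nothing to do with the paper's actual configuration):

```javascript
// Minimal L-system string rewriting (a toy example, not the paper's configuration).
// Each generation, every symbol is replaced by its production rule (if it has one).
function lsystem(axiom, rules, generations) {
  let state = axiom;
  for (let g = 0; g < generations; g++) {
    state = state.split('').map(symbol => rules[symbol] || symbol).join('');
  }
  return state;
}

// The classic "algae" system: A -> AB, B -> A.
console.log(lsystem('A', { A: 'AB', B: 'A' }, 4)); // "ABAABABA"
```

In the genetic programming setting, it's the rule set itself that would be varied and selected, with the resulting geometry then evaluated against the performance criteria.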
I'm really liking how many freely available online resources are turning up for this topic. There'll be a lot of reading to do, but I suspect I'll learn a lot in the process, which will hopefully save time when it comes to producing my own evolved design informant.

Building my learning machine, and the building as a learning machine

This idea spawned somewhat spontaneously while I was thinking back over a quote that's stuck with me for the better part of three years. It was delivered with such colloquial, intelligent profundity that I couldn't help but wholly absorb it, as well as the rest of the information delivered with the speech it came from (shown below).


"If you look at the interactions of a human brain, as we heard yesterday from a number of presentations, intelligence is wonderfully interactive. The brain isn't divided into compartments. In fact, creativity -- which I define as the process of having original ideas that have value -- more often than not comes about through the interaction of different disciplinary ways of seeing things." (Ken Robinson)



There are two aspects of this quote that stand out to me with respect to this final year studio and architecture. Firstly, that an original idea that has value "more often than not comes about through the interaction of different disciplinary ways of seeing things", and that this would be a useful way to build my learning machine (ie, the processes I will follow for research and development) for this studio. Secondly, that "intelligence is wonderfully interactive", and that a brain "isn't divided into compartments", and that these points can be used to conceptualise a building as a learning machine.



The Building of My Learning Machine

The first aspect is how I want to work through this master's year studio - the interaction of different disciplinary ways of seeing things to generate original ideas that have value. I will select three disciplinary ways of seeing things which seem to have promising potential when interacting with the disciplinary way of seeing things that is architectural design. Currently, the three disciplinary ways of seeing things I have chosen are:

  1. Biology,
  2. Evolution, and
  3. Computing.


The Building as a Learning Machine

This train of thought fits best under the discipline of biology, but still has some strong relationships with evolution and computing.

The idea of an intelligent, interactive building is an exciting one, and seems to be cropping up more and more these days (specific examples to be searched up later and added retrospectively so I don't break my train of thought). Most of the time, no one part of the building's operation and usage is wholly distinct and unaffected by the other parts. Thus, according to the definition Ken Robinson uses, it could be said that a building is like a brain. Historically, humans have been creating braindead buildings. Sometimes beautiful buildings, but braindead nonetheless. They operate, they breathe, and everything like that - but they're vegetables. They cannot respond to us. Or if they do, it's only in obvious terms, through wholly controlled interactions. Recently, however, technologically-minded interventions have introduced the capacity for reactions in the built form. It's as though some buildings and building elements are now recovering from a coma, and are starting to be able to autonomously respond in complex ways to interaction.

Though Ken's speech was explicitly about human learning and education, it is my contention that interactive, responsive architecture - occasionally stylistically classified as "high technology" - would benefit from such a conceptualisation. The building as a learning machine is an interesting, exciting idea. Due specifically to technological advances, built forms are capable of being dynamic, and adapting to their purpose. Through analysis of occupational usage - which in the case of this studio would likely be emulated through data obtained via social website analysis - buildings can be programmatically designed to modify themselves to suit occupational use.

The conceptualisation of the building as a learning machine can extend beyond this specific example. More generally, the building as a learning machine is capable of responding to its environment and, moreover, learning the trends. Conceptualised as a finite state machine, the learning machine's state would change in response to the input from its environment, with subsequent states depending on the previous states as a form of trend-learning. In a way, having the building as a learning machine relates somewhat to forming a practical application of phenomenalism.
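
As a throwaway sketch of what I mean by the finite state machine view, here's a toy JavaScript example with made-up states and inputs. The point is just the shape of the idea: the next state depends on both the environmental input and a crude memory of recent inputs:

```javascript
// Toy finite state machine for a "learning" facade. The states and inputs are
// made up; the point is that the next state depends on the environmental input
// and on a crude memory of previous inputs (a simple form of trend-learning).
const transitions = {
  louvresOpen:   { sunny: 'louvresShaded', cloudy: 'louvresOpen', crowded: 'louvresShaded' },
  louvresShaded: { sunny: 'louvresShaded', cloudy: 'louvresOpen', crowded: 'louvresShaded' },
};

function step(state, input, history) {
  history.push(input); // remember what the environment has been doing
  const recentSun = history.slice(-5).filter(i => i === 'sunny').length;
  // A learned bias: if it has mostly been sunny lately, stay shaded even on a cloudy reading.
  if (input === 'cloudy' && recentSun >= 3) return 'louvresShaded';
  return transitions[state][input];
}

let state = 'louvresOpen';
const history = [];
for (const input of ['sunny', 'sunny', 'cloudy', 'sunny', 'cloudy']) {
  state = step(state, input, history);
  console.log(input, '->', state);
}
```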

There are many directions this idea could go, so to keep the project feasible, I will need to restrict myself to researching and developing one or a few specific examples.

You can find other speeches and presentations from TEDtalks here, or through the TEDtalks YouTube channel.

Note about Blog Usage

This won't be my final year blog. I will create my final blog once I have finalised - with certainty - the title of my final year project. Once I've done that, I'll create a blog with the same title - or similar if the URL's already taken - and re-post everything relevant from this blog into that.

Part of the reason I've used this blog for now is that it helps to express a continuum of ideas, linking on from the last subject I recently used this blog for - Augmented Reality, during the late summer term this year. I have had a long-standing affinity for technology, and its multifarious utilities as a tool in design and in making human life more productive, comfortable, and / or stimulating.

Part of what makes technology so interesting is just that: that it's stimulating. It's that stimulation which engages the mind - or, rather, the senses - and from there, the mind and the technology (if it's good technology) are putty in each other's hands. And more recently, it can even be argued that each learns from the other, through the rise of "machine learning algorithms" as a branch of artificial intelligence.

I think it's a pretty exciting thing to look into. Objects which are self-modifying and responsive to their environment seem to be picking up interest in areas beyond computing, and other forms of technology are catching up with the idea. A simple example is a responsive facade like that of the CH2 Building in Melbourne, which opens and adjusts the angles of its louvres in response to the local climate inside and outside the building. The integration of responsive technology into our daily lives is becoming more comprehensive, and is - as just mentioned - becoming integrated into architecture.


The CH2 building again, at a different time of day. (image from http://inhabitat.com/ch2-australias-greenest-building/)

This is just one aspect of architectural design that I think is worthwhile investigating during this research studio - I'll be elaborating on other ideas I think are promising in later posts.

Also, I should mention that I'll be splitting the main points of my ideas into different posts to try and establish a thought continuum with useful landmarks, rather than a huge wall of text. On to the next post!