The Living City

Thursday, June 9, 2011

Marketing Images, Design Matrix, Blog Table of Contents

Design Matrix Images
















A note on the matrix: as always, I intentionally gave myself several goals to achieve so I could select from them as I progressed. In the end, there were some goals that, though I'm interested in them, I didn't have the time or resources to achieve. Though obviously undesirable, this was anticipated and was part of my overall strategy for research this semester.

For a summary of my experiments and their findings, please read this post.



Marketing "Images"


The set of building forms in this image is shown in more detail below. The forms were produced using a genetic algorithm I created using Grasshopper for Rhinoceros 4.0 Trial.








Blog Table of Contents

This Semester at a Glance:

Experiment 7:


Experiment 6:


Experiment 5:

Experiment 4:

Experiment 3:

Experiment 2:

Experiment 1:

Research and Readings for Development of Experimental Ideas (evolutionary programming, generative modelling, various kinds of evolution):

Summary of my Experiments and their Findings

Rather than being distinct, the experiments at times melded into one another conceptually. So, I'll describe them individually, but you'll likely see some overlap. To save on words, I'll also state here that many of the experiments served a secondary purpose of improving my programming skills.



Experiment 1 Programming Forms that React to Occupants in Unity3D
Purposes: Check feasibility of hosting a 3D model of a building on this blog that users can walk through and experience. Program forms that can dynamically react to someone's presence.
Observations:
  • It's possible to host a 3D model of a building on this blog, and users can walk through and experience it. It's graphically not as pleasing or realistic as the Crysis Engine, but its portability and formal nature somewhat balance this. Custom shaders could be downloaded and used to improve the aesthetic.
  • Forms reacted to someone's presence, but are too abstract for now so were left as ethereal blocks. This idea could prove useful and interesting for seeing realtime sections of building spaces.

Experiment 2 Evolution with Javascript
Purposes: Familiarise myself with concepts of evolution and encapsulating them through Javascript.
Observations:
  • Evolution was fairly simple to encapsulate, though the script started to get a little messy from a bit of design-as-you-code that was inevitable in this circumstance.
  • Russell commented that the fine structure and processes behind the evolution weren't really being shown, so that could stand revision.

Experiment 3 Generating Forms in Grasshopper
Purposes: Familiarise myself with Grasshopper and Rhino for later use in evolving a building.
Observations:
  • After referring to a few video tutorials online, some examples from Jeremy Harkins that Russell suggested, and doing a ton of trial-and-error modelling, I can safely say I'm fairly confident in Grasshopper now. It definitely seems promising to use in the coming semester!

Experiment 4 Growth and Evolution with Javascript
Purposes: Familiarise myself with concepts of evolution and encapsulating them through Javascript. Show fine structure and processes behind the evolution.
Observations:
  • Evolution wasn't successfully encapsulated this time, since I spent too much time trying to get the aesthetic both pleasing and simple enough for the canvas to render (quite a mission given the poor canvas support from most browser versions!).
  • HTML5 seems not to be as well-supported as needed by most browsers yet.

Experiment 5 Evolutionary Algorithms in Grasshopper: City Generation
Purposes: Familiarise myself with Grasshopper and Rhino. See if I can produce something that looks convincingly like a city, to test my understanding of city form and development.
Observations:
  • This was my favourite experiment of all, and perhaps the most challenging. The main issues I faced related to caveats of Rhino and Grasshopper functionality, some of which needed workarounds with VBScript components. For example, when offsetting a curve, there's no way to tell Rhino or Grasshopper to expand or contract it, so you just need to try an offset and see if the curve is longer or shorter than the original. The other issue was to do with iterating over trees and lists, and was solved with a Graft component.
  • Convincing city forms were successfully being generated, though the subdivision method for city blocks could do with improvement.
  • After reading through relevant information about evolution, I've got my head around how the Galapagos evolutionary solver component works, and tried it out with the city generator to optimise increasing city GFA against minimising total building footprint area, all while keeping plan areas within reasonable bounds. I'll definitely be involving Grasshopper in next semester's form generation.
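The offset workaround lends itself to a quick sketch outside of Rhino. Here's a hedged Javascript version of the length test - note that offsetPolygon here is my own naive stand-in (scaling points about the centroid), not Rhino's actual curve offset:

```javascript
// Sum the edge lengths of a closed polygon given as an array of {x, y} points.
function perimeter(points) {
  var total = 0;
  for (var i = 0; i < points.length; i++) {
    var a = points[i];
    var b = points[(i + 1) % points.length];
    total += Math.sqrt(Math.pow(b.x - a.x, 2) + Math.pow(b.y - a.y, 2));
  }
  return total;
}

// Naive stand-in for a curve offset: push each point away from the centroid.
function offsetPolygon(points, distance) {
  var cx = 0, cy = 0;
  points.forEach(function (p) { cx += p.x / points.length; cy += p.y / points.length; });
  return points.map(function (p) {
    var dx = p.x - cx, dy = p.y - cy;
    var len = Math.sqrt(dx * dx + dy * dy) || 1;
    var scale = (len + distance) / len;
    return { x: cx + dx * scale, y: cy + dy * scale };
  });
}

// The workaround: try an offset, compare lengths, and flip the direction
// if the trial curve came out longer than the original.
function contract(points, distance) {
  var trial = offsetPolygon(points, distance);
  if (perimeter(trial) < perimeter(points)) return trial;
  return offsetPolygon(points, -distance);
}
```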

Experiment 6 Viewing Sydney through Information Technology (Yahoo Pipes)
Purposes: Familiarise myself with Yahoo Pipes and see if it will prove useful as a design informant to look into the life of Sydney as a kind of living, dynamically changing organism.
Observations:
  • As a feed processor and design informant, Yahoo Pipes shows promise, but as it stands it's lacking in some basic areas of streamlined functionality (eg, the process of filtering out everything but the tags and images from a feed seems needlessly complicated, and getting results from the pipe sometimes requires clicking on each "element" in turn and waiting for it to do its processing.)
  • Overall, I think I can work a little with Yahoo Pipes, but I don't think it would be a good idea to rely on it too heavily.

Experiment 7 Retrieving Information Updated in Real Time with Yahoo Pipes and Processing it for Viewing
Purposes: Develop a means by which to view a collection of snapshots of Sydney found uploaded online; this collection will inform the range of lifestyles and events that a building might potentially be host to and exposed to. Ideally, this data representation will also be aesthetically pleasing.
Observations:
  • Once a Yahoo Pipe is constructed, it works pleasingly well and simply. Remotely extracting information from a Pipe proved fairly easy, needing only a bit of custom Javascript and HTML. Shuffling the results to reduce instances of contiguously-themed blocks of data was simple, too, thanks to a little Googling.
  • With some Javascript and CSS, the images were laid out on a grid, which looks somewhat aesthetically pleasing.
  • To get a good snapshot of the city, it would be best to retrieve images from as many sources as possible to get a larger sample population. This means scouring the internet for image feeds.
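Remotely reading a Pipe's output mostly comes down to hitting its JSON endpoint and unpacking the items. A minimal sketch of the idea - the pipe ID is a placeholder, and the _callback parameter name is from memory, so treat both as assumptions:

```javascript
// Build the JSON-rendering URL for a Yahoo Pipe (placeholder pipe ID).
function pipeUrl(pipeId, callbackName) {
  return "http://pipes.yahoo.com/pipes/pipe.run?_id=" + pipeId +
         "&_render=json&_callback=" + callbackName;
}

// Pull title/link pairs out of a parsed Pipe response (value.items is the
// shape Pipes' JSON output uses).
function extractItems(response) {
  var items = (response.value && response.value.items) || [];
  return items.map(function (item) {
    return { title: item.title, link: item.link };
  });
}

// In the browser, 2011-style JSONP: inject a script tag and let the Pipe
// call our handler with the data.
function loadPipe(pipeId, handler) {
  window.onPipeData = function (data) { handler(extractItems(data)); };
  var s = document.createElement("script");
  s.src = pipeUrl(pipeId, "onPipeData");
  document.body.appendChild(s);
}
```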

A Quick List of Some Precedents and Inspiration

Increasing Urban Density and the Utility of an Adaptive Space

As argued by Matthew Pullinger of Hassell, intelligently increasing urban density is an essential move in creating a sustainable city. With the implementation of a few simple strategies - intelligently increasing urban density (as mentioned), linking main city centres with an orbital rail network, and integrating mixed-use zoning rather than compartmentalised zoning to reduce transit distances (ie, the failing of the garden city strategy) - a sustainable city can very likely be realised. One concern voiced by Matthew was that of amenity in urban density: density can be increased, but often at the cost of amenity.

One idea that's stuck with me over time is that of a space which can physically transform to suit its function. Such a space could be argued as being active, alive, and adaptive. As an extended phenotype of its inhabitants, a building capable of physical transformations can effectively coadapt with humans through adjusting its amenity to survive.

Some quick examples of compact living achieved through a dynamic space design (there are plenty of examples online, but these two stood out to me for being a clever renovation of a space with existing dimensions rather than a compact design with a freer range of dimensions being plonked in the middle of a field or other open space):



Such spaces require active, able-bodied occupants. As suggested by Christian in the second video, such a space provides physical exercise while also addressing the issue of intelligently increasing urban density. However, it's obvious that some living spaces would have to meet the physical capacity of less fit occupants. In this sense, a new, lighter degree of disability would implicitly be established in creating these apartments - the disability to live in such a dense, active environment.



Choosing Bemes to Include in Evolutionary Form Generation

In the final critique last week, it was suggested by two of the guest critics that I should narrow down the range of bemes (click for a definition of "beme") I am looking at so that I can specifically use two or three of them - for example, the increasingly popular beme for homes having a living space dividing the front door, kitchen, and bedrooms. A further suggestion was to choose bemes that can have their fitness in a given context measured. This gives the opportunity to actually test my design to see if it meets given performance criteria (eg, sun access, living space dimensions, etc).

A method I've found useful for planning work is to set two goals for myself: one that's realistic to achieve, and another that's far harder and thus less likely to be achieved in the same timeframe.

The first goal, in this instance, will be the integration of two bemes into form generation that relies on the process of evolution. Since I'm aiming to improve amenity while intelligently increasing urban density, the bemes I've chosen are:
  • Optimal Sun Access for as many apartments (and offices) as possible, typically in the form of north-facing windows.
  • Optimal Apartment Dimensions for functions typically contained within them, while also considering compact apartment living types and prototypes I'll continue to research.

The second goal will be:
  • Automatic Space Arrangement within apartments and / or offices. This is something I've been wanting to do for several years, but until now I haven't really had the support to attempt it in a design studio. It'd be great to achieve this aim, because many projects in the real world could stand to have some automation to save manually designing each parametrically repetitive space in a large complex.

Thursday, June 2, 2011

Grasshopper City Generator Video Uploaded

Here's a video of that City Generator I made in action, iterating through a ton of genomes within each generation, right up to about the 22nd or 23rd generation, at which point the morphology of the city had begun to stabilise into a reliable subset of the gene space.


The generator's still using a Substrate component to split the city blocks up, which is something I'd like to change for a subdivision method of my own devising later on. The issue with the Substrate component is that it occasionally produces incredibly thin city blocks, rather than more "neatly" spacing its generated lines. And with that, sleep!

Screenshot of the Yahoo Pipe used for the latest City Evolution Snapshotter script

The Pipe itself is fairly simple, searching a range of emotionally and culturally powerful terms that can be expanded as desired for more comprehensive information. As the experiment stands, it serves as a proof of concept for the kinds of interesting results that can come from a simple data-collecting process.


A link to the bigger image is here.

Wednesday, June 1, 2011

Quick Revamp of City Evolution Snapshotter

I noticed that certain images were being grouped together in the feed provided, creating sequential sets of data from like sources. To fix this, I found a clean, generic shuffle() function someone had written in Javascript, based on what they called the "Fisher-Yates" algorithm. This lets the image feeds be viewed the way I intended: in a random order, so that at a glance you have a good chance of seeing the face of the city for what it is to its locals and visitors.
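For the record, the Fisher-Yates shuffle is short enough to restate here: walk the array backwards, swapping each slot with a randomly chosen slot at or before it. This is the standard algorithm rather than the exact function I found:

```javascript
// Fisher-Yates shuffle: uniformly randomises an array in place.
function shuffle(array) {
  for (var i = array.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1)); // pick from slots 0..i
    var tmp = array[i];
    array[i] = array[j];
    array[j] = tmp;
  }
  return array;
}
```

Applied to the image feed, this breaks up the contiguous same-source runs without losing or duplicating any entries.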

Here's the latest version of the City Evolution Snapshotter script, and here's some more screenshots:


A link to the bigger image is here.




A link to the bigger image is here.




A link to the bigger image is here.




A link to the bigger image is here.




A link to the bigger image is here.



I think something based on these snapshots would form a good background for the panels I'm producing for tomorrow. Like a randomly varied genome, I'll replace random sequences of images with new information.

Also, a note for me to read later: upload the timelapse of a proto-city evolving in Grasshopper with Galapagos!

Working on a Yahoo Pipes City Evolution Snapshotter

Since what I've got going on in the background is somewhat transient and mixed with the other information on this page, I thought it'd be a good idea to throw together a bit of HTML to act as a locally available webpage that I (or you, if you copy the text from here and save it as a .html file) can access at any time and capture at the press of a button.

Edit: some screenshots of it in action:


A link to the bigger image is here.




A link to the bigger image is here.

Yahoo Pipes Working Well

After some Googling, I managed to find an example to get some Yahoo Pipes stuff working. Pretty nifty. In a few minutes, you'll see it in the background, below the canvas element. It's a simple graphical way of reading the mind of the city of Sydney, by finding prominent images describing its life. It's a quick and easy design informant for the kinds of activity Sydney's popularity and publicity thrives on, and can serve as a basis for the kinds of events, activities, and ways of life a new built organism on the F.A.M.E. site would do better to nurture and grow with.

A technical note: the Pipe is currently working in Firefox and Google Chrome (on Windows 7), though IE is another question because it's majorly lagging behind the other browsers when it comes to... anything usefully or interestingly complex, really. IE compatibility isn't an important part of the Yahoo Pipes experiment, anyway. All that matters is that I can retrieve interesting and useful information, then - because this is a design subject - process it into aesthetically pleasing information.

Letting Grasshopper Experimentation Evolve with Galapagos

I'm currently fiddling with something I'd been sitting on since late last week, not sure if it would work or not. A city generator in Grasshopper. I'm using it to test my knowledge of how a city forms over time. It's an ongoing experiment, since a city is a very complex system of cityblock-level and building-level evolution, and relies on many factors as outlined in my previous posts.

For a while, I was a little stuck on iterating over arrays stored oddly in a tree, but with a post on the Grasshopper forums and a prompt reply from David Rutten (the creator of Grasshopper O_o), the solution is working much more convincingly.

Rather than manually trawling through all possible variations of the parameters controlling my city form generator, I decided to let Grasshopper's Galapagos solver do its thing. The cool part of it is that once I made it, I could just sit back and watch Galapagos find a city form optimised for the survival functions I define.

Also, while on the Grasshopper website, I found a cool video that really inspires me to do more with Galapagos:


Monday, May 30, 2011

New Background Looking Pretty Darn Good

Russell asked me to restyle my background to make the fine evolutionary structures and processes going on behind the scenes more evident. Since I "spaghetti coded" the final touches to it, I figured it'd be better to start from square one again, this time with a clearer goal in mind.

Currently, I've gone through 20 subversions of the new background, carefully working on a new aesthetic. This time, instead of basic blocks, the background is comprised of living, growing garden beds.

Each garden bed is made up of:
  • One tree that grows and nourishes the garden. If the tree dies, the garden bed's other plants can't receive nourishment to grow.
  • Several flowers that grow only if the garden bed's tree is alive. All flowers of a given garden bed are the same colour (the colours used were picked to match the colour scheme of my blog).
  • Grass, which grows only while the garden bed's tree is alive. As the tree's main trunk grows in size, the grass spreads.

I'm now quickly working on two things:
  • Replication and variation through successful garden beds (ones that've lived long enough) spreading their seeds to create new garden beds beside them. Seeds will spread randomly in four directions: up, down, left, and right, and will vary slightly in their attributes from the parent.
  • Feeding trees to keep them alive by hovering the mouse over their garden bed.
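The replication-and-variation rule can be sketched roughly like this - attribute names like growthRate and flowerCount are illustrative stand-ins, not necessarily what my script uses:

```javascript
// Seeds spread in one of four directions: up, down, left, or right.
var DIRECTIONS = [{dx:0,dy:-1}, {dx:0,dy:1}, {dx:-1,dy:0}, {dx:1,dy:0}];

// Variation: nudge an attribute a small random amount away from the parent's.
function mutate(value, amount) {
  return value + (Math.random() * 2 - 1) * amount;
}

// Replication: a successful garden bed seeds an adjacent cell with a child
// whose attributes vary slightly from its own.
function spreadSeed(parent) {
  var dir = DIRECTIONS[Math.floor(Math.random() * DIRECTIONS.length)];
  return {
    x: parent.x + dir.dx,
    y: parent.y + dir.dy,
    growthRate: mutate(parent.growthRate, 0.1),
    flowerCount: Math.max(1, Math.round(mutate(parent.flowerCount, 1)))
  };
}
```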

An interesting aspect of this experiment is that it can be thought of as an analogy for the evolution of city blocks. The mapping would be:
  • Garden beds to city blocks,
  • Trees to activity nodes,
  • Grass to the parasitic exposure smaller buildings receive from the presence of a successful activity node, and
  • Energy to the "life & longevity" a city block's buildings gain from the keen inhabitance of people.

Wednesday, May 25, 2011

Genes, Memes, and Bemes

After talking with Russell, he suggested that I succinctly define what I mean when I say "gene", "meme", and "beme", since I haven't yet clarified those terms in my posts so far.



Quick Definitions of Gene, Meme, and Beme

A gene is a unit of heredity (things copied from parent to child) in evolution. Though popularly associated with only biological evolution, genes exist in any system involving evolution. The vast majority of the time, genes directly affect the capacity for an organism to survive and its capacity to replicate its genes (eg, having children). More information can be found on the relevant Wikipedia article.

A meme is a gene in the specific context of cultural evolution, where the organisms are cultural trends that compete and cooperate. The term was first coined by Richard Dawkins in The Selfish Gene as an analogy for how genes work, but soon took off as a useful concept in cultural development.

Here's a video of a TEDTalk by Susan Blackmore that lucidly describes the basics of memes, amongst other interesting hypotheses and observations about them.




A beme is a meme in the even more specific context of the evolution of culture and fashion in building design. This term was coined by me for this research studio to aid in succinctly understanding and describing relevant aspects of my design methodology for the coming final semester. At its base, it is a class of meme, which itself is an abstracted class of gene. When I talk about evolution involving bemes, I will refer to it as bemetic evolution.



Evidence Supporting the Validity of the Concept of a Beme

In a way, the reasoning laid out below served as a loose, conceptual experiment to test the hypothesis that "bemes exist and apply to the evolution of building design". The crux of the evidence for bemes is proving evolution can be applied to the culture and process of building design. As such, a specific definition is required to determine when a system does and does not involve evolution.

According to part of the Wikipedia article mentioned above, in Dan Dennett's book "Consciousness Explained" (1991, Boston: Little, Brown and Co.), evolution exists when a system encapsulates three conditions:
  • variation, or the introduction of new change to existing elements,
  • heredity or replication, or the capacity to create copies of elements, and
  • differential "fitness", or the opportunity for one element to be more or less suited to the environment than another.

So now it must be proven whether or not the culture of building design involves variation, heredity (or replication), and differential fitness. If any one condition is missing, by definition, the system doesn't involve evolution.

Looking into the history of building design culture, bemetic evolution has manifested as the use of precedent inspiration (replication and heredity), the reinterpretation and appropriation of precedent (variation), and the success and popularity of a design (differential fitness through cultural selection).

And thus, "bemes exist and apply to the evolution of building design" has been qualitatively confirmed through philosophical argumentation.
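For concreteness, the three conditions can be bundled into one minimal loop: replication (children copy a parent's value), variation (a small random mutation), and differential fitness (values closer to a target survive selection). This is a toy sketch of the definition, not my actual background script:

```javascript
// A minimal evolutionary loop over a population of plain numbers.
function evolve(population, target, generations) {
  for (var g = 0; g < generations; g++) {
    // Differential fitness: sort by closeness to the target, keep the best half.
    population.sort(function (a, b) {
      return Math.abs(a - target) - Math.abs(b - target);
    });
    var parents = population.slice(0, population.length / 2);

    // Replication with variation: each parent leaves two slightly mutated children.
    population = [];
    parents.forEach(function (p) {
      population.push(p + (Math.random() - 0.5));
      population.push(p + (Math.random() - 0.5));
    });
  }
  return population;
}
```

Remove any one of the three ingredients - the sort, the copying, or the mutation - and, by the definition above, the system no longer evolves.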

Thursday, May 19, 2011

Re:Design Matrix ReDesign Matrix

Confusing post title, but this post is essentially documentation of some brainstorming for how my Design Matrix is best structured. The original idea I had could be integrated, but this one seems much more sound and able to be translated into actual experiments, rather than the frankly woolly ideas I was working with before (shown in an interactive 3D environment in this post).

Now that I've settled on the title of my graduation project (evolvedDesignInformant), I'll be uploading my work, progress, and thoughts to this blog. Also, the idea of evolution is now established as perhaps the strongest influence on the work I'll be doing. It's a philosophically and mathematically challenging set of ideas to get my head around, but I think I'll get a lot out of it with enough focus. Anyway, on to the design matrix...



EVOLUTION
  • Physical buildings, "bemes"
  • Metaphysical society, memes
  • Virtual design informants

PROGRAMMING
  • Physical interpretation and adaptation to physical stimuli through Arduino
  • Metaphysical programmatic encapsulation of emotions and personality in the parts of a place that give it "life" in a sense -> involve yahoo pipes?
  • Virtual modelling of evolution, expanding generic and specific knowledge base [GENEric, SPECIfic -> concepts of evolution can be applied here, too]

MODELLING
  • Physical laser cutting, 3d printing, manual modelling, "genetic recombination" of these methods to produce new mutations
  • Metaphysical programmatic encapsulation of emotions and personality in the parts of a place that give it "life" in a sense -> involve yahoo pipes?
  • Virtual grasshopper [deterministic, evolution], interactive 3D environments

Experimenting with Rhinoceros Grasshopper Video 2 Uploaded

I decided to squeeze in a little more "fiddling time" in transit to and from uni yesterday. One of the first things I said I wanted to know how to make in Grasshopper was a smooth, undulating surface. Thus, the aim of this model was to make an undulating surface, with a few extra things thrown into the mix to expand the number of things I'd need to figure out to get a finished model. If I recall correctly, this model took about 90 minutes.





There weren't many hiccups through this video, since I'm getting more used to knowing where to look for the kinds of functions I want. The main sticking point this time was when I tried to give the undulating 2D surface thickness. My initial thought was to duplicate the surface upwards with a "move", and then try the "cap holes" function. However, that didn't work, since "cap holes" only works for holes defined by planar curves, rather than intelligently joining vertices with edges and surfacing the resulting closed polygons like I'd hoped. I ended up realising an "extrude" was all I needed.

After that, the only confusing thing was figuring out how the solid functions work. At first, I couldn't figure out why two cylinders would no longer work for a solid union when I was trying to intersect them, but then I noticed that Grasshopper's cylinder primitive isn't a closed solid, so trying to perform a solid union for them when their open ends weren't exactly planar would end up failing. After using cap holes, that problem was fixed.

Another thing: I just took a look at PK's blog and saw he'd posted a video using Grasshopper's "Galapagos" capsule. I remember Russell mentioning it in a previous studio class, but I'd forgotten to check it out. Looking at some videos and fiddling briefly with it in Grasshopper to evolve solutions to various simple equations, it seems to be exactly the sort of thing I'd be interested in using to evolve different aspects of my final design!

I'll definitely be using it to see what I can do with it.

Experimenting with Rhinoceros Grasshopper Video Uploaded

The video finished rendering and uploading, so here it is!


Experimenting with Rhinoceros Grasshopper

I'm currently rendering a video made using Chronolapse while I toyed around with Grasshopper. Shortly into the video, I decided to give myself something to try to achieve by the end of it - a twisted rectangular prism shape with rectangles along it representing floors. Looking back over it and playing a little more with Grasshopper, I noticed there are a few tools I could possibly have used to make the final form much faster.

However, the main point of this stage of my computer modelling experiment is to get myself familiar with Grasshopper, rather than efficiently producing an end result in one go. As with any experiment, the criterion of success is what is learnt.

The main sticking point was when it came to combining two lists of transforms - one list of rotations, one list of translations - so that I'd get one list of transforms that each represented a rotation and translation. I found out how to combine an entire list of transforms into a single transform, but that was the closest I could get. So, in a bit of desperation and having never before programmed in VB.NET, I fiddled wildly with it in a script until I got my head around how Grasshopper let you handle its parameters and "output" - which is really just any variables passed by reference rather than by value.
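The pairwise combination I was after is easy to sketch generically. Here, transforms are plain Javascript functions on {x, y} points rather than Rhino's Transform type, so take this as an analogy for the VB.NET rather than a transcript of it:

```javascript
// A rotation about the origin, as a function from point to point.
function rotation(angle) {
  return function (p) {
    return { x: p.x * Math.cos(angle) - p.y * Math.sin(angle),
             y: p.x * Math.sin(angle) + p.y * Math.cos(angle) };
  };
}

// A translation, likewise as a function.
function translation(dx, dy) {
  return function (p) { return { x: p.x + dx, y: p.y + dy }; };
}

// Zip the two lists element-by-element: result i rotates with rotation i,
// then moves with translation i - one combined transform per floor.
function zipTransforms(rotations, translations) {
  return rotations.map(function (rot, i) {
    var move = translations[i];
    return function (p) { return move(rot(p)); };
  });
}
```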

The next sticking point was that I couldn't find an object reference that would drastically speed up the rate at which I could throw together the script. I noticed the IDE had an autocomplete list that would pop up as you wrote your script, but it wasn't really comprehensive enough for me to know what each variable or method in the list was for. After Googling for a while and finding nothing, I resorted to looking in the plugin folder for Grasshopper, and found RhinoCommon.xml under Plug-ins/Grasshopper/rh_common/RhinoCommon.xml (relative to the Rhino install folder). You'll probably see a few flashes of it opened in Chrome in the video once I upload it. It was somewhat helpful because I could easily Ctrl+F and enter the name of the type I wanted, and hit Ctrl+G until I found it. Admittedly, that took quite a few Ctrl+Gs at times, but on the way, I was finding a few useful tidbits here and there.

After I got past that point - and a few sundry errors to do with invalid typecasting of Transform to Vector3 - it was fairly smooth sailing.

I intend for my next experiment in Grasshopper to involve generating a form that's a little more complex, but I'm not quite sure what would be appropriate yet.

I'll put the Chronolapse video up in my next post when it finishes rendering.

Experimenting with Evolution Programming 2

After today's effort, I've made some significant steps with the evolution program you see milling away in the background whenever you move your mouse in Firefox or Chrome. For a simple description of how to interact with it, wave the mouse over empty space in the background to spawn new "creatures" (squares). Over time, the creatures consume energy and so start to shrink (like a dying flower). To feed a creature to keep it alive, wave the mouse off it and back on it again. If a creature stays alive until maturity (currently 0.3 seconds), it gives birth to anywhere between 1 and 8 children into the adjacent blocks around it. Birthing happens in a reliable manner (eg, most successful births make a child in the position directly below the parent), which you'll probably notice as the creatures appearing to fall downwards like Tetris blocks. Creatures come in two colours, determined by how implicitly nervous they are - blue creatures shiver more the more nervous they are, whereas orange creatures aren't implicitly nervous enough to shiver.

Currently, I have a system that emulates:
  • Creatures (each square is a potential creature, until one is either born there, or spawned there)
  • Genes (three of them: metabolic rate, aggressiveness, and nervousness),
  • Inheritance of genes,
  • Replication (because of the above two),
  • Energy transfer (in the form of children taking energy from parents at birth, and children absorbing the energy of the elderly if they're occupying the same space when born, causing the elderly to die and be replaced),
  • Thermodynamic behaviour for the most part (eg, ignoring energy given by the mouse through feeding and spawning creatures, energy isn't created out of nowhere, and is only "lost" as useless energy).
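The energy-conserving birth rule can be sketched like so - the field names and the one-unit energy cost per child are illustrative assumptions, not the values my script actually uses:

```javascript
// A parent splits part of its energy among its children at birth, so no
// energy is created out of nowhere.
function giveBirth(parent, childCount) {
  var energyPerChild = 1;
  // The parent can only afford children it has the energy for, keeping a
  // margin so it isn't drained to zero by the birth itself.
  var affordable = Math.min(childCount,
                            Math.floor(parent.energy / (energyPerChild + 1)));
  var children = [];
  for (var i = 0; i < affordable; i++) {
    parent.energy -= energyPerChild;
    children.push({
      energy: energyPerChild,
      // Inheritance: children copy the parent's genes.
      metabolicRate: parent.metabolicRate,
      nervousness: parent.nervousness
    });
  }
  return children;
}
```

Capping the brood at what the parent can afford is exactly the guard whose absence caused the population explosion mentioned below.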

Random variation could be added with some careful effort, but I suspect its results wouldn't really manifest in any way noticeably different from what you currently see.

I'm still working on getting a reliable "feeding" method added to the mix, beyond children eating the elderly. What I was trying for a while - and you can still see commented out in my JS draw() function - was a feeding method that let creatures eat adjacent creatures. This was working well, but it dominated the living patterns of the creatures. With only a few creatures near one another, a checkerboard pattern would almost immediately form, with babies being born into the "holes" and getting immediately eaten by an adjacent adult with more energy.

In the attempt to get feeding working in a more complex way, I added a randomised variable that let lower-energy creatures occasionally eat higher-energy ones, but it only either slightly dampened the checkerboard effect, or resulted in flickering graphical madness.

Other than that, I encountered a few interesting phenomena. The funniest was creating what is best described as a race of immortal zombies, because creatures, when dead, were still able to feed off the living creatures near them. Another thing that happened was an explosion in the creature population when I accidentally made it possible for a parent to give birth to more children than it had the energy for.

I should mention that this idea came from a Java Applet that I found. It didn't model evolution, but it did model thermodynamics. Here it is, linked from his website, Repeat While True.



About CellShades
CellShades is derived from the concept of cellular automata, showing how complex behaviour of organic appearance can emerge from a simple set of rules. Using the mouse, the user spills liquid onto a virtual petri-dish. If the amount of liquid on any position on the grid remains above a certain level for a prolonged time, cells will emerge there. These cells will move and consume liquid to harvest energy according to a set of parameters which you may change and toy around with.

The intensity of the liquid on the grid is visualized by a color gradient from orange to purple.

Interestingly, the Applet was made in Processing, which I've seen is installed on the FBE's computers. I don't have immediate plans to try it out since there's so much software I'm already planning to get my head around as part of these experiments, but it's definitely piqued my interest.

Experimenting with Evolution Programming

As part of looking into evolution in programming, I wanted to modify this blog to be a little better-themed to my design matrix. What I figured could prove useful is a system that adapts its form to the presence of other objects. To integrate that idea with this blog, I wanted to make a background that adapts its form to the location of the mouse. Depending on how useful it proves, this could be reapplied later in 3D for form generation in my final project.

Currently, I've got a background that reacts to the mouse and uses the HTML5 canvas element. This was a test I threw together to see if it'd have any problems when working with Blogger or Firefox / Chrome. There are two canvases in use at the moment, but I'll roll back a version tomorrow so I can instead have one large canvas that will let the evolving block units interact more easily.
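The canvas work itself is JavaScript, but the adaptation rule can be sketched independently of the drawing code. Here's a minimal Python version of the kind of rule I mean: a block that grows as the mouse approaches it. The base size and influence radius are placeholder values:

```python
import math

def block_size(block_pos, mouse_pos, base=20.0, influence=150.0):
    """Grow a block as the mouse approaches it: double the base size
    when the cursor is on top of it, shrinking linearly back to the
    base size at the edge of the influence radius."""
    dx = block_pos[0] - mouse_pos[0]
    dy = block_pos[1] - mouse_pos[1]
    dist = math.hypot(dx, dy)
    scale = max(0.0, 1.0 - dist / influence)  # 1 at the cursor, 0 beyond radius
    return base * (1.0 + scale)
```

In the canvas version, the same function just runs per block on every mousemove event before redrawing; in 3D it would take a 3D position and drive a transform instead.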

Attempt to Upload my Design Matrix Hosted in an Interactive 3D Environment

An idea I had for presenting my design matrix is to place it in a surreal 3D environment that reacts to the user's presence, and allows them to discover the design matrix in a format that's a little more interesting than static paper.

The method of representation could tolerate being more complex, giving the user more implicit incentive to search for the parts of the matrix. In a later version, I think it would make sense to let the user keep track of the parts of the matrix they've found, since they act like torn scraps of a complete document floating through the ethereal landscape - almost as though they're the recorded thoughts of a kooky, solitary scientist, torn up and strewn across the terrain. A little modification - appropriate sounds, slightly adjusted aesthetics, and an extension of the spaces defined by the ghostly building - would serve this interactive 3D environment well, in my opinion. I might also modify the cubes constituting the building so that some of them fly into position automatically once the user is close enough, rather than holding their distance depending on how close the player is to their final position.

This post also serves as a testing ground for getting the Unity Webplayer (hopefully!) running on my blog. Failing that, I'll upload it to Kongregate.com under the guise of being a game, and point there instead.


Controls
Use W, A, S, and D or the arrow keys to move. Look around by moving the mouse. Jump by hitting the spacebar. If the framerate is choppy, right click on the game and click "fullscreen". Hit Esc to exit fullscreen mode.

Created with Unity.


Awesome, it works!

Some references for computing applied to architecture

A site that might prove useful for mathematical theory and programming theory is arxiv.org, which provides "Open access to 670,431 e-prints in Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance and Statistics", as described on its main page.

The site also mentions it has an RSS feed (amongst other things) for robots that automatically parse the archives, so that could prove useful after filtering the information through Yahoo Pipes.

Slashdot.org might also prove interesting to look at from time to time for technology-related news and popular, cutting-edge information.

Computing applied in architecture

I haven't yet posted about the interaction of the disciplinary views of computing and architecture because I've had trouble narrowing down exactly what might prove to be a useful (or at least promising) interaction to pursue.

After some thought, I've worked out some areas of research for this part of my research matrix, selected for personal interest and future utility:

  • Researching the Theoretical Aspects of Computing: in particular, the mathematical concepts that form its base, or that helped develop computing into what it is today. The ideas and concepts in this discipline could be abstracted and applied to architecture - or perhaps applied without abstraction, since maths itself is already abstract.
  • 3D Visualisation and Interaction: I would like to devote some time in this course to working on 3D visualisations (renders, point-tracking videos, and augmented reality) and interaction in a 3D environment (Unity3D, Crysis; to simulate a building's structurally feasible, intelligent response to occupants).
  • Research into and Use of Programming: Given my experience in the area, it would be a shame to not use it when it comes to the interaction of the disciplinary views of computing and architecture. In the past, concepts from programming have proven useful for me in generating good architecture, so I'm confident they will help once more. Furthermore, concepts from programming relate this point directly to the first point of researching the theoretical aspects of computing. If I broaden my knowledge in that area, I will have even more concepts to aid my design process.

Evolution applied in architecture

As one of the three disciplines I have chosen, I will now show some examples and attempt to eloquently express some thoughts about how I think the disciplinary views of architecture and evolution can interact.

To start with, my contention is that architectural design would benefit from a practical application of evolution. Plenty of building designs have been generated with a very conscious incorporation of automated evolution, but I have yet to find genuinely useful examples, as opposed to something that just coincidentally resulted in a pretty form. What is generally lacking in the explanations of such "evolved" buildings is what the criteria for survival were, what the varied genes were, and how many generations deep the evolution was carried out.

For a long time, I've been interested in the application of evolution in fields other than biology, with a particular view to solving various complex problems in incredibly simple - though almost unintelligible - ways. For instance, one of the non-fiction chapters of the first "Science of the Discworld" book (by Terry Pratchett, Ian Stewart, and Jack Cohen) discusses an exploration of evolution through a genetic algorithm approach to making an electronic circuit able to distinguish between two tones. The circuit's logic gates were the genes that were randomly varied and inherited from generation to generation, and the survivors were selected on their capacity to give a different output - 1 or 0 - for each of the two tones; it didn't matter which tone was given which value, as long as there was a difference.

Early on, the circuit's capacity to tell the difference was non-existent or negligible. However, after sufficiently many generations, reliable differences began to arise. After only 4000 generations, the circuit would get the tones wrong barely 1 time in 1000. At 8000 generations, no errors in tone distinction were encountered at all. The resulting circuit, however, was very complicated and hard to understand - for example, a portion of the circuit was found not to be connected to anything else, yet removing it always caused the circuit to stop working.

In my opinion, though, the most important part of the experiment was the efficiency and elegance of the solution that results from appropriately constrained and defined evolution: the evolved circuit was far smaller (ie, had far fewer logic gates) than other circuits previously made to tell the difference between two tones.
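For anyone wanting to try this themselves, the skeleton of such a genetic algorithm is quite small. The sketch below uses a toy fitness function (counting 1-bits) in place of the circuit's tone test, and all the parameter values are invented, but the select-inherit-mutate loop is the same shape:

```python
import random

GENOME_LEN = 32
POP_SIZE = 50
MUTATION_RATE = 0.02  # chance of flipping each gene (assumed value)

def fitness(genome):
    """Toy stand-in for 'distinguishes the two tones': count 1-bits.
    The circuit experiment instead scored genomes on whether the
    outputs for the two tones differed."""
    return sum(genome)

def mutate(genome):
    # Each gene independently has a small chance of flipping.
    return [g ^ (random.random() < MUTATION_RATE) for g in genome]

def evolve(generations=200):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]           # selection
        children = [mutate(random.choice(survivors))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children                # inheritance + variation
    return max(pop, key=fitness)
```

Swap the fitness function for an architectural criterion - accessibility, structure, daylight - and the same loop becomes a design informant.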

After doing some Googling, I've found a few interesting sources that I'll look into over the holidays. They are:

  • An ecomorphic theatre as a case study for embodied design (paper located here): mentions some interesting historical precedent to generative design of architecture.
  • An Evolutionary Architecture (version of book released online located here): Covers some interesting concepts to do with the kinds of forms that can be generated, and some ways of using the internet to expose a 3D model to genetic variation.
  • Autotechtonica.org (link located here): is currently under construction, but seems to offer a few simple neologisms and their definitions, which might be handy to glance over if you're trying to learn about the topic like I am.
  • Morphogenesis of Spatial Configurations (link located here): Talks about evolution when selecting forms based on building performance criteria such as structure and accessibility.

From the second source, I found an example of what I described at the start of this post: a process that has just coincidentally resulted in a pretty form, rather than a genuinely useful one.


An animated example of co-operative evolution by a network of computers. Pretty, but lacks useful information to explain what it is. (from http://www.aaschool.ac.uk/publications/ea/intro.html)


What I would like to create using the process of evolution for this masters studio is something of utility. I want to produce an intelligible analysis that can be clearly and specifically used to inform an architectural design. Ideally, the evolution will be applied to an area that doesn't already have its own simple solutions. I think it would be more exciting if it were applied to, for example, the problem of space organisation and linkage, which I have observed is often a point of unfounded contention between designers - eg, arguments about one space not being suited to be connected to another, and so forth.

Furthermore, it seems that evolving the useful aspects of a building would best serve as a design informant rather than a means of producing the end design in itself, since there are philosophical aspects of design that haven't yet been accurately encapsulated in formal systems (such as those used to found computer science and information technology). To clarify: I trust that sufficiently many generations of rigorously managed evolution would produce an effectively failsafe product, but only if the appropriate conditions of selection and the appropriate genes were known and formally encoded in the process of evolution. However, culture and many philosophies have yet to be formally encoded in such a comprehensive manner. Thus, I would use evolution to inform my design process for those conditions of selection and genes which are known, but I would refine the design personally to ensure it suits the formally undefined constituents of philosophy and culture.

Something that seems a little more promising than the above animation is the paper on Morphogenesis of Spatial Configurations. Referring to the below image of a 3D model generated from Lindenmayer Systems (aka L-Systems) and genetic programming, it certainly seems to produce what looks like a far more sensible form, though I am unsure of what the original L-System's configuration was.
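The string-rewriting core of an L-System is tiny, which is part of what makes it attractive for genetic programming: the rules themselves become the genes. A minimal Python sketch (the example rules are Lindenmayer's classic algae system, not the paper's, which aren't given):

```python
def expand(axiom, rules, depth):
    """Rewrite every symbol by its rule (unchanged if it has none),
    `depth` times -- the core of a Lindenmayer system. In the paper,
    genetic programming varies the rules, and the resulting strings
    are interpreted as spatial configurations."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A.
algae = {"A": "AB", "B": "A"}
```

Interpreting the symbols as drawing or building operations (extrude, branch, turn) is what turns these strings into forms.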



I'm really liking how many freely available online resources are turning up for this topic. There'll be a lot of reading to do, but I suspect I'll learn a lot in the process, which will hopefully save time when it comes to producing my own evolved design informant.

Building my learning machine, and the building as a learning machine

This idea spawned somewhat spontaneously while I was thinking back over a quote that's stuck with me for the better part of three years. It was delivered with such colloquial, intelligent profundity that I couldn't help but wholly absorb it, as well as the rest of the information delivered with the speech it came from (shown below).


"If you look at the interactions of a human brain, as we heard yesterday from a number of presentations, intelligence is wonderfully interactive. The brain isn't divided into compartments. In fact, creativity -- which I define as the process of having original ideas that have value -- more often than not comes about through the interaction of different disciplinary ways of seeing things." (Ken Robinson)



There are two aspects of this quote that stand out to me with respect to this final year studio and architecture. Firstly, that an original idea that has value "more often than not comes about through the interaction of different disciplinary ways of seeing things", and that this would be a useful way to build my learning machine (ie, the processes I will follow for research and development) for this studio. Secondly, that "intelligence is wonderfully interactive", and that a brain "isn't divided into compartments", and that these points can be used to conceptualise a building as a learning machine.



The Building of My Learning Machine

The first aspect is how I want to work through this master's year studio - the interaction of different disciplinary ways of seeing things to generate original ideas that have value. I will select three disciplinary ways of seeing things which seem to have promising potential when interacting with the disciplinary way of seeing things that is architectural design. Currently, the three disciplinary ways of seeing things I have chosen are:

  1. Biology,
  2. Evolution, and
  3. Computing.


The Building as a Learning Machine

This train of thought fits best under the discipline of biology, but still has some strong relationships with evolution and computing.

The idea of an intelligent, interactive building is an exciting one, and seems to be cropping up more and more these days (specific examples to be searched up later and added retrospectively so I don't break my train of thought). Most of the time, no one part of the building's operation and usage is wholly distinct and unaffected by the other parts. Thus, according to the definition Ken Robinson uses, it could be said that a building is like a brain. Historically, humans have been creating braindead buildings. Sometimes beautiful buildings, but braindead nonetheless. They operate, they breathe, and everything like that - but they're vegetables. They cannot respond to us. Or if they do, it's only in obvious terms, through wholly controlled interactions. Recently, however, technologically-minded interventions have introduced the capacity for reactions in the built form. It's as though some buildings and building elements are now recovering from a coma, and are starting to be able to autonomously respond in complex ways to interaction.

Though Ken's speech was explicitly about human learning and education, it is my contention that interactive, responsive architecture - occasionally stylistically classified as "high technology" - would benefit from such a conceptualisation. The building as a learning machine is an interesting, exciting idea. Due specifically to technological advances, built forms are capable of being dynamic, and adapting to their purpose. Through analysis of occupational usage - which in the case of this studio would likely be emulated through data obtained via social website analysis - buildings can be programmatically designed to modify themselves to suit occupational use.

The conceptualisation of the building as a learning machine can extend beyond this specific example. More generally, the building as a learning machine is capable of responding to its environment and, moreover, learning the trends. Conceptualised as a finite state machine, the learning machine's state would change in response to the input from its environment, with subsequent states depending on the previous states as a form of trend-learning. In a way, having the building as a learning machine relates somewhat to forming a practical application of phenomenalism.
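As a concrete (if toy) illustration of that finite state machine idea, here's a Python sketch in which a building's next state depends on both the environmental input and its current state, while transition counts accumulate as learned trends. The states and inputs are invented for illustration:

```python
from collections import defaultdict

class LearningBuilding:
    """A building modelled as a finite state machine that records the
    frequency of its transitions -- a crude form of trend-learning."""

    # (current state, environmental input) -> next state
    TRANSITIONS = {
        ("closed", "hot"):  "shaded",
        ("closed", "cold"): "closed",
        ("shaded", "hot"):  "shaded",
        ("shaded", "cold"): "closed",
    }

    def __init__(self):
        self.state = "closed"
        self.trend = defaultdict(int)  # learned frequency of each transition

    def observe(self, reading):
        key = (self.state, reading)
        self.state = self.TRANSITIONS[key]
        self.trend[key] += 1           # the building 'learns' its history

    def most_common_situation(self):
        return max(self.trend, key=self.trend.get)
```

A real version would use the learned trends to act pre-emptively - shading before the afternoon heat arrives, say - rather than merely reacting.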

There are many directions this idea could go, so to keep the project feasible, I will need to restrict myself to researching and developing one or a few specific examples.

You can find other speeches and presentations from TEDtalks here, or through the TEDtalks YouTube channel.

Note about Blog Usage

This won't be my final year blog. I will create my final blog once I have finalised - with certainty - the title of my final year project. Once I've done that, I'll create a blog with the same title - or similar if the URL's already taken - and re-post everything relevant from this blog into that.

Part of the reason I've used this blog for now is that it helps to express a continuum of ideas, linking on from the last subject I recently used this blog for - Augmented Reality, during the late summer term this year. I have had a long-standing affinity for technology, and its multifarious utilities as a tool in design and in making human life more productive, comfortable, and / or stimulating.

Part of what makes technology so interesting is just that: it's stimulating. It's that stimulation which engages the mind - or, rather, the senses - and from there, the mind and the technology (if it's good technology) are putty in each other's hands. And more recently, it can even be argued that each learns from the other, through the rise of "machine learning algorithms" as a branch of artificial intelligence.

I think it's a pretty exciting thing to look into. Objects that are self-modifying and responsive to their environment seem to be attracting interest in areas other than computing; other forms of technology are catching up with the idea. A simple example is a responsive facade like that of the CH2 Building in Melbourne, which opens and adjusts the angles of its louvres in response to the climate inside and outside the building. The integration of responsive technology into our daily lives is becoming more comprehensive and, as just mentioned, is finding its way into architecture.


The CH2 building again, at a different time of day. (image from http://inhabitat.com/ch2-australias-greenest-building/)

This is just one aspect of architectural design that I think is worthwhile investigating during this research studio - I'll be elaborating on other ideas I think are promising in later posts.

Also, I should mention that I'll be splitting the main points of my ideas into different posts to try and establish a thought continuum with useful landmarks, rather than a huge wall of text. On to the next post!