Interaction: Collective Life

For the third assignment in my Designing with Data course, we were to create an interaction. The twist this time was that we worked in pairs: one Creative Director and one Technical Lead. I became the Technical Lead on this project, with Molly McLennan as my Creative Director.

We used Molly's original statistic, and this assignment focused on creating a design that explored how we could change it.

By 2050, the equivalent of almost three planets could be required to provide the natural resources needed to sustain current lifestyles.

Taken from the Sustainable Development Goals website.

Our first task was to divide up the work and create a timeline. We went with a plain text list, as we could both read and understand it, and time spent making a graphical timeline would be time not spent on the actual project.


Our timeline

We originally split up the tasks based on our skillsets and roles in the project, making sure that at each point we would be able to work separately without waiting on anything from the other person. This turned out to be an unnecessary precaution: we were both able to re-enter the design space and spent a lot of time there, so we ended up working much more collaboratively on the various parts of the project.

Initial research was done by both of us, and we found that a lot of the current advice on how to change the statistic amounted to things such as "Be vegan" or "Buy sustainably". These are all well and good, but one person being better doesn't fix a systemic problem, and many people can't act on that advice due to monetary barriers.
So we decided to broaden our idea of changing the statistic, focusing not on the individual but on the collective, as several of our findings suggested:

  • Stopping space-colonialism from tech giants like Elon Musk (Haskins, 2018)
  • Addressing the unequal distribution of resources (Rashbrooke, 2020)
  • Challenging the cult of consumerism (Bradshaw, 2019)
  • Demanding action from privileged classes (Wretched of the Earth, 2019)
  • Re-imagining a sustainable future from a less humanistic perspective, challenging the existing and predicted frameworks for "restoring climate balance" (using the metaphor of the relationship between a slug and algae) (Saracino, 2020)

How to Change the Statistic
Broad Approach: Challenge users’ perceptions of their place within the system of consumption that leads to the interconnected consequences of overconsumption and unequal distribution of resources.

What causes this statistic: Individualistic responses to climate change and resource overconsumption are consistently ineffective. It is hard to feel motivated to make substantial changes to your own behaviour when you perceive yourself as a small fish in a big pond.

Narrow Approach: Encourage our classmates, who are future leaders, designers and influential people, to see themselves as components of a larger organism. This will help them feel confident in speaking out against injustices, push for climate action, and gain empathy for those whose experiences differ from their own, both in Aotearoa and in a global context.

Molly, as the Creative Director on this project, looked into visual and audio precedents, while I focused on methods of interaction and UX.


UX precedents. From top left, clockwise: Frogger (simple controls), Everything (you play as something non-human, creating empathy for it), two works by Refik Anadol (generative art that reacts to the user), Flappy Bird (simple controls), Windows drag-select (simple controls that almost everyone knows)

Our brainstorming and narrowing of scope was done together in class.
From our initial brainstorm we decided that we wanted the experience to augment the users' environment, for example through projection and easy interaction. We also identified some artistic styles that would be popular amongst our classmates: a minimalist style seemed the most popular, along with pixel art.

Narrowing the scope and creating a matrix at the beginning of our process helped us to keep our design intent at the forefront of our minds.
We decided from the beginning not to design for a hypothetical situation (e.g. placing the visualisation in a gallery setting), but rather for the context and the users we actually had access to: the design school and cohort.
We decided that rather than directly informing our peers how they could change their behaviour, we would go for a gentle 'nudging' approach. We wanted to foster a sense of empowerment over apathy, collective action over individual action, and holistic approaches to solving design issues that consider the historical and geographical contexts and power structures we find ourselves living in.
The difficulty with this approach was finding a balance between collective action and passivity.

During further brainstorming we uncovered a few shared interests: we were both really interested in bird-oid objects (boids) and artificial life programs, as well as generative art.

Molly did some storyboarding and we decided to start developing the code for the project.

The Initial Code

To start, I did research into Boids, an artificial life program developed by Craig Reynolds in 1986. Like many artificial life programs, it takes a "bottom-up" approach to coding: you write basic rules for the actors inside the program to follow, and emergent behaviour creates the overall effect, in this case bird-like flocking.

Boids follow three simple rules:

  1. Separation: Each boid will aim to move away from other boids that are too close.
  2. Alignment: Each boid will aim to move in the same direction and at the same speed as other nearby boids.
  3. Cohesion: Each boid will aim to move towards the centre of mass of other nearby boids.

For this project I needed to add a fourth rule so that the boids would be able to create the image in response to people:

  4. Collaboration: Each boid will aim to move towards a home point that they are given when they are created.

Ironically, the way I wrote the Collaboration rule makes it the only rule in which a boid does not actually check where the other boids are. Each boid simply "trusts" the others to form in with it.

At this point I started coding. Daniel Shiffman had already created a basic implementation of a boids program in Processing 3 (the programming IDE/language combination we were using), so I based my code on that: no need to reinvent the wheel. I coded in the new rule, then created an algorithm, using some pixel math I had learned while working on my previous project, to translate any given image into boids.
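
The new rule itself is tiny. Here is a condensed sketch of how it slots into a Shiffman-style Boid class; the field names (home, maxSpeed, maxForce) and the weight variables are illustrative rather than my exact code:

    // Inside a Shiffman-style Boid class. `home` is the PVector given
    // to the boid when it is created.
    PVector collaboration() {
      PVector desired = PVector.sub(home, position);   // vector pointing home
      desired.setMag(maxSpeed);                        // head home at full speed
      PVector steer = PVector.sub(desired, velocity);  // standard Reynolds steering
      steer.limit(maxForce);
      return steer;
    }

    void flock(ArrayList<Boid> boids) {
      PVector sep = separation(boids);  // rules 1-3 need the neighbour list...
      PVector ali = alignment(boids);
      PVector coh = cohesion(boids);
      PVector col = collaboration();    // ...but Collaboration does not
      sep.mult(sepWeight);              // rule weights are adjusted at runtime
      ali.mult(aliWeight);
      coh.mult(cohWeight);
      col.mult(colWeight);
      acceleration.add(sep);
      acceleration.add(ali);
      acceleration.add(coh);
      acceleration.add(col);
    }

Because collaboration() only looks at the boid's own home point, it needs no neighbour information, which is exactly the "trust" described above.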

This initially caused memory and speed issues, so I created another constraint: a new boid would not be generated too soon after the last boid. This simple solution led to another problem.


An early attempt.

As the algorithm scanned the pixels from top to bottom, skipping a set number of pixels before creating the next boid meant it would create one boid at the edge of the planet, then create the rest in lines offset from that point.

I fixed this by altering the algorithm to take into account the distance from all other created boids.


A second attempt.

Then the image was on its side, and it took a bit of re-reading the code to realise that I had mixed up my X and Y axes, which is what caused this.


A working attempt!
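
In rough Processing terms, the working placement algorithm looked something like the sketch below; the brightness threshold and spacing values are illustrative, and the helper name createHomes is mine for this write-up:

    // Scan the source image and assign a boid home point to any
    // sufficiently dark pixel that sits at least minDist away from
    // every home point created so far.
    ArrayList<PVector> homes = new ArrayList<PVector>();
    float minDist = 6;  // minimum spacing between home points (assumed value)

    void createHomes(PImage img) {
      img.loadPixels();
      for (int y = 0; y < img.height; y++) {
        for (int x = 0; x < img.width; x++) {
          // the pixel at (x, y) lives at index y * width + x --
          // mixing up the axes here is what put the image on its side
          color c = img.pixels[y * img.width + x];
          if (brightness(c) > 128) continue;  // skip background pixels
          boolean tooClose = false;
          for (PVector h : homes) {
            if (dist(x, y, h.x, h.y) < minDist) {
              tooClose = true;
              break;
            }
          }
          if (!tooClose) homes.add(new PVector(x, y));
        }
      }
    }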

With that done, I built a method for changing the weighting of each rule based on the number of people interacting.

All of my code was made to be relatively modular, so I would be able to swap out certain parts (such as the method of counting people) without needing to fully rebuild it.
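
For illustration, the weighting method might have looked something like this; the mapping and the numbers here are assumptions for this sketch, not the real tuning:

    // Hypothetical weighting: numPeople hands are detected, out of
    // maxPeople possible. With nobody present the boids wander as an
    // ordinary flock; as people join, Collaboration takes over and
    // the image assembles.
    float sepWeight = 1.5, aliWeight = 1.0, cohWeight = 1.5, colWeight = 0.0;

    void updateWeights(int numPeople, int maxPeople) {
      float t = numPeople / (float) maxPeople;  // 0 = nobody, 1 = everyone
      colWeight = lerp(0, 2, t);      // pull towards the home points grows
      cohWeight = lerp(1.5, 0.5, t);  // free-flocking behaviour fades out
      aliWeight = lerp(1.0, 0.5, t);
      // sepWeight stays constant so boids never stack on top of each other
    }

The method of counting people feeds in as a single number, which is what made it possible to swap the input hardware later without touching the simulation.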

Interaction Design

The first method we thought to use was the Kinect. Its libraries would have been able to detect people via skeleton tracking or depth mapping. I say "would have", as I couldn't get the libraries to work.

The next method was OpenCV, an open-source computer vision library with a Processing wrapper. I worked on it for a bit and managed to get it running, although the computer vision itself had difficulties if the person was obstructed at all, or if the background was particularly "noisy".
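
For reference, the approach looked roughly like the sketch below, following the background-subtraction example that ships with the gab.opencv wrapper; treat the parameters as assumptions rather than a record of my exact code:

    import gab.opencv.*;
    import processing.video.*;

    Capture video;
    OpenCV opencv;

    void setup() {
      size(640, 480);
      video = new Capture(this, 640, 480);
      opencv = new OpenCV(this, 640, 480);
      // learn the background so only moving people remain in the image
      opencv.startBackgroundSubtraction(5, 3, 0.5);
      video.start();
    }

    void draw() {
      if (video.available()) video.read();
      image(video, 0, 0);
      opencv.loadImage(video);
      opencv.updateBackground();
      opencv.dilate();  // clean up speckle noise before finding blobs
      opencv.erode();
      int people = opencv.findContours().size();  // each blob is (hopefully) a person
    }

A noisy background produces spurious blobs, and an obstructed person merges into another or vanishes, which is exactly the unreliability we ran into.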

At this point we also got some feedback from Thomas (one of the technicians in the design studio): he felt there wasn't enough indication that people were actually interacting with the design.

We decided to kill two boids with one stone, fixing both the input reliability and the indication of interaction by swapping to a more active (yet still easy to figure out without instructions) interaction.


The prototype pads

Our first idea was pressure pads placed on the floor that would detect when people stood on them. We tested this with square fabric "pads", with me controlling the program through the debug commands we had added. This gave us a good indication that we were moving in the right direction, as people experimented with getting on and off the pads to see what "instant" change would happen.

While looking into how we could actually create these pads, we found some capacitance sensors in the Fab-Lab's supplies. Testing them on a whim, I found that they could detect hands through materials. We decided that the metaphor of people putting their hands into a circle in order to take part in collective action was a good one, so we continued with that.


The prototype table

We laser-cut a quick prototype circle with handprints and showed it to some people, who responded (as expected and hoped) by putting their hands onto the handprints without instruction.


People trying out the prototype

Initially we used the prefabricated capacitance sensors, but unfortunately they auto-calibrated for a change in capacitance after about four seconds, so you couldn't hold your hand on the board for longer than that.
To fix that I created some custom capacitance sensors out of copper tape and a resistor, and hooked them up to an Arduino.


The final electronics
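
On the software side, the Arduino's readings reach the sketch over serial. A minimal version, assuming the Arduino prints one comma-separated line of readings per loop (the pad count, threshold, and the updateWeights() hook are carried over from the hypothetical sketches above):

    import processing.serial.*;

    Serial arduino;
    int numPads = 6;      // assumed pad count
    int threshold = 200;  // assumed "hand present" reading

    void setup() {
      // assumes the Arduino is the first serial device listed
      arduino = new Serial(this, Serial.list()[0], 9600);
      arduino.bufferUntil('\n');
    }

    void serialEvent(Serial port) {
      String line = port.readStringUntil('\n');
      if (line == null) return;
      int[] readings = int(split(trim(line), ','));
      int hands = 0;
      for (int r : readings) {
        if (r > threshold) hands++;  // a hand is resting on this pad
      }
      updateWeights(hands, numPads);  // feed straight into the rule weights
    }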

We sprayed the top of the board to make it resemble a planet and sanded it back to get a smooth surface. We then created a base out of a stool frame, some tough fabric, and another lasercut circle.

Testing the paint and the final interface

Sound

Sound took a while to figure out. The standard Sound library in Processing didn't have the tools we wanted, and the other libraries we tried had the same issue.

We determined we would need an external program to control the sound to the level we wished, so we installed Ableton Live and I set to work on communications. At first I tried an OSC (Open Sound Control) library, although that also required programming on the Ableton Live side, and I ran into many issues due to my unfamiliarity with Ableton's node-based programming. Then Peter (another of the technicians in the design space) suggested a driver that takes a MIDI input and acts as a "loop", so that another program on the same computer can use it as MIDI input.

Using this and a MIDI library for Processing, I managed to get the communication and control of Ableton Live working.
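
The Processing side of that link can be sketched as below. This assumes TheMidiBus library and a virtual loopback port; the port name, channel, and controller numbers are illustrative:

    import themidibus.*;

    MidiBus midi;
    int hands = 0;    // updated elsewhere by the sensor code
    int numPads = 6;  // assumed pad count
    int lastSent = -1;

    void setup() {
      MidiBus.list();  // print the available MIDI devices to the console
      // open the virtual loopback port as the output device
      midi = new MidiBus(this, -1, "loopMIDI Port");  // assumed port name
    }

    void draw() {
      // map the number of hands to a controller value (0-127) that a
      // parameter in Ableton Live can be MIDI-mapped to
      int value = round(map(hands, 0, numPads, 0, 127));
      if (value != lastSent) {
        midi.sendControllerChange(0, 1, value);  // channel 0, CC 1
        lastSent = value;
      }
    }

Sending a controller change only when the value moves keeps the MIDI stream quiet while nothing is happening.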

Quad-Tree Implementation

We had an issue with earlier forms of the design: once we got to about 1500 boids, the simulation's framerate would drop noticeably. I had previously identified the cause. Every boid looked at every other boid in order to find their distances, which gave that iteration of the program O(n^2) time complexity.

I started by looking at threading, although that used too much memory. Then I looked at how other people had solved the problem and discovered the quadtree data structure.

A quadtree is a tree data structure in which each node has four children. Here, each node covers a rectangular region of the screen and subdivides into four quadrants once it holds too many boids, so a boid searching for neighbours only descends into the quadrants that overlap its sight range. I couldn't find a proper implementation in Processing, so I created my own.
This changed the time complexity to O(n log n): much better.
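
A condensed sketch of the structure (the capacity and naming are illustrative, and my full version stores boids rather than bare points):

    class QuadTree {
      float x, y, w, h;         // centre and half-dimensions of this region
      int capacity = 4;         // points held here before subdividing
      ArrayList<PVector> points = new ArrayList<PVector>();
      QuadTree nw, ne, sw, se;  // the four children (null until subdivided)

      QuadTree(float x, float y, float w, float h) {
        this.x = x; this.y = y; this.w = w; this.h = h;
      }

      boolean contains(PVector p) {
        return p.x >= x - w && p.x < x + w && p.y >= y - h && p.y < y + h;
      }

      void subdivide() {
        nw = new QuadTree(x - w/2, y - h/2, w/2, h/2);
        ne = new QuadTree(x + w/2, y - h/2, w/2, h/2);
        sw = new QuadTree(x - w/2, y + h/2, w/2, h/2);
        se = new QuadTree(x + w/2, y + h/2, w/2, h/2);
      }

      boolean insert(PVector p) {
        if (!contains(p)) return false;
        if (points.size() < capacity && nw == null) {
          points.add(p);
          return true;
        }
        if (nw == null) subdivide();
        return nw.insert(p) || ne.insert(p) || sw.insert(p) || se.insert(p);
      }

      // gather every point within `range` of (px, py), skipping whole
      // quadrants that cannot intersect the search circle
      void query(float px, float py, float range, ArrayList<PVector> found) {
        if (px + range < x - w || px - range > x + w ||
            py + range < y - h || py - range > y + h) return;
        for (PVector p : points) {
          if (dist(px, py, p.x, p.y) <= range) found.add(p);
        }
        if (nw == null) return;
        nw.query(px, py, range, found);
        ne.query(px, py, range, found);
        sw.query(px, py, range, found);
        se.query(px, py, range, found);
      }
    }

Each frame the tree is rebuilt from the boids' positions, and each boid queries only within its own sight range instead of looping over the entire flock.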

I also added a neat visualisation to assist in conceptualising the data structure:


Visualising the "physical" placement of the quad-tree "nodes"




Visualising the sight ranges of every boid

My final design page is here.

The Creative Director's blog is here.

REFERENCES:

  1. Anadol, R. (2020). Refik Anadol. https://refikanadol.com/
  2. Haskins, C. (2018). The racist language of space exploration. The Outline. https://theoutline.com/post/5809/the-racist-language-of-space-exploration?zd=1&zi=ortb4z2s
  3. Rashbrooke, M. (2020). New Zealand’s astounding wealth gap challenges our ‘fair go’ identity. The Guardian. https://www.theguardian.com/world/2020/aug/31/new-zealands-astounding-wealth-gap-challenges-our-fair-go-identity 
  4. Saracino, V. (2020). A Viable Planetary Future Beyond Extraction, Predation & Production. Strelka Mag. https://strelkamag.com/en/article/a-viable-planetary-future-beyond-extraction-predation-and-production
  5. Wretched of the Earth. (2019). An open letter to Extinction Rebellion. Red Pepper. https://www.redpepper.org.uk/an-open-letter-to-extinction-rebellion/