Experiments

Human < > Smoke

Following on from my meeting with Jon Holmes, I began experimenting further with physics simulation in Blender and the different effects that can be achieved, particularly in relation to materials. Initially I practiced with the smoke simulation, setting the face to emit smoke that outside forces can interact with. The final effect is interesting, but it doesn't really communicate the aspects of the cyborg and the border between the natural and the machine that I have been exploring.

facesmoke

Screen Shot 2016-05-07 at 13.38.23

Screen Shot 2016-05-07 at 13.53.31

Screen Shot 2016-05-07 at 17.16.04

Experiments

The Teeth – 3D Printing Set Up

After meeting with Kevin Hallsworth I have adapted my teeth model so that it can be printed effectively. I decided to remove the tonsils, the roof of the mouth and the uvula, as these were the problem areas; at the final scale the teeth will be printed at, this level of detail is unnecessary and the desired effect still comes across. I also decided to remove the connective tissue joining the top and bottom sections; this again caused issues, and it will be easier to print the two parts separately and join them together afterwards.

I then tested these .stl files in Meshmixer, which can analyse the model and assess its suitability for printing. It can also fill any holes in the model, again making sure it will print effectively.
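As a rough illustration of what that hole-check is doing: a mesh is "watertight" (printable) when every triangle edge is shared by exactly two triangles. A minimal sketch in Python, with a hypothetical tetrahedron standing in for the teeth model:

```python
from collections import Counter

def open_edges(triangles):
    """Return edges that are not shared by exactly two triangles.

    `triangles` is a list of (a, b, c) vertex-index tuples; an edge is
    stored with its endpoints sorted so direction doesn't matter.
    A watertight (printable) mesh returns an empty list.
    """
    counts = Counter()
    for a, b, c in triangles:
        for edge in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(edge))] += 1
    return [e for e, n in counts.items() if n != 2]

# A closed tetrahedron: four faces, every edge shared by two of them.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(open_edges(tetra))          # []

# Remove one face and the three edges around the hole become "open".
print(sorted(open_edges(tetra[:3])))
```

This is only the detection half of the problem; actually filling the hole, as Meshmixer does, means triangulating the loop of open edges.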

Screen Shot 2016-05-05 at 21.21.37

Screen Shot 2016-05-05 at 20.47.51

Screen Shot 2016-05-05 at 20.48.48

Screen Shot 2016-05-05 at 20.46.46

Experiments, Research

Project Testing

04/05/2016:

Meeting with James Field:

  • Following on from my initial experiments with a digital animation for my project, James suggested it would be a good idea to consider real-time music visualisation, whereby the animation would react to the audio input from the mic/choir.
    • This could be done through a combination of Blender and Processing/Python
      • However, I would need to consider whether the animation could access the output level/full frequency range
    • Or it could be done through WebGL in a web browser
      • Creating an audio visualiser similar to Lights by Ellie Goulding, produced by Hello Enjoy: http://helloenjoy.com/project/lights/
    • I could harness the low, mid and high notes to affect the animation in different ways, creating a connection to the choir.
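As a sketch of how that low/mid/high split could work, assuming the animation can access raw audio frames (the band boundaries here are my own choice, not a standard):

```python
import cmath
import math

def band_levels(samples, rate, bands=((0, 250), (250, 2000), (2000, 8000))):
    """Split a mono audio frame into per-band energy via a naive DFT.

    `samples` is a list of floats, `rate` the sample rate in Hz.
    Returns one energy value per (lo_hz, hi_hz) band, which could then
    drive different parameters of the animation.
    """
    n = len(samples)
    levels = []
    for lo, hi in bands:
        energy = 0.0
        # Only the DFT bins whose frequency falls inside this band.
        for k in range(1, n // 2):
            freq = k * rate / n
            if lo <= freq < hi:
                coeff = sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                            for i, s in enumerate(samples))
                energy += abs(coeff) / n
        levels.append(energy)
    return levels

# A pure 1 kHz tone should register almost entirely in the mid band.
rate = 8000
tone = [math.sin(2 * math.pi * 1000 * i / rate) for i in range(256)]
low, mid, high = band_levels(tone, rate)
print(mid > low and mid > high)   # True
```

In practice a real-time version would use a library FFT rather than this slow hand-rolled DFT, but the mapping idea is the same: low energy drives one property, mid another, high a third.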

Meeting with Jon Holmes:

I then met with Jon Holmes to go through Blender animation, and particularly physics simulation, in order to animate a texture.

Jon went through the different soft body/rigid body simulations, such as objects reacting when hitting a surface, cloth draping over an object, the object turning into cloth itself, and the object being made of smoke that can be sliced through by another. It was extremely helpful and interesting to see the different effects I could create, and it will be good to consider these in terms of the different textures/materials that would effectively communicate the notion of human > object, particularly paying attention to the border between the natural and the machine. For instance, the materials could be metallic, porcelain, silicone or rubber, giving an interesting juxtaposition between the natural, the man-made and those materials we deem most akin to the human.

Screen Shot 2016-05-04 at 18.22.04

mouthfabric

Meeting with Craig Bratley:

I then met with Craig Bratley to test the audio aspect of my project: having the input from one mic output to four speakers, each pitch-shifted differently. Craig helped me out with this and advised it would need to be set up as follows:

Mic > Mixer > 2 x audio interface (this splits the input to multiple outputs) > computer with Audition (to pitch shift the input) > 4 x speakers (2 x on each computer)

I would need to use Audition in order to perform the real-time pitch shifting, so that the mic input is altered and directly output. This does cause a delay, but I think it could be accommodated; the issue is that the audio cannot be looped and replayed. Craig suggested adding a loop pedal between either the mic and the mixer or the mic and the audio interface; that way the audio can loop constantly and be added to with each new input from the mic.
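The core idea behind the pitch shifting, stripped of everything Audition does to keep the duration constant, is just resampling: reading the buffer faster raises the pitch, reading it slower lowers it. A minimal sketch of that idea only, not the actual processing chain:

```python
def resample_pitch(samples, factor):
    """Naive pitch shift: read the buffer at `factor` x speed.

    factor > 1 raises the pitch (and shortens the clip); factor < 1
    lowers it. Real tools like Audition use phase vocoders so the
    duration stays the same; this is only the core resampling idea.
    """
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between neighbouring samples.
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += factor
    return out

# On a simple ramp, doubling the read speed keeps every other sample.
ramp = [float(i) for i in range(8)]
print(resample_pitch(ramp, 2.0))   # [0.0, 2.0, 4.0, 6.0]
```

Running four copies of this at different factors is, in effect, what the four differently pitch-shifted speaker feeds would be doing.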

IMG_1989

IMG_1991

IMG_1990

IMG_1992

Meeting with Kevin Hallsworth:

Finally I met with Kevin Hallsworth to go through everything I would need to set up in order to 3D print four of my mouths and four of my teeth. Kevin let me know this would cost approx. £10 per head using the printer whose support material can be dissolved away, leaving a clean print that is purely the object. The teeth do need some extra work to make them printable: the roof of the mouth and the tonsils need to be attached to the upper/lower teeth for the printer to handle them. Kevin also let me know they are currently testing metallic filament (gold and aluminium) and that I could print a test mouth using it; he showed me some models while I was there, and they have a shine to them that would be worth considering. If I wanted to print all the mouths using these, I would need to buy a roll of filament, at approx. £20.

IMG_1994

IMG_1995

Experiments

The Choir File Set-up

I began setting up my files ready for 3D printing, exporting the individual mouths and teeth as .stl files. The teeth work particularly well with the mouths created using Seene, adding an interesting dimension rather than leaving a void.

The Finder previews of the .stl files also show an interesting dimension of the objects, exposing their jagged, somewhat crystalline surfaces.

Screen Shot 2016-05-03 at 18.44.53

Screen Shot 2016-05-03 at 18.47.53

Screen Shot 2016-05-03 at 19.02.37

Screen Shot 2016-05-03 at 19.03.06

Screen Shot 2016-05-03 at 19.04.19

Screen Shot 2016-05-03 at 19.05.21

Experiments

Experiment in Texture Animation

As a possible output for my current MA project I've been thinking about a digital animation based on the objects I have been creating. Initially I contemplated an animation in which the mouth moves in a more realistic way; however, I feel this would ground the work too much in the human when I am attempting to find the translation to object. Exposing the material would be a better approach: animating a material rather than the object would highlight the blurring of the border between the natural and the machine. So I am proposing to treat this as a series of experiments in texture (metal, marble, etc.) and then animate the breaking/manipulation of these.

I have started with an initial animation test in Blender (as this is something I haven't attempted previously). It takes one of the mouths I have been creating and distorts and stretches it as if it were elastic. There's much further to go with this, but the initial test confirms the effect would complement my project.
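Outside Blender, the stretch effect boils down to scaling each vertex along one axis by a factor that is animated over the frames. A toy sketch of that idea; the vertices and the easing curve here are hypothetical, not taken from my actual model:

```python
def stretch(vertices, frame, total_frames, max_factor=2.0):
    """Scale each (x, y, z) vertex along y, easing from 1.0 up to
    `max_factor` across the animation -- a stand-in for keyframing a
    scale transform in Blender."""
    t = frame / total_frames                   # 0.0 .. 1.0 through the clip
    factor = 1.0 + (max_factor - 1.0) * t * t  # ease-in: slow start
    return [(x, y * factor, z) for x, y, z in vertices]

verts = [(0.0, 1.0, 0.0), (1.0, -1.0, 0.0)]
print(stretch(verts, 0, 24))    # unchanged at the first frame
print(stretch(verts, 24, 24))   # fully stretched: y doubled
```

In Blender itself this would just be two scale keyframes with an easing interpolation, but writing it out makes clear how little separates "object" from "elastic material": a per-vertex function of time.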

mouthanim

mouthanim3

Screen Shot 2016-05-01 at 14.35.43

Screen Shot 2016-05-01 at 14.39.32

Screen Shot 2016-05-01 at 14.42.25

Screen Shot 2016-05-01 at 14.44.52

Screen Shot 2016-05-01 at 14.47.47

Screen Shot 2016-05-01 at 14.49.43

Screen Shot 2016-05-01 at 16.08.17


Experiments

The Choir (cont.)

As I was going through the process of translating the human into a 3D model using Photoscan, I came across another 3D app called Seene. After testing it, I found it actually gave some interesting results. The models it produces come out very standardised and capture quite a bit of detail. There is a sheen-like quality to the texture, which gives the effect of a material or a form of digital fabric draped over the face. It could be interesting to combine Seene with Photoscan as a representation of the pursuit of the perfect translation, further blurring the boundaries between human and machine.

IMG_1769

IMG_1771

IMG_1772

IMG_1791

Screen Shot 2016-04-06 at 22.37.16

Screen Shot 2016-04-06 at 22.37.36

Experiments

The Choir (cont.)

As a continuation of the making of a cyborg choir, I began the rendering process of creating the 3D mouths using Photoscan. Initially it proved difficult to gauge how well the software would translate the human mouth into a digital copy: at times it saw nothing, and at others sections were missing or repeated. When it did work, the results highlighted how the translation process can produce very different results from the same source; I used my face and took the photos in the same place each time, yet the final 3D renders each have their own variations and mistakes.

In order to continue the notion of the cyborg, I would like either to print the models using metallic filament, communicating the meeting of man and machine bridged by digital tools, or to print them in white or clear and then airbrush them with metallic paint; this would bring elements of the human back in, and perhaps provide a link to the human voice that will eventually be fed through them. If I were to choose paint, then the finish could be decided using digital tools, creating either gradients or patterns; the gradient is a digital re-creation of colour and light/shadow in the natural world.

Screen Shot 2016-03-28 at 17.03.29

Screen Shot 2016-03-28 at 17.16.45

Screen Shot 2016-03-29 at 12.17.04

Screen Shot 2016-03-29 at 12.34.37

Screen Shot 2016-03-29 at 13.13.56

Screen Shot 2016-03-30 at 12.21.34

Screen Shot 2016-03-30 at 13.29.55

Screen Shot 2016-04-02 at 09.42.24

Screen Shot 2016-04-04 at 19.41.54

Screen Shot 2016-04-06 at 17.37.26
