3D Printing – Research

Clive McCarthy and I visited Kevin Hallsworth, the technician who runs the 3D printing software and equipment, on 2/12/15. This was a very useful meeting and we went through what is possible and how I would go about it. I can 3D print the 3D renders I have from 123D Catch and Photoscan as long as they are converted to a .stl file (a solid file), or I can use one of the 3D scanners that creates a real-time render (although this has issues with hair, so a hat may be required). We went through different materials and colours; in particular, Kevin showed us examples that used a clear material where you could see the printing process underneath. This works well with my ideas around process, in particular highlighting the moment that one object becomes another, or is translated into another: the human is translated into the digital and then into the object, and equally the material is translated from its original state into that of the 3D object.
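
As a rough illustration of that conversion step (not necessarily the workflow Kevin uses), a mesh exported from 123D Catch or Photoscan (for example as an .obj file) can be re-saved as .stl with a short Python script using the trimesh library; the file names here are placeholders and the sketch assumes a single mesh per file.

```python
# Minimal sketch: convert a photogrammetry mesh export (e.g. .obj) to .stl
# for 3D printing. File names are placeholders; the actual export format
# from 123D Catch / Photoscan may differ.
import trimesh

# Load the mesh produced by the photogrammetry software
mesh = trimesh.load("head_scan.obj")

# Report whether the surface is already closed (watertight); open meshes
# need further work before they can be printed (see the offset step below)
print("watertight:", mesh.is_watertight)

# Save as STL, the format the 3D printing software expects
mesh.export("head_scan.stl")
```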

There were some obstacles highlighted, particularly that the 3D render mesh would need to be offset slightly and applied to the original in order to create the solid surface the 3D printer can see. This is especially necessary if the 3D render is open, i.e. it is not a complete, closed object.
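
As a rough sketch of that offset idea (assuming the mesh is handled in Python with the trimesh library rather than in the print-preparation software Kevin uses), the surface can be duplicated, pushed out along its vertex normals, and combined with the original so it gains thickness. Stitching the open boundary edges, which the dedicated software handles, is left out here.

```python
# Minimal sketch of offsetting an open surface to give it thickness, assuming
# the trimesh library; real print-prep software also stitches the boundary
# edges to make the shell fully watertight, which is omitted here.
import trimesh

THICKNESS = 2.0  # shell thickness in model units (assumed millimetres)

mesh = trimesh.load("head_scan.obj")  # placeholder file name

# Push a copy of every vertex outwards along its normal
offset_vertices = mesh.vertices + mesh.vertex_normals * THICKNESS

# Reverse the winding of the offset faces so their normals point the other way
offset_faces = mesh.faces[:, ::-1]
offset_mesh = trimesh.Trimesh(vertices=offset_vertices, faces=offset_faces)

# Combine the original surface and its offset copy into one shell
shell = trimesh.util.concatenate([mesh, offset_mesh])
shell.export("head_scan_shell.stl")
```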

The plan is to go back next week and start the process. Along the lines of process and translation, it would be interesting to print a model from each of the three processes I have used so far in my attempts to reduce a human to an object:

  1. Render from 123D Catch
  2. Render from Agisoft Photoscan
  3. Render from 3D Scanner