Tuesday, 6 December 2022

Test Progress

 Test 1 - WORKED BUT NOT USEFUL - 360 camera test - Capturing the dice tower diorama with a 360 camera and bringing the resulting footage into After Effects to learn the VR composition tools. The test worked, but having reviewed the result it is not suitable for my project's production pipeline, because it produces 2D/2.5D content rather than true 3D. The captured footage could be used to create a space in VR, but that space would not be explorable or interactive in the way I want. Capturing the set this way creates a 2D 360 video; in After Effects I could add text and objects and move them along the axes as though they were in 3D space, however as they are not fully 3D, viewers would not be able to look around them. This method of production would be good if I was aiming for something similar to "Mad God", but as I am trying to create something that can be interacted with, I need the objects in it to be fully 3D. This test has shown that my production pipeline needs to focus on software and methods aimed at gameplay, such as Unreal Engine.


Test 2 - FAILED - 3D scan using the 3D scanning camera on my Wacom MobileStudio - the Artec software would not load on the Wacom. I tried to install Artec 17 and activate a trial, but unfortunately it would not recognise the scanner. I then tried to re-calibrate the Intel RealSense camera using its own software, which also failed; when I searched for the issue there were only a few results, and it seems an Intel update stops it from working. I spent well over an hour trying to fix this with no success, at which point I decided my time would be better spent continuing to test with the photogrammetry software I already have, so that I can get results. When I have more time at a later date I may try to get the scanner and software working, but at the moment I do not have the time in this unit to argue with it.


Test 3.1 - WORKED - Photogrammetry test capturing the dice tower diorama. This worked well: it picked up the detail of the tower, but it missed some parts of the smaller figures that were hard to reach with the camera.


  • Continue testing with other objects and items 

  • Make another diorama or object that can be used to test materials but that doesn't have as many surfaces as the dice tower. Make this a modular item that can be scanned and eventually built in 3D, and try to build a small scene similar to my finished idea (a fantasy-themed environment). This could be built at a small scale and scaled up in 3D; I could use it to create a 3D environment in VR that people could go into. The presentation could display the physical diorama and then allow people into it via VR.

  • The scene could allow participants to enter one of the buildings; if I test this in Unreal Engine's VR system I should be able to let people interact with the door and some of the objects in the scene (there is a rough sketch of getting a scan into Unreal below).
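
A rough sketch of how a scan could be brought into an Unreal project is below, using the editor's Python scripting (the unreal module). The FBX path and destination folder are placeholders, and the actual interaction (opening the door, grabbing objects) would still need to be set up in Blueprints or C++ on top of the imported mesh, so this only covers the import step.

    import unreal

    # Placeholder path to a scan exported as FBX (e.g. from Metashape or Maya)
    fbx_path = "D:/Scans/fantasy_building.fbx"

    task = unreal.AssetImportTask()
    task.filename = fbx_path
    task.destination_path = "/Game/Scans"  # content folder to import into
    task.automated = True                  # skip the interactive import dialog
    task.save = True                       # save the new assets immediately

    # Run the import through the editor's asset tools
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])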


Test 3.2 - FAILED - Photogrammetry test with a doll: capturing a Monster High doll using a photogrammetry scan. The resulting model would then be taken into Maya, rigged, and used for a test animation.

This test failed: the scan didn't pick up the details and produced a really bumpy result, which I believe was due to the doll's glittery skin. I did, however, learn a few things I need to change when I next try this: use a different doll, take photos closer to the subject, and use an ND filter on my lens. I will try this test again using all of the changes mentioned above.


I also need to email Mat and ask what methods he used to capture the Wind in the Willows puppets, as this may be a different method that I can use in my work.


Test 4 - MOVED TO NEXT UNIT - Practical materials test board or test box, exploring the shortcomings of photogrammetry with reflective and highly specular surfaces (those that show highlights from lights).


Practical test reflection

Through these practical tests I have learnt a lot. They have enabled me to upskill and learn new software, as well as develop how I approach completing my work.


Through these tests I have learnt how to use Agisoft Metashape, a photogrammetry package, and how to take photographs for it to get the best results. I still need to improve the photos I take and invest in an ND filter for my camera to reduce the issues I am having with reflective surfaces, but considering the scale of the objects I have been capturing, I am very happy with the results I have achieved. As part of these tests I have also started to investigate how to decimate the meshes of the objects I create and how to bring the scans into Maya. The next stages of this testing are learning how to edit, clean and re-mesh the scans, and then how to create UVs for my models.
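
For my own reference, the manual steps I have been doing in Metashape (align the photos, build a mesh, decimate it, create UVs, bake the textures and export for Maya) could also be scripted with Metashape's Python API. The sketch below assumes the Pro edition; the photo paths, output paths and face count are placeholders rather than settings I have actually tested, and exact parameter names can vary between Metashape versions.

    import Metashape

    # Placeholder paths - swap in the real photo folder and output locations
    photos = ["D:/Scans/dice_tower/IMG_0001.jpg",
              "D:/Scans/dice_tower/IMG_0002.jpg"]

    doc = Metashape.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(photos)

    # Align the photos, then build a mesh from depth maps
    chunk.matchPhotos()
    chunk.alignCameras()
    chunk.buildDepthMaps()
    chunk.buildModel(source_data=Metashape.DepthMapsData)

    # Decimate so the scan is manageable in Maya, then UV and texture it
    chunk.decimateModel(face_count=100000)
    chunk.buildUV()
    chunk.buildTexture(texture_size=4096)

    # Export for Maya and save the project
    chunk.exportModel(path="D:/Scans/dice_tower/dice_tower.obj")
    doc.save("D:/Scans/dice_tower/dice_tower.psx")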


I have also been able to further refine the list of software that I need to learn and will use in my pipeline, which has helped make the project more manageable. I have also learnt various strategies that help both my workflow and my stress levels. I have found that using backward planning, and refining down from this into small tasks, stops me from panicking over the amount of work; that creating clear and concise documents to present to an outside audience, summarising the project so far, helps keep the project clearer for me; and that if I stop worrying about perfectionism, my whole workflow is smoother with much less stress.


The practical experiments from this unit have been informed by the research collected in my dissertation, and they start to test some of the theories discussed in it. One example is the photogrammetry scans, where I have been creating texture maps that apply the real-world textures of objects to the 3D computer-generated models. This experiment starts to look at how embodied memories and visual haptics can be used to add additional immersive elements to VR. These tests will continue to be developed until I am able to put the models into a VR environment.


Another area of research that I have been looking further into is the discussion about the maker and their role in artwork. I created a practical test piece (the dice tower diorama), which I then scanned into the computer to create a digital model. I started to evaluate my thoughts about the digital version and assess my level of connection to the piece. Previously I have found that when I create purely digital work, whether drawings, animations or sculpts, I do not feel the same connection to it that I do when I create the same pieces practically, which is one of the reasons I work in stop motion. However, through these practical experiments I found that I had the same level of connection to the 3D scanned diorama as I do to the physical version. I believe that because the digital model still retains the remnants of my physical touch, the connection also transfers into the digital version. This is an interesting finding, as it links to the theories about embodied memories discussed in my dissertation: the piece is definitely sparking embodied memories for me as the maker and keeping that connection between physical and digital. The next stage of this idea will be to collect other people's reactions to the work and see if the piece also sparks their embodied memories.


When I reflect on the practical tests from the 703 unit, they have all helped me immensely in different ways, and being able to identify the points above that have helped me will mean that the next stages of the MA go much more smoothly.
