
ARKit and Archaeology – Hougoumont Farm, Waterloo

For the last three years I have had the absolute privilege of being one of the archaeological directors of the current excavations on the battlefield of Waterloo. As part of the incredible Waterloo Uncovered project (http://www.waterloouncovered.com), we have been taking wounded serving and veteran soldiers, students and professional archaeologists to the battlefield to conduct the first major systematic excavation of parts of the battlefield that shaped the history of Europe in 1815.

We only have two weeks in the field each year, which means there is not a lot of time to do anything but excavate, record and backfill (see the dig diaries and videos of all we got up to here). However, this year I managed to find a final afternoon to play with the new Apple ARKit and see what potential there is for archaeological sites.

The short answer is that there is a lot of potential! I have discussed Augmented Reality and archaeology to the nth degree on this blog and in other places (see here for a round-up) – but with the beta release of ARKit as an integrated part of iOS 11, Apple may have provided the key to making AR more accessible and easier to deploy. I tried out two experiments using some of the data we have accrued over the excavations. Sadly I didn't have time to finesse the apps – but hopefully they should give a hint of what could be done given more time and money (ahem, any prospective post-doc funders – my contact details are on the right).

Exploring the lost gardens of Hougoumont

The first video shows a very early experiment in visualising the lost gardens of Hougoumont. The farm and gardens at Hougoumont were famously defended by the Allied forces during the battle of Waterloo (18th June 1815). Hougoumont at the time was rather fancy, with a chateau building, large farm buildings and a formal walled garden laid out in the Flemish style. One of the participants this year, WO2 Rachel Willis, is currently in the process of leaving the army and studying horticulture with the Royal Horticultural Society. She was very excited to look at the garden and see if it was possible to recreate the layout – and perhaps even, at some point, start replanting the garden. To that end she launched herself into the written accounts and contemporary drawings of Hougoumont, and we visited a local garden that was set out in a similar fashion. Rachel is in the process of colouring and drawing a series of Charlie Dimmock-style renditions of the garden plans for us to work from – but more on that in the future.


Similar gardens at Gaasbeek Castle


Extract from Wm. Siborne’s survey of the gardens at Hougoumont

As a very first stab at what we might be able to do in the future, I quickly loaded one of Rachel's first sketches into Unity and put in a few bushes and a covered walkway. I then did some ARKit magic, mainly by following tutorials here, here, and here. Bear in mind that at the time of writing ARKit is in beta, which means you need to install the Xcode beta, sign up for and install the iOS 11 beta on your iPhone, and run the latest beta version of Unity. It is firmly at the bleeding edge and not for the faint-hearted! However, those tutorial links should get you through fine, and we should only have to wait a few months until it is publicly released. The results of the garden experiment are below:

As can be seen, ARKit makes it very simple to place objects directly into the landscape OUTSIDE – something that has previously only really been possible reliably using a marker-based AR plugin (such as Vuforia). Being able to reliably place AR objects outside (in bright sunshine) has been something of a holy grail for archaeologists, as unsurprisingly we often work outside. I decided to use a 'portal' approach to display the AR content, as I think for the time being it gives the impression of looking through into the past – and gives an understandable frame to the AR content. More practically, it also means it is harder to see the fudged edges where the AR content doesn't quite line up with the real world! It needs a lot of work to tidy up and polish, but it's not bad for a first attempt – and the potential for using this system for archaeological reconstructions goes without saying! Of course, as it is native in iOS and there is a Unity plugin, it will fit nicely with the smell and sound aspects of the embodied GIS – see the garden, hear the bees and smell the flowers!

Visualising Old Excavation Trenches

Another problem we archaeologists have is that it is very dangerous to leave big holes open all over the place, especially somewhere frequented by tourists and the public, like Hougoumont. However, ARKit might be able to help us out there too. This video shows this year's backfilled trenches at Hougoumont (very neatly done, but you can still just see the slightly darker patches of the re-laid wood chip).

Using the same idea of a portal, I have overlaid the 3D model of one of our previous trenches in its correct geographic location and at the correct scale, allowing you to virtually re-excavate the trench and see the foundations of the buildings underneath, along with a culverted drain that we found in 2016. It lines up very well with the rest of the buildings in the courtyard and will certainly help with understanding the further foundation remains we uncovered in 2017. Again, it needs texturing, cleaning and a bit of lighting, but this has massive potential as a tool for archaeologists in the field, as we can now overlay any type of geolocated information onto the real world. This might be geophysical data, finds scatter plots or, as I have shown, 3D models of the trenches themselves.
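Placing a model in its correct geographic location boils down to converting the difference between two lat/long coordinates into a local east/north offset in metres, which can then drive the placement of the AR content in the scene. Here is a minimal sketch of that conversion in Python – the coordinates below are hypothetical, purely for illustration:

```python
import math

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius in metres

def geo_offset_metres(origin_lat, origin_lon, lat, lon):
    """Approximate east/north offset in metres of (lat, lon) from an
    origin point, using a local equirectangular projection. Over the
    few hundred metres of a site like Hougoumont this is accurate to
    well under a metre."""
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    east = EARTH_RADIUS * d_lon * math.cos(math.radians(origin_lat))
    north = EARTH_RADIUS * d_lat
    return east, north

# Hypothetical coordinates in the Hougoumont area (illustrative only)
east, north = geo_offset_metres(50.6790, 4.3950, 50.6795, 4.3958)
```

Over short distances this flat-earth approximation is plenty for anchoring AR content; a production app would also need the device's heading to rotate the offset into the camera's frame.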

These are just very initial experiments, but I for one am looking forward to seeing where this all goes. Watch this space!

 

Pokémon Go Home – Why Pokémon is not what the heritage sector really needs

Gently I edged toward the beast. It had four long semi-transparent wings, the same length as its tube-like body. The body was iridescent in the light, changing colour from blue to green. Its face was incredibly ugly… a mixture of large bug-like eyes on the side of its head and a gaping mouth filled with prehistoric fangs. It fluttered its wings gently in the breeze as it cleaned itself by rubbing its long spindly white legs all over its body. I reached down quietly in readiness to capture it and brought my phone up to look through the camera feed. Just as I was about to swipe the screen it became alarmed; its wings became a blur of movement and it took off, flitting away out of sight. I attempted to chase it, but it was gone in the blink of an eye. Another two years would pass before my friend and I got another tip-off about the location of the quasi-mythical beast.

I could be talking about my hunt for the Yanma, the large dragonfly, famous for being able to see in all directions at once and having such extreme wing-speed that it can shatter glass. Except of course I am not talking about my hunt for a Pokémon – I am talking about the day I went out with a very good friend of mine hunting in the Fens for the white-legged damselfly. We didn’t manage to capture a picture of it that day, but we did briefly see it alight on a leaf, which was more than good enough.


White-legged Damselfly by Philipp Weigell [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons

Pokémon Go fever has been sweeping seemingly everyone in the last couple of weeks. It has been hailed as a saviour to stop idle kids sitting in front of their computer screens, the herald of mass-adoption of mainstream Augmented Reality, and the re-invigorator of an interest in heritage and museum sites.

As someone who has done quite a bit of research on AR and heritage, I would like to wade into the discussion here and question a few things. To go some way towards mitigating the risk of sounding like a grumpy old man who doesn't understand 'kids today' (too late!), I would like to point out that I have three boys of my own, play all sorts of games on both smartphone and computer, and once even bought a girlfriend a pair of Pikachu pyjamas. I understand the nostalgic call of Pokémon, the excitement of the treasure hunt and the lure of the screen over the real world.

Pokémon Go as an AR revolution

First, the pedantic bit. Pokémon Go is not really an AR game. The AR element (overlaying the Pokémon onto your video feed) is, as someone else has already said, the most basic AR possible. So much so that it can't really be called AR at all. There is no interaction with or reading of the real surroundings; the Pokémon don't hide behind the cooker or pop out from behind the museum gates. You could be standing in your toilet or at the edge of the Grand Canyon and the Pokémon would still be badly superimposed, via simple geo-location, onto the video feed. Even the Snapchat AR that superimposes silly things on people's faces is more AR – at least it is doing some kind of recognition of the real world (in that case, facial recognition).

Calling Pokémon Go an AR revolution is doing a disservice to the potential and power of AR for integrating with the real world. AR has so much more potential than this. Pokémon Go is a locative game, not true AR.

Pokémon Go gets kids outdoors

What’s not to like about this? Even a cynical old git like me surely can’t complain about kids (and adults) getting away from their screens and going outside. Except, of course, they are not getting away from their screens. In fact it is probably worse – by taking the screen outside and searching for Pokémon through it they are not even really taking part in the outside world. The outside world is being entirely mediated through the screen – a small rectangular box guides your every movement. The alternate reality provided by the smartphone is so beguiling that there are people falling in canals, crashing into police cars and even plummeting off the edge of cliffs whilst playing the game. Clearly even though they are outside, they are oblivious to the world around them.

Do we really live in a world where kids can’t be bothered to get off the sofa and go outside without taking a screen or a game with them? What kind of world is this? What is it going to become if the next generation take this as normal? Why is it that hunting for a Squirtle is seen as the utmost of cool – but following a tip-off about the location of a Spoonbill or standing on the end of train platform hunting trains is seen as the ultimate in nerdiness?

I'm not sure I can really see the logic. I guess that Pokémon Go is the epitome of easy and quick satisfaction. Sure, you may have to travel a little to get to the place where you can capture the computer-generated critter – but when you arrive you don't have to wait and watch and hope that you glimpse a sight of it. You don't have to be silent and scan the sky with your binoculars and be PATIENT. If someone says the Charmander is there, it is pretty much guaranteed that if you go to those lat/long coordinates you will find it. Bird-watching is not the same. You can go back to the same hide for days and days and perhaps not spot what you are looking for. It may even be there, but you might not have done quite enough research to differentiate the colour of the wing flash. It is not quick or easy, and because of that it is surely ultimately more satisfying.

Pokémon Go brings all the kids to the (Archaeological) Yard

This then brings me to the final point – Pokémon Go as a way to get people more engaged with heritage sites. We have seen this before: museums and heritage sites jumping on trendy locative-game bandwagons to get more people to come to their sites (Andy Dufton and I wrote about this with Foursquare a few years ago). I think it may be a little early to say whether or not this is really going to be a big thing. We will need to see stats on increases in ticket sales to show that the Pokéhunters are not just going to the museum car park. And if they are paying the ticket price and entering the site, how much are they actually engaging with the archaeology?

Charmanders in the BM

Terry Brock is also hopeful about this:

Terry Brock

As Andrew Reinhard's archaeogaming foray shows, there is the potential for providing extra contextual information at the 'cultural' Pokéstops. However, a quick look at his example of the Pokéstop at his local Washington memorial shows only the information that is already on the plaque of the monument itself – and you would have to look away from your screen to read that.

Route of Washington’s March monument (taken from the Archaeogaming blog by Andrew Reinhard) – https://archaeogaming.com/2016/07/09/pokemon-go-archaeogaming/

So let us stand back a little and think about what all this means. I’ve concentrated recently on creating ways for people to use Augmented Reality to engage with, explore and understand heritage sites (take a browse around my website to see some examples). The key for me is that by someone visiting the site physically they can engage both their body AND their mind simultaneously. The AR I use is exclusively made to facilitate this, to show hidden elements of the site, to waft unexpected smells to make you THINK about the space in different ways, to play sounds that have some kind of relevance to what happened in that location in the past.

A visit to an archaeological site by a Pokéhunter is the antithesis of this. When a Pokéhunter arrives at a site (drawn by the lure of a rich Pokéstop) they are in the classic state of Cartesian disconnect. Their body may be there, but their mind is far away, thinking of the next Pokéstop or the phone notification that just came through from their mate about a rare [insert rare Pokémon name here] up the road.

You only have to look at this tweet to see the effects of this:

https://twitter.com/ohmycrayon/status/751778120647180288

This girl is at STONEHENGE, for crying out loud. Instead of taking an interest in how the stones were put up, how they fit into the surrounding landscape, what actually happened in and around them, and, crucially, how the experience of actually being there makes her feel – she is chasing an Eevee. She herself admits her attention is “so divided right now”. If this is happening at one of Britain’s most iconic and engaging monuments – what does it mean for other heritage sites? This girl’s mind is clearly not in the same place as her body. She is engaged in two separate realities, linked only by coordinates on a Google Map. Using Pokémon Go to get bums on seats and through the ticket barriers might be good for sales, but at what cost? If it really takes a Squirtle to get our youth (and adults) to go to a heritage site, then we are doing something very wrong.

What about the Real World?

I'm sorry this post has been rather despairing. I am increasingly sad for the state of the world, where people go head over heels hunting virtual creatures while the real, incredible biodiversity around them is ignored, built over and marginalised. Instead of re-wilding the world with animals, insects, plants and birds, we are enchanted by the opposite: introducing the computer and virtual creatures into our diminishing natural and cultural spaces. How can it be that I am in the minority for being bewitched by the hunt for the white-legged damselfly, a beautiful, crazy, prehistoric-looking creature – while the vast majority of people are happy to jump in their cars, park at the local Baptist church and stare into their phones, flicking imaginary red balls at imaginary creatures?

I haven't even touched on the inevitable monetisation of all this: how long will it be until the big museums have to pay Niantic serious money to host an incredibly rare Pokéstop, while the smaller sites (that are actually crying out for visitors) are priced out of the Pokémarket?

If you really can’t get your kids (or yourself) out to a heritage site without gamifying it by chasing animals, why not go and find that pair of peregrine falcons roosting in the local church steeple? Or go newt-hunting in your local historic ponds? Perhaps try to spot a red kite above the prehistoric landscape of Dartmoor? You could even use this map of rare bird sightings around the country to plan a day out birding and visiting nearby heritage sites.

But please please please – leave your smartphone behind.

Embodied GIS HowTo: Part 1a – Creating RTIs Using Blender (an aside)

This is a bit of an aside in the HowTo series, but nevertheless it should be a useful little tutorial, and as I was given a lot of help during the process it is only right to give something back to the community! This HowTo shows you how to take the 3D model you created in Part 1 and create a Reflectance Transformation Imaging (RTI) image from it. If you don't know what that is, here is the definition from the biggest advocates of the technique for archaeology, Cultural Heritage Imaging (CHI):

RTI is a computational photographic method that captures a subject’s surface shape and color and enables the interactive re-lighting of the subject from any direction.

In GIS terms, this basically means you have a fully interactive hillshade to play with and can change the angle of the light on the fly. No more need to create hundreds of hillshades with the sun at different angles – this is an all-in-one approach and far more interactive. It is a really awesome technique for analysing rock art, artefacts and even documents, and can be used to reveal tiny details that might not be obvious just by examining the object normally. It has also been used by Tom Goskar and Paul Cripps to interactively re-light some LiDAR data that Wessex Archaeology have of Stonehenge (see their paper here). RTI images are created by surrounding the subject with a dome of lights that are turned on one by one, with a photograph taken each time. Every photograph needs a shiny ball (usually a snooker ball) in it, which enables the software to record the angle of each light; some complex maths then merges all of the images together (for a fuller and probably more accurate explanation see Mudge et al 2006).
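To make the hillshade analogy concrete, here is a minimal pure-Python sketch of the classic GIS hillshade calculation, where the sun's azimuth and altitude are just two parameters – RTI effectively lets you drag those two parameters around in real time. The tiny DEM below is made up purely for illustration:

```python
import math

def hillshade(dem, cellsize, azimuth_deg, altitude_deg):
    """Shade each interior cell of a DEM (a list of rows of elevations)
    for a sun at the given azimuth/altitude, returning 0-255 values as
    in a classic GIS hillshade (Horn-style slope/aspect)."""
    az = math.radians(360.0 - azimuth_deg + 90.0)  # compass -> maths angle
    alt = math.radians(altitude_deg)
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # central-difference slope components
            dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2.0 * cellsize)
            dzdy = (dem[i + 1][j] - dem[i - 1][j]) / (2.0 * cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            shade = (math.sin(alt) * math.cos(slope) +
                     math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            out[i][j] = max(0.0, shade) * 255.0
    return out

# a tiny made-up 4x4 DEM, re-"lit" from the east and then the west
dem = [[10, 10, 10, 10],
       [10, 12, 14, 10],
       [10, 14, 18, 10],
       [10, 10, 10, 10]]
morning = hillshade(dem, cellsize=30, azimuth_deg=90, altitude_deg=30)
evening = hillshade(dem, cellsize=30, azimuth_deg=270, altitude_deg=30)
```

Re-running with a different azimuth and altitude is the static equivalent of what an RTI viewer does interactively.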

This technique can also be used virtually (as Tom and Paul have done) by recreating the dome of lights in a 3D modelling package and shining them on a virtual object (often a laser scan) or a chunk of LiDAR data. I am going to show you exactly the same technique that Tom and Paul used, except where they used Vue I am going to use Blender to create the virtual dome. I have also supplied the .blend file and the Python script used – so you should be able to do it all yourself.
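Before opening Blender, it can help to see the dome geometry in plain numbers. This little Python sketch (not the supplied Blender script – just an illustration) places rings of lights at known positions on the upper hemisphere; a real icosphere spreads the lights more evenly, but the principle – a set of known light positions above the subject – is the same:

```python
import math

def light_dome(n_rings=4, lights_per_ring=16, radius=1.0):
    """Return (x, y, z) positions of lights on the upper hemisphere:
    n_rings rings of constant elevation, each with evenly spaced
    azimuths. An icosphere gives a more even spread, but the idea of
    known positions on a hemisphere is identical."""
    lights = []
    for r in range(1, n_rings + 1):
        # elevations spaced between the horizon and the zenith
        elevation = math.radians(90.0 * r / (n_rings + 1))
        for k in range(lights_per_ring):
            azimuth = 2.0 * math.pi * k / lights_per_ring
            x = radius * math.cos(elevation) * math.cos(azimuth)
            y = radius * math.cos(elevation) * math.sin(azimuth)
            z = radius * math.sin(elevation)
            lights.append((x, y, z))
    return lights

dome = light_dome()  # 4 rings x 16 lights = 64 positions
```

Scale the radius up (and squash the Z axis) and you have exactly the dome we are about to build over the DEM.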

Right, first things first: open up Blender and load the .blend file that you saved from Part 1. If you haven't got one then you'll need a 3D model of some description within Blender – the concepts will work the same on any 3D model, but for this tutorial I am presuming that you have a chunk of a Digital Elevation Model.

  1. Luckily for us Blender has an easy method of creating a dome of lights – during the 3D modelling process a light dome is often used to create a warmer, more realistic feel for ambient lighting (see Radiosity), so we can use this to our advantage. Press Shift+A to create a new mesh and choose 'Icosphere'. To get enough lights we'll need a denser sphere, so change the Subdivisions setting (on the left-hand panel) to 3.
  2. For the purposes of this tutorial I am presuming that you have a 10 km × 10 km chunk of DEM – therefore in order to light it properly we need to create a dome that will cover the whole thing. Change the dimensions of your dome to X: 15 km, Y: 15 km and Z: 10 km – you can make this as spherical as you want; these settings worked for me. You will also want to move it to the middle of your DEM – so change the location to X: 5 km, Y: 5 km and Z: 0.

    Building the icosphere

  3. Now we have a sphere (albeit squashed) we want to cut the bottom off it to give us our dome. To do this enter Edit Mode (by pressing TAB). Now change your view so that you are viewing from the front – either press 1 on your numeric keypad or use the menu View → Front. Press the 'A' key to clear your selection and then press the 'B' key to begin selecting by a border. Draw a box around the bottom of the icosphere and it should select those faces and vertices. Once selected, press the 'X' key and delete the vertices. Depending on the size of your sphere you may have to zoom forward a little to select and delete the faces on the far side of the icosphere. Keep doing this until you are left with a tidy dome sitting above your DEM.

    Deleting the bottom of the sphere

  4. Once you have your dome we are ready to start adding lights to it. First off, if you have any other Lamps in the scene, delete them so we don't get confused at a later stage. Once deleted, come out of Edit Mode (TAB) and use Shift+A to add a new Lamp. I use a Sun lamp [helps with my year-round tan… ahem… sorry] – you could experiment with other types of lamp too, but the Sun seems to work well. Move the Sun to the centre of your DEM (X: 5 km, Y: 5 km, Z: 0). Rotate the Sun so that its Y axis is at 180 degrees.
  5. In the little panel on the right, select the new Sun by clicking on it, then, holding down Shift, click the icosphere, so you now have both selected (you can tell because their little icons light up). Now hover your mouse in the centre of the viewport, press Ctrl+P and parent the Sun to the icosphere. The Sun should now become a child of the icosphere in your objects panel (if you expand the icosphere in the panel you will see the Sun as part of its hierarchy).

    Parenting the Sun

  6. The Sun is now a child of the dome, and we want to multiply it onto every vertex – Blender has a great function for this (DupliVerts). Click on the Object properties of the icosphere, scroll down to Duplication, click Verts and tick the Rotation checkbox. You should see a whole host of Suns appear. They should be in the right place on each vertex, but if not you can move the Sun to the centre of the DEM (by clicking on the icosphere in the hierarchy panel and then clicking on the Sun – see Step 4).

    Duplicating the Suns

  7. As we are using Suns, the direction they are pointing doesn't really matter – however, if you are using other types of lamp – spots, for instance – you will need to make sure they are pointing in the right direction. [NOTE: if you need to do this, here is how; if you are using Suns disregard this step – select the icosphere, enter Edit Mode (TAB) and then choose Normals: Flip Direction from the Mesh Tools panel on the left. That will ensure the lamps are pointing inside the dome. Go back into Object Mode (TAB).]
  8. Now we have a lovely dome of Suns, we need to detach them from the icosphere so we can manipulate them individually. This is pretty easy – select the icosphere and press Ctrl+Shift+A (Make Duplicates Real) and you should see the Suns all detach themselves into individual objects (you will see 90 or so Suns in the hierarchy panel on the right). At this stage you are free to delete or turn off the icosphere, as we won't be needing it anymore.

    Blinded by the light

  9. Next we need to set up our camera. Images for RTIs are normally taken by a camera set at the top of the dome, pointing directly downwards. Select your camera (there should be one by default in your scene – if not you can add one using Shift+A). Change the camera's location to be directly above the centre of your DEM at the apex of the dome (in my case X: 5 km, Y: 5 km, Z: 10 km). Blender cameras automatically point downwards, so there should be no need to add any rotation (if you have any rotation already set, change all the values to 0). Before we render out a test image, we'll need to adjust the camera's viewport and clipping range. Press 0 on the numeric keypad or use the menu View → Camera to see what the camera is seeing. You will likely just get a grey box – this is because the camera is clipping the distance it can see. Select the camera, go to the settings in the right panel and set the End clipping range to 10 km, and you should see your DEM appear.

    Adjusting the camera settings

  10. Now you will want to adjust the Sensor size to make sure your whole DEM is in the shot – for my 10 km DEM the sensor had to be set to 70.
  11. Try a test render (press F12 or go via the menu Render → Render Image). You should be presented with a lovely render of your DEM, currently lit from all angles at once.

    DEM test render

  12. Press F11 to hide the render view – at this stage you might want to increase the Energy setting on your Suns, to get a bit more light on the DEM. Our Suns are all still linked together – so you can change the energy setting by clicking on the top Sun in your hierarchy, clicking the Sun object properties (the little sun icon in the object properties panel) and changing the Energy as required (I recommend energy level 5). This should change the energy of all the Suns.
  13. Once you are happy with the energy levels we can render out a test sequence, using a small Python script that turns each Sun on individually and renders out an image. Change the bottom panel to a Text Editor panel (see image).

    Selecting the text editor

  14. Click the [New+] button in the Text Editor panel and copy and paste the following code into the window:
    import bpy
    sceneKey = bpy.data.scenes.keys()[0]
    filepath = "PUT YOUR ABSOLUTE FILEPATH HERE"
    # Loop over all objects and find the Lamps
    print('Looping Lamps')
    l = 0
    # first run through all of the lamps, turning them off
    for obj in bpy.data.objects:
        if obj.type == 'LAMP':
            obj.hide_render = True
            l = l + 1
    print('You have hidden ' + str(l) + ' lamps')
    
    # now go through and individually turn each lamp on,
    # render out a picture, then turn it off again
    for obj in bpy.data.objects:
        if obj.type == 'LAMP':
            print(obj.name)
            obj.hide_render = False
            bpy.data.scenes[sceneKey].render.image_settings.file_format = 'JPEG'
            bpy.data.scenes[sceneKey].render.filepath = filepath + '/lamp_' + obj.name
            # render the scene and write the image to disk
            bpy.ops.render.render(write_still=True)
            obj.hide_render = True
    
  15. Adjust line 3 so that the filepath fits your system. This is where it will save out the images – but beware: if the folder doesn't exist it will go ahead and create it, so make sure you type carefully. When you are ready to go, click the Run Script button and it should happily go away and render your images for you. If you have problems when running the code, the errors should appear in the console. [NOTE FOR MAC USERS: on a Mac, to get the console you need to start Blender from a Terminal window. Save your .blend and close Blender. Open Terminal.app, then change directory to the Blender application by running "cd /Applications/blender.app/Contents/MacOS/" (change the path to fit where you installed Blender), then run "./blender". Any console messages will now appear in the Terminal window.]
  16. This will give us a nice set of images (one for each Sun) that we can use later to create our RTIs.
  17. You may recall from the beginning of this HowTo that in order to create an RTI image we also need a shiny snooker ball. Luckily we can create one of these with Blender as well. Use Shift+A to create a Metaball → Ball. Make the Ball's dimensions 1 km × 1 km × 1 km and move it to the centre of your view (say X: 5 km, Y: 5 km, Z: 2.5 km).
  18. Now we want to make the ball really shiny and black – so apply a material to the ball (using the Material button in the object properties). Set the Diffuse intensity to 0.0, the Specular intensity to 1.0, the Hardness to its top value (511) and tick the little Mirror checkbox. That should give us a nice hard, shiny black ball for the RTI software to deal with.

    How to get a shiny black ball

  19. Now we want to render out a set of images with only the ball in them, so that we can 'train' the RTI software. You will want to turn off rendering on your DEM plane (press the little camera button next to it in your hierarchy view), so that when you output the images you will only be rendering the ball.
  20. Change the filepath in the script in your Text Editor panel so that you will be saving the ball images to a different folder (otherwise you will just overwrite your DEM images). Then hit Run Script and you should get a set of rendered images of the ball ready for importing into the RTI software.
  21. You now have the two sets of images ready to create your final RTI image!
  22. I am not going to go through the minute detail of the steps to create the RTI image, as Cultural Heritage Imaging have already written a detailed how-to. The next step is to download the RTI Builder software and the reference guide from this page and go through the steps outlined in their reference manual.
  23. A couple of notes on the process: you will want to run the first RTI build using the ball images as the input images (put them in a folder called jpeg-exports/ within your RTI project directory). This will create an RTI of the ball – and will produce a .lp file in the assembly-files/ folder of your RTI project directory.
  24. Once you have produced the .lp file from your ball images, you can use it to create an RTI image of the DEM itself. Start a new RTI project and choose Dome LP File (PTM Fitter) on the first page – this will direct you through and allow you to specify the .lp file from your ball project and the images of the DEM that you rendered from Blender. As we have already trained the program using the ball images, it should now happily go through and create the RTI image from your DEM renders.

    AFTER you have run the RTI Builder through on your ball images – use this mode to specify the .lp file

  25. That’s it – here is how mine turned out (a little dark, so probably need more energetic suns)…

    The final RTI image
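One closing thought: because the dome is virtual, we know every light direction exactly, so in principle you could generate the .lp file directly from the Sun positions instead of (or as a sanity check on) the ball-based calibration. Here is a hedged sketch – the filenames and lamp positions below are made up, and you should verify the exact .lp layout against the CHI reference guide for your version of the software:

```python
import math

def write_lp(path, lamp_positions, image_names):
    """Write a .lp light-positions file: the first line is the image
    count, then one 'filename lx ly lz' line per image, where
    (lx, ly, lz) is the unit vector from the subject towards the lamp.
    This is the format the PTM/RTI fitters read in place of the
    ball-derived calibration."""
    assert len(lamp_positions) == len(image_names)
    lines = [str(len(image_names))]
    for (x, y, z), name in zip(lamp_positions, image_names):
        norm = math.sqrt(x * x + y * y + z * z)
        lines.append("%s %.6f %.6f %.6f" % (name, x / norm, y / norm, z / norm))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# two hypothetical lamps from the dome, relative to the centre of the DEM
lamps = [(0.0, -7000.0, 7000.0), (7000.0, 0.0, 7000.0)]
write_lp("dome.lp", lamps, ["lamp_Sun.001.jpg", "lamp_Sun.002.jpg"])
```

Each direction is normalised to a unit vector from the centre of the DEM towards the lamp, matching what the ball-tracking step would otherwise estimate.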

You can download my lightdome.blend file, which has a 15 km × 15 km light dome in it, if you don't want to make your own. If you used this tutorial, post some screenshots of your own RTI images in the comments – I'm interested to see what people get up to! If you have any questions or need further help, don't hesitate to ask below. Thanks go to Tom Goskar, Paul Cripps and Grant Cox for help and advice in setting up the virtual RTI dome.