Surfing the Hypegeist

This post is written as part of the Call for Papers over at ThenDig, looking at Zeitgeist in archaeological research and how to follow it, keep up with it, or create it. As will be clear from the previous posts on my blog, I am interested in using Mixed and Augmented Reality to aid in archaeological research. Augmented Reality (AR) is currently just over the ‘Peak of Inflated Expectations’ of the Gartner Hype Cycle, meaning that it has previously been hailed as the next Big Thing but has not quite lived up to the hype, and now needs a lot of work to become a sustainable and useful technology – I have previously written about what this means in terms of archaeology here.

As I have just been awarded my PhD on the use of AR in archaeology, I decided to write this post to give some brief reflections on what it has been like trying to surf the Hype Cycle whilst still producing 85,000 words of scholarly research on the topic. Please check out the post on ThenDig, which has some insightful comments from the two peer reviewers – http://arf.berkeley.edu/then-dig/2014/03/zeitgeist-stuart-eve/. I have reproduced my text below:

Twitter is your enemy

Perhaps a controversial statement, but for anyone attempting to sit down and write intelligently about something that is currently the zeitgeist, Twitter is not your friend. I don’t say this because of the many wasted hours of procrastination that go into reading and obsessively checking a million and one tweets (although this is certainly true); I say it because when you are working on something at the bleeding edge of tech, Twitter provides hundreds of teasing snippets of the amazing research that other people are doing. This isn’t just other researchers, but also companies and hackers who seem to have all the time (and money) in the world to make cool proof-of-concept videos. While initially amazing and a great source of early ideas and ways to give your research the ‘wow-factor’, it quickly becomes disheartening – seeing what other people are achieving while you are still stuck making sure your bibliography is formatted correctly. It provokes the need to be blogging/creating/making/hacking almost continually to keep up with everyone, and to show that you are somehow simultaneously surfing the Hype Cycle. In my experience there is always going to be someone who has done it better, so for anyone who wants a life outside of their research, my advice is to keep your Twitter usage limited to finding new dubstep tracks and getting irate at the state of the world today.

Remember your roots

One of the key things to remember when using new tech is that no matter how deeply you immerse yourself in the tech world, when you emerge you need to convince other archaeologists that what you have been doing is useful. Archaeologists are notoriously wary of new technology and will be your biggest critics – and this is A Good Thing. Every new digital method or gadget should only be developed to further archaeological method/theory and our knowledge of the past – not simply for wow-factor or as a result of a ride on a Hypegeist bandwagon. If it won’t work outside in the rain, or you can’t convince a colleague of its usefulness without resorting to fancy videos or Prezis, then don’t bother.

Every surfer loses a wave

Be prepared to fall off the wave, and watch other people riding. It is going to happen anyway and by being patient, sitting back and watching other people ride the wave you can learn just as much as you can by constantly doing. It is less tiring and often very much more rewarding. I have found that acknowledging you are always going to be behind the curve promotes a feeling of calm reflection that is vital for properly researching what you are doing, and gives you the knowledge to choose the right time to jump back on the crest.

Take your time

Whilst blogs are great for working through ideas, writing academically makes you consider every word and sentence and forces you to find other research that backs up or challenges your claims. For someone who researches new technology every day, a digital detox is almost unheard of. However, taking the time to unplug everything, sit down and write the paper or thesis makes you critically examine everything you are saying or promoting with a clear, unhindered perspective.

I am convinced this is the reason that baking is so zeitgeist at the moment. People are craving time away from the digital world: watching your sourdough grow and savouring the time it takes for a loaf to prove and bake puts you back in the real world. Sadly, however, they are tech-ifying sourdough too.

Guest Blog on ASOR

I have just submitted a guest blog post on the American Schools of Oriental Research (ASOR) blog for their ongoing special series on Archaeology in the Digital Age. It’s an introduction to Augmented Reality for Archaeology and also includes some sneak peeks of the results of some of my own AR fieldwork on Bodmin Moor. The original post can be found at http://asorblog.org/?p=4707.

Learning by Doing – Archaeometallurgy

This post will be a little off my normal topics, in that there will be no augmented reality and no computers (although I did make some nice 3D models that I’ll link to later). It is about technology, but mostly about prehistoric technology.

I have spent the last four days on a prehistoric metallurgy weekend, run by Fergus Milton and Dr. Simon Timberlake at Butser Ancient Farm in Hampshire. The aim of the course was to introduce us to the basics of prehistoric metallurgy and then teach us the practical skills so that we could take the process all the way from breaking the ore to casting an axe. I decided to take part in the course, not because I am focusing on the techniques of Bronze Age metallurgy, but because the site that I am looking at on Bodmin Moor was very likely to have been created to work the nearby tin sources and I wanted to know how they would have done it and what it would have felt like. I have read quite a bit around the subject, and have a good idea of the steps involved, but it wasn’t enough. As with all of my work, I am interested in the human experience of a landscape or an activity and find it is necessary to get my hands dirty to see and feel what smelting is like – something you can’t get from just reading about it.

The course was quite archaeology focused, and being at Butser Ancient Farm meant there was also a large element of experimentation – rather than just demonstration. We were encouraged to try out different ideas and set up experiments based on our own research aims. The best part for me was that we made every part of the furnace and refractories (tuyeres, crucibles, collecting pots, etc.) ourselves – we even hand-stitched our own bellows.

Hand-stitched leather bellows

Drying out the refractories

After making our refractories we set to digging the furnaces; my group decided to dig a bank furnace and a bowl furnace. As can be seen from this 3D model, the bank furnace is, unsurprisingly, dug down into a bank of earth, with a horizontal passage dug into the shaft to hold the tuyere and bellows.

In contrast, the bowl furnace is a simple bowl dug out of the ground lined with a thin layer of clay, with a slightly sloping passage to hold the bellows and tuyere.

In order to fire the furnaces up, all that is needed is a small fire in the bottom of the furnace which is slowly covered with charcoal until the furnace is entirely full. Obviously the bellows need to be continually pumped to get some oxygen into the fire under the charcoal.

Bowl furnace in action

The ore is prepared for smelting using a beneficiation mortar (in our case we used a granite mortar which was probably originally used for grinding flour). Essentially it is as easy as smashing a few rocks and then grinding them down to powder using a stone hammer. This, perhaps weirdly, is the part of the process I was most interested in. I believe that the Bronze Age inhabitants of Leskernick Hill were collecting and crushing cassiterite (tin-stone) on-site and I wanted to see how hard it was to do and how long it would take. Simon had some streamed Cornish cassiterite with him and so I got to have a go at crushing it to a fine powder. It was remarkably easy and took very little time and effort to go from the rock itself to the powder ready for smelting. The mortar we were using had smooth sides and so the tin-stone kept skating up the sides and escaping onto the floor, but perhaps this might have been prevented if we were using a mortar with straighter sides.

As can be seen from the 3D model above, once the ore was crushed we loaded it into a hand-made crucible, ready for smelting. This crucible was filled with a mixture of cassiterite dust and malachite (copper-bearing ore) dust in an attempt to co-smelt them creating a ‘one-step bronze’. The mortar is stained green in this case from crushing up the malachite. Unfortunately on this experiment the hand-made crucible cracked in the furnace and so the one-step bronze leaked out and we eventually found it at the bottom of the furnace. We had also put a layer of crushed malachite directly into the furnace, which smelted away nicely and mingled with the leaked bronze to create a big lump of slightly tinned copper.

A lovely lump of smelted copper (with a tiny bit of tin)

Working my way through the entire process of metallurgy (minus the mining/collecting of the ore and the making of the charcoal) made me appreciate just how surprisingly easy the whole thing is – and equally what rather unremarkable archaeological remains it produces. This is especially true of our bowl furnace, which when burnt out looked almost exactly like a hearth, complete with burnt ceramic material that one could easily mistake for simple prehistoric pottery. It makes me wonder how many smelting sites may have been misidentified as hearths. After this weekend I would be happy to build a small furnace in my back garden and smelt some copper, and I wonder if the smelting furnaces of the Bronze Age were similar small bowl furnaces in or around the family home.

We undertook a total of 5 smelts and a couple of castings over the weekend, with varying levels of success. Even with the professionals there (Simon and Fergus) things did not always go to plan (crucibles broke, furnaces didn’t heat up enough, molten metal was spilled on the ground) but this, for me, was the key to the whole experience. While the entire process was much easier than I had first imagined, there was still effort involved in smelting a relatively small amount of metal. These mistakes and accidents would have happened in antiquity as well, and so even when a whole smelt of tin vapourised to nothing due to the furnace being too hot, I didn’t really regret the 2 hours spent bellowing and in fact felt a little closer to the frustration that might have been felt by the inhabitants of Bronze Age Leskernick Hill. Although I know the chemistry behind the smelting process (just about!) I was dumbstruck by the magical process of turning rock to metal. We literally sprinkled crushed malachite into the furnace, covered it with charcoal, heated it and then found a lump of copper at the bottom. It was quite a powerful experience, and one I am sure would not have been lost on the early prehistoric smelters.

This whole weekend has made me realise that just as it is important to walk the hills of Bodmin Moor in order to really get a feeling for what it is like to inhabit the place, it is equally important to build a furnace, crush ore and smelt it to metal in order to find out what it is like to inhabit the activities as well. Of course experimental archaeologists have been doing this for years, but just one weekend of it has already changed the way I am thinking about some of my evidence and will almost certainly have a big influence on at least one chapter of my PhD.

Archaeology, GIS and Smell (and Arduinos)

I have had quite a few requests for a continuation of my how-to series, for getting GIS data into an augmented reality environment and creating an embodied GIS. I promise I will get back to the how-tos very soon, but first I wanted to share something else that I have been experimenting with.

Most augmented reality applications currently on the market concentrate on visual cues for the AR experience, overlaying things on a video feed, etc. I have not found many that create or play with smells – and yet smell is one of the most emotive senses. In the presentation of archaeology this has long been known, and the infamous and varied smells of the Jorvik Centre are a classic example of smell helping to create a scene. The main reason for this lack of experimentation with smells is presumably the delivery device. AR is quite easy to achieve now within the visual realm, mainly because every smartphone has a video screen and camera. However, not every phone has a smell chamber – never mind one that can create the raft of different smells that would be needed to augment an archaeological experience. As a first stab at rectifying this, then, I present the Dead Man’s Nose:

The Dead Man’s Nose

The Dead Man’s Nose (DMN) is a very early prototype of a smell delivery device that wafts certain smells gently into your nose based on your location. The hardware is built using an Arduino microcontroller and some cheap computer parts, along with any scent of your choice. The software is a very simple webserver, accessed via WiFi, that ‘fires off’ smells based on its querystring. This means that it can easily be triggered by Unity3D (or any other software that can access a webpage) – so it fits very nicely into my embodied GIS setup.

How does it work?

This little ‘maker hack’ takes its inspiration from projects such as ‘My TV Stinks‘, ‘The Smell of Success‘ and Mint Foundry’s ‘Olly‘. Essentially, I followed the instructions for building an Olly (without the 3D housing) and, instead of using an Ethernet shield for the Arduino, I connected it to a WiFi shield and from there joined it to an ad-hoc WiFi network created by my Macbook. With the Macbook, iPad and the DMN on the same network it is very easy to send a message to the DMN from within the Unity gaming engine. As the iPad running the Unity application knows where I am in the world (see the previous blog), I can fire off smells according to coordinates (or areas) defined in a GIS layer. Therefore, if I have an accurate ‘smellscape’ modelled in GIS, I can deploy that smellscape into the real world and augment the smells in the same way that I can augment the visual elements of the GIS data. The code is very simple at both ends: I am just using a slightly adjusted version of the sample WiFi shield code on the Arduino end, and a small script on the Unity end that pings the webserver when the ‘player’ moves into a certain place on the landscape. When the webserver is pinged, it starts the fan and that wafts the smell around. From a relatively simple setup, it provides the possibility of a very rich experience when using the embodied GIS.
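The Unity side of this is a C# script, but the trigger logic is simple enough to sketch in a few lines of Python. Everything here is illustrative – the device address, the `fan` query parameter and the 0–255 speed range are hypothetical stand-ins for whatever your own Arduino sketch expects:

```python
import math
from urllib.parse import urlencode

# Hypothetical settings -- the real address and speed range depend on
# your own Arduino sketch and ad-hoc WiFi network.
DEVICE_URL = "http://192.168.1.77"   # address of the DMN webserver
TRIGGER_RADIUS = 20.0                # metres, as used in the field test

def fan_request(player, source, radius=TRIGGER_RADIUS):
    """Return the query URL that would fire the fan, or None if the
    player is outside the trigger radius.

    Fan speed ramps from 0 at the edge of the radius up to 255 at the
    source, mirroring the 'slower further away' behaviour described.
    """
    dx, dy = source[0] - player[0], source[1] - player[1]
    dist = math.hypot(dx, dy)
    if dist > radius:
        return None
    speed = round(255 * (1 - dist / radius))
    return DEVICE_URL + "/?" + urlencode({"fan": speed})

# 10 m from a roundhouse: roughly half-speed waft
print(fan_request((100.0, 200.0), (100.0, 210.0)))
```

The distance-to-speed ramp is what makes the fan spin faster as you approach the ‘source’ of the smell in the field test.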

A Field Test

The first thing to do was to find the smells to actually augment using the Dead Man’s Nose. It turns out there are a lot of different places to buy scents, but luckily in this case archaeologists came to the rescue – an article in the excellent Summer 2012 edition of Love Archaeology e-zine pointed me to the website of Dale Air who have over 300 aromas ranging from the mundane (Crusty Bread) to the completely weird (Dragon’s Breath). I purchased a set of samples (Barbeque, Dirty Linen, Woodsmoke, Farmyard, among others) and was ready to go. I was quite surprised, but they do actually smell pretty much as described, especially the Dirty Linen.

As I was just experimenting, the housing for the DMN was very simple (a cardboard box) and there was only one choice of smell and that was sellotaped to the outside of the box…

The Dead Man’s Nose, in a box with a BBQ scent attached

The prototype was then loaded into a bag (in this case a simple camera bag), which was slung around my neck. I popped the top of the BBQ scent open and then whenever the fan started whirring the sweet, slightly acrid smell of Barbequing meat was gently wafted to my nostrils.

The Dead Man’s Nose in a nosebag, ready to go

Using my embodied GIS of the roundhouses on Leskernick Hill, Bodmin Moor, I set the DMN to fire off a lovely smell of Barbeque whenever I got within 20m of a roundhouse. I set the fan to run slowly at first and get faster as I got closer to the ‘source’ of the smell. The DMN performed admirably: as I walked within range of the houses I heard the tell-tale whirr of the fan, and the next moment I had the lovely scent of cooking ribs. Future models will allow for more than one smell at a time (I just need a couple more computer fans) and also a better housing – a bit of 3D printing is in order!

Now I can use the iPad to view the roundhouses overlaid onto the video feed, plug in my headphones and hear 3D sounds that get louder or quieter depending on where I am in the settlement and also I can augment different smells as I walk around. Not only can I walk around the modern day Bronze Age landscape and see the augmented roundhouses, hear the Bronze Age sheep in the distance, I can also smell the fires burning and the dinner cooking as I get closer to the village….

If there is interest I can put together a how-to for creating the system, but for now I am going to carry on experimenting with it – to refine the delivery and the housing and to clean up the code a little bit.

Embodied GIS HowTo: Part 1a – Creating RTIs Using Blender (an aside)

This is a bit of an aside in the HowTo series, but nevertheless it should be a useful little tutorial, and as I was given a lot of help during the process it is only right to give something back to the community! This HowTo shows you how to take the 3D model you created in Part 1 and create a Reflectance Transformation Imaging (RTI) image from it. If you don’t know what that is, here is the definition from the biggest advocates of the technique for archaeology, Cultural Heritage Imaging (CHI):

RTI is a computational photographic method that captures a subject’s surface shape and color and enables the interactive re-lighting of the subject from any direction.

What this means in GIS terms is that you have a fully interactive hillshade to play with and can change the angle of the light on-the-fly. No more need to create hundreds of hillshades with the sun at different angles – this is an all-in-one approach and is far more interactive. It is a really awesome technique for analysing rock-art, artefacts and even documents, and can be used to reveal tiny details that might not be obvious just by examining the object normally. It has also been used by Tom Goskar and Paul Cripps to interactively re-light some LiDAR data that Wessex Archaeology have of Stonehenge (see their paper here). RTI images are created by surrounding the subject with a dome of lights that are turned on one by one, with a photograph taken each time. Every photograph needs a shiny ball (usually a snooker ball) in it, which enables the software to record the angle of each light; some complex maths is then used to merge together all of the images (for a fuller and probably more accurate explanation see Mudge et al 2006).
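To make the ‘interactive hillshade’ idea concrete, here is a minimal pure-Python sketch of Lambertian hillshading with a movable light – re-lighting is just re-running the shading with a new azimuth and altitude. The function and the toy DEM are mine, not part of any RTI package:

```python
import math

def hillshade(dem, azimuth_deg, altitude_deg, cellsize=1.0):
    """Lambertian hillshade of a small DEM (list of lists of heights),
    re-lightable from any direction -- the effect an RTI image gives
    you interactively."""
    az, alt = math.radians(azimuth_deg), math.radians(altitude_deg)
    # unit light-direction vector (azimuth clockwise from north)
    lx = math.cos(alt) * math.sin(az)
    ly = math.cos(alt) * math.cos(az)
    lz = math.sin(alt)
    rows, cols = len(dem), len(dem[0])
    shade = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # central-difference slopes in x and y
            dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2 * cellsize)
            dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2 * cellsize)
            # dot the surface normal (-dzdx, -dzdy, 1) with the light
            norm = math.sqrt(dzdx ** 2 + dzdy ** 2 + 1)
            shade[r][c] = max(0.0, (-dzdx * lx - dzdy * ly + lz) / norm)
    return shade

# a small slope rising to the east, lit first from the east, then the west
dem = [[float(c) for c in range(5)] for _ in range(5)]
east = hillshade(dem, 90, 45)
west = hillshade(dem, 270, 45)
```

The west-lit render is brighter on the west-facing slope, exactly the difference you see when dragging the virtual light around in an RTI viewer.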

This technique can also be used virtually (as Tom and Paul have done) by recreating the dome of lights in a 3D modeling package and shining them on a virtual object (often a laser scan) or a chunk of LiDAR data. I am going to show you exactly the same technique that Tom and Paul used, except where they used Vue I’m going to be using blender to create the virtual dome. I have also supplied the .blend file and the python script used – so you should be able to do it all yourself.

Right, first things first: open up blender and load the .blend file that you saved from Part 1 – if you haven’t got one then you’ll need a 3D model of some description within blender. The concepts will work the same on any 3D model, but I am presuming for this tutorial that you have a chunk of a Digital Elevation Model.

  1. Luckily for us blender has a method of easily creating a dome of lights – during the 3D modelling process a light dome is often used to create a warmer, more realistic feeling for ambient lighting (see Radiosity), so we can use this to our advantage. Press Shift+A to create a new mesh and choose ‘Icosphere’. To get enough lights we’ll need to subdivide the icosphere, so change the subdivisions (on the left-hand panel) to 3.
  2. For the purposes of this tutorial I am presuming that you have a chunk of 10kmx10km DEM – therefore in order to light it properly we need to create a dome that will cover the whole thing. Change the dimensions of your dome to be X:15km, Y:15km and Z:10km – you can change this to be as spherical as you want – these settings worked for me. You will also want to move it to the middle of your DEM – so change the location to be X:5km, Y:5km and Z:0.

    Building the icosphere

  3. Now we have a sphere (albeit squashed) we are going to want to cut the bottom off it to give us our dome. To do this enter Edit Mode (by pressing TAB). Now change your view so that you are viewing from the front – either press 1 on your numeric keypad or use the menu View->Front. Press the ‘A’ key to clear your selection and then press the ‘B’ key to begin selecting by a border. Draw a box around the bottom of the icosphere and it should select those faces and vertices. Once selected press the ‘X’ key and delete the vertices. Depending on the size of your sphere you may have to zoom forward a little to select and delete the faces on the far side of the icosphere. Keep doing this until you are left with a tidy dome sitting above your DEM.

    Deleting the bottom of the sphere

  4. Once you have your dome we are ready to start adding lights to it. First off, if you have any other Lamps in the scene, delete them so we don’t get confused at a later stage. Once deleted, come out of Edit Mode (TAB) and use Shift+A to add a new Lamp. I use a Sun lamp [helps with my year-round tan... ahem.. sorry] – you could experiment with other types of lamps too, but the Sun seems to work well. Move the Sun to the centre of your DEM (X:5km, Y:5km, Z:0). Rotate the Sun so that its Y axis is at 180 degrees.
  5. In the little panel on the right you want to select the new Sun by clicking on it, then, holding down Shift, click the icosphere, so you should now have both selected (you can tell because their little icons light up) – now hover your mouse in the centre of the viewport, press Ctrl+P and parent the Sun to the icosphere. The Sun should now become a child of the icosphere in your objects panel (if you expand the icosphere in the panel you will see the Sun as part of its hierarchy).

    Parenting the Sun

  6. The Sun is now the son of the parent; we therefore want to multiply the number of them and set one on each vertex – blender has a great function for this (DupliVerts). Click on the Object properties of the icosphere, scroll down to Duplication, click Verts and click the Rotation checkbox. You should see a whole host of Suns appear. They should be in the right place on each vertex, but if not you can move the Sun to the centre of the DEM (by clicking on the icosphere in the hierarchy panel and then clicking on the Sun – see Step 4).

    Duplicating the Suns

  7. As we are using Suns the direction that they are pointing doesn’t really matter – however, if you are using other types of lamp – spots for instance – you will need to make sure they are pointing in the right direction. [NOTE: if you need to do this here is how, if you are using Suns disregard this step - select the icosphere, enter Edit Mode (TAB) and then choose Normals: Flip Direction from the Mesh Tools panel on the left. That will ensure the lamps are pointing inside the dome. Go back into Object Mode (TAB)].
  8. Now we have a lovely dome of Suns, we need to detach them from the icosphere, so we can manipulate them individually. This is pretty easy – select the icosphere and then Press Ctrl+Shift+A and you should see the Suns all detach themselves into individual objects (you will see about 90 or so Suns in the hierarchy panel on the right). At this stage you are free to delete or turn off the icosphere as we won’t be needing it anymore.

    Blinded by the light

  9. Next we need to set up our camera. Images for RTIs are normally taken by a camera set at the top of the dome, pointing directly downwards. Select your camera (there should be one by default in your scene – if not then you can add one using Shift+A). Change the camera’s location to be directly above the centre of your DEM at the apex of the dome (in my case X:5km, Y:5km, Z:10km). Blender cameras automatically point downwards – so there should be no need to add any rotation (if you have any rotation already set, change all the values to 0). Before we render out a test image, we’ll need to adjust our camera viewport and clipping range. Press 0 on the numeric keypad or use the menu View->Camera to take a look and see what the camera is seeing. You will likely just get a grey box – this is because the camera is clipping the distance it can see. Select the camera and go to the settings in the right panel – set the end Clipping range to 10km and you should see your DEM appear.

    Adjusting the camera settings

  10. Now you are going to want to adjust the Sensor size, to make sure your whole DEM is in the shot – for my 10km DEM the sensor had to be set to 70.
  11. Try a test render (press F12 or go via the menu Render->Render Image). You should be presented with a lovely render of your DEM, currently lit from all the angles.

    DEM test render

  12. Press F11 to hide the render view – at this stage you might want to increase the energy setting on your Suns, to get a bit more light on the DEM. Our Suns are all still linked together, so you can change the energy setting by clicking on the top Sun in your hierarchy, clicking the Sun object properties (the little sun icon in the object properties panel) and changing the Energy as required (I recommend energy level 5). This should change the energy of all the Suns.
  13. Once you are happy with the energy levels we can render out a test sequence, by using a small python script that turns each sun on individually and renders out an image. Change the bottom panel to be a Text Editor panel (see image).

    Selecting the text editor

  14. Click the [New+] button in the Text Editor panel and cut and paste the following code into the window:
    import bpy
    sceneKey = bpy.data.scenes.keys()[0]
    filepath = "PUT YOUR ABSOLUTE FILEPATH HERE"
    # Loop over all objects to find the Lamps
    print('Looping Lamps')
    l = 0
    # first run through all of the lamps turning them off
    for obj in bpy.data.objects:
        if obj.type == 'LAMP':
            obj.hide_render = True
            l = l + 1
    print('You have hidden ' + str(l) + ' lamps')
    
    # now go through and individually turn each lamp on
    # and render out a picture
    for obj in bpy.data.objects:
        if obj.type == 'LAMP':
            print(obj.name)
            obj.hide_render = False
            bpy.data.scenes[sceneKey].render.image_settings.file_format = 'JPEG'
            bpy.data.scenes[sceneKey].render.filepath = filepath + '/lamp_' + str(obj.name)
            # render the scene and save the image to disk
            bpy.ops.render.render(write_still=True)
            obj.hide_render = True
    
  15. Adjust line 3 so that you have a filepath that fits your system. This is where it will save out the images – but beware: if the folder doesn’t exist it will go ahead and create it, so make sure you type carefully. When you are ready to go, click the Run Script button and it should happily go away and render your images for you. If you have problems when running the code, the errors should appear in the console. [NOTE FOR MAC USERS: getting the console on a Mac requires you to start blender from a Terminal window. Save your .blend and close blender. Open Terminal.app, then change directory to the blender application by running "cd /Applications/blender.app/Contents/MacOS/" (change the path to fit where you installed blender), then run "./blender". Any console messages will now appear in the Terminal window.]
  16. This will give us a nice set of images (one for each Sun) that we can use later to create our RTIs.
  17. You may recall from the beginning of this HowTo that in order to create an RTI image we also need a shiny snooker ball. Luckily we can create one of these with blender as well. Use Shift+A to create a Metaball -> Ball. Make the Ball dimensions 1km x 1km x 1km and move it to the centre of your view (say X:5km, Y:5km, Z:2.5km).
  18. Now we want to make the ball really shiny and black – so apply a material to the ball (using the Material button in the object properties). Set the Diffuse intensity to 0.0, the Specular intensity to 1.0, the Hardness to its top value (511) and click the little Mirror checkbox. That should give us a nice hard shiny black ball for the RTI software to deal with.

    How to get a shiny black ball

  19. Now we want to render out a set of images with only the ball in it so that we can ‘train’ the RTI software. You will want to turn the render off on your DEM Plane (press the little camera button next to it in your hierarchy view), so that when you output the images you will only be rendering the ball.
  20. Change the filepath in the script in your Text Editor panel so that you will be saving the ball images to a different folder (otherwise you will just overwrite your DEM images). Then hit Run Script and you should get a set of rendered images of the ball ready for importing into the RTI software.
  21. You now have the 2 sets of images ready to create your final RTI image!
  22. I am not going to go through the minute detail of the steps to create the RTI image, as Cultural Heritage Imaging have already written a detailed how to. So the next step is to download the RTI Builder software and the reference guide from this page and go through the steps outlined within their reference manual.
  23. A couple of notes on the process: you are going to want to run the first RTI build using the ball images as the input images (put them in a folder called jpeg-exports/ within your RTI project directory). This will create an RTI of the ball, and will produce a .lp file in the assembly-files/ folder of your RTI project directory.
  24. Once you have produced the .lp file from your ball images, you can then use this .lp file to create a RTI image of your DEM itself. Start a new RTI project and choose Dome LP File (PTM Fitter) on the first page – this will direct you through and allow you to specify the .lp file from your ball project, and the images of the DEM that you rendered from blender. As we have already trained the program using the ball images, it should now just happily go through and create the RTI image from your DEM renders.

    AFTER you have run the RTI Builder through on your ball images – use this mode to specify the .lp file

  25. That’s it – here is how mine turned out (a little dark, so I probably need more energetic Suns)…

    The final RTI image

You can download my lightdome.blend file that has a 15km x 15km light dome in it – if you don’t want to make your own. If you used this tutorial, post some screenshots of your own RTI images in the comments – I’m interested to see what people get up to! If you have any questions or need further help, don’t hesitate to ask below. Thanks go to Tom Goskar, Paul Cripps and Grant Cox for help and advice in setting up the virtual RTI dome.
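For anyone who prefers scripting the dome to clicking through the steps above, the geometry can also be generated directly. Here is a hedged sketch that places unit light directions on rings of a hemisphere (rather than on icosphere vertices, as in the tutorial) and writes them out in the simple .lp format that RTIBuilder’s Dome LP File workflow reads – to my understanding, a count line followed by ‘filename lx ly lz’ lines. The ring layout and image filenames are my own invention for illustration:

```python
import math

def dome_lights(rings=4, per_ring=12, radius=1.0):
    """Generate unit light-direction vectors on rings of a hemisphere --
    a sketch of the geometry the subdivided icosphere approximates."""
    lights = []
    for i in range(1, rings + 1):
        alt = math.pi / 2 * i / rings          # altitude above the horizon
        for j in range(per_ring):
            az = 2 * math.pi * j / per_ring    # azimuth around the dome
            lights.append((radius * math.cos(alt) * math.cos(az),
                           radius * math.cos(alt) * math.sin(az),
                           radius * math.sin(alt)))
    return lights

def write_lp(path, image_names, lights):
    """Write a dome .lp file: a count line, then one
    'filename lx ly lz' line per rendered image."""
    with open(path, "w") as f:
        f.write(f"{len(image_names)}\n")
        for name, (lx, ly, lz) in zip(image_names, lights):
            f.write(f"{name} {lx:.6f} {ly:.6f} {lz:.6f}\n")

lights = dome_lights()
names = [f"lamp_Sun.{i:03d}.jpg" for i in range(len(lights))]  # hypothetical names
write_lp("dome.lp", names, lights)
```

If you generate your .lp this way you can skip the virtual snooker-ball renders entirely, since the light angles are known exactly rather than recovered from the highlights.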


Embodied GIS HowTo: Part 1 – Loading Archaeological Landscapes into Unity3D (via Blender)

Recently I have been attempting to move closer to what I have coined embodied GIS (see this paper) – that is, the ability to use and create conventional GIS software/data, view it in the real world, in-situ, and explore and move through that data and feed back those experiences. As is clear from the subject of this blog I am using Augmented Reality to achieve this aim, and therefore am using a combination of 3D modelling software (Blender), gaming-engine software (Unity3D) and conventional GIS software (QGIS). Where possible I have been using Free and Open Source Software (FOSS), to keep costs low – but also to support the community and to show that pretty much anything is possible with a FOSS solution.

One of the main hurdles to overcome when trying to combine these approaches is figuring out the workflow between the 2D/2.5D GIS software, the 3D gaming-engine environment, and finally overlaying all of that information onto the real world. There are many points during the process where data integrity can be lost, the resolution of the original data can be affected, and decisions about data-loss have to be made. I hope that this blog post (and the subsequent howtos on the next stages of the process) will help people identify those points, and will step you through the process so you can do it with your own data.

The first step toward embodied GIS is to move from the GIS software into the gaming engine. There are many ways to do this, but I have used QGIS, some command line GDAL tools and then Blender. Over the next few posts I will show how to import elevation data, import/place archaeological information, and then view the finished data via the web and also in the landscape itself.

This first post presumes you have at least a working knowledge of GIS software/data.

First you will need a Digital Elevation Model (DEM) of your landscape. I am using Leskernick Hill on Bodmin Moor as my case study. I have the Ordnance Survey’s Landform PROFILE product, which is interpolated from 1:10,000 contours – resulting in a DTM with a horizontal resolution of 10m. To be honest this is not really a great resolution for close-up placement of data, but it works fine as a skeleton for the basic landscape form. The data comes from the OS as a 32-bit TIFF file – the import process into Blender can’t deal with the floating-point nature of the 32-bit TIFF, so we need to convert it to a 16-bit TIFF using the GDAL tools. To install GDAL on my Mac I use the KyngChaos Mac OSX Frameworks. Binaries for other platforms are available here. Once you have GDAL installed, running the following command will convert the 32-bit TIFF to a 16-bit TIFF:

gdal_translate -ot UInt16 leskernick_DTM.tif  leskernick_DTM_16.tif

This is the first stage where we lose resolution from the original data. The conversion from a floating-point raster to an integer-based raster means our vertical values are rounded to the nearest whole number – effectively limiting us to a minimum vertical resolution of 1m. This is not too much of a problem with the PROFILE data, as the vertical values are already interpolated from contour lines at 5m and 10m intervals – however, it can lead to artificial terracing, which we will tackle a bit later. It is more of a problem with higher-resolution data (such as LiDAR) as you will be losing actual recorded data values – with the PROFILE data we are just losing the already-interpolated values from the contours.
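
To see what that rounding does in practice, here is a tiny illustration in plain Python (the elevation values are hypothetical, and GDAL’s exact rounding behaviour may differ slightly):

```python
# Hypothetical floating-point elevations (in metres) along a gentle
# slope, as they might appear in the original 32-bit DTM.
float_elevations = [301.2, 301.4, 301.6, 301.8, 302.1, 302.3]

# The UInt16 conversion reduces each value to a whole number of metres:
int_elevations = [round(v) for v in float_elevations]

print(int_elevations)  # → [301, 301, 302, 302, 302, 302]
# Six distinct heights have collapsed into two flat 'terraces' – this
# is the artificial terracing described above.
```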

Once the TIFF is converted you will need to set up a local grid within your GIS software. Unity doesn’t handle large game areas that well – and will start the gamespace at 0,0 – so when we import our data it makes things much easier if we can import it relative to a 0,0 coordinate origin rather than to real-world coordinates. This is much easier than it sounds – it just involves using a false easting and northing for your data. In my case I made a simple shapefile of a 10km x 10km square that covered my study area; the bottom-left coordinates of the square (in the Ordnance Survey GB coordinate system (EPSG:27700)) were 212500, 75000. This means that any data I import into Unity will need to have 212500 subtracted from its eastings and 75000 subtracted from its northings. We can either do this programmatically or ‘in our heads’ when placing objects on the Unity landscape (more on this later in the howtos). It is an advantage to have a relatively small study area and data in a planar/projected map projection – the conversion then does not need to take account of earth curvature (as it would in a geographic projection such as LatLongs).

Therefore, you can choose to reproject/spatially adjust all of your data using the false eastings and northings within your GIS software – which makes the import a little easier. Or you can do it on an individual layer dataset basis as and when you import into Unity (which is what I do).
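
For the programmatic route, the conversion is a single subtraction each way. A minimal sketch in Python (the function names are my own; the constants are the false origin from this post):

```python
# False origin: the bottom-left corner of the 10km x 10km study square,
# in OSGB (EPSG:27700) coordinates.
FALSE_EASTING = 212500
FALSE_NORTHING = 75000

def osgb_to_unity(easting, northing):
    """Real-world OSGB coordinate -> local Unity grid coordinate."""
    return (easting - FALSE_EASTING, northing - FALSE_NORTHING)

def unity_to_osgb(x, y):
    """Inverse conversion, for reading positions back out of Unity."""
    return (x + FALSE_EASTING, y + FALSE_NORTHING)

# The centre of the study area sits at 217500, 80000 in the real world:
print(osgb_to_unity(217500, 80000))  # → (5000, 5000)
```

The same pair of functions (or their equivalent inside Unity) works for any layer, as long as every dataset uses the same false origin.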

Once you have sorted out the GIS side of things, you will need to import the raster into Blender – and build the 3D landscape mesh. I’ll try to explain this step-by-step, but it is worth finding your way around Blender a little bit first (I recommend these tutorials). Also, please bear in mind you may have a slightly different window set-up to mine, but hopefully you will be able to find your way around. Please feel free to ask any questions in the comments below.

  1. Open up Blender – you should see the default cube view. Delete the cube by selecting it in the panel on the right, then press ‘X’ and click Delete.
  2. Now we want to make sure our units are set to metres – do this by clicking the little scene icon in the right-hand panel and then scrolling down to the Units drop-down and click the Metric button.

    Changing units to metric

  3. Now add a plane – using Shift+A Add->Mesh->Plane (or use the Add menu). This will create a Plane of 2m x 2m. We want this Plane to be the size of our DEM (in world units), so change the dimensions to match; in my case I set X to ’10km’ and Y to ’10km’. If you don’t have the dimensions panel on the right, press the ‘N’ key to make it appear.

    Setting the Plane Dimensions

  4. You will notice that your plane has disappeared off into the distance. We need to adjust the clipping values of our viewport. Scroll down the panel with the Dimensions in it until you see the View dropdown. You will see a little section called ‘Clip:’ – change the End value from 1km to, say, 12km. Now if you zoom out (pinch to zoom out on a trackpad or use the mouse scroll wheel) you will see your Plane in all its very flat glory.
  5. Before we start the interesting bit of giving it some elevation – we need to make sure it is in the right place. Remember that we are using false eastings and northings, so we want the bottom corner of our Plane to be at 0,0,0. To do this, first set the 3D cursor to 0,0,0 (in the right-hand panel, just beneath where you set the viewport clip values). Now click the ‘Origin’ button in the left-hand Object Tools panel, and click Origin to 3D Cursor (or use the shortcut Shift+Ctrl+Alt+C).
  6. You will also want to make sure the bottom left of the Plane is at 0,0,0. As the origin handle of the Plane is in the middle, for a 10km x 10km DEM you will need to move X 5km and Y 5km, by changing the location values in the right-hand properties panel. That should ensure your bottom-left corner is sitting nicely at 0,0,0.

    Setting the location

  7. Our Plane currently only has 1 face – meaning we are not going to be able to give it much depth. So now we need to subdivide the Plane to give it more faces – think of this a bit like the resolution of a raster – the more faces the more detailed the model will be (at the cost of file size!). Enter Edit Mode (by pressing Tab). You will see the menu change in the Left Panel – and it will give you a set of Mesh Tools.
  8. Click the Subdivide button – you can choose how much to subdivide, but I usually aim for around the same resolution as my DEM. So for a 10km square with 10m resolution we will want a subdivided plane with approx 1,000,000 faces. In Blender terms the closest we can get is 1,048,576 faces. This is a BIG mesh – so I would suggest that you do one at high resolution like this – and then also have a lower-resolution one to use as the terrain [see the terrain howto - when written!].

    Subdividing the Plane

  9. We now want to finally give the Plane some Z dimension. This is done using the Displace Modifier. First come out of Edit mode – by pressing TAB. Now apply a material to the Plane, by pressing the Material button on the far right panel and hitting the [+New] button.

    The Material and Texture Buttons

  10. Now add a texture to the new material by hitting the Texture button and again hitting the [+New] button. Scroll down the options and change the Type to ‘Image or Movie’. Scroll down further and change the Mapping coordinates from Generated to UV. Now click the Open icon on the panel and browse to the 16-bit TIFF you made earlier. The image will appear blank in the preview – but don’t worry, Blender can still read it.

    Applying the Image Texture

  11. Once you have applied the texture – click the Object Modifiers button and choose the Displace Modifier from the Add Modifiers dropdown.

    Object Modifiers Button

  12. When you have the Displace Modifier options up, choose the texture you made by clicking the little cross-hatched box in the Texture section and choosing ‘Texture’ from the dropdown. First change the Midlevel value to ’0m’. Depending on your DEM size you may start seeing some changes in your Plane already. However, you will probably need to do some experimentation with the Strength (the amount of displacement). For my DEM the strength I needed was 65000.203. This is a bit of a weird number – but you can check the dimensions of the Plane as you change the Strength (see screenshot); you want the Z value to be as close as possible to 255m (this basically means you will get the full range of the elevation values, as the 16bit Tiff has 255 colour values). These should map to real-world heights on import into Unity – you may want to double-check this later when in Unity.

    Changing the Strength

  13. Hopefully by this stage your landscape should have appeared on your Plane and you can spin and zoom it around as much as you like…
  14. At this stage you are going to want to save your file! Unity can take a .blend file natively, but let’s export it as an FBX – so we can insert it into Unity (or any 3D modelling program of your choice). Go to File->Export->Autodesk FBX and save it somewhere convenient.
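
As a quick sanity check of the face count in step 8: each Subdivide in Blender splits every face into four, so the numbers above fall out of simple powers of two (plain Python; the figures are the ones used in this tutorial):

```python
# Each Subdivide operation quarters every face, so n subdivisions of a
# single-face plane gives 4**n faces.
subdivisions = 10
faces = 4 ** subdivisions
print(faces)  # → 1048576, the "closest we can get" to 1,000,000

# On a 10km x 10km plane, that gives one face roughly every 10m,
# nicely matching the DEM's 10m horizontal resolution:
face_size_m = 10_000 / (2 ** subdivisions)
print(round(face_size_m, 2))  # → 9.77 (metres per face edge)
```

The Displace strength in step 12, by contrast, can’t be derived so cleanly – the displacement is the texture value multiplied by the Strength, and the texture normalisation depends on the image – which is why checking the resulting Z dimension, as described above, is the reliable approach.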

Well done for getting this far! The final steps in this HowTo are simply inserting the FBX into Unity. This is very easy, but I will be presuming you have a bit of knowledge of Unity.

  1. Open Unity and start a new project. Import whichever packages you like, but I would suggest that you import at least the ones I have shown here – as they will be helpful in later HowTos.

    Creating a new Unity Project

  2. Now simply drag your newly created FBX into Unity. If you have a large mesh the import will probably take quite a long time – for large meshes (greater than 65,535 vertices) you will also need the latest version of Unity (>3.5.2), which will auto-split the large mesh into separate meshes for you. Otherwise you will have to pre-split it within Blender.
  3. Drag the newly imported FBX into your Editor View and you will see it appear – again you can zoom and pan around, etc. Before it is in the right place, however, you will need to make sure it is the correct size and orientation. First change the scale of the import from 0.01 to 1 – by adjusting the Mesh Scale Factor. Don’t forget to scroll down a little bit and click the Apply button. After hitting Apply you will likely have to wait a bit for Unity to make the adjustments.

    The FBX in Unity

  4. Finally, once the object is in your hierarchy, you will need to rotate it by 180 on the Y axis (this is because Blender and Unity have different ideas of whether Z is up or forward).

    Set the Y rotation

  5. You should then have a 1:1 scale model of your DEM within Unity – the coordinates and heights should match your GIS coordinates (don’t forget to adjust for the false eastings and northings). In my case the centre of my DEM within real-world space is 217500, 80000. The adjustment for the false eastings and northings would be performed as follows:-

actual_coord - false_coord = unity_coord
therefore 217500 - 212500 = 5000 and 80000 - 75000 = 5000
therefore the Unity Coordinates of the centre of the area = 5000,5000

To double-check it would be worth adding an empty GameObject at a prominent location in the landscape (say the top of a hill) and then checking that the Unity coordinates match the real-world coordinates after adjustment for the false values.

I hope that helps a few people. There are a couple of other tutorials using different 3D modelling software on this topic, so it is worth checking them out too here and here, and one for Blender here.

In the next HowTo I’ll be looking at the various ways of getting vector GIS data into Unity and adding in different 3D models for different GIS layers, so stay tuned!


New Article in J. of Archaeological Method and Theory

I have just had an article published in the J. of Archaeological Method and Theory – which explains a bit more about my approach to using Augmented Reality within archaeology and how it might aid in a phenomenological approach to the landscape. The article is a result of a conference I attended last year, ‘In Search of Middle Ground’ (organised by Dot Graves and Kirsty Millican), and forms part of a special issue that will be coming out in print a bit later in the year. There are some really interesting papers in the issue (some of which are already available from the journal’s Online-First section), all of which deal with the tricky area that lies between computer-based analysis of the landscape and actually getting out into the field and walking around.

It is good to get some work out there and hopefully start some debates regarding the validity of the approach, although as I read it back now I can see how far my thinking has come already and a few things that need some further development.

The article is available on the journal’s site (for people who have a subscription or institutional access) and also from UCL’s Open Access site (just a pre-print with no fancy formatting). Let me know in the comments below if you have any questions!


Future of Conference Posters?

Last month I entered a poster into the UCL Graduate School Poster Competition and was lucky enough to win first prize. I find conference posters a bit of a strange animal. The poster session always seems to take place over lunchtime or the coffee break, and more often than not the person who made the poster isn’t around to talk you through it. You are then usually left with a poster that has masses of text, that either has too much detail or not enough, and the whole thing can quickly get boring.

I wanted to challenge this a little bit, and as my poster subject was my work with AR, I was provided with the perfect opportunity. The poster was a pretty simple (but hopefully striking) design: a pair of old-school binoculars looking at some rocks on Leskernick Hill, Bodmin Moor. The area within the binoculars shows some roundhouses – giving the impression that looking through the binoculars reveals the ancient landscape.

Graduate School Poster

My winning poster

I tried to keep the text to a bare minimum so that the poster was dominated by the binoculars. However, this being an AR project, there was a bit of a twist. Using the Junaio API I augmented the poster with a video that overlaid the whole thing when viewed through a smartphone or tablet. The video showed the binoculars moving around the poster, revealing more of the roundhouses.

I am increasingly finding that the best way to explain AR is to give someone an example of it. It was a bit of a gamble, as in order to see the AR content the viewer needed to have a smartphone, have an app to scan the QR code on the poster and have good enough internet access to install and run the Junaio app. The main judge of the competition wasn’t at the prize-giving, so I didn’t get any feedback or a chance to ask if they had seen the AR content, but they awarded it first prize so I hope they did!

I am of course not the first person to use AR in a poster, but I am sure that it will become a lot more popular, as it really is an excellent way of adding content to a poster without being too intrusive. I guess at the moment it could be seen as being a little gimmicky; however, this isn’t all that bad when trying to attract people to your poster and your research. One of the important things to remember, though, is that the poster needs to be able to stand on its own without the AR content, as it is quite an ask at the moment to get people to download an app on their phone just to learn more about your research.

The process of adding the content via the Junaio app also wasn’t quite as easy as I had hoped, mainly because the video itself had to be made into a 3D object, be of a very low quality, and be in a special .3g2 format to enable it to be delivered fast to a mobile device. You immediately lose your audience if they have to wait 2 minutes for your content to download, and the .3g2 format was specifically designed to look OK on a smartphone screen while being small enough to download quickly. However, as you can see from the video above, the quality is pretty poor. I created the animation using 3D Studio Max, and then rendered it out to a number of TIFFs. I then used ffmpeg to render the TIFFs to a video and encoded it into the .3g2 format. The Junaio developer website has instructions for how to do all of this, but it is not really for the faint of heart.

Junaio provides a number of sample PHP scripts that can be run on your own server to deliver the content, and their trouble-shooting process is really excellent. So if you have your own webserver and are happy with tweaking some scripts then you can do some really quite nice stuff. I should note that they also have a simple upload interface for creating simple AR ‘channels’, which is a great way of quickly getting things up there – but it doesn’t allow you total control or video uploads. If you just want to pop a simple 3D model on your conference poster, then the Junaio Channel Creator is the app for you! The other thing to remember if you want to augment your own conference poster is that the channels can take up to a week to be approved by Junaio, so you can’t leave it all to the last minute!
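
For anyone attempting the same, the TIFF-to-.3g2 step boils down to a single ffmpeg invocation. Here is a sketch of assembling it in Python – the resolution, frame-rate and bitrate values are illustrative guesses rather than anything Junaio mandates, so check their developer docs and your ffmpeg version:

```python
def build_ffmpeg_3g2_command(frame_pattern, output,
                             width=320, height=240,
                             fps=15, video_bitrate="192k"):
    """Assemble an ffmpeg command that renders a numbered TIFF
    sequence (e.g. 'frame%04d.tif') down to a small .3g2 file
    suitable for fast delivery to a phone. Flag values are
    illustrative - tune them for your own content."""
    return [
        "ffmpeg",
        "-framerate", str(fps),     # input frame rate of the sequence
        "-i", frame_pattern,        # e.g. frame0001.tif, frame0002.tif...
        "-s", f"{width}x{height}",  # scale down for mobile screens
        "-b:v", video_bitrate,      # keep the file small
        output,                     # the .3g2 extension selects the muxer
    ]

cmd = build_ffmpeg_3g2_command("frame%04d.tif", "poster_overlay.3g2")
print(" ".join(cmd))
```

With ffmpeg installed, the list can be passed straight to subprocess.run().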

I suspect we will be seeing many more AR-enabled conference posters, particularly as AR booklets, magazines and museum guides are becoming more popular. One can envisage holographic type projections of people standing beside their posters talking the viewers through it, or interactive posters where the content changes depending on what and where you touch it. As I keep coming back to on this blog, it is the melding of the paper with the digital that I find so fascinating about AR, the ability to re-purpose old ideas (such as the conference poster) and breathe new life into the concept – but without losing the original purpose and feel of the thing itself. The design of the paper poster stands on its own (for better or worse!) and the AR content just gives the creator the chance to provide further information and give the viewer that extra dimension into their research.


AR and Archaeology: Opportunities, Challenges and the Trench of Disillusionment

I have just come back from giving a guest seminar to the Archaeological Computing Research Group at the University of Southampton and thought I would put up a post with the gist of it. It was really an introduction to Augmented Reality in Archaeology, but was also inspired by the recent article in Wired. In his article Clark Dever explains that AR is currently languishing in the Trough of Disillusionment.

The (Archaeological) Hype Cycle


What this means is that, according to the Gartner Hype Cycle, AR as a technology has already reached its peak of marketing, expectation and excitement, and hasn’t really delivered much. Instead of providing the world with a technology to allow the seamless integration of the real and the virtual, we are left with a few applications that provide a way to overlay virtual information onto a video screen – mostly used to direct us to the nearest Starbucks.

I am afraid that I have to agree with Clark Dever, and I feel the same about AR. I follow a large number of AR blogs and tweeters, and all anyone seems to report on is new apps that basically overlay info onto a screen with no relationship to the real world. A good example is Falcon Gunner, a Star Wars-based app which places you in the seat of a gunner on the Millennium Falcon. Whilst it is a really fun game [who doesn't like shooting down TIE fighters!?], the ‘AR mode’ has absolutely no connection to the real world and basically overlays the game with a transparent background so that it looks like TIE fighters are flying over your sofa. While this is kind of interesting for about 5 minutes, what I really want is for the TIEs to interact with the real world – I want them to hide behind the sofa and fly out at me – or fly into a cupboard, hide and wait until I’m not looking and then attack me. I want to feel like I am part of the Star Wars galaxy and it is part of my front room.

Falcon Gunner app

Star Wars Arcade: Falcon Gunner (http://jhaepfenning.wordpress.com/2011/06/30/toilets-are-obsolete-a-falcon-gunner-review/)

Heritage applications are bread and butter for AR: one of the first things that comes to mind when talking about AR is how cool it would be to see what the world used to look like. Indeed, archaeological AR apps are actually some of the better apps that are trying to meld the virtual with the real. For instance, the Museum of London’s Streetmuseum app does a good job of pulling in virtual content (in their case pictures/paintings) and overlaying it in its ‘real’ place in the world.

MoL Streetmuseum (image from: http://www.bullseyehub.com/blog/2011/01/top-6-mobile-apps-for-culture-events/)

But, again, this app just overlays the image in (roughly) the right place – there is no way to enter into the image or interact with it, or have people walking around it, through it, behind it. Instead it is really the equivalent of using your GPS to query a database and get back a picture of where you are. Or indeed going to the local postcard kiosk, buying an old paper postcard of, say, St. Paul’s Cathedral, and then holding it up as you walk around the cathedral grounds.

In my opinion, AR will continue to languish in the Trench of Disillusionment until we can address the following issues:

  1. The technology needs to be used intelligently. Adding an ‘AR view’ to an app that simply overlays the app on your video feed is not enough. In addition, simply putting GPS locations into a ’3D’ space and giving them an icon is equally flawed – especially when those locations are far away and should be obscured [occluded] by the buildings in the way. It is much easier to navigate to these things using a map (it saves you trying to walk through buildings) – and I am not entirely sure how much the AR mode adds. We need to think of ways that AR is going to add information or provide a new type of information, not just be a different (and less useful) way of displaying the same old information.

    Panoramio

    Layar's 'AR View' - note the points that are on different streets (some kilometres away), and should be occluded by the buildings.

  2. The AR algorithms need to recognise the real world. Sorry to keep banging on about this, but if the AR content is not respecting the real world (i.e. being occluded by it or wrapping round it or interacting with it in some way) then you lose the point and the feel of the augmentation. We should be using the real world as a template for the AR experience, taking as much of it as possible and then gently melding the virtual world with it – not harshly slapping virtual content on and simply making it move with the motion of the accelerometers. Advances are currently being made toward this, via the use of depth cameras (such as the Kinect) and also computer-vision based algorithms (such as SLAM and SfM). Metaio, the developer of the popular Junaio AR app, are clearly making big leaps in this area as this video shows. We are a little way off this being commercially available, but it shows that the big companies are finding ways to make the meld more seamless.
  3. AR needs to be seamless (and cheap!). The current normal delivery of AR requires either a head-mounted display (HMD) or a smartphone/tablet. Whilst an AR experience will always need some kind of mediation in order to provide the experience, these devices need to be less bulky and also cheaper in order for them to become accessible to a normal person. In archaeology, the majority of the AR apps are likely to involve tourism, or visits to archaeological/historic sites or museums, and therefore the delivery technology needs to be cheap and robust, and ubiquitous enough to enable the AR content to be experienced. Perhaps the fabled real-life Google Goggles that have been promised by the end of the year will go some way to making this happen.
  4. We need to wrest the technology away from advertisers. Up until now, a lot of AR content has just been a way for marketeers to sell us stuff. That’s fine, and it’s the way of the world – in fact it obviously drives a lot of the technological advances, because after all, who is paying for all this stuff? But we need to be careful that we are also doing good research with AR that does not just have the aim of making the killer app to sell loads of stuff. As archaeologists we are in a unique position where we can advance knowledge and use AR to show people our research in-situ, or use it as an aid to field practice, rather than just to present our results. As our discipline moves towards attempting to gain a more embodied experience of the past, AR is the perfect technology to aid in that embodiment and to let us experience visions/sounds/smells of past events in the places that they happened. It can be used to help us think about the past as we are excavating it, and may even aid in/change our interpretations as we go along. We don’t have to be led by the nose by the technology; instead we need to bend it to our will and make use of it intelligently for our discipline. Otherwise we are simply going to end up with Matsuda’s dystopic vision of AR Advertising Hell.
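
The occlusion argument in point 2 boils down to a per-pixel depth comparison: draw the virtual content only where it is nearer to the camera than the real world. A toy sketch in plain Python, with a tiny hypothetical depth map standing in for what a Kinect-style sensor or SLAM reconstruction would provide:

```python
def occlusion_mask(real_depth, virtual_depth):
    """Decide, pixel by pixel, whether virtual content should be drawn.

    real_depth:    2D list of distances (metres) to the real world.
    virtual_depth: 2D list of distances to the virtual object,
                   or None where there is no virtual content.
    Returns a 2D boolean list: True = draw the virtual pixel.
    """
    rows, cols = len(real_depth), len(real_depth[0])
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            v = virtual_depth[r][c]
            if v is not None and v < real_depth[r][c]:
                mask[r][c] = True  # virtual object is in front: draw it
            # otherwise the real world (the sofa!) occludes it
    return mask

# A TIE fighter 4m away, partly behind a sofa 3m away:
real = [[10.0, 3.0],
        [10.0, 3.0]]
tie = [[4.0, 4.0],
       [None, 4.0]]
print(occlusion_mask(real, tie))  # → [[True, False], [False, False]]
```

Real systems do exactly this in the GPU’s depth buffer – the hard part, as noted above, is obtaining the real-world depth at all, which is what the depth cameras and SLAM/SfM work are chasing.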

While in danger of pushing the metaphor of the Archaeological Hype Cycle to breaking point let me sum up:

AR is like one of those archaeological excavations where you are promised the world and then, when you break ground, it doesn’t quite deliver. You see the amazing Barrow of Inflated Expectation that promises archaeological finds and fame beyond your wildest dreams; you engage the press, start a website, hit every social media site possible and get everyone (including your funders and institution) excited beyond belief. Then you cut a slot through the barrow and realise that it isn’t filled with the grave goods of a lost Bronze Age King – instead there is very little in the Trench at all. The press get bored, your website hit-rate plummets, the previously frequent on-site blogging reduces to once a month and your institution starts worrying about your REF submission. You languish in your trench, wondering how you can rescue the project. But then you remember you have taken a whole load of environmental samples, the few scraps of wood you recovered are good enough for dendro-analysis, and as you analyse the complex stratigraphy very carefully you realise it is a unique sequence… 2 or 3 years of careful post-excavation analysis by just a few team members follows, the hard graft of making the project really work begins to come to fruition, and you are left with a mature project that has real results and is pushing the field of archaeology forward. That is where we are with AR now. We need to get our heads down and do that hard graft, start thinking about what we can take from the hype of AR and build it into something that works, helps us during our field practice and dissemination, and hopefully pushes archaeological knowledge forward, rather than just being more eye-candy.

Please leave some comments if you can think of or have examples of applications for AR in archaeology or heritage studies that could get us out of the Trench, it would be great to get a discussion going. I have uploaded an HTML version of my Southampton seminar here. Please note, it was exported from Keynote, and therefore the embedded movies only seem to work when viewed in Safari.


Augmenting a Roman Fort

The following video shows something that I have been working on as a prototype for a larger landscape AR project.

As you can see, by using the Qualcomm AR SDK and Unity3D it is possible to augment some quite complex virtual objects and information onto the model Roman fort. I really like this application, as all I have done is take a book that you can buy at any heritage site (in the UK at least) and simply changed the baseboard design so that the extra content can be experienced. Obviously there was quite a lot of coding behind the scenes in the app and 3D modelling, but from a user point of view the AR content is very easy to see – simply print out the new baseboard, stick it on and load up the app.

For me that is one of the beautiful things about AR, you still have the real world, you still have the real fort that you have made and can play with it whether or not you have an iPad or Android tablet or what-have-you. All the AR does is augment that experience and allow you to play around with virtual soldiers or peasants or horses instead of using static model ones. It also opens up all sorts of possibilities for adding explanations of building types, a view into the day-to-day activities in a fort, or even for telling stories and acting out historical scenarios.

The relative ease of the deployment of the system (now that I have the code for the app figured out!) means this type of approach could be rolled out in all sorts of different situations. Some of my favourite things in museums, for instance, are the old-school dioramas and scale-models. The skill and craftsmanship of the original model will remain, but it could be augmented by the use of the app – and made to come alive.

Housesteads Diorama

The model of Housesteads fort in the Housesteads museum

The same is true of modern day prototyping models or architectural models. As humans we are used to looking at models of things, and want to be able to touch them and move them around. Manipulating them on a computer screen just doesn’t somehow seem quite right. But the ability to combine the virtual data with the manipulation and movement of the real-life model gives us a unique and enhanced viewpoint, and can also allow us to visualise new buildings or existing buildings in new ways.

A particularly important consideration when creating AR content is to ensure that it looks as believable or ‘real’ as possible. The human eye is very good at noticing things that seem out of the ordinary or “don’t feel quite right”. One of the main ways to help create a believable AR experience is to ensure the real world occludes the virtual objects. That is, the virtual content can be seen to move behind real-world objects (such as the soldiers walking through the model gateway). It should also be possible to interact with the real-world objects and have that affect the virtual content (such as touching one of the buildings and making the labels appear). This will become particularly important as I move into rolling the system out into a landscape instead of just a scale-model. As I augment the real world with virtual objects, those objects have to interact with the real world as if they are part of it – otherwise too many Breaks in Presence will occur and the value of the AR content is diminished. An accurate 3D model of the real world is quite a bit harder to create than that of a paper fort, but if I can pull it off, the results promise to be quite a bit more impressive…

