
CAAUK 2016 – Embodied GIS and applied Multi-Sensory Archaeology

I recently attended the CAAUK 2016 meeting in Leicester, a great couple of days with a few really interesting papers.

As usual, the rather excellent Doug Rocks-Macqueen was on hand to record the talks. His videos can be found here – he records all sorts of diverse archaeological conferences, so it is well worth clicking the subscribe button on his account.

In case anyone is interested, I have embedded the video of my talk below – where I discuss the Embodied GIS, using examples from my previous research including Voices Recognition and the Dead Man’s Nose.

Guest Blog on ASOR

I have just submitted a guest blog post on the American Schools of Oriental Research (ASOR) blog for their ongoing special series on Archaeology in the Digital Age. It’s an introduction to Augmented Reality for Archaeology and also includes some sneak peeks of the results of some of my own AR fieldwork on Bodmin Moor. The original post can be found at http://asorblog.org/?p=4707.

Archaeology, GIS and Smell (and Arduinos)

I have had quite a few requests for a continuation of my how-to series, for getting GIS data into an augmented reality environment and for creating an embodied GIS. I promise I will get back to the how-tos very soon, but first I wanted to share something else that I have been experimenting with.

Most augmented reality applications currently on the market concentrate on visual cues for the AR experience, overlaying things on a video feed, etc. I have found very few that create or play with smells – and yet smell is one of the most emotive senses. This has long been known in the presentation of archaeology, and the infamous and varied smells of the Jorvik Centre are a classic example of smell helping to create a scene. The main reason for this lack of experimentation with smells is presumably the delivery device. AR is quite easy to achieve within the visual realm, mainly because every smartphone has a video screen and camera. However, not every phone has a smell chamber – never mind one that can create the raft of different smells that would be needed to augment an archaeological experience. As a first stab at rectifying this, then, I present the Dead Man’s Nose:


The Dead Man’s Nose

The Dead Man’s Nose (DMN) is a very early prototype of a smell delivery device that wafts certain smells gently into your nose based on your location. The hardware is built from an Arduino microcontroller and some cheap computer parts, along with any scent of your choice. The software is a very simple webserver that can be accessed over WiFi and that ‘fires off’ smells via the webserver’s querystring. This means that it can easily be triggered by Unity3D (or any other software that can access a webpage) – so it fits very nicely into my embodied GIS setup.

How does it work?

This little ‘maker hack’ takes its inspiration from projects such as ‘My TV Stinks‘, ‘The Smell of Success‘ and Mint Foundry’s ‘Olly‘. Essentially, I followed the instructions for building an Olly (without the 3D housing) and, instead of using an Ethernet shield for the Arduino, I connected it to a WiFi shield and from there joined it to an ad-hoc WiFi network created by my Macbook. With the Macbook, iPad and the DMN on the same network it is very easy to send a message to the DMN from within the Unity gaming engine. As the iPad running the Unity application knows where I am in the world (see the previous blog), I can fire off smells according to coordinates (or areas) defined in a GIS layer. Therefore, if I have an accurate ‘smellscape’ modelled in GIS, I can deploy that smellscape into the real world and augment the smells in the same way that I can augment the visual elements of the GIS data. The code is very simple at both ends: I am just using a slightly adjusted version of the sample WiFi shield code on the Arduino, and a small script on the Unity end that pings the webserver when the ‘player’ moves into a certain place on the landscape. When the webserver is pinged, it starts the fan and that wafts the smell around. From a relatively simple setup, it provides the possibility of a very rich experience when using the embodied GIS.
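To give a flavour of the Unity end, here is a minimal sketch of the kind of script I am describing (the class name, the IP address and the querystring parameter are all made up for illustration – the real script is just as simple):

using UnityEngine;
using System.Collections;

// When the GPS-driven player object gets within range of a smell source,
// ping the Dead Man's Nose webserver so it starts the fan.
public class SmellTrigger : MonoBehaviour
{
    public Transform player;              // the GPS-driven player object
    public Transform smellSource;         // e.g. the centre of a roundhouse
    public float triggerDistance = 20f;   // metres, matching the GIS buffer
    public string dmnUrl = "http://192.168.2.2/";   // address of the Arduino webserver

    private bool fired = false;

    void Update()
    {
        float distance = Vector3.Distance(player.position, smellSource.position);
        if (distance < triggerDistance && !fired)
        {
            fired = true;
            StartCoroutine(FireSmell());
        }
        else if (distance >= triggerDistance)
        {
            fired = false;   // allow the smell to be fired again on re-entry
        }
    }

    IEnumerator FireSmell()
    {
        WWW www = new WWW(dmnUrl + "?fan=on");   // the Arduino just needs to see a request
        yield return www;
    }
}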

A Field Test

The first thing to do was to find the smells to actually augment using the Dead Man’s Nose. It turns out there are a lot of different places to buy scents, but luckily in this case archaeologists came to the rescue – an article in the excellent Summer 2012 edition of Love Archaeology e-zine pointed me to the website of Dale Air who have over 300 aromas ranging from the mundane (Crusty Bread) to the completely weird (Dragon’s Breath). I purchased a set of samples (Barbeque, Dirty Linen, Woodsmoke, Farmyard, among others) and was ready to go. I was quite surprised, but they do actually smell pretty much as described, especially the Dirty Linen.

As I was just experimenting, the housing for the DMN was very simple (a cardboard box) and there was only one choice of smell and that was sellotaped to the outside of the box…


The Dead Man’s Nose, in a box with a BBQ scent attached

The prototype was then loaded into a bag (in this case a simple camera bag), which was slung around my neck. I popped the top of the BBQ scent open and then, whenever the fan started whirring, the sweet, slightly acrid smell of barbecuing meat was gently wafted to my nostrils.


The Dead Man’s Nose in a nosebag, ready to go

Using my embodied GIS of the roundhouses on Leskernick Hill, Bodmin Moor, I set the DMN to fire off a lovely smell of Barbeque whenever I got within 20m of a roundhouse. I set the fan to run slowly at first and get faster as I got closer to the ‘source’ of the smell. The DMN performed admirably: as I walked within range of the houses I heard the tell-tale whirr of the fan, and the next moment I had the lovely scent of cooking ribs. Future models will allow for more than one smell at a time (I just need a couple more computer fans) and also a better housing – a bit of 3D printing is in order!
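For anyone wondering how the ‘slower at a distance, faster up close’ behaviour might look on the Unity side, here is a hedged extension of the sketch above. The speed value and its 0–255 range are assumptions (on the Arduino it would map to a PWM value driving the fan):

// Map distance to a fan speed so the smell gets stronger as you approach the source.
int FanSpeedForDistance(float distance, float maxDistance)
{
    if (distance >= maxDistance) return 0;                // out of range: fan off
    float t = 1f - (distance / maxDistance);              // 0 at the edge, 1 at the source
    return Mathf.RoundToInt(Mathf.Lerp(50f, 255f, t));    // gentle whirr up to full speed
}

// e.g. ping dmnUrl + "?fan=" + FanSpeedForDistance(distance, 20f);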

Now I can use the iPad to view the roundhouses overlaid onto the video feed, plug in my headphones and hear 3D sounds that get louder or quieter depending on where I am in the settlement, and also augment different smells as I walk around. Not only can I walk around the modern-day Bronze Age landscape, see the augmented roundhouses and hear the Bronze Age sheep in the distance, but I can also smell the fires burning and the dinner cooking as I get closer to the village….

If there is interest I can put together a how-to for creating the system, but for now I am going to carry on experimenting with it – to refine the delivery and the housing and to clean up the code a little bit.

Embodied GIS HowTo: Part 1 – Loading Archaeological Landscapes into Unity3D (via Blender)

Recently I have been attempting to move closer to what I have coined embodied GIS (see this paper) – that is, the ability to use and create conventional GIS software/data, view it in the real world, in situ, and explore and move through that data and feed those experiences back. As is clear from the subject of this blog I am using Augmented Reality to achieve this aim, and therefore am using a combination of 3D modelling software (Blender), gaming-engine software (Unity3D) and conventional GIS software (QGIS). Where possible I have been using Free and Open Source Software (FOSS), to keep costs low – but also to support the community and to show that pretty much anything is possible with a FOSS solution.

One of the main hurdles to overcome when trying to combine these approaches is figuring out the workflow between the 2D/2.5D GIS software, the 3D gaming-engine environment and, finally, the overlay of all that information onto the real world. There are many points during the process when data integrity can be lost, resolution of the original data can be affected and decisions about data loss have to be made. I hope that this blog post (and the subsequent howtos on the next stages of the process) will help you identify those points and will also step through the process so you can do it with your own data.

The first step toward embodied GIS is to move from the GIS software into the gaming engine. There are many ways to do this, but I have used QGIS, some command line GDAL tools and then blender. Over the next few posts I will show how you import elevation data, import/place archaeological information and then view the finished data via the web and also in the landscape itself.

This first post presumes you have at least a working knowledge of GIS software/data.

First you will need a Digital Elevation Model of your landscape. I am using Leskernick Hill on Bodmin Moor as my case study. I have the Ordnance Survey’s Landform PROFILE product, which is interpolated from contours at 1:10,000 – resulting in a DTM with a horizontal resolution of 10m. To be honest this is not really a great resolution for close-up placement of data, but it works fine as a skeleton for the basic landscape form. The data comes from the OS as a 32-bit TIFF file – the import process can’t deal with the floating-point nature of the 32-bit TIFF, and therefore we need to convert it to a 16-bit TIFF using the GDAL tools. To install GDAL on my Mac I use the KyngChaos Mac OSX Frameworks. Binaries for other platforms are available here. Once you have GDAL installed, running the following command will convert the 32-bit TIFF to a 16-bit TIFF:

gdal_translate -ot UInt16 leskernick_DTM.tif  leskernick_DTM_16.tif

This is the first stage where we lose resolution from the original data. The conversion from a floating-point raster to an integer-based raster means our vertical values are rounded to the nearest whole number – effectively limiting us to a minimum vertical resolution of 1m. This is not too much of a problem with the PROFILE data, as the vertical values are already interpolated from contour lines at 5m and 10m intervals – however, it can lead to artificial terracing, which we will tackle a bit later. It is more of a problem with higher-resolution data (such as LiDAR) as you will be losing actual recorded data values – with the PROFILE data we are just losing already-interpolated values from the contours.

Once the TIFF is converted you will need to set up a local grid within your GIS software. Unity doesn’t handle large game areas that well – and will start the gamespace at 0,0 – therefore when we import our data it makes things much easier if we import it relative to a 0,0 coordinate origin rather than to real-world coordinates. This is much easier than it sounds – it just involves using a false easting and northing for your data. In my case I made a simple shapefile of a 10km x 10km square that covered my study area; the bottom-left coordinates of the square (in the Ordnance Survey GB coordinate system (EPSG:27700)) were 212500, 75000. This means that the coordinates of any data I import into Unity will need to have 212500 subtracted from their eastings and 75000 subtracted from their northings. We can either do this programmatically or ‘in our heads’ when placing objects on the Unity landscape (more on this later in the howtos). It is an advantage to have a relatively small study area, and data in a planar/projected map projection – as the conversion will not need to take account of earth curvature (as it would in a geographic coordinate system such as lat/long).

Therefore, you can choose to reproject/spatially adjust all of your data using the false eastings and northings within your GIS software – which makes the import a little easier. Or you can do it on an individual layer dataset basis as and when you import into Unity (which is what I do).
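If you prefer to do the adjustment programmatically, a minimal helper along these lines works. This is just a sketch – the class and method names are my own, it uses the false origin from my study area, and it assumes Unity’s default Y-up axes with eastings on X and northings on Z:

using UnityEngine;

// Converts between real-world OSGB coordinates (EPSG:27700) and local Unity
// coordinates by subtracting/adding the false origin of the study area.
public static class LocalGrid
{
    const float falseEasting = 212500f;    // bottom-left easting of the 10km square
    const float falseNorthing = 75000f;    // bottom-left northing of the 10km square

    // Real-world easting/northing/height -> Unity position
    public static Vector3 RealToUnity(float easting, float northing, float height)
    {
        return new Vector3(easting - falseEasting, height, northing - falseNorthing);
    }

    // Unity position -> real-world easting/northing
    public static Vector2 UnityToReal(Vector3 unityPosition)
    {
        return new Vector2(unityPosition.x + falseEasting, unityPosition.z + falseNorthing);
    }
}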

Once you have sorted out the GIS side of things, you will need to import the raster into Blender – and build the 3D landscape mesh. I’ll try to explain this step-by-step, but it is worth finding your way around Blender a little bit first (I recommend these tutorials). Also, please bear in mind you may have a slightly different window set-up to mine, but hopefully you will be able to find your way around. Please feel free to ask any questions in the comments below.

  1. Open up blender – you should see the default cube view. Delete the cube, by selecting it in the panel to the right – then press ‘X’ and click delete
  2. Now we want to make sure our units are set to metres – do this by clicking the little scene icon in the right-hand panel and then scrolling down to the Units drop-down and click the Metric button.

    Changing units to metric

  3. Now add a plane – using Shift+A Add->Mesh->Plane (or use the Add menu). This will create a Plane of 2mx2m. We want this Plane to be the size of our DEM (in world units) so change the dimensions to be the same, in my case I set X to be ’10km’ and Y to be ’10km’. If you don’t have the dimensions panel on the right, click the ‘N’ key to make it appear.

    Setting the Plane Dimensions

  4. You will notice that your plane has disappeared off into the distance. We need to adjust the clipping values of our viewport. Scroll down the panel with the Dimensions in it until you see the View dropdown. You will see a little section called ‘Clip:’ – change the End value from 1km to say 12km. Now if you zoom out (pinch to zoom out on a trackpad or use the mouse scroll wheel) you will see your Plane in all its very flat glory.
  5. Before we start the interesting bit of giving it some elevation – we need to make sure it is in the right place. Remember that we are using false eastings and northings, so we want the bottom corner of our Plane to be at 0,0,0. To do this first set the 3D cursor to 0,0,0 (in the right-hand panel just beneath where you set the viewport clip values). Now click the ‘Origin’ button in the left-hand Object Tools panel, and click Origin to 3D cursor (the shortcut Shift+Ctrl+Alt+C)
  6. You will also want to make sure the bottom left of the Plane is at 0,0,0. As the origin handle of the Plane is in the middle, for a 10x10km DEM you will need to move the X 5km and the Y 5km, by changing the location values in the right-hand properties panel. That should ensure your bottom-left corner is sitting nicely at 0,0,0.

    Setting the location

  7. Our Plane currently only has 1 face – meaning we are not going to be able to give it much depth. So now we need to subdivide the Plane to give it more faces – think of this a bit like the resolution of a raster – the more faces the more detailed the model will be (at the cost of file size!). Enter Edit Mode (by pressing Tab). You will see the menu change in the Left Panel – and it will give you a set of Mesh Tools.
  8. Click the Subdivide button – you can choose how much you want to subdivide, but I usually make it around the same resolution as my DEM. So for a 10km square with 10m resolution we want a subdivided plane with approximately 1,000,000 faces. In Blender terms the closest we can get is 1,048,576 faces (each click of Subdivide splits every face into four, so ten subdivisions of the single starting face gives 4^10 = 1,048,576). This is a BIG mesh – so I would suggest that you do one at high resolution like this, and then also have a lower-resolution one for using as the terrain [see the terrain howto – when written!].

    Subdividing the Plane

  9. We now want to finally give the Plane some Z dimension. This is done using the Displace Modifier. First come out of Edit mode – by pressing TAB. Now apply a material to the Plane, by pressing the Material button on the far right panel and hitting the [+New] button.

    The Material and Texture Buttons

  10. Now add a texture to the new material by hitting the Texture button and again hitting the [New+] button. Scroll down the options and change the Type to ‘Image or Movie’. Scroll down further and change the Mapping coordinates from Generated to UV. Now click the Open icon on the panel and browse to the 16bit Tiff you made earlier. The image will be blank in the preview – but don’t worry blender can still read it.

    Applying the Image Texture

  11. Once you have applied the texture – click the Object Modifiers button and choose the Displace Modifier from the Add Modifiers dropdown.

    Object Modifiers Button

  12. When you have the Displace Modifier options up, choose the texture you made by clicking the little cross-hatched box in the Texture section and choosing ‘Texture’ from the dropdown. Then change the Midlevel value to ‘0m’. Depending on your DEM size you may start seeing some changes in your Plane already. However, you will probably need to do some experimentation with the Strength (the amount of displacement). For my DEM the strength I needed was 65000.203. This is a bit of a weird number – but you can check the dimensions of the plane as you change the strength (see screenshot); you want the Z value to be as close as possible to 255m (this basically means you will get the full range of the elevation values, as the 16-bit TIFF has 255 colour values; these should map to real-world heights on import into Unity. You may want to do some checking of this later when in Unity).

    Changing the Strength

  13. Hopefully by this stage your landscape should have appeared on your Plane and you can spin and zoom it around as much as you like…
  14. At this stage you are going to want to save your file! Unity can take a .blend file natively, but let’s export it as an FBX – so we can insert it into Unity (or any 3D modelling program of your choice). Go to File->Export->Autodesk FBX and save it somewhere convenient.

Well done for getting this far! The final steps in this HowTo simply involve inserting the FBX into Unity. This is very easy, but I will be presuming you have a bit of knowledge of Unity.

  1. Open Unity and start a new project. Import whichever packages you like, but I would suggest that you import at least the ones I have shown here – as they will be helpful in later HowTos.

    Creating a new Unity Project

  2. Now simply drag your newly created FBX into Unity. If you have a large mesh the import will probably take quite a long time – for large meshes (greater than 65,535 vertices) you will also need a recent version of Unity (3.5.2 or later), which will automatically split the large mesh into separate meshes for you. Otherwise you will have to pre-split it within Blender.
  3. Drag the newly imported FBX into your Editor View and you will see it appear – again you can zoom and pan around, etc. Before it is in the right place, however, you will need to make sure it is the correct size and orientation. First change the scale of the import from 0.01 to 1 – by adjusting the Mesh Scale Factor. Don’t forget to scroll down a little bit and click the Apply button. After hitting Apply you will likely have to wait a bit for Unity to make the adjustments.

    The FBX in Unity

  4. Finally, once the object is in your hierarchy, you will need to rotate it by 180 on the Y axis (this is because Blender and Unity have different ideas of whether Z is up or forward).

    Set the Y rotation

  5. You should then have a 1:1 scale model of your DEM within Unity – the coordinates and heights should match your GIS coordinates (don’t forget to adjust for the false eastings and northings). In my case the centre of my DEM within real-world space is 217500, 80000. The adjustment for the false eastings and northings would be performed as follows:

actual_coord - false_coord = unity_coord
therefore 217500 - 212500 = 5000 and 80000 - 75000 = 5000
therefore the Unity Coordinates of the centre of the area = 5000,5000

To double-check, it would be worth adding an empty GameObject at a prominent location in the landscape (say the top of a hill) and then checking that the Unity coordinates match the real-world coordinates after adjustment for the false values.
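A quick (and entirely optional) way to do that check is a tiny script like the one below, dropped onto the empty GameObject. It reuses the hypothetical LocalGrid helper sketched earlier and simply logs the reconstructed real-world coordinates for comparison with your GIS:

using UnityEngine;

// Attach to an empty GameObject placed on a prominent feature (e.g. a hilltop)
// and compare the logged values with the coordinates from your GIS.
public class CoordinateCheck : MonoBehaviour
{
    void Start()
    {
        Vector2 realWorld = LocalGrid.UnityToReal(transform.position);
        Debug.Log("Easting: " + realWorld.x + "  Northing: " + realWorld.y +
                  "  Height: " + transform.position.y);
    }
}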

I hope that helps a few people, there are a couple of other tutorials using different 3D modelling software on this topic so it is worth checking them out too here and here and one for Blender here.

In the next HowTo I’ll be looking at the various different ways of getting vector GIS data into Unity and adding in different 3D models for different GIS layers, so stay tuned!

Augmenting a Roman Fort

The following video shows something that I have been working on as a prototype for a larger landscape AR project.

As you can see, by using the Qualcomm AR SDK and Unity3D it is possible to augment some quite complex virtual objects and information onto the model Roman fort. I really like this application, as all I have done is take a book that you can buy at any heritage site (in the UK at least) and simply change the baseboard design so that the extra content can be experienced. Obviously there was quite a lot of coding and 3D modelling behind the scenes, but from the user’s point of view the AR content is very easy to see – simply print out the new baseboard, stick it on and load up the app.

For me that is one of the beautiful things about AR: you still have the real world, you still have the real fort that you have made, and you can play with it whether or not you have an iPad or Android tablet or what-have-you. All the AR does is augment that experience and allow you to play around with virtual soldiers or peasants or horses instead of using static model ones. It also opens up all sorts of possibilities for adding explanations of building types, a view into the day-to-day activities in a fort, or even for telling stories and acting out historical scenarios.

The relative ease of the deployment of the system (now that I have the code for the app figured out!) means this type of approach could be rolled out in all sorts of different situations. Some of my favourite things in museums, for instance, are the old-school dioramas and scale-models. The skill and craftsmanship of the original model will remain, but it could be augmented by the use of the app – and made to come alive.


The model of Housesteads fort in the Housesteads museum

The same is true of modern-day prototyping models or architectural models. As humans we are used to looking at models of things, and want to be able to touch them and move them around. Manipulating them on a computer screen just doesn’t somehow seem quite right. But the ability to combine the virtual data with the manipulation and movement of the real-life model gives us a unique and enhanced viewpoint, and can also allow us to visualise new buildings or existing buildings in new ways.

A particularly important consideration when creating AR content is to ensure that it looks as believable or ‘real’ as possible. The human eye is very good at noticing things that seem out of the ordinary or “don’t feel quite right”. One of the main ways to help create a believable AR experience is to ensure the real world occludes the virtual objects – that is, the virtual content can be seen to move behind real-world objects (such as the soldiers walking through the model gateway). It should also be possible to interact with the real-world objects and have that affect the virtual content (such as touching one of the buildings and making the labels appear). This will become particularly important as I move into rolling the system out into a landscape instead of just a scale-model. As I augment the real world with virtual objects, those objects have to interact with the real world as if they are part of it – otherwise too many Breaks in Presence will occur and the value of the AR content is diminished. An accurate 3D model of the real world is quite a bit harder to create than that of a paper fort, but if I can pull it off, the results promise to be quite a bit more impressive…

 

ARK and Augmented Reality

Recently I have been working away in the Unity gaming engine, using it to make some Augmented Reality applications for the iPhone and iPad. It has been surprisingly successful, and with at least three different ways of getting 3D content to overlay on the iOS video feed (Qualcomm, StringAR and UART) the workflow is more open than ever. I have been attempting to load 3D content at runtime, so that dynamic situations can be created as a result of user interaction – rather than having to have all of the resources (3D models, etc.) pre-loaded into the app. This not only saves on the file size of the app, it also means that the app can pull in real-time information and data that can be changed by many people at once. However, in order to do that I needed some kind of back-end database…
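The general pattern on the Unity side is straightforward – fetch some data over HTTP at runtime and build the scene from it. A minimal sketch of that idea is below; the URL, the response handling and the placeholder prefab are all hypothetical stand-ins, not the actual back-end calls:

using UnityEngine;
using System.Collections;

// Pull content from a back-end at runtime instead of baking it into the app.
public class RuntimeContentLoader : MonoBehaviour
{
    public string recordUrl = "http://example.org/api/get_record.php?itemkey=1";
    public GameObject placeholderPrefab;   // stands in for the fetched record

    IEnumerator Start()
    {
        WWW www = new WWW(recordUrl);
        yield return www;

        if (string.IsNullOrEmpty(www.error))
        {
            // In practice you would parse the returned text (JSON/XML) and use it
            // to decide what to instantiate and where; here we just spawn a
            // placeholder and log the raw response for inspection.
            GameObject record = Instantiate(placeholderPrefab, Vector3.zero, Quaternion.identity) as GameObject;
            record.name = "Fetched record";
            Debug.Log("Fetched record data: " + www.text);
        }
        else
        {
            Debug.LogWarning("Could not reach the back-end: " + www.error);
        }
    }
}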

For those of you that know me, you will know that as well as doing my PhD I work on the development of the open-source archaeological database system known as the Archaeological Recording Kit (ARK). It seemed like a logical step to combine these two projects and use ARK as the back-end database. So that is what I went and did and at the same time created a rudimentary AR interface to ARK. The preliminary results can be seen in the video below:

This example uses the Qualcomm AR API and ARK v1.0. Obviously at the moment it is marker-based AR (or at least image-recognition based); the next task is to incorporate the iDevice’s gyroscope to enable the AR experience to continue even when the QR code is not visible.
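For the gyroscope fallback, the rough plan looks something like the sketch below. It is untested and only meant to show the general idea of driving the AR camera from Input.gyro when the image target is lost – in practice an extra fixed rotation is usually needed to line the axes up properly:

using UnityEngine;

// When the image target is lost, drive the AR camera from the device gyroscope
// so the overlay keeps roughly the right orientation.
public class GyroFallback : MonoBehaviour
{
    public Camera arCamera;
    public bool targetVisible = true;   // would be set from the AR tracking callbacks

    void Start()
    {
        Input.gyro.enabled = true;
    }

    void Update()
    {
        if (!targetVisible)
        {
            // Convert the right-handed gyro attitude into Unity's left-handed space.
            Quaternion attitude = Input.gyro.attitude;
            arCamera.transform.localRotation =
                new Quaternion(attitude.x, attitude.y, -attitude.z, -attitude.w);
        }
    }
}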