Archive for the ‘AR’ Category

Future of Conference Posters?

Last month I entered a poster into the UCL Graduate School Poster Competition and was lucky enough to win first prize. I find conference posters a bit of a strange animal. The poster session always seems to take place over lunchtime or the coffee break, and more often than not the person who made the poster isn’t around to talk you through it. You are usually left with a poster that has masses of text, with either too much detail or not enough, and the whole thing quickly becomes boring.

I wanted to challenge this a little bit, and as my poster subject was my work with AR, I was provided with the perfect opportunity. The poster was a pretty simple (but hopefully striking) design, a pair of old school binoculars looking at some rocks on Leskernick Hill, Bodmin Moor. The area within the binoculars shows some roundhouses – giving the impression that looking through the bins reveals the ancient landscape.

Graduate School Poster

My winning poster

I tried to keep the text to a bare minimum so that the poster was dominated by the binoculars. However, this being an AR project, there was a bit of a twist. Using the Junaio API I augmented the poster with a video that overlaid the whole thing when viewed through a smartphone or tablet. The video showed the binoculars moving around the poster, revealing more of the roundhouses.

I am increasingly finding that the best way to explain AR is to give someone an example of it. It was a bit of a gamble, as in order to see the AR content the viewer needed to have a smartphone, have an app to scan the QR code on the poster and have good enough internet access to install and run the Junaio app. The main judge of the competition wasn’t at the prize-giving, so I didn’t get any feedback or a chance to ask if they had seen the AR content, but they awarded it first prize so I hope they did!

I am of course not the first person to use AR in a poster, but I am sure that it will become a lot more popular as it really is an excellent way of adding content to a poster without being too intrusive. I guess at the moment it could be seen as being a little gimmicky, but this isn’t all that bad when trying to attract people to your poster and your research. One of the important things to remember, though, is that the poster needs to be able to stand on its own without the AR content, as it is quite an ask at the moment to get people to download an app on their phone just to learn more about your research.

The process of adding the content via the Junaio app also wasn’t quite as easy as I had hoped, mainly because the video itself had to be made into a 3D object and delivered in a very low-quality, special .3g2 format so that it could reach a mobile device quickly. You immediately lose your audience if they have to wait two minutes for your content to download, and the .3g2 format was specifically designed to look OK on a smartphone screen while being small enough to download quickly. However, as you can see from the video above, the quality is pretty poor. I created the animation using 3D Studio Max and rendered it out to a series of TIFFs. I then used ffmpeg to stitch the TIFFs into a video and encode it into the .3g2 format. The Junaio developer website has instructions for how to do all of this, but it is not really for the faint of heart.

Junaio provides a number of sample PHP scripts that can be run on your own server to deliver the content, and their trouble-shooting process is really excellent. So if you have your own webserver and are happy tweaking some scripts then you can do some really quite nice stuff. I should note that they also have a simple upload interface for creating simple AR ‘channels’, which is a great way of quickly getting things up there – but it doesn’t allow you total control or let you upload videos. Still, if you want to pop a simple 3D model on your conference poster, then the Junaio Channel Creator is the app for you! The other thing to remember if you want to augment your own conference poster is that channels can take up to a week to be approved by Junaio, so you can’t leave it all to the last minute!
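The TIFF-to-.3g2 step above can be sketched as a single ffmpeg invocation. The snippet below just builds the command line; the frame pattern, resolution, frame rate and bitrate are illustrative guesses, not the exact settings I used for the poster video:

```python
# Sketch of the TIFF-sequence -> .3g2 pipeline described above.
# All numeric settings here are illustrative, not the real ones.

def build_ffmpeg_cmd(frame_pattern="frame%04d.tif",
                     out_file="poster_overlay.3g2",
                     fps=15, size="320x240", bitrate="192k"):
    """Assemble an ffmpeg command that turns a numbered TIFF
    sequence into a small MPEG-4 video in a .3g2 container."""
    return [
        "ffmpeg",
        "-framerate", str(fps),   # input frame rate of the TIFF sequence
        "-i", frame_pattern,      # e.g. frame0001.tif, frame0002.tif, ...
        "-s", size,               # keep the resolution low for mobile delivery
        "-c:v", "mpeg4",          # .3g2 commonly carries MPEG-4/H.263 video
        "-b:v", bitrate,          # low bitrate = fast download, poor quality
        out_file,
    ]

cmd = build_ffmpeg_cmd()
print(" ".join(cmd))
```

Running the assembled command (e.g. via `subprocess.run(cmd)`) does the actual encode; the low resolution and bitrate are exactly why the result looks rough on anything bigger than a phone screen.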

I suspect we will be seeing many more AR-enabled conference posters, particularly as AR booklets, magazines and museum guides are becoming more popular. One can envisage holographic type projections of people standing beside their posters talking the viewers through it, or interactive posters where the content changes depending on what and where you touch it. As I keep coming back to on this blog, it is the melding of the paper with the digital that I find so fascinating about AR, the ability to re-purpose old ideas (such as the conference poster) and breathe new life into the concept – but without losing the original purpose and feel of the thing itself. The design of the paper poster stands on its own (for better or worse!) and the AR content just gives the creator the chance to provide further information and give the viewer that extra dimension into their research.

AR and Archaeology: Opportunities, Challenges and the Trench of Disillusionment

I have just come back from giving a guest seminar to the Archaeological Computing Research Group at the University of Southampton and thought I would put up a post with the gist of it. It was really an introduction to Augmented Reality in Archaeology, but was also inspired by the recent article in Wired. In his article Clark Dever explains that AR is currently languishing in the Trough of Disillusionment.

The (Archaeological) Hype Cycle

What this means is that according to the Gartner Hype Cycle AR as a technology has already reached its peak of marketing, expectation and excitement and hasn’t really delivered much. Instead of providing the world with a technology to allow the seamless integration of the real and the virtual, we are left with a few applications that provide a way to overlay virtual information onto a video screen, which are mostly used to direct us to the nearest Starbucks.

I am afraid that I have to agree with Clark Dever, and I feel the same about AR. I follow a large number of AR blogs and tweeters, and all anyone seems to report on are new apps that basically overlay info onto a screen with no relationship to the real world. A good example is Falcon Gunner, a Star Wars based app which places you in the seat of a gunner on the Millennium Falcon. Whilst it is a really fun game [who doesn’t like shooting down TIE fighters!?] the ‘AR mode’ has absolutely no connection to the real world and basically overlays the game with a transparent background so that it looks like TIE fighters are flying over your sofa. While this is kind of interesting for about 5 minutes, what I really want is the TIEs to interact with the real world – I want them to hide behind the sofa and fly out at me – or fly into a cupboard, hide and wait until I’m not looking and then attack me. I want to feel like I am part of the Star Wars galaxy and it is part of my front room.

Falcon Gunner app

Star Wars Arcade: Falcon Gunner

Heritage applications are bread and butter for AR: one of the first things that comes to mind when talking about AR is how cool it would be to see what the world used to look like. Indeed, archaeological AR apps are actually some of the better apps that are trying to meld the virtual with the real. For instance, the Museum of London’s Streetmuseum app does a good job of pulling in virtual content (in their case pictures/paintings) and overlaying them in their ‘real’ place in the world.

MoL Streetmuseum

But, again, this app just overlays the image in (roughly) the right place – there is no way to enter into the image or interact with it, or have people walking around it, through it, behind it. Instead it is really the equivalent of using your GPS to query a database and get back a picture of where you are. Or indeed going to the local postcard kiosk, buying an old paper postcard of, say, St. Paul’s Cathedral and then holding it up as you walk around the cathedral grounds.

In my opinion, AR will continue to languish in the Trench of Disillusionment until we can address the following issues:

  1. The technology needs to be used intelligently. Adding an ‘AR view’ to an app that simply overlays the app on your video feed is not enough. In addition, simply putting GPS locations into a ‘3D’ space and giving them an icon is equally flawed – especially when those locations are far away and should be obscured [occluded] by the buildings in the way. It is much easier to navigate to these things using a map (saves you trying to walk through buildings), and I am not entirely sure how much the AR mode adds to it. We need to think of ways that AR is going to add information or provide a new type of information, not just be a different (and less useful) way of displaying the same old information.


    Layar's 'AR View' - note the points that are on different streets (some kilometres away), and should be occluded by the buildings.

  2. The AR algorithms need to recognise the real world. Sorry to keep banging on about this, but if the AR content is not respecting the real world (i.e. being occluded by it or wrapping round it or interacting with it in some way) then you lose the point and the feel of the augmentation. We should be using the real world as a template for the AR experience, taking as much of it as possible and then gently melding the virtual world with it – not harshly slapping virtual content on and simply making it move with the motion of the accelerometers. Advances are currently being made toward this, via the use of depth cameras (such as the Kinect) and also computer-vision based algorithms (such as SLAM and SfM). Metaio, the developer of the popular Junaio AR app, are clearly making big leaps in this area as this video shows. We are a little way off this being commercially available, but it shows that the big companies are finding ways to make the meld more seamless.
  3. AR needs to be seamless (and cheap!). The current normal delivery of AR requires either a head-mounted display (HMD) or a smartphone/tablet. Whilst an AR experience will always need some kind of mediation in order to provide the experience, these devices need to be less bulky and also cheaper in order for them to become accessible to a normal person. In archaeology, the majority of the AR apps are likely to involve tourism, or visits to archaeological/historic sites or museums, and therefore the delivery technology needs to be cheap and robust, and ubiquitous enough to enable the AR content to be experienced. Perhaps the fabled real-life Google Goggles that have been promised by the end of the year will go some way to making this happen.
  4. We need to wrest the technology away from advertisers. Up until now, a lot of AR content has just been a way for marketeers to sell us stuff. That’s fine and it’s the way of the world. In fact it obviously drives a lot of the technological advances, because after all who is paying for all this stuff? But we need to be careful that we are also doing good research with AR that does not just have the aim of making the killer app to sell loads of stuff. As archaeologists we are in a unique position where we can advance knowledge and use AR to show people our research in-situ or use it as an aid to field practice, rather than just to present our results. As our discipline moves towards attempting to gain a more embodied experience of the past, AR is the perfect technology to aid in that embodiment and to let us experience visions/sounds/smells of past events in the places that they happened. It can be used to help us think about the past as we are excavating it, and may even aid in/change our interpretations as we go along. We don’t have to be led by the nose with the technology; instead we need to bend it to our will and make use of it intelligently for our discipline. Otherwise we are simply going to end up with Matsuda’s dystopic vision of AR Advertising Hell.
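The occlusion problem in point 2 boils down to a per-pixel depth comparison between the real scene and the virtual content. Here is a toy sketch, assuming we already have a depth map of the real world from something like a Kinect-style sensor or a SLAM/SfM reconstruction (the tiny frames and depth values below are hand-made examples, not real sensor data):

```python
# Toy per-pixel occlusion test: virtual content is drawn only where it is
# nearer to the camera than the real surface at that pixel.

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Blend a virtual layer over a real frame, respecting real-world depth.
    Pixels where virt_depth is None carry no virtual content."""
    out = []
    for row in range(len(real_rgb)):
        out_row = []
        for col in range(len(real_rgb[row])):
            vd = virt_depth[row][col]
            # keep the real pixel if there is no virtual content here,
            # or if the real surface is closer (so it occludes the virtual one)
            if vd is None or real_depth[row][col] <= vd:
                out_row.append(real_rgb[row][col])
            else:
                out_row.append(virt_rgb[row][col])
        out.append(out_row)
    return out

# 1x3 frame: a wall at depth 2.0 in the middle, open space (depth 10) elsewhere
real_rgb   = [["wall", "wall", "wall"]]
real_depth = [[10.0, 2.0, 10.0]]
# a virtual TIE fighter at depth 5.0 across the whole row
virt_rgb   = [["tie", "tie", "tie"]]
virt_depth = [[5.0, 5.0, 5.0]]

print(composite(real_rgb, real_depth, virt_rgb, virt_depth))
# → [['tie', 'wall', 'tie']]  – the wall (2.0) occludes the TIE (5.0)
```

The hard part, of course, is not this comparison but getting a reliable `real_depth` in the first place – which is exactly what the depth-camera and computer-vision work is chasing.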

While in danger of pushing the metaphor of the Archaeological Hype Cycle to breaking point let me sum up:

AR is like one of those archaeological excavations where you are promised the world and then when you break ground it doesn’t quite deliver. You see the amazing Barrow of Inflated Expectation that promises archaeological finds and fame beyond your wildest dreams, you engage the press, start a website, hit every social media site possible and get everyone (including your funders and institution) excited beyond belief. Then you cut a slot through the barrow and realise that it isn’t filled with the grave goods of a lost Bronze Age King; instead there is very little in the Trench at all. The press get bored, your website hit-rate plummets, the previously frequent on-site blogging reduces to once a month and your institution starts worrying about your REF submission. You languish in your trench, wondering how you can rescue the project. But then you remember you have taken a whole load of environmental samples, the few scraps of wood you recovered are good enough for dendro-analysis, you analyse the complex stratigraphy very carefully and realise it is a unique sequence… 2 or 3 years of careful post-excavation analysis by just a few team members follows, the hard graft of making the project really work begins to come to fruition and you are left with a mature project that has real results and is pushing the field of archaeology forward. That is where we are with AR now. We need to get our heads down and do that hard graft, start thinking what we can take from the hype of AR and build it into something that works, helps us during our field practice and dissemination and hopefully pushes archaeological knowledge forward, rather than just being more eye-candy.

Please leave some comments if you can think of or have examples of applications for AR in archaeology or heritage studies that could get us out of the Trench, it would be great to get a discussion going. I have uploaded an HTML version of my Southampton seminar here. Please note, it was exported from Keynote, and therefore the embedded movies only seem to work when viewed in Safari.

Augmenting a Roman Fort

The following video shows something that I have been working on as a prototype for a larger landscape AR project.

As you can see, by using the Qualcomm AR SDK and Unity3D it is possible to augment some quite complex virtual objects and information onto the model Roman fort. I really like this application, as all I have done is take a book that you can buy at any heritage site (in the UK at least) and simply changed the baseboard design so that the extra content can be experienced. Obviously there was quite a lot of coding behind the scenes in the app and 3D modelling, but from a user point of view the AR content is very easy to see – simply print out the new baseboard, stick it on and load up the app.

For me that is one of the beautiful things about AR, you still have the real world, you still have the real fort that you have made and can play with it whether or not you have an iPad or Android tablet or what-have-you. All the AR does is augment that experience and allow you to play around with virtual soldiers or peasants or horses instead of using static model ones. It also opens up all sorts of possibilities for adding explanations of building types, a view into the day-to-day activities in a fort, or even for telling stories and acting out historical scenarios.

The relative ease of the deployment of the system (now that I have the code for the app figured out!) means this type of approach could be rolled out in all sorts of different situations. Some of my favourite things in museums, for instance, are the old-school dioramas and scale-models. The skill and craftsmanship of the original model will remain, but it could be augmented by the use of the app – and made to come alive.

Housesteads Diorama

The model of Housesteads fort in the Housesteads museum

The same is true of modern day prototyping models or architectural models. As humans we are used to looking at models of things, and want to be able to touch them and move them around. Manipulating them on a computer screen just doesn’t somehow seem quite right. But the ability to combine the virtual data with the manipulation and movement of the real-life model gives us a unique and enhanced viewpoint, and can also allow us to visualise new buildings or existing buildings in new ways.

A particularly important consideration when creating AR content is to ensure that it looks as believable or ‘real’ as possible. The human eye is very good at noticing things that seem out of the ordinary or “don’t feel quite right”. One of the main ways to help with creating a believable AR experience is to ensure the real world occludes the virtual objects. That is, the virtual content can be seen to move behind the real-world objects (such as the soldiers walking through the model gateway). It should also be possible to interact with the real-world objects and have that affect the virtual content (such as touching one of the buildings and making the labels appear). This will become particularly important as I move into rolling the system out into a landscape instead of just a scale-model. As I augment the real world with virtual objects, those objects have to interact with the real world as if they are part of it – otherwise too many Breaks in Presence will occur and the value of the AR content is diminished. An accurate 3D model of the real world is quite a bit harder to create than that of a paper fort, but if I can pull it off, the results promise to be quite a bit more impressive…
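The touch-to-label interaction described above is, at heart, a hit test: map a touch on the tracked baseboard to a building footprint and toggle that building’s label. A minimal sketch of the idea (the building names and footprint coordinates are invented for illustration; the real app does this inside Unity against the tracked marker):

```python
# Hypothetical hit-test for the touch-a-building-to-show-its-label idea.
# Footprints are axis-aligned rectangles in baseboard coordinates.

BUILDINGS = {
    "granary":   (0, 0, 4, 3),   # (x, y, width, height) on the baseboard
    "barracks":  (5, 0, 6, 2),
    "principia": (2, 4, 3, 3),
}

labels_visible = set()

def hit_test(x, y):
    """Return the building whose footprint contains (x, y), if any."""
    for name, (bx, by, bw, bh) in BUILDINGS.items():
        if bx <= x < bx + bw and by <= y < by + bh:
            return name
    return None

def on_touch(x, y):
    """Toggle the label of the touched building; return visible labels."""
    name = hit_test(x, y)
    if name is not None:
        if name in labels_visible:
            labels_visible.discard(name)
        else:
            labels_visible.add(name)
    return sorted(labels_visible)

print(on_touch(1, 1))   # touch the granary -> its label appears
print(on_touch(6, 1))   # touch the barracks -> both labels visible
print(on_touch(1, 1))   # touch the granary again -> its label hides
```

In the actual AR app the same logic runs against a ray cast from the touch point into the tracked scene, but the toggle-on-hit structure is identical.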


ARK and Augmented Reality

Recently I have been working away in the Unity gaming engine using it to make some Augmented Reality applications for the iPhone and iPad. It is surprisingly successful and with at least 3 different ways of getting 3D content to overlay on the iOS video feed (Qualcomm, StringAR and UART) the workflow is more open than ever. I have been attempting to load 3D content at runtime, so that dynamic situations can be created as a result of user interaction – rather than having to have all of the resources (3D models, etc.) pre-loaded into the app. This not only saves on file size of the app, it also means that the app can pull real-time information and data that can be changed by many people at once. However, in order to do that I needed some kind of back-end database…
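The runtime-loading idea boils down to the app asking a back-end for a manifest of content tied to the current marker, then fetching only what it needs. A rough sketch of that flow – note that the JSON shape, field names and URLs below are invented for illustration, not an actual ARK or Unity API:

```python
# Illustrative sketch of runtime content loading: the app asks the back-end
# for a manifest of models tied to a marker, then pulls only what it needs.
import json

MANIFEST_JSON = """
{
  "marker": "context_1042",
  "models": [
    {"name": "amphora", "url": "http://example.org/models/amphora.obj", "bytes": 120000},
    {"name": "wall_3d", "url": "http://example.org/models/wall_3d.obj", "bytes": 4800000}
  ]
}
"""

def models_to_fetch(manifest_text, max_bytes=1_000_000):
    """Parse the manifest and keep only models small enough to pull
    over a mobile connection; bigger ones would be deferred or skipped."""
    manifest = json.loads(manifest_text)
    return [m["name"] for m in manifest["models"] if m["bytes"] <= max_bytes]

print(models_to_fetch(MANIFEST_JSON))
# → ['amphora']  – only the small model qualifies for immediate download
```

Because the manifest lives on the server, many users can see content change in real time without anyone shipping a new build of the app – which is precisely the attraction of not baking the resources in.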

For those of you that know me, you will know that as well as doing my PhD I work on the development of the open-source archaeological database system known as the Archaeological Recording Kit (ARK). It seemed like a logical step to combine these two projects and use ARK as the back-end database. So that is what I went and did and at the same time created a rudimentary AR interface to ARK. The preliminary results can be seen in the video below:

This example uses the Qualcomm AR API, and ARK v1.0. Obviously at the moment it is marker-based AR (or at least image recognition based), the next task is to incorporate the iDevices’ gyroscope to enable the AR experience to continue even when the QR code is not visible.