Monthly Archives: May 2016

Accurate OnSite Georeferenced Subsurface Utility Model Visualisation

Public utilities like gas, electricity and water are essential in modern life. Maintaining this infrastructure is a challenge since pipelines are usually buried under the surface. To properly maintain gas pipelines, the maintenance process has to be planned carefully to avoid traffic congestion and to prevent gas from leaking and harming the local environment. The paper “Accurate OnSite Georeferenced Subsurface Utility Model Visualisation”, presented at SALENTO AVR 2015, proposes a solution based on AR technology.

The positions of gas pipes, electrical wires etc. are usually only vaguely documented. In most cases the terrain has to be scanned to locate the exact position and depth of the pipes. The scanning process takes buildings, hydrants and street borders into account to locate the utilities. The next step is usually marking the presumed position of the utilities with spray paint on the road surface. The paper proposes to use the scanning data as input for an AR system that displays renderings of the presumed locations in the real world. Furthermore, for the exact measurement of the street, the use of a robotic total station (RTS) is proposed, which can measure the geolocation of an asset with millimetre accuracy. The system consists of the RTS, a survey prism mounted on a pole and a tablet computer attached to the same pole.



The user is able to move the prism around and the display updates in real time and shows a 3D virtual excavation near the prism location.
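To place the virtual excavation correctly, surveyed global positions have to be expressed relative to the prism. The following is only a minimal sketch of such a conversion, assuming hypothetical WGS-84 coordinates and a flat-earth approximation valid over a small site; the paper's actual system works with RTS measurements and is not described at this level of detail:

```python
import math

def geodetic_to_enu(lat, lon, h, lat0, lon0, h0):
    """Approximate east/north/up offset (metres) of a point relative to a
    site origin, using a local flat-earth approximation (small sites only)."""
    R = 6378137.0  # WGS-84 semi-major axis in metres
    d_lat = math.radians(lat - lat0)
    d_lon = math.radians(lon - lon0)
    east = d_lon * R * math.cos(math.radians(lat0))
    north = d_lat * R
    up = h - h0
    return east, north, up

# Hypothetical surveyed prism position and pipe survey point
prism = (46.00000, 7.00000, 500.0)
pipe = (46.00001, 7.00002, 498.5)
e, n, u = geodetic_to_enu(*pipe, *prism)
# The renderer would then draw the pipe at this offset from the prism,
# e.g. about 1.5 m east, 1.1 m north and 1.5 m below the surface here.
```
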


Although the system is in an early stage, it shows potential for future development. Future steps could involve the use of model and terrain renderings to help interpret the data. The system could also be used to collect data on site, such as hydrant and manhole locations, and load it into a database. Beyond that, the technology could be applied in other contexts such as construction work, building site monitoring, or the augmentation of building walls. [1]



[1] S. Côté and A. Girard-Vallée, “Accurate OnSite Georeferenced Subsurface Utility Model Visualisation”, SALENTO AVR 2015


AR applications in medical training

Medical education is one of the application fields of augmented reality. As mentioned in the use-case section, the head-mounted device HoloLens released by Microsoft can be used to visualize a fully virtual body and its parts in a room for studying anatomy. But there are scientific approaches in this area as well.

In 2014, a study about skills assessment in minimally invasive surgery with AR was conducted. The scientists developed a system consisting of a box trainer, a camera and surgery tools equipped with sensors. Basic skills, hand-eye coordination and bimanual operation were tested in this study. For this purpose, experienced surgeons were compared with novices in the field. Differences between the two groups regarding smoothness and economy of motion were found. As a conclusion, the authors highlighted the potential of the technology for medical training. [1]

The study by Kim, Chan, & Du did not deal with the potential of AR for surgical training, but with interaction with a radar screen. For this study as well, an AR system was developed. In addition, a training textbook for the same task was written. Two different groups of undergraduates trained either with the AR system or with the textbook. During a practice session, the performance of the students was measured, and significant differences were found between the two groups. Students practicing with the augmented reality application were more aware of the situation than students training with the textbook. This indicates that AR can have a positive learning effect on trainees in the medical field. [2]

The above studies suggest that the potential of augmented reality in medical training is high. In 2016, Lahanas et al. conducted a review of the educational opportunities of this technology. The authors searched for relevant articles and evaluated them. Up to August 2015, they found twenty-seven relevant studies. The articles could be assigned to the following categories:

  • Laparoscopic surgical training
  • Neurosurgical procedures
  • Echocardiography

Even though face and construct validity was proven for most training methods, none of the applications was proven to have the ability to transfer information to the user. [3]

In conclusion, there are promising possibilities for applying AR to medical training. Possible areas of application in medical education are minimally invasive surgery, interaction with a radar screen, or anatomy. It seems that AR applications have a positive learning effect. However, there is no evidence in the literature that AR systems are able to transfer information to medical personnel.


[1] Lahanas, Loukas, Smailis, & Georgiou, 2015

[2] Kim, Chan, & Du, 2015

[3] Lahanas et al., 2015

Visual aspects of Augmented Reality

Vision Summit 2016

The International Conference on Augmented Reality, Virtual Reality and Computer Graphics (SALENTO AVR) takes place annually in the city of Otranto in Italy. The main goal of this conference is to bring together researchers, scientists, and practitioners to discuss key issues, approaches, ideas, open problems, innovative applications and trends in virtual and augmented reality, 3D visualization and computer graphics. The main areas of research are in the fields of medicine, cultural heritage, arts, education, entertainment, and the industrial and military sectors. Among the topics currently discussed at this conference are the visual aspects of AR and research regarding perceptual issues in AR. [1]

Legibility Issues within Augmented Reality Applications

A lot of newly developed Augmented Reality applications in the industrial environment use Head-Worn Displays (HWDs) to provide the user with technical documentation by adding textual graphics, 3D models and animations to the screen. A main problem of augmented reality in the industrial sector is the legibility and readability of the text displayed on HWDs. Since displaying text style, colour and illustrations properly is very important, efficient text visualisation is critical for industrial AR applications. Currently there is no standard which defines how text should be displayed on HWDs. As a result, many developers do not know the optimal text style for specific displays and applications.

According to the literature, legibility of a text depends mainly on the following aspects: background, display technology and text style including font, size and colour. There are currently four approaches how to increase text legibility in AR:

  • Move the text on the display or direct the user’s gaze towards high-contrast areas.
    • For assembly and maintenance operations, where the user must be focused on the task, this is not very feasible. Some workstations may not offer darker areas in the scene.
  • Modify the text contrast with hardware solutions in the display (e.g. LCD masking).
    • This solution, while the most promising approach, is still in the research stage.
  • Adapt the text colour according to the background.
    • For industrial applications this is not always feasible, because it could violate colour-coding guidelines regulated by international standards (e.g. “ASME A13.1, 2007”, “ISO 3864” safety symbols).
  • Employ the outline/billboard technique.
    • Based on the latest results, this is suggested as a feasible solution for industrial applications because of its flexibility and ease of implementation.


Comparing the two text styles. Plain text on the left, billboard on the right.

The latest research suggests that enhancing text contrast via software, using an outline or billboard, is an effective practice to improve legibility in many situations. Maximum-contrast styles, like “black text on a white billboard” or “white text only”, are suggested when the reading time of a scenario is important. In conclusion, billboards provide the best performance, but at the slight cost of scene occlusion. [2]
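As a hedged illustration of the colour-adaptation approach listed above (this is not the paper's own algorithm), an application could sample the background behind the text and pick whichever of black or white yields the higher WCAG 2.0 contrast ratio:

```python
def relative_luminance(rgb):
    """WCAG 2.0 relative luminance from an sRGB triple in [0, 255]."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black on white)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def pick_text_color(bg):
    """Choose black or white text, whichever contrasts more with the background."""
    black, white = (0, 0, 0), (255, 255, 255)
    return black if contrast_ratio(black, bg) >= contrast_ratio(white, bg) else white
```

As the list above notes, such automatic colour changes may conflict with colour-coding standards, which is why the billboard technique is preferred for industrial use.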



[2] A. Uva, M. Fiorentino, G. Monno, “Addressing Legibility Issues in Industrial Augmented Reality”, 2014


3D real time capturing combined with AR


Communication between two remote parties has always been an interesting research topic. With the emerging technology of augmented reality and the devices based on it, the opportunities have reached new heights. Creating a virtual face-to-face experience is now possible with the help of 3D capturing and augmented reality.


“holoportation is a new type of 3D capture technology that allows high-quality 3D models of people to be reconstructed, compressed and transmitted anywhere in the world in real time. When combined with mixed reality displays such as HoloLens, this technology allows users to see, hear, and interact with remote participants in 3D as if they are actually present in the same physical space. Communicating and interacting with remote users becomes as natural as face-to-face communication.” [1]


The Room2Room technology makes it possible to create telepresence between two remote participants. Its aim is to recreate the experience of a real face-to-face conversation with the help of 3D capturing and augmented reality technologies. Unlike the holoportation approach stated above, Room2Room telepresence does not rely on any wearable device. The local user is captured by colour and depth cameras, which allows a virtual copy of the user to be created by projecting an image with the correct perspective. In a research environment, this setup has been shown to enhance non-verbal communication and to allow physical collaborative tasks to be performed more efficiently. [2]

Fig. 1: Room2Room Setup
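The depth-capture step behind such systems can be illustrated with a minimal pinhole back-projection sketch. The intrinsics fx, fy, cx, cy here are hypothetical placeholders; the actual Room2Room pipeline additionally calibrates cameras and projectors and fuses colour data:

```python
def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into 3D camera-space points using
    the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy, Z = z."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # zero marks invalid depth readings
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Tiny 1x2 depth image: one invalid pixel, one pixel 2 m away
cloud = backproject([[0.0, 2.0]], fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

The resulting point cloud is what a projector-based system renders from the remote viewer's perspective to create the life-size virtual copy.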



[2] T. Pejsa, J. Kantor, H. Benko, E. Ofek, and A. Wilson, “Room2Room: Enabling Life-Size Telepresence in a Projected Augmented Reality Environment”, 2016.

Augmented Human International Conference

The conference focuses on scientific contributions towards augmenting human capabilities through technology, for increased well-being and an enjoyable human experience. The topics of interest include, but are not limited to: Brain-Computer Interfaces, Muscle Interfaces and Implanted Interfaces; Wearable Computing and Ubiquitous Computing; Augmented and Mixed Reality; Human Augmentation, Sensory Substitution and Fusion; Hardware and Sensors for Augmented Human Technologies; Safety, Ethics, Trust, Privacy and Security Aspects of Augmented Humanity. [1]

  • First Held: 2010
  • Last Held: February 25-27, 2016 in Geneva

The topics below are chosen from presentations held at the Augmented Human Conference, 25-27 February 2016 in Geneva, with relevance to Augmented Reality.

Wearability Factors for Skin Interfaces [2]
There are two aspects of wearable skin interfaces to consider.
Body Aspect: location, body movements and body characteristics
Device Aspect: attachment methods, weight, insulation, accessibility, communication, interaction, aesthetics, conductors, device care and connection, battery life
Skin interfaces example

A Lifelog System for Detecting Psychological Stress with Glass-equipped Temperature Sensors [3]
This presentation introduced a lifelog system that enables measuring biological information at all times with wearable devices. The experiment was done using glasses that measure nasal skin temperature while recording video at the same time. With that information the team could identify stress situations.

Augmented Visualization for Guiding Arm Movement in the First-Person Perspective [4]
The motivation behind Guiding Arm Movement is to learn physical activities such as Tai-Chi, but it can also be used to learn any other movements. The user wears AR glasses and sees the movement of the body as an augmented shape of body parts. Occlusion caused by other students or objects becomes irrelevant, as there is no longer a traditional trainer showing the movements.
Physical activities can be learned in two steps:
1. Learn the new movement roughly, i.e. the complete form of the moves.
2. Once the basic movements are known, learn the details by correcting small deviations.

Exploring Eye-Tracking-Driven Sonification for the Visually Impaired [5]
The idea is that the user can decide which information is relevant. Control is done by tracking the user's eye movements across the exploration field. The device can play sounds for colours, text and facial expressions.
In colour sonification, colours are mapped to instruments and the pitch represents brightness.
If text appears in the eye tracker's view, it is mapped to spoken sounds; with pitch and stereo panning, it can be located at a 2D position.
Facial expressions are mapped to instruments, similar to colour recognition.

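The sonification mappings described above can be sketched as follows; the pitch range and the pan formula are hypothetical, as the paper does not specify these exact values:

```python
def sonify(rgb, x_norm):
    """Map a sampled colour and a normalised horizontal gaze position to
    (midi_pitch, pan). Brightness drives pitch (assumed range C3..C6);
    x_norm in [0, 1] drives stereo pan in [-1, 1] (left to right)."""
    brightness = sum(rgb) / (3 * 255)          # 0 = dark, 1 = bright
    midi_pitch = round(48 + brightness * 36)   # MIDI 48 (C3) up to 84 (C6)
    pan = 2 * x_norm - 1                       # -1 = far left, +1 = far right
    return midi_pitch, pan

# A bright area at the centre of the field of view
pitch, pan = sonify((255, 255, 255), 0.5)  # → (84, 0.0)
```

The instrument choice per colour hue and the spoken-text synthesis would be layered on top of this basic pitch/pan mapping.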


[2] Wearability Factors for Skin Interfaces. Xin Liu, Katia Vega, Pattie Maes, Joe A. Paradiso. MIT Media Lab
[3] A Lifelog System for Detecting Psychological Stress with Glass-equipped Temperature Sensors. Hiroki Yasufuku, Tsutomu Terada, Masahiko Tsukamoto
[4] Augmented Visualization for Guiding Arm Movement in the First-Person Perspective. Ping-Hsuan Han, Kuan-Wen Chen, Chen-Hsin Hsieh, Yu-Jie Huang, and Yi-Ping Hung
[5] Exploring Eye-Tracking-Driven Sonification for the Visually Impaired. Michael Dietz, Maha El Garf, Ionut Damian, Elisabeth André

Smart eyewear and the opportunities of the future

At past and upcoming Augmented Reality and technology conferences, one recurring topic is often discussed: the evolution of wearable technologies and the opportunities for the future in the consumer and enterprise areas.

One upcoming subject under discussion is the next generation of smart eyewear. After Google presented its Google Glass, the important question is what happens next with this mobile computing platform and what its applications in enterprises are.

These topics are currently discussed at Augmented and Virtual Reality conferences. Below is a selection of presentations and discussions about Augmented Reality wearables, especially smart glasses and smart eyewear, at 2016 technology conferences.

Conference Name: Wearable Technology Show, March 2016, London, UK [1]

Presentation title: Sony Smart Eyeglass – Designing for the Human Being
Description: Customers do not yet know the possibilities of smart eyeglasses, and the question is what good solutions and applications for them would be. This keynote expands on Sony’s philosophical design considerations for Smart Eyeglass, as well as Augmented Reality application designs and use cases.

Presentation title: An Introduction to Smart Eyewear
In this presentation, several of the leading protagonists in Augmented Reality eyewear present their smart glasses and discuss the possibilities for consumers and enterprises with these new mobile computing platforms.

Conference Name: Augmented World Expo, June 2016, Santa Clara CA, USA [2]

Presentation title: Smart Glasses – Opportunities for the Enterprise Market
The main question of this short presentation is: what are the valuable opportunities for smart glasses in the enterprise? The presentation reviews the evolution of the technology and how the industry has adopted this change to boost human productivity.

Presentation title: The butterfly dream: Smart eyewear in 2031
The population of Augmented Reality devices will increase over the next 15 years, and the eyewear will look almost like regular glasses. That will make the Augmented Reality industry very powerful and important, and it will also bear great responsibilities. These topics and future design choices are discussed in this presentation.

Presentation title: The business impact of smart glasses for work
This presentation shares real stories of how businesses are deploying smart glasses in production environments. It also gives an outlook on how smart glasses are beginning to transform workplaces and how workers will use technologies like the Internet of Things (IoT) and big data analytics. Smart glasses connect the human workforce with the intelligence of machines and data.

Conference Name: ARVR Innovate, April 2016, Dublin, Ireland [3]

This conference has one presentation on the topic of smart glasses and Augmented Reality. The presentation’s title is “Immersive AR & Smart Glasses – the opportunities on the road to consumer adoption”, but there is no description of its content.

A short discussion about smart glasses and smart eyewear

Since Google launched its Google Glass project, it has been clear that the use of smart eyewear will become inevitable in the near future. The question is not if, but when the technology will be available and technically mature.

Smart glasses will link available information and humans closer together, feeding augmented live information into the normal view during people’s activities. Another question concerns the application areas of this technology: where is it helpful and useful, and where can enterprises generate revenue? It is only a matter of time before smart glasses become a part of our daily lives.

Here is a list of Augmented Reality smart glasses which are in development or already available. The list is, of course, not exhaustive. [4]

Epson Moverio Smart Eyewear

Epson Moverio BT-200 Smart Glass

Meta Augmented Reality Headset

Meta 2 Smart Glass (Development Kit)

Vuzix Smart Glasses

Vuzix M300 Smart Glass

LaForge Shima

La Forge Optical – Shima Smart Glass

Optinvent ORA-2

Optinvent Ora-X Smart Glass


[1] Wearable Technology Show, Augmented Reality & VR Show Conference 2016, [28.04.2016]
[2] AWE, Augmented World Expo, [30.04.2016]
[3] ARVR Innovate: Where Augmented and Virtual Reality Get Down to Business, [01.05.2016]
[4] Hongkiat, Ten Forthcoming Augmented Reality & Smart Glasses You Can Buy, [01.05.2016]