For our application, we used Unity as our real-time 3D engine, with the addition of ARFoundation for augmented reality development. ARFoundation wraps ARKit and ARCore, enabling cross-platform builds while still exposing platform-specific features (such as real-time reflection probes) on iOS and Android devices.
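As a minimal sketch of this setup, the following script (names and scene references here are illustrative, not taken from our project) shows the standard ARFoundation pattern for confirming ARKit/ARCore support on the device before the session and any platform-specific features are enabled:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Checks whether ARKit (iOS) or ARCore (Android) support is available
// on the device before enabling the AR session.
public class ARSupportCheck : MonoBehaviour
{
    [SerializeField] ARSession arSession;   // assumed reference to the scene's ARSession

    IEnumerator Start()
    {
        // Query the underlying platform for AR availability.
        yield return ARSession.CheckAvailability();

        if (ARSession.state == ARSessionState.NeedsInstall)
        {
            // On Android, ARCore may need to be installed or updated first.
            yield return ARSession.Install();
        }

        // Enable the session only when the device is ready to run AR.
        arSession.enabled = ARSession.state >= ARSessionState.Ready;
    }
}
```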
To accurately overlay our interactive and static digital objects, we used a handheld LiDAR scanner to map the dimensions of the existing physical space, paying particular attention to the walls and the locations of the photographs. The scan was then exported as a 3D file and used as the blueprint for building out the virtual environment. An advantage of the LiDAR scanner was that it produced a highly accurate map, matching the event space's physical dimensions almost 1:1.
From there, the scan was imported into Maya. We created image walls for each exhibit according to the blueprint and overlaid them onto their locations in the scan. Additional models of architectural elements that existed in the space at the time of the event were created from photographs and other media and added in. The completed model, with accurate measurements and components placed accordingly, was then exported to Unity.
In Unity, we applied materials for the image and video content and textured the architectural elements. We then created UI components to guide the user, providing brief information and introductions to the app and its contents. Finally, we produced iOS and Android application builds and deployed them to our devices.
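A sketch of how such dual-platform builds can be scripted from the Unity Editor is shown below; the scene name, output paths, and menu entries are placeholders rather than our actual project configuration:

```csharp
#if UNITY_EDITOR
using UnityEditor;

// Editor-only helper that produces Android and iOS player builds
// of the AR exhibit scene. Paths and scene names are placeholders.
public static class ExhibitBuilds
{
    static readonly string[] Scenes = { "Assets/Scenes/Exhibit.unity" };

    [MenuItem("Build/Android APK")]
    public static void BuildAndroid()
    {
        BuildPipeline.BuildPlayer(new BuildPlayerOptions
        {
            scenes = Scenes,
            locationPathName = "Builds/Exhibit.apk",
            target = BuildTarget.Android,
            options = BuildOptions.None
        });
    }

    [MenuItem("Build/iOS Xcode Project")]
    public static void BuildiOS()
    {
        BuildPipeline.BuildPlayer(new BuildPlayerOptions
        {
            scenes = Scenes,
            locationPathName = "Builds/iOS",   // generates an Xcode project folder
            target = BuildTarget.iOS,
            options = BuildOptions.None
        });
    }
}
#endif
```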
In the finished app, the digital scene was instantiated through image detection, which spawned a static prefab containing all of our architectural and interactive elements. From there, images and other triggers could be tapped to bring up further information and media, and the user could move around the space from exhibit to exhibit while proper tracking was maintained.
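A minimal sketch of this spawning step, assuming ARFoundation's tracked image workflow (the component and field names are illustrative), subscribes to the image manager's change event and instantiates the exhibit prefab once, parented to the detected reference image so the scene stays registered to the physical space:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Spawns the static exhibit prefab once, anchored to the first
// reference image detected by the AR session.
public class ExhibitSpawner : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager imageManager; // assumed scene reference
    [SerializeField] GameObject exhibitPrefab;            // prefab with architectural + interactive elements
    GameObject spawned;

    void OnEnable()  => imageManager.trackedImagesChanged += OnChanged;
    void OnDisable() => imageManager.trackedImagesChanged -= OnChanged;

    void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var trackedImage in args.added)
        {
            if (spawned == null)
            {
                // Parenting to the tracked image keeps the digital scene
                // aligned with the physical space as tracking updates.
                spawned = Instantiate(exhibitPrefab, trackedImage.transform);
            }
        }
    }
}
```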