Why virtual exhibitions feel broken and how to fix them

Have you ever tried those games that museums use to showcase their collections? They can be amazing, but you might have noticed that hopping from one virtual museum to another isn’t as smooth as it should be. Each virtual experience is so different that it feels like you’re starting from scratch every time. So, why is this happening, and what can museums do to fix it?

Firstly, let’s talk about why virtual experiences in museums feel so disconnected. The main reason is that each museum typically develops its game independently. They hire different developers, use various technologies, and focus on unique user experiences. This approach results in a wide variety of environments, controls, and navigation systems. While it’s great for creativity and innovation, it can be a bit of a headache for users.

Imagine visiting a physical museum where the way you move, look at exhibits, and interact with displays changes room by room. That’s what it feels like when switching between different museum games. One minute, you’re using a control pad to move around, and the next, you’re teleporting from spot to spot with a completely different interface. It’s like learning a new language every time you want to enjoy some art or history.

Consistency is key in creating a seamless and enjoyable user experience. When virtual experiences are consistent, users can navigate them more easily and spend more time engaging with the content rather than figuring out the controls. Think about popular video games or apps. They follow design conventions that make it easy for users to understand and interact with them, no matter who made them.

In the case of museums, a consistent experience would mean that users could move smoothly from the Louvre to the British Museum to the Smithsonian without needing to adapt to new controls or navigation methods. This consistency would enhance the overall experience, making virtual visits more immersive and enjoyable.

Still from Mona Lisa Beyond the Glass. Emissive and HTC Vive Arts

So, how can museums fix this and offer a more unified virtual experience? Here are a few ideas:

Adopt Common Standards: Museums could collaborate to establish common standards for experiences. These standards would cover everything from movement controls to user interfaces. By adopting these standards, developers can create games that feel familiar to users, even if they’re visiting different collections.

Use Shared Platforms: Another approach is to develop experiences on shared platforms. For example, a platform specifically designed for cultural heritage games could provide a consistent framework for all museums. This platform could offer standardised controls and interfaces, ensuring that users have a similar experience across different games.

Collaborate and Share Best Practice: Museums can also benefit from collaborating and sharing best practice. By working together, they can learn from each other’s successes and challenges, leading to virtual experiences that are more consistent, user-friendly and engaging.

See Virtual Curation as Actual Curation: Finally, museum teams are often not directly involved in creating virtual exhibitions, despite having all the skills required. It’s important for curatorial staff to stay in close contact with whoever builds the virtual experience; after all, a virtual exhibition is a real exhibition.
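To make the “common standards” idea more concrete: one practical form it could take is a small, machine-readable description of controls and navigation that every compliant experience agrees to honour. The sketch below is purely illustrative — no such cross-museum standard exists, and every name in it is hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a shared control-scheme descriptor. If every
# museum's experience published one of these, a compliant viewer could
# present the same movement and interaction conventions everywhere.
@dataclass
class ControlScheme:
    locomotion: str = "teleport"      # e.g. "teleport" or "smooth"
    interact_button: str = "trigger"  # which controller input selects an exhibit
    menu_gesture: str = "wrist_turn"  # how the visitor opens the menu

@dataclass
class ExhibitionManifest:
    title: str
    controls: ControlScheme = field(default_factory=ControlScheme)

# Two museums publishing manifests with the same default scheme would
# give visitors identical navigation out of the box.
louvre = ExhibitionManifest(title="Mona Lisa: Beyond the Glass")
bm = ExhibitionManifest(title="British Museum virtual tour")
assert louvre.controls == bm.controls
```

The point is not this particular schema but the principle: agreeing on a shared vocabulary for controls lets each museum keep its creative freedom while visitors keep their muscle memory.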

Still from virtual British Museum tour. Google Arts & Culture

In a nutshell, the disjointed feel of museum games boils down to a lack of standardisation and collaboration. Each museum’s unique approach, while innovative, can make the user experience a bit jarring when moving between different virtual environments. But there’s hope! By adopting common standards, using shared platforms, collaborating, and applying existing skills in real-world curation to virtual curation, museums can create more seamless and enjoyable experiences.

Imagine a virtual world where you can smoothly glide from the Mona Lisa at the Louvre to the Rosetta Stone at the British Museum. That’s the future we should aim for – one where technology enhances our ability to explore and appreciate our rich cultural heritage.

So, here’s to hoping that museums will come together to make virtual experiences more connected and user-friendly. Until then, let’s keep exploring and enjoying the incredible virtual journeys they offer, one unique experience at a time.

The Virtual Reading Room


When you think of an archival reading room, you probably picture a quiet, dimly lit space filled with old books, manuscripts, and people whispering as they handle fragile documents with care. While this traditional setting has its charm, imagine adding a splash of modern technology to it: specifically, Virtual Reality (VR).

Firstly, VR can bring history to life in ways that static documents simply can’t. Imagine putting on a VR headset and being transported to ancient Rome, walking through the bustling streets, or standing in the middle of a historic battle. This immersive experience can make learning about history not just educational, but truly engaging and memorable. It’s one thing to read about the past, but another to virtually experience it.

Secondly, VR can make archives far more accessible. Not everyone has the luxury of travelling to archives in person, and some users have physical limitations that prevent them from handling delicate documents. With VR, archive users can explore these treasures from anywhere in the world, making historical research more inclusive.

Additionally, VR can help preserve the original documents. Handling old manuscripts and books, no matter how carefully, can lead to wear and tear over time. By offering VR replicas, archives can minimise the need for physical handling, thus preserving these irreplaceable items for future generations.

Lastly, VR is just plain cool! Integrating cutting-edge technology into archival research can attract younger audiences who might otherwise find the traditional approach boring. It’s a great way to bridge the gap between the past and the present, showing that history isn’t just about dusty old books—it’s about stories and experiences that are still relevant today.

So, let’s embrace VR in archival reading rooms. It’s not about replacing traditional methods, but enhancing them, making history more accessible, engaging, and preserved for all to enjoy.

Uncertain Space: a virtual museum for the University of Bristol

By Catherine Dack

The Uncertain Space is the new virtual museum for the University of Bristol. It is the result of a joint project between Library Research Support and Cultural Collections, funded by the AHRC through the Capability for Collections Impact Funding, which also helped fund the first exhibition.

The project originated in a desire to widen the audience for some of the University’s collections, but in a sustainable way that would persist beyond the end of the project. Consequently, The Uncertain Space is a permanent museum space with a rolling programme of exhibitions and a governance structure, just like a physical museum.

The project had two main outcomes: the first was the virtual museum space and the second was the first exhibition to be hosted in the museum. The exhibition, Secret Gardens, was co-curated with a group of young Bristolians, aged 11-18, and explores connections between the University’s public artworks and some of the objects held in our rich collections.

Entrance to the Secret Gardens exhibition

The group of young people attended a series of in-person and online workshops to discover their shared interests and develop the exhibition. The themes of identity, activism and environmental awareness came through strongly and these helped to inform their choice of items for the exhibition.

Choosing items from Special Collections for the exhibition

Objects, images and audiovisual clips linking with each of the public artworks were selected from the Theatre Collection, Special Collections, the Botanic Gardens and from collections held in the Anatomy, Archaeology and Earth Sciences departments. For some of the choices, digital copies already existed, but most of the items had to be digitised by photography or by scanning with a handheld structured light scanner. The nine public artworks were captured by 360-degree photography. In addition, the reactions of the young people were recorded as they visited each of the public artworks, and these recordings are also included in the exhibition.

Scanning a piece of malachite for the first exhibition

As the virtual museum was designed to mimic a real-world exhibition, the University of Bristol team and the young people worked with a real-world exhibition designer, and found that designing a virtual exhibition was a similar process to designing a physical one. Some aspects of the process, however, were unique to the virtual format, such as the challenge of making digital versions of certain objects. The virtual museum also offers possibilities that a physical exhibition cannot, for example the opportunity to pick up and handle objects and to be transported to different locations.

Towards the end of the project, a second group of young people, who were studying a digital music course at Creative Youth Network, visited the virtual museum in its test phase and created their own pieces of music in response. Some of these are included in a video about the making of the museum.

The museum and first exhibition can be visited on a laptop, PC or mobile device via The Uncertain Space webpage, by downloading the spatial.io app onto a phone or VR headset, or by booking a visit to the Theatre Collection or Special Collections, where VR headsets are available for anyone to view the exhibition.

We are looking forward to a programme of different exhibitions to be hosted in The Uncertain Space and are interested in hearing from anyone who would like to put on a show.

You can read more about the making of The Uncertain Space and its first exhibition from our colleagues in Special Collections and Theatre Collection:
Our collections go virtual!
Digitising for the new virtual museum: The Uncertain Space

Shiny shells and steamships: an experiment in phototexturing a 3D model

By Catherine Dack

In the Library Research Support team we have quite a bit of experience with 3D scanning and photogrammetry, but had never tried combining digital photographs with scan data to make a ‘photorealistic’ 3D model.
When we were asked to scan a large, engraved shell belonging to the Brunel Institute, we decided it was time to give it a go, using our Artec Space Spider structured light scanner and the ‘phototexturing’ function in Artec Studio 16. This option allows photographs of the object to be combined with the digital model to improve its textures and produce a more photorealistic result.

The shell in question has a shiny surface and is engraved with text and images, including depictions of the SS Great Britain and Omar Pasha, an Ottoman Field Marshal and governor. Shiny surfaces can be problematic when scanning, but we dialled up the sensitivity of the scanner a bit and encountered no difficulties. We were also concerned that the very low-relief engravings would not be discernible in the final model, which did indeed prove to be the case.

We were careful to capture both scans and photographs under the same conditions, scanning one side of the shell and then, without moving it, taking photographs from every angle before turning it over to scan and photograph the underside.

When processing the scan data, the main difficulty was fixing a large hole in the mesh which occurred in the cavity of the shell where the scanner had not been able to capture data. Because of the complex geometry, Artec Studio’s hole-filling options simply covered the hole with a large blob. Therefore, we used the bridge function to link opposite edges of the large hole and subdivide it into smaller ones, which could be filled with a less blobby result. We then used the defeature brush and the smoothing tool to reduce flaws. The result is not an accurate representation of the inside of the shell, but gives a reasonable impression of it and, without any holes in the mesh, the model can be printed in 3D.
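The bridging idea can be sketched in a few lines of Python: given the ordered boundary loop of a large hole, adding a “bridge” between two roughly opposite vertices splits it into two smaller loops, each of which is easier to fill cleanly. This is a toy illustration of the strategy described above, not Artec Studio’s implementation, and the function name is hypothetical:

```python
def bridge_split(boundary_loop):
    """Split a hole's ordered boundary loop into two smaller loops by
    'bridging' two roughly opposite vertices. Hypothetical helper: a toy
    version of the bridge step, not Artec Studio's actual algorithm."""
    n = len(boundary_loop)
    i, j = 0, n // 2                    # pick two roughly opposite vertices
    side_a = boundary_loop[i:j + 1]     # one side, including both bridge ends
    side_b = boundary_loop[j:] + boundary_loop[:i + 1]  # the other side
    return side_a, side_b

# A 10-vertex hole becomes two 6-vertex holes sharing the bridge edge;
# each can then be filled with far less risk of a 'blobby' patch.
hole = list(range(10))
a, b = bridge_split(hole)
```

Subdividing before filling works because hole-filling heuristics behave much better on small, nearly planar boundaries than on one large boundary spanning complex geometry.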

Adding texture from the photographs was simply a matter of importing them in two groups (photos of the top and photos of the underside) and matching them to the fusion. A handful of photographs couldn’t be matched, but there was enough overlap between the others to complete the texture. The phototextured model does show some shadows, as we were not using dedicated lights, but there is a significant improvement in the resolution and in the visibility of the engravings.

The shell before phototexturing, showing texture captured by the scanner.
The shell with texture from the photographs applied.

When we came to experiment with printing the model, we found there was not enough 3D geometry to reproduce the engravings, even though we had avoided simplifying the mesh during processing. As the faint engravings on the shell are mostly visible through discolouration, we think that 3D printing in colour would be a good solution; the Brunel Institute are also considering other possibilities, such as engraving directly onto a 3D print. We look forward to seeing the result of their chosen solution.

3D scanning and virtual reality environments at the University of Bristol

By Digital Archivist, Emma Hancox

As well as digitising two-dimensional materials, at the University of Bristol we have been experimenting with three-dimensional scanning methods. We are actively engaged in digital preservation, and the creation of 3D scans brings its own set of challenges. In terms of access, we have started working on ways of increasing interaction with the 3D models we create: breaking down barriers, giving our collections a wider reach and unlocking their potential in the long term.

In June 2021 we used 3D scanning methods to image the Blandford collection of antiquities, which is housed in the University’s Department of Anthropology and Archaeology. The collection was donated to the University by Dennis Blandford, a retired classics teacher, and includes Greek and Roman artefacts such as pottery, terracotta figures, roof tiles, bracelets and pins.


It took three members of staff six days to scan around half of the collection, prioritising in-demand and unusual objects. The team used a hand-held scanner, an Artec Space Spider. At this stage the work involved data capture and upload of the scans to a cloud server. Post-processing tasks, such as reconstructing 3D depth, took a further three weeks to complete.

During the scanning stage the team found that objects with reflective surfaces were a challenge because they reflected the light from the scanner. Larger objects were easier to scan than smaller ones. A cake stand with stickers placed in a non-repeating pattern was used so that objects could be rotated while scanning; the stickers helped the scanner track the rotation. A lesson learnt during the processing stage was that objects with plain sides were difficult to auto-align, because there were no distinguishing features on which to base the alignment.

Of course, the process of 3D scanning creates a set of digital assets that need to be preserved in the long term. The 3D models were stored in the Wavefront OBJ format, which was selected because it is open, widely used and considered an acceptable format for scanned 3D objects by the Library of Congress in their recommended formats statement. Alongside the .obj file, a .png file holds the colour and texture information and an .mtl file contains information about how the textures should be applied to the 3D object. All three files are required to view the model. Preservica (the digital preservation system we use) supports format identification, validation and rendering of Wavefront OBJ with associated .mtl and image files such as .png, and we are in the process of testing the ingest of 3D scans into the system.
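The three-file dependency described above is explicit in the files themselves: the .obj names its material library in an `mtllib` statement, and the .mtl names its texture image in a `map_Kd` statement. A minimal sketch of following those links in Python (the helper function and the toy `shell.*` filenames are our own; real OBJ/MTL files carry many more statement types):

```python
import tempfile
from pathlib import Path

def linked_textures(obj_path):
    """Follow an OBJ's 'mtllib' references and collect the texture images
    ('map_Kd' lines) named by each material library it points to.
    Hypothetical helper for illustration, not part of any preservation tool."""
    obj_path = Path(obj_path)
    textures = []
    for line in obj_path.read_text().splitlines():
        if line.startswith("mtllib "):
            for name in line.split()[1:]:
                mtl = obj_path.parent / name   # .mtl is found relative to the .obj
                if mtl.exists():
                    for m in mtl.read_text().splitlines():
                        if m.startswith("map_Kd "):
                            textures.append(m.split(maxsplit=1)[1])
    return textures

# A minimal three-file set like the one described above (toy content).
d = Path(tempfile.mkdtemp())
(d / "shell.mtl").write_text("newmtl surface\nmap_Kd shell.png\n")
(d / "shell.obj").write_text("mtllib shell.mtl\nv 0 0 0\n")
print(linked_textures(d / "shell.obj"))  # -> ['shell.png']
```

Because the links are by filename, all three files must be kept together (and unrenamed) on ingest; a check like this could flag models whose texture or material files have gone missing.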

As well as making the 3D models available on data.bris.ac.uk, we wanted to explore ways of allowing users to interact with them in a virtual reality environment. A virtual museum environment was built using the Valve Hammer Editor and shared via the Steam Community. It allows the user to view a clipboard with a list of boxes on it, select a box and view the items from the Blandford Collection within it. The items can be sorted, shrunk or enlarged, and taken to a returns table when finished with.

The Blandford Collection is primarily used for teaching and the creation of the 3D models and VR environment will allow more students to connect with the objects. It also enables wider access to the collections from users based anywhere in the world. In the future we would like to have a virtual museum space where we can hold a series of virtual exhibitions using a variety of 2D and 3D scans. We envisage that these could be curated by students as part of their university courses.

One of the challenges we are yet to explore is how we might approach preserving the 3D museum environment we have created and this is something we would like to investigate in the future.

Thank you to my colleagues Stephen Gray, Catherine Dack and Sam Brenton who carried out this work and explained their processes to me.