Week 2 — Work of the Imagination 💫

From 19th February to 25th February

Sanya Thapar
5 min read · Apr 14, 2021

Team: Jinsong Liu (Sylvester), Sanya Nayar, Shiwen Shen (Svaney) and Ziyou (Ines) Yin

From the feedback we received the previous week, we decided it was time to go beyond the work of the intellect. We sat down for a group meeting on Friday and planned to reorient our approach so that a connection between the physical and the digital could be established.

We did a SWOT analysis to determine which advantages of being physically present in a museum needed to be carried over to the online exploration of its collections. We realised that a digital presence afforded a huge opportunity to cater to an international audience spread across a wide geographical area, but at the same time, measures were needed to compensate for the lack of sensory immersion and personalisation usually experienced during a physical visit to the museum.

I started off by researching different user interfaces. At tabletop scale, one of the most interesting research trends is Tangible User Interfaces (Holmquist et al., 2004). In TUIs, everyday objects are used to simulate, and to a certain extent extend, ordinary Graphical User Interfaces (GUIs) through the direct manipulation of physical objects. “An alternative approach to interaction with physical space would be to exploit properties of objects’ motion (such as path, speed, direction and acceleration) and let applications make sense of these dynamic features in a new way, mainly as intentional actions.” (Hirsbrunner and Pallotta, 2016). These are called Kinetic User Interfaces (KUIs), and they give interactions a physical embodiment.
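
To make the kinetic idea concrete for myself, I put together a tiny sketch. It is only an illustration: the mouse stands in for a tracked physical object, and the speed threshold is an arbitrary guess of mine, not anything taken from the KUI literature.

```java
// Minimal Processing sketch of the KUI idea: derive kinetic features
// (speed, direction) from a moving point each frame and treat fast
// motion as an intentional gesture. The mouse stands in for a tracked
// physical object.
float prevX, prevY;

void setup() {
  size(640, 480);
  prevX = mouseX;
  prevY = mouseY;
}

void draw() {
  background(30);
  float dx = mouseX - prevX;
  float dy = mouseY - prevY;
  float speed = dist(prevX, prevY, mouseX, mouseY); // pixels per frame
  float direction = atan2(dy, dx);                  // radians

  fill(255);
  text("speed: " + nf(speed, 0, 1) + " px/frame", 10, 20);
  text("direction: " + nf(degrees(direction), 0, 1) + " deg", 10, 40);
  if (speed > 25) {                // arbitrary threshold for "intentional"
    text("intentional action detected: swipe", 10, 60);
  }

  prevX = mouseX;
  prevY = mouseY;
}
```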

As interesting as it seemed, implementing this technology felt a bit far-fetched: it was fairly new, and the time we had to explore the possibilities was limited.

Taking a parallel road, however, I was led to learn more about the sensor technologies commonly available in modern mobile phones and computers. Simultaneously, Ines and Sylvester dug up code that could perform gesture detection using nothing more than a phone or laptop camera. Processing, a software sketchbook based on Java for coding within the context of the visual arts, came in handy at this moment.
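
For a sense of what camera-based detection looks like in Processing, here is a rough sketch along the lines of the classic frame-differencing technique. It is my own minimal reconstruction, not the exact code Ines and Sylvester found, and it assumes Processing’s Video library is installed.

```java
// Frame differencing in Processing: compare each webcam frame with the
// previous one and sum the per-pixel colour change, giving a crude
// measure of how much movement is happening in front of the camera.
import processing.video.*;

Capture cam;
PImage prevFrame;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  prevFrame = createImage(width, height, RGB);
}

void captureEvent(Capture c) {
  // save the current frame before reading the new one
  prevFrame.copy(cam, 0, 0, cam.width, cam.height, 0, 0, width, height);
  c.read();
}

void draw() {
  image(cam, 0, 0);
  cam.loadPixels();
  prevFrame.loadPixels();

  float totalMotion = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    color current = cam.pixels[i];
    color previous = prevFrame.pixels[i];
    // colour distance approximates how much this pixel changed
    totalMotion += dist(red(current), green(current), blue(current),
                        red(previous), green(previous), blue(previous));
  }
  float avgMotion = totalMotion / cam.pixels.length;
  fill(0, 150);
  rect(0, 0, width, 30);
  fill(255);
  text("average motion: " + nf(avgMotion, 0, 2), 10, 20);
}
```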

While they did that, I thought about improving the information architecture. In the briefing session, Jack had pointed us to the problem of categorising and organising the colossal amount of data in the V&A’s online collection. I went through other museums’ websites and was especially inspired by how the Jardin des Plantes’ collection had been arranged so that young children could learn botany by playing. I sketched out a concept showing a globe of networked objects belonging to related and unrelated categories, an interactive alternative to conventional lists and catalogues. Besides spinning the globe or zooming in and out to see all the objects, I wanted a specific category of objects to be segregated on the basis of a parameter; in this case, colour. Related objects linked by the same colour could be called up in various ways. How about a person flashing a colour in their physical environment that gets detected on webcam? To me, this seemed a good way to bridge reality and virtuality. When I communicated my idea to John in the tutorial, he recommended a website relevant to the project, ‘Of Machine Learning to See Lemon’, which used a similar algorithm.
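
As a quick feasibility check on the colour trigger, here is a minimal sketch of how it might work, again assuming Processing’s Video library. The three category colours and their names are hypothetical placeholders for however the globe’s clusters would actually be keyed.

```java
// Rough sketch of the colour-trigger idea: average the webcam frame's
// colour and match it to the nearest of a few hypothetical category
// colours. Flashing, say, a red object at the camera would then call up
// the "red" cluster of objects in the globe.
import processing.video.*;

Capture cam;
color[] categoryColours = { color(200, 40, 40),   // red cluster
                            color(40, 160, 60),   // green cluster
                            color(40, 80, 200) }; // blue cluster
String[] categoryNames = { "red", "green", "blue" };

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  cam.loadPixels();

  // average colour of the frame (sampling every 10th pixel for speed)
  float r = 0, g = 0, b = 0;
  int n = 0;
  for (int i = 0; i < cam.pixels.length; i += 10) {
    r += red(cam.pixels[i]);
    g += green(cam.pixels[i]);
    b += blue(cam.pixels[i]);
    n++;
  }
  r /= n; g /= n; b /= n;

  // pick the category colour closest to the frame average
  int best = 0;
  float bestDist = Float.MAX_VALUE;
  for (int i = 0; i < categoryColours.length; i++) {
    color c = categoryColours[i];
    float d = dist(r, g, b, red(c), green(c), blue(c));
    if (d < bestDist) { bestDist = d; best = i; }
  }
  fill(255);
  text("detected category: " + categoryNames[best], 10, 20);
}
```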

Meanwhile, my teammates had cracked interesting pieces of code that allowed simple interactions to take place from a web browser: detecting hands, tracking eyeballs, moving the cursor to create a trail, and so on. The four of us collectively decided to head in this direction, as it was an exciting breakthrough. We tried to push these a step further by thinking of object detection, light detection and the movement of clothes as a few instances of interactions that people could experience while navigating specific collections online.
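
The cursor trail was the simplest of these interactions; here is a minimal Processing version of the effect, my own sketch rather than the team’s exact code.

```java
// A fading trail that follows the cursor: instead of clearing the
// canvas each frame, a translucent rectangle gradually erases older
// marks, so recent positions linger and fade.
void setup() {
  size(640, 480);
  background(0);
  noStroke();
}

void draw() {
  fill(0, 20);                      // low alpha = slow fade
  rect(0, 0, width, height);
  fill(255, 200, 80);
  ellipse(mouseX, mouseY, 15, 15);  // the current cursor position
}
```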

To demonstrate these, each of us made a concept video, and we embedded them in a common theme. Using the interactions listed above, we showed the following journey of a person online:

1.) Object auto-detection: A person holds an object in front of the webcam for auto-detection. The algorithm produces a number of search results, from which the person can choose which object to explore in detail. In this example, it is a Japanese sword.

2.) Expanding contexts and scalability: Next, the sword can be found used in different contexts. It is shown as part of a scene of an Edo-period Japanese battle in a painting from the museum’s online collection, which also gives an idea of the object’s scale. @ Svaney

3.) Light ray as the new cursor: This painting can then be inspected carefully using a torch light detected through the webcam, highlighting intricate features (see the sketch after this list). @ Ines

4.) Gameplay as a Samurai: The last step offers the visitor the experience of embodying the same object: their interactions with it are reciprocated on the screen. @ Sylvester
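
Here is the sketch mentioned in item 3: a rough Processing take on the “light ray as cursor” idea, which simply tracks the brightest pixel in the webcam frame and treats that point, e.g. a torch spot, as the cursor. Again, this is an illustration assuming the Video library, not the code used in the concept video.

```java
// Brightness tracking: scan the webcam frame for its brightest pixel
// and draw a highlight ring there, turning a torch beam into a cursor
// for inspecting the painting.
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  cam.loadPixels();

  int brightestX = 0, brightestY = 0;
  float brightest = -1;
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      float b = brightness(cam.pixels[y * cam.width + x]);
      if (b > brightest) {
        brightest = b;
        brightestX = x;
        brightestY = y;
      }
    }
  }
  // the torch spot becomes the cursor; here it just draws a ring
  noFill();
  stroke(255, 230, 0);
  strokeWeight(3);
  ellipse(brightestX, brightestY, 40, 40);
}
```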

FEEDBACK and AFTER

We received very constructive feedback on this week’s progress during Thursday’s presentation. Many speakers from the class expressed their opinions about each of the ideas. Maria Carolina and Sebastian noticed that tactility remained missing from the entire experience. David, Max and Zhalou found the idea of the Samurai role-play very interesting and immersive. Damul appreciated the object-detection facility and also pointed out the decision-making step required when the distinction between two similar objects would be unobservable.

When it came to the final words, Alaistair encouraged us to find solutions that were less technologically and more behaviourally oriented. Though John thought that detecting objects and mapping the movement of clothes through a webcam were brilliant ideas, he advised us to reduce and synthesise them for further development.

Lastly, the style of merging the videos into a storyline was commended by everyone as a great exposition of our concepts.
