The Coming Age of Alternative Ways of Seeing. Part 2: Technological Considerations
Christopher Lueg
Introduction
In Part 1 of this series, we referenced a growing body of work pointing out that, as researchers and designers, we need to find ways to go beyond traditional human-centric design approaches, which by definition privilege human needs over the needs of other creatures. We highlighted research on non-human personas and suggested that we need novel ways to bring non-human personas to life that scale better for broader impact than existing speculative approaches. We argued that by bringing together research in animal behavior/perception and simulation (computational or otherwise), we can venture beyond what is largely imagining and complement non-human personas by adding glimpses into how the world would likely appear to the creatures those personas represent.
We call this approach to venturing beyond human and non-human personas viewsonas. Viewsonas complement those personas by helping people explore specific aspects of the “views” of the creature being represented, which could be a human, a non-human animal, or an alien. Unlike traditional human and non-human personas, which remain largely imaginative and intangible, viewsonas are designed to be tangible and are meant to be interacted with.
—Viewsonas map aspects of what a creature “sees” such that humans are capable of perceiving it.—
Ways To Adjust How We See
There are already plenty of examples of technologies that allow people to experience different human conditions, such as aging suits that impose mobility limitations and simulate various sensory deficiencies (visual, auditory). Not-sameness remains a formidable challenge, though, even when researchers explicitly try to understand how another human experiences their world. Bennett and Rosner (2019) highlight this in the context of designing for people with disabilities: “designers who use disability simulation techniques such as blindfolds to empathize with blind users may not need to consider the user with disabilities; instead, they may focus on their own experience wearing a blindfold.”
Other technologies help us see from another human’s point of view. Wheelchair simulators, which may or may not include actual wheelchairs, allow participants to experience the world from the lower vantage point of most wheelchairs. Schlüpfer et al (2013) used video cameras and head-mounted displays to swap the perspectives of human participants in select activities in real time. Virtual reality (VR) can enable people to “viscerally experience anything from another person’s point of view” (Milk 2015, cited in Herrera et al 2018). Asher et al (2018), for example, created an interactive VR simulation in which participants look at a street scenario from a simulated homeless person’s point of view. VR-enabled shifts in perspective can lead to an increase in empathy for others (Herrera et al 2018, Mado et al 2021) but can also generate backlash when perceived as mocking the reality of homelessness (Melnick 2017). Empathic computing aims to “go beyond [perspective taking] by enabling people to also share their feelings and non-verbal communication cues” (Billinghurst 2017).
There are other technologies that help change one’s perspective to reflect different body geometries. Schlüpfer et al (ibid) let participants experience in real time what they would see if they were the height of a dog or a giraffe, respectively. Pictures taken by the internet-famous CatCam, a small digital camera attached to a domestic cat, showed what the world looks like from the low height of a cat, and also what the cat was up to while roaming around the house. Andrea Arnold’s Cow is a documentary filmed entirely from the point of view of a cow. The U.S. Geological Survey (USGS) attached a camera to a polar bear to obtain information about changes to polar bear habitat under climate change (https://www.usgs.gov/media/videos/polar-bear-pov-cams-spring-2016).
Glimpsing Across the Species Barrier
The focus of viewsonas is to go beyond that, though, responding to the need for glimpses of what non-human animals would actually see. Currently, the only non-human ways of seeing that most humans ever experience are conveyed by movies and similar media. Those showcases (e.g., the alien hunter’s vision shown in the 1987 science fiction movie Predator) are designed for human viewers, though, and for their human physiology. Notable exceptions include Jevbratt (2009), who represented canine and other animals’ ways of seeing in exhibitions and developed software filters for video and imaging software.
When discussing concrete approaches, it is crucial to keep in mind that most such “views,” especially those that peek across the species barrier, are not ontologically neutral, as they change the nature of what is perceived. The speculative nature of such glimpses does not undermine viewsonas’ utility, though, since viewsonas are not meant to be scientifically accurate simulations of the perception of other creatures but rather tools that help human designers empathize with the creatures represented by the respective personas. The aim is to be able to see, to some extent, how a project under development would impact the creature’s environment and how those changes might present visually to the creature. The aim is not to understand what it is like to be a possum, a lorikeet, a bee (Tomitsch et al 2021), or a bat (Nagel 1974).
CatCam almost broke the internet! Viewsonas do not have to be technically sophisticated to generate impact, since even low-key glimpses into the sensory world of other animals can be very informative and deeply touching.
We briefly discussed canine vision in Part 1 of this series and mentioned that even for dog enthusiasts it might be a revelation to experience what the world looks like when the visual impression is computationally altered to reflect canine vision (dichromatic rather than trichromatic color vision). The immense potential for impact becomes easier to understand when imagining, as an example, a high-rise construction scenario. Glass collisions kill up to one billion birds in the United States alone (https://abcbirds.org/glass-collisions/). In an environmentally aware approach, the design team would include bird personas to help them empathize with birds. Hearing in a briefing that glass fronts are death traps for birds, owing to the specific characteristics of bird vision, is one thing; experiencing the difficulty of recognizing glass as glass through an enhanced, interactive bird persona, that is, a viewsona, would demonstrate why architects need to be careful when designing glass fronts.
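To make the canine example concrete, the following is a minimal sketch in Python of how such a computational alteration might be implemented. It treats canine dichromacy as roughly analogous to human deuteranopia and approximates dogs’ lower visual acuity with a blur; both choices are simplifying assumptions for illustration, not a validated model of canine vision.

import numpy as np
from PIL import Image, ImageFilter

# Simplified dichromacy matrix (after the Vienot et al. 1999 deuteranopia
# simulation), used here as a stand-in for canine color vision. This is an
# assumption for illustration, not an established canine model. For
# simplicity it is applied directly to sRGB values; a more faithful
# simulation would linearize the color channels first.
DICHROMACY = np.array([
    [0.625, 0.375, 0.0],
    [0.700, 0.300, 0.0],
    [0.000, 0.300, 0.700],
])

def canine_view(img: Image.Image, blur_radius: float = 2.0) -> Image.Image:
    """Map an RGB image onto a rough approximation of canine vision."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float64) / 255.0
    mapped = rgb @ DICHROMACY.T  # collapse three cone channels onto two
    out = Image.fromarray((np.clip(mapped, 0.0, 1.0) * 255).astype(np.uint8))
    # Dogs have markedly lower visual acuity than humans; blur as a proxy.
    return out.filter(ImageFilter.GaussianBlur(radius=blur_radius))

# Usage (hypothetical file names):
# canine_view(Image.open("street_scene.jpg")).save("street_scene_dog.jpg")

Running even this crude filter over photographs of a site under development already conveys how differently a dichromat with reduced acuity would experience it.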
Going forward, it is important to keep in mind that a specific viewsona models a certain way of seeing. The fact that a particular animal-to-human mapping of sensory information was used when developing a particular viewsona means that the mapping was considered useful when the viewsona was created; its development is tied to a purpose. It does not imply that it was the only way to do so, or even the best way. For example, a viewsona modeling bird vision to demonstrate the dangers of glass fronts is likely to be less suitable for showing how light pollution may interfere with bird migration.
Viewsona Design Principles
Developing viewsonas should be guided by several design principles:
1. Conceptually, viewsonas consist of a view model and a world model. For explorative activities, the view model is applied to a suitable world model, aka world view (see the first sketch after this list). Following roboticist Rodney Brooks’ observation that “the world really is a rather good model of itself” (Brooks 1988), an explicitly constructed world model is a last resort, advisable only if the world to be explored isn’t readily accessible.
2. Viewsonas don’t show what a creature actually sees but what it might be seeing, mapped such that humans are able to perceive it. In other words, perception mapping is done deliberately and explicitly in viewsonas.
3. Viewsonas help us understand how creatures may see select aspects of their environments, but that is fundamentally different from understanding how they experience their Umwelt. We don’t understand what it is like to be a bat just because we understand how echolocation works (cf. Nagel).
4. Viewsonas map aspects of what a creature “sees” such that humans are capable of perceiving it. Such mappings are scientifically informed but nevertheless highly speculative (see the earlier discussion of neural processing of sensory input). As Pongrácz et al (2017) stated in the context of canine vision, “we could not be sure whether canine and human brains process similarly the visual sensation.”
5. Viewsonas are not meant to be accurate scientific simulations, but that doesn’t mean they can’t be useful in research trying to understand animal behavior. In Part 1 we mentioned emerging evidence that dogs’ sense of smell is somehow integrated with their vision (Andrews 2022). Does that mean canines can “see” traces of odor? Overlaying established knowledge about canine vision (dichromatic color vision, acuity, etc.) with an “artistic” visual representation of olfactory stimuli that dissipate over time could yield a visual representation that helps interpret certain canine behaviors, even if we can’t know for sure whether it resembles anything canines actually “see” (see the second sketch after this list).
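To illustrate principle 1, here is a minimal structural sketch; the names and interfaces are illustrative assumptions, not an established API. A viewsona pairs a view model, a per-frame perceptual mapping, with a world model, a source of frames. In Brooks’ spirit, the preferred world model is the world itself, e.g., photographs or camera frames captured on site; a rendered scene is the fallback.

from dataclasses import dataclass
from typing import Callable, Iterable, Iterator
from PIL import Image

@dataclass
class Viewsona:
    view_model: Callable[[Image.Image], Image.Image]  # perceptual mapping
    world_model: Iterable[Image.Image]                # source of frames

    def explore(self) -> Iterator[Image.Image]:
        # Apply the view model to every frame the world model provides.
        return (self.view_model(frame) for frame in self.world_model)

# Preferred world model: the world itself, here as photos taken on site.
def photos(paths: list[str]) -> Iterator[Image.Image]:
    return (Image.open(p).convert("RGB") for p in paths)

# Usage with the canine view model sketched earlier (hypothetical files):
# dog = Viewsona(view_model=canine_view, world_model=photos(["site.jpg"]))
# for frame in dog.explore():
#     frame.show()

Keeping the two models separate means the same view model can be pointed at the real world today and at a rendered design proposal tomorrow.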
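And for principle 5, a speculative sketch of the odor overlay: each odor trace is rendered as a translucent halo that spreads and fades as it ages. The trace positions, timestamps, and exponential decay are illustrative placeholders, not measured olfactory data, and the rendering is “artistic” in exactly the sense described above.

from PIL import Image, ImageDraw

def overlay_odor_traces(frame: Image.Image,
                        traces: list[tuple[int, int, float]],
                        now: float,
                        half_life: float = 30.0) -> Image.Image:
    """Composite dissipating odor traces (x, y, time deposited) onto a frame."""
    overlay = Image.new("RGBA", frame.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    for x, y, deposited_at in traces:
        age = max(now - deposited_at, 0.0)
        alpha = int(180 * 0.5 ** (age / half_life))  # opacity fades with age
        radius = 20 + 0.5 * age                      # halo spreads with age
        draw.ellipse((x - radius, y - radius, x + radius, y + radius),
                     fill=(255, 200, 0, alpha))
    return Image.alpha_composite(frame.convert("RGBA"), overlay)

# Usage on a canine-mapped frame (hypothetical traces: x, y, seconds):
# dog_frame = canine_view(Image.open("yard.jpg"))
# overlay_odor_traces(dog_frame,
#                     [(120, 340, 10.0), (400, 200, 55.0)], now=60.0).show()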
Conclusions and Future Work
The motivation to look into technologies suitable for developing viewsonas is purpose-driven, but at the same time it’s a fascinating way to learn about other species and how they may or may not see the world. Doing so also forces us to examine the very role of the technology since, as mentioned earlier, peeking across the species barrier is never ontologically neutral, especially when we translate something humans cannot actually perceive into something humans can perceive. In Part 3 (forthcoming) we look at further examples of non-human animals where researchers know just enough about their visual perception to create meaningful human-oriented representations thereof (Cheng and Lueg, forthcoming). We are also exploring what we can learn from biomechanical user models (Ikkala et al 2022); those works use a similar structure (model plus view), but the overall aim seems rather different.
References
Andrews, E.F., Pascalau, R., Horowitz, A., Lawrence, G.M. and Johnson, P.J., 2022. Extensive Connections of the Canine Olfactory Pathway Revealed by Tractography and Dissection. Journal of Neuroscience, 42(33), pp.6392–6407.
Asher, T., Ogle, E., Bailenson, J.N., & Herrera, F. (2018). Becoming homeless: a human experience. ACM SIGGRAPH 2018 Virtual, Augmented, and Mixed Reality. DOI: https://doi.org/10.1145/3226552.3226576
Bennett, C.L. and Rosner, D.K. (2019). The Promise of Empathy: Design, Disability, and Knowing the “Other”. Proc. CHI 2019, paper 298. Glasgow, Scotland UK. ACM.
Billinghurst, M. (2017). The Coming Age of Empathic Computing. Medium May 4, 2017. https://medium.com/super-ventures-blog/the-coming-age-of-empathic-computing-617caefc7016
Brooks, R. (1988) How to Build Complete Creatures Rather than Isolated Cognitive Simulators. In Kurt VanLehn (ed) Architectures for Intelligence: The 22nd Carnegie Mellon Symposium on Cognition.
Herrera F., Bailenson J., Weisz E., Ogle E., Zaki J. (2018). Building long-term empathy: A large-scale comparison of traditional and virtual reality perspective-taking. PLoS ONE 13(10): e0204494. https://doi.org/10.1371/journal.pone.0204494
Ikkala, A., Fischer, F., Klar, M., Bachinski, M., Fleig, A., Howes, A., Hämäläinen, P., Müller, J., Murray-Smith, R. and Oulasvirta, A. (2022). Breathing Life Into Biomechanical User Models. Proc UIST. ACM.
Jevbratt, L. (2009). ZooMorph - enabling interspecies collaboration. Presentation at ISEA 2009.
Mado, M., Herrera, F., Nowak, K.L., Bailenson, J.N. (2021). Effect of Virtual Reality Perspective-Taking on Related and Unrelated Contexts. Cyberpsychology, Behavior, and Social Networking.
Melnick, K. (2017). CEOs Under Fire For Using VR To Experience Being Homeless. VRScout June 28, 2017. https://vrscout.com/news/ceos-vr-experience-homeless/
Milk C. (2015). How virtual reality can create the ultimate empathy machine https://www.ted.com/talks/chris_milk_how_virtual_reality_can_create_the_ultimate_empathy_machine
Nagel, T. (1974). What Is It Like to Be a Bat? Philosophical Review. 83 (4): 435–450.
Pongrácz, P., Ujvári, V., Faragó, T., Miklósi, A., Péter, A. (2017). Do you see what I see? The difference between dog and human visual perception may affect the outcome of experiments. Behavioural Processes, Vol 140, 2017, pp 53–60, ISSN 0376-6357.
Schlüpfer, M., Ryser, H., Aerni, P., Gehbauer, U. (2013). Mit dem Körper sehen. Bern University of the Arts (HKB). https://www.optickle.com/forschungsprojekt See also https://www.bfh.ch/dam/jcr:e53f8b01-0741-46dc-9704-0139df635fe5/IM_Mit_dem_Koerper_sehen.pdf
Tomitsch, M., Fredericks, J., Vo, D., Frawley, J., and Foth, M. (2021). Non-human Personas: Including Nature in the Participatory Design of Smart Cities. Interaction Design and Architecture(s), 50(50), pp. 102–130.
Cite this article in APA as: Lueg, C. (2023, March 22). The coming age of alternative ways of seeing. Part 2: Technological considerations. Information Matters, Vol. 3, Issue 3. https://informationmatters.org/2023/03/the-coming-age-of-alternative-ways-of-seeing-part-2-technological-considerations/
Author
Christopher Lueg is a professor in the School of Information Sciences at the University of Illinois Urbana-Champaign. Internationally recognized for his research in human computer interaction and information behavior, Lueg has a special interest in embodiment—the view that perception, action, and cognition are intrinsically linked—and what it means when designing for others. Prior to joining the faculty at Illinois, Lueg served as professor of medical informatics at the Bern University of Applied Sciences in Biel/Bienne, Switzerland. He spent almost twenty years as a professor in Australia teaching at the University of Technology, Sydney; Charles Darwin University; and the University of Tasmania, where he co-directed two of the university's research themes, Data, Knowledge and Decisions (DKD) and Creativity, Culture, Society (CCS).