In the event of fires and other hazards, visual guidance systems that support evacuation are critical for the safety of individuals. Current visual guidances for evacuations are typically non-adaptive signs in that they always indicate the same exit route independently of the hazard’s location. Adaptive signage systems can facilitate wayfinding during evacuations by optimizing the route towards the exit based on the current emergency situation. In this paper, we demonstrate that participants who evacuate a virtual museum using adaptive signs are quicker, use shorter routes, suffer less damage caused by the fire, and report less distress compared to participants using non-adaptive signs. Furthermore, we develop both centralized and decentralized computational frameworks that are capable of calculating the optimal route towards the exit by considering the locations of the fire and automatically adapting the directions indicated by signs. The decentralized system can easily recover from a sign malfunction because the optimal evacuation route is computed locally and communicated by individual signs. Although this approach requires more time to compute than the centralized system, the results of the simulations show that both frameworks need less than two seconds to converge, which is substantially faster than the theoretical worst case. Finally, we use an agent-based model to validate various fire evacuation scenarios with and without adaptive signs by demonstrating a large difference in the survival rate of agents between the two conditions.
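The decentralized scheme described above can be sketched as a distributed shortest-path computation in which each sign repeatedly refines its distance-to-exit estimate from its neighbors until no estimate changes. The graph layout, function names, and costs below are illustrative assumptions, not the paper's implementation:

```python
# Decentralized evacuation-routing sketch: each sign keeps a local estimate of
# its distance to the nearest exit and updates it from neighboring signs
# (a distributed Bellman-Ford). Nodes occupied by fire are never routed through.
# All names, costs, and the graph are illustrative, not taken from the paper.

def update_signs(neighbors, exits, hazards, max_rounds=100):
    """neighbors: {node: {neighbor: edge_cost}}; returns (distance, pointer) maps."""
    INF = float("inf")
    dist = {n: (0.0 if n in exits else INF) for n in neighbors}
    point = {n: None for n in neighbors}  # direction each sign should indicate
    for _ in range(max_rounds):
        changed = False
        for n, nbrs in neighbors.items():
            if n in exits or n in hazards:
                continue
            for m, cost in nbrs.items():
                if m in hazards:
                    continue  # never route evacuees towards the fire
                if dist[m] + cost < dist[n]:
                    dist[n], point[n] = dist[m] + cost, m
                    changed = True
        if not changed:  # converged: no sign updated its estimate this round
            break
    return dist, point
```

In a small example with two routes to the exit and a fire blocking the shorter one, the signs converge on the detour; a malfunctioning sign can simply rejoin the update loop, which is why local computation recovers gracefully.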
Cognitive tests are critical for the reliable assessment of cognitive functioning in an aging population. However, even validated psychometric tests are subject to a variety of extraneous factors (for example, culture and language) that may affect performance. Regarding aging, one factor that stands out is reduced visual function. Indeed, performance on cognitive tests has been found to be negatively affected by vision impairment (for example, age-related macular degeneration or cataracts). When vision impairment is neglected during assessments, poor test scores may be falsely attributed to lower cognitive ability. This oversight can have substantial ramifications for research on cognitive functioning and the accurate diagnosis of cognitive impairment.
Recent advances in Augmented Reality (AR), the Internet of Things (IoT), cloud computing, and Digital Twins transform the types, rates, and volume of information generated in buildings as well as the media through which it can be perceived by users. These advances push the standard approach of media architecture to embed screens in the built environment to its limits because screens lack the immersive capacity that newer media afford. To bridge this gap, we propose a novel AR approach to media architecture that uses a Digital Twin as a platform for structuring and accessing data from various sources, including IoT and simulations. Our technical contribution to media architecture is threefold. First, we extend the possibilities of media architecture beyond embedded screens to three dimensions by presenting a Digital Twin using AR with a head-mounted display. This approach results in a shared and consistent augmented experience across large architectural spaces. Second, we use the Digital Twin to integrate and visualize real physical sensor information. Third, we make artificial occupancy simulations accessible to everyday users by presenting them within their natural context in the Digital Twin. Observing the Digital Twin in situ, embedded in the Physical Twin, also has applications beyond media architecture. Fusing the two twins using AR can reduce the cognitive load that users incur when consuming large and complex information sources and enhance their experience. We present two use cases of the proposed Fused Twins in a university building at ETH Zürich. In the first use case, we visualize a dense indoor sensor network (DISN) with 390 IoT sensors that collected data from March 2020 to May 2021. In the second use case, we immerse visitors in agent-based simulations to enable insights into the real and projected uses of space.
This work brings forward an ambitious vision for media architecture beyond traditional flat screens and showcases its potential by fusing state-of-the-art simulations, sensor-data integration, and augmented reality, finally making the jump from fiction to reality.
Purpose: Investigating difficulties during activities of daily living is a fundamental first step for the development of vision-related intervention and rehabilitation strategies. One way to do this is through visual impairment simulations. The aim of this review is to synthesize and assess the types of simulation methods that have been used to simulate age-related macular degeneration (AMD) in normally sighted participants, during activities of daily living (e.g., reading, cleaning, and cooking). Methods: We conducted a systematic literature search in five databases and a critical analysis of the advantages and disadvantages of various AMD simulation methods (following PRISMA guidelines). The review focuses on the suitability of each method for investigating activities of daily living, an assessment of clinical validation procedures, and an evaluation of the adaptation periods for participants. Results: Nineteen studies met the criteria for inclusion. Contact lenses, computer manipulations, gaze contingent displays, and simulation glasses were the main forms of AMD simulation identified. The use of validation and adaptation procedures were reported in approximately two-thirds and half of studies, respectively. Conclusions: Synthesis of the methodology demonstrated that the choice of simulation has been, and should continue to be, guided by the nature of the study. While simulations may never completely replicate vision loss experienced during AMD, consistency in simulation methodology is critical for generating realistic behavioral responses under vision impairment simulation and limiting the influence of confounding factors. Researchers could also come to a consensus regarding the length and form of adaptation by exploring what is an adequate amount of time and type of training required to acclimatize participants to vision impairment simulations.
Long Range Wide Area Network (LoRaWAN) has been advanced as an alternative for creating indoor sensor networks, a use that extends beyond its original long-distance communication purpose. For the present paper, we developed a Dense Indoor Sensor Network (DISN) with 390 sensor nodes and three gateways and empirically evaluated its performance for half a year. Our analysis of more than 14 million transmissions revealed that DISNs achieve much shorter coverage distances than reported in previous research. In addition, the deployment of multiple gateways decreased the loss of transmissions due to environmental and network factors such as concurrently received messages. Given the complexity of our system, we received few colliding concurrent messages, which demonstrates a gap between the projected requirements of LoRaWAN systems and the actual requirements of real-world applications. Our attenuation model indicates that robust coverage in an indoor environment can be maintained by placing a gateway every 30 m and every 5 floors. We discuss the application of DISNs for the passive sensing and visualization of human presence using a Digital Twin (DT).
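The gateway-placement rule of thumb (every 30 m and every 5 floors) follows from an attenuation model of the log-distance type with a per-floor penalty. A minimal sketch with placeholder coefficients, chosen only so that the 30 m / 5 floor case closes the link; they are not the paper's fitted values:

```python
import math

def indoor_path_loss(distance_m, floors, pl0=40.0, exponent=2.8, floor_loss=10.0):
    """Log-distance path loss with a per-floor penalty, in dB.
    All coefficients are illustrative placeholders, not fitted values."""
    return pl0 + 10.0 * exponent * math.log10(max(distance_m, 1.0)) + floor_loss * floors

def link_ok(distance_m, floors, tx_power_dbm=14.0, sensitivity_dbm=-120.0):
    """A link closes if the received power stays above the receiver sensitivity.
    14 dBm TX and -120 dBm sensitivity are typical LoRa ballpark figures."""
    return tx_power_dbm - indoor_path_loss(distance_m, floors) >= sensitivity_dbm
```

Under these assumed coefficients, a node 30 m and 5 floors away from a gateway still closes the link, while far more distant nodes do not, which motivates the periodic gateway placement.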
The high prevalence of office stress and its detrimental health consequences are of concern to individuals, employers and society at large. Laboratory studies investigating office stress have mostly relied on data from participants who were tested individually on abstract tasks. In this study, we examined the effect of psychosocial office stress and work interruptions on the psychobiological stress response in a realistic but controlled group office environment. We also explored the role of cognitive stress appraisal as an underlying mechanism mediating the relationship between work stressors and the stress response. Ninety participants were randomly assigned to either a control condition or one of two experimental conditions in which they were exposed to psychosocial stress with or without prior work interruptions in a realistic multi-participant laboratory setting. To induce psychosocial stress, we adapted the Trier Social Stress Test for Groups to an office environment. Throughout the experiment, we continuously monitored heart rate and heart rate variability. Participants repeatedly reported on their current mood, calmness, wakefulness and perceived stress and gave saliva samples to assess changes in salivary cortisol and salivary alpha-amylase. Additionally, cognitive appraisal of the psychosocial stress test was evaluated. Our analyses revealed significant group differences for most outcomes during or immediately after the stress test (i.e., mood, calmness, perceived stress, salivary cortisol, heart rate, heart rate variability) and during recovery (i.e., salivary cortisol and heart rate). Interestingly, the condition that experienced work interruptions showed a higher increase of cortisol levels but appraised the stress test as less threatening than individuals who experienced only psychosocial stress.
Exploratory mediation analyses revealed a blunted response in subjective measures of stress, which was partially explained by the differences in threat appraisal. The results showed that experimentally induced work stress led to significant responses of subjective measures of stress, the hypothalamic-pituitary-adrenal axis and the autonomic nervous system. However, there appears to be a discrepancy between the psychological and biological responses to preceding work interruptions. Appraising psychosocial stress as less threatening but still as challenging could be an adaptive way of coping and reflect a state of engagement and eustress.
Renewable energy systems (RES) can impact landscape aesthetics and influence the public's perception of the landscape and their acceptance of large infrastructure projects. Perceptual processes have consequences for both physiological and behavioral reactions to visual landscape changes and have not been systematically assessed in the context of RES. In this paper, we measured participants' physiological (electrodermal activity) and behavioral (i.e., landscape preferences) responses to landscapes with different amounts of RES. The visual stimuli were composed of either a low or high amount of wind turbines and photovoltaic systems in seven different landscape types. Participants were asked to choose their preferred landscape image from pairs of sequentially presented images while we recorded their electrodermal activity. The results revealed that participants were significantly more physiologically aroused while viewing landscapes with high RES compared to landscapes with low RES. We also found that the participants significantly preferred landscapes with low RES to landscapes with high RES and that this effect was larger for some landscapes than others. The results also revealed significant differences in preferences among landscape types. Specifically, participants tended to prefer the more natural landscapes to the more urban landscapes. A systematic analysis of the visual features of these stimuli revealed a positive correlation between physiological arousal and the visual impact of photovoltaic systems. Overall, we conclude that both physiology and behavior can be relevant for studies of landscape perception and that these insights may inform planners and policy makers in the implementation of landscape changes related to RES.
Dense crowds in public spaces have often caused serious security issues at large events. In this paper, we study the 2010 Love Parade disaster, for which a large amount of data (e.g. research papers, professional reports and video footage) exist. We reproduce the Love Parade disaster in a three-dimensional computer simulation calibrated with data from the actual event and using the social force model for pedestrian behaviour. Moreover, we simulate several crowd management strategies and investigate their ability to prevent the disaster. We evaluate these strategies in virtual reality (VR) by measuring the response and arousal of participants while experiencing the simulated event from a festival attendee’s perspective. Overall, we find that opening an additional exit and removing the police cordons could have significantly reduced the number of casualties. We also find that this strategy affects the physiological responses of the participants in VR.
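The social force model underlying the simulation combines a goal-directed driving term with repulsive forces between pedestrians that decay with distance. A minimal single-agent sketch; the parameter values are illustrative defaults, not the values calibrated to the Love Parade data:

```python
import math

def social_force(pos, vel, goal, others, desired_speed=1.3, tau=0.5,
                 strength=2.0, rng=0.3):
    """One pedestrian's acceleration under a minimal social force model:
    a driving term toward the goal plus exponential repulsion from others.
    All parameters are illustrative, not calibrated to any real event."""
    # Driving force: relax the current velocity toward the desired velocity.
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy) or 1.0
    fx = (desired_speed * dx / dist - vel[0]) / tau
    fy = (desired_speed * dy / dist - vel[1]) / tau
    # Repulsive forces: decay exponentially with distance to each neighbor.
    for ox, oy in others:
        rx, ry = pos[0] - ox, pos[1] - oy
        d = math.hypot(rx, ry) or 1e-6
        mag = strength * math.exp(-d / rng)
        fx += mag * rx / d
        fy += mag * ry / d
    return fx, fy
```

Integrating these accelerations over time for many agents reproduces the emergent crowd patterns the simulation relies on; a pedestrian with another person directly ahead accelerates less toward the goal than one with a clear path.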
Gaining awareness of the user’s affective states enables smartphones to support enriched interactions that are sensitive to the user’s context. To accomplish this on smartphones, we propose a system that analyzes the user’s text typing behavior using a semi-supervised deep learning pipeline for predicting affective states measured by valence, arousal, and dominance. Using a data collection study with 70 participants on text conversations designed to trigger different affective responses, we developed a variational auto-encoder to learn efficient feature embeddings of two-dimensional heat maps generated from touch data while participants engaged in these conversations. Using the learned embedding in a cross-validated analysis, our system predicted three levels (low, medium, high) of valence (AUC up to 0.84), arousal (AUC up to 0.82), and dominance (AUC up to 0.82). These results demonstrate the feasibility of our approach to accurately predict affective states based only on touch data.
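The two-dimensional heat maps fed to the variational auto-encoder can be illustrated by binning touch coordinates into a normalized grid. This is a simplified stand-in: the paper's actual resolution, normalization, and preprocessing are not specified here:

```python
def touch_heatmap(events, width, height, bins=8):
    """Bin (x, y) touch coordinates into a bins-by-bins grid of frequencies.
    A simplified stand-in for the heat maps used as auto-encoder input;
    the grid size and normalization are illustrative assumptions."""
    grid = [[0] * bins for _ in range(bins)]
    for x, y in events:
        col = min(int(x / width * bins), bins - 1)
        row = min(int(y / height * bins), bins - 1)
        grid[row][col] += 1
    total = sum(sum(row) for row in grid) or 1
    # Normalize counts to frequencies so maps are comparable across sessions.
    return [[c / total for c in row] for row in grid]
```

Each conversation session yields one such map, and the auto-encoder compresses these maps into low-dimensional embeddings used by the downstream classifier.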
Current Mixed Reality (MR) systems rely on a variety of sensors (e.g., cameras, eye tracking, GPS) to create immersive experiences. Data collected by these sensors are necessary to generate detailed models of a user and the environment that allow for different interactions with the virtual and the real world. Generally, these data contain sensitive information about the user, objects, and other people that make up the interaction. This is particularly the case for MR systems with eye tracking, because these devices are capable of inferring the identity and cognitive processes related to attention and arousal of a user. The goal of this position paper is to raise awareness of privacy issues that result from aggregating user data from multiple sensors in MR. Specifically, we focus on the challenges that arise from collecting eye tracking data and outline different ways gaze data may contribute to alleviate some of the privacy concerns from aggregating sensor data.
The inspection of feature-rich information spaces often requires supportive tools that reduce visual clutter without sacrificing details. One common approach is to use focus+context lenses that provide multiple views of the data. While these lenses present local details together with global context, they require additional manual interaction. In this paper, we discuss the design space for gaze-adaptive lenses and present an approach that automatically displays additional details with respect to visual focus. We developed a prototype for a map application capable of displaying names and star-ratings of different restaurants. In a pilot study, we compared the gaze-adaptive lens to a mouse-only system in terms of efficiency, effectiveness, and usability. Our results revealed that participants were faster in locating the restaurants and more accurate in a map drawing task when using the gaze-adaptive lens. We discuss these results in relation to observed search strategies and inspected map areas.
A carefully designed map can reduce pedestrians’ cognitive load during wayfinding and may be an especially useful navigation aid in crowded public environments. In the present paper, we report three studies that investigated the effects of map complexity and crowd movement on wayfinding time, accuracy and hesitation using both online and laboratory-based networked virtual reality (VR) platforms. In the online study, we found that simple map designs led to shorter decision times and higher accuracy compared to complex map designs. In the networked VR set-up, we found that co-present participants made very few errors. In the final VR study, we replayed the traces of participants’ avatars from the second study so that they indicated a different direction than the maps. In this scenario, we found an interaction between map design and crowd movement in terms of decision time and the distributions of locations at which participants hesitated. Together, these findings can help the designers of maps for public spaces account for the movements of real crowds.
Signage systems are critical for communicating spatial information during wayfinding amid the abundant noise in the environment. A proper signage system can improve wayfinding performance and user experience by reducing the perceived complexity of the environment. However, previous models of sign-based wayfinding do not incorporate realistic noise or quantify the reduction in perceived complexity from the use of signage. Drawing upon concepts from information theory, we propose and validate a new agent-signage interaction model that quantifies the wayfinding information available from signs. We conducted two online crowd-sourcing experiments to compute the distribution of a sign’s visibility and an agent’s decision-making confidence as a function of observation angle and viewing distance. We then validated this model using a virtual reality (VR) experiment with trajectories from human participants. The crowd-sourcing experiments provided a distribution of decision-making entropy (conditioned on visibility) that can be applied to any sign/environment. From the VR experiment, a training dataset of 30 trajectories was used to refine our model, and the remaining test dataset of 10 trajectories was compared with agent behavior using dynamic time warping (DTW) distance. The results revealed a reduction of 38.76% in DTW distance between the average trajectories before and after refinement. Our refined agent-signage interaction model provides realistic predictions of human wayfinding behavior using signs. These findings represent a first step towards modeling human wayfinding behavior in complex real environments in a manner that can incorporate several additional random variables (e.g., environment layout).
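Dynamic time warping, used above to compare simulated agent trajectories with human ones, aligns two paths of possibly different lengths by a dynamic program over pairwise point distances. A standard sketch of the metric (not the paper's specific implementation):

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two 2D trajectories.
    Standard O(len(a) * len(b)) dynamic program with Euclidean point cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Cost of matching point i of a with point j of b.
            cost = math.hypot(a[i - 1][0] - b[j - 1][0], a[i - 1][1] - b[j - 1][1])
            # Extend the cheapest of the three admissible alignments.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Because DTW tolerates differences in walking speed while penalizing spatial deviation, a drop in DTW distance after model refinement indicates that agent paths track human paths more closely.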
This study investigates how social and physical environments affect human wayfinding and locomotion behaviors in a virtual multi-level shopping mall. Participants were asked to locate a store inside the virtual building as efficiently as possible. We examined the effects of crowdedness, start floor, and trial number on wayfinding strategies, initial route choices, and locomotion behaviors. The results showed that crowdedness did not affect wayfinding strategies or initial route choices, but did affect locomotion in that participants in the high crowdedness condition were more likely to avoid crowds by moving close to the boundaries of the environment. The results also revealed that participants who started on the second floor were more likely to use the floor strategy than participants who started on the third floor, possibly because of the structure of the virtual building. These results suggest that both physical and social environments can influence multi-level indoor wayfinding.
Living in a disadvantaged neighborhood is associated with worse health and early mortality. Although many mechanisms may partially account for this effect, disadvantaged neighborhood environments are hypothesized to elicit stress and emotional responses that accumulate over time and influence physical and mental health. However, evidence for neighborhood effects on stress and emotion is limited due to methodological challenges. In order to address this question, we developed a virtual reality experimental model of neighborhood disadvantage and affluence and examined the effects of simulated neighborhoods on immediate stress and emotion. Exposure to neighborhood disadvantage resulted in greater negative emotion, less positive emotion, and more compassion, compared to exposure to affluence. However, the effect of virtual neighborhood environments on blood pressure and electrodermal reactivity depended on parental education. Participants from families with lower education exhibited greater reactivity to the disadvantaged neighborhood, while those from families with higher education exhibited greater reactivity to the affluent neighborhood. These results demonstrate that simulated neighborhood environments can elicit immediate stress reactivity and emotion, but the nature of physiological effects depends on sensitization to prior experience.
The role of affective states in learning has recently attracted considerable attention in education research. The accurate prediction of affective states can help increase the learning gain by incorporating targeted interventions that are capable of adjusting to changes in the individual affective states of students. Until recently, most work on the prediction of affective states has relied on expensive and stationary lab devices that are not well suited for classrooms and everyday use. Here, we present an automated pipeline capable of accurately predicting (AUC up to 0.86) the affective states of participants solving tablet-based math tasks using signals from low-cost mobile bio-sensors. In addition, we show that we can achieve a similar classification performance (AUC up to 0.84) by only using handwriting data recorded from a stylus while students solved the math tasks. Given the emerging digitization of classrooms and increased reliance on tablets as teaching tools, stylus data may be a viable alternative to bio-sensors for the prediction of affective states.
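The reported AUC values can be made concrete with the rank-sum formulation of the area under the ROC curve: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. The sketch and inputs below are illustrative, not the paper's evaluation code:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation.
    scores: classifier outputs; labels: 1 for positive, 0 for negative.
    Ties between a positive and a negative score count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("AUC needs at least one example of each class")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.86 thus means that in 86% of positive/negative pairs the classifier ranks the positive example higher, while 0.5 would correspond to chance-level ranking.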
Exploring a city panorama from a vantage point is a popular tourist activity. Typical audio guides that support this activity are limited by their lack of responsiveness to user behavior and by the difficulty of matching audio descriptions to the panorama. These limitations can inhibit the acquisition of information and negatively affect user experience. This paper proposes Gaze-Guided Narratives as a novel interaction concept that helps tourists find specific features in the panorama (gaze guidance) while adapting the audio content to what has been previously looked at (content adaptation). Results from a controlled study in a virtual environment (n=60) revealed that a system featuring both gaze guidance and content adaptation obtained better user experience, lower cognitive load, and led to better performance in a mapping task compared to a classic audio guide. A second study with tourists situated at a vantage point (n=16) further demonstrated the feasibility of this approach in the real world.
The collective behavior of human crowds often exhibits surprisingly regular patterns of movement. These patterns stem from social interactions between pedestrians such as when individuals imitate others, follow their neighbors, avoid collisions with other pedestrians, or push each other. While some of these patterns are beneficial and promote efficient collective motion, others can seriously disrupt the flow, ultimately leading to deadly crowd disasters. Understanding the dynamics of crowd movements can help urban planners manage crowd safety in dense urban areas and develop an understanding of dynamic social systems. However, the study of crowd behavior has been hindered by technical and methodological challenges. Laboratory experiments involving large crowds can be difficult to organize, and quantitative field data collected from surveillance cameras are difficult to evaluate. Nevertheless, crowd research has undergone important developments in the past few years that have led to numerous research opportunities. For example, the development of crowd monitoring based on the virtual signals emitted by pedestrians’ smartphones has changed the way researchers collect and analyze live field data. In addition, the use of virtual reality, and multi-user platforms in particular, has paved the way for new types of experiments. In this review, we describe these methodological developments in detail and discuss how these novel technologies can be used to deepen our understanding of crowd behavior.
Virtual reality (VR) experiments are increasingly employed because of their internal and external validity compared to real-world observation and laboratory experiments, respectively. VR is especially useful for geographic visualizations and investigations of spatial behavior. In spatial behavior research, VR provides a platform for studying the relationship between navigation and physiological measures (e.g., skin conductance, heart rate, blood pressure). Specifically, physiological measures allow researchers to address novel questions and constrain previous theories of spatial abilities, strategies, and performance. For example, individual differences in navigation performance may be explained by the extent to which changes in arousal mediate the effects of task difficulty. However, the complexities in the design and implementation of VR experiments can distract experimenters from their primary research goals and introduce irregularities in data collection and analysis. To address these challenges, the Experiments in Virtual Environments (EVE) framework includes standardized modules such as participant training with the control interface, data collection using questionnaires, the synchronization of physiological measurements, and data storage. EVE also provides the necessary infrastructure for data management, visualization, and evaluation. The present paper describes a protocol that employs the EVE framework to conduct navigation experiments in VR with physiological sensors. The protocol lists the steps necessary for recruiting participants, attaching the physiological sensors, administering the experiment using EVE, and assessing the collected data with EVE evaluation tools. Overall, this protocol will facilitate future research by streamlining the design and implementation of VR experiments with physiological sensors.
Investigating the interactions among multiple participants is a challenge for researchers from various disciplines, including the decision sciences and spatial cognition. With a local area network and dedicated software platform, experimenters can efficiently monitor the behavior of participants who are simultaneously immersed in a desktop virtual environment and digitalize the collected data. These capabilities allow for experimental designs in spatial cognition and navigation research that would be difficult (if not impossible) to conduct in the real world. Possible experimental variations include stress during an evacuation, cooperative and competitive search tasks, and other contextual factors that may influence emergent crowd behavior. However, such a laboratory requires maintenance and strict protocols for data collection in a controlled setting. While the external validity of laboratory studies with human participants is sometimes questioned, a number of recent papers suggest that the correspondence between real and virtual environments may be sufficient for studying social behavior in terms of trajectories, hesitations, and spatial decisions. In this article, we describe a method for conducting experiments on decision-making and navigation with up to 36 participants in a networked desktop virtual reality setup (i.e., the Decision Science Laboratory or DeSciL). This experiment protocol can be adapted and applied by other researchers in order to set up a networked desktop virtual reality laboratory.