In the event of fires and other hazards, visual guidance systems that support evacuation are critical for the safety of individuals. Current visual guidance for evacuations typically consists of non-adaptive signs that always indicate the same exit route regardless of the hazard’s location. Adaptive signage systems can facilitate wayfinding during evacuations by optimizing the route towards the exit based on the current emergency situation. In this paper, we demonstrate that participants who evacuate a virtual museum using adaptive signs are quicker, use shorter routes, suffer less damage caused by the fire, and report less distress compared to participants using non-adaptive signs. Furthermore, we develop both centralized and decentralized computational frameworks that are capable of calculating the optimal route towards the exit by considering the locations of the fire and automatically adapting the directions indicated by signs. The decentralized system can easily recover from a sign malfunction because the optimal evacuation route is computed locally and communicated by individual signs. Although this approach requires more time to compute than the centralized system, the results of the simulations show that both frameworks need less than two seconds to converge, which is substantially faster than the theoretical worst case. Finally, we use an agent-based model to validate various fire evacuation scenarios with and without adaptive signs by demonstrating a large difference in the survival rate of agents between the two conditions.
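A decentralized framework of this kind can be pictured as a distributed shortest-path relaxation in which each sign repeatedly adopts the best hop count advertised by its neighbors and points towards the neighbor closest to a safe exit. The sketch below is purely illustrative; the graph, hop-count metric, and function names are assumptions, not the paper's implementation:

```python
def decentralized_sign_update(neighbors, exits, blocked):
    """Illustrative distributed relaxation (Bellman-Ford style): each sign
    repeatedly updates its hop distance to the nearest safe exit from its
    neighbors' values, skipping nodes blocked by fire. Converges in at
    most |V| rounds, matching the fast convergence reported in the paper
    only in spirit, not in detail."""
    INF = float("inf")
    dist = {n: (0 if n in exits and n not in blocked else INF) for n in neighbors}
    changed = True
    while changed:
        changed = False
        for node, adj in neighbors.items():
            if node in blocked or node in exits:
                continue
            best = min((dist[a] + 1 for a in adj if a not in blocked), default=INF)
            if best < dist[node]:
                dist[node] = best
                changed = True
    # Each sign points towards the reachable neighbor nearest to an exit.
    arrows = {n: min((a for a in adj if a not in blocked),
                     key=lambda a: dist[a], default=None)
              for n, adj in neighbors.items()
              if n not in exits and n not in blocked}
    return dist, arrows
```

Because each sign only needs its neighbors' values, a malfunctioning sign simply drops out of the relaxation and the remaining signs re-converge around it.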
Cognitive tests are critical for the reliable assessment of cognitive functioning in an aging population. However, even validated psychometric tests are subject to a variety of extraneous factors (for example, culture and language) that may affect performance. Regarding aging, one factor that stands out is reduced visual function. Indeed, performance on cognitive tests has been found to be negatively affected by vision impairment (for example, age-related macular degeneration or cataracts). When vision impairment is neglected during assessments, poor test scores may be falsely attributed to lower cognitive ability. This oversight can have substantial ramifications for research on cognitive functioning and the accurate diagnosis of cognitive impairment.
Recent advances in Augmented Reality (AR), the Internet of Things (IoT), cloud computing, and Digital Twins transform the types, rates, and volume of information generated in buildings as well as the media through which that information can be perceived by users. These advances push the standard approach of media architecture to embed screens in the built environment to its limits because screens lack the immersive capacity that newer media afford. To bridge this gap, we propose a novel AR approach to media architecture that uses a Digital Twin as a platform for structuring and accessing data from various sources, including IoT and simulations. Our technical contribution to media architecture is threefold. First, we extend the possibilities of media architecture beyond embedded screens to three dimensions by presenting a Digital Twin using AR with a head-mounted display. This approach results in a shared and consistent augmented experience across large architectural spaces. Second, we use the Digital Twin to integrate and visualize real physical sensor information. Third, we make artificial occupancy simulations accessible to everyday users by presenting them within their natural context in the Digital Twin. Observing the Digital Twin in situ, within the Physical Twin, also has applications beyond media architecture. Fusing the two twins using AR can reduce the cognitive load that users incur when consuming large and complex information sources and enhance their experience. We present two use cases of the proposed Fused Twins in a university building at ETH Zürich. In the first use case, we visualize a dense indoor sensor network (DISN) with 390 IoT sensors that collected data from March 2020 to May 2021. In the second use case, we immerse visitors in agent-based simulations to enable insights into the real and projected uses of space.
This work brings forward an ambitious vision for media architecture beyond traditional flat screens and showcases its potential through fusing state-of-the-art simulations, sensor data integration, and augmented reality, finally making the jump from fiction to reality.
Purpose: Investigating difficulties during activities of daily living is a fundamental first step for the development of vision-related intervention and rehabilitation strategies. One way to do this is through visual impairment simulations. The aim of this review is to synthesize and assess the types of simulation methods that have been used to simulate age-related macular degeneration (AMD) in normally sighted participants during activities of daily living (e.g., reading, cleaning, and cooking). Methods: We conducted a systematic literature search in five databases and a critical analysis of the advantages and disadvantages of various AMD simulation methods (following PRISMA guidelines). The review focuses on the suitability of each method for investigating activities of daily living, an assessment of clinical validation procedures, and an evaluation of the adaptation periods for participants. Results: Nineteen studies met the criteria for inclusion. Contact lenses, computer manipulations, gaze contingent displays, and simulation glasses were the main forms of AMD simulation identified. The use of validation and adaptation procedures was reported in approximately two-thirds and half of the studies, respectively. Conclusions: Synthesis of the methodology demonstrated that the choice of simulation has been, and should continue to be, guided by the nature of the study. While simulations may never completely replicate the vision loss experienced during AMD, consistency in simulation methodology is critical for generating realistic behavioral responses under vision impairment simulation and limiting the influence of confounding factors. Researchers could also come to a consensus regarding the length and form of adaptation by exploring the amount of time and type of training required to acclimatize participants to vision impairment simulations.
Long Range Wide Area Network (LoRaWAN) has been advanced as an alternative for creating indoor sensor networks, an application that extends beyond its original long-distance communication purpose. For the present paper, we developed a Dense Indoor Sensor Network (DISN) with 390 sensor nodes and three gateways and empirically evaluated its performance for half a year. Our analysis of more than 14 million transmissions revealed that DISNs achieve much lower distance coverage than reported in previous research. In addition, the deployment of multiple gateways decreased the loss of transmissions due to environmental and network factors such as concurrently received messages. Given the complexity of our system, we received few colliding concurrent messages, which demonstrates a gap between the projected requirements of LoRaWAN systems and the actual requirements of real-world applications. Our attenuation model indicates that robust coverage in an indoor environment can be maintained by placing a gateway every 30 m and every 5 floors. We discuss the application of DISNs for the passive sensing and visualization of human presence using a Digital Twin (DT).
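An attenuation model of the kind described above is, in spirit, a log-distance path-loss model with a per-floor penalty. The sketch below is a generic illustration; the reference loss, path-loss exponent, floor loss, and sensitivity values are placeholder assumptions, not the coefficients fitted in the study:

```python
import math

def path_loss_db(distance_m, n_floors, pl0_db=40.0, path_loss_exp=3.0,
                 floor_loss_db=15.0, d0_m=1.0):
    """Hedged log-distance path-loss model with a per-floor attenuation
    term. All default parameter values are illustrative only."""
    d = max(distance_m, d0_m)  # clamp to the reference distance
    return (pl0_db
            + 10 * path_loss_exp * math.log10(d / d0_m)
            + floor_loss_db * n_floors)

def link_ok(distance_m, n_floors, tx_power_dbm=14.0, sensitivity_dbm=-120.0):
    """A link closes if the received power stays above the gateway's
    receiver sensitivity (again, illustrative values)."""
    return tx_power_dbm - path_loss_db(distance_m, n_floors) >= sensitivity_dbm
```

With fitted coefficients in place of the placeholders, sweeping `distance_m` and `n_floors` until `link_ok` fails yields a gateway-placement rule of the "every X meters, every Y floors" form stated in the abstract.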
The high prevalence of office stress and its detrimental health consequences are of concern to individuals, employers and society at large. Laboratory studies investigating office stress have mostly relied on data from participants who were tested individually on abstract tasks. In this study, we examined the effect of psychosocial office stress and work interruptions on the psychobiological stress response in a realistic but controlled group office environment. We also explored the role of cognitive stress appraisal as an underlying mechanism mediating the relationship between work stressors and the stress response. Ninety participants were randomly assigned to either a control condition or one of two experimental conditions in which they were exposed to psychosocial stress with or without prior work interruptions in a realistic multi-participant laboratory setting. To induce psychosocial stress, we adapted the Trier Social Stress Test for Groups to an office environment. Throughout the experiment, we continuously monitored heart rate and heart rate variability. Participants repeatedly reported on their current mood, calmness, wakefulness and perceived stress and gave saliva samples to assess changes in salivary cortisol and salivary alpha-amylase. Additionally, cognitive appraisal of the psychosocial stress test was evaluated. Our analyses revealed significant group differences for most outcomes during or immediately after the stress test (i.e., mood, calmness, perceived stress, salivary cortisol, heart rate, heart rate variability) and during recovery (i.e., salivary cortisol and heart rate). Interestingly, the condition that experienced work interruptions showed a higher increase of cortisol levels but appraised the stress test as less threatening than individuals who experienced only psychosocial stress.
Exploratory mediation analyses revealed a blunted response in subjective measures of stress, which was partially explained by the differences in threat appraisal. The results showed that experimentally induced work stress led to significant responses in subjective measures of stress, the hypothalamic-pituitary-adrenal axis, and the autonomic nervous system. However, there appears to be a discrepancy between the psychological and biological responses to preceding work interruptions. Appraising psychosocial stress as less threatening but still as challenging could be an adaptive way of coping and reflect a state of engagement and eustress.
Renewable energy systems (RES) can impact landscape aesthetics and influence the public's perception of the landscape and their acceptance of large infrastructure projects. Perceptual processes have consequences for both physiological and behavioral reactions to visual landscape changes and have not been systematically assessed in the context of RES. In this paper, we measured participants' physiological (electrodermal activity) and behavioral (i.e., landscape preferences) responses to landscapes with different amounts of RES. The visual stimuli were composed of either a low or high amount of wind turbines and photovoltaic systems in seven different landscape types. Participants were asked to choose their preferred landscape image from pairs of sequentially presented images while we recorded their electrodermal activity. The results revealed that participants were significantly more physiologically aroused while viewing landscapes with high RES compared to landscapes with low RES. We also found that the participants significantly preferred landscapes with low RES to landscapes with high RES and that this effect was larger for some landscapes than others. The results also revealed significant differences in preferences among landscape types. Specifically, participants tended to prefer the more natural landscapes to the more urban landscapes. A systematic analysis of the visual features of these stimuli revealed a positive correlation between physiological arousal and the visual impact of photovoltaic systems. Overall, we conclude that both physiology and behavior can be relevant for studies of landscape perception and that these insights may inform planners and policy makers in the implementation of landscape changes related to RES.
Dense crowds in public spaces have often caused serious security issues at large events. In this paper, we study the 2010 Love Parade disaster, for which a large amount of data (e.g. research papers, professional reports and video footage) exist. We reproduce the Love Parade disaster in a three-dimensional computer simulation calibrated with data from the actual event and using the social force model for pedestrian behaviour. Moreover, we simulate several crowd management strategies and investigate their ability to prevent the disaster. We evaluate these strategies in virtual reality (VR) by measuring the response and arousal of participants while experiencing the simulated event from a festival attendee’s perspective. Overall, we find that opening an additional exit and removing the police cordons could have significantly reduced the number of casualties. We also find that this strategy affects the physiological responses of the participants in VR.
Gaining awareness of the user’s affective states enables smartphones to support enriched interactions that are sensitive to the user’s context. To accomplish this on smartphones, we propose a system that analyzes the user’s text typing behavior using a semi-supervised deep learning pipeline for predicting affective states measured by valence, arousal, and dominance. Using a data collection study with 70 participants on text conversations designed to trigger different affective responses, we developed a variational auto-encoder to learn efficient feature embeddings of two-dimensional heat maps generated from touch data while participants engaged in these conversations. Using the learned embedding in a cross-validated analysis, our system predicted three levels (low, medium, high) of valence (AUC up to 0.84), arousal (AUC up to 0.82), and dominance (AUC up to 0.82). These results demonstrate the feasibility of our approach to accurately predict affective states based only on touch data.
Current Mixed Reality (MR) systems rely on a variety of sensors (e.g., cameras, eye tracking, GPS) to create immersive experiences. Data collected by these sensors are necessary to generate detailed models of a user and the environment that allow for different interactions with the virtual and the real world. Generally, these data contain sensitive information about the user, objects, and other people that make up the interaction. This is particularly the case for MR systems with eye tracking, because these devices are capable of inferring the identity and cognitive processes related to attention and arousal of a user. The goal of this position paper is to raise awareness of privacy issues that result from aggregating user data from multiple sensors in MR. Specifically, we focus on the challenges that arise from collecting eye tracking data and outline different ways gaze data may contribute to alleviating some of the privacy concerns from aggregating sensor data.
The inspection of feature-rich information spaces often requires supportive tools that reduce visual clutter without sacrificing details. One common approach is to use focus+context lenses that provide multiple views of the data. While these lenses present local details together with global context, they require additional manual interaction. In this paper, we discuss the design space for gaze-adaptive lenses and present an approach that automatically displays additional details with respect to visual focus. We developed a prototype for a map application capable of displaying names and star-ratings of different restaurants. In a pilot study, we compared the gaze-adaptive lens to a mouse-only system in terms of efficiency, effectiveness, and usability. Our results revealed that participants were faster in locating the restaurants and more accurate in a map drawing task when using the gaze-adaptive lens. We discuss these results in relation to observed search strategies and inspected map areas.
A carefully designed map can reduce pedestrians’ cognitive load during wayfinding and may be an especially useful navigation aid in crowded public environments. In the present paper, we report three studies that investigated the effects of map complexity and crowd movement on wayfinding time, accuracy and hesitation using both online and laboratory-based networked virtual reality (VR) platforms. In the online study, we found that simple map designs led to shorter decision times and higher accuracy compared to complex map designs. In the networked VR set-up, we found that co-present participants made very few errors. In the final VR study, we replayed the traces of participants’ avatars from the second study so that they indicated a different direction than the maps. In this scenario, we found an interaction between map design and crowd movement in terms of decision time and the distributions of locations at which participants hesitated. Together, these findings can help the designers of maps for public spaces account for the movements of real crowds.
Signage systems are critical for communicating spatial information during wayfinding among a plethora of noise in the environment. A proper signage system can improve wayfinding performance and user experience by reducing the perceived complexity of the environment. However, previous models of sign-based wayfinding do not incorporate realistic noise or quantify the reduction in perceived complexity from the use of signage. Drawing upon concepts from information theory, we propose and validate a new agent-signage interaction model that quantifies the wayfinding information available from signs. We conducted two online crowd-sourcing experiments to compute the distribution of a sign’s visibility and an agent’s decision-making confidence as a function of observation angle and viewing distance. We then validated this model using a virtual reality (VR) experiment with trajectories from human participants. The crowd-sourcing experiments provided a distribution of decision-making entropy (conditioned on visibility) that can be applied to any sign/environment. From the VR experiment, a training dataset of 30 trajectories was used to refine our model, and the remaining test dataset of 10 trajectories was compared with agent behavior using dynamic time warping (DTW) distance. The results revealed a reduction of 38.76% in DTW distance between the average trajectories before and after refinement. Our refined agent-signage interaction model provides realistic predictions of human wayfinding behavior using signs. These findings represent a first step towards modeling human wayfinding behavior in complex real environments in a manner that can incorporate several additional random variables (e.g., environment layout).
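The DTW distance used to compare human and agent trajectories follows the classic dynamic-programming recurrence. A minimal sketch for one-dimensional sequences is shown below; comparing 2-D trajectories would substitute a Euclidean point distance for the absolute difference:

```python
def dtw_distance(a, b, dist=lambda p, q: abs(p - q)):
    """Classic dynamic-programming DTW between two sequences.
    D[i][j] holds the minimal cumulative cost of aligning a[:i] with b[:j];
    each cell extends the cheapest of the three admissible warping moves."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Because DTW warps the time axis, two trajectories that traverse the same path at different speeds still receive a small distance, which makes it a natural measure for comparing human and agent walking traces.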
This study investigates how social and physical environments affect human wayfinding and locomotion behaviors in a virtual multi-level shopping mall. Participants were asked to locate a store inside the virtual building as efficiently as possible. We examined the effects of crowdedness, start floor, and trial number on wayfinding strategies, initial route choices, and locomotion behaviors. The results showed that crowdedness did not affect wayfinding strategies or initial route choices, but did affect locomotion in that participants in the high crowdedness condition were more likely to avoid crowds by moving close to the boundaries of the environment. The results also revealed that participants who started on the second floor were more likely to use the floor strategy than participants who started on the third floor, possibly because of the structure of the virtual building. These results suggest that both physical and social environments can influence multi-level indoor wayfinding.
Living in a disadvantaged neighborhood is associated with worse health and early mortality. Although many mechanisms may partially account for this effect, disadvantaged neighborhood environments are hypothesized to elicit stress and emotional responses that accumulate over time and influence physical and mental health. However, evidence for neighborhood effects on stress and emotion is limited due to methodological challenges. In order to address this question, we developed a virtual reality experimental model of neighborhood disadvantage and affluence and examined the effects of simulated neighborhoods on immediate stress and emotion. Exposure to neighborhood disadvantage resulted in greater negative emotion, less positive emotion, and more compassion, compared to exposure to affluence. However, the effect of virtual neighborhood environments on blood pressure and electrodermal reactivity depended on parental education. Participants from families with lower education exhibited greater reactivity to the disadvantaged neighborhood, while those from families with higher education exhibited greater reactivity to the affluent neighborhood. These results demonstrate that simulated neighborhood environments can elicit immediate stress reactivity and emotion, but the nature of physiological effects depends on sensitization to prior experience.
The role of affective states in learning has recently attracted considerable attention in education research. The accurate prediction of affective states can help increase the learning gain by incorporating targeted interventions that are capable of adjusting to changes in the individual affective states of students. Until recently, most work on the prediction of affective states has relied on expensive and stationary lab devices that are not well suited for classrooms and everyday use. Here, we present an automated pipeline capable of accurately predicting (AUC up to 0.86) the affective states of participants solving tablet-based math tasks using signals from low-cost mobile bio-sensors. In addition, we show that we can achieve a similar classification performance (AUC up to 0.84) by only using handwriting data recorded from a stylus while students solved the math tasks. Given the emerging digitization of classrooms and increased reliance on tablets as teaching tools, stylus data may be a viable alternative to bio-sensors for the prediction of affective states.
Exploring a city panorama from a vantage point is a popular tourist activity. Typical audio guides that support this activity are limited by their lack of responsiveness to user behavior and by the difficulty of matching audio descriptions to the panorama. These limitations can inhibit the acquisition of information and negatively affect user experience. This paper proposes Gaze-Guided Narratives as a novel interaction concept that helps tourists find specific features in the panorama (gaze guidance) while adapting the audio content to what has been previously looked at (content adaptation). Results from a controlled study in a virtual environment (n=60) revealed that a system featuring both gaze guidance and content adaptation obtained better user experience, lower cognitive load, and led to better performance in a mapping task compared to a classic audio guide. A second study with tourists situated at a vantage point (n=16) further demonstrated the feasibility of this approach in the real world.
The collective behavior of human crowds often exhibits surprisingly regular patterns of movement. These patterns stem from social interactions between pedestrians such as when individuals imitate others, follow their neighbors, avoid collisions with other pedestrians, or push each other. While some of these patterns are beneficial and promote efficient collective motion, others can seriously disrupt the flow, ultimately leading to deadly crowd disasters. Understanding the dynamics of crowd movements can help urban planners manage crowd safety in dense urban areas and develop an understanding of dynamic social systems. However, the study of crowd behavior has been hindered by technical and methodological challenges. Laboratory experiments involving large crowds can be difficult to organize, and quantitative field data collected from surveillance cameras are difficult to evaluate. Nevertheless, crowd research has undergone important developments in the past few years that have led to numerous research opportunities. For example, the development of crowd monitoring based on the virtual signals emitted by pedestrians’ smartphones has changed the way researchers collect and analyze live field data. In addition, the use of virtual reality, and multi-user platforms in particular, has paved the way for new types of experiments. In this review, we describe these methodological developments in detail and discuss how these novel technologies can be used to deepen our understanding of crowd behavior.
Virtual reality (VR) experiments are increasingly employed because of their internal and external validity compared to real-world observation and laboratory experiments, respectively. VR is especially useful for geographic visualizations and investigations of spatial behavior. In spatial behavior research, VR provides a platform for studying the relationship between navigation and physiological measures (e.g., skin conductance, heart rate, blood pressure). Specifically, physiological measures allow researchers to address novel questions and constrain previous theories of spatial abilities, strategies, and performance. For example, individual differences in navigation performance may be explained by the extent to which changes in arousal mediate the effects of task difficulty. However, the complexities in the design and implementation of VR experiments can distract experimenters from their primary research goals and introduce irregularities in data collection and analysis. To address these challenges, the Experiments in Virtual Environments (EVE) framework includes standardized modules such as participant training with the control interface, data collection using questionnaires, the synchronization of physiological measurements, and data storage. EVE also provides the necessary infrastructure for data management, visualization, and evaluation. The present paper describes a protocol that employs the EVE framework to conduct navigation experiments in VR with physiological sensors. The protocol lists the steps necessary for recruiting participants, attaching the physiological sensors, administering the experiment using EVE, and assessing the collected data with EVE evaluation tools. Overall, this protocol will facilitate future research by streamlining the design and implementation of VR experiments with physiological sensors.
Investigating the interactions among multiple participants is a challenge for researchers from various disciplines, including the decision sciences and spatial cognition. With a local area network and dedicated software platform, experimenters can efficiently monitor the behavior of the participants that are simultaneously immersed in a desktop virtual environment and digitalize the collected data. These capabilities allow for experimental designs in spatial cognition and navigation research that would be difficult (if not impossible) to conduct in the real world. Possible experimental variations include stress during an evacuation, cooperative and competitive search tasks, and other contextual factors that may influence emergent crowd behavior. However, such a laboratory requires maintenance and strict protocols for data collection in a controlled setting. While the external validity of laboratory studies with human participants is sometimes questioned, a number of recent papers suggest that the correspondence between real and virtual environments may be sufficient for studying social behavior in terms of trajectories, hesitations, and spatial decisions. In this article, we describe a method for conducting experiments on decision-making and navigation with up to 36 participants in a networked desktop virtual reality setup (i.e., the Decision Science Laboratory or DeSciL). This experiment protocol can be adapted and applied by other researchers in order to set up a networked desktop virtual reality laboratory.
Cognitive neuroscience has provided additional techniques for investigations of spatial and geographic thinking. However, the incorporation of neuroscientific methods still lacks the theoretical motivation necessary for the progression of geography as a discipline. Rather than reflecting a shortcoming of neuroscience, this weakness has developed from previous attempts to establish a positivist approach to behavioral geography. In this chapter, we will discuss the connection between the challenges of positivism in behavioral geography and the current drive to incorporate neuroscientific evidence. We will also provide an overview of research in geography and neuroscience. Here, we will focus specifically on large-scale spatial thinking and navigation. We will argue that research at the intersection of geography and neuroscience would benefit from an explanatory, theory-driven approach rather than a descriptive, exploratory approach. Future considerations include the extent to which geographers have the skills necessary to conduct neuroscientific studies, whether or not geographers should be equipped with these skills, and the extent to which collaboration between neuroscientists and geographers can be useful.
EVE is a framework for the setup, implementation, and evaluation of experiments in virtual reality. The framework aims to reduce repetitive and error-prone steps that occur during experiment setup while providing data management and evaluation capabilities. EVE aims to assist researchers who do not have specialized training in computer science. The framework is based on the popular platforms of Unity and MiddleVR. Database support, visualization tools, and scripting for R make EVE a comprehensive solution for research using VR. In this article, we illustrate the functions and flexibility of EVE in the context of an ongoing VR experiment called Neighbourhood Walk.
Previous research in spatial cognition has often relied on simple spatial tasks in static environments in order to draw inferences regarding navigation performance. These tasks are typically divided into categories (e.g., egocentric or allocentric) that reflect different two-systems theories. Unfortunately, this two-systems approach has been insufficient for reliably predicting navigation performance in virtual reality (VR). In the present experiment, participants were asked to learn and navigate towards goal locations in a virtual city and then perform eight simple spatial tasks in a separate environment. These eight tasks were organised along four orthogonal dimensions (static/dynamic, perceived/remembered, egocentric/allocentric, and distance/direction). We employed confirmatory and exploratory analyses in order to assess the relationship between navigation performance and performances on these simple tasks. We provide evidence that a dynamic task (i.e., intercepting a moving object) is capable of predicting navigation performance in a familiar virtual environment better than several categories of static tasks. These results have important implications for studies on navigation in VR that tend to over-emphasise the role of spatial memory. Given that our dynamic tasks required efficient interaction with the human interface device (HID), they were more closely aligned with the perceptuomotor processes associated with locomotion than wayfinding. In the future, researchers should consider training participants on HIDs using a dynamic task prior to conducting a navigation experiment. Performances on dynamic tasks should also be assessed in order to avoid confounding skill with an HID and spatial knowledge acquisition.
We tell stories to save the past. Most of these stories today are experienced through reading texts, and we consequently are denied the visceral experience of the past even though we strive to recapture and animate lost worlds through our distinct senses. Virtual Plasencia is our highly realistic and interactive model of the Spanish medieval city of Plasencia. Virtual Plasencia offers dynamic new ways of storytelling via visual and auditory senses. By navigating the three-dimensional city simulation, users begin to experience the sights and sounds of daily life in a medieval city, meander its cobbled streets and contemplate its principal structures and residences, and observe human interactions from different (e.g., religious, personal, communal) points of view. Inside Virtual Plasencia, users encounter people and places in ways that traditional written narratives cannot usually achieve. The opportunity to observe historical events in loco represents a valuable new form of representation of the past.
One significant feature of urbanisation in the twenty-first century is the increase in large, complex and densely populated city quarters. Airports, shopping precincts, sports venues and cultural facilities increasingly combine with generic function buildings such as hotels, housing, businesses and offices to produce horizontal and vertical nodes in a city. The capacity of such city quarters to bring large numbers of people into proximity produces crowds of unprecedented complexity. The manner in which such crowds ‘behave’ in space by aggregating, disaggregating, flowing or stalling generates new kinds of urban experience that can be thrilling, bewildering, stressful or even threatening. In turn, this creates a set of complex challenges for architectural design and its capacity to understand human behaviour and crowd dynamics.
Signage systems are critical for communicating environmental information. Signage that is visible and properly located can assist individuals in making efficient navigation decisions during wayfinding. Drawing upon concepts from information theory, we propose a framework to quantify the wayfinding information available in a virtual environment. Towards this end, we calculate and visualize the uncertainty in the information available to agents for individual signs. In addition, we examine the influence of new signs on the overall available information (e.g., joint entropy, conditional entropy, mutual information). The proposed framework can serve as the backbone for an evaluation tool to help architects during different stages of the design process by analyzing the efficiency of the signage system.
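The information-theoretic quantities named above can be illustrated with a minimal sketch. The joint distribution below is hypothetical (not from the paper): it treats the directions indicated by two signs as discrete random variables and computes entropy, joint entropy, conditional entropy, and mutual information from first principles.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution P(A, B) over the directions
# indicated by two signs A and B at nearby decision points.
joint = {
    ("left", "left"): 0.4,
    ("left", "right"): 0.1,
    ("right", "left"): 0.1,
    ("right", "right"): 0.4,
}

# Marginal distributions P(A) and P(B).
pa, pb = {}, {}
for (a, b), p in joint.items():
    pa[a] = pa.get(a, 0.0) + p
    pb[b] = pb.get(b, 0.0) + p

h_a = entropy(pa.values())            # H(A)
h_b = entropy(pb.values())            # H(B)
h_ab = entropy(joint.values())        # joint entropy H(A, B)
h_b_given_a = h_ab - h_a              # conditional entropy H(B | A)
mi = h_a + h_b - h_ab                 # mutual information I(A; B)

print(f"H(A)   = {h_a:.3f} bits")
print(f"H(A,B) = {h_ab:.3f} bits")
print(f"H(B|A) = {h_b_given_a:.3f} bits")
print(f"I(A;B) = {mi:.3f} bits")
```

In this toy example the two signs are correlated, so knowing the direction of one sign reduces the uncertainty about the other; a redundant sign (high mutual information with existing signs) adds little wayfinding information.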
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to make judgments regarding the relative directions of the different landmarks along the route. In the first experiment, self-report questionnaires measuring visual and verbal cognitive styles were administered to examine correlations between cognitive styles, landmark recognition, and judgments of relative direction. Results demonstrate a tradeoff in which the verbal cognitive style is more beneficial for recognizing individual landmarks than for judging relative directions between them, whereas the visual cognitive style is more beneficial for judging relative directions than for landmark recognition. In a second experiment, we manipulated the use of verbal and visual strategies by varying task instructions given to separate groups of participants. Results confirm that a verbal strategy benefits landmark memory, whereas a visual strategy benefits judgments of relative direction. The manipulation of strategy by altering task instructions appears to trump individual differences in cognitive style. Taken together, we find that processing different details during route encoding, whether due to individual proclivities (Experiment 1) or task instructions (Experiment 2), results in benefits for different components of navigation-relevant information. These findings also highlight the value of considering multiple sources of individual differences as part of spatial cognition investigations.
Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have progressed our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people consisting of three longitudinal models (i.e., convergent, cumulative, and persistent). Recent advances in neuroscience and technological devices have provided novel insights into the different neural mechanisms underlying spatial navigation by blind and sighted people and the potential for functional reorganization. Despite these advances, there is still a lack of consensus regarding the extent to which locomotion and wayfinding depend on amodal spatial representations. This challenge largely stems from methodological limitations such as heterogeneity in the blind population and terminological ambiguity related to the concept of cognitive maps. Coupled with an over-reliance on potential technological solutions, the field has diffused into theoretical and applied branches that do not always communicate. Here, we review research on navigation by congenitally blind individuals with an emphasis on behavioral and neuroscientific evidence, as well as the potential of technological assistance. Throughout the article, we emphasize the need to disentangle strategy choice and performance when discussing the navigation abilities of the blind population.
We present the results of a study that investigated the interaction of strategy and scale on search quality and efficiency for vista-scale spaces. The experiment was designed such that sighted participants were required to locate “invisible” objects whose locations were marked only with audio cues, thus enabling sight to be used for search coordination, but not for object detection. Participants were assigned to one of three conditions: a small indoor space (~20 m²), a medium-sized outdoor space (~250 m²), or a large outdoor space (~1000 m²), and the entire search for each participant was recorded either by a laser tracking system (indoor) or by GPS (outdoor). Results revealed a clear relationship between the size of space and search strategy. Individuals were likely to use ad-hoc methods in smaller spaces, but they were much more likely to search large spaces in a systematic fashion. In the smallest space, 21.5% of individuals used a systematic gridline search, but the rate increased to 56.2% for the medium-sized space, and 66.7% for the large-sized space. Similarly, individuals were much more likely to revisit previously found locations in small spaces, but avoided doing so in large spaces, instead devoting proportionally more time to search. Our results suggest that even within vista-scale spaces, perceived transport costs increase at a decreasing rate with distance, resulting in a distinct shift in exploration strategy type.
Choosing the topic for research is an expression of a person’s fascination for the subject. This fascination is nothing more than the culmination of perceived peculiarities about someone or something that constantly intrigues the individual. In my case I was taken aback by the astounding differences found in the neighbouring districts of Parc-Extension and the Town of Mont-Royal. Call it fate or serendipity, but all it took was a wrong turn on l’Acadie Boulevard to prompt my curiosity about the differences in lifestyle between these two areas. The sight of the fence that separated the calm, quiet and spatially organized environment of the Town of Mont-Royal from the noisy and crowded setting of Parc-Extension was enough to offend me. I had never come across such a variation in land use within such a short distance. Intrigued by this phenomenon, I set out to investigate the reason for this spatial segregation. The study of the different lifestyles required that I go beyond a simple observation of the residents’ daily activities and find a way to experience life as a fellow resident.
We investigated the interpolation of missing values in data that were fit by bidimensional regression models. This addresses a problem in spatial cognition research in which sketch maps are used to assess the veracity of spatial representations. In several simulations, we compared samples of different sizes with different numbers of interpolated coordinate pairs. A genetic algorithm was used in order to estimate parameter values. We found that artificial inflation in the fit of bidimensional regression models increased with the percent of interpolated coordinate pairs. Furthermore, samples with fewer coordinate pairs resulted in more inflation than samples with more coordinate pairs. These results have important implications for statistical models, especially those applied to the analysis of spatial data.
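The abstract above concerns fit inflation in bidimensional regression; the study itself estimated parameters with a genetic algorithm. As a point of reference, the Euclidean case of bidimensional regression has a closed-form least-squares solution, sketched below with hypothetical landmark coordinates. Treating (x, y) pairs as complex numbers, fitting w ≈ b0 + b1·z recovers the translation, rotation, and uniform scaling that best map the reference configuration onto the sketch-map estimates.

```python
import numpy as np

def bidimensional_r2(ref, est):
    """Euclidean bidimensional regression: fit est ≈ b0 + b1 * ref with
    complex-valued least squares and return the r^2 of the fit."""
    z = np.array([complex(x, y) for x, y in ref])
    w = np.array([complex(x, y) for x, y in est])
    # Design matrix for w = b0 + b1 * z (b0, b1 complex).
    A = np.column_stack([np.ones_like(z), z])
    (b0, b1), *_ = np.linalg.lstsq(A, w, rcond=None)
    pred = b0 + b1 * z
    ss_res = np.sum(np.abs(w - pred) ** 2)   # residual sum of squares
    ss_tot = np.sum(np.abs(w - w.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: true landmark coordinates and sketch-map estimates.
true_xy = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5)]
sketch_xy = [(1.0, 1.1), (4.9, 0.8), (5.1, 4.2), (0.8, 4.0), (3.2, 6.1)]

r2 = bidimensional_r2(true_xy, sketch_xy)
print(f"bidimensional r^2 = {r2:.3f}")
```

The inflation problem described in the abstract arises when some of the coordinate pairs fed into such a fit are themselves interpolated from the model rather than observed, which mechanically drives r² upward, more so in small samples.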
There are marked individual differences in the formation of cognitive maps both in the real world and in virtual environments (VE; e.g., Blajenkova, Motes, & Kozhevnikov, 2005; Chai & Jacobs, 2010; Ishikawa & Montello, 2006; Wen, Ishikawa, & Sato, 2011). These differences, however, are poorly understood and can be difficult to assess except by self-report methods. VEs offer an opportunity to collect objective data in environments that can be controlled and standardized. In this study, we designed a VE consisting of buildings arrayed along 2 separated routes, allowing for differentiation of between-route and within-route representation. Performance on a pointing task and a model-building task correlated with self-reported navigation ability. However, for participants with lower levels of between-route pointing, the Santa Barbara Sense of Direction scale (Hegarty, Richardson, Montello, Lovelace, & Subbiah, 2002) did not predict individual differences in accuracy when pointing to buildings within the same route. Thus, we confirm the existence of individual differences in the ability to construct a cognitive map of an environment, identify both the strengths and the potential weaknesses of self-report measures, and isolate a dimension that may help to characterize individual differences more completely. The VE designed for this study provides an objective behavioral measure of navigation ability that can be widely used as a research tool.
The idea that humans use flexible map-like representations of their environment to guide spatial navigation has a long and controversial history. One reason for this enduring controversy might be that individuals vary considerably in their ability to form and utilize cognitive maps. Here we investigate the behavioral and neuroanatomical signatures of these individual differences. Participants learned an unfamiliar campus environment over a period of three weeks. In their first visit, they learned the position of different buildings along two routes in separate areas of the campus. During the following weeks, they learned these routes for a second and third time, along with two paths that connected both areas of the campus. Behavioral assessments after each learning session indicated that subjects formed a coherent representation of the spatial structure of the entire campus after learning a single connecting path. Volumetric analyses of structural MRI data and voxel-based morphometry (VBM) indicated that the size of the right posterior hippocampus predicted the ability to use this spatial knowledge to make inferences about the relative positions of different buildings on the campus. An inverse relationship between gray matter volume and performance was observed in the caudate. These results suggest that (i) humans can rapidly acquire cognitive maps of large-scale environments and (ii) individual differences in hippocampal anatomy may provide the neuroanatomical substrate for individual differences in the ability to learn and flexibly use these cognitive maps.
Classical theories of spatial microgenesis (Siegel and White, 1975) posit that information about landmarks and the paths between them is acquired prior to the establishment of more holistic survey-level representations. To test this idea, we examined the neural and behavioral correlates of landmark and path encoding during a real-world route learning episode. Subjects were taught a novel 3 km route around the University of Pennsylvania campus and then brought to the laboratory where they performed a recognition task that required them to discriminate between on-route and off-route buildings. Each building was preceded by a masked prime, which could either be the building that immediately preceded the target building along the route or immediately succeeded it. Consistent with previous reports using a similar paradigm in a virtual environment (Janzen and Weststeijn, 2007), buildings at navigational decision points (DPs) were more easily recognized than non-DP buildings and recognition was facilitated by in-route vs. against-route primes. Functional magnetic resonance imaging (fMRI) data collected during the recognition task revealed two effects of interest: first, greater response to DP vs. non-DP buildings in a wide network of brain regions previously implicated in spatial processing; second, a significant interaction between building location (DP vs. non-DP) and route direction (in-route vs. against-route) in a retrosplenial/parietal-occipital sulcus region previously labeled the retrosplenial complex (RSC). These results indicate that newly learned real-world routes are coded in terms of paths between decision points and suggest that the RSC may be a critical locus for integrating landmark and path information.
This article discusses several aspects of psychosocial adjustment to blindness and low-vision and proposes that the education of both the self and society are essential for positive adjustment. It exposes some of the general misunderstandings about visual impairment and demonstrates how these are partly responsible for the perpetuation of myths and misconceptions regarding the character and abilities of this population. It argues that confidence and self-esteem are deeply connected to ability and should be regarded as constructive elements of the ego usually manifested in different types of introverted or extroverted behaviour. Wherever possible arguments will be backed by current and past research in social and abnormal psychology as well as specific case studies recorded by the author during the years he spent conducting research and working as a life-skills tutor at the Royal London Society for the Blind.
The paper reports on two studies being conducted with students from Dorton College, Royal London Society for the Blind (RLSB) in Kent. The first experiment will examine the content and accuracy of mental representations of a well-known environment. Students will walk a route around the college campus and learn the position of 10 buildings or structures. They will then be asked to make heading judgments, estimate distances, complete a spatial cued model and sequentially visit a series of locations. The second experiment will examine the strategies and coding heuristics used to explore a complex novel environment. Students will be asked to explore a maze and learn the location of different places. Their search patterns will be digitally tracked, coded and analyzed using GIS software. Students will be tested using the same methods as in the first experiment and their performance level will be correlated with their exploratory patterns. Throughout the paper we are reminded that construct validity can only be secured by employing multiple converging techniques in the collection and analysis of cognitive data. Methods should be designed to test content and accuracy as well as the utility of mental representations.
The article reports on the second and final stage of a study concerned with the impact of an entertainment retrofit on the performance of a shopping center. The study focused on the changes in the type of visitor and the level of patronage inside the Place Alexis Nihon in downtown Montreal after the construction of the neighboring Pepsi Forum. By tracking 729 individuals, a comprehensive picture of the spatial behavior and trip characteristics of visitors was developed that was compared with the behavior of 722 individuals before the entertainment center was opened. Motivations, trip-planning and evaluations were also probed with a questionnaire applied to 283 individuals. Expectations that each center would benefit from the presence of the other were largely not fulfilled. Results indicated that only a slight synergy exists between the entertainment venues and shopping. The estimated contribution to the shopping center of visitors whose first destination was the entertainment center was 5%. Except for anchor store patronage, the center experienced a decrease in visits to small stores and a tendency for visitors to remain on floors close to the ground. One year after opening, the entertainment center operators continue to try new retailing combinations to build their own clientele.