This chapter delves into the innovative fusion of sensory technologies and art, illustrating how this synergy is expanding artistic expression and experience. It begins by examining sensory technologies as new mediums in the arts, focusing on ambient sound applications and the transformative impact of virtual and augmented reality in visual arts. This includes exploring virtual tourism and its influence on cultural aesthetics, the use of VR in language revitalization, and how AR is reshaping color interpretation in art. The chapter then presents a series of artist case studies, showcasing how visual, auditory, tactile, olfactory, and gustatory technologies are being integrated into artistic processes. These studies highlight solutions for colorblindness, immersive language learning environments, auditory representations of visual phenomena, and the use of digital scent technology. The future of sensory experience in art is also explored, discussing the roles of projected synesthesia in art therapy, multisensory technologies in bridging sensory gaps, and emerging trends like tangible media and biofeedback in immersive art experiences. This chapter not only illustrates current innovations but also forecasts the exciting potential of sensory technologies in expanding the boundaries of how we create, interact with, and perceive art in the modern world.

1 Sensory Technologies as Artistic Medium

Exploring the intersection of art and sensory technology reveals a transformative space where traditional media merge with cutting-edge innovations. Historically, art has been experienced within the physical confines of galleries, museums, and performance spaces. Now, these boundaries have been transcended as art enters the sphere of immersive experiences built on a variety of emergent technologies, including extended reality. Extended reality (XR) represents a suite of technologies that enhance our perception of the digital and physical worlds. Augmented reality (AR) layers virtual elements over the real world, while virtual reality (VR) immerses users in a completely virtual environment. Mixed reality (MR) merges the two, allowing real and virtual elements to coexist and interact in a single space. Such progressions of immersion into multisensory dimensions offer audiences novel and profound ways to interact with artistic works (Vital et al., 2023). The exploration thus starts with visual enhancements, such as XR, which have become the forerunners in this evolution. For instance, VR allows for a form of tourism that transcends physical and geographical barriers, inviting viewers into digitally reconstructed spaces that pulse with cultural and historical resonance (Hutson & Olsen, 2022). AR layers digital information onto the physical world, altering and enriching perceptions of art and allowing even those with visual impairments to experience the kaleidoscope of colors and forms (Akın & Cömert, 2023).

Beyond the traditional forms of experiencing art, and in keeping with the challenge to ocularcentrism, artists transcend traditional visual-centric methods by engaging multiple senses in installations and interactive exhibits, fostering holistic sensory experiences. In considering other sense experiences, ambient soundscapes represent another realm for investigation with artists using them to envelop viewers in an auditory experience that complements the visual (Fraisse et al., 2023). This auditory dimension can stand alone as a form of art or blend with visual, tactile, or olfactory elements to create a holistic experience (Murao, 2022). Spatial-audio technology envelops the listener, creating a three-dimensional soundscape that can replicate the acoustics of a Medieval cathedral or the intimate whispers of a quiet, modern gallery.

While sight and sound in exhibition spaces may be familiar to the art-going public, there are examples of other artists who challenge convention by tapping into cross-sensory perceptions. These less charted territories include tactile (touch), olfactory (smell), and even gustatory (taste) experiences within art. Tactile feedback systems bring texture and touch to digital creations, allowing objects (or even light and sound) to be felt as well as seen (Chaudhury et al., 2022). Olfactory technologies introduce scent as an element of narrative and ambiance, while gustatory experiments challenge artists and viewers to consider taste as a component of the aesthetic experience (Bembibre & Strlič, 2022). Looking ahead, technology may enable the creation of synesthetic experiences that allow individuals to perceive art in ways typically exclusive to those with the neurological condition of synesthesia. These experiences merge novelty with expanded inclusivity, bridging gaps for those with different sensory abilities. Emerging technologies that allow for a dialogue between the physical responses of viewers and the art itself, like tangible media and biofeedback, create a potential feedback loop that personalizes and deepens engagement (Castelló et al., 2023). The boundaries of art are thus continuously pushed further by technological advances, promising a future where artistic expression and sensory experience are limited only by the imagination, a time when the full spectrum of human sensory and emotional experience can be engaged and celebrated through art.

The fusion of art and sensory perception has given rise to multisensory exhibitions which mirror the layered experiences of synesthetes. Renowned and emerging artists alike are navigating these waters, bringing to life the intangible qualities of synesthesia in their work. The art of David Hockney (1937–), for instance, dissects the nuances of color perception (Fig. 2.1), while Christa Sommerer (1964–) creates interactive pieces that meld the organic with the digital (Bergantini, 2019; Boden & Edmonds, 2019). Similarly, French artist Joanie Lemercier (1982–) transports viewers into the heart of nature through his illuminated geometric patterns (https://joanielemercier.com/) (Tian, 2023). These forays into sensory fusion have been more thoroughly explored by Yann Marussich (1966–) (https://www.yannmarussich.ch/home.php) and Shiro Takatani (1963–) (http://shiro.dumbtype.com/works/index) in their performative works, which exemplify this intersection by merging sound, light, aroma, and texture to replicate the synesthetic experience in a setting meant to both disorient and captivate (Andrieu et al., 2020). Similarly, the installations of Anicka Yi (1971–) (https://www.anickayistudio.biz/) weave together scent and sound, creating an environment that engages all senses, while the works of Gustav Metzger (1926–2017) and Nam June Paik (1932–2006) present a dialogue on how technology shapes human experience (Barrett, 2023; Daris, 2021). A notable example from Paik is TV Buddha (1974) (Fig. 2.2), in which a traditional Buddha statue is placed in a contemplative pose, facing a television that broadcasts a live feed of the statue itself. This innovative work melds technology with spirituality, presenting a thought-provoking intersection of Eastern philosophy and Western media culture. The arrangement creates a loop of reflection, where the Buddha, symbolizing peace and meditation, is juxtaposed with the dynamic, ever-changing nature of modern media.
This continuous loop blurs the lines between reality and its digital representation, inviting viewers to ponder the interplay of the ancient and the contemporary, and the profound implications of this fusion on our perception of reality and spirituality (Lim, 2019).

Fig. 2.1
A collage of a woman posing with one hand near her chin and the other above her head. The collage is composed of 32 square photos, each contributing a portion to the overall image. The photo is characterized by soft colors and vintage tones.

(Source From Wikimedia Commons, licensed under CC-BY 2.0)

David Hockney, Celia, Los Angeles. April 10, 1982

Fig. 2.2
A photo of a dull-lit room where a Buddha statue is placed on the table against a monitor. The monitor is connected to a camera on a tripod behind the monitor. The Buddha statue is reflected on the screen.

(Source From Wikimedia Commons, licensed under CC-BY 2.0)

Nam June Paik, TV Buddha, 1974

The exhibit at the Museum of Modern Art (MoMA), Soundings: A Contemporary Score (2013) (https://www.moma.org/calendar/exhibitions/1351), brought the phenomenon of synesthesia into the public eye, featuring a group exhibition of sound art that allowed visitors to experience the intertwining of the senses firsthand (Heddaya, 2013). Emilie Gossiaux (1989–) (http://www.emiliegossiaux.com/), influenced by neuroscience, provides glimpses into the synesthetic process through her art (Lane, 2023). On the other hand, German-born artist Diemut Strebe (1982–) (https://www.diemutstrebe.com/) takes a more extreme approach in “Works on the Intersection of Art and Science,” plunging visitors into sensory deprivation and challenging them to find meaning in the void (Sexton, 2021). These examples represent a thematic mosaic where art not only imitates life but also seeks to transcend ordinary sensory experiences, inviting a deeper exploration of perception and the human mind.

As these examples illustrate, artists are increasingly conscious of the diverse sensory profiles and conditions of audiences, creating works that acknowledge and embrace these differences. The work of artist-activist Liz Crow, such as Resistance (2009), utilizes technology to simulate the perceptual experiences of individuals with disabilities, while the UK-based Tim Etchells (1962–) also delves into the lived experience of physical disability in his performance art (Prendergast, 2021). Other examples include the sensory-rich installations of Stockholm-based Christine Ödlund (1963–) (https://www.christineodlund.se/), which immerse viewers in a world akin to blindness, whereas conceptual artist-photographer Teresa Margolles (1963–) (https://macm.org/en/exhibitions/teresa-margolles-mundos/) employs forensic science to address themes of trauma and disability (Olivas, 2023). In each of these examples, various technologies are used to simulate different sensory experiences, primarily for a neurotypical and non-disabled audience. However, other creatives have sought to represent neurodivergent perspectives, including autism, more extensively, creating work by and for these audiences.

This sensitivity to neurodiversity is driving a surge of creativity aimed at neurological inclusivity. Evidence suggests a higher prevalence of synesthesia among individuals with autism, particularly mirror-touch synesthesia, which may arise from unique brain processing patterns in social and sensory information (Bouvet et al., 2019; Van Leeuwen et al., 2021). These findings have inspired artistic endeavors that cater to and celebrate these distinct sensory experiences. The Reality Center (https://www.realitymgmt.com/about), for instance, has pioneered a model that merges enjoyment with therapeutic neuroscientific approaches, offering biofeedback and frequency technologies designed to stimulate the brain in ways akin to meditation or psychedelic therapies. In this way, the center serves as a sanctuary for those who have experienced trauma, providing a regulated environment free from overstimulation and promoting mental well-being without relying solely on traditional therapeutic conversations.

Aside from organizations, artists are also creating art that provides insights into the autistic experience. For instance, artists with autism, like Anna Berry, are creating works (https://www.annaberry.co.uk/) that center around experiences of marginalized groups, while Amy Sequenzia, a non-speaking autistic artist, employs painting and installation art to forge a visual language that expresses her thoughts and experiences beyond verbal communication (Brackett, 2022). Likewise, artists like Vietnamese-Australian visual artist and storyteller Matt Huynh (https://www.matthuynh.com/) and American visual artist Liz Phillips (https://lizphillips.net/w/) are contributing to this dialogue by focusing on how individuals with autism perceive sensory details. The drawings of Huynh emphasize sensory elements such as light and texture, while Phillips uses sound in her artwork to explore and communicate the sensory experiences of those with autism and sensory processing disorders (Cressman & DeTora, 2021). The experiences of neurodivergence are also vividly captured in multimedia works of Huynh such as Reconfiguration: An Evening With Other Lives (2015) (https://vimeo.com/145831948). A growing number of artists are therefore recognizing the sensory differences inherent in various neurotypes and are using this understanding to craft art that reflects, resonates with, and respects these differences. This evolving landscape underscores the potential for creative expression to serve as a powerful conduit for understanding, inclusivity, and awareness of the rich tapestry of human sensory and cognitive experiences.

The journey through the realm of multisensory art and its profound impact on perception and experience finds a complementary perspective in the Aesthetic Mindset Index (https://www.yourbrainonart.com/aesthetic-mindset-index) developed by computational cognitive neuroscientist Ed Vessel (Magsamen & Ross, 2023). This index, which assesses aesthetic responsiveness and the influence of art on environmental attunement, deepens our understanding of how art affects our physical and psychological states. It also provides a valuable framework for comprehending how art shapes not only our physical responses but also our perception and interpretation of color within different cultural contexts. This exploration sets the stage for delving into specific case studies in visual technologies, which will further illuminate how technological advancements in visual arts are reshaping our engagement with art, enhancing our sensory experiences, and offering new insights into the intricate relationship between art, technology, and the human mind.

2 Visual Technologies and Sensory Case Studies

In the exploration of visual technologies and sensory case studies, the complexity of how our brains process visual information must first be foregrounded. While the human eye is responsible for registering up to 80% of all sensory impressions through sight, the interpretation and understanding of this visual data involve intricate neural mechanisms. In fact, there has been far more research on vision than on any other sensory modality (Hutmacher, 2019). This section investigates how artists have harnessed visual elements to convey messages that resonate with diverse neurotypes and experiences. These artistic endeavors transcend the restrictive bounds of the sighted world, offering insights into how visual art can communicate in ways that are both universally accessible and deeply personal. The discussion will illuminate the creative ways in which artists have utilized visual technologies, not just for aesthetic purposes, but as tools to express and navigate the varied landscapes of human perception and cognitive experience.

In proceeding into the realm of visual technologies and their sensory implications, a recent study from the University of Waterloo (2023) offers crucial insights. The study demonstrates that symbols are more memorable than words, providing a more effective anchor for abstract concepts; this cognitive bias toward symbols has a profound impact on the processing of visual stimuli, with implications critical for understanding the artistic examples discussed in the subsequent section. The enhanced recall of symbols is due to their unique and unambiguous nature, which renders them readily identifiable and less susceptible to diverse interpretations than words (Clark, 2006). The findings align with dual-coding theory, which suggests that symbols, being coded both visually and verbally in the brain, have enhanced memorability (Damayanti et al., 2023). Symbols or pictorial representations can elicit a stronger neurological response than words because they are processed faster and are more directly linked to sensory and perceptual experience (Borghesani & Piazza, 2017). The brain regions involved in visual and spatial processing, such as the fusiform gyrus and the right hemisphere, are particularly activated by images and symbols, providing a more robust memory trace (Ganis & Kutas, 2003).

Alongside neurotypical preferences for symbols are the needs of those with various physical limitations. For instance, augmentative and alternative communication (AAC) systems, such as those utilizing pictograms, play a crucial role in aiding individuals with speech disabilities. These systems are particularly beneficial for individuals with autism, offering a means of communication that bypasses traditional language barriers. Pictogram-based applications, like PictoEditor, incorporate predictive functionalities that enhance the availability of desired pictograms for users. This approach becomes more effective when predictions are based on the frequency of pictogram use (Hervás et al., 2020). Additionally, Blissymbols, known for their minimalistic and semantically combinatory nature, are instrumental in tactile communication for AAC needs. Their design is geared toward maximizing tactile processing, thereby aiding in the representation of new words and meanings in diverse environments (Bliss et al., 2012). Furthermore, systems like Pictogrammar, which link to the Simple Upper Ontology and PictOntology, utilize formal semantics to manage pictogram sets and generate natural language utterances from a sequence of symbols, dynamically adapting the communication board based on syntactic and semantic content (Martínez-Santiago et al., 2020). These advancements in AAC technologies exemplify how specialized communication systems can significantly improve the quality of life for those with speech and communication challenges. The preference for symbols over words has significant implications in the field of neuroarts, especially considering individuals with varying sensory processing capabilities, such as synesthesia or visual limitations. For instance, the use of VR technology to create immersive environments for people with synesthesia allows for a more intuitive and symbol-centric exploration of their unique sensory experiences (Lee, 2023).
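The frequency-based prediction idea described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the actual PictoEditor implementation): the pictograms a user selects most often are surfaced first as suggestions.

```python
from collections import Counter

class PictogramPredictor:
    """Toy frequency-based pictogram predictor (hypothetical sketch)."""

    def __init__(self):
        self.counts = Counter()

    def record_use(self, pictogram: str) -> None:
        # Each selection increases the pictogram's frequency score.
        self.counts[pictogram] += 1

    def suggest(self, n: int = 3) -> list:
        # The most frequently used pictograms are offered first.
        return [p for p, _ in self.counts.most_common(n)]

predictor = PictogramPredictor()
for p in ["eat", "drink", "eat", "play", "eat", "drink"]:
    predictor.record_use(p)
print(predictor.suggest(2))  # ['eat', 'drink']
```

Real systems refine this with context (e.g., which pictogram tends to follow the current one), but frequency alone already shortens the search for common symbols.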

Additionally, the study sheds light on the concept of lingual vision and the use of reflective surfaces by individuals who are deaf or hard of hearing, illustrating how visual technologies can be adapted to different sensory needs (Holmström, 2022). In the context of visual impairment, understanding the role of photoreceptor function, the optic nerve, and the occipital lobe is crucial in recognizing and appreciating art, paving the way for technology-enhanced sensory experiences (Fig. 2.3) (Othman et al., 2022). Put simply, the photoreceptors in the retina capture light, the optic nerve relays those signals to the brain, and the occipital lobe interprets what is seen. Understanding this pathway helps explain how people with visual impairments can experience and appreciate art in new ways, especially with the help of technology that enhances their sensory experiences.

Fig. 2.3
An anatomy of an eyeball. The parts labeled are conjunctiva, ciliary body, suspensory ligament of lens, pupil, anterior chamber, cornea, posterior chamber, lens, hyaloid canal, sclera, retinal blood vessel, retina, optic nerve, optic disc, macula, and choroid.

(Source From Wikimedia Commons, licensed under CC0)

Eyeball dissection, 2017

Exploring various examples of technological intervention demonstrates the potential for multisensory integration. For instance, the exhibition Sensorium at Tate Britain (2015) revolutionized art appreciation by offering a multisensory experience. The exhibit, made up of four rooms, featured four paintings from the Tate collection by Richard Hamilton (1922–2011), John Latham (1921–2006), David Bomberg (1890–1957), and Francis Bacon (1909–1992). Wearing wristband monitors, visitors could not only see but also hear, smell, taste, and touch the artworks through carefully curated sensory elements. Collaborating with experts in sound, taste, scent, and touch, this exhibition transformed art into a holistic encounter. Visitors could, for example, listen to sounds corresponding to the theme of a work, smell scents inspired by the paintings, taste flavors evoked by the art, and feel physical forms echoing their essence. This innovative approach was realized through a partnership with Flying Object, a London-based creative agency that leveraged technology to tailor sensory experiences to individual works of art. Sensorium challenged traditional modes of art appreciation, providing an immersive journey that engaged multiple senses and redefined the art-gallery experience (Blundell, 2023). Such installations utilize advancements in visual technology to deepen engagement with art, making it more accessible and resonant with a wider audience.

While multisensory experiences like the Sensorium exhibition demonstrate how visual experiences can be augmented with other senses, technology can also be adeptly employed to accommodate and enhance the artistic experience for those with various visual conditions, such as colorblindness. It should be noted that there are levels of colorblindness, as illustrated in Fig. 2.4. New technologies present promising solutions for individuals with colorblindness, enhancing their ability to fully experience visual art. Applications leveraging machine learning (ML) algorithms have shown the potential to translate colors into perceivable shades or patterns for those affected (Zhang & Tan, 2023). For instance, Color Oracle (Design for the Color Impaired) (https://colororacle.org/) is an application that interprets colors into distinct shades of gray, aiding in differentiation. This colorblindness simulator shows in real time what those with common color vision impairments see, helping designers adjust their work (Engeset et al., 2022). Another application, Chromatic Vision Simulator by Kazunori Asada (1948–) (https://apps.apple.com/us/app/chromatic-vision-simulator/id389310222), provides simulations of various types of colorblindness, which could be invaluable in art exhibitions, enabling colorblind visitors to engage with artworks in a more meaningful manner. Beyond these, ColorADD (https://www.coloradd.net/en/) presents an innovative color-coding system that employs geometric shapes to signify different hues, enhancing accessibility within artistic spaces. Furthermore, Vischeck (https://www.vischeck.com/) offers a digital solution by adjusting color presentations on screens to accommodate the visual limitations of colorblind users and by simulating colorblind vision. These technological interventions represent a concerted effort to ensure that visual art remains inclusive and enriching for all audiences, regardless of their visual capacities.

Fig. 2.4
A chart presents the levels of colorblindness: normal vision 92%, deuteranomaly 2.7%, protanomaly 0.66%, protanopia 0.59%, tritanopia 0.015%, tritanomaly 0.01%, and achromatopsia less than 0.0001%. Normal vision perceives various gradients of color, which collapse to grayscale when moving toward achromatopsia.

(Source From Wikimedia Commons, licensed under CC0)

Colorblindness, 2022
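The core idea behind tools like Color Oracle, which renders colors as distinguishable gray values, can be illustrated with a standard luminance conversion. The sketch below is a generic illustration using the Rec. 601 luma weights, not the actual algorithm of any of the tools named above (real simulators model color-deficient vision in a perceptual color space):

```python
def to_gray(r: int, g: int, b: int) -> int:
    """Collapse an RGB color (0-255 channels) to one gray level using
    Rec. 601 luma weights, which approximate perceived brightness."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# Pure red and pure green are easily confused in red-green color
# blindness, but their luma values differ noticeably:
print(to_gray(255, 0, 0))  # a darker gray than pure green
print(to_gray(0, 255, 0))  # a lighter gray than pure red
```

Mapping hue differences onto brightness differences in this way is one simple route to making color-coded artwork legible to viewers who cannot separate the hues themselves.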

Continuing to explore technological solutions for various visual impairments, the focus shifts to the notable yet often overlooked effects of albinism on vision and the innovative methods technology provides for support. Albinism is characterized by reduced melanin production and results in several vision-related challenges due to the lack of pigment in the eyes. This leads to conditions such as light sensitivity, nystagmus, and photophobia, which can significantly affect visual acuity and the overall sensory experience of individuals (Moreno-Artero et al., 2021). Neuroscientific studies have further indicated that albinism may also alter the structure of visual pathways, impacting the processing of visual inputs (Ather et al., 2019; Hoffmann & Dumoulin, 2015). For example, the development of the fovea, which is crucial for sharp vision, may be affected, leading to difficulties with visual acuity and contrast sensitivity. Furthermore, there might be distinctive patterns in how the brains of individuals with albinism process visual information, especially in the domains of color vision and motion detection (Oduntan et al., 2002).

In addressing the specific challenges associated with albinism, especially those related to vision, the application of advanced technologies such as AR and VR shows significant promise. These technologies can be tailored to create adaptive visual environments, compensating for the unique visual needs of individuals with albinism. For instance, AR and VR can be programmed to adjust light levels, color contrasts, and motion perception in a controlled manner, thereby reducing issues like light sensitivity and enhancing visual clarity (Fig. 2.5) (Hsiang et al., 2022). Furthermore, these technologies can adapt or simulate real-world environments which cater to particular visual impairments and create a more accessible and comfortable visual experience. This adaptation significantly aids in routine activities and enhances the appreciation of visual arts, promoting a more inclusive and immersive experience for those with albinism. Concurrently, artists have employed similar technologies to immerse audiences in multisensory experiences.

Fig. 2.5
An illustration depicts a boy wearing VR glasses. Adjustable sliders are provided for adjusting brightness, contrast, and color settings, labeled Brightness, Contrast, V G S L, and V S L.

(Source Permission of authors)

VR adjustable light levels, 2023
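At its simplest, the kind of adaptive adjustment described above reduces to a per-pixel brightness and contrast transform. The sketch below is a generic illustration of that transform (a real AR/VR pipeline would apply it on the GPU across every frame, with parameters tuned to the individual user's sensitivity):

```python
def adjust_pixel(value: int, brightness: int = 0, contrast: float = 1.0) -> int:
    """Apply a simple brightness/contrast transform to one 8-bit channel.

    Contrast scales the value around mid-gray (128), brightness shifts it,
    and the result is clamped to the valid 0-255 range.
    """
    out = contrast * (value - 128) + 128 + brightness
    return max(0, min(255, round(out)))

# Dimming a bright pixel for a light-sensitive viewer:
print(adjust_pixel(200, brightness=-40))  # 160
```

Lowering brightness globally addresses photophobia, while raising contrast can partially compensate for the reduced contrast sensitivity associated with foveal underdevelopment.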

In the realm where art intersects with technology, artists ingeniously utilize visual elements to convey messages that resonate across different neurotypes and sensory experiences. The exploration has resulted in a diverse array of innovative and stimulating installations. In addressing visual impairment, over 80% of those affected can differentiate between light and dark (Müller et al., 2022). This fact is crucial for artists and technologists aiming to create inclusive art experiences. Leveraging what is known about photoreceptors and the processing of visual signals in the occipital lobe, artists can create works accessible and engaging for visually impaired audiences. This approach is evident in installations which use light and shadow to create immersive experiences, making visual art more inclusive and resonant with a wider audience.

Starting with the interplay of colors and soundscapes, The Color of Rain exhibit at the Museum of Arts and Sciences (MOAS) stands as a poignant exploration of color and perception, intertwining the elements of rain and artistic interpretation. Launched on March 26, 2022, the exhibit challenges visitors to reimagine the relationship between rain and color. A diverse group of 44 artists rose to this challenge, each contributing their unique vision to the exhibit. The resulting display is a symphony of umbrellas, each a canvas where rain's interaction with color is reimagined and expressed. Visitors are immersed in an ambiance filled with the sounds of rain, courtesy of CitiSound, further enhancing the sensory experience. As they move through the exhibit, they encounter a multitude of interpretations, each umbrella revealing a distinct story of how rain transforms and interacts with color. Tobey, curator of the exhibition, offers an insightful perspective on rain's inherent lack of color and how that sets the stage for a transformative artistic journey. "Rain doesn't really have a color," Tobey noted, "but what I challenged the artists to do was to give their interpretation on their umbrella canvas of what rain looks like when it touches color" (Almenas, 2022). This intriguing concept posits rain as a magnifying glass, one that intensifies and deepens colors, lending them a newfound vibrancy and depth. The phenomenon thus becomes a central theme of the exhibit, with rain acting as a catalyst that illuminates hidden perceptions of color.

Other artists have put color and the construction of space center stage. In experiences like Perfectly Clear and Hind Sight (https://massmoca.org/event/james-turrell/) by artist James Turrell (1943–), illumination becomes a discrete, physical object. The artist employs light to sculpt space and manipulate perception, using carefully calibrated light to create the illusion of infinite space and offering a sensory experience that challenges the viewer's perception of depth and boundaries. In his works, segments of the sky appear as if magically hovering within ceilings or walls; conventional architecture seems to dissolve; and vivid, geometric forms appear to defy gravity. The effects of changing light and space can also be seen in the 2013 exhibition Aten Reign at the Guggenheim Museum in New York. Figure 2.6 presents two separate photographs of the same spot in the ceiling to illustrate the change brought about by light and color. Initiated in 1966, Turrell’s journey with light as a medium began with painting over the windows of his Santa Monica studio, effectively obscuring natural light while he explored the possibilities of light projections (Fallon, 2014). His art has consistently transformed through the creative modification of architectural elements, thus reimagining and redefining viewer interaction with the environment. The strategic use of light modifies the physical space and elicits a deep emotional reaction, encouraging reflection and introspection.

Fig. 2.6
Two images of concentric circles with gradients of color radiating from the center outward. In each image, the color transitions from darker tones at the center to brighter shades towards the outer edges.

(Source From Wikimedia Commons, licensed under CC-BY 3.0)

James Turrell, Aten Reign, Guggenheim Museum, New York, 2013

Further exploration into the convergence of art and technology involves adapting medical devices for artistic creation. For instance, EEG art delves into the realm of human emotions and their representation through digital art, a frontier where the human mind and technology converge in creative harmony (Barrera, 2021). EEG, or electroencephalography, serves as the cornerstone of this artistic venture: the neurological technology captures the electrical activity of the brain, translating these signals into graphical data. One such project, led by Random Quark, a creative studio managed by Theodoros Papatheodorou and Tom Chambers, employs EEG headsets to scan the brain activity of participants. Individuals are immersed in a tranquil environment and asked to recall emotionally charged memories. The EEG data, representing the electrical activity of the brain, is then used to create visual representations of these emotions.

The heart of this endeavor lies in the Geneva Emotion Wheel (Fig. 2.7), a tool which plots human emotions on a valence-control graph. The approach of Random Quark filters these emotions, focusing on seven major ones: joy, sadness, anger, love, disgust, fear, and surprise. Each emotion is given a unique color, creating a vivid and dynamic portrayal of the participant's emotional state. Notable case studies from this project include the Saatchi & Saatchi Wellness Emoscape, which sought to combine scientific precision with artistic flair. Participants reminisced about significant life events, and their brainwaves were transformed into complex digital paintings, each a unique representation of the associated emotions. These paintings, exhibited at the Truman Brewery gallery in East London, showcased the intricate and personalized nature of EEG art. The project is not only a testament to the artistic potential of neurotechnology but also a reflection on the future of human–computer interaction. As Tom Chambers envisions, the integration of emotional context in computing could lead to more adaptive and empathetic technological interactions. The possibilities are immense, from creating art that captures our deepest emotions to designing environments that respond to our mood.

Fig. 2.7
A Cartesian axis graphically represents emotions categorized by pleasantness and perceived control levels, accompanied by color gradients. Pleasant emotions include pride (high control) and hope (low control); unpleasant emotions include envy (high control) and shame (low control).

(Source From Sacharin et al. [2012])

Geneva emotion wheel

Along the same lines as EEG art is the EEG KISS project (2014–2022) (https://www.lancelmaat.nl/work/e.e.g-kiss/), an avant-garde artistic exploration that delves into the realms of intimacy, technology, and shared experiences. Exhibited from Amsterdam to Beijing, the project uses tele-presence technologies to scrutinize the concept of kissing in the digital age, addressing questions around the transference of intimate gestures like a kiss into the online realm and the feasibility of quantifying such intimate experiences. At the core of this project once again lies the innovative use of EEG technology to translate the act of kissing into biofeedback data. The artists deconstruct the physical act of kissing to reconstruct a new, synesthetic ritual of shared experiences in what Joyce Roodnat called “Another kind of Rodin,” as though the experience were a contemporary take on The Kiss by the artist (Fig. 2.8).

Fig. 2.8
A bronze statue of a nude man and woman seated on a rock. The female figure leans against the man, holding him around the neck, while he holds her outer thigh.

(Source From Wikimedia Commons, licensed under CC0)

Auguste Rodin, The Kiss, between 1898 and 1918. Bronze

The performance installation, Digital Synaesthetic EEG KISS, stands as a testament to this artistic inquiry. Participants are invited to engage in a live kissing experiment while adorned with EEG headsets, their brainwaves measured and manifested as EEG data. This data is then used to create an immersive datascape surrounding the kissers, integrating them within a floor projection of streaming EEG data. Accompanying this visual spectacle is a soundscape generated from the EEG data, effectively translating the brainwaves of the kissing individuals into a musical score. The audience becomes an integral part of this experience, enveloped in the sound and visual representation of the kiss. As such, the project goes beyond mere technological demonstration and serves as a platform for participants to attribute personal meaning to the abstract and often mystifying EEG data visualizations of their kisses. Importantly, the project avoids scientific interpretation and validation, instead inviting individuals to interpret the data based on their shared memories and imagination. The result is often seen as intimate co-creation, with participants viewing the data as a “Portrait of our kiss.” Furthermore, each unique EEG KISS visual data sequence is preserved in a database, available to be printed as an EEG KISS Portrait, while the corresponding soundscape is saved for public sharing. This facet of the project prompts crucial contemplation on intimacy and big data, weighing the safeguarding of personal data against the formation of new hybrid, relational rituals that permit public intimacy.
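The general idea of turning streaming EEG data into a musical score can be illustrated with a toy sonification that quantizes each amplitude sample onto a fixed scale. This is a generic sketch of biofeedback sonification, not the EEG KISS project's actual audio pipeline; the scale, amplitude range, and function names are all assumptions for illustration:

```python
# A toy sonification: map each EEG sample's amplitude onto a pentatonic scale,
# illustrating the principle of turning biofeedback into note pitches.
PENTATONIC_HZ = [261.63, 293.66, 329.63, 392.00, 440.00]  # C, D, E, G, A

def sonify(samples, lo=-100.0, hi=100.0):
    """Quantize amplitudes (assumed microvolt range lo..hi) to pitches from a fixed scale."""
    notes = []
    for s in samples:
        # Clamp into [0, 1], then scale into an index over the available notes.
        t = min(max((s - lo) / (hi - lo), 0.0), 1.0)
        idx = min(int(t * len(PENTATONIC_HZ)), len(PENTATONIC_HZ) - 1)
        notes.append(PENTATONIC_HZ[idx])
    return notes
```

A real installation would synthesize these frequencies in real time; the sketch only shows how raw signal amplitude can be made musically legible.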

As we transition from exploring visual technologies to auditory experiences, the role of language in shaping perception becomes increasingly evident. Coyote (https://coyote.pics/), an innovative platform providing visual descriptions, serves as a bridge between the visual and auditory realms, particularly enhancing art experiences for individuals with visual impairments. Developed in collaboration with the Museum of Contemporary Art Chicago, the platform underscores the potential for auditory descriptions to enrich the appreciation of visual arts (Lupton & Lipps, 2018). Such initiatives highlight the symbiotic relationship between visual and auditory senses, emphasizing that perception is not isolated within a single sensory modality but is profoundly influenced by language and cultural context. As we delve into auditory experiences, we carry forward the understanding of how language not only differentiates colors but also structures the sensory world, driving further investigation into the dynamic interplay between visual art and auditory narrative. This exploration encourages us to consider the broader spectrum of neuroarts, where the fusion of sight and sound, influenced by linguistic nuances, creates a more inclusive and deeply resonant artistic experience.

3 Auditory Technologies and Sensory Case Studies

Although balance, spatial awareness, and proprioception are essential in human sensory experience, discourse on art primarily centers on visual perception, frequently overlooking the auditory dimension. Yet, auditory processing is crucial in our sensory engagement, providing vital spatial information about our surroundings, second only to vision. Notably, music and sound constitute the most extensively researched area in neuroarts. Therefore, it is essential to recognize and emphasize sound as more than a secondary element in sensory experiences, acknowledging its profound impact on perception and interpretation of the world around us. Furthermore, the phenomenon of synesthesia, where sensory experiences cross over, is a notable aspect in the arts, with auditory-visual synesthesia being one of the most common manifestations (Goller et al., 2009). As noted, many artists, including Kandinsky, and contemporary musicians like Pharrell Williams (1973–), have experienced synesthesia, where auditory stimuli trigger visual perceptions, and vice versa. Williams, for instance, perceives music in terms of colors, a sensory crossover that significantly influences his songwriting process (Harkness et al., 2023). Kandinsky similarly experienced synesthetic connections, reportedly being able to “hear” the colors in his vibrant paintings (Cardullo, 2017). This interplay of senses, particularly between sound and sight, enriches the creative process and underscores the multisensory nature of artistic expression.

Beyond multisensory integration, the significance of auditory perception in human life is evident in the extensive array of technologies developed to enhance hearing or mitigate hearing loss. These innovations include hearing aids, cochlear implants, bone-anchored hearing systems, auditory brainstem implants, and middle ear implants. Additionally, there are sound amplification devices, frequency modulation (FM) systems, advanced digital hearing aid technology, tinnitus maskers, sound therapy systems, and directional microphones. Further aiding auditory communication are real-time captioning devices, speech-to-text software, loop systems for telecoil-equipped hearing aids, sound field systems, vibrotactile devices, personal amplifiers, smartphone hearing aid applications, hearing aid compatible phones, adaptive alarm systems, and visual as well as vibrating alert systems (Kim & Kim, 2014; Tharpe et al., 2008). Each of these technologies addresses various aspects of hearing enhancement or compensation, reflecting the critical role of hearing in human interaction and well-being.

These diverse auditory technology devices offer insights into the interaction between sound and the brain, illustrating the complexity of auditory processing once sound enters the ear. Initially, sound waves are captured by the outer ear and funneled down the ear canal, striking the eardrum and causing it to vibrate, a movement that is then transmitted through the small bones of the middle ear known as the ossicles (Fig. 2.9). These vibrations are further conveyed into the cochlea, a fluid-filled structure in the inner ear, inside of which tiny hair cells are stimulated by the fluid's motion, converting these mechanical vibrations into electrical signals. These electrical impulses are then transmitted via the auditory nerve to the brain, specifically to the auditory cortex, where the signals are processed and interpreted, allowing us to perceive and understand sounds as distinct entities, such as speech, music, or environmental noises. Building on this understanding of how the brain translates sound waves into meaningful information, technologies such as hearing aids and cochlear implants have been pivotal in aiding those with hearing impairments. Hearing aids amplify sound to make it more accessible to the user, whereas cochlear implants bypass damaged parts of the ear and directly stimulate the auditory nerve, offering an alternative pathway for sound perception (Green et al., 2022).

Fig. 2.9
An anatomy of a human ear. The parts labeled include stapes, semicircular canals, vestibular nerves, cochlear nerve, Eustachian tube, Round window, tympanic cavity, tympanic membrane, external auditory canal, and auricle. The cochlea has 0.5 kilohertz at the apex, 6 in the middle, and 4 at the base.

(Source From Wikimedia Commons, licensed under CC-BY 2.0)

Anatomy of human ear with cochlear frequency mapping

The multitude of innovations designed to assist hearing underscores the critical role of sound in daily life, mobility, communication, and enjoyment. These advancements reflect the scientific understanding of auditory perception as essential for navigating environments, interacting socially, and experiencing pleasure (Zhao et al., 2018). From the vibration of the eardrum to the intricate workings of the auditory cortex, the profound effects that music and sound can have on the human brain and emotions are historically undeniable. The ability of music to evoke strong emotional responses, influence cognitive processes, and facilitate social interaction makes it a vital element in the study of neuroarts and multisensory experiences. Neuroarts seeks to understand and bridge these sensory differences, offering alternative pathways for emotional expression and reception (Guga & Uspenski, 2016). By moving beyond a visual-centric approach and embracing the auditory domain, a more comprehensive understanding of sensory experiences and their impact on human cognition and emotion can be achieved.

Neuroscientific research has increasingly substantiated the health benefits of music, deeply embedded in the human experience, highlighting its therapeutic potential across a range of medical and psychological settings (Cephas et al., 2022; Pant et al., 2022; Saifman et al., 2023). Different tempos, languages, and sound levels in music can significantly alter mood and perception (Imasato et al., 2023). For instance, slower tempos and softer volumes are often associated with relaxation and calmness, as facilitated by alpha brain waves. In contrast, delta waves, typically associated with deeper states such as sleep, can be stimulated by certain musical frequencies. These recent discoveries, however, merely reinforce the use of music for therapeutic purposes that extend back through our history as a species.

Music therapy is an age-old practice, with roots reaching back to the Paleolithic Era, over 10,000 years ago. During this time, early humans utilized music for communication and emotional expression, as evidenced by archaeological discoveries of ancient bone flutes, percussion instruments, and cave markings that identified acoustically resonant locations. These findings indicate the early role of music in communal activities and rituals (Nikolsky, 2020). In the Neolithic Era, music saw significant evolution within permanent settlements globally. Various instruments like harps and complex percussion tools were developed, and rudimentary music notation began to appear, as seen in clay tablets from ancient Mesopotamia. The period also marked the growing significance of music in religious ceremonies and social gatherings (Pomberger et al., 2021).

The therapeutic power of music was also recognized by ancient Greek philosophers. For instance, Plato extolled music as a source of joy and healing, famously stating that it gives “soul to the universe, wings to the mind, flight to the imagination” (Dhar & Das, 2020). Aristotle, Plato's pupil, understood the profound impact music had on human character and likewise advocated for its inclusion in the education of the young. Throughout history, different cultures have acknowledged the healing powers of music (Provenza, 2020). Ancient Egyptians incorporated it into religious ceremonies (Fig. 2.10) (Teeter, 2011), Native American tribes like the Navajo used music and dance in healing rituals (Frisbie, 1980), and traditional Chinese medicine employed specific musical tones and rhythms to balance the body’s energy (qi) (Tao et al., 2016). In the Middle Ages and Renaissance, the Christian Church played a crucial role in popularizing music among the masses. Congregational hymn singing during church services became a powerful medium for religious devotion, teaching, and therapeutic experience, especially for the non-literate populace (Fenlon, 2002; Lord, 2008; Monti, 2012).

Fig. 2.10
A painting of 4 women dressed in traditional attire with headdresses, earrings, bangles, and necklaces. One woman on the left claps her hands, another in the middle also claps, and the third woman holds two musical pipes in her mouth. On the far right, a naked woman turns her upper body while clapping her hands.

(Source From Wikimedia Commons, licensed under CC-BY 3.0)

Tomb of Nebamun, 1350 BCE, Thebes (Luxor)

Following the early modern period, the rise of the Industrial Revolution and Age of Enlightenment in the eighteenth century witnessed the parallel development of studies on the human nervous system and music therapy (León-Sanz, 2016). American physician Benjamin Rush (1745–1813) recognized the potential of music to improve mental health, while his student, Samuel Mathews, conducted experiments exploring the effects of music on the nervous system, laying the groundwork for modern music therapy (Heller, 1987). As seen in their Tranquilizing Chair (Fig. 2.11), such early experiments included having a patient sit in a chair, immobilized with straps at the shoulders, waist, arms, and feet, while a box-like apparatus confined the head to focus the sounds. With these early rejections of mental illness as the result of demonic possession, new forms of psychiatric and psychological treatment arose. For instance, the psychologist Everett Thayer Gaston (1901–1970), often referred to as the “father of music therapy,” was instrumental in establishing it as a recognized discipline in the United States in the 1940s–1960s, while the British music therapist Mary Priestley (1925–2017) contributed significantly to the development and recognition of the field, developing Analytical Music Therapy, a synthesis of psychoanalytic theory and music therapy (Bunt & Stige, 2014).

Fig. 2.11
A painting of a man sitting in a tranquilizing chair, with his hands, legs, and arms secured by straps. A box-like structure around his head obscures his eyes.

(Source From Wikimedia Commons, licensed under CC0)

Tranquilizing Chair of Benjamin Rush, 1811. National Library of Medicine

Since the acceptance of music therapy as a field of its own, several phenomena have been observed. The “Mozart effect” is one such example that stands out as a compelling inquiry into the influence of music on cognitive function. Originating from a 1993 study led by Frances Rauscher (see Rauscher & Hinton, 2006), the concept suggests that listening to the classical music of Wolfgang Amadeus Mozart (1756–1791) may improve spatial reasoning skills. Despite mixed evidence, the research has deepened our interest and understanding of the neurological impact of music (Shi, 2020). Follow-up studies using other classical composers, such as Ludwig van Beethoven (1770–1827), and various types of rock music have utilized EEG to measure attention and dopamine release, illustrating that classical music, in particular, can enhance cognitive processing without the influence of musical preference. This was evidenced by the accelerated processing of visual stimuli under the influence of classical music (Santhosh et al., 2020; Tai & Kuo, 2019). Moreover, personal enjoyment of music was linked to increased alertness and mood, akin to the effects of other pleasurable experiences, suggesting the artform’s broader capacity to enhance mental well-being (Hennessy et al., 2021).

The ability of music to engage the brain encompasses a complex interplay of auditory signals and neurochemical reactions, notably the release of dopamine, which is associated with inducing feelings of pleasure (Ferreri et al., 2019). Contrary to previous beliefs, studies have indicated that even somber tones, such as those from Antonio Vivaldi’s (1678–1741) concerto L’inverno (Winter) from the Four Seasons (1718–1720), can induce strong dopamine responses (Riby et al., 2023). While activities such as eating and sex have clear evolutionary purposes, the benefits of music, although less direct, are significant for social bonding and cultural continuity (McGrath & Brennan, 2020). The nucleus accumbens, central to the reward system of the brain, activates the dopamine pathway when we interact with music, amplifying positive emotions (Gold et al., 2019). Furthermore, the amygdala, the center for emotional response, is notably responsive to melodic stimuli (Fraile et al., 2023). EEG tests confirm that music promotes neural communication, shaping our emotional and cognitive experiences: happy tunes generally foster positive, outward-looking emotions, whereas sad music prompts reflective, inward-focused thoughts (Ueno & Shimada, 2023). These insights hold promise for music therapy, particularly for those grappling with emotional or neurological challenges, as music offers a means for emotional catharsis and introspection. The power of music to trigger potent autobiographical recollections can lead to transformative experiences, such as the deeply moving response of the ex-ballerina Martha González Saldaña when she hears Pyotr Ilyich Tchaikovsky's Swan Lake (1875–1876). In a 2019 recorded session (https://youtu.be/owb1uWDg3QM), the former prima ballerina, who suffered from dementia and memory loss, reacted immediately when the melody was played for her, swaying as though dancing (Tsioulcas, 2020).

These instances bear witness to the potent responses music can elicit in the brain, prompting ongoing research into how different musical rhythms and compositions affect emotional health and neurochemical dynamics, as well as how artists might utilize these sensory experiences. In the artistic domain, technological advancements are reshaping and broadening the spectrum of sensory experiences. For instance, the American deaf artist Christine Sun Kim (1980–) uses her work to explore sensory perceptions. Based in Berlin, the sound artist incorporates sign language, text, and sound to reflect on how deaf people interact with their surroundings. For her project One Week of Lullabies for Roux (2018), Kim orchestrated a collaboration with seven parent-friends to create unique, calming soundscapes aimed at soothing her newborn, Roux, into slumber. Her directive for these lullabies was to avoid lyrics and focus on low frequencies, ensuring she could both monitor and feel at ease with the sounds introduced into her baby’s auditory environment. This project, symbolized by an artistically crafted bench resembling a color-coded pillbox, metaphorically represents these soundtracks as a form of daily nourishment (Cocozza, 2022).

Similarly, the Mexican-Canadian electronic artist Rafael Lozano-Hemmer (1967–) brings together art, design, and technology in his interactive exhibitions such as Pulse (2018–2019) at the Hirshhorn Museum. The artist transformed the Second Level of the museum with immersive installations that utilized heart-rate sensors to create kinetic and audiovisual experiences based on the biometric data of visitors. These installations captured and visualized biometric signatures as sequences of lights, soundscapes, and animated fingerprints, integrating them into a rhythmic, communal archive. The exhibit showcased three major works: Pulse Index, which recorded fingerprints and heart rates on a large-scale projection grid; Pulse Tank, where the pulses of visitors created ripple patterns in illuminated water tanks; and Pulse Room, featuring hundreds of light bulbs pulsating with the past heartbeats of visitors. Visitors interacted with these installations, contributing their biometric data, thereby blending anonymity with a sense of community (Lozano-Hemmer, 2010).

Introducing a unique blend of technology and sound, sound artist Yuri Suzuki's (1980–) Sonic Bloom installation (https://yurisuzuki.com/projects/sonic-bloom) presents a site-specific auditory experience that captures and melds themes of people, nature, and the environment. The installation fosters human interaction within spaces traditionally void of such engagements. By inviting participants to not only communicate with each other but also to connect with their environment, the sound experience transcends conventional boundaries of art and communication. Its digital counterpart extends the reach of this experience, allowing a global audience to share in the enchantment of the physical installation. In this digital space, voice recordings are artistically transformed into flower animations and then whimsically “planted” onto a virtual map of Mayfair (where the installation is located), creating an interactive and immersive exploration of universal communication (Bode, 2022).

Auditory experiences continue to evolve, propelled by technological advances, leading to ever greater personalization. As an example, the “cyborg artist” Neil Harbisson (1982–) takes sensory integration a step further with an antenna implant in his skull (Fig. 2.12) that translates colors into sounds, offering a synesthetic experience and expanding the boundaries of perception. Born completely colorblind, Harbisson hears colors through the prosthetic device he calls an “eyeborg,” which interprets them as audible frequencies. The device allows the artist to experience a broader spectrum of colors, including ultraviolet (Harbisson, 2019). Less invasive auditory augmentation devices also exist for the general public. Other advances in auditory technologies include PerL's sonic adaptive headphones, which offer personalized listening experiences, and apps like Coffitivity and Study Ambience, which simulate ambient environments to boost focus and creativity (Droumeva, 2021). These technological advancements underscore the evolving synergy between personalized sensory experiences and technological innovation, opening new avenues for artists.
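The principle behind a color-to-sound device can be illustrated by mapping hue onto a log-spaced band of audible frequencies, so that equal steps around the color wheel correspond to equal musical intervals. The frequency band below is an arbitrary choice for illustration and does not reproduce Harbisson's actual sonochromatic scale:

```python
import colorsys

def hue_to_frequency(r: int, g: int, b: int,
                     f_lo: float = 120.0, f_hi: float = 960.0) -> float:
    """Map an RGB color's hue (0-1) onto a log-spaced audible frequency band.

    A simplified analogue of color sonification; the band edges f_lo/f_hi
    are assumptions, chosen to span three octaves.
    """
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # Logarithmic spacing mirrors pitch perception: each equal hue step
    # multiplies frequency by the same ratio.
    return f_lo * (f_hi / f_lo) ** h
```

With these settings, pure red (hue 0) lands at the bottom of the band and hues one third of the way around the wheel sit exactly one octave higher, a design choice that keeps the mapping musically regular.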

Fig. 2.12
A photo of Neil Harbisson who wears an eyeborg that extends outward from the back of his head. The frame containing the antenna rests against his forehead.

(Source From Wikimedia Commons, licensed under CC-BY 2.0)

Neil Harbisson, October 2012

Such personalized auditory experiences need not be isolated to the sense of hearing but can also coalesce with other stimuli for unique sensations. One notable crosspollination is between auditory explorations and the tactile dimension. The fusion of auditory and tactile experiences in art offers a unique sensory exploration, exemplified by the works of Christine Sun Kim and Cevdet Erek (1974–). As noted, Kim has been deaf from birth and therefore delves into her personal relationship with sound. In the Greater New York exhibition at MoMA PS1, her piece Game of Skill 2.0 (2015–2016) (https://vimeo.com/142659892) integrated sound and touch, engaging viewers to move a staff-like device along a velcro strip, which, in turn, altered the radio's sound levels and speeds. Similarly, Cevdet Erek, with a background in architecture and as a member of the band Nekropsi, ventures into the art world with interactive projects. His award-winning work Shore Scene Soundtrack (2012) (https://vimeo.com/2867660) invites participants to mimic the sound of the ocean by rubbing their hands on a carpet, merging auditory and tactile elements to create a vivid seascape (Towers, 2019). These artists exemplify the creative potential of blending senses to enrich artistic experiences. Through these tactile ventures, neuroarts continues to illuminate the intricacies of human cognition, inviting us to experience and appreciate the complex beauty of sensory integration.

4 Tactile Technologies and Sensory Case Studies

Although tactile sensations, activated by touch receptors in the skin, do not process as high a percentage of environmental sensory stimuli as vision, they are crucial for comprehending the human condition and proprioception. Proprioception, often described as the “sixth sense” and a critical aspect of the somatosensory system, refers to the ability of the body to sense its position, movement, and spatial orientation without visual confirmation (Tuthill & Azim, 2018). It is mediated by proprioceptors, specialized sensory receptors located in muscles, tendons, and joint capsules, which provide continuous feedback to the central nervous system about the position and movement of our body parts. For instance, even with our eyes closed, we can sense whether an arm is raised or bent, facilitating a nuanced awareness of our limbs’ positioning and movement. This internal focus differentiates proprioception from externally oriented senses like touch, making it essential for coordinated movement, balance, and executing complex motor tasks (Zhan et al., 2023).

Simultaneously, the somatosensory system, encompassing skin, muscles, and joints, detects various stimuli such as light touch, deep pressure, pain, and temperature. It plays a dual role by contributing to both interoceptive and exteroceptive processing. This means it not only helps in understanding stimuli originating inside the body, thereby impacting our higher-order awareness of our physiological state but also aids in perceiving external stimuli, enhancing our immediate external awareness (Abraira & Ginty, 2013). These integrated sensory inputs are foundational for our interaction with the environment and self-regulation within it.

Therefore, while proprioception and touch are functionally distinct, they often work together to provide a comprehensive understanding of both the internal state of the body and its interaction with the external world. For instance, when holding an object, tactile receptors enable feeling its texture and shape, while proprioceptors allow us to adjust grip strength and arm position (Qi et al., 2023). The external journey of tactile perception thus begins at the skin, where a variety of touch receptors detect stimuli such as pressure, temperature, and vibration (Fig. 2.13). These receptors are connected to neurons that transmit signals through the spinal cord to the thalamus, which acts as a relay station, forwarding them to the somatosensory cortex in the parietal lobe. This region of the brain is crucial for interpreting tactile information and integrating it with other sensory data: the somatosensory cortex is adept at discerning where on the body a touch occurred and what kind of touch it was, whether a gentle caress or a sharp poke (Puts & Cascio, 2023). Moreover, tactile experiences often engage brain regions traditionally associated with visual processing, illustrating a complex relationship between touch and sight (Calzavarini, 2023). This overlap in sensory processing areas underscores the interconnected nature of our sensory systems and the capacity of the brain for integrating multisensory information.

Fig. 2.13
An anatomy of the skin. The epidermis is divided into hairy skin and glabrous skin. The hairy skin has a free nerve ending, Merkel's receptor. Glabrous skin has papillary ridges and septa. The dermis part includes Meissner's corpuscle and Pacinian corpuscle. Other parts labeled are hair receptors and the sebaceous gland.

(Source From Wikimedia Commons, licensed under CC-BY 3.0)

Tactile receptors in the skin

Touch also plays a vital role in emotional and social interactions, largely due to the release of oxytocin. Often referred to as the “love hormone,” oxytocin is released in response to physical touch and is associated with feelings of trust, bonding, and happiness (Morrison, 2023). This neurochemical response enhances the emotional resonance of tactile experiences, further cementing touch as an integral component of human connection and well-being. The exploration of tactile technologies extends these understandings into the realm of sensory experience and artistic expression. By leveraging the complex neural networks involved in touch perception, artists and technologists create immersive experiences that not only stimulate the skin but also engage the brain in profound and emotionally resonant ways. This section will delve into the diverse applications and innovations in tactile technology, examining how they are utilized in sensory case studies to create novel and meaningful sensory experiences.

The exploration of tactile technologies in neuroarts is a journey through historical precedents and modern advancements, each contributing significantly to sensory experiences. This narrative begins with the foundational work in tactile communication, notably the development of Braille (Fig. 2.14), and advances to contemporary innovations in haptic technology and tactile headsets. In fact, the history of tactile communication is deeply intertwined with the story of Braille. Louis Braille (1809–1852), a 12-year-old student in Paris, revolutionized tactile reading for the visually impaired in 1821. Building upon an embossed typeface by Valentin Haüy (1745–1822) and a dot-and-dash system used for night-time army communication, Braille developed a tactile code based on six dots, creating symbols that were significantly easier to read than Roman letters (Thompson & Christian, 2022). The system, despite gaining traction more rapidly in Europe than in the United States, laid the groundwork for tactile communication. In the United States, Samuel Gridley Howe (1801–1876), directing the first school for the blind, created the Boston Line Letters in 1835. These tactile letters featured strong distinctions from each other, aiding in differentiation and recognition (Fulas, 2023). Following this, Elia Chepaitis in 1988 introduced a modified Latin alphabet for tactile use. This contemporary counterpart to Howe’s system employed raised rectangular boxes with distinctive gaps and ticks, providing a clear tactile reference to the Latin alphabet (Chepaitis et al., 2004).
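Braille's six-dot cell translates naturally into a digital encoding: Unicode's Braille Patterns block starts at U+2800 and assigns dot n to bit n-1 of the codepoint offset. A brief sketch encoding the first ten letters, whose standard dot patterns are well documented:

```python
# Each braille cell is six dots, numbered 1-3 down the left column and
# 4-6 down the right. Unicode's Braille Patterns block (U+2800) maps
# dot n to bit n-1, so a cell is the base codepoint plus a dot bitmask.
LETTER_DOTS = {  # standard English braille, letters a-j shown
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
    "j": (2, 4, 5),
}

def to_braille(text: str) -> str:
    """Render lowercase letters as Unicode braille cells (space becomes a blank cell)."""
    out = []
    for ch in text:
        if ch == " ":
            out.append("\u2800")  # blank cell
        else:
            mask = sum(1 << (d - 1) for d in LETTER_DOTS[ch])
            out.append(chr(0x2800 + mask))
    return "".join(out)
```

For example, "a" (dot 1 only) becomes U+2801 (⠁), and "j" (dots 2, 4, 5) becomes U+281A (⠚); this same bitmask logic underlies refreshable braille displays and tactile printing pipelines.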

Fig. 2.14
A photo of a pathway lined with signboards for walking and cycling. On one side, trees border the pathway, while the other side is occupied by shops. Groups of people walk along the pathway, and bicycles are parked along the other side.

(Source From Wikimedia Commons, licensed under CC-BY 2.0)

Tactile path

Advancements in recent years have seen a significant leap from these historical foundations. The Tactile Picture Book Project (https://www.colorado.edu/atlas/tactile-picture-books-project-build-better-book), spearheaded by Tom Yeh at the University of Colorado Boulder, represents a modern iteration of tactile reading. This project offers a digital library of books with 3D models representing various elements, allowing for an interactive tactile reading experience (Dalton & Musetti, 2018). But recent advances move beyond updates to traditional braille. Haptic feedback technology, for instance, which utilizes touch-based feedback like vibrations and pressure, has emerged as a key innovation (Huang et al., 2022). An early example is the “Museum of Pure Form,” conceptualized by Bergamasco et al. (2002), a pioneering integration of haptics technologies within the domain of cultural heritage that offers an immersive VR system enabling users to interact with digital models of 3D art forms and sculptures via both touch and sight. Developed at PERCRO in Pisa, Italy, this innovative system was realized in two distinct formats: one designed for installation within various museums and art galleries across Europe, providing an enhanced experiential layer for visitors, and the other for implementation within a CAVE environment, offering a fully immersive virtual experience. While haptic technology is best known in gaming, where controllers provide feedback to players, it is now used in many fields. Relatedly, sound-absorbing furniture intersects auditory and tactile experiences: these designs incorporate materials that reduce ambient noise, creating a more comfortable environment for those sensitive to sound (Smardzewski et al., 2015).

Recent strides in tactile and haptic technologies mark a considerable evolution in sensory experiences, especially for those with visual impairments or unique sensory requirements. These innovations extend beyond conventional boundaries, offering new means of interaction and perception. A notable development is the placement of tactile replicas accessible to individuals using wheelchairs (Fig. 2.15). These replicas, often situated on turntables, allow visitors to feel and manipulate objects without needing to move their bodies (Eardley et al., 2016). The approach underscores the spatial aspect of touch, which relies on proprioception—the awareness of the position and movement of the body. Such awareness, as noted, is facilitated by receptors distributed throughout the skin, muscles, tendons, and joints, working in tandem with visual and auditory cues to contribute to a sense of movement and orientation. The concept of subjective skeletal space, as proposed by perceptual psychologist James J. Gibson (1904–1979), is central to this experience, enabling a fleeting geometric schema of posture in space. The integration of body-based and skin-based touch leads to haptic perception, a system defined by Gibson where individuals feel an object relative to their body and vice versa (Gibson, 2002). Receptors in the muscles and skin relay signals about motion, pressure, pain, heat, and resistance, crafting a comprehensive sensory experience.

Fig. 2.15
A photo of a tactile floor plan of a ground floor. A hand points to the children's library located at the center. To the left of the children's library are areas designated for music and fun, a reading area, and a cafe. A key to the plan's symbols, including lifts, appears on the right.

(Source From Wikimedia Commons, licensed under CC-BY 2.0)

Tactile floorplan of Liverpool Central Library

Urban design has also benefitted from tactile technological developments. For example, Tactile City (https://www.tactilecity.org/) is an experimental project that envisions a citywide tactile communication system for visually impaired pedestrians in New York City. Conceived by Cooper Union students, it uses different textures to denote points of interest like bus stops and entrances (Phillips, 2015). The concept builds on the tactile paths (Fig. 2.13) invented in Japan by engineer and inventor Seiichi Miyake (1926–1982) in 1967 (Garofolo, 2022). Likewise, the accessibility work of Joshua Miele (1969–) should also be noted as a breakthrough for tactile maps (Fig. 2.16). His first-generation tactile maps utilize online geographic information systems to create user-centered, tactile maps for blind pedestrians. These maps, including San Francisco subway line maps, combine tactile features with audio explanations, accessed via a digital pen. Haptic signals vary in intensity and pattern to convey different messages. These signals can tap against the skin in precise patterns or blend into an overall texture, creating tangible illusions of objects and actions. Ultrahaptic technologies, for instance, manipulate air pressure to simulate touch sensations, enhancing the perception of virtual objects (Morash et al., 2014).

Fig. 2.16
A photo of a Braille book with a finger touching the raised dots.

(Source From Wikimedia Commons, licensed under CC-BY 2.0)

Reading Braille

Aside from navigating the physical environment through the use of tactile maps and paths, other systems like Feelipa (https://feelipa.com/) and ColorADD (https://www.coloradd.net/en/), both developed in Portugal, offer innovative solutions for translating colors into tactile forms. Feelipa uses basic geometric shapes to represent primary colors, while ColorADD employs a modular system based on parts of a square to depict primary and secondary colors, plus black and white. These systems can be applied to various products, aiding both color perception and reinforcing color knowledge. Notably, research indicates that color can significantly boost object recognition for individuals with low vision (Khan et al., 2012; Shukla & Verma, 2019). Furthermore, objects become more memorable when experienced through multiple senses, emphasizing the importance of multisensory integration in enhancing cognitive functions (Cox & Guillemin, 2018). In neuroarts, these tactile technologies are not standalone developments but are integrated into a larger sensory framework. They complement auditory and visual experiences, enriching the overall sensory journey. From historical tactile communication systems to modern haptic technologies, the evolution of tactile experiences highlights the dynamic nature of the field.
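The core operation behind such systems—assigning any given color to one of a small set of coded categories—can be sketched in a few lines. The following is a minimal illustration of that idea only; the palette values and category names are placeholder assumptions and do not reproduce the actual Feelipa or ColorADD glyph sets.

```python
# Illustrative sketch: map an arbitrary RGB color to the nearest entry in a
# small coded palette, as color-to-symbol systems must do before a tactile
# or graphic symbol can be assigned. Palette values are assumptions.

PALETTE = {
    "red":    (255, 0, 0),
    "yellow": (255, 255, 0),
    "blue":   (0, 0, 255),
    "orange": (255, 165, 0),
    "green":  (0, 128, 0),
    "purple": (128, 0, 128),
    "black":  (0, 0, 0),
    "white":  (255, 255, 255),
}

def nearest_symbol(rgb):
    """Return the palette category closest to rgb (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PALETTE, key=lambda name: dist2(rgb, PALETTE[name]))

print(nearest_symbol((250, 12, 8)))    # a near-red input → "red"
```

In a deployed system the category label would then index the appropriate raised shape or printed glyph on packaging, maps, or artwork labels.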

The latest technological advances with generative AI provide yet another tool for artists considering tactile elements. The integration of AI, such as OpenAI's GPT-4 with vision capabilities, revolutionizes how neurodivergent individuals engage with tactile activities like crafting, offering new opportunities for sensory exploration and understanding. By taking a photograph of different knitting yarns with a smartphone and querying an AI like ChatGPT Vision about the sensory attributes of each yarn, neurodivergent individuals can make informed decisions about which materials best suit their sensory needs (Fig. 2.17). The AI can analyze textures, colors, and patterns, providing insights into the comfort and sensory impact of each type of yarn (Fig. 2.18). This not only personalizes the crafting experience but also empowers individuals to create in a way that is most comfortable and satisfying for them, enhancing their creative expression and sensory well-being. The potential of AI to interpret and advise on sensory-friendly materials represents a significant advancement in personalized crafting and neurodivergent accommodation, making creative pursuits more accessible and enjoyable. These advancements not only cater to specific sensory needs but also broaden our understanding of sensory perception, making the arts more inclusive and accessible.

Fig. 2.17
A screenshot of a photo that displays six types of fabrics with a caption requesting information on which yarn is more comfortable against itching, as well as the pros and cons of each type of yarn.

Photograph of types of fabric with prompt. ChatGPT Vision, 2023

Fig. 2.18
A screenshot of ChatGPT providing an introduction and general advice on choosing knits based on appearance. It suggests opting for softer yarns like bamboo and avoiding rough wools, especially when buying online, where checking reviews is recommended.

ChatGPT response
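For readers curious how such a photo-plus-question query is assembled programmatically, the sketch below builds a request in the OpenAI Chat Completions message format, which accepts interleaved text and base64-encoded images. The model name, prompt wording, and image bytes are illustrative assumptions, and the request is only constructed here, not sent.

```python
# Sketch: package a yarn photograph and a sensory question for a
# vision-capable chat model. Model name and prompt are assumptions;
# no network call is made.
import base64
import json

def build_yarn_query(image_bytes: bytes, question: str) -> dict:
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4o",  # assumed vision-capable model identifier
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

payload = build_yarn_query(
    b"\xff\xd8placeholder-jpeg-bytes",  # stand-in for a real photo
    "Which of these yarns is least likely to itch, and what are the "
    "pros and cons of each?")
print(json.dumps(payload)[:60])
```

The resulting dictionary would be passed to the provider's chat-completion endpoint; the textual reply is what an end user like the knitter described above would read.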

Contemporary art is experiencing a notable fusion of tactile components to craft a multisensory experience that goes beyond conventional visual and auditory limitations. These artistic endeavors highlight the innovative ways in which tactile experiences and technologies are being utilized in the art world, expanding the traditional boundaries of sensory perception and artistic expression. The work of Bernhard Riecke stands as an example of integrating tactile sensory experiences with the realm of art, pushing the boundaries of how we engage with virtual and real environments (http://ispace.iat.sfu.ca/person/riecke/). As a psycho-physicist and cognitive scientist, Riecke's extensive background, with a PhD in Physics from Tübingen University and significant research in the Virtual Reality group at the Max Planck Institute for Biological Cybernetics, informs his pioneering approach. Now a professor at Simon Fraser University's School of Interactive Arts & Technology, Riecke's research transcends traditional disciplines, combining immersive virtual environments with multidisciplinary research approaches to explore human spatial cognition, orientation, and behavior. His work on enabling robust and effortless spatial orientation in virtual environments, as well as his investigation into self-motion perception and multimodal cue integration, not only advances fundamental scientific knowledge but also guides the design of novel human–computer interfaces. These interfaces facilitate intuitive, embodied interactions within computer-mediated environments like VR, enriching both the creation and experience of multimodal, interactive art and dance performances. Riecke's commitment to enhancing human–computer interaction reflects a deep understanding of the interconnectedness of sensory experiences, offering new vistas for artistic exploration and audience engagement.

Another example in this realm can be found in Alessandro Perini's Tactile Headset (2014) (https://alessandroperini.com/2014/09/11/tactileheadset/), a device that transmits information through vibrations. This headset provides an alternative form of sensory input for those with auditory or visual impairments, broadening the scope of sensory engagement in art. Similar environmental experiences use sound while reducing other stimuli. In her first major work titled The Whispering Room (1991), Canadian artist Janet Cardiff (1957–) introduced a captivating auditory experience in a minimalist artwork set in a dimly lit space, featuring sixteen compact, circular speakers mounted on stands. Each speaker emitted the voice of a distinct character, creating an immersive soundscape. As visitors navigated through this array of voices, their movements activated a film projector. This projector then displayed a film, subtly altered to play in slow motion, adding a visual layer to the auditory immersion. This work exemplified Cardiff's innovative approach to blending sound and visual elements, creating a multisensory environment that engaged visitors in a unique and interactive way.

Still, other artists akin to “cyborg artist” Neil Harbisson have integrated haptics directly into their bodies. Moon Ribas (1985–), for instance, a Spanish cyborg dancer and activist, utilizes this technology to sense seismic activity in real time. Developing and implanting online seismic sensors in her feet allows her to experience earthquakes through vibrations from the ground. Her work is thus a testament to the innovative ways artists are merging technology with human sensory experiences, offering new perspectives on how we interact with our environment (Alcaraz, 2019). Another artist uses technology to enhance her work and bridge the gulf between olfactory and gustatory sensory studies. Blind artist Emilie Gossiaux (1989–) (http://www.emiliegossiaux.com/), despite having no light perception, harnesses the unique capabilities of the BrainPort device (https://youtu.be/8Ccjq5LaSyE). A camera, mounted between the eyes, captures visual stimuli from the environment. This visual data is then processed into electrical pulses by an electronic unit, and the pulses are conveyed to a plate of electrodes placed on the tongue. The dense sensory nerve network of the tongue receives these electrical signals, which the brain interprets as visual images. The BrainPort device thus allows individuals with visual impairments to perceive their surroundings in a novel way. Gossiaux uses the device coupled with a rubber drawing board that generates tactile lines. The approach opens a new path into the olfactory and gustatory realms, where sensory experiences can be enhanced or even reimagined through technological intervention.

5 Olfactory and Gustatory Sensory Case Studies

The exploration of olfactory and gustatory sensory experiences delves into the intricate physiology and neurology of taste and smell. While these senses are considered biologically necessary, their significance goes beyond mere nourishment or warning of danger; they also elicit profound emotional connections, underpinned by complex neurological processes. These senses are deeply interconnected and play a vital role in shaping our perception and interaction with the world. The olfactory system, responsible for our sense of smell, is notably the most direct pathway to memory, owing to its close synaptic connection to the brain. Olfactory receptors in the nasal cavity capture scent molecules, which are then relayed to the olfactory cortex (Fig. 2.19). The process profoundly influences our emotions and memories, as scents are four times more likely to be remembered accurately than images and for longer durations (Olofsson et al., 2020). As such, smells have the ability to transport us to past events or immediately call to mind the smell of individuals, leading to highly nostalgic experiences (Green et al., 2023).

Fig. 2.19
A schematic of the human head and brain highlights the areas associated with taste perception when smelling food. They include the somatosensory cortex, insula, dorsal striatum, parabrachial nucleus, nucleus of the solitary tract, nucleus accumbens, hippocampus, thalamus, amygdala, and cerebellum.

(Source From Wikimedia Commons, licensed under CC-BY 4.0)

Taste areas in the brain

The human olfactory system, with its capacity to detect approximately 1 trillion odors, is facilitated by an extensive array of over 400 types of scent receptors. These receptors, intriguingly, undergo renewal every 30 to 60 days, demonstrating the dynamic nature of our sense of smell. The olfactory receptors, located in the nasal cavity, play a crucial role in capturing scent molecules and transmitting them to the olfactory cortex in the temporal lobe of the brain (Gaillard et al., 2004). This process significantly influences our emotions and memory. The olfactory cortex's connection to regions like the amygdala and hippocampus is particularly noteworthy, as these areas are integral to emotional processing and memory formation. The activation of these regions by certain smells can have a profound impact, potentially reducing stress and lowering cortisol levels, thus underscoring the therapeutic potential of olfactory stimulation (Duroux et al., 2020).

The significance of the olfactory bulb in this process cannot be overstated, as it is a unique component of the brain which continues to grow throughout our lives, generating new neurons and feeding directly into the limbic system (Shah et al., 2023). The limbic system encompasses the amygdala, hippocampus, and thalamus, and is pivotal in modulating emotions and managing the creation and consolidation of long-term memories (Thomasson et al., 2023). In parallel, the gustatory system, responsible for our sense of taste, is equally complex and vital. The human tongue houses approximately 10,000 taste buds, each containing receptors that respond to the five basic taste sensations: sweet, salty, sour, bitter, and umami. These taste buds are directly connected to the gustatory cortex in the brain, where the sensory information is processed and interpreted. The gustatory cortex not only deciphers the flavors we experience but also integrates these sensory inputs with emotional and visceral responses (Vicario et al., 2022). This intricate network of signaling pathways between the taste buds and the gustatory cortex underscores how taste perception extends beyond mere flavor recognition, playing a pivotal role in our overall sensory experience and emotional well-being.

Therefore, the olfactory system not only detects and processes scents, but also imbues them with emotional associations before they reach the upper cortex. Similarly, the sense of taste is governed by clusters of cells known as taste buds, located on the tongue and in the back of the mouth. These taste buds are responsive to various chemicals in food, discerning the primary taste channels: sweet, salty, sour, bitter, and umami. Each receptor on the tongue sends separate signals to the brain, where the gustatory cortex assembles a sensory map. This mapping enables experience and differentiation between various flavors, contributing to visceral and emotional experiences associated with eating (Schamarek et al., 2023).

The fusion of technology with olfactory and gustatory senses is unveiling novel sensory experiences. Both artists and scientists are utilizing these senses to create immersive and poignant experiences. These efforts stimulate taste and smell while also activating emotional and memory-related faculties, offering comprehensive sensory explorations. Technological advancements in the realm of olfaction have particularly noteworthy applications. For instance, the recognition of common smells is now a vital diagnostic tool in identifying neurodegenerative diseases. Individuals with memory loss, such as those suffering from Alzheimer’s, often exhibit a significantly reduced sensitivity to smell. Addressing this, the Ode dementia player serves as a personal scent device for dementia patients, diffusing specific smells in a room to stimulate appetite, particularly around mealtimes (Yamashita et al., 2023). Additionally, Cyrano is a notable innovation in this field—a portable digital scent speaker that combats olfactory fatigue. This phenomenon refers to the gradual loss of awareness of a smell over time, a mechanism by which the brain lessens its sensory burden. With Cyrano, users can insert scent disks into the device and control the output through a smartphone app named oNotes (Kitson & McHugh, 2019). In 2007, the Brooklyn-based company Flavor Paper introduced the first scratch-and-sniff wall covering, marking a novel intersection between everyday objects and sensory stimulation. Such technological interventions demonstrate the growing importance and potential of integrating the senses of smell and taste with technology, paving the way for new and profound sensory explorations.

Such explorations include research into the effects of other senses on smell and vice versa. For instance, the intersections between different senses, particularly taste and smell, reveal the complex yet harmonious synergy of our sensory experiences. A notable study by Ward et al. (2023) at Liverpool John Moores University delved into how our sense of smell can subtly alter color perception. The research team discovered that specific odors, such as caramel and coffee, are subconsciously linked to certain colors, thereby influencing our perception of color. Participants were exposed to various scents in a sensory-deprived room and then asked to adjust a randomly colored square to a neutral gray. Intriguingly, while most scents led to predictable color perception shifts, peppermint was an exception, indicating a nuanced interplay of sensory associations. The study sheds light on how sensory inputs are not isolated but intricately woven into our daily perceptions and emphasizes the profundity of the brain's ability to process information from multiple senses simultaneously. For instance, higher temperatures are often associated with warmer colors, lower sound pitches with lower physical positions, and specific colors with the taste of certain foods, like oranges with the color orange.

Extending this exploration into the realm of music and taste, Mesz et al. (2011) conducted an experiment linking musical qualities with taste. Musicians were asked to improvise compositions in response to basic tastes like bitter, sweet, and salty. The analysis revealed that music created in response to a bitter taste often had slower, lower, and more softly legato notes. Similar correlations were observed for sweet and salty tastes, suggesting a profound connection between auditory and gustatory experiences. Together these studies illuminate the multifaceted and interconnected nature of our sensory world, highlighting how our perceptions are a blend of various sensory inputs. The insights gained from such research are crucial for advancing our understanding of sensory integration and its implications in fields like neuroarts and sensory therapy. As we continue to explore these sensory intersections, we uncover the vast potential for creating immersive and transformative experiences that engage multiple senses in harmony.

The realm of sensory art, particularly focusing on olfaction and gustation, has witnessed remarkable innovations, where artists have creatively integrated these senses into their works. These artistic endeavors not only offer unique sensory experiences but also serve as explorations into the emotional and cognitive impacts of smell and taste. One pioneering project in this domain was a collaboration between International Flavors and Fragrances, the Dutch design collective Polymorph, and Professor Asifa Majid. They conceived an “invisible dictionary” of contemporary emotional states, rooted in research on indigenous communities like the Jahai, who possess an elaborate smell vocabulary. The project, blending emotional states like collective déjà vu with original scents crafted by senior perfumer Le Guernec, was presented through an installation of translucent pedestals, releasing scents via laser-cut text. This immersive experience enabled visitors to forge new emotional-smell associations (Lupton & Lipps, 2018).

Another example of the use of smell in art comes by way of Sensory Maps from Kate McLean. These “smell maps” offer a captivating exploration of urban environments through the lens of olfactory experiences (https://sensorymaps.com/about/). Her innovative approach involves collecting and meticulously analyzing olfactory samples from diverse locations within cities. From these samples, McLean crafts maps that convey the unique narrative of each city, articulated not through traditional visual symbols but through its distinct smells. These “smell maps” represent an intersection of human-perceived smellscapes, cartography, and the communication of sensory data that typically eludes the eye.

In creating these sensory maps, McLean steps beyond conventional cartographic boundaries. She often omits traditional elements like street names, architectural landmarks, rivers, and parks, choosing instead to represent geographic space solely through the depiction of odors and their pathways. This method offers viewers an insightful and emotionally resonant exploration of cities, transcending the visual to engage with the often overlooked yet profoundly impactful world of urban scents. These maps challenge and redefine our understanding of spatial narratives, emphasizing the vital role of olfactory experiences in shaping our perception of and relationship with urban spaces.

The realm of “edible art” is epitomized by the innovative work of Sam Bompas, co-founder of Bompas & Parr (https://youtu.be/y03AI5Ad9IM). This studio, leading in flavor-based experience design, culinary research, architectural installations, and contemporary food design, consists of a team of creatives, cooks, designers, technicians, and architects. In one of their most remarkable installations, visitors find themselves immersed in an environment resembling a giant sandwich, complete with bread walls, lettuce ceilings, and tomato floors. But the explorations of the pair extend beyond mere culinary creations. For instance, their experiments with jelly demonstrate the transformative potential of scaling a simple concept to extraordinary levels. Their work reveals the unexplored possibilities within everyday items: a gherkin, ordinarily a mere food item, becomes a source of light when electricity is passed through it, emitting a sodium glow due to its vinegar content. In a striking installation from 2015, Bompas and Parr created a work at the intersection of taste and sight with Gherkin Chandelier (Fig. 2.20). The work used 60 pickles as light sources and consumed the electricity typical of an entire city block, creating a magnificent, albeit expensive, chandelier.

Fig. 2.20
A photo of a dark chandelier hanging from the ceiling, emitting smoke from three sections.

(Source Permission of the artist)

Sam Bompas and Harry Parr, Gherkin Chandelier, 2015

This edible installation is a multisensory feast, stimulating not just the senses of smell and taste, but also providing a visually and physically immersive experience. The work of Bompas & Parr and the other artists mentioned are emblematic of the diverse and innovative approaches artists are adopting to incorporate olfactory and gustatory elements in their art. These endeavors challenge and expand the traditional boundaries of artistic expression, offering fresh perspectives on sensory engagement and interaction with the world. As we transition into discussing the future of sensory experiences in art, these examples stand as a testament to the limitless potential for creativity and exploration in the realm of neuroarts.

6 The Future of Sensory Experience in Art

As we envision the future of sensory experience in art, intersecting the realms of neurology, technology, and artistic creativity, the focus extends beyond merely capturing attention and conveying information. The emerging landscape emphasizes a profound engagement with our emotional and cognitive processes. This integration not only enhances the aesthetic experience but also fosters a deeper, more meaningful interaction with art, resonating with our psychological and emotional states. Thus, the future of sensory experience in art is intrinsically linked to our understanding of the neurochemical responses of the brain. Artistic experiences can trigger the release of various neurochemicals, hormones, and endorphins, eliciting powerful emotional reactions. These biochemical responses are not mere epiphenomena; they play a crucial role in how art affects us on a deep, emotional level. The enhancement of memory saliency through emotional responses to art is a key area of exploration. The heightened emotional engagement facilitated by art can lead to more vivid and lasting memories of the experience, underscoring the potential of art as a powerful tool for emotional and cognitive stimulation.

Another fascinating aspect of the future of sensory experience in art lies in the exploration of individual brain connectivity patterns and their influence on artistic preferences. Neuroscientific research is increasingly revealing that our brains are wired in unique ways, leading to diverse responses to artistic stimuli. Further investigations could revolutionize the way we approach art, moving toward more personalized and tailored experiences resonating with individual neural patterns. One area of research likely to prove fruitful is the role played by the default mode network (Fig. 2.21), a key brain network associated with the neurological basis of the self. This network, active during periods of internal focus, daydreaming, and meaning-making, is integral to our experience of art (Menon, 2023). By engaging the default mode network, art can become a medium for introspection, self-reflection, and the construction of personal narratives. This engagement not only enriches the artistic experience but also contributes to our understanding of the self and our place in the world.

Fig. 2.21
A sagittal and axial view of a human brain on the left and right. On the left, two bright spots are visible on the frontal and parietal lobes. On the right, the bright spots are located in the frontal lobes, both left and right sides, and toward the back of the brain.

(Source From Wikimedia Commons, licensed under CC0)

Default mode network

Such research can be combined with new advances in technology to achieve the desired experiential states. The advent of sensory substitution devices offers a groundbreaking avenue for the future of sensory experience in art. Technologies crafted for the restoration or modification of specific sensory functions, or for converting one sensory experience into another, open new avenues for enhancing accessibility and investigating sensory experiences (Eagleman & Perrotta, 2023). By harnessing these technologies, artists can create experiences which are inclusive and adaptable to diverse sensory capabilities, thus broadening the scope of who can engage with and appreciate art. Looking toward the future, we envision a landscape where artists leverage advanced technologies to create bespoke sensory experiences. Such experiences are poised not merely to stimulate the senses, but also to accommodate personal predilections and neural configurations. Among the foremost technologies with potential for these sought-after outcomes are those involving elements of sensory substitution.

In the realm of sensory substitution devices, innovations aimed at enhancing or replacing vision stand at the forefront, offering alternative perceptions to those with visual impairments. Of these, tactile cameras have garnered attention for their ability to translate visual information into tactile formats, thus enabling visually impaired individuals to interact more effectively with their environment (Maćkowski et al., 2023). One such device is the OrCam MyEye Pro, a wearable assistive technology providing smart reading, face recognition, color and product identification, and orientation assistance. It significantly aids individuals who are blind or visually impaired in navigating the visual world. Another notable innovation is the 2C3D tactile camera, created by Oren Geva, which converts images into 3D tactile representations, allowing blind individuals to “feel” images and gain a deeper understanding of the visual world (Matchinski et al., 2023).
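The underlying translation step these devices perform—reducing a rich visual field to a coarse pattern a fingertip can read—can be illustrated with a toy example. The sketch below is not the algorithm of any particular device; it simply downsamples a grayscale image into a grid of raised/flat dots, with cell size and threshold chosen arbitrarily for illustration.

```python
# Toy sketch of visual-to-tactile translation: average each block of a
# grayscale image and raise a dot where the block is dark. Cell size and
# threshold are arbitrary assumptions, not parameters of a real device.

def to_tactile_grid(image, cell=2, threshold=128):
    """image: 2D list of 0-255 grayscale values.
    Returns a coarser grid of booleans (True = raised dot)."""
    rows = len(image) // cell
    cols = len(image[0]) // cell
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [image[r * cell + i][c * cell + j]
                     for i in range(cell) for j in range(cell)]
            avg = sum(block) / len(block)
            row.append(avg < threshold)  # dark regions become raised dots
        grid.append(row)
    return grid

# A 4x4 checkerboard collapses to a 2x2 tactile pattern.
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [255, 255, 0, 0],
       [255, 255, 0, 0]]
print(to_tactile_grid(img))  # → [[True, False], [False, True]]
```

Real tactile displays add refreshable pin arrays, edge detection, and zooming on top of this basic reduction, but the principle of trading spatial resolution for touchability is the same.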

Additionally, the development of “acoustic touch” technology in smart glasses marks a significant leap forward. This technology, pioneered by Australian researchers, translates visual data into unique auditory cues, significantly improving the ability of users to detect and reach objects. The acoustic touch technology in these smart glasses uses distinct sound representations to convey visual information, such as rustling for plants or buzzing for mobile phones, enhancing the independence and quality of life for the visually impaired (Zhu et al., 2023). Also, advancements in addressing colorblindness showcase the potential of sensory substitution devices in art experiences. Applications that allow real-time color correction offer individuals with colorblindness a more authentic and meaningful interaction with art. The concept of “smart art,” which adapts its colors based on the viewer's color perception, represents another innovative approach, enabling art to dynamically respond to individual viewers’ sensory needs (Recupero et al., 2023). These technological breakthroughs in sensory substitution devices not only enhance visual experiences for those with impairments but also open up new avenues for artistic exploration and appreciation, paving the way for a future where art is accessible and resonant for all.

Along with those for vision, the landscape of wearable technology is rapidly evolving, offering innovative solutions for mood regulation and emotional well-being (Devi et al., 2023). Among these pioneering devices is the Mood Sweater, a groundbreaking tool that gauges one's emotional state and physiological responses. The device utilizes the data it collects to offer personalized support, such as haptic feedback to mitigate anxiety or actionable suggestions like engaging in physical activity or social interaction. Its ability to tailor interventions makes it a powerful ally in managing mental health. Another noteworthy project in this domain is the Friendship Bench, which focuses on leveraging technology to provide support for individuals suffering from depression. Other devices for health and wellness, such as HeartMath's Inner Balance and emWave, are pioneering new paths in heart-rate variability and stress management. These tools utilize biofeedback to synchronize heart rhythm patterns with emotional states, thereby facilitating stress reduction and emotional regulation. Their ability to align physiological and emotional well-being has made them indispensable in the quest for mental balance. The effectiveness of these devices in enhancing heart-rate variability exemplifies the significant impact biofeedback technology can have on personal health management (Lin et al., 2023).
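The heart-rate-variability signal such biofeedback tools work from reduces to simple arithmetic over beat-to-beat (RR) intervals. As one common illustration, RMSSD (root mean square of successive differences) is a standard time-domain HRV measure; the sketch below computes it over invented sample values and is not drawn from any specific product's implementation.

```python
# Sketch of a standard HRV statistic used in biofeedback contexts:
# RMSSD over a series of RR intervals in milliseconds. Sample values
# are invented for illustration.
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 800, 820, 795, 810]  # illustrative beat-to-beat intervals (ms)
print(round(rmssd(rr), 1))      # → 18.7
```

Higher RMSSD generally reflects greater parasympathetic activity, which is why biofeedback applications coach breathing or relaxation while watching this number (or a related coherence score) respond in real time.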

Muse Headbands represent a critical development in this technological landscape, concentrating on brainwave activity to foster meditation and mindfulness practices. These headbands provide invaluable real-time feedback about the user's mental state, promoting relaxation and focus during meditation sessions. By offering insights into brain function, Muse Headbands have become essential for individuals aiming to deepen their mindfulness practices and achieve greater mental serenity and clarity (Cheng & Lin, 2023). Likewise, technology by Thync takes a unique approach to stress reduction and sleep enhancement, employing low-level electrical stimulation to directly influence neural activity. By targeting specific cranial nerves and brain regions, this wearable technology demonstrates the capacity to modulate mood and promote relaxation, showcasing the advanced capabilities of wearable technology in altering and improving mental states. Additionally, addressing mental health concerns such as depression, anxiety, and insomnia, the Fisher Wallace Stimulator utilizes cranial electrotherapy stimulation (CES) to deliver a gentle electrical current to the brain. This innovative approach helps regulate neurotransmitter levels, providing a novel method for alleviating symptoms associated with these mental health issues.

In the realm of holistic health monitoring, Spire Health Tags offer a comprehensive approach. These discreet wearable devices monitor key indicators like respiratory patterns and heart rate, providing users with insightful data on stress levels, sleep quality, and physical activity. They represent a holistic approach to health monitoring, delivering essential information that can guide lifestyle choices and enhance overall well-being. TouchPoints utilize bilateral alternating stimulation tactile (BLAST) technology, offering a novel method for managing stress and anxiety. By influencing the body's fight-or-flight response, these devices demonstrate the transformative potential of wearable technology in stress management and focus enhancement. Finally, Apollo Neuro employs gentle vibrations to improve the body's resilience to stress, illustrating the innovative use of haptic technology. This device engages both the sympathetic and parasympathetic nervous systems, balancing the stress response and heralding a new age in wearable stress-management technology. Each of these devices showcases the profound potential of technological innovation to revolutionize personal health and wellness, paving the way for a future where mental and emotional balance is within reach for everyone.

Together, these diverse wearable devices represent a significant advancement at the intersection of technology, health, and well-being. By providing personalized interventions, real-time feedback, and innovative approaches to mental health management, they open up new possibilities for enhancing emotional regulation, reducing stress, and improving overall quality of life. As these technologies continue to evolve, they promise to redefine the way we approach personal health and emotional well-being.

At the same time, the intersection of art and technology, especially within the realm of neuroscience, heralds a new frontier in artistic expression. Among the pioneering initiatives is Neurogami (https://neurogami.com/), which melds art with neuroscience. In this unique collaboration, artists and neuroscientists join forces to interpret fMRI brain scans, crafting artworks that echo the brain's activity. These creations range from direct visual representations of neural patterns to abstract works that embody the emotions or thoughts linked with the observed brain activity. This project exemplifies how the intricacies of the human brain can be translated into compelling artistic narratives.

Brain Painting (http://www.brainpaint.com/index.html) is another groundbreaking venture that intertwines art with brain science. In this project, participants don EEG headsets that capture brain activity as they engage with art. The EEG data then serves as the basis for generating new artworks, each uniquely reflective of the viewer's emotional and cognitive responses. This innovative approach personalizes art creation, making each piece a distinct representation of the individual's psychological and emotional state.

The BrainPort Vision Pro (https://www.wicab.com/brainport-vision-pro) represents a fascinating blend of art, neuroscience, and technology, primarily aimed at assisting individuals with vision loss. The device pairs a head-mounted camera with an electrode array worn on the tongue, translating visual information into electrical signals that the brain interprets as tactile sensations. Over time, the brain learns to decode these signals as visual data, effectively learning to “see” through the tongue. This venture is a testament to the increasingly blurred boundaries between art, neuroscience, and technological innovation. Other devices, such as the Tactile-Vision Substitution System and Seeing with Sound, transform images into sensory experiences like vibrations on the skin or sound patterns. Though still experimental, these technologies show immense promise in redefining sensory perception and art creation.
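Image-to-sound substitution systems of this kind typically scan an image left to right, mapping vertical position to pitch and pixel brightness to loudness. The sketch below, assuming NumPy, illustrates that mapping in its simplest form; the sample rate, frequency range, and column duration are arbitrary illustrative choices, not the parameters of any system named above.

```python
import numpy as np

def sonify(image, sample_rate=8000, col_duration=0.05,
           f_lo=200.0, f_hi=2000.0):
    """Render a grayscale image (2-D array, values in 0..1) as audio:
    columns are scanned left to right, each row drives a sine
    oscillator (top rows = higher pitch), brightness sets loudness."""
    rows, cols = image.shape
    freqs = np.geomspace(f_hi, f_lo, rows)          # top row is highest
    n = int(sample_rate * col_duration)
    t = np.arange(n) / sample_rate
    tones = np.sin(2 * np.pi * freqs[:, None] * t)  # shape (rows, n)
    # Each column becomes a short chord weighted by pixel brightness.
    chunks = [image[:, c] @ tones for c in range(cols)]
    audio = np.concatenate(chunks)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio

# A bright diagonal line: the pitch sweeps downward as the scan proceeds.
img = np.eye(8)
wave = sonify(img)
print(wave.shape)
```

With training, listeners of such systems learn to hear edges, shapes, and motion in these pitch-time patterns, which is precisely the perceptual learning the chapter describes.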

Perhaps most importantly, brain–computer interfaces (BCIs) mark a significant leap in this interdisciplinary field. BCIs enable individuals to control computers through brain activity, opening a plethora of possibilities for those with physical disabilities. Artists are harnessing this technology to create unique forms of art. A notable example is composer Tod Machover (1953–), who collaborates with disabled individuals to compose music derived from their brain activity (Machover, 2004). This approach not only democratizes art creation but also illustrates the profound impact of integrating neuroscience and technology in the arts. Coupled with biofeedback, BCIs promise newly tailored experiences. At the forefront of this bespoke approach is the integration of biofeedback into immersive art, where researchers are exploring innovative ways to create art that dynamically responds to individual mental and physiological states (Nigg, 2023; Stern, 2010). In groundbreaking studies, participants engage with art while wearing devices that measure brain activity, allowing the art to morph in real time based on their emotional and cognitive responses (Prpa & Pasquier, 2019; Yang et al., 2023). Such an approach has proven potential to craft highly personalized art experiences that cater to individual preferences while also promoting mental well-being.
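One simple way such real-time adaptation can work is to normalize a physiological reading into an arousal estimate and map it onto rendering parameters. The sketch below uses heart rate as the input signal; the baseline, thresholds, and parameter ranges are purely illustrative assumptions, not taken from any of the studies cited here.

```python
def map_biosignal_to_visuals(heart_rate_bpm, baseline_bpm=65.0):
    """Map a single physiological reading to rendering parameters.
    Arousal above baseline warms the palette and speeds up motion;
    readings at or below baseline yield calm, cool, slow visuals."""
    # Normalized arousal in [0, 1]: 0 at baseline, 1 at baseline + 40 bpm.
    arousal = min(max((heart_rate_bpm - baseline_bpm) / 40.0, 0.0), 1.0)
    return {
        "hue_degrees": 240.0 - 240.0 * arousal,   # blue (240) -> red (0)
        "motion_speed": 0.2 + 1.8 * arousal,      # slow drift -> rapid flow
        "particle_count": int(100 + 900 * arousal),
    }

for bpm in (60, 75, 105):
    print(bpm, map_biosignal_to_visuals(bpm))
```

A responsive installation would run this mapping continuously against a live sensor stream, letting the artwork drift between calm and intense states with the viewer.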

Emerging technologies like haptic feedback are being developed to aid individuals with sensory processing disorders. This technology can gently acclimate individuals to challenging sensory inputs, such as loud noises or bright lights, in a controlled and therapeutic manner (Davies & Gavin, 2007). Such interventions hold immense promise for enhancing sensory tolerance and comfort in daily life for individuals with conditions like autism. The future of sensory experience in art is rich with possibilities, marked by a shift toward more inclusive, adaptive, and personally meaningful forms of artistic expression. However, the value of modulating sensory experiences hinges critically on their alignment with the neurotype of the intended audience. Understanding the diagnostic criteria and symptoms of each sensory processing condition is imperative to tailor the stimuli precisely, ensuring the outcome aligns with the artist's vision and the audience's sensory profile. This refined approach recognizes the variety in sensory processing among individuals and initiates contemplation on how the art experience can expand to become more inclusive and accessible.

The future of sensory experience in art, thus, is a balance between artistic intent and audience receptivity. It is a domain where art and technology converge with an understanding of neurological diversity, fostering experiences that are as unique and dynamic as the individuals interacting with them. This era of art challenges and redefines the traditional norms, paving the way for bespoke experiences able to honor and cater to the varied sensory and cognitive landscapes of its audience. The promise of this future lies in its ability to adapt art to individual needs, making it a deeply personal and engaging journey for each participant.