Abstract

Spontaneous and conversational laughter are important socio-emotional communicative signals. Neuroimaging findings suggest that non-autistic people engage in mentalizing to understand the meaning behind conversational laughter. Autistic people may thus face specific challenges in processing conversational laughter, due to their mentalizing difficulties. Using fMRI, we explored neural differences during implicit processing of these two types of laughter. Autistic and non-autistic adults passively listened to funny words, followed by spontaneous laughter, conversational laughter, or noise-vocoded vocalizations. Behaviourally, words plus spontaneous laughter were rated as funnier than words plus conversational laughter, and the groups did not differ. However, neuroimaging results showed that non-autistic adults exhibited greater medial prefrontal cortex activation while listening to words plus conversational laughter than to words plus spontaneous laughter, whereas autistic adults showed no difference in medial prefrontal cortex activity between these two laughter types. Our findings suggest a crucial role for the medial prefrontal cortex in understanding socio-emotionally ambiguous laughter via mentalizing. Our study also highlights the possibility that autistic people may face challenges in understanding the essence of the laughter we frequently encounter in everyday life, especially in processing conversational laughter that carries complex meaning and social ambiguity, potentially leading to social vulnerability. Therefore, we advocate for clearer communication with autistic people.

Introduction

Autistic people often encounter difficulties in non-verbal social communication (Mundy et al. 1986). While most research in this area centers on visual cues, such as eye gaze, gesture, and facial expressions, revealing different patterns between autistic and non-autistic people (Senju et al. 2009; Trevisan et al. 2018), auditory cues have largely been neglected. It is equally important to understand how autistic people perceive and experience non-verbal auditory cues differently, particularly in the context of positive emotional expressions. Indeed, the literature to date has mostly focussed on negative emotional expressions, such as anger, sadness, fear, and disgust, with relatively limited attention paid to the expression of positive emotions (see the reviews by Uljarevic and Hamilton 2013 and Leung et al. 2022). A shift toward exploring the full range of emotional expressions would enhance our understanding of the social strengths and weaknesses of autistic people, promoting a more balanced perspective. Furthermore, by exploring the diverse social communication experiences of autistic people, we can gain insights into the underlying mechanisms of social communication and interaction.

Laughter, as a universal positive emotional expression, plays a significant role in social bonding during human interactions (Sauter et al. 2010; Bryant and Bainbridge 2022). Although laughter is often viewed as a spontaneous and uncontrolled emotional vocalization triggered by tickling and humor (Provine 2004; Gervais and Wilson 2005), it predominantly occurs in conversation as a voluntary communicative signal (Provine 1993; Vettin and Todt 2004): people frequently laugh after verbal utterances to signal affiliation and agreement with others, mediating the meaning of utterances and regulating the flow of conversation (Vettin and Todt 2004). Here, we define spontaneous laughter as uncontrolled and involuntary, and conversational laughter as controlled and voluntary (Provine 2004; Gervais and Wilson 2005; McGettigan et al. 2015). Although the production of laughter varies in the degree of volitional control and emotional content, much naturally occurring laughter is likely to be a mix of both types (Scott et al. 2022). Spontaneous and conversational laughter are therefore both salient social signals: they play very different roles in communicating socio-emotional meaning (Neves et al. 2017), recruit different production systems (Wild et al. 2003; Gerbella et al. 2020), and are perceived to differ in authenticity, a perception associated with acoustic differences (Lavan et al. 2016). Laughter not only promotes group cohesion but also fosters rapport (Dunbar et al. 2012; Manninen et al. 2017) and intimacy (Gray et al. 2015) in human interactions. Understanding the meaning of laughter in various social contexts is therefore essential for individuals to establish and maintain social bonds and relationships (Scott et al. 2014).

Human laughter is a highly contagious behavior (Provine 1992). The perception of laughter engages oro-facial mirror networks, including premotor cortex, the pre-supplementary motor area (SMA), and right inferior frontal gyrus (Warren et al. 2006; O'Nions et al. 2017), consistent with findings that much of human adult laughter is associated with behavioral contagion (Provine 1992): we are primed to laugh when we hear laughter, and we are 30 times more likely to laugh when with others than when alone (Provine and Fischer 1989). The contagious-laughter effect is strongly mediated by social context, such as audience size and the intimacy/familiarity of the relationship (Provine 1992; Scott et al. 2022). However, autistic people tend to express and experience laughter differently to their peers (Reddy et al. 2002; Hudenko et al. 2009; Wu et al. 2015; Helt and Fein 2016; Helt et al. 2020). For instance, autistic children are less likely to join in others' laughter and are more prone to laughing by themselves; they seldom try to elicit laughter from others and are less inclined to laugh at funny faces or socially inappropriate acts (Reddy et al. 2002). They perceive cartoons with a laugh track as less enjoyable than ones without, and laugh less when watching these cartoons than non-autistic children do (Helt and Fein 2016). Laughter might thus be less socially contagious for autistic people.

The perception of laughter involves the engagement of high-level cognitive networks, such as the mentalizing network (Szameitat et al. 2010; McGettigan et al. 2015; Lavan et al. 2017; Sumiya et al. 2017). However, the processing of laughter differs with respect to its authenticity (McGettigan et al. 2015; Lavan et al. 2017). Passive listening to spontaneous laughter induces greater activity in auditory cortex (superior temporal gyri, STG) than conversational laughter, likely reflecting the difference in emotional authenticity between these types of laughter. Interestingly, conversational laughter specifically engages mentalizing networks, showing greater activation in the medial prefrontal cortex (mPFC) and anterior cingulate cortex (ACC) compared to spontaneous laughter (McGettigan et al. 2015), and the degree of mPFC activation correlates with the perceived authenticity of laughter (Lavan et al. 2017). This suggests that laughter processing also involves representing the intentions behind the laughter, especially for socially ambiguous conversational laughter. As autistic people experience difficulties in mentalizing, supported by neuroimaging evidence of atypical activation of the mPFC (Frith and Frith 2006; Gilbert et al. 2009; White et al. 2014), they may specifically struggle to comprehend the meaning of conversational laughter, processing it more like spontaneous laughter, which may subsequently impact their use of laughter in social interactions. Indeed, autistic children rarely produce unvoiced laughter during social play, a type more closely associated with conversational than spontaneous laughter (Hudenko et al. 2009).

Experiencing and understanding laughter in real life may be different for autistic people due to its contagious nature and its richness in social meaning; however, studies focusing on the processing of laughter in autistic adults are rare. We have previously found that laughter is implicitly processed similarly by autistic and non-autistic people, in terms of both its presence and its type: adding either spontaneous or conversational laughter to spoken "dad jokes" increases their perceived funniness, and spontaneous laughter amplifies the funniness of jokes more than conversational laughter, likely due to its greater authenticity (Cai et al. 2019). We further found that although both autistic and non-autistic adults could explicitly differentiate between these two types of laughter when rating the affective properties of the laughter itself, autistic adults rated conversational laughter as more authentic and emotionally arousing than non-autistic adults did, perceiving it to be more similar to spontaneous laughter (Cai et al. in press). This discrepancy between implicit and explicit processing of laughter suggests that autistic people do not universally experience difficulties with all forms of non-verbal cues; in our case, with all types of laughter. Intriguingly, however, autistic adults may process conversational laughter as more like spontaneous laughter, possibly as a result of mentalizing difficulties in interpreting the socio-emotional meanings embedded in conversational laughter.

Despite typical behavioral responses to laughter, an fMRI study of laughter in autistic adults found reduced mPFC activation to written jokes followed by laughter; while laughter increased the pleasantness of jokes for all participants, this effect was smaller in autistic adults (Sumiya et al. 2020). However, this study did not report what type of laughter was used. It is therefore unclear whether autistic adults would show reduced mentalizing-related activation during the processing of all types of laughter due to their atypical non-verbal communication, or whether they might show atypical neural responses only to conversational laughter, which carries a degree of social ambiguity, whilst perceiving spontaneous laughter similarly to non-autistic people. Given laughter's salient role in social communication and bonding, understanding the neural similarities and differences in processing different types of laughter in autism is essential. This can reveal how autistic people process laughter as a socio-emotional signal in everyday contexts and help identify potential areas of social vulnerability during communication. To address this gap in the literature, we aimed to further explore the neural systems recruited in the implicit processing of different types of laughter. Additionally, we aimed to explore whether the profile of neural activity during implicit processing of different types of laughter is in line with previous findings on explicit processing of laughter, with a particular interest in the involvement of the mPFC in the processing of conversational laughter. During fMRI scanning, autistic and non-autistic adults passively listened to funny words followed by either spontaneous laughter, conversational laughter, or noise-vocoded (NV) human vocalizations. Post-scan, participants listened to the word plus laughter pairs again and rated the funniness of each word. We hypothesized that, despite typical behavioral responses to laughter, there would be differences between autistic and non-autistic adults in the neural correlates of implicit processing of these different types of laughter, specifically in the mPFC when listening to conversational laughter, and in the sensorimotor network for both types of laughter.

Materials and methods

Participants

Twenty-five autistic adults and 23 non-autistic adults participated in this study; all were right-handed and had no speech, hearing, or neurological difficulties. The groups were comparable for sex (χ²(1) = 0.060, P = 0.807), age (t(46) = −0.134, P = 0.894), and verbal (t(46) = 0.720, P = 0.475), performance (t(46) = −0.875, P = 0.386), and full-scale IQ (t(46) = 0.031, P = 0.975), as measured by four subtests of the Wechsler Adult Intelligence Scale (WAIS-IV UK; Wechsler 2008: Matrix Reasoning, Block Design, Similarities, Vocabulary). The groups differed on the Autism Spectrum Quotient (AQ; Baron-Cohen et al. 2001), t(46) = 11.879, P < 0.001. Although two autistic males were unable to complete the scan due to discomfort and health concerns, the scan groups remained comparable for sex (χ²(1) = 0.000, P = 1.000), age (t(44) = −0.258, P = 0.798), and verbal (t(44) = 0.392, P = 0.697), performance (t(44) = −1.123, P = 0.268), and full-scale IQ (t(44) = −0.303, P = 0.763), and differed on AQ, t(44) = −12.325, P < 0.001. See Table 1.

Table 1

Participant demographic information.

                   NA                 Autism             Autism for scan
N (male:female)    23 (13:10)         25 (15:10)         23 (13:10)
Age (years)        28.348 (9.203)     28.720 (9.969)     29.087 (10.238)
Verbal IQ          121.435 (15.048)   117.800 (19.416)   119.435 (19.315)
Performance IQ     109.478 (15.985)   113.560 (16.289)   114.783 (16.059)
Full Scale IQ      119.000 (15.895)   118.840 (19.429)   120.565 (19.023)
AQ                 13.652 (5.928)     35.560 (7.292)     37.348 (7.062)

Note. Values are given as mean (standard deviation). NA = non-autistic; AQ = autism-spectrum quotient.


The autistic participants had received an official diagnosis from a qualified clinician. Due to testing restrictions during the COVID-19 pandemic, we were unable to administer the ADOS (Hus and Lord 2014) to confirm their diagnosis. Nonetheless, the AQ was used in the pre-screening assessment, and only non-autistic participants with an AQ score below the cut-off of 32 were included in the study. Non-autistic participants were recruited from local participant databases; autistic participants were recruited through university disability services and autism databases. Informed written consent was obtained prior to testing, and the project received approval from the university research ethics committee.

Experimental design and procedure

Word stimuli

A subset of 300 words was selected from the original pool (Engelthaler and Hills 2018). The words were chosen to avoid floor and ceiling effects, as per the results of the baseline ratings task (funniness: mean = 3.309, SD = 0.586; intelligibility: mean = 92.959%, SD = 10.619%), and recorded by a professional male comedian, who read the words in a comedic manner (duration: mean = 0.736 s, SD = 0.225 s; root-mean-square: mean = 0.031, SD = 0.000; pitch: mean = 189.677 Hz, SD = 82.154 Hz). Full details of word selection are given below.

To avoid floor effects, we initially selected 719 words with a humor rating higher than 2.8 on a 5-point Likert scale (1—humorless; 5—humorous) from the original pool (Engelthaler and Hills 2018). Four native English speakers then screened these words for appropriateness, yielding a list of 621 words. The raw audio was downsampled to 44,100 Hz and saved as mono .wav files with 32-bit resolution, and each word was trimmed and edited into a 1-s sound file (.wav) using Audacity(R) recording and editing software, version 2.3.3. The files were then normalized for root-mean-square (RMS) amplitude using PRAAT (Boersma, 2021). We further conducted an online task to establish baseline funniness ratings for the words. In this task, the 621 words were assigned to three lists of 207 words each, matched on the humor ratings from the original pool (List 1: M = 3.19, SD = 1.23; List 2: M = 3.20, SD = 1.23; List 3: M = 3.18, SD = 1.23). Fifty-eight native English speakers were randomly assigned to one of the three lists (List 1: n = 18; List 2: n = 19; List 3: n = 21). Participants were instructed to listen to recordings of a comedian, called "Ben" (not his real name), performing funny words, and were asked to rate the funniness of each word on a 7-point scale ("How funny was the word the way that Ben said it?" 1—not funny at all, 4—neutral, 7—extremely funny). Additionally, participants indicated whether they understood the meaning of each word. A practice trial was given before the actual task. The task was built and presented using Gorilla Experiment Builder (Anwyl-Irvine et al. 2020).
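To illustrate the normalization step, the following is a minimal R sketch of RMS equalization, analogous to the PRAAT step described above; the study itself used PRAAT, and the file names and target level here are illustrative.

library(tuneR)

# Target RMS; 0.031 matches the reported level of the normalized words.
target_rms <- 0.031

normalize_rms <- function(infile, outfile, target = target_rms) {
  w <- readWave(infile)
  x <- w@left / (2^(w@bit - 1))          # scale integer samples to [-1, 1]
  x <- x * (target / sqrt(mean(x^2)))    # rescale to the target RMS
  out <- Wave(round(x * (2^15 - 1)), samp.rate = w@samp.rate, bit = 16)
  writeWave(out, outfile)
}

normalize_rms("word_raw.wav", "word_norm.wav")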

Sound stimuli

This study used 150 sound stimuli: 50 spontaneous laughter stimuli, 50 conversational laughter stimuli, and 50 NV human vocalizations. The spontaneous and conversational laughter stimuli were recorded using a method previously validated in behavioral and neuroimaging experiments (McGettigan et al. 2015; O'Nions et al. 2017) and were selected from a previous study, as detailed in Cai et al. (2019). We created the NV stimuli by applying one-channel noise-vocoding to various human emotional vocalizations, including expressions of anger (eight clips), pleasure (six clips), disgust (six clips), surprise (three clips), achievement (four clips), contentment (eight clips), fear (five clips), relief (five clips), and sadness (five clips). The resulting NV stimuli lack emotional meaning and are not recognizable as emotional expressions by normal-hearing listeners. We normalized all stimuli for RMS amplitude using PRAAT (Boersma, 2021) and extracted their acoustic parameters. One-way ANOVAs indicated that the spontaneous laughter, conversational laughter, and NV stimuli were comparable in duration, RMS, and intensity (Table 2). For a detailed comparison of the acoustic properties of spontaneous and conversational laughter, see Supplementary Table S1.
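For intuition, one-channel noise-vocoding can be approximated by extracting the broadband amplitude envelope of a vocalization and using it to modulate white noise, discarding all spectral detail. Below is a minimal R sketch assuming the tuneR and signal packages; file names and filter settings are illustrative, as the study's exact vocoding parameters are not specified here.

library(tuneR)
library(signal)

w  <- readWave("vocalization.wav")
fs <- w@samp.rate
x  <- w@left / (2^(w@bit - 1))            # samples scaled to [-1, 1]

# Broadband amplitude envelope: full-wave rectify, then low-pass at 30 Hz.
lp  <- butter(4, 30 / (fs / 2), type = "low")
env <- filtfilt(lp, abs(x))
env[env < 0] <- 0                          # filtering can slightly undershoot

carrier <- runif(length(x), -1, 1)         # white-noise carrier
y <- env * carrier
y <- y / max(abs(y)) * 0.9                 # leave headroom to avoid clipping

writeWave(Wave(round(y * (2^15 - 1)), samp.rate = fs, bit = 16),
          "vocalization_nv.wav")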

Table 2

Acoustic properties of sound stimuli set.

Acoustic measure     Condition   Mean     SD      F       P
Total duration (s)   Spont       2.376    0.406   1.297   0.276
                     Conver      2.269    0.361
                     NV          2.307    0.216
Root-mean-square     Spont       0.317    0.000   0.908   0.406
                     Conver      0.317    0.000
                     NV          0.317    0.000
Intensity (dB)       Spont       64.000   0.000   0.945   0.391
                     Conver      64.000   0.000
                     NV          64.000   0.000

Note. df = (2, 147). P-values are two-tailed. Spont = spontaneous laughter; Conver = conversational laughter; NV = noise-vocoded human vocalization.


fMRI experimental design

An event-related paradigm was used, with each trial beginning with a jittered inter-trial interval (ITI) of 2 to 4 s. In the sound stimulus conditions, a funny word was presented, followed by a sound stimulus from one of three conditions (Spont Laugh, Conver Laugh, NV) after a fixed inter-stimulus interval (ISI) of 0.09 s. The rest condition consisted of a 2-s period of silence following the ITI. Vigilance trials involved a 0.5-s beep, requiring participants to press a button within 3 s. Each functional run, approximately 14 min long and comprising 105 trials, contained 25 trials per condition and five vigilance trials to assess attentiveness. The entire experiment consisted of four functional runs with a 1-min rest period between runs, during which participants passively listened to 300 words paired with sound stimuli; each sound stimulus was used twice. Trial conditions were pseudorandomized so that no more than two consecutive trials of the same word-plus-sound condition occurred. Furthermore, neither rest nor vigilance trials appeared first in a run, nor were they presented consecutively. The pairs of words plus sound stimuli were pseudorandomized and counterbalanced across runs and participants (see Fig. 1A). A sketch of one possible ordering scheme is given below.
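As an illustration of the ordering constraints, the following R sketch generates one run of 105 trials: the 75 word-plus-sound trials are shuffled until no condition occurs more than twice in a row, and the 25 rest and 5 vigilance trials are then inserted after distinct sound trials so that none is first and no two are consecutive. This is one possible scheme, not the study's actual randomization code; condition labels are illustrative.

sound   <- rep(c("SpontLaugh", "ConverLaugh", "NV"), each = 25)
special <- c(rep("Rest", 25), rep("Vigilance", 5))

set.seed(1)
repeat {
  s <- sample(sound)
  if (max(rle(s)$lengths) <= 2) break    # constraint: runs of at most two
}

# Insert each rest/vigilance trial after a distinct sound trial, so none is
# first in the run and no two occur back to back.
special <- sample(special)
gaps <- sort(sample(seq_along(s), length(special)))
run <- character(0); j <- 1
for (i in seq_along(s)) {
  run <- c(run, s[i])
  if (i %in% gaps) { run <- c(run, special[j]); j <- j + 1 }
}
length(run)   # 105 trials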

Fig. 1

Experimental design of (A) scan session: implicit laughter processing and (B) post-scan behavioral session: implicit laughter rating task. Note. Spont Laugh = spontaneous laughter; Conver Laugh = conversational laughter; Noise-vocoded = noise-vocoded human vocalization.

Behavioral experimental design

Participants listened to the 200 word plus laughter pairs again, this time also rating the funniness of each word. Due to time constraints imposed by COVID-19 testing restrictions, the 100 word plus NV pairs were excluded. The pairs were the same as in the scan session, but their order was shuffled. Participants rated each word on a 7-point scale ("How funny was the word the way that Ben said it?" 1—not funny at all, 4—neutral, 7—extremely funny). For each trial, participants had up to 6 s to give a rating. A short practice session preceded the real task to familiarize participants with its structure. The post-scan behavioral task lasted approximately 25 min (see Fig. 1B).

Procedure

Participants were informed that the fMRI study was about humor processing; any mention of laughter was intentionally avoided during recruitment and prior to testing. Before the scan, participants were instructed to listen to humorous words spoken by a comedian, and to people's reactions to them. They were told to press a button on a button-box whenever they heard a "beep" sound. A practice sequence at the beginning ensured that the volume was adequate and that participants could clearly hear the stimuli, verified by their recalling the words they heard. Testing lasted approximately 2 h, split between a 1-h brain scan and a 1-h behavioral session, which encompassed the post-scan behavioral task, the IQ test, and questionnaires. Both the fMRI and behavioral experiments were presented using MATLAB R2018B (MathWorks Inc 2018) with the Psychophysics Toolbox (Brainard 1997).

Neuroimaging, pre-processing, and analysis

Acquisition

We employed continuous event-related fMRI, acquiring blood-oxygen-level-dependent (BOLD) images using a Siemens Avanto 1.5-Tesla MRI scanner with a 32-channel head coil. The study involved four runs of 260 echo-planar whole-brain volumes (TR = 3 s; TE = 50 ms; TA = 86 ms; slice tilt = 25° ± 5°; flip angle = 90°; 3 × 3 × 3 mm voxel resolution). Auditory stimuli were delivered via MR-compatible insert earphones connected to a Sony STR-DH510 digital AV control center. After two functional runs, we obtained high-resolution anatomical images using a T1-weighted magnetization-prepared rapid acquisition gradient echo (MPRAGE) sequence (176 sagittal slices; TR = 2730 ms; TE = 3.57 ms; flip angle = 7°; acquisition matrix = 224 × 256 × 176; slice thickness = 1 mm; 1 × 1 × 1 mm voxel resolution).

Software

Pre-processing and statistical analysis were conducted in SPM12 (Penny et al. 2011), implemented in MATLAB R2018B (MathWorks Inc 2018).

Pre-processing

The first three volumes of each EPI sequence were discarded. The remaining volumes underwent spatial alignment along the AC-PC axis for each participant, followed by slice time correction using the last slice as a reference. The corrected images were then spatially realigned and registered to the mean. The structural image was co-registered with the mean of the corrected images, aligning structural scans with SPM12 (Penny et al. 2011) tissue probability maps during segmentation. The forward deformations image from the segmentation was used to normalize the functional images to standard MNI space. Finally, the normalized functional images were resampled into 2 × 2 × 2 mm voxels and spatially smoothed using an isotropic 8 mm full width at half-maximum Gaussian kernel.

Analysis

fMRI data were analyzed in an event-related manner. Variance in each time series was decomposed in a voxelwise general linear model with the following regressors: onsets and durations of (1) words plus spontaneous laughter (Spont Laugh), (2) words plus conversational laughter (Conver Laugh), (3) words plus NV stimuli (NV), and (4) vigilance trials. These regressors, along with six additional regressors representing realignment parameters calculated by SPM12 (Penny et al. 2011), constituted the full model for each session. The data were high-pass filtered at 128 s.

Contrasts were computed for each participant [All Laughs (Spont Laugh & Conver Laugh) > NV; Spont Laugh > NV; Conver Laugh > NV], modeling the three experimental conditions across the four runs and including movement parameters as nuisance variables. These contrasts were entered into second-level two-sample t-tests for the group analysis. Whole-brain results were corrected for multiple comparisons using a cluster-extent-based thresholding approach (Poline et al. 1997): a voxel-wise threshold of P < 0.001 combined with a cluster extent threshold determined by SPM12 (Penny et al. 2011) (P < 0.05 family-wise-error cluster-corrected). All reported clusters exceeded this cluster-corrected threshold. Reported cluster coordinates are in the Montreal Neurological Institute (MNI) coordinate system and were labeled using the AAL atlas in SPM12 (Penny et al. 2011).

Region-of-interest extraction

As we had an a priori hypothesis about the mPFC as a region of interest in laughter perception, MNI peak coordinates from a prior fMRI study on the perception of conversational versus spontaneous laughter in non-autistic adults (McGettigan et al. 2015) were used to define three ROIs: (1) left superior medial frontal gyrus (mPFC; x = −3, y = 54, z = 9); (2) left temporal thalamus (x = −3, y = −6, z = 9); and (3) right ACC (x = 0, y = 30, z = 30). To further confirm that the mPFC activation we detected during the processing of conversational laughter reflects engagement of the mentalizing network, an additional analysis was conducted on the meta-analytic mPFC region (x = 0, y = 50, z = 20) reported by Van Overwalle and Baetens (2009). The MarsBaR toolbox (Brett et al. 2002) was used to create the ROIs, building spherical 8-mm radius ROIs around the peak voxels in the selected contrasts. Beta values were extracted for analysis.
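As a rough illustration of this step, the sketch below averages beta estimates within an 8 mm sphere around an MNI peak in R. The study itself used MarsBaR in MATLAB; this is only a minimal sketch assuming a normalized contrast image readable with the RNifti package, and the file name is hypothetical.

library(RNifti)

beta   <- readNifti("beta_conver_vs_nv.nii")   # hypothetical contrast image
peak   <- c(-3, 54, 9)                         # left mPFC peak (MNI, mm)
radius <- 8                                    # sphere radius in mm

# MNI coordinates of every voxel, via the image's stored transform.
dims  <- dim(beta)
vox   <- as.matrix(expand.grid(1:dims[1], 1:dims[2], 1:dims[3]))
world <- voxelToWorld(vox, beta)

inside <- rowSums(sweep(world, 2, peak)^2) <= radius^2
mean(beta[vox[inside, , drop = FALSE]], na.rm = TRUE)   # mean beta in sphere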

Behavioral analysis

Linear mixed model analyses were conducted in R Studio (RStudio 2020) using the lme4 package (Bates et al. 2014) to estimate fixed and random coefficients. Model term selection was guided by the Akaike information criterion (AIC; Sakamoto et al. 1986). The car package (Fox et al. 2012) was used to obtain t-statistics, with the significance of fixed effects determined using Satterthwaite degrees of freedom. The lmerTest package (Kuznetsova et al. 2017) was used to calculate significance, and the emmeans package (Lenth et al. 2018) was used for Tukey's honestly significant difference tests and to compute estimated marginal means.
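As an illustration of the modeling pipeline these packages support, here is a minimal R sketch of the two models reported in the Results, assuming a long-format data frame d with columns rating, LaughterType, Group, participant, and word (all names illustrative).

library(lme4)
library(lmerTest)    # Satterthwaite-based p-values for fixed effects
library(emmeans)

m1 <- lmer(rating ~ LaughterType + Group + (1 | participant) + (1 | word),
           data = d, REML = TRUE)
m2 <- lmer(rating ~ LaughterType * Group + (1 | participant) + (1 | word),
           data = d, REML = TRUE)

summary(m1)                            # fixed effects with Satterthwaite df
AIC(m1, m2)                            # term selection, as in the Methods
emmeans(m1, pairwise ~ LaughterType)   # Tukey-adjusted marginal means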

Results

Behavioral ratings of words plus laughter

Linear mixed model analysis was conducted to investigate how different types of laughter modulated the perceived funniness of words. Laughter Type (spontaneous vs conversational) and Group (Autism vs NA) were included as fixed effects. We also included participants and word items as two crossed random effects. The models were fitted by restricted maximum likelihood (REML), and statistical significance was established via Satterthwaite's method. There was a significant main effect of Laughter Type (β = 0.08, t(9496) = 2.94, P = 0.003), but not of Group (β = 0.22, t(9496) = 0.68, P = 0.498): participants rated words plus spontaneous laughter (M = 3.34, SEM = 0.164, 95% CI [3.02, 3.67]) as significantly funnier than words plus conversational laughter (M = 3.27, SEM = 0.164, 95% CI [2.95, 3.59]). We further included the interaction between Laughter Type and Group as a fixed effect to explore whether the laughter modulation effect differed between groups. Neither the interaction (β = 0.05, t(9495) = 0.89, P = 0.372) nor the main effects (Laughter Type: β = 0.05, t(9495) = 1.48, P = 0.138; Group: β = 0.20, t(9495) = 0.60, P = 0.546) were significant in this model. This suggests that autistic and NA participants experienced the laughter modulation effect on the perceived funniness of words similarly (Table 3). Our findings replicate our previous study of the perceived funniness of jokes (Cai et al. 2019).

Table 3

Fixed and random effects estimated with the linear mixed model.

Predictor                      Model 1 (M1)                          Model 2 (M2)
                               Estimate   CI               P         Estimate   CI               P
(Intercept)                    3.16       2.71 to 3.60     <0.001    3.17       2.73 to 3.61     <0.001
LaughterType                   0.08       0.03 to 0.13     0.003     0.05       −0.02 to 0.13    0.138
Group                          0.22       −0.42 to 0.85    0.498     0.20       −0.44 to 0.83    0.546
LaughterType × Group                                                 0.05       −0.06 to 0.15    0.372

Random effects
σ²                             1.64                                  1.64
τ00                            0.15 (words); 1.25 (participants)     0.15 (words); 1.25 (participants)
ICC                            0.46                                  0.46
N                              48 participants; 300 words            48 participants; 300 words
Observations                   9502                                  9502
AIC                            32,309                                32,310
Marginal R² / conditional R²   0.004 / 0.464                         0.004 / 0.464

Note. P-values are two-tailed.


Neural responses to implicit processing of words plus laughter versus NV human vocalizations

We first investigated the neural responses associated with the implicit processing of laughter by contrasting the activations elicited during the laughter conditions (spontaneous and conversational) with those evoked during the NV human vocalization condition. Across all participants, whole-brain analyses indicated that hearing words plus both types of laughter versus NV human vocalizations was associated with activation in bilateral Heschl's gyrus (HG), bilateral STG, bilateral SMA, bilateral posterior (PCC), middle (MCC), and anterior cingulate cortex, bilateral precuneus, left middle temporal gyrus, and left superior medial frontal gyrus (mPFC). Additionally, widespread responses were detected in bilateral calcarine cortex, cerebellum, lingual gyrus, thalamus, insula, vermis, and inferior frontal gyrus, although the peak coordinates were not located in these regions (see Table 4; Fig. 2A).

Fig. 2

Activations during implicit processing of (A) spontaneous and conversational laughter versus noise-vocoded human vocalizations; (B) spontaneous laughter versus noise-vocoded human vocalizations; and (C) conversational laughter versus noise-vocoded human vocalizations.

Table 4

Brain regions showing significant activity in response to implicit processing of laughter versus noised-vocoded human vocalization.

Contrast         No. of voxels   Region                              x     y     z    T       Z
All Laugh > NV   30,686          Left Heschl's gyrus                −36   −28    14   20.80   >8
                                 Left superior temporal gyrus       −44   −24     8   19.69   >8
                                 Left middle temporal gyrus         −66   −26     4   15.63   >8
                                 Right Heschl's gyrus                48   −14     4   18.29   >8
                                 Right superior temporal gyrus       60   −12     4   18.86   >8
                 2,642           Right supplementary motor area       2     8    64    7.07   5.75
                                 Right middle cingulate cortex        6    16    38    6.37   5.33
                                 Left supplementary motor area       −4    14    54    6.33   5.31
                                 Left middle cingulate cortex        −6    14    40    6.01   5.11
                                 Left superior medial gyrus          −8    18    40    5.87   5.02
                                 Left anterior cingulate cortex      −6    26    22    5.64   4.87
                                                                      0    24    30    5.14   4.52
                 833             Left precuneus                     −10   −48    42    5.96   5.08
                                 Left posterior cingulate cortex     −4   −38    30    4.16   3.80
                                 Right precuneus                      2   −48    38    4.97   4.33
                                 Right posterior cingulate cortex     4   −38    28    3.92   3.61
                 236             Left middle cingulate cortex        −8   −12    40    4.80   4.28
                                 Right middle cingulate cortex        4    −8    36    3.95   3.63

Note. All Laugh = spontaneous and conversational laughter; NV = noise-vocoded human vocalization.


Neural responses to implicit processing of words plus spontaneous and conversational laughter

Next, we investigated the neural responses associated with implicit processing of the different types of laughter by contrasting the activations elicited during each "word plus laughter" condition with those during the NV human vocalization condition (Spont Laugh > NV; Conver Laugh > NV) across all participants. Whole-brain analyses revealed that the spontaneous laughter versus NV contrast showed activation in bilateral STG and bilateral HG (see Table 5; Fig. 2B), while the conversational laughter versus NV contrast was associated with widespread responses in bilateral superior medial frontal gyri (mPFC), left ACC, left medial orbitofrontal cortex, and right superior frontal gyrus (see Table 5; Fig. 2C). No significant group differences (NA > Autism; Autism > NA) were found within these clusters for any of the contrasts in the whole-brain-corrected analysis.

Table 5

Brain regions showing significant activity in response to implicit processing of spontaneous laughter and conversational laughter.

Contrast            No. of voxels   Region                              x     y     z    T       Z
Spont Laugh > NV    1,030           Left superior temporal gyrus       −58   −14     2    7.50   5.99
                                                                       −58   −22     4    7.15   5.79
                                    Left Heschl's gyrus                −42   −18     4    5.33   4.66
                    1,397           Right superior temporal gyrus       64   −10     4    8.15   6.33
                                                                        60   −10     2    8.12   6.32
                                    Right Heschl's gyrus                34   −26    16    5.21   4.57
Conver Laugh > NV   1,812           Left superior temporal gyrus       −56   −12     4   11.25   7.68
                                    Left Heschl's gyrus                −34   −26    16    8.84   6.67
                    165             Left anterior cingulate cortex     −12    42     4    4.01   3.68
                                    Left superior medial gyrus          −4    52     6    3.77   3.49
                                    Left medial orbitofrontal cortex    −4    46   −14    3.65   3.39
                                                                        −8    46   −10    3.49   3.26
                    2,064           Right superior temporal gyrus       64   −10     4   10.97   7.57
                                    Right Heschl's gyrus                36   −26    16    7.47   5.97
                                    Right superior temporal pole        56    10   −14    4.36   3.95
                                    Right middle temporal pole          54    10   −24    3.92   3.61
                    234             Right superior frontal gyrus        16    26    60    5.06   4.47
                                                                        16    12    50    4.72   4.42
                    290             Right superior frontal gyrus        18    54    38    4.57   4.11
                                                                        16    48    46    4.17   3.81
                                    Right superior medial gyrus          6    54    34    3.43   3.21

Note. Spont Laugh = spontaneous laughter; Conver Laugh = conversational laughter; NV = noise-vocoded human vocalization.


We hypothesized that there would be group differences in the neural correlates of the implicit processing of different types of laughter within the mPFC and sensorimotor network, as suggested by previous findings. To examine this, we employed a region of interest (ROI) approach. The MNI peak coordinates from a prior fMRI study on the perception of conversational versus spontaneous laughter in non-autistic adults (McGettigan et al. 2015) were used to define the ROIs: left superior medial frontal gyrus (mPFC; x = −3, y = 54, z = 9); left temporal thalamus (x = −3, y = −6, z = 9); and right ACC (x = 0, y = 30, z = 30). For each participant, the mean signal across all voxels in each of the three ROIs was extracted for the spontaneous laughter versus NV contrast and the conversational laughter versus NV contrast. While significant activations in the mPFC were identified in the whole-brain results for implicit processing of conversational laughter across all participants, we adopted an independent approach for hypothesis testing. This ensured that the data used to define the ROIs were unrelated to the data used for hypothesis testing, avoiding circularity in ROI selection (Kriegeskorte et al. 2009). This unbiased approach also maximized our statistical power to detect group differences within our prior hypotheses.

The 3 (ROIs) × 2 (Contrasts) × 2 (Groups) ANOVA showed a significant three-way interaction (ROIs × Contrasts × Groups), F[1.473, 64.797] = 5.898, P = 0.009, partial η² = 0.118, indicating that the interaction between contrasts and ROIs differed between autistic and non-autistic adults. There was also a significant main effect of ROI, F[2, 88] = 5.499, P = 0.006, partial η² = 0.111, indicating that brain activation differed across the three ROIs. Follow-up tests were conducted on each ROI: no significant main effects or interactions were detected in the right ACC; a significant main effect of Group was observed in the left temporal thalamus, F[1, 44] = 4.681, P = 0.036, partial η² = 0.096, indicating greater activation on both perceptual contrasts in non-autistic (M = 0.173, SEM = 0.143) than autistic adults (M = −0.266, SEM = 0.143); and a significant two-way interaction (Contrasts × Groups) was detected in the left superior medial gyrus, F[1, 44] = 7.111, P = 0.011, partial η² = 0.139. Post hoc analyses revealed significantly greater mentalizing-related BOLD signal change during the implicit processing of conversational laughter compared to spontaneous laughter in the non-autistic group, F[1, 22] = 9.720, P = 0.005, partial η² = 0.306 (spontaneous laughter: M = 0.162, SD = 1.103; conversational laughter: M = 1.048, SD = 1.120). However, there was no significant difference between the two perceptual contrasts in the autistic group, F[1, 22] = 0.661, P = 0.425, partial η² = 0.029 (spontaneous laughter: M = 0.637, SD = 1.709; conversational laughter: M = 0.374, SD = 1.903) (Fig. 3A).
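For readers who wish to mirror this analysis, here is a minimal R sketch of the mixed ANOVA using the afex package (not necessarily the study's own code; the data frame and column names are illustrative). By default afex applies a Greenhouse-Geisser sphericity correction, consistent with the fractional degrees of freedom reported above.

library(afex)

# roi_betas: one row per participant x ROI x contrast, with columns
# participant, Group (between-subjects), ROI and Contrast (within-subjects),
# and the extracted beta value.
fit <- aov_ez(id = "participant", dv = "beta", data = roi_betas,
              between = "Group", within = c("ROI", "Contrast"))
anova(fit, es = "pes")   # F-tests with partial eta-squared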

Fig. 3

Brain activation and regions in the (A) three ROIs and (B) mentalizing ROI based on meta-analyses, compared between the two groups. Note. 3A shows extracted beta values for the ROIs from McGettigan et al. (2015); three brain regions are represented: in orange, the left superior medial frontal gyrus (x = −3, y = 54, z = 9); in green, the left temporal thalamus (x = −3, y = −6, z = 9); and in yellow, the right ACC (x = 0, y = 30, z = 30). 3B shows extracted beta values for the ROI from Van Overwalle and Baetens (2009); one brain region is represented in orange, the mPFC (x = 0, y = 50, z = 20). Spont = spontaneous laughter; Conver = conversational laughter; NV = noise-vocoded human vocalization. Error bars represent the standard error of the mean.

To delve more deeply into whether the mPFC activation identified in the aforementioned analyses signifies engagement of the mentalizing network, we conducted a further ROI analysis. We compared activation for spontaneous and conversational laughter between the two groups using a specific mentalizing ROI (mPFC: x = 0, y = 50, z = 20), identified through meta-analyses spanning a range of mentalizing tasks and independent of our implicit laughter processing task (Van Overwalle 2009; Van Overwalle and Baetens 2009). The 2 (Contrasts) × 2 (Groups) ANOVA showed a significant two-way interaction, F[1, 44] = 4.142, P = 0.048, partial η² = 0.086, indicating that activation on the two perceptual contrasts differed between groups. Post hoc analyses revealed significantly greater mentalizing-related BOLD signal change during the implicit processing of conversational laughter compared to spontaneous laughter in the non-autistic group, t(22) = −2.075, P = 0.050, d = −0.433 (spontaneous laughter: M = −0.209, SD = 1.571; conversational laughter: M = 0.510, SD = 1.026). However, there was no significant difference between the two perceptual contrasts in the autistic group, t(22) = 0.847, P = 0.407, d = 0.176 (spontaneous laughter: M = 0.638, SD = 1.689; conversational laughter: M = 0.325, SD = 1.666) (Fig. 3B).

Together, these findings replicate previous results on the engagement of the mPFC during the perception of conversational laughter in non-autistic adults (McGettigan et al. 2015). Additionally, by comparing neural differences in mentalizing between autistic and non-autistic groups using an ROI defined by meta-analytic evidence, our findings further suggest that the mPFC plays an important role in autistic differences in interpreting specific types of laughter, rather than laughter in general.

Exploratory time-courses of neural responses

We additionally explored group differences in the hemodynamic response of brain areas during implicit processing of the words plus laughter conditions and the NV condition, in an attempt to unpack humor and laughter; we did not have strong predictions about neural differences in the autistic group. Five regions were selected based on a priori interest from the whole-brain results, corresponding to regions associated with hearing sounds and processing speech and emotional vocalizations (bilateral HG and STG, respectively; Mummery et al. 1999; McGettigan et al. 2015), responses to heard laughter (SMA; McGettigan et al. 2015; O'Nions et al. 2017), and responses to humor (bilateral precuneus; Li et al. 2018; Brawer and Amir 2021; Chan et al. 2023). Finite impulse response (FIR) event-related time courses were extracted from these regions, with an FIR length of 30 s and time bins of 3 s, across all trials of interest. The graphs in Fig. 4 illustrate greater sensitivity in BOLD signal changes within the SMA and bilateral precuneus, but not in HG and STG (Supplementary Fig. S1; HG and STG showed similar profiles), in the non-autistic group during the laughter compared to the NV condition. This contrasted with the autistic group, who demonstrated similar BOLD signal changes in response to both conditions.
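As a minimal sketch of how such condition-averaged FIR time courses with standard-error ribbons might be summarized and plotted, assuming a long-format data frame fir with columns participant, group, condition, bin (1 to 10, i.e. 3-s bins over 30 s), and beta (all names illustrative):

library(dplyr)
library(ggplot2)

summ <- fir %>%
  group_by(group, condition, bin) %>%
  summarise(m = mean(beta), sem = sd(beta) / sqrt(n()), .groups = "drop")

ggplot(summ, aes(x = (bin - 1) * 3, y = m, colour = condition)) +
  geom_ribbon(aes(ymin = m - sem, ymax = m + sem, fill = condition),
              alpha = 0.2, colour = NA) +
  geom_line() +
  facet_wrap(~ group) +
  labs(x = "Time since trial onset (s)", y = "BOLD response (a.u.)")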

Fig. 4

Average time series for all laughter versus NV human vocalizations in the (A) SMA, (B) left precuneus, and (C) right precuneus. Note. Laugh = spontaneous and conversational laughter; Noise-vocoded = noise-vocoded human vocalization. Error bars represent the standard error of the mean.

To perform a region of interest analysis and compare the two groups' neural responses within these regions, we adopted a method similar to that used by White et al. (2014). A significant difference between non-autistic (M = 0.982, SD = 1.175) and autistic (M = 2.287, SD = 2.099) adults was observed only in the SMA (MNI peak coordinate: x = 2, y = 8, z = 64; 8 mm sphere) during implicit processing of words plus laughter versus words plus NV human vocalizations, t(34.566) = −2.601, P = 0.014, d = −0.767.
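The fractional degrees of freedom above indicate a Welch-type test. As a minimal sketch, assuming a data frame sma with one laughter-versus-NV contrast value (beta) per participant and a group column (names illustrative):

# Welch's two-sample t-test (unequal variances), which is R's default for
# t.test and yields fractional degrees of freedom as reported above.
t.test(beta ~ group, data = sma)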

Discussion

This is the first study to use fMRI to investigate the neural mechanisms behind the implicit processing of different types of laughter in autistic and non-autistic adults. Despite no behavioral differences between the groups, we found that non-autistic adults showed increased activation in the mPFC when listening to words plus conversational laughter compared to words plus spontaneous laughter, while autistic adults showed no difference in mPFC activity between these two types of laughter. Additionally, autistic adults showed greater activation in the SMA than non-autistic adults when listening to words paired with either type of laughter. These findings suggest a critical role for the mPFC and sensorimotor network in the implicit processing of laughter, respectively, in engaging mentalizing to understand socially ambiguous laughter and in eliciting laughter contagion. Our findings suggest that autistic people exhibit specific similarities and differences compared to non-autistic people in the implicit processing of laughter. In particular, autistic people may experience difficulties in understanding conversational laughter, with its sophisticated meaning and social ambiguity. The challenges autistic people encounter in non-verbal communication thus seem to lie not in processing all social signals, but in processing specific types of social signals during communication.

In the whole-brain comparison of words plus laughter to NV across both autistic and non-autistic groups, we observed neural responses in several regions of the auditory cortex, including bilateral HG and bilateral STG. Additionally, responses were noted in the bilateral PCC, primary visual cortex, precuneus, SMA, and mPFC. The recruitment of the dorsolateral temporal lobe fields is consistent with a greater loading on the processing of vocalizations in the words plus laughter conditions (Sander et al. 2005; Fecteau et al. 2007). Our design cannot expand upon this, but it is also possible that there is greater processing in auditory areas than would be seen for spoken words (Mummery et al. 1999; Wise et al. 1999) or laughter alone (McGettigan et al. 2015): the activation in this contrast is extensive in primary and secondary auditory cortex, despite the high-level baseline containing spoken words and NV sounds. It is important to note that the design of our current study differs from previous laughter research (McGettigan et al. 2015; O'Nions et al. 2017): instead of having participants passively listen to laughter stimuli alone, we used an implicit measure of laughter processing and presented a cover story about people's responses to a comedian saying funny words, which may have enhanced speech-related processing in auditory fields.

The sensorimotor network, which includes the SMA and ACC as detected in the whole-brain results, has been consistently identified during laughter perception (Warren et al. 2006; McGettigan et al. 2015; O'Nions et al. 2017). This can be attributed to its role as part of a simulation mechanism facilitating social understanding (McGettigan et al. 2015), its contribution to emotional contagion (O'Nions et al. 2017), and its involvement in auditory processing and auditory imagery (Lima et al. 2016). Laughter carries greater socio-emotional weight and is more contagious than NV vocalizations; accordingly, increased activity in the sensorimotor network was observed when participants heard words plus laughter compared to NV. In the ROI analyses, autistic adults displayed greater activation in the SMA than non-autistic adults, indicating that laughter might be differently contagious for autistic people. This observation seems to align with existing findings highlighting differences in the contagiousness of laughter among autistic people: autistic children are less likely to join in others' laughter and perceive cartoons with a laugh track as less enjoyable than their peers do (Reddy et al. 2002; Helt et al. 2020).

Similar to the involvement of the sensorimotor network, the mPFC was consistently observed in the whole-brain results for words plus laughter versus NV, and specifically for conversational but not spontaneous laughter. This finding aligns with previous evidence showing greater mPFC activation while listening to laughter with complex social meaning (Szameitat et al. 2010; Sumiya et al. 2017, 2020). In the subsequent ROI results, the non-autistic group showed significantly greater activation in the mPFC during implicit processing of conversational laughter compared to spontaneous laughter, whereas no such differential activation was observed in autistic adults. Our finding replicates and extends previous studies on the engagement of the mPFC in processing conversational laughter by including an autistic group (McGettigan et al. 2015; Lavan et al. 2017). Moreover, we went a step further by replicating this difference in mPFC activation between the two groups using a meta-analytic mentalizing ROI (Van Overwalle 2009; Van Overwalle and Baetens 2009). This ROI was derived from a wide range of mentalizing tasks, all designed to tap into a unified cognitive mechanism and independent of our implicit laughter processing task. Our results indicate that the observed mPFC activation reflects the socio-emotional ambiguity of hearing conversational laughter together with funny words. This suggests that non-autistic adults may attempt to resolve the reason for the laughter by engaging in mentalizing, interpreting the meaning and intention behind it.

Further, the lack of mPFC activation in autistic adults during implicit processing of conversational laughter is consistent with Sumiya et al.'s (2020) study, which showed lower mPFC activity in autistic adults compared to non-autistic adults in response to jokes followed by laughter. Although they did not define their laughter as spontaneous or conversational, it seems likely that they used the latter. Our results are also consistent with previous findings of atypical function within this mentalizing-related region in autism (Frith and Frith 2006; Gilbert et al. 2009; White et al. 2014). This group difference might reflect "capacity limits in mentalizing" in autistic adults, as proposed by White et al. (2014), given that we used an implicit measure (adding laughter to modulate the perceived funniness of words) to investigate laughter processing, preventing the use of compensatory strategies. Behaviourally, autistic adults were able to implicitly differentiate between spontaneous and conversational laughter; they rated funny words paired with spontaneous laughter as funnier than the same words paired with conversational laughter (see also Cai et al. 2019). There may be multiple cues available for implicitly differentiating these two types of laughter: one might be related to mentalizing, while another could stem from the inherent acoustic differences between them (Scott et al. 2014; McGettigan et al. 2015; Lavan et al. 2016). It is possible that autistic adults implicitly differentiate laughter based on acoustic properties without engaging in mentalizing. However, our neuroimaging results do not show differences in basic auditory processing. Instead, our data suggest that a reduction in mentalizing may have led autistic adults to process laughter without a full understanding of the differences between these two types, and with difficulty comprehending the meaning of the laughter. Indeed, subtle differences in the perception of the two types of laughter have been observed during an explicit processing task (explicitly rating the affective properties of the two types of laughter): autistic adults rated conversational laughter as more authentic and emotionally arousing, and therefore more similar to spontaneous laughter, than non-autistic adults did (Cai et al. in press). Hence, autistic people are able to implicitly and explicitly distinguish between these two types of laughter, albeit to a lesser extent than non-autistic people, and they may struggle to understand the meaning and intentions behind conversational laughter in everyday contexts. This could lead to misunderstandings in social situations, potentially leaving them socially vulnerable (Trundle et al. 2023).

In addition to laughter perception-related areas such as the mPFC and sensorimotor network, we also report significant activation associated with the processing of words plus laughter in auditory fields linked to the perception of non-verbal emotional vocalizations and speech, and, more notably, in the precuneus, bilateral posterior cingulate cortex, and primary visual fields. The precuneus has been implicated in studies both of mentalizing (Takahashi et al. 2015; Arioli et al. 2021) and of humor (Li et al. 2018; Brawer and Amir 2021; Chan et al. 2023), and its involvement here suggests that it may be an important component of the processes by which laughter modulates how funny words seem. The laughter may provoke further rumination on the meaning of the word, its possible humorous associations, and perhaps the intentions of the person laughing. These reflective processes may relate to the role of the precuneus in metacognition (Ye et al. 2018). The involvement of primary visual cortex in the absence of visual stimulation implies the recruitment of visual imagery to support these processes. The bilateral posterior cingulate cortex has been implicated in autobiographical memory retrieval and emotional salience and is a key node in the default mode network (Leech et al. 2012); its involvement here may indicate that both emotional and personal information is recruited when processing the potentially humorous implications of a word.

The present study has several limitations. Behaviourally, we were unable to collect baseline funniness ratings of the words from either autistic or non-autistic adults, due to the in-lab testing time restrictions imposed by COVID-19. Because we measured implicit processing of auditory stimuli, our effects of interest were highly sensitive to background noise, which precluded online testing. Including a baseline measure would enable more robust conclusions about the similarity of laughter modulation effects between autistic and non-autistic adults. Neurally, although we used laughter to modulate the perceived funniness of funny words, our design does not distinguish between the processing of laughter and of humor. We therefore further explored group differences by plotting the time courses of neural responses, to determine whether any time-related effects existed, given the temporal dependency in our stimuli (laughter followed the words). The graphs indicate that, where differences between the words plus laughter and NV conditions emerge over time, they occur later in the processing of the stimuli in SMA and precuneus, especially for non-autistic participants. This is consistent with a later integration of the laughter with the verbal material; however, future studies with finer temporal resolution, such as magnetoencephalography, will be better placed to unpack this integration and its overall effect on perceived funniness.
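
To clarify what such a time-course inspection involves, the sketch below shows a generic event-locked averaging of a preprocessed ROI signal. It is a minimal sketch under stated assumptions: the repetition time, window length, and baseline convention are placeholders and do not reflect our acquisition or analysis parameters.

    # Minimal sketch (assumed TR, window, and baseline convention): event-locked
    # averaging of a preprocessed ROI time series, one way to inspect when
    # condition differences emerge after stimulus onset.
    import numpy as np

    def peristimulus_average(roi_signal, onsets, tr=2.0, window_s=16.0):
        # roi_signal: 1D array, mean BOLD signal in the ROI per volume
        # onsets: event onset times in seconds
        n_vols = int(window_s // tr)
        segments = []
        for onset in onsets:
            start = int(round(onset / tr))
            if 1 <= start and start + n_vols <= len(roi_signal):
                baseline = roi_signal[start - 1]     # volume preceding the event
                seg = roi_signal[start:start + n_vols]
                segments.append(100 * (seg - baseline) / baseline)
        return np.mean(segments, axis=0)             # one value per TR after onset

    # e.g. SMA responses per condition, plotted against time since onset:
    # sma_laughter = peristimulus_average(sma_signal, laughter_word_onsets)
    # sma_nv = peristimulus_average(sma_signal, nv_word_onsets)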

Conclusion

In sum, this study indicates that autistic people face challenges in processing a specific type of social cue during communication, rather than all social cues. Our study is the first to use fMRI to probe the neural mechanisms underlying implicit processing of different types of laughter (i.e. how laughter makes funny words funnier) and, further, to compare the similarities and differences between autistic and non-autistic adults. Despite no behavioral differences between the groups, we found that non-autistic adults exhibited increased mPFC activation when listening to words plus conversational laughter compared with words plus spontaneous laughter, whereas autistic adults showed no difference in mPFC activity between the two laughter types. In addition, autistic adults showed greater activation in SMA than non-autistic adults when listening to words paired with either type of laughter. Our findings suggest a critical role for the mPFC and the sensorimotor network in the implicit processing of laughter: the former engaging mentalizing to understand the socio-emotional meaning of laughter, the latter supporting laughter contagion. Overall, our results highlight the possibility that autistic people may face challenges in understanding the essence of the laughter they frequently encounter in everyday life, especially conversational laughter, which carries complex meanings and social ambiguity, potentially leading to social vulnerability. Therefore, we advocate for clearer communication with autistic people.

Acknowledgments

We thank all participants who took part in this study and the people who assisted with recruitment during COVID-19. We also thank the BUCNI team for their assistance with the fMRI scans. We would like to offer special thanks to the comedian Howard Read for his professional performance in reading the funny words, to Manying Chiu and Hannah Partington for their assistance with editing word stimuli, as well as to Alexis MacIntyre and Paul MacIntyre for their help with word annotation.

Author contributions

Ceci Qing Cai (Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Visualization, Writing—original draft, Writing—review & editing), Nadine Lavan (Conceptualization, Investigation, Methodology, Supervision, Writing—review & editing), Sinead H.Y. Chen (Resources), Claire Z.X. Wang (Data curation, Investigation, Project administration), Ozan Cem Ozturk (Data curation, Investigation, Project administration), Roni Man Ying Chiu (Data curation, Investigation, Resources), Sam Gilbert (Investigation, Supervision, Writing—review & editing), Sarah J. White (Conceptualization, Funding acquisition, Investigation, Methodology, Supervision, Writing—review & editing), and Sophie Scott (Conceptualization, Investigation, Methodology, Supervision, Writing—review & editing).

Funding

This work was supported by the Royal Society (DH150167, RF\ERE\231054, and RF\ERE\210122 to S.J.W.) and the Academy of Medical Sciences (SBF003\1169 to S.J.W.).

Conflict of interest statement: None declared.

References

Anwyl-Irvine AL, Massonnié J, Flitton A, Kirkham N, Evershed JK. Gorilla in our midst: an online behavioral experiment builder. Behav Res Methods. 2020:52(1):388–407. https://doi.org/10.3758/s13428-019-01237-x.

Arioli M, Cattaneo Z, Ricciardi E, Canessa N. Overlapping and specific neural correlates for empathizing, affective mentalizing, and cognitive mentalizing: a coordinate-based meta-analytic study. Hum Brain Mapp. 2021:42(14):4777–4804. https://doi.org/10.1002/hbm.25570.

Baron-Cohen S, Wheelwright S, Skinner R, Martin J, Clubley E. The autism-spectrum quotient (AQ): evidence from Asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. J Autism Dev Disord. 2001:31(1):5–17. https://doi.org/10.1023/A:1005653411471.

Bates D, Mächler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2014:67(1):1–48.

Boersma P. Praat, a system for doing phonetics by computer. Glot International. 2001:5(9/10):341–345.

Brainard DH. The psychophysics toolbox. Spat Vis. 1997:10(4):433–436. https://doi.org/10.1163/156856897X00357.

Brawer J, Amir O. Mapping the “funny bone”: neuroanatomical correlates of humor creativity in professional comedians. Soc Cogn Affect Neurosci. 2021:16(9):915–925. https://doi.org/10.1093/scan/nsab049.

Brett M, Anton J-L, Valabregue R, Poline J-B. Region of interest analysis using an SPM toolbox. NeuroImage. 2002:13(2):210–217.

Bryant GA, Bainbridge CM. Laughter and culture. Philos Trans R Soc B. 2022:377:20210179.

Cai Q, Chen S, White SJ, Scott SK. Modulation of humor ratings of bad jokes by other people’s laughter. Curr Biol. 2019:29(14):R677–R678. https://doi.org/10.1016/j.cub.2019.05.073.

Cai Q, White SJ, Chen SH, Mueller MAE, Scott SK. Autistic adults perceive and experience laughter differently to non-autistic adults. Sci Rep. in press.

Chan Y, Zeitlen DC, Beaty RE. Amygdala-frontoparietal effective connectivity in creativity and humor processing. Hum Brain Mapp. 2023:44(6):2585–2606. https://doi.org/10.1002/hbm.26232.

Dunbar R, Baron R, Frangou A, Pearce E, van Leeuwen EJC, Stow J, Partridge G, MacDonald I, Barra V, van Vugt M. Social laughter is correlated with an elevated pain threshold. Proc R Soc B Biol Sci. 2012:279(1731):1161–1167. https://doi.org/10.1098/rspb.2011.1373.

Engelthaler T, Hills TT. Humor norms for 4,997 English words. Behav Res Methods. 2018:50(3):1116–1124. https://doi.org/10.3758/s13428-017-0930-6.

Fecteau S, Belin P, Joanette Y, Armony JL. Amygdala responses to nonlinguistic emotional vocalizations. NeuroImage. 2007:36(2):480–487. https://doi.org/10.1016/j.neuroimage.2007.02.043.

Fox J, Weisberg S, Adler D, Bates D, Baud-Bovy G, Ellison S, Graves S, Heiberger H. Package ‘car’. Vienna: R Foundation for Statistical Computing; 2012.

Frith CD, Frith U. The neural basis of mentalizing. Neuron. 2006:50(4):531–534. https://doi.org/10.1016/j.neuron.2006.05.001.

Gerbella M, Pinardi C, Cesare GD, Rizzolatti G, Caruana F. Two neural networks for laughter: a tractography study. Cereb Cortex. 2020:31(2):899–916. https://doi.org/10.1093/cercor/bhaa264.

Gervais M, Wilson DS. The evolution and functions of laughter and humor: a synthetic approach. Q Rev Biol. 2005:80(4):395–430. https://doi.org/10.1086/498281.

Gilbert SJ, Meuwese JDI, Towgood KJ, Frith CD, Burgess PW. Abnormal functional specialization within medial prefrontal cortex in high-functioning autism: a multi-voxel similarity analysis. Brain. 2009:132(4):869–878. https://doi.org/10.1093/brain/awn365.

Gray AW, Parkinson B, Dunbar RI. Laughter’s influence on the intimacy of self-disclosure. Hum Nat. 2015:26(1):28–43. https://doi.org/10.1007/s12110-015-9225-8.

Helt MS, Fein DA. Facial feedback and social input: effects on laughter and enjoyment in children with autism spectrum disorders. J Autism Dev Disord. 2016:46(1):83–94. https://doi.org/10.1007/s10803-015-2545-z.

Helt MS, Fein DA, Vargas JE. Emotional contagion in children with autism spectrum disorder varies with stimulus familiarity and task instructions. Dev Psychopathol. 2020:32(1):383–393. https://doi.org/10.1017/S0954579419000154.

Hudenko WJ, Stone W, Bachorowski J-A. Laughter differs in children with autism: an acoustic analysis of laughs produced by children with and without the disorder. J Autism Dev Disord. 2009:39(10):1392–1400. https://doi.org/10.1007/s10803-009-0752-1.

Hus V, Lord C. The autism diagnostic observation schedule, module 4: revised algorithm and standardized severity scores. J Autism Dev Disord. 2014:44(8):1996–2012. https://doi.org/10.1007/s10803-014-2080-3.

Kriegeskorte N, Simmons WK, Bellgowan PSF, Baker CI. Circular analysis in systems neuroscience: the dangers of double dipping. Nat Neurosci. 2009:12(5):535–540. https://doi.org/10.1038/nn.2303.

Kuznetsova A, Brockhoff PB, Christensen RHB. lmerTest package: tests in linear mixed effects models. J Stat Softw. 2017:82(13):1–26. https://doi.org/10.18637/jss.v082.i13.

Lavan N, Scott SK, McGettigan C. Laugh like you mean it: authenticity modulates acoustic, physiological and perceptual properties of laughter. J Nonverbal Behav. 2016:40(2):133–149. https://doi.org/10.1007/s10919-015-0222-8.

Lavan N, Rankin G, Lorking N, Scott S, McGettigan C. Neural correlates of the affective properties of spontaneous and volitional laughter types. Neuropsychologia. 2017:95:30–39. https://doi.org/10.1016/j.neuropsychologia.2016.12.012.

Leech R, Braga R, Sharp DJ. Echoes of the brain within the posterior cingulate cortex. J Neurosci. 2012:32(1):215–222. https://doi.org/10.1523/JNEUROSCI.3689-11.2012.

Lenth R, Singmann H, Love J, Buerkner P, Herve M. emmeans: estimated marginal means, aka least-squares means. R package version 1.4.8. 2018. Available at: https://CRAN.R-project.org/package=emmeans.

Leung FYN, Sin J, Dawson C, Ong JH, Zhao C, Veić A, Liu F. Emotion recognition across visual and auditory modalities in autism spectrum disorder: a systematic review and meta-analysis. Dev Rev. 2022:63:101000. https://doi.org/10.1016/j.dr.2021.101000.

Li B, Li X, Pan Y, Qiu J, Zhang D. The relationship between self-enhancing humor and precuneus volume in young healthy individuals with high and low cognitive empathy. Sci Rep. 2018:8(1):3467. https://doi.org/10.1038/s41598-018-21890-0.

Lima CF, Krishnan S, Scott SK. Roles of supplementary motor areas in auditory processing and auditory imagery. Trends Neurosci. 2016:39(8):527–542. https://doi.org/10.1016/j.tins.2016.06.003.

Manninen S, Tuominen L, Dunbar RI, Karjalainen T, Hirvonen J, Arponen E, Hari R, Jääskeläinen IP, Sams M, Nummenmaa L. Social laughter triggers endogenous opioid release in humans. J Neurosci. 2017:37(25):6125–6131. https://doi.org/10.1523/JNEUROSCI.0688-16.2017.

MathWorks Inc. MATLAB. Natick, MA: The MathWorks Inc; 2018.

McGettigan C, Walsh E, Jessop R, Agnew ZK, Sauter DA, Warren JE, Scott SK. Individual differences in laughter perception reveal roles for mentalizing and sensorimotor systems in the evaluation of emotional authenticity. Cereb Cortex. 2015:25(1):246–257. https://doi.org/10.1093/cercor/bht227.

Mummery CJ, Patterson K, Wise RJ, Vandenberghe R, Price CJ, Hodges JR. Disrupted temporal lobe connections in semantic dementia. Brain. 1999:122(1):61–73. https://doi.org/10.1093/brain/122.1.61.

Mundy P, Sigman M, Ungerer J, Sherman T. Defining the social deficits of autism: the contribution of non-verbal communication measures. J Child Psychol Psychiatry. 1986:27(5):657–669. https://doi.org/10.1111/j.1469-7610.1986.tb00190.x.

Neves L, Cordeiro C, Scott SK, Castro SL, Lima CF. High emotional contagion and empathy are associated with enhanced detection of emotional authenticity in laughter. Q J Exp Psychol. 2017:71(11):2355–2363. https://doi.org/10.1177/1747021817741800.

O’Nions E, Lima CF, Scott SK, Roberts R, McCrory EJ, Viding E. Reduced laughter contagion in boys at risk for psychopathy. Curr Biol. 2017:27(19):3049–3055.e4. https://doi.org/10.1016/j.cub.2017.08.062.

Penny WD, Friston KJ, Ashburner JT, Kiebel SJ, Nichols TE. Statistical parametric mapping: the analysis of functional brain images. Cambridge, MA: Elsevier; 2011.

Poline J-B, Worsley KJ, Evans AC, Friston KJ. Combining spatial extent and peak intensity to test for activations in functional imaging. NeuroImage. 1997:5(2):83–96. https://doi.org/10.1006/nimg.1996.0248.

Provine RR. Contagious laughter: laughter is a sufficient stimulus for laughs and smiles. Bull Psychon Soc. 1992:30(1):1–4. https://doi.org/10.3758/BF03330380.

Provine RR. Laughter punctuates speech: linguistic, social and gender contexts of laughter. Ethology. 1993:95(4):291–298. https://doi.org/10.1111/j.1439-0310.1993.tb00478.x.

Provine RR. Laughing, tickling, and the evolution of speech and self. Curr Dir Psychol Sci. 2004:13(6):215–218. https://doi.org/10.1111/j.0963-7214.2004.00311.x.

Provine RR, Fischer KR. Laughing, smiling, and talking: relation to sleeping and social context in humans. Ethology. 1989:83(4):295–305. https://doi.org/10.1111/j.1439-0310.1989.tb00536.x.

Reddy V, Williams E, Vaughan A. Sharing humour and laughter in autism and Down’s syndrome. Br J Psychol. 2002:93(2):219–242. https://doi.org/10.1348/000712602162553.

RStudio. RStudio: integrated development for R. Boston, MA: RStudio, PBC; 2020.

Sakamoto I, Ishiguro M, Kitagawa G. Akaike information criterion statistics. The Netherlands: D Reidel Publishing Company; 1986.

Sander D, Grandjean D, Pourtois G, Schwartz S, Seghier ML, Scherer KR, Vuilleumier P. Emotion and attention interactions in social cognition: brain regions involved in processing anger prosody. NeuroImage. 2005:28(4):848–858. https://doi.org/10.1016/j.neuroimage.2005.06.023.

Sauter DA, Eisner F, Ekman P, Scott SK. Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proc Natl Acad Sci. 2010:107(6):2408–2412. https://doi.org/10.1073/pnas.0908239106.

Scott SK, Lavan N, Chen S, McGettigan C. The social life of laughter. Trends Cogn Sci. 2014:18(12):618–620. https://doi.org/10.1016/j.tics.2014.09.002.

Scott SK, Cai CQ, Billing A. Robert Provine: the critical human importance of laughter, connections and contagion. Philos Trans R Soc B. 2022:377(1863):20210178. https://doi.org/10.1098/rstb.2021.0178.

Senju A, Southgate V, White S, Frith U. Mindblind eyes: an absence of spontaneous theory of mind in Asperger syndrome. Science. 2009:325(5942):883–885. https://doi.org/10.1126/science.1176170.

Sumiya M, Koike T, Okazaki S, Kitada R, Sadato N. Brain networks of social action-outcome contingency: the role of the ventral striatum in integrating signals from the sensory cortex and medial prefrontal cortex. Neurosci Res. 2017:123:43–54. https://doi.org/10.1016/j.neures.2017.04.015.

Sumiya M, Okamoto Y, Koike T, Tanigawa T, Okazawa H, Kosaka H, Sadato N. Attenuated activation of the anterior rostral medial prefrontal cortex on self-relevant social reward processing in individuals with autism spectrum disorder. Neuroimage Clin. 2020:26:102249. https://doi.org/10.1016/j.nicl.2020.102249.

Szameitat DP, Kreifelts B, Alter K, Szameitat AJ, Sterr A, Grodd W, Wildgruber D. It is not always tickling: distinct cerebral responses during perception of different laughter types. NeuroImage. 2010:53(4):1264–1271. https://doi.org/10.1016/j.neuroimage.2010.06.028.

Takahashi HK, Kitada R, Sasaki AT, Kawamichi H, Okazaki S, Kochiyama T, Sadato N. Brain networks of affective mentalizing revealed by the tear effect: the integrative role of the medial prefrontal cortex and precuneus. Neurosci Res. 2015:101:32–43. https://doi.org/10.1016/j.neures.2015.07.005.

Trevisan DA, Hoskyn M, Birmingham E. Facial expression production in autism: a meta-analysis. Autism Res. 2018:11(12):1586–1601. https://doi.org/10.1002/aur.2037.

Trundle G, Jones KA, Ropar D, Egan V. Prevalence of victimisation in autistic individuals: a systematic review and meta-analysis. Trauma Violence Abuse. 2023:24(4):2282–2296. https://doi.org/10.1177/15248380221093689.

Uljarevic M, Hamilton A. Recognition of emotions in autism: a formal meta-analysis. J Autism Dev Disord. 2013:43(7):1517–1526. https://doi.org/10.1007/s10803-012-1695-5.

Van Overwalle F. Social cognition and the brain: a meta-analysis. Hum Brain Mapp. 2009:30(3):829–858. https://doi.org/10.1002/hbm.20547.

Van Overwalle F, Baetens K. Understanding others’ actions and goals by mirror and mentalizing systems: a meta-analysis. NeuroImage. 2009:48(3):564–584. https://doi.org/10.1016/j.neuroimage.2009.06.009.

Vettin J, Todt D. Laughter in conversation: features of occurrence and acoustic structure. J Nonverbal Behav. 2004:28(2):93–115. https://doi.org/10.1023/B:JONB.0000023654.73558.72.

Warren JE, Sauter DA, Eisner F, Wiland J, Dresner MA, Wise RJS, Rosen S, Scott SK. Positive emotions preferentially engage an auditory-motor ‘mirror’ system. J Neurosci. 2006:26(50):13067–13075. https://doi.org/10.1523/JNEUROSCI.3907-06.2006.

Wechsler D. Wechsler Adult Intelligence Scale–Fourth Edition (WAIS-IV). Washington, DC: APA PsycTests; 2008.

White SJ, Frith U, Rellecke J, Al-Noor Z, Gilbert SJ. Autistic adolescents show atypical activation of the brain’s mentalizing system even without a prior history of mentalizing problems. Neuropsychologia. 2014:56:17–25. https://doi.org/10.1016/j.neuropsychologia.2013.12.013.

Wild B, Rodden FA, Grodd W, Ruch W. Neural correlates of laughter and humour. Brain. 2003:126(10):2121–2138. https://doi.org/10.1093/brain/awg226.

Wise R, Greene J, Büchel C, Scott S. Brain regions involved in articulation. Lancet. 1999:353(9158):1057–1061. https://doi.org/10.1016/S0140-6736(98)07491-1.

Wu C-L, An C-P, Tseng L-P, Chen H-C, Chan Y-C, Cho S-L, Tsai M-L. Fear of being laughed at with relation to parent attachment in individuals with autism. Res Autism Spectr Disord. 2015:10:116–123. https://doi.org/10.1016/j.rasd.2014.11.004.

Ye Q, Zou F, Lau H, Hu Y, Kwok SC. Causal evidence for mnemonic metacognition in human precuneus. J Neurosci. 2018:38(28):6379–6387. https://doi.org/10.1523/JNEUROSCI.0660-18.2018.

Author notes

Sarah J. White and Sophie K. Scott are senior authors.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
