
Hate Seeing People Get Shots? Here's Why You Wince At Others' Pain

StudyFinds Analysis | 11/18/2025
DOI: 10.1162/imag.a.1017

Woman hiding her face

It takes about one-third of a second from seeing someone get hurt to full neural response. (Credit: bodnar.photo on Shutterstock)

We may not feel their pain, but our brains process the agony of others in milliseconds.

In A Nutshell

What they did: Researchers at Western Sydney University used EEG to track brain activity in 80 people watching videos of hands being touched in various ways, from gentle stroking to threatening scenarios.

What they found: The brain processes pain information from observed touch in just 135 milliseconds. Pleasant versus unpleasant judgments emerge even faster (130ms), while threat assessment takes slightly longer (230ms). Surprisingly, this processing happens primarily in visual brain regions, not touch-processing areas.

Why it matters: The findings challenge popular theories about mirror neurons and suggest our visual system is built to rapidly extract emotional meaning from what we observe, faster than conscious thought.

You see someone slam their finger in a door and your stomach clenches. A character on screen takes a punch and you instinctively wince. You watch a friend receive a shot and feel your own arm tense. These reactions are not deliberate decisions; they come from fast, automatic brain activity that underlies gut-level empathy for another person’s pain.

Scientists have now mapped the precise timing of this phenomenon, revealing exactly when your brain registers the pain of watching someone else get hurt. Neural signals show clear pain-related activity beginning around 240 milliseconds after you see someone get hurt, peaking at 330 milliseconds. Early traces of pain information appear even sooner, around 135 milliseconds, though this initial signal is brief and less distinct. That’s about one-third of a second from sight to full neural response.

The study from Western Sydney University tracked brain activity in 80 people as they viewed brief video clips of hands being touched in various ways, from gentle stroking with a soft brush to threatening scenarios like a knife approaching skin. Using electroencephalography (EEG) to capture brain activity millisecond by millisecond, researchers mapped when different types of touch information became detectable in neural signals.

“Understanding how the brain processes visual information about touch offers insight not only into visual and multisensory perception, but also into the core mechanisms of social cognition,” the researchers wrote in the journal Imaging Neuroscience. Lead author Sophie Smit and her colleagues sought to determine when the brain recognizes everything from basic body position to the emotional and sensory qualities of observed touch.

The team adapted 90 videos from a validated database of touch interactions, standardizing each clip to 600 milliseconds and presenting them from different viewing angles. Each participant viewed 2,880 video trials while wearing an EEG cap with 64 electrodes recording electrical activity across their scalp. Between video clips, participants counted how many times they saw a white object being touched rather than another hand, keeping them engaged throughout the roughly hour-long experiment.
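To make that pipeline concrete, the sketch below shows the generic epoching step: cutting a continuous multichannel recording into one 600-millisecond segment per video onset. It is purely illustrative; the sampling rate, onset times, and array sizes are our assumptions, not the study’s actual parameters.

```python
# Generic epoching sketch (illustrative assumptions, not the study's actual parameters):
# slice a continuous 64-channel EEG recording into one 600 ms segment per video onset.
import numpy as np

fs = 250                              # hypothetical sampling rate in Hz
epoch_len = int(0.6 * fs)             # 600 ms clips -> 150 samples per epoch
rng = np.random.default_rng(0)

continuous = rng.standard_normal((64, 60 * fs))   # placeholder: 60 s of 64-channel EEG
onsets = np.arange(0, continuous.shape[1] - epoch_len, 2 * epoch_len)  # fake video onsets

# Stack into a (n_trials, n_channels, n_timepoints) array, one slice per presentation.
epochs = np.stack([continuous[:, s:s + epoch_len] for s in onsets])
print(epochs.shape)   # e.g. (50, 64, 150); the real experiment had 2,880 such trials
```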

Your Brain On Pain

Using advanced decoding techniques that treat brain activity patterns like a code to be cracked, the researchers identified when different types of information became distinguishable in the EEG signals. The brain recognized basic visual cues, such as which hand was being touched and whether it was viewed from a self or other perspective, within approximately 60 milliseconds. Sensory details like what object was involved (a knife, brush, or another hand) and the material properties (metal, wood, skin) became clear within 110 to 120 milliseconds.
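For a concrete sense of what this kind of decoding involves, here is a minimal time-resolved decoding sketch in Python. It is illustrative only: the data are random placeholders, and the linear classifier and cross-validation settings are generic choices rather than the authors’ exact pipeline.

```python
# Time-resolved decoding sketch (illustrative; random placeholder data, generic classifier).
# At each time point, a linear classifier tries to tell two conditions apart
# (e.g. painful vs. non-painful touch) from the 64-channel scalp pattern.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 150   # small placeholder; the real study had 2,880 trials
epochs = rng.standard_normal((n_trials, n_channels, n_times))   # fake epoched EEG
labels = rng.integers(0, 2, n_trials)                           # fake pain / no-pain labels

accuracy = np.zeros(n_times)
for t in range(n_times):
    pattern = epochs[:, :, t]                  # scalp pattern at one time point
    clf = LinearDiscriminantAnalysis()
    accuracy[t] = cross_val_score(clf, pattern, labels, cv=5).mean()

# Time points where accuracy rises reliably above chance (0.5) mark when that
# feature becomes readable from the EEG signal.
```

In this picture, the onsets and peaks reported in the study correspond to when above-chance decoding first emerges and when it is strongest.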

Emotional information followed a staggered timeline. Valence (the pleasant-versus-unpleasant quality of the touch) emerged clearly by 130 milliseconds and peaked around 300 milliseconds. Pain information appeared briefly around 135 milliseconds but showed the most sustained decoding from 240 milliseconds, peaking at 330 milliseconds. Threat-related information became most clearly detectable around 230 milliseconds, peaking at 390 milliseconds. Information about arousal emerged latest, from 260 milliseconds, peaking at 410 milliseconds.

The different timing for these emotional dimensions suggests a processing hierarchy. Simpler judgments about whether something looks pleasant or unpleasant happen almost instantly. Evaluations about potential danger or harm require slightly longer neural processing, possibly involving deeper consideration of the context and consequences.

It may not be our pain, but the human brain can’t help but react to others’ injuries. (Credit: yangtak on Shutterstock)

Spatial patterns of brain activity throughout these time windows pointed strongly to visual brain regions rather than the somatosensory cortex, which processes direct touch on our own bodies. This finding pushes back on the common story about mirror neurons and vicarious touch experiences. While many previous studies have focused on how observing touch activates the brain’s touch-processing regions, as if we were feeling the sensation ourselves, this research suggests that the visual system does most of the heavy lifting on its own.

Brain Waves Show Emotional Processing in Slow Frequencies

Beyond tracking when information appeared over time, the researchers also examined which brain wave frequencies carried touch-related information. Different brain rhythms are associated with different types of processing. Slower oscillations in the delta and theta ranges (roughly 1-8 cycles per second) have been linked to emotion processing. Mid-range alpha waves (8-13 Hz) connect to attention and internal mental imagery. Faster beta and gamma rhythms relate to active sensory processing and communication between brain regions.
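As a rough illustration of how a frequency-resolved version of the analysis can be set up, the sketch below band-pass filters epoched data into canonical bands before decoding. The band edges, filter choice, and sampling rate are generic assumptions, not the study’s exact settings.

```python
# Band-pass filtering sketch (generic band edges and filter settings, not the study's exact ones).
# Each band-limited copy of the data can then be run through the same decoding loop
# to ask which rhythms carry information about, say, valence, threat, or pain.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                      # hypothetical sampling rate in Hz
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpass(epochs, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter applied along the time axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

epochs = np.random.default_rng(0).standard_normal((200, 64, 150))   # placeholder epoched EEG
band_limited = {name: bandpass(epochs, lo, hi, fs) for name, (lo, hi) in bands.items()}
```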

The team found that basic body cues (like hand orientation and viewing perspective) were encoded across a broad spectrum but especially in theta, alpha, and low beta bands (approximately 6-20 Hz). Sensory features such as object type and material showed up mainly in delta, theta, and alpha frequencies (1-13 Hz). Emotional and affective dimensions (including valence, arousal, threat, and pain) were primarily reflected in the slower delta, theta, and alpha bands.

These frequency patterns align with previous research linking low-frequency brain oscillations to emotion-related visual processing and vicarious experiences. The dominance of alpha-band activity particularly fits with studies showing that alpha rhythms support both attention to tactile events and internal visual representations during mental imagery.

Snap Judgments Happen Automatically

The speed of these processes points to a counterintuitive conclusion: making sense of observed touch, including its emotional and threatening qualities, doesn’t require slow, deliberate thought. Instead, these assessments happen automatically through rapid, bottom-up visual processing within the first few hundred milliseconds of seeing a touch interaction.

This finding extends recent theories proposing that understanding social interactions (including their emotional tone) operates fundamentally as a visual process. Rather than seeing something neutral and then adding interpretation through higher-level thinking, our visual system appears built to extract social and emotional meaning directly from what we observe.

Previous research has shown rapid processing for other social cues. Studies of people watching social touch interactions (like two people hugging) found that dimensions like valence and arousal were processed within 180 milliseconds. Research on observing pain in others has identified brain wave modulations differentiating painful from neutral stimuli at roughly 140 milliseconds and again around 380 milliseconds. The current study lines up with these earlier results, showing that purely visual observation of detailed touch interactions still triggers fast emotional processing.

The researchers note that some aspects of the timing patterns might reflect not just brain processing speed but also when diagnostic visual information becomes available across the video frames. Features like hand orientation are visible immediately, while assessments of threat or pain might become clearer as motion unfolds. Even so, the fact that threat and pain information required about 100 milliseconds longer to decode than valence suggests genuine differences in how the brain evaluates these dimensions.

Rethinking How We Experience Empathy

These findings carry weight for how scientists understand empathy and social perception. Much previous research on vicarious touch has centered on the somatosensory cortex activating when we observe others being touched, as if we internally simulate the sensation. That research has supported theories about mirror neurons and the idea that we understand others’ experiences by replicating them in our own sensory systems.

These results suggest that visual processing alone can carry a lot of the information we need to understand someone else’s touch and pain in this kind of task. Somatosensory simulation might still play a role, but it may not be required at these early stages. The visual system might be sufficient for making those inferences before potentially engaging touch-processing regions.

These findings challenge our understanding of human empathy on a neurological level. (Credit: Jair Rangel on Shutterstock)

The study revealed that decoding patterns for sensory details and emotional valence started at posterior (visual) electrode sites on the scalp and progressively extended to more central, frontal, and temporal regions over time. This spread could reflect information flow from visual areas to other brain regions, possibly including somatosensory cortex for material properties and temporal regions for emotional content extraction.

The research included 80 participants, a substantial number for an EEG study. The team carefully controlled for low-level visual differences between videos by regressing out basic image properties like brightness and image variation, as well as features from a deep learning computer vision model. This ensured that the decoded information reflected meaningful touch-related content rather than superficial visual differences between clips.
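The idea of “regressing out” these properties can be illustrated with a short residualization sketch. This is the generic least-squares version of the idea, run on placeholder data; it is not the authors’ exact implementation.

```python
# Residualization sketch (placeholder data; the generic idea behind "regressing out" confounds).
# Fit the low-level image properties to each neural feature by least squares and keep only
# the residuals, i.e. the part of the signal those properties cannot explain.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
neural = rng.standard_normal((n_trials, 64 * 150))   # flattened trial patterns (placeholder)
nuisance = rng.standard_normal((n_trials, 10))       # e.g. brightness, image variation, CNN features

design = np.column_stack([np.ones(n_trials), nuisance])     # add an intercept column
beta, *_ = np.linalg.lstsq(design, neural, rcond=None)      # fit confounds to the neural data
residual = neural - design @ beta                           # keep the unexplained part

# 'residual' would then replace the raw patterns in the decoding analysis, so that any
# remaining decodable information cannot be attributed to these basic visual differences.
```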

Individual differences might shape how people process observed touch. For instance, some individuals report physically feeling sensations when watching others being touched, a phenomenon called mirror-touch synaesthesia. These heightened responders might show different neural timing patterns or reduced differentiation between self and other perspectives due to less distinct self-other boundaries.

Understanding the visual processing of touch could inform applications from neuroprosthetics (where visual feedback might enhance the perception of touch through artificial limbs) to social robotics and human-computer interaction design. If the visual system is this adept at extracting touch meaning, incorporating visual elements into touch-related technologies might amplify their effectiveness.

The next time you instinctively react to seeing someone in pain, you can appreciate the sophisticated neural machinery that made it possible. In about a third of a second, your brain decoded the visual scene, extracted sensory details, evaluated emotional meaning, assessed potential threat, and estimated how painful the situation likely was. All of this happened automatically through visual pathways, faster than you could consciously decide how to react. That visceral response reflects how evolution shaped our brains to rapidly understand the tactile experiences of those around us through sight alone.


Paper Notes

Study Limitations

The study acknowledges several constraints on its findings. The relatively modest decoding accuracies for some features, while statistically robust, indicate that single-trial neural signals contained limited information for certain dimensions. This is partly due to the minimal preprocessing approach and tightly controlled stimuli, which reduced potential confounds but also lowered signal-to-noise ratios.

The temporal dynamics of information availability may partly reflect when diagnostic visual information becomes available across video frames rather than purely neural processing speed. Some features are visible from the first frame while others emerge through motion, potentially contributing to timing differences between dimensions.

The study used videos showing one person’s two hands interacting, removing broader social contexts present when two separate individuals touch. This isolated sensory and emotional features but retained an inherent social element. Research using purely mechanical touch could further separate sensory processing from social influences.

Participants viewed standardized 600-millisecond videos that may not capture the full temporal dynamics of real-world touch interactions. The rapid serial presentation paradigm, while enabling many trials, differs from natural viewing conditions.

The EEG spatial resolution limits precise localization of neural sources. While topographic patterns suggested posterior visual involvement progressing to central and frontal regions, the exact brain structures cannot be definitively identified without complementary neuroimaging techniques like fMRI.

The study used validated ratings from an independent sample to characterize video dimensions, which prevented task-related biases but didn’t capture individual participants’ moment-to-moment subjective appraisals of specific stimuli.

Funding and Disclosures

This research was supported by Australian Research Council grants DP220103047 and DE230100380. The authors declared no competing interests. All participants provided informed written consent, and the study received approval from the Western Sydney University ethics committee (project number 15644).

Publication Details

Title: Rapid visual engagement in neural processing of detailed touch interactions

Authors: Sophie Smit, Almudena Ramírez-Haro, Genevieve L. Quek, Manuel Varlet, Denise Moerel, Tijl Grootswagers

Affiliations: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University; School of Psychology, Western Sydney University; School of Computer, Data and Mathematical Sciences, Western Sydney University

Journal: Imaging Neuroscience, Volume 3, 2025 | DOI: https://doi.org/10.1162/IMAG.a.1017 | Published: Nov. 17, 2025

Data Availability: Raw EEG and behavioral data are provided in BIDS format via OpenNeuro (https://openneuro.org/datasets/ds005662). Stimulus presentation and analysis scripts are available on the Open Science Framework (https://osf.io/ntfae/). The touch-video database is available at https://osf.io/jvkqa/.

Sample: 80 participants (54 female, 24 male, 2 non-binary; mean age 30.1 years, range 18-76 years; 68 right-handed, 9 left-handed, 3 ambidextrous)

Methods: Electroencephalography (EEG) with 64 channels, rapid serial visual presentation design, multivariate pattern decoding analysis, frequency-domain analysis

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as (or even better) than field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Team

Steve Fink

Editor-in-Chief

Sophia Naughton

Associate Editor