Apple PARS: This is how Apple wants to read your brain activity from your ear.

Last update: 04/12/2025
Author: Isaac
  • Apple PARS is a self-supervised method that learns the temporal structure of EEG signals without annotated data.
  • The approach combines ear-EEG with patented earphone designs that embed electrodes to measure brain activity from the ear.
  • Models trained with PARS match or outperform previous methods on tasks such as sleep staging, epilepsy detection, and abnormal-EEG classification.
  • This technology could lead to future AirPods capable of monitoring brain health and well-being on a daily basis.


The idea that headphones could play your music and, at the same time, "listen" to your brain sounds like science fiction, but Apple is already paving the way with a powerful combination: new in-ear sensors and advanced artificial intelligence models. Behind it all is a method called PARS (Pairwise Relative Shift), a self-supervised learning approach that lets an algorithm understand the brain's electrical activity without relying on specialists manually annotating data.

Rather than focusing on a specific gadget, Apple's research focuses on how an AI model can learn the temporal structure of EEG (electroencephalography) signals and then apply that knowledge to tasks such as classifying sleep stages or detecting neurological abnormalities. And although the study doesn't mention AirPods directly, it adds to patents and prototypes pointing to a future in which ordinary headphones could become a kind of "mini-laboratory" for brain monitoring from the ear.

What is Apple PARS (Pairwise Relative Shift) and why is it so relevant?


The PARS method originated in research presented by a team from Apple and academic collaborators in a paper accepted at the NeurIPS 2025 Foundation Models for the Brain and Body workshop. The study, titled "Learning the relative composition of EEG signals using pairwise relative shift pretraining," proposes a different way to train models on electroencephalography signals without using human labels.

In practice, PARS is a self-supervised learning technique applied to EEG. Instead of asking neurologists to manually indicate which signal segment corresponds to each sleep phase or to the onset of an epileptic seizure, the model is trained on unannotated data and forced to solve an artificial but very useful problem: predicting the temporal distance that separates two signal fragments.

The basic idea is that if the model learns to estimate how much time separates two EEG windows, it ends up understanding the global structure and long-range dependencies of brain activity. This in turn allows it to perform better on real clinical tasks, such as detecting sleep patterns, identifying epilepsy, or recognizing motor signals.
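To make the pretext task concrete, here is a minimal sketch (not Apple's code; all names and sizes are invented for illustration): from an unlabeled recording we draw two windows, and the training target is simply the time offset that separates them.

```python
import numpy as np

def sample_shift_pair(eeg, win_len, max_shift, rng):
    """Draw two windows from one recording; the label is their offset in samples."""
    start_a = int(rng.integers(0, len(eeg) - win_len - max_shift))
    shift = int(rng.integers(1, max_shift + 1))   # self-generated target
    window_a = eeg[start_a:start_a + win_len]
    window_b = eeg[start_a + shift:start_a + shift + win_len]
    return window_a, window_b, shift

rng = np.random.default_rng(0)
eeg = rng.standard_normal(10_000)                 # fake single-channel recording
wa, wb, shift = sample_shift_pair(eeg, win_len=256, max_shift=1_000, rng=rng)
print(wa.shape, wb.shape, shift)
```

No annotation is needed: the "label" (the shift) is produced by the sampling procedure itself, which is what makes the scheme self-supervised.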

The authors emphasize that, in contrast to classic self-supervised EEG methods, which focus primarily on reconstructing masked parts of the signal (as masked autoencoders, or MAEs, do), PARS focuses on relative temporal composition. That is, it doesn't just fill in local "gaps"; it captures how separate fragments of the signal fit together over time.

In the reported tests, PARS-based models match or surpass previous strategies on several EEG benchmarks, especially when few labels are available (a very common scenario in medicine). This makes PARS a very attractive option for any system that wants to exploit large volumes of brain signals without relying on exhaustive annotations.

How PARS works: from tokenization to relative shift estimation


To apply the PARS approach to EEG, the researchers design a Transformer-based architecture with several key steps. It all starts with preprocessing the signal and converting it into a representation the model can easily handle.

First, the EEG signal is divided into temporal windows, or "tokens." This tokenization allows each piece of the signal to be represented as a unit the transformer can operate on, much as is done with text or images. Positional embeddings are then added to these tokens, although in a particular way, because PARS plays precisely with masking and manipulating these temporal positions.
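The tokenization step itself is simple. As a rough sketch (the token length here is invented; the paper's actual preprocessing has more detail), a one-dimensional EEG trace is cut into fixed-length, non-overlapping windows:

```python
import numpy as np

def tokenize_eeg(signal, token_len):
    """Split a 1-D EEG signal into non-overlapping fixed-length tokens."""
    n_tokens = len(signal) // token_len                # drop trailing remainder
    return signal[:n_tokens * token_len].reshape(n_tokens, token_len)

signal = np.arange(10.0)                               # toy "signal" of 10 samples
tokens = tokenize_eeg(signal, token_len=4)
print(tokens.shape)                                    # 2 tokens of 4 samples each
```

Each row of the resulting array plays the same role for the transformer that a word token plays in a language model.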

One of the method's distinctive components is the use of masked positional embeddings. Instead of giving the model exact position information for every token, certain positional data is hidden or altered. This forces the encoder to infer the temporal structure from the content itself, without relying solely on an index or an explicit timestamp.
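A minimal illustration of that idea, using standard sinusoidal positional embeddings (the masking strategy here, zeroing out a random subset, is a simplification of what the paper describes):

```python
import numpy as np

def sinusoidal_positions(n_tokens, dim):
    """Standard sinusoidal positional embeddings."""
    pos = np.arange(n_tokens)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10_000 ** (2 * i / dim))
    emb = np.zeros((n_tokens, dim))
    emb[:, 0::2] = np.sin(angles)
    emb[:, 1::2] = np.cos(angles)
    return emb

def mask_positions(pos_emb, mask_ratio, rng):
    """Hide the positional embedding of a random subset of tokens."""
    n = len(pos_emb)
    masked = rng.choice(n, size=int(n * mask_ratio), replace=False)
    out = pos_emb.copy()
    out[masked] = 0.0        # encoder must infer these positions from content
    return out, masked

rng = np.random.default_rng(0)
pe = sinusoidal_positions(n_tokens=8, dim=16)
pe_masked, masked_idx = mask_positions(pe, mask_ratio=0.5, rng=rng)
print(pe_masked.shape, len(masked_idx))
```

With half the positions hidden, the network can no longer "read off" where a token sits in time; it has to deduce it from the signal itself, which is exactly the pressure PARS applies.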


The heart of PARS pre-training is the task of pairwise relative shift estimation. The model receives two EEG windows randomly extracted from the same recording and must predict the temporal distance between them. It's not just about guessing whether they are near or far apart, but about learning a continuous or discretized mapping that reflects the relative time interval.

For this, a decoder with cross-attention mechanisms is used. This component cross-references the encoded information from both windows and learns to relate their internal features to deduce the time separating them. Thanks to this process, the transformer ends up modeling long-range dependencies and patterns in the evolution of brain activity that extend far beyond a local window of a few milliseconds.
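To give a flavor of the mechanism, here is a toy single-head cross-attention step with random weights (the real decoder is multi-layer and trained; everything here, including the mean-pooled regression head, is a simplified assumption): tokens from window A query tokens from window B, and a linear head turns the fused representation into a shift prediction.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, wq, wk, wv):
    """Single-head cross-attention: window A's tokens attend over window B's."""
    q, k, v = queries @ wq, keys_values @ wk, keys_values @ wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return scores @ v                      # A-tokens enriched with B's content

rng = np.random.default_rng(0)
dim = 16
enc_a = rng.standard_normal((8, dim))      # encoded tokens of window A
enc_b = rng.standard_normal((8, dim))      # encoded tokens of window B
wq, wk, wv = (rng.standard_normal((dim, dim)) * 0.1 for _ in range(3))
fused = cross_attention(enc_a, enc_b, wq, wk, wv)
shift_head = rng.standard_normal(dim) * 0.1
predicted_shift = float(fused.mean(axis=0) @ shift_head)   # scalar regression
print(fused.shape, predicted_shift)
```

Training would push `predicted_shift` toward the true sampled offset, which is what forces the encoder to embed "when" information into its representations.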

In later phases, the model is adapted to different tasks through multi-channel fine-tuning and task-specific evaluation. This means that, once pre-trained with PARS on a variety of EEG recordings (including multi-electrode setups), it is fine-tuned for specific tasks such as sleep classification or the detection of abnormal EEG or epileptic seizures.

The technical article also details practical aspects such as the datasets used, the exact encoder architecture, the chosen decoder type, the masking and patch-sampling schemes, and the computing resources required. In addition, different architectural variants are compared through ablation studies to determine which design decisions yield the best performance.

Comparison with other methods: MAE, MP3, DropPos and others


The study does not simply describe PARS; it contrasts it with reference methods in self-supervised learning for EEG. Among the approaches compared are masked autoencoders (MAE), MP3, and DropPos, each with a different philosophy for learning from unlabeled data.

MAEs focus on reconstructing masked parts of the signal. During pre-training, portions of the input are hidden and the model attempts to recover them from context. This forces the encoder to learn meaningful representations, but it is heavily oriented toward local patterns: filling in nearby "gaps" rather than understanding long-range relationships.
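For contrast with the PARS objective, the MAE pretext task can be sketched like this (a deliberately stripped-down illustration: hide random tokens and keep the hidden originals as the reconstruction target):

```python
import numpy as np

def mae_mask(tokens, mask_ratio, rng):
    """Hide a random subset of tokens; the hidden originals become the target."""
    n = len(tokens)
    hidden = rng.choice(n, size=int(n * mask_ratio), replace=False)
    visible = tokens.copy()
    visible[hidden] = 0.0                  # stand-in for a learned [MASK] token
    target = tokens[hidden]                # what the decoder must reconstruct
    return visible, target, hidden

rng = np.random.default_rng(0)
tokens = rng.standard_normal((10, 4))      # 10 EEG tokens of 4 samples each
visible, target, hidden = mae_mask(tokens, mask_ratio=0.3, rng=rng)
print(target.shape)
```

Note that nothing in this objective asks the model how far apart two tokens are; that is precisely the information PARS adds with its relative-shift target.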

MP3 and other similar approaches also explore pretext strategies for capturing structural information. However, according to the paper's results, they remain less effective than PARS at modeling the relative time intervals between distant segments of the signal.

DropPos, for its part, modifies or removes explicit positional information in the transformer with the idea of making the model less dependent on exact positions. Although this type of technique helps networks avoid relying excessively on positional embeddings, on its own it has not proven sufficient to optimally exploit the temporal structure of EEG signals.

The experiments show that models pre-trained with PARS equal or surpass these alternatives on three of the four EEG benchmarks used. Where PARS shines most is in label-efficiency scenarios, that is, when labels exist for only a fraction of the recordings. This is crucial in the clinical setting, where accurately labeling every minute of EEG is very time-consuming and requires specialized experts.

The appendix describes in detail the configuration of each baseline, the hyperparameters tested, the effect of different masking levels and decoder architectures, and the final quantitative results. The clear message is that, for EEG, explicitly learning the temporal relationship between signal fragments offers a practical advantage over simply reconstructing masked content.

Datasets used: from ear-EEG sleep to epilepsy detection


To validate PARS, Apple and its collaborators used four well-known EEG datasets covering different usage scenarios: sleep, pathologies, motor activity, and even electrode configurations in the ear.

The first dataset is Wearable Sleep Staging (EESM17), focused on sleep monitoring with wearable devices. It includes nighttime recordings from 9 subjects captured with a 12-channel ear-EEG sleep monitoring system and a 6-channel scalp EEG. This dataset is particularly interesting because it demonstrates that electrodes placed in the ear can capture a significant portion of the brain activity relevant to distinguishing sleep stages.


The second is TUAB (Temple University Abnormal EEG Corpus), a corpus designed for abnormal-EEG detection. It gathers recordings labeled as normal or pathological, useful for training models that detect general neurological alterations beyond any single condition.

The third, TUSZ (Temple University Seizure Corpus), focuses on the detection of epileptic seizures. It includes annotations marking the start and end of seizures, as well as interictal segments, and is one of the reference datasets for evaluating AI algorithms in epilepsy.

Finally, the fourth dataset is PhysioNet-MI, focused on motor imagery tasks. Here, participants imagine movements (for example, moving a hand) while the EEG is recorded, allowing models to be trained to recognize patterns associated with movement intention, something key for brain-machine interfaces.

PARS pre-training is performed on these and other datasets described in the technical appendix, while fine-tuning is tailored to specific tasks within each of them. The choice of such varied benchmarks shows that the approach is not limited to a single use case and can serve as a general foundation for self-supervised EEG models.

Ear-EEG and AirPods: How PARS research connects with Apple's headphones

One particularly striking aspect of this whole topic is the use of ear-EEG, that is, the capture of brain signals from the ear. The EESM17 dataset already uses systems that place electrodes in the ear canal and outer ear instead of on the scalp, which greatly reduces visual impact and improves comfort.

Meanwhile, public documents and Apple patents indicate that the company has been exploring headphones capable of measuring biosignals from the ear for some time. In a 2023 patent application, the company describes a "wearable electronic device" designed to record brain activity using electrodes located in or around the ear, as a less visible alternative to classic scalp-EEG systems.

The patent itself acknowledges that conventional ear-EEG solutions typically require devices customized for each user (fitted to the size and shape of each person's ear and ear canal), which is expensive and impractical. Furthermore, even a custom-made device can lose contact with the skin over time, degrading signal quality.

To address these challenges, Apple's document proposes placing more electrodes than strictly necessary, distributed around the ear, and letting an AI model determine which ones offer the best reading at any given moment. To do this, it uses metrics such as impedance, noise level, skin-contact quality, and the distance between active and reference electrodes.

Once these metrics are calculated, the system assigns a different weight to each electrode and combines their signals into a single optimized waveform. The patent even includes simple gestures, such as pressing or squeezing the earpiece, to start or stop the measurement, as well as different design and assembly variations for the hardware.
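A quality-weighted combination of redundant electrodes might look like the sketch below. The specific weighting rule (inverse of impedance times noise) and all the numbers are invented for illustration; the patent only describes deriving weights from quality metrics and mixing the signals.

```python
import numpy as np

def combine_electrodes(signals, impedance, noise):
    """Weight each electrode by a simple quality score, then mix to one waveform."""
    quality = 1.0 / (impedance * noise)        # lower impedance/noise -> better
    weights = quality / quality.sum()          # normalize weights to sum to 1
    return weights, weights @ signals          # single optimized waveform

rng = np.random.default_rng(0)
signals = rng.standard_normal((4, 1_000))      # 4 candidate electrodes, 1000 samples
impedance = np.array([5.0, 50.0, 8.0, 200.0])  # hypothetical kilo-ohm readings
noise = np.array([1.0, 3.0, 1.2, 5.0])         # hypothetical relative noise levels
weights, waveform = combine_electrodes(signals, impedance, noise)
print(weights.round(3), waveform.shape)
```

The electrodes with the best contact dominate the mix, while a poorly seated electrode contributes almost nothing, which is how redundancy can replace custom-fitted hardware.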

From theory to product: sensors in AirPods and everyday brain monitoring

Combining this line of research with PARS makes it easy to imagine AirPods with sensors capable of measuring EEG from the ear canal. In fact, there have already been steps in that direction: the AirPods Pro 3 incorporate a photoplethysmography (PPG) sensor to measure heart rate, and Apple has been adding health features to its wearables for years.

If we add to that electrodes for ear-EEG and a self-supervised model like PARS capable of interpreting the signals without large annotated databases, the result would be a device that could detect sleep stages, changes in attention, or early signs of certain neurological pathologies, all transparently to the user.

In the experiments described in the paper, the PARS algorithm takes random segments of the brain signal and learns to predict the temporal distance between them. From this ability, the model develops an enriched understanding of how brain activity evolves over time, resulting in better performance when classifying sleep stages, locating epileptic events, or distinguishing normal from abnormal EEGs.

The great appeal of this approach is that it can operate in a context where labels are scarce. In a commercial product, that could mean that AirPods equipped with this technology would be able to adapt to each user with very little supervised information, taking advantage of huge amounts of raw signal captured during daily use.


According to the study's results, models pre-trained with PARS match or exceed the accuracy of previous methods across various tasks, and this opens the door for everyday consumer devices, such as headphones, to begin offering measurements previously reserved for bulky, specialized hospital equipment.

Of course, all of this comes with reasonable questions about data privacy, security, and ethical boundaries. The idea that an AI could know not only your heart rate or daily step count but also your brain-activity patterns generates understandable concern. For now, both the paper and the patents remain in the realm of research and conceptual design, with no date set for a commercial product.

Potential applications: health, wellness, driving, and cognitive performance

If PARS and ear-EEG-based technology materialize in future AirPods or other wearables, the range of applications could be very broad. First, they would be ideal for monitoring sleep continuously and comfortably, automatically classifying the REM and NREM phases (NREM 1, NREM 2, and NREM 3) and providing detailed information on the quality of rest.

Furthermore, detecting attention levels, stress episodes, or states of alertness would be extremely useful in contexts such as driving, mentally demanding work, or studying. A device that identifies sudden drops in attention could warn users when they risk falling asleep at the wheel or making serious errors due to fatigue.

In the clinical setting, discreet and continuous brain monitoring could facilitate the early detection of disorders such as epilepsy, sleep problems, or neurodegenerative diseases. The goal is not to replace a neurologist or a hospital, but to provide valuable data that can serve as an early warning or a complement to diagnosis.

Another interesting line is biofeedback focused on mental well-being. If the device can relate certain EEG patterns to states of relaxation, deep concentration, or stress, it could guide the user through breathing exercises, meditation, or cognitive training, providing real-time indicators of whether those practices are having the desired effect.

And it wouldn't stop at the brain. Apple's public documents mention the possible addition of sensors to measure blood volume, facial muscle activity, and eye movements from the earphone itself. Combined with the EEG signal and processed on an iPhone or another device, this data could feed AI models capable of painting a very complete picture of the user's physiological and emotional state.

All these uses would have to be accompanied by strict controls over consent, data management, and third-party access. In the examples presented, it is assumed that information can be shared with healthcare professionals only if the user authorizes it, and that much of the processing is done locally to minimize risk.

In practice, what the PARS paper, the patents, and the prototypes show is a very clear convergence: Apple is exploring, on one hand, the best way to collect brain signals from the ear and, on the other, the best way to interpret them with AI without relying on human annotations. If both pieces fit together, headphones could stop being simple audio players and become advanced health and performance tools, provided the ethical and privacy aspects are well managed.

Everything points to our being at the start of a new generation of wearables in which methods such as Apple PARS (Pairwise Relative Shift) and discreet sensors such as ear-EEG could transform how we understand our sleep, our attention, and our neurological health, starting with something as everyday as AirPods.
