21 commits
fb931fa
pr vep stuff
pellet Apr 12, 2026
0497302
rename VisualVEP to VisualGratingVEP and vvep.rst to vprvep.rst
pellet Apr 12, 2026
aae0f14
feat: add pattern reversal VEP visualization example and utility func…
pellet Apr 12, 2026
31b8994
feat: add Cyton config constants and PRVEP run experiment example
pellet Apr 12, 2026
d6f2470
fix docs build: add python-dotenv to docsbuild requirements
pellet Apr 12, 2026
cfd47b1
prepare PR-VEP example for fetch_dataset; exclude from CI until data …
pellet Apr 12, 2026
b84d736
fix analysis/utils.py imports: lazy-load EEG and pynput to unblock do…
pellet Apr 12, 2026
1399491
fix NameError: add __future__ annotations to defer EEG type annotatio…
pellet Apr 12, 2026
d0a2ac6
fix docs CI: remove root examples dir and visual_vep from gallery unt…
pellet Apr 12, 2026
a066755
docs: update PR-VEP intro to reference Cyton and electrode placement
pellet Apr 12, 2026
270a197
docs: update PR-VEP electrode placement section for Cyton
pellet Apr 12, 2026
b67fc9a
ci: deploy docs on dev/* branches as well as master
pellet Apr 12, 2026
4fd9de6
docs: replace remaining Muse references with Cyton in PR-VEP docs
pellet Apr 12, 2026
46950bf
docs: document refresh rate requirements and effect on P100 latency p…
pellet Apr 12, 2026
07934c8
docs: restructure PR-VEP page and add visual correction section
pellet Apr 13, 2026
9457817
docs: remove API Reference section from PR-VEP page for consistency
pellet Apr 13, 2026
a2670df
docs: set PR-VEP example refresh rate to 120 Hz for Quest 2
pellet Apr 13, 2026
2940599
feat: sub-sample P100 peak interpolation and high-precision defaults
pellet Apr 13, 2026
ddb85eb
docs: update photodiode sync patch wording
pellet Apr 13, 2026
72c670b
docs: remove gc/rush detail from PR-VEP page (handled by base class)
pellet Apr 13, 2026
feb78d2
feat: add longitudinal P100 tracking notebook and docs section
pellet Apr 13, 2026
32 changes: 32 additions & 0 deletions .github/actions/setup-conda-env/action.yml
@@ -0,0 +1,32 @@
name: Set up conda env
description: >
Install Miniconda and create/activate an EEG-ExPy conda environment from
the given env yml. Shared by the Test and Typecheck jobs so the two
don't drift apart. Environment name is not set in the yml files so local
installs can use any name they like.

inputs:
environment-file:
required: true
description: Path to the conda environment yml file to install from.
activate-environment:
required: true
description: Name to give the created environment.
python-version:
required: false
description: >
Python version to pin (e.g. '3.8'). Overrides the version conda would
otherwise resolve from the environment file's constraints. When omitted,
conda resolves freely within the environment file's range.

runs:
using: composite
steps:
- uses: conda-incubator/setup-miniconda@v3
with:
environment-file: ${{ inputs.environment-file }}
activate-environment: ${{ inputs.activate-environment }}
python-version: ${{ inputs.python-version }}
auto-activate-base: false
channels: conda-forge
miniconda-version: "latest"
39 changes: 13 additions & 26 deletions .github/workflows/docs.yml
@@ -9,29 +9,20 @@ on:
jobs:
build:
runs-on: ubuntu-22.04
defaults:
run:
shell: bash -el {0}
steps:
- name: Checkout repo
uses: actions/checkout@v3
with:
fetch-depth: 0

- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.8

- name: Install dependencies
run: |
make install-deps-apt
python -m pip install --upgrade pip wheel
python -m pip install attrdict

make install-deps-wxpython

- name: Build project
run: |
make install-docs-build-dependencies
fetch-depth: 0

- name: Set up conda env
uses: ./.github/actions/setup-conda-env
with:
environment-file: environments/eeg-expy-docsbuild.yml
activate-environment: eeg-expy-docsbuild

- name: Get list of changed files
id: changes
@@ -40,21 +31,20 @@
git diff --name-only origin/master...HEAD > changed_files.txt
cat changed_files.txt


- name: Determine build mode
id: mode
run: |
if grep -vqE '^examples/.*\.py$' changed_files.txt; then
echo "FULL_BUILD=true" >> $GITHUB_ENV
echo "Detected non-example file change. Full build triggered."
else
CHANGED_EXAMPLES=$(grep '^examples/.*\.py$' changed_files.txt | paste -sd '|' -)
# || true prevents grep's exit code 1 (no matches) from aborting the step
CHANGED_EXAMPLES=$(grep '^examples/.*\.py$' changed_files.txt | paste -sd '|' - || true)
echo "FULL_BUILD=false" >> $GITHUB_ENV
echo "CHANGED_EXAMPLES=$CHANGED_EXAMPLES" >> $GITHUB_ENV
echo "Changed examples: $CHANGED_EXAMPLES"
fi


- name: Cache built documentation
id: cache-docs
uses: actions/cache@v4
@@ -65,15 +55,12 @@
restore-keys: |
${{ runner.os }}-sphinx-


- name: Build docs
run: |
make docs
run: make docs


- name: Deploy Docs
uses: peaceiris/actions-gh-pages@v3
if: github.ref == 'refs/heads/master' # TODO: Deploy seperate develop-version of docs?
if: github.ref == 'refs/heads/master' || startsWith(github.ref, 'refs/heads/dev/')
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: doc/_build/html
18 changes: 6 additions & 12 deletions .github/workflows/test.yml
@@ -29,15 +29,12 @@ jobs:
if: "startsWith(runner.os, 'Linux')"
run: |
make install-deps-apt
- name: Install conda
uses: conda-incubator/setup-miniconda@v3
- name: Set up conda env
uses: ./.github/actions/setup-conda-env
with:
environment-file: environments/eeg-expy-full.yml
auto-activate-base: false
python-version: ${{ matrix.python_version }}
activate-environment: eeg-expy-full
channels: conda-forge
miniconda-version: "latest"
python-version: ${{ matrix.python_version }}

- name: Fix PsychXR numpy dependency DLL issues (Windows only)
if: matrix.os == 'windows-latest'
@@ -75,15 +72,12 @@

steps:
- uses: actions/checkout@v2
- name: Install conda
uses: conda-incubator/setup-miniconda@v3
- name: Set up conda env
uses: ./.github/actions/setup-conda-env
with:
environment-file: environments/eeg-expy-full.yml
auto-activate-base: false
python-version: ${{ matrix.python_version }}
activate-environment: eeg-expy-full
channels: conda-forge
miniconda-version: "latest"
python-version: ${{ matrix.python_version }}
- name: Typecheck
run: |
make typecheck
4 changes: 3 additions & 1 deletion .gitignore
@@ -6,6 +6,8 @@ __pycache__
# Built as part of docs
doc/auto_examples
doc/_build
doc/generated/
doc/sg_execution_times.rst

# Built by auto_examples
examples/visual_cueing/*.csv
@@ -18,4 +20,4 @@ htmlcov
# PyCharm
.idea/

**/.DS_Store
**/.DS_Store
6 changes: 3 additions & 3 deletions doc/conf.py
@@ -255,9 +255,9 @@ def setup(app):

# Configurations for sphinx gallery

sphinx_gallery_conf = {'filename_pattern': '(?=.*r__)(?=.*.py)',
'examples_dirs': ['../examples','../examples/visual_n170', '../examples/visual_p300','../examples/visual_ssvep', '../examples/visual_cueing', '../examples/visual_gonogo'],
'gallery_dirs': ['auto_examples','auto_examples/visual_n170', 'auto_examples/visual_p300','auto_examples/visual_ssvep', 'auto_examples/visual_cueing', 'auto_examples/visual_gonogo'],
sphinx_gallery_conf = {'filename_pattern': '(?=.*r__)(?=.*.py)',
'examples_dirs': ['../examples/visual_n170', '../examples/visual_p300','../examples/visual_ssvep', '../examples/visual_cueing', '../examples/visual_gonogo'],
'gallery_dirs': ['auto_examples/visual_n170', 'auto_examples/visual_p300','auto_examples/visual_ssvep', 'auto_examples/visual_cueing', 'auto_examples/visual_gonogo'],
'within_subsection_order': FileNameSortKey,
'default_thumb_file': 'img/eeg-notebooks_logo.png',
'backreferences_dir': 'generated', # Where to drop linking files between examples & API
206 changes: 206 additions & 0 deletions doc/experiments/vprvep.rst
@@ -0,0 +1,206 @@
***************************
Visual Pattern Reversal VEP
***************************

Visual Pattern Reversal VEP
===========================

The Pattern Reversal VEP (PR-VEP) is the most widely studied visual
evoked potential paradigm. A checkerboard pattern swaps its black and
white squares at a regular rate (typically 2 reversals per second) while
the participant fixates a central dot. Each reversal elicits a
stereotyped waveform whose most prominent feature is the **P100**, a
positive deflection occurring ~100 ms after the reversal at midline
occipital electrodes. It is flanked by a smaller N75 before and an
N145 after.

In this notebook we attempt to detect the P100 with the OpenBCI Cyton.
The most critical electrode is Oz, followed by O1 and O2, then POz;
Fp1 and Fp2 are optional channels for picking up eye-movement
artefacts. We use monocular pattern reversal blocks and run the
analysis pipeline to extract the per-eye P100 latency and the
interocular latency difference.


**PR-VEP Experiment Notebook Examples:**

.. include:: ../auto_examples/visual_vep/index.rst


Running the Experiment
----------------------

.. code-block:: python

from eegnb.devices.eeg import EEG
from eegnb.experiments.visual_vep import VisualPatternReversalVEP

eeg = EEG(device='cyton')
experiment = VisualPatternReversalVEP(
display_refresh_rate=120, # must match display and be divisible by 2; higher rates give better latency precision
eeg=eeg,
save_fn='my_vep_recording.csv',
use_vr=True, # False for monitor mode
)
experiment.run()


Participant Preparation
-----------------------

The PR-VEP is sensitive to the optical quality of the retinal image.
Participants who normally wear glasses or contact lenses **must** wear
their corrective lenses during the test. Uncorrected refractive error
blurs the checkerboard's high spatial frequency edges, which attenuates
the P100 amplitude and can increase its latency — mimicking a genuine
neural conduction delay. This is especially important when comparing
latencies between eyes or across sessions.

ISCEV guidelines require that visual acuity be documented for each
recording session. If a participant's corrected acuity is worse than
6/9 (20/30), note it alongside the data so that downstream analysis can
account for it.


Stimulus Parameters
-------------------

Parameters follow the ISCEV "large check" option [Odom2016]_:

- **Check size**: 1° of visual angle (0.5 cpd)
- **Reversal rate**: 2 reversals per second (each pattern phase lasts half a second)
- **Field size**: 16° (monitor) / 20° (VR)
- **Contrast**: High contrast black/white, mean luminance held constant
- **Fixation**: Central red dot
- **Recording**: Monocular, alternating left and right eye per block

Eight blocks of 50 seconds by default, giving ~100 reversals per eye per
block (400 per eye total).

The experiment requires a display refresh rate that is divisible by
two, since each half-second pattern phase must span a whole number of
frames (``refresh_rate / 2`` frames per phase). Any such refresh rate
is supported — 60 Hz, 90 Hz, 120 Hz, 144 Hz, etc. A higher refresh rate
reduces the temporal jitter between the true reversal onset and the
nearest frame boundary, which directly translates to more precise P100
latency estimates. For example, at 60 Hz each frame is ~16.7 ms wide,
whereas at 120 Hz it is ~8.3 ms — halving the worst-case timing error.
VR headsets running at 90 Hz or above are therefore preferred over a
standard 60 Hz monitor when absolute latency precision matters.
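The frame arithmetic above can be sketched as a small helper (the function is illustrative, not part of eegnb):

```python
def frame_timing(refresh_hz: int, reversal_rate_hz: int = 2):
    """Frames per pattern phase and frame interval for a given display.

    Each pattern phase lasts 1 / reversal_rate_hz seconds, so the
    refresh rate must divide evenly by the reversal rate for every
    reversal to land exactly on a frame boundary.
    """
    if refresh_hz % reversal_rate_hz != 0:
        raise ValueError("refresh rate must be divisible by the reversal rate")
    frames_per_phase = refresh_hz // reversal_rate_hz
    frame_interval_ms = 1000.0 / refresh_hz
    return frames_per_phase, frame_interval_ms
```

At 120 Hz this gives 60 frames per phase and an ~8.3 ms frame interval; at 60 Hz, 30 frames and ~16.7 ms.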


Monitor vs VR
-------------

The experiment supports both standard monitor presentation and Meta
Quest (VR) presentation via ``use_vr=True``.

**VR mode is preferred** for two reasons:

- Each eye sees the checkerboard independently, so there is no manual
eye closure and no light leakage.
- The OpenXR compositor supplies a per-frame predicted photon time
(``tracking_state.headPose.time``), which is attached to the EEG
marker in place of ``time.time()``. This cancels most of the
output-side display latency — render queue, compositor buffering,
scan-out, HMD persistence — on a per-frame basis, which matters for
P100 latency where even small shifts are clinically meaningful.

In monitor mode the software marker is the only timing source, so any
fixed display-pipeline latency has to be handled separately (see below).
A proof-of-concept photodiode sync patch is drawn in the bottom-left
corner of the window in monitor mode — a 50px square whose polarity
flips with each reversal. Taping a photodiode over that square and
routing its TTL into a spare channel would give hardware timing ground
truth; the code is in place but the hardware path is a work in progress —
instructions for wiring a photodiode to a Cyton digital input pin will
be added in a future update.


Electrode Placement
-------------------

The P100 is generated in occipital cortex. Priority electrode placement
for the OpenBCI Cyton is:

1. **Oz** — the primary electrode; highest amplitude P100
2. **O1, O2** — lateral occipital; provide left/right asymmetry information
3. **POz** — parieto-occipital midline; useful fallback or supplement
4. **Fp1, Fp2** — optional; placed on the forehead to record eye movement
artefacts (EOG) for rejection during analysis


Latency Resolution
------------------

The precision of a P100 latency estimate depends on three factors:

1. **Display refresh rate** — determines the worst-case stimulus timing
jitter (see *Stimulus Parameters* above). At 120 Hz the frame interval
is ~8.3 ms, so a reversal lands within ~4.2 ms (half a frame) of its
nominal onset.

2. **EEG sampling rate** — the Cyton samples at 250 Hz, giving 4 ms
between samples. Without interpolation, the peak latency is locked to
the nearest sample and cannot resolve shifts smaller than 4 ms.

3. **Number of trials** — averaging more reversals reduces noise in the
ERP waveform, tightening the confidence interval around the peak
estimate. The default is 8 blocks of 100 reversals (400 per eye).

To achieve sub-sample precision the analysis pipeline uses **parabolic
interpolation**: a parabola is fitted through the peak sample and its
two neighbours, and the vertex of the fit is taken as the true peak
location. At 250 Hz this brings effective resolution to ~0.5 ms — well
below the sample interval. The interpolated peak finder is used by
default in ``vep_utils.plot_vep()``.
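A minimal standalone sketch of the parabolic fit (for illustration; the packaged implementation is the one used by ``vep_utils``):

```python
def parabolic_peak(y, i, sfreq=250.0):
    """Refine an ERP peak at integer sample index ``i`` by fitting a
    parabola through y[i-1], y[i], y[i+1] and taking its vertex.

    Returns (fractional_index, latency_ms, amplitude), with latency
    measured from the first sample of ``y``.
    """
    a, b, c = float(y[i - 1]), float(y[i]), float(y[i + 1])
    denom = a - 2.0 * b + c
    # denom == 0 means the three points are collinear: keep the sample peak
    delta = 0.0 if denom == 0 else 0.5 * (a - c) / denom
    idx = i + delta
    amp = b - 0.25 * (a - c) * delta
    return idx, 1000.0 * idx / sfreq, amp
```

For a noiseless parabola sampled on an integer grid the vertex is recovered exactly: a peak at sample 2.3 of a 250 Hz epoch comes back as 9.2 ms rather than the nearest-sample 8 ms.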

For studies that require detecting latency shifts of 1–2 ms (e.g.
within-subject longitudinal comparisons), the combination of 120 Hz
display, parabolic interpolation, and the default 8-block design is
recommended.


Longitudinal Tracking
---------------------

To monitor P100 latency over time — for example during nerve recovery or
neuroplasticity studies — record multiple sessions using the same subject
and session numbering scheme and compare the per-eye P100 across them.

Before attributing a latency change to an intervention, establish a
**baseline**: record at least 3–5 sessions over 1–2 weeks under the same
conditions. This gives you the natural session-to-session variability for
your setup and participant, so you can distinguish a real shift from
measurement noise.
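A sketch of that baseline comparison (the function name and the 2 SD threshold are illustrative choices, not an eegnb API):

```python
import statistics

def is_latency_shift(baseline_ms, new_ms, n_sd=2.0):
    """True if new_ms falls outside n_sd standard deviations of the
    baseline sessions' P100 latencies (needs >= 2 baseline sessions)."""
    mu = statistics.mean(baseline_ms)
    sd = statistics.stdev(baseline_ms)
    return abs(new_ms - mu) > n_sd * sd
```

With a baseline of ``[101.0, 102.0, 100.0, 101.5, 100.5]`` ms, a new latency of 108.0 ms is flagged while 101.2 ms is not.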

The ``02r__pattern_reversal_longitudinal.py`` example notebook
demonstrates the full workflow: discovering sessions, extracting per-eye
P100 latencies with parabolic interpolation, printing a summary table,
and plotting latency trends and interocular differences over time.


Timing Notes
------------

Measured P100 latency is the true P100 latency plus the display-pipeline
delay, plus the EEG device's input delay, plus any clock-alignment
error. For the Cyton the USB-serial latency is typically ~30–40 ms, so
if you need *absolute* latencies you need to characterise and subtract
it; for *relative* comparisons (between-eye, within-subject across
sessions) it cancels out and you can ignore it.

Two sidecar files are written alongside each recording to let you check
timing after the fact:

- ``{save_fn}_timing.csv`` — per-trial software and compositor
timestamps and their delta
- ``{save_fn}_frame_stats.json`` — per-frame intervals and dropped-frame
count (150%-of-refresh threshold)


References
----------

.. [Odom2016] Odom JV, Bach M, Brigell M, Holder GE, McCulloch DL, Mizota A,
Tormene AP; International Society for Clinical Electrophysiology of Vision.
**ISCEV standard for clinical visual evoked potentials: (2016 update).**
*Documenta Ophthalmologica* 133(1):1-9. doi:10.1007/s10633-016-9553-y
1 change: 1 addition & 0 deletions doc/index.rst
@@ -19,6 +19,7 @@
experiments/vn170
experiments/vp300
experiments/vssvep
experiments/vprvep
experiments/cueing
experiments/gonogo
experiments/all_examples
1 change: 1 addition & 0 deletions eegnb/analysis/__init__.py
@@ -0,0 +1 @@
from eegnb.analysis import vep_utils # noqa: F401