Platforms: Windows, Linux, macOS
Built in? Yes
Key Developers: Aarón Cuevas López, Josh Siegle
Source Code: https://github.com/open-ephys/plugin-GUI/tree/master/Source/Processors/RecordNode/BinaryFormat
Advantages
Continuous data is stored in a compact format (flat binary files of interleaved 16-bit integers) that can be memory-mapped for efficient loading.
Additional files are stored in JSON or NumPy (.npy) format, which can be read using numpy.load in Python or the npy-matlab package in MATLAB.
The format has no limit on the number of channels that can be recorded simultaneously.
Continuous data files are immediately compatible with most spike sorting packages.
Limitations
Requires slightly more disk space than other formats, because it stores two 64-bit values (a sample number and a timestamp) for every sample.
Continuous files are not self-contained, i.e., you need to know the number of channels and the “bit-volts” multiplier in order to read them properly.
It is not robust to crashes, as the NumPy file headers need to be updated when recording is stopped.
Within a Record Node directory, data for each experiment (stop/start acquisition) is contained in its own sub-directory. Experiment directories are further sub-divided into individual recordings (stop/start recording).
A recording directory contains sub-directories for continuous, events, and spikes data. It also contains a structure.oebin file, which is a JSON file detailing channel information, channel metadata, and event metadata descriptions.
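As a quick illustration, the channel metadata in structure.oebin can be read with Python's standard json module. This is a minimal sketch; the recording path is a hypothetical example, and the exact key names should be verified against your own structure.oebin file.

import json
from pathlib import Path

# Hypothetical path to a recording directory
recording_dir = Path("Record Node 101/experiment1/recording1")

# structure.oebin describes every continuous, event, and spike source
with open(recording_dir / "structure.oebin") as f:
    info = json.load(f)

# Inspect the first continuous stream's channel metadata.
# Key names ("continuous", "num_channels", "channels") are assumed here;
# check them against your own file.
stream = info["continuous"][0]
print(stream["num_channels"])
print(stream["channels"][0])  # per-channel entry, including the bit-volts scale factor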
Continuous data is written separately for each stream within a processor (a block of synchronously sampled channels):
Each continuous directory contains the following files:
continuous.dat: A simple binary file containing N channels x M samples of 16-bit integers in little-endian format. Data is saved as ch1_samp1, ch2_samp1, ... chN_samp1, ch1_samp2, ch2_samp2, ..., chN_sampM. The value of the least significant bit needed to convert the 16-bit integers to physical units is specified in the bitVolts field of the relevant channel in the structure.oebin JSON file. For “headstage” channels, multiplying by bitVolts converts the values to microvolts, whereas for “ADC” channels, bitVolts converts the values to volts. A sketch of loading these files with NumPy follows this list.
sample_numbers.npy: A numpy array containing M 64-bit integers that represent the index of each sample in the .dat file since the start of acquisition. Note: This file was called timestamps.npy in GUI version 0.5.X. To avoid ambiguity, “sample numbers” always refer to integer sample index values starting in version 0.6.0.
timestamps.npy: A numpy array containing M 64-bit floats representing the global timestamps in seconds relative to the start of the Record Node’s main data stream (assuming this stream was synchronized before starting recording). Note: This file was called synchronized_timestamps.npy in GUI version 0.5.X. To avoid ambiguity, “timestamps” always refer to float values (in units of seconds) starting in version 0.6.0.
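Below is a minimal sketch of loading one continuous stream directly with NumPy, assuming the channel count and bit-volts value have already been read from structure.oebin; the directory name is a hypothetical example.

import numpy as np

# Hypothetical continuous directory plus metadata taken from structure.oebin
cont_dir = "Record Node 101/experiment1/recording1/continuous/example_stream"
num_channels = 64   # assumed: read this from structure.oebin
bit_volts = 0.195   # assumed: per-channel scale factor from structure.oebin

# Memory-map the interleaved 16-bit samples and reshape to (M samples, N channels)
data = np.memmap(f"{cont_dir}/continuous.dat", dtype="<i2", mode="r")
data = data.reshape(-1, num_channels)

# Convert a slice to physical units (microvolts for headstage channels)
first_chunk_uv = data[:30000, :] * bit_volts

# Per-sample indices and synchronized timestamps (in seconds)
sample_numbers = np.load(f"{cont_dir}/sample_numbers.npy")
timestamps = np.load(f"{cont_dir}/timestamps.npy")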
Event data is organized by stream and by “event channel” (typically TTL). Each event channel records the states of multiple TTL lines.
Directories for TTL event channels include the following files:
states.npy: numpy array of N 16-bit integers, indicating ON (+CH_number) and OFF (-CH_number) states
sample_numbers.npy: Contains N 64-bit integers indicating the sample number of each event since the start of acquisition. Note: This file was called timestamps.npy in GUI version 0.5.X. To avoid ambiguity, “sample numbers” always refer to integer sample index values starting in version 0.6.0.
timestamps.npy: Contains N 64-bit floats representing the global timestamp of each event in seconds relative to the start of the Record Node’s main data stream (assuming this stream was synchronized before starting recording). Note: This file did not exist in GUI version 0.5.X. Synchronized (float) timestamps for events first became available in version 0.6.0.
full_words.npy: Contains N 64-bit integers holding the “TTL word”, i.e. the state of all lines at the time each event occurred.
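A minimal sketch of reading the TTL event files described above; the events directory path is a hypothetical example.

import numpy as np

# Hypothetical TTL events directory for one stream
ttl_dir = "Record Node 101/experiment1/recording1/events/example_stream/TTL"

states = np.load(f"{ttl_dir}/states.npy")                  # +line on rising edge, -line on falling edge
sample_numbers = np.load(f"{ttl_dir}/sample_numbers.npy")  # sample number of each event
timestamps = np.load(f"{ttl_dir}/timestamps.npy")          # synchronized event times, in seconds
full_words = np.load(f"{ttl_dir}/full_words.npy")          # state of all lines at each event

# Example: times of rising edges on TTL line 1
line1_on_times = timestamps[states == 1]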
Text events are routed through the GUI’s Message Center and stored in a directory called MessageCenter, which contains the following files:
text.npy: numpy array of N strings
sample_numbers.npy: Contains N 64-bit integers indicating the sample number of each text event on the Record Node’s main data stream. Note: This file was called timestamps.npy in GUI version 0.5.X. To avoid ambiguity, “sample numbers” always refer to integer sample index values starting in version 0.6.0.
timestamps.npy: Contains N 64-bit floats representing the global timestamp of each text event in seconds relative to the start of the Record Node’s main data stream. Note: This file did not exist in GUI version 0.5.X. Synchronized (float) timestamps for events first became available in version 0.6.0.
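A minimal sketch of reading the Message Center files above; the directory path is a hypothetical example.

import numpy as np

# Hypothetical MessageCenter directory inside a recording's events folder
msg_dir = "Record Node 101/experiment1/recording1/events/MessageCenter"

text = np.load(f"{msg_dir}/text.npy")                      # one entry per text event
sample_numbers = np.load(f"{msg_dir}/sample_numbers.npy")  # sample numbers on the main data stream
timestamps = np.load(f"{msg_dir}/timestamps.npy")          # synchronized times, in seconds

for t, msg in zip(timestamps, text):
    # entries may be stored as byte strings; decode for display
    print(f"{t:.3f} s: {msg.decode() if isinstance(msg, bytes) else msg}")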
Spike data is organized first by stream and then by electrode.
Each electrode directory contains the following files:
waveforms.npy: numpy array with dimensions S spikes x N channels x M samples containing the spike waveforms
sample_numbers.npy: numpy array of S 64-bit integers containing the sample number corresponding to the peak of each spike. Note: This file was called timestamps.npy in GUI version 0.5.X. To avoid ambiguity, “sample numbers” always refer to integer sample index values starting in version 0.6.0.
timestamps.npy: numpy array of S 64-bit floats containing the global timestamp in seconds corresponding to the peak of each spike (assuming this stream was synchronized before starting recording). Note: This file did not exist in GUI version 0.5.X. Synchronized (float) timestamps for spikes first became available in version 0.6.0.
clusters.npy: numpy array of S unsigned 16-bit integers containing the sorted cluster ID for each spike (defaults to 0 if this is not available).
More detailed information about each electrode is stored in the structure.oebin JSON file.
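A minimal sketch of loading the spike files above for one electrode; the directory names are hypothetical examples.

import numpy as np

# Hypothetical electrode directory within a recording's spikes folder
spike_dir = "Record Node 101/experiment1/recording1/spikes/example_stream/Electrode 1"

waveforms = np.load(f"{spike_dir}/waveforms.npy")            # (S spikes, N channels, M samples)
sample_numbers = np.load(f"{spike_dir}/sample_numbers.npy")  # sample number at each spike peak
timestamps = np.load(f"{spike_dir}/timestamps.npy")          # synchronized peak times, in seconds
clusters = np.load(f"{spike_dir}/clusters.npy")              # sorted cluster IDs (0 if unsorted)

# Example: mean waveform on the first channel for cluster 0
mean_waveform = waveforms[clusters == 0, 0, :].mean(axis=0)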
To read the data in Python, there are several options:
(recommended) Create a Session object using the open-ephys-python-tools package. The data format will be detected automatically (see the sketch after these options).
Create a File object using the pyopenephys package.
Use the DatLoad() method from Binary.py in the open-ephys/analysis-tools repository.
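A minimal sketch of the recommended approach, assuming the open-ephys-python-tools package is installed; attribute names follow that package's documented interface, but check them against the version you are using.

from open_ephys.analysis import Session

# Point the Session at the directory that contains the Record Node folders
session = Session("/path/to/recording/directory")

# Drill down: Record Node -> recording -> continuous stream
recording = session.recordnodes[0].recordings[0]
stream = recording.continuous[0]

print(stream.samples.shape)       # (M samples, N channels) of 16-bit integers
print(stream.sample_numbers[:5])  # per-sample indices
print(stream.timestamps[:5])      # synchronized timestamps, in seconds

# Events for this recording, returned as a pandas DataFrame
print(recording.events.head())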
To read the data in MATLAB:
Use load_open_ephys_binary.m from the open-ephys/analysis-tools repository.