Welcome to BrainAccess Python API’s documentation!

This package provides a Python API for BrainAccess Core and the BrainAccess BCI Library.
BrainAccess Core implements an interface to BrainAccess EEG hardware and signal processing functions.
The BrainAccess BCI Library offers BCI algorithms such as the Motion Classifier, Alpha Detector, and SSVEP Detector.

Everything described here can be accessed through the brainaccess Python package.

BrainAccess Core

This part describes functions for EEG data acquisition, signal processing, configuration of both BrainAccess hardware and software, and other functionalities. In addition, data structures used in the BrainAccess Core API are defined.

To jump right into using BrainAccess Core, check out the BrainAccess Core Usage Examples section.

Functions

This section describes BrainAccess Core functions that are provided through brainaccess Python package.

For convenience, all of the below functions can be imported as

from brainaccess import <function_name>

skipping the .core module specification.

brainaccess.core.configure_logging(verbosity, output_file)[source]

Configures BrainAccess Core library logging.

Args:
verbosity (int): a value between 0 and 4, which describes what information is logged.
0 – nothing is logged,
1 – only error messages are logged,
2 – warnings and error messages,
3 – information messages briefly reflecting the library status + everything above,
4 – debug messages thoroughly describing actions performed by the library + everything above.
output_file (string): either an empty string, a full path, or the name of the log output file.
If output_file is an empty string, logging is done to the console.
If the output file name is provided without a full path, the file is created in the library's directory.
If the output file already exists, log output is appended to the existing file.
Escaped backslashes ("\\") or forward slashes ("/") should be used.
brainaccess.core.discard_data()[source]

Discards data in the internal buffer of baCore.

Requires initialization.

brainaccess.core.estimate_quality(signal)[source]

Estimates EEG signal quality.

Args:

signal ( list[float] OR numpy array [float64]): 1D array containing an EEG signal. The signal should be detrended before supplying it to this function.

Returns:

double: a number in the range 0–1; 1 for best quality, 0 for worst quality.
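
The docstring above notes that the signal should be detrended before being passed in. In practice brainaccess.core.preprocess() with detrend settings takes care of this; purely to illustrate the idea, here is a minimal pure-Python linear detrend sketch (all names here are illustrative, not part of the API):

```python
def linear_detrend(signal):
    """Remove the least-squares straight-line trend from a 1D signal."""
    n = len(signal)
    xs = range(n)
    mean_x = (n - 1) / 2.0
    mean_y = sum(signal) / n
    # closed-form least-squares slope: cov(x, y) / var(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, signal))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    intercept = mean_y - slope * mean_x
    return [y - (intercept + slope * x) for x, y in zip(xs, signal)]

# a pure linear trend (e.g. DC offset plus drift) is removed completely
ramp = [0.5 * x + 3.0 for x in range(100)]
flat = linear_detrend(ramp)
```

The detrended result could then be passed to estimate_quality().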

brainaccess.core.fourier_transform(signal)[source]

Calculates FFT for a given signal.

Args:

signal ( list[float] OR numpy array [float64] ): 1D array containing an EEG signal.

Returns:

brainaccess.models.FourierTransform: an object that holds the results of FFT.

brainaccess.core.get_active_channels()[source]

Gets a list of channels that are turned on in BrainAccess EEG hardware.

Requires initialization.

Returns:
Tuple (activeChannels, biasChannels, labels)
where
activeChannels is a list of channel indices turned on in hardware,
biasChannels is a list of channel indices used in bias feedback
labels is a list of string labels of the active channels.
brainaccess.core.get_battery_level()[source]

Gets battery charge level. It is a rough estimate determined from the battery voltage.

Requires initialization.

Returns:

int: battery level in percentage

brainaccess.core.get_battery_voltage()[source]

Gets voltage of the integrated LiPo battery.

Requires initialization.

Returns:

int: battery voltage in mV. Approximately 4200 mV for a fully charged battery and 3600 mV for a fully depleted one.

brainaccess.core.get_current_data()[source]

Requests all the data that is currently acquired by BrainAccess EEG hardware.

The number of samples returned varies depending on the time elapsed since the last call. For continuous acquisition we recommend using brainaccess.core.get_data() or brainaccess.core.get_data_samples().

Requires initialization and active acquisition.

Returns:

brainaccess.models.EEGDataStream: an object that holds info on hardware status, EEG data, lead status and accelerometer data.

brainaccess.core.get_data(time_span_in_milliseconds)[source]

Requests data already acquired by BrainAccess EEG hardware (or waits to acquire it if not yet available). This function should be used for continuous acquisition.

Requires initialization and active acquisition.

Args:

time_span_in_milliseconds (int): recording length to acquire in milliseconds.

Returns:

brainaccess.models.EEGDataStream: an object that holds info on hardware status, EEG data, lead status and accelerometer data.
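
As a back-of-the-envelope check, the number of samples a given time span corresponds to is simply the sampling frequency times the span. A tiny helper sketch (the function name is illustrative, not part of the API):

```python
def expected_samples(fs_hz, time_span_ms):
    """Approximate number of samples in a time span at a given sampling rate."""
    return int(fs_hz * time_span_ms / 1000.0)

# e.g. a get_data(200) call at the 250 Hz sampling rate yields about 50 samples
n = expected_samples(250.0, 200)
```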

brainaccess.core.get_data_from_now(time_span_in_milliseconds)[source]

Requests data to be acquired immediately by BrainAccess EEG hardware. This function should be used for triggered acquisition.

Previously collected data is discarded.

Requires initialization and active acquisition.

Args:

time_span_in_milliseconds (int): recording length to acquire in milliseconds.

Returns:

brainaccess.models.EEGDataStream: an object that holds info on hardware status, EEG data, lead status and accelerometer data.

brainaccess.core.get_data_samples(num_samples)[source]

Requests a number of data samples. These are pulled from the internal buffer, or acquired from the hardware if the buffered data is not sufficient. This function should be used for continuous acquisition.

Requires initialization and active acquisition.

Args:

num_samples (int): number of samples to acquire.

Returns:

brainaccess.models.EEGDataStream: an object that holds info on hardware status, EEG data, lead status and accelerometer data.

brainaccess.core.get_data_samples_from_now(num_samples)[source]

Requests a number of data samples to be acquired immediately by BrainAccess EEG hardware. This function should be used for triggered acquisition.

Requires initialization and active acquisition.

Args:

num_samples (int): number of samples to acquire.

Returns:

brainaccess.models.EEGDataStream: an object that holds info on hardware status, EEG data, lead status and accelerometer data.

brainaccess.core.get_num_available_channels()[source]

Gets the number of available channels in connected BrainAccess EEG hardware.

Requires initialization.

Returns:

int: number of channels.

brainaccess.core.get_preprocessing_settings()[source]

Gets the current parameters for signal processing.

Returns:

Tuple (int, brainaccess.models.DetrendSettings, list [brainaccess.models.FilterSettings], brainaccess.models.WindowSettings )

where the first Tuple element is the sampling frequency.

brainaccess.core.get_sampling_frequency()[source]

Gets the sampling frequency currently used by BrainAccess EEG hardware.

Returns:

double: sampling frequency in Hz.

brainaccess.core.has_connection()[source]

Checks if BrainAccess core library still has connection with BrainAccess EEG hardware.

Requires initialization.

Returns:

bool: True if library can communicate, False if connection is broken.

brainaccess.core.initialize()[source]

Initializes BrainAccess Core library and attempts to connect to BrainAccess EEG hardware.

initialize or load_config must be called before any further actions that require a connection to the BrainAccess EEG hardware.

Returns:
int:
0 if successful,
1 if connection could not be established due to WiFi issues,
2 if no board is inserted in the first board slot,
3 if called while acquisition is in progress (not allowed).
brainaccess.core.is_charging_on()[source]

Gets battery charging status.

Requires initialization.

Returns:

bool: True if battery charger is plugged in, False otherwise.

brainaccess.core.load_config(config_path)[source]

Loads a configuration file with acquisition parameters from the provided path and (re)initializes the library.

When load_config is used, calling initialize beforehand is not necessary.

Args:

config_path (string): a full path including the configuration file name. Escaped backslashes ("\\") or forward slashes ("/") should be used.

Returns:
int:
0 if successful,
1 if connection could not be established due to WiFi issues,
2 if no board is inserted in the first board slot,
3 if called while acquisition is in progress (not allowed).
brainaccess.core.load_data(file_path, separator=' ')[source]

Loads data from a csv file.

Args:

file_path (string): a full path including the file name. Escaped backslashes ("\\") or forward slashes ("/") should be used.

separator (char): a separator character used to separate columns in the csv file.

Returns:

brainaccess.models.EEGData: an object containing EEG data, accelerometer data and other relevant info. Returns empty EEG and accelerometer data arrays (see object attributes) if loading was unsuccessful.

brainaccess.core.load_preprocessing_config(config_path)[source]

Loads configuration file with signal preprocessing parameters.

If the configuration is loaded successfully, it is saved to the library’s working directory and automatically reloaded on successive library runs.

Args:

config_path (string): a full path including the configuration file name. Escaped backslashes ("\\") or forward slashes ("/") should be used.

brainaccess.core.preprocess(signal)[source]

Processes an EEG signal with given preprocessing parameters.

Args:

signal (list[float] OR numpy array [float64]): 1D array containing an EEG signal

Returns:

numpy array [float64]: processed signal.

brainaccess.core.save_config(config_path)[source]

Saves current acquisition parameters to configuration file.

This is only needed when you want to keep multiple configurations ready for different experiments. Otherwise, all values set through this API are automatically saved to the library's working directory.

Args:

config_path (string): a full path including the configuration file name OR an empty string. If an empty string is provided, the configuration will be saved to the same location it was loaded from. If a full path is provided, escaped backslashes ("\\") or forward slashes ("/") should be used.

brainaccess.core.save_data(eeg_data, file_path, separator=' ')[source]

Saves data to a csv file.

Args:

eeg_data ( brainaccess.models.EEGData ): an object containing EEG data and information on the file.

file_path (string): a full path including the file name. Escaped backslashes ("\\") or forward slashes ("/") should be used.

separator (char): a separator character used to separate columns in the csv file.

Returns:

int: True if the data was successfully saved, False if an I/O error occurred.

brainaccess.core.save_preprocessing_config(config_path)[source]

Saves signal preprocessing parameters to a configuration file.

Args:

config_path (string): a full path including the configuration file name OR an empty string. If a full path is provided, escaped backslashes ("\\") or forward slashes ("/") should be used. If an empty string is provided, the configuration will be saved to the same location it was loaded from.

brainaccess.core.set_buffer_size(buffer_size)[source]

Sets the size of internal buffer used in acquisition.

Requires initialization.

Args:

buffer_size (int): buffer length (min 1, max 400000), the maximum number of samples that can be stored in the internal buffer.
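
For example, to size the buffer so it holds a fixed history length, one could compute the sample count from the sampling frequency and clamp it to the stated limits. The helper name and the clamping are illustrative assumptions, not part of the API:

```python
def required_buffer_size(fs_hz, seconds, minimum=1, maximum=400000):
    """Samples needed to hold `seconds` of data at `fs_hz`, clamped to the allowed range."""
    size = int(fs_hz * seconds)
    return max(minimum, min(size, maximum))

# one minute of history at 250 Hz
size = required_buffer_size(250.0, 60)
# ba.set_buffer_size(size)  # would then be passed to the library
```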

brainaccess.core.set_channel_labels(indices, labels)[source]

Updates the labels for certain channels.

Requires initialization.

Args:

indices (list[int]): A list of channel numbers for which the labels should be updated.

labels (list[string]): A list of labels corresponding to the indices. A label might be an empty string, meaning that the label needs to be removed.

brainaccess.core.set_channels(channel_idx, bias_channel_idx)[source]

Turns on the specified channels in BrainAccess EEG hardware

Requires initialization and inactive acquisition.

Args:

channel_idx (list[int]): a list containing channel numbers to turn on. BrainAccess Core always handles and holds channel indices in ascending order based on index integer value. This includes acquisition data (obtained from baCore_getData() methods), which is always provided in ascending order of channel indices. We recommend that users keep indices in ascending order in their code as well, to prevent confusion and mishaps.

bias_channel_idx (list[int]): a list containing channel numbers that should be used in bias feedback. As with channel_idx, we recommend providing indices in ascending order.

Returns:
int:
0 if successful,
3 if setting channels was attempted while acquisition is in progress (not allowed),
4 if the library was not (successfully) initialized.
brainaccess.core.set_detrend_settings(detrend_settings)[source]

Sets parameters for signal detrending algorithm.

Args:

detrend_settings (brainaccess.models.DetrendSettings): an object holding parameters for detrend algorithm.

brainaccess.core.set_filter_settings(filters)[source]

Sets filters for signal filtering.

Args:

filters ( list [brainaccess.models.FilterSettings] ): a list of filters, where each filter is an object containing filter parameters.

brainaccess.core.set_preprocessing_sampling_frequency(sampling_frequency)[source]

Sets sampling frequency for preprocessing functions.

Args:

sampling_frequency (double): sampling frequency in Hz.

brainaccess.core.set_preprocessing_settings(fs, detrend_settings=None, filters=[], window_settings=None)[source]

Sets parameters for signal preprocessing.

Args:

fs (float): sampling frequency in Hz,

detrend_settings (brainaccess.models.DetrendSettings): an object holding parameters for detrend algorithm.

filters ( list [brainaccess.models.FilterSettings] ): a list of filters, where each filter is an object containing filter parameters.

window_settings (brainaccess.models.WindowSettings): an object holding temporal window parameters.

brainaccess.core.set_sampling_frequency(sampling_frequency)[source]

Sets the sampling frequency for the BrainAccess EEG hardware.

Args:

sampling_frequency (double): sampling frequency in Hz. It can either be 250.0 Hz or 125.0 Hz.

Returns:

double: The actual frequency that has been set in the hardware in Hz.

brainaccess.core.set_window_settings(window_settings)[source]

Sets parameters for temporal window.

Args:

window_settings (brainaccess.models.WindowSettings): an object holding parameters for temporal window.

brainaccess.core.start_acquisition()[source]

Starts acquiring data from BrainAccess EEG hardware and filling an internal buffer of BrainAccess Core library.

Requires initialization.

brainaccess.core.stop_acquisition()[source]

Stops acquiring data from BrainAccess EEG hardware.

Requires initialization.

Data Models

Here BrainAccess Core data models are described.

For convenience, all of the below classes can be imported as

from brainaccess import <class_name>

skipping the .models module specification.

class brainaccess.models.DetrendSettings[source]

A python object containing parameters for signal detrend algorithm.

Attributes:

is_active (bool): True if used, False otherwise.

polynomial_degree (int): Order of polynomial curve used to remove data trend.

class brainaccess.models.EEGData[source]

A python object holding EEG data. Used in data saving/loading.

Attributes:

sampling_frequency (double): data sampling frequency in Hz.

labels (list [string]): channel labels.

measurements (numpy array [float64]): EEG data (number of channels x time points).

accelerometer_data (numpy array [float64]): accelerometer data (3 x time points).

class brainaccess.models.EEGDataStream[source]

A python object defining a collection of BrainAccess EEG hardware measurement samples and stream status information.

Attributes:

num_samples (int): number of acquired data samples.

stream_disrupted (bool): True if the data stream with BrainAccess EEG hardware was disrupted and some samples might be lost, False otherwise.

reading_is_too_slow (bool): True if data is read too slowly (i.e. getData methods are called too infrequently). In this case internal BrainAccess Core buffer gets full and some data might be lost. False otherwise.

connection_lost (bool): True if the WiFi connection with BrainAccess EEG hardware has been lost, False if everything is OK.

measurements (numpy array [float64]): EEG data (number of channels x time points), values in uV. Note that channel order in measurements is always ascending based on channel indices. We recommend that users store channel indices in ascending order in their code as well, to prevent confusion and mishaps.

lead_status (numpy array [int]): lead statuses of active channels for each time point (number of channels x time points), values: 0-connected, 1-not connected. Note that channel order in lead_status is always ascending based on channel indices. We recommend that users store channel indices in ascending order in their code as well, to prevent confusion and mishaps.

accelerometer_data (numpy array [float64]): accelerometer data (3 x time points), values in fraction of g.

class brainaccess.models.FilterSettings[source]

A python object containing signal filter parameters.

Attributes:

is_active (bool): True if used, False otherwise.

type (string): filter type, possible options: bandpass, bandstop, highpass and lowpass.

order (int): filter order.

min_frequency (double): low cut-off frequency for band filters and cut-off frequency for highpass filters.

max_frequency (double): high cut-off frequency for band filters and cut-off frequency for lowpass filters.

class brainaccess.models.FourierTransform[source]

A python object containing spectrum data calculated using FFT.

Attributes:

frequencies (numpy array [float64]): a frequency axis for calculated spectrum.

spectrum (numpy array [complex]): the calculated spectrum.

magnitudes (numpy array [float64]): the magnitude of the spectrum, normalized to the number of samples.

phases (numpy array [float64]): the phase values of the spectrum.
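
To make the normalization convention of `magnitudes` concrete, here is a minimal pure-Python DFT sketch that divides each magnitude by the number of samples. This is illustrative only; brainaccess.core.fourier_transform() computes this internally and far more efficiently via FFT:

```python
import cmath

def dft_magnitudes(signal):
    """Naive DFT magnitudes, normalized to the number of samples N."""
    n = len(signal)
    spectrum = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    return [abs(x) / n for x in spectrum]

# a constant signal of ones puts all energy in the DC bin, with magnitude 1.0
mags = dft_magnitudes([1.0] * 8)
```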

class brainaccess.models.WindowSettings[source]

A python object containing temporal window parameters.

Attributes:

is_active (bool): True if used, False otherwise.

type (string): window type, possible options: tukey or hann.

tukey_alpha (double): Tukey window parameter.

BrainAccess Core Usage Examples

Continuous data acquisition demo

"""
Demo of continuous acquisition of EEG data using brainaccess module.

It demonstrates how to set up acquisition and how to pass a continuous stream of
data into a local buffer. It also demonstrates some preprocessing capabilities
of the module.

It should be run from the same directory where BACore.dll and BACommon.dll are,
or the path to these files should be added to the Python path.
Connect to BrainAccess EEG hardware over WIFI before running this script.
"""

import brainaccess as ba
import numpy as np
from sys import exit

# Initialize the module
res = ba.initialize()
if res != 0:
    print("Could not initialize BrainAccess Core library.")
    print("Please see library logs for details.")
    exit()

print("Initialization successful!")


# Get some info on the hardware status
# check the battery level
battery_level = ba.get_battery_level()
print("Battery level is: ", battery_level, "%")
if battery_level < 20:
    print("Please charge the battery, it is getting low.")

# get total number of available channels in the connected BrainAccess EEG hardware.
tot_num_channels = ba.get_num_available_channels()
print("Total number of available channels: ", tot_num_channels)


# Acquisition parameters

# Note: these can also be saved and loaded from configuration file using save_config and load_config.
# Look for configuration file BA_core_config.json located in the same directory where BACore.dll is.
# It stores the current state of acquisition parameters.

# channel numbers corresponding to channels used in acquisition
channel_numbers = [0, 1]

# labels for the channels
electrode_labels = ["Fp1", "Fp2"]

# channels used in bias feedback
bias_channel_numbers = [0]

# get the default sampling frequency; it is also possible to set it, see the BrainAccess documentation.
sampling_frequency = ba.get_sampling_frequency()

# set the channels
ba.set_channels(channel_numbers, bias_channel_numbers)

# set channel labels
ba.set_channel_labels(channel_numbers, electrode_labels)


# Create buffers to store the EEG data

# acquisition length in seconds and time axis
t = 5
time = np.arange(0, t, 1.0/sampling_frequency)

# buffer to store data for different channels
data = np.zeros((len(channel_numbers), len(time)))

# lead status buffer, lead status indicates if the electrode is connected to person
lead_status = np.zeros((len(channel_numbers), len(time)))

# buffer to store accelerometer data
data_accel = np.zeros((3, len(time)))


# Signal preprocessing parameters

# Note: these can also be saved and loaded from a configuration file using save_preprocessing_config
# and load_preprocessing_config. Look for the configuration file BA_prepr_config.json located in the
# same directory where BACore.dll is. It stores the current state of preprocessing parameters.

# set sampling frequency for preprocessor
ba.set_preprocessing_sampling_frequency(sampling_frequency)

# add trend remover to remove DC offset and drift from EEG signals
detrend = ba.DetrendSettings()  # will get default settings for detrender
ba.set_detrend_settings(detrend)

# add band pass filter (more filters can be added to the preprocessor as necessary)
filt = ba.FilterSettings()
filt.type = "bandpass"
filt.order = 2
filt.min_frequency = 1
filt.max_frequency = 40
ba.set_filter_settings([filt])

# create a buffer that would store processed signal
data_processed = np.zeros((len(channel_numbers), len(time)))


# Start acquisition (the library will start pulling samples from the BrainAccess EEG hardware)
ba.start_acquisition()

# Note: continuous acquisition loop would be typically run in a separate thread and
# data buffers would be shared with the main or other threads. Main thread can
# then continuously access data and use for different purposes such as plotting,
# signal processing, BCI inference, etc.

# number of samples to pull per single acquisition
num_samples_to_acquire = 20
# total number of samples to acquire
tot_num_samples = t*sampling_frequency
scounter = 0
print("Acquisition started!")
while True:
    # ask for certain number of samples (the function returns EEGDataStream object,
    # see brainaccess.models for more info)
    eeg_data = ba.get_data_samples(num_samples_to_acquire)
    print("Got " + str(eeg_data.num_samples) + " samples")

    # check connection and stream status
    if eeg_data.connection_lost:
        print('Connection Lost!')
        exit()

    if eeg_data.stream_disrupted:
        print("BrainAccess Core library missed some samples from the EEG hardware, " + \
            "check if there is a good WIFI connection with the device.")

    if eeg_data.reading_is_too_slow:
        print("Acquire data quicker as the internal buffer of BrainAccess Core has filled up.")

    # shift the buffers and append the newly retrieved data
    for m in range(0, len(channel_numbers)):
        data[m] = np.roll(data[m], -eeg_data.num_samples)
        data[m, -eeg_data.num_samples:] = eeg_data.measurements[m]

        lead_status[m] = np.roll(lead_status[m], -eeg_data.num_samples)
        lead_status[m, -eeg_data.num_samples:] = eeg_data.lead_status[m]

    for m in range(0, 3):
        data_accel[m] = np.roll(data_accel[m], -eeg_data.num_samples)
        data_accel[m, -eeg_data.num_samples:] = eeg_data.accelerometer_data[m]

    # preprocess the eeg data
    for m in range(0, len(channel_numbers)):
        data_processed[m] = ba.preprocess(data[m])

    # update the counter and check for stop criterion
    scounter += eeg_data.num_samples
    if scounter > tot_num_samples:
        break

# Note: after this loop executes the data buffers data, lead_status and data_accel
# will be filled with data acquired over the last t seconds.

# stop acquisition (the library will stop pulling data)
ba.stop_acquisition()
print("Acquisition stopped.")

Single data acquisitions demo

"""
Demo of single acquisitions of EEG data using brainaccess module.

It demonstrates how to make time critical single acquisitions. Time critical means
that data is recorded from the time point of the request. This is useful, for example,
when it is needed to make a record straight after the visual or audio stimulus. It
also shows how the library can be used to save recorded data in csv format. 

It should be run from the same directory where BACore.dll and BACommon.dll are,
or the path to these files should be added to the Python path.
Connect to BrainAccess EEG hardware over WIFI before running this script.
"""
import brainaccess as ba
import numpy as np
from sys import exit

# Initialize the module
res = ba.initialize()
if res != 0:
    print("Could not initialize BrainAccess Core library.")
    print("Please see library logs for details.")
    exit()

print("Initialization successful!")

# Get some info on the hardware status
# check the battery level
battery_level = ba.get_battery_level()
print("Battery level is: ", battery_level, "%")
if battery_level < 20:
    print("Please charge the battery, it is getting low.")

# get total number of available channels in the connected BrainAccess EEG hardware.
tot_num_channels = ba.get_num_available_channels()
print("Total number of available channels: ", tot_num_channels)


# Acquisition parameters

# Note: these can also be saved and loaded from configuration file using save_config and load_config.
# Look for configuration file BA_core_config.json located in the same directory where BACore.dll is.
# It stores the current state of acquisition parameters.

# channel numbers corresponding to channels used in acquisition
channel_numbers = [0, 1]

# labels for the channels
electrode_labels = ["O1", "O2"]

# channels used in bias feedback
bias_channel_numbers = [0]

# get the default sampling frequency; it is also possible to set it, see the BrainAccess documentation.
sampling_frequency = ba.get_sampling_frequency()

# set the channels
ba.set_channels(channel_numbers, bias_channel_numbers)

# set channel labels
ba.set_channel_labels(channel_numbers, electrode_labels)


# Record parameters

# record length in milliseconds
t_record = 500

# number of acquisitions
num_acquisitions = 10

# initialize data structure required for data saving
eeg_data = ba.EEGData()
eeg_data.sampling_frequency = sampling_frequency
eeg_data.labels = electrode_labels


# Start acquisition (the library will start pulling samples from the BrainAccess EEG hardware)
ba.start_acquisition()

# make requested time critical acquisitions for num_acquisitions number of times
for n in range(num_acquisitions):
    print('Acquisition number: ', n)

    # Note: here should be a call for stimulus. After executing the stimulus call,
    # the data request should be made straight away

    # request data (_from_now means that all the data already in the internal buffer
    # will be discarded and only data from this time moment will be recorded).
    # this call blocks until the required number of samples are collected
    eeg_data_stream = ba.get_data_from_now(t_record)

    # Save the record to current working directory in csv format

    # generate file name
    fname = 'record'+str(n)+'.csv'

    # pass data for saving
    eeg_data.measurements = eeg_data_stream.measurements
    eeg_data.accelerometer_data = eeg_data_stream.accelerometer_data

    # save data
    ba.save_data(eeg_data, fname, ' ')

# stop acquisition (the library will stop pulling data)
ba.stop_acquisition()
print("Acquisition stopped.")

BrainAccess BCI Library

Here all of the available BrainAccess BCI algorithms are described.

We highly recommend reading the demo code provided with your BrainAccess installation, before trying to use any of the algorithms.

For convenience, all of the below modules can be imported as

from brainaccess import <module>

skipping the .bcilibrary module specification.

Motion Classifier

This section describes functions available to control the Motion Classifier algorithm.

Motion Classifier recognizes facial motions performed by the user:
calm state (when no action is performed),
single eye blink,
double eye blink,
eye movement up,
eye movement down,
teeth grind.
These motions can then be mapped to any software for different kinds of control actions.
(See Motion Classifier Browser Controller demo for an example).

The algorithm expects exactly 2 EEG channels placed at Fp1 and Fp2 positions.

The workflow of using Motion Classifier is as follows:
1. Call brainaccess.bcilibrary.motion_classifier.initialize(), ensure it was successful.
2. Call brainaccess.bcilibrary.motion_classifier.start() to start collecting data.
3. Whenever a prediction is needed call brainaccess.bcilibrary.motion_classifier.predict().
4. Optionally call brainaccess.bcilibrary.motion_classifier.discard_data() (see description).
5. When the algorithm is no longer needed call brainaccess.bcilibrary.motion_classifier.stop().

See all function descriptions for more information.

brainaccess.bcilibrary.motion_classifier.discard_data()[source]

Discards currently collected data, data collection continues from scratch.

This can be useful when a pause between predictions is needed. For example, it can be used after the user performs some action and it triggers a response in GUI. Then discarding data would allow to remove data that was collected while the user was waiting for the GUI response.

brainaccess.bcilibrary.motion_classifier.initialize(fp1Idx, fp2Idx)[source]

Initializes Motion Classifier’s internal structures, initializes the BrainAccess Core library, and attempts to connect to BrainAccess EEG hardware.

Note that only Fp1 and Fp2 channels are suitable for this algorithm.

Args:

fp1Idx (int): index of the channel that is in the Fp1 position.

fp2Idx (int): index of the channel that is in the Fp2 position.

Returns:

bool: True on success, False on error.

brainaccess.bcilibrary.motion_classifier.predict()[source]

Predicts which motion was performed by the user

The prediction is made from the last 2.5 seconds of data. If not enough data is available for a prediction, this function waits for the required data and only then predicts.

When calling brainaccess.bcilibrary.motion_classifier.predict() repeatedly, the previously collected data is reused, meaning that predictions can be made as often as needed.

Note that the accuracy of the algorithm depends on whether the collected EEG data captures the whole motion. This means that if the Motion Classifier predicts on data that only contains the start of the motion, the prediction will not be accurate.

To counteract this, we suggest predicting often and accepting the prediction as trustworthy only when the prediction is the same for some number of times in a row. See Motion Classifier Browser Controller demo for an example.

Returns:
Tuple(list[double], list[string]):
First item is the probabilities that each of the possible motions was performed. Probabilities are given in the following order:
Calm (no action was performed), Blink (single blink), Double Blink, Teeth (teeth grind), Eyes Up (quick eye movement upwards), Eyes Down (eye movement downwards).
Second item is the names of the motion classes (always provided in the same order, included for convenience):
'calm', 'blink', 'double_blink', 'teeth', 'eyes_up', 'eyes_down'
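
The debouncing strategy suggested above, accepting a prediction only after it repeats some number of times in a row, can be sketched as follows. The labels here stand in for successive predict() outputs, and all names are illustrative, not part of the API:

```python
from collections import deque

def stable_prediction(history, required=3):
    """Return the class name if the last `required` predictions agree, else None."""
    if len(history) >= required and len(set(list(history)[-required:])) == 1:
        return list(history)[-1]
    return None

# feed successive (simulated) prediction labels into a short history window
history = deque(maxlen=5)
for label in ["calm", "blink", "blink", "blink"]:
    history.append(label)
result = stable_prediction(history)  # "blink" after three identical predictions
```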
brainaccess.bcilibrary.motion_classifier.start()[source]

Starts EEG data collection.

Should be called before brainaccess.bcilibrary.motion_classifier.predict().

Returns:

bool: True on success, False on error.

brainaccess.bcilibrary.motion_classifier.stop()[source]

Stops EEG data collection

Should be called when Motion Classifier is no longer needed.

Returns:

bool: True on success, False on error.

Alpha Detector

This section describes functions available to control the Alpha Detector algorithm.

The Alpha Detector estimates the magnitude of alpha brainwaves currently produced by the user.

Alpha brainwaves are most pronounced when a person is completely relaxed with their eyes closed. This means that alpha wave detection can serve as an approximate measure of how relaxed a person is.

The algorithm expects 1-3 electrodes in the occipital region. We suggest using electrodes placed in O1 and O2 positions.

The workflow of using Alpha Detector is as follows:
1. Call brainaccess.bcilibrary.alpha_detector.initialize(), ensure it was successful.
2. Call brainaccess.bcilibrary.alpha_detector.start() to start collecting data.
3. Estimate the user's alpha frequency by calling brainaccess.bcilibrary.alpha_detector.estimate_alpha().
4. Whenever a prediction is needed call brainaccess.bcilibrary.alpha_detector.predict().
5. When the algorithm is no longer needed call brainaccess.bcilibrary.alpha_detector.stop().

See all function descriptions for more information.
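The workflow above can be sketched as below. `is_relaxed` is a hypothetical helper with an illustrative threshold (not part of the API); `run_alpha_detector` requires BrainAccess EEG hardware to actually run.

```python
import time

def is_relaxed(intensity, threshold=0.2):
    """Hypothetical helper: treat intensities above an illustrative
    threshold as 'relaxed' (predict() returns values between 0 and 1)."""
    return intensity >= threshold

def run_alpha_detector(channel_indices=(0, 1), seconds=30):
    # Sketch only: requires BrainAccess EEG hardware and the brainaccess package.
    from brainaccess.bcilibrary import alpha_detector
    if not alpha_detector.initialize(list(channel_indices)):
        raise RuntimeError("Alpha Detector initialization failed")
    alpha_detector.start()
    try:
        # The user should sit still with eyes closed for ~3 seconds here.
        if not alpha_detector.estimate_alpha():
            raise RuntimeError("Alpha frequency estimation failed")
        deadline = time.time() + seconds
        while time.time() < deadline:
            intensity = alpha_detector.predict()
            state = "relaxed" if is_relaxed(intensity) else "not relaxed"
            print(f"alpha intensity {intensity:.3f} -> {state}")
    finally:
        alpha_detector.stop()
```

The 0.2 threshold is arbitrary; per the predict() documentation below, weak alpha activity sits around 0.05 and strong activity can reach 0.5, so a suitable cutoff should be tuned per user.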

brainaccess.bcilibrary.alpha_detector.estimate_alpha()[source]

Estimates the alpha frequency for the user.

Each person has a slightly different alpha brainwave frequency, although it is usually in the range of 8-12 Hz. The algorithm works best when this frequency is first estimated for the current user. While this method runs, the user should sit still with their eyes closed for 3 seconds.

Returns:

bool: True on success, False on error.

brainaccess.bcilibrary.alpha_detector.initialize(channel_indices)[source]

Initializes Alpha Detector’s internal structures, initializes the BrainAccess Core library, and attempts to connect to BrainAccess EEG hardware.

Must be called before other Alpha Detector functions.

Args:

channel_indices ( list[int] ): indices of channels that should be used by the algorithm (we recommend using channels placed in the occipital region). Maximum allowed number of channels is 3.

Returns:

bool: True on success, False on error.

brainaccess.bcilibrary.alpha_detector.predict()[source]

Predicts alpha wave intensity from the latest EEG data.

The data used in previous predictions is reused if needed. If not enough data is available, this function first waits for the data and only then predicts.

Returns:

float: Estimate of alpha wave intensity as a value between 0 and 1. For weak alpha wave activity, expect values around 0.05. For strong alpha waves, expect values up to 0.5 (larger values are less common).

brainaccess.bcilibrary.alpha_detector.predict_from_now()[source]

Predicts alpha wave intensity from EEG data collected from the moment this function is called.

Previously collected data is discarded and the algorithm collects the required number of measurements before predicting. This can be useful if prediction should be made with data collected after some kind of event.

Returns:

float: The same alpha intensity evaluation as brainaccess.bcilibrary.alpha_detector.predict().
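A sketch of when predict_from_now() is preferable: measuring the response to a specific event. Both helpers below are hypothetical and assume the Alpha Detector has already been initialized and started as in the workflow above.

```python
def alpha_change(baseline, after_event):
    """Hypothetical helper: relative change in alpha intensity after an event."""
    if baseline == 0:
        return 0.0 if after_event == 0 else float("inf")
    return (after_event - baseline) / baseline

def measure_event_response():
    # Sketch only: requires BrainAccess EEG hardware.
    from brainaccess.bcilibrary import alpha_detector
    baseline = alpha_detector.predict()            # may reuse older data
    input("Close your eyes, then press Enter...")  # the 'event'
    # predict_from_now() discards previously collected data, so the result
    # reflects only EEG recorded after the event.
    response = alpha_detector.predict_from_now()
    return alpha_change(baseline, response)
```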

brainaccess.bcilibrary.alpha_detector.start()[source]

Starts EEG data collection.

Should be called before brainaccess.bcilibrary.alpha_detector.predict()

Returns:

bool: True on success, False on error.

brainaccess.bcilibrary.alpha_detector.stop()[source]

Stops EEG data collection.

Should be called when Alpha Detector is no longer needed.

Returns:

bool: True on success, False on error.

SSVEP Detector

This section describes functions available to control the SSVEP Detector algorithm.

SSVEP Detector can recognize steady-state visual evoked potentials (SSVEP): given a visual stimulus flickering at a constant frequency, SSVEP Detector can determine whether the user is currently looking at it.

This enables motionless control, where the user chooses an option by looking at the corresponding visual stimulus.

The algorithm expects 1-3 electrodes in the occipital region. We recommend using 2-3 electrodes in the O1, O2, and optionally Oz positions.

The workflow of using SSVEP Detector is as follows:

1. Call brainaccess.bcilibrary.ssvep_detector.initialize(), ensure it was successful.
2. Call brainaccess.bcilibrary.ssvep_detector.start() to start collecting data.
3. Whenever a prediction is needed call brainaccess.bcilibrary.ssvep_detector.predict().
4. When the algorithm is no longer needed call brainaccess.bcilibrary.ssvep_detector.stop().

See all function descriptions for more information.
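The workflow above can be sketched as below. `frequencies_ok` is a hypothetical helper encoding the frequency guidance from initialize()'s documentation; `run_ssvep` requires hardware and visual stimuli actually flickering at the given frequencies.

```python
def frequencies_ok(freqs, low=8.0, high=15.0, min_gap=1.0):
    """Hypothetical helper: check the documented guidance that stimulus
    frequencies lie roughly in the 8-15 Hz range, at least 1 Hz apart."""
    ordered = sorted(freqs)
    in_range = all(low <= f <= high for f in ordered)
    spaced = all(b - a >= min_gap for a, b in zip(ordered, ordered[1:]))
    return in_range and spaced

def run_ssvep(channel_indices=(0, 1, 2), frequencies=(8.0, 10.0, 12.0)):
    # Sketch only: requires BrainAccess EEG hardware and on-screen stimuli
    # flickering at the given frequencies.
    from brainaccess.bcilibrary import ssvep_detector
    if not frequencies_ok(frequencies):
        raise ValueError("Stimulus frequencies violate the 8-15 Hz / 1 Hz gap guidance")
    if not ssvep_detector.initialize(list(channel_indices), list(frequencies)):
        raise RuntimeError("SSVEP Detector initialization failed")
    ssvep_detector.start()
    try:
        index = ssvep_detector.predict()  # -1 on error
        if index >= 0:
            print(f"User is looking at the {frequencies[index]} Hz stimulus")
    finally:
        ssvep_detector.stop()
```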

brainaccess.bcilibrary.ssvep_detector.initialize(channel_indices, ssvep_frequencies)[source]

Initializes SSVEP Detector’s internal structures, initializes the BrainAccess Core library, and attempts to connect to BrainAccess EEG hardware.

Args:

channel_indices (list[int]): indices of channels that should be used by the algorithm. Electrodes should be placed in the occipital region (we suggest O1, O2, Oz).

ssvep_frequencies (list[float]): flicker frequencies that are presented as visual stimuli to the user. For best results, frequencies should be roughly in the 8-15 Hz range and the distance between them should be at least 1 Hz. Each frequency is associated with a class that the algorithm later predicts.

Returns:

bool: True if successful, False on error.

brainaccess.bcilibrary.ssvep_detector.predict()[source]

Predicts the class (frequency) on which the user is concentrating.

Note that EEG data does not instantly reflect a stimulus’s flicker frequency once the user switches focus to it; this takes a few seconds.

Returns:

int: Predicted class index, or -1 if an error occurred. Class indices correspond to the frequencies provided to brainaccess.bcilibrary.ssvep_detector.initialize() (in the same order as provided).
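Since class indices follow the order of the frequencies passed to initialize(), mapping a prediction back to its stimulus frequency is straightforward. The helper below is illustrative, not part of the API.

```python
def predicted_frequency(class_index, frequencies):
    """Hypothetical helper: map predict()'s class index back to the stimulus
    frequency passed to initialize(); None if predict() reported an error (-1)."""
    if class_index < 0:
        return None
    return frequencies[class_index]
```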

brainaccess.bcilibrary.ssvep_detector.start()[source]

Starts EEG data collection.

Should be called before brainaccess.bcilibrary.ssvep_detector.predict()

Returns:

bool: True if successful, False on error.

brainaccess.bcilibrary.ssvep_detector.stop()[source]

Stops EEG data collection.

Should be called when SSVEP Detector is no longer needed.

Returns:

bool: True if successful, False on error.