N-MNIST
Character and Object Recognition
Year: 2015
Category: Object Detection, Classification, and Tracking
Aliases: NMNIST, Neuromorphic MNIST
Full name: Neuromorphic MNIST
Citations: 423 (crossref)
Availability: Available across multiple online sharing platforms.
Size: 1.1 GB (compressed)
Paper
Garrick Orchard, Ajinkya Jayawant, Gregory Cohen, and Nitish Thakor,
Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades
Open Access
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
BibTeX
@article{Orchard_2015,
    title={Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades},
    volume={9},
    issn={1662-453X},
    url={http://journal.frontiersin.org/Article/10.3389/fnins.2015.00437/abstract},
    doi={10.3389/fnins.2015.00437},
    language={en},
    urldate={2023-10-06},
    journal={Frontiers in Neuroscience},
    author={Orchard, Garrick and Jayawant, Ajinkya and Cohen, Gregory K. and Thakor, Nitish},
    month={nov},
    year={2015},
}

Dataset Structure

The N-MNIST dataset is distributed as two separate zip files, one containing the training set and the other the testing set. The filenames match those of the original recordings, so each file can be cross-referenced against its source sample in the original dataset.

Matlab code is provided alongside the files to assist in reading the dataset. The format of the data in the binary files is as follows:

Each recording is a separate binary file consisting of a list of events. Each event occupies 40 bits as described below:

  • bit 39 - 32: X address (in pixels)
  • bit 31 - 24: Y address (in pixels)
  • bit 23: Polarity (0 for OFF, 1 for ON)
  • bit 22 - 0: Timestamp (in microseconds)
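The 40-bit layout above maps directly onto 5-byte records. As a rough Python sketch of a decoder (the function name is illustrative and not part of the supplied Matlab tooling):

```python
def decode_events(data):
    """Decode N-MNIST events from the raw bytes of one recording.

    Each 5-byte (40-bit) event, per the layout above:
      byte 0            -> X address in pixels (bits 39-32)
      byte 1            -> Y address in pixels (bits 31-24)
      byte 2, top bit   -> polarity, 0 = OFF, 1 = ON (bit 23)
      remaining 23 bits -> timestamp in microseconds (bits 22-0)
    Returns a list of (x, y, polarity, timestamp) tuples.
    """
    events = []
    for i in range(0, len(data) - len(data) % 5, 5):
        x = data[i]
        y = data[i + 1]
        polarity = (data[i + 2] >> 7) & 1
        timestamp = ((data[i + 2] & 0x7F) << 16) | (data[i + 3] << 8) | data[i + 4]
        events.append((x, y, polarity, timestamp))
    return events
```

A recording would then be parsed with something like `decode_events(open(path, "rb").read())`.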

A second Matlab script is provided that creates a stabilised version of the data.

Bias values for the camera are provided in the supplied Readme.txt file. The biases provided in that file are as follows:

Bias                Value
APSvrefL            3050 mV
APSvrefH            3150 mV
APSbiasOut           750 mV
APSbiasHyst          620 mV
CtrlbiasLP           620 mV
APSbiasTail          700 mV
CtrlbiasLBBuff       950 mV
TDbiasCas           2000 mV
CtrlbiasDelTD        400 mV
TDbiasDiffOff        620 mV
CtrlbiasSeqDelAPS    320 mV
TDbiasDiffOn         780 mV
CtrlbiasDelAPS       350 mV
TDbiasInv            880 mV
biasSendReqPdY       850 mV
TDbiasFo            2950 mV
biasSendReqPdX      1150 mV
TDbiasDiff           700 mV
CtrlbiasGB          1050 mV
TDbiasBulk          2680 mV
TDbiasReqPuY         810 mV
TDbiasRefr          2900 mV
TDbiasReqPuX        1240 mV
TDbiasPR            3150 mV
APSbiasReqPuY       1100 mV
APSbiasReqPuX        820 mV
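For programmatic use, the bias table could be mirrored as a plain mapping. This is only a sketch; the constant name is illustrative, while the keys and millivolt values follow the Readme.txt table above:

```python
# N-MNIST camera biases from the supplied Readme.txt, in millivolts.
NMNIST_BIASES_MV = {
    "APSvrefL": 3050, "APSvrefH": 3150, "APSbiasOut": 750,
    "APSbiasHyst": 620, "CtrlbiasLP": 620, "APSbiasTail": 700,
    "CtrlbiasLBBuff": 950, "TDbiasCas": 2000, "CtrlbiasDelTD": 400,
    "TDbiasDiffOff": 620, "CtrlbiasSeqDelAPS": 320, "TDbiasDiffOn": 780,
    "CtrlbiasDelAPS": 350, "TDbiasInv": 880, "biasSendReqPdY": 850,
    "TDbiasFo": 2950, "biasSendReqPdX": 1150, "TDbiasDiff": 700,
    "CtrlbiasGB": 1050, "TDbiasBulk": 2680, "TDbiasReqPuY": 810,
    "TDbiasRefr": 2900, "TDbiasReqPuX": 1240, "TDbiasPR": 3150,
    "APSbiasReqPuY": 1100, "APSbiasReqPuX": 820,
}
```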