DVS09
DVS Sample Data
| Year | 2018 |
| Modalities | |
| Category | Benchmarking, SNN Training |
| Tags | |
| Sensors | |
| Citations | 161 (Crossref), 252 (Scholar) |
| Available online | ✓ |
| Ground truth | ✗ |
| Real data | ✓ |
| Simulated data | ✗ |
| Frames | ✗ |
| Biases | ✗ |
| Stereo | ✗ |
| Distribution | |
| Format | |
| Availability | Sample dataset files |
| Size | 0.5 GB (compressed) |
Dataset links
| Name | URL | Format | Available |
|---|---|---|---|
| Google Drive | https://drive.google.com/open?id=0BzvXOhBHjRhecFYzN3Q3ZlF2WVU | Binary | ✓ |
Other links
| Paper | https://www.zora.uzh.ch/id/eprint/17620/ |
| Project page | https://docs.google.com/document/d/16b4H78f4vG_QvYDK2Tq0sNBA-y7UFnRbNnsGbD1jJOg/edit#heading=h.d6xhhyjtb0d9 |
Paper
Frame-Free Dynamic Digital Vision
Open Access: ✗
Conventional image sensors produce massive amounts of redundant data and are limited in temporal resolution by the frame rate. This paper reviews our recent breakthrough in the development of a high-performance spike-event based dynamic vision sensor (DVS) that discards the frame concept entirely, and then describes novel digital methods for efficient low-level filtering and feature extraction and high-level object tracking that are based on the DVS spike events. These methods filter events, label them, or use them for object tracking. Filtering reduces the number of events but improves the ratio of informative events. Labeling attaches additional interpretation to the events, e.g. orientation or local optical flow. Tracking uses the events to track moving objects. Processing occurs on an event-by-event basis and uses the event time and identity as the basis for computation. A common memory object for filtering and labeling is a spatial map of most recent past event times. Processing methods typically use these past event times together with the present event in integer branching logic to filter, label, or synthesize new events. These methods are straightforwardly computed on serial digital hardware, resulting in a new event- and timing-based approach for visual computation that efficiently integrates a neural style of computation with digital hardware. All code is open-sourced in the jAER project (jaer.wiki.sourceforge.net).
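The "spatial map of most recent past event times" mentioned in the abstract is the core data structure behind jAER-style event filters. A minimal sketch of that pattern, in the spirit of a background-activity filter (the function name, neighborhood size, and default time window here are illustrative assumptions, not the paper's exact parameters):

```python
import numpy as np

def filter_background_activity(events, dt_us=10_000, width=128, height=128):
    """Keep only events supported by recent nearby activity.

    `events` is an iterable of (t, x, y, polarity) tuples with t in
    microseconds. An event passes if any pixel in its 3x3 neighborhood
    fired within the last `dt_us` microseconds -- the "spatial map of
    most recent past event times" pattern described in the abstract.
    """
    # Padded timestamp map so border pixels need no special-casing;
    # initialized far in the past so isolated first events are dropped.
    last_ts = np.full((height + 2, width + 2), -10**12, dtype=np.int64)
    kept = []
    for t, x, y, p in events:
        px, py = x + 1, y + 1  # coordinates in the padded map
        neighborhood = last_ts[py - 1:py + 2, px - 1:px + 2]
        if t - neighborhood.max() <= dt_us:
            kept.append((t, x, y, p))
        last_ts[py, px] = t  # record this event regardless of the decision
    return kept
```

The per-event cost is a constant-size neighborhood lookup and an integer comparison, which is what makes this style of "integer branching logic" cheap on serial digital hardware.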
BibTeX
@article{delbruck_frame-free_2008,
year={2008},
author={Delbruck, Tobi},
language={en},
doi={10.5167/uzh-17620},
title={Frame-free dynamic digital vision},
}
Dataset Structure
This event camera dataset was recorded with the DVS128 from the Sensors Group, the first practical event camera.
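The download above is listed as a binary file, which for DVS128 recordings typically means a jAER `.aedat` file. A minimal reader sketch, under assumptions that should be checked against the jAER format documentation for this particular sample data (AEDAT 2.0: a text header of `#`-prefixed lines followed by big-endian `(uint32 address, uint32 timestamp)` pairs, with the DVS128 address layout polarity = bit 0, x = bits 1-7, y = bits 8-14; older files may use the 6-byte AEDAT 1.0 layout instead):

```python
import struct

def read_aedat2(path):
    """Read an assumed AEDAT 2.0 file into a list of (t_us, x, y, polarity).

    Assumptions (verify against the jAER docs for your file version):
    '#'-prefixed text header, then big-endian uint32 (address, timestamp)
    pairs; DVS128 address bits: 0 = polarity, 1-7 = x, 8-14 = y.
    """
    with open(path, 'rb') as f:
        # Skip '#'-prefixed header lines, then rewind to the binary payload.
        while True:
            pos = f.tell()
            line = f.readline()
            if not line.startswith(b'#'):
                f.seek(pos)
                break
        data = f.read()
    events = []
    for i in range(0, len(data) - 7, 8):
        addr, ts = struct.unpack_from('>II', data, i)
        events.append((ts, (addr >> 1) & 0x7F, (addr >> 8) & 0x7F, addr & 1))
    return events
```

In practice the jAER tool itself, or a maintained library such as dv-processing or AedatTools, is the safer way to decode these files; this sketch only illustrates the assumed on-disk layout.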