Source code and datasets

For a complete overview see: PRS GitHub Portal

 

U-TILISE: A Sequence-to-sequence Model for Cloud Removal in Optical Satellite Time Series (Stucker et al., IEEE TGRS, 2023)

Code available

 

Towards Accurate Instance Segmentation in Large-scale LiDAR Point Clouds (Xiang et al., 2023)

Code available

A Review of Panoptic Segmentation for Mobile Mapping Point Clouds (Xiang et al., 2023)

Code available

POMELO: Fine-Grained Population Mapping from Coarse Census Counts and Open Geodata (Metzger et al., 2022)

Demo App

Code available (Sample Code)

 

ImpliCity: City Modeling from Satellite Images with Deep Implicit Occupancy Fields (Stucker et al., ISPRS Annals, 2022)

Code available


ResDepth: A Deep Residual Prior for 3D Reconstruction from High-resolution Satellite Images (Stucker and Schindler, ISPRS Journal, 2022)

Code available
 

PC2WF: 3D Wireframe Reconstruction from Raw Point Clouds (Liu et al., ICLR 2021)

Code available
 

Gating revisited: Deep multi-layer RNNs that can be trained (Turkoglu et al., IEEE TPAMI, 2021)

Code available
 

Crop mapping from image time series: deep learning with multi-scale label hierarchies (Turkoglu et al., RSE, 2021)

Code available
Dataset
 

Crop Classification Under Varying Cloud Cover With Neural Ordinary Differential Equations (Metzger et al., IEEE TGRS, 2021)

Code available
 

In the light of feature distributions: moment matching for neural style transfer (Kalischek, Wegner, Schindler, CVPR 2021)

Code available


PREDATOR: Registration of 3D Point Clouds with Low Overlap (Huang, Gojcic et al., CVPR 2021)

Code available
 

Minimal Rolling Shutter Absolute Pose with Unknown Focal Length and Radial Distortion (Kukelova, Albl et al., ECCV 2020)

Code available
 

Indoor Scene Recognition in 3D (Huang, Usvyatsov, Schindler, IROS 2020)

Code available
 

3D fluid flow estimation with integrated particle reconstruction (Lasinger et al., IJCV 2020)

Code available


Lake Ice Detection from Sentinel-1 SAR with Deep Learning (Tom et al., 2020)

Code available
Pre-Trained Model
 

Lake Detection and Lake Ice Monitoring with Webcams and Crowd-sourced Images (Prabha et al., DeepLab v3+ network, 2020)

Code available
Photi-LakeIce dataset
 

Country-wide high-resolution vegetation height mapping with Sentinel-2 (Lang et al., Remote Sensing of Environment Vol. 233, 2019)

Gabon canopy height map 2017 (GeoTIFFs)
Explore on Google Earth Engine
 

Reconstruction of 3D flight trajectories from ad-hoc camera networks (Albl et al., IROS 2020)

Code available
Dataset


Practical optimal registration of terrestrial LiDAR scan pairs (Cai et al., ISPRS Journal 2019)

Code available


Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer (Ranftl, Lasinger et al., TPAMI 2020)

Code available
Video
Dataset Creation


Lake Ice Monitoring with Webcams (Tiramisu network, Xiao et al., 2018)

Code available


DSen2: Super-resolution of Sentinel-2 with deep networks (Lanaras et al., ISPRS Journal 2018)

Code available


Learning Aerial Image Segmentation From Online Maps (Kaiser et al., IEEE TGRS 2017)

Code available


Semantically Informed Multiview Surface Refinement (Blaha et al., ICCV 2017)

Code available


SupReME: Super-Resolution for Multispectral Multiresolution Estimation (Lanaras et al., CVPR EarthVision Workshop 2017)

Code available


Massively Parallel Multiview Stereopsis by Surface Normal Diffusion (Galliani et al., ICCV 2015)

Code available


Hyperspectral Image Super Resolution (Lanaras et al., ICCV 2015)

Code available


K-4PCS Pairwise Registration (Theiler et al., ISPRS Journal, 2014)

Project page


Globally Consistent Point Cloud Registration (Theiler et al., ISPRS Journal, 2015)

Project page


Piecewise rigid scene flow (Vogel et al., IJCV 2015)

Code available


Dataflow code (Vogel et al., GCPR 2013)

Code available


RQE Feature Extraction (Tokarczyk et al., TGRS 2015) 

Code and documentation


Random forest template library - Stefan Walk

Code and documentation (ZIP, 14 KB)


VocMatch: Efficient Multiview Correspondence for Structure from Motion (Havlena et al., ECCV 2014)

Code available


Predicting Matchability (Hartmann et al., CVPR 2014)

Code and documentation


Are Cars Just 3D Boxes? - Jointly Estimating the 3D Shape of Multiple Objects (Zia et al., CVPR 2014)
Towards Scene Understanding with Detailed 3D Object Representations (Zia et al., IJCV 2015)

Contact Zeeshan Zia <zia.zeeshan at outlook dot com> for any questions.
Code and trained models, Evaluation Script and Test set


Explicit Occlusion Modeling for 3D Object Class Representations (Zia et al., CVPR 2013)

Contact Zeeshan Zia <mzia at ethz dot ch> for any questions.

Test set (260 MB, ~7 mins download time), Training set for first layer DPMs (1.5 GB, ~30 mins download time), Code and trained models


Detailed 3D Representations for Object Recognition and Modeling (Zia et al., TPAMI 2013, 3dRR 2011)

Annotations (download link) used in our '3D geometric models for objects' papers:

- Part-level annotations on the 3D Object Classes dataset (Savarese et al., ICCV 2007)
- Point correspondences for ultra-wide baseline matching in the same dataset


Multi-Target Tracking

Project page with download links (maintained by Anton Andriyenko)

Data used in a series of papers on multi-target tracking, comprising annotations created by manually placing bounding boxes around pedestrians and interpolating their trajectories between key frames.

CVPR 2012 code

CVPR 2011 code


GPU-SURF

Project page with source code (hosted by MPII / Christian Wojek)

A GPU implementation of the popular SURF method in C++/CUDA, which achieves real-time performance even on HD images. Includes interest point detection, descriptor extraction, and basic descriptor matching.


Action Snippets

MATLAB code (including Weizmann test data)

The code used for our Action Snippets paper on activity recognition, published at CVPR'08, along with some test data to experiment with. If you use this data, please cite the corresponding paper as source.


Pedestrian Motion Models

Dataset (maintained by Stefano Pellegrini)

Data used in a paper on an advanced motion model for tracking, which takes into account interactions between pedestrians, inspired by social force models used for crowd simulation (joint work with Stefano Pellegrini, Andreas Ess, and Luc van Gool). If you use this data, please cite the corresponding paper as source.


Tracking with a Mobile Observer

Project page with download links (maintained by Andreas Ess).

Data used in a series of papers (CVPR'08, ICRA'09, PAMI'09) on pedestrian and vehicle tracking with a moving stereo rig, by Andreas Ess, Konrad Schindler, Bastian Leibe and Luc van Gool. Synchronized stereo videos observing busy inner-city streets with large and varying numbers of pedestrians. If you use this data, please cite the above-mentioned papers as source.


Coupled Detection and Tracking

Three pedestrian crossing sequences (91 MByte)

Data used in the ICCV'07 paper Coupled Detection and Trajectory Estimation for Multi-Object Tracking by Bastian Leibe, Konrad Schindler and Luc van Gool. Monocular videos observing pedestrian crossings with large and varying numbers of pedestrians in challenging conditions (natural lighting, occlusions, background changes). If you use this data, please cite the above-mentioned paper as source.


Shape-based Object Detection

4x50 closed shapes (swans, hats, starfish, applelogos)

A database of object categories defined by their shape. Each category has 50 images, which contain no instances of the remaining classes, but sometimes contain multiple instances of the same category. The swan and applelogo categories are extended versions of Vitto Ferrari's ETHZ shape classes. The images were collected from Google image search and Flickr, and contain significant amounts of background clutter. The category templates were drawn by hand. For each image there is:
- XX.jpg (the original colour or grayscale image in JPEG format)
- XX_srmseg.tif (an over-segmentation created with the SRM method of Nock and Nielsen)
- XX_CLASS.groundtruth (manually annotated ground-truth bounding boxes as ASCII text; see the parsing sketch below)
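
A minimal parsing sketch in Python (an illustration only, not part of the release; it assumes each line of a .groundtruth file stores one box as four whitespace-separated numbers, so adapt the column order to the actual files):

# Hypothetical reader for the XX_CLASS.groundtruth files described above.
# Assumption: one bounding box per line, four whitespace-separated numbers
# (e.g. x_min y_min x_max y_max).
from pathlib import Path

def read_groundtruth(path):
    boxes = []
    for line in Path(path).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 4:  # skip empty or malformed lines
            boxes.append(tuple(float(v) for v in parts[:4]))
    return boxes

# Example usage (hypothetical file name following the pattern above):
# boxes = read_groundtruth("01_swans.groundtruth")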

Source code for detection by elastic shape matching (Schindler and Suter, Pattern Recognition 2013)

Extended ETHZ shape classes (swans, bottles, mugs, giraffes, applelogos, hats, starfish)

A larger database of shape categories, created by merging the above dataset with the ETHZ shape classes of Vitto Ferrari. This is (almost) a superset of each of the two older databases, but has not yet been used by either of us. Please refer to the README for details on the differences and how to use the new dataset.


n-View Multibody Structure and Motion

spinningwheels.mat (synthetic test sequence. 5 frames, 4 objects)
boxes.mat (piles of boxes on a table. 10 frames, 2 objects)
lightbulb.mat (textured objects on neutral background. 10 frames, 2-3 objects)
flowershirt.mat (a person moves through a room, the camera also moves. 5 frames, 2 objects)
deliveryvan.mat (movie sequence, courtesy of Andrew Zisserman. 11 frames, 1-2 objects)

Each MATLAB workspace contains the three variables K, X, and img.
- K is the (3 x 3) camera calibration matrix.
- X is an (N x 2 x F) array of image points (N ... number of image points, F ... number of frames).
- img is the image sequence, an (m x n x F) array of images of size (m x n).

Cameras were calibrated off-line, except for the delivery van, for which an approximate focal length was guessed. If a point is not visible in a given frame, it is marked with the imaginary unit i (the square root of -1). All tracks were produced with the standard implementation of the KLT tracker. In all sequences, intermediate frames between the given ones were dropped after feature tracking.
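
A minimal loading sketch in Python (the workspaces are plain MATLAB files, so scipy can read them; the variable names follow the description above, everything else is an assumption):

# Load one sequence and build a per-frame visibility mask from the
# imaginary-number convention described above (assumes numpy and scipy).
import numpy as np
from scipy.io import loadmat

data = loadmat("spinningwheels.mat")
K = data["K"]      # (3 x 3) camera calibration matrix
X = data["X"]      # (N x 2 x F) image points; imaginary entries mark missing points
img = data["img"]  # (m x n x F) image sequence

N, _, F = X.shape
visible = ~np.iscomplex(X).any(axis=1)   # (N x F) True where the point is visible
print(f"{N} tracks over {F} frames; {visible.sum()} of {N * F} observations visible")

# Normalize the visible points of frame 0 with the calibration matrix K.
x0 = np.real(X[visible[:, 0], :, 0])               # (M x 2) pixel coordinates
x0_h = np.hstack([x0, np.ones((x0.shape[0], 1))])  # homogeneous coordinates
x0_n = (np.linalg.inv(K) @ x0_h.T).T               # normalized image coordinates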


Two-View Multibody Structure and Motion

desk.mat (3 objects on desk, manual correspondences)
office.mat (3 objects on floor, MSER correspondences)

Each MATLAB workspace contains the four variables X1, X2, img1, and img2 (a short loading example follows below).
- X1, X2 are the (N x 2) image coordinates of corresponding points.
- img1, img2 are the two images of size (m x n).
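
A minimal loading and plotting sketch in Python (assumes scipy and matplotlib; the variable names follow the description above, and the (x, y) column order of X1/X2 is an assumption that may need swapping):

# Display both images with their corresponding points overlaid.
from scipy.io import loadmat
import matplotlib.pyplot as plt

data = loadmat("desk.mat")
X1, X2 = data["X1"], data["X2"]          # (N x 2) corresponding image points
img1, img2 = data["img1"], data["img2"]  # (m x n) images

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.imshow(img1, cmap="gray")
ax1.plot(X1[:, 0], X1[:, 1], "r.", markersize=3)
ax2.imshow(img2, cmap="gray")
ax2.plot(X2[:, 0], X2[:, 1], "r.", markersize=3)
plt.show()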

 
