# ObjectFlow
**Repository Path**: KHouSin/ObjectFlow
## Basic Information
- **Project Name**: ObjectFlow
- **Description**: Implementation of the paper: "Video Segmentation via Object Flow", Y.-H. Tsai, M.-H. Yang and M. J. Black, CVPR 2016
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-01-06
- **Last Updated**: 2020-12-19
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
Project webpage: https://sites.google.com/site/yihsuantsai/research/cvpr16-segmentation
Contact: Yi-Hsuan Tsai (wasidennis at gmail dot com)
## Paper
Video Segmentation via Object Flow
Yi-Hsuan Tsai, Ming-Hsuan Yang and Michael J. Black
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
## Overview
* This is the authors' MATLAB implementation described in the above paper. Please cite our paper if you use our code and model for your research.
* This code has been tested on Ubuntu 14.04 and MATLAB 2013b.
## Installation
* Download and unzip the code.
* Install the attached caffe branch, as instructed at http://caffe.berkeleyvision.org/installation.html.
* Download the CNN model for feature extraction [here](http://vllab.ucmerced.edu/ytsai/CVPR16/pascal_segmentation.zip), then unzip the model folder under the **caffe-cedn-dev/examples** folder.
* Install the included libraries in the **External** folder if needed (pre-compiled binaries are already included).
## Usage
* Put your video data in the **Videos** folder (see examples in this folder).
* Set directories and parameters in `setup_all.m` (the defaults are recommended).
* Run `demo_objectFlow.m` and change settings if needed based on your video data (see the script for further details).
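A typical run, once installation is complete, can be sketched as below. The two script names come from this package; the comments describe the default behavior and are otherwise an assumption about how the scripts are wired together.

```matlab
% Minimal sketch of a default run (MATLAB, from the package root):
setup_all;         % configure directories and parameters (defaults assumed)
demo_objectFlow;   % process the videos placed in the Videos folder
```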
## Note
* Currently this package contains only the implementation of object segment tracking without re-estimating optical flow, so its performance is slightly worse than that reported in the paper.
* For initialization, we currently use the ground truth of the first frame and propagate it to the following frames. To use a different initialization, replace the ground truth data.
* The model and code are available for non-commercial research purposes only.
## Hint
* The current implementation for generating optical flow is slow, so you can replace it with another optical flow method to speed up the process.
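As one possible substitution, a faster flow field could be computed with MATLAB's Computer Vision Toolbox and fed to the tracker in place of the bundled method. This is only a sketch under that assumption: the frame file names are illustrative, and adapting the resulting `flow.Vx`/`flow.Vy` fields to this package's expected flow format is left to the user.

```matlab
% Hypothetical sketch: optical flow via the Computer Vision Toolbox
% (not part of this package). Frame paths are placeholders.
flowModel = opticalFlowFarneback;            % alternatives: opticalFlowLK, opticalFlowHS
prev = rgb2gray(imread('frame001.png'));
curr = rgb2gray(imread('frame002.png'));
estimateFlow(flowModel, prev);               % prime the estimator with the first frame
flow = estimateFlow(flowModel, curr);        % per-pixel motion in flow.Vx, flow.Vy
```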
## Log
* 06/2016: code released
* 09/2016: evaluation method updated
* 10/2016: code updated for supervoxel extraction and online CNN model