
fast-rcnn and faster-rcnn training #438

Closed
b0unc3 opened this issue Nov 27, 2015 · 4 comments

b0unc3 commented Nov 27, 2015

Hi,

Is it possible to fine-tune Fast/Faster R-CNN models in DIGITS?
Thanks.

@lukeyeager
Member

I haven't done it before. Do you have an implementation for Caffe or Torch already that you'd like to use?

b0unc3 commented Nov 30, 2015

I've been able to train a model using Fast R-CNN (with the caffe-fast-rcnn implementation), but I would like to use DIGITS to simplify and automate the training phase.
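
In case it helps, this is roughly how I launch training with the fast-rcnn tools outside of DIGITS (paths follow that repo's README layout, so adjust them for your own checkout):

```python
# Rough sketch of launching Fast R-CNN training from an rbgirshick/fast-rcnn
# checkout; the solver/weights paths below follow the repo README and are
# placeholders for whatever model you actually train.
import os
import subprocess

FAST_RCNN_ROOT = "/path/to/fast-rcnn"  # placeholder checkout location

cmd = [
    "python", os.path.join(FAST_RCNN_ROOT, "tools", "train_net.py"),
    "--gpu", "0",
    "--solver", "models/VGG16/solver.prototxt",
    "--weights", "data/imagenet_models/VGG16.v2.caffemodel",
    "--imdb", "voc_2007_trainval",
]
# The relative --solver/--weights paths resolve against the repo root,
# so run the command from there.
subprocess.check_call(cmd, cwd=FAST_RCNN_ROOT)
```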

@lukeyeager
Member

DIGITS requires NVIDIA's fork of Caffe, which was last synced with BVLC/caffe on 09/29/2015 (https://github.com/NVIDIA/caffe/releases).

It looks like caffe-fast-rcnn is a different fork of caffe - so DIGITS will reject it.

If Caffe commits to a versioning scheme (BVLC/caffe#3311) which denotes a standardized API, DIGITS will be able to accept a much broader range of Caffe builds - possibly including the caffe-fast-rcnn fork.

Alternatively, if the code you need gets merged into BVLC's fork, then it will get picked up in NVIDIA's fork for our next update.
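
As a quick, unofficial sanity check of which Caffe build your Python environment is actually picking up (this is not how DIGITS itself validates the fork), something like the following works:

```python
# Print where pycaffe was imported from and, if available, its version.
# Not how DIGITS detects its required NVIDIA/caffe fork - just a rough check.
import caffe

print("caffe imported from: %s" % caffe.__file__)
# __version__ is only present in builds that adopted the versioning scheme
# discussed in BVLC/caffe#3311, so fall back gracefully if it is missing.
version = getattr(caffe, "__version__", "unknown (no __version__ attribute)")
print("reported version: %s" % version)
```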

@marvision-ai

Hi there, revisiting this. Is there a tutorial, or at least some steps, for getting Faster R-CNN working with DIGITS? I have yet to see anything that covers more than Detectron.
