Commit e214e3a

committed Apr 3, 2018
minor corrections
1 parent 26c7b3b

File tree: 5 files changed (+52 -7 lines)

INSTALL.md (+1 -1)

```diff
@@ -3,7 +3,7 @@
 
 - Create a virtual environment with all the packages : `conda env create -f environment.yml`
 
-- Then activate the environment with `source activate doc_seg`
+- Then activate the environment with `source activate dh_segment`
 
 - It might be possible that the following needs to be added to your `~/.bashrc`
 
```

README.md (+10 -3)

````diff
@@ -24,10 +24,17 @@ In order to limit memory usage, the images in the dataset we provide have been d
 __How to__
 
 1. Get the annotated dataset [here](https://github.com/dhlab-epfl/dhSegment/releases/download/v0.2/pages.zip), which already contains the folders `images` and `labels` for training validation and testing set. Unzip it into `model/pages`.
-2. You can train the model from scratch with
+2. Download the pretrained weights for ResNet :
+```
+cd pretrained_models/
+python download_resnet_pretrained_model.py
+cd ..
+```
+
+3. You can train the model from scratch with
 `python train.py with demo/demo_config.json`
 or skip this step and use directly the [provided model](https://github.com/dhlab-epfl/dhSegment/releases/download/v0.2/model.zip) (download and unzip it in `demo/model`)
-3. Run `python demo.py`
-4. Have a look at the results in `demo/processed_images`
+4. Run `python demo.py`
+5. Have a look at the results in `demo/processed_images`
 
 
````
demo.py (+1 -2)

```diff
@@ -8,7 +8,6 @@
 import numpy as np
 import os
 import cv2
-import argparse
 import tempfile
 from scipy.misc import imread, imsave
 
@@ -92,7 +91,7 @@ def find_page(img_filenames, dir_predictions, output_dir):
         target_shape = (orig_img.shape[1], orig_img.shape[0])
         bin_upscaled = cv2.resize(np.uint8(page_bin), target_shape, interpolation=cv2.INTER_NEAREST)
 
-        # Find quadrilateral inclosing the page
+        # Find quadrilateral enclosing the page
         pred_box = boxes_detection.find_box(np.uint8(bin_upscaled), mode='quadrilateral')
 
         if pred_box is not None:
```
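The corrected comment above describes fitting a box around the binary page mask. As a minimal NumPy sketch of that idea, restricted to the axis-aligned case (`boxes_detection.find_box` with `mode='quadrilateral'` fits a four-point polygon instead; `axis_aligned_box` here is a hypothetical helper, not the repository's API):

```python
import numpy as np

def axis_aligned_box(mask):
    """Return (x_min, y_min, x_max, y_max) of the nonzero region of `mask`,
    or None when the mask is empty (mirroring find_box returning None)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# A synthetic binary "page" mask covering rows 20..59 and columns 30..79.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:60, 30:80] = 1
box = axis_aligned_box(mask)  # -> (30, 20, 79, 59)
```

The quadrilateral mode matters for scanned pages photographed at an angle, where an axis-aligned box would include background; the sketch only conveys the mask-to-box step.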

environment.yml (+1 -1)

```diff
@@ -1,4 +1,4 @@
-name: doc_seg
+name: dh_segment
 channels:
 - defaults
 dependencies:
```
pretrained_models/download_resnet_pretrained_model.py (new file, +39)

```diff
@@ -0,0 +1,39 @@
+#!/usr/bin/env python
+
+import urllib.request
+import tarfile
+import os
+from tqdm import tqdm
+
+
+def progress_hook(t):
+    last_b = [0]
+
+    def update_to(b=1, bsize=1, tsize=None):
+        """
+        b : int, optional
+            Number of blocks transferred so far [default: 1].
+        bsize : int, optional
+            Size of each block (in tqdm units) [default: 1].
+        tsize : int, optional
+            Total size (in tqdm units). If None [default], remains unchanged.
+        """
+        if tsize is not None:
+            t.total = tsize
+        t.update((b - last_b[0]) * bsize)
+        last_b[0] = b
+
+    return update_to
+
+
+if __name__ == '__main__':
+    tar_filename = 'vgg_16.tar.gz'
+    with tqdm(unit='B', unit_scale=True, unit_divisor=1024, miniters=1,
+              desc="Downloading pre-trained weights") as t:
+        urllib.request.urlretrieve('http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz', tar_filename,
+                                   reporthook=progress_hook(t))
+    tar = tarfile.open(tar_filename)
+    tar.extractall()
+    tar.close()
+    print('VGG-16 pre-trained weights downloaded!')
+    os.remove(tar_filename)
```
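The `reporthook` closure in the new script can be exercised without a network call: `urllib.request.urlretrieve` invokes the hook as `hook(block_count, block_size, total_size)`, and `update_to` converts those cumulative block counts into incremental byte updates. A minimal sketch, using a hypothetical `DummyBar` stand-in for the tqdm bar:

```python
class DummyBar:
    """Hypothetical stand-in for tqdm: records total size and bytes seen."""
    def __init__(self):
        self.total = None
        self.n = 0

    def update(self, delta):
        self.n += delta


def progress_hook(t):
    last_b = [0]  # mutable cell so the closure remembers the last block count

    def update_to(b=1, bsize=1, tsize=None):
        if tsize is not None:
            t.total = tsize              # total size becomes known on first call
        t.update((b - last_b[0]) * bsize)  # advance only by newly received bytes
        last_b[0] = b

    return update_to


bar = DummyBar()
hook = progress_hook(bar)
# Simulate urlretrieve reporting 0, then 5, then 12 blocks of 8192 bytes
# out of a 100000-byte download.
hook(0, 8192, 100000)
hook(5, 8192, 100000)
hook(12, 8192, 100000)
# bar.total is now 100000 and bar.n is 12 * 8192 = 98304
```

The delta computation is what makes the hook safe to call repeatedly: each call adds only the blocks received since the previous call, so the bar never overcounts.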
