
Comparing changes

This is a direct comparison between two commits made in this repository or its related repositories.

base repository: BVLC/caffe
base: c24d1848ee5e9a10f9efa0b2e7b167a2a9b0dad0
head repository: BVLC/caffe
compare: 596add755cdd1178ffd3ca5598a64909b361739d
Showing with 3,966 additions and 43,221 deletions.
  1. +0 −1 CMakeLists.txt
  2. +8 −19 Makefile
  3. +2 −7 cmake/Dependencies.cmake
  4. +1 −1 cmake/Modules/FindGFlags.cmake
  5. +1 −1 cmake/Modules/FindGlog.cmake
  6. +1 −1 docs/development.md
  7. +8 −13 docs/install_apt.md
  8. +4 −4 docs/install_osx.md
  9. +16 −27 docs/model_zoo.md
  10. +1 −1 docs/performance_hardware.md
  11. +6 −6 docs/tutorial/data.md
  12. +80 −119 docs/tutorial/layers.md
  13. +10 −10 docs/tutorial/loss.md
  14. +16 −14 docs/tutorial/net_layer_blob.md
  15. +356 −3,306 examples/classification.ipynb
  16. +897 −8,356 examples/detection.ipynb
  17. +561 −13,174 examples/filter_visualization.ipynb
  18. +933 −6,276 examples/hdf5_classification.ipynb
  19. +3 −3 examples/hdf5_classification/solver.prototxt
  20. +3 −3 examples/hdf5_classification/solver2.prototxt
  21. +2 −2 examples/hdf5_classification/train_val.prototxt
  22. +2 −2 examples/hdf5_classification/train_val2.prototxt
  23. +1 −1 examples/{net_surgery → imagenet}/bvlc_caffenet_full_conv.prototxt
  24. +3 −7 examples/imagenet/make_imagenet_mean.sh
  25. +25 −25 examples/mnist/readme.md
  26. +330 −6,797 examples/net_surgery.ipynb
  27. +0 −26 examples/net_surgery/conv.prototxt
  28. +150 −1,901 examples/siamese/mnist_siamese.ipynb
  29. +29 −37 examples/siamese/readme.md
  30. +6 −11 examples/web_demo/app.py
  31. +2 −2 include/caffe/blob.hpp
  32. +2 −2 include/caffe/common.hpp
  33. +2 −104 include/caffe/common_layers.hpp
  34. +3 −2 include/caffe/data_layers.hpp
  35. +9 −62 include/caffe/filler.hpp
  36. +4 −5 include/caffe/layer.hpp
  37. +2 −7 include/caffe/loss_layers.hpp
  38. +0 −3 include/caffe/net.hpp
  39. +7 −91 include/caffe/neuron_layers.hpp
  40. +7 −6 include/caffe/python_layer.hpp
  41. +4 −4 include/caffe/solver.hpp
  42. +1 −1 include/caffe/syncedmem.hpp
  43. +15 −19 include/caffe/util/cudnn.hpp
  44. +3 −71 include/caffe/vision_layers.hpp
  45. +2 −2 matlab/caffe/hdf5creation/demo.m
  46. +1 −0 models/bvlc_googlenet/readme.md
  47. +1 −1 python/caffe/__init__.py
  48. +1 −2 python/caffe/_caffe.cpp
  49. +21 −25 python/caffe/classifier.py
  50. +26 −29 python/caffe/detector.py
  51. +19 −26 python/caffe/draw.py
  52. +54 −64 python/caffe/io.py
  53. +28 −40 python/caffe/pycaffe.py
  54. +0 −2 python/caffe/test/test_net.py
  55. +0 −3 python/caffe/test/test_python_layer.py
  56. +0 −1 python/caffe/test/test_solver.py
  57. +4 −12 python/classify.py
  58. +6 −9 python/detect.py
  59. +2 −2 python/requirements.txt
  60. +1 −0 scripts/travis/travis_install.sh
  61. +0 −1 src/caffe/blob.cpp
  62. +23 −44 src/caffe/layers/accuracy_layer.cpp
  63. +1 −0 src/caffe/layers/base_data_layer.cpp
  64. +3 −22 src/caffe/layers/contrastive_loss_layer.cpp
  65. +7 −27 src/caffe/layers/contrastive_loss_layer.cu
  66. +5 −7 src/caffe/layers/cudnn_conv_layer.cpp
  67. +22 −72 src/caffe/layers/cudnn_conv_layer.cu
  68. +6 −4 src/caffe/layers/cudnn_pooling_layer.cpp
  69. +3 −9 src/caffe/layers/cudnn_pooling_layer.cu
  70. +2 −2 src/caffe/layers/cudnn_relu_layer.cpp
  71. +5 −11 src/caffe/layers/cudnn_relu_layer.cu
  72. +2 −2 src/caffe/layers/cudnn_sigmoid_layer.cpp
  73. +5 −11 src/caffe/layers/cudnn_sigmoid_layer.cu
  74. +2 −2 src/caffe/layers/cudnn_softmax_layer.cpp
  75. +4 −11 src/caffe/layers/cudnn_softmax_layer.cu
  76. +2 −2 src/caffe/layers/cudnn_tanh_layer.cpp
  77. +5 −12 src/caffe/layers/cudnn_tanh_layer.cu
  78. +0 −128 src/caffe/layers/filter_layer.cpp
  79. +0 −70 src/caffe/layers/filter_layer.cu
  80. +10 −43 src/caffe/layers/hdf5_data_layer.cpp
  81. +6 −13 src/caffe/layers/hdf5_data_layer.cu
  82. +23 −16 src/caffe/layers/lrn_layer.cu
  83. +21 −2 src/caffe/layers/mvn_layer.cpp
  84. +22 −1 src/caffe/layers/mvn_layer.cu
  85. +0 −141 src/caffe/layers/prelu_layer.cpp
  86. +0 −130 src/caffe/layers/prelu_layer.cu
  87. +0 −95 src/caffe/layers/reshape_layer.cpp
  88. +1 −1 src/caffe/layers/sigmoid_cross_entropy_loss_layer.cpp
  89. +21 −1 src/caffe/layers/sigmoid_cross_entropy_loss_layer.cu
  90. +1 −2 src/caffe/layers/softmax_loss_layer.cpp
  91. +0 −193 src/caffe/layers/spp_layer.cpp
  92. +10 −44 src/caffe/net.cpp
  93. +29 −137 src/caffe/proto/caffe.proto
  94. +6 −9 src/caffe/solver.cpp
  95. +5 −93 src/caffe/test/test_accuracy_layer.cpp
  96. +7 −51 src/caffe/test/test_contrastive_loss_layer.cpp
  97. +5 −7 src/caffe/test/test_data/generate_sample_data.py
  98. +0 −98 src/caffe/test/test_filler.cpp
  99. +0 −128 src/caffe/test/test_filter_layer.cpp
  100. +0 −38 src/caffe/test/test_lrn_layer.cpp
  101. +0 −145 src/caffe/test/test_net.cpp
  102. +0 −192 src/caffe/test/test_neuron_layer.cpp
  103. +8 −0 src/caffe/test/test_pooling_layer.cpp
  104. +0 −280 src/caffe/test/test_reshape_layer.cpp
  105. +0 −131 src/caffe/test/test_spp_layer.cpp
  106. +0 −23 src/caffe/util/cudnn.cpp
  107. +7 −17 tools/caffe.cpp
  108. +2 −2 tools/extra/plot_log.gnuplot.example
  109. +3 −3 tools/extract_features.cpp
1 change: 0 additions & 1 deletion CMakeLists.txt
@@ -20,7 +20,6 @@ caffe_option(BUILD_python "Build Python wrapper" ON)
set(python_version "2" CACHE STRING "Specify which python version to use")
caffe_option(BUILD_matlab "Build Matlab wrapper" OFF IF UNIX OR APPLE)
caffe_option(BUILD_docs "Build documentation" ON IF UNIX OR APPLE)
caffe_option(BUILD_python_layer "Build the caffe python layer" ON)

# ---[ Dependencies
include(cmake/Dependencies.cmake)
27 changes: 8 additions & 19 deletions Makefile
@@ -1,19 +1,11 @@
PROJECT := caffe

CONFIG_FILE := Makefile.config
# Explicitly check for the config file, otherwise make -k will proceed anyway.
ifeq ($(wildcard $(CONFIG_FILE)),)
$(error $(CONFIG_FILE) not found. See $(CONFIG_FILE).example.)
endif
include $(CONFIG_FILE)

BUILD_DIR_LINK := $(BUILD_DIR)
ifeq ($(RELEASE_BUILD_DIR),)
RELEASE_BUILD_DIR := .$(BUILD_DIR)_release
endif
ifeq ($(DEBUG_BUILD_DIR),)
DEBUG_BUILD_DIR := .$(BUILD_DIR)_debug
endif
RELEASE_BUILD_DIR ?= .$(BUILD_DIR)_release
DEBUG_BUILD_DIR ?= .$(BUILD_DIR)_debug

DEBUG ?= 0
ifeq ($(DEBUG), 1)
@@ -179,7 +171,6 @@ WARNINGS := -Wall -Wno-sign-compare
# Set build directories
##############################

DISTRIBUTE_DIR ?= distribute
DISTRIBUTE_SUBDIRS := $(DISTRIBUTE_DIR)/bin $(DISTRIBUTE_DIR)/lib
DIST_ALIASES := dist
ifneq ($(strip $(DISTRIBUTE_DIR)),distribute)
@@ -241,15 +232,13 @@ endif
# libstdc++ for NVCC compatibility on OS X >= 10.9 with CUDA < 7.0
ifeq ($(OSX), 1)
CXX := /usr/bin/clang++
ifneq ($(CPU_ONLY), 1)
CUDA_VERSION := $(shell $(CUDA_DIR)/bin/nvcc -V | grep -o 'release \d' | grep -o '\d')
ifeq ($(shell echo $(CUDA_VERSION) \< 7.0 | bc), 1)
CXXFLAGS += -stdlib=libstdc++
LINKFLAGS += -stdlib=libstdc++
endif
# clang throws this warning for cuda headers
WARNINGS += -Wno-unneeded-internal-declaration
CUDA_VERSION := $(shell $(CUDA_DIR)/bin/nvcc -V | grep -o 'release \d' | grep -o '\d')
ifeq ($(shell echo $(CUDA_VERSION) \< 7.0 | bc), 1)
CXXFLAGS += -stdlib=libstdc++
LINKFLAGS += -stdlib=libstdc++
endif
# clang throws this warning for cuda headers
WARNINGS += -Wno-unneeded-internal-declaration
# gtest needs to use its own tuple to not conflict with clang
COMMON_FLAGS += -DGTEST_USE_OWN_TR1_TUPLE=1
# boost::thread is called boost_thread-mt to mark multithreading on OS X
9 changes: 2 additions & 7 deletions cmake/Dependencies.cmake
@@ -25,7 +25,7 @@ include(cmake/ProtoBuf.cmake)

# ---[ HDF5
find_package(HDF5 COMPONENTS HL REQUIRED)
include_directories(SYSTEM ${HDF5_INCLUDE_DIRS} ${HDF5_HL_INCLUDE_DIR})
include_directories(SYSTEM ${HDF5_INCLUDE_DIRS})
list(APPEND Caffe_LINKER_LIBS ${HDF5_LIBRARIES})

# ---[ LMDB
@@ -35,7 +35,7 @@ list(APPEND Caffe_LINKER_LIBS ${LMDB_LIBRARIES})

# ---[ LevelDB
find_package(LevelDB REQUIRED)
include_directories(SYSTEM ${LevelDB_INCLUDE})
include_directories(SYSTEM ${LEVELDB_INCLUDE})
list(APPEND Caffe_LINKER_LIBS ${LevelDB_LIBRARIES})

# ---[ Snappy
@@ -127,11 +127,6 @@ if(BUILD_python)
endif()
if(PYTHONLIBS_FOUND AND NUMPY_FOUND AND Boost_PYTHON_FOUND)
set(HAVE_PYTHON TRUE)
if(BUILD_python_layer)
add_definitions(-DWITH_PYTHON_LAYER)
include_directories(SYSTEM ${PYTHON_INCLUDE_DIRS} ${NUMPY_INCLUDE_DIR} ${Boost_INCLUDE_DIRS})
list(APPEND Caffe_LINKER_LIBS ${PYTHON_LIBRARIES} ${Boost_LIBRARIES})
endif()
endif()
endif()
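
For context, the `WITH_PYTHON_LAYER` block on the base side of this hunk is what lets a net prototxt reference a layer implemented in Python. A minimal sketch of such a layer, in the spirit of caffe's pycaffe examples (the class name and choice of a Euclidean loss are illustrative, not part of this diff):

    import numpy as np

    import caffe


    class EuclideanLossLayer(caffe.Layer):
        """Illustrative Euclidean loss implemented as a Python layer."""

        def setup(self, bottom, top):
            if len(bottom) != 2:
                raise Exception("Need two inputs to compute distance.")

        def reshape(self, bottom, top):
            # difference buffer matches the inputs; the loss output is a scalar
            self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
            top[0].reshape(1)

        def forward(self, bottom, top):
            self.diff[...] = bottom[0].data - bottom[1].data
            top[0].data[...] = np.sum(self.diff ** 2) / bottom[0].num / 2.

        def backward(self, top, propagate_down, bottom):
            for i in range(2):
                if not propagate_down[i]:
                    continue
                sign = 1 if i == 0 else -1
                bottom[i].diff[...] = sign * self.diff / bottom[i].num

Such a layer is referenced from a prototxt `python_param` by module and class name, and is only available when the build defines `WITH_PYTHON_LAYER` -- exactly the definition this hunk removes on the head side.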

2 changes: 1 addition & 1 deletion cmake/Modules/FindGFlags.cmake
@@ -38,7 +38,7 @@ else()
find_library(GFLAGS_LIBRARY gflags)
endif()

find_package_handle_standard_args(GFlags DEFAULT_MSG GFLAGS_INCLUDE_DIR GFLAGS_LIBRARY)
find_package_handle_standard_args(GFLAGS DEFAULT_MSG GFLAGS_INCLUDE_DIR GFLAGS_LIBRARY)


if(GFLAGS_FOUND)
2 changes: 1 addition & 1 deletion cmake/Modules/FindGlog.cmake
@@ -37,7 +37,7 @@ else()
PATH_SUFFIXES lib lib64)
endif()

find_package_handle_standard_args(Glog DEFAULT_MSG GLOG_INCLUDE_DIR GLOG_LIBRARY)
find_package_handle_standard_args(GLOG DEFAULT_MSG GLOG_INCLUDE_DIR GLOG_LIBRARY)

if(GLOG_FOUND)
set(GLOG_INCLUDE_DIRS ${GLOG_INCLUDE_DIR})
2 changes: 1 addition & 1 deletion docs/development.md
@@ -30,7 +30,7 @@ Similarly for IPython notebooks: simply include `"include_in_docs": true` in the

Other docs, such as installation guides, are written in the `docs` directory and manually linked to from the `index.md` page.

We strive to provide lots of usage examples, and to document all code in docstrings.
We strive to provide provide lots of usage examples, and to document all code in docstrings.
We absolutely appreciate any contribution to this effort!

### Versioning
21 changes: 8 additions & 13 deletions docs/install_apt.md
@@ -8,24 +8,12 @@ title: Installation: Ubuntu

sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev

**CUDA**: Install via the NVIDIA package instead of `apt-get` to be certain of the library and driver versions.
Install the library and latest driver separately; the driver bundled with the library is usually out-of-date.
This can be skipped for CPU-only installation.

**BLAS**: install ATLAS by `sudo apt-get install libatlas-base-dev` or install OpenBLAS or MKL for better CPU performance.

**Python** (optional): if you use the default Python you will need to `sudo apt-get install` the `python-dev` package to have the Python headers for building the pycaffe interface.

**Remaining dependencies, 14.04**

Everything is packaged in 14.04.

sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler

**Remaining dependencies, 12.04**

These dependencies need manual installation in 12.04.

# glog
wget https://google-glog.googlecode.com/files/glog-0.3.3.tar.gz
tar zxvf glog-0.3.3.tar.gz
@@ -40,10 +28,17 @@ These dependencies need manual installation in 12.04.
export CXXFLAGS="-fPIC" && cmake .. && make VERBOSE=1
make && make install
# lmdb
git clone https://gitorious.org/mdb/mdb.git
git clone git://gitorious.org/mdb/mdb.git
cd mdb/libraries/liblmdb
make && make install

Note that glog does not compile with the most recent gflags version (2.1), so before that is resolved you will need to build with glog first.

**CUDA**: Install via the NVIDIA package instead of `apt-get` to be certain of the library and driver versions.
Install the library and latest driver separately; the driver bundled with the library is usually out-of-date.

**BLAS**: install ATLAS by `sudo apt-get install libatlas-base-dev` or install OpenBLAS or MKL for better CPU performance.

**Python** (optional): if you use the default Python you will need to `sudo apt-get install` the `python-dev` package to have the Python headers for building the pycaffe interface.

Continue with [compilation](installation.html#compilation).
8 changes: 4 additions & 4 deletions docs/install_osx.md
@@ -10,15 +10,15 @@ In the following, we assume that you're using Anaconda Python and Homebrew.

**CUDA**: Install via the NVIDIA package that includes both CUDA and the bundled driver. **CUDA 7 is strongly suggested.** Older CUDA require `libstdc++` while clang++ is the default compiler and `libc++` the default standard library on OS X 10.9+. This disagreement makes it necessary to change the compilation settings for each of the dependencies. This is prone to error.

**Library Path**: We find that everything compiles successfully if `$LD_LIBRARY_PATH` is not set at all, and `$DYLD_FALLBACK_LIBRARY_PATH` is set to provide CUDA, Python, and other relevant libraries (e.g. `/usr/local/cuda/lib:$HOME/anaconda/lib:/usr/local/lib:/usr/lib`).
**Library Path**: We find that everything compiles successfully if `$LD_LIBRARY_PATH` is not set at all, and `$DYLD_FALLBACK_LIBRARY_PATH` is set to to provide CUDA, Python, and other relevant libraries (e.g. `/usr/local/cuda/lib:$HOME/anaconda/lib:/usr/local/lib:/usr/lib`).
In other `ENV` settings, things may not work as expected.

**General dependencies**

brew install --fresh -vd snappy leveldb gflags glog szip lmdb
# need the homebrew science source for OpenCV and hdf5
brew tap homebrew/science
brew install hdf5 opencv
hdf5 opencv

If using Anaconda Python, a modification to the OpenCV formula might be needed
Do `brew edit opencv` and change the lines that look like the two lines below to exactly the two lines below.
@@ -32,7 +32,7 @@ If using Anaconda Python, HDF5 is bundled and the `hdf5` formula can be skipped.

# with Python pycaffe needs dependencies built from source
brew install --build-from-source --with-python --fresh -vd protobuf
brew install --build-from-source --fresh -vd boost boost-python
brew install --build-from-source --fresh -vd boost
# without Python the usual installation suffices
brew install protobuf boost

@@ -115,7 +115,7 @@ Then, whenever you want to update homebrew, switch back to the master branches,
# Update homebrew; hopefully this works without errors!
brew update

# Switch back to the caffe branches with the formulae that you modified earlier
# Switch back to the caffe branches with the forumlae that you modified earlier
cd /usr/local
git rebase master caffe
# Fix any merge conflicts and commit to caffe branch
43 changes: 16 additions & 27 deletions docs/model_zoo.md
@@ -3,30 +3,28 @@ title: Model Zoo
---
# Caffe Model Zoo

Lots of researchers and engineers have made Caffe models for different tasks with all kinds of architectures and data.
These models are learned and applied for problems ranging from simple regression, to large-scale visual classification, to Siamese networks for image similarity, to speech and robotics applications.

To help share these models, we introduce the model zoo framework:
Lots of people have used Caffe to train models of different architectures and applied to different problems, ranging from simple regression to AlexNet-alikes to Siamese networks for image similarity to speech applications.
To lower the friction of sharing these models, we introduce the model zoo framework:

- A standard format for packaging Caffe model info.
- Tools to upload/download model info to/from Github Gists, and to download trained `.caffemodel` binaries.
- Tools to upload/download model info to/from Github Gists, and to download trained `.caffemodel` parameters.
- A central wiki page for sharing model info Gists.

## Where to get trained models
## BVLC Reference Models

First of all, we bundle BVLC-trained models for unrestricted, out of the box use.
<br>
See the [BVLC model license](#bvlc-model-license) for details.
First of all, we provide some trained models out of the box.
Each one of these can be downloaded by running `scripts/download_model_binary.py <dirname>` where `<dirname>` is specified below:

- **BVLC Reference CaffeNet** in `models/bvlc_reference_caffenet`: AlexNet trained on ILSVRC 2012, with a minor variation from the version as described in [ImageNet classification with deep convolutional neural networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) by Krizhevsky et al. in NIPS 2012. (Trained by Jeff Donahue @jeffdonahue)
- **BVLC AlexNet** in `models/bvlc_alexnet`: AlexNet trained on ILSVRC 2012, almost exactly as described in [ImageNet classification with deep convolutional neural networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) by Krizhevsky et al. in NIPS 2012. (Trained by Evan Shelhamer @shelhamer)
- **BVLC Reference R-CNN ILSVRC-2013** in `models/bvlc_reference_rcnn_ilsvrc13`: pure Caffe implementation of [R-CNN](https://github.com/rbgirshick/rcnn) as described by Girshick et al. in CVPR 2014. (Trained by Ross Girshick @rbgirshick)
- **BVLC GoogLeNet** in `models/bvlc_googlenet`: GoogLeNet trained on ILSVRC 2012, almost exactly as described in [Going Deeper with Convolutions](http://arxiv.org/abs/1409.4842) by Szegedy et al. in ILSVRC 2014. (Trained by Sergio Guadarrama @sguada)
- **BVLC Reference CaffeNet** in `models/bvlc_reference_caffenet`: AlexNet trained on ILSVRC 2012, with a minor variation from the version as described in the NIPS 2012 paper. (Trained by Jeff Donahue @jeffdonahue)
- **BVLC AlexNet** in `models/bvlc_alexnet`: AlexNet trained on ILSVRC 2012, almost exactly as described in NIPS 2012. (Trained by Evan Shelhamer @shelhamer)
- **BVLC Reference R-CNN ILSVRC-2013** in `models/bvlc_reference_rcnn_ilsvrc13`: pure Caffe implementation of [R-CNN](https://github.com/rbgirshick/rcnn). (Trained by Ross Girshick @rbgirshick)
- **BVLC GoogleNet** in `models/bvlc_googlenet`: GoogleNet trained on ILSVRC 2012, almost exactly as described in [GoogleNet](http://arxiv.org/abs/1409.4842). (Trained by Sergio Guadarrama @sguada)


**Community models** made by Caffe users are posted to a publicly editable [wiki page](https://github.com/BVLC/caffe/wiki/Model-Zoo).
These models are subject to conditions of their respective authors such as citation and license.
Thank you for sharing your models!
## Community Models

The publicly-editable [Caffe Model Zoo wiki](https://github.com/BVLC/caffe/wiki/Model-Zoo) catalogues user-made models.
Refer to the model details for authorship and conditions -- please respect licenses and citations.

## Model info format

@@ -46,7 +44,7 @@ A caffe model is distributed as a directory containing:

Github Gist is a good format for model info distribution because it can contain multiple files, is versionable, and has in-browser syntax highlighting and markdown rendering.

`scripts/upload_model_to_gist.sh <dirname>` uploads non-binary files in the model directory as a Github Gist and prints the Gist ID. If `gist_id` is already part of the `<dirname>/readme.md` frontmatter, then updates existing Gist.
- `scripts/upload_model_to_gist.sh <dirname>`: uploads non-binary files in the model directory as a Github Gist and prints the Gist ID. If `gist_id` is already part of the `<dirname>/readme.md` frontmatter, then updates existing Gist.

Try doing `scripts/upload_model_to_gist.sh models/bvlc_alexnet` to test the uploading (don't forget to delete the uploaded gist afterward).

@@ -58,13 +56,4 @@ It is up to the user where to host the `.caffemodel` file.
We host our BVLC-provided models on our own server.
Dropbox also works fine (tip: make sure that `?dl=1` is appended to the end of the URL).

`scripts/download_model_binary.py <dirname>` downloads the `.caffemodel` from the URL specified in the `<dirname>/readme.md` frontmatter and confirms SHA1.

## BVLC model license

The Caffe models bundled by the BVLC are released for unrestricted use.

These models are trained on data from the [ImageNet project](http://www.image-net.org/) and training data includes internet photos that may be subject to copyright.

Our present understanding as researchers is that there is no restriction placed on the open release of these learned model weights, since none of the original images are distributed in whole or in part.
To the extent that the interpretation arises that weights are derivative works of the original copyright holder and they assert such a copyright, UC Berkeley makes no representations as to what use is allowed other than to consider our present release in the spirit of fair use in the academic mission of the university to disseminate knowledge and tools as broadly as possible without restriction.
- `scripts/download_model_binary.py <dirname>`: downloads the `.caffemodel` from the URL specified in the `<dirname>/readme.md` frontmatter and confirms SHA1.
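
As a sketch of the download step described above -- fetch the `.caffemodel` from the URL given in a model's `readme.md` frontmatter and confirm its SHA1 -- assuming hypothetical `url` and `expected_sha1` values (the real script parses them out of the frontmatter):

    import hashlib
    import urllib.request

    # hypothetical values; scripts/download_model_binary.py reads these
    # from the <dirname>/readme.md frontmatter
    url = "http://example.org/bvlc_alexnet.caffemodel"
    expected_sha1 = "0123456789abcdef0123456789abcdef01234567"

    data = urllib.request.urlopen(url).read()
    actual_sha1 = hashlib.sha1(data).hexdigest()
    if actual_sha1 != expected_sha1:
        raise IOError("SHA1 mismatch: got %s, expected %s"
                      % (actual_sha1, expected_sha1))
    with open("bvlc_alexnet.caffemodel", "wb") as f:
        f.write(data)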
2 changes: 1 addition & 1 deletion docs/performance_hardware.md
@@ -48,7 +48,7 @@ and then set the clock speed with

sudo nvidia-smi -i 0 -ac 3004,875 # repeat with -i x for each GPU ID

but note that this configuration resets across driver reloading / rebooting. Include these commands in a boot script to initialize these settings. For a simple fix, add these commands to `/etc/rc.local` (on Ubuntu).
but note that this configuration resets across driver reloading / rebooting. Include these commands in a boot script to intialize these settings. For a simple fix, add these commands to `/etc/rc.local` (on Ubuntu).

## NVIDIA Titan

12 changes: 6 additions & 6 deletions docs/tutorial/data.md
@@ -10,15 +10,15 @@ New input types are supported by developing a new data layer -- the rest of the

This data layer definition

layer {
layers {
name: "mnist"
# Data layer loads leveldb or lmdb storage DBs for high-throughput.
type: "Data"
# DATA layer loads leveldb or lmdb storage DBs for high-throughput.
type: DATA
# the 1st top is the data itself: the name is only convention
top: "data"
# the 2nd top is the ground truth: the name is only convention
top: "label"
# the Data layer configuration
# the DATA layer configuration
data_param {
# path to the DB
source: "examples/mnist/mnist_train_lmdb"
@@ -46,9 +46,9 @@ The (data, label) pairing is a convenience for classification models.

**Transformations**: data preprocessing is parametrized by transformation messages within the data layer definition.

layer {
layers {
name: "data"
type: "Data"
type: DATA
[...]
transform_param {
scale: 0.1
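
To see a definition like the one above in action, a minimal pycaffe sketch (assuming a compiled pycaffe and a prototxt containing such a data layer; the path below is an assumed example) that loads the net and inspects what the data layer produces:

    import caffe

    # assumed path; any train prototxt with a data layer works the same way
    net = caffe.Net("examples/mnist/lenet_train_test.prototxt", caffe.TRAIN)
    net.forward()  # one forward pass pulls a batch out of the LMDB

    print(net.blobs["data"].data.shape)   # one batch of image data
    print(net.blobs["label"].data.shape)  # the matching batch of labels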