bilinear_interp_op #3925

Closed
wants to merge 1 commit into from

Conversation

@luotao1 luotao1 commented Sep 6, 2017

fix #3924
(NOT finished yet)

See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/operators/bilinear_interp_op.h"

Why should we implement such an operator?

If we implement a bilinear_interp operator, we will also need to implement nearest-neighbor interpolation, cubic interpolation, and even Gaussian blur operators. Implementing so many operators would be an endless workload for us.
In my mind, we only need to port the OpenCV base data structure, cv::Mat, to an Eigen matrix, and reuse the OpenCV CPU/GPU functions as the kernel implementations.
And maybe putting those operators into an image directory would be better.


bilinear_interp is an op that a user once requested. This op is not used for data augmentation; it is used for image segmentation. I don't think the nearest-neighbor interpolation, cubic interpolation, or Gaussian blur operators are needed now, and there is no need to port the much heavier OpenCV dependency for this single operator. In addition, TensorFlow also implements this op: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/resize_bilinear_op.cc
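For context on what the op computes, here is a minimal, dependency-free Python sketch of bilinear resizing (not the PR's C++ code; `bilinear_resize` is a hypothetical name), using the same corner-aligned scale factor `ratio = (in - 1) / (out - 1)` for output sizes greater than 1 that appears in this kernel:

```python
def bilinear_resize(img, out_h, out_w):
    """Bilinearly resize a 2D list-of-lists `img` (H x W) to out_h x out_w.

    Source coordinates are corner-aligned: ratio = (in - 1) / (out - 1)
    when the output size is greater than 1, and 0 otherwise.
    """
    in_h, in_w = len(img), len(img[0])
    ratio_h = (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
    ratio_w = (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        y = ratio_h * i
        y0 = int(y)                     # top neighbor row
        y1 = min(y0 + 1, in_h - 1)      # bottom neighbor row, clamped
        dy = y - y0
        for j in range(out_w):
            x = ratio_w * j
            x0 = int(x)                 # left neighbor column
            x1 = min(x0 + 1, in_w - 1)  # right neighbor column, clamped
            dx = x - x0
            top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
            bottom = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
            out[i][j] = top * (1 - dy) + bottom * dy
    return out
```

For example, resizing `[[0, 1], [2, 3]]` to 3x3 interpolates the center to 1.5. A per-channel loop over an NCHW tensor would call this once per (N, C) plane.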


Okay, I see.
I just noticed that this operator's implementation runs into some problems. This operator looks more like a user-specific request than a general-purpose one.
I find that the Caffe2 operator catalogue does not implement it at all.
I just want to remind us that we should implement the most commonly used operators first, since our deadline is looming.


@dzhwinter Thank you for the reminder! Some necessary operators cannot be developed yet because LoDTensor is not ready. We'll try our best to finish LoDTensor this week. And @luotao1 still has several necessary sequence operators to do.

template <typename T>
class BilinearInterpCUDAKernel : public framework::OpKernel {
public:
void Compute(const framework::ExecutionContext& context) const override {}

Need to add the CUDA kernel here.

class BilinearInterpGradKernel : public framework::OpKernel {
public:
void Compute(const framework::ExecutionContext& ctx) const override {
auto d_input_t = ctx.Output<Tensor>(framework::GradVarName("X"));

Use `auto*`, please.

};

} // namespace operators
} // namespace paddle

Need to add a Python test.


protected:
void InferShape(const framework::InferShapeContext &ctx) const override {
auto dim_X = ctx.Input<Tensor>("X")->dims(); // NCHW format

  1. Add a not-null check for InputVar("X") and OutputVar("Out").
  2. Rename dim_X to dim_x.

template <typename T>
class BilinearInterpGradCUDAKernel : public framework::OpKernel {
public:
void Compute(const framework::ExecutionContext& context) const override {}

If the CUDA kernel will be done in the next PR, please add a TODO(luotao) comment in these functions.


Please add the TODO comment here and merge this PR ASAP.

https://en.wikipedia.org/wiki/Bilinear_interpolation
)DOC");
AddAttr<int>("out_h", "output height of bilinear interpolation op.");
AddAttr<int>("out_w", "output width of bilinear interpolation op.");

Put the AddComment(R"DOC ... )DOC") call last.

T ratio_w = (out_w > 1) ? static_cast<T>(in_w - 1) / (out_w - 1) : 0.f;

if (in_h == out_h && in_w == out_w) {
memcpy(output, input, product(input_t->dims()) * sizeof(T));

`product(input_t->dims())` -> `input_t->numel()`
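To illustrate the suggestion with a small sketch (plain Python, not Paddle's API; `numel` here is a hypothetical stand-in for the tensor accessor): the element count is simply the product of the dimensions, so the same-size fast path can copy that many elements directly.

```python
from functools import reduce
import operator

def numel(dims):
    """Element count of a tensor: the product of all its dimensions."""
    return reduce(operator.mul, dims, 1)

# NCHW tensor of shape 1 x 2 x 3 x 4 -> 24 elements.
src = list(range(numel((1, 2, 3, 4))))
# The in_h == out_h and in_w == out_w fast path is just a flat copy.
dst = src[:]
```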

int out_chw = channels * out_hw;

T ratio_h = (out_h > 1) ? static_cast<T>(in_h - 1) / (out_h - 1) : 0.f;
T ratio_w = (out_w > 1) ? static_cast<T>(in_w - 1) / (out_w - 1) : 0.f;

The unit test needs to cover:

  1. the case out_h = 1, out_w = 1
  2. the cases where out_h/out_w is larger and smaller than in_h/in_w
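The cases above can be sketched as bare assertions (plain Python with a hypothetical `ratios` helper, not the PR's test harness), mirroring the kernel's scale-factor formula:

```python
def ratios(in_h, in_w, out_h, out_w):
    """Mirror of the kernel's corner-aligned scale factors."""
    ratio_h = (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
    ratio_w = (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
    return ratio_h, ratio_w

# Case 1: out_h = 1, out_w = 1 -> both ratios are 0, every output pixel
# samples the top-left corner and no division by zero occurs.
assert ratios(4, 4, 1, 1) == (0.0, 0.0)

# Case 2a: upsampling (out > in) -> ratio < 1.
rh, rw = ratios(4, 4, 7, 7)
assert rh < 1 and rw < 1

# Case 2b: downsampling (out < in) -> ratio > 1.
rh, rw = ratios(8, 8, 3, 3)
assert rh > 1 and rw > 1
```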

using framework::OperatorWithKernel::OperatorWithKernel;

protected:
void InferShape(const framework::InferShapeContext &ctx) const override {

The InferShape needs to be rewritten with the new interface; see the details in InferShape.

auto dim_X = ctx.Input<Tensor>("X")->dims(); // NCHW format
int out_h = ctx.GetAttr<int>("out_h");
int out_w = ctx.GetAttr<int>("out_w");
PADDLE_ENFORCE_EQ(dim_X.size(), 4, "X's dimension must be 4");

Maybe we need to check out_h and out_w, so that they do not go out of bounds.

template <typename T>
class BilinearInterpGradCUDAKernel : public framework::OpKernel {
public:
void Compute(const framework::ExecutionContext& context) const override {}

Please add the TODO comment here and merge this PR ASAP.

class BilinearInterpKernel : public framework::OpKernel {
public:
void Compute(const framework::ExecutionContext& ctx) const override {
auto input_t = ctx.Input<Tensor>("X"); // float tensor

Prefer the same name as the parameter in OpMaker: input_t => X. And if the return value is a pointer, `auto*` is better.

namespace paddle {
namespace operators {

using framework::Tensor;

Using `using` declarations at namespace scope in a header file violates the Google style; see the details in Namespaces:

Do not use Namespace aliases at namespace scope in header files except in explicitly marked internal-only namespaces, because anything imported into a namespace in a header file becomes part of the public API exported by that file.

By the way, I think `using framework::Tensor;` is enough here.

@qingqing01

Related #9308

@qingqing01 qingqing01 closed this Apr 18, 2018
@wangyang59 wangyang59 mentioned this pull request Apr 18, 2018
@luotao1 luotao1 deleted the bilinear branch June 11, 2018 12:05
Successfully merging this pull request may close these issues.

Bilinear Interpolation Operator