"To implement proposal_fpn OP on cpu" error when generating a Faster R-CNN offline model on CPU
[Completed] lxjin200 · 2021-08-27 14:34:11 · Replies: 1 · Usage Help

When we tried to generate the offline model for Faster R-CNN on the CPU, we found that it raises ValueError: To implement proposal_fpn OP on cpu as soon as execution reaches the JIT trace. The full output is as follows:


(pytorch) root@black2070:/demo/quan_test# python mlu_forward.py

CNML: 7.9.2 1a1e33b

CNRT: 4.9.1 4cd7a8a

2021-08-26 14:02:57.516460: [cnrtWarning] [1511] [Card : NONE] Failed to initialize CNDEV. Host manage interface disabled 

2021-08-26 14:02:57.518464: [cnrtError] [1511] [Card : NONE] No MLU can be found !

2021-08-26 14:02:57.518469: [cnrtError] [1511] [Card : NONE] Error occurred in cnrtInit during calling driver interface.

2021-08-26 14:02:57.518472: [cnrtError] [1511] [Card : NONE] Return value is 5, MLU_ERROR_NO_DEVICE, means that No useful mlu device

2021-08-26 14:02:57.518476: [cnmlError] No MLU device

2021-08-26 14:02:57.518598: [cnmlError] No MLU device

test.png

torch.Size([1, 3, 1024, 64])

------------------------------------------

------------------------------------------

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

[warning] It seems that evaluation reaches maxium img_num or iteration is in network. Quantization still works.

0

2021-08-26 14:03:03.190940: [cnrtError] [1511] [Card : NONE] input param  is invalid device handle in cnrtSetCurrentDevice

26-Aug-21 14:03:03 - torch.Size([1, 3, 1024, 64])

/torch/venv3/pytorch/lib/python3.5/site-packages/torch/tensor.py:419: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).

  'incorrect results).', category=RuntimeWarning)

(tensor([[ -23.,  -11.,   23.,   11.],

        [ -16.,  -16.,   16.,   16.],

        [ -11.,  -23.,   11.,   23.],

        ...,

        [ 388., -181., 1112.,  181.],

        [ 494., -256., 1006.,  256.],

        [ 569., -362.,  931.,  362.]]),)

Traceback (most recent call last):

  File "mlu_forward.py", line 129, in

    mluout = mlu_forward(img_path, use_mlu=True)

  File "mlu_forward.py", line 115, in mlu_forward

    model = torch.jit.trace(model, example_tensor, check_trace=False)

  File "/torch/venv3/pytorch/lib/python3.5/site-packages/torch/jit/__init__.py", line 858, in trace

    check_tolerance, _force_outplace, _module_class)

  File "/torch/venv3/pytorch/lib/python3.5/site-packages/torch/jit/__init__.py", line 997, in trace_module

    module._c._create_method_from_trace(method_name, func, example_inputs, var_lookup_fn, _force_outplace)

  File "/torch/venv3/pytorch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 539, in __call__

    result = self._slow_forward(*input, **kwargs)

  File "/torch/venv3/pytorch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 525, in _slow_forward

    result = self.forward(*input, **kwargs)

  File "/torch/venv3/pytorch/lib/python3.5/site-packages/torchvision/models/detection/generalized_rcnn.py", line 77, in forward

    proposals,proposal_losses = self.rpn(image_sizes, features, self.Anchors, targets)

  File "/torch/venv3/pytorch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 539, in __call__

    result = self._slow_forward(*input, **kwargs)

  File "/torch/venv3/pytorch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 525, in _slow_forward

    result = self.forward(*input, **kwargs)

  File "/torch/venv3/pytorch/lib/python3.5/site-packages/torchvision/models/detection/rpn.py", line 454, in forward

    TO_REMOVE)

ValueError: To implement proposal_fpn OP on cpu 
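
For reference, here is a minimal sketch of the flow around the failing call. Only the input shape [1, 3, 1024, 64], the script name mlu_forward.py, and the torch.jit.trace(model, example_tensor, check_trace=False) call are taken from the log and traceback above; the model construction and the quantize_dynamic_mlu / save_as_cambricon calls are our assumption of typical CATCH (torch_mlu) usage and may differ from the actual script.

import torch
import torchvision
import torch_mlu.core.mlu_model as ct
import torch_mlu.core.mlu_quantize as mlu_quantize

def mlu_forward(img_path, use_mlu=True):
    # Input shape taken from the printout above: torch.Size([1, 3, 1024, 64]).
    example_tensor = torch.randn(1, 3, 1024, 64)

    # The traceback passes through torchvision's generalized_rcnn.py / rpn.py,
    # so we assume a (Cambricon-patched) torchvision Faster R-CNN here.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=False)
    model.eval()

    if use_mlu:
        # Assumed CATCH calls: load the quantized wrapper and ask CATCH to dump
        # an offline (.cambricon) model once the traced network is executed.
        model = mlu_quantize.quantize_dynamic_mlu(model)
        ct.save_as_cambricon('faster_rcnn')

    # The call from mlu_forward.py line 115 in the traceback; on a host without
    # an MLU everything runs on CPU, and proposal_fpn has no CPU implementation.
    model = torch.jit.trace(model, example_tensor, check_trace=False)
    return model(example_tensor)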


After checking the official documentation, we found that in catch/torch_mlu/csrc/aten/operators/op_methods.cpp, proposal_fpn simply throws an exception on the CPU path.
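
In other words, the CPU code path for this op is a stub that raises unconditionally. A purely illustrative Python rendering of that behaviour (the real implementation is C++ inside op_methods.cpp; this only shows what "directly throws an exception" means here):

# Illustrative sketch only; the actual stub lives in C++ in
# catch/torch_mlu/csrc/aten/operators/op_methods.cpp.
def proposal_fpn_cpu_stub(*args, **kwargs):
    # This is the message that surfaces as the ValueError in the traceback above.
    raise ValueError("To implement proposal_fpn OP on cpu")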

Since we only have an MLU220 and no MLU270, we would like to ask how we should modify things in this situation so that the Faster R-CNN offline model can be generated on the CPU. Thanks!


