On my own machine (Ubuntu 18.04, no Cambricon card), I followed https://developer.cambricon.com/index/curriculum/expdetails/id/13/classid/8.html to do YOLOv5 model quantization, fusion, and offline conversion, using a Docker environment. torch and torch_mlu installed successfully, and following that article I can use the official yolov5s.pt in Cambricon PyTorch; quantization of the model succeeds. Then, for layer-by-layer and fused inference, I modified the code as described in the steps and ran /torch/yolov5# python detect.py --device cpu --weights yolov5s_int8.pt --cfg mlu, which reports the following error:
(pytorch) root@edfff4f50630:/torch/yolov5# python detect.py --device cpu --weights yolov5s_int8.pt --cfg mlu
CNML: 7.10.6 c2897882b
CNRT: 4.10.7 a16cc83
2022-08-02 18:41:08.354542: [cnrtWarning] [1092] [Card : NONE] Failed to initialize CNDEV. Host manage interface disabled
2022-08-02 18:41:08.357712: [cnrtError] [1092] [Card : NONE] No MLU can be found !
2022-08-02 18:41:08.357721: [cnrtError] [1092] [Card : NONE] Error occurred in cnrtInit during calling driver interface.
2022-08-02 18:41:08.357725: [cnrtError] [1092] [Card : NONE] Return value is 5, MLU_ERROR_NO_DEVICE, means that "No useful mlu device"
2022-08-02 18:41:08.357730: [cnmlError] No MLU device
2022-08-02 18:41:08.357799: [cnmlError] No MLU device
2022-08-02 18:41:08.366540: [cnrtError] [1092] [Card : NONE] input param is invalid device handle in cnrtSetCurrentDevice
Namespace(agnostic_nms=False, augment=False, cfg='mlu', classes=None, conf_thres=0.25, device='cpu', exist_ok=False, img_size=640, iou_thres=0.45, jit=False, name='exp', project='runs/detect', save_conf=False, save_txt=False, source='data/images', update=False, view_img=False, weights=['yolov5s_int8.pt'])
Using torch 1.3.0a0+b8d5360 CPU
from n params module arguments
0 -1 1 3520 models.common.Focus [3, 32, 3]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 1 156928 models.common.C3 [128, 128, 3]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 1 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 656896 models.common.SPP [512, 512, [5, 9, 13]]
9 -1 1 1182720 models.common.C3 [512, 512, 1, False]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 229245 models.yolo.Detect [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 283 layers, 7276605 parameters, 7276605 gradients
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 229245 gradients
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 229245 gradients
from n params module arguments
0 -1 1 3520 models.common.Focus [3, 32, 3]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 1 156928 models.common.C3 [128, 128, 3]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 1 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 656896 models.common.SPP [512, 512, [5, 9, 13]]
9 -1 1 1182720 models.common.C3 [512, 512, 1, False]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 229245 models.yolo.Detect [80, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 283 layers, 7276605 parameters, 7276605 gradients
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 229245 gradients
image 1/2 /torch/yolov5/data/images/bus.jpg: y.shape: torch.Size([1, 255, 80, 60])
y.shape: torch.Size([1, 255, 40, 30])
y.shape: torch.Size([1, 255, 20, 15])
[ERROR][/pytorch/catch/torch_mlu/csrc/aten/core/tensor_impl.cpp][line:569][cpu_data][thread:139675793758016][process:1092]:
Both cpu_storage and mlu_storage are not initialized!
Please check is there any invalid tensor operates such as:
output = input.cpu() or output = input.to("cpu") in pytorch model when doing mlu/mfus inference.
Can not call cpu_data on an empty tensor.
[WARNING][/pytorch/catch/torch_mlu/csrc/aten/operators/op_methods.cpp][line:68][copy_][thread:139675793758016][process:1092]:
copy_ Op cannot run on MLU device, start running on CPU!
[ERROR][/pytorch/catch/torch_mlu/csrc/aten/core/tensor_impl.cpp][line:569][cpu_data][thread:139675793758016][process:1092]:
Both cpu_storage and mlu_storage are not initialized!
Please check is there any invalid tensor operates such as:
output = input.cpu() or output = input.to("cpu") in pytorch model when doing mlu/mfus inference.
Traceback (most recent call last):
File "detect.py", line 257, in <module>
detect()
File "detect.py", line 139, in detect
pred = pred.data.cpu().type(torch.FloatTensor)
RuntimeError: Can not call cpu_data on an empty tensor.
The layer-by-layer code related to the error is as follows (the forum stripped most quotes and commas from the paste; reconstructed best-effort, with elided literals left as ...):

    if opt.cfg == 'mlu':
        img = img.type(torch.HalfTensor).to(ct.mlu_device()) if half else img.to(ct.mlu_device())
        pred = quantized_net(img)[0]
        pred = pred.data.cpu().type(torch.FloatTensor)
        box_result = get_boxes(pred)
        print(im0s.shape)
        print(box_result)
        res = box_result[0].tolist()
        with open(...) as f:
            for pt in sorted(res, key=lambda x: (x[0], x[1])):
                f.write('...'.format(pt[0], pt[1], pt[2], pt[3]))
                cv2.rectangle(im0s, (int(pt[0]), int(pt[1])), (int(pt[2]), int(pt[3])), (...))
        cv2.imwrite('...'.format(os.path.basename(path).split('.')[0]), im0s)
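For context on the crashing line, here is a minimal sketch of a guard one could put in front of the .cpu() call. The helper safe_to_cpu is my own addition, not part of the Cambricon demo; on plain CPU torch it can only check for a zero-element tensor, which simply makes the "empty tensor" failure explicit instead of crashing deep inside torch_mlu:

```python
import torch

def safe_to_cpu(pred):
    # pred.data.cpu() is the call that crashed with
    # "Can not call cpu_data on an empty tensor"; an output whose
    # storage was never populated (e.g. the MLU forward never ran
    # because no device was found) has nothing to copy.
    if pred is None or pred.numel() == 0:
        raise RuntimeError("prediction tensor is empty; the MLU forward "
                           "likely never produced data (no MLU device)")
    return pred.data.cpu().type(torch.FloatTensor)
```
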
Could you please tell me where the problem is?
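As an aside, since the log shows cnrtInit returning MLU_ERROR_NO_DEVICE on this card-less host, one workaround I am considering is a CPU fallback for device selection. pick_device below is my own sketch, not code from the tutorial; treating a failed import or a runtime error from torch_mlu as "no device present" is an assumption:

```python
def pick_device():
    """Return an MLU device handle if the Cambricon stack is usable,
    otherwise fall back to the string "cpu"."""
    try:
        # torch_mlu is the Cambricon PyTorch extension used in detect.py;
        # on a host without an MLU card the import or the device call is
        # assumed to fail (cnrtInit reports MLU_ERROR_NO_DEVICE).
        import torch_mlu.core.mlu_model as ct
        return ct.mlu_device()
    except (ImportError, RuntimeError, OSError):
        return "cpu"
```

On this machine, where torch_mlu cannot initialize a device, pick_device() would simply return "cpu" and the script could skip the MLU branch entirely.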