Hello, we have identified that this model contains an operator (masked_select) that MLU270 does not support. We suggest switching to a different model, or testing this model on MLU370 instead.
Hi, the Cambricon open documentation at https://www.cambricon.com/docs/pytorch/pytorch_7_operator_support/Pytorch_operator_support.html#masked-select says this operator is supported.
Can you point to the exact call site? What I see is conv2d failing, but I don't know where maximum is being called internally. The network is the mbt2018_mean model from GitHub. What do you mean by the graph structure? Which script?
Hello, the conv2d issue arises because quantization also requires running a CPU forward pass to record the quantization parameters:

net_quantization = mlu_quantize.quantize_dynamic_mlu(net, {'mean': mean, 'std': std, 'firstconv': True}, dtype='int8', gen_quant=True)
# Run a CPU forward pass to record the quantization parameters
_ = net_quantization(cpu_data)
torch.save(net_quantization.state_dict(), 'test_quantization.pth')
Could you export the model's graph structure with torch.jit.trace?
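For reference, a minimal sketch of what that export looks like. TinyNet here is a hypothetical stand-in, not the actual mbt2018_mean network; the printed graph lists every ATen op (including any maximum call) together with the shapes flowing into it.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; replace with the real network under test.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=5, stride=2, padding=2)

    def forward(self, x):
        return torch.relu(self.conv(x))

net = TinyNet().eval()
example = torch.randn(1, 3, 64, 96)

# Trace with an example input and inspect/save the resulting graph.
traced = torch.jit.trace(net, example)
print(traced.graph)
traced.save('traced_model.pt')  # shareable artifact for the support thread
```

The saved .pt file (or just the printed graph text) is enough to see which op feeds the failing maximum.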
[DEBUG][/pytorch/catch/torch_mlu/csrc/aten/operators/cnml_ops.cpp][line:226][copy_][thread:140306007844672][process:1670]: self[shape: [1, 3, 512, 768], device: mlu:0, dtype: Float] src[shape: [1, 3, 512, 768], device: cpu, dtype: Float] non_blocking[false]
[DEBUG][/pytorch/catch/torch_mlu/csrc/aten/operators/cnml_ops.cpp][line:190][conv2d][thread:140306007844672][process:1670]: input[shape: [1, 3, 512, 768], device: mlu:0, dtype: Float] weight[shape: [128, 3, 5, 5], device: mlu:0, dtype: Float] bias[shape: [128], device: mlu:0, dtype: Float] padding[value: 2] [value: 2] stride[value: 2] [value: 2] dilation[value: 1] [value: 1] groups[value: 1] q_scale[shape: [2], device: mlu:0, dtype: Float] q_mode[shape: [1], device: mlu:0, dtype: Int]
[DEBUG][/pytorch/catch/torch_mlu/csrc/aten/operators/cnml_ops.cpp][line:398][max][thread:140306007844672][process:1670]: self[shape: [128], device: mlu:0, dtype: Float] other[shape: [1], device: mlu:0, dtype: Float]
[ERROR][/pytorch/catch/torch_mlu/csrc/aten/operators/cnml/internal/maximum_internal.cpp][line:11][cnml_maximum_internal][thread:140306007844672][process:1670]: Shape of input should match shape of other
Hello, we have now traced the failure to the shape check on maximum's two inputs. CNML operators require both inputs to be 4D, and their shapes must match. To diagnose further we need to see the graph to determine the cause; could you provide the model's graph structure?
The official Cambricon PyTorch docker.
Hello, is this the specific problem you are seeing?
RuntimeError: torch_mlu::conv2d() Expected a value of type 'Tensor' for argument 'q_scale' but instead found type 'NoneType'.
Position: 7
Value: None
Declaration: torch_mlu::conv2d(Tensor input, Tensor weight, Tensor bias, int[] padding, int[] stride, int[] dilation, int groups, Tensor q_scale, Tensor q_mode) -> (Tensor)
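Judging from the declaration, q_scale is typed as a required Tensor, so this error likely means the model was run on MLU without the recorded quantization parameters (i.e. without going through quantize_dynamic_mlu and loading the saved state dict first) - that reading is an assumption, not confirmed in this thread. A minimal sketch of the schema check behind the message (a hypothetical helper, not the torch_mlu implementation):

```python
def check_conv2d_args(q_scale, q_mode):
    """Hypothetical sketch of the TorchScript schema check: the
    torch_mlu::conv2d declaration types q_scale and q_mode as Tensor,
    so passing None for either is rejected before the op runs."""
    for position, (name, value) in enumerate(
            [('q_scale', q_scale), ('q_mode', q_mode)], start=7):
        if value is None:
            raise RuntimeError(
                f"torch_mlu::conv2d() Expected a value of type 'Tensor' "
                f"for argument '{name}' but instead found type 'NoneType'. "
                f"Position: {position} Value: None")
    return True

# Unquantized model: q_scale was never populated, so it arrives as None.
try:
    check_conv2d_args(None, None)
except RuntimeError as e:
    print(e)
```

The earlier debug log shows the working path with q_scale[shape: [2]] and q_mode[shape: [1]] populated, which is consistent with this reading.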