
MagicMind inference: kernel load error when running the model (Resolved)
Posted by lemonlikelihood, 2023-06-25 20:03:57 · 2 replies · Technical Q&A

【Cambricon hardware product model】Required*:

MLU290
【MagicMind version】Required*:

magicmind_1.3.0-1_centos_7.6

【Error message】Required*:

python3 predict_on_resnet50.py --path_mm_model ../cn_mm2/resnet50_parser.mm
Can not write to dir, file log disabled: /var/log
2023-06-25 19:38:30.990588: [cnrtError] [145311] [Card : 0] cnpapi__cnrtInvokeKernel: Found kernel(_Z25MLUTransposeKernel2DSmallIiLj1EEvPvS0_jjb) but not load(101315).
[2023-6-25 19:38:30] [CNNL] [Error]:Check failed: Found cncc does not register the module to runtime. after invoke kernel (MLUTransposeKernel2DSmall<T, REPEAT><<<kDim, kType, handle_->queue>>>( (void *)x, y, x_kernel[TR_N], x_kernel[TR_H], split_h))
2023-06-25 19:38:30.990938: [cnrtError] [145311] [Card : 0] cnpapi__cnrtInvokeKernel: Found kernel(_Z25quantizeParamGetMaxAndMinPvj14cnnlDataType_tiiS_) but not load(101315).
[2023-6-25 19:38:30] [CNNL] [Error]:Check failed: Found cncc does not register the module to runtime. after invoke kernel (quantizeParamGetMaxAndMin<<<kDim, kType, handle->queue>>>( (void *)input, dim_input, input_data_type, bitwidth, job_num, workspace))
2023-06-25 19:38:30.990999: ERROR:  magicmind/runtime/kernel/cnnl_kernel/conv/float_conv.cc:115] Call cnnl function cnnlQuantizeParam(handle, CNNL_QUANTIZE_POSITION, x_desc, x_ptr, kVars::kBitWidthInt31, workspace, workspace_size, x_position_c, x_scale, x_offset) failed.
2023-06-25 19:38:30.991022: [cnrtError] [145311] [Card : 0] cnpapi__cnrtInvokeKernel: Found kernel(_Z25quantizeParamGetMaxAndMinPvj14cnnlDataType_tiiS_) but not load(101315).
[2023-6-25 19:38:30] [CNNL] [Error]:Check failed: Found cncc does not register the module to runtime. after invoke kernel (quantizeParamGetMaxAndMin<<<kDim, kType, handle->queue>>>( (void *)input, dim_input, input_data_type, bitwidth, job_num, workspace))
2023-06-25 19:38:30.991071: ERROR:  magicmind/runtime/kernel/cnnl_kernel/conv/float_conv.cc:115] Call cnnl function cnnlQuantizeParam(handle, CNNL_QUANTIZE_POSITION, x_desc, x_ptr, kVars::kBitWidthInt31, workspace, workspace_size, x_position_c, x_scale, x_offset) failed.
2023-06-25 19:38:30.991154: ERROR:  magicmind/runtime/core/mlu_device.cc:227]
Kernel Enqueue error:
Internal: cnnlFloatConvForward( handle_, conv_desc_, algo_, nullptr, input_, input_ptr_, weight_, weight_ptr_, bias_, bias_ptr_, workspace_, workspace_size_, nullptr, output_, output_ptr_, is_float_):Call cnnl function Failed, error code: CNNL_STATUS_EXECUTION_FAILED
In queue id 0 pointer 0x8427ec0
MLU device : MLU:0_0 in context ctx_id_


main/mm.conv2d/mm.conv2d/cnnl.conv : {
  input {
    cnnlTensorDescriptor {
      Layout: NHWC
      DataType: FLOAT32
      DimNb: 4
      OnchipDtype: INT31
      Pos: 0
      Scale: 1.000000
      Offset: 0
      1 256 256 3
    }
    ptr: 0x10080ffefc00000
  }
  weight {
    cnnlTensorDescriptor {
      Layout: NHWC
      DataType: FLOAT32
      DimNb: 4
      OnchipDtype: INT31
      Pos: 0
      Scale: 1.000000
      Offset: 0
      64 7 7 3
    }
    ptr: 0x10080fff6140000
  }
  bias {
    cnnlTensorDescriptor {
      Layout: NHWC
      DataType: FLOAT32
      DimNb: 4
      OnchipDtype: Unknown
      Pos: 0
      Scale: 1.000000
      Offset: 0
      1 1 1 64
    }
    ptr: 0x10080fff6163100
  }
  position_input {
    ptr: 0
  }
  position_weight {
    ptr: 0
  }
  output {
    cnnlTensorDescriptor {
      Layout: NHWC
      DataType: FLOAT32
      DimNb: 4
      OnchipDtype: FLOAT32
      Pos: 0
      Scale: 1.000000
      Offset: 0
      1 128 128 64
    }
    ptr: 0x10080ffee800000
  }
  padding_ { EXPLICIT }
  pad_ { [3, 3, 3, 3] }
  stride_ { [2, 2] }
  dilation_ { [1, 1] }
  group_ { 1 }
  layouts_ { [NHWC, NHWC, ARRAY, NHWC] }
  runtime context {
    workspace : 0x10080ffef000000 with expected size 824096 with recorded size 4194304
    input_zero_element_ { 0 }
    compute_type_ { FLOAT }
    onchip_type_ { [] }

2023-06-25 19:38:30.991725: ERROR:  magicmind/runtime/executor/executor.cc:851]  status == Status::OK() is false.
2023-06-25 19:38:30.991744: ERROR:  magicmind/runtime/executor/function_runner.cc:484]  Status::OK() == ret is false.
Traceback (most recent call last):
  File "predict_on_resnet50.py", line 84, in <module>
    main(args)
  File "predict_on_resnet50.py", line 64, in main
    assert context.enqueue(inputs, outputs, queue).ok()
AssertionError
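For what it's worth, the conv descriptor in the dump above looks internally consistent. The quick check below is only an illustrative sketch (it is not part of predict_on_resnet50.py); it recomputes the output spatial size of the dumped conv layer from the dumped parameters with the standard convolution output-size formula, and the result matches the dumped output shape 1 128 128 64. So the failure appears to be in kernel loading, as the "Found kernel(...) but not load" and "cncc does not register the module to runtime" messages say, rather than in the conv configuration itself.

# Illustrative sanity check only; all values are taken from the descriptor dump above:
# input 1x256x256x3 (NHWC), weight 64x7x7x3, pad [3, 3, 3, 3], stride [2, 2], dilation [1, 1].
def conv_out(size, kernel, pad, stride, dilation=1):
    # Standard convolution output-size formula.
    return (size + 2 * pad - dilation * (kernel - 1) - 1) // stride + 1

h = conv_out(256, kernel=7, pad=3, stride=2)  # -> 128
w = conv_out(256, kernel=7, pad=3, stride=2)  # -> 128
print(h, w)  # 128 128, matching the dumped output 1 128 128 64 (64 = number of filters)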

【Steps to reproduce】Optional:


【Related logs/documents】Optional:
Attach files if available.


【Link to the failing code】Optional:
A link to the code on GitHub or Gitee.
