Sorry — I checked again with colleagues on the driver team. The 4.9 driver does not include the fix for this issue; it is only included in the 4.12 driver, which has not yet been published on the forum. So for now you will still need to place decoding and infer2 on the same card. I am asking colleagues about the 4.12 release date and will get back to you once it is confirmed.
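Until the 4.12 driver is available, the workaround is to give the decoder and the inference module the same `device_id`. A minimal sketch of the relevant fields (field names follow the CNStream JSON configuration; the model path is a placeholder, not an actual file):

```json
{
  "source" : {
    "class_name" : "cnstream::DataSource",
    "next_modules" : ["detector"],
    "custom_params" : {
      "decoder_type" : "mlu",
      "output_type" : "mlu",
      "device_id" : 0
    }
  },
  "detector" : {
    "class_name" : "cnstream::Inferencer2",
    "next_modules" : ["tracker"],
    "custom_params" : {
      "model_path" : "model.cambricon",
      "func_name" : "subnet0",
      "device_id" : 0
    }
  }
}
```

The key point is that `device_id` is identical in both modules, so no cross-card (peer-to-peer) memory access is needed.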
But in my tests, decoding and infer2 must be placed on the same card, otherwise it errors out. The error is as follows:

I1217 16:47:19.466986 13006 model_cnrt.cpp:204] Load function from offline model succeeded
CNSTREAM CORE I1217 16:47:19.479883 13006] Pipeline[MyPipeline] Start
CNSTREAM CORE I1217 16:47:19.480082 13006] [stream_0]: Stream opening...
CNSTREAM CORE I1217 16:47:19.480195 13006] Add stream success, stream id : [stream_0]
CNSTREAM SOURCE I1217 16:47:19.489688 13030] [stream_0]: Got video info.
CNSTREAM SOURCE I1217 16:47:19.489768 13030] [stream_0]: Begin create decoder
CNSTREAM SOURCE I1217 16:47:19.495527 13030] [stream_0]: Finish create decoder
hangup resv on port 1
2021-12-17 16:47:19.584963: [cnrtError] [13025] [Card : 0] Error occurred in cnrtGetPeerAccessibility during calling driver interface.
2021-12-17 16:47:19.585021: [cnrtError] [13025] [Card : 0] Return value is 258, MLU_MEMORY_ERROR_AccessPeer, means that "Failed to check peerability"
CNSTREAM FRAME F1217 16:47:19.585049 13025] Call [cnrtGetPeerAccessibility(&can_peer, device_id, this->ctx.dev_id)] failed, error code: 632012

My configuration file is as follows:

"source" : {
  "class_name" : "cnstream::DataSource",
  "next_modules" : ["detector"],
  "custom_params" : {
    "reuse_cndec_buf" : "true",
    "output_type" : "mlu",
    "decoder_type" : "mlu",
    "input_buf_number" : 10,
    "output_buf_number" : 10,
    "device_id" : 0
  }
},
"detector" : {
  "class_name" : "cnstream::Inferencer2",
  "parallelism" : 1,
  "next_modules" : ["tracker"],
  "max_input_queue_size" : 20,
  "custom_params" : {
    "model_path" : "/home/joyiot/Desktop/220/ly_w/s-fin-fc-half-0.05-45.cambricon",
    // "model_path" : "s-11-bdd-ini-fin-c4b4-220.cambricon",
    "func_name" : "subnet0",
    "preproc_name" : "VideoPreprocYolov3",
    "postproc_name" : "VideoPostprocYolov5",
    "keep_aspect_ratio" : "true",
    "model_input_pixel_format" : "ARGB32",
    // "model_input_pixel_format" : "RGB24",
    "batching_timeout" : 100,
    "threshold" : 0.2,
    "engine_num" : 1,
    "device_id" : 1
  }
}
Yes, either infer1 or infer2 will do.
Hello — the CNStream manual says that different pipeline stages can be assigned to different cards to make full use of resources. Is that actually feasible? In my tests, as soon as inference and tracking are not on the same device, I get an error:

[cndrvWarning]Current version of Driver is 4.6, we cannot dump the information about the kernel failure.
[cndrvWarning]If you want get the information about the kernel failure, please update driver to 4.8 version or higher
2021-12-15 14:46:28.897689: [cnrtError] [539] [Card : 1] Error occurred in cnrtInvokeKernel_V3 during calling driver interface.
Hello — VideoPreprocYolov5 currently does not support offline models with firstconv. We suggest that you either:
(1) use an offline model without firstconv, or
(2) keep using the firstconv offline model and modify the code in CNStream/samples/common/preprocess/video_preprocess_yolov5.cpp.
For the value of model_input_pixel_format, please refer to samples/cns_launcher/configs/yolov5_object_detection_mlu270.json, where model_input_pixel_format is set to RGB24.
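For option (1), the relevant part of the Inferencer2 configuration would look like the sketch below (the model path is a placeholder; the field layout mirrors the user's posted configuration and the referenced yolov5_object_detection_mlu270.json):

```json
"detector" : {
  "class_name" : "cnstream::Inferencer2",
  "custom_params" : {
    "model_path" : "yolov5_no_firstconv.cambricon",
    "func_name" : "subnet0",
    "preproc_name" : "VideoPreprocYolov5",
    "postproc_name" : "VideoPostprocYolov5",
    "model_input_pixel_format" : "RGB24",
    "device_id" : 0
  }
}
```

Note that model_input_pixel_format changes from ARGB32 to RGB24 once the model no longer carries firstconv, since firstconv is what normally consumes the 4-channel input.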