It is the file that contains your main function.
Judging from the title, is this yolov5 v6? That model contains the SiLU activation, which the current framework cannot fuse and optimize well. You can try optimizing it with the config.ini below:
config.ini file:
; Switchers for addr optimization.
[AddrOpt]
; Datatype:bool. Desc:The main switch to disable addr optimization (reshape, transpose, split, concat, overlap operations).
addrOptDisable=0

; Switchers for cache model.
[CacheModel]
; Datatype:std::string. Desc:Config cache model save path, using absolute path.
cacheFilePath=
; Datatype:std::string. Desc:Config cache model space limit, 1 means 1MB, default space is 2048MB.
cacheModelSpaceLimit=

; Variable for Cnlog
[Cnlog]
; Datatype:bool. Desc:Dump operation info for core dump debug.
opInfoEnable=0

; Variable for Data
[Data]
; Datatype:bool. Desc:Whether to change sopa last layer's store type.
preConvertEnable=1

; Switchers for debug tool.
[DebugTool]
; Datatype:std::string. Desc:Config dump level for dumping input and output tensors. 0 is the default value (function closed); 1 dumps the selected output tensors; 2 dumps the selected tensors whether they are inputs or outputs.
dumpLevel=
; Datatype:bool. Desc:Dump data with NHWC layout by default; dump NCHW layout if this value is set to 'true'.
dumpOrderNHWCDisable=0
; Datatype:std::string. Desc:Path to save dump files.
dumpPath=
; Datatype:bool. Desc:Enable info log level.
printInfo=0

; Variable for Fusion
[Fusion]
; Datatype:bool. Desc:Whether to improve backfusion.
backFusionEnable=1

; Switchers for general process optimization.
[GeneralOpt]
; Datatype:bool. Desc:Disable cluster optimization.
clusterOpOptDisable=0
; Datatype:bool. Desc:Disable internal transpose optimization.
transOptDisable=0

; Switchers for graph optimization.
[GraphOpt]
; Datatype:bool. Desc:The main switch to disable graph optimization.
optimizeGraphDisable=0

; Switchers for memory optimization.
[MemDeviceOpt]
; Datatype:bool. Desc:Disable internal memory reuse optimization.
closeIntmdReuse=0
; Datatype:bool. Desc:Enable internal memory reuse optimization while closeIntmdReuse is false and core limit is 16.
openFullCoreIntmdReuse=0

; Switchers for operation opt.
[OpOpt]
; Datatype:bool. Desc:Enable sigmoid and tanh high precision optimization.
activeHighPrecision=0
; Datatype:bool. Desc:Enable normalize, softmax, sigmoid and tanh high precision optimization.
highPrecision=0
; Datatype:bool. Desc:Enable normalize high precision optimization.
normalizeHighPrecision=0
; Datatype:bool. Desc:Enable softmax high precision optimization.
softmaxHighPrecision=0
; Datatype:bool. Desc:Enable big ci case in topk optimization.
topkBigCiOpt=0

; Switchers for TFU
[Tfu]
; Datatype:bool. Desc:Whether to let deconv be the final layer of tfu.
deconvFinal=0
; Datatype:bool. Desc:Whether to enable TFU fusion.
enable=1
; Datatype:bool. Desc:Whether to enable tfu when running an fp32/int16 network.
fp32Int16Enable=1
; Datatype:bool. Desc:Whether to fuse deconv op by tfu.
fuseDeconv=0
; Datatype:bool. Desc:Whether to enable tfu shared memory parity optimization.
ioParity=1
; Datatype:int. Desc:Max block number in a tfu subgraph.
maxBlockNum=4
; Datatype:int. Desc:Max deconv num fused in one tfu.
maxDeconvNum=2
; Datatype:bool. Desc:Support multi-segment active op.
multiSegmentAct=1
; Datatype:bool. Desc:Whether to skip firstconv.
skipFirstconv=0
; Datatype:bool. Desc:Whether to disable TFU when w=1.
skipWIsOne=0
; Datatype:bool. Desc:Whether to enable small ci/co optimization in tfu.
smallCiCo=1
; Datatype:bool. Desc:Whether to enable special net check, which can influence stride overlap detection by tfu.
specialNetCheck=1
; Datatype:bool. Desc:tfu split on batches.
splitOnN=0
; Datatype:int. Desc:Redundancy computing pixels threshold in first op.
strideMaxOverlap=4
; Datatype:bool. Desc:Swap weight between tfu subgraphs (tfu can fuse deeper).
swapWeight=0
skipsimplecase=0

; Switchers for partition optimization.
[PartitionOpt]
; Datatype:bool. Desc:Enable dim H partition and op optimization in interp op.
interpOptEnable=0
; Datatype:bool. Desc:Enable all split/concat op optimization based on addr opt in partition decision.
partitionOpOptEnable=1
; Datatype:bool. Desc:Optimize all overlap ops in partition decision; overlap op main switch.
partitionOverlapOpOptEnable=0
Then, before generating the model, set this environment variable:
export CNML_OPTIMIZE=USE_CONFIG:config.ini
Note in particular that the ini file must be in the same directory as the .py file containing the entry function.
OK, understood, thank you very much!
What I am measuring now is the inference time alone, excluding pre/post-processing time, CPU-to-MLU data-copy time, and the cnrtInvokeRuntimeContext_V2 time; that is, only the time difference around the cnrtSyncQueue call, because I found that this step accounts for the largest share of the inference time:
auto t03 = GetTickCount();
CNRT_CHECK(cnrtSyncQueue(detInitParam->queue));
auto t04 = GetTickCount();
With batchsize=4, this step takes about 137 ms on average.
With batchsize=1, this step takes about 41 ms on average.
Hello, cnrtSyncQueue is where the inference task executes on the board.
1. On the latency growing with batch size:
The 220 has only 4 cores. With batch_size=1 and core_number=4, inference occupies all 4 cores; with batch_size=4 and core_number=4, inference likewise occupies all 4 cores. With no increase in compute-core resources but a fourfold increase in input, it is normal for yolov5 that the bs=4 latency is three to four times the bs=1 latency.
2. On the latency being rather long, two things to confirm:
1) Is your latency end-to-end (including image loading, pre-processing, model inference, and post-processing), or the hardware compute time of model inference alone? 2) What is the cnrtSyncQueue latency?