(pytorch) root@k8s-node-209:/workspace/Downloads/yolov5-6.1-20220429/yolov5-demo-pytorch-master-6.1# python3 test_yolov5m-6.1.py --mode mfus
CNML: 7.10.2 ba20487
CNRT: 4.10.1 a884a9a
Overriding model.yaml nc=80 with nc=4

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  2    115712  models.common.C3                        [128, 128, 2]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  3    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1   1182720  models.common.C3                        [512, 512, 1]
  9                -1  1    656896  models.common.SPPF                      [512, 512, 5]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 24      [17, 20, 23]  1     24273  models.yolo.Detect                      [4, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 270 layers, 7030417 parameters, 7030417 gradients

/workspace/Downloads/yolov5-6.1-20220429/yolov5-demo-pytorch-master-6.1/models/yolo.py:65: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  print('y.shape: ', y.shape)
y.shape:  torch.Size([1, 27, 80, 80])
y.shape:  torch.Size([1, 27, 40, 40])
y.shape:  torch.Size([1, 27, 20, 20])
batchNum: 1
----------
torch.Size([1, 3, 640, 640])
batchNum: 1
torch.Size([1, 7232, 1, 1])
tensor([[13.00000],
        [ 2.13672],
        [-0.19788],
        ...,
        [ 1.00977],
        [-0.19812],
        [ 0.29785]], dtype=torch.float16)
num_boxes_final: 13.0
[array([[    559,  262.75,   642.5,     339, 0.92188,       0],
        [    505,  207.38,     585,  272.25, 0.85547,       0],
        [    485,   185.5,   546.5,  236.75, 0.85498,       0],
        [ 222.38,  187.25,     277,     240, 0.85449,       0],
        [ 67.438,  412.75,  211.12,  500.25, 0.84619,       0],
        [ 128.75,  322.25,     235,     438, 0.79639,       0],
        [ 204.75,  213.25,     273,  282.75, 0.74658,       0],
        [  171.5,  253.12,     264,     334, 0.71484,       0],
        [  464.5,     169,     517,  208.25, 0.68213,       0],
        [    445,  154.38,  488.75,  191.88, 0.64893,       0],
        [ 245.75,     167,  287.75,  210.62, 0.54834,       0],
        [  427.5,  140.75,     474,  171.25, 0.50293,       0],
        [    622,     321,     641,     395, 0.39209,       0]])]
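The three `y.shape` lines above are not arbitrary: with `nc=4` classes, each YOLOv5 Detect head emits 3 anchors × (4 classes + 4 box coordinates + 1 objectness) = 27 channels, and the 80/40/20 grid sizes are the 640×640 input divided by the strides 8/16/32. A quick sanity check:

```python
# Each Detect scale outputs num_anchors * (nc + 5) channels:
# nc class scores plus 4 box coordinates and 1 objectness score per anchor.
nc = 4            # classes, per "Overriding model.yaml nc=80 with nc=4"
num_anchors = 3   # three anchor boxes per grid cell at each scale
channels = num_anchors * (nc + 5)
print(channels)  # 27, matching torch.Size([1, 27, 80, 80]) etc.

# Grid sizes for a 640x640 input at strides 8, 16, 32:
grids = [640 // stride for stride in (8, 16, 32)]
print(grids)  # [80, 40, 20]
```

This also explains the default model's head: with `nc=80` the same formula gives 255 channels per scale.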
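In fused (`--mode mfus`) mode the network returns one flattened tensor of shape [1, 7232, 1, 1] instead of the three per-scale feature maps, and element 0 carries the detection count (13.0 above, echoed by `num_boxes_final`). The sketch below shows one way such a buffer could be unpacked into the [x1, y1, x2, y2, conf, cls] rows printed at the end; the record layout (7 floats per box: batch id, class id, score, then normalized corner coordinates, after a 64-element header so that 64 + 1024 × 7 = 7232) is an assumption modeled on common SSD-style detection-output layers, not taken from the Cambricon documentation.

```python
import numpy as np

def parse_detections(flat, img_size=640, record_len=7, offset=64):
    """Hypothetical parser for a flattened MLU detection buffer.

    Assumes flat[0] is the box count and each detection is a 7-float
    record (batch_id, class_id, score, x1, y1, x2, y2) with coordinates
    normalized to [0, 1], starting at `offset`. These layout details are
    guesses; only the count-in-element-0 behavior is visible in the log.
    """
    flat = np.asarray(flat, dtype=np.float32).reshape(-1)
    num_boxes = int(flat[0])
    boxes = []
    for i in range(num_boxes):
        rec = flat[offset + i * record_len : offset + (i + 1) * record_len]
        _, cls_id, score, x1, y1, x2, y2 = rec
        # Emit rows in the [x1, y1, x2, y2, conf, cls] order shown above.
        boxes.append([x1 * img_size, y1 * img_size,
                      x2 * img_size, y2 * img_size, score, cls_id])
    return np.array(boxes)
```

With the real fp16 buffer this would reproduce the 13-row array above; `offset=64` is only a guess that makes the 7232-element size add up, so verify it against the demo's own post-processing code before relying on it.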