CNML: 7.10.2 ba20487
CNRT: 4.10.1 a884a9a
Overriding model.yaml nc=80 with nc=4

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  2    115712  models.common.C3                        [128, 128, 2]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  3    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1   1182720  models.common.C3                        [512, 512, 1]
  9                -1  1    656896  models.common.SPPF                      [512, 512, 5]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 24      [17, 20, 23]  1     24273  models.yolo.Detect                      [4, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 270 layers, 7030417 parameters, 7030417 gradients

/workspace/Downloads/yolov5-6.1-20220429/yolov5-demo-pytorch-master-6.1/models/yolo.py:65: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  print('y.shape: ', y.shape)
y.shape:  torch.Size([1, 27, 80, 80])
y.shape:  torch.Size([1, 27, 40, 40])
y.shape:  torch.Size([1, 27, 20, 20])
batchNum: 1
----------
torch.Size([1, 3, 640, 640])
batchNum: 1
torch.Size([1, 7232, 1, 1])
tensor([[13.00000],
        [ 2.92969],
        [-0.12732],
        ...,
        [ 0.71826],
        [-0.22290],
        [ 0.33276]], dtype=torch.float16)
num_boxes_final: 13.0
[array([[ -60.312, -290.5  ,   37.625,  -20.781,   -0.33911,  -0.15039],
        [-605.   , -880.5  , -100.88 , -849.5  ,   -1.3711 ,  -0.54395],
        [-757.   , -892.   , -621.   , -801.   ,   -1.0645 ,   -1.3828 ]])]
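The 27-channel outputs above follow from the standard YOLOv5 Detect-head layout (a sketch under that assumption; only the printed shapes, not the layout itself, are confirmed by this log): each detection scale emits na * (nc + 5) channels, where na is the anchors per scale (3 here), nc the class count (4 after the nc=80 → nc=4 override), plus 4 box coordinates and 1 objectness score.

```python
# Sketch assuming the standard YOLOv5 Detect-head channel layout.
def detect_channels(nc: int, na: int = 3) -> int:
    """Channels per Detect output map: na anchors x (nc classes + 4 box + 1 obj)."""
    return na * (nc + 5)

# With nc=4 (as overridden in the log), each of the three feature maps
# (strides 8/16/32 on a 640x640 input) carries 27 channels, matching
# the traced shapes [1, 27, 80, 80], [1, 27, 40, 40], [1, 27, 20, 20].
for size in (80, 40, 20):
    print((1, detect_channels(4), size, size))
```

With the default nc=80 the same formula gives the familiar 255 channels, which is a quick way to sanity-check a retrained head before tracing.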