srun: Job time limit was unset; set to partition default of 60 minutes
srun: ################################################################################
srun: # Please note that the oasis compute nodes have aarch64 architecture CPUs.   #
srun: # All submission nodes and all other compute nodes have x86_64 architecture  #
srun: # CPUs. Programs, environments, or other software that was built on x86_64   #
srun: # nodes may need to be rebuilt to properly execute on these nodes.           #
srun: ################################################################################
srun: job 3394992 queued and waiting for resources
srun: job 3394992 has been allocated resources
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
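For orientation, the kernel being timed below is a CSR sparse matrix-vector product. The following is a minimal pure-Python sketch of that kernel (not the actual spmv.py, which uses torch's beta sparse CSR tensors); it operates on the same three arrays the log prints, `crow_indices`, `col_indices`, and `values`, and makes visible the indirect gather `x[col_indices[k]]` that drives the cache and TLB counters measured in the runs below.

```python
# Pure-Python CSR sparse matrix-vector product (y = A @ x).
# Sketch only: the benchmarked spmv.py uses torch sparse CSR tensors,
# but the access pattern is the same.

def spmv_csr(crow_indices, col_indices, values, x):
    """Multiply a CSR matrix by a dense vector."""
    n_rows = len(crow_indices) - 1
    y = [0.0] * n_rows
    for row in range(n_rows):
        # Nonzeros of this row occupy [crow_indices[row], crow_indices[row+1]).
        for k in range(crow_indices[row], crow_indices[row + 1]):
            # Indirect gather through col_indices: the irregular memory
            # accesses behind the dcache/TLB traffic in the perf output.
            y[row] += values[k] * x[col_indices[k]]
    return y

# Tiny 3x3 example: [[1, 0, 2], [0, 3, 0], [4, 0, 5]]
crow = [0, 2, 3, 5]
col = [0, 2, 1, 0, 2]
vals = [1.0, 2.0, 3.0, 4.0, 5.0]
y = spmv_csr(crow, col, vals, [1.0, 1.0, 1.0])
# y == [3.0, 3.0, 9.0]
```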
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([    0,    10,    20,  ..., 39994, 39994, 39994]),
       col_indices=tensor([    1,     2,     3,  ...,  9711, 10875, 10876]),
       values=tensor([1., 1., 1.,  ..., 1., 1., 1.]), size=(10879, 10879),
       nnz=39994, layout=torch.sparse_csr)
tensor([0.1181, 0.8387, 0.0554,  ..., 0.8107, 0.4393, 0.9489])
Matrix: p2p-Gnutella04
Shape: torch.Size([10879, 10879])
NNZ: 39994
Density: 0.0003379223282393842
Time: 1.061662197113037 seconds

 Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella04.mtx 1000':

             50.59 msec task-clock:u              #    0.012 CPUs utilized
                 0      context-switches:u        #    0.000 /sec
                 0      cpu-migrations:u          #    0.000 /sec
             3,303      page-faults:u             #   65.291 K/sec
        51,318,459      cycles:u                  #    1.014 GHz                      (59.34%)
        74,705,078      instructions:u            #    1.46  insn per cycle           (83.02%)
                        branches:u
           366,825      branch-misses:u
        31,809,194      L1-dcache-loads:u         #  628.781 M/sec
           466,198      L1-dcache-load-misses:u   #    1.47% of all L1-dcache accesses
                        LLC-loads:u
                        LLC-load-misses:u
        30,390,161      L1-icache-loads:u         #  600.731 M/sec
           296,270      L1-icache-load-misses:u   #    0.97% of all L1-icache accesses
        61,518,375      dTLB-loads:u              #    1.216 G/sec                    (17.94%)
                        dTLB-load-misses:u                                            (0.00%)
                        iTLB-loads:u                                                  (0.00%)
                        iTLB-load-misses:u                                            (0.00%)

       4.302241563 seconds time elapsed

      16.122298000 seconds user
      29.141140000 seconds sys

/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
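The annotations perf prints in the run above ("1.46 insn per cycle", "1.47% of all L1-dcache accesses") are simple ratios of the raw counters. A quick sanity check, using the counts from that run:

```python
# Raw counters copied from the first perf stat run above.
cycles = 51_318_459
instructions = 74_705_078
l1d_loads = 31_809_194
l1d_load_misses = 466_198

# perf's "insn per cycle" annotation.
ipc = instructions / cycles              # ~1.46

# perf's "% of all L1-dcache accesses" annotation.
l1d_miss_pct = 100 * l1d_load_misses / l1d_loads  # ~1.47%
```

Note that several counters (branches, LLC-loads, dTLB/iTLB misses) were multiplexed in or out of the PMU, as the percentages in parentheses indicate, so the values with low coverage should be read with caution.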
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([    0,    10,    20,  ..., 39994, 39994, 39994]),
       col_indices=tensor([    1,     2,     3,  ...,  9711, 10875, 10876]),
       values=tensor([1., 1., 1.,  ..., 1., 1., 1.]), size=(10879, 10879),
       nnz=39994, layout=torch.sparse_csr)
tensor([0.7249, 0.8723, 0.3843,  ..., 0.2264, 0.4891, 0.9107])
Matrix: p2p-Gnutella04
Shape: torch.Size([10879, 10879])
NNZ: 39994
Density: 0.0003379223282393842
Time: 1.0079431533813477 seconds

 Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella04.mtx 1000':

           328,853      BR_MIS_PRED_RETIRED:u     #      0.0 per branch  branch_misprediction_ratio
        19,620,312      BR_RETIRED:u

       4.241400567 seconds time elapsed

      15.325937000 seconds user
      28.223386000 seconds sys

/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
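The "0.0 per branch" printed above is the misprediction ratio rounded to one decimal; the unrounded value from the same counters is about 1.7% of retired branches mispredicted:

```python
# Raw branch counters from the perf run above (Arm PMU event names).
BR_MIS_PRED_RETIRED = 328_853
BR_RETIRED = 19_620_312

# perf rounds this to "0.0 per branch"; the actual ratio is ~0.017.
branch_misprediction_ratio = BR_MIS_PRED_RETIRED / BR_RETIRED
```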
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([    0,    10,    20,  ..., 39994, 39994, 39994]),
       col_indices=tensor([    1,     2,     3,  ...,  9711, 10875, 10876]),
       values=tensor([1., 1., 1.,  ..., 1., 1., 1.]), size=(10879, 10879),
       nnz=39994, layout=torch.sparse_csr)
tensor([0.7608, 0.2449, 0.5322,  ..., 0.5547, 0.8659, 0.8437])
Matrix: p2p-Gnutella04
Shape: torch.Size([10879, 10879])
NNZ: 39994
Density: 0.0003379223282393842
Time: 1.1017234325408936 seconds

 Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella04.mtx 1000':

        27,939,682      L1I_TLB:u                 #      0.0 per TLB access  itlb_walk_ratio
             5,470      ITLB_WALK:u
            17,679      DTLB_WALK:u               #      0.0 per TLB access  dtlb_walk_ratio
        37,425,602      L1D_TLB:u

       4.296820500 seconds time elapsed

      15.875162000 seconds user
      28.803412000 seconds sys

/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
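The TLB walk ratios above also round to "0.0 per TLB access"; the underlying counters show that well under 0.1% of L1 TLB accesses escalated to a page-table walk, i.e. address translation is not a bottleneck for this run:

```python
# Raw TLB counters from the perf run above.
L1I_TLB = 27_939_682   # instruction-side L1 TLB accesses
ITLB_WALK = 5_470      # instruction-side page-table walks
L1D_TLB = 37_425_602   # data-side L1 TLB accesses
DTLB_WALK = 17_679     # data-side page-table walks

itlb_walk_ratio = ITLB_WALK / L1I_TLB   # ~2e-4
dtlb_walk_ratio = DTLB_WALK / L1D_TLB   # ~5e-4
```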
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([    0,    10,    20,  ..., 39994, 39994, 39994]),
       col_indices=tensor([    1,     2,     3,  ...,  9711, 10875, 10876]),
       values=tensor([1., 1., 1.,  ..., 1., 1., 1.]), size=(10879, 10879),
       nnz=39994, layout=torch.sparse_csr)
tensor([0.9980, 0.9991, 0.6749,  ..., 0.4225, 0.7297, 0.3717])
Matrix: p2p-Gnutella04
Shape: torch.Size([10879, 10879])
NNZ: 39994
Density: 0.0003379223282393842
Time: 1.0812580585479736 seconds

 Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella04.mtx 1000':

        30,276,633      L1I_CACHE:u               #      0.0 per cache access  l1i_cache_miss_ratio
           291,467      L1I_CACHE_REFILL:u
           479,061      L1D_CACHE_REFILL:u        #      0.0 per cache access  l1d_cache_miss_ratio
        31,689,326      L1D_CACHE:u

       4.500137840 seconds time elapsed

      15.794710000 seconds user
      27.773851000 seconds sys

/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
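Unrounding the L1 miss ratios above: both caches miss on the order of 1-1.5% of accesses, consistent with the L1-dcache-load-misses and L1-icache-load-misses percentages in the first run:

```python
# Raw L1 cache counters from the perf run above.
L1I_CACHE = 30_276_633
L1I_CACHE_REFILL = 291_467
L1D_CACHE = 31_689_326
L1D_CACHE_REFILL = 479_061

l1i_cache_miss_ratio = L1I_CACHE_REFILL / L1I_CACHE  # ~0.010
l1d_cache_miss_ratio = L1D_CACHE_REFILL / L1D_CACHE  # ~0.015
```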
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([    0,    10,    20,  ..., 39994, 39994, 39994]),
       col_indices=tensor([    1,     2,     3,  ...,  9711, 10875, 10876]),
       values=tensor([1., 1., 1.,  ..., 1., 1., 1.]), size=(10879, 10879),
       nnz=39994, layout=torch.sparse_csr)
tensor([0.8707, 0.5871, 0.5970,  ..., 0.8826, 0.4673, 0.4994])
Matrix: p2p-Gnutella04
Shape: torch.Size([10879, 10879])
NNZ: 39994
Density: 0.0003379223282393842
Time: 0.9900743961334229 seconds

 Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella04.mtx 1000':

           529,426      LL_CACHE_MISS_RD:u        #      1.0 per cache access  ll_cache_read_miss_ratio
           550,033      LL_CACHE_RD:u
           171,913      L2D_TLB:u                 #      0.1 per TLB access  l2_tlb_miss_ratio
            20,624      L2D_TLB_REFILL:u
           296,662      L2D_CACHE_REFILL:u        #      0.2 per cache access  l2_cache_miss_ratio
         1,714,211      L2D_CACHE:u

       4.284402033 seconds time elapsed

      15.584671000 seconds user
      27.523772000 seconds sys
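The last-level and L2 ratios above are the most telling: roughly 96% of last-level-cache reads miss, so nearly every read that reaches the LLC goes on to memory. This is a plausible outcome for SpMV's irregular gathers, though with counts this small relative to total loads the interpretation is tentative. Recomputing the printed ratios from the raw counters:

```python
# Raw L2/LLC counters from the perf run above.
LL_CACHE_MISS_RD = 529_426
LL_CACHE_RD = 550_033
L2D_TLB = 171_913
L2D_TLB_REFILL = 20_624
L2D_CACHE = 1_714_211
L2D_CACHE_REFILL = 296_662

ll_cache_read_miss_ratio = LL_CACHE_MISS_RD / LL_CACHE_RD   # ~0.96, printed as 1.0
l2_tlb_miss_ratio = L2D_TLB_REFILL / L2D_TLB                # ~0.12, printed as 0.1
l2_cache_miss_ratio = L2D_CACHE_REFILL / L2D_CACHE          # ~0.17, printed as 0.2
```

As a cross-check of the script's own output, the reported density also follows directly from the logged shape and NNZ: 39994 / 10879^2 = 0.0003379223282393842.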