srun: Job time limit was unset; set to partition default of 60 minutes
srun: ################################################################################
srun: # Please note that the oasis compute nodes have aarch64 architecture CPUs.    #
srun: # All submission nodes and all other compute nodes have x86_64 architecture   #
srun: # CPUs. Programs, environments, or other software that was built on x86_64    #
srun: # nodes may need to be rebuilt to properly execute on these nodes.            #
srun: ################################################################################
srun: job 3394144 queued and waiting for resources
srun: job 3394144 has been allocated resources
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
  ).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([     0,    317,    416,  ..., 239976, 239977, 239978]),
       col_indices=tensor([    0,     1,     2,  ...,  1483,  2179, 24817]),
       values=tensor([151.,  17.,   6.,  ...,   1.,   1.,   1.]),
       size=(24818, 24818), nnz=239978, layout=torch.sparse_csr)
tensor([0.7658, 0.2874, 0.7506,  ..., 0.3335, 0.5056, 0.9767])
Shape: torch.Size([24818, 24818])
NNZ: 239978
Density: 0.00038961697406616504
Time: 0.5561239719390869 seconds

 Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/sx-mathoverflow.mtx 100':

             62.49 msec task-clock:u              #    0.015 CPUs utilized
                 0      context-switches:u        #    0.000 /sec
                 0      cpu-migrations:u          #    0.000 /sec
             3,312      page-faults:u             #   53.003 K/sec
        76,783,170      cycles:u                  #    1.229 GHz                        (62.65%)
        77,095,702      instructions:u            #    1.00  insn per cycle             (80.20%)
                        branches:u
           370,891      branch-misses:u                                                 (94.99%)
        32,730,448      L1-dcache-loads:u         #  523.800 M/sec
           467,718      L1-dcache-load-misses:u   #    1.43% of all L1-dcache accesses
                        LLC-loads:u
                        LLC-load-misses:u
        31,548,469      L1-icache-loads:u         #  504.885 M/sec
           298,966      L1-icache-load-misses:u   #    0.95% of all L1-icache accesses
        61,098,419      dTLB-loads:u              #  977.786 M/sec                      (20.67%)
            64,747      dTLB-load-misses:u        #    0.11% of all dTLB cache accesses (10.91%)
                        iTLB-loads:u                                                    (0.00%)
                        iTLB-load-misses:u                                              (0.00%)

       4.062782709 seconds time elapsed

      16.106338000 seconds user
      32.399716000 seconds sys
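
Note: spmv.py itself is not reproduced in this log. The sketch below is a minimal
reconstruction of what the script might look like, inferred from the printed fields
(Shape, NNZ, Density, Time), the CSR conversion visible in the warning at spmv.py:20,
and the two command-line arguments (a Matrix Market file and an iteration count of 100).
The structure, variable names, and the use of scipy's mmread are assumptions, not the
actual source.

    # Hypothetical reconstruction of spmv.py -- names and details are assumptions.
    import sys
    import time

    import numpy as np
    import torch
    from scipy.io import mmread   # assumed Matrix Market reader

    matrix_path = sys.argv[1]     # e.g. matrices/sx-mathoverflow.mtx
    iterations = int(sys.argv[2]) # e.g. 100

    # Load the matrix and convert it to a PyTorch sparse CSR tensor; the
    # .to_sparse_csr() call is what triggers the beta-state UserWarning.
    coo = mmread(matrix_path)
    indices = torch.from_numpy(np.vstack((coo.row, coo.col))).long()
    A = torch.sparse_coo_tensor(indices, torch.from_numpy(coo.data), coo.shape
        ).to_sparse_csr().type(torch.float)

    x = torch.rand(A.shape[1])    # random dense vector (the second tensor printed per run)

    print(A)
    print(x)
    print(f"Shape: {A.shape}")
    print(f"NNZ: {A.values().shape[0]}")
    print(f"Density: {A.values().shape[0] / (A.shape[0] * A.shape[1])}")

    # Time the repeated sparse matrix-vector products.
    start = time.time()
    for _ in range(iterations):
        y = A @ x
    elapsed = time.time() - start
    print(f"Time: {elapsed} seconds")
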
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
  ).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([     0,    317,    416,  ..., 239976, 239977, 239978]),
       col_indices=tensor([    0,     1,     2,  ...,  1483,  2179, 24817]),
       values=tensor([151.,  17.,   6.,  ...,   1.,   1.,   1.]),
       size=(24818, 24818), nnz=239978, layout=torch.sparse_csr)
tensor([0.7531, 0.4727, 0.4126,  ..., 0.1574, 0.5247, 0.8875])
Shape: torch.Size([24818, 24818])
NNZ: 239978
Density: 0.00038961697406616504
Time: 0.6003477573394775 seconds

 Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/sx-mathoverflow.mtx 100':

           323,514      BR_MIS_PRED_RETIRED:u     #      0.0 per branch  branch_misprediction_ratio
        19,769,937      BR_RETIRED:u

       4.061021393 seconds time elapsed

      16.155442000 seconds user
      31.047278000 seconds sys
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
  ).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([     0,    317,    416,  ..., 239976, 239977, 239978]),
       col_indices=tensor([    0,     1,     2,  ...,  1483,  2179, 24817]),
       values=tensor([151.,  17.,   6.,  ...,   1.,   1.,   1.]),
       size=(24818, 24818), nnz=239978, layout=torch.sparse_csr)
tensor([0.3067, 0.4335, 0.8814,  ..., 0.2370, 0.1210, 0.7695])
Shape: torch.Size([24818, 24818])
NNZ: 239978
Density: 0.00038961697406616504
Time: 0.5404119491577148 seconds

 Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/sx-mathoverflow.mtx 100':

        26,809,325      L1I_TLB:u                 #      0.0 per TLB access  itlb_walk_ratio
             6,925      ITLB_WALK:u
            19,003      DTLB_WALK:u               #      0.0 per TLB access  dtlb_walk_ratio
        36,516,965      L1D_TLB:u

       4.031175418 seconds time elapsed

      15.607232000 seconds user
      30.562258000 seconds sys
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
  ).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([     0,    317,    416,  ..., 239976, 239977, 239978]),
       col_indices=tensor([    0,     1,     2,  ...,  1483,  2179, 24817]),
       values=tensor([151.,  17.,   6.,  ...,   1.,   1.,   1.]),
       size=(24818, 24818), nnz=239978, layout=torch.sparse_csr)
tensor([0.5013, 0.5961, 0.5565,  ..., 0.3779, 0.1835, 0.6722])
Shape: torch.Size([24818, 24818])
NNZ: 239978
Density: 0.00038961697406616504
Time: 0.6185996532440186 seconds

 Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/sx-mathoverflow.mtx 100':

        31,104,231      L1I_CACHE:u               #      0.0 per cache access  l1i_cache_miss_ratio
           285,499      L1I_CACHE_REFILL:u
           468,498      L1D_CACHE_REFILL:u        #      0.0 per cache access  l1d_cache_miss_ratio
        32,677,465      L1D_CACHE:u

       4.083129305 seconds time elapsed

      16.243642000 seconds user
      36.578375000 seconds sys
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
  ).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([     0,    317,    416,  ..., 239976, 239977, 239978]),
       col_indices=tensor([    0,     1,     2,  ...,  1483,  2179, 24817]),
       values=tensor([151.,  17.,   6.,  ...,   1.,   1.,   1.]),
       size=(24818, 24818), nnz=239978, layout=torch.sparse_csr)
tensor([0.9075, 0.2788, 0.1365,  ..., 0.4240, 0.8832, 0.1064])
Shape: torch.Size([24818, 24818])
NNZ: 239978
Density: 0.00038961697406616504
Time: 0.54673171043396 seconds

 Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/sx-mathoverflow.mtx 100':

           559,358      LL_CACHE_MISS_RD:u        #      1.0 per cache access  ll_cache_read_miss_ratio
           571,935      LL_CACHE_RD:u
           194,840      L2D_TLB:u                 #      0.1 per TLB access  l2_tlb_miss_ratio
            23,481      L2D_TLB_REFILL:u
           313,487      L2D_CACHE_REFILL:u        #      0.2 per cache access  l2_cache_miss_ratio
         1,779,730      L2D_CACHE:u

       3.961843929 seconds time elapsed

      15.425912000 seconds user
      28.864046000 seconds sys
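
Note: the "per branch", "per TLB access", and "per cache access" figures in the last
four runs are derived ratios that perf prints rounded to one decimal place. Recomputing
them from the raw counts above (a quick sanity check, assuming the usual definition of
each ratio as mispredictions, walks, or refills divided by total accesses):

    # Derived ratios recomputed from the raw counter values in this log.
    print(323_514 / 19_769_937)   # branch_misprediction_ratio  ~= 0.016
    print(6_925 / 26_809_325)     # itlb_walk_ratio             ~= 0.00026
    print(19_003 / 36_516_965)    # dtlb_walk_ratio             ~= 0.00052
    print(285_499 / 31_104_231)   # l1i_cache_miss_ratio        ~= 0.0092
    print(468_498 / 32_677_465)   # l1d_cache_miss_ratio        ~= 0.014
    print(559_358 / 571_935)      # ll_cache_read_miss_ratio    ~= 0.98
    print(23_481 / 194_840)       # l2_tlb_miss_ratio           ~= 0.12
    print(313_487 / 1_779_730)    # l2_cache_miss_ratio         ~= 0.18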