ampere_research/pytorch/output/altra_10_30_p2p-Gnutella30_1000.output
2024-12-03 08:53:39 -05:00

srun: Job time limit was unset; set to partition default of 60 minutes
srun: ################################################################################
srun: # Please note that the oasis compute nodes have aarch64 architecture CPUs. #
srun: # All submission nodes and all other compute nodes have x86_64 architecture #
srun: # CPUs. Programs, environments, or other software that was built on x86_64 #
srun: # nodes may need to be rebuilt to properly execute on these nodes. #
srun: ################################################################################
srun: job 3394991 queued and waiting for resources
srun: job 3394991 has been allocated resources
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([ 0, 10, 10, ..., 88328, 88328, 88328]),
col_indices=tensor([ 1, 2, 3, ..., 36675, 36676, 36677]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(36682, 36682),
nnz=88328, layout=torch.sparse_csr)
tensor([0.3046, 0.0725, 0.4580, ..., 0.0593, 0.5121, 0.2116])
Matrix: p2p-Gnutella30
Shape: torch.Size([36682, 36682])
NNZ: 88328
Density: 6.564359899804003e-05
Time: 3.6646029949188232 seconds
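
For reference, here is a minimal sketch of what a script invoked as `python spmv.py matrices/p2p-Gnutella30.mtx 1000` could look like to produce the output above: load the MatrixMarket file, convert it to a sparse CSR float tensor (the `.to_sparse_csr().type(torch.float)` call shown in the warning), multiply it by a random dense vector, and print the summary lines. This is an assumed reconstruction for illustration, not the actual spmv.py; only the command-line arguments and the conversion call are taken from this log, and the second argument (1000) is assumed to be an iteration count.

# spmv_sketch.py -- hypothetical reconstruction, not the actual spmv.py
import sys
import time
from pathlib import Path

import numpy as np
import torch
from scipy.io import mmread

matrix_path = sys.argv[1]                  # e.g. matrices/p2p-Gnutella30.mtx
iterations = int(sys.argv[2])              # e.g. 1000 (assumed iteration count)

# Load the MatrixMarket file and build a CSR sparse float tensor; the
# .to_sparse_csr().type(torch.float) step is what triggers the beta-state warning.
coo = mmread(matrix_path)
indices = torch.from_numpy(np.vstack((coo.row, coo.col)).astype(np.int64))
A = (
    torch.sparse_coo_tensor(indices, torch.from_numpy(coo.data), coo.shape)
    .coalesce()
).to_sparse_csr().type(torch.float)

x = torch.rand(A.shape[1])                 # random dense input vector
print(A)
print(x)

start = time.time()
for _ in range(iterations):
    y = A @ x                              # sparse matrix-vector product (SpMV)
elapsed = time.time() - start

rows, cols = A.shape
nnz = A.values().shape[0]
print(f"Matrix: {Path(matrix_path).stem}")
print(f"Shape: {A.shape}")
print(f"NNZ: {nnz}")
print(f"Density: {nnz / (rows * cols)}")
print(f"Time: {elapsed} seconds")

Under this sketch, the Time line in each run would cover only the SpMV loop, while the perf counter sections are attached to the whole apptainer/python command shown in quotes.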

Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella30.mtx 1000':
56.52 msec task-clock:u # 0.008 CPUs utilized
0 context-switches:u # 0.000 /sec
0 cpu-migrations:u # 0.000 /sec
3,194 page-faults:u # 56.515 K/sec
58,074,747 cycles:u # 1.028 GHz (51.20%)
90,036,443 instructions:u # 1.55 insn per cycle (89.06%)
<not supported> branches:u
363,262 branch-misses:u
33,111,438 L1-dcache-loads:u # 585.875 M/sec
454,665 L1-dcache-load-misses:u # 1.37% of all L1-dcache accesses
<not supported> LLC-loads:u
<not supported> LLC-load-misses:u
31,646,314 L1-icache-loads:u # 559.951 M/sec
281,443 L1-icache-load-misses:u # 0.89% of all L1-icache accesses
43,495,524 dTLB-loads:u # 769.611 M/sec (11.87%)
<not counted> dTLB-load-misses:u (0.00%)
<not counted> iTLB-loads:u (0.00%)
<not counted> iTLB-load-misses:u (0.00%)
7.033463989 seconds time elapsed
34.670765000 seconds user
307.031553000 seconds sys
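
A quick sanity check on perf's derived columns for the run above, using only the raw counts printed there: 90,036,443 instructions / 58,074,747 cycles ≈ 1.55 insn per cycle, and 56.52 ms of task-clock over the 7.033 s elapsed ≈ 0.008 CPUs utilized. The 34.7 s of user time and 307 s of system time against 7 s of wall clock also indicate that many threads were active during the run.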

/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([ 0, 10, 10, ..., 88328, 88328, 88328]),
col_indices=tensor([ 1, 2, 3, ..., 36675, 36676, 36677]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(36682, 36682),
nnz=88328, layout=torch.sparse_csr)
tensor([0.9700, 0.1728, 0.2199, ..., 0.6107, 0.3357, 0.2661])
Matrix: p2p-Gnutella30
Shape: torch.Size([36682, 36682])
NNZ: 88328
Density: 6.564359899804003e-05
Time: 2.3380045890808105 seconds

Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella30.mtx 1000':
327,895 BR_MIS_PRED_RETIRED:u # 0.0 per branch branch_misprediction_ratio
20,553,601 BR_RETIRED:u
5.895917276 seconds time elapsed
31.121063000 seconds user
208.127447000 seconds sys

/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([ 0, 10, 10, ..., 88328, 88328, 88328]),
col_indices=tensor([ 1, 2, 3, ..., 36675, 36676, 36677]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(36682, 36682),
nnz=88328, layout=torch.sparse_csr)
tensor([0.9533, 0.7568, 0.8141, ..., 0.8395, 0.5617, 0.7830])
Matrix: p2p-Gnutella30
Shape: torch.Size([36682, 36682])
NNZ: 88328
Density: 6.564359899804003e-05
Time: 4.476518869400024 seconds

Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella30.mtx 1000':
26,120,611 L1I_TLB:u # 0.0 per TLB access itlb_walk_ratio
7,531 ITLB_WALK:u
19,097 DTLB_WALK:u # 0.0 per TLB access dtlb_walk_ratio
35,744,928 L1D_TLB:u
8.109622410 seconds time elapsed
38.467161000 seconds user
370.437915000 seconds sys

/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([ 0, 10, 10, ..., 88328, 88328, 88328]),
col_indices=tensor([ 1, 2, 3, ..., 36675, 36676, 36677]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(36682, 36682),
nnz=88328, layout=torch.sparse_csr)
tensor([0.6886, 0.7814, 0.9957, ..., 0.8460, 0.1015, 0.8097])
Matrix: p2p-Gnutella30
Shape: torch.Size([36682, 36682])
NNZ: 88328
Density: 6.564359899804003e-05
Time: 2.856834888458252 seconds

Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella30.mtx 1000':
31,819,981 L1I_CACHE:u # 0.0 per cache access l1i_cache_miss_ratio
284,493 L1I_CACHE_REFILL:u
486,709 L1D_CACHE_REFILL:u # 0.0 per cache access l1d_cache_miss_ratio
33,545,755 L1D_CACHE:u
6.374371632 seconds time elapsed
30.817943000 seconds user
247.363843000 seconds sys

/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([ 0, 10, 10, ..., 88328, 88328, 88328]),
col_indices=tensor([ 1, 2, 3, ..., 36675, 36676, 36677]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(36682, 36682),
nnz=88328, layout=torch.sparse_csr)
tensor([0.8464, 0.0437, 0.1230, ..., 0.6221, 0.9268, 0.5436])
Matrix: p2p-Gnutella30
Shape: torch.Size([36682, 36682])
NNZ: 88328
Density: 6.564359899804003e-05
Time: 4.838747978210449 seconds

Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella30.mtx 1000':
544,742 LL_CACHE_MISS_RD:u # 1.0 per cache access ll_cache_read_miss_ratio
558,323 LL_CACHE_RD:u
190,574 L2D_TLB:u # 0.1 per TLB access l2_tlb_miss_ratio
23,746 L2D_TLB_REFILL:u
305,844 L2D_CACHE_REFILL:u # 0.2 per cache access l2_cache_miss_ratio
1,736,964 L2D_CACHE:u
8.386896120 seconds time elapsed
39.861141000 seconds user
395.959334000 seconds sys
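
The derived ratio columns in the runs above (branch_misprediction_ratio, itlb_walk_ratio, and so on) are printed by perf with one decimal place, so most of them show as 0.0. Recomputing them from the raw counts gives more useful precision. The small post-processing sketch below is an illustration, not part of the benchmark scripts; it uses only numbers that appear in this log, pairing each ratio with the counters from its own perf invocation.

# ratios_from_log.py -- recompute the Arm derived ratios that perf rounds to one decimal.
# All raw counts below are copied from the perf sections in this log.
counts = {
    "BR_MIS_PRED_RETIRED": 327_895,   "BR_RETIRED": 20_553_601,
    "ITLB_WALK": 7_531,               "L1I_TLB": 26_120_611,
    "DTLB_WALK": 19_097,              "L1D_TLB": 35_744_928,
    "L1I_CACHE_REFILL": 284_493,      "L1I_CACHE": 31_819_981,
    "L1D_CACHE_REFILL": 486_709,      "L1D_CACHE": 33_545_755,
    "LL_CACHE_MISS_RD": 544_742,      "LL_CACHE_RD": 558_323,
    "L2D_TLB_REFILL": 23_746,         "L2D_TLB": 190_574,
    "L2D_CACHE_REFILL": 305_844,      "L2D_CACHE": 1_736_964,
}

ratios = {
    "branch_misprediction_ratio": ("BR_MIS_PRED_RETIRED", "BR_RETIRED"),
    "itlb_walk_ratio":            ("ITLB_WALK", "L1I_TLB"),
    "dtlb_walk_ratio":            ("DTLB_WALK", "L1D_TLB"),
    "l1i_cache_miss_ratio":       ("L1I_CACHE_REFILL", "L1I_CACHE"),
    "l1d_cache_miss_ratio":       ("L1D_CACHE_REFILL", "L1D_CACHE"),
    "ll_cache_read_miss_ratio":   ("LL_CACHE_MISS_RD", "LL_CACHE_RD"),
    "l2_tlb_miss_ratio":          ("L2D_TLB_REFILL", "L2D_TLB"),
    "l2_cache_miss_ratio":        ("L2D_CACHE_REFILL", "L2D_CACHE"),
}

for name, (num, den) in ratios.items():
    print(f"{name:30s} {counts[num] / counts[den]:.4f}")

# Expected output (rounded):
#   branch_misprediction_ratio ~ 0.0160    itlb_walk_ratio          ~ 0.0003
#   dtlb_walk_ratio            ~ 0.0005    l1i_cache_miss_ratio     ~ 0.0089
#   l1d_cache_miss_ratio       ~ 0.0145    ll_cache_read_miss_ratio ~ 0.9757
#   l2_tlb_miss_ratio          ~ 0.1246    l2_cache_miss_ratio      ~ 0.1761

At this precision the only derived metric that perf's one-decimal rounding actually shows is the near-1.0 last-level cache read miss ratio; the remaining ratios are visible only after recomputation.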