ampere_research/pytorch/output/altra_10_30_p2p-Gnutella24_1000.output

srun: Job time limit was unset; set to partition default of 60 minutes
srun: ################################################################################
srun: # Please note that the oasis compute nodes have aarch64 architecture CPUs.     #
srun: # All submission nodes and all other compute nodes have x86_64 architecture    #
srun: # CPUs. Programs, environments, or other software that was built on x86_64     #
srun: # nodes may need to be rebuilt to properly execute on these nodes.             #
srun: ################################################################################
srun: job 3394989 queued and waiting for resources
srun: job 3394989 has been allocated resources
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([ 0, 9, 9, ..., 65369, 65369, 65369]),
col_indices=tensor([ 1, 2, 3, ..., 15065, 9401, 26517]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(26518, 26518),
nnz=65369, layout=torch.sparse_csr)
tensor([0.2470, 0.4231, 0.1036, ..., 0.7937, 0.3241, 0.7116])
Matrix: p2p-Gnutella24
Shape: torch.Size([26518, 26518])
NNZ: 65369
Density: 9.295875717624285e-05
Time: 1.6974337100982666 seconds
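The Matrix/Shape/NNZ/Density/Time summary above is printed by spmv.py, invoked below as 'python spmv.py matrices/p2p-Gnutella24.mtx 1000'. The script itself is not part of this output; the lines that follow are only a rough sketch of what it presumably does, reconstructed from the printed fields and the warning at spmv.py:20. Every name and detail in the sketch is an assumption, not the actual source.

    import sys
    import time
    from pathlib import Path

    import numpy as np
    import torch
    from scipy.io import mmread

    # Hypothetical reconstruction; the real spmv.py is not reproduced in this log.
    path, iterations = sys.argv[1], int(sys.argv[2])    # e.g. matrices/p2p-Gnutella24.mtx, 1000

    coo = mmread(path)                                  # Matrix Market file -> SciPy COO matrix
    indices = torch.from_numpy(np.vstack((coo.row, coo.col)).astype(np.int64))
    matrix = torch.sparse_coo_tensor(
        indices, torch.from_numpy(coo.data.astype(np.float64)), coo.shape
    ).to_sparse_csr().type(torch.float)                 # CSR conversion: the call that triggers the beta warning above
    vector = torch.rand(matrix.shape[1])                # dense random operand, as printed above

    print(matrix)
    print(vector)
    print(f"Matrix: {Path(path).stem}")
    print(f"Shape: {matrix.shape}")
    print(f"NNZ: {coo.nnz}")
    print(f"Density: {coo.nnz / (coo.shape[0] * coo.shape[1])}")   # 65369 / 26518**2 is about 9.30e-05

    start = time.time()
    for _ in range(iterations):                         # 1000 sparse matrix-vector products
        result = matrix @ vector
    print(f"Time: {time.time() - start} seconds")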
Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella24.mtx 1000':
66.78 msec task-clock:u # 0.013 CPUs utilized
0 context-switches:u # 0.000 /sec
0 cpu-migrations:u # 0.000 /sec
3,520 page-faults:u # 52.713 K/sec
28,858,055 cycles:u # 0.432 GHz (26.93%)
64,429,843 instructions:u # 2.23 insn per cycle (67.63%)
<not supported> branches:u
296,857 branch-misses:u (84.08%)
33,646,348 L1-dcache-loads:u # 503.866 M/sec
493,998 L1-dcache-load-misses:u # 1.47% of all L1-dcache accesses
<not supported> LLC-loads:u
<not supported> LLC-load-misses:u
32,070,415 L1-icache-loads:u # 480.266 M/sec
305,993 L1-icache-load-misses:u # 0.95% of all L1-icache accesses
46,903,081 dTLB-loads:u # 702.391 M/sec (46.16%)
114,272 dTLB-load-misses:u # 0.24% of all dTLB cache accesses (32.45%)
<not counted> iTLB-loads:u (0.00%)
<not counted> iTLB-load-misses:u (0.00%)
5.106933083 seconds time elapsed
16.391614000 seconds user
28.913912000 seconds sys
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([ 0, 9, 9, ..., 65369, 65369, 65369]),
col_indices=tensor([ 1, 2, 3, ..., 15065, 9401, 26517]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(26518, 26518),
nnz=65369, layout=torch.sparse_csr)
tensor([0.2307, 0.4662, 0.3789, ..., 0.0144, 0.6300, 0.7829])
Matrix: p2p-Gnutella24
Shape: torch.Size([26518, 26518])
NNZ: 65369
Density: 9.295875717624285e-05
Time: 1.6379659175872803 seconds
Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella24.mtx 1000':
331,167 BR_MIS_PRED_RETIRED:u # 0.0 per branch branch_misprediction_ratio
19,518,210 BR_RETIRED:u
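(The derived metric column is shown to one decimal place; from the raw counts above, BR_MIS_PRED_RETIRED / BR_RETIRED = 331,167 / 19,518,210, approximately 0.017, i.e. roughly 1.7% of retired branches were mispredicted.)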
5.017894585 seconds time elapsed
16.446505000 seconds user
31.004338000 seconds sys
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([ 0, 9, 9, ..., 65369, 65369, 65369]),
col_indices=tensor([ 1, 2, 3, ..., 15065, 9401, 26517]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(26518, 26518),
nnz=65369, layout=torch.sparse_csr)
tensor([0.7309, 0.0314, 0.4424, ..., 0.7434, 0.2124, 0.1432])
Matrix: p2p-Gnutella24
Shape: torch.Size([26518, 26518])
NNZ: 65369
Density: 9.295875717624285e-05
Time: 1.7232718467712402 seconds
Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella24.mtx 1000':
26,964,483 L1I_TLB:u # 0.0 per TLB access itlb_walk_ratio
4,666 ITLB_WALK:u
14,001 DTLB_WALK:u # 0.0 per TLB access dtlb_walk_ratio
36,143,905 L1D_TLB:u
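(From the raw counts above: ITLB_WALK / L1I_TLB = 4,666 / 26,964,483, approximately 0.00017, and DTLB_WALK / L1D_TLB = 14,001 / 36,143,905, approximately 0.00039; both walk ratios therefore round to the 0.0 shown in the one-decimal metric column.)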
5.053286721 seconds time elapsed
16.447780000 seconds user
28.580949000 seconds sys
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([ 0, 9, 9, ..., 65369, 65369, 65369]),
col_indices=tensor([ 1, 2, 3, ..., 15065, 9401, 26517]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(26518, 26518),
nnz=65369, layout=torch.sparse_csr)
tensor([0.5695, 0.5025, 0.1946, ..., 0.7428, 0.9634, 0.4327])
Matrix: p2p-Gnutella24
Shape: torch.Size([26518, 26518])
NNZ: 65369
Density: 9.295875717624285e-05
Time: 1.644775629043579 seconds
Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella24.mtx 1000':
31,901,160 L1I_CACHE:u # 0.0 per cache access l1i_cache_miss_ratio
302,516 L1I_CACHE_REFILL:u
475,663 L1D_CACHE_REFILL:u # 0.0 per cache access l1d_cache_miss_ratio
33,507,563 L1D_CACHE:u
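(From the raw counts above: L1I_CACHE_REFILL / L1I_CACHE = 302,516 / 31,901,160, approximately 0.0095, and L1D_CACHE_REFILL / L1D_CACHE = 475,663 / 33,507,563, approximately 0.014; both miss ratios round to the 0.0 shown in the one-decimal metric column.)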
4.978338941 seconds time elapsed
16.455298000 seconds user
30.249373000 seconds sys
/nfshomes/vut/ampere_research/pytorch/spmv.py:20: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /space/jenkins/workspace/Releases/pytorch-dls/pytorch-dls/aten/src/ATen/SparseCsrTensorImpl.cpp:55.)
).to_sparse_csr().type(torch.float)
tensor(crow_indices=tensor([ 0, 9, 9, ..., 65369, 65369, 65369]),
col_indices=tensor([ 1, 2, 3, ..., 15065, 9401, 26517]),
values=tensor([1., 1., 1., ..., 1., 1., 1.]), size=(26518, 26518),
nnz=65369, layout=torch.sparse_csr)
tensor([0.0969, 0.1950, 0.8456, ..., 0.3315, 0.1512, 0.3182])
Matrix: p2p-Gnutella24
Shape: torch.Size([26518, 26518])
NNZ: 65369
Density: 9.295875717624285e-05
Time: 1.752812385559082 seconds
Performance counter stats for 'apptainer run pytorch-altra.sif -c numactl --cpunodebind=0 --membind=0 python spmv.py matrices/p2p-Gnutella24.mtx 1000':
558,546 LL_CACHE_MISS_RD:u # 1.0 per cache access ll_cache_read_miss_ratio
578,676 LL_CACHE_RD:u
187,549 L2D_TLB:u # 0.1 per TLB access l2_tlb_miss_ratio
22,990 L2D_TLB_REFILL:u
321,826 L2D_CACHE_REFILL:u # 0.2 per cache access l2_cache_miss_ratio
1,816,571 L2D_CACHE:u
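(From the raw counts above: LL_CACHE_MISS_RD / LL_CACHE_RD = 558,546 / 578,676, approximately 0.97; L2D_TLB_REFILL / L2D_TLB = 22,990 / 187,549, approximately 0.12; and L2D_CACHE_REFILL / L2D_CACHE = 321,826 / 1,816,571, approximately 0.18, consistent with the 1.0, 0.1, and 0.2 shown to one decimal.)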
4.952297819 seconds time elapsed
16.648691000 seconds user
27.005944000 seconds sys