Aggregated Scholar search - 壹搜网 found 6 results for "Pytorch multi gpu"

Simulating the evolution of a system of particles based on physical laws can be computationally costly. We propose a simple way to perform GPU acceleration using PyTorch. Taking advantage of the efficient matrix operations and gradient calculations in PyTorch, we can readily accelerate the simulation using a GPU. We showed that this method produces correct simulation results and runs significantly faster than when using a CPU.
dx.doi.org

We show that numerical computations based on tensor renormalization group (TRG) methods can be significantly accelerated with PyTorch on graphics processing units (GPUs) by leveraging NVIDIA's Compute Unified Device Architecture (CUDA). We find improvement in the runtime and its scaling with bond dimension for two-dimensional systems. Our results establish that the utilization of GPU resources is essential for future precision computations with TRG.
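The TRG algorithm itself is not reproduced in the abstract. The sketch below only illustrates the kind of operation TRG is built from — reshaping a four-leg tensor into a matrix, splitting it with a truncated SVD at bond dimension χ, and recontracting with `einsum` — all of which dispatch to CUDA kernels when the tensors live on a GPU. The bond dimension and the random tensor are made-up illustrative values:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
chi = 8  # bond dimension (illustrative value)

# random four-leg tensor T[u, l, d, r], as appears in 2D TRG
torch.manual_seed(0)
T = torch.randn(chi, chi, chi, chi, device=device)

# the core TRG move: reshape to a matrix, split by SVD,
# and truncate back to bond dimension chi
M = T.reshape(chi * chi, chi * chi)
U, S, Vh = torch.linalg.svd(M)
S_sqrt = S[:chi].sqrt()
A = (U[:, :chi] * S_sqrt).reshape(chi, chi, chi)             # left piece
B = (S_sqrt.unsqueeze(1) * Vh[:chi]).reshape(chi, chi, chi)  # right piece

# recontracting the pieces gives a rank-chi approximation of T;
# on GPU, einsum contractions run on cuBLAS
approx = torch.einsum('uli,idr->uldr', A, B)
```

Since both the SVD and the contractions scale polynomially in χ, pushing them onto the GPU is what improves the runtime scaling with bond dimension that the paper reports.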
arxiv.org

In this work we evaluate different approaches to parallelizing the computation of convolutional neural networks across several GPUs.
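The abstract does not say which approaches are compared. The simplest one to demonstrate in PyTorch is data parallelism, where each GPU holds a model replica and processes a slice of the batch. A minimal sketch follows — the tiny CNN is a made-up example, and `nn.DataParallel` is shown for brevity even though `torch.nn.parallel.DistributedDataParallel` is the mechanism PyTorch currently recommends for multi-GPU training:

```python
import torch
import torch.nn as nn

# a small CNN (illustrative architecture)
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # replicate the model on each GPU; every forward pass scatters the
    # batch across devices and gathers the per-replica outputs
    model = nn.DataParallel(model)
model = model.to(device)

x = torch.randn(16, 3, 32, 32, device=device)
out = model(x)  # shape (16, 10), regardless of device count
```

Model parallelism (splitting layers across GPUs) and hybrid schemes are the other broad options such evaluations typically cover; they trade communication volume against per-device memory.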
arxiv.org