I am trying to buy a graphics card to run Sionna for our research group. Besides running it on Linux, what should I pay attention to?
Consumer device vs server device
It seems the GeForce gaming cards contain roughly the same number of CUDA cores but cost about a tenth as much. The H100 series has "better FP64 precision", but I doubt that would actually make a difference in RT simulations. A consumer card therefore looks much more cost-effective, but please correct me if I'm wrong.
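For context on the precision point: as far as I can tell, TensorFlow (which Sionna's link-level parts build on) defaults to 32-bit arithmetic, so FP64 throughput would only matter if we explicitly switched to double precision. This is my assumption, and the minimal check below only inspects the Keras default dtype, nothing Sionna-specific:

```python
# Assumption: Sionna keeps TensorFlow's default 32-bit precision (float32 /
# complex64); FP64 would only matter if double precision is enabled explicitly.
import tensorflow as tf

print(tf.keras.backend.floatx())  # 'float32' unless changed explicitly
```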
Multi-GPU approach
Instead of splurging on a single big device, Sionna supports multiple GPUs. It might be more cost-effective to combine several cheap GPUs to get a higher total number of CUDA cores and, hopefully, better performance. NVLink was, however, phased out on consumer cards after the GeForce 30 series. Would it make sense to buy two 3090s and run a multi-GPU setup (see the sketch below), or are we forced to buy multiple server-grade GPUs to use this feature with modern devices?
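For reference, here is a minimal sketch of the kind of multi-GPU setup I have in mind, using TensorFlow's MirroredStrategy for data parallelism (an assumption on my side; the simulate() function is a placeholder, not actual Sionna API). My understanding is that MirroredStrategy synchronizes via NCCL and also works over plain PCIe, so NVLink may not be strictly required:

```python
# Sketch: data-parallel Monte-Carlo batches across all visible GPUs with
# TensorFlow's MirroredStrategy. simulate() is a placeholder, not Sionna API.
import tensorflow as tf

print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

strategy = tf.distribute.MirroredStrategy()  # uses every visible GPU

def simulate(batch_size):
    # Stand-in for one simulation batch (encode, channel, decode, count errors)
    x = tf.random.normal([batch_size, 1024])
    return tf.reduce_mean(tf.square(x))

@tf.function
def distributed_step(batch_size):
    # Each replica (GPU) runs its own copy; results are averaged across devices.
    per_replica = strategy.run(simulate, args=(batch_size,))
    return strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica, axis=None)

print(distributed_step(1024))
```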
CUDA cores, Tensor cores, RT cores
This source states Sionna "takes advantage of computing (NVIDIA CUDA cores), AI (NVIDIA Tensor Cores), and ray tracing cores of NVIDIA GPUs for lightning-fast simulations of 6G systems." Is this true? How important is each core type relative to the others, i.e., what is usually the bottleneck?
NVIDIA GPUs?
Does Sionna only work with NVIDIA systems? Is there a particular advantage to using NVIDIA devices?
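To make the question concrete: I can check what TensorFlow sees and force a CPU-only run as below. My assumption is that the TensorFlow-based parts also run on CPU (just much slower), and that the ray-tracing part is where the NVIDIA dependence would really bite:

```python
# Check available devices and optionally hide the GPU to get a CPU baseline.
import tensorflow as tf

print("GPUs:", tf.config.list_physical_devices("GPU"))
print("CPUs:", tf.config.list_physical_devices("CPU"))

# Hide all GPUs, e.g. to see how far a CPU-only run gets
tf.config.set_visible_devices([], "GPU")
```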
How do RAM capacity and CPU performance affect the speed of simulations?
Do they present a bottleneck for these simulations?
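In case it helps to frame the question, this is the kind of crude check I would run to see whether the host (CPU/RAM) or the GPU dominates: time the first call of a compiled step (host-heavy tracing) separately from the steady-state calls (mostly device work). The workload below is a placeholder, not a Sionna simulation:

```python
# Crude host-vs-device bottleneck check with a placeholder workload.
import time
import tensorflow as tf

@tf.function
def step():
    x = tf.random.normal([4096, 4096])
    return tf.reduce_sum(tf.matmul(x, x))

t0 = time.perf_counter()
step().numpy()              # first call: tracing/compilation, host-heavy
t1 = time.perf_counter()
for _ in range(10):
    step().numpy()          # steady state: mostly GPU work
t2 = time.perf_counter()

print(f"first call: {t1 - t0:.3f} s, per call after warm-up: {(t2 - t1) / 10:.3f} s")
```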
Given the answers to all these questions, could someone provide the recommended (multi-)GPU setups for 2025 at different budgets, e.g., "if you have less than 1000 USD, get a 3080; if you have more than X, get Y; ..."?