
What graphics card to pick for Sionna? #754

Open
rwydaegh opened this issue Feb 21, 2025 · 0 comments

I am trying to buy a graphics card to run Sionna for our research group. Besides running it on Linux, what should I pay attention to?

  1. Consumer device vs. server device

It seems the GeForce gaming cards offer roughly the same number of CUDA cores but are about 10x cheaper. The H100 series has "better F64 precision", but I doubt that would actually make a difference in RT simulations. A consumer device looks much more cost-effective, but please correct me if I'm wrong.
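For what it's worth, my understanding is that TensorFlow (which Sionna builds on) defaults to 32-bit floats, which is why I suspect the FP64 units would mostly sit idle; a quick check:

```python
import tensorflow as tf

# TensorFlow ops default to float32 (complex64 for complex tensors),
# so double-precision throughput should rarely be exercised.
x = tf.random.normal([1024, 1024])
print(x.dtype)  # float32
```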

  2. Multi-GPU approach

Instead of splurging on a single big device, one could use Sionna's multi-GPU support: it might be more cost-effective to link several cheap GPUs together for a higher total number of CUDA cores and, hopefully, better performance. NVLink was, however, phased out after the GeForce 30 series. Would it make sense to buy two 3090s and run them as a multi-GPU setup, or are we forced to buy multiple server-grade GPUs to use this feature with modern devices?
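For context, I assume the multi-GPU support works through TensorFlow's standard distribution strategies, which as far as I know synchronize over plain PCIe when NVLink is absent; a minimal sketch (please correct me if Sionna requires something else):

```python
import tensorflow as tf

# MirroredStrategy replicates computation across all visible GPUs and
# synchronizes results over PCIe (or NVLink where available).
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Build and run the Sionna model/simulation here.
    pass
```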

  3. CUDA cores, Tensor cores, RT cores

This source states that Sionna "takes advantage of computing (NVIDIA CUDA cores), AI (NVIDIA Tensor Cores), and ray tracing cores of NVIDIA GPUs for lightning-fast simulations of 6G systems." Is this true? How much more important is one type of core than the others, i.e., what is usually the bottleneck?
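I don't know which of these is usually the limiting factor, but for reference, I believe the compute capability (which roughly tracks the Tensor/RT core generation) can be queried like this:

```python
import tensorflow as tf

# Prints the name and compute capability of each visible GPU,
# e.g. ('NVIDIA GeForce RTX 3090', (8, 6)).
for gpu in tf.config.list_physical_devices('GPU'):
    details = tf.config.experimental.get_device_details(gpu)
    print(details.get('device_name'), details.get('compute_capability'))
```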

  4. NVIDIA GPUs?

Does Sionna only work with NVIDIA systems? Is there some particular advantage to using NVIDIA devices?

  5. How do RAM and CPU performance affect the speed of simulations?

Do they present a bottleneck for these simulations?
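In case it is relevant to the answer: if I remember correctly, the Sionna example notebooks enable on-demand GPU memory allocation, which I assume interacts with host RAM pressure:

```python
import tensorflow as tf

# Allocate GPU memory on demand rather than grabbing it all at
# start-up; must be set before any GPU operation runs.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```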

Given the answers to all these questions, can someone provide a recommended (multi-)GPU setup for 2025 at different budgets, e.g., "if you have less than 1000 USD, get a 3080; if you have more than X, get Y; ..."?
