-
Thank you for your interest in Sionna! To benefit from GPU acceleration, you can implement your receiver as a Keras layer in Python using the TensorFlow/Keras framework. You can find more information about this here: https://keras.io/guides/making_new_layers_and_models_via_subclassing

Once this is done, your new layer can be integrated into a Sionna-based communication system. We have provided a notebook that shows how to implement a neural network-based receiver as a Keras layer with Sionna, which could be helpful: https://nvlabs.github.io/sionna/examples/Neural_Receiver.html

GPU acceleration happens independently of the batch size, as long as your receiver is implemented using Keras/TensorFlow. Moreover, TensorFlow provides different execution modes that enable different levels of acceleration. You can find more information in this notebook (section "Eager vs Graph Mode"): https://nvlabs.github.io/sionna/examples/Sionna_tutorial_part1.html

Note that if you need to implement your receiver in C++/CUDA, we plan to provide a tutorial soon.
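As a minimal sketch of the subclassing pattern from the Keras guide above (the layer name, the `num_bits_per_symbol` parameter, and the dense demapping step are illustrative assumptions, not part of Sionna's API):

```python
import tensorflow as tf

class MyReceiver(tf.keras.layers.Layer):
    """Illustrative receiver as a Keras layer.

    Because every operation below is a TensorFlow op, the layer runs
    on the GPU automatically whenever one is visible to TensorFlow.
    """
    def __init__(self, num_bits_per_symbol):
        super().__init__()
        # Hypothetical trainable demapper producing one LLR per bit.
        self.dense = tf.keras.layers.Dense(num_bits_per_symbol)

    def call(self, y):
        # Treat real and imaginary parts of the received samples as features.
        z = tf.stack([tf.math.real(y), tf.math.imag(y)], axis=-1)
        return self.dense(z)  # shape: [..., num_bits_per_symbol]

# Usage: works for any batch size, including 1.
rx = MyReceiver(num_bits_per_symbol=4)
llr = rx(tf.complex(tf.random.normal([1, 128]), tf.random.normal([1, 128])))
```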
-
Hi Faycal,
Thanks for your quick response and the information.
So you mean that if we want to accelerate our Python receiver with a GPU, it can be realized as follows:
1. Implement it as a Keras layer in Python with the Keras/TensorFlow framework, and it is automatically accelerated by the GPU. (Even if it only runs once? Is it sped up by allocating different layers to different threads/blocks? How should we understand that?)
2. Use NVIDIA CUDA Python, as in https://developer.nvidia.com/how-to-cuda-python ?
What are your opinions and recommendations on these two approaches? Does approach 2 execute faster than approach 1, and is it easy to get going?
Could you please share more information about that? That would be really helpful. Thanks.
Br.
Sophie
-
Your results are reasonable; however, they depend on your exact CPU/GPU configuration and on the simulated model. As is usually done in deep learning, one fundamental design paradigm of Sionna is to make use of parallel batch processing whenever possible. Please note that for the GPU-accelerated experiments, your simulation time is almost constant (39 s vs. 44 s per SNR point), although 2000x more bits have been simulated. The reason is simply that the GPU utilization is low (you can check the GPU load with `nvidia-smi`). Keep in mind that XLA acceleration may further speed up your simulation.
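As a hedged sketch of how XLA can be enabled in graph mode (the model and shapes here are placeholders, not a Sionna system):

```python
import tensorflow as tf

# Placeholder model standing in for an end-to-end Sionna link.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),
])

# jit_compile=True asks TensorFlow to compile the function with XLA,
# which can fuse kernels and reduce per-op launch overhead.
@tf.function(jit_compile=True)
def run(batch):
    return model(batch)

# Large batches keep the GPU busy; small ones leave utilization low.
out = run(tf.random.normal([4096, 32]))
```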
-
Hi cheerkwak, everybody,

Eventually, how did you make the code run on the GPU? I am running Sionna_tutorial_part4 on my local Windows machine and it is using a single CPU core (11.2%) and no GPU. I am new to this, so no answer is trivial for me.

As a side note, there are also a bunch of warnings like:

tensorflow:AutoGraph could not transform <bound method OFDMSystemNeuralReceiver.call of <tensorflow.python.eager.function.TfMethodTarget object at 0x00000199C72DE400>> and will run it as-is. Cause: 'arguments' object has no attribute 'posonlyargs'

Thank you!
-
Hi,

For GPU support you need to ensure that you have installed the latest CUDA drivers and CUDA Toolkit and that TensorFlow can access the GPU. We recommend that you follow the step-by-step instructions from [1], depending on your OS. Sionna itself does not need any specific configuration for GPU acceleration (as it uses TensorFlow as its backend).
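A quick way to check whether TensorFlow can actually see the GPU (standard TensorFlow calls, nothing Sionna-specific):

```python
import tensorflow as tf

# An empty list here means the CUDA driver/toolkit installation is
# not visible to TensorFlow, so everything falls back to the CPU.
print(tf.config.list_physical_devices('GPU'))

# False means this TensorFlow build was compiled without CUDA support.
print(tf.test.is_built_with_cuda())
```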
-
Hi dear Sionna experts,
I am just puzzled about how a GPU accelerates Sionna when using Keras layers and models. Is it because the simulation is run in batches and each run is performed as a CUDA kernel? So if only one TTI (batch size = 1) is run, can it still be accelerated by the GPU? How should I understand the relationship between Keras and CUDA?
The reason I ask is that we have our own RX receiver in Python, and we are now thinking about how to adapt it to a GPU platform and get the acceleration (it may only run once or a few times). One way is Numba, or writing it in CUDA Python; can Sionna fulfill that purpose more easily?
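For reference, a minimal sketch of the Numba/CUDA Python route mentioned above (the `scale` kernel and its parameters are hypothetical; running it requires a CUDA-capable GPU):

```python
import numpy as np
from numba import cuda

# With CUDA Python you write and launch kernels yourself; by contrast,
# the Keras/TensorFlow route dispatches each op to prebuilt CUDA kernels,
# which is why it works even at batch size 1.
@cuda.jit
def scale(x, out, alpha):
    i = cuda.grid(1)          # global thread index
    if i < x.size:
        out[i] = alpha * x[i]

x = np.random.randn(4096).astype(np.float32)
out = np.zeros_like(x)
threads_per_block = 256
blocks = (x.size + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](x, out, 2.0)
```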