Discrepancy between single_shot_classification and readout_optimization.resonator_amplitude #1083

Open
Luca-Ben-Herrmann opened this issue Feb 12, 2025 · 4 comments

Comments

@Luca-Ben-Herrmann
Contributor

The two measurements (single_shot_classification and readout_optimization.resonator_amplitude) seem to use different models to calculate the fidelity, which yields different fidelity results.

The report shows two single_shot_classification runs performed in a row (with a readout amplitude of 0.85), followed by a readout_optimization.resonator_amplitude run in which the error probability is only 0.075 (corresponding to a fidelity of 0.925, higher than what we get in the single-shot measurement). http://login.qrccluster.com:9000/NFiWnTAPRPS0hc-i7xhSRA==

I think that the readout_optimization.resonator_amplitude measurement should use the same code and method for calculating the fidelity as the single_shot_classification measurement.

@alecandido
Member

I may be wrong, but to me this seems compatible with the historical presence of two different fidelities (both reported by single_shot_classification):

Image

@andrea-pasquale and @Edoardo-Pedicillo can better confirm whether that is the case, or whether it is something else.

However, if that is actually the problem, I'm not sure whether I'd move forward to fix it. Having both fidelities reported is a bit redundant, but it is also a clear way to resolve the ambiguity about which one is the relevant one to display...
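For context on why two reported numbers could both be legitimate, here is a minimal illustration of two common fidelity conventions applied to the same misassignment probabilities (the values are made up, and which convention each qibocal routine actually reports should be checked in the source):

```python
p10 = 0.10  # P(measure 1 | prepared 0), made-up value
p01 = 0.05  # P(measure 0 | prepared 1), made-up value

assignment_fidelity = 1 - 0.5 * (p10 + p01)  # averaged convention -> 0.925
fidelity_strict = 1 - (p10 + p01)            # stricter convention  -> 0.85
print(assignment_fidelity, fidelity_strict)
```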

@Edoardo-Pedicillo
Contributor

Edoardo-Pedicillo commented Feb 12, 2025

I think that the readout_optimization.resonator_amplitude measurement should use the same code and method for calculating the fidelity as the single_shot_classification measurement.

The code evaluating the fidelity in the two protocols is the same (I will double-check in any case). Since we don't dump the shots in the readout_amplitude protocol, it is difficult to guess the source of the issue. I will coordinate with you offline to collect the missing data.
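For reference, once the discriminated shots are dumped, the misassignment probabilities (and hence the fidelity) can be recomputed offline in a few lines. This is an illustrative sketch with synthetic data, not the qibocal implementation:

```python
import numpy as np

def misassignment_probs(shots0, shots1):
    """Misassignment probabilities from discriminated (0/1) shots.

    shots0: outcomes with the qubit prepared in |0>.
    shots1: outcomes with the qubit prepared in |1>.
    """
    p10 = np.mean(shots0)        # P(measure 1 | prepared 0)
    p01 = 1.0 - np.mean(shots1)  # P(measure 0 | prepared 1)
    return p10, p01

# Synthetic example:
rng = np.random.default_rng(0)
shots0 = (rng.random(4096) < 0.08).astype(int)
shots1 = (rng.random(4096) < 0.93).astype(int)
p10, p01 = misassignment_probs(shots0, shots1)
print("assignment fidelity:", 1 - 0.5 * (p10 + p01))
```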

@Edoardo-Pedicillo
Contributor

I have opened #1086, where I use the same acquisition and post-processing in classification and readout_amplitude:

cl_data = cl_acq(params, platform, targets)  # acquire the classification data
model = train_classifier(cl_data, qubit)     # train the classifier on those shots
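To make the reuse of a single classifier concrete, here is a self-contained sketch with synthetic I/Q data and a scikit-learn discriminator standing in for train_classifier; none of this is the actual PR code:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

def fake_iq_shots(n, center, sigma=0.3):
    """Synthetic I/Q shots clustered around a calibration point."""
    return rng.normal(loc=center, scale=sigma, size=(n, 2))

# "Calibration" shots for |0> and |1> at the reference amplitude.
iq0 = fake_iq_shots(2000, center=(0.0, 0.0))
iq1 = fake_iq_shots(2000, center=(1.0, 1.0))
X = np.vstack([iq0, iq1])
y = np.array([0] * len(iq0) + [1] * len(iq1))

model = LinearDiscriminantAnalysis().fit(X, y)  # plays the role of train_classifier

# Reuse the same model on shots taken at another amplitude
# (here just a rescaled copy of the synthetic clouds).
iq0_new = fake_iq_shots(2000, center=(0.0, 0.0)) * 0.9
iq1_new = fake_iq_shots(2000, center=(1.0, 1.0)) * 0.9
p10 = model.predict(iq0_new).mean()      # P(assign 1 | prepared 0)
p01 = 1 - model.predict(iq1_new).mean()  # P(assign 0 | prepared 1)
print("assignment error:", 0.5 * (p10 + p01))
```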

I have tested the routine and did not find big improvements.
Repeating the two routines multiple times (http://login.qrccluster.com:9000/FDh5kUcITCGjtiT45zmAnw==/) shows that the fidelities/readout errors are not stable. I have also repeated the readout optimization routine with longer scans, and the conclusions are the same (sorry for the style of the next plots).

  • with the PR branch

Image

  • with the main branch

Image

The analysis is certainly not conclusive, but it seems the code is not the main source of these oscillations. Part of them could be explained by normal drift of the QPU parameters and/or by statistical fluctuations coming from the finite number of shots. Regarding the latter, we could try to estimate the error bars associated with the readout error (see the sketch below).
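As a back-of-the-envelope illustration of that last point (the shot count and probabilities below are hypothetical), standard binomial error propagation gives the error bar on the readout error:

```python
import numpy as np

nshots = 5000          # hypothetical number of shots per prepared state
p10, p01 = 0.10, 0.05  # hypothetical misassignment probabilities

# Standard error of each estimated probability (binomial statistics).
sigma_p10 = np.sqrt(p10 * (1 - p10) / nshots)
sigma_p01 = np.sqrt(p01 * (1 - p01) / nshots)

# Readout error = (p10 + p01) / 2, so its uncertainty adds in quadrature.
error = 0.5 * (p10 + p01)
sigma_error = 0.5 * np.sqrt(sigma_p10**2 + sigma_p01**2)
print(f"readout error = {error:.4f} ± {sigma_error:.4f}")
```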

@Edoardo-Pedicillo
Contributor

I have tried to execute the readout amplitude optimization on qw11q with the ro_amp branch. I was not able to reproduce the issue: http://login.qrccluster.com:9000/pQk_EkzhTb2KM7ZQK1aWSA==/. As discussed offline, it could be a problem with the setup or the drivers.
