Temp spike battery improvements #21
Conversation
…ading from hardware, clamped minimum as well as maximum values just in case, and added comments about voltage curve
…s into more accurate percentage values
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##             main      #21      +/-   ##
==========================================
+ Coverage   81.96%   85.24%   +3.27%
==========================================
  Files           2        2
  Lines          61       61
  Branches        8        7       -1
==========================================
+ Hits           50       52       +2
+ Misses          5        4       -1
+ Partials        6        5       -1

☔ View full report in Codecov by Sentry.
Thanks. The release will publish shortly.
I'm not at my desk, so it'd be great if you could take care of bumping the library in HA as well.
I forgot to fix the commit message name, so the release failed. I'll have to wait till I get back to my desk to do it manually.
All sorted: https://github.com/Bluetooth-Devices/thermopro-ble/releases/tag/v0.9.0 published.
Thank you very much @bdraco!
I went ahead and submitted this. It's my first time doing a PR to that repo, so I hope I got it right :)
I finally managed to capture readings from an entire discharge cycle of a TempSpike battery. The battery value sent by the device appears to be a reading in millivolts, and the discharge curve is non-linear, as you would expect.

To create a more accurate mapping of voltage to battery percentage, I used some machine learning to optimize a function of the form

A*tanh(B*x+C)+D

where A through D are variables that were optimized with TensorFlow. (I chose that function format simply because I noticed that the curve from the readings looked fairly similar to a tanh curve.) This yielded a function whose output fit within about 1.8% of the actual data collected, which I think is good enough ;)

I also added clamping for both minimum and maximum values, just in case.
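As a rough illustration of the fitting step only: the PR used TensorFlow, but this sketch swaps in scipy.optimize.curve_fit for brevity, and the (millivolts, percentage) samples below are made-up placeholders, not the captured discharge log.

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_model(mv, a, b, c, d):
    """Battery percentage modelled as A*tanh(B*x + C) + D, with x in millivolts."""
    return a * np.tanh(b * mv + c) + d

# Hypothetical samples standing in for the real discharge-cycle capture.
millivolts = np.array([3000, 2900, 2800, 2700, 2600, 2500, 2400], dtype=float)
percent = np.array([100, 95, 85, 65, 40, 15, 0], dtype=float)

# Initial guesses keep the optimizer in a sensible region of the tanh curve.
p0 = (50.0, 0.005, -13.0, 50.0)
params, _ = curve_fit(tanh_model, millivolts, percent, p0=p0, maxfev=10000)
print("Fitted A, B, C, D:", params)
```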
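And a minimal sketch of what the resulting voltage-to-percentage conversion with clamping might look like; the A, B, C, D values here are placeholders, not the constants merged in this PR:

```python
import math

# Placeholder coefficients from a hypothetical fit; the real values live in
# the merged thermopro-ble code, not here.
A, B, C, D = 50.0, 0.005, -13.0, 50.0

def battery_percentage(millivolts: float) -> int:
    """Convert a raw battery reading in millivolts to a 0-100 percentage."""
    pct = A * math.tanh(B * millivolts + C) + D
    # Clamp both ends in case a reading falls outside the fitted voltage range.
    return int(round(max(0.0, min(100.0, pct))))

# Example: a mid-discharge reading (value is illustrative only).
print(battery_percentage(2650))
```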