get_paginated_jobs, get_paginated_uploads: merge, cache job outcome stat processing #4316

base: main
Conversation
just a couple of minor tweaks here i think
@@ -258,3 +261,30 @@ def find_missing_row_for_job(job_id, job_size):
        .filter(Notification.job_row_number == None)  # noqa
    )
    return query.all()


redis_cache = RequestCache(redis_store)
minor: can we pop this up the top of the file? think it's a lil easier to see and reuse up there than in between other functions
Not sure where my reply to this went - do we actually want to encourage reuse for this though? I feel like using redis in this way from here isn't a pattern we have totally settled on..
fair, lets leave it here for now then
The logic is effectively repeated between the two functions - take advantage of this to make a common implementation that conditionally redis-caches the result. The thinking here being that once all of a job's notifications are in a "completed" status they are unlikely to change again, and we can save ourselves from scanning the notifications table again for up to 100k rows per job. Being able to share this cache between get_paginated_jobs and get_paginated_uploads is a bonus.

Depends on alphagov/notifications-utils#1179
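The conditional-caching idea described above can be sketched roughly like this. All names here are hypothetical (the PR itself uses notifications-utils' RequestCache over redis; a plain dict stands in for redis so the sketch is self-contained):

```python
# Minimal sketch of "only cache once the result can no longer change".
# _cache stands in for redis; COMPLETED_STATUSES is illustrative, not the
# actual status list from the notifications codebase.
_cache = {}

COMPLETED_STATUSES = {"delivered", "permanent-failure", "temporary-failure"}


def get_job_outcome_stats(job_id, fetch_counts):
    """Return per-status notification counts for a job, caching the result
    only once every notification is in a completed status - at that point
    the counts are final and the expensive table scan can be skipped."""
    key = f"job-{job_id}-notification-outcomes"
    if key in _cache:
        return _cache[key]

    # fetch_counts represents the expensive scan of the notifications table
    counts = fetch_counts(job_id)

    # only cache when every status present is a completed one
    if set(counts) <= COMPLETED_STATUSES:
        _cache[key] = counts
    return counts
```

Both get_paginated_jobs and get_paginated_uploads would call a shared helper like this, so a finished job's stats are computed at most once between them.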
I'm aware that this does some slightly out-of-character things for the api code. Firstly, it's using RequestCache a little beyond its nominative purpose - but it's the redis-caching decorator we have in our codebase, and creating another one along with all the required tests doesn't seem like it would be a good use of effort. Also, renaming and/or significantly reworking the class would require a breaking utils change that I'd rather avoid.

This is the first addition of caching into dao/ (in other places caches are cleared, but not set or read from). This is why I've given it a slightly stupid name, so it can't be overlooked that this might be fetching from the cache - developers' normal expectations would probably be that the dao is directly accessing the database.

Other than that, there didn't seem to be a better place to put a function shared between both the jobs and upload views 🤷