get_paginated_jobs, get_paginated_uploads: merge, cache job outcome stat processing #4316

Open
risicle wants to merge 2 commits into main from ris-cache-notification-outcomes-for-job
Conversation

@risicle (Member) commented Dec 18, 2024

The logic is effectively repeated between the two functions, so take advantage of this to make a common implementation that conditionally redis-caches the result. The thinking is that once all of a job's notifications are in a "completed" status they are unlikely to change again, so we can save ourselves from re-scanning the notifications table, which can hold up to 100k rows per job. Being able to share this cache between get_paginated_jobs and get_paginated_uploads is a bonus.
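For illustration, a minimal sketch of that shared, conditionally cached helper is below. The function names, status set, cache key and TTL are assumptions for the example rather than what the PR actually adds; it just shows the "compute, then cache only once the job has reached a final state" shape described above.

```python
# Illustrative sketch only - names, statuses, key format and TTL are assumptions.
import json

from sqlalchemy import func

from app import db, redis_store      # assumed app-level db session and redis client
from app.models import Notification  # assumed model import

# Assumed set of "final" notification statuses; the real list lives in the app's constants.
COMPLETED_STATUSES = {"delivered", "sent", "failed", "technical-failure", "temporary-failure", "permanent-failure"}


def _count_outcomes_for_job(job_id):
    # The expensive part: group the job's notifications (up to ~100k rows) by status.
    rows = (
        db.session.query(Notification.status, func.count(Notification.id))
        .filter(Notification.job_id == job_id)
        .group_by(Notification.status)
        .all()
    )
    return {status: count for status, count in rows}


def get_possibly_cached_outcomes_for_job(job_id):
    cache_key = f"job-{job_id}-notification-outcomes"

    cached = redis_store.get(cache_key)
    if cached:
        return json.loads(cached)

    outcomes = _count_outcomes_for_job(job_id)

    # Only cache once every notification has reached a final status - before that
    # the counts can still change, so a cached copy would go stale.
    if outcomes and set(outcomes) <= COMPLETED_STATUSES:
        redis_store.set(cache_key, json.dumps(outcomes), ex=7 * 24 * 60 * 60)

    return outcomes
```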

Depends on alphagov/notifications-utils#1179

I'm aware that this does some slightly out-of-character things for the api code. Firstly, it uses RequestCache a little beyond its nominal purpose - but it's the redis-caching decorator we have in our codebase, and creating another one along with all the required tests doesn't seem like a good use of effort. Also, renaming and/or significantly reworking the class would require a breaking utils change that I'd rather avoid.
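For context, RequestCache is normally used as a decorator along these lines. The import path and decorator signature here are written from memory and should be treated as assumptions, and the hook that the linked utils PR adds to make the caching conditional is not shown.

```python
# Rough illustration of the usual RequestCache decorator pattern - not the PR's
# code, and the conditional-caching hook from the linked utils change is omitted.
from notifications_utils.clients.redis import RequestCache

from app import redis_store  # assumed app-level redis client

redis_cache = RequestCache(redis_store)


@redis_cache.set("job-{job_id}-notification-outcomes")
def get_notification_outcomes_for_job(job_id):
    # The decorator stores the (JSON-serialisable) return value under the
    # formatted key and serves subsequent calls for the same job_id from redis.
    ...
```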

This is the first addition of caching into dao/ (elsewhere caches are cleared, but never set or read from). That's why I've given the function a slightly stupid name: it can't be overlooked that it might be fetching from the cache, whereas developers would normally expect a dao function to access the database directly.

Other than that, there didn't seem to be a better place to put a function shared between both the jobs and uploads views 🤷

@leohemsted (Contributor) left a comment:

just a couple of minor tweaks here i think

@@ -258,3 +261,30 @@ def find_missing_row_for_job(job_id, job_size):
.filter(Notification.job_row_number == None) # noqa
)
return query.all()


redis_cache = RequestCache(redis_store)
@leohemsted (Contributor):

minor: can we pop this up the top of the file? think it's a lil easier to see and reuse up there than in between other functions

@risicle (Member, Author):

Not sure where my reply to this went - do we actually want to encourage reuse for this though? I feel like using redis in this way from here isn't a pattern we have totally settled on...

@leohemsted (Contributor):

fair, lets leave it here for now then

app/dao/jobs_dao.py: outdated review thread (resolved)
@risicle force-pushed the ris-cache-notification-outcomes-for-job branch from cc18538 to 07047fe on December 27, 2024 at 12:33
get_paginated_jobs, get_paginated_uploads: merge, cache job outcome stat processing

the logic is effectively repeated between the two functions - take
advantage of this to make a common implementation that conditionally
redis-caches the result. the thinking here being that once all a
job's notifications are in a "completed" status they are unlikely
to change again and we can save ourselves from scanning the
notifications table again for up to 100k rows per job. being able
to share this cache between the get_paginated_jobs and
get_paginated_uploads is a bonus.
@risicle force-pushed the ris-cache-notification-outcomes-for-job branch from 07047fe to 7c8efe3 on January 8, 2025 at 10:33