When I try to view an account that has about 3500 splits, I see [CRITICAL] WORKER TIMEOUT (pid:#) in the log and the page never renders.
My next-smaller account has about 2200 splits and it loads fine.
I was also curious about performance in general. It usually takes several seconds for any account page to load for me. Is this because I am using SQLite? Docker? Docker for Windows? Some combination? If you have any ideas about how I can troubleshoot this, I would love to know.
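For what it's worth, `[CRITICAL] WORKER TIMEOUT` is the message Gunicorn prints when it kills a worker that exceeded its timeout. So one possible stopgap, assuming the Docker image serves the app through Gunicorn (which I have not verified), would be raising that limit in a Gunicorn config file. A minimal sketch:

```python
# gunicorn.conf.py -- stopgap sketch, assuming the image runs gnucash-web
# under Gunicorn; this only hides the slowness, it doesn't fix it.
timeout = 120  # seconds before a silent worker is killed (Gunicorn's default is 30)
workers = 2    # shown only for context; unrelated to the timeout
```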
I am aware of the slow performance on large accounts. I have also noticed it on PostgreSQL (though I suspect that particular database server is itself pretty slow). This is something I am continuously investigating, but will have to look into further. I have never had a timeout though; I guess that depends on the database engine. My biggest account is ~1200 transactions.
One of the reasons (I think) is that even though the ledgers are paginated, the backend code still loads all of the splits from the DB and only sorts and filters them in the template (see account.j2:75). But even then, sorting a 3500-element list shouldn't take very long.
I'm not sure how to improve on this. Piecash does not (I think) support retrieving only a part of the splits (docs). The best way would be to do the sorting and filtering in the SQL query to reduce the number of loaded results in the first place (if that is in fact the issue here).
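Since piecash is built on SQLAlchemy, one option might be to drop down to its session and paginate the splits at the SQL level. A rough sketch, assuming `book.session` is accessible and that `Split.account`, `Split.transaction` and `Transaction.post_date` map the way I think they do (not verified against the actual piecash models):

```python
from piecash import Split, Transaction

PAGE_SIZE = 25  # hypothetical page size


def splits_page(book, account, page=0):
    """Return one page of an account's splits, sorted and sliced in SQL."""
    return (
        book.session.query(Split)
        .join(Split.transaction)
        .filter(Split.account == account)
        .order_by(Transaction.post_date.desc())
        .offset(page * PAGE_SIZE)
        .limit(PAGE_SIZE)
        .all()
    )
```

If the template only ever touches one page of splits, this would keep both the query result and the Jinja sorting proportional to the page size instead of the total number of splits.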
When displaying the transactions (transaction.j2), only the splits and contra-accounts are accessed, so I see no obvious bottleneck there. I will have to look at the piecash implementation to find out more.
My next step is to try it out with different databases on different hosts.
Do you really have slow performance (several seconds of load time) on every account, even on smaller ones? With SQLite and my testing DBs I have never had any issues (see e.g. https://gnucash-web-demo.bachmeier.cc, which runs on PostgreSQL).
I will continue to work on this, but if you find out anything, please let me know!
This issue will serve as the tracking place for load time improvements.
joshuabach changed the title from "Worker timeout when attempting to view an account with many splits" to "Very long load times for accounts with many splits" on Jan 9, 2024.
I just tried this out for the first time yesterday and had a similar experience. I built a Docker ARM image for my Raspberry Pi. The sample SQLite data loaded just fine in the web UI, but when I tried using my own GnuCash data file (also SQLite, with several larger accounts) I couldn't get most of the accounts to load. I haven't gathered as much info as @williamjacksn, but I'll be happy to report my findings in case it helps find a common denominator. At first I was suspicious of the ARM platform and was going to try x86_64 next to see if the same thing happens.
Thanks for your work on this @joshuabach, it's a really neat project.
Quick update on my end: I tried the latest x86_64 Docker image out of the box and found much better performance than on the Raspberry Pi. The larger accounts still load a bit slowly, but it takes at most 4-5 seconds (my ARM image was much slower). I can see there is still room for improvement on x86_64, but as expected, architecture makes a difference too.