-
Seconded; specifically on using a real DB: it would not only offload resources, but also bring the speed, efficiency, and tunability inherent in SQLite's "big brothers." It would also open the possibility of multiple users accessing various logs at once, without each user having to re-parse the logs.
-
Are these logs being updated while you're viewing them?
Is lnav unusable with this much stuff loaded? (As in, moving around in the log view is slow/unresponsive)
On startup, lnav parses all of the log files to generate a big index that contains an entry for each message. Each entry contains things like the timestamp, log level, file offset, etc. That index is kept in memory along with buffers for each file. The SQLite virtual tables work with this message index and then read and parse the messages from the files as needed. The ARCHITECTURE.md file in the repo has some more information as well.
The main advantage of virtual tables is that they react automatically to changes in the files. But one of the easiest ways to speed up queries is to just dump a virtual table into a regular SQLite table, by running something like:
For large tables, the SQLite table will be much faster... although you have to be careful: a statement like the above won't carry over some of the column properties, such as the collation function.
I see a couple of options for that:
If this is intended to be used in a continuously running batch process, the main feature that would be needed is making it easy to remember where lnav left off in the last run so that only new data would be inserted.
-
At times we load logs of 3-9 GB+ into lnav, which challenges the system's memory and stability.
Does lnav load the entire log bundle into a virtual table held in memory? Is there some downside to actually generating a db from these logs and running against that rather than a virtual table?
One idea was to have a process that automatically ingests files dropped in some location into an actual database, and then run an instance of lnav against that database when investigating, reducing the memory load. As long as we backed it with NAND storage, my thinking is we'd likely see an increase in performance and stability. Am I wrong?