Lnav Pipelines
The logs are moving, line by line, through a stream you control. You see them change in real time. This is the power of Lnav Pipelines.
Lnav Pipelines let you take raw logs and push them through precise commands, filters, and transformations without leaving the terminal. They are built into Lnav, not bolted on, so there is no context switching. You run pipelines directly against your live log view, using SQL queries, JSON extraction, regex parsing, and custom scripts to shape the data exactly as you need.
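As a sketch of what that looks like, lnav exposes log messages through SQLite virtual tables such as all_logs, and its SQL layer includes SQLite's json_extract for JSON-formatted log bodies. Queries are entered at the prompt with a leading semicolon (the $.request_id field below is illustrative, assuming JSON log bodies):

;SELECT log_level, count(*) FROM all_logs GROUP BY log_level
;SELECT json_extract(log_body, '$.request_id') AS req FROM all_logs LIMIT 10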
With Lnav Pipelines, you remove noise fast. You group related events, cut duplicates, and drill down to the failures that matter. Your pipeline can chain different commands: grep-like searches; field selection; aggregate counts; time-based grouping; advanced queries across multiple log files. Each step feeds the next, streamlining diagnosis and analysis.
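A chained pass in lnav's command prompt might look like the following, where each stage narrows what the next one sees (the patterns and the five-minute window are illustrative):

:filter-in payment
:filter-out healthcheck
;SELECT timeslice(log_time, '5m') AS window, count(*) AS hits FROM all_logs GROUP BY window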
A pipeline in Lnav is defined on the fly. You can prototype transformations in seconds, see the result instantly, and then save that pipeline for recurring use. No external ETL, no exporting to another tool. Logs stay hot and searchable while you adapt the pipeline to new formats or shifting production issues.
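One way to save a prototype for recurring use is an lnav script: a file with a .lnav extension in your lnav configuration directory, containing the same commands you typed interactively. The script name below is hypothetical:

# errors-by-hour.lnav
:filter-in ERROR
;SELECT timeslice(log_time, '1h') AS hour, count(*) AS errors FROM all_logs GROUP BY hour

Inside lnav, you would then run it with the pipe prefix: |errors-by-hour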
Integrating Lnav Pipelines into your workflow means faster root cause analysis, smaller feedback loops, and reduced mean-time-to-resolution. The pipelines are lightweight but handle complex transformations over gigabytes of log data. Because they’re terminal-native, they fit naturally into SSH sessions, CI/CD hooks, or automated scripts.
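For those non-interactive contexts, lnav's headless mode runs the same stages from a shell: -n suppresses the UI and -c executes a command. The log path here is illustrative:

lnav -n -c ':filter-in ERROR' -c ';SELECT count(*) AS errors FROM all_logs' /var/log/app/*.log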
To start using Lnav Pipelines, install Lnav, open your logs, and run a simple two-stage pipeline: a filter command followed by an aggregating query. For example:

:filter-in ERROR
;SELECT timeslice(log_time, '1h') AS hour, count(*) AS errors FROM all_logs GROUP BY hour
In seconds, your critical errors are tallied and ready for review. From there, layer on more stages until you have a complete diagnostic stream.
Don’t let logs sit idle. Put them to work through Lnav Pipelines and see immediate results. Try it now with hoop.dev—connect your logs and watch pipelines run live in minutes.