Kubernetes Logs 101

Understanding your Kubernetes cluster’s internal dynamics is critical for system and application performance. Reading, analyzing, and evaluating your logs lets you fine-tune your applications and maintain system stability, and logs are invaluable when diagnosing issues and monitoring cluster performance. Kubernetes logging, however, differs from logging in typical applications and services.

By scheduling applications across servers for you, Kubernetes abstracts away much of the usual maintenance that comes with application infrastructure. The purpose of this article is to give you a high-level overview of the key ideas in Kubernetes logging.

In conventional server settings, application logs are often written to a file such as /var/log/app.log. These files are then examined on each server separately or shipped to a central repository for analysis and storage.

Since pods can be numerous and short-lived, this form of log gathering is discouraged in Kubernetes. Instead, Kubernetes recommends letting the application write logs to stdout and stderr. Each node runs its own kubelet, which captures this output and writes it to log files on the node.
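As a quick illustration, assuming a pod named my-app is running in the current namespace (a hypothetical name), you can read its captured stdout/stderr stream with kubectl:

```shell
# Stream the stdout/stderr output of a running pod (pod name is hypothetical)
kubectl logs my-app

# If the pod has more than one container, name the container explicitly
kubectl logs my-app -c app-container
```

These commands require access to a live cluster; the kubelet on the node serves the log data back through the API server.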

What are the various types of Kubernetes logs?

There are two types of logs in Kubernetes: node logs and component logs.

Node logs are generated by nodes and their native services; the kubelet agent is a good example.

On the other hand, Pods, Containers, Kubernetes components, DaemonSets, and other Kubernetes Services create component logs.

Each node in a Kubernetes cluster runs services that enable it to host Pods, accept instructions, and communicate with other nodes. The formatting and storage of these logs are determined by the host operating system. On a systemd-based Linux server, you can retrieve the kubelet's logs with journalctl -u kubelet; on other platforms, Kubernetes writes logs to files under the usual /var/log location.
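For example, on a systemd-based Linux node you might inspect the kubelet's service logs like this (run on the node itself):

```shell
# Show the kubelet service logs on a systemd-based node
journalctl -u kubelet

# Follow new entries as they arrive, starting from the last 100 lines
journalctl -u kubelet -f -n 100
```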

Component logs, on the other hand, are collected by Kubernetes and retrieved through the Kubernetes API. The clearest example is Pods, where applications write all output to stdout or stderr.

Nodes

Kubernetes executes your workload by placing containers into Pods that run on Nodes. A node can be a virtual or physical machine. Each node is managed by the control plane and contains the services essential to run Pods: the kubelet, a container runtime, and kube-proxy. When a container restarts, the kubelet keeps the terminated container's logs on the node, and Kubernetes has a log rotation mechanism to prevent logs from consuming the available space on the node. When a pod is evicted from a node, all associated containers and their logs are evicted with it.
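The rotation thresholds are configurable through the kubelet configuration file. A minimal sketch, with illustrative values:

```yaml
# Fragment of a KubeletConfiguration: rotate a container's log file
# once it reaches 10Mi, and keep at most 5 files per container.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi
containerLogMaxFiles: 5
```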

Pods

Pods are the most straightforward deployable units of compute that you can construct and control in Kubernetes. A Pod is a collection of one or more containers with shared resources and a specification for how to run the containers. A Pod's contents are always co-located, co-scheduled, and executed in a shared environment. A Pod models an application-specific "logical host": it comprises one or more application containers that are closely coupled.
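A minimal Pod specification illustrates the idea; the names below are placeholders:

```yaml
# A single-container Pod; the application should write its logs
# to stdout/stderr so the kubelet can capture them.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: nginx:1.25   # any image that logs to stdout works here
```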

Clusters

When you install Kubernetes, you create a cluster. A Kubernetes cluster comprises a set of worker machines, called nodes, that run containerized applications; every cluster has at least one worker node. The worker nodes host the Pods that make up the application workload, while the control plane manages the worker nodes and the Pods in the cluster. In production, the control plane frequently runs across several machines and a cluster typically runs numerous nodes, enabling fault tolerance and high availability.

You can view cluster logs in a variety of ways. You can log into the server that hosts the log you wish to examine and open the individual log files directly with a text editor, less, cat, or whatever command-line tool you like. Alternatively, journalctl can retrieve and display logs of a given kind for you. Using an external logging tool makes it simple to gather Kubernetes cluster logs and application logs and analyze them through a centralized interface, eliminating the need to collect individual logs from each node at the command line.

Kubernetes logs

  • Application logs: logs from user-deployed applications. They help you understand what is happening inside the application.
  • Cluster component logs: logs from the API server, kube-scheduler, etcd, kube-proxy, and other Kubernetes cluster components. They aid in diagnosing Kubernetes cluster faults.
  • Audit logs: records of API activity captured by the API server, which give you insight into API behavior.
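Audit logging is driven by a policy file passed to the API server. A minimal sketch that records request metadata for all requests:

```yaml
# Minimal audit policy: log metadata for every request.
# Supplied to the API server via the --audit-policy-file flag.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```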

The Kubernetes Logging Architecture

There is no built-in mechanism in Kubernetes to consolidate logs. You must use a centralized logging backend and feed all logs to it. 

Let’s look at the three most essential aspects of logging:

  • Logging agent: an agent that runs as a DaemonSet on all Kubernetes nodes and ships logs to a centralized logging backend in near real time. A logging agent can also run as a sidecar container. Fluentd is a popular example.
  • Logging backend: a centralized system capable of storing, searching, and analyzing log data.
  • Log visualization: a dashboard-based solution for visualizing log data.
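A node-level logging agent is typically deployed as a DaemonSet that mounts the node's log directory. A trimmed sketch, with illustrative image and names:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16   # illustrative image tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log     # read the node's container log files
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Because it is a DaemonSet, the scheduler places exactly one agent pod on every node, so every node's logs are collected without per-application changes.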

Viewing Kubernetes logs

For live log streams, reviewing logs with kubectl logs or kubetail is convenient, but it has its limits: historical logs, logs from terminated pods, and logs from crashed instances, for example, are not accessible. Using a centralized log management solution is a recommended best practice, and Kubernetes is no exception.
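kubectl does offer a few flags that partially mitigate these limits; for instance (pod name and label are hypothetical):

```shell
# Logs from the previous, crashed instance of a container
kubectl logs my-app --previous

# Follow the live stream, limited to the last hour
kubectl logs my-app -f --since=1h

# Aggregate logs across all pods matching a label
kubectl logs -l app=my-app --all-containers
```

Note that --previous only reaches one restart back, which is why a centralized backend is still needed for true historical access.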

A variety of methods and tools are available for centrally collecting pod logs. Fluentd is one of the most popular: it gathers and parses logs from a variety of sources before sending them to one or more destinations, and its large catalog of customizable plugins makes it highly versatile. Because of the resources log storage consumes, it is best to keep your logs separate from your Kubernetes cluster.
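As a sketch of how Fluentd fits into this pipeline, a configuration fragment might tail the container log files on a node and forward them to a backend outside the cluster (the host and paths below are placeholders):

```
# Tail the container log files written on the node
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

# Forward everything to an external aggregator outside the cluster
<match kubernetes.**>
  @type forward
  <server>
    host logs.example.com   # placeholder backend host
    port 24224
  </server>
</match>
```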


There is a lot of subtlety to logging in Kubernetes. Although Kubernetes has basic logging and monitoring capabilities, it is far from a complete logging solution out of the box. Configuring preservation and retention and maintaining the storage required to keep your logs can be time-consuming and difficult. To get the most out of Kubernetes logging, you’ll need an external log collection, analysis, and management tool such as LOGIQ.

You can deploy LOGIQ within any Kubernetes environment or distribution using a Helm Chart in under 5 minutes. LOGIQ supports auto-discovery of all Kubernetes components within your environments and organizes all logs by namespace, application, Kubernetes labels, and pod names. You can live-tail your Kubernetes logs from the LOGIQ UI, convert logs to metrics, visualize them, and create dashboards that give you a holistic view over your environments.

Additionally, LOGIQ ships with Prometheus built in, enabling you to pull metrics and unify them with logs from the same environment. Moreover, you can set up alerts based on events and route them to a variety of alert destinations. LOGIQ also ships with built-in SIEM and data enhancement rules that allow you to enhance log value, reduce log volumes, and augment security events within data in motion. You can also store your logs and metrics for as long as you wish using InstaStore and route them to any downstream target system on demand.

Sign up for a FREE trial of LOGIQ SaaS to witness first-hand how LOGIQ simplifies Kubernetes logging, monitoring, and full-stack observability. You can also try out the free-forever LOGIQ PaaS Community Edition that deploys via a Helm Chart on any Kubernetes distribution.
