
@jfvilas/plugin-kubelog-backend
This Backstage plugin is the backend for the Kubelog (Kubernetes log viewing) frontend plugin.
Please refer to the Kubelog Plugin general info to understand what Kubelog is, what its requirements are, and how it works.
This Backstage backend plugin is primarily responsible for the following tasks:
The following table shows version compatibility between Kubelog and Kwirth Core.
| Kubelog version | Kwirth version |
|---|---|
| 0.14.2 | 0.5.21 |
| 0.11.7 | 0.4.131 |
| 0.11.6 | 0.4.20 |
| 0.11.1 | 0.3.160 |
| 0.10.1 | 0.2.213 |
| 0.9.5 | 0.2.8 |
Here's how to get the backend up and running quickly. First, add the @jfvilas/plugin-kubelog-backend package to your backend:
# From your Backstage root directory
yarn --cwd packages/backend add @jfvilas/plugin-kubelog-backend @jfvilas/plugin-kubelog-common
Next, modify your backend index file. In packages/backend/src/index.ts, make the following change:
const backend = createBackend();
// ... other feature additions
+ backend.add(import('@jfvilas/plugin-kubelog-backend'));
// ... other feature additions
backend.start();
To get Kubelog up and running you must first perform some additional tasks, like deploying Kwirth, creating API keys, defining clusters, etc. This section covers all of these needs in a structured way.
Remember, the Backstage Kubelog plugin helps you show logs inside Backstage to ease your development teams' work, but the plugin itself has no access to the logs; it relies on Kwirth to act as a "log proxy". Kwirth, a component that runs inside your Kubernetes clusters, has access to logs and can "export" them outside the cluster in a secure way, so logs can be consumed anywhere. For example, logs can be shown on Backstage entity pages.
We will not cover this subject here; please refer to the Kwirth installation documentation, where you will find more information on how Kwirth works and how to install it. Here we show just a summary of what Kwirth is:
Once you have a Kubernetes cluster with a Kwirth installation in place, write down your Kwirth external access URL (we will need it for configuring Kubelog). To export logs, Kwirth must be accessible from outside your cluster, so you will need to install some flavour of Ingress Controller and an Ingress for publishing Kwirth access. For this tutorial we will assume your Kwirth is published at: http://your-external.dns.name/kwirth.
Once Kwirth is running, perform these two simple actions:
This is all you need to do inside Kwirth.
To finish the Kubelog configuration you need to edit your app-config.yaml in order to add Kwirth information to your Kubernetes cluster. Kubelog doesn't have a specific section in the app-config; it just uses the Backstage Kubernetes core component configuration, augmented with some additional properties. Let's suppose you have a Kubernetes configuration like this in your current app-config:
kubernetes:
serviceLocatorMethod:
type: 'multiTenant'
clusterLocatorMethods:
- type: 'config'
clusters:
- url: https://kubeapi.your-cluster.com
name: k3d-cluster
title: 'Kubernetes local'
authProvider: 'serviceAccount'
skipTLSVerify: true
skipMetricsLookup: true
We need to add two properties to the cluster configuration:
The kubernetes section should look something like this:
kubernetes:
serviceLocatorMethod:
type: 'multiTenant'
clusterLocatorMethods:
- type: 'config'
clusters:
- url: https://kubeapi.your-cluster.com
name: k3d-cluster
title: 'Kubernetes local'
+ kubelogKwirthHome: http://your-external.dns.name/kwirth
+ kubelogKwirthApiKey: '40f5ea6c-bac3-df2f-d184-c9f3ab106ba9|permanent|cluster::::'
authProvider: 'serviceAccount'
skipTLSVerify: true
skipMetricsLookup: true
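To illustrate what the backend has to work with, here is a small hedged sketch in plain TypeScript (no Backstage dependencies) of reading these two properties from a parsed cluster entry. The property names come from the example above; the `key|type|scope` shape of the API key is inferred from the example value, and the parsing logic is an illustration, not the plugin's actual code.

```typescript
// Hypothetical helper: extract the Kubelog-specific properties from a
// cluster entry as it would appear after parsing app-config.yaml.
interface ClusterConfig {
  name: string;
  kubelogKwirthHome?: string;
  kubelogKwirthApiKey?: string;
}

interface KwirthAccess {
  home: string;  // external Kwirth URL
  key: string;   // the API key itself
  type: string;  // e.g. 'permanent' (inferred from the example value)
  scope: string; // e.g. 'cluster::::' (inferred from the example value)
}

function readKwirthAccess(cluster: ClusterConfig): KwirthAccess {
  if (!cluster.kubelogKwirthHome || !cluster.kubelogKwirthApiKey) {
    throw new Error(`Cluster '${cluster.name}' is missing Kubelog properties`);
  }
  // The example key looks '|'-separated: key|type|scope
  const [key, type, scope] = cluster.kubelogKwirthApiKey.split('|');
  return { home: cluster.kubelogKwirthHome, key, type, scope };
}

const access = readKwirthAccess({
  name: 'k3d-cluster',
  kubelogKwirthHome: 'http://your-external.dns.name/kwirth',
  kubelogKwirthApiKey: '40f5ea6c-bac3-df2f-d184-c9f3ab106ba9|permanent|cluster::::',
});
```

A cluster that lacks either property simply has no Kwirth attached, so the frontend would have nothing to show for it.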
The permission system of Kubelog has been designed with these main ideas in mind:
So, the permission system has been built using (right now) two layers:
Let's suppose that in your clusters you have 3 namespaces:
Typically you would restrict access to logs in such a way that:
The way you can manage this in Kubelog is via Group entities of Backstage. That is:
NOTE: for simplicity, we assume all your user refs and group refs live in a Backstage namespace named 'default'.
Once you have created the groups, you can configure namespace permissions by adding one additional property to the cluster definition, named 'kubelogNamespacePermissions'. This is an array of namespaces, where for each namespace you can declare an array of identity refs (that is, users or groups). The example below is self-explanatory.
clusters:
- url: https://kubeapi.your-cluster.com
name: k3d-cluster
title: 'Kubernetes local'
kubelogKwirthHome: http://your-external.dns.name/kwirth
kubelogKwirthApiKey: '40f5ea6c-bac3-df2f-d184-c9f3ab106ba9|permanent|cluster::::'
+ kubelogNamespacePermissions:
+ - stage: ['group:default/devops', 'group:default/admin']
+ - production: ['group:default/admin', 'user:default/nicklaus-wirth']
authProvider: 'serviceAccount'
skipTLSVerify: true
skipMetricsLookup: true
It's easy to understand:
Remember, if you don't want to restrict a namespace, just don't add it to the configuration in the app-config file, as we have done with the 'dev' namespace.
When a user working with Backstage enters the Kubelog tab (on the entity page), they will see a list of clusters. If they select a cluster, a list of namespaces will be shown, that is, all namespaces that contain pods tagged with the current entity id. If the user has no permission for a specific namespace, the namespace will be shown in red and will not be accessible. Allowed namespaces will be shown in the primary color and will be clickable.
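The namespace rule just described can be sketched as a small pure function (an illustration of the rule as stated, not the plugin's actual code): a namespace absent from kubelogNamespacePermissions is open to everyone, while a listed namespace admits only the identity refs declared for it.

```typescript
// Illustrative sketch of the namespace permission rule described above.
// Namespaces not declared in the configuration are unrestricted; declared
// namespaces admit only the identity refs (users or groups) listed for them.
type NamespacePermissions = Record<string, string[]>;

function canViewNamespace(
  perms: NamespacePermissions,
  namespace: string,
  userRefs: string[], // the user's own ref plus the groups they belong to
): boolean {
  const allowed = perms[namespace];
  if (allowed === undefined) return true; // not declared => unrestricted
  return userRefs.some(ref => allowed.includes(ref));
}

// Mirrors the example configuration above:
const perms: NamespacePermissions = {
  stage: ['group:default/devops', 'group:default/admin'],
  production: ['group:default/admin', 'user:default/nicklaus-wirth'],
};
```

With this configuration, any user can open 'dev', members of group:default/devops can open 'stage' but not 'production', and user:default/nicklaus-wirth can open 'production'.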
In addition to namespace permissions, Kubelog added in version 0.9 a pod permission layer with which you can refine your permissions. Currently two scopes have been defined:
Each scope has a configuration section in the app-config, but both work exactly the same way, so we will explain just how 'view'-scope permissions are defined.
Let's consider a simple view-scoped pod permission sample based on previously defined namespaces: 'dev', 'stage', 'production':
clusters:
- url: https://kubeapi.your-cluster.com
name: k3d-cluster
title: 'Kubernetes local'
kubelogKwirthHome: http://your-external.dns.name/kwirth
kubelogKwirthApiKey: '40f5ea6c-bac3-df2f-d184-c9f3ab106ba9|permanent|cluster::::'
kubelogNamespacePermissions:
- stage: ['group:default/devops', 'group:default/admin']
- production: ['group:default/admin', 'user:default/nicklaus-wirth']
authProvider: 'serviceAccount'
skipTLSVerify: true
skipMetricsLookup: true
+ kubelogPodViewPermissions:
+ - stage:
+ allow:
+ - pods: [^common-]
+ - pods: [keys]
+ refs: []
+ - pods: [^ef.*]
+ refs: [group:.+/admin, group:test/.+]
+ - pods: [th$]
+ refs: [.*]
+ except:
+ - pods: [kwirth]
+ refs: [group:default/admin, user:default/nicklaus-wirth]
+ - production
+ deny:
+ - refs: [.*]
+ - others
+ allow:
+ - refs: []
...
VERY IMPORTANT NOTE: All strings defined in the pod permission layer are regular expressions.
About this example and about 'how to configure kubelog pod permissions':
So, in our example:
Let's complete the example with the other namespaces declared:
Please be aware that not declaring 'pods' or 'refs' means using a match-all approach (equivalent to declaring ['.*']), which is completely different from declaring [], which matches nothing.
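The match-all versus match-nothing distinction can be made concrete with a short sketch (my own illustration of the rule as stated above; the plugin's internal code may differ):

```typescript
// An omitted pattern list is treated as ['.*'] (match everything),
// while an explicit empty list [] matches nothing.
function matchesAny(patterns: string[] | undefined, value: string): boolean {
  const effective = patterns === undefined ? ['.*'] : patterns;
  return effective.some(p => new RegExp(p).test(value));
}
```

So `matchesAny(undefined, 'kwirth')` is true, `matchesAny([], 'kwirth')` is false, and an anchored pattern like '^common-' matches 'common-api' but not 'my-common-api'.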
Starting with Kubelog 0.9.5, there exist two scopes (consistent with Kwirth scopes):
The permissions related to these two scopes can be declared in app-config using these two sections:
The way permissions are declared is the one explained before, with this general structure inside the app-config YAML:
- SCOPE:
- NAMESPACE:
- allow:
- pods: [...]
refs: [...]
- except:
- deny:
- unless:
- NAMESPACE:
- allow:
- ...
- SCOPE:
- NAMESPACE:
...
Where:
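One plausible reading of how these blocks combine, sketched in TypeScript. This is an assumption on my part, inferred from the block names and the examples above rather than confirmed plugin behaviour: 'allow' grants access (defaulting to allow-all when no allow block is declared), 'except' carves exceptions out of 'allow', 'deny' revokes access, and 'unless' carves exceptions out of 'deny'.

```typescript
// Assumed combination semantics for allow/except/deny/unless (see lead-in).
interface Rule { pods?: string[]; refs?: string[] }
interface ScopeRules { allow?: Rule[]; except?: Rule[]; deny?: Rule[]; unless?: Rule[] }

// An omitted pattern list means match-all (['.*']); [] matches nothing.
const matchAny = (patterns: string[] | undefined, value: string): boolean =>
  (patterns === undefined ? ['.*'] : patterns).some(p => new RegExp(p).test(value));

// A rule hits when the pod name and at least one of the user's refs match.
const ruleHits = (rule: Rule, pod: string, userRefs: string[]): boolean =>
  matchAny(rule.pods, pod) && userRefs.some(r => matchAny(rule.refs, r));

function isAllowed(rules: ScopeRules, pod: string, userRefs: string[]): boolean {
  const hits = (rs?: Rule[]) => (rs ?? []).some(r => ruleHits(r, pod, userRefs));
  const granted =
    (rules.allow === undefined ? true : hits(rules.allow)) && !hits(rules.except);
  const revoked = hits(rules.deny) && !hits(rules.unless);
  return granted && !revoked;
}
```

Under this reading, the earlier 'production' example (a deny block with refs: [.*] and no unless) evaluates to false for everyone, and in 'stage' a pod matching 'th$' is visible to all refs except those carved out by the except block.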