screen (Linux OS utility)
Open a new (named) screen:
screen -S session_name
Lists all existing screen sessions:
screen -ls
Reconnects to an existing (detached) screen session:
screen -r SCREEN_NAME_OR_NUMBER
Forcibly detach an attached screen session (from its other terminal) and reattach to it here:
screen -rd SCREEN_NAME_OR_NUMBER
Most useful keyboard shortcuts to manage an open session:
- Ctrl+a d : Detach from the current screen (without destroying it)
- Ctrl+a c : Create a new window (with a shell)
- Ctrl+a " : List all windows
- Ctrl+a 0 : Switch to window 0 (by number)
- Ctrl+a A : Rename the current window
- Ctrl+a S : Split the current region horizontally into two regions
- Ctrl+a | : Split the current region vertically into two regions
- Ctrl+a tab : Switch the input focus to the next region
- Ctrl+a Ctrl+a : Toggle between the current and previous window
- Ctrl+a Q : Close all regions but the current one
- Ctrl+a X : Close the current region
- Ctrl+a ESC or Ctrl+a [ : Enter copy mode (you can scroll the buffer with the up/down and PageUp/PageDown keys); press ESC to return to the shell
Once in copy mode:
- Move the cursor to the beginning of the text you want to copy
- Press SPACE to start highlighting
- Move the cursor to the end of the text you want to copy
- Press SPACE again to copy the selection to the paste buffer and exit copy mode
- Press Ctrl+a ] to paste the text
Resizing a screen region:
Type Ctrl+a :resize +10 to increase the current region's size
How to unfreeze the terminal after accidentally pressing Ctrl+s:
Type Ctrl+q
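A minimal end-to-end sketch of the typical workflow (the session name "build" is just an illustration):

screen -S build        # start a named session, run your long task inside it
                       # press Ctrl+a d to detach; the task keeps running
screen -ls             # find the session again, e.g. "12345.build (Detached)"
screen -r build        # reattach to it later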
Sample ~/.screenrc
# Turn off the welcome message
startup_message off

# Disable visual bell
vbell off

# Set scrollback buffer to 10000
defscrollback 10000

# Customize the status line
hardstatus alwayslastline
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %m-%d %{W}%c %{g}]'
-
Docker cheatsheet
Generic
- docker exec -ti CONTAINER_ID command_to_execute : Runs a command, in interactive mode, inside an already running container (e.g. to start a shell: /bin/sh)
- docker rmi IMAGE_NAME : Deletes the image (all existing containers based on this image must be stopped and deleted first)
- docker rm CONTAINER_ID : Deletes the container
- docker stop CONTAINER_ID : Stops the container
- docker inspect CONTAINER_ID : Lists all attributes of the container with id CONTAINER_ID
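A short sketch tying the generic commands together (the container name "web" and the nginx image are just examples):

docker run -d --name web nginx    # start a container to work with
docker exec -ti web /bin/sh       # open an interactive shell inside it (exit to leave)
docker stop web                   # stop the container
docker rm web                     # delete the container
docker rmi nginx                  # now the image can be deleted too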
Networking
- docker network ls : Lists all configured networks
- docker inspect NETWORK_ID : Displays all attributes of the network with id NETWORK_ID
- docker run --name alpine-2 --network=none alpine : Runs a new container named alpine-2 and attaches it to the "none" network, using the alpine image
- docker network create --driver bridge --subnet 182.18.0.1/24 --gateway 182.18.0.1 wp-mysql-network : Creates a new bridge network named wp-mysql-network with subnet 182.18.0.1/24 and gateway 182.18.0.1
- docker run -d -e MYSQL_ROOT_PASSWORD=db_pass123 --name mysql-db --network wp-mysql-network mysql:5.6 : Runs a new container in detached mode, named mysql-db, setting the environment variable MYSQL_ROOT_PASSWORD=db_pass123, attached to the wp-mysql-network network, from image mysql:5.6
- docker run --network=wp-mysql-network -e DB_Host=mysql-db -e DB_Password=db_pass123 -p 38080:8080 --name webapp --link mysql-db:mysql-db -d kodekloud/simple-webapp-mysql : Runs a new container named webapp from image kodekloud/simple-webapp-mysql, attached to the wp-mysql-network network, defining 2 environment variables, publishing internal (container) port 8080 on host port 38080, linked to the mysql-db container, in detached mode
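To double-check which network a container actually joined, docker inspect can print its network attachments; a quick sketch, reusing the mysql-db container from the list above:

docker inspect -f '{{json .NetworkSettings.Networks}}' mysql-db    # network name, IP address, gateway
docker network inspect wp-mysql-network                            # lists all containers attached to this network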
Storage Management
- docker run -v /opt/data:/var/lib/mysql -d --name mysql-db -e MYSQL_ROOT_PASSWORD=db_pass123 mysql : Runs a new container in detached mode, named mysql-db, bind-mounting the host directory /opt/data at the container directory /var/lib/mysql, setting an environment variable, from the mysql image. Alternative using the --mount option (rather than -v): docker run --mount type=bind,source=/opt/data,target=/var/lib/mysql mysql
- docker volume create data_volume : Creates a new persistent volume named data_volume (a new folder under /var/lib/docker/volumes on the host file system); a new container can then map it to /var/lib/mysql with: docker run -v data_volume:/var/lib/mysql mysql
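To see what Docker created behind the scenes, the standard volume subcommands help:

docker volume ls                     # lists all volumes
docker volume inspect data_volume    # "Mountpoint" shows the backing folder under /var/lib/docker/volumes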
Docker image build
Docker file sample:
# Each statement creates a layer, with its own space usage
FROM ubuntu
RUN apt-get update && apt-get -y install python
RUN pip install flask flask-mysql
COPY . /opt/source-code
ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run
Command to be issued in order to build the image (run from the directory containing the Dockerfile):
docker build -t username/app-name .
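If the Dockerfile does not sit in the root of the build context, or has a different name, the standard -f flag selects it; a sketch (the path docker/Dockerfile is just an example):

docker build -f docker/Dockerfile -t username/app-name .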
Docker registry management
- docker run -d -p 5000:5000 --name registry registry:2 : Creates a new local registry
- docker image tag my-image localhost:5000/my-image : Tags an image so that it gets stored on the local registry
- docker push localhost:5000/my-image : Pushes an image to the local registry
- docker pull localhost:5000/my-image : Pulls an image from the local registry
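The local registry also exposes an HTTP API, which gives a quick way to verify what has been pushed (standard registry v2 endpoint):

curl http://localhost:5000/v2/_catalog    # e.g. {"repositories":["my-image"]}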
Log management
- docker logs -f CONTAINER_ID : Prints out logs from CONTAINER_ID (-f = follow)
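Two other standard docker logs options are handy to narrow the output:

docker logs --tail 100 CONTAINER_ID     # only the last 100 lines
docker logs --since 10m CONTAINER_ID    # only lines logged in the last 10 minutes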
Info & Tips
Docker engine
Docker is made up of 3 components:
- Docker daemon
- REST API
- Docker CLI
When you install Docker on a Linux host, all 3 components are deployed.
The Docker CLI can also be installed as a single component and then used to run docker commands against a remote host running the REST API and Docker daemon.
To run docker commands on a remote host:
docker -H=ip_of_remote_docker_engine:2375 run nginx
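The same can be achieved by exporting the standard DOCKER_HOST environment variable, so that every subsequent docker command targets the remote engine:

export DOCKER_HOST="tcp://ip_of_remote_docker_engine:2375"
docker run nginx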
Docker cgroups
To limit the share of CPU that can be assigned to a container (.5 = cannot use more than half of one host CPU):
docker run --cpus=.5 ubuntu
To limit the amount of memory that can be allocated by a container:
docker run --memory=100m ubuntu
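To verify that the limits are in place, they can be read back from the container metadata (memory is reported in bytes), or observed live:

docker inspect -f '{{.HostConfig.Memory}}' CONTAINER_ID    # prints 104857600 for --memory=100m
docker stats --no-stream CONTAINER_ID                      # current CPU/memory usage against the limits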
-
EFK Stack deployment on Kubernetes
Full setup, including yaml manifest files, for a single node test system, collecting logs from nginx.
Intro
Logs produced by running containers and written to stdout/stderr are, by default, stored on the host machine under /var/log/containers.
Log rotation is pretty frequent, so either you collect and store the logs somewhere, or they will soon be gone for good.
At some point you might (will) need to analyse logs, so it is a good idea to set up a framework for that right from the beginning.
Moreover, rather than scrolling through text files, having logs available as structured data in a web UI helps a lot when it comes to log analysis.
An EFK (Elasticsearch, Fluentd, Kibana) stack allows you to do exactly that. A common alternative is the ELK stack (Elasticsearch, Logstash, Kibana).
This post will guide you through all the necessary steps. As a sample case, we will collect (JSON) logs from an nginx container and make them available in Kibana.
Components
- Fluentd: the log aggregator, used to collect container stdout/stderr logs and (optionally) process them before sending them to Elasticsearch
- Elasticsearch: a scalable, RESTful search and analytics engine for storing the Kubernetes logs
- Kibana: the visualization layer, providing a user interface to query and visualize logs
Prerequisites
- A Kubernetes cluster running on a Linux host VM
- kubectl utility, configured to interact with the cluster above
Step 1: Elasticsearch deployment
- Create a Service using the following yaml manifest
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: default
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
- Create a persistent volume to be assigned to the elasticsearch pods
- Make sure that .spec.local.path points to an existing folder on the host VM's local filesystem
- Make sure that .spec.nodeAffinity.required.nodeSelectorTerms.matchExpressions.values matches the Kubernetes cluster node name
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /u01/elastic
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - your_node_name
- Create a StatefulSet (the sample below runs in a single-node configuration)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: default
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 10m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: discovery.type
              value: single-node
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "es-cluster-0.elasticsearch"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: elasticsearch
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
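Since the volumeClaimTemplate is named data, the StatefulSet should create a PVC named data-es-cluster-0 for the first replica; you can check that it bound to the PersistentVolume created above:

kubectl get pv
kubectl get pvc data-es-cluster-0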
Now, make sure your Elasticsearch pod is up and running:
[root@mr-k8s-demo1 ~]# kubectl get pods -l app=elasticsearch
NAME           READY   STATUS    RESTARTS        AGE
es-cluster-0   1/1     Running   2 (2d16h ago)   5d13h
Time to run a test call via HTTP. Here we have 2 alternatives.
Option #1:
Forward traffic to port 9200 and test via curl from the Linux host VM:
[root@mr-k8s-demo1 ~]# kubectl port-forward $(kubectl get pods -o=name --selector=app=elasticsearch) 9200:9200
Open a new shell (port-forward will keep the shell above busy)
[root@mr-k8s-demo1 ~]# curl http://localhost:9200/_cluster/state?pretty
Option #2:
Permanently expose port 9200 with a Service, so that it becomes accessible from outside the cluster as well (using the Linux host VM's real IP address):
[root@mr-k8s-demo1 ~]# kubectl expose service elasticsearch --port=9200 --target-port=9200 --external-ip=external_ip_of_your_Linux_host_VM --name=elasticsearch-external
Open a browser and go to http://external_ip_of_your_Linux_host_VM:9200/_cluster/state?pretty
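Whichever option you choose, other read-only endpoints can be tested the same way; for instance, listing indices (this will come in handy again once Fluentd starts shipping logs):

curl 'http://localhost:9200/_cat/indices?v'    # adapt host/port to the option you picked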
That's it for Elasticsearch.
Step 2: Kibana deployment
Deploy service + deployment using the following manifest:
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: default
  labels:
    app: kibana
spec:
  ports:
    - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: default
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.2.0
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 10m
          env:
            - name: ELASTICSEARCH_URL
              value: http://elasticsearch:9200
          ports:
            - containerPort: 5601
Testing Kibana availability
Now, similarly to what we just did for Elasticsearch, we must expose our Service. This time, since you will be accessing the web UI frequently, the suggested solution is to directly create a Service exposing port 5601:
[root@mr-k8s-demo1 ~]# kubectl expose service kibana --port=5601 --target-port=5601 --external-ip=external_ip_of_your_Linux_host_VM --name=kibana-external
And point your browser to http://external_ip_of_your_Linux_host_VM:5601
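Before opening the browser, Kibana's status endpoint offers a quick sanity check from the shell (the /api/status path is standard in Kibana 7.x):

curl http://external_ip_of_your_Linux_host_VM:5601/api/status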
Step 3: Fluentd deployment
Next we will set up Fluentd as a DaemonSet. Because it is a DaemonSet, a Fluentd logging agent Pod will run on every node of our cluster.
Use the following yaml to create the Fluentd DaemonSet. It will do the following:
- Create a ConfigMap holding the Fluentd configuration file (fluent.conf).
- Create a ServiceAccount called fluentd. Fluentd processes will use this service account to access the Kubernetes API.
- Create a ClusterRole which allows get/list/watch access on pods and namespaces.
- Create a ClusterRoleBinding, binding the ServiceAccount above to the ClusterRole, thus giving the permissions to the ServiceAccount.
- Create the DaemonSet itself.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <source>
      @type tail
      read_from_head true
      path /var/log/containers/nginx*.log
      pos_file /var/log/containers/nginx.log.pos
      tag nginx.access
      <parse>
        @type regexp
        expression /(?<docker_ts>[^ ]*) (?<docker_flag>[^ ]*) (?<docker_stdout>[^ ]*) (?<data>.*).*$/
      </parse>
    </source>
    <filter nginx.**>
      @type record_transformer
      <record>
        ${record["data"]}
      </record>
      remove_keys docker_ts,docker_flag,docker_stdout
    </filter>
    <filter nginx.**>
      @type parser
      key_name data
      format json
      reserve_data false
    </filter>
    <match nginx.**>
      @type elasticsearch
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
      index_name fluentd
      type_name fluentd
    </match>
    <match **>
      @type null
    </match>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: default
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: default
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      initContainers:
        - name: config-fluentd
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh","-c"]
          args:
            - cp /fluentd/etc2/fluent.conf /fluentd/etc/fluent.conf;
          volumeMounts:
            - name: config-path
              mountPath: /fluentd/etc
            - name: config-source
              mountPath: /fluentd/etc2
      containers:
        - name: fluentd
          #image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
          resources:
            limits:
              memory: 512Mi
            requests:
              cpu: 25m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: config-path
              mountPath: /fluentd/etc
      terminationGracePeriodSeconds: 30
      volumes:
        - name: config-source
          configMap:
            name: fluentd-config
            items:
              - key: fluent.conf
                path: fluent.conf
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: config-path
          emptyDir: {}
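Once applied, a quick way to confirm that the DaemonSet is up and that Fluentd started cleanly (pod names will differ on your cluster):

kubectl get daemonset fluentd
kubectl logs -l app=fluentd --tail=20    # look for a line reporting the connection to Elasticsearch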
Fluentd configuration
Fluentd locally mounts the folder in which the log files containing stdout from all containers are available (/var/lib/docker/containers).
Depending on the configuration defined in the file /fluentd/etc/fluent.conf, this content can then be forwarded to Elasticsearch.
The configuration file is defined as a ConfigMap object, which is then mounted during container startup.
References to elasticsearch must be passed as environment variables:
env:
  - name: FLUENT_ELASTICSEARCH_HOST
    value: "elasticsearch"  # Make sure this name can be resolved within the cluster
  - name: FLUENT_ELASTICSEARCH_PORT
    value: "9200"
  - name: FLUENT_ELASTICSEARCH_SCHEME
    value: "http"
Fluentd can read different sources, then parse, filter, and change/add/remove content before forwarding logs to a destination (e.g. Elasticsearch).
Documentation (adapt to your desired version) is available here: https://docs.fluentd.org/v/0.12/
Sample configuration file (reads nginx logs as JSON):
<source>
  @type tail                                   # reads the file, polling for new entries
  read_from_head true                          # starts reading from the beginning of the file
  path /var/log/containers/nginx*.log          # pathname (can include wildcards) of the file to be read
  pos_file /var/log/containers/nginx.log.pos   # fluentd stores the last read position in this file
  tag nginx.access                             # adds a tag, useful for further processing steps
  <parse>
    @type regexp                               # splits each retrieved line according to the regexp below
    expression /(?<docker_ts>[^ ]*) (?<docker_flag>[^ ]*) (?<docker_stdout>[^ ]*) (?<data>.*).*$/
  </parse>
</source>
<filter nginx.**>
  @type record_transformer                     # transforms the content of entries tagged with nginx.*
  <record>
    ${record["data"]}                          # defines the output: the field named "data"
  </record>
  remove_keys docker_ts,docker_flag,docker_stdout   # suppresses these fields from the output
</filter>
<filter nginx.**>
  @type parser
  key_name data                                # parses the value of field "data" as JSON
  format json
  reserve_data false                           # outputs only the parsed content (the "data" root is removed)
</filter>
<match nginx.**>
  @type elasticsearch                          # sends processed entries to Elasticsearch
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  index_name fluentd                           # name of the index created on Elasticsearch
  type_name fluentd
</match>
<match **>
  @type null                                   # everything else, not tagged nginx.*, is discarded
</match>
Tips
Output can also be redirected to a file, which is useful for troubleshooting (you can see the outcome of log processing, based on your filters/transformers).
Sample:
<match **>
  @type file
  path /var/log/fluent/myapp
  utc
  append true
</match>
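To read the generated files, exec into the Fluentd pod (replace FLUENTD_POD_NAME with the actual name returned by kubectl get pods -l app=fluentd):

kubectl exec -ti FLUENTD_POD_NAME -- ls -l /var/log/fluent/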
Sample container: Nginx
The following yaml manifest deploys an nginx instance with all default settings except for the log format: we will be using JSON.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes 1;
    events {
      worker_connections 10240;
    }
    http {
      log_format logger-json escape=json '{"time_local":"$time_iso8601", "remote_addr":"$remote_addr", "remote_user":"$remote_user", "request":"$request", "status":"$status", "body_bytes_sent":"$body_bytes_sent", "request_time":"$request_time", "http_referrer":"$http_referer", "http_user_agent":"$http_user_agent", "request_length":"$request_length" }';
      server {
        listen 80;
        server_name localhost;
        location / {
          root /usr/share/nginx/html;
          index index.html index.htm;
        }
        access_log /var/log/nginx/access.log logger-json;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
              readOnly: true
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-conf
            items:
              - key: nginx.conf
                path: nginx.conf
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 30008
  selector:
    app: nginx
Call the nginx instance you just deployed by pointing your browser to: http://external_ip_of_your_Linux_host_VM:30008
You should see nginx's home page.
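To generate a few log entries to look at in the next step, you can also hit the page repeatedly from the shell:

for i in $(seq 1 20); do curl -s -o /dev/null http://external_ip_of_your_Linux_host_VM:30008/; done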
Testing the complete stack
Based on the configuration provided above, nginx will log to stdout in JSON format.
Fluentd tails that log file; each new line is parsed as per the configuration file, and all entries matching the provided filter are forwarded to Elasticsearch.
To make them visible, there is one last step to complete in Kibana.
Log in to the web UI and go to Management -> Index Management:
Based on the configuration provided, you should see at least 1 index named "fluentd". Note that this name is set in the Fluentd configuration file.
To make the index visible, you need to define an Index Pattern.
Click on Management -> Index Patterns -> Create Index Pattern:
Start typing the name so that it matches at least one of the existing indexes (in the sample above: fluent …). Then click next and complete the index pattern creation.
Now click on Discover, make sure that the index pattern created above is selected, and select a time range that includes the moment you accessed the nginx home page. The collected log data will be displayed and, in the left column, each single log attribute (obtained by parsing the JSON entries) will be available.
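If nothing shows up in Discover, a quick way to bisect the problem is to query Elasticsearch directly and check whether any document reached the fluentd index at all (using the port exposed in step 1):

curl 'http://external_ip_of_your_Linux_host_VM:9200/fluentd/_search?pretty&size=1'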