You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
A default-deny NetworkPolicy prevents accidentally exposing a Pod in a namespace that has no other NetworkPolicy defined.
Task: Create a new default-deny NetworkPolicy named deny-network in the namespace test for all traffic of type Ingress + Egress
The new NetworkPolicy must deny all Ingress + Egress traffic in the namespace test.
Apply the newly created default-deny NetworkPolicy to all Pods running in namespace test.
You can find a skeleton manifest file at /home/cert_masters/network-policy.yaml
A. See the explanation below
B. PlaceHolder
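A minimal sketch of such a default-deny policy. The name, namespace, and skeleton-file path come from the task; the manifest body itself is an assumption about what the skeleton expects. The empty podSelector matches every Pod in the namespace, and listing both policyTypes with no rules denies all traffic in both directions:

```yaml
# /home/cert_masters/network-policy.yaml -- default-deny sketch
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-network
  namespace: test
spec:
  podSelector: {}        # empty selector: applies to all Pods in namespace test
  policyTypes:
  - Ingress
  - Egress
```

Apply it with `kubectl apply -f /home/cert_masters/network-policy.yaml`; because no ingress or egress rules are listed, both directions default to deny.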
Create a new NetworkPolicy named deny-all in the namespace testing which denies all Ingress and Egress traffic.
A. See the explanation below:
B. PlaceHolder
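This variant follows the same default-deny pattern; only the name and namespace differ (a sketch, not a graded answer key):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: testing
spec:
  podSelector: {}        # applies to all Pods in namespace testing
  policyTypes:
  - Ingress
  - Egress
```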
A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed immediately.
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
Fix all of the following violations that were found against the Kubelet:
Fix all of the following violations that were found against etcd:
A. See explanation below.
B. PlaceHolder
CORRECT TEXT
A container image scanner is set up on the cluster, but it's not yet fully integrated into the cluster's configuration. When complete, the container image scanner shall scan for and reject the use of vulnerable images.
Given an incomplete configuration in directory /etc/kubernetes/epconfig and a functional container image scanner with HTTPS endpoint https://wakanda.local:8081/image_policy:
1. Enable the necessary plugins to create an image policy
2. Validate the control configuration and change it to an implicit deny
3. Edit the configuration to point to the provided HTTPS endpoint correctly
Finally, test if the configuration is working by trying to deploy the vulnerable resource /root/KSSC00202/vulnerable-resource.yml.
A. See the explanation below
B. PlaceHolder
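This is the ImagePolicyWebhook admission controller. A sketch of how the three steps are typically wired up; the endpoint comes from the task, but the exact file names under /etc/kubernetes/epconfig are assumptions:

```yaml
# Admission control configuration (file name assumed):
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/epconfig/kubeconfig.yaml
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false   # step 2: implicit deny if the webhook is unreachable
```

In the referenced kubeconfig, point the cluster entry at the scanner (step 3): `server: https://wakanda.local:8081/image_policy`. Then enable the plugin in /etc/kubernetes/manifests/kube-apiserver.yaml (step 1) by adding `ImagePolicyWebhook` to `--enable-admission-plugins` and setting `--admission-control-config-file` to the configuration above; the API server static Pod restarts on save, after which deploying /root/KSSC00202/vulnerable-resource.yml should be rejected.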
Cluster: dev
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
Task:
Retrieve the content of the existing secret named adam in the safe namespace.
Store the username field in a file named /home/cert-masters/username.txt, and the password field in a file named /home/cert-masters/password.txt.
1. You must create both files; they don't exist yet.
2. Do not use/modify the created files in the following steps; create new temporary files if needed.
Create a new secret named newsecret in the safe namespace, with the following content:
Username: dbadmin
Password: moresecurepas
Finally, create a new Pod that has access to the secret newsecret via a volume:
Namespace: safe
Pod name: mysecret-pod
Container name: db-container
Image: redis
Volume name: secret-vol
Mount path: /etc/mysecret
A. See the explanation below
B. PlaceHolder
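A sketch of the commands and manifest (all names and paths come from the task; the jsonpath keys `username`/`password` are assumptions about how the existing secret's data is keyed):

```shell
# Extract the existing secret's fields (data keys assumed)
kubectl get secret adam -n safe -o jsonpath='{.data.username}' | base64 -d > /home/cert-masters/username.txt
kubectl get secret adam -n safe -o jsonpath='{.data.password}' | base64 -d > /home/cert-masters/password.txt

# Create the new secret with the literal values from the task
kubectl create secret generic newsecret -n safe \
  --from-literal=Username=dbadmin --from-literal=Password=moresecurepas
```

The Pod then mounts the secret as a volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysecret-pod
  namespace: safe
spec:
  containers:
  - name: db-container
    image: redis
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/mysecret
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: newsecret
```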
Fix all issues via configuration and restart the affected components to ensure the new setting takes effect.
Fix all of the following violations that were found against the API server:
1. Ensure the --authorization-mode argument includes RBAC
2. Ensure the --authorization-mode argument includes Node
3. Ensure that the --profiling argument is set to false
Fix all of the following violations that were found against the Kubelet:
1. Ensure the --anonymous-auth argument is set to false.
2. Ensure that the --authorization-mode argument is set to Webhook.
Fix all of the following violations that were found against etcd:
1. Ensure that the --auto-tls argument is not set to true
Hint: Use the kube-bench tool.
A. See the explanation below
B. PlaceHolder
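On a kubeadm cluster these flags live in the static Pod manifests and the kubelet config file; a sketch of the relevant fragments, assuming the default kubeadm paths:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (command fragment) -- the static
# Pod restarts automatically when the file is saved
- --authorization-mode=Node,RBAC
- --profiling=false

# /etc/kubernetes/manifests/etcd.yaml (command fragment)
- --auto-tls=false

# /var/lib/kubelet/config.yaml (fragment); then: systemctl restart kubelet
authentication:
  anonymous:
    enabled: false
authorization:
  mode: Webhook
```

Re-running `kube-bench` afterwards should confirm the findings are resolved.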
Analyze and edit the given Dockerfile
1. FROM ubuntu:latest
2. RUN apt-get update -y
3. RUN apt-install nginx -y
4. COPY entrypoint.sh /
5. ENTRYPOINT ["/entrypoint.sh"]
6. USER ROOT
Fix the two instructions in this file that are prominent security best-practice issues.
Analyze and edit the deployment manifest file
1. apiVersion: v1
2. kind: Pod
3. metadata:
4. name: security-context-demo-2
5. spec:
6. securityContext:
7. runAsUser: 1000
8. containers:
9. - name: sec-ctx-demo-2
10. image: gcr.io/google-samples/node-hello:1.0
11. securityContext:
12. runAsUser: 0
13. privileged: True
14. allowPrivilegeEscalation: false
Fix the two fields in this file that are prominent security best-practice issues.
Don't add or remove configuration settings; only modify the existing configuration settings
Whenever you need an unprivileged user for any of the tasks, use user test-user with the user id 5487
A. See the explanation below:
B. PlaceHolder
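Which two lines are graded is not stated in this copy; the commonly cited candidates, under the "modify only, don't add or remove" constraint, are running as root and the privileged container. A sketch of the container-level changes in the manifest:

```yaml
# Container securityContext (sketch): the two fields usually flagged
securityContext:
  runAsUser: 5487        # was 0 (root); the task supplies test-user with uid 5487
  privileged: false      # was True
  allowPrivilegeEscalation: false
```

In the Dockerfile, the analogous fix is the final `USER` instruction (e.g. changing `USER ROOT` to the unprivileged `test-user`); the unpinned `ubuntu:latest` base tag is the other frequently flagged instruction.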
CORRECT TEXT
Two tools are pre-installed on the cluster's worker node:
1. sysdig
2. falco
Using the tool of your choice (including any non-pre-installed tool), analyze the container's behavior for at least 30 seconds, using filters that detect newly spawning and executing processes. Store an incident file at /opt/KSRS00101/alerts/details, containing the detected incidents, one per line, in the following format:
The following example shows a properly formatted incident file:
A. See the explanation below:
B. PlaceHolder
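The required line format is elided in this copy, so only the capture side can be sketched. With sysdig, a 30-second capture filtered to process spawns might look like the following; the container name and the output fields are placeholders to be matched to the task's format:

```shell
# Run for at least 30s, one line per spawned/executed process in the target container
sysdig -M 30 -p "%evt.time,%user.uid,%proc.name" \
  "container.name=<target-container> and evt.type=execve" \
  > /opt/KSRS00101/alerts/details
```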
Create a RuntimeClass named untrusted using the prepared runtime handler named runsc.
Create a Pod of image alpine:3.13.2 in the Namespace default to run on the gVisor runtime class.
A. See the explanation below:
B. PlaceHolder
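A sketch of both objects; the RuntimeClass name and handler come from the task, while the Pod name and its command are assumptions (the task does not name the Pod):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: untrusted
handler: runsc          # the prepared gVisor runtime handler
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-gvisor   # Pod name assumed; not specified in the task
  namespace: default
spec:
  runtimeClassName: untrusted
  containers:
  - name: alpine
    image: alpine:3.13.2
    command: ["sleep", "3600"]   # keep the container running; assumption
```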
CORRECT TEXT
Context
This cluster uses containerd as CRI runtime.
Containerd's default runtime handler is runc. Containerd has been prepared to support an additional runtime handler, runsc (gVisor).
Task
Create a RuntimeClass named sandboxed using the prepared runtime handler named runsc.
Update all Pods in the namespace server to run on gVisor.
A. See the explanation below
B. PlaceHolder
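The RuntimeClass itself is a two-line sketch; the handler name comes from the task:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: runsc
```

Note that `runtimeClassName` is immutable on a running Pod, so "update all Pods in the namespace server" in practice means editing the owning workloads (e.g. `kubectl -n server edit deployment <name>`, adding `runtimeClassName: sandboxed` under `spec.template.spec`) so the Pods are recreated on gVisor; standalone Pods would need to be recreated from their manifests.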