8 Steps to Configure and Define Kubernetes Security Context
Published January 9, 2025.
Kubernetes is everywhere, orchestrating applications at a scale we once only imagined. But with this power comes a critical challenge: as Kubernetes grows, so do its risks. Attackers don’t need your cluster to fail – they thrive on the vulnerabilities you overlook. Traditional defenses like firewalls and identity management aren’t enough to protect the moving parts of a containerized environment.
The problem is more significant than you think. 28% of organizations run over 90% of workloads with insecure capabilities, and 70% rely on outdated Helm charts. Kubernetes demands more than speed and flexibility; it demands security embedded directly into your clusters. At the heart of this effort is the Kubernetes Security Context.
What is a Kubernetes Security Context?
A Kubernetes Security Context is a configuration that defines the security parameters for Pods or containers, shaping their runtime behavior to meet specific security standards. It allows you to manage user permissions and filesystem access, keeping workloads within defined security boundaries.
Here’s a basic example of a Pod spec with a security context:
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: secure-container
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
Internal vs. External Security Context
Kubernetes security is layered, with different controls handling specific aspects of workload behavior. Two key dimensions of this security model are the internal and external security contexts, which focus on different scopes of protection:
- Internal Security Context: Focuses on settings within the workload, like user IDs, filesystem access, and privilege levels. These configurations limit what a container can do inside the cluster, reducing risks at runtime.
- External Security Context: Manages how workloads interact with the broader environment. It includes network policies to control traffic, permissions for accessing Kubernetes APIs, and rules for external resource access.
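For instance, one common external control is a NetworkPolicy that restricts which Pods may send traffic to a workload. A minimal sketch, where the app labels and policy name are illustrative:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only   # illustrative name
spec:
  podSelector:
    matchLabels:
      app: secure-pod          # illustrative label for the protected workload
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend        # only Pods labeled "frontend" may connect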
Kubernetes Security Contexts, RBAC, and Pod Security Admission
Knowing the differences between Kubernetes Security Contexts, RBAC, and Pod Security Admission is essential for implementing layered and effective security measures, especially when dealing with serverless containers. While these mechanisms often overlap in discussions, they address unique aspects of cluster security.
Kubernetes Security Contexts control the configuration of individual Pods and containers, defining parameters like user IDs, privilege levels, and filesystem access. These settings restrict workloads to the least privileges necessary and isolate them at runtime, focusing on container-specific controls.
RBAC operates at the cluster level, managing who or what can interact with Kubernetes resources. It assigns roles and permissions to users, groups, or service accounts, restricting actions such as deploying Pods or modifying configurations and blocking unauthorized activity.
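As a hedged illustration, a namespaced Role and RoleBinding might grant a service account read-only access to Pods; the role, binding, and service account names here are hypothetical:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader             # hypothetical role name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods              # hypothetical binding name
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-sa                 # hypothetical service account
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io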
Pod Security Admission complements both by enforcing security policies during the deployment phase. It evaluates Pods against defined standards, such as requiring specific security context settings or forbidding privileged containers. This prevents insecure configurations from being applied, even if someone has RBAC permissions to create Pods.
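Pod Security Admission is typically enabled by labeling a namespace with one of the built-in Pod Security Standards levels. For example, the following labels enforce the restricted profile for every Pod created in the namespace (the namespace name is illustrative):
apiVersion: v1
kind: Namespace
metadata:
  name: payments               # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest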
Why Kubernetes Security Context is a Game-Changer
Security contexts provide granular control over workload behavior, enabling better protection and compliance. Key benefits include:
- Reducing the Attack Surface: Security contexts limit attackers' opportunities by enforcing non-root user IDs, dropping unnecessary Linux capabilities, and enabling read-only filesystems.
- Compliance at Scale: Security contexts help standardize security policies by mandating configurations such as seccomp profiles (see the snippet after this list). This standardization simplifies aligning workloads with compliance standards like PCI DSS or HIPAA, even in large, multi-tenant clusters.
- Improved Incident Response: Features like read-only root filesystems limit an attacker’s ability to alter configurations, reducing damage and simplifying containment during incidents.
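For example, applying the container runtime's default seccomp profile at the Pod level is one such mandated configuration. A minimal sketch:
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault     # use the runtime's default syscall filter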
Ongoing monitoring is essential to maximize the effectiveness of these security context benefits. Combine monitoring with tools like KICS, Trivy, Prowler, and GoSec to automate Kubernetes vulnerability scanning and zero in on actual risks. Jit’s Open ASPM platform makes this process seamless. Aside from integrating key security tools like those mentioned above, it focuses on what matters most by filtering out false positives and highlighting the critical fixes your Kubernetes security needs.
8 Steps to Configure and Define Kubernetes Security Context
1. Define the Security Context in the Pod Specification
Start by adding the securityContext block to your Pod’s YAML specification. You can then define key runtime permissions and behavior, such as user and group IDs. Here’s a simple example:
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
This example assigns a user ID (1000) and group ID (3000) to the Pod, creating a controlled environment. Without it, the Pod may inherit default (and potentially insecure) settings. You can use kubectl to verify the applied context by inspecting the live Pod:
kubectl get pod <pod-name> -o yaml
You can then leverage a tool like Kubescape to scan Kubernetes configurations and automatically identify potential security risks. Kubescape can help verify that your security contexts and Kubernetes configurations comply with established security frameworks.
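For example, assuming the Kubescape CLI is installed and pointed at your cluster, a scan against the NSA-CISA framework might look like this (the exact command can vary between Kubescape versions):
kubescape scan framework nsa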
2. Set User and Group IDs (UID/GID)
Next, specify runAsUser and runAsGroup in the Security Context. For example:
securityContext:
  runAsUser: 1001
  runAsGroup: 1001
UID 0 (the root user) carries full privileges, so a vulnerability in a container running as root gives an attacker a much shorter path to the host and, from there, the rest of the cluster. Configuring your containers with a specific, non-root runAsUser and runAsGroup limits what a compromised container can do, keeping the damage contained.
For multi-container Pods, verify that all containers inherit these settings. Combine this with image analysis tools like Trivy to ensure the container image supports non-root users.
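A hedged sketch of how a Pod-level securityContext is inherited by every container in the Pod (container names and images are illustrative); note that an individual container can still override these values in its own securityContext:
spec:
  securityContext:             # applies to all containers below
    runAsUser: 1001
    runAsGroup: 1001
  containers:
  - name: app                  # illustrative
    image: my-app:1.0
  - name: log-shipper          # illustrative sidecar
    image: my-log-shipper:1.0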
3. Enable RunAsNonRoot
Add runAsNonRoot to the Security Context to prevent containers from running with root privileges, even if misconfigured. This is especially critical for third-party images where the default user settings might not be straightforward:
securityContext:
  runAsNonRoot: true
With this setting, the kubelet refuses to start any container that would run as root (UID 0), catching misconfigured or third-party images before they run. It’s often paired with policy engines like OPA Gatekeeper to enforce the rule cluster-wide.
4. Configure Privilege Escalation Settings
Privilege escalation lets a container gain additional permissions during runtime, which can be exploited. Prevent this by setting allowPrivilegeEscalation to false:
securityContext:
  allowPrivilegeEscalation: false
For example, this stops a process from using setuid binaries or file capabilities to gain more privileges than its parent process started with. Disabling privilege escalation is especially important for workloads handling external traffic, like API gateways.
5. Use a Read-Only Filesystem
Set the container’s root filesystem to read-only to prevent unauthorized modifications to system files:
securityContext:
  readOnlyRootFilesystem: true
A read-only filesystem is ideal for applications that don’t need to write to disk, such as static web servers. If you need write access, carefully mount writable volumes and scope their usage in the Pod's configuration. Use tools like Falco to monitor filesystem activity and detect anomalies.
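If a workload does need scratch space, one common pattern is to keep the root filesystem read-only and mount a dedicated writable volume, such as an emptyDir at /tmp. A sketch where the container name, image, and mount path are illustrative:
spec:
  containers:
  - name: web                  # illustrative
    image: nginx
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: tmp
      mountPath: /tmp          # only this path is writable
  volumes:
  - name: tmp
    emptyDir: {}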
6. Define Container Capabilities
Linux capabilities let you fine-tune the privileges of your container. Drop all unnecessary capabilities by default, then add only what’s required for the workload to function:
securityContext:
  capabilities:
    drop: ["ALL"]
    add: ["NET_BIND_SERVICE"]
For example, adding NET_BIND_SERVICE allows the container to bind to privileged (low-numbered) ports without granting excessive permissions, limiting the attack surface to the minimum needed. Tools like kube-bench can help check your cluster configuration against CIS security benchmarks.
7. Set fsGroup for Shared Volumes
The fsGroup setting manages access to shared storage, restricting file ownership and modification to a specific group. This setting is critical for multi-tenant workloads like databases, where multiple tenants or applications may be sharing the same resources:
securityContext:
  fsGroup: 2000
With this setting, all files written to the shared volume are automatically assigned to the specified group. This prevents unauthorized access from other processes within the cluster.
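As a sketch, fsGroup is usually combined with a mounted volume so files written there are group-owned by the specified GID; the container name, image, mount path, and claim name below are hypothetical:
spec:
  securityContext:
    fsGroup: 2000              # files on mounted volumes get GID 2000
  containers:
  - name: db                   # illustrative
    image: postgres
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data       # hypothetical PVC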
8. Set Resource Limits for Container Stability
Resource limits control how much CPU and memory a container can use, preventing any container from consuming excessive resources. These limits are necessary to prevent a misbehaving container from disrupting cluster performance or causing outages. Here’s an example configuration:
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      limits:
        memory: "512Mi"
        cpu: "1"
      requests:
        memory: "256Mi"
        cpu: "0.5"
- Limits set the maximum resources a container can use, such as 512Mi of memory and one CPU core in the example above.
- Requests define the minimum resources required for the container to run correctly. Kubernetes uses these values to allocate resources and decide where to place the container within the cluster.
- Tools like Kubernetes Metrics Server can help monitor resource usage and adjust configurations to match workload requirements over time. This preventative approach is critical in shared clusters, where one workload could otherwise impact others.
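Assuming the Metrics Server is installed in the cluster, you can compare actual usage against these limits with kubectl:
kubectl top pod <pod-name>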
Build Secure Kubernetes Workloads
Kubernetes security starts with thoughtful configurations, and security contexts provide the building blocks. By defining user permissions, restricting privileges, and controlling how workloads interact within and outside the cluster, you create the first line of defense against attacks.
But you can work smarter with Jit. By automating vulnerability scans, integrating with tools like Trivy, Prowler, and Kubescape, and filtering out false positives, Jit empowers teams to maintain secure Kubernetes environments without the headache of constant monitoring. Developers can focus on coding and spend less time keeping Kubernetes secure. Learn more about Jit and start building a safer, more scalable infrastructure.