diff --git a/ADVANCED.md b/ADVANCED.md deleted file mode 100644 index 23326bbec..000000000 --- a/ADVANCED.md +++ /dev/null @@ -1,49 +0,0 @@ -# Advanced usage - -### Infrastructure Setup - -First create and populate a .env file with the required variables: - -```bash -cp deployments/kubehound/.env.tpl deployments/kubehound/.env -``` - -Edit the variables (datadog env `DD_*` related and `KUBEHOUND_ENV`): - -* `KUBEHOUND_ENV`: `dev` or `release` -* `DD_API_KEY`: api key you created from https://app.datadoghq.com/ website - -Note: -* `KUBEHOUND_ENV=dev` will build the images locally -* `KUBEHOUND_ENV=release` will use prebuilt images from ghcr.io - -### Running KubeHound - -To replicate the automated command and run KubeHound step-by-step. First build the application: - -```bash -make build -``` - -Next create a configuration file: - -```yaml -collector: - type: live-k8s-api-collector -telemetry: - enabled: true -``` - -A tailored sample configuration file can be found [here](./configs/etc/kubehound.yaml), a full configuration reference containing all possible parameters [here](./configs/etc/kubehound-reference.yaml). - -Finally run the KubeHound binary, passing in the desired configuration: - -```bash -bin/kubehound -c -``` - -Remember the targeted cluster must be set via `kubectx` or setting the `KUBECONFIG` environment variable. 
Additional functionality for managing the application can be found via: - -```bash -make help -``` diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 2ec8aedad..13bcf4034 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -40,5 +40,4 @@ To add a new attack to KubeHound, please do the following: + Create the [resources](../test/setup/test-cluster/attacks/) file in the test cluster that will introduce an instance of the attack into the test cluster + Add an [edge system test](../test/system/graph_edge_test.go) that verifies the attack is correctly created by KubeHound - See [here](https://github.com/DataDog/KubeHound/pull/68/files) for a previous example PR. - \ No newline at end of file +See [here](https://github.com/DataDog/KubeHound/pull/68/files) for a previous example PR. diff --git a/DEVELOPER.md b/DEVELOPER.md deleted file mode 100644 index 425957977..000000000 --- a/DEVELOPER.md +++ /dev/null @@ -1,111 +0,0 @@ -# Developer - -## Requirements - -To sucessufully build and run the test for kubehound, you need: - -+ [Golang](https://go.dev/doc/install) `>= 1.22` -+ [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager) -+ [Kubectl](https://kubernetes.io/docs/tasks/tools/) - -## Build - -Build the application via: - -```bash -make build -``` - -All binaries will be output to the [bin](./bin/) folder - -## Release build - -Build the release packages locally using [goreleaser](https://goreleaser.com/install): - -```bash -make local-release -``` - -## Unit Testing - -The full suite of unit tests can be run locally via: - -```bash -make test -``` - -## System Testing - -The repository includes a suite of system tests that will do the following: -+ create a local kubernetes cluster -+ collect kubernetes API data from the cluster -+ run KubeHound using the file collector to create a working graph database -+ query the graph database to ensure all expected vertices and edges have been created correctly - -The cluster setup and 
running instances can be found under [test/setup](./test/setup/) - -If you need to manually access the system test environment with kubectl and other commands, you'll need to set (assuming you are at the root dir): - -```bash -cd test/setup/ && export KUBECONFIG=$(pwd)/.kube-config -``` - -### Environment variable: -- `DD_API_KEY` (optional): set to the datadog API key used to submit metrics and other observability data. - -### Setup - -Setup the test kind cluster (you only need to do this once!) via: - -```bash -make local-cluster-deploy -``` - -Then run the system tests via: - -```bash -make system-test -``` - -To cleanup the environment you can destroy the cluster via: - -```bash -make local-cluster-destroy -``` - -To list all the available commands, run: - -```bash -make help -``` - -In case of conflict/error, or just if you want to free some of your RAM. You can destroy the backend stack dedicated to the system-test. -Simply run: -```bash -make system-test-clean -``` - -Note: if you are running on Linux but you don't want to run `sudo` for `kind` and `docker` command, you can overwrite this behavior by editing the following var in `test/setup/.config`: -* `DOCKER_CMD="docker"` for docker command -* `KIND_CMD="kind"` for kind command - -### CI Testing - -System tests will be run in CI via the [system-test](./.github/workflows/system-test.yml) github action - - -## Metrics and logs - -To have some in-depth metrics and log correlation, all the components are now linked to datadog. To configure it you just need to add your Datadog API key (`DD_API_KEY`) in the environment variable in the `deployments/kubehound/.env`. When the API key is configured, a docker will be created `kubehound-dev-datadog`. 
- -All the information being gathered are available at: - -* Metrics: https://app.datadoghq.com/metric/summary?filter=kubehound.janusgraph -* Logs: https://app.datadoghq.com/logs?query=service%3Akubehound%20&cols=host%2Cservice&index=%2A&messageDisplay=inline&stream_sort=desc&viz=stream&from_ts=1688140043795&to_ts=1688140943795&live=true - -To collect the metrics for Janusgraph an exporter from Prometheus is being used: -* https://github.com/prometheus/jmx_exporter - -They are exposed here: -* Locally: http://127.0.0.1:8099/metrics -* Datadog: https://app.datadoghq.com/metric/summary?filter=kubehound.janusgraph diff --git a/Makefile b/Makefile index 396e9a82c..cd9515e6d 100644 --- a/Makefile +++ b/Makefile @@ -79,7 +79,7 @@ cache-clear: ## Clear the builder cache .PHONY: kubehound kubehound: | build ## Prepare kubehound (build go binary, deploy backend) - ./bin/kubehound + ./bin/build/kubehound .PHONY: test test: ## Run the full suite of unit tests @@ -133,7 +133,3 @@ thirdparty-licenses: ## Generate the list of 3rd party dependencies and write to local-wiki: ## Generate and serve the mkdocs wiki on localhost poetry install || pip install mkdocs-material mkdocs-awesome-pages-plugin markdown-captions poetry run mkdocs serve || mkdocs serve - -.PHONY: local-release -local-release: ## Generate release packages locally via goreleaser - goreleaser release --snapshot --clean --config .goreleaser.yaml diff --git a/README.md b/README.md index 1655be268..8512d7f8e 100644 --- a/README.md +++ b/README.md @@ -7,13 +7,21 @@ A Kubernetes attack graph tool allowing automated calculation of attack paths be ## Quick Start +### Requirements + +To run KubeHound, you need a couple of dependencies: ++ [Docker](https://docs.docker.com/engine/install/) `>= 19.03` ++ [Docker Compose](https://docs.docker.com/compose/compose-file/compose-versioning/) `V2` + +### Install and run + Select a target Kubernetes cluster, either: * Using [kubectx](https://github.com/ahmetb/kubectx) * Using specific
kubeconfig file by exporting the env variable: `export KUBECONFIG=/your/path/to/.kube/config` Download binaries are available for Linux / Windows / Mac OS via the [releases](https://github.com/DataDog/KubeHound/releases) page or by running the following (Mac OS/Linux): ```bash -wget https://github.com/DataDog/KubeHound/releases/download/latest/kubehound-$(uname -o | sed 's/GNU\///g')-$(uname -m) -O kubehound +wget https://github.com/DataDog/KubeHound/releases/latest/download/kubehound-$(uname -o | sed 's/GNU\///g')-$(uname -m) -O kubehound chmod +x kubehound ``` @@ -29,34 +37,32 @@ Then, simply run ```bash ./kubehound ``` -
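The download one-liner above derives the release artifact name from the host platform. As a hedged sanity check (Linux/macOS shells; on Linux, `uname -o` prints `GNU/Linux` and the `sed` strips the `GNU/` prefix), you can preview what it resolves to before downloading:

```shell
# Reproduce the platform suffix used by the release download URL.
platform="$(uname -o | sed 's/GNU\///g')-$(uname -m)"

# The expected artifact name, e.g. kubehound-Linux-x86_64
echo "kubehound-${platform}"
```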
-To build KubeHound from source instead -Clone and build this repository: -```bash -git clone https://github.com/DataDog/KubeHound.git -cd KubeHound -make kubehound -``` +For more advanced use cases and configuration, see -The built binary is now available at: -```bash -bin/build/kubehound -``` -
+* [advanced configuration](https://kubehound.io/user-guide/advanced-configuration/): all the settings available through the configuration file. +* [common operations](https://kubehound.io/user-guide/common-operations/): the commands available from the KubeHound binary (`dump` / `ingest`). +* [common errors](https://kubehound.io/user-guide/troubleshooting/): troubleshooting guide. -For more advanced use case and configuration, see [ADVANCED.md](./ADVANCED.md) -To view the generated graph see the [Using KubeHound Data](#using-kubehound-data) section. +> Note: + KubeHound can be deployed as a service (KHaaS), see [more information](https://kubehound.io/user-guide/khaas-101/). -## Sample Attack Path +## Using KubeHound Data -![Example Path](./docs/images/example-graph.png) +Querying the KubeHound graph data requires using the [Gremlin](https://tinkerpop.apache.org/gremlin.html) query language via an API call or dedicated graph query UI. A number of fully featured graph query UIs are available (both commercial and open source), but we provide an accompanying Jupyter notebook based on the [AWS Graph Notebook](https://github.com/aws/graph-notebook), to quickly showcase the capabilities of KubeHound. To access the UI: + ++ Visit [http://localhost:8888/notebooks/KubeHound.ipynb](http://localhost:8888/notebooks/KubeHound.ipynb) in your browser ++ Use the default password `admin` to log in (note: this can be changed via the [Dockerfile](./deployments/kubehound/notebook/Dockerfile) or by setting the `NOTEBOOK_PASSWORD` environment variable in the [.env](./deployments/kubehound/.env.tpl) file) ++ Follow the initial setup instructions in the notebook to connect to the KubeHound graph and configure the rendering ++ Start running the queries and exploring the graph!
-## Requirements +### Example queries -To run KubeHound, you need a couple dependencies -+ [Docker](https://docs.docker.com/engine/install/) `>= 19.03` -+ [Docker Compose](https://docs.docker.com/compose/compose-file/compose-versioning/) `V2` +We have documented a few sample queries to execute on the database in [our documentation](https://kubehound.io/queries/gremlin/). A specific DSL has been developed to query the graph for the most basic use cases ([KubeHound DSL](https://kubehound.io/queries/dsl/)). + +## Sample Attack Path + +![Example Path](./docs/images/example-graph.png) ### Sample Data @@ -68,22 +74,11 @@ make sample-graph To view the generated graph see the [Using KubeHound Data](#using-kubehound-data) section. -## Using KubeHound Data - -To query the KubeHound graph data requires using the [Gremlin](https://tinkerpop.apache.org/gremlin.html) query language via an API call or dedicated graph query UI. A number of fully featured graph query UIs are available (both commercial and open source), but we provide an accompanying Jupyter notebook based on the [AWS Graph Notebook](https://github.com/aws/graph-notebook),to quickly showcase the capabilities of KubeHound. To access the UI: - -+ Visit [http://localhost:8888/notebooks/KubeHound.ipynb](http://localhost:8888/notebooks/KubeHound.ipynb) in your browser -+ Use the default password `admin` to login (note: this can be changed via the [Dockerfile](./deployments/kubehound/notebook/Dockerfile) or by setting the `NOTEBOOK_PASSWORD` environment variable in the [.env](./deployments/kubehound/.env.tpl) file) -+ Follow the initial setup instructions in the notebook to connect to the KubeHound graph and configure the rendering -+ Start running the queries and exploring the graph! - -### Example queries - -We have documented a few sample queries to execute on the database in [our documentation](https://kubehound.io/queries/gremlin/).
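To make the idea concrete, here is a hedged sketch of what such queries can look like in the notebook. The plain-Gremlin form is generic; the `kh.` step names are assumptions based on the KubeHound DSL documentation linked above and should be checked against it:

```groovy
// Plain Gremlin: count all Container vertices in the graph
g.V().hasLabel("Container").count()

// KubeHound DSL sketch (step names assumed from the DSL docs):
// list containers, then ask for attack paths to critical assets
kh.containers().criticalPaths()
```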
+## Query data from your scripts -### Query data from your scripts +If you expose the graph endpoint, you can automate queries to gather KPIs and metadata, for instance. -#### Python +### Python You can query the database data in your python script by using the following snippet: diff --git a/docs/dev-guide/datadog.md b/docs/dev-guide/datadog.md new file mode 100644 index 000000000..cd8825073 --- /dev/null +++ b/docs/dev-guide/datadog.md @@ -0,0 +1,21 @@ +# Datadog setup + +The Datadog agent can be set up locally to provide some metrics and logs when developing on KubeHound. + +## Metrics and logs + +To have some in-depth metrics and log correlation, all the components are now linked to Datadog. To configure it, you just need to add your Datadog API key (`DD_API_KEY`) as an environment variable in `deployments/kubehound/.env`. When the API key is configured, a Docker container named `kubehound-dev-datadog` will be created. + +All the gathered information is available at: + +* Metrics: https://app.datadoghq.com/metric/summary?filter=kubehound.janusgraph +* Logs: https://app.datadoghq.com/logs?query=service%3Akubehound%20&cols=host%2Cservice&index=%2A&messageDisplay=inline&stream_sort=desc&viz=stream&from_ts=1688140043795&to_ts=1688140943795&live=true + +To collect the metrics for JanusGraph, a Prometheus JMX exporter is used: + +* https://github.com/prometheus/jmx_exporter + +They are exposed here: + +* Locally: http://127.0.0.1:8099/metrics +* Datadog: https://app.datadoghq.com/metric/summary?filter=kubehound.janusgraph \ No newline at end of file diff --git a/docs/dev-guide/getting-started.md b/docs/dev-guide/getting-started.md index 3bacf1690..02da4aecf 100644 --- a/docs/dev-guide/getting-started.md +++ b/docs/dev-guide/getting-started.md @@ -1,10 +1,16 @@ # Getting started -## Requirements Test +To list all the available developer commands from the makefile, run: + +```bash +make help +``` + +## Build requirements -+ Kind:
https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager -+ Kubectl: https://kubernetes.io/docs/tasks/tools/ + go (v1.22): https://go.dev/doc/install ++ [Docker](https://docs.docker.com/engine/install/) >= 19.03 (`docker version`) ++ [Docker Compose](https://docs.docker.com/compose/compose-file/compose-versioning/) >= v2.0 (`docker compose version`) ## Backend @@ -95,73 +101,3 @@ The CI will draft a new release that **will need manual validation**. In order t !!! Tip To resync all the tags from the main repo you can use `git tag -l | xargs git tag -d;git fetch --tags`. - -## Testing - -To ensure no regression in KubeHound, 2 kinds of tests are in place: - -* classic unit test: can be identify with the `xxx_test.go` files in the source code -* system tests: end to end test where we run full ingestion from different scenario to simulate all use cases against a real cluster. - -### Unit Testing - -The full suite of unit tests can be run locally via: - -```bash -make test -``` - -### System Testing - -The repository includes a suite of system tests that will do the following: -+ create a local kubernetes cluster -+ collect kubernetes API data from the cluster -+ run KubeHound using the file collector to create a working graph database -+ query the graph database to ensure all expected vertices and edges have been created correctly - -The cluster setup and running instances can be found under [test/setup](./test/setup/) - -If you need to manually access the system test environment with kubectl and other commands, you'll need to set (assuming you are at the root dir): - -```bash -cd test/setup/ && export KUBECONFIG=$(pwd)/.kube-config -``` - -#### Environment variable: -- `DD_API_KEY` (optional): set to the datadog API key used to submit metrics and other observability data. - -#### Setup - -Setup the test kind cluster (you only need to do this once!) 
via: - -```bash -make local-cluster-deploy -``` - -Then run the system tests via: - -```bash -make system-test -``` - -To cleanup the environment you can destroy the cluster via: - -```bash -make local-cluster-destroy -``` - -To list all the available commands, run: - -```bash -make help -``` - -!!! Note - if you are running on Linux but you dont want to run `sudo` for `kind` and `docker` command, you can overwrite this behavior by editing the following var in `test/setup/.config`: - - * `DOCKER_CMD="docker"` for docker command - * `KIND_CMD="kind"` for kind command - -#### CI Testing - -System tests will be run in CI via the [system-test](./.github/workflows/system-test.yml) github action \ No newline at end of file diff --git a/docs/dev-guide/testing.md b/docs/dev-guide/testing.md new file mode 100644 index 000000000..e4998ef15 --- /dev/null +++ b/docs/dev-guide/testing.md @@ -0,0 +1,73 @@ +# Testing + +To ensure no regression in KubeHound, 2 kinds of tests are in place: + +* classic unit tests: can be identified by the `xxx_test.go` files in the source code +* system tests: end-to-end tests where we run a full ingestion from different scenarios to simulate all use cases against a real cluster.
+ +## Test requirements + ++ [Golang](https://go.dev/doc/install) `>= 1.22` ++ [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager) ++ [Kubectl](https://kubernetes.io/docs/tasks/tools/) + +## Unit Testing + +The full suite of unit tests can be run locally via: + +```bash +make test +``` + +## System Testing + +The repository includes a suite of system tests that will do the following: ++ create a local kubernetes cluster ++ collect kubernetes API data from the cluster ++ run KubeHound using the file collector to create a working graph database ++ query the graph database to ensure all expected vertices and edges have been created correctly + +The cluster setup and running instances can be found under [test/setup](./test/setup/) + +If you need to manually access the system test environment with kubectl and other commands, you'll need to set (assuming you are at the root dir): + +```bash +cd test/setup/ && export KUBECONFIG=$(pwd)/.kube-config +``` + +### Environment variables +- `DD_API_KEY` (optional): set to the Datadog API key used to submit metrics and other observability data (see the [datadog](https://kubehound.io/dev-guide/datadog/) section) + +### Setup + +Set up the test kind cluster (you only need to do this once!) via: + +```bash +make local-cluster-deploy +``` + +### Running the system tests + +Run the system tests via: + +```bash +make system-test +``` + +### Cleanup + +To clean up the environment, you can destroy the cluster via: + +```bash +make local-cluster-destroy +``` + +!!!
Note + if you are running on Linux but you don't want to run `sudo` for the `kind` and `docker` commands, you can override this behavior by editing the following variables in `test/setup/.config`: + + * `DOCKER_CMD="docker"` for the docker command + * `KIND_CMD="kind"` for the kind command + +### CI Testing + +System tests will be run in CI via the [system-test](./.github/workflows/system-test.yml) GitHub action diff --git a/docs/dev-guide/wiki.md b/docs/dev-guide/wiki.md index 8d816b6d0..bf0e43790 100644 --- a/docs/dev-guide/wiki.md +++ b/docs/dev-guide/wiki.md @@ -12,4 +12,7 @@ make local-wiki ## Push new version -The website will get automatically updated everytime there is changemement in [docs](https://github.com/DataDog/KubeHound/tree/main/docs) directory or the [mkdocs.yml](https://github.com/DataDog/KubeHound/blob/main/mkdocs.yml) file. This is being handled by [docs](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/docs.yml) workflow. \ No newline at end of file +The website gets automatically updated every time there is a change in the [docs](https://github.com/DataDog/KubeHound/tree/main/docs) directory or the [mkdocs.yml](https://github.com/DataDog/KubeHound/blob/main/mkdocs.yml) file. This is handled by the [docs](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/docs.yml) workflow. + +!!! Note + The domain for the wiki is set up in the [CNAME](https://github.com/DataDog/KubeHound/tree/main/docs/CNAME) file. diff --git a/docs/reference/attacks/index.md b/docs/reference/attacks/index.md index 03e8bdc64..5677394c7 100644 --- a/docs/reference/attacks/index.md +++ b/docs/reference/attacks/index.md @@ -10,31 +10,31 @@ All edges in the KubeHound graph represent attacks with a net "improvement" in a !!! Note For instance, an assume role or ([IDENTITY_ASSUME](./IDENTITY_ASSUME.md)) is considered as an attack.
-| ID | Name | MITRE ATT&CK Technique | MITRE ATT&CK Tactic | -| :----: | :--: | :-----------------: | :--------------------: | -| [CE_MODULE_LOAD](./CE_MODULE_LOAD.md) | Container escape: Load kernel module | Escape to host | Privilege escalation | -| [CE_NSENTER](./CE_NSENTER.md) | Container escape: nsenter | Escape to host | Privilege escalation | -| [CE_PRIV_MOUNT](./CE_PRIV_MOUNT.md) | Container escape: Mount host filesystem | Escape to host | Privilege escalation | -| [CE_SYS_PTRACE](./CE_SYS_PTRACE.md) | Container escape: Attach to host process via SYS_PTRACE | Escape to host | Privilege escalation | -| [CE_UMH_CORE_PATTERN](./CE_UMH_CORE_PATTERN.md) | Container escape: through core_pattern usermode_helper | Escape to host | Privilege escalation | -| [CONTAINER_ATTACH](./CONTAINER_ATTACH.md) | Attach to running container | N/A | Lateral Movement | -| [ENDPOINT_EXPLOIT](./ENDPOINT_EXPLOIT.md) | Exploit exposed endpoint | Exploitation of Remote Services | Lateral Movement | -| [EXPLOIT_CONTAINERD_SOCK](./EXPLOIT_CONTAINERD_SOCK.md) | Container escape: Through mounted container runtime socket | N/A | Lateral Movement | -| [EXPLOIT_HOST_READ](./EXPLOIT_HOST_READ.md) | Read file from sensitive host mount | Escape to host | Privilege escalation | -| [EXPLOIT_HOST_TRAVERSE](./EXPLOIT_HOST_TRAVERSE.md) | Steal service account token through kubelet host mount | Unsecured Credentials | Credential Access | -| [EXPLOIT_HOST_WRITE](./EXPLOIT_HOST_WRITE.md) | Container escape: Write to sensitive host mount | Escape to host | Privilege escalation | -| [IDENTITY_ASSUME](./IDENTITY_ASSUME.md) | Act as identity | Valid Accounts | Privilege escalation | -| [IDENTITY_IMPERSONATE](./IDENTITY_IMPERSONATE.md) | Impersonate user/group | Valid Accounts | Privilege escalation | -| [PERMISSION_DISCOVER](./PERMISSION_DISCOVER.md) | Enumerate permissions | Permission Groups Discovery | Discovery | -| [POD_ATTACH](./POD_ATTACH.md) | Attach to running pod | N/A | Lateral Movement | -| 
[POD_CREATE](./POD_CREATE.md) | Create privileged pod | Scheduled Task/Job: Container Orchestration Job | Privilege escalation | -| [POD_EXEC](./POD_EXEC.md) | Exec into running pod | N/A | Lateral Movement | -| [POD_PATCH](./POD_PATCH.md) | Patch running pod | N/A | Lateral Movement | -| [ROLE_BIND](./ROLE_BIND.md) | Create role binding | Valid Accounts | Privilege Escalation | -| [SHARE_PS_NAMESPACE](./SHARE_PS_NAMESPACE.md) | Access container in shared process namespace | N/A | Lateral Movement | -| [TOKEN_BRUTEFORCE](./TOKEN_BRUTEFORCE.md) | Brute-force secret name of service account token | Steal Application Access Token | Credential Access | -| [TOKEN_LIST](./TOKEN_LIST.md) | Access service account token secrets | Steal Application Access Token | Credential Access | -| [TOKEN_STEAL](./TOKEN_STEAL.md) | Steal service account token from volume | Unsecured Credentials | Credential Access | -| [CE_VAR_LOG_SYMLINK](./CE_VAR_LOG_SYMLINK.md) | Read file from sensitive host mount | Escape to host | Privilege escalation | -| [VOLUME_ACCESS](./VOLUME_ACCESS.md) | Access host volume | Container and Resource Discovery | Discovery | -| [VOLUME_DISCOVER](./VOLUME_DISCOVER.md) | Enumerate mounted volumes | Container and Resource Discovery | Discovery | +| ID | Name | MITRE ATT&CK Technique | MITRE ATT&CK Tactic | Coverage | +| :----: | :--: | :-----------------: | :--------------------: | :---: | +| [CE_MODULE_LOAD](./CE_MODULE_LOAD.md) | Container escape: Load kernel module | Escape to host | Privilege escalation | Full | +| [CE_NSENTER](./CE_NSENTER.md) | Container escape: nsenter | Escape to host | Privilege escalation | Full | +| [CE_PRIV_MOUNT](./CE_PRIV_MOUNT.md) | Container escape: Mount host filesystem | Escape to host | Privilege escalation | Full | +| [CE_SYS_PTRACE](./CE_SYS_PTRACE.md) | Container escape: Attach to host process via SYS_PTRACE | Escape to host | Privilege escalation | Full | +| [CE_UMH_CORE_PATTERN](./CE_UMH_CORE_PATTERN.md) | Container escape: 
through core_pattern usermode_helper | Escape to host | Privilege escalation | None | +| [CE_VAR_LOG_SYMLINK](./CE_VAR_LOG_SYMLINK.md) | Read file from sensitive host mount | Escape to host | Privilege escalation | Full | +| [CONTAINER_ATTACH](./CONTAINER_ATTACH.md) | Attach to running container | N/A | Lateral Movement | Full | +| [ENDPOINT_EXPLOIT](./ENDPOINT_EXPLOIT.md) | Exploit exposed endpoint | Exploitation of Remote Services | Lateral Movement | Full | +| [EXPLOIT_CONTAINERD_SOCK](./EXPLOIT_CONTAINERD_SOCK.md) | Container escape: Through mounted container runtime socket | N/A | Lateral Movement | None | +| [EXPLOIT_HOST_READ](./EXPLOIT_HOST_READ.md) | Read file from sensitive host mount | Escape to host | Privilege escalation | Full | +| [EXPLOIT_HOST_TRAVERSE](./EXPLOIT_HOST_TRAVERSE.md) | Steal service account token through kubelet host mount | Unsecured Credentials | Credential Access | Full | +| [EXPLOIT_HOST_WRITE](./EXPLOIT_HOST_WRITE.md) | Container escape: Write to sensitive host mount | Escape to host | Privilege escalation | Full | +| [IDENTITY_ASSUME](./IDENTITY_ASSUME.md) | Act as identity | Valid Accounts | Privilege escalation | Full | +| [IDENTITY_IMPERSONATE](./IDENTITY_IMPERSONATE.md) | Impersonate user/group | Valid Accounts | Privilege escalation | Full | +| [PERMISSION_DISCOVER](./PERMISSION_DISCOVER.md) | Enumerate permissions | Permission Groups Discovery | Discovery | Full | +| [POD_ATTACH](./POD_ATTACH.md) | Attach to running pod | N/A | Lateral Movement | Full | +| [POD_CREATE](./POD_CREATE.md) | Create privileged pod | Scheduled Task/Job: Container Orchestration Job | Privilege escalation | Full | +| [POD_EXEC](./POD_EXEC.md) | Exec into running pod | N/A | Lateral Movement | Full | +| [POD_PATCH](./POD_PATCH.md) | Patch running pod | N/A | Lateral Movement | Full | +| [ROLE_BIND](./ROLE_BIND.md) | Create role binding | Valid Accounts | Privilege Escalation | Partial | +| [SHARE_PS_NAMESPACE](./SHARE_PS_NAMESPACE.md) | Access
container in shared process namespace | N/A | Lateral Movement | Full | +| [TOKEN_BRUTEFORCE](./TOKEN_BRUTEFORCE.md) | Brute-force secret name of service account token | Steal Application Access Token | Credential Access | Full | +| [TOKEN_LIST](./TOKEN_LIST.md) | Access service account token secrets | Steal Application Access Token | Credential Access | Full | +| [TOKEN_STEAL](./TOKEN_STEAL.md) | Steal service account token from volume | Unsecured Credentials | Credential Access | Full | +| [VOLUME_ACCESS](./VOLUME_ACCESS.md) | Access host volume | Container and Resource Discovery | Discovery | Full | +| [VOLUME_DISCOVER](./VOLUME_DISCOVER.md) | Enumerate mounted volumes | Container and Resource Discovery | Discovery | Full | diff --git a/docs/user-guide/advanced-configuration.md b/docs/user-guide/advanced-configuration.md new file mode 100644 index 000000000..ae7e4552d --- /dev/null +++ b/docs/user-guide/advanced-configuration.md @@ -0,0 +1,84 @@ +# Advanced configuration + +## Running KubeHound from source + +Clone the KubeHound repository and build KubeHound using the makefile: + +```bash +git clone https://github.com/DataDog/KubeHound.git +cd KubeHound +make build +``` + +The built binary is now available at: + +```bash +bin/build/kubehound +``` + +!!! Warning + We do not advise building KubeHound from source, as the Docker images will use the `latest` tag instead of a specific release version. This is mainly used by the developers/maintainers of KubeHound. + +## Configuration + +When using KubeHound, you can set different options through a config file passed with the `-c` flag. You can use [kubehound-reference.yaml](https://github.com/DataDog/KubeHound/blob/main/configs/etc/kubehound-reference.yaml) as an example, which lists every possible option.
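To illustrate, a minimal configuration file could look like the following sketch (the two keys mirror the sample `configs/etc/kubehound.yaml` shipped in the repository; see the reference file above for all other options):

```yaml
# kubehound.yaml - minimal sketch
collector:
  type: live-k8s-api-collector
telemetry:
  enabled: true
```

It would then be passed to the locally built binary as `bin/build/kubehound -c kubehound.yaml`.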
+ +### Collector configuration + +KubeHound supports 2 types of collectors: + +* `file-collector`: The file collector, which can process an offline dump (made by KubeHound - see [common operations](https://kubehound.io/) for the dump command). +* `live-k8s-api-collector` (by default): The live k8s collector, which retrieves all Kubernetes objects from the k8s API. + +#### File Collector + +To use the file collector, you just have to specify: + +* `directory`: directory holding the K8s json data files +* `cluster`: the name of the targeted cluster + +!!! Tip + If you want to ingest data from a previous dump, we advise you to use the `ingest local` command - [more detail here](https://kubehound.io/user-guide/common-operations/#ingest). + +#### Live Collector + +When retrieving Kubernetes resources from the k8s API, KubeHound sets up limits to avoid resource exhaustion on the k8s API: + +* `rate_limit_per_second` (by default `50`): Rate limit of requests/second to the Kubernetes API. +* `page_size` (by default `500`): Number of entries retrieved by each call on the API (same for all Kubernetes entry types) +* `page_buffer_size` (by default `10`): Number of pages to buffer + +!!! Note + Most (>90%) of the current runtime of KubeHound is spent in the transfer of data from the remote K8s API server, and the bulk of that is spent waiting on rate limit. As such, increasing `rate_limit_per_second` will improve performance roughly linearly. + +!!! Tip + You can disable the interactive mode with `non_interactive` set to true. This will automatically dump all k8s resources from the k8s API without any user interaction. + +### Builder + +The `builder` section allows you to customize how you want to chunk the data during the ingestion process. It is split into 2 sections, `vertices` and `edges`. For both graph entities, KubeHound uses a `batch_size` of `500` elements by default. + +!!!
Warning + Increasing batch sizes can yield some performance improvements by reducing network latency in transferring data between KubeGraph and the application. However, increasing it past a certain level can overload the backend, leading to instability, and eventually exceed the size limits of the websocket buffer used to transfer the data. **Changing the following default settings is not recommended.** + +#### Vertices builder + +For the vertices builder, there are 2 options: + +* `batch_size` (by default `500`): controls the batch size of the vertices to insert +* `batch_size_small` (by default `100`): handles only the PermissionSet resources. This resource is quite intensive because it is the only one requiring aggregation between multiple k8s resources (from `roles` and `rolebindings`). + +!!! Note + Since there are expensive inserts on vertices, the `batch_size_small` is currently not used. + +#### Edges builder + +By default, KubeHound will optimize the attack paths for large clusters by using `large_cluster_optimizations` (by default `true`). This will limit the number of attack paths being built in the targeted cluster. Using this optimization will remove some attack paths. For instance, for token-based attacks (e.g. `TOKEN_BRUTEFORCE`), the optimization will build edges (between PermissionSet and Identity) only if the targeted identity is in the `system:masters` group. This will reduce redundant attack paths: + +* If `large_cluster_optimizations` is activated, KubeHound will use the default `batch_size` (by default `500`). +* If `large_cluster_optimizations` is deactivated, KubeHound will use a specific batch size configured through `batch_size_cluster_impact` for all attacks that make the graph grow exponentially.
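Assembled from the options above, a hedged sketch of the corresponding `builder` block could look as follows (key names as described above; the `batch_size_cluster_impact` value is illustrative, check the reference configuration for the actual default):

```yaml
builder:
  vertices:
    batch_size: 500        # generic vertex insert batch
    batch_size_small: 100  # PermissionSet vertices (expensive aggregation)
  edges:
    batch_size: 500                # used when optimizations are enabled
    large_cluster_optimizations: true
    batch_size_cluster_impact: 1   # illustrative value, used when optimizations are off
```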
+ +Lastly, the graph builder uses the [pond](https://github.com/alitto/pond) library under the hood to handle the asynchronous tasks of inserting edges: + +* `worker_pool_size` (by default `5`): parallel ingestion processes running at the same time (number of workers). +* `worker_pool_capacity` (by default `100`): number of cached elements in the worker pool. diff --git a/docs/user-guide/getting-started.md b/docs/user-guide/getting-started.md index 29b20726f..76902cb3f 100644 --- a/docs/user-guide/getting-started.md +++ b/docs/user-guide/getting-started.md @@ -16,11 +16,11 @@ KubeHound ships with a sensible default configuration as well as a pre-built bin Download the latest KubeHound binary for you platform: ```bash -wget https://github.com/DataDog/KubeHound/releases/download/latest/kubehound-$(uname -o | sed 's/GNU\///g')-$(uname -m) -O kubehound +wget https://github.com/DataDog/KubeHound/releases/latest/download/kubehound-$(uname -o | sed 's/GNU\///g')-$(uname -m) -O kubehound chmod +x kubehound ``` -This will start [backend services](../architecture.md) via docker compose (wiping any existing data), and compile the kubehound binary from source. +Then just run `./kubehound`; it will start the [backend services](../architecture.md) via the Docker Compose v2 API.
 Next, make sure your current kubectl context points at the target cluster:
@@ -43,35 +43,82 @@ kubehound
 Sample output:
 
 ```text
-INFO[0000] Starting KubeHound (run_id: aff49337-5e36-46ea-ac1f-ed224bf215ba) component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0000] Initializing launch options component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0000] Loading application configuration from default embedded component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0000] Initializing application telemetry component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0000] Loading cache provider component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0000] Loaded MemCacheProvider cache provider component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0000] Loading store database provider component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0000] Loaded MongoProvider store provider component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0000] Loading graph database provider component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0000] Loaded JanusGraphProvider graph provider component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0001] Starting Kubernetes raw data ingest component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0001] Loading Kubernetes data collector client component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0001] Loaded k8s-api-collector collector client component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
+./kubehound
+INFO[01:42:19] Loading application configuration from default embedded
+WARN[01:42:19] No local config file was found (kubehound.yaml)
+INFO[01:42:19] Using /home/datadog/kubehound for default config
+INFO[01:42:19] Initializing application telemetry
+WARN[01:42:19] Telemetry disabled via configuration
+INFO[01:42:19] Loading backend from default embedded
+WARN[01:42:19] Loading the kubehound images with tag latest - dev branch detected
+INFO[01:42:19] Spawning the kubehound stack
+[+] Running 3/3
+ ✔ Container kubehound-release-kubegraph-1 Healthy 50.3s
+ ✔ Container kubehound-release-ui-jupyter-1 Healthy 50.3s
+ ✔ Container kubehound-release-mongodb-1 Healthy 58.4s
+INFO[01:43:20] Starting KubeHound (run_id: 01j4fwbg88j6eptasgegdh2sgs)
+INFO[01:43:20] Initializing providers (graph, cache, store)
+INFO[01:43:20] Loading cache provider
+INFO[01:43:20] Loaded memcache cache provider
+INFO[01:43:20] Loading store database provider
+INFO[01:43:20] Loaded mongodb store provider
+INFO[01:43:21] Loading graph database provider
+INFO[01:43:21] Loaded janusgraph graph provider
+INFO[01:43:21] Running the ingestion pipeline
+INFO[01:43:21] Loading Kubernetes data collector client
+WARN[01:43:21] About to dump k8s cluster: "kind-kubehound.test.local" - Do you want to continue ? [Yes/No]
+yes
+INFO[01:43:30] Loaded k8s-api-collector collector client
+INFO[01:43:30] Starting Kubernetes raw data ingest
+INFO[01:43:30] Loading data ingestor
+INFO[01:43:30] Running dependency health checks
+INFO[01:43:30] Running data ingest and normalization
+INFO[01:43:30] Starting ingest sequences
+INFO[01:43:30] Waiting for ingest sequences to complete
+INFO[01:43:30] Running ingestor sequence core-pipeline
+INFO[01:43:30] Starting ingest sequence core-pipeline
+INFO[01:43:30] Running ingest group k8s-role-group
+INFO[01:43:30] Starting k8s-role-group ingests
+INFO[01:43:30] Waiting for k8s-role-group ingests to complete
+INFO[01:43:30] Running ingest k8s-role-ingest
+INFO[01:43:30] Running ingest k8s-cluster-role-ingest
+INFO[01:43:30] Streaming data from the K8s API
+INFO[01:43:32] Completed k8s-role-group ingest
+INFO[01:43:32] Finished running ingest group k8s-role-group
 ...
-INFO[0028] Building edge ExploitHostWrite component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0028] Edge writer 22 ContainerAttach::CONTAINER_ATTACH written component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0028] Building edge IdentityAssumeNode component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0029] Edge writer 8 ExploitHostWrite::EXPLOIT_HOST_WRITE written component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
+INFO[01:43:35] Completed k8s-pod-group ingest
+INFO[01:43:35] Finished running ingest group k8s-pod-group
+INFO[01:43:35] Completed ingest sequence core-pipeline
+INFO[01:43:35] Completed pipeline ingest
+INFO[01:43:35] Completed data ingest and normalization in 5.065238542s
+INFO[01:43:35] Loading graph edge definitions
+INFO[01:43:35] Loading graph builder
+INFO[01:43:35] Running dependency health checks
+INFO[01:43:35] Constructing graph
+WARN[01:43:35] Using large cluster optimizations in graph construction
+INFO[01:43:35] Starting mutating edge construction
+INFO[01:43:35] Building edge PodCreate
+INFO[01:43:36] Edge writer 10 PodCreate::POD_CREATE written
+INFO[01:43:36] Building edge PodExec
 ...
-INFO[0039] Completed edge construction component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0039] Completed graph construction component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
-INFO[0039] Attack graph generation complete in 39.108174109s component=kubehound run_id=aff49337-5e36-46ea-ac1f-ed224bf215ba service=kubehound
+INFO[01:43:36] Starting dependent edge construction
+INFO[01:43:36] Building edge ContainerEscapeVarLogSymlink
+INFO[01:43:36] Edge writer 5 ContainerEscapeVarLogSymlink::CE_VAR_LOG_SYMLINK written
+INFO[01:43:36] Completed edge construction
+INFO[01:43:36] Completed graph construction in 773.2935ms
+INFO[01:43:36] Stats for the run time duration: 5.838839708s / wait: 5.926496s / throttling: 101.501262%
+INFO[01:43:36] KubeHound run (id=01j4fwbg88j6eptasgegdh2sgs) complete in 15.910406167s
+WARN[01:43:36] KubeHound as finished ingesting and building the graph successfully.
+WARN[01:43:36] Please visit the UI to view the graph by clicking the link below:
+WARN[01:43:36] http://localhost:8888
+WARN[01:43:36] Password being 'admin'
 ```
-
 ## Access the KubeHound data
 
 At this point, the KubeHound data has been ingested in KubeHound's [graph database](../architecture.md).
-You can use any client that supports accessing JanusGraph - a comprehensive list is available on the [JanusGraph home page](https://janusgraph.org/). We also provide a showcase [Jupyter Notebook](../../deployments/kubehound/notebook/KubeHound.ipynb) to get you started. This is accessible on [http://locahost:8888](http://locahost:8888) after starting KubeHound backend. The default password is `admin` but you can change this by setting the `NOTEBOOK_PASSWORD` environment variable in your `.env file`.
+You can use any client that supports accessing JanusGraph - a comprehensive list is available on the [JanusGraph home page](https://janusgraph.org/).
+We also provide a showcase [Jupyter Notebook](../../deployments/kubehound/notebook/KubeHound.ipynb) to get you started. This is accessible on [http://localhost:8888](http://localhost:8888) after starting the KubeHound backend. The default password is `admin`, but you can change it by setting the `NOTEBOOK_PASSWORD` environment variable in your `.env` file.
 
 ## Visualize and query the KubeHound data
 
diff --git a/mkdocs.yml b/mkdocs.yml
index e70e37ae0..18beceb44 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -83,12 +83,15 @@ nav:
   - References: references.md
   - User Guide:
     - Getting Started: user-guide/getting-started.md
+    - Advanced config: user-guide/advanced-configuration.md
     - Local Common Operations: user-guide/common-operations.md
     - KubeHound as a Service: user-guide/khaas-101.md
     - Troubleshooting: user-guide/troubleshooting.md
   - Developper Guide:
     - Getting Started: dev-guide/getting-started.md
+    - Testing: dev-guide/testing.md
     - Wiki: dev-guide/wiki.md
+    - Datadog setup: dev-guide/datadog.md
   - Attack Techniques Reference:
     - ... |reference/*/*.md
 #- Attacks: reference/attacks/index.md
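As a companion to the JanusGraph access notes above: once the backend is running, any Gremlin-capable client can be pointed at the graph endpoint. The queries below are a hypothetical starting point, assuming the default traversal source `g` and a vertex label such as `Container` from KubeHound's graph model; adapt them to your client and the labels you actually see.

```groovy
// Count vertices per label to get an overview of the ingested graph.
g.V().groupCount().by(label)

// Sample a few Container vertices (label name assumed from the model).
g.V().hasLabel("Container").limit(5).valueMap()
```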