diff --git a/.gitignore b/.gitignore index 03f1188d0..4a10430b7 100644 --- a/.gitignore +++ b/.gitignore @@ -32,7 +32,6 @@ dist/ cmd/kubehound/kubehound cmd/kubehound/__debug_bin -cmd/kubehound-ingestor/kubehound-ingestor deployments/kubehound/data deployments/kubehound/data/* diff --git a/configs/etc/kubehound-reference.yaml b/configs/etc/kubehound-reference.yaml index f2132d7d7..92a5a1ec3 100644 --- a/configs/etc/kubehound-reference.yaml +++ b/configs/etc/kubehound-reference.yaml @@ -32,9 +32,6 @@ collector: # file: # # Directory holding the K8s json data files # directory: /path/to/directory - # - # # Target cluster name - # cluster: # # General storage configuration @@ -42,7 +39,7 @@ collector: storage: # Whether or not to wipe all data on startup wipe: true - + # Number of connection retries before declaring an error retry: 5 @@ -74,8 +71,8 @@ telemetry: # Default tags to add to all telemetry (free form key-value map) # tags: - # team: ase - + # team: ase + # Statsd configuration for metics support statsd: # URL to send statsd data to the Datadog agent @@ -90,7 +87,7 @@ telemetry: # Graph builder configuration # # NOTE: increasing batch sizes can have some performance improvements by reducing network latency in transferring data -# between KubeGraph and the application. However, increasing it past a certain level can overload the backend leading +# between KubeGraph and the application. However, increasing it past a certain level can overload the backend leading # to instability and eventually exceed the size limits of the websocket buffer used to transfer the data. Changing this # is not recommended. # @@ -99,7 +96,7 @@ builder: # vertex: # # Batch size for vertex inserts # batch_size: 500 - # + # # # Small batch size for vertex inserts # batch_size_small: 100 @@ -124,18 +121,25 @@ builder: # # Cluster impact batch size for edge inserts # batch_size_cluster_impact: 1 - # Ingestor configuration (for KHaaS) # ingestor: # blob: # # (i.e.: s3://) # bucket: "" # # (i.e.: us-east-1) -# region: "" +# region: "" # temp_dir: "/tmp/kubehound" # archive_name: "archive.tar.gz" # max_archive_size: 2147483648 # 2GB # # GRPC endpoint for the ingestor -# api: +# api: # endpoint: "127.0.0.1:9000" # insecure: true + +# +# Dynamic info (optionnal - auto injected by KubeHound) +# +# dynamic: +# +# # Target cluster name +# cluster: diff --git a/deployments/k8s/khaas/values.yaml b/deployments/k8s/khaas/values.yaml index ff86bd480..c0897b8bf 100644 --- a/deployments/k8s/khaas/values.yaml +++ b/deployments/k8s/khaas/values.yaml @@ -1,7 +1,7 @@ team: services: ingestor: - image: ghcr.io/datadog/kubehound-ingestor + image: ghcr.io/datadog/kubehound-binary version: latest bucket: s3:// region: "us-east-1" diff --git a/docs/architecture.md b/docs/architecture.md index daefad477..c64f941cc 100644 --- a/docs/architecture.md +++ b/docs/architecture.md @@ -6,7 +6,7 @@ KubeHound works in 3 steps: 2. Compute attack paths 3. Write the results to a local graph database (JanusGraph) -After the initial ingestion is done, you use a compatible client or the provided [Jupyter Notebook](../../deployments/kubehound/notebook/KubeHound.ipynb) to visualize and query attack paths in your cluster. +After the initial ingestion is done, you use a compatible client or the provided [Jupyter Notebook](https://github.com/DataDog/KubeHound/blob/main/deployments/kubehound/ui/KubeHound.ipynb) to visualize and query attack paths in your cluster. 
[![KubeHound architecture (click to enlarge)](./images/kubehound-high-level-v2.png)](./images/kubehound-high-level-v2.png) diff --git a/docs/dev-guide/getting-started.md b/docs/dev-guide/getting-started.md index 02da4aecf..5fa8a6dd3 100644 --- a/docs/dev-guide/getting-started.md +++ b/docs/dev-guide/getting-started.md @@ -8,9 +8,9 @@ make help ## Requirements build -+ go (v1.22): https://go.dev/doc/install -+ [Docker](https://docs.docker.com/engine/install/) >= 19.03 (`docker version`) -+ [Docker Compose](https://docs.docker.com/compose/compose-file/compose-versioning/) >= v2.0 (`docker compose version`) +- go (v1.22): https://go.dev/doc/install +- [Docker](https://docs.docker.com/engine/install/) >= 19.03 (`docker version`) +- [Docker Compose](https://docs.docker.com/compose/compose-file/compose-versioning/) >= v2.0 (`docker compose version`) ## Backend @@ -20,15 +20,15 @@ The backend images are built with the Dockerfiles `docker-compose.dev.[graph|ing The minimum stack (`mongo` & `graph`) can be spawned with -* `kubehound dev` which is an equivalent of -* `docker compose -f docker-compose.yaml -f docker-compose.dev.graph.yaml -f docker-compose.dev.mongo.yaml`. By default it will always rebuild everything (no cache is being used). +- `kubehound dev` which is an equivalent of +- `docker compose -f docker-compose.yaml -f docker-compose.dev.graph.yaml -f docker-compose.dev.mongo.yaml`. By default it will always rebuild everything (no cache is being used). ### Building dev options You can add components to the mininum stack (`ui` and `grpc endpoint`) by adding the following flag. -* `--ui` to add the Jupyter UI to the build. -* `--grpc` to add the ingestor endpoint (exposing the grpc server for KHaaS). +- `--ui` to add the Jupyter UI to the build. +- `--grpc` to add the ingestor endpoint (exposing the grpc server for KHaaS). For instance, building locally the minimum stack with the `ui` component: @@ -36,7 +36,7 @@ For instance, building locally the minimum stack with the `ui` component: kubehound dev --ui ``` -### Tearing down the dev stack +### Tearing down the dev stack To tear down the KubeHound dev stack, just use `--down` flag: @@ -44,7 +44,8 @@ To tear down the KubeHound dev stack, just use `--down` flag: kubehound dev --down ``` -!!! Note +!!! note + It will stop all the component from the dev stack (including the `ui` and `grpc endpoint` if started) ## Build the binary @@ -59,28 +60,28 @@ make build KubeHound binary will be output to `./bin/build/kubehound`. - ### Releases We use `buildx` to release new versions of KubeHound, for cross platform compatibility and because we are embedding the docker compose library (to enable KubeHound to spin up the KubeHound stack directly from the binary). This saves the user from having to take care of this part. The build relies on 2 files [docker-bake.hcl](https://github.com/DataDog/KubeHound/blob/main/docker-bake.hcl) and [Dockerfile](https://github.com/DataDog/KubeHound/blob/main/Dockerfile). The following bake targets are available: -* `validate` or `lint`: run the release CI linter -* `binary` (default option): build kubehound just for the local architecture -* `binary-cross` or `release`: run the cross platform compilation +- `validate` or `lint`: run the release CI linter +- `binary` (default option): build kubehound just for the local architecture +- `binary-cross` or `release`: run the cross platform compilation -!!! Note - Those targets are made only for the CI and are not intented to be run run locally (except to test the CI locally). 
+!!! note + Those targets are made only for the CI and are not intented to be run run locally (except to test the CI locally). ##### Cross platform compilation -To test the cross platform compilation locally, use the buildx bake target `release`. This target is being run by the CI ([buildx](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/buildx.yml#L77-L84 workflow). +To test the cross platform compilation locally, use the buildx bake target `release`. This target is being run by the CI ([buildx](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/buildx.yml#L77-L84 workflow). ```bash docker buildx bake release ``` -!!! Warning +!!! warning + The cross-binary compilation with `buildx` is not working in mac: `ERROR: Multi-platform build is not supported for the docker driver.` ## Push a new release @@ -94,10 +95,15 @@ git push origin vX.X.X New tags will trigger the 2 following jobs: -* [docker](): pushing new images for `kubehound-graph`, `kubehound-ingestor` and `kubehound-ui` on ghcr.io. The images can be listed [here](https://github.com/orgs/DataDog/packages?repo_name=KubeHound). -* [buildx](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/buildx.yml): compiling the binary for all platform. The platform supported can be listed using this `docker buildx bake binary-cross --print | jq -cr '.target."binary-cross".platforms'`. +- [docker](): pushing new images for `kubehound-graph`, `kubehound-binary` and `kubehound-ui` on ghcr.io. The images can be listed [here](https://github.com/orgs/DataDog/packages?repo_name=KubeHound). +- [buildx](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/buildx.yml): compiling the binary for all platform. The platform supported can be listed using this `docker buildx bake binary-cross --print | jq -cr '.target."binary-cross".platforms'`. + +!!! warning "deprecated" + + The `kubehound-ingestor` image has been deprecated since **v1.5.0** and renamed to `kubehound-binary`. The CI will draft a new release that **will need manual validation**. In order to get published, an admin has to to validate the new draft from the UI. -!!! Tip +!!! tip + To resync all the tags from the main repo you can use `git tag -l | xargs git tag -d;git fetch --tags`. diff --git a/docs/dev-guide/testing.md b/docs/dev-guide/testing.md index e4998ef15..0a790baea 100644 --- a/docs/dev-guide/testing.md +++ b/docs/dev-guide/testing.md @@ -2,14 +2,14 @@ To ensure no regression in KubeHound, 2 kinds of tests are in place: -* classic unit test: can be identify with the `xxx_test.go` files in the source code -* system tests: end to end test where we run full ingestion from different scenario to simulate all use cases against a real cluster. +- classic unit test: can be identify with the `xxx_test.go` files in the source code +- system tests: end to end test where we run full ingestion from different scenario to simulate all use cases against a real cluster. 
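Beyond the Makefile targets described below, individual unit tests can also be run directly with the standard Go tooling for quicker iteration — a sketch; the package path and test name here are illustrative, not taken from the repository layout:

```bash
# Run one package's unit tests verbosely (path and test name are illustrative)
go test -v -run TestGraphBuilder ./pkg/kubehound/...

# Run the full unit-test suite with the race detector enabled
go test -race ./...
```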
## Requirements test -+ [Golang](https://go.dev/doc/install) `>= 1.22` -+ [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager) -+ [Kubectl](https://kubernetes.io/docs/tasks/tools/) +- [Golang](https://go.dev/doc/install) `>= 1.22` +- [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager) +- [Kubectl](https://kubernetes.io/docs/tasks/tools/) ## Unit Testing @@ -22,10 +22,11 @@ make test ## System Testing The repository includes a suite of system tests that will do the following: -+ create a local kubernetes cluster -+ collect kubernetes API data from the cluster -+ run KubeHound using the file collector to create a working graph database -+ query the graph database to ensure all expected vertices and edges have been created correctly + +- create a local kubernetes cluster +- collect kubernetes API data from the cluster +- run KubeHound using the file collector to create a working graph database +- query the graph database to ensure all expected vertices and edges have been created correctly The cluster setup and running instances can be found under [test/setup](./test/setup/) @@ -36,6 +37,7 @@ cd test/setup/ && export KUBECONFIG=$(pwd)/.kube-config ``` ### Environment variable: + - `DD_API_KEY` (optional): set to the datadog API key used to submit metrics and other observability data (see [datadog](https://kubehound.io/dev-guide/datadog/) section) ### Setup @@ -62,12 +64,13 @@ To cleanup the environment you can destroy the cluster via: make local-cluster-destroy ``` -!!! Note +!!! note + if you are running on Linux but you dont want to run `sudo` for `kind` and `docker` command, you can overwrite this behavior by editing the following var in `test/setup/.config`: - - * `DOCKER_CMD="docker"` for docker command - * `KIND_CMD="kind"` for kind command + + * `DOCKER_CMD="docker"` for docker command + * `KIND_CMD="kind"` for kind command ### CI Testing -System tests will be run in CI via the [system-test](./.github/workflows/system-test.yml) github action +System tests will be run in CI via the [system-test](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/system-test.yml) github action diff --git a/docs/dev-guide/wiki.md b/docs/dev-guide/wiki.md index bf0e43790..b89bfca67 100644 --- a/docs/dev-guide/wiki.md +++ b/docs/dev-guide/wiki.md @@ -6,13 +6,14 @@ The website [kubehound.io](https://kubehound.io) is being statically generated f make local-wiki ``` -!!! Tip - All the configuration of the website (url, menu, css, ...) is being made from [mkdocs.yml](https://github.com/DataDog/KubeHound/blob/main/mkdocs.yml) file: +!!! tip + All the configuration of the website (url, menu, css, ...) is being made from [mkdocs.yml](https://github.com/DataDog/KubeHound/blob/main/mkdocs.yml) file: ## Push new version The website will get automatically updated everytime there is changemement in [docs](https://github.com/DataDog/KubeHound/tree/main/docs) directory or the [mkdocs.yml](https://github.com/DataDog/KubeHound/blob/main/mkdocs.yml) file. This is being handled by [docs](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/docs.yml) workflow. -!!! Note +!!! note + The domain for the wiki is being setup in the [CNAME](https://github.com/DataDog/KubeHound/tree/main/docs/CNAME) file. 
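For a quick local preview outside of the `make local-wiki` target, the site can also be served with the MkDocs tooling directly — a sketch, assuming the Material theme and any plugins referenced in `mkdocs.yml` are installed:

```bash
# Install MkDocs with the Material theme (assumed from the site's styling) and serve locally
pip install mkdocs-material
mkdocs serve --dev-addr 127.0.0.1:8000
```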
diff --git a/docs/reference/attacks/index.md b/docs/reference/attacks/index.md index 5677394c7..628f249d6 100644 --- a/docs/reference/attacks/index.md +++ b/docs/reference/attacks/index.md @@ -7,34 +7,35 @@ hide: All edges in the KubeHound graph represent attacks with a net "improvement" in an attacker's position or a lateral movement opportunity. -!!! Note +!!! note + For instance, an assume role or ([IDENTITY_ASSUME](./IDENTITY_ASSUME.md)) is considered as an attack. -| ID | Name | MITRE ATT&CK Technique | MITRE ATT&CK Tactic | Coverage | -| :----: | :--: | :-----------------: | :--------------------: | :---: | -| [CE_MODULE_LOAD](./CE_MODULE_LOAD.md) | Container escape: Load kernel module | Escape to host | Privilege escalation | Full | -| [CE_NSENTER](./CE_NSENTER.md) | Container escape: nsenter | Escape to host | Privilege escalation | Full | -| [CE_PRIV_MOUNT](./CE_PRIV_MOUNT.md) | Container escape: Mount host filesystem | Escape to host | Privilege escalation | Full | -| [CE_SYS_PTRACE](./CE_SYS_PTRACE.md) | Container escape: Attach to host process via SYS_PTRACE | Escape to host | Privilege escalation | Full | -| [CE_UMH_CORE_PATTERN](./CE_UMH_CORE_PATTERN.md) | Container escape: through core_pattern usermode_helper | Escape to host | Privilege escalation | None | -| [CE_VAR_LOG_SYMLINK](./CE_VAR_LOG_SYMLINK.md) | Read file from sensitive host mount | Escape to host | Privilege escalation |Full | -| [CONTAINER_ATTACH](./CONTAINER_ATTACH.md) | Attach to running container | N/A | Lateral Movement | Full | -| [ENDPOINT_EXPLOIT](./ENDPOINT_EXPLOIT.md) | Exploit exposed endpoint | Exploitation of Remote Services | Lateral Movement | Full | -| [EXPLOIT_CONTAINERD_SOCK](./EXPLOIT_CONTAINERD_SOCK.md) | Container escape: Through mounted container runtime socket | N/A | Lateral Movement | None | -| [EXPLOIT_HOST_READ](./EXPLOIT_HOST_READ.md) | Read file from sensitive host mount | Escape to host | Privilege escalation | Full | -| [EXPLOIT_HOST_TRAVERSE](./EXPLOIT_HOST_TRAVERSE.md) | Steal service account token through kubelet host mount | Unsecured Credentials | Credential Access | Full | -| [EXPLOIT_HOST_WRITE](./EXPLOIT_HOST_WRITE.md) | Container escape: Write to sensitive host mount | Escape to host | Privilege escalation | Full | -| [IDENTITY_ASSUME](./IDENTITY_ASSUME.md) | Act as identity | Valid Accounts | Privilege escalation | Full | -| [IDENTITY_IMPERSONATE](./IDENTITY_IMPERSONATE.md) | Impersonate user/group | Valid Accounts | Privilege escalation | Full | -| [PERMISSION_DISCOVER](./PERMISSION_DISCOVER.md) | Enumerate permissions | Permission Groups Discovery | Discovery | Full | -| [POD_ATTACH](./POD_ATTACH.md) | Attach to running pod | N/A | Lateral Movement | Full | -| [POD_CREATE](./POD_CREATE.md) | Create privileged pod | Scheduled Task/Job: Container Orchestration Job | Privilege escalation | Full | -| [POD_EXEC](./POD_EXEC.md) | Exec into running pod | N/A | Lateral Movement | Full | -| [POD_PATCH](./POD_PATCH.md) | Patch running pod | N/A | Lateral Movement | Full | -| [ROLE_BIND](./ROLE_BIND.md) | Create role binding | Valid Accounts | Privilege Escalation | Partial | -| [SHARE_PS_NAMESPACE](./SHARE_PS_NAMESPACE.md) | Access container in shared process namespace | N/A | Lateral Movement | Full | -| [TOKEN_BRUTEFORCE](./TOKEN_BRUTEFORCE.md) | Brute-force secret name of service account token | Steal Application Access Token | Credential Access | Full | -| [TOKEN_LIST](./TOKEN_LIST.md) | Access service account token secrets | Steal Application Access Token | Credential Access 
| Full | -| [TOKEN_STEAL](./TOKEN_STEAL.md) | Steal service account token from volume | Unsecured Credentials | Credential Access | Full | -| [VOLUME_ACCESS](./VOLUME_ACCESS.md) | Access host volume | Container and Resource Discovery | Discovery | Full | -| [VOLUME_DISCOVER](./VOLUME_DISCOVER.md) | Enumerate mounted volumes | Container and Resource Discovery | Discovery | Full | +| ID | Name | MITRE ATT&CK Technique | MITRE ATT&CK Tactic | Coverage | +| :-----------------------------------------------------: | :--------------------------------------------------------: | :---------------------------------------------: | :------------------: | :------: | +| [CE_MODULE_LOAD](./CE_MODULE_LOAD.md) | Container escape: Load kernel module | Escape to host | Privilege escalation | Full | +| [CE_NSENTER](./CE_NSENTER.md) | Container escape: nsenter | Escape to host | Privilege escalation | Full | +| [CE_PRIV_MOUNT](./CE_PRIV_MOUNT.md) | Container escape: Mount host filesystem | Escape to host | Privilege escalation | Full | +| [CE_SYS_PTRACE](./CE_SYS_PTRACE.md) | Container escape: Attach to host process via SYS_PTRACE | Escape to host | Privilege escalation | Full | +| [CE_UMH_CORE_PATTERN](./CE_UMH_CORE_PATTERN.md) | Container escape: through core_pattern usermode_helper | Escape to host | Privilege escalation | None | +| [CE_VAR_LOG_SYMLINK](./CE_VAR_LOG_SYMLINK.md) | Read file from sensitive host mount | Escape to host | Privilege escalation | Full | +| [CONTAINER_ATTACH](./CONTAINER_ATTACH.md) | Attach to running container | N/A | Lateral Movement | Full | +| [ENDPOINT_EXPLOIT](./ENDPOINT_EXPLOIT.md) | Exploit exposed endpoint | Exploitation of Remote Services | Lateral Movement | Full | +| [EXPLOIT_CONTAINERD_SOCK](./EXPLOIT_CONTAINERD_SOCK.md) | Container escape: Through mounted container runtime socket | N/A | Lateral Movement | None | +| [EXPLOIT_HOST_READ](./EXPLOIT_HOST_READ.md) | Read file from sensitive host mount | Escape to host | Privilege escalation | Full | +| [EXPLOIT_HOST_TRAVERSE](./EXPLOIT_HOST_TRAVERSE.md) | Steal service account token through kubelet host mount | Unsecured Credentials | Credential Access | Full | +| [EXPLOIT_HOST_WRITE](./EXPLOIT_HOST_WRITE.md) | Container escape: Write to sensitive host mount | Escape to host | Privilege escalation | Full | +| [IDENTITY_ASSUME](./IDENTITY_ASSUME.md) | Act as identity | Valid Accounts | Privilege escalation | Full | +| [IDENTITY_IMPERSONATE](./IDENTITY_IMPERSONATE.md) | Impersonate user/group | Valid Accounts | Privilege escalation | Full | +| [PERMISSION_DISCOVER](./PERMISSION_DISCOVER.md) | Enumerate permissions | Permission Groups Discovery | Discovery | Full | +| [POD_ATTACH](./POD_ATTACH.md) | Attach to running pod | N/A | Lateral Movement | Full | +| [POD_CREATE](./POD_CREATE.md) | Create privileged pod | Scheduled Task/Job: Container Orchestration Job | Privilege escalation | Full | +| [POD_EXEC](./POD_EXEC.md) | Exec into running pod | N/A | Lateral Movement | Full | +| [POD_PATCH](./POD_PATCH.md) | Patch running pod | N/A | Lateral Movement | Full | +| [ROLE_BIND](./ROLE_BIND.md) | Create role binding | Valid Accounts | Privilege Escalation | Partial | +| [SHARE_PS_NAMESPACE](./SHARE_PS_NAMESPACE.md) | Access container in shared process namespace | N/A | Lateral Movement | Full | +| [TOKEN_BRUTEFORCE](./TOKEN_BRUTEFORCE.md) | Brute-force secret name of service account token | Steal Application Access Token | Credential Access | Full | +| [TOKEN_LIST](./TOKEN_LIST.md) | Access service account token secrets | Steal 
Application Access Token | Credential Access | Full | +| [TOKEN_STEAL](./TOKEN_STEAL.md) | Steal service account token from volume | Unsecured Credentials | Credential Access | Full | +| [VOLUME_ACCESS](./VOLUME_ACCESS.md) | Access host volume | Container and Resource Discovery | Discovery | Full | +| [VOLUME_DISCOVER](./VOLUME_DISCOVER.md) | Enumerate mounted volumes | Container and Resource Discovery | Discovery | Full | diff --git a/docs/reference/entities/index.md b/docs/reference/entities/index.md index 5fab082f3..aeb14b837 100644 --- a/docs/reference/entities/index.md +++ b/docs/reference/entities/index.md @@ -7,17 +7,17 @@ hide: Tne entities represents all the vertices in KubeHound graph model. Those are an abstract representation of a Kubernetes component that form the vertices of the graph. -!!! Note - For instance: [PERMISSION_SET](./permissionset.md) is an abstract of Role and RoleBinding. +!!! note + For instance: [PERMISSION_SET](./permissionset.md) is an abstract of Role and RoleBinding. -| ID | Description | -| :----: | :-----------------: | -| [COMMON](./common.md) | Common properties can be set on any vertices within the graph. | -| [CONTAINER](./container.md) | A container image running on a Kubernetes pod. Containers in a Pod are co-located and co-scheduled to run on the same node. | -| [ENDPOINT](./endpoint.md) | A network endpoint exposed by a container accessible via a Kubernetes service, external node port or cluster IP/port tuple. | -| [IDENTITY](./identity.md) | Identity represents a Kubernetes user or service account.| -| [NODE](./node.md) | A Kubernetes node. Kubernetes runs workloads by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. | -| [PERMISSION_SET](./permissionset.md) | A permission set represents a Kubernetes RBAC `Role` or `ClusterRole`, which contain rules that represent a set of permissions that has been bound to an identity via a `RoleBinding` or `ClusterRoleBinding`. Permissions are purely additive (there are no "deny" rules). | -| [POD](./pod.md) | A Kubernetes pod - the smallest deployable units of computing that you can create and manage in Kubernetes. | -| [Volume](./volume.md) | Volume represents a volume mounted in a container and exposed by a node. | +| ID | Description | +| :----------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | +| [COMMON](./common.md) | Common properties can be set on any vertices within the graph. | +| [CONTAINER](./container.md) | A container image running on a Kubernetes pod. Containers in a Pod are co-located and co-scheduled to run on the same node. | +| [ENDPOINT](./endpoint.md) | A network endpoint exposed by a container accessible via a Kubernetes service, external node port or cluster IP/port tuple. | +| [IDENTITY](./identity.md) | Identity represents a Kubernetes user or service account. | +| [NODE](./node.md) | A Kubernetes node. Kubernetes runs workloads by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. 
| +| [PERMISSION_SET](./permissionset.md) | A permission set represents a Kubernetes RBAC `Role` or `ClusterRole`, which contain rules that represent a set of permissions that has been bound to an identity via a `RoleBinding` or `ClusterRoleBinding`. Permissions are purely additive (there are no "deny" rules). | +| [POD](./pod.md) | A Kubernetes pod - the smallest deployable units of computing that you can create and manage in Kubernetes. | +| [Volume](./volume.md) | Volume represents a volume mounted in a container and exposed by a node. | diff --git a/docs/references.md b/docs/references.md index 9464fff32..d6d11ab48 100644 --- a/docs/references.md +++ b/docs/references.md @@ -1,64 +1,68 @@ # References ## 2024 - Pass The Salt (PTS) Workshop -### [KubeHound: Identifying attack paths in Kubernetes clusters at scale with no hustle](https://cfp.pass-the-salt.org/pts2024/talk/WA99YZ/) + +### [KubeHound: Identifying attack paths in Kubernetes clusters at scale with no hustle](https://cfp.pass-the-salt.org/pts2024/talk/WA99YZ/) [Slides :fontawesome-solid-file-pdf:{ .pdf } ](files/PassTheSalt24/Kubehound-Workshop-PassTheSalt_2024.pdf){ .md-button } [Jupyter notebook :fontawesome-brands-python:{ .python } ](https://github.com/DataDog/KubeHound/tree/main/deployments/kubehound/notebook/KubehoundDSL_101.ipynb){ .md-button } -The goal of the workshop was to showcase **how to use KubeHound to pinpoint security issues in a Kubernetes cluster and get a concrete security posture**. +The goal of the workshop was to showcase **how to use KubeHound to pinpoint security issues in a Kubernetes cluster and get a concrete security posture**. But first, as attackers (or defenders), there's nothing better to understand an attack than to exploit it oneself. So the workshop started with some of the most common attacks (container escape and lateral movement) and **let attendees exploit them in our vulnerable cluster**. After doing some introduction around Kubernetes basic and Graph theory, the attendees played with KubeHound to ingest data synchronously and asynchronously (dump and rehydrate the data). Then we **covered all the KubeHound DSL and basic gremlin usage**. The goal was to go over the possibilities of the KubeHound DSL like: -* List all the port and IP addresses being exposed outside of the k8s cluster -* Enumerate how attacks are present in the cluster -* List all attacks path from endpoints to node -* List all endpoint properties by port with serviceEndpoint and IP addresses that lead to a critical path -* ... +- List all the port and IP addresses being exposed outside of the k8s cluster +- Enumerate how attacks are present in the cluster +- List all attacks path from endpoints to node +- List all endpoint properties by port with serviceEndpoint and IP addresses that lead to a critical path +- ... The workshop finished with some "real cases" scenario either from a red teamer or blue teamer point of view. The goal was to show how the tool can be used in different scenarios (initial recon, attack path analysis, assumed breach on compromised resources such as containers or credentials, ...) All was done using the following notebook which is a step-by-step KubeHound DSL: -* A [specific notebook](https://github.com/DataDog/KubeHound/tree/main/deployments/kubehound/notebook/KubeHoundDSL_101.ipynb) to describe all KubeHound DSL queries and how you can leverage them. Also this notebook describes the basic Gremlin needed to handle the KubeHound DSL for specific cases. 
+- A [specific notebook](https://github.com/DataDog/KubeHound/tree/main/deployments/kubehound/notebook/KubeHoundDSL_101.ipynb) to describe all KubeHound DSL queries and how you can leverage them. Also this notebook describes the basic Gremlin needed to handle the KubeHound DSL for specific cases. ## 2024 - Troopers presentation -### [Attacking and Defending Kubernetes Cluster with KubeHound, an Attack Graph Model](https://troopers.de/troopers24/talks/t8tc7m/) -[Recording :fontawesome-brands-youtube:{ .youtube } ](#){ .md-button .md-button--youtube } [Slides :fontawesome-solid-file-pdf:{ .pdf } ](files/Troopers24/Kubehound-Troopers_2024-slides.pdf){ .md-button } [Dashboard PoC :fontawesome-brands-python:{ .python } ](https://github.com/DataDog/KubeHound/tree/main/scripts/dashboard-demo){ .md-button } +### [Attacking and Defending Kubernetes Cluster with KubeHound, an Attack Graph Model](https://troopers.de/troopers24/talks/t8tc7m/) + +[Recording :fontawesome-brands-youtube:{ .youtube } ](#){ .md-button .md-button--youtube } [Slides :fontawesome-solid-file-pdf:{ .pdf } ](files/Troopers24/Kubehound-Troopers_2024-slides.pdf){ .md-button } [Dashboard PoC :fontawesome-brands-python:{ .python } ](https://github.com/DataDog/KubeHound/tree/main/scripts/dashboard-demo){ .md-button } This presentation explains the genesis behind the tool. A specific focus was made on the new version **KubeHound as a Service** or **KHaaS** which allow using KubeHound with a distributed model across multiple Kuberentes Clusters. We also introduce a new command that allows consultants to use KubeHound asynchronously (dumping and rehydration later, in office for instance). 2 demos were also shown: -* A [ PoC :fontawesome-brands-python:{ .python } of a dashboard](#) was created to show how interesting KPI can be extracted easily from KubeHound. -* A [specific notebook](https://github.com/DataDog/KubeHound/tree/main/deployments/kubehound/notebook/KubeHound_demo.ipynb) to show how to shift from a can of worms to the most critical vulnerability in a Kubernetes Cluster with a few KubeHound requests. +- A [ PoC :fontawesome-brands-python:{ .python } of a dashboard](#) was created to show how interesting KPI can be extracted easily from KubeHound. +- A [specific notebook](https://github.com/DataDog/KubeHound/tree/main/deployments/kubehound/notebook/KubeHound_demo.ipynb) to show how to shift from a can of worms to the most critical vulnerability in a Kubernetes Cluster with a few KubeHound requests. Also we showed how the tool has been built and lessons we have learned from the process. 
## 2024 - InsomniHack 2024 presentation -### [Standing on the Shoulders of Giant(Dog)s: A Kubernetes Attack Graph Model](https://www.insomnihack.ch/talks-2024/#BZ3UA9) -[Recording :fontawesome-brands-youtube:{ .youtube } ](https://www.youtube.com/watch?v=sy_ijtW6wmQ){ .md-button .md-button--youtube } [Slides :fontawesome-solid-file-pdf:{ .pdf } ](files/insomnihack24/Kubehound - Insomni'Hack 2024 - slides.pdf){ .md-button } [Dashboard PoC :fontawesome-brands-python:{ .python } ](https://github.com/DataDog/KubeHound/tree/main/scripts/dashboard-demo){ .md-button } +### [Standing on the Shoulders of Giant(Dog)s: A Kubernetes Attack Graph Model](https://www.insomnihack.ch/talks-2024/#BZ3UA9) + +[Recording :fontawesome-brands-youtube:{ .youtube } ](https://www.youtube.com/watch?v=sy_ijtW6wmQ){ .md-button .md-button--youtube } [Slides :fontawesome-solid-file-pdf:{ .pdf } ](files/insomnihack24/Kubehound - InsomniHack 2024 - slides.pdf){ .md-button } [Dashboard PoC :fontawesome-brands-python:{ .python } ](https://github.com/DataDog/KubeHound/tree/main/scripts/dashboard-demo){ .md-button } This presentation explains why the tool was created and what problem it tries to solve. 2 demos were shown: -* A [ PoC :fontawesome-brands-python:{ .python } of a dashboard](#) was created to show how interesting KPI can be extracted easily from KubeHound. -* A [specific notebook](https://github.com/DataDog/KubeHound/tree/main/deployments/kubehound/notebook/InsomniHackDemo.ipynb) to show how to shift from a can of worms to the most critical vulnerability in a Kubernetes Cluster with a few KubeHound requests. +- A [ PoC :fontawesome-brands-python:{ .python } of a dashboard](#) was created to show how interesting KPI can be extracted easily from KubeHound. +- A [specific notebook](https://github.com/DataDog/KubeHound/tree/main/deployments/kubehound/notebook/InsomniHackDemo.ipynb) to show how to shift from a can of worms to the most critical vulnerability in a Kubernetes Cluster with a few KubeHound requests. It also showed how the tool has been built and lessons we have learned from the process. 
-## 2023 - Release v1.0 annoucement +## 2023 - Release v1.0 annoucement + ### [KubeHound: Identifying attack paths in Kubernetes clusters](https://securitylabs.datadoghq.com/articles/kubehound-identify-kubernetes-attack-paths/) [Blog Article :fontawesome-brands-microblog: ](https://securitylabs.datadoghq.com/articles/kubehound-identify-kubernetes-attack-paths/){ .md-button } Blog article published on [securitylabs](https://securitylabs.datadoghq.com) as a tutorial 101 on how to use the tools in different use cases: -* [Red team: Looking for low-hanging fruit](https://securitylabs.datadoghq.com/articles/kubehound-identify-kubernetes-attack-paths/#red-team-looking-for-low-hanging-fruit) -* [Blue team: Assessing the impact of a compromised container](https://securitylabs.datadoghq.com/articles/kubehound-identify-kubernetes-attack-paths/#blue-team-assessing-the-impact-of-a-compromised-container) -* [Blue team: Remediation](https://securitylabs.datadoghq.com/articles/kubehound-identify-kubernetes-attack-paths/#blue-team-remediation) -* [Blue team: Metrics and KPIs](https://securitylabs.datadoghq.com/articles/kubehound-identify-kubernetes-attack-paths/#blue-team-metrics-and-kpis) +- [Red team: Looking for low-hanging fruit](https://securitylabs.datadoghq.com/articles/kubehound-identify-kubernetes-attack-paths/#red-team-looking-for-low-hanging-fruit) +- [Blue team: Assessing the impact of a compromised container](https://securitylabs.datadoghq.com/articles/kubehound-identify-kubernetes-attack-paths/#blue-team-assessing-the-impact-of-a-compromised-container) +- [Blue team: Remediation](https://securitylabs.datadoghq.com/articles/kubehound-identify-kubernetes-attack-paths/#blue-team-remediation) +- [Blue team: Metrics and KPIs](https://securitylabs.datadoghq.com/articles/kubehound-identify-kubernetes-attack-paths/#blue-team-metrics-and-kpis) -It also explain briefly how the tools works (what is under the hood). \ No newline at end of file +It also explain briefly how the tools works (what is under the hood). diff --git a/docs/user-guide/advanced-configuration.md b/docs/user-guide/advanced-configuration.md index ae7e4552d..2dabdb2c7 100644 --- a/docs/user-guide/advanced-configuration.md +++ b/docs/user-guide/advanced-configuration.md @@ -16,8 +16,9 @@ The built binary is now available at: bin/build/kubehound ``` -!!! Warning - We do not advise to build KubeHound from the sources as the docker images will use the latest flag instead of a specific release version. This mainly used by the developers/maintainers of KubeHound. +!!! warning + + We do not advise to build KubeHound from the sources as the docker images will use the latest flag instead of a specific release version. This mainly used by the developers/maintainers of KubeHound. ## Configuration @@ -27,58 +28,66 @@ When using KubeHound you can setup different options through a config file with KubeHound is supporting 2 type of collector: -* `file-collector`: The file collector which can process an offline dump (made by KubeHound - see [common operation](https://kubehound.io/) for the dump command). -* `live-k8s-api-collector` (by default): The live k8s collector which will retrieve all kubernetes objects from the k8s API. +- `file-collector`: The file collector which can process an offline dump (made by KubeHound - see [common operation](https://kubehound.io/) for the dump command). +- `live-k8s-api-collector` (by default): The live k8s collector which will retrieve all kubernetes objects from the k8s API. 
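As a sketch, the collector choice ends up in the `collector` section of `kubehound.yaml`. The `directory` key below is taken from the commented example in `kubehound-reference.yaml`; the `type` key is an assumption and should be checked against that reference file:

```yaml
collector:
  # Assumed selector key: file-collector or live-k8s-api-collector (the default)
  type: file-collector
  file:
    # Directory holding the K8s json data files (from kubehound-reference.yaml)
    directory: /path/to/directory
```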
#### File Collector To use the file collector, you just have to specify: -* `directory`: directory holding the K8s json data files -* `cluster`: the name of the targeted cluster +- `directory`: directory holding the K8s json data files + +!!! tip -!!! Tip If you want to ingest data from a previous dump, we advise you to use `ingest local` command - [more detail here](https://kubehound.io/user-guide/common-operations/#ingest). +!!! warning "deprecated" + + The `cluster` field is deprecated since v1.5.0. Now a metadata.json is being embeded with the cluster name. If you are using an old dump you can overwrite it using the `dynamic` section from the config file or just manually the `metadata.json` file. + #### Live Collector When retrieving the kubernetes resources form the k8s API, KubeHound setup limitation to avoid resources exhaustion on the k8s API: -* `rate_limit_per_second` (by default `50`): Rate limit of requests/second to the Kubernetes API. -* `page_size` (by default `500`): Number of entries retrieved by each call on the API (same for all Kubernetes entry types) -* `page_buffer_size` (by default `10`): Number of pages to buffer +- `rate_limit_per_second` (by default `50`): Rate limit of requests/second to the Kubernetes API. +- `page_size` (by default `500`): Number of entries retrieved by each call on the API (same for all Kubernetes entry types) +- `page_buffer_size` (by default `10`): Number of pages to buffer + +!!! note -!!! Note Most (>90%) of the current runtime of KubeHound is spent in the transfer of data from the remote K8s API server, and the bulk of that is spent waiting on rate limit. As such increasing `rate_limit_per_second` will improve performance roughly linearly. -!!! Tip +!!! tip + You can disable the interactive mod with `non_interactive` set to true. This will automatically dump all k8s resources from the k8s API without any user interaction. -### Builder +### Builder The `builder` section allows you to customize how you want to chunk the data during the ingestion process. It is being splitted in 2 sections `vertices` and `edges`. For both graph entities, KubeHound uses a `batch_size` of `500` element by default. -!!! Warning +!!! warning + Increasing batch sizes can have some performance improvements by reducing network latency in transferring data between KubeGraph and the application. However, increasing it past a certain level can overload the backend leading to instability and eventually exceed the size limits of the websocket buffer used to transfer the data. **Changing the default following setting is not recommended.** #### Vertices builder For the vertices builder, there is 2 options: -* `batch_size_small` (by default `500`): to control the batch size of vertices you want to insert through -* `batch_size_small` (by default `100`): handle only the PermissionSet resouces. This resource is quite intensive because it is the only requirering aggregation between multiples k8s resources (from `roles` and `rolebindings`). +- `batch_size_small` (by default `500`): to control the batch size of vertices you want to insert through +- `batch_size_small` (by default `100`): handle only the PermissionSet resouces. This resource is quite intensive because it is the only requirering aggregation between multiples k8s resources (from `roles` and `rolebindings`). + +!!! note -!!! Note Since there is expensive insert on vertices the `batch_size_small` is currently not used. 
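The corresponding overrides in `kubehound.yaml` mirror the commented-out defaults from `kubehound-reference.yaml`:

```yaml
builder:
  vertex:
    # Batch size for vertex inserts (default 500)
    batch_size: 500
    # Small batch size for vertex inserts, used for PermissionSet aggregation (default 100)
    batch_size_small: 100
```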
#### Edges builder By default, KubeHound will optimize the attack paths for large cluster by using `large_cluster_optimizations` (by default `true`). This will limit the number of attack paths being build in the targetted cluster. Using this optimisation will remove some attack paths. For instance, for the token based attacks (i.e. `TOKEN_BRUTEFORCE`), the optimisation will build only edges (between permissionSet and Identity) only if the targetted identity is `system:masters` group. This will reduce redundant attack paths: -* If the `large_cluster_optimizations` is activated, KubeHound will use the default `batch_size` (by default `500). -* If the `large_cluster_optimizations` is deactivated, KubeHound will use a specific batch size configured through `batch_size_cluster_impact` for all attacks that make the graph grow exponentially. +- If the `large_cluster_optimizations` is activated, KubeHound will use the default `batch_size` (by default `500). +- If the `large_cluster_optimizations` is deactivated, KubeHound will use a specific batch size configured through `batch_size_cluster_impact` for all attacks that make the graph grow exponentially. -Lastly, the graph builder is using [pond](https://github.com/alitto/pond) library under the hood to handle the asynchronous tasks of inserting edges: +Lastly, the graph builder is using [pond](https://github.com/alitto/pond) library under the hood to handle the asynchronous tasks of inserting edges: -* `worker_pool_size` (by default `5`): parallels ingestion process running at the same time (number of workers). -* `worker_pool_capacity` (by default `100`): number of cached elements in the worker pool. +- `worker_pool_size` (by default `5`): parallels ingestion process running at the same time (number of workers). +- `worker_pool_capacity` (by default `100`): number of cached elements in the worker pool. diff --git a/docs/user-guide/common-operations.md b/docs/user-guide/common-operations.md index e64889464..9dc40c0e6 100644 --- a/docs/user-guide/common-operations.md +++ b/docs/user-guide/common-operations.md @@ -2,15 +2,16 @@ When running `./kubehound`, it will execute the 3 following action: -* run the `backend` (graphdb, storedb and UI) -* `dump` the kubernetes resources needed to build the graph -* `ingest` the dumped data and generate the attack path for the targeted Kubernetes cluster. +- run the `backend` (graphdb, storedb and UI) +- `dump` the kubernetes resources needed to build the graph +- `ingest` the dumped data and generate the attack path for the targeted Kubernetes cluster. All those 3 steps can be run separately. [![](../images/kubehound-local-commands.png)](../images/kubehound-local-commands.png) -!!! Note +!!! note + if you want to skip the interactive mode, you can provide `-y` or `--non-interactive` to skip the cluster confirmation. ## Backend @@ -20,6 +21,7 @@ In order to run, KubeHound needs some docker containers to be running. Every com ### Starting the backend The backend stack can be started by using: + ```bash kubehound backend up ``` @@ -29,11 +31,13 @@ It will use the latest [kubehound images releases](https://github.com/orgs/DataD ### Restarting/stopping the backend The backend stack can be restarted by using: + ```bash kubehound backend reset ``` or just stopped: + ```bash kubehound backend down ``` @@ -48,7 +52,8 @@ The backend data can be wiped by using: kubehound backend wipe ``` -!!! Warning +!!! warning + This command will **wipe ALL docker DATA (docker volume and containers) and will not be recoverable**. 
## Dump @@ -63,7 +68,8 @@ kubehound dump local [directory to dump the data] If for some reasons you need to have the raw data, you can add `--no-compress` flag to have a raw extract. -!!! Note +!!! note + This step does not require any backend as it only automate grabbing k8s resources from the k8s api. ## Ingest @@ -73,8 +79,13 @@ If for some reasons you need to have the raw data, you can add `--no-compress` f To ingest manually an extraction made by KubeHound, just specify where the dump is being located and the associated cluster name. ```bash -kubehound ingest local [directory or tar.gz path] --cluster +kubehound ingest local [directory or tar.gz path] ``` -!!! Warning - This step requires the backend to be started, it will start it for you. +!!! warning + + This step requires the backend to be started, it will not start it for you. + +!!! warning "deprecated" + + The `--cluster` is deprecated since v1.5.0. Now a metadata.json is being embeded with the cluster name. If you are using old dump you can either still use the `--cluster` flag or auto detect it from the path. diff --git a/docs/user-guide/getting-started.md b/docs/user-guide/getting-started.md index 76902cb3f..caae21c2e 100644 --- a/docs/user-guide/getting-started.md +++ b/docs/user-guide/getting-started.md @@ -11,7 +11,7 @@ These two are used to start the backend infrastructure required to run KubeHound ## Running KubeHound -KubeHound ships with a sensible default configuration as well as a pre-built binary, designed to get new users up and running quickly. +KubeHound ships with a sensible default configuration as well as a pre-built binary, designed to get new users up and running quickly. Download the latest KubeHound binary for you platform: @@ -116,13 +116,13 @@ WARN[01:43:36] Password being 'admin' ## Access the KubeHound data -At this point, the KubeHound data has been ingested in KubeHound's [graph database](../architecture.md). -You can use any client that supports accessing JanusGraph - a comprehensive list is available on the [JanusGraph home page](https://janusgraph.org/). -We also provide a showcase [Jupyter Notebook](../../deployments/kubehound/notebook/KubeHound.ipynb) to get you started. This is accessible on [http://locahost:8888](http://locahost:8888) after starting KubeHound backend. The default password is `admin` but you can change this by setting the `NOTEBOOK_PASSWORD` environment variable in your `.env file`. +At this point, the KubeHound data has been ingested in KubeHound's [graph database](../architecture.md). +You can use any client that supports accessing JanusGraph - a comprehensive list is available on the [JanusGraph home page](https://janusgraph.org/). +We also provide a showcase [Jupyter Notebook](https://github.com/DataDog/KubeHound/blob/main/deployments/kubehound/ui/KubeHound.ipynb) to get you started. This is accessible on [http://locahost:8888](http://locahost:8888) after starting KubeHound backend. The default password is `admin` but you can change this by setting the `NOTEBOOK_PASSWORD` environment variable in your `.env file`. ## Visualize and query the KubeHound data -Once the data is loaded in the graph database, it's time to visualize and query it! +Once the data is loaded in the graph database, it's time to visualize and query it! You can explore it interactively in your graph client. Then, refer to KubeHound's [query library](../queries/index.md) to start asking questions to your data. 
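As noted above, the notebook password can be changed before the backend is started; a minimal sketch using the `.env` file and the backend command from the common-operations guide:

```bash
# Override the default Jupyter notebook password (picked up from the .env file)
echo 'NOTEBOOK_PASSWORD=my-strong-password' >> .env

# Start the backend so the new password takes effect
kubehound backend up
```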
@@ -135,4 +135,3 @@ make sample-graph ``` This will spin up a temporary local kind cluster, run KubeHound on it, and destroy the cluster. - diff --git a/docs/user-guide/khaas-101.md b/docs/user-guide/khaas-101.md index 6f154ca35..570788c72 100644 --- a/docs/user-guide/khaas-101.md +++ b/docs/user-guide/khaas-101.md @@ -2,12 +2,13 @@ KHaaS enables you to use KubeHound in a distributive way. It is being splitted in 2 main categories: -* The ingestor stack which includes the `graphdb`, `storedb`, `UI` and `grpc endpoint`. -* The collector (the kubehound binary) which will dump and send the k8s resources to the KHaaS `grpc endpoint`. +- The ingestor stack which includes the `graphdb`, `storedb`, `UI` and `grpc endpoint`. +- The collector (the kubehound binary) which will dump and send the k8s resources to the KHaaS `grpc endpoint`. [![](../images/khaas-architecture.png)](../images/khaas-architecture.png) -!!! Note +!!! note + You need to deploy the data storage you want to use ([AWS s3 in our example](https://github.com/DataDog/KubeHound/tree/main/deployments/terraform)). ## Deploying KHaaS - Ingestor stack @@ -23,12 +24,12 @@ docker compose -f docker-compose.yaml -f docker-compose.release.yaml -f docker-c By default the endpoints are only exposed locally: -* `127.0.0.1:9000` for ingestor endpoint. -* `127.0.0.1:8888` for the UI. +- `127.0.0.1:9000` for ingestor endpoint. +- `127.0.0.1:8888` for the UI. -!!! Warning - You should change the default password by editing `NOTEBOOK_PASSWORD=` in the `docker-compose.yaml` +!!! warning + You should change the default password by editing `NOTEBOOK_PASSWORD=` in the `docker-compose.yaml` ### k8s deployment @@ -47,35 +48,42 @@ NAME NAMESPACE REVISION UPDATED khaas khaas 1 2024-07-30 19:04:37.0575 +0200 CEST deployed kubehound-0.0.1 ``` -!!! Warning +!!! warning + This is an example to deploy KubeHound as a Service in k8s cluster, but you will need to adapt it to your own environment. ## KubeHound collector In order to use `kubehound` with KHaaS, you need to specify the api endpoint you want to use: -* `--khaas-server` from the inline flags (by default `127.0.0.1:9000`) +- `--khaas-server` from the inline flags (by default `127.0.0.1:9000`) Since this is not likely to change in your environment, we advise you to use the local config file. By default KubeHound will look for `./kubehound.yaml` or `$HOME/.config/kubehound.yaml`. As example here we set the default endpoint with disabled SSL. ```yaml ingestor: - api: - endpoint: "127.0.0.1:9000" - insecure: true + api: + endpoint: "127.0.0.1:9000" + insecure: true ``` -!!! Note +!!! note + You can use [kubehound-reference.yaml](https://github.com/DataDog/KubeHound/blob/main/configs/etc/kubehound-reference.yaml) as an example which list every options. +!!! warning "deprecated" + + The `kubehound-ingestor` has been deprecated since **v1.5.0** and renamed to `kubehound-binary`. + ### Dump and ingest In order to use the collector with KHaaS you need to specify the cloud location you want to dump the k8s resources: -* `--bucket` from the inline flags (i.e. `s3://`). There is no default value for security reason. -* `--region` from the inline flags (i.e. `us-east-1`) to set the region to retrieve the configuration (only for s3). +- `--bucket` from the inline flags (i.e. `s3://`). There is no default value for security reason. +- `--region` from the inline flags (i.e. `us-east-1`) to set the region to retrieve the configuration (only for s3). + +!!! warning -!!! 
Warning The `kubehound` binary needs to have push access to your cloud storage provider. If you don't want to specify the bucket every time, you can set it up in your local config file. @@ -83,20 +91,21 @@ If you don't want to specify the bucket every time, you can set it up in your lo ```yaml ingestor: - blob: - # (i.e.: s3://) - bucket: "" - # (i.e.: us-east-1) - region: "" + blob: + # (i.e.: s3://) + bucket: "" + # (i.e.: us-east-1) + region: "" ``` -!!! Note +!!! note + You can use [kubehound-reference.yaml](https://github.com/DataDog/KubeHound/blob/main/configs/etc/kubehound-reference.yaml) as an example which list every options. Once everything is configured you just run the following, it will: -* **dump the k8s resources** to the cloud storage provider. -* send a grpc call to **run the ingestion on the KHaaS** grpc endpoint. +- **dump the k8s resources** to the cloud storage provider. +- send a grpc call to **run the ingestion on the KHaaS** grpc endpoint. ```bash kubehound dump remote @@ -108,13 +117,13 @@ or with the flags (for AWS s3): kubehound dump remote --khaas-server 127.0.0.1:9000 --insecure --bucket s3:// --region us-east-1 ``` -!!! Note - The ingestion will dump the current cluster being setup, if you want to skip the interactive mode, just specify `-y` or `--non-interactive` +!!! note + The ingestion will dump the current cluster being setup, if you want to skip the interactive mode, just specify `-y` or `--non-interactive` ### Manual ingestion -If you want to rehydrate (reingesting all the latest clusters dumps), you can use the `ingest` command to run it. +If you want to rehydrate (reingesting all the latest clusters dumps), you can use the `ingest` command to run it. ```bash kubehound ingest remote @@ -130,4 +139,4 @@ You can also specify a specific dump by using the `--cluster` and `run_id` flags ```bash kubehound ingest remote --cluster my-cluster-1 --run_id 01htdgjj34mcmrrksw4bjy2e94 -``` \ No newline at end of file +```