Resoto Worker Configuration
You may need to make your AWS credentials file, Google Cloud service account JSON files, Kubernetes kubeconfig files, etc. available to Resoto Worker so that it can collect your resources.
Writing Configuration Files at Startup
The Resoto Worker `write_files_to_home_dir` configuration option allows you to write files to the Resoto Worker home directory. At startup, Resoto Worker generates a file at each defined path with the specified content.
```yaml
- path: ~/.aws/config
  content: |
    [default]
    region = us-east-1
```
Resoto Worker can only write files to its home directory.
Resoto Worker will overwrite any existing files with the defined filenames.
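For example, to make both an AWS config file and a Google Cloud service account file available, entries can be listed under the same option (a sketch, assuming the same `path`/`content` shape as above; the contents shown are placeholders, not working credentials):

```yaml
write_files_to_home_dir:
  - path: ~/.aws/config
    content: |
      [default]
      region = us-east-1
  - path: ~/.gcp/service-account.json
    content: |
      { "type": "service_account", "project_id": "my-project" }
```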
Mounting Configuration Files to Container-Based Installations
Add the desired volume definition to the `resotoworker` container, then recreate the container with the updated service definition:
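A volume definition might look like the following (a sketch; it assumes your AWS credentials live in `$HOME/.aws` on the Docker host and that the service is named `resotoworker`):

```yaml
services:
  resotoworker:
    volumes:
      - ~/.aws:/home/resoto/.aws
```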
```bash
$ docker-compose up -d
```

Note: In Docker Compose V2, the command is `docker compose` (no hyphen) instead of `docker-compose`.
Create a secret with the path to the configuration file:
```bash
$ kubectl -n resoto create secret generic resoto-home --from-file=credentials=$HOME/.aws/credentials
```
Update the `resotoworker` volume mounts and volumes in `resoto-values.yaml`:

```yaml
resotoworker:
  volumeMounts:
    - mountPath: /home/resoto/.aws
      name: aws-credentials
  volumes:
    - name: aws-credentials
      secret:
        secretName: resoto-home
```
Deploy the changes with Helm:
```bash
$ helm upgrade resoto resoto/resoto --set image.tag=edge -f resoto-values.yaml
```
Resoto resource collection speed depends heavily on the number of CPU cores available to the worker. When collecting hundreds of accounts, Resoto Worker can easily saturate 64 cores or more.
The amount of RAM required depends on the number of resources in each account. As a rule of thumb, estimate 512 MB of RAM and 0.5 CPU cores per account concurrently collected, with a minimum of 4 cores and 16 GB for a production setup.
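The rule of thumb above can be sketched as a quick sizing calculation (illustrative only; `estimate_resources` is a hypothetical helper, not part of Resoto):

```python
# Rough sizing based on the rule of thumb: 0.5 CPU cores and 512 MB RAM
# per concurrently collected account, with a production floor of
# 4 cores and 16 GB of RAM.
def estimate_resources(concurrent_accounts: int) -> tuple[float, float]:
    cores = max(4.0, concurrent_accounts * 0.5)
    ram_gb = max(16.0, concurrent_accounts * 512 / 1024)
    return cores, ram_gb

# e.g. collecting 64 accounts concurrently:
print(estimate_resources(64))  # (32.0, 32.0)
```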
The following settings specify how many worker threads Resoto Worker starts:

- `resotoworker.cleanup_pool_size`: how many cleanup threads to run in parallel
- `resotoworker.pool_size`: collector thread/process pool size
- `aws.account_pool_size`: account thread/process pool size
- `aws.region_pool_size`: region thread pool size
- `gcp.project_pool_size`: GCP project thread/process pool size
The `resotoworker.pool_size` setting determines how many collectors (AWS, Google Cloud, DigitalOcean, Kubernetes, etc.) run concurrently.

The `aws.account_pool_size` and `gcp.project_pool_size` settings determine how many accounts or projects, respectively, are collected concurrently.

The `aws.region_pool_size` setting determines how many regions per account are collected concurrently.
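Putting these together, a worker tuned for a mid-sized environment might use settings like the following (the values are illustrative examples, not recommended defaults):

```yaml
resotoworker:
  pool_size: 5
aws:
  account_pool_size: 8
  region_pool_size: 10
gcp:
  project_pool_size: 8
```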
At peak, Resoto creates concurrent network connections for each region in every account. With a single cloud with 32 accounts and 20 regions per account, for example, there will be a maximum of 32 × 20 = 640 connections.
This is not a problem in a data center, where routers support hundreds of thousands (or even millions) of new connections per second. However, if you are testing Resoto at home behind a consumer-grade SOHO router, you should be conservative when configuring thread pool sizes.
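The peak-connection arithmetic above can be expressed as a quick back-of-the-envelope check (an illustrative sketch, assuming one connection per account/region pair at peak):

```python
# Peak concurrent connections ~= accounts x regions per account.
accounts = 32
regions_per_account = 20
peak_connections = accounts * regions_per_account
print(peak_connections)  # 640
```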