ML engine settings

The engine management page provides access to configuration options for individual ML engines in the environment. These settings primarily govern resource utilization for an engine’s core processes and can be used to tailor resource allocations, per service, to each engine’s expected workload.

The following settings can be accessed by selecting Settings in the engine configuration tray:

  • Ingestion Replica Count: Number of pods to deploy for the ingestion service

  • Train Anomaly Detection Replica Count: Number of pods to deploy for the anomaly detection training service

  • Ingestion Minimum CPU: Minimum number of CPU cores that can be dedicated to the ingestion service

  • Ingestion Maximum CPU: Maximum number of CPU cores that can be dedicated to the ingestion service

  • Ingestion Minimum Memory: Minimum amount of memory (in GB) that can be dedicated to the ingestion service

  • Ingestion Maximum Memory: Maximum amount of memory (in GB) that can be dedicated to the ingestion service

  • Elasticsearch Memory: Amount of memory (in GB) to dedicate to Elasticsearch

  • Elasticsearch Minimum CPU: Minimum number of CPU cores that can be dedicated to Elasticsearch

  • Elasticsearch Maximum CPU: Maximum number of CPU cores that can be dedicated to Elasticsearch
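
As a rough guide to how these values behave, the sketch below assumes the services run as Kubernetes-style workloads, where a "Minimum" value acts like a container resource request, a "Maximum" value acts like a limit, and the replica count sets the number of pods. This is illustrative only; the function name and field layout are hypothetical, and the engine's actual deployment mechanics are not documented here.

    # Illustrative only: a hypothetical mapping from the ingestion settings above
    # to Kubernetes-style replica and resource fields.

    def ingestion_resources(replicas: int,
                            min_cpu: float, max_cpu: float,
                            min_mem_gb: float, max_mem_gb: float) -> dict:
        """Return a Kubernetes-like spec fragment for the ingestion service.

        Assumption: "Minimum" settings behave like container requests and
        "Maximum" settings like limits; the replica count sets the pod count.
        """
        return {
            "replicas": replicas,  # Ingestion Replica Count
            "resources": {
                "requests": {
                    "cpu": str(min_cpu),          # Ingestion Minimum CPU (cores)
                    "memory": f"{min_mem_gb}G",   # Ingestion Minimum Memory (GB)
                },
                "limits": {
                    "cpu": str(max_cpu),          # Ingestion Maximum CPU (cores)
                    "memory": f"{max_mem_gb}G",   # Ingestion Maximum Memory (GB)
                },
            },
        }

    # Example: 2 ingestion pods, each requesting 1 CPU / 2 GB and capped at 4 CPU / 8 GB.
    print(ingestion_resources(replicas=2, min_cpu=1, max_cpu=4, min_mem_gb=2, max_mem_gb=8))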

Note

  • The Kibana UI can be enabled from the same tray and will be deployed alongside Elasticsearch if toggled on.

  • Collector assignments can also be configured on a per-engine basis from the main configuration tray.