Azure (AKS)¶
After completing the pre-deployment preparations, follow the instructions below to set up the necessary infrastructure and deploy the Plixer ML Engine in Azure.
Additional prerequisites for Azure¶
- Credentials for the Azure user account that will be used for deployment
- A VNet with one subnet for the deployment
Note
The Plixer ML VM (the deployment host) deployed as part of the pre-deployment preparations will have all software prerequisites (Docker, Terraform, etc.) preinstalled.
The Azure user account must be assigned the owner role to allow a role to be assigned to the AKS cluster user.

The infrastructure setup script (01_azure_infrastructure.sh) includes an option to automatically set up a new VNet and will prompt the user to enter the necessary information. Alternatively, these details can be manually defined using the vnet_addresses and new_subnet_cidr variables in /home/plixer/common/kubernetes/azure.tfvars.
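For reference, a manually defined VNet configuration in azure.tfvars might look like the following sketch (the values shown are the documented defaults; adjust them to fit the existing address plan):

```hcl
# Address space used when the setup script creates a new VNet
vnet_addresses  = "172.18.0.0/16"

# Subnet (must fall within vnet_addresses) that the AKS cluster is placed in
new_subnet_cidr = "172.18.1.0/24"
```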
Hybrid cloud deployments¶
When pairing a Plixer ML Engine in Azure with an on-prem Plixer Scrutinizer environment, one of the following methods should be used to enable connectivity between the two before starting the deployment process.
Azure site-to-site (S2S) VPN
Follow these instructions to create a site-to-site VPN connection to allow communication between the two deployments.
Direct access via public IP
A public IP address can be used to allow external access to the on-prem Plixer Scrutinizer deployment. However, this will expose the Plixer Scrutinizer environment to the Internet via ports 5432, 22, and 443.
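Before starting the deployment, it can be useful to confirm that those ports are actually reachable from the deployment host. The sketch below is a minimal, hypothetical check (not part of the Plixer tooling); 203.0.113.10 is a placeholder for the on-prem public IP:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports Plixer Scrutinizer exposes when using direct public-IP access
# (203.0.113.10 is a placeholder -- substitute the real public IP).
for port in (22, 443, 5432):
    print(port, port_reachable("203.0.113.10", port))
```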
The public IP address must be entered when prompted by the 01_azure_infrastructure.sh and setup.sh scripts. The Internet gateway IP must also be manually added to the Plixer Scrutinizer pg_hba.conf file to allow access to Postgres.
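As an illustration, the added entry follows the standard Postgres host-record format. The line below is a sketch only: 203.0.113.10 is a placeholder for the gateway IP, and the database, user, and auth-method columns should match the existing entries in the Scrutinizer pg_hba.conf:

```
host    all    all    203.0.113.10/32    md5
```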
After the file has been modified, run the following command on the Plixer Scrutinizer server to reload the configuration:
psql -c "SELECT pg_reload_conf()"
Deploying the Kubernetes infrastructure¶
1. Log in to the Plixer ML VM image using plixer:plixer.
2. Accept the EULA, and then configure network settings for the host.
3. SSH to the Plixer ML VM image using the plixer credentials set in step 2.
4. Exit the automated setup wizard by pressing Ctrl + C.
5. Start the Azure CLI and run the following to set up the client and log in:

   az login

6. Define the infrastructure deployment parameters in /home/plixer/common/kubernetes/azure.tfvars (as described in the file).

   Note

   azure.tfvars may also include fields/variables with factory-defined values (e.g., kube_version) for deploying the Plixer ML Engine Kubernetes cluster. Contact Plixer Technical Support for assistance before making changes to any default value.

7. Navigate to /home/plixer/common/kubernetes and run the Kubernetes cluster deployment script:

   01_azure_infrastructure.sh

8. Verify that the infrastructure was successfully deployed (this may take several minutes):

   kubectl get nodes
After confirming the Kubernetes cluster has been correctly deployed, proceed to deploying the Plixer ML Engine.
Deploying the Plixer ML Engine¶
Once the Kubernetes cluster has been deployed, follow these steps to deploy the Plixer ML Engine:
1. Navigate to the /home/plixer/ml directory on the deployment host.
2. Run the Plixer ML Engine deployment script and follow the prompts to set up the appliance:

   setup.sh

3. When prompted, enter the following Plixer Scrutinizer environment details:

   - Primary reporter IP address
After the script completes running, navigate to Admin > Resources > ML Engines and wait for the engine to show as Deployed under its Deploy Status. Refresh the page if the status has not updated after a few minutes.
Terraform configuration¶
The following table lists all required and optional variables in /home/plixer/common/kubernetes/azure.tfvars, which are used when deploying the Kubernetes infrastructure for the Plixer ML Engine.
| Field name | Description |
| --- | --- |
| cluster_name | REQUIRED: The name to associate with and identify this deployment. |
| vm_type | REQUIRED: The Azure VM instance type to create for AKS worker nodes. |
| location | REQUIRED: The location to create the AKS worker nodes in (e.g., East US 2). |
| resource_group_name | OPTIONAL: Name of an existing resource group to use when deploying assets. If empty, a new resource group named ${var.cluster_name}-resource-group will be created. resource_group_name must also be in the specified location. |
| vnet_name | OPTIONAL: Name of an existing VNet to deploy AKS in. |
| vnet_subnet_name | OPTIONAL: Name of an existing subnet within vnet_name to deploy AKS in. Each subnet can only contain one AKS cluster. |
| vnet_addresses | OPTIONAL: If vnet_name is not specified, this address space is used when creating the new VNet to place AKS in. Default: 172.18.0.0/16. |
| new_subnet_cidr | OPTIONAL (required if vnet_subnet_name is not specified): If vnet_subnet_name is not specified, this address space is used when creating the new VNet subnet to place AKS in. Must be within the address space of the specified VNet. Default: 172.18.1.0/24. |
| public_node_ips | OPTIONAL: Whether or not to assign public IPs to AKS nodes. Default: FALSE. |
| service_cidr | OPTIONAL: Service CIDR space for internal k8s services. Must not conflict with the address space of the VNet being deployed to. Default: 172.19.1.0/24. |
| dns_service_ip | OPTIONAL: Service IP to assign to the k8s internal DNS service. Must be within the address space specified by service_cidr. Default: 172.19.1.5. |
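The CIDR constraints above can be sanity-checked before running the deployment. A minimal sketch using Python's standard ipaddress module, with the documented default values:

```python
import ipaddress

# Documented default values from azure.tfvars
vnet_addresses = ipaddress.ip_network("172.18.0.0/16")
new_subnet_cidr = ipaddress.ip_network("172.18.1.0/24")
service_cidr = ipaddress.ip_network("172.19.1.0/24")
dns_service_ip = ipaddress.ip_address("172.19.1.5")

# new_subnet_cidr must fall within the VNet address space
print(new_subnet_cidr.subnet_of(vnet_addresses))   # True
# service_cidr must not conflict with the VNet address space
print(service_cidr.overlaps(vnet_addresses))       # False
# dns_service_ip must sit inside service_cidr
print(dns_service_ip in service_cidr)              # True
```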