Prerequisites for Azure Kubernetes Service (AKS)
Environment Prerequisites
For a Portworx cluster on Azure Kubernetes Service (AKS), each node must meet the following hardware, software, and network requirements:
PX-StoreV1

Hardware | Requirement |
---|---|
CPU | 4 cores minimum, 8 cores recommended |
RAM | 4 GB minimum, 8 GB recommended |
Backing drive | 8 GB minimum required, 128 GB minimum recommended |
Operating system root partition | If /opt and /var are created as separate disks, then 64 GB is sufficient for the root partition. Otherwise, a minimum of 128 GB is required. |
Storage drives | Azure Managed Disks or Azure Blob Storage |
Network connectivity | Latency for synchronous replication: less than 10 ms between nodes in the cluster |
Node type | Azure Virtual Machines (VMs) |
PX-StoreV2

Hardware | Requirement |
---|---|
CPU | 8 cores minimum |
RAM | 8 GB minimum |
Drive type | SSD/NVMe |
Backing drive | 8 GB minimum required, 128 GB minimum recommended |
Operating system root partition | If /opt and /var are created as separate disks, then 64 GB is sufficient for the root partition. Otherwise, a minimum of 128 GB is required. |
Storage drives | Azure Managed Disks or Azure Blob Storage |
Network connectivity | Latency for synchronous replication: less than 10 ms between nodes in the cluster |
Node type | Azure Virtual Machines (VMs) |
Metadata drive | Minimum of 64 GB system metadata device on each node where you want to deploy Portworx. If you do not provide a metadata device, one is automatically added to the spec. |
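As a quick sanity check, you can compare a node against these tables before installing. The following is a minimal sketch using standard Linux tools; run it on each AKS node, for example from a node shell or a privileged debug pod, and read the thresholds from the tables above.

```bash
# Quick hardware sanity check; run on each AKS node.
echo "CPU cores: $(nproc)"                        # PX-StoreV1: >= 4, PX-StoreV2: >= 8
free -g | awk '/^Mem:/ {print "RAM (GiB): " $2}'  # PX-StoreV1: >= 4, PX-StoreV2: >= 8
lsblk -d -o NAME,SIZE,ROTA                        # ROTA=0 indicates SSD/NVMe media
df -h / /opt /var                                 # root partition sizing (64/128 GB rule)
```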
PX-StoreV1

Software | Requirement |
---|---|
Linux kernel and distro | Kernel version 4.18 or greater. To check whether your Linux distro and kernel are supported, see Supported Kernels. |
Key-value store | Portworx needs a key-value store to perform its operations, so install a clustered key-value database (kvdb) with a three-node cluster. You can also use Internal KVDB during installation; in this mode, Portworx creates and manages an internal key-value store (KVDB) cluster. If you plan on using your own KVDB, refer to KVDB for Portworx for recommendations on installing and configuring a KVDB cluster. |
Disable swap | Disable swap on all nodes that will run the Portworx software. Ensure that the swap device is not automatically mounted on server reboot. |
Network Time Protocol (NTP) | All nodes in the cluster should be in sync with NTP time. Any time drift between nodes can cause unexpected behavior, impacting services. |
PX-StoreV2

Software | Requirement |
---|---|
Linux kernel and distro | Linux kernel version: 4.20 or newer (minimum), 5.0 or newer (recommended). During installation, Portworx automatically tries to pull the dmsetup, mdadm, lvm2, thin-provisioning-tools, and augeas-tools packages from distribution-specific repositories. This is a mandatory requirement, and installation fails if it is not met. To check whether your Linux distro and kernel are supported, see Supported Kernels. |
Key-value store | Portworx needs a key-value store to perform its operations, so install a clustered key-value database (kvdb) with a three-node cluster. You can also use Internal KVDB during installation; in this mode, Portworx creates and manages an internal key-value store (KVDB) cluster. If you plan on using your own KVDB, refer to KVDB for Portworx for recommendations on installing and configuring a KVDB cluster. |
Disable swap | Disable swap on all nodes that will run the Portworx software. Ensure that the swap device is not automatically mounted on server reboot. |
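To confirm the kernel, swap, and NTP requirements on a node, a minimal check along these lines can help; it assumes a systemd-based distro for the timedatectl call.

```bash
# Verify software prerequisites on a node (assumes a systemd-based distro).
uname -r                              # PX-StoreV1: >= 4.18; PX-StoreV2: >= 4.20 (5.0+ recommended)
swapon --show                         # must print nothing: swap is disabled
sudo swapoff -a                       # disable swap now if anything was listed
grep -i swap /etc/fstab               # remove or comment out entries so swap stays off after reboot
timedatectl show -p NTPSynchronized   # expect NTPSynchronized=yes
```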
Portworx network requirements
Portworx runs as a pod in a Kubernetes cluster and uses specific ports for communication, data transfer, and telemetry.
The required ports fall into three groups: east-to-west (node-to-node) traffic, inbound traffic, and outbound traffic.
- Portworx also requires the following ports:
- An open KVDB port. For example, if you're using etcd externally, open port 2379.
- An open UDP port at 9002.
- For telemetry, open ports 9024, 12001, and 12002. Ensure you are running Portworx Operator version 23.7.0 or higher to configure the telemetry port:
- Portworx versions 2.13.7 and older: open port 9024 specifically for telemetry.
- Portworx versions 2.13.8 and newer: use port 9029 for telemetry.
East-to-west (node-to-node):

Port | Description |
---|---|
9001 | Portworx management port [REST] |
9002 | Portworx node-to-node port [gossip]/UDP |
9003 | Portworx storage data port |
9004 | Portworx namespace [RPC] |
9012 | Portworx node-to-node communication port [gRPC] |
9013 | Portworx namespace driver [gRPC] |
9014 | Portworx diags server port [gRPC] |
9018 | Portworx kvdb peer-to-peer port [gRPC] |
9019 | Portworx kvdb client service [gRPC] |
9021 | Portworx gRPC SDK gateway [REST] |
9022 | Portworx health monitor [REST] |
9029 | Telemetry log uploader |
12002 | Telemetry phone home |
Inbound:

Port | Description |
---|---|
9001 | Portworx management port [REST] |
9021 | Portworx gRPC SDK gateway [REST] |
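To verify that firewall and network security group rules actually allow this traffic, you can probe a peer from any Portworx node. This is a hedged sketch: 10.0.0.5 is a placeholder peer IP, and UDP probes with nc are best-effort only.

```bash
# Probe a peer node's Portworx ports; PEER is a placeholder IP.
PEER=10.0.0.5
for port in 9001 9002 9003 9004 9012 9013 9014 9018 9019 9021 9022; do
  nc -z -w 2 "$PEER" "$port" && echo "tcp/$port open" || echo "tcp/$port blocked"
done
# Gossip also uses UDP on 9002; UDP checks with nc are best-effort.
nc -z -u -w 2 "$PEER" 9002 && echo "udp/9002 reachable (best-effort check)"
```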
Supported disk types
PX-StoreV1

Cloud Provider | Disk Types |
---|---|
Azure | |

PX-StoreV2

Cloud Provider | Disk Types |
---|---|
Azure | |
Important notes for the PremiumV2_LRS and UltraSSD_LRS disk types:

- For a comprehensive overview of their limitations, refer to the Azure documentation pages for PremiumV2_LRS and UltraSSD_LRS.
- Expanding the PremiumV2_LRS and UltraSSD_LRS disk types requires a dedicated storage pool for the metadata partition. This is crucial to avoid losing metadata during the disk expansion process.
- When configuring the UltraSSD_LRS disk type, Portworx uses the median limit for IOPS. To adjust the performance settings of the UltraSSD_LRS disk type according to your needs, see the Adjust the Performance of an Ultra Disk page in the Azure documentation.
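As an illustration of how a disk type is selected, the sketch below sets an Azure Managed Disk type and size in the StorageCluster cloud-drive spec. The cluster name, namespace, image tag, disk type, and size are placeholders; verify the exact deviceSpecs values against your Portworx version's documentation before applying.

```bash
# Hedged sketch: select an Azure Managed Disk type/size via cloudStorage.deviceSpecs.
# Name, namespace, image tag, and sizes are placeholders, not prescriptive values.
kubectl apply -f - <<'EOF'
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                    # placeholder
  namespace: portworx
spec:
  image: portworx/oci-monitor:3.2.0   # example version tag
  cloudStorage:
    deviceSpecs:
    - type=Premium_LRS,size=150       # disk type from the table above, size in GB
EOF
```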
Supported Kubernetes versions
Before installing Portworx on AKS, ensure you are using a supported Kubernetes version:
Portworx Enterprise supported Kubernetes versions
Portworx 3.2

Type | Supported Versions |
---|---|
AKS | |

Portworx 3.1

Type | Supported Versions |
---|---|
AKS | |

Portworx 3.0

Supported Kubernetes Version |
---|
|
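You can confirm which version your cluster runs with the Azure CLI or kubectl; the resource group and cluster names below are placeholders.

```bash
# Check the AKS cluster's Kubernetes version (names are placeholders).
az aks show --resource-group myResourceGroup --name myAKSCluster \
  --query kubernetesVersion --output tsv
kubectl version    # also reports the API server version from inside the cluster
```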
Best practices
Prevent Accidental Deletion: If your virtualization software has a feature to prevent accidental deletion, enable it for the VMs hosting Portworx (PX) nodes. While Portworx is designed to handle the loss of some nodes without issue, losing a significant number of storage nodes to VM deletion can result in a loss of quorum and an outage. For more information on how to prevent accidental deletion of VMs, refer to Lock your resources to protect your infrastructure in the Azure documentation.
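For example, a CanNotDelete lock can be applied with the Azure CLI to the node resource group that holds the VMs. The group name below is a placeholder; AKS node resource groups are typically named MC_<resource-group>_<cluster>_<region>.

```bash
# Hedged example: lock the AKS node resource group so its VMs cannot be deleted.
# The resource group name is a placeholder; look yours up with `az aks show`.
az lock create \
  --name px-node-delete-lock \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --lock-type CanNotDelete
```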