
Installation on a Bare Metal Kubernetes Cluster using Portworx Operator

This topic provides instructions for installing Portworx on a bare metal Kubernetes cluster using the Portworx Operator.

The following tasks describe how to install Portworx on a bare metal Kubernetes cluster using the Portworx Operator. Complete all of them, in order, to install Portworx.

Generate Portworx Specification

To install Portworx, first generate the Kubernetes manifests that you will deploy in your bare metal Kubernetes cluster by following these steps. A sketch of how these choices map into the generated StorageCluster spec appears after this procedure.

  1. Sign in to the Portworx Central console.

  2. In the Welcome to Portworx! section, select Get Started.

  3. On the Product Line page, in the Portworx Enterprise section, select Continue.

  4. From the Portworx Version dropdown menu, select the Portworx version to install.

  5. From the Platform dropdown menu, select DAS/SAN.

  6. From the Distribution Name dropdown menu, select None.

  7. (Optional) To customize the configuration options and generate a custom specification, click Customize and perform the following steps:

    note

    To continue without customizing the default configuration or generating a custom specification, proceed to Step 8.

    • Basic tab:
      1. To use an existing etcd cluster, do the following:
        1. Select the Your etcd details option.
        2. In the field provided, enter the host name or IP and port number.
          For example, http://test.com.net:1234.
        3. Select one of the following authentication methods:
        • Disable HTTPS – To use HTTP for etcd communication.
        • Certificate Auth – To use HTTPS with an SSL certificate.
          For more information, see Secure your etcd communication.
        • Password Auth – To use HTTPS with username and password authentication.
      2. To use an internal Portworx-managed key-value store (kvdb), do the following:
        1. Select the Built-in option.
        2. To enable TLS encrypted communication among KVDB nodes and between Portworx nodes and the KVDB cluster, select the Enable TLS for internal kvdb checkbox.
        3. If your cluster does not already have a cert-manager, select the Deploy Cert-Manager for TLS certificates checkbox.
      3. Select Next.
    • Storage tab:
      1. To enable Portworx to use all available, unused, and unmounted drives on the node, do the following:
        1. Select the Automatically scan disks option.
        2. From the Default IO Profile dropdown menu, select Auto.
          This enables Portworx to automatically choose the best I/O profile based on detected workload patterns.
        3. To use unmounted disks even if they contain a partition or filesystem, select the Use unmounted disks even if they have a partition or filesystem on it. Portworx will never use a drive or partition that is mounted checkbox.
          Portworx never uses a mounted drive or partition.
      2. To manually specify the drives on the node for Portworx to use, do the following:
        1. Select the Manually specify disks option.
        2. In the Drive/Device field, specify the block drive(s) that Portworx uses for data storage.
        3. In the Pool Label field, assign a custom label in key:value format to identify and categorize storage pools.
      3. Select the PX-StoreV2 checkbox to enable the PX-StoreV2 datastore.
      4. If you select the PX-StoreV2 checkbox, in the Metadata Path field, enter a pre-provisioned path for storing the Portworx metadata.
        The device at this path must be at least 64 GB in size.
      5. From the Journal Device dropdown menu, select one of the following:
        • None – To use the default journaling setting.
        • Auto – To automatically allocate journal devices.
        • Custom – To manually enter a journal device path.
          Enter the path of the journal device in the Journal Device Path field.
      6. Skip KVDB device - This checkbox is selected by default and appears only if you choose the Built-in option in the Basic tab.
        Keep it selected to use the same device for KVDB and storage I/O. This configuration is suitable for test or development clusters but not recommended for production clusters. For production clusters, clear the checkbox and provide a separate device to store internal KVDB data. This separates KVDB I/O from storage I/O and improves performance.
      7. KVDB device - Enter the block device path to be used exclusively for KVDB data.
        This device must be present on at least three nodes in the cluster to ensure high availability.
        note

        To restrict Portworx to run internal KVDB only on specific nodes, label those nodes with:

        kubectl label nodes node1 node2 node3 px/metadata-node=true
      8. Select Next.
    • Network tab:
      1. In the Interface(s) section, do the following:
        1. Enter the Data Network Interface to be used for data traffic.
        2. Enter the Management Network Interface to be used for management traffic.
      2. In the Advanced Settings section, do the following:
        1. Enter the Starting port for Portworx services.
      3. Select Next.
    • Customize tab:
      1. Choose the Kubernetes platform in the Customize section.
      2. In the Environment Variables section, enter name-value pairs in the respective fields.
      3. In the Registry and Image Settings section:
        1. Enter the Custom Container Registry Location to download the Docker images.
        2. Enter the Kubernetes Docker Registry Secret that serves as the authentication to access the custom container registry.
        3. From the Image Pull Policy dropdown menu, select Default, Always, IfNotPresent, or Never.
          This policy influences how images are managed on the node and when updates are applied.
      4. In the Security Settings section, select the Enable Authorization checkbox to enable Role-Based Access Control (RBAC) and secure access to storage resources in your cluster.
      5. In the Advanced Settings section:
        1. Select the Enable Stork checkbox to enable Stork.
        2. Select the Enable CSI checkbox to enable CSI.
        3. Select the Enable Monitoring checkbox to enable monitoring for user-defined projects before installing Portworx Operator.
        4. Select the Enable Telemetry checkbox to enable telemetry in the StorageCluster spec.
          For more information, see Enable Pure1 integration for upgrades on bare metal.
        5. Enter the prefix for the Portworx cluster name in the Cluster Name Prefix field.
        6. Select the Secrets Store Type from the dropdown menu to store and manage secure information for features such as CloudSnaps and Encryption.
      6. Click Finish.
      7. On the summary page, enter a name for the specification in the Spec Name field and tags in the Spec Tags field.
      8. Click Download .yaml to download the yaml file with the customized specification or Save Spec to save the specification.
  8. Click Save & Download to generate the specification.
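
For reference, the generated manifest is a StorageCluster custom resource. The following is a minimal sketch, not output copied from Portworx Central, showing how selections such as the Built-in KVDB, manually specified drives, a dedicated KVDB device, and custom network interfaces might appear in the spec; all device paths, interface names, and the cluster name are placeholders:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-8dfd338e915b
  namespace: <px-namespace>
spec:
  image: portworx/oci-monitor:<version-number>
  kvdb:
    internal: true            # "Built-in" option on the Basic tab
  storage:
    devices:
    - /dev/sdb                # drives entered in the Drive/Device field
    journalDevice: auto       # "Auto" from the Journal Device dropdown
    kvdbDevice: /dev/sdc      # dedicated device for internal KVDB data
  network:
    dataInterface: eth1       # Data Network Interface
    mgmtInterface: eth0       # Management Network Interface
  startPort: 9001             # Starting port for Portworx services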

Deploy Portworx Operator

Use the Operator specification you generated in the Generate Portworx Specification section and deploy the Portworx Operator by running the following command:

kubectl apply -f 'https://install.portworx.com/<version-number>?comp=pxoperator'
serviceaccount/portworx-operator created
podsecuritypolicy.policy/px-operator created
clusterrole.rbac.authorization.k8s.io/portworx-operator created
clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
deployment.apps/portworx-operator created
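
(Optional) Before deploying the StorageCluster, you can confirm that the operator finished rolling out. This check is not part of the generated specification; it simply watches the portworx-operator deployment created above:

kubectl -n <px-namespace> rollout status deployment/portworx-operator
deployment "portworx-operator" successfully rolled out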

Deploy StorageCluster

  1. Use the StorageCluster specification you generated in the Generate Portworx Specification section, and deploy the StorageCluster by running the following command:

    kubectl apply -f 'https://install.portworx.com/<version-number>?operator=true&mc=false&kbver=&b=true&c=px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-8dfd338e915b&stork=true&csi=true&mon=true&tel=false&st=k8s&promop=true'
    storagecluster.core.libopenstorage.org/px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-8dfd338e915b created
  2. (Optional) If you have a disaggregated setup, after you generate the StorageCluster spec, you must create two separate node sections in the spec to define the device settings for the storage and storageless (compute) nodes. Here is a sample StorageCluster spec that uses node-specific overrides:
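    The following is a minimal sketch rather than a complete generated spec; the label key portworx.io/node-type, the device paths, and the cluster name are placeholders, and you must label your storage and storageless nodes accordingly before applying it:

    apiVersion: core.libopenstorage.org/v1
    kind: StorageCluster
    metadata:
      name: px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-8dfd338e915b
      namespace: <px-namespace>
    spec:
      image: portworx/oci-monitor:<version-number>
      nodes:
      # Storage nodes: contribute the listed block devices to the cluster.
      - selector:
          labelSelector:
            matchLabels:
              portworx.io/node-type: storage
        storage:
          devices:
          - /dev/sdb
      # Storageless (compute) nodes: run Portworx but contribute no drives.
      - selector:
          labelSelector:
            matchLabels:
              portworx.io/node-type: storageless
        storage:
          devices: []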

Monitor Portworx Nodes

  1. Enter the following kubectl get command and wait until all Portworx nodes show as Ready or Online in the output:

    kubectl -n <px-namespace> get storagenodes -l name=portworx
    NAME                 ID                                     STATUS   VERSION          AGE
    username-k8s1-node0  xxxxxxxx-xxxx-xxxx-xxxx-43cf085e764e   Online   2.11.1-3a5f406   4m52s
    username-k8s1-node1  xxxxxxxx-xxxx-xxxx-xxxx-4597de6fdd32   Online   2.11.1-3a5f406   4m52s
    username-k8s1-node2  xxxxxxxx-xxxx-xxxx-xxxx-e2169ffa111c   Online   2.11.1-3a5f406   4m52s
  2. Enter the following kubectl describe command with the NAME of one of the Portworx nodes you retrieved above to show the current installation status for individual nodes:

    kubectl -n <px-namespace> describe storagenode <portworx-node-name>
    ...
    Events:
      Type     Reason                              Age    From                  Message
      ----     ------                              ----   ----                  -------
      Normal   PortworxMonitorImagePullInPrgress   7m48s  portworx, k8s-node-2  Portworx image portworx/px-enterprise:2.10.1.1 pull and extraction in progress
      Warning  NodeStateChange                     5m26s  portworx, k8s-node-2  Node is not in quorum. Waiting to connect to peer nodes on port 9002.
      Normal   NodeStartSuccess                    5m7s   portworx, k8s-node-2  PX is ready on this node
    note
    • The image pulled in the output differs based on the Portworx license type and version.
    • For Portworx Enterprise, the default license activated on the cluster is a 30-day trial, which you can convert to a SaaS-based model or a generic fixed license.

Verify Portworx Pod Status

Enter the following command, specifying the namespace where you deployed Portworx, to list the pods and filter the results for Portworx pods:

kubectl get pods -n <px-namespace> -o wide | grep -e portworx -e px
NAME                                                    READY   STATUS    RESTARTS   AGE     IP                NODE                  NOMINATED NODE   READINESS GATES
portworx-api-774c2                                      1/1     Running   0          2m55s   192.168.121.196   username-k8s1-node0   <none>           <none>
portworx-api-t4lf9                                      1/1     Running   0          2m55s   192.168.121.99    username-k8s1-node1   <none>           <none>
portworx-api-dvw64                                      1/1     Running   0          2m55s   192.168.121.99    username-k8s1-node2   <none>           <none>
portworx-kvdb-94bpk                                     1/1     Running   0          4s      192.168.121.196   username-k8s1-node0   <none>           <none>
portworx-kvdb-8b67l                                     1/1     Running   0          10s     192.168.121.196   username-k8s1-node1   <none>           <none>
portworx-kvdb-fj72p                                     1/1     Running   0          30s     192.168.121.196   username-k8s1-node2   <none>           <none>
portworx-operator-58967ddd6d-kmz6c                      1/1     Running   0          4m1s    10.244.1.99       username-k8s1-node0   <none>           <none>
prometheus-px-prometheus-0                              2/2     Running   0          2m41s   10.244.1.105      username-k8s1-node0   <none>           <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-9gs79   2/2     Running   0          2m55s   192.168.121.196   username-k8s1-node0   <none>           <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-vpptx   2/2     Running   0          2m55s   192.168.121.99    username-k8s1-node1   <none>           <none>
px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d-bxmpn   2/2     Running   0          2m55s   192.168.121.191   username-k8s1-node2   <none>           <none>
px-csi-ext-868fcb9fc6-54bmc                             4/4     Running   0          3m5s    10.244.1.103      username-k8s1-node0   <none>           <none>
px-csi-ext-868fcb9fc6-8tk79                             4/4     Running   0          3m5s    10.244.1.102      username-k8s1-node2   <none>           <none>
px-csi-ext-868fcb9fc6-vbqzk                             4/4     Running   0          3m5s    10.244.3.107      username-k8s1-node1   <none>           <none>
px-prometheus-operator-59b98b5897-9nwfv                 1/1     Running   0          3m3s    10.244.1.104      username-k8s1-node0   <none>           <none>

Note the name of a px-cluster pod. You will run pxctl commands from these pods in Verify Portworx Cluster Status.
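
For convenience, you can capture one of the px-cluster pod names in a shell variable and reuse it in the commands that follow. This assumes the Portworx pods carry the name=portworx label, the same label used to list storage nodes earlier:

PX_POD=$(kubectl get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')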

Verify Portworx Cluster Status

You can find the status of the Portworx cluster by running pxctl status commands from a pod.
Enter the following kubectl exec command, specifying the pod name you retrieved in Verify Portworx Pod Status:

kubectl exec <pod-name> -n <px-namespace> -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e
IP: 192.168.121.99
Local Storage Pool: 1 pool
POOL  IO_PRIORITY  RAID_LEVEL  USABLE   USED    STATUS  ZONE     REGION
0     HIGH         raid0       3.0 TiB  10 GiB  Online  default  default
Local Storage Devices: 3 devices
Device  Path      Media Type               Size     Last-Scan
0:1     /dev/vdb  STORAGE_MEDIUM_MAGNETIC  1.0 TiB  14 Jul 22 22:03 UTC
0:2     /dev/vdc  STORAGE_MEDIUM_MAGNETIC  1.0 TiB  14 Jul 22 22:03 UTC
0:3     /dev/vdd  STORAGE_MEDIUM_MAGNETIC  1.0 TiB  14 Jul 22 22:03 UTC
* Internal kvdb on this node is sharing this storage device /dev/vdc to store its data.
total - 3.0 TiB
Cache Devices:
* No cache devices
Cluster Summary
Cluster ID: px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d
Cluster UUID: xxxxxxxx-xxxx-xxxx-xxxx-6f3fd5522eae
Scheduler: kubernetes
Nodes: 3 node(s) with storage (3 online)
IP               ID                                    SchedulerNodeName    Auth      StorageNode  Used    Capacity  Status  StorageStatus   Version         Kernel                  OS
192.168.121.196  xxxxxxxx-xxxx-xxxx-xxxx-fad8c65b8edc  username-k8s1-node0  Disabled  Yes          10 GiB  3.0 TiB   Online  Up              2.11.0-81faacc  3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
192.168.121.99   xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e  username-k8s1-node1  Disabled  Yes          10 GiB  3.0 TiB   Online  Up (This node)  2.11.0-81faacc  3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
192.168.121.191  xxxxxxxx-xxxx-xxxx-xxxx-19d45b4c541a  username-k8s1-node2  Disabled  Yes          10 GiB  3.0 TiB   Online  Up              2.11.0-81faacc  3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
Global Storage Pool
Total Used : 30 GiB
Total Capacity : 9.0 TiB

The status displays PX is operational when the cluster is running as expected. If the cluster uses the PX-StoreV2 datastore, the StorageNode entry for each node displays Yes(PX-StoreV2).
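
If you want to script this check rather than read the full output, a simple sketch is to filter for the status line shown above:

kubectl exec <pod-name> -n <px-namespace> -- /opt/pwx/bin/pxctl status | grep "Status: PX is operational"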

Verify Portworx Pool Status

note

This procedure applies only to clusters that use the PX-StoreV2 datastore.

Run the following command to view the Portworx drive configuration for the node where your pod runs:

kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl service pool show
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
PX drive configuration:
Pool ID: 0
Type: PX-StoreV2
UUID: 58ab2e3f-a22e-xxxx-xxxx-xxxxxxxxxxxx
IO Priority: HIGH
Labels: kubernetes.io/arch=amd64,kubernetes.io/hostname=username-vms-silver-sight-3,kubernetes.io/os=linux,medium=STORAGE_MEDIUM_SSD,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,iopriority=HIGH
Size: 25 GiB
Status: Online
Has metadata: No
Balanced: Yes
Drives:
0: /dev/sda, Total size 32 GiB, Online
Cache Drives:
No Cache drives found in this pool
Metadata Device:
1: /dev/sdd, STORAGE_MEDIUM_SSD

The Type: PX-StoreV2 entry in the output confirms that the node uses the PX-StoreV2 datastore.

Verify pxctl Cluster Provision Status

  1. Access the Portworx CLI.

  2. Run the following command to find the storage cluster:

    kubectl -n <px-namespace> get storagecluster
    NAME                                              CLUSTER UUID                           STATUS   VERSION   AGE
    px-cluster-xxxxxxxx-xxxx-xxxx-xxxx-3e9bf3cd834d   xxxxxxxx-xxxx-xxxx-xxxx-6f3fd5522eae   Online   2.11.0    10m

    The status must show that the cluster is Online.

  3. Run the following command to find the storage nodes:

    kubectl -n <px-namespace> get storagenodes
    NAME                  ID                                     STATUS   VERSION          AGE
    username-k8s1-node0   xxxxxxxx-xxxx-xxxx-xxxx-fad8c65b8edc   Online   2.11.0-81faacc   11m
    username-k8s1-node1   xxxxxxxx-xxxx-xxxx-xxxx-70c31d0f478e   Online   2.11.0-81faacc   11m
    username-k8s1-node2   xxxxxxxx-xxxx-xxxx-xxxx-19d45b4c541a   Online   2.11.0-81faacc   11m

    The status must show that the nodes are Online.

  4. Verify the Portworx cluster provision status by running the following command.
    Specify the pod name you retrieved in Verify Portworx Pod Status.

    kubectl exec <px-pod> -n <px-namespace> -- /opt/pwx/bin/pxctl cluster provision-status
    NODE                                  NODE STATUS  POOL                                        POOL STATUS  IO_PRIORITY  SIZE    AVAILABLE  USED    PROVISIONED  ZONE     REGION   RACK
    0c99e1f2-9d49-xxxx-xxxx-xxxxxxxxxxxx  Up           0 ( 8ec9e6aa-7726-xxxx-xxxx-xxxxxxxxxxxx )  Online       HIGH         32 GiB  32 GiB     33 MiB  0 B          default  default  default
    1e89102f-0510-xxxx-xxxx-xxxxxxxxxxxx  Up           0 ( 06fcc73a-7e2f-xxxx-xxxx-xxxxxxxxxxxx )  Online       HIGH         32 GiB  32 GiB     33 MiB  0 B          default  default  default
    24508311-e2fe-xxxx-xxxx-xxxxxxxxxxxx  Up           0 ( 58ab2e3f-a22e-xxxx-xxxx-xxxxxxxxxxxx )  Online       HIGH         32 GiB  32 GiB     33 MiB  0 B          default  default  default

What to do next

Create a PVC. For more information, see Create your first PVC.
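
As a starting point, the following is a minimal PVC sketch. The StorageClass name px-csi-db is one of the defaults that Portworx creates when CSI is enabled; verify the classes available in your cluster with kubectl get storageclass before applying it:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-example-pvc
spec:
  storageClassName: px-csi-db   # assumes the default Portworx CSI StorageClass exists
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi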