Scheduled Snapshots
Prerequisites
Configuring cloud secrets
To create cloud snapshots, you need to set up secrets with Portworx, which are used to connect and authenticate with the configured cloud provider.
Follow the instructions in the create and configure credentials section to set up secrets.
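For example, a minimal sketch of creating an S3-compatible credential with pxctl, run through the Portworx pod in the same way as later commands on this page. The access key, secret key, region, endpoint, and credential name are placeholders; exact flags can vary between Portworx versions, so check pxctl credentials create --help:

PX_POD=$(kubectl get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n <px-namespace> -- /opt/pwx/bin/pxctl credentials create \
  --provider s3 \
  --s3-access-key <access-key> \
  --s3-secret-key <secret-key> \
  --s3-region <region> \
  --s3-endpoint <endpoint> \
  <credential-name>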
Storkctl
Always use the latest storkctl binary by downloading it from the currently running Stork container.
Perform the following steps to download storkctl from the Stork pod:
Kubernetes:

- Linux:

  STORK_POD=$(kubectl get pods -n <px-namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
  kubectl cp -n <px-namespace> $STORK_POD:/storkctl/linux/storkctl ./storkctl
  sudo mv storkctl /usr/local/bin &&
  sudo chmod +x /usr/local/bin/storkctl

- OS X:

  STORK_POD=$(kubectl get pods -n <px-namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
  kubectl cp -n <px-namespace> $STORK_POD:/storkctl/darwin/storkctl ./storkctl
  sudo mv storkctl /usr/local/bin &&
  sudo chmod +x /usr/local/bin/storkctl

- Windows:

  - Copy storkctl.exe from the Stork pod:

    STORK_POD=$(kubectl get pods -n <px-namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
    kubectl cp -n <px-namespace> $STORK_POD:/storkctl/windows/storkctl.exe ./storkctl.exe

  - Move storkctl.exe to a directory in your PATH.

OpenShift:

- Linux:

  STORK_POD=$(oc get pods -n <px-namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
  oc cp -n <px-namespace> $STORK_POD:/storkctl/linux/storkctl ./storkctl
  sudo mv storkctl /usr/local/bin &&
  sudo chmod +x /usr/local/bin/storkctl

- OS X:

  STORK_POD=$(oc get pods -n <px-namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
  oc cp -n <px-namespace> $STORK_POD:/storkctl/darwin/storkctl ./storkctl
  sudo mv storkctl /usr/local/bin &&
  sudo chmod +x /usr/local/bin/storkctl

- Windows:

  - Copy storkctl.exe from the Stork pod:

    STORK_POD=$(oc get pods -n <px-namespace> -l name=stork -o jsonpath='{.items[0].metadata.name}') &&
    oc cp -n <px-namespace> $STORK_POD:/storkctl/windows/storkctl.exe ./storkctl.exe

  - Move storkctl.exe to a directory in your PATH.
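Once storkctl is in your PATH, you can confirm the binary works by printing its version (assuming your storkctl build includes the version subcommand):

storkctl version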
Create a schedule policy
You can use a schedule policy to specify when Portworx should trigger a specific action.
Create a file named daily-policy.yaml, specifying the following fields and values:

- apiVersion: with the version of the Stork scheduler (this example uses stork.libopenstorage.org/v1alpha1)
- kind: with the SchedulePolicy value
- metadata.name: with the name of the SchedulePolicy object (this example uses daily)
- policy.daily.time: with the backup time (this example uses "10:14PM")
- policy.daily.retain: with the number of backups Portworx must retain (this example retains 3 backups)
apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: daily
policy:
  daily:
    time: "10:14PM"
    retain: 3
For more details about how you can configure a schedule policy, see the Schedule Policy reference page.
Apply the policy:

Kubernetes:

kubectl apply -f daily-policy.yaml
schedulepolicy.stork.libopenstorage.org/daily created

OpenShift:

oc apply -f daily-policy.yaml
schedulepolicy.stork.libopenstorage.org/daily created
You can check the status of your schedule policy by entering the storkctl get schedulepolicy command:

storkctl get schedulepolicy
NAME      INTERVAL-MINUTES   DAILY     WEEKLY   MONTHLY
daily     N/A                10:14PM   N/A      N/A
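The StorageClass example later on this page also references a policy named weekly. A sketch of what such a policy could look like, assuming the weekly policy type accepts day and time fields as described on the Schedule Policy reference page (the day, time, and retain values here are illustrative):

apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: weekly
policy:
  weekly:
    day: "Sunday"
    time: "11:30PM"
    retain: 2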
Associate a schedule policy with a StorageClass or a Volume
The following sections show how you can associate a schedule policy either with a Volume or a StorageClass.
On Azure AKS, if you associate a schedule policy with a StorageClass, you cannot use Stork to manage that schedule policy.
Create a VolumeSnapshotSchedule
Use a VolumeSnapshotSchedule to associate your schedule policy at the CRD level, and back up specific volumes according to a schedule you define.
Create a file called volume-snapshot-schedule.yaml specifying the following fields and values:

- metadata:
  - name: with the name of this VolumeSnapshotSchedule policy
  - namespace: the namespace in which this policy will exist
  - annotations:
    - portworx/snapshot-type: with the cloud or local value, depending on which environment you want to store your snapshots in
    - portworx/cloud-cred-id: with your cloud environment credentials
    - stork.libopenstorage.org/snapshot-restore-namespaces: with other namespaces that snapshots taken with this policy can restore to
    - The following annotations are required when PX-Security is enabled:
      - openstorage.io/auth-secret-namespace: namespace where the Kubernetes secret holding the auth token resides
      - openstorage.io/auth-secret-name: name of the Kubernetes secret which holds the auth token
- spec:
  - schedulePolicyName: with the name of the schedule policy you defined in the steps above
  - suspend: with a boolean value specifying if the schedule should be in a suspended state
  - preExecRule: with the name of a rule to run before taking the snapshot
  - postExecRule: with the name of a rule to run after taking the snapshot
  - reclaimPolicy: with retain or delete, indicating what Portworx should do with the snapshots that were created using the schedule. Specifying the delete value deletes the snapshots created by this schedule when the schedule is deleted.
  - template.spec.persistentVolumeClaimName: with the PVC you want this policy to apply to
apiVersion: stork.libopenstorage.org/v1alpha1
kind: VolumeSnapshotSchedule
metadata:
  name: mysql-snapshot-schedule
  namespace: mysql
  annotations:
    portworx/snapshot-type: cloud
    portworx/cloud-cred-id: <cred_id>
    stork.libopenstorage.org/snapshot-restore-namespaces: otherNamespace
    # Add the below annotations when PX-Security is enabled.
    #openstorage.io/auth-secret-namespace: <secret-namespace>
    #openstorage.io/auth-secret-name: <secret-name>
spec:
  schedulePolicyName: testpolicy
  suspend: false
  reclaimPolicy: Delete
  preExecRule: testRule
  postExecRule: otherTestRule
  template:
    spec:
      persistentVolumeClaimName: mysql-data
Apply the spec:

Kubernetes:

kubectl apply -f volume-snapshot-schedule.yaml

OpenShift:

oc apply -f volume-snapshot-schedule.yaml
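Once applied, you can confirm that the schedule exists in the target namespace; the mysql namespace below matches the example spec above:

storkctl get volumesnapshotschedules -n mysql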
Create a storage class
Use a StorageClass to apply your schedule policy to all PVCs using that StorageClass.
Create a file called sc-with-snap-schedule.yaml with the following content:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-sc-with-snap-schedules
provisioner: pxd.portworx.com
parameters:
  # Add the below parameters when PX-Security is enabled.
  # openstorage.io/auth-secret-namespace: <secret-namespace>
  # openstorage.io/auth-secret-name: <secret-name>
  repl: "2"
  snapshotschedule.stork.libopenstorage.org/default-schedule: |
    schedulePolicyName: daily
    annotations:
      portworx/snapshot-type: local
  snapshotschedule.stork.libopenstorage.org/weekly-schedule: |
    schedulePolicyName: weekly
    annotations:
      portworx/snapshot-type: cloud
      portworx/cloud-cred-id: <credential-uuid>

This example references two schedules:

- The default-schedule backs up volumes to the local Portworx cluster daily.
- The weekly-schedule backs up volumes to cloud storage every week.
Apply the spec:

Kubernetes:

kubectl apply -f sc-with-snap-schedule.yaml

OpenShift:

oc apply -f sc-with-snap-schedule.yaml
Specifying the cloud credential to use
Specifying the portworx/cloud-cred-id annotation is required only if you have more than one cloud credential configured. If you have a single credential, it is used by default.
Let's list all the available cloud credentials.
Kubernetes:

PX_POD=$(kubectl get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n <px-namespace> -- /opt/pwx/bin/pxctl credentials list

OpenShift:

PX_POD=$(oc get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
oc exec $PX_POD -n <px-namespace> -- /opt/pwx/bin/pxctl credentials list
The above command lists the credentials used to authenticate with and access the objectstore. Pick the one you want to use for this snapshot schedule and specify it in the portworx/cloud-cred-id annotation in the StorageClass.
Next, let's apply our newly created storage class:
Kubernetes:

kubectl apply -f sc-with-snap-schedule.yaml
storageclass.storage.k8s.io/px-sc-with-snap-schedules created

OpenShift:

oc apply -f sc-with-snap-schedule.yaml
storageclass.storage.k8s.io/px-sc-with-snap-schedules created
Create a PVC
After we've created the new StorageClass, we can refer to it by name in our PVCs like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-snap-schedules-demo
  annotations:
    volume.beta.kubernetes.io/storage-class: px-sc-with-snap-schedules
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Paste the listing from above into a file named pvc-snap-schedules-demo.yaml and run:
Kubernetes:

kubectl create -f pvc-snap-schedules-demo.yaml
persistentvolumeclaim/pvc-snap-schedules-demo created

OpenShift:

oc create -f pvc-snap-schedules-demo.yaml
persistentvolumeclaim/pvc-snap-schedules-demo created
Let's see our PVC:
Kubernetes:

kubectl get pvc
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
pvc-snap-schedules-demo   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-080027ee1df7   2Gi        RWO            px-sc-with-snap-schedules   14s

OpenShift:

oc get pvc
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
pvc-snap-schedules-demo   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-080027ee1df7   2Gi        RWO            px-sc-with-snap-schedules   14s
The above output shows that a volume named pvc-xxxxxxxx-xxxx-xxxx-xxxx-080027ee1df7 was automatically created and is now bound to our PVC.
We're all set!
Checking snapshots
Verifying snapshot schedules
First, let's verify that the snapshot schedules were created correctly.
storkctl get volumesnapshotschedules
NAME                                       PVC                       POLICYNAME   PRE-EXEC-RULE   POST-EXEC-RULE   RECLAIM-POLICY   SUSPEND   LAST-SUCCESS-TIME
pvc-snap-schedules-demo-default-schedule   pvc-snap-schedules-demo   daily                                        Retain           false
pvc-snap-schedules-demo-weekly-schedule    pvc-snap-schedules-demo   weekly                                       Retain           false
Here we can see 2 snapshot schedules, one daily and one weekly.
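Since snapshot schedules are regular Kubernetes custom resources, you can also inspect an individual schedule, including its status and last trigger time, with kubectl describe (the object name below is taken from the listing above):

kubectl describe volumesnapshotschedules.stork.libopenstorage.org pvc-snap-schedules-demo-default-schedule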
Verifying snapshots
Now that we've put everything in place, we want to verify that our cloudsnaps are being created.
Using storkctl
You can use storkctl to make sure that the snapshots are created by running:
storkctl get volumesnapshots
NAME PVC STATUS CREATED COMPLETED TYPE
pvc-snap-schedules-demo-default-schedule-interval-2019-03-27-015546 pvc-snap-schedules-demo Ready 26 Mar 19 21:55 EDT 26 Mar 19 21:55 EDT local
pvc-snap-schedules-demo-weekly-schedule-interval-2019-03-27-015546 pvc-snap-schedules-demo Ready 26 Mar 19 21:55 EDT 26 Mar 19 21:55 EDT cloud
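To confirm the cloud snapshots from the Portworx side as well, you can list them with pxctl, reusing the same PX_POD pattern shown earlier on this page (a sketch; run it against any Portworx pod):

PX_POD=$(kubectl get pods -l name=portworx -n <px-namespace> -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n <px-namespace> -- /opt/pwx/bin/pxctl cloudsnap list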