
Offline Pod support

If a gateway loses network connectivity, the Pods assigned to it remain running. If the gateway reboots during the outage, it restarts any Pods that were running before the reboot.

Note: Because Pelion Edge supports offline gateways, a Pod's status remains Running even when the gateway loses its internet connection and appears as NotReady in the output of kubectl get nodes.

For offline operation, ensure:

  • Containers have imagePullPolicy: IfNotPresent.

  • Manually created Pods have the following tolerations:

    "tolerations": [
        {
            "effect": "NoExecute",
            "key": "node.kubernetes.io/not-ready",
            "operator": "Exists"
        },
        {
            "effect": "NoExecute",
            "key": "node.kubernetes.io/unreachable",
            "operator": "Exists"
        }
    ],
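
    For reference, a minimal Pod manifest satisfying both requirements might look like this (the Pod name offline-demo and the image are placeholders, not part of the Pelion Edge documentation):

    {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": "offline-demo"
        },
        "spec": {
            "containers": [
                {
                    "name": "app",
                    "image": "nginx:1.25",
                    "imagePullPolicy": "IfNotPresent"
                }
            ],
            "tolerations": [
                {
                    "effect": "NoExecute",
                    "key": "node.kubernetes.io/not-ready",
                    "operator": "Exists"
                },
                {
                    "effect": "NoExecute",
                    "key": "node.kubernetes.io/unreachable",
                    "operator": "Exists"
                }
            ]
        }
    }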
    

Note: Container images belonging to Pods stored in the offline cache are excluded from garbage collection, so that offline operation can work correctly.

Pods created by DaemonSets automatically receive these tolerations, among others.

Note: To support this, the kubelet stores to disk a copy of every Pod assigned to it, along with any resources those Pods need, such as ConfigMaps, Secrets, and PersistentVolumes.

Disabling offline Pod support with snapcraft

Offline Pod support is enabled by default. If you are using snapcraft, you can disable offline mode for Kubelet by using this command on the gateway:

snap set pelion-edge kubelet.offline-mode=false

To enable offline mode again, use this command on the gateway:

snap set pelion-edge kubelet.offline-mode=true
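
To confirm the current setting, you can read the option back with snap get (standard snapd behavior, shown here as an illustration):

    # Read back the current value of the offline-mode option
    snap get pelion-edge kubelet.offline-mode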

Troubleshooting

To determine whether a container is crashing, look for the CrashLoopBackOff message:

  1. Run kubectl describe pod on the Pod.

  2. Check the state of the container.

    Note: As soon as the container crashes, it automatically restarts and changes state. You can prevent the Pod from restarting by setting restartPolicy: Never in the Pod specification, but this also suppresses the CrashLoopBackOff error.

Don’t compare the status printed by kubectl get pod with the Status field from kubectl describe pod, because they don’t necessarily match. A Pod can report a Running status even while one of its containers is crashing.
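
As a sketch, the inspection steps above might look like this (the Pod name my-app is a placeholder):

    # 1. Describe the Pod to see per-container state
    kubectl describe pod my-app

    # 2. In the output, check the container's last state and restart count, for example:
    #    Last State:     Terminated
    #      Reason:       Error
    #      Exit Code:    1
    #    Restart Count:  5
    # A rising restart count, together with CrashLoopBackOff in the STATUS
    # column of `kubectl get pod my-app`, indicates the container is crashing.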