Patch 07

This section describes the enhancements and fixes in patch 07 and provides the installation procedures.

Patch details for wsl-v1231-x86-patch-07

This patch includes the following enhancements and fixes:

Enhancements
  • An admin can now control whether the Spark context is created by default in Jupyter GPU environments.
  • The pyarrow package and a compatible pandas package are now included in Jupyter GPU environments.
  • The version of the sparkmagics package that is included with the Jupyter GPU environments is upgraded.

Fixed defects

Issue with Jupyter instances running inside Watson Studio Local
Jupyter instances running inside Watson Studio Local were discoverable by piecing together the user ID and other information. Any authenticated user could then interact with that Jupyter instance by following the constructed URL.
Issue with the api/v1/usermgmt/v1/usermgmt/usersbystatus endpoint
The api/v1/usermgmt/v1/usermgmt/usersbystatus endpoint was returning the password hash value.

Prerequisites

WSL 1.2.3.1 x86 patch 06 must be installed before you apply patch 07. To download patch 07, go to Fix Central and select wsl-x86-v1231-patch07. Previous patches are also available in Fix Central.
Note: Patch 07, part 04 (wsl-x86-v1231-patch07-part04.tar.gz) includes updated backup and restore tools. If you want those tools and you previously installed patch 04, you must replace the patch 04 tools with the updated backup and restore tools that are included in patch 07.

Patch files

The patch contains the following files:

  • wsl-x86-v1231-patch07-part01.tar.gz
  • wsl-x86-v1231-patch07-part02.tar.gz
  • wsl-x86-v1231-patch07-part03.tar.gz
  • wsl-x86-v1231-patch07-part04.tar.gz
  • wsl-x86-v1231-patch07-part05.tar.gz
After you extract the files (a sketch that extracts all five parts follows this list), the following files are available under a new directory named wsl-x86-v1231-patch07:
  • jupyter-d8a2rls2x-shell.v1.0.357-x86_64-20190911.183150.tgz
  • jupyter-gpu-py36.v1.0.6-x86_64-20190913.171937.tgz
  • jupyter-d8a3rls2x-shell.v1.0.349-x86_64-20190911.184359.tgz
  • usermgmt.v3.13.1603-x86_64-20190916.175421.tgz
  • dsx-local-proxy.v3.13.231-x86_64-20191007.143033.tgz
  • wsl-backup-restore.tar.gz
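The following is a minimal sketch for extracting all five parts in one pass, assuming that the parts were downloaded into the current directory. It is an illustration only, not part of the documented procedure:
    # Extract all five patch parts from the directory where you downloaded them (bash).
    for part in wsl-x86-v1231-patch07-part0{1..5}.tar.gz; do
      tar -xzvf "$part"
    done
    # The files listed above should now be present under the new directory.
    ls wsl-x86-v1231-patch07/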

Pre-installation

If you're applying the patch to a Watson Studio Local installation that runs on a pre-existing Kubernetes cluster, such as OpenShift, you must perform these tasks:
  1. Identify the docker registry that is used by the cluster. The docker commands in this document use the registry
    idp-registry.sysibm-adm.svc.cluster.local:31006
    If your cluster uses a different docker registry, change the docker commands in the following procedures accordingly.
  2. Authenticate to kubectl.
  3. Authenticate to docker.
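For example, on an OpenShift cluster the authentication steps might look like the following sketch. The API server URL and token are placeholders for your own environment, and the registry host is the default that is used throughout this document:
    # Hypothetical values: replace the server URL and token with your own.
    oc login https://api.example.cluster:6443 --token=<your-token>
    # oc login updates the kubeconfig, so kubectl uses the same credentials.
    REGISTRY=idp-registry.sysibm-adm.svc.cluster.local:31006
    # Log docker in to the cluster registry with the current OpenShift user and token.
    docker login -u "$(oc whoami)" -p "$(oc whoami -t)" "$REGISTRY"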
The following are general pre-installation tasks, applicable in all cases:
  1. Back up the config map by running
    kubectl get configmap -n dsx runtimes-def-configmap -o yaml > configmap.backup.patch07.yaml
    1. Run
      kubectl get configmap -n dsx runtimes-def-configmap -o yaml | grep jupyter-d8a2rls2x-shell
      and note the value of the image key.
    2. Run
      kubectl get configmap -n dsx runtimes-def-configmap -o yaml | grep jupyter-d8a3rls2x-shell
      and note the value of the image key.
    3. Run
      kubectl get configmap -n dsx runtimes-def-configmap -o yaml | grep jupyter-gpu-py36
      and note the value of the image key.
  2. If you want to be able to roll back the patch, run the following kubectl commands and note the value of the image key for each deployment. A sketch that records these values into a file follows this list.
    1. Run
      kubectl get deploy -n dsx usermgmt -o yaml | grep image:
      and note the value of the image key.
    2. Run
      kubectl get deploy -n dsx dsx-scripted-ml-python2 -o yaml | grep image:
      and note the value of the image key.
    3. Run
      kubectl get deploy -n dsx dsx-scripted-ml-python3 -o yaml | grep image:
      and note the value of the image key.
    4. Run
      kubectl get deploy -n dsx zen-scripted-data-python2 -o yaml | grep image:
      and note the value of the image key.
    5. Run
      kubectl get deploy -n dsx jupyter-notebooks-nbviewer -o yaml | grep image:
      and note the value of the image key.
    6. Run
      kubectl get deploy -n dsx jupyter-notebooks-nbviewer-dev -o yaml | grep image:
      and note the value of the image key.
  3. This section applies to standalone installations only. Back up the image management files so that they can be restored if you roll back the patch:
    1. Run
      pod=`kubectl get po -n dsx --no-headers | grep dsx-core | grep 1/1 | grep Running | head -1 |  awk '{print($1)}'`;  kubectl exec -itn dsx ${pod} sh
      to execute into a running dsx-core pod.
    2. Run
      cd /user-home/_global_/.custom-images/
      to change to the image management directory.
    3. Run
      cp -ar builtin-metadata builtin-metadata-patch07-backup
      to back up the builtin-metadata directory.
    4. Run
      cp -ar metadata metadata-patch07-backup
      to back up the metadata directory.
    5. Run
      exit
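The following is a minimal sketch that records the configmap and deployment image values from steps 1 and 2 into a single notes file so that they are easy to find if you roll back the patch. The file name rollback-images-patch07.txt is only an example:
    # Record the current image values before you apply the patch.
    OUT=rollback-images-patch07.txt
    kubectl get configmap -n dsx runtimes-def-configmap -o yaml \
      | grep -E 'jupyter-d8a2rls2x-shell|jupyter-d8a3rls2x-shell|jupyter-gpu-py36' > "$OUT"
    for d in usermgmt dsx-scripted-ml-python2 dsx-scripted-ml-python3 \
             zen-scripted-data-python2 jupyter-notebooks-nbviewer jupyter-notebooks-nbviewer-dev; do
      echo "deployment: $d" >> "$OUT"
      kubectl get deploy -n dsx "$d" -o yaml | grep 'image:' >> "$OUT"
    done
    cat "$OUT"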

Installing the patch

To install the Python 2.7 image patch

  1. Extract the Python 2.7 image archive by running
    tar -xzvf jupyter-d8a2rls2x-shell.v1.0.357-x86_64-20190911.183150.tgz
    A directory named jupyter-d8a2rls2x-shell-artifact is created and contains the image files.
  2. Run the command
    cd jupyter-d8a2rls2x-shell-artifact
  3. Run the command
    docker load < jupyter-d8a2rls2x-shell_v1.0.357-x86_64.tar.gz
    to load the image.
  4. Run
    docker tag 4ad43ba2b65b idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.357-x86_64_v1231-patch07
    to tag the image.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.357-x86_64_v1231-patch07
    to push the image to the docker registry.
  6. ssh to each compute node in the cluster and run
    docker pull idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.357-x86_64_v1231-patch07
    to download the image to the node. This step can take a long time. A sketch that loops over all compute nodes follows this procedure.
  7. Run
    kubectl -n dsx edit configmap runtimes-def-configmap
    1. Look for the following sections:
      • jupyter-server.json
      • dsx-scripted-ml-python2-server.json
      • python27-script-as-a-service-server.json
    2. In each of the sections, change the image key value to
      idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.357-x86_64_v1231-patch07
    3. Locate the jupyter-server.json section.
    4. Add the APP_ENV_OAUTH_KEYS_ENDPOINT environment variable to the env section, as shown in the following snippet. Note the `,` that is required before the new `{`.
      "env": [
          {
           "name": "AUTOSTART_JUPYTER_SC",
           "value": "autoStartJupyterSC",
           "source": "GlobalConfig"
          },
          {
           "name": "APP_ENV_OAUTH_KEYS_ENDPOINT",
           "value": "https://internal-nginx-svc:12443/auth/jwtpublic"
          }
      	
      ]
    5. In the probes section, change the value for path from tree to ax/monitor in both the liveness and readiness probes, as shown in the following snippet:
      "probes": {
        "liveness": {
          "path": "/dsx-jupyter /${userNS}/${projectId}/ax/monitor",
          .
          .
          .
        },
        "readiness": {
          "path": "/dsx-jupyter /${userNS}/${projectId}/ax/monitor",
          .
          .
          .
        }
       },
      
  8. Run
    kubectl edit deploy -n dsx dsx-scripted-ml-python2
  9. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.357-x86_64_v1231-patch07
  10. Run
    kubectl edit deploy -n dsx zen-scripted-data-python2
  11. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.357-x86_64_v1231-patch07
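If the cluster has several compute nodes, step 6 can be scripted. The following is a minimal sketch that assumes password-less ssh access as root to each node; the NODES list is a placeholder that you must replace with your own host names:
    # Hypothetical node list: replace with your compute node host names or IP addresses.
    NODES="compute1.example.com compute2.example.com compute3.example.com"
    IMAGE=idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.357-x86_64_v1231-patch07
    for node in $NODES; do
      echo "Pulling image on $node ..."
      ssh "root@$node" "docker pull $IMAGE"   # this can take a long time per node
    done
The same loop applies to the Python 3.5 and GPU image pulls in the following procedures; only the IMAGE value changes.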

To install the Python 3.5 image patch

  1. Extract the Python 3.5 image archive by running
    tar -xzvf jupyter-d8a3rls2x-shell.v1.0.349-x86_64-20190911.184359.tgz
    A directory named jupyter-d8a3rls2x-shell-artifact is created and contains the image files.
  2. Run the command
    cd jupyter-d8a3rls2x-shell-artifact
  3. Run the command
    docker load < jupyter-d8a3rls2x-shell_v1.0.349-x86_64.tar.gz
    to load the image.
  4. Run
    docker tag 8baaeeb70b01 idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.349-x86_64_v1231-patch07
    to tag the image.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.349-x86_64_v1231-patch07
    to push the image to the docker registry.
  6. ssh to each compute node in the cluster and run
    docker pull idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.349-x86_64_v1231-patch07
    to download the image to the node. This step can take a long time.
  7. Run
    kubectl -n dsx edit configmap runtimes-def-configmap
    1. Look for the following sections:
      • jupyter-py35-server.json
      • dsx-scripted-ml-python3-server.json
      • python35-script-as-a-service-server.json
      • sshd-server.json
    2. In each of the sections, change the image key value to
      idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.349-x86_64_v1231-patch07
    3. Locate the jupyter-py35-server.json section.
    4. Add the APP_ENV_OAUTH_KEYS_ENDPOINT environment variable to the env section, as shown in the following snippet. Note the `,` that is required before the new `{`.
      "env": [
          {
           "name": "AUTOSTART_JUPYTER_SC",
           "value": "autoStartJupyterSC",
           "source": "GlobalConfig"
          },
          {
           "name": "APP_ENV_OAUTH_KEYS_ENDPOINT",
           "value": "https://internal-nginx-svc:12443/auth/jwtpublic"
          }
      
      ]
    5. In the probes section, change the value for path from tree to ax/monitor in both the liveness and readiness probes, as shown in the following snippet:
      "probes": {
        "liveness": {
          "path": "/dsx-jupyter-py35/${userNS}/${projectId}/ax/monitor",
          .
          .
          .
        },
        "readiness": {
         "path": "/dsx-jupyter-py35/${userNS}/${projectId}/ax/monitor",
         .
         .
         .
        }
      },
  8. Run
    kubectl edit deploy -n dsx dsx-scripted-ml-python3
  9. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.349-x86_64_v1231-patch07
  10. Run
    kubectl edit deploy -n dsx jupyter-notebooks-nbviewer
  11. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.349-x86_64_v1231-patch07
  12. Run
    kubectl edit deploy -n dsx jupyter-notebooks-nbviewer-dev
  13. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.349-x86_64_v1231-patch07
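After you finish the edits for an image, you can confirm that every reference points at the patched tag. The following is a minimal sketch that uses the Python 3.5 image as an example; each image value should end with v1.0.349-x86_64_v1231-patch07:
    # Check the configmap references to the Python 3.5 image.
    kubectl get configmap -n dsx runtimes-def-configmap -o yaml | grep jupyter-d8a3rls2x-shell
    # Check the deployments that were edited in this procedure.
    for d in dsx-scripted-ml-python3 jupyter-notebooks-nbviewer jupyter-notebooks-nbviewer-dev; do
      echo "deployment: $d"
      kubectl get deploy -n dsx "$d" -o yaml | grep 'image:'
    done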

To install the GPU image patch

  1. Extract the GPU image archive by running
    tar -xzvf jupyter-gpu-py36.v1.0.6-x86_64-20190913.171937.tgz
    A directory named jupyter-gpu-py36-artifact is created and contains the image files.
  2. Run the command
    cd jupyter-gpu-py36-artifact
  3. Run the command
    docker load < jupyter-gpu-py36_v1.0.6-x86_64.tar.gz
    to load the image.
  4. Run
    docker tag 0a37679dc4bb idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-gpu-py36:v1.0.6-x86_64_v1231-patch07
    to tag the image. To confirm the loaded image ID first, see the sketch after this procedure.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-gpu-py36:v1.0.6-x86_64_v1231-patch07
    to push the image to the docker registry.
  6. ssh to each compute node in the cluster and run
    docker pull idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-gpu-py36:v1.0.6-x86_64_v1231-patch07
    to download the image to the node. This step can take a long time.
  7. Run
    kubectl -n dsx edit configmap runtimes-def-configmap
    1. Look for the following sections:
      • jupyter-gpu-py35-server.json
      • dsx-scripted-ml-gpu-python3-server.json
    2. In each of the sections, change the image key value to
      idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-gpu-py36:v1.0.6-x86_64_v1231-patch07
    3. Locate the jupyter-gpu-py35-server.json section.
    4. Add the following env section, containing the AUTOSTART_JUPYTER_SC and APP_ENV_OAUTH_KEYS_ENDPOINT environment variables, after the resources section as shown in the following snippet. Note the `,` that is required after the resources section.
      "resources": {
             .
             .
             .
             "duration": {
             "value": -1,
             "units": "unix"
             }
      },
      "env": [
          {
           "name": "AUTOSTART_JUPYTER_SC",
           "value": "autoStartJupyterSC",
           "source": "GlobalConfig"
          },
          {
           "name": "APP_ENV_OAUTH_KEYS_ENDPOINT",
           "value": "https://internal-nginx-svc:12443/auth/jwtpublic"
          }
      
      ]
    5. In the probes section, change the value for path from tree to ax/monitor in both the liveness and readiness probes, as shown in the following snippet:
      "probes": {
        "liveness": {
          "path": "/dsx-jupyter-gpu-py35/${userNS}/${projectId}/ax/monitor",
          .
          .
          .
        },
        "readiness": {
          "path": "/dsx-jupyter-gpu-py35/${userNS}/${projectId}/ax/monitor",
          .
          .
          .
        }
      },
    6. Locate the dsx-scripted-ml-gpu-python3-server.json section.
    7. Add the following env section, containing the AUTOSTART_JUPYTER_SC environment variable, after the resources section as shown in the following snippet. Note the `,` that is required after the resources section.
      "resources": {
             .
             .
             .
             "duration": {
             "value": -1,
             "units": "unix"
             }
      },
      "env": [
          {
           "name": "AUTOSTART_JUPYTER_SC",
           "value": "autoStartJupyterSC",
           "source": "GlobalConfig"
          }
      ]
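The docker tag commands in this patch reference fixed image IDs (for example, 0a37679dc4bb for the GPU image). If you prefer to confirm the ID of the image that you just loaded instead of relying on the hard-coded value, a minimal sketch:
    # Show the ID of the loaded GPU image; use that ID in the docker tag command.
    docker images --format '{{.ID}}  {{.Repository}}:{{.Tag}}' | grep jupyter-gpu-py36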

To install the usermgmt patch

  1. Extract the usermgmt image archive by running
    tar -xzvf usermgmt.v3.13.1603-x86_64-20190916.175421.tgz
  2. Run the command
    cd usermgmt-artifact
  3. Run the command
    docker load < privatecloud-usermgmt_v3.13.1603-x86_64.tar.gz
    to load the image.
  4. Run
    docker tag 5aafca6fee0f idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-usermgmt:v3.13.1603-x86_64_v1231-patch07
    to tag the image.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-usermgmt:v3.13.1603-x86_64_v1231-patch07
    to push the image to the docker registry.
  6. Run
    kubectl -n dsx edit deploy usermgmt
  7. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-usermgmt:v3.13.1603-x86_64_v1231-patch07

To install the ibm-nginx image patch

  1. Extract the ibm-nginx image archive by running
    tar -xzvf dsx-local-proxy.v3.13.231-x86_64-20191007.143033.tgz
  2. Run the command
    cd dsx-local-proxy-artifact
  3. Run the command
    docker load < privatecloud-nginx-repo_v3.13.231-x86_64.tar.gz
    to load the image.
  4. Run
    docker tag 22743645596d idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-nginx-repo:v3.13.231-x86_64-patch07
    to tag the image.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-nginx-repo:v3.13.231-x86_64-patch07
    to push the image to the docker registry.
  6. Run
    kubectl edit deploy -n dsx ibm-nginx
  7. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-nginx-repo:v3.13.231-x86_64-patch07
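After you change the image value in the usermgmt and ibm-nginx deployments, you can confirm that the new pods roll out cleanly. The following is a minimal sketch that uses standard kubectl commands and is a check only, not part of the documented procedure:
    # Wait for the patched deployments to finish rolling out.
    kubectl rollout status deploy/usermgmt -n dsx
    kubectl rollout status deploy/ibm-nginx -n dsx
    # Confirm the images that the deployments now reference.
    kubectl get deploy -n dsx usermgmt ibm-nginx -o wide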

To copy backup and restore tools

  1. Run
    cp wsl-backup-restore.tar.gz /wdp/utils/; cd /wdp/utils; tar -xvf wsl-backup-restore.tar.gz
    to get the scripts for backing up and restoring your data.
  2. Follow the backup and restore procedures for information about using the tools.

Post-installation

Update image management

This section applies to standalone installations only. Ensure that you followed the pre-installation steps to back up the image management files before you run the following commands.
  1. Run
    kubectl get pods -n dsx | grep imagemgmt | grep -v Completed
    to get the image management pods.
  2. Run
    kubectl exec -it -n dsx <podname> sh 
    to execute into the pod. You can pick any of the three pods that will be running. A sketch that picks a pod automatically follows this procedure.
  3. Run
    cd /user-home/_global_/.custom-images/builtin-metadata
  4. Run
    rm *
  5. Run
    cd /user-home/_global_/.custom-images/metadata
  6. Run
    rm *
  7. Run
    cd /scripts
  8. Run
    retag_images.sh
    node ./builtin-image-info.js
    to replace the images in image management with the latest images that are provided in the patch.
  9. (Optional) To restore custom images, run the following commands:
    Note: Old custom images could be based on vulnerable images. It is recommended that you create newer custom images based on the latest environment images.
    1. Run
      cp -ar /user-home/_global_/.custom-images/metadata-patch07-backup/* /user-home/_global_/.custom-images/metadata/
    2. Run
      cp -ar /user-home/_global_/.custom-images/builtin-metadata-patch07-backup/* /user-home/_global_/.custom-images/builtin-metadata/
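The following is a minimal sketch for picking one of the running imagemgmt pods automatically, following the same pattern that the Pre-installation section uses for the dsx-core pod. The variable name pod is only an example:
    # Pick the first running imagemgmt pod and exec into it.
    pod=$(kubectl get po -n dsx --no-headers | grep imagemgmt | grep -v Completed | grep Running | head -1 | awk '{print $1}')
    kubectl exec -it -n dsx "$pod" sh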
Restart all Jupyter GPU user environments
  1. Run
    kubectl get deployment -n dsx -l type=jupyter-gpu-py35
    to view deployments with the old GPU image.
  2. Run
    kubectl delete deployment -n dsx -l type=jupyter-gpu-py35
    to delete deployments running with the old GPU image.
  3. Rebuild all custom images that were built with the GPU image.
Restart all Jupyter 2.7 user environments
  1. Run
    kubectl get deployment -n dsx -l type=jupyter
    to view any deployments with the old Jupyter 2.7 image.
  2. Run
    kubectl delete deployment -n dsx -l type=jupyter
    to delete any deployments running with the old Jupyter 2.7 image.
  3. Rebuild all custom images that were built with the Jupyter 2.7 image.
Restart all Jupyter 3.5 user environments
  1. Run
    kubectl get deployment -n dsx -l type=jupyter-py35
    to view any deployments with the old Jupyter 3.5 image.
  2. Run
    kubectl delete deployment -n dsx -l type=jupyter-py35
    to delete any deployments running with the old Jupyter 3.5 image.
  3. Rebuild all custom images that were built with the Jupyter 3.5 image.
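If you want to restart all three runtime types in one pass, the following is a minimal sketch that loops over the runtime labels used in the procedures above. Review the deployments that are listed before you run the delete loop, and rebuild custom images afterward as described:
    # List the user runtime deployments for each patched image type.
    for t in jupyter-gpu-py35 jupyter jupyter-py35; do
      echo "Deployments with type=$t:"
      kubectl get deployment -n dsx -l "type=$t"
    done
    # After reviewing the output, delete them so that new pods start with the patched images.
    for t in jupyter-gpu-py35 jupyter jupyter-py35; do
      kubectl delete deployment -n dsx -l "type=$t"
    done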

Rolling back the patch

Roll back the patch
  1. Run
    kubectl edit configmaps -n dsx runtimes-def-configmap -o yaml
    and then look for the image keys that contain
    jupyter-d8a2rls2x-shell
    Change the keys to the value noted in the Pre-installation section, step 1.a.
  2. Run
    kubectl edit configmaps -n dsx runtimes-def-configmap -o yaml
    and then look for the image keys that contain
    jupyter-d8a3rls2x-shell
    Change the keys to the value noted in the Pre-installation section, step 1.b.
  3. Run
    kubectl edit configmaps -n dsx runtimes-def-configmap -o yaml
    and then look for the image keys that contain
    jupyter-gpu
    Change the keys to the value noted in the Pre-installation section, step 1.c.
  4. Run
    kubectl edit deploy -n dsx usermgmt
    and then look for the image key. Change the key to the value noted in the Pre-installation section, step 2.a.
  5. Run
    kubectl edit deploy -n dsx dsx-scripted-ml-python2
    and then look for the image key. Change the key to the value noted in the Pre-installation section, step 2.b.
  6. Run
    kubectl edit deploy -n dsx dsx-scripted-ml-python3
    and then look for the image key. Change the key to the value noted in the Pre-installation section, step 2.c.
  7. Run
    kubectl edit deploy -n dsx zen-scripted-data-python2
    and then look for the image key. Change the key to the value noted in the Pre-installation section, step 2.d.
  8. Run
    kubectl edit deploy -n dsx jupyter-notebooks-nbviewer
    and then look for the image key, and change it to the value noted in the Pre-installation section, step 2.e.
  9. Run
    kubectl edit deploy -n dsx jupyter-notebooks-nbviewer-dev
    and then look for the image key, and change it to the value noted in the Pre-installation section, step 2.f.
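As an alternative to editing each deployment interactively, you can set the image value directly with kubectl set image. The following is a sketch only: OLD_IMAGE stands for the value that you noted in the Pre-installation section, and the container name is read from the deployment because it is not documented here:
    # Example for the usermgmt deployment; repeat the pattern for the other deployments in this list.
    OLD_IMAGE="<image value noted in the Pre-installation section>"
    CONTAINER=$(kubectl get deploy -n dsx usermgmt -o jsonpath='{.spec.template.spec.containers[0].name}')
    kubectl set image deploy/usermgmt -n dsx "$CONTAINER=$OLD_IMAGE"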
This section applies to standalone installations only. To restore the image management files that you backed up during pre-installation:
  1. Run
    kubectl get pods -n dsx | grep imagemgmt | grep -v Completed
    to get the image management pods.
  2. Run
    kubectl exec -it -n dsx <podname> sh
    to execute into the pod. You can pick any of the three pods that will be running.
  3. Run
    cd /user-home/_global_/.custom-images/
  4. Restore the builtin-metadata directory that was backed up in step 3.c of the Pre-installation section.
  5. Restore the metadata directory that was backed up in step 3.d of the Pre-installation section.
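The following is a minimal sketch of the restore commands, run inside the image management pod. It mirrors the clear-then-copy pattern that the post-installation steps use and illustrates one way to do the restore; it is not a documented procedure:
    # Inside the pod: restore the image management metadata from the patch 07 backups.
    cd /user-home/_global_/.custom-images/
    rm builtin-metadata/*
    cp -ar builtin-metadata-patch07-backup/* builtin-metadata/
    rm metadata/*
    cp -ar metadata-patch07-backup/* metadata/
    exit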