Patch 06

This topic provides installation procedures for patch 06 and describes the enhancements and fixes included in the patch.

Patch details for wsl-v1231-x86-patch-06

This patch includes the following enhancements and fixes:

Enhancements
  • The Github/Bitbucket integration now lets data scientists create projects from a specific branch in a repository.
  • An admin can now control whether a Spark Context is created by default in Jupyter 2.7 and 3.5 environments.
  • The pyarrow and the compatible pandas package are included in Jupyter 2.7 and 3.5 environments.
  • The version of the sparkmagics package included with the Jupyter 2.7 and 3.5 environments is upgraded.
  • TS001906969 - All collaborators in a project can view the jobs that are scheduled by the project admins or editors.

Fixed defects

TS002051550/TS002578020 – Error reported for certain Github/Bitbucket tokens
An "Invalid Access Token” error is returned when a Github/Bitbucket access token includes a “/” or other special characters.
Issue with updating the tag for a project release in Watson Machine Learning
Updating the tag for a project release that is created from a Github/Bitbucket repository causes your browser to hang.
TS002329694 – Graphviz package fix
The following error occurs when you use the graphviz package within a Jupyter notebook:
FileNotFoundError: [Errno 2] No such file or directory: 'dot'
ExecutableNotFound: failed to execute ['dot', '-Tpng'], make sure the Graphviz executables are on your systems' PATH
 
Issue with user-sensitive information logged for certain failed operations
When certain operations within a project fail, user-sensitive information is logged in the error message.
Credentials found in scripts within a docker image
One of the docker images includes scripts that have hardcoded credentials in the script files.
TS002491483 – Certain copy operations within a user’s pod are performed as root
The startup scripts of certain user pods run copy operations as root, which can lead to a security vulnerability.
TS002561905 – Error returned for certain user name formats when creating a token for Github/Bitbucket access
An authentication error is returned when the user name provided while creating a token for Github/Bitbucket access contains ".", "@", or "\".

Prerequisites

WSL 1.2.3.1 x86 patch01, patch02, patch03, and patch05 must all be installed. To download patch 06, go to Fix Central and select wsl-x86-v1231-patch06. Previous patches are also available in Fix Central.

Patch files

The patch contains the following files:

  • wsl-x86-v1231-patch06-part01.tar.gz
  • wsl-x86-v1231-patch06-part02.tar.gz
  • wsl-x86-v1231-patch06-part03.tar.gz
  • wsl-x86-v1231-patch06-part04.tar.gz
After you extract the files, the following files are available under a new directory named wsl-x86-v1231-patch06:
  • dsx-core.v3.13.1319-x86_64-20190814.220044.tgz
  • dsx-scripted-ml.v0.01.232-x86_64-20190618.013350.tgz
  • jupyter-d8a2rls2x-shell.v1.0.347-x86_64-20190807.211456.tgz
  • jupyter-d8a3rls2x-shell.v1.0.338-x86_64-20190807.211604.tgz
  • jupyter-gpu-py36.v1.0.4-x86_64-20190814.020206.tgz
  • spawner-go-api.v3.13.1039-x86_64-20190722.204345.tgz
  • usermgmt.v3.13.1598-x86_64-20190808.193735.tgz
  • wdp-dashboard-frontend.1.3.4-x86_64-20190807.230148.tgz
  • dsx-scripted-ml-job-v1231-patch06.yaml
  • dsx-scripted-ml-job-v1231-patch06-preexistingk8s.yaml
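
The part files can be extracted in one pass. The following is a minimal shell sketch that assumes all four part files are in the current working directory:

  # Extract every patch part, then list the resulting patch directory
  for f in wsl-x86-v1231-patch06-part0*.tar.gz; do
      tar -xzvf "$f"
  done
  ls wsl-x86-v1231-patch06/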

Pre-installation

If you're applying the patch to a Watson Studio Local installation that runs on a pre-existing Kubernetes cluster such as OpenShift, you must first perform these tasks:
  1. Identify the docker registry that is used by the cluster. The commands in this document use the registry
    idp-registry.sysibm-adm.svc.cluster.local:31006
    Change the docker commands in the following procedures to use the docker registry that is used by your cluster.
  2. Authenticate to kubectl.
  3. Authenticate to docker.
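
The following is a hedged sketch of tasks 2 and 3; the registry address shown is the default used throughout this document, so substitute your cluster's registry and credentials:

  # Authenticate docker against the registry (prompts for credentials)
  REGISTRY=idp-registry.sysibm-adm.svc.cluster.local:31006
  docker login "$REGISTRY"
  # Confirm that kubectl is authenticated against the intended cluster
  kubectl config current-context
  kubectl get nodes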
The following are general pre-installation tasks, applicable in all cases:
  1. Back up the config map and note the current Jupyter image values:
    kubectl get configmap -n dsx runtimes-def-configmap -o yaml > configmap.backup.patch06.yaml
    1. Run
      kubectl get configmap -n dsx runtimes-def-configmap -o yaml | grep jupyter-d8a2rls2x-shell
      and note the value of the image key.
    2. Run
      kubectl get configmap -n dsx runtimes-def-configmap -o yaml | grep jupyter-d8a3rls2x-shell
      and note the value of the image key.
  2. Note the current image values on your system by running the following kubectl commands (a consolidated sketch follows this list). You need these values if you decide to roll back the patch.
    1. Run
      kubectl get deploy -n dsx dsx-core -o yaml | grep image:
      and note the value of the image key.
    2. Run
      kubectl get deploy -n dsx spawner-api -o yaml | grep image:
      and note the value of the image key.
    3. Run
      kubectl get deploy -n dsx usermgmt -o yaml | grep image:
      and note the value of the image key.
    4. Run
      kubectl get deploy -n dsx dash-front-deploy -o yaml | grep image:
      and note the value of the image key.
    5. Run
      kubectl get deploy -n dsx dsx-scripted-ml-python2 -o yaml | grep image: 
      and note the value of the image key.
    6. Run
      kubectl get deploy -n dsx dsx-scripted-ml-python3 -o yaml | grep image: 
      and note the value of the image key.
    7. Run
      kubectl get deploy -n dsx zen-scripted-data-python2 -o yaml | grep image: 
      and note the value of the image key.
    8. Run
      kubectl get deploy -n dsx jupyter-notebooks-nbviewer -o yaml | grep image:
      and note the value of the image key.
    9. Run
      kubectl get deploy -n dsx jupyter-notebooks-nbviewer-dev -o yaml | grep image:
      and note the value of the image key.
  3. This section applies for standalone installations only. To back up the image management files:
    1. Run
      kubectl get pods -n dsx | grep imagemgmt | grep -v Completed
      to get the image management pods.
    2. Run
      kubectl exec -it -n dsx <podname> sh
      to execute into the pod. You can pick any of the three pods that will be running.
    3. Run
      cd /user-home/_global_/.custom-images/
    4. Back up the builtin-metadata directory by running
      cp -ar builtin-metadata builtin-metadata-patch06-backup
    5. Back up the metadata directory by running
      cp -ar metadata metadata-patch06-backup
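
The values noted in steps 1 and 2 can also be captured in one pass. The following is a minimal sketch that writes them to a single backup file; the deployment names are the ones listed in the steps above:

  # Record the current configmap and deployment image values for a possible rollback
  {
    kubectl get configmap -n dsx runtimes-def-configmap -o yaml | grep -E 'jupyter-d8a2rls2x-shell|jupyter-d8a3rls2x-shell'
    for d in dsx-core spawner-api usermgmt dash-front-deploy dsx-scripted-ml-python2 \
             dsx-scripted-ml-python3 zen-scripted-data-python2 jupyter-notebooks-nbviewer \
             jupyter-notebooks-nbviewer-dev; do
      echo "deployment: $d"
      kubectl get deploy -n dsx "$d" -o yaml | grep image:
    done
  } > pre-patch06-image-values.txt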

Installing patches

To install the dsx-core patch
  1. Extract the dsx-core image by running
    tar -xzvf dsx-core.v3.13.1319-x86_64-20190814.220044.tgz
    A directory that is called dsx-core-artifact is created and contains the patch files.
  2. Run the command
    cd dsx-core-artifact
  3. Run the command
    docker load < dsx-core_v3.13.1319-x86_64.tar.gz
    to load the image into docker.
  4. Run
    docker tag 88d127bc7643 idp-registry.sysibm-adm.svc.cluster.local:31006/dsx-core:v3.13.1319-x86_64_v1231-patch06
    to tag the image.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/dsx-core:v3.13.1319-x86_64_v1231-patch06
    to push the image to the docker registry.
  6. Run
    kubectl -n dsx edit deploy dsx-core
  7. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/dsx-core:v3.13.1319-x86_64_v1231-patch06
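
Steps 1 through 5 of this procedure can be run as a single sequence. The following is a minimal sketch that assumes the archive is in the current directory and that the default registry address applies; steps 6 and 7 remain interactive edits:

  tar -xzvf dsx-core.v3.13.1319-x86_64-20190814.220044.tgz
  cd dsx-core-artifact
  docker load < dsx-core_v3.13.1319-x86_64.tar.gz
  docker tag 88d127bc7643 idp-registry.sysibm-adm.svc.cluster.local:31006/dsx-core:v3.13.1319-x86_64_v1231-patch06
  docker push idp-registry.sysibm-adm.svc.cluster.local:31006/dsx-core:v3.13.1319-x86_64_v1231-patch06
  # Then run: kubectl -n dsx edit deploy dsx-core   (update the image key manually)
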
To install the dsx-scripted-ml image patch
  1. Extract the dsx-scripted-ml image by running
    tar -xzvf dsx-scripted-ml.v0.01.232-x86_64-20190618.013350.tgz
    A directory that is called dsx-scripted-ml-artifact is created and contains the patch files.
  2. Run the command
    cd dsx-scripted-ml-artifact
  3. Run the
    docker load < privatecloud-dsx-scripted-ml_v0.01.232-x86_64.tar.gz
    command to load the image.
  4. Run
    docker tag 214e98095b79 idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-dsx-scripted-ml:v0.01.232-x86_64_v1231-patch06
    to tag the image.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-dsx-scripted-ml:v0.01.232-x86_64_v1231-patch06
    to push the image to the docker registry.
  6. Use the command
    cd ..
    to go to the parent directory.
  7. Create a Kubernetes job by doing one of the following:
    1. If running on a standalone installation of Watson Studio, create a new job by running
      kubectl apply -f dsx-scripted-ml-job-v1231-patch06.yaml
    2. If running on a Watson Studio deployment on an existing Kubernetes platform like OpenShift, edit dsx-scripted-ml-job-v1231-patch06-preexistingk8s.yaml and do these tasks:
      1. Update the image attribute with the image that you pushed in step 5 of this procedure.
      2. Change the value of the namespace attribute to match the namespace in which Watson Studio is deployed.
      3. Create a new job by running
        kubectl apply -f dsx-scripted-ml-job-v1231-patch06-preexistingk8s.yaml
    3. Note: If you run into a "field is immutable" error while running
      kubectl apply -f dsx-scripted-ml-job-v1231-patch06.yaml
      run
      kubectl delete job -n dsx dsx-scripted-ml-patch-06
      to delete the existing job, and then run
      kubectl apply -f dsx-scripted-ml-job-v1231-patch06.yaml
  8. Wait until the job pod reaches a Completed state by using the query
    kubectl get pods -n dsx | grep dsx-scripted-ml-patch-06
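
If your kubectl version supports kubectl wait (1.11 or later), the following hedged alternative blocks until the job finishes instead of polling; the job name is the one used in the note in step 7:

  kubectl wait --for=condition=complete job/dsx-scripted-ml-patch-06 -n dsx --timeout=30m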
To install the Python 2.7 image patch
  1. Extract the Python 2.7 image by running
    tar -xzvf jupyter-d8a2rls2x-shell.v1.0.347-x86_64-20190807.211456.tgz
    A directory that is called jupyter-d8a2rls2x-shell-artifact is created and contains the patch files.
  2. Run the command
    cd jupyter-d8a2rls2x-shell-artifact
  3. Run the command
    docker load < jupyter-d8a2rls2x-shell_v1.0.347-x86_64.tar.gz
    to load the image.
  4. Run
    docker tag 3a0319ecdde6 idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06
    to tag the image.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06
    to push the image to the docker registry.
  6. Use ssh to connect to each compute node in the cluster and run
    docker pull idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06
    to download the image to the node. This step can take a long time. (A loop sketch for this step follows this procedure.)
  7. Run
    kubectl -n dsx edit configmap runtimes-def-configmap
    1. Look for the following sections:
      • jupyter-server.json
      • dsx-scripted-ml-python2-server.json
      • python27-script-as-a-service-server.json
    2. In each of the sections, change the image key value to
      idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06
    3. Add the following env section after the resources section. Note the `,` that is required after the closing brace of the resources section:
      "resources": {
        .
        .
        .
        "duration": {
          "value": -1,
          "units": "unix"
        }
      },
      "env": [
        {
          "name": "AUTOSTART_JUPYTER_SC",
          "value": "autoStartJupyterSC",
          "source": "GlobalConfig"
        }
      ]
      }
  8. Run
    kubectl edit deploy -n dsx dsx-scripted-ml-python2
  9. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06
  10. Run
    kubectl edit deploy -n dsx zen-scripted-data-python2
  11. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06
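
Step 6 of this procedure must be repeated on every compute node. The following is a hedged loop sketch; the node names (worker1, worker2, worker3) are placeholders, and passwordless ssh access as root is assumed:

  IMAGE=idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a2rls2x-shell:v1.0.347-x86_64_v1231-patch06
  for node in worker1 worker2 worker3; do
      ssh "root@$node" "docker pull $IMAGE"
  done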

To install the Python 3.5 image patch

  1. Extract the Python 3.5 image by running
    tar -xzvf jupyter-d8a3rls2x-shell.v1.0.338-x86_64-20190807.211604.tgz
    A directory that is called jupyter-d8a3rls2x-shell-artifact is created and contains the patch files.
  2. Run the command
    cd jupyter-d8a3rls2x-shell-artifact
  3. Run the command
    docker load < jupyter-d8a3rls2x-shell_v1.0.338-x86_64.tar.gz
    to load the image.
  4. Run
    docker tag 61bc2e8259e7 idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06
    to tag the image.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06
    to push the image to the docker registry.
  6. Use ssh to connect to each compute node in the cluster and run
    docker pull idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06
    to download the image to the node. This step can take a long time.
  7. Run
    kubectl -n dsx edit configmap runtimes-def-configmap
    1. Look for the following sections:
      • jupyter-py35-server.json
      • dsx-scripted-ml-python3-server.json
      • python35-script-as-a-service-server.json
      • sshd-server.json
    2. In each of the sections, change the image key value to
      idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06
    3. Add the following env section after the resources section. Note the `,` that is required after the closing brace of the resources section:
      "resources": {
        .
        .
        .
        "duration": {
          "value": -1,
          "units": "unix"
        }
      },
      "env": [
        {
          "name": "AUTOSTART_JUPYTER_SC",
          "value": "autoStartJupyterSC",
          "source": "GlobalConfig"
        }
      ]
      }
  8. Run
    kubectl edit deploy -n dsx dsx-scripted-ml-python3
  9. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06
  10. Run
    kubectl edit deploy -n dsx jupyter-notebooks-nbviewer
  11. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06
  12. Run
    kubectl edit deploy -n dsx jupyter-notebooks-nbviewer-dev
  13. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/jupyter-d8a3rls2x-shell:v1.0.338-x86_64_v1231-patch06
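
To confirm the edits before moving on, the following hedged check greps the configmap and the edited deployments for the new Jupyter 3.5 image; the deployment names are the ones edited in steps 8 through 13:

  kubectl get configmap -n dsx runtimes-def-configmap -o yaml | grep jupyter-d8a3rls2x-shell
  for d in dsx-scripted-ml-python3 jupyter-notebooks-nbviewer jupyter-notebooks-nbviewer-dev; do
      kubectl get deploy -n dsx "$d" -o yaml | grep image:
  done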

To install the spawner image patch

  1. Extract the spawner image by running
    tar -xzvf spawner-go-api.v3.13.1039-x86_64-20190722.204345.tgz
    A directory that is called spawner-go-api-artifact is created and contains the patch files.
  2. Run the command
    cd spawner-go-api-artifact
  3. Run the command
    docker load < privatecloud-spawner-api-k8s_v3.13.1039-x86_64.tar.gz
    to load the image into docker.
  4. Run the command
    docker images
    to get the unique ID of the image that was loaded into docker in the previous step (a filtering example follows this procedure).
  5. Run
    docker tag ae37b8429267 idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-spawner-api-k8s:v3.13.1039-x86_64_v1231-patch06
    to tag the image.
  6. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-spawner-api-k8s:v3.13.1039-x86_64_v1231-patch06
    to push the image to the docker registry.
  7. Run
    kubectl -n dsx edit deploy spawner-api
  8. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-spawner-api-k8s:v3.13.1039-x86_64_v1231-patch06
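
The docker images output in step 4 can be long. The following hedged example narrows it to the image loaded in step 3 so that its ID can be read from the IMAGE ID column:

  docker images | grep privatecloud-spawner-api-k8s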

To install the usermgmt patch

  1. Extract the usermgmt image by running
    tar -xzvf usermgmt.v3.13.1598-x86_64-20190808.193735.tgz
    A directory that is called usermgmt-artifact is created and contains the patch files.
  2. Run the command
    cd usermgmt-artifact
  3. Run the command
    docker load < privatecloud-usermgmt_v3.13.1598-x86_64.tar.gz
    to load the image.
  4. Run
    docker tag 5028f420c9e9 idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-usermgmt:v3.13.1598-x86_64_v1231-patch06
    to tag the image.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-usermgmt:v3.13.1598-x86_64_v1231-patch06
    to push the image to the docker registry.
  6. Run
    kubectl -n dsx edit deploy usermgmt
  7. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/privatecloud-usermgmt:v3.13.1598-x86_64_v1231-patch06
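
A quick, hedged way to verify that the deployment now references the patch image (this assumes the usermgmt deployment runs a single container):

  kubectl get deploy -n dsx usermgmt -o jsonpath='{.spec.template.spec.containers[0].image}'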

To install the wdp-dashboard-frontend patch

  1. Extract the dashboard frontend image by running
    tar -xzvf wdp-dashboard-frontend.1.3.4-x86_64-20190807.230148.tgz
    A directory that is called wdp-dashboard-frontend-artifact is created and contains the patch files.
  2. Run the command
    cd wdp-dashboard-frontend-artifact
  3. Run the command
    docker load < dashboard-frontend_1.8.7-x86_64.tar.gz
    to load the image. You can ignore the following error message, "The image localhost:5000/dashboard-frontend:1.8.7-x86_64 exists, renaming the old one with ID."
  4. Run
    docker tag f644ea5cba36 idp-registry.sysibm-adm.svc.cluster.local:31006/dashboard-frontend:1.8.7-x86_64_v1231-patch06
    to tag the image.
  5. Run
    docker push idp-registry.sysibm-adm.svc.cluster.local:31006/dashboard-frontend:1.8.7-x86_64_v1231-patch06
    to push the image to the docker registry.
  6. Run
    kubectl -n dsx edit deploy dash-front-deploy
  7. Look for the image key, and then change the value to
    idp-registry.sysibm-adm.svc.cluster.local:31006/dashboard-frontend:1.8.7-x86_64_v1231-patch06
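
After the deployment edits in each of the preceding procedures, you can watch the corresponding rollouts complete. A minimal sketch, using the deployment names edited above:

  for d in dsx-core spawner-api usermgmt dash-front-deploy; do
      kubectl rollout status deploy/"$d" -n dsx
  done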

Post-installation

Update image management

This section applies for standalone installations only. Ensure that you followed the pre-installation steps to back up the files for image management before running the following commands.
  1. Run
    kubectl get pods -n dsx | grep imagemgmt | grep -v Completed
    to get the image management pods.
  2. Run
    kubectl exec -it -n dsx <podname> sh 
    to execute into the pod. You can pick any of the three pods that will be running.
  3. Run
    cd /user-home/_global_/.custom-images/builtin-metadata
  4. Run
    rm *
  5. Run
    cd /user-home/_global_/.custom-images/metadata
  6. Run
    rm *
  7. Run
    cd /scripts
  8. Run the following commands:
    retag_images.sh
    node ./builtin-image-info.js
  9. (Optional) To restore custom images, run the following commands:
    Note: Old custom images could be based on vulnerable images. It is recommended that you create newer custom images based on the latest environment images.
    1. Run
      cp -ar /user-home/_global_/.custom-images/metadata-patch06-backup/* /user-home/_global_/.custom-images/metadata/
    2. Run
      cp -ar /user-home/_global_/.custom-images/builtin-metadata-patch06-backup/* /user-home/_global_/.custom-images/builtin-metadata/
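
Steps 1 through 8 can also be run without an interactive shell. The following is a hedged sketch; <podname> is one of the imagemgmt pods from step 1, and the ./ prefix on retag_images.sh is an assumption about how the script is invoked:

  kubectl exec -n dsx <podname> -- sh -c '
    rm -f /user-home/_global_/.custom-images/builtin-metadata/* &&
    rm -f /user-home/_global_/.custom-images/metadata/* &&
    cd /scripts &&
    ./retag_images.sh &&
    node ./builtin-image-info.js'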

Restart user environments

Restart the following types of user environments so that they pick up the patched images:
  • Jupyter 2.7 user environments
  • Jupyter 3.5 user environments
  1. To restart all Jupyter 2.7 user environments:
    1. Run
      kubectl get deployment -n dsx -l type=jupyter
      to view any deployments with the old Jupyter 2.7 image.
    2. Run
      kubectl delete deployment -n dsx -l type=jupyter
      to delete any deployments running with the old Jupyter 2.7 image.
    3. Rebuild all custom images that were built with the Jupyter 2.7 image.
  2. To restart all Jupyter 3.5 user environments:
    1. Run
      kubectl get deployment -n dsx -l type=jupyter-py35
      to view any deployments with the old Jupyter 3.5 image.
    2. Run
      kubectl delete deployment -n dsx -l type=jupyter-py35
      to delete any deployments running with the old Jupyter 3.5 image.
    3. Rebuild all custom images that were built with the Jupyter 3.5 image.
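
Before deleting the deployments, the following hedged sketch lists each Jupyter deployment together with the image it currently runs, so you can confirm which ones still use the old images:

  for label in type=jupyter type=jupyter-py35; do
      kubectl get deployment -n dsx -l "$label" \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[0].image}{"\n"}{end}'
  done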

Rolling back the patch

Roll back the patch
  1. Run
    kubectl edit configmaps -n dsx runtimes-def-configmap -o yaml
    and then look for the image keys that contain jupyter-d8a2rls2x-shell. Change the keys to the value noted in the Pre-installation section, step 1.a.
  2. Run
    kubectl edit configmaps -n dsx runtimes-def-configmap -o yaml
    and then look for the image keys that contain jupyter-d8a3rls2x-shell. Change the keys to the value noted in the Pre-installation section, step 1.b.
  3. Run
    kubectl -n dsx edit deploy dsx-core
    and then look for the image key. Change the key to the value noted in the Pre-installation section, step 2.a.
  4. Run
    kubectl edit deploy -n dsx spawner-api
    and then look for the image key. Change the key to the value noted in the Pre-installation section, step 2.b.
  5. Run
    kubectl edit deploy -n dsx usermgmt
    and then look for the image key. Change the key to the value noted in the Pre-installation section, step 2.c.
  6. Run
    kubectl edit deploy -n dsx dash-front-deploy
    and then look for the image key. Change the key to the value noted in the Pre-installation section, step 2.d.
  7. Run
    kubectl edit deploy -n dsx dsx-scripted-ml-python2
    and then look for the image key, and change it to the value noted in the Pre-installation section, step 2.e.
  8. Run
    kubectl edit deploy -n dsx dsx-scripted-ml-python3
    and then look for the image key, and change it to the value noted in the Pre-installation section, step 2.f.
  9. Run
    kubectl edit deploy -n dsx zen-scripted-data-python2
    and then look for the image key, and change it to the value noted in the Pre-installation section, step 2.g.
  10. Run
    kubectl edit deploy -n dsx jupyter-notebooks-nbviewer
    and then look for the image key, and change it to the value noted in the Pre-installation section, step 2.h.
  11. Run
    kubectl edit deploy -n dsx jupyter-notebooks-nbviewer-dev
    and then look for the image key, and change it to the value noted in the Pre-installation section, step 2.i.
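
As a hedged alternative to the interactive edits above, kubectl set image can restore a deployment's image in one command; <container-name> is a placeholder for the container name in that deployment's spec, and the image value is the one you noted during pre-installation:

  kubectl set image deploy/dsx-core -n dsx <container-name>=<image-noted-in-step-2.a>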

Roll back the image management files

This section applies for standalone installations only.
  1. Run
    kubectl get pods -n dsx | grep imagemgmt | grep -v Completed
    to get the image management pods.
  2. Run
    kubectl exec -it -n dsx <podname> sh 
    to execute into the pod. You can pick any of the three pods that will be running.
  3. Run
    cd /user-home/_global_/.custom-images/
  4. Restore the builtin-metadata directory that you backed up in step 3.d of the pre-installation tasks.
  5. Restore the metadata directory that you backed up in step 3.e of the pre-installation tasks.
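
The restore in steps 4 and 5 can be done with the same cp pattern used during pre-installation; a minimal sketch, run inside the pod after the cd in step 3:

  cp -ar builtin-metadata-patch06-backup/* builtin-metadata/
  cp -ar metadata-patch06-backup/* metadata/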