Table of contents

Patch 10

The descriptions and installation procedures for the enhancements and fixes in patch 10 are provided.

Patch details for wsl-v1231-x86-patch-10

This patch includes the following enhancements, upgrades, and fixes:

Enhancements
  • Resolved security attack vectors around directory access through API endpoints.
  • Enhanced the network security around communications between Kubernetes pods and services.
  • Enhanced the authorization and authentication processes on API calls.
  • Enhanced user management auditing.
  • Prevented authenticated users from impersonating other users through the JWT token signature.
  • Prevented startup scripts that access user data from granting users additional permissions.
  • Supported LDAP group authorization and authentication.
  • Supported cluster backup and restore utilities.
Upgrades
  • TS002408130 - The version of Kubernetes is upgraded to 1.13
  • Switched the network controller to Calico

Fixed defects

  • TS002578361 - Now able to select an existing model to create a model group.
  • TS002579204 - Now able to delete a damaged project.
  • TS002579186 - Now able to create a project release with a BitBucket project.
  • TS002579167 - Now able to import a project ZIP from a previous Watson Studio Local version.
  • TS002662421 - Now able to save a model when you're using an external or third-party certificate.
  • TS002809158 - Watson Studio Local 1.x Admin Dashboard no longer displays only zeros after an upgrade.

Prerequisites

IBM Watson Studio Local 1.2.3.1 or any 1.2.3.1 patch on x86. To download patch 10, go to Fix Central and select wsl-v1231-x86-patch-10.
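
Before you download the patch, you can confirm the patch level that is currently installed on the cluster. This is a minimal sketch that reads the same file that the post-installation verification uses; the file might not exist on a cluster that has never been patched.
# Show the patch level that is currently installed (if any).
cat /wdp/patch/current_patch_level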

Patch files

The patch contains the following files:

patch_x86_64_CVE_2019_1002100_v1.0.0.tar
The installer upgrades Kubernetes to version 1.13, replaces the Kubernetes network controller with Calico, and updates the network policies for pod communication. This installer must run before the application installer.
wsl_app_patch_v1231_10_v1.0.0.tar
The application installer updates images used by Watson Studio Local.
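
If you want to confirm that a downloaded archive is intact before you extract it, you can list its contents without unpacking it. This is a minimal sketch that uses the file names listed above.
# List the contents of each patch archive without extracting it.
tar tvf patch_x86_64_CVE_2019_1002100_v1.0.0.tar | head
tar tvf wsl_app_patch_v1231_10_v1.0.0.tar | head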

Pre-installation

The installation requires a quiet period to patch the Watson Studio Local cluster.

  1. Confirm that all jobs are stopped and that jobs aren't scheduled during the patch installation.
  2. Stop all running environments by using the following commands:
    kubectl delete deploy -n dsx -l type=jupyter-gpu-py35
    kubectl delete deploy -n dsx -l type=jupyter 
    kubectl delete deploy -n dsx -l type=jupyter-py35
    kubectl delete deploy -n dsx -l type=rstudio
    kubectl delete deploy -n dsx -l type=shaper
    kubectl delete svc -n dsx -l type=jupyter-gpu-py35
    kubectl delete svc -n dsx -l type=jupyter 
    kubectl delete svc -n dsx -l type=jupyter-py35
    kubectl delete svc -n dsx -l type=rstudio
    kubectl delete svc -n dsx -l type=shaper
  3. Delete customized images from Admin Console > Image Management. After you apply the patch, the Image Management page shows only the new base images. If you do not delete the customized images, you can still use them from the environment page after you install the patch, but you cannot delete them from Image Management. To manually delete the images after you install the patch, follow the post-installation procedure.
  4. Ensure that all of the nodes of the cluster are running before you install this patch. Also, ensure that the kubelet and docker services are running on all of the nodes (see the sketch after this list for one way to check).
  5. If the cluster is using Gluster, ensure that the Gluster file system is clean before you install this patch by running the following command:
    gluster volume status sysibm-adm-ibmdp-assistant-pv detail | grep Online | grep ": N" | wc -l

    If the resulting count is larger than 0, then one or more bricks for the volume are not healthy and must be fixed before continuing to install the patch.
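
The commands in steps 2, 4, and 5 can also be run as one pass. The following is a minimal sketch that assumes the same environment type labels and Gluster volume name that are shown above; adjust it to your cluster.
# Step 2: delete the environment deployments and services for each runtime type.
for t in jupyter-gpu-py35 jupyter jupyter-py35 rstudio shaper; do
    kubectl delete deploy -n dsx -l type=$t
    kubectl delete svc -n dsx -l type=$t
done
# Step 4: confirm that every node is Ready and that kubelet and docker are active.
kubectl get node
systemctl is-active kubelet docker    # run this check on each node
# Step 5: count unhealthy bricks for the Gluster volume; a count of 0 means healthy.
gluster volume status sysibm-adm-ibmdp-assistant-pv detail | grep Online | grep ": N" | wc -l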

Installing the patch for OpenShift clusters

To install the patch for OpenShift clusters, contact your IBM representative.

Installing the patch for non-OpenShift clusters

There are two installers for this patch: platform and application.

To install the platform patch
Note: This patch cannot be rolled back.
  1. Download the patch tar file patch_x86_64_CVE_2019_1002100_v1.0.0.tar. The preferred location is the install path name from /wdp/config, such as /ibm.
  2. Log in as root or as a system administrator who has read/write permissions in the install directory. The patch script runs the remote scripts by using SSH.
  3. Use tar to extract the patch scripts. Extraction creates a new directory in the install directory and places the patch files there.
    tar xvf patch_x86_64_CVE_2019_1002100_v1.0.0.tar 
  4. Change to the patch directory and run the patch_master.sh script by using the following command:
    cd <install_dir>/patch_CVE_2019_1002100
    ./patch_master.sh
    If you have sudo privileges for installing the patch, ensure that the sudo user is created on all nodes. Log in as <sudo_user>, and run the patch_master.sh script by using the following command:
    cd <install_dir>/patch_CVE_2019_1002100
    sudo ./patch_master.sh

    Optionally, you can create a private key for this user in the ~/.ssh directory to use instead of a user password (see the sketch after these steps).

    To get a list of all available options and examples of usage, run
    cd <install_dir>/patch_CVE_2019_1002100
    ./patch_master.sh --help
  5. Monitor the progress of the installation. If any issues are encountered, check the log files. The remote nodes keep log files in the <install_dir>/patch_CVE_2019_1002100 directory.
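
If you choose the private key option instead of a user password, a typical way to set it up is sketched below. The user and node values are placeholders; run the copy step once for each node in the cluster.
# Generate a key pair for the install user (press Enter to accept the defaults).
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
# Copy the public key to each node so that the patch script can connect over SSH.
ssh-copy-id <user>@<node-ip>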

To install the application patch

  1. Download the patch tar file wsl_app_patch_v1231_10_v1.0.0.tar to the Watson Studio node. The preferred location is the install path name from /wdp/config, such as /ibm.
  2. Log in as root or as a system administrator who has read/write permissions in the install directory. The patch script runs the remote scripts by using SSH.
  3. Use tar to extract the patch scripts. Extraction creates a new directory in the install directory and places the patch files there.
    tar xvf wsl_app_patch_v1231_10_v1.0.0.tar
  4. Change to the patch directory and run the patch_master.sh script by using the following command:
    cd <install_dir>/wsl_app_patch_v1231
    ./patch_master.sh
    If you have sudo privileges for installing the patch, ensure that the sudo user is created on all nodes. Log in as <sudo_user>, and run the patch_master.sh script by using the following command:
    cd <install_dir>/wsl_app_patch_v1231
    sudo ./patch_master.sh

    Optionally, you can create a private key for this user in the ~/.ssh directory to use instead of a user password, as shown in the sketch that follows the platform patch steps.

    To get a list of all available options and examples of usage, run
    cd <install_dir>/wsl_app_patch_v1231
    ./patch_master.sh --help
  5. Monitor the progress of the installation. If any issues are encountered, check the log files. The remote nodes keep log files in the <install_dir>/wsl_app_patch_v1231/logs directory (see the sketch after these steps for one way to follow them).
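
One way to follow the installation progress on a node is to tail the newest file in the patch log directory. This is a minimal sketch; it assumes the log directory shown in step 5 and that the most recently modified file there is the active log.
# Follow the most recent log file written by the application patch installer.
cd <install_dir>/wsl_app_patch_v1231/logs
tail -f "$(ls -t | head -n 1)"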

Rolling back the application patch

Note: Only the application patch can be rolled back. The platform patch cannot be rolled back.
Roll back the patch
  • From the <install_dir>/wsl_app_patch_v1231 directory, run:
    ./patch_master.sh --rollback

Post-installation

Verifying the installation and cluster

To verify that the install is successful, run:
cat /wdp/patch/current_patch_level 
A successful install should display:
patch_number=10
patch_version=1.0.0
To verify that your cluster is healthy, check that all of the nodes are in the Ready state and that all of the pods are running by using the following commands:
kubectl get node
kubectl get po --all-namespaces
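
If you prefer a check that prints only problems, the same information can be filtered as in the following sketch; empty output means that the nodes are Ready and the pods are Running.
# Anything printed here is a node that is not Ready or a pod that is not Running
# (Completed pods from finished jobs are ignored).
kubectl get node --no-headers | grep -v ' Ready'
kubectl get po --all-namespaces --no-headers | grep -vE 'Running|Completed'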

Manually delete environment definitions

The patch installation does not delete environment definitions that were saved before the patch was installed. You can delete these definitions manually.

To delete the environment definitions:

  1. Run the following command to get the image management pods:
    kubectl get pods -n dsx | grep imagemgmt | grep -v Completed
  2. Run the following command to open a shell inside the pod:
    kubectl exec -it -n dsx <podname> sh
    
  3. Delete runtime definitions files for your old environments by using the following command:
    cd /user-home/_global_/config/.runtime-definitions/custom
    rm <files_no_longer_needed>
  4. Go to each node and delete the docker image from that node by using the following commands (or use the scripted sketch after this procedure):
    ssh <node-ip>
    docker images | grep <your-image-name> (or docker images | grep customimages)
    docker rmi <image-hex>
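
If the cluster has many nodes, step 4 can be scripted from a host that has SSH access to all of them. The following is a minimal sketch; the nodes.txt file (one node IP per line) and the customimages repository prefix are assumptions that you should adjust to your cluster.
# Remove old custom environment images from every node listed in nodes.txt.
# Assumes passwordless SSH to each node and that custom images are tagged with
# a "customimages" repository prefix, as in the grep example above.
while read -r node; do
    ssh -n "$node" "docker images | grep customimages | awk '{print \$3}' | xargs -r docker rmi"
done < nodes.txt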