Upgrade to Watson Studio Local

You can upgrade your Watson Studio Local cluster to Version 1.2.3.

Upgrade Watson Studio Local from Version 1.2.3.0 to Version 1.2.3.1

To upgrade Watson Studio Local Version 1.2.3.0 to Version 1.2.3.1, complete the following steps:

  1. POWER only: From the User Management page in the Admin Console, write down the users who have the Deployment Admin permission because you will need to reapply the permission after the upgrade.
  2. Run this command on the master node to check your Kubernetes version:
    kubectl get no
    If the VERSION column already shows v1.10.11-dirty, skip the rest of this step. Otherwise, download the patch CVE_2018_1002105 and apply it to upgrade Kubernetes to Version 1.10.11, following the instructions in the README file of the patch.
  3. Download the Watson Studio Local installation package to the first master node of your dedicated Watson Studio Local cluster.
  4. Move the installation package to your installer files partition, and make the installation package executable.
  5. Before starting the upgrade, restart the entire cluster to ensure that no users are logged in or running active work.
  6. Stop all scheduled jobs.
  7. Run the installation package with only the --upgrade parameter and no other options. If the installer lists any deployments, services, jobs, or pods that need to be deleted, delete them with the kubectl delete command, as shown in the sketch after this list.
  8. After the upgrade is finished, the admin should restart the entire cluster.
  9. POWER only: In the Admin Console, go to the User Management page and reassign the Deployment Admin permission to the applicable users.
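
The following sketch illustrates step 7 on the master node. The installation package file name, the namespace, and the leftover resource names are placeholders only; substitute the actual package name that you downloaded and the resources that the installer reports.

# Run the installer in upgrade mode only (file name is a placeholder).
./wsl-installer.bin --upgrade

# If the installer lists leftover resources that must be removed first,
# delete them with kubectl before continuing (names and namespace are examples only):
kubectl -n dsx delete deployment <leftover-deployment>
kubectl -n dsx delete service <leftover-service>
kubectl -n dsx delete job <leftover-job>
kubectl -n dsx delete pod <leftover-pod>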

Upgrade Watson Studio Local from Version 1.2.1 or 1.2.2 to Version 1.2.3

Version 1.2.1 only: Write down your administration settings so that you can reapply them after the upgrade. You can also retrieve them after the upgrade by following the steps in Read previous version 1.2.x administration settings later in this topic.

To upgrade Watson Studio Local Version 1.2.1/1.2.2 to Version 1.2.3, complete the following steps:

  1. POWER only: From the User Management page in the Admin Console, write down the users who have the Deployment Admin permission because you will need to reapply the permission after the upgrade.
  2. Download and apply the patch CVE_2018_1002105 to upgrade Kubernetes to Version 1.10.11. Follow the instructions in the README file of the patch. You can verify the node versions afterward, as shown in the sketch after this list.
  3. Download the Watson Studio Local installation package to the first master node of your dedicated Watson Studio Local cluster.
  4. Move the installation package to your installer files partition, and make the installation package executable.
  5. Before starting the upgrade, restart the entire cluster to ensure that no users are logged in or running active work.
  6. Stop all scheduled jobs.
  7. Run the installation package with only the --upgrade parameter and no other options. If the installer lists any deployments, services, jobs, or pods that need to be deleted, delete them with the kubectl delete command.
  8. After the upgrade is finished, the admin should restart the entire cluster.
  9. If you upgraded from Version 1.2.1, reapply all of your administration settings.
  10. POWER only: In the Admin Console, go to the User Management page and reassign the Deployment Admin permission to the applicable users.
  11. Because Watson Studio Local enforces case sensitivity for remote data sets as of Version 1.2.3, verify that all schema and table names in your preexisting remote data sets match the exact case of the schema and table names in the data source. Manually correct any case mismatches in your remote data set definitions; otherwise, the Insert to code feature will fail.
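
After applying the patch in step 2, you can confirm that every node reports the expected Kubernetes version. The node names and ages in this sample output are illustrative only.

kubectl get no
# NAME        STATUS    ROLES     AGE       VERSION
# master-1    Ready     master    1y        v1.10.11-dirty
# worker-1    Ready     <none>    1y        v1.10.11-dirty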

Upgrade DSX Local from Version 1.2.0.x to Watson Studio Local Version 1.2.3

Tip: Write down your V1.2.0 administration settings so that you can reapply them after the upgrade. You can also retrieve them after the upgrade by following the steps in Read previous version 1.2.x administration settings later in this topic.

To upgrade DSX Local Version 1.2.0/1.2.0.1/1.2.0.2/1.2.0.3 to Watson Studio Local Version 1.2.3, complete the following steps:

  1. Upgrade Kubernetes to Version 1.10.11.
  2. Download the Watson Studio Local installation package to the first master node of your dedicated DSX Local cluster.
  3. Move the installation package to your installer files partition, and make the installation package executable.
  4. Run the installation package with only the --upgrade parameter and no other options. Follow the steps to complete the upgrade.
  5. Enter the following commands so that PMML model web services can start successfully. The first command locates a dsx-core pod; the second deletes the SPSS Spark assembly JAR from the global user home:
    kubectl -n dsx get pods | grep dsx-core
    kubectl exec -ti -n dsx <any-dsx-core-pod> -- rm /user-home/_global_/spark/jars/spss-assembly-spark2.0-scala2.11-5.0.0.0-201707110020.jar
  6. Reapply all of your DSX Local administration settings. If you enabled the SSHD service in Version 1.2, you must re-enable it in Version 1.2.3.
  7. Because project releases now have access permissions, the Watson Studio Local administrator must go to the User Management page of the Admin Console and assign the Deployment Admin permission to at least one user (ensure that the email field is filled out, and have the user sign out and back in to receive the new permission). Starting in V1.2.1, only the Deployment Admin can create a project release (previously, in V1.2.0, only the DSX administrator could). Then, in Watson Machine Learning (previously the IBM Deployment Manager), the Deployment Admin must assign the Admin, Developer, and User permissions to project release members. After the upgrade, only members (including owners) of a project release can see it; Admin users can no longer see project releases unless they are assigned as members.
  8. Watson Studio Local V1.2.3 uses a new JSON body format for scoring models. Because this new JSON body does not work on V1.2.0 models, you must save the V1.2.0 models as new V1.2.3 models to upgrade the JSON format.

    Example of the V1.2.0 JSON body:

    {"input_json_str":"[{\"PRODUCT_LINE\":\"Golf Equipment\",\"GENDER\":\"F\",\"MARITAL_STATUS\":\"Married\",\"PROFESSION\":\"Professional\"}]"}

    Example of the V1.2.3 JSON body:

    {"input_json":[{"PRODUCT_LINE":"Golf Equipment","GENDER":"F","MARITAL_STATUS":"Married","PROFESSION":"Professional"}]}
  9. Because the curl commands generated by the Watson Studio Local V1.2.3 client will not work on the migrated V1.2.0 model web service deployments, redeploy all of your models so that the client generates the correct curl commands. To migrate your applications to the new API format, create new deployments of the models, update your applications to work with the new deployments by using the new API format, and then shut down the original deployments.

    Example of the V1.2.0 curl command:

    curl -k -X POST \
      https://9.87.654.321/dmodel/v1/mp1rel/pyscript/xgbreg27-webservice1/score \
      -H 'Authorization: Bearer eyJ...ozw' \
      -H 'Cache-Control: no-cache' \
      -H 'Content-Type: application/json' \
      -d '{"args":{"input_json_str":"[{\"Y\":2.3610346996,\"X\":4.2118473168}]"}}'
    

    Example of the V1.2.3 curl command:

    curl -k -X POST \
      https://9.87.654.321/dmodel/v1/mp1rel/pyscript/xgbreg27-webservice1/score \
      -H 'Authorization: Bearer eyJ...ozw' \
      -H 'Cache-Control: no-cache' \
      -H 'Content-Type: application/json' \
      -d '{"args":{"input_json":[{"Y":2.3610346996,"X":4.2118473168}]}}'
  10. All V1.2.0 library projects will turn into V1.2.3 libraries with open access permissions. Preexisting V1.2.0 users must sign into Watson Studio Local V1.2.3 at least once before you can add them to restricted libraries as viewers.
  11. Because Watson Studio Local enforces case sensitivity for remote data sets as of Version 1.2.3, verify that all schema and table names in your preexisting remote data sets match the exact case of the schema and table names in the data source. Manually correct any case mismatches in your remote data set definitions; otherwise, the Insert to code feature will fail. One way to check the case in the data source is sketched after this list.
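
One way to confirm the exact case of schema and table names (step 11) is to query the catalog of the data source. The following sketch assumes a Db2 source and uses a placeholder database and table name; for other databases, query the equivalent catalog view (for example, information_schema.tables).

# Connect to the source database and list the stored schema and table names
# (<database> and CUSTOMERS are placeholders).
db2 connect to <database>
db2 "SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES WHERE UPPER(TABNAME) = 'CUSTOMERS'"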

Tip: If the upgrade fails with the following error:
=========> Creating new persistent volumes <=========
2018-10-23 21:05:20,158 - ERROR - root - create_gluster_volume_and_pv:main:808 -- One or more kubernetes jobs failed to create directory influxdb.
Exiting
Failed to create new Persistent Volumes

Then run the following command on all of the master nodes:

docker load -i "/wdp/DockerImages/alpine:v3.8"

After the command completes on every master node, rerun the installation package with only the --upgrade parameter and no other options.
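
You can optionally confirm that the image is present on each master node before rerunning the installer; the exact repository name reported by docker images might differ from the file name.

docker images | grep alpine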

Read previous version 1.2.x administration settings

If you upgraded DSX Local to Watson Studio Local Version 1.2.3 and need to review your previous V1.2.x settings, complete the following steps:

  1. Create a .yaml file with the following contents:
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: mongo-deploy-backup
      namespace: sysibm-adm
      labels:
        name: mongo-deploy
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: mongo-deploy-backup
            component: mongo-deploy-backup
            enabled: "true"
        spec:
          nodeSelector:
            dsx_node_index: "0"
          containers:
          - name: mongo-deploy-backup
            image: mongo-cluster:1.0.3-x86_64
            command: ["mongod"]
            env:
              - name: mongo_node_name
                value: mongo-svc
              - name: k8s_m_namespace
                value: sysibm-adm
              - name: mongo_nodes_number
                value: "3"
              - name: mongo_replica_set_name
                value: rs5
            ports:
            - containerPort: 27017
            volumeMounts:
            - name: mongo-replica-storage2
              mountPath: /data/db
            livenessProbe:
              exec:
                command:
                  - sh
                  - /opt/mongo/mongoLiveness.sh
              initialDelaySeconds: 30
              periodSeconds: 30
            resources:
              requests:
                memory: 500M
              limits:
                memory: 3000M
          volumes:
          - name: mongo-replica-storage2
            persistentVolumeClaim:
              claimName: mongo-pvc-0
    
  2. Create the backup deployment from the file, and then find the name of the new pod:
    kubectl -n sysibm-adm create -f <yaml file created in previous step>
    kubectl -n sysibm-adm get po | grep mongo-deploy-backup
  3. Enter the following command to retrieve your previous 1.2.x settings:
    kubectl -n sysibm-adm exec <name of the mongo pod> -- /usr/bin/mongoexport --db test --collection settings

Example result:

2018-08-20T22:50:51.853+0000	connected to: localhost
{"_id":{"$oid":"5b72081e96b3ea0011bb4c5c"},"elasticsearchRotationPeriod":10,"metricsRotationPeriod":1,"dashboardRefreshRate":10,"smtpServer":"","smtpPort":"","smtpUserName":"","smtpPassword":"","salt":"4d96b92d17e0adf9","ssl":false,"alertSupportEmailList":"","alertThreshold":"95","alertWarningThreshold":"75","alertTimeThreshold":"5","__v":0}