
Upgrade to Watson Studio Local

You can upgrade your Data Science Experience Local cluster from Version 1.2.x to Watson Studio Local Version 1.2.2.

Upgrade DSX Local from Version 1.2.1 to Watson Studio Local Version 1.2.2

Tip: Write down your V1.2.1 administration settings so that you can reapply them after the upgrade. You can also review them post-upgrade by following the steps in Read previous version 1.2.x administration settings.

To upgrade DSX Local Version 1.2.1 to Watson Studio Local Version 1.2.2, complete the following steps:

  1. Download the Watson Studio Local installation package to the first master node of your dedicated DSX Local cluster.
  2. Move the installation package to your installer files partition, and make the installation package executable.
  3. Run the installation package with only the --upgrade parameter and no other options, and follow the steps to complete the upgrade (a sketch of these commands follows this list).
  4. Reapply all of your DSX Local administration settings.
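
The following is a minimal sketch of steps 1 through 3, assuming the installation package is named wsl-installer-1.2.2.bin and the installer files partition is mounted at /ibm (both names are placeholders; substitute the actual package name and path for your cluster):

# Move the downloaded package to the installer files partition (placeholder path).
mv /root/wsl-installer-1.2.2.bin /ibm/

# Make the installation package executable.
chmod +x /ibm/wsl-installer-1.2.2.bin

# Run the upgrade with only the --upgrade parameter and no other options.
cd /ibm
./wsl-installer-1.2.2.bin --upgrade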

Upgrade DSX Local from Version 1.2.0.x to Watson Studio Local Version 1.2.2

Tip: Write down your V1.2.0 administration settings so that you can reapply them after the upgrade. You can also review them post-upgrade by following the steps in Read previous version 1.2.x administration settings.

To upgrade DSX Local Version 1.2.0, 1.2.0.1, 1.2.0.2, or 1.2.0.3 to Watson Studio Local Version 1.2.2, complete the following steps:

  1. Download the Watson Studio Local installation package to the first master node of your dedicated DSX Local cluster.
  2. Move the installation package to your installer files partition, and make the installation package executable.
  3. Run the installation package with only the --upgrade parameter and no other options. Follow the steps to complete the upgrade.
  4. Reapply all of your DSX Local administration settings. If you enabled the SSHD service in Version 1.2, you must re-enable it in Version 1.2.2.
  5. Because project releases now have access permissions, the Watson Studio Local administrator must go to the User Management page of the Admin Console and assign the Deployment Admin role to at least one user (ensure that the email field is filled in, and have the user sign out and back in to receive the new permission). Only the Deployment Admin can create a project release in V1.2.1 and later (previously, in V1.2.0, only the DSX administrator could create a project release). Then, in Watson Machine Learning (previously the IBM Deployment Manager), the Deployment Admin must assign Admin, Developer, and User permissions to project release members. After the upgrade, only members (including owners) of a project release can see it; Admin users can no longer see project releases unless they are assigned as members.
  6. Because the curl commands generated in the Watson Studio Local V1.2.2 client do not work with migrated V1.2.0 model web service deployments, redeploy all of your models so that the Watson Studio Local client generates the correct curl commands. To migrate your applications to the new API format, create new deployments of the models, update your applications to work with the new deployments by using the new API format, and then shut down the original deployments.

    Example of the V1.2.0 curl command:

    curl -k -X POST \
      https://9.87.654.321/dmodel/v1/mp1rel/pyscript/xgbreg27-webservice1/score \
      -H 'Authorization: Bearer eyJ...ozw' \
      -H 'Cache-Control: no-cache' \
      -H 'Content-Type: application/json' \
      -d '{"args":{"input_json_str":"[{\"Y\":2.3610346996,\"X\":4.2118473168}]"}}'

    Example of the curl command in V1.2.1 and later:

    curl -k -X POST \
      https://9.87.654.321/dmodel/v1/mp1rel/pyscript/xgbreg27-webservice1/score \
      -H 'Authorization: Bearer eyJ...ozw' \
      -H 'Cache-Control: no-cache' \
      -H 'Content-Type: application/json' \
      -d '{"args":{"input_json":[{"Y":2.3610346996,"X":4.2118473168}]}}'
  7. All V1.2.0 library projects are converted to V1.2.2 libraries with open access permissions. Preexisting V1.2.0 users must sign in to Watson Studio Local V1.2.2 at least once before you can add them to restricted libraries as viewers.
Tip: If the upgrade fails with the following error:
=========> Creating new persistent volumes <=========
2018-10-23 21:05:20,158 - ERROR - root - create_gluster_volume_and_pv:main:808 -- One or more kubernetes jobs failed to create directory influxdb.
Exiting
Failed to create new Persistent Volumes

Then run the following command on all of the master nodes:

docker load -i "/wdp/DockerImages/alpine:v3.8"
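
If your cluster has more than one master node, you can load the image on each of them from the first master node. A minimal sketch, assuming passwordless SSH between the master nodes and placeholder host names master1 through master3:

# Host names are placeholders; replace them with your master nodes.
for node in master1 master2 master3; do
  ssh "$node" 'docker load -i "/wdp/DockerImages/alpine:v3.8"'
done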

After the image is loaded on every master node, rerun the installation package with only the --upgrade parameter and no other options.

Read previous version 1.2.x administration settings

If you upgraded DSX Local to Watson Studio Local Version 1.2.2 and need to review your previous V1.2.x settings, complete the following steps:

  1. Create a .yaml file with the following contents:
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: mongo-deploy-backup
      namespace: sysibm-adm
      labels:
        name: mongo-deploy
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: mongo-deploy-backup
            component: mongo-deploy-backup
            enabled: "true"
        spec:
          nodeSelector:
            dsx_node_index: "0"
          containers:
          - name: mongo-deploy-backup
            image: mongo-cluster:1.0.3-x86_64
            command: ["mongod"]
            env:
              - name: mongo_node_name
                value: mongo-svc
              - name: k8s_m_namespace
                value: sysibm-adm
              - name: mongo_nodes_number
                value: "3"
              - name: mongo_replica_set_name
                value: rs5
            ports:
            - containerPort: 27017
            volumeMounts:
            - name: mongo-replica-storage2
              mountPath: /data/db
            livenessProbe:
              exec:
                command:
                  - sh
                  - /opt/mongo/mongoLiveness.sh
              initialDelaySeconds: 30
              periodSeconds: 30
            resources:
              requests:
                memory: 500M
              limits:
                memory: 3000M
          volumes:
          - name: mongo-replica-storage2
            persistentVolumeClaim:
              claimName: mongo-pvc-0
  2. Create a new Kubernetes pod from the file, then find the name of the new pod:
    kubectl -n sysibm-adm create -f <yaml file created in previous step>
    kubectl -n sysibm-adm get po | grep mongo-deploy-backup
  3. Enter the following command to retrieve your previous 1.2.x settings:
    kubectl -n sysibm-adm exec <name of the mongo pod> -- /usr/bin/mongoexport --db test --collection settings

Example result:

2018-08-20T22:50:51.853+0000	connected to: localhost
{"_id":{"$oid":"5b72081e96b3ea0011bb4c5c"},"elasticsearchRotationPeriod":10,"metricsRotationPeriod":1,"dashboardRefreshRate":10,"smtpServer":"","smtpPort":"","smtpUserName":"","smtpPassword":"","salt":"4d96b92d17e0adf9","ssl":false,"alertSupportEmailList":"","alertThreshold":"95","alertWarningThreshold":"75","alertTimeThreshold":"5","__v":0}