Limitations and known issues for Watson Studio Local

Use the following information to help troubleshoot issues that you might encounter with Watson Studio Local.

The following limitations and known issues apply to the Watson Studio Local release:

See also: Troubleshooting

Notebooks

Rerun Scala code if you receive an exception
When connecting to Informix from Scala, if you get the exception java.lang.SecurityException cannot be cast to java.sql.SQLException, try running the code again.
No version control in Zeppelin
If a Watson Studio Local user attempts to use version control for a Zeppelin notebook, it fails with the following error: Couldn't checkpoint note revision: possibly storage doesn't support versioning. Please check the logs for more details.

Head option does not work in Zeppelin
In the Zeppelin notebook toolbar, nothing happens when Head is clicked.
New notebook defaults to current notebook filename instead of untitled
If you are currently inside a notebook and click File > New Notebook, then the new notebook defaults to the same filename as the current notebook instead of "Untitled".
Cannot copy a notebook in Jupyter
If you make a copy of a notebook in Jupyter, the new copy of the notebook displays no environment and cannot be opened. As a workaround, make a copy of the notebook manually.
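One way to copy a notebook manually is to duplicate the .ipynb file itself, for example from the web terminal. A minimal sketch, assuming a hypothetical notebook named analysis.ipynb (a stub file is created here only so the example is self-contained):

```python
# Sketch: duplicate a notebook file by hand instead of using Jupyter's copy.
# "analysis.ipynb" is a hypothetical name; the stub below exists only so
# the example runs end to end.
import json
import shutil

stub = {"cells": [], "metadata": {}, "nbformat": 4, "nbformat_minor": 2}
with open("analysis.ipynb", "w") as f:
    json.dump(stub, f)

# Copy the notebook file; the copy then appears in the project's notebook list.
shutil.copy("analysis.ipynb", "analysis-copy.ipynb")
```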
Deleted sample notebook still displays
Do not delete sample notebooks from the dsx-samples project. Otherwise, Watson Studio Local users receive a 404 error when they attempt to open the corresponding tile on the Community page.

Machine learning models

Run model evaluations as type Notebook run
In the Watson Machine Learning client, running model evaluations using type Model evaluation might fail. As a workaround, run model evaluations using type Notebook run instead.
Batch scoring for HDP can only be done by notebook
For batch scoring with a Hadoop/HDP Spark model through Livy, you must save the scoring code as an .ipynb file and run it as a notebook. When it runs, you can ignore the preceding errors.
Delayed error message when testing a model
If you attempt to test an ML model using an invalid value, the error message (An error occurred in processing your submission. Please try again later.) might not display until you leave the panel.
Correct the input data set when changing the execution type for batch script generation
While generating a script for remote batch scoring, if you change the Execution Type after the Input data set has already been selected, the Input data set will not automatically change. You must manually change the Input data set to ensure Watson Studio Local generates a script with the correct data.
Model group is not supported for custom batch models
For custom batch models, model groups are not supported. For custom online models, web service group deployment for the model group is supported.
Cannot export a model with a space in the name
If a model name or its project name has a space in it, the model export will fail. As a workaround, export the entire project.

Projects

Do not import a project with .git/index.lock in it
Before you create a project from a file, ensure that .git/index.lock does not exist in the project zip. Otherwise, you can receive an undefined 502 error message and then be unable to load any of the project assets or delete the project.
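You can check an export archive for the lock file before importing it. A minimal sketch using Python's zipfile module (project.zip and its contents are hypothetical and are created here only so the example is self-contained):

```python
# Sketch: scan a project archive for .git/index.lock before importing it.
# "project.zip" is a hypothetical file name; it is built here only so
# the example runs end to end.
import zipfile

with zipfile.ZipFile("project.zip", "w") as zf:
    zf.writestr("myproject/.git/index.lock", "")
    zf.writestr("myproject/notebook.ipynb", "{}")

with zipfile.ZipFile("project.zip") as zf:
    locks = [n for n in zf.namelist() if n.endswith(".git/index.lock")]

if locks:
    print("Remove these entries and re-zip before importing:", locks)
```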
Data Refinery remote job incorrectly lists local host option
When working with a remote data set that has been configured only with LivySpark (and not LivySpark2), the Run as a job panel incorrectly shows Local Host for the Target Host field with the Save & Run button disabled. Remote data shaping requires LivySpark2 to be configured, and Local Host should not be listed.
Cannot push Git commits from the web terminal
In the web terminal (Launch Terminal), the git push command fails. You must instead click Push from the Git Actions menu.
Cannot create a notebook from a git-host URL if the notebook exists in a private repository
Even when a git token is added, you cannot create a new notebook using the URL option for a private git repository. The URL load fails.

Hadoop integration

Cannot stop jobs for a registered Hadoop target host
When a registered Hadoop cluster is selected as the Target Host for a job run, the job cannot be stopped. As a workaround, view the Watson Studio Local job logs to find the Yarn applicationId; then use the ID to manually kill the Hadoop job on the remote system. When the remote job is terminated, the Watson Studio Local job will terminate on its own with a "Failed" status.

Similarly, jobs that are started for registered Hadoop image push operations cannot be stopped either.
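The manual kill step can be sketched as follows. The application ID below is a hypothetical placeholder for the value found in the job logs, and the yarn command must run on a machine with the YARN client installed, so this sketch only builds the command rather than executing it:

```python
# Sketch: build the YARN kill command for a Hadoop job whose applicationId
# was found in the Watson Studio Local job logs. The ID is hypothetical.
app_id = "application_1234567890123_0042"  # copied from the job logs

# Run this command on the remote system (or an edge node) that has the
# YARN client, e.g. via subprocess.run(cmd, check=True).
cmd = ["yarn", "application", "-kill", app_id]
print(" ".join(cmd))
```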

Cross platform between Watson Studio Local and a remote Hadoop cluster not supported for virtual environments
For virtual environments only, Watson Studio Local and Hadoop must have matching architectures. For example, a virtual Watson Studio Local on POWER environment can only work with a virtual Hadoop POWER environment.

H2O Flows

H2O Flow document names must be ASCII characters only
When you name your H2O Flow documents, use all ASCII characters. Non-ASCII characters and double-byte characters are not supported.
H2O importFile displays a stack trace when searching with an empty search path
In a blank H2O flow notebook, if you click the importFile routine and then click Search, you might see the following stack trace error: ERROR MESSAGE: Can not create a Path from an empty string (java.lang.IllegalArgumentException) ....
Delete the H2O notebook from inside the H2O Flow
In the project notebook tab, you cannot delete an H2O notebook. Instead, delete the H2O Flow file either from the Flows tab or from the terminal.

Spark Canvas Flows

Move an exported CSV file out of the exported directory to preview it successfully
If you export a data set from Spark Canvas or another tool, you might receive error code 400 when you try to preview it. As a workaround, complete the following steps:
  1. After the Data Asset Exporter runs successfully, go inside the dsx-core pod using the following commands:
    kubectl get pods -n dsx | grep dsx-core
    kubectl exec -it -n dsx <dsx-core pod name> sh
  2. Inside the dsx-core pod, go to the directory path: /user-home/<User ID>/DSX_Projects/<Project name>/datasets. This path should have a directory with the filename you exported, and the part-00000-*.csv file will be inside this directory.
  3. Move the file to the data sets directory: mv new_DRUG/part-00000-*.csv .

Now the Spark Canvas transformations and preview should work correctly.

Watson Explorer oneWEX

Watson Explorer collection resource is shared among collaborators
After a Watson Explorer collection is shared through an initial commit and push, any modification to the collection in Watson Explorer Content Miner (by configuring the collection or by editing it with Domain Adaptation Curator) becomes visible and editable for everyone in the project instantly, without another commit and push. This can cause a race condition and an unexpected overwrite of the collection, because the collection in Content Miner is not managed by Watson Studio Local asset management.

As a workaround, users can hide a collection from collaborators by not committing the corresponding collection asset at all after the collection is created.

Watson Explorer collection resources cannot be exported
If you export a Watson Studio Local project with a Watson Explorer collection in it, the exported project does not contain the collection resource. If you then import the project into Watson Studio Local, Watson Explorer Content Miner cannot open the collection because the corresponding collection resource is not in the project. Because the collection asset itself is exported with the project, you can still use the collection through the Feature Extractor API and the WEX Feature Extractor node in SPSS Modeler.
Maximum number of documents in one Watson Explorer collection
If the ingested data contains more than 100,000 documents, only 100,000 documents are indexed.

RStudio

Signing out of RStudio is not supported
In RStudio, if you click the sign-out button, it triggers the error message Missing or incorrect token. The sign-out feature is not supported. To exit RStudio, click the breadcrumb link in the upper left corner (Project Name > RStudio).
Restart R to clear custom user settings from an imported project

When you create a project from a file and launch RStudio, RStudio loads the user settings and state information for the user who exported the project to a file. This can cause inconsistencies in RStudio because the R session state information might differ from the current user's settings and options. To resolve this issue, navigate to the RStudio Session menu and click Restart R.

If the issues still persist, complete the following steps to reset the RStudio state:

  1. Navigate to your RStudio working directory.
  2. Rename the .rstudio directory.
  3. Rename the .Rdata and .Rhistory files if present.
  4. Navigate to the RStudio Session menu and click Restart R.
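Steps 2 and 3 can be sketched as shown below. A temporary directory with stub state files stands in for the real RStudio working directory so the example is self-contained, and the .bak suffix is an arbitrary choice:

```python
# Sketch: rename RStudio state files and directories if they exist.
# A temporary directory with stub state files stands in for the real
# RStudio working directory, so the example runs end to end.
import os
import tempfile

workdir = tempfile.mkdtemp()
os.chdir(workdir)
os.mkdir(".rstudio")          # stub state directory
open(".Rdata", "w").close()   # stub workspace file

# Rename whatever state exists; missing files are simply skipped.
for name in (".rstudio", ".Rdata", ".Rhistory"):
    if os.path.exists(name):
        os.rename(name, name + ".bak")
```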

IBM Cloud Private

Watson Studio Local for ICp on x86 has the following limitations:

  • GPU is not supported on Watson Studio Local for ICp Version 3.1 on x86.
  • Upgrade from DSX Local Version 1.2.1 or earlier to Watson Studio Local 1.2.2 is not supported.
  • Image management is not supported.

POWER

Python 3 notebook cannot run on a remote Hadoop cluster
Jupyter notebooks on the Python 3 kernel do not support running against remote Spark on a Hadoop cluster.

Character encoding support

Non-ASCII characters, including Unicode characters, are not supported in the following Watson Studio Local areas:

  • Names and tags in image management and package management services.
  • Spark-submit jobId queries.
  • Filenames to be read or deleted by the Filetransfer service.

For non-ASCII characters in data source names, you must manually encode the data source name in UTF-8 before you retrieve any information from it. Python 2.7 example:

    dsname = dataSet['datasource'].encode('utf-8')

For non-ASCII characters in database metadata such as table names and schema names, you must manually encode them to UTF-8. Python 2.7 example:

    dbTableOrQuery = dbTableOrQuery.encode(encoding='UTF-8', errors='strict')