Limitations and known issues for Watson Studio Local
Use the following to help troubleshoot issues you might have with Watson Studio Local.
The following limitations and known issues apply to the Watson Studio Local release:
- Machine learning models
- Hadoop integration
- Bitbucket integration
- H2O Flows
- Spark Canvas Flows
- Watson Explorer oneWEX
- Character encoding support
- Driver support
See also: Troubleshooting
- The kernel fails to start up when you start a notebook in a GPU environment
- Apply patch03, selecting wsl-x86-v1231-patch03-TS002286373, to correct this issue.
- Rerun Scala code if you receive an exception
- When connecting to Informix from Scala, if you get the exception java.lang.SecurityException cannot be cast to java.sql.SQLException, try running the code again.
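The "run it again" workaround can be wrapped in a small retry helper. The sketch below is illustrative Python (the original issue concerns Scala/JDBC; the function name, attempt count, and delay are all hypothetical choices, not part of the product):

```python
import time

def run_with_retry(action, attempts=2, delay_seconds=1):
    """Run `action`, retrying if it raises an exception.

    Illustrates the "try running the code again" workaround; `action` is any
    zero-argument callable, for example a function that opens a database
    connection and runs a query.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return action()
        except Exception as err:  # e.g. a transient driver exception
            last_error = err
            time.sleep(delay_seconds)
    raise last_error
```

Calling `run_with_retry(my_query)` runs `my_query` and silently retries once if the first attempt throws.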
- No version control in Zeppelin
- If a Watson Studio Local user attempts to use version control for a
Zeppelin notebook, it fails with the following error: Couldn't checkpoint note revision:
possibly storage doesn't support versioning. Please check the logs for more details.
- Head option does not work in Zeppelin
- In the Zeppelin notebook toolbar, nothing happens when Head is clicked.
- New notebook defaults to current notebook filename instead of untitled
- If you are currently inside a notebook and create a new notebook, the new notebook defaults to the same filename as the current notebook instead of "Untitled".
- Cannot copy a notebook in Jupyter
- If you make a copy of a notebook in Jupyter, the new copy of the notebook displays no environment and cannot be opened. As a workaround, make a copy of the notebook manually.
- Deleted sample notebook still displays
- Do not delete sample notebooks from the dsx-samples project. Otherwise, Watson Studio Local users receive a 404 error when they attempt to open the corresponding tile on the Community page.
Machine learning models
- Model builder is now obsolete
- The model builder has been removed; build models with a notebook instead.
- Run model evaluations as type Notebook run
- In the Watson Machine Learning client, running model evaluations using type Model evaluation might fail. As a workaround, run model evaluations using type Notebook run.
- Batch scoring for HDP can only be done by notebook
- For batch scoring with a Hadoop/HDP Spark model using Livy, you must save the notebook as an .ipynb file. When it runs, you can ignore any preceding errors.
- Delayed error message when testing a model
- If you attempt to test an ML model using an invalid value, the error message (An error occurred in processing your submission. Please try again later.) might not display until you leave the panel.
- Correct the input data set when changing the execution type for batch script generation
- While generating a script for remote batch scoring, if you change the Execution Type after the Input data set has already been selected, the Input data set will not automatically change. You must manually change the Input data set to ensure Watson Studio Local generates a script with the correct data.
- Model group is not supported for custom batch models
- For custom batch models, model groups are not supported. For custom online models, web service group deployment for the model group is supported.
- Cannot export a model with a space in the name
- If a model name or its project name has a space in it, the model export will fail. As a workaround, export the entire project.
- Do not import a project with .git/index.lock in it
- Before you create a project from a file, ensure that .git/index.lock does not exist in the project zip. Otherwise, you can receive an undefined 502 error message and then be unable to load any of the project assets or delete the project.
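You can check the project archive for the lock file before importing. A minimal sketch using Python's standard zipfile module (the function name is hypothetical):

```python
import zipfile

def has_git_index_lock(project_zip):
    """Return True if the project archive contains a .git/index.lock entry.

    `project_zip` may be a path or an open file-like object; entry names are
    checked with endswith so nested layouts like proj/.git/index.lock match.
    """
    with zipfile.ZipFile(project_zip) as archive:
        return any(name.endswith(".git/index.lock")
                   for name in archive.namelist())
```

If the function returns True, remove the lock file and re-zip the project before importing.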
- Data Refinery remote job incorrectly lists local host option
- When working with a remote data set that has only been configured with LivySpark (and not LivySpark2), the Run as a job panel incorrectly shows Local Host for the Target Host field with the Save & Run button disabled. Note that remote data shaping requires LivySpark2 configured and Local Host should not be listed.
- Cannot push Git commits from the web terminal
- In the web terminal, the git push command fails. You must instead click Push from the Git Actions menu.
- Cannot create a notebook from a git-host URL if the notebook exists in a private repository
- Even when a git token is added, you cannot create a new notebook using the URL option for a private git repository. The URL load fails.
- Cannot stop jobs for a registered Hadoop target host
- When a registered Hadoop cluster is selected as the Target Host for a job
run, the job cannot be stopped. As a workaround, view the Watson Studio Local job logs to find the Yarn applicationId; then use the ID to manually kill the Hadoop job on the
remote system. When the remote job is terminated, the Watson Studio Local
job will terminate on its own with a "Failed" status.
Similarly, jobs that are started for registered Hadoop image push operations cannot be stopped either.
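The manual kill step can be scripted with the standard YARN CLI (`yarn application -kill`). A hedged sketch in Python; the application ID shown is a placeholder, and the `dry_run` flag exists only so the command can be inspected without a cluster:

```python
import subprocess

def kill_yarn_application(application_id, dry_run=False):
    """Build and optionally run the YARN CLI command that kills a remote job.

    `application_id` comes from the Watson Studio Local job logs, for example
    "application_1550000000000_0001" (placeholder value).
    """
    command = ["yarn", "application", "-kill", application_id]
    if dry_run:
        return command  # let callers inspect the command without running it
    subprocess.run(command, check=True)  # requires the yarn CLI on the remote system
    return command
```

Run it on the remote Hadoop system (or over SSH) where the `yarn` client is available; the Watson Studio Local job then fails on its own as described above.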
- Cross platform between Watson Studio Local and a remote Hadoop cluster not supported for virtual environments
- For virtual environments only, Watson Studio Local and Hadoop must have matching architectures. For example, a virtual Watson Studio Local on POWER environment can only work with a virtual Hadoop POWER environment.
Bitbucket integration
- Watson Studio Local fails when a new Bitbucket server repository is completely empty
- When a new repository is created in the Bitbucket server UI, it is created empty, with no files. Watson Studio Local can connect to this empty repository, but committing or pushing changes from Watson Studio Local will fail until an initial commit is made to the repository from another Git client. To avoid these errors, commit and push at least one file, typically a README.md file, to the repository from another Git client.
- Personal access tokens containing / are not accepted in the Watson Studio Local UI
- If a generated personal access token contains the / (slash) character, it will not be accepted as valid by the Watson Studio Local UI. As a workaround, regenerate the token until it does not contain the / character.
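A trivial check you can run on a freshly generated token before pasting it into the UI (illustrative only; the function name is made up):

```python
def token_is_accepted(token):
    """Return False if the personal access token contains the '/' character,
    which the Watson Studio Local UI rejects as invalid."""
    return "/" not in token
```

If this returns False, regenerate the token and check again.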
H2O Flows
- H2O Flow document names must be ASCII characters only
- When you name your H2O Flow documents, use all ASCII characters. Non-ASCII characters and double-byte characters are not supported.
- H2O importFile displays a stack trace when searching with an empty search path
- In a blank H2O flow notebook, if you click the importFile routine and click Search, you might see the following stack trace error: ERROR MESSAGE: Can not create a Path from an empty string (java.lang.IllegalArgumentException) ....
- Delete the H2O notebook from inside the H2O Flow
- In the project notebook tab, you cannot delete an H2O notebook. Instead, delete the H2O Flow file either from the Flows tab or from the terminal.
Spark Canvas Flows
- Move an exported CSV file out of the exported directory to preview it successfully
- If you export a data set from Spark Canvas or other tools, you might receive error code 400 when you try to preview it. As a workaround, complete the following steps:
- After the Data Asset Exporter runs successfully, go inside the dsx-core pod using the following commands:
kubectl get pods -n dsx | grep dsx-core
kubectl exec -it -n dsx <dsx-core pod name> sh
- Inside the dsx-core pod, go to the directory path /user-home/<User ID>/DSX_Projects/<Project name>/datasets. This path should have a directory with the filename you exported, and the part-00000-*.csv file will be inside this directory.
- Move the file to the data sets directory: mv new_DRUG/part-00000-*.csv .
Now the Spark Canvas transformations and preview should work correctly.
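The move step can also be scripted from inside the pod. A hedged sketch in Python (`promote_exported_csv` is a hypothetical helper; the directory layout follows the pattern described above):

```python
import glob
import os
import shutil

def promote_exported_csv(datasets_dir, export_name):
    """Move the Spark part file out of the export directory into `datasets_dir`,
    mirroring: mv <export_name>/part-00000-*.csv .

    Returns the new path of the moved file.
    """
    matches = glob.glob(os.path.join(datasets_dir, export_name, "part-00000-*.csv"))
    if not matches:
        raise FileNotFoundError("no part-00000-*.csv found under " + export_name)
    destination = os.path.join(datasets_dir, os.path.basename(matches[0]))
    shutil.move(matches[0], destination)
    return destination
```

For example, `promote_exported_csv("/user-home/<User ID>/DSX_Projects/<Project name>/datasets", "new_DRUG")` performs the same move as the mv command shown above.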
Watson Explorer oneWEX
- Watson Explorer collection resource is shared among collaborators
- After a Watson Explorer collection is shared through an initial commit and push, modifications to the collection in Watson Explorer Content Miner (by configuring the collection or by editing with Domain Adaptation Curator) become visible and editable for everyone in the project instantly, without another commit and push. This can cause a race condition and an unexpected overwrite of the collection, because the collection in Content Miner is not managed by Watson Studio Local asset management. As a workaround, users can hide a collection from collaborators by not committing the corresponding collection asset after the collection is created.
- Watson Explorer collection resources cannot be exported
- If you export a Watson Studio Local project with a Watson Explorer collection in it, the exported project does not contain the collection resource. So if you import the project into Watson Studio Local, Watson Explorer Content Miner cannot open this collection because the corresponding collection resource is not in the project. Since the corresponding collection asset is exported with the project, you can still use the collection through Feature Extractor API and WEX Feature Extractor node in SPSS modeler.
- Maximum number of documents in one Watson Explorer collection
- If ingested data contains more than 100,000 documents, only the first 100,000 documents are indexed.
- Signing out of RStudio is not supported
- In RStudio, if you click the sign-out button, it triggers the error message Missing or incorrect token. The sign-out feature is not supported. To exit RStudio, click the breadcrumb link in the upper left corner.
- Restart R to clear custom user settings from an imported project
When you create a project from a file and launch RStudio, RStudio loads the user settings and state information for the user who exported the project to a file. This could cause inconsistencies with RStudio since the R Session state information might differ from the current user settings and options. To resolve this issue, navigate to the RStudio Sessions menu and click Restart R.
If the issues still persist, complete the following steps to reset the RStudio state:
- Navigate to your RStudio working directory.
- Rename the .rstudio directory.
- Rename the .Rdata and .Rhistory files if present.
- Navigate to the RStudio Sessions menu and click Restart R.
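The rename steps above can be sketched as a small script, run against your RStudio working directory (the `.bak` suffix is an arbitrary choice for the rename, not a product requirement):

```python
import os

def reset_rstudio_state(workdir):
    """Rename .rstudio, .Rdata, and .Rhistory (when present) so RStudio starts
    with a clean session state. Returns the list of paths that were renamed."""
    renamed = []
    for name in (".rstudio", ".Rdata", ".Rhistory"):
        path = os.path.join(workdir, name)
        if os.path.exists(path):
            os.rename(path, path + ".bak")
            renamed.append(path)
    return renamed
```

After running it, restart R from the RStudio Sessions menu as described above.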
- Python 3 notebook cannot run on a remote Hadoop cluster
- Jupyter notebooks on the python3 kernel do not support running on a remote Spark on a Hadoop cluster.
- Version 220.127.116.11 of POWER uses Python 3.6 even if the image or pod name is listed as 3.5
- For POWER, the Version 18.104.22.168 release installs Python 3.6. Python 3.5 is not available. Therefore, any environment working with Python 3 in this release uses version 3.6, regardless of the image or pod name.
- A project release cannot be created from an exported file in Version 22.214.171.124 of POWER
A project release cannot be created from an exported file. You should use an external git repository to create a project release from a project originating in a separate Watson Studio Local installation.
- matplotlib version 3.0.3 fixes "no attribute" error
- If import matplotlib results in the error AttributeError: module 'matplotlib' has no attribute 'artist', or import pandas results in the error AttributeError: module 'pandas' has no attribute 'core', then you must update matplotlib from version 3.0.0 to 3.0.3 in your notebook:
- Add a new cell to your notebook with the following command:
!pip3 install --user matplotlib==3.0.3
- Restart the kernel.
- Verify that matplotlib is version 3.0.3 and the pandas version is 0.23.0 by running the following commands:
!pip list | grep pandas
!pip list | grep matplotlib
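To confirm versions programmatically instead of eyeballing the pip list output, a minimal comparison helper can be used (a sketch assuming plain numeric version strings like "3.0.3"; it is not a substitute for pip's own version handling):

```python
def version_tuple(version):
    """Convert a version string such as "3.0.3" into a comparable tuple (3, 0, 3).
    Assumes purely numeric dot-separated components."""
    return tuple(int(part) for part in version.split("."))

def meets_minimum(installed, required):
    """Return True if the installed version is at least the required version,
    e.g. checking that matplotlib is 3.0.3 or newer after the upgrade."""
    return version_tuple(installed) >= version_tuple(required)
```

In a notebook you could then check `meets_minimum(matplotlib.__version__, "3.0.3")` after restarting the kernel.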
Character encoding support
All non-ASCII characters including Unicode are not supported for the following Watson Studio Local areas:
- Names and tags in image management and package management services.
- Filenames to be read or deleted by the Filetransfer service.
For non-ASCII characters in data source names, you must manually encode the data source name in UTF-8 before you retrieve any information from it. Python 2.7 example:
dsname = dataSet['datasource'].encode('utf-8')
dbTableOrQuery = dbTableOrQuery.encode(encoding='UTF-8',errors='strict')
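The example above is Python 2.7, where str is a byte string. In Python 3, strings are already Unicode and .encode('utf-8') produces a bytes object. A self-contained sketch (the data source name is a made-up example, not a real connection):

```python
# Hypothetical data source name containing non-ASCII characters
datasource_name = "ventes_café"

# Encode to UTF-8 bytes before passing the name to APIs that expect bytes
encoded_name = datasource_name.encode("utf-8")

# Decoding the bytes recovers the original string unchanged
assert encoded_name.decode("utf-8") == datasource_name
```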
Driver support
Custom JDBC 3 drivers are supported, but some features, such as Browse, won't work with the drivers.