Submitting a job to a deployed model

Once you've deployed a model in Watson Studio, you can submit jobs and retrieve results using the Execution API.

The sample Notebook DecisionOptimizationExecuteDeployedModel.ipynb shows how to use the Execution API. It's available in the DO-samples project. This Notebook assumes you have first deployed the Diet model, also available in the DO-samples project.

The first cell of the Notebook identifies your executionServiceURL.

executionServiceURL = 'https://' + server + "/dsvc/v1/" + urlAlias + "/domodel/" + deployedAssetName

This URL is made up of:

the network address of your Watson Studio cluster.
the route you specified when creating your release.
the name of the deployed asset, specified in Watson Machine Learning.

The cell also identifies your executionServiceModelURL:

executionServiceModelURL = executionServiceURL + "/model/" + savedModelName

which appends the name you gave to the scenario you saved as a model for deployment.
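For example, with hypothetical values for each of these pieces (substitute your own cluster details), the assembled URLs look like this:

```python
# Hypothetical values; replace with your own cluster details.
server = "mycluster.example.com"       # network address of the Watson Studio cluster
urlAlias = "do"                        # route specified when creating the release
deployedAssetName = "diet-deployment"  # deployed asset name from Watson Machine Learning
savedModelName = "diet-scenario"       # name given to the saved scenario

executionServiceURL = 'https://' + server + "/dsvc/v1/" + urlAlias + "/domodel/" + deployedAssetName
executionServiceModelURL = executionServiceURL + "/model/" + savedModelName

print(executionServiceModelURL)
# https://mycluster.example.com/dsvc/v1/do/domodel/diet-deployment/model/diet-scenario
```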

The second cell specifies the authorization token to execute your model.

executionToken = "<Here paste the token you find in IBM Watson Machine Learning>"
headers = {"Authorization" : executionToken}

You can find this token in Watson Machine Learning.

The third cell gets the solve configuration from the deployed model.

SOLVE_URL = [x['uri'] for x in obj['deploymentDescription']['links'] if x['target'] == 'solve'][0]
SOLVE_CONFIG = obj['deploymentDescription']['solveConfig']
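Here obj holds the deployment description returned by an earlier GET on the model URL. A minimal sketch of that lookup, assuming the JSON shape shown above (extract_solve_info is a hypothetical helper, not part of the sample Notebook):

```python
def extract_solve_info(obj):
    """Pull the solve URL and solve configuration out of a deployment
    description with the shape used in the cell above."""
    links = obj['deploymentDescription']['links']
    solve_url = [x['uri'] for x in links if x['target'] == 'solve'][0]
    return solve_url, obj['deploymentDescription']['solveConfig']

# With a live deployment, obj would come from the execution service, e.g.:
# r = requests.get(executionServiceModelURL, headers=headers)
# SOLVE_URL, SOLVE_CONFIG = extract_solve_info(r.json())
```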

To enable debugging, the next cell sets oaas.dumpZipName, which activates the dump of a zip file containing the input and output tables and the model, together with any run configuration parameters. The zip file is added to the debug container.

SOLVE_CONFIG['solveParameters'] = {"oaas.dumpZipName": ""}
SOLVE_CONFIG['attachments'].append({'category': 'output', 'type': 'CONTAINER', 'containerId': 'debug', 'name' : '.*\\.zip'})

The next cell writes the problem data in .csv format so that the tables can be sent with the job to the deployed model.

files = {'solveconfig': json.dumps(SOLVE_CONFIG)}
for i in solve_data:
    files[i + '.csv'] = solve_data[i].to_csv()
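As a sketch of the payload this loop produces, assuming solve_data is a dict of pandas DataFrames keyed by table name (the table name and SOLVE_CONFIG contents below are stand-ins, not the Diet model's real data):

```python
import json
import pandas as pd

# Stand-in for the solve configuration fetched from the deployment.
SOLVE_CONFIG = {'attachments': []}

# Hypothetical input table; the Diet model's real table names may differ.
solve_data = {
    'diet_food': pd.DataFrame({'name': ['bread', 'milk'],
                               'unit_cost': [0.25, 0.15]}),
}

files = {'solveconfig': json.dumps(SOLVE_CONFIG)}
for i in solve_data:
    files[i + '.csv'] = solve_data[i].to_csv()

print(sorted(files))  # 'diet_food.csv' alongside 'solveconfig'
```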

Next, the job is sent:

r = requests.post(SOLVE_URL, files=files, headers=headers)
obj = r.json()

JOB_URL = [x['href'] for x in obj['links'] if x['rel'] == 'self'][0]
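The self-link lookup above can be wrapped in a small helper; a sketch, assuming the link shape shown (job_url is a hypothetical helper, not part of the sample Notebook):

```python
def job_url(obj):
    """Return the job's own URL from the creation response: the link
    whose rel is 'self', as in the cell above."""
    return [x['href'] for x in obj['links'] if x['rel'] == 'self'][0]
```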

The code then queries the service to find out the job status:

from time import sleep
while True:
    r = requests.get(JOB_URL, headers=headers)
    executionStatus = r.json()
    status = executionStatus['solveState']['executionStatus']
    print(status)
    if status in ('PROCESSED', 'FAILED', 'INTERRUPTED'):
        break
    sleep(2)

Once the status changes to PROCESSED, FAILED, or INTERRUPTED the Notebook prints either the result tables or information about the failure or interruption status.
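One way to collect the result tables is to filter the job's output attachments for CSV files and download each one. A sketch, assuming attachments carry 'name' and 'url' fields as in the cell below (csv_attachments is a hypothetical helper):

```python
def csv_attachments(executionStatus):
    """Return the names of CSV output attachments in a job's
    execution status."""
    return [o['name'] for o in executionStatus.get('outputAttachments', [])
            if o['name'].endswith('.csv')]

# With a live job, each table could then be fetched and loaded, e.g.:
# import io, pandas as pd
# for o in executionStatus['outputAttachments']:
#     if o['name'].endswith('.csv'):
#         text = requests.get(o['url'], headers=headers).text
#         table = pd.read_csv(io.StringIO(text))
```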

The next cell retrieves the dump zip file from the debug container and saves it in the notebook folder. This file can be re-imported as a scenario in the Model Builder.

for o in executionStatus['outputAttachments']:
    if "dump_" in o['name']:
        zipName = o['name']
        DEBUG_FILE = o['url']
        print(DEBUG_FILE)
        z = requests.get(DEBUG_FILE, headers=headers, stream=True)
        with open(zipName, 'wb') as f:
            for chunk in z.iter_content(chunk_size=512 * 1024):
                if chunk:
                    f.write(chunk)

Finally, the Notebook deletes the job from the execution service, as well as the debug zip file from the debug container, to free up space.

requests.delete(JOB_URL, headers=headers)
requests.delete(executionServiceURL+'/data/containers/debug/'+zipName, headers=headers)