Web View Job Launch

This page guides you through the setup and running of a job using the Dis.co Web view.

Create a Python file

First, create a Python file. You can name it whatever you like, such as hellodisco.py.

"""
A simple script that is run on the dis.co platform.
It takes no input, configuration, or constants files.
"""
import platform
if __name__ == '__main__':
print('Hello World, from Python {}!'.format(platform.python_version()))
for k, v in platform.uname()._asdict().items():
print('{key: <16}{value}'.format(key=k, value=v))
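
If you run the script locally (for example, with python hellodisco.py), it prints a greeting followed by one line per field of platform.uname(). The values below are illustrative and will differ on your machine:

$ python hellodisco.py
Hello World, from Python 3.7.4!
system          Linux
node            build-host
release         5.4.0
version         #1 SMP ...
machine         x86_64
processor       x86_64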

Create a new job

1. Go to: https://app.dis.co/dashboard/jobs/view. The list of jobs is displayed.

2. Click New job. The New job form is displayed.

You'll see the following fields:

  • Job title: Type a name, or click to auto-generate a random name.

  • Job size: Select the job size: small, medium, or large.

  • Cloud: Select the cloud cluster where jobs will run. Select either discoCloud (the default) or your own cloud service if you added one. See Cloud Setup.

  • Basic files: Select a script file; for this example, hellodisco.py. The next section provides an example of adding data files as input.

  • Advanced files: Constants files (leave blank for this example).

3. After setting the job properties and files, click Create job. The job is created and can be run by clicking Run job in the window that appears.

Note: If you selected the Autorun this job checkbox in the New job form, the job runs automatically and there is no need to click Run job. If you did not select the checkbox, proceed to the next step.

4. Click Run job. The job starts in the "Queued" state, then moves to the "Running" state. You can view its progress in the relevant tabs.

5. When the job is done, click Results at the top right to download results for the entire job in zipped format. The file is named according to the convention <JobID>.zip.

6. To download the results or data file for a specific task, click the download button in the corresponding task row, then choose Data files or Results from the options that are displayed; a sketch for inspecting these files locally follows the list below.

  • A task file is named according to the convention <TaskID>.zip

  • A data file is named according to the convention input-data-<uuid>.pickle
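
If you want to inspect these downloads locally, the following is a minimal sketch using only the Python standard library. The file names are placeholders (substitute your actual <JobID> and <uuid> values), and the structure of the unpickled data depends on what your job received as input:

# Minimal sketch: inspect downloaded Dis.co artifacts locally.
# File names are placeholders for <JobID>.zip and
# input-data-<uuid>.pickle downloaded from the dashboard.
import pickle
import zipfile

# List and extract the whole-job results archive (<JobID>.zip).
with zipfile.ZipFile('my-job-id.zip') as archive:
    print(archive.namelist())          # see what the job produced
    archive.extractall('job-results')  # unpack into a local folder

# Load a task's input data file (input-data-<uuid>.pickle).
with open('input-data-1234.pickle', 'rb') as f:
    data = pickle.load(f)
print(type(data))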

Viewing real-time logs

To view the real-time stdout and stderr logs, click Real time log in the task row while the job is running. The real-time logs panel is displayed.

The panel has stdout and stderr tabs. Inspect the stdout tab to see all the output printed from your script.

Viewing the activity log

While the job is processing, the system displays the percentage completed and an activity log starting from the time of creation. After the job succeeds, the elapsed time since initialization continues to be displayed.

Archiving a job

When you archive a job, it is no longer displayed in the Job list but is retained in the database.

To archive a job, click the menu at the end of the job row and select Archive.

In the confirmation message that is displayed, click Archive job.

Duplicating a job

Duplicating enables you to rerun a job without re-uploading scripts or data files or changing the job configuration.

To duplicate a job, click the menu at the end of the job row and select Duplicate.

A new job configuration form is displayed, with 'Duplicate of' added to the job title. Edit the fields as necessary, then click Create job.

Next up is an example of how to run a job that uses several data files as input.