On Premises
Cromwell is an open-source workflow execution engine from the Broad Institute of MIT and Harvard, designed to run Workflow Description Language (WDL) workflows on local or cloud infrastructure.
The DNAstack WES Service is our open-source adapter that provides a GA4GH Workflow Execution Service (WES) API for Cromwell engines. When deployed alongside Cromwell, the WES Service offers:
OAuth2-based authentication
Comprehensive API operation auditing
Standardized API for workflow submission and monitoring
Support for run request attachments
Simplified log streaming without direct file access
Automatic file path translation in inputs
These are the main operations that occur when using the DNAstack WES Service as the workflow execution backend for Workbench:
Workbench submits the workflow to the WES API
The WES API translates the request and submits it to Cromwell
Cromwell generates individual task definitions
Cromwell dispatches task definitions to the underlying execution service
Tasks are executed on the environment's compute infrastructure
Outputs are written to the attached storage
Cromwell returns the result of running the workflow to the WES Service
The WES Service returns the workflow results to Workbench
Java 17+ installed
Start Cromwell in server mode
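For example (the jar name and configuration file below are illustrative; use the release you downloaded and your environment-specific Cromwell configuration):

```bash
# Start Cromwell as a server on its default port (8000).
# -Dconfig.file is optional and points at your environment-specific config.
java -Dconfig.file=cromwell.conf -jar cromwell-86.jar server
```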
On the same compute node, start the DNAstack WES Service and bind it so that it only accepts traffic from localhost. Binding the service to localhost guarantees that it does not accept traffic from external clients. By default, the DNAstack WES Service expects Cromwell to be available at http://localhost:8000; if Cromwell is listening on a different port or is not running locally, you can configure its location by specifying -Dwes.cromwell.url="<IP>:<PORT>". The DNAstack WES Service will start on port 8090.
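A minimal invocation might look like the following, assuming the downloaded release is an executable jar named wes-service.jar (an illustrative name):

```bash
# Start the DNAstack WES Service; it listens on port 8090.
# The wes.cromwell.url property is only required if Cromwell is not at its
# default location of http://localhost:8000.
java -Dwes.cromwell.url="http://localhost:8000" -jar wes-service.jar
```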
Submit a workflow to the WES Service
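As a sketch, using the standard GA4GH WES endpoint and a hypothetical hello.wdl workflow with an inputs.json parameters file:

```bash
# Submit a WDL workflow to the WES Service running locally on port 8090.
curl -X POST "http://localhost:8090/ga4gh/wes/v1/runs" \
  -F "workflow_url=hello.wdl" \
  -F "workflow_attachment=@hello.wdl" \
  -F "workflow_params=@inputs.json;type=application/json" \
  -F "workflow_type=WDL" \
  -F "workflow_type_version=1.0"
```

The response contains a run_id, which can be used with the standard /runs/{run_id} and /runs/{run_id}/status endpoints to monitor the run.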
The reverse proxy is responsible for handling all incoming traffic: it establishes a secure connection, validates the client certificate, and forwards traffic to the WES Service. With this approach, the WES Service itself never needs to be accessible except through localhost.
If you have not done so already, you will need to download a reverse proxy. This guide uses NGINX, a fast, easy-to-use reverse proxy that supports Mutual TLS out of the box.
The server certificate will be used by clients connecting to the NGINX instance to establish a secure (https) connection. You can use OpenSSL to generate the certificate.
Copy the following text and save it to a file called server.conf.
Replace both instances of the ${IP_ADDRESS} variable with the actual public IP address the NGINX instance will be accessible on. If you plan on connecting to the instance using a domain name that you own, you can uncomment the last line and replace the ${DNS_NAME} variable with the domain name you plan on using.
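The example below illustrates what such an OpenSSL request configuration might look like, using the ${IP_ADDRESS} and ${DNS_NAME} placeholders described above; the other fields are representative defaults and may differ from the file in your setup.

```ini
[req]
default_bits       = 4096
prompt             = no
default_md         = sha256
distinguished_name = dn
x509_extensions    = ext

[dn]
CN = ${IP_ADDRESS}

[ext]
subjectAltName = @alt_names

[alt_names]
IP.1  = ${IP_ADDRESS}
# DNS.1 = ${DNS_NAME}
```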
Using OpenSSL and the config you just created, generate a server certificate and private key.
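For example (standard OpenSSL flags; adjust the validity period to your needs):

```bash
# Create a self-signed server certificate (server.crt) and private key
# (server.key) valid for one year, using the server.conf created above.
openssl req -x509 -nodes -days 365 -newkey rsa:4096 \
  -keyout server.key -out server.crt -config server.conf
```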
The client certificate will be used by clients for authentication purposes. The server is provided with the client's certificate and validates that incoming traffic is signed with the client's private key.
Copy the following text and save it to a file called client.conf. Replace the ${COMMON_NAME} variable with a string that can be used to identify this certificate.
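An illustrative client.conf might look like this (fields other than the ${COMMON_NAME} placeholder are representative defaults):

```ini
[req]
default_bits       = 4096
prompt             = no
default_md         = sha256
distinguished_name = dn

[dn]
CN = ${COMMON_NAME}
```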
Using OpenSSL and the client.conf file, generate the client certificate and private key. You will also want to combine both files into a single client.pem file for later use with Workbench.
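For example:

```bash
# Create the client certificate (client.crt) and private key (client.key),
# then concatenate them into a single client.pem for use with Workbench.
openssl req -x509 -nodes -days 365 -newkey rsa:4096 \
  -keyout client.key -out client.crt -config client.conf
cat client.crt client.key > client.pem
```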
The client.key and client.pem files contain sensitive information that will grant anyone access to your WES service. Treat these files like a password.
Copy the following text and save it to a file named nginx.conf. This configuration will start nginx in daemon mode, listening on port 8443. All incoming requests must be authenticated using the client.key.
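The configuration below is a representative sketch; the certificate paths and the WES Service port (8090, the default noted above) are assumptions, so adapt them to your environment.

```nginx
events {}

http {
  server {
    # Terminate TLS on port 8443 using the server certificate
    listen 8443 ssl;
    ssl_certificate         /etc/nginx/certs/server.crt;
    ssl_certificate_key     /etc/nginx/certs/server.key;

    # Require clients to present the client certificate (Mutual TLS)
    ssl_client_certificate  /etc/nginx/certs/client.crt;
    ssl_verify_client       on;

    location / {
      # Forward authenticated requests to the WES Service on localhost
      proxy_pass http://127.0.0.1:8090;
      proxy_set_header Host $host;
    }
  }
}
```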
Start NGINX running in the background
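For example (nginx requires an absolute path for the -c flag, and it runs as a background daemon by default):

```bash
sudo nginx -c "$(pwd)/nginx.conf"
```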
Test that you are able to connect to the WES Service through the nginx reverse proxy, authenticating with the client certificate over an https connection.
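A sketch of such a check, using the standard GA4GH WES service-info endpoint; <NODE_ADDRESS> is a placeholder and must match an address (IP or DNS name) in the server certificate for verification to succeed:

```bash
# Present the client certificate and trust the self-signed server certificate.
curl --cacert server.crt --cert client.pem \
  "https://<NODE_ADDRESS>:8443/ga4gh/wes/v1/service-info"
```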
Finally, you will want to make sure you have adjusted any firewall rules to allow incoming requests on port 8443. You can check to make sure that external connectivity is working by running the same cURL command as above, changing the IP address to be the compute node's public IP:
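For example, with <PUBLIC_IP> as a placeholder for the node's public IP address:

```bash
# Run from a machine outside the compute node's network.
curl --cacert server.crt --cert client.pem \
  "https://<PUBLIC_IP>:8443/ga4gh/wes/v1/service-info"
```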
Once you have completed the setup process and validated that the WES instance is publicly accessible, you are ready to connect it to Workbench. To connect to Workbench you will need the following information:
The public facing IP address that the WES Service can be reached at.
Step 2: Click the Add Engine button in the top right hand corner and select the GA4GH Workflow Execution Service engine type.
Step 3: Fill in the engine information:
Type a readable name for the engine. The ID should be auto-generated based on the name.
For the URL field, type the complete IP and port, prefixed by https, that will be used to access your WES Service. For example, if the compute node was accessible at IP 192.192.1.1 and port 8443, then the URL would be: https://192.192.1.1:8443.
Step 4: Fill in the Provider and Region. If you are running this on-premises, select the Self Hosted option; otherwise choose the provider that corresponds to your compute environment.
Step 5: If you are using a self-signed certificate then under the Environment section, toggle Configure SSL to on.
In the Server Certificate input box, paste the contents from the server.crt file.
Step 6: Under the **Authentication** section, change the Method to be Mutual TLS.
In the Client Key and Certificate input box, paste the contents from the client.pem file. This file should contain both the client certificate and the client private key.
Step 7: Click the Save button. If Workbench was able to connect to the engine, you will see a message informing you that it was added, and you will be redirected back to the Settings page.
Various status monitoring operations also take place and are reported by Workbench.
The following guide describes how to deploy the DNAstack WES Service on a local HPC or compute infrastructure and connect it to Workbench. There are many configuration options which will not be covered; for a complete list, please visit the DNAstack WES Service repository.
If you have previously followed the setup guide, you can skip ahead to connecting your engine to Workbench.
Download the latest release of the WES Service
Download the latest release of Cromwell (or have Cromwell accessible locally)
NGINX (or another reverse proxy) installed
You can get started in seconds with the DNAstack WES Service and Cromwell. Each environment and compute system will require different Cromwell configurations, which you can find described in the Cromwell documentation.
In order to ensure secure communication, Workbench requires both an SSL connection and authentication to be enabled on the WES Service instance. The simplest way to enable these features is through the use of Mutual TLS and a reverse proxy (like NGINX) running beside the WES Service.
Contents of the server.crt file generated earlier.
Contents of the client.pem file generated earlier.
Step 1: Log into Workbench and navigate to the Settings page.
Step 8: Once the engine has been created, you are now ready to run workflows with it through Workbench.