Google Cloud Platform
The Google Cloud Platform (GCP) engine allows you to run workflows on GCP using Cromwell. This engine is deployed using the DNAstack engine installer, following the Cromwell on GCP installation instructions.
Since the underlying engine is Cromwell, you can use the same set of parameters that Cromwell supports. Workbench's parameters are synonymous with Cromwell's parameters, and you can find the full list in the Cromwell documentation.
The following parameters are specific to the GCP engine. For the complete list of options available in Workbench for a Cromwell engine, see the page on Cromwell engine parameters.
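As an illustration, the sketch below shows how a few of these parameters could be expressed together as a Cromwell workflow options JSON object. The bucket, project, and service account names are placeholders, and the exact mechanism by which Workbench passes these values to Cromwell may differ from a hand-written options file.

```json
{
  "jes_gcs_root": "gs://my-workflow-bucket/executions",
  "google_project": "my-gcp-project",
  "google_compute_service_account": "workflows@my-gcp-project.iam.gserviceaccount.com"
}
```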
Available Parameters
jes_gcs_root=STRING
Where outputs of the workflow will be written. Expects this to be a GCS URL (e.g. gs://my-bucket/workflows). If this is not set, this defaults to the value within backend.jes.config.root in the Configuration.
google_compute_service_account=STRING
Alternate service account to use on the compute instance (e.g. my-service-account@my-project.iam.gserviceaccount.com). If this is not set, this defaults to the value within backend.jes.config.genomics.compute-service-account in the Configuration if specified, or default otherwise.
google_project=STRING
Google project used to execute this workflow.
auth_bucket=STRING
A GCS URL that only Cromwell can write to. The Cromwell account is determined by the google.authScheme (and the corresponding google.userAuth and google.serviceAuth). Defaults to the value in jes_gcs_root.
monitoring_script=STRING
Specifies a GCS URL to a script that will be invoked prior to the user command being run. For example, if the value of monitoring_script is "gs://bucket/script.sh", it will be invoked as ./script.sh > monitoring.log &. The resulting monitoring.log file will be automatically de-localized.
monitoring_image=STRING
Specifies a Docker image to monitor the task. This image will run concurrently with the task container, and provides an alternative mechanism to monitoring_script (the latter runs inside the task container). For example, one can use quay.io/broadinstitute/cromwell-monitor, which reports cpu/memory/disk utilization metrics to Stackdriver.
monitoring_image_script=STRING
Specifies a GCS URL to a script that will be invoked on the container running the monitoring_image. This script will be invoked instead of the ENTRYPOINT defined in the monitoring_image. Unlike monitoring_script, no files are automatically de-localized.
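For instance, the sketch below combines monitoring_image with monitoring_image_script; both the image reference and the script URL are placeholders chosen for illustration.

```json
{
  "monitoring_image": "gcr.io/my-gcp-project/task-monitor:latest",
  "monitoring_image_script": "gs://my-bucket/scripts/custom-monitor.sh"
}
```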
google_labels=OBJECT
An object containing only string values. Represents custom labels to send with PAPI job requests. Per the PAPI specification, each key and value must conform to the regex [a-z]([-a-z0-9]*[a-z0-9])?.
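For example, a label object conforming to that pattern might look like the following sketch; the label keys and values here are illustrative only.

```json
{
  "google_labels": {
    "team": "genomics",
    "cost-center": "research-01"
  }
}
```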
enable_ssh_access=BOOLEAN
If set to true, will enable SSH access to the Google Genomics worker machines. Please note that this is a community contribution and is not officially supported by the Cromwell development team.
delete_intermediate_output_files=BOOLEAN
Experimental: Any File variables referenced in call output sections that are not found in the workflow output section will be considered an intermediate File. When the workflow finishes and this option is set to true, all intermediate File objects will be deleted from GCS. Cromwell must be run with the configuration value system.delete-workflow-files set to true. The default for both values is false. NOTE: The behavior of this option on other backends is unspecified.
enable_fuse=BOOLEAN
Specifies whether workflow tasks should be submitted to Google Pipelines with an additional ENABLE_FUSE flag. It causes the container to be executed with CAP_SYS_ADMIN. Use it only for trusted containers. Please note that this is a community contribution and is not officially supported by the Cromwell development team.