AWS HealthOmics


Workbench supports a number of engine parameters for AWS HealthOmics that can greatly impact the performance of your workflow runs. These parameters can be set when you create or update an engine in Workbench (see Parameters) or when you submit a workflow run (see Submitting and Monitoring a Workflow Run).
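
Engine parameters are supplied as simple key=value pairs. As a minimal sketch, a configuration that captures all engine logs and automatically removes completed runs once the run quota is reached might look like this (the specific values are illustrative, not recommendations):

  log_level=ALL
  retention_mode=REMOVE
  priority=500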

Available Parameters

storage_type=[DYNAMIC|STATIC]

The storage_type parameter allows you to specify the type of storage that will be used by the engine. The two options are DYNAMIC and STATIC.

  • DYNAMIC: The engine will use a dynamic storage type. This means the engine automatically scales the storage based on the size of the data being processed. While you do not need to worry about running out of storage space, this may reduce performance and offers lower I/O throughput.

  • STATIC: The engine will use a static storage type. This means the engine uses a fixed amount of storage space backed by an FSx for Lustre file store. This can improve performance and offers higher I/O throughput, but you may run out of storage space if your data exceeds the allocated capacity.

storage_capacity=NUMBER

If you specify STATIC for the storage_type parameter, you must also specify the storage_capacity parameter. This sets the total size, in GiB, to allocate for the FSx for Lustre file store.
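
As a minimal sketch, a static-storage configuration pairs the two parameters as follows (the 2400 GiB figure is illustrative; check the FSx for Lustre documentation for the capacities your region supports):

  storage_type=STATIC
  storage_capacity=2400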

log_level=[OFF|FATAL|ERROR|ALL]

The log_level parameter allows you to specify the level of logging that will be used by the engine and sent to Amazon CloudWatch. The four options are OFF, FATAL, ERROR, and ALL.

  • OFF: No logging will be performed by the engine

  • FATAL: Only fatal errors encountered by the engine will be logged

  • ERROR: All errors encountered by the engine will be logged

  • ALL: All events will be logged by the engine

priority=NUMBER

The priority parameter allows you to specify the priority of the run as a number from 1 to 1000; the higher the number, the higher the priority. This can be useful when you have multiple queued runs and want to ensure the most urgent ones are processed first.

retention_mode=[RETAIN|REMOVE]

The AWS HealthOmics API has quotas on the number of runs that can be stored. The retention_mode parameter allows you to specify whether a completed run should be kept or removed once the quota is reached. The two options are RETAIN and REMOVE.

  • RETAIN: The run will be kept even if the quota is exceeded. This may prevent you from submitting new runs.

  • REMOVE: The run will be removed if the quota is exceeded. This may allow you to submit new runs, but you will lose the metadata associated with the run. Run outputs stored in S3 will not be deleted.

run_group_id=STRING

In AWS HealthOmics, you can define a RunGroup to easily limit the compute resources used across all runs within the run group. You can set the maximum vCPUs, maximum duration, or maximum number of concurrent runs to help limit your use of compute resources. The run_group_id parameter allows you to specify the unique identifier of the run group.
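
As a sketch of how the pieces fit together, you might create the run group with the AWS CLI and then pass its generated ID to Workbench (the group name and limits below are hypothetical; consult the AWS HealthOmics documentation for the authoritative command reference):

  # Create a run group capped at 128 vCPUs, 10 concurrent runs, and 2880 minutes of runtime
  aws omics create-run-group --name nightly-batch --max-cpus 128 --max-runs 10 --max-duration 2880

  # Supply the run group ID returned by the command above as an engine parameter
  run_group_id=<run-group-id>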

accelerators=[GPU]

AWS HealthOmics supports attaching specialized hardware accelerators to your workflows and runs. These accelerators can have a significant impact on the performance of your workflows. The accelerators parameter allows you to specify the type of accelerator to attach to the engine. At this time, the only option is GPU.
