Command Line Tools#

The Open Data Cube offers a command-line interface (CLI) for common administrative tasks.

datacube#

Data Cube command-line interface

Usage

datacube [OPTIONS] COMMAND [ARGS]...

Options

--version#

Display the Open Data Cube version number and exit.

-v, --verbose#

Use multiple times for more verbosity

--log-file <log_file>#

Specify log file

-E, --env <env>#

The ODC environment to use. Defaults to ‘default’.

-C, --config, --config-file <config>#

A path to a configuration file. Multiple paths can be provided, but only the first that can be read will be used.

-R, --raw-config, --config-text <raw_config>#

Pass the raw contents of the configuration file as a string. May be in JSON, YAML or INI format. Cannot be used with the -C/--config option.

--log-queries#

Print database queries.
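The global options above go before the subcommand. A minimal sketch of combining them (the environment name `dev` and the log file path are placeholders, and the inline configuration keys are illustrative examples of ODC configuration, not a complete working config):

```shell
# Run a subcommand against a non-default ODC environment with extra
# verbosity, writing log output to a file ("dev" must be an environment
# defined in your datacube configuration file):
datacube -v -v --env dev --log-file /tmp/datacube.log system check

# Supply configuration inline as a string instead of via a file
# (cannot be combined with -C/--config; keys shown are illustrative):
datacube --raw-config '{"default": {"db_hostname": "localhost"}}' system check
```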

dataset#

Dataset management commands

Usage

datacube dataset [OPTIONS] COMMAND [ARGS]...

add#

Add datasets to the Data Cube

Usage

datacube dataset add [OPTIONS] [DATASET_PATHS]...

Options

-p, --product <product_names>#

Only match against products specified with this option, you can supply several by repeating this option with a new product name

-x, --exclude-product <exclude_product_names>#

Attempt to match to all products in the DB except for products specified with this option, you can supply several by repeating this option with a new product name

--auto-add-lineage, --no-auto-add-lineage#

WARNING: will be deprecated in datacube v1.9. By default, lineage datasets that are missing from the database are added automatically. This can be disabled if lineage is expected to already be present in the DB; in that case, add will abort when it encounters a missing lineage dataset.

--verify-lineage, --no-verify-lineage#

WARNING: will be deprecated in datacube v1.9. Lineage referenced in the metadata document should match what is in the DB. By default, top-level datasets whose lineage data differs from the version in the DB are skipped. This option allows the verification step to be omitted.

--dry-run#

Check if everything is ok

--ignore-lineage#

Pretend that there is no lineage data in the datasets being indexed

--archive-less-mature#

Archive less mature versions of the dataset

Arguments

DATASET_PATHS#

Optional argument(s)
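A sketch of typical invocations (the product name `ls8_example` and the metadata document paths are placeholders):

```shell
# Index two dataset metadata documents, restricting matching to a
# single product:
datacube dataset add --product ls8_example \
    /data/scene1/metadata.yaml /data/scene2/metadata.yaml

# Preview what would be indexed without writing to the database:
datacube dataset add --dry-run /data/scene1/metadata.yaml
```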

archive#

Archive datasets

Usage

datacube dataset archive [OPTIONS] [IDS]...

Options

-d, --archive-derived#

Also recursively archive derived datasets

--dry-run#

Don’t archive. Display datasets that would get archived

--all#

Archive all non-archived datasets (warning: may be slow on large databases)

Arguments

IDS#

Optional argument(s)

count#

Count datasets

Sample usage:
datacube dataset count --period "1 year" --query "time in [2020, 2023]" --query "region='101010'" product_name

Usage

datacube dataset count [OPTIONS] [PRODUCTS]...

Options

--count-only#

Display total result count without any grouping.

--period <period>#

Group product counts in time slices of the given period, e.g. 1 day, 6 months, 1 year.

--status <status>#

Whether to count archived datasets

- ‘active’: count only active datasets [default]
- ‘archived’: count only archived datasets
- ‘all’: count both active and archived datasets
Options:

active | archived | all

--query <query>#

Query expressions to filter datasets by searchable fields such as date, spatial extents, maturity, or other properties.

FIELD = VALUE
FIELD in DATE-RANGE
FIELD in [START, END]
TIME < DATE
TIME > DATE

START and END can be either numbers or dates.
Dates follow the YYYY, YYYY-MM, or YYYY-MM-DD format.

FIELD: x, y, lat, lon, time, region, …

e.g. 'time in [1996-01-01, 1996-12-31]'
'time in 1996'
'time > 2020-01'
'lon in [130, 140]' 'lat in [-40, -30]'
'region="101010"'

-f <f>#

Output format

Default:

'yaml'

Options:

csv | yaml | json

Arguments

PRODUCTS#

Optional argument(s)

find-duplicates#

Search for duplicate indexed datasets

Usage

datacube dataset find-duplicates [OPTIONS] [FIELDS]...

Options

-p, --product <product_names>#

Only search within product(s) specified with this option. You can supply several by repeating this option with a new product name.

-f <f>#

Output format

Default:

'yaml'

Options:

csv | yaml | json

Arguments

FIELDS#

Optional argument(s)

info#

Display dataset information

Usage

datacube dataset info [OPTIONS] [IDS]...

Options

--show-sources#

Also show source datasets

--show-derived#

Also show derived datasets

-f <f>#

Output format

Default:

'yaml'

Options:

csv | yaml | json

--max-depth <max_depth>#

Maximum sources/derived depth to traverse

Arguments

IDS#

Optional argument(s)
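A sketch of inspecting a dataset's provenance (the UUID is a placeholder):

```shell
# Show full provenance for a dataset, following source datasets up to
# two levels deep:
datacube dataset info --show-sources --max-depth 2 \
    f884df9b-4458-47fd-a9d2-1a52a2db8a1a

# Emit the same information as JSON for machine consumption:
datacube dataset info -f json f884df9b-4458-47fd-a9d2-1a52a2db8a1a
```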

purge#

Purge archived datasets

Usage

datacube dataset purge [OPTIONS] [IDS]...

Options

--dry-run#

Don’t purge. Display datasets that would get purged

--all#

Purge all archived datasets (warning: may be slow on large databases)

--force#

Allow active datasets to be deleted (default: false)

Arguments

IDS#

Optional argument(s)

restore#

Restore datasets

Usage

datacube dataset restore [OPTIONS] [IDS]...

Options

-d, --restore-derived#

Also recursively restore derived datasets

--dry-run#

Don’t restore. Display datasets that would get restored

--derived-tolerance-seconds <derived_tolerance_seconds>#

Only restore derived datasets that were archived within this many seconds of the original dataset

--all#

Restore all archived datasets (warning: may be slow on large databases)

Arguments

IDS#

Optional argument(s)

update#

Update datasets in the Data Cube

Usage

datacube dataset update [OPTIONS] [DATASET_PATHS]...

Options

--allow-any <keys_that_can_change>#

Allow any changes to the specified key (a.b.c)

--dry-run#

Check if everything is ok

--location-policy <location_policy>#

What to do with previously recorded dataset location(s)

- ‘keep’: keep as alternative location [default]
- ‘archive’: mark as archived
- ‘forget’: remove from the index
Options:

keep | archive | forget

--archive-less-mature#

Archive less mature versions of the dataset

Arguments

DATASET_PATHS#

Optional argument(s)
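A sketch of a guarded update (the metadata key path and document path are placeholders):

```shell
# Permit changes under one specific metadata key while updating, and mark
# the previously recorded dataset location as archived rather than keeping
# it as an alternative location:
datacube dataset update --allow-any properties.dataset_maturity \
    --location-policy archive /data/scene1/metadata.yaml
```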

metadata#

Metadata type commands

Usage

datacube metadata [OPTIONS] COMMAND [ARGS]...

add#

Add or update metadata types in the index

Usage

datacube metadata add [OPTIONS] [FILES]...

Options

--allow-exclusive-lock, --forbid-exclusive-lock#

Allow index to be locked from other users while updating (default: false)

Arguments

FILES#

Optional argument(s)

list#

List metadata types that are defined in the generic index.

Usage

datacube metadata list [OPTIONS]

show#

Show information about a metadata type.

Usage

datacube metadata show [OPTIONS] [METADATA_TYPE_NAME]...

Options

-f <output_format>#

Output format

Default:

'yaml'

Options:

yaml | json

Arguments

METADATA_TYPE_NAME#

Optional argument(s)

update#

Update existing metadata types.

An error will be thrown if a change is potentially unsafe.

(An unsafe change is anything that may potentially make the metadata type incompatible with existing types of the same name)

Usage

datacube metadata update [OPTIONS] [FILES]...

Options

--allow-unsafe, --forbid-unsafe#

Allow unsafe updates (default: false)

--allow-exclusive-lock, --forbid-exclusive-lock#

Allow index to be locked from other users while updating (default: false)

-d, --dry-run#

Check if everything is ok

Arguments

FILES#

Optional argument(s)

product#

Product commands

Usage

datacube product [OPTIONS] COMMAND [ARGS]...

add#

Add or update products in the generic index.

Usage

datacube product add [OPTIONS] [FILES]...

Options

--allow-exclusive-lock, --forbid-exclusive-lock#

Allow index to be locked from other users while updating (default: false)

Arguments

FILES#

Optional argument(s)

delete#

Delete products and all associated datasets

Usage

datacube product delete [OPTIONS] [PRODUCT_NAMES]...

Options

--force#

Allow a product with active datasets to be deleted (default: false)

-d, --dry-run#

Check if everything is ok

Arguments

PRODUCT_NAMES#

Optional argument(s)

list#

List products that are defined in the generic index.

Usage

datacube product list [OPTIONS]

Options

-f <output_format>#

Output format

Default:

'default'

Options:

default | csv | yaml | tab

show#

Show details about a product in the generic index.

Usage

datacube product show [OPTIONS] [PRODUCT_NAME]...

Options

-f <output_format>#

Output format

Default:

'yaml'

Options:

yaml | json

Arguments

PRODUCT_NAME#

Optional argument(s)

update#

Update existing products.

An error will be thrown if a change is potentially unsafe.

(An unsafe change is anything that may potentially make the product incompatible with existing datasets of that type)

Usage

datacube product update [OPTIONS] [FILES]...

Options

--allow-unsafe, --forbid-unsafe#

Allow unsafe updates (default: false)

--allow-exclusive-lock, --forbid-exclusive-lock#

Allow index to be locked from other users while updating (default: false)

-d, --dry-run#

Check if everything is ok

Arguments

FILES#

Optional argument(s)
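A sketch of a cautious product update workflow (`my_product.yaml` is a placeholder definition file):

```shell
# First preview the update, then apply it even if the change is flagged
# as unsafe (e.g. it may make the product incompatible with existing
# datasets of that type):
datacube product update --dry-run my_product.yaml
datacube product update --allow-unsafe my_product.yaml
```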

spindex#

Spatial Indexes

Usage

datacube spindex [OPTIONS] COMMAND [ARGS]...

create#

Create unpopulated spatial index for particular CRSes

Usage

datacube spindex create [OPTIONS] [SRIDS]...

Options

--init-users, --no-init-users#

Include user roles and grants. (default: true)

-u, --update, --no-update#

Populate the spatial index after creation (slow). For finer grained updating, use the ‘spindex update’ command

Arguments

SRIDS#

Optional argument(s)
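SRIDs are given as numeric spatial reference IDs (EPSG codes in the common case). A sketch, assuming EPSG:3577 and EPSG:4326 are the CRSes of interest:

```shell
# Create a spatial index for EPSG:3577 and populate it immediately
# (populating is slow on large databases):
datacube spindex create --update 3577

# Create an unpopulated index for another CRS, to be populated later
# with 'datacube spindex update':
datacube spindex create 4326
```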

drop#

Drop existing spatial indexes for particular CRSs

Usage

datacube spindex drop [OPTIONS] [SRIDS]...

Options

-f, --force, --no-force#

If set, does not ask the user to confirm deletion

Arguments

SRIDS#

Optional argument(s)

list#

List all CRSs for which spatial indexes exist in this index

Usage

datacube spindex list [OPTIONS]

update#

Update a spatial index for particular CRSs.

Usage

datacube spindex update [OPTIONS] [SRIDS]...

Options

-p, --product <product>#

The name of a product to update the spatial index for (can be used multiple times for multiple products)

-d, --dataset <dataset>#

The id of a dataset to update the spatial index for (can be used multiple times for multiple datasets)

Arguments

SRIDS#

Optional argument(s)

system#

System commands

Usage

datacube system [OPTIONS] COMMAND [ARGS]...

check#

Check and display current configuration

Usage

datacube system check [OPTIONS]

clone#

Clone an existing ODC index into a new, empty index

Usage

datacube system clone [OPTIONS] SOURCE_ENV

Options

--batch-size <batch_size>#

Size of batches for bulk-adding to the new index

--skip-lineage, --no-skip-lineage#

Do not load lineage data where possible. (default: false, i.e. lineage is not skipped)

--lineage-only, --no-lineage-only#

Clone lineage data only. (default: false)

Arguments

SOURCE_ENV#

Required argument
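A sketch of cloning between two configured environments (the environment names `legacy` and `new_env` are placeholders defined in your datacube configuration):

```shell
# Clone the index from the "legacy" environment into the "new_env"
# environment, using larger batches for the bulk-add:
datacube --env new_env system clone --batch-size 10000 legacy
```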

init#

Initialise the database

Usage

datacube system init [OPTIONS]

Options

--default-types, --no-default-types#

Add default types? (default: true)

--init-users, --no-init-users#

Include user roles and grants. (default: true)

--recreate-views, --no-recreate-views#

Recreate dynamic views

--rebuild, --no-rebuild#

Rebuild all dynamic fields (caution: slow)

--lock-table, --no-lock-table#

Allow table to be locked (eg. while creating missing indexes)

user#

User management commands

Usage

datacube user [OPTIONS] COMMAND [ARGS]...

create#

Create a User

Usage

datacube user create [OPTIONS] {user|manage|admin} USER

Options

--description <description>#

Arguments

ROLE#

Required argument

USER#

Required argument

delete#

Delete a User

Usage

datacube user delete [OPTIONS] [USERS]...

Arguments

USERS#

Optional argument(s)

grant#

Grant a role to users

Usage

datacube user grant [OPTIONS] {user|manage|admin} [USERS]...

Arguments

ROLE#

Required argument

USERS#

Optional argument(s)
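A sketch of the user workflow (the usernames are placeholders):

```shell
# Create a user with the "user" role, then grant the "manage" role to
# two existing database users:
datacube user create user alice --description "Analyst account"
datacube user grant manage alice bob
```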

list#

List users

Usage

datacube user list [OPTIONS]

Options

-f <f>#

Output format

Default:

'yaml'

Options:

csv | yaml

Note

For the complete list of available CLI commands, see opendatacube/datacube-core.