--account <ACCOUNT> | Google Cloud Platform user account to use for invocation. Overrides the default *core/account* property value for this command invocation |
--add-maintenance-exclusion-end <TIME_STAMP> | End time of the exclusion window. Must take place after the start time. See
$ gcloud topic datetimes for information on time formats |
--add-maintenance-exclusion-name <NAME> | A descriptor for the exclusion that can be used to remove it. If not specified,
it will be autogenerated |
--add-maintenance-exclusion-start <TIME_STAMP> | Start time of the exclusion window (can occur in the past). If not specified,
the current time will be used. See $ gcloud topic datetimes for information on
time formats |
--async | Return immediately, without waiting for the operation in progress to
complete |
--autoprovisioning-config-file <AUTOPROVISIONING_CONFIG_FILE> | Path of the JSON/YAML file which contains information about the
cluster's node autoprovisioning configuration. Currently it contains
a list of resource limits, identity defaults for autoprovisioning, node upgrade
settings, node management settings, minimum cpu platform, node locations for
autoprovisioning, disk type and size configuration, shielded instance settings,
and customer-managed encryption keys settings.
+
Resource limits are specified in the field 'resourceLimits'.
Each resource limits definition contains three fields:
resourceType, maximum and minimum.
Resource type can be "cpu", "memory" or an accelerator (e.g.
"nvidia-tesla-k80" for NVIDIA Tesla K80). Use gcloud compute accelerator-types
list to learn about available accelerator types.
Maximum is the maximum allowed amount with the unit of the resource.
Minimum is the minimum allowed amount with the unit of the resource.
+
Identity defaults contain at most one of the following fields:
serviceAccount: The Google Cloud Platform Service Account to be used by node VMs in
autoprovisioned node pools. If not specified, the project's default service account
is used.
scopes: A list of scopes to be used by node instances in autoprovisioned node pools.
Multiple scopes can be specified, separated by commas. For information on defaults,
look at:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--scopes
+
Node Upgrade settings are specified under the field
'upgradeSettings', which has the following fields:
maxSurgeUpgrade: Number of extra (surge) nodes to be created on
each upgrade of an autoprovisioned node pool.
maxUnavailableUpgrade: Number of nodes that can be unavailable at the
same time on each upgrade of an autoprovisioned node pool.
+
Node Management settings are specified under the field
'nodeManagement', which has the following fields:
enableAutoUpgrade: A boolean field that indicates if node
autoupgrade is enabled for autoprovisioned node pools.
enableAutoRepair: A boolean field that indicates if node
autorepair is enabled for autoprovisioned node pools.
+
minCpuPlatform: If specified, new autoprovisioned nodes will be
scheduled on a host with the specified CPU platform or a newer one.
Note: Min CPU platform can only be specified in Beta and Alpha.
+
Autoprovisioning locations is a set of zones where new node pools
can be created by Autoprovisioning. Autoprovisioning locations are
specified in the field 'autoprovisioningLocations'. All zones must
be in the same region as the cluster's master(s).
+
Disk type and size are specified under the 'diskType' and 'diskSizeGb' fields,
respectively. If specified, new autoprovisioned nodes will be created with
custom boot disks configured by these settings.
+
Shielded instance settings are specified under the 'shieldedInstanceConfig'
field, which has the following fields:
enableSecureBoot: A boolean field that indicates if secure boot is enabled for
autoprovisioned nodes.
enableIntegrityMonitoring: A boolean field that indicates if integrity
monitoring is enabled for autoprovisioned nodes.
+
Customer Managed Encryption Keys (CMEK) used by new auto-provisioned node pools
can be specified in the 'bootDiskKmsKey' field |
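As a sketch, the fields described above can be combined into a single config file. This is an illustrative example, not an authoritative schema: the field names follow the descriptions in this entry, and every value (project, service account, key ring, zones, limits) is a placeholder.

```yaml
# Hypothetical autoprovisioning config file -- all values are illustrative.
resourceLimits:
- resourceType: cpu
  minimum: 1
  maximum: 100
- resourceType: memory
  maximum: 1000
- resourceType: nvidia-tesla-k80
  maximum: 4
serviceAccount: example-sa@example-project.iam.gserviceaccount.com
scopes:
- https://www.googleapis.com/auth/devstorage.read_only
upgradeSettings:
  maxSurgeUpgrade: 1
  maxUnavailableUpgrade: 0
nodeManagement:
  enableAutoUpgrade: true
  enableAutoRepair: true
minCpuPlatform: Intel Skylake
autoprovisioningLocations:
- us-central1-a
- us-central1-b
diskType: pd-ssd
diskSizeGb: 100
shieldedInstanceConfig:
  enableSecureBoot: true
  enableIntegrityMonitoring: true
bootDiskKmsKey: projects/example-project/locations/us-central1/keyRings/example-ring/cryptoKeys/example-key
```

The file would then be passed by path, e.g. $ {command} example-cluster --autoprovisioning-config-file=config.yaml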
--autoprovisioning-locations <ZONE> | Set of zones where new node pools can be created by autoprovisioning.
All zones must be in the same region as the cluster's master(s).
Multiple locations can be specified, separated by commas |
--autoprovisioning-max-surge-upgrade <AUTOPROVISIONING_MAX_SURGE_UPGRADE> | Number of extra (surge) nodes to be created on each upgrade of an
autoprovisioned node pool |
--autoprovisioning-max-unavailable-upgrade <AUTOPROVISIONING_MAX_UNAVAILABLE_UPGRADE> | Number of nodes that can be unavailable at the same time on each
upgrade of an autoprovisioned node pool |
--autoprovisioning-min-cpu-platform <PLATFORM> | If specified, new autoprovisioned nodes will be scheduled on a host with
the specified CPU platform or a newer one |
--autoprovisioning-scopes <SCOPE> | The scopes to be used by node instances in autoprovisioned node pools.
Multiple scopes can be specified, separated by commas. For information
on defaults, look at:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--scopes |
--autoprovisioning-service-account <AUTOPROVISIONING_SERVICE_ACCOUNT> | The Google Cloud Platform Service Account to be used by node VMs in
autoprovisioned node pools. If not specified, the project default
service account is used |
--billing-project <BILLING_PROJECT> | The Google Cloud Platform project that will be charged quota for operations performed in gcloud. If you need to operate on one project, but need quota against a different project, you can use this flag to specify the billing project. If both `billing/quota_project` and `--billing-project` are specified, `--billing-project` takes precedence. Run `$ gcloud config set --help` to see more information about `billing/quota_project` |
--clear-maintenance-window | If set, remove the maintenance window that was set with --maintenance-window
family of flags |
--clear-resource-usage-bigquery-dataset | Disables exporting cluster resource usage to BigQuery |
--cloud-run-config <load-balancer-type=EXTERNAL> | Configurations for the Cloud Run addon. Requires `--addons=CloudRun` for create
and `--update-addons=CloudRun=ENABLED` for update.
+
*load-balancer-type*::: (Optional) The type of load balancer: EXTERNAL or INTERNAL.
Example:
+
$ {command} example-cluster --cloud-run-config=load-balancer-type=INTERNAL |
--complete-credential-rotation | Complete the IP and credential rotation for this cluster. For example:
+
$ {command} example-cluster --complete-credential-rotation
+
This causes the cluster to stop serving its old IP, return to a single IP, and invalidate old credentials |
--complete-ip-rotation | Complete the IP rotation for this cluster. For example:
+
$ {command} example-cluster --complete-ip-rotation
+
This causes the cluster to stop serving its old IP, and return to a single IP state |
--configuration <CONFIGURATION> | The configuration to use for this command invocation. For more
information on how to use configurations, run:
`gcloud topic configurations`. You can also use the CLOUDSDK_ACTIVE_CONFIG_NAME environment
variable to set the equivalent of this flag for a terminal
session |
--database-encryption-key <DATABASE_ENCRYPTION_KEY> | Enable Database Encryption.
+
Enable database encryption that will be used to encrypt Kubernetes Secrets at
the application layer. The key provided should be the resource ID in the format of
`projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]`.
For more information, see
https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-secrets |
--disable-database-encryption | Disable database encryption.
+
Disable database encryption that encrypts Kubernetes Secrets at
the application layer. For more information, see
https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-secrets |
--disable-default-snat | Disable default source NAT rules applied in cluster nodes.
+
By default, cluster nodes perform source network address translation (SNAT)
for packets sent from Pod IP address sources to destination IP addresses
that are not in the non-masquerade CIDRs list.
For more details about SNAT and IP masquerading, see:
https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent#how_ipmasq_works
SNAT changes the packet's source IP address to the node's internal IP address.
+
When this flag is set, GKE does not perform SNAT for packets sent to any destination.
You must set this flag if the cluster uses privately reused public IPs.
+
The --disable-default-snat flag is only applicable to private GKE clusters, which are
inherently VPC-native. Thus, --disable-default-snat requires that the cluster was created
with both --enable-ip-alias and --enable-private-nodes |
--disable-workload-identity | Disable Workload Identity on the cluster.
+
For more information on Workload Identity, see
+
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity |
--enable-autoprovisioning | Enables node autoprovisioning for a cluster.
+
Cluster Autoscaler will be able to create new node pools. Requires maximum CPU
and memory limits to be specified |
--enable-autoprovisioning-autorepair | Enable node autorepair for autoprovisioned node pools.
Use --no-enable-autoprovisioning-autorepair to disable |
--enable-autoprovisioning-autoupgrade | Enable node autoupgrade for autoprovisioned node pools.
Use --no-enable-autoprovisioning-autoupgrade to disable |
--enable-autoscaling | Enables autoscaling for a node pool.
+
Enables autoscaling in the node pool specified by --node-pool or
the default node pool if --node-pool is not provided |
--enable-basic-auth | Enable basic (username/password) auth for the cluster. `--enable-basic-auth` is
an alias for `--username=admin`; `--no-enable-basic-auth` is an alias for
`--username=""`. Use `--password` to specify a password; if not, the server will
randomly generate one. For cluster versions before 1.12, if neither
`--enable-basic-auth` nor `--username` is specified, `--enable-basic-auth` will
default to `true`. After 1.12, `--enable-basic-auth` will default to `false` |
--enable-binauthz | Enable Binary Authorization for this cluster |
--enable-intra-node-visibility | Enable Intra-node visibility for this cluster.
+
Enabling intra-node visibility makes your intra-node pod-to-pod traffic
visible to the networking fabric. With this feature, you can use VPC flow
logging or other VPC features for intra-node traffic.
+
Enabling it on an existing cluster causes the cluster
master and the cluster nodes to restart, which might cause a disruption |
--enable-legacy-authorization | Enables legacy ABAC authorization for the cluster.
User rights are granted through the use of policies which combine attributes
together. For a detailed look at these properties and related formats, see
https://kubernetes.io/docs/admin/authorization/abac/. To use RBAC permissions
instead, create or update your cluster with the option
`--no-enable-legacy-authorization` |
--enable-master-authorized-networks | Allow only specified set of CIDR blocks (specified by the
`--master-authorized-networks` flag) to connect to Kubernetes master through
HTTPS. Besides these blocks, the following have access as well:
+
1) The private network the cluster connects to if
`--enable-private-nodes` is specified.
2) Google Compute Engine Public IPs if `--enable-private-nodes` is not
specified.
+
Use `--no-enable-master-authorized-networks` to disable. When disabled, public
internet (0.0.0.0/0) is allowed to connect to Kubernetes master through HTTPS |
--enable-master-global-access | Use with private clusters to allow access to the master's private endpoint from any Google Cloud region or on-premises environment regardless of the
private cluster's region |
--enable-network-egress-metering | Enable network egress metering on this cluster.
+
When enabled, a DaemonSet is deployed into the cluster. Each DaemonSet pod
meters network egress traffic by collecting data from the conntrack table, and
exports the metered metrics to the specified destination.
+
Network egress metering is disabled if this flag is omitted, or when
`--no-enable-network-egress-metering` is set |
--enable-network-policy | Enable network policy enforcement for this cluster. If you are enabling network policy on an existing cluster, the network policy addon must first be enabled on the master by using the --update-addons=NetworkPolicy=ENABLED flag |
--enable-resource-consumption-metering | Enable resource consumption metering on this cluster.
+
When enabled, a table will be created in the specified BigQuery dataset to store
resource consumption data. The resulting table can be joined with the resource
usage table or with BigQuery billing export.
+
To disable resource consumption metering, set `--no-enable-resource-consumption-metering`.
If this flag is omitted, then resource consumption metering will
remain enabled or disabled depending on what is already configured for this
cluster |
--enable-shielded-nodes | Enable Shielded Nodes for this cluster. Enabling Shielded Nodes will enable a
more secure Node credential bootstrapping implementation. Starting with version
1.18, clusters will have shielded GKE nodes by default |
--enable-stackdriver-kubernetes | Enable Stackdriver Kubernetes monitoring and logging |
--enable-vertical-pod-autoscaling | Enable vertical pod autoscaling for a cluster |
--flags-file <YAML_FILE> | A YAML or JSON file that specifies a *--flag*:*value* dictionary.
Useful for specifying complex flag values with special characters
that work with any command interpreter. Additionally, each
*--flags-file* arg is replaced by its constituent flags. See
$ gcloud topic flags-file for more information |
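As an illustration, a flags file for this command could set the recurring maintenance window flags documented in this reference. The flag names are taken from this reference; the file name and all values are hypothetical.

```yaml
# flags.yaml -- hypothetical values for flags documented in this reference.
--maintenance-window-start: "2000-01-01T09:00:00Z"
--maintenance-window-end: "2000-01-01T17:00:00Z"
--maintenance-window-recurrence: FREQ=WEEKLY;BYDAY=SA,SU
--update-labels: env=prod,team=infra
```

It would then be applied with $ {command} example-cluster --flags-file=flags.yaml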
--flatten <KEY> | Flatten _name_[] output resource slices in _KEY_ into separate records
for each item in each slice. Multiple keys and slices may be specified.
This also flattens keys for *--format* and *--filter*. For example,
*--flatten=abc.def* flattens *abc.def[].ghi* references to
*abc.def.ghi*. A resource record containing *abc.def[]* with N elements
will expand to N records in the flattened output. This flag interacts
with other flags that are applied in this order: *--flatten*,
*--sort-by*, *--filter*, *--limit* |
--format <FORMAT> | Set the format for printing command output resources. The default is a
command-specific human-friendly output format. The supported formats
are: `config`, `csv`, `default`, `diff`, `disable`, `flattened`, `get`, `json`, `list`, `multi`, `none`, `object`, `table`, `text`, `value`, `yaml`. For more details run $ gcloud topic formats |
--generate-password | Ask the server to generate a secure password and use that as the basic auth password, keeping the existing username |
--help | Display detailed help |
--impersonate-service-account <SERVICE_ACCOUNT_EMAIL> | For this gcloud invocation, all API requests will be made as the given service account instead of the currently selected account. This is done without needing to create, download, and activate a key for the account. In order to perform operations as the service account, your currently selected account must have an IAM role that includes the iam.serviceAccounts.getAccessToken permission for the service account. The roles/iam.serviceAccountTokenCreator role has this permission or you may create a custom role. Overrides the default *auth/impersonate_service_account* property value for this command invocation |
--log-http | Log all HTTP server requests and responses to stderr. Overrides the default *core/log_http* property value for this command invocation |
--logging-service <LOGGING_SERVICE> | Logging service to use for the cluster. Options are:
"logging.googleapis.com/kubernetes" (the Google Cloud Logging
service with Kubernetes-native resource model enabled),
"logging.googleapis.com" (the Google Cloud Logging service),
"none" (logs will not be exported from the cluster) |
--maintenance-window <START_TIME> | Set a time of day when you prefer maintenance to start on this cluster. For example:
+
$ {command} example-cluster --maintenance-window=12:43
+
The time corresponds to the UTC time zone, and must be in HH:MM format.
+
Non-emergency maintenance will occur in the 4 hour block starting at the
specified time.
+
This is mutually exclusive with the recurring maintenance windows
and will overwrite any existing window. Compatible with maintenance
exclusions.
+
To remove an existing maintenance window from the cluster, use
'--clear-maintenance-window' |
--maintenance-window-end <TIME_STAMP> | End time of the first window (can occur in the past). Must take place after the
start time. The difference in start and end time specifies the length of each
recurrence. See $ gcloud topic datetimes for information on time formats |
--maintenance-window-recurrence <RRULE> | An RFC 5545 RRULE, specifying how the window will recur. Note that minimum
requirements for maintenance periods will be enforced. Note that FREQ=SECONDLY,
MINUTELY, and HOURLY are not supported |
--maintenance-window-start <TIME_STAMP> | Start time of the first window (can occur in the past). The start time
influences when the window will start for recurrences. See $ gcloud topic
datetimes for information on time formats |
--master-authorized-networks <NETWORK> | The list of CIDR blocks (up to 100 for private cluster, 50 for public cluster) that are allowed to connect to Kubernetes master through HTTPS. Specified in CIDR notation (e.g. 1.2.3.4/30). Cannot be specified unless `--enable-master-authorized-networks` is also specified |
--max-accelerator <type=TYPE,count=COUNT> | Sets the maximum limit for a single type of accelerator (e.g. GPUs) in the cluster.
+
*type*::: (Required) The specific type (e.g. nvidia-tesla-k80 for NVIDIA Tesla K80)
of accelerator for which the limit is set. Use ```gcloud compute
accelerator-types list``` to learn about all available accelerator types.
+
*count*::: (Required) The maximum number of accelerators
to which the cluster can be scaled |
--max-cpu <MAX_CPU> | Maximum number of cores in the cluster.
+
Maximum number of cores to which the cluster can scale |
--max-memory <MAX_MEMORY> | Maximum memory in the cluster.
+
Maximum number of gigabytes of memory to which the cluster can scale |
--max-nodes <MAX_NODES> | Maximum number of nodes in the node pool.
+
Maximum number of nodes to which the node pool specified by --node-pool
(or default node pool if unspecified) can scale. Ignored unless
--enable-autoscaling is also specified |
--min-accelerator <type=TYPE,count=COUNT> | Sets the minimum limit for a single type of accelerator (e.g. GPUs) in the cluster.
Defaults to 0 for all accelerator types if not set.
+
*type*::: (Required) The specific type (e.g. nvidia-tesla-k80 for NVIDIA Tesla K80)
of accelerator for which the limit is set. Use ```gcloud compute
accelerator-types list``` to learn about all available accelerator types.
+
*count*::: (Required) The minimum number of accelerators
to which the cluster can be scaled |
--min-cpu <MIN_CPU> | Minimum number of cores in the cluster.
+
Minimum number of cores to which the cluster can scale |
--min-memory <MIN_MEMORY> | Minimum memory in the cluster.
+
Minimum number of gigabytes of memory to which the cluster can scale |
--min-nodes <MIN_NODES> | Minimum number of nodes in the node pool.
+
Minimum number of nodes to which the node pool specified by --node-pool
(or default node pool if unspecified) can scale. Ignored unless
--enable-autoscaling is also specified |
--monitoring-service <MONITORING_SERVICE> | Monitoring service to use for the cluster. Options are:
"monitoring.googleapis.com/kubernetes" (the Google Cloud
Monitoring service with Kubernetes-native resource model enabled),
"monitoring.googleapis.com" (the Google Cloud Monitoring service),
"none" (no metrics will be exported from the cluster) |
--node-locations <ZONE> | The set of zones in which the specified node footprint should be replicated.
All zones must be in the same region as the cluster's master(s), specified by
the `--zone` or `--region` flag. Additionally, for zonal clusters,
`--node-locations` must contain the cluster's primary zone. If not specified,
all nodes will be in the cluster's primary zone (for zonal clusters) or spread
across three randomly chosen zones within the cluster's region (for regional
clusters).
+
Note that `NUM_NODES` nodes will be created in each zone, such that if you
specify `--num-nodes=4` and choose two locations, 8 nodes will be created.
+
Multiple locations can be specified, separated by commas. For example:
+
$ {command} example-cluster --zone us-central1-a --node-locations us-central1-a,us-central1-b |
--node-pool <NODE_POOL> | Node pool to be updated |
--password <PASSWORD> | The password to use for cluster auth. Defaults to a server-specified randomly-generated string |
--project <PROJECT_ID> | The Google Cloud Platform project ID to use for this invocation. If
omitted, then the current project is assumed; the current project can
be listed using `gcloud config list --format='text(core.project)'`
and can be set using `gcloud config set project PROJECTID`.
+
`--project` and its fallback `core/project` property play two roles
in the invocation. It specifies the project of the resource to
operate on. It also specifies the project for API enablement check,
quota, and billing. To specify a different project for quota and
billing, use `--billing-project` or `billing/quota_project` property |
--quiet | Disable all interactive prompts when running gcloud commands. If input
is required, defaults will be used, or an error will be raised.
Overrides the default core/disable_prompts property value for this
command invocation. This is equivalent to setting the environment
variable `CLOUDSDK_CORE_DISABLE_PROMPTS` to 1 |
--region <REGION> | Compute region (e.g. us-central1) for the cluster |
--release-channel <CHANNEL> | Subscribe or unsubscribe this cluster to a release channel.
+
When a cluster is subscribed to a release channel, Google maintains
both the master version and the node version. Node auto-upgrade
defaults to true and cannot be disabled.
+
_CHANNEL_ must be one of:
+
*None*::: Use 'None' to opt out of any release channel.
+
*rapid*::: 'rapid' channel is offered on an early access basis for customers who want
to test new releases.
+
WARNING: Versions available in the 'rapid' channel may be subject to
unresolved issues with no known workaround and are not subject to any
SLAs.
+
*regular*::: Clusters subscribed to 'regular' receive versions that are considered GA
quality. 'regular' is intended for production users who want to take
advantage of new features.
+
*stable*::: Clusters subscribed to 'stable' receive versions that are known to be
stable and reliable in production.
+
:::
+ |
--remove-labels <KEY> | Labels to remove from the Google Cloud resources in use by the Kubernetes Engine
cluster. These are unrelated to Kubernetes labels.
Example:
+
$ {command} example-cluster --remove-labels=label_a,label_b |
--remove-maintenance-exclusion <NAME> | Name of the maintenance exclusion to remove. If you did not specify a name, one
was auto-generated; retrieve it with $ gcloud container clusters describe |
--resource-usage-bigquery-dataset <RESOURCE_USAGE_BIGQUERY_DATASET> | The name of the BigQuery dataset to which the cluster's usage of cloud
resources is exported. A table will be created in the specified dataset to
store cluster resource usage. The resulting table can be joined with BigQuery
Billing Export to produce a fine-grained cost breakdown.
+
Example:
+
$ {command} example-cluster --resource-usage-bigquery-dataset=example_bigquery_dataset_name |
--set-password | Set the basic auth password to the specified value, keeping the existing username |
--start-credential-rotation | Start the rotation of IP and credentials for this cluster. For example:
+
$ {command} example-cluster --start-credential-rotation
+
This causes the cluster to serve on two IPs, and will initiate a node upgrade to point to the new IP |
--start-ip-rotation | Start the rotation of this cluster to a new IP. For example:
+
$ {command} example-cluster --start-ip-rotation
+
This causes the cluster to serve on two IPs, and will initiate a node upgrade to point to the new IP |
--trace-token <TRACE_TOKEN> | Token used to route traces of service requests for investigation of issues. Overrides the default *core/trace_token* property value for this command invocation |
--update-addons <ADDON=ENABLED|DISABLED> | Cluster addons to enable or disable. Options are
HorizontalPodAutoscaling=ENABLED|DISABLED
HttpLoadBalancing=ENABLED|DISABLED
KubernetesDashboard=ENABLED|DISABLED
NetworkPolicy=ENABLED|DISABLED
CloudRun=ENABLED|DISABLED
ConfigConnector=ENABLED|DISABLED
NodeLocalDNS=ENABLED|DISABLED |
--update-labels <KEY=VALUE> | Labels to apply to the Google Cloud resources in use by the Kubernetes Engine
cluster. These are unrelated to Kubernetes labels.
Example:
+
$ {command} example-cluster --update-labels=label_a=value1,label_b=value2 |
--user-output-enabled | Print user intended output to the console. Overrides the default *core/user_output_enabled* property value for this command invocation. Use *--no-user-output-enabled* to disable |
--username <USERNAME> | The user name to use for basic auth for the cluster. Use `--password` to specify
a password; if not, the server will randomly generate one |
--verbosity <VERBOSITY> | Override the default verbosity for this command. Overrides the default *core/verbosity* property value for this command invocation. _VERBOSITY_ must be one of: *debug*, *info*, *warning*, *error*, *critical*, *none* |
--workload-pool <WORKLOAD_POOL> | Enable Workload Identity on the cluster.
+
When enabled, Kubernetes service accounts will be able to act as Cloud IAM
Service Accounts, through the provided workload pool.
+
Currently, the only accepted workload pool is the workload pool of
the Cloud project containing the cluster, `PROJECT_ID.svc.id.goog`.
+
For more information on Workload Identity, see
+
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity |
--zone <ZONE> | Compute zone (e.g. us-central1-a) for the cluster. Overrides the default *compute/zone* property value for this command invocation |