aws lookoutequipment create-inference-scheduler

Creates a scheduled inference. Scheduling an inference is setting up a continuous real-time inference plan to analyze new measurement data. When setting up the schedule, you provide an S3 bucket location for the input data, specify a delimiter between separate entries in the data, set an offset delay if desired, and set the frequency of inferencing. You must also provide an S3 bucket location for the output data.
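For orientation, a minimal invocation might look like the following sketch. The model name, scheduler name, role ARN, and the contents of the two JSON files are placeholders to replace with your own resources, and the PT5M upload frequency assumes the ISO 8601-style duration values the service accepts; verify both against your account and the service documentation:

    # Hypothetical example: names, files, and the role ARN are placeholders.
    aws lookoutequipment create-inference-scheduler \
        --model-name my-trained-model \
        --inference-scheduler-name my-inference-scheduler \
        --data-upload-frequency PT5M \
        --data-delay-offset-in-minutes 5 \
        --data-input-configuration file://input-config.json \
        --data-output-configuration file://output-config.json \
        --role-arn arn:aws:iam::123456789012:role/LookoutEquipmentAccessRole

The two file:// arguments point at local JSON files holding the <structure> parameters described under Options below.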

Options

--model-name <string>
    The name of the previously trained ML model being used to create the inference scheduler.

--inference-scheduler-name <string>
    The name of the inference scheduler being created.

--data-delay-offset-in-minutes <long>
    A period of time (in minutes) by which inference on the data is delayed after the data starts. For instance, with an offset delay of five minutes, inference does not begin until the first data measurement after the five-minute mark: the scheduler wakes up at the configured frequency plus the additional five-minute delay before checking the S3 bucket for new data. You can keep uploading data at the same frequency without stopping and restarting the scheduler.

--data-upload-frequency <string>
    How often data is uploaded to the source S3 bucket for the input data. The value chosen is the length of time between data uploads. For instance, if you select 5 minutes, Amazon Lookout for Equipment expects new data to arrive in the source bucket once every 5 minutes. This frequency also determines how often Amazon Lookout for Equipment starts a scheduled inference on your data; in this example, once every 5 minutes.

--data-input-configuration <structure>
    Specifies configuration information for the input data for the inference scheduler, including delimiter, format, and dataset location. See the example configuration after this list.

--data-output-configuration <structure>
    Specifies configuration information for the output results for the inference scheduler, including the S3 location for the output. See the example configuration after this list.

--role-arn <string>
    The Amazon Resource Name (ARN) of a role with permission to access the data source being used for the inference.

--server-side-kms-key-id <string>
    Provides the identifier of the AWS KMS customer master key (CMK) used by Amazon Lookout for Equipment to encrypt inference scheduler data.

--client-token <string>
    A unique identifier for the request. If you do not set the client request token, Amazon Lookout for Equipment generates one.

--tags <list>
    Any tags associated with the inference scheduler.

--cli-input-json <string>
    Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value, as the string will be taken literally.

--generate-cli-skeleton <string>
    Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command.

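The two <structure> parameters are easiest to supply as local JSON files. The field names below follow the Lookout for Equipment CreateInferenceScheduler API as generally documented, but treat them as a sketch and confirm the exact shape with ``--generate-cli-skeleton input``. A hypothetical input-config.json, with illustrative bucket, prefix, timestamp format, and delimiter values:

    {
        "S3InputConfiguration": {
            "Bucket": "my-input-bucket",
            "Prefix": "inference-input/"
        },
        "InferenceInputNameConfiguration": {
            "TimestampFormat": "yyyyMMddHHmmss",
            "ComponentTimestampDelimiter": "_"
        },
        "InputTimeZoneOffset": "+00:00"
    }

And a hypothetical output-config.json, pointing at the bucket where results should be written:

    {
        "S3OutputConfiguration": {
            "Bucket": "my-output-bucket",
            "Prefix": "inference-output/"
        }
    }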
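Because the request involves nested structures, the ``--generate-cli-skeleton`` and ``--cli-input-json`` options described above can be combined into a simple workflow; the file name here is only an example:

    aws lookoutequipment create-inference-scheduler --generate-cli-skeleton input > create-scheduler.json
    # Edit create-scheduler.json to fill in your values, then submit the request:
    aws lookoutequipment create-inference-scheduler --cli-input-json file://create-scheduler.json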