aws personalize create-batch-inference-job
Creates a batch inference job. The operation can handle up to 50 million records, and the input file must be in JSON format. For more information, see recommendations-batch.
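As a sketch of what the input file looks like (field names depend on the recipe; `userId` here assumes a user-personalization solution), each line of the file is a separate JSON object naming one entity to get recommendations for:

```json
{"userId": "4638"}
{"userId": "663"}
{"userId": "3384"}
```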
Options
Name | Description
---|---
--job-name <string> | The name of the batch inference job to create |
--solution-version-arn <string> | The Amazon Resource Name (ARN) of the solution version that will be used to generate the batch inference recommendations |
--filter-arn <string> | The ARN of the filter to apply to the batch inference job. For more information on using filters, see Filtering Batch Recommendations |
--num-results <integer> | The number of recommendations to retrieve |
--job-input <structure> | The Amazon S3 path that leads to the input file to base your recommendations on. The input material must be in JSON format |
--job-output <structure> | The path to the Amazon S3 bucket where the job's output will be stored |
--role-arn <string> | The ARN of the AWS Identity and Access Management (IAM) role that has permissions to read from your input Amazon S3 bucket and write to your output Amazon S3 bucket
--batch-inference-job-config <structure> | The configuration details of a batch inference job |
--cli-input-json <string> | Performs service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally |
--generate-cli-skeleton <string> | Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command |
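Putting the options together, a minimal invocation might look like the following. This is a sketch: the job name, ARNs, and bucket paths are placeholders you would replace with your own values, and the `s3DataSource={path=...}` / `s3DataDestination={path=...}` forms use the AWS CLI shorthand syntax for the `--job-input` and `--job-output` structures:

```shell
# Placeholder ARNs and bucket names -- substitute your own resources.
aws personalize create-batch-inference-job \
    --job-name my-batch-job \
    --solution-version-arn arn:aws:personalize:us-west-2:123456789012:solution/my-solution/version-id \
    --num-results 10 \
    --job-input s3DataSource={path=s3://my-input-bucket/users.json} \
    --job-output s3DataDestination={path=s3://my-output-bucket/results/} \
    --role-arn arn:aws:iam::123456789012:role/PersonalizeS3Role
```

Alternatively, run the command with `--generate-cli-skeleton` to print a full input template, fill it in, and pass it back via `--cli-input-json file://job.json`.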