aws lambda create-event-source-mapping

Creates a mapping between an event source and an AWS Lambda function. Lambda reads items from the event source and triggers the function.

For details about each event source type, see the following topics:

- Using AWS Lambda with Amazon DynamoDB
- Using AWS Lambda with Amazon Kinesis
- Using AWS Lambda with Amazon SQS
- Using AWS Lambda with Amazon MQ
- Using AWS Lambda with Amazon MSK
- Using AWS Lambda with Self-Managed Apache Kafka

The following error handling options are only available for stream sources (DynamoDB and Kinesis):

- BisectBatchOnFunctionError - If the function returns an error, split the batch in two and retry.
- DestinationConfig - Send discarded records to an Amazon SQS queue or Amazon SNS topic.
- MaximumRecordAgeInSeconds - Discard records older than the specified age. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.
- MaximumRetryAttempts - Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.
- ParallelizationFactor - Process multiple batches from each shard concurrently.
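As an illustration, the stream error-handling options above might be combined when mapping a Kinesis stream to a function. The region, account ID, stream name, queue name, and function name below are placeholders, and the command requires valid AWS credentials:

```shell
# Map a Kinesis stream to a Lambda function with stream-only error handling.
# All ARNs and names are placeholders for this sketch.
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:kinesis:us-west-2:123456789012:stream/my-stream \
    --starting-position LATEST \
    --batch-size 100 \
    --maximum-retry-attempts 2 \
    --bisect-batch-on-function-error \
    --destination-config '{"OnFailure":{"Destination":"arn:aws:sqs:us-west-2:123456789012:my-dlq"}}'
```

With these settings, a failing batch is split and retried up to twice per half, and records that still fail are sent to the SQS queue named in the on-failure destination rather than being retried until they expire.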

Options

--event-source-arn <string>
    The Amazon Resource Name (ARN) of the event source.
    Amazon Kinesis - The ARN of the data stream or a stream consumer.
    Amazon DynamoDB Streams - The ARN of the stream.
    Amazon Simple Queue Service - The ARN of the queue.
    Amazon Managed Streaming for Apache Kafka - The ARN of the cluster.

--function-name <string>
    The name of the Lambda function. Name formats:
    Function name - MyFunction.
    Function ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
    Version or Alias ARN - arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
    Partial ARN - 123456789012:function:MyFunction.
    The length constraint applies only to the full ARN. If you specify only the function name, it is limited to 64 characters in length.

--enabled | --no-enabled
    When true, the event source mapping is active. Set to false to pause polling and invocation.

--batch-size <integer>
    The maximum number of items to retrieve in a single batch.
    Amazon Kinesis - Default 100. Max 10,000.
    Amazon DynamoDB Streams - Default 100. Max 1,000.
    Amazon Simple Queue Service - Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
    Amazon Managed Streaming for Apache Kafka - Default 100. Max 10,000.
    Self-Managed Apache Kafka - Default 100. Max 10,000.

--maximum-batching-window-in-seconds <integer>
    (Streams and SQS standard queues) The maximum amount of time to gather records before invoking the function, in seconds.

--parallelization-factor <integer>
    (Streams) The number of batches to process from each shard concurrently.

--starting-position <string>
    The position in a stream from which to start reading. Required for Amazon Kinesis, Amazon DynamoDB Streams, and Amazon MSK sources. AT_TIMESTAMP is supported only for Amazon Kinesis streams.

--starting-position-timestamp <timestamp>
    With StartingPosition set to AT_TIMESTAMP, the time from which to start reading.

--destination-config <structure>
    (Streams) An Amazon SQS queue or Amazon SNS topic destination for discarded records.

--maximum-record-age-in-seconds <integer>
    (Streams) Discard records older than the specified age. The default value is infinite (-1).

--bisect-batch-on-function-error | --no-bisect-batch-on-function-error
    (Streams) When enabled, if the function returns an error, split the batch in two and retry.

--maximum-retry-attempts <integer>
    (Streams) Discard records after the specified number of retries. The default value is infinite (-1). When set to infinite (-1), failed records are retried until the record expires.

--tumbling-window-in-seconds <integer>
    (Streams) The duration, in seconds, of a processing window. The range is between 1 and 900 seconds.

--topics <list...>
    The name of the Kafka topic.

--queues <list...>
    (MQ) The name of the Amazon MQ broker destination queue to consume.

--source-access-configurations <list...>
    An array of authentication protocols or VPC components used to secure your event source.

--self-managed-event-source <structure>
    The self-managed Apache Kafka cluster to receive records from.

--function-response-types <list...>
    (Streams) A list of current response type enums applied to the event source mapping.

--cli-input-json <string>
    Performs the service operation based on the JSON string provided. The JSON string follows the format provided by ``--generate-cli-skeleton``. If other arguments are provided on the command line, the CLI values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value, as the string will be taken literally.

--generate-cli-skeleton <string>
    Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value ``input``, prints a sample input JSON that can be used as an argument for ``--cli-input-json``. If provided with the value ``output``, it validates the command inputs and returns a sample output JSON for that command.
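The ``--generate-cli-skeleton`` and ``--cli-input-json`` options pair naturally: generate a skeleton, trim it to the fields you need, and feed it back to the command. A minimal sketch of that workflow, using a placeholder SQS queue ARN and function name (the actual AWS invocations are shown as comments because they require the AWS CLI and valid credentials):

```shell
# Generate a sample input skeleton (requires the AWS CLI):
# aws lambda create-event-source-mapping --generate-cli-skeleton input > esm-input.json

# A trimmed input file for an SQS source might look like this.
# The queue ARN and function name are placeholders.
cat > esm-input.json <<'EOF'
{
    "EventSourceArn": "arn:aws:sqs:us-west-2:123456789012:my-queue",
    "FunctionName": "my-function",
    "BatchSize": 10,
    "Enabled": true
}
EOF

# Create the mapping from the file (requires valid AWS credentials):
# aws lambda create-event-source-mapping --cli-input-json file://esm-input.json
```

Note that any options given on the command line alongside ``--cli-input-json`` override the corresponding values in the file.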