I'm sorry, but the provided knowledge sources contain no specific information on increasing the batch size when using MySQL as a source and Redshift as a destination in Airbyte. That said, batch/bulk writes do come with performance considerations: large records may impact performance, and connectors should ideally be enhanced to support variable batch sizes, as noted in the [Elasticsearch destination documentation](https://docs.airbyte.com/integrations/destinations/elasticsearch#performance-considerations). For more specific guidance, I recommend checking the Airbyte [documentation](https://docs.airbyte.io/) or reaching out to the Airbyte community on their [discussion forum](https://discuss.airbyte.io/).