In this post I would like to go over how I tuned a test server for copying/syncing files from the local filesystem to S3 over the internet. If you have ever had this task, you will notice that as the file count grows, so does the time it takes to upload the files to S3. After some web searching I found out that AWS lets you tune the CLI config to allow more concurrency than the default.
AWS CLI S3 Config
The parameter we will be playing with is max_concurrent_requests.
This has a default value of 10, which allows only 10 concurrent requests to the S3 API. Let's see if we can change that value and get some performance gains. My test setup is as follows:
2 x Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
8GB RAM
CentOS release 6.10 (Final)
I have 56 files of roughly 100MB each in the test directory:
-rw-r--r-- 1 jasonr domain^users 101M Sep 24 11:44 sample__0_0_7.csv.gz
-rw-r--r-- 1 jasonr domain^users 102M Sep 24 11:44 sample__0_0_53.csv.gz
-rw-r--r-- 1 jasonr domain^users 101M Sep 24 11:44 sample__0_0_6.csv.gz
-rw-r--r-- 1 jasonr domain^users 101M Sep 24 11:44 sample__0_0_8.csv.gz
-rw-r--r-- 1 jasonr domain^users 101M Sep 24 11:44 sample__0_0_55.csv.gz
--snip--
[jasonr@jr-sandbox jason_test]$ ls | wc -l
56
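For reference, the setting lives under the s3 section of ~/.aws/config. Here is roughly what that block looks like before any tuning; the two lines below max_concurrent_requests are just related knobs at their documented defaults, shown so you know they exist, and are not something I set or tested in this post:

[default]
s3 =
    max_concurrent_requests = 10
    max_queue_size = 1000
    multipart_chunksize = 8MB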
For the first test I am going to run aws s3 sync with no changes, so out of the box it should use the default of 10 max_concurrent_requests. Let's use the Linux time command to measure how long it takes to copy all 56 files to S3. I will delete the folder on S3 between iterations to keep each test the same. You can also watch the connections on port 443 via netstat and count them to see what's going on. Across all the tests my best result came at 250, so as you can see you will need to play with the setting to get the best result, and the sweet spot will change along with the server specs.
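Concretely, each iteration looked roughly like this (the bucket and prefix are the ones from my tests; run the netstat count from a second terminal while the sync is in flight):

# wipe the target prefix so every run starts from scratch
aws s3 rm s3://dev-redshift/jason_sync_test/ --recursive

# time the upload of all 56 files
time aws s3 sync . s3://dev-redshift/jason_sync_test/

# from a second terminal, count open HTTPS connections mid-run
netstat -an | grep 443 | wc -l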
1. 1m25.919s with the default configuration:
[jasonr@jr-sandbox jason_test]$ time aws s3 sync . s3://dev-redshift/jason_sync_test/
upload: ./sample__0_0_0.csv.gz to s3://dev-redshift/jason_sync_test/sample__0_0_0.csv.gz
upload: ./sample__0_0_10.csv.gz to s3://dev-redshift/jason_sync_test/sample__0_0_10.csv.gz
upload: ./sample__0_0_11.csv.gz to s3://dev-redshift/jason_sync_test/sample__0_0_11.csv.gz
upload: ./sample__0_0_12.csv.gz to s3://dev-redshift/jason_sync_test/sample__0_0_12.csv.gz
upload: ./sample__0_0_13.csv.gz to s3://dev-redshift/jason_sync_test/sample__0_0_13.csv.gz
--snip--

real    1m25.919s
user    0m35.153s
sys     0m15.879s
2. Now let's set max_concurrent_requests to 20 and try again; you can do this with the command below. After running, we see a small gain.
[jasonr@jr-sandbox jason_test]$ aws configure set default.s3.max_concurrent_requests 20
[jasonr@jr-sandbox jason_test]$ cat ~/.aws/config
[default]
s3 =
    max_concurrent_requests = 20

[root@jr-sandbox ~]# netstat -an | grep 443 | wc -l
20

real    1m13.277s
user    0m36.186s
sys     0m16.462s
3. Bumping it up to 50 shows a bit more gain:
[jasonr@jr-sandbox jason_test]$ aws configure set default.s3.max_concurrent_requests 50
[jasonr@jr-sandbox jason_test]$ cat ~/.aws/config
[default]
s3 =
    max_concurrent_requests = 50

[root@jr-sandbox ~]# netstat -an | grep 443 | wc -l
49

real    1m0.720s
user    0m37.669s
sys     0m19.344s
4. Bumping it up to 100, I start to notice that we lose some speed:
[jasonr@jr-sandbox jason_test]$ aws configure set default.s3.max_concurrent_requests 100
[jasonr@jr-sandbox jason_test]$ cat ~/.aws/config
[default]
s3 =
    max_concurrent_requests = 100

[root@jr-sandbox ~]# netstat -an | grep 443 | wc -l
95

real    1m4.212s
user    0m39.737s
sys     0m21.847s
5. Bumping it up to 250, we see the best result so far:
[jasonr@jr-sandbox jason_test]$ aws configure set default.s3.max_concurrent_requests 250
[jasonr@jr-sandbox jason_test]$ cat ~/.aws/config
[default]
s3 =
    max_concurrent_requests = 250

[root@jr-sandbox ~]# netstat -an | grep 443 | wc -l
234

real    0m55.036s
user    0m42.841s
sys     0m21.409s
6. Bumping it up to 500, we lose performance, most likely due to machine resources:
[jasonr@jr-sandbox jason_test]$ aws configure set default.s3.max_concurrent_requests 500
[jasonr@jr-sandbox jason_test]$ cat ~/.aws/config
[default]
s3 =
    max_concurrent_requests = 500

[root@jr-sandbox ~]# netstat -an | grep 443 | wc -l
465

real    1m16.593s
user    0m50.336s
sys     0m25.806s
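If you want to confirm that the box itself is the limiting factor at the higher settings, watching it from another terminal during the sync is a quick sanity check (plain Linux tools, nothing AWS-specific):

# CPU, run queue, and memory pressure, sampled once a second
vmstat 1

# or per-process CPU/memory for the running aws command
top -d 1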
So to wrap up: you can tune the number of concurrent requests the AWS CLI makes to S3, but you will need to play with the setting to find the value that works best for your machine.
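If you would rather automate the "play with it" part, a small loop like this will sweep a handful of values; the bucket and prefix are the ones from my tests, so swap in your own, and note that each pass wipes the target prefix before re-uploading:

#!/bin/bash
# Sweep a few max_concurrent_requests values and time each full sync.
BUCKET_PATH="s3://dev-redshift/jason_sync_test/"

for n in 10 20 50 100 250 500; do
    aws configure set default.s3.max_concurrent_requests "$n"
    aws s3 rm "$BUCKET_PATH" --recursive > /dev/null
    echo "=== max_concurrent_requests = $n ==="
    time aws s3 sync . "$BUCKET_PATH" > /dev/null
done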
Hi Jason, can you comment on your tweaking of max_concurrent_requests for the aws cli? Have you ever correlated the optimum value (250 threads in the example above) with the number of CPU cores (I think 36 cores for your “2 x Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz”)? This implies ~8 threads per core; does this sound right? Thanks.
Hi Andy,
Thanks for your comment.
That is a great question. I have thought about it but never really did the math; math is not my strongest subject, so keep me honest here :). But I like this type of stuff! This particular sandbox is a VMware virtual machine: I carved out 2 cores on 1 socket from that 18-core/36-thread CPU. My understanding is that each core has 2 threads, so my test system would be 4 threads total? I'm not sure that correlates, but I'm open to hearing what you think. I do see where you are going with this in your comment: 36 cores x 8 = 288, which is very close to 250.
I think this is the model:
https://ark.intel.com/content/www/us/en/ark/products/81061/intel-xeon-processor-e52699-v3-45m-cache-2-30-ghz.html
What do you think?
I would think the more important factor for performance here is available network bandwidth rather than CPU. Moving data is more often I/O bound than CPU bound, so it's likely that the bottleneck you experienced at 250 connections was where you saturated your available network bandwidth, either at the virtualized bus or on your internet connection. A better test would be to run this inside an AWS VPC on an EC2 instance with a 5Gbit to 10Gbit network and an S3 endpoint in the VPC, to maximize your throughput. Also, when putting data to S3, the object key name is used in selecting the S3 “shard” (I'm not sure if that is the terminology they use). If the names are not sufficiently entropic, you may run into a situation where the connections are not well balanced across the available S3 shards, and performance could suffer from that as well.
Thank you for this comment; this is valuable information. I think you are 100% correct that network bandwidth is the key factor here.
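On your point about key names, a rough sketch of spreading uploads across randomized prefixes might look something like this; the hash-prefix scheme is purely illustrative and not something I tested:

# Hypothetical: prepend a short hash to each key so uploads spread
# across S3 partitions instead of all landing under one prefix.
for f in *.csv.gz; do
    prefix=$(echo -n "$f" | md5sum | cut -c1-4)
    aws s3 cp "$f" "s3://dev-redshift/jason_sync_test/${prefix}/${f}"
done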