ImportError: cannot import name ‘is_s3express_bucket’

Another day, another odd error: my colleague and I were alerted to an issue on one of our MWAA Airflow 2.5.1 environments.

The above error was raised when a new DAG tried to instantiate an S3 client via botocore/boto3. We traced it to the s3transfer package, specifically this commit:
https://github.com/boto/s3transfer/commit/3b50c31bb608188cdfb0fc7fd8e8cd03b6b7b187

Support was added for S3 Express One Zone, and from what we gathered we needed to upgrade three packages in our constraints.txt (boto3, botocore, and s3transfer, which have to move in lockstep) to resolve the import error.
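Our constraints ended up looking roughly like this (the versions below are illustrative; any mutually compatible set where botocore is new enough to export is_s3express_bucket will do):

    boto3==1.33.13
    botocore==1.33.13
    s3transfer==0.8.2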

After making these package version changes we were back in business.

botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: https://lambda.us-east-1.amazonaws.com

Recently, while working on one of our EMR projects that uses Lambdas and Airflow, I ran into the following timeout issue:

We have a Lambda, invoked via boto3 from an Airflow step, that updates DynamoDB with values needed for our pipeline. This function had worked in previous tests with no issues, but we had recently added code to it, which made it take longer than normal. When we tested the Lambda from the console, the function worked fine, albeit a bit slower than the previous version. When calling it from Airflow, we would continually hit the timeout, which also caused the function to be executed multiple times during retries.

I thought to test the function from the AWS CLI, and that revealed the issue: the default boto3 read timeout is 60 seconds, which was shorter than the time our Lambda was taking. So even though we set the Lambda timeout to 4 minutes, boto3 was timing out at 1 minute, never getting the response back from Lambda. The way we fixed this was to create the boto3 Lambda client with a config that has a longer read timeout.
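A minimal sketch of that client setup (the timeout values, payload, and function name are illustrative; match read_timeout to your Lambda's configured timeout):

    import boto3
    from botocore.config import Config

    # Raise the read timeout past the Lambda's 4-minute runtime,
    # and disable automatic retries so a slow call is not invoked twice.
    lambda_config = Config(
        connect_timeout=10,
        read_timeout=300,
        retries={"max_attempts": 0},
    )
    lambda_client = boto3.client("lambda", config=lambda_config)

    response = lambda_client.invoke(
        FunctionName="my-pipeline-function",  # hypothetical name
        Payload=b'{"table": "pipeline-values"}',
    )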

AWS Managed Apache Airflow + EMR bootstrap: ModuleNotFoundError: No module named ‘requests’

I came across another fun one the other day. We are in the process of migrating our on-premises MapReduce system into the cloud, using AWS EMR with AWS Managed Airflow as the orchestrator. We came across an odd situation with a PySpark application: when submitted from Airflow with a SparkSubmitHook, the job would bootstrap looking just fine according to the run logs, yet it would fail with No module named 'requests' when the application tried to import it. This was very odd, since the same application runs fine via spark-submit from the master node command line.

I decided to investigate the differences. Our bootstrap script, which installs Python modules via pip and which we reference in the EMR RunJobFlow API call, looked like this:
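(A reconstructed sketch; the exact module list is illustrative.)

    #!/bin/bash
    # Original (broken) version: --user installs the packages into the
    # invoking user's home directory only.
    pip install --upgrade pip --user
    pip install --user requests boto3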

This is very basic: all it does is upgrade pip and run pip install for each of the modules. Checking the bootstrap log, I could see pip upgrade itself, reach out to the repo, and install the packages just fine. So why were we getting the No module named 'requests' error when executing through Airflow?

After a ton of googling and research, I found the issue and applied a solution that worked. It turns out the bootstrap runs as the root user when triggered from Airflow, and notice that we use the --user argument with pip. That instructs pip to install the packages into the calling user's home directory; the kicker is that the code is run by the hadoop user on the EMR cluster nodes after being executed from Airflow. So the hadoop user was unable to access the requests module, because root had installed it with --user. I changed the bootstrap script to the following and it all started working: by removing --user and prefixing with sudo, the packages now get installed in a globally available location for all users. I am sure there are better ways to do this, and I am still learning and researching, but if you run into this, the change below will get you out of the woods.
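A sketch of the fixed script (same illustrative module list):

    #!/bin/bash
    # Fixed version: sudo with no --user installs the packages
    # system-wide, so the hadoop user can import them too.
    sudo pip install --upgrade pip
    sudo pip install requests boto3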

After some further research and testing, we decided to use a requirements.txt file pulled in by the bootstrap shell script referenced in the RunJobFlow call. First, create a requirements.txt file; I like to hardcode the versions so nothing changes unexpectedly when you bootstrap a new cluster and it reaches out to PyPI for the packages.

https://docs.aws.amazon.com/emr/latest/APIReference/API_RunJobFlow.html

Add your desired packages and version numbers to a file called requirements.txt like below:
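(The packages and version pins here are illustrative.)

    requests==2.31.0
    boto3==1.33.13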

Then you will need to copy this file into a bucket you have access to:
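Something like this (the bucket name is a placeholder):

    aws s3 cp requirements.txt s3://my-emr-bootstrap-bucket/requirements.txt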

Then create a shell script with the following contents; call it bootstrap.sh:
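A minimal sketch (the bucket path is the same placeholder as above):

    #!/bin/bash
    # bootstrap.sh -- pull the pinned requirements from S3 and install
    # them system-wide so every user on the node can import them.
    set -e
    sudo python3 -m pip install --upgrade pip
    aws s3 cp s3://my-emr-bootstrap-bucket/requirements.txt /tmp/requirements.txt
    sudo python3 -m pip install -r /tmp/requirements.txt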

Copy that shell script to your bucket:
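Using the same placeholder bucket:

    aws s3 cp bootstrap.sh s3://my-emr-bootstrap-bucket/bootstrap.sh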

And execute it via the bootstrap actions in the RunJobFlow EMR API call:
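A trimmed-down sketch of the call via boto3 (cluster name, release label, roles, and instance settings are all placeholders):

    import boto3

    emr = boto3.client("emr")
    response = emr.run_job_flow(
        Name="my-cluster",
        ReleaseLabel="emr-6.15.0",
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        BootstrapActions=[
            {
                # Runs on every node before applications start.
                "Name": "install-python-deps",
                "ScriptBootstrapAction": {
                    "Path": "s3://my-emr-bootstrap-bucket/bootstrap.sh"
                },
            }
        ],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )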

As you can see, the shell script copies the requirements.txt file locally and then runs pip install -r against it, which installs all the packages. If you want to watch the bootstrapping take place on a running cluster, you can ssh to the master node and view the logs here:
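Bootstrap action logs live under /mnt/var/log/bootstrap-actions on each node (the action number may differ):

    ls /mnt/var/log/bootstrap-actions/1/
    cat /mnt/var/log/bootstrap-actions/1/stdout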

In the stdout log you should see pip collecting and installing each pinned package.

Hope this helps.