AWS Managed Apache Airflow EMR ModuleNotFoundError: No module named ‘requests’ Bootstrap

I came across another fun one the other day. We are in the process of migrating our on-premises Elastic MapReduce system into the cloud, using AWS EMR with AWS Managed Airflow orchestrating the DAGs. We ran into an odd situation with a PySpark application: when submitted from Airflow with a SparkSubmitHook, the job would bootstrap looking just fine according to the run logs, but it would fail with No module named 'requests' as soon as the application tried to import it. This was very odd, since the same application runs just fine via spark-submit from the master node command line.

I decided to investigate the differences. Our bootstrap script for installing Python modules via pip, which we pass to the EMR RunJobFlow API call, looks like this:
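The original looked roughly like this (the module list is illustrative; the important detail is the --user flag):

#!/bin/bash
# Upgrade pip, then install each module for the calling user only
pip3 install --user --upgrade pip
pip3 install --user requests boto3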

This is very basic: all it does is upgrade pip and then run pip install for each of the modules. Checking the bootstrap log, I could see pip upgrade itself, reach out to the repository, and install the packages just fine. So why were we getting the No module named 'requests' error when executing through Airflow?

After a ton of googling and research I found the issue and applied a solution that worked. It turns out Airflow runs the bootstrap as the root user, and notice that we use the --user argument in pip. That flag installs the packages into the calling user's home directory. The kicker is that the code is run by the hadoop user on the EMR cluster nodes after being submitted from Airflow, so the hadoop user cannot access the requests module that root installed with --user. I changed the bootstrap script to the following: by removing --user and prefixing with sudo, the packages now get installed in a globally available location for all users. I am sure there are better ways to do this, and I am still learning and researching, but if you run into this, the change below will get you out of the woods.
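The fixed version, with --user removed and sudo added so the packages land in the system site-packages (again, the module list is illustrative):

#!/bin/bash
# Install site-wide so the hadoop user can import the modules at runtime
sudo pip3 install --upgrade pip
sudo pip3 install requests boto3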

After some further research and testing, we decided to use a requirements.txt file called by the bootstrap shell script in the RunJobFlow call. First, create a requirements.txt file; I like to pin the versions so nothing changes unexpectedly when you bootstrap a new cluster and it reaches out to PyPI for the packages.

https://docs.aws.amazon.com/emr/latest/APIReference/API_RunJobFlow.html

Add your desired packages and version numbers to a file called requirements.txt like below:
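For example (package names and pinned versions here are only placeholders):

requests==2.25.1
boto3==1.17.0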

Then you will need to copy this file into a bucket you have access to:
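For example (bucket and prefix are placeholders):

aws s3 cp requirements.txt s3://your-bucket/bootstrap/requirements.txt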

Then create a shell script that has the following, call it bootstrap.sh:
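A minimal sketch (the bucket path is a placeholder, and pip3 stands in for whichever pip matches the Python your job uses):

#!/bin/bash
# Upgrade pip, pull the pinned requirements from S3, install them site-wide
sudo pip3 install --upgrade pip
aws s3 cp s3://your-bucket/bootstrap/requirements.txt /tmp/requirements.txt
sudo pip3 install -r /tmp/requirements.txt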

Copy that shell script to your bucket:
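Same placeholder bucket as above:

aws s3 cp bootstrap.sh s3://your-bucket/bootstrap/bootstrap.sh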

And execute it via the bootstrap actions in the RunJobFlow EMR API call:
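A trimmed boto3 sketch of the RunJobFlow call; everything except the BootstrapActions section is a placeholder for your own cluster settings:

import boto3

emr = boto3.client("emr")

# Trimmed-down RunJobFlow call -- only BootstrapActions matters for this post
response = emr.run_job_flow(
    Name="example-cluster",
    ReleaseLabel="emr-5.30.0",
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
    },
    BootstrapActions=[
        {
            "Name": "install-python-modules",
            "ScriptBootstrapAction": {"Path": "s3://your-bucket/bootstrap/bootstrap.sh"},
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])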

As you can see, the shell script copies the requirements.txt file locally and then runs pip install -r against it, which installs all the packages. If you want to watch it happen on a running cluster, you can SSH to the master node and view the bootstrap logs:
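On the clusters I have looked at, the per-action logs live under /mnt/var/log/bootstrap-actions/ on the node (they are also shipped to the cluster's S3 log bucket); adjust if your EMR release lays things out differently:

# One numbered directory per bootstrap action
ls /mnt/var/log/bootstrap-actions/
cat /mnt/var/log/bootstrap-actions/1/stdout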

You should see the stdout log like so:

Hope this helps.

Node Application Stopped Sending Updates To Slack – can’t identify protocol

I wanted to share my experience with a Node application that I support. This particular application is an API, and it logs each and every request it receives to an internal Slack channel. Our team uses this channel for many things: to verify when the API is in maintenance, to check that requests are processing, to gauge the overall health of the API, etc.

Once in a while, out of nowhere, we would stop receiving these updates in Slack. I set out to troubleshoot why this might be happening. At first we thought we were hitting the Slack rate limits, which are clearly defined here:

https://api.slack.com/docs/rate-limits

However, after reading the linked doc, I was skeptical. The API does serve a lot of requests, but not enough to hit their limit. We have two servers that send the Slack messages and process the API requests, and when the updates stopped it was always both servers, not just one. We had also run into this before, and restarting the service fixed the issue, so I was sure we had not hit the rate limit. On top of that, trying to send a manual Slack update using curl would not work either! I knew this had to be something with the Linux OS itself, not the Slack service.

I tried netstat to see if we were hitting some type of OS limit, and all looked well. Next I tried one of my favorite tools, lsof. At first I grepped for deleted files to see if something was being held open and not released, but nothing stood out. Then I grepped for node, and lo and behold I saw this:
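The output itself is not reproduced here, but the commands were along these lines; the tell-tale rows were node file descriptors of type sock with no identifiable protocol:

# Look for deleted files still held open by any process
sudo lsof | grep -i deleted
# Look at everything the node process has open
sudo lsof | grep node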

My eyes went right to the “can’t identify protocol” entries. I opened up a browser and started to research; the first hit when searching “can’t identify protocol” was a Stack Overflow post with the solution.

https://stackoverflow.com/questions/7911840/seeing-too-many-lsof-cant-identify-protocol

When lsof prints “Can’t identify protocol”, this usually relates to sockets (it should also say ‘sock’ in the relevant output lines).

So, somewhere in your code you are probably connecting sockets and not closing them properly (perhaps you need a finally block).

I suggest you step through your code with a debugger (easiest to use your IDE, potentially with a remote debugger, if necessary), while running lsof side-by-side. You should eventually be able to see which thread / line of code is creating these File Descriptors.

It turns out the Node application was opening sockets and not closing them properly, which caused the process to hit the hard limit on open files / file descriptors. You can view the hard and soft limits like so: switch to the user the application runs as and run:
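For example (nodeuser and the pid are placeholders for whatever your application runs as):

# As the user the application runs under:
ulimit -Hn    # hard limit on open file descriptors
ulimit -Sn    # soft limit
# And count what the node process currently has open:
sudo lsof -p <node_pid> | wc -l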

So you can see the nodeuser has a hard limit of 4096 open files, and because the application was not closing its descriptors, we hit that ceiling. This also explains why restarting the server or the process fixed it: the restart released the open file descriptors and the system was able to open sockets again. I spoke with the developer and we dug in; it looks like one of the modules we were using was the cause of the issue (perhaps we were using it wrong?). I found this out from this Stack Overflow question:
https://stackoverflow.com/questions/24922745/node-js-winston-how-to-safely-drain-a-logger

Question:

I have experimented with instantiating and closing winston loggers as (half) described on https://github.com/flatiron/winston#instantiating-your-own-logger, to no avail. I run into trouble closing file transports of Winston’s – walking through its source code, I found that the proper way to close off a logger would seem to be the close method. I expected this to take care of closing the transport file used by the logger – however that turned out to be not so.

Varying in frequency according to node.js server load, winston would still hold on to many transport files, infinitely long after the close method had been called for them, indefinitely long after no new writes were being initiated to them. I observed that through the node.js process file descriptors table (lsof -p). Even though close has been called for a Winston logger, it would indefinitely keep the file descriptor of the log file “in use”, i.e. the log file never gets really closed. Thus leaking file descriptors and eventually making the node.js process bump into the ulimit (-n) limit after my application has been up for long.

Should there be a specific programming pattern for draining a Winston logger such that it can be eventually closed?

Answer:

Create only one logger instance and then derive children from it. In this case, winston will hold only one open file handler. Might also be better for performance.
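To illustrate the pattern, here is a rough sketch assuming winston 3.x (this is not the code from our API):

const winston = require('winston');

// One logger instance -- and therefore one open file handle -- for the whole app
const baseLogger = winston.createLogger({
  transports: [new winston.transports.File({ filename: 'api.log' })],
});

// Everything else derives child loggers instead of creating new logger instances
const requestLogger = baseLogger.child({ component: 'request-handler' });
requestLogger.info('request received');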

So that was it: the developers agreed and set out to create a patch, and the problem was solved.

centos8 postgresql-11-check-db-dir[]: is missing or empty

We have been rolling out CentOS 8 in our lower environments for testing. We use dedicated VMware virtual servers with a CentOS 8 minimal install, and apart from the main application (PostgreSQL 11 here) we only apply our hardening settings to these systems. They use an LVM-backed ext4 filesystem mounted for the data directory.

Recently, on three of the new PG VMs, we noticed that PostgreSQL did not start after a reboot, and it seemed intermittent, even though we had enabled the systemd service to start on boot. The PostgreSQL startup log did not say much about the issue, but /var/log/messages did.

I checked the systemd service file and saw that out of the box postgres had the following:
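The relevant lines (trimmed to the parts discussed below, as I recall them from the stock PGDG unit file) were:

[Unit]
After=syslog.target
After=network.target

[Install]
WantedBy=multi-user.target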

After=syslog.target is a special target unit in systemd and is the standardized name used to pull in a syslog implementation.

After=network.target has very little meaning during start-up. It only indicates that the network management stack is up after it has been reached. Whether any network interfaces are already configured when it is reached is undefined.

WantedBy=multi-user.target normally defines a system state where all network services are started up and the system will accept logins, but a local GUI is not started. This is the typical default system state for server systems, which might be rack-mounted headless systems in a remote server room.

Those options will not ensure that every filesystem in fstab is mounted before postgres starts. So what we were seeing was a classic race condition: postgres started before the data directory was mounted (as mentioned above, we use a custom PGDATA location). After some research I found the option that fixed this. You will need to edit the pg11 service with an override, then reload systemd and reboot, and all should work. First, find your LVM mount unit by running the following:
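One way to list the active mount units (the exact unit name will depend on your mount point):

systemctl list-units --type=mount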

You can see my u02-data1.mount in the output. Now create the override file with the following content (if you have multiple mounts, add them as well).
Edit with: systemctl edit postgresql-11
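A sketch of the override we ended up with; u02-data1.mount is my mount unit, so substitute your own (and list any additional mounts the same way):

[Unit]
After=local-fs.target u02-data1.mount
Requires=u02-data1.mount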

Reload the daemon with: systemctl daemon-reload

After=local-fs.target: systemd-fstab-generator(8) automatically adds dependencies of type Before= to all mount units that refer to local mount points for this target unit. In addition, it adds dependencies of type Wants= to this target unit for those mounts listed in /etc/fstab that have the auto mount option set.

AWS CLI Max Concurrent Requests Tuning

In this post I would like to go over how I tuned a test server for copying / syncing files from the local filesystem to S3 over the internet. If you have ever had this task, you will have noticed that as the file count grows, so does the time it takes to upload the files to S3. After some web searching I found that AWS allows you to tune the CLI configuration to allow more concurrency than the default.
AWS CLI S3 Config

The parameter that we will be playing with is max_concurrent_requests. It has a default value of 10, which allows only 10 concurrent requests to the S3 API. Let's see if we can change that value and get some performance gains. My test setup is as follows:

I have 56 files of 102 MB each in the test directory:

For the first test I am going to run aws s3 sync with no changes, so out of the box it uses 10 max_concurrent_requests. Let's use the Linux time command to measure how long it takes to copy all 56 files to S3. I delete the folder on S3 before each iteration to keep the test consistent. You can also watch the HTTPS (port 443) connections via netstat and count them to see what is going on. Across all the tests my best result was with 250, so as you can see you will need to play with the setting to get the best result; the sweet spot will change along with the server specs.
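The test loop looked roughly like this (bucket and paths are placeholders):

# Time the sync of the 56 test files
time aws s3 sync /data/s3-test/ s3://your-bucket/s3-test/
# In a second terminal, count the in-flight HTTPS connections to S3
netstat -ant | grep ':443' | grep ESTABLISHED | wc -l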

1. 1m25.919s with the default configuration:

2. Now let's set the max concurrent requests to 20 and try again; you can do this with the command below. After running it we see a small gain.
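The setting lives in the CLI's S3 config and can be changed like so:

aws configure set default.s3.max_concurrent_requests 20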

3. Bumped up to 50 shows a bit more gain:

4. Bumped up to 100, I start to notice that we lost some speed:

5. Bumped up to 250 we see the best result so far:

6. Bumped up to 500, we lose performance, most likely due to the machine resources.

So to wrap up: you can tune the number of concurrent requests the AWS CLI makes to S3, but you will need to play with the setting to find the best value for your machine.

Postgres Long Running Active Queries Send To Slack

I needed a utility to alert our team when any long running queries are active on a production Postgres cluster. I came up with the following Python code, which alerts Slack if an active query exceeds 45 minutes. The script takes in user parameters as well; I will demonstrate how to call it. Hope it helps someone.

CRON CALL:

CODE:
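The original script is not reproduced here, but a minimal sketch of the approach might look like the following (psycopg2, a Slack incoming-webhook URL, and the connection parameters are all assumptions; adjust to your environment):

#!/usr/bin/env python3
"""Alert Slack when any active query has been running longer than a threshold."""
import argparse
import json
import urllib.request

import psycopg2

QUERY = """
SELECT pid, usename, now() - query_start AS duration, query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > %s::interval
  AND pid <> pg_backend_pid();
"""

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", required=True)
    parser.add_argument("--dbname", default="postgres")
    parser.add_argument("--user", default="postgres")
    parser.add_argument("--minutes", type=int, default=45)
    parser.add_argument("--webhook", required=True, help="Slack incoming webhook URL")
    args = parser.parse_args()

    conn = psycopg2.connect(host=args.host, dbname=args.dbname, user=args.user)
    with conn, conn.cursor() as cur:
        cur.execute(QUERY, (f"{args.minutes} minutes",))
        rows = cur.fetchall()
    conn.close()

    # One Slack message per offending query
    for pid, user, duration, query in rows:
        text = (f"Long running query on {args.host}: pid={pid} user={user} "
                f"duration={duration}\n{query[:500]}")
        payload = json.dumps({"text": text}).encode("utf-8")
        req = urllib.request.Request(args.webhook, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

if __name__ == "__main__":
    main()

Called from cron every few minutes with the right host and webhook URL, this covers the alerting described above.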

SLACK MESSAGE:

Python Function Execute Subprocess With Timeout

I have a project that rsyncs data from a remote RPM repository to maintain a local mirror of it. The issue I faced was that the remote mirror would sometimes stall the rsync due to an overloaded network or other unforeseen issues. I wanted to take advantage of rsync's delta-transfer algorithm to have it pick up right where it left off, so I wrote a function to do this. If the 900 second timeout was hit, it usually meant there was an issue with the transfer. I also want to note that I observed the stalls on many mirrors, so it was not just an issue with one TCP path. I use this in production and it logs each iteration or restart. The function below also kills the current rsync so multiple copies are not running at the same time, and because I only wanted to perform five iterations of rsync upon error or timeout, I use a while loop.

Here are the individual rsync commands in the INI configuration.
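Something along these lines (the section names, mirror URLs, and paths are entirely illustrative, not our actual config):

[epel]
command = rsync -avzH --delete rsync://mirror.example.com/epel/7/x86_64/ /repo/epel/7/x86_64/

[centos-base]
command = rsync -avzH --delete rsync://mirror.example.com/centos/7/os/x86_64/ /repo/centos/7/os/x86_64/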

Here is how I call the execute_jobs_timeout() function:

The function:
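The original function is not reproduced here, but a minimal sketch of the approach (same name, simplified logging and argument handling) looks like this:

import logging
import shlex
import subprocess
import time

def execute_jobs_timeout(cmd, timeout=900, max_attempts=5):
    """Run cmd, killing and retrying it if it exceeds timeout seconds or exits non-zero."""
    attempt = 0
    while attempt < max_attempts:
        attempt += 1
        logging.info("attempt %s/%s: %s", attempt, max_attempts, cmd)
        proc = subprocess.Popen(shlex.split(cmd))
        try:
            rc = proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            logging.warning("timeout of %ss hit, killing rsync and retrying", timeout)
            proc.kill()   # make sure only one copy of rsync runs at a time
            proc.wait()
            continue
        if rc == 0:
            logging.info("command completed successfully")
            return True
        logging.warning("command exited with code %s, retrying", rc)
        time.sleep(5)
    return False

# Example call (command string is illustrative)
logging.basicConfig(level=logging.INFO)
execute_jobs_timeout("rsync -avzH --delete rsync://mirror.example.com/epel/ /repo/epel/",
                     timeout=900, max_attempts=5)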

Log Snippet showing each command executing:

CENTOS6 Postgres pg_upgrade 9 to 11 – In Place – Link – No Copy – Limited Disk Space

I wanted to share my experience upgrading a Postgres database server from major version 9.3 to 11. I am showing the steps I took to get many servers in dev and production upgraded with limited disk space (not enough free space to copy the cluster). I am hoping this will help with the problems I faced when testing this procedure. Using the --link parameter has drawbacks, as noted in the documentation, but we take full VM backups of each server, so we can always restore from backup if the upgrade fails, and we will not need to start the pg9.3 database again.

https://www.postgresql.org/docs/11/pgupgrade.html

-k
--link

use hard links instead of copying files to the new cluster
If you ran pg_upgrade with --link, the data files are shared between the old and new cluster. If you started the new cluster, the new server has written to those shared files and it is unsafe to use the old cluster.

Before we get started, make a backup of pg_hba.conf and postgresql.conf; you will need them later to reconstruct the pg11 configs.
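For example (substitute your actual 9.3 data directory):

cp /var/lib/pgsql/9.3/data/pg_hba.conf /root/
cp /var/lib/pgsql/9.3/data/postgresql.conf /root/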

Use WGET to grab the RPMS from https://yum.postgresql.org

Install the RPMS for postgres11 that we just downloaded
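Assuming the RPMs were downloaded to the current directory:

yum localinstall -y postgresql11*.rpm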

We will create the data location for postgres11 where the files will be hardlinked rather than copied. You can see the tablespace and index disk locations from the pg9.3 install. It's important to create the new pg11 data directory on the same filesystem, since the --link parameter uses hardlinks, which cannot traverse filesystems.

We will need to init a new postgres cluster in our new data11 location on disk.
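For example (the data11 path is a placeholder; it must live on the same filesystem as the 9.3 data):

mkdir -p /u02/data11
chown postgres:postgres /u02/data11
chmod 700 /u02/data11
su - postgres -c "/usr/pgsql-11/bin/initdb -D /u02/data11"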

Now we are ready to stop pg9.3 and check pg_upgrade compatibility. pg_upgrade ships with a --check argument that verifies the compatibility of the two clusters and makes sure the upgrade will work before changing any files. Let's stop pg9.3 and run pg_upgrade with --check.
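Roughly like this (bin and data paths are examples; run pg_upgrade as the postgres user):

service postgresql-9.3 stop
su - postgres -c "/usr/pgsql-11/bin/pg_upgrade \
    --old-bindir=/usr/pgsql-9.3/bin \
    --new-bindir=/usr/pgsql-11/bin \
    --old-datadir=/u02/data \
    --new-datadir=/u02/data11 \
    --link --check"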


OK, the checks have passed and the cluster versions are ready for upgrade. Let's run the same command without the --check parameter and upgrade postgres.

OK, pg_upgrade completed successfully and generated two scripts: one to analyze the new pg11 cluster so the query planner has statistics (and to vacuum), and one to clean up and remove the old pg9.3 locations on disk. Before running them, let's start pg11. We will need to create an override file to tell pg11 where the data11 directory lives; then we should be able to start postgres, check some things, and verify our upgrade.
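On CentOS 6 the PGDG init script reads /etc/sysconfig/pgsql/postgresql-11, so the override is just a PGDATA line (the path is an example):

echo 'PGDATA=/u02/data11' > /etc/sysconfig/pgsql/postgresql-11
service postgresql-11 start
service postgresql-11 status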


OK, we can see pg11 running, and we could run the generated scripts to clean up, but first let's take a look at the data and index directories to see what the upgrade produced.

We can view the shell scripts that pg_upgrade produced to clean up the old pg9.3 references and run the analyze/vacuum.


This looks good. Let's execute them, clean up any remaining pg9.3 references, and remove the pg9.3 RPMs.
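Assuming the standard script names pg_upgrade produces for this version, and that they landed in the directory pg_upgrade was run from, run them as the postgres user:

./analyze_new_cluster.sh      # analyzes the new cluster in stages for the planner
./delete_old_cluster.sh       # removes the old 9.3 data -- only run once you are satisfied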

Remove the pg9.3 RPMs and references, and set the new data location in the postgres user's .pgsql_profile.
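Something like the following (the package list and path are examples for a stock PGDG 9.3 install):

yum remove postgresql93 postgresql93-server postgresql93-libs postgresql93-contrib
echo 'export PGDATA=/u02/data11' >> /var/lib/pgsql/.pgsql_profile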

You can now review the pg_hba.conf and postgresql.conf you saved in /root and add what's needed to the new pg11 configs.

That’s it!!

SINOPIA NPM allow connections to GITHUB as well as the NPM registry

SINOPIA LINK HERE
We use Sinopia as a proxy on our internal network, behind the firewall, to allow users to install Node packages without a direct internet connection. We basically run Sinopia on a machine that has access to the internet, and the clients point at that server to install packages that are not available locally. We have been running into issues where installs that needed access to GitHub would fail with something like this:

As you can see, we are getting choked at:

To get around this we need to change the config.yml on the server to allow proxying to GitHub; here is the final configuration. Pay attention to the uplinks section and the proxy settings where github is defined. Hope this helps other users, as we had a fun time figuring it out.
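The shape of it was roughly as follows (storage, auth, and listen settings are omitted; this is a sketch, not our full file):

uplinks:
  npmjs:
    url: https://registry.npmjs.org/
  github:
    url: https://github.com/

packages:
  '*':
    allow_access: $all
    allow_publish: $authenticated
    # proxy anything we do not host locally, github included
    proxy: npmjs github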

PSQL Connect To AWS Redshift From Windows 10 PowerShell

Coming from a completely Linux background, I was tasked with connecting to an AWS Redshift cluster (or any Postgres cluster) from Windows 10 PowerShell with psql. I knew it was possible, but searching the internet only turned up CMD prompt solutions, and when I attempted it via PowerShell I was faced with the error below. You will need to install Postgres on Windows 10 to get access to the psql binary; you can get it here:
https://www.postgresql.org/download/windows/

It turns out (a colleague of mine and I figured this out together) that you need to set the variable PGCLIENTENCODING from the PowerShell command line. This was expected, but we could not nail down the syntax at first; here it is.
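In PowerShell, set it for the current session and then connect (host, port, user, and database below are placeholders):

# Set the client encoding for the current PowerShell session
$env:PGCLIENTENCODING = "UTF8"
# Then connect as usual
psql -h example-cluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -U awsuser -d dev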

Once this is set, you can connect to PG as normal.

Python Generator Find Files With Wildcard

This is a neat way to generate file names in a directory that match a specific pattern; I use it to build the list of files exported out of Hive that need to be loaded into S3.
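A sketch of the generator (the directory and pattern are illustrative):

import fnmatch
import os

def find_files(directory, pattern):
    """Yield full paths under directory whose file name matches the wildcard pattern."""
    for root, _dirs, files in os.walk(directory):
        for name in fnmatch.filter(files, pattern):
            yield os.path.join(root, name)

# Example: all part files exported from Hive
for path in find_files("/data/hive/export", "part-*"):
    print(path)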