Postgres Long Running Active Queries Send To Slack

I needed a utility to alert our team whenever long-running queries were active on a production Postgres cluster. I came up with the following Python code, which does exactly that: it alerts Slack if an active query exceeds 45 minutes. The script also takes user parameters; I will demonstrate how to call it below. Hope it helps someone.

CRON CALL:
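The original crontab entry is not shown; a hypothetical equivalent, running every 15 minutes with the flags used by the sketch under CODE below (script name, paths, and webhook are placeholders):

*/15 * * * * /usr/bin/python3 /opt/scripts/pg_long_query_alert.py --dsn "host=127.0.0.1 dbname=prod user=monitor" --webhook https://hooks.slack.com/services/XXXX --minutes 45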

CODE:
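My original script is not reproduced here; a minimal sketch of the approach, assuming psycopg2 and a Slack incoming webhook (the script name, flags, and webhook are hypothetical), looks like this:

import argparse
import json
import urllib.request

import psycopg2

def find_long_queries(dsn, minutes):
    # pg_stat_activity exposes query_start for every backend
    sql = """SELECT pid, now() - query_start AS duration, query
               FROM pg_stat_activity
              WHERE state = 'active'
                AND now() - query_start > %s * interval '1 minute'"""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(sql, (minutes,))
        return cur.fetchall()

def alert_slack(webhook_url, text):
    # post a simple incoming-webhook payload
    req = urllib.request.Request(webhook_url,
                                 data=json.dumps({'text': text}).encode(),
                                 headers={'Content-Type': 'application/json'})
    urllib.request.urlopen(req)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--dsn', required=True)      # libpq connection string
    parser.add_argument('--webhook', required=True)  # Slack incoming-webhook URL
    parser.add_argument('--minutes', type=int, default=45)
    args = parser.parse_args()
    for pid, duration, query in find_long_queries(args.dsn, args.minutes):
        alert_slack(args.webhook, 'LONG RUNNING QUERY pid %s active for %s: %s'
                    % (pid, duration, query[:200]))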

SLACK MESSAGE:
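The screenshot is not reproduced here; the sketch above would post a message roughly like:

LONG RUNNING QUERY pid 18234 active for 0:52:13: SELECT ...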

Python Function Execute Subprocess With Timeout

I have a project that rsync's data from an RPM repository to maintain a local mirror of that repo. The issue I faced was that the remote mirror would sometimes stop the rsync due to an overloaded network or other unforeseen issues. I wanted to use rsync's hashing algorithm to have it pick up right where it left off, so I wrote a function to do this. If the 900-second timeout was hit, it usually meant there was an issue with the transfer. I also want to note that I observed the "rsync stops serving" issue on many mirrors, so it was not just an issue with our TCP network. I use this in production, and it logs each iteration or restart. The function below also kills the current rsync so multiple copies are not running at the same time. I only wanted to perform 5 iterations of rsync upon error or timeout, so I use a while loop here.

Here are the individual rsync commands in the INI configuration.
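The original INI is not shown; a hypothetical equivalent (the mirror host and local paths are made up) would look like:

[rsync_jobs]
os = rsync -avH --delete rsync://mirror.example.com/centos/6/os/x86_64/ /repo/centos/6/os/x86_64/
updates = rsync -avH --delete rsync://mirror.example.com/centos/6/updates/x86_64/ /repo/centos/6/updates/x86_64/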

Here is how I call the execute_jobs_timeout() function:
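A sketch of the call site, iterating over the hypothetical INI above:

import configparser

config = configparser.ConfigParser()
config.read('/etc/repo_rsync.ini')
for name, command in config.items('rsync_jobs'):
    execute_jobs_timeout(command, timeout=900, max_tries=5)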

The function:
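My production function is not reproduced here; a minimal sketch of the same idea (900-second timeout, kill the hung rsync, at most 5 iterations, log every restart) looks like this:

import logging
import subprocess

logging.basicConfig(filename='/var/log/repo_rsync.log',
                    format='%(asctime)s %(levelname)s %(message)s',
                    level=logging.INFO)

def execute_jobs_timeout(command, timeout=900, max_tries=5):
    tries = 0
    while tries < max_tries:
        tries += 1
        logging.info('iteration %d/%d: %s', tries, max_tries, command)
        proc = subprocess.Popen(command, shell=True)
        try:
            ret = proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            # kill the hung rsync so a second copy never runs alongside the retry;
            # the restarted rsync picks up where it left off via its delta algorithm
            proc.kill()
            proc.wait()
            logging.warning('hit %ds timeout, restarting rsync', timeout)
            continue
        if ret == 0:
            logging.info('transfer finished on iteration %d', tries)
            return True
        logging.warning('rsync exited %d, retrying', ret)
    return False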

Log Snippet showing each command executing:

CENTOS6 Postgres pg_upgrade 9 to 11 – In Place – Link – No Copy – Limited Disk Space

I wanted to share my experience with upgrading a Postgres database server from major version 9.3 to 11. I am showing the steps that I took to get many servers in dev and production upgraded with limited disk space (not enough space to copy the data). I am hoping this will help with the problems I faced when testing this procedure. Using the --link parameter has drawbacks, as noted in the documentation; however, we perform full VM backups of each server, so we can always restore from backup if the upgrade fails, and we will not need to start the pg9.3 database again.

https://www.postgresql.org/docs/11/pgupgrade.html

-k
--link

use hard links instead of copying files to the new cluster
If you ran pg_upgrade with --link, the data files are shared between the old and new cluster. If you started the new cluster, the new server has written to those shared files and it is unsafe to use the old cluster.

Before we get started, make a backup of the files pg_hba.conf and postgresql.conf; you will need them later to reconstruct the pg11 configs.
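For example (the 9.3 data directory below is the PGDG default; yours may differ):

cp /var/lib/pgsql/9.3/data/pg_hba.conf /root/
cp /var/lib/pgsql/9.3/data/postgresql.conf /root/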

Use WGET to grab the RPMS from https://yum.postgresql.org
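Something like the following; the exact filenames change with every minor release, so the 11.5 versions shown are illustrative only:

cd /root
wget https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-6-x86_64/postgresql11-libs-11.5-1PGDG.rhel6.x86_64.rpm
wget https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-6-x86_64/postgresql11-11.5-1PGDG.rhel6.x86_64.rpm
wget https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-6-x86_64/postgresql11-server-11.5-1PGDG.rhel6.x86_64.rpm
wget https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-6-x86_64/postgresql11-contrib-11.5-1PGDG.rhel6.x86_64.rpm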

Install the RPMS for postgres11 that we just downloaded
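Assuming the RPMs landed in /root:

rpm -ivh /root/postgresql11*.rpm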

We will create the data location for postgres11, where the files will be hardlinked rather than copied. You can see the tablespace disk locations and the index locations from the pg9.3 install. It's important to create the new pg11 data directory on the same filesystem, since the --link parameter uses hardlinks, and hardlinks cannot traverse filesystems.
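For example, with both clusters living under /var/lib/pgsql on the same filesystem (adjust for your mount points):

mkdir /var/lib/pgsql/data11
chown postgres:postgres /var/lib/pgsql/data11
chmod 700 /var/lib/pgsql/data11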

We will need to init a Postgres database in our new location on disk, data11.
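Using the PGDG binary path for pg11:

su - postgres -c "/usr/pgsql-11/bin/initdb -D /var/lib/pgsql/data11"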

Now we are ready to stop pg9.3 and check pg_upgrade compatibility. pg_upgrade ships with a --check argument that verifies the compatibility of the clusters and makes sure the upgrade will work before changing any files. Let's stop pg9.3 and run pg_upgrade with the --check parameter.
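With the paths assumed above, that is roughly:

service postgresql-9.3 stop
su - postgres -c "/usr/pgsql-11/bin/pg_upgrade \
    -b /usr/pgsql-9.3/bin -B /usr/pgsql-11/bin \
    -d /var/lib/pgsql/9.3/data -D /var/lib/pgsql/data11 \
    --link --check"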


OK, the checks have passed and the cluster versions are ready for upgrade. Let's run this without the --check parameter and upgrade Postgres.
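Same command as above, minus --check:

su - postgres -c "/usr/pgsql-11/bin/pg_upgrade \
    -b /usr/pgsql-9.3/bin -B /usr/pgsql-11/bin \
    -d /var/lib/pgsql/9.3/data -D /var/lib/pgsql/data11 \
    --link"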

OK, pg_upgrade completed successfully and generated two scripts: one that analyzes the new pg11 cluster to gather stats for the query planner and vacuum, and one that cleans up and removes the old pg9.3 locations on disk. Let's start pg11. We will need to create an override file to tell pg11 where the data11 data lives; then we should be able to start Postgres, check some things, and verify our upgrade.
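On CentOS 6 the PGDG init script reads overrides from /etc/sysconfig/pgsql, so with the paths assumed above the override file is a single PGDATA line:

echo 'PGDATA=/var/lib/pgsql/data11' > /etc/sysconfig/pgsql/postgresql-11
service postgresql-11 start
su - postgres -c "/usr/pgsql-11/bin/psql -c 'SELECT version();'"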


OK, we can see pg11 is running, and we could run the generated scripts to clean up, but first let's take a look at the data and index directories to see what the upgrade produced.
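One quick way to see the effect of --link is to check link counts and inodes with ls -li; hardlinked data files show a link count of 2 rather than a second copy (the database OID directory below is just an example):

ls -li /var/lib/pgsql/data11/base/16384/ | head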

We can view the shell scripts that pg_upgrade produced to clean up the old pg9.3 references and run the analyze vacuums.
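pg_upgrade writes them to the directory it was run from, which with the su - invocations above is the postgres home directory:

cat /var/lib/pgsql/analyze_new_cluster.sh
cat /var/lib/pgsql/delete_old_cluster.sh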


This looks good. Let's execute them to clean up any pg9.3 references, and then remove the pg9.3 RPMs.
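Run both as the postgres user:

su - postgres -c "./analyze_new_cluster.sh"
su - postgres -c "./delete_old_cluster.sh"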

Remove the pg9.3 RPMs and references, and set the new data location in .pgsql_profile.
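Roughly (package names follow the PGDG convention; .pgsql_profile lives in the postgres home directory and is sourced by the PGDG .bash_profile):

yum remove postgresql93 postgresql93-server postgresql93-libs postgresql93-contrib
echo 'export PGDATA=/var/lib/pgsql/data11' >> /var/lib/pgsql/.pgsql_profile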

You can now review the pg_hba.conf and postgresql.conf you saved in /root and add what's needed to the new pg11 configs.
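A quick diff against the saved copies shows what needs to be carried over (keep in mind some 9.3 settings were renamed or removed by 11):

diff /root/postgresql.conf /var/lib/pgsql/data11/postgresql.conf
diff /root/pg_hba.conf /var/lib/pgsql/data11/pg_hba.conf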

That’s it!!

SINOPIA NPM allow connections to GITHUB as well as the NPM registry

Sinopia: https://github.com/rlidwka/sinopia
We use Sinopia as a proxy on our internal network, behind the firewall, to allow users to install Node packages without a direct internet connection. We basically run Sinopia on a machine that has access to the internet, and the clients point at that server to install packages that are not available locally. We had been running into issues where installs that needed access to GitHub would fail with something like this:

As you can see, we are getting choked at:

To get around this, we need to change the config.yml on the server to allow proxying to GitHub; here is the final configuration. Hope this helps other users, as we had a fun time trying to figure it out. Pay attention to the uplinks section and the proxy settings where github is defined.
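A minimal sketch of the relevant parts of config.yml (the catch-all package pattern is an example; keep your existing storage and auth sections):

uplinks:
  npmjs:
    url: https://registry.npmjs.org/
  github:
    url: https://github.com/

packages:
  '*':
    allow_access: $all
    allow_publish: $authenticated
    proxy: npmjs github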

PSQL Connect To AWS Redshift From Windows 10 PowerShell

Coming from a completely Linux background, I was tasked with connecting to an AWS Redshift cluster (or any Postgres cluster) via Windows PowerShell and psql. I knew it was possible, but searching the internet only turned up CMD prompt solutions, and when I attempted it via PowerShell I was faced with the error below. You will need to install Postgres on Windows 10 to get access to the psql binary; you can get it here:
https://www.postgresql.org/download/windows/

It turns out, as a colleague of mine and I figured out, that you need to set the PGCLIENTENCODING variable via the PowerShell command line. This was expected, but we could not nail down the syntax at first; eventually we found it.
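The syntax that worked (the UTF8 value matched our setup; yours may differ):

PS C:\> $env:PGCLIENTENCODING = "UTF8"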

Once this is set, you can connect to PG as normal.
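For example (the Redshift endpoint, port, user, and database below are placeholders):

PS C:\> & 'C:\Program Files\PostgreSQL\11\bin\psql.exe' -h mycluster.abc123xyz.us-east-1.redshift.amazonaws.com -p 5439 -U admin -d mydb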

Python Generator Find Files With Wildcard

This is a neat way to generate file names in a directory that match a specific pattern. I use it to generate a list of files exported out of Hive to load into S3.
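The original snippet is not reproduced here; a minimal sketch with os.walk and fnmatch (the directory and the part-* pattern are examples):

import fnmatch
import os

def find_files(directory, pattern):
    # lazily yield matching paths instead of building the whole list in memory
    for root, dirs, files in os.walk(directory):
        for name in fnmatch.filter(files, pattern):
            yield os.path.join(root, name)

for path in find_files('/data/hive_export', 'part-*'):
    print(path)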

Python3 Subprocess and Rsync Deadlock Strace Timeout

I recently came across a tough-to-debug issue where I was calling a shell script from Python using the subprocess module; this shell script called rsync. No matter what I tried, I would always run into a timeout. I fired up strace and noticed that the process was stuck in a select() timeout:

select(4, NULL, [3], [3], {60, 0}) = 0 (Timeout)

I looked at the subprocess documentation, and it turns out that using pipes without consuming them can fill the OS pipe buffer:

Warning

This will deadlock when using stdout=PIPE and/or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that.

I was baffled. I finally took the approach of eliminating stderr and stdout entirely and just checking the return status of the command using run(). Here is what I finally came up with, and all was well.
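A sketch of the fix, assuming Python 3.5+ for subprocess.run() (the wrapper script path is hypothetical):

import subprocess

def run_sync(script):
    # no PIPEs at all: discard output so the OS pipe buffer can never fill
    result = subprocess.run([script],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode

if run_sync('/opt/scripts/mirror_sync.sh') != 0:
    raise RuntimeError('rsync wrapper failed')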

Hope you find this and that it helps you.

Amazon Redshift Long Running Query Alert to Slack

This Python code, when called with a user that can query the STV_RECENTS table, checks the duration of each currently running query against a threshold set in the config (in microseconds) and sends an alert to Slack if it exceeds 30 minutes. I have it cronned up and running every 30 minutes.
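The full script is not reproduced here; a minimal sketch of the core check, using psycopg2 and the old slackclient API linked below (the INI section and key names are hypothetical, matching the sample INI further down):

import configparser
import sys

import psycopg2
from slackclient import SlackClient

config = configparser.ConfigParser()
config.read(sys.argv[1])
rs = config['redshift']

conn = psycopg2.connect(host=rs['host'], port=rs['port'], dbname=rs['dbname'],
                        user=rs['user'], password=rs['password'])
cur = conn.cursor()
# STV_RECENTS reports duration in microseconds
cur.execute("""SELECT pid, user_name, duration, query
                 FROM stv_recents
                WHERE status = 'Running' AND duration > %s""",
            (int(config['alert']['threshold_microseconds']),))

sc = SlackClient(config['alert']['slack_token'])
for pid, user_name, duration, query in cur.fetchall():
    sc.api_call('chat.postMessage', channel=config['alert']['channel'],
                text='Redshift long query: pid %s user %s running %.1f min'
                     % (pid, user_name.strip(), duration / 60000000.0))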

CLI example:
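Something like (the script and INI paths are hypothetical):

python3 redshift_query_alert.py /opt/scripts/redshift_alert.ini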

You will need slackclient:
https://pypi.python.org/pypi/slackclient
You will need psycopg2:
https://pypi.python.org/pypi/psycopg2

INI file:
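A sample matching the sketch above (all values are placeholders; 1800000000 microseconds = 30 minutes):

[redshift]
host = mycluster.abc123xyz.us-east-1.redshift.amazonaws.com
port = 5439
dbname = mydb
user = monitor
password = secret

[alert]
threshold_microseconds = 1800000000
slack_token = xoxb-your-token
channel = #alerts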

Slack message example:
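The screenshot is not reproduced here; the sketch above posts something roughly like:

Redshift long query: pid 9921 user monitor running 42.5 min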

Nagios Python Plugin Check If File Is Stale

I wrote this simple plugin to check whether a log file has gone stale on a server, using Nagios and NRPE. The plugin checks multiple files matching the app. naming convention.
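The plugin itself is not reproduced here; a minimal sketch using the standard Nagios exit codes (the glob pattern and thresholds are examples):

#!/usr/bin/env python
import glob
import os
import sys
import time

PATTERN = '/var/log/app.*.log'
WARN_AGE = 3600   # seconds with no writes before WARNING
CRIT_AGE = 7200   # seconds with no writes before CRITICAL

files = glob.glob(PATTERN)
if not files:
    print('UNKNOWN: no files match %s' % PATTERN)
    sys.exit(3)

now = time.time()
worst = 0  # Nagios codes: 0=OK, 1=WARNING, 2=CRITICAL
stale = []
for path in files:
    age = now - os.path.getmtime(path)
    if age > WARN_AGE:
        worst = max(worst, 2 if age > CRIT_AGE else 1)
        stale.append('%s (%ds old)' % (path, age))

if worst == 0:
    print('OK: %d files fresh' % len(files))
else:
    print('%s: stale files: %s'
          % ('CRITICAL' if worst == 2 else 'WARNING', ', '.join(stale)))
sys.exit(worst)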

POSTGRES – Top 100 Tables In Tablespace

I had a situation where I needed to find the top 100 largest tables in a certain tablespace on a Postgres 9 database; in my case we archive tables into an archive1 tablespace. This query will find the largest relations in the archive1 tablespace. It's important to swap out 'archive1' for whatever tablespace you are trying to list.
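My exact query is not reproduced here; a sketch of the same idea joins pg_class to pg_tablespace and sorts by on-disk size:

SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
  FROM pg_class c
  JOIN pg_tablespace t ON t.oid = c.reltablespace
 WHERE t.spcname = 'archive1'
 ORDER BY pg_relation_size(c.oid) DESC
 LIMIT 100;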

Hope this helps you out; it took some time to get it to work.