ImportError: cannot import name 'is_s3express_bucket'

Another day, another odd error: my colleague and I were alerted to an issue on one of our MWAA (Airflow 2.5.1) environments.

The above error was raised when a new DAG tried to instantiate an S3 client via boto3/botocore. We traced it down to the s3transfer package; apparently this commit is where things changed:
https://github.com/boto/s3transfer/commit/3b50c31bb608188cdfb0fc7fd8e8cd03b6b7b187

Support was added for S3 Express, and from what we gathered we needed to upgrade three packages (boto3, botocore, and s3transfer) in our constraints.txt to mutually compatible versions to solve the import issue.
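The pins looked something like the lines below; the exact version numbers here are illustrative, so use a boto3/botocore/s3transfer set that is compatible with your MWAA base image:

```
# constraints.txt (illustrative versions; pick a mutually compatible set)
boto3==1.33.13
botocore==1.33.13
s3transfer==0.8.2
```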

After making these package version changes we were back in business.

The screen lock and fingerprint unlock features in macOS 14.6 are noticeably slow.

I recently ran into an issue on my M1 MacBook Pro after updating to macOS Sonoma 14.6.1. When the screen locked (via the screensaver or after a reboot) and I attempted to unlock it with the fingerprint reader, it could take upwards of 10 to 20 seconds to go through. After researching the issue, it appears to be related to account policy data.

It seems that the longer the account policy data history grows, the slower the locking and unlocking process becomes. In my case, the list was large due to multiple policy updates. The command below was shared by my colleague, and running it from a terminal as root solved the issue for me. Thanks Joe!

Replace /Users/jason with your username:
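A sketch of the fix, assuming the slowdown comes from accumulated policy history in the user's accountPolicyData attribute; the "policyHistory" key name is an assumption, so inspect the attribute first and adjust:

```bash
# Inspect the account policy data stored on the user record (a plist attribute)
sudo dscl . -read /Users/jason accountPolicyData

# Hypothetical cleanup: delete the accumulated policy history entries
# ("policyHistory" is an assumed key name; confirm it in the read output above)
sudo dscl . -deletepl /Users/jason accountPolicyData policyHistory
```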

– Jralph

Redshift Serverless Data Sharing Query aborted due to read failure on a perm block

Here is an interesting error we recently encountered between one of our Redshift Serverless workgroups and a provisioned Redshift cluster. We have a data sharing setup in which the serverless database is the producer of certain key tables, and we share these tables to the provisioned cluster via data sharing.

When querying one of these tables on the provisioned cluster through the data share with Python (psycopg2) and Airflow, we received the "Query aborted due to read failure on a perm block" error shown in the title.

We opened a support case with AWS and were informed that this is due to a metadata mismatch, which can be resolved by running an UPDATE against the shared table on the producer side. After running this update we were back in business and things operated normally.

The following query can be executed on the producer cluster as a mitigation: "UPDATE <TABLE_NAME> SET <COLUMN_NAME> = 1 WHERE false;" where TABLE_NAME is the name of the table on which queries are failing and COLUMN_NAME is the name of any column in that table. This query will not result in any actual change to the producer's data, but it will synchronize the metadata pertaining to TABLE_NAME on the consumer, letting subsequent data sharing queries go through successfully.
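For example, if the failing shared table were a hypothetical orders table, the no-op metadata sync would look like this:

```sql
-- WHERE false matches no rows, so the data is untouched,
-- but the shared table's metadata gets resynchronized
UPDATE orders SET order_id = 1 WHERE false;
```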

— Jason Ralph

Redshift Serverless Find Largest Tables

You can use the SQL below on Redshift Serverless to find your largest tables. The results can be returned in 1 MB data blocks or converted to TB, and you can change the LIMIT to control the number of rows returned.

1MB data blocks:
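A sketch against svv_table_info, whose size column is reported in 1 MB blocks:

```sql
-- Top 10 tables by size; svv_table_info.size is in 1 MB blocks
SELECT "table", size AS size_mb_blocks
FROM svv_table_info
ORDER BY size DESC
LIMIT 10;
```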

Size In TB:
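The same query with the block count converted to terabytes:

```sql
-- 1 MB blocks -> TB (1024 * 1024 blocks per TB)
SELECT "table", size / 1024.0 / 1024.0 AS size_tb
FROM svv_table_info
ORDER BY size DESC
LIMIT 10;
```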

Specific Fields:
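And a variant pulling a few more of svv_table_info's columns:

```sql
-- Schema, table, row count, size, and percent of cluster space used
SELECT "schema", "table", tbl_rows, size AS size_mb_blocks, pct_used
FROM svv_table_info
ORDER BY size DESC
LIMIT 10;
```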

botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: https://lambda.us-east-1.amazonaws.com

Recently, while working on one of our EMR projects that uses Lambdas and Airflow, I ran into the read timeout error shown in the title.

We have a Lambda, invoked from boto3 in an Airflow step, that updates DynamoDB with values needed for our pipeline. This function had worked in previous tests with no issues, but we had recently added code to it, which caused it to run longer than before. When we tested the Lambda from the console, the function worked fine, albeit a bit slower than the previous version. When calling it from Airflow, we would continually hit the timeout, causing the function to be executed multiple times during retries.

I thought to test the function from the AWS CLI, and it revealed the issue: the default boto3 read timeout is 60 seconds, and our Lambda was now taking longer than that. So even though we set the Lambda timeout to 4 minutes, boto3 was timing out at 1 minute and never getting the response back from Lambda. The way we fixed this was to configure the boto3 Lambda client with a longer read timeout.
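A sketch of that configuration; the function name is hypothetical and the timeout values are just examples:

```python
import boto3
from botocore.config import Config

# Allow up to 5 minutes for the function to return instead of the 60s default,
# and disable client-side retries so the Lambda is not invoked multiple times
lambda_config = Config(
    connect_timeout=10,
    read_timeout=300,
    retries={"max_attempts": 0},
)

client = boto3.client("lambda", config=lambda_config)
response = client.invoke(
    FunctionName="update-dynamo-values",  # hypothetical function name
    Payload=b"{}",
)
```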

RequestsDependencyWarning: urllib3 (1.26.18) or chardet (3.0.4) doesn't match a supported version!

I ran into this warning on a CentOS 8 server that has yet to be migrated to RHEL 8, after upgrading some packages via pip (the full warning text is in the title).

Turns out:

Module python3-requests is not compatible with the locally installed third-party module urllib3 version 1.26.18, and conflicts with the Red Hat provided python3-urllib3 version 1.24.2-5.el8.

I was able to get around this by upgrading urllib3 and requests:
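Something along these lines, assuming pip3 manages the affected site-packages:

```bash
# Upgrade both packages together so requests sees a urllib3 version it supports
sudo pip3 install --upgrade urllib3 requests
```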

Works Now:
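A quick hypothetical check that the warning is gone (not the original output):

```bash
python3 -c 'import requests, urllib3; print(requests.__version__, urllib3.__version__)'
```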

HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out

I recently had an issue where one of our EMR clusters failed to bootstrap its Python modules via pip. I checked the logs and saw the read timeout error shown in the title.

I did not want pip to die when it hit a timeout, and I also wanted it to retry on failure. By adding the flags below to my bootstrap.sh, I was able to lengthen pip's socket timeout and bump the retries up to 10. I have not seen the issue since applying the new settings.
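A sketch of the bootstrap.sh change; the package list is illustrative, but --timeout and --retries are standard pip options:

```bash
# Longer socket timeout (60s instead of the 15s default) and 10 retries
sudo pip3 install --timeout 60 --retries 10 boto3 psycopg2-binary
```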

From the PIP help page:
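```
--retries <retries>   Maximum number of retries each connection should attempt
                      (default 5 times).
--timeout <sec>       Set the socket timeout (default 15 seconds).
```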

Upgrade Rocky Linux 8 to 9 CLI

I thought I would share my version of how I updated the server that runs this blog from Rocky 8 to Rocky 9 without a clean install. I want to mention that this is a do-at-your-own-risk post; in-place upgrades are not officially supported.

!!!Do not attempt this if you do not have backups and a way to fully recover your system.!!!

The first step I took was to go to the Rocky download site and make sure I grabbed the latest GPG keys, release, and repos packages:

https://download.rockylinux.org/pub/rocky/9/BaseOS/x86_64/os/Packages/r/

You will need to modify the command below to match the versions you find at the above site; once that is done you can run it.
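A sketch of that command; the 9.x version strings are illustrative and must match what is currently listed under Packages/r/ at the link above:

```bash
# Install the Rocky 9 GPG keys, release, and repos packages directly from the mirror
sudo dnf install -y \
  https://download.rockylinux.org/pub/rocky/9/BaseOS/x86_64/os/Packages/r/rocky-gpg-keys-9.3-1.2.el9.noarch.rpm \
  https://download.rockylinux.org/pub/rocky/9/BaseOS/x86_64/os/Packages/r/rocky-release-9.3-1.2.el9.noarch.rpm \
  https://download.rockylinux.org/pub/rocky/9/BaseOS/x86_64/os/Packages/r/rocky-repos-9.3-1.2.el9.noarch.rpm
```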

One roadblock: dnf did not like that I still had the remi and epel release 8 packages installed, so I removed them and the upgrade went fine.

Find the epel and remi release rpms:
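```bash
rpm -qa | grep -Ei 'epel|remi'
```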

Remove them:
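The package names below are the usual ones; match them to whatever the query above returned:

```bash
sudo dnf remove -y epel-release remi-release
```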

Upgrade your system to 9 from 8:
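This mirrors the commonly shared in-place upgrade invocation:

```bash
sudo dnf -y --releasever=9 --allowerasing --setopt=deltarpm=false distro-sync
```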

I did hit one error during the sync that I ignored; it seemed to be just a GPG error.

Verify:
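```bash
cat /etc/rocky-release
# Rocky Linux release 9.x (Blue Onyx)
```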

Rebuild the RPM database so it uses SQLite, the format Rocky 9 expects:
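```bash
sudo rpm --rebuilddb
```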

That's it, reboot:
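```bash
sudo reboot
```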

I did have some issues with dnf where I needed to reset some modules.

I needed to reset the modules one by one; there may be more on your system:
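Module names vary by system; list the enabled modules and reset the ones dnf complains about:

```bash
# See which modules are enabled
sudo dnf module list --enabled

# Example module names; yours may differ
sudo dnf module reset -y python36
sudo dnf module reset -y perl
```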

That seemed to fix it. Good luck.

AttributeError: module 'cryptography.utils' has no attribute 'register_interface'

I recently came across an issue while bootstrapping one of our EMR clusters: importing pgpy failed with the AttributeError shown in the title.

Apparently the cryptography team released a new version on September 7th 2022 that broke the pgpy library.
https://pypi.org/project/cryptography/38.0.1/

We needed to downgrade our version to get things working again. I figured I would post this in case others run into it; according to the pgpy GitHub page, they are working on a fix.

https://github.com/SecurityInnovation/PGPy/issues/402

Here is how I solved it in the meantime; I needed to downgrade the cryptography library:
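register_interface was removed in cryptography 38.0.0, so pinning below 38 restores it:

```bash
sudo pip3 install 'cryptography<38'
```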

Python Linux Find Files With Pattern Accessed Older Than N Days And Remove

This is a neat utility to keep in your sysadmin bag of tricks. It recursively walks the directory you define, collects each file's access time into a list, and compares it against a days-ago command line parameter; if a file is older than N days, it removes it. What's really nice about this utility is the debug mode: you can see what would be deleted before you drop the debug flag and execute it for real. A sketch of the utility follows.
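This is a minimal sketch of the utility described above, not the original script; the argument names and the substring-based pattern match are illustrative:

```python
#!/usr/bin/env python3
"""Recursively find files accessed more than N days ago and remove them."""
import argparse
import os
import time

def find_old_files(root, days):
    """Walk root and collect paths whose last access time is older than the cutoff."""
    cutoff = time.time() - days * 86400
    old = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    old.append(path)
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return old

def main():
    parser = argparse.ArgumentParser(
        description="Remove files accessed more than N days ago")
    parser.add_argument("directory", help="directory to walk recursively")
    parser.add_argument("days", type=int, help="access-time age threshold in days")
    parser.add_argument("--pattern", default="",
                        help="only act on paths containing this substring")
    parser.add_argument("--debug", action="store_true",
                        help="print what would be removed instead of removing")
    args = parser.parse_args()

    for path in find_old_files(args.directory, args.days):
        if args.pattern and args.pattern not in path:
            continue
        if args.debug:
            print(f"would remove: {path}")
        else:
            os.remove(path)

if __name__ == "__main__":
    main()
```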