Recursively deleting all objects in a bucket and the bucket itself can be done with the following command.

aws s3 rb s3://<bucket_name> --force

If the bucket has versioning enabled, any object versions and delete markers will fail to delete, and the following message is returned.

remove_bucket failed: s3://<bucket_name> An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The bucket you tried to delete is not empty. You must delete all versions in the bucket.

The following set of commands deletes all objects, versions, delete markers, and the bucket. Read more...
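The post's full commands sit behind the Read more; as a minimal sketch (assuming the bucket holds fewer than 1000 versions and delete markers, beyond which list-object-versions must be paginated):

# Delete every object version, then every delete marker, then the empty bucket
aws s3api delete-objects --bucket <bucket_name> \
  --delete "$(aws s3api list-object-versions --bucket <bucket_name> \
  --query '{Objects: Versions[].{Key: Key, VersionId: VersionId}}' --output json)"
aws s3api delete-objects --bucket <bucket_name> \
  --delete "$(aws s3api list-object-versions --bucket <bucket_name> \
  --query '{Objects: DeleteMarkers[].{Key: Key, VersionId: VersionId}}' --output json)"
aws s3api delete-bucket --bucket <bucket_name>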

Grabbing just an IP address from a network interface can be useful for scripting. In the example below, the assumed interface is eth0.

ip a show eth0 | grep "inet " | cut -d' ' -f6 | cut -d/ -f1

You can then save this into a variable and use it in other commands.

local_ip=$(ip a show eth0 | grep "inet " | cut -d' ' -f6 | cut -d/ -f1)
python3 -m http.server 8000 --bind $local_ip
hugo server --bind $local_ip --baseURL=http://$local_ip

ffmpeg -i input.mkv -ss 00:00:03 -t 00:00:08 -async 1 output.mkv

-i Specify input filename
-ss Seek start position
-t Duration after start position
-async 1 Start of audio stream is synchronised

See these StackOverflow answers for a debate on various other methods of trimming a video with ffmpeg.
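One alternative from those debates is stream copying, which skips re-encoding entirely but can only cut on keyframes (a sketch, not the post's own method):

# Input seeking (-ss before -i) plus stream copy; fast but keyframe-accurate only
ffmpeg -ss 00:00:03 -i input.mkv -t 00:00:08 -c copy output.mkv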

Updating images for containers that are run through docker-compose is simple. Include the appropriate tags for the image value.

docker-compose pull
docker-compose up -d

You can then delete the old, now untagged image. The following command deletes all untagged images.

docker image prune

Rollback to a previous image
If you wish to roll back to the previous image, first tag the old image. Read more...
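As a minimal sketch of that rollback, assuming the old image is still present locally and the compose service runs a hypothetical myapp image:

# Tag the old image so it survives pruning, point the service's
# image: value at myapp:rollback, then recreate the container
docker tag <old_image_id> myapp:rollback
docker-compose up -d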

Install the NFS client package. For distros that use yum, install nfs-utils.

sudo apt install nfs-common

Manually mount the share in a directory. Replace the following with your own values:

server with your NFS server
/data with your exported directory
/mnt/data with your mount point

sudo mount -t nfs server:/data /mnt/data

To automatically mount the NFS share edit /etc/fstab with the following:

# <file system> <mount point> <type> <options> <dump> <pass>
server:/data /mnt/data nfs defaults 0 0

To reload fstab verbosely use the following command: Read more...
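One way to check the new entry mounts cleanly (an assumption, not necessarily the command behind the Read more):

# Mount everything listed in /etc/fstab, printing each action taken
sudo mount -a -v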

EBS sends events to CloudWatch when creating, deleting or attaching a volume, but not on detachment. However, CloudTrail is able to list detachments; the command below lists the last 25.

aws cloudtrail lookup-events \
  --max-results 25 \
  --lookup-attributes AttributeKey=EventName,AttributeValue=DetachVolume

Setting up notifications is then possible with CloudWatch alarms for CloudTrail. The steps are summarized below:

1. Ensure that a trail is created with a log group.
2. Create a metric filter with the Filter pattern { $.eventName = "DetachVolume" } in CloudWatch.
3. Create an alarm in CloudWatch with threshold 1 and the appropriate Action.
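As a sketch of step 2 from the CLI (the log group and metric names here are assumptions):

aws logs put-metric-filter \
  --log-group-name CloudTrail/DefaultLogGroup \
  --filter-name DetachVolumeFilter \
  --filter-pattern '{ $.eventName = "DetachVolume" }' \
  --metric-transformations metricName=DetachVolumeCount,metricNamespace=CloudTrailMetrics,metricValue=1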

Introduction
These are tar commands that I use often but need help remembering.

Contents
Introduction
Create an archive
Create a gzip compressed archive
Extract an archive
Extract a gzip compressed tar archive
List files in an archive
List files in a compressed archive
Extract a specific file from an archive

Create an archive
tar -cvf send.tar send/
-c Create an archive
-v Verbose
-f Specify filename

Create a gzip compressed archive
tar -czvf send.tar.gz send/
-c Create an archive
-z Compress archive with gzip
-v Verbose
-f Specify filename

Extract an archive
tar -xvf send.tar
-x Extract an archive
-v Verbose
-f Specify filename

Extract a gzip compressed tar archive
tar -xvzf send.tar.gz
-x Extract an archive
-v Verbose
-z Decompress using gzip
-f Specify filename

List files in an archive
tar -tvf send.tar
-t List contents
-v Verbose
-f Specify filename

List files in a compressed archive
tar -tzvf send.tar.gz
-t List contents
-z Decompress using gzip
-v Verbose
-f Specify filename

Extract a specific file from an archive
tar -xvf send.tar my_taxes.xlsx scan.pdf
-x Extract an archive
-v Verbose
-f Specify filename

:)
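One flag the list above doesn't cover: -C, which extracts into a different directory (the target must already exist).

# Extract into /tmp/extracted instead of the current directory
tar -xvf send.tar -C /tmp/extracted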

Squash commits already pushed to GitHub
Here I squash the last N commits by rebasing and force pushing to GitHub or another remote while avoiding this helpful but unwanted error.

To https://github.com/LameLemon/pepe-is-the-man.git
! [rejected] master -> master (non-fast-forward)
error: failed to push some refs to 'https://github.com/LameLemon/pepe-is-the-man.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

First, through interactive rebasing, set HEAD back to the number of commits you want to squash. If you wish to include the root commit, replace HEAD~3 with --root. Read more...
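A minimal sketch of the whole flow, assuming the last 3 commits on master and a remote named origin:

# Open an interactive rebase over the last 3 commits; mark all but the
# first as "squash" (or "fixup") in the editor
git rebase -i HEAD~3
# The branch has now diverged from the remote, so a plain push fails with
# the error above; force push instead
git push --force-with-lease origin master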

Web scraping is the act of extracting data from websites. This is a repository of scrapers I’ve built over the years for various websites. They download media files and other data. These scripts exist purely for educational purposes. The scrapers are built using Python 3 and BeautifulSoup, a library for parsing and navigating HTML and XML documents. The source code can be found on GitHub.

ffmpeg -y -i combined.mkv -vf fps=24,scale=1080:-1:flags=lanczos,palettegen palette.png
ffmpeg -i combined.mkv -i palette.png -filter_complex "fps=24,scale=1080:-1:flags=lanczos[x];[x][1:v]paletteuse" out.gif

-y Overwrite output without asking
-i Specify input filename
-vf Video filtergraph
-filter_complex Creates a complex filtergraph with inputs and/or outputs

Explanation and other resources:
How to make GIFS with FFMPEG (GIPHY Engineering)
High quality GIF with FFmpeg
“You can’t just code a gif”

Only applies if the video has an existing audio track.

ffmpeg -i video.mkv -i audio.mp3 -c:v copy -c:a aac -strict experimental -map 0:v:0 -map 1:a:0 output.mkv

-i Specify input filename
-c:v Encode all video streams
-c:a Encode all audio streams
-strict Specify how strictly to follow the standards
-map Designate one or more input streams as the source for the output file

ffmpeg -i input.mkv -vf reverse -af areverse output.mkv

-i Specify input filename
-vf Video filtergraph
-af Audio filtergraph

ffmpeg -i input.mkv -ss 00:00:03 -t 00:00:08 -async 1 output.mkv

-i Specify input filename
-ss Seek start position
-t Duration after start position
-async 1 Start of audio stream is synchronised

Resources
ffmpeg Documentation - Official FFmpeg documentation
ffmprovisor - A repository of useful FFmpeg commands for archivists

Introduction
These are a few git commands that I refer to from time to time when I need a reminder. This is by no means a comprehensive guide on how to use git.

Contents
Introduction
Add co-authors to a commit
Branching with Git
Checkout a remote branch
Create a branch from another branch
Change the last commit
Set the author
Set the date
Fetch a pull request
Squash commits already pushed to GitHub
Keep a fork up to date
Further reading

Add co-authors to a commit
This will allow you to give credit to more than one author for a commit. This works on GitHub only, and counts towards the contribution history of all authors. Read more...
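As a sketch of the trailer GitHub looks for (name and email are placeholders): each -m becomes its own paragraph, and the Co-authored-by line must sit in the last one.

git commit -m "Fix pagination bug" \
           -m "Co-authored-by: Jane Doe <jane@example.com>"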

Introduction
This is intended for users who already have a basic understanding of PRAW and want to switch from hard-coding the username and password into their code to using a refresh token. This was written using PRAW version 6.1.1 and Python 3.7.1. To let you in on a secret, I initially wrote this for myself so I don’t forget how I did it. Read more...

Police car chasing a red car while dodging obstacles
This is a web game containing four unique levels, all increasing in difficulty. The main aim of the game is to shoot down the enemy vehicles before they escape or shoot you down first. After the end of each level, the user’s progress is saved so the game can be continued at another time. Crime Hunter started out as a class group project where I was a developer and graphic designer. Read more...

Screen capture of downloading the top 5 new posts from r/DIY using GrabIt.
This is a Python program to traverse and download posts from subreddits and users on an incremental basis. It also records which posts have been saved so that only new submissions are downloaded. The program uses multiple methods of extracting data from links posted to Reddit. The Reddit API is used to get submissions from Reddit; the links are then parsed and forwarded to the appropriate extractor. Because the Reddit API limits listings to 1000 submissions, the Pushshift API is used for anything beyond that. Images from Imgur are downloaded preserving the original descriptions and titles. If a link is not supported by any built-in scraper, it is passed to the youtube_dl library, with its large number of information extractors focused on video streaming sites. Read more...

This page contains an archive of all posts.
