Recently, Simon Willison shared how he uses S3 event notifications with Lambda and DynamoDB to list recently uploaded files from an S3 bucket. My first thought was to use S3 Inventory, which provides a daily catalog of objects within a bucket, queryable through Athena. The second idea involved doing the same with the recently announced S3 Metadata feature. Both methods, I discovered, had already been suggested by others in the comments.
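For illustration, a minimal sketch of that first idea, querying an S3 Inventory table through Athena with boto3. The database, table, column, and results-bucket names are all hypothetical and depend on how the inventory is configured:

```python
import boto3

athena = boto3.client("athena")

# Column names like key and last_modified_date match optional fields
# an S3 Inventory configuration can include.
response = athena.start_query_execution(
    QueryString="""
        SELECT key, last_modified_date
        FROM bucket_inventory
        ORDER BY last_modified_date DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "s3_inventory_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])
```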
Read more...
Last month, AWS announced multi-session support natively in the AWS console; previously I’d relied on Firefox containers to separate logins between accounts. This can be easily enabled from the account drop-down in the top right of the console.
Multi-session support enabled, with the usual account ID and IAM role visible in the top right.
Once enabled, this feature supports up to five sessions simultaneously and allows you to switch between them from the account drop-down.
Read more...
Data sources are used to retrieve information from outside of Terraform, in this case the default VPC, subnet, security group, and internet gateway resources provisioned in a region within an AWS account. Each opted-in region of an AWS account comes with these default network resources, which can be used to provision resources within a default subnet, attach the default internet gateway or security group to provisioned resources, and more.
Retrieve the default VPC
The default VPC can be retrieved using the aws_vpc data source and the default argument.
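For context, a minimal boto3 sketch of the lookup the data source performs under the hood, filtering DescribeVpcs on the isDefault flag:

```python
import boto3

ec2 = boto3.client("ec2")

# The default VPC is the one flagged isDefault by the DescribeVpcs API,
# which is what the aws_vpc data source filters on when default = true.
vpcs = ec2.describe_vpcs(Filters=[{"Name": "isDefault", "Values": ["true"]}])
print(vpcs["Vpcs"][0]["VpcId"])
```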
Read more...
AWS IAM Identity Center is a service that allows you to manage and control user access to AWS accounts and applications. User identities and groups can be provisioned from an external Identity Provider, like Okta or Keycloak, or managed directly within IAM Identity Center.
Disclaimer: Do not take the information here as a good or best practice. The purpose of this site is to post my learnings in somewhat real-time.
Read more...
Grafana can use Amazon Athena as a data source, allowing you to run SQL queries to visualize data. The Athena table data types are conveniently inherited in Grafana for use in dashboard panels. If the data types in Athena are not exactly how you'd like them in Grafana, you can still apply conversion functions.
In this case, the timestamp column in Athena is formatted as a string, and I do not have the ability to adjust the table in Athena (which is normally what you'd want to do).
Read more...
Dynamically partitioning events on Amazon Data Firehose is possible using the jq 1.6 engine or a Lambda function for custom parsing. Using jq expressions through the console to partition events when configuring a Firehose stream is straightforward, provided you know the jq expression and the source event schema. I found it difficult to translate this configuration into a CloudFormation template after initially setting up the stream by clicking through the console.
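To sketch roughly how that console configuration maps onto the API (and therefore a template), here is the shape of the ExtendedS3DestinationConfiguration for create_delivery_stream. Only the parts relevant to dynamic partitioning are shown; the bucket, role, stream name, and customer_id key are hypothetical:

```python
import boto3

firehose = boto3.client("firehose")

extended_s3 = {
    "BucketARN": "arn:aws:s3:::my-destination-bucket",
    "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery",
    # Dynamic partitioning requires a buffer size of at least 64 MB.
    "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 60},
    "DynamicPartitioningConfiguration": {"Enabled": True},
    "ProcessingConfiguration": {
        "Enabled": True,
        "Processors": [{
            "Type": "MetadataExtraction",
            "Parameters": [
                # The jq expression that pulls the partition key from each event
                {"ParameterName": "MetadataExtractionQuery",
                 "ParameterValue": "{customer_id: .customer_id}"},
                {"ParameterName": "JsonParsingEngine", "ParameterValue": "JQ-1.6"},
            ],
        }],
    },
    # The extracted key is referenced via the partitionKeyFromQuery namespace
    "Prefix": "data/customer_id=!{partitionKeyFromQuery:customer_id}/",
    "ErrorOutputPrefix": "errors/",
}

firehose.create_delivery_stream(
    DeliveryStreamName="events-stream",
    ExtendedS3DestinationConfiguration=extended_s3,
)
```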
Read more...
Cross-account access to an S3 bucket is a well-documented setup. Most guides cover creating and applying a bucket policy to an S3 bucket, then creating a policy and role to access that bucket from another account. A user or service from that account can then assume the role, provided the role's trust relationship allows it, to access the S3 bucket via the CLI or API.
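A minimal boto3 sketch of that final step, assuming the bucket policy and role are already in place (the ARNs and bucket name are hypothetical):

```python
import boto3

# Assume the role in the account that owns the bucket.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/s3-cross-account-access",
    RoleSessionName="cross-account-s3",
)["Credentials"]

# Use the temporary credentials from the assumed role to access the bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
for obj in s3.list_objects_v2(Bucket="shared-bucket").get("Contents", []):
    print(obj["Key"])
```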
Read more...
Granting a user permission to access resources across AWS accounts is a common task: typically the account with the resources contains an IAM role with a policy defining the actions the user can perform. In addition, a trust policy specifies which principal can assume that role in order to perform the allowed actions. Sometimes an external ID is added to the trust policy to verify the party wanting to assume the role.
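A minimal sketch of what that looks like from the caller's side with boto3; the role ARN and external ID are hypothetical:

```python
import boto3

sts = boto3.client("sts")

# AssumeRole fails unless ExternalId matches the sts:ExternalId condition
# in the role's trust policy, e.g.:
#   "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}}
credentials = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/third-party-access",
    RoleSessionName="external-id-demo",
    ExternalId="example-external-id",
)["Credentials"]
print(credentials["Expiration"])
```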
Read more...
AWS allows you to enable server-side encryption (SSE) for your data at rest in your SQS queue. Disabling this option also affects your encryption in transit. From the SQS documentation:
All requests to queues with SSE enabled must use HTTPS and Signature Version 4.
In other words, disabling SSE means you can communicate with SQS without TLS.
I wrote a Python script to send a message to a queue using the HTTP API and botocore, without the higher-level abstractions of boto3.
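The script itself is behind the link, but a rough sketch of the approach looks something like this (the queue URL is hypothetical):

```python
import urllib.parse
import urllib.request

import botocore.session
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-east-1"
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

# The SQS query API takes a form-encoded POST to the queue URL.
body = urllib.parse.urlencode({
    "Action": "SendMessage",
    "MessageBody": "hello from botocore",
    "Version": "2012-11-05",
})

# Sign the request with Signature Version 4 using botocore's helpers.
request = AWSRequest(
    method="POST",
    url=queue_url,
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
credentials = botocore.session.get_session().get_credentials()
SigV4Auth(credentials, "sqs", region).add_auth(request)

# Send the signed request with plain urllib.
response = urllib.request.urlopen(urllib.request.Request(
    queue_url,
    data=body.encode(),
    headers=dict(request.headers.items()),
    method="POST",
))
print(response.status, response.read().decode())
```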
Read more...
This week I open-sourced a Terraform project I've been using for the past few months. The solution allows the user to schedule the start or stop of EC2 instances in a single AWS account. The schedule is defined through Terraform and realized as EventBridge Scheduler schedules. This post is a snapshot of how the solution looks at the time of publishing; an up-to-date and concise description can be found on its GitHub page.
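The Terraform is on GitHub, but for a sense of what it manages, here is a hedged boto3 sketch of the underlying EventBridge Scheduler call (the schedule name, role, and instance ID are hypothetical):

```python
import json

import boto3

scheduler = boto3.client("scheduler")

# Stop an instance every weekday evening using EventBridge Scheduler's
# universal target for the EC2 StopInstances API.
scheduler.create_schedule(
    Name="stop-dev-instance",
    ScheduleExpression="cron(0 19 ? * MON-FRI *)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:scheduler:::aws-sdk:ec2:stopInstances",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-ec2",
        "Input": json.dumps({"InstanceIds": ["i-0123456789abcdef0"]}),
    },
)
```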
Read more...
You can track changes to a tag through AWS CloudTrail, AWS Config, or Amazon CloudWatch Events. These methods have already been documented, but they're too slow to respond to changes, too expensive to run, not as extensible out of the box, or outdated. I haven't seen much coverage on doing this with Amazon EventBridge, which has many integration options, is low-latency, and is fairly low-cost (and in this case free). There is a page in the documentation titled Monitor tag changes with serverless workflows and Amazon EventBridge that covers just that; I'd recommend starting there.
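As a taste, a minimal boto3 sketch of an EventBridge rule matching tag-change events; the rule name is hypothetical:

```python
import json

import boto3

events = boto3.client("events")

# EventBridge emits "Tag Change on Resource" events from the aws.tag
# source when a tag on a supported resource is created, updated, or deleted.
events.put_rule(
    Name="tag-change-monitor",
    EventPattern=json.dumps({
        "source": ["aws.tag"],
        "detail-type": ["Tag Change on Resource"],
    }),
)
```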
Read more...
To show related content on this blog I use Hugo’s in-built functionality, which works surprisingly well with no setup. I did, however, want to test out creating text embeddings from my posts and rank them by similarity. Before you continue reading, the usual disclaimer:
Do not take the information here as good or best practice. The purpose of this entry is to post my learnings in somewhat real-time.
I will use Amazon Titan Embeddings G1 - Text, available through Amazon Bedrock, and SQLite to store the results.
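A minimal sketch of the idea, assuming hypothetical table and file names:

```python
import json
import sqlite3

import boto3

bedrock = boto3.client("bedrock-runtime")

def embed(text):
    # Titan Embeddings G1 - Text takes an inputText field and returns
    # an "embedding" vector in the response body.
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

# Store each post's embedding as a JSON string in SQLite.
conn = sqlite3.connect("embeddings.db")
conn.execute("CREATE TABLE IF NOT EXISTS embeddings (post TEXT, vector TEXT)")
conn.execute(
    "INSERT INTO embeddings VALUES (?, ?)",
    ("example-post", json.dumps(embed("Post content goes here"))),
)
conn.commit()
```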
Read more...
Update 2023-12-19: Got an update on the issue I raised; the AWS Backup access policy and IAM role issue has been resolved in Terraform AWS Provider version v5.30.0 via this Pull Request, thanks to @nam054 and @johnsonaj. The delay has now been added as part of the provider itself and I've confirmed it works! You can disregard the rest of this post, or continue reading if you're interested.
Read more...
Disclaimer: Do not take the information here as a good or best practice. The purpose of this site is to post my learnings in somewhat real-time.
AWS IAM Identity Center (previously and more commonly known as AWS SSO) allows you to control access to your AWS accounts through centrally managed identities. You can choose to manage these identities through IAM Identity Center, or through external Identity Providers (IdPs) such as Okta, Azure AD, and so on.
Read more...
The AWS API provides list-rules, which returns a list of all the rules but does not list their targets. It also provides list-targets-by-rule, which lists the targets associated with a specific rule. If you want to find all the rules with a specific target, in this case an event bus, you can join the two together.
No idea if this is acceptable practice, or if I’ll ever use this again, but I will unleash this string of commands and pipes to the world.
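For comparison, the same join sketched with boto3 instead of shell pipes (the event bus ARN is hypothetical):

```python
import boto3

events = boto3.client("events")
target_arn = "arn:aws:events:us-east-1:123456789012:event-bus/my-bus"

# Join list_rules with list_targets_by_rule: keep the rules that have
# a target pointing at the event bus.
paginator = events.get_paginator("list_rules")
for page in paginator.paginate():
    for rule in page["Rules"]:
        targets = events.list_targets_by_rule(Rule=rule["Name"])["Targets"]
        if any(t["Arn"] == target_arn for t in targets):
            print(rule["Name"])
```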
Read more...
This post details how to update a domain record entry on Linode based on the public IP of a machine running Linux. We will create a Python script and use the Linode API to accomplish this.
Create a personal token in Linode
From your Linode console, under My Profile > API Tokens, you can create a personal access token. The script only requires read/write access to the Domains scope. From here you can also set your desired expiration time.
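A rough sketch of what the script could look like, using only the standard library; the token, domain and record IDs, and the ipify service are assumptions:

```python
import json
import urllib.request

TOKEN = "LINODE_PERSONAL_ACCESS_TOKEN"  # hypothetical token
DOMAIN_ID, RECORD_ID = 12345, 67890     # hypothetical IDs

# Discover the machine's public IP (any similar lookup service works).
public_ip = urllib.request.urlopen("https://api.ipify.org").read().decode()

# Update the record's target via the Linode API v4.
request = urllib.request.Request(
    f"https://api.linode.com/v4/domains/{DOMAIN_ID}/records/{RECORD_ID}",
    data=json.dumps({"target": public_ip}).encode(),
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    method="PUT",
)
print(urllib.request.urlopen(request).read().decode())
```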
Read more...
Disclaimer: Do not take the information here as a good or best practice. The purpose of this site is to post my learnings in somewhat real-time.
Create an OIDC IdP on AWS
This needs to be done once per AWS account; it configures the trust between AWS and GitHub through OIDC.
Create an OpenID Connect identity provider for GitHub on AWS. From the IAM console, choose Identity providers and then Add provider.
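For reference, a hedged sketch of the same step via the API; the URL and audience are the standard GitHub Actions values, while the thumbprint below is a placeholder, not a real certificate hash:

```python
import boto3

iam = boto3.client("iam")

# Register GitHub's OIDC issuer as an identity provider in this account.
iam.create_open_id_connect_provider(
    Url="https://token.actions.githubusercontent.com",
    ClientIDList=["sts.amazonaws.com"],
    ThumbprintList=["0000000000000000000000000000000000000000"],  # placeholder
)
```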
Read more...
Recursively deleting all objects in a bucket and the bucket itself can be done with the following command.
aws s3 rb s3://<bucket_name> --force
If the bucket has versioning enabled, any object versions and delete markers will fail to delete, and the following message will be returned:
remove_bucket failed: s3://<bucket_name> An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
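One way around this is to delete every version and delete marker first; a minimal boto3 sketch, with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-versioned-bucket"

# Delete every object version and delete marker, page by page.
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket=bucket):
    objects = [
        {"Key": v["Key"], "VersionId": v["VersionId"]}
        for v in page.get("Versions", []) + page.get("DeleteMarkers", [])
    ]
    if objects:
        s3.delete_objects(Bucket=bucket, Delete={"Objects": objects})

# Now the empty bucket can be deleted.
s3.delete_bucket(Bucket=bucket)
```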
Read more...
EBS sends events to CloudWatch when creating, deleting, or attaching a volume, but not on detachment. However, CloudTrail is able to list detachments; the command below lists the last 25.
aws cloudtrail lookup-events \
  --max-results 25 \
  --lookup-attributes AttributeKey=EventName,AttributeValue=DetachVolume
Setting up notifications is then possible with CloudWatch alarms for CloudTrail. The steps are summarized below:
Ensure that a trail is created with a log group. Create a metric filter with the Filter pattern { $.
Read more...