Posts from 2018

Auto partition secondary EBS on CentOS 7

If you add an additional blank EBS volume to a CentOS 7 EC2 instance, it won't be auto-partitioned the way the primary volume is resized on launch. Additionally, if you're baking an image, you will almost certainly hit problems with the volume not automounting on different instance types, and even mounting it manually can produce misleading error messages.
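A minimal user-data sketch of one way to handle this, assuming the secondary volume appears as /dev/xvdb and should become a single XFS partition mounted at /data (the device name, filesystem and mount point are all assumptions; NVMe-based instance types expose the volume under a different device name, which is why the fstab entry below mounts by UUID):

    #!/bin/bash
    # Hypothetical device and mount point; adjust for your setup.
    DEVICE=/dev/xvdb
    MOUNTPOINT=/data

    # Create one partition spanning the whole disk, then format it.
    parted --script "$DEVICE" mklabel gpt mkpart primary xfs 0% 100%
    mkfs.xfs "${DEVICE}1"

    # Mount by UUID so the entry survives device renaming across
    # instance types, and use nofail so boot doesn't hang without it.
    mkdir -p "$MOUNTPOINT"
    UUID=$(blkid -s UUID -o value "${DEVICE}1")
    echo "UUID=${UUID} ${MOUNTPOINT} xfs defaults,nofail 0 2" >> /etc/fstab
    mount "$MOUNTPOINT"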

Using a S3 Hive Metastore with EMR

When configuring Hive to use EMRFS (i.e. s3://) instead of the cluster's implied HDFS storage for the metastore, which is vital if you want a persistent metastore that survives clusters being destroyed and recreated, you might encounter this message: Access Denied (Service: Amazon S3; Status Code: 403;...).
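As a sketch of the configuration involved, the warehouse directory can be pointed at S3 through a hive-site classification when the cluster is created. The bucket and cluster names below are placeholders, and the 403 typically means the cluster's EC2 instance role lacks read/write permissions on that bucket:

    # Hypothetical bucket name; the EMR instance role needs
    # s3:GetObject/PutObject/ListBucket on it, or Hive gets the 403.
    cat > hive-s3.json <<'EOF'
    [
      {
        "Classification": "hive-site",
        "Properties": {
          "hive.metastore.warehouse.dir": "s3://my-metastore-bucket/warehouse"
        }
      }
    ]
    EOF

    aws emr create-cluster \
      --name "hive-with-s3-warehouse" \
      --release-label emr-5.17.0 \
      --applications Name=Hive \
      --instance-type m4.large \
      --instance-count 3 \
      --use-default-roles \
      --configurations file://hive-s3.json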

Resizing root partition on CentOS 6 in the cloud

If you're unfortunate enough to still be using CentOS 6 in AWS, then you will more than likely be stumped as to why the root partition isn't resizing on launch the way it does on most other distros.
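A sketch of the usual manual fix, assuming the root disk is /dev/xvda with the root filesystem on partition 1 (both assumptions): CentOS 6 images generally lack the growpart hook that newer distros run at boot, so the partition has to be grown by hand, and the kernel only picks up the new partition table after a reboot.

    # growpart isn't on stock CentOS 6 images; it comes from the
    # cloud-utils-growpart package (available via EPEL).
    yum install -y cloud-utils-growpart

    # Grow partition 1 of /dev/xvda to fill the disk. The running
    # kernel can't re-read the table of the mounted root, so reboot.
    growpart /dev/xvda 1
    reboot

    # After the reboot, grow the ext4 filesystem into the new space.
    resize2fs /dev/xvda1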

Installing the new CloudWatch Agent for ECS

If you want to gather log files on your ECS hosts running an Amazon ECS-optimized AMI, there are instructions on how to install the previous CloudWatch Logs Agent using cloud-init and some user data. The old Logs Agent is still supported, but the new unified CloudWatch Agent is recommended: it is touted to be faster and, more importantly, allows easy collection of instance metrics. To install it, simply launch an EC2 instance using one of the Amazon ECS AMIs and put this in the user-data section (expand the advanced configuration of the launch wizard). Once the EC2 instance has started up, simply create an AMI from it, shut down the EC2 instance, and use the new AMI for your ECS clusters or your Batch Compute Environments. With a bit of modification, this script could be used to install and configure the agent on the launch of each ECS instance, but having a pre-baked AMI is a bit cleaner. This script also increases the default Docker container size to 100GB, using the ECS Age…
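A hedged sketch of user data along those lines, installing the unified agent from its published RPM and starting it with a minimal config (the log file path and log group name are placeholders, and the original post's Docker storage tweak is omitted here):

    #!/bin/bash
    # Install the unified CloudWatch Agent from its published RPM.
    yum install -y \
      https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm

    # Minimal agent config: ship one log file and one memory metric.
    cat > /opt/aws/amazon-cloudwatch-agent/etc/config.json <<'EOF'
    {
      "logs": {
        "logs_collected": {
          "files": {
            "collect_list": [
              {
                "file_path": "/var/log/ecs/ecs-agent.log",
                "log_group_name": "ecs-agent-logs"
              }
            ]
          }
        }
      },
      "metrics": {
        "metrics_collected": {
          "mem": { "measurement": ["mem_used_percent"] }
        }
      }
    }
    EOF

    # Load the config and start the agent.
    /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
      -a fetch-config -m ec2 \
      -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json -s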

Jenkins build and multi-environment deploys

In our team, we can't run continuous deployment into our testing environments. Like most enterprises with large/legacy back-end systems, we only have a few upstream instances running with populated data. That leaves us with a finite set of test environments: qa-1, qa-2, sit-1, sit-2, etc. QA would usually be stubbed out, so having one environment per feature branch wouldn't be a problem, but for SIT, the need for a consistent, known environment for integration testing makes this difficult. Previously, I used Bamboo, which had the concept of Releases, Environments and dedicated Deploy Jobs. These allowed builds to be promoted to releases (manually or automatically) and those releases deployed to specific environments using predefined deploy jobs, with only one release recorded as deployed to an environment at a time. The advantage was that our testers could easily see what was currently deployed and where, without having to dig through every job. Jenkins 2 was re…