Author: Petronald Green


How to deploy a container image from Gitlab to AWS Fargate – the important bits

Jumping straight into it. On the GitLab side I assume you already have your dockerized application and a basic .gitlab-ci.yml file.

We’re gonna want to build that image and push it to AWS ECR (Amazon Elastic Container Registry). In your GitLab CI file, insert the following:

aws-deploy:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    # Install the AWS CLI (and clean pip back out to keep the image small)
    - apk update && apk -Uuv add python py-pip &&
        pip install awscli && apk --purge -v del py-pip &&
        rm /var/cache/apk/*
    # Authenticate docker against ECR
    - $(aws ecr get-login --no-include-email --region us-east-1)
    - docker build --pull -t "$AWS_REGISTRY_IMAGE:dev" .
    - docker push "$AWS_REGISTRY_IMAGE:dev"
  only:
    - master

You’re gonna need to set a CI/CD environment variable (Settings > CI/CD > Variables) called AWS_REGISTRY_IMAGE with the URI of your ECR repository.
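For illustration, the values look something like this (the account ID, region, and repository name are placeholders; the AWS CLI in the job also needs credentials, so add those as variables too):

AWS_REGISTRY_IMAGE=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app
AWS_ACCESS_KEY_ID=<your access key id>
AWS_SECRET_ACCESS_KEY=<your secret key>
AWS_DEFAULT_REGION=us-east-1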

Great! We’re halfway there. Do a sample push and verify that your image ends up in ECR.

Now we want to deploy to Fargate on every push, assuming once again that you already have your ECS cluster set up. We’ll use a CodePipeline to detect pushes to ECR and deploy them to ECS.

Under AWS CodePipeline, start a new pipeline with Amazon ECR as the Source, select the appropriate image and tag, then hit Next.

OK, here’s the tricky part: to deploy to Fargate we need to add an imagedefinitions.json artifact alongside our image. This can be done automatically in the build step.

Build Provider > AWS Codebuild

Create a new project

Environment Image > Managed Image
Operating System > Ubuntu
Runtime > Standard

Down in the Buildspec section, select ‘Insert build commands’.

Then click ‘Switch to editor’ and enter the following:

version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.8
  post_build:
    commands:
      # Write the image definitions file that CodePipeline's ECS deploy stage expects
      - printf '[{"name":"my-fargate-container-name","imageUri":"%s"}]' MYAWSID.dkr.ecr.us-east-1.amazonaws.com/MYIMAGENAME:dev > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
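For reference, with the placeholders above the generated imagedefinitions.json contains exactly this; the name field must match the container name in your Fargate task definition:

[{"name":"my-fargate-container-name","imageUri":"MYAWSID.dkr.ecr.us-east-1.amazonaws.com/MYIMAGENAME:dev"}]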

Finally, Save and continue pipeline creation. In the Deploy stage select the correct Fargate cluster/service and DONE.

If you did all of that right, the next time you push to your master branch it’ll automatically get built and deployed to Fargate!!


Angular 8 + Ionic 4 Monorepo Part 1: The Setup

If you’re like me you created this super cool Angular web application https://kiwibudget.com/ (shameless promotion) and your users keep asking for a mobile app!

Since your (super cool) frontend is in Angular, the logical solution is to use Ionic to build the app, so you can reuse all this pre-written ng code.

I won’t go too deep into the details, but after trying to set things up with vanilla Angular workspaces (and failing spectacularly), then trying Angular + nrwl/nx (it works, but with so much hackery to get Ionic running that maintenance would raise technical debt to infinity), I found the gold that is xplat.

This very quickly written post is to document my setup with future blogs as app development progresses.

Disclaimer. This post mostly serves as a note to me of how to set things up, so it’s pretty rough.

Lines that begin with ‘?’ should not be typed; they are CLI prompts, with the chosen answer shown after the question.

Let’s set up our Nx workspace:

npm init nx-workspace myworkspace
? What to create in the new workspace      empty
? CLI to power the Nx workspace            Angular CLI
npm install -g @nrwl/cli
npm install --save-dev @nstudio/xplat
ng add @nstudio/xplat

Let’s make the web app (we call it ‘app’; xplat will prepend ‘web’, change this if you have other requirements).
(You can also configure the prefix, but you can look into that yourself.)

ng g app
? What name would you like for this app?  app
? What type of app would like to create?  web
? Which frontend framework should it use? angular
? Use xplat supporting architecture?      Yes

? In which directory should the app be generated? 
? Which stylesheet format would you like to use? SASS(.scss)  [ http://sass-lang.com   ]

Let’s make the mobile app (we call it ‘app’; xplat will prepend ‘ionic’, change this if you have other requirements).

ng g app
? What name would you like for this app?    app
? What type of app would like to create?    ionic
? Which frontend framework should it use?   angular
? Use xplat supporting architecture?        Yes

We want to use the ionic command to manage our app, so let’s hook that up.

Create a file ionic.config.json in the workspace root with the following:

{
  "name": "ionic-app",
  "integrations": {},
  "type": "angular",
  "root": "apps/ionic-app"
}

We can now run our Ionic app with ionic serve --project ionic-app

But wait, it still fails…

For some reason Capacitor is not installed automatically, but since we have hooked up the ionic command we can fix that with:

ionic integrations enable capacitor --project ionic-app

From here on you can start the apps with:

ng serve web-app
ionic serve --project ionic-app

Done!


PS. If you are a lazy dev like me and want to use the Ionic DevApp, you will also need to install the Cordova integration.

It’s not technically a smart idea but… thug life

ionic integrations enable cordova --project ionic-app

NodeFerret – Free Server and Uptime monitoring

Disclaimer
I am the creator of NodeFerret, as a result I will be biased towards it. With that said, let us begin.

So what is NodeFerret?
At a high level NodeFerret aims to be a Linux Server and Uptime monitoring tool. It continuously checks your website(s) and/or server(s) and notifies you (eMail, Slack, Webhook) if anything goes wrong.

You mentioned the word free… how much does it really cost?
Well, the ‘official’ pricing is on the website https://nodeferret.com/#pricing, but for the general hobbyist or indie developer it’s essentially free.

How can you sustain it while having it ‘essentially free’?
I initially created NodeFerret for myself. To monitor various sites and servers I have online.

After I was done I realized that I had so many extra resources that I could have 1000 more of me using the service and not have to pay an extra dollar… so here we are.

Why would I use NodeFerret over say, NewRelic or DataDog?
If you need the level of insight into your servers or applications that these providers provide then NodeFerret probably isn’t right for you.

We are focused on the average developer or sysadmin who wants something easy to use that ‘just works’, not the DevOps engineer with a fleet of containers and a full-time job caring for them.

Sounds interesting, where can I sign up?
Right over there –> https://console.nodeferret.com/register

What if I have problems?
You can leave a message on the nodeferret community forum.
Or email me directly at support[@]nodeferret.com

Final Words
I do hope you enjoy using NodeFerret as much as I enjoyed creating it. I am always adding new features and trying to make it a better tool for everyone.


Linux (Ubuntu) commands to memorize

This is just a personal reminder of commands that I use a lot that I should really memorize but haven’t gotten around to.

How to check what is running on a particular port

lsof -i :8000
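You may need sudo to see processes owned by other users. The output looks roughly like this (illustrative values):

COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
node    12345 ubuntu  21u  IPv4 123456      0t0  TCP *:8000 (LISTEN)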

How to fix Rancher 2.0 “503 Service Temporarily Unavailable”

Scenario

So you have your Rancher 2.0 Kubernetes cluster up, and you update one of your apps… Suddenly your site is down and you see the error

503 Service Temporarily Unavailable

Don’t freak out, there’s a simple solution here. More likely than not, you set up your ingress to target a backend instead of a service.

Updating your app can often rename the ‘backend’, causing your initial ingress to be at a loss about what to point to.

Solution

Edit your app, expose the port as a Cluster IP, then edit your ingress to point at that service and its port.
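In raw Kubernetes terms, the fixed ingress references a named Service rather than an auto-generated workload backend. A minimal sketch of what that looks like (API version matches the Rancher 2.0 era; the host, service name, and port are hypothetical):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: my.site.com
      http:
        paths:
          - backend:
              # Target the named ClusterIP service, not a generated workload backend
              serviceName: my-app-service
              servicePort: 80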


Rancher 2.0 etcd disaster recovery

This doc shows how to recover to a single-node etcd cluster after a 3, 5 or 7 node cluster has lost quorum.

Ideally with these sorts of failures you want to try your best to get the original etcd hosts back up.

This is also done at your own risk: I have no association with Rancher, nor am I a Rancher professional. It is also highly recommended to test this in a staging environment first. I will NOT be responsible for the loss of all your or your company’s data, which is exactly what will happen if this procedure fails.

With that out of the way; please read on.

This doc assumes you have
1. rancher_cli installed on your local machine
2. a working internet connection on the surviving etcd host

1. Log in to the surviving host

rancher context switch
rancher ssh <surviving_etcd>

At this point you may want to do a docker inspect etcd to ensure that the following two directories are bind-mounted:

...
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/etcd",
                "Destination": "/var/lib/rancher/etcd",
                "Mode": "z",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/kubernetes",
                "Destination": "/etc/kubernetes",
                "Mode": "z",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],

If you do not see the above… Stop.

2. Check the health of the cluster

docker exec -it etcd etcdctl member list
docker exec -it etcd etcdctl endpoint health

You should see an unhealthy cluster.

3. Take a snapshot of the cluster

This ensures that if for any reason this operation fails, you have not lost all your data. We will store our snapshot in the /etc/kubernetes dir, which is bind-mounted onto the same path on the host.

mkdir -p /etc/kubernetes/etcd-snapshots/etcd-$(date +%Y%m%d)
docker exec -it etcd etcdctl snapshot save /etc/kubernetes/etcd-snapshots/etcd-$(date +%Y%m%d)/snapshot.db
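Optionally, you can sanity-check the snapshot before going further; etcdctl’s snapshot status subcommand prints its hash, revision, and size:

docker exec -it etcd etcdctl snapshot status /etc/kubernetes/etcd-snapshots/etcd-$(date +%Y%m%d)/snapshot.db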

4. Get deploy command

Lavie (https://github.com/lavie/runlike) has this great tool which approximates the docker run command used to put up a container. We will use it to get our etcd configuration. Run the following:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike etcd

The output should be a pretty long docker run type string. Save it in a safe place for later.

5. Destroy/Rename the old etcd container

docker stop etcd
docker rename etcd etcd_old

6. Start the new etcd container

  1. Edit the --initial-cluster area of the command from step 4, leaving only the surviving container.
  2. Append --force-new-cluster at the end of the command

Use this new string to deploy a new container.
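The edited command will end up looking something like this truncated sketch; everything except the last two flags comes verbatim from your runlike output, and the host name and IP here are placeholders:

docker run --name=etcd \
  --volume=/var/lib/etcd:/var/lib/rancher/etcd:z \
  --volume=/etc/kubernetes:/etc/kubernetes:z \
  ... the rest of the flags and image from runlike ... \
  --initial-cluster=etcd-<surviving-host>=https://<surviving-ip>:2380 \
  --force-new-cluster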

7. Delete old nodes

In the Rancher UI you should now be able to access your cluster again. Delete the pools of the nodes that died. (This will take a while, as Rancher will redeploy etcd.)

You are now free to continue using your cluster or create new nodes to expand your etcd cluster

END


Extra

In case everything went to hell, we can use the snapshot taken in step 3…

docker exec -it etcd etcdctl snapshot restore /etc/kubernetes/etcd-snapshots/etcd-$(date +%Y%m%d)/snapshot.db --data-dir=/var/lib/rancher/etcd/snapshot

docker stop etcd
mv /var/lib/etcd/member /var/lib/etcd/member_old
mv /var/lib/etcd/snapshot/member /var/lib/etcd/member
rmdir /var/lib/etcd/snapshot
docker start etcd

The above restores the snapshot to /var/lib/rancher/etcd/snapshot inside the container, which is /var/lib/etcd/snapshot on the host (per the bind mount we checked in step 1).
We then stop etcd, archive the messed-up etcd data (member_old) and replace it with the restored data.


Fix: Ubuntu touchpad stops working on wakeup

Create a file in /lib/systemd/system-sleep

sudo vim /lib/systemd/system-sleep/fixmouse.sh

Create a script which will reload the psmouse kernel module on wake

#!/bin/sh
case $1/$2 in
  pre/*)
    echo "Going to $2..."
    # Place your pre suspend commands here, or `exit 0` if no pre suspend action required
    exit 0
    ;;
  post/*)
    echo "Waking up from $2..."
    # Place your post suspend (resume) commands here, or `exit 0` if no post suspend action required
    modprobe -r psmouse
    modprobe psmouse
    ;;
esac

Ensure the script is executable

sudo chmod 755 /lib/systemd/system-sleep/fixmouse.sh

Done. Try it out.
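You can trigger a suspend cycle straight from the terminal to test it (Ubuntu uses systemd, so this should work out of the box):

sudo systemctl suspend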


How to manually generate SSL certificates for Flynn applications

For the last few years the Flynn team has been working on getting us Let’s Encrypt integration. While I feel the functionality should be here soon, in the meantime we just have to make the requests ourselves.

Step 1. Using letsencrypt, perform a manual request

I’m currently using Ubuntu 18.04, so installing it is just a matter of:

sudo apt install certbot

I’m sure you can figure out how to get it installed if you’re running any other distro.

Now to make the manual request we do

sudo certbot certonly --manual --preferred-challenges dns

This will perform a DNS challenge, where we set the content of a TXT record in our zone file. In my opinion it is the easiest, but you also have http and tls-sni challenges as options.
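certbot will pause and ask you to create a TXT record before continuing. In your zone file it will look something like this; the _acme-challenge prefix is standard for dns-01 challenges, and certbot prints the exact value to use:

_acme-challenge.my.domain.com.  300  IN  TXT  "<value printed by certbot>"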

Step 2. Add to Flynn

A. If the route does not already exist in Flynn

sudo flynn -a **my-app-name** route add http \
  -c /etc/letsencrypt/live/**my.domain.com**/fullchain.pem \
  -k /etc/letsencrypt/live/**my.domain.com**/privkey.pem **my.domain.com**

This will add a new route and apply our certificate and key. We are done.

B. If the route already exists in Flynn

We get the appropriate route ID with

flynn -a **my-app-name** route

And we update with

sudo flynn -a **my-app-name** route update \
  **http/my-very-long-route-id-593375844** -s  http  \
  -c /etc/letsencrypt/live/**my.domain.com**/fullchain.pem \
  -k /etc/letsencrypt/live/**my.domain.com**/privkey.pem

Don’t forget to change:
1. Your app name (you can find it with flynn apps)
2. The route ID
3. The path for the cert
4. The path for the key

Done, you should now have HTTPS on your Flynn site.

Let me know if you have any questions.


How to add Flynn to your GitLab CI/CD workflow

Goal: Make GitLab deploy to Flynn

I will assume you already have your .yml set up to build your project, as I will only cover the deploy section.

You also need to have an app created on your Flynn server and any resources already created. If you need help doing this, there is great documentation on the official website https://flynn.io/docs/basics to get you initially set up.

Requirements

Either your Flynn cluster add string that you got on first install

flynn cluster add -p <tls pin> <cluster name> <controller domain> <controller key>

or the backup located in ~/.flynnrc

[[cluster]]
  Name = "default"
  Key = "347skdfh2389hskdfds"
  ControllerURL = "https://controller.my.flynn.site.com"
  TLSPin = "SLDFKSDF3E0Y3924Y23HJKLHDSFOE="
  DockerPushURL = ""

Step 1. Configure Environment Variables

It is highly recommended that you create environment variables in GitLab for the above variables (Settings > CI/CD > Variables). While you could hard-code them… please don’t.

In this tutorial I will be using the following env mapping.

FLYNN_TLS_PIN=SLDFKSDF3E0Y3924Y23HJKLHDSFOE=
FLYNN_CLUSTER_NAME=default
FLYNN_CONTROLLER_DOMAIN=my.flynn.site.com
FLYNN_CONTROLLER_KEY=347skdfh2389hskdfds

Note that the FLYNN_CONTROLLER_DOMAIN has the https://controller. part removed compared to the ~/.flynnrc file.

Step 2. Update gitlab-ci

In your .gitlab-ci.yml file, create a deploy job with the following:

...
staging:
  type: deploy
  script:
  # Install the flynn CLI
  - L=/usr/local/bin/flynn && curl -sSL -A "`uname -sp`" https://dl.flynn.io/cli | zcat >$L && chmod +x $L
  # Register the cluster using the variables from step 1
  - flynn cluster add -p $FLYNN_TLS_PIN $FLYNN_CLUSTER_NAME $FLYNN_CONTROLLER_DOMAIN $FLYNN_CONTROLLER_KEY
  # Add a git remote named "flynn" pointing at the app
  - flynn -a app-name-staging remote add
  # GitLab checks out a detached HEAD, so push it to master explicitly
  - git push flynn HEAD:master
  only:
  - master
...

Replace app-name-staging with the name of your app. You can find it with flynn apps.

Step 3. Commit and Push

At this point we are essentially done. All commits to master henceforth will be deployed to Flynn.

Issues? Let me know in the comments below.