Angular 8 + Ionic 4 Monorepo Part 1: The Setup

If you’re like me, you created this super cool Angular web application (shameless promotion) and your users keep asking for a mobile app!

Since your (super cool) frontend is in Angular, the logical solution is to use Ionic to build the app, since you have all this pre-written ng code that you want to reuse.

I won’t go too deep into the details, but after trying to set things up with vanilla Angular workspaces (and failing spectacularly), then trying Angular + nrwl/nx (it works, but with so much hackery to get Ionic working that maintenance would raise technical debt to infinity), I found the gold that is xplat.

This very quickly written post is to document my setup with future blogs as app development progresses.

Disclaimer: this post mostly serves as a note to myself on how to set things up, so it’s pretty rough.

Lines that begin with ‘?’ are CLI prompts and should not be typed.

Let’s set up our nx workspace:

npm init nx-workspace myworkspace
? What to create in the new workspace      empty
? CLI to power the Nx workspace            Angular CLI
npm install -g @nrwl/cli
npm install --save-dev @nstudio/xplat
ng add @nstudio/xplat

Let’s make the web app (we call it ‘app’; xplat will prepend ‘web’. Change this if you have other requirements.)
(You can also configure the prefix, but you can look into that yourself.)

ng g app
? What name would you like for this app?  app
? What type of app would you like to create?  web
? Which frontend framework should it use? angular
? Use xplat supporting architecture?      Yes

? In which directory should the app be generated?
? Which stylesheet format would you like to use? SASS(.scss)

Let’s make the mobile app (we call it ‘app’; xplat will prepend ‘ionic’. Change this if you have other requirements.)

ng g app
? What name would you like for this app?    app
? What type of app would you like to create?    ionic
? Which frontend framework should it use?   angular
? Use xplat supporting architecture?        Yes

We want to use the ionic command to manage our app, so let’s hook that up.

Create a file ionic.config.json with the following:

{
  "name": "ionic-app",
  "integrations": {},
  "type": "angular",
  "root": "apps/ionic-app"
}

We can now run our ionic app with ionic serve --project ionic-app

But wait, it still fails…

For some reason Capacitor is not installed automatically, but since we have set up the ionic command we can fix that with:

ionic integrations enable capacitor --project ionic-app

Right now you can start the apps with:

ng serve web-app
ionic serve --project ionic-app


PS. If you are a lazy dev like me and want to use the Ionic DevApp, you will also need to install the Cordova integration.

It’s not technically a smart idea but… thug life

ionic integrations enable cordova --project ionic-app

NodeFerret – Free Server and Uptime monitoring

I am the creator of NodeFerret, as a result I will be biased towards it. With that said, let us begin.

So what is NodeFerret?
At a high level NodeFerret aims to be a Linux Server and Uptime monitoring tool. It continuously checks your website(s) and/or server(s) and notifies you (eMail, Slack, Webhook) if anything goes wrong.

You mentioned the word free… how much does it really cost?
Well, the ‘official’ pricing is on the website, but for the general hobbyist or indie developer it’s essentially free.

How can you sustain it while having it ‘essentially free’?
I initially created NodeFerret for myself, to monitor the various sites and servers I have online.

After I was done, I realized I had so many spare resources that I could host 1,000 more users like me without paying an extra dollar… so here we are.

Why would I use NodeFerret over say, NewRelic or DataDog?
If you need the level of insight into your servers or applications that these providers offer, then NodeFerret probably isn’t right for you.

We are focused on the average developer or sysadmin who wants something easy to use that ‘just works’, not the DevOps engineer with a fleet of containers and a full-time job caring for them.

Sounds interesting, where can I sign up?
Right over there –>

What if I have problems?
You can leave a message on the nodeferret community forum.
Or email me directly at support[@]

Final Words
I do hope you enjoy using NodeFerret as much as I enjoyed creating it. I am always adding new features and trying to make it a better tool for everyone.


Linux (Ubuntu) commands to memorize

This is just a personal reminder of commands I use a lot and should really memorize, but haven’t gotten around to.

How to check what is running on a particular port

lsof -i :8000

How to fix Rancher 2.0 “503 Service Temporarily Unavailable”


So you have your Rancher 2.0 Kubernetes cluster up, and you update one of your apps… suddenly your site is down and you see the error:

503 Service Temporarily Unavailable

Don’t freak out, there’s a simple solution here. More likely than not, you set up your ingress to target a backend instead of a service.

Updating your app can often rename the ‘backend’, leaving your original ingress with nothing to point to.


Edit your app, expose the port as a Cluster IP, and then edit your ingress to target a service which points at the exposed port.
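In kubectl terms, the fix amounts to something like the sketch below: an Ingress whose backend is a named Service (stable across app updates) rather than an auto-generated workload backend. The names and host are placeholders, and the Rancher UI generates the equivalent for you.

```shell
# Hypothetical sketch only: an Ingress pointing at a named Service.
# "my-app" and "example.com" are placeholders.
ingress_yaml=$(cat <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app   # the service exposing your Cluster IP port
          servicePort: 80
EOF
)
echo "$ingress_yaml"
```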


Rancher 2.0 etcd disaster recovery

This doc shows how to restore to a single-node etcd cluster after a 3-, 5- or 7-node cluster has lost quorum.

Ideally with these sorts of failures you want to try your best to get the original etcd hosts back up.

This is also done at your own risk, I have no association with Rancher nor am I a Rancher professional. It is also highly recommended to test this in a staging environment first. I will NOT be responsible for the loss of all your or your company’s data; which is exactly what will happen if this procedure fails.

With that out of the way; please read on.

This doc assumes you have:
1. the Rancher CLI installed on your local machine
2. a working internet connection on the surviving etcd host

1. Log in to the surviving host

rancher context switch
rancher ssh <surviving_etcd>

At this point you may want to do a docker inspect etcd to ensure that the following two directories are bind-mounted:

        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/etcd",
                "Destination": "/var/lib/rancher/etcd",
                "Mode": "z",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/kubernetes",
                "Destination": "/etc/kubernetes",
                "Mode": "z",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
If you do not see the above, stop.

2. Check the health of the cluster

docker exec -it etcd etcdctl member list
docker exec -it etcd etcdctl endpoint health

You should see that the cluster is unhealthy.

3. Take a snapshot of the cluster

This ensures that if this operation fails for any reason, you have not lost all your data. We will store our snapshot in the /etc/kubernetes dir, which is bind-mounted onto the same path on the host.

mkdir -p /etc/kubernetes/etcd-snapshots/etcd-$(date +%Y%m%d)
docker exec -it etcd etcdctl snapshot save /etc/kubernetes/etcd-snapshots/etcd-$(date +%Y%m%d)/snapshot.db
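A quick aside on the $(date +%Y%m%d) in that path: it expands to the current date as an eight-digit stamp, so each day's snapshot lands in its own directory.

```shell
# $(date +%Y%m%d) expands to e.g. 20190614, grouping snapshots per day.
snapshot_dir="/etc/kubernetes/etcd-snapshots/etcd-$(date +%Y%m%d)"
echo "$snapshot_dir"
```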

4. Get deploy command

Assaf Lavie has this great tool, runlike, which approximates the docker run command used to start a container. We will use it to recover our etcd configuration. Run the following:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike etcd

The output should be a pretty long docker run string. Save it somewhere safe for later.

5. Destroy/Rename the old etcd container

docker stop etcd
docker rename etcd etcd_old

6. Start the new etcd container

  1. Edit the --initial-cluster section of the command from step 4, leaving only the surviving member.
  2. Append --force-new-cluster to the end of the command.

Use this new string to deploy a new container.
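Those two edits can also be scripted. Here is a rough sketch against a made-up runlike output; the member names, IPs and image are placeholders, and you should adjust the sed pattern to your actual command.

```shell
# Made-up example of a runlike output fragment (placeholders throughout):
RUN_CMD='docker run -d --name etcd --initial-cluster etcd-a=https://10.0.0.1:2380,etcd-b=https://10.0.0.2:2380 rancher/coreos-etcd'
# 1. Keep only the surviving member in --initial-cluster
# 2. Append --force-new-cluster
NEW_CMD="$(echo "$RUN_CMD" | sed 's|--initial-cluster [^ ]*|--initial-cluster etcd-a=https://10.0.0.1:2380|') --force-new-cluster"
echo "$NEW_CMD"
```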

7. Delete old nodes

In the Rancher UI you should now be able to access your cluster again. Delete the pools of the nodes that died. (This will take a while, as Rancher will redeploy etcd.)

You are now free to continue using your cluster, or to create new nodes to expand your etcd cluster.



In case everything went to hell, we can use the snapshot taken in step 3:

docker exec -it etcd etcdctl snapshot restore /etc/kubernetes/etcd-snapshots/etcd-$(date +%Y%m%d)/snapshot.db --data-dir=/var/lib/rancher/etcd/snapshot

docker stop etcd
mv /var/lib/etcd/member /var/lib/etcd/member_old
mv /var/lib/etcd/snapshot/member /var/lib/etcd/member
rmdir /var/lib/etcd/snapshot
docker start etcd

The above restores the snapshot to /var/lib/rancher/etcd/snapshot.
We then stop etcd, archive the broken etcd data as member_old, and replace it with the restored data.
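If the sequence of moves is hard to follow, here it is in miniature against a throwaway temp directory standing in for /var/lib/etcd:

```shell
d=$(mktemp -d)
mkdir -p "$d/member" "$d/snapshot/member"   # stand-ins for the real dirs
mv "$d/member" "$d/member_old"              # archive the broken data
mv "$d/snapshot/member" "$d/member"         # promote the restored data
rmdir "$d/snapshot"                         # snapshot dir is now empty
ls "$d"
```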


Fix: Ubuntu touchpad stops working on wakeup

Create a file in /lib/systemd/system-sleep:

sudo vim /lib/systemd/system-sleep/

Create a script which will reload the psmouse kernel module on wake:

#!/bin/sh
case $1/$2 in
  pre/*)
    echo "Going to $2..."
    # Place your pre-suspend commands here, or `exit 0` if no pre-suspend action is required
    exit 0
    ;;
  post/*)
    echo "Waking up from $2..."
    # Place your post-suspend (resume) commands here
    modprobe -r psmouse
    modprobe psmouse
    ;;
esac

Ensure the script is executable

chmod 755 /lib/systemd/system-sleep/

Done. Try it out.
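For reference, systemd runs every script in that directory as `script pre <action>` before sleeping and `script post <action>` on wake, which is exactly what the case statement dispatches on. A quick illustration of the dispatch logic (no modprobe):

```shell
# Mimics how a system-sleep script sees its arguments.
dispatch() {
  case $1/$2 in
    pre/*)  echo "pre-hook before $2" ;;
    post/*) echo "post-hook after $2" ;;
  esac
}
dispatch pre suspend    # what systemd calls before sleeping
dispatch post suspend   # what it calls on wake
```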

How to manually generate SSL certificates for Flynn applications

For the last few years the Flynn team has been working on Let’s Encrypt integration. While I feel the functionality should be here soon, in the meantime we just have to make the requests ourselves.

Step 1. Using letsencrypt, perform a manual request

I’m currently using Ubuntu 18.04, so installing it is just a matter of:

sudo apt install certbot

I’m sure you can figure out how to get it installed if you’re running any other distro.

Now, to make the manual request we do:

sudo certbot certonly --manual --preferred-challenges dns

This performs a DNS challenge, where we set the content of a TXT record in our zone file. In my opinion it is the easiest, but you also have the options of http and tls-sni.
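For reference, the TXT record certbot asks you to create lives at the _acme-challenge. prefix of your domain (example.com below is a placeholder):

```shell
# The DNS challenge record name for a given domain (placeholder domain).
domain="example.com"
record="_acme-challenge.${domain}"
echo "$record"
```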

Step 2. Add to Flynn

A. If the route does not already exist in Flynn

sudo flynn -a **my-app-name** route add http \
  -c /etc/letsencrypt/live/****/fullchain.pem \
  -k /etc/letsencrypt/live/****/privkey.pem ****

This will add a new route and apply our certificate and key. We are done.

B. If the route already exists in Flynn

We get the appropriate route ID with:

flynn -a **my-app-name** route

And we update it with:

sudo flynn -a **my-app-name** route update \
  **http/my-very-long-route-id-593375844** -s  http  \
  -c /etc/letsencrypt/live/****/fullchain.pem \
  -k /etc/letsencrypt/live/****/privkey.pem

Don’t forget to change:
1. Your app name (find it with flynn apps)
2. The route ID
3. The path for the cert
4. The path for the key

Done, you should now have https on your Flynn site.

Let me know if you have any questions

How to add Flynn to your GitLab CI/CD workflow

Goal: Make GitLab deploy to Flynn

I will assume you already have your .gitlab-ci.yml set up to build your project, as I will only cover the deploy section.

You also need to have an app created on your Flynn server, with any resources already created. If you need help doing this, there is great documentation on the official website to get you initially set up.


You will also need your cluster credentials: either the flynn cluster add string that you got on first install

flynn cluster add -p <tls pin> <cluster name> <controller domain> <controller key>

or the backup located in ~/.flynnrc:

[[cluster]]
  Name = "default"
  Key = "347skdfh2389hskdfds"
  ControllerURL = ""
  DockerPushURL = ""

Step 1. Configure Environment Variables

It is highly recommended that you create environment variables in GitLab for the above variables (Settings > CI/CD > Variables). While you could hard-code them… please don’t.

In this tutorial I will be using the following env mapping.


Note that `FLYNN_CONTROLLER_DOMAIN` has the `https://controller.` part removed compared to the `~/.flynnrc` file.
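If you want to derive it mechanically, strip the scheme and controller prefix from the ControllerURL (example.com is a placeholder):

```shell
controller_url="https://controller.example.com"   # ControllerURL from ~/.flynnrc (placeholder)
flynn_controller_domain=$(echo "$controller_url" | sed 's|^https://controller\.||')
echo "$flynn_controller_domain"
```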

Step 2. Update .gitlab-ci.yml

In your .gitlab-ci.yml file, create a deploy job with the following:

deploy:
  type: deploy
  script:
    - L=/usr/local/bin/flynn && curl -sSL -A "`uname -sp`" | zcat >$L && chmod +x $L
    - flynn -a app-name-staging remote add
    - git push flynn master
  only:
    - master

Replace app-name-staging with the name of your app. You can find it with flynn apps.

Step 3. Commit and Push

At this point we are essentially done. All commits henceforth will be pushed to Flynn.

Issues? Let me know in the comments below.

How to deploy an Angular 6 project to Heroku

As I had difficulty finding a reliable source online, I decided to create my own.

Here is how you deploy an Angular 6 app to Heroku

Step 1

You are going to need something to serve your files; let’s go with Express. We will also need path to set up our server (unless you want to hardcode those paths yourself).

npm install --save express path

Step 2.

Now, if we want Heroku to build our project on their servers, we need to tell them two things:
1. How to build our project, and
2. What versions of node/npm our code works with

You can do this by putting the following in package.json:

  "scripts": {
    "postinstall": "ng build --prod"
  },
  "engines": {
    "node": "8.11.3",
    "npm": "6.1.0"
  }

Remember to replace the versions of node and npm with the ones you have.
You can find them with:

node --version
npm --version
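One gotcha: node --version prints a leading v (e.g. v8.11.3), which must not appear in the engines block. Strip it like so:

```shell
# node --version prints e.g. "v8.11.3"; package.json "engines" wants "8.11.3"
stripped=$(echo "v8.11.3" | sed 's/^v//')
echo "$stripped"
```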

Step 3

By default, Angular keeps what it considers ‘development-only’ additions out of deployments. However, since Heroku is building our code, we need to give it the ability to run those modules.

To do this you can either move @angular/cli, @angular/compiler-cli, typescript and "@angular-devkit/build-angular": "~0.6.8" from devDependencies over to dependencies, or have Heroku install those modules on its own.

I personally prefer the first, as it allows you to specify versions. However, if you wish to do the latter, place the following right under postinstall:

    "preinstall": "npm install -g @angular/cli @angular/compiler-cli typescript",

Step 4

Create our server file. In your main application directory (the one with package.json), create a file called server.js and add the following:

const path = require('path');
const express = require('express');
const app = express();

// Serve static files
app.use(express.static(__dirname + '/dist/MY_APP_NAME'));

// Send all requests to index.html
app.get('/*', function(req, res) {
  res.sendFile(path.join(__dirname + '/dist/MY_APP_NAME/index.html'));
});

// default Heroku port
app.listen(process.env.PORT || 5000);

Remember to replace MY_APP_NAME (both occurrences) with the name of your app.

Step 5

Now to create a Procfile to tell Heroku “how” we wish our app to be run. In your project directory (the same one with package.json), create a file called Procfile containing the following:

web: node server.js

Step 6. Final Step

We can now build our app with npm install and run it with node server.js.
If everything works, you should see a working site at http://localhost:5000.

If you have any issues feel free to leave a message in the comments.

To push to Heroku, assuming you have the CLI installed:

heroku create
git add .
git commit -m "initial heroku deploy"
git push heroku master

Done. You should now see a deploy link. Open it up and you should see your site.

Hope that helped. If you have any issues, try typing heroku local on your machine for some more insight.




Still here?

Fine, fine… you’ve convinced me, here’s a bonus.
Seeing as Heroku employs magical unicorns and https-all-the-things, you can add this to your server.js to perform a protocol redirect.

// Heroku automagically gives us SSL
// Lets write some middleware to redirect us
let env = process.env.NODE_ENV || 'development';

let forceSSL = (req, res, next) => {
  if (req.headers['x-forwarded-proto'] !== 'https') {
    return res.redirect(['https://', req.get('Host'), req.url].join(''));
  }
  return next();
};

if (env === 'production') {
  app.use(forceSSL);
}

Be sure to place it directly under const app = express(), otherwise urls like index.html will not redirect.