DEVGET.NET Blog
Meet MutexBot, an alternative to Yoink Bot
Yoink is an excellent shared resource manager in Slack, but with the news that it was going paid in 2022 at a price that was entirely too expensive, I figured, “why not write my own?”
So I did… meet https://mutexbot.com/, a Slack shared resource manager to help you manage your software licenses, shared accounts, dev environments, etc.
The syntax is pretty straightforward
/mutex create [resource_name]
/mutex reserve [resource_name]
/mutex reserve [resource_name] [duration]
/mutex reserve [resource_name] [duration] "[note]"
/mutex release [resource_name]
/mutex delete [resource_name]
/mutex list
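As a quick, made-up example (the resource name, duration format, and note here are all hypothetical), reserving a shared staging database might look like:
/mutex create staging-db
/mutex reserve staging-db 2h "running migration tests"
/mutex release staging-db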
I hope everyone who finds it finds it useful. I’m also interested in knowing what you use it for! Leave a comment down below with your MutexBot use case 🙂
How to check my SSL Certificate Expiration Date
The easiest way to check the expiration or validity of your SSL Certificate is to use an online tool such as UptimeToolbox’s SSL Expiration Checker.
To use it you just need to enter the URL or domain name into the form, or, for the URL hackers out there, you can use the format https://app.uptimetoolbox.com/tools/ssl-checker/?domain=blog.devget.net, replacing the ‘blog.devget.net’ bit with your own domain.
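If you’d rather check from a terminal, an openssl one-liner (a local alternative, not part of UptimeToolbox) prints the certificate’s expiry date:
# Print the notAfter (expiry) date of a domain's certificate
# (swap blog.devget.net for your own domain)
echo | openssl s_client -servername blog.devget.net -connect blog.devget.net:443 2>/dev/null \
  | openssl x509 -noout -enddate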
How to get Kubernetes CPU allocation for all pods
With the average person now using Kubernetes to deploy workloads, it’s likely you’ll hit a CPU allocation wall pretty early. To help debug and optimize your limits, the following lists the CPU requests for each pod.
Credit to abelal83 for the solution.
kubectl get po --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}"
In my usage, however, I’ve found the need to see not only the CPU requests but also the memory requests, as well as the CPU/memory limits. To do so I’ve tweaked the above into the following:
kubectl get po --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]} {.name}:{'limits.cpu'}:{.resources.limits.cpu}{'\n'} {.name}:{'limits.memory'}:{.resources.limits.memory}{'\n'} {.name}:{'requests.cpu'}:{.resources.requests.cpu}{'\n'} {.name}:{'requests.memory'}:{.resources.requests.memory}{'\n'}{end}{'\n'}{end}"
The output of the above looks like this:
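(The pod and container names below are made up; the shape follows directly from the jsonpath template above.)
default:my-app-5d9c7b6f4-abcde
 app:limits.cpu:500m
 app:limits.memory:512Mi
 app:requests.cpu:250m
 app:requests.memory:256Mi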
Monitor your Kubernetes pod cpu and memory usage
watch -n 10 "kubectl top pod --all-namespaces | sort -r -k 3 -n"
Breakdown
watch -n 10
– run every 10s
kubectl top pod --all-namespaces
– Get the mem and cpu usage of all pods across all namespaces
sort -r -k 3 -n"
– Sort in Reverse order on Key 3 and treat as a Number
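As an aside, newer versions of kubectl can do the sort themselves; assuming your kubectl supports the --sort-by flag, this is equivalent without the sort pipeline:
# Let kubectl sort by CPU (descending) instead of piping through sort
watch -n 10 "kubectl top pod --all-namespaces --sort-by=cpu"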
How to install Ubuntu Yaru theme in Pop!_OS 20.04
TL;DR
sudo apt install yaru-theme-gnome-shell yaru-theme-gtk yaru-theme-icon yaru-theme-sound
Long Version
To search for a package in apt you can use the command apt search <searchterm>. Searching for ‘yaru’ turns up the Yaru theme packages installed in the TL;DR above.
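Installing the packages doesn’t switch the theme by itself. A minimal sketch for applying it with gsettings (assuming you’d rather not open GNOME Tweaks; the ‘Yaru’ theme names come from the packages above):
# Apply the GTK and icon themes for the current user
gsettings set org.gnome.desktop.interface gtk-theme 'Yaru'
gsettings set org.gnome.desktop.interface icon-theme 'Yaru'
# The GNOME Shell theme additionally needs the User Themes extension (or GNOME Tweaks)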
AWS Pricing Calculator
Started working on a pricing calculator for AWS services. Goal was to make it comprehensive while being easy to use.
Initially I wanted to just create something like a Fargate Pricing Calculator but then realized I had a framework to do much more.
So here we are, hope you enjoy using it!
How to deploy a container image from Gitlab to AWS Fargate – the important bits
Jumping straight into it. On GitLab, I assume you already have your dockerized application and a basic .gitlab-ci.yml file.
We’re gonna want to build that image and push it to AWS ECR (Amazon Elastic Container Registry). In your GitLab CI file, insert the following:
aws-deploy:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - apk update && apk -Uuv add python py-pip &&
      pip install awscli && apk --purge -v del py-pip &&
      rm /var/cache/apk/*
    - $(aws ecr get-login --no-include-email --region us-east-1)
    - docker build --pull -t "$AWS_REGISTRY_IMAGE:dev" .
    - docker push "$AWS_REGISTRY_IMAGE:dev"
  only:
    - master
You’re gonna need to set a build-time environment variable called AWS_REGISTRY_IMAGE with the URI of your ECR repository. The aws commands above also need credentials with ECR push access, typically set as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables.
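For illustration (the account ID and repository name here are placeholders), the variables would look something like this:
AWS_REGISTRY_IMAGE=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app
AWS_ACCESS_KEY_ID=AKIA...           # an IAM user with ECR push access
AWS_SECRET_ACCESS_KEY=<that user's secret key>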
Great! We’re halfway there. Do a sample push and verify that your image ends up in ECR.
Now we want to deploy to Fargate on every push, so… assuming once again you’ve already got your ECS cluster set up, we’ll use a CodePipeline to detect pushes and deploy them to ECS.
Under AWS CodePipeline, start a new pipeline. For the Source, pick Amazon ECR and select the appropriate image and tag. Next.
OK, here’s the tricky part: to deploy to Fargate we need an imagedefinitions.json artifact alongside our image, and this can be generated automatically in the build step.
Build Provider > AWS Codebuild
Create a new project
Environment Image > Managed Image
Operating System > Ubuntu
Runtime > Standard
Down to the Buildspec section, select ‘Insert Build Commands’
Then click ‘Switch to editor’ and enter the following:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
  post_build:
    commands:
      - printf '[{"name":"my-fargate-container-name","imageUri":"%s"}]' MYAWSID.dkr.ecr.us-east-1.amazonaws.com/MYIMAGENAME:dev > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
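For reference, with the placeholder names above the generated imagedefinitions.json would contain:
[{"name":"my-fargate-container-name","imageUri":"MYAWSID.dkr.ecr.us-east-1.amazonaws.com/MYIMAGENAME:dev"}]
CodePipeline uses the name field to match the container in your ECS task definition, so it has to match your Fargate container’s name exactly.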
Finally, save and continue pipeline creation. In the Deploy stage, select the correct Fargate cluster/service, and DONE.
If you did all of that right, the next time you push to your master branch it’ll automatically get built and deployed to Fargate!!