
Tuesday, September 1, 2020

Linux Terminal Goods V

It's time for a new batch of kick-ass console applications to boost your productivity. These tools can be useful for engineering productivity on your local workstation/laptop, for remote pair programming, and also in the cloud when you are profiling or debugging something. If you have not seen the other 4 posts, please check them out here: I, II, III, IV. So, like Bruce Buffer would say: Iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiit's time! Linux Terminal Goods V!


Diskonaut

Kubernetes and Docker are great; however, from time to time they fill up your 500GB HDD. In order to save space, please don't delete your Steam games. Use diskonaut to figure out which files are big and delete them. Pretty sweet and easy.

Httpstat

Latency can be a bitch sometimes. Httpstat helps you figure it out. :-)

Delta

This is not an airline. However, delta helps you figure out diffs on the fly. There is even a GitHub-like theme, so you will even forget that you are in a terminal.

Webify

Sometimes you want to test things quick and dirty, other times you just want to have fun. Mostly if you just want to have fun, check out webify. It allows any BASH command or script to be turned into a web server. So let's pretend it's secure and just have a good time with Webify.

Ranger

ls can be deprecated now. There is something much cooler, with a badass name: Ranger. Ranger allows you to navigate through the file system via the terminal and open vim at the end.

That's it for today! 

Cheers,

Diego Pacheco

Saturday, May 2, 2020

timetracking-rs: Tracking Hours in Rust

I have worked with technology for a long time. I created a Python script 10 years ago to manage my working hours for control, observability, and backpressure purposes. Last Friday night (what a nerd thing to do, I know), I re-wrote this script as a Rust program. I always like to build my own tools, for several reasons: to make my work more productive, or just because I want an excuse to do something useful for me in a language that I like. It took me about 4 hours to figure out how to do this in Rust. There were 2 basic challenges: Strings (oh, how I hate Strings in Rust) and working effectively with date/time math operations. I spent 2 hours figuring it out and making it work, and 2 more hours refactoring the code to make it better. Overall, coding in Rust is pretty productive and fun; however, Strings are a pain in the ass. I'm now using this program every day (I decommissioned my old Python script), so this might be useful for you too. Let's get started.



Show me the code



So we have 4 Rust files here: main, time_tracking, time_utils, and model. In main we have the main application, which reads the configs from the arguments vector and parses them into a model::TimeTrackingData struct.

The model file has structs for the config and for the time tracking math operations. time_utils uses chrono and bdays in order to do the date/time math the program needs. time_tracking does pretty much the report building and calls the hours calculations.

The complete code is here: https://github.com/diegopacheco/timetracking.rs

Video


timetracking-rs from Diego Pacheco on Vimeo.


Cheers,
Diego Pacheco

Wednesday, September 18, 2019

Linux Terminal Goods

Some time ago I blogged about some cool terminal plugins I was using on my Linux notebook, and also some awesome retro-emulator terminals just for fun :D. Folks often ask me about the plugins I still use, so I decided to share some awesome, productive tools. Most of the tools I use are individual and isolated, but the auto-suggestions one is based on ZSH, so it won't work out of the box if you are using bash; the other "binaries" will work just fine since they are not attached to ZSH. IMHO, if you use bash you should give ZSH a shot, because it is amazing and has an active community with so many cool and productive add-ons.




ZSH Auto-Suggestions


This ZSH plugin allows you to have auto-complete in your terminal. Based on the commands you have already typed, it shows a gray (background) suggestion, and if you want to take it, you just press the right arrow and it auto-completes for you. This is so, so productive.

Space Vim


SpaceVim is vim on steroids. It's dark themed and super awesome.


There are so many cool and awesome features. My favorite is the tree view (just press F3). There are also TABS; you can navigate between tabs with CTRL + PgUp or CTRL + PgDn.

fzf


fzf is a fuzzy text finder, super cool and useful to find files on your Linux box. It has auto-complete, and we can trigger it to open the result in vim by doing: vim $(fzf --height 40%)

It's also possible to get a file preview like this: fzf --preview 'cat {}'



bat

Bat is not cat. Bat is cat on steroids. There are line numbers, syntax highlighting and much more.



Hope you guys like it. Have fun.

Cheers,
Diego Pacheco


Tuesday, June 19, 2018

github-fetcher: Checking new Repos from Github with Go

Go is a very interesting language. Github is already the central hub for open source on the Internet. Companies have lots of repositories, and it's very hard to catch up nowadays. So I built a simple, silly tool in Go in order to fetch all repositories a company might have and then compare with the previous fetch to see if there are new repositories. Thanks to Go's elegance I was able to do that with less than 200 lines of code. In order to do what we need to do, we will use some basic functionality in Go such as IO, HTTP calls, JSON support for structs, slices, maps, error handling (which is the sucking part) and a bit of logic. This program can fail since I'm not using a proper TOKEN for Github, so you will hit the Github throttling limits pretty quickly if you keep running the program. You can create a proper DEV account and fix the code to use your token, or you can schedule the program to run at different times in your crontab. This is not production ready, but it's a way to have fun with Go and build something somehow useful :D Let's get started!


github-fetcher: How Does the Program Work?

First of all, the program looks on DISK to see if there is a JSON file (named after the organization you want to check), and if there is, that JSON file is loaded into memory. If there is no JSON file on DISK (the first time you run it for each organization), that is fine.

After loading the file from DISK (if present), we go to the Github API and fetch all repositories from the organization you pass as a parameter. The Github organization /repos API is paginated, so I do multiple calls in a loop in order to get the repos from all pages. Nowadays it is very common for companies to have more than 100 repositories.

After getting the repositories from the Github API, we compare the repositories from disk (previous run) with the repositories from the API. This gives us a DIFF - which will be new repos or deleted ones. Then the JSON on DISK is updated with the current run.

Getting the Code and Running 

Download the main.go file, then we can run it in 2 ways. We can do: go run main.go facebook, or we can build a binary by doing: go build and then run it with: ./github-fetcher facebook. When you run it you will see an output like this:



The Go Code



So let's go through the code. First of all, we import the libs we need. After the imports, we create a struct called Repo. Here we are using an interesting Go feature which allows us to map JSON to structs and vice versa. The Github API has many attributes, but I just care about the repository full_name, so that's why there is just 1 field there.
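As a reference, here is a minimal sketch, based only on the description above, of what the imports and the Repo struct could look like. The snippets that follow assume they all live in the same main.go; names and details not mentioned in the post are my own assumptions.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"os"
)

// Repo maps only the attribute we care about from the Github API payload:
// the json tag tells encoding/json to fill FullName from "full_name".
type Repo struct {
	FullName string `json:"full_name"`
}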

There is a function called extractRepos which receives the page number and the organization name. This function returns 2 things: a slice (which is like an array, but not quite) of Repo, and an error if one happens. This is how we do error handling in Go - since there are no exceptions, a function that can fail returns the error as a second value. I do the HTTP call and parse the result. You can see there is a json.Unmarshal which receives the HTTP body content and a pointer reference to a slice of repos called &repos. So &repos means we are passing repos by reference, not by value. In the previous line you might notice we are using the make function - that's there in order to create the slice.
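Continuing the sketch, extractRepos might look roughly like this; the exact Github URL and the error handling details are assumptions.

// extractRepos fetches one page of the org's repositories and unmarshals
// the JSON body into a slice of Repo. It returns the slice plus an error.
func extractRepos(page int, org string) ([]Repo, error) {
	url := fmt.Sprintf("https://api.github.com/orgs/%s/repos?page=%d", org, page)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	repos := make([]Repo, 0)
	if err := json.Unmarshal(body, &repos); err != nil {
		return nil, err
	}
	return repos, nil
}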

The next function is getAllRepos, which calls extractRepos with a different page number until there is nothing more to fetch - this is how I know how many pages there are. You might notice that when I call extractRepos I have repos, _: repos is the slice of repos and _ is the error, which I discard using the blank identifier (the underscore). The current page of repos is appended to the accumulated slice using the built-in append function, which takes the existing slice and the new elements and returns a new slice.
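A sketch of that pagination loop could look like this; stopping on an empty page is an assumption, since the post only says the loop ends when there is nothing more to fetch.

// getAllRepos keeps calling extractRepos with an increasing page number and
// appends every page into one slice. It stops when a page comes back empty
// (or the call fails); the error itself is discarded with the blank identifier.
func getAllRepos(org string) []Repo {
	all := make([]Repo, 0)
	for page := 1; ; page++ {
		repos, _ := extractRepos(page, org)
		if len(repos) == 0 {
			break
		}
		all = append(all, repos...)
	}
	return all
}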

The next function is persistInDisk. It receives a path (a string), which is the location where we want to persist on disk, and a []Repo, which is a slice of Repos. Here we use json.Marshal, passing the slice of Repos, in order to turn our slice of the Repo struct into a JSON string. Then we use io.Copy to copy it into the file on disk and persist it.
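A possible sketch of persistInDisk using json.Marshal and io.Copy, as described above:

// persistInDisk turns the slice of Repo into a JSON string with json.Marshal
// and copies it into a file at the given path using io.Copy.
func persistInDisk(path string, repos []Repo) error {
	data, err := json.Marshal(repos)
	if err != nil {
		return err
	}
	file, err := os.Create(path)
	if err != nil {
		return err
	}
	defer file.Close()
	_, err = io.Copy(file, bytes.NewReader(data))
	return err
}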

The next function is loadFromDisk, which also receives a path, but now there is a *[]Repo, which is a pointer to a slice of Repo. We need that because we will load the value by reference. We read the content of the file and decode the JSON back into the slice of structs.
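A sketch of loadFromDisk with the *[]Repo pointer parameter described above; using a JSON decoder here is an assumption.

// loadFromDisk reads the JSON file (if it exists) and decodes it into the
// slice the caller passed by reference through the *[]Repo pointer.
func loadFromDisk(path string, repos *[]Repo) error {
	file, err := os.Open(path)
	if err != nil {
		return err // first run for an organization: no file on disk yet
	}
	defer file.Close()
	return json.NewDecoder(file).Decode(repos)
}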

The next function is the diff one. Here we receive 2 slices, the slice from DISK and the slice from the Github call. In order to get the difference, we use the following algorithm. First, we loop through the first slice (from disk) and add all its items to a map where the key is the repo name and the value is a counter - after this loop the counter is 1 for every key. Then we do the same with the other slice (from the Github API call), but now we take the value from the map if it exists and add 1. So a duplicated key ends up with the value 2, otherwise 1. Finally, we loop through the map and find the keys where the counter is 1; these are unique, and that is what we want - that is the diff. Right now the algorithm does not distinguish between new repos and deleted ones; that would be pretty easy to do, just by checking which slice the unique entry came from, or by using a different number for the second slice.
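A sketch of that counting-map algorithm; the function and variable names are assumptions.

// diff counts repo names from both slices in a map: names seen in both end
// up with a count of 2, names seen in only one slice end up with 1, and
// those are the new or deleted repositories.
func diff(previous []Repo, current []Repo) []string {
	counts := make(map[string]int)
	for _, r := range previous {
		counts[r.FullName] = 1
	}
	for _, r := range current {
		counts[r.FullName] = counts[r.FullName] + 1
	}
	changed := make([]string, 0)
	for name, count := range counts {
		if count == 1 {
			changed = append(changed, name)
		}
	}
	return changed
}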

Finally, the main function. Here we orchestrate the main flow described previously in this post. We get the organization name as a parameter via os.Args, so we read it from the command line arguments. We call the other functions and, if there are errors, we don't proceed.
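And a sketch of how main could orchestrate the whole flow; the usage message and the snapshot file name are assumptions.

// main wires the flow together: read the organization from os.Args, load the
// previous snapshot from disk, fetch the current repos, print the diff and
// persist the new snapshot for the next run.
func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: github-fetcher <organization>")
		os.Exit(1)
	}
	org := os.Args[1]
	path := org + ".json"

	previous := make([]Repo, 0)
	_ = loadFromDisk(path, &previous) // a missing file on the first run is fine

	current := getAllRepos(org)

	for _, name := range diff(previous, current) {
		fmt.Println("changed repo:", name)
	}

	if err := persistInDisk(path, current); err != nil {
		fmt.Println("could not persist the snapshot:", err)
		os.Exit(1)
	}
}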

That's it!

Cheers,
Diego Pacheco

Monday, June 18, 2018

Running Ansible with Docker

Ansible is a great provisioning tool. However, it can be painful to get some Ansible scripts right, especially if you need to mix bash and Ansible. Often, baking time in AWS can be pretty high, so it's better if you can run Ansible locally. However, running Ansible locally could mess up your OS, so the best thing is to run Ansible in Docker. Since the Docker container is ephemeral, once you finish running the container all changes are lost. You also benefit from running locally and being able to figure out quickly what's wrong. So today I want to share a simple project I created in order to help with that. It's called ansible-docker: an Ansible sandbox using Amazon Linux.

Getting Started

In order to get started, we need to have docker and git installed. Next, we need to git clone ansible-docker and then bake it. Baking only needs to happen once and might take some time depending on your internet connection. After baking, you can run the "run" command, which will run the Ansible linter and then Ansible on the Docker image.



The Ansible Project

For this Ansible sandbox we have a simple, default Ansible project structure, which you can see in the src folder. There is a main.yml, which is the file that will be run by Ansible. You can see this file delegates to a git role, which is located in roles/git/tasks/main.yml.



The Dockerfile

Now let's take a look at the Dockerfile. Here we install Ansible, and we use the latest Amazon Linux version as our base Docker image. As you can see in the Dockerfile, we call run.sh, which will run Ansible as soon as the container comes up. You might see a different path; that happens because I'm doing some volume mapping - you can check it out here.



That's it. Now we can run Ansible locally with ansible-docker. Using these ideas and scripts you can speed up your development time.

Cheers,
Diego Pacheco

Mocking Terraform AWS using Docker

Terraform is a good tool for infrastructure provisioning. However, testing Terraform can be pretty difficult. So you create some Terraform scripts, upload them to the cloud and run some slow Jenkins job? And what if your syntax is wrong? Well, this process can be very painful. So I want to share a simple sandbox I built in order to speed up Terraform + AWS development on your local machine. You might be wondering how that is possible. Well, my secret sauce is Localstack. So we are limited to the endpoints that Localstack mocks, and as Localstack adds more endpoints we benefit from that. The main idea behind this simple project is to show how easy it is to dockerize some DevOps tools and make engineering easy. Currently it is very common to spend 40 minutes or more baking, and that is wrong - that's kind of mainframe era. The idea is to save time and run things locally as much as possible, and Docker helps a lot with that. I run software in production using AWS Amazon Linux, and now there is an Amazon Linux Docker image. This is great because you can have the same OS locally as you will have in PROD.



Getting Started

First of all, you need to have Docker and git installed. Then you can clone terraform-docker. Once you clone it you can run the bake command; that is needed just 1 time. After baking the Docker images we can run Localstack (this will need to be in another terminal). After running Localstack, we can run terraform-docker.

The Terraform Project

Under the src directory you will see:

  • main.tf:        Which is our terraform "code"
  • outputs.tf:     Which are all the things Terraform will output when it finishes.
  • variables.tf:  Which are custom variables and parameters we use for terraform.
For this sample, I create a bucket on S3 using Terraform. There are some special changes that need to be made in order for this to work locally. For instance, we need to point to the Localstack endpoints instead of the AWS ones.


This file is where you can see a specific IP for the S3 endpoint. I can do this because I created a Docker network, which allows me to control and define the IP addresses inside it. You can see how I create the Docker network and attach IPs here.

The Dockerfile

The Dockerfile is pretty simple. We are using the latest Amazon Linux as the base Docker image, we install Terraform 0.11.7, and we copy the local Terraform project. There is a run.sh which pretty much does terraform init and terraform apply, in order to run Terraform as soon as you start the container.



That's it! Now we have mocked AWS for Terraform and are running everything on the local machine. You can get the full project with all source code and scripts here.

Cheers,
Diego Pacheco
