Install MailHog with Nginx on Ubuntu server

Debian / Ubuntu

sudo apt-get -y install golang-go
go get

Then, start MailHog by running /path/to/MailHog in the command line.

E.g. the path to Go’s bin files on Ubuntu is ~/go/bin/, so to start MailHog run ~/go/bin/MailHog in the command line.

To keep MailHog running in the background, create a systemd service:

sudo nano /etc/systemd/system/mailhog.service

Description=MailHog service
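The unit file here only shows its Description line; a minimal sketch of what a complete unit might contain (the ExecStart path assumes MailHog ended up in ~/go/bin/ as noted above — adjust the user and home directory to match your server):

```ini
[Unit]
Description=MailHog service
After=network.target

[Service]
# Path assumes Go placed the binary in this user's ~/go/bin/
ExecStart=/home/ubuntu/go/bin/MailHog
Restart=on-failure

[Install]
WantedBy=multi-user.target
```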



sudo systemctl enable mailhog

sudo systemctl start mailhog

Add it to Nginx

sudo nano /etc/nginx/sites-available/default

server {
        server_name mail.your.domain;
        listen 80;
        listen [::]:80;

        location / {
                proxy_pass      http://localhost:8025;
                proxy_set_header    Host             $host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_http_version 1.1;
                proxy_redirect off;
                proxy_buffering off;
        }
}
sudo nginx -t

sudo service nginx reload

Inertia.js is insane

Have you ever heard about Inertia.js?

I like to work on the frontend using Vue.js, but then I also have to write frontend routes. I do not see anything wrong with that in itself, but I just hate creating frontend routes and then having to create the same routes on the Laravel side as well.

With Inertia.js I do not have to write frontend routes, which is a very nice thing to me.

There are a lot of good things about Inertia.js, and you should see it for yourself 🙂

Jonathan Reinink did a very good job on this insane thing!

Combine Google Fonts for Web Performance

We can increase web performance by combining the Google Fonts CSS into a single request.

We used to write it like this:

<link rel="stylesheet" href='' />
<link rel="stylesheet" href=',600,700' />

Now we can write like this:

<link rel="stylesheet" href='|Poppins:400,600,700' />
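With hypothetical family names filled in, the pattern looks like this — multiple families are joined with a pipe (|) in a single css?family= query, so the browser makes one request instead of two (Roboto is just an example family here):

```html
<!-- Two separate requests (the old way); family names are examples -->
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:400" />
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Poppins:400,600,700" />

<!-- One combined request: families separated by a pipe -->
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:400|Poppins:400,600,700" />
```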

Compression and Decompression in Nginx

This section describes how to configure compression or decompression of responses, as well as sending compressed files.


Compressing responses often significantly reduces the size of transmitted data. However, since compression happens at runtime it can also add considerable processing overhead which can negatively affect performance. NGINX performs compression before sending responses to clients, but does not “double compress” responses that are already compressed (for example, by a proxied server).
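To get a feel for how much compressible text shrinks, here is a quick local illustration using the gzip command-line tool (not nginx itself; the file path is just an example):

```shell
# Create ~12 KB of highly repetitive text, then write a compressed copy
printf 'hello world %.0s' $(seq 1 1000) > /tmp/sample.txt
gzip -c /tmp/sample.txt > /tmp/sample.txt.gz
# Compare the byte counts of the original and the compressed file
wc -c /tmp/sample.txt /tmp/sample.txt.gz
```

Repetitive text like this compresses to a tiny fraction of its original size; real HTML and JSON typically shrink by 60–90%.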

Enabling Compression

To enable compression, include the gzip directive with the on parameter.

gzip on;

By default, NGINX compresses responses only with MIME type text/html. To compress responses with other MIME types, include the gzip_types directive and list the additional types.

gzip_types text/plain application/xml;

To specify the minimum length of the response to compress, use the gzip_min_length directive. The default is 20 bytes (here adjusted to 1000):

gzip_min_length 1000;

By default, NGINX does not compress responses to proxied requests (requests that come from the proxy server). The fact that a request comes from a proxy server is determined by the presence of the Via header field in the request. To configure compression of these responses, use the gzip_proxied directive. The directive has a number of parameters specifying which kinds of proxied requests NGINX should compress. For example, it is reasonable to compress responses only to requests that will not be cached on the proxy server. For this purpose the gzip_proxied directive has parameters that instruct NGINX to check the Cache-Control header field in a response and compress the response if the value is no-cache, no-store, or private. In addition, you must include the expired parameter to check the value of the Expires header field. These parameters are set in the following example, along with the auth parameter, which checks for the presence of the Authorization header field (an authorized response is specific to the end user and is not typically cached):

gzip_proxied no-cache no-store private expired auth;

As with most other directives, the directives that configure compression can be included in the http context or in a server or location configuration block.

The overall configuration of gzip compression might look like this.

server {
    gzip on;
    gzip_types      text/plain application/xml;
    gzip_proxied    no-cache no-store private expired auth;
    gzip_min_length 1000;
}

Enabling Decompression

Some clients do not support responses with the gzip encoding method. At the same time, it might be desirable to store compressed data, or compress responses on the fly and store them in the cache. To successfully serve both clients that do and do not accept compressed data, NGINX can decompress data on the fly when sending it to the latter type of client.

To enable runtime decompression, use the gunzip directive.

location /storage/ {
    gunzip on;
}

The gunzip directive can be specified in the same context as the gzip directive:

server {
    gzip on;
    gzip_min_length 1000;
    gunzip on;
}

Note that this directive is defined in a separate module that might not be included in an open source NGINX build by default.

Sending Compressed Files

To send a compressed version of a file to the client instead of the regular one, set the gzip_static directive to on within the appropriate context.

location / {
    gzip_static on;
}

In this case, to service a request for /path/to/file, NGINX tries to find and send the file /path/to/file.gz. If the file doesn’t exist, or the client does not support gzip, NGINX sends the uncompressed version of the file.

Note that the gzip_static directive does not enable on-the-fly compression. It merely uses a file compressed beforehand by any compression tool. To compress content (and not only static content) at runtime, use the gzip directive.
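Since gzip_static only serves files that were compressed beforehand, a deploy step can pre-compress static assets. A sketch with placeholder paths (/tmp/site stands in for your real document root):

```shell
# Set up a stand-in document root with one HTML file
mkdir -p /tmp/site
echo '<h1>Hello</h1>' > /tmp/site/index.html
# Write index.html.gz alongside the original so nginx can serve either,
# depending on whether the client accepts gzip
for f in /tmp/site/*.html; do gzip -c "$f" > "$f.gz"; done
ls /tmp/site
```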

This directive is defined in a separate module that might not be included in an open source NGINX build by default.


Push and Pull to git with using SSH Keys

Have you ever experienced that every time you pull from or push to Bitbucket, it asks you to enter a password, if not both username and password?

There's a better way to push and pull with Git, using SSH keys.

  • Generate public and private keys
$> ssh-keygen -t rsa 

If you have already created SSH keys before, then just copy the content of the public key:

$> cat ~/.ssh/
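By default, ssh-keygen writes the public key to ~/.ssh/id_rsa.pub. The demo below generates a throwaway pair under /tmp just to show what the public key looks like (real keys belong in ~/.ssh/):

```shell
# Generate a passphrase-less demo key pair under /tmp
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t rsa -b 2048 -N '' -f /tmp/demo_key
# This single ssh-rsa line is what you paste into Bitbucket
cat /tmp/demo_key.pub
```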

If you are using bitbucket, go to settings{yourusername}/ssh-keys/

and add the SSH key you copied.

Add SSH Key

Take note to use the “SSH” version of the URL before cloning. Or, if you’re already using HTTPS, just edit .git/config and replace the URL.

$> nano .git/config
[core]
 repositoryformatversion = 0
 filemode = true
 bare = false
 logallrefupdates = true
[remote "origin"]
 url =
 fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
 remote = origin
 merge = refs/heads/master
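Instead of hand-editing .git/config, the same switch can be done with git itself; the repository URLs below are placeholders for your own:

```shell
# Create a throwaway repo and flip its origin from HTTPS to SSH
rm -rf /tmp/demo-repo
git init -q /tmp/demo-repo
git -C /tmp/demo-repo remote add origin https://bitbucket.org/user/repo.git
git -C /tmp/demo-repo remote set-url origin git@bitbucket.org:user/repo.git
# Confirm the change; prints the SSH URL
git -C /tmp/demo-repo remote get-url origin
```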

Automate checking of PHP Coding Standards

I am too lazy to manually check my code against coding standards. What if we could check our code automatically?

Here comes Squiz Labs’ PHP_CodeSniffer, which we can use to automate checking our code against standards.

PHP_CodeSniffer is a set of two PHP scripts; the main phpcs script that tokenizes PHP, JavaScript and CSS files to detect violations of a defined coding standard, and a second phpcbf script to automatically correct coding standard violations. PHP_CodeSniffer is an essential development tool that ensures your code remains clean and consistent.

A coding standard in PHP_CodeSniffer is a collection of sniff files. Each sniff file checks one part of the coding standard only. Multiple coding standards can be used within PHP_CodeSniffer so that the one installation can be used across multiple projects. The default coding standard used by PHP_CodeSniffer is the PEAR coding standard.


PHP_CodeSniffer requires PHP version 5.4.0 or greater, although individual sniffs may have additional requirements such as external applications and scripts. See the Configuration Options manual page for a list of these requirements.



If you use Composer, you can install PHP_CodeSniffer system-wide with the following command:

composer global require "squizlabs/php_codesniffer=*"

Make sure you have the composer bin dir in your PATH. The default value is ~/.composer/vendor/bin/, but you can check the value that you need to use by running composer global config bin-dir --absolute.
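To put that bin dir on your PATH for the current shell session (the path below is the default quoted above — verify yours with `composer global config bin-dir --absolute`):

```shell
# Prepend Composer's global bin dir to PATH for this shell session;
# add the same export line to ~/.bashrc to make it permanent.
export PATH="$HOME/.composer/vendor/bin:$PATH"
# Show the first PATH entry to confirm it took effect
echo "$PATH" | tr ':' '\n' | head -n 1
```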


Printing a List of Installed Coding Standards

$ phpcs -i
The installed coding standards are MySource, PEAR, PHPCS, PSR1, PSR2, Squiz and Zend

To check a file against the PSR1 coding standard, simply specify the file’s location.

$ phpcs --standard=PSR1 /path/to/code/myfile.php

FILE: /path/to/code/myfile.php
  2 | ERROR | Missing file doc comment
 20 | ERROR | PHP keywords must be lowercase; expected "false" but found "FALSE"
 47 | ERROR | Line not indented correctly; expected 4 spaces but found 1
 51 | ERROR | Missing function doc comment
 88 | ERROR | Line not indented correctly; expected 9 spaces but found 6

Or, if you wish to check an entire directory, you can specify the directory location instead of a file.

$ phpcs --standard=PSR1 /path/to/code

FILE: /path/to/code/myfile.php
  2 | ERROR | Missing file doc comment
 20 | ERROR | PHP keywords must be lowercase; expected "false" but found "FALSE"
 47 | ERROR | Line not indented correctly; expected 4 spaces but found 1
 51 | ERROR | Missing function doc comment
 88 | ERROR | Line not indented correctly; expected 9 spaces but found 6

FILE: /path/to/code/yourfile.php
 21 | ERROR   | PHP keywords must be lowercase; expected "false" but found
    |         | "FALSE"
 21 | WARNING | Equals sign not aligned with surrounding assignments

Using Laravel Valet and ngrok

One of the things we do as web developers is testing the callbacks or webhooks of web service APIs.
When we develop against the APIs of PayPal, Braintree and other payment gateways, we need to check the webhooks.
When we are testing an API webhook, it is not so good if we have to change code and then upload it to our test server.
Maybe it's much better if we change our code locally and have the webhook call point at our local server. ngrok is one of the solutions we can use. I think there are a lot of tools out there, but it's the tool that I use for now.

It's so easy to install and use.

Just download the app and run a few commands.

You'd better check the docs for more info.

If you are using Laravel Valet, then it's much easier.

Using your terminal, navigate to your app and run:

valet share

Install Linux, Nginx, MySQL, PHP (LEMP stack) in Ubuntu 16.04

So you want to install LEMP on your server. In my experience, when I want to set up a server for my web app, I always install the LEMP stack. Sometimes it consumes a lot of time.

So I created a simple script that we can run to install all the packages required for our server.

Log in to your Ubuntu server using SSH.

Download the file, then make it runnable:

chmod +x

Run the script.
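A minimal sketch of what such an install script might contain (package names assume Ubuntu 16.04, and the file path is a placeholder). Here we only write the script and syntax-check it rather than actually run the installation:

```shell
# Write a minimal LEMP installer sketch to /tmp
cat > /tmp/install-lemp.sh <<'EOF'
#!/bin/bash
set -e
apt-get update
apt-get -y install nginx mysql-server php-fpm php-mysql
systemctl enable nginx
systemctl enable mysql
EOF
chmod +x /tmp/install-lemp.sh
# Syntax-check without executing any of the install commands
bash -n /tmp/install-lemp.sh && echo "syntax OK"
```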