Installing the Amazon ElastiCache Cluster Client for PHP in Ubuntu Server 14.04 LTS

Downloading the Installation Package

To ensure that you use the correct version of the ElastiCache Cluster Client for PHP, you will need to know what version of PHP is installed on your Amazon EC2 instance. You will also need to know whether your Amazon EC2 instance is running a 64-bit or 32-bit version of Linux.

To determine the PHP version installed on your Amazon EC2 instance

  • At the command prompt, run the following command:
    php -v

    The PHP version will be shown in the output, as in this example:

    PHP 5.4.10 (cli) (built: Jan 11 2013 14:48:57) 
    Copyright (c) 1997-2012 The PHP Group
    Zend Engine v2.4.0, Copyright (c) 1998-2012 Zend Technologies
    

To determine your Amazon EC2 AMI architecture (64-bit or 32-bit)

  1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. In the Instances list, click your Amazon EC2 instance.
  3. In the Description tab, look for the AMI: field. A 64-bit instance should have x86_64 as part of the description; for a 32-bit instance, look for i386 or i686 in this field.
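If you prefer the command line, the architecture can also be checked from the instance itself. A quick sketch using the standard uname utility:

```shell
# Prints the machine architecture: x86_64 for a 64-bit instance,
# i386 or i686 for a 32-bit instance
uname -m
```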

You are now ready to download the ElastiCache Cluster Client.

To download the ElastiCache Cluster Client for PHP

  1. Sign in to the AWS Management Console and open the ElastiCache console at https://console.aws.amazon.com/elasticache/.
  2. From the ElastiCache console, click Download ElastiCache Cluster Client.
  3. Choose the ElastiCache Cluster Client that matches your PHP version and AMI architecture, and click the Download ElastiCache Cluster Client button.

Installation Steps for New Users

  1. Launch an Ubuntu Linux instance (either 64-bit or 32-bit) and log into it.
  2. Install PHP dependencies:
    sudo apt-get update
    sudo apt-get install gcc g++ php5 php-pear
  3. Download the correct php-memcached package for your Amazon EC2 instance and PHP version. For more information, see Downloading the Installation Package.
  4. Install php-memcached. The URI should be the download path for the installation package.
    sudo pecl install <package download path>

    Note

    This installation step installs the build artifact amazon-elasticache-cluster-client.so into the /usr/lib/php5/20121212* directory. Please verify the absolute path of the build artifact because it is needed by the next step.

    If the previous command doesn’t work, you need to manually extract the PHP client artifact amazon-elasticache-cluster-client.so from the downloaded *.tgz file, and copy it to the /usr/lib/php5/20121212* directory.

    tar -xvf <package download path>
    cp amazon-elasticache-cluster-client.so /usr/lib/php5/20121212/ 
  5. With root/sudo permission, add a new file named memcached.ini in the /etc/php5/cli/conf.d directory, and insert “extension=<absolute path to amazon-elasticache-cluster-client.so>” in the file.
    echo "extension=<absolute path to amazon-elasticache-cluster-client.so>" | sudo tee /etc/php5/cli/conf.d/memcached.ini
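As a sanity check, the ini file should end up containing exactly one extension= line pointing at the .so. A minimal sketch of that check, shown against a temporary file so it can run anywhere (on a real server, point it at /etc/php5/cli/conf.d/memcached.ini; the .so path below is an example):

```shell
# Demonstration with a temp file standing in for memcached.ini
ini=$(mktemp)
echo "extension=/usr/lib/php5/20121212/amazon-elasticache-cluster-client.so" > "$ini"
# Count the extension= lines; there should be exactly one
count=$(grep -c "^extension=" "$ini")
echo "$count"   # expect 1
rm -f "$ini"
```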

Finally, restart your Nginx or Apache web server.

Source: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Appendix.PHPAutoDiscoverySetup.html

How To Install Elasticsearch on an Ubuntu Server

Elasticsearch is a platform for distributed, RESTful search and analysis. It can scale as needed, and you can get started using it right away on a single DigitalOcean droplet. In this tutorial, we will download, install, and start using Elasticsearch on Ubuntu. The steps provided have currently been tested on: Ubuntu 12.04.3 x64 and Ubuntu 13.10 x64.

Dependencies

First, update the list of available packages by running apt-get update.

Next, we must install the Java runtime. There are two options here.

  • Install the OpenJDK runtime supplied by Ubuntu.
  • Install the Elasticsearch recommended Java runtime, Oracle Java.

The first option works perfectly fine if you would just like to play around and get acquainted with Elasticsearch or run a small collection of nodes. The latter option is the one recommended by Elasticsearch for guaranteed compatibility.

OpenJDK

To accomplish the first option, we can simply run apt-get install openjdk-6-jre.

Oracle Java

For the second option, we’ll follow the steps in the Elasticsearch documentation. To begin, we must add a repository that contains the Oracle Java runtime

sudo add-apt-repository ppa:webupd8team/java

We must then run apt-get update to pull in package information from this new repository. After doing so, we can install the Oracle Java runtime

sudo apt-get install oracle-java7-installer

While executing the above command, you will be required to accept the Oracle binary license. If you don’t agree to the license, you can install the OpenJDK runtime instead.

Test your Java installation

You can then check that Java is installed by running java -version.

That’s all the dependencies we need for now, so let’s get started with obtaining and installing Elasticsearch.

Download and Install

Elasticsearch can be downloaded directly from their site in zip, tar.gz, deb, or rpm packages. You don’t need to do this ahead of time, as we will download the files that we need as we need them in the text below.

Install

Given the download options provided by Elasticsearch, we have a few options:

  • Install from the zip or tar.gz archive.
  • Install from the deb package.
  • Install from the rpm package.

That last option is not the Ubuntu way, so we’ll ignore it.

Installing from zip or tar.gz archive is best if you just want to play around with Elasticsearch for a bit. Installing from either of these options simply makes available the binaries needed for running Elasticsearch. Installing from the deb package fully installs Elasticsearch and starts the server running immediately. This includes installing an init script at /etc/init.d/elasticsearch which starts Elasticsearch on boot. If you are only looking to play around with Elasticsearch, I suggest installing from the zip or tar.gz. That way, you can discover Elasticsearch while starting and stopping the server at will.

Installing from zip or tar.gz archive

The zip and tar.gz downloads both contain pre-compiled binaries for Elasticsearch.

To begin, download the archive somewhere convenient. After extracting it, you will be able to run the binaries directly from the resulting directory, so place them somewhere accessible to every user who should have access to Elasticsearch. For this tutorial, we’ll just download to our current user’s home directory. Note that files downloaded to /tmp are likely to disappear when you reboot your VPS; if a throwaway install is what you want, go ahead and place the download there. You can create a new temporary directory in /tmp quickly by running mktemp -d.

In any case, make sure you’re in the directory you want to extract Elasticsearch into before proceeding.

Download the archive

Run either

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.1.zip

or

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.1.tar.gz

The first command downloads the zip archive, and the second command downloads the tar.gz archive. If you downloaded the zip package, make sure you have previously run apt-get install unzip, then run

unzip elasticsearch-1.4.1.zip

Alternatively, if you’ve downloaded the tar.gz package, run

tar -xf elasticsearch-1.4.1.tar.gz

Either option will create the directory elasticsearch-1.4.1. Change into that directory by entering cd elasticsearch-1.4.1, and you’ll find the binaries in the bin folder.

Installing from the Debian software package

The best package to download for Ubuntu is the deb package. The RPM can work but it needs to be converted first, and we will not cover doing so here. Grab the deb package by running

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.1.deb

Installing directly from a Debian package is done by running

dpkg -i elasticsearch-1.4.1.deb

This results in Elasticsearch being properly installed in /usr/share/elasticsearch. Recall that installing from the Debian package also installs an init script in /etc/init.d/elasticsearch that starts the Elasticsearch server running on boot. The server will also be immediately started after installation.

Configuration files

If installed from the zip or tar.gz archive, configuration files are found in the config folder of the resulting directory. If installed from the Debian package, configuration files are found in /etc/elasticsearch.

In either case, there will be two main configuration files: elasticsearch.yml and logging.yml. The first configures the Elasticsearch server settings, and the latter, unsurprisingly, the logger settings used by Elasticsearch.

“elasticsearch.yml” will, by default, contain nothing but comments.

“logging.yml” provides configuration for basic logging. You can find the resulting logs in /var/log/elasticsearch.

Remove Elasticsearch Public Access

Before continuing, you will want to configure Elasticsearch so it is not accessible to the public Internet–Elasticsearch has no built-in security and can be controlled by anyone who can access the HTTP API. This can be done by editing elasticsearch.yml. Assuming you installed with the package, open the configuration with this command:

sudo vi /etc/elasticsearch/elasticsearch.yml

Then find the line that specifies network.bind_host, uncomment it, and change the value to localhost so it looks like the following:

network.bind_host: localhost

Then insert the following line somewhere in the file, to disable dynamic scripts:

script.disable_dynamic: true

Save and exit. Now restart Elasticsearch to put the changes into effect:

sudo service elasticsearch restart
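After the restart, it is worth confirming both settings made it into the config file. A sketch of that check, shown against a temporary copy so it runs anywhere (substitute /etc/elasticsearch/elasticsearch.yml on your server):

```shell
cfg=$(mktemp)
# Stand-in for the real config file edited above
printf 'network.bind_host: localhost\nscript.disable_dynamic: true\n' > "$cfg"
# Both settings should match, one line each
matched=$(grep -cE 'bind_host|disable_dynamic' "$cfg")
echo "$matched"   # expect 2
rm -f "$cfg"
```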

We’ll cover other basic configuration options later, but first we should test the most basic of Elasticsearch installs.

Test your Elasticsearch install

You have now either extracted the zip or tar.gz archives to a directory, or installed Elasticsearch from the Debian package. Either way, you have the Elasticsearch binaries available, and can start the server. If you used the zip or tar.gz archives, make sure you’re in the resulting directory. If you installed using the Debian package, the Elasticsearch server should already be running, so you don’t need to start the server as shown below.

Let’s ensure that everything is working. Run

 ./bin/elasticsearch

Elasticsearch should now be running on port 9200. Do note that Elasticsearch takes some time to fully start, so running the curl command below immediately might fail. It shouldn’t take longer than ten seconds to start responding, so if the below command fails, something else is likely wrong.

Ensure the server is started by running

curl -X GET 'http://localhost:9200'

You should see the following response

{
  "ok" : true,
  "status" : 200,
  "name" : "Xavin",
  "version" : {
    "number" : "1.4.1",
    "build_hash" : "36897d07dadcb70886db7f149e645ed3d44eb5f2",
    "build_timestamp" : "2013-11-13T12:06:54Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}

If you see a response similar to the one above, Elasticsearch is working properly. Alternatively, you can query your install of Elasticsearch from a browser by visiting your server’s address on port 9200. You should see the same JSON as you saw when using curl above.

If you installed by the zip or tar.gz archive, the server can be stopped using the RESTful API

curl -X POST 'http://localhost:9200/_cluster/nodes/_local/_shutdown'

The above command also works when Elasticsearch was installed using the Debian package, but you can also stop the server using service elasticsearch stop. You can restart the server with the corresponding service elasticsearch start.

Using Elasticsearch

Elasticsearch is up and running. Now, we’ll go over some basic configuration and usage.

Basic configuration

When installed by zip or tar.gz archives, configuration files are found in the config folder inside the resulting directory. When installed via Debian package, configuration files can be found in /etc/elasticsearch/. The two configuration files you will find are elasticsearch.yml and logging.yml. The first is a general Elasticsearch configuration. The provided file contains nothing but comments, so default settings are used. Reading through the file will provide a good overview of the options, but I will make a few suggestions below. None of the settings are necessary. You can work with Elasticsearch without doing any of the following, but it’ll be a raw development environment.

The setting “cluster.name” is the method by which Elasticsearch provides auto-discovery. What this means is that if a group of Elasticsearch servers on the same network share the same cluster name, they will automatically discover each other. This is how simple it is to scale Elasticsearch, but be aware that if you keep the default cluster name and there are other Elasticsearch servers on your network that are not under your control, you are likely to wind up in a bad state.
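For example, giving each environment its own name in elasticsearch.yml avoids accidental discovery (the name below is just an illustration):

```yaml
# elasticsearch.yml
cluster.name: myapp-production
```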

Basic usage

Let’s add some data to our Elasticsearch install. Elasticsearch uses a RESTful API, which responds to the usual CRUD commands: Create, Read, Update, and Delete.

To add an entry

curl -X POST 'http://localhost:9200/tutorial/helloworld/1' -d '{ "message": "Hello World!" }'

You should see the following response

{"ok":true,"index":"tutorial","type":"helloworld","id":"1","version":1}

What we have done is send an HTTP POST request to the Elasticsearch server. The URI of the request was /tutorial/helloworld/1. It’s important to understand the parameters here:

  • “tutorial” is the index of the data in Elasticsearch.
  • “helloworld” is the type.
  • “1” is the id of our entry under the above index and type.

If you saw the response above to the curl command, we can now query for the data with

curl -X GET 'http://localhost:9200/tutorial/helloworld/1'

which should respond with

{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"exists":true, "_source" : { "message": "Hello World!" }}

Success! We’ve added to and queried data in Elasticsearch.
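To round out the CRUD operations, an update is just another PUT/POST to the same URI, and a delete uses the DELETE verb. A sketch, guarded so it only talks to Elasticsearch when one is actually running on localhost:

```shell
if curl -s --max-time 2 'http://localhost:9200' >/dev/null 2>&1; then
  # Update: re-index the document under the same index/type/id
  result=$(curl -s -X PUT 'http://localhost:9200/tutorial/helloworld/1' \
    -d '{ "message": "Hello again!" }')
  # Delete: remove the document entirely
  curl -s -X DELETE 'http://localhost:9200/tutorial/helloworld/1'
else
  result="elasticsearch not running on localhost:9200"
fi
echo "$result"
```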

One thing to note is that we can get nicer output by appending ?pretty=true to the query. Let’s give this a try

curl -X GET 'http://localhost:9200/tutorial/helloworld/1?pretty=true'

Which should respond with

{
  "_index" : "tutorial",
  "_type" : "helloworld",
  "_id" : "1",
  "_version" : 1,
  "exists" : true, "_source" : { "message": "Hello World!" }
}

which is much more readable. You can also have Elasticsearch return YAML instead of JSON by appending format=yaml to the query string.

Conclusion

We have now installed, configured, and begun using Elasticsearch. Since it responds to a basic RESTful API, it is now easy to begin adding to and querying data in Elasticsearch from your application.

Credits: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-on-an-ubuntu-vps

Production-Ready Beanstalkd with Laravel 4 Queues

Queues are a great way to take some task out of the user-flow and put them in the background. Allowing a user to skip waiting for these tasks makes our applications appear faster, and gives us another opportunity to segment our application and business logic out further.

For example, sending emails, deleting accounts and processing images are all potentially long-running or memory-intensive tasks; They make great candidates for work which we can off-load to a queue.

Laravel can accomplish this with its Queue package. Specifically, I use the Beanstalkd work queue with Laravel.

Here’s how I set that up to be just about production-ready.

Note: I use Ubuntu for development and often in production. The following is accomplished in Ubuntu 12.04 Server LTS. Some instructions may differ for you depending on your OS.

Here’s what we’ll cover:

  1. Laravel and Queues
  2. Installing Beanstalkd
  3. Churning through the queue with Supervisor

Laravel and Queues

Laravel makes using queues very easy. Our application, the "producer", can simply run something like Queue::push('SendEmail', array('message' => $message)); to add a "job" to the queue.

On the other end of the queue is the code listening for new jobs and a script to process the job (collectively, the "workers"). This means that in addition to adding jobs to the queue, we need to set up a worker to pull from the stack of available jobs.

Here’s how that looks in Laravel. In this example, we’ll create an image-processing queue.

Install dependencies

As noted in the docs, Laravel requires the Pheanstalk package for using Beanstalkd. We can install this using Composer:

$ composer require pda/pheanstalk ~2.0

Create a script to process it

Once our PHP dependency is installed, we can begin to write some code. In this example, we’ll create a PhotoService class to handle the processing. If no method is specified, Laravel assumes the class will have a fire() method. This is half of a worker – the code which does some processing.

<?php namespace Myapp\Queue;

class PhotoService {

    public function fire($job, $data)
    {
        // Minify, crop, shrink, apply filters or otherwise manipulate the image

        // Delete the job when done, or it will be released back
        // onto the queue and run again
        $job->delete();
    }

}

Push a job to a Queue

When a user uploads an image, we’ll add a job to the queue so our worker can process it.

In Laravel, we’ll create a job by telling the Queue library what code will handle the job (in this case the fire() method inside of Myapp\Queue\PhotoService as defined above) and give it some data to work with. In our example, we simply pass it a path to an image file.

Queue::push('Myapp\Queue\PhotoService', array('image_path' => '/path/to/image/file.ext'));

Process the jobs

At this point, we have code to process an image (most of a worker), and we’ve added a job to the queue. The last step is to have code pull a job from the queue.

This is the other half of a worker. The worker needs to both pull a job from the queue and do the processing. In Laravel, that’s split into 2 functionalities – Laravel’s queue listener, and the code we write ourselves – in this case, the PhotoService.

Laravel has some CLI tools to help with queues:

// Fire the latest job in the queue
$ php artisan queue:work

// Listen for new jobs in the queue
// and fire them off one at a time
// as they are created
$ php artisan queue:listen

When not working with the “sync” driver, these tools are what you need to use in order to process the jobs in your queue. We run the queue:listen command to have Laravel listen to the queue and pull jobs as they become available.

Let’s install Beanstalkd to see how that works.

By default, Laravel will run queue jobs synchronously – that is, it runs the job at the time of creation. This means the image will be processed in the same request the user made when uploading an image. That’s useful for testing, but not for production. We’ll make this asynchronous by introducing Beanstalkd.

Beanstalkd

Let’s install Beanstalkd:

# Debian / Ubuntu:
$ sudo apt-get update
$ sudo apt-get install beanstalkd

Note: You may be able to get a newer version of Beanstalkd by adding this PPA. Ubuntu 12.04 installs an older version of Beanstalkd.

Next, some quick configuration. The first thing we need to do is tell Beanstalkd to start when the system starts up or reboots. Edit /etc/default/beanstalkd and set START to “yes”.

$ sudo vim /etc/default/beanstalkd
> START yes     # uncomment

Then we can start Beanstalkd:

$ sudo service beanstalkd start
# Alternatively: /etc/init.d/beanstalkd start
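Beanstalkd listens on port 11300 by default, so a quick reachability check confirms it started (guarded so it degrades to a message when the daemon, or nc itself, isn't present on the machine you run this on):

```shell
if nc -z localhost 11300 2>/dev/null; then
  status="beanstalkd is listening on 11300"
else
  status="beanstalkd is not reachable on 11300"
fi
echo "$status"
```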

Now we can set up Laravel. In your app/config/queue.php file, set the default queue to 'beanstalkd':

'default' => 'beanstalkd',

Then edit any connection information you need to change. I left my configuration with the defaults as I installed it on the same server as the application.

'connections' => array(

    'beanstalkd' => array(
        'driver' => 'beanstalkd',
        'host'   => 'localhost',
        'queue'  => 'default',
        'ttr'    => 60,
    ),

),

Now when we push a job to the queue in Laravel, we’ll be pushing to Beanstalkd!

Installing Beanstalkd on a remote server

You may (read: should) want to consider installing Beanstalkd on another server, rather than your application server. Since Beanstalkd is an in-memory service, it can eat up your server’s resources under load.

To do this, you can install Beanstalkd on another server, and simply point your “host” to the proper server address, rather than localhost.

This leaves the final detail – what server runs the job? If you follow all other steps here, Supervisord will still be watching Laravel’s listener on your application server. You may want to consider running your job script (or even a copy of your application which has a job script) on yet another server whose job is purely to churn through Beanstalkd queue jobs. This means having a listener and working listener/job code on yet another server.

In fact, in a basic distributed setup, we’d probably have an application server (or 2, plus a load-balancer), a database server, a queue server and a job server!

Supervisord

Let’s say you pushed a job to Beanstalkd:

Queue::push('Myapp\Queue\PhotoService', array('image_path' => '/path/to/image/file.ext'));

Now what? You might notice that the job goes to Beanstalkd, but Myapp\Queue\PhotoService@fire() doesn’t seem to be getting called. You check your error logs, you check whether the image was edited, and you find that the job is just “sitting there” in your Beanstalkd queue.

Beanstalkd doesn’t actually PUSH jobs to a script – instead, we need a worker to check if there are jobs available and ask for them.

This is what $ php artisan queue:listen does – It listens for jobs and runs them as they become available.

If you run that command, you’ll see your job being sent to your code. If all goes well, your image will be properly manipulated.

The question then becomes: How do I make php listen at all times? We need to avoid having to “supervise” that process manually. This is where Supervisord comes in.

Supervisord will watch our queue:listen command and restart it if it fails. Let’s see how to set that up.

First, we’ll install it:

# Debian / Ubuntu:
$ sudo apt-get install supervisor

Next, we’ll configure it. We need to define a process to listen to.

$ sudo vim /etc/supervisor/conf.d/myqueue.conf

Add this to your new conf file, changing file paths and your environment as necessary:

[program:myqueue]
command=php artisan queue:listen --env=your_environment
directory=/path/to/laravel
stdout_logfile=/path/to/laravel/app/storage/logs/myqueue_supervisord.log
redirect_stderr=true
autostart=true
autorestart=true

We now have a process called “myqueue” which we can tell Supervisord to start and monitor.

Let’s do that:

$ sudo supervisorctl
> reread # Tell supervisord to check for new items in /etc/supervisor/conf.d/
> add myqueue       # Add this process to Supervisord
> start myqueue     # May say "already started"

Now the “myqueue” process is on and being monitored. If our queue listener fails, Supervisord will restart the php artisan queue:listen --env=your_environment process.

You can check that it is indeed running that process with this command:

$ ps aux | grep php

# You should see some output like this:
php artisan queue:listen --env=your_environment
sh -c php artisan queue:work  --queue="default" --delay=0 --memory=128 --sleep --env=your_environment
php artisan queue:work --queue=default --delay=0 --memory=128 --sleep --env=your_environment

Wrapping up

Now we have a full end-to-end queue working and in place!

  1. We create a script to process a queued job
  2. We installed Beanstalkd to act as the work queue
  3. We use Laravel to push jobs to our queue
  4. We use Laravel queue:listen to act as a worker and pull jobs from the queue
  5. We wrote some code to process a job from the queue
  6. We use Supervisord to ensure queue:listen is always listening for new jobs

Notes

  1. You might want to consider setting up log rotation on the Laravel and Supervisord logs
  2. You can read here for more information on setting up Supervisord on Ubuntu.
  3. Read the Laravel docs on queues to learn how and when to release or delete jobs.

TL;DR

For reference, just copy and paste the whole process from here:

$ sudo apt-get update
$ sudo apt-get install -y beanstalkd supervisor
$ sudo vim /etc/default/beanstalkd
> START yes     # uncomment this line
$ sudo service beanstalkd start
$ sudo vim /etc/supervisor/conf.d/myqueue.conf

Enter this, changing as needed:

[program:myqueue]
command=php artisan queue:listen --env=your_environment
directory=/path/to/laravel
stdout_logfile=/path/to/laravel/app/storage/logs/myqueue_supervisord.log
redirect_stderr=true
autostart=true
autorestart=true

Start Supervisord:

$ sudo supervisorctl
> reread                # Get available jobs
> add myqueue
> start myqueue

Read more on Supervisord here for info on supervisorctl.

From: http://fideloper.com/ubuntu-beanstalkd-and-laravel4

How to Install Laravel with an Nginx Web Server on Ubuntu 14.04

Introduction

Laravel is a modern, open source PHP framework for web developers. It aims to provide an easy, elegant way for developers to get a fully functional web application running quickly.

In this guide, we will discuss how to install Laravel on Ubuntu 14.04. We will be using Nginx as our web server and will be working with the most recent version of Laravel at the time of this writing, version 4.2.

Install the Backend Components

The first thing that we need to do to get started with Laravel is install the stack that will support it. We can do this through Ubuntu’s default repositories.

First, we need to update our local package index to make sure we have a fresh list of the available packages. Then we can install the necessary components:

sudo apt-get update
sudo apt-get install nginx php5-fpm php5-cli php5-mcrypt git

This will install Nginx as our web server along with the PHP tools needed to actually run the Laravel code. We also install git because the composer tool, the dependency manager for PHP that we will use to install Laravel, will use it to pull down packages.

Modify the PHP Configuration

Now that we have our components installed, we can start to configure them. We will start with PHP, which is fairly straight forward.

The first thing that we need to do is open the main PHP configuration file for the PHP-fpm processor that Nginx uses. Open this with sudo privileges in your text editor:

sudo nano /etc/php5/fpm/php.ini

We only need to modify one value in this file. Search for the cgi.fix_pathinfo parameter. This will be commented out and set to “1”. We need to uncomment this and set it to “0”:

cgi.fix_pathinfo=0

This tells PHP not to try to execute a similar named script if the requested file name cannot be found. This is very important because allowing this type of behavior could allow an attacker to craft a specially designed request to try to trick PHP into executing code that it should not.

When you are finished, save and close the file.

The last piece of PHP administration that we need to do is explicitly enable the MCrypt extension, which Laravel depends on. We can do this by using the php5enmod command, which lets us easily enable optional modules:

sudo php5enmod mcrypt

Now, we can restart the php5-fpm service in order to implement the changes that we’ve made:

sudo service php5-fpm restart

Our PHP is now completely configured and we can move on.
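You can confirm the MCrypt module is actually loaded by listing PHP's modules (guarded here in case php isn't on the PATH of the machine you run this on):

```shell
if command -v php >/dev/null 2>&1; then
  # Look for mcrypt in the list of loaded modules
  mods=$(php -m | grep -i mcrypt || echo "mcrypt not loaded")
else
  mods="php not installed"
fi
echo "$mods"
```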

Configure Nginx and the Web Root

The next item that we should address is the web server. This will actually involve two distinct steps.

The first step is configuring the document root and directory structure that we will use to hold the Laravel files. We are going to place our files in a directory called /var/www/laravel.

At this time, only the top-level of this path (/var) is created. We can create the entire path in one step by passing the -p flag to our mkdir command. This instructs the utility to create any necessary parent path elements needed to construct a given path:

sudo mkdir -p /var/www/laravel

Now that we have a location set aside for the Laravel components, we can move on to editing the Nginx server blocks.

Open the default server block configuration file with sudo privileges:

sudo nano /etc/nginx/sites-available/default

Upon installation, this file will have quite a few explanatory comments, but the basic structure will look like this:

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        root /usr/share/nginx/html;
        index index.html index.htm;

        server_name localhost;

        location / {
                try_files $uri $uri/ =404;
        }
}

This provides a good basis for the changes that we will be making.

The first thing we need to change is the location of the document root. Laravel will be installed in the/var/www/laravel directory that we created.

However, the base files that are used to drive the app are kept in a subdirectory within this called public. This is where we will set our document root. In addition, we will tell Nginx to serve any index.php files before looking for their HTML counterparts when requesting a directory location:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/laravel/public;
    index index.php index.html index.htm;

    server_name localhost;

    location / {
            try_files $uri $uri/ =404;
    }
}

Next, we should set the server_name directive to reference the actual domain name of our server. If you do not have a domain name, feel free to use your server’s IP address.

We also need to modify the way that Nginx will handle requests. This is done through the try_files directive. We want it to try to serve the request as a file first. If it cannot find a file of the correct name, it should attempt to serve the default index file for a directory that matches the request. Failing this, it should pass the request to the index.php file as a query parameter.

The changes described above can be implemented like this:

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        root /var/www/laravel/public;
        index index.php index.html index.htm;

        server_name server_domain_or_IP;

        location / {
                try_files $uri $uri/ /index.php?$query_string;
        }
}

Finally, we need to create a block that handles the actual execution of any PHP files. This will apply to any files that end in .php. It will try the file itself and then try to pass it as a parameter to the index.php file.

We will set the fastcgi_* directives so that the paths of requests are correctly split for execution, and make sure that Nginx uses the socket that php5-fpm is using for communication and that the index.php file is used as the index for these operations.

We will then set the SCRIPT_FILENAME parameter so that PHP can locate the requested files correctly. When we are finished, the completed file should look like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/laravel/public;
    index index.php index.html index.htm;

    server_name server_domain_or_IP;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Save and close the file when you are finished.

Because we modified the default server block file, which is already enabled, we simply need to restart Nginx for our configuration changes to be picked up:

sudo service nginx restart

Create Swap File (Optional)

Before we go about installing Composer and Laravel, it might be a good idea to enable some swap on your server so that the build completes successfully. This is generally only necessary if you are operating on a server without much memory (like a 512MB Droplet).

Swap space will allow the operating system to temporarily move data from memory onto the disk when the amount of information in memory exceeds the physical memory space available. This will prevent your applications or system from crashing with an out of memory (OOM) exception when doing memory intensive tasks.

We can very easily set up some swap space to let our operating system shuffle some of this data to the disk when necessary. As mentioned above, this is probably only necessary if you have less than 1GB of RAM available.

First, we can create an empty 1GB file by typing:

sudo fallocate -l 1G /swapfile

We can format it as swap space by typing:

sudo mkswap /swapfile

Finally, we can enable this space so that the kernel begins to use it by typing:

sudo swapon /swapfile

The system will only use this swap space until the next reboot, but the only time the server is likely to exceed its available memory is during the build process, so this shouldn’t be a problem.
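If you do want the swap file to persist across reboots, the conventional approach is to add a line for it to /etc/fstab. This is optional and shown only as a sketch; the /swapfile path matches the file created above:

```
# /etc/fstab — enable the swap file at every boot
/swapfile   none   swap   sw   0   0
```

With this entry in place, the kernel will activate the swap file automatically at startup, so running swapon manually is no longer necessary.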

Install Composer and Laravel

Now, we are finally ready to install Composer and Laravel. We will set up Composer first. We will then use this tool to handle the Laravel installation.

Move to a directory where you have write access (like your home directory) and then download and run the installer script from the Composer project:

cd ~
curl -sS https://getcomposer.org/installer | php

This will create a file called composer.phar in your home directory. This is a PHP archive, and it can be run from the command line.

We want to install it in a globally accessible location, though. We also want to rename it to composer (without the file extension). We can do this in one step by typing:

sudo mv composer.phar /usr/local/bin/composer

Now that you have Composer installed, we can use it to install Laravel.

Remember, we want to install Laravel into the /var/www/laravel directory. To install the latest version of Laravel, you can type:

sudo composer create-project laravel/laravel /var/www/laravel

At the time of this writing, the latest version is 4.2. In the event that future changes to the project prevent this installation procedure from correctly completing, you can force the version we’re using in this guide by instead typing:

sudo composer create-project laravel/laravel /var/www/laravel 4.2

Now, the files are all installed within our /var/www/laravel directory, but they are entirely owned by our root account. The web user needs partial ownership and permissions in order to correctly serve the content.

We can give group ownership of our Laravel directory structure to the web group by typing:

sudo chown -R :www-data /var/www/laravel

Next, we can change the permissions of the /var/www/laravel/app/storage directory to allow the web group write permissions. This is necessary for the application to function correctly:

sudo chmod -R 775 /var/www/laravel/app/storage

If you need to upload large files, some additional configuration is required in both PHP and Nginx.

First, edit the PHP configuration used by PHP-FPM:

sudo nano /etc/php5/fpm/php.ini

Set the following values:

upload_max_filesize = 100M
post_max_size = 100M

Next, edit the main Nginx configuration file:

sudo nano /etc/nginx/nginx.conf

Inside the http block, set the maximum allowed request body size:

http {
    # ...
    client_max_body_size 100M;
}

Reload PHP-FPM & Nginx

sudo service php5-fpm reload
sudo service nginx reload

You now have Laravel completely installed and ready to go. You can see the default landing page by visiting your server’s domain or IP address in your web browser:

http://server_domain_or_IP

Laravel default landing page

You now have everything you need to start building applications with the Laravel framework.

Conclusion

You should now have Laravel up and running on your server. Laravel is quite a flexible framework and it includes many tools that can help you build out an application in a structured way.

To learn how to use Laravel to build an application, check out the Laravel documentation.

From: https://www.digitalocean.com/community/tutorials/how-to-install-laravel-with-an-nginx-web-server-on-ubuntu-14-04

Emoji and MySQL

I had a hard time saving emoji characters to a MySQL column of type text.

Here are the steps I did to make it work:

1. Open app/config/database.php and change the charset from utf8 to utf8mb4, and the collation from utf8_unicode_ci to utf8mb4_general_ci.

2. In phpMyAdmin, change the column’s collation to utf8mb4_general_ci as well.
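The Laravel side of the change can be sketched like this — a fragment of app/config/database.php for Laravel 4, where the host, database name, and credentials are placeholders for your own settings:

```php
// app/config/database.php — 'mysql' connection (fragment)
'mysql' => array(
    'driver'    => 'mysql',
    'host'      => 'localhost',
    'database'  => 'your_database',
    'username'  => 'your_user',
    'password'  => 'your_password',
    'charset'   => 'utf8mb4',            // was: utf8
    'collation' => 'utf8mb4_general_ci', // was: utf8_unicode_ci
    'prefix'    => '',
),
```

If you prefer SQL over phpMyAdmin, ALTER TABLE your_table CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci; performs the same conversion (the table name here is hypothetical).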

It Works :)

Laravel 4 Validation in Codeigniter

I decided to convert my CodeIgniter application to Laravel 4, but since the code base of my application is pretty stable, I decided to use some of Laravel 4’s components inside CodeIgniter until my web application is fully Laravel-based.

The first thing I did was to use Laravel’s Eloquent ORM.

The next thing I would like to do is show how to use Laravel 4’s Validation inside a CodeIgniter app.

So let’s get started.
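As a preview, using the Illuminate Validation component on its own looks roughly like this. This is a sketch under the assumption that illuminate/validation and its Symfony Translation dependency have been installed via Composer; constructor signatures may differ slightly between Laravel 4 point releases:

```php
<?php
// Hypothetical sketch: Laravel 4 validation outside of Laravel
require 'vendor/autoload.php';

use Symfony\Component\Translation\Translator;
use Illuminate\Validation\Factory;

// The Factory needs a translator; a bare one works, but without
// Laravel's language files the error messages are raw translation keys
$factory = new Factory(new Translator('en'));

$validator = $factory->make(
    array('email' => 'not-an-email', 'name' => ''),
    array('email' => 'required|email', 'name' => 'required')
);

if ($validator->fails()) {
    foreach ($validator->messages()->all() as $message) {
        echo $message, PHP_EOL;
    }
}
```

To get Laravel’s human-readable messages, you would also load its validation language lines into the translator, or pass custom messages as the third argument to make().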


Using Eloquent ORM inside Codeigniter

Last time, I wrote an article on how you can use Eloquent ORM outside of Laravel 4. This time, we are going to integrate Eloquent ORM into our CodeIgniter app. As most of us know, Laravel uses Composer for its packages, including its database component (Illuminate Database). In my application I used CodeIgniter’s Active Record and Datamapper ORM. Now that I create applications using Laravel, I want to use Laravel’s Eloquent ORM in my existing CodeIgniter application.

Install Composer

The first thing we need to do is install Composer. If you are new to Composer, you may want to read an introductory article about it first.

Create the composer.json File

Create a file named composer.json in the root folder of your CodeIgniter application. Since Eloquent ORM only runs on PHP 5.3 or later, I assume you are using that PHP version. Notice that I add an autoload key with a classmap; change the path to wherever your Eloquent models reside. Don’t forget to run $ composer dump-autoload after you create a new Eloquent model.
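The composer.json file described above might look like this — the application/models path and the 4.2.* version constraint are assumptions, so adjust them to match your own project layout and Laravel version:

```json
{
    "require": {
        "illuminate/database": "4.2.*"
    },
    "autoload": {
        "classmap": [
            "application/models"
        ]
    }
}
```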

Install Illuminate Database

Open your terminal (command prompt in Windows), navigate to your CodeIgniter root folder, and enter the following command:

composer install

This will create a “vendor” folder in your app’s root directory.

Autoload

Add require 'vendor/autoload.php'; to your index.php file, just before the CodeIgniter bootstrap is loaded.
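In CodeIgniter’s front controller (index.php), the change looks roughly like this — the surrounding bootstrap line is from a stock index.php and may differ slightly in your version:

```php
// index.php (fragment) — load Composer's autoloader first,
// so Illuminate classes are available everywhere in the app
require 'vendor/autoload.php';

// ...existing CodeIgniter bootstrap line below...
require_once BASEPATH.'core/CodeIgniter.php';
```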

Create models and connection files

Create two files and save them in your models directory.
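The two files might look like this — a sketch assuming the Capsule manager shipped with illuminate/database; the filenames, credentials, and the User model are placeholders for your own:

```php
<?php
// application/models/connection.php — boot Eloquent outside Laravel
use Illuminate\Database\Capsule\Manager as Capsule;

$capsule = new Capsule;
$capsule->addConnection(array(
    'driver'    => 'mysql',
    'host'      => 'localhost',
    'database'  => 'your_database',
    'username'  => 'your_user',
    'password'  => 'your_password',
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => '',
));

// Make this Capsule instance available globally and boot Eloquent
$capsule->setAsGlobal();
$capsule->bootEloquent();
```

```php
<?php
// application/models/user.php — a minimal Eloquent model
class User extends Illuminate\Database\Eloquent\Model {

    protected $table = 'users';
}
```

Once the connection file has been loaded, calls like User::all() or User::find(1) work just as they would inside a Laravel application.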

Test it