
Simple AWS Lambda Java Project – NetBeans and Gradle


I have been working on distributed harvesting projects for the last 5 years. Generally the projects used a raft of AWS EC2 instances and Gearman to manage the workflow. Having recently heard of some similar solutions using AWS Lambda functions, I wanted to explore this as a possible new architecture for these projects. So.. I dug in.


The learning curve for me has been a little steep. There is a limited number of languages supported (AWS doc linked here), but the only two that came close to meeting my needs were NodeJS and Java.

Most of my project code is written in CasperJS, a scripting framework that runs on PhantomJS, so I thought that perhaps NodeJS was the path of least resistance.. but it was not clear to me how to ‘install’ the PhantomJS and CasperJS components into a Lambda Function, so I switched to working with Java.

Designing the Test

This is a VERY simple test blob, but it took me about a day to get to this point, since it was not totally clear to me how the AWS API Gateway might work and how the data is packaged/posted to the Lambda Function. After a lot of poking around and uploading my first hunk of (totally broken) code, I was presented with a test CLI. The test CLI has only a few options for packaging up data payloads (at least for test): a singular scalar string, and a JSON string (which turns into an Object.. the source of the frustration I faced figuring this out).

Boiling it down, this is what the test harness input looks like, when you use the “Hello World” test case, after changing the properties to match my desired inputs:
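Something along these lines, with placeholder values standing in for my real inputs (these match the region and query properties my handler reads below):

{
  "region": "us-east-1",
  "query": "some search term"
}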

This will work fine for my needs; I prefer JSON payloads over a fixed (brittle) set of parameters.. this lets me muck around with the internals and add more capability without having to worry about external inputs; I'm just that way.

Having decided that JSON was the payload to accept.. next I needed to write a chunk of code to accept it.

Writing the Test Code

Knowing that I would be using a Json string as my input, I’d first coded my function like this:

public String searchAuto(String json) {
    return String.format("%s", json);
}

What I quickly discovered is that the string of JSON in the test payload (as shown above) is deserialized as a JSON object, and not a string. This required me to step up my simple test to actually deal with the data as JSON. To do this, I selected json-simple's JsonObject. This is where the "fun" began. Hopefully this helps you avoid spending the time I did figuring out how to get the code to compile cleanly locally.. AND to run when uploaded to AWS.

NOTE: I'm not going to say this is the right way to do this.. it's just the way it worked for me. It's kludgy and I don't really like having a two-step process, but it works to get my version of HELLO WORLD running.

Long story short, I needed to get copies of these three .jar files added to my project’s build path, to get things going. Where you place them in your project might vary, but in a basic NetBeans project, these are in the lib/ directory off the project root:

ls -l lib/*.jar

7,267 lib/aws-lambda-java-core-1.1.0.jar
75,355 lib/aws-lambda-java-events-2.0.jar
57,607 lib/json-simple-2.1.2.jar

Locating the jars was not the simplest task either; Amazon's horrible documentation sent me fishing around on various GitHub repos, but in the end.. I found them with a good ol' search. I've linked the libs in the above block to where I sourced mine.

AWS Lambda requires use of specific handler hooks, when initiating the function. I’d like to wax poetic about what it all means but at this time, I just need a proof of concept, so using a code example.. this is the code I compiled:

package lambdaTest;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import org.json.simple.JsonObject;

/**
 *
 * @author david
 */

public class LambdaTest {

    public String myHandler(int myCount, Context context) {
        LambdaLogger logger = context.getLogger();
        logger.log("received : " + myCount);
        return String.format("Received value of %d", myCount);
        //String.valueOf(myCount);
    }

    /* search options */
    public String searchAuto(JsonObject json) {
        return String.format("Search Region [%s] for '%s'", json.getString("region"), json.getString("query"));
    }
}
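One detail worth noting for later: the Lambda console wants a Java handler specified as package.Class::method, so for the class above the handler entry would presumably be:

lambdaTest.LambdaTest::searchAuto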

Beyond the Build – Packaging your Java

Again, I think the AWS documentation is pretty crappy, and this extends to every AWS service I've had to use. Once you figure out what it really means, they work very well.. but decoding Seattle AWS-speak (for me) is often not easy. You might be smarter than me and able to see the path right off the bat.. well.. good for you. For me and the rest of the inelastic brain crowd.. I'll try to give you the Cliff Notes version…

When I simply wanted to test my JAR, and I didn't need the JSON library, it was a simple matter of uploading the Jar file in the Lambda dialog; it looks like this:

However, when uploading more than one jar, things start to get a little wonky. The instructions say this:

Example Gradle Config
The page also provides a rather extensive rundown on using Gradle to run your build and make the package. Since creating my own .ZIP with the files in the 'root' as designated didn't work, I went Gradle. If you decide to go Gradle, here is the build file I ended up using. When I made the mistake of including the Maven paths for AWS in the Gradle build, I ended up with ALL of the AWS .jar files being downloaded and zipped into the library, creating a 7.8 MB upload!! For a Hello World program.. INSANE! Anyhow.. this is my Gradle config that changes the default Gradle configs (the build.gradle text file) to work with NetBeans paths. You might need to tweak this further to your own needs:

apply plugin: 'java'

//repositories {
//    mavenCentral()
//}

sourceSets {
    main {
        java {
            srcDir 'src'
        }
    }
    test {
        java {
            srcDir 'test'
        }
    }
}

dependencies {
    compile (
        fileTree(dir: 'lib', include: '*.jar')
    )
}

task buildZip(type: Zip) {
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtime
    }
}

build.dependsOn buildZip

Using Gradle to run the build looks like this: run the command gradle build and you are on your way… as long as your code has a clean compile:

gradle build

BUILD SUCCESSFUL in 1s
3 actionable tasks: 3 executed

This allowed me to build a ZIP file that uploaded and worked. What I found out was that the structure of the file is NOT really the way it was described in the docs. The contents of the zip file look like this for my project; note that when uploading this way, you are *not* packaging the .jar file for your class, but the compiled class file. Also note that it is *not* in the root of the path. With this knowledge, I can probably dispense with dual-building with Gradle and just package up the contents this way myself (see the sketch after the listing). Hack the process!

Lambda/lambdaTest:
1612 Sep 29 15:18 LambdaTest.class

Lambda/lib:
7267 Sep 26 10:40 aws-lambda-java-core-1.1.0.jar
75355 Sep 26 10:13 aws-lambda-java-events-2.0.jar
57607 Sep 26 18:30 json-simple-2.1.2.jar
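With that in mind, a hand-rolled ZIP with the same layout ought to upload just as well. A rough sketch, assuming your compiled classes land somewhere like build/classes (your NetBeans paths may differ):

# stage the layout Lambda expects: class packages at the root, jars under lib/
mkdir -p stage/lib
cp -r build/classes/lambdaTest stage/
cp lib/*.jar stage/lib/
(cd stage && zip -r ../lambda-upload.zip .)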

Now, you should be able to upload your zip file. Note that for a file of 10MB or more, they'd like you to spend extra money storing it on S3… (niiiiice).

Testing your Function

Once uploaded, you can configure your test…

… and run it!

Conclusion

After a day of puttering around and getting frustrated with AWS docs that always leave (for me) gaping knowledge gaps.. I have a proof-of-concept test coded, compiled, uploaded and run. Now.. what needs to be sorted out is how to set up the AWS API Gateway to trigger this function… and look at how much each of those iterations will cost. With a planned execution batch that will range from 40-60 events per query.. this could add up fast.

So….. I’m not yet convinced this will function economically for my latest project; but I’m going to be testing out the concept pretty thoroughly over the coming week.


Installing CasperJS 1.1.4 on AWS (CentOS)

Installing CasperJS to work with PhantomJS's latest version, 2.1.1

This is the current status of my test installation. My previously hacked version of CasperJS is still in place (instructions are here: Helping CasperJS 1.1.0-beta3 play nice with PhantomJS 2.0.0), but it's time to rev up to the non-beta version of the code.

casperjs
CasperJS version 1.1.0-beta3 at /usr/lib/node_modules/casperjs, using phantomjs version 2.1.1


Step 1 — Clone CasperJS from Git

Hopefully you already have Git installed, and you are ready to clone:

git clone git://github.com/n1k0/casperjs.git
Cloning into ‘casperjs’…
remote: Counting objects: 14392, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 14392 (delta 0), reused 0 (delta 0), pack-reused 14385
Receiving objects: 100% (14392/14392), 8.50 MiB | 0 bytes/s, done.
Resolving deltas: 100% (8648/8648), done.
Checking connectivity… done.

Step 2 — Perform Installation

Using hints from the instructions in the CasperJS 1.1.0-DEV documentation, I first located my current casperjs binary, moved it aside, then linked the new one into its location.

whereis casperjs
casperjs: /usr/bin/casperjs /usr/local/bin/casperjs /opt/n1k0-casperjs-e3a77d0/bin/casperjs /opt/casperjs/bin/casperjs /opt/casperjs/bin/casperjs.exe

mv /usr/bin/casperjs /usr/bin/casperjs.1.1.0-beta3

ln -sf `pwd`/bin/casperjs /usr/bin/casperjs

Step 3 — Verify Casper

Running casper, I checked to ensure it’s on the latest version:

casperjs
CasperJS version 1.1.0-beta5 at /opt/casperjs, using phantomjs version 2.1.1

This looks like it’s good to go.. now CASPER AWAY!!

Installing PhantomJS 2.1.1 on AWS (CentOS)

It's a gamble to do this, and according to the build script it's going to take a long time to complete the compile / install of Phantom 2.1.1.

Note: If you are looking for instructions on building for Ubuntu, the steps are different. I’ve documented that process in this post: Installing PhantomJS 2.1.1 on Ubuntu.

Step 1 — install required dependencies

You may or may not have most of these on your AWS / CentOS system. I found that most of these were required to start the PhantomJS build. Here are the ones that I've confirmed I needed:

  • autoconf
  • pkgconfig.x86_64
  • python26-pyudev.noarch
  • python26-twisted.noarch
  • sip.x86_64
  • python27-pyudev.noarch
  • python27-twisted.noarch
  • gcc
  • flex
  • bison
  • xorg-x11-server-Xorg.x86_64
  • xorg-x11-server-devel.x86_64
  • xorg-x11-utils.x86_64
  • xorg-x11-proto-devel.noarch
  • sqlite-tcl.x86_64
  • sqlite-devel.x86_64
  • openssl.x86_64
  • crypto-utils.x86_64
  • openssl-devel.x86_64
  • libfontenc.x86_64
  • libfontenc-devel.x86_64
  • fontconfig.x86_64
  • fontconfig-devel.x86_64
  • libicu-devel.x86_64
  • freetype-devel.x86_64
  • libpng-devel.x86_64
  • libjpeg-turbo-devel.x86_64
  • libXext-devel.x86_64
  • libxcb-devel.x86_64
  • xcb-util.x86_64

Installing the packages went smoothly:

sudo yum install autoconf pkgconfig.x86_64 python26-pyudev.noarch python26-twisted.noarch sip.x86_64 python27-pyudev.noarch python27-twisted.noarch gcc flex bison xorg-x11-server-Xorg.x86_64 xorg-x11-server-devel.x86_64 xorg-x11-utils.x86_64 xorg-x11-proto-devel.noarch sqlite-tcl.x86_64 sqlite-devel.x86_64 openssl.x86_64 crypto-utils.x86_64 openssl-devel.x86_64 libfontenc.x86_64 libfontenc-devel.x86_64 fontconfig.x86_64 fontconfig-devel.x86_64 libicu-devel.x86_64 freetype-devel.x86_64 libpng-devel.x86_64 libjpeg-turbo-devel.x86_64 libXext-devel.x86_64 libxcb-devel.x86_64 xcb-util.x86_64

Step 2 — clone the Git repo to local drive:

git clone git://github.com/ariya/phantomjs.git
Cloning into ‘phantomjs’…
remote: Counting objects: 63695, done.
remote: Compressing objects: 100% (37/37), done.
remote: Total 63695 (delta 16), reused 0 (delta 0), pack-reused 63657
Receiving objects: 100% (63695/63695), 129.05 MiB | 4.08 MiB/s, done.
Resolving deltas: 100% (31013/31013), done.
Checking connectivity… done.

cd phantomjs

git checkout 2.1.1
Note: checking out ‘2.1.1’.
[…]
HEAD is now at d9cda3d… Set version to “2.1.1”

git submodule init
Submodule ‘3rdparty-win’ (https://github.com/Vitallium/phantomjs-3rdparty-win.git) registered for path ‘src/qt/3rdparty’
Submodule ‘qtbase’ (https://github.com/Vitallium/qtbase.git) registered for path ‘src/qt/qtbase’
Submodule ‘qtwebkit’ (https://github.com/Vitallium/qtwebkit.git) registered for path ‘src/qt/qtwebkit’

git submodule update
Cloning into ‘src/qt/3rdparty’…
Cloning into ‘src/qt/qtbase’…
Cloning into ‘src/qt/qtwebkit’…

Step 3 — Hack the QT build

It seemed that I needed to set some different flags for the qtbase build. It was not clear to me if this could be done with the build.py options, so I hacked the qt/qtbase/configure script.

vi src/qt/qtbase/configure

First off, I changed the settings of two values near the top of the config file, then commented out part of the section around Werror, so that the build would not treat warnings as errors. The C++ macro options in the code generate A LOT of warnings, most of them from the flags defined in build.py. I tried the route of disabling those flags and ended up with more errors and more issues.. so changing the flags in the config was my next option:

[…]
#CFG_WERROR=auto
CFG_WERROR=no
[…]
#CFG_DEV=no
CFG_DEV=yes
[…]
    warnings-are-errors|Werror)
#        if [ "$VAL" = "yes" ] || [ "$VAL" = "no" ]; then
#            CFG_WERROR="$VAL"
#        else
            UNKNOWN_OPT=yes
#        fi
        ;;
[…]

Step 4 — Build!

python build.py
—————————————-
WARNING
—————————————-

Building PhantomJS from source takes a very long time, anywhere from 30 minutes
to several hours (depending on the machine configuration). It is recommended to
use the premade binary packages on supported operating systems.

For details, please go the the web site: http://phantomjs.org/download.html.

Do you want to continue (Y/n)? Y

Step 5 — check the binary

Once the build has completed, you will find the newly built binary in the local directory bin/:

ls -l bin/phantomjs
-rwxr-xr-x 1 root root 56736434 Feb 5 11:33 bin/phantomjs

To complete the installation, you'll need to replace the current phantomjs binary with the new one. To find the location of your current binary (if you have one), this should work:

whereis phantomjs
phantomjs: /usr/bin/phantomjs

Copy the new binary to that location and verify version:

cp bin/phantomjs /usr/bin/phantomjs
cp: overwrite ‘/usr/bin/phantomjs’? y

phantomjs -v
2.1.1

YOU ARE DONE!! It was just that easy

AWS EC2 Sendmail Configuration (with .procmail)

Sendmail.. how you frustrate people; but the power makes it difficult to turn away. I'll now detail the steps required for me to set up a simple e-mail auto-processing service on an AWS EC2 instance without a dedicated hostname.

Setting up the Procmail Recipe

You can do a lot of things with procmail. The online resources are many, but for this example I'm going to keep things VERY simple.

First, create a procmail file in your local account. My AWS instances are CentOS based, and use the ‘ec2-user’ as the default account. I’m going to keep it simple here and stick with that paradigm.

-bash-4.1$ vi .procmailrc

I’m going to setup my .procmailrc file to look like this:

SHELL=/usr/bin/php
MAILDIR=$HOME/mail
LOGFILE=$HOME/logs/procmail.log

:0 # catch errors
* ^Subject: Returned mail:.*
logs/procmail.error.log

# — auto-catches
:0
|"$HOME/prod/MailIntake/process.mail.php" $1
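The process.mail.php script itself just has to read the full message from stdin. The real one does the actual intake work; this hypothetical, stripped-down sketch shows the shape of it:

#!/usr/bin/php
<?php
// procmail pipes the complete message (headers + body) to stdin
$message = file_get_contents('php://stdin');

// pull out the Subject: header and log it as proof of life
if (preg_match('/^Subject: (.*)$/mi', $message, $m)) {
    file_put_contents(
        getenv('HOME') . '/logs/intake.log',
        date('c') . ' received: ' . trim($m[1]) . "\n",
        FILE_APPEND
    );
}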

Now that I have a .procmailrc set up… time to get down to making sendmail work.

Sendmail — allowing server to accept messages

To configure the sendmail files, I assume super user powers. If you are unable to assume superuser powers or run sudo.. I doubt you’ll be able to complete these configurations. Hopefully your responsible IT person is going to handle all this for you instead.

Main Sendmail File — sendmail.mc

Setting up the main Sendmail file. This file is fairly large, so I’m only going to highlight the sections I felt needed to be updated.

vi sendmail.mc

To allow use of the AWS mail relay, defined in this section:

define(`SMART_HOST', `email-smtp.us-east-1.amazonaws.com')dnl

Setup local hostname / domain identity

dnl # Also accept email sent to "localhost.localdomain" as local email.
dnl #LOCAL_DOMAIN(`localhost.localdomain')dnl
LOCAL_DOMAIN(`ec2-52-6-000-000.compute-1.amazonaws.com')dnl

Setting up the masquerade

MASQUERADE_DOMAIN(`ec2-52-6-000-000.compute-1.amazonaws.com')dnl
MASQUERADE_AS(`ec2-52-6-000-000.compute-1.amazonaws.com')dnl

access db configuration

Editing the access file to set up the localhost relay, so messages can be sent from the various network interfaces on the machine. Obviously, one of those IP addresses has been obscured below. Where you see #private ip#, substitute your AWS private IP (such as 172.30.0.1):

## By default we allow relaying from localhost…
localhost RELAY
127.0.0.1 RELAY
#private ip# RELAY
email-smtp.us-east-1.amazonaws.com RELAY

## Allowed Connections
Connect:127.0.0.1 OK
Connect:#private ip# OK
Connect:email-smtp.us-east-1.amazonaws.com OK

Defining Local Hostnames — local-host-names

For my configuration, I wanted to make sure the system understood its local non-FQDN identity, so I edited the local-host-names file to include three different ways to reference the system: the AWS DNS name, the public IP and the private IP:

vi local-host-names

The contents of my file look like this (my real file contains the actual IPs and hostname):

# local-host-names – include all aliases for your machine here.
ec2-52-6-000-000.compute-1.amazonaws.com
52.6.000.000
172.30.000.000

The Mailer Table — mailertable

At this point I didn't see a need to implement the functions of the mailertable.

Setting up trusted user file — trusted-users

Modified the trusted users file to allow my primary user, root and one alias to send mail without warnings:

vi /etc/mail/trusted-users

File contents:

# trusted-users – users that can send mail as others without a warning
# apache, mailman, majordomo, uucp, are good candidates
ec2-user, apps, proxy, root

Aliases — virtusertable

Entering user aliases to capture mail sent to various users, and route them to the local ‘ec2-user’.

vi /etc/mail/virtusertable

Contents of my file with aliases:

# A domain-specific form of aliasing, allowing multiple virtual domains to be
# hosted on one machine.
#
ec2-user@ec2-52-6-000-000.compute-1.amazonaws.com ec2-user@localhost
stuff@ec2-52-6-000-000.compute-1.amazonaws.com ec2-user@localhost
things@ec2-52-6-000-000.compute-1.amazonaws.com ec2-user@localhost

Rebuild Settings

Run a make on the directory to rebuild the sendmail db files

make -C /etc/mail
make: Entering directory `/etc/mail'
make: Leaving directory `/etc/mail'

Restart Sendmail!

Restart sendmail… watch for majik!

/etc/init.d/sendmail restart
Shutting down sm-client: [ OK ]
Shutting down sendmail: [ OK ]
Starting sendmail: [ OK ]
Starting sm-client: [ OK ]

Upgrade PhantomJS 1.9 to 2.0 on AWS

It's a gamble to do this, and according to the build script it's going to take a long time to complete, but to try and solve some issues that PhantomJS has with CasperJS 1.1-beta3 (the latest version) I wanted to upgrade to Phantom 2.0.

A lot of things have changed, and it's been suggested that a number of features CasperJS wants to use are deprecated in the 2.0 version of Phantom. But forward I'll forge regardless.

Step 1 is to locate the source, download and unzip:

wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.0.0-source.zip
Length: 110092872 (105M) [application/zip]
Saving to: ‘phantomjs-2.0.0-source.zip’

unzip phantomjs-2.0.0-source.zip
Archive: phantomjs-2.0.0-source.zip
a2912c216d06df4d8b51f12ad4082a48c5fc7ba6
creating: phantomjs-2.0.0/
inflating: phantomjs-2.0.0/.gitignore
[…]
inflating: phantomjs-2.0.0/tools/preconfig.sh
inflating: phantomjs-2.0.0/tools/qscriptengine.h
inflating: phantomjs-2.0.0/tools/src.pro

Step 2 – install required dependencies

You may or may not have most of these from your previous PhantomJS 1.9.x install, but I found that most of these were required to start the PhantomJS build. Here are the ones that I've confirmed I needed:

  • gcc
  • gcc-c++
  • make
  • flex
  • ruby
  • openssl-devel
  • fontconfig-devel
  • sqlite-devel
  • libicu-devel
  • libpng-devel
  • libjpeg-devel
  • freetype-devel
  • bison
  • gperf

Installing the packages went smoothly:

sudo yum -y install gcc gcc-c++ make flex bison gperf ruby openssl-devel freetype-devel fontconfig-devel libicu-devel sqlite-devel libpng-devel libjpeg-devel

Following this I grabbed the source code to install freetype2. Although freetype successfully installed, the required header files were not found. I decided it was best to grab it and build from source:

wget http://download.savannah.gnu.org/releases/freetype/freetype-2.6.tar.gz
gunzip freetype-2.6.tar.gz
tar xvf freetype-2.6.tar
./configure
[…]
configure: creating ./config.status
config.status: creating unix-cc.mk
config.status: creating unix-def.mk
config.status: creating ftconfig.h
config.status: executing libtool commands
configure:
make && make install
[…]
/usr/bin/install -c -m 644 ./builds/unix/ftconfig.h \
/usr/local/include/freetype2/config/ftconfig.h
/usr/bin/install -c -m 644 /usr/local/freetype-2.6/objs/ftmodule.h \
/usr/local/include/freetype2/config/ftmodule.h
/usr/bin/install -c -m 755 ./builds/unix/freetype-config \
/usr/local/bin/freetype-config
/usr/bin/install -c -m 644 ./builds/unix/freetype2.m4 \
/usr/local/share/aclocal/freetype2.m4
/usr/bin/install -c -m 644 ./builds/unix/freetype2.pc \
/usr/local/lib/pkgconfig/freetype2.pc
/usr/bin/install -c -m 644 /usr/local/freetype-2.6/docs/freetype-config.1 \
/usr/local/share/man/man1/freetype-config.1

Now, following that build, due to some inexplicable continuous oversight on the part of freetype's maintainers.. OR.. phantom.. a link has to be made so that the build process can find the actual libraries required:

ln -s /usr/include/freetype2/freetype /usr/include/freetype

Step 3 – build

Now run the build.sh script. NOTE: if you are executing the compile on a VM (or in this case AWS), it's recommended that the build process NOT try to run parallel build jobs on the virtual cores. The PhantomJS website was not clear (to me) on why.. but it did recommend using the --jobs 1 flag on the build.. which I am doing. You may omit that if you'd like to experiment.

cd phantomjs-2.0.0

./build.sh --jobs 1
—————————————-
WARNING
—————————————-

Building PhantomJS from source takes a very long time, anywhere from 30
minutes to several hours (depending on the machine configuration).
We recommend you use the premade binary packages on supported operating
systems.

For details, please go the the web site: http://phantomjs.org/download.html.

Do you want to continue (y/n)?
y
[…]

NOTE: If you want to suppress the warning regarding the perils of the long compile, you can use the --confirm flag to bypass the question. This is really helpful if you want to background the process and write it to a log. Where I find this most beneficial is when I want to/need to close the terminal window before the compile completes.

Here is an optional method of running that will background the process, auto-reply to the warning and write to a log file:

nohup ./build.sh --confirm --jobs 1 > build.log &

You might carp about not being able to monitor progress now! Well sure you can.. just do a following tail on the file. The exact command varies with system; I'll provide the one for typical LINUX and for typical OSX:

For typical LINUX:
tailf build.log

For typical OSX:
tail -f build.log

Step 4 – check the binary

Once the build has completed, you will find the newly built binary in the local directory bin/:

ls -l bin/phantomjs
-rwxr-xr-x 1 root root 56587060 Sep 30 17:16 bin/phantomjs

To complete the installation, you'll need to replace the current phantomjs binary with the new one. To find the location of your current binary (if you have one), this should work:

whereis phantomjs
phantomjs: /usr/bin/phantomjs

Copy the new binary to that location and verify version:

cp bin/phantomjs /usr/bin/phantomjs
cp: overwrite ‘/usr/bin/phantomjs’? y

phantomjs -v
2.0.0

YOU ARE DONE!! It was just that easy

Install Redis on AWS EC2

Redis is fairly simple to install and get running. I found the best way to do this on CentOS-based AWS EC2 nodes is to use the following steps.

Install Pre-Requisites

Redis will require several prerequisites. Your system may vary, but these are the cases I ran into when running the build in August 2015 with the latest AWS system updates. Some of these are required to run the tests, others are required for Redis itself.

TCL 8.5 or higher for Test

You need tcl 8.5 or newer in order to run the Redis test

yum install tcl

Download latest Redis package

Assume super user, move to a safe directory (I like /usr/local) and download the latest build:

sudo su -
cd /usr/local
wget http://download.redis.io/redis-stable.tar.gz

Extract Files

Once the main tarball has been downloaded, extract the files and start the configuration process.

tar xvzf redis-stable.tar.gz
cd redis-stable

Build the Binary

Build the binary. Redis does not seem to require ./configure to be run; the necessary make files are already in place. Just run make and install!! If you decide to run 'make test' (which I suggest you do), it may take 10-15 min. to complete depending on the power of your AWS instance.

make
make test
make install

Set Overcommit to TRUE

Redis is going to complain unless you have some level of memory overcommit enabled. This is easy to do (again, you must be root or a sudoer to do this). Add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot, IF you can safely do so on your machine (best to check and make sure there are no live services or other personnel that would be interrupted).

vi /etc/sysctl.conf

Add this to the end of the file:

# Required by Redis to enable overcommit setting:
vm.overcommit_memory = 1
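If a reboot isn't in the cards right away, the same setting can be pushed into the running kernel immediately (the sysctl.conf entry above still makes it stick across reboots):

sysctl -w vm.overcommit_memory=1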

Reboot

init 6

Configure Redis

Create a working directory for the redis disk files. I like to use the following:

mkdir /var/redis
mkdir /var/redis/db

Copy the base configuration file to /etc/ and customize to your environment.

mkdir /etc/redis
cp redis.conf /etc/redis/6379.conf
vi /etc/redis/6379.conf

I made the following changes to the configuration file. I can’t guarantee all or any of these will be correct for your configuration:

daemonize yes

bind 127.0.0.1

tcp-keepalive 60

logfile “/var/log/redis-server.log”

dir /var/redis/db

Copy the startup file into /etc/init.d

cp utils/redis_init_script /etc/init.d/redis

Add the start command to root's crontab. Yeah, so this might be a cheater method instead of adding this to the system's rc.X files, but it's also easy to disable.

crontab -e

@reboot /etc/init.d/redis start

Start Redis Server

Starting the server from the command line is a good way to verify it's functional. It's easy to do: just type 'redis-server'. Hit CTRL-C to kill and exit once you've tested the launch. If it starts up, you should see something like this:

[root@ip-10-000-000-00 redis-stable]# redis-server

31408:C 04 Aug 21:55:00.578 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
31408:M 04 Aug 21:55:00.579 * Increased maximum number of open files to 10032 (it was originally set to 1024).

31408:M 04 Aug 21:55:00.581 # Server started, Redis version 3.0.3

31408:signal-handler (1438725473) Received SIGINT scheduling shutdown…
31408:M 04 Aug 21:57:53.628 # User requested shutdown…
31408:M 04 Aug 21:57:53.628 * Saving the final RDB snapshot before exiting.
31408:M 04 Aug 21:57:53.631 * DB saved on disk
31408:M 04 Aug 21:57:53.632 # Redis is now ready to exit, bye bye…

If that looks OK, then start it using the startup script. This should start redis as a daemon (service), depending on how you edited the configuration file. If you did it the way I did, then it will start as a daemon.

/etc/init.d/redis start

Starting Redis server…

Test to make sure it’s listening.. by using the ping command. If it’s alive and listening, you’ll receive back a ‘PONG’

redis-cli ping

PONG

FINAL STEPS — Reboot and Verify!

A good and proper final test, assuming you are able to reboot the system without causing trouble to any live services or other personnel… is.. REBOOT, then verify that it has restarted as expected.

init 6

Connection closed by remote host.

[ec2-user@ip-10-000-000-00 ~]$ redis-cli ping

PONG

CONGRATULATIONS!! You are now the proud owner/maintainer/RP of a Redis server!

NEXT…

Doing something productive with Redis… (to be continued)

Amazon AWS – spinning up a micro for Gearman Client

Walkthough for spinning up a micro EC2 for use as a GearMan worker node.

This is a preliminary effort, and the steps may very well change over the next few weeks as these are tested out. The documentation I’m presenting here is based on my previous work [HERE] Install Gearman + Gearman-PHP on AWS ec2.

Getting Started

This assumes you already have an AWS account set up. If you don't, you need to go do that now. You can get started [HERE].

Once logged in, go to your dashboard. From here you’ll be selecting the EC2 area.

Under resources you’ll find a button “Launch Instance”. This is the button you want to click. This is where the fun begins.

In this case I’m going to use the ‘Classic Wizard’. It’s really not that magical but Wizard is such a super-awesome-cool-name (ala Windoze ’95) you’ll see it used here. Anyhow, I’m going classic:

I want to keep things VERY simple here, so I'm going to use the default Amazon AWS distribution/AMI.

For this exercise, I’m using an ‘On Demand’ Micro instance:

No advanced features should be required, so I've left everything on this page set to its default settings:

Next, the storage requirements are defined. Since this is a worker that should be able to be spun up as needed, and shouldn't require much in the way of local storage, I'm going to opt out of defining an EBS volume and rely upon ephemeral storage.

At this point, I don’t see the need to define any keys for EC2 management, so I’m leaving this area blank:

Next, select the .ssh key file (it's stored in a .pem file) that you want to use to access this system. If you don't have a set of these keys set up already, you'll want to define them. I'm using one specific to this node class that was already defined.. you'll need to handle this step as your policy/needs dictate. It's not that complex, but a word to you: DOWNLOAD THAT KEY once you create it; there is no known way (according to all the places I've checked) to download it again. BE WARNED.

The next step is to select a security profile for your node. I've found that the default one is sufficient for these purposes. You may want to further restrict the number of ports open, as the default opens a few extra things you might not want. This is another area where your own needs and policy will need to be carefully considered. Otherwise, start with the default and iterate to the optimal configuration.

Check your settings on this review page, and if everything is to your preference, then you can spin it up!!

Once you click launch, it will take a little while for the instance to go live.

Once launched, you'll be able to see your new instance in the dashboard!
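As an aside: once you've settled on the configuration you want, the same launch can be scripted instead of clicked. A sketch using the AWS CLI, with placeholder AMI, key, and security group values:

aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t1.micro \
  --key-name my-gearman-key \
  --security-groups default \
  --count 1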

NOW, YOU CAN START TO INSTALL YOUR GEARMAN COMPONENTS

Install Gearman + Gearman-PHP on AWS ec2

The fun and games continue!! As with every Gearman implementation I've done, there are tricks for each environment. Here are the Cliff Notes (originally sourced from the [Planet MySQL page] with my own twist) for getting Gearman set up on an Amazon Web Services (AWS) EC2 (Elastic Compute Cloud) node running the default AWS distribution. As always, your experience may vary.

Install required libraries

First, get all the required libraries installed using yum:

[ec2-user@]$ sudo yum install -y gcc
[ec2-user@]$ sudo yum install -y gcc-c++
[ec2-user@]$ sudo yum install -y gperf
[ec2-user@]$ sudo yum install -y boost
[ec2-user@]$ sudo yum install -y boost-devel
[ec2-user@]$ sudo yum install -y memcached
[ec2-user@]$ sudo yum install -y libuuid
[ec2-user@]$ sudo yum install -y libuuid-devel
[ec2-user@]$ sudo yum install -y libevent-devel
[ec2-user@]$ sudo yum install -y php-devel
[ec2-user@]$ sudo yum install -y php-xml

Compile Gearmand from Source

A very straightforward config and build.

[ec2-user@]$ cd gearmand-1.1.9
[ec2-user@]$ sudo ./configure --with-boost=/usr/include --prefix=/usr
[ec2-user@]$ sudo make
[ec2-user@]$ sudo make install

Compile Gearman PHP Library from Source

A fairly simple build, but you must first run phpize.

[ec2-user@]$ cd gearman-1.1.1
[ec2-user@]$ sudo phpize
[ec2-user@]$ sudo ./configure --prefix=/usr
[ec2-user@]$ sudo make
[ec2-user@]$ sudo make install

Run ldconfig to Reload Dynamic Library Cache

If you don't run ldconfig, you're going to get errors like this once you enable the extension in the php.ini file (last step):


bad:
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/gearman.so' - libgearman.so.8: cannot open shared object file: No such file or directory in Unknown on line 0

[ec2-user@]$ sudo ldconfig

Edit the PHP ini file

This is the last step.

Finding the location of your php.ini file:
[ec2-user@]$ php -i | grep php.ini
Configuration File (php.ini) Path => /etc
Loaded Configuration File => /etc/php.ini

Edit your config file, adding these lines:

[ec2-user@]$ sudo vi /etc/php.ini
[Gearman]
; Add Gearman shared object to config
extension="gearman.so"
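Before calling it done, a quick smoke test proves the extension and gearmand are talking. A minimal sketch of a throwaway worker (assumes gearmand is running on 127.0.0.1:4730, the default; a client would then call $client->doNormal('reverse', 'hello') against the same server):

<?php
// worker.php -- register a trivial function and wait for jobs
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('reverse', function (GearmanJob $job) {
    return strrev($job->workload());
});
while ($worker->work());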

Now your install is complete!!