n3rds.com – Jobs for Programmers

Mar 08
2012

Two colleagues out of St. Louis, Josh Anyan and Chris DeGroat, recently created n3rds.com – a website for matching up tech-industry jobs with candidates. They gave me a heads up, and I was pretty impressed – as I generally am by these two. I thought I’d give them a shout-out on my site in hopes of sending them what little traffic I can muster. Here is a review:

n3rds.com aims to be low friction; my entire experience on the site took just a few minutes. Here are some highlights:

  1. The login system runs entirely through the LinkedIn API ( how suitable ). This makes it super easy to sign up, and it connects you with the most relevant professional social network out there – NOT your Facebook.
  2. You’re immediately asked a few questions about locale, skill sets, and the salary ranges that interest you.
  3. Next, you’re taken to a listing of matching jobs – where you can pick and choose what interests you and go from there.
  4. As new jobs that match your skills come into the system, you are notified. This is a nice way to keep a pulse on opportunities without throwing your resume on the market.

n3rds.com launched a few days ago and is currently in beta. If you’re a recruiter or employer, head on over and toss up a few job postings. It’s free! If you’re a techie, might as well sign up – it’s pretty frictionless, and you never know where you’ll find the next big thing.

For the record: Chris DeGroat and Josh Anyan are rockstar developers – they knocked this product out in just a couple of short weeks. Keep tabs on these guys.

Nice work gentlemen.

Apache Low Memory Settings + PHP + APC

Apr 03
2010

In addition to moving my servers to save costs, I ran into a two-part issue that I lumped together as: “I need to tune memory usage a bit”.

Part 1: Apache

Since I moved my Apache servers to lower-memory instances, I was running into swap usage that I could easily avoid, i.e.:

free -m
                      total       used       free     shared    buffers     cached
Mem:               268        245         22          0         71         53
-/+ buffers/cache:        120        147
Swap:               511         29        482

Part of the reasoning behind this is that, by default, Apache expects a bit more memory to be available than what I provided it in the move. The fix was to introduce a few settings that lower the number of child processes and limit concurrent connections to something more reasonable for the kind of traffic my site really gets – which is near nothing most days.

The settings I dropped into apache were:

httpd.conf:

    #Low Memory Settings
    StartServers 1
    MinSpareServers 2
    MaxSpareServers 4
    ServerLimit 6
    MaxClients 6
    MaxRequestsPerChild 3000
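To pick numbers like these for your own box, a rough rule of thumb is the RAM you can spare for Apache divided by the resident size of one child process. A quick sketch – the figures below are placeholders, not measurements from my server; you would measure a real child with something like `ps -o rss= -C apache2`:

```shell
# Placeholder figures: ~150 MB left over for Apache, ~25 MB resident per child.
avail_mb=150
child_mb=25
echo "suggested MaxClients: $((avail_mb / child_mb))"   # prints: suggested MaxClients: 6
```

With those (hypothetical) numbers, MaxClients of 6 is exactly where this config lands.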

I made the adjustments and cleared out the swap space with:

swapoff -a
swapon -a

Then restarted apache:

/etc/init.d/apache2 stop
/etc/init.d/apache2 start

And all was well in the world.

free -m
                       total       used       free     shared    buffers     cached
Mem:                268        207         60          0         31         79
-/+ buffers/cache:           97        170
Swap:                 511          0        511

Part 2-1: PHP

This one was a bit simpler: my blog site was running into max memory allocation limits. I had left the default php.ini in place during the upgrade, so I needed to do a once-over of the configs and change memory_limit from 16M to something more reasonable for my site.

php.ini

memory_limit = 64M      ; Maximum amount of memory a script may consume (default was 16M)
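For reference, php.ini’s shorthand suffixes are binary, so 64M is 64 × 1024 × 1024 bytes – the byte value PHP actually enforces:

```shell
# 64M in php.ini means 64 * 1024 * 1024 bytes
echo $((64 * 1024 * 1024))   # prints: 67108864
```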

Part 2-2: APC

Having Apache set for lower memory usage also left me room to bump my APC cache limit a bit higher – from 30 MB to 50 MB – to keep more pages in cache.

apc.ini

extension=apc.so
apc.enabled=1
apc.shm_size=50

One other obvious solution under consideration: switch to Rackspace to invert my memory/CPU requirement/cost ratios. Any other tips are welcome :-)

Mysql: Force Localhost to Use TCP, Not a Unix Socket File

Apr 03
2010

So, recently I decided I was paying too much for my server because I was not maximizing performance across the various daemons. I split my larger server into a handful of smaller servers so I could fine-tune each one for a dedicated purpose. All went well, but I spent a few evenings figuring out how I could port forward localhost:3306 to the now-remote database server. This should have been dirt simple with an iptables rule – but after digging in, I discovered MySQL treats localhost as “special” by sending connections through the unix socket file. That is absolutely faster, but it only works if the database daemon is on the same host as the connecting application.

After doing some research, I found it is possible to use tools like socat and autossh to wrap an ssh tunnel that forwards connections through the socket file to a remote IP over TCP. That, however, was more complex and one-off than I cared to explore for my simple problem. I finally resorted to using DNS and stopped using localhost as the host name. A few tidbits for the weary traveler:

  • The mysql client library is responsible for selecting the protocol.
  • PHP’s internal mysql libraries, unfortunately, as far as I could discover ( please correct me if I am wrong here ), do not allow you to select the protocol.
  • So if you’re using “localhost” as your host name in a PHP mysql_connect, you’re forced through the socket file; however, you can use 127.0.0.1 instead of localhost to force TCP.
  • The linux mysql-client package’s command line tool offers a --protocol=tcp flag if you want to force TCP. You can also set this as a default inside /etc/mysql/my.cnf under the [client] heading:

my.cnf:

[client]
port            = 3306
socket          = /var/run/mysqld/mysqld.sock
protocol        = TCP

Again, this appears to work fine if you’re not using PHP as your client.

I hope this lesson learned ( use DNS ) comes as a helping hand to others out there. If anybody has some other suggestions, please do leave a comment!
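As an aside: while debugging a forwarded 3306, it helps to confirm the port is even reachable over TCP. bash (specifically – this is not POSIX sh) has a built-in /dev/tcp redirection that makes a dependency-free check; the host and port below are just the defaults from this post:

```shell
# bash-only: attempt a TCP connect to 127.0.0.1:3306 and report the result
if (exec 3<>/dev/tcp/127.0.0.1/3306) 2>/dev/null; then
  echo "tcp open"
else
  echo "tcp closed"
fi
```

If this prints "tcp closed" while the mysql client connects fine via "localhost", you are almost certainly going through the socket file, not TCP.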

Simple Performance Testing with Apache Benchmark

Jan 03
2010


I’ve been knee-deep in performance and scalability for some time now, and have picked up many useful tools and techniques along the way. One of my favorite command line tools for seeing how well a single Apache server churns out pages in development comes stock on Ubuntu and Mac OS X: Apache Benchmark (ab).

A simple performance test against the homepage of one of my client sites using ab at the command line:

ab -t5 -c100 http://www.teamgzfs.com/

The results:

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking www.teamgzfs.com (be patient)
Finished 664 requests

Server Software:        Apache/2.2.8
Server Hostname:        www.teamgzfs.com
Server Port:            80

Document Path:          /
Document Length:        306 bytes

Concurrency Level:      100
Time taken for tests:   5.054 seconds
Complete requests:      664
Failed requests:        0
Write errors:           0
Total transferred:      465003 bytes
HTML transferred:       205326 bytes
Requests per second:    131.38 [#/sec] (mean)
Time per request:       761.143 [ms] (mean)
Time per request:       7.611 [ms] (mean, across all concurrent requests)
Transfer rate:          89.85 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       38   78  48.7     64     994
Processing:   114  540 226.2    493    1690
Waiting:      114  531 205.6    493    1485
Total:        191  618 236.0    558    1808

Percentage of the requests served within a certain time (ms)
50%    558
66%    587
75%    598
80%    617
90%    845
95%   1163
98%   1402
99%   1746
100%   1808 (longest request)
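A quick sanity check on the summary line: requests per second is simply completed requests over elapsed time, which you can verify from the numbers above:

```shell
# 664 requests completed in 5.054 seconds, matching the reported 131.38 [#/sec]
awk 'BEGIN { printf "%.2f\n", 664 / 5.054 }'   # prints: 131.38
```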

There is quite a bit of useful information here that can help you tune your code and server. It’s important to note, however, that when working on a larger site that expects quite a bit more traffic, you will want to investigate more thorough solutions than a single machine running ab. It is, nonetheless, a nice entry point into useful information.

One rather funny pitfall: if the host you are sending requests to is smartly secured, these types of tests become a bit useless, as security settings may limit or delay requests – giving you timeouts and/or inaccurate numbers. It’s best to run these tests in a semi-development mode with those security settings turned down, and to rely on bigger guns – fleets of boxes and scripts – to hit a production-secured site.

In addition to hitting just a landing page, you can use ab to send COOKIE or POST data too! This is very useful if you want to see how pages perform but need credentials to get in first. It’s a little trickier, using the -C, -T, -p, and -v flags. I noticed there are few useful resources online for figuring this out with ab, so it seemed worthwhile to write up – it took me some trickery to figure out as well:

Sending POST data to a login form:

First we create a file that contains our URL-encoded POST data. Note that ab expects the values to be URL encoded, but not the equals signs (=) or ampersands (&).

post_data.txt

username=foo%40bar.com&password=foobar
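If you’d rather script the file than hand-edit it, plain printf works well – a handy detail here is that % characters in printf arguments (as opposed to the format string) pass through untouched, so pre-encoded values like %40 survive intact. The credentials are the same placeholders as above:

```shell
# Write the body with values already percent-encoded ('@' -> %40), leaving = and & literal
printf 'username=%s&password=%s' 'foo%40bar.com' 'foobar' > post_data.txt
cat post_data.txt   # prints: username=foo%40bar.com&password=foobar
```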

Capturing a cookie:

Here, we use the verbosity (-v) flag so we can see the response headers that come back — many sites will send back a cookie once you authenticate, and we want to capture that cookie here. Some sites will not require it, but I demonstrate it for the sake of example:

ab -v4 -n1 -T 'application/x-www-form-urlencoded' -p post_data.txt http://www.foobar.com/login

The response headers will fly by quickly; you’re looking for something like the following:

Set-Cookie: somesession=somerandomsessiondata...;

The session data may come back encrypted, unencrypted, serialized, or as just a number – that varies by site. The point is that you now have a key/value pair for the cookie. All you need is the part up to, but not including, the semicolon. Copy that “key=val” string and use it when hitting other pages on the site you are testing, i.e.:

Using a cookie to test a page that requires a cookie:

ab -t10 -n100 -C 'somesession=somerandomsessiondata' http://www.foobar.com/login_required_page

This can become a lot of fun once you get the hang of it. Now that you have the know-how, go enjoy creating an arsenal of these scripts and start performance tuning your sites – or script hacking your favorite social network ( or obnoxious Blizzard clan website hahaha… drum roll for D3 – 2010? Please???! ).

Upgrading from PHP 5.2 to 5.3 in Ubuntu – Part 1

Jul 05
2009

( I’m publishing this partially done so that it acts as a reminder for me to FINISH it… bear with me! )

Overview of Important Changes

variable class naming

Previously in PHP, only method and function names could be variables, i.e.:

$func = "printf"; // note: print is a language construct, not a function, so printf is used here
$func("Hello World");
class Foo {
  public static function Bar()
  {
     echo "Hello World";
  }
}

$method = "Bar";
Foo::$method();

Now, in PHP 5.3+, variable class naming is also supported – making possible this syntax:

class Foo {
  public static function Bar()
  {
     echo "Hello World";
  }
}

$class = "Foo";
$method = "Bar";
$class::$method();

This new variable class naming should provide a much-desired level of indirection for PHP developers. I know for my MVC framework, it could really change the Typhoon PHP Typhoon->run() method in a positive way.

late static binding

This is a more advanced PHP OOP topic – one that almost came off as a bug in previous versions of PHP. I’ve personally run into it on occasion, and am happy to see a solution available in PHP 5.3. In previous versions, a self:: call always resolved to the class where the method was defined, not the class that was actually called. In cases where one needed a static call to resolve in the calling class’s scope, unexpected results were common, i.e.:

class Foo {
    public static function who() {
        echo __CLASS__;
    }
    public static function test() {
        self::who();
    }
}

class Bar extends Foo {
    public static function who() {
         echo __CLASS__;
    }
}

Bar::test();

Output:

Foo

The solution, “Late Static Bindings”, solves this using the static keyword:

class Foo {
    public static function who() {
        echo __CLASS__;
    }
    public static function test() {
        static::who();
    }
}

class Bar extends Foo {
    public static function who() {
         echo __CLASS__;
    }
}

Bar::test();

The static keyword resolves to the calling class and produces the expected output:

Bar

Additions to Standard PHP Library (SPL)

Circular Garbage Collection

Lambda Functions (Anonymous Functions)

Closures

Overriding Internal Functions

New Reserved Words

Namespaces

Jump Labels

Changes to Functions and Methods

Extensions

Phar

php.ini Changes

Deprecated Methods

Install php 5.3 on Ubuntu

mkdir download && cd download
wget http://snaps.php.net/php5.3-200906131830.tar.gz

Ubuntu + Compiz

Jun 12
2009

This is actually from a while ago ( almost a year ), but I thought I’d share. It’s my desktop PC, running Ubuntu 8.04 and the Compiz Fusion engine.

To be honest, it was a fun exploration for the novelty, but these days, the most I really use from the effects engine is the magnifier to help people standing over my shoulder focus on whatever I’m demonstrating.

Pretty cool nonetheless! Read up at Compiz Fusion and Ubuntu.

Setup a webcam security system with Ubuntu Linux and Motion

May 17
2009
Snap from Office Security Cam

So, now that I’m in Morgantown – my home is too small to comfortably work on side gigs and personal projects in, especially now that my family is getting bigger with the baby!  I’ve been using the office space I leased more and more.  While exploring video conferencing with Matt last week, I had the thought, “wouldn’t it be cool to have a security camera in the office?”  So I did just that – and it’s actually quite easy for Ubuntu linux users.

What you need:
  • Ubuntu Linux ( I was using 8.04.1 at the time of installation )
  • one or more USB web cameras
What you can do:
  • Motion detection – record video/and or frames if there is motion.
  • Snapshot intervals – take time interval snapshots regardless of motion detection.
  • Live video IP stream in mjpeg format.
  • Specify recorded video to be saved in your choice of mpeg, avi, flv, or swf format.
  • When motion is detected, have frames and videos draw a box around the specific motion for more obvious recognition of subtle movements ( this actually shows the shadow of the janitor near the door around 6 a.m. every morning – I wouldn’t have noticed otherwise! )
  • Easily send all data to a backup server in a variety of ways – I keep it simple by saving data to my Dropbox directory, a wonderful cross-platform data synchronization and sharing utility.
Steps:

1.  Plugin your webcam.
For me, the Logitech QuickCam® Pro 9000 worked right out of the box, and was only $105.

2.  Install Motion – software motion detector, and turn it on.

sudo apt-get install motion
sudo motion

3. Configure Motion

Everything really works out of the box with this – but it isn’t quite organized to my liking, and probably not to yours either. Global configuration lives inside /etc/motion/motion.conf ( you’ll notice there are multiple threadN.conf files in the same directory, which can be used to configure individual cameras if you are setting up more than one ).

Note: Be sure to restart the Motion server every time you make a configuration change.

sudo /etc/init.d/motion restart

Take a look at the files; they are well documented. Below are a few helpful configurations to get your data organized more quickly:

#/etc/motion/motion.conf

# Locate and draw a box around the moving object.
locate on

# Draws the timestamp using same options as C function strftime(3)
text_right %Y-%m-%d\n%T-%q

# Text is placed in lower left corner
text_left SECURITY CAMERA %t - Office

Organize the filesystem to save data by date, instead of putting everything in one directory.

# File path for snapshots (jpeg or ppm) relative to target_dir
snapshot_filename %Y%m%d/camera-%t/snapshots/hour-%H/camera-%t-%v-%Y%m%d%H%M%S-snapshot

# File path for motion triggered images (jpeg or ppm) relative to target_dir
jpeg_filename %Y%m%d/camera-%t/motions/hour-%H/camera-%t-%v-%Y%m%d%H%M%S-%q-motion

# File path for motion triggered ffmpeg films (mpeg) relative to target_dir
movie_filename %Y%m%d/camera-%t/movies/hour-%H/camera-%t-%v-%Y%m%d%H%M%S-movie

# File path for timelapse mpegs relative to target_dir
timelapse_filename %Y%m%d/camera-%t/timelapses/hour-%H/camera-%t-%Y%m%d-timelapse
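To preview what the date parts of these templates expand to, date(1) understands the same strftime codes. Motion-specific specifiers like %t (camera), %v (event), and %q (frame) aren’t known to date – note that %t means a tab there – so they’re filled in by hand below:

```shell
# Expand the strftime parts of snapshot_filename; camera/event placeholders hardcoded
date +'%Y%m%d/camera-1/snapshots/hour-%H/camera-1-0001-%Y%m%d%H%M%S-snapshot'
```

Running this prints a path like 20090517/camera-1/snapshots/hour-06/camera-1-0001-20090517061502-snapshot, which makes it easy to sanity-check a template before restarting Motion.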

4.  (Optional)  Setup a backup solution

a. The easy solution: get and install Dropbox — instructions are on the Dropbox site.  Then update your motion.conf to save to your Dropbox directory:

#/etc/motion/motion.conf
...
target_dir /path/to/dropbox/security_camera
...

b. A more granular solution is to take advantage of hooks configurable in motion.conf. Using these, you can create bash scripts to do anything your heart desires ( like trigger a silent alarm on motion detection outside business hours ). Available hooks: on_event_start, on_event_end, on_picture_save, on_motion_detected, on_movie_start, on_movie_end.

If you have wput installed, you can easily upload files to a remote backup server with these hooks:

#motion.conf
...
on_picture_save wput ftp://user:pass@server %f
...

However, this solution is somewhat less secure, as it uses FTP. In a future post I will detail how to secure this using encrypted transfer and passphrase-free keys. ( Stay tuned! )
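The hook can also point at your own script instead of calling wput directly. A minimal sketch of the shape of such a script – the paths and script name here are hypothetical, and the real upload line is left commented out:

```shell
# Sketch of an on_picture_save hook; Motion would invoke it as:
#   on_picture_save /usr/local/bin/backup_hook.sh %f
backup_hook() {
  f="$1"
  # a real hook might run: wput "ftp://user:pass@server" "$f"
  echo "would upload: $f"
}
backup_hook "/var/lib/motion/20090517/camera-1/snapshot.jpg"
```

Wrapping the action in a script like this is what makes ideas like the silent alarm mentioned above practical: the script can check the time of day before deciding what to do with the file.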

5. Live feed

This comes working out of the box with Motion. Check out your live stream in your web browser by navigating to: http://localhost:8081

That’s it! Webcam security made easy :-)
