zircote.com

development (in)action

Functional Testing PHP SSH2 Workflows With PHPUnit and Vagrant

When $this->markTestIncomplete() just won’t do. Enter Vagrant.

To begin, the tools you will need to become familiar with are as follows:

Vagrant: http://vagrantup.com You will find many good write-ups on its installation and use. For the sake of brevity I will provide links to a few I found useful rather than go into much detail here. I suggest beginning here: http://vagrantup.com/v1/docs/getting-started/index.html

PHPUnit: http://www.phpunit.de

There are four SSH2 authentication methods provided by the PHP ssh2 extension:

  • ssh2_auth_hostbased_file — Authenticate using a public hostkey
  • ssh2_auth_none — Authenticate as “none”
  • ssh2_auth_password — Authenticate over SSH using a plain password
  • ssh2_auth_pubkey_file — Authenticate using a public key

For my project I needed to functionally validate a workflow of uploading a file to a server utilizing ssh2_auth_password and ssh2_auth_pubkey_file. I needed to test the following to validate my results:

  • Can I authenticate with the desired method?
  • Can I validate the fingerprint of the server’s key?
  • Can I put the contents of a file to a known path on the remote server?
  • Can I validate those contents and destination filename are as expected?

To accomplish these goals I require an environment with a known public and private key; this provides the fingerprint and both keys needed to establish the connection without much work. For password-based authentication I need an environment that allows non-tty, password-based auth over SSH (this feature is generally disabled and must either be changed at Vagrant provisioning time or baked into a pre-configured image). Not wanting to create a custom box for this case, I utilize the provisioning tools built into Vagrant and sed. From here it’s really a matter of writing tests and running Vagrant; take a look at the gist for the examples I have provided.
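The gist below contains the full examples; as a rough sketch, a functional test covering the pubkey path might look like the following (the host, port, user and key paths are assumptions for a stock Vagrant box forwarding SSH on 2222, so adjust them to your Vagrantfile and the keys you fetched from the box):

class Ssh2WorkflowTest extends PHPUnit_Framework_TestCase
{
    // These values are assumptions for a default Vagrant box.
    const HOST     = '127.0.0.1';
    const PORT     = 2222;
    const USERNAME = 'vagrant';
    const PUBKEY   = '/path/to/vagrant.pub';
    const PRIVKEY  = '/path/to/vagrant';
    const KNOWN_FINGERPRINT = 'RECORD-THIS-ONCE-FROM-THE-BOX';

    public function testPubkeyUploadWorkflow()
    {
        $session = ssh2_connect(self::HOST, self::PORT);
        $this->assertInternalType('resource', $session, 'Unable to connect to the Vagrant box');

        // Validate the host key fingerprint before trusting the server.
        $fingerprint = ssh2_fingerprint($session, SSH2_FINGERPRINT_MD5 | SSH2_FINGERPRINT_HEX);
        $this->assertEquals(self::KNOWN_FINGERPRINT, $fingerprint);

        // Authenticate with the known key pair.
        $this->assertTrue(
            ssh2_auth_pubkey_file($session, self::USERNAME, self::PUBKEY, self::PRIVKEY)
        );

        // Put a known payload on the guest...
        $local = tempnam(sys_get_temp_dir(), 'ssh2');
        file_put_contents($local, 'expected payload');
        $this->assertTrue(ssh2_scp_send($session, $local, '/tmp/functional-test.txt', 0644));

        // ...then pull it back and confirm the name and contents round-tripped intact.
        $copy = tempnam(sys_get_temp_dir(), 'ssh2');
        $this->assertTrue(ssh2_scp_recv($session, '/tmp/functional-test.txt', $copy));
        $this->assertEquals('expected payload', file_get_contents($copy));
    }
}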

Example Gist https://gist.github.com/3612867

The TL;DR:

  1. Write your tests
  2. Fetch the Vagrant SSH keys
  3. Create or grab the example Vagrantfile
  4. Run `vagrant init`
  5. Run `vagrant up`
  6. Run the tests
  7. Rinse / repeat
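For the password-auth case mentioned above, the Vagrant provisioning step boils down to flipping the relevant sshd_config option and restarting sshd on the guest; assuming a Debian/Ubuntu base box, the shell provisioner body is roughly:

sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
service ssh restart

The exact option (and service name) your base box needs may differ, so check the guest’s sshd_config first.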

Why Composer?

The efforts to decouple services and isolate application tiers can be a tricky business. Currently, at Ifbyphone we are in the process of decoupling many application segments and improving the agility of our development process. This effort comes at the expense of many things that are often taken for granted in other environments, and it is no different at Ifbyphone. A significant pain point of this process is application dependencies. We rely on several dependencies in our applications: Zend Framework, Rediska, PHPUnit and many others. The challenge is that applications often have their own deployment schedules and development processes, and are often at different stages of their life cycles.

To date we have relied on PEAR to act as the manager of these dependencies. However, as virtualization of hardware and automated deployment of applications with tools such as Puppet and Capistrano become more prevalent, the necessity of ‘pinning’ versions of libraries becomes crucial. An example is Zend Framework and PHPUnit: the current PHPUnit version is 3.6.10 (at the time of this writing), yet Zend Framework 1 requires PHPUnit 3.4 due to interface additions in later PHPUnit releases. It is our preference to use the latest version of PHPUnit where possible, but this preference is prohibitive when working with Zend Framework 1. While it is possible to maintain two separate installations, this does not help with things such as continuous integration and only adds complexity. In addition, versions change rapidly in open source development, and a pear upgrade-all is risky on a production server.

Enter Composer (http://getcomposer.org/). Composer provides the tools to atomically maintain dependencies for a given application, to pin those versions, and to segregate the dependencies from other deployments and development environments. Its near-seamless support for Git and Mercurial makes its use simple and a pleasure. Moreover, it provides the tools for deployments to reside safely alongside one another without the fear of unwanted version creep or accidental version conflicts. Composer provides a large list of schema properties allowing for a simple yet detailed manifest of dependencies.
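As a sketch of what such a manifest looks like, here is a minimal composer.json pinning dependencies to known-good series (phpunit/phpunit is a real Packagist package; the other names and version numbers here are purely illustrative):

{
    "name": "acme/example-app",
    "require": {
        "php": ">=5.3.2",
        "phpunit/phpunit": "3.6.*",
        "acme/internal-lib": "1.2.*"
    }
}

Running php composer.phar install then resolves and installs exactly those versions into the project’s vendor directory and records the result in composer.lock, so every deployment gets the same set.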

When I began using Composer I feared that it would only suffice for select packages; I was concerned that a project that did not provide a composer.json of its own would be excluded from the manifest. I have since learned through exploration that this is in fact not the case: Composer can pull in libraries via archive (zip, tar, etc.), Git, SVN (see the gist below), PEAR packages, and its own clearinghouse of packages, http://packagist.org. As a project maintainer you are given the option of defining packages within your project’s composer.json, pinning versions, repository locations and so on. This provides the flexibility that has been missing from PHP for years. GitHub has changed the open source community in many ways, removing the ‘fear’ of submitting changes to a project. What GitHub has done for development, I look to Composer to do for PHP in the delivery of this mass creativity.
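For libraries that do not ship a composer.json of their own, a "package" repository definition lets you describe the package inline; the sketch below (names, URLs and version are hypothetical) pulls a legacy library from SVN alongside a PEAR channel:

{
    "repositories": [
        {
            "type": "pear",
            "url": "http://pear.example.com"
        },
        {
            "type": "package",
            "package": {
                "name": "acme/legacy-lib",
                "version": "1.0.0",
                "source": {
                    "type": "svn",
                    "url": "http://svn.example.com/legacy-lib/",
                    "reference": "tags/1.0.0"
                }
            }
        }
    ],
    "require": {
        "acme/legacy-lib": "1.0.0"
    }
}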

You may learn more about Composer at http://getcomposer.org. The project’s GitHub site is https://github.com/composer/composer; you can keep up with much of its development on the Composer-Dev Google Group, and browse http://packagist.org for a list of Composer-friendly projects.

The Challenges of Moving Forward.

As business moves forward, technology expands into new concepts and applications of theory, and Ifbyphone is no exception. Unlike most startups we hear about these days, with their solutions built on the latest technologies, new markets to develop and the freedom of a clean backlog, we find ourselves at Ifbyphone with technical debt, established customer expectations and a backlog of tasks that can be daunting on a good day. This isn’t to say these challenges are insurmountable. They do, however, pose not-so-unique challenges for every aspect of the company. Luckily we have a great team of software engineers and support staff working hard each day to distill the backlog to the most valuable sprint tasks each week so that we may deliver the most value to our customers.

Recently we have begun several initiatives to bring more stability and agility to our development process, including upgrading software, implementing new platform services and implementing peer code reviews. Among the new platform services recently implemented, or in the process of implementation, are Redis, Zend Server, CometD, MongoDB, MySQL, Amazon Web Services, RabbitMQ, Puppet, OAuth2 and others. Working with a list of technologies such as this is daunting when you consider the imperative goal of not introducing any service outages. This requires vast amounts of planning, testing and preparation, but most importantly communication. Communication with one another, with our customers, and with our upstream providers is key to a successful endeavor; all of which leads us to further expand our list of initiatives to modernize and improve the services we provide to our clients. We have implemented improved database change control, software deployments and continuous integration; however, we are still looking forward, planning to address such challenges as improved reporting, improved statistical analysis, capacity scaling, monitoring, security enhancements and more.

We are not, nor do we believe we are, unique in these challenges: many companies around the world are faced with similar obstacles. The difference is often the application of theory, money and manpower. Mistakes are made, but the lessons gleaned from them are invaluable to growth and progress. The greatest lessons learned are the seemingly obvious ones, where one looks back and wonders, “Why (or how) did we let this happen?” Our most recent efforts have been focused on database improvements: upgrading hardware and versions, but also updating legacy code to implement more efficient database drivers. Vast quantities of code have been refactored: the entire team was sequestered in a conference room for multiple days to peruse diff files and perform peer code reviews. It was challenging, tedious and monotonous, yet successful. The lessons we took away are these:

  • Do not let fear paralyze the progress; it only gets harder the longer you wait.
  • Keep the team involved. Large projects should always have full team involvement.
    • Succeed as a team or fail as a team.
    • Communicate with one another. While sequestering the team in a room is not always the preferred method of team interaction, the simple act of shared hardship can foster camaraderie.

I am a big fan of peer code reviews and this particular undertaking has only served to bolster my support of this process: both the reviewer and the reviewed can take from each session many lessons that go well beyond the simple technicalities of the code. The junior reviewer learns restraint and constructive mentorship toward the peer developers he has reviewed, while from the senior developers being reviewed he or she learns to be more assertive and prepared in getting results ‘without authority.’ Collectively, this provides a great opportunity and experience in their careers. As developers being reviewed, we learn to interact with each other in harmonious ways, to openly communicate our goals and methods. We are at times humbled and learn to put aside our egos (not that I have ever met an egotistical developer) and embrace critique.

Today at Ifbyphone we have many projects well underway and even more on the backlog; there are no shortages of challenges to be undertaken, and frankly I wouldn’t have it any other way. I am fortunate to be in a place professionally where I absolutely love what I do: I am challenged daily, work with a great group of people and have excellent leaders to keep me on track. Over the years I have learned that the best jobs are often the hard ones and all of those constraints of technical debt that pose the greatest challenge—while frustrating—give the greatest rewards.

What Is an Expert?

One of the joys of my current profession is constant exposure to individuals and organizations that label themselves “experts” in any given discipline. Having been exposed to this perk yet again this holiday season, I feel compelled to draft my thoughts on the subject. It should be noted that I do not claim to be an expert in any given discipline; I do, however, feel my experience lends itself to a unique perspective on many topics and subjects. My technical career spans several seemingly unrelated and diverse fields. I have been employed professionally as the following, in no particular order:

  • Automotive Technician
    • ASE Certified Master Auto, L1, Body and Heavy Truck
    • Automatic Transmissions
    • Electronics Drivability
    • Diesel Fuel Injection
    • Heavy Line
    • Fuel & ignition pit crew on an ARCA stock car team
  • Director of Information Technology for a Fortune 500 company.
  • Shop Foreman for the manufacturing and maintenance of oil-field equipment.
  • Garde Manger and Saucier Chef.
  • Software Developer and Software Architect.

There is more; for the sake of brevity, I have kept this list short. For many years, I have kept this list a secret professionally, as it often sparks questions of ability, sanity and reliability. My lack of formal secondary education is often raised in job interviews and is just as often difficult to address, depending on the job market. With my introductions complete, I would like to share my thoughts on what does and does not make an expert. The list of things I believe that do make an expert will most assuredly be a list much smaller than what is generally accepted.

The expert

It is my belief that to become an expert in any field one must know enough to not be entirely sure of all aspects of a given problem. An expert will have the experience and wisdom to know there are no simple solutions; that all problems can and usually do have multiple facets. This expert will know that answers cannot come from one book, author, magazine or periodical, but rather from the mistakes and pains of failure. This individual will recognize that while there is a most common solution, there will one day be that problem that is rarely seen; knowing this, and knowing how to address it, is the making of an expert.

Not the expert

Reading one or two books that present canned problems and solutions as examples does not make a person an expert, regardless of how well he absorbs the lessons. Such an individual possesses only enough knowledge to begin the path to understanding one or two facets of the possibilities that can and will arise. It is important to understand that these publications have carefully filtered their problems toward the most common denominator, so as to be publicly consumable. These solutions are merely the best-case recipes one can hope for and often provide no real-world problem-solving knowledge on the topic. Until the individual can readily face problems that do not return results in a Google search, and solve those problems without the help of IRC or anyone else, he should not claim the title of expert in such matters.

Commercially, there are also companies that ply their wares posing as experts in a given problem field, hoping to take your money and make you happy. Far too often I find these companies have a narrow range of problems they solve, and it is far too easy to get started on a solution only to discover your particular use case is not supported or solved by the experts’ offerings. This is similar to the individual case: not having seen enough of the problems, or simply not having had a large enough user base, to adequately solve the problem.

In Summary

Ultimately, it is important to recognize that consulting companies are often motivated, for more selfish reasons than the individual, to obfuscate their status (or lack thereof) as experts, so that they may lure us into a contractual obligation. For individuals, it is often ignorance, narcissism and/or pride that motivates the proclamations of subject matter expertise from all parts of life. The holidays are especially bountiful times for examining these persons, watching them in the wild and observing their natural behaviors. Whether it is a parent instructing you in the disciplines of child rearing or a CEO expounding the joys of the latest whiz-bang idea they read on Joe Schmoe’s blog, it is up to us non-experts to separate the wheat from the chaff, the good from the trash.

 

A Quick Ternary Joyride.

I am not a fan of @, but I have not found a way to use the new short ternary operator (?:) without otherwise throwing a notice.

[ zircote ~/Workspace/CtaTrack ] php -a
Interactive shell

php > $v = array();
php > echo $v['test']?: 'no';
PHP Notice:  Undefined index: test in php shell code on line 1
no
php > echo @$v['test']?: 'no';
no
php > $v['test'] = 'yes';
php > echo @$v['test']?: 'no';
yes
php >
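The only notice-free alternative I know of without @ is the longhand isset() ternary, which avoids the notice at the cost of repeating the expression:

echo isset($v['test']) ? $v['test'] : 'no'; // no notice, even when the index is absent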

The Revolution Around Us…

I was remembering back to 1997, when I was working in IT for a Fortune 500 as the stir for Y2K upgrades was on the rise, buyouts of smaller companies were a daily occurrence and copper-pair telecommunications systems were being replaced by VoIP. The latter was an odd transition; a somewhat new thing (Ethernet) was being retasked to handle phone traffic, which heretofore had been managed not by IT but rather by telco. This transition meant that the phone systems, which had to date been managed by groups of switch technicians with troubleshooting performed with tone generators, butt-sets and amplifiers, were now handled by neck-bearded, pale-faced sociophobes in the basement (not exactly) where no one dared venture. Many lost their jobs, unable to embrace the change; many couldn’t make the transition, believing it would pass; and some embraced it, striving to stay on the curve.

Enter 2011: a new revolution has been well under way, reminding me of those days long since passed; the cloud. While some see the cloud as just the most recent fad, a passing annoyance or the next thing to crash and burn, I believe this is the next revolution. There will be those who lose their jobs refusing to embrace the inevitable, more who will not make the transition believing it will pass, and some who will successfully embrace it, riding the curve. Much like the telco department of old, ops groups will now have to begin thinking more like developers; it will require them to become part of the development process, designing and interacting with aspects not specifically relating to assets and hardware. Similarly, developers will have to become more mindful of ops-related requirements that intersect their daily thought processes, considering technical details of deployment, concurrency and the like. Even the most meagerly funded organizations can now have infrastructure as robust and thoroughly sound as a Fortune 500’s, on which they may build, grow and continuously deliver their message to the world; that is what makes the cloud a cornerstone that is here to stay.

Prowl Push Notifications With Zend_Log

I wanted a contained method to send push notifications from various tasks, and perhaps even to a list of recipients for critical alerts, while still being able to hook into existing application code and error reporting. This led me to write Skulk_Log_Writer_Prowl as part of the 0.1.5 release of the Skulk package.

While currently simple in implementation, it provides configurable support for provider keys, single or multiple recipient keys, a linking URL and a notice title. I may later add layout support for more detailed message formats with Zend_Layout. Since it implements the Zend_Log_Writer API, drop-in use with minor configuration is as follows:
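The gist embedded here shows the working configuration; a minimal sketch of the idea looks something like the following (the option names passed to the writer are assumptions, so check the Skulk source for the exact keys):

$writer = new Skulk_Log_Writer_Prowl(array(
    'providerkey' => 'your-prowl-provider-key',   // assumed option name
    'apikey'      => array('recipient-key-one'),  // one or more recipient keys
    'application' => 'MyApp',
    'url'         => 'http://zircote.com',
));

$log = new Zend_Log();
$log->addWriter($writer);

// Anything logged at the configured priority now triggers a Prowl push.
$log->crit('Queue consumer has stopped responding');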

The messages received should appear something like these:

Skulk Log Output

EDIT:

I have also recently added a working, if rough, Zend_Mail transport for Prowl; I will expand the tests and finalize it after the holidays:

For more information, see the Skulk project and the Prowl service.

Zend Server Cluster Manager and Capistrano Deployments

One of the challenges of deploying applications in an elastic environment is that the target servers are ever changing; from one moment to the next your environment may consist of four, twenty or more target hosts. The process of determining these targets can be tedious and time consuming, requiring either the examination of cluster member lists within the Zend Server Cluster Manager GUI or executing ec2-describe-instances and then parsing through the results for the correct group; both methods are time consuming and fraught with the possibility of mistyping a hostname, missing one or just breaking the process.

I approach deployments from the angle that no human interaction (beyond security dictated by my comfort level) should be required; a programmatic method of gathering this dynamic list of target hosts is needed. To gather this list I have created and employed a library for interacting with the API of Zend Server and Zend Server Cluster Manager: Jazsl (Just Another Zend Server Library). Jazsl provides methods for all current Zend Server API calls:

  • clusterAddServer
  • clusterRemoveServer
  • clusterDisableServer
  • clusterEnableServer
  • clusterGetServerStatus
  • getSystemInfo
  • restartPhp
  • configurationExport
  • configurationImport

More on these methods may be explored in the Zend Server documentation (Zend-Server-CM-Reference-Manual). I will assume the target stage for this is production, and the stage file is production.rb; construction of the dynamic server list within production.rb requires certain dependencies:

  • gem: capistrano
  • gem: capistrano-ext
  • gem: json
  • Zend Framework
  • Jazsl
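The Ruby side of these can be installed in one shot (assuming RubyGems is already set up on the deployment machine):

gem install capistrano capistrano-ext json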

These procedures assume you have a working Capistrano deployment configuration for your PHP project; the details of installing and configuring Jazsl follow. Install the PEAR packages for Jazsl and Zend Framework on the machine you intend to deploy from.

Next, enable the zf configuration as well as each of the providers within the Jazsl project you intend to utilize; for the purpose of deployments we only require the JazslProvider and JazslClusterProvider. The configuration is covered in the following gist:

Confirm your setup and configuration of the Jazsl providers by executing zf ?; the output should contain:

Before continuing we must configure an API key (read-only access is all we require). To generate the API key, log into the Zend Server Cluster Manager GUI; refer to the Zend Server documentation for details. The deployment host only requires a read-only API key. Once you have this key you can resume configuration of the Jazsl tool as follows:

zf add-server-key jazsl zcsm https://10.0.0.12:10082/ZendServerManager key_full xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

The parameters are described as follows:

zf add-server-key jazsl zendserver url keyname apikey

  • zendserver: the identifying name for the key-set/host
  • url: the full URI for the Zend Server originating this API key set
  • keyname: the identifying name of the API key as given in the Zend Server admin GUI
  • apikey: the API key hash provided

You may then validate its operation by executing:

zf cluster-status jazsl-server zcsm

where zcsm is the name of the key-set you saved in the previous command; this command should return either an error message or a table of cluster members (provided you have cluster members).

Once this is confirmed you are ready to use the jazsl-cluster tool to return the JSON string that will be used by Capistrano for the target lists, as shown in the above gist. To utilize this JSON string we modify our production.rb file to execute the jazsl-cluster command and parse its output as part of the role determination, as shown:
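The gist referenced above has the actual listing; as a rough sketch of the idea, production.rb ends up looking something like this (the exact zf action name and the JSON field names are assumptions here, so adapt them to the output your Jazsl install produces):

require 'json'

# Ask the jazsl-cluster provider for the current cluster members as JSON.
members = JSON.parse(`zf get-servers jazsl-cluster zcsm`)

# Build the host list from whatever field holds the member address.
hosts = members.map { |member| member['address'] }

role :web, *hosts
role :app, *hosts

# The gist shells out to zf get-server jazsl-cluster to pick a single member
# for the :db role; taking the first host is a simpler stand-in here.
role :db, hosts.first, :primary => true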

You will note that for the :db role I utilize a zf get-server jazsl-cluster command to return a single target host; I use this for database migrations, which only need to run on one instance in the cluster, while understanding that there are no static instances in an elastic environment. Finally, deployment is executed as usual with Capistrano: cap production deploy will parse the server list and execute all deployment commands on each server.