Book Notes: The Open Organization (Jim Whitehurst)

My list of unorganized notes from The Open Organization, by Jim Whitehurst (Red Hat CEO):

  • The best ideas win regardless of who they come from.
  • Encourage and expect open, frank, and passionate debate. Let employees know I expect them to tell me if my idea is junk.
  • Bottom-up culture
  • Worry less about whether or not things are done precisely as I would choose. Be hands-off enough to allow people to direct themselves and make decisions.
  • Have thick skin and allow extensive and relentless feedback.
  • Help employees see the higher purpose for their work. That sense of purpose is the best intrinsic motivator.
  • Purpose: removing technical roadblocks and providing innovative solutions that allow clients to improve the world more effectively and quickly.
  • A manager’s task is to create a work environment that inspires exceptional contribution and that merits an outpouring of passion, imagination, and initiative.
  • Allow and spark emotion.
  • Hire motivated people and inspire them, rather than skilled people and motivate them.
  • When interviewing, ask questions that show how *curious* they are about things. What projects (software or otherwise) are you proud of? What are your hobbies?
  • Have mailing lists and other means of public communication to recognize and reinforce passion. Acknowledge good work and encourage open/blunt communication.
  • But keep the passion fires in check — heated arguments tend to tune out facts and merits, becoming destructive.
  • If employees take psychological ownership, even average employees can perform at high levels. They need to be engaged with and understand the strategy (what and how).
  • Don’t sugar-coat bad news.
  • People want context, whats, and whys.
  • Be accessible, answer questions, admit mistakes, and say you’re sorry. Builds credibility and authority.
  • Engaged employees require you to explain your decisions.
  • Meritocracy != democracy. Everyone has a chance to be heard, but not everyone’s opinion carries equal weight. Individuals who have shown themselves to be leaders in a topic are the ones with clout and decision-making power, regardless of position in the org chart.
  • Meritocracy leaders are chosen by peers and defined by sustained contributions.
  • Instead of brainstorming with a “no bad ideas” rule, debating ideas tends to create the most new ideas.
  • Hold one on ones, but allow employees to set the agenda ahead of time. Don’t set it yourself, making assumptions about what’s important.
  • Always include team in decision making. It’s not a democracy — ultimately, decisions are yours. But it’s a way to get fresh opinions and provide satisfaction.
  • Don’t be afraid to describe incomplete plans. The ambiguity is a great time to facilitate engagement.
  • Articulate higher-level goals, but don’t feel like you have to spell out implementation. Let skilled employees (help) fill in the details.
  • Leadership is the art of getting things done through other people.
  • Allow prototyping/experimentation that fails fast, rather than spending so much time analyzing and designing up front. In the end, it takes less time.
  • Have enough confidence to admit you don’t have all the answers!

Apache and MariaDB/MySQL Settings for Low-Memory Servers

Gone are the days of requiring large amounts of resources to adequately run a fast, enterprise-grade web server.  I currently run a single DigitalOcean instance (the 1GB memory plan) and host many web platforms with no performance issues whatsoever.  I thought I’d share the settings that have been working really well in this low-memory environment.  Note that the server is running CentOS 7, but these settings should be applicable for any OS.

For what it’s worth, if you’re interested in a DigitalOcean account, click here to use my referral — you’ll gain $10 in credits when you sign up…

Apache (/etc/httpd/conf/httpd.conf)

The ‘MaxClients’ value of 15 was determined empirically.  It’s the highest value I’ve been able to use without running into out-of-memory issues.  Some users may be able to get closer to 20 or more, but 15 has been chugging along for months without any problems.

StartServers 1
MinSpareServers 1
MaxSpareServers 5
MaxClients 15
MaxRequestsPerChild 300
KeepAliveTimeout 3
HostnameLookups Off
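To sanity-check a ‘MaxClients’ value on your own box, one rough rule of thumb is to divide the memory you’re willing to give Apache by the average resident size of an httpd child.  A minimal sketch follows; the 512MB budget and the 25MB fallback RSS are illustrative assumptions, not measurements from my server:

```shell
#!/bin/sh
# Rough MaxClients estimate: (memory budget for Apache) / (average httpd child RSS).
# Both the budget and the fallback RSS below are illustrative assumptions.
budget_kb=$((512 * 1024))   # e.g. give Apache ~512MB of a 1GB droplet

# Average RSS (in KB) across running httpd processes; fall back to 25MB if none are running.
avg_kb=$(ps -C httpd -o rss= | awk '{s+=$1; n++} END {print (n ? int(s/n) : 25600)}')

max_clients=$((budget_kb / avg_kb))
echo "avg httpd RSS: ${avg_kb} KB"
echo "suggested MaxClients: ${max_clients}"
```

With the assumed numbers, this works out to roughly 20, the same ballpark as the empirically chosen 15; measure your own processes under real load before trusting it.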

MariaDB/MySQL (/etc/my.cnf)

[mysqld]
key_buffer = 16K
max_allowed_packet = 1M
table_cache = 4
sort_buffer_size = 64K
read_buffer_size = 256K
read_rnd_buffer_size = 256K
net_buffer_length = 2K
thread_stack = 64K

innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

[mysqldump]
max_allowed_packet = 16M

[isamchk]
key_buffer = 8M
sort_buffer_size = 8M

[myisamchk]
key_buffer = 8M
sort_buffer_size = 8M


If you use MyISAM instead of InnoDB, replacing the ‘innodb_*’ settings with ‘skip-innodb’ can reduce memory usage even further.


Automated Apache and FTP Setup for New Website

When adding a new website to an existing web server, the process of setting up Apache and creating FTP users is a bit tedious. However, it’s really easy to automate with a simple script. The following is an example Shell script that automatically:

  1. Creates /var/www/<domain>/html
  2. Creates an FTP user
  3. Adds the apache user to that user’s primary group
  4. chown & chmod /var/www/<domain>/html
  5. Creates an Apache .conf for the site, including necessary aliases and logging
  6. Restarts Apache

The script assumes Apache directory locations typical for CentOS/RHEL/Fedora, but can be easily modified for others.  It also assumes the “sites-available” and “sites-enabled” setup, tied into httpd.conf.
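For reference, tying “sites-enabled” into httpd.conf on Apache 2.4 (as shipped with CentOS 7) is typically a single include line; the relative path is resolved against ServerRoot, and the layout shown is an assumption to match the script below:

```apache
# At the bottom of /etc/httpd/conf/httpd.conf -- pull in every enabled site:
IncludeOptional sites-enabled/*.conf
```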

#!/bin/bash
# usage: <domain> <username> <password>

mkdir -p /var/www/$1/html
chmod -R 755 /var/www/$1/html
useradd -d /var/www/$1/html $2
usermod -a -G $2 apache
echo $3 | passwd $2 --stdin
chown -R $2:$2 /var/www/$1/html

echo "<VirtualHost *:80>" > /etc/httpd/sites-available/$1.conf
echo "    ServerName $1" >> /etc/httpd/sites-available/$1.conf
echo "    ServerAlias www.$1" >> /etc/httpd/sites-available/$1.conf
echo "    DocumentRoot /var/www/$1/html" >> /etc/httpd/sites-available/$1.conf
echo "    <Directory /var/www/$1/html/>" >> /etc/httpd/sites-available/$1.conf
echo "        AllowOverride All" >> /etc/httpd/sites-available/$1.conf
echo "    </Directory>" >> /etc/httpd/sites-available/$1.conf
echo "    ErrorLog /var/www/$1/error.log" >> /etc/httpd/sites-available/$1.conf
echo "    CustomLog /var/www/$1/requests.log combined" >> /etc/httpd/sites-available/$1.conf
echo "</VirtualHost>" >> /etc/httpd/sites-available/$1.conf

ln -s /etc/httpd/sites-available/$1.conf /etc/httpd/sites-enabled/$1.conf
systemctl restart httpd
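As a possible cleanup, the block of echo-append calls can be collapsed into a single heredoc.  A self-contained sketch follows; it writes under /tmp purely so it’s safe to run as-is, whereas the real script would target /etc/httpd/sites-available/$1.conf:

```shell
#!/bin/sh
# Sketch: generate the same vhost file with one heredoc instead of repeated echoes.
# Writes under /tmp purely for safe illustration.
domain="${1:-example.org}"
conf="/tmp/$domain.conf"
cat > "$conf" <<EOF
<VirtualHost *:80>
    ServerName $domain
    ServerAlias www.$domain
    DocumentRoot /var/www/$domain/html
    <Directory /var/www/$domain/html/>
        AllowOverride All
    </Directory>
    ErrorLog /var/www/$domain/error.log
    CustomLog /var/www/$domain/requests.log combined
</VirtualHost>
EOF
cat "$conf"
```

Because the heredoc delimiter is unquoted, $domain expands just as the echoed strings did, and the angle brackets need no escaping.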

If you’re interested in cheap, enterprise-grade web hosting, without the hassle of managing the setup, feel free to contact me!

How to Issue Bulk Refunds through the Stripe API

Today was not a good day.  In short, a nonprofit’s online donation form was hit 1,285 times in an attempt to validate stolen credit cards.  Unfortunately, 120 of those attempts succeeded, leaving our Stripe account with over $600 in fraudulent donations.  I needed a quick way to fully refund those charges in bulk.  Through the Java API client, I was able to do the following.  I figured I’d throw it out here, in case someone else can use it.  Note that although it uses the Java client, the concept is identical for the other clients.

import com.stripe.Stripe;
import com.stripe.model.Charge;
import com.stripe.model.ChargeCollection;

import java.util.HashMap;
import java.util.Map;

public class StripeRefunder {

    public static void main(String[] args) {
        try {
            // USE YOUR STRIPE KEY HERE.
            Stripe.apiKey = "";

            int iterationCount = 100;
            String startAfter = null;
            int totalCount = 0;
            // The API pages charges 100 at a time -- keep going until a page comes back short.
            while (iterationCount >= 100) {
                ChargeCollection charges = get(startAfter);
                iterationCount = charges.getData().size();
                totalCount += iterationCount;
                if (iterationCount > 0) {
                    startAfter = charges.getData().get(iterationCount - 1).getId();
                }
            }

            System.out.println("TOTAL REFUNDS: " + totalCount);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static ChargeCollection get(String startAfter) throws Exception {
        Map<String, Object> params = new HashMap<String, Object>();

        // Target only the fraudulent charges: paid, and matching the exact
        // amount (in cents) used by the card testers.
        params.put("paid", true);
        params.put("amount", 500);

        // 100 is the maximum page size.
        params.put("limit", 100);
        if (startAfter != null) {
            params.put("starting_after", startAfter);
        }

        // Fetch the page, then fully refund each charge on it.
        ChargeCollection charges = Charge.all(params);
        for (Charge charge : charges.getData()) {
            charge.refund();
        }
        return charges;
    }
}

The Responsible Consultant: What project information should my software/web developer provide?

Getting right to the point, many (possibly even most) software/web developers will simply hand you the deliverable upon project completion and call it a day.  The tools and processes used are often held back, forcing you to always go through them for updates, or to eventually argue with them to get what you need.  Although withholding that information may make a bit of business sense on the consultant’s side, I’d argue that the methodology is terrible.

Lately, I’ve been approached by multiple organizations in a really tough spot: they were working with a developer and something tragic happened to him/her mid-process.  The organization had been kept in the dark about the tools, processes, tasks, online accounts, etc. necessary to pick the project back up.  In the end, this often forces the clients to restart from square one.

The following is a list of topics to discuss with your consultant before the project even starts.  During the ongoing work, require that they keep you in the loop.  At a bare minimum, you should be able to 1.) find a replacement with the necessary skills and 2.) pick up where the project left off.

  • “Source control management” (SCM): Most developers house all code in an SCM repository (git, SVN, and others) or some central location.  Ensure you have access to this — it’s, by far, the most vital piece!  Most consultants will be hesitant to give you access prior to the final payment, but you should at least require a full copy of the code at each paid milestone.
  • Have the developer provide you a list with the programming languages, libraries, technologies, methodologies, etc. used in the project, just in case you need to find a qualified replacement with the right skills.
  • If the software uses any third-party services, make sure you receive a full list of all URLs, usernames, and passwords.
  • The above also applies for where the live software/web application is eventually run.  Require the account information for the server, hosting company, or cloud platform.
  • If the system includes a database, know where and how to fully export it.
  • Have the consultant maintain a description of the system architecture, methodologies, tips, gotchas, etc.  Think of this as a quick tutorial for the replacement.
  • Task tracking: the developers should maintain a list of open, in-progress, and finished tasks, giving a clear view into the current/future efforts and what is already complete.

Again, some consultants will be reluctant to provide the above, or at worst will refuse outright.  However, I’d argue that their level of irresponsibility is extremely risky.  Tragedies, breaches of contract, and other negative circumstances can happen.  Be prepared to keep your project on track!

How to Backup an OpenShift MySQL Database with a Shell Script

As a part of my ongoing consulting with nonprofits, I oversee over a dozen web applications running on OpenShift.  I needed an easy way to backup all of the MySQL databases in one shot.  So, I cooked up the following shell script.  It’s pretty dirty, but it works.  The script assumes Linux and is run as a cron job, but the concept could be easily adapted to other operating systems.  I thought I’d throw it out there in case it’s useful to anyone else.

#!/bin/bash

now=$(date +"%Y-%m-%d")
NAME="backup-$now"
# Edit this: the local directory where the backups should be collected.
LOCALDIR="[BACKUP DIR]/$NAME"

rm -rf $LOCALDIR
mkdir -p $LOCALDIR
cd $LOCALDIR

backupSql() {
 # TODO: It would be better to 'source' all the environment variables in one shot, but I wasn't able to find a way to do that. For now, just scp the env files and use them.
 scp "$1:.env/OPENSHIFT_MYSQL_DB_*" .
 local username="`cat OPENSHIFT_MYSQL_DB_USERNAME`"
 local password="`cat OPENSHIFT_MYSQL_DB_PASSWORD`"
 local host="`cat OPENSHIFT_MYSQL_DB_HOST`"
 local port="`cat OPENSHIFT_MYSQL_DB_PORT`"

 ssh $1 "rm -f app-root/data/$2.sql ; mysqldump --user=$username --password=$password --host=$host --port=$port --complete-insert $2 > app-root/data/$2.sql"
 scp $1:app-root/data/$2.sql .
}

backupSql "[SSH HOST]" "[APP NAME]"
# ... (backup multiple apps at once by repeating the above)

cd ..
tar -zcvf $NAME.tar.gz $NAME
rm -rf $NAME

You’ll need to edit a few things:

  • LOCALDIR’s targeted location
  • The actual calls to the ‘backupSql’ function.

UPDATE: A few folks have asked why I don’t simply use ‘rhc snapshot’ for DB backups.  Honestly, I can’t quite remember the circumstances that led to this approach.  This post had sat in my queue for a while before I actually published it.

Here’s what I think happened.  I’m using OpenShift to host about a dozen platforms for nonprofit organizations.  We started using it just months after OpenShift Online launched.  Since it was in its early stages, I wanted to make sure the backups would be portable to some other solution if that became an urgent need.  At the time, I think the snapshots didn’t include an actual .sql export of the MySQL DB; it was more of a binary approach, applicable solely to an OpenShift restore.  No idea if that assumption was correct, nor do I know whether it’s still the case.

Define: Good Consultants

This morning, I stumbled across this blog post, describing “Good Consultants vs. Bad Consultants”.  Although it’s a little rough around the edges, the points it makes are important.

The main takeaway point is the first sentence:

Bad consultants make money off their customers, good consultants make money for their customers.

When you’re looking for a technology consultant to help your nonprofit organization or business, find one that’s passionate about helping it succeed and will stick with it long-term!  You are not solely a source of income.

Tutorial: Spring + Hibernate + HikariCP

HikariCP is a newer JDBC connection pool, but it has already gained a large following.  And for good reason!  It’s lightweight, reliable, and performant.

We recently added it as a core module to Hibernate ORM: hibernate-hikaricp (to be released in ORM 4.3.6 and 5.0.0).  However, I wanted to try replacing C3P0 within NeighborLink’s new web platform, which has been plagued with connection timeouts (regardless of the many iterations of config changes) and other quirks.  NL is based on Spring MVC and Hibernate ORM 4.2.12, so hibernate-hikaricp is a moot point in this setup.  We can simply feed the HikariCP DataSource implementation directly to the Spring LocalSessionFactoryBean.  Here’s my setup and configuration:



Spring configuration

@Configuration
@EnableTransactionManagement
public class AppConfig {

    @Bean
    public LocalSessionFactoryBean sessionFactory() {
        LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
        sessionFactory.setDataSource(dataSource());
        sessionFactory.setHibernateProperties(hibernateProperties());
        ... (entity packages to scan, etc.)
        return sessionFactory;
    }

    @Bean
    public HibernateTransactionManager transactionManager() {
        HibernateTransactionManager txManager = new HibernateTransactionManager();
        txManager.setSessionFactory(sessionFactory().getObject());
        return txManager;
    }

    private DataSource dataSource() {
        // url, username, and password are defined elsewhere (properties file, env, etc.).
        final HikariDataSource ds = new HikariDataSource();
        ds.setMaximumPoolSize(100);
        ds.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
        ds.addDataSourceProperty("url", url);
        ds.addDataSourceProperty("user", username);
        ds.addDataSourceProperty("password", password);
        ds.addDataSourceProperty("cachePrepStmts", true);
        ds.addDataSourceProperty("prepStmtCacheSize", 250);
        ds.addDataSourceProperty("prepStmtCacheSqlLimit", 2048);
        ds.addDataSourceProperty("useServerPrepStmts", true);
        return ds;
    }

    private Properties hibernateProperties() {
        final Properties properties = new Properties();
        ... (Dialect, 2nd level entity cache, query cache, etc.)
        return properties;
    }
}

This setup also requires less maintenance, since rather than relying on Hibernate <-> HikariCP integration and supported versions, you sidestep that entirely and directly feed HikariCP into Spring.  This should theoretically allow you to use HikariCP with any version of Spring supporting Hibernate through a SessionFactoryBean.  Also note that although I’m using the annotation-based Spring configuration, the concepts would be similar through XML.

Man vs. JIRA: The 3,000+ Issue Tracker Fight

What do you get from a 10+ year old open source framework, thousands and thousands of users within a wide range of roles, and tremendous complexity?  A JIRA project with over 3,000 unresolved tickets, ranging from brand-new to a stale 8+ years old.  Welcome to Hibernate ORM.

Is the large number indicative of low software quality?  Definitely not.  And therein lies the problem.  The vast majority of the tickets are no longer issues, no longer relevant, or duplicates.  But due to the sheer quantity, it became nearly impossible to weed through them.

I became the self-appointed “JIRA czar”, in an attempt to clean it up.  The following details the steps I’ve taken, so far, in case they’re useful to other teams in similar situations.  Unfortunately, while some steps are automatable, the majority require a lot of tedious, manual work.  But in the end, it’s worth it.

  • Manual query*: keywords “reject” or “can’t reproduce” (and variations).  Our community, thankfully, often attempts to reproduce older issues and will comment, but we don’t always notice the message.
  • Manual query*: <= 1 vote.  This often signifies “staleness” for bugs or a lack of interest/relevance for new features.
  • Manual query*: low number of watchers.  This can also signify “staleness” or lack of involvement, but is far less reliable.
  • We use a custom “Awaiting Test Case” state.  If a ticket has sat in it with no response, beyond a given threshold, automatically reject.
  • Manual query*: participants list includes a core team member.  This turned up numerous tickets where a team member mentioned a lack of feasibility, feature misunderstanding, etc., but for whatever reason did not actually reject the ticket.
  • I started a really rough script, using the JIRA API, that attempts to discover duplicates by analyzing stacktraces within the tickets.  It’s definitely a work-in-progress, but it has already proved useful (and contributions are welcome).  Alternatives do exist, but I haven’t found any that are 1.) open source, 2.) still maintained, 3.) not horrendously complicated, and 4.) usable for Java-related text.  If anyone has suggestions, I’m all ears!

* denotes the addition of common query parameters: unresolved, unassigned, and reported by someone outside of the core team.
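As a concrete illustration, the “low votes” pass above might look something like the following JQL; the project key and group name here are placeholders, so adjust them to your own setup:

```
project = HHH
  AND resolution = Unresolved
  AND assignee is EMPTY
  AND votes <= 1
  AND reporter not in membersOf("core-team-group")
ORDER BY created ASC
```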

Through the above steps, I’ve been able to close out nearly 1,000 tickets.  And that does not mean I’ve become trigger happy and closed issues that really are still problems.  But frankly, I’d rather be overly aggressive and rely on the community to push back if something is erroneously closed.  Being too conservative will not help.

To help prevent this situation from happening again, I’ve implemented some regular steps and rules:

  1. Check new tickets each morning.
  2. If the ticket is a question, say so, politely request use of the forums, and close.
  3. If no test case is provided, set to Awaiting Test Case and request one.
  4. Continue to automatically reject tickets sitting in Awaiting Test Case for more than 2-3 months with no response.
  5. Above all, do a better job of educating users, rather than scolding them.

If anyone has other tips, please post them!

Hibernate ORM Presentations at DevNexus 2014

Frankly, Hibernate ORM has been missing from the conference scene for quite a while.  Starting this year, I’m attempting to make it more of a priority.  The framework has received many improvements and new features that are well-worth presenting.

I’ll be starting with two talks at DevNexus 2014 in Atlanta.  One focuses on Hibernate ORM tips, tricks, performance improvements, and common myths/misconceptions.  The other presents several powerful features provided by Hibernate, outside of the typical ORM/JPA space.  The abstracts are below.  I’d love feedback and requests!

Hibernate ORM Tips, Tricks, and Performance Techniques

Out-of-the-box, Hibernate ORM offers limited overhead and decent throughput.  Early-stage applications enjoy the convenience of ORM/JPA with great performance.  However, scaling your application into an enterprise-level system introduces more demanding needs.

This talk will describe numerous tips and techniques to both increase Hibernate ORM performance and decrease overhead.  These include some basic tricks, such as mapping and fetching strategies.  Entity enhancement instrumentation, third-party second-level caching, Hibernate Search, and more complex considerations will also be discussed.  The talk will include live demonstrations of these techniques and their before-and-after results.

Not Just ORM: Powerful Hibernate ORM Features and Capabilities

Hibernate has always revolved around data, ORM, and JPA.  However, it’s much more than that.  Hibernate has grown into a family of projects and capabilities, extending well beyond the traditional ORM/JPA space.

This talk will present powerful features provided both by Hibernate ORM and by third-party extensions.  Some capabilities are brand new, while others are older-but-improved.  Topics include multi-tenancy, geographic data, auditing/versioning, sharding, OSGi, and integration with additional Hibernate projects.  The talk will include live demonstrations.