Hello! This is my very first post on the dev blog and I have to say I am
thrilled! I am very new to Ruby on Rails and just started programming.
This is the first blog post in my series about building and developing a
blog. All dev newbies out there can read about my experiences here every
week. I will share some tips & tricks that might speed up our progress
toward becoming true development pros.
Last week I was working on a project that will give educational providers more insight into how their courses are performing on our site. I first turned to the mixpanel_client gem but quickly realized that building up requests was getting in the way of slicing and dicing all the data that we’ve got stored in Mixpanel. To make things easier, I created a library that implements a simplified interface to mixpanel_client. Introducing mixpannenkoek!
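To give a feel for what a "simplified interface" can look like, here is a small chainable query object that only assembles the parameter hash you would eventually hand to mixpanel_client. The class and method names below are illustrative inventions for this sketch, not mixpannenkoek's actual API:

```ruby
# A hypothetical sketch of the idea: a chainable query object that keeps
# the mixpanel_client request-building plumbing in one place. Names are
# illustrative, NOT mixpannenkoek's real API.
class SegmentationQuery
  def initialize
    # Sensible defaults for a Mixpanel segmentation request.
    @params = { type: 'general', unit: 'day' }
    @filters = {}
  end

  # Each setter returns self, so calls can be chained.
  def event(name)
    merge(event: name)
  end

  def from(date)
    merge(from_date: date)
  end

  def to(date)
    merge(to_date: date)
  end

  def where(conditions)
    @filters.merge!(conditions)
    self
  end

  # The finished hash you would pass on to Mixpanel::Client#request.
  def to_params
    return @params if @filters.empty?

    clause = @filters.map { |k, v| %(properties["#{k}"] == "#{v}") }.join(' and ')
    @params.merge(where: clause)
  end

  private

  def merge(hash)
    @params.merge!(hash)
    self
  end
end

params = SegmentationQuery.new
  .event('course_view')
  .from('2014-05-01')
  .to('2014-05-07')
  .where(provider: 'acme')
  .to_params
# params[:where] is now 'properties["provider"] == "acme"'
```

The point of the pattern is that slicing and dicing (a different event, an extra filter) becomes a one-line change instead of rebuilding a request hash by hand each time.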
We are very excited to announce that we will migrate our databases to our new
PostgreSQL cluster this week. Instead of a vertically scaled master with a synchronous replica, we now have a relational database that scales almost horizontally. The cluster consists of two Pgpool-II servers and two PostgreSQL master servers. As I write this, the performance change in production has yet to be proven, but we are confident the improvement in database speed will be noticeable.
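For illustration, a Pgpool-II front for two backends is configured in pgpool.conf along these lines (hostnames and values here are placeholders, not our production configuration):

```
# pgpool.conf fragment (illustrative values, not our production config)
listen_addresses = '*'
port = 9999

# The two PostgreSQL backends behind Pgpool-II
backend_hostname0 = 'pg-master-1'
backend_port0     = 5432
backend_weight0   = 1

backend_hostname1 = 'pg-master-2'
backend_port1     = 5432
backend_weight1   = 1

# Write to every backend, spread reads across them
replication_mode  = on
load_balance_mode = on
```

With replication mode and load balancing on, writes go to both backends while reads are distributed between them, which is where the near-horizontal scaling comes from.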
At the end of November 2013, we stumbled upon a problem. We were using New Relic to monitor our application performance here at Springest. Our app server response time was consistently hanging around 500ms, but our site didn’t feel that fast. We wracked our brains; why would New Relic report that our site was fast, if in fact it was slow? New Relic does a great job of letting you know about performance problems within your application. It turns out, though, that you need to do some work to get the full story. I’ll illustrate with two graphs.
When working at my desk I work on a 24” monitor and I like to use a pretty big font size when there is enough space for it. But often I also work on my laptop, where there is much less screen real estate. I used to change the font size manually with ⌘ and the +/– keys. Yesterday I decided to automate things by letting Vim do the heavy lifting.
I found a small piece of AppleScript that outputs the dimensions of the currently available screens. I decided to check the number of vertical pixels, as it always increases when you attach an external monitor, due to the fact that most monitors have screen ratios higher than 16:9. The script switches font sizes if there are more than 900 vertical pixels, meaning a resolution higher than the native resolution of all 13” or smaller MacBooks.
osascript -e 'tell application "Finder" to get bounds of window of desktop' | cut -d ' ' -f 4
The osascript command returns something similar to 0, 0, 3200, 1200. The first two numbers are the distances from the left and top edges of the screen; the third and fourth are the width and height of the screen. I’m only interested in the fourth, so I use cut to select it.
Performance-wise this is pretty fast. I timed the command at about 0.01s on average, which is more than fast enough, as the check is only performed when you start Vim. That is also a point I would like to improve in the future: it would be awesome if it could detect screen switching automatically. But for now I am more than happy with the solution I came up with.
To put it all together in Vim, add the following code to your ~/.vimrc and change the font and sizes to your liking:

" Set font size based on screen size. When the vertical height is
" greater than 900 (i.e. an external monitor is attached to a 13"
" or smaller MacBook), use 18, else use 16.
if has('mac')
  if system("osascript -e 'tell application \"Finder\" to get bounds of window of desktop' | cut -d ' ' -f 4") > 900
    set guifont=Inconsolata:h18
  else
    set guifont=Inconsolata:h16
  endif
endif
We are looking for three experienced Ruby developers to strengthen our
current team of eight developers. We are a very developer friendly
company where everybody, from developers to marketeers to sales people,
is deeply involved in the product that you will be building with us.
As a developer at Springest, you will be responsible for a part of our
product, managing its roadmap with a small team of other developers,
your product manager, marketeers and sales people.
Updated: Since I originally posted this, a lot of people have reported issues with things not working as intended, especially around Elasticsearch upgrades. I have rebooted development on this to address those issues, and it should all work now.
There is one minor issue when creating a new RabbitMQ node, but restarting it after the first failed Chef run should get it up and running.
Changes include many bug fixes, an upgrade to Elasticsearch 1.0.1, and a Logstash upgrade to 1.4.1. Logstash is now installed from the official Elasticsearch package repos, using a very simple cookbook that I created to get rid of some complexity in the original one.
Thanks to all who have reported these issues!
I put together a bunch of cookbooks that will allow you to run a complete
Logstash setup on a scalable
AWS OpsWorks stack. At Springest we use it to ship between 250 and
about 1,000 log entries per second, depending on the RPM across our
18 servers.
A screenshot of Kibana3, which is included in the setup.
We are looking for an experienced Linux engineer to improve our server
environment. Most of us are experienced developers, some with a basic
understanding of what a server environment should be like. Due to growth
in data and complexity over the past years, maintaining our infrastructure
has become an interesting, full-time job that we think you would like.
And some minor performance issues with multiple
indexes that do not
perform well on MySQL.
In the past couple of weeks, we have rewritten all that was needed to
get our test suite (~6k tests) running on Postgres. That gives us enough
confidence to think this might actually work, but not enough to go
live yet. There are always parts of the code that are not thoroughly
tested, do not perform well in production, and so on. So what to do with
all the work that we – and Wercker, our CI tool – put into it?
We need more data! Live data, if possible. As Derek and I were
talking about this, he suggested it would be awesome if we could
replay traffic from production onto our new stack. An interesting
thought, so I got to work.
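As a sketch of what replaying traffic can look like at the application level, here is a minimal Rack middleware that serves every request from the primary app as usual and additionally copies each GET to a second stack. It illustrates the concept only; the class name and the injectable forwarder are inventions for this example, not the tool we actually used.

```ruby
require 'net/http'
require 'uri'

# Hypothetical sketch: mirror incoming GET requests to a second stack.
class TrafficMirror
  def initialize(app, mirror_uri, forwarder = nil)
    @app = app
    @mirror = URI(mirror_uri)
    # Injectable forwarder so the transport can be swapped or stubbed.
    @forwarder = forwarder || method(:forward)
  end

  def call(env)
    # Serve the real response first; mirroring must never break the site.
    response = @app.call(env)

    if env['REQUEST_METHOD'] == 'GET'
      path = env['PATH_INFO'].to_s
      path += "?#{env['QUERY_STRING']}" unless env['QUERY_STRING'].to_s.empty?
      begin
        @forwarder.call(path) # a production version would do this asynchronously
      rescue StandardError
        nil # errors on the mirror are ignored on purpose
      end
    end

    response
  end

  private

  # Default forwarder: replay the GET against the mirror host.
  def forward(path)
    Net::HTTP.get_response(@mirror.host, path, @mirror.port)
  end
end
```

Making the forwarder injectable means the mirroring transport can be replaced by a queue, or stubbed out entirely in tests, and firing the copy off-thread keeps the mirror from ever slowing down the live site.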
This is part two of the Holacracy series. If you’re interested in the whole story of us implementing
Holacracy, you might want to start
at the beginning.
Last week we had our first two Governance meetings. These meetings are about how the team is acting as a whole and what can be improved to make things run more smoothly. Day-to-day, work-related stuff seems to have no place in these meetings, other than maybe on a meta level. We got a good taste of how these meetings go in Holacracy.