Yandex Subbotnik

Today I attended the 5th Yandex Saturday event, the Subbotnik. This is a series of technical mini-conferences held in different cities: a place where Yandexoids show other developers how they work, their toolset, and other Yandexish things. It was my second Subbotnik. Last year most of the talks were about BEM (their front-end architecture) and the toolset around it, so I was lost from the second talk of the day onwards.

Today's Subbotnik was prepared much more thoughtfully. Yes, BEM still got mentioned a couple of times in every talk, but all the topics were aimed at a broader audience that doesn't know all the ins and outs of bem-tools.

This Subbotnik had two tracks: one for users of Yandex's APIs, the other more for general developers, with talks about front-end development and cloud services. I watched the second track, as I don't use the Yandex API at all and so have no interest in it.

All the talks were in Russian, and most of the slides (if not all) were too. Still, I think they're worth mentioning on the English-speaking internet. You never know what may turn out to be useful.

Yandex.Disk. The flight is normal, by Vladimir Rusinov.
A short introductory talk about Yandex.Disk from its product manager. Yandex.Disk, for those who don't know, is an online backup, storage, and syncing service, something very close to Dropbox. The product is doing fine, usage is growing, and other services actively integrate with it via the API. There were some numbers, some history, and a funny picture. As it turned out, it was a warm-up for the second talk.

Operation and development of fast-growing clouds by Michail Belov and Oleg Leksunin.
The first half of the talk was given by Oleg Leksunin, a system administrator of Yandex.Disk. He talked about the requirements that were crucial for such a service, the architecture of the service, and how it is organized and deployed. He also covered some key parts of the system and what each of them does.

There's a so-called MPFS service, a virtual file system for users' files. When a file is added to Y.Disk, a request is sent to MPFS, which synchronously stores the file as blobs on two different servers (using the internal Mulca project). Once the file is saved, its metadata is written to MongoDB.
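As a toy illustration of that write flow (the names and structure are mine, not Yandex's actual code), the key point is the ordering: both blob copies are stored before the metadata is written:

```python
# Toy sketch of the MPFS write flow; dicts stand in for the Mulca
# blob servers and MongoDB. Not real Yandex code.
import hashlib

class MPFSSketch:
    def __init__(self):
        self.storage_a = {}   # first blob server
        self.storage_b = {}   # second blob server
        self.metadata = {}    # metadata store (MongoDB in the real system)

    def put_file(self, path, content):
        blob_id = hashlib.md5(content).hexdigest()
        # store the blob on two different servers, synchronously...
        self.storage_a[blob_id] = content
        self.storage_b[blob_id] = content
        # ...and only then save the file's metadata
        self.metadata[path] = {'blob_id': blob_id, 'size': len(content)}
        return blob_id
```

Writing metadata last means a crash mid-upload leaves at worst an orphaned blob, never metadata pointing at a file that isn't fully stored.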

The protocol for Y.Disk is extended WebDAV, chosen because it implements most of the needed features and has existing working tools like wget or curl, and almost any modern OS has built-in WebDAV support. The WebDAV server is written in Erlang; it was described in a post on Habrahabr (in Russian). MPFS is written in Python with Flask and Jinja2 and is served via nginx. Client applications use XMPP for push notifications.

Overall it was a great talk with lots of interesting details and insights. Usually there's not much you can take away from talks like this, but it's interesting to see how big companies build big services. As it turns out, everything may be quite simple.

Cryochamber for statics by Alexey Androsov was, I think, the best talk of the day. Alexey works as a front-end engineer on the Yandex.Mail team. He started by discussing how static files are usually served. The common approach is to set a long cache period for static files, e.g. a year or a month, and make the URL of each static resource unique by adding the app version or revision (/static/1.1/app.js or /static/app.js?rev=a7cb9). The problem appears when there's only a small change in the front-end app: the version or revision changes, and all the static files have to be downloaded again. For big front-end apps this can become a serious problem; Y.Mail's static resources total more than 1 MB, so with frequent code changes there's a lot to download every time.

The Yandex.Mail team took a different approach. They now put the files for all versions into the same folder, but the name of each resource is generated from the file's hash. A postprocessor then substitutes all links to the file in the content served to the user with the new hash-based filenames.

So this CSS code:

.class {
    background: url("1.png");
}

becomes:

.class {
    background: url("freeze/f80abidmv6dlfo9.png");
}
There are some tricks needed to make all of this work for JavaScript too. It's all done with borschik, the app that does all the magic: renaming, processing links, etc. I'll definitely take a look at this project; it looks very interesting.
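The renaming itself is simple to sketch. Here's a rough illustration of the "freeze" idea in Python (my own toy code, not borschik, which does much more): name each file after a hash of its content and rewrite the references in the CSS:

```python
# Toy sketch of content-hash "freezing" for static files.
import hashlib

def freeze(files):
    """files is {name: bytes}; returns (old->new name map, frozen files)."""
    mapping, frozen = {}, {}
    for name, content in files.items():
        ext = name.rsplit('.', 1)[-1]
        new_name = 'freeze/%s.%s' % (hashlib.md5(content).hexdigest(), ext)
        mapping[name] = new_name
        frozen[new_name] = content
    return mapping, frozen

def rewrite_css(css, mapping):
    """Postprocess CSS so url(...) references point at the frozen names."""
    for old, new in mapping.items():
        css = css.replace('url("%s")' % old, 'url("%s")' % new)
    return css
```

Since the name depends only on the content, an unchanged file keeps its name across releases and stays cached, no matter how the app version changes.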

Michail Korepanov, with a talk on incremental updates on the client, continued the discussion of how to make updates of front-end apps less painful. They decided to have the browser download not the whole file with the new version of the app, but a diff update instead.

So now the app doesn't load the CSS and the main JavaScript via links; instead they are downloaded by a small JavaScript bootstrapper and saved to localStorage. After the resources are saved, the CSS is inserted inline into the page and the JavaScript is evaluated with "new Function()".

The next time you open Y.Mail, the bootstrapper sends a request with the app version stored in localStorage, the server responds with a diff, and the bootstrapper patches the code in localStorage and injects it into the page.

Diffs are prepared in advance for the 3 previous versions when a new version is deployed to production. If the version stored in localStorage is too old, the server responds with the whole latest version of the app. Yandex.Mail deploys a new front-end version twice a week, so this method saves a lot of bandwidth and makes loading a new version much faster.
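The idea of preparing a diff on deploy and applying it on the client can be sketched with Python's difflib (the real implementation does the patching in JavaScript in the browser; this is just to show the two halves of the scheme):

```python
# Sketch of diff-based updates: the server prepares a delta between
# two app versions; the client reconstructs the new version from it.
import difflib

def make_diff(old_source, new_source):
    """Prepared on deploy: a delta from the previous version to the new one."""
    return list(difflib.ndiff(old_source.splitlines(keepends=True),
                              new_source.splitlines(keepends=True)))

def apply_diff(delta):
    """Done by the bootstrapper: rebuild the new version from the delta."""
    return ''.join(difflib.restore(delta, 2))  # 2 = take the "new" side
```

The delta is usually tiny compared to the full bundle, which is exactly why this pays off with two deploys a week.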

Michail also mentioned RFC 3229, Delta encoding in HTTP. For now, computing diffs and patching must be done by hand in JavaScript, but an implementation of this RFC could automate most of that work. Unfortunately, it is not implemented in any browser yet. There's the Chromium-based Yandex.Browser, so maybe that would be the place where this RFC gets implemented first ;)

I'm not sure how this relates to the previous talk by Alexey, since they attack the same problem from different angles. This talk looked a bit more theoretical, though as I understood it, the approach is used in production.

The next talk after a short break, Single-page application using node.js and BEM, was given by Eugene Filatov.

The main point of the talk was to show how Yandex makes web apps that can be rendered both on the backend and in the browser. Server rendering is needed so that search engines can index your content properly.

The BEM methodology operates with reusable blocks: a news block, a search bar block, an email list block, etc. Every block keeps everything in one place: the HTML, CSS, and JavaScript it needs to work. But since a block may be rendered both on the server and on the client, some things need different handling. The talk used setting a cookie as an example: on the server we have to add a Set-Cookie response header, while in the browser we can, in the simplest case, write to document.cookie. There's also a lot of shared code, so they prepare 3 files:
  • cookie.js - code specific to the browser
  • cookie.priv.js - code specific to node.js ("priv" is for "private")
  • cookie.common.js - shared code for both versions
bem-tools, which postprocesses the page, then gathers bundles for the node and browser environments.
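A drastically simplified sketch of that bundling step (my own toy code, not bem-tools): for each environment, concatenate the common file with the environment-specific one:

```python
# Toy sketch of per-environment bundle gathering for a BEM-style block.
def build_bundle(block, env, sources):
    """env is 'browser' or 'node'; sources maps filenames to their code."""
    specific = '%s.js' % block if env == 'browser' else '%s.priv.js' % block
    common = '%s.common.js' % block
    # shared code first, then the environment-specific part
    return sources[common] + '\n' + sources[specific]
```

The point is that block authors write three small files once, and the build produces two self-contained bundles.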

Eugene shared a pie chart showing that common code usually makes up as much as 54% of a block's code, so it's definitely worth having a similar code organization if you want to do both server and browser rendering.

The next talk was Interface development in distributed teams by Sergey Puzankov.

It was just an overview of how the whole process is organized, how communication happens, and so on. There's not much to tell; everything is the same as everywhere else. Well, there are two things that everybody should have but mostly doesn't: regular internal conferences for sharing knowledge and up-to-date docs about your systems.

The last talk was Code management, or Why SCM by Sergey Sergeev.

Sergey is a well-known git advocate, so he talked about how his team uses git, best practices, and anti-patterns. Now I know that the Yandex SERP front-end team uses git-flow :) There was nothing really new here either, but it was a nice opportunity to swap user stories and ask questions.

It was an interesting event with great speakers. Although there were no groundbreaking revelations, it was a great opportunity to gather developers and discuss the problems that all of us share, small dev teams and large software giants alike.

Thanks, Yandex team, for the interesting event and great speakers!

Sublime Text 2 for Django developer

Many developers know about the battle between emacs and vim. Well, not exactly between the editors themselves, but rather between their fanboys. As for me, I started with gedit, the default Gnome (and therefore Ubuntu) text editor. After adding a couple of plugins, most importantly dummy code completion, it worked well for a couple of months.

A bit later I decided gedit wasn't cool enough :-) and switched to vim. I can't say it was love at first sight, but the longer I used it, the more I liked it. I ended up completely won over by command mode. I don't care if I need to type one more key to yank into the system clipboard; I don't need that too often. But copying into internal registers, having as many of them as I want, and having search right under your pinky is awesome. I started searching dozens of times more often than I did in other editors and IDEs. Combine this with remembering your positions (so you can jump back to previous cursor positions), your changes, and thousands of other things. Just look at any vim cheat sheet and you'll be scared, and maybe interested :-)

Yes, I know, it takes time to learn and even more time to get used to, but once it's in your muscle memory it is extremely comfortable. Still, I have various issues with it. I almost can't use Rope, the code completion is almost as dumb as gedit's, and configuring it is cargo-cult programming for me :).

I tried emacs, but it leans on the Control and Alt keys many times more, and I don't really like that. Now let's move on to the subject :-)

The text editor Sublime Text 2 is gaining more and more attention. It is a multiplatform, quick, nice-looking editor with a couple of very interesting features.

One of the most used ST2 features is opening files from the project. You press Ctrl+P, start typing a file path, and it gives you a list of matching files. (You have to create a project first to use this.) It works great and has become a very handy way to move between files. Another option is Ctrl+Shift+P: the same kind of text input opens, and as you type a command name it is filtered for you. For example, you can type "Rope" and it will show all the Rope commands first, followed by the other commands that contain the letters r, o, p, e. It is hard to explain; try it yourself and you'll understand it quickly.

Opening file from project
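The filtering behaves like subsequence matching: the query's letters must appear in the candidate in order, though not necessarily adjacent. A toy sketch of that idea (my own illustration; ST2's real algorithm also scores and ranks matches):

```python
# Toy subsequence-style fuzzy filter, like Ctrl+P / Ctrl+Shift+P matching.
def fuzzy_match(query, candidate):
    """True if the letters of `query` appear in `candidate` in order."""
    letters = iter(candidate.lower())
    # `ch in letters` advances the iterator, so order is enforced
    return all(ch in letters for ch in query.lower())

def fuzzy_filter(query, candidates):
    return [c for c in candidates if fuzzy_match(query, c)]
```

This is why typing "rope" surfaces not only Rope commands but also anything else containing those letters in sequence.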

Moving between methods is no harder: Ctrl+R shows a list of the methods in the current file, which you can filter to quickly jump to a declaration.

The most surprising feature for me is vintage mode. It is a simulation of vim's most commonly used commands: hjkl navigation, commands for copying, deleting, pasting, inserting, etc. But no miracle happened; it's quite limited. Motion commands aren't as consistent as in vim, and ESC doesn't clear a pending command, so if you typed d, you will delete something no matter how many times you press ESC. Vintage mode is built entirely on keybindings, so it may be possible to fix some things, but I think emulating vim's whole behavior would be overkill.

Another great advantage of ST2 is its configuration file. You can build your own from scratch, referring to the default one, which is pretty well documented. The configuration file is JSON, so you don't have to learn yet another configuration language. My (tiny) configuration file is on GitHub.

I have installed a couple of plugins for Django development:
  • Djaneiro is a nice set of autocompletions and a color scheme for Django. I printed its README to see the full list of available substitutions, e.g. var becomes {{ }} in a template, block becomes {% block %} {% endblock %}.
  • SublimeRope is an attempt to bring the powerful Rope library into Sublime Text 2. Rope can do a lot of IDE-ish things with Python code, like extracting methods, searching docstrings, going to definitions, renaming, etc. At the moment the plugin only does renaming and going to definitions, but the GitHub repo is updated from time to time, so I hope we'll see more features in the future.
  • SublimeLint was a big discovery. I knew lint existed but had never actually used it. It can show you warnings and errors like the big compilers do: variables declared but never used, unused imports, syntax errors, and PEP8 violations. (This is not the best explanation of what lint is, though :).) It can make the editor a bit slower, but you can turn background linting off and back on when you need it.
SublimeLint shows PEP8 warning
Autocompletion in ST2 is something I still don't understand. Quite often I can't predict what suggestions it will give, especially what it will autocomplete with the Tab key.

Everybody is talking about the minimap. I can't say much about it. It is an interesting way to find your place in a file, no more, no less.

In summary, I think Sublime can become my default editor. I've been using it for a week already, and though I miss some vim-specific features, I like it very, very much. I think I'll try the reverse path: recreating everything I liked so much in ST2 in vim. Depending on the results, I'll pick one of them for day-to-day use.

Even if I do switch back to vim, Sublime Text 2 showed me things in an editor that I liked and that can make my life easier and nicer. Users really don't know what they want until they get it :)

Easy REST API with django-tastypie

One day at work I had to implement a JSON response for a simple GET request. I was about to code yet another RESTish bicycle when my project manager told me to use django-tastypie. I'm sure he knows what he's talking about, so I opened Google to find out what tastypie is.

When the spec for my task was written, its author didn't think about what tool would be used for the implementation. On the other side, the tastypie authors didn't know what people would do with their lib. So my task was to make these two worlds meet in one place. The project page on GitHub has a simple example showing the very basics, but the real world is usually more complicated. I'll try to show you how I worked with tastypie.
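For context, a minimal tastypie resource looks roughly like this (a sketch with a hypothetical Note model and myapp package, not the actual code from my task):

```python
# api.py -- minimal read-only resource; Note and myapp are hypothetical.
from tastypie.resources import ModelResource
from myapp.models import Note

class NoteResource(ModelResource):
    class Meta:
        queryset = Note.objects.all()
        resource_name = 'note'
        allowed_methods = ['get']  # JSON responses to GET only
```

Registered in urls.py via tastypie's Api class, this already gives you list and detail GET endpoints with JSON serialization, which is exactly the "simple GET request" case I started from.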

Enhancing git commit message template

My favourite DVCS, git, has a way to generate (or at least prepopulate) your commit message. I'm going to tell you how it works.

We use feature branches at work: every ticket created in the bug tracker gets its own git branch. Sometimes you have to see all the changes related to one ticket, but branches get merged into master sooner or later, so commits from different branches end up mixed together. To solve this we have a convention that every commit message has to start with 'ticket:###'.
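One way to automate such a convention is a prepare-commit-msg hook. Here's a sketch in Python, assuming branch names like ticket-123-short-description (the branch naming scheme here is my assumption; adapt the pattern to your own):

```python
#!/usr/bin/env python
# .git/hooks/prepare-commit-msg -- sketch: prefix the commit message
# with 'ticket:NNN' taken from a branch named like 'ticket-123-...'.
import re
import subprocess
import sys

def ticket_prefix(branch):
    """Turn 'ticket-123-fix-login' into 'ticket:123 ', or '' if no match."""
    m = re.match(r'ticket-(\d+)', branch)
    return 'ticket:%s ' % m.group(1) if m else ''

if __name__ == '__main__' and len(sys.argv) > 1:
    msg_file = sys.argv[1]  # git passes the path to the message file
    branch = subprocess.check_output(
        ['git', 'symbolic-ref', '--short', 'HEAD']).decode().strip()
    prefix = ticket_prefix(branch)
    if prefix:
        with open(msg_file) as f:
            body = f.read()
        with open(msg_file, 'w') as f:
            f.write(prefix + body)
```

Make it executable with chmod +x, and every commit on a ticket branch starts out prepopulated with the right prefix.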

Android App Inventor

I've had the idea of developing for Android ever since I bought this shiny, speedy, cool Acer Liquid E. Anyone would! It's so smart, so powerful, and it's with you literally everywhere. It's a great temptation to try to handle that power yourself.

I installed all the development requirements, but since then haven't had much urge to do anything with them, because the Android SDK would take time to get into, and I'm more focused on Python and Django stuff at the moment. Meanwhile, I remembered there's a thing called Android App Inventor, a tool for creating Android apps visually, without actual coding. That's an easy way to try something on Android.

Simple automation of virtualenv creation

Working with Python/Django-related things is great fun when we're talking about actual development. But there are a couple of things around it that you have to keep in mind and manage manually. One of the most annoying is activating different virtual environments.

I have a few projects on my local machine, most of them with separate virtualenvs, so to move from one project to another you have to deactivate one, activate the other, and only then start working. When I was starting out, I thought it was no big deal: only two commands, actually. But after tens of switches you start to get angry. A solution was found on Max Klymyshyn's blog. There's only one thing to mention: on Ubuntu I placed Max's code in $HOME/.bashrc, because Ubuntu doesn't respect .bash_profile.

But last weekend I decided to try a couple of projects from GitHub, and for every one of them I had to create a separate virtual environment (and, lazy as I am, one that activates automatically). While creating yet another virtualenv, I thought the whole procedure deserved a bit of automation too. So I made a little bash script that creates a new, clean virtual environment in a new directory. Here's the script:

if [ $1 ]; then
    echo 'Making new dir' $1
    mkdir $1
    cd $1

    echo 'Creating new dev environment'
    virtualenv .env --no-site-packages
    echo VIRTUALENV_PATH=`pwd`/.env > .venv
else
    echo 'Parameter is required for dir name'
fi
It only checks that you've specified a name for the new directory where the virtualenv is going to be created. I put it into $HOME/bin/newenv and ran chmod +x newenv to make it executable. Now, every time I want to try something new from GitHub, I only have to type one command, which creates a great place for experiments whose virtualenv is activated as soon as I cd into the directory.

Finally, I tweaked my bash prompt (post in Russian). Now it shows whether the current dir is an svn/hg/git repo, and if so, the name of the current branch and whether it has uncommitted changes. It looks geeky as hell, but it's not as informative and useful as I expected :) It slows the terminal down (because it runs svn, hg, and git twice after every command: once to get the branch name and once for the status), and it takes up half of the command line. So there are more cons than pros, but we'll see; maybe I'll get so used to it that I won't be able to remove it.

Testing Google App Engine application

There was a time when I did no testing at all (as The Kooks sing, "You're so naive"). Spycify was started and reached its first releases with about zero tests. Those first versions worked, but had a whole bunch of annoying bugs. The only way out was a massive refactoring of the whole app.

Previous versions were checked manually: I started the dev server and clicked through the changed parts to see whether they worked. Sometimes even that wouldn't help, and I had to upload the app to production to test it there, manually of course. The massive refactoring was a nice chance to start using automated testing. No more manual clicking and repeating myself.