This year's PolyConf, a "polyglot conference", had a slightly lower insight-per-hour rate for me than the last one, but still turned out to be pretty good - especially since it was one day longer this year. Here are some of the interesting things that I heard about:
The conference started with a short workshop on miniKanren, a relational engine that is sort of like Prolog but simpler and more pure - there are no cuts, and the search algorithm guarantees that as long as the result exists, the program will find it in finite time. We wrote a Scheme interpreter in it (which is easier than it sounds! Just define variables, lambda and application, and you're all set) and that made possible all kinds of fun with running programs backwards.
For instance, if `(append q '(c d e))` returns `(a b c d e)`, then what is `q`? Given a list like `(I love you)`, what are all possible programs that return this list? The most impressive example was running a proof checker backwards. Given a Scheme function that checks whether a (logical) proof is correct, and a statement like `p & (p -> q) & (q -> r) -> r`, miniKanren is able to find the proof for that statement.
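miniKanren itself lives in Scheme, but the "running append backwards" trick can be crudely imitated in plain Python by enumerating all splits of the result list. This is just a sketch of the idea, not real relational programming:

```python
def append_splits(result):
    """Yield every pair (q, rest) such that q + rest == result."""
    for i in range(len(result) + 1):
        yield result[:i], result[i:]

# what is q, if (append q '(c d e)) returns (a b c d e)?
target = list("abcde")
solutions = [q for q, rest in append_splits(target) if rest == list("cde")]
print(solutions)  # a single answer: [['a', 'b']]
```

Real miniKanren does this without any special-case enumeration code: the same `appendo` relation answers forward and backward queries alike.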
You can define schemas for your XML or JSON messages, but there are also several libraries for defining a protocol the way you would define a set of data types - for instance ASN.1, Protocol Buffers, Thrift, Cap'n Proto, Avro…
Being able to define a binary protocol and get an easy way of serializing and deserializing messages in various languages for free sounds pretty useful. And I guess these might do much better when you really care about performance - for instance when you have constrained processing resources or high throughput, and you don't want the text-parsing overhead.
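None of those libraries is shown here, but even Python's built-in `struct` module gives a taste of the "describe the wire format once, then pack and unpack bytes" idea. The message layout below is made up for illustration:

```python
import struct

# hypothetical message: a uint32 sensor id plus a float64 reading,
# little-endian -- the layout is described once by the format string
FMT = "<Id"

payload = struct.pack(FMT, 42, 21.5)            # serialize to 12 raw bytes
sensor_id, reading = struct.unpack(FMT, payload)  # and back again
print(sensor_id, reading)  # 42 21.5
```

The schema-based tools go further: they generate this kind of code for many languages from one shared definition, and handle versioning of the schema for you.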
OFFSET is a bad way of handling pagination. Don't count on Postgres to fetch that last page quickly!
Well, to be fair, the point of the talk was not to come up with convoluted schemes for efficient pagination in SQL, but to use the right tool for the job (in this case you could cache your data in something like Redis). And more generally - learn the technology you're using, push it to its limits, but don't be afraid to reach for something else if it might be better for your use case. Who knows - maybe adding a new technology to your stack will unlock some other possibilities you do not yet know about.
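For the record, the usual SQL-side alternative is keyset ("seek") pagination: remember the last id you served instead of skipping rows with OFFSET. A sketch using sqlite3 in place of Postgres, with a made-up schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (title) VALUES (?)",
                 [(f"post {i}",) for i in range(1, 101)])

def next_page(last_seen_id, page_size=10):
    # WHERE id > ? walks the primary-key index, so the cost does not
    # grow with the page number the way OFFSET does
    return conn.execute(
        "SELECT id, title FROM posts WHERE id > ? ORDER BY id LIMIT ?",
        (last_seen_id, page_size)).fetchall()

page = next_page(0)            # first page: ids 1..10
page = next_page(page[-1][0])  # next page: ids 11..20
```

The trade-off is that you can only step to adjacent pages, not jump to an arbitrary page number - which is often all a UI needs anyway.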
Also, if you want to introduce a new technology to the team, don't rewrite a mission critical component - you will reduce the bus factor, it will break and people will hate you. Start with something smaller like one-off scripts.
Cello is a fun little library that allows you to use high-level features in C, such as inheritance, polymorphism, GC and exceptions. Maybe not very useful, and somewhat abusive, but still pretty cool.
An interesting talk about WebSocket internals. I learned that the protocol is pretty simple but still provides some useful features, like splitting messages into frames (convenient when you're streaming) and marking them as either UTF-8 or binary. It also tries really hard to prevent over-eager caching by various proxies - the initial HTTP requests and responses contain random numbers, and frames sent by the client must be XORed with a random mask.
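The client-side masking is simple enough to sketch: each payload byte is XORed with a repeating 4-byte masking key, so unmasking is the exact same operation. A toy illustration, not a full frame parser:

```python
def ws_mask(payload: bytes, key: bytes) -> bytes:
    """XOR each payload byte with the repeating 4-byte mask key."""
    return bytes(b ^ key[i % 4] for i, b in enumerate(payload))

key = b"\x12\x34\x56\x78"        # in real frames, freshly random per frame
masked = ws_mask(b"hello", key)
print(masked != b"hello")        # True: bytes on the wire look random
print(ws_mask(masked, key))      # b'hello' -- XOR is its own inverse
```

Because the key is random per frame, a caching proxy never sees the same bytes twice for the same logical message - which is exactly the point.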
Good metaphor for program correctness as science: type systems are a logical proof that your program does the right thing, and automated tests are experimental evidence. Testing manually is just an anecdote - all it proves is that the program worked once on your machine…
Also, type systems are like the universal quantifier, and tests are like the existential one.
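One way to picture that difference in code (a toy example; serious property-based testing would use a library like Hypothesis):

```python
import random

def my_reverse(xs):
    return xs[::-1]

# an example-based test is existential evidence:
# "there exists an input for which the program works"
assert my_reverse([1, 2, 3]) == [3, 2, 1]

# checking a property over many random inputs edges toward the
# universal claim that a type system or proof would establish outright
for _ in range(100):
    xs = [random.randint(0, 9) for _ in range(random.randint(0, 10))]
    assert my_reverse(my_reverse(xs)) == xs
```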
MirageOS is a whole operating system written in OCaml. Thanks to static linking, you can write your server using MirageOS as a library, compile it, and generate an image with only the code that it uses. If your site is simple, the whole virtual machine that serves it can take up something like 20 megabytes - pretty impressive compared to hundreds of megabytes for a Linux server.
This is not only convenient but also pretty safe, as you're reducing the attack surface of your system. As a proof of concept, there's the Bitcoin Piñata, a "hack me" website that knows the key to about 10 Bitcoins - nobody has been able to smash it yet.
Julia is a nice language for scientific computing that compiles to native code. What impressed me is that it has powerful macros - for instance, you can define a generic way of looping through multi-dimensional arrays, and it will generate as many nested loops (`for i = ...`, `for j = ...`, `for k = ...`) as necessary.
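Python has no such macros, but `itertools.product` gives a rough feel for "N nested loops without writing N loops". This is only an analogy - Julia's macros actually generate the loop code at compile time:

```python
import itertools

shape = (2, 3, 4)

# one loop over all index tuples stands in for three hand-written
# nested loops (for i / for j / for k)
count = 0
for i, j, k in itertools.product(*(range(n) for n in shape)):
    count += 1
print(count)  # 24, i.e. 2 * 3 * 4
```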
Emoji Lisp! I really like the CDR icons :)
The main idea of this book is simple: as a software engineer, you have plenty of opportunities to optimize what you're doing for higher impact. The book provides plenty of advice on how to identify which activities make the most difference in whatever you're trying to achieve (have the most "leverage", as the author says).
A good example would be taking some time to automate things you would otherwise do manually. This in a sense generalizes to having a tight feedback loop, for instance a fast compile-run cycle, or a way to quickly reproduce a bug you're working on. The book also talks about the importance of having good metrics (as opposed to "flying blind") and systems that fail fast (so that you immediately know what the source of failure is).
Of course, writing software is not the only area where you can go looking for "high-leverage" activities. There are things like recruitment, onboarding and mentoring new work-mates, which pay off in terms of these people being able to contribute to your project. And on the individual level, it's important to optimize your own learning, since that will generally become more useful later than the money from a well-paid but uninteresting job.
Overall, the book is full of unsurprising but generally useful software engineering advice on various aspects of the programmer's job, like working with code, estimation, teamwork or risk management. The examples get boring at times - hearing for the Nth time what this or that famous company did gets old quickly, and gives the impression of a typical American self-help book. I liked some of the stories, though. I learned for instance that Dropbox uses fake traffic to detect site load problems quickly - if they notice a problem, they simply turn the fake traffic off and investigate the issue without any time pressure (which they couldn't do with real traffic)!
The most important part that I got from this book, however, wasn't any specific engineering tips (which you can easily find elsewhere) but the attitude of "leverage": don't waste time on unimportant stuff, go look for things you can do that will matter the most.
This is a shockingly comprehensive, and at the same time very easy to read, introduction to game design. The book touches on aspects ranging from game mechanics, aesthetics and technology considerations, to teamwork advice and design documents.
Overall, the book gave me a lot of respect for the complexity and size of the field. As a game designer, you have to draw from a wide range of disciplines such as anthropology, psychology, sociology, architecture, probability theory or graphic design. You have to think about which of the player's needs your game satisfies (think Maslow's hierarchy, but not only), how to maintain the difficulty curve so that the player stays in the "channel of flow" (not too easy, not too hard), how to balance your game along different axes, what theme the game will have and how to use every aspect of design to reinforce that theme, and so on and so forth. There is also some advice on brainstorming and working with a team which can easily be applied outside of the field of game development.
All these ideas are neatly categorized in the book's chapters, and in each part I found some interesting insight. To give one example, there's a concept of game venue as something that defines the type of play experience. The author mentions venues like the hearth where people gather together (the modern hearth being, well, the TV with a console), your personal workbench where you concentrate on things (desktop PCs fill this niche, with PC games being more "serious" and less casual), the reading nook where you sit comfortably with a book (or a tablet), a table for board games, public spaces like an arena for competitive games, and so on.
Another example is the well-known notion of "emergent gameplay", which always felt a bit like magic to me – my understanding of it was "create a sufficiently complex game and it will magically become more interesting because of what the players come up with". The book breaks it down nicely: a game has basic actions, which are essentially the rules of the game (for instance, move a piece or capture a piece), and strategic actions, which are implied by these (for instance, move a piece to protect another one, sacrifice or exchange pieces, force an opponent to do something…) It's generally good to have a small number of basic actions from which many strategic actions can emerge. A good way to achieve that is to make each of these actions meaningful, e.g. by giving it far-reaching consequences instead of just local ones.
That's just a small sample of things I took away from The Art of Game Design. I borrowed the book from a friend but I think I will be buying my own copy, since I definitely plan to return to it in the future.
I just came back from PyWaw Summit, a two-day Python conference here in Warsaw. Here are some interesting take-aways I had:
- A great talk about "diving into the rabbit hole": a tendency of programmers to go digging themselves into deeper and deeper trouble trying to solve a problem. Sort of a dark version of the flow state - time flies really fast, you become fixated on the issue, have a feeling that you're always "almost there", neglect human contact… What you can do is get better at recognizing these situations, step back, have a rest, and talk with someone else.
- An interesting point about unit tests. Programmers learn relatively early not to change code from `n = 1` to `n = 2` - they change it from `n = 1` to any `n`, i.e. they generalize properly. Notice that the first opportunity to do so is right when you're writing tests for your code and learn that you have to isolate some part. Instead of hacky solutions like `mock.patch`, take the opportunity to refactor your function!
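A made-up illustration of that refactoring: instead of patching a hard-wired dependency, turn it into a parameter.

```python
import datetime

# before: the clock is hard-wired, so a test has to mock.patch it
def greeting_hardwired():
    hour = datetime.datetime.now().hour
    return "good morning" if hour < 12 else "good afternoon"

# after: the part to isolate is just a parameter -- no patching needed
def greeting(hour):
    return "good morning" if hour < 12 else "good afternoon"

assert greeting(9) == "good morning"
assert greeting(15) == "good afternoon"
```

The test pressure pointed at a real design flaw (a hidden dependency), and the fix generalizes the function instead of working around it.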
- IPFS - ambitious project for universal peer-to-peer content-addressed storage, sort of like Git, Bitcoin, or BitTorrent. I wonder what will come out of it.
- PostgreSQL has `SELECT to_json(...)` - I guess it can come in handy when you want to write something quick and dirty and get the data to your application.
- A horror story: you know how you can have Python stored procedures in Postgres? Some people were using them to import Jinja2 and render templates. On the database server.
- Also about Python stored procedures - you can actually keep them in a versioned Python file, and just call functions from that file in your database procedures. This way, your procedures can be under version control, and you can actually unit-test them (by providing mock data to the functions instead of running them through the database).
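A sketch of that setup, with made-up names (`procs.py` is the versioned file; the database-side wrapper is shown only as a comment):

```python
# procs.py -- ordinary Python, kept under version control
def discount(total, is_member):
    """Business rule that the PL/Python stored procedure delegates to."""
    return total * (0.9 if is_member else 1.0)

# The stored procedure in the database is then only a thin wrapper,
# along the lines of:
#   CREATE FUNCTION discount(total numeric, is_member boolean)
#   RETURNS numeric LANGUAGE plpythonu AS $$
#       from procs import discount
#       return discount(total, is_member)
#   $$;

# Unit tests hit the pure function directly -- no database required:
assert discount(100, True) == 90.0
assert discount(100, False) == 100.0
```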
- A point in favor of microservices: they make onboarding new programmers easier, since a new programmer doesn't have to understand the whole system immediately - they can read a single program and be ready to hack on it from day one.
- Inspiring keynote about "sharpening your tools": bad tools can slow you down, it's important to spend some time automating your work, and pair programming is actually pretty useful for that - you see the other person doing something crazy fast with their computer, and get new ideas on how to improve your own setup. Examples: shell history and tab completion, editor auto-indent, incremental search (search as you type), editor auto-linting (jshint, pyflakes), aliases and scripts for common commands, storing your dot-files in version control.
- testmon: a neat project that monitors code changes and re-runs only the relevant tests (by checking code coverage). I'm looking forward to trying it out.
- Ola Sitarska told us the story of Django Girls, Django beginner tutorials for women. Pretty awesome how big the initiative is getting - just look at how many cities the events are being held in.
That's all for now - until next time!
- Interesting talk about not using any frameworks (I imagine JS programmers tend to go overboard with these sometimes) - it makes you notice how dependencies can force you into a specific way of coding, and going without them forces you to actually learn more of the underlying technology. I guess a first step would be to learn the modern JS language and DOM features without relying on jQuery.
- URI Templates are a thing - a standard way to specify resource URLs, like `/users/{id}`.
- JSON can have a schema too. Seems useful as a form of validation for APIs. It also allows for automatic form generation on the frontend - just change the schema and appropriate fields will be generated, even with client-side validation.
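A toy version of the validation idea - a real project would use an actual JSON Schema library; this hand-rolled checker only looks at field presence and type:

```python
import json

SCHEMA = {"name": str, "age": int}  # toy "schema": field -> expected type

def validate(payload, schema=SCHEMA):
    """Return the list of fields that are missing or have a wrong type."""
    data = json.loads(payload)
    return [field for field, type_ in schema.items()
            if field not in data or not isinstance(data[field], type_)]

print(validate('{"name": "Ada", "age": 36}'))  # [] -- valid
print(validate('{"name": "Ada"}'))             # ['age'] -- missing field
```

A real JSON Schema adds much more than this (nested objects, ranges, patterns, required vs optional fields), which is what makes the automatic form generation possible.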
- Advice from a team that, instead of preparing independent design mock-ups for pages, decided to develop according to a "style guide" of visual content - a gallery of available classes, colors and so on. Seems like a good idea. They even have software that automatically generates these guides, and allows editing them in the browser.
- A guy from Yammer described their problems with scaling up the codebase and the team. Main takeaway: instead of writing documentation, make it executable (write clear tests instead of describing the functionality; make JSHint part of your build instead of having a coding style guide).
- Scalable and Modular Architecture for CSS. Interesting idea, if too radical at times (the advice to use single classes in the format `.block__element--modifier` looks like abuse to me).
- Web Components - an upcoming standard (and existing library) allowing you to define your own HTML elements. Want to have an in-place AJAX editor? Instead of copy-pasting the necessary markup all over the place, just define a custom element once and reuse it everywhere.
- Fun story about a test that suddenly started failing mysteriously. They were validating a purchase of child insurance, and used some testing data for that. One day the guy in the test data just became too old for child insurance :) Moral: instead of refactoring a group of tests to share a common, complicated set-up procedure, use many simple helper functions.
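A sketch of the fix with invented names: have a helper build the birth date relative to today, so the test subject never ages out of eligibility.

```python
from datetime import date, timedelta

def child_birthdate(age_years=7, today=None):
    """Helper: a birth date that is always age_years before 'today'."""
    today = today or date.today()
    return today - timedelta(days=age_years * 365)

def eligible_for_child_insurance(birthdate, today=None):
    # simplified rule: under 18 (ignoring leap days for the sketch)
    today = today or date.today()
    return (today - birthdate).days < 18 * 365

# the helper keeps the fixture fresh no matter when the tests run
assert eligible_for_child_insurance(child_birthdate(7))
assert not eligible_for_child_insurance(child_birthdate(20))
```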
- Pretty 3D fractals in browser. The audience suggested using Oculus VR…
Finally, there was a closing keynote about diversity in tech that I found valuable. The fact that the tech scene is demographically monolithic, and at times very unfriendly to women and other underrepresented groups is quite well documented, but the speaker also touched on a few other issues.
- One was that we actually make this stuff for everyone else, creating technology and online spaces that the rest of the world uses. This is important when it comes to accessibility (there are more blind people worldwide than the whole population of Poland), but also things like real-name policies that are downright harmful, social network designs that encourage online harassment, or failures like making a phone work well only for right-handed people because nobody on the design team foresaw possible problems.
- Something I still have to make up my mind about is "shipping culture considered harmful". The downsides of "moving fast and breaking things" include launching badly thought-out products and subsequent feature creep, a stressful work pace, and an environment where only the engineers' contributions are appreciated since they are the only ones directly shipping.
- Another controversial idea was that meritocracy doesn't work, in that it's easy to make your environment resistant to change - if you value only the merits that you have, and dismiss the rest as irrelevant, in the end you'll only invite more of the same people and keep out others that would also bring value. See Linus's famous abuse that keeps potential Linux kernel contributors out if they're not thick-skinned enough.
The talk (and a positive response from the audience) gave me much respect for the British frontend scene, especially compared to the Polish one (post in Polish, and a somewhat unpleasant reading).
That's all I have - I hope you enjoyed my writeup!