I read the transcription of Richard Hamming's great talk "You and Your Research", and the questions I keep asking myself are "am I working on the most important problems in my field? Why not?". That is the perfect mantra for the upcoming month, because I want to focus on the wider issues in the industry. Often we are so focused on the narrow world of the tools and methodologies we're used to, that we forget the hows and the whys that led us up to this point in the first place.
This is an important phenomenon that I will most likely return to often, and since I don't know any recognized word for it I will simply call it "legacy thinking": our way of thinking is based on old legacy rather than recent observations. It is the phenomenon where we are so accustomed to perceiving a problem from a specific angle that we have forgotten how much of our view rests on preconceived assumptions. It is the scenario where our history of tackling the problem consists of continual refinement of existing solutions rather than re-examination of the original problem. These are the things I find interesting to look at, and the funny thing is that you can recognize this phenomenon everywhere, from tools, languages and methods to organizational structure, computer hardware and human responsibilities.
My friends and colleagues have asked me if I really have the determination to stick to it now that it is summer and lovely weather and all. That is indeed a fair question, but given how much I want to read, learn and do during as short a time as one month, I'm not that worried I will waste my time eating ice cream and strolling around in the sun.
At the same time I have to remind myself to be open-minded and that this is a lifestyle experiment as well. I'm not convinced that taking a two-hour walk in the middle of the day is a bad thing for either creativity or productivity, as long as you're passionate about what you are trying to accomplish. If you need to consider some new aspects of a given solution or find a different approach to a complex problem, a walk in the sun might be the best thing to do.
Yet we have all these preconceived notions that you should stay in the office from 9 to 5 and hammer your forehead into the monitor. Is that really the way to go if you are dependent on creativity and original thinking? Are you not dependent on creativity and original thinking? If not, are you really solving problems or are you just a code monkey hammering down bug fixes one by one?
Many good software organizations let you flex your time freely, and if you want to take a two-hour walk, read a book about poetry or play with an intricate puzzle, that might not be a problem at all. At the same time, many of these organizations will not count that as work hours, so you hammer your head from morning to lunch, do something stimulating on your own time, and then put in office hours again during the evening. I don't consider this to be necessarily wrong—actually, I find it quite convenient—but if these stimulating activities really are an intrinsic part of solving problems, why should they not be considered work hours?
This is the core of the lifestyle experimentation. By taking unpaid leave, I'm freeing myself completely from all the constraints of accountability. I have no one to report to except myself and the readers of my upcoming articles. If I spent a month at work without accomplishing anything, I would most likely feel like a douche and it would certainly hurt my career. If I spent a software sabbatical without accomplishing any real worth beyond what I learn, read and experience, I would still consider it a big win as long as I can afford to eat and pay rent.
The granularity of time spent learning
In the software industry you are expected to be smart, and people expect you to stay smart as new technologies and methodologies come along. Spending a couple of hours per week reading up on things seems fine to many, but what about an entire day? Or an entire week each month? To me this is just a question of granularity and scale, but the interesting issue is whether those "hours per week" are enough to get you out of the rut and past all those preconceived notions, compared to what a "month per year" could do.
I was greatly inspired by Stefan Sagmeister's TED talk "The power of time off", where he talks about taking a yearlong sabbatical every seven years. I certainly would appreciate a yearlong sabbatical, but personally I think a seven-year iteration is a bit too long. I chose one month basically because that was what I could afford, but I also think it is a decent length of time to start with.
I am also quite curious about what habits will spring forth. Since I'm free to manage my own time, I suspect the old 9 to 5 fashion will die quite soon. How will I incorporate exercise and leisure in these new weekdays? How should I incorporate it? If exercise is yet another intrinsic aspect of keeping yourself sharp, alert and resourceful—which I certainly think it is by the way—should it not also be considered work? In that case, should it not be the most natural thing in the world to take a 10 km run in the middle of the day? Let's find out.
So, do I even have a plan?
Indeed I do. In fact, I have a quite thorough plan. There are some areas I really want to learn more about: basically anything related to domain-specific languages, model-driven engineering, and how one can achieve quality software and increased productivity by forging different language representations together. A way to achieve aspect-oriented principles, perhaps? I don't know, but I'm about to learn.
From a practical perspective, I have some ideas I would really like to hammer down into code, and I have identified three different areas I want to experiment with.
Graphical layout of hierarchical symbols.
I have always been interested in the representation of different programming languages, but I never really had any good tool or framework to experiment with. I want to move beyond the use of text for depicting symbols while still keeping it practical. Please note that I'm not talking about UML diagrams or those nasty old point-and-click languages or the like. I want to keep it in the realm of symbols, not diagrams.
Imagine a font system where each glyph takes a variable number of parameters, or a syntax highlighter based on geometric shapes instead of colors.
The layout and format of data.
A simple language to describe the layout and constraints of bits, bytes and arrays. I hope to use it for the serialization and marshalling of data structures, and to describe and parse file formats and protocols. It will probably be heavily influenced by the notation of Backus-Naur Form.
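To make the idea concrete, here is a minimal sketch of the kind of declarative layout description I have in mind. Everything below is invented for illustration (the field names, the layout table, the helper functions), and Python's `struct` module stands in for the eventual little language:

```python
import struct

# Hypothetical example: a declarative description of a tiny file header.
# Each entry pairs an invented field name with a struct format code.
HEADER_LAYOUT = [
    ("magic",   "4s"),  # 4-byte magic number
    ("version", "B"),   # unsigned byte
    ("length",  "I"),   # 32-bit unsigned integer
]

def pack_header(values):
    """Serialize a dict of field values according to the layout (big-endian)."""
    return b"".join(
        struct.pack(">" + fmt, values[name]) for name, fmt in HEADER_LAYOUT
    )

def unpack_header(data):
    """Parse bytes back into a dict, field by field."""
    result, offset = {}, 0
    for name, fmt in HEADER_LAYOUT:
        (result[name],) = struct.unpack_from(">" + fmt, data, offset)
        offset += struct.calcsize(">" + fmt)
    return result

header = pack_header({"magic": b"DATA", "version": 1, "length": 1024})
assert unpack_header(header) == {"magic": b"DATA", "version": 1, "length": 1024}
```

The point of a dedicated language, rather than a table like this, would be to also express constraints, variable-length fields and nesting, which is where the BNF influence comes in.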
Hierarchical state machines.
A very simple framework for describing state diagrams with hierarchically nested states. I have no desire to create a graphical representation like the UML state diagram, but rather a very simple text-based version. My real interest here is to gain experience in merging state machines with other frameworks, not in the state machines themselves.
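As a rough illustration of what "hierarchically nested states" buys you, here is a minimal sketch in Python. The dotted-path naming and the API are invented for this example; the essential idea is that an event not handled by the current state bubbles up to its parent, as in UML composite states:

```python
class State:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.transitions = {}  # event name -> target State

    def path(self):
        """Full dotted path from the root, e.g. 'on.playing'."""
        return self.name if self.parent is None else self.parent.path() + "." + self.name

class Machine:
    def __init__(self):
        self.states = {}
        self.current = None

    def add(self, path):
        """Register a state by dotted path; parent states are created implicitly."""
        parts, parent = path.split("."), None
        for i in range(len(parts)):
            key = ".".join(parts[: i + 1])
            if key not in self.states:
                self.states[key] = State(parts[i], parent)
            parent = self.states[key]
        return parent

    def on(self, src, event, dst):
        """Declare a transition from src to dst on the given event."""
        self.states[src].transitions[event] = self.states[dst]

    def fire(self, event):
        """Walk up the hierarchy until some ancestor handles the event."""
        state = self.current
        while state is not None:
            if event in state.transitions:
                self.current = state.transitions[event]
                return True
            state = state.parent  # bubble up to the enclosing state
        return False

m = Machine()
m.add("off")
m.add("on.idle")
m.add("on.playing")
m.on("on", "power", "off")            # handled by the parent state
m.on("on.idle", "play", "on.playing")
m.current = m.states["on.idle"]
m.fire("play")
assert m.current.path() == "on.playing"
m.fire("power")                       # inherited from the enclosing 'on' state
assert m.current.path() == "off"
```

The appeal of the nesting is exactly that last transition: "power" is declared once on the composite state and applies to every substate, which keeps the text-based description small.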
Given that these three projects seem quite ambitious to me, I don't have much hope of achieving results worth demoing within a month, especially not since I have an entire pile of fancy books on my desk, all waiting to be read and contemplated. It will certainly be a nice start, however, and if I actually manage to get something done I promise to put it on my GitHub account.
So, to wrap things up with the initial question: am I working on the most important problems in my field? I would certainly like to think so, but I admit these projects are only small steps towards a greater goal. When it comes to developing quality software in a scalable and sustainable manner, I think we still have an immense amount of knowledge to gain, and I believe this is a step in the right direction.
Wish me luck!