2023-08-04
I turned 11 this year as a programmer. It feels surreal, doing something for over a decade. But it's also not some special achievement, as some make it out to be. Experience does not necessarily make you a better programmer. As Austin Henley puts it here, experience is cheap; all it takes is time.
There are all sorts of tropes one can fall into following that thought process. I have been naive enough to fall into some, and lucky enough to avoid others. And over the years I have observed (and maybe learned) some things.
It would be foolish to take the following as reasonable advice or truth. These are only observations and opinions of someone who has done it for a while.
NOT doing programming helped me a lot. There is a saying: (perfect) practice makes perfect. I do not deny that. But I believe there is a limit to the benefit one can get from repetition alone. After a threshold period of time (which might vary widely depending on the topic), it is better to move on to something else. Doing different things, things that are not related to programming at all, helped me look at problems from different viewpoints. It is easier for a person to understand new ideas or solve difficult problems when they can associate them with something they have already seen. And the more things they do, the more things they have for association.
Failure occurs most of the time. When I was still new to programming, one thing that bugged me a lot was that I would either not finish something I started, or never maintain a finished product. The medical term for the fear behind this is atychiphobia. Fortunately, I don't have it at a medically critical level. Instead, it led me into massive procrastination. What I've come to learn is that, most of the time, failure does not equal wasted time. I think it is natural for many programmers to not complete a project. I try to (and usually do) find something to learn from the small part I have done. Over time, those lessons accumulate into a significant amount of knowledge. Learning what not to do is as important as learning what to do. For the record, it is also important to know how to learn from mistakes.
It's not always possible to be efficient or productive. This ties closely to my view of being productive. Most people I know calculate productivity with respect to what they are trying to achieve. I try to frame my spent time with respect to what I have done. It sometimes feels like nothing is getting done. Maybe I thought I could finish a feature in an hour, but I ended up spending three hours looking things up and googling. In hindsight, those three hours might seem unproductive, but the end result is most likely a culmination of what I did in those extra hours. I wouldn't call them wasted at all. And as for being efficient, great programmers have amazing techniques for how they read code and docs, think about structure, or debug. These techniques allow them to squeeze those three hours into something less. That's why they seem more efficient at doing the same thing you or I do. It's something that (I think) doesn't come outright, and the secret is to just trust the process.
Read the documentation before everything else. I enjoy reading blogs and other programmers' experiences on any particular subject. But if there is good documentation that covers the subject comprehensively without being redundant, I will always go for that. Secondly, documentation should always be written in a strict and concise manner. Most of the time, I can just look at API references and their details to fit a structure in my head. Well-written documentation is better than most other sources to learn from. The Unity User Manual is a good example. Sure, it has its flaws, but it captures almost all aspects of the engine clearly.
But well written code might not even need one. A good program, in my opinion, has a clear structure, a thought-out design, and is explicit in nature. The flask and beets codebases are some of my favorites to read. I can just sit down and read thousands of lines of code without anything in them interrupting me. It is very easy to follow the flow of the program and understand exactly what the expected outcome is. I use (and contribute to) beets regularly but have yet to look at its documentation. Reading a good codebase nets me more benefit than poking around in several articles or videos.
I always keep logs. More often than not, a system will go down. When the project is small, it is easy to figure out where the bug is. But as it scales up, it becomes impossible to keep track of the clown show. The more logs, the better. I have made myself comfortable reading log files, and I can build a fairly accurate timeline of why and how a system failed. I also have a personal rule of always logging the timestamp of every entry. I read that somewhere once, it has stuck with me ever since, and it has never failed me. There will come a time when you need those timestamps, sooner or later. And one can never have too many logs.
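For what it's worth, here is a minimal sketch of what that looks like for me in Python, using the standard logging module. The logger name and messages are made up for illustration; the point is only that every entry carries a timestamp and failures keep their traceback.

import logging

# Configure the root logger so every entry carries a timestamp.
logging.basicConfig(
    filename="app.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

logger = logging.getLogger("worker")  # hypothetical subsystem name

logger.info("starting batch run")
try:
    result = 1 / 0  # stand-in for real work that blows up
except ZeroDivisionError:
    # exc_info=True keeps the traceback in the log for the post-mortem timeline
    logger.error("batch run failed", exc_info=True)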
99% uptime is not good enough. A system with 99% uptime can be down for roughly 3.65 days a year. That's disastrous for high-demand systems. I have had the good fortune (and sometimes misfortune) of working on large-scale projects, and I think this is the biggest lesson I learned. A sudden phone call after midnight is almost never worth it. Planning beforehand will save much more trouble.
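The arithmetic behind that number is simple enough to check in a few lines:

# Back-of-the-envelope downtime per year for a few uptime targets.
HOURS_PER_YEAR = 365 * 24  # 8760

for uptime in (0.99, 0.999, 0.9999):
    downtime_hours = (1 - uptime) * HOURS_PER_YEAR
    print(f"{uptime:.2%} uptime -> {downtime_hours:.1f} hours (~{downtime_hours / 24:.2f} days) down per year")

99% works out to about 87.6 hours, or 3.65 days; even 99.9% still leaves almost 9 hours of downtime a year.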
Learning to touch type was my greatest and fastest investment. It took me ~40 days to properly touch type at a decent speed. Now it is second nature. I can devote my attention to another window or some other thought instead of looking at my keyboard every once in a while. And it might be placebo, but I think my thinking has also gotten faster after learning how to touch type.
Functional programming is better but also overrated. We shouldn't replace everything with functional programming. I think every programmer should try traditional functional programming at some point, but we also need OOP. A large number of complex problems become easier to solve after modelling them as objects, and for some people that is the more intuitive process. Both can and should coexist.
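A toy contrast, with entirely made-up names, of the same tiny problem done both ways; neither is "the" right answer, which is rather the point.

from functools import reduce
from dataclasses import dataclass

# Functional style: the balance is just a fold over immutable transactions.
transactions = [100, -30, 45]
balance = reduce(lambda total, tx: total + tx, transactions, 0)

# Object style: the same idea, but state and behaviour live together.
@dataclass
class Account:
    balance: int = 0

    def apply(self, tx: int) -> None:
        self.balance += tx

account = Account()
for tx in transactions:
    account.apply(tx)

assert balance == account.balance == 115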
I will put it out before I optimize it. Joel Spolsky has a great article that diminished my insecurity about publishing a rather crappy product. It is better to release something that is usable for most cases than to wait for the final, optimal product. A product is almost never final. Showing results early on is a great way to get feedback and improve the product. Giving users some output in 15 seconds today is better than giving them an output in 2 seconds after 6 months. Most users won't use 80% of the features, and the product will probably work for the majority of users anyway.
Note: This sometimes leads to the bad practice of finalizing unoptimized software, which rather invalidates the art of programming. The following Hacker News thread is a great read.
Commenting personal projects is one of my most useful habits. It is very easy for me to forget what was actually going through my mind the moment I wrote a certain program. Comments let me retrace the train of thought I had at that time. I tend to comment quite comprehensively, which sometimes ends up being very verbose. Personally, I would not recommend it for projects that are never going to be revisited.
Making code easy to understand is more important than adding abstraction. It is also harder. Most of the time, adding abstraction to save yourself some space is not worth the pain of jumping back and forth through the code. Abstraction usually makes the code more complicated. I would pick explicit code over multiple layers of abstraction that do not actually make the code simpler. The Zen of Python is always worth a re-read in these cases.
The Zen of Python
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
I have my own opinions about namespaces, but I will reserve them for later.
PascalCase, camelCase, and snake_case are all good. Just be consistent. I used to have programs that looked like this.
def get_data():
...
...
...
...
def makeRequest():
...
It may not seem like a big deal. But code styling is an integral part of program design that many seem to overlook. I am now obsessively consistent about anything and everything, be it naming, syntax, indentation, or patterns. This helps establish a code identity and is much easier to read than inconsistent lines.
SQL queries are still the best. I was kind of hooked for the first few months of using NoSQL. It wore off quickly. While NoSQL databases are useful and allow for different design philosophies, if you end up writing NoSQL queries at any point in your work, you definitely should know SQL too. Knowing how to construct efficient SQL queries will save you a lot of unnecessary hassle and your system a lot of resources.
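As a rough sketch of what I mean, with a hypothetical two-table schema: let the database do the joining and aggregation instead of pulling every row into application code and looping over it.

import sqlite3

# Hypothetical schema, just to illustrate the point.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'ada'), (2, 'linus');
    INSERT INTO orders VALUES (1, 1, 20.0), (2, 1, 35.0), (3, 2, 5.0);
""")

# One query does the joining and summing; no Python-side loops needed.
rows = conn.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
""").fetchall()

print(rows)  # [('ada', 55.0), ('linus', 5.0)]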
Avoid quadratic time algorithms as much as possible. Good and scalable systems should (almost) never use quadratic time algorithms. There is a saying along the lines of, "O(n^2) is good enough to make it into production, but bad enough to collapse the system once it gets there". They scale very poorly, to the point where your product becomes unusable.
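The classic illustration, with a deliberately simple duplicate check:

# The nested-loop version is O(n^2) and falls over once n gets large;
# the set-based version is O(n) and barely notices.

def has_duplicates_quadratic(items):
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b:
                return True
    return False

def has_duplicates_linear(items):
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False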
The habit of nesting leads to bad practices. One, the codebase gets messy. Two, debugging turns into a nightmare that might have you pulling out your hair. It might feel smart and efficient to write ternary operators or conditional blocks and chain them, but this eventually leads to unreadable nonsense. Use them appropriately (and hopefully rarely).
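A small made-up example of what I mean:

# Chained ternaries read fine the day you write them and terribly a month later.
def grade_nested(score):
    return "A" if score >= 90 else "B" if score >= 80 else "C" if score >= 70 else "F"

# Flat early returns say the same thing without making the reader unpack it.
def grade_flat(score):
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"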
Use a debugger instead of printing. This is a thing I picked up from game development. Most gamedevs are adept at using visual debuggers, which is quite the opposite of traditional software development culture. It is tempting to just throw a print/log statement at the suspect spot to find where the problem is. But I find it more efficient and time-saving to use a debugger. Modern IDEs usually come with very capable tools, and I'm sure extensions are available for most popular editors too. There are not really many valid excuses for not using a debugger.
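In Python, the built-in breakpoint() (which drops into pdb by default) is all it takes; the function below is hypothetical and only there to have something to pause inside.

def total_price(items, tax_rate):
    subtotal = sum(item["price"] for item in items)
    # breakpoint() pauses execution right here: you can inspect subtotal,
    # items, and tax_rate, step line by line, and walk up the call stack,
    # instead of sprinkling print(subtotal) everywhere and re-running.
    breakpoint()
    return subtotal * (1 + tax_rate)

total_price([{"price": 9.99}, {"price": 5.00}], 0.07)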
See what the program is doing to the system. I think all developers should, at least once, verify how their code allocates and manages memory, or how it interacts with the CPU. The behind-the-scenes stuff gives massive insight into how the program will perform elsewhere. Embedded systems are a great way, and a great reason, to learn this.
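In Python land, tracemalloc from the standard library is an easy first step; the workload below is a stand-in for whatever your program actually does.

import tracemalloc

tracemalloc.start()

# Stand-in workload: build a large list of strings.
data = [str(i) * 10 for i in range(100_000)]

# Current and peak traced memory, in bytes.
current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")

# Top allocation sites, grouped by file and line number.
for stat in tracemalloc.take_snapshot().statistics("lineno")[:3]:
    print(stat)

tracemalloc.stop()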
Pair programming once in a while gives fresh perspective. Our minds work in different ways. It's interesting to see if someone else takes a new approach to the problem I'm having. Even if you don't like the idea, I recommend trying it out at least once with some programmer you know and mesh well with. It's a great way to learn new things and also very fun.
Version control. Even if it's a small project, use git or whatever. It takes almost no time and there isn't any downside to it. And while we are on this topic, I've seen people who overcommit, with tiny changes that aren't even separable. I don't think there is any need to go to that extreme. Clear and relevant commit messages with regular commits work out well enough for me.
'As long as it works' is a bad attitude. This usually happens because clients or people in managerial positions are breathing down your neck. There is also a small part of the community, on both the developer and client sides, which thinks immediate results are reflective enough of the final outcome. In my opinion, this is encouraged because the software development industry (with a few exceptions) doesn't really have enforceable standards. While there's not much we can do about that, I try not to cut corners as much as I can.
Assumptions lead to mistakes. There should be some evidence or concrete theory behind what is happening. I am guilty of saying, "I wrote this. There is not a mistake in that part." Nuh-uh. Unless there is something to back up a claim, it is prone to falling apart.
Unit tests are not worship-worthy. This is a controversial topic. I know there is a divide between fans of test-driven development and documentation-driven development. I try not to pick sides, because almost always there exists a better hybrid solution. I will not deny the clarity TDD provides, but it is very much possible to maintain (most) products without writing tests at all. Before someone crucifies me, I will say that I do write tests for most of my projects. But most of the time, they do not turn out to be nearly as impactful as everyone makes them out to be. Maybe I have been writing tests wrong this whole time. But I have seen many people who write tests just for the sake of writing tests, most of which do not provide any value. I think it is better to first draft the docs, then build the API, and then write tests to keep it in check. To give credit where it is due, tests make refactoring easier, reduce developer headache when a big change needs to happen quickly, and filter out a significant portion of bugs before they make it into production.
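As a sketch of that order (docs first, API next, a couple of tests to keep it honest), with a hypothetical parse_duration function standing in for the real API:

import unittest

# Hypothetical API drafted from the docs first: parse_duration("1h30m") -> seconds.
def parse_duration(text: str) -> int:
    units = {"h": 3600, "m": 60, "s": 1}
    total, number = 0, ""
    for ch in text:
        if ch.isdigit():
            number += ch
        else:
            total += int(number) * units[ch]
            number = ""
    return total

class TestParseDuration(unittest.TestCase):
    # A few tests that pin down the documented behaviour,
    # rather than tests written for the sake of writing tests.
    def test_hours_and_minutes(self):
        self.assertEqual(parse_duration("1h30m"), 5400)

    def test_seconds_only(self):
        self.assertEqual(parse_duration("45s"), 45)

if __name__ == "__main__":
    unittest.main()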
Business and other factors matter as much as the art of programming. I learned this the hard way. Most of the time, I'm writing code for someone else. There is usually one or more people I have to work with. It is better not to enforce my particular coding style elsewhere. Adaptability is a great skill to have. Also, management does not always have the time one might need to make the code pretty, and that is OK. In the same spirit, getting into refactoring without giving others a heads-up, just because I know it should be a certain way, is not a good idea. Understanding the big picture will save everyone's time and effort.
There is no shortcut. To learn, you have to put in the effort. I am a big believer in the 10,000 hours concept. Like all disciplines, programming needs regular practice to improve. At times, the idea of keeping it up felt enormous to me. That feeling fades away eventually, with the comfort that can only come with familiarity. I really like this BoJack Horseman quote: "Every day it gets a little easier. But you gotta do it every day —that's the hard part. But it does get easier."