2025-08-12
This post was at the top of the Hacker News front page today. It opens with an account of an interesting incident: a group of dredge workers accidentally emptied a two-century-old canal because they didn't know it had a plug. Tim goes on to discuss how the reluctance to preserve institutional knowledge compounds into far more severe consequences later.
This is Wikipedia's definition of institutional memory: "Institutional memory is a collective set of facts, concepts, experiences and knowledge held by a group of people." As a software developer, I have come to appreciate good documentation over the years. It is something I always emphasize in my workplace. I think a lot of my coworkers, or people in similar positions, will have a similar story to share: you open up an old codebase that you yourself wrote, and are completely stupefied by what is happening between those lines of code. It is easy for a human to forget. But when an institution forgets, the price to be paid is higher.
I do not intend to talk about documentation processes or software development best practices. The concept of institutional memory operates on a higher level than that. I will take the liberty of copying another body of text from the same Wikipedia article: "For example, two automobile repair shops might have the same model of car lift. The lifts themselves and the written instructions for them are identical. However, if one shop has a lower ceiling than the other, its employees may determine that raising a car beyond a certain height can cause it to be damaged by the ceiling. The current employees inform new employees of this workaround. They, in turn, inform future new employees, even if the person who originally discovered the problem no longer works there. Such information is in the repair shop's institutional memory."
This is a great example. But imagine a scenario where the shop does a complete overhaul and starts over with an entirely new group of employees. The new employees have no knowledge of the workaround.
When I was younger, I frequented the Internet Archive a lot. As the years have passed, my visits have become less frequent, not because the website has declined in value (quite the opposite), but because my personal and professional priorities have shifted elsewhere. The Internet Archive project exists to build a digital library of internet sites and other cultural artifacts in digital form. The legality of parts of this project is still being contested in court, but the purpose remains the same: preservation.
I read about the Challenger disaster for the first time in Richard Feynman's book What Do You Care What Other People Think?: Further Adventures of a Curious Character. Feynman devotes a significant portion of that book to recounting his time on the Rogers Commission, which was responsible for investigating the loss of the space shuttle Challenger. In 1986, the shuttle broke apart just 73 seconds after launch, killing all seven crew members, its debris falling into the Atlantic.
In the book, Feynman talks about the inconsistencies within the NASA program: it had been plagued by mismanagement, shaky science, and flawed risk assessments. The book and the details of the investigation are fascinating, and I would recommend it to anyone. As with any project that operates in the real world, with real risks and benefits, the flaws are mostly not technical, but human.
Seventeen years later, another space shuttle, Columbia, disintegrated during re-entry into the Earth's atmosphere, once again taking the lives of seven crew members. Columbia was different from Challenger from a technical point of view, but the underlying problems remained the same: management's stubborn disregard of warnings from engineers, and a tragic decision to keep flying despite known problems. Tim, from the aforementioned Hacker News post, puts it perfectly: "To set the stage for Columbia, Nasa first had to forget all the lessons of Challenger."
An organization has to preserve knowledge and provide the ability to retrieve that knowledge. Organizational forgetfulness leads to inconsistent methodology, immature decisions, and severe consequences. If I were to frame it in terms of the industry I work in, organizational memory would cover knowledge representation, data security, customer analysis, details of existing systems and data sources, written records, and paper trails. The Wikipedia article on organizational memory links to another topic, experiential learning. I will come back to this later.
I feel like I should touch on the broader topic of collective memory while we are at it. My initial understanding was that organizational and institutional memory are essentially subsets of collective memory. This is incorrect. Collective memory is a shared pool of knowledge and information held by a group or community which influences the group's identity. I might be overgeneralizing the highlighted part, but let me explain. Collective memory is subjective; organizational and institutional memory are situational. In their 2015 paper, Roediger and Abel state: "Collective memory seems to be shaped by schematic narrative templates, or knowledge structures that serve to narrate the story of a people, often emphasizing heroic and even mythic elements while minimizing negative or inconsistent ones." There is no guarantee that collective memory will be historically accurate. History is unbiased; collective memory, on the other hand, lets the group form opinions, ideologies, and decisions. Unsurprisingly, it is very susceptible to manipulation and propaganda. Most Americans and Russians do not share a common view of the events that took place in WW2. People who were affected by the Iraq war will have different collective memories depending on which group they are a part of. That memory is influenced by emotions, generations, political standpoints, and which end of the barrel they were on.
In the book "1984", the famous novel by Orwell, we encounter the concept of Memory Holes. The removal of an information in all forms and getting rid of any trace. As if it never happened. When you can create a memory hole, you can put in new memories there. In the book, when the weekly ration was decreased from 30 grams to 20, the authority decides to get rid of any source of information that said that 30 grams of ration were ever distributed. Any previous mention in any newspaper, booklet, poster - any sort of media were destroyed. These were then replaced by the information that the ration was actually increased to 20 grams.
The subjectivity of collective memory allows people with influence and power to shift narratives and manipulate opinions. As individual as we may be by nature, we are, in the end, creatures of society. If society is led to believe something, or even if you make yourself believe it, regardless of its truth, you will eventually treat it as ground truth. The malleability of collective memory means that whatever truthfulness the memory retains is entirely in the hands of the people in charge of maintaining it.
On the second day of writing this, I found this article, titled Why LLMs Can't Really Build Software. Conrad Irwin compares how a software engineer and an LLM go about building software, and argues that LLMs cannot maintain clear mental models of the product. I work with LLMs for most of my workday, both as a consumer and as a developer, and I can somewhat agree with the sentiment. An LLM's working memory does not scale the way storage does; over a long conversation, the model will eventually get confused. Now, it is not unusual for even a software engineer to get confused over their code or the product. But our methodology usually leads us to trial and error: we log what has and has not worked for us, and which tests failed at which state of the code. If a test fails, we can check against our mental model to see whether we need to fix the test or the code. We can then rethink our mental model of how we actually want our software to behave.
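To make the "log what has or has not worked" idea concrete, here is a minimal sketch of such a log in Python. Everything in it is hypothetical: the attempts.jsonl file name, the record fields, and the choice of keying each attempt to a git commit are my own assumptions, not anything from Conrad Irwin's article.

```python
# A minimal, hypothetical sketch of a trial-and-error log: which test ran,
# at which state of the code (git commit), and whether it passed.
import json
import subprocess
from datetime import datetime, timezone

LOG_FILE = "attempts.jsonl"  # hypothetical append-only log

def current_commit() -> str:
    """Return the current git commit hash, i.e. the 'state of the code'."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def record_attempt(test_name: str, passed: bool, note: str = "") -> None:
    """Append one observation about what did or did not work."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "commit": current_commit(),
        "test": test_name,
        "passed": passed,
        "note": note,
    }
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")

def history(test_name: str) -> list[dict]:
    """Everything we have learned about a test so far, oldest first."""
    try:
        with open(LOG_FILE) as f:
            return [e for line in f if (e := json.loads(line))["test"] == test_name]
    except FileNotFoundError:
        return []

# Example: after a failing run, consult the log before deciding whether
# the test or the code is what needs to change.
# record_attempt("test_checkout_flow", passed=False, note="breaks on empty cart")
# print(history("test_checkout_flow"))
```

Nothing fancy, but this is exactly the kind of externalized memory a team can come back to long after the original author has forgotten why a test was failing.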
LLMs do not do that. At least not yet. Yes, we have agentic workflows, but they are not enough. Will it improve? I believe so. It has to. But in my opinion, our approach to the solution is misguided. The problem is not that current language models are not advanced enough. We are taking a next token generator and expecting it to solve problems like a human. As for myself, I don't think in tokens. My mind does not generate one piece of an idea and then take the next logical step. In my mind, I go back and forth a lot, re-evaluating my decisions and reconditioning my understanding of the situation. I take a look at what has worked before, what I set out to do, how I have done what I have done, and then make the necessary adjustments.
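For readers who have not looked under the hood, here is a toy illustration of what "a next token generator" means. The bigram table and the greedy decoding are deliberately simplistic and entirely made up; real models are vastly more capable, but the one-token-at-a-time, never-look-back shape of the loop is the point.

```python
# A toy "next token generator": at every step it picks the most likely next
# token given the previous one, and never goes back to revise earlier choices.
# The probability table is invented purely for this example.
TOY_MODEL = {
    "the":   {"canal": 0.6, "shuttle": 0.4},
    "canal": {"had": 0.9, "the": 0.1},
    "had":   {"a": 1.0},
    "a":     {"plug": 0.7, "leak": 0.3},
}

def generate(prompt: str, max_tokens: int = 5) -> list[str]:
    tokens = prompt.split()
    for _ in range(max_tokens):
        choices = TOY_MODEL.get(tokens[-1])
        if not choices:
            break  # the toy model has nothing to say next
        # Greedy decoding: commit to the single most likely continuation.
        tokens.append(max(choices, key=choices.get))
    return tokens

print(generate("the"))  # ['the', 'canal', 'had', 'a', 'plug']
```

A real LLM conditions on the whole context window and samples from a vocabulary of tens of thousands of tokens, but the loop itself still only ever moves forward.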
2025-12-05
It has been almost 4 months since I last visited this. I was going to write about the concept of the Memory Bank, which I was first introduced to while playing around with Cline. I think it is worth a read.
As for this piece, I don't think I can find enough motivation in myself to continue it. What I wanted to get across to any reader is the importance of preserving knowledge. I have lately begun to feel that, even with all this jazz about everything being stored on the internet as a digital footprint, we should be more appreciative of the initiatives that aim to preserve historic, academic, and cultural knowledge, regardless of all the legal shenanigans. We would not be able to reconstruct the entirety of human knowledge if an apocalypse were to hit the earth now. So, every step counts.