Prof. dr. Edsger W. Dijkstra talks about how
computing has "matured" from a theoretical topic for the scientists to a practical issue for the engineers, managers and the entrepreneurs…
As a student who has felt a disconnect between a computer science degree and the jobs people pursue with one, this statement resonates with me very strongly.
I like computer science, but not the tech industry. The elegance and beauty of computer science have been reduced to slow, inefficient JavaScript programs, full of bugs and security vulnerabilities but wrapped in flashy marketing, and these embarrassing programs are praised by managers, venture capitalists, and PMs who have little interest in and almost no desire to understand or appreciate computer science.
…who can accept the application of science for the obvious benefits, but feel rather uncomfortable with its creation because they don’t understand what the doing of research, with its intangible goals and its uncertain rewards, entails.
The eventual application of science is only possible through research. You can’t apply a formula that hasn’t had years of research and theory behind it. Even though the application of chemistry (drugs) is what inspires me to pursue studies in chemistry, drugs can only be discovered and conceptualized through hours and hours of thinking and going in circles. For a long time, I was scared of research too. At my first internship I’d spend hours after my work day had ended thinking about a problem I was stuck on. I’d spend an entire week trying to implement a solution, only to show up to my weekly meeting with nothing to present to the lead surgeon and professor. I also made a mistake once, and spent two days unable to focus for fear of disappointing my research team. Research can be painful and can drive you insane. But when you accomplish something, no matter how small, that feeling is better than any feeling I’ve ever had in any industry job.
I would therefore like to posit that computing’s central challenge, viz. "How not to make a mess of it", has not been met. On the contrary, most of our systems are much more complicated than can be considered healthy, and are too messy and chaotic to be used in comfort and confidence. The average customer of the computing industry has been served so poorly that he expects his system to crash all the time, and we witness a massive worldwide distribution of bug-ridden software for which we should be deeply ashamed.
In organic chemistry we learn so many different pathways to the same drug. In the next year’s organic chemistry course we learn why some of those pathways are infeasible. We lose points on exams for coming up with dangerous or lengthy pathways. There are strict protocols and standards for creating drugs, and far more thought goes into the process. You can take Advil (at a healthy dosage) and not expect something dangerous to happen.
But can you say this about software? In courses, I’m rarely judged on my actual code, but rather on whether my code can pass some tests. Inefficient and unreadable code is given the same grade as efficient and maintainable code. There’s no incentive to do better. In fact, at my Microsoft internship I was told to avoid refactoring code, and I faced much opposition when I presented a PR with refactored code. PRs with untested features were accepted more quickly than my PR that merely refactored existing code. My manager, who had never cared about what I did prior to the refactor, suddenly pulled me into a meeting to ask whether it really was necessary. And no bugs were ever discovered in my refactored code.
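To make the grading point concrete, here is a small hypothetical sketch (my own example, not from any course I took): two implementations that pass exactly the same tests, so a tests-only autograder scores them identically, even though one is quadratic and opaque and the other is linear and readable.

```python
# Hypothetical autograder scenario: both functions pass the same tests,
# so a grade based only on test results cannot tell them apart.

def has_duplicate_messy(xs):
    # Compares every pair of positions: O(n^2), and the index juggling
    # obscures what the function is actually checking.
    for i in range(len(xs)):
        for j in range(len(xs)):
            if i != j and xs[i] == xs[j]:
                return True
    return False

def has_duplicate_clean(xs):
    # Tracks what has been seen so far: O(n), and the intent is obvious.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

def autograde(fn):
    # The kind of check an autograder might run: correct output, nothing more.
    assert fn([1, 2, 3]) is False
    assert fn([1, 2, 2]) is True
    return "full marks"

print(autograde(has_duplicate_messy))  # full marks
print(autograde(has_duplicate_clean))  # full marks
```

Both get full marks; nothing in the process rewards the second one.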
Our apps have become more and more bloated, with data leaks every hour, and having something "work" isn’t the norm but the outlier. Companies deliberately choose horrible languages like JavaScript to write programs in, push confusing design changes instead of security improvements, and freeze our computers with memory leaks. I can say with confidence that every single person has probably had some type of personal data leaked. I had my first leak of personal data thanks to LinkedIn.
Even worse, software has poisoned people. Perhaps this could have been avoided if we verified the correctness of our software, but apparently we’ve decided that verifying most programs is too costly, and instead we’ve poured billions of dollars into memory- and energy-draining, flashy AI models that have proven time and again to be racist and downright horrifying.
Windows crashes once a week, and I have to rearrange my schedule around software; shouldn’t it be the other way around? Software should cater to us.
For us scientists it is very tempting to blame the lack of education of the average engineer, the short-sightedness of the managers and the malice of the entrepreneurs for this sorry state of affairs, but that won’t do. You see, while we all know that unmastered complexity is at the root of the misery, we do not know what degree of simplicity can be obtained, nor to what extent the intrinsic complexity of the whole design has to show up in the interfaces.
The statement that "unmastered complexity is at the root of the misery" could be about something like the Dunning-Kruger effect: Silicon Valley tech bros go around creating short-sighted solutions without the input of the groups affected, which then cause harm to others. Or the statement could be about the state of a researcher, who may feel misery and despair due to their lack of mastery of the topic they are researching. I feel it is the latter.
We do not know yet whether trade-offs will be possible. We do not know yet whether we can invent for intricacy a meaningful concept about which we can prove theorems that help.
A professor once said to my boyfriend, who then told me, that "optimization is optimization until your optimization produces something wrong, and if that happens then you’re cheating". I believe this is what these "trade-offs" are about. There are a lot of trade-offs made in science in the hope of making a topic simpler. We use the Bohr model to represent an element’s electrons and protons, but there’s a trade-off: for the sake of simplicity, we may give students the impression that electrons orbit in circles or ovals, or that electrons stay "stuck" in orbit. This is an "optimization". And sometimes optimizations can become cheating; an optimization works until it can’t. Obviously catching these cheating optimizations is important, but how can we? I think that is what prof. dr. Edsger W. Dijkstra’s second sentence implies.
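For a small, entirely hypothetical sketch of an optimization that works until it can’t, consider the small-angle approximation sin(x) ≈ x: it is the same kind of simplification as the Bohr model, fast and convenient, and it quietly starts producing wrong answers the moment x leaves the range where the shortcut is valid.

```python
import math

def sin_fast(x: float) -> float:
    # "Optimized" sine: the small-angle approximation sin(x) ≈ x.
    # Cheap and accurate for small x, silently wrong for larger x.
    return x

for x in (0.01, 0.1, 1.0, 2.0):
    exact = math.sin(x)
    approx = sin_fast(x)
    print(f"x = {x:>4}: sin(x) = {exact:.6f}, "
          f"approx = {approx:.6f}, error = {abs(exact - approx):.6f}")
```

At x = 0.01 the error is invisible; at x = 2.0 the "optimization" has become the cheating the professor warned about.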
We do not know yet whether intrinsic intricacy can be distinguished from accidental intricacy. … To put it bluntly, we simply do not know yet what we should be talking about, but that should not worry us, for it just illustrates what was meant by "intangible goals and uncertain rewards".
Research is a dive into the unknown. For instance, for many years virologists had noticed that certain biomarkers, namely intracellular miRNAs that were downregulated, were exosome-associated miRNAs: exosomes carrying miRNA were exiting the cell. For years, not much focus was given to these miRNAs; they were thought of as perhaps a byproduct of the influenza virus. But what was causing this downregulation? Was it even associated with the influenza virus at all (accidental intricacy or intrinsic intricacy)? And what molecule, foreign or self, was causing it? Well, virologists found that miR-24, which was leaving the cell, correlated with increased expression of furin in the H5N1 strain of influenza. Previously ignored, miRNA has become a key factor in the viral life cycle. Most importantly, virologists didn’t know they were looking for miRNA.
In fact, microRNAs (miRNAs) are non-coding RNA, which means they don’t become proteins themselves, but instead influence which coding RNAs become proteins and how much of them. This makes them really hard to study, since there’s "nothing" tangible to grasp onto: there’s no protein to take a closer look at. We can only make guesses about which proteins are affected by the genome’s vast non-coding regions. These molecules were discovered in worms in 1993 and were only found in humans years later. Since we have mostly analyzed only the coding regions of the genome, miRNAs are a completely new area.
Why do I talk about virology research? Well, I hope I made it clear that no one knew what they were looking for. Computing Science is still a relatively new area of research with so much left to explore. Unfortunately, lots of money has been poured into AI, while other areas like ethics, hardware, compilers, proof verification, and more remain much more poorly funded. We need to do research that has "intangible goals and uncertain rewards". Research in AI seems to contradict this statement. There’s always a goal: make machines more like humans, or make machines replace humans. And there’s always a desired reward: money.
The moral is that whether Computing Science is finished will primarily depend on our courage and our imagination.
Research isn’t all knowledge, as probably almost every researcher has told you. There’s also imagination and intuition and courage. You can’t be driven by goals like money or status. Science doesn’t care whether you can figure out why something happens or prove that something is right. Science owes nothing to you. All we can do is keep imagining and pushing the field of computing science forward.