A few weeks ago, students of the Galen community had their midterm exams. In the exam for one of my classes (I won’t say which) I included a practical portion at the end that required the students to develop a graphical display for various computer programs’ user interfaces. While all the students in the class that uses graphical techniques to design the interfaces through which users interact with computer systems did well overall, this section appeared to give them a significant amount of difficulty. As a result, when I mentioned to them that I was working on this article, they made me promise I wouldn’t say which class I was referencing. As the French say, Mission accomplie.
So anyway… when reviewing the results with them in a later class, I pointed out that this activity ought to have been the easiest part of the entire test. Just the week before, we had done a very similar program, and all that they needed to do was take that source code (to which they had access during the exam session), make a few changes, and they could have been finished in about 15 minutes.
Computer Science, and software development specifically, is perhaps the one subject at Galen in which self-plagiarism (of a sort) is actually encouraged. Considering how seriously we take academic integrity here, I’ll need to expand on that a little bit, lest I leave you with even a slightly inaccurate impression of what I mean.
When you are creating a new product, whether it’s a program, an essay, or answers to the questions of an assignment, taking material from others without citation is invariably the wrong thing to do. Most of the time, drawing from your own previous body of work and presenting it as a new creation is also against the rules of good scholarship. If you want to develop your skills in any given area, like English composition, you cannot submit the same essay for two different classes, or turn in an assignment you did last semester for something due this semester. Doing so avoids actually doing the work that the instructor is attempting to evaluate, and cheats you out of the opportunity to know for yourself how you are doing in that class.
Getting a good grade might seem like the “goal” when you’re sitting in front of an instructor with a No. 2 pencil and a sheet of arcane symbols on your desk, but this is the preparation, the training, for going out and performing in the real world. Imagine if a soldier were somehow able to fake his or her way through basic training. When it came down to an actual combat situation, it would be too late to regret never having practiced how to stay alive and complete an objective safely.
In the same way, but with (probably) less risk to life and limb, avoiding doing your own work, and looking for shortcuts in general, will leave you ill-prepared for the situations that your degree is supposed to prepare you to excel at navigating.
When it comes to Computer Science, the same principles generally apply; however, there is one exception. Something called “code re-use” is a concept that is practically required by the way the software industry works. When it comes to writing programs, the better-known and more commonly used processes are the most reliable. If a portion of a program has been in service for years without causing problems, it becomes trusted. Those familiar with it will want to incorporate it into their own software as a module or a code library. We like to use sections of code that have already been put through the wringer, because if we use them as the foundation of another, separate program, we have a huge head start at getting it done. Doing this not only speeds up development time, it also makes debugging, the process of finding and correcting mistakes, much easier, since we know that any bugs the new software produces can only be the result of the recent additions.
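A small sketch of what I mean (the function names here are invented for illustration, not from any real project): a routine that has already been put through the wringer is reused as-is, and the new program only adds a thin layer on top. If the new program misbehaves, we debug only the new layer.

```python
# A small, battle-tested routine we trust. Imagine it has been in
# service for years as part of our personal code library.
def word_count(text):
    """Return the number of whitespace-separated words in text."""
    return len(text.split())

# New program: reuse the trusted routine instead of rewriting it.
# If this reports wrong numbers, we look for the bug only in the
# lines below, because word_count has already earned our trust.
def reading_time_minutes(text, words_per_minute=200):
    """Estimate reading time by building on word_count."""
    return word_count(text) / words_per_minute

print(word_count("the quick brown fox"))    # 4
print(reading_time_minutes("word " * 400))  # 2.0
```

The point of the structure, not the particular functions, is what matters: the trusted piece stays untouched, so any new bug must live in the new code.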
Very often, programmers will make their own code segments available to others. There are a number of software licenses under which individuals are allowed to take, modify and use the procedures created by others. Sometimes they are even permitted to do so without giving credit to the original author. In such a case, the author will include an indication along with the uploaded files that he or she is waiving all rights to the source code, and that anyone may use it in both personal and commercial applications.
Of course, it is always “nice” when a programmer acknowledges the individuals that contributed to the final product, but in the specific circumstances I am describing, it is not necessary.
I’m taking the time here to describe this in some detail, because I do want to make it clear that this is the exception, and not the rule, when it comes to publicly available intellectual property. For the most part, and especially when it comes to students learning all these things for the first time, it is essential that budding professionals experience the process of designing, implementing, and then testing their own original creations.
Of course, once they themselves have done this, most programming courses will allow them to reference their own (now hopefully) reliable code to serve as the foundation for more complex productions.
Code that a programmer has written in the past becomes his or her “library,” a set of resources from which material can be drawn whenever a new project is initiated. As I’ve said, this is the way the industry works – in the real world, major software applications are almost never written “from scratch.” In fact, professional software is almost impossible to create in this way. We don’t re-invent the daisy-wheel; what we typically do is find something in our library (or someone else’s publicly available library) that is somewhat similar to the “core” of the new program we are writing, and extend it to include the functionality we want.
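To make the “find a core and extend it” idea concrete, here is a hypothetical sketch (the names are invented for the example): a proven piece pulled from a library becomes the core, and the new project wraps it with the extra functionality it needs.

```python
# Imagine this came from our library: a simple, proven "core" that
# formats a report title with an underline.
def report_header(title):
    """Return the title in capitals with a matching underline."""
    return title.upper() + "\n" + "=" * len(title)

# New project: rather than starting from scratch, we extend the core
# with the functionality this particular program needs.
def report_header_with_author(title, author):
    """Build on report_header, adding an author byline."""
    return report_header(title) + "\nby " + author

print(report_header_with_author("Midterm Results", "D. Aguilar"))
```

The new function is three lines of original work sitting on top of code that was already trusted, which is exactly the head start described above.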
Within the virtual environment, a programmer with the right training can do pretty much anything he or she can imagine. This is why in an earlier article I compared Computer Science to magic. Taking into account what I’ve just said, though, I suppose it might be more precise to compare it to alchemy – that arcane blend of religious endeavour and proto-chemistry that attempts to, among other things, convert lead into gold and create the philosopher’s stone, from which may be distilled an elixir of immortality.
I have the feeling that the Comte de St. Germaine would appreciate the software development process, just as would Bach and Beethoven, programmers of sound and emotion. Programming is essentially alchemy with electricity. When we begin to write a program, we have in our systems a bunch of 1s and 0s. When we’re finished writing a program, we have in front of us… a bunch of 1s and 0s. In the physical world, nothing has visibly changed. However, what we have done is re-organized those logical bits, just as the arrangement of the same protons, electrons and neutrons results in over a hundred distinct elements, including both lead and gold. It’s the same “stuff,” but what we have done is added structure, design, information, and this is what causes it to behave the way it does.
We could get pretty deep here and talk about the nature of information, and how it imposes purpose on matter… how, without creating or destroying a single particle, we can cause metal and hydrocarbons (when arranged into a computer) to do things nature never intended them to do, like allowing us to order funny hats online. But really, sometimes it can be just as valuable to take a step back and just appreciate the bigger picture.
Lots of very smart people are doing lots of brilliant things in the world. We all benefit from that. Whether they are coming up with the next step in communication technology, discovering new treatments for diseases, or designing funny hats, our lives are shaped by art and the sciences. I think we have a faculty somewhere around here at Galen dedicated to these disciplines.
Computer Science is one area in which these two areas of human interest are perfectly blended. In addition, those of us who practice it get to spend a lot of our time figuring out how to get the machines to do more work so we can do less. It’s not exactly something for nothing… there are always trade-offs when it comes to technology. But at least we have more options – we get to decide how, and on what, we want to spend our time.
As we learn about these things in school (and in case you haven’t noticed, you’re learning a lot about computers here at Galen no matter what your major happens to be), we are also deciding how we are going to apply the latest tools and toys to our lives. With great technological power comes great technological responsibility, said Uncle Ben, almost. And then he sold rice.
The point is this: computers can make a lot of things easier, including those things that aren’t quite beneficial to us… so, when you’re given an assignment – unless you’re IN a class that teaches code re-use, AND have specific instructions to do otherwise – take the time to create something original. This, also, is a kind of magic, a kind of alchemy. By all means, draw on the genius of those who have come before you, but when you do, make sure you let us know who they are, and don’t forget to add yourself as an ingredient. In this way, we can celebrate both the past and the present, and there will not only be a greater likelihood of ‘A’s in your future… but perhaps more importantly, you get to become good at what you want to do with the rest of your life.
P.S. I’ve noticed that things get a lot more metaphorical after 2am. I should probably start writing these articles earlier in the day.
David Aguilar
Assistant Professor of Computer Science and Engineering
Galen University