Computing Theory


The main objective of this small article is to introduce data structures in computer programming, and to present the queue and the stack data structures in some detail.


A very simple definition of data structures is: ways of organizing and storing data in computers. The concept of data structures has existed for as long as computers have, and it has been preserved over the years and implemented in programming languages. This article presents stacks and queues as abstract data types (ADTs), in other words, purely as mathematical or logical data models (MyCodeSchool, 2013).


Stacks implement the LIFO (last in, first out) concept: the last item put on the stack is the first one taken off. A stack operates with three basic operations: PUSH, POP and STACK-EMPTY. PUSH places an item on top of the stack, POP removes the item at the top, and STACK-EMPTY checks whether the stack is empty (Cormen, 2009).

Real-world example: a stack of dinner plates.

Complexity: each operation takes O(1) time.


Queues implement the FIFO (first in, first out) concept: the first item put into the queue is the first one taken out. A queue operates with two basic operations: ENQUEUE and DEQUEUE. ENQUEUE places an item at the back of the queue, and DEQUEUE removes the item at the front (Cormen, 2009).

Real-world example: a queue is very similar to a line in which customers are waiting to pay a bill.

Complexity: each operation takes O(1) time.


A good place to use a queue in an operating system is for processes waiting to be executed in order. Consider a notification system: for the user, it is desirable that the first notifications appear before the later ones. A stack, on the other hand, suits a program's call stack, where the last item pushed should be the first one popped. “When a program begins executing in the function main(), space is allocated on the stack for all variables declared within main(). If main() calls a function, func1(), additional storage is allocated for the variables in func1() at the top of the stack.” (University of Hawai, 1993).


Cormen, T. H., 2009. Introduction to Algorithms. 3rd ed. Massachusetts: The MIT Press.

MyCodeSchool. 2013. Data Structures: Introduction to Stack. [ONLINE] Available at: [Accessed 07 February 2016].

University of Hawai. 1993. Programming in C. [ONLINE] Available at: [Accessed 07 February 2016].


Networking is a very important topic in computer science: it allows computers to communicate, governs how information is transferred between clients and servers, and is used to share information worldwide. How computers are interconnected and how communication flows between them is defined by the topology in which a network is organized. Network topology is defined as the way the nodes are placed and interconnected with each other (Technopedia, 2015).

This article will introduce and compare two of the most common topologies used in computer networking.


  • Bus Topology → All the nodes are connected sequentially to the same transmission line; to illustrate it, imagine a single cable to which all the computers are attached. The weakness of this design is that, since all communication flows through a single channel, one failure can bring down all network activity.


  • Star Topology → All the nodes are connected to one central device (a hub or a switch, for instance). A failure in a node does not affect the other computers, but a failure in the central device affects all the nodes.


Figure 2 Star topology


Deadlock is a concept in computing that describes a problem which prevents a system from operating normally. TechTarget defines it as follows: “A deadlock is a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function.” One example of deadlock is as follows:


Table 1 Example of Deadlock

Deadlocks in the bus topology may happen because data is transferred over the same cable simultaneously; each network segment is, therefore, a collision domain. As ConceptDraw explains: “Each network segment is, therefore, a collision domain. In order for nodes to transmit on the same cable simultaneously, they use a media access control technology such as carrier sense multiple access (CSMA) or a bus master” (ConceptDraw, 2015).


  • Bus topology works well in very small networks that do not need many resources or much equipment, which makes the networking solution cheaper;

  • It requires less cable;

  • Easiest network topology to be implemented.


  • Difficult to identify problems on the network;

  • Hard to identify individual issues;

  • Slow when connected with many devices;

  • Not recommended for large networks;


  • Centralized management;

  • Easy to add computers and very extensible;

  • If one computer fails, the others continue communicating unaffected.


  • May require expensive equipment;

  • If the central device fails, the whole network is impacted.


There is no single right recipe for a network; what can be recommended depends on the communication requirements, the available budget and the expectations the network must meet. With a low budget and only a few computers, a bus topology is the cost-effective choice; with more financial resources and a large number of computers, the star topology is certainly the better strategy.


Technopedia. 2016. Network Topology. [ONLINE] Available at: [Accessed 15 January 2016].

TechTarget. 2016. Definition of Deadlock. [ONLINE] Available at: [Accessed 15 January 2016].

ConceptDraw. 2016. Bus Network Topology Diagram. [ONLINE] Available at: [Accessed 15 January 2016].

Polymorphism is defined as “having multiple forms”: “poly” comes from the Greek word for many and “morph” from the Greek word for form, together giving the meaning of multiple forms (Ravichandran, 2001).

Polymorphism is quite common in high-level programming languages and is applied in different ways (The Beginners Book, 2013):

  • Variables may take different forms: for example, the variable ID may have the data type String or the data type Integer;

  • Functions may assume different forms. With an Integer parameter a function may look up a user by ID, and with a String parameter it may look up the username. This is usually called method overloading, which allows multiple methods with the same name but with different argument lists.

My subject this week is how polymorphism could be implemented at the hardware level, more specifically in memory and data cells.

Let's take a simple example of what can be done in the Ruby programming language:

“Before we get any further, we should make sure we understand the difference between numbers and digits. 12 is a number, but '12' is a string of two digits.

Let's play around with this for a while:

puts 12 + 12

puts '12' + '12'

puts '12 + 12'

24

1212

12 + 12

How about this:

puts 2 * 5

puts '2' * 5

puts '2 * 5'

10

22222

2 * 5”

(Chris Pine, 2006)

As shown above, the results depend on the data types involved: strings are concatenated, numbers are added or multiplied, and the mixed cases produce the expected language-specific behaviour for each combination of data type and operation. At the hardware level, however, the processor cannot distinguish data by its type; it simply processes whatever bits it is given on each cycle.

Since instructions and data stored in RAM are not distinguished, because there is no metadata in the data cells, how could we add such information to the data and make it possible to differentiate what kind of information is being processed? A further question follows: would this be viable in terms of processing cost and the cost of implementing such a solution in hardware?

One imagined solution is for the data to carry its own data type. To represent a positive integer, for instance, one byte would carry this information. Something similar already happens with the sign: in mathematics we mark negative numbers with the minus sign '-', but in computing the sign is represented by one bit, usually the most significant bit: 0 for a positive number or positive zero, and 1 for a negative number or negative zero (Tanenbaum, 2005).

Identifying what a collection of bits represents is the job of metadata, simply defined as “data that describes other data”. It can be compared to the sign bit carried along with the bit collection: metadata would identify the data type of the data it accompanies.

Two more things should be mentioned in this article: the processor would need to implement a type-check mechanism, and it would need to support overloaded instructions.

What would be necessary if I wanted to call the ADD instruction with an integer and a floating-point number as arguments? The answer is simple: overload the instruction so that it can receive different parameter types.

In C++, for instance, this can be achieved through early binding, which happens during compilation: the compiler decides which function to call based on the argument list. This is the default method in C++. It can also be achieved through late binding, where the function is chosen at execution time, or through pure virtual functions, which contain only the function declaration and must be implemented by subclasses (Ravichandran, 2001).

In the case of the processor, function overloading would have to be resolved at execution time, since the processor executes operations dynamically and does not work with compiled source code.

If we consider the data path of a typical von Neumann machine, we will realize that the scenario imagined above would be a utopia in the x86 computer architecture we have today. The figure below shows the data path for a basic addition operation (Tanenbaum, 2005).

Figure 1 Data path of a typical Von Neumann machine


Ravichandran, D. R., 2001. Introduction to Computers and Communication. 1st ed. New Delhi: Tata McGraw-Hill Education.

The Beginners Book. 2013. Polymorphism in Java – Method Overloading and Overriding. [ONLINE] Available at: [Accessed 26 December 2015].

Tanenbaum, A. S., 2005. Structured Computer Organization. 5th ed. Amsterdam, The Netherlands: Pearson Education, Inc.

Pine, C., 2006. Learn to Program. [ONLINE] Available at: [Accessed 26 December 2015].