Introduction to Software Development Process

The objective of this mini article is to introduce software engineering: why software needs engineering, how engineering can be applied to software development, a brief history of software processes, the waterfall software process, and the agile software process.

INTRODUCTION TO SOFTWARE ENGINEERING

A good start for a Software Engineering article is to define what Software Engineering is. In 1972, Friedrich Ludwig Bauer wrote a very simple definition in his book entitled “Software Engineering”:

“Software engineering is the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines.”

Pressman adds the following questions to this definition in one of the most important books about Software Engineering, entitled “Software Engineering – A Practitioner’s Approach”:

“What are the “sound engineering principles” that can be applied to computer software development? How do we “economically” build software so that it is “reliable”? What is required to create computer programs that work “efficiently” on not one but many different “real machines”?”

(Pressman, 2010).

Of course, these questions capture the challenge of building software, and software engineering methodologies and processes are the tools needed to answer them: how to plan, how to execute, how to test, and how to deliver a final result of reliable quality.

SOFTWARE ENGINEERING PROCESSES

Software engineering processes are the methodologies applied during software development in order to achieve the expected results with excellence. Pressman gives a simple definition of a software process: “I define a software process as a framework for the activities, actions, and tasks that are required to build high-quality software.” (Pressman, 2010).

THE WATERFALL SOFTWARE DEVELOPMENT PROCESS

The waterfall model is very simple to define: it is linear, meaning each phase is developed and completed before the next phase starts, with no overlapping. It was the first process model to be introduced in software engineering (Sommerville, 2007).


Figure 1 Illustration of a linear software development process – Waterfall

THE AGILE CONCEPT / ITERATIVE AND INCREMENTAL PROCESS

Rapid development and delivery is therefore now often the most critical requirement for software systems. In fact, many businesses are willing to trade-off software quality and compromise on requirements against rapid software delivery. (Sommerville, 2007).

The agile software development processes came with the idea of rapid deliveries and faster feedback from users and clients, in a technological world full of change in businesses, laws, teams, and so on. The idea of planning, developing, testing, and delivering together in order to achieve a common goal is the key aspect of the agile methodologies: results and deliverable components/functionality come first, and if a change is needed it is handled in a dynamic process which is simply called “Agile”.

One of the best-known processes in the agile space is Scrum, which takes an approach that is both iterative and incremental and is considered one of the most widely used software development processes today.


Figure 2 Illustration of the Scrum process

A simple description of how the Scrum process works is as follows: user stories are added to a product backlog, which basically holds the functionalities desired by the users, the product owner and the other team members. Sprints are created, which are iterations with a common objective of deliverable functionality. Daily meetings happen in order to improve communication within the team; they should take 10 to 15 minutes and be held every day. At the end of the sprint, a demonstration of the deliverables is presented to the team. Grooming sessions and planning sessions are also part of the Scrum methodology. All the prototyping, analysis, design, development and testing happens in an iterative and incremental way, with the common objective of producing faster deliverables at the end of each sprint.
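
Purely as an illustration of the artefacts described above, here is a minimal Java sketch of a product backlog feeding a sprint backlog (all class and story names are my own invention, not part of any Scrum tooling):

import java.util.ArrayList;
import java.util.List;

// Toy model of the Scrum artefacts described above: a product backlog of user
// stories, and a sprint backlog holding the subset the team commits to deliver.
public class ScrumSketch {

    record UserStory(String title) {}

    public static void main(String[] args) {
        List<UserStory> productBacklog = new ArrayList<>(List.of(
                new UserStory("As a user, I want to log in"),
                new UserStory("As a user, I want to reset my password"),
                new UserStory("As an admin, I want to list all users")));

        // Sprint planning: the team pulls the top stories into the sprint backlog.
        List<UserStory> sprintBacklog = new ArrayList<>(productBacklog.subList(0, 2));
        productBacklog.removeAll(sprintBacklog);

        System.out.println("Sprint backlog: " + sprintBacklog);
        System.out.println("Remaining product backlog: " + productBacklog);
    }
}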

REFERENCES

Bauer, F. L., 1972. Software Engineering. 1st ed. Germany: Information Processing.

Pressman, R. S., 2010. Software Engineering: A Practitioner’s Approach. 5th ed. New York: McGraw-Hill.

Sommerville, I., 2007. Software Engineering. 5th ed. England: Pearson Education Limited.

A Brief Analysis of a Small Prolog Program

INTRODUCTION

The objective of this small article is to analyze a very small program written in the Prolog programming language, in order to understand a bit of how this programming language works. Prof. William F. Clocksin defines it as “Prolog is a computer programming language that is used for solving problems that involve objects and the relationship between objects. When we say “John owns the book”, we are declaring that a relationship, ownership, exists between one object ‘John’ and another individual object ‘the book’” (Clocksin, 2003).

The object of analysis in this article is the following Prolog program:

parent(dad_tough_guy, babe_ruth).
parent(dad_tough_guy, little_kid).
parent(john_oldman, dad_tough_guy).
parent(mary_oldwoman, dad_tough_guy).
grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

The program above declares the following relations:

  • Dad Tough Guy is a parent of Babe Ruth;

  • Dad Tough Guy is a parent of Little Kid;

  • John Oldman is a parent of Dad Tough Guy;

  • Mary Oldwoman is a parent of Dad Tough Guy;

The first four lines declare facts. Facts are defined by Prof. William F. Clocksin as follows: “Suppose we want to tell Prolog the fact that ‘John likes Mary’. This fact consists of two objects, called ‘Mary’ and ‘John’, and a relationship, called ‘likes’. In Prolog, we need to write facts in a standard form, like this:

likes(john, mary).

(Clocksin, 2003).

The first four lines declare the relations in this family. The last line is a Prolog rule defining that a person X is a grandparent of Y if and only if X is a parent of some Z AND Z is a parent of Y.
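
For readers more used to imperative languages, here is a rough Java sketch (class and method names are my own) of what the grandparent rule computes over the four parent facts above:

import java.util.ArrayList;
import java.util.List;

// Rough Java sketch of what the grandparent/2 rule computes: for every pair of
// parent facts (x, z) and (z, y), x is a grandparent of y.
public class GrandparentSketch {

    record Parent(String parent, String child) {}

    public static void main(String[] args) {
        List<Parent> facts = List.of(
                new Parent("dad_tough_guy", "babe_ruth"),
                new Parent("dad_tough_guy", "little_kid"),
                new Parent("john_oldman", "dad_tough_guy"),
                new Parent("mary_oldwoman", "dad_tough_guy"));

        List<String> answers = new ArrayList<>();
        for (Parent first : facts) {
            for (Parent second : facts) {
                if (first.child().equals(second.parent())) {
                    answers.add(first.parent() + " is a grandparent of " + second.child());
                }
            }
        }
        answers.forEach(System.out::println);
    }
}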

Example of queries on the given code:


Figure 1 Example of usage of the Prolog interpreter

QUERIES

Consider the following query:

grandparent(X, babe_ruth).

When executed in the Prolog interpreter, it gives the following result:


Figure 2 Output of a Query execution in the Prolog interpreter

Basically, the Prolog interpreter responds with the possible bindings for the unknown variable X, in this case the grandparents of babe_ruth: john_oldman and mary_oldwoman.

ACADEMIC REFERENCES

Clocksin, William F., 2003. Programming in Prolog. 5th ed. Berlin: Springer.

RESOURCES WHICH HELPED ON THE RESULTS

HOW to run a Prolog program – YouTube. 2016. HOW to run a Prolog program – YouTube. [ONLINE] Available at: https://www.youtube.com/watch?v=6Dh7eux76a8. [Accessed 02 February 2016].

Programming In Prolog Part 1 – Facts, Rules and Queries – YouTube. 2016. Programming In Prolog Part 1 – Facts, Rules and Queries – YouTube. [ONLINE] Available at: https://www.youtube.com/watch?v=gJOZZvYijqk. [Accessed 02 February 2016].

Network Hacking

Network hacking is a very common topic nowadays, and it has been one of the most important topics in computer science since the early years of connectivity and the widespread adoption of computers and networks.

A very simple definition of hacking is given as follows: “In computer networking, hacking is any technical effort to manipulate the normal behavior of network connections and connected systems. A hacker is any person engaged in hacking. The term “hacking” historically referred to constructive, clever technical work that was not necessarily related to computer systems. Today, however, hacking and hackers are most commonly associated with malicious programming attacks on the Internet and other networks.” (About.com, 2016).

There are many techniques a hacker may use to exploit a system or a vulnerability; they can be technical or social. The following paragraphs summarize some of the ways a computer can be hacked in a networking context.

1. SOCIAL ENGINEERING

Social engineering is a technique a hacker uses to persuade someone in order to achieve a goal, which may be to obtain some important data or to gain unauthorized access. Imagine a data center, and suppose a bad-intentioned hacker wants to get access to that place; this person may dress like the cleaning staff and fake credentials to get into the place. This is a very simple example of how a social engineering activity may happen. According to TechTarget (2016), some common examples of social engineering are:

  • Virus writers use social engineering tactics to persuade people to run malware-laden email attachments;

  • Phishers use social engineering to convince people to divulge sensitive information;

  • Scareware vendors use social engineering to frighten people into running software that is useless at best and dangerous at worst;

2. EXPLOITATION OF SOFTWARE TECHNICAL FLAWS

Computer software is not inherently safe, and there are many flaws which can be exploited in order to get privileged access to computers; this is one of the most common ways hackers use to get access to private and corporate machines.

Apache has been the most common web server on the internet since April 1996, and is currently used by 38% of all websites (Netcraft.com, 2014). The most important HTTP server in the world, the Apache HTTP Server, has a section on its website dedicated to announcing and helping to detect security flaws in the software.

3. USE OF HACKING TECHNIQUES

There are several common hacking techniques used to obtain privileged information. Some of them are:

  • DNS POISONING: Consists of tampering with name resolution so that internet addresses resolve to fake pages, which are then used to capture user data;

  • SNIFFING: Consists of intercepting network traffic and reading it in order to obtain privileged information;

  • MAN-IN-THE-MIDDLE: Consists of intercepting and faking responses in order to manipulate user activity and obtain privileged information;

Other common techniques are: Spoofing, Brute Forcing and Session Hijacking.

CONCLUSION

There are many ways of exploiting systems, and it is really difficult to guarantee that a server is 100% secure. However, one of the key computing concepts that has improved server security is cloud computing, in which servers are hosted and maintained in secure data centers where security is a key aspect, and most providers offer security tools and resources to improve server security in the business computing space.

REFERENCES

About.com. 2016. What is a Hacker?. [ONLINE] Available at: http://compnetworking.about.com/od/networksecurityprivacy/f/what-is-hacking.htm. [Accessed 27 January 16].

TechTarget.com. 2016. Social Engineering. [ONLINE] Available at: http://searchsecurity.techtarget.com/definition/social-engineering. [Accessed 27 January 16].

Netcraft.com. 2014. Are there really lots of vulnerable Apache web servers?. [ONLINE] Available at: http://news.netcraft.com/archives/2014/02/07/are-there-really-lots-of-vulnerable-apache-web-servers.html. [Accessed 27 January 16].

Writing Algorithms and Unit Testing Computer Program Code

INTRODUCTION

A set of instructions to achieve an objective is a simple definition of an algorithm. In the computing space it can be compared to a cooking recipe for solving a real-world, computational or mathematical problem. This is defined as:

“Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.” (Cormen, 2009).

PROPERTIES OF AN ALGORITHM

The basic properties of an algorithm are: it is written in a plain language (usually English); each step is self-explanatory; it has at least one input; it has at least one output; and it has a limited number of steps (CareerRide, 2016).

EXAMPLE OF AN ALGORITHM

Example of an algorithm:

Problem:

  • Search a string of at least five numbers (for example, 37540);

  • Identify all of the substrings that form numbers that are divisible by 3;

  • For example, applying the algorithm on the string 37540 should produce the following substrings (not necessarily in this order): 0; 3; 75; 54; 375; 540.

Solution in pseudo-code written using the English Language:

PSEUDO CODE Algorithm1

 INITIALIZE numbersString WITH "37540" TYPE STRING
 INITIALIZE cursor WITH 0 TYPE INTEGER
 INITIALIZE cursor2 WITH 0 TYPE INTEGER
 INITIALIZE value WITH 0 TYPE INTEGER
 
 FUNCTION START
   WHILE cursor IS LESS THAN LENGTH OF numbersString
     SET cursor2 TO cursor + 1
     WHILE cursor2 IS LESS THAN OR EQUAL TO LENGTH OF numbersString
       SET value TO THE NUMBER FORMED BY THE SUBSTRING OF numbersString FROM cursor TO cursor2
       IF value MOD 3 = 0
         PRINT value
       END IF
       SET cursor2 TO cursor2 + 1
     END WHILE
     SET cursor TO cursor + 1
   END WHILE
 END FUNCTION

END PSEUDO CODE
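
For completeness, here is a minimal Java sketch of the same algorithm (the class and variable names are my own); for the string "37540" it prints exactly the substrings that are divisible by 3:

// Prints every substring of numbersString whose numeric value is divisible by 3.
public class DivisibleSubstrings {
    public static void main(String[] args) {
        String numbersString = "37540";
        for (int cursor = 0; cursor < numbersString.length(); cursor++) {
            for (int cursor2 = cursor + 1; cursor2 <= numbersString.length(); cursor2++) {
                String candidate = numbersString.substring(cursor, cursor2);
                long value = Long.parseLong(candidate);
                if (value % 3 == 0) {
                    System.out.println(candidate);
                }
            }
        }
    }
}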

HOW ALGORITHMS BECOME COMPUTER PROGRAMS

As introduced above, an algorithm is a set of instructions to solve a problem. When this set is programmed in a programming language, compiled and set to run, it becomes an executable computer program which may be executed and reused as many times as wanted. Algorithms in action are computer programs running on really powerful machines, processing thousands of inputs of data on a worldwide scale. This is what computer programs do.

TESTING COMPUTER PROGRAMS

Computer programs can be tested using Unit Testing Code, which is basically a program to test another program using a predetermined input.

A unit test examines the behavior of a distinct unit of work. Within a Java application, the “distinct unit of work” is often (but not always) a single method. By contrast, integration and acceptance tests examine how various components interact. A unit of work is a task that is not directly dependent on the completion of any other task. (Massol, 2004).

UNIT TESTING

Let's introduce a simple calculator code example:


Figure 1 Code of a Calculator Program

Basically, as shown above, the add method receives two arguments, number1 and number2, and returns the result of adding the two values. How could it be proven that this piece of software works well? Let's suppose we have mathematical software using this method, with thousands of operations depending on this small piece of code for all the ADD operations. Now think about this small piece of software failing!
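
Since Figure 1 is only a screenshot, here is a minimal sketch of what such a calculator class might look like (the class name and exact types in the signature are assumptions):

// Hypothetical calculator class, assumed to match the add method described above.
public class Calculator {

    // Returns the sum of the two arguments.
    public int add(int number1, int number2) {
        return number1 + number2;
    }
}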

So the question comes up: how could this small piece of software be tested and proven to work correctly in our computer software?


Figure 2 Example of a Unit Test using the jUnit framework

The answer is simple: by writing a unit test for this software component. The code above simply calls the calculator, passing two numerical values, and checks the expected result. This is the way a software component can be effectively tested, providing satisfactory evidence that the component is behaving normally.

In the example above, 10 and 50 are passed, and the test expects the result 60. If 60 is the output, the requirement for the test is satisfied and the unit test passes.
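
Again, since Figure 2 is a screenshot, a minimal sketch of that test (assuming JUnit 4; class and method names are my own) could look like this:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Unit test for the Calculator: pass 10 and 50 and expect 60, as described above.
public class CalculatorTest {

    @Test
    public void addShouldReturnTheSumOfTwoNumbers() {
        Calculator calculator = new Calculator();
        assertEquals(60, calculator.add(10, 50));
    }
}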

BENEFITS OF UNIT TESTING

Unit testing greatly improves the quality of the final product, so the first benefit is the code quality achieved on projects that use unit testing correctly.

The second benefit is that product integrity is preserved. When a unit test is coded the right way, it captures the requirements on how the piece of code (the classes) needs to behave. If someone else changes any line of code and makes a mistake, the unit test will not pass, and you will easily detect a defect that would otherwise make its way into your software product.

The third benefit is that the tests make your software components easier to understand. If a developer doesn't know anything about the product and wants to change something, the unit tests can often be used as documentation for the software, because they make it possible to understand the small parts of any component/class.

REFERENCES

Cormen, T.H, 2009. Introduction to Algorithms. 3rd ed. Massachusetts: The MIT Press.

CareerRide. 2016. Data structure – Algorithm, properties of an algorithm, types of algorithms. [ONLINE] Available at: http://www.careerride.com/Data-structure-algorithm-and-its-types.aspx. [Accessed 24 January 16].

Massol, V., 2004. JUnit in Action. 1st ed. Greenwich: Manning Publications.

Big-O notation explained by a self-taught programmer

Lately I have been studying Big-O notation, so I am posting a very nice article I found about this topic, for those who are starting to learn it:

https://justin.abrah.ms/computer-science/big-o-notation-explained.html

Another thing I’ve been playing with is MIT Scratch 🙂

https://scratch.mit.edu/

Network Topologies

INTRODUCTION

Networking is a very important topic in computer science: it allows computers to communicate, defines the way information is transferred between clients and servers, and is used to share information worldwide. How computers are interconnected, and how communication flows between them, is defined by the topology in which a network is organized. Network topology is defined as the way the nodes are placed and interconnected with each other (Techopedia, 2016).

This article will introduce and compare two of the most common topologies used in computer networking.

COMMON TOPOLOGIES

  • Bus Topology → Organized with all the nodes connected sequentially to the same transmission line. To illustrate it, just imagine a single cable to which all the computers are connected. The weakness here is that, since all communication flows through a single channel, a failure may bring down all networking activity.

Figure 1 Bus topology

  • Star Topology → All the nodes are connected to one distributor device (a hub or switch, for instance). A failure in an individual node does not affect all the computers; however, a failure in the central/distributor device affects all the nodes.


Figure 2 Star topology

WEAKNESSES

Deadlock is a computing concept related to a problem which does not allow a system to operate in a regular manner. TechTarget defines it as “a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function” (TechTarget, 2016). One example of deadlock is as follows:


Table 1 Example of Deadlock
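
As the table is only an illustration, here is a minimal Java sketch of the classic two-resource deadlock (class, thread and resource names are my own): each thread holds one resource and waits forever for the other.

// Two threads deadlock: each grabs one resource, then blocks waiting for the
// resource the other thread already holds.
public class DeadlockExample {

    private static final Object resourceA = new Object();
    private static final Object resourceB = new Object();

    public static void main(String[] args) {
        Thread first = new Thread(() -> {
            synchronized (resourceA) {
                pause();
                synchronized (resourceB) {
                    System.out.println("first thread got both resources");
                }
            }
        });
        Thread second = new Thread(() -> {
            synchronized (resourceB) {
                pause();
                synchronized (resourceA) {
                    System.out.println("second thread got both resources");
                }
            }
        });
        first.start();
        second.start(); // with the pause, neither println is ever reached
    }

    // Small delay so each thread grabs its first resource before trying the second.
    private static void pause() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}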

Deadlocks in the bus topology may happen because of the way data is transferred over the same cable simultaneously: each network segment is, therefore, a collision domain. As ConceptDraw says, “Each network segment is, therefore, a collision domain. In order for nodes to transmit on the same cable simultaneously, they use a media access control technology such as carrier sense multiple access (CSMA) or a bus master” (ConceptDraw, 2016).

ADVANTAGES OF THE BUS TOPOLOGY

  • Bus topology may work well in very small networks which do not need a lot of resources and equipment, which makes the networking solution cheaper;

  • It requires less cable;

  • Easiest network topology to be implemented.

DISADVANTAGES OF THE BUS TOPOLOGY

  • Difficult to identify problems on the network as a whole;

  • Hard to isolate issues with individual devices;

  • Slow when many devices are connected;

  • Not recommended for large networks;

ADVANTAGES OF THE STAR TOPOLOGY

  • Centralized management;

  • Easy to add computers and very extensible;

  • If one computer fails, the others continue communicating and are not affected.

DISADVANTAGES OF THE STAR TOPOLOGY

  • May require expensive equipment;

  • If the centralized device fails, the whole network is impacted.

CONCLUSION

It is hard to give a single right recipe for a network; what can be recommended for sure depends on the communication requirements, the budget available for the implementation, and the expectations the network must meet. With a low budget and only a few computers, consider how cost-effective a bus topology would be; with more financial resources and a larger number of computers, the star topology would surely be the better strategy.

REFERENCES

Techopedia. 2016. Network Topology. [ONLINE] Available at: https://www.techopedia.com/definition/5538/network-topology. [Accessed 15 January 16].

TechTarget. 2016. Definition of Deadlock. [ONLINE] Available at: http://whatis.techtarget.com/definition/deadlock. [Accessed 15 January 16].

ConceptDraw. 2016. Bus network topology diagram. [ONLINE] Available at: https://conceptdraw.com/a878c3/preview/640. [Accessed 15 January 16].

Data Storage: From the Past to Today and Beyond

Computers do nothing without information, which needs to be processed and stored; this makes it necessary to use advanced technology to write data and keep it for further access. Technology nowadays offers many ways to store and handle data (magnetic disks and flash drives, for instance), but as systems generate large amounts of data, it must be retrieved fast, stored with a reliable technology, be cost-effective, and be centralized, decentralized or even both, depending on the architecture strategy.

In computing, when data is stored and remains available to be retrieved at any time, it is called “non-volatile memory”. Non-volatile memory refers to memory chips that hold their content without power being applied (PcMagazine, 2014).

The data storage topic is, and will continue to be, a critical topic in computing, especially considering decentralized systems with huge amounts of data using the concepts of cloud and big data. According to ZDNet (2014), as data usage continues to grow exponentially, IT managers will need to orchestrate multiple kinds of storage — including flash, hard disk and tape — in a way that optimizes capacity, performance, cost and power consumption.

STORAGE IN THE PAST

Throughout all these technological advances, some technologies have become obsolete. In this article I will present two obsolete storage technologies, listed below:

  • Zip drive: The Zip drive was a small, portable disk similar to a floppy disk, but able to hold larger amounts of data. The technology was created and sold by Iomega Corporation. Disks were initially available in two capacities: 100 megabytes and 250 megabytes (Search Mobile Computing, 2007); the Zip disk topped out at 750 MB by the end of its life (Wired, 2008). My personal experience with this kind of storage is that a friend used to work for a photography company which edited photos and created artwork; the disks were a good fit for storing high-resolution images at 300 dpi.

  • Floppy disks: The floppy diskette was created in 1967 by IBM and was much cheaper than hard drives, which were expensive at the time. Floppy disks were for many years the only way to install computer software, because they were the common type of removable media at that time. This type of storage supported 1.44 MB in its later versions (Computerhope, 2015). My personal experience with floppy disks is that I used them to store school homework and to transfer software and games when I was a child.

Personally, I believe both of the technologies presented above became obsolete because of the introduction of more capable storage like CD-ROMs and pen drives, and more recently the availability of high-speed internet access and the spread of online cloud data storage.

TODAY (Technology Trend #1)

One of the most notable storage technologies nowadays is flash memory. When you store data in your smartphone, digital camera or GPS you are using this technology. Solid-state drives (SSDs) using flash memory are replacing hard drives in netbooks and PCs and even some server installations; needing no batteries or other power to retain data, flash is convenient and relatively foolproof (ComputerWorld, 2014). Technically speaking, flash memory is a specific type of EEPROM (an acronym for Electrically Erasable Programmable Read-Only Memory), which is programmed and erased in large blocks.

I personally use SSD storage on my computer, and I feel my OS running fast and quietly, unlike when I used HDD storage in personal computers. I personally believe HDD technology will continue to be commercialized in the coming years, especially because of its price; nevertheless, as SSD technology becomes cheaper it may overtake HDD in terms of popularity.

THE TREND (Technology Trend #2)

“Trends and predictions from 26 forward-looking articles on enterprise storage published around the turn of 2014/2015” (ZDNet, 2015):

Figure 1 Technology trends in Enterprise Storage

CONCLUSION

I conclude this article with my personal insight on computer storage. We should consider cloud computing as the key technology to support storage. Today, I store my pictures on Flickr, Google Photos, Instagram and Facebook. This is a revolutionary shift, and I think it will become even more common as people start to sync all their devices using centralized servers. Technologically speaking, as SSD drives become cheaper, I believe they are a very reliable and fast way to store information in the enterprise space to support the cloud availability of data.

REFERENCES

PcMagazine. 2014. Definition of: non-volatile memory. [ONLINE] Available at: http://www.pcmag.com/encyclopedia/term/48059/non-volatile-memory. [Accessed 20 July 14].

Wired. 2008. 5 Obsolete Storage Formats. [ONLINE] Available at: http://www.wired.com/2008/06/five-obsolete-s/. [Accessed 30 December 15].

ZDNet. 2014. Storage in 2014: An overview. [ONLINE] Available at: http://www.zdnet.com/storage-in-2014-an-overview-7000024712/. [Accessed 30 December 15].

TechTarget. 2007. Zip drive definition. [ONLINE] Available at: http://searchmobilecomputing.techtarget.com/definition/Zip-drive. [Accessed 30 December 15].

Computerhope. 2015. What is a Floppy disk?. [ONLINE] Available at: http://www.computerhope.com/jargon/f/floppydi.htm. [Accessed 30 December 15].

ComputerWorld. 2014. Flash memory. [ONLINE] Available at: http://www.computerworld.com/s/article/349425/Flash_Memory. [Accessed 30 December 15].

ZDnet. 2015. Enterprise storage: Trends and predictions. [ONLINE] Available at: http://www.zdnet.com/article/enterprise-storage-trends-and-predictions/. [Accessed 30 December 15].

Hardware Polymorphism in the x86 arch

Polymorphism is defined as “having multiple forms”: “poly” comes from the Greek word meaning multiple, and “morphism” from the Greek word meaning form, joining together into the meaning of multiple forms (Ravichandran, 2001).

Polymorphism is quite common in high-level programming languages and has different forms of application (The Beginners Book, 2013):

  • For variables, which may have the capability of taking different forms: for example, the variable ID may have the data type String or may be of the Integer data type;

  • For functions, which may assume different forms. With an Integer parameter the function may be responsible for looking up the user by ID, and with a String parameter it may look up the user by username. This is usually called method overloading, which allows having multiple methods with the same name but with different argument lists, as sketched below.
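
A minimal Java sketch of the overloading case described in the second bullet (the class and method names are my own):

// Two methods with the same name but different argument lists: the compiler
// picks the right one based on the type of the argument passed.
public class UserLookup {

    // Looks up a user by numeric ID.
    public String findUser(int id) {
        return "user with id " + id;
    }

    // Overload: looks up a user by username instead.
    public String findUser(String username) {
        return "user named " + username;
    }

    public static void main(String[] args) {
        UserLookup lookup = new UserLookup();
        System.out.println(lookup.findUser(42));      // calls findUser(int)
        System.out.println(lookup.findUser("alice")); // calls findUser(String)
    }
}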

My subject this week is to study how polymorphism could be implemented in the hardware space, more specifically in memory and data cells.

Let's look at a simple example of what can be done in the Ruby programming language:

“Before we get any further, we should make sure we understand the difference between numbers and digits. 12 is a number, but '12' is a string of two digits.

Let’s play around with this for a while:

puts 12 + 12

puts '12' + '12'

puts '12 + 12'

(results)

24

1212

12 + 12

How about this:

puts 2 * 5

puts '2' * 5

puts '2 * 5'

(results)

10

22222

2 * 5”

(Chris Pine, 2006)

As shown above, the results depend on how the data is distinguished: string values are concatenated, numbers are calculated, and the mixed operations produce the expected language-specific behavior for each data type and operation. At the hardware level, you cannot distinguish data based on its type, because the processor simply executes and processes data cycle by cycle.
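
For comparison, the same distinction appears in Java, where the static types decide whether + means arithmetic addition or string concatenation:

// The compiler chooses the meaning of + from the operand types:
// int + int is arithmetic, String + String is concatenation.
public class NumbersVersusDigits {
    public static void main(String[] args) {
        System.out.println(12 + 12);     // prints 24
        System.out.println("12" + "12"); // prints 1212
        System.out.println(2 * 5);       // prints 10
        System.out.println("2 * 5");     // prints 2 * 5 (just a string literal)
    }
}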

Since the data and instructions stored in RAM are not distinguished, because there is no metadata on the data cells, how could we add such information to the data and make it possible to differentiate what kind of information is being processed? And one more question comes up: would that be viable in terms of processing cost and the cost of implementing such a solution in hardware?

One imaginary solution to this problem is for the data to carry its own data type. In the case of representing a positive integer, for instance, there would be one byte carrying this information. The same already occurs with the sign: in mathematics we represent negative numbers with the minus identifier '-', but in computing the sign is represented by one bit, usually the most significant bit: 0 for a positive number or positive zero, and 1 for a negative number or negative zero (Tanenbaum, 2005).

The mechanism for identifying what a collection of bits represents is called metadata, simply defined as “data that describes other data”. It can be compared to the sign bit, which is carried along with the bit collection and, in this case, would identify the data type.
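
A toy Java sketch of this “data carries its own type” idea (the tag values and the class itself are my own invention, not a real hardware feature):

// Toy model: a value that carries a one-byte type tag next to its raw bits,
// so whoever processes it can tell an integer from a floating-point number.
public class TaggedValue {

    static final byte TAG_INT = 0x01;
    static final byte TAG_FLOAT = 0x02;

    final byte typeTag;
    final int rawBits;

    TaggedValue(byte typeTag, int rawBits) {
        this.typeTag = typeTag;
        this.rawBits = rawBits;
    }

    static TaggedValue ofInt(int value) {
        return new TaggedValue(TAG_INT, value);
    }

    static TaggedValue ofFloat(float value) {
        // Store the IEEE 754 bit pattern, tagged as a float.
        return new TaggedValue(TAG_FLOAT, Float.floatToIntBits(value));
    }

    public static void main(String[] args) {
        TaggedValue a = ofInt(42);
        TaggedValue b = ofFloat(3.5f);
        // A "type-aware ADD" would inspect the tags before deciding how to add.
        System.out.println("tag of a = " + a.typeTag + ", tag of b = " + b.typeTag);
    }
}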

Two more things should be mentioned in this article: the need for the processor to implement a type-check solution, and the need for the processor to support overloaded instructions.

What would be necessary if I wanted to call the ADD instruction passing an integer and a floating-point value as arguments? The answer is simple: overloading the instruction so that it can receive different kinds of parameters.


In C++, for instance, this can be achieved using early binding, which happens during compilation, where the compiler decides which function to call based on the argument list or the function return type; this is the default method used in C++. It can also be achieved with late binding, in which the function is chosen at execution time, or with pure virtual functions, which only have the function declaration and which the subclasses must implement (Ravichandran, 2001).
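
The analogous idea in Java, just to illustrate late binding: the method that runs is chosen at execution time from the actual object, not from the declared type of the variable.

// Late binding: shape is declared as Shape, but the overridden Circle.area()
// is the method that actually runs, chosen at execution time.
public class LateBindingSketch {

    static class Shape {
        double area() {
            return 0.0;
        }
    }

    static class Circle extends Shape {
        @Override
        double area() {
            return Math.PI; // area of a circle with radius 1, for illustration
        }
    }

    public static void main(String[] args) {
        Shape shape = new Circle();
        System.out.println(shape.area()); // prints 3.141592653589793
    }
}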

In the case of the processor, it would be necessary to resolve the overloaded instruction at execution time, considering that the processor executes operations dynamically and does not work with the compiled source code.

If we consider the data path of a typical Von Neumann machine, we will realize that the imagined scenario proposed above would be a utopia in the x86 computer architecture we have today. The figure below shows the data path for a basic addition operation (Tanenbaum, 2005).

Figure 1 Data path of a typical Von Neumann machine

REFERENCES

Ravichandran, D. R, 2001. Introduction to Computers and Communication. 1st ed. New Delhi: Tata McGraw-Hill Education.

The Beginners Book. 2013. Polymorphism in Java – Method Overloading and Overriding. [ONLINE] Available at: http://beginnersbook.com/2013/03/polymorphism-in-java/. [Accessed 26 December 15].

Tanenbaum, Andrew S., 2005. STRUCTURED COMPUTER ORGANIZATION. 5th ed. Amsterdam, The Netherlands: Pearson Education, Inc.

Chris Pine. 2006. Learn to Program . [ONLINE] Available at: https://pine.fm/LearnToProgram/chap_02.html. [Accessed 26 December 15].

Trends in IT: NoSQL Databases

During the last decade, a new subject in the area of data storage came along to challenge the default choice of a paradigm that had been kept for years: relational databases. In spite of the existence of different paradigms in database technologies, such as object-oriented data stores, the relational paradigm has always been the default choice when starting a new project in IT. Projects like CouchDB (2005), MongoDB (2008) and Cassandra (2008) came with the idea of storing data in a different manner compared to the “default” relational world (NoSQL Seminar, 2012).

Not Only SQL, or NoSQL, is a term which came with the idea of storing data without rigidly fixed schemas, in a distributed architecture, taking a different approach from the relational way of thinking based on standard SQL commands and the relational concept. Basically, this is a new paradigm for storing data and a different way of thinking when designing an application.

Advantages of using this paradigm are summarized below (Tech Republic, 2010):

1. Elastic Scaling: Scale up by just adding new machines (distributed);
2. Flexible Data Models: Key-value pairs and document databases (see the sketch after this list);
3. Economic: No need to use proprietary servers and storage;
4. Big Data: Storing very large amounts of data is possible, beyond what a single RDBMS could handle.
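
To give an idea of the flexible, schema-less data model mentioned in item 2, here is a rough Java sketch in which each “document” is just a set of key/value pairs, so two documents in the same collection do not need to share a schema (the field names are invented for illustration):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Each document is a free-form map; user2 adds a field user1 does not have,
// and no schema change is required anywhere.
public class DocumentStoreSketch {
    public static void main(String[] args) {
        Map<String, Object> user1 = new HashMap<>();
        user1.put("name", "Alice");
        user1.put("email", "alice@example.com");

        Map<String, Object> user2 = new HashMap<>();
        user2.put("name", "Bob");
        user2.put("tags", List.of("admin", "beta-tester"));

        for (Map<String, Object> document : List.of(user1, user2)) {
            System.out.println(document);
        }
    }
}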

Over the next decade, I believe there will be greater and wider acceptance of the NoSQL paradigm in new IT projects, simply because of the possibility of spending less money than on big proprietary storage solutions from huge corporations like Oracle or Microsoft, which are very expensive, while getting the best of scalability and the power to process large amounts of data.

There are also differences and limitations in NoSQL databases: some of the implementations do not support ACID transactions, which can sometimes bring a lack of reliability.

Although getting technical support for these technologies can be challenging, because most of them are open source tools, today MongoDB Inc., for instance, provides full-time support in the English language (MongoDB, 2015).

REFERENCES

NoSQL Seminar 2012 @ TUT. 2012. Introduction to NoSQL. [ONLINE] Available at: http://www.cs.tut.fi/~tjm/seminars/nosql2012/NoSQL-Intro.pdf. [Accessed 23 December 15].

Tech Republic. 2010. 10 things you should know about NoSQL databases. [ONLINE] Available at: http://www.techrepublic.com/blog/10-things/10-things-you-should-know-about-nosql-databases/. [Accessed 23 December 15].

MongoDB. 2015. Support Policy. [ONLINE] Available at: https://www.mongodb.com/support-policy. [Accessed 23 December 15].

HTTP/2 on Software Engineering Radio

Stefan Tilkov talks to Mark Nottingham, chair of the IETF (Internet Engineering Task Force) HTTP Working Group and Internet standards veteran, about HTTP/2, the new version of the Web’s core protocol. The discussion provides a glimpse behind the process of building standards. Topics covered include the history of HTTP versions, differences among those versions, and the relation of HTTP/2 to Google’s SPDY open networking protocol. Mark goes into detail about HTTP/2’s technical features, including binary framing, improved connection handling, server push, and the different protocol negotiation approaches. The episode concludes with a look at the consequences of HTTP/2 availability and adoption, especially regarding the various hacks that are best practices with HTTP/1.1.

Link: HTTP/2 podcast