Target Security Breach in 2013


The main topic of this discussion is the Target Corporation security breach of 2013, which affected customers who swiped their credit and debit cards between November 27 and December 15, 2013, and became a very important case in network security. The breach was first announced in a report by security researcher Brian Krebs, who revealed that Target had suffered a data breach and that customers' card information had been exposed and unsecured (CNN Money, 2013).

About the Company

Target Corporation is the second-largest American retailer, with revenue of $72.6 billion in 2013, and holds 29th place on Fortune's World's Most Admired Companies list (Target Corporation, 2014).

Affected and Impacted Customers

The main fact is that in 2013, information on about 40 million credit and debit cards stored on Target Corporation computers was stolen by hackers, and this information could be used by bad-intentioned actors for their own benefit.

Technical Information and Causes

To understand a bit more about it: security experts said that hackers targeted the point-of-sale system, perhaps infecting the terminals with malware or collecting the data en route to the credit card processors (CNN Money, 2013).

Specialists said that the main problem is the obsolete technology used in American credit and debit card transactions. Cards used abroad carry a chip that creates a unique code for each transaction and is more difficult to clone, unlike the magnetic stripe used in the USA, which can easily be duplicated.

My Personal Critical Thoughts

From my experience developing software, a big mistake is the practice of storing the user's complete credit card information. It is not necessary: a system can store only part of the card number and ask the user to fill in the rest when needed. I would never encourage storing full card information on any computing resource.
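As a minimal sketch of this idea (the helper name is hypothetical, not from any Target system), an application could persist only a masked form of the card number:

```python
def mask_card_number(card_number: str) -> str:
    """Keep only the last four digits; replace the rest with '*'."""
    digits = card_number.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_card_number("4111 1111 1111 1234"))  # ************1234
```

Storing only the masked value means that even if the database leaks, the full card numbers are not exposed.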

Based on my findings, I conclude that the main reason the information was stolen is the use of obsolete technology, not only at Target Corporation but at all American companies that handle credit card operations.


Computer security cannot be guaranteed with the technology we have nowadays, especially when we consider networks, where data can be intercepted and you have no control over where the information is transferred and stored.

The Target Corporation case became very popular in the news, but this kind of security breach occurs frequently on the Internet, and a great deal of information is stolen every day that never makes the news.


CNN Money. 2013. Target: 40 million credit cards compromised. [ONLINE] Available at: [Accessed 03 August 14].

Target Corporation. 2014. Corporate Overview. [ONLINE] Available at: [Accessed 03 August 14].

CNN Money. 2013. Target credit card hack: What you need to know. [ONLINE] Available at: [Accessed 03 August 14].

Think Progress. 2013. Why Target’s Security Breach Was Bound To Happen. [ONLINE] Available at: [Accessed 03 August 14].

Summarizing a Little Bit Operating Systems

Since the early years of computing, operating systems have served as an interface between the human user and the machine. An operating system (OS) can be defined as "the software that controls the overall operation of a computer" (Brookshear, 2011).

Nowadays, users have plenty of operating systems to choose from: there are operating systems for end users, small businesses, medium and large businesses, mobile devices, servers/networks, and many others.

Operating systems have several characteristics that can be compared; this paper covers usage (desktop or server), ease of use (graphical or non-graphical) and cost.

The most popular desktop operating systems for end users nowadays are:

  • Microsoft Windows;
  • Apple Mac OS;
  • Linux.

For mobile, most users choose:

  • Apple iOS;
  • Google Android;
  • Windows Mobile.

And for server/network computers:

  • Linux;
  • FreeBSD/OpenBSD/NetBSD;
  • Oracle Solaris;
  • Windows Server.

To understand OSs better, it is necessary to know a little history. Several of them are based on the UNIX operating system: Apple Mac OS, Oracle Solaris and Linux (including Google Android and Apple iOS). UNIX was created in 1969 by system engineers at AT&T's Bell Labs (UNIXhelp, 2014), and it continues to be very influential on modern operating systems. Linux was initially focused on the server business, with only a small slice of the end-user market, but it has become more popular through the efforts of open-source collaborators and investing companies. Microsoft Windows, the most popular OS worldwide, was developed entirely by Microsoft Corporation and does not follow the UNIX concepts.

Ease of use is a very important point. Microsoft Windows became the most popular OS in the world because it has always been oriented around a GUI; Mac OS is as simple as Windows, but it was less popular because of Apple's business strategy of selling it only with its own computers.

The table below shows the big difference in prices between operating systems.

Operating System                                       Price
Windows 8.1 (Amazon, 2014)                             $95.91
Mac OS X version 10.6.3 Snow Leopard (Amazon, 2014)    $29.38
Ubuntu Linux (Amazon, 2014)                            $10.99 (free to download)

While Apple OS X is cheaper than Microsoft Windows, you must consider that Apple sells its own machines and its OS can be used only on Apple computers; Microsoft, on the other hand, only sells operating systems, but they can be used on any PC. Ubuntu is totally free to download, but if you want the original DVD you can buy it for a very low price at Amazon.

I have personally used almost all the operating systems covered in this paper. I became a Linux enthusiast in 1999, starting with Caldera OpenLinux and afterward using Slackware, Debian, Mandrake, Conectiva, Red Hat and others; I also tested NetBSD and OpenBSD. I have used Microsoft Windows since Windows 95 (going through Windows 98, Windows ME, Windows 2000, Windows XP, Windows Vista and Windows 7), and since 2012 I have been a Mac OS user. My cell phone runs Android 4, which I really like.


Glenn Brookshear, J, 2011. Computer Science: An Overview. 11th ed. United States of America: Addison-Wesley.

UNIXhelp. 2014. History of the UNIX operating system. [ONLINE] Available at: [Accessed 30 July 14].

Amazon. 2014. Windows 8.1 System Builder OEM DVD 64-Bit. [ONLINE] Available at: [Accessed 30 July 14].

Amazon. 2014. Mac OS X version 10.6.3 Snow Leopard. [ONLINE] Available at: [Accessed 30 July 14].

Amazon. 2014. Ubuntu Linux 13.04 Special Edition DVD. [ONLINE] Available at: [Accessed 30 July 14].

Queues and Algorithms in the Scheduling Context

Since the early years, computer scientists, searching for ways to build highly complex machines supporting science, math and business operations, developed the queue technique, which was and still is one of the most important topics in computer science. A "queue" can be defined as "a storage organization in which objects (in this case, jobs) are ordered in first-in, first-out (abbreviated FIFO) fashion" (Brookshear, 2011). Jobs, in this context, are the instructions (or programs) submitted to the operating system, each running a specific operation that controls both software and hardware.

The most common type of queue follows the FIFO concept: first in, first out. As an analogy, it can be compared to a real-world line: the first person to arrive is the first to leave. It is important to note that the FIFO structure is not always followed strictly; just as some people can have priority in a real-world line, some jobs can have higher priority. Besides the processing context, FIFO is widely used in memory management, file systems, algorithms and many other areas of computing.
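As a minimal sketch of the FIFO idea (job names are illustrative), Python's `collections.deque` behaves exactly this way:

```python
from collections import deque

queue = deque()          # jobs enter at the back of the line...
queue.append("job1")
queue.append("job2")
queue.append("job3")

first = queue.popleft()  # ...and leave from the front
print(first)             # job1 -- the first job in is the first out
```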

To enrich this research, two other scheduling algorithms will be introduced. First, the Decreasing-Time Algorithm (DTA) follows the idea of executing the longest jobs, in terms of time, first. It creates the priority list with the longest tasks at the top, followed by the shorter ones (Sousa, 2013).

To illustrate this idea, suppose you have the following list of processes (with one processor) to be executed, each defined by a name and a time to complete:

T1 (5), T4 (2), T2 (6), T3 (8)

By sorting with the Decreasing-Time Algorithm, the priority list output would be:

T3 (8), T2 (6), T1 (5), T4 (2).
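The sorting above can be sketched in a few lines of Python (task names and times taken from the example):

```python
# Decreasing-Time Algorithm: sort tasks by duration, longest first.
tasks = [("T1", 5), ("T4", 2), ("T2", 6), ("T3", 8)]
priority_list = sorted(tasks, key=lambda task: task[1], reverse=True)
print(priority_list)  # [('T3', 8), ('T2', 6), ('T1', 5), ('T4', 2)]
```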

The second is the Critical Path Algorithm (CPA), which is similar to the DTA but prioritizes the largest critical time first. To create the priority list using the CPA, the task with the largest critical time goes to the top, preceding the less critical ones; when critical times are equal, the tasks can appear in any order, with no distinction.
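A sketch of the CPA in the same style; the critical-time values below are invented for illustration, not taken from the sources:

```python
# Hypothetical critical times for the same four tasks.
critical_times = {"T1": 9, "T2": 14, "T3": 14, "T4": 6}

# Largest critical time first; the tie between T2 and T3 may resolve either way.
priority_list = sorted(critical_times, key=critical_times.get, reverse=True)
print(priority_list)
```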

I personally believe that every scheduling algorithm has its advantages and disadvantages, and the choice depends on what kind of solution it must offer. For instance, if you are developing software whose main objective is to dispatch e-mails, you could use the CPA, and it would no doubt satisfy your needs, considering that highly critical e-mails should be sent first. On the other hand, if you are developing a system in which the user waits for feedback on each operation, you could probably go for FIFO, in which the feedback is much more immediate, considering the arrival order.


Glenn Brookshear, J, 2011. Computer Science: An Overview. 11th ed. United States of America: Addison-Wesley.

James Sousa. (2013). Scheduling: The Decreasing Time Algorithm. [Online Video]. 16 September 2013. Available from: [Accessed: 27 July 2014].

MATH 103. 2001. Scheduling Algorithms. [ONLINE] Available at: [Accessed 27 July 14].

Computer Science: Storage and Non-Volatile Memory Issues and Trends

Computers do nothing without information, and information needs to be processed and stored, which makes high technology necessary to write data and keep it for later access. Technology nowadays offers many ways to store and handle data (magnetic disks and flash drives, for instance), but as systems generate large amounts of data, that data must be retrieved quickly, stored with reliable technology, be cost-effective, and be centralized, decentralized or even both, depending on the architecture strategy.

In computing, data that is stored and available to be retrieved at any time is said to reside in "non-volatile memory", a term that refers to memory which holds its content without power being applied (PcMagazine, 2014).

Data storage is, and will continue to be, a critical topic in computing, especially considering decentralized systems with huge amounts of data built on the concepts of cloud and big data. According to ZDNet (2014), as data usage continues to grow exponentially, IT managers will need to orchestrate multiple kinds of storage, including flash, hard disk and tape, in a way that optimizes capacity, performance, cost and power consumption.

One of the most notable storage technologies nowadays is flash memory. When you store data on your smartphone, digital camera or GPS, you are using this technology. Solid-state drives (SSDs) using flash memory are replacing hard drives in netbooks, PCs and even some server installations; needing no batteries or other power to retain data, flash is convenient and relatively foolproof (ComputerWorld, 2014). Technically speaking, flash memory is a specific type of EEPROM (an acronym for Electrically Erasable Programmable Read-Only Memory) that is programmed and erased in large blocks.

I personally use SSD storage on my MacBook, and my OS runs fast and much more quietly than when I used HDD storage on personal computers. I believe HDD technology will continue to be commercialized in the next years, especially because of its price; nevertheless, as SSD technology becomes cheaper, it may overtake HDDs in popularity.


PcMagazine. 2014. Definition of: non-volatile memory. [ONLINE] Available at: [Accessed 20 July 14].

ZDNet. 2014. Storage in 2014: An overview. [ONLINE] Available at: [Accessed 20 July 14].

ComputerWorld. 2014. Flash memory. [ONLINE] Available at: [Accessed 20 July 14].

Current Trends in Computing: Devices and Technologies to Support E-Learning

Considering the large growth of online learning systems worldwide, my subject this week is technology trends to support online learning, which I believe is one of the most important innovation topics of our times.

Electronic learning, or e-learning, is defined by Innovative Learning as a learning system supported by technological devices and delivered through the Internet. Nowadays we can have a friendly learning experience using electronic devices like the iPad, for instance, which allows a more interactive and smarter way to learn.

In numbers, e-learning is today a $56.2 billion industry, and it is going to double by 2015 (E-Learning Industry, 2014). As the statistics point only to further growth, what can be done to support online learning using technological resources and devices?

Mobile learning is one of the most important ways to support e-learning nowadays. Mobl21 defines it as "the ability to obtain or provide educational content on personal pocket devices such as PDAs, smartphones and mobile phones" (Mobl21, 2014). Devices like tablets and smartphones can be found anywhere at very good prices, making them accessible to all social classes, and we must consider that mobile devices will become cheaper and cheaper as technology advances in the next years.

Technically, cloud computing is a very important topic for mobile technologies, considering that most mobile systems are connected to a backend server that stores user data and makes it available on any device the user wants to use. As cloud technologies become cheaper and more accessible, it will be easier for educational companies to offer the high-accessibility solutions that only IT giants (such as Google and Apple) can offer nowadays.

Security in the mobile and cloud computing context is another very important concern. MOBIO, for instance, is a project whose concept is to develop new mobile services secured by biometric authentication (Mobio Project, 2014). Face and voice recognition will bring more security to mobile devices, considering that a malicious user could easily make use of a stolen or lost device.

I personally believe that in the next few years the clearest trends are the development of cloud and mobile technologies, with a high impact from falling prices and the adaptation of current architectures to the mobile and cloud context. Security and high-speed networks for mobile devices will support a successful transition to this new environment, allowing high-quality education to spread worldwide through e-learning.


E-learning overview? [Online]. Available from: (Accessed: July 13, 2014).

Top 10 e-Learning Statistics for 2014 You Need To Know [Online]. Available from: (Accessed: July 13, 2014).

Mobile Learning Basics [Online]. Available from: (Accessed: July 13, 2014).

MOBIO – Mobile Biometry Technology [Online]. Available from: (Accessed: July 13, 2014).

Academic Integrity in a Cultural Context

Intellectual property is defined by WIPO as "creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names and images used in commerce" (Switzerland, 2014). WIPO, an acronym for the World Intellectual Property Organization, is a self-funding agency with 187 member states whose main objective is to protect intellectual property worldwide. The WIPO Convention was signed at Stockholm on July 14, 1967, and amended on September 28, 1979; Brazil joined in 1975 (Switzerland, 2014).

In Brazil, since the constitution was established on October 5, 1988 (Brazil, 2014), there have been concerns about intellectual property, and people interested in protecting their creations (such as composers and writers) have led movements pressing the government. Currently, there are specific organizations (ECAD and INPI, for instance) responsible for registering patents and trademarks, music, books and other kinds of intellectual property. It is common for this kind of organization to manage royalties and the distribution of earnings.

In 2013, the Brazilian government approved a bill (Brazil, 2014) to change music-related intellectual property rules, giving the government involvement in private intellectual property. The Central Office of Copyright Collection (ECAD) and the Brazilian Union of Composers (UBC) intervened, claiming that intellectual property issues are private and need no government intervention, immediately interrupting the approval and taking it to judgment in 2014. Another instance of public intervention in intellectual property issues is the push for software patents in Brazil, which free software activists oppose, because software in Brazil is protected by copyright law (Lei 9279/96) (Brazil, 2014), not by patents; they consider that software patents would only restrict innovation and serve the interests of the dominant industry.

In the academic community, the most important public research bodies in Brazil are concerned about plagiarism, seeking to prevent and punish it among researchers financed by the government. In 2012, for the first time, a Scientific Academic Integrity Commission was established to take preventive actions regarding the integrity of research published in Brazil.

Personally, I believe citation and referencing should be used in every academic work, and plagiarism often appears when the author does not have command of the research subject. My final undergraduate paper was my first experience with referencing, and I believe that in this Master's program I will have the chance to reach a high level of academic writing. Moreover, I find the Harvard referencing system simple and objective compared to the Brazilian referencing styles I used in my final undergraduate paper.

At first sight, I believe the Turnitin system offered by the University of Liverpool is a very smart anti-plagiarism system, checking against a wide range of sources and publications, not only protecting the university but also helping students produce original and honest work.


What is Intellectual Property? [Online]. Available from: (Accessed: July 8, 2014).

Convention Establishing the World Intellectual Property Organization [Online]. Available from: (Accessed: July 8, 2014).

Brazilian Constitution [Online]. Available from: (Accessed: July 8, 2014).

Brazilian Law 2853 [Online]. Available from: (Accessed: July 8, 2014).

Brazilian Law 9279 [Online]. Available from: (Accessed: July 8, 2014).

SOLID (Object-Oriented Design)

After a long time without any publication, here I am to post something every software engineer should know: SOLID, an acronym for the Single responsibility, Open-closed, Liskov substitution, Interface segregation and Dependency inversion principles to follow when designing your classes with OO.

I've found a very good explanation of this and will post it below. As this text wasn't written by me, I also share the credits for the publication.


The SOLID principles are five dependency management principles for object-oriented programming and design. The SOLID acronym was introduced by Robert Cecil Martin, also known as "Uncle Bob". Each letter represents another three-letter acronym that describes one principle.

When working with software in which dependency management is handled badly, the code can become rigid, fragile and difficult to reuse. Rigid code is difficult to modify, either to change existing functionality or to add new features. Fragile code is susceptible to the introduction of bugs, particularly those that appear in one module when another area of the code is changed. If you follow the SOLID principles, you can produce code that is more flexible and robust, and that has a higher possibility of reuse.

Single Responsibility Principle

The Single Responsibility Principle (SRP) states that there should never be more than one reason for a class to change. This means that you should design your classes so that each has a single purpose. This does not mean that each class should have only one method but that all of the members in the class are related to the class’s primary function. Where a class has multiple responsibilities, these should be separated into new classes.

When a class has multiple responsibilities, the likelihood that it will need to be changed increases. Each time a class is modified the risk of introducing bugs grows. By concentrating on a single responsibility, this risk is limited.
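A minimal Python sketch of the SRP (class names are hypothetical): a report class that also saved itself to disk would have two reasons to change, so the two responsibilities are split into separate classes:

```python
class Report:
    """Responsible only for building the report's content."""
    def __init__(self, title: str, body: str):
        self.title = title
        self.body = body

    def render(self) -> str:
        return f"{self.title}\n{self.body}"


class ReportSaver:
    """Responsible only for persisting a rendered report."""
    def save(self, report: Report, path: str) -> None:
        with open(path, "w") as f:
            f.write(report.render())
```

Changing the storage format now touches only `ReportSaver`; changing the report layout touches only `Report`.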

Open / Closed Principle

The Open / Closed Principle (OCP) specifies that software entities (classes, modules, functions, etc.) should be open for extension but closed for modification. The “closed” part of the rule states that once a module has been developed and tested, the code should only be adjusted to correct bugs. The “open” part says that you should be able to extend the existing code in order to introduce new functionality.

As with the SRP, this principle reduces the risk of new errors being introduced by limiting changes to existing code.
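A minimal sketch of the OCP in Python (names are illustrative): `total_area` is closed for modification, while the `Shape` hierarchy stays open for extension:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):
    def __init__(self, width: float, height: float):
        self.width, self.height = width, height

    def area(self) -> float:
        return self.width * self.height

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return 3.14159 * self.radius ** 2

def total_area(shapes) -> float:
    # Closed for modification: adding a new Shape subclass needs no change here.
    return sum(shape.area() for shape in shapes)
```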

Liskov Substitution Principle (LSP)

The Liskov Substitution Principle (LSP) states that “functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it”. When working with languages such as C#, this equates to “code that uses a base class must be able to substitute a subclass without knowing it”. The principle is named after Barbara Liskov.

If you create a class with a dependency of a given type, you should be able to provide an object of that type or any of its subclasses without introducing unexpected results and without the dependent class knowing the actual type of the provided dependency. If the type of the dependency must be checked so that behavior can be modified according to type, or if subtypes generate unexpected results or side effects, the code becomes more complex, rigid and fragile.
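A small Python sketch of the LSP, using the classic bird example (the hierarchy is illustrative, not from the quoted text): a penguin that raised an error on `fly()` would break substitution, so the hierarchy separates flying birds from non-flying ones:

```python
class Bird:
    def eat(self) -> str:
        return "eating"

class FlyingBird(Bird):
    def fly(self) -> str:
        return "flying"

class Sparrow(FlyingBird):
    pass

class Penguin(Bird):     # never claims to fly, so nothing breaks
    pass

def let_it_fly(bird: FlyingBird) -> str:
    return bird.fly()    # safe for every FlyingBird subclass
```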

Interface Segregation Principle (ISP)

The Interface Segregation Principle (ISP) specifies that clients should not be forced to depend upon interfaces that they do not use. This rule means that when one class depends upon another, the number of members in the interface that is visible to the dependent class should be minimized.

Often when you create a class with a large number of methods and properties, the class is used by other types that only require access to one or two members. The classes are more tightly coupled as the number of members they are aware of grows. When you follow the ISP, large classes implement multiple smaller interfaces that group functions according to their usage. The dependents are linked to these for looser coupling, increasing robustness, flexibility and the possibility of reuse.
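A small Python sketch of the ISP (interface names are hypothetical): instead of one fat "machine" interface, clients implement and depend on only the small interfaces they actually use:

```python
from abc import ABC, abstractmethod

class Printer(ABC):
    @abstractmethod
    def print_doc(self, doc: str) -> str: ...

class Scanner(ABC):
    @abstractmethod
    def scan(self) -> str: ...

class SimplePrinter(Printer):
    """Implements only Printer -- no unused scan() forced on it."""
    def print_doc(self, doc: str) -> str:
        return f"printing {doc}"

class MultiFunctionDevice(Printer, Scanner):
    """Composes the two small interfaces instead of one fat one."""
    def print_doc(self, doc: str) -> str:
        return f"printing {doc}"

    def scan(self) -> str:
        return "scanned page"
```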

Dependency Inversion Principle (DIP)

The Dependency Inversion Principle (DIP) is the last of the five rules. The DIP makes two statements. The first is that high-level modules should not depend upon low-level modules. Both should depend upon abstractions. The second part of the rule is that abstractions should not depend upon details. Details should depend upon abstractions.

The DIP primarily relates to the concept of layering within applications, where lower level modules deal with very detailed functions and higher level modules use lower level classes to achieve larger tasks. The principle specifies that where dependencies exist between classes, they should be defined using abstractions, such as interfaces, rather than by referencing classes directly. This reduces fragility caused by changes in low-level modules introducing bugs in the higher layers. The DIP is often met with the use of dependency injection.
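A minimal Python sketch of the DIP with dependency injection (names are illustrative): the high-level `Notifier` depends on the `MessageSender` abstraction, and the concrete low-level sender is injected rather than referenced directly:

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):
    @abstractmethod
    def send(self, text: str) -> str: ...

class EmailSender(MessageSender):
    """Low-level detail: how a message is actually delivered."""
    def send(self, text: str) -> str:
        return f"email: {text}"

class Notifier:
    """High-level module: depends only on the abstraction."""
    def __init__(self, sender: MessageSender):
        self.sender = sender          # injected dependency

    def notify(self, text: str) -> str:
        return self.sender.send(text)

print(Notifier(EmailSender()).notify("hello"))  # email: hello
```

Swapping e-mail for SMS means adding a new `MessageSender` subclass; `Notifier` never changes.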

Credits: