Post #6

Toward the end of our discussion about the Strategy design pattern, we briefly talked about the open/closed principle; I wanted to further my understanding of this concept, so I decided to do some research of my own.  Today, I will summarize an article by Swedish systems architect Joel Abrahamsson entitled “A simple example of the Open/Closed Principle”.

Abrahamsson begins the article by summarizing the open/closed principle as the object-oriented design principle that software entities should be open for extension, but closed for modification.  In other words, programmers should write code that doesn’t need to be modified every time the program’s specifications change.  He then explains that, when programming in Java, this principle is most often adhered to through inheritance and polymorphism.  We followed this principle in our first assignment of the class, when we refactored the original DuckSimulator program to use the Strategy design pattern.  We realized, in our in-class discussion of the DuckSimulator, that adding behaviors to Ducks would force us to update the implementation of the main class as well as each Duck subclass.  By moving each behavior into its own class implementing a common interface – and then applying those behaviors to Ducks through “setters” – we opened the program for extension and left it closed for modification (a rough Java sketch of that refactoring appears after Abrahamsson’s examples below).

Abrahamsson then gives his own example of how the open/closed principle can improve a program that calculates the area of shapes.  The idea is that, if the open/closed principle is not adhered to, a program like this is susceptible to rapid growth as functionality is added to calculate the area of more and more shapes.

(Note: Abrahamsson’s examples are clearly not Java implementations – they are written in C#.)

public double Area(object[] shapes)
{
    double area = 0;
    foreach (var shape in shapes)
    {
        if (shape is Rectangle)
        {
            Rectangle rectangle = (Rectangle) shape;
            area += rectangle.Width*rectangle.Height;
        }
        else
        {
            Circle circle = (Circle)shape;
            area += circle.Radius * circle.Radius * Math.PI;
        }
    }

    return area;
}

( Abrahamsson’s implementation of an area calculator that does not adhere to the open/closed principle. )


public abstract class Shape
{
    public abstract double Area();
}
public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
    public override double Area()
    {
        return Width*Height;
    }
}
public class Circle : Shape
{
    public double Radius { get; set; }
    public override double Area()
    {
        return Radius*Radius*Math.PI;
    }
}
public double Area(Shape[] shapes)
{
    double area = 0;
    foreach (var shape in shapes)
    {
        area += shape.Area();
    }

    return area;
}

( Abrahamsson’s implementation of an area calculator that adheres to the open/closed principle. )
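
To connect this back to the DuckSimulator refactoring mentioned above, here is a rough Java sketch of my own (the class and method names are illustrative, not the exact ones from the assignment) showing how the behavior-interface-plus-setter approach keeps the program open for extension:

interface QuackBehavior {
    void quack();
}

class LoudQuack implements QuackBehavior {
    public void quack() { System.out.println("QUACK!"); }
}

class Squeak implements QuackBehavior {
    public void quack() { System.out.println("Squeak."); }
}

class Duck {
    private QuackBehavior quackBehavior;

    // The behavior is supplied through a setter, so it can be swapped at runtime.
    public void setQuackBehavior(QuackBehavior quackBehavior) {
        this.quackBehavior = quackBehavior;
    }

    public void performQuack() {
        quackBehavior.quack();
    }
}

public class DuckSimulator {
    public static void main(String[] args) {
        Duck duck = new Duck();
        duck.setQuackBehavior(new LoudQuack());
        duck.performQuack();                     // QUACK!
        duck.setQuackBehavior(new Squeak());
        duck.performQuack();                     // Squeak.
    }
}

Adding a new quacking behavior now means writing one new class that implements QuackBehavior; neither Duck nor DuckSimulator has to change, which is exactly the “closed for modification” half of the principle.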

Abrahamsson ends the article by sharing his thoughts on when the open/closed principle should be applied.  He believes that the primary focus of any good programmer should be to write code well enough that it doesn’t need to be repeatedly modified as the program grows.  At the same time, he says that the context of each situation should be considered, because applying the open/closed principle where it isn’t needed can lead to an overly complex design.  I have always suspected that it is good practice to write code that is prepared for the program’s requirements to change, and this principle confirmed that idea.  From this point forward, I will take the open/closed principle into consideration when tackling new projects.

Post #5

Today I will be summarizing and providing commentary on a blog post from The Developer’s Piece called “8 design patterns that every developer should know”.  I was immediately attracted to the title, and I was even more pleased when I actually read the article; it not only provides clear and concise descriptions of each pattern, it also provides code examples of each pattern in practice.  Above all, I chose this article as the subject of this week’s blog post because it covers patterns that we have covered, or will cover, in class.  As I said in my last post, nothing is better than having a thorough understanding of why you are covering a piece of material in school.

The article begins with a brief introduction to why developers use design patterns and the problems that arise when effective patterns aren’t employed.  It emphasizes that many of the problems that come up in development have already been solved by other developers; this fact gave rise to design patterns – reusable solutions to be followed during development that help developers avoid common problems.  The author then provides three reasons that support his argument that design patterns are important:

  • Less time is spent resolving problems overall, because many of the problems that could have occurred were avoided from the start.
  • Design patterns are well known, so using the terminology associated with them helps when explaining and summarizing complex ideas, e.g. “I used a factory pattern to create the object”.
  • Design patterns have been refined to be easy to understand and employ, so a solution you improvise yourself will likely be less effective than an established pattern.

The author then explains that there is no “one-size-fits-all” with regard to design patterns: you need to adapt a pattern to your specific problem if you wish to employ it effectively.  The article then goes on to list the design patterns the author considers most important to know.  Each entry in the list contains a description as well as a chunk of code demonstrating the pattern in practice.  The list is as follows:

  • Singleton – The most used pattern; used when an object only needs to be instantiated once.
  • Initialization-on-Demand Holder – A thread-safe variation of the Singleton pattern; the instance is not initialized until its getInstance() method is called (see the sketch after this list).
  • Strategy and Factory – Two well-known patterns that are incredibly useful, especially when used in tandem; used to create objects from a given qualifier.
  • Fluent Builder – Used when objects require a lot of parameters to be passed in upon creation.
  • Chain of Responsibility – Used when applications require a lot of business logic; high complexity makes for unmanageable code, so this pattern breaks the logic into pieces organized as sequential steps.
  • Template Method – Used to define the skeleton of an operation within a method; based on polymorphism; used when you have a common method call with different operations to be performed.
  • State – Used when an object has “states”; allows you to define rules for final states and for states that require a previous state in order to execute.
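
Since the Initialization-on-Demand Holder idiom was new to me, here is a minimal Java sketch of how I understand it (my own example with a made-up class name, not the article’s code).  The nested holder class is not loaded until getInstance() is called, so the instance is created lazily, and class loading guarantees thread safety without explicit locking:

public class ConfigurationManager {

    private ConfigurationManager() {
        // Private constructor: no outside instantiation.
    }

    // The JVM does not load the nested holder class until getInstance()
    // references it for the first time, so the single instance is created
    // lazily and without any explicit synchronization.
    private static class Holder {
        static final ConfigurationManager INSTANCE = new ConfigurationManager();
    }

    public static ConfigurationManager getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Both calls return the same object, so this prints "true".
        System.out.println(getInstance() == getInstance());
    }
}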

I took a lot away from this article – I now have some insight into, and background on, what is to come in the course, and I feel that I have gained a substantial amount of knowledge about effective design patterns that I can use in future interviews and career opportunities.  I will likely reference this article, and the examples within it, in the future when I am trying to resolve development problems of my own.

Post #4

As we head into our discussion of decision-table-based testing, I thought it would be a good idea to get some insight into what it’s all about and its relevance in the field of software testing.  Interestingly enough, I recently found an article about it on the International Software Testing Qualifications Board (ISTQB) website.  To earn ISTQB certification, one must be knowledgeable in the use of decision-table-based testing techniques.  Today, I will summarize and provide commentary on their article.  As I head toward graduation, I am always pleased to learn that the material being covered in class is relevant and useful in the industry.

The article begins by stating that equivalence partitioning and boundary value analysis work well for specific situations or sets of input values, but they struggle when different combinations of input values – each requiring a different action – need to be tested, because they are more geared toward the user interface.  Decision-table-based testing, however, can handle these different combinations of inputs because it is designed to focus on business logic and business rules.  Decision tables are often referred to as “cause-effect” tables because a diagramming technique called “cause-effect graphing” is sometimes employed to derive them.

The purpose of a decision-table is:

  • To provide a systematic way of stating business rules.
  • To serve as a basis for test design, allowing testers to explore the effects of combinations of different inputs and of software states that implement specific rules.
  • To strengthen the relationship between developers and testers.
  • To improve developers’ techniques.

The article goes on to explain how to use decision tables in test design.  It says that the first step is to find a function or subsystem that reacts according to a combination of inputs.  The subsystem should only involve a limited number of inputs, because having too many quickly becomes unmanageable.  Once a suitable subsystem is chosen, it is transformed into a table that lists all combinations of True and False for each of the conditions involved.  The article then provides two examples of situations where decision-table-based test design can be handy: a loan application with variable payment plans and a credit card application with a variable discount percentage.  The article finishes with a bit of advice: don’t assume that all combinations of inputs need to be tested, because it is more practical to prioritize and test the important ones.
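
To make the technique concrete for myself, here is a small, hypothetical example of my own (not one of the article’s two examples): a simplified loan-approval rule with two conditions.  Each column of the table becomes one test case:

// Hypothetical decision table – two conditions, four rules:
//
//                          R1   R2   R3   R4
// Good credit history       T    T    F    F
// Income above threshold    T    F    T    F
// -------------------------------------------
// Approve loan               Y    N    N    N
// Offer co-signer option     N    Y    Y    N
// Reject                     N    N    N    Y
public class LoanDecision {

    static String decide(boolean goodCredit, boolean sufficientIncome) {
        if (goodCredit && sufficientIncome) {
            return "approve";
        }
        if (goodCredit || sufficientIncome) {
            return "offer co-signer option";
        }
        return "reject";
    }

    // Run with `java -ea LoanDecision` so the assertions are enabled.
    public static void main(String[] args) {
        assert decide(true, true).equals("approve");                  // rule R1
        assert decide(true, false).equals("offer co-signer option");  // rule R2
        assert decide(false, true).equals("offer co-signer option");  // rule R3
        assert decide(false, false).equals("reject");                 // rule R4
        System.out.println("All four rules behave as specified.");
    }
}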

This article provided me with some background on what decision-table-based testing is and showed me relevant, real-world examples of how it is employed in the field today.  I feel better prepared for the discussion coming later this week.  I also think this article improved my ability to prioritize key input values, which will contribute to my overall testing ability in the long run.

Post #3

Today, I am going to review and summarize Episode 283 of Software Engineering Radio.  This episode’s guest was Alexander Tarlinder, author of “Developer Testing: Building Quality into Software”.  The topics covered in this episode are quite relevant to my other posts, and one of those topics is specification-based testing, which we have discussed in class.  I also selected this episode as the subject of this post because Alexander provides tips on how to test software effectively, both as a developer and as a tester.  The episode begins with Alexander explaining why he wrote his book – he felt there were gaps in the software testing literature that needed to be bridged.  He claims that much of the existing literature on software testing focuses too heavily on documentation, dwells on a specific tool, or leaves out crucial information on what to include in your test cases, which can be hard for developers to relate to.

He defines general software testing as a process used to verify that software works, validate its functionality, detect problems within it, and prevent problems from afflicting it in the future.  He defines developer testing similarly, but with the caveat that developer testing tackles the hard cases more systematically rather than rigorously covering everything; this is a result of developers being knowledgeable about the programming techniques used and the inherent bias that accompanies that knowledge.  Alexander argues that this bias makes additional testing by unbiased testers necessary, but he insists that it is still important for developers to perform their own testing.  Testing during development ensures that at least a portion of the testers’ workload becomes more of a “double-check”, allowing them to focus on the rigorous and unexpected cases that developers might overlook.

Alexander then summarizes the main points of the discussion and provides tips on how to improve software testing.  The conversation strays a bit at times, so here is my summary of what I took from it:

  • Encourage developer testing – Developer testing is necessary both to ensure that the finished product is functional and of high quality and to free testers to work rigorously, which helps detect and prevent potential problems.
  • Adopt an Agile development strategy – Agile development allows for rapid delivery of the product and forces developers to adhere to effective working practices.
  • Write code that is testable – Consider a Design by Contract (DbC) approach, where the caller and the function being called programmatically enter into a “contract” with one another.
  • Learn various techniques – Specification-based testing is a fundamental technique to learn, but it can sometimes lead to an approach that is too focused on where the data is partitioned, neglecting the in-between cases (see the small sketch after this list).
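
To illustrate that last point, here is a tiny, hypothetical example of my own (not from the episode): a made-up shipping-cost function with three weight partitions.  Partition-focused testing naturally pushes you toward the boundaries, and it is easy to forget the values in the middle of each partition:

public class ShippingCost {

    // Hypothetical pricing rule with three weight partitions.
    static double cost(double kilograms) {
        if (kilograms <= 0) {
            throw new IllegalArgumentException("weight must be positive");
        }
        if (kilograms <= 1.0) {
            return 3.00;   // partition 1: small parcel
        }
        if (kilograms <= 5.0) {
            return 7.50;   // partition 2: medium parcel
        }
        return 15.00;      // partition 3: large parcel
    }

    public static void main(String[] args) {
        // Boundary-focused cases, which specification-based testing emphasizes:
        System.out.println(cost(1.0) + " " + cost(1.01) + " " + cost(5.0) + " " + cost(5.01));
        // "In-between" cases from the middle of each partition, which are just
        // as important but easier to overlook:
        System.out.println(cost(0.5) + " " + cost(3.0) + " " + cost(20.0));
    }
}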

I think that this discussion contributed to my knowledge of effective development and testing practices which will help me a great deal when it comes time to implement them in the field (or an interview).


Post #2

I’ve been a longtime listener of podcasts, but I’ve never really put any effort into finding one that was relevant to software development.  Until now, I never considered the convenience and benefits of being able to learn about my field of study through my headphones while I perform other tasks.  In particular, I have been binge-listening to Software Engineering Radio because it is technical, topical, and informative while remaining pleasant to listen to.  Today, I am going to review and give commentary on Episode 51 of the show.

The topic of this episode is the Design by Contract (DbC) software correctness methodology.  One of the hosts of the show, Arno, defines DbC as a way of thinking about and designing interfaces so that they carry “contracts”.  The contract is a metaphor; the idea is that you, as the caller, must meet the preconditions expected by the method, and the method must meet the postconditions expected by the caller.  Arno gives the example of a square-root function:

– As the caller, you must meet the precondition of passing in a value that is zero or greater, because you cannot take the square root of a negative number (within the real numbers).
– As the method, you must return a value that, when multiplied by itself, equals the value passed in; this is the postcondition the caller expects.

Arno goes on to explain that, programmatically, these preconditions and postconditions are boolean expressions listed in each method that must evaluate to true in order for the method to perform its function.  One benefit of designing software this way is that you provide a more precise specification of what a method does.  Another benefit is that it prevents incorrect information from being displayed to the user; DbC evangelists believe that it is worse for the user to receive incorrect output than to receive no output at all.  There are downsides to DbC as well, Arno claims.  He states that DbC runs into problems when implemented in a polymorphic system – subtypes of a supertype inherit all of the contracts agreed upon by that supertype.  Arno advises that if you wish to begin designing your code in this way, the first step is simply to adopt a contract-oriented mindset.  DbC tooling is not native to many languages, so if you wish to design this way in a language like Java, you must acquire tools from a third party or start off by using assertions.  Arno insists that assertions alone are not enough to truly implement DbC, because you have to manually maintain the checks on postconditions, which will often change based on input.
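
As a starting point of the kind Arno describes, here is a minimal sketch of my own (using plain Java assertions, not code from the episode) of what contract-style checks for the square-root example might look like:

public class SqrtContract {

    static double sqrt(double x) {
        // Precondition the caller must satisfy: the argument is non-negative.
        assert x >= 0.0 : "precondition violated: x must be >= 0, was " + x;

        double result = Math.sqrt(x);

        // Postcondition the method promises: the result multiplied by itself
        // is (approximately) equal to the input, within floating-point tolerance.
        assert Math.abs(result * result - x) <= 1e-9 * Math.max(1.0, x)
                : "postcondition violated for x = " + x;

        return result;
    }

    public static void main(String[] args) {
        // Run with `java -ea SqrtContract` so the assertions are enabled.
        System.out.println(sqrt(16.0));  // 4.0
        System.out.println(sqrt(-1.0));  // trips the precondition check
    }
}

Even in this tiny example you can see Arno’s caveat: the postcondition check has to be written and maintained by hand, which is exactly what dedicated DbC tooling is meant to handle for you.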

I selected this episode to be the subject of a blog post because I think it prescribes a good method for designing high-quality, sustainable software.  By preventing incorrect input and output at the developer level, you reduce the workload of testers, allowing them to perform more rigorous and extensive testing.  I have recently been doing a lot of research on various software development and design paradigms, and one that I have dived into extensively is Agile.  I feel that DbC fits well into an Agile approach to development because it has developers testing as they work toward completion, which reduces the work left at the end of a sprint and generally produces a more sustainable, higher-quality product.  In my next blog post, I will elaborate further on developer testing and how it contributes to the production of high-quality software.


Post #1

This week, I am going to review and summarize Episode 262 of Software Engineering Radio.  This episode’s guest is Bill Curtis, Chief Scientist of CAST Software and Director for the Consortium for IT Software Quality.  Bill is on the show to discuss what software quality is and how architecture, Lean management, and following in big tech companies’ footsteps can help organizations achieve higher quality software.

Bill begins the podcast by defining high-quality software as software that not only meets the specified functional requirements but also meets the needs of the user.  He insists that, in order to develop high-quality software, the architecture of the system must be built properly so that it can continue to support the system as it grows.  To support this claim, Bill gives an example of a system that failed at launch due to a poorly built architecture – the Obamacare website.  According to Bill, the system’s architects built it in such a way that users downloaded more documentation than they probably needed, causing rapid consumption of the website’s bandwidth.  He believes that the Obama Administration was partly to blame for the failure because it requested changes and additions late in development, which left only two weeks for testing prior to launch.  I was intrigued by this story and chose this podcast as the subject of this week’s blog post because it shows how crucial rigorous testing is to the production of high-quality software.

Bill goes on to list his tips for improving software quality:
– Look at the architecture up front; it is incredibly difficult to refactor it in large systems.
– The most effective software engineering methodology to implement is one that combines Waterfall and Agile; building and testing for rapid feedback leads to structural quality.
– Strengthen management; implement a system similar to CMM or CMMI, which transforms the organization into one that can effectively build big systems – dividing labor reasonably, giving developers and testers appropriate deadlines, getting control of commitments and baselines, and weeding out practices that aren’t effective: stabilize, standardize, optimize, innovate.
– Follow rigorous processes for testing and measuring; this allows organizations to know what they can and can’t do, why they make mistakes, and raises quality and improvement levels.

I thoroughly enjoyed the discussion in this episode and felt that it was really pertinent to the work I have been doing lately.  I gained a professional understanding of what it means for software to be of high quality, learned the importance of rigorous testing and measurement throughout the development process, heard examples of how failing to follow process can lead to failure, and received tips on how to improve the quality of software architecture, organization, and management.
