Post #16

I found an article on SoftwareTestingHelp.com entitled “Web Testing: A Complete Guide About Testing Web Applications” and I thought it would be relevant to review and summarize it in a blog post.  I have recently been working on my first true web application, and realized that I may get to a point in its development where I want to conduct tests on it.  During my search for resources on the subject, I found many articles that covered a good portion of web application testing, but this guide was the most comprehensive and helpful, in my opinion.  Having read it, I feel that I will be able to conduct more complete and thorough tests of our application, if necessary.

This guide covers the 6 major web testing methods:

  • Functionality Testing
  • Usability Testing
  • Interface Testing
  • Compatibility Testing
  • Performance Testing
  • Security Testing

Functionality Testing
Testing links, forms, cookies, browser optimization, databases, etc.

  • Links – Test external and internal links and links used to send emails/messages from users; check for unlinked pages, broken links, etc. (a minimal sketch of an automated broken-link check follows this list)
  • Forms – Test for validations on each field, default values, wrong inputs, options to create/view/delete/modify forms, etc.
  • Cookies – Test enabled/disabled options, encryption, whether session cookies still store session information after the session ends, deletion of cookies, etc.
  • Browser Optimization – Test for syntax errors, crawlability, etc.
  • Databases – Test for data integrity, errors while editing/deleting/modifying data, correct execution of queries, etc.
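To get a feel for how the broken-link check might be automated, here is a minimal sketch of my own in Java, using only the standard library; the URLs are placeholders rather than anything from the guide:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.List;

    public class LinkChecker {

        // Returns true if the link responds with a non-error status code.
        static boolean isLinkAlive(String address) {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(address).openConnection();
                conn.setRequestMethod("HEAD");    // headers only; no body needed
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                return conn.getResponseCode() < 400;   // 4xx/5xx count as broken
            } catch (Exception e) {
                return false;   // malformed URL, unreachable host, timeout, ...
            }
        }

        public static void main(String[] args) {
            // Hypothetical internal and external links to verify.
            List<String> links = List.of(
                    "https://example.com/",
                    "https://example.com/contact",
                    "https://example.com/no-such-page");
            for (String link : links) {
                System.out.println((isLinkAlive(link) ? "OK     " : "BROKEN ") + link);
            }
        }
    }

A real suite would feed this a crawled list of the site’s links rather than a hard-coded one, but the status-code check is the core of it.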

Usability Testing
Testing navigation, content, user assistance, etc.

  • Navigation – Test for ease of use, consistency, etc.
  • Content – Check for spelling errors, broken in-text links, proper format of text and images, etc.
  • User assistance – Check all links on the sitemap, test search options and help files, etc.

Interface Testing
Testing the application server and database server interfaces.

  • Check that interactions between the servers are executed properly, what happens if a user interrupts one of these interactions, what happens if a connection to the server is reset during an interaction, etc. (a minimal sketch of the last case appears below)
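Here is a minimal JUnit 5 sketch of my own for that last case; the stalled endpoint is hypothetical, standing in for a test double of the application server that deliberately never responds:

    import static org.junit.jupiter.api.Assertions.assertThrows;

    import java.net.HttpURLConnection;
    import java.net.SocketTimeoutException;
    import java.net.URL;
    import org.junit.jupiter.api.Test;

    class ServerInteractionTest {

        // If the application server stalls mid-interaction, the caller
        // should see a timeout rather than hang forever.
        @Test
        void readTimesOutWhenTheServerStallsMidInteraction() {
            assertThrows(SocketTimeoutException.class, () -> {
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://localhost:8080/slow-endpoint").openConnection();
                conn.setConnectTimeout(1000);
                conn.setReadTimeout(1000);     // give up after one second
                conn.getInputStream().read();  // should throw, not block
            });
        }
    }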

Compatibility Testing

  • Testing the compatibility of the application with different browsers, operating systems, mobile operating systems, and printers (a minimal cross-browser sketch follows).
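One common way to automate checks like these is a browser-automation tool such as Selenium WebDriver.  Here is a minimal cross-browser smoke-test sketch of my own (the URL is a placeholder, and each browser needs its driver binary installed):

    import java.util.List;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class CrossBrowserSmokeTest {

        public static void main(String[] args) {
            // One WebDriver per browser under test.
            List<WebDriver> drivers = List.of(new ChromeDriver(), new FirefoxDriver());
            for (WebDriver driver : drivers) {
                try {
                    driver.get("https://example.com/");   // placeholder URL
                    // The same smoke check runs in every browser; a fuller
                    // suite would also compare layout and behavior.
                    System.out.println(driver.getClass().getSimpleName()
                            + " rendered page titled: " + driver.getTitle());
                } finally {
                    driver.quit();   // always close the browser
                }
            }
        }
    }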

Performance Testing

  • Testing the performance of the application on different internet connection speeds and under different amounts of traffic (a simple load-test sketch follows).
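As a rough illustration, here is a tiny load-test sketch of my own that simulates concurrent users and reports average latency; the URL and user count are placeholders, and a real performance test would use a dedicated tool such as JMeter:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class SimpleLoadTest {

        // Issues one request and returns how long it took, in milliseconds.
        static long timeOneRequest(String address) throws Exception {
            long start = System.nanoTime();
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(address).openConnection();
            conn.getResponseCode();   // block until the server answers
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws Exception {
            String target = "https://example.com/";   // placeholder URL
            int users = 20;                           // simulated concurrent users
            ExecutorService pool = Executors.newFixedThreadPool(users);

            List<Future<Long>> results = new ArrayList<>();
            for (int i = 0; i < users; i++) {
                results.add(pool.submit(() -> timeOneRequest(target)));
            }
            long totalMillis = 0;
            for (Future<Long> r : results) {
                totalMillis += r.get();
            }
            pool.shutdown();
            System.out.println("Average latency under load: "
                    + (totalMillis / users) + " ms");
        }
    }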

Security Testing
Testing the web security of the application.

  • Test that internal pages cannot be opened without being logged in, the reaction to invalid login inputs, and the effectiveness of CAPTCHAs; all transactions, errors, and security-breach attempts should get logged in log files somewhere on the web server (a minimal sketch of the first of these checks follows).
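For the first of those checks, here is a minimal JUnit 5 sketch of my own; the /account page is hypothetical:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.junit.jupiter.api.Test;

    class AuthenticationTest {

        // A hypothetical internal page that must require a login.
        private static final String PROTECTED_PAGE = "http://localhost:8080/account";

        @Test
        void internalPageIsNotServedWithoutLogin() throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(PROTECTED_PAGE).openConnection();
            conn.setInstanceFollowRedirects(false);   // inspect the raw status
            int status = conn.getResponseCode();
            // Expect a redirect to the login page (302) or an explicit
            // 401/403 denial, never a 200 with the page content.
            assertTrue(status == 302 || status == 401 || status == 403,
                    "unauthenticated request to a protected page returned " + status);
        }
    }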

Post #14

As a follow-up to my previous post about our upcoming group code review, I thought I would review another article I found that compares group code reviews to peer code reviews.  “Peer Review vs Group Review: Which Is Best For A Code Review?”, written by Stephen Ritchie, provides an answer to this question as it is often asked by clients in the field.  Ritchie begins by stating that both methods have advantages over the other, depending on the development team’s particular circumstances.

Ritchie defines a group code review as a code review in which a portion of the development team comes together to review code while the rest of the team provides feedback for improvement.  He believes that group code reviews engage the individual developer, the team, and the team’s leaders in ways that evoke experience and judgment to find and fix problems.  As I mentioned in the previous post, Ritchie also believes that effective group code reviews let tools perform the fundamental review and let the developers’ review serve as a spotlight on parts of the code that do not follow best practices and designs.  Ritchie feels that discussing code as a group serves as an excellent transfer of knowledge from the more experienced to the less experienced.  If a group code review is conducted properly, it opens up a healthy dialog between developers about the chosen conventions and why adherence to them is important.  The main advantage of group code reviews is that they allow a group of developers to learn and grow together, improving relationships and cultivating the transfer of knowledge and experience.  The main disadvantage is that it can be challenging to get all group members up to the same working pace.  Group code reviews can suffer if group members become diverted or distracted, which can potentially lead to a worsening of relationships among developers.

Ritchie defines a peer code review as a code review in which one developer’s code is reviewed by another developer.  A pair of developers meet, and one reviews the code of the other and provides constructive feedback.  Peer code reviews are similar to group code reviews, but their personal nature generally ensures a thorough and properly done review.  The main advantage of peer code reviews is that pairs of developers who conduct them have likely collaborated and formed a bond in the past, which allows issues to be better handled and addressed immediately.  This is also somewhat of a disadvantage, in that it necessitates a decent relationship with the person you are reviewing with if you wish to conduct the review effectively.  A bad relationship between the reviewer and the developer being reviewed can result in a lack of collaboration and dialog and, inevitably, a poor review overall.

Ritchie believes that, for teams that have many developers with years of experience, it is more efficient to conduct peer code reviews.  This is because the time from identification of an issue to its resolution is much shorter, due to the personal relationship between peers.  Ritchie prefers peer code reviews to group code reviews because they quickly improve the developers’ skills while simultaneously making progress on the implementation.

I agree that peer code reviews are more personal and allow for faster feedback and resolution to issues but I also appreciate the fact that group code reviews bring many resolution strategies and points of view to the table.  For the small projects I have worked on, I have always found peer review to be more effective – but I think that, because larger projects generally require more developers, a group code review is necessary to get all the developers on the same page.  I think that the code we are reviewing could be done by a pair of developers, but I am looking forward to seeing if the group review will provide better solutions and insight.


Post #12

Last class, we were divided into teams to begin working on the software technical code review.  I thought it would be useful to look up what kinds of practices are the most effective, with regard to reviewing code.  I found a post by Gareth Wilson entitled “Effective Code Reviews – 9 Tips from a Converted Skeptic”, which explains the intuition behind the process of conducting code reviews and then provides a list of nine tips to help readers get started with their own reviews.  I hope that this article will help me be more effective in my contribution to the team’s overall review.

Wilson begins the post by listing the benefits of conducting code reviews: they help you find bugs, ensure readability and sustainability of code, spread understanding of the implementation throughout the team, help new teammates get up to speed with the team’s process, and expose each member of the team to different approaches.  He explains that many entry-level software engineers don’t appreciate the value of code reviews at first, and that he was no different until he realized that the code reviews he was a part of were being conducted poorly.  After gaining some experience and developing new working strategies at different companies, Wilson now feels confident enough to provide advice on how to conduct code reviews effectively.  Among his nine tips for performing effective code reviews are:

  • Review the right things, let tools do the rest – IDEs and developer tools are in place to not only help you troubleshoot problems and assure quality, but also to catch the small style and syntax errors that are detrimental to the readability and uniformity of code.  Knowing this, you should let the tools make note of those problems instead of including them in a code review.  Ensuring that code is correct, understandable, and maintainable is the goal, so that should be the focus of any code review.
  • Everyone should code review – Conducting code reviews spreads the understanding of code implementation, and helps new developers get up-to-speed with the working practices of an organization, so the involvement of veteran reviewers and newcomers is equally valued.
  • Review all code – A proper code review means that it was done as thoroughly as possible.  Wilson believes that no code is too short or simple to review, and that reviewing every line of code means that nothing is overlooked, and that you can feel more confident about the quality and reliability of code after a rigorous review of it is done.
  • Adopt a positive attitude – Code reviews are meant to be beneficial experiences and, by heading into them with a positive attitude and responding to constructive criticism appropriately, you can maximize the potential for this.
  • Review code often and in short sessions – Wilson believes that the effectiveness of a code review will begin to dwindle after about an hour, so it is best to refrain from reviewing for any longer than that.  A good way to avoid overly long sessions is to set aside review time throughout the day, coinciding with breaks, to help form a habit and resolve issues more quickly while the code is still fresh in developers’ heads.
  • It’s OK to say “It’s all good” – Not all code reviews will contain an issue.
  • Use a checklist to ensure consistency – A checklist will help to keep reviewers on track.

I think these tips are useful to consider and I hope to apply them in the upcoming code review.

Post #9

Today, I will be summarizing an interesting article I found on www.softwaretestinghelp.com about software quality assurance and how to prevent defects in a timely fashion.  The article, entitled “Defect Prevention Methods and Techniques”, was last updated in October 2017, but the author is not credited.  I selected this article as a topic for a blog post because I found it interesting how it provided a definition of “quality assurance” as well as methods and techniques for preventing software defects during development.

The article begins by differentiating between quality assurance and quality control; quality assurance activities are targeted toward preventing and identifying defects, while quality control activities are targeted toward finding defects after they’ve already happened.  The reason that the emphasis of this article is on quality assurance and defect prevention is because, according to the article, defects found during the testing phase or after release are costlier to find and fix.  Preventing defects motivates the staff and makes them more aware, improves customer satisfaction, increases reliability/manageability, and enhances continuous process improvement.  Taking measures to prevent defects early on not only improves the quality of the finished product but also helps companies achieve the highest Capability Maturity Model Integration (CMMI) level.

The process of preventing defects begins after implementation is complete and consists of four main stages:

  • Review and Inspection – Review of all work products
  • Walkthrough – Compare the system to its prototype
  • Defect Logging and Documentation – Key information; arguments/parameters that can be used to support the analysis of defects (a minimal sketch of such a log entry follows this list)
  • Root Cause Analysis – Identifying the root cause of problems and prioritizing problem resolution for maximum impact
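As mentioned in the list, here is a minimal sketch of my own, written as a Java record, of the kind of information a defect log entry might capture; the fields are illustrative rather than anything the article prescribes:

    // A minimal defect log entry; every field exists to support later
    // root cause analysis.
    public record DefectLogEntry(
            String id,             // unique identifier, e.g. "DEF-142"
            String module,         // where the defect was found
            String phaseDetected,  // review, walkthrough, testing, ...
            String severity,       // blocker, major, minor, ...
            String description,    // what went wrong, with the inputs/parameters involved
            String rootCause) {    // filled in during root cause analysis
    }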

This process often requires the involvement of three critical groups within an organization, each with its own set of roles and responsibilities:

Managers are responsible for:

  • Support in the form of resources, training, and tools
  • Definitions of appropriate policy and procedure
  • Promotion of discussion, distribution, and changes

Testers are responsible for:

  • Maintenance of the defect database
  • Planning implementation of changes

Clients are responsible for:

  • Providing feedback

The article concludes by emphasizing the importance of defect prevention and its effect on the final product.  I believe that actively participating in a process of avoiding defects before the final stages and release of a product is good practice and should be followed just like any other beneficial development strategy.  The article certainly contributed to my repertoire of development techniques and I think I will refer to these concepts in the future.


Post #7

I began researching good JUnit practices as a follow-up to our discussions of it in class.  I found a post on the codecentric Blog by Tobias Goeschel entitled “Writing Better Tests With JUnit” that addresses the pros and cons of JUnit and provides tips on how to improve your own unit testing.  This is the most thorough article I’ve found on JUnit testing (and possibly the longest), so it seems fitting to summarize it in a blog post of my own while we cover the topic in class.

Goeschel begins the article by describing the value of test code.  He says that “writing fast, bug-free code is easy, if it’s not too complex and written once…”.  The key phrase, here, is “written once”; the implication is that rigorous testing is required in order to produce code that is both high-quality and maintainable.  He believes that tests are instrumental to implementing new functionality, refactoring architecture, and finding newly introduced bugs, all without damaging behavior.  In order to achieve these things, though, tests must be well-written.
He defines well-written tests as tests that tell you:

  • How to access the API
  • What data is supposed to go in and out
  • What possible variations of an expected behavior exist
  • What kind of exceptions might occur, and what happens if they do
  • How individual parts of the system interact with others
  • Examples of a working system configuration
  • What the customer expects the software to do

Goeschel then describes some of the inherent obstacles to be considered when writing JUnit tests, and then follows with a list of tips to overcome these obstacles and improve your test-writing.  Because the article is so well-written and thorough, I will summarize it in a TL;DR version that will highlight its key takeaways.


  • Test behavior, not implementation – Implementation-based testing leads to fragile tests; base your tests on the expected output; once the code works and tests are implemented, they should only fail when new behavior is added; name methods in such a way that they reflect the behavior they are triggering (a short sketch illustrating this and a few of the other tips follows this list)
  • Group tests by context – Related test methods should be grouped together for better understanding
  • Enforce the Single Assertion Rule – Limit each test method to a single assertion; doing this makes code more readable and assigns meaning to a block of assert statements
  • Choose meaningful names – Self-explanatory; give your tests, and the variables within them, names that help explain what the test is doing
  • Avoid test inheritance, if possible – If tests depend on each other, it is harder and more tedious to change the system; inheritance makes tests harder to understand and introduces performance problems
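To illustrate a few of these tips together – behavior-focused naming, meaningful names, and the Single Assertion Rule – here is a minimal JUnit 5 sketch of my own; the tiny ShoppingCart class is made up and defined inline so the example is self-contained:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    class ShoppingCartTest {

        // A made-up class under test, inlined to keep the sketch self-contained.
        static class ShoppingCart {
            private final List<Integer> pricesInCents = new ArrayList<>();
            void add(String item, int priceInCents) { pricesInCents.add(priceInCents); }
            int totalInCents() {
                return pricesInCents.stream().mapToInt(Integer::intValue).sum();
            }
        }

        // The name describes the behavior being triggered, not the implementation.
        @Test
        void totalReflectsThePriceOfEveryAddedItem() {
            ShoppingCart cart = new ShoppingCart();
            cart.add("apple", 100);   // prices in cents
            cart.add("bread", 250);
            // Single Assertion Rule: one behavior verified per test method.
            assertEquals(350, cart.totalInCents());
        }
    }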

Testing is essential to writing good code that is free of bugs and, in order to create test suites that are readable, sustainable, and useful, we need to adopt good testing habits.  If Goeschel wanted the reader to take away just one thing from this article, it would be that testing should be focused on behavior rather than implementation.  I think this is an incredibly useful tip to keep in mind and I think I will refer to it often when I am developing my own test suites in the future.

Post #4

As we are heading into our discussion of Decision-Table-Based Testing, I thought it would be a good idea to get some insight on what it’s all about and what its relevance is in the field of software testing.  Interestingly enough, I recently found an article about it on the International Software Testing Qualifications Board (ISTQB) website.  To become a certified member of the board, one must be knowledgeable in the use of decision-table-based testing techniques.  Today, I will summarize and provide commentary on their article.  As I head toward graduation, I am always pleased to know when the material being covered in class is relevant and useful in the industry.

The article begins by stating that equivalence partitioning and boundary value analysis work well in specific situations or for specific sets of input values, but struggle when different combinations of input values need to be tested – requiring different actions to be taken – because they are more geared toward the user interface.  Decision-table-based testing, however, can handle these different combinations of input because it is designed to be more focused on business logic and business rules.  Decision tables are often referred to as “cause-effect” tables because there is a diagramming technique called “cause-effect graphing” that is sometimes employed to derive them.

The purpose of a decision table is:

  • To provide a systematic way of stating business rules.
  • For use in test design, allowing testers to explore the effects of combinations of different inputs and software states that implement specific business rules.
  • To strengthen the relationship between developers and testers.
  • To improve developers’ techniques.

The article goes on to explain how to use decision tables in test design.  It says that the first step is to find a function or subsystem that reacts according to a combination of inputs.  This subsystem should only have a limited number of inputs, because having too many can become unmanageable.  Once a suitable subsystem is chosen, it should be transformed into a table that lists all combinations of True and False for each of the conditions of the subsystem.  The article then provides two examples of situations where a decision-table-based test design can be handy: a loan application with variable payment plans and a credit card application with a variable discount percentage.  The article finishes with a bit of advice: don’t assume that all combinations of input need to be tested, because it is more practical to prioritize and test the important ones.
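To make the idea concrete, here is a sketch of my own showing how a decision table can drive a JUnit 5 parameterized test; the business rule (a loan discount based on two conditions) is hypothetical, not the article’s exact example:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    class LoanDiscountDecisionTableTest {

        // Hypothetical business rule under test: existing customers with good
        // credit get 10%, other existing customers get 5%, everyone else 0%.
        static int discountPercent(boolean existingCustomer, boolean goodCredit) {
            if (existingCustomer && goodCredit) return 10;
            if (existingCustomer) return 5;
            return 0;
        }

        // Each row is one column of the decision table: a True/False
        // combination of the conditions plus the expected action.
        @ParameterizedTest
        @CsvSource({
                "true,  true,  10",
                "true,  false,  5",
                "false, true,   0",
                "false, false,  0",
        })
        void discountFollowsTheDecisionTable(boolean existingCustomer,
                                             boolean goodCredit,
                                             int expectedDiscount) {
            assertEquals(expectedDiscount,
                    discountPercent(existingCustomer, goodCredit));
        }
    }

Because each @CsvSource row maps to one rule in the table, prioritizing the important combinations – as the article advises – is just a matter of choosing which rows to keep.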

This article provided me with some background on what decision-table-based testing is and showed me relevant, real-world examples of how it is employed in the field today.  I feel better prepared for the discussion coming later this week.  Also, I think this article improved my focus and ability to prioritize key input values, which will contribute to my overall testing ability in the long run.

Post #3

Today, I am going to review and summarize Episode 283 of Software Engineering Radio.  This episode’s guest was Alexander Tarlinder, author of “Developer Testing: Building Quality into Software”.  The topics covered in this episode are quite relevant to the other posts I have done, and one of those topics is specification-based testing, which we have discussed in class.  I also selected this episode as the subject of this post because Alexander provides tips on how to test software effectively, both as a developer and as a tester.

The episode begins with Alexander explaining why he wrote his book – he felt there were gaps within the software testing literature that needed to be bridged.  He claims that much of the existing literature on software testing focuses too heavily on documentation, focuses too much on a specific tool, or leaves out crucial information on what to include in your test cases, which can be hard for developers to relate to.  He defines general software testing as a process used to verify that software works, validate its functionality, detect problems within it, and prevent problems from afflicting it in the future.  He defines developer testing similarly, but with the caveat that developer testing more systematically tackles the hard cases rather than the rigorous ones; this is a result of developers being knowledgeable about the programming techniques implemented, and the inherent bias that accompanies that.  Alexander argues that this developer bias necessitates additional testing performed by unbiased testers.  He insists that it is still necessary for developers to perform their own testing, though.  Testing during development ensures that at least a portion of the testers’ workload will be more of a “double-check”, allowing them to focus on the rigorous and unexpected cases that developers might overlook.

Alexander then summarizes the main points of the discussion and provides tips on how to improve software testing.  The conversation strays a bit, so here is my summary and what I took from it:

  • Encourage developer testing – Developer testing is a necessary practice to both ensure that the finished product is of high quality and functional and to allow testers to work rigorously, which will detect and prevent any potential problems in the future.
  • Adopt an Agile development strategy – Agile development allows for rapid delivery of product and forces developers to adhere to effective working practices.
  • Write code that is testable – Consider a Design by Contract (DbC) approach, where the caller and the function being called programmatically enter into a “contract” with one another (a minimal sketch of this idea follows the list).
  • Learn various techniques – Specification-based testing is a fundamental testing technique to learn, but it can sometimes lead to an approach that is too focused on where the data is partitioned, neglecting the in-between cases.
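For the Design by Contract item above, here is a minimal sketch of my own of what contract-style preconditions might look like; the AccountService class and its method are hypothetical:

    import java.util.Objects;

    public class AccountService {

        // Contract: the caller promises a non-null account id and a positive
        // amount; in return, the method promises to record the deposit.
        public void deposit(String accountId, long amountInCents) {
            // Preconditions: fail fast if the caller breaks the contract.
            Objects.requireNonNull(accountId, "accountId must not be null");
            if (amountInCents <= 0) {
                throw new IllegalArgumentException(
                        "amount must be positive, was " + amountInCents);
            }
            // ... record the deposit (omitted in this sketch) ...
        }
    }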

I think that this discussion contributed to my knowledge of effective development and testing practices which will help me a great deal when it comes time to implement them in the field (or an interview).
