Thursday, 6 November 2014

Famous first words - the moments leading to defect discovery

Just a fun little thought of the day from today...

There's a well-known phrase that people use - "Famous last words" - usually uttered just before someone does something they probably shouldn't.

I was thinking about my inner dialogue during testing today, and I realized that in the moments just before finding a bug, I'd catch myself saying one of two phrases. I'm going to call them my Famous First Words, because they mark the first moment when I know I'm on to a defect. I'm sure many testers can relate to these moments.

"Hey! I wonder what happens when..."
I'd say this is the more common one. It usually happens during exploratory test sessions. I'll be working through testing the feature as normal, and out of nowhere this thought occurs. It's like being able to tell ahead of time that an area is just begging to hide a bug. "I wonder what happens when I enter this value..." BOOM! - an exception occurs.

The other phrase happens a little more out of my control:
"Hey! That was weird..."
This one happens after catching a glimpse of some action (or lack of action). It's when a dark corner of my brain lights up and says "I've seen this before" or "Did that just...?". This one is neat to me because it hinges on the little details an untrained tester might miss - that flicker of uncertainty that pops up for a brief second. This phrase has led me to countless threading/asynchronous issues, and to problems just subtle enough that they weren't caught by average functional testing.


These are just the two I seem to notice most commonly in my day-to-day testing activities. Are there any other internal phrases that people find are precursors to finding defects?


Wednesday, 1 October 2014

Using Blackbox & Whitebox Analysis Methods to Build Strong & Efficient New Feature Test Plans

My most recent position has me doing a significant amount of new feature testing, with the freedom to do it in an exploratory manner. This is great - a dream come true! So I see a testing task come through for the most recently developed feature and the excitement begins. (EDIT - I feel the need to clarify here. I work hard to ensure testers are included early in the design and development process, so I'm well aware of the feature before it comes into the test queue. My excitement here is that I get to begin physically testing the feature.) But then the reality of the situation sets in: I'm responsible for testing this entire new feature and confirming that it works to a certain standard.

While a new feature can be intimidating, with the right planning nothing should be too much to handle. Without really realizing it, I have grown into a process for decomposing a new feature. It pulls a few different concepts together into a formula that has worked for me so far.


Now is a good time for this disclaimer: I'm a big fan of mind maps. Growing up, I was introduced to mind maps in a variety of classes and scenarios. I never really applied them to testing until meeting my (now) testing mentor. He's also a big fan of mind maps and the driving force behind my continued use of them.

I think you know where this is going. Yes, I use mind maps for my test planning. This is not a new concept, and there are at least a couple of good reasons why I like using them. They're lightweight. They're quick to whip up. They're easily modified. And most importantly, they're easy for anyone to understand. They serve as a sort of living document of the testing being performed. And once the testing is complete, they serve as documentation of what was considered during the initial testing phase of the feature.

In a previous blog post, I refer to Agile testers needing to wear many hats. I employ two of those hats when planning new feature testing: first the blackbox tester's hat, then the whitebox tester's. And I do so in that specific order.

Step 1: Blackbox Analysis

First things first - I mind map the usage of the feature. 

  • Starting with any provided documentation (assuming there are any requirements or specifications), nodes are created in the map based on actions I'm told I can perform. These are the expected/documented actions of the feature. 
  • Then nodes are created based on my assumptions about what I think I should or should not be able to do. As far as I'm concerned, I'm playing the role of a user who has not read any documentation. So if I *think* I can or can't do something, that is a scenario worth testing. These are the assumed actions of the feature. (A small sketch of how these nodes might be captured follows this list.)
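For illustration only, here's a minimal, hypothetical sketch of how those blackbox nodes might be captured as plain data, tagging each idea as documented or assumed so it can be re-prioritized later. The feature and actions are invented, and this isn't a tool I actually use - a hand-drawn mind map works just as well.

```python
# Hypothetical sketch only: capturing blackbox mind-map nodes as plain data so
# the plan can be shared and re-prioritized later. The feature name and actions
# below are invented examples, not from a real project.
from dataclasses import dataclass, field

@dataclass
class TestIdea:
    action: str             # what the user tries to do
    source: str             # "documented" (from specs) or "assumed" (my guess as a user)
    priority: str = "high"  # may be lowered later, during whitebox analysis
    notes: str = ""

@dataclass
class FeatureMap:
    feature: str
    ideas: list[TestIdea] = field(default_factory=list)

plan = FeatureMap("Export report", ideas=[
    TestIdea("Export a report as PDF", source="documented"),
    TestIdea("Export a report with no data in it", source="assumed"),
    TestIdea("Cancel an export halfway through", source="assumed"),
])

for idea in plan.ideas:
    print(f"[{idea.source}/{idea.priority}] {idea.action}")
```
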
Step 2: Whitebox Analysis

Now that I have a mind map that shows some semblance of what I'm going to do during my testing, I remove my blackbox testing hat and replace it with a white one. There are many ways to get insights into the feature code itself. I use a combination of looking at the code reviews for the code submission for that feature and speaking to the developer face-to-face. 
  • Understanding what code paths exist allows for some nodes on the mind map to be de-prioritized. There may be a node for a negative path that we can clearly see (or have been told by the developer) the code prevents. For me, I'd still like to try that scenario in the product to ensure it is handled well, but it requires less focus because it's a scenario the code was designed to handle.
  • This may also reveal execution paths we didn't think about in our initial assessment. Maybe a requirement wasn't stated, but the developer decided it was implied and coded for it. Or maybe there was just a potential path that we missed during our blackbox assessment.
  • Look at which methods have been unit tested, and what the unit tests actually test for. If there's good unit test coverage, there's a good chance we don't need to test for things like basic inputs and outputs, because that has already been covered (our unit tests run before the code is merged into the code base, assuring us that the unit-tested scenarios pass before the feature makes it into any product build). A hypothetical example of the kind of coverage I look for follows below.
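To make that last point concrete, here's a hypothetical example of the kind of basic input/output coverage I'd hope to find already in place. The function under test and its behaviour are invented purely for illustration; if tests like these already exist and run before merge, I can spend my product-level effort elsewhere.

```python
# Hypothetical example of existing unit-test coverage I'd look for before
# deciding to skip basic input/output checks at the product level. The function
# under test (parse_quantity) and its behaviour are invented for illustration.
import unittest

def parse_quantity(text: str) -> int:
    """Toy implementation standing in for real feature code."""
    value = int(text)
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

class ParseQuantityTests(unittest.TestCase):
    def test_basic_input(self):
        self.assertEqual(parse_quantity("3"), 3)

    def test_negative_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("-1")

    def test_non_numeric_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("lots")

if __name__ == "__main__":
    unittest.main()
```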


TL;DR:

The intention here is that the blackbox analysis of a new feature is performed before the whitebox analysis. Keeping the code out of sight while we think about how to test a feature prevents the tunnel vision that could stop us from thinking creatively about how that feature might behave. The whitebox analysis then lets us hone that creative thinking using facts. We can focus more on the areas that are risky, and have confidence that specific scenarios have been unit or integration tested.

Wednesday, 13 August 2014

Why "We've always done it that way" is not such a bad phrase after all


This isn't a new topic. I've heard many people talk about how detrimental this phrase is. I want to propose a different take on it, instead of the classic negative view.

There's a running joke at my office about this phrase. In a sprint retrospective, as a relatively new member of the team I mentioned that I was sick of people answering my questions with "We've always done it that way". As a result, my coworkers now like to toss the phrase out at me in playful office jest.

But all jesting aside, it did spark some very interesting discussion. I realized that I wasn't tired of hearing that phrase specifically; I was tired of hearing it used to explain away a questionable practice without any responsibility or ownership. As the newbie on the team, frustrated with the painfulness of process X, I would ask "why do we perform action X this way?", and it was easy for a more experienced team member to reply with "we've always done it that way". That avoids having to explain the "why" portion of my question and halts all discussion about improvement. I don't think anyone does it intentionally, and that is what makes the phrase so detrimental.

A year ago, I was listening to a talk by Adam Goucher (@adamgoucher) entitled "It's our job to collect stories" (Title may not be exact). One of his points was regarding this phrase. Adam pointed out that if "we've always done it that way", then at some point a decision was made to do it that way, and that decision must have been based on certain factors at the time.  Make sense? Furthermore, if we trust our team, then we should also trust that it was the RIGHT decision at the time. So perhaps the phrase is actually an indicator of a previous decision that needs to be reassessed. We should be able to justify WHY a particular choice was made. I believe this blanket statement is so popular because it allows us to skip the justification altogether, thus not requiring us to think about the reasons behind the initial choice. I liken it to the classic scenario of a child asking an adult "why" repeatedly. Once we run out of the ability to provide a reasonable explanation, we revert to "because". But children often don't accept "because" as a response. They continue to prod. 

Consider the following scenario:
Child: Why do we wait until 8:00pm to eat dinner every day?
Father: Because that's the time our family has always eaten dinner.
It's easy to imagine that the father thinks this answer should suffice for the child. But what if the child is getting hungry at 6pm every day? This answer would probably frustrate him. 
Here's the magic part that transforms this phrase from detrimental to something that can be used to focus improvement.

Same scenario, but the child responds to his father's answer:
Child: Why do we wait until 8:00pm to eat dinner every day? 
Father: Because that's the time our family has always eaten dinner.  
Child: Well, why so late?  
Father: Because my dad didn't get home from work until after 7pm.  
Child: But you get home from work at 5pm, and I'm usually hungry by 6pm. 
Father: You're right - I never thought about that before. Let's make it 6pm then. 
It's a silly example, but it shows the point clearly enough. By pushing to get to the reason why the family eats so late, they were able to recognize that they could change dinner time with no negative effects, while improving the situation for the kid (he's no longer starving for hours each evening).

By pushing back and refusing to accept a blanket justification, we can dig down to the underlying reasons a decision was made. Perhaps a decision was made 2 years ago based on a time crunch, but now we have time to address tech debt and this particular feature would be a prime candidate. In fact, any time the question "why do we do X?" gets asked, it should be a flag to investigate further. You may find there's a reason that is still valid, in which case, no harm to have asked. But I'm guessing that fairly often you will find a decision was reached based on factors that have now changed.

Sometimes it just takes that one stubborn person to point it out. So the next time you're asking "why" and you get that dreaded response: push back. Encourage everyone involved to dig deeper and find out the real reasons. It has worked for us, and it has begun to foster discussions about iterating on and improving things we've been doing for a long time.

Just because it has always been done that way doesn't mean it's still the best way.

Sunday, 20 July 2014

Being an Agile Tester - You're going to need a bigger hat rack


Software testers have traditionally worn multiple hats. We are frequently asked to switch context quickly, and adapt accordingly to get each and every job done.

With the rise of Agile development, testers need to be prepared to perform a variety of testing activities - both traditional activities and some new, potentially unfamiliar ones. The general expectation is that these activities will be performed in less time, with even less-structured requirements. If you're already a context-driven tester, this won't come as too much of a shock to you. If you come from a more traditional testing background, this could be a pretty sizeable change for you to grasp.

In the Agile world, all roles and contributors within the team are considered equal. That is, all are equally required for the team to be successful - there is no "rockstar". And from time to time, each and every member of the team WILL be called upon to perform actions outside of their standard responsibilities. I believe for some people this is a scary concept. A classical developer is as likely to respond with "I'm expected to test?" as a classical tester is with "I'll have to develop things?". My answer to any of you with these questions is "yes". And I don't want you to simply grasp this concept - I want you to embrace it.
If you want to be a successful tester on an Agile team: you're going to need a bigger hat rack. 
There, I said it. Now let me explain why.

Throughout my early years in testing, I understood that as I grew, I would learn about more hats to include in my theoretical "hat rack" of testing skills. I began by honing my functional, blackbox testing skills. I would follow test plans and test steps, keeping a keen eye out for anything unexpected. Then I developed skills in test planning - learning how to functionally decompose a feature and figure out how to test it against its expected behaviour (as provided by the dev team via the project manager). As I gained more credibility within my test team, I was able to work more closely with the dev team and hone some whitebox testing skills, where I could see what the code was doing and test accordingly. This all happened over the course of a few years working with the same testing team.

Eventually I switched companies and joined an Agile test team. Within the first year, on any given work day, I could be expected to do any of the following:

  • Work with a developer and/or designer to talk about how a feature could potentially be tested once its development was finished
  • Functionally test a new feature
  • Accessibility test a new feature
  • Usability test a new feature
  • Performance test a new feature
  • Security test a new feature
  • Perform full system regression testing for all of the above
  • Gather statistics from internal and external sources and analyze data (for the purposes of improving testing focus)
  • Write code to support the test automation framework
  • Write automated test scripts
  • Develop test tools & scripts to assist testers wherever possible
  • Mentor team members (testers and non-testers) in improved testing techniques
  • Contribute to overall test strategies for Agile teams
  • And so much more...
Of course, not everyone has to do all of these things. But the opportunities for a purely functional, blackbox tester are diminishing. In Agile, they're virtually non-existent. As I said:
You're going to need a bigger hat rack.

We often see Agile testing go hand-in-hand with context-driven testing. In context-driven testing, you are the test planner AND the test executor. You are given a handful of requirements and expected to determine if they are met. You are also expected to advocate for the customer, and to question when things don't feel right. And you have to do all of this with limited time, using the most effective method. Is one exploratory pass good enough? Should this be automated and added to the regression suite? HOW should this be automated? Where does this fall within our testing priorities?

Hopefully you are prepared for this. It's a new and exciting world, where testers are being handed more responsibility than ever before. Testing is definitely not becoming obsolete - but bad testing is. And with that comes the need to constantly learn new things, continuously improve and find new and creative ways to contribute to the Agile team and keep quality improving. Just as programmers must continue to learn and keep up with the latest technologies and languages, testers must continue to learn new practices, new ways of thinking, and keep collecting those hats.




Tuesday, 10 June 2014

Student Testing Resumes

A friend and I were discussing the pains of screening hundreds of resumes for Software QA co-op applications.

If you're not familiar with the co-op program, students basically alternate between a term of classes and a term of work over a five-year degree program. There's often lots of pressure to get good jobs (or jobs at all), and the competition is fierce. Students often apply for countless postings and go through many interviews to land a job.

Often as employers we have to whittle a list of 100+ resumes down to an achievable number to pursue for interview. If we're lucky, that's around 15 interviews. Reading that many resumes and attempting to make a good selection can be tedious.

So today, the idea was jokingly tossed around that we should have a script that does a text search of all the applications (such as a recursive grep) for the word "test". Then we realized the word "test" by itself would be too limiting, so we figured a search for the pattern "test*", with a wildcard at the end, would be better (ignore the syntax, I know it's not correct). This would catch words like "testing", "tested", "tester", etc. We also realized some words would sneak through that pattern, such as "contest". That called for a regex to ensure there were no letters before the "t", such as " test*" (again, ignoring the syntax please) - essentially a word boundary.
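For the curious, here's a rough sketch of what such a script might look like - not the exact one we ran, and the folder layout and plain-text .txt format are assumptions on my part:

```python
# Rough sketch (assumptions: applications live as .txt files under one folder):
# recursively scan the folder and report how many applications mention "test",
# "testing", "tester", etc. The word boundary keeps "contest" from matching.
import re
from pathlib import Path

TEST_WORD = re.compile(r"\btest\w*", re.IGNORECASE)

def applications_mentioning_testing(folder: str) -> tuple[int, int]:
    """Return (hits, total) for text files under `folder`."""
    hits = total = 0
    for path in Path(folder).rglob("*.txt"):
        total += 1
        if TEST_WORD.search(path.read_text(errors="ignore")):
            hits += 1
    return hits, total

if __name__ == "__main__":
    hits, total = applications_mentioning_testing("applications")
    if total:
        print(f"{hits}/{total} applications ({hits / total:.0%}) mention testing")
```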

This experiment produced a *rough* number: approximately 17% of applications contained a hit. Remember, an application can include a resume AND a cover letter. Let's allow that to sink in...



...17%!?!?

This is absolutely ridiculous. I understand the time pressures of applying to lots of job postings, and I understand that lots of people who eventually want a developer role use testing as a stepping stone into dev. But come on! You're applying for a testing job - you should probably have something about testing on your resume. I guess whittling the applications down to a few interview requests isn't so hard after all.


Let this be a lesson: To be considered for a testing position, your application should probably make mention of testing.

Tuesday, 13 May 2014

Reflecting on my Software Testing World Cup Experience

Leading up to the competition:
When I first heard about the Software Testing World Cup, I was extremely excited. What a concept! A competition where people can showcase their skills as testers? It's brilliant! Developers have development competitions, so I'm glad the testing community can embrace similar concepts.

We quickly assembled our team (@_eddegraaf, @drewbrend and @josh_assad. Unfortunately, Josh ended up having a prior commitment and couldn't take part with us). We had a short initial meeting to talk about how we would develop our strategy before the competition start date. We developed a mind map of the things we needed to take into account, as well as a strategy for tracking all the information we'd need during the competition (besides bugs, which we knew we would be tracking in the provided HP software).

Unfortunately, once the date was announced I realized that I would be away for the two weeks prior, on vacation and for a speaking slot at STPCon New Orleans. The week after my return was shaping up to be a busy one, with the STWC falling on the Friday at the tail end of it. If I'm honest, I had considered backing out because I was disappointed with the amount of pre-planning we'd be able to accomplish. I felt unprepared, and this upset me because I had built up grand ideals in my mind about how we would execute our test strategies during the STWC.
Then it struck me: in the field of software testing, we're often hit with unexpected events. How often are we thrown in at the last minute and asked to sign off on a project or feature? What about when we switch Agile teams and have to quickly ramp up on the new team's processes? Part of being a good tester is tackling challenges as they come to us.
With this in mind, I decided this was a challenge I needed to see through because it was a real-life representation of true software testing.


With the day of the competition came another unexpected roadblock - none of us were able to co-locate for the competition. Enter: my first experience with Google Hangouts. Wow! If anyone has to work remotely and needs a convenient solution for video chat & desktop sharing with more than one person, USE GOOGLE HANGOUTS! It worked unbelievably well. I was also fortunate enough to have two machines at home (though both ran OS X). This was super useful: I used one to watch the YouTube live stream, listen for important info, ask questions as needed, and enter data into our shared Google Docs and HP Agile Manager. I used the other for the software under test.

Our strategy:

We used shared Google Docs to track the issues we found before entering them into HP Agile Manager. This allowed us to very quickly see the items the other team members had found, as well as which platforms they had been tried on. We also had a Google Doc to track the "areas" of the tool that needed to be tested (broken down as we observed them - i.e. the installer, individual features, mobile integration, etc.). This allowed us to see which areas each member was working on so we didn't hammer at the same ones. It also allowed us to structure our exploratory test sessions into reasonable sizes.

With about an hour left in the competition, one of our members began porting the list of issues over to a document that would become our Test Report. We also took that opportunity to organize the issues by priority (based on end-user impact) and chart the results. In the final half hour, we collectively decided what info to include in the executive summary and made a ship/no-ship decision on the product (by platform, since we had different versions on PC and OS X).

Things to improve on next time:
  • We inaccurately assumed the software under test would be a web application. We prepared by assembling lots of tools to use for performance/load testing, debugging proxies, accessibility, etc. In future, we should assemble a similar list, but be prepared for both online and offline applications.
  • Try to co-locate for the duration of the testing. Having access to whiteboards and a central network would have been far preferable to using the online solution.
  • Be more prepared for multiple platforms. We got lucky having both PCs and Macs but we ended up only having Android devices to test the mobile integration. We should have had a better way of tracking the testing performed and issues found on each platform.
  • Build a template of a test report ahead of time. We knew what types of info we wanted to include in the report, but we didn't actually have a document framework to plug the data into. This would have saved valuable time wasted on basic formatting.

Final thoughts:

As I stated above, I'm so happy that this competition took place, regardless of our final standings (EDIT: We ended up placing within the Top 10, and won special recognition for "Most Useful Test Report"). It was a really good learning experience which I feel will only further my skills as a tester, especially in the area of test strategy and design. I strongly encourage everyone to participate if the opportunity ever presents itself again.

Lastly, a huge thank you to the organizers, sponsors, judges and other participants. I know lots of people have put quite a lot of time into this event and I hope in future I can pay it back and volunteer my own time and experience to something. As always, engage me on Twitter (@graemeRharvey) or email me (graeme@iteststuff.ca) to chat about all things testing.