Tuesday, 11 September 2018

Tips for programmers who want to build testing skills

I previously wrote a post sharing tips for testers who wanted to get more comfortable with code. In response, Twitter user @ssmusoke asked me, 
"What advice would you give for getting #developers #TestInfected and grow their #QA chops …" 
So here we are. I'm going to share my thoughts on how developers can grow their QA chops, to use Stephen's words.

If you're fortunate enough to be working with an Agile team, then you've already got an advantage: done right, your team is made up of generalizing specialists whose collective goal is to complete all the stories committed to within each sprint. This sets the stage nicely for any team member to feel empowered, even encouraged, to perform tasks beyond their defined specialty - in this case, testing, even though they're a programmer. So there's a little motivation/reasoning.

Let me start with what I believe is the most important piece of advice I can give on this topic: programmers can still use their primary skill set - coding - to assist with testing. Sure, to catch up on a QA backlog we could teach testing skills and set programmers loose on exploratory testing of features and fixes. But that would be a lot like handing engineers buckets and telling them to bail out the sinking boat, instead of fixing the broken bilge pumps.

1. Please, please, (and one more for importance) PLEASE write unit tests.

If you're not already doing this, then I'm not convinced you're looking to grow your QA chops. This is easily the quickest win for getting developers to help with testing. Now, it's not quite that easy. I've seen many instances where unit tests get added for the sake of having unit tests, without much thought being put into what they test. Test the happy path, then test the negative cases, and any edge cases you can identify that seem valuable. Then grab a tester, show them the scenarios you've got, and see if they have any more suggestions about what to test. OK, now you've got good, useful unit test coverage. But that is likely not enough on its own.
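
To make that a little more concrete, here's a minimal sketch in Python with pytest. The apply_discount function is made up purely for illustration - the point is the shape of the coverage: one happy path, one negative case, and a couple of edge cases.

    import pytest

    def apply_discount(price, percent):
        """Hypothetical function under test."""
        if percent < 0 or percent > 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_happy_path():
        assert apply_discount(100.0, 10) == 90.0

    def test_negative_case_rejects_invalid_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)

    def test_edge_cases_zero_and_full_discount():
        assert apply_discount(100.0, 0) == 100.0
        assert apply_discount(100.0, 100) == 0.0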

2. Pair with a tester, find out what cases they would test, and discuss how much of that can be tested in code. Now, do it.

If a tester says "Well I'd do x,y,z and then check that the expected value was written to the database"...you've got a candidate for a database test, or an integration test. The tests you come up with don't have to cover the *exact* steps that the tester would take. The goal here is to answer the same questions they were seeking to answer in their described testing scenarios.
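
As a rough sketch of the shape of that kind of test - here using an in-memory SQLite database and made-up names purely for illustration, not any particular framework or schema:

    import sqlite3

    def save_order(conn, order_id, status):
        """Hypothetical code under test: persists an order's status."""
        conn.execute("INSERT INTO orders (id, status) VALUES (?, ?)", (order_id, status))
        conn.commit()

    def test_order_status_written_to_database():
        # Stand-in for the real database the tester would have inspected by hand.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id TEXT, status TEXT)")

        # The "do x, y, z" part, reduced to the code path that performs the write.
        save_order(conn, "order-123", "submitted")

        # ...and then check that the expected value was written to the database.
        row = conn.execute(
            "SELECT status FROM orders WHERE id = ?", ("order-123",)
        ).fetchone()
        assert row == ("submitted",)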

Maybe you don't have the infrastructure in place to create and execute these kinds of tests. That happens a lot. In that case, you can discuss whether now is the right time to add them or not (make a value assessment around what info you'd be getting from the tests). If it's not the right time, log a tech debt ticket or a story and work with your product and engineering managers to get them prioritized appropriately.

3. Pull branches when doing others' code reviews, and exercise the code locally.

Here's where we diverge from just testing in code. A Tech Beacon article on the subject states: 
"Pure code-based testing fails because it lacks the human factor. Humans do interesting things to applications in ways that are often surprising. [Testers] enhance the success of coded tests by providing a human eye, and a human element, to help anticipate that."

Want to help get a jump start on early testing coverage? Well, those code reviews you're doing (you ARE doing code reviews...aren't you?) are a great time to execute some high-level exploratory testing. Don't just read the code changes in your friendly neighborhood diff viewer or code review tool; actually pull the code and dive in through your IDE. Experience the code the way you would if you were the one writing it. Then click that run button and give the feature a test drive, armed with your knowledge of the code change. The more you discuss with testers how they'd explore features during their testing, the more efficient you'll be at doing this on your own at code review time...likely catching important bugs early in the process.

4. Start by having your testing debriefed. Then move on to participating in debriefs.

I covered this a little bit above. If you can't pair with a tester when you're starting out to help you learn to think the way testers think, then at least do debriefs with them. After you've done some testing, grab a tester and walk them through what you did, and why. This can foster a healthy discussion around what kinds of workflows need to, or do not need to, be exercised. For instance, if you can prove edge cases were covered in other automated tests, then great! No need to run them manually. Perhaps the tester will point out some additional things they'd have tried, and you can go back and do it... and in the process, sharpen your ability to identify those types of scenarios in future.

5. Seat time

In the past, my spare time has involved racing cars. Across many driving schools and conversations with experienced drivers, there was near-unanimous agreement on the single most important thing you can do to improve your driving skills:
"Seat time."
What they meant was that the single best thing you can do is practice. Now, the caveat is to have advisers, teachers, mentors - whatever you want to call them - ensure you're practicing the right behaviors and not bad ones. The same is true for testing. With enough practice and the right guidance, you can teach your brain to think the way a tester does. This is why I keep encouraging you to talk to your testers! Ask questions, discuss, and listen with an open mind to how they think, and why they do the things they do.

For those REALLY interested

There are other ways if you truly want to keep growing your QA chops. For instance, a couple times a year, our local testing meetup KWSQA hosts Testing Games nights. Anyone is invited. At each event, a company offers up their app to be put through its paces. Attendees are then encouraged to test and find bugs in the app in any way they feel is appropriate. Bugs get logged, and often there are prizes for most creative test, biggest security flaw, and other interesting things that attendees find. This is a great way to practice testing in a fun and open environment against an application not in your usual domain.

Good luck, and happy testing!

Thursday, 14 June 2018

I created the SDET role at my company and I wish that I hadn't...

Hindsight is 20/20, as "they" say.

"They" also say you have to make mistakes to learn from them. I say, learn from MY mistake so you don't have to make this one. Because some mistakes are really hard to recover from.

I was hired to develop automation frameworks and had been given the title "Software Engineer". To be honest, I didn't have the confidence to believe I deserved it. I even appended my own qualifier to the title in my online profiles, calling myself "Software Engineer (in Test)".

Fast forward a few months, and I was asked to lead a team of people doing the same job: developing and maintaining automation frameworks. So I set to work writing role descriptions and career growth plans for the position I'd be giving people on my team: Software Development Engineer in Test (SDET).

I had used this title for automation-related positions I helped create in the past, and had held the title of Quality Assurance Developer earlier in my career. So when it came time to decide how to title the members of my new automation-focused team, I didn't put much thought into it. I completely glossed over the benefit I was being extended when I myself was given a Software Engineer title. Oops.

In retrospect, after many conversations with industry professionals I respect, I realize that development is development regardless of what your focus area is. Tacking on a qualifier such as "in test" opens up the discussion that these team members are not as skilled or deserving as team members with a "real" developer title. How often do we hear about developers being "good enough" to be in an SDET role? Or how about when development teams pass on junior applicants but recommend them for the automation role? And then you have to engage in discussions like where the SDET role fits in the pay scale of the organization (hint: in my experience, it's almost always been below developer, but above QA).

Simon Stewart made the claim (in a talk of his I attended at STPCon Spring 2018...and I hope he doesn't mind me attributing this to him) that finding good Automation Engineers (or SDETs, or SETIs, or Test Engineers, etc.) is actually very hard. Anyone who's been a hiring manager for this role will more than likely agree. And if this role is truly a hard one to fill, would that not warrant the position being worth more? Simon's claim was that there are plenty of developers in the candidate pool looking for positions, but we have to actively go out and hunt for months to find good SDETs.

So, back to my 20/20 hindsight. I've been reflecting on this a lot lately, and my new opinion on the matter is that anyone developing in code should have a developer role and be treated like a developer. That is, held to the same standards, paid the same wages, etc. The bonus to this? The people focused on your test frameworks can float between automation development and feature development as required. You're finding the people with the test and automation skills and putting them on the development team. This doesn't mean other developers can't contribute to automation, and vice versa. It's about getting the right skills in the door and making them available to assist the wider team.

I never gave much focus to titles in the past. But take it from me: take the time and think carefully about them. Once in place, they can be quite hard to change.

5 steps for testers who want to be more comfortable with code

In a previous post, I mentioned some tips I have for testers who lack confidence working with raw code. Here they are, in the order I think is easiest to execute. Of course, they could be done in any order; you can pick and choose which ones you want to do, or do them in parallel. The idea is simply to work towards being more comfortable talking about and writing code.

Look at code reviews.
Seriously. Just look at them. This is step one. You don't need to be added as an assignee or a reviewer, but start looking at them. When you do:
  • Look at what files change - how do they relate back to this particular feature/bug fix?
  • Look at what comments other reviewers leave - what kinds of things are they focusing on?
  • Try and understand what the code change is doing - read through and start to identify what kinds of functions live in the different files. 

Set up your laptop with a development environment.
You are part of the development team, after all - aren't you? So if your on-boarding as a tester didn't include setting up your environment the way a developer's is, find the docs and do it yourself. Or ask for it. Either way, I encourage you to get set up to pull code and run local builds. Not only does this give you the opportunity to see the code and spend time in the same environment the developers do, it also lets you pull branches, run local builds, and get early eyes on features and code changes.

Use the terminal at every opportunity.
Just to get used to writing commands, and understanding what the terminal can do for you. There's so much power available to you. Once you learn some of the shortcuts, I promise you'll wish you had explored the environment sooner. This will also help you set a foundation for the skills needed to write bash scripts. Once you have that foundation, you can start to...

Experiment with writing scripts.
There are many opportunities for scripts to help you do tasks related to your job much more quickly and efficiently. Once you understand how to do things like cURL from a terminal window, you can start to script things like API calls for testing and validation. Consider any task you spend time doing more than once to be a candidate for a script. Once you've written a few of these, you'll be on your way to building a library of tools that help you complete menial and tedious actions with minimal effort - freeing you up to spend more time on the most impactful work.
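
As one small sketch of the idea - in Python rather than bash, and against a made-up endpoint - here's the kind of check you might otherwise re-type as a curl command every time you need it:

    import json
    import sys
    import urllib.request

    # Hypothetical endpoint - substitute whatever you were curl-ing by hand.
    API_URL = "https://example.test/api/orders/{order_id}"

    def check_order_status(order_id, expected_status):
        """Fetch an order and verify its status."""
        with urllib.request.urlopen(API_URL.format(order_id=order_id)) as response:
            body = json.load(response)
        actual = body.get("status")
        if actual != expected_status:
            print(f"FAIL: order {order_id} has status {actual!r}, expected {expected_status!r}")
            return False
        print(f"OK: order {order_id} is {expected_status!r}")
        return True

    if __name__ == "__main__":
        order_id, expected = sys.argv[1], sys.argv[2]
        sys.exit(0 if check_order_status(order_id, expected) else 1)

The same idea works just as well as a bash script wrapping curl; the point is simply that a check you repeat now lives in a reusable, runnable file instead of your command history.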

Pair with developers.
Get face time with developers whenever you can. In my experience, they're often more than willing to let you sit with them, shadow them, and work through problems together. By doing this, you'll see how they tackle problems, and if they're really helpful, they'll talk through their process of building the feature with you. You can hear where they start, what kinds of things need to be added, modified or removed, and many other insights into the code base. You can also watch for and discuss coding conventions. Every organization has its own conventions and standards, and you'll want to be familiar with yours so that, as you learn, you're picking up habits that align with the company's. This sets you up to have your first few attempts at code be well received by your friendly neighborhood developers.


Feel free to comment below if you have any other useful tips for testers who may be nervous about participating in the code side of software engineering.

Thursday, 24 May 2018

Discussing Testers' Value, Beyond the Echo Chamber

I've finally come to a conclusion. I spend a lot of time thinking about why so much of the advice given to testers feels like common sense, and why my own thoughts don't feel novel or unique. Here's the problem: the testing community is, in some senses, an echo chamber. The same thoughts get shared within the community, and people hear, agree, and repeat. What I've personally neglected to do is reach outside the tight testing community to listen, learn and share. What do people in other engineering roles think the value of testers is? Or maybe they think there's no value, and I should seek to understand why. Sharing my thoughts on the value of the tester role and engaging in discussion when opposition is raised will only serve to bring clarity to my points.

Here are a couple of things I've noticed in discussions with people in non-testing roles who think testing isn't valuable:


"Testers don't understand code. They rarely have technical understanding of how software works."
Admittedly, this one pisses me off more than most. Understanding how software works and understanding code don't go hand in hand. And being able to understand what a snippet of code is doing doesn't necessarily involve knowing how to code in that specific language either. Having said that, I've seen many organizations add fuel to this fire by hiring large numbers into testing roles who truly do not understand software. Like at all. We're talking applications-as-a-black-box, tell-me-what-to-click-to-test mentality. Check the box, move on. I've got plenty of tips and tricks I'll share with you (in person, over social media, or in a later blog post) if you are a software tester who is afraid of code, or doesn't know how to grow beyond the role of a pure blackbox tester. Having a blackbox hat on your rack is a great skill...but don't let it be your only skill.


"Testing is slow. We should automate it all so it's fast and we won't need human testers."
Well, we're actually partially agreeing here. We should automate regression coverage so that devs can be sure their feature continues to work as expected going forward, without relying on testers having to revisit the feature and all its valuable scenarios after every change. That would be slow, tedious, and not a good use of anyone's time. But if we can automate all that away, it's a good use of skilled testers' time to explore the feature - exercise workflows as a user would, look for edge conditions and risky areas, think about the feel of the feature, consider performance, accessibility, security....
Good testers don't want to do the boring, repetitive shit. So let's not make them.


"Testers are only needed to execute testing steps. Therefore they're a cheap safety net for organizations."
Have I ever mentioned how much I hate seeing testers referred to as "cheap"? I read a response to a LinkedIn post the other day, and someone said: they don't want their developers responsible for doing any testing because "why pay a skilled developer $120/hr to test their feature when a tester could do it for less than half that"...or something to that effect. Wow. 
This is akin to the whole "anyone can test" or "testing isn't a real skill" types of arguments. You also hear a lot of people repeating things like "this is why companies like Google or Facebook don't have testers". (Newsflash by the way...they DO have testers, despite publishing all kinds of things saying they don't. Is it just cool to say you're hip and modern and don't have testers?)
I think this argument is perpetuated by people that have only come into contact with bad testers, or been at organizations that foster poor testing practices. Testing requires skills in functional decomposition, risk assessment, specific types of communication and technical writing. But we as testers need to prove that by fighting back against bad testing.

Wednesday, 9 May 2018

The value testers bring...

It started with a simple question that I posed in the Ministry of Test #general slack channel:
Hi all. Does anyone have any sort of resources to point to for “proving the value of testers”? I’ll try to explain: I’m looking for resources that would help when having a conversation trying to explain that testers do more than just sit in a seat and execute test steps. Along with that, I’d like to try and justify reasonable pay for testers. I know how I feel about it all, but having something to point to beyond my own opinion would likely go a long way.
Well, fortunately for me, Mr. Michael Bolton (someone whose lessons I've followed closely throughout my career as a tester) was watching. He pushed back on me to provide my thoughts on what value testers add. I typed what came to my head first, with no censoring or editing:
Personally I believe testers’ value comes at many stages of development, which is why I’ve always been a proponent of them being involved in every stage. Customer advocates, connectors (of people and information), informants (informing the org of their observations on quality, risk, …), analysts (analyzing change, risk areas, issues from the field…and distilling that data to inform changes in process/development in hopes of improving efficiency of development with quality)…and likely definitely more
At this point, Michael offered to role play with me some of the arguments I could expect to encounter when typically having this discussion. I'll paste the majority of the conversation here. It's a long read but those who have read it have expressed interest in the content. I'll summarize with a TL;DR for now, and then likely provide updates on my follow up thoughts soon.

michaelabolton
(In character:)  Don't you think the product owner is a customer advocate?  Isn't the business analyst a customer advocate?  Heck, aren't the programmers customer advocates?
I don't see programmers say "screw the customer", except as a joke sometimes.  Why do we need testers?
(Be prepared; these are perfectly legitimate questions.  AND there are good answers for them.)

Graeme 
Sure. And I believe programmers often do advocate for the customer and weigh in on discussions to that end. But programmers' primary skill sets lie in finding solutions to specifically defined problems, and implementing those solutions. We could take some of their time away to investigate how users use the software and features, and analyze the types of bugs coming in from the field…but that's a lot less time they'd be spending in their primary skill set.
Does that seem valid?

michaelabolton 
That's pretty good, but not as strong as it could be.

Graeme 
I won't argue that. This is my first attempt at writing my thoughts on the matter beyond having them in my mind or rambling them verbally to an ally

michaelabolton 
(In character) The programmers have a lot of knowledge about the product already.  AND they know how to code.  And the testers don't, by and large.
(C) So the programmers will find most of the important problems anyway.
(C) The programmer's primary skill set is building a quality product.  Who would know more about how to do that than they would?

Graeme 
The programmers have a lot of knowledge about their specific domain of the product. In our case, programmers are focused on the windows app, the iOS app, the Android app, the Web app…the APIs, the worker services…all specifically. Testers are actually one of the only roles in the engineering org that understand the customer's usage and interaction with the apps from end to end. 
A programmer mindset is typically one that is able to prove there aren't problems with the solution the way they decided to implement and use it.
I don't know how to say this not anecdotally, but if I received demos of features from programmers where we didn't encounter bugs during the demos, I'd be more inclined to believe they're good at finding the important problems.

michaelabolton 
(NOT in character) Demos are funny like that.  They're intended to be demonstrations, but they turn out to be experiments.  :slightly_smiling_face:
But here's the thing:  it's important to acknowledge that developers are really good at finding most of the problems.  Almost all of them.  What kinds don't they find, and why not?

Graeme 
Speaking generally, it seems as though developers find important problems in the primary, happy path workflow. But typically not edge cases, or at integration points (especially when integrating with a feature outside of their primary focus). I don't think most developers are mindful in a way that allows them to understand how the user might use their feature beyond the way they expect a user to use it.

michaelabolton 
(C) So... we'll get product managers and BAs to test that stuff. 
(C) And after we've done that, we'll ship the product and if there are problems, we'll fix them right away?

Graeme 
Product managers don't have the same curious mindset that testers bring, and thus don't know how to experiment and investigate in risk areas of the product. In my experience, while testers may not always know how to write code, they do often know how to analyze it and determine technical risk areas. So we could have PMs and BAs try to test the primary workflows…but again that's time that they're not spending thinking about future roadmap features and products, or working to understand customer needs better.

michaelabolton 
(C) Why not just tell them to do that?
(C) Who says that testers are the only curious people?

Graeme 
As for the ship and fix problems…we do roll out most of our features in a way that allow us to catch issues before the entire customer base encounters them. However, for the issues to be found to be fixed, someone must encounter them. Hard to ask for money for a product where you expect the users to be your line of QA.
Who are the people teaching PMs how to experiment and assess risk areas/find issues at the integration points? Surely that's the job of a tester, right? Informing about those areas and providing that information back to people who can consume that information?

michaelabolton 
(C) Hey, it's not the job of a tester to tell the PMs how to do their jobs!
(C) As for the release-to-production strategy:  not much can go wrong with our product. We're really good at this stuff.  We've been doing it for years.

Graeme 
I mean…we have data to prove we're not really good at this stuff.
We can ask any one role to go do all of the things. We wouldn't need PMs if we just told developers to go figure out what the problems our customers are having that need to be solved. But I can't imagine a lot of actual development would get done. In the same vein, while PMs can go do that, asking them to also do the work of a tester is time spent that the PM isn't building roadmaps and deciding product direction to keep making the company money.

michaelabolton 
(NC) Very good.
(C) We don't have roles here.  Everybody does everything.  Why not just get rid of roles?

Graeme 
(NC) In this case, we are so much the other direction of having specific developers from different stacks, focus areas, etc…and product having specific focus areas…I'm not concerned about this argument. But will address anyways since its good to know how to.

michaelabolton 
:slightly_smiling_face:

Graeme 
(C) Again it comes back to mindsets and skillsets. We can either have specialists do what they're best at to build the strongest product possible, or we can have teams of people doing a mediocre job at everything, make a mediocre product, have a mediocre customer base…and make a mediocre amount of money…

michaelabolton 
(C) All right.  So what is it that testers do that's special?  Distinct?
(C) And why can't people just switch from one mindset to the other?  We've got really smart people here.

Graeme 
testers will experiment and test (both in manual and automated fashions) to gather and analyze data, distill that data and provide direct recommendations on issues to fix, not just in the product, but in the processes of development. This will allow us to ship quality products to customers, and do so with even more speed and efficiency in the future.

michaelabolton 
(C) I still don't see why the programmers and the PMs can't do that.

Graeme 
Hmm…I think this is where I could use some Michael Bolton wisdom and insight.

michaelabolton 
(NC) OK.  Not in character unless explicitly tagged that way. (edited)
The answer is that they can do that.
Anyone can switch roles and mindsets to some degree.  Anyone can learn to program.  Anyone can learn how to use tools.
The issue, it seems to me, is this:  it's hard.  And it's probably slow, too.
Shallow work is easy to do.  Deep work is harder.
Going through the motions and rituals is easy.  Skilled, expert, focused work is harder.

Graeme 
Awesome, that's what I was trying to reach for mentally…but couldn't articulate the way you are

michaelabolton 
Recognizing many of the problems in something you wrote just now is easy.  Recognizing other problems takes time, or distance, or both.

Graeme 
It's much more about providing T shaped value to the teams than about just doing a bit of everything with little focus and expertise

michaelabolton 
That whole T-shaped business sounded tired to me on the first day.
I'm not sure why.

Graeme 
I guess since the org is talking about building "T-shaped" developers anyway…its common language I can use
Even if its not ideal language in general

michaelabolton 
Yes.  It feels kind of naff to me, but it is what it is. Nonetheless:  it's reasonable to believe that everyone should have some degree of general skills outside their speciality.
The testers' speciality, it seems, is this:  living and working at close social distance, but farther critical distance than other people on the team.

Graeme 
We often have programmers "test" each other's work by pulling branches in code review and doing shallow testing. Yet bugs still get out, so obviously there's evidence there that having a non-expert do a shallow test doesn't solve the quality problem.

michaelabolton 
Exactly.
It helps to have someone at some distance from the work to evaluate it IF you need serious evaluation.

Graeme 
Right

michaelabolton 
Rather than having someone adopt that mindset, it's a powerful heuristic to have someone inhabiting that mindset.

Graeme 
So…to the people who say "engineers" deserve more money because their coding skills are hard, measurable skills…how is the monetary value proven in the tester's skills where you can't look at code output and "grade" it directly?

michaelabolton 
Coding skills are not hard and measurable... without testing.
(They're not even measurable with testing, but testing can tell us something about the quality of the product that non-empirical approaches can't.) 

Graeme 
interesting

michaelabolton 
You can't measure quality.  Measurement might inform some aspects or some attributes of quality.

Graeme 
Yes, that I am aware of and believe.

michaelabolton 
Quality is not about measurement.  It's about assessment and evaluation.
Literally, about how we value something; that's the value in "evaluation".
And that takes us right to the tester's role.
The tester's role is to focus on threats to the value of the product.
The tester's role is to focus on investigating the product to discover threats to its value.
The tester's role is to bring expertise and focus to that task.

For me, the TL;DR of this conversation boils down to two (or potentially combined into one) tweet-sized quotes from Michael.
The testers' speciality, it seems, is this:  living and working at close social distance, but farther critical distance than other people on the team. 
The tester's role is to focus on threats to the value of the product.

Tuesday, 8 May 2018

It's been a minute...and a lot has changed

Well, it's been a minute since I blogged here. To be more exact, it's been almost 3 years.

In those 3 years, a lot has changed. I've grown in my career, from Test Specialist to Test Strategist to SDET to Engineering Manager. I'll touch briefly on what I spent my time doing in each of these roles.

Test Specialist

My time as a Test Specialist was spent as one of a team of 4, part of an engineering team of fewer than 20 people. These were times of a rapidly growing team (and company!) that was trying to find its way into more modern agile development practices, with shorter release cycles and more automated processes. The team was just ramping up on things like CI, unit testing, and more automated tooling and scripting.

After learning the product, I helped the team move away from documented test scripts and test cases to a more context-driven model. We worked to shorten the time needed for testing by performing tests in more isolated builds and environments. I also had a large hand in standing up the first automated UI tests for the product. As the team continued to grow and we broke development down into smaller, feature-focused agile teams, I began to recognize a need for testers across teams to share good patterns and lessons learned, and to work towards a larger goal of a quality product offering, not just individual quality products.

This is when I proposed the role of Test Strategist...

Test Strategist

I spent the next 2.5 years at the company in this position. While I continued to be an individual contributor, I was also tasked with attempting to bring some order and consistency to the company's testing practices across the teams. During this time, I wore many hats: I performed manual functional testing for multiple teams, wrote automation scripts, designed and built automation frameworks and tools, acted as a release manager, and "unofficially" product managed the STS team's work (Software Tooling and Support - essentially our version of an automation and release operations team). I also mentored junior testers, provided coaching and direction on good unit testing practices, advised on quality metrics provided to execs, and much, much more. In this time, the company grew from 2 feature-focused agile teams to 8+ agile development teams - most of which had embedded functional testers, and all of which employed CI processes and incorporated some level of automated testing.


6 months ago I left that role to move across the continent from Waterloo, Ontario to San Francisco, California...

SDET (Software Development Engineer in Test)

My official title here was "Software Engineer", but I was asked to focus first on building/improving the Windows UI Automation framework (since I had experience in such things - my last company's main offerings were primarily Windows Desktop products). I quickly went to work not only implementing this, but also working to reorganize the way QA was performed and structured at the company (essentially putting my Strategist hat back on, in a way). My biggest regret in this time is creating the SDET role officially at the company. I now believe the correct approach was to keep calling the role Software Engineer, but give those engineers automation-specific tasks to work on (more on this in another blog post - OMG I have content worth sharing again...after my 3-year funk! *squeals with glee*).

A little context - our company is a cloud-based offering, which allows customers to access their projects via web browser, iOS app, Android app or Windows Desktop clients. That means we have teams building features on 4 platforms, plus APIs, workers, services, databases and all the other fun stuff that comes along with large cloud-based offerings.

My proposal was to move QA back under one "umbrella", with a QA Manager at the top. "Why?" you might ask. With most companies moving to embedding testers into agile, "no specific role", feature-based teams, the "QA Team" model seems obsolete. Well, a year ago I would have agreed with you. However, as it turns out, there are some issues with that. In our case, we had a consistency problem. Features implemented across multiple platforms were developed and tested in isolation from each other, essentially causing the platforms to feel like different products. On top of this, testers often lack direction and career growth when they don't have a manager who actually knows how to manage them and help them grow in QA. The outcomes of this are things like stagnant testing practices and often a very high attrition rate - either losing testers to other companies, or to other roles within the company (which is not inherently a bad thing, as people should be able to move between roles, but ideally it isn't happening because people feel a lack of growth in their current role after a relatively short period of time).

Oh, by the way, testers are on the QA team as far as reporting and feeling like they belong to a team, but they are each given a feature set to own, and are embedded in the respective feature teams as the testing lead on those teams. There is no old-school, traditional dev-test wall that work gets thrown over. We still follow the model of testers being involved from design and planning of features, through development, deployment and monitoring in production, together with the rest of the development team that owns that feature.

Part of this move to a QA team was the establishment of SDETs, who build and maintain frameworks that the development teams can leverage for various levels of automated testing. They also help implement quality-of-life tools and other offerings to help teams deliver products of higher quality at greater speed. That's always the goal: enable the teams to deliver faster without sacrificing quality. Arguably the SDETs could go either way with respect to reporting to the agile teams' engineering managers or the QA manager. We chose the latter.

It was decided there should be a manager with strong automation knowledge for the SDET team...and thus my seemingly quick move to...

Engineering Manager (Platform Automation Team)

I currently manage 2 SDETs plus a part-time contractor (long story, but it's working out wonderfully for us - hmmm, I smell even more blog content), and have positions open for at least two more SDETs. Besides the team management responsibilities, I still contribute to the building and maintenance of automation frameworks and CI tooling (though sometimes I look at the open instance of VS Code on my monitor, realize that it has been days since I typed anything into it, and cry a little inside), as well as advise the QA team in a strategist-like capacity on process and mentorship, and unofficially "product manage" the growing automation backlog.

Because I get asked this a lot, I'll touch on this here for anyone interested in following up - here are the tech stacks we're currently using to automate our UI/E2E testing:
  • XCUITest for iOS
  • Espresso for Android
  • An in-house wrapper around FlaUi Automation Library for Windows (WPF)
  • Cypress.io for Web
  • ...with more to come as the team grows.

Wow it's been a crazy 3 years! And the adventure still feels like it is just beginning. Feel free to reach out to me with any questions, thoughts or challenges you have for me in regards to any of this. I'm always happy to share my thought process, experiences and to learn more from others' experiences as well.

Sunday, 7 June 2015

Please, stop telling testers how to test!



Let me start with a scenario.

A programmer works away on a new feature or a bug fix. They finally feel like they're finished and it's time to move the user story over to a tester. They update the user story with an explanation of the work they did, any assumptions they made and one final thing: 
"Testing notes: Try exercising feature X by using entry point A as well as entry point B and assure that the output is the same."
Now let me explain my problem with this. I have a gut-wrenching reaction when something like this occurs, because my first thought is, "Wait! If these scenarios have been identified as important, why didn't you try them?"

That's right. I'm proposing that if a programmer can identify a risk area, wouldn't it make sense that they ensure their code works under those conditions? Even better - shouldn't there be coded tests for these scenarios, if at all possible?
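
To sketch what a coded test for that exact testing note might look like - the entry point functions below are hypothetical stand-ins in Python, not anyone's real API - a parametrized test can capture the identified risk directly:

    import pytest

    # Hypothetical stand-ins for "feature X via entry point A" and "via entry point B".
    # In a real code base these would be the two code paths called out in the testing notes.
    def feature_via_entry_point_a(items):
        return sorted(items)

    def feature_via_entry_point_b(items):
        return sorted(list(items))

    @pytest.mark.parametrize("items", [
        [],            # edge case: empty input
        [3, 1, 2],     # typical input
        [2, 2, 2],     # duplicates
    ])
    def test_both_entry_points_produce_the_same_output(items):
        # The risk the programmer identified: both entry points should agree.
        assert feature_via_entry_point_a(items) == feature_via_entry_point_b(items)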

I don't mean to lay blame here. As far as I'm concerned, in development there is no "us" vs "them". "We" are a team. So if testing is playing out like this, it is up to everyone to correct it. But, I AM advocating that it gets corrected. Allowing testers to be told how to test is basically boiling down their job to executing steps that others don't feel like doing. It is removing what I believe is the core expertise of testers - to think critically and creatively about how features have been implemented, how they integrate, and how to stress the shit out of them!

So please, stop telling testers how to test - and start collaborating with testers on what to test! In an Agile environment, we have the luxury as programmers and testers to sit together and chat about a feature or a bug fix any time we need to. There are a few times when these talks can be mutually beneficial and help us build the best products as quickly as possible.

  1. Before programming begins: This is an excellent time to discuss the problem being tackled (new feature or bug fix), and how both parties see it being solved. The programmer can explain how they are going to approach the implementation of the solution, and the tester can talk about how they plan to test it. This gives the programmer the chance to keep those scenarios in mind during development. It also gives the tester the chance to develop a test plan based on the intended implementation.
  2. Demoing before testing: When the programmer believes they're done, a brief demo to the tester can be extremely helpful. (In fact, you wouldn't believe how many times we've caught a glaring bug during these demos...BEFORE any formal testing has begun). This is an opportunity for the tester to ask about integration points, and think about how to exercise those.
    The other thing that could be discussed at this time...UNIT TESTS! Talk about what unit tests have been written and if any more coverage would be beneficial. If certain things aren't unit testable, the programmer can explain why that is the case and the tester can plan to focus on that area (since we know there's less coverage there than other areas).
  3. During development: And of course, in true Agile fashion, any point in time is actually a great time to talk about roadblocks that arise, and possible solutions! Keep the communication up and leverage each other's skills early and often!
Tell me what to test. Tell me where to test. But please, don't tell me how to test!