Wednesday 30 March 2011

SWET1 & Prezi

" #softwaretesting #testing #SWET1 #SWET2 "

It's just over a week now until the next SWET (Swedish Workshop on Exploratory Testing) peer conference in Gothenburg.

It looks like it will be a good grouping of people, abstracts, discussion and lightning talks. I'm really looking forward to it.

I thought I'd experiment with Prezi to do an experience report from it. To get a feel for the tool I took SWET#1 as a starting point. Of course, it's a presentation - so you miss a good deal without the "presentation part" - and it's quite basic, but it captures some of the salient points.

My Prezi re-cap from SWET#1 is here.

Look out for the #SWET2 hashtag in 9-10 days' time, with reports afterwards.

Feedback welcome!

Monday 21 March 2011

Automation: Oh What A Lovely Burden!

" #softwaretesting #testing "

Do you have a suite of automated test cases? Have you looked at them lately? Do they seem to be growing out of control? Have they needed updating? Do you occasionally see a problem and think, 'why didn't the test suite catch that?'

If so, then maybe you have a 'lovely burden'.

One of the things about automated suites is that they are not guaranteed to maintain themselves. Another is that they do not always tell the tester (or stakeholder) exactly what they're doing (and not doing). The information (results) they give can sometimes be interpreted as something more than it actually is.

Automated test suites can sometimes give very good information about changes you've made in the system; a lot of the time they give very good feedback, and sometimes they catch a catastrophic change.

However, they can sometimes lull 'people in software development' into a false sense of security. Wait! The test suites are not evil, as such, so how can that be?

Well, in addition to automated test suites not maintaining themselves and not guaranteeing a lot of things, they are combined with people (whether testers, project, line or other stakeholders) into an idea of a 'holy suite'.

Why are they 'holy', untouchable and in need of being maintained (as though we form museums and living exhibits of test suites and frameworks)? Well, part of it is the "Golden elephant" problem (James Bach in Weinberg's "Perfect Software and other illusions about testing"). Another part of it is that people (testers, developers and stakeholders) can become detached from what the test suites are doing - something that has been around for a long time might be 'left alone' until it breaks.

Oops!

Sometimes test suites are not maintained or refactored, for several reasons. It may be a judgement call; sometimes it's not possible to easily see where the point of diminishing returns is reached; sometimes it's vanity (yes, we didn't see that they had an 'end-of-life' 5 years ago, but even so, I don't want to look like I couldn't plan 5 years ahead...). Projects usually have difficulty seeing clearly to the end of the project (at the beginning), so why should it be any different with any artifacts that are produced along the way (like test suites)?

If I were not aware of a lot of the above problems I (Mr Stakeholder) might say that we need to plan better... But, as testers interested in contributing to working software products, we should help contribute to a better understanding and use of automated test suites.

How?

Look at test suites regularly (or at least more than never) for:
  • Relevance
  • Signs for reaching the point of diminishing returns
  • The Test Suite "Frame"
Relevance

  • Is the test suite doing what it needs to do? Are there redundant test cases/scripts? Possibly - do you know where or which ones?
  • Are there cases (scripts) that never (or hardly ever) fail? Are there scripts that only fail when certain others also fail? This might show a pattern in (1) the system architecture - weak links are highlighted - which is good, but how do you react to it?; (2) the test suite and the way it is configured - different tests funnel through the same route (is this intentional?). See the sketch after this list for one way of looking for these signals.
  • The test suite is just one view of the software - maybe it's a real view or an artificial view (due to behaviour changes). Which is it?
  • Is it static - same data and behaviour model - or is it dynamic? If it's not dynamic, do you (or someone) inject dynamism in some way (e.g. change cases in and out, rotate the cases used, vary the ordering, fuzz the data, etc.)? Do you have any refresh or re-evaluation routines for the suite?
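
As a rough illustration of what I mean by looking for these signals, here's a minimal sketch (Python) that mines a test-run history for tests that have never failed and for pairs of tests that (almost) always fail together. The file name and column names (test_history.csv, run_id, test_name, outcome) are purely illustrative assumptions - your own tooling and result format will differ.

```python
# Minimal sketch: mine a test-run history for relevance signals.
# Assumes results have been exported to a CSV with the (hypothetical)
# columns: run_id, test_name, outcome ("pass"/"fail").
import csv
from collections import defaultdict
from itertools import combinations

def load_runs(path):
    """Group outcomes per run: {run_id: {test_name: outcome}}."""
    runs = defaultdict(dict)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            runs[row["run_id"]][row["test_name"]] = row["outcome"]
    return runs

def never_failing_tests(runs):
    """Tests that have never failed across the whole history."""
    failed = {t for r in runs.values() for t, o in r.items() if o == "fail"}
    all_tests = {t for r in runs.values() for t in r}
    return sorted(all_tests - failed)

def co_failure_pairs(runs, threshold=0.9):
    """Pairs of tests that (almost) always fail together - a possible
    sign that they funnel through the same route or weak link."""
    fail_sets = [{t for t, o in r.items() if o == "fail"} for r in runs.values()]
    both_failed = defaultdict(int)  # (a, b) -> runs where both failed
    for fails in fail_sets:
        for a, b in combinations(sorted(fails), 2):
            both_failed[(a, b)] += 1
    suspicious = []
    for (a, b), both in both_failed.items():
        either = sum(1 for f in fail_sets if a in f or b in f)
        if either and both / either >= threshold:
            suspicious.append((a, b))
    return suspicious

if __name__ == "__main__":
    runs = load_runs("test_history.csv")  # hypothetical export
    print("Never failed:", never_failing_tests(runs))
    print("Co-failing pairs:", co_failure_pairs(runs))
```

The output doesn't make the decisions for you - a test that never fails might still be worth keeping - but it gives you something concrete to take into the "is this still relevant?" discussion.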

Point of diminishing returns

Think about the current cost of maintaining the test suite.

How many backlog items are there? Do the backlog items that 'need' implementing grow at a greater rate than can be supported with the current budget (time or people)? Is the architecture reaching its viable limit - do you know, or have you thought about, what the viable limit is?

  • Who's the test suite 'product owner', and how are the decisions about what goes in made?

It's important to understand what the automation suite is costing you now - this is an ongoing cost-benefit analysis, which is probably not done in a very transparent way. Not only should the current costs of maintenance be balanced against the benefits that the suite gives, but also against more subtle items.

These more subtle items include the cost of the assumptions made about the suite - from the stakeholder perspective. How many decisions are based on inaccurate or incomplete information about what the test suite is giving? This is an area that is rarely appreciated, never mind understood or researched. Ooops!
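
To make the cost-benefit question concrete, here's a back-of-the-envelope sketch. Every figure and variable name below is a made-up assumption for illustration - the point is only that this kind of calculation is worth doing (and showing) regularly, not that these numbers mean anything.

```python
# Back-of-the-envelope cost-benefit sketch for an automated suite.
# All figures are illustrative assumptions, not benchmarks.
maintenance_hours_per_month = 40       # fixing/updating scripts and framework
triage_hours_per_month = 15            # investigating false or unclear failures
hourly_cost = 75                       # fully loaded cost per hour

regressions_caught_per_month = 3       # regressions the suite actually prevented
avg_cost_of_escaped_regression = 2000  # estimated cost if one had reached users

monthly_cost = (maintenance_hours_per_month + triage_hours_per_month) * hourly_cost
monthly_benefit = regressions_caught_per_month * avg_cost_of_escaped_regression

print(f"Monthly cost:    {monthly_cost}")
print(f"Monthly benefit: {monthly_benefit}")
print(f"Net value:       {monthly_benefit - monthly_cost}")
# Tracking this trend over time is one crude signal that the point of
# diminishing returns is approaching (or has already been passed).
```

It deliberately leaves out the subtle items mentioned above - the cost of decisions based on what people assume the suite covers - which is exactly why those need a separate conversation.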

The Test Suite Frame



Thinking about the test suite involves several frames (models or filters through which people see the problem and interpret the information connected with it). Some of these 'angles' might be:



1. What's the intention with the automated suite? Expectations?

  • Is it the same intention that the stakeholder has? If not, do they realize this? Do the stakeholders think it is all-singing-and-dancing - and if so, how do you bridge that expectation gap?
  • What about new stakeholders who 'inherit' a legacy suite? How do they get to know all the intricacies of the automated suites? They probably won't ask about them, so how does the tester communicate that?
2. Are there gaps to be filled? Planned or known about? (Maintenance plans)

3. The test suite will only give a limited view - do you actively counter this in some way?

4. Risks associated with the test suite - in terms of what it says and doesn't say? (How do you translate results into information further downstream?)

  • What assumptions are built into the test suite?
  • Happy path only?

These are just some of the most obvious frames.

And finally...

Ask yourself some basic questions (and just because they are basic doesn't mean they are easy to answer):

What assumptions are built into the test suite? What does it tell you, and what doesn't it tell you? What expectations exist on it, and how are they matched or mitigated? How much reliance is placed on the suite? What risks exist with it, and how are they monitored (evaluated)?

If you don't have the answer (or a view on these questions) then you have a potential burden.

You might think 'oh what a lovely burden', or you might think 'I'm a tester, get me out of here', or alternatively you might start thinking about which types of questions need tackling now (soon) to ensure that the stakeholders are getting the information they need - and, importantly, understand the information that they are not getting. Then you/they can start wondering how much it will cost to get the extra (other) information and whether it's worth it.

But, ultimately you'll be working with automation in a responsible way.

Yes, sometimes it can be a 'lovely burden'...

Thursday 17 March 2011

Did you understand the question?

" #softwaretesting #testing #cognition "

I just took a short quiz on "Science fiction vs science fact" (here it is (link) - go and take it, it'll take 5 mins).

Well, I stank!

But long before I got to the end of the quiz I realised that I wasn't sure about the intentions of the quizmaster - were they phrasing the questions ambiguously (maybe to trap or fool people), or was the subject matter naturally close to the edge of plausibility?

Anyway, it seemed like I was unsure both of how to evaluate the questions and of how to evaluate the questioner - so really I didn't understand the question (its context, you could say) or whether it was important to deliberate long over the questions. Of course, it was just a bit of fun (trivia) so I plowed on, but I was aware of all these questions and the potential thinking traps I was falling into...

Availability and anchoring biases - ah, I'd heard of this in the news recently - or had I? Then connecting that with another question... Thinking something sounds plausible and then not wanting to move too far away from that opinion.

Where a range of topics is being discussed, it's easy to fall for the recency effect - you don't dwell on the question and so don't realize you were tricked.

There is, of course, a whole topic on taking exams - including not dwelling too long, to save time to revisit the question - but that's another story... If you follow the wikipedia link then you'll probably be able to find examples of lots of different biases in the way you take the test (or answer the question) - one of them being the "reading too much into the data" bias (almost the self-serving bias).


Testing?

Yes, I immediately started thinking along a couple of testing-related lines.

  • Did I understand the question/requirement?
  • Did I understand the context behind the question/requirement?

Having the stakeholder on hand is always very useful to clarify and clear out any misunderstandings (by either the stakeholder or yourself). Sometimes that's not possible - as in the case of the above multiple-choice test - but usually, for the things that matter in testing, a stakeholder will be available at some point in time.

If you have a stakeholder (or proxy) available then you can follow-up with a whole range of questions to get to the bottom of the problem.

A very important aspect is your own frame - what's your attitude to the problem? It's also important to understand that how the information is presented (or by whom) can affect your response.
For example, if you're not on amicable terms with the stakeholder you might adopt an aggressive questioning attitude and not be receptive enough to the information to react/respond with useful follow-up questions.
Or: you're feeling very tired (or not as alert as usual) and so you miss some implication in the question (requirement) - and act on the first layer of information: yes, we can send a man to Mars because we can build a rocket and life-support system (but how much have their muscles deteriorated by the time they return to Earth - and so how much recovery time is needed, is there permanent damage(?), etc., etc.)

Can you guarantee that this correction package will work?

Yes, I've heard that question in the past. Working out where to start tackling that question is a whole different post - but really there is a whole bunch of other questions that the questioner/stakeholder has, and he/she expresses the "simple" (compressed) question to me - so I need to get behind the question and understand what their "real" problem is. One way is to use Gause & Weinberg's "context-free questions" from "Exploring Requirements" (here's a transcription from Michael Bolton). Another way is to use some of the techniques from "Are Your Lights On?" (Gause & Weinberg again.)

Framing

Most of this (for me) boils down to framing and how that influences our problem analysis, information intake, problem exploration and ultimately decision making. We all have it - mostly without realising the effect it has. But the important thing is to (try to) be aware of it and of some of the problems it can cause - then we have a better chance of answering the question (requirement) in the real spirit in which it was asked!

By the way, it did occur to me that this could be misconstrued as another "exam-bashing" link - but that's not the intention :) If that thought occurred to you, then that maybe says something about your frame.


Are you aware of your own frames?

Oh, this was my first transcription from 750words - thanks to Alan Page for tweeting about his use - I'm brain-dumping regularly now!

Links:

Quiz: 
  • http://www.bbc.co.uk/news/science-environment-12758575
Context-free questions: 
  • http://www.developsense.com/blog/2010/11/context-free-questions-for-testing/
Framing:
  • http://en.wikipedia.org/wiki/Framing_(social_sciences)
  • http://en.wikipedia.org/wiki/Framing_effect_(psychology)

Thursday 3 March 2011

Carnival of Testers #19

" #softwaretesting #testing "

“February is a suitable month for dying.” 
(Anna Quindlen)

Well, no, I disagree - at least not with what was on offer to read in the past month, with blog posts from four continents...



Visual
  • I enjoyed the way that Ralph van Roosmalen illustrated the difference between iterative and incremental development, here.
  • A post from Rob Lambert where he points to a video and draws some interesting parallels about communication from testers to non-testers/managers.


Mind Maps
  • I got a little worried when I saw the title of Darren McMillan's post 'Mind Mapping 101' - thinking of George Orwell's Room 101 - but luckily it wasn't scary, and it was a very readable account of how he uses mind mapping in his testing. :)
  • Using mind maps as part of reporting was discussed by Albert Gareev, here.
  • Aaron Hodder uses mind maps as a piece in the puzzle to create a test approach, here.


Confer-ing
  • A good overview of the Bug DeBug conference in Chennai was posted by Dhanasekar S.
  • Pete Walen wrote about the possibility of getting involved in the emerging topics track for CAST 2011. If you're going, and fancy a punt, then check out this post.
  • On the topic of CAST 2011, you may have read James Bach's recent post about the nature of context-driven testing. If not, then here's the link.


Testers working together

  • That there is no best tester, and that it's almost wrong to think in those terms, was the topic of a post by Martin Jansson.
  • Some good advice from Esther Derby on team construction - the 0th trap.
  • Some clever wording in a job description and the thinking behind it was a two-part posting from Thomas Ponnet, with the second part here.
  • The importance of the thinking and collaboration, and not the tools, in ATDD is highlighted in this post from Elisabeth Hendrickson.



Miscellany
  • The super bowl and bug hunting made an unlikely combination for Ben Simo's write-up, here.
  • Dorothy Graham responded to a suggestion that certification is evil, with some background on the original thinking behind one of the certifications. An interesting read in three parts, finishing here.
  • Have you read 'Perfect Software..'? Some are re-reading it. Here's what KC got out of re-reading it.
  • Good analysis of a challenge and his response, with an embedded challenge, from Peter Haworth-Langford, here.
  • A parable from Pradeep Soundararajan on the value a tester can add to the product - or in this case, costs that they can save.
  • Tim Western asked a question challenge, and then posted his analysis and thoughts around the answers, here.
  • A reminder from Pete Houghton about sticking to the happy path - even if you don't realise that's what you're doing.
  • There was a round-up of the different aspects of bias that have been written about by testers, posted by Del Dewar. A good read.

“If February give much snow
A fine summer it doth foreshow”
(Proverb)

Until the next time...