Yesterday I wrote an article on the problems of automation testing: http://www.askqtp.com/2012/01/problems-of-any-automation-testing.html
Hope you enjoyed that. Today I am going to tell you how my manager and I solved those problems.
We were trying hard: team meetings, team-building exercises, effective training... but it was all in vain. One fine afternoon, while setting up a process for myself, I found an interesting article: 11 Best Practices for Peer Code Review, a white paper from SmartBear.
I strongly recommend you go through the original.
As per the author, the review should be efficient and lightweight:
1. Review fewer than 200-400 lines of code at a time. The study says that reviewing 200-400 lines of code in 60-90 minutes yields 70-90% defect discovery. This is efficient, in my view. We implemented this technique.
2. Aim for an inspection rate of less than 300-500 LOC/hour.
As per the author, the inspection rate (how fast reviewers are able to review code) is optimal at under 300-500 LOC/hour; per the white paper, the defect rate (how fast the reviewer finds defects) is highest in this zone. But when I tried to implement the same, I found that 200-250 LOC/hour is more than enough for reviewing test code; the reviewer can spend the rest of the time finding and validating the logic. We implemented this technique.
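The rate check above is simple arithmetic, and we can sketch it in a few lines of Python. This is my own illustration, not anything from the white paper; the 250 LOC/hour ceiling is the number we settled on for test code, and the function names are made up for this example.

```python
def inspection_rate(loc_reviewed, minutes_spent):
    """LOC per hour for a single review session."""
    if minutes_spent <= 0:
        raise ValueError("minutes_spent must be positive")
    return loc_reviewed * 60 / minutes_spent

def within_target(rate, max_loc_per_hour=250):
    """True if the reviewer stayed at or below the target rate.
    250 LOC/hour is the ceiling we settled on for test code."""
    return rate <= max_loc_per_hour

rate = inspection_rate(300, 90)   # 300 LOC in 90 minutes -> 200 LOC/hour
print(rate, within_target(rate))
```

A session that rushes through 600 LOC in an hour would be flagged, which is exactly the conversation you want the team to have.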
3. Authors should annotate source code. Yes, I agree with the author that comments should be written, but let me twist it a bit for testing: the reviewer should comment on what logic each block of code implements. If it passes the four-eyes principle, it is done. We implemented this technique with some modification.
5. Checklists improve results:
Well, well, I really must admit this point of the author's. As per the author, a checklist reminds authors and reviewers to check that no important aspect has been omitted. I was very motivated when they wrote that we can develop our own checklists; even the SEI's PSP and CMMI recommend such personal checklists. When I implemented it in my current process, it turned out to be the best change and yielded nice results. We implemented this technique.
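To make the idea concrete, here is a minimal sketch of a checklist-driven review for automation scripts. The checklist items below are my own illustrative assumptions for a QTP-style test suite, not items taken from the white paper or from our actual process.

```python
# Illustrative checklist for reviewing automation test scripts.
# The items are assumptions for this example, not an official list.
REVIEW_CHECKLIST = [
    "Object identifiers follow the naming convention",
    "Hard-coded test data moved to a data table",
    "Error handling / recovery scenario present",
    "Each code block annotated with its intent",
    "Script verified against the latest requirement",
]

def review(script_name, passed_items):
    """Return (status, open_items): which checklist items are still open."""
    open_items = [item for item in REVIEW_CHECKLIST if item not in passed_items]
    status = "PASS" if not open_items else "OPEN"
    return status, open_items

status, remaining = review("Login_Test", REVIEW_CHECKLIST[:3])
print(status, len(remaining))   # two items still open
```

Even a plain paper checklist works; the point is that nothing ships until every item is ticked, so omissions stop depending on the reviewer's memory.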
6. Fix problems on the fly:
I like the author's point here: verify that defects are actually fixed. I implemented it in two ways. When a requirement review was done, we changed our automation guidelines on the fly; and when we found a problem in an automation script, we fixed it during the review. We implemented this technique.
Reference: Peer code review checklist at www.CodeReviewBook.com
7. Initially, everything went against us. All the collected metrics were used against us, and our objective became to pass a script any way we could. I strongly agree with the author's line: if developers believe that metrics will be used against them, not only will they be hostile to the process, but they will probably focus on improving the metrics rather than on tracking bugs. It happened with us as well. We made our point very clear, and we encouraged and educated them. It worked.
The last point is the ego effect:
I agree with the author of this white paper, but I would add that seniors in a process can become very egotistical when a problem is found in their code. I remember one of my fellow senior test engineers blaming the junior reviewer when his test cases failed to execute.
I have seen several occasions when a senior felt insulted because his or her test script (code) was reviewed by a junior member of the group.
I have also seen people influence the reviewer to pass their piece of code, even though it was going to fail in the long run, just because the author was senior, or from the same region, or shared the same culture or language.
Well, my manager was there to handle this. :)
So, with the help of these points, we worked out a better solution. It was a great review process indeed.