GLSEC 2007 – Program Notes
Thursday was the program day for GLSEC 2007. While not really live blogging, I did take notes.
Before we get to the session notes proper, here are the general ones around the conference itself.
- For a regional conference, it was really well organized and had high caliber speakers
- The reception with the students afterwards was pretty sparsely attended on the student side; you would think they would want to make local industry contacts
- While I don’t have access to the official statistics, I counted the women in attendance at the morning keynote who were not presenters or organizers and saw around a dozen. That is horribly skewed for a conference with ~140 attendees.
- Unlike DemoCamp or CAST, there were very few laptops in attendance. At one track session only two people other than me had one open. Of course, there was a lack of power available, but…
- I sometimes feel bad when I’m teaching and I say ‘the application blew up’, but I heard that expression a lot, so I resolve not to feel guilty about using it anymore.
Opening Keynote – Craftsmanship and Ethics
- Robert Martin
- Software development is a craft; not yet a profession — but we’re getting close
- Disciplines that will make us professional (he talked about each)
- Short Iteration
- Don’t wait for definition
- Abstract away volatility
- Separate the parts of the system that change from those that don’t change
- Commission > Omission
- Do something rather than do nothing
- Even if you do something the business doesn’t want, you have learned something important
- Decouple from others (mocks, simulators)
- Never be Blocked
- Avoid turgid viscous architectures
- Incremental Improvements
- Boy Scout rule: every time you check a module in, it is in a slightly better state
- No grand redesigns
- Progressive Widening
- Add the feature through the stack, not ‘the entire database’ vs ‘the entire presentation layer’
- Progressive Deepening
- You do not have to follow the architecture when trying to get a test to pass; you can fit it back into the architecture later
- Don’t write bad code
- Why does something that slows you down make you go fast?
- Go Fast. Go Well.
- Clean Code
- TDD
- No production code until you write a failing unit test
- You are not allowed to write more of a unit test than is sufficient to fail
- You are not allowed to write more production code than is needed to get the failing test to pass (a minimal sketch of this loop follows at the end of this section’s notes)
- 30s – 1m cycle around the loop
- Great TDD methodology metaphor is a traffic circle
- How much debugging does a TDD developer do? If you introduced a bug, it was only a minute ago, so there is no context shift
- QA Should find nothing
- 100% Code Coverage
- Aim for 100% coverage
- Well, not really 100…
- Avoid debugging
- Manual test scripts are immoral
- Any tests that could be scriptable should be
- Definition of Done
- Test through the right interface
- Decouple the business rules tests from the GUI
- That way only one set of tests breaks, not a massive failure (see the sketch at the end of this section’s notes)
- Tests are written by humans; humans have intent
- And a couple others he didn’t have time for
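Those three rules are easier to picture as a concrete loop, so here is a minimal sketch in Python using the standard unittest module. The leap-year example and every name in it are my own illustration, not something shown in the talk; in a real session each test would be added one at a time, with the production code growing just enough on each 30-second-to-one-minute pass to get back to green before refactoring.

```python
import unittest


# Rule 2: write no more of a unit test than is needed to see it fail.
class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four_is_leap(self):
        self.assertTrue(is_leap_year(2004))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))


# Rules 1 and 3: write only enough production code to turn the
# currently failing test green, then refactor and go around again.
def is_leap_year(year):
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0


if __name__ == "__main__":
    unittest.main()
```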
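And for ‘test through the right interface’, a sketch of what decoupling the business rule tests from the GUI can look like, again with invented names: the rule is a plain function the tests hit directly, and the presentation layer only formats, so a GUI change breaks at most the few tests written against it.

```python
# Business rule: plain code with no GUI dependency, tested directly.
def order_total(quantity, unit_price):
    """Orders of 10 or more units get a 5% discount."""
    total = quantity * unit_price
    return round(total * 0.95, 2) if quantity >= 10 else round(total, 2)


# Thin presentation layer: formatting only, delegating to the rule.
def render_order_total(quantity, unit_price):
    return f"Total: ${order_total(quantity, unit_price):.2f}"


# These tests never touch the presentation layer, so reworking the GUI
# cannot cause a massive failure in the business rule suite.
def test_discount_applies_at_ten_units():
    assert order_total(10, 1.00) == 9.50


def test_no_discount_below_ten_units():
    assert order_total(9, 1.00) == 9.00
```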
People, Monkeys and Models
- Ben Simo
- Had a great sign saying: Be Careful – This machine has no brain of its own
- Should data-driven testing really be called data-reading testing, as it is reading data in from an external source? (A sketch of the usual pattern follows at the end of this section’s notes.)
- The woodpeckers in Weinberg’s Second Law are there because they pound on stuff. Duh. But I only just got it.
- Trained Monkeys…
- Randomly generate input based on displayed input options
- Monitor for major malfunctions
- If you log everything, you get too much data
- If you log only the error, you don’t get what led up to it (the monkey sketch at the end of this section’s notes keeps just a window of recent actions for this reason)
- Automation’s usefulness is often only a short-term benefit
- Model based testing
- Takes automation out of just execution into the design as well
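On the ‘data reading testing’ quip, the usual pattern looks something like this pytest sketch: the test logic stays fixed and the cases are read in from an external file. The file name, its columns, and the stand-in attempt_login are all assumptions of mine.

```python
import csv

import pytest


def attempt_login(username, password):
    """Stand-in for the system under test (illustration only)."""
    return username == "admin" and password == "secret"


def load_cases(path="login_cases.csv"):
    """Read inputs and expected results from an external data file,
    assumed to have the columns username, password, expected."""
    with open(path, newline="") as handle:
        return [(row["username"], row["password"], row["expected"] == "ok")
                for row in csv.DictReader(handle)]


@pytest.mark.parametrize("username,password,expected", load_cases())
def test_login(username, password, expected):
    assert attempt_login(username, password) == expected
```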
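And here is roughly what the trained monkey loop looks like, with a bounded action log so the lead-up to a failure is captured without recording everything. The app interface (available_actions() and perform()) is an assumption of mine; in model-based testing the random choice would instead be driven by a model of legal states and transitions, which is how the automation reaches back into test design.

```python
import random
from collections import deque


def monkey_test(app, iterations=10_000, window=50, seed=None):
    """Fire randomly chosen inputs at `app` and, on a major malfunction,
    report only the recent actions that led up to it."""
    rng = random.Random(seed)
    recent = deque(maxlen=window)          # bounded lead-up, not a full log
    for _ in range(iterations):
        action = rng.choice(app.available_actions())
        recent.append(action)
        try:
            app.perform(action)
        except Exception as crash:         # the major malfunction
            return {"crash": repr(crash), "lead_up": list(recent)}
    return None                            # survived the monkey
```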
How to build a world-class QA team
- Iris Trout
- The role of managers / leads is to run defense for your team. Gotta love being right; this has been my philosophy for a number of years and is part of my QA 101 course
- In large organizations, a Quality Center of Excellence is a good idea. Another thing I got right. I had hoped to help lead HP down this route before I left. Not sure how they are doing towards that…
- When joining an organization, phase in changes to QA. QA and the general commitment to quality is a long term arrangement.
- Hire the right people for the job. Automators for automation for instance.
Avoid the Unexpected: Identifying Project Risks
- Louise Tamres
- The goal is of course to try not to be surprised during the project
- Risk has 3 audiences
- Project managers who care about risk mitigation
- Developers who need to think of plan b
- Testers who need to know where to focus their testing
- Risk is the ‘heartburn factor’
- Ranking is based on rational, non-arbitrary criteria. This makes magic numbers hard. Not impossible, but hard.
- When prioritizing, ask yourself:
- Is test x more important than test y? Why?
- What is important to the customer? And how do we know that?
- What must be demonstrated to a customer (new or existing)? And when?
- Does this risk affect whether we can sell it?
- Ask the developers what worries them about a feature. And since they will lie, search the code for the notes they left for themselves (a sketch of that search follows at the end of this section’s notes).
- An important thing about risks is that their relative importance must have consensus among the stakeholders. Not the people in the risk meeting, but the stakeholders. Those can be two very separate groups of people.
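A quick sketch of that code search, walking a source tree for the warnings developers leave themselves; the marker list and file extensions are simply my own choices.

```python
import os
import re

WORRY_MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b")


def find_worry_notes(root="."):
    """Report lines where developers left themselves warnings;
    good candidates for the risk discussion."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".py", ".c", ".java", ".js")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as handle:
                for lineno, line in enumerate(handle, start=1):
                    if WORRY_MARKERS.search(line):
                        hits.append((path, lineno, line.strip()))
    return hits


if __name__ == "__main__":
    for path, lineno, text in find_worry_notes():
        print(f"{path}:{lineno}: {text}")
```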
Lessons Learned from Performance Testing
- Ben Simo (again)
- Don’t script too much too soon – applications change
- Bad assumptions waste time and effort – so ask more questions
- Get to know the people in your ‘neighborhood’
- Know your test environment
- Data randomization might not be good – as it can make result comparison and investigation (more) difficult
- Different processes have different impacts – while 80% of the usage is often in 20% of the processes, 80% of the performance issues might not be in that same 20%
- Modularize your scripts – think about load, not user workflow (if possible)
- Think about code error detection and handling
- It is likely a software problem – so throwing hardware at the problem is a hack at best
- Result summaries can mislead
- Summaries get summarized
- A positively skewed distribution can produce acceptable summary numbers while really being out of whack with the desired profile
- So look at the distribution to know which number is the one you care about (a worked example follows at the end of this section’s notes)
- Ask the developers to add transaction counters so you get a performance history of the app as testers do their regular testing (a sketch of one follows after these notes)
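A small worked example of the summary point, using invented response times: the mean can sit comfortably under a two-second goal while a tenth of the requests take six seconds, which is exactly the kind of skew a single summary number hides.

```python
import statistics

# Invented response times (seconds): most requests are fast, a long tail is not.
times = [0.4] * 80 + [0.6] * 10 + [6.0] * 10

mean = statistics.mean(times)                 # ~0.98s: looks fine against a 2s goal
p90 = sorted(times)[int(len(times) * 0.90)]   # 6.0s: one in ten users waits 6 seconds
print(f"mean={mean:.2f}s  90th percentile={p90:.1f}s")
```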
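And a sketch of what those developer-added transaction counters might be: a small timing hook that accumulates counts and durations while testers do their normal work, so a performance history builds up as a side effect. The decorator and all the names are my own illustration.

```python
import time
from collections import defaultdict

_counters = defaultdict(lambda: {"count": 0, "total_seconds": 0.0})


def timed_transaction(name):
    """Record how often a named transaction runs and how long it takes."""
    def wrap(func):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                stats = _counters[name]
                stats["count"] += 1
                stats["total_seconds"] += time.perf_counter() - start
        return wrapper
    return wrap


@timed_transaction("save_order")
def save_order(order):
    ...  # the real work would happen here
```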
I then floated in and out of the afternoon sessions and didn’t really take any notes, other than HDD (Hope Driven Development), which is what came before TDD. I did, however, attend the closing keynote.
Closing Keynote – Beautiful Software
- Patrick Foley
- Started with a picture of a waterfall, which ‘could be beautiful’ — to a round of chuckles.
- What makes software beautiful?
- It has to work
- It has to look good
- It has to have a great user experience
- Software is ultimately about accomplishing goals
- We’ve managed to get testers (more or less) established as first-class citizens in the development process. The next step is to get the designers into the mix as well.
- Software is hard. Beautiful software is harder
- Steps to Beauty – Developers
- Separate the UI from the ‘purely functional code’, which you need to do in order to properly get the designers involved (see the sketch at the end of these notes)
- Consider the user experience in the overall experience
- Treat the user experience as a top-line requirement
- Get help (from the professionals)
- Steps to Beauty – Designers
- Treat designs as actual software assets in source control
- Much like developers have to eat their own dogfood, designers have to learn to eat their own champagne
- Steps to Beauty – Together
- Get the right people physically together. Again, we’ve done this with development and test; why not design too?
- Consider paired design / development
- Solve the tools problem for your environment
- Focus on a minimal, ‘skinnable’ UI, which is a good way to align the assets between developers and designers
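To make the ‘separate the UI from the purely functional code’ and ‘skinnable UI’ points concrete, here is a tiny sketch with invented names: the core logic knows nothing about presentation, and the skin is the asset a designer can change (and check into source control) without ever touching that core.

```python
# Purely functional core: no knowledge of how it will be presented.
def temperature_label(celsius):
    if celsius < 0:
        return "freezing"
    if celsius < 25:
        return "comfortable"
    return "hot"


# Skinnable presentation: designers change this mapping (or the real
# markup/stylesheet asset it stands in for) without touching the core.
SKIN = {"freezing": "#4aa3ff", "comfortable": "#6fbf73", "hot": "#e05d44"}


def render(celsius, skin=SKIN):
    label = temperature_label(celsius)
    return f'<span style="color: {skin[label]}">{label}</span>'
```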