The October issue of Software Test & Performance can be downloaded here.

Before I get into the articles, here are some random-ish links I found in it.

  • The folks at Acunetix have created a free edition of their WVS (Web Vulnerability Scanner), which is limited to scanning for XSS, but heck, it’s free and XSS is bad. Download it here.
  • A new conference (or at least new to me) is FutureTest (Feb 26-27, 2008 in NYC), which has a pretty impressive list of keynote speakers, but beyond that no information is available yet. Anyone want to sponsor me going? 🙂

And now for the interesting bits from the articles:

Rex Black – True Performance Cases From Life on the Streets:

  • As the load exceeded limits, new viewers were still allowed to connect, slowing everyone’s video feeds to a standstill
  • You can avoid common goofs in five key ways:
    • Configure performance, load and reliability test environments to resemble production as closely as possible, and know where test and production environments differ
    • Generate loads and transactions that model varied real-world scenarios (a rough sketch follows this list)
    • Test the tests with models and simulations, and vice versa
    • Invest in the right tools, but don’t waste money
    • Start modeling, simulation and testing during design, and continue throughout the life cycle
  • For systems that deal with large data sets, the data is a key component of the test environment
  • The first lesson when considering tool purchases is to avoid the assumption that you’ll need to buy any particular tool:
    • First, create a high-level design of your performance, load and reliability test system, including test cases, test data and automation strategy
    • Second, identify the specific tool requirements and constraints for your test system
    • Next, assess the tool options to create a short-list of tools
    • Then hold a set of competitive demos by the various vendors and with the open source tools
    • Finally, do a pilot project with the demonstration winner; only after assessing the results of the pilot should you make a large, long-term investment in any particular tool
  • Slow, brittle and unreliable system architectures usually can’t be patched into perfection or even into acceptability
  • Don’t wait until the very end, but integrate performance, load and reliability testing throughout unit, subsystem, integration and system test
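
Black’s “model varied real-world scenarios” advice is the one I see skipped most often, so here’s a rough sketch of what a weighted scenario mix can look like in plain Python. To be clear, this is my illustration, not Black’s: the scenario names, weights and think times are invented, and the sleep stands in for the real transaction (HTTP call, database query, whatever your system actually does).

```python
import random
import threading
import time

# Hypothetical traffic mix: (scenario name, share of real-world load).
# In practice these weights should come from production logs, not guesses.
SCENARIOS = [("browse_catalog", 0.70), ("search", 0.20), ("checkout", 0.10)]

def run_scenario(name):
    # Placeholder for the real transaction (HTTP request, DB query, etc.)
    time.sleep(random.uniform(0.05, 0.2))
    print(f"completed {name}")

def virtual_user(iterations):
    names = [s[0] for s in SCENARIOS]
    weights = [s[1] for s in SCENARIOS]
    for _ in range(iterations):
        # Weighted choice keeps the generated load close to the modeled mix
        run_scenario(random.choices(names, weights=weights)[0])
        time.sleep(random.expovariate(2.0))  # think time between actions

threads = [threading.Thread(target=virtual_user, args=(5,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```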

Jim Falgout – The Prince of the Pipeline:

  • A nice overview of engineering strategies for making use of multicore processors (a bare-bones illustration of the pipeline pattern follows)
  • Describes how, and what, to monitor when applying the strategy
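
The article’s own framework isn’t reproduced here, but the underlying pipeline pattern is simple enough to sketch: stages connected by queues, each stage on its own thread, so several records are in flight at once. The stages below are invented; for CPU-bound work in CPython you’d reach for processes or a real dataflow engine rather than threads.

```python
import queue
import threading

SENTINEL = object()  # signals end-of-stream between stages

def stage(in_q, out_q, fn):
    # Each stage pulls from its input queue, applies its function,
    # and pushes downstream until the sentinel arrives.
    while True:
        item = in_q.get()
        if item is SENTINEL:
            out_q.put(SENTINEL)
            break
        out_q.put(fn(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(q1, q2, lambda x: x * 2)).start()  # transform
threading.Thread(target=stage, args=(q2, q3, lambda x: x + 1)).start()  # enrich

for i in range(5):
    q1.put(i)
q1.put(SENTINEL)

while True:
    result = q3.get()
    if result is SENTINEL:
        break
    print(result)
```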

BJ Rollison – Making Child’s Play of Equivalence Class Partitioning:

  • The overall effectiveness of any particular technique is limited by the tester’s knowledge of the system and their cognitive skill in applying a given technique in the appropriate situation
  • By randomly selecting string length and valid characters, we can introduce variability into subsequent iterations of the test case and increase the probability of exposing unusual defects that might not otherwise be exposed using typical or real-world valid class data (see the sketch below)
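
That quote is worth making concrete. A minimal sketch, assuming a name field whose valid class is letters plus a few punctuation characters (the length bounds and character set are my invention, not Rollison’s):

```python
import random
import string

def random_valid_name(min_len=1, max_len=64):
    # Assumed valid class for the field: letters, apostrophe, hyphen, space.
    valid_chars = string.ascii_letters + "'- "
    length = random.randint(min_len, max_len)
    return "".join(random.choice(valid_chars) for _ in range(length))

random.seed(42)  # fix (or at least log) the seed so failures can be reproduced
for _ in range(3):
    print(repr(random_valid_name()))
```

The seed line matters: randomized test data is only useful if a failing input can be reproduced later.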

Torsten Zelger – Are Developers Serving You With Testing Hooks:

  • Unique object IDs (put to work in the sketch after this list)
  • Dynamic lists – need to have some unique ID
  • Consistency – in how unique IDs are named and assigned
  • Frequent changes – to the unique ID
  • Special characters – in the unique ID might foil some test tools
  • Language of the app drives tool choice
  • Custom Controls – also affects tool choice
  • Obfuscation – test before things are obfuscated
  • Prototype – your test system
  • Hooks – to the database, special APIs, log files and stubs
  • Tool dependency – within the application
  • Number of builds – automation takes time, time is scarce — should you be automating?
  • AUT maturity – if it isn’t ready to automate, don’t automate it
  • Tool selection – don’t limit yourself artificially
  • Make the case for all of the above early
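
To make the unique-ID points concrete, here’s a toy sketch of what consistent IDs buy the test: lookups that survive layout changes, re-skins and translation. The area_type_name convention and the widget names are invented for illustration; real tools (Selenium’s By.ID lookup, for instance) do the same thing against real controls.

```python
# Toy widget tree keyed by a consistent area_type_name convention.
WIDGETS = {
    "login_txt_username": {"type": "textbox", "value": ""},
    "login_txt_password": {"type": "textbox", "value": ""},
    "login_btn_submit":   {"type": "button",  "value": "Sign in"},
}

def find_by_id(widget_id):
    # Locating by stable ID, never by position or display text, is what
    # keeps the test working after re-layouts and translations.
    try:
        return WIDGETS[widget_id]
    except KeyError:
        # The "frequent changes" bullet above is exactly this failure mode.
        raise LookupError(f"no control with id {widget_id!r}: renamed between builds?")

print(find_by_id("login_btn_submit"))
```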