Incompetent Software Hucksters

Consulting services

Actually, I'm generally not available for consulting, because I'm working hard in a tiny little company, and it's even fun. However, I wouldn't want anyone who's come this far to go away unhappy, so I will include some quick tips:

Patent searches

Once upon a time, you had to spend time in a library browsing microfiche, or pay some lawyer hundreds of dollars an hour to do this for you. Nowadays, there's information on-line that will help you do this yourself. Note that this isn't perfect; these searches are limited by what has been made available, but that does include most recent US patents, and some European patent information. In the software world, this is probably enough.

The US Patent and Trademark Office has advanced and basic search pages. I am fond of the advanced page. Their search only covers the front matter and the abstract (not the claims, not most of the text), but if you are good with the jargon in your field, you can probably do a pretty good job here.

IBM, bless their big blue heart, provides both a search engine and actual patent retrieval. They also provide access to some European patent information, but I don't know exactly what is there and what is missing.

When browsing patents, try the following tricks for finding what you are after:

Debugging code

The first rule is that any new bug is almost certainly caused by your recent change to the software. This ought to be obvious, but apparently it is not.

The second rule is that if it isn't tested, it almost certainly does not work. Whenever possible, write test cases for even the most trivial of features. Don't forget to test the "wrong input" cases, unless graceless and/or nonintuitive failure is included in the specification.
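
Here's a rough sketch in Python of what I mean; the parse_port function is just a made-up example, not anybody's real API:

    import unittest

    def parse_port(text):
        """Parse a TCP port number from a string; reject anything out of range."""
        port = int(text)                 # raises ValueError on garbage input
        if not 0 < port < 65536:
            raise ValueError("port out of range: %d" % port)
        return port

    class ParsePortTest(unittest.TestCase):
        def test_trivial_feature(self):
            self.assertEqual(parse_port("8080"), 8080)

        def test_wrong_input(self):
            # The "wrong input" cases: empty string, garbage, out-of-range values.
            for bad in ("", "abc", "-1", "70000"):
                self.assertRaises(ValueError, parse_port, bad)

    if __name__ == "__main__":
        unittest.main()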

The third rule is that any cause-of-bug hypothesis should be tested. Reduce the bug to a small test case if at all possible, so that cause and effect can be observed directly, and so that the same test case can be used to verify that the bug has actually been fixed.
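
For example, a reduced test case might end up looking like this; the average function and its empty-list bug are invented purely for illustration:

    # Reduced test case for an invented bug: average([]) used to crash with
    # ZeroDivisionError instead of returning None.

    def average(values):
        """Return the mean of values, or None for an empty sequence (the fix)."""
        if not values:                   # guard added when the bug was fixed
            return None
        return sum(values) / len(values)

    def test_average_of_empty_list():
        # Failed before the fix, passes after; keep it so the bug stays fixed.
        assert average([]) is None
        assert average([2, 4, 6]) == 4

    test_average_of_empty_list()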

Ideally, find some easy-to-use (if any exist, and do not believe advertising claims) tool that will allow you to collect tests, run tests, and cross-reference tests to bug reports and the changes in the code that fix them. Better still would be integration with code coverage tools so that you know which tests to re-run first when code is changed.
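
I don't know of one tool that does all of this well, but you can approximate the cross-referencing part with something like pytest markers; the bug number below is made up:

    import pytest

    # Tag each regression test with the bug report it came from, so you can
    # re-run just those tests with:  pytest -m bug_1234
    # (Running "coverage run -m pytest" with coverage.py gets you part of the
    # coverage cross-referencing, by hand.)

    @pytest.mark.bug_1234
    def test_trailing_space_is_ignored():
        assert "a b ".split() == ["a", "b"]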

Writing (user) documentation

If the documentation begins to look complex and confusing (try handing it to a friend to read, or read it yourself after a week's vacation), perhaps it simply reflects the interface presented by the product. Think about what most people want to do, and describe that first. If this requires non-default settings for knobs and dials, change the default settings to accommodate the common case gracefully.

Consider, when designing the interface, that if there are 10 independent on/off options controlling system behavior, then there are 2^10 = 1024 different configurations in which you might run each individual test. Fewer knobs are better, because fewer knobs means fewer inputs you must test. Designing the system so that the common case requires no knob setting at all is good, because that means that in the common case, people won't get curious and dream up new and unexpected ways to fiddle with the knobs. (Despite my exhortation to use the documentation to drive the interface design, in fact most people will never read it anyhow. This doesn't matter; designing the system so that its documentation is short and simple yields a better-designed system in most cases.)
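
A sketch of what I mean, in Python; the report generator and its knobs are invented:

    # With 10 independent on/off options there are 2**10 = 1024 configurations
    # to test. Pick defaults so that the common case needs no knobs at all:
    #     generate_report(records)

    def generate_report(records, sort=True, include_totals=True, page_width=80):
        """Invented example: the knobs exist, but the defaults cover the common case."""
        rows = sorted(records) if sort else list(records)
        lines = [str(row)[:page_width] for row in rows]
        if include_totals:
            lines.append("total records: %d" % len(rows))
        return "\n".join(lines)

    print(generate_report(["beta", "alpha"]))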

Performance and tuning

Know your problem size, measure your system's behavior, and use complexity analysis to guide design.

The point of knowing the problem size is simply that tests have a habit of being small, and actual inputs have a habit of sometimes being large. Think about how your system might respond to a gigantic input. You might even create such an input as a test.
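
For example, in Python; process_records is a stand-in for whatever your system actually does:

    import random
    import string
    import time

    def process_records(records):
        """Stand-in for the real system; replace with your own entry point."""
        return sorted(records)

    # A deliberately gigantic input: a million random 20-character strings.
    big_input = ["".join(random.choices(string.ascii_lowercase, k=20))
                 for _ in range(1_000_000)]

    start = time.perf_counter()
    process_records(big_input)
    print("1,000,000 records took %.2f seconds" % (time.perf_counter() - start))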

Measurement is important because it is difficult to figure out where the time goes simply by reading the code.
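
The standard library's profiler is enough to get started; the functions here are placeholders for your own code:

    import cProfile
    import pstats

    def suspected_hot_spot(n):
        return sum(i * i for i in range(n))

    def innocent_bystander(n):
        return n * n

    def work():
        for _ in range(200):
            suspected_hot_spot(10000)
            innocent_bystander(10000)

    # Profile it and print the five most expensive calls; the numbers usually
    # point somewhere other than where you guessed.
    cProfile.run("work()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)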

Complexity analysis matters because it is the exception to the measurement rule: feed a bad algorithm inputs of unbounded size and, eventually, that is where the time will go, and you can predict this ahead of time. A little algorithmic foresight will give you robust performance on unexpectedly large inputs. On the other hand, if you know (would bet thousands of dollars of your own money on it, that is) that the inputs will not be large, it may be appropriate to use a worse algorithm if it is simpler, or if it is not enough worse to matter.
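
The classic Python illustration is membership testing against a list versus a set. The exact timings will vary from machine to machine, but the shape of the curves will not:

    import time

    n = 100000
    seen_list = list(range(n))
    seen_set = set(seen_list)
    queries = range(0, n, 100)

    # Each "in" on a list scans it: O(n) per lookup, quadratic overall.
    start = time.perf_counter()
    hits = sum(1 for q in queries if q in seen_list)
    print("list lookups: %.2f s" % (time.perf_counter() - start))

    # Each "in" on a set is a hash probe: effectively constant per lookup.
    start = time.perf_counter()
    hits = sum(1 for q in queries if q in seen_set)
    print("set lookups:  %.4f s" % (time.perf_counter() - start))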

Tools

Find tools that work, and use them. I wish that I could say that I follow my own advice here, but I don't. After years and years of working with emacs, I find that it is adequate for most of my needs. I can imagine a better tool, but nobody seems to have built it.