Actually, I'm generally not available for consulting, because I'm working hard in a tiny little company, and it's even fun. However, I wouldn't want anyone who's come this far to go away unhappy, so I will include some quick tips:
The US Patent and Trademark Office has advanced and basic search pages. I am fond of the advanced page. Their search covers only the front matter and the abstract (not the claims, and not most of the text), but if you are good with the jargon in your field, you can probably do a pretty good job here.
IBM, bless their big blue heart, provides both a search engine and actual patent retrieval. They also provide access to some European patent information, but I don't know exactly what is there and what is missing.
When browsing patents, try the following tricks for finding what you are after:
The second rule is that if it isn't tested, it almost certainly does not work. Whenever possible, write test cases for even the most trivial of features. Don't forget to test the "wrong input" cases, unless graceless and/or nonintuitive failure is included in the specification.
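A minimal sketch of this rule, using a hypothetical parse_port function invented for illustration; the point is that the trivial behavior and the "wrong input" cases both get tests:

```python
# parse_port is a made-up example function; the tests are the point.

def parse_port(text):
    value = int(text)            # raises ValueError on junk like "abc"
    if not 0 < value < 65536:
        raise ValueError("port out of range: %d" % value)
    return value

def fails_gracefully(arg):
    try:
        parse_port(arg)
    except ValueError:
        return True              # rejected bad input, as specified
    return False                 # accepted bad input: a bug

# The trivial feature, tested anyway:
assert parse_port("8080") == 8080
# The "wrong input" cases:
assert fails_gracefully("not-a-port")
assert fails_gracefully("")          # int("") raises ValueError
assert fails_gracefully("70000")     # out of range
```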
The third rule is that any cause-of-bug hypothesis should be tested. Reduce the bug to a small test case if at all possible, so that cause and effect can perhaps even be observed directly, and so that the test case can be used to verify that the bug has actually been fixed.
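One way this plays out in practice, sketched with a made-up average function: suppose the original version crashed on an empty list. The reduced reproduction becomes a regression test that outlives the fix:

```python
# Hypothetical bug: the original average() divided by zero on [].
# The smallest input that triggers it becomes a permanent test.

def average(xs):
    if not xs:                   # the fix for the reported bug
        return 0.0
    return sum(xs) / len(xs)

def test_average_empty_list():
    # Minimal reproduction of the original bug report; must keep passing.
    assert average([]) == 0.0

def test_average_normal():
    assert average([2, 4, 6]) == 4.0

test_average_empty_list()
test_average_normal()
```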
Ideally, find some easy-to-use (if any exist, and do not believe advertising claims) tool that will allow you to collect tests, run tests, and cross-reference tests to bug reports and the changes in the code that fix them. Better still would be integration with code coverage tools so that you know which tests to re-run first when code is changed.
Consider, when designing the interface, that if there are 10 independent options controlling system behavior, then you have over 1000 different ways you might run each individual test (with simple on/off options, that is 2^10 = 1024 combinations). Fewer knobs is better, because it reduces the number of inputs you must test. Designing the system so that the common case requires no knob setting at all is good, because that means that in the common case, people won't get curious and dream up new and unexpected ways to fiddle the knobs. (Despite my exhortation to use the documentation to drive the interface design, in fact most people will never read it anyhow. This doesn't matter; designing the system so that its documentation is short and simple yields a better-designed system in most cases.)
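The arithmetic behind that test-matrix explosion can be checked directly; here with ten hypothetical on/off knobs:

```python
# Ten independent boolean options multiply into 2**10 configurations,
# each of which is, in principle, a distinct way to run every test.
import itertools

options = ["opt%d" % i for i in range(10)]   # ten hypothetical knobs
configs = list(itertools.product([False, True], repeat=len(options)))
assert len(configs) == 2 ** 10               # 1024 ways per test
```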
The point of knowing the problem size is simply that tests have a habit of being small, and actual inputs have a habit of being sometimes large. Think about how your system might respond to a gigantic input. You might even create such an input as a test.
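Creating a gigantic input needn't be hard; a sketch, with a made-up line-counting routine standing in for your system:

```python
# Generate a large input on the spot and see whether the code survives it.
import io

def count_lines(stream):
    # Stand-in for whatever your system actually does with its input.
    return sum(1 for _ in stream)

big_input = io.StringIO("x\n" * 1_000_000)   # a million-line input
assert count_lines(big_input) == 1_000_000
```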
Measurement is important because it is difficult to figure out where the time goes simply by reading the code.
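A quick way to get such a measurement in Python is the standard-library profiler; slow_concat here is a made-up hot spot for illustration:

```python
# Profile rather than guess: the report attributes time to functions
# by measurement, not by reading the code.
import cProfile
import io
import pstats

def slow_concat(n):
    s = ""
    for i in range(n):
        s += str(i)              # repeated string building: a likely hot spot
    return s

prof = cProfile.Profile()
prof.enable()
slow_concat(20000)
prof.disable()

out = io.StringIO()
pstats.Stats(prof, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())            # top five functions by cumulative time
```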
Complexity analysis matters because that is the exception to the measurement rule; if you use bad algorithms fed inputs of unbounded size, eventually, that is where the time will go, and you can predict this ahead of time. A little algorithmic foresight will give you robust performance on unexpectedly large inputs. On the other hand, if you know (would bet thousands of dollars of your own money on, that is) that the inputs will not be large, it may be appropriate to use a worse algorithm if it is simpler, or if it is not enough worse to matter.
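A toy contrast between the two choices, using duplicate detection as the job (the function names are illustrative): the quadratic version is simpler and fine if the input is certain to stay small; the sorting version costs a little more thought but keeps working when the input grows:

```python
def has_dup_quadratic(xs):
    # O(n^2): simple, acceptable only for inputs known to be small.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_dup_sorted(xs):
    # O(n log n): sort, then scan adjacent pairs; robust on large inputs.
    ys = sorted(xs)
    return any(a == b for a, b in zip(ys, ys[1:]))

assert has_dup_quadratic([1, 2, 3, 2]) and has_dup_sorted([1, 2, 3, 2])
assert not has_dup_quadratic([1, 2, 3]) and not has_dup_sorted([1, 2, 3])
```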