For "With your threat model in mind, they should identify opportunities to add new test cases," one common reason is that security engineers are shared across a large company, and it may be very expensive for them to learn the different testing frameworks used across many different projects. Also, independent review (without any exposure to developers' conceptions about what should be tested, or why, or how) may be economically justified because the outcomes of security bugs are sometimes much worse than the outcomes of many categories of ordinary bugs. Other reasons may include: the security engineers want to run a test that can't be expressed in your testing framework without a huge change to the framework; they may want to develop their test cases adaptively, such that most of the tests turn out to be useless and the cost of capturing every test under version control would be very high; they may want to run tests from a commercial testing product whose license does not allow bulk copying of the tests into a customer's testing framework; or (if they aren't in-house engineers) their business model is that they won't tell you every test that was run unless there's an associated defect finding.
finnigja · 8h ago
> ... one common reason is that security engineers are shared across a large company and it may be very expensive for them to learn the different testing frameworks used on many different projects
That's where the partnering part of the approach I'm proposing comes into it. The security engineer isn't off there by themselves trying to figure it out, but is working with somebody who's already familiar with the existing code base & testing frameworks.
> also, independent review (without any exposure to developers' conceptions about what should be tested, or why, or how) may be economically justified because outcomes of security bugs are sometimes much worse than outcomes of many categories of ordinary bugs.
Economically justifiable perhaps, but that doesn't necessarily mean we shouldn't explore better ways of achieving similar outcomes.
> Other reasons may include that the security engineers want to run a test that can't be expressed in your testing framework without a huge change to the framework, they may want to develop their test cases adaptively such that most of the tests turn out to be useless and the cost of capturing every test under version control may be very high, they may want to run tests from a commercial testing product for which the license does not allow bulk copying of the tests into a customer's testing framework, or (if they aren't in-house engineers) their business model is that they won't tell you every test that was run unless there's an associated defect finding.
Yeah, this'd be interesting to experiment with. The accepted model of security testing being separate allows this uncoupling of tooling / process, but perhaps the outcomes of a more tightly coupled testing methodology would be better?
I don't think any of these points are blockers, more just factors to consider or trade-offs to balance when exploring alternative, less separate, approaches.