To the untrained reader, Bex Huff
and I appear to be on different pages when in reality we are unified in our thinking...
My point was that creating secure software is extremely difficult... even if you educate your developers about the OWASP Top Ten (which ain't all that great anyway)
Bex, if you don't think the OWASP Top Ten is great, you are more than welcome to volunteer to help make it better, unless of course you also enjoy throwing daggers.
and even if you religiously use tools like Ounce Labs or Coverity, you'll always have problems. Those tricks are good checks against developers making brain dead stupid decisions, but they'll never catch the subtler security problems.
Secure coding is only half of the problem, and it is the half where these tools can help by catching brain-dead decisions. The other half is secure design, which no tool can magically fix. Maybe a discussion is in order on how to design ECM platforms with security in mind? I would love for you to lead this conversation...
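To make the coding-vs-design split concrete, here is a small hypothetical Python sketch (the table and function names are mine, not from either post). It shows a string-built query that a scanner like Ounce Labs or Coverity will flag, the parameterized fix those tools push you toward, and a fully "clean" query that still leaks data because the design never asks who owns the record: the kind of subtle flaw no tool will catch.

```python
import sqlite3

# Hypothetical schema purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, owner TEXT, body TEXT)")
conn.execute("INSERT INTO docs VALUES (1, 'alice', 'alice private'), "
             "(2, 'bob', 'bob private')")

def find_docs_unsafe(owner):
    # The "brain dead" class of bug: SQL built by string concatenation.
    # Static analyzers reliably flag this pattern.
    return conn.execute(
        "SELECT body FROM docs WHERE owner = '" + owner + "'").fetchall()

def find_docs_safe(owner):
    # The mechanical fix: a parameterized query.
    return conn.execute(
        "SELECT body FROM docs WHERE owner = ?", (owner,)).fetchall()

def fetch_doc(requesting_user, doc_id):
    # The subtler problem: syntactically clean, fully parameterized,
    # yet it never checks that requesting_user owns doc_id. A scanner
    # doesn't know your authorization model, so this passes analysis.
    return conn.execute(
        "SELECT body FROM docs WHERE id = ?", (doc_id,)).fetchone()

# A classic injection payload returns nothing from the safe version
# but dumps every row from the unsafe one:
print(len(find_docs_unsafe("x' OR '1'='1")))  # 2
print(find_docs_safe("x' OR '1'='1"))         # []

# And the design flaw: bob reads alice's private document.
print(fetch_doc("bob", 1))                    # ('alice private',)
```

The first flaw is a coding problem a tool can find mechanically; the last one is a design problem that only a human who understands the intended authorization model can spot.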
The issue is one of complexity... the vast majority of security holes occur in the interfaces between applications and/or concerns. This doesn't just mean cross-site scripting vulnerabilities on the web interface, nor just the SQL injection attacks on the back end... it also includes any time you connect two code bases together in new and novel ways.
Absolutely 100% correct. Have you ever noodled on the fact that when data flows from one system to another within an SOA but the security model doesn't, you have another attack vector? For example, what if I have access to data in a policy administration system such that I can figure out whether you are insuring an auto your wife doesn't know about, but couldn't do the same in a claims administration system? I bet you can envision scenarios where integrating a BPM engine with an ECM engine makes security weaker. While I know that you aren't a big fan of XACML, it would be great for you to describe an alternative: how do you think security should work in a distributed way?
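A rough sketch of the alternative being asked about, in the spirit of XACML but in plain Python (every name and rule here is hypothetical): a single policy decision point that both the policy administration system and the claims system consult, so the security model travels with the data instead of being reimplemented, inconsistently, in each system.

```python
from typing import NamedTuple

class Request(NamedTuple):
    subject: str        # who is asking
    action: str         # e.g. "read"
    resource: str       # e.g. "policy:123" or "claim:123"
    policy_holder: str  # owner of the underlying insurance record

def decide(req: Request) -> bool:
    # One rule, enforced identically whether the call comes from the
    # policy administration system or the claims system: you may read
    # only records for policies you hold, unless you are an adjuster.
    if req.subject == req.policy_holder:
        return True
    if req.subject.startswith("adjuster:") and req.action == "read":
        return True
    return False

# Both systems route authorization through the same decision point,
# so "policy:123" and "claim:123" can never disagree about who sees
# the underlying record.
print(decide(Request("alice", "read", "policy:123", "alice")))  # True
print(decide(Request("bob", "read", "claim:123", "alice")))     # False
```

The point of the sketch is not the toy rule but the topology: one decision point, many enforcement points, so the wife-doesn't-know-about-the-auto data can't leak out of whichever integrated system happens to have the weakest local ACLs.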
James seems to need some kind of evidence that the code is at least reasonably safe before putting it into production. Fair enough, but his suggestions suck. I can't think of one single certification that I would be personally willing to trust... penetration tests are OK but flawed. Developer certification courses only teach the basics, and are generally useless. Stamps of approval by "security experts" are nice, but as I've mentioned before, I've found problems that these self-proclaimed "experts" missed.
Certifications tend to focus on process and not architecture, so I tend to agree. I am a fan of developer certification but not of developer certification courses, as they teach how to pass an exam rather than the breadth of a given subject area. Security experts aren't superhuman; no single individual can identify 100% of all possible problems in all scenarios. The key is to find and fix, not to fixate on completeness, as that goal is unachievable.
You will never have a "100% secure" system. Accept it. The best you can hope for is something that gets more and more "defensible" as it matures. Accept it. Patches are a necessary evil. Accept it.
I think the discussion is not about whether you would ever have to patch; it is about the frequency of patching. Having to patch once a quarter is more reasonable than patching every week (think Microsoft). Patching should, at some level, become the exception, not the rule. Maybe Bex has solutions for how enterprises should encourage their vendors to reduce the need for patching? Does it require software firms to actually educate their staff on security, or is he expecting something else? For example, if you had Craig Randall's ear, what specific steps would you take to make Documentum more secure?