Thanks. I totally understand that having more eyes look at something is better. When I ran the safety committee at work I loved having office people inspect the production floor. Having fresh eyes look at something and say "why are you doing this?" can be great. But isn't it counterintuitive to have security software be open source? Isn't that like showing the bad guys the plans to the bank's security system? Have any of you looked at the source code of the password manager that you use?
When a big-name organisation wants to employ a particular security solution, they often want to see the source code. For example, the US government has been granted access to review the code in Windows a fair few times over the years. Similarly, if a major client wants to store something extremely valuable with a particular bank, they will want to review the bank's security. If the bad guys want to look at closed-source code, they will find a way. There are always poorly-paid employees to exploit, for example; IIRC Western Digital is currently reeling from a major system compromise with tonnes of data stolen. But having access to source code does not mean a product is inherently less secure; if it did, OSS projects would be constantly reeling from major compromises.
The problem with closed source in this respect is that company X will claim their product is bulletproof, people will therefore assume the product is decent because they haven't heard anything to the contrary, and the black hats will be laughing their asses off while using what they know for maximum profit (which may end up coming from your organisation's data, along with that of a multitude of others). With open source, chances are everyone will already know whether that software has been reasonably well designed.
Flaws are found in software all the time. If a flaw is found in one OSS project, there's a decent chance of a conversation about the attack technique involved that other projects should be aware of. A flaw in a shared OSS library might mean that a tonne of other projects get patched automatically as soon as the shared library's issue has been fixed. With closed source, if a new attack technique is found, the details may or may not be published; one company whose software is vulnerable might keep it to itself while black hats are doing drive-bys on similar software. The reaction towards positive change is therefore far slower.
There's also that whole stupid business where some companies attack the messenger who reports that their software has bugs, because closed source encourages secrecy, and the CEO can't keep beating the drum that their software is amazing when flaws are exposed that make it look like a toddler made a plasticine key to open the lock, figuratively speaking.