On May 18, 2009, at 3:02 PM, till wrote:
> No, I totally get your concern.
OK, good, I'm not crazy.
> We talked about that part and we do plan to evaluate plugins before we host them.
Peer review.
Not that this is a procedure I recommend, but as an example, Fedora
requires any package proposed for the repository to pass a review by
another developer. This review is not a code audit; it makes sure the
packaging is consistent with other packages, that the package doesn't
step on the toes of another package, etc.
I keep thinking about how Firefox add-ons are handled.
Each add-on has its code vetted, its installation checked, and its
functionality tested, and Firefox itself has checks built into the
add-on installation process.
I'm not saying these are the specific procedures the RoundCube project
should follow. I am simply pointing to a framework for how the issue
was handled elsewhere.
> We don't want to end up with 10000 plugins where only a handful work and two-thirds of them pose a risk to your system.
Some plug-ins might be so specialized that it could be tough to find
even one other person to test them.
Maybe some type of grading system would help; see the sketch after the
list below.
Green: plug-in implemented by RC developers
Blue: plug-in that has been completely vetted, including a code audit
and installation and functionality tests
Orange: plug-in known to work by community members, but of otherwise
unknown quality
Red: plug-in that is experimental and completely untested
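To make that concrete, here is a minimal sketch (Python, purely for
illustration; the TrustGrade and Plugin names are made up, not existing
RoundCube code) of how such grades could hang off the plugin metadata
in the repository:

from dataclasses import dataclass
from enum import Enum

class TrustGrade(Enum):
    GREEN = "implemented by RC developers"
    BLUE = "fully vetted: code audit, installation, functionality tests"
    ORANGE = "known to work by community members, quality otherwise unknown"
    RED = "experimental and completely untested"

@dataclass
class Plugin:
    name: str
    version: str
    grade: TrustGrade = TrustGrade.RED  # new submissions start untested

def listing_line(plugin: Plugin) -> str:
    """One-line repository listing with the grade shown as a badge."""
    return "[%s] %s %s" % (plugin.grade.name, plugin.name, plugin.version)

print(listing_line(Plugin("archive", "1.0")))  # -> [RED] archive 1.0

The point is just that the grade becomes a first-class field, so the
repository listing can show it and users can filter on it.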
> I don't know yet how much we can do automatically, etc.
Yes, having humans look at everything is a time bottleneck, and those
humans can potentially make mistakes.
> I haven't looked into it. And most 'scanners' I have tried provided mediocre results.
Things like rpmlint can produce spurious output, but at least something
like that can take the mundane parts out of checking and reviewing; see
the sketch below.
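As a rough idea of what that automated prescreening could look like (a
sketch in Python; the required-files convention and the flagged PHP
functions are my assumptions, and spurious hits are expected, just as
with rpmlint):

import re
from pathlib import Path

REQUIRED_FILES = ("README", "LICENSE")  # assumed packaging convention
SUSPICIOUS = re.compile(r"\b(eval|exec|system|shell_exec|passthru)\s*\(")

def prescreen(plugin_dir):
    """Return a list of warnings; empty means 'nothing obvious', not 'safe'."""
    root = Path(plugin_dir)
    if not root.is_dir():
        return ["no such directory: %s" % plugin_dir]
    warnings = []
    for name in REQUIRED_FILES:
        if not list(root.glob(name + "*")):
            warnings.append("missing %s" % name)
    for src in root.rglob("*.php"):
        text = src.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if SUSPICIOUS.search(line):
                warnings.append("%s:%d: suspicious call: %s"
                                % (src.name, lineno, line.strip()))
    return warnings

for w in prescreen("plugins/example"):
    print("WARN:", w)

Nothing here replaces the human review; it just trims the mundane parts
of the checklist before a person looks at the submission.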