If you're considering adopting AGPL and the reason is to prevent commercial abuse of your work:
Please consider adding a non-commercial use exemption, for charities and academic research. These organisations can't afford the cost of open-sourcing their entire project.
I never understand this. I get not wanting to build a community around a project, handling contributions, etc. But why not just dump the source code somewhere?
Dumping the code somewhere is next to useless. NASA open-sources a ton of code (https://www.github.com/nasa), but the vast majority of it gets open-sourced at the end of a project, and there's no money set aside for maintenance, so it's mostly abandoned. I have one such project whose maintenance I keep up on my own time, but if I ever leave NASA I won't be able to do even that.
Because it's a huge (perceived) risk for (often) little gain. These projects (I'm especially familiar with research) aren't known for code quality or for following best practices regarding security, etc. So you open yourself up to shaming and casual hacking for some unquantifiable benefit of open-source contributions.
The mechanics of putting a tarball somewhere on the Internet are simple and cheap, but that action also directly and indirectly greatly increases the potential for liability. This effectively requires the organization to create additional management and processes to mitigate this increased potential for liability. It is a headache many organizations want to avoid or can't afford.
Yes, "dumping source code" is simple and cheap. Managing the implications of doing so is not. I know of many cases where companies backed away from open sourcing software due to the overhead it would entail, even when they could afford it in principle.
Open sourcing creates multiple classes of risk outside the scope of the license which any properly run company must manage.
As a couple of elementary examples, it greatly increases your exposure to claims of patent and copyright infringement based on the actions of your employees, both intentional and inadvertent. It significantly increases the risk that the company's trade secrets and other non-public IP are accidentally disclosed to the public. You must ensure that open-sourced code does not conflict with contractual agreements with other parties. And that is after you get every outside stakeholder in the business's strategic objectives to sign off on it, which isn't always easy.
When an organization decides to open source a bit of code, they have to run a formal diligence process to ensure there is minimal risk of any of the above, and then put a process in place to help ensure that going forward. I've seen this process at multiple companies; it is not lightweight, and it involves lots of lawyers and documentation that would never exist otherwise. Many companies decide it isn't worth the money or the distraction.
Because it’s effort. People will want you to make enhancements and maybe expect changes. It may link to proprietary libraries. Open source is not really just about dumping code on GitHub.
> The only people entitled to say how open source 'ought' to work are people who run projects, and the scope of their entitlement extends only to their own projects.
To be clear, if someone wants to just do a code dump under an open source license, ignore it, and send any communications about the code to /dev/null that’s their choice. Probably not a very useful one but a valid one.
They could be using third-party code or services that are closed source or under a proprietary, non-AGPL-compatible license, and thus they can't open-source those parts under the AGPL, as the AGPL demands, even if they wanted to.
> In either case, being open source increases security risk.
This is blatantly false. Any claim that closed source provides any form of security is entirely a claim of security by obscurity.
If open sourcing your code presents any risk to sensitive personal information, then you are already grossly mishandling that information. Whether or not you open source your code at that point doesn't matter; the harm is already done.
> If open sourcing your code presents any risk to sensitive personal information, then that means that you are already grossly mishandling this information
This is also clearly false.
For example, take this scenario:
- You use web framework Omega, but minimise indicators of this (suppress HTTP headers, etc).
- At 2am, a critical security vulnerability is discovered for Omega and a patch is released shortly after.
- Malicious actors scrape GitHub to find sites that use Omega and try to compromise them.
- At 9am, you apply this patch.
If your project is open source, there is a 7-hour window during which you are clearly and publicly broadcasting that you are vulnerable.
If your project is not, there is the same 7-hour window during which you are vulnerable, but this is not easily apparent to attackers.
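As a concrete sketch of the "minimise indicators" step in the scenario above: hiding a framework usually means stripping fingerprint headers at the application or proxy layer. The middleware below is a hypothetical, stdlib-only illustration; "Omega" and the header names are assumptions for the example, not a real framework's API.

```python
# Hypothetical sketch: a WSGI middleware that strips response headers
# commonly used to fingerprint the underlying stack. "X-Omega-Version"
# is an invented header for the imaginary "Omega" framework.
FINGERPRINT_HEADERS = {"server", "x-powered-by", "x-omega-version"}

def hide_fingerprints(app):
    """Wrap a WSGI app so identifying headers never reach the client."""
    def wrapped(environ, start_response):
        def filtered_start(status, headers, exc_info=None):
            # Drop any header whose name marks the framework or server.
            headers = [(name, value) for name, value in headers
                       if name.lower() not in FINGERPRINT_HEADERS]
            return start_response(status, headers, exc_info)
        return app(environ, filtered_start)
    return wrapped
```

Note that this only hides the banner; as the replies below point out, it does nothing about the vulnerability itself.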
It doesn't work that way. Attackers don't check if you are using "Omega", they check if you are vulnerable. There is simply no difference if you are hiding framework indicators here.
Well - unless there is a targeted attack _against you_. In this case the attacker will search for known vulnerabilities in Omega and maybe even try to come up with some new ones. Having source helps the attackers here, but then again, it has helped researchers fix the vulnerabilities too. So it's a mixed blessing.
Attackers either flood you with every attack under the sun, or tear your site apart until they know exactly how it works.
Imagining that you can hide the function of your site is again security by obscurity.
The key idea here (I forget the name of the law, but others mentioned it in the thread; it's essentially Kerckhoffs's principle) is that regardless of what you do, the adversary will end up with a complete understanding of how your system works.
Therefore, any security based entirely on the adversary not learning about implementation details is entirely defective.
Furthermore, a vulnerability exists for days, months, or even years before it is fixed; it takes time to fix and release a patch, and it takes more time for you to discover the advisory and deploy the update.
You were not vulnerable for 7 hours. You were vulnerable for weeks, months or years.
The source code will indicate where/how the data is input, processed and stored. It might help an attacker compromise the application in any number of ways.
There's non-trivial risk there, enough to make it an ethical concern.
So, in order to use AGPL software, you have to open source your entire source code, which means you have to go through a long and arduous risk assessment which will likely decide you can't.
You only have to open source the AGPL'ed code if it's providing a networked service.
Many academics and charities don't provide services, so it doesn't affect them.
When you write "enough to make it an ethical concern", is that a hypothetical concern of your own making?
Many academics must go through institutional review boards or other ethics committees.
Many academics also develop and distribute free software for analyzing sensitive data where IRB oversight is required.
If what you are saying is a real concern, then I expect it would have been brought up long ago.
Can you point to examples?
I believe your argument is equivalent to the claim that Linux-based free OSes cannot be used for secure platforms because the source code is available, so anyone can potentially break in.
So why is it that many people doing research which requires IRB oversight use Linux-based OSes?
I agree with tokai - you're arguing for security-by-obscurity, and there's no evidence that that increases security.
I think the evidence shows that the ethical concerns you suggest don't actually exist.
I’ve always felt this argument breaks down with smaller scale targets. I’d argue security through obscurity is not security, but there can be safety in obscurity.
There are a massive number of systems that are completely bespoke for small organizations or even individuals, and their user base isn’t going to grow.
What’s more, these systems are extremely liable to rot: the contract developer writes the system and moves on. That means library versions pinned in the repo aren’t going to get updated when new vulnerabilities are found. So now this random one-GitHub-star system is sitting unpatched, out for anyone to see.
Now what might have been a hard-to-find but exploitable issue risks getting a black-hat spotlight shone on it.
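A minimal sketch of why pinned dependencies rot: comparing the pins in a requirements file against the first fixed version from an advisory feed. The advisory data below is invented for illustration; in practice a tool such as pip-audit does this against real vulnerability databases.

```python
# Illustrative sketch: flag pinned dependencies older than the first
# version that fixes a known vulnerability. KNOWN_FIXED is made-up
# advisory data; "omega" is the hypothetical package from the thread.
KNOWN_FIXED = {"omega": (2, 4, 1)}  # package -> first safe version

def parse_pin(line):
    """Parse a 'name==X.Y.Z' requirements pin into (name, version tuple)."""
    name, _, version = line.strip().partition("==")
    return name.lower(), tuple(int(part) for part in version.split("."))

def vulnerable_pins(requirements):
    """Yield (name, pinned, first_fixed) for pins below the fixed version."""
    for line in requirements:
        name, pinned = parse_pin(line)
        first_fixed = KNOWN_FIXED.get(name)
        if first_fixed is not None and pinned < first_fixed:
            yield name, pinned, first_fixed
```

With no one running a check like this (or its real-world equivalents) against the abandoned repo, the pins just sit there as the advisories pile up.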
That’s a pretty good question. And there likely won’t be a definitive answer until there’s a court decision.
I think in these cases one should try to understand what the IPR holder had in mind when they picked AGPL and not for example GPL or Apache. Trying to find loopholes might just get you into trouble in future.
That’s the difference with the AGPL: you have to open source the things that call it too. Putting it in a microservice to work around the license is literally the thing the AGPL was written to prevent.
I’m getting downvoted, but this is the stance of the company behind a popular AGPL library mentioned in this comments page based on emails I received from them after asking the same question.
Absolute malarkey. All academic research should use and produce free software. This used to be the case; that it isn't now is terribly stupid, especially now that distribution costs have gone from "little" to "nothing." It doesn't "cost" anything to throw a tarball on the internet.
Any charity that goes out of its way to produce proprietary software doesn't deserve anyone's money to begin with; they're wasting funds.
The product of government-funded academic research is already primarily going to private, for-profit journals, being effectively stolen away from the taxpayer. Proposing that we should make it easier for academic research to completely ignore the common good is ridiculous.