Hacker News | ran3824692's comments

A "content" tag claims "A full list of dead products killed by Microsoft in the Microsoft Cemetery" but all I see is metadata and a javascript tag. Where is the content?

     <!DOCTYPE html>
     <html lang="en">

     <head>
         <meta charset="UTF-8">
         <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=5">
         <meta http-equiv="X-UA-Compatible" content="ie=edge">
         <meta name="description"
             content="Killed by Microsoft is the Microsoft Graveyard. A full list of dead products killed by Microsoft in the Microsoft Cemetery.">
         <link rel="shortcut icon" href="assets/favicon.png">
         <meta name="theme-color" content="#FAFAFA" />
         <link rel="canonical" href="https://killedbymicrosoft.info" />
         <title>Microsoft Graveyard - Killed by Microsoft</title>
         <meta name="image" content="https://killedbymicrosoft.info/assets/social/card.png">
         <!-- Schema.org for Google -->
         <meta itemprop="name" content="Killed by Microsoft">
         <meta itemprop="description"
             content="Killed by Microsoft is the Microsoft Graveyard. A full list of dead products killed by Microsoft in the Microsoft Cemetery.">
         <meta itemprop="image" content="https://killedbymicrosoft.info/assets/social/card.png">
         <!-- Twitter -->
         <meta name="twitter:card" content="summary_large_image">
         <meta name="twitter:title" content="Killed by Microsoft">
         <meta name="twitter:description"
             content="Killed by Microsoft is the Microsoft Graveyard. A full list of dead products killed by Microsoft in the Microsoft Cemetery.">
         <meta name="twitter:site" content="@killedbygoogle">
         <meta name="twitter:creator" content="@killedbygoogle">
         <meta name="twitter:image:src" content="https://killedbymicrosoft.info/assets/social/card-twitter.png">
         <!-- Open Graph general (Facebook, Pinterest & Google+) -->
         <meta name="og:title" property="og:title" content="Killed by Microsoft">
         <meta name="og:url" property="og:url" content="https://killedbymicrosoft.info">
         <meta name="og:description" property="og:description"
             content="Killed by Microsoft is the open source list of dead Microsoft products, services, and devices. It serves as a tribute and memorial of beloved services and products killed by Microsoft.">
         <meta name="og:image" property="og:image" content="https://killedbymicrosoft.info/assets/social/card.png">
         <meta name="og:site_name" property="og:site_name" content="Killed by Microsoft">
         <meta name="og:type" property="og:type" content="website">
     </head>

     <body>
         <div id="killedbygoogle"></div>
         <script src="main.js"></script>
     </body>

     </html>


No. Spammers will create repos and user profiles and snippets and anything they can with spam in them.


I would imagine authentication would be done through Wikimedia's existing LDAP or MediaWiki solution, and I hope that features that already exist in Phabricator (such as snippets) will be disabled.


invent.kde.org uses the nonfree Google reCAPTCHA, which mostly prevents that. It's not very nice of KDE to make people run a nonfree software blob in their browser that gives up their freedom, gives up their privacy to Google, and trains Google's proprietary machine-learning models.


Where does it use that?


There's been a GitLab bug open for almost 3 years about stopping the reliance on reCAPTCHA: https://gitlab.com/gitlab-org/gitlab-foss/-/issues/45684. Debian, KDE, and GNOME have never wanted to make their users run Google's nonfree JavaScript blob to contribute on their GitLab instances. There's been interest, but GitLab has done very little about it. Edit: other bugs about this can be found here: https://gitlab.com/gitlab-org/gitlab-foss/-/issues/46548


We have a team currently working on improving the detection and mitigation of spam, and we continue to look for ways to improve the security and user experience of our product. Our product includes the Akismet spam filter, which you can read more about in our handbook: https://about.gitlab.com/handbook/support/workflows/managing.... Further, GitLab.com includes the ability to report abuse directly to our Trust & Safety team here: https://about.gitlab.com/handbook/engineering/security/opera...; on self-managed instances, however, the report-abuse feature reports back to the instance admin. We are also currently developing an anti-spam feature intended to further improve spam detection and mitigation. This is set to be enabled on GitLab.com within 3 months.


As mentioned above in the thread, multiple times, maybe a simpler solution to reduce spam is to remove incentives by:

- removing links (rendering them as plain text, forcing users to copy-paste them),
- hiding links from non-registered users (plain text for non-registered users, clickable for registered users),
- blocking links from search engine crawlers (robots.txt / rel=nofollow...).

Maybe these fall into the "for each complex problem there is a simple but wrong solution" category, but it sounds like it's worth a try.
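As a sketch of how the first two ideas might look in an output filter: this is illustrative Python only, not GitLab's actual rendering code, and the URL regex and function name are invented for the example.

```python
import re

# Deliberately crude URL matcher for illustration; a real sanitizer
# should operate on the HTML parse tree, not raw text.
URL_RE = re.compile(r'https?://\S+')

def render_links(text, user_is_registered):
    """Neutralize links for anonymous viewers to remove the SEO incentive."""
    if user_is_registered:
        # Registered users get clickable links, but with rel="nofollow"
        # so search engines give the spammer no credit.
        return URL_RE.sub(
            lambda m: f'<a rel="nofollow" href="{m.group(0)}">{m.group(0)}</a>',
            text)
    # Anonymous users (and crawlers) see plain text only.
    return text
```

The same predicate could also drive the robots.txt / crawler-blocking idea, since both hinge on who is viewing the page.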


(I already replied on a different thread but this might make more sense)

A service like Stop Forum Spam might be a solution to this. It checks for IP address and email address and gives it a value based on how likely it is assumed to be a spammer.

When they have to set up a new email account and maybe even a new IP address for every few accounts, it gets to be a lot of work soon.

https://www.stopforumspam.com/

It has a very simple API and is not that hard to implement (really, I have done it myself :) )
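To make that concrete, here is a minimal Python sketch of a registration-time check against the StopForumSpam API. The endpoint and the `appears`/`frequency` response fields are as I recall them from SFS's public docs, and the frequency cutoff is an arbitrary assumption; verify against their current documentation before relying on this.

```python
import json
import urllib.parse
import urllib.request

SFS_ENDPOINT = "https://api.stopforumspam.org/api"

def is_spammy_response(data, min_frequency=5):
    """Decide from an already-parsed SFS JSON response.

    `min_frequency` is an arbitrary cutoff: how many reports it takes
    before we treat the IP or email as belonging to a spammer.
    """
    for field in ("ip", "email"):
        entry = data.get(field, {})
        if entry.get("appears") and entry.get("frequency", 0) >= min_frequency:
            return True
    return False

def looks_like_spammer(ip, email):
    """Query StopForumSpam when the registration form is submitted."""
    query = urllib.parse.urlencode({"ip": ip, "email": email, "json": ""})
    with urllib.request.urlopen(f"{SFS_ENDPOINT}?{query}", timeout=5) as resp:
        return is_spammy_response(json.load(resp))
```

The decision logic is split out from the HTTP call so it can be unit-tested without hitting the network.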


Appreciate the response - I'll look into it now


Okay, thank you. I see GitLab is mostly Ruby. Just to get a general idea of the code, this is a simple PHP function that uses it:

https://plugins.trac.wordpress.org/browser/gwolle-gb/trunk/f...

That function can be called when the register form has been submitted. It will return true or false. Forget about the transient stuff; that is just WordPress caching.

You don't need an API key like with Akismet; you would only need one if you want to add or remove entries from the SFS database. It really is much simpler. Of course you might want to have a checkbox in the settings. But still, you might be able to finish this in an afternoon :)

Wish you the best.


Great suggestion, this looks like a very straightforward service and implementation. All open source as well.


I think the core of this problem is that it is hard to identify whether a user is a bot or a human. I've not seen any elegant free solutions to this.


That is not the core of the problem. Spammers are humans, and sometimes they will solve reCAPTCHAs in large quantities to get their spam through. It's about giving administrators a multipronged approach to stay ahead of them. For some examples of free solutions, see https://www.mediawiki.org/wiki/Manual:Combating_spam. It's even possible to connect SpamAssassin to forms. GitLab needs tools and automation that detect and roll back spam, ban users, and knobs to tune restrictions and rate limits based on how spammers are acting. GitLab Inc. just hasn't seemed to care much about helping people who are trying to use GitLab while keeping their software freedom.
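One of the "knobs" mentioned above, a per-IP rate limit on account creation, can be sketched in a few lines. This is an illustrative sliding-window limiter, not anything GitLab actually ships:

```python
import time
from collections import defaultdict, deque

class SignupRateLimiter:
    """Allow at most `limit` signups per source IP within `window` seconds."""

    def __init__(self, limit=3, window=3600.0):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # ip -> timestamps of recent signups

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        hits = self._hits[ip]
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= self.limit:
            return False
        hits.append(now)
        return True
```

The `limit` and `window` defaults are the tunable knobs an admin would adjust as spammer behavior changes.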


I think the focus of our Trust and Safety team has been on GitLab.com and not on all GitLab instances. We'll discuss changing this.


Thank you.


GitLab team member here. We just added a new page to our Handbook where we share approaches to preventing, detecting and mitigating spam on self-managed instances of GitLab. https://about.gitlab.com/handbook/engineering/security/opera...

We want to hear from you! Instructions on how to contact us: https://about.gitlab.com/handbook/engineering/security/opera...


I'm curious about the spamassassin integration. Do you know of any open source projects currently using it for a web application?


> I have not read it

Then try reading instead of typing, especially https://www.gnu.org/philosophy/freedom-or-power.en.html


I have read that. I have read most of what I could find on the FSF and GNU websites about this, years ago already and again several weeks ago. And that is exactly the kind of casual dismissiveness I'm talking about, which barely even recognizes that there are any trade-offs involved at all, much less addresses them.


The introductory philosophic essay lays it out very clearly:

https://www.gnu.org/philosophy/shouldbefree.html

> we should perform a cost-benefit analysis on behalf of society as a whole, taking account of individual freedom as well as production of material goods.

You cannot say in good faith that the essays fail to recognise or address the existence of trade-offs. Please take note.


Given that statement, would you be surprised to hear that the FSF has multiple times stated that the GPL is a strategy with benefits and drawbacks, not an end goal in itself? Similarly, they see permissive licenses as a positive thing, but not the optimal strategy in many cases. It depends on the project, the situation, the authors, the community, and so on.


> They do, however, very often use existing language, and custom language is minimized.

Guess what: the AGPL does that too. It's only one paragraph different from the GPL.

> Where contracts are often almost entirely standard per-company

"standard per-company", means custom and used used throughout the company. That doesn't make it less risky, and its not like these things don't constantly change and are hugely complicated, just look at privacy policies. AGPL is standard for all companies.

> And very rarely is the company in danger from the non-boilerplate clauses.

Citation needed.


> Guess what: the AGPL does that too. It's only one paragraph different from the GPL.

Yes, and the point is that paragraph is particularly risky and untested.

> Citation needed.

I gave an example.


The points people keep calling risky here are: what is a derivative work, and what the "complete corresponding source" definition covers. Both of those things HAVE been tested. The complete-corresponding-source definition is the same as in GPLv3 and almost exactly the same as in GPLv2. "Derivative work" is a general copyright concept tested in many cases. The extra paragraph doesn't have anything to do with either. To recap: 99% of the license is tested, and the "risk" everyone is discussing concerns the parts that have already been tested. Basically, what Drew wrote is true.


Derivative work and complete corresponding source have not been tested with respect to Google's monorepo (or similar situations), because under the terms of the GPLv2/v3, Google doesn't distribute that software.

There's an entire class of tooling to make sure that GPL-tainted software isn't distributed (https://opensource.google/docs/thirdparty/licenses/#restrict...), but because the class of software that Google distributes under the GPL is limited (can you think of any?), this is workable, and such things can be isolated.

That doesn't work if the definition of "distribution" is broadened significantly. Then the derivative-work questions (which aren't as cut and dried as you claim) suddenly do matter a lot more.


> There's an entire class of tooling to make sure that GPL-tainted software isn't distributed

Amazing the lengths people go to in order to avoid sharing and treating others well! Imagine if they did the opposite: imagine if they just freely shared their source code.


I mean, there's some amount of code Google really doesn't want to share (it's not shared with me, and I work there) for various reasons, including security. So I imagine there would be downsides (and not a whole lot of upside; much of the useful stuff is already shared).


Corresponding source has the exact same definition in GPLv3 and an almost exactly identical one in GPLv2, so all this "it's completely untested" talk is completely disingenuous. Google uses Borg to control GPLv3 code that it also distributes, so it's exactly the same case, and it's complete BS you are spreading. Lawyers are actually pretty good at spreading FUD about the GPL; they always have been.


AGPLv3 is exactly the same as GPLv3 except that it adds one paragraph. That paragraph has nothing to do with corresponding source or what a derivative work is. Google ships distros with GPLv3 code to GCP customers, so Borg and the GCP infrastructure would be equally affected by the claimed "risk to Borg" and other server-side code. So I don't believe the claimed legal risk is real; it's just FUD.


I went to a high school in Cali that was converted to a charter school. As far as I could tell, the main change was that they kicked out all the kids with poor grades or other problems. That is the polar opposite of "choice", it was about increasing inequality and it was completely disgusting.


Wow, so charter schools are allowed to stack-rank their pupils all the way out of the school if desired? Is there a list somewhere of the reasons a school cannot use to suspend or expel a student? And, in addition, a list of the conditions, with associated constraints, under which they can?


Schools can do many things either directly or implicitly, with some variations based on local laws. Here are some that I’m aware of:

1. Have an exam to pass up to the next grade. This isn’t necessarily intended to be malicious but it means that kids who are struggling, have less than fully supported disabilities, unstable home environments, etc. aren’t in their stats for the higher grades but will be in the comparison schools.

2. Have a graduation requirement that, say, you pass a ton of AP exams. Struggling students are probably going to bail when the odds aren’t looking good.

3. Not offering support services: charters have a financial incentive not to hire specialists for special needs, psychologists, librarians, cafeteria workers, etc. Parents will often pull kids over to the public schools where they’ll get more support, which also makes the cost differential and difficulties accurately comparing performance more pronounced.

4. Favor high levels of family support: academic performance tracks closely with family wealth so anything which favors affluence will have the effect of removing more lower-performing kids. That can be homework requiring laptops which aren’t provided, having a schedule or location which doesn’t work well for transit users, after school / weekend clubs or courses with supply fees, etc.

5. Tailor the curriculum to attract certain types of student: require things like taking Latin/Greek or a STEM load, don’t offer much in the way of arts/music/sports, or simply requiring more classes total, etc.

Again, this doesn’t need to be malicious - it just means that you’re encouraging kids who aren’t hyper-focused on academics and well supported to go elsewhere. That makes the numbers harder to accurately compare with public schools who have to serve everyone and also confounds the question of how many of those kids would have been high performers at any school. People like to say this pushes them to be better, and that’s true in some cases but not as many as lore would have it and tends to ignore the kids who burn out but would have done well with a bit less pressure.


> in a way that puts formal limits on what they can do with it and how long they have access to it, and I as the patient have both the right and technical ability to revoke that access

If that's what Solid is, it's a scam and more of his DRM promotion. There is no technical way to "revoke my access." Unless you have a memory-erasing implant in my brain, if the data gets onto my screen, I can copy it and access it forever. Period. Fuck Tim Berners-Lee.


Data use agreements often specify under what terms the data is held and destroyed. While people may still remember some data, the usual use case is large databases that can't be memorized. If someone revokes that data, that data would need to be removed from the database and all associated downstream copies. Failure to comply would open the door to legal penalties, which is the real stick.

Imagine, for example, if the US or UK governments took corporate misuse of personal health data (https://www.theverge.com/2019/6/27/18760935/google-medical-d...) as seriously as they currently take video copyright violations by individuals....
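The enforcement described above is legal rather than cryptographic, but the bookkeeping side can be sketched: a registry that records every party holding a copy of a data grant, so a revocation produces the full list of holders now obliged to delete. Illustrative Python only; the class and method names are invented for this example.

```python
from collections import defaultdict

class GrantRegistry:
    """Track which parties hold copies of a patient's data grant.

    Revocation doesn't technically erase anything by itself; it yields
    the set of holders that are now legally required to destroy their
    copies, which is where the real stick (legal penalties) applies.
    """

    def __init__(self):
        self._holders = defaultdict(set)  # grant_id -> parties holding a copy

    def share(self, grant_id, from_party, to_party):
        # Record the downstream copy so a later revocation can reach it.
        self._holders[grant_id].update({from_party, to_party})

    def revoke(self, grant_id):
        """Return every party that must now destroy its copy."""
        return self._holders.pop(grant_id, set())
```

Each `share` call records another downstream copy, so the deletion obligation fans out however far the data traveled.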


> In the context of the early Web, should we have prevented any company from making their own website?

Well, websites are now a bundle of arbitrary remote code execution called JavaScript; we didn't allow that back then, so by today's standards, ya, we did.

> Enforced some standard for how your website UX should work in the name of security and usability?

Well, HTML, so ya, again, we did. And we could again. A lot of the functionality of apps simply doesn't justify requiring you to run a program.

