Hacker News
Facebook reportedly researching ways to use encrypted WhatsApp messages for ads (macrumors.com)
133 points by arkadiyt on Aug 4, 2021 | hide | past | favorite | 96 comments


I just wish AdTech companies had a system where you could fill out an extensive form openly stating what your preferences are, because the targeting seems very skewed and wrong, at least in my experience.

Facebook thinks I like rock music, but I really like house music. Facebook thinks I eat meat, but I am a vegan. Facebook thinks I like cars, but I don't even drive. There's no way of telling Facebook what my tastes and lifestyle preferences even are.

Facebook just infers based on keywords and the things I interact with on their platform, which is a terrible way to gauge preferences, since I could talk about cars in a comment, say, but that doesn't mean I drive.

It's the same with all the other ads I get. They're almost all fraudulently inaccurate, and as an engineer I think we should be building things that work properly and are lucrative on their own merit, not based on borderline fraud.


There's a growing base of evidence in both the US and the Netherlands (from news media websites, I believe) that targeted advertising is effectively a bubble and doesn't yield higher metrics or revenues. In fact, I believe in a couple of examples they saw higher growth and revenue by going totally non-targeted.

There's an entire sector now focused on trying to hyper-target ads, on the presumed basis they're more effective. If the house of cards falls down, and people realise untargeted ads perform better without all that extra cost, things could get interesting.

Unfortunately it isn't as simple as that: "independent" audience measurement still seems to be a requirement, and that requirement leads to adtech-type solutions emerging.

To go back to your observation about the quality of targeting, though: it shows, in a way, that it simply doesn't work. The trouble is advertisers continue to throw money at services falsely claiming to have accurate, data-driven targeting models, without seemingly ever thinking to cross-validate them to be sure they aren't absolute nonsense. Your examples, where the inferred demographics are the exact opposite of reality, show how wild it can be.

I fetched my data from Spotify, and the limited data they had (in theory, objective data) was chronically wrong: they had tried to put me into a group based on the phone network I used, but they didn't even get that right, despite that being trivial to do(!)


There's growing evidence that large media companies can generate revenue from advertising deals from large brands with high-level interest targeting.

That doesn't say anything about the performance of targeted ads - just that high-level targeting is a suitable replacement for narrow targeting at a certain scale.

That's a separate concern from niche publishers (without direct sales teams) and niche brands (who only have a few thousand dollars to spend on marketing) who both want to maximize the value they're getting from the ecosystem.

And that's where the real money in advertising is. Fortune 500 companies boycotted FB last year and it didn't make a dent in their revenue, because they have millions of small businesses that all spend $50 on ads per month. That level of spending is only effective with narrow targeting.


Interesting. Reminds me of the analogous process of iteratively improving a model or fitness function and over time ending up with something overfit.


Facebook kinda lets you do this now. Filling out a preference sheet, that is. Buried in your ad settings is a list of what FB thinks are your interests. You can remove any that don’t match.

Doesn’t always work though. You can remove all “interests” related to meat, and then still get slammed with steak ads after you clicked “like” on an article criticizing the beef industry. There seems to be a shadow list of interests that the user can’t see or edit.

The one combo that does work over time is deleting interests and consistently choosing “Do not show this ad again” for the unwanted subjects. But that’s more work than most are willing to do. I only did it because the ad system is interesting to me and I wanted to see how trainable it is.


Couple this with the questionable junk they're always advertising too. Even when they do advertise correctly it's not anything I would ever consider buying. I left the platform last year and haven't looked back, it's a time suck/waste all around.


I'm convinced that the real target of targeted advertising is actually the ad buyers and not the ad viewers. It's a tool to increase the revenue of the ad sellers.


There is. It’s a bit crude and as far as I remember you can only remove things.

https://www.facebook.com/help/247395082112892


> Facebook thinks I like rock music, but I really like house music. Facebook thinks I eat meat, but I am a vegan. Facebook thinks I like cars, but I don't even drive. There's no way of telling Facebook what my tastes and lifestyle preferences even are.

Advertisers can also choose to target you via Facebook even if you aren't in their apparent target demographic. I think this issue is a combination of both inaccurate and accurate targeted advertising.


Ads are intended to make you need something new, or to like something that you don't need.

Ads showing you something you cannot afford can increase your desire for it, make you envy those who can afford it, etc.

An ad appearing over and over is not necessarily intended to make you buy something. There are a lot of sociological and psychological implications behind it.


Someone on your network (the Ethernet sort) likes those things. Either at home or at work.

Lots of targeting is based on IP address, not device.


Form I would fill out:

[√] Doesn't like ads.


I mean, I get it. I got to know somewhere that something I'm interested in buying exists. However, there has to be a better system than tracking my every move online and then force-feeding me ads about crappy products that I don't like or, worse, outright scams. I even watch videos on YouTube about products I'm interested in VOLUNTARILY. How is this a billion-dollar industry?


> I got to know somewhere that something I’m interested in buying exists.

I even have a problem with that. It's hard to put into words — I'm put off even by the idea that I am to be treated as a consumer. I am even slightly embarrassed when I consume a non-necessity.

I don't like door-to-door salespersons. It's not because they are pushy that I dislike them but because they are unbidden. An unsolicited ad is a door-to-door salesperson — even if it is for a thing I wanted.

When I want something, I feel I should be the one to act to seek the thing out. The truth is, I likely consume more than I should and prefer that there be friction.


> Update: Will Cathcart, head of WhatsApp, has tweeted in response to The Information's report that WhatsApp is not exploring the use of homomorphic encryption. Cathcart states that there should be skepticism of "technical claims that apps like ours could see messages in "good" cases only."

There is no way to match advertising keywords in encrypted content without also allowing matches of arbitrary keywords, at which point the encryption is broken.


> There is no way to match advertising keywords in encrypted content

Complete tangent, since this has nothing to do with what Facebook are looking into here, but:

If you control both ends of the connection (like Facebook does in WhatsApp), you can match on advertising keywords securely. By doing it client-side, after the text is decrypted.

As I understand it, this is essentially what Apple's iAd framework does: all possible advertising content that could be run in an iAd view gets pre-cached on the client; the iAd framework, running inside the client's address space, then chooses what to show the user, selecting ads based on demographic/interest information it has available locally on the device, never leaked outside. The iAd framework then collects just the impression metrics (how many times each ad was shown to a given user) and every so often batches those metrics out to Apple's servers. Apple then re-groups and anonymizes these metrics, reports them to the advertiser, and charges the advertiser for them.

Of course, this means that advertisers need to be able to trust Apple's impression reporting; and to do that, they need to trust that Apple has set up iOS devices so that iAd selection + reporting is basically inviolable, such that nobody on the client end is generating false impressions. (Or, at least, that devices generating such impressions can be detected as being jailbroken through some mechanism, and so their iAd impression reports culled from the corpus.)
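The client-side flow described above can be sketched roughly as follows. All names and the matching heuristic are hypothetical; the real iAd internals are not public, so this is only an illustration of "select locally, report only counts":

```python
# Hypothetical sketch of client-side ad selection with batched
# impression reporting. Interests stay on the device; only aggregate
# counts are ever sent upstream.

from collections import Counter

class LocalAdSelector:
    def __init__(self, cached_ads):
        # cached_ads: {ad_id: set of interest keywords}, pre-fetched
        # from the server before any targeting happens.
        self.cached_ads = cached_ads
        self.impressions = Counter()

    def select_ad(self, local_interests):
        # Runs entirely on-device; local_interests never leaves the client.
        best_id, best_overlap = None, -1
        for ad_id, keywords in self.cached_ads.items():
            overlap = len(keywords & local_interests)
            if overlap > best_overlap:
                best_id, best_overlap = ad_id, overlap
        self.impressions[best_id] += 1
        return best_id

    def flush_metrics(self):
        # Only counts are reported, not the interests that produced them.
        batch, self.impressions = dict(self.impressions), Counter()
        return batch

selector = LocalAdSelector({
    "ad_house_music": {"house", "music", "clubs"},
    "ad_steak": {"meat", "grilling"},
})
shown = selector.select_ad({"house", "vegan"})
print(shown)                    # ad_house_music
print(selector.flush_metrics()) # {'ad_house_music': 1}
```

The privacy claim rests entirely on the metrics being aggregate enough; as the later comments point out, a sufficiently narrow ad makes even a bare count revealing.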


>Of course, this means that advertisers need to be able to trust Apple's impression reporting;

And that is why it won't work for Facebook. They have already been caught lying about impressions in a way that ended up destroying companies.

https://www.hollywoodreporter.com/business/business-news/fac...


Nice to assume that would be a problem for Facebook.


Fool me once...


I think Telegram does that. Someone sent me a message about the video call we'd had that day, and within the same minute I suddenly received a message from Telegram about their video call features.


Unless you were using a secret chat, Telegram isn't e2e encrypted. So there's no need for them to implement keyword detection on the client side.


No secret chat that I was aware of. It just caught me by surprise that I got an "ad" based on a 1:1 conversation


> The iAd framework then collects just the impression metrics — how many times each ad was shown to a given user — and then every-so-often batches those metrics out to Apple's servers.

If Apple knows how many times you were shown each ad, and which keywords are associated with that ad, then they are able to reconstruct which keywords appeared in your encrypted chat, and how often.


I believe the iAd platform offers fine-grained "eyeball" targeting, but not fine-grained context targeting. The iAds framework knows a lot about the device and its owner; but it doesn't/can't directly read the content of the app the iAd gets embedded into (the "host" app) — only the host app's barest metadata, e.g. the App Store category the host app is published under (e.g. Games vs. Productivity vs. Travel, etc.)

In the case of some apps, e.g. Apple's own News app, the app may opt to feed the iAds framework a "context clue" about the content currently being viewed — telling it the category of content being displayed (e.g. is this a Business News story? An Entertainment story? etc.) But this is just a high-level, categorical identifier attached to the metadata of the item being displayed, with no ability for specific content within the viewport to trigger specific ads.

Apple has a page for exactly what the iAds framework does/doesn't use as an input for its targeting algorithm: https://support.apple.com/en-ca/HT205223

One interesting thing I figure I should highlight from that page, for the sake of people who don't bother to click:

> We create segments, which are groups of people who share similar characteristics, and use these groups for delivering targeted ads. Information about you may be used to determine which segments you’re assigned to, and thus, which ads you receive. To protect your privacy, targeted ads are delivered only if more than 5,000 people meet the targeting criteria.

This is, I believe, a move to combat "seemingly-innocent" targeting rules that nevertheless de-anonymize someone, just because that person is weird. For example, if you're the only Tagalog speaker in Cuba, just plain-old Region+Language targeting that every platform supports would be enough to target+track you specifically. So Apple just won't allow "Cuba+Tagalog" unless there are at least 5000 Tagalog speakers in Cuba with iOS devices.
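That thresholding rule can be sketched as a simple k-anonymity gate. The "more than 5,000" figure comes from Apple's quoted policy; the function and names here are hypothetical, and the real enforcement is of course server-side:

```python
# Illustrative k-anonymity gate: honor a targeting rule only when the
# matching segment is large enough that receiving the targeted ad does
# not single anyone out. Falls back to an untargeted ad otherwise.

K_THRESHOLD = 5000  # from Apple's stated "more than 5,000 people" policy

def choose_ad(segment_size, targeted_ad, generic_ad, k=K_THRESHOLD):
    return targeted_ad if segment_size > k else generic_ad

# The lone Tagalog speaker in Cuba falls back to an untargeted ad:
print(choose_ad(1, "tagalog_cuba_ad", "generic_ad"))      # generic_ad
print(choose_ad(12000, "tagalog_cuba_ad", "generic_ad"))  # tagalog_cuba_ad
```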


that's why they're pre-cached in theory, so they don't have to know.

if you're saying you can't trust the closed source implementation, then yeah


"and then every-so-often batches those metrics out to Apple's servers" means they do know, even if we trust the implementation.

For example, if you publish an ad that only matches the word "fnord", and the metrics reveal that you were shown that ad at least once, Apple can work out that "fnord" appeared in your chat.
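The "fnord" probe can be simulated directly: a one-keyword ad turns the impression report into a membership oracle for that keyword. This is a toy sketch with illustrative names, not a description of any real reporting pipeline:

```python
# Toy simulation of the single-keyword probe attack: an ad targeting
# only "fnord" is shown exactly when "fnord" appears in the locally
# decrypted chat, so even an aggregate impression count leaks the
# keyword's presence to whoever collects the metrics.

def client_impressions(chat_words, probe_keyword):
    # Runs on-device: how many times the probe ad would be displayed.
    return chat_words.count(probe_keyword)

def server_inference(reported_count):
    # Runs server-side: sees only the metric, never the chat, yet can
    # reconstruct whether the keyword appeared.
    return reported_count > 0

chat = "let us discuss the fnord at lunch".split()
count = client_impressions(chat, "fnord")
print(server_inference(count))  # True: the server learns "fnord" appeared
```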


This is a dup of:

https://news.ycombinator.com/item?id=28053808

From my other comment: https://news.ycombinator.com/item?id=28054329

There is a severe misunderstanding of what homomorphic encryption is; the way they explain Facebook trying to use it is not how it works.


What's that quote again from that early engineer? The smartest, highest paid engineers, and they fucking work on making you view more ads.

We're living in an embarrassing era of massively misallocated resources.


This is even more true today, with a decade of research into the ROI of advertising showing that it isn't even making money for those who buy it. What a sad waste of energy and imagination. We can do better.


Advertising is a way to backdoor surveillance in a post-Snowden era when people are wise to TLSing All the Things. The dollars were never a real concern imo.


Can we?

As Churchill said, democracy is the worst form of government -- except for all the others which have been tried. Similarly, capitalism is the worst form of economics -- except for all the others which have been tried. I mean, it's obvious to an external observer that a prisoner's dilemma, sub-optimal Nash equilibrium, or other coordination problem can best be solved by all sides cooperating. But the reason they're called dilemmas and problems is that there's no clear way for participants in the system to solve them.

It's not like anyone actually likes the system. But no one has (nor could be trusted to use) the power to coordinate an action of the sort that could end it. This is just the state of the world right now, we opened Pandora's box with web technology and pulled out the undesirable factoid "It's lucrative to run individual profiling and sell ads." Society has a limited ability to push an agenda against incentives; pushing past tech giant privacy invasion and attention abuse is not something we are capable of doing right now.

I think the only thing that can be hoped to change this equilibrium is to change the playing field. Change the technology: improve adblockers, anti-tracking tools, and encryption. Educate kids and those willing to listen about the harms of advertising. The current generations of adult Americans grew up with roadside, TV, and radio ads; they view web ads as an extension of that and don't think about the privacy implications. But a kid who grows up in the Internet era aware that their activity is tracked and profiled (unless they use particular encryption tech and VPNs) will have a very different attitude, and perhaps eventually this abuse will be unprofitable.


I frequently think about how we live in an era of information, not communication. Have you heard of pol.is or vTaiwan?

https://pol.is/home https://info.vtaiwan.tw

I have a lot of hope, as well as paranoia, that implementations of semantic analysis in those direct referendum/republic programs will generate more rational collective action.


I had not heard of those, thanks for sharing. I agree that informative tech that allows more rational collective action is the way forward, but I also share your paranoia that semantic analysis will be another hazard that empowers actors with misaligned goals.


Early in his term, Trump flirted with the idea of changing quarterly earnings reports for public companies to be biannual instead. I think that potentially could have had the largest shift in corporate behavior ever seen. Potentially force companies to think more long term...or it could have been the same rat race, just 3 months longer. Who knows. Too bad that nothing came of it.


This 100%


People are discussing a lot of advanced encryption in this thread, but the much simpler explanation is that they will do the ad processing client-side, where the messages are already in plain text.


Still a big red flag to me! If I'm privately talking to someone about health concerns I might have, the last thing I want is Facebook sending me ads for medicine about that concern.

"But we didn't read you messages, only took keywords" sounds like a child's excuse for bad behavior


Exactly. They don't have to "break" the E2EE design. Simply make use of the "end device" to analyse the contents and feed that into their advertising profile for the user(s).

TBH I am surprised they're not already doing that?


How about decrypting the messages on the device and then sending them for ad classification to a remote service? Sounds like a no-brainer.


Wouldn't that still be a privacy violation? The messages themselves would be protected, but advertising companies would receive a constant stream of keywords or categories over time.


IMHO yes it would be a violation but WhatsApp would technically still be end to end encrypted.

Do all the data mining and pattern matching on the device then upload some blob of data whenever needed to better shape the advertising profile.

Like I said I am honestly surprised this is not what Facebook have been doing since the day they bought WhatsApp. I thought this was the reason they bought them. Sure the messages are all encrypted in the air and on their servers (theoretically anyway) but on the device is free rein and always has been.

I can see (at least) two potential issues doing this though.

First is obviously the resources of a phone vs the super clusters they have chugging away in Facebook data centers.

Second is it would mean their precious data mining algorithms would need to be built into the app and on the device. This may be a huge intellectual property concern for them as it wouldn't take long for people to extract that no matter how obfuscated they try to make it. Of course that also depends on if it is even possible to get it onto a phone in the first place in a useful way.


Also, people forget that Facebook gets all the metadata. The encryption leaves the length of the messages intact, as well as the identities of both parties. A conversation with someone who is known to be a hairdresser is likely about making an appointment, so maybe the person is interested in ads for hair products. A conversation with a midwife might yield ads for baby products.


While from a technology point of view this is very interesting, I would really prefer an option to pay a reasonable subscription fee to have adverts turned off for me.

Before Facebook bought WhatsApp it actually used to require a small annual fee, but it was removed.

Alas, I fear that, as tends to be the case with advertising companies, they know that users willing to pay are the most valuable ad audience, so they are unlikely to give anyone an option to opt out.

On an unrelated note, I wish that YouTube offered a "no adverts" subscription option, I don't want to have to pay the full price for their premium package when I watch very little YouTube and the only thing I want from it is the lack of adverts.


I keep telling people that YouTube Premium is the most cost effective membership there is, and they look at me like I'm crazy. Since I got it YouTube has become a service I genuinely love.


This is a bigger thing on reddit it seems, where I assume more readers are teenagers with little/no income.


I cancelled mine, because the ads provide enough friction to keep me from spending too much time there.


This. No ads is great, but the big surprise was the ability to download videos on my device and bring it with me when traveling. It's one of those features I didn't know I wanted until I had it, now I can't see myself not having it.


Definitely worth it. I'm glad to have had my YT Red / Google Play Music subscription grandfathered into YT Premium / YT Music Premium for the original price.


If you're on android then NewPipe is fantastic for watching YouTube without adverts, in a hovering window or even just listening to the audio with the screen off.

It does not tie into your YouTube account (so no watch history or recommended vids) which may or may not work for you, but it's a fantastic resource.



Visually seeing ads is usually not the biggest problem, although there are issues with too many ads (bandwidth, bad UX) and the security risk of malicious stuff embedded in ads.

The issue (for me), and I think for many people, is the quantity and nature of the data that is collected, and the total lack of oversight for its sharing and usage.

Also, no company will ever remove you from the tracking/data pool just because you're a paying customer. Why would they? Now they can extract revenue from you in two ways!


uBlock origin.


If you use the Brave browser on iOS and ad block on your laptop, you’ll never see an ad on YouTube.


There's Youtube Premium Lite now which just stops the ads.


> [...] is hoped to allow companies to read information from encrypted sets of data while also preserving privacy [...]

These two items sound mutually exclusive.


I don't know, how could figuring out that a message is about arranging an extramarital affair, or discussing cancer treatment options with your spouse, be an invasion of privacy?

/s

This stuff is so obvious that it's obviously just willful at this point.


Not only that, but if they can generate meaningful targeted advertising from encrypted data, that would be a pretty strong indicator that the encryption method is not strong enough. Ideally, encrypted data must be indistinguishable from random noise.


As another comment mentioned, they're talking about using a technique that allows you to perform computations on encrypted data, with encrypted results output - the thing doing the computation never sees any decrypted data.

So they could target advertisements but, as far as I can tell, they wouldn't be able to know which advertisements were served ... until the phone makes a request to download them, which seems to me to render it fairly but not entirely pointless.


My understanding is that homomorphic encryption allows one to transform encrypted data, but the data is still encrypted (random noise). i.e:

decrypt(encrypt(x+1)) == decrypt(encrypt(x) +1)

This does not allow you to get any information about x, only to perform computations with it. If advertisers are getting information out of encrypted data, that means the encryption is broken.
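The identity above can be demonstrated with a deliberately toy scheme: a mod-n shift cipher is additively homomorphic. This is a sketch of the algebra only, nothing like a real (secure) homomorphic encryption construction:

```python
# Toy additively homomorphic cipher: encrypt by adding a secret key
# modulo N. Adding 1 to the ciphertext adds 1 to the plaintext, so
#   decrypt(encrypt(x) + 1) == decrypt(encrypt(x + 1)) == x + 1.
# This illustrates the algebraic property only; it is NOT a secure
# encryption scheme.

N = 2**32
KEY = 123456789  # secret, held only by the client

def encrypt(x):
    return (x + KEY) % N

def decrypt(c):
    return (c - KEY) % N

x = 41
# The party adding 1 never needs the key and never learns x:
assert decrypt(encrypt(x + 1)) == decrypt(encrypt(x) + 1) == 42
print(decrypt(encrypt(x) + 1))  # 42
```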


> If advertisers are getting information out of encrypted data, that means the encryption is broken.

I'm just suggesting that Facebook wants to use homomorphic encryption to tell your phone which adverts to download. It won't know which adverts it told you to download or why ... but it will know that you've downloaded them once you do.

It could in theory serve the adverts through homomorphic encryption, but then it wouldn't be able to invoice people based on how many times their adverts were served. I don't see that happening though.


Oh, thank you, I should have paid more attention to the original article as well.

As a complete layman on the topic, I am kind of wary of (fully) homomorphic encryption. Not sure if I understand it right, but it seems to me that anybody in possession of the ciphertext would be able to run it through an arbitrary circuit, even if the result is encrypted and they cannot read it; in theory it would be vulnerable to MITM attacks.

For instance, say Facebook servers are compromised (yes, I know…) and an attacker then runs their own version of the recommendation algorithm instead, presenting fake ads to people that mention specific terms.

I am certainly several steps behind any academic discussions on the subject, so if that is not a problem I am curious to know why.


You mean like zk proofs?



They are not; they are referring to homomorphic encryption, which basically allows you to do computation on encrypted data without decrypting it or otherwise revealing the encrypted data. https://en.wikipedia.org/wiki/Homomorphic_encryption


> These resulting computations are left in an encrypted form

As opposed to "read information from encrypted sets of data"


This seems like a good thing for FB and users (isn't that why researchers are playing with homomorphic encryption more and more?).

This way, FB gets to make money off ads with fewer privacy concerns.

I have no insider knowledge but I fully expect Apple, Google and MS are actively trying to do this too.


I'm not very well versed in encryption, but I know many here are, and I wanted to see whether the way I imagined this has any merit:

Is this something where WhatsApp could encrypt messages word by word instead of by message, and keep a list of the hashes of the top sent words and their counts somewhere? Facebook would then send an ad dictionary to WhatsApp clients, which would hash each keyword in the dictionary, compare those hashes against the top-sent-word hashes, and try to match specific ads to show on the client side.

Or is it more that fully encrypted messages would be scanned for some kind of needle in the haystack that could infer something about what's being sent?


I am not a cryptographer.

At a quick inspection, this kind of thing would result in pretty bad encryption. A hash of each word (unless salted with a different salt each time, defeating the purpose) is easily broken, even if it's a cryptographic hash: I can build up a dictionary by hashing the 100k most common words. This kind of attack is why every encryption scheme needs some randomness (or something that changes) injected into it. It's also unprotected against replay attacks, etc. And I'm probably just scratching the surface.

You really have to encrypt full messages. And sending the same message twice has to result in different ciphertexts. And you need nonces. And probably many other things. Cryptography is hard.
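The dictionary attack described above is trivial to mount against unsalted per-word hashes. A minimal sketch (the word list and message are invented for illustration):

```python
# Sketch of the dictionary attack: if each word is hashed without a
# per-message salt, anyone can precompute the hashes of common words
# and invert the "protected" stream.

import hashlib

def word_hash(word):
    return hashlib.sha256(word.encode()).hexdigest()

# Attacker precomputes a lookup table of common words.
common_words = ["the", "meet", "tomorrow", "cancer", "vegan", "house"]
lookup = {word_hash(w): w for w in common_words}

# Intercepted per-word hashes from a "protected" message:
intercepted = [word_hash(w) for w in ["meet", "tomorrow"]]

recovered = [lookup.get(h, "?") for h in intercepted]
print(recovered)  # ['meet', 'tomorrow'] -- plaintext recovered
```

The same table works against every message from every user, which is exactly why real schemes inject per-message randomness.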


This would only be possible if the same encryption key was used for every message. But I don't think that's the case. So the same plaintext word is encrypted differently every time.


Maybe they plan on making the encryption slightly less encrypted so they can tell something about the content? Would that not mean the encryption is basically pointless?


I'm just about still on WhatsApp because my network is on it and I'm assured that messages are encrypted.

The moment ads arrive, I'm done.

I get the need to make money but this isn't it. *

* Look at what Google did to YouTube for an abject demonstration of how to ruin your own platform: 6-second ads became longer and longer, became multiple ads, became full-screen ads, became un-skippable, became ... all completely blocked or piped in separately now, otherwise I'd go bananas.


I'm sure end-to-end encryption is great and all, but if a company like Facebook is giving you the client, what difference does e2e encryption make if you're trying to prevent Facebook from seeing your stuff?

Obviously you can see the plaintext messages even though they were encrypted in transit, and so Facebook/WhatsApp could theoretically see them too. Isn't this true of any app, e2ee or not?

Adding ads at that point is just trivial - process on the client.


And of course the same tech can be used for surveillance.


Targeted advertising is surveillance.


They'll do something like FLoC, but right before it's being sent, lol.


See also: Facebook Wanted NSO Spyware to Monitor Users, NSO CEO Claims

https://news.ycombinator.com/item?id=28001207


Why can't they just use ads without the targeting...


This should not be possible, or it must be considered cryptanalysis outright. An encryption scheme is built by design such that it is not possible to obtain any knowledge of the content of an encrypted message (or a series of consecutive encrypted messages). If I can make any assumptions about an encrypted message (its content, the author's interests, etc.), the encryption is flawed by definition.


Well, I suppose it is possible to target ads to specific users without knowing anything about the user other than an anonymous user ID.

Of course, this requires one to trust that Facebook is not actually collecting any identifying info. The temptation to cut corners for more profit or for sheer expediency might be hard to resist.


How I’m using WhatsApp today: because I have to in order to communicate with businesses and contacts that are required to use it in their home countries.

Installed with no permissions to the address book, camera, or mic. It's somewhat of a pain that I can't save contacts, but at least it's still possible to communicate.


Head of WhatsApp (Will Cathcart) responded:

“We're not pursuing homomorphic encryption for @WhatsApp…”

Full tweet: https://twitter.com/wcathcart/status/1422611553954668546?s=2...


I am sure "they" are reading this, and others will downvote me, but: maybe ask our opinion before signup, like a quick 3-question survey. That would be less extreme than being this creepy.


Why not simply display generic ads to people and be done with it? Use more specific ads on certain pages/groups. Content tracking should not be necessary and is creepy.


Isn't there client side code in every Whatsapp client that analyzes your messages and sends data to FB for ad targeting?

I know for a fact they can block certain message content from being sent. So at least a hash of every message is sent to FB. This isn't very far from it.


Zuckerberg said that they combat spam on WhatsApp not by reading your messages (since they are end-to-end encrypted, they are obviously unable to) but by analyzing your metadata and your behavior on the service. So I assume the same strategy can be used for ads.

Encryption is not a problem; if I'm not mistaken, various security agencies in the US also bet on metadata when strong encryption is in question.


Perhaps we consumers should demand E2EE metadata then?

Would such a thing be theoretically feasible?

Or not because it would be like Tor with the entire Internet controlled by the adversary?


I don't know about E2EE metadata but I was referring to WhatsApp metadata in general, for example: "Alice sent 3 messages to Bob at 7:05 AM"


How dare they try to make money from their business


If ad revenue is your only go-to for money, maybe your business model just ... sucks.


Lots of people are happy with products that are supported by advertising. Problems arise when the business model relies on surveillance, which endangers and violates the rights of end users.


True. Personally, I think it demonstrates a lack of imagination, or taking the "easy way out".

For example, Google showed incredible innovation in what they did with AdWords. But the way they then slapped ads on YouTube has been gross.

There's basic user experience also.

Agreed though. Surveillance and privacy are even more important.


Can someone explain this? How can they not know what I'm talking about on WhatsApp if they show me targeted ads?


The article mentions homomorphic encryption, which is a way to perform computation on encrypted text, yielding encrypted results _without_ revealing the plaintext. Instead of f(x) → y, you do f(encrypted(x)) → encrypted(y).

So if you think of a function 'f' that can take text content and produce a relevant ad for it, and an encryption algorithm that converts plaintext 'x' into an encrypted 'X', then instead of running f(x) and getting an ad 'y', you'd instead run homomorphic_apply(f, X) – on the encrypted text – and you'd get an encrypted ad 'Y' without revealing 'x'. The device can keep the keys and do the decryption of 'Y', but running 'f' using homomorphic_apply can happen on the server side.
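As a minimal concrete instance of computing on ciphertexts, textbook (unpadded) RSA is multiplicatively homomorphic: a server can multiply two ciphertexts without the private key, and the client's decryption yields the product of the plaintexts. Toy classroom parameters, insecure by design; this is a sketch of the algebra only, not of anything WhatsApp would deploy:

```python
# Textbook RSA is multiplicatively homomorphic:
#   E(a) * E(b) mod n == E(a * b)
# so a server holding only ciphertexts can compute an (encrypted)
# product. Tiny textbook parameters -- utterly insecure, purely to
# illustrate the algebra.

n, e, d = 3233, 17, 2753  # the classic p=61, q=53 example

def encrypt(m):            # client side (public key)
    return pow(m, e, n)

def decrypt(c):            # client side (private key)
    return pow(c, d, n)

def server_multiply(c1, c2):
    # Server side: operates on ciphertexts, never sees 7 or 3.
    return (c1 * c2) % n

c = server_multiply(encrypt(7), encrypt(3))
print(decrypt(c))  # 21
```

Fully homomorphic schemes extend this idea from a single operation to arbitrary circuits, which is what Gentry's construction achieved.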

In principle homomorphic encryption is very cool and the "trick" involved to get fully homomorphic encryption described in Craig Gentry's dissertation in 2009 is pretty amazing. When it finally clicked for me I thought this was one of the most clever papers I'd ever read and truly a "mind: blown" moment.

That said, this is Facebook. As others have pointed out here, there's no need for such fancy processing and it makes much more sense to run the targeting algorithm on-device with access to the plaintext.


We should all just switch to something like https://delta.chat/en/ where the messaging protocol is implemented over plain old emails.

It makes the most sense, even without Facebook being bad and all.


well, the dumb fucks won't mind.


Funny that for an "ad" company, I do not see a single ad on their frontpage nor on any pages that require me to create a facebook account.



