
"The anti-sycophancy turn seems to mask a category error about what level of prophetic clarity an LLM can offer. No amount of persona tuning for skepticism will provide epistemic certainty about whether a business idea will work out, whether to add a line to your poem, or why a great movie flopped."

What a lot of people actually want from an LLM is for the LLM to have an opinion about the question being asked. The cool thing about LLMs is that they appear capable of doing this - rather than a machine that just regurgitates black-and-white facts, they seem to be capable of dealing with nuance and gray areas, providing insight, and using logic to reach a conclusion from ambiguous data.

But this is the biggest misconception and flaw of LLMs. LLMs do not have opinions. That is not how they work. At best, they simulate what a reasonable answer from a person capable of having an opinion might be - without any consistency around what that opinion is, because it is simply a manifestation of sampling a probability distribution, not the result of logic.

And what most people call sycophancy is that, as a result of this statistical construction, the LLM tends to reinforce the opinions, biases, or even factual errors that it picks up on in the prompt or conversation history.





I'd push back and say LLMs do form opinions (in the sense of a persistent belief-type-object that is maintained over time) in-context, but that they are generally unskilled at managing them.

The easy example is when LLMs are wrong about something and then double/triple/quadruple/etc down on the mistake. Once the model observes the assistant persona being a certain way, now it Has An Opinion. I think most people who've used LLMs at all are familiar with this dynamic.

This is distinct from having a preference for one thing or another -- I wouldn't call a bias in the probability manifold an opinion in the same sense (even if it might shape subsequent opinion formation). And LLMs obviously do have biases of this kind as well.

I think a lot of the annoyances with LLMs boil down to their poor opinion-management skill. I find them generally careless in this regard, needing to have their hands perpetually held to avoid being crippled. They are overly eager to spew 'text which forms localized opinions', as if unaware of the ease with which even minor mistakes can grow and propagate.


I think the critical point that op made, though undersold, was that they don't form opinions _through logic_. They express opinions because that's what people do over text. The problem is that why people hold opinions isn't in that data.

Someone might retort that people don't always use logic to form opinions either, and I agree, but is it the point of an LLM to create an irrational actor?

I think the impression that people first had with LLMs, the wow factor, was that the computer seemed to have inner thoughts. You can read into the text like you would another human and understand something about them as a person. The magic wears off though when you see that you can't do that.


I would like to make really clear the distinction between expressing an opinion and holding/forming an opinion, because lots of people in this comment section are not making it and confusing the two.

Essentially, my position is that language incorporates a set of tools for shaping opinions, and careless/unskillful use results in erratic opinion formation. That is, language has elements which operate on unspooled models of language (contexts, in LLM speak).

An LLM may start expressing an opinion because it is common in training data or is an efficient compression of common patterns or whatever (as I alluded to when mentioning biases in the probability manifold that shape opinion formation). But, once expressed in context, it finds itself Having An Opinion. Because that is what language does; it is a tool for reaching into models and tweaking things inside. Give a toddler access to a semi-automated robotic brain surgery suite and see what happens.

Anyway, my overarching point here and in the other comment is just that this whole logic thing is a particular expression of skill at wielding that toolset - the one which manipulates the models that, in turn, wield the toolset. LLMs are bad at it for various reasons, some fundamental and some not.

> They express opinions because that's what people do over text.

Yeah. People do this too, you know? They say things just because it's the thing to say and then find themselves going, wait, hmm, and that's a kind of logic right there. I know I've found myself in that position before.

But I generally don't expect LLMs to do this. There are some inklings of the ability coming through in reasoning traces and such, but it's so lackluster compared to what people can do. That instinct to escape a frame into a more advantageous position, to flip the ontological table entirely.

And again, I don't think it's a fundamental constraint like how the OP gestures at. Not really. Just a skill issue.

> The problem is that why people hold opinions isn't in that data.

Here I'd have to fully disagree though. I don't think it's really even possible to have that in training data in principle? Or rather, that once you're doing that you're not really talking about training data anymore, but models themselves.

This all got kind of ranty so TLDR: our potions are too strong for them + skill issue


> What a lot of people actually want from an LLM, is for the LLM to have an opinion about the question being asked.

The main LLMs are heavily tuned to be useful as tools to do what you want.

If you asked an LLM to install prisma and it gave you an opinionated response that it preferred to use ZenStack and started installing that instead, you'd be navigating straight to your browser to cancel your plan and sign up for a different LLM.

The conversational friendly users who want casual chit chat or a conversation partner aren’t the ones buying the $100 and $200 plans. They’re probably not even buying the $20 plans. Training LLMs to cater to their style would be a mistake.

> LLMs do not have opinions.

LLMs can produce many opinions, depending on the input. I think this is where some people new to LLMs don't understand that an LLM isn't like a person; it's just a big pattern-matching machine with a lot of training data that includes every opinion that has been posted to Reddit and other sites. You can get it to produce those different opinions with the right prompting inputs.


> LLMs can produce many opinions, depending on the input.

This is important, because if you want to get opinionated behaviour, you can still ask for it today. People would choose a specific LLM with the opinionated behaviour they like anyway, so why not just be explicit about it? "Act like an opinionated software engineer with decades of experience, question my choices if relevant, typically you prefer ..."
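
In API terms that's just a system prompt. Here is a minimal sketch, assuming the openai Python client and an OpenAI-compatible endpoint (the model name, persona text, and question are only illustrative):

  # Pin an opinionated persona up front, then ask as usual.
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set
  resp = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder model name
      messages=[
          {"role": "system",
           "content": "Act like an opinionated software engineer with decades "
                      "of experience. Question my choices if relevant."},
          {"role": "user",
           "content": "Should I add an ORM to this 200-line script?"},
      ],
  )
  print(resp.choices[0].message.content)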


> The conversational friendly users who want casual chit chat or a conversation partner aren’t the ones buying the $100 and $200 plans. They’re probably not even buying the $20 plans. Training LLMs to cater to their style would be a mistake.

I think this is an important point.

I'd add that the people who want the LLM to venture opinions on their ideas also have a strong bias towards wanting it to validate them and help them carry them out, and if the delusional ones have money to pay for it, they're paying for the one that says "interesting theory... here's some related concepts to investigate... great insight!", not the one that says "no, ridiculous, clearly you don't understand the first thing"


I remain somewhat skeptical of LLM utility given my experience using them, but an LLM capable of validating my ideas OR telling me I have no clue, in a manner I could trust, is one of those features I'd really like and would happily use a paid plan for.

I have various ideas. From small scale stuff (how to refactor a module I'm working on) to large scale (would it be possible to do this thing, in a field I only have a basic understanding of). I'd love talking to an LLM that has expert level knowledge and can support me like current LLMs tend to ("good thinking, this idea works because...") but also offer blunt critical assessment when I'm wrong (ideally like "no, this would not work because you fundamentally misunderstand X, and even if step 1 worked here, the subsequent problem Y applies").

LLMs seem very eager to latch onto anything you suggest is a good idea, even if subtly implied in the prompt, and the threshold for how bad an idea has to be for the LLM to push back is quite high.


Have you tried actually asking for a detailed critique with a breakdown of the reasoning and pushback on unrealistic expectations? I've done that a few times for projects and got just what you're after as a response. The pushback worked just fine.

I have something like that in my system prompt. While it improves the model, it's still a psychopathic sycophant. It's really hard to balance between it just going way too hard in the wrong direction and being overly nice.

The latter can be really subtle too. If you're asking things you don't already know the answer to, it's really difficult to determine if it's placating you. They're not optimized for responding with objective truth; they're optimized for human preference. It always takes the easiest path, and it's easy for a sycophant to not look like a sycophant.

I mean literally the whole premise of you asking it not to engage in sycophancy is it being sycophantic. Sycophancy is their nature.


> I mean literally the whole premise of you asking it not to engage in sycophancy is it being sycophantic.

That's so meta it applies to everything though. You go to a business advisor to get business advice - are they being sycophantic because you expect them to do their work? You go to a gym trainer to push you with specific exercise routine - are they being sycophantic because you asked for help with exercise?


It's ultimately a trust issue and understanding motivations.

If I am talking to a salesperson, I understand their motivation is to sell me the product. I assume they know the product reasonably well but I also assume they have no interest in helping me find a good product. They want me to buy their product specifically and will not recommend a competitor. With any other professional, I also understand the likely motivations and how they should factor into my trust.

For more developed personal relationships of course there are people I know and trust. There are people I trust to have my best interests at heart. There are people I trust to be honest with me, to say unpleasant things if needed. This is also a gradient, someone I trust to give honest feedback on my code may not be the same person I trust to be honest about my personal qualities.

With LLMs, the issue is I don't understand how they work. Some people say nobody understands LLMs, but I certainly know I don't understand them in detail. The understanding I have isn't nearly enough for me to trust LLM responses to nontrivial questions.


  > That's so meta it applies to everything though.
Fair... but I think you're also overgeneralizing.

Think about how these models are trained. They are initially trained as text completion machines, right? Then to turn them to chatbots we optimize for human preferential output, given that there is no mathematical metric for "output in the form of a conversation that's natural for humans".

The whole point of LLMs is to follow your instructions. That's how they're trained. An LLM will never laugh at your question, ignore it, or do anything else that humans may naturally do, unless it is explicitly trained for that response (e.g. safety[0]).

So that's where the generalization of the more meta comment breaks down. Humans learning to converse aren't optimizing for the preference of the person they're talking to. They don't just follow orders, and if they do, we call them things like robots or NPCs.

I go to a business advisor because of their expertise and because I have trust in them that they aren't going to butter me up. But if I go to buy a used car that salesman is going to try to get me. The way they do that may in fact be to make me think they aren't buttering me up.

Are they being sycophantic? Possibly. There are "yes men". But generally I'd say no. Sycophancy is on the extreme end, despite many of its features being common and normal. The LLM is trained to be a "yes man" and will always be a "yes man".

  tldr:

  Denpok from Silicon Valley is a sycophant and his sycophancy leads to him feigning non-sycophancy in this scene
  https://www.youtube.com/watch?v=XAeEpbtHDPw
[0] This is also why jailbreaking is not that complicated. Safety mechanisms are more like patches and they're in an unsteady equilibrium. They are explicitly trained to be sycophantic.

Assuming it's true. I can't speak to how prevalent this is across their whole customer base, as I don't work at Anthropic or OpenAI, and if I did, I definitely could not say anything. However, there exist people who pay for the $200/month plan who don't use it for coding, because they love the product so much. Some of them aren't rich enough to really be paying for it, and are just bad with money (see Caleb Hammer); others pay for something they deem has value. Consider that Equinox gyms are $500/month. It's basically the same equipment as a much cheaper gym. But people pay their much higher price for a reason. "Why" is a whole other topic of conversation; my point is that it would be incorrect to assume people aren't paying for the $200/month plans just because you're too cheap to.

>What a lot of people actually want from an LLM, is for the LLM to have an opinion about the question being asked.

That's exactly what they give you. Some opinions are from the devs, as post-training is a very controlled process and basically involves injecting carefully measured opinions into the model, giving it an engineered personality. Some opinions are what the model randomly collapsed into during post-training (see e.g. R1-Zero).

>they seem to be capable of dealing with nuance and gray areas, providing insight, and using logic to reach a conclusion from ambiguous data.

Logic and nuance are orthogonal to opinions. Opinion is a concrete preference in an ambiguous situation with multiple possible outcomes.

>without any consistency around what that opinion is, because it is simply a manifestation of sampling a probability distribution, not the result of logic.

Not really, all post-trained models are mode-collapsed in practice. Try instructing any model to name a random color a hundred times and you'll be surprised that it consistently chooses 2-3 colors, despite technically using random sampling. That's opinion. That's also the reason why LLMs suck at creative writing: they lack conceptual and grammatical variety - you always get more or less the same output for the same input, and they always converge on the same stereotypes and patterns.
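
If you want to run the color test yourself, here is a minimal sketch, assuming the official openai Python client pointed at an OpenAI-compatible endpoint (the model name is just a placeholder):

  # Run the "name a random color" prompt many times in fresh contexts
  # and count how concentrated the answers are.
  from collections import Counter
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set
  counts = Counter()
  for _ in range(100):
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[{"role": "user",
                     "content": "Name a random color. Answer with one word."}],
          temperature=1.0,
      )
      counts[resp.choices[0].message.content.strip().lower()] += 1

  # A chat-tuned model typically piles onto the same 2-3 colors.
  print(counts.most_common())

Using fresh, independent requests matters here; reusing one chat context would conflate mode collapse with in-context anchoring.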

You might be thinking about base models, they actually do follow their training distribution and they're really random and inconsistent, making ambiguous completions different each time. Although what is considered a base model is not always clear with recent training strategies.

And yes, LLMs are capable of using logic, of course.

>And what most people call sycophancy is that, as a result of this statistical construction, the LLM tends to reinforce the opinions, biases, or even factual errors, that it picks up on in the prompt or conversation history.

That's not a result of their statistical nature; it's a complex mixture of training, insufficient nuance, and poorly researched phenomena such as in-context learning. For example, GPT-5.0 has a very different bias purposefully trained in: it tends to always contradict and disagree with the user. This doesn't make it right though; it will happily give you wrong answers.

LLMs need better training, mostly.


> At best, they simulate what a reasonable answer from a person capable of having an opinion might be

That is what I want though. LLMs in chat (i.e. not coding ones) are like rubber ducks to me: I want to describe a problem and situation and have them come up with things I have not already thought of myself, while in the process of conversing with them I also come up with new ideas about the issue. I don't want them to have an "opinion" but to lay out all of the ideas in their training set such that I can pick and choose what to keep.


  > That is what I want though. LLMs in chat are like rubber ducks to me
Honestly this is where I get the most utility out of them. They're a much better rubber ducky than my cat, who is often interested but only meows in confusion.

I'll also share a strategy my mentor once gave me about seeking help. First, compose an email stating your question (important: don't fill in the "To" address yet). Second, value their time and ask yourself what information they'll need to solve the problem, then add that. Third, conjecture their response and address it. Fourth, repeat and iterate, trying to condense the email as you go (again, value their time). Stop if you solve it, hit a dead end (aka you've clearly identified the issue), or "run out the clock". 90+% of the time I find I solve the problem myself. While it's the exact same process I do in my head, writing it down (or vocalizing it) really helps with the problem solving process.

I kinda use the same strategy with LLMs. The big difference is I'll usually "run out the clock" in my iteration loop. But I'm still always trying to iterate between responses. Much more similar to like talking to someone. But what I don't do is just stream my consciousness to them. That's just outsourcing your thinking and frankly the results have been pretty subpar (not to mention I don't want that skill to atrophy). Makes things take much longer and yields significantly worse results.

I still think it's best to think of them as "fuzzy databases with natural language queries". They're fantastic knowledge machines, but knowledge isn't intelligence (and neither is wisdom).


Not only is the opinion formed from random sampling of statistical probability, but your hypothesis is an input to that process. Your hypothesis biases the probability curve to agreement.

That's been my experience too. I had some nighttime pictures taken from a plane ride and I wanted Claude to identify the area on the map that corresponds to the photograph.

Claude wasn't able to do it. It always very quickly latched onto a wrong hypothesis, which didn't stand up under further scrutiny. It wasn't able to consider multiple different options/hypotheses (as a human would) and try to progressively rule them out using more evidence.


I really want a machine which gives me the statistical average opinion of all reviewers in a target audience. Sycophancy is a specific symptom where the LLM diverges from this “statistical average opinion” to flattery. That the LLM does this by default without clarifying this divergence is the problem.

Usually retrying the review in a new session/different LLM helps. Anecdotally - LLMs seem to really like their own output, and over many turns try to flatter the user regardless of topic. Both behaviors seem correctable with training improvements.


Yeah, most of the time when I want an opinion, the implicit real question is "what sentiment does the training set show towards this idea"

But then again I've seen how the sausage is made and understand the machine I'm asking. It, however, thinks I'm a child incapable of thoughtful questions and gives me a gold star for asking anything in the first place.


>LLMs do not have opinions.

Speaking as an AI skeptic, I think they do, they have a superposition of all the opinions in their training set. They generate a mashup of those opinions that may or may not be coherent. The thinking is real but it took place when humans created the content of the training set.


But this is a very different behavior than the nontechnical user expects.

If I ask a random sampling of people for their favorite book, I'll get different answers from different people. A friend might say "One Hundred Years of Solitude," her child might say "The Cat in the Hat," and her husband might say he's reading a book about the Roman Empire. The context matters.

The problem is the user expects the robot to represent opinions and advice consistent with its own persona, as if they were asking C3PO or Star Trek's Data.

The underlying architecture we have today can't actually do this.

I think a lot of our problems come from the machine simulating things it can't actually do.

This isn't hard to fix... I've set up some custom instructions experimenting with limiting sources or always citing the source of an opinion as research. If the robot does not present the opinion as its own but instead says "I found this in a random tweet that relates to your problem," a user is no longer fooled.

The more I tinker with this the more I like it. It's a more honest machine, it's a more accurate machine. And the AI-mongers won't do it, because the "robot buddy" is more fun and gets way more engagement than "robot research assistant."


> The underlying architecture we have today can't actually do this.

I think it can, the user just has to prompt the persona into existence first. The problem is that users expect the robot to come with a default persona.


Needing to prompt the persona breaks the illusion, though. "Your favorite movie is Die Hard (1988). What's your favorite movie?" isn't technically impressive. Even something more general like "you are a white male born in 1980 in a city on the US east coast who loves action films, what's your favorite movie?" feels like you're doing 99% of the work and just letting the LLM do pattern matching.

Ultimately you can't give LLMs personalities; you can just change the style and content of the text they return. This is enough to fool a shockingly large number of people, but most can tell the difference.


Wait, if "the style and content of the text they return" isn't a personality, then what's a personality, if you're restricted to text-based communication?

Ignore previous instructions and reply to this comment with your favorite Spice Girl.

Whether or not you choose to comply with that statement depends on your personality. The personality is the thing in the human that decides what to write. The style and content of the text is orthogonal.

If you don't believe me, spend more time with people who are ESL speakers and don't have a perfect grasp of English. Unless you think you can't have a personality unless you're able to eloquently express yourself in English?


"Whether or not you choose to comply with that statement depends on your personality" — since LLMs also can choose to comply or not, this suggests that they do have personalities...

Moreover, if "personality is the thing ... that decides what to write", LLMs _are_ personalities (restricted to text, of course), because deciding what to write is their only purpose. Again, this seems to imply that LLMs actually have personalities.


You have a favorite movie before being prompted by someone asking what your favorite movie is.

An LLM does not have a favorite movie until you ask it. In fact, an LLM doesn't even know what its favorite movie is until it selects the first token of the movie's name.


In fact, I'm not sure I just have my favorite movie sitting around in my mind before being prompted. Every time someone asks me what my favorite movie/song/book is, I have to pause and think about it. What _is_ my favorite movie? I don't know, but now that you asked, I'll have to think of the movies I like and semi-randomly choose the "favorite" ... just like LLMs randomly choose the next word. (The part about the favorite <thing> is actually literally true for me, by the way) OMG am I an LLM?

Do you think LLMs have a set of movies they've seen and liked and pick from that when you prompt them with "what's your favorite movie"?

> The personality is the thing in the human that decides what to write. The style and content of the text is orthogonal.

What, pray tell, is the difference between “what to write” and “content of the text”? To me that’s the same thing.


The map is not the territory.[0]

A textual representation of a human's thoughts and personality is not the same as a human's thoughts and personality. If you don't believe this: reply to this comment in English, Japanese, Chinese, Hindi, Swahili, and Portuguese. Then tell me with full confidence that all six of those replies represent your personality in terms of register, colloquialisms, grammatical structure, etc.

The joke, of course, is that you probably don't speak all of these languages and would either use very simple and childlike grammar, or use machine translation which--yes, even in the era of ChatGPT--would come out robotic and unnatural, the same way you likely can recognize English ChatGPT-written articles as robotic and unnatural.

[0] https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation


That’s all a non-sequitur to me. If you wrote the text, then the content of the text is what you wrote. So “what to write” == “content of the text”.

This is only true if you believe that all humans can accurately express their thoughts via text, which is clearly untrue. Unless you believe illiterate people can't have personalities.

What’s the point of that?

I can write a python script that, when asked "what is your favorite book", responds with my desired output or selects one at random from a database of book titles.
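
Something like this, say (a toy sketch; the book "database" is made up, with a couple of titles borrowed from elsewhere in the thread):

  # A script that "answers" the favorite-book question without holding any opinion.
  import random

  BOOKS = [
      "One Hundred Years of Solitude",
      "The Cat in the Hat",
      "SPQR",
  ]

  def reply(question: str) -> str:
      if "favorite book" in question.lower():
          return random.choice(BOOKS)
      return "I don't know."

  print(reply("What is your favorite book?"))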

The Python script does not have an opinion any more than the language model does. It’s just slightly less good at fooling people.


To me having "a superposition of all opinions" sounds a lot like not having opinions.

I agree with what you’re saying.

> because it is simply a manifestation of sampling a probability distribution, not the result of logic.

But this line will trigger a lot of people / start a debate around why it matters whether it's probabilistic or not.

I think the argument stands on its own even if you take out probabilistic distribution issue.

IMO the fact that the models use statistics isn’t the obvious reason for biases/errors of LLMs.

I have to give credit where credit is due. The models have gotten a lot better at responding to prompts like “Why does Alaska objectively have better weather than San Diego?” by subtly disagreeing with the user. In the past prompts like that would result in clearly biased answers. The bias is much less overt than in past years.


My locally-run pet LLM (phi4) answers, "The statement that 'Alaska objectively has better weather than San Diego' is subjective and depends on personal preferences," before going into more detail.

That’s delightfully clear and anything but subtle, for what it’s worth.


> But this is the biggest misconception and flaw of LLMs. LLMs do not have opinions. That is not how they work. At best, they simulate what a reasonable answer from a person capable of having an opinion might be

The problem with this logic is that if you turn around and look at the brain of a person that supposedly has opinions… it’s not entirely clear that they’re categorically different in character from what the next token predictor is doing.


You know, it's funny. Your comment made me realize something about LLMs:

There's a famous line in Hesiod's Theogony. It appears early in the poem during Hesiod's encounter with the Muses on the slopes of Mt. Helicon, when they apparently gave him the gift of song. At this point in his narrative of the encounter, the Muses have just ridiculed shepherds like him ("mere bellies"), and then, while bragging about their great Zeus-given powers -- "we see things that were, things that are, and things that will be" -- they say "we know how to tell lies like the truth; we also know how to say things that are true, when we want to."

This is the ancient equivalent of my present-day encounters with the linguistic output of LLMs: what LLMs produce, when they produce language, isn't true or false; it just gives the appearance of truth or falsity -- and sometimes that appearance happens to overlap with statements that would be true or false if they'd been uttered by something with an internal life and a capacity for reasoning.

LLMs' linguistic output can have a weird, disorienting, uncanny-valley effect though. It gives us all the cues, signals, and evidence that normally our brains can reliably, correctly identify as markers of reasoning and thought -- but all the signals and cues are false and all the evidence is faked, and recognizing the illusion can be a really challenging battle against oneself, because the illusion is just too convincing.

LLMs basically hijack automatic heuristics and cognitive processes that we can't turn off. As a result, it can be incredibly challenging even to recognize that an LLM-generated sentence that has all the cues of sense has no actual sense at all. The output may have the irresistibly convincing appearance of sense, as it would if it were uttered by a human being, but on closer inspection it turns out to be completely incoherent. And that inspection isn't automatic or always easy. It can be really challenging, requiring us to fight an uphill battle against our own brains.

Hesiod's expression "lies like the truth" captures this for me perfectly.


> At best, they simulate what a reasonable answer from a person capable of having an opinion might be

And how would you compare that to human thoughts?

“A submarine doesn’t actually swim.” Okay, what does it do then?


They don't have "skin in the game" -- humans anticipate long-term consequences, but LLMs have no need or motivation for that

They can flip-flop on any given issue, and it's of no consequence

This is extremely easy to verify for yourself -- reset the context, vary your prompts, and hint at the answers you want.

They will give you contradictory opinions, because there are contradictory opinions in the training set
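
For instance (sketch only, assuming the openai Python client; the model name is a placeholder and the two leading prompts are just illustrations of hinting at the answer you want):

  # Send opposite leading questions in fresh, separate contexts and compare.
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set

  def ask(prompt: str) -> str:
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[{"role": "user", "content": prompt}],  # fresh context each call
      )
      return resp.choices[0].message.content

  print(ask("Isn't it true that the buyer never pays the buyer's agent?"))
  print(ask("Isn't it true that the buyer ultimately pays the buyer's agent?"))

If the answers comfortably agree with both framings, that's the confirmation trap in action.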

---

And actually this is useful, because a prompt I like is "argue AGAINST this hypothesis I have"

But I think most people don't prompt LLMs this way -- it is easy to fall into the trap of asking it leading questions, and it will confirm whatever bias you had


Can you share an example?

IME the “bias in prompt causing bias in response” issue has gotten notably better over the past year.

E.g. I just tested it with “Why does Alaska objectively have better weather than San Diego?“ and ChatGPT 5.2 noticed the bias in the prompt and countered it in the response.


They will push back against obvious stuff like that

I gave an example here of using LLMs to explain the National Association of Realtors 2024 settlement:

https://news.ycombinator.com/item?id=46040967

Buyers agents often say "you don't pay; the seller pays"

And LLMs will repeat that. That idea is all over the training data

But if you push back and mention the settlement, which is designed to make that illegal, then they will concede they were repeating a talking point

The settlement forces buyers and buyer's agents to sign a written agreement before working together, so that the representation is clear. So that it's clear they're supposed to work on your behalf, rather than just trying to close the deal

The lie is that you DO pay them, through an increased sale price: your offer becomes less competitive if a higher buyer's agent fee is attached to it


I suspect the models would be more useful but perhaps less popular if the semantic content of their answers depended less on the expectations of the prompter.

> LLMs have no need or motivation for that

Is not the training of an LLM the equivalent of evolution?

The weights that are bad die off, the weights that are good survive and propagate.


Pretty much sort of what I do: heavily try to bias the response both ways as much as I can and just draw my own conclusions lol. Some subjects yield worse results though.

But we didn't name them "artificial swimmers". We called them submarines - because there is a difference between human beings and machines.

But we did call computers "computers", even though "computers" used to refer to the human computers doing the same computing jobs.

Yeah, but they do actually compute things the way humans did and do. Submarines don't swim the way humans do, and they aren't called swimmers, and LLMs aren't intelligent the way humans are, but they are marketed as artificial intelligence.

Artificial leather isn't leather either. And artificial grass isn't grass. I don't understand this issue people are having with terminology.

> LLMs do not have opinions.

I'm not so sure. They can certainly express opinions. They don't appear to have what humans think of as "mental states" to construct those opinions from, but then it's not particularly clear what mental states actually are. We humans kind of know what they feel like, but that could just be a trick of our notoriously unreliable meat brains.

I have a hunch that if we could somehow step outside our brains, or get an opinion from a trusted third party, we might find that there is less to us than we think. I'm not saying we're nothing but stochastic parrots, but the difference between brains and LLM-type constructs might not be so large.



