
>not bc of performance, but bc I hate OSXs Cmd-Tab ordering

That's a trivial but surprisingly reasonable justification for using two browsers. OSX has some genuinely annoying quirks.


The 20,000 cap for advanced degree holders already exists in the current system, and it is already over-subscribed. So this would not reduce the number of H-1Bs issued.


The above commenter probably spoke a little too hastily. "Building Machines" is indeed not a paper about a neural network method, but a survey of problems where we should expect neural networks to do better than they currently do. That said, the paper isn't down on deep learning; rather, it says "we need extra stuff, either additional methods or more inductive biases in our models".


Could you guys elaborate on the relationship between PyText, torchtext, and AllenNLP? I've briefly used the latter two, but with how quickly things are moving it'd be nice to have a quick answer from the devs themselves.


PyText dev here. Torchtext provides a set of data abstractions that help with reading and processing raw text data into PyTorch tensors; at the moment we use Torchtext in PyText for training-time data reading and preprocessing.
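
To make that concrete, here is a minimal sketch of the Torchtext side (using the torchtext.data Field/Iterator API as it stands in late 2018; the file names and column layout are made up for illustration):

    from torchtext import data

    # Declare how each column of the raw file should be processed.
    TEXT = data.Field(sequential=True, tokenize=str.split, lower=True)
    LABEL = data.LabelField()

    # Hypothetical TSV files with "text<TAB>label" rows.
    train, valid = data.TabularDataset.splits(
        path="data", train="train.tsv", validation="valid.tsv",
        format="tsv", fields=[("text", TEXT), ("label", LABEL)])

    # Build vocabularies from the training split.
    TEXT.build_vocab(train)
    LABEL.build_vocab(train)

    # Batch and numericalize: each batch.text is a LongTensor of token ids.
    train_iter = data.BucketIterator(
        train, batch_size=32, sort_key=lambda ex: len(ex.text))
    batch = next(iter(train_iter))
    print(batch.text.shape)  # (seq_len, batch_size)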

AllenNLP is a great NLP modeling library aimed at providing reference implementations and prebuilt state-of-the-art models, and at making it easy to iterate on and research models for different NLP tasks.

We've built PyText to be a rich NLP modeling library (along the lines of AllenNLP), but with production capabilities baked into the design from day 1.

Some examples:
- We provide interfaces to make sure data preprocessing can be consistent between training and runtime.
- The model interfaces are compatible with ONNX and torch.jit (see the sketch below).
- A core goal for us in the next few months is to be able to run models trained in PyText on mobile.
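
On the ONNX/torch.jit point, the export path looks roughly like this (a generic PyTorch sketch with a toy model, not PyText's actual export code):

    import torch
    import torch.nn as nn

    # Toy stand-in for a trained text classifier.
    class TinyClassifier(nn.Module):
        def __init__(self, vocab_size=1000, dim=64, num_classes=5):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.fc = nn.Linear(dim, num_classes)

        def forward(self, tokens):
            # Mean-pool token embeddings, then classify.
            return self.fc(self.embed(tokens).mean(dim=1))

    model = TinyClassifier().eval()
    dummy = torch.randint(0, 1000, (1, 12))  # (batch, seq_len) of token ids

    # TorchScript via tracing: a serialized model that runs without Python.
    traced = torch.jit.trace(model, dummy)
    traced.save("classifier.pt")

    # ONNX export, e.g. for serving through a Caffe2/ONNX runtime.
    torch.onnx.export(model, dummy, "classifier.onnx")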

There are other differences as well, such as support for distributed training and multi-task learning.

That being said, so far our library of models has been mostly influenced by our current production use cases. We are actively working on enriching it with more models and tasks while keeping production capabilities and inference speed in mind.


>I grew up in a place practicing one-candidate votes (i.e., you choose between "in favor" that in the end will show 95+% and "against" with no alternatives) so acquired immunity to "citizen votes solve all problems" mentality.

I grew up in a place with effectively single-party rule, and it was imbued into our culture that it's pointless to vote because you'll never unseat that party anyway. A big change came when, in a fairly recent election, an opposition party won an electoral constituency, worth a mere 5% of parliamentary votes, by the thinnest of margins. I couldn't tell you what the opposition platform is today. But even though the opposition still wielded absolutely no legislative power, this led to an era of what many would consider a very electorate-friendly legislative push.

This "don't vote for the lesser of two evils" business is nonsense. Use your right to vote relentlessly: to punish arrogant politicians, to press on single issues, to fight for the lesser of two evils because it is the LESSER of two evils. Finding and balancing the lesser of two evils is your job as a voter.

Even if your desired choice has no chance of winning, grinding down the margin, year after year, makes the other side nervous and more willing to compromise. Even if your desired choice has no chance of losing, expanding the margin gives them more room to take less "centrist" stances and push for the things you want. If there's a lesser of two evils, keep voting until the more of two evils has no choice but to compromise and become less evil. Lather, rinse, repeat.


> This "don't vote for the lesser of two evils" business is nonsense.

I was talking literally, not figuratively: the two options were "yes" or "no" for a single candidate. There was no option for a second candidate, however unlikely to win. So, sorry, I am not buying "use your right to vote relentlessly" whatever the options. If the ballot box is rigged, the other three boxes of liberty (soap, jury, or ammo) should be considered instead. My 2c.


I want to commend you for trying to learn more about the immigration process. More often than not, I find that Americans tend not to know the hoops and travails that internationals have to jump through to work in the US, or even just to keep working in the US. Too often, I've heard "H-1B is for cheap foreign labor - just apply for an EB-1/2 or something."


Yep, all of the Americans I've dealt with have been shocked to find out how difficult it is to immigrate. In fairness, I had no idea either before I tried.


"Evidence suggests that the heat death of the universe is unavoidable."


To put a futurist spin on it: advertising is the commercialization around the information bottleneck in the information age.


Advertising is actively creating and perpetuating the information bottleneck.


ULM-FiT and OpenAI's Transformer* are quite different. Both are pretrained language models, but ULM-FiT is a standard stack of LSTMs with a particular recipe for fine-tuning, whereas OpenAI's Transformer uses the much newer Transformer architecture and no really fancy tricks in the actual fine-tuning. I suspect the difficulty is with the Transformer model itself - this is not the first time I've heard that it is difficult to train.

* = To be clear, this refers to OpenAI's pretrained Transformer model. The Transformer architecture was from work at Google.
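
For anyone curious what the "particular recipe" amounts to: the core of ULM-FiT's fine-tuning is discriminative learning rates (earlier layers get smaller rates, scaled down by 2.6x per layer in the paper) plus gradual unfreezing. A rough plain-PyTorch sketch (the layer sizes and names are illustrative, not the actual ULM-FiT code):

    import torch
    import torch.nn as nn

    # Stand-in for a pretrained 3-layer LSTM language-model encoder.
    encoder = nn.ModuleList([nn.LSTM(400, 400) for _ in range(3)])
    head = nn.Linear(400, 2)

    # Discriminative learning rates: earlier (more general) layers get
    # smaller rates, scaled down by 2.6 per layer as in the paper.
    base_lr = 1e-3
    groups = [{"params": layer.parameters(), "lr": base_lr / 2.6 ** (3 - i)}
              for i, layer in enumerate(encoder)]
    groups.append({"params": head.parameters(), "lr": base_lr})
    optimizer = torch.optim.Adam(groups)

    # Gradual unfreezing: freeze the whole encoder, then unfreeze one
    # layer per epoch from the top down before fine-tuning that epoch.
    for layer in encoder:
        for p in layer.parameters():
            p.requires_grad = False
    for layer in reversed(encoder):
        for p in layer.parameters():
            p.requires_grad = True
        # ... run one epoch of fine-tuning here ...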


Self-documenting code base, right?

