Hacker News | cosminro's comments

One important aspect of leetcode/coding-contest problems is that they have an input size constraint and a time limit constraint.

You can use the two to figure out the time complexity a working solution needs, which narrows the search for a solution quite a bit. Here's a blog post about this idea (going from the input constraint to the possible algorithm): https://www.infoarena.ro/blog/numbers-everyone-should-know
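The constraint-to-complexity mapping can be sketched in a few lines. The ~10^8 operations-per-second budget below is a rough rule of thumb, not an exact figure, and the thresholds are illustrative assumptions:

```python
# Rule of thumb (an assumption, not an exact figure): roughly 10^8 simple
# operations fit in a typical 1-second time limit. Given the maximum input
# size n from the constraints, list which complexities stay within budget.
import math

def feasible_complexities(n, ops_budget=10**8):
    costs = {
        "O(n!)": math.factorial(n) if n <= 20 else float("inf"),
        "O(2^n)": float(2**n) if n <= 64 else float("inf"),
        "O(n^2)": n**2,
        "O(n log n)": n * max(1.0, math.log2(n)),
        "O(n)": n,
    }
    return [name for name, cost in costs.items() if cost <= ops_budget]

print(feasible_complexities(10**5))  # ['O(n log n)', 'O(n)']
```

So if the statement says n ≤ 10^5, you know to look for an O(n log n) or O(n) algorithm before you even start.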

Other than that, understanding a set of frequently used data structures and algorithms helps a ton. Here's a short course from Stanford on preparing for coding contests: http://web.stanford.edu/class/cs97si/


A lot of people reading the paper miss this. I guess it's not emphasized enough.

In the first paper, the self-play trained policy is at about 1500 Elo, while DarkForest2, a supervised-trained policy from Facebook, is around the same, if not better. So self-play wasn't of much use the first time around. In the AlphaZero paper, by contrast, the self-play trained policy reaches about 3000 Elo.


> A lot of people reading the paper miss this. I guess it's not emphasized enough.

Yeah, it's hilariously underemphasized. 1 sentence, literally. Fortunately I was able to ask Silver directly and get confirmation that it's the tree iteration: https://www.reddit.com/r/MachineLearning/comments/76xjb5/ama...


Attention has been used in machine translation, in various forms, since 2014 as far as I know (Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio: Neural Machine Translation by Jointly Learning to Align and Translate).
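The core idea in that line of work can be sketched in a few lines: score each encoder state against the current decoder state, softmax the scores into weights, and take the weighted sum as the context vector. The dot-product scoring below is a simplification of my own; the Bahdanau et al. paper uses a small learned network for the score:

```python
# Minimal pure-Python sketch of the attention idea: score, softmax, weighted
# sum. Dot-product scoring is an illustrative simplification (the paper
# scores with a learned feed-forward network instead).
import math

def attend(query, encoder_states):
    scores = [sum(q * k for q, k in zip(query, state)) for state in encoder_states]
    m = max(scores)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]          # alignment weights, sum to 1
    context = [sum(w * state[i] for w, state in zip(weights, encoder_states))
               for i in range(len(query))]       # weighted sum of states
    return context, weights

ctx, w = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
```

Here the query aligns best with the first encoder state, so that state gets the largest weight.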


He is referring to this paper https://research.googleblog.com/2017/06/multimodel-multi-tas...

There they used one model to do image recognition, speech recognition, and translation in the same network.


He also mentions sparsely activating only the neurons that matter, which they explore in https://arxiv.org/pdf/1701.06538.pdf
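The gist of that paper's sparse gating can be sketched quickly: only the k experts with the highest gate scores get nonzero weight, so only those experts are actually run. The logits below are toy inputs of my own, not a learned gating network:

```python
# Hedged sketch of top-k sparse gating: keep the k largest gate logits,
# give every other expert exactly zero weight, and renormalize with a
# softmax over the survivors.
import math

def top_k_gate(gate_logits, k):
    top = sorted(range(len(gate_logits)), key=lambda i: gate_logits[i])[-k:]
    exps = [math.exp(gate_logits[i]) if i in top else 0.0
            for i in range(len(gate_logits))]
    total = sum(exps)
    return [e / total for e in exps]

w = top_k_gate([0.1, 2.0, -1.0, 1.5], k=2)
print(w)  # exactly two nonzero entries
```

With k much smaller than the number of experts, most of the network stays inactive per input, which is the "activate only the neurons that matter" idea.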

Personally, I didn't find it very satisfying; I imagine something more fundamental and self-referential.


No batch norm for LSTMs


Totally agree, but RFs are more related to "CNN-ish problems" (image classification and...?), not RNNs or, generally, any graphical sequence model.

EDIT: to clarify, the "j/k" goes with the thing in parentheses ;-)


They probably used the meet-in-the-middle technique: http://www.infoarena.ro/blog/meet-in-the-middle
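The technique splits the search space in two halves, enumerates each half separately, and combines the results, turning 2^n work into roughly 2^(n/2). A standard textbook example is subset sum (my illustration, not necessarily their exact problem):

```python
# Meet-in-the-middle for subset sum: enumerate subset sums of each half
# (2^(n/2) each instead of 2^n total), then check whether some left sum
# pairs with a right sum to hit the target.
from itertools import combinations

def subset_sums(items):
    sums = set()
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            sums.add(sum(combo))
    return sums

def has_subset_sum(items, target):
    half = len(items) // 2
    left = subset_sums(items[:half])
    right = subset_sums(items[half:])
    return any(target - s in right for s in left)

print(has_subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)
```

For n = 40 this is 2^20 sums per half, which fits comfortably in a time limit that 2^40 never would.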


Here's some more background around Petr http://www.quora.com/Petr-Mitrichev/What-it-is-like-to-meet-...

And some discussion of Russia and China dominating coding contests: http://www.quora.com/Why-do-people-from-Eastern-Europe-and-C...


From the first Quora link:

"Petr almost never submits any solution without having a rigorous proof even when good mathematical intuition is enough and the proof is hard."

Wow! Reading this brings home how much I still have to learn. Humbling.


Any others?


Here is one more about data structures: https://gist.github.com/73b62f3f34cb59a2913b

The rest, I think, are a bit personal.


It takes time and practice. You won't get the practice at work. I dealt with a network flow problem a while back and a spell correction for URLs problem before that. In the last year I haven't done anything algorithmically challenging. Most of the time you're building systems that move data around.

A common mistake is that people learn about complex algorithms and data structures and remember their names, but then can't solve questions that involve just basic structures like vectors, stacks or binary search trees.
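As an illustration of the kind of basic-structure question meant here (my example, not one from an actual interview): checking balanced brackets with a plain stack.

```python
# Classic warm-up interview problem: are the brackets in a string balanced?
# Uses nothing fancier than a list as a stack.
def is_balanced(s):
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)                      # push every opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False                      # wrong or missing opener
    return not stack                              # leftovers mean unbalanced

print(is_balanced("([]{})"))  # True
print(is_balanced("([)]"))    # False
```

Interviewers care far more that you can reason cleanly through something like this than that you can name a fancy algorithm.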

I can't learn by reading a book; I have to solve problems to really grasp a concept. So my advice is this: do about 30 topcoder.com/tc Div 2 practice rooms in a short time span. You'll see other people's solutions and be able to learn from them. Also, the level of Div 2 problems is about the level of the more difficult interview questions.

Another suggestion is to try projecteuler.net


Picking up algorithm skills on the job is not that easy. You really have to spend some time, think problems through, play with them, and internalize the lessons.

Your job at most companies doesn't deal with algorithms that much, but it's very useful to know them for the rare occasions that do occur, so that you make better choices.

As for YCombinator: in startups, when you hit scaling issues it means your product is good and you're already on the path to success, and you can hire someone with better fundamentals to help you deal with the load.

mzuckerberg (facebook ceo) Algorithm Rating: 1044 Total Earnings: $124.00 School: Harvard University http://www.topcoder.com/tc?module=MemberProfile&cr=27613...

dangelo (former facebook cto) Algorithm Rating: 2351 Total Earnings: $3,082.50 School: California Institute of Technology http://www.topcoder.com/tc?module=MemberProfile&cr=26098...


Start picking up algorithm knowledge now online. This guy is excellent and entertaining:

  http://www.youtube.com/watch?v=RpRRUQFbePU&feature=relmfu

I too have had interviews with Google that included questions about big-O notation, and questions about sorting in interviews with several other companies.

Not knowing fundamentals like this never helps. Many interviewers will spoon-feed it to you, but you won't be answering questions with all your mental resources if you're spending your energy just trying to understand the context.

But it's not necessary. And it costs nothing other than time to fix. And it may even be fun.


Great link! Buckland is a stellar lecturer. I stumbled across him some time ago.

