
"Using stubs for inner calls" is literally a top-level description of London-style TDD.


Only Detroit school! Mocks should be banned (unless they mock an external system). Cargo-culting London-style tests is one of the reasons OOP got a bad name.


Leaving mocks in place once you're done should be banned, I agree with that. I think they're fine as an exploratory tool to figure out what an interface should look like.


A mock means a side effect is necessary (and that's quite rare), so its use is justified in such a case. If a stub is needed, it mostly disappears once the code is refactored to a more functional style.

Unfortunately, most OOP developers bury side effects at the bottom of the call stack, instead of returning an object that describes the side effect to happen (which is then executed at the shallow and simple application layer).

Said result object can be tested like any other function result, and no mocking is required.
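A minimal sketch of what I mean, assuming a made-up email example (the names `SendEmail`, `plan_welcome`, and `execute` are illustrative, not from any library):

```python
from dataclasses import dataclass

# The side effect is described as plain data, not performed.
@dataclass(frozen=True)
class SendEmail:
    to: str
    subject: str

def plan_welcome(user_email: str) -> SendEmail:
    # Pure function: decides WHAT should happen, does nothing itself.
    return SendEmail(to=user_email, subject="Welcome!")

def execute(effect: SendEmail, smtp) -> None:
    # The thin application layer is the only place effects actually run.
    smtp.send(effect.to, effect.subject)

# The test is a plain value comparison; no mocks anywhere.
assert plan_welcome("a@b.com") == SendEmail(to="a@b.com", subject="Welcome!")
```

Only `execute` ever touches the SMTP client, so only one integration test needs the real (or fake) transport.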


I love this sort of approach for one-shot externalities but what about when your entire program is a conversation with external components? My current project coordinates software repositories and services within AWS and I find myself using a lot of mocks in testing.

I can return a tree of lambdas but then I have to resolve them against something and that's just replacing mocks with lambdas, really. Not sure it's any better in practice.


Great question! I've worked with two approaches:

1. Sagas: a centralised place to handle business flows

2. An event system

Start with 1. and eventually (pun intended) move to 2. Both allow parallelised interactions with external systems, deferred decisions, etc., whatever the business requires.
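A toy sketch of approach 1, assuming the usual saga shape of (action, compensation) pairs; the step names are invented for illustration:

```python
def run_saga(steps):
    """Run each (action, compensate) pair; on failure, undo completed
    steps in reverse order, then re-raise."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()
        raise

def fail():
    raise RuntimeError("charge failed")

log = []
steps = [
    (lambda: log.append("reserve"), lambda: log.append("unreserve")),
    (fail, lambda: None),  # second step fails, triggering rollback
]
try:
    run_saga(steps)
except RuntimeError:
    pass
# log is now ["reserve", "unreserve"]: the first step was compensated.
```

Each action/compensation can be tested as an ordinary function, and the saga runner itself is tested once, with fakes like the lists above.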


Sorry I wasn't clear, I'm talking about imperative code that coordinates multiple external actions. In a world where it's modeled as jobs in a distributed system, I agree that each job can be nicely functional (and I love this pattern).


> A mock means a side effect is necessary

Eh, kinda. London style says to use mocks as you work down the call stack, whether or not there's a side effect. At some point you might hit an edge where you're calling into third party code for a side effect (and all side effects are calling third party code), but that's not really the point.

This is where the "never mock code you don't own" principle comes in: if you're mocking out third party code, it needs to have a wrapper of your own code to hide it, so you're in control of the interface. At least in theory. You want to be passing that thing in anyway, so for the tests you pass down a MockThing, or a NullThing, or an InMemoryThing. That way the side effect can happen at the lower level, but the choice of exactly whether there's an observable side effect is still in control of the top-level application.
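To make that concrete, here's a hedged sketch of the wrapper idea: hide the third-party client (boto3's S3 client, say) behind your own interface, and pass an InMemoryThing in tests. `S3Storage`, `InMemoryStorage`, and `archive_report` are names I'm making up:

```python
class S3Storage:
    """Your own wrapper around the third-party client; you own this interface."""
    def __init__(self, boto_client, bucket):
        self._client = boto_client
        self._bucket = bucket

    def put(self, key, data):
        self._client.put_object(Bucket=self._bucket, Key=key, Body=data)

class InMemoryStorage:
    """Test double that implements the same interface with a dict."""
    def __init__(self):
        self.objects = {}

    def put(self, key, data):
        self.objects[key] = data

def archive_report(storage, report_id, body):
    # Application code depends only on the wrapper interface passed in.
    storage.put(f"reports/{report_id}", body)

store = InMemoryStorage()
archive_report(store, "42", b"quarterly numbers")
assert store.objects["reports/42"] == b"quarterly numbers"
```

The real `S3Storage` gets exercised once in an integration test; everywhere else the in-memory fake keeps tests fast, and you never mock `boto3` directly.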

Really you and I are describing two different ways of achieving the same result: moving side effects to somewhere they're easy to handle independently of the logic, whether that's functional core/imperative shell, or dependency inversion. You don't really need mocks for either because the only time you're actually testing the side effect itself is probably an integration test, but they're a useful tool to get to that point.



