There are actually two different SRE roles: the one people are describing above, where you're 85-99% of the way to the SWE (Software Engineer) bar and have sysadminy experience, and another where you're at 100+% of the SWE bar and optionally have sysadminy experience. The former is called SRE-SE (Systems Engineer), the latter SRE-SWE.
SRE-SE interviews are super heavy on the sysadmin stuff usually, with less (but still significant) attention paid to SWE skills, whereas SRE-SWE interviews may not even have an SRE component (it's possible for candidates in the 'normal' SWE hiring pipeline to be shunted to SRE-SWE post-interview).
Yeah, a lot of people don't understand this distinction. You have your pure SWEs, hired as such, who were either picked for or switched to SRE-SWE. Then you have people who were recruited into SRE-SWE from the beginning. People in the SWE and SRE-SWE job classes can move freely between them. Finally, you have people who were recruited as SRE-SysEng, or who were recruited as SWEs and didn't quite make the cut. These folks have to do a transfer interview to jump to the SWE or SRE-SWE roles.
I'm an SRE-SE and regularly do phone interviews for SRE-SE candidates.
While I do tend to spend more of the interview time talking about sysadmin tools, operating systems, networking, databases, security and troubleshooting, I still expect candidates to have reasonably good coding chops.
The difference is that the coding questions tend to be more task-oriented or procedural (e.g. log processing, building automation pipelines, implementing standard Unix CLI tools), rather than the algorithmically challenging or math-oriented problems we'd usually ask SWE candidates.
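To give a concrete flavor of that, here's a minimal sketch of the kind of log-processing exercise I mean (the log format and the top-10 output are assumptions of mine, not an actual interview question):

```python
#!/usr/bin/env python3
"""Toy version of a task-oriented coding question: report the most
frequent client IPs in a web server access log."""
import collections
import sys

def top_ips(lines, n=10):
    # Assumes the client IP is the first whitespace-separated field,
    # as in common access-log formats.
    counts = collections.Counter(
        line.split()[0] for line in lines if line.strip()
    )
    return counts.most_common(n)

if __name__ == "__main__":
    # Usage: top_ips.py access.log
    with open(sys.argv[1]) as f:
        for ip, count in top_ips(f):
            print(f"{count:8d} {ip}")
```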
SRE candidates on both the SE and SWE sides need to be able to design and reason about large systems, making trade-offs between performance (especially latency), redundancy, and cost.
Thanks. Is being well-versed in C a prerequisite for the role then? I'm imagining you need to be fluent in at least one statically compiled language or ???
In my interviews you code in whichever language you prefer. Some interviewers will ask you to use a specific language that's mentioned on your resume. In general, the assumption is that if you show strong coding skills in some language, you won't have much trouble teaching yourself the languages your team uses (typically some subset of C++, Java, Python, Go, Borgmon).
I think you're misreading that sentence. "Examples are the programming languages Clojure, which is a contemporary dialect of Lisp, Rebol and Refal." claims that Clojure is a dialect of Lisp (indisputable), not that Rebol is. Rebol is just another example of a homoiconic language.
"This paper studied the incidence and characteristics of
DRAM errors in a large fleet of commodity servers. Our
study is based on data collected over more than 2 years and
covers DIMMs of multiple vendors, generations, technolo-
gies, and capacities. All DIMMs were equipped with error
correcting logic (ECC) to correct at least single bit errors"
from conclusion 1.
"The conclusion we draw is that error correcting codes are
crucial for reducing the large number of memory errors to
a manageable number of uncorrectable errors. In fact, we
found that platforms with more powerful error codes (chip-
kill versus SECDED) were able to reduce uncorrectable er-
ror rates by a factor of 4–10 over the less powerful codes."
I don't think that optional typing is in general an attempt to solve sucky static type systems -- instead, many researchers and organizations view it as a way of converting programmers who have only seen Java into developers who willingly use the type checker. It's a great motivator when you experience a bug in production that the runtime helpfully reminds you could've been caught if only you'd turned on types in that code :).
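A tiny illustration, using Python's optional annotations plus a checker like mypy (the function names here are made up):

```python
def total_cents(prices):
    # No annotations: passing the wrong type only blows up at runtime.
    return int(sum(prices) * 100)

def total_cents_typed(prices: list[float]) -> int:
    # Same body, but now a static checker can reject bad call sites
    # before the code ever runs.
    return int(sum(prices) * 100)

try:
    total_cents("19.99")  # the production bug: a str, not a list of floats
except TypeError as e:
    print("runtime:", e)

# total_cents_typed("19.99")
# ^ mypy reports something like:
#     Argument 1 to "total_cents_typed" has incompatible type "str";
#     expected "list[float]"
```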
I'm a web developer, but I can't really use my skills to provide an open source web app the way I'd like to. I'd like to build a small server-side budgeting app that people can use from their computers or phones to record expenses, but there's no way I can ask people to find a web host that lets them run Rails, or to set up a Heroku account or whatever.
So my only alternative would be to run the service myself, but then I'm storing other people's data, I have to worry about scaling if lots of people use it, and user accounts, and all this stuff.
The idea of Sandstorm is that folks run the platform on their personal servers, and it lets them browse an app-store-like interface and one-click install these server-side apps. So I'd bundle up my budgeting Rails app as a Sandstorm package, and if someone wants to track their expenses from a variety of devices, they install the app. Now they're running it, so the data is theirs, there are no scaling issues, and user authentication is provided by Sandstorm.
Sandstorm is still in its infancy, so there aren't a lot of apps available and the development APIs are still being worked on, but I hope it's the future. It would lead to a more decentralized web with better privacy and users owning their own data.
I'm hopeful, if not optimistic, for a future where every family is expected to have their own little server running somewhere. They'd access that server through the Sandstorm web interface and could easily add little apps to it: my budgeting app, a webmail app, some future federated profile app to replace Facebook, etc.
You know, a picture might really explain it better. If it's something that sits on top of Linux and manages software installation, it's easy to illustrate.
The sentence you quote doesn't appear on the page. I think it might have said that at one time, but that was long ago, so I wonder if you're somehow seeing some ancient cache or something? Or are you looking at some other page?
With that said, it's true we've had a tough time coming up with a two-line summary that fully explains what Sandstorm is. If you could suggest what would have made more sense to you, that would be helpful. Otherwise our strategy has been to try to push people towards trying the demo, which I think illustrates what it is much more quickly than words can.
If it's not 'too popular', it defaults to 'suggestions' mode, which will show all the vandalism -- switch to 'viewing' mode, and you'll only see the original doc.
I'm pretty sure OpenBSD core doesn't have any of those, and that's really what the project cares about. Problems may happen in ports, but there's a reason they're ports.
Also, as stated in the article:
> Finally, if you are one of the exceedingly few people for whom the clock being off by a second actually matters, then I'm pretty sure you also know how to deal with it.
I would think waveform capture on the electric grid could be affected, since those captures feed time-series data logging and subsequent analysis. But they're frequently relative-time snapshots, or use a different time basis, so I'm not sure they're actually impacted.
The only case I'm aware of where people would care is if the grid drooped at that exact moment and someone was trying to do analysis around it. Otherwise things just happen in real time, and time tracking isn't really involved except for logging.
Anyway, I frequently run into time-series trending clients that can't even display daylight saving time transitions correctly, let alone handle leap seconds correctly.
Shouldn't `ntp`, over a short period of time, simply lengthen the actual amount of wall clock time per 'second' exposed to the kernel/userland? So there should never be a huge delta of 1s, nor should time ever go in reverse.
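That's what ntpd's slew mode does, but slewing is rate-limited (the conventional cap is 500 ppm, i.e. the clock runs at most 0.05% fast or slow), so absorbing a whole second takes a while -- and by default ntpd steps rather than slews for offsets over roughly 128 ms, which is why a leap second can still show up as a jump unless you force slewing (e.g. `ntpd -x`) or use leap smearing. Back-of-the-envelope, assuming the 500 ppm cap:

```python
def slew_duration_s(offset_s: float, max_rate_ppm: float = 500.0) -> float:
    # Time needed to absorb a clock offset by slewing at the maximum rate.
    return abs(offset_s) / (max_rate_ppm / 1_000_000)

print(slew_duration_s(1.0))    # 2000.0 seconds, i.e. about 33 minutes
print(slew_duration_s(0.128))  # 256.0 -- near ntpd's default step threshold
```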