Hacker News | capt_hotpants's comments

Any thoughts on keeping accurate RTC time on these things at sites with poor internet connectivity, weird latency or restrictions on outbound NTP?

I'm thinking a nice addition would be a small module to plug on top of the 40-pin GPIO connector with a GPS receiver in it and an antenna attached. The attached node could run NTP and pull accurate time from satellites. Throw in 3 of the modules, and NTP can use quorum to keep the time accurate even if one GPS module fails.
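On the software side, a minimal chrony sketch for one of the GPS-equipped nodes, assuming gpsd feeds the receiver into chrony via shared memory (the hostnames, subnet, and SHM segment number are assumptions, not anything from the Turing Pi docs):

```
# /etc/chrony/chrony.conf (sketch)
# Local GPS receiver, exported by gpsd into SHM segment 0
refclock SHM 0 refid GPS precision 1e-1

# The two other GPS-equipped nodes, so the quorum can outvote
# a single failed or wandering module
server node2.local iburst
server node3.local iburst

# Serve time to the rest of the cluster
allow 192.168.1.0/24
```

With three independent sources, chrony's normal falseticker detection gives you the quorum behavior for free.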


I can't think of anything better than a GPS receiver.

About 10 years ago we had a legacy system that was using a weird shaped (think UFO spaceship) FM radio receiver to get time. I never understood why our provider chose that over a simple USB GPS receiver.


Out of curiosity, does the firmware in the Compute Module allow for PXE boot?

Would be a neat way to handle corruptions of the SD / eMMC. Flick the power off through the Turing Pi's I2C, have one of the other nodes act as a PXE server, flick the power on again to the affected node, and have it pull in a tiny kernel and init-script through PXE that flashes the SD / eMMC with an image from the PXE node.
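If it does support network boot (the Pi 3's SoC does, though it speaks its own boot dialect rather than full PXE), the server half on the helper node could be a small dnsmasq fragment along these lines (paths and the subnet are assumptions):

```
# /etc/dnsmasq.d/netboot.conf (sketch)
enable-tftp
tftp-root=/srv/tftp           # holds the tiny kernel + flashing init-script
pxe-service=0,"Raspberry Pi Boot"
dhcp-range=192.168.1.0,proxy  # proxy-DHCP: answer boot requests, hand out no leases
```

Proxy-DHCP mode means the existing DHCP server on the network keeps handing out addresses; dnsmasq only chimes in with the boot offer.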


That's really nice. There are high-end products out there with redundant backplanes (dual or triple) and redundant PSUs (2 or more). To match that level of redundancy, it seems you would need 4x Turing Pi boards to survive the simultaneous failure of 1x arbitrary PSU and 1x arbitrary backplane (i.e. the integrated RTL8370 GigE switch).

The 2x external GigE ports on the Turing Pi help a lot, in that each cluster can connect simultaneously to 2 external GigE switches, in case one of those fails.

Could be an interesting exercise to set up and get it all working, in particular K3s + dynamic routing might be a fun challenge.


Holy moly!

Maybe now we can finally have the bug fixed where Thunderbird displays random stuff from the email body in the Inbox tree's From and Date fields (instead of the proper envelope sender and MTA timestamp from the Return-Path and final Received headers).

I hope they hire a full-time Firefox developer next. I'm going to upgrade to Quantum (currently my apt-get is holding Firefox at 54) as soon as accidentally hitting CTRL-Q instead of CTRL-W on Linux no longer triggers a magical Linux-only hotkey that blows up the whole Firefox browser and loses all of your web-related work.


I don't know a thing about AdSense, but I'm curious: why do the numbers depicted in the graph, while lower, show far fewer random-looking fluctuations after the switch?


Hint: the 2011 airbook is practically indistinguishable from the 2012 model, but the 2012 model has a cheaper SSD.

The factory firmware on that SSD has a bug that causes it to permanently lock up and disappear from the bus, with the result that all data on it is lost. Apple will replace the SSD when it happens, but the data will be gone.

If you're getting/using the 2012 model, I'd recommend getting a replacement SSD from OWC or Transcend before installing Linux.

The wifi is Broadcom and works poorly under Linux. Not so many crashes anymore, but frequent disconnects, horrible performance, and quick loss of signal when the ether is crowded or the laptop is more than a few meters from the AP.

The wifi card can be replaced, but AFAIK the Apple-provided firmware will not allow the machine to boot if the PCI ID of the wifi card does not match the original Broadcom one. I don't think coreboot runs on the airbook either.


I like it a lot. But in no way is this a brand new idea as stated in the article.

One reason that I like it, is that I've been peddling the "half the earth should belong to nature" idea for a couple of years now. Not sure where I picked it up originally though. Probably an article linked from HN.


A thousand times yes.

PGP and S/MIME are perfectly fine for high-security scenarios (whistleblowing and such), in other words for the 0.000001% use case.

For the 99.9% use case, all that regular folks need is for the sending MX to verify that the recipient MX owns the domain before delivery.

PGP and S/MIME, with their key-signing parties, government-owned PKI, et cetera, are either wild overkill or so utterly complex that they defeat the purpose for the 99.9% use case.

---

That said, you are going to break some of my software with this.

Specifically a SMTP reverse proxy, that looks at the domain part of RCPT TO, and transparently forwards the SMTP connection to the correct customer's MX for processing.

It could easily be unbroken again - BUT that would require that Postfix get its act together and add SNI support to its TLS stack (like all? other MX software does).

---

Implementation proposal:

1) Use RCPT domain-part for the SNI hostname.

2) Always try SMTPS port before SMTP port. Always try STARTTLS before plaintext.

3) Actually verify the certificate, duh.

4) Support a new EHLO header that mimics Strict-Transport-Security exactly.
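Steps 1 and 3 could be sketched in Go (the function name is made up for illustration; Go's crypto/tls verifies the peer chain against ServerName by default, so leaving InsecureSkipVerify unset gives you step 3 without extra code):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// smtpTLSConfig is a hypothetical helper: it builds the TLS config a
// sending MX would use for one delivery. ServerName doubles as the SNI
// hostname (step 1) and as the name the peer certificate is verified
// against (step 3) - InsecureSkipVerify stays false by default.
func smtpTLSConfig(rcptDomain string) *tls.Config {
	return &tls.Config{
		ServerName: rcptDomain, // SNI = domain-part of RCPT TO
		MinVersion: tls.VersionTLS12,
	}
}

func main() {
	cfg := smtpTLSConfig("example.org")
	fmt.Println(cfg.ServerName, cfg.InsecureSkipVerify)
}
```

The same config works for both the SMTPS and the STARTTLS path in step 2; only the point at which the handshake starts differs.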


Actually, even PGP is not fine for high security scenarios. For example, it doesn't encrypt headers, including the subject line, which I would say is just as important as an email's body.


Some people are trying to fix that: https://modernpgp.org/memoryhole/


Good point.

S/MIME might also have dependencies on PKI that could make it unsuitable in some high-security scenarios?


What is the 99% use case? Most people's email has one endpoint at either a big cloud provider or an employer, both of which are examining it at rest.


Good question!

For me personally, I'm not too worried about the NSA. But I do think it is quite silly that Gmail and Outlook show a little lock icon to indicate that security is ON, while the first thing that happens when you click Send is that your email is whizzed over half the internet in plaintext.

For organizations and corporations, I imagine they would very much like to be able to verify the identity of the receiving organization before delivering possibly sensitive email.

(The sender identity is already authenticated via DKIM.)


For organizations that subscribe to these cloud services (like Google Apps, or hosted Exchange) there are settings to enforce the use of TLS on both inbound and outbound. For example https://support.google.com/a/answer/2520500


If you read the fine print in bullet #8, you'll discover that there is, per default, no validation of the presented certificate at all.

Without proper certificate validation, the encryption step is cryptographically worthless. Anyone can MITM the traffic just by presenting a random certificate to the sender.


You are assuming that only one RCPT is being sent in the session.

(What does happen with your reverse-proxy when someone opens up a session and sends two mails, to two different customers?)


Currently, this happens:

   if hostname != target {
      // Temporary failure: ask the sending MX to retry the second
      // recipient in a fresh session.
      downstream.Write([]byte("452 Different domain, please reconnect and deliver separately.\r\n"))
      continue
   }
As a side-effect, email to the secondary domain is slightly delayed.

It is a full proxy, in the sense that it sees all of the traffic, so technically it could de-multiplex and spool to two different targets at the same time for the duration of the current email. It hasn't been a noticeable problem so far, but it would be nice to add at a later point. If/when someone complains, probably.


5) Slap a nice name on the feature-set, so products can advertise it. (An important step often forgotten by engineers ;-))

