It might make economic sense. Instead of someone who knows their value and asks for it, you can get the cheapest aspiring gig worker and replace them as needed.
Errors experienced by quantum computers can be decomposed into two types: X type (bit flips) and Z type (phase flips). Classical computers only have bit flips.
The surface code is a quantum error correcting code built out of a checkerboard of interlocking parity checks, where the qubits live at the intersections, the black squares check one type of parity (Z type) for their four neighboring qubits, and the white squares check the other kind of parity (X type). You need a 2d checkerboard instead of a 1d line because there are constraints on how the parity checks can overlap: adjacent parity checks that disagree about the type of check need to touch at an even number of places, not an odd number.
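To make that even-overlap constraint concrete, here's a toy Python sketch (qubit labels and helper name are hypothetical, not from any paper): an X check and a Z check anticommute at each qubit they share, so the pair is compatible exactly when the overlap is even.

    def checks_commute(p, q):
        # Parity checks as {qubit: 'X' or 'Z'} maps. X and Z anticommute
        # on a shared qubit; the two checks as a whole commute iff they
        # clash at an even number of qubits.
        clashes = sum(1 for i in p if i in q and p[i] != q[i])
        return clashes % 2 == 0

    x_check = {0: 'X', 1: 'X', 2: 'X', 3: 'X'}  # black square
    z_check = {2: 'Z', 3: 'Z', 4: 'Z', 5: 'Z'}  # white square sharing an edge
    print(checks_commute(x_check, z_check))     # True: even (two-qubit) overlap

    bad_z = {3: 'Z', 6: 'Z', 7: 'Z', 8: 'Z'}    # corner-style one-qubit overlap
    print(checks_commute(x_check, bad_z))       # False: odd overlap is forbidden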
The convention that one square is X and the other is Z is arbitrary. You can swap the X and Z roles of a qubit as long as you do it consistently. So instead of this local situation, where the 2x2 blocks indicate a nearby parity check:
xx ZZ
xx ZZ
q
ZZ xx
ZZ xx
You can just as well do this:
xx ZZ
xZ xZ
q
Zx Zx
ZZ xx
If you swap the X and Z roles of qubits in a checkerboardy sort of way, you end up with every parity check looking like
xZ
Zx
Which is sort of neat. Two different things became one thing. The paper shows that this arrangement has some other benefits. In particular, it does surprisingly well if one type of error dominates. There are proposals and concepts for hardware where one type of error dominates.
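As a sanity check, the same overlap test from the sketch above also passes for these uniform checks. With hypothetical qubit labels, two adjacent checks disagree at both of the qubits they share, so the clash count stays even and everything still commutes:

    # Two adjacent uniform xZ/Zx plaquettes sharing an edge (qubits 2 and 3);
    # they clash at both shared qubits, which is even, so they commute.
    # Reuses checks_commute from the sketch above.
    a = {0: 'X', 1: 'Z', 2: 'Z', 3: 'X'}
    b = {2: 'X', 3: 'Z', 4: 'Z', 5: 'X'}
    print(checks_commute(a, b))  # True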
I will note that when you attempt to translate these parity check specifications into a quantum circuit, they have a tendency to compile into the same thing regardless of whether you swapped the X and Z roles of a qubit in the parity check spec. So in a certain sense nothing has actually changed, and the improvement is an artifact of considering noise models that aren't aware of the step where you compile into a circuit. For the idea to really land, hardware with a dominant error type has to implement enough kinds of interactions between qubits that you never need the Hadamard operation, which swaps the X and Z axes of a qubit. Because if you ever use that operation, your dominant Z type errors have a simple route to being transformed into X type errors, which removes all the benefit. The hardware needs to give you the freedom to change how you compile the circuit. AFAIK, no one has yet demonstrated a two-qubit interaction while maintaining a dominant type of error.
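That Hadamard fact is easy to verify numerically; a minimal illustrative sketch with numpy, checking that conjugating by H exchanges X and Z:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard; its own inverse
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    # Conjugating by H swaps the X and Z axes, so a Z error occurring just
    # before an H gate acts exactly like an X error just after it.
    print(np.allclose(H @ X @ H, Z))  # True
    print(np.allclose(H @ Z @ H, X))  # True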
$56 of which is breadboard and hook-up wire - I'm sure it's possible to have a custom PCB fabbed for about $20, and if you do small runs the PCB cost drops to less than $1.
But then you get crappy breadboards that once in a while give a bad connection, without any visual clue. That can be very hard to debug, especially for the hardware newbie Ben Eater is teaching. Spending an extra $50 to make the process hassle-free and enjoyable is money well spent.
Better yet, use wirewrapping (https://en.wikipedia.org/wiki/Wire_wrap) instead of breadboarding, so the quality of the board doesn't matter. Buy a wirewrap tool (20 USD), some crappy perfboards from China (10 USD), some wires (5 USD), some long DIP headers (10 USD), and you're good to go. You may still need two or three breadboards for quick experiments.
I'm currently working on a homebrew Z80 computer. I'm at the stage of moving the preliminary designs to PCBs, but from my experience, wirewrapping is a lot better than breadboarding when you start building circuits with many signal wires. Breadboards are quick and simple at first, when you can "plug and play", but they quickly become a nightmare once the number of connections and wires exceeds 200. It may be less of a concern for a modern microcontroller, since I2C and SPI are serial interfaces, but on an 8-bit computer you'll hit this number really quickly, because the system bus is 24 bits wide (16-bit address, 8-bit data) and parallel. A bus driver using two unidirectional buffers has 24 x 2 + 8 = 56 wires, two RAM chips have 48 wires, a ROM has another 24 wires - that's already 128 wires for a bare-minimum system without even an I/O port, as tallied below. It will get out of control soon. Also, a 16-bit machine becomes a nightmare even quicker, as they have a 32-bit bus.
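A quick back-of-the-envelope check of those numbers (component list assumed from the description above; not a complete design):

    # Rough wire tally for the minimal Z80-style system described above
    # (assumed component list; not a complete design).
    addr, data = 16, 8
    bus = addr + data              # 24-bit parallel system bus
    bus_driver = 2 * bus + 8       # two unidirectional buffers plus control
    ram = 2 * bus                  # two RAM chips, each on the full bus
    rom = bus                      # one ROM
    print(bus_driver + ram + rom)  # 128 wires before a single I/O port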
On wirewrapped boards, you'll never get a bad connection without any visual clue, the connection is as solid as soldering, and there are no jumper wires hanging in mid-air to stop you from probing with an oscilloscope. Strongly recommended. To learn more, search for "wirewrap electronics" on YouTube.
You're severely underestimating the cost of wire-wrap sockets. They're a low-demand item, so they're rather expensive; a few dollars for each socket is typical, and DIP40 sockets can get into the $10 range. (The sockets will cost more than most of the parts in them!)
> You're severely underestimating the cost of wire-wrap sockets. The sockets will cost more than most of the parts in them!
Which is why getting them from China is a good idea. I'm building the real thing and I'm well aware of that, but I've found a very economical solution: buying single-row 40-pin sockets, like this one (https://www.aliexpress.com/item/32959627004.html), is a good low-cost alternative to wirewrap sockets; they only cost a dollar each.
It's not extremely easy to use, as you have to cut them and manually plug two rows together to make a poor man's DIP header, but it's not difficult either. Also, if the socket is too rough for the components you need (for example a heavy ZIF socket, or a DIP-40/64 chip), I found you can install the wirewrap DIP header to the board first, plug a generic, cheap DIP socket on top of it, then plug your ZIF socket on top of those.
The only real disadvantage is the increased weight and height, so the solution is not very elegant, but hey, one 40-pin header only costs you a dollar, and 10 dollars buys you ten 40-pin headers, which is good for ~15 chips!
I did wire-wrapped electronics in college around 20 years ago. Wire runs can easily become a dishevelled mess. We also had only one color of wire available, making non-trivial circuits difficult to follow and extremely difficult to debug.
Personally, I found my breadboard circuits more aesthetically pleasing and easier to reason about. I had four different colors of wire, and I always ran my wires in the cleanest manner possible, using the most direct and shortest wire.
Probably the most complicated breadboard circuit I wired up was an 8KB RAM bank for a 16-bit microcomputer. My only real wire wrap project was when we prototyped an MP3 player driven by an AVR.
> We also had only one color of wire available, making non-trivial circuits difficult to follow and extremely difficult to debug.
This is no longer a problem today.
> using the most direct and shortest wire.
Unfortunately, it's often not even possible! A DIP-40 chip already occupies most columns, leaving only two or three columns for wiring. Impossible to wire the bus directly...
Well, aliases are usually ignored in scripts, but you could prepend something to the PATH so your script/exe gets found before the system's own binaries, no? I agree - I don't see that this is a problem unique to Xonsh.
I mean, that's the underlying concept behind the fork bomb, isn't it? With ":(){ :|:& };:" you're essentially redefining the bash no-op ":" to be a function that pipes/forks itself into itself recursively, and the trailing ":" invokes it.
It's the reason '.' is not in the default PATH as well (so you can't place an 'ls' in your home directory and have the admin run your command instead of the real ls).
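The left-to-right PATH search behind both points is easy to poke at from Python; a small sketch (the prepended directory is a hypothetical example):

    import os
    import shutil

    # PATH is searched left to right, so a prepended directory wins over
    # later entries - which is exactly why '.' is left out by default.
    # "/home/user/bin" is a hypothetical example directory.
    os.environ["PATH"] = "/home/user/bin" + os.pathsep + os.environ["PATH"]
    print(shutil.which("ls"))  # first match found on the re-ordered PATH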
Perhaps I should have asked for the most obscure :) I wonder how many truly emerging fields still exist within computer science. I feel that resiros [1] may be correct in suggesting applications in other areas of science are most interesting / obscure (in the context of that discipline, at least).
While interesting, is this really necessary? Call me skeptical, but value add = negligible; justification for an entire department to spend several months+ working on it = likely.
I can see why companies like Apple and Microsoft would want to be involved in making fonts: they make their bread and butter by creating warm fuzzies and having a consistent design language. IBM though, less so?
Why would IBM not also benefit from a consistent design language? Irrespective of what people might think of their products, they are involved in a wide range of business sectors (services, desktop, mainframe, AI, etc.) and I would have thought that consistent typography would help to give them a more integrated feel.
Sure there is an element of "me too" about this - Google, MS, Apple, even Oracle and Atlassian have design languages - but as someone with a general interest in typography I thought this looked pretty good.
Yeah, I think web sites that insist on having their own fonts are being a bit narcissistic. The user should have fonts that are pleasing to their eyes installed, and their browser should use them. Web sites that insist they know better have too many people doing design. Go ahead and cut that cost, we won't notice.
IBM is a large company (380K headcount) with a design staff. They produce A LOT of textual material for clients that are paying serious piles of money.
Typography is a fickle, subtle thing that can influence people without them being aware of it. While folks on HN may pretend to prefer stark naked HTML, that doesn't fly with the general public.
IBM decided it needed a refresh in that area. Good for them and good for the graphic design professionals that did the rather impressive work here.
Stark naked HTML doesn't necessarily imply ugly default styles. Think of it as attempting to achieve perfection in fashion by doing as much as possible with as little as possible.
IBM's last corporate typeface was a slightly modified version of Helvetica, which wasn't particularly distinctive and didn't see much adoption internally.
Plex was introduced a few weeks ago internally with considerable uptake. Intranet articles and corporate comms look much nicer now :P
IBM also produced the Carbon Design System which influences most of their product offerings (most notably on IBM Cloud/Bluemix) http://carbondesignsystem.com/
Airports commission their own fonts, and so do cities, car manufacturers and banks. Typically they are proprietary and might even have an exclusive license because they are considered part of the corporate identity.
I think when you're as big as IBM, you also need to have random people doing random stuff here and there, as who knows when you'd unexpectedly need something special or suddenly have a hot product on your hands. It's also helpful to have random teams working on silly projects or even toys, if only to keep people's minds fresh and flexible. If we're only ever thinking about the value add, won't we become like the stock market analysts who care only about share price and end up giving advice focused purely on the short term? Of course too much waste is too much, but what's a small project here and there, especially if there are zero legal issues, in a place like IBM where so much stuff has an "only built here" sentiment? I can't for the life of me imagine why IBM insisted on using Lotus Notes when I was there. I knew very few people who liked it.
There's this wonderful book, "Corporate Culture & Performance" by John P. Kotter - the biggest take-away for me was that companies under stress to perform will generally cargo-cult their behavior from successful times.
IBM building its own fonts to polish the nooks and crannies of its brand identity may well be such an instance, replicating behavior from the late '70s and early '80s of doing everything under the sun. Imho it should clearly not be a priority for them right now.
I think it’s more like the design equivalent of open source software. It’s meant for internal use but then “open sourced”; there may be a trend, I’m not sure.
Dustin, wow. I can see a lot of work went into your talk. I read along for about 80% and you make some great points. Hyperfiddle looks impressive - I signed up to the dev mailing list.
As an aside, do you think traditionally built apps should be moving away from SQL and towards immutable stores like Datomic?