AI is such a blessing. I use it almost every day at work, and I've spent this evening getting a Bluetooth-to-USB mapper for a PS4 controller working by having ChatGPT write it for me, for a bigger project I'm working on. Yes, it's going to take some time to fully understand the code and adjust it to my own standards, but I've been playing a game for a few hours now and I feel zero latency and plenty of controller rumble that I'm having fun giving a bit of extra power. It pretty much worked with the first 250 lines of C it spewed out.
What's gonna be super interesting is that I'm going to have an rpi zero 2 power up my machine when I press the controller's PS button. That means I might need to solder and do some electrical voodoo that I've never tried. Crossing my fingers that the plan ChatGPT has come up with won't electrocute me.
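The rough plan, in case anyone's curious: an optocoupler between a GPIO and the motherboard's power-switch header (so the Pi stays electrically isolated), pulsed when the PS button arrives over Bluetooth. A minimal Python sketch of what I'd run on the Zero - the pin number and device path are placeholders, and the wiring is still unverified:

    # Pulse a GPIO wired (through an optocoupler!) across the motherboard's
    # power-switch header whenever the PS button is pressed.
    # Assumes gpiozero + evdev installed; GPIO17 and event0 are placeholders.
    from gpiozero import DigitalOutputDevice
    from evdev import InputDevice, ecodes

    power_switch = DigitalOutputDevice(17)     # hypothetical wiring
    pad = InputDevice('/dev/input/event0')     # the DS4; actual path varies

    for event in pad.read_loop():              # blocks, yielding input events
        if (event.type == ecodes.EV_KEY
                and event.code == ecodes.BTN_MODE  # PS button on a DualShock 4
                and event.value == 1):             # key-down only
            power_switch.blink(on_time=0.2, n=1)   # ~200 ms "press" of the button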
This is also the university that develops RumbleDB[0]. It uses JSONiq as its query language which is such a pleasure to work with. It's useful for dealing with data lakes, though I've only experimented with it because of JSONiq.
Something to consider when using SQLite as a file format is compression (correct me if I'm wrong!). You might end up with a large file unless you account for it, since you can't/won't just gzip the entire db. Nothing is compressed by default.
Sure. But if you have reasonably small files just compress the whole file, like MS Office or EPUB files do.
Or if your files are large and composed of lots of blobs, then compress those blobs individually.
Whereas if your files are large and truly database-y made of tabular data like integers and floats and small strings, then compression isn't really very viable. You usually want speed of lookup, which isn't generally compatible with compression.
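For the blob case, here's a minimal sketch of how it can look in practice (table/column names are made up for the example): deflate each blob on write, inflate on read, and leave the tabular columns alone.

    # Per-blob compression inside an SQLite file.
    import sqlite3, zlib

    db = sqlite3.connect('container.db')
    db.execute('CREATE TABLE IF NOT EXISTS files (name TEXT PRIMARY KEY, data BLOB)')

    def put(name, payload: bytes):
        db.execute('INSERT OR REPLACE INTO files VALUES (?, ?)',
                   (name, zlib.compress(payload)))
        db.commit()

    def get(name) -> bytes:
        (blob,) = db.execute('SELECT data FROM files WHERE name = ?',
                             (name,)).fetchone()
        return zlib.decompress(blob)

    put('hello.txt', b'hello world ' * 1000)   # ~12 KB in, far less on disk
    assert get('hello.txt').startswith(b'hello')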
Please do not use second-resolution mtimes (they cannot represent the high-accuracy mtimes that modern OSs use, so packing and unpacking loses precision and causes differences, e.g. in rsync), or build anything new using DEFLATE (it is slow and cannot really be made fast).
This seems completely orthogonal? This is an alternative to zip and tar built on SQLite:
> An "SQLite Archive" is a file container similar to a ZIP archive or Tarball but based on an SQLite database.
Your parent comment said that when you're using SQLite as an application format, the content in the database doesn't get compressed. These two things have nothing to do with each other.
People who have experience with Aurora and RDS Postgres: what's your experience in terms of performance? If you don't need multi-AZ and quick failover, can you achieve better performance with RDS and e.g. gp3 at 64,000 IOPS and 3,125 MB/s throughput (assuming everything else can deliver that and CPU/mem isn't the bottleneck)? Aurora seems to be especially slow for inserts and also quite expensive compared to what I get with RDS when I estimate things in the calculator. And what's the story on read performance for Aurora vs RDS? There's an abundance of benchmarks showing Aurora is better in terms of performance, but they leave out so much about their RDS config that I'm having a hard time believing them.
We've seen better results and lower costs in a 1 writer, 1-2 reader setup on Aurora PG 14. The main advantages are 1) you don't re-pay for storage for each instance--you pay for cluster storage instead of per-instance storage--and 2) you no longer need to provision IOPS, and it provides ~80k IOPS.
If you have a PG cluster with 1 writer, 2 readers, 10Ti of storage and 16k provisioned IOPS (io1/2 has better latency than gp3), you pay for 30Ti and 48k PIOPS without redundancy, or 60Ti and 96k PIOPS with multi-AZ.
With the same Aurora setup you pay for 10Ti and get multi-AZ for free (assuming the same cluster layout and that you've put the instances in different AZs).
I don't want to figure out the exact numbers, but IIRC if you have enough storage--especially on io1/2--you can end up saving money and getting better performance. For smaller amounts of storage, the numbers don't necessarily work out.
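To show the shape of the math, here's a back-of-envelope sketch--every rate below is a placeholder, not a real AWS price, and it ignores instance costs and Aurora's IO charges:

    # Storage-side math only. All rates are PLACEHOLDERS, not AWS prices.
    GB_MONTH_IO1    = 0.125   # $/GB-month for io1/2, hypothetical
    PIOPS_MONTH     = 0.065   # $/provisioned IOPS-month, hypothetical
    GB_MONTH_AURORA = 0.10    # $/GB-month Aurora cluster storage, hypothetical

    instances, multi_az = 3, 2          # 1 writer + 2 readers, each with a standby
    storage_gb, piops = 10_000, 16_000  # ~10Ti and 16k PIOPS per instance

    # RDS: every instance (and every standby) re-pays storage + PIOPS.
    rds = instances * multi_az * (storage_gb * GB_MONTH_IO1 + piops * PIOPS_MONTH)
    # Aurora: one copy of cluster storage, multi-AZ included.
    aurora = storage_gb * GB_MONTH_AURORA

    print(f'RDS storage side: ${rds:,.0f}/mo vs Aurora: ${aurora:,.0f}/mo')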
There are also 2 IO billing modes to be aware of. The default is pay-per-IO, which is really only helpful for extreme spikes and generally low IO usage. The other mode is "provisioned" or "storage optimized" or something, where you pay a flat 30% of the instance cost (in addition to the instance cost) for unlimited IO--you can get a lot more IO and end up cheaper in this mode if you had an IO-heavy workload before.
I'd also say Serverless is almost never worth it. IIRC provisioned instances were ~17% of the cost of serverless. Serverless only works out if you have ~<4 hours of heavy usage followed by almost total idle. You can add instances fairly quickly and fail over with minimal downtime (barring the bug the article describes, of course...) to handle workload spikes using fixed instance sizes, without serverless.
Have you benchmarked your load on RDS? [0] says that IOPS on Aurora is vastly different from actual IOPS. We have just one writer instance and mostly write hundreds of GB in bulk.
We didn't benchmark--we used APM data in Datadog to compare setups before and after migration
I believe the article is talking about I/O aggregate operations vs I/O average per second. I'm talking strictly about the "average per second" variety. The former is really only relevant for billing in the standard billing mode.
Actually, a big motivator for the migration was batch writes (we generate tables in Snowflake, export to S3, then import from S3 using the AWS RDS extension), and Aurora (with its ability to handle big spikes) helped us a lot. We'd see query latency (as reported by APM) increase a decent amount during these bulk imports, and it was much less impactful with Aurora.
IIRC it was something like 4-5ms query latency for some common queries normally and 10-12ms during imports with RDS PG, and more like 6-7ms during imports on Aurora (mainly because we were exhausting IO during imports before).
For me, the big miss with Postgres Aurora RDS was costs. We had some queries that did a fair amount of I/O in a way that would not normally be a problem, but in the Aurora Postgres RDS world that I/O was crazy expensive. A couple of fuzzy queries blew costs up to over $3,000/month for a database that should have cost maybe $50-$100/month. And this was for a dataset of only about 15 million rows without anything crazy in them.
We were burned by Aurora. Costs, performance, latency, all were poor and affected our product. Having good systems admins on staff, we ended up moving PostgreSQL on-prem.
> There's an abundance of benchmarks showing Aurora is better in terms of performance but they leave out so much about their RDS config that I'm having a hard time believing them.
Aurora doesn't use EBS under the hood. It has no option to choose storage type or IO latency, only a billing choice between pay-per-IO and fixed-price IO.
Precisely! That's why RDS sounds so interesting. I get a lot more knobs to tweak performance, but I'm curious if a maxed out gp3 with instances that support it is going to fare any better than Aurora.
I've had better results managing my own clusters on metal instances. You get much better performance with e.g. NVMe drives in a RAID 0+1 (~a million IOPS in a pure RAID 0 with 7 drives), and I'm comfortable running my own instances and clusters. I don't care for the way RDS limits your options on extensions and configuration, and I haven't had a good time with its high-availability failovers; I'd rather run my own 3 instances in a cluster, and 3 clusters in different AZs.
Blatant plug time:
I'm actually at a company right now ( https://pgdog.dev/ ) that is working on proper sharding and failovers from a connection-pooler standpoint. We handle failovers like this by pausing write traffic (for up to 60 seconds by default) at the connection pooler and swapping which backend instance gets the traffic.
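The pause-and-swap idea itself is simple to sketch. A toy illustration in Python--not our actual code, and the backend's .run() API here is hypothetical:

    # Toy pooler: queries block at a gate (bounded at 60 s) while the
    # backend is swapped out underneath them during a failover.
    import asyncio

    class Pooler:
        def __init__(self, backend):
            self.backend = backend
            self.gate = asyncio.Event()
            self.gate.set()                        # open: traffic flows

        async def execute(self, query):
            await asyncio.wait_for(self.gate.wait(), timeout=60)
            return await self.backend.run(query)   # hypothetical backend API

        async def failover(self, new_backend):
            self.gate.clear()                      # pause: writes queue up here
            self.backend = new_backend             # swap which instance gets traffic
            self.gate.set()                        # resume against the new primary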
RDS PG stripes multiple gp3 volumes; that's why RDS throughput is higher than a single gp3 volume's.
I think 80k IOPS on gp3 is a newer release, so presumably AWS hasn't updated RDS from the old max of 64k. IIRC it took a while before gp3 and io2 were even available for RDS after they were released as EBS options.
Edit: Presumably it takes some time to do testing/optimizations to make sure their RDS config can achieve the same performance as EBS. Sometimes there are limitations with instance generations/types that also impact whether you can hit maximum advertised throughput
Only if you allocate (and pay for) more than 400GB. And if you have high traffic 24/7 beware of "EBS optimized" instances which will fall down to baseline rates after a certain time. I use vantage.sh/rds (not affiliated) to get an overview of the tons of instance details stretched out over several tables in AWS docs.
The Proton version will always work better if no one sets an example and encourages the use of native support. With Proton you are guaranteed never to reach the optimal potential or get the full advantages of the Linux/Wayland ecosystem, while with native versions you at least have a chance to get there.
It is like judging someone for taking advantage of new CPU instructions that accelerate processing because the general instructions are already good enough.
Native doesn't automatically mean better - there are quite a few examples of games running better on Proton than with native executables (and yes, then we can start arguing that it just means the native port was done poorly, but I'm just saying: don't assume native will always run better).
It seems similar to the argument around the popularity of third-party engines: whether studios should use Unreal, whether they have the expertise/resources to switch to and use another engine or make their own bespoke engine, and whether that will produce better results.
I think that is not a fair comparison. Proton adds an additional layer that affects runtime performance and could be removed entirely. Switching to a different game engine changes the layer's implementation instead of removing it.
When Proton started to get good, there were multiple stories of small game studios just dropping their bespoke Linux builds because the Windows->Proton version ran much much faster and required zero effort from them.
I'm thinking about how to properly test AWS Step Functions. The problem is that I can either mock the entire response for every state in JSON only, or call out to a lambda. What I want is to type-check the evaluated JSONPath payload and the mocked JSON response, to ensure that my tests always adhere to global contracts/types written in JSON Schema.
I think it's doable by dynamically creating lambdas based on test cases I define in one way or another - something like mocked integration services that do nothing but validate that the event from SFN matches a schema, and that the mocked response also matches a schema.
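Roughly this shape - a sketch of one generated "contract checker" lambda, with the schemas and mock inlined as placeholders:

    # Validate the event SFN hands us against the input schema, and the
    # canned response against the output schema, then return the mock.
    # Schemas and mock below are placeholders for whatever a test defines.
    from jsonschema import validate   # pip install jsonschema

    INPUT_SCHEMA  = {'type': 'object', 'required': ['orderId'],
                     'properties': {'orderId': {'type': 'string'}}}
    OUTPUT_SCHEMA = {'type': 'object', 'required': ['status'],
                     'properties': {'status': {'type': 'string'}}}
    MOCK_RESPONSE = {'status': 'SHIPPED'}

    def handler(event, context):
        validate(instance=event, schema=INPUT_SCHEMA)           # fails the test on contract drift
        validate(instance=MOCK_RESPONSE, schema=OUTPUT_SCHEMA)  # mock must honor the contract too
        return MOCK_RESPONSE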
My concern is that I can't find prior projects doing this. My use case is mostly (exclusively, at the moment) calling out to lambdas, so perhaps I can get away with this kind of type checking. But it's just weird that something like this doesn't already exist! Past experience has taught me that if no one has tried it before, my idea is usually not that good.
Let me know what you think!
(Would have liked to use durable execution which totally solves the typing issue, but can't in this case)
I think you get the biggest advantage from BitLocker when you use the TPM (PCR 7+11) with a PIN. That should mitigate the exploit because the FVEK should never be read without the PIN, and if BitLocker does it right (which I think it does), too many wrong PINs result in the TPM going into dictionary-attack lockout mode.
Now I've been trying for months to do the same for Linux. There's systemd-cryptsetup/cryptenroll, but it's only for LUKS, and I'm trying to encrypt a few sensitive directories (keys for Secure Boot and /home) on a super slow internal eMMC with fscrypt. The TPM is _EXTREMELY_ hard to code for when things go beyond the basics:
1. Bind to PCR 7
2. Bind to the changing PCR 11 (changes whenever the kernel, init, cmdline etc. are updated)
3. Use a PIN - but not the AuthValue, because I want to use the same authorization policy for resetting the DA lockout counter on login, and also have a long password/AuthValue for resetting the counter manually.
4. Make it all work with PCR 11 signatures and public keys provided by systemd-stub.
Maybe this isn't the right place to ask, but there's almost nothing but basic TPM guides out there, so if you're an expert I could really use your help. It's just for a personal project, but I'll write about it once I'm done - if I ever figure it out!
> I want to use the same authorization policy for resetting the DA lockout counter on login, and also have a long password/AuthValue for resetting the counter manually.
LUKS has multiple "key slots", so IIRC you can use one slot for TPM unlock and a different one for long-password unlock.
Have you considered using that as your recovery mechanism?
> It's just for a personal project,
One of the reasons very few hobbyists touch the open source TPM stuff is there are a number of alternatives that scratch similar itches much more easily.
Need to protect a crucial encryption key by locking it up in hardware? Buy a Yubikey.
Disk encryption password on your laptop is inconvenient? Just use standby when you close the lid instead of powering off fully. Login password is inconvenient? Fingerprint reader, or biometric yubikey.
Unattended kiosk, school computer lab or similar that needs to boot without a password? Just put it in a sturdy metal box and chain it to the wall.
Server in a data centre that needs to boot unattended? Move to a data centre with physical security you can trust. Still worried? Dropbear or Tang so it has to be on the right network before it'll boot.
Home lab hobbyist, working with the TPM for fun? Assess whether you're actually having fun working with the TPM, and you'll probably notice you're not.
Surely Windows keeps the FVEK in RAM regardless of whether the TPM requires a PIN to initially obtain it. Otherwise, wouldn't you need to enter your PIN every time a block from the disk needs decrypting? Not to mention the performance impact of calling the TPM on every disk operation.
This attack reads the key from RAM, so I don't see how a TPM PIN would mitigate it.
The point is that the TPM PIN prevents the attack if the system is powered off when the attacker obtains it.
If the TPM doesn't have a PIN, this attack works even if the attacker obtains the system when it's powered off. They can start the computer, proceed to the Windows logon screen (that they can't get past and that hence prevents them from exfiltrating data from the running system), then just reset the computer and perform this attack to obtain the encryption key. This obviously doesn't work if the PIN prevents Windows from ever even starting.
I know this is beside the point, but still kinda relevant:
Even on Win11 it's still possible to do the old utilman (or other suitable module) replacement hack from Windows repair (triggered by interrupting boot); from there you can change account passwords at will.
I think Windows repair prompts for an admin login and the BitLocker key before allowing you to proceed, assuming the Windows install is intact enough to read the security SAM.
Correct: unless you're using a self-encrypting drive, the FVEK sits in RAM once it's been released by the TPM during boot. The TPM is only a root of trust; for fast crypto operations without keeping the key in kernel memory you would need something like Intel SGX or ARM TrustZone.
BitLocker no longer leverages SEDs by default, as of Sept 2019, due to vulnerabilities in drive manufacturers' firmware.
> Changes the default setting for BitLocker when encrypting a self-encrypting hard drive. Now, the default is to use software encryption for newly encrypted drives. For existing drives, the type of encryption will not change.
If you can short the reset pins while the computer is running and make it restart without losing power, then yes, I agree. But if you have to shut down to make your modifications, then you won't get past the PIN prompt.
Why? It means you'll only get one shot at the attack, but nothing here is intrinsically prevented by using a TPM PIN (or even a non-TPM password; the attack doesn't depend on TPM-based BitLocker in any way, other than when the target machine is powered off or your first attempt fails).
I wouldn't underestimate that a PIN prevents this attack on machines that are powered off.
You can then go further up the chain with a UEFI settings password and no USB booting. If the password is hard to crack, then that's a pretty good approach.
Then there are custom Secure Boot certificates that replace the ones from MS. That'll work for Linux; not sure about BitLocker. But my Surface tablet doesn't even support custom SB certs.
Would you be better off using split-key encryption or an encrypted secret key?
If you have to put a password in before boot that needs to be combined with the TPM key to unlock the drive, it would help in scenarios where a TPM key can be found later.
I’m not sure how much anything helps against this attack though. Retrieving data from RAM in this way should work for most scenarios by changing where you look for the key (as it needs to be held somewhere by the OS to maintain read/write access to the drive).
I would assume Apple devices aren’t vulnerable to this type of attack as IIRC the keys never exit the enclave. Maybe TPM 3.0 needs to look a lot more like that.
> If you have to put a password in before boot that needs to be combined with the TPM key to unlock the drive, it would help in scenarios where a TPM key can be found later.
Bitlocker already does this if you use a PIN/password.
You might know better than I do, but I had believed that BitLocker uses the TPM PIN when you use a PIN, which is challenge/response (i.e. if the PIN matches, then the TPM releases the key), so it wouldn't help in this case.
If the BitLocker PIN is split-key then yes, that would be ideal, but I think you can change the PIN without re-encryption (which implies it's challenge/response).
A power-on password (set in the BIOS) should also work, since without it the system will never get to the point where the TPM unlocks the FVEK, right?
I prefer this setup to a Bitlocker PIN because I can use a fingerprint instead of the power-on password on my Thinkpad, and because it should make the device largely unusable to a thief.
Of course, power-on password and fingerprint auth are only as strong as my TPM, but the same goes for Bitlocker TPM+PIN, right?
Isn't the TPM just a honeypot of sorts? It seems strange to me that after successful open-source encryption software, there was a shift to the TPM - as if you should have a notion of super-secure storage provided by big corporations, and you should just not worry about it and not question it.
Surely there must be backdoor access for three-letter agencies to just download all the PINs and passwords and then take a dip in the data, no worries.
It's not a honeypot, and it does have value when used properly.
Their main purpose is to generate and store keys that cannot leave the device; instead the TPM performs signing operations as needed internally and only returns the result, and only if attestation passed. This is a lot better than just having private keys on disk.
People just forget that security isn't absolute, and each solution has a threat model it is appropriate for. In case of full disk encryption, neither a TPM nor user input can protect against evil maid on its own for example - the TPM will unlock for anyone, while user input might be collected by a modified and malicious bootloader. Having both, however, works well.
"TPM" is a bit dated as a term as it's all directly built into the processor nowadays, including for smartphones and such. Another modern feature in that catalogue is memory encryption, which rules out the attack described by OP as the rebooted machine would be unable to read old memory content.
I encourage you to read what a TPM is. A TPM isn't an "encryption" software/hardware. It's completely orthogonal to "successful open source encryption software".
"successful open source encryption software" don't solve the main usability problem with encryption: "Where do I store my super secure 4096-bit private key so it's both secure and convenient to use"
I don't see why a TPM couldn't be open? Nobody makes open-source TPMs (because they're put inside CPUs or attached to motherboards with specific pins and protocols) but in theory you could just do it. All you need to do is make sure any secrets stored get wiped permanently whenever you flash new firmware.
It'd be similar to secure boot: usable by default, but reconfigurable so that you can bring your own keys and signatures, putting you in complete control of your hardware, to the point where even the manufacturer no longer has a say in what's running and what isn't.
> usable by default, but reconfigurable so that you can bring your own keys and signatures, putting you in complete control of your hardware, to the point where even the manufacturer no longer has a say in what's running and what isn't.
You can control what's in your TPM. That's how they work today. Sure, their software isn't "open source", but there aren't that many 100% "open source hardware" options around. If you want to be able to flash it, build your own HSM. I don't know if there is a market for a prebuilt microcontroller with something like picokeys preinstalled; I know that the market for "open" hardware is tough.
The TPM emulation offers a full TPM implementation in software, for providing TPM functionality to a virtual machine when the host doesn't have one (or, when the TPM needs to be virtualized for other reasons, e.g. migration).
> I don't see why a TPM couldn't be open? Nobody makes open-source TPMs
The main advantage of the TPM is how it is made physically. It should be designed to make it hard or impossible to read the secrets out of it, and those properties depend on how the components are manufactured on the silicon wafer.
Maybe the manufacturing process could be published, but I don't think it would help much.
You could probably write your own TPM emulator or modify swtpm a bit and compile it to any microcontroller, but in that case the chip could be easily decapped to make all the secrets readable.
Unlike with cryptography, there is no rigorous notion of physical security. Doors, locks and even security systems can all be overcome with sufficient effort, skill and resources. They work because physical attacks require proximity and are very hard to keep anonymous. I seriously doubt that any TPM implementation would last a week against government funded researchers with state of the art technology, but that doesn't mean the TPM is useless.
No, it's the same. Cryptography is like a lock that you can overcome with mathematical force; it's just in a different domain than physical objects.
If you know how the lock is built, you can rule out the existence of a master key, for instance. You don't know whether your TPM chip has an API where a three-letter agency can just download the keys from it. You are in the dark.
Same with cryptography: you can choose the method, just like you can choose the type of lock. There are locks that have not yet been picked, but you can use a hammer; similarly with cryptography, you can use a quantum computer etc.
Which locks haven't been picked? Abloy Protec 2 got picked, Bowley got picked, StealthKey got picked… I'm not aware of any designs for an unpickable/unbypassable lock. Whereas several AEADs have not been broken.
These things make it harder to break into the internals of the chip regardless of whether they are kept secret, so I wouldn't call it security by obscurity. I'm not even sure you can apply that principle to physical security.
No, it's security by intrusion detection, generally. HSMs are designed to be boxes that are very hard to get a secret out of even with physical access. TPMs generally aren't the most paranoid version, since it gets more expensive and less practical as you go further (e.g. a large box with battery backup that keeps the secrets in RAM and wipes them as soon as it detects any funny business; these are DIYable, but the list of attacker tricks is long and it's hard to cover all of them at once). A TPM sits somewhere between that and a regular micro with no particular effort to prevent readout of internal storage: small, able to persist secrets without power, but still difficult to attack physically (~maybe at the level of advanced criminal organisations, ~probably at state level if they're willing to spend some money on it, even absent a backdoor).
They’re built from essentially the same secure MCUs as traditional TPMs and both the hardware and the proprietary crypto libraries used on them have been exploited many times over.
But would you not agree that using a yubikey can improve security?
If you choose to label TPMs as security by obscurity, so be it, but that doesn't make them less useful conceptually. Shitty implementations and the complexities of UEFI do that.
You can store it encrypted with a password on a USB memory that you insert when booting the computer, like you would use a key for starting a car.
This is what I actually do. I also store the OS kernel on the USB memory and I boot from there, with the root file system set to mount an internal SSD. The SSDs in the computer are completely encrypted with such long "super secure" keys (distinct for each SSD and selected automatically based on their serial number), and they do not store any information about the keys.
I have used this system for years and I find it very convenient. My computers cannot be booted without inserting the bootable USB memory and also giving the correct password. I have multiple bootable keys stored in different places, for the case when one becomes defective or lost.
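The per-SSD key selection is nothing fancy - something in the spirit of this sketch, where /dev/disk/by-id names (which embed model and serial) double as key-file names; the key directory and naming scheme are made up for illustration:

    # Toy sketch: pick the key file for a disk based on its by-id name.
    import os

    KEY_DIR = '/mnt/bootusb/keys'   # on the password-unlocked USB stick

    def key_for(device: str) -> str:           # e.g. device = 'sda'
        for link in os.listdir('/dev/disk/by-id'):
            full = os.path.join('/dev/disk/by-id', link)
            if os.path.realpath(full) == f'/dev/{device}':
                path = os.path.join(KEY_DIR, link)  # e.g. keys/ata-MODEL_SERIAL
                if os.path.exists(path):
                    return path
        raise LookupError(f'no key file for /dev/{device}')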
I am sorry I wasn't clear. I am aware that the TPM is key storage; I'm just not convinced the keys it stores are secure. It smells of security by obscurity, and all the big corporations are happy-clappy to use it and the government is silent about it, which likely means they have a backdoor.
>It smells of security by obscurity, and all the big corporations are happy-clappy to use it and the government is silent about it, which likely means they have a backdoor.
The government is also pretty silent about AES. Does it mean that's backdoored as well? More to the point, I'm not sure what the proposed alternative is. Not using TPM, and exposing yourself to bootkits and evil maid attacks?
It is security the same way a lock is. It limits low-effort attempts, which is why we put locks on our doors and close our most easily accessible windows in the first place.
This type of /r/ufos|/r/aliens speculation isn't particularly useful. It comes with no evidence of TPMs being backdoored. Have they been compromised [at least pre-2.0]? Yes, in as much as Apple's Secure Enclave has been, as an example.
Gut feelings aren't always correct and for topics which have a sort of 'correctness' about them, they're not useful.
Maybe it's different for you, but I don't think any three letter agencies have some kind of TPM backdoor (they don't need to with how often TPM chips end up being vulnerable to common software exploits, the firmware being written in unsafe languages and all). If a government was going after me with enough force to use their TPM bypass trick, I'd probably be in jail for years on fake allegations regardless of encryption status.
TPMs work great against things like common thieves and probably corporate espionage, if set up well. When implemented well, they provide no additional friction (except for having to store a recovery key somewhere) but all the security against a laptop being stolen at the airport you could wish for.
Should be good enough for a personal tablet used for mail and browsing. If I drop it and someone curious finds it, I'd like to make it impossible for them to extract anything useful.
I think this is a good analogue. A smart card is a challenge-response system where, sure, you could extract the inner key, but doing so would take time and require destroying the card, which would alert the user - we all learned years ago about skimming, and now the payment terminal comes to our table rather than the card being carried off elsewhere.
TPM is one piece of a larger puzzle and provides a middle ground where among other things you can get full disk encryption without needing to input a memorized key on every boot.
Found the article where I read about PCR 7+11 being the default [1]. The reason I looked it up is that if this is actually true, and the TPM is built into the CPU, what prevents someone from placing the CPU and disk on another motherboard?
Say that you have disabled USB booting and secured the UEFI settings with a password. If you extract the CPU (and thereby its TPM) and the disk, then you'd still be able to boot, right? Meaning that without a TPM PIN, you'd be able to do OP's attack on a different motherboard even when the original machine was off and the UEFI settings secured.
What am I missing? Is it that easy to circumvent UEFI settings protection and maintain the PCR 7 value?
From what I know, the state of the UEFI settings is hashed into some PCR registers - potentially even hardware serial numbers. Sometimes when I modify non-Secure-Boot BIOS settings, BitLocker complains and enters recovery mode.
So I really doubt TPM will release the keys on a different motherboard with different UEFI settings.
Odd that you have to recover from changing UEFI settings with Secure Boot enabled! You should be able to change any setting when it's on. BitLocker binds to a lot of other things when SB is off and might be fragile in that state. But it does seem that some changes will affect PCR 7:
> PCR 7 changes when UEFI SecureBoot mode is enabled/disabled, or firmware certificates (PK, KEK, db, dbx, …) are updated. The shim project will measure most of its (non-MOK) certificates and SBAT data into this PCR. — https://uapi-group.org/specifications/specs/linux_tpm_pcr_re...
It makes sense to use the certificates to generate PCR 7. I wonder if you can swap out the motherboard with one of the same model with the same certificates without modifying the PCR 7 digest...
But if Shim actually modifies the digest, I guess SB would completely mitigate OP's exploit, since the TPM policy is going to fail when the PCR 7 values don't match.
There is rarely a need for GRUB on EFI systems. Create a unified kernel image, create a FAT GPT partition, mount it on /boot/efi and put the image there, then add it with efibootmgr. Sign it and use systemd-stub and Shim for Secure Boot. No GRUB, no systemd-boot, just EFI firmware -> UKI (kernel + initrd). Tried and tested on Debian 12 and Trixie. You can also dual-boot Windows if the machine has a way to select an OS at boot, something most systems have these days.
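Concretely it's only a handful of commands - the paths, partition numbers, cmdline and signing key names below are placeholders, and ukify needs a recent systemd (on older systems, dracut or objcopy can build the UKI instead):

    # Build the UKI, sign it, drop it on the ESP, register it with the firmware.
    ukify build --linux=/boot/vmlinuz --initrd=/boot/initrd.img \
          --cmdline="root=UUID=... ro quiet" --output=linux.efi
    sbsign --key MOK.key --cert MOK.crt --output linux.efi linux.efi
    mkdir -p /boot/efi/EFI/Linux && cp linux.efi /boot/efi/EFI/Linux/
    efibootmgr --create --disk /dev/sda --part 1 \
          --label "Debian UKI" --loader '\EFI\Linux\linux.efi'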
I think the work required to do the above vs just using the standard installer is what holds people back, but it's hard to mess up once it's done.