Hacker News | gregdunn's comments

Disclaimer: I work at AWS, primarily in the compute space, but I'm not speaking in an official capacity.

I would say they fulfill different purposes. The SSM agent has quite a bit of additional functionality, even within the Session Manager portion. It's more your solution for online, general day-to-day access.

Serial console will let you fix issues when you have lost the ability to boot an instance, or network connectivity has failed. When SSH or Session Manager are available, I personally would opt to utilize them over the serial console. But if I have an instance that I can't reach via those, am unable to replace it for whatever reason, and need to bring it back online, serial console would be what I would reach for.


Disclaimer: I work at AWS, primarily in the compute space, but I'm not speaking in an official capacity.

As fguerraz mentioned, modern AWS instance families are basically all powered by Nitro, which refers to the ecosystem around the hypervisor and hardware acceleration cards utilized. https://aws.amazon.com/ec2/nitro/


Thank you very much! I thought Nitro referred to the bare metal offering only.


>Maybe Amazon is doing something similar?

Yep!

https://sustainability.aboutamazon.com/environment/sustainab...

Disclaimer: Amazon employee, not speaking on behalf of the company, just posting a link to something I am aware of.


Disclaimer: I work at AWS, totally different team, and had never heard of this product until this announcement. This is 100% my personal opinion and I'm not operating in any official capacity.

>Personally, I'd probably just suffer through having to spend a week doing a slow internet upload/download rather than paying for Snowcone.

Well, I think there are two things here.

1) A lot of businesses probably won't be willing to spend a week with reduced internet capacity to upload stuff. Things we as single users might be okay with might not always translate to being a good fit for a business overall.

2) My reading is that some of the use cases for this are areas where you are likely to have limited or no internet connectivity.

From https://aws.amazon.com/snowcone/

>AWS Snowcone is built for edge computing and data storage outside of a data center. It is designed to meet stringent standards for ruggedization, including free-fall shock, operational vibration, and more. When sealed, the device is both dust-tight and water-resistant, protected from water jets on all sides. Snowcone has a wide operating temperature range from freezing to desert-like conditions, and withstands even harsher temperatures in storage.

and:

>AWS Snowcone deploys virtually anywhere you need it. It features 2 CPUs, 4 GB of memory, 8 TB of usable storage, Wi-Fi or wired access, and USB-C power using a cord or optional battery. You can put it in a messenger bag, run it in an autonomous vehicle or an airplane, or even attach it to a drone.

So, ruggedization and the ability to run this totally off battery point me towards thinking about use cases where there's no existing infrastructure to take advantage of. I guess this is supported by the 'run it in an autonomous vehicle or airplane' bit I'm quoting as well.


>1) A lot of businesses probably won't be willing to spend a week with reduced internet capacity to upload stuff. Things we as single users might be okay with might not always translate to being a good fit for a business overall.

Perfect for us. We used to ship what was, in the scheme of things, a small amount of data on external drives to Amazon for long-term storage in Glacier. It worked great. That program was dropped and replaced by Snowball.

We tried Snowball and never could get it to work properly in our location. Amazon support couldn't get it to work, either. It was really overblown for what we wanted to do, anyway.

Sending over the wire isn't an option for us.

This is a better solution for us as long as the networking issues are resolved and the pricing works out.


Market, and point in the career.

If you've got 15-20 years of experience and you've never stayed somewhere more than 12-18 months, it's a very different scenario than someone with 5-6 years, even in SV.


It depends on the company and their needs. In the majority of cases, I would say this is a positive thing for your resume.

If their goal is to hire someone for a specific position and they hope that person stays in that same role long term, maybe not so much. Companies are generally happy to just have you stick around, though - if you are moving within the company, you should in theory still be providing value to the company, and potentially even more value, if the moves are upwards.


This matches my experiences, and I'm certainly no Distinguished Engineer, but as a PE I would still have relatively few options to work at a similar level and compensation if I was looking. Thankfully I'm quite happy where I am.


Disclaimer: I work at AWS on an unrelated team. I was not involved in development of this product. Opinions stated are my own, and not necessarily a reflection of my employer. Nothing here is being posted in any sort of official capacity.

There's lots of focus here in the comments on the code reviewer portion, but one of the things I'm most excited about is the profiler - https://aws.amazon.com/codeguru/features/

I do a lot of performance engineering work, and one of my go-to tools for visualizing where programs are spending their time is flamegraphs. While you can certainly create them with profilers besides CodeGuru (and I do not work with Java, so I haven't yet had the chance to check out CodeGuru for any of my use cases), I'm super excited about anything that gets more people using them. They make it very easy to see where your optimization opportunities are, and I have personally found them very useful when working with our customers - they're way easier, in my opinion, to go through and explain than just looking at raw perf output or similar.
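For anyone who hasn't generated one: the usual pipeline collapses raw stack samples into a "folded" text format (one line per unique stack, frames joined by semicolons, followed by a sample count), which flamegraph.pl then renders as an SVG. A toy sketch of that collapsing step, using made-up stack data:

```python
from collections import Counter

# Each sample is a captured call stack, root first, leaf last (hypothetical data).
samples = [
    ["main", "parse", "read_file"],
    ["main", "parse", "read_file"],
    ["main", "render"],
    ["main", "parse", "tokenize"],
]

# Collapse into the "folded" format that flamegraph.pl consumes:
# frames joined by ';', followed by how many samples hit that exact stack.
folded = Counter(";".join(stack) for stack in samples)

for stack, count in sorted(folded.items()):
    print(f"{stack} {count}")
```

In a real workflow you'd get the samples from `perf record` / `perf script` and feed the folded output straight into flamegraph.pl; the width of each box in the resulting SVG is proportional to its sample count.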


A profiling tool I want to try out—it seems almost magical—is Coz. It can estimate the effect of speeding up any line of code. It does this by pausing (!) other threads, so it gives a 'virtual' speed up for that line.

What's interesting is that this technique correctly handles inter-thread effects like blocking, locking, contention, so it can point out inter-thread issues that traditional profilers and flame graphs struggle with.

Summary: https://blog.acolyer.org/2015/10/14/coz-finding-code-that-co...

Video presentation: https://www.youtube.com/watch?v=jE0V-p1odPg&t=0m28s

Coz: https://github.com/plasma-umass/coz

JCoz (Java version): http://decave.github.io/JCoz/ and https://github.com/Decave/JCoz
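For a flavor of how Coz is used: you annotate "progress points" in your code with macros from coz.h, then run the binary under the coz driver, which does the virtual-speedup experiments for you. A rough sketch (the program and function name are made up; see the Coz README for the actual details):

```c
/* Sketch of Coz instrumentation; coz.h ships with the profiler. */
#include <coz.h>

static void handle_request(void) {
    /* ...application work... */
    COZ_PROGRESS;  /* progress point: Coz measures how often execution passes here */
}
```

You'd then launch the program as `coz run --- ./myserver` and inspect which lines, if virtually sped up, most improve the rate at which the progress point is hit.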


I have never heard of this kind of profiling before, thanks for sharing



Yes, indeed! Especially when paired with a continuously running profiler, one can learn quite a bit about one's code. It's actually rather surprising to me that they haven't caught on sooner.

A bit of an (almost) shameless plug is a project I have been working on at https://blunders.io. A bit similar to the Code Guru profiler, but with a different feature set.


Seconded. I used them a lot when I worked for myself on old-school single-server apps, but have struggled to convince my team now that I work on something spread across AWS instances. I'd just brought the concept up again this week for a hackathon project, but it looks like we could buy our way to what I want for cheap (compared to overall hosting). I suspect it may pay for itself.


The Profiler reminds me a bit of PHP Symfony's https://blackfire.io/. Even the graphs (flamegraphs) have some strong similarities (e.g. https://blackfire.io/docs/reference-guide/analyzing-timeline...).

I've used Blackfire for a while, and this type of visualization is definitely helpful for finding bottlenecks in web performance. I've been able to reduce page load by caching big chunks that I was able to see in the graph / timeline.


I, for one, am very interested in flamescope [1]. I haven't tried it yet, but it's like a time machine that allows you to zoom in on a given time interval and look at a flamegraph of what happened at that moment.

You can look at the introductory video [2] to get an idea

[1] https://github.com/Netflix/flamescope [2] https://www.youtube.com/watch?v=cFuI8SAAvJg

EDIT: missing anchor


Does anyone know of a good flame graph visualizer for callstacks? Particularly one that allows you to drill down into a stack. Bonus points if you can diff two data sets. I recently built an in-app profiler and am trying to work out the analysis side of things to make life easier for the other developers on my team.


Disclaimer: I work at AWS, but this post isn't being made in any sort of official capacity, I have no relation to the team in question (this is the first I've even heard of the service), and the opinions here are entirely my own and not necessarily a reflection of that of my employer.

> I wonder what kind of future this spells for quantum computing - will it continue to spread or will it be limited/stunted by being controlled by only the few?

I feel like this is a step in the right direction, though. Right now using quantum computers is totally outside the realm of possibility for the vast majority of people - they're simply too expensive in materials cost, expertise to create, conditions for operation, etc. etc. etc. - without services like this one. The only chance an "everyday" person has to try out a quantum computer is to rent time on someone else's.

I don't think at a similar point in the life of classical computers we had options like this that were readily available - you could rent time on the computers, but I can't imagine that getting access to them was as easy as it will be today with the internet being a thing and service providers offering high granularity on billing.

My understanding (and I'm not even remotely an expert, so I could be totally off base here!) is that it's an open question on whether or not quantum computing will ever even be doable in environments where classical computing works - it might not be within the realm of what physics allows for it ever to be possible to have a quantum computer powered smartphone.

I hope access is ubiquitous someday for people, but in general I feel like this is a good step while that's not practical.


> an "everyday" person has to try out a quantum computer

what would an everyday person do on a qc?


Well, practical QC seems to be involved with optimization problems. D-Wave recently demonstrated doing something with bus routing for Volkswagen. I could imagine, say, a map service scaling that out by integrating QC into their route-finding for drivers to cooperatively improve traffic flow by finding optimal solutions to problems of a scale that is intractable with classical systems.

The everyday person will use QC like they "use" machine learning today: from a very high level abstract viewpoint, where services they consume have a little bit of intelligence that makes interacting with them more efficient.


Yeah, but what kind of optimization problem where an exact solution is intractable also doesn't have approximate algorithms that are good enough?


D-Wave has never demonstrated quantum speedup. Many doubt that their approach can be useful, even in theory.


Meh, sounds like overly negative propaganda to me. Clearly they're building up a big body of knowledge and have a lot of potential, as long as someone can figure out a practical application for the kinds of optimization problems their machine is good at.

It seems like neural networks should map to it well. Once the degree of connectivity and the number of qubits approaches the millions, there's no way any normal software solver is going to be able to keep up with it.


Facebook


High performance networking: https://www.iovisor.org/technology/xdp

Cloudflare uses XDP for a variety of things: https://blog.cloudflare.com/xdpcap/ https://blog.cloudflare.com/l4drop-xdp-ebpf-based-ddos-mitig...

Performance engineering, debugging, etc: https://github.com/iovisor/bcc https://github.com/iovisor/bpftrace

Brendan Gregg is all on board the BPF train as well - check out all the blogs he's written about it over the past several years: http://www.brendangregg.com/blog/

IMO, (E)BPF is one of the most exciting technologies to be introduced in the past half decade or so. bcc and now bpftrace have become two of my favorite tools to reach for when assisting EC2 customers with performance issues. (Edit: I suppose I should note that that's a personal preference and not AWS policy, and also that the performance issues aren't special to EC2 ;))
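For anyone curious what bpftrace looks like in practice, here's a classic one-liner of the sort found in its documentation - it attaches to the raw syscall entry tracepoint and counts syscalls by process name until you hit Ctrl-C (requires root and a reasonably recent kernel):

```shell
# Count syscalls by process name; the @ map is printed on exit.
sudo bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'
```

The appeal is that this runs safely against a live production system with minimal overhead, no restarts or instrumentation builds required.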

