
I've given this a skim, and it seems to be very much "of its time" (the notes were written in 2006, but only published in this paper in 2021). It basically takes as an axiom that in order to be conscious you need to be interacting with the physical environment.

But the thinking has moved on since then, and we can view the internet as an alternative "environment" in which a conscious entity could exist.

Instead of existing primarily in the physical world and using clumsy tools to access the internet, conscious artifacts will exist primarily in the internet and will use clumsy tools to access the physical world.


> The problem with Rust GUI libraries is that Rust isn't really old enough to have mature ones yet.

Rust is between 14 and 18 years old now, depending on who you ask. [0]

If anything, that's a testament to what bmitc wrote.

> Rust's ecosystem is also very sporadic. It seems everyone jumped on board in the gold rush (and still do), reinvent the wheel in some package to lay claim, and then abandon it when its 70% there once they get bored and/or realize rust doesn't magically solve programming.

[0] https://en.wikipedia.org/wiki/Rust_(programming_language)#Hi...


Nice writeup, easy to follow.

Regarding improvements to crossing minimization, we're fans of Brandes and Köpf, "Fast and Simple Horizontal Coordinate Assignment" (GD 2001), which guarantees linear time and at most two bends per edge. (If anyone would like to implement this in Graphviz, let us know!)

Regarding introducing additional constraints to address the problem that the algorithm "doesn't understand the sensible or hierarchical relationship between the blocks in my diagram" we're fans of Yoghourdjian, Dwyer et al, "High-quality ultra-compact grid layout of grouped networks" from IEEEVIS 2015. It looks quite difficult to implement. The figures in the paper took several minutes to generate and probably involved cutting and pasting between several tools.

A different approach was taken by Dwyer, Koren and Marriott in "IPSep-CoLa: An Incremental Procedure for Separation Constraint Layout of Graphs" (IEEE VIS 2006), where they introduce a kind of layering or asymmetry into force-directed energy models. This is implemented in Graphviz as neato -Gmode=hier.


According to this book: https://www.amazon.com/Tankship-Tromedy-Impending-Disasters-...

It is quite common and vessels often have outages that leave them Not Under Command. Usually they are safely at sea when this happens and they can drift for hours without causing problems. But of course there's always a possibility of it happening at exactly the wrong moment.

The reasons for this are the usual: lack of redundancy, lack of maintenance, overworked and understaffed crews, etc. etc. The book lays out how ships are pretty much designed to be floating disasters and the Class societies (essentially privatized regulators) are in the pockets of the builders, and they are so captured that they make rules that make it difficult to make safe vessels.

For instance, he was trying to design multi-screw vessels, but the rules assume single-screw ships, and it can be impossible to design in additional shaft alleys and still conform.


I share your sense of wonder at everyday objects. The essay “I, Pencil” captures this rather poignantly.

https://cdn.mises.org/I%20Pencil.pdf


I thought this was an interesting and very related read that touches on voting controls: https://ntrs.nasa.gov/api/citations/20020039704/downloads/20...

Matanuska Telecom Association | Software Engineer - Systems Engineer | Full-Time | Alaska (UTC-9): REMOTE

MTA is an Alaskan ISP serving the southwest area of the state. The software team consists of 7 devs who support a relatively complex business enterprise environment. We're looking to add another Software Engineer and a dedicated Systems Engineer (DevOps) to our team.

As the hiring manager I want to be clear about a few things:

- This is a 70 year old enterprise telecom, not a high speed startup

- This job may interest you if you want great benefits and long-term job stability

- We have an existing environment running onsite on VMware; the Systems role is designed to support, improve, and make adjustments to our tech stack over time

- Our core backend language is F# these days (replacing Haskell after 5+ years in production); the frontend is React

Software Engineer: https://mta.csod.com/ux/ats/careersite/4/home/requisition/59...

Systems Engineer: https://mta.csod.com/ux/ats/careersite/4/home/requisition/60...


I draw much from David Krakauer and J. Doyne Farmer, both of the Santa Fe Institute, and the latter's drawing on Muth (1980s).

Krakauer: Intelligence is Search. It's the ability to find a least-energy path or solution through an arbitrary n-dimensional space.

Knowledge is having a solid backing of (valid) data and models.

Ignorance is not having that data and model store. It's a curable state, though being primed with bad data and models is very damaging to effective search.

Krakauer gave a really good interview at Nautilus a year or so back on this; it's very highly recommended.

http://m.nautil.us/issue/23/dominoes/ingenious-david-krakaue...

Farmer, quoting Muth, looks at a model of engineering in which engineers generate solutions at random but are very good at identifying the better solution. I'm thinking that there are empirical reasons to suspect this may not be far from the truth, and methods such as A/B testing seem to explicitly embody the identification element of this.
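That model is easy to play with in code. Here's a toy sketch (my own, with hypothetical names, not Farmer's or Muth's actual formulation): candidates are generated blindly, and the only "skill" is reliably keeping the better of two designs. Quality still ratchets upward.

```python
# Toy sketch of the Muth-style model: random generation plus reliable
# pairwise selection. (Hypothetical names; purely illustrative.)
import random

def muth_search(cost, propose, rounds, seed=0):
    rng = random.Random(seed)
    best = propose(rng)
    for _ in range(rounds):
        candidate = propose(rng)              # random generation of a design
        if cost(candidate) < cost(best):      # reliable "pick the better one"
            best = candidate
    return best

# Toy objective: find x near 3 by minimizing (x - 3)^2
x = muth_search(cost=lambda v: (v - 3) ** 2,
                propose=lambda rng: rng.uniform(-10, 10),
                rounds=2000)
```

Even with no gradient information or insight into the objective, the selection step alone pulls the retained design toward the optimum, which is roughly the claim of the model as I read it.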

You'll find this covered in many of Farmer's recent talks (past few years), Martin School at Oxford and Nanjing University in particular, at YouTube.

https://m.youtube.com/watch?v=u6x10eBpAPo


Also Lewis Mumford [1], Günther Anders [2], or if you want to go truly underground, Gilbert Simondon [3] [4] or Friedrich Kittler [5].

If there are thinkers who have been in the conceptual space of the 23rd century and beyond, Simondon was surely one. Also radically of the future and forgotten is FM-2030 [6] [7].

All in all, blowing up people is easy, blowing up antiquated concepts, grasping for the grounds of a new metaphysics, painstakingly implementing and debugging is the hard part.

Besides, to think that there even is such a thing called technology (as distinguished from what?) is incredibly naive after following to their conclusions systems such as the Grotthuss proton translocation mechanism driving motion in an F_0/F_1-ATP synthase rotation mechanism [8] [9].

[1] https://en.wikipedia.org/wiki/Lewis_Mumford

[2] https://en.wikipedia.org/wiki/G%C3%BCnther_Anders

[3] https://en.wikipedia.org/wiki/Gilbert_Simondon

[4] Gilbert Simondon - 'The Technical Object as Such', https://www.youtube.com/watch?v=eXDtG74hCL4

[5] https://en.wikipedia.org/wiki/Friedrich_Kittler

[6] https://en.wikipedia.org/wiki/FM-2030

[7] Futurist FM-2030 Appears on CNN's Future Watch, https://www.youtube.com/watch?v=mT__dTtX2ik

[8] Prof Levin, Prof Frasch (2022) Mitochondria, bioenergetics, information, electric fields, https://youtu.be/MEhrMR-Jaw0?t=3429

[9] 2021, Living Things Are Not (20th Century) Machines: Updating Mechanism Metaphors in Light of the Modern Science of Machine Behavior, https://www.frontiersin.org/articles/10.3389/fevo.2021.65072...


It's not just the draughts.

https://www.seaspiracy.org/facts

"Species like thresher, bull and hammerhead sharks have lost up to 80-99% of their populations in the last two decades.

Seabird populations have declined by 70% since the 1950's.

Studies estimate that up to 40% of all marine life caught is thrown overboard as bycatch.

Six out of seven species of sea turtles are either threatened or endangered due to fishing.

Over 300,000 whales, dolphins and porpoises are killed as bycatch every year.

2.7 trillion fish are caught every year, or up to 5 million caught every minute.

Fish populations are in decline to near extinction."


Not having used them myself, I can't comment on the quality of the JVM-based Smalltalks:

http://www.redline.st/

https://github.com/hpi-swa/trufflesqueak

Another project that I watched without trying myself was for .NET:

https://refactory.com/sharp-smalltalk/

There was also Essence# but it doesn't seem active.


The article is pretty useless, frankly.

The mystery about how Rentec makes a ton of money is very real - nobody seems to have a clue. It's most probably indeed due to some excellent math / engineering work, but it could very well be a fraud too - it wouldn't be the first time that something is 'very secretive' for a reason.

Btw, another interesting trading company is XTX - founded by Russian-born mathematician Alexander Gerko - they are huge (in terms of trading volumes and profits, not in terms of headcount), and apparently use a lot of modern machine learning.


I've worked in "climate intelligence" for many years. The list overlooks one of the largest and most immediate opportunities around that market: the data infrastructure and analysis tools we have today are profoundly unfit for purpose. Just about everyone is essentially using cartography tools to do large-scale spatiotemporal analysis of sensor and telemetry data. The gaps for both features and practical scalability are massive.

It has made most of the climate intelligence analysis we'd like to do, and for which data is available, intractable. And what we can do is so computationally inefficient that we figuratively burn down a small forest every time we run an analysis on a non-trivial model, which isn't very green either.

(This is definitely something I'd work on if I had the bandwidth, it is a pretty pure deep tech software problem.)


Renaissance Technologies is rumoured to use methods from topological quantum field theory.

They are fiercely and aggressively secretive, so no one really knows for sure.

However, their founder worked in that field before moving to business: https://en.wikipedia.org/wiki/Chern%E2%80%93Simons_theory

Famously, he was sacked by the NSA for giving an interview opposing US involvement in the Vietnam war.


Someone's going to say this eventually, so it may as well be me.

Rentech is not the only hyper successful fund. There are others, like TGS management (https://www.google.com/amp/s/www.cnbc.com/amp/2014/05/09/mys...) that are just as successful and who you've never heard of.

What rentech has done is to have built an excellent data processing engine that automatically extracts signal from noise. Other, much more secretive, funds have done this too.


I'd recommend two things. The first is Brad Myers' Software Structures for User Interfaces course at Carnegie Mellon University, which focuses on the guts of how graphical user interfaces work.

https://www.cs.cmu.edu/~bam/uicourse/05631fall2021/

The second is Dan Olsen's book Developing User Interfaces, which has all of the details of how GUIs work, from graphics to interactor trees to events to dispatching. For some reason, it's absurdly expensive on Amazon right now.

Both Dan Olsen and Brad Myers were early pioneers in GUIs and GUI tools, so you'd be learning from the masters.


newline (formerly Fullstack.io) | Book author | Remote | Part Time | https://www.newline.co/write-a-book

Earn on the order of $50k/year by writing a programming book. We’re the authors of Fullstack React, ng-book, and Fullstack Vue, and we’re looking to work with authors like you to write a few new books this year. Our books sell very well because:

- We go way beyond API docs and teach everything you need to know to build real apps.

- We guarantee the books and code are up to date.

- We invest in marketing the books (and have an active email list of over 100k)

- We love the topics we write about and aim to create something remarkable every time.

If you decide to self-publish, you may find that marketing takes more work than writing the book. We have an audience, and we know what they want to read - so when your book is done, we already have people who want to buy it.

If you decide to go with a “traditional” publisher, you may be given a mediocre editor, write your book in MS Word (ha), and earn 5-15% in royalties. With us, our editors (me) are programmers first, our tooling is dev-friendly, and our royalties are split 50/50. (For scale, the author of Fullstack Vue earned $20k on the opening weekend, Fullstack D3 even more.)

We’re looking to write content about JavaScript, Building Full-stack web apps, ASP.NET Core, Serverless, Python, Kubernetes, Elixir, Blazor etc. Anything up and coming.

If this sounds like something you’d be interested in, fill out the form linked below. Looking forward to hearing from you!

https://www.newline.co/write-a-book

(I've talked more about our economics of writing books here: https://news.ycombinator.com/item?id=17015117)


Philip Wadler has a seminal paper [0] on implementing pretty printers by modeling them with an algebra. There are implementations of the algorithm in most programming languages nowadays, and some state-of-the-art advances that help with performance [1].

It's a very elegant algorithm, and it's very pleasant to work with. Elixir's formatter uses a variant of it [2], and it appears in the standard library [3]. I've personally used it to write a code formatter for GraphQL queries, and it took a very small amount of code to get some powerful results.

Definitely take a look if you ever need to do something like this.
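To give a flavor in code, here's a toy Python sketch of the document-algebra idea (my own simplification, not any of the implementations above, and less clever than the paper's actual algorithm): documents are built from a handful of constructors, and rendering chooses a flat or broken layout per group based on the available width.

```python
# Toy Wadler-style document algebra: Text, Line, Concat, Nest, Group.
# `render` lays out a document within `width` columns, flattening each
# Group onto one line when it fits.
from dataclasses import dataclass

class Doc: pass

@dataclass
class Text(Doc):
    s: str

@dataclass
class Line(Doc):      # a newline; renders as a single space when flattened
    pass

@dataclass
class Concat(Doc):
    left: Doc
    right: Doc

@dataclass
class Nest(Doc):      # adds indentation to line breaks inside
    indent: int
    doc: Doc

@dataclass
class Group(Doc):     # render contents on one line if they fit
    doc: Doc

def flatten(d):
    if isinstance(d, Line):   return Text(" ")
    if isinstance(d, Concat): return Concat(flatten(d.left), flatten(d.right))
    if isinstance(d, Nest):   return Nest(d.indent, flatten(d.doc))
    if isinstance(d, Group):  return flatten(d.doc)
    return d

def render(width, col, items):
    # items: list of (indent, doc) pairs still to be rendered
    if not items:
        return ""
    (i, d), rest = items[0], items[1:]
    if isinstance(d, Text):
        return d.s + render(width, col + len(d.s), rest)
    if isinstance(d, Line):
        return "\n" + " " * i + render(width, i, rest)
    if isinstance(d, Concat):
        return render(width, col, [(i, d.left), (i, d.right)] + rest)
    if isinstance(d, Nest):
        return render(width, col, [(i + d.indent, d.doc)] + rest)
    if isinstance(d, Group):
        flat = render(width, col, [(i, flatten(d.doc))])
        if "\n" not in flat and col + len(flat) <= width:
            return flat + render(width, col + len(flat), rest)
        return render(width, col, [(i, d.doc)] + rest)

def concat(*docs):
    out = docs[0]
    for d in docs[1:]:
        out = Concat(out, d)
    return out

items = concat(Text("1,"), Line(), Text("2,"), Line(), Text("3"))
doc = Group(concat(Text("["), Nest(2, Concat(Line(), items)), Line(), Text("]")))
print(render(30, 0, [(0, doc)]))  # one line: [ 1, 2, 3 ]
print(render(6, 0, [(0, doc)]))   # broken across lines with a 2-space indent
```

Wadler's real algorithm also accounts for the rest of the document when deciding whether a group fits, and can be made lazy and linear-time; the papers above cover those refinements.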

[0] https://homepages.inf.ed.ac.uk/wadler/papers/prettier/pretti...

[1] https://jyp.github.io/pdf/Prettiest.pdf

[2] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.2...

[3] https://hexdocs.pm/elixir/Inspect.Algebra.html


This is a quite inaccurate comment. The catalysts (in 1966) for the ideas I had included Sketchpad (especially), Simula I (a few days later), the ARPAnet (under discussion), operating systems' inter-process communications (especially Project Genie), the Burroughs B5000, and my old biology and mathematics majors. All of these are mentioned in "The Early History of Smalltalk" that I wrote for the ACM History of Programming Languages.

Actors appeared after I gave a talk at MIT about the very first Smalltalk.

There were several later ideas that were discussed at Parc but not taken up because of the whirl that was already going on. One of these was derived from McCarthy's "fluents" and "situations" (essentially a labeled states/layers idea for allowing concurrencies without race conditions -- this was done very well in David Reed's 1978 MIT thesis).

Another was not waiting for replies. This was in the original set of ideas -- via biology and OS techniques -- but never got implemented in a deep way. The hardware we had at Parc was tiny and Smalltalk was expressive enough to fit a lot into a little.

Another set of ideas that were completely missed appeared in LINDA by Gelernter. This (and the larger ideas around it) are very good ways to deal with larger scalings, etc.


I work at Evergrow [1] on climate fintech for regulated carbon markets.

If meaning in your career is an issue you're wrestling with, I empathize. I know how difficult it can be to do work that you no longer find meaningful. Many of my friends and family are able to treat their work as just a job, but for whatever reason, I haven't been able to do so.

So I quit my job in January to take a step back and think about what I wanted to do with my career. And I decided that as long as the comp was reasonable, I'd be willing to work on any technical problem connected with climate change.

I was surprised at how many interesting tech opportunities were available -- no end of ML and computer vision companies, for example, on everything from recycling robots to weather modeling. And that's leaving aside more traditional full-stack SWE work for collecting and presenting data. If you're interested in climate work specifically, I strongly recommend reaching out to Work on Climate [2] or ClimatePeople [3].

Ultimately, I joined Evergrow because I thought that understanding the capital dynamics in these markets would be most critical. I also thought the team was outstanding -- pragmatic, driven, and very high-integrity.

And if you want to consider different sectors more broadly, I've heard good things about 80000 Hours [4].

[1]: evergrow.com

[2]: workonclimate.org

[3]: climatepeople.com

[4]: 80000hours.org


Gro Intelligence | Principal/Senior Software Engineer | NYC or Remote (US/Canada) | Full-time | $150k-$225k base + equity | Python, Rust, React | https://grnh.se/24ac6dbd4us | https://grnh.se/6c20efd34us

Gro Intelligence is building a platform that harnesses human expertise and machine learning as it assists users attempting to address two of the most pressing challenges facing humanity today: climate change and food security. At Gro, we illuminate the correlations between the Earth's ecology and our economy in ways that reveal the big picture and that help people act on the small details, and we are always on the lookout for people who can help us achieve our goals.

Fundamentally, Gro is a big data platform. Deep domain knowledge and machine learning power all of our applications and underlying analytical engines; scalable collaboration defines our engineering culture. We are a rapidly growing Series B ($85M - January 2021) company that provides our engineers with the opportunities and resources that they need to thrive.

As we scale out and up, our technical infrastructure group needs engineers who are passionate about building and scaling systems (think decomposing monoliths, CI/CD, scaling data pipelines from petabytes to exabytes) and who have at least one area of deep understanding in software engineering fundamentals. Our code base is primarily Python currently, but we will be transitioning our foundational layers to Rust. We welcome C and C++ programmers who are looking to transition to Rust. Apply here: https://grnh.se/24ac6dbd4us

We are also building a new team staffed with engineers with strong frontend experience and API design knowledge. This new team will construct a framework that lets internal and external users rapidly build and deploy new solutions/visualizations (React) using our data. Apply here: https://grnh.se/6c20efd34us

As a Gro engineer, you will build out systems aimed at providing the world with the most relevant, accurate, and actionable information on climate and agriculture. If you love a good challenge, have deep expertise, and a desire to make a positive impact - or just have more questions about what we do or what opportunities are available - please don't hesitate to reach out to us at jobs@gro-intelligence.com.


> The code will deteriorate in one way or another. You need to prepare to fix it since you can't avoid it.

Lehman's laws of software evolution 1 and 2:

"A system must be continually adapted or it becomes progressively less satisfactory. As a system evolves, its complexity increases unless work is done to maintain or reduce it."


> Leetcode isn't about fun and challenging things, it's about thinking in one particular way, spitting out solutions using the same exact data structures and jumping through hoops on command without philosophizing or creating anything that can be reused/extended.

> This is also what Software Engineering has become: you memorize, regurgitate and participate in agile the masquerade. Creativity is shunned. Tried architectures/patterns are what is expected.

Compare https://sockpuppet.org/blog/2015/03/06/the-hiring-post/


Software Foundations (https://softwarefoundations.cis.upenn.edu) is pretty informal and understandable; what it isn't is short. But it's a very useful resource.

The best general introduction is probably still Types and Programming Languages (https://www.cis.upenn.edu/~bcpierce/tapl/index.html). It's a great book, but it doesn't get to dependent types.


The Racket GUI toolkit (http://docs.racket-lang.org/gui/) is successfully multithreaded, and manages this despite being implemented on top of existing non-multithreaded and non-thread-safe toolkits. It's hard, but it's certainly not impossible.

Here's a paper about the system and how it enables cool new stuff: http://www.ccs.neu.edu/racket/pubs/icfp99-ffkf.pdf


A lot of the engineering math associated with splines (NURBS) and surface approximations came from Citroën, Peugeot, and Renault, the car makers, in the mid-20th century. I read a great book on this back in the '80s and can't remember the name. It was about the early days of CAD and its origins.

Like, the control points on splines were called "ducks", which were weights attached to spring steel suspended from the ceiling of the studio, which caused the steel to bend into particular shapes computed mathematically beforehand by the design engineers. These curves were used to guide model making. It was a fascinating book.
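For a taste of the math those ducks embodied (my own toy example, not from the book): a Bezier curve's control points shape the curve through de Casteljau's algorithm, which is just repeated linear interpolation between neighboring points.

```python
# De Casteljau's algorithm: evaluate the Bezier curve defined by a list of
# control points at parameter t in [0, 1] by repeatedly interpolating
# between neighboring points until one point remains.

def de_casteljau(points, t):
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# The curve starts at the first control point and ends at the last:
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
de_casteljau(ctrl, 0.0)   # -> (0.0, 0.0)
de_casteljau(ctrl, 1.0)   # -> (4.0, 0.0)
```

Moving a control point reshapes the whole curve smoothly, which is the digital analogue of sliding a weight along the spring steel.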


What's your view on cases like Renaissance Technologies? There's an interview online on James Simons (https://www.youtube.com/watch?v=QNznD9hMEh0) - and he explicitly talked about using math models to detect anomalies, e.g. trends. It's also known that they've used Hidden Markov Models, at least in the early days.

Renaissance Technologies has completely automated the process of signal discovery.[1] They don't hire researchers to manually derive novel insights or trading models from data, and they don't really bother with exclusive sources of data. Instead, they hire researchers to improve methods for automatically processing vast amounts of arbitrary data and extracting profitable trading signals from it.

When most funds say they're "quantitative", what they really mean is that they use huge amounts of data to inform fundamentally manual trading strategies (this includes most of the places widely considered to be "top" firms). They develop trading algorithms, and those trading algorithms are often successful. But the algorithms are developed manually and then deployed. Their researchers and engineers actively seek out new sources of data and try to compete on novel sources of untapped information. But the reality of what happens is that they simply drown in the data. They can't clean it or process it nearly fast enough to maintain long term trading strategies, nor can they even begin to find a way to automate the trading strategy extraction. If you're working with hundreds of terabytes of data, you cannot selectively formulate hypotheses and test them. It's far too slow. You will find dramatically fewer novel insights than a fully automated process.

In other words, they're a step above traditional "fundamental" hedge funds, but they focus on the wrong problem (but not for lack of trying!). In contrast, the truly successful quant funds have automated the data processing and feature extraction pipeline end to end. The data is a pure abstraction to them. They don't bother with forming hypotheses and trying to find data to test them, they allow their algorithms to actively discover new correlations from the ground up. So many quantitative funds advertise how much data they work with, and how they have all these exotic sources of data at their disposal...but the data does not matter. The models for the data do not matter. The mathematics of efficiently processing that data are what matters.

As a result of their consistent profitability, most of the jobs you see listed for the really successful funds (if they have a website) are not "real" in the strictest sense of the word. You can apply to them, but they only keep active careers pages to attract the best researchers. Their only incentive to hire is to 1) keep someone who is actually exceptional from joining a competitor or 2) keep an academic researcher from re-discovering their work when it seems like they're getting close to it. This is why they primarily focus on quantitative PhDs in information theory, high-energy physics, and computational mathematics (especially information geometry).

To be completely frank, Renaissance is an outlier, but not just because of their returns. They're an outlier because of how public they are. Most of the funds with comparable returns not only don't take any outside investor capital, they only have 25 - 50 employees. They virtually never hire because they don't have to. If your work is fundamentally interesting, novel and applicable to what they're doing (even if you can't immediately see why), they will call you.

___________________________

1. Other more secretive (but equally successful) funds have done this, but they are much more under the radar.


Weirdness is good. If the market is weird, it means people don't know how to exploit it, i.e. you actually have a chance of finding an edge that hasn't been exhausted by someone else.

There isn't a compendium of standard tactics that "work" in normal markets. If there were, they would no longer be profitable. Anything that works is secret & novel, and finding a novel approach to an old market is harder than finding a novel approach to a weird one.


I mean the technology is highly damaging to society in a whole bunch of ways: damaging the environment, supporting fraud and ransomware, enabling money laundering, and funding crime. And if it gets integrated into the financial system like some want it to be, it could even destroy human civilization: https://benoitessiambre.com/specter.html (even Douglas Adams tried to warn us 40 years ago).

So it's a relief when it looks more like it's going away or at least staying contained.

