I used to, when I was in a classroom or at a bar. Actually managed to get quite good at it through sheer boredom in grande école. Then life happened and that faded away, alongside my mental health. Recently I've rediscovered doodling while attending ACM CCS 2025 as an independent (long story) and I want to improve my mental health in 2026, to the point where I can draw regularly again.
I was in the same boat and started drawing again at around 30.
Remember that paper is cheap and that experimentation is valuable. Make all the bad art you can. The cost of all the paper I wasted in the last 5 years is probably less than the cost of a pizza. There is a valuable life lesson in there about being okay with making mistakes so that you can learn from them.
Nowadays I always carry a notebook, and more often than not pens and watercolours. You can build a really tiny kit out of makeup palettes.
I also loved taking painting lessons and going to live nude drawing at one of my favourite pubs. Making art is such a pleasant disconnect from work and digital life.
> The cost of all the paper I wasted in the last 5 years is probably less than the cost of a pizza.
Once you get into it, there is an amazing assortment of papers that can cost up to a pizza per sheet. Especially if you're going for the larger formats.
Yes, but let's not nitpick our way out of understanding the point. Don't be afraid to soil a notebook with bad drawings, and waste as much paper as it takes. Mistakes are part of the process, not something to avoid.
That's awesome! I feel similar: I drew a lot back in the day, because growing up in a small town I was bored so often. I only did portrait art, and today I struggle because I just don't know what to draw and I'm just not good at doodling.
Best of luck to you!
Thanks for your openness about struggling with mental health. It is brutal. For me, exercise really helped. For others, it is reconnecting (or getting closer) with friends and family. Keep at it -- you can beat it!
I have the same setup and I like it, but traditional media is just much more pleasant.
In any case, practice. Keep drawing, and try drawing the same thing multiple times. Don't just start over; fix your mistakes. Step back and take time off, so that the mistakes become visible to you.
Above all, remember to have fun. Mistakes are an integral part of learning, and if you take yourself too seriously, you will never make any. Waste as much paper as you need, if it means that you will keep practicing.
Practice. Lots and lots of practice. There's no way around that.
Besides that, there are plenty of resources to learn particular topics/techniques out there. For drawing people with any degree of realism, you'll need at least drawing proportions at first and then anatomy later on.
While you can brute-force it from zero on your own like I did, I wouldn't recommend it. You'll learn faster if you study it like a proper discipline.
I am pretty good at drawing, and would highly recommend starting with traditional media rather than digital tools.
Drawing on paper allows for a wide range of physical setups, such as using a notebook on your lap or on a table, large sheets mounted on a wall, or a board on an easel. Each configuration engages different muscle groups. Large-format drawing relies primarily on shoulder movement, whereas smaller, more detailed work involves the wrist, forearm, and fingers. I'm convinced that deliberately training hand–eye coordination at multiple scales (finger–eye, wrist–eye, or shoulder–eye) is beneficial in learning to draw better.
It is also a good idea to experiment with a variety of media: pens, pencils, chalk, charcoal, and different surfaces such as paper, wood, or canvas. The differing tactile feedback and resistance will improve your motor control. You don't need to spend a fortune on this, but don't limit yourself to the cheapest color pencils and toilet paper.
That said, if your primary goal is accurate photo replication, it's probably easiest to start with Drawing on the Right Side of the Brain [1], along with some YouTube tutorials.
Being entirely self-taught, I'm not sure how to describe my style. If I have to, it's kinda a nondescript knock-off of Gisèle Lagacé's recent webcomics.
As for the subjects, being a horny teenager at the time I mostly drew scantily clad women. Sometimes portraits/caricatures of teachers or other students, mostly on request. All together, that led to an unfathomable number of hijinks.
Thankfully, the one time that a teacher came across their caricature, it ended well. A fellow student requested it while in class (a class on handwriting Java, of all things). She then took my handout and brought it to the teacher, proudly stating with glee "look at what boricj drew!". Cue the laughter. Then the teacher started flipping pages and stumbled upon the rest of my usual bodywork, so to speak. Cue the laughter again. By that point, I was rolling on the floor, my sides hurting.
I don't think I'll ever top that, but the reception of my doodles at the conference by academics reminded me of that past. Hopefully I'll manage to rekindle it.
I'm working with JSON schema through OpenAPI specifications at work. I think it's a bit of a double-edged sword: it's very nice to write things in, but it's a little bit too flexible when it comes to writing tools that don't involve validating JSON documents.
I'm in the process of writing a toolchain of sorts, with the OpenAPI document as an abstract syntax tree that goes through various passes (parsing, validation, aggregation, analysis, transformation, serialization...). My immediate use-case is generating C++ type/class headers from component schemas, with the intent to eventually auto-generate as much code as I can from a single source of truth specification (like binding these generated C++ data classes with serializers/deserializers, generating a command-line interface...).
JSON schema is so flexible that I have several passes to normalize/canonicalize the component schemas of an OpenAPI document into something that I can then project into the C++ language. It works, but this was significantly trickier to accomplish than I anticipated.
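To make that concrete, here is a heavily simplified sketch of what one projection pass could look like once the component schemas are normalized. All the names here (`project_struct`, `CPP_TYPES`) are made up for illustration, not the actual toolchain, and it only handles flat object schemas with scalar properties:

```python
# Hypothetical projection pass: normalized object schema -> C++ struct.
# Only scalar property types are covered; a real pass needs many more cases.
CPP_TYPES = {
    "string": "std::string",
    "integer": "std::int64_t",
    "number": "double",
    "boolean": "bool",
}

def project_struct(name: str, schema: dict) -> str:
    """Emit a C++ struct declaration for a flat object schema."""
    required = set(schema.get("required", []))
    lines = [f"struct {name} {{"]
    for prop, spec in schema.get("properties", {}).items():
        cpp_type = CPP_TYPES[spec["type"]]
        if prop not in required:
            # Non-required properties become std::optional<T>.
            cpp_type = f"std::optional<{cpp_type}>"
        lines.append(f"    {cpp_type} {prop};")
    lines.append("};")
    return "\n".join(lines)

print(project_struct("User", {
    "type": "object",
    "required": ["id"],
    "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
}))
```

The hard part isn't this projection step, it's getting the schemas into a shape where a pass this naive can work at all: resolving `$ref`s, flattening `allOf` compositions, deciding what to do with `oneOf` and free-form objects.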
I used to believe that I was working with JSON schema through OpenAPI 3.0, but then I learned a hard lesson that it uses an “extended subset” of it. And what does that mean? It “means that some keywords are supported and some are not, some keywords have slightly different usage than in JSON Schema, and additional keywords are introduced.” [1]. Yes, that’s a bonkers way to say “this is not JSON schema although it looks similar enough to deceive you”. This word game and engineering choice is so bizarre that it’s almost funny.
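A concrete example of the divergence: OpenAPI 3.0 expresses nullability with its own `nullable` keyword instead of JSON Schema's type unions (which is what OpenAPI 3.1 went back to). A hypothetical normalization pass (the helper name is made up) could rewrite one into the other:

```python
# OpenAPI 3.0's "extended subset" replaces JSON Schema type unions with a
# custom `nullable` keyword. This sketch normalizes the 3.0 spelling into
# the standard JSON Schema / OpenAPI 3.1 spelling.
def normalize_nullable(schema: dict) -> dict:
    """Rewrite OpenAPI 3.0 `nullable: true` as a JSON Schema type union."""
    out = dict(schema)
    if out.pop("nullable", False) and "type" in out:
        out["type"] = [out["type"], "null"]
    return out

# OpenAPI 3.0 spelling...
v30 = {"type": "string", "nullable": True}
# ...becomes the JSON Schema (and OpenAPI 3.1) spelling:
assert normalize_nullable(v30) == {"type": ["string", "null"]}
```

Multiply that by every diverging keyword (`exclusiveMinimum` as a boolean, single-schema `items`, no `type` arrays at all, and so on) and you get a sense of why "extended subset" is such a trap.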
OpenAPI 3.1 replaced the not-a-superset-or-subset of JSON Schema with the actual JSON Schema (latest version) over five years ago. No one should be using 3.0.x anymore. And 3.2 came out a few months ago, containing lots of features that have been in high demand (support for arbitrary HTTP methods, full expression of multipart and streaming messages, etc).
Many users are stuck at 3.0 or even Swagger 2.0 because the libraries they use refuse to support recent versions. Also OpenAPI is still not a strict superset because things like `discriminator` are still missing in JSON schema.
If you're building a brand new, multi-language, multi-platform system that uses advanced OpenAPI features, you will get bitten by the lack of support in 3.1 versions of tooling for features that already exist and work fine right now in 3.0 tool versions. Especially if you're using a schema-first workflow (which you should be). For example, `$ref`s to files across Windows/Linux/macOS in multiple different language tools: Java, .NET, TypeScript, etc.
If you need (or just want) maximum compatibility across tools, platforms and languages - open-api 3.1 is still not viable, and isn't looking like it will be anytime soon.
The solution here is to demand support for the most recent specification version from your tooling vendors. We (the OpenAPI TSC) sometimes hear from vendors "we're not moving quickly to support the latest version because our users aren't asking for it." So it's a catch-22 unless you make your needs known.
I am thankful for my coworkers. I'm the kind of software engineer who is a mad scientist in disguise.
Bridging dissimilar message busses with data-driven Lua scripts. Creating a Jenkins SCM plugin that exposes the source packages of a Debian repository (complete with binary dependency tracking) as an organization folder to turn it into a package builder. Improvising a Git proxy/cache with a hundred lines of Bash to lessen the load on the lab uplink (still load-bearing to this day). Writing a toolchain in Python that takes OpenAPI documents as an abstract syntax tree and runs passes on it to parse, validate, aggregate, transform and format it for various needs (such as generating C++ code for data models, dataframe bindings and so on). Delinking programs back into object files and creating Frankenstein monsters from salvaged pieces, and somehow landing a poster presentation about that at ACM CCS 2025 as a hobbyist (this one outside of office hours, but it still triggers brain meltdowns when I talk about it). And so many, many more sins.
I honestly don't know how they are putting up with the incarnation of chaos that is me.
Instead of dedicating an entire Raspberry Pi with fancy pinning and temperature management by burning CPU time, wouldn't a micro-controller and a precise external oscillator fare better for time-keeping stability? I would assume that an STM32 discovery kit running an NTP server on its Ethernet port could probably do better.
In general, NTP is a time sensitive process, and application processor/SoC are indeed going to have far greater rates of clock slips than an MCU running off an XTAL or TCXO.
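For context on why clock slips matter so much: NTP derives offset and round-trip delay from four timestamps (client send t1, server receive t2, server send t3, client receive t4), so any local clock instability between those captures feeds straight into the estimate. A quick sketch of the standard on-wire formulas from RFC 5905:

```python
# NTP's offset/delay estimation from the four on-wire timestamps.
# t1: client transmit, t2: server receive, t3: server transmit,
# t4: client receive (t1/t4 on the client clock, t2/t3 on the server clock).
def ntp_offset_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Client clock 0.5 s behind the server, ~40 ms each way on the network:
offset, delay = ntp_offset_delay(10.00, 10.54, 10.55, 10.09)
# offset ≈ 0.5 s, delay ≈ 0.08 s
```

The formula assumes symmetric path delay and stable clocks between captures; a jittery local oscillator (or a busy SoC servicing the request late) breaks the second assumption.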
RTLinux has a module feature to sync the scheduler to an external pin state. It is an obscure feature...
Adding more processors creates a well-known, named problem: metastability.
I do have that as well, but haven't done a write-up on it yet. It was a $70 GPSDO from eBay (a BH3SAP variant running Fredzo firmware with some changes to enable flywheeling, i.e. generating the pulse in the absence of a GPS PPS). I have verified it can feed the Pi for NTP. I believe the STM32 driving the GPSDO could as well, but it has no Ethernet capability as-is.
> And in many ways I think this was a positive energy to bring to the world!
Or in other words, becoming the landfill of negative energy of the world.
As someone who used to be that person for over a decade: having people endlessly confide their relationship/health/mental/work/legal/family/gender issues in you will, over time, completely wreck your sanity. Because when you're that someone, people will not just tell you the light stuff, but also some really heavyweight and/or deeply fucked up things.
Oh, certainly. And if you have any resources besides attention - money, or social capital, for instance - people will happily take those too. It's not that they are wrong to ask, but the need is truly infinite, and no one will create guardrails for you except yourself.
Take care of your mental health proactively. Do not let depression run unchecked, or it will end up robbing you of everything you hold dear before you realize it.
You could store money digitally on a card. Moneo did that until 2015: it was used in France as an electronic wallet, among other things for paying for meals in university canteens. That system was phased out just as I left university.
I remember writing an app in Java to read the balance on a card with my laptop which had a built-in smartcard reader, because I was too lazy to go to a station. Everyone in the classroom then promptly asked me to check their balance... and a few asked if I could top it up somehow.
Several years ago, I was the sysadmin/devops of an on-premises lab whose uplink to the rest of the company (and the proxy by extension) was melting under the CICD load.
When that became so unbearable that it escalated all the way to the top priority of my oversaturated backlog, I took thirty minutes from my hectic day to whip up a Git proxy/cache written in a hundred lines of Bash.
That single-handedly brought back the uplink from being pegged at the redline, cut down time spent cloning/pulling repositories in the CICD pipelines by over two-thirds and improved the workday of over 40 software developers.
That hackjob is still in production right now, years after I left that position. They tried to decommission it at some point thinking that the newly installed fiber uplink was up to the task, only to instantly run into GitHub rate limiting.
It's still load-bearing and, strangely enough, the most reliable piece of software I've ever written. It's the good kind of clever: easy to understand, even if it was hard to come up with. And the team would strongly disagree with anyone claiming they didn't directly benefit from it.
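For the curious, the core trick is tiny. The original was Bash; here's a rough Python sketch of the same mirror-and-fetch idea under assumed behavior (the cache path, helper names and layout are all made up, and error handling is omitted):

```python
# Sketch of a local Git cache: keep one bare mirror per upstream URL and
# refresh it on demand, so only deltas ever cross the uplink.
import hashlib
import subprocess
from pathlib import Path

CACHE_DIR = Path("/var/cache/git-mirrors")  # hypothetical location

def mirror_path(url: str) -> Path:
    """One bare mirror per upstream URL, keyed by a hash of the URL."""
    return CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()

def refresh(url: str) -> Path:
    """Clone the upstream once; later calls only fetch the deltas."""
    path = mirror_path(url)
    if path.exists():
        subprocess.run(["git", "--git-dir", str(path), "fetch", "--prune"],
                       check=True)
    else:
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        subprocess.run(["git", "clone", "--mirror", url, str(path)],
                       check=True)
    return path

# CI jobs then clone from the returned local mirror path instead of
# hammering the uplink (and the upstream's rate limits).
```

The heavy lifting is all done by `git clone --mirror` and `git fetch`; the wrapper just decides which one to run.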
> License is the obvious blocker, aside from all the technical issues. Btrfs is GPL
WinBtrfs [1], a reimplementation of btrfs from scratch for Windows systems, is licensed under the LGPL v3. Just because the reference implementation uses one license doesn't mean that others must use it too.