archerx's comments

If you’re noticing stuttering on 24fps pans, then someone made a mistake when setting the shutter speed (they set it too fast); the motion blur should have smoothed it out. This is the cinematographer’s fault more than anything.

60fps will always look like cheap soap opera to me for movies.


Pans looking juddery at 24 fps no matter what you do is a very well-known issue. Motion blur’s ability to help (using the 180-degree shutter rule) is quite limited, and you can also reduce judder somewhat by panning very slowly (the 1/7-frame rule: no more than a full frame width every seven seconds), but there is no cure. The cinematographer cannot fix the fundamental physical problem of 24 fps simply being too slow a frame rate.

24 fps wasn’t chosen because it was optimal or high quality; it was chosen because it was the cheapest option for film that meets the minimum rate needed not to degrade into a slideshow while still syncing with audio.

Here’s an example that uses the 180-degree shutter and 1/7-frame rules and still demonstrates bad judder. “We have tried the obvious motion blur which should have been able to handle it but even with feature turned on, it still happens. Motion blur applied to other animations, fine… but with horizontal scroll, it doesn’t seem to affect it.” https://creativecow.net/forums/thread/horizontal-panning-ani...

Even with the rules of thumb, “images will not immediately become unwatchable faster than seven seconds, nor will they become fully artifact-free when panning slower than this limit”. https://www.red.com/red-101/camera-panning-speed
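
For concreteness, here’s a quick back-of-the-envelope sketch of what those two rules of thumb actually imply at 24 fps (my own illustration in Python; the 1920-pixel frame width and exact numbers are just assumptions for the example, not from the RED article):

    fps = 24

    # 180-degree shutter rule: the shutter is open for half of each frame interval.
    frame_interval = 1 / fps            # ~0.0417 s per frame
    shutter_time = frame_interval / 2   # ~1/48 s exposure, the "standard" motion blur

    # Seven-second panning guideline: a full frame-width pan should take >= ~7 s.
    min_pan_duration = 7                            # seconds per frame width
    frame_widths_per_second = 1 / min_pan_duration  # ~0.14 frame widths per second
    pixels_per_frame = 1920 * frame_widths_per_second / fps  # ~11 px of motion per frame

    print(f"shutter ~1/{round(1 / shutter_time)} s, "
          f"max pan ~{pixels_per_frame:.0f} px/frame at 1920 px wide")

In other words, even a “correct” pan only moves the image about a dozen pixels per frame; anything faster starts to strobe.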

The thing I personally started to notice and now can’t get over is that during a horizontal pan, even with a slow speed and the prescribed amount of motion blur, I can’t see any details or track small objects smoothly. In the animation clip attached to that creativecow link, try watching the faces or look at any of the text or small objects in the scene. You can see that they’re there, but you can’t see any detail during the pan. Apologies in advance if I ruin your ability to watch pans in 24fps. I used to be fine with them, but I truly can’t stand them anymore. The pans didn’t change, but I did become more aware and more critical.

> 60fps will always look like cheap soap opera to me for movies

Probably me too, but there seems to be some evidence and hypothesizing that this is a learned effect because we grew up with 24p movies. Kids don’t get the same effect because they didn’t grow up with it, and I’ve heard it’s also less pronounced for people who grew up watching PAL rather than NTSC. TVs with motion smoothing turned on are sparing the next generation from being stuck with 24 fps.


I doubt that. I hear on the internet that Gemini Pro is great, but every time I have used it, it has been beyond disappointing. I’m starting to believe that the “Gemini Pro is great” narrative is some paid PR push and not based on reality. The Gemma models are also probably the least useful/interesting local models I’ve used.

What are you using them for? Gemini (the app, not just the Google Search overview) has replaced ChatGPT entirely for me these days, not least because I find Gemini simply handles web searches better (after all, that is what Google is known for). Add to that, it integrates well with other Google products like YouTube or Maps, where it can make me a nice map if I ask it what the best pizza places are in a certain area. I don't even need to use pro mode, just fast mode, because it's free.

Claude is still used, but only in IDEs for coding; I don't ask it general questions anymore.

I use Gemma as a developer for basic on-device LLM tasks such as structured JSON output.
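
To illustrate the kind of thing I mean, here's a minimal sketch of getting structured JSON out of a locally served Gemma model. I'm assuming an Ollama-style local server and its JSON mode here; the model name, endpoint, and prompt are placeholders rather than a description of my actual setup:

    import json
    import requests

    # Ask a locally served Gemma model (via Ollama's REST API) for JSON-only output.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "gemma3",  # whichever Gemma variant is pulled locally
            "messages": [{
                "role": "user",
                "content": "Extract JSON with keys 'name' and 'city' from: "
                           "'Alice moved to Lisbon last year.'",
            }],
            "format": "json",   # constrain the response to valid JSON
            "stream": False,
        },
        timeout=120,
    )
    data = json.loads(resp.json()["message"]["content"])
    print(data)  # e.g. {"name": "Alice", "city": "Lisbon"}

The appeal of small on-device models for this is that the task is narrow and the output is machine-checked, so you don't need frontier-level smarts.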


Gemini just has many basic things missing, like the ability to edit a message further back than the most recent one and see the branches of that conversation.

That's true, but to be honest I didn't really use those features anyway; my chats are just one long stream of replies and responses. If I need to switch to a new topic I make a new chat.

I used Gemini Pro and it was unable to comply with the simplest instructions (for image diffusion). Asking it to change the scene slightly by adding or removing an object or shifting the perspective yielded almost the same result, only with some changes I did not ask for.

The image quality was great, but when I ask a woodworker for a table and get a perfectly crafted chair of the highest quality, I'm still unsatisfied.

I cancelled my subscription after two days of trying to get Gemini to follow my instructions.


When was this, before or after Nano Banana Pro came out? This is a well-known bug, or rather intended behavior to some extent: requests go through Gemini's content filters, which can be overly strict, so it doesn't edit the image as you'd expect.

You can try it in AI Studio for free, which does not have the same strict content filters, and see if it works for your use case now.


The local Gemma models are pretty good for tasks involving multilingual inputs (translation, summarization, etc.). They have their niche.

I use Gemini from within AI Studio [0]. Not sure in what way you find Gemini disappointing, but I have had success with it through AI Studio.

[0] https://aistudio.google.com


I literally said “oh no” out loud when I read the headline.

Art and cinema; if I can’t write code I’ll write stories instead and try to bring them to life.

What does Mark Zuckerberg’s boot taste like?

If Llama counts, then it tasted great (while it lasted).

Llama was not great; it was barely good. It wasn't very smart or creative and had its guardrails cranked up to 11. Local models didn't get interesting until Mistral and China entered the game. Meta still hasn't released its image models, which have been trained on tens of thousands of my photos.

Yeah, well, it was all we had, hence llama.cpp, ollama, r/localllama, etc., all of which look increasingly silly now that it's highly unlikely we'll ever have another Llama.

Yea, I’ll give Meta a bit of credit for that, but I remember Llama’s first release was a leak, and I remember frantically downloading the weights just in case they got permanently taken down. To everyone’s surprise, Meta decided to roll with it and embrace the open source community. Unfortunately they face-planted with Llama 4, which was weird since they were supposed to have so much “talent” working on it.

The new SAM (segment anything) and SAM3D are actually impressive and good on them for releasing it to the public. They still need to release an image model.

I honestly believe the weird pursuit of “safety” is what sabotaged them; it seems to lobotomize models. It’s also the reason Stable Diffusion went from the hot thing to a joke. Stable Diffusion 3 was so safe you couldn’t generate a woman lying down on some grass, because that’s apparently dangerous for reasons unknown.

All models have had their “safety” and guardrails removed by the community and the world didn’t end.


> Adafruit’s Torrone had also said Arduino’s new documents “introduce an irrevocable, perpetual license over anything users upload.”

Yea, that's gonna be a hard pass for me. Thank goodness for the Pi Pico, which means I'll never have to use Arduino ever again. On a side note, the new Arduino IDE based on Monaco looked nice but made development so painful I just stopped. I had to keep disconnecting and reconnecting devices all the time to upload sketches, which was never an issue with the old IDE. Everything Arduino feels like a regression.


They can’t detect me splitting my HDMI output, feeding one copy to a separate machine running a vision model to detect whatever needs to be detected, with that same machine moving and clicking the mouse. People are already doing this.


Could you please share examples of ML-based cheats that actually work?


This one is a good example: https://m.youtube.com/watch?v=T4rrcw_oRVs


Thanks, interesting! Looks like it works way better than I expected.


Yes, the channel “Basically Homeless” has a few variations on this, from using electrodes to move your muscles to the more practical approach of a bot that moves your mousepad for you to give you perfect aim. No anti-cheat can detect that because there is nothing to detect.


I still blame the devs of the games.


Well, when you’re coding on your own you can get into the zone and just “flow”. With an LLM you’re waiting for the result, you see it has changed things it shouldn’t have changed, and while the overall result is a step in the right direction, you have to go back and fix a lot of the LLM’s “corrections”, which is super tedious.

I asked Claude to help me out with an issue I was having with a small renderer I was working on. It fixed the issue but also added a memory leak. The leak was easy enough to fix because I fully understood what was going on, but if you’re vibe coding and don’t have the skills to debug it yourself, you’re going to have a bad time.

In the end I don’t like using LLMs for coding but they are good at solving isolated problems when I get stuck. I prefer it when they review my code instead of writing it for me.


I’ve tried the paid models through GitHub Copilot and I just can’t find any of them actually useful for anything more than generating tests.

They can generate stuff, but generally I spend so long fixing it manually that it makes the time savings zero or negative.

Only thing I have found useful is the PR review bot. That thing is genuinely incredible at spotting tiny mistakes in massive PRs that make your eyes glaze over.

