Hacker News | v1ne's comments

Great! Brings a bit more dynamism into the market. So far, I'm happy with DxO, but then I also don't need to manage a library.

I don't know, does Resolve have lens corrections for 100+ lenses built-in? That's the thing that DxO does really well: Lens corrections, matching your camera's color rendering, denoising. Unfortunately, they still struggle with HDR output.

I imagine the tools in Resolve save you a lot of time, thanks to automation. Probably handy if you shoot a lot. Yet the biggest difference is that in photography, you're not necessarily limited by throughput. You can, and actually do, put a lot of effort into single images.


I think it's more that people are fascinated by this curious architectural detail. I imagine it's intriguing to those who aren't exposed to the intricate details of computer architecture, which I assume is the vast majority here. It's a glimpse into a very odd world (which is day-to-day work in the HFT field, but people there rarely talk about it, and much less in such grand words).

TBH, I didn't watch the video because the title is too click-baity for me and the video is too long. Instead, I looked at the benchmark results on the GitHub page, and sure, it's fascinating how significantly(!) you can tighten the latency distribution just by throwing 10× more CPU cores/RAM/etc. at the problem. A classic case of a bad trade-off.

And nobody talked about what we usually use RAM for: not just storing static data, but also updating it when the need arises. This scheme is completely impractical for those cases. Additionally, if you really need low latency, as others pointed out, you can reach for other means of computation, such as FPGAs.
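To make the trade-off concrete, here is a minimal sketch of the general idea; this is a toy example of my own, not the code from the video: burn RAM on a precomputed table so the hot path becomes a single branch-free load.

    /* Toy sketch of the RAM-for-latency trade: precompute every answer so
     * the hot path is one branch-free load. Not the video's code. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint8_t *table;                /* 64 KiB: one entry per 16-bit input */

    /* The "slow" path: a data-dependent loop, hence a wide latency spread. */
    static uint8_t compute(uint16_t x) {
        uint8_t bits = 0;
        while (x) { bits += x & 1; x >>= 1; }   /* loop-based popcount */
        return bits;
    }

    /* Pay the memory cost once, up front. */
    static void build_table(void) {
        table = malloc(1 << 16);
        for (uint32_t x = 0; x < (1u << 16); x++)
            table[x] = compute((uint16_t)x);
    }

    /* The "fast" path: one load, no branches. But the table is effectively
     * read-only; as soon as the underlying data must change, the scheme dies. */
    static uint8_t lookup(uint16_t x) { return table[x]; }

    int main(void) {
        build_table();
        printf("%d\n", (int)lookup(0xBEEF));    /* prints 13 */
        return 0;
    }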

So: I love this idea, and I'm sure it's a fun topic to talk about at a hacker conference! But I'm really put off by the click-baity title of the video and the hype around it.


Do anybody else's fingers also tingle, as if this was written by an AI?

The formatting is strangely inconsistent, highlighting only some numbers and some variables in a fixed-width font. There are also odd statements, like the claim that the reference resistor keeps its value "at all temperatures", which is just not true. Other phrases like "poly-silicon resistor" are highlighted and then never explained. All in all, I find this article quite a mess and not a clear explanation.
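For what it's worth, the first-order physics is simple: R(T) = R0 · (1 + TCR · (T − T0)), and the TCR is never zero. A tiny sketch with a made-up TCR value (real poly-silicon TCRs depend heavily on doping; the number below is purely illustrative):

    /* First-order resistor temperature model: R(T) = R0 * (1 + TCR*(T - T0)).
     * The TCR below is purely illustrative; real poly-silicon values depend
     * heavily on doping. The point: no resistor holds its value at all
     * temperatures. */
    #include <stdio.h>

    int main(void) {
        const double R0  = 10e3;     /* 10 kOhm nominal at T0 */
        const double T0  = 25.0;     /* reference temperature in degC */
        const double TCR = -1e-3;    /* -1000 ppm/K, assumed for illustration */

        for (double T = -40.0; T <= 125.0; T += 55.0) {
            double R = R0 * (1.0 + TCR * (T - T0));
            printf("T = %6.1f degC  ->  R = %7.1f Ohm\n", T, R);
        }
        return 0;
    }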


There are some typos, like "if" instead of "of", that seem to imply that at least some of it was written verbatim by a person. Given the subject matter, I'd be extremely surprised if this were 100% AI. But one thing I've totally done for similar technical writing is ask an AI to help refine a rough draft. Some suggestions I'd ignore, but the larger grammatical and sentence-structure suggestions I'd usually adopt.


All of this works much worse on macOS: scaling sucks, as it's integer-upscaled rendering plus fractional downscaling in a shader, and windows can't span multiple screens either.
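For anyone wondering what that means in numbers, a back-of-the-envelope sketch (assuming a 4K panel set to a "looks like 2560×1440" scaled mode):

    /* Back-of-the-envelope sketch of macOS fractional scaling: render the UI
     * at 2x the chosen "looks like" resolution, then downscale to the panel. */
    #include <stdio.h>

    int main(void) {
        const int panel_w = 3840, panel_h = 2160;   /* 4K panel, native pixels */
        const int looks_w = 2560, looks_h = 1440;   /* "looks like" setting */

        const int backing_w = 2 * looks_w;          /* 5120 */
        const int backing_h = 2 * looks_h;          /* 2880 */
        const double factor = (double)panel_w / backing_w;   /* 0.75 */

        printf("render %dx%d, then downscale by %.2f to %dx%d\n",
               backing_w, backing_h, factor, panel_w, panel_h);
        /* A non-integer factor means every UI pixel gets resampled: hence the
         * slight blur, plus the GPU/memory cost of the oversized backing store. */
        return 0;
    }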

On Windows, the window adapts as you move its center of gravity across the edge between screens. Sure, that would be better than the current macOS situation, where the window is simply the wrong size, but it would still always be blurry.


If people make extraordinary claims, I expect extraordinary proof…

Also, there is nothing all that complex in a C compiler. As students, we built these things as toy projects at uni, without any knowledge of software development practices.
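To illustrate: the core technique behind those toy compilers, recursive-descent parsing, fits in a screenful. A minimal sketch of my own (integer expressions with +, * and parentheses only; no error handling):

    /* The classic uni exercise: a recursive-descent parser/evaluator for
     * integer expressions with +, *, and parentheses. A real C compiler is
     * "just" a lot more of the same kind of code. */
    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    static const char *p;                      /* cursor into the input */

    static long expr(void);                    /* forward declaration */

    static void skip(void) { while (isspace((unsigned char)*p)) p++; }

    static long factor(void) {                 /* number | '(' expr ')' */
        skip();
        if (*p == '(') {
            p++;                               /* consume '(' */
            long v = expr();
            p++;                               /* consume ')' */
            return v;
        }
        char *end;
        long v = strtol(p, &end, 10);
        p = end;
        return v;
    }

    static long term(void) {                   /* factor { '*' factor } */
        long v = factor();
        skip();
        while (*p == '*') { p++; v *= factor(); skip(); }
        return v;
    }

    static long expr(void) {                   /* term { '+' term } */
        long v = term();
        skip();
        while (*p == '+') { p++; v += term(); skip(); }
        return v;
    }

    int main(void) {
        p = "2 * (3 + 4) + 1";
        printf("%ld\n", expr());               /* prints 15 */
        return 0;
    }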

Still, to give an example of something that's more than a toy project: one person coded this video editor with AI help: https://github.com/Sportinger/MasterSelects


From the linked project:

> The reality: 3 weeks in, ~50 hours of coding, and I'm mass-producing features faster than I can stabilize them. Things break. A lot. But when it works, it works.


Completely agreed. I sit in dismay, remembering the Microsoft I frowned upon back in the day as a Linux/FreeBSD user. But at least their software was accessible via the keyboard, and their translations were really good.

Fast forward to today, after years of being a dev on Windows and loving it: their UX is now a joke. For example, to jump back and forth between chats, neither the back/forward mouse buttons nor any key combination works on macOS. You have to click the navigation buttons in the toolbar instead. Translations are AI-powered, and it shows. Also, Teams is dog slow, which I also count as a UX issue.


I remember working at MS a decade ago and how good our translation pipeline was. Tons of attention was paid to cultural nuances, even between different English dialects: we'd have separate translations for UK, US, Australian, and international English. We'd change not just words but the overall tone of messages based on the culture in different countries.

So much care, and the expertise and professionalism of the people doing the work were amazing.


It’s sad to see the decline in the quality of desktop computing. I blame this on the rise of mobile apps and Web apps in the 2010s. It’s not that mobile apps and Web apps are inherently bad; that’s not the problem. The problem is that we have an entire generation of engineers who never learned desktop UI/UX conventions and principles.

To make matters worse, in an attempt to save on development costs, mobile and Web applications have been deployed on the desktop, with the justification that it’s better to have an app, even a shoddy one, than no app at all. What’s appropriate on a smartphone or a tablet may not be appropriate on a desktop, and vice versa. And the Web has never had a mechanism for enforcing UI/UX guidelines; in that respect, it’s like the MS-DOS and Apple II days of computing.

The sad thing is that Microsoft and even Apple now ship shoddy desktop apps, despite having the resources to make well-designed ones, and despite having, at one point, set the standard for excellent desktop apps and conformed to it themselves.

We had a sweet spot in the 2000s with Windows 2000/XP/7 and Mac OS X and their ecosystems of desktop applications. It’s been downhill since.


That's also my opinion, but I'm ready to cut them some slack, because they have it harder than back when it was just desktop computing.

Now we expect both a desktop and a mobile app, both native and browser-based. They all have different requirements. Even within the same category, such as iOS vs. Android, some conventions differ. Having to write the app differently for each platform to make the best of it is not only expensive, it may also be confusing to users who switch from one to another.

For example, say you have a button in your desktop app that sees little use but is a nice feature for the few times it's needed. Because it's a desktop app, with lots of space and a precise pointing device, it stays. But in your mobile version there is simply no room for it, so you remove it and tweak the workflow a bit so that it isn't needed anymore. Taken individually, both are good decisions, but I can guarantee that the desktop user will complain that the feature is missing from the mobile app, and he would be right. It means you have to make a compromise you didn't have to make before.


Even Microsoft's macOS apps are second-class citizens next to the ones found on Windows. I personally feel this is $WORKING_AS_INTENDED, because honestly, why would Microsoft empower people to leave the platform? It would be like creating an open-source version of Active Directory and giving it away.


> Also, ML is now really good to translate between European languages

As somebody who regularly has to put up with "German" machine-translated UIs and manuals that originate in English, I can only say: no, it's not. It's atrocious.


The best one was when gedit offered syntax highlighting for a language named “Los” (German for “Go”).


Not a bad name, to be honest!


Something to hack on, but I don't see how to easily type braces and parentheses. Looks like a non-starter to me, because I hack by writing in languages that require parentheses.


The problem is not only DAW support, but also support for low-latency audio interfaces on Linux. Audio interface makers rarely provide a Linux driver, and a low-latency setup on Linux is its own hell, with real-time kernel patches. On macOS and Windows, it works out of the box.


The RT patches have been upstream since about a year ago. You might need to switch to the RT option in Kconfig, sure, but no out-of-tree patches.
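For reference, a minimal sketch of the relevant kernel configuration, assuming a mainline kernel of version 6.12 or newer (where PREEMPT_RT was merged) on a supported architecture:

    # Minimal sketch; assumes mainline >= 6.12 on a supported arch (x86_64, arm64)
    CONFIG_PREEMPT_RT=y
    CONFIG_HIGH_RES_TIMERS=y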


Thing is: that's your preference, and nobody should force you to use these indicators. Even on Windows, tray icons are mostly hidden away by default.

I find them highly useful on macOS, but there I lack the configurability I have on Windows.


Having to interact with them, even just to hide them, is forcing them upon me. I don’t understand the reason for them to exist. They’re simply useless to me, and a sign of overly complex design.

