Hacker News | new | past | comments | ask | show | jobs | submit | duskdozer's comments

Windows 10 Enterprise LTSC IoT version. Official support until 2032

Or winapps/cassowary/<latest tool>

I try to use LibreOffice when possible, but sometimes the performance takes a nosedive for opaque reasons where Excel is fine.


Every time I try to edit my CV, which contains many disconnected tables, I want to scream with frustration.

In MS Office it's always a breeze and a two-minute job. In LibreOffice it's 15 minutes at least, with multiple fights against pages suddenly breaking, cells and rows refusing to stick to my dimensions, or something perfectly fine in the print preview losing cell edges (i.e. a missing letter, etc.) when it actually lands on paper/PDF.

Infuriating.

And I haven't even started on printing in Linux. What works on Android out of the box didn't work consistently for me across two distributions, several years, and many versions. PaperCut is the worst, but CUPS is a close second.


If you don't want to go insane, try to forget it before you start noticing it everywhere else. It might already be too late once you first do, though.

>distributes it to the scum of the Earth

Who?


A big smell, and my biggest pet peeve with them, is the excessive custom JavaScript animations that don't respect my settings to disable animations and that break through my own extra defenses, all tucked away in some webpack chunk I'd have to debug to get rid of. As soon as I see above-the-fold text fade and slide in, I close the tab to spare my head, stomach, and CPU.

Would it silently allow it, or would you still get the notification or whatever (IIRC from Little Snitch years ago)?

The allow rule for Firefox is what would suppress the prompt. You probably don't want to have a prompt for every Firefox connection though, so you'd need to come up with some kind of ruleset (or get very annoyed :D).

>Why is it a bunch of mostly unpaid volunteer hackers are putting more effort into supply chain security than OpenAI.

Unpaid volunteer hackers provide their work for free under licenses designed for the purpose of allowing companies like OpenAI to use their work without paying or contributing in any form. OpenAI wants to make the most money. Why would they spend any time or money on something they can get for free?


Not sure if you're fully up to speed on the context: OpenAI bought Astral, who "own" uv.

Yep. Permissive licenses, "open source", it's all just free work for the worst corporations you can think of.

It's free work for anyone.

Seems like the most cynical take on OSS possible.

Like anything good you do an evil person could benefit from - is the solution to never do any good?


The solution is to use AGPLv3.

I’m maybe daft, but AGPLv3 doesn’t prevent $Evilcorp from using it; they just need to share any modifications or forks they make?

And at this point, it appears running code through an LLM to translate it eliminates copyright (and thus the licence), so $Anycorp can use it.

Our stuff is AGPL3 licenced and if this present trend continues we might just switch to MIT so at least the little guys can take advantage of it the way the big guys can.


I think it's still unproven whether this whitewashing of code through LLMs actually works for a reasonably complex project, and it's also still kind of a legal Wild West; I think no one knows for sure how it will work out.

There are piles of examples of it working for complex projects and libraries now, especially if they have good test suites your clone can pass.

Also they are even getting quite good at reverse engineering binaries.

Anything not released as FOSS will have a FOSS copy made.

There is no moat and the reign of restrictive licenses on software is effectively over.


Can you share any of these examples? I haven’t been able to find any…

In reality most $Evilcorps have policies against AGPLv3, which is why projects can make money selling a less-restricted enterprise license for the same code.

I often hear this but I don’t really understand it. Not saying you need to explain it to me but what is the issue with AGPLv3 that turns those corporations away?

To my non-lawyer eyes it looks like MIT or Apache2 but modifications need to be made public as well.

If you don’t make any modifications then it should be fine? Or do most $Evilcorp aim to make modifications? Or is AGPLv3 something like garlic against vampires (doesn’t make sense but seems to work)?


AGPLv3 extends “distribution” to include essentially communicating with the service over the network, as opposed to the GPL concept of, like, sending a shrink-wrapped binary that someone downloads and runs themselves.

So basically they are worried that they have no way of avoiding one or more of their tens of thousands of engineers “distributing” it to customers by including it in some sort of publicly accessible service. AFAIK there’s no settled case regarding what level of network communication qualifies - like if I run a CRUD app on Postgres and Postgres was AGPL, am I distributing Postgres?

Now the second part is that you only have to give out your changes to the AGPL software to those that it was “distributed” to. Most people aren’t changing it! If anything they’re just running a control plane in front of it…

but it goes back to the corporate legal perspective of “better safe than sorry”: we can’t guarantee that one of our engineers isn’t changing it in some way that would expose company internals, triggering a condition where they have to distribute those private changes publicly.


Oh I see that makes sense, thanks for the explanation!

Only if they provide the software, or software as a service. I suspect it's good enough if the modifications or forks are shared only internally when the software is used only internally, but on the other hand I'm not a lawyer.

> if software is used only internally

Internal users are still users though. They are entitled to see the source code, and the license allows them to share it with the rest of the world.


Employers might argue that such internal use and distribution would fall under the “exclusively on your behalf” clause in the GPLv3, which is inherited by the AGPLv3.

Oh, I guess it would. Ignore me.

This is the point. They can use and modify it, but they also have to share their modifications, i.e., help its development. Yet most megacorps never even touch this license.

Never let the left hand know what the right hand is doing. I suppose it works both ways here, but the specific end user is not why people make code available, it’s in the hope of improving things, even just the tiniest bit.

Looks like LLM-generated to me

I think the point is that if you have to squash, the PR-maker was already gitting wrong. They should have "squashed" on their end to one or more smaller, logically coherent commits, and then submitted that result.

It’s not “having to squash”. The intent was already for a PR to be a single commit. I could squash it on my end and merge by rebasing, but any alteration would then need to be force-pushed. So I don’t bother. I squash-merge when it’s ready and delete the branch.
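That flow can be sketched in a throwaway repo, a minimal example only (all file, branch, and message names here are made up):

```shell
# Squash-merge a feature branch so the mainline gets a single commit
# and the branch's WIP history is discarded.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)

echo base > file.txt
git add file.txt
git commit -q -m "initial"

# The PR branch accumulates messy WIP commits during review.
git checkout -q -b feature
echo one >> file.txt && git commit -q -am "wip 1"
echo two >> file.txt && git commit -q -am "wip 2"

# Squash-merge when it's ready, then delete the branch.
git checkout -q "$main"
git merge --squash -q feature >/dev/null
git commit -q -m "Add feature (squashed)"
git branch -D feature >/dev/null
git log --oneline
```

After this, the mainline history shows only "initial" and the single squashed commit; the two WIP commits are gone along with the branch.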

if you mean better messages, it's not really that. those junk messages should be rewritten and if the commits don't stand alone, merged together with rebase. it's the "logical chunks" the parent mentioned.

it's hard to say in general, but unless a changeset is quite small, or is basically all-or-nothing, there are usually smaller steps.

like, it's kind of contrived, but say you have one function that uses a helper. if there's a bug in the function, and it turns out that fixing it makes a lot more sense if you change the return type of the helper, you would make commit 1 to change the return type, then commit 2 to fix the bug. would these be separate PRs? probably not, to me, but I guess it depends on your project workflow. keeping them in separate commits, even if they're small, lets you bisect more easily later on in case some unforeseen or untested problem was introduced, leading you to smaller chunks of code to check for the cause.
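a sketch of that split in a throwaway repo, where the file contents are just stand-ins for real code:

```shell
# Commit 1: the refactor. Commit 2: the actual fix. A later bisect
# lands on one of these small steps instead of one big blob.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'helper returns int' > helper.txt
git add helper.txt
git commit -q -m "initial"

# commit 1: change the helper's return type, nothing else mixed in
echo 'helper returns Result<int>' > helper.txt
git commit -q -am "refactor: change helper return type"

# commit 2: the bug fix in the caller, building on commit 1
echo 'caller handles Result' > caller.txt
git add caller.txt
git commit -q -m "fix: handle helper errors in caller"

git log --oneline
```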


If the code base is idempotent, I don't think showing commit history is helpful. It also makes rebases more complex than needed down the line. Thus I'd rather squash on merge.

I've never considered how an engineer approaches a problem. As long as I can understand the fundamental change and it passes preflights/CI, I don't care if it was scried from a crystal ball.

This does mean the onus is on the engineer to explain their change in natural language. In their own words, of course.


Commits don't show "how an engineer approaches a problem". Commits are the unit of change that are supposed to go into the final repository, purposefully prepared by the engineer and presented for review. The only thing you do by squashing on merge is to artificially limit the review unit to a single commit to optimize the workflow towards people who don't know how to use git. Personally I don't think it's a good thing to optimize for.

Preserving commit history pre-merge only seems useful if I had to revert or rebase onto an interstitial commit. This is at odds with treating PRs as atomic changes to the code base.

I might not have stated my position correctly. By "squash on merge" I mean the commit history is fully present in the PR for full scrutiny. Sometimes commits introduce multiple changes, and I can view commit ranges for each set of changes. But it takes the summation of the commits to illustrate the change the engineer is proposing. The summation is an atomic change, so scrutinizing the terms post-merge is meaningless. Squashing preserves the summation but gets rid of the terms.

Versioned releases on main are tagged by these summations, not their component parts.


"Tell me you don't have to debug foreign codebases often without telling me" ;)

The primary value of commit history comes from blame, bisect and corresponding commit messages. There's no reason to "treat PRs as atomic changes to the code base", commits are already supposed to be that and PRs are groups of commits (sometimes groups of size 1, but often not).

> When I mean "squash on merge", I mean the commit history is fully present in the PR for full scrutiny.

And when you merge the set of commits prepared by the author for review in, you get both "summations" and individual commits stored in the commit graph (where their place is) and you get to choose which way you want to view them at retrieval time without having to dig up things outside of the repository. Sometimes it's useful to see the summations only ("--first-parent"), sometimes you only want the individual atomic changes ("--no-merges"), sometimes you want to see them all but visually grouped together for clarity ("--graph"). When you squash on merge, you just give all that flexibility away for no good reason.

It's a commit graph, not a commit linked list, so treat it as such.
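A rough sketch of those three retrieval-time views, in a tiny repo with one real merge (branch and message names are made up, and the commits are empty just to keep the example short):

```shell
# Build: one mainline commit, a two-commit topic branch, one true merge.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)

git commit -q --allow-empty -m "initial"
git checkout -q -b topic
git commit -q --allow-empty -m "topic: step 1"
git commit -q --allow-empty -m "topic: step 2"
git checkout -q "$main"
git merge -q --no-ff -m "merge topic" topic

git log --oneline --first-parent   # summations only: merge + initial
git log --oneline --no-merges      # individual atomic changes only
git log --oneline --graph          # everything, grouped visually
```

With squash-on-merge, only the first of these views would exist; a real merge keeps all three available from the same graph.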


If you don't care about how the problem was solved, why are you reviewing it at all?

