Hacker News | new | past | comments | ask | show | jobs | submit | MaulingMonkey's comments

> no commit history, 10k+ lines of complicated code

This kind of pattern is incredibly common when e.g. a sublibrary of a closed-source project is extracted from a monorepository. Search for "_LICENSE" in the source code and you'll see leftover signs that this was indeed at one point limited to "single-process-package hardware" for rent-extraction purposes.

Now, for me, my bread-and-butter monorepos are Perforce based and contain 100GB+ of binaries (gamedev - so high-resolution textures, meshes, animation data, voxely nonsense, etc.), which take an hour+ just to check out the latest commit. They also frequently have mishandled bulk file moves (copied and deleted, instead of explicitly moved through p4/p4v), which might mean terabytes of bandwidth used over days if trying to create a git equivalent of the full history... all to mostly throw it away, and then give yourself the added task of scrubbing said history to ensure it contains no code signing keys, trade secrets, unprofessional easter eggs, or other such nonsense.

There are times when such attention to detail and extra work make sense, but I have no reason to suspect this is one of them. And I've seen far worse monocommits - typically ingested from .zip or similar dumps of "golden master" copies, archived for the purposes of contract fulfillment, without full VCS history.

Even Linux, the project git was originally created for, has some of these shenanigans going on. You have to resort to git grafts to go earlier than the Linux-2.6.12-rc2 dump - itself a significantly girthy commit.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

https://github.com/torvalds/linux/commit/1da177e4c3f41524e88...

0 parents.

> It looks to be generated from another code base as a sorta amalgamation (either through code generation, ai, or another means).

I'm only skimming the code, but other posters point out some C macros may have been expanded. The repeated pattern of `(chunk)->...` reminds me of a C-ism where you defensively parenthesize macro args in case they're something complex like `a + b`, so it expands to `(a + b)->...` instead of `a + b->...`.

One explanation for that would be stripping "out of scope" macros that the sublibrary depends on but wishes to avoid including.

> We're supposed to implicitly trust this person

Not necessarily, but cleaner code, a git history, and a longer-active account aren't necessarily meant to suggest trust either.


> One explanation for that would be stripping "out of scope" macros that the sublibrary depends on but wishes to avoid including.

Another explanation would be the original source being multi-file, with the single-file variant being generated. E.g. duktape ( https://github.com/svaarala/duktape ) generates src-custom/duktape.c from src-input/*/*.c ( https://github.com/svaarala/duktape/tree/master/src-input ) via a Python script, as documented in the README:

https://github.com/svaarala/duktape/tree/master?tab=readme-o...



In fact, Windows 10+ now uses a thread pool during process init well before main is reached.

https://web.archive.org/web/20200920132133/https://blogs.bla...


> video games

Often use dynamic/scripting languages to improve iteration on gameplay code, even if a lot of the fundamental underlying code is native. And add dev-time hot reloading wherever we can so when you change a texture, it reloads ≈immediately without needing to so much as restart the level. We exile as much as we can to tables and other structured data formats which can easily be tweaked and verified by non-coders so we're not a bottleneck for the game designers and artists who want to tweak things, and make that stuff hot-reloadable if possible as well.
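At its simplest, that kind of data hot reloading is just polling a file's modification time every frame and re-reading the file when it changes. A minimal sketch (the `HotAsset` type and its methods are made up for illustration):

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};
use std::time::SystemTime;

/// Tracks one asset file and reloads its bytes whenever the file's
/// modification time changes - poll once per frame, reload on change.
struct HotAsset {
    path: PathBuf,
    mtime: SystemTime,
    bytes: Vec<u8>,
}

impl HotAsset {
    fn load(path: &Path) -> io::Result<Self> {
        Ok(HotAsset {
            path: path.to_path_buf(),
            mtime: fs::metadata(path)?.modified()?,
            bytes: fs::read(path)?,
        })
    }

    /// Call once per frame; returns true if the asset was reloaded.
    fn poll(&mut self) -> io::Result<bool> {
        let mtime = fs::metadata(&self.path)?.modified()?;
        if mtime != self.mtime {
            self.mtime = mtime;
            self.bytes = fs::read(&self.path)?;
            return Ok(true);
        }
        Ok(false)
    }
}
```

Real engines layer file-watcher APIs, dependency tracking, and GPU re-upload on top, but the shape is the same.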

We also often have in-house build server farms full of testing code, because it's such a pain in the ass to iterate with anything dynamic. After all, games are huge, and sufficient testing to make sure all your uncompiled, unanalyzed, typecheckless code works is basically impossible - things are constantly breaking as committed during active development. A decent amount of engineering work is frequently dedicated to such simple tasks as triaging, collecting, and assigning bugs and crash reports so that whoever broke something knows they need to fix it, as well as to allowing devs and designers to work from previous "known good" commits and builds so they aren't blocked - which means internal QA helping identify what's actually "known good", hosting and distributing multiple build versions internally so people don't have to rebuild the universe themselves (because that's several hours of build time), etc.

Some crazy people invest in hot-reloadable native code. There's all kinds of limits on what kinds of changes you can make in such a scenario, but it's entirely possible to build a toolchain where you save a .cpp file, and your build tooling automatically kicks off a rebuild of the affected module(s), triggering a hot reload of the appropriate .dll, causing your new behavior to be picked up without restarting your game process. Which probably means it'll immediately crash due to a null pointer dereference or somesuch because some new initialization code was never triggered by the hot reloading, but hey, at least it theoretically works!

And, of course, nothing is stopping you from creating isolated sandboxes/examples/test cases where you skip all the menuing, avoid compiling unrelated modules, etc., and then iterating in that faster context instead of the cumbersome monolith for most of your work.


> They didn't need more unmanned testing to find the issue; they needed to stop ignoring it.

Should such testing have been needed? No.

Was such testing needed, given NASA's political pressures and management? Maybe. Unmanned testing in similar conditions before putting humans on it might've resulted in a nice explosion without loss of life that would've been much harder to ignore than "the hypothesizing of those worrywart engineers," and might've provided the necessary ammunition to resist said political pressures.


> Unmanned testing in similar conditions before putting humans on it might've resulted in a nice explosion without loss of life that would've been much harder to ignore

The loss of the Challenger was the 25th manned orbital mission. So we can expect that it might have taken 25 unmanned missions to cause a similar loss of vehicle. But what would those 25 unmanned missions have been doing? There just weren't 25 unmanned missions' worth of things to find out. That's also far more unmanned missions than were flown on any previous NASA space program before manned flights began.

Even leaving the above aside, if it would have been politically possible to even fly that many unmanned missions, it would have been politically possible to ground the Shuttle even after manned missions started based on the obvious signs of problems with the SRB joint O-rings. There were, IIRC, at least a dozen previous manned flights which showed issues. There were also good critiques of the design available at the time--which, in the kind of political environment you're imagining, would have been listened to. That design might not even have made it into the final Shuttle when it was flown.

In short, I don't see your alternative scenario as plausible, because the very things that would have been required to make it possible would also have made it unnecessary.


Record low launch temperatures are exactly the kind of boundary pushing conditions that would warrant unmanned testing in a way that not all of those previous 25 would have been. Then again, so was the first launch, and that was manned.

> I don't see your alternative scenario as plausible

Valid.


> Record low launch temperatures

Were not necessary to show problems with the SRB joint O-rings. There had been previous problems noted on flights at temperatures up to 75 degrees F. And the Thiokol engineers had test stand data showing that the O-rings were not fully sealing the joint even at 100 degrees F. Any rational assessment of the data would have concluded that the joint was unacceptably risky at any temperature.

It might have been true that a flight at 29 degrees F (the estimated O-ring temperature at the Challenger launch) was a little more unacceptably risky than a flight at a higher temperature. But that was actually a relatively minor point. The reason the Thiokol engineers focused on the low temperature the night before the Challenger launch was not because they had a solid case, or even a reasonable suspicion, that launching at that cold a temperature was too risky as compared with launching at higher temperatures. It was because NASA had already ignored much better arguments that they had advanced previously, and they were trying to find something, anything, to get NASA to stop at least some launches, given that they knew NASA was not going to stop all launches for political reasons.

And just to round off this issue, other SRB joint designs have been well known since, I believe, the 1960s, that do not have the issue the Shuttle SRBs had, and can be launched just fine at temperatures much colder than 29 F (for example, a launch from Siberia in the winter). So it's not even the case that SRB launches at such cold temperatures were unknown or not well understood prior to the Challenger launch. The Shuttle design simply was braindead in this respect (for political reasons).


I should point out that the Buran launched and landed back on Earth, in bad conditions, completely automated. It's sad how it ended.


> So we can expect that it might have taken 25 unmanned missions to cause a similar loss of vehicle.

That doesn't follow. If those were unmanned test flights pushing the vehicle limits you can't just assume they would have gone as they actually did.


> If those were unmanned test flights pushing the vehicle limits

As far as the launch to orbit, which was the flight phase when Challenger was lost, every Shuttle flight pushed the vehicle to its limits. That was unavoidable. There was no way to do a launch that was any more stressful than the actual launches were.


You can push the environmental conditions of the launch e.g. winds and temperatures.


See my response to Mauling Monkey upthread on why the cold temperature of the Challenger launch actually wasn't the major issue it was made out to be.

Note also my comments there about other SRB designs that were known well before the Shuttle and the range of temperatures they could launch in. Those designs were used on many unmanned flights for years before the Shuttle was even designed. So in this respect, the unmanned test work had already been done. The Shuttle designers just refused to take advantage of all that knowledge for braindead political reasons.


Skeptical notes based on my own experiences in Seattle (≈1148ft average stop spacing per the article - which might be considered high enough that the article already considers the mission for fewer bus stops a success?):

Some of the routes I've taken had "express" variants that skipped many stops, yet still stopped at my usual start and exit. I never bothered waiting for them - the savings were marginal, and taking the first bus was typically fastest, express or not. Time variation due to traffic etc. meant you couldn't really plan around which one you wanted to take either.

The buses already skip stops where they don't see anyone waiting for the bus and nobody pulls the cord to request an exit, and said skipping tends to happen even during the dense rush hour. Additionally, stop time seems to be dominated by passenger load/unload. Clustering at fewer bus stops doesn't significantly change how much time that takes, it just bunches it together in longer chunks. The routes where this happens a lot also tend to be the routes where buses are going to be starting and stopping frequently for traffic lights anyways - often stopping before a light for shorter than the red, or after a light and then catching up to the next red.

What makes a significant difference in bus speed is the route.

If the bus takes a route where a highway is taken - up/down I-5 or I-405, or crossing Lake Washington, there are significant time savings. This isn't "having less/fewer bus stops", this is "having some long distance routes that bypass entire metro areas".

Alternatively, buses that manage to take low-density routes - not highways per se, but places where there are still few if any traffic lights, and minimal traffic - tend to manage a lot better speed compared to routes going through city centers. They may have plenty of bus stops, but again skip many of them due to lower density also resulting in lower passenger numbers, and when they do stop it's for less time than a typical traffic light cycle. A passenger might pull the cord, get up to exit, stand while the bus comes to a stop, hop off, and watch the bus pull off, delaying the bus by what... 10 seconds pessimistically for the stop itself, and another 10 seconds for deceleration and then acceleration back to the speed limit?

Finally, there's also grade-separated light rail, grade-separated bus lanes, and bus tunnels through downtown Seattle, which significantly help mass transit flow smoothly even in rush hour, for when you do have to go through a dense metro area. While these are far from fast or cheap to implement, axing a few bus stops isn't going to make other routes competitive when these are an option.


I'll note another fun pattern I've seen:

• Bus crawls along behind traffic during rush hour traffic, or a long line of traffic bottlenecked by a busy stop sign

• Bus stops to load/unload, blocking traffic for a bit, with a gap opening up in front of it as a result of cars not being able to get around (e.g. the stop is just directly on the typical curb/sidewalk with one lane in that direction.)

• Bus continues, and quickly catches up to the car it was behind before, since traffic was going slower than the speed limit as a result of bottlenecks

The stop was free, in these cases.


(equivalent C file: https://github.com/id-Software/wolf3d/blob/master/WOLFSRC/WL... )

> Was this translated automatically from C?

I'll note that when I convert code between languages, I often go out of my way to minimize on-the-fly refactoring, instead relying on a much more mechanical, 1:1 style. The result might not be idiomatic in the target language, but the bugs tend to be a bit fewer and shallower, and it assists with debugging the unfamiliar code when there are bugs - careful side-by-side comparison will make the mistakes clear even when I don't actually yet grok what the code is doing.

That's not to say that the code should be left in such a state permanently, but I'll note there are significantly more changes in function structure here than I'd personally put into an initial C-to-Rust rewrite.
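To illustrate the mechanical style with a made-up example: given a trivial C counting loop, a 1:1 port keeps the index-based structure intact for easy side-by-side comparison, while the idiomatic rewrite diverges from the original (both functions here are hypothetical):

```rust
// Hypothetical C original being ported:
//
//     int count_nonzero(const int *v, int n) {
//         int count = 0;
//         for (int i = 0; i < n; i++) {
//             if (v[i] != 0) count++;
//         }
//         return count;
//     }

// Mechanical 1:1 port: same loop shape, same index variable, same
// types - unidiomatic Rust, but trivially diffable against the C.
fn count_nonzero_mechanical(v: &[i32], n: usize) -> i32 {
    let mut count = 0;
    let mut i = 0;
    while i < n {
        if v[i] != 0 {
            count += 1;
        }
        i += 1;
    }
    count
}

// Idiomatic rewrite: shorter, but a bigger structural diff against
// the original, so porting mistakes are harder to spot by comparison.
fn count_nonzero_idiomatic(v: &[i32]) -> usize {
    v.iter().filter(|&&x| x != 0).count()
}
```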

The author of this rewrite appears to be taking a different approach, understanding the codebase in detail and porting it bit by bit, refactoring at least some along the way. Here's the commit that introduced that fn, doesn't look like automatic translation to me: https://github.com/Ragnaroek/iron-wolf/commit/9014fcd6eb7b10...


> Would have to be F32, no?

Generally yes. `NonZeroU32::saturating_add(self, other: u32)` is able to return `NonZeroU32` though! ( https://doc.rust-lang.org/std/num/type.NonZeroU32.html#metho... )

> I cannot think of any way to enforce "non-zero-ness" of the result without making it return an optional Result<NonZeroF32>, and at that point we are basically back to square one...

`NonZeroU32::checked_add(self, other: u32)` basically does this, although I'll note it returns an `Option` instead of a `Result` ( https://doc.rust-lang.org/std/num/type.NonZeroU32.html#metho... ), leaving you to `.ok_or(...)` or otherwise handle the edge case to your heart's content. Niche, but occasionally what you want.
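A quick sketch of both methods (illustrative, standard library only; the demo function name is made up):

```rust
use std::num::NonZeroU32;

/// Adding a u32 to a NonZeroU32 can only move *away* from zero, so
/// saturating_add returns NonZeroU32 directly, while checked_add
/// signals overflow with Option (not Result).
fn nonzero_add_demo() {
    let one = NonZeroU32::new(1).unwrap();

    // Saturates at u32::MAX instead of wrapping around to 0.
    assert_eq!(one.saturating_add(u32::MAX).get(), u32::MAX);

    // Option, not Result: None on overflow...
    assert_eq!(one.checked_add(1).map(NonZeroU32::get), Some(2));
    assert_eq!(one.checked_add(u32::MAX), None);

    // ...convertible to a Result via .ok_or(...) when needed.
    let as_result: Result<NonZeroU32, &str> =
        one.checked_add(u32::MAX).ok_or("overflow");
    assert!(as_result.is_err());
}
```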


> `NonZeroU32::saturating_add(self, other: u32)` is able to return `NonZeroU32` though!

I was confused at first how that could work, but then I realized that of course, with _unsigned_ integers this works fine because you cannot add a negative number...


You'd still have to check for overflow, I imagine.

And there are other gotchas, for instance it seems natural to assume that NonZeroF32 * NonZeroF32 can return a NonZeroF32, but 1e-25 * 1e-25 = 0 because of underflow.
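That underflow is easy to demonstrate: 1e-25 is a perfectly ordinary positive f32, but its square, 1e-50, lies below f32's smallest subnormal (≈1.4e-45) and so rounds to exactly zero (helper name is hypothetical):

```rust
/// Multiplies a small-but-nonzero f32 by itself; the mathematically
/// nonzero product 1e-50 underflows below f32's smallest subnormal
/// and becomes exactly 0.0 - so a hypothetical NonZeroF32 couldn't
/// close multiplication the way NonZeroU32 closes addition.
fn underflow_demo() -> f32 {
    let tiny = 1e-25_f32;
    assert!(tiny > 0.0); // the inputs really are nonzero
    tiny * tiny
}
```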


One thing I appreciate about Rust's stdlib is that it exposes enough platform details to allow writing the missing knobs without reimplementing the entire wrapper (e.g. File, TcpStream, etc. allows access to raw file descriptors, OpenOptionsExt allows me to use FILE_FLAG_DELETE_ON_CLOSE on windows, etc.)
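A Unix-flavored sketch of the same idea, since `FILE_FLAG_DELETE_ON_CLOSE` is Windows-only (the function name is made up): `OpenOptionsExt::mode` sets permission bits the portable API doesn't cover, and `AsRawFd` hands back the descriptor for anything std doesn't wrap.

```rust
use std::fs::OpenOptions;
use std::io;
use std::os::unix::fs::OpenOptionsExt; // .mode(), .custom_flags()
use std::os::unix::io::AsRawFd;        // .as_raw_fd()

/// Opens (creating if needed) a file readable only by the owner,
/// then peeks at the raw file descriptor - without giving up the
/// std::fs::File wrapper, which still owns and closes the fd on drop.
fn open_with_unix_knobs(path: &str) -> io::Result<bool> {
    let file = OpenOptions::new()
        .write(true)
        .create(true)
        .mode(0o600) // Unix permission bits via the extension trait
        .open(path)?;
    // The raw descriptor is available for any libc call std doesn't
    // expose; returning whether it's a valid (non-negative) fd.
    Ok(file.as_raw_fd() >= 0)
}
```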


Where I live (pacific northwest), it's not snow that's the problem, but windstorms. Presumably knocking over trees, which in turn takes down power lines - which of course implies said trees are tall, in proximity to the power lines, and not cut down. I maybe average 24 hours of outage per year (frequently less, but occasionally spiking to a multi-day outage.)

I don't think that's something that can be solved with just "build quality"... but it presumably could be solved through "maintenance" (cutting down or trimming trees, although that requires identifying the problem, permissions, a willingness to have decreased tree coverage, etc.)

