Hacker News | past | comments | ask | show | jobs | submit | jasonpeacock's comments

I'm concerned that someone had the permissions to make such a change without the knowledge of how to make the change.

And there was no test environment to validate the change before it was made.

Multiple process & mechanism failures, regardless of where the bad advice came from.


If you have to do all that, then what's the point of the AI? I'm joking, but I'm afraid many others say the same thing 100% seriously.

As an article that was here recently claims, every verification you do in a chain increases the total time of your work by an order of magnitude. So, it's only worth optimizing a productive task if you've already removed most verifications.

Now, some people claim that you need to improve the reliability of your productive tasks so you can remove the verifications and be faster. Those people are, of course, a bunch of coward Luddites.


Flakes fix this for Nix: they ensure builds are truly reproducible by capturing all the inputs (or blocking them).


Apparently I made a note of this in my laptop setup script (but not of when it happened, so I don't know how long ago this was). In case anyone was curious: the jar file was compiled with Java 16, but the Nix config was running it with Java 8. I assume they were both Java 8 when it was set up and the jar file was later upgraded, but I don't really know what happened.


No, it doesn't. If the content at a URL changes, then the only way to have reproducibility is caching. You tell Nix the content hash is some value, and it looks that value up in the Nix store. Note that it will match anything with that content hash, so it is absolutely possible to tell it the wrong hash.
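To make the caching point concrete, here's a toy Python sketch of how a fixed-output fetch behaves. This is an illustration of the model, not Nix's actual implementation; `STORE` and `fetch_fixed_output` are made-up names, with `STORE` standing in for /nix/store:

```python
import hashlib

STORE = {}  # content hash -> bytes; stands in for /nix/store

def fetch_fixed_output(url, expected_sha256, download):
    """Return the store entry for expected_sha256 if present (the URL is
    ignored on a cache hit!); otherwise download and verify the hash."""
    if expected_sha256 in STORE:
        return STORE[expected_sha256]  # anything with this hash matches
    data = download(url)
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(f"hash mismatch: expected {expected_sha256}, got {actual}")
    STORE[expected_sha256] = data
    return data

# The first fetch populates the store; after that, even if the URL's
# content changes upstream, the pinned hash keeps returning cached bytes.
h = hashlib.sha256(b"v1").hexdigest()
fetch_fixed_output("https://example.com/pkg.tar", h, lambda u: b"v1")
assert fetch_fixed_output("https://example.com/pkg.tar", h, lambda u: b"v2") == b"v1"
```

This is both the feature (reproducibility despite a changed upstream) and the footgun (a stale hash silently hides the change).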


Not having a required input, say when you try to reproduce a previous build of a package, is a separate issue from an input silently changing when you go to rebuild it. No build system can ensure a link stays up, only that what's fetched hasn't changed. The latter is what the hash in Nix is for: if it tries to fetch a file from a link and the hash doesn't match, the build fails.

Flakes, then, run in a pure evaluation mode, meaning you don't have access to stuff like the system triple, the current time, or env vars, and all fetching functions require a hash.


BuildKit has the same caching model. That's what I'm saying. It doesn't force you to give it digests like Nix functions often do, but you can (and should).


You can network-jail your builds to prevent pulling from external repos and force the build environment to define/capture its inputs.


Just watch out for built-at timestamps.
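The failure mode is easy to reproduce: if a build embeds the current time, two otherwise identical builds hash differently. A toy Python sketch of the usual fix via the (real) SOURCE_DATE_EPOCH convention from reproducible-builds.org; the `build` function here is a hypothetical stand-in for a compiler:

```python
import hashlib
import os
import time

def build(source: bytes) -> bytes:
    # Reproducible-builds convention: honor SOURCE_DATE_EPOCH if set,
    # instead of stamping the artifact with the wall clock.
    stamp = os.environ.get("SOURCE_DATE_EPOCH", str(int(time.time())))
    return source + b"\nbuilt-at: " + stamp.encode()

os.environ["SOURCE_DATE_EPOCH"] = "0"
a = hashlib.sha256(build(b"int main;")).hexdigest()
b = hashlib.sha256(build(b"int main;")).hexdigest()
assert a == b  # with the epoch pinned, the artifacts are bit-identical
```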


I’m curious why they didn’t deploy diagnostics in the field if they couldn’t replicate the failure in the lab.

Every few months for 7 years is a lot of opportunities to iterate on collecting field measurements. And it could be done in a holistic way that doesn’t break the safety certification.


Good question!

This equipment was deployed in remote places without any kind of connectivity, sometimes not even cell coverage.

But the real problem is that the frequency of this failure in any single device was much lower than that. There were hundreds of these devices deployed, and we never had one particular unit that was triggering all the time; it was sometimes here, sometimes there. Really a nightmare.


I’ve always thought that forums are much better suited to corporate communications than email or chat.

They're organized by topic, inherently threaded, and asynchronous by default. You can still opt in to notifications, and history is well organized and preserved.


The bullet points for using Slack basically describe email (and distribution lists).

It’s funny how we get an instant messaging platform and derive best practices that try to emulate a previous technology.

Btw, email is pretty instant.


If you work in a team, email is limited to the people you cc:, while a convo in a Slack channel can have people you didn't think of jump in* with information.

See the other point in the article about discouraging one on one private messages and encouraging public discussion. That is the main reason.

* half a day later or days later if you do true async, but that's fine.


I am neutral in this particular topic, so don’t think I’m defending or attacking or anything.

But aren’t mailing lists and distribution groups pretty ubiquitous?


But - from the people you actually want to get to contribute - emails come with an expectation of a well-thought-out text. IMs ... less so.

I've been working across time zones via IM and email since ... ICQ.

I'm probably biased by that, but I consider email the place for questions, lists, and long statuses with requests for comments, and for info that I want retained somewhere. IM, meanwhile, is a transient medium where you throw a quick question or statement or whine every couple of hours - and check what everyone else is whining about.


I have now been roped into talking more about a topic I have no interest in and am completely indifferent to… :/

But clearly, that’s cultural.

If you keep your eyes on the Linux kernel mailing list, you’ll see a lot of (on-topic) short and informal messages flying in all directions.

If you keep your eyes on the emails from big-tech CEOs that sometimes appear in court documents, you’ll see that the way they use email is the same way that I’d use Slack or an instant messenger.

That’s likely because it’s the tool they have available. We have IM tools that connect us to the people we need (inside the company), making email the only place for long-form content, which means it’s only perceived as being for long-form content.

But when people have to use something federated more often, it does seem like email is actually used this way.


I get it, email accomplishes a lot. But it "feels" like a place these days for one-off group chats, especially for people from different organizations. Realtime chat has its place and can also step into that email role within a team. All my opinion, none too strongly held.


C libraries have advertised "header-only" for a long time; it's because there is no package manager or dependency management, so you're literally copying all your dependencies into your project.

This is also why everyone implements their own (buggy) linked-list implementations, etc.

And header-only is more efficient to include and build with than header+source.


I never copied my dependencies into my C project, nor does it usually take more than a couple of seconds to add one.


There are a number of extremely shitty vendor toolchain/IDE combos out there that make adding and managing dependencies unnecessarily painful. Things like only allowing one project to be open at a time, or compiler flags needing to be manually copied to each target.

Now that I'm thinking about it, CMake also isn't particularly good at this the way most people use it.


Those are certainly bad vendor toolchains, but I want to push back against the idea that this is a general C problem. Even for the worst toolchains I have seen, dropping in a pair of .c/.h files would not have been difficult. So it is still difficult to see how a header-only library makes a lot of sense.


One of the worst I've experienced had a bug where adding too many files would cause intermittent errors. The people affected resorted to header-izing things. It was an off-by-one in how it constructed arguments to subshells, causing characters to occasionally drop.

But, more commonly I've seen that it's just easier to not need to add C files at all. Add a single include path and you can avoid the annoyances of vendoring dependencies, tracking upstream updates, handling separate linkage, object files, output paths, ABIs, and all the rest. Something like Cargo does all of this for you, which is why people prefer it to calling rustc directly.


People certainly sometimes create a horrible mess. I just do not see that this is a good reason to dumb everything down. A proper .c/.h split has many advantages, and in the worst case you could still design it in a way that makes it possible to "#include" the .c file.

I tried to use Cargo in the past and found it very bad compared to apt / apt-get (even when ignoring that it is a supply-chain disaster), essentially the same mess as npm or pip. Some Python packages have certainly wasted far more time of my life than all dependencies for C projects I ever had to deal with combined.


I consider it compiler abuse to #include a source file. Useful for IOCCC competitions though.

Apt is fine for local development, but it's a bit of a disaster for portability and reproducibility. Not uncommon to see a project where the dependencies either have unspecified versions whose latest versions in universe are incompatible, or where the package has been renamed and so you have to search around to find the new name. Plus package name conventions are terrible. libbenchmark-dev, libgoogle-perftools-dev, and libgtest-dev are all library packages from the same company. The second one is renamed to gperftools-lib with RPM repos, to further compound the inconsistency.

I find myself dealing with package and versioning rename issues regularly in the CI pipelines I have using apt.


Well, I consider it abuse of header files to put an implementation in there. And comparing including a c-file to IOCCC seems very exaggerated.

My experience with dependencies in Debian-derived distributions is actually very good, far less problematic than any other packaging system I have ever used. But yes, one needs to maintain dependencies separately for RPM and other distributions. Obviously, though, the problem is not the lack of a package manager, and adding another one would not solve anything: https://xkcd.com/927/ The solution would be standardizing package names and versions across distributions.


> And header-only is more efficient to include and build with than header+source.

Dispute.

This is C code. You can't just drop it in and build it; you have to write code to use it. You have to figure out the API to correctly use it. If memory is passed around by pointers, you have to understand the responsibilities: who allocates, who frees, who may touch what when.

In the first place, you have to decide whether to commit to that library that far; it might not be until you've done some exploratory programming with it that you want to scrap it and find another one.

The cost of adding two files versus one is practically nothing in consideration of the larger picture.

The separate header model is baked into the C mindset; the tooling readily supports it.

Many "header only" libraries are secretly two files in one. When you define a certain preprocessor symbol before including the file, you get the implementation bits. When you don't define that symbol, the header is a pure header.

That means you have to pick some existing .c file which will define the preprocessor symbol and include the header. That source file becomes a "surrogate" source file, taking the place of the source file the library ought to have.


1491 is a great book about the history of the Americas before Columbus.

https://a.co/d/03l04Lvv


"The Dawn of Everything" by David Graeber is a great, more recent alternative with a lot more context around the non-linear trajectory of history - the modern myths of linear progressive societal progress from savages, to agriculture, to cities and centralized technological futurism.

Graeber also explores the question of what defines a society, and how at certain points some groups of people defined their culture through "schismogenesis" - that is, in opposition to other groups.

It's a massive book, but really refreshing and full of delightful little anecdotes and footnotes all throughout.


I'll recommend Jungle of Stone - the story of explorers Stephens and Catherwood, the first Europeans to document and explore the sites of the ancient Maya.


Also look up "Cabeza de Vaca."

https://en.wikipedia.org/wiki/%C3%81lvar_N%C3%BA%C3%B1ez_Cab...

I found a book on his trip in a "little library," and was surprised they never mentioned this guy once in history class, at least enough for me to remember. Fascinating, sometimes funny story as well.


I second this recommendation!


Licensing generally applies only to the thing being licensed and not its output.

Otherwise all software written with a GPLv3 editor would also be GPLv3… or all software built with a GPLv3 compiler would be GPLv3. (Neither is true.)


That's because the output isn't a derivative work of the licensed software.


> No cooling necessary.

This is false: it's hard to cool things in space. Space (a vacuum) is a very good insulator.

There are 3 ways to cool things (lose energy):

  - Conduction
  - Convection
  - Radiation
In space, only radiation works, and it's the least effective of the three.
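To put numbers on how weak radiative cooling is, here's a back-of-the-envelope Stefan-Boltzmann estimate. The emissivity and radiator temperature are illustrative assumptions, and absorbed sunlight is ignored, so this is a best case:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w, temp_k, emissivity=0.9):
    """Radiator area needed to emit power_w (watts) at temp_k (kelvin),
    using P = emissivity * sigma * A * T^4 and ignoring absorbed sunlight."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Dumping 1 MW of waste heat from a 300 K radiator takes on the order of
# 2,400 m^2 of radiator area - and a datacenter dissipates many megawatts.
area = radiator_area(1e6, 300.0)
```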


> In space, only radiation works

It's worse: incoming radiation also heats up objects that are in sunlight in space. And you want to be in sunlight for the solar panels.

This is why the surface of the Moon is at -120 °C at night and +120 °C during the day.

And the sun's radiation also flips bits.

Yes, it's technically possible to work around all of these. There are existing designs for radiators in the shade of the solar panels. Radiation shielding and/or resistant hardware. It's just not even close to economic at datacentre scale.


Superconductors.


Magnets.

(We're just saying random physics things right?)


Could we use a constant stream of micro-asteroids as a heatsink?


i think so, next is Quantum right?


No, just you. Superconductors don’t get hot. There is 0 resistance in a superconducting medium. Theoretically you could manufacture a lot of the electricity-conducting medium out of a superconductor. Even the cheapest kind will superconduct in space (because it’s so cold).

Radiation may be sufficient for the little heat that does get produced.


> Even the cheapest kind will superconduct in space (because it’s so cold).

Space is not cold or hot; it's a vacuum. A vacuum has no temperature; objects in space reach temperatures set by radiative balance with their environment. This makes it difficult to get rid of heat. On Earth, heat can be dumped through phase change (evaporation), convection, or any number of other ways. In space, the only way to get rid of heat is to radiate it away.

Superconductors don't have any resistance, so heating from resistance isn't present. However, no practical superconducting computers have been created.

https://en.wikipedia.org/wiki/Superconducting_computing

And yes, it is really impressive - but we're also talking about one chip in liquid helium, on Earth. One can speculate about "what if we had...", but we don't. If you want to make up technologies, I would suggest becoming a speculative-fiction author.

Heating from the Sun would put the spacecraft well onto the warm side.

https://www.amu.apus.edu/area-of-study/science/resources/why...

> The same variations in temperature are observed in closer orbit around the Earth, such as at the altitudes that the International Space Station (ISS) occupies. Temperatures at the ISS range between 250° F in direct sunlight and -250° F in opposition to the Sun.

> You might be surprised to learn that the average temperature outside the ISS is a mild 50° F or so. This average temperature is above the halfway point between the two temperature extremes because objects in orbit obviously spend more time in partial sunlight exposure than in complete opposition to the Sun.

> The wild fluctuations of 500° F around the ISS are due to the fact that there is no insulation in space to regulate temperature changes. By contrast, temperatures on Earth’s surface don’t fluctuate more than a few degrees between day and night. Fortunately, we have an atmosphere and an ozone layer to insulate the Earth, protect it from the Sun’s most powerful radiation and maintain relatively consistent temperatures.

If you want solar power, you've got to deal with the 250 °F (121 °C). That is far beyond the operating range of superconducting materials. For that matter, even -250 °F (-156 °C = 116 K) is much warmer than the ~10 K range of superconducting chips.

Furthermore, the cryogenic material boils off in space quite significantly (I would suggest reading https://en.wikipedia.org/wiki/Orbital_propellant_depot#LEO_d... or https://spacexstock.com/orbital-refueling-bottlenecks-what-i... "Even minor heat exposure can cause fuel to boil off, increasing tank pressure and leading to fuel loss. Currently, the technology for keeping cryogenic fuels stable in space is limited to about 14 hours.") You are going to have significant problems trying to keep things at super conducting temperatures for a day, much less a month or a year.

Even assuming that you can make a computer capable of doing AI training using superconductors this decade (or even the next) ... zero resistance in the wires is not zero power consumption. That power consumption is, again, heat.

---

> Theoretically you could manufacture a lot of the electricity conducting medium out of a superconductor.

Theoretically you can do whatever you want and run it on nuclear fusion. Practically, the technologies you are describing are not viable on Earth, much less worth trying to ship a ton of liquid helium into space (that's even harder than shipping a ton of liquid hydrogen, especially since harvesting helium is non-trivial).

---

Computing creates heat. Maxwell's demon (and Landauer's principle) taught us that ANDing 1 & 1 down to a single bit erases information, and erasing information creates heat. Every bit of computation creates heat, superconductor or no. This is an inescapable fact of classical computation. "Ahh," you say, "but you can do quantum computation"... and yes, it may work... and if you can get a quantum computer with a kilobit of qubits into space, I will be very impressed.

---

One of the things that damages superconductors is radiation. On Earth we've got a nice atmosphere blocking the worst of it. Chips in space tend to be radiation-hardened. The JWST uses a BAE RAD750. The "750" should ring a bell: it's a PPC 750, the type in a Macintosh G3, running between 110 and 200 MHz (that is not a typo: MHz, not GHz).

High-temperature superconductors (we're not dealing with 10 kelvin here but rather about 80 kelvin, still colder than -250 °F) are very sensitive to damage to their lattice. As they accumulate damage they become less superconductive, and that causes problems when you've got a resistor heating up inside the cryogenic computer.

---

The superconducting-computer technology you're describing is in the lab, at best decades from being anything resembling science fact (much less a fact you can lift into space).


Right. You build your computers out of superconductors, and they don't get hot.

Sadly, they also don't compute.

> Even the cheapest kind will superconduct in space (because it’s so cold).

Is this a drinking game? Take a drink whenever someone claims that heat is not a problem because space is cold? Because I'm going to have alcohol poisoning soon.

Let's see how cold you feel when you leave the Earth's shadow and the sun hits you.


If/when we get high-performance superconducting computers, we wouldn't need to put the computers in space in the first place.


You've invented a room-temperature superconducting material? No?

Didn't think so.

Currently available superconductors still need liquid nitrogen cooling, meaning they're not feasible for in-orbit installations.

