Hacker News | orclev's comments

A large, complicated, well-maintained, and widely used library is infinitely preferable to a large, complicated library that you have to maintain yourself and that is used only by you. In a similar vein, a well-known standard format (or encoding) will always be a better choice than some ad-hoc format you create yourself. Not only will that standard have encountered and dealt with problems you haven't even considered, but there is also likely to be a plethora of libraries, frameworks, and tools that support it, whereas if you create something yourself you end up having to build everything you need from scratch.

Your time is generally better spent working on solving your core problem rather than the dozens of ancillary problems that end up needing to be solved along the way (particularly where a whole bunch of other people have spent a whole bunch of time already solving those problems).


> A large complicated well maintained and widely used library is infinitely preferable to a large complicated library you need to maintain yourself and used only by you.

Yes, but a large complicated well maintained and widely used library is not necessarily preferable to a small not so complicated library that does exactly what you need and nothing else. And that goes for formats too.

Recently I was involved in a project where order numbers had to be sent from one system to another. Some colleagues insisted that we bake them into a large XML document and then use libraries both to create the documents and to parse them. In this case the economical thing to do was to write them out, each separated by an EOL. Even the code we would have written ourselves would have been larger if we'd used the XML solution, not to mention everything that would have needed to be included in builds and deploys.


There's an important difference between using something because it's popular and using something because it's a standard designed for your problem. For something simple like just sending some numbers across the wire, XML is massive overkill (as an aside, there's actually very little that XML is a good solution for). CSV, TSV, JSON arrays, one of dozens of serialization formats, or even just a simple EOL-separated value like you proposed are all both standardized and very simple solutions to the problem. On the other hand, had they proposed inventing some new binary serialization protocol and using that to transmit the numbers, that would be even worse than using XML.

You should always pick the simplest solution to the problem that meets all your requirements, but when considering solutions you should favor standards compliant solutions. A common example is date formats. Lots of places roll their own date format string when sending dates, but using ISO-8601 will save you (and your clients) so many headaches in the long run.

Honestly, for your example, not knowing all the details I can't say for sure whether an EOL-separated value is a good solution, but based on just the description I probably would have gone with CSV, or possibly a JSON array. I definitely would not have used XML (dear god, why would anyone pick XML in this day and age?), although if they were concerned about needing to add more data down the line I could maybe see an argument for something a bit more involved than CSV.


The problem is that security questions are fundamentally flawed. Most of them are easily guessable with a little bit of research, and because they can often be used to bypass your password they're effectively a backdoor into your account. You're generally better off using them as either a backup password (that is, not guessable even given knowledge about you), or simply not using them at all. If you forgot your password then reset it via your e-mail account. In short, don't use security questions, they're fundamentally broken.


Another ELI5 by someone not at all qualified to understand this stuff (i.e., a layman).

We know from measurements of the leftover energy (the CMB) approximately how long ago the big bang happened, and how fast the universe appears to be expanding. Our most favored model of the universe's physics, General Relativity, makes certain predictions that mostly match up with reality until you get to galactic scales, at which point they start to diverge. In order for our measurements to work under General Relativity, our galaxies need to be more massive than they appear to be based on all the stuff we can actually see in them. This needed excess mass is called dark matter, but even with dark matter the universe appears to be expanding faster than it should. The theory proposed in the paper is that the universe as a whole isn't actually expanding faster, but due to a quirk of where we are in the universe it only looks like it is when we look at nearby galaxies. Unfortunately, for that to be true we would need to be in a void in the structure of the universe, which General Relativity predicts shouldn't be possible.

An alternative theory of universal physics exists called MOND. MOND is similar to General Relativity, but rather than solving the problem at the galaxy scale through theoretical dark matter, it instead just assumes that gravity works differently once you reach a certain cutoff point. This aligns with observations of actual galaxies (not entirely surprising, because the cutoff point was chosen to align with those observations) without needing dark matter to exist. From the perspective of the paper there's another nice property of MOND, which is that simulations based on it allow the kind of void to form that the paper predicts would be necessary to explain the locally observed expansion.

Basically, General Relativity can't explain how fast galaxies spin without Dark Matter, nor how fast the universe appears to be expanding. MOND, combined with our galaxy being in the middle of a big void, can explain both. Both theories, General Relativity and MOND, require a certain amount of hand waving in order to align with reality. MOND requires a bit less, but is highly suspect because its solution is basically "gravity just acts different sometimes," which is suspiciously close to "it's that way because it is."

As for the actual math involved in all of this, beats me, we'll need to wait for someone who's actually in this field to look it over and explain what if anything is wrong with it all.


So General Relativity is a theoretical framework that has been proven to match observation[0], while MOND's details have been chosen to match observation, without any theoretical basis? Is that right? Or is there some mechanism proposed as to why gravity's cutoff point is where it is?

[0]up to a certain point, and if you include Dark Matter, which we still don't understand (the original assumption was that it was WIMPs, but as we still haven't found a WIMP that would do this, then we don't know what it is).


So, first thing, I'm not an astrophysicist, nor even a physicist of any type, so I might be misunderstanding things here, but this is how I interpret all this. Hopefully, if I've gotten something significantly wrong, someone will correct it.

General Relativity matches observations to a point. The issue is that it stops matching observations once you reach galactic scales. In order to explain why that doesn't work you need to start hand waving, and the start of that is dark matter. MOND was thought up not so much as an alternative to General Relativity but as an alternative to dark matter. It tweaks some of the math used in General Relativity to assume that gravity behaves differently at different levels. Basically once you have a strong enough gravitational field it behaves like the gravity we know, but until you hit that point its effects diminish at a different rate. Doing that explains why galaxies behave like they do. For the bulk of the galaxy gravity is strong enough that it behaves exactly like General Relativity says it should, but out near the edges of the galaxy gravity has grown weak enough that it behaves differently. It's sort of hand wavy and leaves a bit of a bad taste in the mouth since there's no real explanation of why gravity should behave that way. On the other hand it doesn't require some phantom matter that we have no observational data to back up.

Either theory falls far short, and both of them require a lot of fudging around the edges to align with galactic scale observations, although MOND once you get past the arbitrary change to gravity seems to require less hand waving. Importantly for the linked paper it also seems to line up with the proposed theory and predict the kind of void the paper is predicated on which would be a strong point in favor of MOND.

Key to the proposed theory: General Relativity predicts that in the first moments after the big bang the universe was essentially uniform, that everything spread out more or less evenly, and that it wasn't until much later, when things like planets and stars started to form, that we began seeing significant variation in the matter distribution of the universe. MOND, on the other hand, allows for variation in that initial expansion. That's important for the paper because under the General Relativity model there simply isn't enough time for a void of the size their theory requires to form. MOND, by allowing more variability early on, does leave enough time for a void of the necessary size to exist.

Basically, General Relativity on its own doesn't work for things galaxy-sized and bigger. MOND on its own doesn't work at galactic cluster scales and above. The theory proposed in the paper could explain the discrepancy we see in the rate of expansion of the universe, but it doesn't seem to be possible under General Relativity, while it is possible under MOND. Both General Relativity and MOND rely on things not yet observed in order to match our observations once you zoom out far enough, and neither on its own can explain why the universe seems to be expanding faster than it predicts it should. The paper proposes one theory for that, but it's only possible with MOND.


As far as I understand the problem with dark matter is also that there isn't a single dark matter theory, but lots of different ones with a lot of very variable tunables. So the problem is that dark matter by itself isn't that predictive.


The term dark matter itself is kind of misleading in the first place. The math in general relativity just doesn't add up when applied to certain observations. They've essentially gotten 2 + 2 = 5. In order to fix that they just assume that one of those 2's was actually a 3 somehow, and the extra 1 it picked up got labelled as dark matter. In other words, dark matter is just a term for some missing numbers somewhere in the calculation. Based on the different places where the extra numbers might be included they think it's something with mass, but really that's just a guess based on the existing formula and the changes that would be necessary to make it match the observation.


thanks :)


Isn't there also observational data that disfavors MOND?

I remember it had something to do with galaxies colliding and the dark matter staying behind or something along those lines.


Not sure honestly. That might have something to do with the vHDM theory mentioned in the paper. I didn't really follow a lot of it, but I think (major grain of salt here) it's saying that that theory predicts some very light neutrinos exist and tend to collect inside of galactic clusters, but not galaxies themselves and that it's those neutrinos that make up the missing mass at universal scales that general relativity explains using dark matter.

Either case seems pretty hand wavy honestly. When it comes to galaxy and universe level physics it all seems pretty weak compared to the sort of particle physics and classical physics that we can actually measure and test on Earth. It's all just a bunch of theoretical math with relatively few actual measurements to pin it all down. I don't think we're anywhere near having a solid theory of the universe so it's mostly an exercise in trying to prove which theory is the least wrong at this point, rather than which one is correct.


The bullet cluster is an n of one and the dark matter estimates come from redshifting/lensing, which could have come from other sources in such an unusual scenario.

Some of the other crazy observations like ultradiffuse galaxies are, on recalculation, not as extreme as initially guessed, and predicated on an indirect empirical estimate of mass (number of globular clusters) with no mechanistic confirmation.


That's because there isn't one. Fish of any kind is generally considered to be seafood.


More generally labeled as Krab.


Yeah, we call it "kay-rab" to avoid confusion.


Because game developers mostly don't pick the backend, game engine developers do. The vast majority of game developers pick a game engine, and that drives most of their other technical decisions. There are really only a dozen game engines that have enough market share to matter, and a decent chunk of the biggest ones were built on top of DX for various reasons. OpenGL, while a great concept, was a fairly flawed execution for quite a while (it's gotten a lot better in the last 10 years or so), so I can at least partially understand why, in the past, someone who doesn't care at all about cross-platform support might have steered clear of it.


I certainly hope it doesn't come to that. Servo was really starting to show serious improvements in both memory usage and speed relative to Gecko and Webkit. It would be a shame to see Firefox devolve into just another rebranded Webkit browser, particularly since everyone loses when things devolve to a monoculture.


Your solution is the ideal and safest one, although in the interest of maximum flexibility, since the goal here seems more documentational than prescriptive, it could also be as simple as creating a type alias. In C, for example, a simple `#define UnitInterval float`, with actual usage like `float FuncName(UnitInterval accuracy)`. That conveys both the meaning of the value (it represents accuracy) and the valid value range (assuming, of course, that UnitInterval is understood to be a float in the range 0 to 1).

Having proper compile time (or runtime if compile time isn't feasible) checks is of course the better solution, but not always practical either because of lack of support in the desired language, or rarely because of performance considerations.


That's fair, but I do personally have a stance that compiler-checked documentation is the ideal documentation because it can never drift from the code. (EDIT: I should add: It should never be the ONLY documentation! Examples, etc. matter a lot!)

There's a place for type aliases, but IMO that place is shrinking in most languages that support them, e.g. Haskell. With DerivingVia, newtypes are extremely low-cost. Type aliases can be useful for abbreviation, but for adding 'semantics' for the reader/programmer... not so much. Again, IMO. I realize this is not objective truth or anything.

Of course, if you don't have newtypes or similarly low-cost abstractions, then the valuation shifts a lot.

EDIT: Another example: Scala supports type aliases, but it's very rare to see any usage outside of the 'abbreviation' use case where you have abstract types and just want to make a few of the type parameters concrete.


It's a question of what is meant by the term censorship. In the strictest sense, moderation and censorship are very often the same thing. If for instance, I post something terrible in a comment on here, and the administration of HN deletes that comment, then that's censorship.

However, when most people talk about censorship they're using it not in the strict sense, but rather as a shorthand for someone violating their First Amendment rights. That really only applies when a government entity does it, although people don't typically differentiate between the government and any large organization, even though the latter is legally allowed to censor you on its own platform or property.

There's a larger discussion that needs to happen with regards to censorship. There are two extremes at play here, on the one hand there's the absolute freedom stance of literally nothing censored (only example I can think of for this is maybe the dark web, but really everyone censors if only a little), even shouting fire in a crowded theater or posting child pornography. On the other extreme is the absolute censorship of someplace like China, where only permitted thoughts and expressions can be posted. The US and most of the rest of the world tends to fall somewhere in the middle.

The big struggle right now is that everyone has recognized that there's clearly some kind of problem. We're seeing unprecedented levels of misinformation, and frankly a weaponization of social media, both for profit and for international politics. I don't know that anyone has a good solution for how to address that problem, but the pendulum seems to be swinging towards a more censorship-focused response.


> other extreme is the absolute censorship of someplace like China, where only permitted thoughts and expressions can be posted. The US and most of the rest of the world tends to fall somewhere in the middle.

It's like other countries only exist as rhetorical devices for most of HN. If you actually used the fediverse you'd see that there are plenty of Chinese users on it criticizing the state. It's the Western fediverse users being censored for wrongthink this time. Even the creator of Mastodon straight up doesn't believe in free speech with regard to certain far-right beliefs.


Our economy has become so overwhelmingly top-heavy that the majority of the population is losing the ability to take part in it. The middle class used to sit at the mean income, but these days the mean has drifted so far from the mode that the distinction between middle class and lower class has all but vanished. Just taking a brief look at the 2014 census numbers, the mode income was centered around $22,000 a year while the mean was $75,000. Let that sink in. The largest percentage of the US population makes less than $25,000 a year.


The mode is meaningless with such a huge continuous range. A huge chunk of people are right at that income level because of minimum wage. 50% of the people still make more than 50k.

Look at a distribution graph to help visualize it: https://www.census.gov/library/visualizations/2015/demo/dist...


That chart is a great example of exactly what I'm talking about. Look at that 95th percentile line. It's LEFT of the center. 95% of the population is making less than HALF of the top earners.


Do you even know what point you’re trying to make? Because I don’t think this chart is saying what you’re trying to imply. All this chart shows is that there is a long tail of possible incomes to the right.

It doesn’t tell you how much that 5 percent is making in aggregate compared to the 95 percent. Put another way, you could stuff everyone on the right into a single >250k column and it wouldn’t be as tall as that 90th percentile column.

This chart doesn’t really say anything about wealth inequality (which is what “point” I think you were trying to make) and neither does the mode of the incomes.


Mode is a preposterous and arbitrary measurement to use. Median is what you’re looking for, but it doesn’t support your narrative as well.


Median is just the arbitrary midpoint between the two extremes; that doesn't actually mean anything useful to the average person. If you picked someone at random from the subset of the US population that reports an income (presumably the working population) and asked them what their income is, the most likely answer is the mode value, or in 2014 about $22,000. That's reality. That's why it is important.

The median income ($53,000) is literally pointless except as yet another indicator of how unbalanced the economy is. If the economy was perfectly balanced the median and mean would be the same, but they're nowhere near that. The median was $53,000, while the mean was $75,000. An incredibly large chunk of the US is making significantly less than a handful of massive earners. And that's not even factoring in all the dirty tricks that the richest use to hide their wealth like offshoring bank accounts and shifting most of their assets into capital gains.


Median is significant because half make more and half make less. That over half of America makes over 50k is very meaningful.

Mode is just the exact number that appears most often in the distribution. Salaries are a continuous range, except for the various minimum wages. That is why the mode is so low. It doesn’t tell you anything.


>and asked them what their income is, the most likely answer is the mode value, or in 2014 about $22,000. That's reality. That's why that is important.

FFS no. That’s not what the mode means. Mode does not imply that’s the majority probability, it’s just the single highest probability out of the possible outcomes.

Statistically speaking, if you asked random people what their income was, the majority would not have an income of $22k. Let that sink in. Your whole mental model of this is deeply flawed. “Mode” != “majority”.

Consider this distribution:

1,2,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16

What is the mode? Do you see why the mode is meaningless shit when discussing what the distribution looks like?


This information is only useful if it is contrasted against the number of people who want/need/are able to work. People like the retired, teenagers, stay-at-home parents, college students, etc. all may make some small amount of income each year but generally speaking don't work full time. This isn't evidence of people struggling. It is evidence of people exercising their personal rights which include not being a worker.


You're suggesting that the retired, teenagers, stay-at-home parents, and college students make up the majority of the US working population? Yeah, not buying that one at all. The US economy is fucked right now; we've been madly spinning the plate to try to keep it going, but it's starting to wobble badly. Unless something major is done soon, it's only a question of when, not if, things start to implode.


> You're suggesting that the retired, teenagers, stay-at-home parents, and college students make up the majority of the US working population? Yeah, not buying that one at all.

Nowhere did I say this. What are you talking about?


My point was that the mode income, that is, the income reported by the largest number of people, was incredibly low. You claim that's because there are a bunch of people that don't really need/want to work, so they don't make much. That implies that that group represents the majority of the working population (in order to be the mode income). Your point might have been correct if we were talking about the mean income being very low, but in this case it's exactly the opposite.

