I bought one for coding. I no longer can read text for an extended period with lower DPI monitors.
There are a few reasons why.
1. Non-integer scaling sucks on Linux. It sucks on all OSs, but it’s just unusable on Linux IMO.
2. Text antialiasing sucks on most OSs; Microsoft’s ClearType is the exception, but I hate Windows.
I currently use an old LG UltraFine 5k. It’s physically cracking / falling apart and has atrocious burn in. I tried replacing it with a Dell 32” 6k monitor. The stupid matte coating on it and seemingly all other high DPI screens is so, so bad.
I just want glossy 5k at 120Hz with decent build quality and color accuracy.
I don’t even like the fact it’s mini LED. Blooming is pretty awful on those things. I hope there’s a way to turn it off.
So yes, I bought a $3,500 monitor to read text because no one else can seem to do it even remotely acceptably.
> this generated function is not understood completely
I think this kind of stuff is OK for the most part. I think it's a thrilling part of computer science: building systems so complex they're just on the brink of what can be fully understood by a single person. It's what sets software engineering apart from other engineering fields where it's unacceptable not to fully understand the engineering, say, for factories, buildings, bridges, ships and infrastructure and such.
Aim to make the road laminar. Every time you hard brake, you're causing the milk jug to glug, making a ripple of entropy as momentum turns to heat from your brakes and those behind you, sometimes in perpetuity. I learned this while doing a 1.5-hour daily commute in a Subaru with a clapped-out manual transmission. I wanted to conserve energy while shifting, but realized I was now participating in a large choreographed dance of "smooth" with other drivers who already knew this. There are many of us. And we all glare at the driver blinking their red lights on the interstate, indicating that they're loud and proud of introducing turbulence to an otherwise peaceful system.
I try to do the same, and do my part to smooth out the wrinkles in traffic.
What I would really like in a car is not only my current speed, but also my speed relative to the car ahead of me. Given that my car has cameras and other sensors for cruise control and other features, this ought to already be possible.
This is the natural response to tapping the brakes for any slowdown: people naturally overcompensate, and chain reactions happen behind them. This video shows how stop-and-go traffic forms and snowballs with no real impetus beyond misestimated following distance.
What this video also shows is that if people pull off more or less at the same time and speed as the car in front, then this is a complete non-issue. Unfortunately, most people are unable to watch a few cars ahead and be ready to pull off when the car in front does, and so we have this situation where each car takes around 5-10 seconds to pull away. Multiply this by hundreds or thousands of cars and you end up with a phantom traffic jam.
I always pull away at the same speed as the car in front of me and maintain the same distance as when we were stopped. It is very easy to do and completely eliminates traffic build up if multiple people do it at the same time.
This is the same reason that we have amber lights on traffic lights: so that drivers have time to get into gear and start pulling away, so that when the light goes green they are immediately travelling through it, causing no excess traffic build-up at the lights. Again, unfortunately, people don't concentrate when they are stopped at lights, and so you have the situation where they see the light go green and only then start changing into gear and releasing the handbrake. By the time they are moving through the green light, they have already taken 10-20 seconds of green light time, eating well into the time allotted for cars to be travelling across the junction.
The only thing which will solve this is driverless cars, meaning that the cars can all talk to each other and move at the same time like a chain. I welcome this advancement to eliminate human error in driving and get rid of traffic jams for good.
Of course braking/change in velocity creates waves. But this effect is overemphasized in my opinion. Locally analyzed, traffic can be simplified incredibly by observing that a lane's maximum throughput is simply given by following spacing, measured in time.
If drivers are using a 2 second following distance, commonly taught in driving school, then max throughput is simply
1 car / 2 sec
If you double following distance, you halve the throughput. If you halve following distance, you double your throughput. The throughput of a (full, i.e. rush-hour) road has nothing to do with speeds of people driving, and everything to do with following distance.
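A quick sketch of that claim (assuming, as in the formula above, that the headway is the full per-car interval, front bumper to front bumper):

```python
def lane_throughput(headway_s: float) -> float:
    """Cars per hour for a lane where every driver keeps the same
    time headway (front bumper to front bumper)."""
    return 3600.0 / headway_s

# Doubling the headway halves the throughput, independent of speed.
print(lane_throughput(2.0))  # 1800.0 cars/hour
print(lane_throughput(4.0))  # 900.0 cars/hour
```

Speed never appears: at any steady speed, one car passes a fixed point every headway interval.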
This assumes that a 2 second interval is appropriate for all travelling speeds.
This assumption is untrue at very low speeds, particularly when it takes longer than 2 seconds for a car to pass a point. For instance if we assume cars are 4m long, then with an interval of 2 seconds the cars would be touching at 4.47mph
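To check that figure (a sketch, assuming 4 m cars and the interval measured front bumper to front bumper): the front bumpers are v × 2 s apart, so the cars touch when that spacing equals one car length.

```python
CAR_LENGTH_M = 4.0   # assumed car length
HEADWAY_S = 2.0      # front-bumper-to-front-bumper interval

# Cars touch when spacing (speed * headway) equals one car length.
touch_speed_ms = CAR_LENGTH_M / HEADWAY_S            # 2.0 m/s
touch_speed_mph = touch_speed_ms * 3600 / 1609.344   # metres/s -> miles/hour
print(round(touch_speed_mph, 2))  # 4.47
```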
The assumption is also untrue at very high speeds. You'll want a larger gap. That's partly because at such high speeds the ability of a vehicle to decelerate differs - if a vehicle with good brakes does an emergency stop and the car behind it has a respectable 2 second gap but has worse brakes then they can end up colliding. It's also partly because a 2 second gap at very high speeds means the car in front is further away, and that can cause a greater delay before the driver realises what is happening. As a third reason a greater margin needs to be used at very high speeds simply because the consequences of a crash are that much greater and should therefore be avoided even more than at lower speeds.
Therefore there is a kind of U-shaped curve in the "safe" following interval, and consequently a speed at which safe throughput is maximised.
That's why variable speed limits have been introduced in various places. For instance, in the UK which normally has a 70mph speed limit on motorways, in very high traffic conditions this can be lowered using electronic signs to increase the safe throughput of the road. It's commonly reduced to 50mph, though it does get lowered further in sections approaching a queue of vehicles that has actually stopped.
There's also the issue of speed oscillations. With a high speed limit and vehicles following too closely, a little variation in speed in one vehicle can turn into a larger variation in the following vehicles, causing a backwards-travelling wave of braking (sometimes to an absolute halt) and speeding up again. Lowering the speed limit reduces this.
By 2 sec following distance I am referring to their back bumper to your front bumper, so cars "overlapping" in your example is not possible.
If you want a 4 sec gap at higher speeds, that's fine; the formula is speed-independent for throughput, not speed-independent for following distance. If you want 4 seconds at high speed, then use 4 sec instead of 2 sec (i.e. 1 car / 4 sec).
>There's also the issue of speed oscillations. With a high speed limit and vehicles following too closely, a little variation in speed in one vehicle can turn into a larger variation in the following vehicles, causing a backwards-travelling wave of braking (sometimes to an absolute halt) and speeding up again. Lowering the speed limit reduces this.
"Lowering the speed limit reduces oscillations." Exactly, that is my whole point, that (again, locally analyzed) you can ignore the waves, and instead look only at the following distance of the slowest car in the lane, to determine throughput of the road behind that car. Your idea of "lowering the speed limit" to eliminate waves is the same net effect on throughput as observing that the throughput cannot exceed that given by the longest-following car on the road.
A niggle - if you are referring to a 2 second gap between the back bumper and the following front bumper, then the formula is no longer speed independent, as you need to add the small overhead to account for the time taken for the length of the vehicle to pass as well. This will be small enough to be mostly negligible except at low speeds.
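The size of that overhead is easy to quantify (a sketch, assuming 4 m cars and a 2-second back-bumper-to-front-bumper gap): the true per-car headway is the gap time plus the time the car's own length takes to pass, L/v.

```python
CAR_LENGTH_M = 4.0  # assumed car length
GAP_S = 2.0         # back bumper to following front bumper

def throughput_cars_per_hour(speed_ms: float) -> float:
    # Full headway = gap time + time for the car's own length to pass a point.
    headway_s = GAP_S + CAR_LENGTH_M / speed_ms
    return 3600.0 / headway_s

for v in (5.0, 15.0, 30.0):  # speeds in m/s
    print(f"{v:>5.1f} m/s -> {throughput_cars_per_hour(v):.0f} cars/hour")
```

The L/v term shrinks as speed rises, so throughput approaches the speed-independent 1800 cars/hour; the correction only matters at low speeds, as noted above.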
If you take this measurement as the goal, it would be best at near-standstill speed (around 4 m/s). If you want to maximize the travelled distance of the group, it is around 60 kph, which is the metric most people actually care about when discussing throughput.
1 car / (actual max following time), no matter the speed. Where do you see speed in the equation? It's not there. You can put it in, of course, but my point is that it's unnecessary if you know following distance, which is theoretically more invariant than speed on a road anyway.
> If you double following distance, you halve the throughput. If you halve following distance, you double your throughput.
That postulate breaks down as soon as you move away from a laminar traffic assumption and include distracted drivers, lane changes, and weather influences. Which is why the wave theory model is important to understand the propagation of perturbations and their effect on maximum throughput.
> The throughput of a (full, i.e. rush-hour) road has nothing to do with speeds of people driving, and everything to do with following distance.
And yet, in the limit case of a bumper-to-bumper situation (or, in fluid dynamics parlance, an incompressible flow), the variable determining the change in mass flow-rate is the velocity of the medium. Mimetically, we could also look at ants. To ease congestion in a bumper-to-bumper situation, they accelerate.
YES to all! You're so close. Drivers do not accelerate in bumper-to-bumper the way ants do. They maintain a 2sec (or whatever they are trained) following time instead. Which therefore dictates velocity (car lengths per following time). Thus the limiter on flow-rate is actually following time!
THIS this so much. Look far out ahead and if you see traffic compressing then slow down sooner so as to try to make that compression vanish for those behind you.
The tradeoff for the compression 'vanishing' is longer time pressing your brakes, travelling slower overall, and leaving your engine running for more of the time.
Also, you then just leave a bigger gap in front of you for somebody to jump into, forcing you to brake more and drop further back to maintain your distance. This in turn just winds up the drivers behind you, who end up overtaking you. All this chaos because you think you are helping by 'reducing compression'.
In heavy traffic I much prefer to quickly catch up to the car in front and then sit stationary with my engine off. Much more efficient and less polluting than spending the whole time with your engine on, managing gaps and braking distances at low speeds.
> The tradeoff for the compression 'vanishing' is longer time pressing your brakes, travelling slower overall, and leaving your engine running for more of the time.
What? Your travel time does not get longer, and if the traffic is merely slow then this "leaving your engine running for more of the time" is nonsense. Even if it's stop and go and you have a car that kills the engine every time you come to a dead stop, that will use more gas because you're making inefficient use of the electrical system (charging and discharging the battery more than you should have to).
> Also you then just leave a bigger gap in front of you for somebody to jump into,
Sure, so as I said earlier, you have to pay attention and not allow that.
> In heavy traffic I much prefer to quickly catch the car in front up and then sit stationary with my engine off. Much more efficient
Doubt.
And besides doubting your claim about efficiency, you're doing exactly that which most helps the pressure wave endure, and thus you're causing more delays for more people, and more people to have to brake hard, and more engine stop/start cycles, all of which means more pollution overall not less, etc.
IMO there is nothing worse than drivers on highways who think that by leaving bigger gaps and slowing less rapidly they are 'smoothing' out the traffic and helping the whole road run better. You are not, you are just making the whole road run slower and taking up more space for yourself on a crowded road.
This is untrue; there have been studies showing that the primary cause of traffic is following too closely. Following further away really does reduce the amount of traffic. It seems unintuitive, but consider: every time you brake, the person behind you has to brake a teensy bit longer. And then the person behind them, and behind them. That little bit of extra time accumulates quickly and grinds the highway to a halt.
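A toy sketch of that accumulation (the specific numbers are assumptions for illustration, not from any study): if each following driver, reacting late, has to brake slightly longer than the car ahead, the slowdown grows linearly down the line.

```python
# Toy model: the lead car brakes for 1 second, and each following
# driver has to brake a bit longer than the car ahead of them.
LEAD_BRAKE_S = 1.0
EXTRA_PER_CAR_S = 0.2  # assumed per-driver overreaction

def brake_time(cars_back: int) -> float:
    """Seconds of braking for the car N positions behind the lead car."""
    return LEAD_BRAKE_S + cars_back * EXTRA_PER_CAR_S

# 100 cars back, a 1-second tap has grown into a 21-second slowdown.
print(brake_time(100))  # 21.0
```

Once the required slowdown exceeds the time it takes to reach zero speed, cars far enough back come to a complete stop: a phantom jam from a single tap of the brakes.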
Exactly, and that same theory also applies to long, slow braking as well.
What you are referring to is the well-known phenomenon of traffic waves, or phantom jams. The UK has all but eliminated them with variable speed limits, without the need to make everyone leave a half-mile gap between cars.
We have variable speed limits on parts of our roads. People commonly exceed the stated limit by 20+ mph since they're used to "full speed" and ignore the instances where it's being reduced.
I welcome our future robot car overlords where all of these problems should in theory be greatly reduced or eliminated entirely.
Where I live it is impossible to exceed the variable speed limit because every other car on the road is doing it.
I agree, a full network of self driving cars which can all move together in a chain will eliminate this problem. I just hope I live long enough to see it.
However, purposely making the cars behind you brake more and causing more compression further back, just so that you can avoid the compression in front of you, is madness.
I recently installed a mini split heat pump in a detached accessory building. The installer upsold me on a more expensive unit because I’d get federal refunds due to its higher SEER rating. Ok, sure: higher efficiency, same price.
In fact, efficiency was the main reason I wanted a mini split in the first place. It just bugged me to _not_ pump the heat entirely outside the structure. And I paid a bit more for that versus just using a window unit or “portable” AC. All we’re talking here is the location of the condenser coil: inside versus outside. It just makes sense to put it outside, with just a small penetration in the building.
Well, during electrical inspection apparently I paid too much. After paying more than a certain threshold for converting an unconditioned space to a conditioned space, I now need to insulate the accessory structure to a certain degree in order to pass code.
The kicker is, the only way I can insulate the space to meet code is with polyiso rigid foam board, because the structure is so small. So, I guess in an effort to be “green” according to local government, I need to rip out the mineral wool insulation, dump it, and replace it with foam board. Or put the mini split in the dump and buy a cheaper, less efficient unit like a window unit.
I’d save approximately $0.30 a year on energy costs to insulate to code versus what I have now with the mini split.
This whole industry is stupid and that’s because it’s regulated by idiots.
Yes, exactly. The demo of Gemini's Diffusion model [0] was really eye-opening to me in this regard. Since then, I've been convinced the future of lots of software engineering is basically UX and SQA: describe the desired states, have an LLM fill in the gaps based on its understanding of human intent, and unit test it to verify. Like most engineering fields, we'll have an empirical understanding of systems as opposed to the analytical understanding of code we have today. I'd argue most complex software is already only approximately understood even before LLMs. I doubt the quality of software will go up (in fact the opposite), but I think this work will scale much better and be much, much more boring.
On the one hand you are right, there are military money and people involved in essentially every high-tech firm and university project in the USA.
On the other hand, a lot of the time this is done as an easy way of subsidizing a potentially useful civil technology without going into the weeds of Congress and public discussions about how to allocate grants and so on. The military budget acts partly as a discretionary fund that the US government can use to fund (civil) R&D as it sees fit without as much red tape.
Noam Chomsky wrote about this from his experience at MIT. Especially in the 70s and 80s, MIT was getting lots of grant money from defense spending, and every professor could access it by simply putting some fictitious "possible military applications" on their research into shortest-path algorithms or what have you.
Of course, this plays it both ways, because providing the money, even as a thinly veiled subsidy, also allows you to come back later and assert some control if it does turn out it could be beneficial for "defense" purposes.
Yeah, more or less. The US uses the DoD as its private-sector R&D funding firehose. They aren't writing Raytheon an $880B check and saying "make us some missiles." They might write a $4B check for some missiles, but that leaves $876B left over, based on 2023's defense spending numbers.
Most of the money goes to stuff you wouldn't even think of as "military equipment." Stuff like medical devices, security, communications, networking, search-and-rescue, and so on. Morally neutral-to-good things that the military needs, but so does everyone else.
As a business, the difference between the DoD and the rest of the market is that the DoD is a single institution with the budget and willingness to bankroll your R&D. Sometimes it's the only feasible way to fund development of a genuine, morally good product.
Classic example of academic research funded by the military.
> In 1985, the wreck was finally located by a joint French–American expedition led by Jean-Louis Michel of IFREMER and Robert Ballard of the Woods Hole Oceanographic Institution, originally on a mission to find two nuclear Cold War submarines.
thanks a TON for this recommendation it was really eye-opening listening to his presentation and the detailed history, as well as the cat-and-mouse game escapades. Very well put-together presentation and yeah, really makes me reflect on the specialness of Silicon Valley and those relationships.
I had an uncle, an electrical engineer, who went there for higher studies, to Stanford, and got his PhD (a typical route for many talented Indians, at least up to the MS; the doctorate is less common). He was mentioned in Who's Who in Electrical Engineering, worked there, married an American woman, became a US citizen, and became a hardware entrepreneur in Sunnyvale, California. But I didn't hear about this from him; I just read it on the net.
> In fact, we didn’t found Tailscale to be a networking company. Networking didn’t come into it much at all at first.
I always just assumed they were building some kind of logging software (“tail”scale), used Wireguard to connect hosts, and just kind of stopped there. Don’t get me wrong, Tailscale is a nice way to connect machines. It’s nice because Wireguard is nice.
This long blog post (by the now-CEO of Tailscale), if you skip to the end, describes that parent’s hypothesis is basically exactly correct.
> Update 2019-04-26: Based on a lot of positive feedback from people who read this blog post, I ended up starting a company that might be able to help you with your logs problems. We're building pipelines that are very similar to what's described here.
Update 2020-08-26:
Aha! Okay, for some reason this article is trending again, and I'd better provide an update on my update. We did implement parts of this design for use in our core product, which is now quite distinct from logs processing.
After investigating the "logs" market last year, we decided not to commercialize a logs processing service. The reason is that the characteristics we want our design to have (cheap, lightweight, simple, fast, and reliable) are all things you would expect from the low-cost provider in a market. The "logs processing" space is crowded with a lot of premium products that are fancy, feature-filled, etc., and reliable too, and thus able to charge a lot of money.
Instead, we built a minimalistic version of the above design for our internal use, collecting distributed telemetry about Tailscale connection success rates to help debug the network. Big companies can also use it to feed into their IDS and SIEM systems.
We considered open sourcing the logs services we built (since open source is where attributes like cheap, lightweight, etc tend to flourish) but we can't afford the support overhead right now for a product that is at best tangential to our main focus. Sorry! Hopefully someday.
Wireguard by itself is good, but it isn't nice. Tailscale is nice because it builds on top of Wireguard (which is good) and adds UX stuff (which makes it nice).