> And, for any employees still at Twitter, don’t underestimate the power of a pocket veto.
This is something I've been repeating to some of my younger colleagues.
Engineers aren't really fungible resources, to the extent that these projects require. Ask any manager how easy it is to swap "allocated resources", and they'll probably sigh heavily.
People are afraid that if they don't follow their manager's every request, they will be fired. But remember that hiring is hard, and managers are loath to fire someone they've already spent so much effort finding, hiring, and onboarding. Finding someone else to do it can take weeks, months, or longer! Which in many cases risks killing the project altogether.
Even if you're at the bottom of the chain, as the person who does the actual implementation, you have a lot of power on what gets prioritized.
Reminds me a little of the story [1] about how in 2005 the execs at Google had a meeting to figure out what to call "Satellite View" in Google Maps. One faction did not like the name "Satellite View" because it was technically incorrect as many of the images had been taken from airplanes, not satellites. But the proposed alternatives like "Aerial photography" all sounded awkward. Right before the meeting ended Sergey Brin decided it would be called "Bird Mode."
Later on when the engineering team was actually implementing it they thought Bird Mode sounded dumb and just called it Satellite View. And so it has been ever since.
An object on a suborbital trajectory is by definition not a satellite.
As a practical matter, there's a differing relationship with atmosphere. Planes depend on air to produce lift and sustain flight, but satellites are either inconvenienced by air, or entirely unaffected by it.
Altitude, atmospheric effects, and relative angular velocity are all factors in photography. Imaging from orbital platforms is also cheaper than airborne reconnaissance (per square meter, although the up-front capital investment is greater), covers a wider variety of purposes, and you don't have to worry about airspace violations; however, it may provide poorer definition, especially the more affordable commercial satellite imagery, and cannot compensate for cloud cover. So the distinction is significant on technical, operational, financial, and political levels.
Compensation for cloud cover in the visible spectrum is achieved by just picking an image from some of the next satellite passes. Also, there are active illumination imaging instruments (e.g. SAR) that can penetrate through clouds and see at night.
Atmospheric correction, however, is a real issue and often results in distinct patches on the "satellite" view.
Well, with the same camera, you get 100 times higher resolution from 1 km than from 100 km. But a satellite in a polar orbit overflies the whole Earth, typically, every few weeks (though keyhole-type satellites only photograph a very narrow track) while in many cases the only available aerial imagery is years or decades old. And a satellite can fly over restricted airspace (the only way to stop it from doing so, even for its owner, would be to blow it out of orbit) while doing that in an airplane is likely to get you thoroughly murdered and possibly result in a diplomatic incident.
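The 100x figure follows from a simple pinhole-camera model, where the ground footprint of one pixel (the "ground sample distance") scales linearly with altitude. A rough sketch; the camera numbers are illustrative assumptions:

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    # Ground footprint of a single pixel for an idealized pinhole camera:
    # GSD = altitude * pixel_pitch / focal_length
    return altitude_m * pixel_pitch_m / focal_length_m

# The same hypothetical camera in both cases: 5 um pixels behind a 500 mm lens.
aerial = ground_sample_distance(1_000, 5e-6, 0.5)     # from 1 km
orbital = ground_sample_distance(100_000, 5e-6, 0.5)  # from 100 km
print(aerial, orbital)  # 0.01 m/px vs 1.0 m/px: a 100x resolution gap
```

Real systems complicate this with diffraction limits, atmospheric seeing, and motion blur, but the linear altitude scaling is the dominant effect.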
The result is that satellite photographs are much more frequent and have much better coverage, while aerial photographs have much higher resolution. The dishonest naming of the Google Maps feature has given people extremely unrealistic expectations of what satellites can do, which results in difficulty in selling actual satellite photography products when they don't match what people have come to expect from GMaps.
You can, and satellite optics typically are a lot bigger than aerial photography optics, but the wavelength of light and the sizes of satellites you can afford to launch still impose a practical limit. For US companies, laws impose another limit.
You can, and there were/are 7-8 Hubble-sized telescopes in orbit, with somewhat different optics and sensors, looking in the other direction. Most likely the same is true for siblings of JWST.
The further you get from the object you're photographing, the closer your photo gets to an orthographic projection instead of a perspective projection.
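You can put a number on that with similar triangles: the "relief displacement" of a raised point (how far its top appears shifted from its base on the ground) shrinks toward zero as the camera recedes. A small sketch with made-up numbers:

```python
def relief_displacement(cam_height_m, offset_m, obj_height_m):
    # Camera straight overhead at (0, cam_height); a vertical object of height
    # obj_height stands offset_m from nadir. The ray through its top meets the
    # ground at offset * cam_height / (cam_height - obj_height), so the
    # apparent shift of the top relative to the base is:
    return offset_m * obj_height_m / (cam_height_m - obj_height_m)

# A 100 m tower standing 500 m from the point directly below the camera:
print(relief_displacement(1_000, 500, 100))     # aircraft at 1 km: ~55.6 m of "lean"
print(relief_displacement(500_000, 500, 100))   # satellite at 500 km: ~0.1 m
```

As the camera height goes to infinity the displacement goes to zero, which is exactly the orthographic limit.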
The first incarnation of Google maps used low resolution Landsat imagery for most of the US. Massachusetts stood out distinctly with a different color palette because they had a public dataset of higher quality aerial imagery for the whole state.
Going down the captain-pedant conversation path here, but technically all satellites also need to burn energy to stay in orbit, or they will eventually fall. The only ones that don't are those that have achieved escape velocity.
Some Lagrange points are stable, so an object there will not decay toward the Earth absent other factors. (Because these systems are never perfectly isolated, over the very long run they do still require energy, though much, much less.) Though, of course, an object at a Lagrange point may still not technically be a satellite of Earth. https://solarsystem.nasa.gov/faq/88/what-are-lagrange-points (though by the NASA definition above I'd argue that they are)
To further add some (maybe helpful) pedantry, the boundary between an airplane and a satellite is usually taken to be the point at which the velocity required to remain aloft via aerodynamic lift exceeds the orbital velocity if there were no atmosphere.
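That crossover is essentially the Kármán criterion. A very crude numerical sketch, assuming an isothermal exponential atmosphere and an arbitrary wing loading; the exact altitude depends heavily on those assumptions, which is why quoted values range from roughly 80 to 100 km:

```python
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m
RHO0, H_SCALE = 1.225, 7_500.0  # sea-level density (kg/m^3), scale height (m)

def density(alt_m):
    # Crude isothermal exponential atmosphere.
    return RHO0 * math.exp(-alt_m / H_SCALE)

def lift_speed(alt_m, wing_loading=5_000.0, lift_coeff=1.0):
    # Speed at which 0.5 * rho * v^2 * CL equals the wing loading (N/m^2).
    return math.sqrt(2 * wing_loading / (density(alt_m) * lift_coeff))

def orbital_speed(alt_m):
    # Circular orbital speed at this altitude, ignoring the atmosphere.
    return math.sqrt(MU / (R_EARTH + alt_m))

# Lowest altitude (scanned in 500 m steps) where flying requires orbital speed:
crossover = next(a for a in range(0, 200_000, 500)
                 if lift_speed(a) >= orbital_speed(a))
print(crossover)  # tens of kilometers: the right ballpark for the Karman line
```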
No, they don’t. In the absence of drag, which only the lowest satellites have, they just stay up there forever. The fuel is needed for orbit changes and correcting drift due to gravitational instabilities.
IANAP ("I am not a physicist"), but any two objects in orbit around their common center of gravity are slowly radiating energy into space in the form of gravitational waves. This is why LIGO reports its chirps. Of course, this isn't very much energy, but given enough time all orbits should collapse.
I am a physicist. The gravitational energy loss from planet + satellite scale orbiting bodies is so small as to be orders of magnitude less than, say, the influence of gravitational anomalies like the Himalayas, or the tidal pull of the Moon.
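To put "orders of magnitude" in perspective: the Peters (1964) formula for the gravitational-wave inspiral time of a circular orbit gives an absurdly long timescale for an Earth satellite. The satellite mass and orbit below are rough illustrative numbers:

```python
G, c = 6.674e-11, 2.998e8            # SI units
m_earth, m_sat = 5.972e24, 1_000.0   # Earth and a 1-tonne satellite
a = 7.0e6                            # orbital radius in m, ~600 km altitude

# Peters formula for circular-orbit decay time under gravitational radiation:
# t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
t_seconds = 5 * c**5 * a**4 / (256 * G**3 * m_earth * m_sat * (m_earth + m_sat))
t_years = t_seconds / 3.156e7
print(f"{t_years:.1e} years")  # vastly longer than the ~1.4e10-year age of the universe
```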
Are any of them truly free of drag? Like, are there 0 molecules of atmosphere at some height, or just entirely negligible amounts of atmosphere for all practical purposes?
Earth orbits around the Sun and the Sun orbits around the center of the galaxy so from a certain perspective all pictures humans have ever taken are satellite pictures.
"Aerial" isn't directly related to airplanes. It means in the air. Unless you consider low earth orbit outside of the atmosphere, which is somewhat debated.
It's disturbing that execs are wasting time in a meeting over a low level decision like that. Leave it to the product manager and designer to figure out.
That's textbook bike shedding, and maybe it's why they seem to struggle with actual important things, like dealing with search spam.
One of my proudest moments was about seven years ago. I was two years into my career as a junior software engineer with no academic background in programming. I was by any measure an impostor and I worked very hard to learn and impress and earn the luck I was given with that job.
A PM, one whom I liked and wanted to impress, came to me asking for help getting git commit history for each person on our wider team “to measure how productive everyone is being.”
Despite being anxious about “what-ifs” like being blacklisted or some other concepts I knew nothing of, I gently explained why it would be a bad metric. I remember even saying, “some of the best engineering someone can do is to write negative lines of code.” I felt so wise despite being so green.
He pressed the matter, and I calmly said that I'd said my part and would play no role in it.
I asked around weeks later and apparently he approached nobody else and the issue was dropped.
Maybe this is a mundane anecdote or I’m not telling it properly but I’m still so proud that I was even capable of seeing the ethical dilemma, let alone acting correctly on it. Those years were full of “I have no clue what normal looks like in this industry.”
I feel somewhat confident in saying that experience emboldened me to do the right thing even if it was scary. Sometimes I worry that I fly too close to the sun with my attitude of “you won’t fire me.” But so far it’s worked.
it's also a bad metric? cleanup is nice but # of lines is still not a good measure of good code or even good.. anything. i went a month or more without writing any code because i was just writing docs
They are terrible metrics. To use a worn aphorism: Measuring productivity by lines of code is like measuring progress on an airplane by weight. Obviously using "more weight" as a metric is bad because it will result in a plane that won't fly. Slightly less obviously "less weight" is also a bad metric, because you'll end up with a plane made from paper and twigs where the wings will shear off if you look at it wrong.
You want to aim for a plane with the right weight, but you only know what that is by working out the entire design. Similarly you want to aim for the right number of lines in your code base, but you can only know what that is by working out the entire design.
# of lines of code is a super-valuable metric when used relatively, to compare two different things in software.
For example, I worked on a .NET project with a central form that had close to 30K lines of code. That was 10x anything else in the app: clearly a "whale" of a problem.
The lines-of-code comparison made it much easier for non-technical staff to understand why this "one form" (really dozens of different functions contained in a single form) was troublesome in terms of fixes, enhancements, etc., and also why no one wanted to touch it.
Parent explained in a metaphor how both metrics (more lines, less lines) can be bad.
Maybe engineer A has an easy feature to do: lots of lines of code, but it's smooth sailing. Another engineer, B, has a tricky bug fix to do, which requires him to read documentation, navigate the code, and reproduce the issue, until he publishes a fix with a handful of lines of code.
Who's the better engineer, A or B?
Even if we consider the average over time, one may be getting trickier features than the other. Or spending time on tasks such as hiring, mentoring, etc., which is worth a lot to companies.
Maybe it helps you understand if you think about how easy they are to game. You could just as well create useless lines of documentation as you could create useless lines of code.
Goodhart's law says:
"When a measure becomes a target, it ceases to be a good measure".
I believe that you maybe misread me, which usually means that I haven't made myself clear enough.
I'm not saying "LoC is a bad metric because it can be gamed". Most metrics can, if you work hard enough.
I'm saying "LoC is a bad metric because it can be gamed by a child within a couple of minutes".
It's the difference between a lock made of a tin sheet and a proper heavy-duty steel lock. People like the Lockpicking Lawyer can still pick the latter, but the former is so weak that it should never be relied upon.
It depends on how junior the engineer is. My first job out of college, I was asked to write some code to cheat a benchmark: basically, detect when a particular benchmark program was running, and only then put the software into an alternate "fast path" that would produce better benchmark results. I agonized over this and didn't want to refuse. This was my first real job as a professional developer, and I didn't want to make waves. Eventually I got the nerve to tell my boss I was uncomfortable with the assignment, and he said "Oh, no problem at all! We keep our devs happy here." and assigned me to another task. Joe, three cubicles down, was more than happy to write the benchmark-cheating code.
More than likely, it's not generalizable to real-world conditions.
Benchmarks are meant to be reproducible, meaning perfectly predictable. CPUs have things called branch predictors which try to predict what the software is going to do and try to do the calculation ahead of time resulting in (hopefully, if it predicted right) faster execution time. If you know which 'branches' a benchmark goes down, you can make a program which can coax the branch predictor to always make the right guesses for a given benchmark.
A program branches whenever you encounter some sort of conditional if-else statement.
Some optimizations might only apply to certain inputs that are used by benchmarking software. Or the driver makes unfavorable power tradeoffs to maximize performance when a benchmark is running. For example, if the driver knows a benchmark is single threaded, it can artificially throttle other cores and boost the core the benchmark is running on. There's more extreme stuff like GPU drivers replacing shaders (benchmarks don't care about graphics quality) or pre-rendering frames. https://videocardz.com/74912/professional-overclocker-demons...
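The detection half of such a cheat can be caricatured like this; every name below is hypothetical, not taken from any real driver or product:

```python
# Hypothetical sketch of benchmark-detection logic; all names are invented.
KNOWN_BENCHMARKS = {"geekbench", "3dmark", "specviewperf"}

def looks_like_benchmark(process_name: str) -> bool:
    # Match the running process name against a hard-coded list.
    return any(bench in process_name.lower() for bench in KNOWN_BENCHMARKS)

def choose_path(process_name: str) -> str:
    # Flip the stack into an aggressive "fast path" only when a known
    # benchmark is detected, so the gains never reach real workloads.
    return "fast_path" if looks_like_benchmark(process_name) else "normal_path"

print(choose_path("Geekbench5.exe"))  # fast_path
print(choose_path("photoshop.exe"))   # normal_path
```

The point of the sketch is why this is dishonest: the optimization is keyed to the measuring instrument, not to anything users actually run.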
I successfully implemented pocket vetoing at the most immoral company I ever worked for. It was a brief stint (cut short by the moral issues), during which I could get away with not delivering all the features management wanted for gouging their customers, by juggling other priorities.
You don't need to do it, and you don't even have to explicitly say no; you can just always find (or create) work that's more important than breaking your own morals. The worst that can happen is someone else gets the hot potato.
> managers are loath to fire someone they've already spent so much effort finding, hiring, and onboarding
Caveat: this applies to perms. It doesn't apply nearly as much to contractors (as my many experiences with saying "No, but..." to managers and being canned can attest.)
> But remember that hiring is hard, and managers are loath to fire someone they've already spent so much effort finding, hiring, and onboarding. Finding someone else to do it can take weeks, months, or longer! Which in many cases risks killing the project altogether.
Anecdata: one of my colleagues got fired for not meeting expectations at his level for two consecutive halves. From what I've seen, he was competent and provided value to the project. Some companies have high turnover and function on the idea that everyone is replaceable.
> Engineers aren't really fungible resources, to the extent that these projects require. Ask any manager how easy it is to swap "allocated resources", and they'll probably sigh heavily.
I'm hearing Meta, Stripe, Google, Netflix, Lyft and Uber are hiring like crazy for amazing salaries. Not only that but one basically just needs to sort of show up half the time and surf the net 99% of the time there.
Browsing some online code communities would lead someone to believe that FAANG and Silicon Valley companies are the only employers in our industry, and that if you're employed anywhere else you are probably on the verge of homelessness.
See also the oft-circulated OSS "Simple Sabotage Field Manual" http://svn.cacert.org/CAcert/CAcert_Inc/Board/oss/oss_sabota...