As for programming with types, that's partly what Go was trying to avoid: https://qht.co/item?id=6821389 . And I agree with Pike on this one. The nice thing about Go for me is that I'm just writing code. Not defining a type hierarchy, not rewriting some things to get them just right, not defining getters, setters, move and copy constructors for every type :). Just telling the computer what to do and in what order. When I'm writing a library I'm defining an API, but that's about it; and you can usually steal whatever the standard library's patterns are there.
I disagree about nil as well; I think Go's zero value approach is useful, and basically impossible without nil (it wasn't necessary in my code, but I may want to instantiate an ApiResponse object without a data structure ready to be passed in).
A little bit of a rambly response from me, but, all in all, I think I'll be one of the stubborn ones who refuses to use generics in his code for a long time.
The first comment below your link to the message about Rob Pike's feeling on types was a more succinct form of my reaction:
> Sounds like Rob Pike doesn't understand type theory.
I'd be a bit more charitable; Pike is a smart dude, I'm sure he does understand type theory, but has decided he'd prefer to write imperative code. And he's actually good at writing imperative code correctly, so for him, that works. But most people are not very good at writing imperative code correctly, at least not the first time, and not without writing a large volume of tests (usually several times as much test code as program code) to verify that imperative code.
Or perhaps Pike was just assuming the person he was talking to was only talking about types in the OOP sense. If that's the case, I agree with him: taxonomies are boring, and inheritance often leads to leaky abstractions.
But types let you do so much more than that. I much prefer being able to encode behavior and constraints into the types I define over writing a bunch of tests to verify that the constraints I've expressed in code are correct. Why do the work that a compiler can do for you, and do it much better and more reliably than you?
> But most people are not very good at writing imperative
> code correctly, at least not the first time, and not
> without writing a large volume of tests (usually several
> times as much test code as program code) to verify that
> imperative code.
How are languages other than Go (like Rust, Swift, etc.) not imperative? They are nearly as imperative as Go, but have FP features like ADTs and pattern matching, though those don't make them "functional". This point might be valid if you are talking about Haskell, the MLs and the like.
> I much prefer being able to encode behavior and constraints
> into the types I define over writing a bunch of tests to verify that the constraints
> I've expressed in code are correct. Why do the work that a compiler can
> do for you, and do it much better and more reliably than you?
IMO, this "types replacing tests" might be true when comparing dynamic vs static typing, but for already statically typed languages, extra typing cannot be a substantial gain. Personally, I find type-level programming beyond a certain extent to be counterproductive, and it doesn't seem to have much impact on correctness (unless we bring in dependent types).
I'm going to be a smartass about this and say I think neither of you are wrong. The "extra typing" gives you a large boost in your confidence that you've written something correct, without necessarily boosting the correctness of what you have written.
The best example of this which I have seen is Amos' https://fasterthanli.me/blog/2020/i-want-off-mr-golangs-wild... in which he chastises Go for letting him print filenames containing non-UTF-8 data to the terminal, where they display incorrectly, instead of forcing them to be escaped. Rust does force that, so he is confident that the program he writes in Rust handles filenames properly: if they are UTF-8 they are printed normally, and if they are not they are escaped. Since the Rust type system allows him to do this, it must be correct, right? Of course not. Filenames which contain vt100 escape codes would do a lot more damage, and they are not handled at all.
At the end of the day you still have to think of all the possible cases. Types help to constrain the number of possible cases, but the more time you're spending making type constraints as narrow as possible, the less time you're spending handling the cases which can never be caught by the type system.
Parent comment isn't implying that functional vs imperative is a binary; they're pointing out that the power of a type system is orthogonal to the imperative/functional spectrum.
How about just having the bread-and-butter tools of FP available? You can't have a typesafe map operation without generics, so that alone is a major impediment to FP in Go.
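For reference, the type parameters that later landed in Go 1.18 make exactly this possible; a minimal sketch of a type-safe map (the function name Map is my own, not from any library):

```go
package main

import "fmt"

// Map applies f to each element of in, preserving element types end to end.
// Requires Go 1.18+ type parameters; before that, the only options were
// code generation or interface{} plus runtime assertions.
func Map[T, U any](in []T, f func(T) U) []U {
	out := make([]U, 0, len(in))
	for _, v := range in {
		out = append(out, f(v))
	}
	return out
}

func main() {
	doubled := Map([]int{1, 2, 3}, func(v int) int { return v * 2 })
	fmt.Println(doubled) // [2 4 6]
}
```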
You are conflating FP with statically typed FP. There are dynamically typed FPLs too (Scheme, Elixir, etc.). Yes, I agree that Go could've been better if it had simple generics and discriminated unions.
By the way, generics are not at all exclusive to FP. Both C++ and D provide advanced generic/meta-programming facilities like template template parameters (HKTs) and const generics, which Rust currently lacks. But I'd certainly not use these if there were no absolute need.
What I objected to was abstracting and generalizing too much to the point of over-engineering. Some type systems, like those of Rust and Haskell, provide more room for such abuse and it takes some discipline to keep it simple. In contrast, OCaml is a very good sweet spot. ML modules are as powerful as type classes and lead to much cleaner and simpler APIs. OCaml compiles even faster than Go!
>I disagree about nil as well; I think Go's zero value approach is useful, and basically impossible without nil (it wasn't necessary in my code, but I may want to instantiate an ApiResponse object without a data structure ready to be passed in).
I like your solution to my imaginary problem, but I am going to counter this one. Now I'm salty, because a nil interface[1] actually bit me once and cost me. Go's zero value approach also exists in Rust, but it is type safe. Basically every primitive type, and Option<T>, has a zero value. Any struct made up of those values can also safely have a zero value. For those that don't, you can implement your own zero values. This is called Default in Rust. Go's zero value approach is perfectly possible, and arguably better, without nil. And even then, a service crashing because someone "forgot" to initialize a nullable type and passed it to a function that then exploded isn't an issue I should still be dealing with in 2020.
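A minimal sketch of what that looks like in Rust (the ApiResponse type and its fields here are made up for illustration): types opt in to having a zero value by implementing Default, and anything without a sensible one simply can't be default-constructed:

```rust
// Deriving Default works because every field has a Default impl of its own.
#[derive(Default, Debug, PartialEq)]
struct ApiResponse {
    status: u32,          // zero value: 0
    data: Option<String>, // zero value: None, with no nil in sight
}

fn main() {
    // Instantiate without having the data ready yet, as in the Go scenario.
    let resp = ApiResponse::default();
    assert_eq!(resp, ApiResponse { status: 0, data: None });

    // Fill in the data later, defaulting the rest.
    let resp = ApiResponse { data: Some("payload".into()), ..Default::default() };
    assert_eq!(resp.status, 0);
}
```

A type with no sane default (the bufio.Reader case discussed below) would just not implement Default, and `::default()` on it would fail to compile.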
First of all, interfaces are not nil if they point to a nil value, for the same reason a **T is not nil if the *T is nil. That is the correct decision. You cannot call a function on a nil interface, but you can call a function on an interface which is not nil but where the value of the type is nil[1].
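The classic demonstration of this, as a minimal sketch:

```go
package main

import "fmt"

type T struct{}

func main() {
	var p *T              // a nil *T
	var i interface{} = p // an interface holding (type: *T, value: nil)

	fmt.Println(p == nil) // true
	fmt.Println(i == nil) // false: the interface carries a type, so it is non-nil
}
```

Calling a method through i that dereferences the receiver would still panic, which is the bite people complain about.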
As for your proposal, there are some issues with it: not all structures have a sane default for all non-optional, nilable fields. What is the default underlying reader for a bufio.Reader? A reader which returns zero bytes? Certainly that would be more confusing to debug than a simple panic, which is what we have now [2]. There's also the fact that a zero value is just a value with all the bytes zeroed: allocating an object via language means (new) never allocates more than the size of the object and does no computation.
But I guess the main point would be that I simply do not have a problem programming with nil-able types. Failing to initialize in Go means writing var a *T as opposed to a := &T{} or a := NewT(), which seems like an odd mistake to make - or forgetting to initialize a member of a struct in the NewT() function. Fundamentally, I do not want to spend hours of my life dealing with safeguards which are protecting me from a few minutes of debugging.
But hey, that's just me. Go isn't Rust and Rust isn't Go and that's a good thing.
> not all structures have a sane default for all not optional, nilable parameters. What is the default underlying reader for a bufio.Reader?
I think you're actually agreeing with nemothekid here, what you're saying is that there is no sensible default value for a bufio.Reader. In Rust terminology, that would mean bufio.Reader would not implement the Default trait. Types need to explicitly implement the Default trait, so only types where that makes sense implement it.
> I simply do not have a problem programming with nil-able types
Yes, that's going to make a big difference to how you feel about features that reduce the chance of errors like that. I'd hazard a guess that in C# the most common exception thrown is the NullReferenceException, and after a quick search[1] it looks like NullPointerException is a good bet for the most common exception in Java. Most of those IllegalArgumentExceptions are probably from null checks too.
> Fundamentally, I do not want to spend hours of my life dealing with safeguards which are protecting me from a few minutes of debugging.
Similarly, I've already spent hours of my life dealing with null and undefined value errors, and I'd like to stop doing that. So I welcome new languages and new language features that help stop those errors before they happen.
Go is designed as a language for the average: the average programmer doing averagely complex things in the current average environment (i.e. web services, slinging protobufs or equivalent). That is its specific design goal for Google.
It's designed to be simple, to be boilerplate, to be easily reviewable/checkable by coding teams.
Not sure why Go and Rust are always the compared languages. Go is designed to replace Java/RoR/Python in Enterprise-land, not to replace C/C++.
Rust is designed to replace C/C++ in system-land, embedded, kernel, thick app components (browsers are probably the most complex apps running these days on end user systems). The entire focus is zero-cost abstractions.
Because for a long time people were trying to shove Go into use cases that before were covered by C applications. Some infrastructure has been developed in Go (Kubernetes being quite prominent) and so the overlap between systems programming and web/enterprise got muddy.
Rust can't cover the enterprise use-cases of Go the same way that Go won't ever cover what Rust can do on systems/embedded level but there is enough overlap in some cases to confuse people into trying to compare them directly.
Ah! Yes, that is a case that I would consider could have an overlap between Go/Rust and C. Thanks for sharing it, had no idea it was implemented in bare metal Go, will check it out :)
I have seen average programmers who can write less dumbed down code than the most succinct code possible in Go.
I have seen below average programmers who understand how to use generic data structures / programs in C++ / Java / C# etc..
> It's designed to be simple, to be boilerplate, to be easily reviewable/checkable by coding teams.
Boilerplate and easily reviewable are at odds. Unless enterprise-style Java is the bar, it is hard for anyone to say that. Go has much more boilerplate than the average programmer's Python code, for example. Don't tell me Python is dynamic; the average programmer doesn't do metaclass magic.
So much boilerplate: the verbose `if err != nil`, no generics, and no methods/functions for common operations like finding the index of an element in an array. All this leads to for-loop-inside-while-loop-inside-for-loop atrocities where it is harder to decipher what the intention is. Compare to Python, where you have methods on collections to carry out common manipulations, and a list comprehension is so much cleaner than four lines of imperative Go.
Go seems to miss why Python/JS/Ruby are so popular. It is because so much is built in that you can communicate __intent__ clearly without getting bogged down in details. Compared to any modern language that's not C and not mediaeval C++, Go is so much more verbose. Even Java has enough shortcuts to do these common things.
And don't start telling me this leads to incomprehensible code. Coding standards exist for that. What's unreadable is the 8-level-indented imperative atrocity of the blue collar language Go.
> Not sure why Go and Rust are always the compared languages.
They both emerged at about the same time and have some overlapping scope - e.g. static native compilation, memory safety etc. - but there the similarities end. However, there is a lack of a popular, succinct, natively compiled language which gets out of the way of writing software. Everyone knows expressive languages need not be slow or difficult to deploy. Some people have to write Go in their day job, and Rust, having the quality-of-life improvements anyone expects in a post-2000 language [0], seems to be the closest candidate for comparison (even though Rust isn't the optimal language for the things Go is used for, given it is a systems language with static memory management). Others like D and Nim have a fraction of the users. Of course there are also some vocal Rust fanboys who think the crab god is not popular because the world is anti-intellectual.
Well, you have seen a lot of things, which I guess is fine. Others may have seen different things. In my last 10 workplaces and dozens of projects I have seen code which would be at least 5-10 times more verbose than equivalent Go code. I also differentiate verbosity of an individual expression vs verbosity of the overall project due to dependencies, code arrangement and other associated files/resources etc. needed to produce a deliverable.
> Go seems to miss why python/JS/ruby are so popular.
Not sure what is there to miss especially since Go is pretty popular for its age. Considering it is not even mandated or officially supported like Swift by Apple, or Dart/Kotlin by Google/Android.
This is pretty much correct, except for 2 minor gripes: Firstly, Go was meant to replace C++ in server-land. The fact that it ended up being a suitable alternative for Java/Python in many cases was a happy accident.
Secondly, I don't think the fact that Go making the opposite trade-offs in terms of (let's say) programmer effort vs. CPU clock cycles means it is a "language for the average". Go is great for any number of interesting high-level server-side components where it being done in half the time is better than it being 15% faster.
I'm just going to comment on one point of your comment:
> to be easily reviewable/checkable by coding teams.
I strongly disagree with this. I find Go code to be incredibly difficult to read, for two main reasons.
The first is that the lack of expressive power in the language means that many simple algorithms get inlined into the code instead of using an abstraction with a simple and understandable name. I find that the first pass of the code is me reading over the lines (each of which is very simple and understandable) and virtually abstracting it into what the code actually does. I find that the interesting business logic is lost in all of the boilerplate.
    out := []int{}
    for _, v := range input {
        number, err := fetchData(v)
        if err != nil {
            return nil, err
        }
        if math.Abs(float64(number)) <= 2 {
            continue
        }
        out = append(out, number)
    }
    return out, nil
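(The Rust version discussed below isn't quoted in this thread; a reconstruction from the function names mentioned, using a std-only stand-in for itertools' `filter_ok` and a hypothetical `fetch_data`:)

```rust
// Hypothetical stand-in for the fetch in the Go version.
fn fetch_data(v: &i32) -> Result<i32, String> {
    Ok(v * 3)
}

fn process(input: &[i32]) -> Result<Vec<i32>, String> {
    input
        .iter()
        .map(fetch_data)
        // std-only stand-in for itertools' filter_ok: pass errors through,
        // and keep Ok values whose absolute value exceeds 2.
        .filter(|r| r.as_ref().map_or(true, |n| n.abs() > 2))
        // collect() into a Result stops at the first Err.
        .collect()
}

fn main() {
    assert_eq!(process(&[0, 1, 2]), Ok(vec![3, 6]));
}
```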
As I said, every line in the Go version is simple (except maybe for append, but generics can help with that). However, the actual business logic is lost in the boilerplate. In the second example (Rust with one existing and available helper function) the boilerplate is much less, and each line basically expresses a point in the business logic. There are really two bits of boilerplate here: `filter_ok` instead of `filter` to handle the errors from `fetch`, and the `collect` to turn the iterator into a collection (although maybe you could improve the code by returning an iterator instead of a collection and simplify this function in the process).
Secondly the "defaults are useful" idea is in my opinion the worst mistake the language made. They repeated The Billion Dollar Mistake from C. I have seen multiple expensive production issues as a result of it and it makes code review much harder because you need to check that something wasn't uninitialized or nil. It is absolutely amazing in Rust that I don't have to worry about this for most types (depends on your exact coding style, in the above example there is never a variable that isn't "complete").
So while Go may be quick to write, I think the understandability is deceiving. Yes, I can understand every line, but understanding the program/patch as a whole becomes much more difficult because of the lack of abstraction. Humans can only hold so much in our heads, making abstraction a critical tool for understandable code. So while too much of the medicine can be worse than the disease, I think Go aimed - and hit - far, far below the ideal abstraction level.
> So while Go may be quick to write, I think the understandability is deceiving. Yes, I can understand every line, but understanding the program/patch as a whole becomes much more difficult because of the lack of abstraction. Humans can only hold so much in our heads, making abstraction a critical tool for understandable code. So while too much of the medicine can be worse than the disease, I think Go aimed - and hit - far, far below the ideal abstraction level.
I think this is the fundamental point of contention. Go aims at abstraction at the package level. Each package exports a set of types and functions which are "magic" to outsiders and can be used by outsiders - an API, if you will. Rust seems to aim at abstraction at the line level - each line is an abstract "magic" representation of what it is meant to do.
In your Go code, the only pieces of line-level magic are `range` and arguably `append` (even though I would argue it is integral to the concept of slices). And of course the API magic of `fetchData` and `abs` which is in both versions.
On the other hand, Rust has .iter(), .map(), .filter_ok() and .collect(). So, while I think anyone could understand the Go code if you explained `range` to them, I do not understand the Rust code. Yes, I understand what it does, but I have no clue how it does it. What is the type of .iter()? Why can I map over it? Why can I filter a map?
But that's not the point of the Rust code. The Rust code expresses what should be done, not how it is to be done.
The way Rust deals with complexity is by offering tools which push that complexity into the type system, so you do not have to keep everything in your head. The way Go deals with complexity is by eliminating it and, when that is not possible, by making sure it does not cross API boundaries. In Go you do keep everything in your head.
> Go aims at abstraction at the package level. Each package exports a set of types and functions which are "magic" to outsiders and can be used by outsiders - an API, if you will. Rust seems to aim at abstraction at the line level - each line is an abstract "magic" representation of what it is meant to do.
Are line-level vs. package-level abstractions necessarily mutually exclusive? One could argue that the functions Rust iterators expose are "'magic' to outsiders and can be used by outsiders --- an API, if you will".
> but I have no clue how it does it
The question here is whether it actually matters whether you know how the Rust code works. You don't necessarily need to know how `range` or `append` work; you just need to know what they do. Why should the Rust code be held to a different standard in this case?
> I think this is the fundamental point of contention.
I agree with your point. Part of Go was definitely the removal of unnecessary abstraction and complication. However my argument is that they went too far.
> Go aims at abstraction at the package level [...] Rust seems to aim at abstraction at the line level
I don't understand the difference in your mind vs package level or line level. For whatever is exposed as a package is surely intended to be used in a line elsewhere in the program.
> On the other hand, Rust has .iter(), .map(), .filter_ok() and .collect(). So, while I think anyone could understand the Go code if you explained `range` to them
If you explain range, continue, return and append then you could understand the Go snippet. I don't see how this is meaningfully different from explaining iter, map, filter_ok and collect. Sure, the former are language features while the latter are library features but that doesn't seem to be a meaningful difference when it comes to comprehension.
> Yes, I understand what it does, but I have no clue how it does it.
This is the whole point of my argument. You don't need to understand how it works. Much like you don't need to understand how continue or append work in go. That is the point of abstraction. You need to know what they do, not how they do it. In my opinion Go forces you to leave too much of this usually-irrelevant plumbing in the code, which distracts from the interesting bits.
> The way Go deals with complexity is by eliminating it
This is again my key point. I'm arguing that in most cases Go hasn't managed to eliminate the complexity. Maybe it got rid of a little, as your "map" loop doesn't need to be as generic and perfect as the Iterator::map in the standard library. But the complexity that is left is now scattered around your codebase, instead of organized and maintained in the standard library.
The intrinsic complexity has to live somewhere. And in Go I find a lot more lives inline in your code. In other languages I find it is much easier to move the repetitive, boilerplate elsewhere. And when this is done well, it makes the code much, much easier to read and modify as well as leading to more correct code on average.
> That's a Go feature.
I agree with that. But what I am trying to express that in my experience this is actually a flaw. It looks good at the beginning. But once you start reviewing code you start to see it break down. I think Go had a great idea, but based on my experience I don't think it worked out.
> Sure, the former are language features while the latter are library features but that doesn't seem to be a meaningful difference when it comes to comprehension.
Absolutely. The difference is that Go has a limited number of such features and once you have learnt them that's all you need to know, in that sense, and can understand any code base.
One thing which I have not expressed very well in my reply is what exactly I meant by "understanding what the code does". When you look at func fetchData(T1) (T2, error), it's easy to understand what it does: it fetches some data from T1 and returns it as T2, with the possibility of it failing and returning an error. If you know what T1 and T2 are (which you should if you're inspecting that code), that's usually sufficient. You understand (almost) all of its observable behavior, which is different from its implementation details. Similarly, `abs` returns the absolute value of a number.
`append` also has easy to understand observable behavior: it appends the elements starting at position len(slice), reallocating the slice if necessary (generally, if the capacity is not big enough), but its actual implementation is undoubtedly very complex. `range` is harder to explain, but rather intuitive once you get the hang of it.
Of course you also want to keep in mind the behavior of all the language primitives as well: operators, control flow etc. In Go, you have to keep all of these things in your head to understand what is happening in the code, but once you do you really understand it.
We can call all of these things: variables, language primitives, API functions etc. atoms of behavior. In Go, to understand a piece of code, you first have to understand what the observable behavior (but usually not the implementation) of all of the atoms in that code are, and then understand all of the interactions between those atoms that happen as a result of programmer instructions.
What I mean by line-level vs package-level abstraction is quite simple (maybe not the best names, but hey, I'll stick with them). With package-level abstraction, the atoms, as well as the interactions between them, remain conceptually easy to understand, but become more powerful as you move up the import tree. The observable behavior of an HTTPS GET is easy to understand, but very complex under the hood.
With line-level abstractions the atoms, and especially the interactions between them, become very complex. The programmer no longer "has to understand" the observable behavior of every single function he uses. Odd one-off mutators are preferred to inlining the mutation because it "makes the code more expressive" - in that it makes it look more like English, it makes it easier to understand what the programmer is trying to do. It does not, however, make it easier to understand what the programmer is actually doing, because the number of atoms - and their complexity - increases substantially. If you want to get a feel for this, look at the explanation for any complex feature in C++ on cppreference.com. I must have read the page on rvalue references 20 times by now and I still don't grok it.
Of course, with line-level abstraction, the programmer doesn't need to constantly keep in mind 100% of the behavior of the atoms he's using, much less whoever's reading.
I can't tell you which one's better - probably both have their place - all I'm saying is that I, personally, can't work with C++/Rust/other languages in that style. I've tried to use them but I can't. C is easier to use - for me.
I'm honestly still confused what point you're trying to make.
Paragraphs 2 through 6 seem like they would apply to most, if not all, languages, even if the language is more complex. For example, here's the text with a few minor alterations:
> One thing which I have not expressed very well in my reply is what exactly I meant by "understanding what the code does". When you look at `fn fetch_data(input: T1) -> Result<T2, Error>`, it's easy to understand what it does: it fetches some data from T1 and returns it as T2, with the possibility of it failing and returning an error. If you know what T1 and T2 are (which you should if you're inspecting that code), that's usually sufficient. You understand (almost) all of its observable behavior, which is different from its implementation details. Similarly, `abs` returns the absolute value of a number.
> `Vec::push` also has easy to understand observable behavior: it appends the elements starting at position vec.len(), reallocating the vector if necessary (generally, if the capacity is not big enough), but its actual implementation is undoubtedly very complex. `Vec::iter` is harder to explain, but rather intuitive once you get the hang of it.
> Of course you also want to keep in mind the behavior of all the language primitives as well: operators, control flow etc. In Rust, you have to keep all of these things in your head to understand what is happening in the code, but once you do you really understand it.
> We can call all of these things: variables, language primitives, API functions etc. atoms of behavior. In Rust, to understand a piece of code, you first have to understand what the observable behavior (but usually not the implementation) of all of the atoms in that code are, and then understand all of the interactions between those atoms that happen as a result of programmer instructions.
The details differ, but the overall points remain true, do they not?
Unfortunately, I'm still confused after your description of line-level vs. package-level abstraction. Just to make sure I'm understanding you correctly, you say an HTTPS GET is an example of a package-level abstraction, and an "odd one-off mutator" (presumably referring to a functional/stream-style thing like map()/filter()) is an example of a line-level abstraction?
If so, I'm afraid to say that I fail to see the distinction in terms of package-level vs. line-level abstraction, since what you said could apply equally well to both HTTPS GET and map()/filter(). For example, if a programmer uses a function for an HTTPS GET instead of inlining the function's implementation, you can say "using the function is preferred to inlining the request because it 'makes the code more expressive' --- in that it makes it look more like English, it makes it easier to understand what the programmer is trying to do. It does not, however, make it easier to understand what the programmer is actually doing..."
That's the nature of abstractions. You hide away how something is done in favor of what is being done.
> With line-level abstractions the atoms, and especially the interactions between them, become very complex. The programmer no longer "has to understand" the observable behavior of every single function he uses.
I believe your second sentence here is wrong. Of course the programmer needs to understand the observable behavior of the functions being used --- how else would they be sure that what they are writing is correct?
> Odd one-off mutators are preferred to inlining the mutation because it "makes the code more expressive"
I think you might misunderstand the purpose of functions like map() and filter() if you call them "odd one-off mutators". The same way that `httpsGet` might package the concept of executing an HTTPS GET request, map() and filter() package the concept of perform-operation-on-each-item-of-stream and select-stream-of-elements-matching-criteria.
> If you want to get a feel for this look at the explanation for any complex feature in C++ on cppreference.com. I must have read the page on rvalue references 20 times by now and I still don't grok it.
You need to be careful here; in this case, I don't think rvalue references are a great example because those aren't really meant to abstract away behavior; on the contrary, they introduce new behavior/state that did not exist before. It makes sense, then, that they add complexity.
In the end, to me it feels like "package-level" and "line-level" abstractions are two sides of the same coin. The functions exposed by a "package-level abstraction" become a "line-level abstraction" when used by a programmer in a different part of the code.
> I think this is the fundamental point of contention. Go aims at abstraction at the package level. ...
I think this is an excellent point. I have Go code which might make a Rust fan apoplectic with its verbosity and basicness. But for me it is like magic every time it runs and gets stuff done on any of my remote/local machines.
I made a similar point elsewhere about expression verbosity vs project verbosity. To me all the 3rd party dependencies, substantial number of outside tools and obscure setup files are a type of verbosity when I work on a project. Though I do not mind them if that's what is needed.
Because originally Go was designed while Pike and others were waiting on C++ builds, and he couldn't grasp why C++ developers didn't jump for joy at adopting Go.
> We—Ken, Robert and myself—were C++ programmers when we designed a new language to solve the problems that we thought needed to be solved for the kind of software we wrote. It seems almost paradoxical that other C++ programmers don't seem to care.
If I understand it correctly, your code doesn't check when unmarshalling whether the content of Data has the expected structure. Using interface{}, it's always been possible to achieve something like what you'd do with generics, but it's ugly and can't check types at compile time.
Never said it was a good idea. It's inefficient, unsafe and hard to read but, as far as I can tell, any generic function with concrete return types can be written using reflect (and any generic function with generic return types can be written too; it just requires the caller to manually cast from an interface{} to the correct type).
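For the curious, roughly what that looks like, with a hypothetical Index helper written against reflect (not from any library); ugly and slow, but it works for any slice type:

```go
package main

import (
	"fmt"
	"reflect"
)

// Index returns the position of elem in slice (which may be any slice type),
// or -1 if absent. No compile-time checking: passing a non-slice, or an elem
// of the wrong type, is only caught (or silently missed) at runtime.
func Index(slice interface{}, elem interface{}) int {
	v := reflect.ValueOf(slice)
	for i := 0; i < v.Len(); i++ {
		if reflect.DeepEqual(v.Index(i).Interface(), elem) {
			return i
		}
	}
	return -1
}

func main() {
	fmt.Println(Index([]string{"a", "b", "c"}, "b")) // 1
	fmt.Println(Index([]int{1, 2, 3}, 5))            // -1
}
```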
You claim that law enforcement culture differs in the countries which you are comparing and that is indeed true. But is the crime culture the same? Or is that not a factor worth considering?
Police are institutions with standardization, their culture isn't some random construct, it's something that's actively decided on, that's codified in legislation and training.
Criminal culture is not, it's something that emerges way more naturally, usually as a direct response to the culture set by police and punishments established by law.
If, for example, the punishment for certain crimes is so high by default that the criminal would rather die than get a sentence that would equal death, then you have taken any and all motivation from that criminal to look for a more reasonable way out, instead preferring to "go out in a blaze of glory" on their own terms.
That's why this whole "tough on crime" approach mostly leads to an escalation on both ends: Cops treat criminals more harshly, criminals respond by acting more harshly themselves because acting more reasonable wouldn't gain them anything anyway so they might as well completely live out their destructive urges.
This is further reinforced through a prison system that's not aimed at rehabilitation, but generally seen as a form of "revenge", as "punishment" and as such victimizes its inmates, which leads to even more resentment, while leaving them utterly unprepared, and with quite a grudge, when they get released back into society.
Which leads to the outcome that it usually won't take long until they get into trouble again [0] because their time in prison taught them nothing except "might makes right" and how sadistic cruelty is a valid way of interaction with other people when you are the one in a position of power.
> If, for example, the punishment for certain crimes is so high by default that the criminal would rather die than get a sentence that would equal death, then you have taken any and all motivation from that criminal to look for a more reasonable way out, instead preferring to "go out in a blaze of glory" on their own terms.
Yep. That is what happens in most cases. Which is why I reject the idea that most cops who shoot black people are racist. Sometimes, as in the case of George Floyd, the individual officer is at fault (even if that does not _necessarily_ mean he is racist); however, in the vast majority of police shootings, the victim is trying to reach for the cop's gun, assaulting the officer, or in some other way putting the officer's life in danger. So you cannot blame the individual shooter in such cases.
You can, however, blame the systems which cause the erratic behavior, that's true. However, we need to have an honest conversation about what the causes actually are.
The gang culture in prisons is indeed a problem, but making prisons less tough would not solve it; it would only exacerbate it, as gangs would have increased influence over the prison.
Of course, in reality many gangs are in cahoots with the prison staff, which is a corruption problem which needs to be solved.
Another problem the article correctly points out is non-violent offenders becoming violent as a result of their time in prison.
That said, the gang culture in prisons is only an extension of the gang culture outside of prisons. And you cannot blame that on tough sentencing, as gang culture exists in many other places around the world - the UK's grooming gangs, Sweden's no-go zones, Romania's gypsy gangs and so on, all in countries without harsh sentencing.
I've watched 30 clips. ALL of them were 30 second clips with no context. All of them from one Twitter thread as well, so not sure why that was not linked instead.
On the other hand, all of the instances of so-called police brutality with context which I've seen elsewhere occur after the police calmly order the protestors to leave a particular area, or to go home because of a curfew order or something else, and the protestors refuse in a less-than-calm manner.
My question is this: is the police using force to manage a crowd always unjustified in your view?
There is a very American viewpoint that the rest of the world is somehow different. There are police in Taiwan, Australia, Canada and most of Europe.
They are friendly and don't overexert their force as frequently as in the US.
In the US, the police just take out their gun for absolutely petty reasons. It's like they just want to escalate the situation rather than calm it down.
So yes, the problem is that American police don't know how to calm a crowd down. They stand like robots rather than acting like humans who listen and work with the crowd.
In many instances the first shots are fired by the police.
The crowd has been otherwise quite peaceful exercising their first amendment right.
> The crowd has been otherwise quite peaceful exercising their first amendment right.
Not going to lie, this actually made me laugh[1].
Other than that, you missed my point entirely. When a policeman calmly explains to you that you are not allowed to be in a particular area for 10 minutes, and the whole crowd screams in disapproval and refuses to leave, so the police use force to remove the crowd, and someone tapes a 30 second clip of this, you can see how you can get the wrong impression.
Maybe the protestors are in the right and their presence is protected by the first amendment, but that's a whole different story. It'd also be a different story if they weren't allowed to protest at all, but they are. Just not absolutely whenever and wherever they want.
Is that the story behind every single one of these instances? Of course not. But I'd wager it's the story behind the vast majority of them. Unfortunately, there is no way to know if I'm right or not, as these clips do not provide any insight into that. Just sensationalism.
The majority of policemen are either beating people up, standing around while others beat people up, or are aware of systemic injustices in their system and not fighting them tooth and nail. They're all part of a union. They should be voting for policies to operate among themselves with accountability, respectability, and a sense of justice.
All 57 members of Buffalo's emergency response unit resigned in protest of disciplinary actions taken against the men who shoved a 75 year old man and then walked over his body while he bled from the ear. https://www.washingtonpost.com/nation/2020/06/05/buffalo-off...
It's not at all obvious to me that the majority of policemen don't support the abuses, even if they aren't all getting their hands dirty.
I can't access the link because it's paywalled, but I assume you're talking about this clip: https://youtu.be/kGsqg5vtahA . EDIT: Found the free button :). Yeah, I guessed right.
I can't see how you can classify this as an abuse. Yeah, the cops should have apprehended him instead of pushing him after he started scanning their equipment, or whatever he was doing, but it's very clear from the clip that the officers had no intention of harming the man. You even have some guys in green checking on him at the tail end of the video. The officer who pushes him also appears to try to help immediately after the incident (even if it's not exactly clear what he was trying to do), before he is pushed back into formation.
Did any officer resign in solidarity with George Floyd's murderer? Or in some other incident which we can all agree is an abuse?
As for programming with types, that's partly what Go was trying to avoid: https://qht.co/item?id=6821389 . And I agree with Pike on this one. The nice thing about Go for me is that I'm just writing code. Not defining a type hierarchy, not rewriting some things to get them just right, not defining getters, setters, move and copy constructors for every type :). Just telling the computer what to do and in what order. When I'm writing a library I'm defining an API, but that's about it; and you can usually steal whatever the standard library's patterns are there.
I disagree about nil as well; I think Go's zero value approach is useful, and basically impossible without nil (it wasn't necessary in my code, but I may want to instantiate an ApiResponse object without a data structure ready to be passed in).
A little bit of a rambly response from me, but, all in all, I think I'll be one of the stubborn ones who refuses to use generics in his code for a long time.