It's more hand-wavy than is usually admitted. When you think about it, the idea of emergent phenomena amounts to:
- There's no underlying consciousness or conscious originator (no god, no panpsychism, no underlying conscious layer at the basis of reality, no nothing)
- At some point, two or more elements (rocks, atoms, etc., that weren't conscious) aligned precisely in a given configuration and, zap!, they became conscious and interactive
- All consciousness then sprang from that.
So the question would be: if systems favour inertia, stasis and conservation of energy, why would there be consciousness at all, and not just an endless void, or a perfectly stable (as in homeostasis) system without conscious agents, or just rocks floating in space?
I'm not saying the idea of emergent phenomena is wrong, just that it had better answer the complex questions other "supernatural" theories try to address before being declared some sort of obvious and correct answer.
I don't completely understand why you're being downvoted. I'm European, in favour of GDPR, and I think this is a valid way of doing it. These reactions confuse me the same way as using incognito mode or adblockers to bypass paywalls and such - if that's their business model and their choice, I'm going to say no, and won't even be interested.
> I don't understand completely why you're being downvoted.
Because it's irrelevant, wrong and passive-aggressively belligerent: "Sorry you all didn’t get the consequences you wanted... childish ... Zero thinking ... churlishness".
That's actually very much a feature.
Many problems can still be parallelised quite efficiently. For instance, when stream-parsing a file you can send every N lines to a different process, and the same goes for many other problems that can be sliced: traversing nested collections, fetching batches of records from stores, etc.
Sometimes you can also reformulate the problem, but yes not all problems fit.
I would add, though, that whenever you want to write orchestration around that parallel work, it's much easier in Erlang than in the alternatives.
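A minimal sketch of that slicing idea in Python (all names here are hypothetical, and a thread pool stands in for the separate processes you'd get in Erlang): cut the input every n lines and fan the chunks out to independent workers.

```python
from concurrent.futures import ThreadPoolExecutor

def count_words(chunk):
    # Worker: handles one slice of lines independently of the others.
    return sum(len(line.split()) for line in chunk)

def parallel_word_count(lines, n=100, workers=4):
    # Slice the stream every n lines and fan the chunks out to workers
    # (in Erlang each chunk would go to a separate lightweight process).
    chunks = [lines[i:i + n] for i in range(0, len(lines), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_words, chunks))
```

The same shape works for any reduction over sliceable input; only the worker function changes.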
I pick the peak of everything, my house is from 1806, my bike is from 1950, my computers are 8-core Atom 2017 (server) and Jetson Nano 2019 (client)... no house/bike/computer will ever be better ever in the history of the universe.
With Java I was just lucky. I learned C++ first, and now, 20 years later, I learned C - you have to go back in time to see the future. I also went back to the C64 to predict the peak of computers.
a) A theory
b) That in no way contradicts the possibility of a continuum where universes may rise, expand, contract and die, only to rinse and repeat
c) If nothing can be created out of nothing, and if energy in the universe cannot be created or destroyed, that doesn't seem to be correct unless the universe is an artificial system
d) The only way for c) to be true is if everything is always the same thing in different forms, at which point we might as well say time is infinite
(caveat: artificial systems of course - but those still need to be initiated from somewhere else at some point down or up the chain of creation - so it should follow that something infinite must be at play)
- humanity will never go beyond 1 gigabit ethernet due to 'the physical limits and energy'
The complexity and energy requirements of 10 Gb/s make it improbable at home in the long run; also http://radiomesh.org
- hydroelectric is the only real source of electricity
It's the only viable alternative to photosynthesis (also powered by the fusion reactor in the sky).
- 3D MMOs are the "final medium" and that they are building one to last 100 years,
I'm building an MMO engine for eternity; the server hardware is specced for 100 years minimum, and could work for 250 years with enough spare parts.
- they made the fastest database and they have 100% uptime,
100% READ uptime, but very verbose on disk (fixable but I digress)
- 2011 SSDs are the peak of disk space
They are the peak of writes per bit for the NAND 50nm SLC
- HTTP 1.1 is the 'final transport for humanity'
Yes.
- java doesn't crash
It can, but in 20 years I have never seen it happen in a server application; my VR LWJGL MMO crashed on Linux around 5-10 years ago, but I blame that on Linux more than on Java.
- smaller transistors 'wear out sooner'
I'm speculating about this one, we'll see.
- anything too hot to hold in their hand will break soon
Electronics wear out faster with heat, yes.
- load balancers save IP addresses
Yes, obviously.
- the synchronize keyword in java makes their programs non-blocking
No, I'm not going to explain this one as the source is there for you to read.
- multi-threading in games gives them 10 frames of motion to photon latency
Yes, "The Last Guardian" had 10 frames lag on the PS4: http://move.rupy.se/file/20200106_124100.mp4
Here's a tip: if a lot of smart, informed people have a very positive view of a technology but it doesn't click for you, it may not be the right tool for you but it very likely is not a "sign of dementia".
Thank you. I can be wrong, obviously, but your argument, in light of our current times, would actually indicate that a lot of smart, informed people are tending more toward idiotic, brainless emanations than anything else.
This is of course unrelated to Tailwind, and the words I used there were pretty uncharitable - it's open source after all - but I have yet to see a good example of it clicking; the way I've seen it used, in my admittedly small sample of real use, doesn't seem good at all.
Good luck finding anybody with a "very positive" view of C++ other than maybe Bjarne.
People that do real work with C++ will give you an honest assessment of its strengths and weaknesses. There are good reasons it's still so widely used and it's still the right tool for some jobs. Using it for something it's not well suited for is a mistake, of course.
Of course it's also a tool with a lot of legacy cruft accumulated over decades so it's not a great analogy for Tailwind.
Slightly less typing (compared to CSS), pre-defined memorable names, good documentation and a library of examples.
It's just convenient, that's it. Ideologically I don't like it; for getting work done, it's nice.
Theoretically you could achieve much of it in a "more correct" way, but we don't have the resources to do that in our company, and ultimately it wouldn't be much of a value-add. Tailwind saves time.
After years of wrestling with what the “right” UI framework was and marrying your project to the syntax and rules, Tailwind has entirely removed that concern from my life.
But honest question, wouldn't those years of wrestling with the right UI framework be better spent learning the underlying CSS rules and adopting a simple pre-processor that gives you programmatic generation for repetitive bits?
Because in the end, tailwind is better than bootstrap, but it's still the wrong way and incentivises wrong patterns.
And by making it "standard", it makes everybody new to the field start with it and use it, forgoing learning CSS. And it will be supplanted at some point, and then there'll be the dance again. And again. It's just this that I don't understand.
At the point you're customising Tailwind and its classes, you run into the same problems as with CSS without clear guidelines/organisation. So the only way to not run into that is to use it as inline styles, but with classes...
I’d used plain CSS for years following the Zen of CSS approach which I was very fond of and I was a big fan of SASS for simplifying it too. Dealing with browser quirks got old though.
The issue is that most CSS is approached from a perspective that you’re going to reuse a specific part over and over in your HTML as a single tag.
Tailwind realizes that this happens, but simply argues that in most code it happens in templated loops instead of all over your source code. So instead it focuses on reusable parts that cover all of the browser quirks and can be easily combined.
It’s such a boost to productivity and practicality that I can’t believe it hasn’t been around forever.
I think this is an important point, good standards evolve by adopting real world proven practices. If everyone only used 'the standard', then it would never improve.
Have you tried it? I don't know what you mean by "sign of dementia", but I find the approach taken by tailwind makes me a lot more successful in applying designs. Doesn't feel like dementia to me.
What it does is force you, like other frameworks, to learn all its intricacies and design decisions, and to use a heavy and complex development environment (which, besides reading files, doesn't have anything else to go on?), so that you don't need to learn the underlying language.
Then it nudges you to write a soup of classes (that you need to learn - and you need to learn the config, because it seems by default it has -400 but not -500, or whatever - and to learn their priorities), that you need to keep changing and copying and pasting (yes, you can write classes that coalesce your styling... but that's kinda the point of CSS and/or any other pre-processor), and to forget about semantically marking your HTML. (I'm not saying you have to write it badly; it's just that writing it poorly seems much easier - and the same is true of CSS.)
I just don't see the point - with the exception of the pre-processing part, of course. CSS can use a little help in some places to generate some things programmatically, but I think those are better served by a pre-processor and not a framework, as the framework tends to guide the overall design of everything else.
> What it does is force you, as other frameworks, to learn all its intricacies, design decisions, use an heavy and complex development environment (that besides that reads files it doesn't have anything to go about reading?) so that you don't need to learn the underlying language.
The Tailwind dev environment is literally one Javascript file with a few settings. Not that complex. Tailwind is worthless if you don't know CSS, so I'm not sure that second point works either.
> Then it nudges you to write a soup of classes (that you need to learn...
Is that bad? Not using Tailwind forces you to have a separate stylesheet that hides tag to property relationships, among other hidden abstractions. The class names barely need to be learned. With an IDE, you get class name completion, and there are only a few properties that have unexpected names.
> yes you can write classes that coalesce your styling... but that's kinda the point of CSS and/or any other pre-processor
What do you mean "the point of CSS"? Read the spec. It says nothing about the number of properties a class should contain.
> and forget about semantically marking your html.
Tailwind's reset styles mean you can use whatever "semantic" HTML5 elements you want. If you mean classnames should be higher level abstractions than CSS properties, well, that's a convention that developed during the CSS Zen Garden days. Before that we used HTML attributes for styling. Conventions come and go.
> Is that bad? Not using Tailwind forces you to have a separate stylesheet that hides tag to property relationships, among other hidden abstractions. The class names barely need to be learned. With an IDE, you get class name completion, and there are only a few properties that have unexpected names.
Well, the purpose is actually to have separate stylesheets, so that you can name them in a relevant manner - I don't know, like menus.[s]css, panels.[s]css - so you know where things are, and so that you can use a class to define repeating elements across a codebase. There are no abstractions in CSS that aren't present or amplified in Tailwind - there are pre-processing utilities that are useful in Tailwind, but they're present in any pre-processor without the remaining stuff. And in fact the logical way of using CSS is by defining classes and then defining modifications to those classes, when needed, when they're part of a hierarchy of classes that requires it. They're one grep away from being discovered in all CSS files.
> What do you mean "the point of CSS"? Read the spec. It says nothing about the number of properties a class should contain.
Uhh. Yeah, I might one day, but I don't get what you're saying.
> If you mean classnames should be higher level abstractions than CSS properties,
A classname is a selector token, that you can place in CSS hierarchies and define a set of rules, that affect the elements using that classname. It's obviously a higher abstraction than a style rule.
I wasn't talking about semantic HTML5; I'm talking about semantic markup for readers of the code. If I see "t-4 h-3 w-2 mongo-xyz bg-pearl-800 flex flex-col m-4" I can understand it, perhaps, after reading all of those properties in Tailwind and how they all interact. But I'll need to understand that it uses relative sizes (like, why...), that m-px is 1px, that m-4 is 1rem, when what I want is fixed sizes 99% of the time. And that someone might have disabled some of the size generation. Then, if someone asks me to change the styling, I have to go through the whole codebase searching for elements that are styled like that, because I have no way of identifying them, and I have to change all their classes to the new style. Obviously, that's much harder than having it placed in a single file. Inline classes are somehow better than inline styles (although you can't know exactly what they're affecting), and there's a place for inline styles, but 99% of the time it's bad.
We're trying to find the balance between semantic classes and utilities now with @apply. I've always found layout to be easier with utility classes (regardless of Tailwind, this counts for Bootstrap 4 too).
I like "atom"-like components better with semantic classes. In BEM/ITCSS I'd make a component for -everything-. But in Tailwind we'd only make one for common "atoms" such as buttons, tags, inputs, etc.
Code example: write ".t-button" and then use @apply to put the utility classes there, keeping the design constrained to a set of tokens.
So yeah, I agree with pre-processing utilities - but this is just the same problem as with CSS and I think the apply idea is actually pretty neat...
But... I need to know all those classes to know why the final rules are put in place; I need to know that they don't have conflicting properties, or, if they do, their priorities. And when you add your own custom @apply's in there, it can break just the same. And when you customise your Tailwind classes you end up with the same problem. So when you're trying to figure out why rows in a table with the same number of headers and the same number of td's have different sizes because some no-breaking-space class was applied, and why your a.btn isn't getting the same styling as your button.btn, etc., I always feel like I've wasted more time than I should (perhaps because I know CSS and have used it extensively).
I might have been a bit harsh in saying "dementia", as it's disrespectful to those who've put the work into writing and releasing it. It's better than all previous frameworks - I just personally think that it's a problem searching for a solution, as you can't really win.
It also eliminates the possibility of using a regular preprocessor.
Using the Tailwind-defined values for font sizing, spacing and colors in SCSS is trivial.
This, combined with ITCSS and BEM becomes a huge time saver, especially if your browser-base allows for the use of CSS Grid.
Naming components is the only overhead, but with clear naming and file structure it's not really an issue.
So far, writing SCSS, and much less of it (because of ITCSS), is a benefit—compared to learning a proprietary framework aiming to replace CSS at a similar enough abstraction level.
Most people don't actually do design systems, because most organizations aren't set up to make/reward systematic design approaches.
Like any system that doesn't plan/support for its own maintenance, this means that most systemic/semantic approaches break down. The larger the team/projects, the greater the entropy inputs.
Solution: don't do it. Drop your styles (or some mediated subset) directly into a component. This also "solves" the overhead of separating style from content and other ways in which CSS suffers from "everything happens somewhere else"-itis.
Personally, I think tailwind is the wrong solution to these problems and we'll see posts about how this turned out to be a local maximum at best in about 2-4 years (or, hey, recently by the author of this post), but people climb to local maxima for a reason.
This is the thing: not all projects are super-giant web applications maintained by hundreds of front-end developers. For a dummy like me, who curses at the laptop every time he needs to center a div and whose style sense is pretty limited, tools like Tailwind are a blessing. I make a small library of components, copy what people who know more than me do (taking examples from here: https://tailwindcomponents.com/), adapt them to my needs, and that's it. I have neither the skills, nor the time, nor the inclination to spend time on other alternatives. I like the style (more than Bootstrap) and I am willing to pay the price of cluttered HTML.
I personally wouldn't trust a person who can center a div on the first try - something isn't right with them!
What makes me concerned about Tailwind is that it's yet another API to remember. Why would I pick it over Material UI with some theming, or Chakra UI? Why did you pick Tailwind over Material or Chakra?
Having worked with Tailwind, Bootstrap, and mUI, Tailwind is by far my favorite. With Bootstrap and mUI you have to work very hard to go against the grain of the design they ship with, while Tailwind forces you to make your own designs, to an extent.
Sites can _feel like_ bootstrap or mui sites, but rarely have I seen something and immediately thought “this is a Tailwind site.”
For one, Tailwind has a minimal learning curve if you already know CSS. That's a huge thing going for it.
Secondly, UI frameworks are more suitable for full-blown applications, whereas Tailwind can be used for anything—applications, Wordpress themes, static sites, etc.
Material-UI React specifically has terrible performance. It's been a known issue for a while and it's still causing problems on the latest version. One of my projects has a seemingly random 400~500ms render time on any component that uses even a single Material-UI component, which I believe to be an issue with their styling engine. It's incredibly frustrating.
Since Tailwind is literally just CSS classes, at the very least I know I won't ever run into this.
The problem with what you're saying is that it's meaningless without concrete examples?
What is "over-abstracted code"? Perhaps we agree on this point, but I don't know, because when people talk about these things no one has any idea what they mean - we use "over-testing", "under-testing", "over-engineered", "sloppy", "too much abstraction", "simplicity", "complexity", and everyone has a different take on what these words mean, but everyone agrees they're bad or good. It's a bit nonsensical.
What is the problem of jumping through files? Do you feel the same about jumping through functions? Is it better to have a soup that you can only test by setting up a new solar system? Is that why testing is considered "expensive"? And what is the solution when products need to be evolved, redesigned or pivoted? Re-write? Re-hire a team? Do the same crap again but with new tools? No tests? Or you simply don't like jumping?
Is it worth it to have bad APIs and bad products that cost millions of hours in workarounds for all the developers (and non-developers as well) who have to use them, because someone couldn't be bothered? Shouldn't the premium on developer salaries conjure some real interest in doing things right? Is it normal that governmental institutions, public companies, banks, etc. have awful interfaces, buggy behaviour, and so on?
Isn't it normal that if you're commanding a very comfortable salary there should be an expectation of continuous improvement, research and learning for the work you choose to do?
Perhaps I agree with what you're saying, or perhaps I completely oppose it, but I can't tell.
This is why I don’t share my GitHub profile during the hiring process. Code is just too easy to criticize.
Write a 1 liner? Too terse and hard to understand.
Use multiple lines? Too verbose.
I don’t want to give the people who are going to be deciding my future a target to pick on. It only takes one guy to feel insecure about hiring me to say something like “he over abstracts his code”.
The only things I share are the finished products, it keeps the nitpickers quiet.
I will respond to the other comment in a second, but the things you are concerned with are not what I am advocating. The nitpicky stuff doesn't expose one's mental model; it exposes some modicum of taste.
What I’m advocating is to probe how people structure the problem. There is a noticeable percentage of people that manage to have technical competence but map problems in a complicated way.
I agree with the no-silver-bullet thing - and, as I wrote in another reply, I don't even know if I agree with the example in the article.
> The fact that a map() function tells you that you're converting elements of lists does not save you from understanding what is that conversion doing and why.
It can, actually. Say you have a query that comes in; this calls a function that fetches records from the database. It's not a basic query: it has joins, perhaps a subquery, etc.
Then you have another function that transforms the results into whatever presentational format, decorates those results, whatever, and it's also more than a couple of basic lines of logic.
And now a bug report comes in that not all expected results are being shown.
If you have
func does_query -> loop transforms
You have 3 possibilities: the problem is in the storage layer, in the query, or in the loop.
You read the query; because the bug is subtle, it seems OK, so you move on to the loop. It's a bit complex but seems correct too. Now you start debugging what's happening.
If you have
func does_query -> func maps_results
You know it's either the underlying storage or the query. Since broken storage is less plausible, you know it must be the query. In the end it's a sync problem with something else, and everything is right, but now you only spent time reproducing the query and making sure it works as expected.
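A toy Python sketch of the second shape (all names here are hypothetical stand-ins): once the transform is a pure per-element map, a missing row can only come from the fetch side.

```python
def fetch_recent(records, cutoff):
    # Stand-in for the database query: all filtering logic lives here.
    return [r for r in records if r["year"] >= cutoff]

def present(record):
    # Pure per-record transformation: no filtering, no side effects.
    return f"{record['name']} ({record['year']})"

def report(records, cutoff):
    # map() returns exactly one output per fetched record, so if a row
    # is missing from the report, the query is the only suspect.
    return list(map(present, fetch_recent(records, cutoff)))
```

The guarantee comes from map's contract, not from the helper names.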
I'm not against using for loops when what you need is an actual loop.
The thing is, most of the time, for loops were previously actually doing something for which there are concepts that express exactly what was being done - though not in all languages.
For instance, map - I know that it will return a new collection with exactly the same number of items as the iterable being iterated. When used correctly it shouldn't produce any side-effects outside the mapping of each element.
In some languages now you have for x in y, which in my opinion is quite OK as well, but to change the collection it still has to mutate it, and it's not immediately obvious what it will do.
If I see a reduce, I know it will again iterate a definite number of times, and that it will return something other than the original iterable (usually), reducing a given collection into something else.
On the other hand forEach should tell me that we're only interested in side-effects.
When these things are used with their semantic context in mind, it becomes slightly easier to grasp immediately what is the scope of what they're doing.
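A quick Python sketch of those contracts (Python spells forEach as a plain loop, so that stands in here):

```python
from functools import reduce

nums = [3, 1, 4]

# map: same number of outputs as inputs, original list untouched.
doubled = list(map(lambda x: 2 * x, nums))   # [6, 2, 8]

# reduce: folds the collection into something else (here, a sum).
total = reduce(lambda acc, x: acc + x, nums, 0)   # 8

# Side-effect-only iteration, forEach-style.
for x in nums:
    print(x)
```

Each construct narrows what the reader has to consider: the map can't change the length, the reduce can't return the same list, and the bare loop signals side effects.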
On the other hand, with a for loop (especially the common, old-school one) you really never know.
I also don't understand what is complex about the functional counterparts.
for (initialise_var; condition; post/pre action) can only be simpler in my mind due to familiarity, as it can have a lot of small nuances that impact how the iteration goes - although, to be honest, most of the time it isn't complex either - but it does seem slightly more complex, with less contextual information about the intent behind the code.
For me, code with reduce is less readable than a loop. With loop everything is obvious, but with reduce you need to know what arguments in a callback mean (I don't remember), and then think how the data are transformed. It's an awful choice in my opinion. Good old loop is so much better.
I disagree entirely. In most imperative programming languages, you can shove any sort of logic inside a loop - more loops, more branches, creating new objects - it's all fair game.
Fold and map in functional languages are often much more restrictive in a sense. For example, with lists, you fold a collection down to a single value ([a] -> b), or produce another collection with a map ([a] -> [b]). So map and fold etc. are much more restrictive. That's what makes them clearer.
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
I mean in this case the name kinda makes it obvious anyway :)
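For contrast, the same accumulation written as a fold with Python's functools.reduce, which makes the accumulate-over-a-range intent explicit:

```python
from functools import reduce
from operator import mul

def factorial(n):
    # Fold multiplication over 2..n, starting from 1; an empty range
    # (n < 2) just returns the initial value 1.
    return reduce(mul, range(2, n + 1), 1)
```

Whether this reads better than the loop is exactly the familiarity question being debated here.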
If the operation is conceptually accumulating something over the whole collection and if it's idiomatic in the language I'm using - I will use reduce. Same with map-y and filter-y operations.
But if I have to do some mental gymnastics to make the operation fit reduce - for loop it is. Or generator expression in case of python.
It can definitely happen, but I think more often than not the others are more readable.
To be honest, this seems to be a familiarity thing.
> but with reduce you need to know what arguments in a callback mean
If I didn't know for, it would be mind-boggling what those 3 things, separated by semicolons, are doing. It doesn't look like anything else in the usual language(s) it's implemented in. It's the same with switch.
The only thing for and switch have going for them is that languages that offer them and aren't FP usually use the same C form across the board, whereas reduce's arguments and the callback's arguments vary a bit more between languages, and especially between mutable and immutable langs.
I still prefer the functional-specific counterparts most of the time.
> When used correctly it shouldn't produce any side-effects outside the mapping of each element.
But that's just a social convention. There's nothing stopping you from doing other things during your map or reduce.
In practice, the only difference between Map, Reduce and a For loop is that the first two return things. So depending on whether you want to end up with an array containing one item for each pass through the loop, "something else", or nothing, you'll use Map, Reduce or forEach.
You can still increment your global counters, launch the missiles or cause any side effects you like. "using it correctly" and not doing that is just a convention that you happen to prefer.
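A small Python illustration of that point: nothing stops a map callback from mutating outside state, so "using it correctly" really is just convention.

```python
seen = []

def square(x):
    seen.append(x)   # a side effect the map() contract can't prevent
    return x * x

squares = list(map(square, [1, 2, 3]))
# squares == [1, 4, 9], but seen was mutated too: [1, 2, 3]
```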
That is true (less so in FP languages, though), but the for loop doesn't prevent side effects either. Indeed, I do prefer the functional constructs most of the time: I think it's a reasonable expectation to use the most intention-revealing construct when possible, and it's also easier to spot "code smells" when using those.
The exceptions I make are when there are significant speed concerns/gains, when what you're doing is an actual loop, or when using a loop improves readability.
(and I haven't read the article so not even sure I agree with the example there, this was more in general terms)
congruence_classes m l = map (\x -> filter ((x ==) . (`mod` m)) l) [0..m-1]
than
def congruence_classes(m, l):
    sets = []
    for i in range(m):
        sets += [[]]
    for v in l:
        sets[v % m] += [v]
    return sets
For-in is very neat and nice, but it still takes two loops and mutation to get there. Simple things are sometimes better as one-line maps. Provability is higher with functional maps too.
Same one-liner in (slightly uglier) Python:
def congruent_sets(m, l):
    return list(map(lambda x: list(filter(lambda v: v % m == x, l)), range(m)))
The one-liner is far less readable, and under the hood it is actually worse: for each value in [0, m) you're iterating l and filtering it, so it's O(n^2) code now instead of O(n). That mistake would be far easier to notice if you had written the exact same algorithm with loops: one would see a loop inside a loop, and the O(n^2) alarms should be ringing already.
Ironically, it's a great example of why readability is so much more important than conciseness and one liners.
I agree, and despite being a fan of FP (kind of a convert from OO), I often wonder about the readability of FP code.
One idea I have is that FP code is often not modularized and violates the single-responsibility principle by doing several things in one line.
There are seldom named subfunctions whose name describes the purpose of the function - take lambdas as an example: I have to parse the lambda code to learn what it does.
Even simple filtering might be improved (kinda C#):
var e = l.Filter(e => e.StartsWith("Comment"));
vs.
var e = l.Filter(ElementIsAComment);
or even using an extension method:
var e = l.FindComments();
Sorry, I could not come up with a better example - I hope you get my point...
True, it is computationally worse, though it's O(nm), so applying m at compile time, to form the practical use as I used it, will turn it into O(n) in practice.
But that much is immediately obvious since it's mapping a filter, that is, has a loop within a loop.
I did think the second one also takes quadratic time, though. I forgot that in Python getting list elements by index is O(1) instead of O(n), which is what I'm personally used to with lists.
It's also true that you can replace the filter with
[ v | v <- l, v `mod` m == x ]
but that's not as much fun as
(x ==) . (`mod` m)
I just love how it looks, and it doesn't seem any less clear to me personally - maybe a bit more verbose.
> Have you considered that maybe this is a sign you're too deep into using impractical programming languages?
“Languages that use ‘list’ for linked lists and have different names for other integer-indexable ordered collections” aren’t necessarily “impractical”.
> True, it is computationally worse, though it's O(nm) so applying m at compile time to form a practical use as I used it will turn it into to O(n) in practice.
Even applying it at compile time, it's still O(nm): you have to compute 'v mod m' for each value of v, once for each of the m residue classes.
> But that much is immediately obvious since it's mapping a filter, that is, has a loop within a loop.
It's not immediately obvious, because you have to parse the calls and see exactly where the filter and the map are.
retval = []
for x in lst:
    if not filter_things(x):
        continue
    retval.append(do_some_things(x))
and
retval = []
for x in lst:
    filtered = []
    for y in another_list:
        if filter_things(x, y):
            filtered.append(y)
    retval.append(do_some_things(x, filtered))
In the first case, you have to parse the parentheses and arguments to see where exactly the map and filter calls are. In the second, you see a for with a second level of indentation.
> I just love how it looks and it doesn't personally seem any less clear to me, maybe a bit more verbose.
It doesn't seem any less clear to you because you're used to it. But think about the things you need to know, apart from what loops, maps, filters and lambdas are:
- What is (x ==)? Is it a function that returns whether the argument is equal to x?
- What is '.'? Function composition? Filter?
- Same with `mod` m - what are the backticks for?
Compare that with the amount of things you need to know with the Python code with for loops. For that complexity to pay off you need some benefits, and in this case you're only getting disadvantages.
That's the whole point of this discussion. Production code needs to work, have enough performance for its purpose and be maintainable, those are the metrics that matter. Being smart, beautiful or concise are completely secondary, and focusing on them will make for worse code, and it's exactly what happened in this toy example.
Yes, but that's the case with all the functional approaches proposed.
If Python were focused on functional programming it would have a utility function for this similar to itertools.groupby (but with indices in an array instead of keys in a dictionary).
> If Python were focused on functional programming it would have a utility function for this similar to itertools.groupby (but with indices in an array instead of keys in a dictionary).
itertools.groupby doesn’t return a dictionary, it returns an iterator of (key, (iterator that produces values)) tuples. It sounds, though, like you want something like:
from itertools import groupby

def categorize_into_list(source, _range, key):
    first = lambda x: x[0]
    sublist_dict = {
        k: list(v[1] for v in vs)
        for k, vs in groupby(sorted(((key(v), v) for v in source), key=first), first)
    }
    return [sublist_dict.get(i, []) for i in _range]
Then you could do this with something like:
def congruent_sets(m, l):
    return categorize_into_list(l, m, lambda v: v % m)