Tail call optimization handles the situations where the recursion doesn't actually need any stack space, but I think the parent poster is asking about situations that aren't tail recursive.
It's indeed very mechanical and some programming languages can do it for you.
I think you're mainly asking for heap-allocated stacks. Some languages always use the heap for stack frames instead of the native stack and can set the stack limit as high as there's memory available.
You might also want to look into stackful coroutines, which allow one to pause the execution of a recursive function and switch to another function. This can provide you with multiple call stacks, which is another reason people sometimes choose to use write explicit stacks.
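To make the "explicit stack" idea concrete, here's a minimal sketch (hypothetical tree shape, Python for illustration) of rewriting a recursive traversal with a heap-allocated list as the stack, so depth is limited by available memory rather than the native call stack:

```python
def tree_sum_recursive(node):
    # node is (value, left, right) or None; deep trees can overflow
    # the native call stack here
    if node is None:
        return 0
    value, left, right = node
    return value + tree_sum_recursive(left) + tree_sum_recursive(right)

def tree_sum_explicit(root):
    # same traversal, but a plain list on the heap acts as the call stack
    total = 0
    stack = [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        value, left, right = node
        total += value
        stack.append(left)
        stack.append(right)
    return total
```

The explicit version happily walks a degenerate million-node chain that would crash the recursive one with the default recursion limit.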
It depends on the beans and their freshness. If soaked and not 2yo+, it's less than 1 hour for most of them. 30 min is enough for azuki and chickpeas if soaked 48h.
There are other tricks: various beans can be found in the form of instant powder or flakes (1 min watering, no cooking), semolina (5 min watering, no cooking), and pre-steamed (no watering, 10/20 min cooking). I bring those when hiking in the mountains and use gas only to heat them up. Mixed with cereal semolina, spices, herbs and oil/nuts, it's the perfect summit meal.
I cook more with feeling than recipe, and as I hike for multiple days I try to vary the meals to avoid getting bored. My typical bag includes multiple zip bags with ingredients and I pick a few to make a meal:
- semolina of wheat, whole wheat, rye, lentils and chickpeas
- flakes: instant mashed potatoes / adzuki beans. Instant quinoa is packed with protein but lacks carbs.
- sesame seeds, sunflower seeds, pumpkin seeds
- dried seasonings: algae, yeast, za'atar or thyme. Curry powder or other spice mix.
One of my favorite mix is 1/3 lentil semolina, 2/3 wheat semolina, sesame seeds and yeast. Mix together, add water and cover for a few minutes.
Edit: last year I used a food dehydrator to pack some sauces and cooked vegetables. Works great for the ones in thin slices.
Even without a pressure cooker, you can cook beans faster in a microwave oven.
However, you still need more than half an hour if you want the beans to be soft, e.g. 45 minutes (after having soaked the beans for half a day).
I cook all my food in a microwave oven. Except for beans, I have never encountered any vegetable that would need more than 15 minutes. For lentils or chickpeas, around 12 minutes is normally sufficient.
Students are also people. If we're managing a software project, a single deadline at the end is sure to suffer from delays. It's better to split things into shorter deliverables with more frequent feedback.
Few students do optional assignments, unfortunately. Other tasks that are directly worth a grade tend to take priority (e.g. studying for another class that has an exam this week).
1. Class attendance is frequently optional, but students still attend.
2. I had a prof who didn't require homework to be done. He would give out "practice fun" and would gladly sit down and give feedback and 1:1 time to those who completed it, or tried. He also pointed out that students who didn't do "practice fun" rarely passed the exams. Most people did the work.
It leads me to believe - from my own experience too - that students generally aren't stupid, and will gladly do the work if there is a point. Plenty of homework is pure busywork though, even at the college level.
Students are very grade-motivated and unfortunately they rarely do the homework assignments if they are not worth points.
At-home coding projects, writing essays, etc. also exercise different skills than you can test for in a 2-hour written exam. It's unfortunate that due to rampant AI cheating, we can no longer reward the students who put in the work and develop these skills.
If you're gonna brown some onions, microwave them for a bit before tossing them into the pan. The first step of browning onions is just boiling away the water, which microwaves are great at. You may find that it begins to brown sooner this way.
If your language is typed it's good to know at least a bit, so you can do the type inference properly; there are many ways to shoot yourself in the foot when it's ad-hoc.
Bidirectional type inference is a type inference style where you traverse the syntax tree once. Sometimes the type info flows top to bottom and sometimes it flows bottom up. If your type inference algorithm works by traversing the syntax tree, I suggest reading more about bidirectional type inference to get a better idea of how to best choreograph when the type info goes up and when it goes down.
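As a concrete illustration of the two directions, here's a toy bidirectional checker for a tiny expression language (the AST tuples and type names are my own hypothetical choices, not from any real implementation): `infer` synthesizes a type bottom-up, while `check` pushes an expected type top-down, e.g. into a lambda body.

```python
def infer(env, expr):
    # bottom-up direction: compute a type from the expression itself
    kind = expr[0]
    if kind == "int":                      # ("int", 42)
        return "Int"
    if kind == "var":                      # ("var", "x")
        return env[expr[1]]
    if kind == "ann":                      # ("ann", e, ty): user annotation
        check(env, expr[1], expr[2])
        return expr[2]
    if kind == "app":                      # ("app", f, arg)
        arg_ty, ret_ty = infer(env, expr[1])   # function type flows up...
        check(env, expr[2], arg_ty)            # ...argument type flows down
        return ret_ty
    raise TypeError(f"cannot infer type of {kind}")

def check(env, expr, expected):
    # top-down direction: verify the expression against an expected type
    if expr[0] == "lam":                   # ("lam", "x", body)
        arg_ty, ret_ty = expected          # expected type is pushed inward
        check({**env, expr[1]: arg_ty}, expr[2], ret_ty)
        return
    actual = infer(env, expr)              # fall back to inference + compare
    if actual != expected:
        raise TypeError(f"expected {expected}, got {actual}")
```

Note how a lambda never needs to guess its argument type: it's always checked against a type pushed down from an annotation or a call site, which is exactly the choreography the parent comment describes.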
Hindley-Milner type inference works by solving constraints. First you go through the code and figure out all the type constraints (e.g. a function call f(x) introduces the constraint that x must have the same type as the argument of f). Then you solve the system of equations, as if you'd solve a sudoku puzzle. This sort of "global type inference" can sometimes figure out the types even if you don't have any type annotations at all. The catch is that some type system features introduce constraints that are hard to solve. For example, in object oriented languages the constraints are inequalities (instanceof) instead of equalities. If you plan to go this route it's worth learning how to make the algorithm efficient and which type system features would be difficult to infer.
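The heart of that constraint-solving step is unification. Here's a minimal sketch (my own hypothetical representation: type variables are strings, concrete types are tuples; no occurs check, so it's not production-ready):

```python
def resolve(ty, subst):
    # follow the substitution chain until we hit something unsolved
    while isinstance(ty, str) and ty in subst:
        ty = subst[ty]
    return ty

def unify(a, b, subst):
    # record equality constraints between two types in `subst`
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return
    if isinstance(a, str):                 # a is an unsolved type variable
        subst[a] = b
    elif isinstance(b, str):
        subst[b] = a
    elif a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):     # unify constructor arguments
            unify(x, y, subst)
    else:
        raise TypeError(f"cannot unify {a} with {b}")

# Constraints from a call f(x) where f : t0 -> Int and x : Bool:
subst = {}
unify(("->", "t0", ("Int",)), ("->", ("Bool",), "t1"), subst)
# solving the system yields t0 = Bool and t1 = Int
```

Each constraint either pins a variable down or propagates into sub-terms, which is the "sudoku" flavor: local deductions cascade until everything is solved or a contradiction surfaces.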
> Bidirectional type inference is a type inference style where you traverse the syntax tree once.
Yes, in my language I just build code directly from the syntax tree in a single pass (with a couple of minor exceptions). No complex machinery for type deduction is involved. So now I assume it's called bidirectional type inference.
Personally, I find that Rust's ability to omit type annotations on variables isn't that great. It allows writing code which is hard to read, since no type information is present in the source code. I also suppose that it makes compilation slower, since solving all these equations isn't computationally cheap.
It's unclear from your comment whether you inadvertently have bidirectional type inference or if you just...don't have type inference.
So, just to be clear, bidirectional type inference is a particular kind of machinery for type deduction (complexity is in the eye of the beholder). The defining characteristic of bidirectional type inference is that type information flows in both directions, not that it takes a single pass for type checking over the tree.
And that's, again, a single pass for type checking - the compiler as a whole can and usually does still take many passes to go from syntax tree to code. Pascal was famously designed to be compiled in a single pass, but it doesn't have any type inference to speak of.
Indeed. The point I was trying to make is that an ad-hoc type inference scheme that works by recursively traversing the tree will probably be most similar to unidirectional or bidirectional type inference.
Considering that the primary source is South Korea's spy agency, I think it's worth taking this news with a grain of salt. When North and South Korea release news about each other, it is always hard to tell which parts are factual and which parts are propaganda.
Related to shadow stacks, I've had trouble convincing the C optimizer that no one else is aliasing my heap-allocated helper stacks. Supposedly there ought to be a way to tell it using restrict annotations, but those are quite fiddly: they only work for function parameters and can be dismissed for many reasons. Does anyone know of a compiler that successfully used restrict pointers in its generated code? I'd love to be pointed towards something that works.
Note that declaring no aliasing is probably unsafe for concurrent or moving garbage collectors, as then the C compiler can conveniently "forget" to either store or load values to the shadow stack at some points...
(though it is fine if GC can only happen inside a function call and the call takes the shadow stack as an argument)
Concurrent GCs aren't a mess I've dealt with (mostly single-threaded languages). Moving GCs should be OK if all heap accesses go through the shadow stack in single statements and a pointer to the shadow stack is always passed on to called functions. Thus the compiler shouldn't be allowed to retain anything, though I could be wrong on some slight C standard detail here.