Excuse my ignorance in this space but I want to check my understanding:
About a year ago I toyed with writing a web app that was essentially a front-end to a diffusion image generator (very original I know). The site used socket-io -> flask -> redis queue -> distributed pytorch processes.
Am I correct that several of these services are selling some equivalent of the '-> redis queue -> model' component? Is part of the value proposition here that you don't need to staff people with pytorch/equivalent familiarity?
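For anyone unfamiliar with the pattern: the '-> redis queue -> model' hop is just producer/consumer. Here's a minimal stdlib-only sketch of the shape of it, using `queue.Queue` as a stand-in for the Redis list and a placeholder `run_model` instead of the real distributed PyTorch call (all names hypothetical):

```python
import queue
import threading

jobs = queue.Queue()   # stands in for the Redis list (RPUSH / BLPOP)
results = {}

def run_model(prompt):
    # placeholder for the distributed PyTorch inference call
    return f"image for {prompt!r}"

def worker():
    # blocks on the queue the way a Redis worker blocks on BLPOP
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut the worker down
            break
        results[job["id"]] = run_model(job["prompt"])

t = threading.Thread(target=worker)
t.start()
jobs.put({"id": 1, "prompt": "a cat"})  # what the flask handler would enqueue
jobs.put(None)
t.join()
```

The hosted services presumably sell the managed version of exactly this hop: you enqueue a request, their fleet of GPU workers dequeues and runs it.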
I tooled around with a similar idea some time back. There are clear advantages of code over graphical schematics when it comes to automatic generation of component values / re-use of elements / speed of development / automatic SPICE testing / etc.
The primary issue I ran into was this: electronic circuits are inherently graph-structured, and the traditional circuit schematic is well suited, optimal even, for displaying that kind of information. Trying to understand an analog circuit described as code feels awkward.
Yeah we did run into a similar issue. Someone designed a power supply and it wasn't immediately obvious how the elements of the circuit were hooked up in ato.
I think a viewer would be nice ultimately. But we haven't exactly figured out what the solution might look like. Ideally something that allows you to create datasheet-like snapshots of part of your design?
A nebulous description of my ideal setup would be something like this:
Side-by-side schematic symbol view / code view that are actively synced to one another in real time.
Schematic view allows basic arranging of parts, editing interconnects, triggering jump-to-reference within the code view, adding probe points for SPICE, displaying SPICE output graphs.
Code side does all the heavy lifting like creating new parts, scripted behaviors, editing component values, all the cool shit that would be a nightmare to sort into a GUI.
Yeah the side by side thing makes sense. Especially for very low level or analog designs. But in some cases it wouldn't be desirable to show the whole circuit. Say you are dealing with a module that has been characterized and that you know works. In this case, using a language server or a linter that shows you available interfaces might be easier to use.
For the SPICE graphs, having a Jupyter notebook-like interface would be great for documenting why your design looks the way it does.
If you have specific ideas or drawings of what this might look like, please send them over in our Discord server :)
Using neural networks to solve inverse-scattering problems (like Wi-Fi scattering off a human body, for example) seems to have a lot of potential. The lack of phase information (i.e. not just signal intensity but the instantaneous phase of the EM wave) captured by traditional receivers is what makes this class of problems so difficult to approach, since you are blind to a significant portion of the available EM information. Mitigating this by constraining your solution space to 'reasonable' outcomes is practically very difficult... for a human. Very cool to see such a practical demonstration of a neural network seeming to accomplish exactly this.
> Imagine that someone wants to illegally track the position of a person inside a laboratory, for instance to measure how much time is spent doing different activities at different desks, as depicted in the upper picture. How effective can this attack be? ... With CSI-MURDER, the localization becomes impossible because results will seem random, thus preserving the person privacy without destroying Wi-Fi communications.
There's a noticeable difference between self-checkout stations, for example:
The stations at my local Harris Teeter are unable to handle the user quickly scanning+placing products in the bagging area. I imagine this has to do with how the software translates changes in the bagging-area weight to a count of items. When this happens you have to wait around for the system to decide you aren't, in fact, stealing.
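My guess at the kind of check involved (hypothetical names and tolerance; the real firmware is surely more involved): if two items land on the scale before it settles, a per-item weight check sees one jump that's too big and flags it.

```python
def weight_matches(expected_grams, measured_grams, tolerance_grams=15.0):
    """Does the scale reading agree with the scanned item's catalog weight?"""
    return abs(measured_grams - expected_grams) <= tolerance_grams

# Scan two items quickly: the scale settles only once, so the check for the
# first item sees both items' weight at the same time and flags a mismatch.
catalog = {"milk": 1030.0, "bread": 450.0}
expected = catalog["milk"]                     # station expects one item placed
measured = catalog["milk"] + catalog["bread"]  # both actually landed at once
flagged = not weight_matches(expected, measured)
```

A station that instead waited briefly and matched the total against all recently scanned items wouldn't trip on fast scanners, which might be what the better stations do.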
These same stations also prevent further item scanning once an age-restricted item (e.g. alcohol) is scanned, forcing you to wait around for the attendant before you can continue with your items.
Contrast that with the self-checkout stations at my local Food Lion, which have none of these issues; the difference in UX is frustrating.
To top off with some non-content: You'd really think more people would have learned how to reliably scan barcodes after observing others do it hundreds, if not thousands, of times.
Wrote a letter to my local Mazda place about this. I've had nothing but good experiences with the several Mazda vehicles I've owned (still daily-drive the rotary-engine RX-8...). Seeing them engage in what appears to be DMCA bullying of a project like Home Assistant feels like a betrayal.
A daily-driver RX-8 that still drives? How? What's your mileage and oil consumption like?
As quirky as the RX-8 was, I liked the car but not the engine.
Many days late on this reply but I am at 120K miles (2008 model). The engine compression is noticeably low now. Before warming up and at low RPM the car really struggles to make it up the hill out of my neighborhood. I've had 3 flooding events that were only fixed by rolling the car down that same hill 3-4 times. I have to slip the clutch a concerning amount when getting into first from a stop.
A new engine at the dealership price of $7K is still going to be cheaper than an equivalently sporty new car though!
How is paying out-of-pocket for an insulated water-bottle / better shoe insoles any different than paying a fee for a union to collectively negotiate these things for you?
In both cases you, the worker, exchange money for improved work conditions. In the second case you are simply paying comparatively more money for larger returns.
Because it takes away people's freedom to choose how to spend their own money. Plus it obviously adds another layer that siphons off your money, too.
You get hot, so you want a ventilated truck. My feet hurt, I want a rubberized floor. My friend can't park so he wants 8 cameras on the truck. Mary has a bad back so she wants a hydraulic seat shock absorber. Eric doesn't eat at home in the morning so he wants more breaks so he can keep his energy up.
And on and on. When we could have all just kept more of our money, had extra money come to us instead of 6 different upgrades to the truck, and so on.
Right there with you. I've gotten so used to having it give me exactly the answer to my specific question that, when I must fall back to traditional search, it's noticeably unpleasant.
I always wonder how you do the risk calculus for this kind of scenario.
Clearly if the certainty of a world-ending impact is 100% then committing the entirety of humanity's resources to deflecting it would be justified and expected.
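One naive way to frame that calculus is expected value: spend up to the probability-weighted loss you'd avert. A toy sketch (made-up numbers, and it ignores risk aversion, discounting, and everything else that makes the real question hard):

```python
def justified_spend(p_impact, value_at_risk):
    # naive expected-value bound: spending more than the expected loss
    # averted is irrational; spending up to it can be justified
    return p_impact * value_at_risk

# At 100% certainty the bound is everything at risk...
all_in = justified_spend(1.0, 100.0)
# ...while at even odds the same stakes justify only half as much.
long_shot = justified_spend(0.5, 100.0)
```

The hard part in practice is that both the probability and the "value of everything" are deeply uncertain, which is exactly where the calculus stops being clean.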
The word 'unexpected' doesn't appear in the article. It's a fairly safe bet that a team capable of hitting a 581-foot object 6 million miles away with a 13,000 mph slug would have expected the ejection of material from the impact.
You're right, I interpreted "unintended" as "unexpected", because I assumed that if you know this before you do it, then you know what to expect, so it must also be intended.
Certainly, but to the person who wrote the sub-heading (I suspect an editor rather than the article's author), it presents a "previously unanticipated risk." The further you get from the source...