
Found this video about the author/origin: https://www.youtube.com/watch?v=Rofmr7_xc7A


I think you're spot on: it is easier to prove (or prosecutors have more experience prosecuting) financial crimes. To give them credit, they at least tried to get Holmes for defrauding patients too, but she was acquitted of those charges [0]. Her partner, Sunny Balwani, was convicted on all counts, including defrauding patients [1].

[0]: https://en.wikipedia.org/wiki/Elizabeth_Holmes#U.S._v._Holme....

[1]: https://en.wikipedia.org/wiki/Sunny_Balwani#United_States_v....


I'll always admire Clojure. I loved the simplicity and philosophy, and I wrote a few toy projects. Unfortunately, I felt like I could never really take advantage of the power of Clojure or do real work in it because I didn't know or have a history with Java. It always felt like Clojure was for enlightened Java or JS programmers, and I didn't want to learn Java and Clojure at once, so I was stuck in beginner land.


You don't need to learn Java, only be aware of it. Nubank has 100(0?)s of devs who don't know Java at all and successfully use Clojure.

Even if your assumption is true, what's wrong with that? A language which offers more value over an existing base of knowledge is valuable and isn't uncommon.


Past the very baby level you kinda need to know the JVM world a bit: lots of keywords like "classpath", Maven, reflection, etc.

You def don't need to know Java as a whole, but it was actually a little challenging and confusing to catch up on the relevant JVM bits.

Educational material either doesn't use it at all or assumes you already know it.


You're not alone; I use NoScript on Firefox for exactly that. It makes some websites unusable, but for normal browsing, that's what I use. If a website is unusable after allowing a few scripts, then I don't want to be there. It is horrifying to see how much JS some websites try to pull in.


Not exactly what you are looking for, but Open Secrets gets you almost there: https://www.opensecrets.org/federal-lobbying

Their API leaves a lot to be desired, but the data is much easier to access and analyze than the raw data from the US House or Senate websites.


This is an important point. What have the journals done? Raised their prices and carried on with business as usual.

Scientific editors do nothing for data validation. There is no accountability, even after retractions.

Scientific journal editors are glorified gatekeepers for "high impact" (read: flashy) work, and they use free reviewer labor to cover themselves so they can call it 'peer reviewed'. In the rare cases when journals do require supporting data, they explicitly ask for Excel spreadsheets :(


Excel is used as a database/storage/interchange format, especially after the initial analysis by someone who uses Python or R. The bioinformatician does the analysis, then the PI wants to see it so they can Ctrl-F for the genes they are interested in, so out comes an Excel document.

And really, even if you know Python or R, are you really going to fire up a Jupyter notebook, load the data, and run pandas queries every time someone in lab meeting or after a talk asks you about this gene or that gene in your data?
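
For what it's worth, the pandas version of that Ctrl-F is only a few lines, but it is a few lines every single time. A minimal sketch (the file and column names are made up):

    import pandas as pd

    # Hypothetical results table from the analysis step
    df = pd.read_csv("de_results.csv")

    # The Ctrl-F equivalent: pull the rows for the gene someone just asked about
    hits = df[df["gene_symbol"].str.contains("SOX2", case=False, na=False)]
    print(hits[["gene_symbol", "log2_fold_change", "padj"]])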

I think the important question is: why is date conversion the default? Would it really break backwards compatibility for MS Excel users if date conversion were explicit instead of automatic? Turning it off by default would fix a lot of this.
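
In the meantime, one defensive habit is to never hand Excel a raw CSV to parse at all: read everything as text and ship a typed .xlsx instead, so gene symbols like SEPT2 can't be reinterpreted as dates on open. A rough sketch (file names made up; pandas needs openpyxl installed to write .xlsx):

    import pandas as pd

    # dtype=str stops pandas from guessing types; everything stays literal
    df = pd.read_csv("results.csv", dtype=str)

    # Cells in an .xlsx carry their type with them, so Excel displays the
    # string "SEPT2" as-is instead of parsing it as September 2nd
    df.to_excel("results.xlsx", index=False)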


> Excel is used as a database/storage/interchange format, especially after the initial analysis by someone who uses Python or R. The bioinformatician does the analysis

Sometimes, but in reality the situation is worse than that. Excel is also used as the gold-standard database/storage/interchange format of record for random shit that clinical researchers have typed in by hand, whether directly or transcribed from other notes, often when that data isn't actually fundamentally tabular in nature, because people really like working in grids. Even when grids hurt more than help.

A big secret in genetic research is that the MDs, grad students, project managers, and coordinators running the research programs are often not super focused on what well-structured data looks like and don't know what things like "key-value store" or "nested tree-like structure" mean. Even if they did, there aren't good GUI tools for entering such data anyway. It leads to countless errors that maybe (here I speculate) they just assume will wash out as noise.
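
To make the "nested tree-like structure" point concrete, here is an invented fragment of clinical data that simply isn't a grid; flattening it into a spreadsheet forces padding, duplicated rows, or ad-hoc cell conventions:

    # Invented example: one participant, repeated visits, and a
    # variable-length medication list at each visit
    participant = {
        "id": "P-0042",
        "visits": [
            {"date": "2021-03-01",
             "meds": [{"name": "metformin", "dose_mg": 500}],
             "notes": "baseline"},
            {"date": "2021-09-14",
             "meds": [],  # discontinued; a grid needs blanks or sentinels here
             "notes": "follow-up"},
        ],
    }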

> I think the important question is: why is date conversion the default?

Yes, why any kind of conversion is ever the default is the real money question.


For finance and business office workers, it seems to have traction, just like auto-creating an emoji when you type a ':' character. Excel is built for offices, not for the specialized needs of scientists. Bummer.


So maybe we need better software for scientists? Sounds like a hole in the market


The market for scientific software is a bit iffy. Scientific software also needs to be super super flexible since the users are, somewhat by definition, not doing something that's been done before. Hard market.


A good spreadsheet for scientists. That's a lot of work for not much money. I don't know that adapting LibreOffice Calc would do the trick.


I don't work in bioinformatics, but what you are describing exactly matches my experience working in manufacturing quality control. Raw data came in from suppliers in the form of spreadsheets, and management wanted to see results in spreadsheets, meaning all our quality data was subject to these issues. The date formatting issue was a particularly annoying gotcha, especially when features were defined with an XX-XX numeric code. The number of times I had to deal with someone in a meeting saying "hey, why is this feature called October-13?!" Super frustrating.

If I could choose the tools used by the whole process, involving multiple different companies and departments, hey, I would! It would be Python all the way down. But I was but a cog in a massive organization.
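
For anyone who does control the spreadsheet-writing step: forcing the column to Excel's Text format keeps "10-13" from turning into October 13th. A small openpyxl sketch (file and column names made up):

    from openpyxl import Workbook

    wb = Workbook()
    ws = wb.active
    ws["A1"] = "feature_code"
    cell = ws["A2"]
    cell.value = "10-13"       # an XX-XX feature code, not October 13th
    cell.number_format = "@"   # "@" is Excel's Text format, so values
                               # re-typed into this cell also stay text
    wb.save("features.xlsx")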


> A lot of "safety culture" is composed of things like checklists and hazard warnings which are more geared towards shifting the blame for accidents onto somebody else than actually preventing those accidents,

If you stay in spreadsheets, these problems mostly don't occur (that is, once data entry is squared away so that the initial spreadsheet has what you want, it doesn't tend to get lost); it's when you move in and out of spreadsheets via text and take the path of least resistance [0] for the transition that the problems occur.

[0] and to be fair, there is a lot of resistance off that path.


The process I had to deal with was filling out spreadsheets with data from a Python-driven 3D inspection program that exported data files in CSV format. Needless to say, these errors were inevitable for exactly the reasons you've stated. Why didn't we bypass the large, poorly formatted, cumbersome spreadsheets and just export data directly via pandas? All the inspection was done in Python anyway. You tell me! Also, it did not help that the spreadsheets were not created by me or any colleagues in my department.

God, I hated working in old-school engineering/manufacturing. "That's not how we do things" is the answer to everything.


Sorry about the misplaced quote. It was meant to be a quote from the immediate upthread comment. Looking back, it probably wasn't even needed; the response works fine against the comment as a whole.


> And really, even if you know Python or R, are you really going to fire up a Jupyter notebook, load the data, and run pandas queries every time someone in lab meeting or after a talk asks you about this gene or that gene in your data?

I don't do any scientific research, but I have been using Jupyter as a replacement for Excel since it was called the IPython notebook. I don't use pandas all that often; I just find it easier to read and edit data in Python. Though I first learned that IPython had added the notebook from a talk Wes McKinney gave about pandas.


I don't know, I think that is a big jump and definitely not trivial.

"Reading" neural activity is much different than "writing", and modifying the circuits/neural activity precisely enough to modify emotions.

These devices are typically cortical surface-level electrode meshes placed over the motor region of the cortex, while emotions are thought to arise from various deep brain structures. Not saying it won't happen, but we are much, much further from the latter than the former.


I don't know about that. You're right that emotions seem to come from deeper structures, but these structures are also more primitive. We're able to modify emotions with something as simple as amphetamines, so controlling them with a few well-placed electrodes is maybe not so difficult. Seems to me that as brain interface technology starts progressing, we're going to hit an S-curve of technological progress that will make it advance very rapidly in one or two decades.


It's definitely possible, but I guess what I am saying is that this area hasn't really been explored in the context of humans.

In the lab, we use targeted genetic manipulations such as optogenetics [1] or chemogenetics (see DREADDs [2]) to achieve precise circuit manipulations that can (maybe/kinda) change emotional state (see [3] and [4] for manipulation of fear in mice; sorry, they may be paywalled, check sci-hub). But these are impractical in humans at the moment because they require specific genetic backgrounds (a CRISPR-modified mouse expressing a specific artificial DNA sequence in certain types of neurons from birth), viral injections to add other genetic constructs that interact with the from-birth one, and implanted lights or drugs delivered directly to the brain region where the cells are. Precise electrical manipulation is not really done, even in animal labs, because it is not precise or controllable enough for these kinds of experiments.

Again, I have no doubt that we will get there, maybe in a few decades too. But the techniques are much further from human use than the "reading" technology demonstrated here.

[1] https://en.wikipedia.org/wiki/Optogenetics

[2] https://en.wikipedia.org/wiki/Receptor_activated_solely_by_a...

[3] https://pubmed.ncbi.nlm.nih.gov/28288126/

[4] https://www.nature.com/articles/npp2015276/



I totally agree and had a very similar experience in graduate school. Writing about my experiences and the things I had learned (technical and project management) had a huge impact on my ability to demonstrate my knowledge, and it is without a doubt why I quickly received two job offers before defending my PhD (biology/neuroscience). I think papers are a really poor way to demonstrate the enormous amount of work you've done unless you stay in academia (and probably not even then).


This is one thing I messed up during my grad school studies. Now that I have a "real job", getting the ball rolling on blogging about what I'm looking into and learning about is harder (although that is admittedly a convenient excuse).

Thank goodness I have been meticulously keeping track of what I've learned in Org mode for years. I've just gotta dredge that old database for some blog posts (starting with why folks who are similar to me should really consider not going to grad school...).


I work on a well-established, closed-source, trade-secrets-style e-commerce site. I can never seem to think of anything I could write up that would not involve reworking everything to be more general. I also think it would largely boil down to a Stack Overflow link. I am doing more management now, so that might make this problem a little easier to solve for me.


I had one of these blogs years ago, and I found the questions I had to look up were great subjects for blog posts. Many of them wound up being pretty basic: a thread-safe singleton in Java, sorting a list, etc. This isn't a PhD thesis; it doesn't have to be profound. You're just trying to demonstrate you can write some code and communicate.


I don't think I did a good job of articulating my point. By the time I wash out all the domain-specific stuff, I believe I am left with a post that is even less valuable than a link to a Stack Overflow discussion about the same issue. Does that make sense? What value am I adding to the world if I spend an hour typing up my thoughts when I could have just linked to SO? Also, my blog traffic is effectively 0 people. I don't think it is wrong for others to blog this way; it just isn't the right thing for me.

EDIT - sorry, I missed the fact that we are talking about producing a blog as proof that I understand and think about programming in a certain way (e.g. as something useful to people evaluating me)... You are absolutely right, then, and I withdraw my objection.

I am not currently looking for work but if I were, I think a blog focused on dev would be more valuable than my collection of half-baked github repos. Food for thought.


I’d be interested in hearing those considerations.


The pithy way I put it to people is that they should only do a PhD if they can't NOT do a PhD, i.e. they feel so compelled to work on a specific thing that nothing else will do, and they have found an advisor who will advise them but ultimately let them do their own thing to a great degree. The only other viable option is to find a tenure-track junior professor who really has their stuff together (including their work ethic and emotional intelligence; often the latter can be lacking).

One also has to consider the time cost of doing a PhD, and whether spending the equivalent time working would have gotten them further, not only in career but also in salary. Between a) people who go from undergrad to a job and don't really keep pushing themselves, b) people who go to grad school hoping to skip to a more interesting job post-PhD, and c) people who go from undergrad to a job but push hard to learn new skills (e.g. presenting at conferences, blogging about their work, etc.), option C is generally leaps and bounds ahead of the other two.

A PhD is worth considering if the thing you're interested in most is not really used widely in industry (perhaps some PL stuff?).

Also, prospective PhD students need to consider that the advisor/advisee relationship is far more asymmetric than a normal job. If my job starts treating me like dirt, I can tell them to shove it and quit ASAP, because I know my skills can get me another job in short order. It is almost impossible to quit a PhD and then pick it up again if you and your advisor have some sort of falling out; every future PhD program will look at the prior "failure" with suspicion, losing the nuance of whatever issues besides the actual work triggered the separation.

Basically you need to really understand why you want a PhD (and whether you could do better towards your ultimate career goal without it), and if that's a "yes" you need to really make sure you can get along with your advisor for years. A strong advisor can "compensate" for a weak student (i.e. get them through the program), and a strong student can compensate for a weak advisor (e.g. students who basically do their own thing from the get go, and have high-ranking perpetually absentee advisors who do more research bureaucracy than research and advise by way of ominous single-word emails), but if both are weak it's a recipe for disaster, and only the student gets hurt.

Getting a visa into a country via graduate studies is definitely a good reason (especially in the US, it seems), but often an MS is sufficient (unless one is trying to get on the green-card fast track via the O-1 visa, which requires an exceptional PhD track record).


Really insightful, thank you


Most are, but in biology/physics, STED [1] and STORM are physics-based methods for overcoming the diffraction limit [2]. STED is pure physics: no math/deconvolution/AI tricks.

[1] https://en.wikipedia.org/wiki/STED_microscopy

[2] https://en.wikipedia.org/wiki/Super-resolution_microscopy


They use extra tricks at the image-capture level to supercharge how much information you can load into the captured images (and then decipher them), but the methods are still related, at least in STORM: you're effectively deconvolving lots of sparse images and then merging them! Gaussian fitting of point sources is literally deconvolution, right? You're just estimating the PSF as a 2D Gaussian!
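
In miniature, that localization step really is a tiny parametric fit per spot. A toy sketch with synthetic data (not any real STORM pipeline; every name here is invented):

    import numpy as np
    from scipy.optimize import curve_fit

    # Model the PSF of a single emitter as an isotropic 2D Gaussian
    def gauss2d(xy, amp, x0, y0, sigma, offset):
        x, y = xy
        g = amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset
        return g.ravel()

    # Simulate one noisy, diffraction-blurred spot on a 15x15 pixel patch
    x, y = np.meshgrid(np.arange(15.0), np.arange(15.0))
    truth = (100.0, 7.3, 6.8, 2.0, 10.0)  # amp, x0, y0, sigma, offset
    img = np.random.poisson(gauss2d((x, y), *truth).reshape(15, 15))

    # Fit the Gaussian; the recovered centre is sub-pixel accurate
    p0 = (float(img.max()), 7.0, 7.0, 2.0, float(img.min()))
    popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
    print("fitted centre:", popt[1], popt[2])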


I am not qualified to get too far into the weeds on the physics, but 'resolution' is... complicated. Usually, when we talk about resolution, we are talking about the ability to distinguish two points.

The 'resolution limit' (the Abbe diffraction limit [1]) is related to a few things, but in practice it is set by the wavelength of the excitation light and the numerical aperture (NA) of the lens: d = wavelength / (2 * NA). When we (physicists/biologists) say 'super resolution', we mean resolving things smaller than what was previously possible under the Abbe diffraction limit. So rather than only being able to resolve two points separated by a minimum of 174 nm with a 488 nm laser and a 1.4 NA objective, we can resolve points separated by as little as 40-70 nm with STED (though it varies in practice).
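
The arithmetic behind those numbers, for anyone who wants to play with it:

    # Abbe lateral resolution limit: d = wavelength / (2 * NA)
    wavelength_nm = 488.0   # excitation laser
    na = 1.4                # numerical aperture of the objective

    d = wavelength_nm / (2 * na)
    print(f"diffraction limit = {d:.0f} nm")   # -> 174 nm, as above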

STED does not accomplish this by estimating PSFs and fitting Gaussians. It uses a doughnut-shaped depletion laser to force surrounding fluorophores into a 'depleted' (dark) state, plus an excitation laser to excite a much smaller point in the middle of the depletion zone (see the doughnut on the STED Wikipedia page; Stefan Hell and Thomas Klar first demonstrated this in 1999, and Hell went on to share the 2014 Nobel Prize in Chemistry for it [2]).

I know PALM/STORM uses statistics, blinking fluorescent point sources, and long imaging times to build up a super-resolution image from the point sources via computational reconstruction.

Not as familiar with that one or SIM, but I know the "pure physics/optics" folks I work with regard STED as the most purely physics-based method, one that doesn't rely on fitting, deconvolution, or tricks (not that any of that is bad or wrong!).

[1] https://en.wikipedia.org/wiki/Diffraction-limited_system#The...

[2] https://en.wikipedia.org/wiki/STED_microscopy

