Hacker News | new | past | comments | ask | show | jobs | submit | zfrenchee's comments

Webcola is great -- but no docs or support.


Support as in commercial support, a bug tracker or what?


Seems like rigorous data science on the statistics of people graduating with humanities degrees. Doesn't that seem just a little bit ironic?


This is beautiful work!


Thanks for the kind words!


I'm really perplexed by this refrain that CBD is "not psychoactive".

> A psychoactive drug, psychopharmaceutical, or psychotropic is a chemical substance that changes brain function and results in alterations in perception, mood, consciousness, cognition, or behavior. source: https://en.wikipedia.org/wiki/Psychoactive_drug

Isn't the point of taking it that it's psychoactive?


"Not psychoactive" is definitely not the right term. CBD indirectly affects the GABA receptors in your brain; it's pretty clearly psychoactive. "Non-psychoactive" seems to have picked up a colloquial usage that essentially equates to "doesn't get me high".


This definition does not match people's expectation of what counts as "psychoactive". By this definition, if eating chocolate alleviates my bad mood, it's a psychoactive drug.


Seems like that's a good definition, then, if the effect is reproducible across a large number of people in a scientifically rigorous setting.


Chocolate contains caffeine, which is indeed psychoactive.


Base editors can make all 12 conversions as of 4ish months ago I think.
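For what it's worth, "all 12 conversions" just means every ordered pair of distinct DNA bases: 4 bases, each with 3 possible targets. A quick sketch of that counting (my own illustration, not from any linked paper):

```python
# Each single-base edit is an ordered pair of distinct bases,
# e.g. C->T or A->G. With 4 bases and 3 alternative targets each,
# that's 4 * 3 = 12 possible conversions.
from itertools import permutations

BASES = "ACGT"
conversions = list(permutations(BASES, 2))  # ordered pairs of distinct bases

for src, dst in conversions:
    print(f"{src}->{dst}")
print(len(conversions))  # 12
```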


Cas9 does not edit DNA, it cleaves DNA. What happens afterwards is up to the cell. The hype machine has ignored this fact.

One of my favorite George Church quotes: "CRISPR: some call it genome editing. I call it genome vandalism".

Base Editors can actually edit DNA, but only single bases at a time. http://www.sciencemag.org/news/2017/10/novel-crispr-derived-...

There's a big prize waiting for the person who can harness DNA repair pathways in conjunction with Cas9 to make precise, multi-base DNA edits. Lots of folks are working on that now.


Does it cleave a whole segment off (two point cut), or just a single cut?


Single cut.


The original paper is https://www.cell.com/cell/fulltext/S0092-8674(17)30629-3 and there was a nice follow-up last week: https://www.cell.com/cell/abstract/S0092-8674(18)30714-1

There's a lot of good theoretical biology to be done here. I don't think any of us systems biologists are surprised by this result, but pinning down the exact structure of the genetic basis for complex traits is going to be an interesting enterprise.


You might want to check out the evo-devo community. They've been doing that for some time.


There are slots devoted specifically to folks who get workshop papers in. I registered a couple days ago.

To be fair, I agree NIPS registration is getting a little out of hand, but we should get the facts straight.


My qualm with this article is that it's disappointingly poorly backed up. The author makes claims, but doesn't justify them well enough to convince anyone who doesn't already agree with him. In that sense, this piece is an opinion piece masquerading as science.

> This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models [why?]—for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task [why?], or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex [???], or there may not be appropriate data available to learn it [like what?].

> Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues [why?]. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold. [really? why?]

I tend to disagree with these opinions, but I think the author's opinions aren't unreasonable; I just wish he would explain them rather than reiterating them.
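To make the quoted claim concrete, here's a minimal sketch (my own, not the author's code) of what "a chain of simple, continuous geometric transformations" means: each layer is just an affine map followed by a pointwise nonlinearity, and the whole network is their composition.

```python
# A deep model as a composition of simple continuous maps.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # one "simple geometric transformation": affine map + ReLU
    return np.maximum(W @ x + b, 0.0)

# a 3-layer network mapping R^4 -> R^2 is just a composition of such maps
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
W3, b3 = rng.normal(size=(2, 8)), rng.normal(size=2)

x = rng.normal(size=4)
y = W3 @ layer(layer(x, W1, b1), W2, b2) + b3  # continuous in x throughout
print(y.shape)  # (2,)
```

The open question the article hand-waves past is which programs can and can't be approximated by such compositions at practical sizes.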


For one, input and output size has to be fixed. All these NNs doing image transformations or recognition only work on fixed-size images. How would you sort a set of integers of arbitrary size using a neural network? What does "solve with a NN" even mean in that context?

Another problem/limitation I can think of is that in NNs you don't have state. The NN can't push something onto a stack and then iterate. How do you divide and conquer using NNs?

Are NNs Turing complete? I don't see how they possibly could be.


Input and output sizes don't have to be fixed. E.g. speech recognition doesn't work with fixed-size inputs. Natural language processing deals with sequences of many different lengths. seq2seq networks are explicitly designed for problems with variable-length inputs and outputs that are also variable in length and different from the input.

How would you sort integers? Using Neural Turing Machines: https://arxiv.org/abs/1410.5401

NTMs and other memory-network architectures have explicit memory as state (including stacks!); indeed, any recurrent neural net has state.

Are NNs Turing complete? Yes! http://binds.cs.umass.edu/papers/1992_Siegelmann_COLT.pdf
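A toy sketch (my own, not from either linked paper) showing both points at once: a vanilla RNN applies the same weights at every timestep, so it accepts sequences of any length, and its hidden vector is exactly the state carried between steps.

```python
# Minimal vanilla RNN: same weights at every step => variable-length input;
# the hidden vector h is state threaded through the loop.
import numpy as np

rng = np.random.default_rng(0)
Wxh = rng.normal(size=(3, 1)) * 0.5   # input -> hidden
Whh = rng.normal(size=(3, 3)) * 0.5   # hidden -> hidden (the state update)
h0 = np.zeros(3)

def rnn(sequence):
    h = h0
    for x in sequence:               # one step per symbol, any length
        h = np.tanh(Wxh @ np.array([x]) + Whh @ h)
    return h                         # fixed-size summary of the whole sequence

print(rnn([1.0, 2.0]).shape)            # (3,)
print(rnn([1.0, 2.0, 3.0, 4.0]).shape)  # (3,) -- same weights, longer input
```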


Interesting, thanks! On https://www.tensorflow.org/tutorials/seq2seq I found a link to https://arxiv.org/abs/1406.1078, which says

> One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols.

To me it sounds like they use an RNN to learn a hash function.

Thanks for the NTM link, I'll check it out.


It seems unfair to criticize the piece for being incomplete and not fully explaining its points, given that the lead-in says it's a book excerpt and doesn't explain a lot of stuff a reader of the book would already have encountered.


Who is behind this?


Me (as an individual), and a few great people who helped me along the way. Not commercially endorsed or supported.


How does this compare to Dask, Luigi or Airflow?


As soon as I can, I'll add comparison pages to the documentation, trying to keep them as objective as possible. I can't seriously answer this question in depth here, but it is planned, so experts on the other systems can jump in and complement/correct my understanding of each one. I've used a bunch of them, but I'm by no means an expert user of each, so making it collaborative sounds like a better idea than just giving my point of view.

