I'm really perplexed by this refrain that CBD is "not psychoactive"
> A psychoactive drug, psychopharmaceutical, or psychotropic is a chemical substance that changes brain function and results in alterations in perception, mood, consciousness, cognition, or behavior. source: https://en.wikipedia.org/wiki/Psychoactive_drug
Isn't the point of taking it that it's psychoactive?
"Not psychoactive" is definitely not the right term. CBD indirectly affects the GABA receptors in your brain; it's pretty clearly psychoactive. "Non-psychoactive" seems to have picked up a colloquial usage that essentially equates to "doesn't get me high".
This definition does not match people's expectation of what counts as "psychoactive". By this definition, if eating chocolate alleviates my bad mood, it's a psychoactive drug.
There's a big prize waiting for the person who can harness DNA repair pathways in conjunction with Cas9 to make precise, multi-base DNA edits. Lots of folks are working on that now.
My qualm with this article is that its claims are disappointingly poorly backed up. The author makes claims, but does not justify them well enough to convince anyone who doesn't already agree with him. In that sense, this piece is an opinion piece masquerading as science.
> This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models [why?]—for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task [why?], or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex [???], or there may not be appropriate data available to learn it [like what?].
> Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues [why?]. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold. [really? why?]
I tend to disagree with these opinions, but I think the author's opinions aren't unreasonable; I just wish he would explain them rather than reiterate them.
For one, input and output sizes have to be fixed. All these NNs doing image transformation or recognition only work on fixed-size images. How would you sort a set of integers of arbitrary size using a neural network? What does "solve with a NN" even mean in that context?
Another limitation I can think of is that NNs don't have state. A NN can't push something onto a stack and then iterate. How do you divide and conquer with NNs?
Are NNs Turing complete? I don't see how they possibly could be.
Input and output sizes don't have to be fixed. E.g. speech recognition doesn't work with fixed-size inputs, and natural language processing deals with sequences of many different lengths. seq2seq networks are explicitly designed for problems with variable-length inputs whose outputs are also variable in length and different from the input's.
> One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols.
To me it sounds like they use an RNN to learn a hash function.
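As a minimal sketch of that "hash-like" behavior, here is a toy RNN encoder in numpy. The weights are random and untrained (purely illustrative, not the paper's actual model); the point is only that sequences of any length fold into a hidden state of the same fixed size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, untrained encoder weights -- just to show the shapes.
hidden_size, vocab_size = 8, 5
W_xh = rng.normal(scale=0.1, size=(vocab_size, hidden_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))

def encode(symbols):
    """Fold a variable-length symbol sequence into one fixed-length vector."""
    h = np.zeros(hidden_size)
    for s in symbols:
        x = np.zeros(vocab_size)
        x[s] = 1.0                       # one-hot input symbol
        h = np.tanh(x @ W_xh + h @ W_hh)  # recurrent state update
    return h

short = encode([0, 3])
long = encode([0, 3, 1, 4, 2, 2, 1])
# Both summaries have the same fixed dimensionality:
print(short.shape, long.shape)
```

A trained decoder RNN would then unroll such a vector back into an output sequence, which is the half this sketch leaves out.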
It seems unfair to criticize the piece for being incomplete and not fully explaining all its points, given that the lead-in says it's a book excerpt and doesn't re-explain material a reader of the book would already have encountered.
As soon as I can, I'll include comparison pages against the documentation, trying to keep them as objective as possible. I can't seriously answer this question in depth here, but it is planned, so at least experts in other systems can jump in and complement/correct my understanding of each system. I've used a bunch of them, but I'm by no means an expert user of each, so making it collaborative sounds like a better idea than just giving my point of view.