This is what I like. When I start such projects, I am criticised by my peers, told that it is a waste of time and effort, that the best methods and most optimized libraries already exist and that I will never make anything better.
Well, not always the case. Of course I understand the merit of standing on the shoulders of giant(corporation)s, but sometimes you just want to try out something new (to you) just to test your intuition and improve your own perception of existing knowledge.
This is like saying "I will never open a restaurant so it's pointless to learn how to cook". At the very least it gives you an appreciation and understanding for the domain that you wouldn't otherwise have.
Cool project! Makes me want to recommend to my brother to start something like this.
Would definitely recommend trying vectorised calculations instead of computing the output with loops [0] (see the sketch below).
I'd also advise you to steer away from some of the apparently neurology-inspired terms (stimulus) and use terms more common in machine learning (vector, features, etc.).
If you want to go further with this, you could try to implement back propagation with different activation functions and any depth. Check CS231n [1], the reference course for this.
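To make the two suggestions above a bit more concrete, here is a minimal sketch, not the repo's actual code: the gonum dependency and every name below are assumptions. It shows a vectorised weighted sum for a single neuron (instead of an explicit loop) plus a sigmoid activation and its derivative, which are the pieces backpropagation needs.

```go
package main

import (
	"fmt"
	"math"

	"gonum.org/v1/gonum/mat"
)

// sigmoid squashes the weighted sum into (0, 1).
func sigmoid(x float64) float64 { return 1.0 / (1.0 + math.Exp(-x)) }

// sigmoidPrime is the sigmoid's derivative, expressed in terms of its output y.
func sigmoidPrime(y float64) float64 { return y * (1.0 - y) }

func main() {
	// Hypothetical input features and weights for one neuron.
	features := mat.NewVecDense(3, []float64{0.5, -1.2, 3.0})
	weights := mat.NewVecDense(3, []float64{0.4, 0.1, -0.7})
	bias := 0.2

	// Vectorised weighted sum via a dot product instead of an explicit loop.
	z := mat.Dot(weights, features) + bias
	out := sigmoid(z)

	// dOut/dz is the local gradient that backpropagation carries backwards.
	fmt.Printf("output: %.4f, dOut/dz: %.4f\n", out, sigmoidPrime(out))
}
```

The dot product here stands in for the per-feature loop; with a library like gonum the same idea extends to whole layers as matrix-vector products.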
Hi! I'm doing exactly that right now, I'm in the middle of sigmoid testing :D I agree with you about the terms; for my master's thesis I wrote a neuro-computational model to analyze taxonomic responses in early childhood, so I think in terms of stimulus because of my background in AN (if you're interested, you can find the work on my GitHub page https://made2591.github.com, in the About section)... I will port my private repo to GitHub soon :) (but it is written in MATLAB)
I'm sorry that this isn't really adding to the conversation, but I have to say just how awesome that whole title sounds to someone who only knows a few of the words.
Hi all, I developed the multilayer perceptron with the back propagation algorithm. If you want to have a look, please go to: https://github.com/made2591/go-perceptron-go
Have a nice day!
This is interesting; I once tried to create my own ANN library for Go, but didn't get far with it (one of those half-baked projects). I even had branding and names for things, but somewhere along the line I moved on to something else.
This is nice to see.
I do wonder where the activation function is in the code, though? I looked it over, and maybe I missed it? If it's in there, it's gotta be something very simple (binary or relu?) - enough to overlook it easily...
Or maybe I'm looking at the code wrong - is this really "a perceptron" - that is, an implementation of the McCulloch-Pitts model (basically a single neuron, and basic activation)?
Yes, it is exactly as simple as you imagined. I'm working on multilayer + backprop, while re-studying the theory XD I've used SOM and GSOM in recent years. I think an ML library in Go is a good idea: I shared my repo just to, you know... share the idea, find collaborators, and open some source in Go. Honestly, I'm really happy to find such interest from the web... and from other coders ^^
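For anyone wondering just how simple "exactly as simple as you imagined" is, here is a rough sketch of a McCulloch-Pitts style single neuron with a hard-threshold activation. The names are hypothetical and this is not taken from go-perceptron-go; it only illustrates the kind of model being discussed.

```go
package main

import "fmt"

// Perceptron holds a single neuron's weights and bias.
type Perceptron struct {
	Weights []float64
	Bias    float64
}

// Predict returns 1 if the weighted sum of the inputs exceeds zero, else 0
// (the "basic activation" mentioned above).
func (p *Perceptron) Predict(inputs []float64) int {
	sum := p.Bias
	for i, x := range inputs {
		sum += p.Weights[i] * x
	}
	if sum > 0 {
		return 1
	}
	return 0
}

func main() {
	p := &Perceptron{Weights: []float64{0.5, -0.4}, Bias: -0.1}
	fmt.Println(p.Predict([]float64{1.0, 0.3})) // weighted sum is 0.28, so this prints 1
}
```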
Thank you for your support, guys!!! I hope I will attract some other Go gurus to work with!! I truly appreciate your interest; it's a little step, but I promise I will Go (haha) through it!