Friday, May 3, 2013

Venture into AI, Machine Learning and all those algorithms that go with it.

It's been 4 months since my last blog entry. I took it easy for a little while, as we all need to do from time to time... but before long my brain got these nagging ideas and questions:

How hard can AI and Machine learning actually be?
How does it work?
I bet people are just overcomplicating it...
How are they currently trying to solve it?
Is it actually that difficult?
Could it be done differently?

So off I went searching the internet. Some of the useful sites I came across:
http://www.ai-junkie.com
Machine-learning Stanford Video course
Genetic algorithm example
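To get a feel for what a genetic algorithm actually does before diving into the books, here is a minimal sketch in Java (the language I plan to implement things in). It is not taken from the linked example; all names and parameters are my own illustrative choices. It evolves a population of bit strings toward all ones (the classic "OneMax" toy problem) using tournament selection, single-point crossover, and occasional mutation:

```java
import java.util.Random;

// Minimal genetic-algorithm sketch: evolve bit strings toward all ones.
public class SimpleGA {
    static final int GENES = 20, POP = 30, GENERATIONS = 100;
    static final Random rnd = new Random(42); // fixed seed for repeatability

    // Fitness = number of true bits (maximum is GENES).
    static int fitness(boolean[] g) {
        int f = 0;
        for (boolean b : g) if (b) f++;
        return f;
    }

    // Tournament selection: pick two candidates at random, keep the fitter.
    static boolean[] tournament(boolean[][] pop) {
        boolean[] a = pop[rnd.nextInt(POP)], b = pop[rnd.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    // Single-point crossover of two parents, then a chance of one mutation.
    static boolean[] crossoverAndMutate(boolean[] p1, boolean[] p2) {
        boolean[] child = new boolean[GENES];
        int cut = rnd.nextInt(GENES);
        for (int i = 0; i < GENES; i++)
            child[i] = i < cut ? p1[i] : p2[i];
        if (rnd.nextDouble() < 0.1)                 // 10% mutation rate
            child[rnd.nextInt(GENES)] ^= true;      // flip one random gene
        return child;
    }

    public static boolean[] run() {
        // Start from a random population.
        boolean[][] pop = new boolean[POP][GENES];
        for (boolean[] g : pop)
            for (int i = 0; i < GENES; i++) g[i] = rnd.nextBoolean();
        // Breed a full replacement population each generation.
        for (int gen = 0; gen < GENERATIONS; gen++) {
            boolean[][] next = new boolean[POP][];
            for (int i = 0; i < POP; i++)
                next[i] = crossoverAndMutate(tournament(pop), tournament(pop));
            pop = next;
        }
        // Return the fittest survivor.
        boolean[] best = pop[0];
        for (boolean[] g : pop) if (fitness(g) > fitness(best)) best = g;
        return best;
    }

    public static void main(String[] args) {
        System.out.println("Best fitness: " + fitness(run()) + " / " + GENES);
    }
}
```

The interesting part, for me, is that nothing in the code "knows" what the answer looks like; the selection pressure alone pushes the population toward all ones over the generations.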

I also ended up buying 2 books on Amazon:

Firstly, from many different recommendations:
Programming Collective Intelligence

I will be "working" through this book. While reading, I will translate, implement and blog the algorithms it defines (in Python), as well as any it mentions that I research separately, in Java. This is mainly for my own understanding, for the benefit of reusing them later, and an excuse to play with Java 7.

However, since I want to work through that book practically, I needed another for some "light" reading before sleep. I found one via an article on MIT Technology Review, Deep Learning; the bit that caught my eye was:


For all the advances, not everyone thinks deep learning can move artificial intelligence toward something rivaling human intelligence. Some critics say deep learning and AI in general ignore too much of the brain’s biology in favor of brute-force computing.
One such critic is Jeff Hawkins, founder of Palm Computing, whose latest venture, Numenta, is developing a machine-learning system that is biologically inspired but does not use deep learning. Numenta’s system can help predict energy consumption patterns and the likelihood that a machine such as a windmill is about to fail. Hawkins, author of On Intelligence, a 2004 book on how the brain works and how it might provide a guide to building intelligent machines, says deep learning fails to account for the concept of time. Brains process streams of sensory data, he says, and human learning depends on our ability to recall sequences of patterns: when you watch a video of a cat doing something funny, it’s the motion that matters, not a series of still images like those Google used in its experiment. “Google’s attitude is: lots of data makes up for everything,” Hawkins says.



So the second book I purchased: On Intelligence.
So far (only up to page 54), two things from this book have embedded themselves in my brain:
"Complexity is a symptom of confusion, not a cause" - so so common in the software development world.
&
"AI defenders also like to point out historical instances in which the engineering solution differs radically from nature's version"
...
"Some philosophers of mind have taken a shine to the metaphor of the cognitive wheel, that is, an AI solution to some problem that although entirely different from how the brain does it is just as good"

Jeff himself believes we need to look deeper into the brain for a better understanding, but could it be possible to have a completely different approach to solving the "intelligence" problem?
