
Wednesday 14 September 2011

Turning Point for AI - follow up

This just out from Numenta...

-----------------------
Numenta Newsletter
September 2011
Dear Numenta newsletter subscriber:

The year 2011 has been a turning point for Numenta. Prior to 2011, we were a research company focused on biological learning algorithms. In January of this year, we began creating a commercial product based on these algorithms. In this newsletter I want to tell you a bit about our product plans and invite you to join our private beta.

At the heart of the product we are building is the Cortical Learning Algorithm, which we believe is unique in its ability to discover patterns in temporal data. (A paper describing the Cortical Learning Algorithm is available here.) Our focus over the last six months has been on exploring how the Cortical Learning Algorithm can be applied to almost any kind of data.
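The CLA itself is described in Numenta's paper; as a much simpler illustration of what "discovering patterns in temporal data" means, here is a toy first-order model that counts transitions in a sequence and predicts the most frequent successor. This sketch is for intuition only and bears no relation to the actual algorithm; all names are invented.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of temporal pattern learning: a first-order
// transition-count model. This is NOT the Cortical Learning Algorithm,
// just the simplest possible "learn from a sequence, predict the next
// value" sketch to make the idea concrete.
public class SequenceSketch {

    // transitions.get(a).get(b) = how often b followed a in the training data
    private final Map<String, Map<String, Integer>> transitions = new HashMap<>();

    public void learn(String[] sequence) {
        for (int i = 0; i + 1 < sequence.length; i++) {
            transitions
                .computeIfAbsent(sequence[i], k -> new HashMap<>())
                .merge(sequence[i + 1], 1, Integer::sum);
        }
    }

    // Predict the most frequent successor of the current value, or null.
    public String predict(String current) {
        Map<String, Integer> next = transitions.get(current);
        if (next == null) return null;
        return next.entrySet().stream()
            .max(Map.Entry.comparingByValue())
            .get().getKey();
    }

    public static void main(String[] args) {
        SequenceSketch s = new SequenceSketch();
        s.learn(new String[]{"login", "browse", "buy", "login", "browse", "buy"});
        System.out.println(s.predict("browse")); // prints "buy"
    }
}
```

A real temporal model would handle higher-order context, noise and online learning; this only shows why time-ordered records (see the beta requirements below) are the raw material for prediction.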


Over much of this year, we have been developing a new service, based on the Cortical Learning Algorithm, that will allow anyone to find patterns and make predictions from data. We think this service will have wide-ranging uses, from web click prediction to sales forecasting to resource planning to health management. There are several compelling innovations in our product related to functionality and ease of use. We look forward to sharing more details on these features with you early next year, as we get closer to our product launch.

In a few months, we will launch a private beta program. We are seeking dedicated partners who would like to get early access to our service and would be willing to provide feedback on the design of the product. We are looking for both developer participants (who will access the service through an API) as well as business users (with basic database and spreadsheet skills).

Here are additional qualifications needed to participate in the private beta:

  • Access to historical and current data, generally at least 1,000 records, in time order
  • Data streams can be any type of structured or semi-structured data, including date/time, numbers, categories, and text (note that we are no longer doing vision work, so unprocessed photos and videos are not appropriate data streams)
  • Data is not subject to confidentiality or HIPAA compliance issues
  • Finding patterns in the data and making predictions has commercial value
  • You are willing to have Numenta talk generally about the results in a case study (you will continue to be the sole owner of the data and any value derived from using our product)
  • The beta program will be free up to certain limits on server time
If you are interested in applying to participate in the private beta, please let us know here.

As we make this transition to a product company, you will notice a few changes.

First, in preparation for our launch, we are changing our web site: the legacy software and forums are moving to a separate area rather than being featured on the main site. While you are welcome to continue to use our previous software, and to write on the forums, we do not expect to continue to evolve this work. We anticipate removing this information, as well as access to the legacy software, around the end of this year; until then, it remains available to you.

Second, while I had previously tweeted under the ddubinsky account, we are switching to a Numenta Twitter stream. For those of you who have followed me at ddubinsky, please sign up for Numenta now.

Third, we are growing our team, so I urge you to keep an eye on our job postings. We are building an incredibly exciting product with a big opportunity, so we welcome your interest in joining us.

Thanks for your ongoing interest in our work. We can't wait to have you try out our product in 2012!

Donna Dubinsky

CEO, Numenta

Friday 26 August 2011

Turning Point for AI

Something happened in 2010 inside the Artificial Intelligence (AI) industry. If you're familiar with Jeff Hawkins' work around Hierarchical Temporal Memory (HTM), you'll also know that he founded a brain research institute and a for-profit company. The institute is the Redwood Center for Theoretical Neuroscience. The company is Numenta.

These entities were founded using the money Hawkins made from selling his shares in his company, Palm. As a side note, I think the amount of drive you need to create a multi-billion dollar company in order to do what you really want (study the brain) is quite breathtaking. However, I'm also pretty sure that accomplishment will be completely overshadowed by the things that are about to come out of the AI work he has pioneered over the past decade.

Numenta was co-founded by Hawkins and Dileep George with the aims of 1) producing a commercially viable API that can be used to solve human cognition problems and 2) pushing results and findings back into the Redwood Institute.

At some point in 2010, Dileep decided to leave Numenta to start his own company, Vicarious Systems. According to Dileep himself in this email extract, everyone is still on good terms and Numenta is in a place where they can continue without him.

"...I did agonize a lot before deciding to take a leave from Numenta, but after taking the decision I do feel it was the right one. Yes, everything is friendly between me and the team. I made sure that Numenta will be fine without me and I am ready to help if needed. Jeff and Donna were supportive of my decision to take a break and explore..."

I wrote to the founders of Vicarious Systems, and was told that they do intend to push research findings back into the Redwood Institute, just as Numenta does.

At about this time, activity from Numenta dropped off. A new API was hinted at on the Numenta website, but as yet, nothing has been released. Activity in the Numenta forums also dropped.

I went to have a look at the Redwood Institute. If you're willing to hack away at LinkedIn, you can come away with some interesting insights.


For example, many of the primary researchers at the Redwood Center for Theoretical Neuroscience are also involved in a company called IQ Engines. IQ Engines is a Software as a Service (SaaS) offering for image processing: you send in an image, and the service gives you back a JavaScript Object Notation (JSON) description of the component pieces of that image.

Going back into LinkedIn and looking at the company activity graphs for IQ Engines and Numenta, you can see that both of these companies began ramping up in late 2010.

[LinkedIn company activity graphs: Numenta and IQ Engines]



Numenta has recently created some interesting job postings: one for a Product Manager, and a contract for a user experience designer, as they presumably gear up to release the hinted-at new API to the community.

So what happened mid 2010?

It would seem that it was not a rift: everyone involved is, at least in appearance, signalling that all the parties are still friends, and that information will still be shared back to academia.

Maybe there was a breakthrough? Something that made everyone involved with Jeff Hawkins realize that now was the time to set up a company and make a bunch of cash...

And there is lots of potential cash to make. With the advent of reliable human cognition in computer form, everything that humans are capable of (and an unimaginable number of things that humans can't do) could be done with software and machines that you can sell.

As William Gibson famously said, "The future is already here - it's just not very evenly distributed." I think this is beautifully illustrated by one very interesting space where HTMs are already being used: quantum computing. D-Wave Systems, based in Vancouver, has produced the first commercially available quantum computer, which Google has demonstrated can very quickly train HTM networks for image recognition.

A quick summary of quantum computing: because of weird, little-understood quantum effects, massively parallel operations can be performed, which enables quantum computers to accomplish certain tasks an order (or many orders) of magnitude faster than traditional computers. However, as the CEO of D-Wave, Vern Brownell, says, "One of the things I'm learning about quantum computing is that if anyone says they understand it, I think they're probably mistaken."

All in all, I think the rate at which change is coming will astound us all. Let's see what comes out of Numenta in the next few months.

If you forced my hand, here's the order in which I see it happening.
  1. Software solutions to existing predictive analytic problems and human cognition problems (e.g. is this picture a dog or a cat?)
  2. Military robotics. Robots that can do complex (non-)human tasks.
  3. Chip based intelligence for replacement of any human based activity deemed profitable.
  4. Unknown period ending in
  5. Human enslavement
But I think it will be a happy enslavement. We'll most likely end up in what we think of as a luxury resort being waited on hand and foot. Just like our DNA is cared for by our cells, or our cells are cared for by our body, or our body is cared for by our lizard brain, or our lizard brain is looked after by the cortex. The AI will look after us like we look after our body. Sometimes an arm is expendable, but we regret having to cut it off.



Monday 30 May 2011

Of mice and many keyboard shortcuts

I'm old enough to remember what the world was like before there was the mouse.  Or the trackball. Or the stylus. Or the touch screen.  Or 3D interactive cameras that watch you (even on stand-by).

Back in the day, all we had was a keyboard.  Nothing else has allowed humans to interact with computers more effectively.

Mouse free since '83

My journey to keyboard shortcut nirvana began when I was a shy 19 year old. At the time, I worked for a telecommunications company in England. I built software for them with the new, and as yet non-standard, Java 1.1. As I was quite shy, when my mouse happened to break, I didn't speak up to the intimidating manager and ask for a replacement, but went about figuring out how to use keyboard shortcuts to do everything I would otherwise have done with a mouse.

I figured, "I'll be slower, but at least I'll be cheap".

I came to realize over the next week that I wasn't slower without a mouse. I was faster. And not just a bit faster, much faster.  Strangely, with that speed came a sure-footedness that I hadn't experienced before.

It turned out that everything I needed to do on a computer (as a programmer) could be done with a combination of key presses.  For example, maximize a window: alt-space-x, restore a window: alt-space-r, switch program: alt-tab, switch to the previous program: alt-shift-tab.





Mmmmmm Conventions
There were global shortcuts that worked across all windows and screens; copy: ctrl-c, paste: ctrl-v, go to the next anything: tab.  I came to learn that there were also conventions for keyboard shortcuts that most programs followed; alt-f-o (or ctrl-o) to open a file, alt-f-a to save as.
And finally, there were keyboard shortcuts for all the specific pieces of software that I used to do my job; open a tab in Firefox: ctrl-t, open a previously closed tab: ctrl-shift-t.

Initially, when faced with a task that I had previously done with the mouse, it took me a little while longer to figure out how to get to the right menu, select the right sub-menu and then activate the dialog to do a specific task.  Once I had performed the keyboard shortcut a couple of times it became seared into my muscle memory.  From that point on, all I had to do was think of what I wanted and the keyboard shortcut was performed by my hands, without me needing to think of the individual steps. That is, I don't think "alt-space-x" I think, maximize, and the screen is magically maximized.

The keyboard gives you a way to utilize muscle memory to control the computer.  This is very important. Humans are great at learning new muscle skills. Walking, talking, singing, jumping, painting, tying your shoelace.

Turns out the keyboard is the only input device that will let you use muscle memory.

There are two other interesting things about a keyboard. One is that its operations are discrete: individually separate and distinct.  That is to say, you either have pressed a key (or combination of keys) or you haven't.  There is no concept of half-pressing a key, or pressing a key hard, or the letter 'a' moving position on you.

Actually, the 'a' can move on this keyboard.

The next thing is that the operating system, which is responsible for listening to the keyboard, queues up your actions and processes them, in order, as fast as it can. I know I can type a word or perform 4 keyboard shortcuts in a row without having to know what is happening on the screen.  I'm always confident that the computer will execute them in the order that I thought of them, and that the next keystroke won't take effect until the previous one completes.

Because the keyboard allows you to enter discrete commands that are queued in a trusted way, I can now apply whole "sentences" to my work.  Just as you can think of typing the word "trusted" into a text editor (a series of characters represented by a series of muscle movements) so you can tie together sequences of shortcuts to perform complex operations.

Keyboard Shortcut Mecca
For example, in Eclipse (a programming editor), I quite often want to format the code, organize my library imports, save, and close the editor. This is ctrl-shift-f, ctrl-f-o, ctrl-s, ctrl-w. I don't actually think any of those things; I think "end" and the rest all happens with no conscious interaction. The keystrokes take about half a second; the program updates take between 1 and 5 seconds.  My brain has applied a label to a complex pattern.  I have shortened the muscle memory location to one place, to one word, to one feeling: "end" in the context of Eclipse.
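The "one word, many keystrokes" idea can be sketched as a macro table: a single label expands into an ordered queue of discrete commands, executed strictly in order, just like queued keystrokes. The macro and command names here are illustrative assumptions, not a real Eclipse API.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// A sketch of the "one word, many keystrokes" idea: a macro name maps
// to an ordered list of shortcut commands, and the commands are queued
// and executed strictly in order, just as the OS queues keystrokes.
// The macro and command names are illustrative, not a real API.
public class MacroSketch {

    static final Map<String, List<String>> MACROS = Map.of(
        "end", List.of("format-source", "organize-imports", "save", "close-editor"));

    // Expand a macro into a FIFO queue of discrete commands.
    static Queue<String> expand(String macro) {
        return new ArrayDeque<>(MACROS.getOrDefault(macro, List.of()));
    }

    public static void main(String[] args) {
        Queue<String> commands = expand("end");
        // Each command completes before the next is taken, in order.
        while (!commands.isEmpty()) {
            System.out.println("executing: " + commands.poll());
        }
    }
}
```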





You cannot do this with the mouse, and to prove it, here's a challenge.  Without looking at the screen, use a mouse and try to save your work. You can think "save" as much as you want; it isn't going to help. In the interest of scientific fairness, I tried to do this 25 times and I managed it twice. The way I did it was to maximize the program, close my eyes and then drag the mouse to the furthest top-left position.  Then, using the force, I guessed where the file menu was, then guessed where save was. You might get it right. You probably won't. You'll probably lose work.
Even with your eyes open, saving with a mouse takes at least 3 seconds: go to the file menu, find the save option, click.  Ctrl-s is virtually instantaneous.  How often do you save?  How many three-second chunks of your life would you have lost if you always used the mouse?
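The back-of-envelope arithmetic can be made explicit. Assuming (purely for illustration) 50 saves a day over 250 working days, the mouse's extra ~2.9 seconds per save adds up to about ten hours a year:

```java
// Back-of-envelope arithmetic for the "three seconds per save" claim.
// The save counts are illustrative assumptions, not measurements.
public class SaveCost {

    // Seconds lost per year to mouse-driven saves versus Ctrl-S.
    static int secondsLostPerYear(int savesPerDay, int workingDays,
                                  double mouseSeconds, double keySeconds) {
        return (int) Math.round(savesPerDay * workingDays * (mouseSeconds - keySeconds));
    }

    public static void main(String[] args) {
        // Say 50 saves a day, 250 working days, 3 s by mouse vs ~0.1 s by keyboard:
        int lost = secondsLostPerYear(50, 250, 3.0, 0.1);
        System.out.println(lost + " seconds/year, ~" + lost / 3600 + " hours");
    }
}
```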

Could you become as fast with the mouse?
No, for these reasons: mice are variable. They accelerate, so you can't be sure that your hand gesture will correlate to an exact position on screen.  Mouse usage also requires the computer to tell you where the mouse cursor is so you can adjust your hand movements.  In fact, as you use the mouse, you and the computer are continually communicating: through your hand, the CPU, the monitor, your eyes, your cortex-based CPU and back to your hand again. This is a much slower and more error-prone process than thinking "save".

All input devices apart from the keyboard require this kind of human-computer feedback loop. This means that they are missing out on one of our most fundamental skills: muscle memory.

Having worked with a fair number of illustrators and graphic designers, it turns out that it's not just programming where keyboard shortcuts are critical, it's in Photoshop and Illustrator too. The best graphic artists hardly seem to use the mouse. They use it for the bezier curve tool, airbrushing, and moving the page around, but nearly everything else seems to be a dazzling array of keyboard shortcuts; copy that layer, make the outer colour transparent, move it up four layers, paste it as an object, move it (using the cursor keys) up 5 pixels, etc...

The typewriter was invented in 1870.  Really?  141 years of the same interface and nothing has changed?  It would seem that nothing has come along that even remotely matches the keyboard's ability to allow us to interact effectively with our computer friends.  Keyboards have gotten way cooler, but quite a lot of these new keyboards have lost their discrete functioning.  For example, I don't know if I've pressed a key on this projected keyboard until I see the confirmation on screen.


iFail

Put your hands in the laser beam,
it probably doesn't give you cancer.


Show me a new user interface that doesn't require feedback and that can be used blindfolded.

All I can think of are the new neural integration devices: devices that allow the computer to learn what you're thinking, and then perform associated operations. I think these devices are the only ones that can possibly match, and hopefully blow past, the speed of the keyboard.
However, the designers need to keep in mind that they'll only beat the keyboard if they allow discrete, trusted queues of commands to be issued.  Having to wait for a fancy three-dimensional world to re-render before the next thought can be issued will make sure this interface is never taken seriously.

Have a look at the OCZ version here

Friday 27 May 2011

The future of mobile computing

With every new way of measuring the world that is put inside a mobile device, there is a combinatorial number of possible applications that can be built.

A simple example on the iPhone is the compass. When the compass was added, it created a way for the iPhone to tell you whether you were facing East, West, North or South. However, the compass doesn't measure direction; it measures magnetic fields.  A few weeks after the compass was introduced, a Stud Finder app was built to help find nails in studs for putting up secure shelves.  A novel way of using a magnetic field sensor.

http://www.fastappstore.com/~334839465


Another example of novel uses of measuring the world was first shown on Android.  Augmented Reality uses the compass, the 3-axis accelerometer (for orientation) and the video camera to create an app where you can hold the camera up to the world, and the app will overlay places of interest, your nearest pizza location etc...

http://techsplurge.com/3214/mega-list-33-awesome-augmented-reality-apps-games-android/

At the time of writing there are more than ten ways that mobile devices can measure the world.
  • Cellular network detection. (useful for making calls, sending texts, and triangulating position)
  • Wireless network detection.
  • Video recording and picture taking.
  • Magnetic field detection.
  • GPS - for detecting signals from satellites in medium Earth orbit.
  • 3D spatial position of the device.
  • Human contact (touch screen devices)
  • Acceleration of the device. (shaking, walking, etc...)
  • Sound detection.
  • Light sensor.
  • Proximity sensor.
  • Detecting other devices (Bluetooth)
And a bunch of ways that it can interact with the world:
  • Display graphics
  • Create sound
  • Connect to Wi-Fi
  • Create Wi-Fi
  • Connect to cellular networks
  • Connect to Bluetooth devices
  • Vibrate

Combine any two of these measuring devices and you have a useful app; combine three or more and you have a killer app.  Use one of the measurement devices in a novel way and you have a viral app.  (Shazam, Word Lens, etc...)
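The "combinatorial number of possible applications" can be made concrete: with n sensors there are n-choose-k ways to pick k of them, and the list above has 12 measuring devices. A small sketch of the arithmetic:

```java
// Making "a combinatorial number of possible applications" concrete:
// with n sensors, the number of distinct k-sensor combinations is
// n-choose-k. The 12 sensors listed above already give 66 pairs and
// 220 triples, before even considering the output devices.
public class SensorCombos {

    // Compute n-choose-k iteratively; each intermediate value is an
    // exact binomial coefficient, so the integer division never truncates.
    static long choose(int n, int k) {
        long result = 1;
        for (int i = 1; i <= k; i++) {
            result = result * (n - k + i) / i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println("pairs:   " + choose(12, 2)); // 66
        System.out.println("triples: " + choose(12, 3)); // 220
    }
}
```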
This list of sensors is going to grow.  Notably, the measurement hardware doesn't have to live on the mobile device.  Apple and others have created protocols to allow third-party hardware to be plugged into the device.  For example, Nike's shoe app that measures your pace, alarm clocks that measure your REM sleep and wake you at well-rested points, and countless other medical applications.


As the ways of measuring and interacting with the world grow, and every conceivable app fills its niche, mobile computing is going to become more and more personalized, more ubiquitous and, as long as we don't all die of brain cancer from exposure to these measuring devices, more and more useful. It would be nice if there were a systematic way of creating every possible app from every combination of measuring devices.

Here are some predictions of useful measuring devices:
  • Heart rate sensor.
  • Heat sensor (combined with heart rate, can measure mood, stress etc...)
  • Brainwave sensor (for all those useful psychic apps coming down the line)
  • Genetic protein sensors (general health)

Tuesday 24 May 2011

A taglib for modular JavaScript and CSS

I would like to have nice modular style sheets.  I want a tabs.css, I want a global.css, I want a specific style sheet for overrides for a particular page - homepage.css.

What I don't want is to end up with more than one style sheet include on my HTML pages, which leads to long page load times.


Furthermore, I want this to work dynamically in my test environment, but statically in my live environment.  That is to say, when I'm testing in Jetty, I want the global style sheet to be regenerated each time one of the modular style sheets changes; but when I'm on the live site, I only want the global style sheet generated once, and persisted to disk.

And while we're at it, I want this to work with JavaScript includes as well, and everything needs to be minified.

It seems that this is a bit unreasonable, as I couldn't find any tag libs out there that did this.  For anyone who thinks this might be useful, here's what it ended up looking like in the JSP:

Stylesheets
<c:set var="location">
    <wa:compressor 
        files="/stylesheets/style.css,/stylesheets/tagcloud.css,etc..." 
        outputDir="/stylesheetsGlobal/" 
        regenerate="${true}" />
</c:set>
<link rel="stylesheet" href="${location}" type="text/css" />


JavaScript
<c:set var="location">
    <wa:compressor 
        files="/javascript/jquery.js,/javascript/main.js,etc..." 
        outputDir="/javascriptGlobal/" 
        regenerate="${true}" />
</c:set>
<script src="${location}"></script>
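The tag's internals live in the jar and aren't shown here, but a minimal sketch of what a tag like wa:compressor might do is: concatenate the source files into one output file, regenerating on every request in test mode and only when the file is missing in live mode. The class and method names below are my own invention, and minification is omitted for brevity; a real tag would run a minifier such as YUI Compressor over the combined text.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// A minimal sketch of the caching/concatenation logic a tag like
// wa:compressor might use internally (an assumption, not the real
// implementation): combine sources into one file, regenerating on
// every call in test mode but only once in live mode.
public class CompressorSketch {

    // Returns the path of the combined file, creating it if needed.
    static Path combine(List<Path> sources, Path output, boolean regenerate)
            throws IOException {
        if (regenerate || !Files.exists(output)) {
            StringBuilder combined = new StringBuilder();
            for (Path source : sources) {
                combined.append(Files.readString(source)).append('\n');
            }
            Files.createDirectories(output.getParent());
            // A real implementation would minify combined.toString() here.
            Files.writeString(output, combined.toString());
        }
        return output;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("compressor");
        Path a = Files.writeString(dir.resolve("style.css"), "body{margin:0}");
        Path b = Files.writeString(dir.resolve("tabs.css"), ".tab{float:left}");
        Path out = combine(List.of(a, b), dir.resolve("global/all.css"), true);
        System.out.println(Files.readString(out));
    }
}
```

In the JSP above, `regenerate="${true}"` would be fed from an environment flag, so Jetty rebuilds the file on every request while the live site serves the persisted copy.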

The jar file for this can be downloaded here: http://www.warrenprojects.com/files/wonderant-tags-1-dist.zip
The maven source project here: http://www.warrenprojects.com/files/wonderant-tags-1-src.zip