The idea is that you shouldn't have to be a mad scientist (not that there is anything wrong with that) to learn, discover, and build machine learning, deep learning and AI solutions for fun, for work, or for solving the world's most pressing and difficult problems.
The idea is to leverage the work of a lot of very smart people who, over the last several decades, have shown us how to make computers help ordinary people do amazing things. This is what computers are for, right?
The idea is to make a modern user-friendly application that helps humans train computers to be smarter computers - much smarter computers that do wonderful things for, and with, us humans.
The rest of us...
* The rest of us who are not experts in:
CPU, GPU, TPU optimizations
Python programming, test, debugging, and optimization
Tricks & gotchas of the machine learning framework du jour
* The rest of us who do not want to spend a whole lot of time doing boring stuff:
Find the right training data
Prepare the training data
Manage 100s of versions of models, tests, and results
Read 100s of technical papers to figure out what might work
Read technical papers everyday to figure out what does work
Spend days tuning hundreds (or millions) of hyperparameters
Spend days just trying to find the right learning rate
Spend days just trying to find the right dropout rate
Spend days waiting to find out it still doesn't work
* The rest of us who do want to use (and maybe learn about) CNNs, RNNs, RL, LSTMs, GANs (and more) to do stuff:
to build recommendation engines
to build personalization engines
to build image search engines
to build text semantic categorization engines
to build text sentiment engines
to build speech recognition engines
to build big data predictive analytics engines
to build smart bots
to build friendly bots
to optimize workflows
to optimize business processes
to optimize supply chains
...to save money, make money, or just, you know, save the world.
It's about Getting Stuff Done
Do we have to know Color Science and Digital Signal Processing and DirectX or OpenGL and GPU and framebuffer architectures and then write a bunch of code just to resize an image?
Or do we just use Photoshop?
OpenAIDE, like Photoshop, is plugin-based and extensible, so if you ARE a mad scientist, you can do all those mad scientist kind of things [i.e. OpenAIDE was originally designed for and will have complete support for the advanced researcher].
Call for Participation
In the next few weeks we will ask for your feedback about what features OpenAIDE needs to have in order to satisfy your AI and machine learning needs.
In the meantime, for the geeks out there, the following pages are the high-level description of the current architecture of OpenAIDE.
We welcome all kinds of feedback!
Make slides for ad hoc presentation Jan 2018
Purchase openaide.org domain Mar 2018
Create landing page Mar 2018
Add high-level architecture designs to landing page Mar 2018
Write "Origin Story" announcement for Medium blog
Write "First Steps" plan and call to action for Medium blog
In order for OpenAIDE to remain free, it must offload the heavy computation required to train your machine learning solution either to your local machine or to your own (optional) cloud account (e.g. AWS).
OpenAIDE deployed in the cloud (the default configuration)
The browser extension is only needed if you want to execute training runs on your local machine - e.g. you do not have an account with a cloud service like AWS, you are trying to save money by running locally as much as possible, you have a custom GPU on your machine that you prefer to use, or your business has its own corporate compute network you want to use.
OpenAIDE deployed on your local machine - localhost (possibly deprecated)
The OpenAIDE application is downloaded and run on a localhost web server that you have previously configured.
We feel that, for the goals of this application (ease of use, democratization, collaboration, social contributions, etc.), the advantages of hosting OpenAIDE in a browser (accessibility, portability, convenience) outweigh the main disadvantage: no direct access to the standard Python frameworks. That disadvantage is replaced by two lesser ones: you must either install a browser extension or set up your own cloud account that hosts your own suite of Python libraries.
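To make the trade-off concrete, here is a minimal sketch of how a client might pick an execution target from the two options described above. All names (`ExecutionTarget`, `choose_target`, the sample cost figure) are illustrative assumptions, not part of any real OpenAIDE API.

```python
# Hypothetical sketch: choosing where to offload a training run.
# "local" stands for the browser-extension path; "cloud-aws" for a
# user-provided cloud account. Nothing here is real OpenAIDE code.
from dataclasses import dataclass

@dataclass
class ExecutionTarget:
    name: str             # e.g. "local" or "cloud-aws"
    has_gpu: bool
    cost_per_hour: float  # 0.0 for the user's own machine

def choose_target(targets, prefer_cheap=True):
    """Pick a target: cheapest first, break ties by preferring a GPU."""
    key = lambda t: (t.cost_per_hour if prefer_cheap else 0.0, not t.has_gpu)
    return sorted(targets, key=key)[0]

local = ExecutionTarget("local", has_gpu=False, cost_per_hour=0.0)
cloud = ExecutionTarget("cloud-aws", has_gpu=True, cost_per_hour=3.06)

print(choose_target([local, cloud]).name)  # → local
```

With `prefer_cheap=False` the same helper would favor the GPU-equipped cloud target instead, which matches the "save money by running locally as much as possible" scenario above.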
OpenAIDE High-level Brain Architecture
At this level the brain is a black box. This 'brain' may be just a simple CNN model or something much more complex. As you can see, there are similarities between the so-called passive (thinking) and active (doing) brains. The design attempts to exploit this to the greatest degree possible: at the very highest level, the brain just takes in input and generates output, and there is a vast amount of knowledge about how to work with and implement these kinds of brains.
The Monitor (perhaps misnamed) is included in these diagrams as the god-like combination of teacher, meta-feedback loop, and tuning and debugging tool that we think may be needed to generalize brains beyond single-task designs, and to elevate humans from explicitly instructing the brain about how to do things to a higher, more supervisory role.
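The black-box view described above can be sketched in a few lines: a brain that maps input to output, and a Monitor that wraps it to observe every step, playing the teacher/meta-feedback role. All names here are illustrative, not an actual OpenAIDE interface.

```python
# Illustrative sketch (not real OpenAIDE code) of the black-box brain
# plus a supervisory Monitor that records every input/output pair.
class Brain:
    """Black-box brain: takes input, generates output."""
    def __call__(self, x):
        return x * 2  # stand-in for a CNN, RNN, or any other model

class Monitor:
    """Supervisory wrapper: observes each step so it can teach, tune, or debug."""
    def __init__(self, brain):
        self.brain = brain
        self.history = []  # feedback/debug record

    def step(self, x):
        y = self.brain(x)
        self.history.append((x, y))
        return y

m = Monitor(Brain())
m.step(3)
m.step(5)
print(m.history)  # [(3, 6), (5, 10)]
```

Because the Monitor only sees inputs and outputs, it works the same way whether the wrapped brain is trivial (as here) or arbitrarily complex, which is the point of keeping the brain a black box at this level.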
Passive Brain (e.g. classification, predicting)
This brain receives notifications about data, whether training or runtime, and reports the results of its analysis of the data, e.g. makes some kind of predictions about it. The runtime brain may differ from the training brain, from which it is derived in some deterministic manner.
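A minimal sketch of that idea, under the assumptions just stated: a training brain is notified of labeled data, and a runtime brain is derived from it deterministically (here, by freezing a learned lookup table). The class and method names are hypothetical, not a real OpenAIDE interface.

```python
# Hypothetical passive-brain sketch: training brain receives data
# notifications; a deterministic derivation produces the runtime brain.
class TrainingBrain:
    def __init__(self):
        self.counts = {}  # label frequencies observed per feature value

    def notify(self, feature, label):
        """Notification about one labeled training example."""
        self.counts.setdefault(feature, {}).setdefault(label, 0)
        self.counts[feature][label] += 1

    def derive_runtime(self):
        """Deterministic derivation: keep the most frequent label per feature."""
        table = {f: max(labels, key=labels.get)
                 for f, labels in self.counts.items()}
        return RuntimeBrain(table)

class RuntimeBrain:
    def __init__(self, table):
        self.table = table  # frozen; differs from the training brain

    def predict(self, feature):
        return self.table.get(feature)

t = TrainingBrain()
for f, l in [("red", "apple"), ("red", "apple"), ("yellow", "banana")]:
    t.notify(f, l)
print(t.derive_runtime().predict("red"))  # apple
```

The runtime brain here is deliberately a different object than the training brain, illustrating the "may differ, but is deterministically derived" relationship described above.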
Active Brain (e.g. working, playing, interacting)
This brain inquires about its environment, receives events about changes to the environment, and takes actions based on its analysis of the situation. The environment is separated into two distinct entities: there is at present no clear advantage to combining them and, in fact, keeping them separate appears to offer clear benefits for generalizing the brain (i.e. it lets us exploit the similarities with the passive brain).
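The active-brain loop with its two separate environment entities can be sketched as follows: one entity the brain queries directly (inquiry), and one that delivers change events. Everything here is an illustrative assumption, not real OpenAIDE code.

```python
# Hedged sketch of the active brain: the environment is split into a
# queryable entity and an event source, mirroring the separation above.
class QueryableWorld:
    """The part of the environment the brain can inquire about."""
    def __init__(self, state):
        self.state = state

    def inquire(self):
        return self.state

class EventSource:
    """The part of the environment that pushes change events."""
    def __init__(self, events):
        self.events = list(events)

    def next_event(self):
        return self.events.pop(0) if self.events else None

class ActiveBrain:
    def act(self, observation, event):
        # Stand-in analysis: advance when the event exceeds what we observe.
        return "advance" if event is not None and event > observation else "hold"

world = QueryableWorld(state=2)
events = EventSource([5, 1])
brain = ActiveBrain()
actions = [brain.act(world.inquire(), events.next_event()) for _ in range(2)]
print(actions)  # ['advance', 'hold']
```

Note that `ActiveBrain.act` is shaped just like the passive brain's input→output mapping; that structural similarity is exactly what keeping the two environment entities separate is meant to preserve.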