BrainFrame: An Operating System That Comes With A Smart Vision AI Developers Kit

Imagine embarking on a real-time computer vision project and, within a few hours and without writing code, building a traffic control system, a warehouse monitoring system, or an in-store point-of-sale optimization system. Like the apps built on top of smartphone operating systems, these smart computer vision projects can use a multitude of proprietary and vendor algorithms. Because they are built on top of BrainFrame, an operating system for computer vision that comes with a Smart Vision AI Developers Kit, they take a fraction of the time to build that other computer vision projects do.

BrainFrame is one of the core products of Aotu, a company started by two founders, Stephen Li and Alex Thiel. Stephen applied his experience building out the Android operating system to BrainFrame. In collaboration with leading chipmakers such as Intel and Nvidia, BrainFrame is positioning itself to take center stage as more developers rush into the space to experiment with computer vision applications across a variety of industries.

Recently, BrainFrame received the Nvidia Metropolis Certification, and Aotu, in partnership with AAEON and Intel, just announced the release of its Smart Vision AI Developers Kit on the Intel AI Platform for IoT.

Stephen Li, CEO and Founder of Aotu, says, “Aotu initially focused on developing robotic solutions. As we completed early robotic projects, we found that computer vision was at the heart of what we were building and that you need great performance. We decided to figure out how to achieve that performance without writing a lot of code, which led to the creation of BrainFrame. We then realized the need for a developer’s toolkit to help developers customize and deploy computer vision projects quickly.”

Computer vision requires real-time video processing, but traditional AI-based video processing often involves delay. Sending real-time data to Cloud servers, where AI algorithms make sense of massive amounts of data, carries high costs, and the delayed response time from traversing endpoints on the network can limit the use cases for video processing.

For instance, a Smart City application in which traffic cameras detect traffic violations in real time can suffer significant performance issues when sending data to and from the Cloud. This type of delay may not matter if the use case is not time-sensitive. But if a traffic accident has just happened and an ambulance needs to be alerted immediately, the delay can render the application useless.

AI is increasingly being deployed on the Edge as well as over the Cloud. Aotu, when developing BrainFrame, made sure that it functions on the Cloud, on the Edge, and as a hybrid solution. BrainFrame is designed to be an operating system for computer vision, much like Android OS is for smart mobile applications. Intelligent VisionCapsules, which are AI algorithms encapsulated in an open-source format, enable the BrainFrame engine to run algorithms against real-time streaming video with minimal coding.
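As a purely illustrative sketch (the class and method names below are hypothetical, not the actual OpenVisionCapsules API), a capsule-style plugin can be thought of as a small class that the engine discovers, instantiates, and then invokes for every decoded video frame:

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """One object found in a frame: a class name and a pixel bounding box."""
    class_name: str
    bbox: tuple  # (x1, y1, x2, y2)


class VehicleCapsule:
    """Hypothetical capsule: the engine creates it once, then calls
    process_frame() for every decoded frame of every attached stream."""
    name = "detector_vehicle"

    def process_frame(self, frame):
        # A real capsule would run a neural network here; this stub
        # returns a fixed detection so the data flow stays visible.
        return [Detection("vehicle", (10, 20, 110, 220))]


def run_engine(capsules, frames):
    """Engine side: fan each frame out to every loaded capsule and
    collect all detections."""
    results = []
    for frame in frames:
        for capsule in capsules:
            results.extend(capsule.process_frame(frame))
    return results
```

The point of the pattern is that adding a new algorithm means dropping in a new capsule; the engine's frame loop never changes.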

For instance, in the above example of detecting traffic accidents, an intelligent VisionCapsule can be easily created in hours and deployed in seconds to recognize an accident. This VisionCapsule can be deployed as an Edge AI solution to enable an emergency response system that alerts emergency personnel to verify the scene of an accident and send help.

Alex Thiel says, “Initially when we started, we had a customer that needed 400 simultaneously processed cameras. We figured out how to run BrainFrame across multiple computers in parallel then stream all of the results in a way that makes sense.”
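The exact mechanism Aotu built is not described here, but one simple way to spread many camera streams across a pool of worker machines is round-robin sharding. The sketch below (function and names invented for illustration) assigns 400 cameras evenly across 8 workers:

```python
def shard_cameras(camera_ids, num_workers):
    """Round-robin assignment of camera streams to worker machines.
    Worker i handles every num_workers-th camera, so load stays even."""
    shards = [[] for _ in range(num_workers)]
    for i, cam in enumerate(camera_ids):
        shards[i % num_workers].append(cam)
    return shards


cameras = [f"cam-{n:03d}" for n in range(400)]
shards = shard_cameras(cameras, num_workers=8)
# Eight shards of 50 cameras each; each worker processes its shard and
# streams results back to a central service for merging.
```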

In the area of computer vision, privacy is also a considerable concern. By processing some of the personalized intelligence on the Edge instead of sending it to the Cloud, an inherent level of privacy can be maintained. More and more, companies building computer vision systems realize the importance of safeguarding individual privacy. They define specific use cases using more objective criteria and process intelligence on the Edge to achieve the desired level of privacy.
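One hedged illustration of this idea (not BrainFrame's actual implementation): reduce detections to anonymous aggregates on the edge device, so that frames, crops, and identities never leave it.

```python
def summarize_on_edge(detections):
    """Privacy-preserving reduction: keep only per-class counts,
    discarding frames, image crops, and anything personally identifying,
    before the summary leaves the edge device for the cloud."""
    counts = {}
    for det in detections:
        counts[det["class"]] = counts.get(det["class"], 0) + 1
    return counts


# Only {"person": 2, "vehicle": 1} would be transmitted; the face crop
# attached to the first detection stays on the device.
summary = summarize_on_edge([
    {"class": "person", "face_crop": b"\x00\x01"},
    {"class": "person"},
    {"class": "vehicle"},
])
```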

Stephen Li says, “Privacy is viewed differently in Europe, China, etc. This is one of the reasons we built BrainFrame to work on the Edge, over the Cloud, and as a Hybrid model. This way, companies can choose to deploy solutions at the level of Privacy they are comfortable with.”

Just like in the early days of Big Data, when we moved beyond relational databases into other types of databases and large data warehouses, there’s currently a movement toward Edge AI from the Cloud. Many companies are increasingly using the Hybrid model of Edge and the Cloud to host a variety of AI capabilities for specific use cases. For machine learning engineers, data scientists, and application developers, keeping up with the stack of technologies is not easy.

Udacity’s program for Edge AI is a collaboration with Intel to embrace the developer community. BrainFrame is now part of the program, and Stephen’s team regularly presents to demonstrate the power of Edge AI applied to computer vision. BrainFrame can shorten the time developers spend creating, using, and testing computer vision algorithms. From a product development perspective, developers can run more experiments for specific implementations in a shorter amount of time. What once took weeks to write, test, and implement can now be done in a few days.

Since, in many instances, Edge AI’s computer vision use cases are not particularly clear, the ability to experiment and iterate through different use cases in a short amount of time can be beneficial.

Alex Thiel says, “We have many VisionCapsules that are ready for customers to use. It’s as simple as a drag and drop. Developers who participate in the training will develop their own projects using BrainFrame. With VisionCapsule sharing, a rich repository can be created and shared.”

Embedded computing is a little-known industry of highly customized computers that operate inside warehouses, manufacturing facilities, vehicles, and restaurant chains. Chip manufacturers typically provide upgrades to existing chips every year. It is embedded computing manufacturers such as AAEON Technologies Inc. that work to ensure that large commercial clients can keep version control and upgrade their hardware and software when they are ready.

Jason Lu says, “For our clients, it’s important to control both the electronics and mechanical that goes into their systems. The same applies to the software that’s running on top of it. We provide the consistency that allows the clients to use the same hardware and software for 15 to 20 years if they don’t want to upgrade to the newest version.”

Edge AI has become increasingly popular in commercial settings, partly due to the growing need for computer vision in these systems. A warehouse monitoring system might require computer vision to monitor the parts, materials, and tools used during manufacturing. An access control system can recognize who is at the door so that it can automatically open the door for company employees; in addition, an alarm can be triggered if unverified individuals are present or if emergencies or suspicious activities are detected. The biggest issues for manufacturers and large-scale system integrators are the stability of these systems and the development time for AI solutions.

Historically, computer vision projects have required copious amounts of software engineering time. With BrainFrame, however, manufacturers can define use cases and design a solution quickly, without much coding. Operational use cases can be tested readily instead of going through iterations with software development teams or finding vendor software to buy. For example, a License Plate Recognition solution for a parking structure would normally require a system integrator to pull in software engineers and AI scientists to build a software module. With the BrainFrame platform, those resources are not required.

Jason Lu says, “Before BrainFrame, you would need software engineering to develop the vision processing software and train the vision models to recognize the license plate. Now, you don’t have to program; you just configure instead. That saves a lot of time and effort.”
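To illustrate what “configure instead of program” can look like, here is a hypothetical pipeline definition. The field names are invented for this sketch and are not BrainFrame’s actual configuration schema:

```yaml
# Hypothetical configure-not-program setup for a parking structure.
# All field names are illustrative assumptions, not BrainFrame's schema.
streams:
  - name: garage-entrance
    url: rtsp://10.0.0.12/entrance
capsules:
  - detector_license_plate     # pre-built capsule, dropped in, not coded
  - recognizer_plate_text
alerts:
  - when: plate_text in watchlist
    action: open_gate
deployment: edge               # or: cloud, hybrid
```

A system integrator edits entries like these, while the detection and recognition logic stays inside pre-built VisionCapsules.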

The future of intelligent computer vision relies on companies consistently experimenting with new use cases as operating and business environments change. Only with the rapid deployment of use cases can intelligent computer vision take off across many industries. Whether Aotu’s BrainFrame will become the OS for intelligent computer vision remains to be seen, but it is certainly off to a great start. Developers can give it a try using Intel’s IoT Development Platform.

Stephen Li says, “If you look at the world today, you go to the factory, and you go to the restaurant, you go to the grocery store, there are many use cases you have to optimize for the workflow. You have people right now watching to make sure things are done right. But that’s very tedious and fatiguing work. That work is perfect for machines to do. We want to empower people to build systems to optimize these workflows with smart computer vision.”

Originally posted by Jun Wu