AWS Machine Learning Inference at the Edge

AWS has combined three of its technologies into an innovative new service: Machine Learning Inference at the Edge.

  • IoT (Internet of Things) — internet-ready devices that communicate with the cloud and report information about their usage.
  • Machine Learning — systems trained to draw conclusions from raw data, useful for giving customers recommendations such as where to shop and what to purchase.
  • Edge Computing — running compute resources in distributed locations, kept in sync and able to make decisions with little to no connectivity to the cloud.

With Machine Learning Inference at the Edge, you have all three services linked together. This service uses AWS Greengrass to help you build, train, and test your Machine Learning models in the cloud before you deploy them to IoT devices that consume little power and connect online only intermittently, such as those running in factories, vehicles, mines, and homes.
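The pattern above — infer locally, then sync with the cloud when a connection is available — can be sketched in plain Python. This is an illustrative stand-in, not the Greengrass API: `predict`, `EdgeBuffer`, and the threshold are hypothetical placeholders for a locally loaded model and a device-side result queue.

```python
import json
from collections import deque

def predict(sample):
    """Stand-in for a locally loaded ML model (e.g. TensorFlow or MXNet).
    Here we flag a reading as anomalous past a placeholder threshold."""
    return {"anomaly": sample["power_draw"] > 90}

class EdgeBuffer:
    """Buffers inference results while the device is offline."""
    def __init__(self):
        self.pending = deque()

    def record(self, result):
        self.pending.append(result)

    def flush(self, publish, online):
        # Publish buffered results only when connectivity returns.
        while online and self.pending:
            publish(self.pending.popleft())

# Inference happens on the device, even with no cloud connection.
readings = [{"power_draw": 50}, {"power_draw": 95}]
buf = EdgeBuffer()
for r in readings:
    buf.record(predict(r))

# Once the link is back, sync the buffered results upstream.
sent = []
buf.flush(sent.append, online=True)
print(json.dumps(sent))  # → [{"anomaly": false}, {"anomaly": true}]
```

The key design point is that the model runs where the data is generated, so a dropped connection delays reporting but never stops inference.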

Greengrass ML Inference can be used for physical security, such as smart devices that detect events, objects, and faces. It can also be used to maintain and monitor industrial machines, triggering on power consumption, noise levels, or other anomalies.

This new service has several key features:

  • Machine Learning models — precompiled TensorFlow and Apache MXNet libraries that can take advantage of GPU and FPGA hardware accelerators.
  • Model Building and Training — use Amazon SageMaker and other cloud ML tools to create, train, and test your models prior to deployment.
  • Model Deployment — AWS Greengrass groups can reference SageMaker models. You may also reference models stored in your S3 buckets.
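As a rough sketch of the deployment step, the Greengrass (V1) API lets a group reference a model archive in S3 as a machine-learning resource; Greengrass then extracts it to a local path on the device. The bucket name, paths, and IDs below are placeholder assumptions, and the actual `boto3` call is shown commented out since it requires AWS credentials.

```python
import json

# A resource definition pointing Greengrass at a model archive in S3.
# Greengrass downloads the archive and extracts it to DestinationPath
# on the device, where a local Lambda function can load it.
ml_resource = {
    "Id": "my-model",                 # placeholder resource ID
    "Name": "ImageClassifier",        # placeholder resource name
    "ResourceDataContainer": {
        "S3MachineLearningModelResourceData": {
            "S3Uri": "s3://my-bucket/models/classifier.tar.gz",  # placeholder
            "DestinationPath": "/ml/model",
        }
    },
}

# With boto3 installed and credentials configured, this would register
# the resource with Greengrass (sketch, not run here):
# import boto3
# client = boto3.client("greengrass")
# client.create_resource_definition(
#     InitialVersion={"Resources": [ml_resource]})

print(json.dumps(ml_resource, indent=2))
```

A SageMaker-trained model can be referenced the same way by swapping in a `SageMakerMachineLearningModelResourceData` container that names the training job instead of an S3 URI.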

To learn more about this service, you can check out the ML Inference documentation. If you would like to see how this technology can apply to your particular business, consult with our AWS experts at PolarSeven today.