---

copyright:
  years: 2024

lastupdated: "2024-01-30"

keywords: satellite, hybrid, multicloud, edge, use case, machine learning

subcollection: satellite

---

{{site.data.keyword.attribute-definition-list}}

# Machine learning at the edge with {{site.data.keyword.satelliteshort}}
{: #edge-usecase}

Create an {{site.data.keyword.satellitelong}} location with {{site.data.keyword.redhat_openshift_notm}} clusters on compute infrastructure that is deployed at the edge, near your Internet of Things (IoT) devices. Then, through your {{site.data.keyword.satelliteshort}} location, your apps can access a suite of {{site.data.keyword.cloud_notm}} artificial intelligence (AI) and machine learning services to maximize the value of your data wherever the data is located.
{: shortdesc}

## Challenges of edge workloads
{: #edge-challenges}

Common challenges of edge workloads include training machine learning models and using predictive model inferencing. With {{site.data.keyword.satelliteshort}}, you can access the {{site.data.keyword.cloud_notm}} services that address these challenges where your edge workloads actually run.
{: shortdesc}

Training a machine learning model
:   Training a machine learning model typically requires significant compute resources for memory, GPU, and storage. Instead of installing and managing model-training software on your compute infrastructure, you can add the compute infrastructure to a {{site.data.keyword.satelliteshort}} location. Then, you can access {{site.data.keyword.cpd_full}}, which includes tools such as {{site.data.keyword.DSX_short}} and {{site.data.keyword.pm_full}} for data analysis and model training. By accessing these tools as cloud services, you simplify the installation and management of the software. You can also use these same cloud services across all your edge infrastructure, no matter the underlying infrastructure provider.
Model inferencing
:   Model inferencing is the task of using a trained model to make predictions, detect anomalies, and categorize data from your edge environment. Because of memory, storage, and latency requirements, model inferencing is most effective when it runs as near to your IoT sensors and other data sources as possible. You can create a {{site.data.keyword.satelliteshort}} location with managed {{site.data.keyword.redhat_openshift_notm}} clusters where your data is located in your edge environments. Then, you can set up a serverless framework such as {{site.data.keyword.redhat_openshift_notm}} Serverless to provide a simplified programming model with a REST interface that your apps can query to get predictions from your trained model, as in the example that follows.
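
After a trained model is served behind a REST endpoint at the edge, an app can get a prediction with a single HTTP request. The following Python sketch is illustrative only; the endpoint URL and the JSON payload shape are hypothetical placeholders that depend on how you expose your model.

```python
import requests

# Hypothetical URL of the model's REST endpoint that runs in the
# Red Hat OpenShift cluster in your Satellite location at the edge.
INFERENCE_URL = "https://my-model.example.com/predict"

# Example sensor reading; the payload shape depends on your model.
payload = {"instances": [[21.5, 0.87, 340]]}

response = requests.post(INFERENCE_URL, json=payload, timeout=10)
response.raise_for_status()
print(response.json())  # for example: {"predictions": [0]}
```
{: codeblock}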

## Example solution with {{site.data.keyword.satelliteshort}}
{: #edge-solution}

While you can set up many possible solutions to the challenges of your edge environment, you can use {{site.data.keyword.satelliteshort}} to provide a consistent, scalable experience across environments. An example setup is as follows.
{: shortdesc}

1. Set up machine learning and model training for your data.
2. Deploy {{site.data.keyword.satelliteshort}} with a serverless component to your edge environment.
3. Run model inferencing at the edge.

### Step 1: Set up machine learning and model training for your data
{: #edge-example-ml}

As an AI model developer, you prepare your edge data with machine learning and AI tools in {{site.data.keyword.cloud_notm}}. Before you begin, you must have access to {{site.data.keyword.DSX}} and {{site.data.keyword.pm_short}} instances, such as in an {{site.data.keyword.cloud_notm}} account or through {{site.data.keyword.cpd_full_notm}}.
{: shortdesc}

1. Upload the training data to {{site.data.keyword.cos_full_notm}}.
2. Use {{site.data.keyword.DSX_short}} and {{site.data.keyword.pm_short}} to pull the training data from {{site.data.keyword.cos_full_notm}}, analyze the data, and train a model with TensorFlow, Keras, scikit-learn, or another popular machine learning framework.

   The trained model is saved back to {{site.data.keyword.cos_full_notm}}, so that the data does not take up storage space in your edge environment. For an outline of this flow in code, see the sketch that follows these steps.
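
The following Python sketch outlines this step under stated assumptions: it uses the IBM COS SDK (`ibm_boto3`) to upload the training data, trains a simple scikit-learn model, and saves the serialized model back to the same bucket. The bucket name, endpoint URL, credentials, file names, and the choice of a logistic regression model are hypothetical placeholders; in practice, you run the training part in your {{site.data.keyword.DSX_short}} notebook with your own data and framework.

```python
import ibm_boto3
import joblib
import pandas as pd
from ibm_botocore.client import Config
from sklearn.linear_model import LogisticRegression

# Placeholder credentials, endpoint, and bucket; substitute your own values.
cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="<api-key>",
    ibm_service_instance_id="<cos-instance-crn>",
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
)
BUCKET = "edge-training-data"

# 1. Upload the raw training data that was collected at the edge.
cos.upload_file(Filename="sensor-readings.csv", Bucket=BUCKET, Key="sensor-readings.csv")

# 2. In a notebook, pull the data back down, analyze it, and train a model.
cos.download_file(Bucket=BUCKET, Key="sensor-readings.csv", Filename="sensor-readings.csv")
df = pd.read_csv("sensor-readings.csv")
model = LogisticRegression().fit(df.drop(columns=["label"]), df["label"])

# 3. Save the trained model back to the bucket so that it does not take up
#    storage space in the edge environment.
joblib.dump(model, "model.joblib")
cos.upload_file(Filename="model.joblib", Bucket=BUCKET, Key="model.joblib")
```
{: codeblock}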

### Step 2: Deploy {{site.data.keyword.satelliteshort}} with a serverless component to your edge environment
{: #edge-example-serverless}
As the edge environment system administrator, you enable a serverless tool to simplify model inferencing at the edge.
{: shortdesc}

1. Create a {{site.data.keyword.satelliteshort}} location on your edge computing infrastructure.
2. Create a managed {{site.data.keyword.redhat_openshift_notm}} cluster in the {{site.data.keyword.satelliteshort}} location.
3. Access the {{site.data.keyword.redhat_openshift_notm}} web console.
4. From the OperatorHub, install the {{site.data.keyword.redhat_openshift_notm}} Serverless operator.
5. Install Knative Serving. For a scripted alternative to this step, see the sketch that follows.

You deployed {{site.data.keyword.satelliteshort}} with a serverless component.
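
If you prefer to script the Knative Serving part of this setup instead of clicking through the web console, a sketch along the following lines with the Kubernetes Python client might work after the {{site.data.keyword.redhat_openshift_notm}} Serverless operator is installed. The API group and version (`operator.knative.dev/v1beta1`), the `knative-serving` namespace, and the resource name are assumptions based on recent OpenShift Serverless releases; check the operator documentation for your version.

```python
from kubernetes import client, config

# Assumes that your kubeconfig already points at the cluster in the
# Satellite location (for example, after you log in with `oc login`),
# and that the knative-serving project already exists.
config.load_kube_config()
custom_api = client.CustomObjectsApi()

# The KnativeServing custom resource that the OpenShift Serverless
# operator reconciles into a Knative Serving installation.
knative_serving = {
    "apiVersion": "operator.knative.dev/v1beta1",
    "kind": "KnativeServing",
    "metadata": {"name": "knative-serving", "namespace": "knative-serving"},
}

custom_api.create_namespaced_custom_object(
    group="operator.knative.dev",
    version="v1beta1",
    namespace="knative-serving",
    plural="knativeservings",
    body=knative_serving,
)
```
{: codeblock}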

### Step 3: Run model inferencing at the edge
{: #edge-example-inferencing}

As the AI developer, run model inferencing on your edge data by using the serverless processing that the edge administrator set up.
{: shortdesc}

1. Download the trained model from {{site.data.keyword.cos_full_notm}} to your local development environment.
2. Create a Knative-compliant container image that serves the model. A minimal example app is shown after these steps.
3. Deploy the image to the {{site.data.keyword.redhat_openshift_notm}} Serverless instance that runs in your {{site.data.keyword.satelliteshort}} cluster. You can use the {{site.data.keyword.redhat_openshift_notm}} web console in the Developer perspective, or use the `kn` command line tool for {{site.data.keyword.redhat_openshift_notm}} Serverless.
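
As an illustration of what such a container image might run, the following minimal sketch assumes a scikit-learn model that is serialized with joblib and served with the Flask library; a Knative-compliant image simply needs to listen for HTTP traffic on the port that is passed in the `PORT` environment variable. The file name, route, and payload shape are hypothetical placeholders.

```python
import os

import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)

# Trained model that was downloaded from Object Storage and copied into the image.
model = joblib.load("model.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body such as {"instances": [[21.5, 0.87, 340]]}.
    instances = request.get_json()["instances"]
    predictions = model.predict(instances).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    # Knative passes the port to listen on through the PORT variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```
{: codeblock}

You would package an app like this into a container image that installs its dependencies (for example, Flask, scikit-learn, and joblib), and then deploy that image as a Knative service.
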
Now, you have a managed {{site.data.keyword.satelliteshort}} location that runs on your edge environment and performs on-demand model inferencing for your edge data through your AI-trained model and {{site.data.keyword.redhat_openshift_notm}} Serverless.