Development Environment

Regarding the Web API development strategy, there are many technologies that could be used, such as ASP.NET Core or Node.js. In this article we choose ASP.NET Core Web API, as it is part of the Microsoft technology stack.

  1. MS Visual Studio. At the time of this demo I'm using MS Visual Studio 2017.
  2. As we are working on AWS, the AWS Toolkit for Visual Studio 2017 and 2019 is an essential tool: it provides pre-defined templates to create a new project for an AWS Lambda function (Web API with .NET Core) and also supports publishing the function from your local machine to AWS directly from Visual Studio.
  3. To use the AWS SDK for .NET with .NET Core, you can use the AWSSDK.Extensions.NETCore.Setup NuGet package (see the command sketch after this list). Ref here.
  4. As we are working on microservices and related disciplines, Docker is one of the most powerful platforms supporting this at the moment. We need to install Docker Desktop for Windows on your local Windows machine.
  5. Install the Linux Bash shell on Windows 10 (Windows Subsystem for Linux). This article shows how to install it step by step.
  6. Install the AWS Command Line Interface (AWS CLI) version 2 on Linux, as described in the following article (the install commands are also sketched after this list).
  7. Install Docker in the Windows Subsystem for Linux, as described in the following article. Note: this step is optional if you have already installed Docker Desktop. If Docker is already running on the Windows system, you can make it usable from the Windows Subsystem for Linux Bash shell as follows:
    • Docker can expose a TCP endpoint which the CLI can attach to.
      • This TCP endpoint is turned off by default; to activate it, right-click the Docker icon in your Windows taskbar and choose Settings, and tick the box next to "Expose daemon on tcp://localhost:2375 without TLS".
      • With that done, all we need to do is instruct the CLI under Bash to connect to the engine running under Windows instead of to the non-existing engine running under Bash, like this:
      $ docker -H tcp://localhost:2375 images
      • There are two ways to make this permanent: either add an alias for the above command or export an environment variable which instructs Docker where to find the host engine:
      $ echo "export DOCKER_HOST='tcp://localhost:2375'" >> ~/.bashrc
      $ source ~/.bashrc
  8. To connect to the EC2 Linux environment from your local Windows machine, we use PuTTY. This link shows you how to do this.
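
To add the SDK extension package from item (3) via the command line, here is a minimal sketch, assuming the .NET Core CLI is installed and the command is run from the Web API project directory:

      $ dotnet add package AWSSDK.Extensions.NETCore.Setup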
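
The AWS CLI version 2 install from item (6) comes down to a few commands in the WSL Bash shell; a sketch, assuming an x86_64 Linux environment with curl and unzip available:

      $ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
      $ unzip awscliv2.zip
      $ sudo ./aws/install
      $ aws --version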

Microservice hosted in AWS

AWS has integrated building blocks that support the development of microservices. Popular approaches are:

  • AWS Lambda
    • No administration of infrastructure needed.
    • Supports several programming languages.
    • Can be triggered by other AWS services or called directly from an external application.
    • Limitations: Ref here.
  • AWS Elastic Container Service (ECS), including EC2 instances and Fargate, is a logical grouping (cluster) of virtual machines/instances and the related storage, memory and CPU management (a short CLI sketch contrasting the two launch types appears after this list).
    • ECS EC2:
      • Remote (virtual) machines.
      • Communicate with ECS via a container agent running on each of the instances.
      • Docker container-based deployment.
      • Deploy and manage both applications and infrastructure.
      • Pay for EC2 instances.
    • Fargate:
      • No EC2 instances to provision or manage.
      • Supports Docker and enables you to run and manage Docker containers (hosts are managed by AWS).
      • Deploy and manage applications, not infrastructure.
      • Pay for requested compute resources when used.
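
The practical difference between the two launch types shows up as a single flag when creating a service with the AWS CLI. A minimal sketch, where the cluster, service, task definition, subnet and security group names are placeholders:

      # EC2 launch type: tasks run on container instances you manage yourself
      $ aws ecs create-service --cluster demo-cluster --service-name demo-svc \
          --task-definition demo-task --desired-count 2 --launch-type EC2

      # Fargate launch type: AWS provisions the compute, so a VPC network configuration is required
      $ aws ecs create-service --cluster demo-cluster --service-name demo-svc \
          --task-definition demo-task --desired-count 2 --launch-type FARGATE \
          --network-configuration "awsvpcConfiguration={subnets=[subnet-1234],securityGroups=[sg-1234],assignPublicIp=ENABLED}"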

Demo - Application Architecture

High level

A screenshot of the High level Application Architecture

Application Architecture - Deploy a simple microservice (Web APIs) to AWS ECS using EC2.

A screenshot of the High level Application Architecture

  • Elastic Load Balancing service
    • 3 types: Application Load Balancer (ALB), Network Load Balancer (NLB) and Classic Load Balancer.
    • ALB is used by default if you are following the ECS wizard.
  • AWS API Gateway
    • Authentication & authorization.
    • Throttling.
    • Caching responses.
    • API lifecycle management: dev, QA, prod.
    • SDK generation.
    • API operations monitoring: API calls, latency & error rates.
    • CloudWatch alarms for abnormal API behaviors.
    • API keys for 3rd-party devs.
    • Works only with Network Load Balancing, not Application Load Balancing.
    • Reference here for the architecture "Using API Gateway as a Single Entry Point for Web Applications and API Microservices".

Steps to work through

  1. Create your custom cluster in ECS
  2. Create the API project(s) (using the ASP.NET Core Web API project type) with a Dockerfile to build and package each API project into a Docker image stored in the local Docker repository.
  3. Create a task definition (defining the Docker container(s), with memory and CPU allocation, from the image(s) built in (2) above).
  4. Create build and deploy scripts (using the AWS CLI with Bash); rough sketches of such scripts appear after this list
  5. Register the task definition with ECS
  6. Push the built Docker images to the ECR (Elastic Container Registry) repository
  7. Create Elastic Load Balancer
  8. Create two EC2 instances from an ECS-optimized AMI. With the ECS-optimized AMI, the container agent is automatically deployed on the instance. To launch your container instance into a non-default cluster, choose the Advanced Details list, then paste the following script into the User data field, replacing your_cluster_name with the name of your cluster. Note that Amazon EC2 user data scripts are executed only once, when the instance is first launched.
       #!/bin/bash
       echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
  9. Pull the jwilder/nginx-proxy Docker image from Docker's public registry. It sets up a new Docker container running nginx and docker-gen, which enables our no-touch deployments: docker-gen regenerates the nginx reverse-proxy configuration each time a new Docker app is deployed.
       $ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
  10. Create a new service with 2 tasks, using the task definition created above and load balancing (type: Network Load Balancer). The tasks are created automatically and pull the Docker images from the ECR repository onto the EC2 instances to create the corresponding Docker containers.
  11. Check the ECS load balancer: a new target group was auto-created with the existing listener port (80). There are 2 targets inside the target group.
  12. Manually:
    • Inside the load balancer, create a new listener on port 1025.
    • Create a new target group and add the 2 remaining running Docker containers as 2 targets in this target group. Note: make sure the listening ports are correct.
    • Associate the listener with the target group
  13. Create a new API Gateway
  14. Create a new VPC link as the bridge between the API Gateway and the Network Load Balancer
  15. Create the API Gateway resources
  16. For the client to be able to call your API Gateway, you must create a deployment and associate a stage with it. Ref here
  17. Test the deployed APIs, for example by calling the API Gateway invoke URL (a sketch of steps 13 to 17 appears after this list).
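
A rough Bash sketch of the build-and-push part of the scripts from step 4, covering steps 2 and 6 (the account ID, region, repository and image names are placeholders):

       #!/bin/bash
       # Build the ASP.NET Core Web API image from the project's Dockerfile (step 2)
       docker build -t demo-api .

       # Log in to ECR and push the image to the repository (step 6)
       aws ecr get-login-password --region ap-southeast-1 | \
           docker login --username AWS --password-stdin 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com
       docker tag demo-api:latest 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/demo-api:latest
       docker push 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/demo-api:latest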
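
A similar sketch of the cluster and service side (steps 1, 3, 5 and 10), assuming the task definition JSON is kept in a local file named task-definition.json and the NLB target group has already been created:

       #!/bin/bash
       # Create the custom cluster (step 1)
       aws ecs create-cluster --cluster-name demo-cluster

       # Register the task definition described in task-definition.json (steps 3 and 5)
       aws ecs register-task-definition --cli-input-json file://task-definition.json

       # Create the service with 2 tasks behind the NLB target group (step 10)
       aws ecs create-service --cluster demo-cluster --service-name demo-svc \
           --task-definition demo-task --desired-count 2 --launch-type EC2 \
           --load-balancers "targetGroupArn=<target-group-arn>,containerName=demo-api,containerPort=80"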
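
Finally, a hedged sketch of the API Gateway and testing steps (13 to 17); the region, load balancer ARN, API ID and resource path are placeholders and would normally be taken from earlier CLI output:

       #!/bin/bash
       # Create the VPC link that bridges API Gateway and the Network Load Balancer (step 14)
       aws apigateway create-vpc-link --name demo-vpclink \
           --target-arns <network-load-balancer-arn>

       # Deploy the API to a stage so clients can call it (step 16)
       aws apigateway create-deployment --rest-api-id <rest-api-id> --stage-name dev

       # Smoke test through the API Gateway invoke URL (step 17)
       curl https://<rest-api-id>.execute-api.ap-southeast-1.amazonaws.com/dev/<resource-path>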