Steps for an AWS Hosted VPC running in the customer's AWS account:

1. Customers can specify Grid node settings, including instance type, region, spot pricing, Elastic IP, and AWS tags. These settings are applied to an Auto Scaling Group (ASG) in the customer's account.
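As a rough sketch, the customer-specified settings map onto Auto Scaling Group parameters along these lines. The function, group name, and exact parameter keys below are illustrative assumptions shaped after the EC2 Auto Scaling API, not Flood's actual implementation:

```python
# Sketch: assemble Auto Scaling Group parameters from customer grid settings.
# All names and values here are illustrative assumptions.

def build_asg_params(instance_type, region, spot_price, tags):
    """Build an ASG parameter dict from customer-specified node settings."""
    return {
        "AutoScalingGroupName": "flood-grid-nodes",  # hypothetical name
        "MixedInstancesPolicy": {
            "LaunchTemplate": {
                "Overrides": [{"InstanceType": instance_type}],
            },
            "InstancesDistribution": {
                # Spot pricing is opt-in; an empty string means on-demand only.
                "SpotMaxPrice": spot_price or "",
            },
        },
        # Customer AWS tags propagate to each launched grid node.
        "Tags": [{"Key": k, "Value": v, "PropagateAtLaunch": True}
                 for k, v in tags.items()],
        "AvailabilityZones": [f"{region}a"],
    }

params = build_asg_params("m5.xlarge", "us-west-2", "0.10", {"Team": "perf"})
```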

2. The ASG can be associated with a pre-existing, customer-specified Security Group (SG), allowing the customer to control ingress/egress on specific ports to/from their grid nodes.
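A minimal sketch of what such a Security Group rule set might look like. The port choices and rule structure are assumptions for illustration (shaped after EC2 security group rule objects), not a prescribed configuration:

```python
# Sketch: an example rule set for a grid-node Security Group.
# Ports and structure are illustrative assumptions.

def grid_node_sg_rules(allowed_cidr):
    """Egress HTTPS for results/control; ingress only from the customer's CIDR."""
    return {
        "egress": [
            # Outbound HTTPS so nodes can reach Flood's endpoints.
            {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        ],
        "ingress": [
            # Optional SSH, restricted to the customer's own network range.
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": allowed_cidr}]},
        ],
    }

rules = grid_node_sg_rules("10.0.0.0/16")
```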

3. Grid nodes are typically deployed within a subnet of a pre-existing, customer-specified Virtual Private Cloud (VPC). We recommend the use of private subnets for complete inbound isolation from the Internet.

4. Grid nodes require outbound HTTPS connectivity to several Internet endpoints via a customer-specified NAT Gateway. This connectivity is used for test control, file access, and reporting of aggregated time series results.
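The outbound targets described in this article can be summarized as a checklist. The two `flood.io` hostnames come from the steps below; the regional SQS and S3 hostnames are assumptions based on standard AWS endpoint naming, so confirm the exact list with Flood support before locking down egress rules:

```python
# Sketch: the outbound HTTPS targets a grid node reaches through the NAT
# Gateway. Regional AWS hostnames are illustrative assumptions.

FLOOD_ENDPOINTS = [
    ("drain.flood.io", 443),   # time series results
    ("beacon.flood.io", 443),  # health/status monitoring
]

def required_egress(region):
    """All outbound (host, port) targets, including regional SQS and S3."""
    return FLOOD_ENDPOINTS + [
        (f"sqs.{region}.amazonaws.com", 443),  # test control (start/stop)
        (f"s3.{region}.amazonaws.com", 443),   # test files and archived results
    ]

targets = required_egress("us-west-2")
```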

5. Flood maintains its own VPC, hosted in US West (Oregon), which hosts multiple public-facing endpoints behind Elastic Load Balancers (ELBs) in a public subnet.

6. The drain.flood.io ELB receives HTTPS traffic in the form of time series results from distributed grid nodes hosted by the customer.

7. The beacon.flood.io ELB receives HTTPS traffic in the form of health/status monitoring data from distributed grid nodes hosted by the customer.

8. The SQS endpoint operates as a dynamic, short-lived fan-out subscription for each grid node to the customer's own (multi-tenant, separated by account key) SNS endpoint published on Flood. This is used for test control (starting/stopping floods).
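The fan-out in step 8 can be sketched as follows: each grid node gets its own short-lived queue subscribed to the customer's per-account topic. The queue/topic naming scheme, the account ID, and the parameter shape (modeled on the SNS Subscribe API) are all hypothetical, not Flood's actual resources:

```python
# Sketch: per-node SQS queue subscribed to a per-account SNS topic.
# Queue/topic names and the account ID are hypothetical.

def fanout_subscription(account_key, node_id):
    """Build the per-node queue name and its SNS subscription parameters."""
    queue_name = f"flood-{account_key}-node-{node_id}"  # short-lived, one per node
    topic_arn = f"arn:aws:sns:us-west-2:000000000000:flood-{account_key}"
    return {
        "QueueName": queue_name,
        "Subscription": {
            "TopicArn": topic_arn,  # customer's own topic, separated by account key
            "Protocol": "sqs",
            "Endpoint": f"arn:aws:sqs:us-west-2:000000000000:{queue_name}",
        },
    }

sub = fanout_subscription("acme123", "i-0abc")
```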

9. Flood hosts multiple S3 buckets (multi-tenant, separated by account key) for the customer's test files and archived results.
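One common way to separate tenants by account key is to namespace object keys under a per-customer prefix. The prefix scheme below is purely an illustrative assumption about how such separation could be laid out:

```python
# Sketch: multi-tenant S3 object layout, namespaced by account key.
# The prefix scheme is an illustrative assumption.

def object_key(account_key, kind, filename):
    """Namespace each customer's objects under their account key."""
    assert kind in ("files", "archives"), "kind must be 'files' or 'archives'"
    return f"{account_key}/{kind}/{filename}"

key = object_key("acme123", "files", "load-test.jmx")
```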

10. Grid nodes need access to the customer's target application, which is typically hosted in the same subnet or reachable via appropriate Security Group ingress rules.
