What Are AWS Elastic Network Interfaces (ENIs), and How Do You Use Them?


Essentially, ENIs are virtual network adapters that you can attach to your EC2 instances. They provide network connectivity for your instances, and attaching more than one to an instance allows it to communicate on two different subnets.

What are Elastic Network Interfaces?

You are already using them if you are using EC2. The default interface, eth0, is associated with an ENI that was created when you launched the instance, and it handles all traffic sent to and from the instance.

However, you are not limited to a single network interface: attaching a secondary network interface lets you connect your EC2 instance to two networks at the same time, which can be very useful when designing your network architecture. You can use ENIs to host load balancers, proxy servers, and NAT servers on an EC2 instance, routing traffic from one subnet to another.
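As a rough illustration, here is how you might create and attach a secondary interface with boto3; the subnet, security group, and instance IDs below are placeholders for your own resources:

    import boto3

    ec2 = boto3.client("ec2")

    # Create a secondary ENI in the target subnet (placeholder IDs throughout).
    eni = ec2.create_network_interface(
        SubnetId="subnet-0123456789abcdef0",
        Groups=["sg-0123456789abcdef0"],
        Description="Secondary interface for the management subnet",
    )
    eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

    # Attach it to an existing instance as eth1 (DeviceIndex 1).
    ec2.attach_network_interface(
        NetworkInterfaceId=eni_id,
        InstanceId="i-0123456789abcdef0",
        DeviceIndex=1,
    )

Depending on the AMI, you may also need to configure the operating system on the instance to bring up the new interface.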

ENIs have their own security groups, just like EC2 instances, which act as a built-in firewall. You can use them, rather than a Linux firewall like iptables, to control inter-subnet traffic.
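For example, if you decide to manage that traffic with security groups instead of iptables, you can swap the groups on an existing interface directly (a minimal sketch; the IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Replace the security groups attached to an existing ENI (placeholder IDs).
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId="eni-0123456789abcdef0",
        Groups=["sg-0aaaabbbbccccdddd"],
    )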

A common use case for ENIs is the creation of management networks. This allows you to keep public-facing applications such as web servers in a public subnet, while locking down SSH access to a private subnet reached through a secondary network interface. In this scenario, you connect to the private management subnet over a VPN, and then administer your servers as usual.

Diagram: using a secondary ENI to create a management network.

In this diagram, the subnet on the left is the public subnet, which communicates with the Internet through the VPC's Internet gateway. The subnet on the right is the private management subnet, which in this example is reachable only through an AWS Direct Connect gateway, letting the on-premises network handle authentication and simply extending that network into the cloud. You could also use AWS Client VPN, which runs a managed VPN endpoint that clients connect to using certificate-based credentials.

ENIs are also often used as the primary network interfaces for Docker containers launched on ECS using Fargate. This allows Fargate tasks to manage complex networks, set up firewalls using security groups, and be launched into private subnets.
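For instance, when launching a Fargate task, you tell ECS which subnets and security groups the task's ENI should use. A minimal sketch with boto3; the cluster, task definition, and IDs are placeholders:

    import boto3

    ecs = boto3.client("ecs")

    # Run a Fargate task whose ENI is placed in a private subnet with a chosen
    # security group (all names and IDs below are placeholders).
    ecs.run_task(
        cluster="my-cluster",
        launchType="FARGATE",
        taskDefinition="my-task:1",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
    )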


According to AWS, ENIs have the following attributes (you can inspect them programmatically, as shown after the list):

A primary private IPv4 address from your VPC’s IPv4 address range
One or more secondary private IPv4 addresses from your VPC’s IPv4 address range
One Elastic IP (IPv4) address per private IPv4 address
A public IPv4 address
One or more IPv6 addresses
One or more security groups
A MAC address
A source/destination check flag
A description
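If you want to see these attributes for one of your own interfaces, one way is to query them with the SDK (a minimal sketch; the ENI ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Look up a single ENI and print a few of its attributes (placeholder ID).
    response = ec2.describe_network_interfaces(
        NetworkInterfaceIds=["eni-0123456789abcdef0"]
    )
    eni = response["NetworkInterfaces"][0]
    print(eni["PrivateIpAddress"], eni["MacAddress"], eni["SourceDestCheck"])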

ENIs themselves are free, although standard AWS data transfer charges still apply.

Implement inexpensive failover with ENIs

Because ENIs can be detached from one instance and attached to another on the fly, they are commonly used to implement failover in network design. If you are running a service that requires high availability, you can run two servers: a primary and a standby. If the primary server goes down for any reason, the service can fail over to the standby.

ENIs make this model quite easy: launch two servers, create a secondary ENI to act as the "floating" interface, and attach it to the primary server, optionally associating it with an Elastic IP address. Whenever you need to switch to the standby instance, all you need to do is move the ENI over, either manually or with a script.
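A minimal sketch of such a failover script, assuming a dedicated floating ENI and a standby instance (both IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    ENI_ID = "eni-0123456789abcdef0"          # the floating interface (placeholder)
    STANDBY_INSTANCE = "i-0fedcba987654321"   # the standby server (placeholder)

    # Detach the ENI from the failed primary, if it is still attached.
    eni = ec2.describe_network_interfaces(
        NetworkInterfaceIds=[ENI_ID]
    )["NetworkInterfaces"][0]
    attachment = eni.get("Attachment")
    if attachment:
        ec2.detach_network_interface(AttachmentId=attachment["AttachmentId"], Force=True)
        ec2.get_waiter("network_interface_available").wait(NetworkInterfaceIds=[ENI_ID])

    # Reattach it to the standby instance as eth1.
    ec2.attach_network_interface(
        NetworkInterfaceId=ENI_ID,
        InstanceId=STANDBY_INSTANCE,
        DeviceIndex=1,
    )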

However, ENIs are not the best way to achieve this in the AWS ecosystem. AWS supports autoscaling, which can achieve the same effect more cost effectively. Rather than paying extra for an idle standby, you would instead run many smaller servers in an autoscaling fleet. If one of the instances goes down, that's okay: a replacement can be started quickly to pick up the traffic.

While it is easy to manually fail the ENI over to the standby instance, automating the failover process is much more complicated. You will need to configure a CloudWatch alarm on the primary instance that triggers when the instance goes down (possibly notifying you in the process), publish it to an SNS topic, and have that topic trigger a Lambda function that handles the detach and reattach using the AWS SDK. It is doable, but we strongly recommend that you consider using Route 53 DNS failover instead.
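As a rough sketch of the wiring involved (not a complete solution), you could create the alarm and the SNS subscription like this; the ARNs and instance ID are placeholders, and the Lambda function itself would contain detach-and-reattach logic similar to the script above:

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    sns = boto3.client("sns")

    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:eni-failover"               # placeholder
    LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:eni-failover"  # placeholder

    # Alarm on the primary instance's status checks and notify the SNS topic
    # when the instance fails (placeholder instance ID).
    cloudwatch.put_metric_alarm(
        AlarmName="primary-instance-down",
        Namespace="AWS/EC2",
        MetricName="StatusCheckFailed",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=[TOPIC_ARN],
    )

    # Subscribe the failover Lambda function to the topic.
    sns.subscribe(TopicArn=TOPIC_ARN, Protocol="lambda", Endpoint=LAMBDA_ARN)

You would also need to grant SNS permission to invoke the function and write the handler that performs the detach and reattach.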

If you want to automate the process, you can follow this AWS guide on how to bind a CloudWatch alarm to Lambda.
