How to Run Full Ethereum Geth Node on AWS EC2 with Nginx and SSL


A full Ethereum node is often necessary for development purposes or if you don’t want to rely on 3rd parties like Infura for blockchain access. Compared to the “Ethereum killers”, running a full ETH node is relatively affordable and requires only a basic dev ops skillset. In this blog post, I’ll describe a step-by-step process to set up a full Geth node on AWS EC2. We’ll discuss topics including hardware cost and requirements, synchronizing light nodes, and an NGINX proxy for connecting a Metamask wallet to your node using a secure HTTPS connection.

This tutorial covers Geth version 1.10.16 on Ubuntu 20.04. Only the first part is specific to AWS. The rest of the steps will be identical on any other Cloud VPS provider or proprietary server running Ubuntu.

We have a lot of ground to cover, so let’s get started!

Spinning up an EC2 instance

Start with provisioning a new EC2 instance. Go to EC2 > Instances > Launch instances. Select Ubuntu Server 20.04 LTS (HVM), SSD Volume Type AMI. In the next step, choose the m5.large (8 GiB RAM, 2 vCPUs) instance type (cost ∼$75/month).

On Step 3: Configure Instance Details screen, you can leave everything at default values and click Next: Add storage.

At the time of writing, an Ethereum full node needs ~600GB of disk space. Check the current space requirements before choosing a disk size. Depending on how long you want to keep the node running, you have to leave some headroom for new blocks. The current growth rate for full nodes seems to be ~50GB/month.

For Volume type, choose General Purpose SSD (gp3) with the default IOPS and throughput settings. It is 20% cheaper than the older generation gp2 disks. For the purpose of this tutorial, I’ve added a 750GB disk, so the monthly cost for storage space would be ~$60. Also, make sure to choose (default) aws/ebs encryption.
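To sanity-check the disk sizing, here is the arithmetic behind the numbers above as a quick sketch. The chain size (~600GB), growth rate (~50GB/month), and gp3 price (~$0.08 per GB-month, consistent with the ~$60 figure quoted above) are point-in-time assumptions you should re-verify:

```python
# Back-of-the-envelope disk sizing for a full node, using the
# figures quoted in this post. Re-check current chain size,
# growth rate, and AWS gp3 pricing before provisioning.
disk_gb = 750
chain_gb = 600
growth_gb_per_month = 50
gp3_usd_per_gb_month = 0.08

headroom_months = (disk_gb - chain_gb) / growth_gb_per_month
monthly_storage_cost = disk_gb * gp3_usd_per_gb_month

print(f"Headroom: ~{headroom_months:.0f} months")           # ~3 months
print(f"Storage cost: ~${monthly_storage_cost:.0f}/month")  # ~$60/month
```

With a 750GB disk, you get roughly three months of headroom before the chain outgrows the volume.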

Now click Next: Add Tags and Next: Configure Security Group. On this screen, select Create a new security group. Modify the inbound traffic rule to whitelist TCP port 22 for all the IPs, i.e.,

EC2 SSH inbound rule

You also have to allow inbound traffic for TCP and UDP port 30303 because it’s needed for P2P discovery and synchronization. Port 30303 should also be exposed for the wildcard address. Additionally, if you want to configure external access to the node JSON-RPC API, you’ll have to open TCP ports 80 and 443.

Next, click Review and launch and Launch. When prompted about the key pair, select Create a new key pair and give it any meaningful name. Press Download Key Pair to save it on your local disk and then Launch instances.

Now back in your terminal, change permissions for your key pair by running:

chmod 400 keypair-ec2.pem

Next, go to EC2 > Instances and open your new server’s details page. Copy its Public IPv4 address. Back in your terminal, you can now SSH into your EC2:

ssh ubuntu@your-ec2-public-ip -i keypair-ec2.pem

Configuring Geth on Ubuntu

We’ll start the Geth process as a systemd service to run it in the background and enable automatic restarts. Start by running these commands to install Geth from the official repository:

sudo add-apt-repository ppa:ethereum/ethereum
sudo apt-get update
sudo apt-get install ethereum

Now create a /lib/systemd/system/geth.service file with the following contents:

[Unit]
Description=Geth Full Node
After=network.target

[Service]
User=ubuntu
Restart=always
ExecStart=/usr/bin/geth --syncmode snap --http --http.api personal,eth,net,web3,txpool

[Install]
WantedBy=default.target


The --syncmode snap part of the ExecStart command determines that we’ll be spinning up a full node. The snap sync mode has replaced the fast mode as of Geth 1.10.16. If you try to use the legacy fast mode, you’ll see the following error:

invalid value "fast" for flag -syncmode: unknown sync mode "fast", want "full", "snap" or "light"

The --http flag (a replacement for the legacy --rpc) enables the HTTP API which we’ll use to connect our Metamask client.
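Every call to this HTTP API is a JSON-RPC 2.0 POST request. As a sketch, this is the shape of the request body the endpoint accepts; method names are namespaced as `<namespace>_<method>`, where the namespace must be one of those listed in --http.api:

```python
import json

# Build a JSON-RPC 2.0 request body for the Geth HTTP API.
# The method name is "<namespace>_<method>", and the namespace
# must be enabled via --http.api (eth, net, web3, txpool, ...).
def rpc_payload(method, params=None, request_id=1):
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params or [],
        "id": request_id,
    })

body = rpc_payload("eth_blockNumber")
print(body)
```

We’ll use exactly this payload shape later when querying the node with cURL.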

Now you can enable and start the Geth service by running:

sudo systemctl enable geth
sudo systemctl start geth

and see the log output using:

sudo journalctl -f -u geth

You can now verify that the node is up and running by launching a Geth console:

geth attach

Inside the console, now run:

eth.syncing
You should get a similar output indicating that the node has started the synchronization:

{
  currentBlock: 2254868,
  healedBytecodeBytes: 0,
  healedBytecodes: 0,
  healedTrienodeBytes: 0,
  healedTrienodes: 0,
  highestBlock: 14426316,
  healingBytecode: 0,
  healingTrienodes: 0,
  startingBlock: 2250487,
  syncedAccountBytes: 2670602107,
  syncedAccounts: 11057974,
  syncedBytecodeBytes: 257393098,
  syncedBytecodes: 50954,
  syncedStorage: 42499504,
  syncedStorageBytes: 9161595917
}

If you’re getting false, you should wait for a minute or two for synchronization to kick off. In case you have any issues with completing the synchronization, you can run:

sudo journalctl -f -u geth

to tail the log output. Optionally, you can run the geth process with the --verbosity 5 flag to increase log granularity.
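The block counters from the syncing output above can be turned into a rough progress estimate. A sketch using the sample numbers from this run (note that block download is only part of snap sync; state healing near the end can still take hours, so treat this as a lower bound):

```python
# Rough sync progress estimate from the eth.syncing sample above.
# Snap sync also downloads state and heals tries, so the real
# remaining time is larger than the block ratio suggests.
starting_block = 2250487
current_block = 2254868
highest_block = 14426316

done = current_block - starting_block
total = highest_block - starting_block
progress = 100 * done / total
print(f"Block download progress: {progress:.3f}%")
```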

A few hours after the node has finished synchronization, it should be discoverable by other peers. You can double-check that you’ve correctly opened all the necessary ports by going to the geth attach console and running:

admin.peers

Inspect the network.inbound property of the connected peers. You should see both true and false values, meaning that your node is discoverable in the P2P network. If you’re seeing only false, you probably did not publicly expose the TCP and UDP port 30303.
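To make the true/false check concrete, here is a sketch that tallies the network.inbound flags across peer entries. The peer list below is a made-up sample shaped loosely like admin.peers output; real entries carry many more fields (enode, caps, protocols, and so on):

```python
# Tally inbound vs outbound connections from admin.peers-style
# entries. The sample data below is fabricated for illustration.
peers = [
    {"name": "Geth/v1.10.16-stable/linux-amd64", "network": {"inbound": True}},
    {"name": "Geth/v1.10.15-stable/linux-amd64", "network": {"inbound": False}},
    {"name": "Geth/v1.10.16-stable/linux-amd64", "network": {"inbound": False}},
]

inbound = sum(1 for p in peers if p["network"]["inbound"])
print(f"{inbound} of {len(peers)} peers connected inbound")
if inbound == 0:
    print("No inbound peers - check that port 30303 (TCP+UDP) is open")
```

Any inbound: true entry proves that other nodes can reach you through port 30303.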

The initial synchronization time depends on the hardware configuration (more details later). You can check if your node is fully synchronized by going to the geth console and running:

eth.blockNumber

and comparing the value with an external data source, e.g., Etherscan. You can check out the official Geth docs for more info on available API methods.

If you’re getting 0, check your logs for entries similar to:

State heal in progress

Their presence means that your node got out of sync and might need a few hours to catch up. If the issue does not fix itself after 10+ hours, your server probably lacks CPU, memory, or disk throughput.

Password protected HTTPS access to full Geth node with NGINX

Each console method has its JSON-RPC equivalent. You can check the current block number with HTTP API by running the following cURL command:

curl -X POST \
  -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0", "method":"eth_blockNumber", "id":1}' \
  http://localhost:8545
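The result comes back as a hex-encoded quantity wrapped in a JSON-RPC envelope. A short sketch of decoding it (the response body here is a made-up example value):

```python
import json

# Decode an eth_blockNumber response. JSON-RPC quantities are
# 0x-prefixed hex strings, so convert them with int(..., 16).
response_body = '{"jsonrpc":"2.0","id":1,"result":"0xdc1a2c"}'  # example
result = json.loads(response_body)["result"]
block_number = int(result, 16)
print(block_number)  # 14424620
```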

But right now, you can only talk to the node from inside the EC2 instance. Let’s see how we can safely expose the API to the public by proxying JSON-RPC traffic with NGINX.

You’ll need a domain to implement this solution. It can be a root domain or a subdomain. You have to add an A DNS record pointing to the IP of your EC2 instance. It is recommended to use an Elastic IP address so that the address would not change if you have to change the instance configuration.

Next, inside the instance, you have to install the necessary packages:

sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install nginx apache2-utils
sudo apt-get install python3-certbot-nginx

You can now generate an SSL certificate and initial NGINX configuration by running:

sudo certbot --nginx -d your-domain.com

To automatically renew your certificate add this line to /etc/crontab file:

@monthly root certbot -q renew

Once you complete these steps, you should see an NGINX welcome screen on your domain:

NGINX welcome page

Next, generate an HTTP basic authentication user and password:

sudo htpasswd -c /etc/nginx/htpasswd.users your_user
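Under the hood, HTTP basic auth (what cURL’s -u flag and Metamask’s user:password URL send) is just a base64-encoded request header, which NGINX checks against the htpasswd file. A sketch of how a client builds it, with placeholder credentials:

```python
import base64

# Build the Authorization header that HTTP basic auth clients send.
# NGINX compares these credentials against /etc/nginx/htpasswd.users.
user, password = "your_user", "your_password"  # placeholders
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Basic {token}"
print(header)
```

Note that base64 is encoding, not encryption, which is why serving this only over HTTPS matters.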

Now you need to edit the NGINX configuration file /etc/nginx/sites-enabled/default:

server {
    server_name your-domain.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:8545;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }

    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
    if ($host = your-domain.com) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    listen [::]:80;
    server_name your-domain.com;
    return 404;
}
The SSL certificate files are automatically generated by the certbot command

We use a proxy_pass directive to proxy traffic from an encrypted 443 HTTPS port to Geth node port 8545 on our EC2 instance without exposing it publicly. Additionally, HTTP basic authentication headers are required for every request.

Now verify that the config is correct:

sudo nginx -t

and restart the NGINX process to apply changes:

sudo service nginx restart

The default welcome page should no longer be accessible. You can check if your full node is available via a secure HTTPS connection using this command executed from outside of your EC2:

curl -X POST \
  -H "Content-Type: application/json" \
  -u your_user:your_password \
  --data '{"jsonrpc":"2.0", "method":"eth_blockNumber", "id": 1}' \
  https://your-domain.com

Once you have it working, you can connect your browser Metamask extension to use your personal full node for blockchain access. To do this, go to Metamask Settings > Networks > Add a network. Give your network any name, and in the New RPC URL field, input your full node connection URL in the following format:

https://your_user:[email protected]

Metamask custom network configuration

Input ETH for Currency Symbol. Chain ID should be auto-filled to 1, representing the Ethereum Mainnet. You can now click Save and use your Metamask wallet as you would normally. You’re now talking directly to the Ethereum blockchain without a trusted 3rd party like Infura or Alchemy. And if AWS is still too centralized for your blockchain needs, remember that you can use a similar setup on your proprietary hardware.

Unfortunately, I could only get the custom network config working on the Brave/Chrome version of Metamask. On Firefox, there seems to be a bug as of 10.11.3. I’ve submitted an issue on GH, so hopefully, this one will get resolved.

Full and light node hardware requirements

Below you can see graphs showing CPU, memory, and disk utilization of the m5.large (8 GiB RAM, 2 vCPUs) EC2 instance during a full synchronization process.

EC2 metrics during Ethereum full node synchronization

You can see that the complete process took ~20 hours. The CPU was maxed out, but memory usage stayed consistently below 80%. After synchronization finished, both CPU and memory usage dropped significantly.

I’ve also tested full synchronization on the m5.xlarge (16 GiB RAM, 4 vCPUs) instance, and it took 12 instead of 20 hours. But, CPU and RAM metrics were almost identical.

It means that the choice of hardware depends on how urgently you need the full node up and running. But, make sure to avoid using t2/t3 instances. They feature a so-called “burstable” CPU, meaning that consistent processor usage above the baseline (between 5% and 40% depending on instance size) would be throttled or incur additional charges.

After the synchronization is finished, node hardware requirements will differ depending on your use case. If you’re running an arbitrage bot scanning the mempool or thousands of AMM contracts on each block, you’ll need a beefier server than if you occasionally submit a few transactions. Optionally, using the --light.serve flag, you can devote a part of your node’s processing power to serving P2P light nodes.

EC2 metrics for full nodes with and without light clients

The above graph shows the volume of disk read operations for two full nodes. You can see that it’s 10x more intensive for the node serving light clients. CPU and memory usage was comparable on both nodes. Running a full node that publicly accepts light client connections is a way to improve the decentralization and security of the Ethereum network. But, remember that AWS incurs additional charges for outgoing data. Adding budget alerts is highly recommended if you want to support light nodes.
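As a back-of-the-envelope sketch of why budget alerts matter, here is the egress cost arithmetic. Both numbers are illustrative assumptions: the $0.09/GB rate resembles common AWS internet egress pricing but is tiered and region-dependent, and the daily volume is hypothetical:

```python
# Rough egress cost estimate for serving light clients.
# Both inputs are illustrative assumptions - check current AWS
# data transfer pricing and measure your real outbound volume.
egress_usd_per_gb = 0.09
daily_egress_gb = 30  # hypothetical volume served to light clients

monthly_cost = daily_egress_gb * 30 * egress_usd_per_gb
print(f"~${monthly_cost:.0f}/month in data transfer")  # roughly $81/month
```

Even modest sustained outbound traffic can rival the instance cost itself, so set the alert threshold accordingly.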

The best way to determine the most cost-effective instance type is to continuously observe the metrics to see if you’re not running out of CPU, memory, or disk IOPS. AWS Cloudwatch makes it easy to configure email alerts when metrics exceed predefined thresholds. Check out these docs for info on how to collect disk and memory usage data because they are not enabled by default.

Light node synchronization

If you’ve ever tried to spin up a light Geth node, you might be familiar with the following log output:

Looking for peers peercount=0 tried=16 static=0
Looking for peers peercount=0 tried=16 static=0
Looking for peers peercount=0 tried=16 static=0
Looking for peers peercount=0 tried=16 static=0

Pablo waiting for Geth sync meme

Light nodes have significantly lower hardware and disk space requirements than the full nodes. I’ve managed to run a light Geth node on an AWS free tier t2.micro instance with an 8GB disk. After synchronization finished, the actual stored blockchain size was ~350MB compared to over 500GB for full nodes. Light nodes don’t keep and verify the whole blockchain but only the last few dozen blocks. But, they rely on full nodes to share the current state of the blockchain with them. As we’ve discussed, supporting light nodes is disabled by default and incurs additional costs. Depending on the congestion of the Ethereum network, your light node might not be able to peer with enough full nodes to catch up with the current blockchain state. Hence the dreaded Looking for peers message.

I could not find a consistent pattern on what factors determine if a light node will start syncing. I guess it all goes down to the current network congestion. So, if you’re out of luck in one AWS region, a solution could be to spin up an EC2 instance across the globe. Usually, it takes at least 10 minutes for the light node to start syncing. And you can always try turning it off and on again. Sometimes the sync kicked off right after a reboot after being stuck for over an hour.

You can investigate what nodes you’ve managed to connect to by running:

admin.peers
But, compared to full nodes, the network.inbound property of peer nodes will always be false because light nodes do not accept incoming connections.


Infura and Alchemy are currently an industry standard for everyday blockchain interactions. But, knowing that I’ll always be able to access my funds even if the centralized gatekeepers go out of business vastly increases my trust in the Ethereum network. Furthermore, even after the upcoming Merge, you’ll still be able to host full nodes on similar hardware. Storage space is only going to get cheaper. So the constantly growing size of the blockchain should never be a blocker for regular users to host full nodes and support the network.
