From time to time you might need to resize one of the volumes attached to an EC2 instance. Perhaps it's too big and you want to downsize, or perhaps it's too small and you need more space. Note that you only have the option of increasing the size of a volume: if you want a smaller one, then you'll need to create a new volume and migrate the data across. If you're making it bigger, though, everything you need to know is in this post.
Resize the Volume
First you’ll need to change the size of the underlying volume.
- Go to the EC2 console and choose the Volumes tab.
- Select the volume you want to resize and choose Actions → Modify Volume.
- Choose the required new size. You can also, incidentally, change the volume type here, which may be useful if you are wanting to select a more cost-effective or performant option.
- Press the Modify button to apply the change.
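The same change can be made without the console using the AWS CLI. A sketch, with a placeholder volume ID (substitute your own), and guarded so it does nothing on a machine where the CLI isn't installed and configured:

```shell
# Placeholder volume ID and target size -- substitute your own values.
VOLUME_ID="vol-0123456789abcdef0"
NEW_SIZE_GIB=16

if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  # Grow the volume in place; it stays attached to the instance throughout.
  aws ec2 modify-volume --volume-id "$VOLUME_ID" --size "$NEW_SIZE_GIB"

  # The resize is asynchronous: poll until the state reports
  # "optimizing" or "completed" before touching the partition.
  aws ec2 describe-volumes-modifications --volume-ids "$VOLUME_ID" \
    --query "VolumesModifications[0].ModificationState"
fi
```

The polling step matters: the new capacity isn't necessarily visible to the instance the moment `modify-volume` returns.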
Resize the Partition
Now you need to SSH to the EC2 instance and inform it of the change. On some images a restart will pick up the change automatically (at least for the root volume), but restarting is not always a feasible option, so we'll do it by hand.
Check on the device names and partition numbers with lsblk.
sudo lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0  16G  0 disk
└─xvdf1 202:81   0   8G  0 part /home
There might be a bunch of other records in the result, but the ones that you are interested in should look something like that. The device /dev/xvdf has a size of 16 GiB but the partition /dev/xvdf1 mounted on /home is only 8 GiB. We'll want to grow the partition to the same size as the device.
For reference, this is what the output looked like before we resized the volume. Observe that the /dev/xvdf1 partition was the same size as the device on /dev/xvdf.
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part /home
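If you're scripting this, the by-eye comparison above can be automated: lsblk -b -l -n -o NAME,SIZE prints plain name and size-in-bytes pairs that are easy to parse. A sketch, where needs_growing is a hypothetical helper and the device names are the ones from this post:

```shell
# Succeed (exit 0) when the partition is smaller than its parent device.
# Reads "NAME SIZE" pairs (sizes in bytes) on stdin, e.g. from:
#   lsblk -b -l -n -o NAME,SIZE /dev/xvdf
needs_growing() {
  awk -v dev="$1" -v part="$2" '
    $1 == dev  { dev_size  = $2 }
    $1 == part { part_size = $2 }
    END { exit !(part_size < dev_size) }
  '
}

# Typical use:
#   lsblk -b -l -n -o NAME,SIZE /dev/xvdf | needs_growing xvdf xvdf1
```

Comparing in bytes (-b) avoids parsing human-readable suffixes like "8G", and -l flattens the tree glyphs out of the NAME column.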
Change the partition size with growpart. We need to specify two arguments: the name of the device and the partition number.
sudo growpart /dev/xvdf 1
Take another look at the output from lsblk:
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0  16G  0 disk
└─xvdf1 202:81   0  16G  0 part /home
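Worth knowing if you script this step: growpart (from cloud-utils) documents exit code 0 as "table changed", 1 as NOCHANGE (the partition was already at full size), and 2 as a genuine failure. So an idempotent script should treat both 0 and 1 as success. A sketch, where grow_partition is a hypothetical wrapper that you pass the real command:

```shell
# Run a growpart command and treat NOCHANGE (exit 1) as success,
# so re-running the script on an already-grown partition is harmless.
grow_partition() {
  "$@" && status=0 || status=$?
  case "$status" in
    0) echo "partition grown" ;;
    1) echo "no change needed" ;;
    *) echo "growpart failed" >&2; return 1 ;;
  esac
}

# Typical use:
#   grow_partition sudo growpart /dev/xvdf 1
```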
We’re almost done now. The final step is to expand the file system to fill the partition.
Expand the File System
Lastly we need to grow the file system. First let’s check on the current file system size.
df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/xvdf1 7.9G 36M 7.4G 1% /home
Okay, it's effectively 8 GiB. We want that to grow to 16 GiB so that all of that fresh new space is available. To do this we use resize2fs and specify the partition device, /dev/xvdf1.
sudo resize2fs /dev/xvdf1
A final look at the output from df shows that the file system now fills the new partition size.
Filesystem Size Used Avail Use% Mounted on
/dev/xvdf1 16G 44M 15G 1% /home
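One caveat: resize2fs only understands the ext family (ext2/3/4). If the partition is formatted as XFS, which is the default on Amazon Linux 2 for example, the equivalent tool is xfs_growfs, and it takes the mount point rather than the device. A small dispatch sketch, using the device and mount point from this post; grow_fs is a hypothetical helper that prints the command to run rather than executing it, so the choice of tool is easy to see:

```shell
# Print the right grow command for a given file system type.
# Check the type first with: lsblk -n -o FSTYPE /dev/xvdf1
grow_fs() {
  fstype="$1"; device="$2"; mountpoint="$3"
  case "$fstype" in
    ext2|ext3|ext4) echo "sudo resize2fs $device" ;;
    xfs)            echo "sudo xfs_growfs $mountpoint" ;;
    *)              echo "don't know how to grow $fstype" >&2; return 1 ;;
  esac
}

grow_fs ext4 /dev/xvdf1 /home   # → sudo resize2fs /dev/xvdf1
```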
Conclusion
This might seem like a lot of work, but really it’s quite simple:
- Resize the volume on the EC2 dashboard.
- Connect to the EC2 instance via SSH.
- Grow the partition with growpart.
- Grow the file system with resize2fs.