Proxmox and Ceph for the NucNucNuc

In the latest incarnation of the NucNucNuc, I get Proxmox and Ceph installed.

The idea of Ceph is very attractive. Distributed storage eliminates a huge concern of mine, which is being forced to replace a handful of very expensive Nimble storage units in the near future.


The first step was to get Proxmox installed and get the three NucNucNuc nodes in a cluster.

# On the first node (192.168.202.10):
pvecm create NucNucNuc

# On the second and third nodes:
pvecm add 192.168.202.10

pvecm status
Quorum information
------------------
Date:             Wed Apr 26 09:57:39 2017
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1/28
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
Nodeid      Votes Name
0x00000001          1 192.168.202.10 (local)
0x00000002          1 192.168.202.11
0x00000003          1 192.168.202.12

Next up was a dedicated network interface for the Ceph traffic. I used a USB-C to Gigabit Ethernet dongle that’s well known for having decent Linux support.

In a production deployment, as I discovered in my benchmarking, 10GbE is practically a must. You also wouldn’t bridge the devices; you would want a NIC (or more) dedicated to the Ceph traffic. For this testing, though, I bridged it anyway, since I figured I might want to use the separate NIC in my VMs.
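As a rough sketch, the relevant piece of /etc/network/interfaces on the first node would look something like this. The vmbr1 and eth1 names are assumptions (a USB dongle usually shows up with an enx... name), and the other nodes would use .11 and .12:

# Bridge for the dedicated Ceph network (interface names are assumptions)
auto vmbr1
iface vmbr1 inet static
        address 192.168.203.10
        netmask 255.255.255.0
        # the USB-C gigabit dongle
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0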

Installing Ceph

The next step was to get Ceph installed. Recent versions of Proxmox have made this a very simple task. I’m using the Proxmox 5.0 Beta 1, so the version of Ceph that’s available is Luminous.
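If you’d rather do it from the shell than the GUI, the install boils down to roughly a one-liner per node (this assumes the pveceph tooling that ships with the 5.0 beta):

# Run on each of the three nodes
pveceph install --version luminous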

Initializing Ceph and setting up monitors is next.
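In rough terms, that means initializing Ceph with the dedicated network on one node and then creating a monitor on each node. The 192.168.203.0/24 subnet here is just the Ceph network from earlier; adjust to taste:

# On the first node only: write the initial ceph.conf, pointing at the Ceph network
pveceph init --network 192.168.203.0/24

# On each node: create a monitor
pveceph createmon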

This is where I made my first mistake. I figured I would be clever and paste my commands into three terminals at once. This caused some sort of weird deadlock, which forced me to uninstall and reinstall Ceph.

A quick ceph quorum_status, ceph health, and ceph mon_status tell me everything is properly set up.
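For reference, those checks are nothing more than the following, and on a healthy cluster ceph health should come back with HEALTH_OK:

ceph quorum_status
ceph health
ceph mon_status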

Initializing and Configuring the Disks

Once I had Ceph up and rolling, it was time to set up the disks. For now, this is a single-disk setup, where the disk in each NUC is a 500GB M.2 SATA SSD. In a more extensive setup, you would want more disks and 10GbE for the Ceph network.
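Creating the OSDs can be done from the GUI or the shell; a minimal sketch of the shell version is below. The device name is an assumption (check with lsblk first), and the disk needs to be empty:

# Run on each node, pointing at that node's SSD
pveceph createosd /dev/sdb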

Once the OSDs are created and the disks are set up, the Proxmox GUI should show everything.
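A quick sanity check from any node that all three OSDs are up and in doesn’t hurt before moving on:

ceph osd tree
ceph -s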

Adding the Storage to the Cluster

Once it’s been confirmed that Ceph is running and everything appears to be working, it’s time to add the storage to Proxmox. At this point, Proxmox just sees the storage as remote, sort of like NFS or iSCSI, so it’s a simple handful of clicks to add.

The storage type is “RBD”. In this case, I’ve added the IPs of my three monitors: 192.168.203.10, .11, and .12.

I’ve set this up twice now, and both times the hassle has been remembering this step.

Permission must be granted to access the storage “remotely”, even though “remote” in this case means the Proxmox nodes themselves. This must be done from the command line on one of the hosts.

In this case, my storage is named “ssd_ceph”.

mkdir /etc/pve/priv/ceph && cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ssd_ceph.keyring
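Once the GUI step and the keyring copy are done, the resulting entry in /etc/pve/storage.cfg ends up looking something like this. The pool name and content type here are assumptions based on the defaults, so check yours:

rbd: ssd_ceph
        content images
        krbd 0
        monhost 192.168.203.10 192.168.203.11 192.168.203.12
        pool rbd
        username admin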

Some Benchmarks

After it was all set up, I installed Ubuntu in a VM on the storage to do some benchmarks.

It’s very obvious why 10GbE is recommended for the Ceph storage network. As I write files inside the VM running on the first node, that node pushes replica copies out to both of the other nodes.

These are just the iftop results of some light benchmarking. Even on these relatively slow NUCs with slow SSDs in them, the traffic was coming close to saturating the USB-C gigabit connection.

A simple rsync inside of the VM tells a similar story.
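If you want to reproduce a quick-and-dirty write test, something along these lines inside the VM will do; it isn’t my exact test, but oflag=direct keeps the guest’s page cache from flattering the numbers:

# Write 2GB directly to disk, bypassing the page cache
dd if=/dev/zero of=/tmp/ceph-test bs=1M count=2048 oflag=direct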

Final thoughts

Sure, these aren’t monster speeds, but on these slow NUCs with a gigabit Ceph network and a single Ceph OSD per node, it’s definitely acceptable. With a 10GbE backplane and an array of SSDs per node, as recommended, I think this would perform admirably.

See more NucNucNuc stuff here.
