docs(tuto): restore tutorial ceph #5173

Merged (3 commits) on Jun 30, 2025
Binary file added tutorials/ceph-cluster/assets/scaleway_ceph.webp
227 changes: 227 additions & 0 deletions tutorials/ceph-cluster/index.mdx
@@ -0,0 +1,227 @@
---
meta:
title: Building your own Ceph distributed storage cluster on dedicated servers
  description: Learn how to deploy a three-node Ceph cluster with S3-compatible object storage on Scaleway Dedibox dedicated servers.
content:
h1: Building your own Ceph distributed storage cluster on dedicated servers
  paragraph: Learn how to deploy a three-node Ceph cluster with S3-compatible object storage on Scaleway Dedibox dedicated servers.
categories:
- object-storage
- dedibox
tags: dedicated-servers dedibox Ceph object-storage
hero: assets/scaleway_ceph.webp
dates:
validation: 2025-06-25
validation_frequency: 18
posted: 2020-06-29
---

Ceph is an open-source, software-defined storage solution that provides object, block, and file storage at exabyte scale. It is self-healing, self-managing, and fault-tolerant, using commodity hardware to minimize costs.
This tutorial guides you through deploying a three-node Ceph cluster with a RADOS Gateway (RGW) for S3-compatible object storage on Dedibox dedicated servers running Ubuntu 24.04 LTS.

<Macro id="requirements" />

- A Dedibox account logged into the [console](https://console.online.net)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- 3 Dedibox servers (Ceph nodes) running Ubuntu 24.04 LTS, each with:
  - At least 8 GB of RAM, 4 CPU cores, and one unused data disk (e.g., `/dev/sdb`) for OSDs
  - Network connectivity between the nodes and the admin machine
- An admin machine (Ubuntu 24.04 LTS recommended) with SSH access to the Ceph nodes

## Configure networking and SSH

1. Log into each of the Ceph nodes and the admin machine using SSH.

2. Install software dependencies on all nodes and the admin machine:
```
sudo apt update
sudo apt install -y python3 chrony lvm2 podman
sudo systemctl enable --now chrony
```

3. Set unique hostnames on each Ceph node:
```
sudo hostnamectl set-hostname ceph-node-a # Repeat for ceph-node-b, ceph-node-c
```

4. Configure `/etc/hosts` on all nodes and the admin machine to resolve hostnames:
```
echo "<node-a-ip> ceph-node-a" | sudo tee -a /etc/hosts
echo "<node-b-ip> ceph-node-b" | sudo tee -a /etc/hosts
echo "<node-c-ip> ceph-node-c" | sudo tee -a /etc/hosts
```

5. Create a deployment user (cephadm) on each Ceph node:
```
sudo useradd -m -s /bin/bash cephadm
sudo passwd cephadm
echo "cephadm ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadm
sudo chmod 0440 /etc/sudoers.d/cephadm
```

6. Enable passwordless SSH from the admin machine to Ceph nodes:
```
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
ssh-copy-id cephadm@ceph-node-a
ssh-copy-id cephadm@ceph-node-b
ssh-copy-id cephadm@ceph-node-c
```

7. Verify time synchronization on all nodes:
```
chronyc sources
```

## Install cephadm on the admin machine

1. Add the Ceph repository for the latest stable release (e.g., Squid). Use the codename matching your Ubuntu release (`noble` for 24.04):
```
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo tee /etc/apt/trusted.gpg.d/ceph.asc
sudo apt-add-repository 'deb https://download.ceph.com/debian-squid/ noble main'
sudo apt update
```

2. Install `cephadm`:
```
sudo apt install -y cephadm
```

3. Verify the installation:
```
cephadm --version
```


## Bootstrap the Ceph cluster

1. Bootstrap the cluster on the admin machine, using the admin node’s IP (the dashboard is served over HTTPS by default, so no extra SSL flag is needed):
```
sudo cephadm bootstrap \
  --mon-ip <admin-node-ip> \
  --initial-dashboard-user admin \
  --initial-dashboard-password <strong-password>
```

2. Access the Ceph dashboard at `https://<admin-node-ip>:8443` to verify the setup.
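
The following sections run the `ceph` CLI directly on the admin machine. If it is not installed, you can either prefix each command with `sudo cephadm shell --` or install the client package with `cephadm` (a standard step from the cephadm workflow; the Ceph repository was already added above):
```
sudo cephadm install ceph-common
sudo ceph -s   # confirms the CLI can reach the new cluster
```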


## Add Ceph nodes to the cluster

1. Copy the cluster's public SSH key to each node (cephadm connects as root by default), then add the nodes to the cluster:
```
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node-a
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node-b
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node-c
sudo ceph orch host add ceph-node-a
sudo ceph orch host add ceph-node-b
sudo ceph orch host add ceph-node-c
```

2. Verify hosts:
```
sudo ceph orch host ls
```



### Deploy object storage daemons (OSDs)

1. List available disks on each node:
```
sudo ceph orch device ls
```

2. Deploy OSDs on all available unused disks:
```
sudo ceph orch apply osd --all-available-devices
```

3. Verify the OSD deployment:
```
sudo ceph osd tree
```
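```

If you prefer to choose disks explicitly rather than consuming every free device, `cephadm` can also create an OSD on a single named device. The path below assumes the spare data disk from the requirements:
```
sudo ceph orch daemon add osd ceph-node-a:/dev/sdb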


### Deploy RADOS gateway (RGW)

1. Deploy a single RGW instance on ceph-node-a (placement strings list a count and host names):
```
sudo ceph orch apply rgw default --placement="count:1 ceph-node-a" --port=80
```

2. To use HTTPS (recommended), generate a self-signed certificate:
```
sudo mkdir -p /etc/ceph/private
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ceph/private/rgw.key \
  -out /etc/ceph/private/rgw.crt \
  -subj "/CN=ceph-node-a"
```

3. Redeploy RGW with HTTPS. `ceph orch apply rgw` does not take certificate paths as command-line flags; instead, embed the certificate and key in a service specification and apply it:
```
sudo bash -c 'cat > /etc/ceph/rgw-spec.yaml <<EOF
service_type: rgw
service_id: default
placement:
  count: 1
  hosts: [ceph-node-a]
spec:
  ssl: true
  rgw_frontend_port: 443
  rgw_frontend_ssl_certificate: |
$(sed "s/^/    /" /etc/ceph/private/rgw.key /etc/ceph/private/rgw.crt)
EOF
ceph orch apply -i /etc/ceph/rgw-spec.yaml'
```

4. Verify RGW by accessing `http://ceph-node-a:80` (or `https://ceph-node-a:443` for HTTPS).
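
You can also check the gateway from the command line; a plain GET against the endpoint should return an S3 XML response (`-k` skips verification of the self-signed certificate generated above):
```
curl http://ceph-node-a:80
curl -k https://ceph-node-a:443
```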

## Create an RGW user

1. Create a user for S3-compatible access:
```
sudo radosgw-admin user create \
--uid=johndoe \
--display-name="John Doe" \
  --email=johndoe@example.com
```

Note the generated `access_key` and `secret_key` from the output.
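
If you misplace the keys, you can display them again at any time with the standard admin command:
```
sudo radosgw-admin user info --uid=johndoe
```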


## Configure the AWS CLI for Object Storage

1. Install the AWS CLI on the admin machine or any client machine:
```
pip3 install awscli awscli-plugin-endpoint
```

   On Ubuntu 24.04, `pip3` refuses system-wide installs by default; run the command inside a virtual environment (`python3 -m venv`) if needed.

2. Create a configuration file `~/.aws/config`:
```
[plugins]
endpoint = awscli_plugin_endpoint

[default]
region = default
s3 =
    endpoint_url = http://ceph-node-a:80
    signature_version = s3v4
s3api =
    endpoint_url = http://ceph-node-a:80
```

   For HTTPS, use `https://ceph-node-a:443` as the `endpoint_url`.

3. Create a credentials file `~/.aws/credentials` with the keys noted earlier:
```
[default]
aws_access_key_id = <access_key>
aws_secret_access_key = <secret_key>
```

4. Test the setup:
```sh
aws s3 mb s3://mybucket --endpoint-url http://ceph-node-a:80
echo "Hello Ceph!" > testfile.txt
aws s3 cp testfile.txt s3://mybucket --endpoint-url http://ceph-node-a:80
aws s3 ls s3://mybucket --endpoint-url http://ceph-node-a:80
```
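
   Optionally, confirm the round trip and clean up the test bucket (same endpoint as above):
```
aws s3 cp s3://mybucket/testfile.txt downloaded.txt --endpoint-url http://ceph-node-a:80
aws s3 rb s3://mybucket --force --endpoint-url http://ceph-node-a:80
```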

5. Verify the cluster health status:
```
sudo ceph -s
```

Ensure the output shows `HEALTH_OK`.
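
If the status is not `HEALTH_OK`, these standard commands help narrow down the cause:
```
sudo ceph health detail
sudo ceph orch ps
```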

## Conclusion

You have deployed a Ceph storage cluster with S3-compatible object storage using three Dedibox servers on Ubuntu 24.04 LTS. The cluster is managed with `cephadm`, ensuring modern orchestration and scalability. For advanced configurations (e.g., multi-zone RGW, monitoring with Prometheus), refer to the official [Ceph documentation](https://docs.ceph.com/en/latest/).