
CEPH is a free and open-source object storage system. This article describes the basic terminology, installation and configuration parameters required to build your own CEPH environment.

CEPH has been designed as a distributed storage system that is highly fault tolerant, scalable and configurable. It can run on a large number of distributed commodity machines, thus eliminating the need for very large central storage solutions.

This post aims to be a basic, short and self-contained article that explains the key details needed to understand and play with CEPH.


The foundation of Ceph is RADOS (the Reliable, Autonomic, Distributed Object Store), whose object store can be accessed directly through LIBRADOS. Ceph also provides three standard access interfaces on top of it: RBD (a block device interface), the RADOS Gateway (an object storage interface) and the Ceph File System (a POSIX file interface). CephFS itself is composed mainly of three components: MON, OSD and MDS.

Environment

I’ve set up CEPH on my laptop with a few LXC containers. My setup has –

  • Host: Ubuntu 16.04 xenial 64-bit OS
  • LXC version: 2.0.6
  • A 40G disk partition that is free to use for this experiment.
  • CEPH Release – Jewel

The Ubuntu site has documentation on LXC configuration here, but it covers creating unprivileged containers, whereas we’ll need “privileged” containers. Privileged containers aren’t considered secure, since container processes map to the root user on the host; they are used here only for the purpose of experimenting.

See documentation on LXC site here to understand how to create privileged LXC containers.


Now, create these containers (as privileged) with the following names –

  • cephadmin
  • cephmon
  • cephosd0
  • cephosd1
  • cephosd2
  • cephradosgw

Here’s a command to create an Ubuntu Xenial 64-bit container. You could choose another distro, in which case some of these instructions might not apply. Run the command once for each of the names above –
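Using LXC’s download template, the loop below is a sketch of creating all six containers (the template options assume Ubuntu Xenial amd64, matching the host; run each `lxc-create` as a privileged container per the LXC docs linked above):

```shell
# Create each privileged container from the Ubuntu Xenial amd64 image
for name in cephadmin cephmon cephosd0 cephosd1 cephosd2 cephradosgw; do
    sudo lxc-create -n "$name" -t download -- -d ubuntu -r xenial -a amd64
done
```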

CEPH installations typically distinguish between private and public networks, but for testing purposes the default ‘lxcbr0’ bridge provided by LXC is sufficient to work with CEPH.

Start the containers and install SSH servers on all of them –
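A sketch of this step, assuming the container names above (the `apt-get` invocation via `lxc-attach` assumes the Ubuntu image created earlier):

```shell
# Start each container, then install an SSH server inside it
for name in cephadmin cephmon cephosd0 cephosd1 cephosd2 cephradosgw; do
    sudo lxc-start -n "$name"
    sudo lxc-attach -n "$name" -- apt-get update
    sudo lxc-attach -n "$name" -- apt-get install -y openssh-server
done
```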


Repeat the above commands for all the containers you’ve created, then restart them all. Register the container names in your /etc/hosts so you can log in to them easily.

Disk Setup

On the 40G disk partition you’ve allocated for this exercise perform –

  1. Delete the existing partition
  2. Create 3 new partitions
  3. Format each of them with ‘XFS’ filesystem.
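The three steps above might look like the following, assuming the free space is on /dev/sda and the new partitions come out as sda8–sda10 (adjust the device names to your disk):

```shell
# 1-2. Repartition interactively with fdisk: delete the old partition,
#      create three new ones (d, n, ... inside the fdisk prompt)
sudo fdisk /dev/sda

# 3. Format each new partition with XFS
for part in /dev/sda8 /dev/sda9 /dev/sda10; do
    sudo mkfs.xfs -f "$part"
done

# Note the device major:minor numbers for later (shown in the ls output)
ls -l /dev/sda8 /dev/sda9 /dev/sda10
```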

CEPH recommends using XFS, BTRFS or EXT4. I’ve used XFS in my tests; when I tried EXT4 I received warnings about limited xattr sizes while CEPH was being deployed.

Note down the device major and minor numbers for the partitions you’ve created from the above steps.

Make each partition you’ve created available to the corresponding cephosd0, cephosd1 and cephosd2 container. If the above operation resulted in the devices ‘/dev/sda8’, ‘/dev/sda9’ and ‘/dev/sda10’, assign each device’s major and minor numbers to one cephosd&lt;N&gt; container. To do that, you’ll need to edit the LXC configuration file for each cephosd* container by performing these steps –

  • Login to each cephosd<N> node, and run
  • Stop your cephosd containers (cephosd0, cephosd1, cephosd2) –
  • On the host, create a corresponding ‘fstab’ file for each container. Assign /dev/sda8 to cephosd0, /dev/sda9 to cephosd1 and so on. The fstab would have lines like the following, depending on how many disks / partitions you intend to share –

Example –
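A sketch of one such fstab line, assuming the mount point inside the container is /mnt/xfsdisk (created beforehand inside each cephosd&lt;N&gt; node, e.g. with `sudo mkdir -p /mnt/xfsdisk`); note the mount point is written without a leading slash:

```
/dev/sda8 mnt/xfsdisk xfs defaults 0 0
```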

IMPORTANT NOTE

The &lt;mount-point-in-container&gt; has no preceding forward slash; the preceding slash is not required. Refer to this post on askubuntu.com for more details.

  • For each of the cephosd&lt;N&gt; nodes, edit the configuration file.

Edit the container configuration file ‘config’. The lines below give the container permission to access the device inside LXC and mount it at the mount point defined in the fstab file. Add the following lines –
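The exact keys vary by LXC version; for the LXC 2.0 series used here, something along these lines is a reasonable sketch. The major:minor pair (8:8 in this illustration) comes from the `ls -l` output on the host, and the fstab path is the per-container file created above:

```
# Allow the container to access the block device (b = block, major:minor)
lxc.cgroup.devices.allow = b 8:8 rwm
# Bind the host device node into the container
lxc.mount.entry = /dev/sda8 dev/sda8 none bind,optional,create=file
# Point the container at the per-container fstab created earlier
lxc.mount = /var/lib/lxc/cephosd0/fstab
```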

Now, start/restart the containers. With the above set of steps we complete the creation of containers ready for us to install CEPH.

CEPH Installation

Complete the pre-flight steps on the CEPH quick install from here. The steps in pre-flight –

  • Setup the ‘cephadmin’ node with the ceph-deploy package.
  • Install ‘ntp’ on all the nodes (required wherever OSD or MON daemons run).
  • Create a common ceph deploy user (with password-less SSH and sudo access) that will be used for the CEPH installation on all nodes.

After the pre-flight steps are complete, check –

  • That password-less access works from ‘cephadmin’ to all the other nodes in your cluster via the ceph deploy user you have created.
  • That the XFS partition created earlier is available on all OSD nodes under /mnt/xfsdisk.

Next, you need to complete the CEPH deployment. The steps, with all the illustrations, are available here; read the description at that link to get a good understanding of each command. Since our experiment has 3 OSDs and 3 monitor daemons, you could run these commands from the ‘cephadmin’ node as the ceph deploy user created in pre-flight –

  • Create a directory and run the commands below from within it.
  • Designate following nodes as CEPH Monitors
  • Install CEPH on all nodes including admin node.
  • Add the initial monitors and gather their keys
  • Prepare the disk on each OSD
  • Activate each OSD
  • Run this command to ensure cephadmin can perform administrative activities on all your nodes on the CEPH cluster
  • Check health of cluster (You must see HEALTH_OK) if you’ve performed all the steps correctly.
  • Install and deploy the instance of RADOS gateway
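The steps above can be sketched with ceph-deploy as follows (Jewel-era syntax; the ‘my-cluster’ directory name is illustrative, and the monitor list here is simplified to one node where the original experiment ran three monitors):

```shell
# Create a working directory and run everything from within it
mkdir my-cluster && cd my-cluster

# Designate the monitor node(s) and generate the initial ceph.conf
ceph-deploy new cephmon

# Install CEPH on all nodes, including the admin node
ceph-deploy install cephadmin cephmon cephosd0 cephosd1 cephosd2 cephradosgw

# Add the initial monitors and gather their keys
ceph-deploy mon create-initial

# Prepare, then activate, the disk on each OSD
ceph-deploy osd prepare cephosd0:/mnt/xfsdisk cephosd1:/mnt/xfsdisk cephosd2:/mnt/xfsdisk
ceph-deploy osd activate cephosd0:/mnt/xfsdisk cephosd1:/mnt/xfsdisk cephosd2:/mnt/xfsdisk

# Push admin credentials so cephadmin can administer the whole cluster
ceph-deploy admin cephadmin cephmon cephosd0 cephosd1 cephosd2

# Check cluster health (expect HEALTH_OK)
ceph health

# Install and deploy the RADOS gateway instance
ceph-deploy rgw create cephradosgw
```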

NOTE

If you haven’t configured the OSD daemons to start automatically via upstart, they won’t start after an LXC startup/restart, and running ‘ceph -s’, ‘ceph -w’ or ‘ceph health’ on the admin node will show HEALTH_ERR and a degraded cluster since the OSDs are down. In that case, manually log in to each OSD node and start the OSD instance with the command sudo systemctl start ceph-osd@&lt;instance&gt;, where &lt;instance&gt; in our case is 0, 1, or 2.

With the above steps complete, the cluster should be up with all the PGs showing ‘active + clean’ status.

Access data on cluster

To run these commands you’ll need a minimal ceph.conf containing the monitor node information, plus the admin keyring file. For test purposes you can run them on the cephadmin node, under the cluster directory you created in the first step of the CEPH Installation section.

To list all available data pools

To create a pool

To insert data in a file into a pool


To list all objects in a pool

To get a data object from the pool
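The five operations above can be sketched with the standard ceph/rados CLI tools as follows (the pool name ‘mypool’, object name ‘myobject’, file names and PG count are all illustrative):

```shell
# List all available data pools
rados lspools

# Create a pool (pg_num / pgp_num of 128 is an illustrative choice)
ceph osd pool create mypool 128 128

# Insert the data in a file into the pool as an object
rados -p mypool put myobject ./somefile.txt

# List all objects in the pool
rados -p mypool ls

# Get a data object back out of the pool into a local file
rados -p mypool get myobject ./retrieved.txt
```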

