{"id":2428,"date":"2019-08-08T17:52:43","date_gmt":"2019-08-08T17:52:43","guid":{"rendered":"https:\/\/owncloud.com\/?p=2428"},"modified":"2020-07-08T10:46:53","modified_gmt":"2020-07-08T10:46:53","slug":"running-owncloud-in-kubernetes-with-rook-ceph-storage-step-by-step","status":"publish","type":"post","link":"https:\/\/owncloud.com\/de\/blogs\/running-owncloud-in-kubernetes-with-rook-ceph-storage-step-by-step\/","title":{"rendered":"Running ownCloud in Kubernetes With Rook Ceph Storage \u2013 Step by Step"},"content":{"rendered":"<div class=\"headline-wrap\">\n<div class=\"excerpt bold\">The <a href=\"https:\/\/owncloud.com\/running-owncloud-in-kubernetes-with-rook-ceph-storage\/\">first part<\/a> of this series explained what we need for an ownCloud deployment in a Kubernetes cluster and gave a high level overview. You can find the example files for this guide in <a href=\"https:\/\/github.com\/galexrt\/owncloud-kubernetes-rook\" target=\"_blank\" rel=\"noopener noreferrer\">this GitHub repository<\/a>.<\/div>\n<\/div>\n<div class=\"content\">\n<h2 id=\"preparations\">Preparations<\/h2>\n<p>To follow this guide, you need admin access to a Kubernetes cluster with an Ingress controller. If you don\u2019t have that already, you can follow these steps:<\/p>\n<h3 id=\"kubernetes-cluster-access\">Kubernetes Cluster Access<\/h3>\n<p>If you don\u2019t have a Kubernetes cluster, you can try using the following projects\u00a0<a href=\"https:\/\/github.com\/xetys\/hetzner-kube\" target=\"_blank\" rel=\"noopener noreferrer\">xetys\/hetzner-kube on GitHub<\/a>,\u00a0<a href=\"https:\/\/kubespray.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">Kubespray<\/a>\u00a0and\u00a0<a href=\"https:\/\/kubernetes.io\/docs\/setup\/\" target=\"_blank\" rel=\"noopener noreferrer\">others (Kubernetes documentation)<\/a>.<\/p>\n<p>minikube is not enough when started with the default resources, be sure to give minikube extra resources otherwise you will run into problems! 
Be sure to\u00a0add the following flags\u00a0to the\u00a0<code>minikube start<\/code>\u00a0command:\u00a0<code>--memory=4096 --cpus=3 --disk-size=40g<\/code>.<\/p>\n<p>You should have\u00a0<code>cluster-admin<\/code>\u00a0access to the Kubernetes cluster! Other access can also work, but due to the nature of the objects that are created along the way, it is easier with\u00a0<code>cluster-admin<\/code>\u00a0access.<\/p>\n<h3 id=\"kubernetes-cluster\">Kubernetes Cluster<\/h3>\n<h4 id=\"ingress-controller\">Ingress Controller<\/h4>\n<p>WARNING:\u00a0Only follow this section if your Kubernetes cluster does not have an Ingress controller yet.<\/p>\n<p>We are going to install the Kubernetes NGINX Ingress Controller.<\/p>\n<pre><code># Taken from https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/master\/deploy\/static\/mandatory.yaml\r\nkubectl apply -f ingress-nginx\/\r\n<\/code><\/pre>\n<p>The instructions shown here are for an environment without\u00a0<code>LoadBalancer<\/code>\u00a0Service type support (e.g., bare metal, \u201cnormal\u201d VM provider, not cloud); for installation instructions for other environments, check out the\u00a0<a href=\"https:\/\/kubernetes.github.io\/ingress-nginx\/deploy\/\" target=\"_blank\" rel=\"noopener noreferrer\">Installation Guide \u2013 NGINX Ingress Controller<\/a>.<\/p>\n<pre><code># Taken from https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/master\/deploy\/static\/provider\/baremetal\/service-nodeport.yaml\r\nkubectl apply -f ingress-nginx\/service-nodeport.yaml\r\n<\/code><\/pre>\n<p>As these are bare metal installation instructions, the NGINX Ingress controller will be available through a Service of type\u00a0<code>NodePort<\/code>. 
This Service type exposes one or more ports on all Nodes in the Kubernetes cluster.<\/p>\n<p>To get that port, run:<\/p>\n<pre><code>$ kubectl get -n ingress-nginx service ingress-nginx\r\nNAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE\r\ningress-nginx   NodePort   10.108.254.160   &lt;none&gt;        80:30512\/TCP,443:30243\/TCP   3m\r\n<\/code><\/pre>\n<p>In that output you can see the NodePorts for HTTP and HTTPS on which you can connect to the NGINX Ingress controller and, later, to ownCloud.<\/p>\n<p>As mentioned, you probably want to look into a more \u201csolid\u201d way to expose the NGINX Ingress controller(s); for bare metal, where there is no Kubernetes LoadBalancer integration, you can consider using the\u00a0<code>hostNetwork<\/code>\u00a0option:\u00a0<a href=\"https:\/\/kubernetes.github.io\/ingress-nginx\/deploy\/baremetal\/#via-the-host-network\" target=\"_blank\" rel=\"noopener noreferrer\">bare-metal considerations \u2013 NGINX Ingress Controller<\/a>.<\/p>\n<h4 id=\"namespaces\">Namespaces<\/h4>\n<p>Throughout the installation we will create the following Namespaces:<\/p>\n<ul>\n<li><code>rook-ceph<\/code>\u00a0\u2013 For the Rook-run Ceph cluster + the Rook Ceph operator (will be created below).<\/li>\n<li><code>owncloud<\/code>\u00a0\u2013 For ownCloud and the other operators, such as Zalando\u2019s Postgres Operator and KubeDB for Redis.<\/li>\n<li><code>ingress-nginx<\/code>\u00a0\u2013 Used by the NGINX Ingress controller (if you followed the section above, it was already created together with the Ingress 
Controller).<\/li>\n<\/ul>\n<pre><code>kubectl create -f namespaces.yaml\r\n<\/code><\/pre>\n<h2 id=\"rook-ceph-storage\">Rook Ceph Storage<\/h2>\n<p>Now on to running Ceph in Kubernetes, using the\u00a0<a href=\"https:\/\/rook.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">Rook.io project<\/a>.<\/p>\n<p>In the following sections, make sure to use the available\u00a0<code>-test<\/code>\u00a0suffixed files if you have fewer than 3 Nodes available to any application \/ Pod (e.g., depending on your cluster, the masters may not be available for Pods). (You can change that; dig into the\u00a0<code>CephCluster<\/code>\u00a0object\u2019s\u00a0<code>spec.placement.tolerations<\/code>\u00a0and the Operator environment variables for the discover and agent daemons. Running application Pods on the masters is not recommended, though.)<\/p>\n<h3 id=\"operator\">Operator<\/h3>\n<p>The operator will take care of starting up the Ceph components one by one, as well as preparing disks and health checking.<\/p>\n<pre><code>kubectl create -f rook-ceph\/common.yaml\r\nkubectl create -f rook-ceph\/operator.yaml\r\n<\/code><\/pre>\n<p>You can check on the Pods to see how it looks:<\/p>\n<pre><code>$ kubectl get -n rook-ceph pod\r\nNAME                                  READY   STATUS    RESTARTS   AGE\r\nrook-ceph-agent-cbrgv                 1\/1     Running   0          90s\r\nrook-ceph-agent-wfznr                 1\/1     Running   0          90s\r\nrook-ceph-agent-zhgg7                 1\/1     Running   0          90s\r\nrook-ceph-operator-6897f5c696-j724m   1\/1     Running   0          2m18s\r\nrook-discover-jg798                   1\/1     Running   0          90s\r\nrook-discover-kfxc8                   1\/1     Running   0          90s\r\nrook-discover-qbhfs                   1\/1     Running   0          90s\r\n<\/code><\/pre>\n<p>The\u00a0<code>rook-discover-*<\/code>\u00a0Pods run one per Node of your Kubernetes cluster; they discover the disks of the Nodes so the operator can plan the actions for a given\u00a0<code>CephCluster<\/code>\u00a0object, which comes up next.<\/p>\n<div id=\"attachment_19547\" class=\"wp-caption aligncenter\" style=\"width: 551px;\">\n<p><a class=\"fancybox image\" href=\"https:\/\/owncloud.org\/wp-content\/uploads\/2019\/08\/ownCloud-kubernetes-rook-ceph-order-structure.jpg\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-19547\" src=\"https:\/\/owncloud.org\/wp-content\/uploads\/2019\/08\/ownCloud-kubernetes-rook-ceph-order-structure.jpg\" sizes=\"(max-width: 1920px) 100vw, 1920px\" srcset=\"https:\/\/owncloud.org\/wp-content\/uploads\/2019\/08\/ownCloud-kubernetes-rook-ceph-order-structure.jpg 1920w, 
https:\/\/owncloud.org\/wp-content\/uploads\/2019\/08\/ownCloud-kubernetes-rook-ceph-order-structure-300x225.jpg 300w, https:\/\/owncloud.org\/wp-content\/uploads\/2019\/08\/ownCloud-kubernetes-rook-ceph-order-structure-768x576.jpg 768w, https:\/\/owncloud.org\/wp-content\/uploads\/2019\/08\/ownCloud-kubernetes-rook-ceph-order-structure-1024x768.jpg 1024w, https:\/\/owncloud.org\/wp-content\/uploads\/2019\/08\/ownCloud-kubernetes-rook-ceph-order-structure-1800x1350.jpg 1800w, https:\/\/owncloud.org\/wp-content\/uploads\/2019\/08\/ownCloud-kubernetes-rook-ceph-order-structure-1320x990.jpg 1320w\" alt=\"ownCloud kubernetes rook ceph order structure\" width=\"551\" height=\"413\" aria-describedby=\"caption-attachment-19547\" \/><\/a><em>Order and structure prevail in the realm of Kubernetes.<\/em><\/p>\n<\/div>\n<h3 id=\"ceph-cluster\">Ceph Cluster<\/h3>\n<p>This is the definition of the Ceph cluster that will be created in Kubernetes. It contains the lists and options for which disks to use and on which Nodes.<\/p>\n<p>If you want to see some example CephCluster objects and what is possible, be sure to check out the\u00a0<a href=\"https:\/\/rook.io\/docs\/rook\/v1.0\/ceph-cluster-crd.html\" target=\"_blank\" rel=\"noopener noreferrer\">Rook v1.0 Documentation \u2013 CephCluster CRD<\/a>.<\/p>\n<p>INFO:\u00a0Use the\u00a0<code>cluster-test.yaml<\/code>\u00a0when your Kubernetes cluster has fewer than 3 schedulable Nodes (e.g., minikube)! When using the\u00a0<code>cluster-test.yaml<\/code>\u00a0only one\u00a0<code>mon<\/code>\u00a0is started. 
If that\u00a0<code>mon<\/code>\u00a0is down for whatever reason, the Ceph Cluster will come to a halt to prevent any data \u201ccorruption\u201d.<\/p>\n<pre><code>$ kubectl create -f rook-ceph\/cluster.yaml\r\n<\/code><\/pre>\n<p>This will now cause the operator to start the Ceph cluster according to the specifications in the CephCluster object.<\/p>\n<p>To see which Pods have already been created by the operator, you can run (example output from a three-node cluster):<\/p>\n<pre><code>$ kubectl get -n rook-ceph pod\r\nNAME                                                     READY   STATUS      RESTARTS   AGE\r\nrook-ceph-agent-cbrgv                                    1\/1     Running     0          11m\r\nrook-ceph-agent-wfznr                                    1\/1     Running     0          11m\r\nrook-ceph-agent-zhgg7                                    1\/1     Running     0          11m\r\nrook-ceph-mgr-a-77fc54c489-66mpd                         1\/1     Running     0          6m45s\r\nrook-ceph-mon-a-68b94cd66-m48lm                          1\/1     Running     0          8m6s\r\nrook-ceph-mon-b-7b679476f-mc7wj                          1\/1     Running     0          8m\r\nrook-ceph-mon-c-b5c468c94-f8knt                          1\/1     Running     0          7m54s\r\nrook-ceph-operator-6897f5c696-j724m                      1\/1     Running     0          11m\r\nrook-ceph-osd-0-5c8d8fcdd-m4gl7                          1\/1     Running     0          5m55s\r\nrook-ceph-osd-1-67bfb7d647-vzmpv                         1\/1     Running     0          5m56s\r\nrook-ceph-osd-2-c8c55548f-ws8sl                          1\/1     Running     0          5m11s\r\nrook-ceph-osd-prepare-owncloudrookceph-worker-01-svvz9   0\/2     Completed   0          6m7s\r\nrook-ceph-osd-prepare-owncloudrookceph-worker-02-mhvf2   0\/2     Completed   0          6m7s\r\nrook-ceph-osd-prepare-owncloudrookceph-worker-03-nt2gs   0\/2     Completed   0          6m7s\r\nrook-discover-jg798                                      1\/1     Running     0          11m\r\nrook-discover-kfxc8                                      1\/1     Running     0          11m\r\nrook-discover-qbhfs                                      1\/1     Running     0          11m\r\n<\/code><\/pre>\n<h3 id=\"block-storage-rbd-\">Block Storage (RBD)<\/h3>\n<p>Before creating the CephFS filesystem, let\u2019s create a block storage pool with a StorageClass. 
The StorageClass is used for the PostgreSQL and, if you want, even for the Redis cluster.<\/p>\n<p>INFO:\u00a0Use the\u00a0<code>storageclass-test.yaml<\/code>\u00a0when your Kubernetes cluster has fewer than 3 schedulable Nodes!<\/p>\n<pre><code>kubectl create -f rook-ceph\/storageclass.yaml\r\n<\/code><\/pre>\n<p>In the case of a block storage Pool, no additional Pods are started; we\u2019ll verify that the block storage Pool has been created in the \u201cToolbox\u201d section below.<\/p>\n<p>One more thing to do: set the created StorageClass as the default in the Kubernetes cluster by running the following command:<\/p>\n<pre><code>kubectl patch storageclass rook-ceph-block -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io\/is-default-class\":\"true\"}}}'\r\n<\/code><\/pre>\n<p>Now you are ready to move on to the storage for the actual data to be stored in ownCloud!<\/p>\n<h3 id=\"cephfs\">CephFS<\/h3>\n<p>CephFS is the filesystem that Ceph offers. 
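<\/p>\n<p>As a sketch of what the\u00a0<code>filesystem.yaml<\/code>\u00a0could look like (following the Rook v1.0 CephFilesystem CRD; the name\u00a0<code>myfs<\/code>\u00a0matches the MDS Pod names shown below, the pool sizes are assumptions):<\/p>\n<pre><code>apiVersion: ceph.rook.io\/v1\r\nkind: CephFilesystem\r\nmetadata:\r\n  name: myfs\r\n  namespace: rook-ceph\r\nspec:\r\n  metadataPool:\r\n    replicated:\r\n      size: 3\r\n  dataPools:\r\n    - replicated:\r\n        size: 3\r\n  metadataServer:\r\n    activeCount: 1\r\n    activeStandby: true   # gives you one active and one standby-replay MDS\r\n<\/code><\/pre>\n<p>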
With its POSIX compliance, it is a perfect fit for ownCloud.<\/p>\n<p>INFO:\u00a0Use the\u00a0<code>filesystem-test.yaml<\/code>\u00a0when your Kubernetes cluster has fewer than 3 schedulable Nodes!<\/p>\n<pre><code>kubectl create -f rook-ceph\/filesystem.yaml\r\n<\/code><\/pre>\n<p>The creation of the CephFS will cause so-called MDS daemons (MDS Pods) to be started.<\/p>\n<pre><code>kubectl get -n rook-ceph pod\r\nNAME                                    READY   STATUS      RESTARTS   AGE\r\n[...]\r\nrook-ceph-mds-myfs-a-747b75bdc7-9nzwx                    1\/1     Running     0          11s\r\nrook-ceph-mds-myfs-b-76b9fcc8cc-md8bz                    1\/1     Running     0          10s\r\n[...]\r\n<\/code><\/pre>\n<h3 id=\"toolbox\">Toolbox<\/h3>\n<p>This will create a Pod which will allow us to run Ceph commands. 
It will be useful to quickly check the Ceph cluster\u2019s status.<\/p>\n<pre><code>kubectl create -f rook-ceph\/toolbox.yaml\r\n# Wait for the Pod to be `Running`\r\nkubectl get -n rook-ceph pod -l \"app=rook-ceph-tools\"\r\nNAME                                    READY   STATUS      RESTARTS   AGE\r\n[...]\r\nrook-ceph-tools-5966446d7b-nrw5n                         1\/1     Running     0          10s\r\n[...]\r\n<\/code><\/pre>\n<p>Now use\u00a0<code>kubectl exec<\/code>\u00a0to enter the Rook Ceph Toolbox Pod:<\/p>\n<pre><code>kubectl exec -n rook-ceph -it $(kubectl get -n rook-ceph pod -l \"app=rook-ceph-tools\" -o jsonpath='{.items[0].metadata.name}') bash\r\n<\/code><\/pre>\n<p>In the Rook Ceph Toolbox Pod, run the following command to get the\u00a0Ceph cluster health status\u00a0(example output from a 7-Node Kubernetes Rook Ceph cluster):<\/p>\n<pre><code>$ ceph -s\r\n cluster:\r\n   id:     f8492cd9-3d14-432c-b681-6f73425d6851\r\n   health: HEALTH_OK\r\n\r\n services:\r\n   mon: 3 daemons, quorum c,b,a\r\n   mgr: a(active)\r\n   mds: repl-2-1-2\/2\/2 up  {0=repl-2-1-c=up:active,1=repl-2-1-b=up:active}, 2 up:standby-replay\r\n   osd: 7 osds: 7 up, 7 in\r\n\r\n data:\r\n   pools:   3 pools, 300 pgs\r\n   objects: 1.41 M objects, 4.0 TiB\r\n   usage:   8.2 TiB used, 17 TiB \/ 25 TiB avail\r\n   pgs:     300 active+clean\r\n\r\n io:\r\n   client:   6.2 KiB\/s rd, 1.5 MiB\/s wr, 4 op\/s rd, 140 op\/s wr\r\n<\/code><\/pre>\n<p>You can also get it by using\u00a0<code>kubectl<\/code>:<\/p>\n<pre><code>$ kubectl get -n rook-ceph cephcluster rook-ceph\r\nNAME        DATADIRHOSTPATH   MONCOUNT   AGE   STATE     HEALTH\r\nrook-ceph   \/mnt\/sda1\/rook    3          14m   Created   HEALTH_OK\r\n<\/code><\/pre>\n<p>That even shows you some additional information directly through\u00a0<code>kubectl<\/code>, without you having to read the\u00a0<code>ceph -s<\/code>\u00a0output.<\/p>\n<h3 id=\"summary\">Rook Ceph Summary<\/h3>\n<p>This is how it should look in your\u00a0<code>rook-ceph<\/code>\u00a0Namespace now (example output from a 3-Node Kubernetes cluster):<\/p>\n<pre><code>$ kubectl get -n rook-ceph pod\r\nNAME                                                     READY   STATUS      RESTARTS   AGE\r\nrook-ceph-agent-cbrgv                                    1\/1     Running     0          15m\r\nrook-ceph-agent-wfznr                                    1\/1     Running     0          15m\r\nrook-ceph-agent-zhgg7                                    1\/1     Running     0          15m\r\nrook-ceph-mds-myfs-a-747b75bdc7-9nzwx                    1\/1     Running     0          42s\r\nrook-ceph-mds-myfs-b-76b9fcc8cc-md8bz                    1\/1     Running     0          41s\r\nrook-ceph-mgr-a-77fc54c489-66mpd                         1\/1     Running     0          11m\r\nrook-ceph-mon-a-68b94cd66-m48lm                          1\/1     Running     0          12m\r\nrook-ceph-mon-b-7b679476f-mc7wj                          1\/1     Running     0          2m22s\r\nrook-ceph-mon-c-b5c468c94-f8knt                          1\/1     Running     0          2m6s\r\nrook-ceph-operator-6897f5c696-j724m                      1\/1     Running     0          16m\r\nrook-ceph-osd-0-5c8d8fcdd-m4gl7                          1\/1     Running     0          10m\r\nrook-ceph-osd-1-67bfb7d647-vzmpv                         1\/1     Running     0          10m\r\nrook-ceph-osd-2-c8c55548f-ws8sl                          1\/1     Running     0          9m48s\r\nrook-ceph-osd-prepare-owncloudrookceph-worker-01-5xpqk   0\/2     Completed   0          73s\r\nrook-ceph-osd-prepare-owncloudrookceph-worker-02-xnl8p   0\/2     Completed   0          70s\r\nrook-ceph-osd-prepare-owncloudrookceph-worker-03-2qggs   0\/2     Completed   0          68s\r\nrook-ceph-tools-5966446d7b-nrw5n                         1\/1     Running     0          8s\r\nrook-discover-jg798                                      1\/1     Running     0          15m\r\nrook-discover-kfxc8                                      1\/1     Running     0          15m\r\nrook-discover-qbhfs                                      1\/1     Running     0          15m\r\n<\/code><\/pre>\n<p>The important thing is that the\u00a0<code>ceph -s<\/code>\u00a0output or the\u00a0<code>kubectl get cephcluster<\/code>\u00a0output shows that the\u00a0<code>health<\/code>\u00a0is\u00a0<code>HEALTH_OK<\/code>\u00a0and that you have OSD Pods running. The\u00a0<code>ceph -s<\/code>\u00a0output line should say:\u00a0<code>osd: 3 osds: 3 up, 3 in<\/code>\u00a0(where 3 is the number of OSD Pods).<\/p>\n<p>Should you not have any OSD Pods, make sure all your Nodes are\u00a0<code>Ready<\/code>\u00a0and schedulable (e.g., no taints preventing \u201cnormal\u201d Pods from running), and check the logs of the\u00a0<code>rook-ceph-osd-prepare-*<\/code>\u00a0and of any existing\u00a0<code>rook-ceph-osd-[0-9]*<\/code>\u00a0Pods.<\/p>\n<p>If you don\u2019t have any Pods related to\u00a0<code>rook-ceph-osd-*<\/code>\u00a0at all, look into the\u00a0<code>rook-ceph-operator-*<\/code>\u00a0logs for error messages; be sure to go over each line so you don\u2019t miss one.<\/p>\n<h2 id=\"postgresql\">PostgreSQL<\/h2>\n<p>Moving on to the PostgreSQL for ownCloud. Zalando\u2019s PostgreSQL operator does a great job of running PostgreSQL in Kubernetes.<\/p>\n<p>The first thing to create is the PostgreSQL Operator, which brings the CustomResourceDefinitions (remember, the custom Kubernetes objects) with it. 
Using the Ceph block storage (RBD), we are going to create a redundant PostgreSQL instance for ownCloud to use.<\/p>\n<pre><code>$ kubectl create -n owncloud -f postgres\/postgres-operator.yaml\r\n# Check for the PostgreSQL operator Pod to be created and running\r\n$ kubectl get -n owncloud pod\r\nNAME                                 READY   STATUS    RESTARTS   AGE\r\npostgres-operator-6464fc9c48-6twrd   1\/1     Running   0          5m23s\r\n<\/code><\/pre>\n<p>With the operator created, move on to the PostgreSQL custom resource object that will cause the operator to create a PostgreSQL instance in Kubernetes:<\/p>\n<pre><code># Make sure the CustomResourceDefinition of the PostgreSQL has been created\r\n$ kubectl get customresourcedefinitions.apiextensions.k8s.io postgresqls.acid.zalan.do\r\nNAME                        CREATED AT\r\npostgresqls.acid.zalan.do   2019-08-04T10:27:59Z\r\n<\/code><\/pre>\n<p>The CustomResourceDefinition exists? 
Perfect, continue with the creation:<\/p>\n<pre><code>kubectl create -n owncloud -f postgres\/postgres.yaml\r\n<\/code><\/pre>\n<p>It will take a bit for the two PostgreSQL Pods to appear, but in the end you should have two\u00a0<code>owncloud-postgres<\/code>\u00a0Pods:<\/p>\n<pre><code>$ kubectl get -n owncloud pod\r\nNAME                                 READY   STATUS    RESTARTS   AGE\r\nowncloud-postgres-0                  1\/1     Running   0          92s\r\nowncloud-postgres-1                  1\/1     Running   0          64s\r\npostgres-operator-6464fc9c48-6twrd   1\/1     Running   0          7m\r\n<\/code><\/pre>\n<p><code>owncloud-postgres-0<\/code>\u00a0and\u00a0<code>owncloud-postgres-1<\/code>\u00a0in\u00a0<code>Running<\/code>\u00a0status? That looks good.<\/p>\n<p>Now that the database is running, let\u2019s continue with Redis.<\/p>\n<h2 id=\"redis\">Redis<\/h2>\n<p>To run a Redis cluster, we need the KubeDB Operator. You can install it with a bash script or Helm. 
To keep it quick\u2019n\u2019easy we\u2019ll use their bash script for that:<\/p>\n<pre><code>curl -fsSL https:\/\/raw.githubusercontent.com\/kubedb\/cli\/0.12.0\/hack\/deploy\/kubedb.sh -o kubedb.sh\r\n# Take a look at the script first, e.g., with `cat kubedb.sh`\r\n#\r\n# If you are fine with it, run it:\r\nchmod +x kubedb.sh\r\n.\/kubedb.sh\r\n# It will install the KubeDB operator into the cluster in the `kube-system` Namespace\r\n<\/code><\/pre>\n<p>(You can remove the script afterwards:\u00a0<code>rm kubedb.sh<\/code>)<\/p>\n<p>For more information on the bash script and\/or the Helm installation, check out\u00a0<a href=\"https:\/\/kubedb.com\/docs\/0.12.0\/setup\/install\/#install-kubedb-operator\" target=\"_blank\" rel=\"noopener noreferrer\">KubeDB<\/a>.<\/p>\n<p>Now move on to create the Redis cluster. Run:<\/p>\n<pre><code>kubectl create -n owncloud -f redis.yaml\r\n<\/code><\/pre>\n<p>It will take a few seconds for the first Redis Pod(s) to start. To check that it worked, look for Pods with\u00a0<code>redis-owncloud-<\/code>\u00a0in their name:<\/p>\n<pre><code>$ kubectl get -n owncloud pods\r\nNAME                                 READY   STATUS    RESTARTS   AGE\r\nowncloud-postgres-0                  1\/1     Running   0          6m41s\r\nowncloud-postgres-1                  1\/1     Running   0          6m13s\r\npostgres-operator-6464fc9c48-6twrd   
<span class=\"hljs-number\">1<\/span>\/<span class=\"hljs-number\">1<\/span>     Running   <span class=\"hljs-number\">0<\/span>          <span class=\"hljs-number\">12<\/span>m\r\nredis-owncloud-shard0<span class=\"hljs-number\">-0<\/span>              <span class=\"hljs-number\">1<\/span>\/<span class=\"hljs-number\">1<\/span>     Running   <span class=\"hljs-number\">0<\/span>          <span class=\"hljs-number\">49<\/span>s\r\nredis-owncloud-shard0<span class=\"hljs-number\">-1<\/span>              <span class=\"hljs-number\">1<\/span>\/<span class=\"hljs-number\">1<\/span>     Running   <span class=\"hljs-number\">0<\/span>          <span class=\"hljs-number\">40<\/span>s\r\nredis-owncloud-shard1<span class=\"hljs-number\">-0<\/span>              <span class=\"hljs-number\">1<\/span>\/<span class=\"hljs-number\">1<\/span>     Running   <span class=\"hljs-number\">0<\/span>          <span class=\"hljs-number\">29<\/span>s\r\nredis-owncloud-shard1<span class=\"hljs-number\">-1<\/span>              <span class=\"hljs-number\">1<\/span>\/<span class=\"hljs-number\">1<\/span>     Running   <span class=\"hljs-number\">0<\/span>          <span class=\"hljs-number\">19<\/span>s\r\nredis-owncloud-shard2<span class=\"hljs-number\">-0<\/span>              <span class=\"hljs-number\">1<\/span>\/<span class=\"hljs-number\">1<\/span>     Running   <span class=\"hljs-number\">0<\/span>          <span class=\"hljs-number\">14<\/span>s\r\nredis-owncloud-shard2<span class=\"hljs-number\">-1<\/span>              <span class=\"hljs-number\">1<\/span>\/<span class=\"hljs-number\">1<\/span>     Running   <span class=\"hljs-number\">0<\/span>          <span class=\"hljs-number\">10<\/span>s\r\n<\/code><\/pre>\n<p>That is how it should look like now.<\/p>\n<h2 id=\"owncloud\">ownCloud<\/h2>\n<p>Now the final \u201cpiece\u201d: ownCloud. 
The folder\u00a0<code>owncloud\/<\/code>\u00a0contains all the manifests we need:<\/p>\n<ul>\n<li>ConfigMap and Secret for the basic configuration of ownCloud.<\/li>\n<li>Deployment to get ownCloud Pods running in Kubernetes.<\/li>\n<li>Service and Ingress to expose ownCloud to the internet.<\/li>\n<li>CronJob to run ownCloud\u2019s recurring cron tasks (e.g., cleanup), instead of running cron in every instance.<\/li>\n<\/ul>\n<p>The ownCloud Deployment currently uses a custom-built image (<code>galexrt\/owncloud-server:latest<\/code>) which has a fix for a clustered Redis configuration issue (there is already an open\u00a0<a href=\"https:\/\/github.com\/owncloud-docker\/base\/pull\/95\" target=\"_blank\" rel=\"noopener noreferrer\">pull request<\/a>).<\/p>\n<pre><code>kubectl create -n owncloud -f owncloud\/\r\n# Now wait for ownCloud to finish installing the database, then scale the Deployment up to `2` replicas (or more if you want)\r\n<\/code><\/pre>\n<p>The admin username is\u00a0<code>myowncloudadmin<\/code>\u00a0and can be changed in the\u00a0<code>owncloud\/owncloud-configmap.yaml<\/code>\u00a0file. Be sure to restart both ownCloud Pods after changing values in the ConfigMaps and Secrets.<\/p>\n<p>If you want to change the admin password, edit the\u00a0<code>OWNCLOUD_ADMIN_PASSWORD<\/code>\u00a0line in the\u00a0<code>owncloud\/owncloud-secret.yaml<\/code>\u00a0file. 
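<\/p>\n<p>For example, to produce the encoded value for the Secret (the password used here is only a placeholder):<\/p>\n<pre><code># Encode a new password (no trailing newline, single-line output)\r\nprintf '%s' 'YOUR_PASSWORD' | base64 -w0\r\n# WU9VUl9QQVNTV09SRA==\r\n#\r\n# Decode an existing value from the Secret to double-check it\r\necho 'WU9VUl9QQVNTV09SRA==' | base64 -d\r\n# YOUR_PASSWORD\r\n<\/code><\/pre>\n<p>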
The values in a Kubernetes Secret object must be base64-encoded (e.g.,\u00a0<code>echo -n YOUR_PASSWORD | base64 -w0<\/code>)!<\/p>\n<p>To know when your ownCloud is up and running, check the logs, e.g.:<\/p>\n<pre><code>$ kubectl logs -n owncloud -f owncloud-856fcc4947-crscn\r\nCreating volume folders...\r\nCreating hook folders...\r\nWaiting for PostgreSQL...\r\nwait-for-it: waiting 180 seconds for owncloud-postgres:5432\r\nwait-for-it: owncloud-postgres:5432 is available after 1 seconds\r\nRemoving custom folder...\r\nLinking custom folder...\r\nRemoving config folder...\r\nLinking config folder...\r\nWriting config file...\r\nFixing base perms...\r\nFixing data perms...\r\nFixing hook perms...\r\nInstalling server database...\r\nownCloud was successfully installed\r\nownCloud is already latest version\r\nWriting objectstore config...\r\nWriting php config...\r\nUpdating htaccess config...\r\n.htaccess has been updated\r\nWriting apache config...\r\nEnabling webcron background...\r\nSet mode for background jobs to 'webcron'\r\nTouching cron configs...\r\nStarting cron daemon...\r\nStarting apache daemon...\r\n[Sun Aug 04 13:26:18.986407 2019] [mpm_prefork:notice] [pid 190] AH00163: Apache\/2.4.29 (Ubuntu) configured -- resuming normal operations\r\n[Sun Aug 04 13:26:18.986558 2019] [core:notice] [pid 190] AH00094: Command line: '\/usr\/sbin\/apache2 -f \/etc\/apache2\/apache2.conf -D FOREGROUND'\r\n<\/code><\/pre>\n<p>The\u00a0<code>Installing server database...<\/code>\u00a0step will take some time depending on your network, storage and other factors.<\/p>\n<p>After the\u00a0<code>[Sun Aug 04 13:26:18.986558 2019] [core:notice] [pid 190] AH00094: 
Command line: '\/usr\/sbin\/apache2 -f \/etc\/apache2\/apache2.conf -D FOREGROUND'<\/code>\u00a0you should be able to reach your ownCloud instance through the NodePort Service Port (on HTTP) or through the Ingress (default address\u00a0<code>owncloud.example.com<\/code>). If you are using the Ingress from the example files, be sure to edit it to use a (sub)domain pointing to the Ingress controllers in your Kubernetes cluster.<\/p>\n<p>You now have an ownCloud instance running!<\/p>\n<h3 id=\"further-points\">Further points<\/h3>\n<h4 id=\"https\">HTTPS<\/h4>\n<p>To further improve the experience of running ownCloud in Kubernetes, you will probably want to check out\u00a0<a href=\"https:\/\/github.com\/jetstack\/cert-manager\" target=\"_blank\" rel=\"noopener noreferrer\">Jetstack\u2019s cert-manager project on GitHub<\/a>\u00a0to get Let\u2019s Encrypt certificates for your Ingress controller. The\u00a0<code>cert-manager<\/code>\u00a0allows you to request\u00a0<a href=\"https:\/\/letsencrypt.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">Let\u2019s Encrypt<\/a>\u00a0certificates easily through Kubernetes custom objects and keep them up to date.<\/p>\n<p>ownCloud will then be reachable via HTTPS, which, combined with ownCloud\u2019s encryption, makes the setup considerably more secure.<\/p>\n<p>For more information on using TLS with Kubernetes Ingress, check out\u00a0<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/#tls\" target=\"_blank\" rel=\"noopener noreferrer\">Ingress \u2013 Kubernetes<\/a>.<\/p>\n<h4 id=\"pod-health-checks\">Pod Health Checks<\/h4>\n<p>In\u00a0<code>owncloud\/owncloud-deployment.yaml<\/code>\u00a0there are a\u00a0<code>readinessProbe<\/code>\u00a0and a\u00a0<code>livenessProbe<\/code>\u00a0in the Deployment spec, but they are commented out. 
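<\/p>\n<p>The commented-out probes are roughly of the following shape (a sketch: the path, port and timings here are illustrative, check the actual file for the exact values); ownCloud\u2019s <code>status.php<\/code> endpoint is a common target for such HTTP checks:<\/p>\n<pre><code>readinessProbe:\r\n  httpGet:\r\n    path: \/status.php\r\n    port: 8080\r\n  initialDelaySeconds: 30\r\n  periodSeconds: 10\r\nlivenessProbe:\r\n  httpGet:\r\n    path: \/status.php\r\n    port: 8080\r\n  initialDelaySeconds: 60\r\n  periodSeconds: 20\r\n<\/code><\/pre>\n<p>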
After the ownCloud has been installed and you have verified it is running, you can uncomment those lines and apply them with\u00a0<code>kubectl apply<\/code>\u00a0\/\u00a0<code>kubectl replace<\/code>\u00a0(don\u2019t forget to specify the Namespace:\u00a0<code>-n owncloud<\/code>).<\/p>\n<h4 id=\"upload-filesize\">Upload Filesize<\/h4>\n<p>When changing the upload filesize of the ownCloud instance through the environment variables, be sure to also update the Ingress controller\u2019s \u201cmax upload file size\u201d accordingly.<\/p>\n<h4 id=\"other-configuration-options\">Other Configuration Options<\/h4>\n<p>To change configuration options, you need to provide them through environment variables. You can specify them in\u00a0<code>owncloud\/owncloud-configmap.yaml<\/code>.<\/p>\n<p>A list of all available environment variables can be found here:<\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/owncloud-docker\/server#available-environment-variables\" target=\"_blank\" rel=\"noopener noreferrer\">github.com\/owncloud-docker\/server#available-environment-variables<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/owncloud-docker\/base#available-environment-variables\" target=\"_blank\" rel=\"noopener noreferrer\">github.com\/owncloud-docker\/base#available-environment-variables<\/a><\/li>\n<\/ul>\n<h3 id=\"updating-owncloud-in-kubernetes\">Updating ownCloud in Kubernetes<\/h3>\n<p>It is the same procedure as when running ownCloud with, e.g.,\u00a0<a href=\"https:\/\/owncloud.org\/news\/how-to-set-up-an-owncloud-in-3-minutes\/\" target=\"_blank\" rel=\"noopener\">docker-compose<\/a>.<\/p>\n<p>To update ownCloud, scale the Deployment down to\u00a0<code>1<\/code>\u00a0(<code>replicas<\/code>), then update the image, wait for the single remaining Pod to come up again, and then scale the Deployment back up to, e.g.,\u00a0<code>2<\/code>\u00a0or more.<\/p>\n<h2 id=\"summary\">Summary<\/h2>\n<p>This is the end of the two-part series on running ownCloud in 
Kubernetes \u2013 thanks for reading. Hopefully it was helpful; feedback is appreciated! Share this guide with others.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Looking for best practices for high availability, scalability, and performance? Read this guide about running ownCloud in Kubernetes, using Rook for a Ceph cluster.<\/p>\n","protected":false},"author":7,"featured_media":5025,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"categories":[43],"tags":[],"class_list":["post-2428","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"acf":[],"_links":{"self":[{"href":"https:\/\/owncloud.com\/de\/wp-json\/wp\/v2\/posts\/2428","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/owncloud.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/owncloud.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/owncloud.com\/de\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/owncloud.com\/de\/wp-json\/wp\/v2\/comments?post=2428"}],"version-history":[{"count":0,"href":"https:\/\/owncloud.com\/de\/wp-json\/wp\/v2\/posts\/2428\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/owncloud.com\/de\/wp-json\/wp\/v2\/media\/5025"}],"wp:attachment":[{"href":"https:\/\/owncloud.com\/de\/wp-json\/wp\/v2\/media?parent=2428"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/owncloud.com\/de\/wp-json\/wp\/v2\/categories?post=2428"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/owncloud.com\/de\/wp-json\/wp\/v2\/tags?post=2428"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}