celeborn/docs/deploy_on_k8s.md
SteNicholas dd87419044
[CELEBORN-1380][FOLLOWUP] leveldbjni uses org.openlabtesting.leveldbjni to support linux aarch64 platform for leveldb via aarch64 profile
### What changes were proposed in this pull request?

The leveldbjni dependency uses `org.openlabtesting.leveldbjni` to support the Linux aarch64 platform for LevelDB via the `aarch64` profile.

Follow-up to #2476.

### Why are the changes needed?

The Celeborn worker could not start on ARM devices when the DB backend is `LevelDB`; leveldbjni should therefore be supported on the aarch64 platform.

aarch64 uses `org.openlabtesting.leveldbjni:leveldbjni-all:1.8`, and other platforms use `org.fusesource.leveldbjni:leveldbjni-all:1.8`. Meanwhile, some Hadoop dependency packages also depend on `org.fusesource.leveldbjni:leveldbjni-all`, but Hadoop merged a similar change on trunk (for details see [HADOOP-16614](https://issues.apache.org/jira/browse/HADOOP-16614)), so the `org.fusesource.leveldbjni` dependency should be excluded from those Hadoop packages.

In addition, `org.openlabtesting.leveldbjni` requires glibc version 3.4.21. Otherwise, there are the following potential runtime risks:

```
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGBUS (0x7) at pc=0x00007fad3630b12a, pid=62, tid=0x00007f93394ef700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_162-b12) (build 1.8.0_162-b12)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.162-b12 mixed mode linux-amd64 )
# Problematic frame:
# C  [libc.so.6+0x8412a]
#
# Core dump written. Default location: /data/service/celeborn/core or core.62
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

---------------  T H R E A D  ---------------

Current thread (0x00007f9308001000):  JavaThread "leveldb" [_thread_in_native, id=878, stack(0x00007f9338cf0000,0x00007f93394f0000)]

siginfo: si_signo: 7 (SIGBUS), si_code: 2 (BUS_ADRERR), si_addr: 0x00007f97380d2220
```

Backports:

- https://github.com/apache/spark/pull/26636
- https://github.com/apache/spark/pull/31036

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

No.

Closes #2530 from SteNicholas/CELEBORN-1380.

Authored-by: SteNicholas <programgeek@163.com>
Signed-off-by: mingji <fengmingxiao.fmx@alibaba-inc.com>
2024-05-27 14:07:02 +08:00


Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

# Deploy Celeborn on Kubernetes

Celeborn currently supports rapid deployment using Helm.

## Before Deploy

1. You should have a running Kubernetes cluster.
2. You should understand basic Kubernetes deployment concepts, e.g. Kubernetes Resources.
3. You should have sufficient permissions to create resources.
4. You should have Helm installed.
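As a quick sanity check before you start, a small shell loop like the one below reports whether `kubectl` and `helm` are on your `PATH` (a minimal sketch; it only checks for the binaries, not cluster access or permissions):

```shell
# Report whether the command-line tools needed for deployment are installed.
for tool in kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found ($(command -v "$tool"))"
  else
    echo "$tool: NOT found - install it before deploying"
  fi
done
```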

## Deploy

### 1. Get Celeborn Binary Package

You can find released versions of Celeborn on the Downloading Page.

Of course, you can also build a binary package from the master branch or your own branch using `./build/make-distribution.sh` in the source code.

Notice: Celeborn supports automatic builds on the Linux aarch64 platform via the `aarch64` profile, which requires glibc version 3.4.21. With other glibc versions such as 2.x, there is a potential problematic frame `C [libc.so.6+0x8412a]`.

In any case, you should unpack the binary package and change into its directory.

### 2. Modify Celeborn Configurations

Notice: the Celeborn chart template files are still experimental and unstable; they will be adjusted in subsequent optimizations.

The configurations in `./charts/celeborn/values.yaml` you should focus on modifying are:

- image repository - the repository to pull images from
- image tag - the image version to use
- masterReplicas - the number of Celeborn master replicas
- workerReplicas - the number of Celeborn worker replicas
- volumes - how and where to mount volumes (for more information, see Volumes)
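As an illustration, you can keep your overrides in a separate file and pass it to Helm with `-f`. The exact key layout below (e.g. whether `repository` and `tag` nest under `image`) is an assumption and may differ between chart versions, so check `values.yaml` itself; the repository and tag values are placeholders:

```shell
# Write a minimal override file; the keys mirror the fields listed above.
cat > my-values.yaml <<'EOF'
image:
  repository: my-registry.example.com/celeborn   # placeholder registry/name
  tag: latest                                    # placeholder tag
masterReplicas: 3
workerReplicas: 5
EOF

# Later: helm install celeborn -n <namespace> -f my-values.yaml ./charts/celeborn
cat my-values.yaml
```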

### [Optional] Build Celeborn Docker Image

If you want to build your own Celeborn Docker image, you can run `docker build . -f docker/Dockerfile` in the unpacked binary package.
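A sketch of the full build-and-push sequence follows; the registry, image name, and tag are placeholders, and the `echo` prefixes make this a dry run that only prints the commands (remove them to actually execute):

```shell
IMAGE=my-registry.example.com/celeborn:custom   # placeholder image reference
echo docker build . -f docker/Dockerfile -t "$IMAGE"
echo docker push "$IMAGE"
```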

### 3. Helm Install Celeborn Charts

More details in Helm Install.

```shell
cd ./charts/celeborn
helm install celeborn -n <namespace> .
```

### 4. Check Celeborn

After the above operations, you should be able to find the corresponding Celeborn Master/Worker Pods via `kubectl get pods -n <namespace>`.

For example:

```
NAME                READY     STATUS             RESTARTS   AGE
celeborn-master-0   1/1       Running            0          1m
...
celeborn-worker-0   1/1       Running            0          1m
...
```
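If you want to script this check, a small sketch like the following scans a `kubectl get pods` listing for Pods that are not yet Running (here it is fed a sample listing; in a real cluster, pipe `kubectl get pods -n <namespace>` into it instead):

```shell
check_pods() {
  # Read a 'kubectl get pods' style listing on stdin; report Pods whose
  # STATUS column is not Running, or confirm that all are Running.
  awk 'NR > 1 && $3 != "Running" { print $1 " is " $3; bad = 1 }
       END { if (!bad) print "all pods Running" }'
}

check_pods <<'EOF'
NAME                READY     STATUS             RESTARTS   AGE
celeborn-master-0   1/1       Running            0          1m
celeborn-worker-0   1/1       Running            0          1m
EOF
# prints: all pods Running
```

In a live cluster: `kubectl get pods -n <namespace> | check_pods`.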

Given that the Celeborn Master/Worker Pods take time to start, you may see output like the following in the meantime:

```
** server can't find celeborn-master-0.celeborn-master-svc.default.svc.cluster.local: NXDOMAIN

waiting for master
Server:         172.17.0.10
Address:        172.17.0.10#53

...

Name:   celeborn-master-0.celeborn-master-svc.default.svc.cluster.local
Address: 10.225.139.80

Server:         172.17.0.10
Address:        172.17.0.10#53

starting org.apache.celeborn.service.deploy.master.Master, logging to /opt/celeborn/logs/celeborn--org.apache.celeborn.service.deploy.master.Master-1-celeborn-master-0.out

...

23/03/23 14:10:56,081 INFO [main] RaftServer: 0: start RPC server
23/03/23 14:10:56,132 INFO [nioEventLoopGroup-2-1] LoggingHandler: [id: 0x83032bf1] REGISTERED
23/03/23 14:10:56,132 INFO [nioEventLoopGroup-2-1] LoggingHandler: [id: 0x83032bf1] BIND: 0.0.0.0/0.0.0.0:9872
23/03/23 14:10:56,134 INFO [nioEventLoopGroup-2-1] LoggingHandler: [id: 0x83032bf1, L:/0:0:0:0:0:0:0:0:9872] ACTIVE
23/03/23 14:10:56,135 INFO [JvmPauseMonitor0] JvmPauseMonitor: JvmPauseMonitor-0: Started
23/03/23 14:10:56,208 INFO [main] Master: Metrics system enabled.
23/03/23 14:10:56,216 INFO [main] HttpServer: master: HttpServer started on port 9098.
23/03/23 14:10:56,216 INFO [main] Master: Master started.
```

### 5. Access Celeborn Service

The Celeborn Master/Worker nodes deployed via the official Helm charts run as StatefulSets, so they can be accessed through the Pod IP or a stable network ID (DNS name). In the above case, the Master/Worker nodes can be accessed through:

```
celeborn-master-0.celeborn-master-svc.default.svc.cluster.local
...
celeborn-worker-0.celeborn-worker-svc.default.svc.cluster.local
...
```

After a restart, the StatefulSet Pod IP changes but the DNS name remains; this is important for rolling upgrades.
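These stable names follow the StatefulSet pattern `<pod-name>.<headless-service>.<namespace>.svc.cluster.local`, so they can be generated mechanically. In the sketch below, the namespace and replica count are assumptions that must match your installation:

```shell
NAMESPACE=default    # assumed namespace of the Helm release
MASTER_REPLICAS=3    # assumed to match masterReplicas in values.yaml
for i in $(seq 0 $((MASTER_REPLICAS - 1))); do
  echo "celeborn-master-$i.celeborn-master-svc.$NAMESPACE.svc.cluster.local"
done
```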

When the bind address is not set explicitly, the Celeborn worker binds to the first non-loopback address it finds. By default, it uses the IP address both for binding and for registering, which causes the Master and Client to use the IP address to access the Worker. This is problematic after a Worker restart, as explained above, especially when Graceful Shutdown is enabled.

You may want to set `celeborn.network.bind.preferIpAddress=false` to address this issue. Note that, depending on your Kubernetes network infrastructure, this may put pressure on the DNS service or cause other network issues compared with using IP addresses directly.

### 6. Build Celeborn Client

Here, without going into detail on how to configure Spark/Flink/MapReduce to find the Celeborn Master/Worker, we mention only the key configuration:

```
spark.celeborn.master.endpoints: celeborn-master-0.celeborn-master-svc.<namespace>:9097,celeborn-master-1.celeborn-master-svc.<namespace>:9097,celeborn-master-2.celeborn-master-svc.<namespace>:9097
```

You can find out why the endpoints are configured this way in Kubernetes DNS for Services and Pods.

Notice: you should ensure that Spark/Flink/MapReduce can reach the Celeborn Master/Worker via IP or via the Kubernetes DNS names mentioned above.
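Since the endpoint list grows with `masterReplicas`, it can be generated rather than typed by hand. In this sketch the namespace and replica count are assumptions that must match your deployment:

```shell
NAMESPACE=celeborn   # assumed namespace of the Helm release
REPLICAS=3           # assumed to match masterReplicas in values.yaml
ENDPOINTS=""
for i in $(seq 0 $((REPLICAS - 1))); do
  ep="celeborn-master-$i.celeborn-master-svc.$NAMESPACE:9097"
  ENDPOINTS="${ENDPOINTS:+$ENDPOINTS,}$ep"   # comma-join the endpoints
done
echo "spark.celeborn.master.endpoints: $ENDPOINTS"
```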