GZCTF Platform Deployment with k3s

Preface

The official GZ::CTF guide strongly recommends deploying GZCTF with the Docker + k3s solution, which can sustain small to medium public competitions.

This means we need at least two servers: one running Docker to host the GZCTF frontend, and one running k3s as the challenge-container backend.

Local Deployment

Asset Topology:

  • 192.168.0.100: used for GZCTF platform deployment (frontend), OS: Ubuntu 20.04
  • 192.168.0.195: used for k3s container cluster deployment (backend), OS: Ubuntu 20.04

k3s Deployment

k3s installation:

## Domestic
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
## Foreign
curl -sfL https://get.k3s.io | sh -
## check the installation
k3s -v
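
After the install finishes, you can also confirm that the node has registered and is Ready (a quick sanity check; k3s ships its own kubectl):

## should list the current node with STATUS "Ready"
sudo k3s kubectl get nodes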

NOTICE: With its default configuration, k3s may fail to pull Docker images (for network reasons), so we need to configure third-party registry mirrors.

Just add the configuration below to /etc/rancher/k3s/registries.yaml:

title:"/etc/rancher/k3s/registries.yaml"
mirrors:
  "docker.io":
    endpoint:
      - "https://hub.littlediary.cn"
      - "https://hub.xdark.top"
      - "https://cjie.eu.org"
      - "https://docker.1panel.live"
      - "https://docker.unsee.tech"

Next comes the Kuboard installation; Kuboard is a web UI for k3s that makes managing the cluster more convenient. Kuboard depends on Docker, so we install Docker and docker-compose first.

Commands to install Docker and docker-compose:

## Install docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/v2.32.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

## Install docker
sudo apt install docker.io

## Optional: allow running docker/docker-compose without typing sudo every time.
## (entering the password for every command gets tedious :x)
sudo groupadd docker
sudo gpasswd -a ${USER} docker
sudo systemctl restart docker
sudo chmod a+rw /var/run/docker.sock
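
Note that the docker group membership only takes effect in a new login session; you can pick it up immediately with newgrp (the chmod on the socket above is a quicker but less clean workaround):

newgrp docker
docker ps   ## should now work without sudo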

Next, install Kuboard. The code block below simply pulls a Docker image and runs the web container.

Run the shell script below with chmod +x run_kuboard.sh && ./run_kuboard.sh.

NOTICE: KUBOARD_ENDPOINT needs to be changed to your own k3s VPS; for me it is http://192.168.0.195:3271.

title:"run_kuboard.sh"
#!/bin/bash

## Run the Docker container
docker run -d \
--restart=unless-stopped \
--name=kuboard \
-p 3271:80/tcp \
-p 10081:10081/tcp \
-e KUBOARD_ENDPOINT="http://192.168.0.195:3271" \
-e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
-v /root/kuboard-data:/data \
swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3
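
You can check that the container came up before opening the browser (optional):

docker ps --filter "name=kuboard"
docker logs --tail 20 kuboard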

After that, visit http://192.168.0.195:3271 to see the Kuboard panel. The default account is admin/Kuboard123.

Now we have both k3s and Kuboard, but k3s has not been bound to Kuboard yet. We need to add the k3s cluster to Kuboard.

There are two inputs we need to pay attention to: kubeconfig and ApiServer地址 (the API server address). 名称 (name) and 描述 (description) can be anything you like, and the Context field has only one choice, default, once the kubeconfig is filled in.

For the kubeconfig field, copy the entire content of /etc/rancher/k3s/k3s.yaml.

title:"kubeconfig"
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tL....
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tL....
    client-key-data: LS0tLS....

For the ApiServer地址 field, we need to enter the k3s server IP. Kuboard and k3s run on the same VPS in this case, so it is simply:

https://192.168.0.195:6443
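
A quick reachability check from the Kuboard host (any HTTP response, even a 401/403 error, means the API server port is open):

curl -k https://192.168.0.195:6443/version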

Finally, k3s is bound to Kuboard; the last step is to set the k3s access rule as shown in the picture below.

GZCTF Deployment

The GZCTF frontend platform is deployed with docker-compose. The main file is docker-compose.yml, but it depends on appsettings.json and kube-config.yaml, so there are three files we need to create and configure; the structure is shown below.

title:"create basic env"
mkdir ~/gzctf
cd ~/gzctf
touch appsettings.json docker-compose.yml kube-config.yaml
tree .
.
├── appsettings.json
├── docker-compose.yml
└── kube-config.yaml

0 directories, 3 files

Docker is required to run it, so we install Docker first:

title:"docker install"
## Install docker
sudo apt install docker.io

## Optional: allow running docker/docker-compose without typing sudo every time.
## (entering the password for every command gets tedious :x)
sudo groupadd docker
sudo gpasswd -a ${USER} docker
sudo systemctl restart docker
sudo chmod a+rw /var/run/docker.sock

For the same network reason, we need to configure registry mirrors for Docker; just run the shell script below. An up-to-date list of third-party mirror addresses is given here: https://www.cnblogs.com/alex-oos/p/18417200.

title:"set docker registry mirrors"
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [
    "https://hub.xdark.top",
    "https://hub.littlediary.cn",
    "https://cjie.eu.org",
    "https://docker.1panel.live",
    "https://docker.unsee.tech"
  ]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
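
You can verify that the mirrors were picked up; they appear under "Registry Mirrors" in docker info:

docker info | grep -A 6 "Registry Mirrors"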

Then we need to configure the three necessary files; let's go through them one by one. All of my configuration follows Quick Start - GZ::CTF.

First, docker-compose.yml is the main file for building the GZCTF platform. Below is my configuration:

title:"docker-compose.yml"
services:
  gzctf:
    image: registry.cn-shanghai.aliyuncs.com/gztime/gzctf:latest
    restart: always
    environment:
      - "GZCTF_ADMIN_PASSWORD=123456"
      ## choose your backend language `en_US` / `zh_CN` / `ja_JP`
      - "LC_ALL=zh_CN.UTF-8"
    ports:
      - "80:8080"
    volumes:
      - "./data/files:/app/files"
      - "./appsettings.json:/app/appsettings.json:ro"
      - "./kube-config.yaml:/app/kube-config.yaml:ro" ## this is required for k8s deployment
      ## - "/var/run/docker.sock:/var/run/docker.sock" ## this is required for docker deployment
    depends_on:
      - db
      - cache

  cache:
    image: redis:alpine
    restart: always

  db:
    image: postgres:alpine
    restart: always
    environment:
      - "POSTGRES_PASSWORD=password_here"
    volumes:
      - "./data/db:/var/lib/postgresql/data"

The appsettings.json file holds the basic configuration of the GZ::CTF platform, such as the database address, the public entry, and so on. Note that the Database password here must match POSTGRES_PASSWORD in docker-compose.yml.

title:"appsettings.json"
{
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "Database": "Host=db:5432;Database=gzctf;Username=postgres;Password=password_here"
  },
  "EmailConfig": {
    "SenderAddress": "",
    "SenderName": "",
    "UserName": "",
    "Password": "",
    "Smtp": {
      "Host": "localhost",
      "Port": 587
    }
  },
  "XorKey": "JasperSecretKey",
  "ContainerProvider": {
    "Type": "Kubernetes",
    "PortMappingType": "Default",
    "EnableTrafficCapture": false,
    "PublicEntry": "192.168.0.195",
    "DockerConfig": {
      "SwarmMode": false,
      "Uri": "unix:///var/run/docker.sock"
    }
  },
  "RequestLogging": false,
  "DisableRateLimit": true
}

kube-config.yaml builds a bridge between the GZCTF platform (frontend) and the k3s cluster (backend). Just copy the contents of /etc/rancher/k3s/k3s.yaml from the k3s VPS and change the server property to the ApiServer address (ApiServer地址) we set before.

title:"kube-config.yaml"
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS...
    server: https://192.168.0.195:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1...
    client-key-data: LS0tLS1...
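
If kubectl is installed on the frontend host, you can confirm that this file actually reaches the k3s API server before starting GZCTF (optional sanity check):

kubectl --kubeconfig ./kube-config.yaml get nodes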

Finally, run docker-compose up -d in your gzctf directory on the platform VPS, then visit your VPS IP; if the command ran successfully, you will see the platform. For example, I visit http://192.168.0.100 and see the following:

The default username is Admin and the password is the one set in docker-compose.yml; for me it is 123456.
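
If the page does not come up, the container logs usually show why (database not ready, a bad kube-config.yaml, and so on):

docker-compose logs -f gzctf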

Challenges Deployment

If we want the GZCTF platform API to pull images and run our challenges automatically, a standard challenge container template is required. Fortunately, the official documentation points to two repos: GZCTF-Challenges shows basic dynamic-container challenge templates, and W4terCTF-2023 provides pre-built images ready for challenge deployment.

Official Docker challenge template addresses:

For example, I want to use a web challenge, Help Newnew Find Flag; the internal service port is listed in challenges/web/help-new-new-find-flag/README.md.

Then visit the packages page to get the Docker image URL: Packages · W4terDr0p

Get the Docker image address and pull it:

Finally, set the Docker image URL and the exposed port we checked earlier. You will be able to visit the challenge service if everything is correct.

The service will be exposed on the k3s VPS, but NOTICE that docker ps -a cannot see the challenge containers; you need to use Kuboard.
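
Alternatively, the challenge pods can be listed directly on the k3s node with the bundled kubectl (GZCTF creates them as Kubernetes pods rather than plain Docker containers, which is why docker ps does not show them):

sudo k3s kubectl get pods -A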

Remote Deployment

Asset Topology:

  • aaa.aaa.aaa.aaa: used for GZCTF platform deployment (frontend), OS: Ubuntu 20.04
  • bbb.bbb.bbb.bbb: used for k3s container cluster deployment (backend), OS: Ubuntu 20.04

k3s Deployment

k3s installation:

## Domestic
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
## Foreign
curl -sfL https://get.k3s.io | sh -
## check the installation
k3s -v

Modify the registry mirror config to make sure images can be pulled successfully. File path: /etc/rancher/k3s/registries.yaml

title:"/etc/rancher/k3s/registries.yaml"
mirrors:
  "docker.io":
    endpoint:
      - "https://hub.littlediary.cn"
      - "https://hub.xdark.top"
      - "https://cjie.eu.org"
      - "https://docker.1panel.live"
      - "https://docker.unsee.tech"

Likewise, if we want to use Kuboard, we should install Docker and docker-compose first.

## Install docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/v2.32.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

## Install docker
sudo apt install docker.io

## Optional: allow running docker/docker-compose without typing sudo every time.
## (entering the password for every command gets tedious :x)
sudo groupadd docker
sudo gpasswd -a ${USER} docker
sudo systemctl restart docker
sudo chmod a+rw /var/run/docker.sock

Next, install Kuboard. The code block below simply pulls a Docker image and runs the web container.

Run the shell script below with chmod +x run_kuboard.sh && ./run_kuboard.sh.

NOTICE: KUBOARD_ENDPOINT needs to be changed to your own k3s VPS; here it is http://bbb.bbb.bbb.bbb:3271.

title:"run_kuboard.sh"
#!/bin/bash

## Run the Docker container
docker run -d \
--restart=unless-stopped \
--name=kuboard \
-p 3271:80/tcp \
-p 10081:10081/tcp \
-e KUBOARD_ENDPOINT="bbb.bbb.bbb.bbb:3271" \
-e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
-v /root/kuboard-data:/data \
swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3

Then visit http://bbb.bbb.bbb.bbb:3271 to see the Kuboard login page (account: admin/Kuboard123).

Then we add a cluster to bind k3s, and set the ApiServer address so that the gzctf frontend can reach the backend.

NOTICE: for kubeconfig, copy the content of /etc/rancher/k3s/k3s.yaml in full, and the ApiServer address should be changed to the public IP, e.g. https://bbb.bbb.bbb.bbb:6443
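
If the connection to https://bbb.bbb.bbb.bbb:6443 fails TLS verification, the API server certificate probably does not include the public IP. k3s lets you add it as an extra SAN; a minimal sketch, assuming the default config file location:

## /etc/rancher/k3s/config.yaml (create it if it does not exist)
tls-san:
  - "bbb.bbb.bbb.bbb"

## then restart k3s so the serving certificate picks up the new SAN
sudo systemctl restart k3s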

Finally, choose your user role as below:

GZCTF Deployment

As we know, we need three files to build the frontend: docker-compose.yml, appsettings.json, and kube-config.yaml.

title:"docker-compose.yml"
services:
  gzctf:
    image: registry.cn-shanghai.aliyuncs.com/gztime/gzctf:latest
    restart: always
    environment:
      - "GZCTF_ADMIN_PASSWORD=123456"
      ## choose your backend language `en_US` / `zh_CN` / `ja_JP`
      - "LC_ALL=zh_CN.UTF-8"
    ports:
      - "80:8080"
    volumes:
      - "./data/files:/app/files"
      - "./appsettings.json:/app/appsettings.json:ro"
      - "./kube-config.yaml:/app/kube-config.yaml:ro" ## this is required for k8s deployment
      ## - "/var/run/docker.sock:/var/run/docker.sock" ## this is required for docker deployment
    depends_on:
      - db
      - cache

  cache:
    image: redis:alpine
    restart: always

  db:
    image: postgres:alpine
    restart: always
    environment:
      - "POSTGRES_PASSWORD=password_here"
    volumes:
      - "./data/db:/var/lib/postgresql/data"

NOTICE: change PublicEntry to the backend server IP: "PublicEntry": "bbb.bbb.bbb.bbb"

title:"appsettings.json"
{
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "Database": "Host=db:5432;Database=gzctf;Username=postgres;Password=password_here"
  },
  "EmailConfig": {
    "SenderAddress": "",
    "SenderName": "",
    "UserName": "",
    "Password": "",
    "Smtp": {
      "Host": "localhost",
      "Port": 587
    }
  },
  "XorKey": "JasperSecretKey",
  "ContainerProvider": {
    "Type": "Kubernetes",
    "PortMappingType": "Default",
    "EnableTrafficCapture": false,
    "PublicEntry": "bbb.bbb.bbb.bbb",
    "DockerConfig": {
      "SwarmMode": false,
      "Uri": "unix:///var/run/docker.sock"
    }
  },
  "RequestLogging": false,
  "DisableRateLimit": true
}

NOTICE: change server to https://<backend_server_ip>:6443

title:"kube-config.yaml"
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1...
    server: https://bbb.bbb.bbb.bbb:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1...
    client-key-data: LS0tLS1...

Create the above config files properly, then run docker-compose up -d.

Try visiting http://aaa.aaa.aaa.aaa and you will see the platform:
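
If it does not load, check that the containers are up and that port 80 is open in the frontend server's firewall / security group:

docker-compose ps
curl -I http://aaa.aaa.aaa.aaa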

Reference