Fork project

commit 2f921b6209 (2025-09-19 15:59:08 +08:00)
52 changed files with 4012 additions and 0 deletions

docs/advanced.md (new file)
@@ -0,0 +1,156 @@
# Advanced deployment
## Note
Make sure you have been through the installation process on a single node first. This deployment method is *NOT* recommended on a first try.
It will be easier to understand what follows if you have some experience with `docker` and `frp`.
## Goal
The goal of this advanced deployment is to run CTFd and the challenge containers on separate machines for a better experience.
Overall, `ctfd-whale` can be decomposed into three components: `CTFd`, the challenge containers along with frpc, and frps itself. The three components can be deployed separately or together to satisfy different needs.
For example, if you're in a school or an organization that has a number of high-performance dedicated servers *BUT* no public IP for public access, you can refer to this tutorial.
Here are some options:
* deploy frps on a server with public access
* deploy challenge containers on a separate server by joining it to the swarm you created earlier
* deploy challenge containers on *rootless* docker
* deploy challenge containers on a remote server with public access, *securely*
You can achieve the first option with little effort by deploying frps on that server and configuring frpc with a different `server_addr`.
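For illustration, the only frpc fields that change in that case might look like this (the address and token are placeholders, not values from this repo):

```ini
[common]
# frps now lives on the publicly reachable server
server_addr = frps.example.com
server_port = 7000
token = your_token
```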
In a swarm with multiple nodes, you can configure CTFd to start challenge containers randomly on the nodes you specify. Just make sure the node `whale` controls is a `Leader`. This is not covered in this guide; you'll find it rather simple, even with zero experience of docker swarm.
The [Docker docs](https://docs.docker.com/engine/security/rootless/) have a detailed introduction to setting up rootless docker, so that is not covered in this guide either.
The following paragraphs walk through the last option.
## Architecture
In this tutorial, we have two separate machines, which we'll call the `web` and `target` servers. We will deploy CTFd on `web` and the challenge containers (along with frp) on `target`.
The picture below gives an overview.
![architecture](imgs/arch.png)
---
### Operate on `target` server
> root user is NOT recommended
> if you want to expose your docker deployment, you might also want to use [rootless docker](https://docs.docker.com/engine/security/rootless/)
Please read the [Docker docs](https://docs.docker.com/engine/security/protect-access/#use-tls-https-to-protect-the-docker-daemon-socket) thoroughly before continuing.
Setup docker swarm and clone this repo as described in [installation](./install.md), then follow the steps described in the Docker docs to sign your certificates.
> protect your certificates carefully
> one can take over the user running `dockerd` effortlessly with them
> and in most cases, the user is, unfortunately, root.
You can now create a network for your challenges by executing
```bash
docker network create --driver overlay --attachable challenges
```
Then set up frp on this machine. You might want to set up frps first:
```bash
# change to the version you prefer
wget https://github.com/fatedier/frp/releases/download/v0.37.0/frp_0.37.0_linux_amd64.tar.gz
tar xzvf frp_0.37.0_linux_amd64.tar.gz
cd frp_0.37.0_linux_amd64
mkdir /etc/frp
configure_frps frps.ini # refer to [installation](./install.md)
cp systemd/frps.service /etc/systemd/system
systemctl daemon-reload
systemctl enable frps
systemctl start frps
```
Then frpc. frpc should run on the same network as the challenge containers, so make sure you connect it to the network you just created.
```bash
docker run -d --restart=always --network challenges -p 7400:7400 frankli0324/frp:frpc \
    --server_addr=host_ip \
    --server_port=7000 \
    --admin_addr=0.0.0.0 \
    --admin_port=7400 \
    --admin_user=username \
    --admin_pwd=password \
    --token=your_token
```
You could use `docker-compose` for a better experience.
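For reference, a `docker-compose.yml` sketch equivalent to the `docker run` command above (same placeholder values; it assumes the `challenges` network already exists):

```yml
version: '3'
services:
  frpc:
    image: frankli0324/frp:frpc
    restart: always
    command:
      - --server_addr=host_ip
      - --server_port=7000
      - --admin_addr=0.0.0.0
      - --admin_port=7400
      - --admin_user=username
      - --admin_pwd=password
      - --token=your_token
    ports:
      - 7400:7400
networks:
  default:
    external:
      name: challenges
```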
Here are some pitfalls or problems you might run into:
#### working with `systemd`
Copy the systemd service file into `/etc/systemd/system` to prevent it from being overwritten by future docker updates:
```bash
cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
```
Locate `ExecStart` in the file and change it into something like this:
```systemd
ExecStart=/usr/bin/dockerd \
    --tlsverify \
    --tlscacert=/etc/docker/certs/ca.pem \
    --tlscert=/etc/docker/certs/server-cert.pem \
    --tlskey=/etc/docker/certs/server-key.pem \
    -H tcp://0.0.0.0:2376 \
    -H unix:///var/run/docker.sock
```
Remember to reload `systemd` before restarting `docker.service`:
```bash
systemctl daemon-reload
systemctl restart docker
```
#### cloud service providers
Most cloud providers ship a basic virus scanner in their system images; for example, AliCloud images come with `YunDun`. You might want to disable it: challenge containers often contain intended backdoors and are accessed in ways cloud providers dislike (the traffic looks like attacks, because it is).
#### certificate security
Please follow best practices when signing your certificates. If you get into the habit of signing both the client and server certificates on a single machine, you may run into trouble later.
If that is inconvenient, at least sign them on your personal computer, and transfer only the needed files to the client/server.
#### challenge networks and frpc
You could create an internal network for the challenges, but then you have to connect frpc to a second network *with* internet access in order to map the ports so that CTFd can reach the admin interface. Also make sure frps is reachable from frpc.
### Operate on `web` server
Map your client certificates into the CTFd container. You might want to use `docker secrets`. Remember where the files are *inside the container*; if you use `docker secrets`, the directory is `/run/secrets`.
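A sketch of the `docker secrets` route (the file names are assumptions; with compose, file-based secrets end up under `/run/secrets` in the container):

```yml
services:
  ctfd:
    ...
    secrets:
      - ca.pem
      - cert.pem
      - key.pem
secrets:
  ca.pem:
    file: ./docker-certs/ca.pem
  cert.pem:
    file: ./docker-certs/cert.pem
  key.pem:
    file: ./docker-certs/key.pem
```

The whale settings would then point at `/run/secrets/ca.pem` and friends.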
You may also delete everything related to `frp`, such as `frp_network`, since we are no longer running challenge containers on the `web` server. But if you only have one public IP (on the `web` server), you can leave the `frps` service running.
Then recreate your containers:
```bash
docker-compose down # needed for removing unwanted networks
docker-compose up -d
```
Now you can configure CTFd accordingly.
Sample configurations:
![whale-config1](imgs/whale-config1.png)
![whale-config2](imgs/whale-config2.png)
![whale-config3](imgs/whale-config3.png)
Refer to [installation](./install.md) for explanations of these options.
---
Now you can add a challenge to test it out.

docs/advanced.zh-cn.md (new file)
@@ -0,0 +1,268 @@
# Advanced deployment
## Prerequisites
Make sure you have deployed on a single machine before; do not attempt this distributed architecture on your first try.
This document is intended for readers with some experience deploying and operating Docker.
Install the ctfd-whale plugin before carrying out the steps below.
## Goal
Separate the challenge (target) machines from the CTFd web server; CTFd controls the remote docker daemon through a TLS-secured API.
## Architecture
Two VPSes:
- one running the CTFd website, called `web`; a public IP is required
- one spawning challenge containers for players, called `target`. The server used in this document has a public IP, but if yours does not, you can also forward traffic through `frps` on the `web` server
The architecture of this deployment is shown below:
![architecture](imgs/arch.png)
---
## Securing the Docker API
Reference: [Docker docs](https://docs.docker.com/engine/security/protect-access/#use-tls-https-to-protect-the-docker-daemon-socket)
### Configure the `target` server
Switching to the `root` user is recommended for these steps.
### Clone this repository
```bash
$ git clone https://github.com/frankli0324/ctfd-whale
```
### Enable docker swarm
```bash
$ docker swarm init
$ docker node update --label-add "name=linux-target-1" $(docker node ls -q)
```
Remember the `name`; it will be needed later.
Create a directory for the certificates:
```bash
$ mkdir /etc/docker/certs && cd /etc/docker/certs
```
Generate the CA key; you will be asked to enter a passphrase twice:
```bash
$ openssl genrsa -aes256 -out ca-key.pem 4096
```
Use OpenSSL to create the CA certificate:
```bash
$ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
```
Generate the server key and certificate request. If your target server has no public IP, an internal IP should work in theory, as long as the web server can reach it:
```bash
$ openssl genrsa -out server-key.pem 4096
$ openssl req -subj "/CN=<target_ip>" -sha256 -new -key server-key.pem -out server.csr
```
Whitelist the IPs the certificate will be valid for:
```bash
$ echo subjectAltName = IP:0.0.0.0,IP:127.0.0.1 >> extfile.cnf
```
Restrict the Docker daemon key to server authentication only:
```bash
$ echo extendedKeyUsage = serverAuth >> extfile.cnf
```
Generate the signed server certificate; this asks for the passphrase you set earlier:
```bash
$ openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -extfile extfile.cnf
```
Generate the `key.pem` used by the client (the web server):
```bash
$ openssl genrsa -out key.pem 4096
```
Generate `client.csr`; the IP here is the same one used for the server certificate:
```bash
$ openssl req -subj "/CN=<target_ip>" -new -key key.pem -out client.csr
```
Create an extensions config file that restricts the key to client authentication:
```bash
$ echo extendedKeyUsage = clientAuth > extfile-client.cnf
```
Generate `cert.pem`:
```bash
$ openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -extfile extfile-client.cnf
```
Delete the certificate signing requests and extension config files; they are no longer needed:
```bash
$ rm -v client.csr server.csr extfile.cnf extfile-client.cnf
```
To keep the private keys from being modified or read by other users, make them readable only by their owner:
```bash
$ chmod -v 0400 ca-key.pem key.pem server-key.pem
```
To keep the certificates from being modified, make them read-only:
```bash
$ chmod -v 0444 ca.pem server-cert.pem cert.pem
```
Pack the files the client needs (only these three; do not ship the CA or server private keys):
```bash
$ tar cf certs.tar ca.pem cert.pem key.pem
```
Now configure Docker so the daemon only accepts connections from clients presenting a certificate trusted by our CA.
Copy the packaged unit file into `/etc` so that it will not be overwritten by docker upgrades:
```bash
$ cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
```
Change line `13`,
```
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
```
to the following:
```
ExecStart=/usr/bin/dockerd --tlsverify \
    --tlscacert=/etc/docker/certs/ca.pem \
    --tlscert=/etc/docker/certs/server-cert.pem \
    --tlskey=/etc/docker/certs/server-key.pem \
    -H tcp://0.0.0.0:2376 \
    -H unix:///var/run/docker.sock
```
Reload the daemon configuration and restart docker:
```bash
$ systemctl daemon-reload
$ systemctl restart docker
```
**Keep the generated keys safe: anyone holding them effectively has root on the target server.**
---
### Configure the `web` server
Operate as the `root` user:
```bash
$ cd CTFd
$ mkdir docker-certs
```
Copy the `certs.tar` packed earlier onto this server,
then extract it:
```bash
$ tar xf certs.tar
```
Open the `docker-compose.yml` of the `CTFd` project and add one entry under the `volumes` of the `CTFd` service:
```
- ./docker-certs:/etc/docker/certs:ro
```
While you are at it, delete **all** configuration related to `frp`, such as `frp_network`.
Then run `docker-compose up -d`.
Open the `CTFd-whale` settings page and configure docker as follows:
![whale-config1](imgs/whale-config1.png)
Notes:
- `API URL` must be written in the form `https://<target_ip>:<port>`
- `Swarm Nodes` is the label `name` added when initializing `docker swarm`
- the three paths (`SSL CA Certificates` etc.) are paths inside the CTFd container; do not confuse them with paths on the host. If you modified CTFd's `docker-compose.yml` as in the previous step, you can fill them in exactly as shown
For single-container challenges, the network in `Auto Connect Network` is named `<folder_name>_<network_name>`; if nothing was changed, it defaults to `whale-target_frp_containers`.
![whale-config2](imgs/whale-config2.png)
*Multi-container challenge configuration, untested*
---
## Configuring frp
### Add a wildcard DNS record for HTTP-mode access
It can look like this:
```
*.example.com
*.sub.example.com (used as the example below)
```
### Configure on the `target` server
Enter the `whale-target` directory:
```bash
$ cd ctfd-whale/whale-target
```
Create the `frp` configuration files:
```bash
$ cp frp/frps.ini.example frp/frps.ini
$ cp frp/frpc.ini.example frp/frpc.ini
```
Open `frp/frps.ini`:
- change the `token` field; this token authenticates the frpc-frps connection
  - since frps and frpc run on the same server here, you can also leave it unchanged
  - if your target server sits in an internal network, you can run `frps` on the `web` server instead; in that case use a longer token, e.g. [generate a random UUID](https://www.uuidgenerator.net/)
- make sure `vhost_http_port` equals the port mapped for `frps` in [docker-compose.yml](/whale-target/docker-compose.yml)
- `subdomain_host` is the domain with the wildcard record; if the record is `*.sub.example.com`, fill in `sub.example.com`
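Put together, a `frp/frps.ini` matching the bullets above could look like this (all values are placeholders):

```ini
[common]
bind_port = 7000
token = your_token                 # or a random UUID if frps is publicly exposed
vhost_http_port = 8001             # must match the frps port mapping in docker-compose.yml
subdomain_host = sub.example.com   # wildcard record: *.sub.example.com
```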
#### Open `frp/frpc.ini`
- set the `token` field to the same value as in `frps.ini`
- set the `admin_user` and `admin_pwd` fields, used for `frpc`'s basic auth
---
### Configure on the `web` server
Open whale's settings page and fill in the parameters as follows:
![frp settings page](imgs/whale-config3.png)
On this page,
- `API URL` must be set in the form `http://user:password@ip:port`
- `Http Domain Suffix` must match `subdomain_host` in `frps.ini`
- `HTTP Port` must match `vhost_http_port` in `frps.ini`
- `Direct Minimum Port` and `Direct Maximum Port` must match the port range in `whale-target/docker-compose.yml`
- once the API settings are accepted, whale automatically fetches the contents of `frpc.ini` as its template
---
At this point the separated whale deployment should work; find a challenge to test it. Note that challenges of the `docker_dynamic` type apparently cannot be deleted, so be careful not to let another admin make the test challenge public.
You can inspect the logs for debugging with
```bash
$ docker-compose logs
```
(press Ctrl-C to exit)

docs/imgs/arch.png (new binary file, 78 KiB)

docs/imgs/whale-config1.png (new binary file, 109 KiB)

docs/imgs/whale-config2.png (new binary file, 46 KiB)

docs/imgs/whale-config3.png (new binary file, 67 KiB)

docs/install.md (new file)
@@ -0,0 +1,304 @@
# Installation & Usage Guide
## TLDR
If you have never deployed a CTFd instance before:
```sh
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
docker swarm init
docker node update --label-add='name=linux-1' $(docker node ls -q)
git clone https://github.com/CTFd/CTFd --depth=1
git clone https://github.com/frankli0324/ctfd-whale CTFd/CTFd/plugins/ctfd-whale --depth=1
curl -fsSL https://cdn.jsdelivr.net/gh/frankli0324/ctfd-whale/docker-compose.example.yml -o CTFd/docker-compose.yml
# make sure you have pip3 installed on your rig
pip3 install docker-compose
docker-compose -f CTFd/docker-compose.yml up -d
# wait till the containers are ready
docker-compose -f CTFd/docker-compose.yml exec ctfd python manage.py set_config whale:auto_connect_network
```
The commands above try to install `docker-ce`, `python3-pip` and `docker-compose`. Make sure the following requirements are satisfied before you execute them:
* have `curl`, `git`, `python3` and `pip` installed
* GitHub is reachable
* Docker Registry is reachable
## Installation
### Start from scratch
First of all, initialize a docker swarm and label the nodes.
Names of nodes running linux/windows should begin with `linux-*`/`windows-*` respectively:
```bash
docker swarm init
docker node update --label-add "name=linux-1" $(docker node ls -q)
```
Taking advantage of the orchestration abilities of `docker swarm`, `ctfd-whale` can distribute challenge containers across different nodes (machines). Each time a user requests a challenge container, `ctfd-whale` randomly picks a suitable node to run it on.
After initializing the swarm, make sure that CTFd runs as expected on your PC/server.
Note that the compose file included with CTFd 2.5.0+ starts an nginx container by default, which takes the http/80 port. Make sure there are no conflicts.
```bash
git clone https://github.com/CTFd/CTFd --depth=1
cd CTFd # the cwd will not change throughout this guide from this line on
```
Change the first line of `docker-compose.yml` so that networks can use the `attachable` property:
`version '2'` -> `version '3'`
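This edit can be scripted with `sed`; a sketch on a stand-in file (note the actual first line of the compose file is `version: '2'`, with a colon):

```shell
# sketch: bump the compose file format so networks can be `attachable`
# demonstrated on a stand-in file; point it at CTFd's docker-compose.yml instead
printf "version: '2'\nservices: {}\n" > compose-demo.yml
sed -i "s/^version: '2'/version: '3'/" compose-demo.yml
head -n1 compose-demo.yml
```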
```bash
docker-compose up -d
```
Take a look at <http://localhost> (or port 8000) and set up CTFd.
### Configure frps
frps can be started by docker-compose along with CTFd.
Define a network for communication between frpc and frps, and create an frps service block:
```yml
services:
  ...
  frps:
    image: glzjin/frp
    restart: always
    volumes:
      - ./conf/frp:/conf
    entrypoint:
      - /usr/local/bin/frps
      - -c
      - /conf/frps.ini
    ports:
      - 10000-10100:10000-10100 # for "direct" challenges
      - 8001:8001 # for "http" challenges
    networks:
      default: # frps ports should be mapped to the host
      frp_connect:
networks:
  ...
  frp_connect:
    driver: overlay
    internal: true
    ipam:
      config:
        - subnet: 172.1.0.0/16
```
Create a folder in `conf/` called `frp`
```bash
mkdir ./conf/frp
```
then create a configuration file for frps `./conf/frp/frps.ini`, and fill it with:
```ini
[common]
# following ports must not overlap with "direct" port range defined in the compose file
bind_port = 7987 # port for frpc to connect to
vhost_http_port = 8001 # port for mapping http challenges
token = your_token
subdomain_host = node3.buuoj.cn
# hostname that's mapped to frps by some reverse proxy (or IS frps itself)
```
### Configure frpc
Likewise, create a network and a service for frpc
the network allows challenges to be accessed by frpc
```yml
services:
  ...
  frpc:
    image: glzjin/frp:latest
    restart: always
    volumes:
      - ./conf/frp:/conf/
    entrypoint:
      - /usr/local/bin/frpc
      - -c
      - /conf/frpc.ini
    depends_on:
      - frps # frps must start first
    networks:
      frp_containers:
      frp_connect:
        ipv4_address: 172.1.0.3
networks:
  ...
  frp_containers: # challenge containers are attached to this network
    driver: overlay
    internal: true # remove this line if challenge containers may access the internet
    attachable: true
    ipam:
      config:
        - subnet: 172.2.0.0/16
```
Likewise, create an frpc config file `./conf/frp/frpc.ini`
```ini
[common]
token = your_token
server_addr = frps
server_port = 7987 # == frps.bind_port
admin_addr = 172.1.0.3 # refer to "Security"
admin_port = 7400
```
### Verify frp configurations
Update the compose stack with `docker-compose up -d`.
Executing `docker-compose logs frpc` should show that frpc produced the following logs:
```log
[service.go:224] login to server success, get run id [******], server udp port [******]
[service.go:109] admin server listen on ******
```
Seeing this confirms that frpc and frps are set up correctly.
Note: the folder layout used in this guide:
```
CTFd/
  conf/
    nginx/ # included in CTFd 2.5.0+
    frp/
      frpc.ini
      frps.ini
  serve.py <- this is just an anchor
```
### Configure CTFd
After finishing everything above:
* map docker socket into CTFd container
* Attach CTFd container to frp_connect
```yml
services:
  ctfd:
    ...
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - frpc # frpc must start first
    networks:
      ...
      frp_connect:
```
and then clone Whale into CTFd plugins directory (yes, finally)
```bash
git clone https://github.com/frankli0324/CTFd-Whale CTFd/plugins/ctfd-whale --depth=1
docker-compose build # for pip to find requirements.txt
docker-compose up -d
```
go to the Whale Configuration page (`/plugins/ctfd-whale/admin/settings`)
#### Docker related configs
`Auto Connect Network`, if you strictly followed this guide, should be `ctfd_frp_containers`.
If you're not sure, this command lists all networks in the current stack:
```bash
docker network ls -f "label=com.docker.compose.project=ctfd" --format "{{.Name}}"
```
#### frp related configs
* `HTTP Domain Suffix` should be consistent with `subdomain_host` in frps
* `HTTP Port` with `vhost_http_port` in frps
* `Direct IP Address` should be a hostname/ip address that can be used to access frps
* `Direct Minimum Port` and `Direct Maximum Port`, you know what to do
* as long as `API URL` is filled in correctly, Whale will read the config of the connected frpc into `Frpc config template`
* setting `Frpc config template` will override contents in `frpc.ini`
Whale should be more or less usable at this point.
### Configure nginx
If you are using CTFd 2.5.0+, you can use the bundled nginx.
Remove the port mapping rule for the frps vhost http port (8001) from the compose file.
If you want to go deeper:
* add nginx to the `default` and `internal` networks
* remove CTFd from `default` and remove the mapped port 8000
Add the following server block to `./conf/nginx/nginx.conf`:
```conf
server {
    listen 80;
    server_name *.node3.buuoj.cn;
    location / {
        proxy_pass http://frps:8001;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
```
## Challenge Deployment
### Standalone Containers
Take a look at <https://github.com/CTFTraining>
In short, a `FLAG` variable is passed into the container when it starts. You should write your own startup script (usually with bash and sed) to:
* replace your flag with the generated flag
* remove or override the `FLAG` variable
PLEASE create challenge images with care.
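As an illustration only (whale guarantees nothing beyond the `FLAG` environment variable; the file path below is made up), an entrypoint might look like:

```shell
#!/bin/sh
# entrypoint sketch: burn the generated flag in, then scrub the environment
# FLAG_FILE is a hypothetical target; many challenges instead sed the flag
# into a source file, e.g. sed -i "s/FLAG_PLACEHOLDER/$FLAG/" /app/index.php
FLAG_FILE=${FLAG_FILE:-/tmp/flag}
echo "$FLAG" > "$FLAG_FILE"
unset FLAG        # the service below must not see the variable
exec "$@"         # hand over to the real challenge service
```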
### Grouped Containers
"name" the challenge image with a json object, for example:
```json
{
  "hostname": "image"
}
```
Whale preserves the order of the keys in the json object and takes the first image as the "main container" of a challenge. The "main container" is mapped to frp with the same rules as standalone containers.
see how grouped containers are created in the [code](utils/docker.py#L58)
## Security
* Please do not allow untrusted people to access the admin account. Theoretically there's an SSTI vulnerability in the config page.
* Do not set the `bind_addr` of frpc to `0.0.0.0` if you are following this guide. This may enable contestants to override the frpc configuration.
* If you are annoyed by the complicated configuration and just want to set `bind_addr = 0.0.0.0`, remember to enable the basic auth included in frpc, and set the API URL accordingly, for example `http://username:password@frpc:7400`
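For that last setup, the relevant `frpc.ini` fragment might look like this (credentials are placeholders):

```ini
[common]
admin_addr = 0.0.0.0   # reachable from challenge containers, hence the auth below
admin_port = 7400
admin_user = username
admin_pwd = a_long_random_password
```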
## Advanced Deployment
To separate the target server (which launches the challenge instances) from the CTFd web server using a TLS-secured docker API, refer to [this document](advanced.md)

docs/install.zh-cn.md (new file)
@@ -0,0 +1,313 @@
# Usage Guide
## TLDR
If you have never deployed CTFd before, you can run:
```sh
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh --mirror Aliyun
docker swarm init
docker node update --label-add='name=linux-1' $(docker node ls -q)
git clone https://github.com/CTFd/CTFd --depth=1
git clone https://github.com/frankli0324/ctfd-whale CTFd/CTFd/plugins/ctfd-whale --depth=1
curl -fsSL https://cdn.jsdelivr.net/gh/frankli0324/ctfd-whale/docker-compose.example.yml -o CTFd/docker-compose.yml
# make sure you have pip3 installed on your rig
pip3 install docker-compose
docker-compose -f CTFd/docker-compose.yml up -d
docker-compose -f CTFd/docker-compose.yml exec ctfd python manage.py set_config whale:auto_connect_network
```
These commands install the ***docker.com build*** of `docker-ce`, `python3-pip` and `docker-compose` on a Linux machine. Before executing them, make sure that:
* curl, git, python3 and pip are installed
* your network can clone repositories from GitHub
* your network can pull images from the Docker Registry
## Manual installation
To better understand what each component of ctfd-whale does and to make full use of it, you are advised to build a real instance manually, starting from a blank CTFd. The rest of this document walks you through that process.
### Start from scratch
First, initialize a swarm cluster and label the nodes.
Names of linux nodes should begin with `linux-`, and windows nodes with `windows-`:
```bash
docker swarm init
docker node update --label-add "name=linux-1" $(docker node ls -q)
```
`ctfd-whale` uses the cluster-management abilities of `docker swarm` to distribute challenge containers across different nodes. Each time a player requests a challenge container, `ctfd-whale` randomly selects a suitable node to run it on.
Next, make sure CTFd runs correctly.
Note that the `docker-compose.yml` of CTFd 2.5.0+ includes an `nginx` reverse proxy that occupies port 80.
```bash
git clone https://github.com/CTFd/CTFd --depth=1
cd CTFd # note: the cwd of everything below is this directory
```
First modify the first line of `docker-compose.yml` to support the `attachable` property:
`version '2'` -> `version '3'`
Then:
```bash
docker-compose up -d
```
Visit <http://localhost> (or port 8000) and complete the initial CTFd configuration.
### Configure frps
frps can be started directly by docker-compose along with CTFd.
First add a network under networks for frpc-frps communication, and add the frps service:
```yml
services:
  ...
  frps:
    image: glzjin/frp
    restart: always
    volumes:
      - ./conf/frp:/conf
    entrypoint:
      - /usr/local/bin/frps
      - -c
      - /conf/frps.ini
    ports:
      - 10000-10100:10000-10100 # ports for "direct" challenges
      - 8001:8001 # port for "http" challenges
    networks:
      default: # frps must be exposed publicly so challenge containers are reachable
      frp_connect:
networks:
  ...
  frp_connect:
    driver: overlay
    internal: true
    ipam:
      config:
        - subnet: 172.1.0.0/16
```
First create the directory `./conf/frp`:
```bash
mkdir ./conf/frp
```
Then create `./conf/frp/frps.ini` and fill in:
```ini
[common]
# the two ports below must not overlap with the "direct" challenge port range
bind_port = 7987 # port frpc uses to connect to frps
vhost_http_port = 8001 # port frps uses to expose http challenges
token = your_token
subdomain_host = node3.buuoj.cn # hostname for accessing http challenge containers
```
### Configure frpc
Likewise, add another network under networks for communication between frpc and the challenge containers, and add the frpc service:
```yml
services:
  ...
  frpc:
    image: glzjin/frp:latest
    restart: always
    volumes:
      - ./conf/frp:/conf/
    entrypoint:
      - /usr/local/bin/frpc
      - -c
      - /conf/frpc.ini
    depends_on:
      - frps # frps must start first
    networks:
      frp_containers: # lets frpc reach the challenge containers
      frp_connect: # lets frpc reach frps, and CTFd reach frpc
        ipv4_address: 172.1.0.3
networks:
  ...
  frp_containers:
    driver: overlay
    internal: true # remove this line if challenge containers may access the internet
    attachable: true
    ipam:
      config:
        - subnet: 172.2.0.0/16
```
Likewise, we need to create `./conf/frp/frpc.ini`:
```ini
[common]
token = your_token
server_addr = frps
server_port = 7987 # matches bind_port in frps
admin_addr = 172.1.0.3 # see "Security"
admin_port = 7400
```
### Verify the frp configuration
Now run `docker-compose up -d` to update the compose configuration.
The logs shown by `docker-compose logs frpc` should contain the following lines from frpc:
```log
[service.go:224] login to server success, get run id [******], server udp port [******]
[service.go:109] admin server listen on ******
```
This means frpc and frps are both configured correctly.
Note: the directory layout in this example is:
```
CTFd/
  conf/
    nginx/ # bundled with CTFd 2.5.0+
    frp/
      frpc.ini
      frps.ini
  serve.py
```
### Configure CTFd
With the above done, map the host's docker socket into the CTFd container,
and add CTFd to the network frpc is on (note: `frp_connect`, not the `containers` network):
```yml
services:
  ctfd:
    ...
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - frpc # frpc must start first
    networks:
      ...
      frp_connect:
```
Clone CTFd-Whale into CTFd's plugin directory:
```bash
git clone https://github.com/frankli0324/CTFd-Whale CTFd/plugins/ctfd-whale --depth=1
docker-compose build # required to install the plugin's dependencies
docker-compose up -d
```
Go to Whale's settings page (`/plugins/ctfd-whale/admin/settings`) and configure the docker settings first.
Pay attention to `Auto Connect Network`; if you followed the steps above, it should be `ctfd_frp_containers`.
If unsure, the command below lists all networks created by compose in the CTFd directory:
```bash
docker network ls -f "label=com.docker.compose.project=ctfd" --format "{{.Name}}"
```
Then check the frp settings:
* `HTTP Domain Suffix` must match `subdomain_host` of frps
* `HTTP Port` must match `vhost_http_port` of frps
* `Direct IP Address` is an IP that can reach the frps ports (10000-10100 in this example)
* `Direct Minimum Port` and `Direct Maximum Port` are self-explanatory
* as long as `API URL` is filled in correctly, Whale automatically fetches frpc's configuration file as the `Frpc config template`
* setting the `Frpc config template` overrides the original `frpc.ini` file
At this point, CTFd-Whale should be more or less ready for use.
### Configure nginx
If you are using CTFd 2.5.0+, you can reverse-proxy http challenges directly with the bundled nginx.
First remove the mapping of the frps http port (8001) from docker-compose.yml.
If you want to take it all the way, you can also
* add both the internal and default networks to nginx
* remove CTFd's default network and its ports entry
Add the following server block into the http block of `./conf/nginx/nginx.conf`:
```conf
server {
    listen 80;
    server_name *.node3.buuoj.cn;
    location / {
        proxy_pass http://frps:8001;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
```
## Deploying challenges
### Single-container challenges
Refer to the images at <https://github.com/CTFTraining> when building challenge images and writing Dockerfiles. In short, when a challenge starts, an environment variable named `FLAG` is passed into the **container**. You need to write a startup script (usually bash plus sed) that writes the flag into the challenge itself and then removes this environment variable.
Challenge authors should keep the concepts of containers and images straight when building images. This helps both yourselves and the people deploying the challenges.
### Multi-container challenges
Fill in a json object as the challenge image name to create a multi-container challenge:
```json
{
  "hostname": "image"
}
```
Whale preserves the key order of the json object and maps the first container, the "main container", to the internet in the same way as single-container challenges.
Taking swpu2019 web2 on buuoj as an example, it can be configured as:
```json
{
  "ss": "shadowsocks-chall",
  "web": "swpu2019-web2",
  ...
}
```
where the Dockerfile of shadowsocks-chall is:
```dockerfile
FROM shadowsocks/shadowsocks-libev
ENV PASSWORD=123456
ENV METHOD=aes-256-cfb
```
> Since the author of this README is not a buuoj admin, the above is for illustration only and may differ considerably from the real setup
## Security
* The flag and domain templates in the admin panel theoretically allow SSTI (a feature); do not give the admin account to untrusted third parties
* Since frpc in this example has no authentication enabled, do not set frpc's `bind_addr` to `0.0.0.0`; otherwise any challenge that can make http requests could be used to modify the frpc configuration
* If, to keep the configuration simple, challenge containers can reach frpc, enable frpc's basic auth and set the frpc API URL in the form `http://username:password@frpc:7400`
## Advanced deployment
To separate the server that spawns challenge instances from the server running the `CTFd` website, with containers controlled through a `Docker API` protected by `TLS/SSL` verification,
see [advanced.zh-cn.md](advanced.zh-cn.md)