Fork project

2025-09-19 15:59:08 +08:00
commit 2f921b6209
52 changed files with 4012 additions and 0 deletions

3
.gitignore vendored Normal file

@@ -0,0 +1,3 @@
__pycache__/
*.py[cod]
.DS_Store

16
.idea/ctfd-whale.iml generated Normal file

@@ -0,0 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="PYTHON_MODULE" version="4">
<component name="NewModuleRootManager">
<content url="file://$MODULE_DIR$" />
<orderEntry type="inheritedJdk" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
<component name="TemplatesService">
<option name="TEMPLATE_CONFIGURATION" value="Jinja2" />
<option name="TEMPLATE_FOLDERS">
<list>
<option value="$MODULE_DIR$/templates" />
</list>
</option>
</component>
</module>

91
CHANGELOG.md Normal file

@@ -0,0 +1,91 @@
# Changelog
## 2020-03-18
- Allow non-dynamic flags.
## 2020-02-18
- Refine frontend for newer CTFd versions. (@frankli0324)
## 2019-11-21
- Add network prefix & timeout settings.
- Refine port and network range search.
- Refine frp requests.
- Refine lock timeout.
## 2019-11-08
- Add LAN domain.
## 2019-11-04
- Change backend to Docker Swarm.
- Support deploying OS-specific images to nodes running the matching OS.
You should initialize a docker swarm, add your nodes to it, and name them with the following commands:
```
docker node update --label-add name=windows-1 ****
docker node update --label-add name=linux-1 ****
```
Node names should begin with `windows-` or `linux-`; enter them in the settings panel.
To deploy an instance to a Windows node, tag the image with a `windows` suffix, e.g. `glzjin/super_sql:windows`.
Also, change the container network driver to `overlay`!
## 2019-10-30
- Optimize for multiple workers.
- Try to fix concurrent request problems.
You should now point the plugin at redis via the `REDIS_HOST` environment variable.
## 2019-09-26
- Add frp HTTP port setting.
Configure it in the settings panel for HTTP redirection.
## 2019-09-15
- Add container network setting and DNS setting.
You can now set up a DNS server inside your container network.
- For a single-instance network, just connect your DNS server to it and enter its IP address in the settings panel.
- For a multi-instance network, rename the DNS server container so its name includes "dns", then add it to the auto-connect instances. It will be used as the DNS server.
## 2019-09-14
- Refine plugin path.
## 2019-09-13
- Refine removal.
## 2019-08-29
- Add CPU usage limit.
- Allow multi-image challenges.
Upgrade:
1. Execute this SQL in the CTFd database:
```
alter table dynamic_docker_challenge add column cpu_limit float default 0.5 after memory_limit;
```
2. Set the containers you want to plug in to a single multi-image network (in the settings panel).
3. When creating a challenge, you can specify the docker images like this:
```
{"socks": "serjs/go-socks5-proxy", "web": "blog_revenge_blog", "mysql": "blog_revenge_mysql", "oauth": "blog_revenge_oauth"}
```
Traffic will be redirected to the first image.

21
LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2019 glzjin
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

40
README.md Normal file

@@ -0,0 +1,40 @@
# CTFd-Whale
## [中文README](README.zh-cn.md)
A plugin that empowers CTFd to bring up separate environments for each user
## Features
- Deploys containers with `frp` and `docker swarm`
- Supports subdomain access by utilizing `frp`
- Contestants can start/renew/destroy their environments with a single click
- Flags and subdomains are generated automatically, with configurable rules
- Administrators can get a full list of running containers, with full control over them.
## Installation & Usage
Refer to the [installation guide](docs/install.md)
## Demo
[BUUCTF](https://buuoj.cn)
## Third-party Introductions (zh-CN)
- [CTFd-Whale 推荐部署实践](https://www.zhaoj.in/read-6333.html)
- [手把手教你如何建立一个支持ctf动态独立靶机的靶场ctfd+ctfd-whale)](https://blog.csdn.net/fjh1997/article/details/100850756)
## Screenshots
![](https://user-images.githubusercontent.com/20221896/105939593-7cca6f80-6094-11eb-92de-8a04554dc019.png)
![image](https://user-images.githubusercontent.com/20221896/105940182-a637cb00-6095-11eb-9525-8291986520c1.png)
![](https://user-images.githubusercontent.com/20221896/105939965-2e69a080-6095-11eb-9b31-7777a0cc41b9.png)
![](https://user-images.githubusercontent.com/20221896/105940026-50632300-6095-11eb-8512-6f19dd12c776.png)
## Twin Project
- [CTFd-Owl](https://github.com/D0g3-Lab/H1ve/tree/master/CTFd/plugins/ctfd-owl) (supports compose-based deployment)

39
README.zh-cn.md Normal file

@@ -0,0 +1,39 @@
# CTFd-Whale
A CTFd plugin that supports containerized challenge deployment
## Features
- Multi-container deployment built on `frp` and `docker swarm`
- Per-user subdomain access for web challenges via frp's subdomain feature
- Contestants can launch a challenge environment with one click, and renew running containers
- Random flags are generated automatically and passed into containers via environment variables
- Administrators can view launched containers in the admin panel
- Customizable flag generation and web-challenge subdomain generation
## Usage
Refer to the [installation guide](docs/install.zh-cn.md)
## Demo
[BUUCTF](https://buuoj.cn)
## Third-party Introductions (zh-CN)
- [CTFd-Whale 推荐部署实践](https://www.zhaoj.in/read-6333.html)
- [手把手教你如何建立一个支持ctf动态独立靶机的靶场ctfd+ctfd-whale)](https://blog.csdn.net/fjh1997/article/details/100850756)
## Screenshots
![](https://user-images.githubusercontent.com/20221896/105939593-7cca6f80-6094-11eb-92de-8a04554dc019.png)
![image](https://user-images.githubusercontent.com/20221896/105940182-a637cb00-6095-11eb-9525-8291986520c1.png)
![](https://user-images.githubusercontent.com/20221896/105939965-2e69a080-6095-11eb-9b31-7777a0cc41b9.png)
![](https://user-images.githubusercontent.com/20221896/105940026-50632300-6095-11eb-8512-6f19dd12c776.png)
## Twin Project
- [CTFd-Owl](https://github.com/D0g3-Lab/H1ve/tree/master/CTFd/plugins/ctfd-owl) (supports compose-based deployment)

124
__init__.py Normal file

@@ -0,0 +1,124 @@
import fcntl
import warnings
import requests
from flask import Blueprint, render_template, session, current_app, request
from flask_apscheduler import APScheduler
from CTFd.api import CTFd_API_v1
from CTFd.plugins import (
register_plugin_assets_directory,
register_admin_plugin_menu_bar,
)
from CTFd.plugins.challenges import CHALLENGE_CLASSES
from CTFd.utils import get_config, set_config
from CTFd.utils.decorators import admins_only
from .api import user_namespace, admin_namespace, AdminContainers
from .challenge_type import DynamicValueDockerChallenge
from .utils.checks import WhaleChecks
from .utils.control import ControlUtil
from .utils.db import DBContainer
from .utils.docker import DockerUtils
from .utils.exceptions import WhaleWarning
from .utils.setup import setup_default_configs
from .utils.routers import Router
def load(app):
app.config['RESTX_ERROR_404_HELP'] = False
# upgrade()
plugin_name = __name__.split('.')[-1]
set_config('whale:plugin_name', plugin_name)
app.db.create_all()
if not get_config("whale:setup"):
setup_default_configs()
register_plugin_assets_directory(
app, base_path=f"/plugins/{plugin_name}/assets",
endpoint='plugins.ctfd-whale.assets'
)
register_admin_plugin_menu_bar(
title='Whale',
route='/plugins/ctfd-whale/admin/settings'
)
DynamicValueDockerChallenge.templates = {
"create": f"/plugins/{plugin_name}/assets/create.html",
"update": f"/plugins/{plugin_name}/assets/update.html",
"view": f"/plugins/{plugin_name}/assets/view.html",
}
DynamicValueDockerChallenge.scripts = {
"create": "/plugins/ctfd-whale/assets/create.js",
"update": "/plugins/ctfd-whale/assets/update.js",
"view": "/plugins/ctfd-whale/assets/view.js",
}
CHALLENGE_CLASSES["dynamic_docker"] = DynamicValueDockerChallenge
page_blueprint = Blueprint(
"ctfd-whale",
__name__,
template_folder="templates",
static_folder="assets",
url_prefix="/plugins/ctfd-whale"
)
CTFd_API_v1.add_namespace(admin_namespace, path="/plugins/ctfd-whale/admin")
CTFd_API_v1.add_namespace(user_namespace, path="/plugins/ctfd-whale")
worker_config_commit = None
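# tracks the last applied value of whale:refresh so Docker/frp are only re-initialized when the config actually changes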
@page_blueprint.route('/admin/settings')
@admins_only
def admin_list_configs():
nonlocal worker_config_commit
errors = WhaleChecks.perform()
if not errors and get_config("whale:refresh") != worker_config_commit:
worker_config_commit = get_config("whale:refresh")
DockerUtils.init()
Router.reset()
set_config("whale:refresh", "false")
return render_template('whale_config.html', errors=errors)
@page_blueprint.route("/admin/containers")
@admins_only
def admin_list_containers():
result = AdminContainers.get()
view_mode = request.args.get('mode', session.get('view_mode', 'list'))
session['view_mode'] = view_mode
return render_template("whale_containers.html",
plugin_name=plugin_name,
containers=result['data']['containers'],
pages=result['data']['pages'],
curr_page=abs(request.args.get("page", 1, type=int)),
curr_page_start=result['data']['page_start'])
def auto_clean_container():
with app.app_context():
results = DBContainer.get_all_expired_container()
for r in results:
ControlUtil.try_remove_container(r.user_id)
app.register_blueprint(page_blueprint)
try:
Router.check_availability()
DockerUtils.init()
except Exception:
warnings.warn("Initialization Failed. Please check your configs.", WhaleWarning)
try:
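# an exclusive, non-blocking file lock ensures that only one worker process starts the cleanup scheduler; the others raise IOError below and skip it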
lock_file = open("/tmp/ctfd_whale.lock", "w")
lock_fd = lock_file.fileno()
fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
scheduler = APScheduler()
scheduler.init_app(app)
scheduler.start()
scheduler.add_job(
id='whale-auto-clean', func=auto_clean_container,
trigger="interval", seconds=10
)
print("[CTFd Whale] Started successfully")
except IOError:
pass

138
api.py Normal file

@@ -0,0 +1,138 @@
from datetime import datetime
from flask import request
from flask_restx import Namespace, Resource, abort
from CTFd.utils import get_config
from CTFd.utils import user as current_user
from CTFd.utils.decorators import admins_only, authed_only
from .decorators import challenge_visible, frequency_limited
from .utils.control import ControlUtil
from .utils.db import DBContainer
from .utils.routers import Router
admin_namespace = Namespace("ctfd-whale-admin")
user_namespace = Namespace("ctfd-whale-user")
@admin_namespace.errorhandler
@user_namespace.errorhandler
def handle_default(err):
return {
'success': False,
'message': 'An unexpected error occurred'
}, 500
@admin_namespace.route('/container')
class AdminContainers(Resource):
@staticmethod
@admins_only
def get():
page = abs(request.args.get("page", 1, type=int))
results_per_page = abs(request.args.get("per_page", 20, type=int))
page_start = results_per_page * (page - 1)
page_end = results_per_page * (page - 1) + results_per_page
count = DBContainer.get_all_alive_container_count()
containers = DBContainer.get_all_alive_container_page(
page_start, page_end)
return {'success': True, 'data': {
'containers': containers,
'total': count,
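# 'pages' below is ceil(count / results_per_page), computed without importing math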
'pages': int(count / results_per_page) + (count % results_per_page > 0),
'page_start': page_start,
}}
@staticmethod
@admins_only
def patch():
user_id = request.args.get('user_id', -1)
result, message = ControlUtil.try_renew_container(user_id=int(user_id))
if not result:
abort(403, message, success=False)
return {'success': True, 'message': message}
@staticmethod
@admins_only
def delete():
user_id = request.args.get('user_id')
result, message = ControlUtil.try_remove_container(user_id)
return {'success': result, 'message': message}
@user_namespace.route("/container")
class UserContainers(Resource):
@staticmethod
@authed_only
@challenge_visible
def get():
user_id = current_user.get_current_user().id
challenge_id = request.args.get('challenge_id')
container = DBContainer.get_current_containers(user_id=user_id)
if not container:
return {'success': True, 'data': {}}
timeout = int(get_config("whale:docker_timeout", "3600"))
c = container.challenge # build a url for quick jump. todo: escape dash in categories and names.
link = f'<a target="_blank" href="/challenges#{c.category}-{c.name}-{c.id}">{c.name}</a>'
if int(container.challenge_id) != int(challenge_id):
return abort(403, f'Container already started but not from this challenge ({link})', success=False)
return {
'success': True,
'data': {
'lan_domain': str(user_id) + "-" + container.uuid,
'user_access': Router.access(container),
'remaining_time': timeout - (datetime.now() - container.start_time).seconds,
}
}
@staticmethod
@authed_only
@challenge_visible
@frequency_limited
def post():
user_id = current_user.get_current_user().id
ControlUtil.try_remove_container(user_id)
current_count = DBContainer.get_all_alive_container_count()
if int(get_config("whale:docker_max_container_count")) <= int(current_count):
abort(403, 'Max container count exceeded.', success=False)
challenge_id = request.args.get('challenge_id')
result, message = ControlUtil.try_add_container(
user_id=user_id,
challenge_id=challenge_id
)
if not result:
abort(403, message, success=False)
return {'success': True, 'message': message}
@staticmethod
@authed_only
@challenge_visible
@frequency_limited
def patch():
user_id = current_user.get_current_user().id
challenge_id = request.args.get('challenge_id')
docker_max_renew_count = int(get_config("whale:docker_max_renew_count", 5))
container = DBContainer.get_current_containers(user_id)
if container is None:
abort(403, 'Instance not found.', success=False)
if int(container.challenge_id) != int(challenge_id):
abort(403, f'Container already started but not from this challenge ({container.challenge.name})', success=False)
if container.renew_count >= docker_max_renew_count:
abort(403, 'Max renewal count exceeded.', success=False)
result, message = ControlUtil.try_renew_container(user_id=user_id)
return {'success': result, 'message': message}
@staticmethod
@authed_only
@frequency_limited
def delete():
user_id = current_user.get_current_user().id
result, message = ControlUtil.try_remove_container(user_id)
if not result:
abort(403, message, success=False)
return {'success': True, 'message': message}

27
assets/config.js Normal file

@@ -0,0 +1,27 @@
const $ = CTFd.lib.$;
$(".config-section > form:not(.form-upload)").submit(async function (event) {
event.preventDefault();
const obj = $(this).serializeJSON();
const params = {};
for (let x in obj) {
if (obj[x] === "true") {
params[x] = true;
} else if (obj[x] === "false") {
params[x] = false;
} else {
params[x] = obj[x];
}
}
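// bump whale:refresh with a short timestamp-derived token so the backend notices the change and re-initializes Docker/frp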
params['whale:refresh'] = btoa(+new Date).slice(-7, -2);
await CTFd.api.patch_config_list({}, params);
location.reload();
});
$(".config-section > form:not(.form-upload) > div > div > div > #router-type").change(async function () {
await CTFd.api.patch_config_list({}, {
'whale:router_type': $(this).val(),
'whale:refresh': btoa(+new Date).slice(-7, -2),
});
location.reload();
});

120
assets/containers.js Normal file

@@ -0,0 +1,120 @@
const $ = CTFd.lib.$;
function htmlentities(str) {
return String(str).replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}
function copyToClipboard(event, str) {
// Select element
const el = document.createElement('textarea');
el.value = str;
el.setAttribute('readonly', '');
el.style.position = 'absolute';
el.style.left = '-9999px';
document.body.appendChild(el);
el.select();
document.execCommand('copy');
document.body.removeChild(el);
$(event.target).tooltip({
title: "Copied!",
trigger: "manual"
});
$(event.target).tooltip("show");
setTimeout(function () {
$(event.target).tooltip("hide");
}, 1500);
}
$(".click-copy").click(function (e) {
copyToClipboard(e, $(this).data("copy"));
})
async function delete_container(user_id) {
let response = await CTFd.fetch("/api/v1/plugins/ctfd-whale/admin/container?user_id=" + user_id, {
method: "DELETE",
credentials: "same-origin",
headers: {
Accept: "application/json",
"Content-Type": "application/json"
}
});
response = await response.json();
return response.success;
}
async function renew_container(user_id) {
let response = await CTFd.fetch(
"/api/v1/plugins/ctfd-whale/admin/container?user_id=" + user_id, {
method: "PATCH",
credentials: "same-origin",
headers: {
Accept: "application/json",
"Content-Type": "application/json"
}
});
response = await response.json();
return response.success;
}
$('#containers-renew-button').click(function (e) {
let users = $("input[data-user-id]:checked").map(function () {
return $(this).data("user-id");
});
CTFd.ui.ezq.ezQuery({
title: "Renew Containers",
body: `Are you sure you want to renew the selected ${users.length} container(s)?`,
success: async function () {
await Promise.all(users.toArray().map((user) => renew_container(user)));
location.reload();
}
});
});
$('#containers-delete-button').click(function (e) {
let users = $("input[data-user-id]:checked").map(function () {
return $(this).data("user-id");
});
CTFd.ui.ezq.ezQuery({
title: "Delete Containers",
body: `Are you sure you want to delete the selected ${users.length} container(s)?`,
success: async function () {
await Promise.all(users.toArray().map((user) => delete_container(user)));
location.reload();
}
});
});
$(".delete-container").click(function (e) {
e.preventDefault();
let container_id = $(this).attr("container-id");
let user_id = $(this).attr("user-id");
CTFd.ui.ezq.ezQuery({
title: "Destroy Container",
body: "<span>Are you sure you want to delete <strong>Container #{0}</strong>?</span>".format(
htmlentities(container_id)
),
success: async function () {
await delete_container(user_id);
location.reload();
}
});
});
$(".renew-container").click(function (e) {
e.preventDefault();
let container_id = $(this).attr("container-id");
let user_id = $(this).attr("user-id");
CTFd.ui.ezq.ezQuery({
title: "Renew Container",
body: "<span>Are you sure you want to renew <strong>Container #{0}</strong>?</span>".format(
htmlentities(container_id)
),
success: async function () {
await renew_container(user_id);
location.reload();
},
});
});

100
assets/create.html Normal file

@@ -0,0 +1,100 @@
{% extends "admin/challenges/create.html" %}
{% block header %}
<div class="alert alert-secondary" role="alert">
Dynamic docker challenges allow players to deploy per-challenge standalone instances.
</div>
{% endblock %}
{% block value %}
<div class="form-group">
<label for="value">Docker Image<br>
<small class="form-text text-muted">
The docker image used to deploy the instance
</small>
</label>
<input type="text" class="form-control" name="docker_image" placeholder="Enter docker image name" required>
</div>
<div class="form-group">
<label for="value">Frp Redirect Type<br>
<small class="form-text text-muted">
Decide how frp redirects traffic
</small>
</label>
<select class="form-control" name="redirect_type">
<option value="http" selected>HTTP</option>
<option value="direct">Direct</option>
</select>
</div>
<div class="form-group">
<label for="value">Frp Redirect Port<br>
<small class="form-text text-muted">
The port inside the instance that frp should forward traffic to
</small>
</label>
<input type="number" class="form-control" name="redirect_port" placeholder="Enter the port you want to open"
required>
</div>
<div class="form-group">
<label for="value">Docker Container Memory Limit<br>
<small class="form-text text-muted">
The memory usage limit
</small>
</label>
<input type="text" class="form-control" name="memory_limit" placeholder="Enter the memory limit" value="128m"
required>
</div>
<div class="form-group">
<label for="value">Docker Container CPU Limit<br>
<small class="form-text text-muted">
The CPU usage limit
</small>
</label>
<input type="number" class="form-control" name="cpu_limit" placeholder="Enter the cpu limit" value="0.5"
required>
</div>
<div class="form-group">
<label for="value">Initial Value<br>
<small class="form-text text-muted">
This is how many points the challenge is worth initially.
</small>
</label>
<input type="number" class="form-control" name="value" placeholder="Enter value" required>
</div>
<div class="form-group">
<label for="value">Decay Limit<br>
<small class="form-text text-muted">
The amount of solves before the challenge reaches its minimum value
</small>
</label>
<input type="number" class="form-control" name="decay" placeholder="Enter decay limit" required>
</div>
<div class="form-group">
<label for="value">Minimum Value<br>
<small class="form-text text-muted">
This is the lowest that the challenge can be worth
</small>
</label>
<input type="number" class="form-control" name="minimum" placeholder="Enter minimum value" required>
</div>
<div class="form-group">
<label for="value">Score Type<br>
<small class="form-text text-muted">
Decide whether to use dynamic scoring
</small>
</label>
<select class="form-control" name="dynamic_score">
<option value="0" selected>Static Score</option>
<option value="1">Dynamic Score</option>
</select>
</div>
{% endblock %}
{% block type %}
<input type="hidden" value="dynamic_docker" name="type" id="chaltype">
{% endblock %}

30
assets/create.js Normal file

@@ -0,0 +1,30 @@
// Markdown Preview
if (window.$ === undefined) window.$ = CTFd.lib.$;
$('#desc-edit').on('shown.bs.tab', function(event) {
if (event.target.hash == '#desc-preview') {
var editor_value = $('#desc-editor').val();
$(event.target.hash).html(
CTFd._internal.challenge.render(editor_value)
);
}
});
$('#new-desc-edit').on('shown.bs.tab', function(event) {
if (event.target.hash == '#new-desc-preview') {
var editor_value = $('#new-desc-editor').val();
$(event.target.hash).html(
CTFd._internal.challenge.render(editor_value)
);
}
});
$("#solve-attempts-checkbox").change(function() {
if (this.checked) {
$('#solve-attempts-input').show();
} else {
$('#solve-attempts-input').hide();
$('#max_attempts').val('');
}
});
$(document).ready(function() {
$('[data-toggle="tooltip"]').tooltip();
});

94
assets/update.html Normal file

@@ -0,0 +1,94 @@
{% extends "admin/challenges/update.html" %}
{% block value %}
<div class="form-group">
<label for="value">Current Value<br>
<small class="form-text text-muted">
This is how many points the challenge is worth right now.
</small>
</label>
<input type="number" class="form-control chal-value" name="value" value="{{ challenge.value }}" disabled>
</div>
<div class="form-group">
<label for="value">Initial Value<br>
<small class="form-text text-muted">
This is how many points the challenge was worth initially.
</small>
</label>
<input type="number" class="form-control chal-initial" name="initial" value="{{ challenge.initial }}" required>
</div>
<div class="form-group">
<label for="value">Decay Limit<br>
<small class="form-text text-muted">
The amount of solves before the challenge reaches its minimum value
</small>
</label>
<input type="number" class="form-control chal-decay" name="decay" value="{{ challenge.decay }}" required>
</div>
<div class="form-group">
<label for="value">Minimum Value<br>
<small class="form-text text-muted">
This is the lowest that the challenge can be worth
</small>
</label>
<input type="number" class="form-control chal-minimum" name="minimum" value="{{ challenge.minimum }}" required>
</div>
<div class="form-group">
<label for="value">Docker Image<br>
<small class="form-text text-muted">
The docker image used to deploy the instance
</small>
</label>
<input type="text" class="form-control" name="docker_image" placeholder="Enter docker image name"
required value="{{ challenge.docker_image }}">
</div>
<div class="form-group">
<label for="value">Frp Redirect Type<br>
<small class="form-text text-muted">
Decide how frp redirects traffic
</small>
</label>
<select class="form-control" name="redirect_type">
<option value="http" {% if challenge.redirect_type == "http" %}selected{% endif %}>HTTP</option>
<option value="direct" {% if challenge.redirect_type == "direct" %}selected{% endif %}>Direct</option>
</select>
</div>
<div class="form-group">
<label for="value">Frp Redirect Port<br>
<small class="form-text text-muted">
The port inside the instance that frp should forward traffic to
</small>
</label>
<input type="number" class="form-control" name="redirect_port" placeholder="Enter the port you want to open"
required value="{{ challenge.redirect_port }}">
</div>
<div class="form-group">
<label for="value">Docker Container Memory Limit<br>
<small class="form-text text-muted">
The memory usage limit
</small>
</label>
<input type="text" class="form-control" name="memory_limit" placeholder="Enter the memory limit"
value="{{ challenge.memory_limit }}" required>
</div>
<div class="form-group">
<label for="value">Docker Container CPU Limit<br>
<small class="form-text text-muted">
The CPU usage limit
</small>
</label>
<input type="number" class="form-control" name="cpu_limit" placeholder="Enter the cpu limit"
value="{{ challenge.cpu_limit }}" required>
</div>
<div class="form-group">
<label for="value">Score Type<br>
<small class="form-text text-muted">
Decide whether to use dynamic scoring
</small>
</label>
<select class="form-control" name="dynamic_score">
<option value="0" {% if challenge.dynamic_score == 0 %}selected{% endif %}>Static Score</option>
<option value="1" {% if challenge.dynamic_score == 1 %}selected{% endif %}>Dynamic Score</option>
</select>
</div>
{% endblock %}

52
assets/update.js Normal file

@@ -0,0 +1,52 @@
if (window.$ === undefined) window.$ = CTFd.lib.$;
$('#submit-key').click(function(e) {
submitkey($('#chalid').val(), $('#answer').val())
});
$('#submit-keys').click(function(e) {
e.preventDefault();
$('#update-keys').modal('hide');
});
$('#limit_max_attempts').change(function() {
if (this.checked) {
$('#chal-attempts-group').show();
} else {
$('#chal-attempts-group').hide();
$('#chal-attempts-input').val('');
}
});
// Markdown Preview
$('#desc-edit').on('shown.bs.tab', function(event) {
if (event.target.hash == '#desc-preview') {
var editor_value = $('#desc-editor').val();
$(event.target.hash).html(
window.challenge.render(editor_value)
);
}
});
$('#new-desc-edit').on('shown.bs.tab', function(event) {
if (event.target.hash == '#new-desc-preview') {
var editor_value = $('#new-desc-editor').val();
$(event.target.hash).html(
window.challenge.render(editor_value)
);
}
});
function loadchal(id, update) {
$.get(script_root + '/admin/chal/' + id, function(obj) {
$('#desc-write-link').click(); // Switch to Write tab
if (typeof update === 'undefined')
$('#update-challenge').modal();
});
}
function openchal(id) {
loadchal(id);
}
$(document).ready(function() {
$('[data-toggle="tooltip"]').tooltip();
});

36
assets/view.html Normal file

@@ -0,0 +1,36 @@
{% extends "challenge.html" %}
{% block description %}
{{ challenge.html }}
<div class="row text-center pb-3">
<div id="whale-panel" style="width: 100%;">
<div id="whale-panel-stopped" class="card" style="width: 100%;">
<div class="card-body">
<h5 class="card-title">Instance Info</h5>
<button class="btn btn-primary card-link" id="whale-button-boot" type="button"
onclick="CTFd._internal.challenge.boot()">Launch an instance</button>
</div>
</div>
<div id="whale-panel-started" type="hidden" class="card" style="width: 100%;">
<div class="card-body">
<h5 class="card-title">Instance Info</h5>
<h6 class="card-subtitle mb-2 text-muted">
Remaining Time: <span id="whale-challenge-count-down"></span>s
</h6>
<h6 class="card-subtitle mb-2 text-muted">
Lan Domain: <span id="whale-challenge-lan-domain"></span>
</h6>
<p id="whale-challenge-user-access" class="card-text"></p>
<button type="button" class="btn btn-danger card-link" id="whale-button-destroy"
onclick="CTFd._internal.challenge.destroy()">
Destroy this instance
</button>
<button type="button" class="btn btn-success card-link" id="whale-button-renew"
onclick="CTFd._internal.challenge.renew()">
Renew this instance
</button>
</div>
</div>
</div>
</div>
{% endblock %}

239
assets/view.js Normal file

@@ -0,0 +1,239 @@
CTFd._internal.challenge.data = undefined
CTFd._internal.challenge.renderer = null;
CTFd._internal.challenge.preRender = function () {
}
CTFd._internal.challenge.render = null;
CTFd._internal.challenge.postRender = function () {
loadInfo();
}
if (window.$ === undefined) window.$ = CTFd.lib.$;
function loadInfo() {
var challenge_id = CTFd._internal.challenge.data.id;
var url = "/api/v1/plugins/ctfd-whale/container?challenge_id=" + challenge_id;
CTFd.fetch(url, {
method: 'GET',
credentials: 'same-origin',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json'
}
}).then(function (response) {
if (response.status === 429) {
// User was ratelimited but process response
return response.json();
}
if (response.status === 403) {
// User is not logged in or CTF is paused.
return response.json();
}
return response.json();
}).then(function (response) {
if (window.t !== undefined) {
clearInterval(window.t);
window.t = undefined;
}
if (response.success) response = response.data;
else CTFd._functions.events.eventAlert({
title: "Fail",
html: response.message,
button: "OK"
});
if (response.remaining_time != undefined) {
$('#whale-challenge-user-access').html(response.user_access);
$('#whale-challenge-lan-domain').html(response.lan_domain);
$('#whale-challenge-count-down').text(response.remaining_time);
$('#whale-panel-stopped').hide();
$('#whale-panel-started').show();
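// tick the countdown locally once per second; when it hits zero, re-query the backend for the authoritative state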
window.t = setInterval(() => {
const c = $('#whale-challenge-count-down').text();
if (!c) return;
let second = parseInt(c) - 1;
if (second <= 0) {
loadInfo();
}
$('#whale-challenge-count-down').text(second);
}, 1000);
} else {
$('#whale-panel-started').hide();
$('#whale-panel-stopped').show();
}
});
};
CTFd._internal.challenge.destroy = function () {
var challenge_id = CTFd._internal.challenge.data.id;
var url = "/api/v1/plugins/ctfd-whale/container?challenge_id=" + challenge_id;
$('#whale-button-destroy').text("Waiting...");
$('#whale-button-destroy').prop('disabled', true);
var params = {};
CTFd.fetch(url, {
method: 'DELETE',
credentials: 'same-origin',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json'
},
body: JSON.stringify(params)
}).then(function (response) {
if (response.status === 429) {
// User was ratelimited but process response
return response.json();
}
if (response.status === 403) {
// User is not logged in or CTF is paused.
return response.json();
}
return response.json();
}).then(function (response) {
if (response.success) {
loadInfo();
CTFd._functions.events.eventAlert({
title: "Success",
html: "Your instance has been destroyed!",
button: "OK"
});
} else {
CTFd._functions.events.eventAlert({
title: "Fail",
html: response.message,
button: "OK"
});
}
}).finally(() => {
$('#whale-button-destroy').text("Destroy this instance");
$('#whale-button-destroy').prop('disabled', false);
});
};
CTFd._internal.challenge.renew = function () {
var challenge_id = CTFd._internal.challenge.data.id;
var url = "/api/v1/plugins/ctfd-whale/container?challenge_id=" + challenge_id;
$('#whale-button-renew').text("Waiting...");
$('#whale-button-renew').prop('disabled', true);
var params = {};
CTFd.fetch(url, {
method: 'PATCH',
credentials: 'same-origin',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json'
},
body: JSON.stringify(params)
}).then(function (response) {
if (response.status === 429) {
// User was ratelimited but process response
return response.json();
}
if (response.status === 403) {
// User is not logged in or CTF is paused.
return response.json();
}
return response.json();
}).then(function (response) {
if (response.success) {
loadInfo();
CTFd._functions.events.eventAlert({
title: "Success",
html: "Your instance has been renewed!",
button: "OK"
});
} else {
CTFd._functions.events.eventAlert({
title: "Fail",
html: response.message,
button: "OK"
});
}
}).finally(() => {
$('#whale-button-renew').text("Renew this instance");
$('#whale-button-renew').prop('disabled', false);
});
};
CTFd._internal.challenge.boot = function () {
var challenge_id = CTFd._internal.challenge.data.id;
var url = "/api/v1/plugins/ctfd-whale/container?challenge_id=" + challenge_id;
$('#whale-button-boot').text("Waiting...");
$('#whale-button-boot').prop('disabled', true);
var params = {};
CTFd.fetch(url, {
method: 'POST',
credentials: 'same-origin',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json'
},
body: JSON.stringify(params)
}).then(function (response) {
if (response.status === 429) {
// User was ratelimited but process response
return response.json();
}
if (response.status === 403) {
// User is not logged in or CTF is paused.
return response.json();
}
return response.json();
}).then(function (response) {
if (response.success) {
loadInfo();
CTFd._functions.events.eventAlert({
title: "Success",
html: "Your instance has been deployed!",
button: "OK"
});
} else {
CTFd._functions.events.eventAlert({
title: "Fail",
html: response.message,
button: "OK"
});
}
}).finally(() => {
$('#whale-button-boot').text("Launch an instance");
$('#whale-button-boot').prop('disabled', false);
});
};
CTFd._internal.challenge.submit = function (preview) {
var challenge_id = CTFd._internal.challenge.data.id;
var submission = $('#challenge-input').val()
var body = {
'challenge_id': challenge_id,
'submission': submission,
}
var params = {}
if (preview)
params['preview'] = true
return CTFd.api.post_challenge_attempt(params, body).then(function (response) {
if (response.status === 429) {
// User was ratelimited but process response
return response
}
if (response.status === 403) {
// User is not logged in or CTF is paused.
return response
}
return response
})
};

108
challenge_type.py Normal file

@@ -0,0 +1,108 @@
from flask import Blueprint
from CTFd.models import (
db,
Flags,
)
from CTFd.plugins.challenges import BaseChallenge
from CTFd.plugins.dynamic_challenges import DynamicValueChallenge
from CTFd.plugins.flags import get_flag_class
from CTFd.utils import user as current_user
from .models import WhaleContainer, DynamicDockerChallenge
from .utils.control import ControlUtil
class DynamicValueDockerChallenge(BaseChallenge):
id = "dynamic_docker" # Unique identifier used to register challenges
name = "dynamic_docker" # Name of a challenge type
# Blueprint used to access the static_folder directory.
blueprint = Blueprint(
"ctfd-whale-challenge",
__name__,
template_folder="templates",
static_folder="assets",
)
challenge_model = DynamicDockerChallenge
@classmethod
def read(cls, challenge):
challenge = DynamicDockerChallenge.query.filter_by(id=challenge.id).first()
data = {
"id": challenge.id,
"name": challenge.name,
"value": challenge.value,
"initial": challenge.initial,
"decay": challenge.decay,
"minimum": challenge.minimum,
"description": challenge.description,
"category": challenge.category,
"state": challenge.state,
"max_attempts": challenge.max_attempts,
"type": challenge.type,
"type_data": {
"id": cls.id,
"name": cls.name,
"templates": cls.templates,
"scripts": cls.scripts,
},
}
return data
@classmethod
def update(cls, challenge, request):
data = request.form or request.get_json()
for attr, value in data.items():
# We need to set these to floats so that the next operations don't operate on strings
if attr in ("initial", "minimum", "decay"):
value = float(value)
if attr == 'dynamic_score':
value = int(value)
setattr(challenge, attr, value)
if challenge.dynamic_score == 1:
return DynamicValueChallenge.calculate_value(challenge)
db.session.commit()
return challenge
@classmethod
def attempt(cls, challenge, request):
data = request.form or request.get_json()
submission = data["submission"].strip()
flags = Flags.query.filter_by(challenge_id=challenge.id).all()
if len(flags) > 0:
for flag in flags:
if get_flag_class(flag.type).compare(flag, submission):
return True, "Correct"
return False, "Incorrect"
else:
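# no static flags are configured: fall back to comparing against the per-container dynamic flag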
user_id = current_user.get_current_user().id
q = db.session.query(WhaleContainer)
q = q.filter(WhaleContainer.user_id == user_id)
q = q.filter(WhaleContainer.challenge_id == challenge.id)
records = q.all()
if len(records) == 0:
return False, "Please solve it during the container is running"
container = records[0]
if container.flag == submission:
return True, "Correct"
return False, "Incorrect"
@classmethod
def solve(cls, user, team, challenge, request):
super().solve(user, team, challenge, request)
if challenge.dynamic_score == 1:
DynamicValueChallenge.calculate_value(challenge)
@classmethod
def delete(cls, challenge):
for container in WhaleContainer.query.filter_by(
challenge_id=challenge.id
).all():
ControlUtil.try_remove_container(container.user_id)
super().delete(challenge)

53
decorators.py Normal file

@@ -0,0 +1,53 @@
import functools
import time
from flask import request, current_app, session
from flask_restx import abort
from sqlalchemy.sql import and_
from CTFd.models import Challenges
from CTFd.utils.user import is_admin, get_current_user
from .utils.cache import CacheProvider
def challenge_visible(func):
@functools.wraps(func)
def _challenge_visible(*args, **kwargs):
challenge_id = request.args.get('challenge_id')
if is_admin():
if not Challenges.query.filter(
Challenges.id == challenge_id
).first():
abort(404, 'no such challenge', success=False)
else:
if not Challenges.query.filter(
Challenges.id == challenge_id,
and_(Challenges.state != "hidden", Challenges.state != "locked"),
).first():
abort(403, 'challenge not visible', success=False)
return func(*args, **kwargs)
return _challenge_visible
def frequency_limited(func):
@functools.wraps(func)
def _frequency_limited(*args, **kwargs):
if is_admin():
return func(*args, **kwargs)
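# for non-admins, a per-user redis lock serializes container operations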
redis_util = CacheProvider(app=current_app, user_id=get_current_user().id)
if not redis_util.acquire_lock():
abort(403, 'Request Too Fast!', success=False)
# additionally rate-limit per session: at most one action every 60 seconds, for protection
if "limit" not in session:
session["limit"] = int(time.time())
else:
if int(time.time()) - session["limit"] < 60:
abort(403, 'Frequency limited. Please wait at least 1 minute.', success=False)
session["limit"] = int(time.time())
result = func(*args, **kwargs)
redis_util.release_lock() # if any exception is raised, lock will not be released
return result
return _frequency_limited

105
docker-compose.example.yml Normal file

@@ -0,0 +1,105 @@
version: '3.7'
services:
ctfd:
build: .
user: root
restart: always
ports:
- "8000:8000"
environment:
- UPLOAD_FOLDER=/var/uploads
- DATABASE_URL=mysql+pymysql://ctfd:ctfd@db/ctfd
- REDIS_URL=redis://cache:6379
- WORKERS=1
- LOG_FOLDER=/var/log/CTFd
- ACCESS_LOG=-
- ERROR_LOG=-
- REVERSE_PROXY=true
volumes:
- .data/CTFd/logs:/var/log/CTFd
- .data/CTFd/uploads:/var/uploads
- .:/opt/CTFd:ro
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- db
networks:
default:
internal:
nginx:
image: nginx:1.17
restart: always
volumes:
- ./conf/nginx/http.conf:/etc/nginx/nginx.conf
ports:
- 80:80
depends_on:
- ctfd
db:
image: mariadb:10.4.12
restart: always
environment:
- MYSQL_ROOT_PASSWORD=ctfd
- MYSQL_USER=ctfd
- MYSQL_PASSWORD=ctfd
- MYSQL_DATABASE=ctfd
volumes:
- .data/mysql:/var/lib/mysql
networks:
internal:
# This command is required to set important mariadb defaults
command: [mysqld, --character-set-server=utf8mb4, --collation-server=utf8mb4_unicode_ci, --wait_timeout=28800, --log-warnings=0]
cache:
image: redis:4
restart: always
volumes:
- .data/redis:/data
networks:
internal:
frpc:
image: frankli0324/frp:frpc
restart: always
command: [
"--server_addr=frps",
"--server_port=7000",
"--token=your_token",
"--admin_addr=0.0.0.0",
"--admin_port=7000",
"--admin_user=frank",
"--admin_pwd=qwer",
]
networks:
frp:
internal:
containers:
frps:
image: frankli0324/frp:frps
restart: always
command: [
"--bind_addr=0.0.0.0",
"--bind_port=7000",
"--token=your_token",
"--subdomain_host=127.0.0.1.nip.io",
"--vhost_http_port=8080",
]
ports:
- 8080:8080
networks:
frp:
default:
networks:
default:
internal:
internal: true
frp:
internal: true
containers:
internal: true
driver: overlay
attachable: true

156
docs/advanced.md Normal file

@@ -0,0 +1,156 @@
# Advanced deployment
## Note
Please make sure you have been through the installation process on a single node first. This deployment method is *NOT* recommended on a first try.
What follows will be easier to understand if you have some experience with `docker` and `frp`.
## Goal
The goal of this advanced deployment is to deploy CTFd and the challenge containers on separate machines for a better experience.
Overall, `ctfd-whale` can be decomposed into three components: `CTFd`, the challenge containers along with frpc, and frps itself. The three components can be deployed separately or together to satisfy different needs.
For example, if you're in a school or an organization that has a number of high-performance dedicated servers *BUT* no public IPs for outside access, you can refer to this tutorial.
Here are some options:
* deploy frps on a server with public access
* deploy challenge containers on a separate server by joining it into the swarm you created earlier
* deploy challenge containers on *rootless* docker
* deploy challenge containers on a remote server with public access, *securely*
You could achieve the first option with little effort by deploying frps on that server and configuring frpc with a different `server_addr`.
In a swarm with multiple nodes, you can configure CTFd to start challenge containers on the nodes you specify, picked at random. Just make sure the node `whale` controls is a `Leader`. This is not covered in this guide; you'll find it rather simple, even with zero experience of docker swarm.
The [Docker docs](https://docs.docker.com/engine/security/rootless/) have a detailed introduction on how to set up rootless docker, so that is not covered in this guide either.
The following paragraphs walk through the last option.
## Architecture
In this tutorial, we have two separate machines, which we'll call the `web` and `target` servers. We will deploy CTFd on `web` and the challenge containers (along with frp) on `target`.
The picture below gives an overview.
![architecture](imgs/arch.png)
---
### Operate on `target` server
> root user is NOT recommended
> if you want to expose your docker deployment, you might also want to use [rootless docker](https://docs.docker.com/engine/security/rootless/)
Please read the [Docker docs](https://docs.docker.com/engine/security/protect-access/#use-tls-https-to-protect-the-docker-daemon-socket) thoroughly before continuing.
Set up docker swarm and clone this repo as described in [installation](./install.md), then follow the steps described in the Docker docs to sign your certificates.
> protect your certificates carefully
> one can take over the user running `dockerd` effortlessly with them
> and in most cases, the user is, unfortunately, root.
You can now create a network for your challenges by executing
```bash
docker network create --driver overlay --attachable challenges
```
Then set up frp on this machine. You might want to set up frps first:
```bash
# change to the version you prefer
wget https://github.com/fatedier/frp/releases/download/v0.37.0/frp_0.37.0_linux_amd64.tar.gz
tar xzvf frp_0.37.0_linux_amd64.tar.gz
cd frp_0.37.0_linux_amd64
mkdir /etc/frp
# create /etc/frp/frps.ini here; refer to [installation](./install.md) for its contents
cp systemd/frps.service /etc/systemd/system
systemctl daemon-reload
systemctl enable frps
systemctl start frps
```
Then frpc. frpc should run in the same network as the challenge containers, so make sure you connect frpc to the network you just created.
```bash
docker run -it --restart=always -d --network challenges -p 7400:7400 frankli0324/frp:frpc \
--server_addr=host_ip \
--server_port=7000 \
--admin_addr=0.0.0.0 \
--admin_port=7400 \
--admin_user=username \
--admin_pwd=password \
--token=your_token
```
You could use `docker-compose` for a better experience.
Here are some pitfalls or problems you might run into:
#### working with `systemd`
Copy the systemd service file to `/etc/systemd` in order to prevent it from being overwritten by future updates.
```bash
cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
```
Locate `ExecStart` in the file and change it into something like this:
```systemd
ExecStart=/usr/bin/dockerd \
--tlsverify \
--tlscacert=/etc/docker/certs/ca.pem \
--tlscert=/etc/docker/certs/server-cert.pem \
--tlskey=/etc/docker/certs/server-key.pem \
-H tcp://0.0.0.0:2376 \
-H unix:///var/run/docker.sock
```
Remember to reload `systemd` before restarting `docker.service`:
```bash
systemctl daemon-reload
systemctl restart docker
```
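At this point you may want to confirm that the daemon actually requires and accepts the client certificates; a quick check, assuming the 2376 port configured above and that you run it where the client certificate files live:
```bash
# should print server info only when the client certificates are presented
docker --tlsverify \
    --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H tcp://target_ip:2376 info
```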
#### cloud service providers
Most cloud providers ship a basic virus scanner in their system images; for example, AliCloud images come with `YunDun`. You might want to disable it. Challenge containers often come with intentional backdoors and are accessed in ways cloud providers don't like (they look like obvious attacks).
#### certificate security
Please follow best practices when signing your certificates. If you get used to signing both the client and server certificates on a single machine, you might run into trouble in the future.
If that is too inconvenient, at least sign them on your personal computer and transfer only the needed files to the client/server.
#### challenge networks and frpc
You could create an internal network for challenges, but you then have to connect frpc to a second network *with* internet access in order to map the ports, so that CTFd can reach the admin interface. Also, make sure frps is reachable from frpc.
### Operate on `web` server
Map your client certificates into the CTFd container. You might want to use `docker secrets`. Remember where the files are *inside the container*; if you use `docker secrets`, the directory is `/run/secrets`.
You may also delete everything related to `frp`, such as `frp_network`, since we are no longer running challenge containers on the `web` server. But if your only public IP is on the `web` server, you can leave the `frps` service running.
Then recreate your containers:
```bash
docker-compose down # needed for removing unwanted networks
docker-compose up -d
```
Now you can configure CTFd accordingly.
Sample configurations:
![whale-config1](imgs/whale-config1.png)
![whale-config2](imgs/whale-config2.png)
![whale-config3](imgs/whale-config3.png)
Refer to [installation](./install.md) for explanations.
---
Now you can add a challenge to test it out.

268
docs/advanced.zh-cn.md Normal file

@@ -0,0 +1,268 @@
# Advanced Deployment
## Prerequisites
Please make sure you have experience with single-node deployment; a distributed setup like this is not recommended on a first try.
Some Docker deployment and operation experience is recommended before reading this document.
Before performing the following steps, you need to have the ctfd-whale plugin installed.
## Goal
Separate the target (challenge) server from the CTFd web server; CTFd calls docker remotely over a TLS-secured API.
## Architecture
Two VPSes:
- One as the web server running CTFd, called `web`; it needs a public IP.
- One as the server that hands out containers to contestants, called `target`. The server used in this document has a public IP, but if yours doesn't, you can also forward through `frps` on the `web` server.
The architecture of this deployment is shown below.
![架构](imgs/arch.png)
---
## Configure Docker's Secured API
Reference: [Docker official documentation](https://docs.docker.com/engine/security/protect-access/#use-tls-https-to-protect-the-docker-daemon-socket)
### Configuration on the `target` server
Switching to the `root` user is recommended for these steps.
### Clone this repository
```bash
$ git clone https://github.com/frankli0324/ctfd-whale
```
### Enable docker swarm
```bash
$ docker swarm init
$ docker node update --label-add "name=linux-target-1" $(docker node ls -q)
```
Remember the `name`; it will be used later.
Create a directory:
```bash
$ mkdir /etc/docker/certs && cd /etc/docker/certs
```
Set a passphrase (you will need to enter it twice):
```bash
$ openssl genrsa -aes256 -out ca-key.pem 4096
```
Use OpenSSL to create the CA, server, and client keys:
```bash
$ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
```
Generate the server certificate (if your target server has no public IP, an internal IP should work in theory, as long as the web server can reach it):
```bash
$ openssl genrsa -out server-key.pem 4096
$ openssl req -subj "/CN=<target_ip>" -sha256 -new -key server-key.pem -out server.csr
```
Configure the allow-list:
```bash
$ echo subjectAltName = IP:0.0.0.0,IP:127.0.0.1 >> extfile.cnf
```
Set the Docker daemon key's extended usage attributes to server authentication only:
```bash
$ echo extendedKeyUsage = serverAuth >> extfile.cnf
```
Generate the signed certificate; you will need to enter the passphrase you set earlier:
```bash
$ openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out server-cert.pem -extfile extfile.cnf
```
Generate the `key.pem` used by the client (the `web` server) for access:
```bash
$ openssl genrsa -out key.pem 4096
```
Generate `client.csr`; the IP here is the same as for the server certificate:
```bash
$ openssl req -subj "/CN=<target_ip>" -new -key key.pem -out client.csr
```
Create an extensions config file that marks the key for client authentication:
```bash
$ echo extendedKeyUsage = clientAuth > extfile-client.cnf
```
Generate `cert.pem`:
```bash
$ openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out cert.pem -extfile extfile-client.cnf
```
Delete the config files and the two certificate signing requests; they are no longer needed:
```bash
$ rm -v client.csr server.csr extfile.cnf extfile-client.cnf
```
To prevent the private key files from being modified or read by other users, make them readable only by the owner:
```bash
$ chmod -v 0400 ca-key.pem key.pem server-key.pem
```
To prevent the public certificate files from being modified, make them read-only:
```bash
$ chmod -v 0444 ca.pem server-cert.pem cert.pem
```
Pack up the certificates:
```bash
$ tar cf certs.tar *.pem
```
Modify the Docker configuration so that the Docker daemon accepts connections from clients presenting a certificate trusted by the CA.
Copy the packaged unit file to `/etc` so it won't be overwritten by docker upgrades:
```bash
$ cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
```
Change line `13`:
```
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
```
into the following form:
```
ExecStart=/usr/bin/dockerd --tlsverify \
--tlscacert=/etc/docker/certs/ca.pem \
--tlscert=/etc/docker/certs/server-cert.pem \
--tlskey=/etc/docker/certs/server-key.pem \
-H tcp://0.0.0.0:2376 \
-H unix:///var/run/docker.sock
```
Reload the daemon and restart docker:
```bash
$ systemctl daemon-reload
$ systemctl restart docker
```
**Keep the generated keys safe; anyone holding them can effectively get root on the target server.**
---
### Configuration on the `web` server
Configure as the `root` user:
```bash
$ cd CTFd
$ mkdir docker-certs
```
First copy the `certs.tar` you packed earlier onto this server,
then unpack it:
```bash
$ tar xf certs.tar
```
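Before wiring this into CTFd, you can optionally verify the TLS connection from the `web` server; a quick check, assuming you run it from the directory where `certs.tar` was unpacked:
```bash
# prints version info for the remote daemon only if the TLS handshake succeeds
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H tcp://<target_ip>:2376 version
```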
Open the `CTFd` project's `docker-compose.yml` and add an entry under the `ctfd` service's `volumes`:
```
./docker-certs:/etc/docker/certs:ro
```
While you're at it, delete **all** `frp`-related configuration items, such as `frp_network`.
Then run `docker-compose up -d`.
Open the `CTFd-whale` configuration page and configure docker as follows:
![whale-config1](imgs/whale-config1.png)
Notes:
- `API URL` must be written in the form `https://<target_ip>:<port>`
- `Swarm Nodes` is the label `name` you added when initializing `docker swarm`
- `SSL CA Certificates` and the other two paths are paths *inside the CTFd container*; don't confuse them with paths on the host. If you modified CTFd's `docker-compose.yml` as in the previous step, just fill them in accordingly
For single-container challenges, the network name for `Auto Connect Network` is `<folder_name>_<network_name>`; if nothing was changed, it defaults to `whale-target_frp_containers`.
![whale-config2](imgs/whale-config2.png)
*Multi-container challenge configuration: untested*
---
## FRP Configuration
### Add a wildcard-resolved domain for HTTP-mode access
It can look like this:
```
*.example.com
*.sub.example.com (used as the example below)
```
### Configuration on the `target` server
Enter the `whale-target` directory:
```bash
$ cd ctfd-whale/whale-target
```
Create the `frp` config files from the provided examples:
```bash
$ cp frp/frps.ini.example frp/frps.ini
$ cp frp/frpc.ini.example frp/frpc.ini
```
Open `frp/frps.ini`:
- Change the `token` field; this token authenticates the communication between frpc and frps
- Since frps and frpc run on the same server here, you may also leave it unchanged
- If your target server sits in an internal network, you can put `frps` on the `web` server instead; in that case use a longer token, e.g. [generate a random UUID](https://www.uuidgenerator.net/)
- Note that `vhost_http_port` must equal the port mapped for `frps` in [docker-compose.yml](/whale-target/docker-compose.yml)
- `subdomain_host` is the domain you set up wildcard resolution for; if the wildcard record is `*.sub.example.com`, fill in `sub.example.com`
#### Open `frp/frpc.ini`
- Change the `token` field to the same value as in `frps.ini`
- Change the `admin_user` and `admin_pwd` fields, used for `frpc`'s basic auth
---
### Configuration on the `web` server
Open whale's settings page and configure the parameters as follows:
![frp settings page](imgs/whale-config3.png)
On that page,
- `API URL` must be set in the form `http://user:password@ip:port`
- `Http Domain Suffix` must match `subdomain_host` in `frps.ini`
- `HTTP Port` must match `vhost_http_port` in `frps.ini`
- `Direct Minimum Port` and `Direct Maximum Port` must match the port range in `whale-target/docker-compose.yml`
- Once the API is set up successfully, whale automatically fetches the contents of `frpc.ini` as the template
---
At this point the separated whale deployment should be working; find a challenge to test it with. Note, however, that docker_dynamic challenges apparently cannot be deleted, so take care not to let other admins make the test challenge public.
You can use
```bash
$ docker-compose logs
```
to view the logs and debug (Ctrl-C to exit).

BIN
docs/imgs/arch.png Normal file

Binary image, 78 KiB (not shown)

BIN
docs/imgs/whale-config1.png Normal file

Binary image, 109 KiB (not shown)

BIN
docs/imgs/whale-config2.png Normal file

Binary image, 46 KiB (not shown)

BIN
docs/imgs/whale-config3.png Normal file

Binary image, 67 KiB (not shown)

304
docs/install.md Normal file

@@ -0,0 +1,304 @@
# Installation & Usage Guide
## TLDR
If you never deployed a CTFd instance before:
```sh
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
docker swarm init
docker node update --label-add='name=linux-1' $(docker node ls -q)
git clone https://github.com/CTFd/CTFd --depth=1
git clone https://github.com/frankli0324/ctfd-whale CTFd/CTFd/plugins/ctfd-whale --depth=1
curl -fsSL https://cdn.jsdelivr.net/gh/frankli0324/ctfd-whale/docker-compose.example.yml -o CTFd/docker-compose.yml
# make sure you have pip3 installed on your rig
pip3 install docker-compose
docker-compose -f CTFd/docker-compose.yml up -d
# wait till the containers are ready
docker-compose -f CTFd/docker-compose.yml exec ctfd python manage.py set_config whale:auto_connect_network ctfd_frp_containers
```
The commands above try to install `docker-ce`, `python3-pip` and `docker-compose`. Make sure the following requirements are satisfied before you execute them:
* have `curl`, `git`, `python3` and `pip` installed
* GitHub is reachable
* Docker Registry is reachable
## Installation
### Start from scratch
First of all, you should initialize a docker swarm and label the nodes.
Names of nodes running Linux/Windows should begin with `linux-*`/`windows-*` respectively:
```bash
docker swarm init
docker node update --label-add "name=linux-1" $(docker node ls -q)
```
Taking advantage of the orchestration abilities of `docker swarm`, `ctfd-whale` can distribute challenge containers to different nodes (machines). Each time a user requests a challenge container, `ctfd-whale` randomly picks a suitable node to run it on.
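To see which nodes whale can choose from, you can inspect the swarm labels; a quick check, not required for the setup:
```bash
# print each node's hostname together with its labels
docker node ls -q | xargs docker node inspect \
    --format '{{ .Description.Hostname }}: {{ .Spec.Labels }}'
```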
After initializing a swarm, make sure that CTFd runs as expected on your PC/server.
Note that the compose file included in CTFd 2.5.0+ starts an nginx container by default, which takes port 80 (http). Make sure there are no conflicts.
```bash
git clone https://github.com/CTFd/CTFd --depth=1
cd CTFd # the cwd will not change throughout this guide from this line on
```
Change the first line of `docker-compose.yml` from `version '2'` to `version '3'` so that the `attachable` network property is supported, then bring the stack up:
```bash
docker-compose up -d
```
Take a look at <http://localhost> (or port 8000) and set up CTFd.
### Configure frps
frps can be started by docker-compose along with CTFd.
Define a network for communication between frpc and frps, and create an frps service block:
```yml
services:
...
frps:
image: glzjin/frp
restart: always
volumes:
- ./conf/frp:/conf
entrypoint:
- /usr/local/bin/frps
- -c
- /conf/frps.ini
ports:
- 10000-10100:10000-10100 # for "direct" challenges
- 8001:8001 # for "http" challenges
networks:
default: # frps ports should be mapped to host
frp_connect:
networks:
...
frp_connect:
driver: overlay
internal: true
ipam:
config:
- subnet: 172.1.0.0/16
```
Create a folder in `conf/` called `frp`:
```bash
mkdir ./conf/frp
```
Then create a configuration file for frps, `./conf/frp/frps.ini`, and fill it with:
```ini
[common]
# following ports must not overlap with "direct" port range defined in the compose file
bind_port = 7987 # port for frpc to connect to
vhost_http_port = 8001 # port for mapping http challenges
token = your_token
subdomain_host = node3.buuoj.cn
# hostname that's mapped to frps by some reverse proxy (or IS frps itself)
```
### Configure frpc
Likewise, create a network and a service for frpc.
The network allows challenge containers to be reached by frpc:
```yml
services:
...
frpc:
image: glzjin/frp:latest
restart: always
volumes:
- ./conf/frp:/conf/
entrypoint:
- /usr/local/bin/frpc
- -c
- /conf/frpc.ini
depends_on:
- frps # frps needs to start first
networks:
frp_containers:
frp_connect:
ipv4_address: 172.1.0.3
networks:
...
frp_containers: # challenge containers are attached to this network
driver: overlay
internal: true
# if challenge containers are allowed to access the internet, remove this line
attachable: true
ipam:
config:
- subnet: 172.2.0.0/16
```
Likewise, create an frpc config file, `./conf/frp/frpc.ini`:
```ini
[common]
token = your_token
server_addr = frps
server_port = 7987 # == frps.bind_port
admin_addr = 172.1.0.3 # refer to "Security"
admin_port = 7400
```
### Verify frp configurations
Update the compose stack with `docker-compose up -d`.
By executing `docker-compose logs frpc`, you should see that frpc produced the following logs:
```log
[service.go:224] login to server success, get run id [******], server udp port [******]
[service.go:109] admin server listen on ******
```
Seeing this confirms that frpc/frps are set up correctly.
Note: folder layout in this guide:
```
CTFd/
conf/
nginx/ # included in CTFd 2.5.0+
frp/
frpc.ini
frps.ini
serve.py <- this is just an anchor
```
### Configure CTFd
After finishing everything above:
* map docker socket into CTFd container
* Attach CTFd container to frp_connect
```yml
services:
ctfd:
...
volumes:
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
      - frpc # frpc needs to run first
networks:
...
frp_connect:
```
Then clone Whale into the CTFd plugins directory (yes, finally):
```bash
git clone https://github.com/frankli0324/CTFd-Whale CTFd/plugins/ctfd-whale --depth=1
docker-compose build # for pip to find requirements.txt
docker-compose up -d
```
Go to the Whale Configuration page (`/plugins/ctfd-whale/admin/settings`).
#### Docker related configs
`Auto Connect Network`, if you strictly followed the guide, should be `ctfd_frp_containers`
If you're not sure, this command lists all networks in the current stack:
```bash
docker network ls -f "label=com.docker.compose.project=ctfd" --format "{{.Name}}"
```
#### frp related configs
* `HTTP Domain Suffix` should be consistent with `subdomain_host` in frps
* `HTTP Port` with `vhost_http_port` in frps
* `Direct IP Address` should be a hostname/ip address that can be used to access frps
* `Direct Minimum Port` and `Direct Maximum Port`, you know what to do
* as long as `API URL` is filled in correctly, Whale will read the config of the connected frpc into `Frpc config template`
* setting `Frpc config template` will override contents in `frpc.ini`
Whale should be kinda usable at this moment.
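For the curious: Whale manages proxy rules entirely through the frpc admin API. It keeps a `[common]` template, appends one section per live container, pushes the whole config, and triggers a reload. A simplified sketch of that cycle (the rule name and values are illustrative; see `utils/routers/frp.py` for the real thing):

```python
import requests

base = "http://frpc:7400"  # whale:frp_api_url

# 1. read the current config; it becomes the [common] template
common = requests.get(f"{base}/api/config", timeout=5).text

# 2. append a per-container rule (illustrative values)
rule = "\n".join([
    "[direct_1_deadbeef]",    # <redirect_type>_<user_id>_<uuid>
    "type = tcp",
    "local_ip = 1-deadbeef",  # swarm service name of the challenge container
    "local_port = 9999",      # the challenge's redirect port
    "remote_port = 10000",    # a port taken from the "direct" pool
])

# 3. push the merged config and ask frpc to apply it
requests.put(f"{base}/api/config", data=common + "\n" + rule, timeout=5)
requests.get(f"{base}/api/reload", timeout=5)
```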
### Configure nginx
If you are using CTFd 2.5.0+, you can utilize the included nginx.
Remove the port mapping rule for the frps vhost http port (8001) in the compose file.
If you want to go deeper:
* add nginx to `default` and `internal` network
* remove CTFd from `default` and remove the mapped 8000 port
Add the following server block to `./conf/nginx/nginx.conf`:
```conf
server {
listen 80;
server_name *.node3.buuoj.cn;
location / {
proxy_pass http://frps:8001;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
```
## Challenge Deployment
### Standalone Containers
Take a look at <https://github.com/CTFTraining>
In short, a `FLAG` variable is passed into the container when it starts. You should write your own startup script (usually with bash and sed) to:
* replace your flag with the generated flag
* remove or override the `FLAG` variable
PLEASE create challenge images with care.
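Purely as an illustration (the paths and names below are hypothetical, and real images typically use a bash entrypoint), a Python entrypoint covering both steps could look like:

```python
#!/usr/bin/env python3
import os

# take FLAG out of this process's environment
flag = os.environ.pop("FLAG", "flag{placeholder}")

# write it wherever the challenge reads it from (hypothetical path)
with open("/app/flag", "w") as f:
    f.write(flag + "\n")

# replace this process with the real service; FLAG is no longer in its env
os.execvp("python3", ["python3", "/app/challenge.py"])
```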
### Grouped Containers
"name" the challenge image with a json object, for example:
```json
{
"hostname": "image",
}
```
Whale will keep the order of the keys in the json object, and take the first image as the "main container" of a challenge. The "main container" will be mapped to frp with the same rules as standalone containers.
see how grouped containers are created in the [code](utils/docker.py#L58)
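The ordering guarantee comes from parsing the image string with an order-preserving hook, along these lines:

```python
import json
from collections import OrderedDict

image_str = '{"socks": "serjs/go-socks5-proxy", "web": "blog_revenge_blog"}'
images = json.loads(image_str, object_pairs_hook=OrderedDict)

# the first key is treated as the "main container" and mapped via frp
main_name, main_image = next(iter(images.items()))
print(main_name, main_image)  # socks serjs/go-socks5-proxy
```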
## Security
* Please do not allow untrusted people to access the admin account. Theoretically there's an SSTI vulnerability in the config page.
* Do not set bind_addr of the frpc to `0.0.0.0` if you are following this guide. This may enable contestants to override frpc configurations.
* If you are annoyed by the complicated configuration, and you just want to set bind_addr = 0.0.0.0, remember to enable Basic Auth included in frpc, and set API URL accordingly, for example, `http://username:password@frpc:7400`
## Advanced Deployment
To separate the target servers (which launch challenge instances) from the CTFd web server, using a TLS-secured docker API, please refer to [this document](advanced.md)
313
docs/install.zh-cn.md Normal file
View File
@ -0,0 +1,313 @@
# Installation Guide
## TLDR
If you have never deployed CTFd before, you can run:
```sh
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh --mirror Aliyun
docker swarm init
docker node update --label-add='name=linux-1' $(docker node ls -q)
git clone https://github.com/CTFd/CTFd --depth=1
git clone https://github.com/frankli0324/ctfd-whale CTFd/CTFd/plugins/ctfd-whale --depth=1
curl -fsSL https://cdn.jsdelivr.net/gh/frankli0324/ctfd-whale/docker-compose.example.yml -o CTFd/docker-compose.yml
# make sure you have pip3 installed on your rig
pip3 install docker-compose
docker-compose -f CTFd/docker-compose.yml up -d
docker-compose -f CTFd/docker-compose.yml exec ctfd python manage.py
```
The script installs the ***docker.com build*** of `docker-ce`, plus `python3-pip` and `docker-compose`, on a Linux machine. Before running the commands above, make sure that:
* curl, git, python3 and pip are installed
* your network can reach GitHub well enough to clone repositories
* your network can reach the Docker Registry well enough to pull images
## Manual Installation
To better understand what each component of ctfd-whale does, and to get the most out of it, we recommend building an instance manually, starting from a blank CTFd, when deploying for real. This section walks you through the whole process.
### Start from scratch
First, initialize a swarm cluster and label the nodes:
names of Linux nodes should begin with `linux-`, and Windows nodes with `windows-`
```bash
docker swarm init
docker node update --label-add "name=linux-1" $(docker node ls -q)
```
`ctfd-whale` uses the cluster management abilities of `docker swarm` to distribute challenge containers across different nodes. Each time a contestant requests a challenge container, `ctfd-whale` randomly picks a suitable node to run it on.
Next, we need to make sure CTFd runs correctly.
Note that the `docker-compose.yml` of CTFd 2.5.0+ includes an `nginx` reverse proxy that occupies port 80
```bash
git clone https://github.com/CTFd/CTFd --depth=1
cd CTFd # note: the cwd stays in this directory for the rest of this guide
```
First, modify the first line of `docker-compose.yml` to support the `attachable` property
`version '2'` -> `version '3'`
then
```bash
docker-compose up -d
```
Visit <http://localhost> (or port 8000) and complete the initial CTFd setup
### Configure frps
frps can be started by docker-compose along with CTFd.
First, add a network under `networks` for communication between frpc and frps, and add an frps service:
```yml
services:
...
frps:
image: glzjin/frp
restart: always
volumes:
- ./conf/frp:/conf
entrypoint:
- /usr/local/bin/frps
- -c
- /conf/frps.ini
ports:
      - 10000-10100:10000-10100 # ports for "direct" challenges
      - 8001:8001 # port for "http" challenges
networks:
      default: # frps must be exposed publicly so challenge containers are reachable
frp_connect:
networks:
...
frp_connect:
driver: overlay
internal: true
ipam:
config:
- subnet: 172.1.0.0/16
```
First create the directory `./conf/frp`
```bash
mkdir ./conf/frp
```
then create `./conf/frp/frps.ini` and fill in:
```ini
[common]
# the two ports below must not overlap with the "direct" challenge port range
bind_port = 7987 # the port frpc connects to
vhost_http_port = 8001 # the port frps serves http challenges on
token = your_token
subdomain_host = node3.buuoj.cn # hostname for accessing http challenge containers
```
### Configure frpc
Likewise, add another network under `networks` for communication between frpc and the challenge containers, and add an frpc service:
```yml
services:
...
frpc:
image: glzjin/frp:latest
restart: always
volumes:
- ./conf/frp:/conf/
entrypoint:
- /usr/local/bin/frpc
- -c
- /conf/frpc.ini
depends_on:
      - frps # frps needs to run first
networks:
      frp_containers: # lets frpc reach the challenge containers
      frp_connect: # lets frpc reach frps, and CTFd reach frpc
ipv4_address: 172.1.0.3
networks:
...
frp_containers:
driver: overlay
    internal: true # remove this line if challenge containers may access the internet
attachable: true
ipam:
config:
- subnet: 172.2.0.0/16
```
Likewise, we need to create `./conf/frp/frpc.ini`:
```ini
[common]
token = your_token
server_addr = frps
server_port = 7987 # == frps.bind_port
admin_addr = 172.1.0.3 # see "Security Notes"
admin_port = 7400
```
### Verify the frp configuration
Now run `docker-compose up -d` to update the compose stack
Checking the logs with `docker-compose logs frpc`, you should see that frpc produced the following:
```log
[service.go:224] login to server success, get run id [******], server udp port [******]
[service.go:109] admin server listen on ******
```
which means frpc and frps are both configured correctly
Note: the directory layout in this example is:
```
CTFd/
conf/
    nginx/ # included in CTFd 2.5.0+
frp/
frpc.ini
frps.ini
serve.py
```
### Configure CTFd
Once the steps above are done, map the host's docker socket into the CTFd container,
and attach CTFd to the network frpc is in (note: not the containers network):
```yml
services:
ctfd:
...
volumes:
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
      - frpc # frpc needs to run first
networks:
...
frp_connect:
```
Clone CTFd-Whale into CTFd's plugin directory
```bash
git clone https://github.com/frankli0324/CTFd-Whale CTFd/plugins/ctfd-whale --depth=1
docker-compose build # dependencies need to be installed
docker-compose up -d
```
Go to Whale's configuration page (`/plugins/ctfd-whale/admin/settings`) and configure the docker settings first.
Pay attention to `Auto Connect Network`: if you followed the steps above, it should be `ctfd_frp_containers`
If unsure, the following command lists all networks created by compose in the CTFd directory:
```bash
docker network ls -f "label=com.docker.compose.project=ctfd" --format "{{.Name}}"
```
Then check that the frp settings are correct:
* `HTTP Domain Suffix` should match frps's `subdomain_host`
* `HTTP Port` should match frps's `vhost_http_port`
* `Direct IP Address` should be an IP through which the corresponding frps ports (10000-10100 in this example) can be reached
* `Direct Minimum Port` and `Direct Maximum Port`: self-explanatory
* As long as `API URL` is filled in correctly, Whale will automatically fetch frpc's configuration as the `Frpc config template`
* Setting `Frpc config template` overrides the original `frpc.ini`
At this point, CTFd-Whale should be more or less usable.
### Configure nginx
If you are using CTFd 2.5.0+, you can use the bundled nginx to reverse-proxy http challenges directly
First, remove the mapping of the frps http port (8001) from docker-compose.yml
If you want to go all the way, you can
* attach nginx to both the internal and default networks
* detach CTFd from the default network and remove its ports entry
Add the following server block to the http block of `./conf/nginx/nginx.conf`:
```conf
server {
listen 80;
server_name *.node3.buuoj.cn;
location / {
proxy_pass http://frps:8001;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
```
## Challenge Deployment
### Standalone Containers
Please refer to the images at <https://github.com/CTFTraining> when building challenge images and writing Dockerfiles. In short, when a challenge starts, an environment variable named `FLAG` is passed into the **container**; you need to write a startup script (usually a bash + sed combo) that writes the flag into the challenge itself and removes this environment variable.
Challenge authors: please think it through when building images, and do not confuse containers with images. That makes life easier both for you and for whoever deploys.
### Grouped Containers
Fill in a json object as the challenge image name to create a multi-container challenge:
```json
{
"hostname": "image",
}
```
Whale preserves the key order of the json object and maps the first container, as the "main container", to the outside world, using the same rules as standalone containers
Taking swpu2019 web2 on buuoj as an example, it can be configured as:
```json
{
"ss": "shadowsocks-chall",
"web": "swpu2019-web2",
...
}
```
where the Dockerfile of shadowsocks-chall is:
```dockerfile
FROM shadowsocks/shadowsocks-libev
ENV PASSWORD=123456
ENV METHOD=aes-256-cfb
```
> Since this README is not written by a buuoj admin, the above is for illustration only and may differ considerably from the actual setup
## Security Notes
* The flag and domain templates in the admin settings theoretically contain an SSTI "feature"; do not hand the admin account to untrusted third parties
* Since frpc in this example has no authentication enabled, do not set frpc's bind_addr to `0.0.0.0`. Doing so would let any challenge that can make http requests modify the frpc configuration.
* If, to keep the configuration simple, challenge containers can reach frpc, enable frpc's Basic Auth and set the frpc API URL in the form `http://username:password@frpc:7400`
## Advanced Deployment
To separate the servers used for launching challenge instances from the server running the `CTFd` site, with `CTFd-whale` controlling container dispatch through a `Docker API` secured by `TLS/SSL` verification,
see [advanced.zh-cn.md](advanced.zh-cn.md)
105
models.py Normal file
View File
@ -0,0 +1,105 @@
import random
import uuid
from datetime import datetime
from jinja2 import Template
from CTFd.utils import get_config
from CTFd.models import db
from CTFd.plugins.dynamic_challenges import DynamicChallenge
class WhaleConfig(db.Model):
key = db.Column(db.String(length=128), primary_key=True)
value = db.Column(db.Text)
def __init__(self, key, value):
self.key = key
self.value = value
def __repr__(self):
return "<WhaleConfig {0} {1}>".format(self.key, self.value)
class WhaleRedirectTemplate(db.Model):
key = db.Column(db.String(20), primary_key=True)
frp_template = db.Column(db.Text)
access_template = db.Column(db.Text)
def __init__(self, key, access_template, frp_template):
self.key = key
self.access_template = access_template
self.frp_template = frp_template
def __repr__(self):
return "<WhaleRedirectTemplate {0}>".format(self.key)
class DynamicDockerChallenge(DynamicChallenge):
__mapper_args__ = {"polymorphic_identity": "dynamic_docker"}
id = db.Column(
db.Integer, db.ForeignKey("dynamic_challenge.id", ondelete="CASCADE"), primary_key=True
)
memory_limit = db.Column(db.Text, default="128m")
cpu_limit = db.Column(db.Float, default=0.5)
dynamic_score = db.Column(db.Integer, default=0)
docker_image = db.Column(db.Text, default=0)
redirect_type = db.Column(db.Text, default=0)
redirect_port = db.Column(db.Integer, default=0)
def __init__(self, *args, **kwargs):
kwargs["initial"] = kwargs["value"]
super(DynamicDockerChallenge, self).__init__(**kwargs)
class WhaleContainer(db.Model):
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
user_id = db.Column(None, db.ForeignKey("users.id"))
challenge_id = db.Column(None, db.ForeignKey("challenges.id"))
start_time = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
renew_count = db.Column(db.Integer, nullable=False, default=0)
status = db.Column(db.Integer, default=1)
uuid = db.Column(db.String(256))
port = db.Column(db.Integer, nullable=True, default=0)
flag = db.Column(db.String(128), nullable=False)
# Relationships
user = db.relationship(
"Users", foreign_keys="WhaleContainer.user_id", lazy="select")
challenge = db.relationship(
"DynamicDockerChallenge", foreign_keys="WhaleContainer.challenge_id", lazy="select"
)
@property
def http_subdomain(self):
return Template(get_config(
'whale:template_http_subdomain', '{{ container.uuid }}'
)).render(container=self)
def __init__(self, user_id, challenge_id):
self.user_id = user_id
self.challenge_id = challenge_id
self.start_time = datetime.now()
self.renew_count = 0
self.uuid = str(uuid.uuid4())
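        # render the configured flag template (a Jinja2 expression); container,
        # uuid, random and get_config are exposed so admins can customize flag
        # generation from the Whale settings page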
self.flag = Template(get_config(
'whale:template_chall_flag', '{{ "flag{"+uuid.uuid4()|string+"}" }}'
)).render(container=self, uuid=uuid, random=random, get_config=get_config)
@property
def user_access(self):
return Template(WhaleRedirectTemplate.query.filter_by(
key=self.challenge.redirect_type
).first().access_template).render(container=self, get_config=get_config)
@property
def frp_config(self):
return Template(WhaleRedirectTemplate.query.filter_by(
key=self.challenge.redirect_type
).first().frp_template).render(container=self, get_config=get_config)
def __repr__(self):
return "<WhaleContainer ID:{0} {1} {2} {3} {4}>".format(self.id, self.user_id, self.challenge_id,
self.start_time, self.renew_count)
4
requirements.txt Normal file
View File
@ -0,0 +1,4 @@
docker==4.1.0
Flask-APScheduler==1.11.0
flask-redis==0.4.0
redis==3.3.11
View File
@ -0,0 +1,24 @@
<div class="tab-pane fade" id="router" role="tabpanel">
{% set value = get_config('whale:router_type') %}
{% set cur_type = get_config("whale:router_type", "frp") %}
<div class="form-group">
<label for="router-type">
Router type
<small class="form-text text-muted">
Select which router backend to use
</small>
</label>
<select id="router-type" class="form-control custom-select" onchange="window.updateConfigs">
{% for type in ["frp", "trp"] %}
<option value="{{ type }}" {{ "selected" if value == type }}>{{ type }}</option>
{% endfor %}
</select>
</div>
{% set template = "config/" + cur_type + ".router.config.html" %}
{% include template %}
<div class="submit-row float-right">
<button type="submit" tabindex="0" class="btn btn-md btn-primary btn-outlined">
Submit
</button>
</div>
</div>
View File
@ -0,0 +1,25 @@
<div class="tab-pane fade" id="challenges" role="tabpanel">
{% for config, val in {
"Subdomain Template": ("template_http_subdomain", "Controls how the subdomain of a container is generated"),
"Flag Template": ("template_chall_flag", "Controls how a flag is generated"),
}.items() %}
{% set value = get_config('whale:' + val[0]) %}
<div class="form-group">
<label for="{{ val[0].replace('_', '-') }}">
{{ config }}
<small class="form-text text-muted">
{{ val[1] }}
</small>
</label>
<input type="text" class="form-control"
id="{{ val[0].replace('_', '-') }}" name="{{ 'whale:' + val[0] }}"
{% if value != None %}value="{{ value }}"{% endif %}>
</div>
{% endfor %}
<div class="submit-row float-right">
<button type="submit" tabindex="0" class="btn btn-md btn-primary btn-outlined">
Submit
</button>
</div>
</div>
View File
@ -0,0 +1,122 @@
<div class="tab-pane fade show active" id="docker" role="tabpanel" aria-autocomplete="none">
<h5>Common</h5>
<small class="form-text text-muted">
Common configurations for both standalone and grouped containers
</small><br>
{% for config, val in {
"API URL": ("docker_api_url", "Docker API to connect to"),
"Credentials": ("docker_credentials", "docker.io username and password, separated by ':'. useful for private images"),
"Swarm Nodes": ("docker_swarm_nodes", "Will pick up one from it, You should set your node with label name=windows-* or name=linux-*. Separated by commas."),
}.items() %}
{% set value = get_config('whale:' + val[0]) %}
<div class="form-group">
<label for="{{ val[0].replace('_', '-') }}">
{{ config }}
<small class="form-text text-muted">{{ val[1] }}</small>
</label>
<input type="text" class="form-control"
id="{{ val[0].replace('_', '-') }}" name="{{ 'whale:' + val[0] }}"
{% if value != None %}value="{{ value }}"{% endif %}>
</div>
{% endfor %}
{% set use_ssl = get_config('whale:docker_use_ssl') %}
<div class="form-check">
<input type="checkbox" id="docker-use-ssl" name="whale:docker_use_ssl"
{% if use_ssl == True %}checked{% endif %}>
<label for="docker-use-ssl">Use SSL</label>
</div>
<div class="container" id="docker-ssl-config">
<div class="form-group">
<label for="docker-ssl-ca-cert">
SSL CA Certificate
<small class="form-text text-muted">
the location of the CA certificate file used in ssl connection
</small>
</label>
<input type="text" class="form-control"
id="docker-ssl-ca-cert" name="whale:docker_ssl_ca_cert"
value="{{ get_config('whale:docker_ssl_ca_cert') }}">
</div>
<div class="form-group">
<label for="docker-ssl-client-cert">
SSL Client Certificate
<small class="form-text text-muted">
the location of the client certificate file used in ssl connection
</small>
</label>
<input type="text" class="form-control"
id="docker-ssl-client-cert" name="whale:docker_ssl_client_cert"
value="{{ get_config('whale:docker_ssl_client_cert') }}">
</div>
<div class="form-group">
<label for="docker-ssl-client-key">
SSL Client Key
<small class="form-text text-muted">
the location of the client key file used in ssl connection
</small>
</label>
<input type="text" class="form-control"
id="docker-ssl-client-key" name="whale:docker_ssl_client_key"
value="{{ get_config('whale:docker_ssl_client_key') }}">
</div>
</div>
<script>
(function () {
let config = document.getElementById('docker-ssl-config');
let option = document.getElementById('docker-use-ssl');
config.hidden = !option.checked;
option.onclick = () => (config.hidden = !option.checked) || true;
}) ()
</script>
<hr>
<h5>Standalone Containers</h5>
<small class="form-text text-muted">
Typical challenges. Under most circumstances you only need to set these.
</small><br>
{% for config, val in {
"Auto Connect Network": ("docker_auto_connect_network", "The network connected for single-containers. It's usually the same network as the frpc is in."),
"Dns Setting": ("docker_dns", "Decide which dns will be used in container network."),
}.items() %}
{% set value = get_config('whale:' + val[0]) %}
<div class="form-group">
<label for="{{ val[0].replace('_', '-') }}">
{{ config }}
<small class="form-text text-muted">
{{ val[1] }}
</small>
</label>
<input type="text" class="form-control"
id="{{ val[0].replace('_', '-') }}" name="{{ 'whale:' + val[0] }}"
{% if value != None %}value="{{ value }}"{% endif %}>
</div>
{% endfor %}
<hr>
<h5>Grouped Containers</h5>
<small class="form-text text-muted">
Designed for multi-container challenges
</small><br>
{% for config, val in {
"Auto Connect Containers": ("docker_auto_connect_containers","Decide which container will be connected to multi-container-network automatically. Separated by commas."),
"Multi-Container Network Subnet": ("docker_subnet", "Subnet which will be used by auto created networks for multi-container challenges."),
"Multi-Container Network Subnet New Prefix": ("docker_subnet_new_prefix", "Prefix for auto created network.")
}.items() %}
{% set value = get_config('whale:' + val[0]) %}
<div class="form-group">
<label for="{{ val[0].replace('_', '-') }}">
{{ config }}
<small class="form-text text-muted">
{{ val[1] }}
</small>
</label>
<input type="text" class="form-control"
id="{{ val[0].replace('_', '-') }}" name="{{ 'whale:' + val[0] }}"
{% if value != None %}value="{{ value }}"{% endif %}>
</div>
{% endfor %}
<div class="submit-row float-right">
<button type="submit" tabindex="0" class="btn btn-md btn-primary btn-outlined">
Submit
</button>
</div>
</div>
View File
@ -0,0 +1,50 @@
{% for config, val in {
"API URL": ("frp_api_url", "Frp API to connect to"),
"Http Domain Suffix": ("frp_http_domain_suffix", "Will be appended to the hash of a container"),
"External Http Port": ("frp_http_port", "Keep in sync with frps:vhost_http_port"),
"Direct IP Address":("frp_direct_ip_address","For direct redirect"),
"Direct Minimum Port": ("frp_direct_port_minimum", "For direct redirect (pwn challenges)"),
"Direct Maximum Port": ("frp_direct_port_maximum", "For direct redirect (pwn challenges)"),
}.items() %}
{% set value = get_config('whale:' + val[0]) %}
<div class="form-group">
<label for="{{ val[0].replace('_', '-') }}">
{{ config }}
<small class="form-text text-muted">
{{ val[1] }}
</small>
</label>
<input type="text" class="form-control" id="{{ val[0].replace('_', '-') }}" name="{{ 'whale:' + val[0] }}" {% if
value !=None %}value="{{ value }}" {% endif %}>
</div>
{% endfor %}
{% set frpc_template = get_config("whale:frp_config_template", "") %}
<div class="form-group">
<label for="frp-config-template">
Frpc config template
<small class="form-text text-muted">
Frp config template, only need common section!
</small>
</label>
<textarea class="form-control input-filled-valid" id="frp-config-template" rows="7"
name="whale:frp_config_template">{{ frpc_template }}</textarea>
</div>
{% if frpc_template %}
<div class="form-group">
<label for="frps-config-template">
Frps config template [generated]
<small class="form-text text-muted">
This configuration is generated with your settings above.
</small>
</label>
<textarea class="form-control input-filled-valid grey-text" id="frps-config-template" rows="6" disabled>
[common]
{% for i in frpc_template.split('\n') %}
{%- if 'token' in i -%}{{ i }}{%- endif -%}
{%- if 'server_port' in i -%}{{ i.replace('server_port', 'bind_port') }}{%- endif -%}
{% endfor %}
vhost_http_port = {{ get_config('whale:frp_http_port') }}
subdomain_host = {{ get_config('whale:frp_http_domain_suffix', '127.0.0.1.xip.io').lstrip('.') }}
</textarea>
</div>
{% endif %}
View File
@ -0,0 +1,26 @@
<div class="tab-pane fade" id="limits" role="tabpanel">
{% for config, val in {
"Max Container Count": ("docker_max_container_count", "The maximum number of countainers allowed on the server"),
"Max Renewal Times": ("docker_max_renew_count", "The maximum times a user is allowed to renew a container"),
"Docker Container Timeout": ("docker_timeout", "A container times out after [timeout] seconds."),
}.items() %}
{% set value = get_config('whale:' + val[0]) %}
<div class="form-group">
<label for="{{ val[0].replace('_', '-') }}">
{{ config }}
<small class="form-text text-muted">
{{ val[1] }}
</small>
</label>
<input type="text" class="form-control"
id="{{ val[0].replace('_', '-') }}" name="{{ 'whale:' + val[0] }}"
{% if value != None %}value="{{ value }}"{% endif %}>
</div>
{% endfor %}
<div class="submit-row float-right">
<button type="submit" tabindex="0" class="btn btn-md btn-primary btn-outlined">
Submit
</button>
</div>
</div>
View File
@ -0,0 +1,17 @@
{% for config, val in {
"API URL": ("trp_api_url", "trp API to connect to"),
"Domain Suffix": ("trp_domain_suffix", "Will be used to generated the access link of a challenge"),
"Listening Port": ("trp_listening_port", "Will be used to generated the access link of a challenge"),
}.items() %}
{% set value = get_config('whale:' + val[0]) %}
<div class="form-group">
<label for="{{ val[0].replace('_', '-') }}">
{{ config }}
<small class="form-text text-muted">
{{ val[1] }}
</small>
</label>
<input type="text" class="form-control" id="{{ val[0].replace('_', '-') }}" name="{{ 'whale:' + val[0] }}"
{% if value != None %}value="{{ value }}" {% endif %}>
</div>
{% endfor %}
View File
@ -0,0 +1,57 @@
<style>
.info-card.card {
height: 11rem;
}
.card-text {
text-overflow: ellipsis;
white-space: nowrap;
overflow: hidden;
}
.card-text:hover {
white-space: pre-line;
overflow: visible;
}
</style>
<div class="row">
{% for container in containers %}
<div class="col-sm-6 pb-3">
<div class="info-card card">
<div class="card-body">
<h5 class="d-inline-block card-title">
<a style="width: 5rem;"
href="{{ url_for('admin.challenges_detail', challenge_id=container.challenge.id) }}"
>{{ container.challenge.name | truncate(15) }}
</a>
</h5>
<h6 class="d-inline-block card-subtitle float-right">
<a style="width: 5rem;"
class="btn btn-outline-secondary rounded"
href="{{ url_for('admin.users_detail', user_id=container.user.id) }}"
>{{ container.user.name | truncate(5) }}
</a>
</h6>
<p class="card-text">{{ container.user_access }}</p>
<p class="card-text">{{ container.flag }}</p>
Time Started: {{ container.start_time }}
<a class="delete-container float-right" container-id="{{ container.id }}"
data-toggle="tooltip" data-placement="top"
user-id="{{ container.user.id }}"
style="margin-right: 0.5rem;"
title="Destroy Container #{{ container.id }}">
<i class="fas fa-stop-circle"></i>
</a>
<a class="renew-container float-right" container-id="{{ container.id }}"
data-toggle="tooltip" data-placement="top"
user-id="{{ container.user.id }}"
style="margin-right: 0.5rem;"
title="Renew Container #{{ container.id }}">
<i class="fas fa-clock"></i>
</a>
</div>
</div>
</div>
{% endfor %}
</div>
View File
@ -0,0 +1,78 @@
<div class="row">
<div class="col-md-12">
<table class="table table-striped border">
<thead>
<tr>
<th class="border-right" data-checkbox>
<div class="form-check text-center">&nbsp;
<input type="checkbox" class="form-check-input" data-checkbox-all>
</div>
</th>
<th class="sort-col text-center"><b>ID</b></td>
<th class="text-center"><b>User</b></td>
<th class="sort-col text-center"><b>Challenge</b></td>
<th class="text-center"><b>Access Method</b></td>
<th class="text-center"><b>Flag</b></td>
<th class="sort-col text-center"><b>Startup Time</b></td>
<th class="sort-col text-center"><b>Renewal Times</b></td>
<th class="text-center"><b>Delete</b></td>
</tr>
</thead>
<tbody>
{% for container in containers %}
<tr>
<td class="border-right" data-checkbox>
<div class="form-check text-center">&nbsp;
<input type="checkbox" class="form-check-input" data-user-id="{{ container.user.id }}">
</div>
</td>
<td class="text-center">
{{ container.id }}
</td>
<td class="text-center">
<a href="{{ url_for('admin.users_detail', user_id=container.user.id) }}">
{{ container.user.name | truncate(12) }}
</a>
</td>
<td class="text-center">
<a href="{{ url_for('admin.challenges_detail', challenge_id=container.challenge.id) }}">
{{ container.challenge.name }}
</a>
</td>
<td class="text-center">
{{ container.challenge.redirect_type }}&nbsp;
<button class="btn btn-link p-0 click-copy" data-copy="{{ container.user_access }}">
<i class="fas fa-clipboard"></i>
</button>
</td>
<td class="text-center">
<button class="btn btn-link p-0 click-copy" data-copy="{{ container.flag }}">
<i class="fas fa-clipboard"></i>
</button>
</td>
<td class="text-center">
<span data-time="{{ container.start_time | isoformat }}"></span>
</td>
<td class="text-center">
{{ container.renew_count }}&nbsp;
<button class="btn btn-link p-0 renew-container"
container-id="{{ container.id }}" data-toggle="tooltip"
user-id="{{ container.user.id }}" data-placement="top"
title="Renew Container #{{ container.id }}">
<i class="fas fa-sync"></i>
</button>
</td>
<td class="text-center">
<button class="btn btn-link p-0 delete-container"
container-id="{{ container.id }}" data-toggle="tooltip"
user-id="{{ container.user.id }}" data-placement="top"
title="Destroy Container #{{ container.id }}">
<i class="fas fa-times"></i>
</button>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
</div>
25
templates/whale_base.html Normal file
View File
@ -0,0 +1,25 @@
{% extends "admin/base.html" %}
{% block content %}
<div class="jumbotron">
<div class="container">
<h1>CTFd Whale</h1>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-md-3">
<ul class="nav nav-pills flex-column">
{% block menu %}
{% endblock %}
</ul>
</div>
<div class="col-md-9">
<div class="tab-content">
{% block panel %}
{% endblock %}
</div>
</div>
</div>
</div>
{% endblock %}
View File
@ -0,0 +1,38 @@
{% extends "whale_base.html" %}
{% block menu %}
<li class="nav-item">
<a class="nav-link active" data-toggle="pill" href="#docker">Docker</a>
</li>
<li class="nav-item">
<a class="nav-link" data-toggle="pill" href="#router">Router</a>
</li>
<li class="nav-item">
<a class="nav-link" data-toggle="pill" href="#limits">Limits</a>
</li>
<li class="nav-item">
<a class="nav-link" data-toggle="pill" href="#challenges">Challenges</a>
</li>
<li class="nav-item">
<a class="nav-link" href="/plugins/ctfd-whale/admin/containers">🔗 Instances</a>
</li>
{% endblock %}
{% block panel %}
{% include "components/errors.html" %}
<div role="tabpanel" class="tab-pane config-section active" id="settings">
<form method="POST" accept-charset="utf-8" action="/admin/plugins/ctfd-whale"
class="form-horizontal">
<div class="tab-content">
{% include "config/docker.config.html" %}
{% include "config/base.router.config.html" %}
{% include "config/limits.config.html" %}
{% include "config/challenges.config.html" %}
</div>
</form>
</div>
{% endblock %}
{% block scripts %}
<script defer src="{{ url_for('plugins.ctfd-whale.assets', path='config.js') }}"></script>
{% endblock %}
View File
@ -0,0 +1,69 @@
{% extends "whale_base.html" %}
{% block menu %}
<li class="nav-item">
<a class="nav-link" href="/plugins/ctfd-whale/admin/settings">🔗 Settings</a>
</li>
<li class="nav-item">
<a class="nav-link active" href="#">Instances</a>
</li>
<li class="nav-item nav-link">
<div class="btn-group" role="group">
<button type="button" class="btn btn-outline-secondary"
data-toggle="tooltip" title="Renew Containers" id="containers-renew-button">
<i class="btn-fa fas fa-sync"></i>
</button>
<button type="button" class="btn btn-outline-danger"
data-toggle="tooltip" title="Stop Containers" id="containers-delete-button">
<i class="btn-fa fas fa-times"></i>
</button>
</div>
</li>
<li class="nav-item nav-link">
<ul class="pagination">
<li class="page-item{{ ' disabled' if curr_page <= 1 else '' }}">
<a class="page-link" aria-label="Previous"
href="/plugins/ctfd-whale/admin/containers?page={{ curr_page - 1 }}"
>
<span aria-hidden="true">&laquo;</span>
<span class="sr-only">Previous</span>
</a>
</li>
{% set range_l = [[curr_page - 1, 1]|max, [pages - 3, 1]|max]|min %}
{% set range_r = [[curr_page + 2, pages]|min, [4, pages]|min]|max %}
{% for page in range(range_l, range_r + 1) %}
<li class="page-item{{ ' active' if curr_page == page }}">
<a class="page-link"
href="/plugins/ctfd-whale/admin/containers?page={{ page }}"
>{{ page }}</a>
</li>
{% endfor %}
<li class="page-item{{ ' disabled' if curr_page >= pages else '' }}">
<a class="page-link" aria-label="Next"
href="/plugins/ctfd-whale/admin/containers?page={{ curr_page + 1 }}"
>
<span aria-hidden="true">&raquo;</span>
<span class="sr-only">Next</span>
</a>
</li>
</ul>
</li>
<li class="nav-item nav-link">
{% if session['view_mode'] == 'card' %}
<a href="?mode=list">Switch to list mode</a>
{% else %}
<a href="?mode=card">Switch to card mode</a>
{% endif %}
</li>
{% endblock %}
{% block panel %}
{% include "containers/" + session["view_mode"] + ".containers.html" %}
{% endblock %}
{% block scripts %}
<script defer src="{{ url_for('plugins.ctfd-whale.assets', path='containers.js') }}"></script>
{% endblock %}
0
utils/__init__.py Normal file
View File
150
utils/cache.py Normal file
View File
@ -0,0 +1,150 @@
import ipaddress
import warnings
from CTFd.cache import cache
from CTFd.utils import get_config
from flask_redis import FlaskRedis
from redis.exceptions import LockError
from .db import DBContainer
class CacheProvider:
def __init__(self, app, *args, **kwargs):
if app.config['CACHE_TYPE'] == 'redis':
self.provider = RedisCacheProvider(app, *args, **kwargs)
elif app.config['CACHE_TYPE'] in ['filesystem', 'simple']:
if not hasattr(CacheProvider, 'cache'):
CacheProvider.cache = {}
self.provider = FilesystemCacheProvider(app, *args, **kwargs)
self.init_port_sets()
def init_port_sets(self):
self.clear()
containers = DBContainer.get_all_container()
used_port_list = []
for container in containers:
if container.port != 0:
used_port_list.append(container.port)
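        # seed the pool with every port in the configured "direct" range
        # that is not already occupied by an existing container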
for port in range(int(get_config("whale:frp_direct_port_minimum", 29000)),
int(get_config("whale:frp_direct_port_maximum", 28000)) + 1):
if port not in used_port_list:
self.add_available_port(port)
from .docker import get_docker_client
client = get_docker_client()
docker_subnet = get_config("whale:docker_subnet", "174.1.0.0/16")
docker_subnet_new_prefix = int(
get_config("whale:docker_subnet_new_prefix", "24"))
exist_networks = []
available_networks = []
for network in client.networks.list(filters={'label': 'prefix'}):
exist_networks.append(str(network.attrs['Labels']['prefix']))
for network in list(ipaddress.ip_network(docker_subnet).subnets(new_prefix=docker_subnet_new_prefix)):
if str(network) not in exist_networks:
available_networks.append(str(network))
self.add_available_network_range(*set(available_networks))
def __getattr__(self, name):
return self.provider.__getattribute__(name)
class FilesystemCacheProvider:
def __init__(self, app, *args, **kwargs):
warnings.warn(
'\n[CTFd Whale] Warning: looks like you are using filesystem cache. '
'\nThis is for TESTING purposes only, DO NOT USE on production sites.',
RuntimeWarning
)
self.key = 'ctfd_whale_lock-' + str(kwargs.get('user_id', 0))
self.global_port_key = "ctfd_whale-port-set"
self.global_network_key = "ctfd_whale-network-set"
def clear(self):
cache.set(self.global_port_key, set())
cache.set(self.global_network_key, set())
def add_available_network_range(self, *ranges):
s = cache.get(self.global_network_key)
s.update(ranges)
cache.set(self.global_network_key, s)
def get_available_network_range(self):
try:
s = cache.get(self.global_network_key)
r = s.pop()
cache.set(self.global_network_key, s)
return r
except KeyError:
return None
def add_available_port(self, port):
s = cache.get(self.global_port_key)
s.add(port)
cache.set(self.global_port_key, s)
def get_available_port(self):
try:
s = cache.get(self.global_port_key)
r = s.pop()
cache.set(self.global_port_key, s)
return r
except KeyError:
return None
def acquire_lock(self):
# for testing purposes only, so no need to set this limit
return True
def release_lock(self):
return True
class RedisCacheProvider(FlaskRedis):
def __init__(self, app, *args, **kwargs):
super().__init__(app)
self.key = 'ctfd_whale_lock-' + str(kwargs.get('user_id', 0))
self.current_lock = None
self.global_port_key = "ctfd_whale-port-set"
self.global_network_key = "ctfd_whale-network-set"
def clear(self):
self.delete(self.global_port_key)
self.delete(self.global_network_key)
def add_available_network_range(self, *ranges):
self.sadd(self.global_network_key, *ranges)
def get_available_network_range(self):
return self.spop(self.global_network_key).decode()
def add_available_port(self, port):
self.sadd(self.global_port_key, str(port))
def get_available_port(self):
return int(self.spop(self.global_port_key))
def acquire_lock(self):
lock = self.lock(name=self.key, timeout=10)
if not lock.acquire(blocking=True, blocking_timeout=2.0):
return False
self.current_lock = lock
return True
def release_lock(self):
if self.current_lock is None:
return False
try:
self.current_lock.release()
return True
except LockError:
return False
50
utils/checks.py Normal file
View File
@ -0,0 +1,50 @@
from docker.errors import DockerException, TLSParameterError, APIError, requests
from CTFd.utils import get_config
from .docker import get_docker_client
from .routers import Router, _routers
class WhaleChecks:
@staticmethod
def check_docker_api():
try:
client = get_docker_client()
except TLSParameterError as e:
return f'Docker TLS Parameters incorrect ({e})'
except DockerException as e:
return f'Docker API url incorrect ({e})'
try:
client.ping()
except (APIError, requests.RequestException):
return f'Unable to connect to Docker API, check your API connectivity'
credentials = get_config("whale:docker_credentials")
if credentials and credentials.count(':') == 1:
try:
client.login(*credentials.split(':'))
except DockerException:
return f'Unable to log into docker registry, check your credentials'
swarm = client.info()['Swarm']
if not swarm['ControlAvailable']:
return f'Docker swarm not available. You should initialize a swarm first. ($ docker swarm init)'
@staticmethod
def check_frp_connection():
router_conftype = get_config("whale:router_type", "frp")
if router_conftype not in _routers:
return "invalid router type: " + router_conftype
ok, msg = _routers[router_conftype]().check_availability()
if not ok:
return msg
@staticmethod
def perform():
errors = []
for attr in dir(WhaleChecks):
if attr.startswith('check_'):
err = getattr(WhaleChecks, attr)()
if err:
errors.append(err)
return errors
61
utils/control.py Normal file
View File
@ -0,0 +1,61 @@
import datetime
import traceback
from CTFd.utils import get_config
from .db import DBContainer, db
from .docker import DockerUtils
from .routers import Router
class ControlUtil:
@staticmethod
def try_add_container(user_id, challenge_id):
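        # three-step startup: DB record -> docker service -> router rule;
        # a failure at any later step rolls back the earlier ones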
container = DBContainer.create_container_record(user_id, challenge_id)
try:
DockerUtils.add_container(container)
except Exception as e:
DBContainer.remove_container_record(user_id)
print(traceback.format_exc())
return False, 'Docker Creation Error'
ok, msg = Router.register(container)
if not ok:
DockerUtils.remove_container(container)
DBContainer.remove_container_record(user_id)
return False, msg
return True, 'Container created'
@staticmethod
def try_remove_container(user_id):
container = DBContainer.get_current_containers(user_id=user_id)
if not container:
return False, 'No such container'
for _ in range(3): # configurable? as "onerror_retry_cnt"
try:
ok, msg = Router.unregister(container)
if not ok:
return False, msg
DockerUtils.remove_container(container)
DBContainer.remove_container_record(user_id)
return True, 'Container destroyed'
except Exception as e:
print(traceback.format_exc())
return False, 'Failed when destroying instance, please contact admin!'
@staticmethod
def try_renew_container(user_id):
container = DBContainer.get_current_containers(user_id)
if not container:
return False, 'No such container'
timeout = int(get_config("whale:docker_timeout", "3600"))
container.start_time = container.start_time + \
datetime.timedelta(seconds=timeout)
if container.start_time > datetime.datetime.now():
container.start_time = datetime.datetime.now()
# race condition? useless maybe?
# useful when docker_timeout < poll timeout (10 seconds)
# doesn't make any sense
else:
return False, 'Invalid container'
container.renew_count += 1
db.session.commit()
return True, 'Container Renewed'
104
utils/db.py Normal file
View File
@ -0,0 +1,104 @@
import datetime
from CTFd.models import db
from CTFd.utils import get_config
from ..models import WhaleContainer, WhaleRedirectTemplate
class DBContainer:
@staticmethod
def create_container_record(user_id, challenge_id):
container = WhaleContainer(user_id=user_id, challenge_id=challenge_id)
db.session.add(container)
db.session.commit()
return container
@staticmethod
def get_current_containers(user_id):
q = db.session.query(WhaleContainer)
q = q.filter(WhaleContainer.user_id == user_id)
return q.first()
@staticmethod
def get_container_by_port(port):
q = db.session.query(WhaleContainer)
q = q.filter(WhaleContainer.port == port)
return q.first()
@staticmethod
def remove_container_record(user_id):
q = db.session.query(WhaleContainer)
q = q.filter(WhaleContainer.user_id == user_id)
q.delete()
db.session.commit()
@staticmethod
def get_all_expired_container():
timeout = int(get_config("whale:docker_timeout", "3600"))
q = db.session.query(WhaleContainer)
q = q.filter(
WhaleContainer.start_time <
datetime.datetime.now() - datetime.timedelta(seconds=timeout)
)
return q.all()
@staticmethod
def get_all_alive_container():
timeout = int(get_config("whale:docker_timeout", "3600"))
q = db.session.query(WhaleContainer)
q = q.filter(
WhaleContainer.start_time >=
datetime.datetime.now() - datetime.timedelta(seconds=timeout)
)
return q.all()
@staticmethod
def get_all_container():
q = db.session.query(WhaleContainer)
return q.all()
@staticmethod
def get_all_alive_container_page(page_start, page_end):
timeout = int(get_config("whale:docker_timeout", "3600"))
q = db.session.query(WhaleContainer)
q = q.filter(
WhaleContainer.start_time >=
datetime.datetime.now() - datetime.timedelta(seconds=timeout)
)
q = q.slice(page_start, page_end)
return q.all()
@staticmethod
def get_all_alive_container_count():
timeout = int(get_config("whale:docker_timeout", "3600"))
q = db.session.query(WhaleContainer)
q = q.filter(
WhaleContainer.start_time >=
datetime.datetime.now() - datetime.timedelta(seconds=timeout)
)
return q.count()
class DBRedirectTemplate:
@staticmethod
def get_all_templates():
return WhaleRedirectTemplate.query.all()
@staticmethod
def create_template(name, access_template, frp_template):
if WhaleRedirectTemplate.query.filter_by(key=name).first():
return # already existed
db.session.add(WhaleRedirectTemplate(
name, access_template, frp_template
))
db.session.commit()
@staticmethod
def delete_template(name):
WhaleRedirectTemplate.query.filter_by(key=name).delete()
db.session.commit()
202
utils/docker.py Normal file
View File
@ -0,0 +1,202 @@
import json
import random
import uuid
from collections import OrderedDict
import docker
from flask import current_app
from CTFd.utils import get_config
from .cache import CacheProvider
from .exceptions import WhaleError
def get_docker_client():
if get_config("whale:docker_use_ssl", False):
tls_config = docker.tls.TLSConfig(
verify=True,
ca_cert=get_config("whale:docker_ssl_ca_cert") or None,
client_cert=(
get_config("whale:docker_ssl_client_cert"),
get_config("whale:docker_ssl_client_key")
),
)
return docker.DockerClient(
base_url=get_config("whale:docker_api_url"),
tls=tls_config,
)
else:
return docker.DockerClient(base_url=get_config("whale:docker_api_url"))
class DockerUtils:
@staticmethod
def init():
try:
DockerUtils.client = get_docker_client()
# docker-py is thread safe: https://github.com/docker/docker-py/issues/619
except Exception:
raise WhaleError(
'Docker Connection Error\n'
'Please ensure the docker api url (first config item) is correct\n'
'if you are using unix:///var/run/docker.sock, check if the socket is correctly mapped'
)
credentials = get_config("whale:docker_credentials")
if credentials and credentials.count(':') == 1:
try:
DockerUtils.client.login(*credentials.split(':'))
except Exception:
raise WhaleError('docker.io failed to login, check your credentials')
@staticmethod
def add_container(container):
if container.challenge.docker_image.startswith("{"):
DockerUtils._create_grouped_container(DockerUtils.client, container)
else:
DockerUtils._create_standalone_container(DockerUtils.client, container)
@staticmethod
def _create_standalone_container(client, container):
dns = get_config("whale:docker_dns", "").split(",")
node = DockerUtils.choose_node(
container.challenge.docker_image,
get_config("whale:docker_swarm_nodes", "").split(",")
)
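        # create a swarm service pinned to the chosen node; 'dnsrr' endpoint
        # mode bypasses the ingress load balancer, so the service is reachable
        # only through the attached overlay network (where frpc lives)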
client.services.create(
image=container.challenge.docker_image,
name=f'{container.user_id}-{container.uuid}',
env={'FLAG': container.flag}, dns_config=docker.types.DNSConfig(nameservers=dns),
networks=[get_config("whale:docker_auto_connect_network", "ctfd_frp-containers")],
resources=docker.types.Resources(
mem_limit=DockerUtils.convert_readable_text(
container.challenge.memory_limit),
cpu_limit=int(container.challenge.cpu_limit * 1e9)
),
labels={
'whale_id': f'{container.user_id}-{container.uuid}'
}, # for container deletion
constraints=['node.labels.name==' + node],
endpoint_spec=docker.types.EndpointSpec(mode='dnsrr', ports={})
)
@staticmethod
def _create_grouped_container(client, container):
range_prefix = CacheProvider(app=current_app).get_available_network_range()
ipam_pool = docker.types.IPAMPool(subnet=range_prefix)
ipam_config = docker.types.IPAMConfig(
driver='default', pool_configs=[ipam_pool])
network_name = f'{container.user_id}-{container.uuid}'
network = client.networks.create(
network_name, internal=True,
ipam=ipam_config, attachable=True,
labels={'prefix': range_prefix},
driver="overlay", scope="swarm"
)
dns = []
containers = get_config("whale:docker_auto_connect_containers", "").split(",")
for c in containers:
if not c:
continue
network.connect(c)
if "dns" in c:
network.reload()
for name in network.attrs['Containers']:
if network.attrs['Containers'][name]['Name'] == c:
dns.append(network.attrs['Containers'][name]['IPv4Address'].split('/')[0])
has_processed_main = False
try:
images = json.loads(
container.challenge.docker_image,
object_pairs_hook=OrderedDict
)
except json.JSONDecodeError:
raise WhaleError(
"Challenge Image Parse Error\n"
"plase check the challenge image string"
)
for name, image in images.items():
if has_processed_main:
container_name = f'{container.user_id}-{uuid.uuid4()}'
else:
container_name = f'{container.user_id}-{container.uuid}'
node = DockerUtils.choose_node(image, get_config("whale:docker_swarm_nodes", "").split(","))
has_processed_main = True
client.services.create(
image=image, name=container_name, networks=[
docker.types.NetworkAttachmentConfig(network_name, aliases=[name])
],
env={'FLAG': container.flag},
dns_config=docker.types.DNSConfig(nameservers=dns),
resources=docker.types.Resources(
mem_limit=DockerUtils.convert_readable_text(
container.challenge.memory_limit
),
cpu_limit=int(container.challenge.cpu_limit * 1e9)),
labels={
'whale_id': f'{container.user_id}-{container.uuid}'
}, # for container deletion
hostname=name, constraints=['node.labels.name==' + node],
endpoint_spec=docker.types.EndpointSpec(mode='dnsrr', ports={})
)
@staticmethod
def remove_container(container):
whale_id = f'{container.user_id}-{container.uuid}'
for s in DockerUtils.client.services.list(filters={'label': f'whale_id={whale_id}'}):
s.remove()
networks = DockerUtils.client.networks.list(names=[whale_id])
if len(networks) > 0: # is grouped containers
auto_containers = get_config("whale:docker_auto_connect_containers", "").split(",")
redis_util = CacheProvider(app=current_app)
for network in networks:
for container in auto_containers:
try:
network.disconnect(container, force=True)
except Exception:
pass
redis_util.add_available_network_range(network.attrs['Labels']['prefix'])
network.remove()
@staticmethod
def convert_readable_text(text):
lower_text = text.lower()
if lower_text.endswith("k"):
return int(text[:-1]) * 1024
if lower_text.endswith("m"):
return int(text[:-1]) * 1024 * 1024
if lower_text.endswith("g"):
return int(text[:-1]) * 1024 * 1024 * 1024
return 0
@staticmethod
def choose_node(image, nodes):
win_nodes = []
linux_nodes = []
for node in nodes:
if node.startswith("windows"):
win_nodes.append(node)
else:
linux_nodes.append(node)
try:
tag = image.split(":")[1:]
if len(tag) and tag[0].startswith("windows"):
return random.choice(win_nodes)
return random.choice(linux_nodes)
except IndexError:
raise WhaleError(
'No Suitable Nodes.\n'
'If you are using Whale for the first time, \n'
                'Please Set Up Swarm Nodes Correctly and Label Them with\n'
'docker node update --label-add "name=linux-1" $(docker node ls -q)'
)
8
utils/exceptions.py Normal file
View File
@ -0,0 +1,8 @@
class WhaleError(Exception):
def __init__(self, msg):
super().__init__(msg)
self.message = msg
class WhaleWarning(Warning):
pass
34
utils/routers/__init__.py Normal file
View File
@ -0,0 +1,34 @@
from CTFd.utils import get_config
from .frp import FrpRouter
from .trp import TrpRouter
_routers = {
'frp': FrpRouter,
'trp': TrpRouter,
}
def instanciate(cls):
return cls()
@instanciate
class Router:
_name = ''
_router = None
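    # proxy: re-instantiates the concrete router whenever whale:router_type
    # changes, then forwards attribute access to it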
def __getattr__(self, name: str):
router_conftype = get_config("whale:router_type", "frp")
if Router._name != router_conftype:
Router._router = _routers[router_conftype]()
Router._name = router_conftype
return getattr(Router._router, name)
@staticmethod
def reset():
Router._name = ''
Router._router = None
__all__ = ["Router"]
25
utils/routers/base.py Normal file
View File
@ -0,0 +1,25 @@
import typing
from ...models import WhaleContainer
class BaseRouter:
name = None
def __init__(self):
pass
def access(self, container: WhaleContainer):
pass
def register(self, container: WhaleContainer):
pass
def unregister(self, container: WhaleContainer):
pass
def reload(self):
pass
def check_availability(self) -> typing.Tuple[bool, str]:
pass
132
utils/routers/frp.py Normal file
View File
@ -0,0 +1,132 @@
import warnings
from flask import current_app
from requests import session, RequestException
from CTFd.models import db
from CTFd.utils import get_config, set_config, logging
from .base import BaseRouter
from ..cache import CacheProvider
from ..db import DBContainer
from ..exceptions import WhaleError, WhaleWarning
from ...models import WhaleContainer
class FrpRouter(BaseRouter):
name = "frp"
types = {
'direct': 'tcp',
'http': 'http',
}
class FrpRule:
def __init__(self, name, config):
self.name = name
self.config = config
def __str__(self) -> str:
return f'[{self.name}]\n' + '\n'.join(f'{k} = {v}' for k, v in self.config.items())
def __init__(self):
super().__init__()
self.ses = session()
self.url = get_config("whale:frp_api_url").rstrip("/")
self.common = ''
try:
CacheProvider(app=current_app).init_port_sets()
except Exception:
warnings.warn(
"cache initialization failed",
WhaleWarning
)
def reload(self, exclude=None):
rules = []
for container in DBContainer.get_all_alive_container():
if container.uuid == exclude:
continue
name = f'{container.challenge.redirect_type}_{container.user_id}_{container.uuid}'
config = {
'type': self.types[container.challenge.redirect_type],
'local_ip': f'{container.user_id}-{container.uuid}',
'local_port': container.challenge.redirect_port,
'use_compression': 'true',
}
if config['type'] == 'http':
config['subdomain'] = container.http_subdomain
elif config['type'] == 'tcp':
config['remote_port'] = container.port
rules.append(self.FrpRule(name, config))
try:
if not self.common:
common = get_config("whale:frp_config_template", '')
if '[common]' in common:
self.common = common
else:
remote = self.ses.get(f'{self.url}/api/config')
assert remote.status_code == 200
set_config("whale:frp_config_template", remote.text)
self.common = remote.text
config = self.common + '\n' + '\n'.join(str(r) for r in rules)
assert self.ses.put(
f'{self.url}/api/config', config, timeout=5
).status_code == 200
assert self.ses.get(
f'{self.url}/api/reload', timeout=5
).status_code == 200
except (RequestException, AssertionError) as e:
raise WhaleError(
'\nfrpc request failed\n' +
(f'{e}\n' if str(e) else '') +
'please check the frp related configs'
) from None
def access(self, container: WhaleContainer):
if container.challenge.redirect_type == 'direct':
return f'nc {get_config("whale:frp_direct_ip_address", "127.0.0.1")} {container.port}'
elif container.challenge.redirect_type == 'http':
host = get_config("whale:frp_http_domain_suffix", "")
port = get_config("whale:frp_http_port", "80")
host += f':{port}' if port != 80 else ''
return f'<a target="_blank" href="http://{container.http_subdomain}.{host}/">Link to the Challenge</a>'
return ''
def register(self, container: WhaleContainer):
if container.challenge.redirect_type == 'direct':
if not container.port:
port = CacheProvider(app=current_app).get_available_port()
if not port:
return False, 'No available ports. Please wait for a few minutes.'
container.port = port
db.session.commit()
elif container.challenge.redirect_type == 'http':
# config['subdomain'] = container.http_subdomain
pass
self.reload()
return True, 'success'
def unregister(self, container: WhaleContainer):
if container.challenge.redirect_type == 'direct':
try:
redis_util = CacheProvider(app=current_app)
redis_util.add_available_port(container.port)
except Exception as e:
logging.log(
'whale', 'Error deleting port from cache',
name=container.user.name,
challenge_id=container.challenge_id,
)
return False, 'Error deleting port from cache'
self.reload(exclude=container.uuid)
return True, 'success'
def check_availability(self):
try:
resp = self.ses.get(f'{self.url}/api/status', timeout=2.0)
except RequestException as e:
return False, 'Unable to access frpc admin api'
if resp.status_code == 401:
return False, 'frpc admin api unauthorized'
return True, 'Available'
69
utils/routers/trp.py Normal file
View File
@ -0,0 +1,69 @@
import traceback
from requests import session, RequestException, HTTPError
from CTFd.utils import get_config
from .base import BaseRouter
from ..db import DBContainer, WhaleContainer
class TrpRouter(BaseRouter):
name = "trp"
def __init__(self):
super().__init__()
self.ses = session()
self.url = get_config('whale:trp_api_url', '').rstrip("/")
self.common = ''
for container in DBContainer.get_all_alive_container():
self.register(container)
@staticmethod
def get_domain(container: WhaleContainer):
domain = get_config('whale:trp_domain_suffix', '127.0.0.1.nip.io').lstrip('.')
domain = f'{container.uuid}.{domain}'
return domain
def access(self, container: WhaleContainer):
ch_type = container.challenge.redirect_type
domain = self.get_domain(container)
port = get_config('whale:trp_listening_port', 1443)
if ch_type == 'direct':
return f'from pwn import *<br>remote("{domain}", {port}, ssl=True).interactive()'
elif ch_type == 'http':
return f'https://{domain}' + (f':{port}' if port != 443 else '')
else:
return f'[ssl] {domain} {port}'
def register(self, container: WhaleContainer):
try:
resp = self.ses.post(f'{self.url}/rule/{self.get_domain(container)}', json={
'target': f'{container.user_id}-{container.uuid}:{container.challenge.redirect_port}',
'source': None,
})
resp.raise_for_status()
return True, 'success'
except HTTPError as e:
return False, e.response.text
except RequestException as e:
print(traceback.format_exc())
return False, 'unable to access trp Api'
def unregister(self, container: WhaleContainer):
try:
resp = self.ses.delete(f'{self.url}/rule/{self.get_domain(container)}')
resp.raise_for_status()
return True, 'success'
except HTTPError as e:
return False, e.response.text
except RequestException as e:
print(traceback.format_exc())
return False, 'unable to access trp Api'
def check_availability(self):
try:
resp = self.ses.get(f'{self.url}/rules').json()
except RequestException as e:
return False, 'Unable to access trp admin api'
except Exception as e:
return False, 'Unknown trp error'
return True, 'Available'
60
utils/setup.py Normal file
View File
@ -0,0 +1,60 @@
from CTFd.utils import set_config
from ..models import WhaleRedirectTemplate, db
def setup_default_configs():
for key, val in {
'setup': 'true',
'docker_api_url': 'unix:///var/run/docker.sock',
'docker_credentials': '',
'docker_dns': '127.0.0.1',
'docker_max_container_count': '100',
'docker_max_renew_count': '5',
'docker_subnet': '174.1.0.0/16',
'docker_subnet_new_prefix': '24',
'docker_swarm_nodes': 'linux-1',
'docker_timeout': '3600',
'frp_api_url': 'http://frpc:7400',
'frp_http_port': '8080',
'frp_http_domain_suffix': '127.0.0.1.nip.io',
'frp_direct_port_maximum': '10100',
'frp_direct_port_minimum': '10000',
'template_http_subdomain': '{{ container.uuid }}',
'template_chall_flag': '{{ "flag{"+uuid.uuid4()|string+"}" }}',
}.items():
set_config('whale:' + key, val)
db.session.add(WhaleRedirectTemplate(
'http',
'http://{{ container.http_subdomain }}.'
'{{ get_config("whale:frp_http_domain_suffix", "") }}'
'{% if get_config("whale:frp_http_port", "80") != 80 %}:{{ get_config("whale:frp_http_port") }}{% endif %}/',
'''
[http_{{ container.user_id|string }}-{{ container.uuid }}]
type = http
local_ip = {{ container.user_id|string }}-{{ container.uuid }}
local_port = {{ container.challenge.redirect_port }}
subdomain = {{ container.http_subdomain }}
use_compression = true
'''
))
db.session.add(WhaleRedirectTemplate(
'direct',
'nc {{ get_config("whale:frp_direct_ip_address", "127.0.0.1") }} {{ container.port }}',
'''
[direct_{{ container.user_id|string }}-{{ container.uuid }}]
type = tcp
local_ip = {{ container.user_id|string }}-{{ container.uuid }}
local_port = {{ container.challenge.redirect_port }}
remote_port = {{ container.port }}
use_compression = true
[direct_{{ container.user_id|string }}-{{ container.uuid }}_udp]
type = udp
local_ip = {{ container.user_id|string }}-{{ container.uuid }}
local_port = {{ container.challenge.redirect_port }}
remote_port = {{ container.port }}
use_compression = true
'''
))
db.session.commit()