Compare commits


218 commits
1.0 ... master

Author SHA1 Message Date
esrrhs
66d34ba031
Merge pull request #60 from esrrhs/esrrhs-patch-1
Update go.yml
2023-11-02 20:23:13 +08:00
esrrhs
a350629abc
Update go.yml 2023-11-02 20:22:58 +08:00
esrrhs
5d53ddeb7a update 2023-11-02 20:22:15 +08:00
esrrhs
933b646d98
Merge pull request #59 from esrrhs/dependabot/go_modules/golang.org/x/net-0.17.0
Bump golang.org/x/net from 0.8.0 to 0.17.0
2023-10-12 09:19:58 +08:00
dependabot[bot]
404ea744fe
Bump golang.org/x/net from 0.8.0 to 0.17.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.8.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.8.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 23:35:00 +00:00
esrrhs
b30676729c
Merge pull request #58 from jiqing112/master
Update the -key description in main.go
2023-06-19 15:34:16 +08:00
jiqing112
6ab38e9d01
Update the -key description in main.go
Update the -key description in main.go
2023-06-19 14:40:06 +08:00
esrrhs
19d00b970e
Merge pull request #57 from jiqing112/patch-4
Another whitespace error; this time it should be fixed
2023-04-01 16:11:44 +08:00
jiqing112
3d001c21f0
Another whitespace error; this time it should be fixed 2023-04-01 15:48:50 +08:00
esrrhs
3c2bcf9bc9
Merge pull request #56 from jiqing112/patch-3
Whitespace error in the docker command
2023-04-01 15:37:50 +08:00
jiqing112
a4d7ee02d9
Whitespace error in the docker command 2023-04-01 14:53:46 +08:00
esrrhs
327cf680c7
Merge pull request #55 from jiqing112/patch-2
Update README.md
2023-04-01 08:11:05 +08:00
esrrhs
cab02d012b
Merge pull request #54 from jiqing112/patch-1
Update README.md
2023-04-01 08:10:35 +08:00
jiqing112
8824c495f5
Update README.md
Clarify correct usage of the -key parameter, so users do not mistake "-key" for an arbitrary string password
2023-03-31 13:53:43 +08:00
jiqing112
1c79a8db64
Update README.md
An example command had a misplaced space, which makes the arguments fail to parse and the command unrunnable.
2023-03-31 13:38:40 +08:00
esrrhs
7af41a7723 update 2023-03-18 16:43:16 +08:00
esrrhs
aa216a7d72 change 2023-03-18 13:35:27 +08:00
esrrhs
520cdd7063 move dir 2023-03-18 13:32:35 +08:00
esrrhs
f8d9ed6b5e Merge branch 'master' of https://github.com/esrrhs/pingtunnel 2023-02-22 21:45:45 +08:00
esrrhs
e816385534 upadte 2023-02-22 21:45:34 +08:00
esrrhs
4a02bd6270
Update README.md 2023-02-22 21:42:13 +08:00
esrrhs
5163a571dc add 2023-02-22 21:41:13 +08:00
esrrhs
7187a57ceb
Create docker-image.yml 2023-02-22 21:36:59 +08:00
esrrhs
5f81102814 update 2023-02-22 21:36:00 +08:00
esrrhs
94a9dc1eed
Create go.yml 2023-02-22 21:32:12 +08:00
esrrhs
fe571e8e4e add 2023-02-22 21:30:11 +08:00
esrrhs
93ab736d55 add 2023-02-22 21:29:12 +08:00
esrrhs
a1a1f0462a delete 2022-10-06 21:57:54 +08:00
esrrhs
7970b38a68 delete 2022-10-06 21:56:15 +08:00
esrrhs
8d63ad5334
Update README.md 2022-10-06 21:51:40 +08:00
esrrhs
219755bd5c
Update README.md 2022-10-06 21:42:02 +08:00
esrrhs
622f631bbd
Update README.md 2022-07-05 11:58:31 +08:00
esrrhs
4422d7c1d0
Create docker-image.yml 2022-06-14 11:45:27 +08:00
esrrhs
cb246793e6
Update README.md 2022-01-27 15:16:25 +08:00
esrrhs
cd1be4debf
Update README.md 2022-01-27 15:16:04 +08:00
esrrhs
856300a446
Update README.md 2021-12-20 10:49:26 +08:00
esrrhs
ec19b924c8
Update README.md 2021-12-20 10:49:13 +08:00
esrrhs
ef8486c4eb
Update README.md 2021-11-16 12:20:44 +08:00
benderzhao
60e4971fdf Merge branch 'master' of https://github.com/esrrhs/pingtunnel 2021-11-16 12:19:54 +08:00
benderzhao
59dc4b4f79 add 2021-11-16 12:19:44 +08:00
esrrhs
4b22d08d0b
Update go.yml 2021-09-28 16:34:29 +08:00
benderzhao
31ebcc8bf8 add 2021-09-28 16:33:39 +08:00
zhao xin
3d75970db9
Delete qtrun.jpg 2021-07-05 11:05:48 +08:00
zhao xin
be7f7650a4
Update README_EN.md 2021-07-05 11:05:31 +08:00
zhao xin
dff22bc62c
Update README.md 2021-07-05 11:05:05 +08:00
zhao xin
f9bf4076f2
Update README_EN.md 2021-06-19 00:51:21 +08:00
zhao xin
e1d5c970e0
Update README.md 2021-06-19 00:50:53 +08:00
zhao xin
f3a608957a Set theme jekyll-theme-cayman 2021-05-07 12:19:39 +08:00
esrrhs
c7700e858f add 2021-04-22 13:24:02 +08:00
esrrhs
1632aa4389 add 2021-04-21 21:08:30 +08:00
esrrhs
86a7340189 Merge remote-tracking branch 'origin/master'
# Conflicts:
#	pack.sh
2021-04-21 21:06:42 +08:00
esrrhs
780f75ae7c add 2021-04-21 21:06:22 +08:00
zhao xin
2eab964f13
Update README_EN.md 2021-04-20 13:23:14 +08:00
zhao xin
dd70313cf5
Update README.md 2021-04-20 13:23:00 +08:00
esrrhs
d04d249538 add 2021-04-20 11:06:00 +08:00
esrrhs
174fe4d7a2 add 2021-04-19 22:22:45 +08:00
esrrhs
6e8c975859 Merge remote-tracking branch 'origin/master' 2021-04-19 22:05:22 +08:00
esrrhs
74f6108315 add 2021-04-19 22:04:37 +08:00
zhao xin
e746dc3616
Update README_EN.md 2021-04-01 11:50:28 +08:00
zhao xin
cd21d586ae
Update README.md 2021-04-01 11:49:07 +08:00
zhao xin
41ccdf6d7f
Update Dockerfile 2021-03-18 11:22:13 +08:00
zhao xin
9e8e9eb535
Merge pull request #51 from sajad-sadra/master
easy deploy with docker-compose
2021-02-24 11:04:44 +08:00
sajad sadrayieh
24fa2404dd a basic documentation for easy deploy 2021-02-23 16:46:12 +03:30
sajad sadrayieh
122143529e make configs reads from env file 2021-02-23 16:39:19 +03:30
sajad sadrayieh
bbee9243e8 [ADD] client compose file 2021-02-23 16:33:36 +03:30
sajad sadrayieh
847d475cc2 [ADD] server compose file 2021-02-23 16:30:21 +03:30
zhao xin
136db8d776
Merge pull request #48 from phanirithvij/master
fix some markdown issues in the readme
2020-12-23 10:44:42 +08:00
phanirithvij
7f8d5fa390 fix more 2020-12-23 08:14:27 +05:30
phanirithvij
8db60694b6 fix new issues 2020-12-23 08:08:49 +05:30
phanirithvij
49d1af71b4 fix some markdown issues in the readme
To improve coverage
2020-12-23 07:41:06 +05:30
zhao xin
6fc899c759
Update README_EN.md 2020-10-31 23:12:08 +08:00
zhao xin
7454c250dc
Update README.md 2020-10-31 23:11:29 +08:00
zhao xin
ad69639117
Update README.md 2020-10-31 23:11:15 +08:00
zhao xin
fd4c4ebb47
Update README_EN.md 2020-10-31 23:10:21 +08:00
zhao xin
db05e85500
Update README.md 2020-10-31 23:09:55 +08:00
zhao xin
6b7bc8e876
Update README_EN.md 2020-10-31 23:08:41 +08:00
zhao xin
395a890632
Update README.md 2020-10-31 23:07:13 +08:00
zhao xin
0d6b836173
Update README.md 2020-10-31 23:06:46 +08:00
zhao xin
1c51495b9a
Update README.md 2020-10-31 23:03:22 +08:00
zhao xin
76deebb4b0
Update README_EN.md 2020-10-14 20:21:46 +08:00
zhao xin
a3209179a0
Update README.md 2020-10-14 20:21:08 +08:00
zhao xin
f365e94395
Update README_EN.md 2020-09-30 17:39:13 +08:00
esrrhs
aa88696c37 add 2020-09-03 09:36:44 +08:00
esrrhs
e7b0c16282 add 2020-09-03 09:26:40 +08:00
zhao xin
8be6b166fc
Update README_EN.md 2020-07-06 17:25:21 +08:00
zhao xin
0bfccb395e
Update README.md 2020-07-06 17:24:59 +08:00
zhao xin
6c35c96929
Merge pull request #33 from honwen/master
Ensure static linked binary
2020-06-28 12:33:08 +08:00
hwchan
fc68e6449c Ensure static linked binary 2020-06-28 11:14:11 +08:00
zhao xin
b899f65b5a
Update README_EN.md 2020-06-27 16:22:06 +08:00
zhao xin
1ac3aabd08
Update README.md 2020-06-27 16:21:45 +08:00
esrrhs
a575babaa7 add 2020-05-31 20:05:06 +08:00
zhao xin
f2d1e1a5b3
Update README_EN.md 2020-05-25 11:21:15 +08:00
zhao xin
6fa90c210b
Update README.md 2020-05-25 11:20:13 +08:00
zhao xin
5a61d67923
Update README_EN.md 2020-05-20 15:13:01 +08:00
zhao xin
2de5b16e2e
Update README.md 2020-05-20 15:12:37 +08:00
zhao xin
29b3f73ada
Update README.md 2020-05-20 15:11:35 +08:00
zhao xin
112fb03249
Update README.md 2020-05-20 14:59:55 +08:00
esrrhs
a9848f254a add 2020-05-08 19:56:38 +08:00
zhao xin
de0b4c2730
Update README_EN.md 2020-05-07 18:16:51 +08:00
zhao xin
d038899851
Update README.md 2020-05-07 18:16:33 +08:00
zhao xin
0c6be88983
Update README_EN.md 2020-05-07 18:14:40 +08:00
zhao xin
f51245fda4
Delete test.png 2020-05-07 18:12:48 +08:00
zhao xin
f97c162f3f
Update README.md 2020-05-07 18:12:24 +08:00
esrrhs
fd8c39df04 add 2020-05-01 15:21:30 +08:00
esrrhs
b8b216c115 add 2020-05-01 13:23:35 +08:00
zhao xin
9fcaf84ac3
Update README.md 2020-04-30 10:35:45 +08:00
zhao xin
db7551731a
Update README_EN.md 2020-04-23 10:37:34 +08:00
zhao xin
c71b537898
Update README.md 2020-04-23 10:35:45 +08:00
zhao xin
7b90bc40a6
Update go.yml 2020-04-02 11:31:14 +08:00
esrrhs
5c313bff4e add
pingtunnel_windows64.zip
2020-03-30 10:04:42 +08:00
esrrhs
5b52521521 add 2020-03-07 16:40:15 +08:00
esrrhs
a797fd5cc7 add 2020-03-03 09:13:08 +08:00
zhao xin
a923fed6fc
Update README.md 2020-03-01 21:43:57 +08:00
esrrhs
1dc3fe11be Merge remote-tracking branch 'origin/master' 2020-03-01 09:28:13 +08:00
esrrhs
f988b2c483 add 2020-03-01 09:28:02 +08:00
esrrhs
0a18ded9cc add 2020-03-01 08:25:57 +08:00
esrrhs
dd04aa9b14 add 2020-03-01 00:34:45 +08:00
esrrhs
6913d315da add 2020-02-29 22:44:45 +08:00
zhao xin
b1f6acb659
Update README.md 2020-02-29 12:46:34 +08:00
zhao xin
3b6f4eebcf
Update README_EN.md 2020-02-28 16:13:52 +08:00
zhao xin
32ee72b850
Update README_EN.md 2020-02-28 16:12:18 +08:00
zhao xin
d7f3c68d89
Update README.md 2020-02-28 16:08:03 +08:00
esrrhs
9b26464d10 add 2020-02-27 21:40:09 +08:00
zhao xin
d9ebc5c9f6
Update README.md 2020-02-26 11:20:07 +08:00
zhao xin
aaff23309f
Update README.md 2020-02-25 20:55:19 +08:00
esrrhs
ba73a631a6 add 2020-02-23 16:52:28 +08:00
esrrhs
610b029dd9 Merge remote-tracking branch 'origin/master' 2020-02-23 16:48:12 +08:00
esrrhs
2d3ed37feb add 2020-02-23 16:47:41 +08:00
zhao xin
2734daddc6
Update README_EN.md 2020-02-21 12:57:44 +08:00
zhao xin
fc865f754d
Update README.md 2020-02-21 12:57:00 +08:00
zhao xin
2b1abdf783
Update README.md 2020-02-19 22:08:50 +08:00
zhao xin
408c939da4
Update README.md 2020-02-19 22:01:35 +08:00
zhao xin
0ad2ad5fca
Update README.md 2020-02-19 21:18:07 +08:00
zhao xin
9917c5bdcb Update Dockerfile 2020-02-10 13:14:11 +08:00
zhao xin
41e144ceca
Update README_EN.md 2020-02-01 18:45:15 +08:00
zhao xin
ff392de1f1
Update README.md 2020-02-01 18:44:42 +08:00
zhao xin
44a1a5853d
Update README.md 2020-01-29 18:04:40 +08:00
zhao xin
43f40eda1b
Update README_EN.md 2020-01-08 11:22:51 +08:00
zhao xin
1d67562804
Update README.md 2020-01-08 11:22:40 +08:00
esrrhs
b4f7e69f6b add 2020-01-08 10:49:48 +08:00
esrrhs
aa046d8a2f add 2020-01-08 10:42:49 +08:00
esrrhs
d4731262db add 2020-01-08 10:38:33 +08:00
esrrhs
852c0b1761 Merge remote-tracking branch 'origin/master' 2020-01-08 10:36:08 +08:00
esrrhs
a33439b013 add 2020-01-08 10:35:57 +08:00
esrrhs
ba20b02358 add 2020-01-08 10:33:48 +08:00
esrrhs
00dfd7246b add 2020-01-08 10:33:02 +08:00
esrrhs
c79b7a2cea add 2020-01-07 22:10:28 +08:00
esrrhs
e0c5fd7b2f add 2020-01-07 19:39:46 +08:00
esrrhs
c66a9189eb Merge remote-tracking branch 'origin/master' 2020-01-07 19:36:03 +08:00
esrrhs
0f3d847962 add 2020-01-07 19:34:23 +08:00
zhao xin
97866a3486
Update README_EN.md 2020-01-07 18:06:41 +08:00
zhao xin
52337521dd
Update README.md 2020-01-07 18:06:15 +08:00
zhao xin
74c53223b0
Update README_EN.md 2020-01-06 10:42:53 +08:00
zhao xin
8954af9416
Update README.md 2020-01-06 10:42:34 +08:00
zhao xin
e5831b152c
Update README.md 2020-01-04 20:02:45 +08:00
esrrhs
85c558a677 add 2020-01-04 20:01:26 +08:00
zhao xin
c5a6c80561
Update README.md 2020-01-04 18:55:03 +08:00
zhao xin
e16bff920e
Update README.md 2020-01-04 18:53:10 +08:00
zhao xin
7b9113cd38
Add files via upload 2020-01-04 18:52:47 +08:00
zhao xin
57b58f555d
Update README.md 2020-01-04 18:52:30 +08:00
zhao xin
5eb78f2802
Update README.md 2020-01-04 18:30:28 +08:00
zhao xin
abd6db0ac5
Update README.md 2020-01-03 09:03:49 +08:00
zhao xin
8ba207626e
Update README.md 2020-01-01 16:54:54 +08:00
zhao xin
9e0ae35149
Update README.md 2020-01-01 16:33:40 +08:00
zhao xin
321517df80
Update README.md 2020-01-01 16:25:02 +08:00
esrrhs
ddea3a66bc Merge remote-tracking branch 'origin/master' 2020-01-01 16:02:03 +08:00
esrrhs
fd4b189c6f add 2020-01-01 16:01:50 +08:00
zhao xin
fa445c7d89
Update README.md 2019-12-27 12:24:58 +08:00
zhao xin
371dd4baa0
Update Dockerfile 2019-12-27 11:37:14 +08:00
zhao xin
9e62422db6
Create Dockerfile 2019-12-27 11:03:03 +08:00
zhao xin
1a994d448a
Update go.yml 2019-12-27 10:54:11 +08:00
zhao xin
0d61b60eaa
Update go.yml 2019-12-27 10:51:20 +08:00
zhao xin
d11d7c4b87
Update README.md 2019-12-27 10:45:40 +08:00
zhao xin
4986c989e3 Set theme jekyll-theme-hacker 2019-12-05 14:45:26 +08:00
zhao xin
c2ff8fa632
Create go.yml 2019-11-14 12:08:18 +08:00
esrrhs
0b0a9cdc6a add 2019-11-13 16:54:55 +08:00
esrrhs
8651c222c4 add 2019-11-12 17:11:30 +08:00
zhao xin
3050373508
Update README.md 2019-11-05 21:15:41 +08:00
zhao xin
1b02df4a4d
Update README.md 2019-11-05 21:09:36 +08:00
zhao xin
69273a94cc
Update README.md 2019-11-05 11:07:28 +08:00
esrrhs
524fdb836d add 2019-11-05 09:52:33 +08:00
esrrhs
bf6270387e add 2019-11-05 09:23:52 +08:00
esrrhs
91de7cb8f0 add 2019-11-05 09:07:14 +08:00
esrrhs
cb7d489988 add 2019-11-04 21:56:07 +08:00
esrrhs
728c2705b2 add 2019-11-02 15:50:54 +08:00
esrrhs
a70fcdff74 add 2019-11-01 23:54:44 +08:00
esrrhs
afa564c81d add 2019-11-01 23:51:49 +08:00
esrrhs
33b164e63f add 2019-11-01 23:22:24 +08:00
esrrhs
8fb7712b54 add 2019-11-01 23:18:57 +08:00
esrrhs
599fb597e8 add 2019-11-01 23:13:59 +08:00
esrrhs
18fa88a798 Merge branch 'master' of https://github.com/esrrhs/pingtunnel 2019-11-01 23:13:05 +08:00
esrrhs
a0a5ef06fb add 2019-11-01 22:38:09 +08:00
zhao xin
97ac0742b6
Update README.md 2019-11-01 21:53:14 +08:00
zhao xin
9665e03f6f
Update README.md 2019-11-01 21:50:45 +08:00
esrrhs
ae593ddeb2 add 2019-11-01 21:12:37 +08:00
esrrhs
db35790150 add 2019-11-01 21:10:32 +08:00
esrrhs
d84320fffb add 2019-11-01 20:54:55 +08:00
esrrhs
5c0c08b7f3 add 2019-11-01 20:52:18 +08:00
esrrhs
1657f46784 add 2019-11-01 18:48:08 +08:00
esrrhs
30b1cd117b add 2019-11-01 18:44:15 +08:00
esrrhs
8d1d2bd6a2 add 2019-10-31 21:55:40 +08:00
esrrhs
0ba2aa2297 add 2019-10-31 21:48:32 +08:00
esrrhs
2a75a80a0e add 2019-10-31 21:18:42 +08:00
esrrhs
64f8ec24e6 add 2019-10-31 21:16:14 +08:00
esrrhs
831a5ee51d add 2019-10-31 21:05:48 +08:00
esrrhs
a71309c8cc add 2019-10-31 20:30:38 +08:00
zhao xin
d25f8499f6
Update README.md 2019-10-30 20:40:13 +08:00
zhao xin
08f0f61840
Update README.md 2019-10-30 20:39:56 +08:00
esrrhs
f323941276 add 2019-10-30 20:15:43 +08:00
esrrhs
160d6efc89 add 2019-10-30 19:45:19 +08:00
esrrhs
a4573f1540 add 2019-10-30 19:28:00 +08:00
esrrhs
a43e5c435f add 2019-10-30 19:14:27 +08:00
esrrhs
9b86a95dfc Merge remote-tracking branch 'origin/master' 2019-10-30 18:57:40 +08:00
esrrhs
c348cfb2e9 add 2019-10-30 18:57:26 +08:00
esrrhs
4bfa31eea1 add 2019-10-29 20:39:12 +08:00
esrrhs
dc91b48b34 add 2019-10-28 19:48:50 +08:00
esrrhs
4bb047d433 add 2019-10-28 19:45:12 +08:00
zhao xin
6c7341375b
Update README.md 2019-10-28 17:13:00 +08:00
25 changed files with 1082 additions and 1538 deletions

34
.github/workflows/docker-image.yml vendored Normal file

@@ -0,0 +1,34 @@
name: Docker Image CI
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Set up QEMU
uses: docker/setup-qemu-action@v1
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
-
name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
tags: esrrhs/pingtunnel:latest

30
.github/workflows/go.yml vendored Normal file

@@ -0,0 +1,30 @@
# This workflow will build a golang project
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-go
name: Go
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.21
- name: Build
run: |
go mod tidy
go build -v ./...
- name: Test
run: go test -v ./...

7
.idea/vcs.xml generated

@@ -1,7 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="VcsDirectoryMappings">
<mapping directory="$PROJECT_DIR$" vcs="Git" />
<mapping directory="$PROJECT_DIR$/src/github.com/esrrhs/pingtunnel" vcs="Git" />
</component>
</project>

13
Dockerfile Normal file

@@ -0,0 +1,13 @@
FROM golang AS build-env
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . ./
RUN go build -v -o pingtunnel
FROM debian
COPY --from=build-env /app/pingtunnel .
COPY GeoLite2-Country.mmdb .
WORKDIR ./

BIN
GeoLite2-Country.mmdb Normal file (binary file not shown)

134
README.md

@@ -1,98 +1,78 @@
# Pingtunnel
pingtunnel是把tcp/udp/sock5流量伪装成icmp流量进行转发的工具。用于突破网络封锁或是绕过WIFI网络的登陆验证或是在某些网络加快网络传输速度。
<br />Pingtunnel is a tool that disguises tcp/udp/sock5 traffic as icmp traffic for forwarding. It can be used to break through network blockades, bypass WiFi login verification, or speed up transmission on some networks.
![image](network.png)
[<img src="https://img.shields.io/github/license/esrrhs/pingtunnel">](https://github.com/esrrhs/pingtunnel)
[<img src="https://img.shields.io/github/languages/top/esrrhs/pingtunnel">](https://github.com/esrrhs/pingtunnel)
[![Go Report Card](https://goreportcard.com/badge/github.com/esrrhs/pingtunnel)](https://goreportcard.com/report/github.com/esrrhs/pingtunnel)
[<img src="https://img.shields.io/github/v/release/esrrhs/pingtunnel">](https://github.com/esrrhs/pingtunnel/releases)
[<img src="https://img.shields.io/github/downloads/esrrhs/pingtunnel/total">](https://github.com/esrrhs/pingtunnel/releases)
[<img src="https://img.shields.io/docker/pulls/esrrhs/pingtunnel">](https://hub.docker.com/repository/docker/esrrhs/pingtunnel)
[<img src="https://img.shields.io/github/actions/workflow/status/esrrhs/pingtunnel/go.yml?branch=master">](https://github.com/esrrhs/pingtunnel/actions)
# Why use this
* 因为网络审查ip会直接被ban但是却可以ping通这时候就可以用这个工具继续连接服务器。If the server's ip is blocked, all tcp udp packets are forbidden, but it can be pinged. At this point, you can continue to connect to the server with this tool.
* 在咖啡厅或是机场可以连接free wifi但是需要登录跳转验证这时候就可以用这个工具绕过登录上网因为wifi虽然不可以上网但是却可以ping通你的服务器。In the coffee shop or airport, you can connect to free wifi, but you need to log in to verify. At this time, you can use this tool to bypass the login, because wifi can not surf the Internet, but you can ping your server.
* 在某些网络tcp的传输很慢但是如果用icmp协议可能因为运营商的设置或是网络拓扑速度会变快实际测试在中国大陆连aws的服务器会有加速效果。In some networks, the transmission of tcp is very slow, but if the icmp protocol is used, the speed may be faster because of the operator's settings or the network topology. After testing, connecting the server of aws from mainland China has an accelerated effect.
Pingtunnel is a tool that sends TCP/UDP traffic over ICMP.
## Note: This tool is only to be used for study and research, do not use it for illegal purposes
![image](network.jpg)
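Conceptually, "sending TCP/UDP over ICMP" means wrapping each payload inside an ICMP echo request. The sketch below is illustrative only: it shows the standard ICMP echo header layout and Internet checksum, not pingtunnel's actual wire format (which additionally serializes a protobuf message into the payload).

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// icmpChecksum computes the Internet checksum used in the ICMP header:
// sum the data as big-endian 16-bit words, fold the carries, complement.
func icmpChecksum(b []byte) uint16 {
	var sum uint32
	for i := 0; i+1 < len(b); i += 2 {
		sum += uint32(binary.BigEndian.Uint16(b[i : i+2]))
	}
	if len(b)%2 == 1 { // odd trailing byte is padded with zero
		sum += uint32(b[len(b)-1]) << 8
	}
	for sum>>16 != 0 {
		sum = (sum & 0xffff) + (sum >> 16)
	}
	return ^uint16(sum)
}

// echoRequest frames an arbitrary payload inside an ICMP echo request
// (type 8, code 0) with the given identifier and sequence number.
func echoRequest(id, seq uint16, payload []byte) []byte {
	msg := make([]byte, 8+len(payload))
	msg[0] = 8 // echo request; code byte msg[1] stays 0
	binary.BigEndian.PutUint16(msg[4:6], id)
	binary.BigEndian.PutUint16(msg[6:8], seq)
	copy(msg[8:], payload)
	binary.BigEndian.PutUint16(msg[2:4], icmpChecksum(msg))
	return msg
}

func main() {
	pkt := echoRequest(1, 1, []byte("hello"))
	fmt.Printf("%d-byte echo request, checksum 0x%04x\n", len(pkt), binary.BigEndian.Uint16(pkt[2:4]))
}
```

Recomputing the checksum over a finished packet (checksum field included) yields zero, which is how receivers validate it.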
## Usage
### Install server
- First prepare a server with a public IP, such as EC2 on AWS, assuming the domain name or public IP is www.yourserver.com
- Download the corresponding installation package from [releases](https://github.com/esrrhs/pingtunnel/releases), such as pingtunnel_linux64.zip, then decompress and execute with **root** privileges
- The “-key” parameter is an **int**; it only accepts numbers between 0 and 2147483647
# Sample
如把本机的:4455的UDP流量转发到www.yourserver.com:4455。For example, to forward the local machine's UDP traffic on port :4455 to www.yourserver.com:4455:
* 在www.yourserver.com的服务器上用root权限运行。Run with root privileges on the server at www.yourserver.com
```
sudo wget (link of latest release)
sudo unzip pingtunnel_linux64.zip
sudo ./pingtunnel -type server
```
* 在你本地电脑上用管理员权限运行。Run with administrator privileges on your local computer
- (Optional) Disable system default ping
```
pingtunnel.exe -type client -l :4455 -s www.yourserver.com -t www.yourserver.com:4455
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
```
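If you disable the kernel's default echo replies as above, the setting resets on reboot. It can be made persistent with a standard sysctl drop-in (the key is the stock Linux one; the file name here is just a suggestion):

```
# /etc/sysctl.d/99-pingtunnel.conf
net.ipv4.icmp_echo_ignore_all = 1
```

Apply without rebooting via `sysctl --system`.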
* 如果看到客户端不停的ping、pong日志输出,说明工作正常。If you see the client's continuous ping/pong log output, it is working normally
```
ping www.xx.com 2018-12-23 13:05:50.5724495 +0800 CST m=+3.023909301 8 0 1997 2
pong from xx.xx.xx.xx 210.8078ms
```
* 如果想转发tcp流量只需要在客户端加上-tcp的参数。If you want to forward tcp traffic, you only need to add the -tcp parameter to the client.
```
pingtunnel.exe -type client -l :4455 -s www.yourserver.com -t www.yourserver.com:4455 -tcp 1
```
* 如果想转发sock5流量只需要在客户端加上-sock5的参数。If you want to forward sock5 traffic, you only need to add the -sock5 parameter to the client.
### Install the client
- Download the corresponding installation package from [releases](https://github.com/esrrhs/pingtunnel/releases), such as pingtunnel_windows64.zip, and decompress it
- Then run with **administrator** privileges. The commands corresponding to different forwarding functions are as follows.
- If you see a log of ping pong, the connection is normal
- The “-key” parameter is an **int**; it only accepts numbers between 0 and 2147483647
#### Forward sock5
```
pingtunnel.exe -type client -l :4455 -s www.yourserver.com -sock5 1
```
* 大功告成,然后你就可以开始和本机的:4455端口通信,数据都被自动转发到远端,如同连接到www.yourserver.com:4455一样。 Done. You can now communicate with local port :4455; the data is automatically forwarded to the remote end, as if you were connected to www.yourserver.com:4455.
# Usage
通过伪造ping把tcp/udp/sock5流量通过远程服务器转发到目的服务器上。用于突破某些运营商封锁TCP/UDP流量。
By forging pings, tcp/udp/sock5 traffic is forwarded to the destination server through the remote server. This can be used to get around certain operators' blocking of TCP/UDP traffic.
#### Forward tcp
Usage:
```
pingtunnel.exe -type client -l :4455 -s www.yourserver.com -t www.yourserver.com:4455 -tcp 1
```
// server
pingtunnel -type server
#### Forward udp
// client, Forward udp
pingtunnel -type client -l LOCAL_IP:4455 -s SERVER_IP -t SERVER_IP:4455
```
pingtunnel.exe -type client -l :4455 -s www.yourserver.com -t www.yourserver.com:4455
```
// client, Forward tcp
pingtunnel -type client -l LOCAL_IP:4455 -s SERVER_IP -t SERVER_IP:4455 -tcp 1
### Use Docker
It can also be started directly with docker, which is more convenient. Same parameters as above
- server:
```
docker run --name pingtunnel-server -d --privileged --network host --restart=always esrrhs/pingtunnel ./pingtunnel -type server -key 123456
```
- client:
```
docker run --name pingtunnel-client -d --restart=always -p 1080:1080 esrrhs/pingtunnel ./pingtunnel -type client -l :1080 -s www.yourserver.com -sock5 1 -key 123456
```
// client, Forward sock5, implicitly open tcp, so no target server is needed
pingtunnel -type client -l LOCAL_IP:4455 -s SERVER_IP -sock5 1
## Thanks for free JetBrains Open Source license
-type 服务器或者客户端
client or server
<img src="https://resources.jetbrains.com/storage/products/company/brand/logos/GoLand.png" height="200"/></a>
-l 本地的地址,发到这个端口的流量将转发到服务器
Local address, traffic sent to this port will be forwarded to the server
-s 服务器的地址,流量将通过隧道转发到这个服务器
The address of the server, the traffic will be forwarded to this server through the tunnel
-t 远端服务器转发的目的地址,流量将转发到这个地址
Destination address forwarded by the remote server, traffic will be forwarded to this address
-timeout 本地记录连接超时的时间单位是秒默认60s
Timeout for locally tracked connections, in seconds, default 60
-key 设置的密码默认0
Set password, default 0
-tcp 设置是否转发tcp默认0
Set the switch to forward tcp, the default is 0
-tcp_bs tcp的发送接收缓冲区大小默认10MB
Tcp send and receive buffer size, default 10MB
-tcp_mw tcp的最大窗口默认10000
The maximum window of tcp, the default is 10000
-tcp_rst tcp的超时发送时间默认400ms
Tcp timeout resend time, default 400ms
-tcp_gz 当数据包超过这个大小tcp将压缩数据0表示不压缩默认0
Tcp will compress data when the packet exceeds this size, 0 means no compression, default 0
-tcp_stat 打印tcp的监控默认0
Print tcp connection statistic, default 0 is off
-nolog 不写日志文件只打印标准输出默认0
Do not write log files, only print standard output, default 0 is off
-loglevel 日志文件等级默认info
log level, default is info
-sock5 开启sock5转发默认0
Turn on sock5 forwarding, default 0 is off
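Since -key is an int (not an arbitrary string password), a wrapper script or caller may want to validate it up front. A minimal sketch — `parseKey` is a hypothetical helper, not part of pingtunnel:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// parseKey validates a -key value the way the README describes it:
// an integer between 0 and 2147483647 (math.MaxInt32).
func parseKey(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("-key must be an integer, got %q", s)
	}
	if n < 0 || n > math.MaxInt32 {
		return 0, fmt.Errorf("-key must be in [0, %d], got %d", math.MaxInt32, n)
	}
	return n, nil
}

func main() {
	for _, s := range []string{"123456", "secret", "-1"} {
		if k, err := parseKey(s); err != nil {
			fmt.Println("rejected:", err)
		} else {
			fmt.Println("accepted:", k)
		}
	}
}
```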

1
_config.yml Normal file

@ -0,0 +1 @@
theme: jekyll-theme-cayman

452
client.go

@@ -1,13 +1,17 @@
package pingtunnel
import (
"github.com/esrrhs/go-engine/src/common"
"github.com/esrrhs/go-engine/src/loggo"
"github.com/esrrhs/gohome/common"
"github.com/esrrhs/gohome/frame"
"github.com/esrrhs/gohome/loggo"
"github.com/esrrhs/gohome/network"
"github.com/golang/protobuf/proto"
"golang.org/x/net/icmp"
"io"
"math"
"math/rand"
"net"
"sync"
"time"
)
@@ -18,7 +22,7 @@ const (
func NewClient(addr string, server string, target string, timeout int, key int,
tcpmode int, tcpmode_buffersize int, tcpmode_maxwin int, tcpmode_resend_timems int, tcpmode_compress int,
tcpmode_stat int, open_sock5 int) (*Client, error) {
tcpmode_stat int, open_sock5 int, maxconn int, sock5_filter *func(addr string) bool) (*Client, error) {
var ipaddr *net.UDPAddr
var tcpaddr *net.TCPAddr
@@ -41,9 +45,11 @@ func NewClient(addr string, server string, target string, timeout int, key int,
return nil, err
}
r := rand.New(rand.NewSource(time.Now().UnixNano()))
rand.Seed(time.Now().UnixNano())
return &Client{
id: r.Intn(math.MaxInt16),
exit: false,
rtt: 0,
id: rand.Intn(math.MaxInt16),
ipaddr: ipaddr,
tcpaddr: tcpaddr,
addr: addr,
@@ -59,10 +65,18 @@ func NewClient(addr string, server string, target string, timeout int, key int,
tcpmode_compress: tcpmode_compress,
tcpmode_stat: tcpmode_stat,
open_sock5: open_sock5,
maxconn: maxconn,
pongTime: time.Now(),
sock5_filter: sock5_filter,
}, nil
}
type Client struct {
exit bool
rtt time.Duration
workResultLock sync.WaitGroup
maxconn int
id int
sequence int
@@ -76,7 +90,9 @@ type Client struct {
tcpmode_resend_timems int
tcpmode_compress int
tcpmode_stat int
open_sock5 int
open_sock5 int
sock5_filter *func(addr string) bool
ipaddr *net.UDPAddr
tcpaddr *net.TCPAddr
@@ -91,16 +107,23 @@ type Client struct {
listenConn *net.UDPConn
tcplistenConn *net.TCPListener
localAddrToConnMap map[string]*ClientConn
localIdToConnMap map[string]*ClientConn
localAddrToConnMap sync.Map
localIdToConnMap sync.Map
sendPacket uint64
recvPacket uint64
sendPacketSize uint64
recvPacketSize uint64
sendPacket uint64
recvPacket uint64
sendPacketSize uint64
recvPacketSize uint64
localAddrToConnMapSize int
localIdToConnMapSize int
recvcontrol chan int
pongTime time.Time
}
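The struct changes here swap the plain `map[string]*ClientConn` tables for `sync.Map`, which several goroutines (accept, recv, timeout check) can use without an explicit mutex. A self-contained sketch of that pattern, with a stand-in `conn` type instead of the real `ClientConn`:

```go
package main

import (
	"fmt"
	"sync"
)

// conn is a stand-in for pingtunnel's ClientConn.
type conn struct{ id string }

// connTable mirrors the localIdToConnMap usage: concurrent
// store/load/delete keyed by the connection's uuid.
type connTable struct{ m sync.Map }

func (t *connTable) add(id string, c *conn) { t.m.Store(id, c) }
func (t *connTable) del(id string)          { t.m.Delete(id) }
func (t *connTable) get(id string) *conn {
	if v, ok := t.m.Load(id); ok {
		return v.(*conn)
	}
	return nil
}

// size walks the map; sync.Map has no Len, which is presumably why the
// diff keeps separate localIdToConnMapSize counters refreshed per tick.
func (t *connTable) size() int {
	n := 0
	t.m.Range(func(_, _ any) bool { n++; return true })
	return n
}

func main() {
	var t connTable
	t.add("uuid-1", &conn{id: "uuid-1"})
	fmt.Println(t.size(), t.get("uuid-1") != nil)
}
```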
type ClientConn struct {
exit bool
ipaddr *net.UDPAddr
tcpaddr *net.TCPAddr
id string
@@ -108,7 +131,7 @@ type ClientConn struct {
activeSendTime time.Time
close bool
fm *FrameMgr
fm *frame.FrameMgr
}
func (p *Client) Addr() string {
@@ -131,37 +154,59 @@ func (p *Client) ServerAddr() string {
return p.addrServer
}
func (p *Client) Run() {
func (p *Client) RTT() time.Duration {
return p.rtt
}
func (p *Client) RecvPacketSize() uint64 {
return p.recvPacketSize
}
func (p *Client) SendPacketSize() uint64 {
return p.sendPacketSize
}
func (p *Client) RecvPacket() uint64 {
return p.recvPacket
}
func (p *Client) SendPacket() uint64 {
return p.sendPacket
}
func (p *Client) LocalIdToConnMapSize() int {
return p.localIdToConnMapSize
}
func (p *Client) LocalAddrToConnMapSize() int {
return p.localAddrToConnMapSize
}
func (p *Client) Run() error {
conn, err := icmp.ListenPacket("ip4:icmp", "")
if err != nil {
loggo.Error("Error listening for ICMP packets: %s", err.Error())
return
return err
}
defer conn.Close()
p.conn = conn
if p.tcpmode > 0 {
tcplistenConn, err := net.ListenTCP("tcp", p.tcpaddr)
if err != nil {
loggo.Error("Error listening for tcp packets: %s", err.Error())
return
return err
}
defer tcplistenConn.Close()
p.tcplistenConn = tcplistenConn
} else {
listener, err := net.ListenUDP("udp", p.ipaddr)
if err != nil {
loggo.Error("Error listening for udp packets: %s", err.Error())
return
return err
}
defer listener.Close()
p.listenConn = listener
}
p.localAddrToConnMap = make(map[string]*ClientConn)
p.localIdToConnMap = make(map[string]*ClientConn)
if p.tcpmode > 0 {
go p.AcceptTcp()
} else {
@@ -169,28 +214,77 @@ func (p *Client) Run() {
}
recv := make(chan *Packet, 10000)
go recvICMP(*p.conn, recv)
p.recvcontrol = make(chan int, 1)
go recvICMP(&p.workResultLock, &p.exit, *p.conn, recv)
interval := time.NewTicker(time.Second)
defer interval.Stop()
go func() {
defer common.CrashLog()
for {
select {
case <-interval.C:
p.workResultLock.Add(1)
defer p.workResultLock.Done()
for !p.exit {
p.checkTimeoutConn()
p.ping()
p.showNet()
case r := <-recv:
p.processPacket(r)
time.Sleep(time.Second)
}
}()
go func() {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
for !p.exit {
p.updateServerAddr()
time.Sleep(time.Second)
}
}()
go func() {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
for !p.exit {
select {
case <-p.recvcontrol:
return
case r := <-recv:
p.processPacket(r)
}
}
}()
return nil
}
func (p *Client) Stop() {
p.exit = true
p.recvcontrol <- 1
p.workResultLock.Wait()
p.conn.Close()
if p.tcplistenConn != nil {
p.tcplistenConn.Close()
}
if p.listenConn != nil {
p.listenConn.Close()
}
}
func (p *Client) AcceptTcp() error {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
loggo.Info("client waiting local accept tcp")
for {
for !p.exit {
p.tcplistenConn.SetDeadline(time.Now().Add(time.Millisecond * 1000))
conn, err := p.tcplistenConn.AcceptTCP()
@@ -210,35 +304,45 @@ func (p *Client) AcceptTcp() error {
}
}
}
return nil
}
func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
uuid := UniqueId()
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
tcpsrcaddr := conn.RemoteAddr().(*net.TCPAddr)
fm := NewFrameMgr(p.tcpmode_buffersize, p.tcpmode_maxwin, p.tcpmode_resend_timems, p.tcpmode_compress, p.tcpmode_stat)
if p.maxconn > 0 && p.localIdToConnMapSize >= p.maxconn {
loggo.Info("too many connections %d, client accept new local tcp fail %s", p.localIdToConnMapSize, tcpsrcaddr.String())
return
}
uuid := common.UniqueId()
fm := frame.NewFrameMgr(FRAME_MAX_SIZE, FRAME_MAX_ID, p.tcpmode_buffersize, p.tcpmode_maxwin, p.tcpmode_resend_timems, p.tcpmode_compress, p.tcpmode_stat)
now := time.Now()
clientConn := &ClientConn{tcpaddr: tcpsrcaddr, id: uuid, activeRecvTime: now, activeSendTime: now, close: false,
clientConn := &ClientConn{exit: false, tcpaddr: tcpsrcaddr, id: uuid, activeRecvTime: now, activeSendTime: now, close: false,
fm: fm}
p.localAddrToConnMap[tcpsrcaddr.String()] = clientConn
p.localIdToConnMap[uuid] = clientConn
p.addClientConn(uuid, tcpsrcaddr.String(), clientConn)
loggo.Info("client accept new local tcp %s %s", uuid, tcpsrcaddr.String())
loggo.Info("start connect remote tcp %s %s", uuid, tcpsrcaddr.String())
clientConn.fm.Connect()
startConnectTime := time.Now()
for {
startConnectTime := common.GetNowUpdateInSecond()
for !p.exit && !clientConn.exit {
if clientConn.fm.IsConnected() {
break
}
clientConn.fm.Update()
sendlist := clientConn.fm.getSendList()
sendlist := clientConn.fm.GetSendList()
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
mb, _ := proto.Marshal(f)
f := e.Value.(*frame.Frame)
mb, _ := clientConn.fm.MarshalFrame(f)
p.sequence++
sendICMP(p.id, p.sequence, *p.conn, p.ipaddrServer, targetAddr, clientConn.id, (uint32)(MyMsg_DATA), mb,
SEND_PROTO, RECV_PROTO, p.key,
@@ -248,23 +352,26 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
p.sendPacketSize += (uint64)(len(mb))
}
time.Sleep(time.Millisecond * 10)
now := time.Now()
now := common.GetNowUpdateInSecond()
diffclose := now.Sub(startConnectTime)
if diffclose > time.Second*(time.Duration(p.timeout)) {
if diffclose > time.Second*5 {
loggo.Info("can not connect remote tcp %s %s", uuid, tcpsrcaddr.String())
p.Close(clientConn)
p.close(clientConn)
return
}
}
loggo.Info("connected remote tcp %s %s", uuid, tcpsrcaddr.String())
if !clientConn.exit {
loggo.Info("connected remote tcp %s %s", uuid, tcpsrcaddr.String())
}
bytes := make([]byte, 10240)
tcpActiveRecvTime := time.Now()
tcpActiveSendTime := time.Now()
tcpActiveRecvTime := common.GetNowUpdateInSecond()
tcpActiveSendTime := common.GetNowUpdateInSecond()
for {
now := time.Now()
for !p.exit && !clientConn.exit {
now := common.GetNowUpdateInSecond()
sleep := true
left := common.MinOfInt(clientConn.fm.GetSendBufferLeft(), len(bytes))
@@ -288,13 +395,13 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
clientConn.fm.Update()
sendlist := clientConn.fm.getSendList()
sendlist := clientConn.fm.GetSendList()
if sendlist.Len() > 0 {
sleep = false
clientConn.activeSendTime = now
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
mb, err := proto.Marshal(f)
f := e.Value.(*frame.Frame)
mb, err := clientConn.fm.MarshalFrame(f)
if err != nil {
loggo.Error("Error tcp Marshal %s %s %s", uuid, tcpsrcaddr.String(), err)
continue
@ -337,7 +444,7 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
tcpdiffrecv := now.Sub(tcpActiveRecvTime)
tcpdiffsend := now.Sub(tcpActiveSendTime)
if diffrecv > time.Second*(time.Duration(p.timeout)) || diffsend > time.Second*(time.Duration(p.timeout)) ||
tcpdiffrecv > time.Second*(time.Duration(p.timeout)) || tcpdiffsend > time.Second*(time.Duration(p.timeout)) {
(tcpdiffrecv > time.Second*(time.Duration(p.timeout)) && tcpdiffsend > time.Second*(time.Duration(p.timeout))) {
loggo.Info("close inactive conn %s %s", clientConn.id, clientConn.tcpaddr.String())
clientConn.fm.Close()
break
@ -350,16 +457,18 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
}
}
startCloseTime := time.Now()
for {
now := time.Now()
clientConn.fm.Close()
startCloseTime := common.GetNowUpdateInSecond()
for !p.exit && !clientConn.exit {
now := common.GetNowUpdateInSecond()
clientConn.fm.Update()
sendlist := clientConn.fm.getSendList()
sendlist := clientConn.fm.GetSendList()
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
mb, _ := proto.Marshal(f)
f := e.Value.(*frame.Frame)
mb, _ := clientConn.fm.MarshalFrame(f)
p.sequence++
sendICMP(p.id, p.sequence, *p.conn, p.ipaddrServer, targetAddr, clientConn.id, (uint32)(MyMsg_DATA), mb,
SEND_PROTO, RECV_PROTO, p.key,
@ -381,14 +490,12 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
}
diffclose := now.Sub(startCloseTime)
timeout := diffclose > time.Second*(time.Duration(p.timeout))
remoteclosed := clientConn.fm.IsRemoteClosed()
if timeout {
if diffclose > time.Second*60 {
loggo.Info("close conn had timeout %s %s", clientConn.id, clientConn.tcpaddr.String())
break
}
remoteclosed := clientConn.fm.IsRemoteClosed()
if remoteclosed && nodatarecv {
loggo.Info("remote conn had closed %s %s", clientConn.id, clientConn.tcpaddr.String())
break
@ -399,16 +506,21 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
loggo.Info("close tcp conn %s %s", clientConn.id, clientConn.tcpaddr.String())
conn.Close()
p.Close(clientConn)
p.close(clientConn)
}
func (p *Client) Accept() error {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
loggo.Info("client waiting local accept udp")
bytes := make([]byte, 10240)
for {
for !p.exit {
p.listenConn.SetReadDeadline(time.Now().Add(time.Millisecond * 100))
n, srcaddr, err := p.listenConn.ReadFromUDP(bytes)
if err != nil {
@ -422,13 +534,16 @@ func (p *Client) Accept() error {
continue
}
now := time.Now()
clientConn := p.localAddrToConnMap[srcaddr.String()]
now := common.GetNowUpdateInSecond()
clientConn := p.getClientConnByAddr(srcaddr.String())
if clientConn == nil {
uuid := UniqueId()
clientConn = &ClientConn{ipaddr: srcaddr, id: uuid, activeRecvTime: now, activeSendTime: now, close: false}
p.localAddrToConnMap[srcaddr.String()] = clientConn
p.localIdToConnMap[uuid] = clientConn
if p.maxconn > 0 && p.localIdToConnMapSize >= p.maxconn {
loggo.Info("too many connections %d, client accept new local udp fail %s", p.localIdToConnMapSize, srcaddr.String())
continue
}
uuid := common.UniqueId()
clientConn = &ClientConn{exit: false, ipaddr: srcaddr, id: uuid, activeRecvTime: now, activeSendTime: now, close: false}
p.addClientConn(uuid, srcaddr.String(), clientConn)
loggo.Info("client accept new local udp %s %s", uuid, srcaddr.String())
}
@ -443,6 +558,7 @@ func (p *Client) Accept() error {
p.sendPacket++
p.sendPacketSize += (uint64)(n)
}
return nil
}
func (p *Client) processPacket(packet *Packet) {
@ -462,26 +578,37 @@ func (p *Client) processPacket(packet *Packet) {
if packet.my.Type == (int32)(MyMsg_PING) {
t := time.Time{}
t.UnmarshalBinary(packet.my.Data)
d := time.Now().Sub(t)
now := time.Now()
d := now.Sub(t)
loggo.Info("pong from %s %s", packet.src.String(), d.String())
p.rtt = d
p.pongTime = now
return
}
if packet.my.Type == (int32)(MyMsg_KICK) {
clientConn := p.getClientConnById(packet.my.Id)
if clientConn != nil {
p.close(clientConn)
loggo.Info("remote kick local %s", packet.my.Id)
}
return
}
loggo.Debug("processPacket %s %s %d", packet.my.Id, packet.src.String(), len(packet.my.Data))
clientConn := p.localIdToConnMap[packet.my.Id]
clientConn := p.getClientConnById(packet.my.Id)
if clientConn == nil {
loggo.Debug("processPacket no conn %s ", packet.my.Id)
p.remoteError(packet.my.Id)
return
}
addr := clientConn.ipaddr
now := time.Now()
now := common.GetNowUpdateInSecond()
clientConn.activeRecvTime = now
if p.tcpmode > 0 {
f := &Frame{}
f := &frame.Frame{}
err := proto.Unmarshal(packet.my.Data, f)
if err != nil {
loggo.Error("Unmarshal tcp Error %s", err)
@ -490,6 +617,10 @@ func (p *Client) processPacket(packet *Packet) {
clientConn.fm.OnRecvFrame(f)
} else {
if packet.my.Data == nil {
return
}
addr := clientConn.ipaddr
_, err := p.listenConn.WriteToUDP(packet.my.Data, addr)
if err != nil {
loggo.Info("WriteToUDP Error read udp %s", err)
@ -502,11 +633,10 @@ func (p *Client) processPacket(packet *Packet) {
p.recvPacketSize += (uint64)(len(packet.my.Data))
}
func (p *Client) Close(clientConn *ClientConn) {
if p.localIdToConnMap[clientConn.id] != nil {
delete(p.localIdToConnMap, clientConn.id)
delete(p.localAddrToConnMap, clientConn.ipaddr.String())
}
func (p *Client) close(clientConn *ClientConn) {
clientConn.exit = true
p.deleteClientConn(clientConn.id, clientConn.ipaddr.String())
p.deleteClientConn(clientConn.id, clientConn.tcpaddr.String())
}
func (p *Client) checkTimeoutConn() {
@ -515,8 +645,16 @@ func (p *Client) checkTimeoutConn() {
return
}
now := time.Now()
for _, conn := range p.localIdToConnMap {
tmp := make(map[string]*ClientConn)
p.localIdToConnMap.Range(func(key, value interface{}) bool {
id := key.(string)
clientConn := value.(*ClientConn)
tmp[id] = clientConn
return true
})
now := common.GetNowUpdateInSecond()
for _, conn := range tmp {
diffrecv := now.Sub(conn.activeRecvTime)
diffsend := now.Sub(conn.activeSendTime)
if diffrecv > time.Second*(time.Duration(p.timeout)) || diffsend > time.Second*(time.Duration(p.timeout)) {
@ -524,30 +662,41 @@ func (p *Client) checkTimeoutConn() {
}
}
for id, conn := range p.localIdToConnMap {
for id, conn := range tmp {
if conn.close {
loggo.Info("close inactive conn %s %s", id, conn.ipaddr.String())
p.Close(conn)
p.close(conn)
}
}
}
func (p *Client) ping() {
if p.sendPacket == 0 {
now := time.Now()
b, _ := now.MarshalBinary()
sendICMP(p.id, p.sequence, *p.conn, p.ipaddrServer, "", "", (uint32)(MyMsg_PING), b,
SEND_PROTO, RECV_PROTO, p.key,
0, 0, 0, 0, 0, 0,
0)
loggo.Info("ping %s %s %d %d %d %d", p.addrServer, now.String(), p.sproto, p.rproto, p.id, p.sequence)
p.sequence++
now := time.Now()
b, _ := now.MarshalBinary()
sendICMP(p.id, p.sequence, *p.conn, p.ipaddrServer, "", "", (uint32)(MyMsg_PING), b,
SEND_PROTO, RECV_PROTO, p.key,
0, 0, 0, 0, 0, 0,
0)
loggo.Info("ping %s %s %d %d %d %d", p.addrServer, now.String(), p.sproto, p.rproto, p.id, p.sequence)
p.sequence++
if now.Sub(p.pongTime) > time.Second*3 {
p.rtt = 0
}
}
func (p *Client) showNet() {
loggo.Info("send %dPacket/s %dKB/s recv %dPacket/s %dKB/s",
p.sendPacket, p.sendPacketSize/1024, p.recvPacket, p.recvPacketSize/1024)
p.localAddrToConnMapSize = 0
p.localIdToConnMap.Range(func(key, value interface{}) bool {
p.localAddrToConnMapSize++
return true
})
p.localIdToConnMapSize = 0
p.localIdToConnMap.Range(func(key, value interface{}) bool {
p.localIdToConnMapSize++
return true
})
loggo.Info("send %dPacket/s %dKB/s recv %dPacket/s %dKB/s %d/%dConnections",
p.sendPacket, p.sendPacketSize/1024, p.recvPacket, p.recvPacketSize/1024, p.localAddrToConnMapSize, p.localIdToConnMapSize)
p.sendPacket = 0
p.recvPacket = 0
p.sendPacketSize = 0
@ -556,13 +705,18 @@ func (p *Client) showNet() {
func (p *Client) AcceptSock5Conn(conn *net.TCPConn) {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
var err error = nil
if err = sock5Handshake(conn); err != nil {
if err = network.Sock5HandshakeBy(conn, "", ""); err != nil {
loggo.Error("socks handshake: %s", err)
conn.Close()
return
}
_, addr, err := sock5GetRequest(conn)
_, addr, err := network.Sock5GetRequest(conn)
if err != nil {
loggo.Error("error getting request: %s", err)
conn.Close()
@ -573,12 +727,104 @@ func (p *Client) AcceptSock5Conn(conn *net.TCPConn) {
// But if connection failed, the client will get connection reset error.
_, err = conn.Write([]byte{0x05, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x08, 0x43})
if err != nil {
loggo.Error("send connection confirmation:", err)
loggo.Error("send connection confirmation: %s", err)
conn.Close()
return
}
loggo.Info("accept new sock5 conn: %s", addr)
p.AcceptTcpConn(conn, addr)
if p.sock5_filter == nil {
p.AcceptTcpConn(conn, addr)
} else {
if (*p.sock5_filter)(addr) {
p.AcceptTcpConn(conn, addr)
return
}
p.AcceptDirectTcpConn(conn, addr)
}
}
func (p *Client) addClientConn(uuid string, addr string, clientConn *ClientConn) {
p.localAddrToConnMap.Store(addr, clientConn)
p.localIdToConnMap.Store(uuid, clientConn)
}
func (p *Client) getClientConnByAddr(addr string) *ClientConn {
ret, ok := p.localAddrToConnMap.Load(addr)
if !ok {
return nil
}
return ret.(*ClientConn)
}
func (p *Client) getClientConnById(uuid string) *ClientConn {
ret, ok := p.localIdToConnMap.Load(uuid)
if !ok {
return nil
}
return ret.(*ClientConn)
}
func (p *Client) deleteClientConn(uuid string, addr string) {
p.localIdToConnMap.Delete(uuid)
p.localAddrToConnMap.Delete(addr)
}
func (p *Client) remoteError(uuid string) {
sendICMP(p.id, p.sequence, *p.conn, p.ipaddrServer, "", uuid, (uint32)(MyMsg_KICK), []byte{},
SEND_PROTO, RECV_PROTO, p.key,
0, 0, 0, 0, 0, 0,
0)
}
func (p *Client) AcceptDirectTcpConn(conn *net.TCPConn, targetAddr string) {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
tcpsrcaddr := conn.RemoteAddr().(*net.TCPAddr)
loggo.Info("client accept new direct local tcp %s %s", tcpsrcaddr.String(), targetAddr)
tcpaddrTarget, err := net.ResolveTCPAddr("tcp", targetAddr)
if err != nil {
loggo.Info("direct local tcp ResolveTCPAddr fail: %s %s", targetAddr, err.Error())
return
}
targetconn, err := net.DialTCP("tcp", nil, tcpaddrTarget)
if err != nil {
loggo.Info("direct local tcp DialTCP fail: %s %s", targetAddr, err.Error())
return
}
go p.transfer(conn, targetconn, conn.RemoteAddr().String(), targetconn.RemoteAddr().String())
go p.transfer(targetconn, conn, targetconn.RemoteAddr().String(), conn.RemoteAddr().String())
loggo.Info("client accept new direct local tcp ok %s %s", tcpsrcaddr.String(), targetAddr)
}
func (p *Client) transfer(destination io.WriteCloser, source io.ReadCloser, dst string, src string) {
defer common.CrashLog()
defer destination.Close()
defer source.Close()
loggo.Info("client begin transfer from %s -> %s", src, dst)
io.Copy(destination, source)
loggo.Info("client end transfer from %s -> %s", src, dst)
}
func (p *Client) updateServerAddr() {
ipaddrServer, err := net.ResolveIPAddr("ip", p.addrServer)
if err != nil {
return
}
if p.ipaddrServer.String() != ipaddrServer.String() {
p.ipaddrServer = ipaddrServer
}
}


@ -3,9 +3,15 @@ package main
import (
"flag"
"fmt"
"github.com/esrrhs/go-engine/src/loggo"
"github.com/esrrhs/gohome/common"
"github.com/esrrhs/gohome/geoip"
"github.com/esrrhs/gohome/loggo"
"github.com/esrrhs/pingtunnel"
"net"
"net/http"
_ "net/http/pprof"
"strconv"
"time"
)
var usage = `
@ -29,6 +35,34 @@ Usage:
-type 服务器或者客户端
client or server
服务器参数server param:
-key 设置的纯数字密码默认0, 参数为int类型范围从0-2147483647不可夹杂字母特殊符号
Set the password, default 0. The value must be a pure-number int in the range 0-2147483647; letters and special characters are not allowed
-nolog 不写日志文件只打印标准输出默认0
Do not write log files, only print standard output, default 0 is off
-noprint 不打印屏幕输出默认0
Do not print standard output, default 0 is off
-loglevel 日志文件等级默认info
log level, default is info
-maxconn 最大连接数默认0不受限制
the max num of connections, default 0 is no limit
-maxprt server最大处理线程数默认100
max process thread in server, default 100
-maxprb server最大处理线程buffer数默认1000
max process thread's buffer in server, default 1000
-conntt server发起连接到目标地址的超时时间默认1000ms
The timeout period for the server to initiate a connection to the destination address. The default is 1000ms.
客户端参数client param:
-l 本地的地址发到这个端口的流量将转发到服务器
Local address, traffic sent to this port will be forwarded to the server
@ -47,11 +81,11 @@ Usage:
-tcp 设置是否转发tcp默认0
Set the switch to forward tcp, the default is 0
-tcp_bs tcp的发送接收缓冲区大小默认10MB
Tcp send and receive buffer size, default 10MB
-tcp_bs tcp的发送接收缓冲区大小默认1MB
Tcp send and receive buffer size, default 1MB
-tcp_mw tcp的最大窗口默认10000
The maximum window of tcp, the default is 10000
-tcp_mw tcp的最大窗口默认20000
The maximum window of tcp, the default is 20000
-tcp_rst tcp的超时发送时间默认400ms
Tcp timeout resend time, default 400ms
@ -65,15 +99,29 @@ Usage:
-nolog 不写日志文件只打印标准输出默认0
Do not write log files, only print standard output, default 0 is off
-noprint 不打印屏幕输出默认0
Do not print standard output, default 0 is off
-loglevel 日志文件等级默认info
log level, default is info
-sock5 开启sock5转发默认0
Turn on sock5 forwarding, default 0 is off
-profile 在指定端口开启性能检测默认0不开启
Enable profiling (pprof) on the specified port; the default 0 leaves it disabled.
-s5filter sock5模式设置转发过滤默认全转发设置CN代表CN地区的直连不转发
Set the forwarding filter in sock5 mode. The default is to forward everything. For example, setting CN makes addresses in the CN region connect directly instead of being forwarded.
-s5ftfile sock5模式转发过滤的数据文件默认读取当前目录的GeoLite2-Country.mmdb
The data file used by the sock5 filter; by default GeoLite2-Country.mmdb is read from the current directory
`
func main() {
defer common.CrashLog()
t := flag.String("type", "", "client or server")
listen := flag.String("l", "", "listen addr")
target := flag.String("t", "", "target addr")
@ -81,14 +129,22 @@ func main() {
timeout := flag.Int("timeout", 60, "conn timeout")
key := flag.Int("key", 0, "key")
tcpmode := flag.Int("tcp", 0, "tcp mode")
tcpmode_buffersize := flag.Int("tcp_bs", 10*1024*1024, "tcp mode buffer size")
tcpmode_maxwin := flag.Int("tcp_mw", 10000, "tcp mode max win")
tcpmode_buffersize := flag.Int("tcp_bs", 1*1024*1024, "tcp mode buffer size")
tcpmode_maxwin := flag.Int("tcp_mw", 20000, "tcp mode max win")
tcpmode_resend_timems := flag.Int("tcp_rst", 400, "tcp mode resend time ms")
tcpmode_compress := flag.Int("tcp_gz", 0, "tcp data compress")
nolog := flag.Int("nolog", 0, "write log file")
noprint := flag.Int("noprint", 0, "print stdout")
tcpmode_stat := flag.Int("tcp_stat", 0, "print tcp stat")
loglevel := flag.String("loglevel", "info", "log level")
open_sock5 := flag.Int("sock5", 0, "sock5 mode")
maxconn := flag.Int("maxconn", 0, "max num of connections")
max_process_thread := flag.Int("maxprt", 100, "max process thread in server")
max_process_buffer := flag.Int("maxprb", 1000, "max process thread's buffer in server")
profile := flag.Int("profile", 0, "open profile")
conntt := flag.Int("conntt", 1000, "the connect call's timeout")
s5filter := flag.String("s5filter", "", "sock5 filter")
s5ftfile := flag.String("s5ftfile", "GeoLite2-Country.mmdb", "sock5 filter file")
flag.Usage = func() {
fmt.Printf(usage)
}
@ -126,20 +182,24 @@ func main() {
Prefix: "pingtunnel",
MaxDay: 3,
NoLogFile: *nolog > 0,
NoPrint: *noprint > 0,
})
loggo.Info("start...")
loggo.Info("key %d", *key)
if *t == "server" {
s, err := pingtunnel.NewServer(*key)
s, err := pingtunnel.NewServer(*key, *maxconn, *max_process_thread, *max_process_buffer, *conntt)
if err != nil {
loggo.Error("ERROR: %s", err.Error())
return
}
loggo.Info("Server start")
s.Run()
}
if *t == "client" {
err = s.Run()
if err != nil {
loggo.Error("Run ERROR: %s", err.Error())
return
}
} else if *t == "client" {
loggo.Info("type %s", *t)
loggo.Info("listen %s", *listen)
@ -150,17 +210,60 @@ func main() {
*tcpmode_buffersize = 0
*tcpmode_maxwin = 0
*tcpmode_resend_timems = 0
*tcpmode_compress = 0
*tcpmode_stat = 0
}
if len(*s5filter) > 0 {
err := geoip.Load(*s5ftfile)
if err != nil {
loggo.Error("Load Sock5 ip file ERROR: %s", err.Error())
return
}
}
filter := func(addr string) bool {
if len(*s5filter) <= 0 {
return true
}
taddr, err := net.ResolveTCPAddr("tcp", addr)
if err != nil {
return false
}
ret, err := geoip.GetCountryIsoCode(taddr.IP.String())
if err != nil {
return false
}
if len(ret) <= 0 {
return false
}
return ret != *s5filter
}
c, err := pingtunnel.NewClient(*listen, *server, *target, *timeout, *key,
*tcpmode, *tcpmode_buffersize, *tcpmode_maxwin, *tcpmode_resend_timems, *tcpmode_compress,
*tcpmode_stat, *open_sock5)
*tcpmode_stat, *open_sock5, *maxconn, &filter)
if err != nil {
loggo.Error("ERROR: %s", err.Error())
return
}
loggo.Info("Client Listen %s (%s) Server %s (%s) TargetPort %s:", c.Addr(), c.IPAddr(),
c.ServerAddr(), c.ServerIPAddr(), c.TargetAddr())
c.Run()
err = c.Run()
if err != nil {
loggo.Error("Run ERROR: %s", err.Error())
return
}
} else {
return
}
if *profile > 0 {
go http.ListenAndServe("0.0.0.0:"+strconv.Itoa(*profile), nil)
}
for {
time.Sleep(time.Hour)
}
}

docker-compose/.env Normal file

@ -0,0 +1,2 @@
KEY=123456
SERVER=www.yourserver.com

docker-compose/Readme.md Normal file

@ -0,0 +1,16 @@
Deploy with docker-compose
===========================
**First**, edit the `.env` file in this directory with the values appropriate for your setup.
**Then**, bring the stack up with these commands:
- on the server:
```
docker-compose -f server.yml up -d
```
- on the client machine:
```
docker-compose -f client.yml up -d
```
**Now** use the SOCKS5 proxy at port `1080` on your client machine.


@ -0,0 +1,9 @@
version: "3.7"
services:
pingtunnelServer:
image: esrrhs/pingtunnel:latest
restart: always
ports:
- 1080:1080
command: "./pingtunnel -type client -l 0.0.0.0:1080 -s ${SERVER} -sock5 1 -key ${KEY}"


@ -0,0 +1,8 @@
version: "3.7"
services:
pingtunnelServer:
image: esrrhs/pingtunnel:latest
restart: always
network_mode: host
command: "./pingtunnel -type server -key ${KEY}"


@ -1,686 +0,0 @@
package pingtunnel
import (
"bytes"
"compress/zlib"
"container/list"
"github.com/esrrhs/go-engine/src/common"
"github.com/esrrhs/go-engine/src/loggo"
"github.com/esrrhs/go-engine/src/rbuffergo"
"io"
"strconv"
"sync"
"time"
)
type FrameStat struct {
sendDataNum int
recvDataNum int
sendReqNum int
recvReqNum int
sendAckNum int
recvAckNum int
sendDataNumsMap map[int32]int
recvDataNumsMap map[int32]int
sendReqNumsMap map[int32]int
recvReqNumsMap map[int32]int
sendAckNumsMap map[int32]int
recvAckNumsMap map[int32]int
sendping int
sendpong int
recvping int
recvpong int
}
type FrameMgr struct {
sendb *rbuffergo.RBuffergo
recvb *rbuffergo.RBuffergo
recvlock sync.Locker
windowsize int
resend_timems int
compress int
sendwin *list.List
sendlist *list.List
sendid int
recvwin *list.List
recvlist *list.List
recvid int
close bool
remoteclosed bool
closesend bool
lastPingTime int64
rttns int64
reqmap map[int32]int64
sendmap map[int32]int64
connected bool
fs *FrameStat
openstat int
lastPrintStat int64
}
func NewFrameMgr(buffersize int, windowsize int, resend_timems int, compress int, openstat int) *FrameMgr {
sendb := rbuffergo.New(buffersize, false)
recvb := rbuffergo.New(buffersize, false)
fm := &FrameMgr{sendb: sendb, recvb: recvb,
recvlock: &sync.Mutex{},
windowsize: windowsize, resend_timems: resend_timems, compress: compress,
sendwin: list.New(), sendlist: list.New(), sendid: 0,
recvwin: list.New(), recvlist: list.New(), recvid: 0,
close: false, remoteclosed: false, closesend: false,
lastPingTime: time.Now().UnixNano(), rttns: (int64)(resend_timems * 1000),
reqmap: make(map[int32]int64), sendmap: make(map[int32]int64),
connected: false, openstat: openstat, lastPrintStat: time.Now().UnixNano()}
if openstat > 0 {
fm.resetStat()
}
return fm
}
func (fm *FrameMgr) GetSendBufferLeft() int {
left := fm.sendb.Capacity() - fm.sendb.Size()
return left
}
func (fm *FrameMgr) WriteSendBuffer(data []byte) {
fm.sendb.Write(data)
loggo.Debug("WriteSendBuffer %d %d", fm.sendb.Size(), len(data))
}
func (fm *FrameMgr) Update() {
fm.cutSendBufferToWindow()
fm.sendlist.Init()
tmpreq, tmpack, tmpackto := fm.preProcessRecvList()
fm.processRecvList(tmpreq, tmpack, tmpackto)
fm.combineWindowToRecvBuffer()
fm.calSendList()
fm.ping()
fm.printStat()
}
func (fm *FrameMgr) cutSendBufferToWindow() {
sendall := false
if fm.sendb.Size() < FRAME_MAX_SIZE {
sendall = true
}
for fm.sendb.Size() >= FRAME_MAX_SIZE && fm.sendwin.Len() < fm.windowsize {
fd := &FrameData{Type: (int32)(FrameData_USER_DATA),
Data: make([]byte, FRAME_MAX_SIZE)}
fm.sendb.Read(fd.Data)
if fm.compress > 0 && len(fd.Data) > fm.compress {
newb := fm.compressData(fd.Data)
if len(newb) < len(fd.Data) {
fd.Data = newb
fd.Compress = true
}
}
f := &Frame{Type: (int32)(Frame_DATA),
Id: (int32)(fm.sendid),
Data: fd}
fm.sendid++
if fm.sendid >= FRAME_MAX_ID {
fm.sendid = 0
}
fm.sendwin.PushBack(f)
loggo.Debug("cut frame push to send win %d %d %d", f.Id, FRAME_MAX_SIZE, fm.sendwin.Len())
}
if sendall && fm.sendb.Size() > 0 && fm.sendwin.Len() < fm.windowsize {
fd := &FrameData{Type: (int32)(FrameData_USER_DATA),
Data: make([]byte, fm.sendb.Size())}
fm.sendb.Read(fd.Data)
if fm.compress > 0 && len(fd.Data) > fm.compress {
newb := fm.compressData(fd.Data)
if len(newb) < len(fd.Data) {
fd.Data = newb
fd.Compress = true
}
}
f := &Frame{Type: (int32)(Frame_DATA),
Id: (int32)(fm.sendid),
Data: fd}
fm.sendid++
if fm.sendid >= FRAME_MAX_ID {
fm.sendid = 0
}
fm.sendwin.PushBack(f)
loggo.Debug("cut small frame push to send win %d %d %d", f.Id, len(f.Data.Data), fm.sendwin.Len())
}
if fm.sendb.Empty() && fm.close && !fm.closesend && fm.sendwin.Len() < fm.windowsize {
fd := &FrameData{Type: (int32)(FrameData_CLOSE)}
f := &Frame{Type: (int32)(Frame_DATA),
Id: (int32)(fm.sendid),
Data: fd}
fm.sendid++
if fm.sendid >= FRAME_MAX_ID {
fm.sendid = 0
}
fm.sendwin.PushBack(f)
fm.closesend = true
loggo.Debug("close frame push to send win %d %d", f.Id, fm.sendwin.Len())
}
}
func (fm *FrameMgr) calSendList() {
cur := time.Now().UnixNano()
for e := fm.sendwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Resend || cur-f.Sendtime > int64(fm.resend_timems*(int)(time.Millisecond)) {
oldsend := fm.sendmap[f.Id]
if cur-oldsend > fm.rttns {
f.Sendtime = cur
fm.sendlist.PushBack(f)
f.Resend = false
fm.sendmap[f.Id] = cur
if fm.openstat > 0 {
fm.fs.sendDataNum++
fm.fs.sendDataNumsMap[f.Id]++
}
loggo.Debug("push frame to sendlist %d %d", f.Id, len(f.Data.Data))
}
}
}
}
func (fm *FrameMgr) getSendList() *list.List {
return fm.sendlist
}
func (fm *FrameMgr) OnRecvFrame(f *Frame) {
fm.recvlock.Lock()
defer fm.recvlock.Unlock()
fm.recvlist.PushBack(f)
}
func (fm *FrameMgr) preProcessRecvList() (map[int32]int, map[int32]int, map[int32]*Frame) {
fm.recvlock.Lock()
defer fm.recvlock.Unlock()
tmpreq := make(map[int32]int)
tmpack := make(map[int32]int)
tmpackto := make(map[int32]*Frame)
for e := fm.recvlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Type == (int32)(Frame_REQ) {
for _, id := range f.Dataid {
tmpreq[id]++
loggo.Debug("recv req %d %s", f.Id, common.Int32ArrayToString(f.Dataid, ","))
}
} else if f.Type == (int32)(Frame_ACK) {
for _, id := range f.Dataid {
tmpack[id]++
loggo.Debug("recv ack %d %s", f.Id, common.Int32ArrayToString(f.Dataid, ","))
}
} else if f.Type == (int32)(Frame_DATA) {
tmpackto[f.Id] = f
if fm.openstat > 0 {
fm.fs.recvDataNum++
fm.fs.recvDataNumsMap[f.Id]++
}
loggo.Debug("recv data %d %d", f.Id, len(f.Data.Data))
} else if f.Type == (int32)(Frame_PING) {
fm.processPing(f)
} else if f.Type == (int32)(Frame_PONG) {
fm.processPong(f)
} else {
loggo.Error("error frame type %d", f.Type)
}
}
fm.recvlist.Init()
return tmpreq, tmpack, tmpackto
}
func (fm *FrameMgr) processRecvList(tmpreq map[int32]int, tmpack map[int32]int, tmpackto map[int32]*Frame) {
for id, num := range tmpreq {
for e := fm.sendwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Id == id {
f.Resend = true
loggo.Debug("choose resend win %d %d", f.Id, len(f.Data.Data))
break
}
}
if fm.openstat > 0 {
fm.fs.recvReqNum += num
fm.fs.recvReqNumsMap[id] += num
}
}
for id, num := range tmpack {
for e := fm.sendwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Id == id {
fm.sendwin.Remove(e)
delete(fm.sendmap, f.Id)
loggo.Debug("remove send win %d %d", f.Id, len(f.Data.Data))
break
}
}
if fm.openstat > 0 {
fm.fs.recvAckNum += num
fm.fs.recvAckNumsMap[id] += num
}
}
if len(tmpackto) > 0 {
tmp := make([]int32, len(tmpackto))
index := 0
for id, rf := range tmpackto {
if fm.addToRecvWin(rf) {
tmp[index] = id
index++
if fm.openstat > 0 {
fm.fs.sendAckNum++
fm.fs.sendAckNumsMap[id]++
}
loggo.Debug("add data to win %d %d", rf.Id, len(rf.Data.Data))
}
}
if index > 0 {
f := &Frame{Type: (int32)(Frame_ACK), Resend: false, Sendtime: 0,
Id: 0,
Dataid: tmp[0:index]}
fm.sendlist.PushBack(f)
loggo.Debug("send ack %d %s", f.Id, common.Int32ArrayToString(f.Dataid, ","))
}
}
}
func (fm *FrameMgr) addToRecvWin(rf *Frame) bool {
if !fm.isIdInRange((int)(rf.Id), FRAME_MAX_ID) {
loggo.Debug("recv frame not in range %d %d", rf.Id, fm.recvid)
if fm.isIdOld((int)(rf.Id), FRAME_MAX_ID) {
return true
}
return false
}
for e := fm.recvwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Id == rf.Id {
loggo.Debug("recv frame ignore %d %d", f.Id, len(f.Data.Data))
return true
}
}
for e := fm.recvwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
loggo.Debug("start insert recv win %d %d %d", fm.recvid, rf.Id, f.Id)
if fm.compareId((int)(rf.Id), (int)(f.Id)) < 0 {
fm.recvwin.InsertBefore(rf, e)
loggo.Debug("insert recv win %d %d before %d", rf.Id, len(rf.Data.Data), f.Id)
return true
}
}
fm.recvwin.PushBack(rf)
loggo.Debug("insert recv win last %d %d", rf.Id, len(rf.Data.Data))
return true
}
func (fm *FrameMgr) processRecvFrame(f *Frame) bool {
if f.Data.Type == (int32)(FrameData_USER_DATA) {
left := fm.recvb.Capacity() - fm.recvb.Size()
if left >= len(f.Data.Data) {
src := f.Data.Data
if f.Data.Compress {
err, old := fm.deCompressData(src)
if err != nil {
loggo.Error("recv frame deCompressData error %d", f.Id)
return false
}
if left < len(old) {
return false
}
loggo.Debug("deCompressData recv frame %d %d %d",
f.Id, len(src), len(old))
src = old
}
fm.recvb.Write(src)
loggo.Debug("combined recv frame to recv buffer %d %d",
f.Id, len(src))
return true
}
return false
} else if f.Data.Type == (int32)(FrameData_CLOSE) {
fm.remoteclosed = true
loggo.Debug("recv remote close frame %d", f.Id)
return true
} else if f.Data.Type == (int32)(FrameData_CONN) {
fm.sendConnectRsp()
fm.connected = true
loggo.Debug("recv remote conn frame %d", f.Id)
return true
} else if f.Data.Type == (int32)(FrameData_CONNRSP) {
fm.connected = true
loggo.Debug("recv remote conn rsp frame %d", f.Id)
return true
} else {
loggo.Error("recv frame type error %d", f.Data.Type)
return false
}
}
func (fm *FrameMgr) combineWindowToRecvBuffer() {
for {
done := false
for e := fm.recvwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Id == (int32)(fm.recvid) {
delete(fm.reqmap, f.Id)
if fm.processRecvFrame(f) {
fm.recvwin.Remove(e)
done = true
loggo.Debug("process recv frame ok %d %d",
f.Id, len(f.Data.Data))
break
}
}
}
if !done {
break
} else {
fm.recvid++
if fm.recvid >= FRAME_MAX_ID {
fm.recvid = 0
}
loggo.Debug("combined ok add recvid %d ", fm.recvid)
}
}
cur := time.Now().UnixNano()
reqtmp := make(map[int]int)
e := fm.recvwin.Front()
id := fm.recvid
for len(reqtmp) < fm.windowsize && len(reqtmp)*4 < FRAME_MAX_SIZE/2 && e != nil {
f := e.Value.(*Frame)
loggo.Debug("start add req id %d %d %d", fm.recvid, f.Id, id)
if f.Id != (int32)(id) {
oldReq := fm.reqmap[f.Id]
if cur-oldReq > fm.rttns {
reqtmp[id]++
fm.reqmap[f.Id] = cur
loggo.Debug("add req id %d ", id)
}
} else {
e = e.Next()
}
id++
if id >= FRAME_MAX_ID {
id = 0
}
}
if len(reqtmp) > 0 {
f := &Frame{Type: (int32)(Frame_REQ), Resend: false, Sendtime: 0,
Id: 0,
Dataid: make([]int32, len(reqtmp))}
index := 0
for id, _ := range reqtmp {
f.Dataid[index] = (int32)(id)
index++
if fm.openstat > 0 {
fm.fs.sendReqNum++
fm.fs.sendReqNumsMap[(int32)(id)]++
}
}
fm.sendlist.PushBack(f)
loggo.Debug("send req %d %s", f.Id, common.Int32ArrayToString(f.Dataid, ","))
}
}
func (fm *FrameMgr) GetRecvBufferSize() int {
return fm.recvb.Size()
}
func (fm *FrameMgr) GetRecvReadLineBuffer() []byte {
ret := fm.recvb.GetReadLineBuffer()
loggo.Debug("GetRecvReadLineBuffer %d %d", fm.recvb.Size(), len(ret))
return ret
}
func (fm *FrameMgr) SkipRecvBuffer(size int) {
fm.recvb.SkipRead(size)
loggo.Debug("SkipRead %d %d", fm.recvb.Size(), size)
}
func (fm *FrameMgr) Close() {
fm.recvlock.Lock()
defer fm.recvlock.Unlock()
fm.close = true
}
func (fm *FrameMgr) IsRemoteClosed() bool {
return fm.remoteclosed
}
func (fm *FrameMgr) ping() {
cur := time.Now().UnixNano()
if cur-fm.lastPingTime > (int64)(time.Second) {
fm.lastPingTime = cur
f := &Frame{Type: (int32)(Frame_PING), Resend: false, Sendtime: cur,
Id: 0}
fm.sendlist.PushBack(f)
loggo.Debug("send ping %d", cur)
if fm.openstat > 0 {
fm.fs.sendping++
}
}
}
func (fm *FrameMgr) processPing(f *Frame) {
rf := &Frame{Type: (int32)(Frame_PONG), Resend: false, Sendtime: f.Sendtime,
Id: 0}
fm.sendlist.PushBack(rf)
if fm.openstat > 0 {
fm.fs.recvping++
fm.fs.sendpong++
}
loggo.Debug("recv ping %d", f.Sendtime)
}
func (fm *FrameMgr) processPong(f *Frame) {
cur := time.Now().UnixNano()
if cur > f.Sendtime {
rtt := cur - f.Sendtime
fm.rttns = (fm.rttns + rtt) / 2
if fm.openstat > 0 {
fm.fs.recvpong++
}
loggo.Debug("recv pong %d %dms", rtt, fm.rttns/1000/1000)
}
}
func (fm *FrameMgr) isIdInRange(id int, maxid int) bool {
begin := fm.recvid
end := fm.recvid + fm.windowsize
if end >= maxid {
if id >= 0 && id < end-maxid {
return true
}
end = maxid
}
if id >= begin && id < end {
return true
}
return false
}
func (fm *FrameMgr) compareId(l int, r int) int {
if l < fm.recvid {
l += FRAME_MAX_ID
}
if r < fm.recvid {
r += FRAME_MAX_ID
}
return l - r
}
func (fm *FrameMgr) isIdOld(id int, maxid int) bool {
if id > fm.recvid {
return false
}
end := fm.recvid + fm.windowsize*2
if end >= maxid {
if id >= end-maxid && id < fm.recvid {
return true
}
} else {
if id < fm.recvid {
return true
}
}
return false
}
func (fm *FrameMgr) IsConnected() bool {
return fm.connected
}
func (fm *FrameMgr) Connect() {
fd := &FrameData{Type: (int32)(FrameData_CONN)}
f := &Frame{Type: (int32)(Frame_DATA),
Id: (int32)(fm.sendid),
Data: fd}
fm.sendid++
if fm.sendid >= FRAME_MAX_ID {
fm.sendid = 0
}
fm.sendwin.PushBack(f)
loggo.Debug("start connect")
}
func (fm *FrameMgr) sendConnectRsp() {
fd := &FrameData{Type: (int32)(FrameData_CONNRSP)}
f := &Frame{Type: (int32)(Frame_DATA),
Id: (int32)(fm.sendid),
Data: fd}
fm.sendid++
if fm.sendid >= FRAME_MAX_ID {
fm.sendid = 0
}
fm.sendwin.PushBack(f)
loggo.Debug("send connect rsp")
}
func (fm *FrameMgr) compressData(src []byte) []byte {
var b bytes.Buffer
w := zlib.NewWriter(&b)
w.Write(src)
w.Close()
return b.Bytes()
}
func (fm *FrameMgr) deCompressData(src []byte) (error, []byte) {
b := bytes.NewReader(src)
r, err := zlib.NewReader(b)
if err != nil {
return err, nil
}
var out bytes.Buffer
io.Copy(&out, r)
r.Close()
return nil, out.Bytes()
}
func (fm *FrameMgr) resetStat() {
fm.fs = &FrameStat{}
fm.fs.sendDataNumsMap = make(map[int32]int)
fm.fs.recvDataNumsMap = make(map[int32]int)
fm.fs.sendReqNumsMap = make(map[int32]int)
fm.fs.recvReqNumsMap = make(map[int32]int)
fm.fs.sendAckNumsMap = make(map[int32]int)
fm.fs.recvAckNumsMap = make(map[int32]int)
}
func (fm *FrameMgr) printStat() {
if fm.openstat > 0 {
cur := time.Now().UnixNano()
if cur-fm.lastPrintStat > (int64)(time.Second) {
fm.lastPrintStat = cur
fs := fm.fs
loggo.Info("\nsendDataNum %d\nrecvDataNum %d\nsendReqNum %d\nrecvReqNum %d\nsendAckNum %d\nrecvAckNum %d\n"+
"sendDataNumsMap %s\nrecvDataNumsMap %s\nsendReqNumsMap %s\nrecvReqNumsMap %s\nsendAckNumsMap %s\nrecvAckNumsMap %s\n"+
"sendping %d\nrecvping %d\nsendpong %d\nrecvpong %d\n"+
"sendwin %d\nrecvwin %d\n",
fs.sendDataNum, fs.recvDataNum,
fs.sendReqNum, fs.recvReqNum,
fs.sendAckNum, fs.recvAckNum,
fm.printStatMap(&fs.sendDataNumsMap), fm.printStatMap(&fs.recvDataNumsMap),
fm.printStatMap(&fs.sendReqNumsMap), fm.printStatMap(&fs.recvReqNumsMap),
fm.printStatMap(&fs.sendAckNumsMap), fm.printStatMap(&fs.recvAckNumsMap),
fs.sendping, fs.recvping,
fs.sendpong, fs.recvpong,
fm.sendwin.Len(), fm.recvwin.Len())
fm.resetStat()
}
}
}
// printStatMap renders a histogram: for each occurrence count found in m,
// how many ids share that count, formatted as "count->ids," pairs.
func (fm *FrameMgr) printStatMap(m *map[int32]int) string {
tmp := make(map[int]int)
for _, v := range *m {
tmp[v]++
}
max := 0
for k := range tmp {
if k > max {
max = k
}
}
var ret string
for i := 1; i <= max; i++ {
ret += strconv.Itoa(i) + "->" + strconv.Itoa(tmp[i]) + ","
}
if ret == "" {
ret = "none"
}
return ret
}
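The histogram format above is easiest to see on a concrete map (the same shape the test file feeds it). A standalone sketch of the function:

```go
package main

import (
	"fmt"
	"strconv"
)

// statHistogram reproduces printStatMap: it counts how many ids share each
// send/recv count and renders "count->ids," pairs up to the largest count seen.
func statHistogram(m map[int32]int) string {
	tmp := make(map[int]int)
	for _, v := range m {
		tmp[v]++ // v is how often an id was sent/received; bucket by that
	}
	max := 0
	for k := range tmp {
		if k > max {
			max = k
		}
	}
	var ret string
	for i := 1; i <= max; i++ {
		ret += strconv.Itoa(i) + "->" + strconv.Itoa(tmp[i]) + ","
	}
	if ret == "" {
		ret = "none"
	}
	return ret
}

func main() {
	m := map[int32]int{1: 1, 2: 1, 3: 1, 4: 2, 6: 7}
	fmt.Println(statHistogram(m)) // → 1->3,2->1,3->0,4->0,5->0,6->0,7->1,
}
```

Reading the output: three ids were seen exactly once, one id twice, one id seven times; intermediate counts are emitted as zero so the distribution is easy to scan in the stats log.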

go.mod Normal file

@@ -0,0 +1,18 @@
module github.com/esrrhs/pingtunnel
go 1.18
require (
github.com/esrrhs/gohome v0.0.0-20231102120537-c519efbde976
github.com/golang/protobuf v1.5.3
golang.org/x/net v0.17.0
)
require (
github.com/OneOfOne/xxhash v1.2.8 // indirect
github.com/google/uuid v1.4.0 // indirect
github.com/oschwald/geoip2-golang v1.9.0 // indirect
github.com/oschwald/maxminddb-golang v1.12.0 // indirect
golang.org/x/sys v0.13.0 // indirect
google.golang.org/protobuf v1.31.0 // indirect
)

go.sum Normal file

@@ -0,0 +1,29 @@
github.com/OneOfOne/xxhash v1.2.8 h1:31czK/TI9sNkxIKfaUfGlU47BAxQ0ztGgd9vPyqimf8=
github.com/OneOfOne/xxhash v1.2.8/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/esrrhs/gohome v0.0.0-20231102120537-c519efbde976 h1:av0d/lRou1Z5cxdSQFwtVcqJjokFI5pJyyr63iAuYis=
github.com/esrrhs/gohome v0.0.0-20231102120537-c519efbde976/go.mod h1:S5fYcOFy4nUPnkYg7D9hIp+SwBR9kCBiOYmWVW42Yhs=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/oschwald/geoip2-golang v1.9.0 h1:uvD3O6fXAXs+usU+UGExshpdP13GAqp4GBrzN7IgKZc=
github.com/oschwald/geoip2-golang v1.9.0/go.mod h1:BHK6TvDyATVQhKNbQBdrj9eAvuwOMi2zSFXizL3K81Y=
github.com/oschwald/maxminddb-golang v1.12.0 h1:9FnTOD0YOhP7DGxGsq4glzpGy5+w7pq50AS6wALUMYs=
github.com/oschwald/maxminddb-golang v1.12.0/go.mod h1:q0Nob5lTCqyQ8WT6FYgS1L7PXKVVbgiymefNwIjPzgY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=

msg.pb.go

@@ -25,18 +25,21 @@ type MyMsg_TYPE int32
const (
MyMsg_DATA MyMsg_TYPE = 0
MyMsg_PING MyMsg_TYPE = 1
MyMsg_KICK MyMsg_TYPE = 2
MyMsg_MAGIC MyMsg_TYPE = 57005
)
var MyMsg_TYPE_name = map[int32]string{
0: "DATA",
1: "PING",
2: "KICK",
57005: "MAGIC",
}
var MyMsg_TYPE_value = map[string]int32{
"DATA": 0,
"PING": 1,
"KICK": 2,
"MAGIC": 57005,
}
@@ -48,71 +51,6 @@ func (MyMsg_TYPE) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_c06e4cca6c2cc899, []int{0, 0}
}
type FrameData_TYPE int32
const (
FrameData_USER_DATA FrameData_TYPE = 0
FrameData_CONN FrameData_TYPE = 1
FrameData_CONNRSP FrameData_TYPE = 2
FrameData_CLOSE FrameData_TYPE = 3
)
var FrameData_TYPE_name = map[int32]string{
0: "USER_DATA",
1: "CONN",
2: "CONNRSP",
3: "CLOSE",
}
var FrameData_TYPE_value = map[string]int32{
"USER_DATA": 0,
"CONN": 1,
"CONNRSP": 2,
"CLOSE": 3,
}
func (x FrameData_TYPE) String() string {
return proto.EnumName(FrameData_TYPE_name, int32(x))
}
func (FrameData_TYPE) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_c06e4cca6c2cc899, []int{1, 0}
}
type Frame_TYPE int32
const (
Frame_DATA Frame_TYPE = 0
Frame_REQ Frame_TYPE = 1
Frame_ACK Frame_TYPE = 2
Frame_PING Frame_TYPE = 3
Frame_PONG Frame_TYPE = 4
)
var Frame_TYPE_name = map[int32]string{
0: "DATA",
1: "REQ",
2: "ACK",
3: "PING",
4: "PONG",
}
var Frame_TYPE_value = map[string]int32{
"DATA": 0,
"REQ": 1,
"ACK": 2,
"PING": 3,
"PONG": 4,
}
func (x Frame_TYPE) String() string {
return proto.EnumName(Frame_TYPE_name, int32(x))
}
func (Frame_TYPE) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_c06e4cca6c2cc899, []int{2, 0}
}
type MyMsg struct {
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
Type int32 `protobuf:"varint,2,opt,name=type,proto3" json:"type,omitempty"`
@@ -256,182 +194,35 @@ func (m *MyMsg) GetTcpmodeStat() int32 {
return 0
}
type FrameData struct {
Type int32 `protobuf:"varint,1,opt,name=type,proto3" json:"type,omitempty"`
Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
Compress bool `protobuf:"varint,3,opt,name=compress,proto3" json:"compress,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *FrameData) Reset() { *m = FrameData{} }
func (m *FrameData) String() string { return proto.CompactTextString(m) }
func (*FrameData) ProtoMessage() {}
func (*FrameData) Descriptor() ([]byte, []int) {
return fileDescriptor_c06e4cca6c2cc899, []int{1}
}
func (m *FrameData) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_FrameData.Unmarshal(m, b)
}
func (m *FrameData) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_FrameData.Marshal(b, m, deterministic)
}
func (m *FrameData) XXX_Merge(src proto.Message) {
xxx_messageInfo_FrameData.Merge(m, src)
}
func (m *FrameData) XXX_Size() int {
return xxx_messageInfo_FrameData.Size(m)
}
func (m *FrameData) XXX_DiscardUnknown() {
xxx_messageInfo_FrameData.DiscardUnknown(m)
}
var xxx_messageInfo_FrameData proto.InternalMessageInfo
func (m *FrameData) GetType() int32 {
if m != nil {
return m.Type
}
return 0
}
func (m *FrameData) GetData() []byte {
if m != nil {
return m.Data
}
return nil
}
func (m *FrameData) GetCompress() bool {
if m != nil {
return m.Compress
}
return false
}
type Frame struct {
Type int32 `protobuf:"varint,1,opt,name=type,proto3" json:"type,omitempty"`
Resend bool `protobuf:"varint,2,opt,name=resend,proto3" json:"resend,omitempty"`
Sendtime int64 `protobuf:"varint,3,opt,name=sendtime,proto3" json:"sendtime,omitempty"`
Id int32 `protobuf:"varint,4,opt,name=id,proto3" json:"id,omitempty"`
Data *FrameData `protobuf:"bytes,5,opt,name=data,proto3" json:"data,omitempty"`
Dataid []int32 `protobuf:"varint,6,rep,packed,name=dataid,proto3" json:"dataid,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Frame) Reset() { *m = Frame{} }
func (m *Frame) String() string { return proto.CompactTextString(m) }
func (*Frame) ProtoMessage() {}
func (*Frame) Descriptor() ([]byte, []int) {
return fileDescriptor_c06e4cca6c2cc899, []int{2}
}
func (m *Frame) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Frame.Unmarshal(m, b)
}
func (m *Frame) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Frame.Marshal(b, m, deterministic)
}
func (m *Frame) XXX_Merge(src proto.Message) {
xxx_messageInfo_Frame.Merge(m, src)
}
func (m *Frame) XXX_Size() int {
return xxx_messageInfo_Frame.Size(m)
}
func (m *Frame) XXX_DiscardUnknown() {
xxx_messageInfo_Frame.DiscardUnknown(m)
}
var xxx_messageInfo_Frame proto.InternalMessageInfo
func (m *Frame) GetType() int32 {
if m != nil {
return m.Type
}
return 0
}
func (m *Frame) GetResend() bool {
if m != nil {
return m.Resend
}
return false
}
func (m *Frame) GetSendtime() int64 {
if m != nil {
return m.Sendtime
}
return 0
}
func (m *Frame) GetId() int32 {
if m != nil {
return m.Id
}
return 0
}
func (m *Frame) GetData() *FrameData {
if m != nil {
return m.Data
}
return nil
}
func (m *Frame) GetDataid() []int32 {
if m != nil {
return m.Dataid
}
return nil
}
func init() {
proto.RegisterEnum("MyMsg_TYPE", MyMsg_TYPE_name, MyMsg_TYPE_value)
proto.RegisterEnum("FrameData_TYPE", FrameData_TYPE_name, FrameData_TYPE_value)
proto.RegisterEnum("Frame_TYPE", Frame_TYPE_name, Frame_TYPE_value)
proto.RegisterType((*MyMsg)(nil), "MyMsg")
proto.RegisterType((*FrameData)(nil), "FrameData")
proto.RegisterType((*Frame)(nil), "Frame")
}
func init() { proto.RegisterFile("msg.proto", fileDescriptor_c06e4cca6c2cc899) }
var fileDescriptor_c06e4cca6c2cc899 = []byte{
// 493 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x52, 0xcb, 0x6e, 0xd3, 0x40,
0x14, 0x65, 0xfc, 0x48, 0xe2, 0x9b, 0x34, 0x4c, 0x87, 0x87, 0x46, 0x2c, 0x90, 0xb1, 0x84, 0x30,
0x0b, 0xba, 0x28, 0x12, 0xac, 0x53, 0x37, 0x44, 0x15, 0xe4, 0xc1, 0x24, 0x2c, 0x60, 0x13, 0xb9,
0xf1, 0xd4, 0xb2, 0xc0, 0x0f, 0xd9, 0x13, 0x41, 0xf8, 0x02, 0x7e, 0x86, 0x4f, 0xe0, 0x0f, 0x90,
0xf8, 0x25, 0x34, 0xb7, 0x63, 0xb7, 0x12, 0xac, 0x7c, 0xce, 0x3d, 0x27, 0xb9, 0x67, 0xee, 0xbd,
0xe0, 0xe5, 0x4d, 0x7a, 0x52, 0xd5, 0xa5, 0x2a, 0x83, 0xdf, 0x36, 0xb8, 0xf3, 0xc3, 0xbc, 0x49,
0xd9, 0x18, 0xac, 0x2c, 0xe1, 0xc4, 0x27, 0xa1, 0x27, 0xac, 0x2c, 0x61, 0x0c, 0x1c, 0x75, 0xa8,
0x24, 0xb7, 0x7c, 0x12, 0xba, 0x02, 0x31, 0x7b, 0x08, 0x3d, 0x15, 0xd7, 0xa9, 0x54, 0xdc, 0x46,
0x9f, 0x61, 0xda, 0x9b, 0xc4, 0x2a, 0xe6, 0x8e, 0x4f, 0xc2, 0x91, 0x40, 0xac, 0xbd, 0x35, 0xf6,
0xe0, 0xae, 0x4f, 0xc2, 0x63, 0x61, 0x18, 0xbb, 0x0f, 0x6e, 0x1e, 0xa7, 0xd9, 0x8e, 0xf7, 0xb0,
0x7c, 0x4d, 0x18, 0x05, 0xfb, 0xb3, 0x3c, 0xf0, 0x3e, 0xd6, 0x34, 0x64, 0x1c, 0xfa, 0x2a, 0xcb,
0x65, 0xb9, 0x57, 0x7c, 0x80, 0x11, 0x5a, 0x8a, 0xca, 0xae, 0xca, 0xcb, 0x44, 0x72, 0xcf, 0x28,
0xd7, 0x94, 0xbd, 0x00, 0x66, 0xe0, 0xf6, 0x72, 0x7f, 0x75, 0x25, 0xeb, 0x26, 0xfb, 0x2e, 0x39,
0xa0, 0xe9, 0xd8, 0x28, 0x67, 0x9d, 0xc0, 0x9e, 0xc2, 0xb8, 0xb5, 0xe7, 0xf1, 0xb7, 0xaf, 0x59,
0xc1, 0x87, 0x68, 0x3d, 0x32, 0xd5, 0x39, 0x16, 0xd9, 0x29, 0x3c, 0x68, 0x6d, 0xb5, 0x6c, 0x64,
0x91, 0x6c, 0x75, 0x92, 0xbc, 0xe1, 0x23, 0x74, 0xdf, 0x33, 0xa2, 0x40, 0x6d, 0x83, 0x12, 0x7b,
0x0e, 0xb4, 0xfd, 0xcd, 0xae, 0xcc, 0xab, 0x5a, 0x36, 0x0d, 0x3f, 0x42, 0xfb, 0x5d, 0x53, 0x8f,
0x4c, 0x99, 0x3d, 0x81, 0x51, 0x6b, 0x6d, 0x54, 0xac, 0xf8, 0x18, 0x6d, 0x43, 0x53, 0x5b, 0xab,
0x58, 0x05, 0xcf, 0xc0, 0xd9, 0x7c, 0x5c, 0x4d, 0xd9, 0x00, 0x9c, 0xf3, 0xc9, 0x66, 0x42, 0xef,
0x68, 0xb4, 0xba, 0x58, 0xcc, 0x28, 0x61, 0x43, 0x70, 0xe7, 0x93, 0xd9, 0x45, 0x44, 0x7f, 0xfe,
0xb2, 0x83, 0x1f, 0x04, 0xbc, 0x37, 0x75, 0x9c, 0xcb, 0x73, 0xbd, 0x82, 0x76, 0x85, 0xe4, 0xd6,
0x0a, 0xdb, 0x55, 0x59, 0xb7, 0x56, 0xf5, 0x08, 0x06, 0x5d, 0x48, 0xbd, 0xd8, 0x81, 0xe8, 0x78,
0xf0, 0xda, 0xb4, 0x3e, 0x02, 0xef, 0xc3, 0x7a, 0x2a, 0xb6, 0x37, 0xfd, 0xa3, 0xe5, 0x62, 0x81,
0xfd, 0xfb, 0x1a, 0x89, 0xf5, 0x8a, 0x5a, 0xcc, 0x03, 0x37, 0x7a, 0xb7, 0x5c, 0x4f, 0xa9, 0x1d,
0xfc, 0x21, 0xe0, 0x62, 0x94, 0xff, 0xc6, 0xd0, 0xd7, 0x81, 0xf3, 0xc2, 0x20, 0x03, 0x61, 0x98,
0x8e, 0xa2, 0xbf, 0x7a, 0xc0, 0x18, 0xc5, 0x16, 0x1d, 0x37, 0x17, 0xea, 0xe0, 0xbf, 0xe8, 0x0b,
0x7d, 0x6c, 0x9e, 0xa2, 0xef, 0x6b, 0x78, 0x0a, 0x27, 0xdd, 0xc3, 0x6f, 0x2e, 0x50, 0x7f, 0xb3,
0x84, 0xf7, 0x7c, 0x3b, 0x74, 0x85, 0x61, 0xc1, 0xab, 0x7f, 0xa6, 0xd9, 0x07, 0x5b, 0x4c, 0xdf,
0x53, 0xa2, 0xc1, 0x24, 0x7a, 0x4b, 0xad, 0x6e, 0xbe, 0x36, 0xa2, 0xe5, 0x62, 0x46, 0x9d, 0xb3,
0xd1, 0x27, 0xa8, 0xb2, 0x22, 0x55, 0xfb, 0xa2, 0x90, 0x5f, 0x2e, 0x7b, 0x78, 0xce, 0x2f, 0xff,
0x06, 0x00, 0x00, 0xff, 0xff, 0x5b, 0xf2, 0xbf, 0x87, 0x4d, 0x03, 0x00, 0x00,
// 342 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x3c, 0x90, 0xdb, 0x6a, 0xe2, 0x50,
0x14, 0x86, 0x27, 0x27, 0x0f, 0xcb, 0xe8, 0xc4, 0x35, 0x07, 0xd6, 0x65, 0x46, 0x18, 0xc8, 0x5c,
0xcc, 0xc0, 0xb4, 0x4f, 0xa0, 0xb6, 0x88, 0x48, 0x8a, 0xa4, 0xde, 0xb4, 0x37, 0x12, 0xcd, 0x36,
0x84, 0x36, 0x07, 0xb2, 0xb7, 0xb4, 0xf6, 0x9d, 0xfa, 0x08, 0x7d, 0x8d, 0x3e, 0x4f, 0xc9, 0x72,
0xa7, 0x77, 0xff, 0xff, 0x7f, 0x5f, 0xc8, 0x62, 0x43, 0x3f, 0x97, 0xe9, 0xbf, 0xaa, 0x2e, 0x55,
0x39, 0x79, 0xb7, 0xc0, 0x09, 0x4f, 0xa1, 0x4c, 0x71, 0x04, 0x66, 0x96, 0x90, 0xe1, 0x1b, 0x41,
0x3f, 0x32, 0xb3, 0x04, 0x11, 0x6c, 0x75, 0xaa, 0x04, 0x99, 0xbe, 0x11, 0x38, 0x11, 0x67, 0xfc,
0x09, 0x1d, 0x15, 0xd7, 0xa9, 0x50, 0x64, 0xb1, 0xa7, 0x5b, 0xe3, 0x26, 0xb1, 0x8a, 0xc9, 0xf6,
0x8d, 0xc0, 0x8d, 0x38, 0x37, 0x6e, 0xcd, 0xff, 0x20, 0xc7, 0x37, 0x82, 0x71, 0xa4, 0x1b, 0x7e,
0x07, 0x27, 0x8f, 0xd3, 0x6c, 0x4f, 0x1d, 0x9e, 0xcf, 0x05, 0x3d, 0xb0, 0x1e, 0xc4, 0x89, 0xba,
0xbc, 0x35, 0x11, 0x09, 0xba, 0x2a, 0xcb, 0x45, 0x79, 0x54, 0xd4, 0xe3, 0x13, 0xda, 0xca, 0x64,
0x5f, 0xe5, 0x65, 0x22, 0xa8, 0xaf, 0xc9, 0xb9, 0xe2, 0x5f, 0x40, 0x1d, 0xb7, 0xbb, 0xe3, 0xe1,
0x20, 0x6a, 0x99, 0xbd, 0x08, 0x02, 0x96, 0xc6, 0x9a, 0xcc, 0x3e, 0x01, 0xfe, 0x86, 0x51, 0xab,
0xe7, 0xf1, 0xf3, 0x53, 0x56, 0xd0, 0x80, 0xd5, 0xa1, 0x5e, 0x43, 0x1e, 0xf1, 0x02, 0x7e, 0xb4,
0x5a, 0x2d, 0xa4, 0x28, 0x92, 0x6d, 0x73, 0x49, 0x2e, 0xc9, 0x65, 0xfb, 0x9b, 0x86, 0x11, 0xb3,
0x0d, 0x23, 0xfc, 0x03, 0x5e, 0xfb, 0xcd, 0xbe, 0xcc, 0xab, 0x5a, 0x48, 0x49, 0x43, 0xd6, 0xbf,
0xea, 0x7d, 0xae, 0x67, 0xfc, 0x05, 0x6e, 0xab, 0x4a, 0x15, 0x2b, 0x1a, 0xb1, 0x36, 0xd0, 0xdb,
0xad, 0x8a, 0xd5, 0xe4, 0x3f, 0xd8, 0x9b, 0xbb, 0xf5, 0x35, 0xf6, 0xc0, 0xbe, 0x9a, 0x6e, 0xa6,
0xde, 0x97, 0x26, 0xad, 0x97, 0x37, 0x0b, 0xcf, 0x68, 0xd2, 0x6a, 0x39, 0x5f, 0x79, 0x26, 0x0e,
0xc0, 0x09, 0xa7, 0x8b, 0xe5, 0xdc, 0x7b, 0x7d, 0xb3, 0x66, 0xee, 0x3d, 0x54, 0x59, 0x91, 0xaa,
0x63, 0x51, 0x88, 0xc7, 0x5d, 0x87, 0xdf, 0xfe, 0xf2, 0x23, 0x00, 0x00, 0xff, 0xff, 0x59, 0xbc,
0x55, 0x76, 0xfa, 0x01, 0x00, 0x00,
}

msg.proto

@@ -5,6 +5,7 @@ message MyMsg {
enum TYPE {
DATA = 0;
PING = 1;
KICK = 2;
MAGIC = 0xdead;
}
@@ -23,32 +24,3 @@ message MyMsg {
int32 tcpmode_compress = 13;
int32 tcpmode_stat = 14;
}
message FrameData {
enum TYPE {
USER_DATA = 0;
CONN = 1;
CONNRSP = 2;
CLOSE = 3;
}
int32 type = 1;
bytes data = 2;
bool compress = 3;
}
message Frame {
enum TYPE {
DATA = 0;
REQ = 1;
ACK = 2;
PING = 3;
PONG = 4;
}
int32 type = 1;
bool resend = 2;
int64 sendtime = 3;
int32 id = 4;
FrameData data = 5;
repeated int32 dataid = 6;
}

network.jpg Normal file (binary, not shown; 14 KiB)

Binary file not shown (before: 19 KiB)

pack.sh Executable file

@@ -0,0 +1,56 @@
#! /bin/bash
#set -x
NAME="pingtunnel"
export GO111MODULE=on
#go tool dist list
build_list=$(go tool dist list)
rm pack -rf
rm pack.zip -f
mkdir pack
go mod tidy
for line in $build_list; do
os=$(echo "$line" | awk -F"/" '{print $1}')
arch=$(echo "$line" | awk -F"/" '{print $2}')
echo "os="$os" arch="$arch" start build"
if [ $os == "android" ]; then
continue
fi
if [ $os == "ios" ]; then
continue
fi
if [ $arch == "wasm" ]; then
continue
fi
CGO_ENABLED=0 GOOS=$os GOARCH=$arch go build -ldflags="-s -w"
if [ $? -ne 0 ]; then
echo "os="$os" arch="$arch" build fail"
exit 1
fi
if [ $os = "windows" ]; then
zip ${NAME}_"${os}"_"${arch}"".zip" $NAME".exe"
if [ $? -ne 0 ]; then
echo "os="$os" arch="$arch" zip fail"
exit 1
fi
mv ${NAME}_"${os}"_"${arch}"".zip" pack/
rm $NAME".exe" -f
else
zip ${NAME}_"${os}"_"${arch}"".zip" $NAME
if [ $? -ne 0 ]; then
echo "os="$os" arch="$arch" zip fail"
exit 1
fi
mv ${NAME}_"${os}"_"${arch}"".zip" pack/
rm $NAME -f
fi
echo "os="$os" arch="$arch" done build"
done
zip pack.zip pack/ -r
echo "all done"


@@ -1,18 +1,14 @@
package pingtunnel
import (
"crypto/md5"
"crypto/rand"
"encoding/base64"
"encoding/binary"
"encoding/hex"
"github.com/esrrhs/go-engine/src/loggo"
"github.com/esrrhs/gohome/common"
"github.com/esrrhs/gohome/loggo"
"github.com/golang/protobuf/proto"
"golang.org/x/net/icmp"
"golang.org/x/net/ipv4"
"io"
"net"
"syscall"
"sync"
"time"
)
@@ -62,25 +58,18 @@ func sendICMP(id int, sequence int, conn icmp.PacketConn, server *net.IPAddr, ta
return
}
for {
if _, err := conn.WriteTo(bytes, server); err != nil {
if neterr, ok := err.(*net.OpError); ok {
if neterr.Err == syscall.ENOBUFS {
continue
}
}
loggo.Info("sendICMP WriteTo error %s %s", server.String(), err)
}
break
}
return
conn.WriteTo(bytes, server)
}
func recvICMP(conn icmp.PacketConn, recv chan<- *Packet) {
func recvICMP(workResultLock *sync.WaitGroup, exit *bool, conn icmp.PacketConn, recv chan<- *Packet) {
defer common.CrashLog()
(*workResultLock).Add(1)
defer (*workResultLock).Done()
bytes := make([]byte, 10240)
for {
for !*exit {
conn.SetReadDeadline(time.Now().Add(time.Millisecond * 100))
n, srcaddr, err := conn.ReadFrom(bytes)
@@ -124,22 +113,7 @@ type Packet struct {
echoSeq int
}
func UniqueId() string {
b := make([]byte, 48)
if _, err := io.ReadFull(rand.Reader, b); err != nil {
return ""
}
return GetMd5String(base64.URLEncoding.EncodeToString(b))
}
func GetMd5String(s string) string {
h := md5.New()
h.Write([]byte(s))
return hex.EncodeToString(h.Sum(nil))
}
const (
FRAME_MAX_SIZE int = 888
FRAME_MAX_ID int = 100000
FRAME_MAX_ID int = 1000000
)


@@ -24,93 +24,4 @@ func Test0001(t *testing.T) {
proto.Unmarshal(dst[0:4], my1)
fmt.Println("my1 = ", my1)
fm := FrameMgr{}
fm.recvid = 4
fm.windowsize = 100
lr := &Frame{}
rr := &Frame{}
lr.Id = 1
rr.Id = 4
fmt.Println("fm.compareId(lr, rr) = ", fm.compareId((int)(lr.Id), (int)(rr.Id)))
lr.Id = 99
rr.Id = 8
fmt.Println("fm.compareId(lr, rr) = ", fm.compareId((int)(lr.Id), (int)(rr.Id)))
fm.recvid = 9000
lr.Id = 9998
rr.Id = 9999
fmt.Println("fm.compareId(lr, rr) = ", fm.compareId((int)(lr.Id), (int)(rr.Id)))
fm.recvid = 9000
lr.Id = 9998
rr.Id = 8
fmt.Println("fm.compareId(lr, rr) = ", fm.compareId((int)(lr.Id), (int)(rr.Id)))
fm.recvid = 0
lr.Id = 9998
rr.Id = 8
fmt.Println("fm.compareId(lr, rr) = ", fm.compareId((int)(lr.Id), (int)(rr.Id)))
fm.recvid = 0
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(4, 10))
fm.recvid = 0
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(5, 10))
fm.recvid = 4
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(1, 10))
fm.recvid = 7
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(1, 10))
fm.recvid = 7
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(2, 10))
fm.recvid = 7
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(9, 10))
fm.recvid = 10
fm.windowsize = 10000
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(0, FRAME_MAX_ID))
fm.recvid = 7
fm.windowsize = 5
fmt.Println("fm.isIdOld = ", fm.isIdOld(2, 10))
fm.recvid = 7
fm.windowsize = 5
fmt.Println("fm.isIdOld = ", fm.isIdOld(1, 10))
fm.recvid = 3
fm.windowsize = 5
fmt.Println("fm.isIdOld = ", fm.isIdOld(1, 10))
fm.recvid = 13
fm.windowsize = 10000
fmt.Println("fm.isIdOld = ", fm.isIdOld(9, FRAME_MAX_ID))
dd := fm.compressData(([]byte)("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"))
fmt.Println("fm.compressData = ", len(dd))
_, ddd := fm.deCompressData(dd)
fmt.Println("fm.deCompressData = ", (string)(ddd))
mm := make(map[int32]int)
mm[1] = 1
mm[2] = 1
mm[3] = 1
mm[4] = 2
mm[6] = 7
mms := fm.printStatMap(&mm)
fmt.Println("fm.printStatMap = ", mms)
fm.openstat = 1
fm.resetStat()
fm.printStat()
}

server.go

@@ -1,37 +1,63 @@
package pingtunnel
import (
"github.com/esrrhs/go-engine/src/common"
"github.com/esrrhs/go-engine/src/loggo"
"github.com/esrrhs/gohome/common"
"github.com/esrrhs/gohome/frame"
"github.com/esrrhs/gohome/loggo"
"github.com/esrrhs/gohome/threadpool"
"github.com/golang/protobuf/proto"
"golang.org/x/net/icmp"
"net"
"sync"
"time"
)
func NewServer(key int) (*Server, error) {
return &Server{
key: key,
}, nil
func NewServer(key int, maxconn int, maxprocessthread int, maxprocessbuffer int, connecttmeout int) (*Server, error) {
s := &Server{
exit: false,
key: key,
maxconn: maxconn,
maxprocessthread: maxprocessthread,
maxprocessbuffer: maxprocessbuffer,
connecttmeout: connecttmeout,
}
if maxprocessthread > 0 {
s.processtp = threadpool.NewThreadPool(maxprocessthread, maxprocessbuffer, func(v interface{}) {
packet := v.(*Packet)
s.processDataPacket(packet)
})
}
return s, nil
}
type Server struct {
key int
exit bool
key int
workResultLock sync.WaitGroup
maxconn int
maxprocessthread int
maxprocessbuffer int
connecttmeout int
conn *icmp.PacketConn
localConnMap map[string]*ServerConn
localConnMap sync.Map
connErrorMap sync.Map
sendPacket uint64
recvPacket uint64
sendPacketSize uint64
recvPacketSize uint64
sendPacket uint64
recvPacket uint64
sendPacketSize uint64
recvPacketSize uint64
localConnMapSize int
echoId int
echoSeq int
processtp *threadpool.ThreadPool
recvcontrol chan int
}
type ServerConn struct {
exit bool
timeout int
ipaddrTarget *net.UDPAddr
conn *net.UDPConn
@@ -42,36 +68,64 @@ type ServerConn struct {
activeSendTime time.Time
close bool
rproto int
fm *FrameMgr
fm *frame.FrameMgr
tcpmode int
echoId int
echoSeq int
}
func (p *Server) Run() {
func (p *Server) Run() error {
conn, err := icmp.ListenPacket("ip4:icmp", "")
if err != nil {
loggo.Error("Error listening for ICMP packets: %s", err.Error())
return
return err
}
p.conn = conn
p.localConnMap = make(map[string]*ServerConn)
recv := make(chan *Packet, 10000)
go recvICMP(*p.conn, recv)
p.recvcontrol = make(chan int, 1)
go recvICMP(&p.workResultLock, &p.exit, *p.conn, recv)
interval := time.NewTicker(time.Second)
defer interval.Stop()
go func() {
defer common.CrashLog()
for {
select {
case <-interval.C:
p.workResultLock.Add(1)
defer p.workResultLock.Done()
for !p.exit {
p.checkTimeoutConn()
p.showNet()
case r := <-recv:
p.processPacket(r)
p.updateConnError()
time.Sleep(time.Second)
}
}
}()
go func() {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
for !p.exit {
select {
case <-p.recvcontrol:
return
case r := <-recv:
p.processPacket(r)
}
}
}()
return nil
}
func (p *Server) Stop() {
p.exit = true
p.recvcontrol <- 1
p.workResultLock.Wait()
p.processtp.Stop()
p.conn.Close()
}
func (p *Server) processPacket(packet *Packet) {
@@ -80,9 +134,6 @@ func (p *Server) processPacket(packet *Packet) {
return
}
p.echoId = packet.echoId
p.echoSeq = packet.echoSeq
if packet.my.Type == (int32)(MyMsg_PING) {
t := time.Time{}
t.UnmarshalBinary(packet.my.Data)
@@ -94,69 +145,112 @@ func (p *Server) processPacket(packet *Packet) {
return
}
if packet.my.Type == (int32)(MyMsg_KICK) {
localConn := p.getServerConnById(packet.my.Id)
if localConn != nil {
p.close(localConn)
loggo.Info("remote kick local %s", packet.my.Id)
}
return
}
if p.maxprocessthread > 0 {
p.processtp.AddJob((int)(common.HashString(packet.my.Id)), packet)
} else {
p.processDataPacket(packet)
}
}
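The `AddJob((int)(common.HashString(packet.my.Id)), packet)` call above shards packets onto worker threads by connection id, so one connection's packets stay ordered while different connections process in parallel. A sketch of that dispatch, using `hash/fnv` as a stand-in for gohome's `common.HashString` (whose exact hash is an assumption here):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shard maps a connection id to a fixed worker index. Determinism is the
// point: the same id always lands on the same worker, preserving order.
func shard(id string, workers int) int {
	h := fnv.New32a()
	h.Write([]byte(id))
	return int(h.Sum32()) % workers
}

func main() {
	fmt.Println(shard("conn-a", 4) == shard("conn-a", 4)) // → true
}
```

This is also why `maxprocessthread == 0` falls back to calling `processDataPacket` inline: with a single consumer, ordering is trivially preserved and no pool is needed.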
func (p *Server) processDataPacketNewConn(id string, packet *Packet) *ServerConn {
now := common.GetNowUpdateInSecond()
loggo.Info("start add new connect %s %s", id, packet.my.Target)
if p.maxconn > 0 && p.localConnMapSize >= p.maxconn {
loggo.Info("too many connections %d, server connected target fail %s", p.localConnMapSize, packet.my.Target)
p.remoteError(packet.echoId, packet.echoSeq, id, (int)(packet.my.Rproto), packet.src)
return nil
}
addr := packet.my.Target
if p.isConnError(addr) {
loggo.Info("addr connect Error before: %s %s", id, addr)
p.remoteError(packet.echoId, packet.echoSeq, id, (int)(packet.my.Rproto), packet.src)
return nil
}
if packet.my.Tcpmode > 0 {
c, err := net.DialTimeout("tcp", addr, time.Millisecond*time.Duration(p.connecttmeout))
if err != nil {
loggo.Error("Error listening for tcp packets: %s %s", id, err.Error())
p.remoteError(packet.echoId, packet.echoSeq, id, (int)(packet.my.Rproto), packet.src)
p.addConnError(addr)
return nil
}
targetConn := c.(*net.TCPConn)
ipaddrTarget := targetConn.RemoteAddr().(*net.TCPAddr)
fm := frame.NewFrameMgr(FRAME_MAX_SIZE, FRAME_MAX_ID, (int)(packet.my.TcpmodeBuffersize), (int)(packet.my.TcpmodeMaxwin), (int)(packet.my.TcpmodeResendTimems), (int)(packet.my.TcpmodeCompress),
(int)(packet.my.TcpmodeStat))
localConn := &ServerConn{exit: false, timeout: (int)(packet.my.Timeout), tcpconn: targetConn, tcpaddrTarget: ipaddrTarget, id: id, activeRecvTime: now, activeSendTime: now, close: false,
rproto: (int)(packet.my.Rproto), fm: fm, tcpmode: (int)(packet.my.Tcpmode)}
p.addServerConn(id, localConn)
go p.RecvTCP(localConn, id, packet.src)
return localConn
} else {
c, err := net.DialTimeout("udp", addr, time.Millisecond*time.Duration(p.connecttmeout))
if err != nil {
loggo.Error("Error listening for udp packets: %s %s", id, err.Error())
p.remoteError(packet.echoId, packet.echoSeq, id, (int)(packet.my.Rproto), packet.src)
p.addConnError(addr)
return nil
}
targetConn := c.(*net.UDPConn)
ipaddrTarget := targetConn.RemoteAddr().(*net.UDPAddr)
localConn := &ServerConn{exit: false, timeout: (int)(packet.my.Timeout), conn: targetConn, ipaddrTarget: ipaddrTarget, id: id, activeRecvTime: now, activeSendTime: now, close: false,
rproto: (int)(packet.my.Rproto), tcpmode: (int)(packet.my.Tcpmode)}
p.addServerConn(id, localConn)
go p.Recv(localConn, id, packet.src)
return localConn
}
return nil
}
func (p *Server) processDataPacket(packet *Packet) {
loggo.Debug("processPacket %s %s %d", packet.my.Id, packet.src.String(), len(packet.my.Data))
now := time.Now()
now := common.GetNowUpdateInSecond()
id := packet.my.Id
localConn := p.localConnMap[id]
localConn := p.getServerConnById(id)
if localConn == nil {
if packet.my.Tcpmode > 0 {
addr := packet.my.Target
ipaddrTarget, err := net.ResolveTCPAddr("tcp", addr)
if err != nil {
loggo.Error("Error ResolveUDPAddr for tcp addr: %s %s", addr, err.Error())
return
}
targetConn, err := net.DialTCP("tcp", nil, ipaddrTarget)
if err != nil {
loggo.Error("Error listening for tcp packets: %s", err.Error())
return
}
fm := NewFrameMgr((int)(packet.my.TcpmodeBuffersize), (int)(packet.my.TcpmodeMaxwin), (int)(packet.my.TcpmodeResendTimems), (int)(packet.my.TcpmodeCompress),
(int)(packet.my.TcpmodeStat))
localConn = &ServerConn{timeout: (int)(packet.my.Timeout), tcpconn: targetConn, tcpaddrTarget: ipaddrTarget, id: id, activeRecvTime: now, activeSendTime: now, close: false,
rproto: (int)(packet.my.Rproto), fm: fm, tcpmode: (int)(packet.my.Tcpmode)}
p.localConnMap[id] = localConn
go p.RecvTCP(localConn, id, packet.src)
} else {
addr := packet.my.Target
ipaddrTarget, err := net.ResolveUDPAddr("udp", addr)
if err != nil {
loggo.Error("Error ResolveUDPAddr for udp addr: %s %s", addr, err.Error())
return
}
targetConn, err := net.DialUDP("udp", nil, ipaddrTarget)
if err != nil {
loggo.Error("Error listening for udp packets: %s", err.Error())
return
}
localConn = &ServerConn{timeout: (int)(packet.my.Timeout), conn: targetConn, ipaddrTarget: ipaddrTarget, id: id, activeRecvTime: now, activeSendTime: now, close: false,
rproto: (int)(packet.my.Rproto), tcpmode: (int)(packet.my.Tcpmode)}
p.localConnMap[id] = localConn
go p.Recv(localConn, id, packet.src)
localConn = p.processDataPacketNewConn(id, packet)
if localConn == nil {
return
}
}
localConn.activeRecvTime = now
localConn.echoId = packet.echoId
localConn.echoSeq = packet.echoSeq
if packet.my.Type == (int32)(MyMsg_DATA) {
if packet.my.Tcpmode > 0 {
f := &Frame{}
f := &frame.Frame{}
err := proto.Unmarshal(packet.my.Data, f)
if err != nil {
loggo.Error("Unmarshal tcp Error %s", err)
@@ -166,6 +260,9 @@ func (p *Server) processPacket(packet *Packet) {
localConn.fm.OnRecvFrame(f)
} else {
if packet.my.Data == nil {
return
}
_, err := localConn.conn.Write(packet.my.Data)
if err != nil {
loggo.Info("WriteToUDP Error %s", err)
@@ -181,20 +278,25 @@ func (p *Server) processPacket(packet *Packet) {
func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
loggo.Info("server waiting target response %s -> %s %s", conn.tcpaddrTarget.String(), conn.id, conn.tcpconn.LocalAddr().String())
loggo.Info("start wait remote connect tcp %s %s", conn.id, conn.tcpaddrTarget.String())
startConnectTime := time.Now()
for {
startConnectTime := common.GetNowUpdateInSecond()
for !p.exit && !conn.exit {
if conn.fm.IsConnected() {
break
}
conn.fm.Update()
sendlist := conn.fm.getSendList()
sendlist := conn.fm.GetSendList()
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
mb, _ := proto.Marshal(f)
sendICMP(p.echoId, p.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
f := e.Value.(*frame.Frame)
mb, _ := conn.fm.MarshalFrame(f)
sendICMP(conn.echoId, conn.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
conn.rproto, -1, p.key,
0, 0, 0, 0, 0, 0,
0)
@@ -202,24 +304,27 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
p.sendPacketSize += (uint64)(len(mb))
}
time.Sleep(time.Millisecond * 10)
now := time.Now()
now := common.GetNowUpdateInSecond()
diffclose := now.Sub(startConnectTime)
if diffclose > time.Second*(time.Duration(conn.timeout)) {
if diffclose > time.Second*5 {
loggo.Info("can not connect remote tcp %s %s", conn.id, conn.tcpaddrTarget.String())
p.Close(conn)
p.close(conn)
p.remoteError(conn.echoId, conn.echoSeq, id, conn.rproto, src)
return
}
}
loggo.Info("remote connected tcp %s %s", conn.id, conn.tcpaddrTarget.String())
if !conn.exit {
loggo.Info("remote connected tcp %s %s", conn.id, conn.tcpaddrTarget.String())
}
bytes := make([]byte, 10240)
tcpActiveRecvTime := time.Now()
tcpActiveSendTime := time.Now()
tcpActiveRecvTime := common.GetNowUpdateInSecond()
tcpActiveSendTime := common.GetNowUpdateInSecond()
for {
now := time.Now()
for !p.exit && !conn.exit {
now := common.GetNowUpdateInSecond()
sleep := true
left := common.MinOfInt(conn.fm.GetSendBufferLeft(), len(bytes))
@@ -243,18 +348,18 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
conn.fm.Update()
sendlist := conn.fm.getSendList()
sendlist := conn.fm.GetSendList()
if sendlist.Len() > 0 {
sleep = false
conn.activeSendTime = now
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
mb, err := proto.Marshal(f)
f := e.Value.(*frame.Frame)
mb, err := conn.fm.MarshalFrame(f)
if err != nil {
loggo.Error("Error tcp Marshal %s %s %s", conn.id, conn.tcpaddrTarget.String(), err)
continue
}
sendICMP(p.echoId, p.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
sendICMP(conn.echoId, conn.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
conn.rproto, -1, p.key,
0, 0, 0, 0, 0, 0,
0)
@@ -291,7 +396,7 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
tcpdiffrecv := now.Sub(tcpActiveRecvTime)
tcpdiffsend := now.Sub(tcpActiveSendTime)
if diffrecv > time.Second*(time.Duration(conn.timeout)) || diffsend > time.Second*(time.Duration(conn.timeout)) ||
tcpdiffrecv > time.Second*(time.Duration(conn.timeout)) || tcpdiffsend > time.Second*(time.Duration(conn.timeout)) {
(tcpdiffrecv > time.Second*(time.Duration(conn.timeout)) && tcpdiffsend > time.Second*(time.Duration(conn.timeout))) {
loggo.Info("close inactive conn %s %s", conn.id, conn.tcpaddrTarget.String())
conn.fm.Close()
break
@@ -304,17 +409,19 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
}
}
conn.fm.Close()
startCloseTime := common.GetNowUpdateInSecond()
for !p.exit && !conn.exit {
now := common.GetNowUpdateInSecond()
conn.fm.Update()
sendlist := conn.fm.GetSendList()
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*frame.Frame)
mb, _ := conn.fm.MarshalFrame(f)
sendICMP(conn.echoId, conn.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
conn.rproto, -1, p.key,
0, 0, 0, 0, 0, 0,
0)
@@ -334,14 +441,12 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
}
diffclose := now.Sub(startCloseTime)
if diffclose > time.Second*60 {
loggo.Info("close conn had timeout %s %s", conn.id, conn.tcpaddrTarget.String())
break
}
remoteclosed := conn.fm.IsRemoteClosed()
if remoteclosed && nodatarecv {
loggo.Info("remote conn had closed %s %s", conn.id, conn.tcpaddrTarget.String())
break
@@ -353,15 +458,21 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
time.Sleep(time.Second)
loggo.Info("close tcp conn %s %s", conn.id, conn.tcpaddrTarget.String())
p.close(conn)
}
func (p *Server) Recv(conn *ServerConn, id string, src *net.IPAddr) {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
loggo.Info("server waiting target response %s -> %s %s", conn.ipaddrTarget.String(), conn.id, conn.conn.LocalAddr().String())
bytes := make([]byte, 2000)
for !p.exit {
conn.conn.SetReadDeadline(time.Now().Add(time.Millisecond * 100))
n, _, err := conn.conn.ReadFromUDP(bytes)
@@ -374,10 +485,10 @@ func (p *Server) Recv(conn *ServerConn, id string, src *net.IPAddr) {
}
}
now := common.GetNowUpdateInSecond()
conn.activeSendTime = now
sendICMP(conn.echoId, conn.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), bytes[:n],
conn.rproto, -1, p.key,
0, 0, 0, 0, 0, 0,
0)
@@ -387,22 +498,31 @@ func (p *Server) Recv(conn *ServerConn, id string, src *net.IPAddr) {
}
}
func (p *Server) close(conn *ServerConn) {
if p.getServerConnById(conn.id) != nil {
conn.exit = true
if conn.conn != nil {
conn.conn.Close()
}
if conn.tcpconn != nil {
conn.tcpconn.Close()
}
p.deleteServerConn(conn.id)
}
}
func (p *Server) checkTimeoutConn() {
tmp := make(map[string]*ServerConn)
p.localConnMap.Range(func(key, value interface{}) bool {
id := key.(string)
serverConn := value.(*ServerConn)
tmp[id] = serverConn
return true
})
now := common.GetNowUpdateInSecond()
for _, conn := range tmp {
if conn.tcpmode > 0 {
continue
}
@@ -413,22 +533,82 @@ func (p *Server) checkTimeoutConn() {
}
}
for id, conn := range tmp {
if conn.tcpmode > 0 {
continue
}
if conn.close {
loggo.Info("close inactive conn %s %s", id, conn.ipaddrTarget.String())
p.close(conn)
}
}
}
func (p *Server) showNet() {
p.localConnMapSize = 0
p.localConnMap.Range(func(key, value interface{}) bool {
p.localConnMapSize++
return true
})
loggo.Info("send %dPacket/s %dKB/s recv %dPacket/s %dKB/s %dConnections",
p.sendPacket, p.sendPacketSize/1024, p.recvPacket, p.recvPacketSize/1024, p.localConnMapSize)
p.sendPacket = 0
p.recvPacket = 0
p.sendPacketSize = 0
p.recvPacketSize = 0
}
func (p *Server) addServerConn(uuid string, serverConn *ServerConn) {
p.localConnMap.Store(uuid, serverConn)
}
func (p *Server) getServerConnById(uuid string) *ServerConn {
ret, ok := p.localConnMap.Load(uuid)
if !ok {
return nil
}
return ret.(*ServerConn)
}
func (p *Server) deleteServerConn(uuid string) {
p.localConnMap.Delete(uuid)
}
func (p *Server) remoteError(echoId int, echoSeq int, uuid string, rproto int, src *net.IPAddr) {
sendICMP(echoId, echoSeq, *p.conn, src, "", uuid, (uint32)(MyMsg_KICK), []byte{},
rproto, -1, p.key,
0, 0, 0, 0, 0, 0,
0)
}
func (p *Server) addConnError(addr string) {
_, ok := p.connErrorMap.Load(addr)
if !ok {
now := common.GetNowUpdateInSecond()
p.connErrorMap.Store(addr, now)
}
}
func (p *Server) isConnError(addr string) bool {
_, ok := p.connErrorMap.Load(addr)
return ok
}
func (p *Server) updateConnError() {
tmp := make(map[string]time.Time)
p.connErrorMap.Range(func(key, value interface{}) bool {
id := key.(string)
t := value.(time.Time)
tmp[id] = t
return true
})
now := common.GetNowUpdateInSecond()
for id, t := range tmp {
diff := now.Sub(t)
if diff > time.Second*5 {
p.connErrorMap.Delete(id)
}
}
}

sock5.go
@@ -1,136 +0,0 @@
package pingtunnel
import (
"encoding/binary"
"errors"
"io"
"net"
"strconv"
"time"
)
var (
errAddrType = errors.New("socks addr type not supported")
errVer = errors.New("socks version not supported")
errMethod = errors.New("socks only support 1 method now")
errAuthExtraData = errors.New("socks authentication get extra data")
errReqExtraData = errors.New("socks request get extra data")
errCmd = errors.New("socks command not supported")
)
const (
socksVer5 = 5
socksCmdConnect = 1
)
func sock5Handshake(conn net.Conn) (err error) {
const (
idVer = 0
idNmethod = 1
)
// version identification and method selection message in theory can have
// at most 256 methods, plus version and nmethod field in total 258 bytes
// the current rfc defines only 3 authentication methods (plus 2 reserved),
// so it won't be that long in practice
buf := make([]byte, 258)
var n int
conn.SetReadDeadline(time.Now().Add(time.Millisecond * 100))
// make sure we get the nmethod field
if n, err = io.ReadAtLeast(conn, buf, idNmethod+1); err != nil {
return
}
if buf[idVer] != socksVer5 {
return errVer
}
nmethod := int(buf[idNmethod])
msgLen := nmethod + 2
if n == msgLen { // handshake done, common case
// do nothing, jump directly to send confirmation
} else if n < msgLen { // has more methods to read, rare case
if _, err = io.ReadFull(conn, buf[n:msgLen]); err != nil {
return
}
} else { // error, should not get extra data
return errAuthExtraData
}
// send confirmation: version 5, no authentication required
_, err = conn.Write([]byte{socksVer5, 0})
return
}
func sock5GetRequest(conn net.Conn) (rawaddr []byte, host string, err error) {
const (
idVer = 0
idCmd = 1
idType = 3 // address type index
idIP0 = 4 // ip address start index
idDmLen = 4 // domain address length index
idDm0 = 5 // domain address start index
typeIPv4 = 1 // type is ipv4 address
typeDm = 3 // type is domain address
typeIPv6 = 4 // type is ipv6 address
lenIPv4 = 3 + 1 + net.IPv4len + 2 // 3(ver+cmd+rsv) + 1addrType + ipv4 + 2port
lenIPv6 = 3 + 1 + net.IPv6len + 2 // 3(ver+cmd+rsv) + 1addrType + ipv6 + 2port
lenDmBase = 3 + 1 + 1 + 2 // 3 + 1addrType + 1addrLen + 2port, plus addrLen
)
// refer to getRequest in server.go for why the buffer size is set to 263
buf := make([]byte, 263)
var n int
conn.SetReadDeadline(time.Now().Add(time.Millisecond * 100))
// read till we get possible domain length field
if n, err = io.ReadAtLeast(conn, buf, idDmLen+1); err != nil {
return
}
// check version and cmd
if buf[idVer] != socksVer5 {
err = errVer
return
}
if buf[idCmd] != socksCmdConnect {
err = errCmd
return
}
reqLen := -1
switch buf[idType] {
case typeIPv4:
reqLen = lenIPv4
case typeIPv6:
reqLen = lenIPv6
case typeDm:
reqLen = int(buf[idDmLen]) + lenDmBase
default:
err = errAddrType
return
}
if n == reqLen {
// common case, do nothing
} else if n < reqLen { // rare case
if _, err = io.ReadFull(conn, buf[n:reqLen]); err != nil {
return
}
} else {
err = errReqExtraData
return
}
rawaddr = buf[idType:reqLen]
switch buf[idType] {
case typeIPv4:
host = net.IP(buf[idIP0 : idIP0+net.IPv4len]).String()
case typeIPv6:
host = net.IP(buf[idIP0 : idIP0+net.IPv6len]).String()
case typeDm:
host = string(buf[idDm0 : idDm0+buf[idDmLen]])
}
port := binary.BigEndian.Uint16(buf[reqLen-2 : reqLen])
host = net.JoinHostPort(host, strconv.Itoa(int(port)))
return
}
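The deleted sock5GetRequest above decodes a SOCKS5 (RFC 1928) request laid out as VER, CMD, RSV, ATYP, then address and port. A minimal sketch of just the IPv4 branch using the same offsets (parseSocks5IPv4 is a hypothetical helper, not part of pingtunnel):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
	"net"
	"strconv"
)

// parseSocks5IPv4 decodes "host:port" from a fully buffered SOCKS5 CONNECT
// request carrying an IPv4 address:
// VER(1) CMD(1) RSV(1) ATYP(1) IP(4) PORT(2) = 10 bytes.
func parseSocks5IPv4(buf []byte) (string, error) {
	const (
		idVer   = 0
		idCmd   = 1
		idType  = 3
		idIP0   = 4
		lenIPv4 = 3 + 1 + net.IPv4len + 2
	)
	if len(buf) < lenIPv4 {
		return "", errors.New("short request")
	}
	if buf[idVer] != 5 || buf[idCmd] != 1 || buf[idType] != 1 {
		return "", errors.New("not an IPv4 SOCKS5 CONNECT")
	}
	host := net.IP(buf[idIP0 : idIP0+net.IPv4len]).String()
	port := binary.BigEndian.Uint16(buf[lenIPv4-2 : lenIPv4])
	return net.JoinHostPort(host, strconv.Itoa(int(port))), nil
}

func main() {
	// VER=5 CMD=CONNECT RSV=0 ATYP=IPv4 ADDR=10.0.0.1 PORT=8080 (0x1f90)
	req := []byte{5, 1, 0, 1, 10, 0, 0, 1, 0x1f, 0x90}
	addr, err := parseSocks5IPv4(req)
	fmt.Println(addr, err) // 10.0.0.1:8080 <nil>
}
```

The real function additionally handles domain and IPv6 address types, which is why it reads up to the variable-length domain field before it knows the full request length.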