Compare commits


No commits in common. "master" and "1.0" have entirely different histories.
master ... 1.0

25 changed files with 1541 additions and 1085 deletions


@ -1,34 +0,0 @@
name: Docker Image CI

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      -
        name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: esrrhs/pingtunnel:latest


@ -1,30 +0,0 @@
# This workflow will build a golang project
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-go

name: Go

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.21
      - name: Build
        run: |
          go mod tidy
          go build -v ./...
      - name: Test
        run: go test -v ./...

.idea/vcs.xml (generated, new file)

@ -0,0 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="VcsDirectoryMappings">
    <mapping directory="$PROJECT_DIR$" vcs="Git" />
    <mapping directory="$PROJECT_DIR$/src/github.com/esrrhs/pingtunnel" vcs="Git" />
  </component>
</project>


@ -1,13 +0,0 @@
FROM golang AS build-env
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . ./
RUN go build -v -o pingtunnel
FROM debian
COPY --from=build-env /app/pingtunnel .
COPY GeoLite2-Country.mmdb .
WORKDIR ./

Binary file not shown.

README.md

@ -1,78 +1,98 @@
# Pingtunnel
Pingtunnel is a tool that disguises tcp/udp/sock5 traffic as icmp traffic and forwards it. It can be used to break through network blocks, bypass WiFi captive-portal login checks, or speed up transmission on some networks.
[<img src="https://img.shields.io/github/license/esrrhs/pingtunnel">](https://github.com/esrrhs/pingtunnel)
[<img src="https://img.shields.io/github/languages/top/esrrhs/pingtunnel">](https://github.com/esrrhs/pingtunnel)
[![Go Report Card](https://goreportcard.com/badge/github.com/esrrhs/pingtunnel)](https://goreportcard.com/report/github.com/esrrhs/pingtunnel)
[<img src="https://img.shields.io/github/v/release/esrrhs/pingtunnel">](https://github.com/esrrhs/pingtunnel/releases)
[<img src="https://img.shields.io/github/downloads/esrrhs/pingtunnel/total">](https://github.com/esrrhs/pingtunnel/releases)
[<img src="https://img.shields.io/docker/pulls/esrrhs/pingtunnel">](https://hub.docker.com/repository/docker/esrrhs/pingtunnel)
[<img src="https://img.shields.io/github/actions/workflow/status/esrrhs/pingtunnel/go.yml?branch=master">](https://github.com/esrrhs/pingtunnel/actions)
![image](network.png)
Pingtunnel is a tool that sends TCP/UDP traffic over ICMP.
## Note: This tool is only to be used for study and research; do not use it for illegal purposes
![image](network.jpg)
## Usage
### Install server
- First prepare a server with a public IP, such as EC2 on AWS, assuming the domain name or public IP is www.yourserver.com
- Download the corresponding installation package from [releases](https://github.com/esrrhs/pingtunnel/releases), such as pingtunnel_linux64.zip, then decompress and execute with **root** privileges
- The “-key” parameter is an **int**; it only supports numbers between 0 and 2147483647
# Why use this
* If the server's IP is blocked by censorship so that all tcp/udp packets are dropped, but it can still be pinged, you can use this tool to keep connecting to the server.
* In a coffee shop or airport you can join the free wifi but a login page blocks normal traffic; you can use this tool to bypass the login, because the wifi cannot reach the Internet yet it can still ping your server.
* On some networks tcp transfers are very slow, but using the icmp protocol may be faster because of the operator's settings or the network topology. In testing, connecting to an aws server from mainland China was noticeably accelerated.
# Sample
For example, to forward the local UDP traffic on :4455 to www.yourserver.com:4455:
* Run with root privileges on the server at www.yourserver.com
```
sudo wget (link of latest release)
sudo unzip pingtunnel_linux64.zip
sudo ./pingtunnel -type server
```
- (Optional) Disable system default ping
```
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
```
### Install the client
- Download the corresponding installation package from [releases](https://github.com/esrrhs/pingtunnel/releases), such as pingtunnel_windows64.zip, and decompress it
- Then run with **administrator** privileges. The commands corresponding to different forwarding functions are as follows.
- If you see a log of ping pong, the connection is normal
- The “-key” parameter is an **int**; it only supports numbers between 0 and 2147483647
#### Forward sock5
```
pingtunnel.exe -type client -l :4455 -s www.yourserver.com -sock5 1
```
#### Forward tcp
```
pingtunnel.exe -type client -l :4455 -s www.yourserver.com -t www.yourserver.com:4455 -tcp 1
```
#### Forward udp
* Run with administrator privileges on your local computer
```
pingtunnel.exe -type client -l :4455 -s www.yourserver.com -t www.yourserver.com:4455
```
### Use Docker
It can also be started directly with docker, which is more convenient. Same parameters as above
- server:
* If you see continuous ping/pong log output on the client, it is working normally
```
docker run --name pingtunnel-server -d --privileged --network host --restart=always esrrhs/pingtunnel ./pingtunnel -type server -key 123456
ping www.xx.com 2018-12-23 13:05:50.5724495 +0800 CST m=+3.023909301 8 0 1997 2
pong from xx.xx.xx.xx 210.8078ms
```
- client:
* If you want to forward tcp traffic, just add the -tcp parameter on the client.
```
docker run --name pingtunnel-client -d --restart=always -p 1080:1080 esrrhs/pingtunnel ./pingtunnel -type client -l :1080 -s www.yourserver.com -sock5 1 -key 123456
pingtunnel.exe -type client -l :4455 -s www.yourserver.com -t www.yourserver.com:4455 -tcp 1
```
* If you want to forward sock5 traffic, just add the -sock5 parameter on the client.
```
pingtunnel.exe -type client -l :4455 -s www.yourserver.com -sock5 1
```
* Done. You can now talk to the local :4455 port and the data is automatically forwarded to the remote end, as if you were connected to www.yourserver.com:4455.
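As a quick end-to-end check of the UDP forwarding described above, here is a minimal Go sketch. It assumes a pingtunnel client is already running in UDP forward mode on 127.0.0.1:4455; the payload and addresses are only placeholders, and whatever service sits behind www.yourserver.com:4455 may or may not echo a reply.
```
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the local pingtunnel client; it relays the datagram over ICMP.
	conn, err := net.Dial("udp", "127.0.0.1:4455")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// The payload is opaque to pingtunnel and is forwarded as-is.
	if _, err := conn.Write([]byte("hello through the tunnel")); err != nil {
		panic(err)
	}

	// Wait briefly for an answer; the remote target may not echo anything.
	conn.SetReadDeadline(time.Now().Add(3 * time.Second))
	buf := make([]byte, 1500)
	n, err := conn.Read(buf)
	if err != nil {
		fmt.Println("no reply:", err)
		return
	}
	fmt.Printf("reply from target: %q\n", buf[:n])
}
```
If the remote target echoes data back, it arrives on the same local UDP socket, which confirms the tunnel is working in both directions.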
## Thanks for the free JetBrains Open Source license
# Usage
By forging ping packets, tcp/udp/sock5 traffic is forwarded through the remote server to the destination server. This can be used to get around operators that block TCP/UDP traffic.
<img src="https://resources.jetbrains.com/storage/products/company/brand/logos/GoLand.png" height="200"/></a>
Usage:
// server
pingtunnel -type server
// client, Forward udp
pingtunnel -type client -l LOCAL_IP:4455 -s SERVER_IP -t SERVER_IP:4455
// client, Forward tcp
pingtunnel -type client -l LOCAL_IP:4455 -s SERVER_IP -t SERVER_IP:4455 -tcp 1
// client, Forward sock5, implicitly open tcp, so no target server is needed
pingtunnel -type client -l LOCAL_IP:4455 -s SERVER_IP -sock5 1
-type      client or server
-l         Local address; traffic sent to this port will be forwarded to the server
-s         The address of the server; traffic will be forwarded to this server through the tunnel
-t         Destination address used by the remote server; traffic will be forwarded to this address
-timeout   Timeout for locally recorded connections, in seconds, default 60
-key       Set password, default 0
-tcp       Set the switch to forward tcp, default 0
-tcp_bs    Tcp send and receive buffer size, default 10MB
-tcp_mw    Maximum tcp window, default 10000
-tcp_rst   Tcp timeout resend time, default 400ms
-tcp_gz    Compress tcp data when the packet exceeds this size, 0 means no compression, default 0
-tcp_stat  Print tcp connection statistics, default 0 is off
-nolog     Do not write log files, only print standard output, default 0 is off
-loglevel  Log level, default is info
-sock5     Turn on sock5 forwarding, default 0 is off


@ -1 +0,0 @@
theme: jekyll-theme-cayman

client.go

@ -1,17 +1,13 @@
package pingtunnel
import (
"github.com/esrrhs/gohome/common"
"github.com/esrrhs/gohome/frame"
"github.com/esrrhs/gohome/loggo"
"github.com/esrrhs/gohome/network"
"github.com/esrrhs/go-engine/src/common"
"github.com/esrrhs/go-engine/src/loggo"
"github.com/golang/protobuf/proto"
"golang.org/x/net/icmp"
"io"
"math"
"math/rand"
"net"
"sync"
"time"
)
@ -22,7 +18,7 @@ const (
func NewClient(addr string, server string, target string, timeout int, key int,
tcpmode int, tcpmode_buffersize int, tcpmode_maxwin int, tcpmode_resend_timems int, tcpmode_compress int,
tcpmode_stat int, open_sock5 int, maxconn int, sock5_filter *func(addr string) bool) (*Client, error) {
tcpmode_stat int, open_sock5 int) (*Client, error) {
var ipaddr *net.UDPAddr
var tcpaddr *net.TCPAddr
@ -45,11 +41,9 @@ func NewClient(addr string, server string, target string, timeout int, key int,
return nil, err
}
rand.Seed(time.Now().UnixNano())
r := rand.New(rand.NewSource(time.Now().UnixNano()))
return &Client{
exit: false,
rtt: 0,
id: rand.Intn(math.MaxInt16),
id: r.Intn(math.MaxInt16),
ipaddr: ipaddr,
tcpaddr: tcpaddr,
addr: addr,
@ -65,18 +59,10 @@ func NewClient(addr string, server string, target string, timeout int, key int,
tcpmode_compress: tcpmode_compress,
tcpmode_stat: tcpmode_stat,
open_sock5: open_sock5,
maxconn: maxconn,
pongTime: time.Now(),
sock5_filter: sock5_filter,
}, nil
}
type Client struct {
exit bool
rtt time.Duration
workResultLock sync.WaitGroup
maxconn int
id int
sequence int
@ -90,9 +76,7 @@ type Client struct {
tcpmode_resend_timems int
tcpmode_compress int
tcpmode_stat int
open_sock5 int
sock5_filter *func(addr string) bool
open_sock5 int
ipaddr *net.UDPAddr
tcpaddr *net.TCPAddr
@ -107,23 +91,16 @@ type Client struct {
listenConn *net.UDPConn
tcplistenConn *net.TCPListener
localAddrToConnMap sync.Map
localIdToConnMap sync.Map
localAddrToConnMap map[string]*ClientConn
localIdToConnMap map[string]*ClientConn
sendPacket uint64
recvPacket uint64
sendPacketSize uint64
recvPacketSize uint64
localAddrToConnMapSize int
localIdToConnMapSize int
recvcontrol chan int
pongTime time.Time
sendPacket uint64
recvPacket uint64
sendPacketSize uint64
recvPacketSize uint64
}
type ClientConn struct {
exit bool
ipaddr *net.UDPAddr
tcpaddr *net.TCPAddr
id string
@ -131,7 +108,7 @@ type ClientConn struct {
activeSendTime time.Time
close bool
fm *frame.FrameMgr
fm *FrameMgr
}
func (p *Client) Addr() string {
@ -154,59 +131,37 @@ func (p *Client) ServerAddr() string {
return p.addrServer
}
func (p *Client) RTT() time.Duration {
return p.rtt
}
func (p *Client) RecvPacketSize() uint64 {
return p.recvPacketSize
}
func (p *Client) SendPacketSize() uint64 {
return p.sendPacketSize
}
func (p *Client) RecvPacket() uint64 {
return p.recvPacket
}
func (p *Client) SendPacket() uint64 {
return p.sendPacket
}
func (p *Client) LocalIdToConnMapSize() int {
return p.localIdToConnMapSize
}
func (p *Client) LocalAddrToConnMapSize() int {
return p.localAddrToConnMapSize
}
func (p *Client) Run() error {
func (p *Client) Run() {
conn, err := icmp.ListenPacket("ip4:icmp", "")
if err != nil {
loggo.Error("Error listening for ICMP packets: %s", err.Error())
return err
return
}
defer conn.Close()
p.conn = conn
if p.tcpmode > 0 {
tcplistenConn, err := net.ListenTCP("tcp", p.tcpaddr)
if err != nil {
loggo.Error("Error listening for tcp packets: %s", err.Error())
return err
return
}
defer tcplistenConn.Close()
p.tcplistenConn = tcplistenConn
} else {
listener, err := net.ListenUDP("udp", p.ipaddr)
if err != nil {
loggo.Error("Error listening for udp packets: %s", err.Error())
return err
return
}
defer listener.Close()
p.listenConn = listener
}
p.localAddrToConnMap = make(map[string]*ClientConn)
p.localIdToConnMap = make(map[string]*ClientConn)
if p.tcpmode > 0 {
go p.AcceptTcp()
} else {
@ -214,77 +169,28 @@ func (p *Client) Run() error {
}
recv := make(chan *Packet, 10000)
p.recvcontrol = make(chan int, 1)
go recvICMP(&p.workResultLock, &p.exit, *p.conn, recv)
go recvICMP(*p.conn, recv)
go func() {
defer common.CrashLog()
interval := time.NewTicker(time.Second)
defer interval.Stop()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
for !p.exit {
for {
select {
case <-interval.C:
p.checkTimeoutConn()
p.ping()
p.showNet()
time.Sleep(time.Second)
case r := <-recv:
p.processPacket(r)
}
}()
go func() {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
for !p.exit {
p.updateServerAddr()
time.Sleep(time.Second)
}
}()
go func() {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
for !p.exit {
select {
case <-p.recvcontrol:
return
case r := <-recv:
p.processPacket(r)
}
}
}()
return nil
}
func (p *Client) Stop() {
p.exit = true
p.recvcontrol <- 1
p.workResultLock.Wait()
p.conn.Close()
if p.tcplistenConn != nil {
p.tcplistenConn.Close()
}
if p.listenConn != nil {
p.listenConn.Close()
}
}
func (p *Client) AcceptTcp() error {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
loggo.Info("client waiting local accept tcp")
for !p.exit {
for {
p.tcplistenConn.SetDeadline(time.Now().Add(time.Millisecond * 1000))
conn, err := p.tcplistenConn.AcceptTCP()
@ -304,45 +210,35 @@ func (p *Client) AcceptTcp() error {
}
}
}
return nil
}
func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
uuid := UniqueId()
tcpsrcaddr := conn.RemoteAddr().(*net.TCPAddr)
if p.maxconn > 0 && p.localIdToConnMapSize >= p.maxconn {
loggo.Info("too many connections %d, client accept new local tcp fail %s", p.localIdToConnMapSize, tcpsrcaddr.String())
return
}
uuid := common.UniqueId()
fm := frame.NewFrameMgr(FRAME_MAX_SIZE, FRAME_MAX_ID, p.tcpmode_buffersize, p.tcpmode_maxwin, p.tcpmode_resend_timems, p.tcpmode_compress, p.tcpmode_stat)
fm := NewFrameMgr(p.tcpmode_buffersize, p.tcpmode_maxwin, p.tcpmode_resend_timems, p.tcpmode_compress, p.tcpmode_stat)
now := time.Now()
clientConn := &ClientConn{exit: false, tcpaddr: tcpsrcaddr, id: uuid, activeRecvTime: now, activeSendTime: now, close: false,
clientConn := &ClientConn{tcpaddr: tcpsrcaddr, id: uuid, activeRecvTime: now, activeSendTime: now, close: false,
fm: fm}
p.addClientConn(uuid, tcpsrcaddr.String(), clientConn)
p.localAddrToConnMap[tcpsrcaddr.String()] = clientConn
p.localIdToConnMap[uuid] = clientConn
loggo.Info("client accept new local tcp %s %s", uuid, tcpsrcaddr.String())
loggo.Info("start connect remote tcp %s %s", uuid, tcpsrcaddr.String())
clientConn.fm.Connect()
startConnectTime := common.GetNowUpdateInSecond()
for !p.exit && !clientConn.exit {
startConnectTime := time.Now()
for {
if clientConn.fm.IsConnected() {
break
}
clientConn.fm.Update()
sendlist := clientConn.fm.GetSendList()
sendlist := clientConn.fm.getSendList()
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*frame.Frame)
mb, _ := clientConn.fm.MarshalFrame(f)
f := e.Value.(*Frame)
mb, _ := proto.Marshal(f)
p.sequence++
sendICMP(p.id, p.sequence, *p.conn, p.ipaddrServer, targetAddr, clientConn.id, (uint32)(MyMsg_DATA), mb,
SEND_PROTO, RECV_PROTO, p.key,
@ -352,26 +248,23 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
p.sendPacketSize += (uint64)(len(mb))
}
time.Sleep(time.Millisecond * 10)
now := common.GetNowUpdateInSecond()
now := time.Now()
diffclose := now.Sub(startConnectTime)
if diffclose > time.Second*5 {
if diffclose > time.Second*(time.Duration(p.timeout)) {
loggo.Info("can not connect remote tcp %s %s", uuid, tcpsrcaddr.String())
p.close(clientConn)
p.Close(clientConn)
return
}
}
if !clientConn.exit {
loggo.Info("connected remote tcp %s %s", uuid, tcpsrcaddr.String())
}
loggo.Info("connected remote tcp %s %s", uuid, tcpsrcaddr.String())
bytes := make([]byte, 10240)
tcpActiveRecvTime := common.GetNowUpdateInSecond()
tcpActiveSendTime := common.GetNowUpdateInSecond()
tcpActiveRecvTime := time.Now()
tcpActiveSendTime := time.Now()
for !p.exit && !clientConn.exit {
now := common.GetNowUpdateInSecond()
for {
now := time.Now()
sleep := true
left := common.MinOfInt(clientConn.fm.GetSendBufferLeft(), len(bytes))
@ -395,13 +288,13 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
clientConn.fm.Update()
sendlist := clientConn.fm.GetSendList()
sendlist := clientConn.fm.getSendList()
if sendlist.Len() > 0 {
sleep = false
clientConn.activeSendTime = now
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*frame.Frame)
mb, err := clientConn.fm.MarshalFrame(f)
f := e.Value.(*Frame)
mb, err := proto.Marshal(f)
if err != nil {
loggo.Error("Error tcp Marshal %s %s %s", uuid, tcpsrcaddr.String(), err)
continue
@ -444,7 +337,7 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
tcpdiffrecv := now.Sub(tcpActiveRecvTime)
tcpdiffsend := now.Sub(tcpActiveSendTime)
if diffrecv > time.Second*(time.Duration(p.timeout)) || diffsend > time.Second*(time.Duration(p.timeout)) ||
(tcpdiffrecv > time.Second*(time.Duration(p.timeout)) && tcpdiffsend > time.Second*(time.Duration(p.timeout))) {
tcpdiffrecv > time.Second*(time.Duration(p.timeout)) || tcpdiffsend > time.Second*(time.Duration(p.timeout)) {
loggo.Info("close inactive conn %s %s", clientConn.id, clientConn.tcpaddr.String())
clientConn.fm.Close()
break
@ -457,18 +350,16 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
}
}
clientConn.fm.Close()
startCloseTime := common.GetNowUpdateInSecond()
for !p.exit && !clientConn.exit {
now := common.GetNowUpdateInSecond()
startCloseTime := time.Now()
for {
now := time.Now()
clientConn.fm.Update()
sendlist := clientConn.fm.GetSendList()
sendlist := clientConn.fm.getSendList()
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*frame.Frame)
mb, _ := clientConn.fm.MarshalFrame(f)
f := e.Value.(*Frame)
mb, _ := proto.Marshal(f)
p.sequence++
sendICMP(p.id, p.sequence, *p.conn, p.ipaddrServer, targetAddr, clientConn.id, (uint32)(MyMsg_DATA), mb,
SEND_PROTO, RECV_PROTO, p.key,
@ -490,12 +381,14 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
}
diffclose := now.Sub(startCloseTime)
if diffclose > time.Second*60 {
timeout := diffclose > time.Second*(time.Duration(p.timeout))
remoteclosed := clientConn.fm.IsRemoteClosed()
if timeout {
loggo.Info("close conn had timeout %s %s", clientConn.id, clientConn.tcpaddr.String())
break
}
remoteclosed := clientConn.fm.IsRemoteClosed()
if remoteclosed && nodatarecv {
loggo.Info("remote conn had closed %s %s", clientConn.id, clientConn.tcpaddr.String())
break
@ -506,21 +399,16 @@ func (p *Client) AcceptTcpConn(conn *net.TCPConn, targetAddr string) {
loggo.Info("close tcp conn %s %s", clientConn.id, clientConn.tcpaddr.String())
conn.Close()
p.close(clientConn)
p.Close(clientConn)
}
func (p *Client) Accept() error {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
loggo.Info("client waiting local accept udp")
bytes := make([]byte, 10240)
for !p.exit {
for {
p.listenConn.SetReadDeadline(time.Now().Add(time.Millisecond * 100))
n, srcaddr, err := p.listenConn.ReadFromUDP(bytes)
if err != nil {
@ -534,16 +422,13 @@ func (p *Client) Accept() error {
continue
}
now := common.GetNowUpdateInSecond()
clientConn := p.getClientConnByAddr(srcaddr.String())
now := time.Now()
clientConn := p.localAddrToConnMap[srcaddr.String()]
if clientConn == nil {
if p.maxconn > 0 && p.localIdToConnMapSize >= p.maxconn {
loggo.Info("too many connections %d, client accept new local udp fail %s", p.localIdToConnMapSize, srcaddr.String())
continue
}
uuid := common.UniqueId()
clientConn = &ClientConn{exit: false, ipaddr: srcaddr, id: uuid, activeRecvTime: now, activeSendTime: now, close: false}
p.addClientConn(uuid, srcaddr.String(), clientConn)
uuid := UniqueId()
clientConn = &ClientConn{ipaddr: srcaddr, id: uuid, activeRecvTime: now, activeSendTime: now, close: false}
p.localAddrToConnMap[srcaddr.String()] = clientConn
p.localIdToConnMap[uuid] = clientConn
loggo.Info("client accept new local udp %s %s", uuid, srcaddr.String())
}
@ -558,7 +443,6 @@ func (p *Client) Accept() error {
p.sendPacket++
p.sendPacketSize += (uint64)(n)
}
return nil
}
func (p *Client) processPacket(packet *Packet) {
@ -578,37 +462,26 @@ func (p *Client) processPacket(packet *Packet) {
if packet.my.Type == (int32)(MyMsg_PING) {
t := time.Time{}
t.UnmarshalBinary(packet.my.Data)
now := time.Now()
d := now.Sub(t)
d := time.Now().Sub(t)
loggo.Info("pong from %s %s", packet.src.String(), d.String())
p.rtt = d
p.pongTime = now
return
}
if packet.my.Type == (int32)(MyMsg_KICK) {
clientConn := p.getClientConnById(packet.my.Id)
if clientConn != nil {
p.close(clientConn)
loggo.Info("remote kick local %s", packet.my.Id)
}
return
}
loggo.Debug("processPacket %s %s %d", packet.my.Id, packet.src.String(), len(packet.my.Data))
clientConn := p.getClientConnById(packet.my.Id)
clientConn := p.localIdToConnMap[packet.my.Id]
if clientConn == nil {
loggo.Debug("processPacket no conn %s ", packet.my.Id)
p.remoteError(packet.my.Id)
return
}
now := common.GetNowUpdateInSecond()
addr := clientConn.ipaddr
now := time.Now()
clientConn.activeRecvTime = now
if p.tcpmode > 0 {
f := &frame.Frame{}
f := &Frame{}
err := proto.Unmarshal(packet.my.Data, f)
if err != nil {
loggo.Error("Unmarshal tcp Error %s", err)
@ -617,10 +490,6 @@ func (p *Client) processPacket(packet *Packet) {
clientConn.fm.OnRecvFrame(f)
} else {
if packet.my.Data == nil {
return
}
addr := clientConn.ipaddr
_, err := p.listenConn.WriteToUDP(packet.my.Data, addr)
if err != nil {
loggo.Info("WriteToUDP Error read udp %s", err)
@ -633,10 +502,11 @@ func (p *Client) processPacket(packet *Packet) {
p.recvPacketSize += (uint64)(len(packet.my.Data))
}
func (p *Client) close(clientConn *ClientConn) {
clientConn.exit = true
p.deleteClientConn(clientConn.id, clientConn.ipaddr.String())
p.deleteClientConn(clientConn.id, clientConn.tcpaddr.String())
func (p *Client) Close(clientConn *ClientConn) {
if p.localIdToConnMap[clientConn.id] != nil {
delete(p.localIdToConnMap, clientConn.id)
delete(p.localAddrToConnMap, clientConn.ipaddr.String())
}
}
func (p *Client) checkTimeoutConn() {
@ -645,16 +515,8 @@ func (p *Client) checkTimeoutConn() {
return
}
tmp := make(map[string]*ClientConn)
p.localIdToConnMap.Range(func(key, value interface{}) bool {
id := key.(string)
clientConn := value.(*ClientConn)
tmp[id] = clientConn
return true
})
now := common.GetNowUpdateInSecond()
for _, conn := range tmp {
now := time.Now()
for _, conn := range p.localIdToConnMap {
diffrecv := now.Sub(conn.activeRecvTime)
diffsend := now.Sub(conn.activeSendTime)
if diffrecv > time.Second*(time.Duration(p.timeout)) || diffsend > time.Second*(time.Duration(p.timeout)) {
@ -662,41 +524,30 @@ func (p *Client) checkTimeoutConn() {
}
}
for id, conn := range tmp {
for id, conn := range p.localIdToConnMap {
if conn.close {
loggo.Info("close inactive conn %s %s", id, conn.ipaddr.String())
p.close(conn)
p.Close(conn)
}
}
}
func (p *Client) ping() {
now := time.Now()
b, _ := now.MarshalBinary()
sendICMP(p.id, p.sequence, *p.conn, p.ipaddrServer, "", "", (uint32)(MyMsg_PING), b,
SEND_PROTO, RECV_PROTO, p.key,
0, 0, 0, 0, 0, 0,
0)
loggo.Info("ping %s %s %d %d %d %d", p.addrServer, now.String(), p.sproto, p.rproto, p.id, p.sequence)
p.sequence++
if now.Sub(p.pongTime) > time.Second*3 {
p.rtt = 0
if p.sendPacket == 0 {
now := time.Now()
b, _ := now.MarshalBinary()
sendICMP(p.id, p.sequence, *p.conn, p.ipaddrServer, "", "", (uint32)(MyMsg_PING), b,
SEND_PROTO, RECV_PROTO, p.key,
0, 0, 0, 0, 0, 0,
0)
loggo.Info("ping %s %s %d %d %d %d", p.addrServer, now.String(), p.sproto, p.rproto, p.id, p.sequence)
p.sequence++
}
}
func (p *Client) showNet() {
p.localAddrToConnMapSize = 0
p.localIdToConnMap.Range(func(key, value interface{}) bool {
p.localAddrToConnMapSize++
return true
})
p.localIdToConnMapSize = 0
p.localIdToConnMap.Range(func(key, value interface{}) bool {
p.localIdToConnMapSize++
return true
})
loggo.Info("send %dPacket/s %dKB/s recv %dPacket/s %dKB/s %d/%dConnections",
p.sendPacket, p.sendPacketSize/1024, p.recvPacket, p.recvPacketSize/1024, p.localAddrToConnMapSize, p.localIdToConnMapSize)
loggo.Info("send %dPacket/s %dKB/s recv %dPacket/s %dKB/s",
p.sendPacket, p.sendPacketSize/1024, p.recvPacket, p.recvPacketSize/1024)
p.sendPacket = 0
p.recvPacket = 0
p.sendPacketSize = 0
@ -705,18 +556,13 @@ func (p *Client) showNet() {
func (p *Client) AcceptSock5Conn(conn *net.TCPConn) {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
var err error = nil
if err = network.Sock5HandshakeBy(conn, "", ""); err != nil {
if err = sock5Handshake(conn); err != nil {
loggo.Error("socks handshake: %s", err)
conn.Close()
return
}
_, addr, err := network.Sock5GetRequest(conn)
_, addr, err := sock5GetRequest(conn)
if err != nil {
loggo.Error("error getting request: %s", err)
conn.Close()
@ -727,104 +573,12 @@ func (p *Client) AcceptSock5Conn(conn *net.TCPConn) {
// But if connection failed, the client will get connection reset error.
_, err = conn.Write([]byte{0x05, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x08, 0x43})
if err != nil {
loggo.Error("send connection confirmation: %s", err)
loggo.Error("send connection confirmation:", err)
conn.Close()
return
}
loggo.Info("accept new sock5 conn: %s", addr)
if p.sock5_filter == nil {
p.AcceptTcpConn(conn, addr)
} else {
if (*p.sock5_filter)(addr) {
p.AcceptTcpConn(conn, addr)
return
}
p.AcceptDirectTcpConn(conn, addr)
}
}
func (p *Client) addClientConn(uuid string, addr string, clientConn *ClientConn) {
p.localAddrToConnMap.Store(addr, clientConn)
p.localIdToConnMap.Store(uuid, clientConn)
}
func (p *Client) getClientConnByAddr(addr string) *ClientConn {
ret, ok := p.localAddrToConnMap.Load(addr)
if !ok {
return nil
}
return ret.(*ClientConn)
}
func (p *Client) getClientConnById(uuid string) *ClientConn {
ret, ok := p.localIdToConnMap.Load(uuid)
if !ok {
return nil
}
return ret.(*ClientConn)
}
func (p *Client) deleteClientConn(uuid string, addr string) {
p.localIdToConnMap.Delete(uuid)
p.localAddrToConnMap.Delete(addr)
}
func (p *Client) remoteError(uuid string) {
sendICMP(p.id, p.sequence, *p.conn, p.ipaddrServer, "", uuid, (uint32)(MyMsg_KICK), []byte{},
SEND_PROTO, RECV_PROTO, p.key,
0, 0, 0, 0, 0, 0,
0)
}
func (p *Client) AcceptDirectTcpConn(conn *net.TCPConn, targetAddr string) {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
tcpsrcaddr := conn.RemoteAddr().(*net.TCPAddr)
loggo.Info("client accept new direct local tcp %s %s", tcpsrcaddr.String(), targetAddr)
tcpaddrTarget, err := net.ResolveTCPAddr("tcp", targetAddr)
if err != nil {
loggo.Info("direct local tcp ResolveTCPAddr fail: %s %s", targetAddr, err.Error())
return
}
targetconn, err := net.DialTCP("tcp", nil, tcpaddrTarget)
if err != nil {
loggo.Info("direct local tcp DialTCP fail: %s %s", targetAddr, err.Error())
return
}
go p.transfer(conn, targetconn, conn.RemoteAddr().String(), targetconn.RemoteAddr().String())
go p.transfer(targetconn, conn, targetconn.RemoteAddr().String(), conn.RemoteAddr().String())
loggo.Info("client accept new direct local tcp ok %s %s", tcpsrcaddr.String(), targetAddr)
}
func (p *Client) transfer(destination io.WriteCloser, source io.ReadCloser, dst string, src string) {
defer common.CrashLog()
defer destination.Close()
defer source.Close()
loggo.Info("client begin transfer from %s -> %s", src, dst)
io.Copy(destination, source)
loggo.Info("client end transfer from %s -> %s", src, dst)
}
func (p *Client) updateServerAddr() {
ipaddrServer, err := net.ResolveIPAddr("ip", p.addrServer)
if err != nil {
return
}
if p.ipaddrServer.String() != ipaddrServer.String() {
p.ipaddrServer = ipaddrServer
}
p.AcceptTcpConn(conn, addr)
}


@ -3,15 +3,9 @@ package main
import (
"flag"
"fmt"
"github.com/esrrhs/gohome/common"
"github.com/esrrhs/gohome/geoip"
"github.com/esrrhs/gohome/loggo"
"github.com/esrrhs/go-engine/src/loggo"
"github.com/esrrhs/pingtunnel"
"net"
"net/http"
_ "net/http/pprof"
"strconv"
"time"
)
var usage = `
@ -35,34 +29,6 @@ Usage:
-type      client or server

server params:
-key       Set a numeric password, default 0; int type, range 0-2147483647, letters and special symbols are not allowed
-nolog     Do not write log files, only print standard output, default 0 is off
-noprint   Do not print standard output, default 0 is off
-loglevel  Log level, default is info
-maxconn   Max number of connections, default 0 is no limit
-maxprt    Max processing threads in the server, default 100
-maxprb    Max processing-thread buffer in the server, default 1000
-conntt    Timeout for the server to connect to the destination address, default 1000ms

client params:
-l         Local address; traffic sent to this port will be forwarded to the server
@ -81,11 +47,11 @@ Usage:
-tcp       Set the switch to forward tcp, default 0
-tcp_bs    Tcp send and receive buffer size, default 1MB
-tcp_bs    Tcp send and receive buffer size, default 10MB
-tcp_mw    Maximum tcp window, default 20000
-tcp_mw    Maximum tcp window, default 10000
-tcp_rst   Tcp timeout resend time, default 400ms
@ -99,29 +65,15 @@ Usage:
-nolog     Do not write log files, only print standard output, default 0 is off
-noprint   Do not print standard output, default 0 is off
-loglevel  Log level, default is info
-sock5     Turn on sock5 forwarding, default 0 is off
-profile   Enable profiling on the specified port, default 0 is off
-s5filter  Forwarding filter for sock5 mode; default forwards everything, setting CN means CN addresses connect directly and are not forwarded
-s5ftfile  Data file for the sock5 filter, defaults to GeoLite2-Country.mmdb in the current directory
`
func main() {
defer common.CrashLog()
t := flag.String("type", "", "client or server")
listen := flag.String("l", "", "listen addr")
target := flag.String("t", "", "target addr")
@ -129,22 +81,14 @@ func main() {
timeout := flag.Int("timeout", 60, "conn timeout")
key := flag.Int("key", 0, "key")
tcpmode := flag.Int("tcp", 0, "tcp mode")
tcpmode_buffersize := flag.Int("tcp_bs", 1*1024*1024, "tcp mode buffer size")
tcpmode_maxwin := flag.Int("tcp_mw", 20000, "tcp mode max win")
tcpmode_buffersize := flag.Int("tcp_bs", 10*1024*1024, "tcp mode buffer size")
tcpmode_maxwin := flag.Int("tcp_mw", 10000, "tcp mode max win")
tcpmode_resend_timems := flag.Int("tcp_rst", 400, "tcp mode resend time ms")
tcpmode_compress := flag.Int("tcp_gz", 0, "tcp data compress")
nolog := flag.Int("nolog", 0, "write log file")
noprint := flag.Int("noprint", 0, "print stdout")
tcpmode_stat := flag.Int("tcp_stat", 0, "print tcp stat")
loglevel := flag.String("loglevel", "info", "log level")
open_sock5 := flag.Int("sock5", 0, "sock5 mode")
maxconn := flag.Int("maxconn", 0, "max num of connections")
max_process_thread := flag.Int("maxprt", 100, "max process thread in server")
max_process_buffer := flag.Int("maxprb", 1000, "max process thread's buffer in server")
profile := flag.Int("profile", 0, "open profile")
conntt := flag.Int("conntt", 1000, "the connect call's timeout")
s5filter := flag.String("s5filter", "", "sock5 filter")
s5ftfile := flag.String("s5ftfile", "GeoLite2-Country.mmdb", "sock5 filter file")
flag.Usage = func() {
fmt.Printf(usage)
}
@ -182,24 +126,20 @@ func main() {
Prefix: "pingtunnel",
MaxDay: 3,
NoLogFile: *nolog > 0,
NoPrint: *noprint > 0,
})
loggo.Info("start...")
loggo.Info("key %d", *key)
if *t == "server" {
s, err := pingtunnel.NewServer(*key, *maxconn, *max_process_thread, *max_process_buffer, *conntt)
s, err := pingtunnel.NewServer(*key)
if err != nil {
loggo.Error("ERROR: %s", err.Error())
return
}
loggo.Info("Server start")
err = s.Run()
if err != nil {
loggo.Error("Run ERROR: %s", err.Error())
return
}
} else if *t == "client" {
s.Run()
}
if *t == "client" {
loggo.Info("type %s", *t)
loggo.Info("listen %s", *listen)
@ -210,60 +150,17 @@ func main() {
*tcpmode_buffersize = 0
*tcpmode_maxwin = 0
*tcpmode_resend_timems = 0
*tcpmode_compress = 0
*tcpmode_stat = 0
}
if len(*s5filter) > 0 {
err := geoip.Load(*s5ftfile)
if err != nil {
loggo.Error("Load Sock5 ip file ERROR: %s", err.Error())
return
}
}
filter := func(addr string) bool {
if len(*s5filter) <= 0 {
return true
}
taddr, err := net.ResolveTCPAddr("tcp", addr)
if err != nil {
return false
}
ret, err := geoip.GetCountryIsoCode(taddr.IP.String())
if err != nil {
return false
}
if len(ret) <= 0 {
return false
}
return ret != *s5filter
}
c, err := pingtunnel.NewClient(*listen, *server, *target, *timeout, *key,
*tcpmode, *tcpmode_buffersize, *tcpmode_maxwin, *tcpmode_resend_timems, *tcpmode_compress,
*tcpmode_stat, *open_sock5, *maxconn, &filter)
*tcpmode_stat, *open_sock5)
if err != nil {
loggo.Error("ERROR: %s", err.Error())
return
}
loggo.Info("Client Listen %s (%s) Server %s (%s) TargetPort %s:", c.Addr(), c.IPAddr(),
c.ServerAddr(), c.ServerIPAddr(), c.TargetAddr())
err = c.Run()
if err != nil {
loggo.Error("Run ERROR: %s", err.Error())
return
}
} else {
return
}
if *profile > 0 {
go http.ListenAndServe("0.0.0.0:"+strconv.Itoa(*profile), nil)
}
for {
time.Sleep(time.Hour)
c.Run()
}
}


@ -1,2 +0,0 @@
KEY=123456
SERVER=www.yourserver.com


@ -1,16 +0,0 @@
Deploy with docker-compose
===========================
**First** edit the `.env` file in this directory with your values.
**Then** run the stack with these commands:
- on the server
```
docker-compose -f server.yml up -d
```
- on the client machine
```
docker-compose -f client.yml up -d
```
**Now** use the socks5 proxy at port `1080` on your client machine
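To verify the proxy from code, here is a minimal Go sketch. It assumes the client container from client.yml is running and exposing SOCKS5 on 127.0.0.1:1080, uses the golang.org/x/net/proxy package, and fetches https://example.com purely as a placeholder target.
```
package main

import (
	"fmt"
	"io"
	"net/http"

	"golang.org/x/net/proxy"
)

func main() {
	// Build a dialer that routes connections through the pingtunnel SOCKS5 listener.
	dialer, err := proxy.SOCKS5("tcp", "127.0.0.1:1080", nil, proxy.Direct)
	if err != nil {
		panic(err)
	}

	// Plug the dialer into an HTTP client so requests go through the tunnel.
	client := &http.Client{Transport: &http.Transport{Dial: dialer.Dial}}

	resp, err := client.Get("https://example.com")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes via the tunnel")
}
```
Any SOCKS5-capable client works the same way, for example `curl --socks5 127.0.0.1:1080` or a browser's proxy settings.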


@ -1,9 +0,0 @@
version: "3.7"
services:
pingtunnelServer:
image: esrrhs/pingtunnel:latest
restart: always
ports:
- 1080:1080
command: "./pingtunnel -type client -l 0.0.0.0:1080 -s ${SERVER} -sock5 1 -key ${KEY}"


@ -1,8 +0,0 @@
version: "3.7"
services:
pingtunnelServer:
image: esrrhs/pingtunnel:latest
restart: always
network_mode: host
command: "./pingtunnel -type server -key ${KEY}"

framemgr.go (new file)

@ -0,0 +1,686 @@
package pingtunnel
import (
"bytes"
"compress/zlib"
"container/list"
"github.com/esrrhs/go-engine/src/common"
"github.com/esrrhs/go-engine/src/loggo"
"github.com/esrrhs/go-engine/src/rbuffergo"
"io"
"strconv"
"sync"
"time"
)
type FrameStat struct {
sendDataNum int
recvDataNum int
sendReqNum int
recvReqNum int
sendAckNum int
recvAckNum int
sendDataNumsMap map[int32]int
recvDataNumsMap map[int32]int
sendReqNumsMap map[int32]int
recvReqNumsMap map[int32]int
sendAckNumsMap map[int32]int
recvAckNumsMap map[int32]int
sendping int
sendpong int
recvping int
recvpong int
}
type FrameMgr struct {
sendb *rbuffergo.RBuffergo
recvb *rbuffergo.RBuffergo
recvlock sync.Locker
windowsize int
resend_timems int
compress int
sendwin *list.List
sendlist *list.List
sendid int
recvwin *list.List
recvlist *list.List
recvid int
close bool
remoteclosed bool
closesend bool
lastPingTime int64
rttns int64
reqmap map[int32]int64
sendmap map[int32]int64
connected bool
fs *FrameStat
openstat int
lastPrintStat int64
}
func NewFrameMgr(buffersize int, windowsize int, resend_timems int, compress int, openstat int) *FrameMgr {
sendb := rbuffergo.New(buffersize, false)
recvb := rbuffergo.New(buffersize, false)
fm := &FrameMgr{sendb: sendb, recvb: recvb,
recvlock: &sync.Mutex{},
windowsize: windowsize, resend_timems: resend_timems, compress: compress,
sendwin: list.New(), sendlist: list.New(), sendid: 0,
recvwin: list.New(), recvlist: list.New(), recvid: 0,
close: false, remoteclosed: false, closesend: false,
lastPingTime: time.Now().UnixNano(), rttns: (int64)(resend_timems * 1000),
reqmap: make(map[int32]int64), sendmap: make(map[int32]int64),
connected: false, openstat: openstat, lastPrintStat: time.Now().UnixNano()}
if openstat > 0 {
fm.resetStat()
}
return fm
}
func (fm *FrameMgr) GetSendBufferLeft() int {
left := fm.sendb.Capacity() - fm.sendb.Size()
return left
}
func (fm *FrameMgr) WriteSendBuffer(data []byte) {
fm.sendb.Write(data)
loggo.Debug("WriteSendBuffer %d %d", fm.sendb.Size(), len(data))
}
func (fm *FrameMgr) Update() {
fm.cutSendBufferToWindow()
fm.sendlist.Init()
tmpreq, tmpack, tmpackto := fm.preProcessRecvList()
fm.processRecvList(tmpreq, tmpack, tmpackto)
fm.combineWindowToRecvBuffer()
fm.calSendList()
fm.ping()
fm.printStat()
}
func (fm *FrameMgr) cutSendBufferToWindow() {
sendall := false
if fm.sendb.Size() < FRAME_MAX_SIZE {
sendall = true
}
for fm.sendb.Size() >= FRAME_MAX_SIZE && fm.sendwin.Len() < fm.windowsize {
fd := &FrameData{Type: (int32)(FrameData_USER_DATA),
Data: make([]byte, FRAME_MAX_SIZE)}
fm.sendb.Read(fd.Data)
if fm.compress > 0 && len(fd.Data) > fm.compress {
newb := fm.compressData(fd.Data)
if len(newb) < len(fd.Data) {
fd.Data = newb
fd.Compress = true
}
}
f := &Frame{Type: (int32)(Frame_DATA),
Id: (int32)(fm.sendid),
Data: fd}
fm.sendid++
if fm.sendid >= FRAME_MAX_ID {
fm.sendid = 0
}
fm.sendwin.PushBack(f)
loggo.Debug("cut frame push to send win %d %d %d", f.Id, FRAME_MAX_SIZE, fm.sendwin.Len())
}
if sendall && fm.sendb.Size() > 0 && fm.sendwin.Len() < fm.windowsize {
fd := &FrameData{Type: (int32)(FrameData_USER_DATA),
Data: make([]byte, fm.sendb.Size())}
fm.sendb.Read(fd.Data)
if fm.compress > 0 && len(fd.Data) > fm.compress {
newb := fm.compressData(fd.Data)
if len(newb) < len(fd.Data) {
fd.Data = newb
fd.Compress = true
}
}
f := &Frame{Type: (int32)(Frame_DATA),
Id: (int32)(fm.sendid),
Data: fd}
fm.sendid++
if fm.sendid >= FRAME_MAX_ID {
fm.sendid = 0
}
fm.sendwin.PushBack(f)
loggo.Debug("cut small frame push to send win %d %d %d", f.Id, len(f.Data.Data), fm.sendwin.Len())
}
if fm.sendb.Empty() && fm.close && !fm.closesend && fm.sendwin.Len() < fm.windowsize {
fd := &FrameData{Type: (int32)(FrameData_CLOSE)}
f := &Frame{Type: (int32)(Frame_DATA),
Id: (int32)(fm.sendid),
Data: fd}
fm.sendid++
if fm.sendid >= FRAME_MAX_ID {
fm.sendid = 0
}
fm.sendwin.PushBack(f)
fm.closesend = true
loggo.Debug("close frame push to send win %d %d", f.Id, fm.sendwin.Len())
}
}
func (fm *FrameMgr) calSendList() {
cur := time.Now().UnixNano()
for e := fm.sendwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Resend || cur-f.Sendtime > int64(fm.resend_timems*(int)(time.Millisecond)) {
oldsend := fm.sendmap[f.Id]
if cur-oldsend > fm.rttns {
f.Sendtime = cur
fm.sendlist.PushBack(f)
f.Resend = false
fm.sendmap[f.Id] = cur
if fm.openstat > 0 {
fm.fs.sendDataNum++
fm.fs.sendDataNumsMap[f.Id]++
}
loggo.Debug("push frame to sendlist %d %d", f.Id, len(f.Data.Data))
}
}
}
}
func (fm *FrameMgr) getSendList() *list.List {
return fm.sendlist
}
func (fm *FrameMgr) OnRecvFrame(f *Frame) {
fm.recvlock.Lock()
defer fm.recvlock.Unlock()
fm.recvlist.PushBack(f)
}
func (fm *FrameMgr) preProcessRecvList() (map[int32]int, map[int32]int, map[int32]*Frame) {
fm.recvlock.Lock()
defer fm.recvlock.Unlock()
tmpreq := make(map[int32]int)
tmpack := make(map[int32]int)
tmpackto := make(map[int32]*Frame)
for e := fm.recvlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Type == (int32)(Frame_REQ) {
for _, id := range f.Dataid {
tmpreq[id]++
loggo.Debug("recv req %d %s", f.Id, common.Int32ArrayToString(f.Dataid, ","))
}
} else if f.Type == (int32)(Frame_ACK) {
for _, id := range f.Dataid {
tmpack[id]++
loggo.Debug("recv ack %d %s", f.Id, common.Int32ArrayToString(f.Dataid, ","))
}
} else if f.Type == (int32)(Frame_DATA) {
tmpackto[f.Id] = f
if fm.openstat > 0 {
fm.fs.recvDataNum++
fm.fs.recvDataNumsMap[f.Id]++
}
loggo.Debug("recv data %d %d", f.Id, len(f.Data.Data))
} else if f.Type == (int32)(Frame_PING) {
fm.processPing(f)
} else if f.Type == (int32)(Frame_PONG) {
fm.processPong(f)
} else {
loggo.Error("error frame type %d", f.Type)
}
}
fm.recvlist.Init()
return tmpreq, tmpack, tmpackto
}
func (fm *FrameMgr) processRecvList(tmpreq map[int32]int, tmpack map[int32]int, tmpackto map[int32]*Frame) {
for id, num := range tmpreq {
for e := fm.sendwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Id == id {
f.Resend = true
loggo.Debug("choose resend win %d %d", f.Id, len(f.Data.Data))
break
}
}
if fm.openstat > 0 {
fm.fs.recvReqNum += num
fm.fs.recvReqNumsMap[id] += num
}
}
for id, num := range tmpack {
for e := fm.sendwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Id == id {
fm.sendwin.Remove(e)
delete(fm.sendmap, f.Id)
loggo.Debug("remove send win %d %d", f.Id, len(f.Data.Data))
break
}
}
if fm.openstat > 0 {
fm.fs.recvAckNum += num
fm.fs.recvAckNumsMap[id] += num
}
}
if len(tmpackto) > 0 {
tmp := make([]int32, len(tmpackto))
index := 0
for id, rf := range tmpackto {
if fm.addToRecvWin(rf) {
tmp[index] = id
index++
if fm.openstat > 0 {
fm.fs.sendAckNum++
fm.fs.sendAckNumsMap[id]++
}
loggo.Debug("add data to win %d %d", rf.Id, len(rf.Data.Data))
}
}
if index > 0 {
f := &Frame{Type: (int32)(Frame_ACK), Resend: false, Sendtime: 0,
Id: 0,
Dataid: tmp[0:index]}
fm.sendlist.PushBack(f)
loggo.Debug("send ack %d %s", f.Id, common.Int32ArrayToString(f.Dataid, ","))
}
}
}
func (fm *FrameMgr) addToRecvWin(rf *Frame) bool {
if !fm.isIdInRange((int)(rf.Id), FRAME_MAX_ID) {
loggo.Debug("recv frame not in range %d %d", rf.Id, fm.recvid)
if fm.isIdOld((int)(rf.Id), FRAME_MAX_ID) {
return true
}
return false
}
for e := fm.recvwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Id == rf.Id {
loggo.Debug("recv frame ignore %d %d", f.Id, len(f.Data.Data))
return true
}
}
for e := fm.recvwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
loggo.Debug("start insert recv win %d %d %d", fm.recvid, rf.Id, f.Id)
if fm.compareId((int)(rf.Id), (int)(f.Id)) < 0 {
fm.recvwin.InsertBefore(rf, e)
loggo.Debug("insert recv win %d %d before %d", rf.Id, len(rf.Data.Data), f.Id)
return true
}
}
fm.recvwin.PushBack(rf)
loggo.Debug("insert recv win last %d %d", rf.Id, len(rf.Data.Data))
return true
}
func (fm *FrameMgr) processRecvFrame(f *Frame) bool {
if f.Data.Type == (int32)(FrameData_USER_DATA) {
left := fm.recvb.Capacity() - fm.recvb.Size()
if left >= len(f.Data.Data) {
src := f.Data.Data
if f.Data.Compress {
err, old := fm.deCompressData(src)
if err != nil {
loggo.Error("recv frame deCompressData error %d", f.Id)
return false
}
if left < len(old) {
return false
}
loggo.Debug("deCompressData recv frame %d %d %d",
f.Id, len(src), len(old))
src = old
}
fm.recvb.Write(src)
loggo.Debug("combined recv frame to recv buffer %d %d",
f.Id, len(src))
return true
}
return false
} else if f.Data.Type == (int32)(FrameData_CLOSE) {
fm.remoteclosed = true
loggo.Debug("recv remote close frame %d", f.Id)
return true
} else if f.Data.Type == (int32)(FrameData_CONN) {
fm.sendConnectRsp()
fm.connected = true
loggo.Debug("recv remote conn frame %d", f.Id)
return true
} else if f.Data.Type == (int32)(FrameData_CONNRSP) {
fm.connected = true
loggo.Debug("recv remote conn rsp frame %d", f.Id)
return true
} else {
loggo.Error("recv frame type error %d", f.Data.Type)
return false
}
}
func (fm *FrameMgr) combineWindowToRecvBuffer() {
for {
done := false
for e := fm.recvwin.Front(); e != nil; e = e.Next() {
f := e.Value.(*Frame)
if f.Id == (int32)(fm.recvid) {
delete(fm.reqmap, f.Id)
if fm.processRecvFrame(f) {
fm.recvwin.Remove(e)
done = true
loggo.Debug("process recv frame ok %d %d",
f.Id, len(f.Data.Data))
break
}
}
}
if !done {
break
} else {
fm.recvid++
if fm.recvid >= FRAME_MAX_ID {
fm.recvid = 0
}
loggo.Debug("combined ok add recvid %d ", fm.recvid)
}
}
cur := time.Now().UnixNano()
reqtmp := make(map[int]int)
e := fm.recvwin.Front()
id := fm.recvid
for len(reqtmp) < fm.windowsize && len(reqtmp)*4 < FRAME_MAX_SIZE/2 && e != nil {
f := e.Value.(*Frame)
loggo.Debug("start add req id %d %d %d", fm.recvid, f.Id, id)
if f.Id != (int32)(id) {
oldReq := fm.reqmap[f.Id]
if cur-oldReq > fm.rttns {
reqtmp[id]++
fm.reqmap[f.Id] = cur
loggo.Debug("add req id %d ", id)
}
} else {
e = e.Next()
}
id++
if id >= FRAME_MAX_ID {
id = 0
}
}
if len(reqtmp) > 0 {
f := &Frame{Type: (int32)(Frame_REQ), Resend: false, Sendtime: 0,
Id: 0,
Dataid: make([]int32, len(reqtmp))}
index := 0
for id, _ := range reqtmp {
f.Dataid[index] = (int32)(id)
index++
if fm.openstat > 0 {
fm.fs.sendReqNum++
fm.fs.sendReqNumsMap[(int32)(id)]++
}
}
fm.sendlist.PushBack(f)
loggo.Debug("send req %d %s", f.Id, common.Int32ArrayToString(f.Dataid, ","))
}
}
func (fm *FrameMgr) GetRecvBufferSize() int {
return fm.recvb.Size()
}
func (fm *FrameMgr) GetRecvReadLineBuffer() []byte {
ret := fm.recvb.GetReadLineBuffer()
loggo.Debug("GetRecvReadLineBuffer %d %d", fm.recvb.Size(), len(ret))
return ret
}
func (fm *FrameMgr) SkipRecvBuffer(size int) {
fm.recvb.SkipRead(size)
loggo.Debug("SkipRead %d %d", fm.recvb.Size(), size)
}
func (fm *FrameMgr) Close() {
fm.recvlock.Lock()
defer fm.recvlock.Unlock()
fm.close = true
}
func (fm *FrameMgr) IsRemoteClosed() bool {
return fm.remoteclosed
}
func (fm *FrameMgr) ping() {
cur := time.Now().UnixNano()
if cur-fm.lastPingTime > (int64)(time.Second) {
fm.lastPingTime = cur
f := &Frame{Type: (int32)(Frame_PING), Resend: false, Sendtime: cur,
Id: 0}
fm.sendlist.PushBack(f)
loggo.Debug("send ping %d", cur)
if fm.openstat > 0 {
fm.fs.sendping++
}
}
}
func (fm *FrameMgr) processPing(f *Frame) {
rf := &Frame{Type: (int32)(Frame_PONG), Resend: false, Sendtime: f.Sendtime,
Id: 0}
fm.sendlist.PushBack(rf)
if fm.openstat > 0 {
fm.fs.recvping++
fm.fs.sendpong++
}
loggo.Debug("recv ping %d", f.Sendtime)
}
func (fm *FrameMgr) processPong(f *Frame) {
cur := time.Now().UnixNano()
if cur > f.Sendtime {
rtt := cur - f.Sendtime
fm.rttns = (fm.rttns + rtt) / 2
if fm.openstat > 0 {
fm.fs.recvpong++
}
loggo.Debug("recv pong %d %dms", rtt, fm.rttns/1000/1000)
}
}
func (fm *FrameMgr) isIdInRange(id int, maxid int) bool {
begin := fm.recvid
end := fm.recvid + fm.windowsize
if end >= maxid {
if id >= 0 && id < end-maxid {
return true
}
end = maxid
}
if id >= begin && id < end {
return true
}
return false
}
func (fm *FrameMgr) compareId(l int, r int) int {
if l < fm.recvid {
l += FRAME_MAX_ID
}
if r < fm.recvid {
r += FRAME_MAX_ID
}
return l - r
}
func (fm *FrameMgr) isIdOld(id int, maxid int) bool {
if id > fm.recvid {
return false
}
end := fm.recvid + fm.windowsize*2
if end >= maxid {
if id >= end-maxid && id < fm.recvid {
return true
}
} else {
if id < fm.recvid {
return true
}
}
return false
}
func (fm *FrameMgr) IsConnected() bool {
return fm.connected
}
func (fm *FrameMgr) Connect() {
fd := &FrameData{Type: (int32)(FrameData_CONN)}
f := &Frame{Type: (int32)(Frame_DATA),
Id: (int32)(fm.sendid),
Data: fd}
fm.sendid++
if fm.sendid >= FRAME_MAX_ID {
fm.sendid = 0
}
fm.sendwin.PushBack(f)
loggo.Debug("start connect")
}
func (fm *FrameMgr) sendConnectRsp() {
fd := &FrameData{Type: (int32)(FrameData_CONNRSP)}
f := &Frame{Type: (int32)(Frame_DATA),
Id: (int32)(fm.sendid),
Data: fd}
fm.sendid++
if fm.sendid >= FRAME_MAX_ID {
fm.sendid = 0
}
fm.sendwin.PushBack(f)
loggo.Debug("send connect rsp")
}
func (fm *FrameMgr) compressData(src []byte) []byte {
var b bytes.Buffer
w := zlib.NewWriter(&b)
w.Write(src)
w.Close()
return b.Bytes()
}
func (fm *FrameMgr) deCompressData(src []byte) (error, []byte) {
b := bytes.NewReader(src)
r, err := zlib.NewReader(b)
if err != nil {
return err, nil
}
var out bytes.Buffer
io.Copy(&out, r)
r.Close()
return nil, out.Bytes()
}
func (fm *FrameMgr) resetStat() {
fm.fs = &FrameStat{}
fm.fs.sendDataNumsMap = make(map[int32]int)
fm.fs.recvDataNumsMap = make(map[int32]int)
fm.fs.sendReqNumsMap = make(map[int32]int)
fm.fs.recvReqNumsMap = make(map[int32]int)
fm.fs.sendAckNumsMap = make(map[int32]int)
fm.fs.recvAckNumsMap = make(map[int32]int)
}
func (fm *FrameMgr) printStat() {
if fm.openstat > 0 {
cur := time.Now().UnixNano()
if cur-fm.lastPrintStat > (int64)(time.Second) {
fm.lastPrintStat = cur
fs := fm.fs
loggo.Info("\nsendDataNum %d\nrecvDataNum %d\nsendReqNum %d\nrecvReqNum %d\nsendAckNum %d\nrecvAckNum %d\n"+
"sendDataNumsMap %s\nrecvDataNumsMap %s\nsendReqNumsMap %s\nrecvReqNumsMap %s\nsendAckNumsMap %s\nrecvAckNumsMap %s\n"+
"sendping %d\nrecvping %d\nsendpong %d\nrecvpong %d\n"+
"sendwin %d\nrecvwin %d\n",
fs.sendDataNum, fs.recvDataNum,
fs.sendReqNum, fs.recvReqNum,
fs.sendAckNum, fs.recvAckNum,
fm.printStatMap(&fs.sendDataNumsMap), fm.printStatMap(&fs.recvDataNumsMap),
fm.printStatMap(&fs.sendReqNumsMap), fm.printStatMap(&fs.recvReqNumsMap),
fm.printStatMap(&fs.sendAckNumsMap), fm.printStatMap(&fs.recvAckNumsMap),
fs.sendping, fs.recvping,
fs.sendpong, fs.recvpong,
fm.sendwin.Len(), fm.recvwin.Len())
fm.resetStat()
}
}
}
func (fm *FrameMgr) printStatMap(m *map[int32]int) string {
tmp := make(map[int]int)
for _, v := range *m {
tmp[v]++
}
max := 0
for k, _ := range tmp {
if k > max {
max = k
}
}
var ret string
for i := 1; i <= max; i++ {
ret += strconv.Itoa(i) + "->" + strconv.Itoa(tmp[i]) + ","
}
if len(ret) <= 0 {
ret = "none"
}
return ret
}

go.mod

@ -1,18 +0,0 @@
module github.com/esrrhs/pingtunnel
go 1.18
require (
github.com/esrrhs/gohome v0.0.0-20231102120537-c519efbde976
github.com/golang/protobuf v1.5.3
golang.org/x/net v0.17.0
)
require (
github.com/OneOfOne/xxhash v1.2.8 // indirect
github.com/google/uuid v1.4.0 // indirect
github.com/oschwald/geoip2-golang v1.9.0 // indirect
github.com/oschwald/maxminddb-golang v1.12.0 // indirect
golang.org/x/sys v0.13.0 // indirect
google.golang.org/protobuf v1.31.0 // indirect
)

go.sum

@ -1,29 +0,0 @@
github.com/OneOfOne/xxhash v1.2.8 h1:31czK/TI9sNkxIKfaUfGlU47BAxQ0ztGgd9vPyqimf8=
github.com/OneOfOne/xxhash v1.2.8/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/esrrhs/gohome v0.0.0-20231102120537-c519efbde976 h1:av0d/lRou1Z5cxdSQFwtVcqJjokFI5pJyyr63iAuYis=
github.com/esrrhs/gohome v0.0.0-20231102120537-c519efbde976/go.mod h1:S5fYcOFy4nUPnkYg7D9hIp+SwBR9kCBiOYmWVW42Yhs=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/oschwald/geoip2-golang v1.9.0 h1:uvD3O6fXAXs+usU+UGExshpdP13GAqp4GBrzN7IgKZc=
github.com/oschwald/geoip2-golang v1.9.0/go.mod h1:BHK6TvDyATVQhKNbQBdrj9eAvuwOMi2zSFXizL3K81Y=
github.com/oschwald/maxminddb-golang v1.12.0 h1:9FnTOD0YOhP7DGxGsq4glzpGy5+w7pq50AS6wALUMYs=
github.com/oschwald/maxminddb-golang v1.12.0/go.mod h1:q0Nob5lTCqyQ8WT6FYgS1L7PXKVVbgiymefNwIjPzgY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=

msg.pb.go

@ -25,21 +25,18 @@ type MyMsg_TYPE int32
const (
MyMsg_DATA MyMsg_TYPE = 0
MyMsg_PING MyMsg_TYPE = 1
MyMsg_KICK MyMsg_TYPE = 2
MyMsg_MAGIC MyMsg_TYPE = 57005
)
var MyMsg_TYPE_name = map[int32]string{
0: "DATA",
1: "PING",
2: "KICK",
57005: "MAGIC",
}
var MyMsg_TYPE_value = map[string]int32{
"DATA": 0,
"PING": 1,
"KICK": 2,
"MAGIC": 57005,
}
@ -51,6 +48,71 @@ func (MyMsg_TYPE) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_c06e4cca6c2cc899, []int{0, 0}
}
type FrameData_TYPE int32
const (
FrameData_USER_DATA FrameData_TYPE = 0
FrameData_CONN FrameData_TYPE = 1
FrameData_CONNRSP FrameData_TYPE = 2
FrameData_CLOSE FrameData_TYPE = 3
)
var FrameData_TYPE_name = map[int32]string{
0: "USER_DATA",
1: "CONN",
2: "CONNRSP",
3: "CLOSE",
}
var FrameData_TYPE_value = map[string]int32{
"USER_DATA": 0,
"CONN": 1,
"CONNRSP": 2,
"CLOSE": 3,
}
func (x FrameData_TYPE) String() string {
return proto.EnumName(FrameData_TYPE_name, int32(x))
}
func (FrameData_TYPE) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_c06e4cca6c2cc899, []int{1, 0}
}
type Frame_TYPE int32
const (
Frame_DATA Frame_TYPE = 0
Frame_REQ Frame_TYPE = 1
Frame_ACK Frame_TYPE = 2
Frame_PING Frame_TYPE = 3
Frame_PONG Frame_TYPE = 4
)
var Frame_TYPE_name = map[int32]string{
0: "DATA",
1: "REQ",
2: "ACK",
3: "PING",
4: "PONG",
}
var Frame_TYPE_value = map[string]int32{
"DATA": 0,
"REQ": 1,
"ACK": 2,
"PING": 3,
"PONG": 4,
}
func (x Frame_TYPE) String() string {
return proto.EnumName(Frame_TYPE_name, int32(x))
}
func (Frame_TYPE) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_c06e4cca6c2cc899, []int{2, 0}
}
type MyMsg struct {
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
Type int32 `protobuf:"varint,2,opt,name=type,proto3" json:"type,omitempty"`
@ -194,35 +256,182 @@ func (m *MyMsg) GetTcpmodeStat() int32 {
return 0
}
type FrameData struct {
Type int32 `protobuf:"varint,1,opt,name=type,proto3" json:"type,omitempty"`
Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
Compress bool `protobuf:"varint,3,opt,name=compress,proto3" json:"compress,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *FrameData) Reset() { *m = FrameData{} }
func (m *FrameData) String() string { return proto.CompactTextString(m) }
func (*FrameData) ProtoMessage() {}
func (*FrameData) Descriptor() ([]byte, []int) {
return fileDescriptor_c06e4cca6c2cc899, []int{1}
}
func (m *FrameData) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_FrameData.Unmarshal(m, b)
}
func (m *FrameData) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_FrameData.Marshal(b, m, deterministic)
}
func (m *FrameData) XXX_Merge(src proto.Message) {
xxx_messageInfo_FrameData.Merge(m, src)
}
func (m *FrameData) XXX_Size() int {
return xxx_messageInfo_FrameData.Size(m)
}
func (m *FrameData) XXX_DiscardUnknown() {
xxx_messageInfo_FrameData.DiscardUnknown(m)
}
var xxx_messageInfo_FrameData proto.InternalMessageInfo
func (m *FrameData) GetType() int32 {
if m != nil {
return m.Type
}
return 0
}
func (m *FrameData) GetData() []byte {
if m != nil {
return m.Data
}
return nil
}
func (m *FrameData) GetCompress() bool {
if m != nil {
return m.Compress
}
return false
}
type Frame struct {
Type int32 `protobuf:"varint,1,opt,name=type,proto3" json:"type,omitempty"`
Resend bool `protobuf:"varint,2,opt,name=resend,proto3" json:"resend,omitempty"`
Sendtime int64 `protobuf:"varint,3,opt,name=sendtime,proto3" json:"sendtime,omitempty"`
Id int32 `protobuf:"varint,4,opt,name=id,proto3" json:"id,omitempty"`
Data *FrameData `protobuf:"bytes,5,opt,name=data,proto3" json:"data,omitempty"`
Dataid []int32 `protobuf:"varint,6,rep,packed,name=dataid,proto3" json:"dataid,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Frame) Reset() { *m = Frame{} }
func (m *Frame) String() string { return proto.CompactTextString(m) }
func (*Frame) ProtoMessage() {}
func (*Frame) Descriptor() ([]byte, []int) {
return fileDescriptor_c06e4cca6c2cc899, []int{2}
}
func (m *Frame) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Frame.Unmarshal(m, b)
}
func (m *Frame) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Frame.Marshal(b, m, deterministic)
}
func (m *Frame) XXX_Merge(src proto.Message) {
xxx_messageInfo_Frame.Merge(m, src)
}
func (m *Frame) XXX_Size() int {
return xxx_messageInfo_Frame.Size(m)
}
func (m *Frame) XXX_DiscardUnknown() {
xxx_messageInfo_Frame.DiscardUnknown(m)
}
var xxx_messageInfo_Frame proto.InternalMessageInfo
func (m *Frame) GetType() int32 {
if m != nil {
return m.Type
}
return 0
}
func (m *Frame) GetResend() bool {
if m != nil {
return m.Resend
}
return false
}
func (m *Frame) GetSendtime() int64 {
if m != nil {
return m.Sendtime
}
return 0
}
func (m *Frame) GetId() int32 {
if m != nil {
return m.Id
}
return 0
}
func (m *Frame) GetData() *FrameData {
if m != nil {
return m.Data
}
return nil
}
func (m *Frame) GetDataid() []int32 {
if m != nil {
return m.Dataid
}
return nil
}
func init() {
proto.RegisterEnum("MyMsg_TYPE", MyMsg_TYPE_name, MyMsg_TYPE_value)
proto.RegisterEnum("FrameData_TYPE", FrameData_TYPE_name, FrameData_TYPE_value)
proto.RegisterEnum("Frame_TYPE", Frame_TYPE_name, Frame_TYPE_value)
proto.RegisterType((*MyMsg)(nil), "MyMsg")
proto.RegisterType((*FrameData)(nil), "FrameData")
proto.RegisterType((*Frame)(nil), "Frame")
}
func init() { proto.RegisterFile("msg.proto", fileDescriptor_c06e4cca6c2cc899) }
var fileDescriptor_c06e4cca6c2cc899 = []byte{
// 342 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x3c, 0x90, 0xdb, 0x6a, 0xe2, 0x50,
0x14, 0x86, 0x27, 0x27, 0x0f, 0xcb, 0xe8, 0xc4, 0x35, 0x07, 0xd6, 0x65, 0x46, 0x18, 0xc8, 0x5c,
0xcc, 0xc0, 0xb4, 0x4f, 0xa0, 0xb6, 0x88, 0x48, 0x8a, 0xa4, 0xde, 0xb4, 0x37, 0x12, 0xcd, 0x36,
0x84, 0x36, 0x07, 0xb2, 0xb7, 0xb4, 0xf6, 0x9d, 0xfa, 0x08, 0x7d, 0x8d, 0x3e, 0x4f, 0xc9, 0x72,
0xa7, 0x77, 0xff, 0xff, 0x7f, 0x5f, 0xc8, 0x62, 0x43, 0x3f, 0x97, 0xe9, 0xbf, 0xaa, 0x2e, 0x55,
0x39, 0x79, 0xb7, 0xc0, 0x09, 0x4f, 0xa1, 0x4c, 0x71, 0x04, 0x66, 0x96, 0x90, 0xe1, 0x1b, 0x41,
0x3f, 0x32, 0xb3, 0x04, 0x11, 0x6c, 0x75, 0xaa, 0x04, 0x99, 0xbe, 0x11, 0x38, 0x11, 0x67, 0xfc,
0x09, 0x1d, 0x15, 0xd7, 0xa9, 0x50, 0x64, 0xb1, 0xa7, 0x5b, 0xe3, 0x26, 0xb1, 0x8a, 0xc9, 0xf6,
0x8d, 0xc0, 0x8d, 0x38, 0x37, 0x6e, 0xcd, 0xff, 0x20, 0xc7, 0x37, 0x82, 0x71, 0xa4, 0x1b, 0x7e,
0x07, 0x27, 0x8f, 0xd3, 0x6c, 0x4f, 0x1d, 0x9e, 0xcf, 0x05, 0x3d, 0xb0, 0x1e, 0xc4, 0x89, 0xba,
0xbc, 0x35, 0x11, 0x09, 0xba, 0x2a, 0xcb, 0x45, 0x79, 0x54, 0xd4, 0xe3, 0x13, 0xda, 0xca, 0x64,
0x5f, 0xe5, 0x65, 0x22, 0xa8, 0xaf, 0xc9, 0xb9, 0xe2, 0x5f, 0x40, 0x1d, 0xb7, 0xbb, 0xe3, 0xe1,
0x20, 0x6a, 0x99, 0xbd, 0x08, 0x02, 0x96, 0xc6, 0x9a, 0xcc, 0x3e, 0x01, 0xfe, 0x86, 0x51, 0xab,
0xe7, 0xf1, 0xf3, 0x53, 0x56, 0xd0, 0x80, 0xd5, 0xa1, 0x5e, 0x43, 0x1e, 0xf1, 0x02, 0x7e, 0xb4,
0x5a, 0x2d, 0xa4, 0x28, 0x92, 0x6d, 0x73, 0x49, 0x2e, 0xc9, 0x65, 0xfb, 0x9b, 0x86, 0x11, 0xb3,
0x0d, 0x23, 0xfc, 0x03, 0x5e, 0xfb, 0xcd, 0xbe, 0xcc, 0xab, 0x5a, 0x48, 0x49, 0x43, 0xd6, 0xbf,
0xea, 0x7d, 0xae, 0x67, 0xfc, 0x05, 0x6e, 0xab, 0x4a, 0x15, 0x2b, 0x1a, 0xb1, 0x36, 0xd0, 0xdb,
0xad, 0x8a, 0xd5, 0xe4, 0x3f, 0xd8, 0x9b, 0xbb, 0xf5, 0x35, 0xf6, 0xc0, 0xbe, 0x9a, 0x6e, 0xa6,
0xde, 0x97, 0x26, 0xad, 0x97, 0x37, 0x0b, 0xcf, 0x68, 0xd2, 0x6a, 0x39, 0x5f, 0x79, 0x26, 0x0e,
0xc0, 0x09, 0xa7, 0x8b, 0xe5, 0xdc, 0x7b, 0x7d, 0xb3, 0x66, 0xee, 0x3d, 0x54, 0x59, 0x91, 0xaa,
0x63, 0x51, 0x88, 0xc7, 0x5d, 0x87, 0xdf, 0xfe, 0xf2, 0x23, 0x00, 0x00, 0xff, 0xff, 0x59, 0xbc,
0x55, 0x76, 0xfa, 0x01, 0x00, 0x00,
// 493 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x52, 0xcb, 0x6e, 0xd3, 0x40,
0x14, 0x65, 0xfc, 0x48, 0xe2, 0x9b, 0x34, 0x4c, 0x87, 0x87, 0x46, 0x2c, 0x90, 0xb1, 0x84, 0x30,
0x0b, 0xba, 0x28, 0x12, 0xac, 0x53, 0x37, 0x44, 0x15, 0xe4, 0xc1, 0x24, 0x2c, 0x60, 0x13, 0xb9,
0xf1, 0xd4, 0xb2, 0xc0, 0x0f, 0xd9, 0x13, 0x41, 0xf8, 0x02, 0x7e, 0x86, 0x4f, 0xe0, 0x0f, 0x90,
0xf8, 0x25, 0x34, 0xb7, 0x63, 0xb7, 0x12, 0xac, 0x7c, 0xce, 0x3d, 0x27, 0xb9, 0x67, 0xee, 0xbd,
0xe0, 0xe5, 0x4d, 0x7a, 0x52, 0xd5, 0xa5, 0x2a, 0x83, 0xdf, 0x36, 0xb8, 0xf3, 0xc3, 0xbc, 0x49,
0xd9, 0x18, 0xac, 0x2c, 0xe1, 0xc4, 0x27, 0xa1, 0x27, 0xac, 0x2c, 0x61, 0x0c, 0x1c, 0x75, 0xa8,
0x24, 0xb7, 0x7c, 0x12, 0xba, 0x02, 0x31, 0x7b, 0x08, 0x3d, 0x15, 0xd7, 0xa9, 0x54, 0xdc, 0x46,
0x9f, 0x61, 0xda, 0x9b, 0xc4, 0x2a, 0xe6, 0x8e, 0x4f, 0xc2, 0x91, 0x40, 0xac, 0xbd, 0x35, 0xf6,
0xe0, 0xae, 0x4f, 0xc2, 0x63, 0x61, 0x18, 0xbb, 0x0f, 0x6e, 0x1e, 0xa7, 0xd9, 0x8e, 0xf7, 0xb0,
0x7c, 0x4d, 0x18, 0x05, 0xfb, 0xb3, 0x3c, 0xf0, 0x3e, 0xd6, 0x34, 0x64, 0x1c, 0xfa, 0x2a, 0xcb,
0x65, 0xb9, 0x57, 0x7c, 0x80, 0x11, 0x5a, 0x8a, 0xca, 0xae, 0xca, 0xcb, 0x44, 0x72, 0xcf, 0x28,
0xd7, 0x94, 0xbd, 0x00, 0x66, 0xe0, 0xf6, 0x72, 0x7f, 0x75, 0x25, 0xeb, 0x26, 0xfb, 0x2e, 0x39,
0xa0, 0xe9, 0xd8, 0x28, 0x67, 0x9d, 0xc0, 0x9e, 0xc2, 0xb8, 0xb5, 0xe7, 0xf1, 0xb7, 0xaf, 0x59,
0xc1, 0x87, 0x68, 0x3d, 0x32, 0xd5, 0x39, 0x16, 0xd9, 0x29, 0x3c, 0x68, 0x6d, 0xb5, 0x6c, 0x64,
0x91, 0x6c, 0x75, 0x92, 0xbc, 0xe1, 0x23, 0x74, 0xdf, 0x33, 0xa2, 0x40, 0x6d, 0x83, 0x12, 0x7b,
0x0e, 0xb4, 0xfd, 0xcd, 0xae, 0xcc, 0xab, 0x5a, 0x36, 0x0d, 0x3f, 0x42, 0xfb, 0x5d, 0x53, 0x8f,
0x4c, 0x99, 0x3d, 0x81, 0x51, 0x6b, 0x6d, 0x54, 0xac, 0xf8, 0x18, 0x6d, 0x43, 0x53, 0x5b, 0xab,
0x58, 0x05, 0xcf, 0xc0, 0xd9, 0x7c, 0x5c, 0x4d, 0xd9, 0x00, 0x9c, 0xf3, 0xc9, 0x66, 0x42, 0xef,
0x68, 0xb4, 0xba, 0x58, 0xcc, 0x28, 0x61, 0x43, 0x70, 0xe7, 0x93, 0xd9, 0x45, 0x44, 0x7f, 0xfe,
0xb2, 0x83, 0x1f, 0x04, 0xbc, 0x37, 0x75, 0x9c, 0xcb, 0x73, 0xbd, 0x82, 0x76, 0x85, 0xe4, 0xd6,
0x0a, 0xdb, 0x55, 0x59, 0xb7, 0x56, 0xf5, 0x08, 0x06, 0x5d, 0x48, 0xbd, 0xd8, 0x81, 0xe8, 0x78,
0xf0, 0xda, 0xb4, 0x3e, 0x02, 0xef, 0xc3, 0x7a, 0x2a, 0xb6, 0x37, 0xfd, 0xa3, 0xe5, 0x62, 0x81,
0xfd, 0xfb, 0x1a, 0x89, 0xf5, 0x8a, 0x5a, 0xcc, 0x03, 0x37, 0x7a, 0xb7, 0x5c, 0x4f, 0xa9, 0x1d,
0xfc, 0x21, 0xe0, 0x62, 0x94, 0xff, 0xc6, 0xd0, 0xd7, 0x81, 0xf3, 0xc2, 0x20, 0x03, 0x61, 0x98,
0x8e, 0xa2, 0xbf, 0x7a, 0xc0, 0x18, 0xc5, 0x16, 0x1d, 0x37, 0x17, 0xea, 0xe0, 0xbf, 0xe8, 0x0b,
0x7d, 0x6c, 0x9e, 0xa2, 0xef, 0x6b, 0x78, 0x0a, 0x27, 0xdd, 0xc3, 0x6f, 0x2e, 0x50, 0x7f, 0xb3,
0x84, 0xf7, 0x7c, 0x3b, 0x74, 0x85, 0x61, 0xc1, 0xab, 0x7f, 0xa6, 0xd9, 0x07, 0x5b, 0x4c, 0xdf,
0x53, 0xa2, 0xc1, 0x24, 0x7a, 0x4b, 0xad, 0x6e, 0xbe, 0x36, 0xa2, 0xe5, 0x62, 0x46, 0x9d, 0xb3,
0xd1, 0x27, 0xa8, 0xb2, 0x22, 0x55, 0xfb, 0xa2, 0x90, 0x5f, 0x2e, 0x7b, 0x78, 0xce, 0x2f, 0xff,
0x06, 0x00, 0x00, 0xff, 0xff, 0x5b, 0xf2, 0xbf, 0x87, 0x4d, 0x03, 0x00, 0x00,
}
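One property of the generated accessors above worth calling out: every getter checks its receiver for nil before reading, so a missing message or sub-message can be traversed safely. A small hedged illustration; the helper name is hypothetical and not part of the repository:

```go
package pingtunnel

import "fmt"

// describeFrame is a hypothetical helper that leans on the nil-safe getters
// generated in msg.pb.go: f.GetData() returns nil for a frame without a
// payload, and calling GetData() on that nil *FrameData yields a nil slice,
// so no explicit nil checks are needed along the chain.
func describeFrame(f *Frame) string {
	payload := f.GetData().GetData()
	return fmt.Sprintf("frame %d type %d carries %d payload bytes",
		f.GetId(), f.GetType(), len(payload))
}
```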

View file

@ -5,7 +5,6 @@ message MyMsg {
enum TYPE {
DATA = 0;
PING = 1;
KICK = 2;
MAGIC = 0xdead;
}
@ -24,3 +23,32 @@ message MyMsg {
int32 tcpmode_compress = 13;
int32 tcpmode_stat = 14;
}
message FrameData {
enum TYPE {
USER_DATA = 0;
CONN = 1;
CONNRSP = 2;
CLOSE = 3;
}
int32 type = 1;
bytes data = 2;
bool compress = 3;
}
message Frame {
enum TYPE {
DATA = 0;
REQ = 1;
ACK = 2;
PING = 3;
PONG = 4;
}
int32 type = 1;
bool resend = 2;
int64 sendtime = 3;
int32 id = 4;
FrameData data = 5;
repeated int32 dataid = 6;
}
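To make the wire format above concrete, here is a hedged round-trip sketch using the generated Go types from msg.pb.go, written as if it sat next to the package's tests; the test name and field values are illustrative, not taken from the repository:

```go
package pingtunnel

import (
	"testing"

	"github.com/golang/protobuf/proto"
)

func TestFrameRoundTrip(t *testing.T) {
	// Wrap a user payload in a FrameData, then in a Frame, roughly the shape
	// the tcpmode path serializes before stuffing it into an ICMP packet.
	in := &Frame{
		Type: (int32)(Frame_DATA),
		Id:   1,
		Data: &FrameData{
			Type: (int32)(FrameData_USER_DATA),
			Data: []byte("hello"),
		},
	}

	mb, err := proto.Marshal(in)
	if err != nil {
		t.Fatal(err)
	}

	// The peer unmarshals the same bytes back into a Frame.
	out := &Frame{}
	if err := proto.Unmarshal(mb, out); err != nil {
		t.Fatal(err)
	}
	if out.GetId() != 1 || string(out.GetData().GetData()) != "hello" {
		t.Fatalf("unexpected round trip: %v", out)
	}
}
```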

Binary file not shown. (Before: 14 KiB)

BIN
network.png Normal file

Binary file not shown. (After: 19 KiB)

56
pack.sh
View file

@ -1,56 +0,0 @@
#! /bin/bash
#set -x
NAME="pingtunnel"
export GO111MODULE=on
#go tool dist list
build_list=$(go tool dist list)
rm pack -rf
rm pack.zip -f
mkdir pack
go mod tidy
for line in $build_list; do
os=$(echo "$line" | awk -F"/" '{print $1}')
arch=$(echo "$line" | awk -F"/" '{print $2}')
echo "os="$os" arch="$arch" start build"
if [ $os == "android" ]; then
continue
fi
if [ $os == "ios" ]; then
continue
fi
if [ $arch == "wasm" ]; then
continue
fi
CGO_ENABLED=0 GOOS=$os GOARCH=$arch go build -ldflags="-s -w"
if [ $? -ne 0 ]; then
echo "os="$os" arch="$arch" build fail"
exit 1
fi
if [ $os = "windows" ]; then
zip ${NAME}_"${os}"_"${arch}"".zip" $NAME".exe"
if [ $? -ne 0 ]; then
echo "os="$os" arch="$arch" zip fail"
exit 1
fi
mv ${NAME}_"${os}"_"${arch}"".zip" pack/
rm $NAME".exe" -f
else
zip ${NAME}_"${os}"_"${arch}"".zip" $NAME
if [ $? -ne 0 ]; then
echo "os="$os" arch="$arch" zip fail"
exit 1
fi
mv ${NAME}_"${os}"_"${arch}"".zip" pack/
rm $NAME -f
fi
echo "os="$os" arch="$arch" done build"
done
zip pack.zip pack/ -r
echo "all done"

View file

@ -1,14 +1,18 @@
package pingtunnel
import (
"crypto/md5"
"crypto/rand"
"encoding/base64"
"encoding/binary"
"github.com/esrrhs/gohome/common"
"github.com/esrrhs/gohome/loggo"
"encoding/hex"
"github.com/esrrhs/go-engine/src/loggo"
"github.com/golang/protobuf/proto"
"golang.org/x/net/icmp"
"golang.org/x/net/ipv4"
"io"
"net"
"sync"
"syscall"
"time"
)
@ -58,18 +62,25 @@ func sendICMP(id int, sequence int, conn icmp.PacketConn, server *net.IPAddr, ta
return
}
conn.WriteTo(bytes, server)
for {
if _, err := conn.WriteTo(bytes, server); err != nil {
if neterr, ok := err.(*net.OpError); ok {
if neterr.Err == syscall.ENOBUFS {
continue
}
}
loggo.Info("sendICMP WriteTo error %s %s", server.String(), err)
}
break
}
return
}
func recvICMP(workResultLock *sync.WaitGroup, exit *bool, conn icmp.PacketConn, recv chan<- *Packet) {
defer common.CrashLog()
(*workResultLock).Add(1)
defer (*workResultLock).Done()
func recvICMP(conn icmp.PacketConn, recv chan<- *Packet) {
bytes := make([]byte, 10240)
for !*exit {
for {
conn.SetReadDeadline(time.Now().Add(time.Millisecond * 100))
n, srcaddr, err := conn.ReadFrom(bytes)
@ -113,7 +124,22 @@ type Packet struct {
echoSeq int
}
func UniqueId() string {
b := make([]byte, 48)
if _, err := io.ReadFull(rand.Reader, b); err != nil {
return ""
}
return GetMd5String(base64.URLEncoding.EncodeToString(b))
}
func GetMd5String(s string) string {
h := md5.New()
h.Write([]byte(s))
return hex.EncodeToString(h.Sum(nil))
}
const (
FRAME_MAX_SIZE int = 888
FRAME_MAX_ID int = 1000000
FRAME_MAX_ID int = 100000
)
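The WriteTo retry loop shown in this hunk keeps resending while the kernel reports ENOBUFS (raw-socket send buffer momentarily full) rather than dropping the packet. Below is a self-contained sketch of the same pattern with hypothetical names, using errors.As/errors.Is instead of the direct type assertion in the diff:

```go
package main

import (
	"errors"
	"log"
	"net"
	"syscall"
)

// writeWithRetry keeps retrying a datagram write while the kernel reports
// ENOBUFS (send buffer full); any other error is logged and the write is
// abandoned, mirroring the loop in sendICMP.
func writeWithRetry(conn net.PacketConn, payload []byte, dst net.Addr) {
	for {
		_, err := conn.WriteTo(payload, dst)
		if err == nil {
			return
		}
		var opErr *net.OpError
		if errors.As(err, &opErr) && errors.Is(opErr.Err, syscall.ENOBUFS) {
			continue // transient: kernel buffer space exhausted, try again
		}
		log.Printf("write failed: %v", err)
		return
	}
}

func main() {
	// Exercise the helper over a loopback UDP socket; a raw ICMP socket
	// would need elevated privileges.
	conn, err := net.ListenPacket("udp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	writeWithRetry(conn, []byte("ping"), conn.LocalAddr())
}
```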

View file

@ -24,4 +24,93 @@ func Test0001(t *testing.T) {
proto.Unmarshal(dst[0:4], my1)
fmt.Println("my1 = ", my1)
fm := FrameMgr{}
fm.recvid = 4
fm.windowsize = 100
lr := &Frame{}
rr := &Frame{}
lr.Id = 1
rr.Id = 4
fmt.Println("fm.compareId(lr, rr) = ", fm.compareId((int)(lr.Id), (int)(rr.Id)))
lr.Id = 99
rr.Id = 8
fmt.Println("fm.compareId(lr, rr) = ", fm.compareId((int)(lr.Id), (int)(rr.Id)))
fm.recvid = 9000
lr.Id = 9998
rr.Id = 9999
fmt.Println("fm.compareId(lr, rr) = ", fm.compareId((int)(lr.Id), (int)(rr.Id)))
fm.recvid = 9000
lr.Id = 9998
rr.Id = 8
fmt.Println("fm.compareId(lr, rr) = ", fm.compareId((int)(lr.Id), (int)(rr.Id)))
fm.recvid = 0
lr.Id = 9998
rr.Id = 8
fmt.Println("fm.compareId(lr, rr) = ", fm.compareId((int)(lr.Id), (int)(rr.Id)))
fm.recvid = 0
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(4, 10))
fm.recvid = 0
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(5, 10))
fm.recvid = 4
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(1, 10))
fm.recvid = 7
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(1, 10))
fm.recvid = 7
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(2, 10))
fm.recvid = 7
fm.windowsize = 5
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(9, 10))
fm.recvid = 10
fm.windowsize = 10000
fmt.Println("fm.isIdInRange = ", fm.isIdInRange(0, FRAME_MAX_ID))
fm.recvid = 7
fm.windowsize = 5
fmt.Println("fm.isIdOld = ", fm.isIdOld(2, 10))
fm.recvid = 7
fm.windowsize = 5
fmt.Println("fm.isIdOld = ", fm.isIdOld(1, 10))
fm.recvid = 3
fm.windowsize = 5
fmt.Println("fm.isIdOld = ", fm.isIdOld(1, 10))
fm.recvid = 13
fm.windowsize = 10000
fmt.Println("fm.isIdOld = ", fm.isIdOld(9, FRAME_MAX_ID))
dd := fm.compressData(([]byte)("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"))
fmt.Println("fm.compressData = ", len(dd))
_, ddd := fm.deCompressData(dd)
fmt.Println("fm.deCompressData = ", (string)(ddd))
mm := make(map[int32]int)
mm[1] = 1
mm[2] = 1
mm[3] = 1
mm[4] = 2
mm[6] = 7
mms := fm.printStatMap(&mm)
fmt.Println("fm.printStatMap = ", mms)
fm.openstat = 1
fm.resetStat()
fm.printStat()
}

430
server.go
View file

@ -1,63 +1,37 @@
package pingtunnel
import (
"github.com/esrrhs/gohome/common"
"github.com/esrrhs/gohome/frame"
"github.com/esrrhs/gohome/loggo"
"github.com/esrrhs/gohome/threadpool"
"github.com/esrrhs/go-engine/src/common"
"github.com/esrrhs/go-engine/src/loggo"
"github.com/golang/protobuf/proto"
"golang.org/x/net/icmp"
"net"
"sync"
"time"
)
func NewServer(key int, maxconn int, maxprocessthread int, maxprocessbuffer int, connecttmeout int) (*Server, error) {
s := &Server{
exit: false,
key: key,
maxconn: maxconn,
maxprocessthread: maxprocessthread,
maxprocessbuffer: maxprocessbuffer,
connecttmeout: connecttmeout,
}
if maxprocessthread > 0 {
s.processtp = threadpool.NewThreadPool(maxprocessthread, maxprocessbuffer, func(v interface{}) {
packet := v.(*Packet)
s.processDataPacket(packet)
})
}
return s, nil
func NewServer(key int) (*Server, error) {
return &Server{
key: key,
}, nil
}
type Server struct {
exit bool
key int
workResultLock sync.WaitGroup
maxconn int
maxprocessthread int
maxprocessbuffer int
connecttmeout int
key int
conn *icmp.PacketConn
localConnMap sync.Map
connErrorMap sync.Map
localConnMap map[string]*ServerConn
sendPacket uint64
recvPacket uint64
sendPacketSize uint64
recvPacketSize uint64
localConnMapSize int
sendPacket uint64
recvPacket uint64
sendPacketSize uint64
recvPacketSize uint64
processtp *threadpool.ThreadPool
recvcontrol chan int
echoId int
echoSeq int
}
type ServerConn struct {
exit bool
timeout int
ipaddrTarget *net.UDPAddr
conn *net.UDPConn
@ -68,64 +42,36 @@ type ServerConn struct {
activeSendTime time.Time
close bool
rproto int
fm *frame.FrameMgr
fm *FrameMgr
tcpmode int
echoId int
echoSeq int
}
func (p *Server) Run() error {
func (p *Server) Run() {
conn, err := icmp.ListenPacket("ip4:icmp", "")
if err != nil {
loggo.Error("Error listening for ICMP packets: %s", err.Error())
return err
return
}
p.conn = conn
p.localConnMap = make(map[string]*ServerConn)
recv := make(chan *Packet, 10000)
p.recvcontrol = make(chan int, 1)
go recvICMP(&p.workResultLock, &p.exit, *p.conn, recv)
go recvICMP(*p.conn, recv)
go func() {
defer common.CrashLog()
interval := time.NewTicker(time.Second)
defer interval.Stop()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
for !p.exit {
for {
select {
case <-interval.C:
p.checkTimeoutConn()
p.showNet()
p.updateConnError()
time.Sleep(time.Second)
case r := <-recv:
p.processPacket(r)
}
}()
go func() {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
for !p.exit {
select {
case <-p.recvcontrol:
return
case r := <-recv:
p.processPacket(r)
}
}
}()
return nil
}
func (p *Server) Stop() {
p.exit = true
p.recvcontrol <- 1
p.workResultLock.Wait()
p.processtp.Stop()
p.conn.Close()
}
}
func (p *Server) processPacket(packet *Packet) {
@ -134,6 +80,9 @@ func (p *Server) processPacket(packet *Packet) {
return
}
p.echoId = packet.echoId
p.echoSeq = packet.echoSeq
if packet.my.Type == (int32)(MyMsg_PING) {
t := time.Time{}
t.UnmarshalBinary(packet.my.Data)
@ -145,112 +94,69 @@ func (p *Server) processPacket(packet *Packet) {
return
}
if packet.my.Type == (int32)(MyMsg_KICK) {
localConn := p.getServerConnById(packet.my.Id)
if localConn != nil {
p.close(localConn)
loggo.Info("remote kick local %s", packet.my.Id)
}
return
}
if p.maxprocessthread > 0 {
p.processtp.AddJob((int)(common.HashString(packet.my.Id)), packet)
} else {
p.processDataPacket(packet)
}
}
func (p *Server) processDataPacketNewConn(id string, packet *Packet) *ServerConn {
now := common.GetNowUpdateInSecond()
loggo.Info("start add new connect %s %s", id, packet.my.Target)
if p.maxconn > 0 && p.localConnMapSize >= p.maxconn {
loggo.Info("too many connections %d, server connected target fail %s", p.localConnMapSize, packet.my.Target)
p.remoteError(packet.echoId, packet.echoSeq, id, (int)(packet.my.Rproto), packet.src)
return nil
}
addr := packet.my.Target
if p.isConnError(addr) {
loggo.Info("addr connect Error before: %s %s", id, addr)
p.remoteError(packet.echoId, packet.echoSeq, id, (int)(packet.my.Rproto), packet.src)
return nil
}
if packet.my.Tcpmode > 0 {
c, err := net.DialTimeout("tcp", addr, time.Millisecond*time.Duration(p.connecttmeout))
if err != nil {
loggo.Error("Error listening for tcp packets: %s %s", id, err.Error())
p.remoteError(packet.echoId, packet.echoSeq, id, (int)(packet.my.Rproto), packet.src)
p.addConnError(addr)
return nil
}
targetConn := c.(*net.TCPConn)
ipaddrTarget := targetConn.RemoteAddr().(*net.TCPAddr)
fm := frame.NewFrameMgr(FRAME_MAX_SIZE, FRAME_MAX_ID, (int)(packet.my.TcpmodeBuffersize), (int)(packet.my.TcpmodeMaxwin), (int)(packet.my.TcpmodeResendTimems), (int)(packet.my.TcpmodeCompress),
(int)(packet.my.TcpmodeStat))
localConn := &ServerConn{exit: false, timeout: (int)(packet.my.Timeout), tcpconn: targetConn, tcpaddrTarget: ipaddrTarget, id: id, activeRecvTime: now, activeSendTime: now, close: false,
rproto: (int)(packet.my.Rproto), fm: fm, tcpmode: (int)(packet.my.Tcpmode)}
p.addServerConn(id, localConn)
go p.RecvTCP(localConn, id, packet.src)
return localConn
} else {
c, err := net.DialTimeout("udp", addr, time.Millisecond*time.Duration(p.connecttmeout))
if err != nil {
loggo.Error("Error listening for udp packets: %s %s", id, err.Error())
p.remoteError(packet.echoId, packet.echoSeq, id, (int)(packet.my.Rproto), packet.src)
p.addConnError(addr)
return nil
}
targetConn := c.(*net.UDPConn)
ipaddrTarget := targetConn.RemoteAddr().(*net.UDPAddr)
localConn := &ServerConn{exit: false, timeout: (int)(packet.my.Timeout), conn: targetConn, ipaddrTarget: ipaddrTarget, id: id, activeRecvTime: now, activeSendTime: now, close: false,
rproto: (int)(packet.my.Rproto), tcpmode: (int)(packet.my.Tcpmode)}
p.addServerConn(id, localConn)
go p.Recv(localConn, id, packet.src)
return localConn
}
return nil
}
func (p *Server) processDataPacket(packet *Packet) {
loggo.Debug("processPacket %s %s %d", packet.my.Id, packet.src.String(), len(packet.my.Data))
now := common.GetNowUpdateInSecond()
now := time.Now()
id := packet.my.Id
localConn := p.getServerConnById(id)
localConn := p.localConnMap[id]
if localConn == nil {
localConn = p.processDataPacketNewConn(id, packet)
if localConn == nil {
return
if packet.my.Tcpmode > 0 {
addr := packet.my.Target
ipaddrTarget, err := net.ResolveTCPAddr("tcp", addr)
if err != nil {
loggo.Error("Error ResolveUDPAddr for tcp addr: %s %s", addr, err.Error())
return
}
targetConn, err := net.DialTCP("tcp", nil, ipaddrTarget)
if err != nil {
loggo.Error("Error listening for tcp packets: %s", err.Error())
return
}
fm := NewFrameMgr((int)(packet.my.TcpmodeBuffersize), (int)(packet.my.TcpmodeMaxwin), (int)(packet.my.TcpmodeResendTimems), (int)(packet.my.TcpmodeCompress),
(int)(packet.my.TcpmodeStat))
localConn = &ServerConn{timeout: (int)(packet.my.Timeout), tcpconn: targetConn, tcpaddrTarget: ipaddrTarget, id: id, activeRecvTime: now, activeSendTime: now, close: false,
rproto: (int)(packet.my.Rproto), fm: fm, tcpmode: (int)(packet.my.Tcpmode)}
p.localConnMap[id] = localConn
go p.RecvTCP(localConn, id, packet.src)
} else {
addr := packet.my.Target
ipaddrTarget, err := net.ResolveUDPAddr("udp", addr)
if err != nil {
loggo.Error("Error ResolveUDPAddr for udp addr: %s %s", addr, err.Error())
return
}
targetConn, err := net.DialUDP("udp", nil, ipaddrTarget)
if err != nil {
loggo.Error("Error listening for udp packets: %s", err.Error())
return
}
localConn = &ServerConn{timeout: (int)(packet.my.Timeout), conn: targetConn, ipaddrTarget: ipaddrTarget, id: id, activeRecvTime: now, activeSendTime: now, close: false,
rproto: (int)(packet.my.Rproto), tcpmode: (int)(packet.my.Tcpmode)}
p.localConnMap[id] = localConn
go p.Recv(localConn, id, packet.src)
}
}
localConn.activeRecvTime = now
localConn.echoId = packet.echoId
localConn.echoSeq = packet.echoSeq
if packet.my.Type == (int32)(MyMsg_DATA) {
if packet.my.Tcpmode > 0 {
f := &frame.Frame{}
f := &Frame{}
err := proto.Unmarshal(packet.my.Data, f)
if err != nil {
loggo.Error("Unmarshal tcp Error %s", err)
@ -260,9 +166,6 @@ func (p *Server) processDataPacket(packet *Packet) {
localConn.fm.OnRecvFrame(f)
} else {
if packet.my.Data == nil {
return
}
_, err := localConn.conn.Write(packet.my.Data)
if err != nil {
loggo.Info("WriteToUDP Error %s", err)
@ -278,25 +181,20 @@ func (p *Server) processDataPacket(packet *Packet) {
func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
loggo.Info("server waiting target response %s -> %s %s", conn.tcpaddrTarget.String(), conn.id, conn.tcpconn.LocalAddr().String())
loggo.Info("start wait remote connect tcp %s %s", conn.id, conn.tcpaddrTarget.String())
startConnectTime := common.GetNowUpdateInSecond()
for !p.exit && !conn.exit {
startConnectTime := time.Now()
for {
if conn.fm.IsConnected() {
break
}
conn.fm.Update()
sendlist := conn.fm.GetSendList()
sendlist := conn.fm.getSendList()
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*frame.Frame)
mb, _ := conn.fm.MarshalFrame(f)
sendICMP(conn.echoId, conn.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
f := e.Value.(*Frame)
mb, _ := proto.Marshal(f)
sendICMP(p.echoId, p.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
conn.rproto, -1, p.key,
0, 0, 0, 0, 0, 0,
0)
@ -304,27 +202,24 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
p.sendPacketSize += (uint64)(len(mb))
}
time.Sleep(time.Millisecond * 10)
now := common.GetNowUpdateInSecond()
now := time.Now()
diffclose := now.Sub(startConnectTime)
if diffclose > time.Second*5 {
if diffclose > time.Second*(time.Duration(conn.timeout)) {
loggo.Info("can not connect remote tcp %s %s", conn.id, conn.tcpaddrTarget.String())
p.close(conn)
p.remoteError(conn.echoId, conn.echoSeq, id, conn.rproto, src)
p.Close(conn)
return
}
}
if !conn.exit {
loggo.Info("remote connected tcp %s %s", conn.id, conn.tcpaddrTarget.String())
}
loggo.Info("remote connected tcp %s %s", conn.id, conn.tcpaddrTarget.String())
bytes := make([]byte, 10240)
tcpActiveRecvTime := common.GetNowUpdateInSecond()
tcpActiveSendTime := common.GetNowUpdateInSecond()
tcpActiveRecvTime := time.Now()
tcpActiveSendTime := time.Now()
for !p.exit && !conn.exit {
now := common.GetNowUpdateInSecond()
for {
now := time.Now()
sleep := true
left := common.MinOfInt(conn.fm.GetSendBufferLeft(), len(bytes))
@ -348,18 +243,18 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
conn.fm.Update()
sendlist := conn.fm.GetSendList()
sendlist := conn.fm.getSendList()
if sendlist.Len() > 0 {
sleep = false
conn.activeSendTime = now
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*frame.Frame)
mb, err := conn.fm.MarshalFrame(f)
f := e.Value.(*Frame)
mb, err := proto.Marshal(f)
if err != nil {
loggo.Error("Error tcp Marshal %s %s %s", conn.id, conn.tcpaddrTarget.String(), err)
continue
}
sendICMP(conn.echoId, conn.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
sendICMP(p.echoId, p.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
conn.rproto, -1, p.key,
0, 0, 0, 0, 0, 0,
0)
@ -396,7 +291,7 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
tcpdiffrecv := now.Sub(tcpActiveRecvTime)
tcpdiffsend := now.Sub(tcpActiveSendTime)
if diffrecv > time.Second*(time.Duration(conn.timeout)) || diffsend > time.Second*(time.Duration(conn.timeout)) ||
(tcpdiffrecv > time.Second*(time.Duration(conn.timeout)) && tcpdiffsend > time.Second*(time.Duration(conn.timeout))) {
tcpdiffrecv > time.Second*(time.Duration(conn.timeout)) || tcpdiffsend > time.Second*(time.Duration(conn.timeout)) {
loggo.Info("close inactive conn %s %s", conn.id, conn.tcpaddrTarget.String())
conn.fm.Close()
break
@ -409,19 +304,17 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
}
}
conn.fm.Close()
startCloseTime := common.GetNowUpdateInSecond()
for !p.exit && !conn.exit {
now := common.GetNowUpdateInSecond()
startCloseTime := time.Now()
for {
now := time.Now()
conn.fm.Update()
sendlist := conn.fm.GetSendList()
sendlist := conn.fm.getSendList()
for e := sendlist.Front(); e != nil; e = e.Next() {
f := e.Value.(*frame.Frame)
mb, _ := conn.fm.MarshalFrame(f)
sendICMP(conn.echoId, conn.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
f := e.Value.(*Frame)
mb, _ := proto.Marshal(f)
sendICMP(p.echoId, p.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), mb,
conn.rproto, -1, p.key,
0, 0, 0, 0, 0, 0,
0)
@ -441,12 +334,14 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
}
diffclose := now.Sub(startCloseTime)
if diffclose > time.Second*60 {
timeout := diffclose > time.Second*(time.Duration(conn.timeout))
remoteclosed := conn.fm.IsRemoteClosed()
if timeout {
loggo.Info("close conn had timeout %s %s", conn.id, conn.tcpaddrTarget.String())
break
}
remoteclosed := conn.fm.IsRemoteClosed()
if remoteclosed && nodatarecv {
loggo.Info("remote conn had closed %s %s", conn.id, conn.tcpaddrTarget.String())
break
@ -458,21 +353,15 @@ func (p *Server) RecvTCP(conn *ServerConn, id string, src *net.IPAddr) {
time.Sleep(time.Second)
loggo.Info("close tcp conn %s %s", conn.id, conn.tcpaddrTarget.String())
p.close(conn)
p.Close(conn)
}
func (p *Server) Recv(conn *ServerConn, id string, src *net.IPAddr) {
defer common.CrashLog()
p.workResultLock.Add(1)
defer p.workResultLock.Done()
loggo.Info("server waiting target response %s -> %s %s", conn.ipaddrTarget.String(), conn.id, conn.conn.LocalAddr().String())
bytes := make([]byte, 2000)
for !p.exit {
for {
bytes := make([]byte, 2000)
conn.conn.SetReadDeadline(time.Now().Add(time.Millisecond * 100))
n, _, err := conn.conn.ReadFromUDP(bytes)
@ -485,10 +374,10 @@ func (p *Server) Recv(conn *ServerConn, id string, src *net.IPAddr) {
}
}
now := common.GetNowUpdateInSecond()
now := time.Now()
conn.activeSendTime = now
sendICMP(conn.echoId, conn.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), bytes[:n],
sendICMP(p.echoId, p.echoSeq, *p.conn, src, "", id, (uint32)(MyMsg_DATA), bytes[:n],
conn.rproto, -1, p.key,
0, 0, 0, 0, 0, 0,
0)
@ -498,31 +387,22 @@ func (p *Server) Recv(conn *ServerConn, id string, src *net.IPAddr) {
}
}
func (p *Server) close(conn *ServerConn) {
if p.getServerConnById(conn.id) != nil {
conn.exit = true
func (p *Server) Close(conn *ServerConn) {
if p.localConnMap[conn.id] != nil {
if conn.conn != nil {
conn.conn.Close()
}
if conn.tcpconn != nil {
conn.tcpconn.Close()
}
p.deleteServerConn(conn.id)
delete(p.localConnMap, conn.id)
}
}
func (p *Server) checkTimeoutConn() {
tmp := make(map[string]*ServerConn)
p.localConnMap.Range(func(key, value interface{}) bool {
id := key.(string)
serverConn := value.(*ServerConn)
tmp[id] = serverConn
return true
})
now := common.GetNowUpdateInSecond()
for _, conn := range tmp {
now := time.Now()
for _, conn := range p.localConnMap {
if conn.tcpmode > 0 {
continue
}
@ -533,82 +413,22 @@ func (p *Server) checkTimeoutConn() {
}
}
for id, conn := range tmp {
for id, conn := range p.localConnMap {
if conn.tcpmode > 0 {
continue
}
if conn.close {
loggo.Info("close inactive conn %s %s", id, conn.ipaddrTarget.String())
p.close(conn)
p.Close(conn)
}
}
}
func (p *Server) showNet() {
p.localConnMapSize = 0
p.localConnMap.Range(func(key, value interface{}) bool {
p.localConnMapSize++
return true
})
loggo.Info("send %dPacket/s %dKB/s recv %dPacket/s %dKB/s %dConnections",
p.sendPacket, p.sendPacketSize/1024, p.recvPacket, p.recvPacketSize/1024, p.localConnMapSize)
loggo.Info("send %dPacket/s %dKB/s recv %dPacket/s %dKB/s",
p.sendPacket, p.sendPacketSize/1024, p.recvPacket, p.recvPacketSize/1024)
p.sendPacket = 0
p.recvPacket = 0
p.sendPacketSize = 0
p.recvPacketSize = 0
}
func (p *Server) addServerConn(uuid string, serverConn *ServerConn) {
p.localConnMap.Store(uuid, serverConn)
}
func (p *Server) getServerConnById(uuid string) *ServerConn {
ret, ok := p.localConnMap.Load(uuid)
if !ok {
return nil
}
return ret.(*ServerConn)
}
func (p *Server) deleteServerConn(uuid string) {
p.localConnMap.Delete(uuid)
}
func (p *Server) remoteError(echoId int, echoSeq int, uuid string, rprpto int, src *net.IPAddr) {
sendICMP(echoId, echoSeq, *p.conn, src, "", uuid, (uint32)(MyMsg_KICK), []byte{},
rprpto, -1, p.key,
0, 0, 0, 0, 0, 0,
0)
}
func (p *Server) addConnError(addr string) {
_, ok := p.connErrorMap.Load(addr)
if !ok {
now := common.GetNowUpdateInSecond()
p.connErrorMap.Store(addr, now)
}
}
func (p *Server) isConnError(addr string) bool {
_, ok := p.connErrorMap.Load(addr)
return ok
}
func (p *Server) updateConnError() {
tmp := make(map[string]time.Time)
p.connErrorMap.Range(func(key, value interface{}) bool {
id := key.(string)
t := value.(time.Time)
tmp[id] = t
return true
})
now := common.GetNowUpdateInSecond()
for id, t := range tmp {
diff := now.Sub(t)
if diff > time.Second*5 {
p.connErrorMap.Delete(id)
}
}
}
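Putting the server pieces together: construct a Server and call Run to start listening for ICMP. The sketch below targets the simpler constructor variant visible in this hunk (NewServer taking only a key); the key value and the blocking select are illustrative, and the process needs privileges to open a raw ip4:icmp socket:

```go
package main

import (
	"log"

	"github.com/esrrhs/pingtunnel"
)

func main() {
	// The key must match the one supplied by the client side.
	s, err := pingtunnel.NewServer(0)
	if err != nil {
		log.Fatal(err)
	}

	// Run opens the ip4:icmp listener and spawns the receive/timeout
	// goroutines; block here in case it returns immediately.
	s.Run()
	select {}
}
```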

136
sock5.go Normal file
View file

@ -0,0 +1,136 @@
package pingtunnel
import (
"encoding/binary"
"errors"
"io"
"net"
"strconv"
"time"
)
var (
errAddrType = errors.New("socks addr type not supported")
errVer = errors.New("socks version not supported")
errMethod = errors.New("socks only support 1 method now")
errAuthExtraData = errors.New("socks authentication get extra data")
errReqExtraData = errors.New("socks request get extra data")
errCmd = errors.New("socks command not supported")
)
const (
socksVer5 = 5
socksCmdConnect = 1
)
func sock5Handshake(conn net.Conn) (err error) {
const (
idVer = 0
idNmethod = 1
)
// The version-identification/method-selection message can in theory carry
// up to 256 methods, which together with the version and nmethod fields is 258 bytes.
// The current RFC defines only 3 authentication methods (plus 2 reserved),
// so in practice the message is never that long.
buf := make([]byte, 258)
var n int
conn.SetReadDeadline(time.Now().Add(time.Millisecond * 100))
// make sure we get the nmethod field
if n, err = io.ReadAtLeast(conn, buf, idNmethod+1); err != nil {
return
}
if buf[idVer] != socksVer5 {
return errVer
}
nmethod := int(buf[idNmethod])
msgLen := nmethod + 2
if n == msgLen { // handshake done, common case
// do nothing, jump directly to send confirmation
} else if n < msgLen { // has more methods to read, rare case
if _, err = io.ReadFull(conn, buf[n:msgLen]); err != nil {
return
}
} else { // error, should not get extra data
return errAuthExtraData
}
// send confirmation: version 5, no authentication required
_, err = conn.Write([]byte{socksVer5, 0})
return
}
func sock5GetRequest(conn net.Conn) (rawaddr []byte, host string, err error) {
const (
idVer = 0
idCmd = 1
idType = 3 // address type index
idIP0 = 4 // ip address start index
idDmLen = 4 // domain address length index
idDm0 = 5 // domain address start index
typeIPv4 = 1 // type is ipv4 address
typeDm = 3 // type is domain address
typeIPv6 = 4 // type is ipv6 address
lenIPv4 = 3 + 1 + net.IPv4len + 2 // 3(ver+cmd+rsv) + 1addrType + ipv4 + 2port
lenIPv6 = 3 + 1 + net.IPv6len + 2 // 3(ver+cmd+rsv) + 1addrType + ipv6 + 2port
lenDmBase = 3 + 1 + 1 + 2 // 3 + 1addrType + 1addrLen + 2port, plus addrLen
)
// refer to getRequest in server.go for why the buffer size is 263
buf := make([]byte, 263)
var n int
conn.SetReadDeadline(time.Now().Add(time.Millisecond * 100))
// read till we get possible domain length field
if n, err = io.ReadAtLeast(conn, buf, idDmLen+1); err != nil {
return
}
// check version and cmd
if buf[idVer] != socksVer5 {
err = errVer
return
}
if buf[idCmd] != socksCmdConnect {
err = errCmd
return
}
reqLen := -1
switch buf[idType] {
case typeIPv4:
reqLen = lenIPv4
case typeIPv6:
reqLen = lenIPv6
case typeDm:
reqLen = int(buf[idDmLen]) + lenDmBase
default:
err = errAddrType
return
}
if n == reqLen {
// common case, do nothing
} else if n < reqLen { // rare case
if _, err = io.ReadFull(conn, buf[n:reqLen]); err != nil {
return
}
} else {
err = errReqExtraData
return
}
rawaddr = buf[idType:reqLen]
switch buf[idType] {
case typeIPv4:
host = net.IP(buf[idIP0 : idIP0+net.IPv4len]).String()
case typeIPv6:
host = net.IP(buf[idIP0 : idIP0+net.IPv6len]).String()
case typeDm:
host = string(buf[idDm0 : idDm0+buf[idDmLen]])
}
port := binary.BigEndian.Uint16(buf[reqLen-2 : reqLen])
host = net.JoinHostPort(host, strconv.Itoa(int(port)))
return
}
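These two helpers only parse the client's half of the SOCKS5 exchange; the caller still has to accept the TCP connection, run the handshake, read the request, send a reply, and then forward traffic for the returned address. A hedged sketch of that calling sequence follows; the accept loop, function name, and canned success reply are illustrative and not taken from the repository:

```go
package pingtunnel

import (
	"log"
	"net"
)

// serveSock5 is a hypothetical accept loop showing how sock5Handshake and
// sock5GetRequest fit together on an incoming connection.
func serveSock5(listenAddr string) error {
	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		return err
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			return err
		}
		go func(c net.Conn) {
			defer c.Close()
			if err := sock5Handshake(c); err != nil {
				log.Printf("sock5 handshake failed: %v", err)
				return
			}
			_, targetAddr, err := sock5GetRequest(c)
			if err != nil {
				log.Printf("sock5 request failed: %v", err)
				return
			}
			// Reply "request granted": VER=5, REP=0, RSV=0, ATYP=IPv4,
			// BND.ADDR=0.0.0.0, BND.PORT=0. After this the data for
			// targetAddr would be tunneled over ICMP.
			c.Write([]byte{5, 0, 0, 1, 0, 0, 0, 0, 0, 0})
			_ = targetAddr
		}(conn)
	}
}
```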