- Use named constant for dataChan buffer size
- Add bounds checking to prevent a panic if n > len(buf)
- Only send the valid data portion (buf[:n]) to dataChan
- Use sync.Once to prevent double-close panic in Close()
- Add comment explaining data loss risk (acceptable for UDP-like behavior)
All tests pass; no security vulnerabilities found.
Co-authored-by: RPRX <63339210+RPRX@users.noreply.github.com>
Replaced the competing reader goroutines with a unified reading architecture:
- Each peer connection continuously reads into a shared data channel
- A single dispatcher goroutine matches received data with read requests
- Eliminates blocking issues: all connections are monitored simultaneously
- No more race conditions between peer readers
This addresses @RPRX's suggestion to "统一 read 后再分给指定的 peer reader"
(unified read then distribute to specified peer readers).
Architecture:
- connectTo() registers connection and starts a dedicated reader per connection
- Each connection reader continuously reads and sends to dataChan
- unifiedReader() dispatcher waits for data, then matches with pending requests
- All peers can receive simultaneously without any blocking
Tests pass successfully.
Co-authored-by: RPRX <63339210+RPRX@users.noreply.github.com>
The issue was that with only 1 worker (default), when a peer's reader goroutine
blocked waiting for data, it prevented other peers from receiving packets.
Simple solution: Automatically set workers to the number of peers if not
explicitly configured. This allows each peer to have its own worker thread
for concurrent packet reception.
- Reverted complex architectural changes
- Added simple logic to set workers = len(peers) when NumWorkers not set
- Much simpler and easier to understand than previous approach
Co-authored-by: RPRX <63339210+RPRX@users.noreply.github.com>
Only clear reserved bytes if read was successful (err == nil). This prevents
processing invalid data when conn.Read() returns an error.
Code review feedback addressed.
Co-authored-by: RPRX <63339210+RPRX@users.noreply.github.com>
The root cause was architectural: each peer connection created a goroutine
that competed for the same readQueue. When a goroutine grabbed a read request
but its connection had no data, it would block, preventing other peers from
receiving packets. This caused the "only one peer works at a time" behavior.
Solution: Redesigned the packet flow:
- Each peer connection now continuously reads from its socket and sends
packets to a shared packetQueue
- A dispatcher goroutine matches readQueue requests (from WireGuard) with
packets from packetQueue
- This allows all peer connections to work simultaneously without blocking
Changes:
- Added packetQueue channel and receivedPacket struct to buffer packets
- Modified Open() to start a dispatcher goroutine
- Rewrote connectTo() to continuously read and queue packets
- Each peer connection now operates independently
Tests pass. This architectural fix addresses the fundamental issue with
multi-peer WireGuard support.
Co-authored-by: RPRX <63339210+RPRX@users.noreply.github.com>
* Refactor log
* Add new log methods
* Fix logger test
* Change all logging code
* Clean up pathObj
* Rebase to latest main
* Remove invoking method name after the dot
* feat: wireguard inbound
* feat(command): generate wireguard compatible keypair
* feat(wireguard): connection idle timeout
* fix(wireguard): close endpoint after connection closed
* fix(wireguard): resolve conflicts
* feat(wireguard): set cubic as default cc algorithm in gVisor TUN
* chore(wireguard): resolve conflict
* chore(wireguard): remove redundant code
* chore(wireguard): remove redundant code
* feat: rework server for gvisor tun
* feat: keep user-space tun as an option
* fix: exclude android from native tun build
* feat: auto kernel tun
* fix: build
* fix: regulate function name & fix test