feat(nodes): traffic-writer queue, full-mirror sync, WS event fixes

- Traffic-writer single-consumer queue (web/service/traffic_writer.go)
  serialises every DB write that touches up/down/all_time/last_online
  (AddTraffic, SetRemoteTraffic, Reset*, UpdateClientTrafficByEmail) so
  overlapping goroutines can no longer clobber each other's column-scoped
  Updates with a stale tx.Save.
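  The pattern can be sketched as below — a buffered channel of closures
  drained by exactly one goroutine, so two overlapping updates can never
  interleave their read-modify-write steps. `writeQueue` and its methods
  are illustrative names, not the actual traffic_writer.go API:

  ```go
  package main

  import (
  	"fmt"
  	"sync"
  )

  // writeQueue serialises mutations: every write is enqueued as a closure
  // and applied by a single consumer goroutine, in arrival order.
  type writeQueue struct {
  	jobs chan func()
  	wg   sync.WaitGroup
  }

  func newWriteQueue(buf int) *writeQueue {
  	q := &writeQueue{jobs: make(chan func(), buf)}
  	q.wg.Add(1)
  	go func() { // the single consumer
  		defer q.wg.Done()
  		for job := range q.jobs {
  			job()
  		}
  	}()
  	return q
  }

  // Enqueue hands a write to the consumer; callers never touch the row directly.
  func (q *writeQueue) Enqueue(job func()) { q.jobs <- job }

  // Close drains remaining jobs and stops the consumer.
  func (q *writeQueue) Close() {
  	close(q.jobs)
  	q.wg.Wait()
  }

  func main() {
  	q := newWriteQueue(64)
  	up := 0 // stands in for an up/down/all_time column
  	var callers sync.WaitGroup
  	for i := 0; i < 100; i++ { // 100 concurrent "AddTraffic" callers
  		callers.Add(1)
  		go func() {
  			defer callers.Done()
  			q.Enqueue(func() { up++ }) // applied serially: no lost updates
  		}()
  	}
  	callers.Wait()
  	q.Close()
  	fmt.Println(up) // prints 100
  }
  ```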

- DB pool: WAL + busy_timeout=10s + synchronous=NORMAL +
  _txlock=immediate, MaxOpenConns=8 / MaxIdleConns=4. The immediate-tx
  setting fixes residual "database is locked [0ms]" cases where
  deferred-tx writer-upgrade conflicts bypass busy_timeout.
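  The DSN side can be sketched as follows. The parameter spellings
  (`_journal_mode`, `_busy_timeout`, `_synchronous`, `_txlock`) follow
  mattn/go-sqlite3 and are an assumption here — other drivers name the
  same settings differently:

  ```go
  package main

  import (
  	"fmt"
  	"net/url"
  )

  // sqliteDSN builds a DSN carrying the pool-hardening options above.
  func sqliteDSN(path string) string {
  	v := url.Values{}
  	v.Set("_journal_mode", "WAL")   // readers no longer block the writer
  	v.Set("_busy_timeout", "10000") // wait up to 10s instead of failing at 0ms
  	v.Set("_synchronous", "NORMAL") // safe under WAL, fewer fsyncs
  	v.Set("_txlock", "immediate")   // BEGIN IMMEDIATE: take the write lock up front
  	return "file:" + path + "?" + v.Encode()
  }

  func main() {
  	fmt.Println(sqliteDSN("x-ui.db"))
  }
  ```

  The pool caps would then be applied on the opened handle via
  `db.SetMaxOpenConns(8)` / `db.SetMaxIdleConns(4)` from database/sql.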

- SetRemoteTraffic full-mirrors node-authoritative state into central:
  settings JSON, remark, listen, port, total, expiry, all_time, enable,
  plus per-client total/expiry/reset/all_time. Inbounds and
  client_traffics rows present on node but missing from central are
  created; rows missing from snap are deleted (with cascading
  client_traffics removal).
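  The create-missing / delete-stale step reduces to a set diff against the
  node snapshot. A minimal sketch with illustrative types — the real
  SetRemoteTraffic operates on inbound and client_traffics rows with
  cascading deletes, not strings:

  ```go
  package main

  import (
  	"fmt"
  	"sort"
  )

  // reconcile mirrors the node-authoritative snapshot into the central set:
  // keys on the node but not central are created; keys missing from the
  // snapshot are deleted.
  func reconcile(central map[string]bool, snap []string) (created, deleted []string) {
  	inSnap := make(map[string]bool, len(snap))
  	for _, k := range snap {
  		inSnap[k] = true
  		if !central[k] {
  			central[k] = true // present on node, missing centrally: create
  			created = append(created, k)
  		}
  	}
  	for k := range central {
  		if !inSnap[k] {
  			delete(central, k) // gone from snapshot: delete (cascades in the real code)
  			deleted = append(deleted, k)
  		}
  	}
  	sort.Strings(created)
  	sort.Strings(deleted)
  	return created, deleted
  }

  func main() {
  	central := map[string]bool{"vless-a": true, "vmess-old": true}
  	created, deleted := reconcile(central, []string{"vless-a", "trojan-new"})
  	fmt.Println(created, deleted) // prints [trojan-new] [vmess-old]
  }
  ```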

- NodeTrafficSyncJob detects structural changes from the mirror and
  broadcasts invalidate(inbounds) so open central UIs re-fetch via REST
  on node-side add/del/edit without manual refresh.
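  The invalidate payload shape can be sketched as below. Only the `type`
  field is grounded in this commit (the dataType -> type fix further down);
  the `event` wrapper field and struct name are assumptions for
  illustration:

  ```go
  package main

  import (
  	"encoding/json"
  	"fmt"
  )

  // invalidateMsg is a hypothetical shape for the WS invalidate push; the
  // frontend matches on "type", not the old "dataType" field.
  type invalidateMsg struct {
  	Event string `json:"event"`
  	Type  string `json:"type"`
  }

  func main() {
  	b, err := json.Marshal(invalidateMsg{Event: "invalidate", Type: "inbounds"})
  	if err != nil {
  		panic(err)
  	}
  	fmt.Println(string(b)) // prints {"event":"invalidate","type":"inbounds"}
  }
  ```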

- XrayTrafficJob broadcasts invalidate(inbounds) when auto-disable flips
  client_traffics.enable so the per-client toggle reflects depletion
  without manual refresh.

- Frontend: inbounds page now subscribes to the BroadcastInbounds 'inbounds'
  WS event (full-list pushes from add/del/update controllers were silently
  dropped). Fixes invalidate payload field (dataType -> type). Restart-
  panel modal switched from Promise-wrap to onOk-only so Cancel actually
  cancels.

- Node files trimmed of stale prose-comments; cron cadence dropped
  10s -> 5s to match the inbounds page UX.

- README badges and Go module path bumped v2 -> v3 to match module rename.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Author: MHSanaei
Date: 2026-05-10 16:25:23 +02:00
parent 24cd271486
commit 8e7d215b4a
25 changed files with 559 additions and 639 deletions
@@ -8,14 +8,6 @@ import (
"github.com/mhsanaei/3x-ui/v3/database/model"
)
-// Manager is the entry point for service code that needs a Runtime.
-// One singleton lives in the package-level `manager` var, set at
-// server bootstrap (web.go calls SetManager once). InboundService and
-// friends read it via GetManager().
-//
-// Local runs forever; Remotes are built lazily per nodeID and cached.
-// Cache invalidation runs on node Update/Delete (NodeService hooks
-// InvalidateNode) so a token rotation surfaces the next call.
type Manager struct {
local Runtime
@@ -23,9 +15,6 @@ type Manager struct {
remotes map[int]*Remote
}
-// NewManager wires the singleton with the deps Local needs. The runtime
-// package can't import service so the caller (web.go) supplies the
-// callbacks that bridge into XrayService.
func NewManager(localDeps LocalDeps) *Manager {
return &Manager{
local: NewLocal(localDeps),
@@ -33,10 +22,6 @@ func NewManager(localDeps LocalDeps) *Manager {
}
}
-// RuntimeFor picks the right adapter for an inbound based on NodeID.
-// Returns local when nodeID is nil; otherwise looks up the node row
-// (or returns the cached Remote for it). The caller does not need to
-// know which kind they got — that's the point of the abstraction.
func (m *Manager) RuntimeFor(nodeID *int) (Runtime, error) {
if nodeID == nil {
return m.local, nil
@@ -48,8 +33,6 @@ func (m *Manager) RuntimeFor(nodeID *int) (Runtime, error) {
}
m.mu.RUnlock()
-// Cache miss — load the node row and build a Remote. We re-check
-// under the write lock to avoid duplicate construction under load.
m.mu.Lock()
defer m.mu.Unlock()
if rt, ok := m.remotes[*nodeID]; ok {
@@ -67,16 +50,8 @@ func (m *Manager) RuntimeFor(nodeID *int) (Runtime, error) {
return rt, nil
}
-// Local returns the singleton local runtime. Used by code that needs
-// to operate on the panel's own xray regardless of which inbound it
-// came from (e.g. on-demand restart from the UI).
func (m *Manager) Local() Runtime { return m.local }
-// RemoteFor returns the Remote adapter for an already-loaded node row.
-// Differs from RuntimeFor in two ways: it skips the DB lookup (caller
-// hands in the node), and it returns the concrete *Remote so callers
-// like NodeTrafficSyncJob can reach FetchTrafficSnapshot, which the
-// Runtime interface doesn't expose.
func (m *Manager) RemoteFor(node *model.Node) (*Remote, error) {
if node == nil {
return nil, errors.New("node is nil")
@@ -98,18 +73,12 @@ func (m *Manager) RemoteFor(node *model.Node) (*Remote, error) {
return rt, nil
}
-// InvalidateNode drops the cached Remote for nodeID so the next
-// RuntimeFor call rebuilds it from the (possibly updated) node row.
-// Called from NodeService.Update / Delete.
func (m *Manager) InvalidateNode(nodeID int) {
m.mu.Lock()
defer m.mu.Unlock()
delete(m.remotes, nodeID)
}
-// loadNode reads a node row directly from the DB. Kept package-local
-// to avoid pulling NodeService into the runtime — service depends on
-// runtime, not the other way around.
func loadNode(id int) (*model.Node, error) {
db := database.GetDB()
n := &model.Node{}
@@ -119,25 +88,17 @@ func loadNode(id int) (*model.Node, error) {
return n, nil
}
-// Singleton wiring -------------------------------------------------------
var (
managerMu sync.RWMutex
manager *Manager
)
-// SetManager installs the process-wide Manager. web.go calls this once
-// during NewServer. Tests can call it again with a stub.
func SetManager(m *Manager) {
managerMu.Lock()
defer managerMu.Unlock()
manager = m
}
-// GetManager returns the installed Manager, or nil before SetManager
-// has run. Callers should treat nil as "still booting" — the existing
-// behaviour for code paths that only run on the local engine continues
-// to work via a pre-wired fallback set up in init() below.
func GetManager() *Manager {
managerMu.RLock()
defer managerMu.RUnlock()