RPC Node
How to run the RPC node
Testnet Bandwidth management
On testnet, network bandwidth is a shared, finite resource. To protect it, tc is used at the RPC node to limit ingress bandwidth, which in turn limits the load placed on the network as a whole.
First, determine the DoubleZero interface name to limit (e.g. doublezero0) and set these variables.
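One way to spot the interface is with ip (a sketch; it assumes the DoubleZero interface appears in ip link output with a name containing doublezero, such as doublezero0@..., of which only the part before the @ is used):
# list interfaces briefly and look for the DoubleZero tunnel
ip -br link show | grep -i doublezero
# note the MTU too; it determines the QUANTUM value below (assumes the name doublezero0)
cat /sys/class/net/doublezero0/mtu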
- Set up environment variables based on your requirements:
# confirm this matches the DZ interface; use only the part before the @ shown by ip link
DEV="doublezero0"
# name of the IFB (keep this unless already using IFBs)
IFB="ifb0"
# For a 1 Gbps rate limit use
RATE="1gbit" BURST="2m"
# For a 2 Gbps rate limit use
RATE="2gbit" BURST="4m"
# For an MTU of 1500 use
QUANTUM=1514
# For an MTU of 1476 (DZ GRE Tunnel)
QUANTUM=1490
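The quantum values above are the interface MTU plus the 14-byte Ethernet header (1500 → 1514, 1476 → 1490). If your MTU differs, the value can be derived directly (a sketch, assuming standard Ethernet framing):
# quantum = interface MTU + 14-byte Ethernet header
QUANTUM=$(( $(cat /sys/class/net/"$DEV"/mtu) + 14 ))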
- Clean up any prior state (these commands may report errors if nothing is configured yet; that is expected):
sudo tc qdisc del dev "$DEV" clsact
sudo tc qdisc del dev "$IFB" root
sudo ip link del "$IFB"
- Load the IFB module and create the IFB device:
sudo modprobe ifb
sudo ip link add "$IFB" type ifb
sudo ip link set "$IFB" up
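As a quick sanity check, the IFB device should now exist and be administratively up:
ip -br link show "$IFB"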
- Attach the ingress hook to the real NIC and redirect TCP (IPv4 + IPv6) to the IFB using the flower classifier and the clsact qdisc:
sudo tc qdisc add dev "$DEV" clsact
# IPv4 TCP -> IFB
sudo tc filter add dev "$DEV" ingress protocol ip pref 10 flower ip_proto tcp \
action mirred egress redirect dev "$IFB"
# IPv6 TCP -> IFB
sudo tc filter add dev "$DEV" ingress protocol ipv6 pref 20 flower ip_proto tcp \
action mirred egress redirect dev "$IFB"This takes incoming TCP packets on $DEV and feeds them into $IFB egress, where we can shape properly.
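To spot-check that TCP traffic is actually being redirected, packets should now be visible on the IFB device (a sketch, assuming tcpdump is installed):
# capture a few redirected TCP packets on the IFB device
sudo tcpdump -ni "$IFB" -c 5 tcp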
- Shape on the IFB to $RATE with HTB, explicitly setting quantum and burst/cburst, then attach fq_codel:
sudo tc qdisc add dev "$IFB" root handle 1: htb default 10
sudo tc class add dev "$IFB" parent 1: classid 1:10 htb \
rate "$RATE" ceil "$RATE" \
quantum "$QUANTUM" \
burst "$BURST" cburst "$BURST"
sudo tc qdisc add dev "$IFB" parent 1:10 handle 10: fq_codelVerify it’s active
Device Ingress
tc filter show dev "$DEV" ingress
Example output:
filter protocol ip pref 10 flower chain 0
filter protocol ip pref 10 flower chain 0 handle 0x1
eth_type ipv4
ip_proto tcp
not_in_hw
action order 1: mirred (Egress Redirect to device ifb0) stolen
index 1 ref 1 bind 1
filter protocol ipv6 pref 20 flower chain 0
filter protocol ipv6 pref 20 flower chain 0 handle 0x1
eth_type ipv6
ip_proto tcp
not_in_hw
action order 1: mirred (Egress Redirect to device ifb0) stolen
index 2 ref 1 bind 1
- Mirrored redirect to $IFB.
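Adding -s also prints per-action statistics for the filters; the counters on the mirred actions should increase as TCP traffic arrives (exact output varies by iproute2 version):
tc -s filter show dev "$DEV" ingress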
IFB Setup
tc -s qdisc show dev "$IFB"
Example output:
qdisc htb 1: root refcnt 2 r2q 10 default 0x10 direct_packets_stat 48 direct_qlen 32
Sent 23398 bytes 374 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 10: parent 1:10 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
Sent 19476 bytes 311 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 104 drop_overlimit 0 new_flow_count 2 ecn_mark 0
new_flows_len 0 old_flows_len 1
- clsact on $DEV
- Counters increasing on the IFB qdisc
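To watch the IFB counters move in real time while traffic flows (a sketch, assuming the watch utility is available):
watch -n 1 "tc -s qdisc show dev $IFB"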
IFB Class
tc -s class show dev "$IFB"
Example output:
class htb 1:10 root leaf 10: prio 0 rate 1Gbit ceil 1Gbit burst 2Mb cburst 2Mb
Sent 23204 bytes 371 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
lended: 371 borrowed: 0 giants: 0
tokens: 262133 ctokens: 262133
class fq_codel 10:a parent 10:
(dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
deficit 508 count 0 lastcount 0 ldelay 1us
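With the filters, classes, and qdiscs confirmed, an end-to-end check is to push TCP at the node and confirm it plateaus near $RATE (a sketch, assuming iperf3 is installed on the RPC node and on a separate test host; <rpc-node-ip> is a placeholder):
# on the RPC node (receiver)
iperf3 -s
# on the remote test host: 4 parallel TCP streams toward the node for 20 seconds;
# the aggregate receive rate should cap out near $RATE
iperf3 -c <rpc-node-ip> -P 4 -t 20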
Remove / clean up
See the "Clean up any prior state" step above.
Notes
- This shapes aggregate incoming TCP to $RATE, but fq_codel gives per-flow fairness (much better than ingress policing).
- If you want to shape all ingress traffic (TCP+UDP), drop the ip_proto tcp match and redirect all protocol ip and protocol ipv6 traffic (or add separate filters), as sketched below.
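A sketch of that variant: the same mirred redirect, with the ip_proto tcp match removed so UDP and other IP traffic are shaped as well (use these in place of the TCP-only filters above):
# IPv4 (all protocols) -> IFB
sudo tc filter add dev "$DEV" ingress protocol ip pref 10 flower \
  action mirred egress redirect dev "$IFB"
# IPv6 (all protocols) -> IFB
sudo tc filter add dev "$DEV" ingress protocol ipv6 pref 20 flower \
  action mirred egress redirect dev "$IFB"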