Carry-Coin: Notes on Service Migration and Traffic Optimization

Recently my Contabo server kept crashing. I emailed support to report the problem; at first they insisted nothing was wrong and told me to check on my end. After two days of back-and-forth, screenshots, and ticket filing, they finally admitted there was a problem and said the technical team was investigating, but gave no timeline for a fix.

Email reply

Connecting over VNC, I saw that the sda3 disk kept failing to come up, stuck at /dev/sda3: recovering journal. My guess was either failed physical storage or a flaky virtualization platform. Fortunately, roughly one reboot in 5-8 would get into the system, so I hurried to copy the data out and leave.

Current deployment architecture

The original Contabo machine was 8 vCPU / 24 GB RAM with 32 TB of monthly traffic for $26. I shopped the same spec around domestic providers and found no bargains, and finally settled on Tencent Cloud, where the same configuration costs dramatically more. So I adjusted the architecture: start with a low-spec 2 vCPU / 4 GB / 90 GB SSD lightweight server, move front, server, and db back onto it, and deal with the workers later.


The application ran without major issues after the migration. The one fly in the ointment is that egress traffic is very tight: the lightweight server's traffic package is only 2 TB, yet after 10 hours it had already burned 80+ GB, roughly 200 GB per day by a rough estimate.
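A quick back-of-the-envelope projection, using the observed 80 GB over 10 hours and the 2 TB monthly package, shows how fast the package would run out:

```java
// Project monthly egress from the observed burn rate.
// Observed: ~80 GB in 10 hours; traffic package: 2 TB (taken here as 2048 GB).
public class TrafficProjection {

    static double perDayGb(double observedGb, double observedHours) {
        return observedGb / observedHours * 24;
    }

    static double daysUntilExhausted(double packageGb, double perDayGb) {
        return packageGb / perDayGb;
    }

    public static void main(String[] args) {
        double perDay = perDayGb(80, 10);               // 192 GB/day, i.e. roughly 200 GB
        double days = daysUntilExhausted(2048, perDay); // ~10.7 days of a 30-day month
        System.out.printf("%.0f GB/day, package exhausted in %.1f days%n", perDay, days);
    }
}
```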

Optimization process

A quick look at traffic with iftop confirmed what I already suspected. The machine's outbound traffic has two sources: the front frontend served through nginx (which only I look at), and MySQL, which several workers hit with multi-threaded reads and writes every second; that transfer is where the bulk of the traffic goes.
As for resources, CPU utilization sits around 50% and memory around 40%, so there is still headroom for optimization.

  • The jeecgboot frontend is badly bloated, so I turned on nginx gzip for everything compressible. After the change, monitoring showed a negligible improvement.
  • MySQL is the big one. Most of the tuning material I dug up covers the persistence side, such as table compression, so I took a different angle: since the cost is incurred in transit, it is most likely a JDBC driver concern, and such a common scenario ought to be supported.
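For reference, the nginx side looks roughly like this; the directive values below are typical settings, not the exact site config:

```nginx
# Compress text assets; skip tiny responses where header overhead dominates.
gzip on;
gzip_comp_level 5;       # balance CPU cost against compression ratio
gzip_min_length 1024;    # bytes; smaller responses are sent as-is
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_vary on;            # emit Vary: Accept-Encoding for intermediate caches
```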

Digging through the mysql-connector-j-8.0.33.jar code confirmed it: com.mysql.cj.protocol.a.NativeProtocol has a useCompression field. When enabled, the MySQL transport is compressed, but it is off by default; it can be switched on by appending useCompression=true to the JDBC connection string.
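For example, the flag can be appended to an ordinary JDBC URL alongside other parameters (host, port, and database name below are placeholders):

```java
// Build a JDBC URL with MySQL protocol compression enabled.
// The base URL here is illustrative; useCompression=true is the only change needed.
public class JdbcUrlExample {

    static String withCompression(String baseUrl) {
        // Append with '&' if the URL already has parameters, '?' otherwise.
        return baseUrl + (baseUrl.contains("?") ? "&" : "?") + "useCompression=true";
    }

    public static void main(String[] args) {
        String url = withCompression(
                "jdbc:mysql://db.example.com:3306/carrycoin?useUnicode=true&characterEncoding=utf8");
        System.out.println(url);
    }
}
```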

CompressedPacketSender

With the switch on, com.mysql.cj.protocol.a.CompressedPacketSender is used to send data:

```java
/**
 * Packet sender implementation for the compressed MySQL protocol. For compressed transmission of multi-packets, split the packets up in the same way as the
 * uncompressed protocol. We fit up to MAX_PACKET_SIZE bytes of split uncompressed packet, including the header, into an compressed packet. The first packet
 * of the multi-packet is 4 bytes of header and MAX_PACKET_SIZE - 4 bytes of the payload. The next packet must send the remaining four bytes of the payload
 * followed by a new header and payload. If the second split packet is also around MAX_PACKET_SIZE in length, then only MAX_PACKET_SIZE - 4 (from the
 * previous packet) - 4 (for the new header) can be sent. This means the payload will be limited by 8 bytes and this will continue to increase by 4 at every
 * iteration.
 *
 * @param packet
 *            data bytes
 * @param packetLen
 *            packet length
 * @param packetSequence
 *            sequence id
 * @throws IOException
 *             if i/o exception occurs
 */
public void send(byte[] packet, int packetLen, byte packetSequence) throws IOException {
    this.compressedSequenceId = packetSequence;

    // short-circuit send small packets without compression and return
    if (packetLen < MIN_COMPRESS_LEN) {
        writeCompressedHeader(packetLen + NativeConstants.HEADER_LENGTH, this.compressedSequenceId, 0);
        writeUncompressedHeader(packetLen, packetSequence);
        this.outputStream.write(packet, 0, packetLen);
        this.outputStream.flush();
        return;
    }

    if (packetLen + NativeConstants.HEADER_LENGTH > NativeConstants.MAX_PACKET_SIZE) {
        this.compressedPacket = new byte[NativeConstants.MAX_PACKET_SIZE];
    } else {
        this.compressedPacket = new byte[NativeConstants.HEADER_LENGTH + packetLen];
    }

    PacketSplitter packetSplitter = new PacketSplitter(packetLen);

    int unsentPayloadLen = 0;
    int unsentOffset = 0;
    // loop over constructing and sending compressed packets
    while (true) {
        this.compressedPayloadLen = 0;

        if (packetSplitter.nextPacket()) {
            // rest of previous packet
            if (unsentPayloadLen > 0) {
                addPayload(packet, unsentOffset, unsentPayloadLen);
            }

            // current packet
            int remaining = NativeConstants.MAX_PACKET_SIZE - unsentPayloadLen;
            // if remaining is 0 then we are sending a very huge packet such that the 4-byte header-size carryover from last packet accumulated to the size
            // of a whole packet itself. We don't handle this. Would require 4 million packet segments (64 gigs in one logical packet)
            int len = Math.min(remaining, NativeConstants.HEADER_LENGTH + packetSplitter.getPacketLen());
            int lenNoHdr = len - NativeConstants.HEADER_LENGTH;
            addUncompressedHeader(packetSequence, packetSplitter.getPacketLen());
            addPayload(packet, packetSplitter.getOffset(), lenNoHdr);

            completeCompression();
            // don't send payloads with incompressible data
            if (this.compressedPayloadLen >= len) {
                // combine the unsent and current packet in an uncompressed packet
                writeCompressedHeader(unsentPayloadLen + len, this.compressedSequenceId++, 0);
                this.outputStream.write(packet, unsentOffset, unsentPayloadLen);
                writeUncompressedHeader(lenNoHdr, packetSequence);
                this.outputStream.write(packet, packetSplitter.getOffset(), lenNoHdr);
            } else {
                sendCompressedPacket(len + unsentPayloadLen);
            }

            packetSequence++;
            unsentPayloadLen = packetSplitter.getPacketLen() - lenNoHdr;
            unsentOffset = packetSplitter.getOffset() + lenNoHdr;
            resetPacket();
        } else if (unsentPayloadLen > 0) {
            // no more packets, send remaining unsent data
            addPayload(packet, unsentOffset, unsentPayloadLen);
            completeCompression();
            if (this.compressedPayloadLen >= unsentPayloadLen) {
                writeCompressedHeader(unsentPayloadLen, this.compressedSequenceId, 0);
                this.outputStream.write(packet, unsentOffset, unsentPayloadLen);
            } else {
                sendCompressedPacket(unsentPayloadLen);
            }
            resetPacket();
            break;
        } else {
            // nothing left to send (only happens on boundaries)
            break;
        }
    }

    this.outputStream.flush();

    // release reference to (possibly large) compressed packet buffer
    this.compressedPacket = null;
}
```

The overall idea is to send packets efficiently whether they are small or large, while shrinking the transmitted data through compression. Splitting, compression, and careful sequence management keep the data intact and in order.
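The "don't send payloads with incompressible data" branch above can be illustrated with plain java.util.zip. This is a standalone sketch of that decision, not the connector's actual code path:

```java
import java.util.zip.Deflater;

// Mimic CompressedPacketSender's choice: only send the deflated bytes when they
// come out smaller than the original payload; otherwise the connector sends the
// data uncompressed inside the compressed-protocol framing.
public class CompressDecision {

    static byte[] deflate(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[input.length + 64]; // deflate can expand incompressible input slightly
        int n = deflater.deflate(buf);
        deflater.end();
        byte[] out = new byte[n];
        System.arraycopy(buf, 0, out, 0, n);
        return out;
    }

    /** Returns true when sending the compressed form is actually worthwhile. */
    static boolean worthCompressing(byte[] payload) {
        return deflate(payload).length < payload.length;
    }

    public static void main(String[] args) {
        byte[] repetitive = new byte[4096];          // all zeros: compresses extremely well
        byte[] random = new byte[4096];
        new java.util.Random(42).nextBytes(random);  // random bytes: barely compressible
        System.out.println(worthCompressing(repetitive)); // true
        System.out.println(worthCompressing(random));
    }
}
```

Typical row data is closer to the repetitive case than the random one, which is why the traffic drop from useCompression=true is so large in practice.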


With the change in place, the result exceeded expectations: traffic consumption dropped by nearly 50%, at the cost of roughly 10% more CPU. A good trade.

I'll let it run for a month and revisit.

Author: Gavin
Posted on 2024-09-02 · Updated on 2024-10-13
