Ethereum Source Code Analysis: the fetcher Module and Block Propagation
刘艳琴
Published 2022-12-21 01:53:48
This analysis is based on Ethereum Release 1.8; other versions may differ in the code.
Overall Process and Propagation Strategy
This section takes a macro view: after a node produces a block, what does it do to propagate it to remote nodes? What does a remote node do when it receives the block? And since every node is connected to many peers, what is the propagation strategy?
In short: the block is propagated to remote peers; each peer verifies it, appends it to its local chain, and then propagates the new block's announcement further. The details follow.
First, the overall flow. After a block is mined, the miner module publishes a NewMinedBlockEvent. The goroutine subscribed to this event broadcasts the new block to all of the node's peers. A peer receiving the message hands it to its own fetcher module, which performs basic validation; if the block checks out and is exactly the next block the local chain needs, it is handed to blockChain for full validation. That step executes all of the block's transactions and, if everything is correct, appends the block to the local chain and writes it to the database. This is the flow shown in figure 1.
The fork visible in the overall flow chart exists because nodes propagate new blocks according to a strategy:
If a node is connected to N peers, it broadcasts the complete block to only sqrt(N) of them, and broadcasts a hash-only announcement to all N peers.
Figure 2 shows the effect of this strategy: the red node propagates the block to the yellow nodes.
A node that receives only the block hash must fetch the complete block from the peer that sent it the announcement. Once it has the block, it follows the flow of figure 1: the block enters the fetcher queue and is eventually inserted into the local chain, after which the node broadcasts the block's hash to those connected peers that do not yet know about the block. Figure 3 shows this for a non-producing node: the yellow nodes propagate the block hash to the cyan nodes.
In other words, Ethereum spreads a newly produced block like ripples from a stone dropped into water, expanding outward layer by layer.
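The sqrt(N) split can be sketched in a few lines of Go (the helper and the peer names are illustrative, not geth's API; geth computes the subset inline in BroadcastBlock):

```go
package main

import (
	"fmt"
	"math"
)

// propagationSplit mirrors the sqrt(N) rule: the first sqrt(N) peers get the
// full block, while every peer gets the hash-only announcement.
func propagationSplit(peers []string) (full []string, announce []string) {
	n := int(math.Sqrt(float64(len(peers))))
	return peers[:n], peers
}

func main() {
	peers := []string{"p1", "p2", "p3", "p4", "p5", "p6", "p7", "p8", "p9"}
	full, ann := propagationSplit(peers)
	fmt.Println(len(full), len(ann)) // prints: 3 9
}
```

With 9 peers, 3 receive the full block while all 9 receive the announcement.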
What the Fetcher Module Does
The fetcher module collects block information announced to it by other peers: 1) complete blocks and 2) block-hash announcements. From these it obtains the complete block and passes it to the eth module to be inserted into the blockchain.
For a complete block it can hand the block to eth directly; for a hash-only announcement it must first fetch the complete block from a peer, then hand it to eth for insertion.
Source Code Walkthrough
This section covers the details of block propagation and handling, again explaining each flow with a diagram first and the code second.
How the Producing Node Propagates a New Block
The broadcast flow after a node produces a block is shown in figure 4:
The event is published; the event handler selects the peers that will receive the complete block and adds the block to their queues; the handler also adds the block hash to every peer's announcement queue; each peer's broadcast loop then drains its pending-block queue and announcement queue, wraps the data into messages, and sends them through the p2p layer.
Now the code-level details.
worker.wait() publishes the NewMinedBlockEvent. ProtocolManager.minedBroadcastLoop() is the event handler; it calls pm.BroadcastBlock() twice.
```go
// Mined broadcast loop
func (pm *ProtocolManager) minedBroadcastLoop() {
	// automatically stops if unsubscribe
	for obj := range pm.minedBlockSub.Chan() {
		switch ev := obj.Data.(type) {
		case core.NewMinedBlockEvent:
			pm.BroadcastBlock(ev.Block, true)  // First propagate block to peers
			pm.BroadcastBlock(ev.Block, false) // Only then announce to the rest
		}
	}
}
```
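The publish/subscribe plumbing behind this loop can be reduced to a minimal channel sketch (Block and minedBlockCh are illustrative stand-ins, not geth's event.TypeMux types):

```go
package main

import "fmt"

// Block is a minimal stand-in for the mined-block event payload.
type Block struct{ Number uint64 }

func main() {
	// miner side: publish the mined-block event (worker.wait() in geth)
	minedBlockCh := make(chan Block, 1)
	minedBlockCh <- Block{Number: 100}
	close(minedBlockCh)

	// handler side: the minedBroadcastLoop equivalent consumes events
	for b := range minedBlockCh {
		fmt.Println("broadcast block", b.Number) // prints: broadcast block 100
	}
}
```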
When BroadcastBlock()'s propagate parameter is true, it broadcasts the complete block to a subset of peers via peer.AsyncSendNewBlock(); otherwise it broadcasts only the block hash to all peers via peer.AsyncSendNewBlockHash(). Both functions simply put the data into a queue, so their code is omitted here.
```go
// BroadcastBlock will either propagate a block to a subset of its peers, or
// will only announce its availability (depending what's requested).
func (pm *ProtocolManager) BroadcastBlock(block *types.Block, propagate bool) {
	hash := block.Hash()
	peers := pm.peers.PeersWithoutBlock(hash)

	// If propagation is requested, send the full block to a subset of peers
	if propagate {
		// Calculate the TD of the block (it's not imported yet, so block.Td is not valid)
		var td *big.Int
		if parent := pm.blockchain.GetBlock(block.ParentHash(), block.NumberU64()-1); parent != nil {
			td = new(big.Int).Add(block.Difficulty(), pm.blockchain.GetTd(block.ParentHash(), block.NumberU64()-1))
		} else {
			log.Error("Propagating dangling block", "number", block.Number(), "hash", hash)
			return
		}
		// Send the block to a subset (sqrt) of our peers
		transfer := peers[:int(math.Sqrt(float64(len(peers))))]
		for _, peer := range transfer {
			peer.AsyncSendNewBlock(block, td)
		}
		log.Trace("Propagated block", "hash", hash, "recipients", len(transfer), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
		return
	}
	// Otherwise, if the block is indeed in our own chain, announce its hash to all peers
	if pm.blockchain.HasBlock(hash, block.NumberU64()) {
		for _, peer := range peers {
			peer.AsyncSendNewBlockHash(block)
		}
		log.Trace("Announced block", "hash", hash, "recipients", len(peers), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
	}
}
```
peer.broadcast() is the per-peer broadcast loop. It only ever broadcasts three kinds of messages: transactions, complete blocks, and block hashes. This shows that a node only actively pushes these three data types; everything else is synchronized via request/response.
```go
// broadcast is a write loop that multiplexes block propagations, announcements
// and transaction broadcasts into the remote peer. The goal is to have an async
// writer that does not lock up node internals.
func (p *peer) broadcast() {
	for {
		select {
		case txs := <-p.queuedTxs: // broadcast transactions
			if err := p.SendTransactions(txs); err != nil {
				return
			}
		case prop := <-p.queuedProps: // broadcast a complete block
			if err := p.SendNewBlock(prop.block, prop.td); err != nil {
				return
			}
		case block := <-p.queuedAnns: // announce a block hash
			if err := p.SendNewBlockHashes([]common.Hash{block.Hash()}, []uint64{block.NumberU64()}); err != nil {
				return
			}
		case <-p.term:
			return
		}
	}
}
```
How Peer Nodes Handle a New Block
This section covers how a remote node handles the two block-sync messages. The NewBlockMsg flow is clear and concise; the NewBlockHashesMsg flow takes a couple of detours, because, as the overall flow chart (figure 1) shows, the node must first fetch the complete block from the peer that sent the announcement, after which the remainder matches the NewBlockMsg flow.
Several modules are involved here, which makes the diagram look busy, but the code is quite clear if you keep the main thread above in mind. Figure 5 shows the overall flow.
Message handling starts in ProtocolManager.handleMsg. The NewBlockMsg flow is the blue region. The red region is a separate goroutine in which the fetcher processes the blocks in its queue; if a dequeued block is the one the chain currently needs, it is validated and then inserted via blockchain.InsertChain() and finally written to the database, which is the yellow region. The green region is the NewBlockHashesMsg flow, which is fairly convoluted in code; it is simplified here so the overall flow fits into one diagram.
Study this figure until the overall flow is clear, then move on to the details of each step.
Handling NewBlockMsg
This is what a node does when it receives a complete block:
First, the message is RLP-decoded, and the sending peer is marked as already knowing this block, so that when this node later broadcasts the block's hash, it skips that peer.
The block is stored into the fetcher's queue via fetcher.Enqueue.
The peer's head position is updated; if the local chain is behind the peer's chain, a sync with that peer is scheduled.
Only the NewBlockMsg part of handleMsg() is shown.
```go
case msg.Code == NewBlockMsg:
	// Retrieve and decode the propagated block
	var request newBlockData
	if err := msg.Decode(&request); err != nil {
		return errResp(ErrDecode, "%v: %v", msg, err)
	}
	request.Block.ReceivedAt = msg.ReceivedAt
	request.Block.ReceivedFrom = p

	// Mark the peer as owning the block and schedule it for import
	p.MarkBlock(request.Block.Hash())
	// Why enqueue when we already have the complete block?
	// Because the fetcher's priority queue is where blocks wait until they are
	// exactly the next block the local chain needs; the fetcher's loop will then
	// use this block directly instead of requesting it from the peer, and after
	// validation hand it to the blockchain's InsertChain.
	pm.fetcher.Enqueue(p.id, request.Block)

	// Assuming the block is importable by the peer, but possibly not yet done so,
	// calculate the head hash and TD that the peer truly must have:
	// the head and difficulty up to the parent block.
	var (
		trueHead = request.Block.ParentHash()
		trueTD   = new(big.Int).Sub(request.TD, request.Block.Difficulty())
	)
	// Update the peer's total difficulty if better than the previous; if it is
	// also above our own, synchronise with this peer.
	if _, td := p.Head(); trueTD.Cmp(td) > 0 {
		p.SetHead(trueHead, trueTD)

		// Schedule a sync if above ours. Note, this will not fire a sync for a gap of
		// a single block (as the true TD is below the propagated block), however this
		// scenario should easily be covered by the fetcher.
		currentBlock := pm.blockchain.CurrentBlock()
		if trueTD.Cmp(pm.blockchain.GetTd(currentBlock.Hash(), currentBlock.NumberU64())) > 0 {
			go pm.synchronise(p)
		}
	}
```
The code above is part of handleMsg(). Enqueue() runs in the handleMsg goroutine and merely pushes the block into the inject channel; the fetcher goroutine reads from that channel and calls the internal enqueue():

```go
// Enqueue tries to fill gaps in the fetcher's future import queue.
func (f *Fetcher) Enqueue(peer string, block *types.Block) error {
	op := &inject{
		origin: peer,
		block:  block,
	}
	select {
	case f.inject <- op:
		return nil
	case <-f.quit:
		return errTerminated
	}
}

// enqueue schedules a new future import operation, if the block to be imported
// has not yet been seen (runs in the fetcher goroutine).
func (f *Fetcher) enqueue(peer string, block *types.Block) {
	hash := block.Hash()

	// Ensure the peer isn't DOSing us
	count := f.queues[peer] + 1
	if count > blockLimit {
		log.Debug("Discarded propagated block, exceeded allowance", "peer", peer, "number", block.Number(), "hash", hash, "limit", blockLimit)
		propBroadcastDOSMeter.Mark(1)
		f.forgetHash(hash)
		return
	}
	// Discard any past or too distant (far-future) blocks
	if dist := int64(block.NumberU64()) - int64(f.chainHeight()); dist < -maxUncleDist || dist > maxQueueDist {
		log.Debug("Discarded propagated block, too far away", "peer", peer, "number", block.Number(), "hash", hash, "distance", dist)
		propBroadcastDropMeter.Mark(1)
		f.forgetHash(hash)
		return
	}
	// Schedule the block for future importing: it first joins the priority
	// queue, and there is still plenty to do before it joins the chain
	if _, ok := f.queued[hash]; !ok {
		op := &inject{
			origin: peer,
			block:  block,
		}
		f.queues[peer] = count
		f.queued[hash] = op
		f.queue.Push(op, -float32(block.NumberU64()))
		if f.queueChangeHook != nil {
			f.queueChangeHook(op.block.Hash(), true)
		}
		log.Debug("Queued propagated block", "peer", peer, "number", block.Number(), "hash", hash, "queued", f.queue.Size())
	}
}
```
Fetcher Queue Processing
With the block in the queue, how does the fetcher process it? Why not validate the block and insert it into the local chain directly?
Because Ethereum has the uncle mechanism, a node may receive somewhat older blocks; and because of network delays a node may fall a few blocks behind and thus receive "future" blocks. Neither kind can be inserted into the local chain directly.
The queue the blocks enter is a priority queue from which lower-height blocks are popped first. fetcher.loop runs in its own goroutine, continuously servicing the fetcher's work items and events. It first cleans up fetches that are in flight but have timed out, then processes the blocks in the priority queue: if a block's height is exactly the next one the chain needs, f.insert() is called, which validates the block and then calls BlockChain.InsertChain(); after a successful insert, the new block's hash is broadcast.
```go
// Loop is the main fetcher loop, checking and processing various notification
// events.
func (f *Fetcher) loop() {
	// Iterate the block fetching until a quit is requested
	fetchTimer := time.NewTimer(0)
	completeTimer := time.NewTimer(0)

	for {
		// Clean up any expired block fetches
		for hash, announce := range f.fetching {
			if time.Since(announce.time) > fetchTimeout {
				f.forgetHash(hash)
			}
		}
		// Import any queued blocks that could potentially fit
		height := f.chainHeight()
		for !f.queue.Empty() {
			op := f.queue.PopItem().(*inject)
			hash := op.block.Hash()
			if f.queueChangeHook != nil {
				f.queueChangeHook(hash, false)
			}
			// If too high up the chain or phase, continue later: the block is
			// not the chain's next block, so push it back and stop the loop
			number := op.block.NumberU64()
			if number > height+1 {
				f.queue.Push(op, -float32(number))
				if f.queueChangeHook != nil {
					f.queueChangeHook(hash, true)
				}
				break
			}
			// Otherwise if fresh and still unknown, try and import
			if number+maxUncleDist < height || f.getBlock(hash) != nil {
				f.forgetBlock(hash)
				continue
			}
			f.insert(op.origin, op.block)
		}
		// ... the select over notify/inject/timers/filters follows ...
	}
}
```
insert() runs the import on a fresh goroutine: it verifies the header (propagating the complete block if verification passes), calls insertChain, and on success announces the block's hash:

```go
func (f *Fetcher) insert(peer string, block *types.Block) {
	hash := block.Hash()

	// Run the import on a new thread
	log.Debug("Importing propagated block", "peer", peer, "number", block.Number(), "hash", hash)
	go func() {
		defer func() { f.done <- hash }()

		// If the parent's unknown, abort insertion
		if f.getBlock(block.ParentHash()) == nil {
			log.Debug("Unknown parent of propagated block", "peer", peer, "number", block.Number(), "hash", hash)
			return
		}
		// Quickly validate the header and propagate the block if it passes
		switch err := f.verifyHeader(block.Header()); err {
		case nil:
			go f.broadcastBlock(block, true)
		case consensus.ErrFutureBlock:
			// Weird future block, don't fail, but neither propagate
		default:
			// Something went very wrong, drop the peer
			f.dropPeer(peer)
			return
		}
		// Run the actual import and log any issues
		if _, err := f.insertChain(types.Blocks{block}); err != nil {
			log.Debug("Propagated block import failed", "peer", peer, "number", block.Number(), "hash", hash, "err", err)
			return
		}
		// If import succeeded, announce the block's hash to our peers
		go f.broadcastBlock(block, false)
	}()
}
```
Handling NewBlockHashesMsg
This section covers NewBlockHashesMsg. The message handling itself is simple; the more involved part, fetching the complete block from the peer, comes in the next section.
The flow:
1. The message is RLP-decoded, and the peer is marked as already knowing the announced blocks.
2. The hashes not present in the local blockchain are picked out, and these unknown hashes are handed to the fetcher.
3. fetcher.Notify records the announcement and pushes it into the notify channel, handing it to the fetcher goroutine.
4. fetcher.loop() processes the notification: it confirms the announcement is not a DoS attempt, checks the block height, and checks whether the block is already fetching or completing (header downloaded, body in flight). If neither, the announcement joins announced and the 0s timer is triggered for processing.
announced is covered in the next section.
```go
// handleMsg()
case msg.Code == NewBlockHashesMsg:
	var announces newBlockHashesData
	if err := msg.Decode(&announces); err != nil {
		return errResp(ErrDecode, "%v: %v", msg, err)
	}
	// Mark the hashes as present at the remote node
	for _, block := range announces {
		p.MarkBlock(block.Hash)
	}
	// Schedule all the unknown hashes for retrieval:
	// pick out the hashes the local chain lacks and let the fetcher get them
	unknown := make(newBlockHashesData, 0, len(announces))
	for _, block := range announces {
		if !pm.blockchain.HasBlock(block.Hash, block.Number) {
			unknown = append(unknown, block)
		}
	}
	for _, block := range unknown {
		pm.fetcher.Notify(p.id, block.Hash, block.Number, time.Now(), p.RequestOneHeader, p.RequestBodies)
	}
```
Notify() packages the announcement (hash, height etc., but no block body) and pushes it into the notify channel; the fetcher goroutine then handles it in loop():

```go
// Notify announces the fetcher of the potential availability of a new block in
// the network.
func (f *Fetcher) Notify(peer string, hash common.Hash, number uint64, time time.Time,
	headerFetcher headerRequesterFn, bodyFetcher bodyRequesterFn) error {
	block := &announce{
		hash:        hash,
		number:      number,
		time:        time,
		origin:      peer,
		fetchHeader: headerFetcher,
		fetchBodies: bodyFetcher,
	}
	select {
	case f.notify <- block:
		return nil
	case <-f.quit:
		return errTerminated
	}
}
```

```go
// fetcher.loop()
case notification := <-f.notify:
	// A block was announced, make sure the peer isn't DOSing us
	count := f.announces[notification.origin] + 1
	if count > hashLimit {
		log.Debug("Peer exceeded outstanding announces", "peer", notification.origin, "limit", hashLimit)
		propAnnounceDOSMeter.Mark(1)
		break
	}
	// If we have a valid block number, check that it's potentially useful
	if notification.number > 0 {
		if dist := int64(notification.number) - int64(f.chainHeight()); dist < -maxUncleDist || dist > maxQueueDist {
			log.Debug("Peer discarded announcement", "peer", notification.origin, "number", notification.number, "hash", notification.hash, "distance", dist)
			propAnnounceDropMeter.Mark(1)
			break
		}
	}
	// All is well, schedule the announce if the block's not yet downloading
	if _, ok := f.fetching[notification.hash]; ok {
		break
	}
	if _, ok := f.completing[notification.hash]; ok {
		break
	}
	// Record how many blocks this peer has announced to us
	f.announces[notification.origin] = count
	// Add the notification to announced, ready for scheduling
	f.announced[notification.hash] = append(f.announced[notification.hash], notification)
	if f.announceChangeHook != nil && len(f.announced[notification.hash]) == 1 {
		f.announceChangeHook(notification.hash, true)
	}
	if len(f.announced) == 1 {
		// A notification joined announced: reset the fetch timer to 0s so the
		// loop's other branch processes it immediately
		f.rescheduleFetch(fetchTimer)
	}
```
How the Fetcher Retrieves Complete Blocks
This section covers how the fetcher obtains complete blocks, which is its most important function and touches at least 80% of the fetcher's code, so it gets a large section of its own.
The Core of the Fetcher
The fetcher's main job is to obtain the complete block and, at the right moment, hand it to InsertChain to be validated and inserted into the local blockchain. Again we start from the macro view, and it pays to master it first, because the code itself is nowhere near as clear.
Macro View
First, how two nodes interact to transfer a complete block, shown as a sequence diagram in figure 6; the flow is clear enough to need no further prose.
Next, the fetcher's internal state transitions while obtaining a block; it uses a state to record which stage each wanted block is in, see figure 7. Briefly:
1. On NewBlockHashesMsg, the relevant information is recorded in announced, entering the announced state, meaning this node has accepted the message.
2. The fetcher goroutine processes announced entries; after validation it sends a request to the announcing peer for the block's header, and the entry enters the fetching state.
3. When the header arrives and indicates the block has no transactions and no uncles, the entry moves to the completing state, the header alone is assembled into a complete block, and the block joins the queued priority queue.
4. When the header indicates the block does have transactions or uncles, the entry moves to the fetched state, a request for the transactions and uncles is sent, and the entry then moves to the completing state.
5. When the transactions and uncles arrive, the header, transactions, and uncles are assembled into a complete block, which joins the queued queue.
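The transitions above can be written down as a small table (state names follow the source; the valid() helper is just an illustrative checker, not geth code):

```go
package main

import "fmt"

// transitions lists the fetcher's per-block stages and their legal successors.
// An empty block (no txs, no uncles) skips "fetched" and moves straight from
// "fetching" to "completing".
var transitions = map[string][]string{
	"announced":  {"fetching"},
	"fetching":   {"fetched", "completing"}, // "completing" = empty-block shortcut
	"fetched":    {"completing"},
	"completing": {"queued"},
}

func valid(from, to string) bool {
	for _, next := range transitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(valid("fetching", "completing")) // true (header-only block)
	fmt.Println(valid("announced", "queued"))    // false (must pass through fetching)
}
```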
Micro View
Now the code-level flow for obtaining a complete block. There is a lot of it; whenever you get lost, refer back to the macro diagrams above.
Start with the Fetcher definition, which holds the communication channels and the state tracking. Read the annotated fields: every state mentioned above appears here.
```go
// Fetcher is responsible for accumulating block announcements from various peers
// and scheduling them for retrieval.
type Fetcher struct {
	// Various event channels
	notify chan *announce // Channel receiving block-hash announcements
	inject chan *inject   // Channel receiving complete blocks

	blockFilter  chan chan []*types.Block
	headerFilter chan chan *headerFilterTask // Channel of channels for filtering headers
	bodyFilter   chan chan *bodyFilterTask   // Channel of channels for filtering bodies

	done chan common.Hash
	quit chan struct{}

	// Announce states
	announces  map[string]int              // Per peer announce counts to prevent memory exhaustion
	announced  map[common.Hash][]*announce // Announced blocks, scheduled for fetching
	fetching   map[common.Hash]*announce   // Announced blocks, currently fetching (header request in flight)
	fetched    map[common.Hash][]*announce // Blocks with headers fetched, scheduled for body retrieval
	completing map[common.Hash]*announce   // Blocks with headers, currently body-completing

	// Block cache
	queue  *prque.Prque            // Queue containing the import operations (block number sorted)
	queues map[string]int          // Per peer block counts to prevent memory exhaustion
	queued map[common.Hash]*inject // Set of already queued blocks (to dedupe imports)

	// Callbacks
	getBlock       blockRetrievalFn   // Retrieves a block from the local chain
	verifyHeader   headerVerifierFn   // Checks if a block's headers have a valid proof of work
	broadcastBlock blockBroadcasterFn // Broadcasts a block to connected peers
	chainHeight    chainHeightFn      // Retrieves the current chain's height
	insertChain    chainInsertFn      // Injects a batch of blocks into the chain
	dropPeer       peerDropFn         // Drops a peer for misbehaving

	// Testing hooks
	announceChangeHook func(common.Hash, bool) // Method to call upon adding or deleting a hash from the announce list
	queueChangeHook    func(common.Hash, bool) // Method to call upon adding or deleting a block from the import queue
	fetchingHook       func([]common.Hash)     // Method to call upon starting a block (eth/61) or header (eth/62) fetch
	completingHook     func([]common.Hash)     // Method to call upon starting a block body fetch (eth/62)
	importedHook       func(*types.Block)      // Method to call upon successful block import (both eth/61 and eth/62)
}
```
NewBlockHashesMsg handling was covered in an earlier section; flip back if needed. Here we pick up at the announced state. In loop(), when fetchTimer fires, there are pending announcements to process. The loop selects the due entries from announced and builds requests for the block headers. Since many peers may have announced the same block hash, one of the announcing peers is chosen at random, and a separate goroutine is created per peer to send the requests.
```go
// fetcher.loop()
case <-fetchTimer.C:
	// At least one block's timer ran out, check for needing retrieval
	request := make(map[string][]common.Hash)

	for hash, announces := range f.announced {
		if time.Since(announces[0].time) > arriveTimeout-gatherSlack {
			// Pick a random peer to retrieve from, reset all others
			// (many peers may have announced this block's hash)
			announce := announces[rand.Intn(len(announces))]
			f.forgetHash(hash)

			// If the block still didn't arrive, queue for fetching
			if f.getBlock(hash) == nil {
				request[announce.origin] = append(request[announce.origin], hash)
				f.fetching[hash] = announce
			}
		}
	}
	// Send out all block header requests, one goroutine per peer,
	// each handling every hash wanted from that peer
	for peer, hashes := range request {
		log.Trace("Fetching scheduled headers", "peer", peer, "list", hashes)

		// Create a closure of the fetch and schedule it on a new thread
		fetchHeader, hashes := f.fetching[hashes[0]].fetchHeader, hashes
		go func() {
			if f.fetchingHook != nil {
				f.fetchingHook(hashes)
			}
			for _, hash := range hashes {
				headerFetchMeter.Mark(1)
				fetchHeader(hash) // Suboptimal, but protocol doesn't allow batch header retrievals
			}
		}()
	}
	// Schedule the next fetch if blocks are still pending
	f.rescheduleFetch(fetchTimer)
```
The Notify call shows that the actual fetchHeader function is RequestOneHeader(), which sends a GetBlockHeadersMsg. That message can request multiple headers at once, but the fetcher only ever requests one.

```go
pm.fetcher.Notify(p.id, block.Hash, block.Number, time.Now(), p.RequestOneHeader, p.RequestBodies)

// RequestOneHeader is a wrapper around the header query functions to fetch a
// single header. It is used solely by the fetcher.
func (p *peer) RequestOneHeader(hash common.Hash) error {
	p.Log().Debug("Fetching single header", "hash", hash)
	return p2p.Send(p.rw, GetBlockHeadersMsg, &getBlockHeadersData{Origin: hashOrNumber{Hash: hash}, Amount: uint64(1), Skip: uint64(0), Reverse: false})
}
```
GetBlockHeadersMsg is handled as follows. Because the message can ask for multiple headers, the handler is fairly involved; fortunately, the fetcher only requests one. The loop body that walks to the next header origin is elided below; at the end, SendBlockHeaders() returns the collected headers to the requester in a BlockHeadersMsg.

```go
// handleMsg()
// Block header query, collect the requested headers and reply
case msg.Code == GetBlockHeadersMsg:
	// Decode the complex header query
	var query getBlockHeadersData
	if err := msg.Decode(&query); err != nil {
		return errResp(ErrDecode, "%v: %v", msg, err)
	}
	hashMode := query.Origin.Hash != (common.Hash{})

	// Gather headers until the fetch or network limits is reached
	var (
		bytes   common.StorageSize
		headers []*types.Header
		unknown bool
	)
	// origin known && fewer than requested && under 2MB && under the download cap
	for !unknown && len(headers) < int(query.Amount) && bytes < softResponseLimit && len(headers) < downloader.MaxHeaderFetch {
		// ... look up the current header, append it, then advance the origin
		// according to hashMode / Skip / Reverse (elided) ...
	}
	return p.SendBlockHeaders(headers)
```
The handling of `BlockHeadersMsg` is interesting, because `GetBlockHeadersMsg` is not exclusive to the fetcher: the downloader issues it too. The response handler therefore has to work out whether the fetcher or the downloader asked for these headers. Its logic: the fetcher filters the received headers first, and whatever it does not claim belongs to the downloader; when `fetcher.FilterHeaders` is called, the fetcher takes the headers it wants.
```go
// handleMsg()
case msg.Code == BlockHeadersMsg:
	// A batch of headers arrived to one of our previous requests
	var headers []*types.Header
	if err := msg.Decode(&headers); err != nil {
		return errResp(ErrDecode, "msg %v: %v", msg, err)
	}
	// If no headers were received, but we're expecting a DAO fork check, maybe it's that
	if len(headers) == 0 && p.forkDrop != nil {
		// Possibly an empty reply to the fork header checks, sanity check TDs
		verifyDAO := true

		// If we already have a DAO header, we can check the peer's TD against it. If
		// the peer's ahead of this, it too must have a reply to the DAO check
		if daoHeader := pm.blockchain.GetHeaderByNumber(pm.chainconfig.DAOForkBlock.Uint64()); daoHeader != nil {
			if _, td := p.Head(); td.Cmp(pm.blockchain.GetTd(daoHeader.Hash(), daoHeader.Number.Uint64())) >= 0 {
				verifyDAO = false
			}
		}
		// If we're seemingly on the same chain, disable the drop timer
		if verifyDAO {
			p.Log().Debug("Seems to be on the same side of the DAO fork")
			p.forkDrop.Stop()
			p.forkDrop = nil
			return nil
		}
	}
	// Filter out any explicitly requested headers, deliver the rest to the downloader
	filter := len(headers) == 1
	if filter {
		// If it's a potential DAO fork check, validate against the rules
		if p.forkDrop != nil && pm.chainconfig.DAOForkBlock.Cmp(headers[0].Number) == 0 {
			// Disable the fork drop timer
			p.forkDrop.Stop()
			p.forkDrop = nil

			// Validate the header and either drop the peer or continue
			if err := misc.VerifyDAOHeaderExtraData(pm.chainconfig, headers[0]); err != nil {
				p.Log().Debug("Verified to be on the other side of the DAO fork, dropping")
				return err
			}
			p.Log().Debug("Verified to be on the same side of the DAO fork")
			return nil
		}
		// Irrelevant of the fork checks, send the header to the fetcher just in case;
		// the fetcher keeps what it requested
		headers = pm.fetcher.FilterHeaders(p.id, headers, time.Now())
	}
	// Whatever remains goes to the downloader
	if len(headers) > 0 || !filter {
		err := pm.downloader.DeliverHeaders(p.id, headers)
		if err != nil {
			log.Debug("Failed to deliver headers", "err", err)
		}
	}
```
`FilterHeaders()` is a subtle and rather elegant function. It must pass all the headers to the fetcher goroutine and get the filtered result back. `fetcher.headerFilter` is a channel that carries channels, while `filter` is a channel that carries a header-filter task. FilterHeaders first sends `filter` into `headerFilter`, so the fetcher goroutine is now waiting at the other end of `filter`; it then sends a `headerFilterTask` into `filter`, which the fetcher reads, processes, and answers by writing the result back into `filter`, where `FilterHeaders`, running in the `handleMsg()` goroutine, picks it up.
Each peer gets its own ProtocolManager message loop, but the fetcher has a single event-processing goroutine. Without a per-call `filter` channel, how would the fetcher know which caller sent it the headers, and how would it return the filtered result to that caller?
```go
// FilterHeaders extracts all the headers that were explicitly requested by the fetcher,
// returning those that should be handled differently.
func (f *Fetcher) FilterHeaders(peer string, headers []*types.Header, time time.Time) []*types.Header {
	log.Trace("Filtering headers", "peer", peer, "headers", len(headers))

	// Send the filter channel to the fetcher
	filter := make(chan *headerFilterTask)
	select {
	case f.headerFilter <- filter:
	case <-f.quit:
		return nil
	}
	// Request the filtering of the header list
	select {
	case filter <- &headerFilterTask{peer: peer, headers: headers, time: time}:
	case <-f.quit:
		return nil
	}
	// Retrieve the headers remaining after filtering
	select {
	case task := <-filter:
		return task.headers
	case <-f.quit:
		return nil
	}
}
```
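The channel-of-channels handshake is worth isolating. Below is a minimal, self-contained reproduction of the pattern, with ints standing in for headers; the even/odd split is an arbitrary stand-in for "requested by the fetcher":

```go
package main

import "fmt"

// workerLoop plays the fetcher goroutine: it receives a private reply channel,
// reads the task on it, keeps the "requested" items, and sends the rest back.
func workerLoop(headerFilter chan chan []int, quit chan struct{}) {
	for {
		select {
		case filter := <-headerFilter:
			task := <-filter // receive the task on the private channel
			kept := []int{}
			for _, h := range task {
				if h%2 == 0 { // pretend even numbers are the worker's own headers
					continue
				}
				kept = append(kept, h) // odd ones go back to the caller
			}
			filter <- kept // reply on the same channel
		case <-quit:
			return
		}
	}
}

func main() {
	headerFilter := make(chan chan []int)
	quit := make(chan struct{})
	go workerLoop(headerFilter, quit)

	filter := make(chan []int)
	headerFilter <- filter         // 1) hand the private channel to the loop
	filter <- []int{1, 2, 3, 4, 5} // 2) send the task
	fmt.Println(<-filter)          // 3) read back what the loop didn't keep: [1 3 5]
	close(quit)
}
```

The private channel answers both questions from the text: it identifies the caller and provides the route for the reply.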
Next, the handling behind f.headerFilter, about 90 lines of code that do the following:
1. Take `filter` out of `f.headerFilter`, then take the filter task `task` out of `filter`.
2. Split the headers into three classes: `unknown`, the ones that are not the fetcher's and are returned to the caller, i.e. `handleMsg()`; `incomplete`, headers whose bodies still need to be fetched; and `complete`, blocks that consist of a header only. Every header is examined and placed into the right class (the condition is spelled out in the code's comments); keep the macro state diagram in mind.
3. Return the `unknown` headers to `handleMsg()`.
4. Move the `incomplete` headers to the `fetched` state and trigger the timer so their bodies get requested.
5. Add the `complete` blocks to `queued`.
```go
// fetcher.loop()
case filter := <-f.headerFilter:
	// Headers arrived from a remote peer; take the task off the private channel
	var task *headerFilterTask
	select {
	case task = <-filter:
	case <-f.quit:
		return
	}
	// Split the batch of headers into unknown ones (to return to the caller),
	// known incomplete ones (requiring body retrievals) and completed blocks
	unknown, incomplete, complete := []*types.Header{}, []*announce{}, []*types.Block{}
	for _, header := range task.headers {
		hash := header.Hash()

		// Filter fetcher-requested headers from other synchronisation algorithms:
		// the hash must be fetching, from the requested peer, and not yet
		// fetched, completing or queued
		if announce := f.fetching[hash]; announce != nil && announce.origin == task.peer && f.fetched[hash] == nil && f.completing[hash] == nil && f.queued[hash] == nil {
			// If the delivered header does not match the promised number, drop the announcer
			if header.Number.Uint64() != announce.number {
				log.Trace("Invalid block number fetched", "peer", announce.origin, "hash", header.Hash(), "announced", announce.number, "provided", header.Number)
				f.dropPeer(announce.origin)
				f.forgetHash(hash)
				continue
			}
			// Only keep if not imported by other means
			if f.getBlock(hash) == nil {
				announce.header = header
				announce.time = task.time

				// If the block is empty (header only), short circuit into the final import queue
				if header.TxHash == types.DeriveSha(types.Transactions{}) && header.UncleHash == types.CalcUncleHash([]*types.Header{}) {
					log.Trace("Block empty, skipping body retrieval", "peer", announce.origin, "number", header.Number, "hash", header.Hash())

					block := types.NewBlockWithHeader(header)
					block.ReceivedAt = task.time
					complete = append(complete, block)
					f.completing[hash] = announce
					continue
				}
				// Otherwise add to the list of blocks needing completion
				incomplete = append(incomplete, announce)
			} else {
				log.Trace("Block already imported, discarding header", "peer", announce.origin, "number", header.Number, "hash", header.Hash())
				f.forgetHash(hash)
			}
		} else {
			// Fetcher doesn't know about it, add to the return list
			unknown = append(unknown, header)
		}
	}
	// Hand the unrequested headers back through the filter channel
	select {
	case filter <- &headerFilterTask{headers: unknown, time: task.time}:
	case <-f.quit:
		return
	}
	// Schedule the retrieved headers for body completion
	for _, announce := range incomplete {
		hash := announce.hash
		if _, ok := f.completing[hash]; ok {
			continue
		}
		f.fetched[hash] = append(f.fetched[hash], announce)
		if len(f.fetched) == 1 {
			f.rescheduleComplete(completeTimer)
		}
	}
	// Schedule the header-only blocks for import
	for _, block := range complete {
		if announce := f.completing[block.Hash()]; announce != nil {
			f.enqueue(announce.origin, block)
		}
	}
```
Following the state diagram, what remains is the move from `fetched` to `completing`. The flow above has already armed the `completeTimer`; when it fires, the handling mirrors the header requests, so no repetition here. The request message this time is `GetBlockBodiesMsg`, and the function actually called is `RequestBodies`.
```go
// fetcher.loop()
case <-completeTimer.C:
	// At least one header's timer ran out, retrieve everything
	request := make(map[string][]common.Hash)

	// Iterate over every announce awaiting its body
	for hash, announces := range f.fetched {
		// Pick a random peer to retrieve from, reset all others
		// (many peers may have announced this block)
		announce := announces[rand.Intn(len(announces))]
		f.forgetHash(hash)

		// If the block still didn't arrive, move it to completing and build the request
		if f.getBlock(hash) == nil {
			request[announce.origin] = append(request[announce.origin], hash)
			f.completing[hash] = announce
		}
	}
	// Send out all block body requests, again one goroutine per peer
	for peer, hashes := range request {
		log.Trace("Fetching scheduled bodies", "peer", peer, "list", hashes)

		// Create a closure of the fetch and schedule it on a new thread
		if f.completingHook != nil {
			f.completingHook(hashes)
		}
		bodyFetchMeter.Mark(int64(len(hashes)))
		go f.completing[hashes[0]].fetchBodies(hashes)
	}
	// Schedule the next fetch if blocks are still pending
	f.rescheduleComplete(completeTimer)
```
`handleMsg()` handles this message cleanly as well: it reads the requested bodies in RLP form and sends the response.
```go
// handleMsg()
case msg.Code == GetBlockBodiesMsg:
	// Decode the retrieval message
	msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
	if _, err := msgStream.List(); err != nil {
		return err
	}
	// Gather blocks until the fetch or network limits is reached
	var (
		hash   common.Hash
		bytes  int
		bodies []rlp.RawValue
	)
	// Iterate over every requested hash
	for bytes < softResponseLimit && len(bodies) < downloader.MaxBlockFetch {
		// Retrieve the hash of the next block
		if err := msgStream.Decode(&hash); err == rlp.EOL {
			break
		} else if err != nil {
			return errResp(ErrDecode, "msg %v: %v", msg, err)
		}
		// Retrieve the requested block body, stopping if enough was found
		if data := pm.blockchain.GetBodyRLP(hash); len(data) != 0 {
			bodies = append(bodies, data)
			bytes += len(data)
		}
	}
	return p.SendBlockBodiesRLP(bodies)
```
The response message `BlockBodiesMsg` is handled on the same principle as the header response: the fetcher filters first, and whatever remains belongs to the downloader. Note that the response contains only the transaction lists and the uncle lists.
```go
// handleMsg()
case msg.Code == BlockBodiesMsg:
	// A batch of block bodies arrived to one of our previous requests
	var request blockBodiesData
	if err := msg.Decode(&request); err != nil {
		return errResp(ErrDecode, "msg %v: %v", msg, err)
	}
	// Deliver them all to the downloader for queuing
	transactions := make([][]*types.Transaction, len(request))
	uncles := make([][]*types.Header, len(request))

	for i, body := range request {
		transactions[i] = body.Transactions
		uncles[i] = body.Uncles
	}
	// Filter out any explicitly requested bodies, deliver the rest to the downloader
	filter := len(transactions) > 0 || len(uncles) > 0
	if filter {
		transactions, uncles = pm.fetcher.FilterBodies(p.id, transactions, uncles, time.Now())
	}
	// Whatever remains goes to the downloader
	if len(transactions) > 0 || len(uncles) > 0 || !filter {
		err := pm.downloader.DeliverBodies(p.id, transactions, uncles)
		if err != nil {
			log.Debug("Failed to deliver bodies", "err", err)
		}
	}
```
The filter function works on the same principle as the header one.
```go
// FilterBodies extracts all the block bodies that were explicitly requested by
// the fetcher, returning those that should be handled differently.
func (f *Fetcher) FilterBodies(peer string, transactions [][]*types.Transaction, uncles [][]*types.Header, time time.Time) ([][]*types.Transaction, [][]*types.Header) {
	log.Trace("Filtering bodies", "peer", peer, "txs", len(transactions), "uncles", len(uncles))

	// Send the filter channel to the fetcher
	filter := make(chan *bodyFilterTask)
	select {
	case f.bodyFilter <- filter:
	case <-f.quit:
		return nil, nil
	}
	// Request the filtering of the body list
	select {
	case filter <- &bodyFilterTask{peer: peer, transactions: transactions, uncles: uncles, time: time}:
	case <-f.quit:
		return nil, nil
	}
	// Retrieve the bodies remaining after filtering
	select {
	case task := <-filter:
		return task.transactions, task.uncles
	case <-f.quit:
		return nil, nil
	}
}
```
The actual body filtering is worth a look, since it differs from the header case. The key points:
1. The blocks the fetcher wants are taken out into `blocks`; everything it does not want stays in `task`.
2. How to tell whether a body was requested by the fetcher: if the hashes computed from the transaction list and the uncle list equal those in a pending header, and the message came from the peer that was asked, the body is the fetcher's.
3. The blocks in `blocks` are added to `queued`, and the journey ends.
```go
// fetcher.loop()
case filter := <-f.bodyFilter:
	// Block bodies arrived; take the task off the private channel
	var task *bodyFilterTask
	select {
	case task = <-filter:
	case <-f.quit:
		return
	}
	blocks := []*types.Block{}
	// For each delivered body, recompute its tx and uncle hashes and match
	// them against the headers currently in completing
	for i := 0; i < len(task.transactions) && i < len(task.uncles); i++ {
		for hash, announce := range f.completing {
			if f.queued[hash] != nil {
				continue
			}
			txnHash := types.DeriveSha(types.Transactions(task.transactions[i]))
			uncleHash := types.CalcUncleHash(task.uncles[i])
			if txnHash == announce.header.TxHash && uncleHash == announce.header.UncleHash && announce.origin == task.peer {
				// This body is ours: assemble the complete block and keep it
				block := types.NewBlockWithHeader(announce.header).WithBody(task.transactions[i], task.uncles[i])
				block.ReceivedAt = task.time
				blocks = append(blocks, block)
				// ... the matched body is also removed from task, so that only
				// unmatched bodies are returned through the filter channel ...
			}
		}
	}
	// Return the unmatched bodies to the caller
	select {
	case filter <- task:
	case <-f.quit:
		return
	}
	// Schedule the assembled blocks for ordered import
	for _, block := range blocks {
		if announce := f.completing[block.Hash()]; announce != nil {
			f.enqueue(announce.origin, block)
		}
	}
```
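The matching rule of point 2, a body belongs to a pending header when hashing its contents reproduces the header's commitments, can be illustrated with plain sha256 (geth really uses types.DeriveSha and types.CalcUncleHash over RLP; the hash function and string transactions here are simplifications):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// bodyHash stands in for DeriveSha: hash the body's contents in order.
func bodyHash(txs []string) [32]byte {
	h := sha256.New()
	for _, tx := range txs {
		h.Write([]byte(tx))
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	// the header commits to its transaction list via TxHash
	header := struct{ TxHash [32]byte }{TxHash: bodyHash([]string{"tx1", "tx2"})}

	delivered := []string{"tx1", "tx2"}
	fmt.Println(bodyHash(delivered) == header.TxHash) // true: body matches header

	tampered := []string{"tx1", "txX"}
	fmt.Println(bodyHash(tampered) == header.TxHash) // false: not the requested body
}
```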
That concludes the fetcher's complete-block retrieval flow, and with it roughly 80% of the fetcher's code has been shown. Two more functions are worth a look:
1. `forgetHash(hash common.Hash)`: clears all recorded state for the given hash.
2. `forgetBlock(hash common.Hash)`: removes a block from the queue.
Finally, go back to the beginning and revisit the fetcher module and the new-block propagation flow; it should all fall into place now.