Ethereum Source Code Analysis: the fetcher Module and Block Propagation
刘艳琴
Published 2022-12-21 01:53:48
The code discussed here is from Ethereum Release 1.8; other versions may differ.
Overall Process and Propagation Strategy
This section takes a macro view: after a node produces a block, what does it do to propagate it to remote nodes? What does a remote node do once it receives the block? And given that every node is connected to many peers, what is the propagation strategy?
In summary: the block is propagated to remote peers; each peer verifies it, appends it to its local chain, and propagates the new-block information onward. The details follow.
First, the overall process. After a block is produced, the miner module publishes a NewMinedBlockEvent. The goroutine subscribed to this event broadcasts the new-block message to all of its peers. Each peer hands the message to its own fetcher module, which performs basic validation; if the block checks out and is exactly the next block the local chain needs, it is passed to blockChain for full validation. That step executes all of the block's transactions and, if everything is correct, appends the block to the local chain and writes it to the database. This is the flow shown in Figure 1.
The overall flow chart shows a fork, because a node propagates a new block according to a strategy:
If a node is connected to N peers, it broadcasts the complete block to only sqrt(N) of those peers, while broadcasting a message containing only the block hash to all of its peers.
The effect of this strategy is shown in Figure 2, where the red node propagates the block to the yellow nodes:
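As a quick sanity check of the numbers, here is a minimal, self-contained sketch of the subset selection (the helper name pickPropagationSubset is illustrative, not geth's):

```go
package main

import (
	"fmt"
	"math"
)

// pickPropagationSubset mimics the strategy above: the complete block goes
// to only the first sqrt(N) peers, while the hash announcement goes to all N.
func pickPropagationSubset(peers []string) []string {
	return peers[:int(math.Sqrt(float64(len(peers))))]
}

func main() {
	peers := make([]string, 16)
	for i := range peers {
		peers[i] = fmt.Sprintf("peer-%d", i)
	}
	// 4 of 16 peers receive the full block; all 16 receive the hash
	fmt.Println(len(pickPropagationSubset(peers)))
}
```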
A node that receives only the block hash must fetch the corresponding complete block from the peer that sent it the message. Once it has the block, it follows the flow of Figure 1: the block enters the fetcher queue and is eventually inserted into the local chain, after which the node broadcasts the block's hash to connected peers that do not yet know about it. The strategy for a non-producing node is shown in Figure 3, where the yellow nodes propagate the block hash to the cyan nodes:
So Ethereum spreads a newly produced block like a stone dropped into water: ripple by ripple, layer by layer.
What the fetcher Module Does
The fetcher module collects the block information other peers announce to it: 1) complete blocks, and 2) block-hash messages. From these notifications it obtains the complete block, then passes it to the eth module to be inserted into the blockchain.
If it already has the complete block, it can hand it to eth for insertion directly; if it only has the block hash, it must first fetch the complete block from another peer, then hand it to eth for insertion.
Source Code Walkthrough
This section covers the details of block propagation and processing, again explaining each flow with a diagram first, then the code.
How the producing node propagates a new block
After a node produces a block, the broadcast flow is shown in Figure 4:
1. An event is published.
2. The event handler selects the peers that will receive the complete block and adds the block to their queues.
3. The event handler adds the block hash to a second, announcement queue of every peer.
4. Each peer's broadcast handler loops over its pending-block queue and announcement queue, packs the data into messages, and sends them via the P2P interface.
Now the code-level details.
worker.wait() publishes the NewMinedBlockEvent. ProtocolManager.minedBroadcastLoop() is the event handler; it calls pm.BroadcastBlock() twice.
// Mined broadcast loop
func (pm *ProtocolManager) minedBroadcastLoop() {
	// automatically stops if unsubscribe
	for obj := range pm.minedBlockSub.Chan() {
		switch ev := obj.Data.(type) {
		case core.NewMinedBlockEvent:
			pm.BroadcastBlock(ev.Block, true)  // First propagate block to peers
			pm.BroadcastBlock(ev.Block, false) // Only then announce to the rest
		}
	}
}
When pm.BroadcastBlock() is called with propagate set to true, it broadcasts the complete block to a subset of peers via peer.AsyncSendNewBlock(); otherwise it announces the block hash to all peers via peer.AsyncSendNewBlockHash(). Both functions merely put the data into a queue, so their code is omitted here.
// BroadcastBlock will either propagate a block to a subset of its peers, or
// will only announce its availability (depending what's requested).
func (pm *ProtocolManager) BroadcastBlock(block *types.Block, propagate bool) {
	hash := block.Hash()
	peers := pm.peers.PeersWithoutBlock(hash)
	// If propagation is requested, send to a subset of the peer
	// In this case, broadcast the complete block to a subset of peers
	if propagate {
		// Calculate the TD of the block (it's not imported yet, so block.Td is not valid)
		// Compute the new total difficulty
		var td *big.Int
		if parent := pm.blockchain.GetBlock(block.ParentHash(), block.NumberU64()-1); parent != nil {
			td = new(big.Int).Add(block.Difficulty(), pm.blockchain.GetTd(block.ParentHash(), block.NumberU64()-1))
		} else {
			log.Error("Propagating dangling block", "number", block.Number(), "hash", hash)
			return
		}
		// Send the block to a subset of our peers
		transfer := peers[:int(math.Sqrt(float64(len(peers))))]
		for _, peer := range transfer {
			peer.AsyncSendNewBlock(block, td)
		}
		log.Trace("Propagated block", "hash", hash, "recipients", len(transfer), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
		return
	}
	// Otherwise if the block is indeed in our own chain, announce it
	// Announce the block hash to all peers
	if pm.blockchain.HasBlock(hash, block.NumberU64()) {
		for _, peer := range peers {
			peer.AsyncSendNewBlockHash(block)
		}
		log.Trace("Announced block", "hash", hash, "recipients", len(peers), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
	}
}
peer.broadcast() is the broadcast loop of each peer connection. It only broadcasts three kinds of messages: transactions, complete blocks, and block hashes. In other words, a node only actively pushes these three data types; all other synchronization happens via request-response.
// broadcast is a write loop that multiplexes block propagations, announcements
// and transaction broadcasts into the remote peer. The goal is to have an async
// writer that does not lock up node internals.
func (p *peer) broadcast() {
	for {
		select {
		// Broadcast transactions
		case txs := <-p.queuedTxs:
			if err := p.SendTransactions(txs); err != nil {
				return
			}
		// Broadcast a complete block
		case prop := <-p.queuedProps:
			if err := p.SendNewBlock(prop.block, prop.td); err != nil {
				return
			}
		// Broadcast a block hash announcement
		case block := <-p.queuedAnns:
			if err := p.SendNewBlockHashes([]common.Hash{block.Hash()}, []uint64{block.NumberU64()}); err != nil {
				return
			}
		case <-p.term:
			return
		}
	}
}
How a peer node processes a new block
This section describes how a remote node handles the two block-sync messages. The NewBlockMsg flow is clear and concise. The NewBlockHashesMsg flow takes a couple of detours: as the overall flow chart (Figure 1) shows, the node must first fetch the complete block from the peer that sent the message, after which the remaining flow is identical to NewBlockMsg.
This part touches many modules, and the diagram can look bewildering, but with the main thread above in mind the code is actually quite clear. Figure 5 shows the overall flow.
Message handling starts in ProtocolManager.handleMsg. The NewBlockMsg flow is the blue region. The red region is a separate goroutine in which the fetcher processes blocks in its queue: if a block taken off the queue is the one the chain currently needs, then after validation blockchain.InsertChain() inserts it into the chain and it is finally written to the database (the yellow part). Finally, the green part is the NewBlockHashesMsg flow, which is fairly involved at the code level; I simplified it so the whole flow fits in one figure.
Study this figure until the overall flow is clear, then look at the details of each step.
Handling NewBlockMsg
This section describes how a node handles a complete block. The flow is:
First RLP-decode the message, then mark the sending peer as already knowing this block, so that when this node later broadcasts the block's hash it skips that peer.
Put the block into the fetcher's queue by calling fetcher.Enqueue.
Update the peer's head position, then check whether the local chain is behind the peer's chain; if so, synchronize the local chain from that peer.
Here is just the NewBlockMsg part of handleMsg().
case msg.Code == NewBlockMsg:
	// Retrieve and decode the propagated block
	var request newBlockData
	if err := msg.Decode(&request); err != nil {
		return errResp(ErrDecode, "%v: %v", msg, err)
	}
	request.Block.ReceivedAt = msg.ReceivedAt
	request.Block.ReceivedFrom = p
	// Mark the peer as owning the block and schedule it for import
	p.MarkBlock(request.Block.Hash())
	// Why enqueue it when we already have the complete block?
	// Answer: it goes into the fetcher's priority queue, from which the fetcher
	// picks the block matching the height the chain currently needs.
	pm.fetcher.Enqueue(p.id, request.Block)
	// Assuming the block is importable by the peer, but possibly not yet done so,
	// calculate the head hash and TD that the peer truly must have.
	// Head and total difficulty up to the parent block
	var (
		trueHead = request.Block.ParentHash()
		trueTD   = new(big.Int).Sub(request.TD, request.Block.Difficulty())
	)
	// Update the peers total difficulty if better than the previous
	// If the received block's TD beats the peer's previous TD, and also our own,
	// synchronize with this peer.
	// Question: only the block's hash is used here; why not use the block itself,
	// or send less data to reduce network load if the block can't be used?
	// Answer: the block did go into the priority queue. When the fetcher loop sees
	// that the next needed height is already in the queue, it uses that block
	// directly instead of requesting it from the peer, and after validation hands
	// it to blockchain.InsertChain.
	if _, td := p.Head(); trueTD.Cmp(td) > 0 {
		p.SetHead(trueHead, trueTD)
		// Schedule a sync if above ours. Note, this will not fire a sync for a gap of
		// a single block (as the true TD is below the propagated block), however this
		// scenario should easily be covered by the fetcher.
		currentBlock := pm.blockchain.CurrentBlock()
		if trueTD.Cmp(pm.blockchain.GetTd(currentBlock.Hash(), currentBlock.NumberU64())) > 0 {
			go pm.synchronise(p)
		}
	}
//------------------------ end of handleMsg

// Enqueue tries to fill gaps in the fetcher's future import queue.
// Sends on the inject channel: the calling goroutine runs in handleMsg, and the
// channel hands the block over to the fetcher goroutine.
func (f *Fetcher) Enqueue(peer string, block *types.Block) error {
	op := &inject{
		origin: peer,
		block:  block,
	}
	select {
	case f.inject <- op:
		return nil
	case <-f.quit:
		return errTerminated
	}
}

// enqueue runs in the fetcher goroutine and schedules the block for import.
func (f *Fetcher) enqueue(peer string, block *types.Block) {
	hash := block.Hash()
	// Ensure the peer isn't DOSing us
	count := f.queues[peer] + 1
	if count > blockLimit {
		log.Debug("Discarded propagated block, exceeded allowance", "peer", peer, "number", block.Number(), "hash", hash, "limit", blockLimit)
		propBroadcastDOSMeter.Mark(1)
		f.forgetHash(hash)
		return
	}
	// Discard any past or too distant blocks
	// Height check: drop blocks too far in the past or future
	if dist := int64(block.NumberU64()) - int64(f.chainHeight()); dist < -maxUncleDist || dist > maxQueueDist {
		log.Debug("Discarded propagated block, too far away", "peer", peer, "number", block.Number(), "hash", hash, "distance", dist)
		propBroadcastDropMeter.Mark(1)
		f.forgetHash(hash)
		return
	}
	// Schedule the block for future importing
	// The block first enters the priority queue; much remains before chain insertion
	if _, ok := f.queued[hash]; !ok {
		op := &inject{
			origin: peer,
			block:  block,
		}
		f.queues[peer] = count
		f.queued[hash] = op
		f.queue.Push(op, -float32(block.NumberU64()))
		if f.queueChangeHook != nil {
			f.queueChangeHook(op.block.Hash(), true)
		}
		log.Debug("Queued propagated block", "peer", peer, "number", block.Number(), "hash", hash, "queued", f.queue.Size())
	}
}
Processing the fetcher queue
Now let's see how the fetcher processes a block once it is queued. Why not validate it and insert it into the local chain right away?
Because Ethereum has the uncle mechanism, a node may receive slightly older blocks. A node may also have fallen a few blocks behind due to network conditions, and thus receive "future" blocks. Neither kind can be inserted into the local chain directly.
The queue the blocks enter is a priority queue in which lower-height blocks are popped first. fetcher.loop runs as its own goroutine, continuously handling the fetcher's tasks and events. It first cleans up blocks whose fetch has timed out, then processes the blocks in the priority queue: if a block's height is exactly the next one the chain needs, it calls f.insert(), which validates the block and calls BlockChain.InsertChain(); after a successful insert, the new block's hash is broadcast.
// Loop is the main fetcher loop, checking and processing various notification
// events.
func (f *Fetcher) loop() {
	// Iterate the block fetching until a quit is requested
	fetchTimer := time.NewTimer(0)
	completeTimer := time.NewTimer(0)
	for {
		// Clean up any expired block fetches
		for hash, announce := range f.fetching {
			if time.Since(announce.time) > fetchTimeout {
				f.forgetHash(hash)
			}
		}
		// Import any queued blocks that could potentially fit
		height := f.chainHeight()
		for !f.queue.Empty() {
			op := f.queue.PopItem().(*inject)
			hash := op.block.Hash()
			if f.queueChangeHook != nil {
				f.queueChangeHook(hash, false)
			}
			// If too high up the chain or phase, continue later
			// Not the next block the chain needs: push it back and stop the loop
			number := op.block.NumberU64()
			if number > height+1 {
				f.queue.Push(op, -float32(number))
				if f.queueChangeHook != nil {
					f.queueChangeHook(hash, true)
				}
				break
			}
			// Otherwise if fresh and still unknown, try and import
			// The height is exactly what we need and the chain lacks this block
			if number+maxUncleDist < height || f.getBlock(hash) != nil {
				f.forgetBlock(hash)
				continue
			}
			f.insert(op.origin, op.block)
		}
func (f *Fetcher) insert(peer string, block *types.Block) {
	hash := block.Hash()
	// Run the import on a new thread
	log.Debug("Importing propagated block", "peer", peer, "number", block.Number(), "hash", hash)
	go func() {
		defer func() { f.done <- hash }()
		// ... verify the header, call f.insertChain(), then broadcast the
		// block's hash to peers that do not yet know it ...
	}()
}
Handling NewBlockHashesMsg
This section covers NewBlockHashesMsg. The message handling itself is simple; the slightly more involved part, fetching the complete block from the peer, is covered in the next section.
The flow is:
RLP-decode the message, then mark the peer as knowing this block. Pick out the block hashes the local chain does not have and notify the fetcher of these unknown hashes. fetcher.Notify records the notification and pushes it into the notify channel for the fetcher goroutine. fetcher.loop() then processes it: it confirms the announcement is not a DoS attack, checks the block's height, and checks whether the block is already fetching or completing (completing means the header is downloaded and the body is in flight). If neither, it adds the notification to announced and fires a 0s timer to process it.
announced is covered in the next section.
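The DoS check mentioned above is just a per-peer counter compared against hashLimit (256 in this release). A shrunken sketch of the bookkeeping, with hashLimit reduced to 3 for the demo:

```go
package main

import "fmt"

const hashLimit = 3 // geth 1.8 uses 256; shrunk here for the demo

func main() {
	announces := map[string]int{} // per-peer outstanding announce counts
	accepted := 0
	for i := 0; i < 5; i++ {
		count := announces["peer1"] + 1
		if count > hashLimit {
			continue // drop the announcement: the peer exceeded its allowance
		}
		announces["peer1"] = count
		accepted++
	}
	fmt.Println(accepted) // 3
}
```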
// handleMsg()
case msg.Code == NewBlockHashesMsg:
	var announces newBlockHashesData
	if err := msg.Decode(&announces); err != nil {
		return errResp(ErrDecode, "%v: %v", msg, err)
	}
	// Mark the hashes as present at the remote node
	for _, block := range announces {
		p.MarkBlock(block.Hash)
	}
	// Schedule all the unknown hashes for retrieval
	// Pick out the hashes the local chain lacks and hand them to the fetcher
	unknown := make(newBlockHashesData, 0, len(announces))
	for _, block := range announces {
		if !pm.blockchain.HasBlock(block.Hash, block.Number) {
			unknown = append(unknown, block)
		}
	}
	for _, block := range unknown {
		pm.fetcher.Notify(p.id, block.Hash, block.Number, time.Now(), p.RequestOneHeader, p.RequestBodies)
	}
// Notify announces the fetcher of the potential availability of a new block in
// the network.
// Tells the fetcher a new block exists: no block body, just the hash, height, etc.
func (f *Fetcher) Notify(peer string, hash common.Hash, number uint64, time time.Time,
	headerFetcher headerRequesterFn, bodyFetcher bodyRequesterFn) error {
	block := &announce{
		hash:        hash,
		number:      number,
		time:        time,
		origin:      peer,
		fetchHeader: headerFetcher,
		fetchBodies: bodyFetcher,
	}
	select {
	case f.notify <- block:
		return nil
	case <-f.quit:
		return errTerminated
	}
}

// fetcher.loop(), handling an incoming announcement
case notification := <-f.notify:
	// A block was announced: make sure the peer isn't DOSing us
	count := f.announces[notification.origin] + 1
	if count > hashLimit {
		log.Debug("Peer exceeded outstanding announces", "peer", notification.origin, "limit", hashLimit)
		propAnnounceDOSMeter.Mark(1)
		break
	}
	// If we have a valid block number, check that it's potentially useful
	// Height check
	if notification.number > 0 {
		if dist := int64(notification.number) - int64(f.chainHeight()); dist < -maxUncleDist || dist > maxQueueDist {
			log.Debug("Peer discarded announcement", "peer", notification.origin, "number", notification.number, "hash", notification.hash, "distance", dist)
			propAnnounceDropMeter.Mark(1)
			break
		}
	}
	// All is well, schedule the announce if block's not yet downloading
	// Skip if the block is already being fetched or completed
	if _, ok := f.fetching[notification.hash]; ok {
		break
	}
	if _, ok := f.completing[notification.hash]; ok {
		break
	}
	// Update how many blocks this peer has announced to us
	f.announces[notification.origin] = count
	// Add the notification to announced, ready for scheduling
	f.announced[notification.hash] = append(f.announced[notification.hash], notification)
	if f.announceChangeHook != nil && len(f.announced[notification.hash]) == 1 {
		f.announceChangeHook(notification.hash, true)
	}
	if len(f.announced) == 1 {
		// A notification entered announced: reset the 0s timer so another
		// branch of loop processes it
		f.rescheduleFetch(fetchTimer)
	}
How the fetcher obtains complete blocks
This section covers how the fetcher obtains a complete block. This is its most important job, involving at least 80% of its code, so it deserves a large section of its own.
The bulk of the Fetcher
The Fetcher's main job is to obtain complete blocks and, at the right moment, hand them to InsertChain to be validated and inserted into the local chain. Again we start from the macro view of how the Fetcher works; grasp this first, because the code itself is nowhere near as clear.
Macro view
First, how two nodes interact to fetch a complete block, shown as a sequence diagram in Figure 6; the flow is clear enough to need no prose.
Next, the fetcher's internal state transitions while fetching a block. It uses states to record which stage each wanted block is in; see Figure 7. A brief explanation:
After NewBlockHashesMsg arrives, the information is recorded in announced and the block enters the announced state, meaning this node has accepted the message. The fetcher goroutine processes announced entries: after validation, it requests the block's header from the peer that sent the message, and the block enters the fetching state. Once the header arrives, if it shows the block has no transactions and no uncles, the block moves straight to the completing state: the header alone is assembled into a complete block, which joins the queued priority queue. If the header shows the block does have transactions or uncles, the block moves to the fetched state, a request for the transactions and uncles is sent, and the block moves to completing. When the transactions and uncles arrive, the header, transactions, and uncles are combined into a complete block, which joins the queued queue.
// Fetcher is responsible for accumulating block announcements from various peers
// and scheduling them for retrieval.
// Accumulates block announcements, then schedules the retrieval of those blocks
type Fetcher struct {
	// Various event channels
	// Channel receiving block hash announcements
	notify chan *announce
	// Channel receiving complete blocks
	inject chan *inject

	blockFilter chan chan []*types.Block
	// Channel of channels used to filter headers
	headerFilter chan chan *headerFilterTask
	// Channel of channels used to filter bodies
	bodyFilter chan chan *bodyFilterTask

	done chan common.Hash
	quit chan struct{}

	// Announce states
	// How many header announcements each peer has given us
	announces map[string]int // Per peer announce counts to prevent memory exhaustion
	// Announced blocks
	announced map[common.Hash][]*announce // Announced blocks, scheduled for fetching
	// Requests currently fetching block headers
	fetching map[common.Hash]*announce // Announced blocks, currently fetching
	// Headers fetched, bodies still missing; used to request the bodies
	fetched map[common.Hash][]*announce // Blocks with headers fetched, scheduled for body retrieval
	// Blocks whose headers we have, bodies being completed
	completing map[common.Hash]*announce // Blocks with headers, currently body-completing

	// Block cache
	// queue: priority queue, keyed on block height
	// queues: how many blocks each peer has announced
	// queued: which blocks are already in the queue
	queue  *prque.Prque            // Queue containing the import operations (block number sorted)
	queues map[string]int          // Per peer block counts to prevent memory exhaustion
	queued map[common.Hash]*inject // Set of already queued blocks (to dedupe imports)

	// Callbacks
	getBlock       blockRetrievalFn   // Retrieves a block from the local chain
	verifyHeader   headerVerifierFn   // Checks if a block's headers have a valid proof of work (includes PoW verification)
	broadcastBlock blockBroadcasterFn // Broadcasts a block to connected peers
	chainHeight    chainHeightFn      // Retrieves the current chain's height
	insertChain    chainInsertFn      // Injects a batch of blocks into the chain
	dropPeer       peerDropFn         // Drops a peer for misbehaving

	// Testing hooks
	announceChangeHook func(common.Hash, bool) // Method to call upon adding or deleting a hash from the announce list
	queueChangeHook    func(common.Hash, bool) // Method to call upon adding or deleting a block from the import queue
	fetchingHook       func([]common.Hash)     // Method to call upon starting a block (eth/61) or header (eth/62) fetch
	completingHook     func([]common.Hash)     // Method to call upon starting a block body fetch (eth/62)
	importedHook       func(*types.Block)      // Method to call upon successful block import (both eth/61 and eth/62)
}
NewBlockHashesMsg handling was covered in an earlier section; flip back if needed. Here we pick up from the announced state. In loop(), when fetchTimer fires, there are notifications waiting to be processed: the fetcher selects the due entries from announced and builds requests for the block headers. Since many peers may have announced the same block's hash, it randomly picks one of those peers to ask, and it spawns a separate goroutine per peer to send the requests.
// fetcher.loop()
case <-fetchTimer.C:
	// At least one block's timer ran out, check for needing retrieval
	request := make(map[string][]common.Hash)
	for hash, announces := range f.announced {
		// Process announcements older than arriveTimeout-gatherSlack
		if time.Since(announces[0].time) > arriveTimeout-gatherSlack {
			// Pick a random peer to retrieve from, reset all others
			// Many peers may have announced this block's hash: pick one at random
			announce := announces[rand.Intn(len(announces))]
			f.forgetHash(hash)
			// If the block still didn't arrive, queue for fetching
			// We don't have this block locally: build a fetch request
			if f.getBlock(hash) == nil {
				request[announce.origin] = append(request[announce.origin], hash)
				f.fetching[hash] = announce
			}
		}
	}
	// Send out all block header requests
	// One goroutine per peer, requesting everything needed from that peer
	for peer, hashes := range request {
		log.Trace("Fetching scheduled headers", "peer", peer, "list", hashes)
		// Create a closure of the fetch and schedule in on a new thread
		fetchHeader, hashes := f.fetching[hashes[0]].fetchHeader, hashes
		go func() {
			if f.fetchingHook != nil {
				f.fetchingHook(hashes)
			}
			for _, hash := range hashes {
				headerFetchMeter.Mark(1)
				fetchHeader(hash) // Suboptimal, but protocol doesn't allow batch header retrievals
			}
		}()
	}
	// Schedule the next fetch if blocks are still pending
	f.rescheduleFetch(fetchTimer)
As the call in Notify shows, the fetchHeader callback is actually RequestOneHeader(). That function uses the GetBlockHeadersMsg message, which can request multiple headers, but the fetcher only ever requests one.
pm.fetcher.Notify(p.id, block.Hash, block.Number, time.Now(), p.RequestOneHeader, p.RequestBodies)
// RequestOneHeader is a wrapper around the header query functions to fetch a
// single header. It is used solely by the fetcher.
func (p *peer) RequestOneHeader(hash common.Hash) error {
	p.Log().Debug("Fetching single header", "hash", hash)
	return p2p.Send(p.rw, GetBlockHeadersMsg, &getBlockHeadersData{Origin: hashOrNumber{Hash: hash}, Amount: uint64(1), Skip: uint64(0), Reverse: false})
}
GetBlockHeadersMsg is handled as follows. Because the message can request multiple headers, the handler is somewhat "messy"; fortunately the fetcher only asks for one, so the logic for walking to the next header is skipped here. In the end SendBlockHeaders() returns the headers to the requesting node in a BlockHeadersMsg message.
// handleMsg()
// Block header query, collect the requested headers and reply
case msg.Code == GetBlockHeadersMsg:
	// Decode the complex header query
	var query getBlockHeadersData
	if err := msg.Decode(&query); err != nil {
		return errResp(ErrDecode, "%v: %v", msg, err)
	}
	hashMode := query.Origin.Hash != (common.Hash{})
	// Gather headers until the fetch or network limits is reached
	var (
		bytes   common.StorageSize
		headers []*types.Header
		unknown bool
	)
	// Block is known && fewer than requested && under the 2MB soft limit && under the max fetch count
	for !unknown && len(headers) < int(query.Amount) && bytes < softResponseLimit && len(headers) < downloader.MaxHeaderFetch {
		// ... look up each header and advance to the next origin ...
	}
	return p.SendBlockHeaders(headers)
The `BlockHeadersMsg` handling is interesting: `GetBlockHeadersMsg` is not exclusive to the fetcher, the downloader sends it too, so the response handler must work out which of them made the request. The logic is: the fetcher filters the received headers first, and whatever the fetcher does not want belongs to the downloader. In the call to `fetcher.FilterHeaders`, the fetcher takes the headers it asked for.
// handleMsg()
case msg.Code == BlockHeadersMsg:
	// A batch of headers arrived to one of our previous requests
	var headers []*types.Header
	if err := msg.Decode(&headers); err != nil {
		return errResp(ErrDecode, "msg %v: %v", msg, err)
	}
	// If no headers were received, but we're expecting a DAO fork check, maybe it's that
	// Check for the DAO hard fork
	if len(headers) == 0 && p.forkDrop != nil {
		// Possibly an empty reply to the fork header checks, sanity check TDs
		verifyDAO := true
		// If we already have a DAO header, we can check the peer's TD against it. If
		// the peer's ahead of this, it too must have a reply to the DAO check
		if daoHeader := pm.blockchain.GetHeaderByNumber(pm.chainconfig.DAOForkBlock.Uint64()); daoHeader != nil {
			if _, td := p.Head(); td.Cmp(pm.blockchain.GetTd(daoHeader.Hash(), daoHeader.Number.Uint64())) >= 0 {
				verifyDAO = false
			}
		}
		// If we're seemingly on the same chain, disable the drop timer
		if verifyDAO {
			p.Log().Debug("Seems to be on the same side of the DAO fork")
			p.forkDrop.Stop()
			p.forkDrop = nil
			return nil
		}
	}
	// Filter out any explicitly requested headers, deliver the rest to the downloader
	// Strip out the headers the fetcher requested before handing over to the downloader
	filter := len(headers) == 1
	if filter {
		// If it's a potential DAO fork check, validate against the rules
		if p.forkDrop != nil && pm.chainconfig.DAOForkBlock.Cmp(headers[0].Number) == 0 {
			// Disable the fork drop timer
			p.forkDrop.Stop()
			p.forkDrop = nil
			// Validate the header and either drop the peer or continue
			if err := misc.VerifyDAOHeaderExtraData(pm.chainconfig, headers[0]); err != nil {
				p.Log().Debug("Verified to be on the other side of the DAO fork, dropping")
				return err
			}
			p.Log().Debug("Verified to be on the same side of the DAO fork")
			return nil
		}
		// Irrelevant of the fork checks, send the header to the fetcher just in case
		// Let the fetcher filter the headers
		headers = pm.fetcher.FilterHeaders(p.id, headers, time.Now())
	}
	// The remaining headers go to the downloader
	if len(headers) > 0 || !filter {
		err := pm.downloader.DeliverHeaders(p.id, headers)
		if err != nil {
			log.Debug("Failed to deliver headers", "err", err)
		}
	}
`FilterHeaders()` is a clever function, worth savoring. It must pass all the headers to the fetcher goroutine and also get back the result of the fetcher's processing. `fetcher.headerFilter` is a channel of channels, while `filter` is a channel carrying a header-filter task. The function first sends `filter` over `headerFilter`, so the fetcher goroutine is now waiting at the other end; then it sends the `headerFilterTask` into `filter` so the fetcher can read the data. After processing, the fetcher writes the result back into `filter`, where it is picked up by `FilterHeaders`, which runs in the `handleMsg()` goroutine.
Each peer's messages are handled by the ProtocolManager in their own goroutine, but the `fetcher` has only a single event-processing goroutine. Without a per-call `filter` channel, how would the fetcher know who sent it the headers, and where would it send the filtered result back?
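The pattern is easier to see stripped of everything Ethereum-specific. Below is a minimal, self-contained sketch of the same "channel of channels" handshake: the caller registers a private reply channel, sends the task on it, and reads the result back on the same channel (all names here are made up, and filtering even numbers stands in for filtering headers):

```go
package main

import "fmt"

// worker plays the role of the fetcher goroutine: it receives a caller's
// private channel, reads the task from it, and writes the result back.
func worker(filterCh chan chan []int) {
	for reply := range filterCh {
		task := <-reply // step 2: receive the task on the caller's channel
		kept := []int{}
		for _, v := range task {
			if v%2 == 0 { // the "filter": keep even numbers
				kept = append(kept, v)
			}
		}
		reply <- kept // step 3: hand the result back on the same channel
	}
}

func main() {
	filterCh := make(chan chan []int)
	go worker(filterCh)

	reply := make(chan []int)
	filterCh <- reply          // step 1: register our private channel
	reply <- []int{1, 2, 3, 4} // step 2: send the task
	fmt.Println(<-reply)       // step 3: read the result; prints [2 4]
}
```

Because each caller brings its own `reply` channel, one worker goroutine can serve many concurrent callers without ever mixing up whose result is whose.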
// FilterHeaders extracts all the headers that were explicitly requested by the fetcher,
// returning those that should be handled differently.
// Picks out the headers the fetcher requested
func (f *Fetcher) FilterHeaders(peer string, headers []*types.Header, time time.Time) []*types.Header {
	log.Trace("Filtering headers", "peer", peer, "headers", len(headers))
	// Send the filter channel to the fetcher
	filter := make(chan *headerFilterTask)
	select {
	case f.headerFilter <- filter:
	case <-f.quit:
		return nil
	}
	// Request the filtering of the header list
	select {
	case filter <- &headerFilterTask{peer: peer, headers: headers, time: time}:
	case <-f.quit:
		return nil
	}
	// Retrieve the headers remaining after filtering
	select {
	case task := <-filter:
		return task.headers
	case <-f.quit:
		return nil
	}
}
Next, the handling of f.headerFilter. This code runs about 90 lines and does the following:
1. Take `filter` out of `f.headerFilter`, then take the filter task `task` out of it.
2. Split the headers into 3 classes: `unknown`, which are returned to the caller, i.e. `handleMsg()`; `incomplete`, headers whose bodies still need to be fetched; and `complete`, blocks consisting of only a header. Iterate over all the headers and sort each into its class; the exact test is in the code comments, with the macro state diagram in mind.
3. Return the `unknown` headers to `handleMsg()`.
4. Move the `incomplete` headers into the `fetched` state, then fire the completion timer so their bodies get requested.
5. Add the `complete` blocks to `queued`.
// fetcher.loop()
case filter := <-f.headerFilter:
	// Headers arrived from a remote peer: extract the ones the fetcher requested
	var task *headerFilterTask
	select {
	case task = <-filter:
	case <-f.quit:
		return
	}
	// Split the batch of headers into unknown ones (to return to the caller),
	// known incomplete ones (requiring body retrievals) and completed blocks.
	// unknown are not the fetcher's requests; complete holds blocks with no
	// transactions and no uncles (the header alone suffices); incomplete holds
	// blocks that still need their uncles and transactions
	unknown, incomplete, complete := []*types.Header{}, []*announce{}, []*types.Block{}
	// Iterate over all the received headers
	for _, header := range task.headers {
		hash := header.Hash()
		// Filter fetcher-requested headers from other synchronisation algorithms
		// The hash is being fetched, from the peer we asked, and is not yet
		// fetched, completing, or queued
		if announce := f.fetching[hash]; announce != nil && announce.origin == task.peer && f.fetched[hash] == nil && f.completing[hash] == nil && f.queued[hash] == nil {
			// If the delivered header does not match the promised number, drop the announcer
			// Height check: a mismatch means the peer is misbehaving
			if header.Number.Uint64() != announce.number {
				log.Trace("Invalid block number fetched", "peer", announce.origin, "hash", header.Hash(), "announced", announce.number, "provided", header.Number)
				f.dropPeer(announce.origin)
				f.forgetHash(hash)
				continue
			}
			// Only keep if not imported by other means
			// The local chain does not have this block yet
			if f.getBlock(hash) == nil {
				announce.header = header
				announce.time = task.time
				// If the block is empty (header only), short circuit into the final import queue
				// No transactions and no uncles: add to complete
				if header.TxHash == types.DeriveSha(types.Transactions{}) && header.UncleHash == types.CalcUncleHash([]*types.Header{}) {
					log.Trace("Block empty, skipping body retrieval", "peer", announce.origin, "number", header.Number, "hash", header.Hash())
					block := types.NewBlockWithHeader(header)
					block.ReceivedAt = task.time
					complete = append(complete, block)
					f.completing[hash] = announce
					continue
				}
				// Otherwise add to the list of blocks needing completion
				incomplete = append(incomplete, announce)
			} else {
				log.Trace("Block already imported, discarding header", "peer", announce.origin, "number", header.Number, "hash", header.Hash())
				f.forgetHash(hash)
			}
		} else {
			// Fetcher doesn't know about it, add to the return list
			// A header we never requested
			unknown = append(unknown, header)
		}
	}
	// Hand the unknown headers back through filter
	headerFilterOutMeter.Mark(int64(len(unknown)))
	select {
	case filter <- &headerFilterTask{headers: unknown, time: task.time}:
	case <-f.quit:
		return
	}
	// Schedule the retrieved headers for body completion
	for _, announce := range incomplete {
		hash := announce.header.Hash()
		if _, ok := f.completing[hash]; ok {
			continue
		}
		f.fetched[hash] = append(f.fetched[hash], announce)
		if len(f.fetched) == 1 {
			f.rescheduleComplete(completeTimer)
		}
	}
	// Schedule the header-only blocks for import
	for _, block := range complete {
		if announce := f.completing[block.Hash()]; announce != nil {
			f.enqueue(announce.origin, block)
		}
	}
Following the state diagram, what remains is the transition from `fetched` to `completing`. The flow above has already armed the `completeTimer`; when it fires, the handling is similar to the header request, so I won't belabor it. The request message this time is `GetBlockBodiesMsg`, and the function actually called is `RequestBodies`.
// fetcher.loop()
case <-completeTimer.C:
	// At least one header's timer ran out, retrieve everything
	request := make(map[string][]common.Hash)
	// Iterate over all the announces awaiting their bodies
	for hash, announces := range f.fetched {
		// Pick a random peer to retrieve from, reset all others
		// Many peers may have announced this block: pick one at random
		announce := announces[rand.Intn(len(announces))]
		f.forgetHash(hash)
		// If the block still didn't arrive, queue for completion
		// We don't have the block locally: move to completing and build a request
		if f.getBlock(hash) == nil {
			request[announce.origin] = append(request[announce.origin], hash)
			f.completing[hash] = announce
		}
	}
	// Send out all block body requests
	// Again one separate goroutine per peer
	for peer, hashes := range request {
		log.Trace("Fetching scheduled bodies", "peer", peer, "list", hashes)
		// Create a closure of the fetch and schedule in on a new thread
		if f.completingHook != nil {
			f.completingHook(hashes)
		}
		bodyFetchMeter.Mark(int64(len(hashes)))
		go f.completing[hashes[0]].fetchBodies(hashes)
	}
	// Schedule the next fetch if blocks are still pending
	f.rescheduleComplete(completeTimer)
`handleMsg()` deals with this message briskly: it fetches the RLP-encoded bodies directly and sends the response message.
// handleMsg()
case msg.Code == GetBlockBodiesMsg:
	// Decode the retrieval message
	msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
	if _, err := msgStream.List(); err != nil {
		return err
	}
	// Gather blocks until the fetch or network limits is reached
	var (
		hash   common.Hash
		bytes  int
		bodies []rlp.RawValue
	)
	// Iterate over all the requested hashes
	for bytes < softResponseLimit && len(bodies) < downloader.MaxBlockFetch {
		// Retrieve the hash of the next block
		if err := msgStream.Decode(&hash); err == rlp.EOL {
			break
		} else if err != nil {
			return errResp(ErrDecode, "msg %v: %v", msg, err)
		}
		// Retrieve the requested block body, stopping if enough was found
		if data := pm.blockchain.GetBodyRLP(hash); len(data) != 0 {
			bodies = append(bodies, data)
			bytes += len(data)
		}
	}
	return p.SendBlockBodiesRLP(bodies)
The response message `BlockBodiesMsg` is handled on the same principle as the headers: the fetcher filters first, and whatever remains belongs to the downloader. Note that the response contains only the transaction lists and the uncle lists.
// handleMsg()
case msg.Code == BlockBodiesMsg:
	// A batch of block bodies arrived to one of our previous requests
	var request blockBodiesData
	if err := msg.Decode(&request); err != nil {
		return errResp(ErrDecode, "msg %v: %v", msg, err)
	}
	// Deliver them all to the downloader for queuing
	transactions := make([][]*types.Transaction, len(request))
	uncles := make([][]*types.Header, len(request))
	for i, body := range request {
		transactions[i] = body.Transactions
		uncles[i] = body.Uncles
	}
	// Filter out any explicitly requested bodies, deliver the rest to the downloader
	// Let the fetcher take the bodies it requested; the rest go to the downloader
	filter := len(transactions) > 0 || len(uncles) > 0
	if filter {
		transactions, uncles = pm.fetcher.FilterBodies(p.id, transactions, uncles, time.Now())
	}
	// The remaining bodies go to the downloader
	if len(transactions) > 0 || len(uncles) > 0 || !filter {
		err := pm.downloader.DeliverBodies(p.id, transactions, uncles)
		if err != nil {
			log.Debug("Failed to deliver bodies", "err", err)
		}
	}
The filter function works on the same principle as the header one.
// FilterBodies extracts all the block bodies that were explicitly requested by
// the fetcher, returning those that should be handled differently.
// Picks out the bodies the fetcher requested and returns the rest; the process
// mirrors the header filtering
func (f *Fetcher) FilterBodies(peer string, transactions [][]*types.Transaction, uncles [][]*types.Header, time time.Time) ([][]*types.Transaction, [][]*types.Header) {
	log.Trace("Filtering bodies", "peer", peer, "txs", len(transactions), "uncles", len(uncles))
	// Send the filter channel to the fetcher
	filter := make(chan *bodyFilterTask)
	select {
	case f.bodyFilter <- filter:
	case <-f.quit:
		return nil, nil
	}
	// Request the filtering of the body list
	select {
	case filter <- &bodyFilterTask{peer: peer, transactions: transactions, uncles: uncles, time: time}:
	case <-f.quit:
		return nil, nil
	}
	// Retrieve the bodies remaining after filtering
	select {
	case task := <-filter:
		return task.transactions, task.uncles
	case <-f.quit:
		return nil, nil
	}
}
Now a look at the actual body filtering, which differs from the header case. The key points:
1. The blocks the fetcher wants are pulled out into `blocks`; everything it doesn't want stays in `task`.
2. How it decides a body is one the fetcher requested: if the hashes computed from the transaction list and the uncle list match those in a pending block header, and the message came from the peer that was asked, then it is the fetcher's.
3. The blocks in `blocks` are added to `queued`, and we're done.
// fetcher.loop()
case filter := <-f.bodyFilter:
	// Block bodies arrived: take out the ones the fetcher requested
	var task *bodyFilterTask
	select {
	case task = <-filter:
	case <-f.quit:
		return
	}
	blocks := []*types.Block{}
	// For each delivered body, compute the hashes of its transaction list and
	// uncle list and match them against the headers in f.completing; matched
	// bodies are assembled into blocks and removed from task
	for i := 0; i < len(task.transactions) && i < len(task.uncles); i++ {
		// ... matching and block assembly ...
	}
That completes the flow of the fetcher obtaining a full block, and with it about 80% of the fetcher module's code. Two more functions are worth a look:
1. `forgetHash(hash common.Hash)`: clears all the state recorded for the given hash.
2. `forgetBlock(hash common.Hash)`: removes a block from the queue.
Finally, go back to the beginning and revisit the fetcher module and the new-block propagation flow; it should all click into place now.