
seems btcd is way slower than bitcoin core in syncing? #1339

Open
lzl124631x opened this issue Oct 25, 2018 · 59 comments

@lzl124631x

I remember using Bitcoin Core to sync a BTC full node on EC2 (200+ GB) in one day. But when I used btcd to sync a full node, two days later it had only synced 108 GB?

The EC2 config is the same.

@totaloutput

@lzl124631x there are a couple of reasons btcd is slower than bitcoind. The main one I see is that btcd downloads from only one peer while bootstrapping the blocks, whereas bitcoind downloads from 8.
So you need to find a node that is closer and faster relative to your EC2 instance. Let me know if you need more details; I can share what I did to speed up my sync of 200+ GB.
But it will never be as fast as bitcoind.

@jcvernaleo
Member

dcrd has an issue open related to multipeer downloads decred/dcrd#1145 which appears to be partially done.

I haven't been following it closely enough to know whether it is likely to be backportable to btcd, but it might be interesting to take a look at.

@totaloutput

@lzl124631x : If your node is in the US (like mine), your connection to the default nodes in btcd's DNSSeed list under chaincfg/params.go will be slow. If you manually add a couple of US nodes, the bootstrapping speed will be much faster. For example, add this to your btcd startup command: --connect=157.131.198.183:8333 --connect=209.108.206.229:8333 . The IPs keep changing; you can use the leaderboard to find the latest IPs close to your country.

I put more details in this post, if you want to look.

@lzl124631x
Author

@totaloutput Good to know! But is bitcoind smart enough to sync from geographically closer nodes? If so, why doesn't btcd have that yet?

@totaloutput

@lzl124631x I don't think the current bitcoind code is smart enough to find geographically closer nodes to sync from. (I could be wrong.) The current code finds 1 node to fetch the headers of all the blocks, then asks 8 nodes for the block contents (data); if any of those nodes is slow or unresponsive for 2 seconds, it drops that node and asks a new one.
btcd doesn't have that built in yet. It's a good feature, especially as the blockchain data keeps getting bigger. But the majority of the btcd development resources moved onto dcrd (Decred). My guess is that the feature will be implemented there first and then backported to btcd if someone wants to do it.

@l0k18
Contributor

l0k18 commented Nov 14, 2018

I am using the btcd codebase to update the parallelcoin network (it even still has malleability vulnerabilities) and have looked at the p2p sync manager code.

I made a small alteration that favours selecting the lowest ping among the connected peers; the place where you need to make the change is here: https://github.com/btcsuite/btcd/blob/master/netsync/manager.go#L261 . Personally, I think storing more metadata about peers could give a more accurate picture of available block sources, such as a record of the last blocks/second rate achieved when the peer was synced from. Still, picking the lowest ping among the up to 8 currently connected nodes probably helps.
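
For anyone curious, here is a minimal, self-contained sketch of that selection idea (hypothetical types, not btcd's actual netsync code): given the candidate sync peers, prefer the one with the lowest last-measured ping.

```go
package main

import (
	"fmt"
	"time"
)

// syncCandidate is an illustrative stand-in for the per-peer state the
// sync manager tracks; btcd's real peer type is different.
type syncCandidate struct {
	addr     string
	lastPing time.Duration // most recently measured round-trip time
}

// bestSyncPeer returns the candidate with the lowest measured ping,
// falling back to the first candidate if none has a measurement yet.
func bestSyncPeer(candidates []*syncCandidate) *syncCandidate {
	if len(candidates) == 0 {
		return nil
	}
	best := candidates[0]
	for _, c := range candidates[1:] {
		if c.lastPing > 0 && (best.lastPing == 0 || c.lastPing < best.lastPing) {
			best = c
		}
	}
	return best
}

func main() {
	peers := []*syncCandidate{
		{addr: "157.131.198.183:8333", lastPing: 180 * time.Millisecond},
		{addr: "209.108.206.229:8333", lastPing: 35 * time.Millisecond},
		{addr: "other.peer:8333"}, // no ping measured yet
	}
	fmt.Println("sync from:", bestSyncPeer(peers).addr)
}
```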

btcd as delivered in the git repo has a series of checkpoints. When checkpoints are set, it pre-fetches the block headers and compact filters (if available). This likely actually slows things down during initial sync, so I'd suggest trying to disable the checkpoints to eliminate the double downloading. Checkpoints mainly exist to stop miners filling mempools with new blocks forked from ancient blocks, which are extremely unlikely to ever exceed the cumulative work of the current best block. With my work on a forked chain, I am finding that when I set checkpoints, everything after them gets orphaned; I am looking into this.

Since the main purpose of checkpoints, in effect, is simply to stop time-wasting old forks from clogging up mempools, my opinion is that a more effective strategy would be to disallow forks that would require an inordinate amount of hashpower to push ahead of the tip. Someone mentioned this a long time ago in a bitcointalk post: the idea of simply disallowing new blocks that attach more than some reasonable number of blocks back. I think it should be based on cumulative weight, and a reasonable figure could be derived from the known hashpower for an algorithm as it currently exists across all networks.

And yes, the btcd netsync library only downloads from one source at a time. I think it would make sense, when syncing pre-checkpoint, to establish several links and download several segments of the chain from several sources at once, while grabbing blocks-only from a few nodes at the same time. The consensus chain could then be established earlier in the sync process, and of course a node serving up nonconforming blocks should get a huge banscore increment: weeks of no contact followed by trying to propagate a fake chain is pretty serious. A rough sketch of the segmenting idea follows.
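
This is only a sketch of how a height range could be split across peers (plain Go, not btcd code); a real implementation would also have to request, verify and reassemble the segments in height order.

```go
package main

import (
	"fmt"
	"sync"
)

// segment is a contiguous range of block heights assigned to one peer.
type segment struct {
	peer       string
	start, end int32 // inclusive
}

// splitRange divides [start, end] into one segment per peer.
func splitRange(start, end int32, peers []string) []segment {
	n := int32(len(peers))
	size := (end - start + 1) / n
	segs := make([]segment, 0, n)
	for i := int32(0); i < n; i++ {
		s := start + i*size
		e := s + size - 1
		if i == n-1 {
			e = end // the last peer takes the remainder
		}
		segs = append(segs, segment{peer: peers[i], start: s, end: e})
	}
	return segs
}

func main() {
	peers := []string{"peerA", "peerB", "peerC", "peerD"}
	var wg sync.WaitGroup
	for _, seg := range splitRange(0, 399_999, peers) {
		wg.Add(1)
		go func(s segment) {
			defer wg.Done()
			// A real implementation would issue getdata requests here and
			// hand the blocks to the chain in height order for validation.
			fmt.Printf("%s downloads blocks %d-%d\n", s.peer, s.start, s.end)
		}(seg)
	}
	wg.Wait()
}
```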

@awb99

awb99 commented Mar 28, 2019

This is a huge bug. I have already been waiting a week for btcd to download the whole blockchain, and this should really take a day at most. bitcoind is easily 20x faster. How can one roll out any service if the bitcoin daemon is so slow to get started?

@ccconnor

@lzl124631x : If your node is in the US (like mine), your connection to the default nodes in btcd's DNSSeed list under chaincfg/params.go will be slow. If you manually add a couple of US nodes, the bootstrapping speed will be much faster. For example, add this to your btcd startup command: --connect=157.131.198.183:8333 --connect=209.108.206.229:8333 . The IPs keep changing; you can use the leaderboard to find the latest IPs close to your country.

I put more details in this post, if you want to look.

I set connect=127.0.0.1, but it took three days to sync to 333365, and it's getting slower and slower.

@justinmoon

I'm getting about 1 block every 2 seconds. I tried modifying the addpeer and connect config parameters a few times but didn't see any change.

It was much faster than this for the first 100,000 or 200,000 blocks. It slowed down in the mid-to-high 200,000s...

@ccconnor

@justinmoon Same for me. At first, 200 or 300 blocks every 10 seconds; after 178411, less than 100 blocks every 10 seconds; now, sometimes 1 block every 10 seconds. Kick-ass!

@OlaStenberg

OlaStenberg commented Jun 25, 2019

@justinmoon Same for me. At first, 200 or 300 blocks every 10 seconds; after 178411, less than 100 blocks every 10 seconds; now, sometimes 1 block every 10 seconds. Kick-ass!

What version are you running? I pulled roasbeef/btcd and installed the latest, but ended up with 0.9.0-beta, which had a lot of stalled-sync problems and was processing 1 block every 2 minutes at mid 300k.

Trying 0.12.0-beta now, I was doing 1k per 10s, but just passed 180k and I'm barely doing 200 per 10s now.

@ccconnor

@OlaStenberg I'm running master (version 0.12.0-beta).

@OlaStenberg

@OlaStenberg I'm running master (version 0.12.0-beta).

Ok, it seems to be working a lot better for me with master, even if it slowed down to 20 blocks every 10s (around 350k). At what height did it slow down to 1 block every 10s for you?

@ccconnor

ccconnor commented Jun 26, 2019

@OlaStenberg I started syncing from the beginning again. At first, 17k blocks per 10s; now (at 204518) about 40 per 10s. And I find it's "fetchUtxosMain" that's so slow. Maybe there are some design flaws.
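
For context, fetchUtxosMain is the blockchain code path that loads the unspent outputs referenced by a block from the database. A minimal sketch of the kind of in-memory cache that would cut those round trips (hypothetical types, not btcd's actual UtxoViewpoint) looks like this:

```go
package main

import "fmt"

// outpoint and utxoEntry are illustrative stand-ins for btcd's
// wire.OutPoint and blockchain.UtxoEntry.
type outpoint struct {
	txid  string
	index uint32
}

type utxoEntry struct {
	amount int64
}

// utxoCache keeps recently touched entries in memory so repeated
// lookups during block validation don't each hit the database.
type utxoCache struct {
	entries map[outpoint]*utxoEntry
	fetchDB func(outpoint) *utxoEntry // fallback to the database
}

func (c *utxoCache) fetch(op outpoint) *utxoEntry {
	if e, ok := c.entries[op]; ok {
		return e // served from memory
	}
	e := c.fetchDB(op) // expensive: one random read per missing entry
	c.entries[op] = e
	return e
}

func main() {
	dbReads := 0
	cache := &utxoCache{
		entries: make(map[outpoint]*utxoEntry),
		fetchDB: func(op outpoint) *utxoEntry {
			dbReads++
			return &utxoEntry{amount: 5000}
		},
	}
	op := outpoint{txid: "aaaa", index: 0}
	cache.fetch(op)
	cache.fetch(op) // the second lookup is free
	fmt.Println("database reads:", dbReads) // prints 1
}
```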

@begetan

begetan commented Jul 16, 2019

btcd sync time is currently around 10x slower than Bitcoin Core 0.18.

I did some short profiling of the code while it was syncing and found that an incredible amount of CPU time is wasted by the garbage collector.

This is in turn the result of the LevelDB calls. At the same time, btcd uses a very small amount of memory, and nowadays even small devices like a Raspberry Pi have plenty of memory.

I think the right optimization is to improve the LevelDB handling and to reduce allocations.

@ghost

ghost commented Oct 29, 2019

For me it was a dealbreaker and I switched to Bitcoin Core :( I synced the whole blockchain in a little more than a day. With btcd, after a few days I was still at about 60% of the height and maybe 30% of the size. :( It makes me sad; I would like to see more diversity, but with this I have no other option :(

@l0k18
Contributor

l0k18 commented Oct 29, 2019

Two things stand out from my close work with this code on an old bitcoin fork (parallelcoin). One is the EC library: the precomputed table could optionally be made a lot bigger to improve performance, and optionally the fastest available C library for Koblitz curves could be used.

Secondly, I don't think all of the database problem is the leveldb back end, though there are probably issues there too (again, maybe a cgo option?), and there is also common code between it and the interface implementation. This kind of issue is common in Go. I watched a talk a few weeks ago about a video streaming service that found it basically had to single-thread the scheduling of its streams to reduce blocking, which points straight at the GC and scheduler being a poor fit for the task. That shouldn't be mysterious: the Go scheduler is tuned for IO-bound workloads, which is a bad fit for CPU-bound work like transaction validation and database cache management.

Solving the problem will probably follow a pattern similar to the streaming service's solution: explicitly schedule the processing instead of trusting the scheduler, and minimise post-init memory allocation with one big early allocation. All of which is a lot of work. The GC percent is set to 10% by default, which keeps memory use low but puts a ceiling on throughput in almost all cases.

@Rjected
Collaborator

Rjected commented Oct 29, 2019

In my profiling, garbage collection and runtime operations were taking up a lot of CPU time during sync, which was worrying - so I profiled some more (for allocations), and immutable treap operations were by far the biggest allocators, so that may be the issue.

Here's what I have for allocations when syncing 2012 blocks:

File: btcd
Type: alloc_space
Time: Aug 16, 2019 at 2:53pm (EDT)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top10
Showing nodes accounting for 182.87GB, 85.76% of 213.25GB total
Dropped 379 nodes (cum <= 1.07GB)
Showing top 10 nodes out of 109
      flat  flat%   sum%        cum   cum%
   83.99GB 39.39% 39.39%    83.99GB 39.39%  github.com/btcsuite/btcd/database/internal/treap.cloneTreapNode
   64.99GB 30.48% 69.86%       65GB 30.48%  github.com/btcsuite/btcd/txscript.parseScriptTemplate
    6.75GB  3.16% 73.03%    34.08GB 15.98%  github.com/btcsuite/btcd/database/internal/treap.(*Immutable).Delete
    6.34GB  2.97% 76.00%     6.34GB  2.97%  github.com/btcsuite/goleveldb/leveldb/table.(*Reader).newBlockIter
    5.06GB  2.37% 78.37%    10.78GB  5.05%  github.com/btcsuite/goleveldb/leveldb/util.Hash
    4.92GB  2.31% 80.68%     4.92GB  2.31%  bytes.NewBuffer
    3.25GB  1.52% 82.20%     3.25GB  1.52%  github.com/btcsuite/btcd/database/internal/treap.newTreapNode
    2.87GB  1.35% 83.55%     2.87GB  1.35%  github.com/btcsuite/btcd/wire.(*MsgTx).BtcDecode
    2.52GB  1.18% 84.73%    16.38GB  7.68%  github.com/btcsuite/btcd/blockchain.(*UtxoViewpoint).addTxOut
    2.20GB  1.03% 85.76%     2.20GB  1.03%  github.com/btcsuite/btcd/database/internal/treap.(*parentStack).Push

For cloneTreapNode:

(pprof) list cloneTreapNode
Total: 213.25GB
ROUTINE ======================== github.com/btcsuite/btcd/database/internal/treap.cloneTreapNode in /home/dan-server/btcd/database/internal/treap/immutable.go
   83.99GB    83.99GB (flat, cum) 39.39% of Total
         .          .     14:	return &treapNode{
         .          .     15:		key:      node.key,
         .          .     16:		value:    node.value,
         .          .     17:		priority: node.priority,
         .          .     18:		left:     node.left,
   83.99GB    83.99GB     19:		right:    node.right,
         .          .     20:	}
         .          .     21:}
         .          .     22:
         .          .     23:// Immutable represents a treap data structure which is used to hold ordered
         .          .     24:// key/value pairs using a combination of binary search tree and heap semantics.

And parseScriptTemplate:

(pprof) list parseScriptTemplate
Total: 213.25GB
ROUTINE ======================== github.com/btcsuite/btcd/txscript.parseScriptTemplate in /home/dan-server/btcd/txscript/script.go
   64.99GB       65GB (flat, cum) 30.48% of Total
         .          .    193:
         .          .    194:// parseScriptTemplate is the same as parseScript but allows the passing of the
         .          .    195:// template list for testing purposes.  When there are parse errors, it returns
         .          .    196:// the list of parsed opcodes up to the point of failure along with the error.
         .          .    197:func parseScriptTemplate(script []byte, opcodes *[256]opcode) ([]parsedOpcode, error) {
   64.99GB    64.99GB    198:	retScript := make([]parsedOpcode, 0, len(script))
         .          .    199:	for i := 0; i < len(script); {
         .          .    200:		instr := script[i]
         .          .    201:		op := &opcodes[instr]
         .          .    202:		pop := parsedOpcode{opcode: op}

etc. etc. nothing else significant

So these combined account for about 70% of allocations, but account for less than 10% of in-use space at runtime. In both cases, keeping some state so we reduce the number of allocations (and deallocations) would be beneficial. I bet replacing the immutable treaps in the dbcache would really help sync speed. LevelDB calls are also fine, they are all fairly lightweight - IMO the issue is the cache. More profiling probably needs to be done, but my guess is the dbcache.
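
As a concrete illustration of "keeping some state to reduce allocations": the parseScriptTemplate hot spot above allocates a fresh opcode slice for every script it parses. A pooled scratch buffer along these lines (a sketch with hypothetical names, not the actual txscript fix) avoids most of that churn:

```go
package main

import (
	"fmt"
	"sync"
)

// parsedOp is an illustrative stand-in for txscript's parsedOpcode.
type parsedOp struct {
	op byte
}

// opSlicePool hands out reusable slices so each script parse doesn't
// allocate (and later garbage-collect) its own backing array.
var opSlicePool = sync.Pool{
	New: func() interface{} {
		s := make([]parsedOp, 0, 256)
		return &s
	},
}

// parseScript tokenizes a script into a pooled slice. The caller must
// copy anything it wants to keep before calling releaseOps.
func parseScript(script []byte) *[]parsedOp {
	opsPtr := opSlicePool.Get().(*[]parsedOp)
	ops := (*opsPtr)[:0]
	for _, b := range script {
		ops = append(ops, parsedOp{op: b}) // simplified: ignores data pushes
	}
	*opsPtr = ops
	return opsPtr
}

func releaseOps(opsPtr *[]parsedOp) {
	opSlicePool.Put(opsPtr)
}

func main() {
	ops := parseScript([]byte{0x76, 0xa9, 0x14}) // OP_DUP OP_HASH160 OP_DATA_20...
	fmt.Println("opcodes parsed:", len(*ops))
	releaseOps(ops) // the backing array is reused by the next parse
}
```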

@matt24smith

I'm trying to set up my own node with btcd instead of Bitcoin Core, since I want to support a healthy blockchain ecosystem. But after two weeks trying to sync the mainnet blockchain, it's just way too slow. I can't recommend anyone use this software when it takes so long just to get started. Sync gets increasingly slow as the block height rises; see the attached image.

[Figure_1: chart of synced block height over time]

(The gap on Nov. 11-13 was from when I stopped syncing to mess with configs.)

@l0k18
Contributor

l0k18 commented Nov 26, 2019

I will be focusing on solving this problem in my project (github.com/p9c/pod), but thanks to Rjected I have also looked at some of the treap code as well as the script engine, and I am pretty sure both have serious garbage accumulation problems. Optimising them and aiming for zero runtime allocation will probably go a long way towards a solution.

Incidentally, one part of btcd I have worked with intensely is the CPU miner. It uses two mutexes, and it constantly stops and creates new goroutines. I built a library that lets me attach an RPC to the standard output and a small, dedicated worker (two threads, but one doing the primary work). I use two channels, stop and start; I thought I would need an atomic or compare-and-swap, but it turns out two channels and a for loop with a 'work mode' inner loop are enough, with each half of the loop draining the channel that is not relevant to it (the runner ignores the run signal channel and the pauser ignores the pause signal channel). The lock contention is obviously bad enough that a little over 10% of potential performance is chewed up by synchronisation.

From what I saw of the script engine and the database drivers/indexers, the programmers who wrote them clearly came from C++/Java: they rely mainly on mutexes, which are the slowest sync primitives, and channels appear only in places where those older, less network-focused languages would use async calls.

For bitcoin forks, especially small, neglected ones like parallelcoin, the sync rate is fine: 8 minutes on my Ryzen 5 1600/SSD/32 GB machine at a height of about 210,000, with barely 2 transactions per block on average. But even so, at 99,000 and again around 160,000 it bogs down badly and appears to be mainly garbage collecting, so with bitcoin's typical block payload I imagine the complexity of the provenance chain of tokens explodes, and that graph shows exactly this pattern.

I'm not sure where I will start, but I strongly suspect write amplification is also hiding in there, a performance problem well known in LevelDB, and to a degree RocksDB and BoltDB, and addressed in Badger, so the first step will be building a Badger driver. My guess is that as the number of transactions grows, write amplification becomes an issue because every time the database updates a value it has to rewrite the key as well, combined with the rising cost of the validations confirming that everything correctly chains back to a coinbase.

The second thing I expect to look at is the treaps. Some parts of btcd try to eliminate runtime allocations (there is at least one buffer freelist), but a lot of small byte slices are created and discarded. As tends to be the case with Go, a naive implementation (the mutexes I mentioned are a naive use of Go) does not take GC or thread scheduling into account, and when the bottlenecks get really bad you usually have to take over both memory management and scheduling from the Go runtime to get a better result. I already saw one clear example of this: with isolated processes connected via IPC pipes, using two channels and one ticker instead of 2 or 3 mutexes and 3 different tickers produced more than 10% more hashes.

If anyone following is interested, keep an eye on the repo I mentioned above, as over the next 6 months I will be focusing on optimizing everything. I am nearly finished implementing the beta spec for my fork, and I have tried to keep it accessible without stomping on too much of what already exists that differs for the chain I am working on. I am still a bit lost on how to enable full segwit support. You will see I have merged the btcwallet, btcd, btcutil and btcjson repositories into one, created a unified configuration system, and mostly finished robust handling of running wallet and node concurrently for a more conventional integrated node/wallet mode of operation. I understand some of the reasons for separating them so vigorously, but in my opinion, in the absence of a really good SPV implementation, doing so is a step backwards.

Based on watching CPU utilization and a little bit of profiling, I can see the CPU doing literally nothing for about 60-70% of the time during sync, so I am leaning towards synchronization being the biggest issue, garbage generation second, and third, write amplification from the database's log structure when updating metadata related to block nodes.

@l0k18
Contributor

l0k18 commented Nov 26, 2019

By the way, out of curiosity I tried disabling the addrindex and txindex during initial sync. Not only did it not seem to take any less time, the amount of metadata generated in the database folder looked exactly the same as when they were on. I am not sure whether it is ignoring my configuration change; I checked all the places where that setting is read and it definitely should not have been running the indexers.

@l0k18
Contributor

l0k18 commented Dec 13, 2019

I remember reading this one before. I am going to try it out with my fork at https://github.com/p9c/pod and I will report back if it gets that kind of result (it is a different, much smaller chain, but it still appears to have at least one big GC cleanup every 50-100,000 blocks, and that is at least part of the problem for sure, as one or two blocks end up taking around a minute to process).

@jakesylvestre
Collaborator

jakesylvestre commented Jan 22, 2020

Hey @l0k18, what ended up happening? If it worked I'd be happy to port it here, since this has become a much bigger issue.

@jakesylvestre
Collaborator

jakesylvestre commented Feb 1, 2020

So the ballast (10 GB) does appear to substantially reduce how often the garbage collector runs:
Ballast: [screenshot]
No ballast: [screenshot]

It looks like the greatest decrease here comes from:
Ballast without debug.SetGCPercent(10): [screenshot]
No ballast without debug.SetGCPercent(10): [screenshot]

This appears to result in a 3x increase in heap size:
10% GC limit with ballast: [screenshot]
10% GC limit with no ballast: [screenshot]
No limit, no ballast: [screenshot]
No limit: [screenshot]
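
For reference, the "ballast" here is the common Go trick of keeping a large, never-touched allocation alive so the GC's heap-growth target becomes much larger and collections run far less often during initial block download. A minimal sketch of the idea (not the exact code used for these measurements):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// A 10 GiB ballast. The slice is never written to, so on most
	// operating systems it mostly costs virtual address space rather
	// than resident memory, but it raises the heap size the GC aims
	// for, so collections trigger far less frequently.
	ballast := make([]byte, 10<<30)

	// ... run the node / initial sync workload here ...

	runtime.KeepAlive(ballast) // keep the ballast from being collected

	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	fmt.Printf("heap sys: %d MiB, GC cycles: %d\n", ms.HeapSys>>20, ms.NumGC)
}
```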

@jakesylvestre
Collaborator

I'll need to do a full sync to confirm, but it looks like a 3x increase in memory can get us a 3-4x increase in block sync speed. Ballast does not appear to improve allocation speed.

@l0k18
Contributor

l0k18 commented Feb 1, 2020

I have been working on a fork coin using this engine and I found it maxed out at about 2000 tx/s syncing from a node on my LAN. I suspect this may be a structural limit caused by signature verification performance. It has been a couple of months since I was working on it so my memory might be hazy, but I think it got the best performance with the GC percent at 100, which I believe is the Go default.

I haven't tried the ballast; I can confirm in a couple of days.

@jakesylvestre
Collaborator

Yeah, that's what I'm thinking. I think this is going to come down to a memory/CPU trade-off. Maybe we can add a config option for it here.

@Colman

Colman commented Sep 3, 2020

Has there been any update on this problem? With txindex and addrindex turned on, I'm syncing 1 block every minute! At this rate it will take over a month to sync. This makes the software completely unusable. If the problem isn't being worked on, are there any other stable libraries that offer the txindex and addrindex options?

@jakesylvestre
Collaborator

@Colman I've never seen it that slow. This sounds like it might be a disk i/o issue. Are you using some kind of NAS?

Alternatively - there are a few high-impact PRs that would go a long way toward speeding this up. The most notable is probably #1373. It'll depend on your disk i/o, but as @Roasbeef points out, that PR gets sync >24 hours. It would be great if you could help get that merged.

As far as other servers, it looks like you could check out this fork bitcoin/bitcoin#14053

@Colman

Colman commented Sep 5, 2020

Well, I'm using Amazon's EBS, which I think is considered NAS. I've tried their regular SSD and their high-performance one, and both yielded the same result. As far as I know, the latency between the machine and the EBS is <10 ms, which, IIRC, is the bottleneck, right? Either way, why does the disk affect it that much? As far as I know, you're only accessing it to modify the UTXO set, TX index, and address index, which can be done in batches.

Also, I checked out the @Roasbeef fork, and even if a UTXO cache were implemented, I still don't get why it's so slow without one.

@jakesylvestre
Collaborator

jakesylvestre commented Sep 5, 2020

In my experience it has a pretty big effect. We've got around 6 btcd nodes running in GCP and we had to switch them all over from pd-standard (7 days) to pd-ssd (4 days), and finally got down to 2 days with an NVMe persistent disk. I haven't benchmarked #1373 on these different disk types.

I don't remember the exact numbers, but bitcoind has a similar step function, with the sync times being much lower (though differing commensurately) across the board.

@Colman

Colman commented Sep 7, 2020

Okay, I looked into the specs for the AWS disk that I'm using and it's quite similar to yours. However, mine has been syncing for a month and still isn't done, so obviously something else is up. What are your CPU and RAM specs? And are the timelines you gave me measured with the txindex and addrindex settings turned on? I'd assume they would increase the sync time.

@begetan

begetan commented Sep 9, 2020

@Colman If you are using a general-purpose EBS volume (gp2) you only get 1500 IOPS and 250 MB/s baseline throughput for a 500 GB volume, and less for a smaller volume. This is far below any local SSD, so disk IO may be the root cause of the slowness in this case.

You could try provisioned-IOPS mode, which is more expensive but closer to SSD performance.

@Colman

Colman commented Sep 10, 2020

Okay, good to know, thanks. I tend to avoid the provisioned-IOPS volumes because the last time I used one I got charged about $120 just for the month, which is totally outrageous. They probably pay off that drive in less than 2 months.

@Colman

Colman commented Sep 10, 2020

Also, is there a way to make RPC calls while the blockchain is syncing? I would like to see if the tx index will be fast enough to use before I spend all this money on the drives to sync it.

@jakesylvestre
Collaborator

n1-cpu-4, but there is a fair bit of variance here in our LVM setup. I actually wrote a custom provisioner to combine the local disks into a single logical volume, and IOPS increases with the number of disks.


@asemoon

asemoon commented Jun 3, 2021

This problem still exists in 2021! It is taking me more than a day just to sync the blocks from 2017.

@BrannonKing

I want to confirm that my memory-pressure measurements today show that the cloneTreapNode is the chief culprit. The second worst is the ecdsa.Verify; there are a surprising number of make calls deep in big.nat. The third worst is the leveldb find function.

@pakar

pakar commented Jun 10, 2021

To anyone having performance issues when syncing: you could try setting (if you have enough memory, that is):
sigcachemaxsize=8000000

It seems to have dropped from 1 block per 60-90 seconds down to 10-20 seconds for 1-3 blocks, with a few hiccups here and there... unless there was some huge drop in block complexity around this height:
height 524776, 2018-05-28 09:58:25 +0200 CEST
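
For reference, sigcachemaxsize sets the maximum number of entries in btcd's signature verification cache: each entry records a signature check that already passed, so the expensive ECDSA verification can be skipped if the same check comes up again. A stripped-down sketch of the idea (not the real txscript.SigCache):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// sigCacheKey identifies one already-verified signature check.
type sigCacheKey struct {
	sigHash [32]byte // hash of the transaction data that was signed
	sig     string   // serialized signature (illustrative)
	pubKey  string   // serialized public key (illustrative)
}

// sigCache is a bounded set of signature checks known to be valid.
type sigCache struct {
	valid      map[sigCacheKey]struct{}
	maxEntries int
}

// exists reports whether this exact check has already passed.
func (c *sigCache) exists(k sigCacheKey) bool {
	_, ok := c.valid[k]
	return ok
}

// add records a check that just passed, evicting an arbitrary entry
// when the cache is full so it never grows past maxEntries.
func (c *sigCache) add(k sigCacheKey) {
	if len(c.valid) >= c.maxEntries {
		for old := range c.valid {
			delete(c.valid, old)
			break
		}
	}
	c.valid[k] = struct{}{}
}

func main() {
	cache := &sigCache{valid: make(map[sigCacheKey]struct{}), maxEntries: 8_000_000}
	k := sigCacheKey{sigHash: sha256.Sum256([]byte("tx data")), sig: "3045...", pubKey: "02ab..."}
	if !cache.exists(k) {
		// ... run the expensive ECDSA verification here, then remember it ...
		cache.add(k)
	}
	fmt.Println("cached checks:", len(cache.valid))
}
```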

Here is a summary of the behavior I can see on my system and a gut feeling about a possible culprit we might want to investigate; I'm too unfamiliar with the btcd codebase to do anything about it myself.

I'm doing an import on a 4-core 3.2 GHz machine with 32 GB RAM and 4x4 TB HDDs running in RAID 10.

The funny thing is that I started a bitcoind sync to the same filesystem and it fully synced in 2 days, while btcd is still in mid-2018 after over a week. ;)

One thing I have been thinking about is the lack of O_DIRECT filesystem access in Go. Databases usually bypass the buffer cache, at least for writes, to reduce the memory fragmentation of pending writes. I'm not sure that's related to the issue here, since my Dirty pages usually stay around 10-15 MB in /proc/meminfo. Currently the only application doing IO against these disks is btcd, with a few read spikes from bitcoind every 30-60 seconds or so. IO usage by btcd, as reported by iotop, stays between 85-100% with ~500 KB/s reads and ~4-6 MB/s writes, and IOPS for the RAID 10 jump between 70-140 tps in iostat. CPU usage of btcd jumps up and down all the time but never breaks 100%, so goroutines don't seem to be used that much?

If wanted, I can make a copy of the blockchain and run any tests you might want at this height. Just give me a shout.

@strusty

strusty commented Jul 23, 2021

@pakar I think it would be great if you uploaded a snapshot of the btcd chain for posterity, especially if this situation is not going to improve anytime soon.

@pakar

pakar commented Jul 23, 2021

@strusty I've only been syncing during idle time on the system, so it's not yet fully synced.
Currently at block height 575734 (2019-05-12), and the DB is at 301 GB and growing.
I can upload it to FTP / HTTP if someone wants to host it.

Create a new issue and reference me in it if I should do something.

@strusty

strusty commented Jul 24, 2021

@pakar I will set this up, and await the good word from you when you have it all synced. I will let you know here once it is ready.

@Rakeshnohria

Hi @pakar ,

As discussed with @strusty, we are ready with the FTP details. Please let us know how we can share them with you.

Thanks

@rtreffer

rtreffer commented Aug 9, 2021

Ok, same issue here, btcd is ridiculously slow.

This seems to be mostly goleveldb related (as others have stated). The following PR seems highly relevant: syndtr/goleveldb#338

bitcoind configures leveldb based on a memory flag. I am using WriteBuffer: defaultCacheSize, BlockCacheCapacity: 2*defaultCacheSize for similar results. It would be nice to have a flag for cache / memory usage to fine-tune this.

I am also using NoSync: true for the initial sync. And syncing from bitcoind on localhost.

That said, the initial sync is still days from finishing, but at least it is making decent progress.
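
For anyone who wants to experiment, this is roughly what that tuning looks like when opening a goleveldb database directly (a sketch against the syndtr/goleveldb API; btcd's ffldb backend wires its options up internally, so applying this to btcd means patching the database driver, and defaultCacheSize below is just an assumed figure):

```go
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

func main() {
	const defaultCacheSize = 256 << 20 // 256 MiB, an assumed value

	opts := &opt.Options{
		// Larger memtable: fewer, bigger flushes to disk.
		WriteBuffer: defaultCacheSize,
		// Larger block cache: more hot metadata served from memory.
		BlockCacheCapacity: 2 * defaultCacheSize,
		// Skip fsync on every write batch. Much faster during initial
		// sync, but an unclean shutdown can lose recent writes, so
		// only use it while catching up.
		NoSync: true,
	}

	db, err := leveldb.OpenFile("/tmp/example-metadata", opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Put([]byte("key"), []byte("value"), nil); err != nil {
		log.Fatal(err)
	}
	log.Println("leveldb opened with tuned cache sizes")
}
```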

@rkfg

rkfg commented Sep 14, 2022

Another idea: I ran btcd with GOGC=500 and my block processing speed went up from 60-70 to 200+ blocks per 10 seconds while downloading at around height 290000. This parameter makes the GC trigger only once the heap has grown 5x over what was live after the last collection; the default is 100 (it is a percentage). The value can be increased further if you have enough memory. You can watch the GC work by setting another env var, GODEBUG=gctrace=1.

It's not a solution of course but a mere workaround. A real solution is to stop producing so much garbage during IBD but I understand it might be harder than it sounds.

@l0k18
Contributor

l0k18 commented Sep 14, 2022

Another idea: I ran btcd with GOGC=500 and my block processing speed went up from 60-70 to 200+ blocks per 10 seconds while downloading at around height 290000. This parameter makes the GC trigger only once the heap has grown 5x over what was live after the last collection; the default is 100 (it is a percentage). The value can be increased further if you have enough memory. You can watch the GC work by setting another env var, GODEBUG=gctrace=1.

It's not a solution of course but a mere workaround. A real solution is to stop producing so much garbage during IBD but I understand it might be harder than it sounds.

An easy first step, then, would be to add a configuration parameter to set the GOGC value so users can enable a high value for initial sync. This is quite a trivial PR. If nobody else wants to take it and it is acceptable to the dev team, I'll put it together.

@l0k18
Contributor

l0k18 commented Oct 10, 2022

I have created PR #1899, which adds GOGC and GOMAXPROCS settings to the configuration system. This change can almost double the speed of initial sync and improves the rate at which the DB is updated with new blocks, which is important for miners.
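
For illustration, applying such settings from a config file boils down to two standard-library calls early in startup; a minimal sketch (not the PR's actual code):

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// applyRuntimeConfig applies user-supplied GC and parallelism settings.
// A value of 0 leaves the runtime's current setting untouched.
func applyRuntimeConfig(gcPercent, maxProcs int) {
	if gcPercent > 0 {
		// Equivalent to setting the GOGC environment variable: the GC
		// runs once the heap grows by this percentage over the live set.
		old := debug.SetGCPercent(gcPercent)
		fmt.Printf("GC percent: %d -> %d\n", old, gcPercent)
	}
	if maxProcs > 0 {
		old := runtime.GOMAXPROCS(maxProcs)
		fmt.Printf("GOMAXPROCS: %d -> %d\n", old, maxProcs)
	}
}

func main() {
	// e.g. values read from a config file or command-line flags.
	applyRuntimeConfig(500, runtime.NumCPU())
}
```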

@jettero777

jettero777 commented Jan 31, 2023

Hi, I'm doing the DB indexing (all blocks are already retrieved) on a VPS and it is going VERY slowly. It has been running for 3 months and there are still 2 years of blockchain left to process. I suspect the main reason is the HDD IOPS limit on the VPS. It has 20 GB of RAM.

I see around 100k small *.ldb files in ~/.btcd/data/mainnet/blocks_ffldb/metadata/ - is that the DB index?
Maybe it could be optimized somehow to increase the index file size, or even to keep it as a single blob on its own partition, because accessing tons of small files is very inefficient for an HDD and requires many IOPS.

It would also be good to have a trusted precomputed DB hosted somewhere to sync from; then only a hash check would be needed to verify that it is original and not tampered with.

@l0k18
Contributor

l0k18 commented Jan 31, 2023

ffldb is more HDD-friendly than RocksDB, Badger or Pogreb (it is built on a Go implementation of LevelDB, the same database used in Bitcoin Core).

Badger and Rocks are designed for SSDs; Badger's main difference is that it uses separate logs for keys and values. Pogreb would probably be a really good back end for chain data because it is optimized for infrequent writes and frequent reads, but IBD is write-heavy.

It's basically impossible to use btcd on a spinning disk. But two things might help you somewhat:

  1. Sync the chain on a fast SSD machine at home, then copy the .btcd folder across the network; it currently takes 687 GB, including the tx and address indexes.
  2. Disabling these indexes will greatly reduce the amount of bulk read/write activity during IBD; it has been a little while since I tampered with these settings, but I think if you turn them off it will delete the index data first, then replay the chain for the blocks it has so far, and then resume syncing.

The indexes consume around 150-200 GB of extra space, about 25-33% of the total data, and generating them requires a lot of random access. On the other hand, if you were to upload a .btcd folder with them enabled and generated up to a recent block height, they accelerate chain data queries and possibly also help speed up validating new blocks and transactions. Set the node to 'blocks only' to reduce the amount of work spent validating transactions in the mempool.

Even a KVM or similar type of VPS with an SSD will still have a substantial IOPS limit, but at least you won't also have the slow and highly variable seek latency of a spinning disk.

If your plans include using chain data and you can't upgrade to a VPS with a faster disk, you might want to consider running a full node at home, creating an SSH tunnel to the VPS, and running Neutrino on it, if your intention is to run lnd anyway. On my systems I get IBD in about 4 days on a laptop and even on ARM64 hardware running on NVMe M.2 disks.

If you have a little experience writing Go server software, it is not hard to create a launcher that spins up Neutrino as a stand-alone server (possibly someone has already done this), as Neutrino is written to run as a thread inside another application (lnd is too, but not quite as easy to work with). Indeed, if the idea is a website with access to chain data, you can shoehorn it into your existing Go-based web application. Possibly https://mempool.space already has this configuration option in its codebase.

The caveat is that you need to give Neutrino access to a btcd or Bitcoin Core instance with BIP 157 (neutrino mode) enabled for the cfilters, hence the suggestion to make that connection from a home server. Also, if you have ever used the Zap wallet you will know there are publicly accessible full nodes with neutrino mode enabled, set up by Zap's developers (zaphq.io); there may be others.

Regarding your last point, it is simple enough to do the initial sync this way, which is why I personally pause my btcd instance and dump the whole .btcd folder onto backup storage; if something happens to my node, it saves days of waiting to have a working node again. It's really a one-off operation and there is no real reason it should be added to the server; the node should manage to function OK, especially if it is set to 'blocks only', even on an HDD.

@jettero777

jettero777 commented Jan 31, 2023

@l0k18

Disabling these indexes will greatly reduce the amount of bulk read/write activity during IBD; it has been a little while since I tampered with these settings, but I think if you turn them off it will delete the index data first, then replay the chain for the blocks it has so far, and then resume syncing.

Yeah, I did it this way. I already downloaded all the blocks without indexing, which took more than a month, and now it is doing just the indexing, which has been running for 3 months and counting.

Thanks for the other suggestions, I will look into them.

@l0k18
Contributor

l0k18 commented Jan 31, 2023

@l0k18

Disabling these indexes will greatly reduce the amount of bulk read/write activity during IBD; it has been a little while since I tampered with these settings, but I think if you turn them off it will delete the index data first, then replay the chain for the blocks it has so far, and then resume syncing.

Yeah, I did it this way. I already downloaded all the blocks without indexing, which took more than a month, and now it is doing just the indexing, which has been running for 3 months and counting.

Thanks for the other suggestions, I will look into them.

Your question prompted me to try running a btcd instance off my 2.5" backup drive, and the results so far suggest that, for 5400 RPM HDDs at least, each block takes between about 2 and 13 minutes simply to validate while syncing. It is probably not as bad on a 7200 RPM or even 10k RPM disk, but I think the time is coming when it is simply not possible to run a node on a standard HDD because it can't keep up with the real-time block discovery rate.

Prior to this, a few months back, I had also used bcache on an instance, and it was able to do better than 1 block per minute with an SSD cache of around 128 GB. So if you find a VPS service that provides SSD-cached HDD, it is still definitely practical even with a disk as small as a 128 GB NVMe.

I think btcd would definitely benefit from a Badger database plugin, as Badger lets you use a large amount of memory, like the system you described, to keep the entire key index in memory and greatly accelerate key lookups. From the small amount of reading I have done, you can even store a small amount of value data in the keys to accelerate things further, though I doubt the driver interface specified in btcd gives much scope for taking advantage of this. Regardless, it would definitely make HDD usage more practical if faster access to the key table lets it validate blocks in under 2 minutes.

@jettero777

jettero777 commented Jan 31, 2023

The indexing speed right now is between 30 sec and 5 min per block. But I suspect it is hitting the IOPS limits set in the hypervisor for the VPS instance, as sometimes it does a block every 20-40 sec and sometimes 1-3+ min per block for long periods.
I think that after the initial sync the speed will be good enough to keep up with new blocks.

Maybe I will fiddle with filesystem tuning to enable a write-back in-memory file cache, but that can end up with corrupted data in case of an unclean shutdown.

@jagottsicher
Contributor

Actually, the problem still seems to persist.
Running a Bitcoin Core node (fully synced) and btcd in a container does not show an impact on IOPS bad enough to explain this slow behaviour.

As I am in China, I addpeer the local Bitcoin Core node, one node in Germany, and several others in China/Shanghai (near my location), but omit any dnsseed.

Still, all of this is frustratingly slow, and it is hard to grasp that in 2023 this is still an issue. Unfortunately, the workaround suggested by @Roasbeef does not work. If you try to compile addblock from /cmd you run directly into an unfulfillable dependency (leveldb missing), and if you try to go get everything you need, you end up in dependency hell like it was 2010. Basically, an ldb package is not available anymore and switching to an older branch does not work.

So I would be really, really thankful for

  • some kind of bootstrap of the current DB (2023), preferably as a torrent.
  • getting addblock working again so that I can try to build a new DB from a Bitcoin Core node. (Shall I open a new issue?)

@Roasbeef
Member

getting addblock working again so that I can try to build a new DB from a Bitcoin Core node. (Shall I open a new issue?)

Sure.
