dcrd/database/ldb/block.go
Dave Collins b6d426241d blockchain: Rework to use new db interface.
This commit is the first stage of several that are planned to convert
the blockchain package into a concurrent safe package that will
ultimately allow support for multi-peer download and concurrent chain
processing.  The goal is to update btcd proper after each step so it can
take advantage of the enhancements as they are developed.

In addition to the aforementioned benefit, this staged approach has been
chosen since it is absolutely critical to maintain consensus.
Separating the changes into several stages makes it easier for reviewers
to logically follow what is happening and therefore helps prevent
consensus bugs.  Naturally there are significant automated tests to help
prevent consensus issues as well.

The main focus of this stage is to convert the blockchain package to use
the new database interface and implement the chain-related functionality
which it no longer handles.  It also aims to improve efficiency in
various areas by making use of the new database and chain capabilities.

The following is an overview of the chain changes:

- Update to use the new database interface
- Add chain-related functionality that the old database used to handle
  - Main chain structure and state
  - Transaction spend tracking
- Implement a new pruned unspent transaction output (utxo) set
  - Provides efficient direct access to the unspent transaction outputs
  - Uses a domain specific compression algorithm that understands the
    standard transaction scripts in order to significantly compress them
  - Removes reliance on the transaction index and paves the way toward
    eventually enabling block pruning
- Modify the New function to accept a Config struct instead of
  individual parameters (see the sketch after this list)
- Replace the old TxStore type with a new UtxoViewpoint type that makes
  use of the new pruned utxo set
- Convert code to treat the new UtxoViewpoint as a rolling view that is
  used between connects and disconnects to improve efficiency
- Make best chain state always set when the chain instance is created
  - Remove now unnecessary logic for dealing with unset best state
- Make all exported functions concurrent safe
  - Currently using a single chain state lock as it provides a
    straightforward and easy-to-review path forward; however, this can
    be improved with more fine-grained locking
- Optimize various cases where full blocks were being loaded when only
  the header is needed to help reduce the I/O load
- Add the ability for callers to get a snapshot of the current best
  chain stats in a concurrent safe fashion
  - Does not block callers while new blocks are being processed
- Make error messages that reference transaction outputs consistently
  use <transaction hash>:<output index>
- Introduce a new AssertError type and convert internal consistency
  checks to use it
- Update tests and examples to reflect the changes
- Add a full suite of tests to ensure correct functionality of the new
  code
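
As a rough illustration of the API-level items above, the sketch below
shows how a caller might create a chain instance from a Config struct and
read the best chain state snapshot.  The package paths and the DB,
ChainParams, and BestSnapshot names are assumptions drawn from this
description rather than copied from the new code:

package sketch

import (
    "fmt"

    "github.com/decred/dcrd/blockchain"
    "github.com/decred/dcrd/chaincfg"
    "github.com/decred/dcrd/database"
)

// newChainAndSnapshot shows the Config-based constructor and the
// concurrent safe best state snapshot described above.  The field and
// method names are assumptions for illustration only.
func newChainAndSnapshot(db database.DB) error {
    // New accepts a Config struct instead of individual parameters.
    chain, err := blockchain.New(&blockchain.Config{
        DB:          db,
        ChainParams: &chaincfg.MainNetParams,
    })
    if err != nil {
        return err
    }

    // The snapshot does not block callers while new blocks are being
    // processed.
    best := chain.BestSnapshot()
    fmt.Printf("best block %v at height %d\n", best.Hash, best.Height)
    return nil
}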

The following is an overview of the btcd changes:

- Update to use the new database and chain interfaces
- Temporarily remove all code related to the transaction index
- Temporarily remove all code related to the address index
- Convert all code that uses transaction stores to use the new utxo
  view
- Rework several calls that required the block manager for safe
  concurrency to use the chain package directly now that it is
  concurrent safe
- Change all calls to obtain the best hash to use the new best state
  snapshot capability from the chain package
- Remove workaround for limits on fetching height ranges since the new
  database interface no longer imposes them
- Correct the gettxout RPC handler to return the best chain hash as
  opposed to the hash of the block the txout was found in
- Optimize various RPC handlers:
  - Change several of the RPC handlers to use the new chain snapshot
    capability to avoid needlessly loading data
  - Update several handlers to use new functionality to avoid accessing
    the block manager so they are able to return the data without
    blocking when the server is busy processing blocks
  - Update non-verbose getblock to avoid deserialization and
    serialization overhead
  - Update getblockheader to request the block height directly from
    chain and only load the header
  - Update getdifficulty to use the new cached data from chain
  - Update getmininginfo to use the new cached data from chain
  - Update non-verbose getrawtransaction to avoid deserialization and
    serialization overhead
  - Update gettxout to use the new utxo set rather than loading full
    transactions through the transaction index (see the sketch after
    this list)
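
As a rough sketch of the gettxout change in particular, the handler
conceptually moves from loading a full transaction through the
transaction index to asking chain for the unspent output directly.  The
FetchUtxoEntry, IsOutputSpent, and AmountByIndex names below are
assumptions used for illustration and may not match the actual API:

package sketch

import (
    "fmt"

    "github.com/decred/dcrd/blockchain"
    "github.com/decred/dcrd/chaincfg/chainhash"
)

// unspentAmount looks up a single output in the pruned utxo set instead
// of loading the whole transaction via the transaction index.  Method
// names on the chain and entry types are assumptions for illustration.
func unspentAmount(chain *blockchain.BlockChain, txHash *chainhash.Hash, index uint32) (int64, error) {
    entry, err := chain.FetchUtxoEntry(txHash)
    if err != nil {
        return 0, err
    }

    // A missing entry or an already spent output has nothing to report.
    // Note the <transaction hash>:<output index> form used by the
    // reworked error messages.
    if entry == nil || entry.IsOutputSpent(index) {
        return 0, fmt.Errorf("no unspent output for %v:%d", txHash, index)
    }

    return entry.AmountByIndex(index), nil
}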

The following is an overview of the utility changes:

- Update addblock to use the new database and chain interfaces
- Update findcheckpoint to use the new database and chain interfaces
- Remove the dropafter utility which is no longer supported

NOTE: The transaction index and address index will be reimplemented in
another commit.
2016-08-18 15:42:18 -04:00

// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

package ldb

import (
    "bytes"
    "encoding/binary"

    "github.com/btcsuite/goleveldb/leveldb"
    "github.com/decred/dcrd/chaincfg/chainhash"
    "github.com/decred/dcrd/database"
    "github.com/decred/dcrd/wire"
    "github.com/decred/dcrutil"
)
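
// This file implements block storage and retrieval for the leveldb-backed
// database.Db implementation provided by the ldb package.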

// FetchBlockBySha returns the block for the given hash as a dcrutil.Block.
func (db *LevelDb) FetchBlockBySha(sha *chainhash.Hash) (blk *dcrutil.Block, err error) {
    db.dbLock.Lock()
    defer db.dbLock.Unlock()

    return db.fetchBlockBySha(sha)
}

// fetchBlockBySha returns the block for the given hash as a dcrutil.Block.
// Must be called with the db lock held.
func (db *LevelDb) fetchBlockBySha(sha *chainhash.Hash) (blk *dcrutil.Block, err error) {
    buf, height, err := db.fetchSha(sha)
    if err != nil {
        return
    }

    blk, err = dcrutil.NewBlockFromBytes(buf)
    if err != nil {
        return
    }
    blk.SetHeight(height)

    return
}

// FetchBlockHeightBySha returns the block height for the given hash. This is
// part of the database.Db interface implementation.
func (db *LevelDb) FetchBlockHeightBySha(sha *chainhash.Hash) (int64, error) {
    db.dbLock.Lock()
    defer db.dbLock.Unlock()

    return db.getBlkLoc(sha)
}

// FetchBlockHeaderBySha returns the block header for the given hash.
func (db *LevelDb) FetchBlockHeaderBySha(sha *chainhash.Hash) (bh *wire.BlockHeader, err error) {
    db.dbLock.Lock()
    defer db.dbLock.Unlock()

    // Read the raw block from the database.
    buf, _, err := db.fetchSha(sha)
    if err != nil {
        return nil, err
    }

    // Only deserialize the header portion of the raw block data.
    var blockHeader wire.BlockHeader
    err = blockHeader.Deserialize(bytes.NewReader(buf))
    if err != nil {
        return nil, err
    }
    bh = &blockHeader

    return bh, nil
}

// getBlkLoc returns the block height for the given block hash.
// Must be called with the db lock held.
func (db *LevelDb) getBlkLoc(sha *chainhash.Hash) (int64, error) {
    key := shaBlkToKey(sha)

    data, err := db.lDb.Get(key, db.ro)
    if err != nil {
        if err == leveldb.ErrNotFound {
            err = database.ErrBlockShaMissing
        }
        return 0, err
    }

    // Deserialize the little endian block height.
    blkHeight := binary.LittleEndian.Uint64(data)

    return int64(blkHeight), nil
}

// getBlkByHeight returns the block hash and raw block data for the block at
// the given height.
// Must be called with the db lock held.
func (db *LevelDb) getBlkByHeight(blkHeight int64) (rsha *chainhash.Hash, rbuf []byte, err error) {
    var blkVal []byte

    key := int64ToKey(blkHeight)

    blkVal, err = db.lDb.Get(key, db.ro)
    if err != nil {
        log.Tracef("failed to find height %v", blkHeight)
        return
    }

    var sha chainhash.Hash
    sha.SetBytes(blkVal[0:32])

    blockdata := make([]byte, len(blkVal[32:]))
    copy(blockdata, blkVal[32:])

    return &sha, blockdata, nil
}

// getBlk returns the block height and raw block data for the given block
// hash.
// Must be called with the db lock held.
func (db *LevelDb) getBlk(sha *chainhash.Hash) (rblkHeight int64, rbuf []byte, err error) {
    var blkHeight int64

    blkHeight, err = db.getBlkLoc(sha)
    if err != nil {
        return
    }

    var buf []byte
    _, buf, err = db.getBlkByHeight(blkHeight)
    if err != nil {
        return
    }

    return blkHeight, buf, nil
}
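
// setBlk stores the given raw block data under two keys: the block hash key
// maps to the 8-byte little endian block height, and the height key maps to
// the 32-byte block hash followed by the raw serialized block.  getBlkLoc
// and getBlkByHeight above read these entries back.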
func (db *LevelDb) setBlk(sha *chainhash.Hash, blkHeight int64, buf []byte) {
    // Serialize the block height as a little endian uint64.
    var lw [8]byte
    binary.LittleEndian.PutUint64(lw[0:8], uint64(blkHeight))

    shaKey := shaBlkToKey(sha)
    blkKey := int64ToKey(blkHeight)

    blkVal := make([]byte, len(sha)+len(buf))
    copy(blkVal[0:], sha[:])
    copy(blkVal[len(sha):], buf)

    db.lBatch().Put(shaKey, lw[:])
    db.lBatch().Put(blkKey, blkVal)
}

// insertBlockData stores a block hash and its associated data block with a
// previous sha of `prevSha'.
// Must be called with the db lock held.
func (db *LevelDb) insertBlockData(sha *chainhash.Hash, prevSha *chainhash.Hash, buf []byte) (int64, error) {
    oBlkHeight, err := db.getBlkLoc(prevSha)
    if err != nil {
        // check current block count
        // if count != 0 {
        //	err = database.PrevShaMissing
        //	return
        // }
        oBlkHeight = -1
        if db.nextBlock != 0 {
            return 0, err
        }
    }

    // TODO(drahn) check curfile filesize, increment curfile if this puts it over
    blkHeight := oBlkHeight + 1

    db.setBlk(sha, blkHeight, buf)

    // update the last block cache
    db.lastBlkShaCached = true
    db.lastBlkSha = *sha
    db.lastBlkIdx = blkHeight
    db.nextBlock = blkHeight + 1

    return blkHeight, nil
}

// fetchSha returns the raw block data and block height for the given block
// hash.  Must be called with the db lock held.
func (db *LevelDb) fetchSha(sha *chainhash.Hash) (rbuf []byte,
    rblkHeight int64, err error) {
    var blkHeight int64
    var buf []byte

    blkHeight, buf, err = db.getBlk(sha)
    if err != nil {
        return
    }

    return buf, blkHeight, nil
}

// ExistsSha looks up the given block hash and returns true if it is present
// in the database.
func (db *LevelDb) ExistsSha(sha *chainhash.Hash) (bool, error) {
    db.dbLock.Lock()
    defer db.dbLock.Unlock()

    return db.blkExistsSha(sha)
}

// blkExistsSha looks up the given block hash and returns true if it is
// present in the database.
// Must be called with the db lock held.
func (db *LevelDb) blkExistsSha(sha *chainhash.Hash) (bool, error) {
    key := shaBlkToKey(sha)

    return db.lDb.Has(key, db.ro)
}

// FetchBlockShaByHeight returns a block hash based on its height in the
// block chain.
func (db *LevelDb) FetchBlockShaByHeight(height int64) (sha *chainhash.Hash, err error) {
    db.dbLock.Lock()
    defer db.dbLock.Unlock()

    return db.fetchBlockShaByHeight(height)
}

// fetchBlockShaByHeight returns a block hash based on its height in the
// block chain.
// Must be called with the db lock held.
func (db *LevelDb) fetchBlockShaByHeight(height int64) (rsha *chainhash.Hash, err error) {
    key := int64ToKey(height)

    blkVal, err := db.lDb.Get(key, db.ro)
    if err != nil {
        log.Tracef("failed to find height %v", height)
        return
    }

    var sha chainhash.Hash
    sha.SetBytes(blkVal[0:32])

    return &sha, nil
}

// FetchHeightRange looks up a range of blocks by the start and ending
// heights. Fetch is inclusive of the start height and exclusive of the
// ending height. To fetch all hashes from the start height until no
// more are present, use the special id `AllShas'.
func (db *LevelDb) FetchHeightRange(startHeight, endHeight int64) (rshalist []chainhash.Hash, err error) {
    db.dbLock.Lock()
    defer db.dbLock.Unlock()

    var endidx int64
    if endHeight == database.AllShas {
        endidx = startHeight + 500
    } else {
        endidx = endHeight
    }

    shalist := make([]chainhash.Hash, 0, endidx-startHeight)
    for height := startHeight; height < endidx; height++ {
        // TODO(drahn) fix blkFile from height
        key := int64ToKey(height)

        blkVal, lerr := db.lDb.Get(key, db.ro)
        if lerr != nil {
            break
        }

        var sha chainhash.Hash
        sha.SetBytes(blkVal[0:32])
        shalist = append(shalist, sha)
    }

    if err != nil {
        return
    }
    //log.Tracef("FetchIdxRange idx %v %v returned %v shas err %v", startHeight, endHeight, len(shalist), err)

    return shalist, nil
}

// NewestSha returns the hash and block height of the most recent (end) block of
// the block chain. It will return the zero hash, -1 for the block height, and
// no error (nil) if there are not any blocks in the database yet.
func (db *LevelDb) NewestSha() (rsha *chainhash.Hash, rblkid int64, err error) {
    db.dbLock.Lock()
    defer db.dbLock.Unlock()

    if db.lastBlkIdx == -1 {
        return &chainhash.Hash{}, -1, nil
    }

    sha := db.lastBlkSha
    return &sha, db.lastBlkIdx, nil
}

// checkAddrIndexVersion returns an error if the address index version stored
// in the database is less than the current version, or if it doesn't exist.
// This function is used on startup to signal OpenDB to drop the address index
// if it's in an old, incompatible format.
func (db *LevelDb) checkAddrIndexVersion() error {
    db.dbLock.Lock()
    defer db.dbLock.Unlock()

    data, err := db.lDb.Get(addrIndexVersionKey, db.ro)
    if err != nil {
        return database.ErrAddrIndexDoesNotExist
    }

    indexVersion := binary.LittleEndian.Uint16(data)
    if indexVersion != uint16(addrIndexCurrentVersion) {
        return database.ErrAddrIndexDoesNotExist
    }

    return nil
}

// fetchAddrIndexTip returns the last block height and block sha to be indexed.
// Meta-data about the address tip is currently cached in memory, and will be
// updated accordingly by functions that modify the state. This function is
// used on start up to load the info into memory. Callers will use the public
// version of this function below, which returns our cached copy.
func (db *LevelDb) fetchAddrIndexTip() (*chainhash.Hash, int64, error) {
    db.dbLock.Lock()
    defer db.dbLock.Unlock()

    data, err := db.lDb.Get(addrIndexMetaDataKey, db.ro)
    if err != nil {
        return &chainhash.Hash{}, -1, database.ErrAddrIndexDoesNotExist
    }

    var blkSha chainhash.Hash
    blkSha.SetBytes(data[0:32])

    blkHeight := binary.LittleEndian.Uint64(data[32:])

    return &blkSha, int64(blkHeight), nil
}

// FetchAddrIndexTip returns the hash and block height of the most recent
// block whose transactions have been indexed by address. It will return
// ErrAddrIndexDoesNotExist along with a zero hash, and -1 if the
// addrindex hasn't yet been built up.
func (db *LevelDb) FetchAddrIndexTip() (*chainhash.Hash, int64, error) {
    db.dbLock.Lock()
    defer db.dbLock.Unlock()

    if db.lastAddrIndexBlkIdx == -1 {
        return &chainhash.Hash{}, -1, database.ErrAddrIndexDoesNotExist
    }

    sha := db.lastAddrIndexBlkSha
    return &sha, db.lastAddrIndexBlkIdx, nil
}