dcrd/database/ldb/tx.go
Dave Collins b6d426241d blockchain: Rework to use new db interface.
This commit is the first stage of several that are planned to convert
the blockchain package into a concurrent safe package that will
ultimately allow support for multi-peer download and concurrent chain
processing.  The goal is to update btcd proper after each step so it can
take advantage of the enhancements as they are developed.

In addition to the aforementioned benefit, this staged approach has been
chosen since it is absolutely critical to maintain consensus.
Separating the changes into several stages makes it easier for reviewers
to logically follow what is happening and therefore helps prevent
consensus bugs.  Naturally there are significant automated tests to help
prevent consensus issues as well.

The main focus of this stage is to convert the blockchain package to use
the new database interface and implement the chain-related functionality
which it no longer handles.  It also aims to improve efficiency in
various areas by making use of the new database and chain capabilities.

The following is an overview of the chain changes:

- Update to use the new database interface
- Add chain-related functionality that the old database used to handle
  - Main chain structure and state
  - Transaction spend tracking
- Implement a new pruned unspent transaction output (utxo) set
  - Provides efficient direct access to the unspent transaction outputs
  - Uses a domain specific compression algorithm that understands the
    standard transaction scripts in order to significantly compress them
  - Removes reliance on the transaction index and paves the way toward
    eventually enabling block pruning
- Modify the New function to accept a Config struct instead of
  individual parameters
- Replace the old TxStore type with a new UtxoViewpoint type that makes
  use of the new pruned utxo set
- Convert code to treat the new UtxoViewpoint as a rolling view that is
  used between connects and disconnects to improve efficiency
- Make best chain state always set when the chain instance is created
  - Remove now unnecessary logic for dealing with unset best state
- Make all exported functions concurrent safe
  - Currently using a single chain state lock since it provides a
    straightforward and easy to review path forward; however, this can
    be improved with more fine-grained locking
- Optimize various cases where full blocks were being loaded when only
  the header is needed to help reduce the I/O load
- Add the ability for callers to get a snapshot of the current best
  chain stats in a concurrent safe fashion
  - Does not block callers while new blocks are being processed
- Make error messages that reference transaction outputs consistently
  use <transaction hash>:<output index>
- Introduce a new AssertError type and convert internal consistency
  checks to use it
- Update tests and examples to reflect the changes
- Add a full suite of tests to ensure correct functionality of the new
  code
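The transaction spend tracking mentioned above is represented in the leveldb code below as a per-output bitmap appended to each tx record (one bit per output, least significant bit first). A minimal sketch of decoding such a bitmap; `decodeSpentFlags` is an illustrative helper, not part of the package API:

```go
package main

import "fmt"

// decodeSpentFlags expands a per-output spent bitmap (one bit per
// transaction output, least significant bit first) into a []bool,
// mirroring the btxspent decoding done in FetchTxByShaList below.
func decodeSpentFlags(spentBuf []byte, numOutputs int) []bool {
	spent := make([]bool, numOutputs)
	for idx := 0; idx < numOutputs; idx++ {
		byteIdx := idx / 8
		byteOff := uint(idx % 8)
		spent[idx] = spentBuf[byteIdx]&(byte(1)<<byteOff) != 0
	}
	return spent
}

func main() {
	// 0x05 = 0b00000101: outputs 0 and 2 spent, outputs 1 and 3 unspent.
	fmt.Println(decodeSpentFlags([]byte{0x05}, 4))
}
```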

The following is an overview of the btcd changes:

- Update to use the new database and chain interfaces
- Temporarily remove all code related to the transaction index
- Temporarily remove all code related to the address index
- Convert all code that uses transaction stores to use the new utxo
  view
- Rework several calls that required the block manager for safe
  concurrency to use the chain package directly now that it is
  concurrent safe
- Change all calls to obtain the best hash to use the new best state
  snapshot capability from the chain package
- Remove workaround for limits on fetching height ranges since the new
  database interface no longer imposes them
- Correct the gettxout RPC handler to return the best chain hash as
  opposed to the hash the txout was found in
- Optimize various RPC handlers:
  - Change several of the RPC handlers to use the new chain snapshot
    capability to avoid needlessly loading data
  - Update several handlers to use new functionality to avoid accessing
    the block manager so they are able to return the data without
    blocking when the server is busy processing blocks
  - Update non-verbose getblock to avoid deserialization and
    serialization overhead
  - Update getblockheader to request the block height directly from
    chain and only load the header
  - Update getdifficulty to use the new cached data from chain
  - Update getmininginfo to use the new cached data from chain
  - Update non-verbose getrawtransaction to avoid deserialization and
    serialization overhead
  - Update gettxout to use the new utxo store versus loading
    full transactions using the transaction index
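The non-verbose getblock and getrawtransaction optimizations above amount to hex-encoding the already-serialized bytes instead of deserializing and re-serializing them. A minimal sketch of the idea (the helper name is illustrative, not btcd's actual code):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// rawBlockHex produces the hex string a non-verbose getblock reply
// needs directly from the serialized block bytes, skipping the
// deserialize/re-serialize round trip entirely.
func rawBlockHex(serializedBlock []byte) string {
	return hex.EncodeToString(serializedBlock)
}

func main() {
	fmt.Println(rawBlockHex([]byte{0x01, 0x00, 0xab}))
}
```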

The following is an overview of the utility changes:

- Update addblock to use the new database and chain interfaces
- Update findcheckpoint to use the new database and chain interfaces
- Remove the dropafter utility which is no longer supported

NOTE: The transaction index and address index will be reimplemented in
another commit.
2016-08-18 15:42:18 -04:00

// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

package ldb

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"

	"github.com/btcsuite/golangcrypto/ripemd160"
	"github.com/btcsuite/goleveldb/leveldb"
	"github.com/btcsuite/goleveldb/leveldb/iterator"
	"github.com/btcsuite/goleveldb/leveldb/util"
	"github.com/decred/dcrd/chaincfg/chainhash"
	"github.com/decred/dcrd/database"
	"github.com/decred/dcrd/wire"
	"github.com/decred/dcrutil"
)
const (
	// Each address index key is 35 bytes:
	// ----------------------------------------------------------
	// | Prefix  | Hash160  | BlkHeight | Tx Offset | Tx Size   |
	// ----------------------------------------------------------
	// | 3 bytes | 20 bytes | 4 bytes   | 4 bytes   | 4 bytes   |
	// ----------------------------------------------------------
	addrIndexKeyLength = 3 + ripemd160.Size + 4 + 4 + 4

	batchDeleteThreshold = 10000

	addrIndexCurrentVersion = 1
)

var addrIndexMetaDataKey = []byte("addrindex")

// All address index entries share this prefix to facilitate the use of
// iterators.
var addrIndexKeyPrefix = []byte("a+-")

// The address index version is required to drop/rebuild the address index if
// the version is older than the current one, as the format of the index may
// have changed.  This is true when going from no version to version 1, since
// the address index is stored as big endian in version 1 and little endian in
// the original code.  The version is stored as two bytes, little endian (to
// match all the code but the index).
var addrIndexVersionKey = []byte("addrindexversion")

type txUpdateObj struct {
	txSha     *chainhash.Hash
	blkHeight int64
	blkIndex  uint32
	txoff     int
	txlen     int
	ntxout    int
	spentData []byte
	delete    bool
}

type spentTx struct {
	blkHeight int64
	blkIndex  uint32
	txoff     int
	txlen     int
	numTxO    int
	delete    bool
}

type spentTxUpdate struct {
	txl    []*spentTx
	delete bool
}
// InsertTx inserts a tx hash and its associated data into the database.
func (db *LevelDb) InsertTx(txsha *chainhash.Hash, height int64, idx uint32, txoff int, txlen int, spentbuf []byte) (err error) {
	db.dbLock.Lock()
	defer db.dbLock.Unlock()

	return db.insertTx(txsha, height, idx, txoff, txlen, spentbuf)
}

// insertTx inserts a tx hash and its associated data into the database.
// Must be called with the db lock held.
func (db *LevelDb) insertTx(txSha *chainhash.Hash, height int64, idx uint32, txoff int, txlen int, spentbuf []byte) (err error) {
	var txU txUpdateObj
	txU.txSha = txSha
	txU.blkHeight = height
	txU.blkIndex = idx
	txU.txoff = txoff
	txU.txlen = txlen
	txU.spentData = spentbuf

	db.txUpdateMap[*txSha] = &txU

	return nil
}

// formatTx generates the value buffer for the Tx db.
func (db *LevelDb) formatTx(txu *txUpdateObj) []byte {
	blkHeight := uint64(txu.blkHeight)
	txOff := uint32(txu.txoff)
	txLen := uint32(txu.txlen)
	spentbuf := txu.spentData

	txW := make([]byte, 20+len(spentbuf))
	binary.LittleEndian.PutUint64(txW[0:8], blkHeight)
	binary.LittleEndian.PutUint32(txW[8:12], txu.blkIndex)
	binary.LittleEndian.PutUint32(txW[12:16], txOff)
	binary.LittleEndian.PutUint32(txW[16:20], txLen)
	copy(txW[20:], spentbuf)

	return txW
}
// getTxData fetches the tx entry for the given hash and returns its block
// height, block index, tx offset, tx length, and spent buffer.
func (db *LevelDb) getTxData(txsha *chainhash.Hash) (int64, uint32, int, int, []byte, error) {
	key := shaTxToKey(txsha)
	buf, err := db.lDb.Get(key, db.ro)
	if err != nil {
		return 0, 0, 0, 0, nil, err
	}

	blkHeight := binary.LittleEndian.Uint64(buf[0:8])
	blkIndex := binary.LittleEndian.Uint32(buf[8:12])
	txOff := binary.LittleEndian.Uint32(buf[12:16])
	txLen := binary.LittleEndian.Uint32(buf[16:20])

	spentBuf := make([]byte, len(buf)-20)
	copy(spentBuf, buf[20:])

	return int64(blkHeight), blkIndex, int(txOff), int(txLen), spentBuf, nil
}

// getTxFullySpent returns the list of fully spent tx records for the given
// hash from the fully spent pool.
func (db *LevelDb) getTxFullySpent(txsha *chainhash.Hash) ([]*spentTx, error) {
	var badTxList, spentTxList []*spentTx

	key := shaSpentTxToKey(txsha)
	buf, err := db.lDb.Get(key, db.ro)
	if err == leveldb.ErrNotFound {
		return badTxList, database.ErrTxShaMissing
	} else if err != nil {
		return badTxList, err
	}

	txListLen := len(buf) / 24
	spentTxList = make([]*spentTx, txListLen)
	for i := range spentTxList {
		offset := i * 24
		blkHeight := binary.LittleEndian.Uint64(buf[offset : offset+8])
		blkIndex := binary.LittleEndian.Uint32(buf[offset+8 : offset+12])
		txOff := binary.LittleEndian.Uint32(buf[offset+12 : offset+16])
		txLen := binary.LittleEndian.Uint32(buf[offset+16 : offset+20])
		numTxO := binary.LittleEndian.Uint32(buf[offset+20 : offset+24])

		sTx := spentTx{
			blkHeight: int64(blkHeight),
			blkIndex:  blkIndex,
			txoff:     int(txOff),
			txlen:     int(txLen),
			numTxO:    int(numTxO),
		}
		spentTxList[i] = &sTx
	}

	return spentTxList, nil
}
// formatTxFullySpent generates the value buffer for a fully spent tx list.
func (db *LevelDb) formatTxFullySpent(sTxList []*spentTx) []byte {
	txW := make([]byte, 24*len(sTxList))
	for i, sTx := range sTxList {
		blkHeight := uint64(sTx.blkHeight)
		blkIndex := sTx.blkIndex
		txOff := uint32(sTx.txoff)
		txLen := uint32(sTx.txlen)
		numTxO := uint32(sTx.numTxO)

		offset := i * 24
		binary.LittleEndian.PutUint64(txW[offset:offset+8], blkHeight)
		binary.LittleEndian.PutUint32(txW[offset+8:offset+12], blkIndex)
		binary.LittleEndian.PutUint32(txW[offset+12:offset+16], txOff)
		binary.LittleEndian.PutUint32(txW[offset+16:offset+20], txLen)
		binary.LittleEndian.PutUint32(txW[offset+20:offset+24], numTxO)
	}

	return txW
}

// ExistsTxSha returns whether the given tx sha exists in the database.
func (db *LevelDb) ExistsTxSha(txsha *chainhash.Hash) (bool, error) {
	db.dbLock.Lock()
	defer db.dbLock.Unlock()

	return db.existsTxSha(txsha)
}

// existsTxSha returns whether the given tx sha exists in the database.
// Must be called with the db lock held.
func (db *LevelDb) existsTxSha(txSha *chainhash.Hash) (bool, error) {
	key := shaTxToKey(txSha)
	return db.lDb.Has(key, db.ro)
}
// FetchTxByShaList returns the most recent version of each transaction in
// the passed list, whether it is fully spent or not.
func (db *LevelDb) FetchTxByShaList(txShaList []*chainhash.Hash) []*database.TxListReply {
	db.dbLock.Lock()
	defer db.dbLock.Unlock()

	// Until the fully spent separation of tx is complete, this is identical
	// to FetchUnSpentTxByShaList.
	replies := make([]*database.TxListReply, len(txShaList))
	for i, txsha := range txShaList {
		tx, blockSha, height, blkIdx, txspent, err := db.fetchTxDataBySha(txsha)
		btxspent := []bool{}
		if err == nil {
			btxspent = make([]bool, len(tx.TxOut))
			for idx := range tx.TxOut {
				byteidx := idx / 8
				byteoff := uint(idx % 8)
				btxspent[idx] = (txspent[byteidx] & (byte(1) << byteoff)) != 0
			}
		}
		if err == database.ErrTxShaMissing {
			// If the unspent pool did not have the tx, look in the
			// fully spent pool (only the last instance).
			sTxList, fSerr := db.getTxFullySpent(txsha)
			if fSerr == nil && len(sTxList) != 0 {
				idx := len(sTxList) - 1
				stx := sTxList[idx]

				height = stx.blkHeight
				blkIdx = stx.blkIndex

				tx, blockSha, _, _, err = db.fetchTxDataByLoc(
					stx.blkHeight, stx.txoff, stx.txlen, []byte{})
				if err == nil {
					btxspent = make([]bool, len(tx.TxOut))
					for i := range btxspent {
						btxspent[i] = true
					}
				}
			}
		}
		txlre := database.TxListReply{Sha: txsha, Tx: tx, BlkSha: blockSha, Height: height, Index: blkIdx, TxSpent: btxspent, Err: err}
		replies[i] = &txlre
	}

	return replies
}

// FetchUnSpentTxByShaList, given an array of hashes, looks up the
// transactions and returns them in a TxListReply array.
func (db *LevelDb) FetchUnSpentTxByShaList(txShaList []*chainhash.Hash) []*database.TxListReply {
	db.dbLock.Lock()
	defer db.dbLock.Unlock()

	replies := make([]*database.TxListReply, len(txShaList))
	for i, txsha := range txShaList {
		tx, blockSha, height, blkIdx, txspent, err := db.fetchTxDataBySha(txsha)
		btxspent := []bool{}
		if err == nil {
			btxspent = make([]bool, len(tx.TxOut))
			for idx := range tx.TxOut {
				byteidx := idx / 8
				byteoff := uint(idx % 8)
				btxspent[idx] = (txspent[byteidx] & (byte(1) << byteoff)) != 0
			}
		}
		txlre := database.TxListReply{Sha: txsha, Tx: tx, BlkSha: blockSha, Height: height, Index: blkIdx, TxSpent: btxspent, Err: err}
		replies[i] = &txlre
	}

	return replies
}
// fetchTxDataBySha returns several pieces of data regarding the given sha.
func (db *LevelDb) fetchTxDataBySha(txsha *chainhash.Hash) (rtx *wire.MsgTx, rblksha *chainhash.Hash, rheight int64, ridx uint32, rtxspent []byte, err error) {
	var blkHeight int64
	var blkIndex uint32
	var txspent []byte
	var txOff, txLen int

	blkHeight, blkIndex, txOff, txLen, txspent, err = db.getTxData(txsha)
	if err != nil {
		if err == leveldb.ErrNotFound {
			err = database.ErrTxShaMissing
		}
		return
	}

	mtx, hash, _, _, err := db.fetchTxDataByLoc(blkHeight, txOff, txLen, txspent)
	return mtx, hash, blkHeight, blkIndex, txspent, err
}

// fetchTxDataByLoc returns several pieces of data regarding the given tx
// located by the block/offset/size location.
func (db *LevelDb) fetchTxDataByLoc(blkHeight int64, txOff int, txLen int, txspent []byte) (rtx *wire.MsgTx, rblksha *chainhash.Hash, rheight int64, rtxspent []byte, err error) {
	var blksha *chainhash.Hash
	var blkbuf []byte

	blksha, blkbuf, err = db.getBlkByHeight(blkHeight)
	if err != nil {
		if err == leveldb.ErrNotFound {
			err = database.ErrTxShaMissing
		}
		return
	}

	if len(blkbuf) < txOff+txLen {
		log.Warnf("block buffer overrun while looking for tx: "+
			"block %v %v txoff %v txlen %v", blkHeight, blksha, txOff, txLen)
		err = database.ErrDbInconsistency
		return
	}

	rbuf := bytes.NewReader(blkbuf[txOff : txOff+txLen])

	var tx wire.MsgTx
	err = tx.Deserialize(rbuf)
	if err != nil {
		log.Warnf("unable to decode tx block %v %v txoff %v txlen %v",
			blkHeight, blksha, txOff, txLen)
		err = database.ErrDbInconsistency
		return
	}

	return &tx, blksha, blkHeight, txspent, nil
}
// FetchTxBySha returns some data for the given tx sha.
func (db *LevelDb) FetchTxBySha(txsha *chainhash.Hash) ([]*database.TxListReply, error) {
	db.dbLock.Lock()
	defer db.dbLock.Unlock()

	replylen := 0
	replycnt := 0

	tx, blksha, height, blkIdx, txspent, txerr := db.fetchTxDataBySha(txsha)
	if txerr == nil {
		replylen++
	} else {
		if txerr != database.ErrTxShaMissing {
			return []*database.TxListReply{}, txerr
		}
	}

	sTxList, fSerr := db.getTxFullySpent(txsha)
	if fSerr != nil {
		if fSerr != database.ErrTxShaMissing {
			return []*database.TxListReply{}, fSerr
		}
	} else {
		replylen += len(sTxList)
	}

	replies := make([]*database.TxListReply, replylen)

	if fSerr == nil {
		for _, stx := range sTxList {
			tx, blksha, _, _, err := db.fetchTxDataByLoc(
				stx.blkHeight, stx.txoff, stx.txlen, []byte{})
			if err != nil {
				if err != leveldb.ErrNotFound {
					return []*database.TxListReply{}, err
				}
				continue
			}

			btxspent := make([]bool, len(tx.TxOut))
			for i := range btxspent {
				btxspent[i] = true
			}
			txlre := database.TxListReply{Sha: txsha, Tx: tx, BlkSha: blksha, Height: stx.blkHeight, Index: stx.blkIndex, TxSpent: btxspent, Err: nil}
			replies[replycnt] = &txlre
			replycnt++
		}
	}

	if txerr == nil {
		btxspent := make([]bool, len(tx.TxOut))
		for idx := range tx.TxOut {
			byteidx := idx / 8
			byteoff := uint(idx % 8)
			btxspent[idx] = (txspent[byteidx] & (byte(1) << byteoff)) != 0
		}
		txlre := database.TxListReply{Sha: txsha, Tx: tx, BlkSha: blksha, Height: height, Index: blkIdx, TxSpent: btxspent, Err: nil}
		replies[replycnt] = &txlre
		replycnt++
	}

	return replies, nil
}
// addrIndexToKey serializes the passed txAddrIndex for storage within the DB.
// We want to use BigEndian to store at least block height and TX offset
// in order to ensure that the transactions are sorted in the index.
// This gives us the ability to use the index in more client-side
// applications that are order-dependent (specifically by dependency).
func addrIndexToKey(index *database.TxAddrIndex) []byte {
	record := make([]byte, addrIndexKeyLength)
	copy(record[0:3], addrIndexKeyPrefix)
	copy(record[3:23], index.Hash160[:])

	// The index itself.
	binary.BigEndian.PutUint32(record[23:27], uint32(index.Height))
	binary.BigEndian.PutUint32(record[27:31], uint32(index.TxOffset))
	binary.BigEndian.PutUint32(record[31:35], uint32(index.TxLen))

	return record
}

// unpackTxIndex deserializes the raw bytes of an address tx index.
func unpackTxIndex(rawIndex [database.AddrIndexKeySize]byte) *database.TxAddrIndex {
	var addr [ripemd160.Size]byte
	copy(addr[:], rawIndex[3:23])

	return &database.TxAddrIndex{
		Hash160:  addr,
		Height:   binary.BigEndian.Uint32(rawIndex[23:27]),
		TxOffset: binary.BigEndian.Uint32(rawIndex[27:31]),
		TxLen:    binary.BigEndian.Uint32(rawIndex[31:35]),
	}
}

// bytesPrefix returns a key range that satisfies the given prefix.  This is
// only applicable to the standard 'bytes comparer'.
func bytesPrefix(prefix []byte) *util.Range {
	var limit []byte
	for i := len(prefix) - 1; i >= 0; i-- {
		c := prefix[i]
		if c < 0xff {
			limit = make([]byte, i+1)
			copy(limit, prefix)
			limit[i] = c + 1
			break
		}
	}
	return &util.Range{Start: prefix, Limit: limit}
}

// advanceIterator moves the passed iterator backwards when reverse is true
// and forwards otherwise.
func advanceIterator(iter iterator.IteratorSeeker, reverse bool) bool {
	if reverse {
		return iter.Prev()
	}
	return iter.Next()
}
// FetchTxsForAddr looks up and returns all transactions which either spend
// from a previously created output of the passed address, or create a new
// output locked to the passed address.  The `limit` parameter should be the
// max number of transactions to be returned.  Additionally, if the caller
// wishes to seek forward in the results some amount, the `skip` parameter
// represents how many results to skip.
func (db *LevelDb) FetchTxsForAddr(addr dcrutil.Address, skip int,
	limit int, reverse bool) ([]*database.TxListReply, int, error) {

	db.dbLock.Lock()
	defer db.dbLock.Unlock()

	// Enforce constraints for skip and limit.
	if skip < 0 {
		return nil, 0, errors.New("offset for skip must be non-negative")
	}
	if limit < 0 {
		return nil, 0, errors.New("value for limit must be non-negative")
	}

	// Parse address type, bailing on an unknown type.
	var addrKey []byte
	switch addr := addr.(type) {
	case *dcrutil.AddressPubKeyHash:
		hash160 := addr.Hash160()
		addrKey = hash160[:]
	case *dcrutil.AddressScriptHash:
		hash160 := addr.Hash160()
		addrKey = hash160[:]
	case *dcrutil.AddressSecpPubKey:
		hash160 := addr.AddressPubKeyHash().Hash160()
		addrKey = hash160[:]
	case *dcrutil.AddressEdwardsPubKey:
		hash160 := addr.AddressPubKeyHash().Hash160()
		addrKey = hash160[:]
	case *dcrutil.AddressSecSchnorrPubKey:
		hash160 := addr.AddressPubKeyHash().Hash160()
		addrKey = hash160[:]
	default:
		return nil, 0, database.ErrUnsupportedAddressType
	}

	// Create the prefix for our search.
	addrPrefix := make([]byte, 23)
	copy(addrPrefix[0:3], addrIndexKeyPrefix)
	copy(addrPrefix[3:23], addrKey)

	iter := db.lDb.NewIterator(bytesPrefix(addrPrefix), nil)

	skipped := 0
	if reverse {
		// Go to the last element if reverse iterating.
		iter.Last()
		// Skip "one past" the last element so the loops below don't
		// miss the last element due to Prev() being called first.
		// We can safely ignore iterator exhaustion since the loops
		// below will see there's no keys anyway.
		iter.Next()
	}

	for skip != 0 && advanceIterator(iter, reverse) {
		skip--
		skipped++
	}

	// Iterate through all address indexes that match the targeted prefix.
	var replies []*database.TxListReply
	var rawIndex [database.AddrIndexKeySize]byte
	for advanceIterator(iter, reverse) && limit != 0 {
		copy(rawIndex[:], iter.Key())
		addrIndex := unpackTxIndex(rawIndex)

		tx, blkSha, blkHeight, _, err := db.fetchTxDataByLoc(
			int64(addrIndex.Height),
			int(addrIndex.TxOffset),
			int(addrIndex.TxLen),
			[]byte{})
		if err != nil {
			log.Warnf("tx listed in addrindex record not found, height: %v"+
				" offset: %v, len: %v", addrIndex.Height, addrIndex.TxOffset,
				addrIndex.TxLen)
			limit--
			continue
		}

		var txSha chainhash.Hash
		if tx != nil {
			txSha = tx.TxSha()
		}
		txReply := &database.TxListReply{Sha: &txSha, Tx: tx,
			BlkSha: blkSha, Height: blkHeight, TxSpent: []bool{}, Err: err}
		replies = append(replies, txReply)
		limit--
	}
	iter.Release()
	if err := iter.Error(); err != nil {
		return nil, 0, err
	}

	return replies, skipped, nil
}
// UpdateAddrIndexForBlock updates the stored addrindex with the passed index
// information for a particular block height.  Additionally, it will update
// the stored meta-data related to the current tip of the addr index.  These
// two operations are performed in an atomic transaction which is committed
// before the function returns.
//
// Transactions indexed by address are stored with the following format:
//
//   * prefix || hash160 || blockHeight || txoffset || txlen
//
// Indexes are stored purely in the key, with blank data for the actual value
// in order to facilitate ease of iteration by their shared prefix and also to
// allow limiting the number of returned transactions (RPC).
//
// Alternatively, indexes for each address could be stored as an append-only
// list for the stored value.  However, this adds unnecessary overhead when
// storing and retrieving since the entire list must be fetched each time.
func (db *LevelDb) UpdateAddrIndexForBlock(blkSha *chainhash.Hash,
	blkHeight int64, addrIndexes database.BlockAddrIndex) error {

	db.dbLock.Lock()
	defer db.dbLock.Unlock()

	var blankData []byte
	batch := db.lBatch()
	defer db.lbatch.Reset()

	// Write all data for the new address indexes in a single batch
	// transaction.
	for _, addrIndex := range addrIndexes {
		// The index is stored purely in the key.
		packedIndex := addrIndexToKey(addrIndex)
		batch.Put(packedIndex, blankData)
	}

	// Update the tip of the addrindex.
	newIndexTip := make([]byte, 40)
	copy(newIndexTip[0:32], blkSha[:])
	binary.LittleEndian.PutUint64(newIndexTip[32:40], uint64(blkHeight))
	batch.Put(addrIndexMetaDataKey, newIndexTip)

	// Ensure we're writing an address index version.
	newIndexVersion := make([]byte, 2)
	binary.LittleEndian.PutUint16(newIndexVersion[0:2],
		uint16(addrIndexCurrentVersion))
	batch.Put(addrIndexVersionKey, newIndexVersion)

	if err := db.lDb.Write(batch, db.wo); err != nil {
		return err
	}

	db.lastAddrIndexBlkIdx = blkHeight
	db.lastAddrIndexBlkSha = *blkSha

	return nil
}
// DropAddrIndexForBlock drops the address index db for a given block/height.
func (db *LevelDb) DropAddrIndexForBlock(blkSha *chainhash.Hash,
	blkHeight int64, addrIndexes database.BlockAddrIndex) error {

	db.dbLock.Lock()
	defer db.dbLock.Unlock()

	batch := db.lBatch()
	defer db.lbatch.Reset()

	tipIdx := db.lastAddrIndexBlkIdx
	tipHash := db.lastAddrIndexBlkSha
	if tipIdx != blkHeight || !tipHash.IsEqual(blkSha) {
		return fmt.Errorf("expected to receive a removal of hash %v, height %v"+
			", but instead received hash %v, height %v",
			tipHash, tipIdx, blkSha, blkHeight)
	}

	// Remove all data for the dropped address indexes in a single batch
	// transaction.
	for _, addrIndex := range addrIndexes {
		// The index is stored purely in the key.
		packedIndex := addrIndexToKey(addrIndex)
		batch.Delete(packedIndex)
	}

	parentHash, _, err := db.getBlkByHeight(blkHeight - 1)
	if err != nil {
		return err
	}
	phb := *parentHash

	// Update the tip of the addrindex.
	newIndexTip := make([]byte, 40)
	copy(newIndexTip[0:32], phb[:])
	binary.LittleEndian.PutUint64(newIndexTip[32:40], uint64(blkHeight-1))
	batch.Put(addrIndexMetaDataKey, newIndexTip)

	// Ensure we're writing an address index version.
	newIndexVersion := make([]byte, 2)
	binary.LittleEndian.PutUint16(newIndexVersion[0:2],
		uint16(addrIndexCurrentVersion))
	batch.Put(addrIndexVersionKey, newIndexVersion)

	if err := db.lDb.Write(batch, db.wo); err != nil {
		return err
	}

	db.lastAddrIndexBlkIdx = blkHeight - 1
	db.lastAddrIndexBlkSha = phb

	return nil
}
// PurgeAddrIndex deletes the entire addrindex stored within the DB.  It also
// resets the cached in-memory metadata about the addr index.
func (db *LevelDb) PurgeAddrIndex() error {
	db.dbLock.Lock()
	defer db.dbLock.Unlock()

	batch := db.lBatch()
	defer batch.Reset()

	// Delete the entire index along with any metadata about it.
	iter := db.lDb.NewIterator(bytesPrefix(addrIndexKeyPrefix), db.ro)
	numInBatch := 0
	for iter.Next() {
		key := iter.Key()

		// With a 24-bit index key prefix, 1 in every 2^24 keys is a
		// collision.  Check the length to make sure only address index
		// keys are deleted.
		if len(key) == addrIndexKeyLength {
			batch.Delete(key)
			numInBatch++
		}

		// Delete in chunks to potentially avoid very large batches.
		if numInBatch >= batchDeleteThreshold {
			if err := db.lDb.Write(batch, db.wo); err != nil {
				iter.Release()
				return err
			}
			batch.Reset()
			numInBatch = 0
		}
	}
	iter.Release()
	if err := iter.Error(); err != nil {
		return err
	}

	batch.Delete(addrIndexMetaDataKey)
	batch.Delete(addrIndexVersionKey)
	if err := db.lDb.Write(batch, db.wo); err != nil {
		return err
	}

	db.lastAddrIndexBlkIdx = -1
	db.lastAddrIndexBlkSha = chainhash.Hash{}

	return nil
}
// deleteOldAddrIndex deletes the entire addrindex stored within the DB for a
// 2-byte addrIndexKeyPrefix.  It also resets the cached in-memory metadata
// about the addr index.
func (db *LevelDb) deleteOldAddrIndex() error {
	db.dbLock.Lock()
	defer db.dbLock.Unlock()

	batch := db.lBatch()
	defer batch.Reset()

	// Delete the entire index along with any metadata about it.
	iter := db.lDb.NewIterator(bytesPrefix([]byte("a-")), db.ro)
	numInBatch := 0
	for iter.Next() {
		key := iter.Key()

		// Check the length to make sure only address index keys are
		// deleted.  Also check the last two bytes to make sure the
		// suffix doesn't match other types of index that are 34 bytes
		// long.
		if len(key) == 34 && !bytes.HasSuffix(key, recordSuffixTx) &&
			!bytes.HasSuffix(key, recordSuffixSpentTx) {
			batch.Delete(key)
			numInBatch++
		}

		// Delete in chunks to potentially avoid very large batches.
		if numInBatch >= batchDeleteThreshold {
			if err := db.lDb.Write(batch, db.wo); err != nil {
				iter.Release()
				return err
			}
			batch.Reset()
			numInBatch = 0
		}
	}
	iter.Release()
	if err := iter.Error(); err != nil {
		return err
	}

	batch.Delete(addrIndexMetaDataKey)
	batch.Delete(addrIndexVersionKey)
	if err := db.lDb.Write(batch, db.wo); err != nil {
		return err
	}

	db.lastAddrIndexBlkIdx = -1
	db.lastAddrIndexBlkSha = chainhash.Hash{}

	return nil
}