blockchain: Rework to use new db interface.

This commit is the first stage of several that are planned to convert
the blockchain package into a concurrent safe package that will
ultimately allow support for multi-peer download and concurrent chain
processing.  The goal is to update btcd proper after each step so it can
take advantage of the enhancements as they are developed.

In addition to the aforementioned benefit, this staged approach has been
chosen since it is absolutely critical to maintain consensus.
Separating the changes into several stages makes it easier for reviewers
to logically follow what is happening and therefore helps prevent
consensus bugs.  Naturally there are significant automated tests to help
prevent consensus issues as well.

The main focus of this stage is to convert the blockchain package to use
the new database interface and to implement the chain-related
functionality that the old database previously handled.  It also aims to
improve efficiency in various areas by making use of the new database
and chain capabilities.

The following is an overview of the chain changes:

- Update to use the new database interface
- Add chain-related functionality that the old database used to handle
  - Main chain structure and state
  - Transaction spend tracking
- Implement a new pruned unspent transaction output (utxo) set
  - Provides efficient direct access to the unspent transaction outputs
  - Uses a domain specific compression algorithm that understands the
    standard transaction scripts in order to significantly compress them
  - Removes reliance on the transaction index and paves the way toward
    eventually enabling block pruning
- Modify the New function to accept a Config struct instead of
  individual parameters
- Replace the old TxStore type with a new UtxoViewpoint type that makes
  use of the new pruned utxo set
- Convert code to treat the new UtxoViewpoint as a rolling view that is
  used between connects and disconnects to improve efficiency
- Make best chain state always set when the chain instance is created
  - Remove now unnecessary logic for dealing with unset best state
- Make all exported functions concurrent safe
  - Currently using a single chain state lock as it provides a
    straightforward and easy-to-review path forward, however this can be
    improved with more fine-grained locking
- Optimize various cases where full blocks were being loaded when only
  the header is needed to help reduce the I/O load
- Add the ability for callers to get a snapshot of the current best
  chain stats in a concurrent safe fashion
  - Does not block callers while new blocks are being processed
- Make error messages that reference transaction outputs consistently
  use <transaction hash>:<output index>
- Introduce a new AssertError type and convert internal consistency
  checks to use it
- Update tests and examples to reflect the changes
- Add a full suite of tests to ensure correct functionality of the new
  code
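
Several of the items above hinge on the snapshot capability and the
single chain state lock.  The following is a minimal sketch of the
copy-under-read-lock pattern that makes snapshots non-blocking; the type
and field names here are illustrative stand-ins, not the actual dcrd API:

```go
package main

import (
	"fmt"
	"sync"
)

// BestState is an illustrative stand-in for the best chain state; the
// real type would carry more fields (bits, block size, time, and so on).
type BestState struct {
	Hash   string
	Height int64
}

// Chain guards its state with a single RWMutex, mirroring the single
// chain state lock described above.
type Chain struct {
	chainLock sync.RWMutex
	best      BestState
}

// BestSnapshot returns a copy of the current best chain state.  Because
// the copy is made under a read lock and returned by value, callers can
// inspect it afterwards without holding any lock and without blocking
// block processing.
func (c *Chain) BestSnapshot() BestState {
	c.chainLock.RLock()
	snapshot := c.best
	c.chainLock.RUnlock()
	return snapshot
}

// connectBlock updates the best state under the write lock, as
// connecting a new block would.
func (c *Chain) connectBlock(hash string, height int64) {
	c.chainLock.Lock()
	c.best = BestState{Hash: hash, Height: height}
	c.chainLock.Unlock()
}

func main() {
	c := &Chain{best: BestState{Hash: "genesis", Height: 0}}
	c.connectBlock("000000abc", 1)
	snap := c.BestSnapshot()
	fmt.Println(snap.Hash, snap.Height) // 000000abc 1
}
```

Finer-grained locking could replace the single mutex later without
changing this calling convention, which is why the commit treats it as an
acceptable intermediate step.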

The following is an overview of the btcd changes:

- Update to use the new database and chain interfaces
- Temporarily remove all code related to the transaction index
- Temporarily remove all code related to the address index
- Convert all code that uses transaction stores to use the new utxo
  view
- Rework several calls that required the block manager for safe
  concurrency to use the chain package directly now that it is
  concurrent safe
- Change all calls to obtain the best hash to use the new best state
  snapshot capability from the chain package
- Remove workaround for limits on fetching height ranges since the new
  database interface no longer imposes them
- Correct the gettxout RPC handler to return the best chain hash as
  opposed to the hash of the block the txout was found in
- Optimize various RPC handlers:
  - Change several of the RPC handlers to use the new chain snapshot
    capability to avoid needlessly loading data
  - Update several handlers to use new functionality to avoid accessing
    the block manager so they are able to return the data without
    blocking when the server is busy processing blocks
  - Update non-verbose getblock to avoid deserialization and
    serialization overhead
  - Update getblockheader to request the block height directly from
    chain and only load the header
  - Update getdifficulty to use the new cached data from chain
  - Update getmininginfo to use the new cached data from chain
  - Update non-verbose getrawtransaction to avoid deserialization and
    serialization overhead
  - Update gettxout to use the new utxo store versus loading
    full transactions using the transaction index
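
The conversion work above leans on the new database interface's
transactional access, where all reads happen inside a closure passed to
View.  Below is a minimal sketch of that pattern with a toy stand-in for
the real database2 types (the actual interface exposes buckets and
metadata rather than a FetchHeight method):

```go
package main

import (
	"errors"
	"fmt"
)

// Tx is a toy stand-in for the read transaction exposed by the new
// database interface.
type Tx interface {
	FetchHeight(hash string) (int64, error)
}

// DB models transactional access: every read happens inside a closure so
// the database can guarantee a consistent view for its duration.
type DB struct {
	heights map[string]int64
}

// View invokes fn with a read transaction and returns whatever error the
// closure produces, mirroring the db.View calls in the diffs below.
func (d *DB) View(fn func(tx Tx) error) error {
	return fn(viewTx{d})
}

type viewTx struct{ d *DB }

func (t viewTx) FetchHeight(hash string) (int64, error) {
	h, ok := t.d.heights[hash]
	if !ok {
		return 0, errors.New("block not found in main chain")
	}
	return h, nil
}

func main() {
	db := &DB{heights: map[string]int64{"deadbeef": 42}}

	// The commit's idiom: declare results outside the closure, assign
	// inside, and let failures propagate out through View's error.
	var height int64
	err := db.View(func(tx Tx) error {
		var err error
		height, err = tx.FetchHeight("deadbeef")
		return err
	})
	fmt.Println(height, err) // 42 <nil>
}
```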

The following is an overview of the utility changes:

- Update addblock to use the new database and chain interfaces
- Update findcheckpoint to use the new database and chain interfaces
- Remove the dropafter utility which is no longer supported

NOTE: The transaction index and address index will be reimplemented in
another commit.
This commit is contained in:
Author: Dave Collins, 2015-08-25 23:03:18 -05:00 (committed by cjepson)
Parent: 0a9a0f1969
Commit: b6d426241d
297 changed files with 10167 additions and 4867 deletions


@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,5 +1,5 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -180,6 +180,8 @@ func (b *BlockChain) checkBlockContext(block *dcrutil.Block, prevNode *blockNode
// The flags modify the behavior of this function as follows:
// - BFDryRun: The memory chain index will not be pruned and no accept
// notification will be sent since the block is not being accepted.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) maybeAcceptBlock(block *dcrutil.Block,
flags BehaviorFlags) (bool, error) {
dryRun := flags&BFDryRun == BFDryRun
@ -222,7 +224,7 @@ func (b *BlockChain) maybeAcceptBlock(block *dcrutil.Block,
var voteBitsStake []uint16
for _, stx := range block.STransactions() {
if is, _ := stake.IsSSGen(stx); is {
vb := stake.GetSSGenVoteBits(stx)
vb := stake.SSGenVoteBits(stx)
voteBitsStake = append(voteBitsStake, vb)
}
}
@ -247,8 +249,10 @@ func (b *BlockChain) maybeAcceptBlock(block *dcrutil.Block,
// chain. The caller would typically want to react by relaying the
// inventory to other peers.
if !dryRun {
b.chainLock.Unlock()
b.sendNotification(NTBlockAccepted,
&BlockAcceptedNtfnsData{onMainChain, block})
b.chainLock.Lock()
}
return onMainChain, nil


@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,6 +7,7 @@ package blockchain
import (
"github.com/decred/dcrd/chaincfg/chainhash"
database "github.com/decred/dcrd/database2"
"github.com/decred/dcrd/wire"
)
@ -27,7 +28,7 @@ import (
// [17a 16a 15 14 13 12 11 10 9 8 6 2 genesis]
type BlockLocator []*chainhash.Hash
// BlockLocatorFromHash returns a block locator for the passed block hash.
// blockLocatorFromHash returns a block locator for the passed block hash.
// See BlockLocator for details on the algorithm used to create a block locator.
//
// In addition to the general algorithm referenced above, there are a couple of
@ -37,7 +38,9 @@ type BlockLocator []*chainhash.Hash
// therefore the block locator will only consist of the genesis hash
// - If the passed hash is not currently known, the block locator will only
// consist of the passed hash
func (b *BlockChain) BlockLocatorFromHash(hash *chainhash.Hash) BlockLocator {
//
// This function MUST be called with the chain state lock held (for reads).
func (b *BlockChain) blockLocatorFromHash(hash *chainhash.Hash) BlockLocator {
// The locator contains the requested hash at the very least.
locator := make(BlockLocator, 0, wire.MaxBlockLocatorsPerMsg)
locator = append(locator, hash)
@ -57,7 +60,12 @@ func (b *BlockChain) BlockLocatorFromHash(hash *chainhash.Hash) BlockLocator {
// Try to look up the height for passed block hash. Assume an
// error means it doesn't exist and just return the locator for
// the block itself.
height, err := b.db.FetchBlockHeightBySha(hash)
var height int64
err := b.db.View(func(dbTx database.Tx) error {
var err error
height, err = dbFetchHeightByHash(dbTx, hash)
return err
})
if err != nil {
return locator
}
@ -79,73 +87,94 @@ func (b *BlockChain) BlockLocatorFromHash(hash *chainhash.Hash) BlockLocator {
}
// Generate the block locators according to the algorithm described in
// in the BlockLocator comment and make sure to leave room for the
// final genesis hash.
iterNode := node
increment := int64(1)
for len(locator) < wire.MaxBlockLocatorsPerMsg-1 {
// Once there are 10 locators, exponentially increase the
// distance between each block locator.
if len(locator) > 10 {
increment *= 2
}
blockHeight -= increment
if blockHeight < 1 {
break
// in the BlockLocator comment and make sure to leave room for the final
// genesis hash.
//
// The error is intentionally ignored here since the only way the code
// could fail is if there is something wrong with the database which
// will be caught in short order anyways and it's also safe to ignore
// block locators.
_ = b.db.View(func(dbTx database.Tx) error {
iterNode := node
increment := int64(1)
for len(locator) < wire.MaxBlockLocatorsPerMsg-1 {
// Once there are 10 locators, exponentially increase
// the distance between each block locator.
if len(locator) > 10 {
increment *= 2
}
blockHeight -= increment
if blockHeight < 1 {
break
}
// As long as this is still on the side chain, walk
// backwards along the side chain nodes to each block
// height.
if forkHeight != -1 && blockHeight > forkHeight {
// Intentionally use parent field instead of the
// getPrevNodeFromNode function since we don't
// want to dynamically load nodes when building
// block locators. Side chain blocks should
// always be in memory already, and if they
// aren't for some reason it's ok to skip them.
for iterNode != nil && blockHeight > iterNode.height {
iterNode = iterNode.parent
}
if iterNode != nil && iterNode.height == blockHeight {
locator = append(locator, iterNode.hash)
}
continue
}
// The desired block height is in the main chain, so
// look it up from the main chain database.
h, err := dbFetchHashByHeight(dbTx, blockHeight)
if err != nil {
// This shouldn't happen and it's ok to ignore
// block locators, so just continue to the next
// one.
log.Warnf("Lookup of known valid height failed %v",
blockHeight)
continue
}
locator = append(locator, h)
}
// As long as this is still on the side chain, walk backwards
// along the side chain nodes to each block height.
if forkHeight != -1 && blockHeight > forkHeight {
// Intentionally use parent field instead of the
// getPrevNodeFromNode function since we don't want to
// dynamically load nodes when building block locators.
// Side chain blocks should always be in memory already,
// and if they aren't for some reason it's ok to skip
// them.
for iterNode != nil && blockHeight > iterNode.height {
iterNode = iterNode.parent
}
if iterNode != nil && iterNode.height == blockHeight {
locator = append(locator, iterNode.hash)
}
continue
}
// The desired block height is in the main chain, so look it up
// from the main chain database.
h, err := b.db.FetchBlockShaByHeight(blockHeight)
if err != nil {
// This shouldn't happen and it's ok to ignore block
// locators, so just continue to the next one.
log.Warnf("Lookup of known valid height failed %v",
blockHeight)
continue
}
locator = append(locator, h)
}
return nil
})
// Append the appropriate genesis block.
locator = append(locator, b.chainParams.GenesisHash)
return locator
}
// BlockLocatorFromHash returns a block locator for the passed block hash.
// See BlockLocator for details on the algorithm used to create a block locator.
//
// In addition to the general algorithm referenced above, there are a couple of
// special cases which are handled:
//
// - If the genesis hash is passed, there are no previous hashes to add and
// therefore the block locator will only consist of the genesis hash
// - If the passed hash is not currently known, the block locator will only
// consist of the passed hash
//
// This function is safe for concurrent access.
func (b *BlockChain) BlockLocatorFromHash(hash *chainhash.Hash) BlockLocator {
b.chainLock.RLock()
locator := b.blockLocatorFromHash(hash)
b.chainLock.RUnlock()
return locator
}
// LatestBlockLocator returns a block locator for the latest known tip of the
// main (best) chain.
//
// This function is safe for concurrent access.
func (b *BlockChain) LatestBlockLocator() (BlockLocator, error) {
// Lookup the latest main chain hash if the best chain hasn't been set
// yet.
if b.bestChain == nil {
// Get the latest block hash for the main chain from the
// database.
hash, _, err := b.db.NewestSha()
if err != nil {
return nil, err
}
return b.BlockLocatorFromHash(hash), nil
}
// The best chain is set, so use its hash.
return b.BlockLocatorFromHash(b.bestChain.hash), nil
b.chainLock.RLock()
locator := b.blockLocatorFromHash(b.bestNode.hash)
b.chainLock.RUnlock()
return locator, nil
}

File diff suppressed because it is too large.


@ -1,16 +1,101 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain_test
import (
"bytes"
"compress/bzip2"
"encoding/gob"
"os"
"path/filepath"
"testing"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrutil"
)
// TestHaveBlock tests the HaveBlock API to ensure proper functionality.
func TestHaveBlock(t *testing.T) {
// TODO Come up with some kind of new test for this portion of the API?
// HaveBlock is already tested in the reorganization test.
// TestBlockchainFunction tests the various blockchain API to ensure proper
// functionality.
func TestBlockchainFunctions(t *testing.T) {
// Create a new database and chain instance to run tests against.
chain, teardownFunc, err := chainSetup("validateunittests",
simNetParams)
if err != nil {
t.Errorf("Failed to setup chain instance: %v", err)
return
}
defer teardownFunc()
// The genesis block should fail to connect since it's already inserted.
genesisBlock := simNetParams.GenesisBlock
err = chain.CheckConnectBlock(dcrutil.NewBlock(genesisBlock))
if err == nil {
t.Errorf("CheckConnectBlock: Did not receive expected error")
}
// Load up the rest of the blocks up to HEAD~1.
filename := filepath.Join("testdata/", "blocks0to168.bz2")
fi, err := os.Open(filename)
bcStream := bzip2.NewReader(fi)
defer fi.Close()
// Create a buffer of the read file.
bcBuf := new(bytes.Buffer)
bcBuf.ReadFrom(bcStream)
// Create decoder from the buffer and a map to store the data.
bcDecoder := gob.NewDecoder(bcBuf)
blockChain := make(map[int64][]byte)
// Decode the blockchain into the map.
if err := bcDecoder.Decode(&blockChain); err != nil {
t.Errorf("error decoding test blockchain: %v", err.Error())
}
// Insert blocks 1 to 168 and perform various tests.
timeSource := blockchain.NewMedianTime()
for i := 1; i <= 168; i++ {
bl, err := dcrutil.NewBlockFromBytes(blockChain[int64(i)])
if err != nil {
t.Errorf("NewBlockFromBytes error: %v", err.Error())
}
bl.SetHeight(int64(i))
_, _, err = chain.ProcessBlock(bl, timeSource, blockchain.BFNone)
if err != nil {
t.Fatalf("ProcessBlock error at height %v: %v", i, err.Error())
}
}
val, err := chain.TicketPoolValue()
if err != nil {
t.Errorf("Failed to get ticket pool value: %v", err)
}
expectedVal := dcrutil.Amount(3495091704)
if val != expectedVal {
t.Errorf("Failed to get correct result for ticket pool value; "+
"want %v, got %v", expectedVal, val)
}
a, _ := dcrutil.DecodeNetworkAddress("SsbKpMkPnadDcZFFZqRPY8nvdFagrktKuzB")
hs, err := chain.TicketsWithAddress(a)
if err != nil {
t.Errorf("Failed to do TicketsWithAddress: %v", err)
}
expectedLen := 223
if len(hs) != expectedLen {
t.Errorf("Failed to get correct number of tickets for "+
"TicketsWithAddress; want %v, got %v", expectedLen, len(hs))
}
totalSubsidy := chain.TotalSubsidy()
expectedSubsidy := int64(35783267326630)
if expectedSubsidy != totalSubsidy {
t.Errorf("Failed to get correct total subsidy for "+
"TotalSubsidy; want %v, got %v", expectedSubsidy,
totalSubsidy)
}
}

blockchain/chainio.go: new file, 1703 lines (diff suppressed because it is too large)

blockchain/chainio_test.go: new file, 1198 lines (diff suppressed because it is too large)


@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -10,6 +10,7 @@ import (
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/chaincfg/chainhash"
database "github.com/decred/dcrd/database2"
"github.com/decred/dcrd/txscript"
"github.com/decred/dcrutil"
)
@ -30,14 +31,23 @@ func newShaHashFromStr(hexStr string) *chainhash.Hash {
// DisableCheckpoints provides a mechanism to disable validation against
// checkpoints which you DO NOT want to do in production. It is provided only
// for debug purposes.
//
// This function is safe for concurrent access.
func (b *BlockChain) DisableCheckpoints(disable bool) {
b.chainLock.Lock()
b.noCheckpoints = disable
b.chainLock.Unlock()
}
// Checkpoints returns a slice of checkpoints (regardless of whether they are
// already known). When checkpoints are disabled or there are no checkpoints
// for the active network, it will return nil.
//
// This function is safe for concurrent access.
func (b *BlockChain) Checkpoints() []chaincfg.Checkpoint {
b.chainLock.RLock()
defer b.chainLock.RUnlock()
if b.noCheckpoints || len(b.chainParams.Checkpoints) == 0 {
return nil
}
@ -45,10 +55,12 @@ func (b *BlockChain) Checkpoints() []chaincfg.Checkpoint {
return b.chainParams.Checkpoints
}
// LatestCheckpoint returns the most recent checkpoint (regardless of whether it
// latestCheckpoint returns the most recent checkpoint (regardless of whether it
// is already known). When checkpoints are disabled or there are no checkpoints
// for the active network, it will return nil.
func (b *BlockChain) LatestCheckpoint() *chaincfg.Checkpoint {
//
// This function MUST be called with the chain state lock held (for reads).
func (b *BlockChain) latestCheckpoint() *chaincfg.Checkpoint {
if b.noCheckpoints || len(b.chainParams.Checkpoints) == 0 {
return nil
}
@ -57,9 +69,23 @@ func (b *BlockChain) LatestCheckpoint() *chaincfg.Checkpoint {
return &checkpoints[len(checkpoints)-1]
}
// LatestCheckpoint returns the most recent checkpoint (regardless of whether it
// is already known). When checkpoints are disabled or there are no checkpoints
// for the active network, it will return nil.
//
// This function is safe for concurrent access.
func (b *BlockChain) LatestCheckpoint() *chaincfg.Checkpoint {
b.chainLock.RLock()
checkpoint := b.latestCheckpoint()
b.chainLock.RUnlock()
return checkpoint
}
// verifyCheckpoint returns whether the passed block height and hash combination
// match the hard-coded checkpoint data. It also returns true if there is no
// checkpoint data for the passed block height.
//
// This function MUST be called with the chain lock held (for reads).
func (b *BlockChain) verifyCheckpoint(height int64, hash *chainhash.Hash) bool {
if b.noCheckpoints || len(b.chainParams.Checkpoints) == 0 {
return true
@ -84,6 +110,8 @@ func (b *BlockChain) verifyCheckpoint(height int64, hash *chainhash.Hash) bool {
// available in the downloaded portion of the block chain and returns the
// associated block. It returns nil if a checkpoint can't be found (this should
// really only happen for blocks before the first checkpoint).
//
// This function MUST be called with the chain lock held (for reads).
func (b *BlockChain) findPreviousCheckpoint() (*dcrutil.Block, error) {
if b.noCheckpoints || len(b.chainParams.Checkpoints) == 0 {
return nil, nil
@ -99,20 +127,21 @@ func (b *BlockChain) findPreviousCheckpoint() (*dcrutil.Block, error) {
// Perform the initial search to find and cache the latest known
// checkpoint if the best chain is not known yet or we haven't already
// previously searched.
if b.bestChain == nil || (b.checkpointBlock == nil && b.nextCheckpoint == nil) {
if b.checkpointBlock == nil && b.nextCheckpoint == nil {
// Loop backwards through the available checkpoints to find one
// that we already have.
// that is already available.
checkpointIndex := -1
for i := numCheckpoints - 1; i >= 0; i-- {
exists, err := b.db.ExistsSha(checkpoints[i].Hash)
if err != nil {
return nil, err
}
if exists {
checkpointIndex = i
break
err := b.db.View(func(dbTx database.Tx) error {
for i := numCheckpoints - 1; i >= 0; i-- {
if dbMainChainHasBlock(dbTx, checkpoints[i].Hash) {
checkpointIndex = i
break
}
}
return nil
})
if err != nil {
return nil, err
}
// No known latest checkpoint. This will only happen on blocks
@ -126,19 +155,26 @@ func (b *BlockChain) findPreviousCheckpoint() (*dcrutil.Block, error) {
// Cache the latest known checkpoint block for future lookups.
checkpoint := checkpoints[checkpointIndex]
block, err := b.db.FetchBlockBySha(checkpoint.Hash)
err = b.db.View(func(dbTx database.Tx) error {
block, err := dbFetchBlockByHash(dbTx, checkpoint.Hash)
if err != nil {
return err
}
b.checkpointBlock = block
// Set the next expected checkpoint block accordingly.
b.nextCheckpoint = nil
if checkpointIndex < numCheckpoints-1 {
b.nextCheckpoint = &checkpoints[checkpointIndex+1]
}
return nil
})
if err != nil {
return nil, err
}
b.checkpointBlock = block
// Set the next expected checkpoint block accordingly.
b.nextCheckpoint = nil
if checkpointIndex < numCheckpoints-1 {
b.nextCheckpoint = &checkpoints[checkpointIndex+1]
}
return block, nil
return b.checkpointBlock, nil
}
// At this point we've already searched for the latest known checkpoint,
@ -151,7 +187,7 @@ func (b *BlockChain) findPreviousCheckpoint() (*dcrutil.Block, error) {
// When there is a next checkpoint and the height of the current best
// chain does not exceed it, the current checkpoint lockin is still
// the latest known checkpoint.
if b.bestChain.height < b.nextCheckpoint.Height {
if b.bestNode.height < b.nextCheckpoint.Height {
return b.checkpointBlock, nil
}
@ -164,11 +200,17 @@ func (b *BlockChain) findPreviousCheckpoint() (*dcrutil.Block, error) {
// that if this lookup fails something is very wrong since the chain
// has already passed the checkpoint which was verified as accurate
// before inserting it.
block, err := b.db.FetchBlockBySha(b.nextCheckpoint.Hash)
err := b.db.View(func(tx database.Tx) error {
block, err := dbFetchBlockByHash(tx, b.nextCheckpoint.Hash)
if err != nil {
return err
}
b.checkpointBlock = block
return nil
})
if err != nil {
return nil, err
}
b.checkpointBlock = block
// Set the next expected checkpoint.
checkpointIndex := -1
@ -189,8 +231,6 @@ func (b *BlockChain) findPreviousCheckpoint() (*dcrutil.Block, error) {
// isNonstandardTransaction determines whether a transaction contains any
// scripts which are not one of the standard types.
func isNonstandardTransaction(tx *dcrutil.Tx) bool {
// TODO(davec): Should there be checks for the input signature scripts?
// Check all of the output public key scripts for non-standard scripts.
for _, txOut := range tx.MsgTx().TxOut {
scriptClass := txscript.GetScriptClass(txOut.Version, txOut.PkScript)
@ -216,66 +256,80 @@ func isNonstandardTransaction(tx *dcrutil.Tx) bool {
//
// The intent is that candidates are reviewed by a developer to make the final
// decision and then manually added to the list of checkpoints for a network.
//
// This function is safe for concurrent access.
func (b *BlockChain) IsCheckpointCandidate(block *dcrutil.Block) (bool, error) {
b.chainLock.RLock()
defer b.chainLock.RUnlock()
// Checkpoints must be enabled.
if b.noCheckpoints {
return false, fmt.Errorf("checkpoints are disabled")
}
// A checkpoint must be in the main chain.
exists, err := b.db.ExistsSha(block.Sha())
if err != nil {
return false, err
}
if !exists {
return false, nil
}
// A checkpoint must be at least CheckpointConfirmations blocks before
// the end of the main chain.
blockHeight := block.Height()
_, mainChainHeight, err := b.db.NewestSha()
if err != nil {
return false, err
}
if blockHeight > (mainChainHeight - CheckpointConfirmations) {
return false, nil
}
// Get the previous block.
prevHash := &block.MsgBlock().Header.PrevBlock
prevBlock, err := b.db.FetchBlockBySha(prevHash)
if err != nil {
return false, err
}
// Get the next block.
nextHash, err := b.db.FetchBlockShaByHeight(blockHeight + 1)
if err != nil {
return false, err
}
nextBlock, err := b.db.FetchBlockBySha(nextHash)
if err != nil {
return false, err
}
// A checkpoint must have timestamps for the block and the blocks on
// either side of it in order (due to the median time allowance this is
// not always the case).
prevTime := prevBlock.MsgBlock().Header.Timestamp
curTime := block.MsgBlock().Header.Timestamp
nextTime := nextBlock.MsgBlock().Header.Timestamp
if prevTime.After(curTime) || nextTime.Before(curTime) {
return false, nil
}
// A checkpoint must have transactions that only contain standard
// scripts.
for _, tx := range block.Transactions() {
if isNonstandardTransaction(tx) {
return false, nil
var isCandidate bool
err := b.db.View(func(dbTx database.Tx) error {
// A checkpoint must be in the main chain.
blockHeight, err := dbFetchHeightByHash(dbTx, block.Sha())
if err != nil {
// Only return an error if it's not due to the block not
// being in the main chain.
if !isNotInMainChainErr(err) {
return err
}
return nil
}
}
return true, nil
// Ensure the height of the passed block and the entry for the
// block in the main chain match. This should always be the
// case unless the caller provided an invalid block.
if blockHeight != block.Height() {
return fmt.Errorf("passed block height of %d does not "+
"match the main chain height of %d",
block.Height(), blockHeight)
}
// A checkpoint must be at least CheckpointConfirmations blocks
// before the end of the main chain.
mainChainHeight := b.bestNode.height
if blockHeight > (mainChainHeight - CheckpointConfirmations) {
return nil
}
// Get the previous block header.
prevHash := &block.MsgBlock().Header.PrevBlock
prevHeader, err := dbFetchHeaderByHash(dbTx, prevHash)
if err != nil {
return err
}
// Get the next block header.
nextHeader, err := dbFetchHeaderByHeight(dbTx, blockHeight+1)
if err != nil {
return err
}
// A checkpoint must have timestamps for the block and the
// blocks on either side of it in order (due to the median time
// allowance this is not always the case).
prevTime := prevHeader.Timestamp
curTime := block.MsgBlock().Header.Timestamp
nextTime := nextHeader.Timestamp
if prevTime.After(curTime) || nextTime.Before(curTime) {
return nil
}
// A checkpoint must have transactions that only contain
// standard scripts.
for _, tx := range block.Transactions() {
if isNonstandardTransaction(tx) {
return nil
}
}
// All of the checks passed, so the block is a candidate.
isCandidate = true
return nil
})
return isCandidate, err
}


@ -16,7 +16,8 @@ import (
// DebugBlockHeaderString dumps a verbose message containing information about
// the block header of a block.
func DebugBlockHeaderString(chainParams *chaincfg.Params, block *dcrutil.Block) string {
func DebugBlockHeaderString(chainParams *chaincfg.Params,
block *dcrutil.Block) string {
bh := block.MsgBlock().Header
var buffer bytes.Buffer
@ -150,7 +151,7 @@ func DebugMsgTxString(msgTx *wire.MsgTx) string {
if isSStx {
sstxType, sstxPkhs, sstxAmts, _, sstxRules, sstxLimits =
stake.GetSStxStakeOutputInfo(tx)
stake.TxSStxStakeOutputInfo(tx)
}
var buffer bytes.Buffer
@ -257,14 +258,14 @@ func DebugMsgTxString(msgTx *wire.MsgTx) string {
// SSGen block/block height OP_RETURN.
if isSSGen && i == 0 {
blkHash, blkHeight, _ := stake.GetSSGenBlockVotedOn(tx)
blkHash, blkHeight, _ := stake.SSGenBlockVotedOn(tx)
str = fmt.Sprintf("SSGen block hash voted on: %v, height: %v\n",
blkHash, blkHeight)
buffer.WriteString(str)
}
if isSSGen && i == 1 {
vb := stake.GetSSGenVoteBits(tx)
vb := stake.SSGenVoteBits(tx)
str = fmt.Sprintf("SSGen vote bits: %v\n", vb)
buffer.WriteString(str)
}
@ -308,7 +309,8 @@ func DebugTicketDataString(td *stake.TicketData) string {
// DebugTicketDBLiveString prints out the number of tickets in each
// bucket of the ticket database as a string.
func DebugTicketDBLiveString(tmdb *stake.TicketDB, chainParams *chaincfg.Params) (string, error) {
func DebugTicketDBLiveString(tmdb *stake.TicketDB,
chainParams *chaincfg.Params) (string, error) {
var buffer bytes.Buffer
buffer.WriteString("\n")
@ -333,7 +335,8 @@ func DebugTicketDBLiveString(tmdb *stake.TicketDB, chainParams *chaincfg.Params)
// DebugTicketDBLiveBucketString returns a string containing the ticket hashes
// found in a specific bucket of the live ticket database. If the verbose flag
// is called, it dumps the contents of the ticket data as well.
func DebugTicketDBLiveBucketString(tmdb *stake.TicketDB, bucket uint8, verbose bool) (string, error) {
func DebugTicketDBLiveBucketString(tmdb *stake.TicketDB, bucket uint8,
verbose bool) (string, error) {
var buffer bytes.Buffer
str := fmt.Sprintf("Contents of live ticket bucket %v:\n", bucket)
@ -360,7 +363,8 @@ func DebugTicketDBLiveBucketString(tmdb *stake.TicketDB, bucket uint8, verbose b
// DebugTicketDBSpentBucketString prints the contents of the spent tickets
// database bucket indicated to a string that is returned. If the verbose
// flag is indicated, the contents of each ticket are printed as well.
func DebugTicketDBSpentBucketString(tmdb *stake.TicketDB, height int64, verbose bool) (string, error) {
func DebugTicketDBSpentBucketString(tmdb *stake.TicketDB, height int64,
verbose bool) (string, error) {
var buffer bytes.Buffer
str := fmt.Sprintf("Contents of spent ticket bucket height %v:\n", height)
@ -393,7 +397,8 @@ func DebugTicketDBSpentBucketString(tmdb *stake.TicketDB, height int64, verbose
// DebugTicketDBMissedString prints out the contents of the missed ticket
// database to a string. If verbose is indicated, the ticket data itself
// is printed along with the ticket hashes.
func DebugTicketDBMissedString(tmdb *stake.TicketDB, verbose bool) (string, error) {
func DebugTicketDBMissedString(tmdb *stake.TicketDB, verbose bool) (string,
error) {
var buffer bytes.Buffer
str := fmt.Sprintf("Contents of missed ticket database:\n")
@ -443,27 +448,155 @@ func writeTicketDataToBuf(buf *bytes.Buffer, td *stake.TicketData) {
}
}
// DebugTxStoreData returns a string containing information about the data
// stored in the given TxStore.
func DebugTxStoreData(txs TxStore) string {
if txs == nil {
// DebugUtxoEntryData returns a string containing information about the data
// stored in the given UtxoEntry.
func DebugUtxoEntryData(hash chainhash.Hash, utx *UtxoEntry) string {
var buffer bytes.Buffer
str := fmt.Sprintf("Hash: %v\n", hash)
buffer.WriteString(str)
if utx == nil {
str := fmt.Sprintf("MISSING\n\n")
buffer.WriteString(str)
return buffer.String()
}
str = fmt.Sprintf("Height: %v\n", utx.height)
buffer.WriteString(str)
str = fmt.Sprintf("Index: %v\n", utx.index)
buffer.WriteString(str)
str = fmt.Sprintf("TxVersion: %v\n", utx.txVersion)
buffer.WriteString(str)
str = fmt.Sprintf("TxType: %v\n", utx.txType)
buffer.WriteString(str)
str = fmt.Sprintf("IsCoinbase: %v\n", utx.isCoinBase)
buffer.WriteString(str)
str = fmt.Sprintf("HasExpiry: %v\n", utx.hasExpiry)
buffer.WriteString(str)
str = fmt.Sprintf("FullySpent: %v\n", utx.IsFullySpent())
buffer.WriteString(str)
str = fmt.Sprintf("StakeExtra: %x\n\n", utx.stakeExtra)
buffer.WriteString(str)
outputOrdered := make([]int, 0, len(utx.sparseOutputs))
for outputIndex := range utx.sparseOutputs {
outputOrdered = append(outputOrdered, int(outputIndex))
}
sort.Ints(outputOrdered)
for _, idx := range outputOrdered {
utxo := utx.sparseOutputs[uint32(idx)]
str = fmt.Sprintf("Output index: %v\n", idx)
buffer.WriteString(str)
str = fmt.Sprintf("Amount: %v\n", utxo.amount)
buffer.WriteString(str)
str = fmt.Sprintf("ScriptVersion: %v\n", utxo.scriptVersion)
buffer.WriteString(str)
str = fmt.Sprintf("Script: %x\n", utxo.pkScript)
buffer.WriteString(str)
str = fmt.Sprintf("Spent: %v\n", utxo.spent)
buffer.WriteString(str)
}
str = fmt.Sprintf("\n")
buffer.WriteString(str)
return buffer.String()
}
// DebugUtxoViewpointData returns a string containing information about the data
// stored in the given UtxoViewpoint.
func DebugUtxoViewpointData(uv *UtxoViewpoint) string {
if uv == nil {
return ""
}
var buffer bytes.Buffer
for _, txd := range txs {
str := fmt.Sprintf("Hash: %v\n", txd.Hash)
for hash, utx := range uv.entries {
buffer.WriteString(DebugUtxoEntryData(hash, utx))
}
return buffer.String()
}
// DebugStxoData returns a string containing information about the data
// stored in the given STXO.
func DebugStxoData(stx *spentTxOut) string {
if stx == nil {
return ""
}
var buffer bytes.Buffer
str := fmt.Sprintf("amount: %v\n", stx.amount)
buffer.WriteString(str)
str = fmt.Sprintf("scriptVersion: %v\n", stx.scriptVersion)
buffer.WriteString(str)
str = fmt.Sprintf("pkScript: %x\n", stx.pkScript)
buffer.WriteString(str)
str = fmt.Sprintf("compressed: %v\n", stx.compressed)
buffer.WriteString(str)
str = fmt.Sprintf("stakeExtra: %x\n", stx.stakeExtra)
buffer.WriteString(str)
str = fmt.Sprintf("txVersion: %v\n", stx.txVersion)
buffer.WriteString(str)
str = fmt.Sprintf("height: %v\n", stx.height)
buffer.WriteString(str)
str = fmt.Sprintf("index: %v\n", stx.index)
buffer.WriteString(str)
str = fmt.Sprintf("isCoinbase: %v\n", stx.isCoinBase)
buffer.WriteString(str)
str = fmt.Sprintf("hasExpiry: %v\n", stx.hasExpiry)
buffer.WriteString(str)
str = fmt.Sprintf("txType: %v\n", stx.txType)
buffer.WriteString(str)
str = fmt.Sprintf("fullySpent: %v\n", stx.txFullySpent)
buffer.WriteString(str)
str = fmt.Sprintf("\n")
buffer.WriteString(str)
return buffer.String()
}
// DebugStxosData returns a string containing information about the data
// stored in the given slice of STXOs.
func DebugStxosData(stxs []spentTxOut) string {
if stxs == nil {
return ""
}
var buffer bytes.Buffer
// Iterate backwards.
var str string
for i := len(stxs) - 1; i >= 0; i-- {
str = fmt.Sprintf("STX index %v\n", i)
buffer.WriteString(str)
str = fmt.Sprintf("Height: %v\n", txd.BlockHeight)
str = fmt.Sprintf("amount: %v\n", stxs[i].amount)
buffer.WriteString(str)
str = fmt.Sprintf("Tx: %v\n", txd.Tx)
str = fmt.Sprintf("scriptVersion: %v\n", stxs[i].scriptVersion)
buffer.WriteString(str)
str = fmt.Sprintf("Spent: %v\n", txd.Spent)
str = fmt.Sprintf("pkScript: %x\n", stxs[i].pkScript)
buffer.WriteString(str)
str = fmt.Sprintf("Err: %v\n\n", txd.Err)
str = fmt.Sprintf("compressed: %v\n", stxs[i].compressed)
buffer.WriteString(str)
str = fmt.Sprintf("stakeExtra: %x\n", stxs[i].stakeExtra)
buffer.WriteString(str)
str = fmt.Sprintf("txVersion: %v\n", stxs[i].txVersion)
buffer.WriteString(str)
str = fmt.Sprintf("height: %v\n", stxs[i].height)
buffer.WriteString(str)
str = fmt.Sprintf("index: %v\n", stxs[i].index)
buffer.WriteString(str)
str = fmt.Sprintf("isCoinbase: %v\n", stxs[i].isCoinBase)
buffer.WriteString(str)
str = fmt.Sprintf("hasExpiry: %v\n", stxs[i].hasExpiry)
buffer.WriteString(str)
str = fmt.Sprintf("txType: %v\n", stxs[i].txType)
buffer.WriteString(str)
str = fmt.Sprintf("fullySpent: %v\n\n", stxs[i].txFullySpent)
buffer.WriteString(str)
}
str = fmt.Sprintf("\n")
buffer.WriteString(str)
return buffer.String()
}
@ -476,7 +609,8 @@ func DebugTxStoreData(txs TxStore) string {
// and (3) missed tickets.
// Do NOT use on mainnet or in production. For debug use only! Make sure
// the blockchain is frozen when you call this function.
func TicketDbThumbprint(tmdb *stake.TicketDB, chainParams *chaincfg.Params) ([]*chainhash.Hash, error) {
func TicketDbThumbprint(tmdb *stake.TicketDB,
chainParams *chaincfg.Params) ([]*chainhash.Hash, error) {
// Container for the three master hashes to go into.
dbThumbprints := make([]*chainhash.Hash, 3, 3)
@ -562,83 +696,3 @@ func TicketDbThumbprint(tmdb *stake.TicketDB, chainParams *chaincfg.Params) ([]*
return dbThumbprints, nil
}
// findWhereDoubleSpent determines where a tx was previously double spent.
// VERY INTENSIVE BLOCKCHAIN SCANNING, USE TO DEBUG SIMULATED BLOCKCHAINS
// ONLY.
func (b *BlockChain) findWhereDoubleSpent(block *dcrutil.Block) error {
height := int64(1)
heightEnd := block.Height()
hashes, err := b.db.FetchHeightRange(height, heightEnd)
if err != nil {
return err
}
var allTxs []*dcrutil.Tx
txs := block.Transactions()[1:]
stxs := block.STransactions()
allTxs = append(txs, stxs...)
for _, hash := range hashes {
curBlock, err := b.getBlockFromHash(&hash)
if err != nil {
return err
}
log.Errorf("Cur block %v", curBlock.Height())
for _, localTx := range allTxs {
for _, localTxIn := range localTx.MsgTx().TxIn {
for _, tx := range curBlock.Transactions()[1:] {
for _, txIn := range tx.MsgTx().TxIn {
if txIn.PreviousOutPoint == localTxIn.PreviousOutPoint {
log.Errorf("Double spend of {hash: %v, idx: %v,"+
" tree: %b}, previously found in tx %v "+
"of block %v txtree regular",
txIn.PreviousOutPoint.Hash,
txIn.PreviousOutPoint.Index,
txIn.PreviousOutPoint.Tree,
tx.Sha(),
hash)
}
}
}
for _, tx := range curBlock.STransactions() {
for _, txIn := range tx.MsgTx().TxIn {
if txIn.PreviousOutPoint == localTxIn.PreviousOutPoint {
log.Errorf("Double spend of {hash: %v, idx: %v,"+
" tree: %b}, previously found in tx %v "+
"of block %v txtree stake\n",
txIn.PreviousOutPoint.Hash,
txIn.PreviousOutPoint.Index,
txIn.PreviousOutPoint.Tree,
tx.Sha(),
hash)
}
}
}
}
}
}
for _, localTx := range stxs {
for _, localTxIn := range localTx.MsgTx().TxIn {
for _, tx := range txs {
for _, txIn := range tx.MsgTx().TxIn {
if txIn.PreviousOutPoint == localTxIn.PreviousOutPoint {
log.Errorf("Double spend of {hash: %v, idx: %v,"+
" tree: %b}, previously found in tx %v "+
"of cur block stake txtree\n",
txIn.PreviousOutPoint.Hash,
txIn.PreviousOutPoint.Index,
txIn.PreviousOutPoint.Tree,
tx.Sha())
}
}
}
}
}
return nil
}


@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -17,18 +17,23 @@ import (
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
"github.com/decred/dcrd/chaincfg/chainhash"
_ "github.com/decred/dcrd/database/memdb"
database "github.com/decred/dcrd/database2"
_ "github.com/decred/dcrd/database2/ffldb"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
// testDbType is the database backend type to use for the tests.
const testDbType = "leveldb"
const (
// testDbType is the database backend type to use for the tests.
testDbType = "ffldb"
// testDbRoot is the root directory used to create all test databases.
const testDbRoot = "testdbs"
// testDbRoot is the root directory used to create all test databases.
testDbRoot = "testdbs"
// blockDataNet is the expected network in the test block data.
blockDataNet = wire.MainNet
)
// fileExists returns whether or not the named file or directory exists.
func fileExists(name string) bool {
@ -43,9 +48,9 @@ func fileExists(name string) bool {
// isSupportedDbType returns whether or not the passed database type is
// currently supported.
func isSupportedDbType(dbType string) bool {
supportedDBs := database.SupportedDBs()
for _, sDbType := range supportedDBs {
if dbType == sDbType {
supportedDrivers := database.SupportedDrivers()
for _, driver := range supportedDrivers {
if dbType == driver {
return true
}
}
@ -63,12 +68,12 @@ func chainSetup(dbName string, params *chaincfg.Params) (*blockchain.BlockChain,
// Handle memory database specially since it doesn't need the disk
// specific handling.
var db database.Db
var db database.DB
tmdb := new(stake.TicketDB)
var teardown func()
if testDbType == "memdb" {
ndb, err := database.CreateDB(testDbType)
ndb, err := database.Create(testDbType)
if err != nil {
return nil, nil, fmt.Errorf("error creating db: %v", err)
}
@ -93,7 +98,7 @@ func chainSetup(dbName string, params *chaincfg.Params) (*blockchain.BlockChain,
// Create a new database to store the accepted blocks into.
dbPath := filepath.Join(testDbRoot, dbName)
_ = os.RemoveAll(dbPath)
ndb, err := database.CreateDB(testDbType, dbPath)
ndb, err := database.Create(testDbType, dbPath, blockDataNet)
if err != nil {
return nil, nil, fmt.Errorf("error creating db: %v", err)
}
@ -102,45 +107,44 @@ func chainSetup(dbName string, params *chaincfg.Params) (*blockchain.BlockChain,
// Setup a teardown function for cleaning up. This function is
// returned to the caller to be invoked when it is done testing.
teardown = func() {
dbVersionPath := filepath.Join(testDbRoot, dbName+".ver")
tmdb.Close()
db.Sync()
db.Close()
os.RemoveAll(dbPath)
os.Remove(dbVersionPath)
os.RemoveAll(testDbRoot)
}
}
// Insert the main network genesis block. This is part of the initial
// database setup.
genesisBlock := dcrutil.NewBlock(params.GenesisBlock)
genesisBlock.SetHeight(int64(0))
_, err := db.InsertBlock(genesisBlock)
// Create the main chain instance.
chain, err := blockchain.New(&blockchain.Config{
DB: db,
TMDB: tmdb,
ChainParams: params,
})
if err != nil {
teardown()
err := fmt.Errorf("failed to insert genesis block: %v", err)
err := fmt.Errorf("failed to create chain instance: %v", err)
return nil, nil, err
}
// Start the ticket database.
tmdb.Initialize(params, db)
err = tmdb.RescanTicketDB()
if err != nil {
return nil, nil, err
}
// Start the ticket database.
tmdb.Initialize(params, db)
tmdb.RescanTicketDB()
chain := blockchain.New(db, tmdb, params, nil, nil)
return chain, teardown, nil
}
// loadTxStore returns a transaction store loaded from a file.
func loadTxStore(filename string) (blockchain.TxStore, error) {
// The txstore file format is:
// <num tx data entries> <tx length> <serialized tx> <blk height>
// <num spent bits> <spent bits>
// loadUtxoView returns a utxo view loaded from a file.
func loadUtxoView(filename string) (*blockchain.UtxoViewpoint, error) {
// The utxostore file format is:
// <tx hash><serialized utxo len><serialized utxo>
//
// All num and length fields are little-endian uint32s. The spent bits
// field is padded to a byte boundary.
// The serialized utxo len is a little endian uint32 and the serialized
// utxo uses the format described in chainio.go.
filename = filepath.Join("testdata/", filename)
filename = filepath.Join("testdata", filename)
fi, err := os.Open(filename)
if err != nil {
return nil, err
@ -155,80 +159,40 @@ func loadTxStore(filename string) (blockchain.TxStore, error) {
}
defer fi.Close()
// Num of transaction store objects.
var numItems uint32
if err := binary.Read(r, binary.LittleEndian, &numItems); err != nil {
return nil, err
}
txStore := make(blockchain.TxStore)
var uintBuf uint32
for height := uint32(0); height < numItems; height++ {
txD := blockchain.TxData{}
// Serialized transaction length.
err = binary.Read(r, binary.LittleEndian, &uintBuf)
view := blockchain.NewUtxoViewpoint()
for {
// Hash of the utxo entry.
var hash chainhash.Hash
_, err := io.ReadAtLeast(r, hash[:], len(hash[:]))
if err != nil {
return nil, err
}
serializedTxLen := uintBuf
if serializedTxLen > wire.MaxBlockPayload {
return nil, fmt.Errorf("Read serialized transaction "+
"length of %d is larger max allowed %d",
serializedTxLen, wire.MaxBlockPayload)
}
// Transaction.
var msgTx wire.MsgTx
err = msgTx.Deserialize(r)
if err != nil {
return nil, err
}
txD.Tx = dcrutil.NewTx(&msgTx)
// Transaction hash.
txHash := msgTx.TxSha()
txD.Hash = &txHash
// Block height the transaction came from.
err = binary.Read(r, binary.LittleEndian, &uintBuf)
if err != nil {
return nil, err
}
txD.BlockHeight = int64(uintBuf)
// Num spent bits.
err = binary.Read(r, binary.LittleEndian, &uintBuf)
if err != nil {
return nil, err
}
numSpentBits := uintBuf
numSpentBytes := numSpentBits / 8
if numSpentBits%8 != 0 {
numSpentBytes++
}
// Packed spent bytes.
spentBytes := make([]byte, numSpentBytes)
_, err = io.ReadFull(r, spentBytes)
if err != nil {
return nil, err
}
// Populate spent data based on spent bits.
txD.Spent = make([]bool, numSpentBits)
for byteNum, spentByte := range spentBytes {
for bit := 0; bit < 8; bit++ {
if uint32((byteNum*8)+bit) < numSpentBits {
if spentByte&(1<<uint(bit)) != 0 {
txD.Spent[(byteNum*8)+bit] = true
}
}
// Expected EOF at the right offset.
if err == io.EOF {
break
}
return nil, err
}
txStore[*txD.Hash] = &txD
// Number of serialized utxo entry bytes.
var numBytes uint32
err = binary.Read(r, binary.LittleEndian, &numBytes)
if err != nil {
return nil, err
}
// Serialized utxo entry.
serialized := make([]byte, numBytes)
_, err = io.ReadAtLeast(r, serialized, int(numBytes))
if err != nil {
return nil, err
}
// Deserialize it and add it to the view.
utxoEntry, err := blockchain.TstDeserializeUtxoEntry(serialized)
if err != nil {
return nil, err
}
view.Entries()[hash] = utxoEntry
}
return txStore, nil
return view, nil
}

blockchain/compress.go (new file, 751 lines)

@ -0,0 +1,751 @@
// Copyright (c) 2015-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg/chainec"
"github.com/decred/dcrd/txscript"
)
// currentCompressionVersion is the current script compression version of the
// database.
const currentCompressionVersion = 1
// -----------------------------------------------------------------------------
// A variable length quantity (VLQ) is an encoding that uses an arbitrary number
// of binary octets to represent an arbitrarily large integer. The scheme
// employs a most significant byte (MSB) base-128 encoding where the high bit in
// each byte indicates whether or not the byte is the final one. In addition,
// to ensure there are no redundant encodings, an offset is subtracted every
// time a group of 7 bits is shifted out. Therefore each integer can be
// represented in exactly one way, and each representation stands for exactly
// one integer.
//
// Another nice property of this encoding is that it provides a compact
// representation of values that are typically used to indicate sizes. For
// example, the values 0 - 127 are represented with a single byte, 128 - 16511
// with two bytes, and 16512 - 2113663 with three bytes.
//
// While the encoding allows arbitrarily large integers, it is artificially
// limited in this code to an unsigned 64-bit integer for efficiency purposes.
//
// Example encodings:
// 0 -> [0x00]
// 127 -> [0x7f] * Max 1-byte value
// 128 -> [0x80 0x00]
// 129 -> [0x80 0x01]
// 255 -> [0x80 0x7f]
// 256 -> [0x81 0x00]
// 16511 -> [0xff 0x7f] * Max 2-byte value
// 16512 -> [0x80 0x80 0x00]
// 32895 -> [0x80 0xff 0x7f]
// 2113663 -> [0xff 0xff 0x7f] * Max 3-byte value
// 270549119 -> [0xff 0xff 0xff 0x7f] * Max 4-byte value
// 2^64-1 -> [0x80 0xfe 0xfe 0xfe 0xfe 0xfe 0xfe 0xfe 0xfe 0x7f]
//
// References:
// https://en.wikipedia.org/wiki/Variable-length_quantity
// http://www.codecodex.com/wiki/Variable-Length_Integers
// -----------------------------------------------------------------------------
// serializeSizeVLQ returns the number of bytes it would take to serialize the
// passed number as a variable-length quantity according to the format described
// above.
func serializeSizeVLQ(n uint64) int {
size := 1
for ; n > 0x7f; n = (n >> 7) - 1 {
size++
}
return size
}
// putVLQ serializes the provided number to a variable-length quantity according
// to the format described above and returns the number of bytes of the encoded
// value. The result is placed directly into the passed byte slice which must
// be at least large enough to handle the number of bytes returned by the
// serializeSizeVLQ function or it will panic.
func putVLQ(target []byte, n uint64) int {
offset := 0
for ; ; offset++ {
// The high bit is set when another byte follows.
highBitMask := byte(0x80)
if offset == 0 {
highBitMask = 0x00
}
target[offset] = byte(n&0x7f) | highBitMask
if n <= 0x7f {
break
}
n = (n >> 7) - 1
}
// Reverse the bytes so it is MSB-encoded.
for i, j := 0, offset; i < j; i, j = i+1, j-1 {
target[i], target[j] = target[j], target[i]
}
return offset + 1
}
// deserializeVLQ deserializes the provided variable-length quantity according
// to the format described above. It also returns the number of bytes
// deserialized.
func deserializeVLQ(serialized []byte) (uint64, int) {
var n uint64
var size int
for _, val := range serialized {
size++
n = (n << 7) | uint64(val&0x7f)
if val&0x80 != 0x80 {
break
}
n++
}
return n, size
}
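The round-trip behavior of the format just described can be checked with a small stand-alone program. The helpers below are copies of the two functions above so the sketch compiles on its own; the values exercised come from the example encodings in the comment.

```go
package main

import "fmt"

// putVLQ encodes n as an MSB base-128 quantity with an offset subtracted per
// 7-bit group, exactly as described in the format comment above.
func putVLQ(target []byte, n uint64) int {
	offset := 0
	for ; ; offset++ {
		// The high bit is set on all bytes except the final one.
		highBitMask := byte(0x80)
		if offset == 0 {
			highBitMask = 0x00
		}
		target[offset] = byte(n&0x7f) | highBitMask
		if n <= 0x7f {
			break
		}
		n = (n >> 7) - 1
	}
	// Reverse the bytes so the encoding is MSB first.
	for i, j := 0, offset; i < j; i, j = i+1, j-1 {
		target[i], target[j] = target[j], target[i]
	}
	return offset + 1
}

// deserializeVLQ decodes a VLQ and reports how many bytes were consumed.
func deserializeVLQ(serialized []byte) (uint64, int) {
	var n uint64
	var size int
	for _, val := range serialized {
		size++
		n = (n << 7) | uint64(val&0x7f)
		if val&0x80 != 0x80 {
			break
		}
		n++
	}
	return n, size
}

func main() {
	// Boundary values from the example encodings in the comment.
	for _, v := range []uint64{0, 127, 128, 16511, 16512, 2113663} {
		var buf [10]byte
		size := putVLQ(buf[:], v)
		decoded, _ := deserializeVLQ(buf[:size])
		fmt.Printf("%7d -> %x -> %d\n", v, buf[:size], decoded)
	}
}
```

For example, 16511 encodes to [0xff 0x7f] and decodes back to 16511, confirming that each integer has exactly one representation.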
// -----------------------------------------------------------------------------
// In order to reduce the size of stored scripts, a domain specific compression
// algorithm is used which recognizes standard scripts and stores them using
// fewer bytes than the original script. The compression algorithm used here
// was obtained from Bitcoin Core, so all credit for the algorithm goes to it.
//
// The general serialized format is:
//
// <script size or type><script data>
//
// Field Type Size
// script size or type VLQ variable
// script data []byte variable
//
// The specific serialized format for each recognized standard script is:
//
// - Pay-to-pubkey-hash: (21 bytes) - <0><20-byte pubkey hash>
// - Pay-to-script-hash: (21 bytes) - <1><20-byte script hash>
// - Pay-to-pubkey**: (33 bytes) - <2, 3, 4, or 5><32-byte pubkey X value>
// 2, 3 = compressed pubkey with bit 0 specifying the y coordinate to use
// 4, 5 = uncompressed pubkey with bit 0 specifying the y coordinate to use
// ** Only valid public keys starting with 0x02, 0x03, and 0x04 are supported.
//
// Any scripts which are not recognized as one of the aforementioned standard
// scripts are encoded using the general serialized format and encode the script
// size as the sum of the actual size of the script and the number of special
// cases.
// -----------------------------------------------------------------------------
// The following constants specify the special constants used to identify a
// special script type in the domain-specific compressed script encoding.
//
// NOTE: This section specifically does not use iota since these values are
// serialized and must be stable for long-term storage.
const (
// cstPayToPubKeyHash identifies a compressed pay-to-pubkey-hash script.
cstPayToPubKeyHash = 0
// cstPayToScriptHash identifies a compressed pay-to-script-hash script.
cstPayToScriptHash = 1
// cstPayToPubKeyCompEven identifies a compressed pay-to-pubkey script
// paying to a compressed pubkey whose y coordinate is not odd.
cstPayToPubKeyCompEven = 2
// cstPayToPubKeyCompOdd identifies a compressed pay-to-pubkey script
// paying to a compressed pubkey whose y coordinate is odd.
cstPayToPubKeyCompOdd = 3
// cstPayToPubKeyUncompEven identifies a compressed pay-to-pubkey script
// paying to an uncompressed pubkey whose y coordinate is not odd when
// compressed.
cstPayToPubKeyUncompEven = 4
// cstPayToPubKeyUncompOdd identifies a compressed pay-to-pubkey script
// paying to an uncompressed pubkey whose y coordinate is odd when
// compressed.
cstPayToPubKeyUncompOdd = 5
// numSpecialScripts is the number of script size values reserved to
// identify special script types in the domain-specific script
// compression algorithm. It is half of 128, the smallest value that no
// longer fits in a single byte in VLQ format. Any script whose size
// prefix is 64 or higher is an unrecognized script that is stored
// uncompressed. Because only six special script encodings are
// currently used by Decred, there is a large amount of room for future
// upgrades to the compression algorithm with scripts that are common,
// such as those for the staking system.
numSpecialScripts = 64
)
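To make the reservation concrete: a script matching none of the special forms is stored with a VLQ prefix of its size plus numSpecialScripts, so any unrecognized script up to 63 bytes still pays only a one-byte prefix. A quick illustrative sketch of the storage cost (the helper names are assumptions for this example, not the package's code):

```go
package main

import "fmt"

const numSpecialScripts = 64

// vlqSize mirrors serializeSizeVLQ from the text: one byte per 7-bit group
// with an offset subtracted per shift.
func vlqSize(n uint64) int {
	size := 1
	for ; n > 0x7f; n = (n >> 7) - 1 {
		size++
	}
	return size
}

// storedSize is the total cost of an unrecognized script of the given length:
// the VLQ-encoded prefix (length + numSpecialScripts) plus the raw script.
func storedSize(scriptLen int) int {
	return vlqSize(uint64(scriptLen+numSpecialScripts)) + scriptLen
}

func main() {
	fmt.Println(storedSize(40)) // 40+64=104 fits one VLQ byte: prints 41
	fmt.Println(storedSize(64)) // 64+64=128 needs two VLQ bytes: prints 66
}
```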
// isPubKeyHash returns whether or not the passed public key script is a
// standard pay-to-pubkey-hash script along with the pubkey hash it is paying to
// if it is.
func isPubKeyHash(script []byte) (bool, []byte) {
if len(script) == 25 && script[0] == txscript.OP_DUP &&
script[1] == txscript.OP_HASH160 &&
script[2] == txscript.OP_DATA_20 &&
script[23] == txscript.OP_EQUALVERIFY &&
script[24] == txscript.OP_CHECKSIG {
return true, script[3:23]
}
return false, nil
}
// isScriptHash returns whether or not the passed public key script is a
// standard pay-to-script-hash script along with the script hash it is paying to
// if it is.
func isScriptHash(script []byte) (bool, []byte) {
if len(script) == 23 && script[0] == txscript.OP_HASH160 &&
script[1] == txscript.OP_DATA_20 &&
script[22] == txscript.OP_EQUAL {
return true, script[2:22]
}
return false, nil
}
// isPubKey returns whether or not the passed public key script is a standard
// pay-to-pubkey script that pays to a valid compressed or uncompressed public
// key along with the serialized pubkey it is paying to if it is.
//
// NOTE: This function ensures the public key is actually valid since the
// compression algorithm requires valid pubkeys. It does not support hybrid
// pubkeys. This means that even if the script has the correct form for a
// pay-to-pubkey script, this function will only return true when it is paying
// to a valid compressed or uncompressed pubkey.
func isPubKey(script []byte) (bool, []byte) {
// Pay-to-compressed-pubkey script.
if len(script) == 35 && script[0] == txscript.OP_DATA_33 &&
script[34] == txscript.OP_CHECKSIG && (script[1] == 0x02 ||
script[1] == 0x03) {
// Ensure the public key is valid.
serializedPubKey := script[1:34]
_, err := chainec.Secp256k1.ParsePubKey(serializedPubKey)
if err == nil {
return true, serializedPubKey
}
}
// Pay-to-uncompressed-pubkey script.
if len(script) == 67 && script[0] == txscript.OP_DATA_65 &&
script[66] == txscript.OP_CHECKSIG && script[1] == 0x04 {
// Ensure the public key is valid.
serializedPubKey := script[1:66]
_, err := chainec.Secp256k1.ParsePubKey(serializedPubKey)
if err == nil {
return true, serializedPubKey
}
}
return false, nil
}
// compressedScriptSize returns the number of bytes the passed script would take
// when encoded with the domain specific compression algorithm described above.
func compressedScriptSize(scriptVersion uint16, pkScript []byte,
compressionVersion uint32) int {
// Pay-to-pubkey-hash script.
if valid, _ := isPubKeyHash(pkScript); valid {
return 21
}
// Pay-to-script-hash script.
if valid, _ := isScriptHash(pkScript); valid {
return 21
}
// Pay-to-pubkey (compressed or uncompressed) script.
if valid, _ := isPubKey(pkScript); valid {
return 33
}
// When none of the above special cases apply, encode the script as is
// preceded by the sum of its size and the number of special cases
// encoded as a variable length quantity.
return serializeSizeVLQ(uint64(len(pkScript)+numSpecialScripts)) +
len(pkScript)
}
// decodeCompressedScriptSize treats the passed serialized bytes as a compressed
// script, possibly followed by other data, and returns the number of bytes it
// occupies taking into account the special encoding of the script size by the
// domain specific compression algorithm described above.
func decodeCompressedScriptSize(serialized []byte, compressionVersion uint32) int {
scriptSize, bytesRead := deserializeVLQ(serialized)
if bytesRead == 0 {
return 0
}
switch scriptSize {
case cstPayToPubKeyHash:
return 21
case cstPayToScriptHash:
return 21
case cstPayToPubKeyCompEven, cstPayToPubKeyCompOdd,
cstPayToPubKeyUncompEven, cstPayToPubKeyUncompOdd:
return 33
}
scriptSize -= numSpecialScripts
scriptSize += uint64(bytesRead)
return int(scriptSize)
}
// putCompressedScript compresses the passed script according to the domain
// specific compression algorithm described above directly into the passed
// target byte slice. The target byte slice must be at least large enough to
// handle the number of bytes returned by the compressedScriptSize function or
// it will panic.
func putCompressedScript(target []byte, scriptVersion uint16, pkScript []byte,
compressionVersion uint32) int {
// An empty script is encoded as a single 0x00 byte.
if len(pkScript) == 0 {
target[0] = 0x00
return 1
}
// Pay-to-pubkey-hash script.
if valid, hash := isPubKeyHash(pkScript); valid {
target[0] = cstPayToPubKeyHash
copy(target[1:21], hash)
return 21
}
// Pay-to-script-hash script.
if valid, hash := isScriptHash(pkScript); valid {
target[0] = cstPayToScriptHash
copy(target[1:21], hash)
return 21
}
// Pay-to-pubkey (compressed or uncompressed) script.
if valid, serializedPubKey := isPubKey(pkScript); valid {
pubKeyFormat := serializedPubKey[0]
switch pubKeyFormat {
case 0x02, 0x03:
if pubKeyFormat == 0x02 {
target[0] = cstPayToPubKeyCompEven
}
if pubKeyFormat == 0x03 {
target[0] = cstPayToPubKeyCompOdd
}
copy(target[1:33], serializedPubKey[1:33])
return 33
case 0x04:
// Encode the oddness of the serialized pubkey into the
// compressed script type.
target[0] = cstPayToPubKeyUncompEven
if (serializedPubKey[64] & 0x01) == 0x01 {
target[0] = cstPayToPubKeyUncompOdd
}
copy(target[1:33], serializedPubKey[1:33])
return 33
}
}
// When none of the above special cases apply, encode the unmodified
// script preceded by the sum of its size and the number of special
// cases encoded as a variable length quantity.
encodedSize := uint64(len(pkScript) + numSpecialScripts)
vlqSizeLen := putVLQ(target, encodedSize)
copy(target[vlqSizeLen:], pkScript)
return vlqSizeLen + len(pkScript)
}
// decompressScript returns the original script obtained by decompressing the
// passed compressed script according to the domain specific compression
// algorithm described above.
//
// NOTE: The script parameter must already have been proven to be long enough
// to contain the number of bytes returned by decodeCompressedScriptSize or it
// will panic. This is acceptable since it is only an internal function.
func decompressScript(compressedPkScript []byte,
compressionVersion uint32) []byte {
// Empty scripts, specified by 0x00, are considered nil.
if len(compressedPkScript) == 0 {
return nil
}
// Decode the script size and examine it for the special cases.
encodedScriptSize, bytesRead := deserializeVLQ(compressedPkScript)
switch encodedScriptSize {
// Pay-to-pubkey-hash script. The resulting script is:
// <OP_DUP><OP_HASH160><20 byte hash><OP_EQUALVERIFY><OP_CHECKSIG>
case cstPayToPubKeyHash:
pkScript := make([]byte, 25)
pkScript[0] = txscript.OP_DUP
pkScript[1] = txscript.OP_HASH160
pkScript[2] = txscript.OP_DATA_20
copy(pkScript[3:], compressedPkScript[bytesRead:bytesRead+20])
pkScript[23] = txscript.OP_EQUALVERIFY
pkScript[24] = txscript.OP_CHECKSIG
return pkScript
// Pay-to-script-hash script. The resulting script is:
// <OP_HASH160><20 byte script hash><OP_EQUAL>
case cstPayToScriptHash:
pkScript := make([]byte, 23)
pkScript[0] = txscript.OP_HASH160
pkScript[1] = txscript.OP_DATA_20
copy(pkScript[2:], compressedPkScript[bytesRead:bytesRead+20])
pkScript[22] = txscript.OP_EQUAL
return pkScript
// Pay-to-compressed-pubkey script. The resulting script is:
// <OP_DATA_33><33 byte compressed pubkey><OP_CHECKSIG>
case cstPayToPubKeyCompEven, cstPayToPubKeyCompOdd:
pkScript := make([]byte, 35)
pkScript[0] = txscript.OP_DATA_33
oddness := byte(0x02)
if encodedScriptSize == cstPayToPubKeyCompOdd {
oddness = 0x03
}
pkScript[1] = oddness
copy(pkScript[2:], compressedPkScript[bytesRead:bytesRead+32])
pkScript[34] = txscript.OP_CHECKSIG
return pkScript
// Pay-to-uncompressed-pubkey script. The resulting script is:
// <OP_DATA_65><65 byte uncompressed pubkey><OP_CHECKSIG>
case cstPayToPubKeyUncompEven, cstPayToPubKeyUncompOdd:
// Change the leading byte to the appropriate compressed pubkey
// identifier (0x02 or 0x03) so it can be decoded as a
// compressed pubkey. This really should never fail since the
// encoding ensures it is valid before compressing to this type.
compressedKey := make([]byte, 33)
oddness := byte(0x02)
if encodedScriptSize == cstPayToPubKeyUncompOdd {
oddness = 0x03
}
compressedKey[0] = oddness
copy(compressedKey[1:], compressedPkScript[1:])
key, err := chainec.Secp256k1.ParsePubKey(compressedKey)
if err != nil {
return nil
}
pkScript := make([]byte, 67)
pkScript[0] = txscript.OP_DATA_65
copy(pkScript[1:], key.SerializeUncompressed())
pkScript[66] = txscript.OP_CHECKSIG
return pkScript
}
// When none of the special cases apply, the script was encoded using
// the general format, so reduce the script size by the number of
// special cases and return the unmodified script.
scriptSize := int(encodedScriptSize - numSpecialScripts)
pkScript := make([]byte, scriptSize)
copy(pkScript, compressedPkScript[bytesRead:bytesRead+scriptSize])
return pkScript
}
// -----------------------------------------------------------------------------
// In order to reduce the size of stored amounts, a domain specific compression
// algorithm is used which relies on there typically being a lot of zeroes at
// the end of the amounts. The compression algorithm used here was obtained
// from Bitcoin Core, so all credits for the algorithm go to it.
//
// While this is simply exchanging one uint64 for another, the resulting value
// for typical amounts has a much smaller magnitude which results in fewer bytes
// when encoded as variable length quantity. For example, consider the amount
// of 0.1 DCR which is 10000000 atoms. Encoding 10000000 as a VarInt would take
// 4 bytes while encoding the compressed value of 8 as a VarInt only takes 1 byte.
//
// Essentially the compression is achieved by splitting the value into an
// exponent in the range [0-9] and a digit in the range [1-9], when possible,
// and encoding them in a way that can be decoded. More specifically, the
// encoding is as follows:
// - 0 is 0
// - Find the exponent, e, as the largest power of 10 that evenly divides the
// value up to a maximum of 9
// - When e < 9, the final digit can't be 0 so store it as d and remove it by
// dividing the value by 10 (call the result n). The encoded value is thus:
// 1 + 10*(9*n + d-1) + e
// - When e==9, the only thing known is the amount is not 0. The encoded value
// is thus:
// 1 + 10*(n-1) + e == 10 + 10*(n-1)
//
// Example encodings:
// (The numbers in parentheses are the number of bytes when serialized as a VarInt)
// 0 (1) -> 0 (1) * 0.00000000 DCR
// 1000 (2) -> 4 (1) * 0.00001000 DCR
// 10000 (2) -> 5 (1) * 0.00010000 DCR
// 12345678 (4) -> 111111101 (4) * 0.12345678 DCR
// 50000000 (4) -> 48 (1) * 0.50000000 DCR
// 100000000 (4) -> 9 (1) * 1.00000000 DCR
// 500000000 (5) -> 49 (1) * 5.00000000 DCR
// 1000000000 (5) -> 10 (1) * 10.00000000 DCR
// -----------------------------------------------------------------------------
// compressTxOutAmount compresses the passed amount according to the domain
// specific compression algorithm described above.
func compressTxOutAmount(amount uint64) uint64 {
// No need to do any work if it's zero.
if amount == 0 {
return 0
}
// Find the largest power of 10 (max of 9) that evenly divides the
// value.
exponent := uint64(0)
for amount%10 == 0 && exponent < 9 {
amount /= 10
exponent++
}
// The compressed result for exponents less than 9 is:
// 1 + 10*(9*n + d-1) + e
if exponent < 9 {
lastDigit := amount % 10
amount /= 10
return 1 + 10*(9*amount+lastDigit-1) + exponent
}
// The compressed result for an exponent of 9 is:
// 1 + 10*(n-1) + e == 10 + 10*(n-1)
return 10 + 10*(amount-1)
}
// decompressTxOutAmount returns the original amount the passed compressed
// amount represents according to the domain specific compression algorithm
// described above.
func decompressTxOutAmount(amount uint64) uint64 {
// No need to do any work if it's zero.
if amount == 0 {
return 0
}
// The decompressed amount is either of the following two equations:
// x = 1 + 10*(9*n + d - 1) + e
// x = 1 + 10*(n - 1) + 9
amount--
// The decompressed amount is now one of the following two equations:
// x = 10*(9*n + d - 1) + e
// x = 10*(n - 1) + 9
exponent := amount % 10
amount /= 10
// The decompressed amount is now one of the following two equations:
// x = 9*n + d - 1 | where e < 9
// x = n - 1 | where e = 9
n := uint64(0)
if exponent < 9 {
lastDigit := amount%9 + 1
amount /= 9
n = amount*10 + lastDigit
} else {
n = amount + 1
}
// Apply the exponent.
for ; exponent > 0; exponent-- {
n *= 10
}
return n
}
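The encoding described above can be exercised end to end with a self-contained sketch. The `compressAmount`/`decompressAmount` names are hypothetical; the logic mirrors the two functions above:

```go
package main

import "fmt"

// compressAmount mirrors the scheme described above: strip trailing decimal
// zeroes into an exponent e in [0,9] and, when e < 9, fold the (non-zero)
// final digit d into the encoding.
func compressAmount(n uint64) uint64 {
	if n == 0 {
		return 0
	}
	e := uint64(0)
	for n%10 == 0 && e < 9 {
		n /= 10
		e++
	}
	if e < 9 {
		d := n % 10
		n /= 10
		return 1 + 10*(9*n+d-1) + e
	}
	return 10 + 10*(n-1)
}

// decompressAmount inverts compressAmount.
func decompressAmount(x uint64) uint64 {
	if x == 0 {
		return 0
	}
	x--
	e := x % 10
	x /= 10
	var n uint64
	if e < 9 {
		d := x%9 + 1
		x /= 9
		n = x*10 + d
	} else {
		n = x + 1
	}
	for ; e > 0; e-- {
		n *= 10
	}
	return n
}

func main() {
	for _, amt := range []uint64{0, 1000, 12345678, 50000000, 1000000000} {
		c := compressAmount(amt)
		fmt.Println(amt, "->", c, "->", decompressAmount(c))
	}
}
```

Running it reproduces the example table: 1000 compresses to 4, 50000000 to 48, and each value round-trips back to itself.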
// -----------------------------------------------------------------------------
// Compressed transaction outputs for utxos consist of an amount, a script
// version, and a public key script, with the amount and script compressed
// using the domain specific compression algorithms previously described.
//
// The serialized format is:
//
// <compressed amount><script version><compressed script>
//
// Field Type Size
// compressed amount VLQ variable
// script version VLQ variable
// compressed script []byte variable
// -----------------------------------------------------------------------------
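Concretely, the dust pay-to-pubkey-hash vector from the package's tests lays out as the amount VLQ, the script version VLQ, and the compressed script. A sketch of that layout (hex values taken from the test vectors in compress_test.go):

```go
package main

import "fmt"

func main() {
	// 546 atoms compresses to 4911, whose VLQ encoding is a5 2f.
	amount := "a52f"
	// Script version 0 encodes as the single VLQ byte 00.
	scriptVersion := "00"
	// Compressed p2pkh script: 0x00 type marker + 20-byte hash160.
	script := "001018853670f9f3b0582c5b9ee8ce93764ac32b93"
	fmt.Println(amount + scriptVersion + script)
}
```

The concatenation matches the serialized txout `a52f00001018853670f9f3b0582c5b9ee8ce93764ac32b93` used in the tests.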
// compressedTxOutSize returns the number of bytes the passed transaction output
// fields would take when encoded with the format described above. The
// preCompressed flag indicates the provided script is already compressed. This
// is useful since loaded utxo entries are not decompressed until the output is
// accessed.
func compressedTxOutSize(amount uint64, scriptVersion uint16, pkScript []byte,
compressionVersion uint32, preCompressed bool, hasAmount bool) int {
scriptVersionSize := serializeSizeVLQ(uint64(scriptVersion))
if preCompressed && !hasAmount {
return scriptVersionSize + len(pkScript)
}
if preCompressed && hasAmount {
return scriptVersionSize + serializeSizeVLQ(compressTxOutAmount(amount)) +
len(pkScript)
}
if !preCompressed && !hasAmount {
return scriptVersionSize + compressedScriptSize(scriptVersion,
pkScript, compressionVersion)
}
// if !preCompressed && hasAmount
return scriptVersionSize + serializeSizeVLQ(compressTxOutAmount(amount)) +
compressedScriptSize(scriptVersion, pkScript, compressionVersion)
}
// putCompressedTxOut potentially compresses the passed amount and script
// according to their domain specific compression algorithms and encodes them
// directly into the passed target byte slice with the format described above.
// The preCompressed flag indicates the provided script is already compressed,
// in which case it is copied as-is rather than being compressed again. This
// is useful since loaded utxo entries are not decompressed until the output
// is accessed. The target byte slice must be at least large enough to handle
// the number of bytes returned by the compressedTxOutSize function or it will
// panic.
func putCompressedTxOut(target []byte, amount uint64, scriptVersion uint16,
pkScript []byte, compressionVersion uint32, preCompressed bool,
hasAmount bool) int {
if preCompressed && hasAmount {
offset := putVLQ(target, compressTxOutAmount(amount))
offset += putVLQ(target[offset:], uint64(scriptVersion))
copy(target[offset:], pkScript)
return offset + len(pkScript)
}
if preCompressed && !hasAmount {
offset := putVLQ(target, uint64(scriptVersion))
copy(target[offset:], pkScript)
return offset + len(pkScript)
}
if !preCompressed && !hasAmount {
offset := putVLQ(target, uint64(scriptVersion))
offset += putCompressedScript(target[offset:], scriptVersion, pkScript,
compressionVersion)
return offset
}
// if !preCompressed && hasAmount
offset := putVLQ(target, compressTxOutAmount(amount))
offset += putVLQ(target[offset:], uint64(scriptVersion))
offset += putCompressedScript(target[offset:], scriptVersion, pkScript,
compressionVersion)
return offset
}
// decodeCompressedTxOut decodes the passed compressed txout, possibly followed
// by other data, into its compressed amount and compressed script and returns
// them along with the number of bytes they occupied.
func decodeCompressedTxOut(serialized []byte, compressionVersion uint32,
hasAmount bool) (int64, uint16, []byte, int, error) {
var amount int64
var bytesRead int
var offset int
if hasAmount {
// Deserialize the compressed amount and ensure there are bytes
// remaining for the compressed script.
var compressedAmount uint64
compressedAmount, bytesRead = deserializeVLQ(serialized)
if bytesRead >= len(serialized) {
return 0, 0, nil, bytesRead, errDeserialize("unexpected end of " +
"data after compressed amount")
}
amount = int64(decompressTxOutAmount(compressedAmount))
offset += bytesRead
}
// Decode the script version.
var scriptVersion uint64
scriptVersion, bytesRead = deserializeVLQ(serialized[offset:])
offset += bytesRead
// Decode the compressed script size and ensure there are enough bytes
// left in the slice for it.
scriptSize := decodeCompressedScriptSize(serialized[offset:],
compressionVersion)
if scriptSize < 0 {
return 0, 0, nil, offset, errDeserialize("negative script size")
}
if len(serialized[offset:]) < scriptSize {
return 0, 0, nil, offset, errDeserialize(fmt.Sprintf("unexpected end of "+
"data after script size (got %v, need %v)", len(serialized[offset:]),
scriptSize))
}
// Make a copy of the compressed script so the original serialized data
// can be released as soon as possible.
compressedScript := make([]byte, scriptSize)
copy(compressedScript, serialized[offset:offset+scriptSize])
return amount, uint16(scriptVersion), compressedScript,
offset + scriptSize, nil
}
// -----------------------------------------------------------------------------
// Decred specific transaction encoding flags
//
// Details about a transaction needed to determine how it may be spent
// according to consensus rules are given by these flags.
//
// The following details are encoded into a single byte, where bit indexes
// are zero-based:
// 0: Is coinbase
// 1: Has an expiry
// 2-3: Transaction type
// 4: Fully spent
// 5-7: Unused
//
// Bits 0, 1, and 4 are simple flags, while the transaction type is an integer
// encoded in bits 2-3 and extracted with a bitmask and shift.
//
// The fully spent flag should always come as the *last* flag (highest bit
// index) in this data type should flags be updated to include more rules in
// the future, such as rules governing new script OP codes. This ensures these
// flags may still be used in the utxo serialized data without consequence,
// where the last flag indicating fully spent will always be zeroed.
//
// -----------------------------------------------------------------------------
const (
// txTypeBitmask is the bitmask that yields bits 2 and 3 (the transaction
// type) from the flags byte.
txTypeBitmask = 0x0c
// txTypeShift is the number of bits to shift flags to the right to yield the
// correct integer value after applying the bitmask with AND.
txTypeShift = 2
)
// encodeFlags encodes transaction flags into a single byte.
func encodeFlags(isCoinBase bool, hasExpiry bool, txType stake.TxType,
fullySpent bool) byte {
b := uint8(txType)
b <<= txTypeShift
if isCoinBase {
b |= 0x01 // Set bit 0
}
if hasExpiry {
b |= 0x02 // Set bit 1
}
if fullySpent {
b |= 0x10 // Set bit 4
}
return b
}
// decodeFlags decodes transaction flags from a single byte into their respective
// data types.
func decodeFlags(b byte) (bool, bool, stake.TxType, bool) {
isCoinBase := b&0x01 != 0
hasExpiry := b&(1<<1) != 0
fullySpent := b&(1<<4) != 0
txType := stake.TxType((b & txTypeBitmask) >> txTypeShift)
return isCoinBase, hasExpiry, txType, fullySpent
}
// decodeFlagsFullySpent decodes whether or not a transaction was fully spent.
func decodeFlagsFullySpent(b byte) bool {
return b&(1<<4) != 0
}
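A quick round-trip of the flag layout can be sketched standalone. The local `TxType` stands in for `stake.TxType` (the real transaction type values are defined in the stake package):

```go
package main

import "fmt"

// TxType stands in for stake.TxType for illustration only.
type TxType uint8

func encodeFlags(isCoinBase, hasExpiry bool, txType TxType, fullySpent bool) byte {
	b := byte(txType) << 2 // transaction type occupies bits 2-3
	if isCoinBase {
		b |= 0x01 // bit 0
	}
	if hasExpiry {
		b |= 0x02 // bit 1
	}
	if fullySpent {
		b |= 0x10 // bit 4
	}
	return b
}

func decodeFlags(b byte) (isCoinBase, hasExpiry bool, txType TxType, fullySpent bool) {
	return b&0x01 != 0, b&0x02 != 0, TxType(b&0x0c) >> 2, b&0x10 != 0
}

func main() {
	// Coinbase, no expiry, transaction type 2, fully spent.
	flags := encodeFlags(true, false, 2, true)
	fmt.Printf("flags: %#02x\n", flags) // 0x19
	fmt.Println(decodeFlags(flags))     // true false 2 true
}
```

Note that because fully spent is the highest bit used, masking it off never disturbs the other fields, which is what allows these flags to be reused in the utxo serialization.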

blockchain/compress_test.go (new file)
@@ -0,0 +1,575 @@
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"bytes"
"encoding/hex"
"testing"
)
// hexToBytes converts the passed hex string into bytes and will panic if there
// is an error. This is only provided for the hard-coded constants so errors in
// the source code can be detected. It will only (and must only) be called with
// hard-coded values.
func hexToBytes(s string) []byte {
b, err := hex.DecodeString(s)
if err != nil {
panic("invalid hex in source file: " + s)
}
return b
}
// TestVLQ ensures the variable length quantity serialization, deserialization,
// and size calculation works as expected.
func TestVLQ(t *testing.T) {
t.Parallel()
tests := []struct {
val uint64
serialized []byte
}{
{0, hexToBytes("00")},
{1, hexToBytes("01")},
{127, hexToBytes("7f")},
{128, hexToBytes("8000")},
{129, hexToBytes("8001")},
{255, hexToBytes("807f")},
{256, hexToBytes("8100")},
{16383, hexToBytes("fe7f")},
{16384, hexToBytes("ff00")},
{16511, hexToBytes("ff7f")}, // Max 2-byte value
{16512, hexToBytes("808000")},
{16513, hexToBytes("808001")},
{16639, hexToBytes("80807f")},
{32895, hexToBytes("80ff7f")},
{2113663, hexToBytes("ffff7f")}, // Max 3-byte value
{2113664, hexToBytes("80808000")},
{270549119, hexToBytes("ffffff7f")}, // Max 4-byte value
{270549120, hexToBytes("8080808000")},
{2147483647, hexToBytes("86fefefe7f")},
{2147483648, hexToBytes("86fefeff00")},
{4294967295, hexToBytes("8efefefe7f")}, // Max uint32, 5 bytes
}
for _, test := range tests {
// Ensure the function to calculate the serialized size without
// actually serializing the value is calculated properly.
gotSize := serializeSizeVLQ(test.val)
if gotSize != len(test.serialized) {
t.Errorf("serializeSizeVLQ: did not get expected size "+
"for %d - got %d, want %d", test.val, gotSize,
len(test.serialized))
continue
}
// Ensure the value serializes to the expected bytes.
gotBytes := make([]byte, gotSize)
gotBytesWritten := putVLQ(gotBytes, test.val)
if !bytes.Equal(gotBytes, test.serialized) {
t.Errorf("putVLQ: did not get expected bytes "+
"for %d - got %x, want %x", test.val, gotBytes,
test.serialized)
continue
}
if gotBytesWritten != len(test.serialized) {
t.Errorf("putVLQ: did not get expected number "+
"of bytes written for %d - got %d, want %d",
test.val, gotBytesWritten, len(test.serialized))
continue
}
// Ensure the serialized bytes deserialize to the expected
// value.
gotVal, gotBytesRead := deserializeVLQ(test.serialized)
if gotVal != test.val {
t.Errorf("deserializeVLQ: did not get expected value "+
"for %x - got %d, want %d", test.serialized,
gotVal, test.val)
continue
}
if gotBytesRead != len(test.serialized) {
t.Errorf("deserializeVLQ: did not get expected number "+
"of bytes read for %x - got %d, want %d",
test.serialized, gotBytesRead,
len(test.serialized))
continue
}
}
}
// TestScriptCompression ensures the domain-specific script compression and
// decompression works as expected.
func TestScriptCompression(t *testing.T) {
t.Parallel()
tests := []struct {
name string
version uint32
scriptVersion uint16
uncompressed []byte
compressed []byte
}{
{
name: "nil",
version: 1,
scriptVersion: 0,
uncompressed: nil,
compressed: hexToBytes("40"),
},
{
name: "pay-to-pubkey-hash 1",
version: 1,
scriptVersion: 0,
uncompressed: hexToBytes("76a9141018853670f9f3b0582c5b9ee8ce93764ac32b9388ac"),
compressed: hexToBytes("001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
},
{
name: "pay-to-pubkey-hash 2",
version: 1,
scriptVersion: 0,
uncompressed: hexToBytes("76a914e34cce70c86373273efcc54ce7d2a491bb4a0e8488ac"),
compressed: hexToBytes("00e34cce70c86373273efcc54ce7d2a491bb4a0e84"),
},
{
name: "pay-to-script-hash 1",
version: 1,
scriptVersion: 0,
uncompressed: hexToBytes("a914da1745e9b549bd0bfa1a569971c77eba30cd5a4b87"),
compressed: hexToBytes("01da1745e9b549bd0bfa1a569971c77eba30cd5a4b"),
},
{
name: "pay-to-script-hash 2",
version: 1,
scriptVersion: 0,
uncompressed: hexToBytes("a914f815b036d9bbbce5e9f2a00abd1bf3dc91e9551087"),
compressed: hexToBytes("01f815b036d9bbbce5e9f2a00abd1bf3dc91e95510"),
},
{
name: "pay-to-pubkey compressed 0x02",
version: 1,
scriptVersion: 0,
uncompressed: hexToBytes("2102192d74d0cb94344c9569c2e77901573d8d7903c3ebec3a957724895dca52c6b4ac"),
compressed: hexToBytes("02192d74d0cb94344c9569c2e77901573d8d7903c3ebec3a957724895dca52c6b4"),
},
{
name: "pay-to-pubkey compressed 0x03",
version: 1,
scriptVersion: 0,
uncompressed: hexToBytes("2103b0bd634234abbb1ba1e986e884185c61cf43e001f9137f23c2c409273eb16e65ac"),
compressed: hexToBytes("03b0bd634234abbb1ba1e986e884185c61cf43e001f9137f23c2c409273eb16e65"),
},
{
name: "pay-to-pubkey uncompressed 0x04 even",
version: 1,
scriptVersion: 0,
uncompressed: hexToBytes("4104192d74d0cb94344c9569c2e77901573d8d7903c3ebec3a957724895dca52c6b40d45264838c0bd96852662ce6a847b197376830160c6d2eb5e6a4c44d33f453eac"),
compressed: hexToBytes("04192d74d0cb94344c9569c2e77901573d8d7903c3ebec3a957724895dca52c6b4"),
},
{
name: "pay-to-pubkey uncompressed 0x04 odd",
version: 1,
scriptVersion: 0,
uncompressed: hexToBytes("410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac"),
compressed: hexToBytes("0511db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5c"),
},
{
name: "pay-to-pubkey invalid pubkey",
version: 1,
scriptVersion: 0,
uncompressed: hexToBytes("3302aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaac"),
compressed: hexToBytes("633302aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaac"),
},
{
name: "null data",
version: 1,
scriptVersion: 0,
uncompressed: hexToBytes("6a200102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20"),
compressed: hexToBytes("626a200102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20"),
},
{
name: "requires 2 size bytes - data push 200 bytes",
version: 1,
scriptVersion: 0,
uncompressed: append(hexToBytes("4cc8"), bytes.Repeat([]byte{0x00}, 200)...),
// [0x81, 0x0a] = 266 as a variable length quantity
// (202-byte script + 64 reserved special script types)
// [0x4c, 0xc8] = OP_PUSHDATA1 200
compressed: append(hexToBytes("810a4cc8"), bytes.Repeat([]byte{0x00}, 200)...),
},
}
for _, test := range tests {
// Ensure the function to calculate the serialized size without
// actually serializing the value is calculated properly.
gotSize := compressedScriptSize(test.scriptVersion, test.uncompressed,
test.version)
if gotSize != len(test.compressed) {
t.Errorf("compressedScriptSize (%s): did not get "+
"expected size - got %d, want %d", test.name,
gotSize, len(test.compressed))
continue
}
// Ensure the script compresses to the expected bytes.
gotCompressed := make([]byte, gotSize)
gotBytesWritten := putCompressedScript(gotCompressed, test.scriptVersion,
test.uncompressed, test.version)
if !bytes.Equal(gotCompressed, test.compressed) {
t.Errorf("putCompressedScript (%s): did not get "+
"expected bytes - got %x, want %x", test.name,
gotCompressed, test.compressed)
continue
}
if gotBytesWritten != len(test.compressed) {
t.Errorf("putCompressedScript (%s): did not get "+
"expected number of bytes written - got %d, "+
"want %d", test.name, gotBytesWritten,
len(test.compressed))
continue
}
// Ensure the compressed script size is properly decoded from
// the compressed script.
gotDecodedSize := decodeCompressedScriptSize(test.compressed,
test.version)
if gotDecodedSize != len(test.compressed) {
t.Errorf("decodeCompressedScriptSize (%s): did not get "+
"expected size - got %d, want %d", test.name,
gotDecodedSize, len(test.compressed))
continue
}
// Ensure the script decompresses to the expected bytes.
gotDecompressed := decompressScript(test.compressed, test.version)
if !bytes.Equal(gotDecompressed, test.uncompressed) {
t.Errorf("decompressScript (%s): did not get expected "+
"bytes - got %x, want %x", test.name,
gotDecompressed, test.uncompressed)
continue
}
}
}
// TestScriptCompressionErrors ensures calling various functions related to
// script compression with incorrect data returns the expected results.
func TestScriptCompressionErrors(t *testing.T) {
t.Parallel()
// A nil script must result in a decoded size of 0.
if gotSize := decodeCompressedScriptSize(nil, 1); gotSize != 0 {
t.Fatalf("decodeCompressedScriptSize with nil script did not "+
"return 0 - got %d", gotSize)
}
// A nil script must result in a nil decompressed script.
if gotScript := decompressScript(nil, 1); gotScript != nil {
t.Fatalf("decompressScript with nil script did not return nil "+
"decompressed script - got %x", gotScript)
}
// A compressed script for a pay-to-pubkey (uncompressed) that results
// in an invalid pubkey must result in a nil decompressed script.
compressedScript := hexToBytes("04012d74d0cb94344c9569c2e77901573d8d" +
"7903c3ebec3a957724895dca52c6b4")
if gotScript := decompressScript(compressedScript, 1); gotScript != nil {
t.Fatalf("decompressScript with compressed pay-to-"+
"uncompressed-pubkey that is invalid did not return "+
"nil decompressed script - got %x", gotScript)
}
}
// TestAmountCompression ensures the domain-specific transaction output amount
// compression and decompression works as expected.
func TestAmountCompression(t *testing.T) {
t.Parallel()
tests := []struct {
name string
uncompressed uint64
compressed uint64
}{
{
name: "0 DCR (sometimes used in nulldata)",
uncompressed: 0,
compressed: 0,
},
{
name: "546 atoms (current network dust value)",
uncompressed: 546,
compressed: 4911,
},
{
name: "0.00001 DCR (typical transaction fee)",
uncompressed: 1000,
compressed: 4,
},
{
name: "0.0001 DCR (typical transaction fee)",
uncompressed: 10000,
compressed: 5,
},
{
name: "0.12345678 DCR",
uncompressed: 12345678,
compressed: 111111101,
},
{
name: "0.5 DCR",
uncompressed: 50000000,
compressed: 48,
},
{
name: "1 DCR",
uncompressed: 100000000,
compressed: 9,
},
{
name: "5 DCR",
uncompressed: 500000000,
compressed: 49,
},
{
name: "21000000 DCR (max minted coins)",
uncompressed: 2100000000000000,
compressed: 21000000,
},
}
for _, test := range tests {
// Ensure the amount compresses to the expected value.
gotCompressed := compressTxOutAmount(test.uncompressed)
if gotCompressed != test.compressed {
t.Errorf("compressTxOutAmount (%s): did not get "+
"expected value - got %d, want %d", test.name,
gotCompressed, test.compressed)
continue
}
// Ensure the value decompresses to the expected value.
gotDecompressed := decompressTxOutAmount(test.compressed)
if gotDecompressed != test.uncompressed {
t.Errorf("decompressTxOutAmount (%s): did not get "+
"expected value - got %d, want %d", test.name,
gotDecompressed, test.uncompressed)
continue
}
}
}
// TestCompressedTxOut ensures the transaction output serialization and
// deserialization works as expected.
func TestCompressedTxOut(t *testing.T) {
t.Parallel()
tests := []struct {
name string
amount uint64
scriptVersion uint16
pkScript []byte
compPkScript []byte
version uint32
compressed []byte
hasAmount bool
isCompressed bool
}{
{
name: "nulldata with 0 DCR",
amount: 0,
scriptVersion: 0,
pkScript: hexToBytes("6a200102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20"),
compPkScript: hexToBytes("626a200102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20"),
version: 1,
compressed: hexToBytes("00626a200102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20"),
hasAmount: false,
isCompressed: false,
},
{
name: "pay-to-pubkey-hash dust, no amount",
amount: 0,
scriptVersion: 0,
pkScript: hexToBytes("76a9141018853670f9f3b0582c5b9ee8ce93764ac32b9388ac"),
compPkScript: hexToBytes("001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
version: 1,
compressed: hexToBytes("00001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
hasAmount: false,
isCompressed: false,
},
{
name: "pay-to-pubkey-hash dust, no amount, precompressed",
amount: 0,
scriptVersion: 0,
pkScript: hexToBytes("001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
compPkScript: hexToBytes("001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
version: 1,
compressed: hexToBytes("00001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
hasAmount: false,
isCompressed: true,
},
{
name: "pay-to-pubkey-hash dust, amount",
amount: 546,
scriptVersion: 0,
pkScript: hexToBytes("76a9141018853670f9f3b0582c5b9ee8ce93764ac32b9388ac"),
compPkScript: hexToBytes("001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
version: 1,
compressed: hexToBytes("a52f00001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
hasAmount: true,
isCompressed: false,
},
{
name: "pay-to-pubkey-hash dust, amount, precompressed",
amount: 546,
scriptVersion: 0,
pkScript: hexToBytes("001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
compPkScript: hexToBytes("001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
version: 1,
compressed: hexToBytes("a52f00001018853670f9f3b0582c5b9ee8ce93764ac32b93"),
hasAmount: true,
isCompressed: true,
},
{
name: "pay-to-pubkey uncompressed, no amount",
amount: 0,
scriptVersion: 0,
pkScript: hexToBytes("4104192d74d0cb94344c9569c2e77901573d8d7903c3ebec3a957724895dca52c6b40d45264838c0bd96852662ce6a847b197376830160c6d2eb5e6a4c44d33f453eac"),
compPkScript: hexToBytes("04192d74d0cb94344c9569c2e77901573d8d7903c3ebec3a957724895dca52c6b4"),
version: 1,
compressed: hexToBytes("0004192d74d0cb94344c9569c2e77901573d8d7903c3ebec3a957724895dca52c6b4"),
hasAmount: false,
isCompressed: false,
},
{
name: "pay-to-pubkey uncompressed 1 DCR, amount present",
amount: 100000000,
scriptVersion: 0,
pkScript: hexToBytes("4104192d74d0cb94344c9569c2e77901573d8d7903c3ebec3a957724895dca52c6b40d45264838c0bd96852662ce6a847b197376830160c6d2eb5e6a4c44d33f453eac"),
compPkScript: hexToBytes("04192d74d0cb94344c9569c2e77901573d8d7903c3ebec3a957724895dca52c6b4"),
version: 1,
compressed: hexToBytes("090004192d74d0cb94344c9569c2e77901573d8d7903c3ebec3a957724895dca52c6b4"),
hasAmount: true,
isCompressed: false,
},
}
for _, test := range tests {
targetSz := compressedTxOutSize(0, test.scriptVersion, test.pkScript, currentCompressionVersion, test.isCompressed, test.hasAmount) - 1
target := make([]byte, targetSz)
putCompressedScript(target, test.scriptVersion, test.pkScript, currentCompressionVersion)
// Ensure the function to calculate the serialized size without
// actually serializing the txout is calculated properly.
gotSize := compressedTxOutSize(test.amount, test.scriptVersion,
test.pkScript, test.version, test.isCompressed, test.hasAmount)
if gotSize != len(test.compressed) {
t.Errorf("compressedTxOutSize (%s): did not get "+
"expected size - got %d, want %d", test.name,
gotSize, len(test.compressed))
continue
}
// Ensure the txout compresses to the expected value.
gotCompressed := make([]byte, gotSize)
gotBytesWritten := putCompressedTxOut(gotCompressed,
test.amount, test.scriptVersion, test.pkScript,
test.version, test.isCompressed, test.hasAmount)
if !bytes.Equal(gotCompressed, test.compressed) {
t.Errorf("compressTxOut (%s): did not get expected "+
"bytes - got %x, want %x", test.name,
gotCompressed, test.compressed)
continue
}
if gotBytesWritten != len(test.compressed) {
t.Errorf("compressTxOut (%s): did not get expected "+
"number of bytes written - got %d, want %d",
test.name, gotBytesWritten,
len(test.compressed))
continue
}
// Ensure the serialized bytes are decoded back to the expected
// compressed values.
gotAmount, gotScrVersion, gotScript, gotBytesRead, err :=
decodeCompressedTxOut(test.compressed, test.version,
test.hasAmount)
if err != nil {
t.Errorf("decodeCompressedTxOut (%s): unexpected "+
"error: %v", test.name, err)
continue
}
if gotAmount != int64(test.amount) {
t.Errorf("decodeCompressedTxOut (%s): did not get "+
"expected amount - got %d, want %d",
test.name, gotAmount, test.amount)
continue
}
if gotScrVersion != test.scriptVersion {
t.Errorf("decodeCompressedTxOut (%s): did not get "+
"expected script version - got %d, want %d",
test.name, gotScrVersion, test.scriptVersion)
continue
}
if !bytes.Equal(gotScript, test.compPkScript) {
t.Errorf("decodeCompressedTxOut (%s): did not get "+
"expected script - got %x, want %x",
test.name, gotScript, test.compPkScript)
continue
}
if gotBytesRead != len(test.compressed) {
t.Errorf("decodeCompressedTxOut (%s): did not get "+
"expected number of bytes read - got %d, want %d",
test.name, gotBytesRead, len(test.compressed))
continue
}
// Ensure the compressed values decompress to the expected
// txout.
gotScript = decompressScript(gotScript, test.version)
localScript := make([]byte, len(test.pkScript))
copy(localScript, test.pkScript)
if test.isCompressed {
localScript = decompressScript(localScript, test.version)
}
if !bytes.Equal(gotScript, localScript) {
t.Errorf("decompressTxOut (%s): did not get expected "+
"script - got %x, want %x", test.name,
gotScript, test.pkScript)
continue
}
}
}
// TestTxOutCompressionErrors ensures calling various functions related to
// txout compression with incorrect data returns the expected results.
func TestTxOutCompressionErrors(t *testing.T) {
t.Parallel()
// A compressed txout with a value and missing compressed script must error.
compressedTxOut := hexToBytes("00")
_, _, _, _, err := decodeCompressedTxOut(compressedTxOut, 1, true)
if !isDeserializeErr(err) {
t.Fatalf("decodeCompressedTxOut with value and missing "+
"compressed script did not return expected error type "+
"- got %T, want errDeserialize", err)
}
// A compressed txout without a value and with an empty compressed
// script returns empty but is valid.
compressedTxOut = hexToBytes("00")
_, _, _, _, err = decodeCompressedTxOut(compressedTxOut, 1, false)
if err != nil {
t.Fatalf("decodeCompressedTxOut without an amount and with an "+
"empty compressed script returned unexpected error: %v", err)
}
// A compressed txout with short compressed script must error.
compressedTxOut = hexToBytes("0010")
_, _, _, _, err = decodeCompressedTxOut(compressedTxOut, 1, false)
if !isDeserializeErr(err) {
t.Fatalf("decodeCompressedTxOut with short compressed script "+
"did not return expected error type - got %T, want "+
"errDeserialize", err)
}
}

@@ -0,0 +1,39 @@
// Package dbnamespace contains constants that define the database namespaces
// used by the blockchain package so that external callers may easily access
// this data.
package dbnamespace
import (
"encoding/binary"
)
var (
// ByteOrder is the preferred byte order used for serializing numeric
// fields for storage in the database.
ByteOrder = binary.LittleEndian
// BlockChainDbInfoBucketName is the name of the database bucket used to
// house a single k->v that stores global versioning and date information for
// the database.
BlockChainDbInfoBucketName = []byte("dbinfo")
// HashIndexBucketName is the name of the db bucket used to house the
// block hash -> block height index.
HashIndexBucketName = []byte("hashidx")
// HeightIndexBucketName is the name of the db bucket used to house the
// block height -> block hash index.
HeightIndexBucketName = []byte("heightidx")
// ChainStateKeyName is the name of the db key used to store the best
// chain state.
ChainStateKeyName = []byte("chainstate")
// SpendJournalBucketName is the name of the db bucket used to house
// transaction outputs that are spent in each block.
SpendJournalBucketName = []byte("spendjournal")
// UtxoSetBucketName is the name of the db bucket used to house the
// unspent transaction output set.
UtxoSetBucketName = []byte("utxoset")
)
@@ -1,5 +1,5 @@
-// Copyright (c) 2013-2014 The btcsuite developers
-// Copyright (c) 2015 The Decred developers
+// Copyright (c) 2013-2016 The btcsuite developers
+// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@@ -204,6 +204,8 @@ func (b *BlockChain) calcEasiestDifficulty(bits uint32,
// findPrevTestNetDifficulty returns the difficulty of the previous block which
// did not have the special testnet minimum difficulty rule applied.
+//
+// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) findPrevTestNetDifficulty(startNode *blockNode) (uint32,
error) {
// Search backwards through the chain for the last block without
@@ -212,7 +214,7 @@ func (b *BlockChain) findPrevTestNetDifficulty(startNode *blockNode) (uint32,
b.chainParams.WorkDiffWindows
iterNode := startNode
for iterNode != nil && iterNode.height%blocksPerRetarget != 0 &&
-iterNode.bits == b.chainParams.PowLimitBits {
+iterNode.header.Bits == b.chainParams.PowLimitBits {
// Get the previous block node. This function is used over
// simply accessing iterNode.parent directly as it will
@@ -231,7 +233,7 @@ func (b *BlockChain) findPrevTestNetDifficulty(startNode *blockNode) (uint32,
// appropriate block was found.
lastBits := b.chainParams.PowLimitBits
if iterNode != nil {
-lastBits = iterNode.bits
+lastBits = iterNode.header.Bits
}
return lastBits, nil
}
@@ -241,6 +243,8 @@ func (b *BlockChain) findPrevTestNetDifficulty(startNode *blockNode) (uint32,
// This function differs from the exported CalcNextRequiredDifficulty in that
// the exported version uses the current best chain as the previous block node
// while this function accepts any block node.
+//
+// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) calcNextRequiredDifficulty(curNode *blockNode,
newBlockTime time.Time) (uint32, error) {
// Genesis block.
@ -262,13 +266,13 @@ func (b *BlockChain) calcNextRequiredDifficulty(curNode *blockNode,
// Return minimum difficulty when more than twice the
// desired amount of time needed to generate a block has
// elapsed.
allowMinTime := curNode.timestamp.Add(b.chainParams.TimePerBlock *
b.chainParams.MinDiffResetTimeFactor)
allowMinTime := curNode.header.Timestamp.Add(
b.chainParams.TimePerBlock * b.chainParams.MinDiffResetTimeFactor)
// For every extra target timespan that passes, we halve the
// difficulty.
if newBlockTime.After(allowMinTime) {
timePassed := newBlockTime.Sub(curNode.timestamp)
timePassed := newBlockTime.Sub(curNode.header.Timestamp)
timePassed -= (b.chainParams.TimePerBlock *
b.chainParams.MinDiffResetTimeFactor)
shifts := uint((timePassed / b.chainParams.TimePerBlock) + 1)
@ -447,10 +451,13 @@ func (b *BlockChain) CalcNextRequiredDiffFromNode(hash *chainhash.Hash,
// after the end of the current best chain based on the difficulty retarget
// rules.
//
// This function is NOT safe for concurrent access.
// This function is safe for concurrent access.
func (b *BlockChain) CalcNextRequiredDifficulty(timestamp time.Time) (uint32,
error) {
return b.calcNextRequiredDifficulty(b.bestChain, timestamp)
b.chainLock.Lock()
difficulty, err := b.calcNextRequiredDifficulty(b.bestNode, timestamp)
b.chainLock.Unlock()
return difficulty, err
}
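The change above illustrates the locking convention this commit introduces: unexported methods document "MUST be called with the chain state lock held" while the exported wrapper acquires the lock around the internal call. A minimal stand-in for the pattern (the type and field names are illustrative, not the real BlockChain):

```go
package main

import (
	"fmt"
	"sync"
)

// chain sketches the concurrency pattern: unexported methods assume the
// lock is held, and the exported wrapper acquires it.
type chain struct {
	mu   sync.Mutex
	bits uint32
}

// calcNextRequiredDifficulty MUST be called with mu held.
func (c *chain) calcNextRequiredDifficulty() (uint32, error) {
	return c.bits, nil
}

// CalcNextRequiredDifficulty is safe for concurrent access.
func (c *chain) CalcNextRequiredDifficulty() (uint32, error) {
	c.mu.Lock()
	difficulty, err := c.calcNextRequiredDifficulty()
	c.mu.Unlock()
	return difficulty, err
}

func main() {
	c := &chain{bits: 0x1d00ffff}
	d, err := c.CalcNextRequiredDifficulty()
	fmt.Println(d, err)
}
```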
// mergeDifficulty takes an original stake difficulty and two new, scaled
@ -740,7 +747,7 @@ func (b *BlockChain) calcNextRequiredStakeDifficulty(curNode *blockNode) (int64,
// CalcNextRequiredStakeDifficulty is the exported version of the above function.
// This function is NOT safe for concurrent access.
func (b *BlockChain) CalcNextRequiredStakeDifficulty() (int64, error) {
return b.calcNextRequiredStakeDifficulty(b.bestChain)
return b.calcNextRequiredStakeDifficulty(b.bestNode)
}
// estimateNextStakeDifficulty returns a user-specified estimate for the next
@ -831,7 +838,6 @@ func (b *BlockChain) estimateNextStakeDifficulty(curNode *blockNode,
thisNode.hash = &emptyHeaderHash
thisNode.height = i
thisNode.parent = topNode
thisNode.parentHash = topNode.hash
topNode = thisNode
}
}
@ -1062,6 +1068,6 @@ func (b *BlockChain) estimateNextStakeDifficulty(curNode *blockNode,
// This function is NOT safe for concurrent access.
func (b *BlockChain) EstimateNextStakeDifficulty(ticketsInWindow int64,
useMaxTickets bool) (int64, error) {
return b.estimateNextStakeDifficulty(b.bestChain, ticketsInWindow,
return b.estimateNextStakeDifficulty(b.bestNode, ticketsInWindow,
useMaxTickets)
}

View File

@ -1,20 +1,21 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain_test
import (
// "fmt"
"math/big"
"testing"
"time"
// "time"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
"github.com/decred/dcrutil"
// "github.com/decred/dcrd/blockchain/stake"
// "github.com/decred/dcrd/chaincfg"
// database "github.com/decred/dcrd/database2"
//"github.com/decred/dcrutil"
)
func TestBigToCompact(t *testing.T) {
@ -81,53 +82,57 @@ func TestCalcWork(t *testing.T) {
// but we should really have a unit test for them that includes tests for
// edge cases.
func TestDiff(t *testing.T) {
db, err := database.CreateDB("memdb")
if err != nil {
t.Errorf("Failed to create database: %v\n", err)
return
}
defer db.Close()
/*
db, err := database.Create("memdb")
if err != nil {
t.Errorf("error creating db: %v", err)
}
var tmdb *stake.TicketDB
// Setup a teardown function for cleaning up. This function is
// returned to the caller to be invoked when it is done testing.
teardown := func() {
db.Close()
}
defer teardown()
genesisBlock := dcrutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
_, err = db.InsertBlock(genesisBlock)
if err != nil {
t.Errorf("Failed to insert genesis block: %v\n", err)
return
}
// var tmdb *stake.TicketDB
chain := blockchain.New(db, tmdb, &chaincfg.MainNetParams, nil, nil)
// Create the main chain instance.
chain, err := blockchain.New(&blockchain.Config{
DB: db,
ChainParams: &chaincfg.MainNetParams,
})
//timeSource := blockchain.NewMedianTime()
// Grab some blocks
// Build fake blockchain
// Calc new difficulty
ts := time.Now()
d, err := chain.CalcNextRequiredDifficulty(ts)
if err != nil {
t.Errorf("Failed to get difficulty: %v\n", err)
return
}
if d != 486604799 { // This is hardcoded in genesis block but not exported anywhere.
t.Error("Failed to get initial difficulty.")
}
sd, err := chain.CalcNextRequiredStakeDifficulty()
if err != nil {
t.Errorf("Failed to get stake difficulty: %v\n", err)
return
}
if sd != chaincfg.MainNetParams.MinimumStakeDiff {
t.Error("Incorrect initial stake difficulty.")
}
// Compare
// Repeat for a few more
*/
}

View File

@ -1,5 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2014-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -9,6 +9,16 @@ import (
"fmt"
)
// AssertError identifies an error that indicates an internal code consistency
// issue and should be treated as a critical and unrecoverable error.
type AssertError string
// Error returns the assertion error as a human-readable string and satisfies
// the error interface.
func (e AssertError) Error() string {
return "assertion failed: " + string(e)
}
// ErrorCode identifies a kind of error.
type ErrorCode int
@ -269,7 +279,7 @@ const (
// ErrInvalidRevNum indicates that the number of revocations from the
// header was not the same as the number of SSRtx included in the block.
ErrInvalidRevNum
ErrRevocationsMismatch
// ErrTooManyRevocations indicates more revocations were found in a block
// than were allowed.
@ -472,7 +482,7 @@ var errorCodeStrings = map[ErrorCode]string{
ErrVotesMismatch: "ErrVotesMismatch",
ErrIncongruentVotebit: "ErrIncongruentVotebit",
ErrInvalidSSRtx: "ErrInvalidSSRtx",
ErrInvalidRevNum: "ErrInvalidRevNum",
ErrRevocationsMismatch: "ErrRevocationsMismatch",
ErrTooManyRevocations: "ErrTooManyRevocations",
ErrSStxCommitment: "ErrSStxCommitment",
ErrUnparseableSSGen: "ErrUnparseableSSGen",

View File

@ -1,5 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2014-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -8,12 +8,15 @@ package blockchain_test
import (
"fmt"
"math/big"
"os"
"path/filepath"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/memdb"
database "github.com/decred/dcrd/database2"
_ "github.com/decred/dcrd/database2/ffldb"
"github.com/decred/dcrutil"
)
@ -24,44 +27,50 @@ import (
// block to illustrate how an invalid block is handled.
func ExampleBlockChain_ProcessBlock() {
// Create a new database to store the accepted blocks into. Typically
// this would be opening an existing database and would not use memdb
// which is a memory-only database backend, but we create a new db
// here so this is a complete working example.
db, err := database.CreateDB("memdb")
// this would be opening an existing database and would not be deleting
// and creating a new database like this, but it is done here so this is
// a complete working example and does not leave temporary files lying
// around.
dbPath := filepath.Join(os.TempDir(), "exampleprocessblock")
_ = os.RemoveAll(dbPath)
db, err := database.Create("ffldb", dbPath, chaincfg.MainNetParams.Net)
if err != nil {
fmt.Printf("Failed to create database: %v\n", err)
return
}
defer os.RemoveAll(dbPath)
defer db.Close()
var tmdb *stake.TicketDB
// Insert the main network genesis block. This is part of the initial
// database setup. Like above, this typically would not be needed when
// opening an existing database.
genesisBlock := dcrutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
_, err = db.InsertBlock(genesisBlock)
// Create a new BlockChain instance using the underlying database for
// the main decred network. This example does not demonstrate some
// of the other available configuration options such as specifying a
// notification callback and signature cache.
chain, err := blockchain.New(&blockchain.Config{
DB: db,
TMDB: tmdb,
ChainParams: &chaincfg.MainNetParams,
})
if err != nil {
fmt.Printf("Failed to insert genesis block: %v\n", err)
fmt.Printf("Failed to create chain instance: %v\n", err)
return
}
// Create a new BlockChain instance without an initialized signature
// verification cache, using the underlying database for the main
// bitcoin network and ignore notifications.
chain := blockchain.New(db, tmdb, &chaincfg.MainNetParams, nil, nil)
// Create a new median time source that is required by the upcoming
// call to ProcessBlock. Ordinarily this would also add time values
// obtained from other peers on the network so the local time is
// adjusted to be in agreement with other peers.
timeSource := blockchain.NewMedianTime()
// Process a block. For this example, we are going to intentionally
// cause an error by trying to process the genesis block which already
// exists.
isOrphan, _, err := chain.ProcessBlock(genesisBlock, timeSource, blockchain.BFNone)
// Process a block. For this example, intentionally cause an error by
// processing the genesis block, which already exists.
genesisBlock := dcrutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
_, isOrphan, err := chain.ProcessBlock(genesisBlock, timeSource, blockchain.BFNone)
if err != nil {
fmt.Printf("Failed to process block: %v\n", err)
fmt.Printf("Failed to process block: %v\n", err)
return
}
fmt.Printf("Block accepted. Is it an orphan?: %v", isOrphan)

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -33,3 +33,7 @@ func TstSetMaxMedianTimeEntries(val int) {
// TstCheckBlockScripts makes the internal checkBlockScripts function available
// to the test package.
var TstCheckBlockScripts = checkBlockScripts
// TstDeserializeUtxoEntry makes the internal deserializeUtxoEntry function
// available to the test package.
var TstDeserializeUtxoEntry = deserializeUtxoEntry

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -10,6 +10,7 @@ import (
"time"
"github.com/decred/dcrd/chaincfg/chainhash"
database "github.com/decred/dcrd/database2"
"github.com/decred/dcrutil"
)
@ -40,14 +41,22 @@ const (
// blockExists determines whether a block with the given hash exists either in
// the main chain or any side chains.
//
// This function MUST be called with the chain state lock held (for reads).
func (b *BlockChain) blockExists(hash *chainhash.Hash) (bool, error) {
// Check memory chain first (could be main chain or side chain blocks).
if _, ok := b.index[*hash]; ok {
return true, nil
}
// Check in database (rest of main chain not in memory).
return b.db.ExistsSha(hash)
// Check in the database.
var exists bool
err := b.db.View(func(dbTx database.Tx) error {
var err error
exists, err = dbTx.HasBlock(hash)
return err
})
return exists, err
}
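blockExists now checks the in-memory index first and only then falls back to a read-only database transaction, capturing the result through the closure passed to View. The interfaces below are stand-ins that only illustrate the idiom, not the real database2 API:

```go
package main

import "fmt"

// Tx is a stand-in for a read-only database transaction.
type Tx interface {
	HasBlock(hash string) (bool, error)
}

// DB is a stand-in for a database supporting managed read transactions.
type DB interface {
	View(fn func(tx Tx) error) error
}

type memTx struct{ blocks map[string]bool }

func (t memTx) HasBlock(hash string) (bool, error) { return t.blocks[hash], nil }

type memDB struct{ tx memTx }

func (d memDB) View(fn func(tx Tx) error) error { return fn(d.tx) }

// blockExists checks the in-memory index first, then captures the result
// of a read-only db transaction through the closure, as in the hunk above.
func blockExists(index map[string]bool, db DB, hash string) (bool, error) {
	if index[hash] {
		return true, nil
	}
	var exists bool
	err := db.View(func(tx Tx) error {
		var err error
		exists, err = tx.HasBlock(hash)
		return err
	})
	return exists, err
}

func main() {
	db := memDB{tx: memTx{blocks: map[string]bool{"aa": true}}}
	idx := map[string]bool{"bb": true}
	fmt.Println(blockExists(idx, db, "aa")) // hit via database
	fmt.Println(blockExists(idx, db, "bb")) // hit via memory index
	fmt.Println(blockExists(idx, db, "cc")) // miss
}
```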
// processOrphans determines if there are any orphans which depend on the passed
@ -57,6 +66,8 @@ func (b *BlockChain) blockExists(hash *chainhash.Hash) (bool, error) {
//
// The flags do not modify the behavior of this function directly, however they
// are needed to pass along to maybeAcceptBlock.
//
// This function MUST be called with the chain state lock held (for writes).
func (b *BlockChain) processOrphans(hash *chainhash.Hash, flags BehaviorFlags) error {
// Start with processing at least the passed hash. Leave a little room
// for additional orphan blocks that need to be processed without
@ -114,11 +125,12 @@ func (b *BlockChain) processOrphans(hash *chainhash.Hash, flags BehaviorFlags) e
// It returns a first bool specifying whether or not the block is on a fork
// or on a side chain. True means it's on the main chain.
//
// It returns a second bool which indicates whether or not the block is an orphan
// and any errors that occurred during processing. The returned bool is only
// valid when the error is nil.
// This function is safe for concurrent access.
func (b *BlockChain) ProcessBlock(block *dcrutil.Block,
timeSource MedianTimeSource, flags BehaviorFlags) (bool, bool, error) {
b.chainLock.Lock()
defer b.chainLock.Unlock()
fastAdd := flags&BFFastAdd == BFFastAdd
dryRun := flags&BFDryRun == BFDryRun

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -18,9 +18,8 @@ import (
"github.com/decred/dcrutil"
)
// TestReorganization loads a set of test blocks which force a chain
// reorganization to test the block chain handling code.
func TestReorganization(t *testing.T) {
// reorgTestLong does a single, large reorganization.
func reorgTestLong(t *testing.T) {
// Create a new database and chain instance to run tests against.
chain, teardownFunc, err := chainSetup("reorgunittest",
simNetParams)
@ -30,11 +29,6 @@ func TestReorganization(t *testing.T) {
}
defer teardownFunc()
err = chain.GenerateInitialIndex()
if err != nil {
t.Errorf("GenerateInitialIndex: %v", err)
}
// The genesis block should fail to connect since it's already
// inserted.
genesisBlock := simNetParams.GenesisBlock
@ -68,13 +62,13 @@ func TestReorganization(t *testing.T) {
for i := 1; i < finalIdx1+1; i++ {
bl, err := dcrutil.NewBlockFromBytes(blockChain[int64(i)])
if err != nil {
t.Errorf("NewBlockFromBytes error: %v", err.Error())
t.Fatalf("NewBlockFromBytes error: %v", err.Error())
}
bl.SetHeight(int64(i))
_, _, err = chain.ProcessBlock(bl, timeSource, blockchain.BFNone)
if err != nil {
t.Errorf("ProcessBlock error: %v", err.Error())
t.Fatalf("ProcessBlock error at height %v: %v", i, err.Error())
}
}
@ -104,13 +98,13 @@ func TestReorganization(t *testing.T) {
for i := forkPoint; i < finalIdx2+1; i++ {
bl, err := dcrutil.NewBlockFromBytes(blockChain[int64(i)])
if err != nil {
t.Errorf("NewBlockFromBytes error: %v", err.Error())
t.Fatalf("NewBlockFromBytes error: %v", err.Error())
}
bl.SetHeight(int64(i))
_, _, err = chain.ProcessBlock(bl, timeSource, blockchain.BFNone)
if err != nil {
t.Errorf("ProcessBlock error: %v", err.Error())
t.Fatalf("ProcessBlock error: %v", err.Error())
}
}
@ -131,5 +125,134 @@ func TestReorganization(t *testing.T) {
t.Errorf("unexpected error testing for presence of new tip block "+
"after reorg test: %v", err)
}
return
}
// reorgTestShort does short reorganizations to test multiple, frequent
// reorganizations.
func reorgTestShort(t *testing.T) {
// Create a new database and chain instance to run tests against.
chain, teardownFunc, err := chainSetup("reorgunittest",
simNetParams)
if err != nil {
t.Errorf("Failed to setup chain instance: %v", err)
return
}
defer teardownFunc()
// The genesis block should fail to connect since it's already
// inserted.
genesisBlock := simNetParams.GenesisBlock
err = chain.CheckConnectBlock(dcrutil.NewBlock(genesisBlock))
if err == nil {
t.Errorf("CheckConnectBlock: Did not receive expected error")
}
// Load up the rest of the blocks up to HEAD.
filename := filepath.Join("testdata/", "reorgto179.bz2")
fi, err := os.Open(filename)
bcStream := bzip2.NewReader(fi)
defer fi.Close()
// Create a buffer of the read file
bcBuf := new(bytes.Buffer)
bcBuf.ReadFrom(bcStream)
// Create decoder from the buffer and a map to store the data
bcDecoder := gob.NewDecoder(bcBuf)
blockChain1 := make(map[int64][]byte)
// Decode the blockchain into the map
if err := bcDecoder.Decode(&blockChain1); err != nil {
t.Errorf("error decoding test blockchain: %v", err.Error())
}
timeSource := blockchain.NewMedianTime()
// Load the long chain and begin loading blocks from that too,
// forcing a reorganization
// Load up the rest of the blocks up to HEAD.
filename = filepath.Join("testdata/", "reorgto180.bz2")
fi, err = os.Open(filename)
bcStream = bzip2.NewReader(fi)
defer fi.Close()
// Create a buffer of the read file
bcBuf = new(bytes.Buffer)
bcBuf.ReadFrom(bcStream)
// Create decoder from the buffer and a map to store the data
bcDecoder = gob.NewDecoder(bcBuf)
blockChain2 := make(map[int64][]byte)
// Decode the blockchain into the map
if err := bcDecoder.Decode(&blockChain2); err != nil {
t.Errorf("error decoding test blockchain: %v", err.Error())
}
forkPoint := 131
finalIdx2 := 180
for i := 1; i < forkPoint+1; i++ {
bl, err := dcrutil.NewBlockFromBytes(blockChain1[int64(i)])
if err != nil {
t.Fatalf("NewBlockFromBytes error: %v", err.Error())
}
bl.SetHeight(int64(i))
_, _, err = chain.ProcessBlock(bl, timeSource, blockchain.BFNone)
if err != nil {
t.Fatalf("ProcessBlock error at height %v: %v", i, err.Error())
}
}
// Reorg each block.
dominant := blockChain2
orphaned := blockChain1
for i := forkPoint; i < finalIdx2; i++ {
for j := 0; j < 2; j++ {
bl, err := dcrutil.NewBlockFromBytes(dominant[int64(i+j)])
if err != nil {
t.Fatalf("NewBlockFromBytes error: %v", err.Error())
}
bl.SetHeight(int64(i + j))
_, _, err = chain.ProcessBlock(bl, timeSource, blockchain.BFNone)
if err != nil {
t.Fatalf("ProcessBlock error: %v", err.Error())
}
}
dominant, orphaned = orphaned, dominant
}
// Ensure our blockchain is at the correct best tip
topBlock, _ := chain.GetTopBlock()
tipHash := topBlock.Sha()
expected, _ := chainhash.NewHashFromStr("5ab969d0afd8295b6cd1506f2a310d" +
"259322015c8bd5633f283a163ce0e50594")
if *tipHash != *expected {
t.Errorf("Failed to correctly reorg; expected tip %v, got tip %v",
expected, tipHash)
}
have, err := chain.HaveBlock(expected)
if !have {
t.Errorf("missing tip block after reorganization test")
}
if err != nil {
t.Errorf("unexpected error testing for presence of new tip block "+
"after reorg test: %v", err)
}
return
}
// TestReorganization loads a set of test blocks which force a chain
// reorganization to test the block chain handling code.
func TestReorganization(t *testing.T) {
reorgTestLong(t)
// This can take a while, do not enable it by default.
// reorgTestShort(t)
}

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -29,7 +29,7 @@ type txValidator struct {
validateChan chan *txValidateItem
quitChan chan struct{}
resultChan chan error
txStore TxStore
utxoView *UtxoViewpoint
flags txscript.ScriptFlags
sigCache *txscript.SigCache
}
@ -56,8 +56,9 @@ out:
// Ensure the referenced input transaction is available.
txIn := txVI.txIn
originTxHash := &txIn.PreviousOutPoint.Hash
originTx, exists := v.txStore[*originTxHash]
if !exists || originTx.Err != nil || originTx.Tx == nil {
originTxIndex := txIn.PreviousOutPoint.Index
txEntry := v.utxoView.LookupEntry(originTxHash)
if txEntry == nil {
str := fmt.Sprintf("unable to find input "+
"transaction %v referenced from "+
"transaction %v", originTxHash,
@ -66,17 +67,16 @@ out:
v.sendResult(err)
break out
}
originMsgTx := originTx.Tx.MsgTx()
// Ensure the output index in the referenced transaction
// is available.
originTxIndex := txIn.PreviousOutPoint.Index
if originTxIndex >= uint32(len(originMsgTx.TxOut)) {
str := fmt.Sprintf("out of bounds "+
"input index %d in transaction %v "+
"referenced from transaction %v",
originTxIndex, originTxHash,
txVI.tx.Sha())
// Ensure the referenced input transaction public key
// script is available.
pkScript := txEntry.PkScriptByIndex(originTxIndex)
if pkScript == nil {
str := fmt.Sprintf("unable to find unspent "+
"output %v script referenced from "+
"transaction %s:%d",
txIn.PreviousOutPoint, txVI.tx.Sha(),
txVI.txInIndex)
err := ruleError(ErrBadTxInput, str)
v.sendResult(err)
break out
@ -84,8 +84,7 @@ out:
// Create a new script engine for the script pair.
sigScript := txIn.SignatureScript
pkScript := originMsgTx.TxOut[originTxIndex].PkScript
version := originMsgTx.TxOut[originTxIndex].Version
version := txEntry.ScriptVersionByIndex(originTxIndex)
vm, err := txscript.NewEngine(pkScript, txVI.tx.MsgTx(),
txVI.txInIndex, v.flags, version, v.sigCache)
@ -183,12 +182,12 @@ func (v *txValidator) Validate(items []*txValidateItem) error {
// newTxValidator returns a new instance of txValidator to be used for
// validating transaction scripts asynchronously.
func newTxValidator(txStore TxStore, flags txscript.ScriptFlags, sigCache *txscript.SigCache) *txValidator {
func newTxValidator(utxoView *UtxoViewpoint, flags txscript.ScriptFlags, sigCache *txscript.SigCache) *txValidator {
return &txValidator{
validateChan: make(chan *txValidateItem),
quitChan: make(chan struct{}),
resultChan: make(chan error),
txStore: txStore,
utxoView: utxoView,
sigCache: sigCache,
flags: flags,
}
@ -196,9 +195,7 @@ func newTxValidator(txStore TxStore, flags txscript.ScriptFlags, sigCache *txscr
// ValidateTransactionScripts validates the scripts for the passed transaction
// using multiple goroutines.
func ValidateTransactionScripts(tx *dcrutil.Tx, txStore TxStore,
flags txscript.ScriptFlags, sigCache *txscript.SigCache) error {
func ValidateTransactionScripts(tx *dcrutil.Tx, utxoView *UtxoViewpoint, flags txscript.ScriptFlags, sigCache *txscript.SigCache) error {
// Collect all of the transaction inputs and required information for
// validation.
txIns := tx.MsgTx().TxIn
@ -218,7 +215,7 @@ func ValidateTransactionScripts(tx *dcrutil.Tx, txStore TxStore,
}
// Validate all of the inputs.
validator := newTxValidator(txStore, flags, sigCache)
validator := newTxValidator(utxoView, flags, sigCache)
if err := validator.Validate(txValItems); err != nil {
return err
}
@ -228,9 +225,9 @@ func ValidateTransactionScripts(tx *dcrutil.Tx, txStore TxStore,
}
// checkBlockScripts executes and validates the scripts for all transactions in
// the passed block.
// the passed block using multiple goroutines.
// txTree = true is TxTreeRegular, txTree = false is TxTreeStake.
func checkBlockScripts(block *dcrutil.Block, txStore TxStore, txTree bool,
func checkBlockScripts(block *dcrutil.Block, utxoView *UtxoViewpoint, txTree bool,
scriptFlags txscript.ScriptFlags, sigCache *txscript.SigCache) error {
// Collect all of the transaction inputs and required information for
@ -266,7 +263,7 @@ func checkBlockScripts(block *dcrutil.Block, txStore TxStore, txTree bool,
}
// Validate all of the inputs.
validator := newTxValidator(txStore, scriptFlags, sigCache)
validator := newTxValidator(utxoView, scriptFlags, sigCache)
if err := validator.Validate(txValItems); err != nil {
return err
}
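The txValidator above fans validation items out over a channel to worker goroutines and collects per-item results on a result channel. A self-contained sketch of that fan-out shape, where the "validation" is a toy check rather than script execution:

```go
package main

import "fmt"

// validateItem is a stand-in for one transaction input to validate.
type validateItem struct{ sig, pkScript int }

// validate fans items out to worker goroutines and returns the first
// error encountered, sketching the txValidator pattern.
func validate(items []validateItem, workers int) error {
	itemChan := make(chan validateItem)
	resultChan := make(chan error)
	for w := 0; w < workers; w++ {
		go func() {
			for item := range itemChan {
				// Toy check: the "signature" must match the "script".
				if item.sig != item.pkScript {
					resultChan <- fmt.Errorf("script mismatch: %v", item)
					continue
				}
				resultChan <- nil
			}
		}()
	}
	go func() {
		for _, item := range items {
			itemChan <- item
		}
		close(itemChan)
	}()
	var firstErr error
	for range items {
		if err := <-resultChan; err != nil && firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}

func main() {
	good := []validateItem{{1, 1}, {2, 2}, {3, 3}}
	fmt.Println(validate(good, 2))
	bad := []validateItem{{1, 1}, {2, 9}}
	fmt.Println(validate(bad, 2) != nil)
}
```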

View File

@ -1,18 +1,52 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain_test
import (
// "fmt"
// "runtime"
"testing"
// "github.com/decred/dcrd/blockchain"
// "github.com/decred/dcrd/txscript"
)
// TestCheckBlockScripts ensures that validating all of the scripts in a
// known-good block doesn't return an error.
func TestCheckBlockScripts(t *testing.T) {
// TODO In the future, add a block here with a lot of tx to validate.
// The blockchain tests already validate a ton of scripts with signatures,
// so we don't really need to make a new test for this immediately.
/*
// TODO In the future, add a block here with a lot of tx to validate.
// The blockchain tests already validate a ton of scripts with signatures,
// so we don't really need to make a new test for this immediately.
runtime.GOMAXPROCS(runtime.NumCPU())
testBlockNum := 277647
blockDataFile := fmt.Sprintf("%d.dat.bz2", testBlockNum)
blocks, err := loadBlocks(blockDataFile)
if err != nil {
t.Errorf("Error loading file: %v\n", err)
return
}
if len(blocks) > 1 {
t.Errorf("The test block file must only have one block in it")
}
storeDataFile := fmt.Sprintf("%d.utxostore.bz2", testBlockNum)
view, err := loadUtxoView(storeDataFile)
if err != nil {
t.Errorf("Error loading txstore: %v\n", err)
return
}
scriptFlags := txscript.ScriptBip16
err = blockchain.TstCheckBlockScripts(blocks[0], view, scriptFlags,
nil)
if err != nil {
t.Errorf("Transaction script validation failed: %v\n", err)
return
}
*/
}

View File

@ -0,0 +1,130 @@
// Copyright (c) 2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package stake_test
import (
"fmt"
"os"
"path/filepath"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
_ "github.com/decred/dcrd/database/memdb"
database "github.com/decred/dcrd/database2"
_ "github.com/decred/dcrd/database2/ffldb"
"github.com/decred/dcrd/wire"
)
const (
// testDbType is the database backend type to use for the tests.
testDbType = "ffldb"
// testDbRoot is the root directory used to create all test databases.
testDbRoot = "testdbs"
// blockDataNet is the expected network in the test block data.
blockDataNet = wire.MainNet
)
// fileExists returns whether or not the named file or directory exists.
func fileExists(name string) bool {
if _, err := os.Stat(name); err != nil {
if os.IsNotExist(err) {
return false
}
}
return true
}
// isSupportedDbType returns whether or not the passed database type is
// currently supported.
func isSupportedDbType(dbType string) bool {
supportedDrivers := database.SupportedDrivers()
for _, driver := range supportedDrivers {
if dbType == driver {
return true
}
}
return false
}
// chainSetup is used to create a new db and chain instance with the genesis
// block already inserted. In addition to the new chain instance, it returns
// a teardown function the caller should invoke when done testing to clean up.
func chainSetup(dbName string, params *chaincfg.Params) (*blockchain.BlockChain, func(), error) {
if !isSupportedDbType(testDbType) {
return nil, nil, fmt.Errorf("unsupported db type %v", testDbType)
}
// Handle memory database specially since it doesn't need the disk
// specific handling.
var db database.DB
tmdb := new(stake.TicketDB)
var teardown func()
if testDbType == "memdb" {
ndb, err := database.Create(testDbType)
if err != nil {
return nil, nil, fmt.Errorf("error creating db: %v", err)
}
db = ndb
// Setup a teardown function for cleaning up. This function is
// returned to the caller to be invoked when it is done testing.
teardown = func() {
tmdb.Close()
db.Close()
}
} else {
// Create the root directory for test databases.
if !fileExists(testDbRoot) {
if err := os.MkdirAll(testDbRoot, 0700); err != nil {
err := fmt.Errorf("unable to create test db "+
"root: %v", err)
return nil, nil, err
}
}
// Create a new database to store the accepted blocks into.
dbPath := filepath.Join(testDbRoot, dbName)
_ = os.RemoveAll(dbPath)
ndb, err := database.Create(testDbType, dbPath, blockDataNet)
if err != nil {
return nil, nil, fmt.Errorf("error creating db: %v", err)
}
db = ndb
// Setup a teardown function for cleaning up. This function is
// returned to the caller to be invoked when it is done testing.
teardown = func() {
tmdb.Close()
db.Close()
os.RemoveAll(dbPath)
os.RemoveAll(testDbRoot)
}
}
// Create the main chain instance.
chain, err := blockchain.New(&blockchain.Config{
DB: db,
TMDB: tmdb,
ChainParams: params,
})
if err != nil {
teardown()
err := fmt.Errorf("failed to create chain instance: %v", err)
return nil, nil, err
}
// Start the ticket database.
tmdb.Initialize(params, db)
err = tmdb.RescanTicketDB()
if err != nil {
return nil, nil, err
}
return chain, teardown, nil
}

View File

@ -1,5 +1,5 @@
// Copyright (c) 2014 Conformal Systems LLC.
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2014 Conformal Systems LLC.
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
//
@ -26,12 +26,13 @@ import (
// TxType indicates the type of tx (regular or stake type).
type TxType int
// Possible TxTypes
// Possible TxTypes.  These are statically declared so that they may be
// relied upon in consensus code.
const (
TxTypeRegular = iota
TxTypeSStx
TxTypeSSGen
TxTypeSSRtx
TxTypeRegular = 0
TxTypeSStx = 1
TxTypeSSGen = 2
TxTypeSSRtx = 3
)
const (
@ -226,23 +227,45 @@ func IsStakeBase(tx *dcrutil.Tx) bool {
return true
}
// GetSStxStakeOutputInfo takes an SStx as input and scans through its outputs,
// MinimalOutput is a struct encoding a minimally sized output for use in parsing
// stake related information.
type MinimalOutput struct {
PkScript []byte
Value int64
Version uint16
}
// ConvertToMinimalOutputs converts a transaction to its minimal outputs
// derivative.
func ConvertToMinimalOutputs(tx *dcrutil.Tx) []*MinimalOutput {
minOuts := make([]*MinimalOutput, len(tx.MsgTx().TxOut))
for i, txOut := range tx.MsgTx().TxOut {
minOuts[i] = &MinimalOutput{
PkScript: txOut.PkScript,
Value: txOut.Value,
Version: txOut.Version,
}
}
return minOuts
}
// SStxStakeOutputInfo takes the minimal outputs of an SStx and scans through
// them, returning the pubkey hashes and amounts for any NullDataTy's (future
// commitments to stake generation rewards).
func GetSStxStakeOutputInfo(tx *dcrutil.Tx) ([]bool, [][]byte, []int64, []int64,
[][]bool, [][]uint16) {
msgTx := tx.MsgTx()
isP2SH := make([]bool, len(msgTx.TxIn))
addresses := make([][]byte, len(msgTx.TxIn))
amounts := make([]int64, len(msgTx.TxIn))
changeAmounts := make([]int64, len(msgTx.TxIn))
allSpendRules := make([][]bool, len(msgTx.TxIn))
allSpendLimits := make([][]uint16, len(msgTx.TxIn))
func SStxStakeOutputInfo(outs []*MinimalOutput) ([]bool, [][]byte, []int64,
[]int64, [][]bool, [][]uint16) {
expectedInLen := len(outs) / 2
isP2SH := make([]bool, expectedInLen)
addresses := make([][]byte, expectedInLen)
amounts := make([]int64, expectedInLen)
changeAmounts := make([]int64, expectedInLen)
allSpendRules := make([][]bool, expectedInLen)
allSpendLimits := make([][]uint16, expectedInLen)
// Cycle through the inputs and pull the proportional amounts
// and commit to PKHs/SHs.
for idx, out := range msgTx.TxOut {
for idx, out := range outs {
// We only care about the outputs where we get proportional
// amounts and the PKHs/SHs to send rewards to, which is all
// the odd numbered output indexes.
@ -289,6 +312,14 @@ func GetSStxStakeOutputInfo(tx *dcrutil.Tx) ([]bool, [][]byte, []int64, []int64,
allSpendLimits
}
// TxSStxStakeOutputInfo takes an SStx as input and scans through its outputs,
// returning the pubkey hashes and amounts for any NullDataTy's (future
// commitments to stake generation rewards).
func TxSStxStakeOutputInfo(tx *dcrutil.Tx) ([]bool, [][]byte, []int64, []int64,
[][]bool, [][]uint16) {
return SStxStakeOutputInfo(ConvertToMinimalOutputs(tx))
}
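The expectedInLen computation above relies on the SStx output layout: output 0 is the stake submission, and each funding input i is represented by a commitment output at index 2*i+1 followed by a change output at index 2*i+2. A minimal standalone sketch of that index math (illustrative only, not the consensus code):

```go
package main

import "fmt"

// commitmentIndexes illustrates the SStx output indexing assumed by
// SStxStakeOutputInfo: with numInputs funding inputs there are
// 1 + 2*numInputs outputs, so integer division by two recovers the
// input count, and the odd indexes hold the reward commitments.
func commitmentIndexes(numOuts int) []int {
	numInputs := numOuts / 2
	idxs := make([]int, 0, numInputs)
	for i := 0; i < numInputs; i++ {
		idxs = append(idxs, 2*i+1)
	}
	return idxs
}

func main() {
	// A ticket funded by 3 inputs has 1 + 2*3 = 7 outputs; the
	// commitments live at the odd indexes 1, 3, and 5.
	fmt.Println(commitmentIndexes(7))
}
```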
// AddrFromSStxPkScrCommitment extracts a P2SH or P2PKH address from a
// ticket commitment pkScript.
func AddrFromSStxPkScrCommitment(pkScript []byte,
@ -336,10 +367,10 @@ func AmountFromSStxPkScrCommitment(pkScript []byte) (dcrutil.Amount, error) {
return dcrutil.Amount(binary.LittleEndian.Uint64(amtEncoded)), nil
}
// GetSSGenStakeOutputInfo takes an SSGen tx as input and scans through its
// TxSSGenStakeOutputInfo takes an SSGen tx as input and scans through its
// outputs, returning the amount of the output and the PKH or SH that it was
// sent to.
func GetSSGenStakeOutputInfo(tx *dcrutil.Tx, params *chaincfg.Params) ([]bool,
func TxSSGenStakeOutputInfo(tx *dcrutil.Tx, params *chaincfg.Params) ([]bool,
[][]byte, []int64, error) {
msgTx := tx.MsgTx()
numOutputsInSSGen := len(msgTx.TxOut)
@ -384,9 +415,9 @@ func GetSSGenStakeOutputInfo(tx *dcrutil.Tx, params *chaincfg.Params) ([]bool,
return isP2SH, addresses, amounts, nil
}
// GetSSGenBlockVotedOn takes an SSGen tx and returns the block voted on in the
// SSGenBlockVotedOn takes an SSGen tx and returns the block voted on in the
// first OP_RETURN by hash and height.
func GetSSGenBlockVotedOn(tx *dcrutil.Tx) (chainhash.Hash, uint32, error) {
func SSGenBlockVotedOn(tx *dcrutil.Tx) (chainhash.Hash, uint32, error) {
msgTx := tx.MsgTx()
// Get the block header hash.
@ -401,9 +432,9 @@ func GetSSGenBlockVotedOn(tx *dcrutil.Tx) (chainhash.Hash, uint32, error) {
return *blockSha, height, nil
}
// GetSSGenVoteBits takes an SSGen tx as input and scans through its
// SSGenVoteBits takes an SSGen tx as input and scans through its
// outputs, returning the VoteBits of the index 1 output.
func GetSSGenVoteBits(tx *dcrutil.Tx) uint16 {
func SSGenVoteBits(tx *dcrutil.Tx) uint16 {
msgTx := tx.MsgTx()
votebits := binary.LittleEndian.Uint16(msgTx.TxOut[1].PkScript[2:4])
@ -411,9 +442,9 @@ func GetSSGenVoteBits(tx *dcrutil.Tx) uint16 {
return votebits
}
// GetSSRtxStakeOutputInfo takes an SSRtx tx as input and scans through its
// TxSSRtxStakeOutputInfo takes an SSRtx tx as input and scans through its
// outputs, returning the amount of the output and the pkh that it was sent to.
func GetSSRtxStakeOutputInfo(tx *dcrutil.Tx, params *chaincfg.Params) ([]bool,
func TxSSRtxStakeOutputInfo(tx *dcrutil.Tx, params *chaincfg.Params) ([]bool,
[][]byte, []int64, error) {
msgTx := tx.MsgTx()
numOutputsInSSRtx := len(msgTx.TxOut)
@ -455,13 +486,13 @@ func GetSSRtxStakeOutputInfo(tx *dcrutil.Tx, params *chaincfg.Params) ([]bool,
return isP2SH, addresses, amounts, nil
}
// GetSStxNullOutputAmounts takes an array of input amounts, change amounts, and a
// SStxNullOutputAmounts takes an array of input amounts, change amounts, and a
// ticket purchase amount, calculates the adjusted proportion from the purchase
// amount, stores it in an array, then returns the array. That is, for any given
// SStx, this function calculates the proportional outputs that any single user
// should receive.
// Returns: (1) Fees (2) Output Amounts (3) Error
func GetSStxNullOutputAmounts(amounts []int64,
func SStxNullOutputAmounts(amounts []int64,
changeAmounts []int64,
amountTicket int64) (int64, []int64, error) {
lengthAmounts := len(amounts)
@ -501,14 +532,12 @@ func GetSStxNullOutputAmounts(amounts []int64,
return fees, contribAmounts, nil
}
// GetStakeRewards takes a list of SStx adjusted output amounts, the amount used
// CalculateRewards takes a list of SStx adjusted output amounts, the amount used
// to purchase that ticket, and the reward for an SSGen tx and subsequently
// generates what the outputs should be in the SSGen tx. If used for calculating
// the outputs for an SSRtx, pass 0 for subsidy.
func GetStakeRewards(amounts []int64,
amountTicket int64,
func CalculateRewards(amounts []int64, amountTicket int64,
subsidy int64) []int64 {
outputsAmounts := make([]int64, len(amounts))
// SSGen handling
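The proportional split that CalculateRewards performs can be sketched in isolation. This is a simplified illustration, not the consensus code (truncation edge cases are omitted): each commitment is scaled by the ratio of the total payout (ticket price plus subsidy) to the ticket price. The values below reproduce the test vector used later in this diff (a 42000000-atom ticket and a 400000-atom subsidy yielding 21200000-atom outputs).

```go
package main

import "fmt"

// rewardsSketch is a simplified illustration of the proportional reward
// calculation: each commitment amount is scaled by
// (amountTicket + subsidy) / amountTicket.  Passing a subsidy of 0
// models an SSRtx (revocation), which only returns the original
// contributions.
func rewardsSketch(amounts []int64, amountTicket, subsidy int64) []int64 {
	out := make([]int64, len(amounts))
	for i, amt := range amounts {
		out[i] = amt * (amountTicket + subsidy) / amountTicket
	}
	return out
}

func main() {
	// Two equal contributors to a 42000000 atom ticket with a
	// 400000 atom vote subsidy each receive 21200000 atoms.
	fmt.Println(rewardsSketch([]int64{21000000, 21000000}, 42000000, 400000))
}
```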
@ -567,7 +596,7 @@ func VerifySStxAmounts(sstxAmts []int64, sstxCalcAmts []int64) error {
for idx, amt := range sstxCalcAmts {
if !(amt == sstxAmts[idx]) {
errStr := fmt.Sprintf("SStx verify error: at index %v incongruent"+
errStr := fmt.Sprintf("SStx verify error: at index %v incongruent "+
"amt %v in SStx calculated reward and amt %v in "+
"SStx", idx, amt, sstxAmts[idx])
return stakeRuleError(ErrVerSStxAmts, errStr)


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -739,7 +739,7 @@ func TestGetSSGenBlockVotedOn(t *testing.T) {
ssgen.SetTree(dcrutil.TxTreeStake)
ssgen.SetIndex(0)
blocksha, height, err := stake.GetSSGenBlockVotedOn(ssgen)
blocksha, height, err := stake.SSGenBlockVotedOn(ssgen)
correctblocksha, _ := chainhash.NewHash(
[]byte{
@ -793,7 +793,7 @@ func TestGetSStxStakeOutputInfo(t *testing.T) {
correctLimit := uint16(4)
typs, pkhs, amts, changeAmts, rules, limits :=
stake.GetSStxStakeOutputInfo(sstx)
stake.TxSStxStakeOutputInfo(sstx)
if typs[2] != correctTyp {
t.Errorf("Error thrown on TestGetSStxStakeOutputInfo: Looking for "+
@ -842,7 +842,7 @@ func TestGetSSGenStakeOutputInfo(t *testing.T) {
correctamt := int64(0x2123e300)
typs, pkhs, amts, err := stake.GetSSGenStakeOutputInfo(ssgen,
typs, pkhs, amts, err := stake.TxSSGenStakeOutputInfo(ssgen,
&chaincfg.SimNetParams)
if err != nil {
t.Errorf("Got unexpected error: %v", err.Error())
@ -871,7 +871,7 @@ func TestGetSSGenVoteBits(t *testing.T) {
correctvbs := uint16(0x8c94)
votebits := stake.GetSSGenVoteBits(ssgen)
votebits := stake.SSGenVoteBits(ssgen)
if correctvbs != votebits {
t.Errorf("Error thrown on TestGetSSGenVoteBits: Looking for "+
@ -895,7 +895,7 @@ func TestGetSSRtxStakeOutputInfo(t *testing.T) {
correctAmt := int64(0x2122e300)
typs, pkhs, amts, err := stake.GetSSRtxStakeOutputInfo(ssrtx,
typs, pkhs, amts, err := stake.TxSSRtxStakeOutputInfo(ssrtx,
&chaincfg.SimNetParams)
if err != nil {
t.Errorf("Got unexpected error: %v", err.Error())
@ -926,7 +926,7 @@ func TestGetSStxNullOutputAmounts(t *testing.T) {
int64(0x02300000)}
amtTicket := int64(0x9122e300)
_, _, err := stake.GetSStxNullOutputAmounts(
_, _, err := stake.SStxNullOutputAmounts(
[]int64{
int64(0x12000000),
int64(0x12300000),
@ -942,7 +942,7 @@ func TestGetSStxNullOutputAmounts(t *testing.T) {
}
// too small amount to commit
_, _, err = stake.GetSStxNullOutputAmounts(
_, _, err = stake.SStxNullOutputAmounts(
commitAmts,
changeAmts,
int64(0x00000000))
@ -956,7 +956,7 @@ func TestGetSStxNullOutputAmounts(t *testing.T) {
int64(0x02000000),
int64(0x12300001)}
_, _, err = stake.GetSStxNullOutputAmounts(
_, _, err = stake.SStxNullOutputAmounts(
commitAmts,
tooMuchChangeAmts,
int64(0x00000020))
@ -965,7 +965,7 @@ func TestGetSStxNullOutputAmounts(t *testing.T) {
t.Errorf("TestGetSStxNullOutputAmounts unexpected error: %v", err)
}
fees, amts, err := stake.GetSStxNullOutputAmounts(commitAmts,
fees, amts, err := stake.SStxNullOutputAmounts(commitAmts,
changeAmts,
amtTicket)
@ -1000,7 +1000,7 @@ func TestGetStakeRewards(t *testing.T) {
amountTicket := int64(42000000)
subsidy := int64(400000)
outAmts := stake.GetStakeRewards(amounts, amountTicket, subsidy)
outAmts := stake.CalculateRewards(amounts, amountTicket, subsidy)
// SSRtx example with 0 subsidy
expectedAmts := []int64{int64(21200000),
@ -1317,7 +1317,7 @@ func TestVerifyRealTxs(t *testing.T) {
sstxMtx.FromBytes(hexSstx)
sstxTx := dcrutil.NewTx(sstxMtx)
sstxTypes, sstxAddrs, sstxAmts, _, sstxRules, sstxLimits :=
stake.GetSStxStakeOutputInfo(sstxTx)
stake.TxSStxStakeOutputInfo(sstxTx)
hexSsrtx, _ := hex.DecodeString("010000000147f4453f244f2589551aea7c714d" +
"771053b667c6612616e9c8fc0e68960a9a100000000001ffffffff0270d7210a00" +
@ -1333,12 +1333,12 @@ func TestVerifyRealTxs(t *testing.T) {
ssrtxTx := dcrutil.NewTx(ssrtxMtx)
ssrtxTypes, ssrtxAddrs, ssrtxAmts, err :=
stake.GetSSRtxStakeOutputInfo(ssrtxTx, &chaincfg.TestNetParams)
stake.TxSSRtxStakeOutputInfo(ssrtxTx, &chaincfg.TestNetParams)
if err != nil {
t.Errorf("Unexpected GetSSRtxStakeOutputInfo error: %v", err.Error())
}
ssrtxCalcAmts := stake.GetStakeRewards(sstxAmts, sstxMtx.TxOut[0].Value,
ssrtxCalcAmts := stake.CalculateRewards(sstxAmts, sstxMtx.TxOut[0].Value,
int64(0))
// Here an error is thrown because the second output spends too much.
@ -1366,13 +1366,13 @@ func TestVerifyRealTxs(t *testing.T) {
// Correct this and make sure it passes.
ssrtxTx.MsgTx().TxOut[1].Value = 47460913
sstxTypes, sstxAddrs, sstxAmts, _, sstxRules, sstxLimits =
stake.GetSStxStakeOutputInfo(sstxTx)
stake.TxSStxStakeOutputInfo(sstxTx)
ssrtxTypes, ssrtxAddrs, ssrtxAmts, err =
stake.GetSSRtxStakeOutputInfo(ssrtxTx, &chaincfg.TestNetParams)
stake.TxSSRtxStakeOutputInfo(ssrtxTx, &chaincfg.TestNetParams)
if err != nil {
t.Errorf("Unexpected GetSSRtxStakeOutputInfo error: %v", err.Error())
}
ssrtxCalcAmts = stake.GetStakeRewards(sstxAmts, sstxMtx.TxOut[0].Value,
ssrtxCalcAmts = stake.CalculateRewards(sstxAmts, sstxMtx.TxOut[0].Value,
int64(0))
err = stake.VerifyStakingPkhsAndAmounts(sstxTypes,
sstxAddrs,
@ -1392,13 +1392,13 @@ func TestVerifyRealTxs(t *testing.T) {
// make sure it fails.
ssrtxTx.MsgTx().TxOut[0].Value = 0
sstxTypes, sstxAddrs, sstxAmts, _, sstxRules, sstxLimits =
stake.GetSStxStakeOutputInfo(sstxTx)
stake.TxSStxStakeOutputInfo(sstxTx)
ssrtxTypes, ssrtxAddrs, ssrtxAmts, err =
stake.GetSSRtxStakeOutputInfo(ssrtxTx, &chaincfg.TestNetParams)
stake.TxSSRtxStakeOutputInfo(ssrtxTx, &chaincfg.TestNetParams)
if err != nil {
t.Errorf("Unexpected GetSSRtxStakeOutputInfo error: %v", err.Error())
}
ssrtxCalcAmts = stake.GetStakeRewards(sstxAmts, sstxMtx.TxOut[0].Value,
ssrtxCalcAmts = stake.CalculateRewards(sstxAmts, sstxMtx.TxOut[0].Value,
int64(0))
err = stake.VerifyStakingPkhsAndAmounts(sstxTypes,
sstxAddrs,
@ -1423,13 +1423,13 @@ func TestVerifyRealTxs(t *testing.T) {
ssrtxTx.MsgTx().TxOut[0].Value = 108730066
ssrtxTx.MsgTx().TxOut[1].Value = 108730066
sstxTypes, sstxAddrs, sstxAmts, _, sstxRules, sstxLimits =
stake.GetSStxStakeOutputInfo(sstxTx)
stake.TxSStxStakeOutputInfo(sstxTx)
ssrtxTypes, ssrtxAddrs, ssrtxAmts, err =
stake.GetSSRtxStakeOutputInfo(ssrtxTx, &chaincfg.TestNetParams)
stake.TxSSRtxStakeOutputInfo(ssrtxTx, &chaincfg.TestNetParams)
if err != nil {
t.Errorf("Unexpected GetSSRtxStakeOutputInfo error: %v", err.Error())
}
ssrtxCalcAmts = stake.GetStakeRewards(sstxAmts, sstxMtx.TxOut[0].Value,
ssrtxCalcAmts = stake.CalculateRewards(sstxAmts, sstxMtx.TxOut[0].Value,
int64(0))
err = stake.VerifyStakingPkhsAndAmounts(sstxTypes,
sstxAddrs,


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -26,14 +26,15 @@ import (
"fmt"
"io/ioutil"
"math"
"math/big"
"path/filepath"
"sort"
"sync"
"github.com/decred/dcrd/blockchain/dbnamespace"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
"github.com/decred/dcrd/txscript"
database "github.com/decred/dcrd/database2"
"github.com/decred/dcrutil"
)
@ -230,11 +231,138 @@ func (tm *TicketMaps) GobDecode(buf []byte) error {
type TicketDB struct {
mtx sync.Mutex
maps TicketMaps
database database.Db
database database.DB
chainParams *chaincfg.Params
StakeEnabledHeight int64
}
// bestChainState represents the data to be stored in the database for the
// current best chain state.
type bestChainState struct {
hash chainhash.Hash
height uint32
totalTxns uint64
totalSubsidy int64
workSum *big.Int
}
// deserializeBestChainState deserializes the passed serialized best chain
// state.  This is data stored in the chain state bucket and is updated after
// every block is connected or disconnected from the main chain.
func deserializeBestChainState(serializedData []byte) (bestChainState, error) {
// Ensure the serialized data has enough bytes to properly deserialize
// the hash, height, total transactions, total subsidy, and work sum
// length.
expectedMinLen := chainhash.HashSize + 4 + 8 + 8 + 4
if len(serializedData) < expectedMinLen {
return bestChainState{}, database.Error{
ErrorCode: database.ErrCorruption,
Description: fmt.Sprintf("corrupt best chain state size; min %v "+
"got %v", expectedMinLen, len(serializedData)),
}
}
state := bestChainState{}
copy(state.hash[:], serializedData[0:chainhash.HashSize])
offset := uint32(chainhash.HashSize)
state.height = dbnamespace.ByteOrder.Uint32(serializedData[offset : offset+4])
offset += 4
state.totalTxns = dbnamespace.ByteOrder.Uint64(
serializedData[offset : offset+8])
offset += 8
state.totalSubsidy = int64(dbnamespace.ByteOrder.Uint64(
serializedData[offset : offset+8]))
offset += 8
workSumBytesLen := dbnamespace.ByteOrder.Uint32(
serializedData[offset : offset+4])
offset += 4
// Ensure the serialized data has enough bytes to deserialize the work
// sum.
if uint32(len(serializedData[offset:])) < workSumBytesLen {
return bestChainState{}, database.Error{
ErrorCode: database.ErrCorruption,
Description: fmt.Sprintf("corrupt work sum size; want %v "+
"got %v", workSumBytesLen, uint32(len(serializedData[offset:]))),
}
}
workSumBytes := serializedData[offset : offset+workSumBytesLen]
state.workSum = new(big.Int).SetBytes(workSumBytes)
return state, nil
}
// NewestSha returns the newest hash and height as recorded in the database
// of the blockchain.
func (tmdb *TicketDB) NewestSha() (*chainhash.Hash, int64, error) {
var state bestChainState
err := tmdb.database.View(func(dbTx database.Tx) error {
// Fetch the stored chain state from the database metadata.
// When it doesn't exist, it means the database hasn't been
// initialized for use with chain yet, so break out now to allow
// that to happen under a writable database transaction.
serializedData := dbTx.Metadata().Get(dbnamespace.ChainStateKeyName)
if serializedData == nil {
return nil
}
var err error
state, err = deserializeBestChainState(serializedData)
if err != nil {
return err
}
return nil
})
return &state.hash, int64(state.height), err
}
// FetchBlockShaByHeight queries the blockchain database for the hash of the
// block at the given height.
func (tmdb *TicketDB) FetchBlockShaByHeight(height int64) (*chainhash.Hash, error) {
var hash chainhash.Hash
err := tmdb.database.View(func(dbTx database.Tx) error {
var serializedHeight [4]byte
dbnamespace.ByteOrder.PutUint32(serializedHeight[:], uint32(height))
meta := dbTx.Metadata()
heightIndex := meta.Bucket(dbnamespace.HeightIndexBucketName)
hashBytes := heightIndex.Get(serializedHeight[:])
if hashBytes == nil {
return fmt.Errorf("no block at height %d exists", height)
}
copy(hash[:], hashBytes)
return nil
})
return &hash, err
}
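FetchBlockShaByHeight depends on heights being stored under fixed-width 4-byte keys in the height-index bucket. A minimal sketch of that key encoding against an in-memory map standing in for the bucket (little-endian is assumed to match the ByteOrder usage above; the map is purely illustrative):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// heightKey encodes a block height as the fixed-width 4-byte key used
// for height-index lookups, so every height maps to a key of the same
// length regardless of magnitude.
func heightKey(height int64) [4]byte {
	var key [4]byte
	binary.LittleEndian.PutUint32(key[:], uint32(height))
	return key
}

func main() {
	// An in-memory stand-in for the height-index bucket.
	index := map[[4]byte]string{}
	index[heightKey(168)] = "hash-of-block-168"

	// Lookup mirrors FetchBlockShaByHeight: encode the height, then Get.
	hash, ok := index[heightKey(168)]
	fmt.Println(ok, hash)
}
```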
// FetchBlockBySha fetches a block from a given hash using the blockchain
// database.
func (tmdb *TicketDB) FetchBlockBySha(hash *chainhash.Hash) (*dcrutil.Block, error) {
var block *dcrutil.Block
err := tmdb.database.View(func(dbTx database.Tx) error {
rawBytes, err := dbTx.FetchBlock(hash)
if err != nil {
return err
}
block, err = dcrutil.NewBlockFromBytes(rawBytes)
if err != nil {
return err
}
return nil
})
if err != nil {
return nil, err
}
block.SetHeight(int64(block.MsgBlock().Header.Height))
return block, nil
}
// Initialize allocates buckets for each ticket number in ticketMap and buckets
// for each height up to the declared height from 0. This should be called only
// when no suitable files exist to load the TicketDB from or when
@ -242,7 +370,7 @@ type TicketDB struct {
// WARNING: Height should be 0 for all non-debug uses.
//
// This function is safe for concurrent access.
func (tmdb *TicketDB) Initialize(np *chaincfg.Params, db database.Db) {
func (tmdb *TicketDB) Initialize(np *chaincfg.Params, db database.DB) error {
tmdb.mtx.Lock()
defer tmdb.mtx.Unlock()
@ -255,10 +383,28 @@ func (tmdb *TicketDB) Initialize(np *chaincfg.Params, db database.Db) {
tmdb.StakeEnabledHeight = np.StakeEnabledHeight
// Fill in live ticket buckets
// Fill in live ticket buckets.
for i := 0; i < BucketsSize; i++ {
tmdb.maps.ticketMap[uint8(i)] = make(SStxMemMap)
}
// Get the latest block height from the db.
_, curHeight, err := tmdb.NewestSha()
if err != nil {
return err
}
log.Infof("Block ticket database initialized empty")
if curHeight > 0 {
log.Infof("Db non-empty, resyncing ticket DB")
err := tmdb.RescanTicketDB()
if err != nil {
return err
}
}
return nil
}
// maybeInsertBlock creates a new bucket in the spentTicketMap; this should be
@ -317,7 +463,7 @@ func (tmdb *TicketDB) GetTopBlock() int64 {
//
// This function is safe for concurrent access.
func (tmdb *TicketDB) LoadTicketDBs(tmsPath, tmsLoc string, np *chaincfg.Params,
db database.Db) error {
db database.DB) error {
tmdb.mtx.Lock()
defer tmdb.mtx.Unlock()
@ -346,7 +492,7 @@ func (tmdb *TicketDB) LoadTicketDBs(tmsPath, tmsLoc string, np *chaincfg.Params,
tmdb.maps = loadedTicketMaps
// Get the latest block height from the database.
_, curHeight, err := tmdb.database.NewestSha()
_, curHeight, err := tmdb.NewestSha()
if err != nil {
return err
}
@ -936,69 +1082,6 @@ func (tmdb *TicketDB) GetLiveTicketBucketData() map[int]int {
return ltbd
}
// GetLiveTicketsInBucketData creates a map indicating the ticket hash and the
// owner's address for each bucket. Used for an RPC call.
func (tmdb *TicketDB) GetLiveTicketsInBucketData(
bucket uint8) (map[chainhash.Hash]dcrutil.Address, error) {
tmdb.mtx.Lock()
defer tmdb.mtx.Unlock()
ltbd := make(map[chainhash.Hash]dcrutil.Address)
tickets := tmdb.maps.ticketMap[bucket]
for _, ticket := range tickets {
// Load the ticket from the database and find the address that it's
// going to.
txReply, err := tmdb.database.FetchTxBySha(&ticket.SStxHash)
if err != nil {
return nil, err
}
_, addr, _, err :=
txscript.ExtractPkScriptAddrs(txReply[0].Tx.TxOut[0].Version,
txReply[0].Tx.TxOut[0].PkScript, tmdb.chainParams)
if err != nil {
return nil, err
}
ltbd[ticket.SStxHash] = addr[0]
}
return ltbd, nil
}
// GetLiveTicketsForAddress gets all currently active tickets for a given
// address.
func (tmdb *TicketDB) GetLiveTicketsForAddress(
address dcrutil.Address) ([]chainhash.Hash, error) {
tmdb.mtx.Lock()
defer tmdb.mtx.Unlock()
var ltfa []chainhash.Hash
for i := 0; i < BucketsSize; i++ {
for _, ticket := range tmdb.maps.ticketMap[i] {
// Load the ticket from the database and find the address that it's
// going to.
txReply, err := tmdb.database.FetchTxBySha(&ticket.SStxHash)
if err != nil {
return nil, err
}
_, addr, _, err :=
txscript.ExtractPkScriptAddrs(txReply[0].Tx.TxOut[0].Version,
txReply[0].Tx.TxOut[0].PkScript, tmdb.chainParams)
if err != nil {
return nil, err
}
// Compare the HASH160 result and see if it's equal.
if bytes.Equal(addr[0].ScriptAddress(), address.ScriptAddress()) {
ltfa = append(ltfa, ticket.SStxHash)
}
}
}
return ltfa, nil
}
// spendTickets transfers tickets from the ticketMap to the spentTicketMap. Useful
// when connecting blocks. Also pushes missed tickets to the missed ticket map.
// usedtickets is a map that contains all tickets that were actually used in SSGen
@ -1206,12 +1289,12 @@ func (tmdb *TicketDB) revokeTickets(
// This function MUST be called with the tmdb lock held (for writes).
func (tmdb *TicketDB) unrevokeTickets(height int64) (SStxMemMap, error) {
// Get the block of interest.
var hash, errHash = tmdb.database.FetchBlockShaByHeight(height)
var hash, errHash = tmdb.FetchBlockShaByHeight(height)
if errHash != nil {
return nil, errHash
}
var block, errBlock = tmdb.database.FetchBlockBySha(hash)
var block, errBlock = tmdb.FetchBlockBySha(hash)
if errBlock != nil {
return nil, errBlock
}
@ -1319,12 +1402,12 @@ func (tmdb *TicketDB) getNewTicketsFromHeight(height int64) (SStxMemMap, error)
matureHeight := height - int64(tmdb.chainParams.TicketMaturity)
var hash, errHash = tmdb.database.FetchBlockShaByHeight(matureHeight)
var hash, errHash = tmdb.FetchBlockShaByHeight(matureHeight)
if errHash != nil {
return nil, errHash
}
var block, errBlock = tmdb.database.FetchBlockBySha(hash)
var block, errBlock = tmdb.FetchBlockBySha(hash)
if errBlock != nil {
return nil, errBlock
}
@ -1383,8 +1466,8 @@ func (tmdb *TicketDB) pushMatureTicketsAtHeight(height int64) (SStxMemMap, error
// InsertBlock. See the comment for InsertBlock for more details.
//
// This function MUST be called with the tmdb lock held (for writes).
func (tmdb *TicketDB) insertBlock(block *dcrutil.Block) (SStxMemMap,
SStxMemMap, SStxMemMap, error) {
func (tmdb *TicketDB) insertBlock(block *dcrutil.Block,
parent *dcrutil.Block) (SStxMemMap, SStxMemMap, SStxMemMap, error) {
height := block.Height()
if height < tmdb.StakeEnabledHeight {
@ -1440,12 +1523,7 @@ func (tmdb *TicketDB) insertBlock(block *dcrutil.Block) (SStxMemMap,
}
// Spend or miss all the necessary tickets and do some sanity checks.
parentBlock, err := tmdb.database.FetchBlockBySha(
&block.MsgBlock().Header.PrevBlock)
if err != nil {
return nil, nil, nil, err
}
spentAndMissedTickets, err := tmdb.spendTickets(parentBlock,
spentAndMissedTickets, err := tmdb.spendTickets(parent,
usedTickets,
spendingHashes)
if err != nil {
@ -1493,13 +1571,13 @@ func (tmdb *TicketDB) insertBlock(block *dcrutil.Block) (SStxMemMap,
// a consensus failure somehow.
//
// This function is safe for concurrent access.
func (tmdb *TicketDB) InsertBlock(block *dcrutil.Block) (SStxMemMap,
func (tmdb *TicketDB) InsertBlock(block, parent *dcrutil.Block) (SStxMemMap,
SStxMemMap, SStxMemMap, error) {
tmdb.mtx.Lock()
defer tmdb.mtx.Unlock()
return tmdb.insertBlock(block)
return tmdb.insertBlock(block, parent)
}
// unpushMatureTicketsAtHeight unmatures tickets from TICKET_MATURITY blocks ago by
@ -1536,8 +1614,9 @@ func (tmdb *TicketDB) unpushMatureTicketsAtHeight(height int64) (SStxMemMap,
func (tmdb *TicketDB) removeBlockToHeight(height int64) (map[int64]SStxMemMap,
map[int64]SStxMemMap, map[int64]SStxMemMap, error) {
if height < tmdb.StakeEnabledHeight {
return nil, nil, nil, fmt.Errorf("TicketDB Error: tried to remove " +
"blocks to before minimum maturation height!")
return nil, nil, nil, fmt.Errorf("TicketDB Error: tried to remove "+
"blocks to before minimum maturation height (got %v, min %v)!",
height, tmdb.StakeEnabledHeight)
}
// Discover the current height
@ -1601,7 +1680,7 @@ func (tmdb *TicketDB) RemoveBlockToHeight(height int64) (map[int64]SStxMemMap,
// This function MUST be called with the tmdb lock held (for writes).
func (tmdb *TicketDB) rescanTicketDB() error {
// Get the latest block height from the database.
_, height, err := tmdb.database.NewestSha()
_, height, err := tmdb.NewestSha()
if err != nil {
return err
}
@ -1628,12 +1707,12 @@ func (tmdb *TicketDB) rescanTicketDB() error {
spendHashes[td.SpendHash] = struct{}{}
}
h, err := tmdb.database.FetchBlockShaByHeight(curTmdbHeight)
h, err := tmdb.FetchBlockShaByHeight(curTmdbHeight)
if err != nil {
return err
}
blCur, err := tmdb.database.FetchBlockBySha(h)
blCur, err := tmdb.FetchBlockBySha(h)
if err != nil {
return err
}
@ -1649,14 +1728,19 @@ func (tmdb *TicketDB) rescanTicketDB() error {
}
}
// Handle empty chain exception.
if curTmdbHeight <= tmdb.chainParams.StakeEnabledHeight {
failedToFindBlock = true
}
// We found a matching block in the database at
// curTmdbHeight-1, so sync to it.
if !failedToFindBlock {
log.Infof("Found a previously good height %v in the old "+
"stake database, attempting to sync to tip from it",
curTmdbHeight)
"stake database, attempting to sync to tip height %v "+
"from it", curTmdbHeight, height)
// Remove the top block.
// Remove the top block if we can.
_, _, _, err = tmdb.removeBlockToHeight(curTmdbHeight - 1)
if err != nil {
return err
@ -1665,17 +1749,22 @@ func (tmdb *TicketDB) rescanTicketDB() error {
// Reinsert the top block and sync to the best chain
// height.
for i := curTmdbHeight; i <= height; i++ {
h, err := tmdb.database.FetchBlockShaByHeight(i)
h, err := tmdb.FetchBlockShaByHeight(i)
if err != nil {
return err
}
bl, err := tmdb.database.FetchBlockBySha(h)
bl, err := tmdb.FetchBlockBySha(h)
if err != nil {
return err
}
_, _, _, err = tmdb.insertBlock(bl)
pa, err := tmdb.FetchBlockBySha(&bl.MsgBlock().Header.PrevBlock)
if err != nil {
return err
}
_, _, _, err = tmdb.insertBlock(bl, pa)
if err != nil {
return err
}
@ -1704,17 +1793,23 @@ func (tmdb *TicketDB) rescanTicketDB() error {
for curHeight := tmdb.StakeEnabledHeight; curHeight <= height; curHeight++ {
// Go through the winners and votes for each block and use those to spend
// tickets in the ticket db.
hash, errHash := tmdb.database.FetchBlockShaByHeight(curHeight)
hash, errHash := tmdb.FetchBlockShaByHeight(curHeight)
if errHash != nil {
return errHash
}
block, errBlock := tmdb.database.FetchBlockBySha(hash)
block, errBlock := tmdb.FetchBlockBySha(hash)
if errBlock != nil {
return errBlock
}
_, _, _, err = tmdb.insertBlock(block)
parent, errBlock :=
tmdb.FetchBlockBySha(&block.MsgBlock().Header.PrevBlock)
if errBlock != nil {
return errBlock
}
_, _, _, err = tmdb.insertBlock(block, parent)
if err != nil {
return err
}


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -17,11 +17,10 @@ import (
"testing"
"time"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
@ -62,15 +61,14 @@ func TestTicketDB(t *testing.T) {
// Declare some useful variables
testBCHeight := int64(168)
// Set up a DB
database, err := database.CreateDB("leveldb", "ticketdb_test")
// Set up a blockchain
chain, teardownFunc, err := chainSetup("ticketdbunittests",
simNetParams)
if err != nil {
t.Errorf("Db create error: %v", err.Error())
t.Errorf("Failed to setup chain instance: %v", err)
return
}
// Make a new tmdb to fill with dummy live and used tickets
var tmdb stake.TicketDB
tmdb.Initialize(simNetParams, database)
defer teardownFunc()
filename := filepath.Join("..", "/../blockchain/testdata", "blocks0to168.bz2")
fi, err := os.Open(filename)
@ -83,29 +81,35 @@ func TestTicketDB(t *testing.T) {
// Create decoder from the buffer and a map to store the data
bcDecoder := gob.NewDecoder(bcBuf)
blockchain := make(map[int64][]byte)
testBlockchain := make(map[int64][]byte)
// Decode the blockchain into the map
if err := bcDecoder.Decode(&blockchain); err != nil {
if err := bcDecoder.Decode(&testBlockchain); err != nil {
t.Errorf("error decoding test blockchain")
}
timeSource := blockchain.NewMedianTime()
var CopyOfMapsAtBlock50, CopyOfMapsAtBlock168 stake.TicketMaps
var ticketsToSpendIn167 []chainhash.Hash
var sortedTickets167 []*stake.TicketData
for i := int64(0); i <= testBCHeight; i++ {
block, err := dcrutil.NewBlockFromBytes(blockchain[i])
if i == 0 {
continue
}
block, err := dcrutil.NewBlockFromBytes(testBlockchain[i])
if err != nil {
t.Errorf("block deserialization error on block %v", i)
t.Fatalf("block deserialization error on block %v", i)
}
block.SetHeight(i)
database.InsertBlock(block)
tmdb.InsertBlock(block)
_, _, err = chain.ProcessBlock(block, timeSource, blockchain.BFNone)
if err != nil {
t.Fatalf("failed to process block %v: %v", i, err)
}
if i == 50 {
// Create snapshot of tmdb at block 50
CopyOfMapsAtBlock50, err = cloneTicketDB(&tmdb)
CopyOfMapsAtBlock50, err = cloneTicketDB(chain.TMDB())
if err != nil {
t.Errorf("db cloning at block 50 failure! %v", err)
}
@ -119,7 +123,7 @@ func TestTicketDB(t *testing.T) {
totalTickets := 0
sortedSlice := make([]*stake.TicketData, 0)
for i := 0; i < stake.BucketsSize; i++ {
tix, err := tmdb.DumpLiveTickets(uint8(i))
tix, err := chain.TMDB().DumpLiveTickets(uint8(i))
if err != nil {
t.Errorf("error dumping live tickets")
}
@ -138,7 +142,7 @@ func TestTicketDB(t *testing.T) {
}
if i == 168 {
parentBlock, err := dcrutil.NewBlockFromBytes(blockchain[i-1])
parentBlock, err := dcrutil.NewBlockFromBytes(testBlockchain[i-1])
if err != nil {
t.Errorf("block deserialization error on block %v", i-1)
}
@ -159,7 +163,7 @@ func TestTicketDB(t *testing.T) {
// Make sure that the tickets that were supposed to be spent or
// missed were.
spentTix, err := tmdb.DumpSpentTickets(i)
spentTix, err := chain.TMDB().DumpSpentTickets(i)
if err != nil {
t.Errorf("DumpSpentTickets failure")
}
@ -171,7 +175,7 @@ func TestTicketDB(t *testing.T) {
}
// Create snapshot of tmdb at block 168
CopyOfMapsAtBlock168, err = cloneTicketDB(&tmdb)
CopyOfMapsAtBlock168, err = cloneTicketDB(chain.TMDB())
if err != nil {
t.Errorf("db cloning at block 168 failure! %v", err)
}
@ -179,24 +183,35 @@ func TestTicketDB(t *testing.T) {
}
// Remove five blocks from HEAD~1
_, _, _, err = tmdb.RemoveBlockToHeight(50)
_, _, _, err = chain.TMDB().RemoveBlockToHeight(50)
if err != nil {
t.Errorf("error: %v", err)
}
// Test if the roll back was symmetric to the earlier snapshot
if !reflect.DeepEqual(tmdb.DumpMapsPointer(), CopyOfMapsAtBlock50) {
if !reflect.DeepEqual(chain.TMDB().DumpMapsPointer(), CopyOfMapsAtBlock50) {
t.Errorf("The td did not restore to a previous block height correctly!")
}
// Test rescanning a ticket db
err = tmdb.RescanTicketDB()
err = chain.TMDB().RescanTicketDB()
if err != nil {
t.Errorf("rescanticketdb err: %v", err.Error())
}
// Remove all blocks and rescan too
_, _, _, err =
chain.TMDB().RemoveBlockToHeight(simNetParams.StakeEnabledHeight)
if err != nil {
t.Errorf("error: %v", err)
}
err = chain.TMDB().RescanTicketDB()
if err != nil {
t.Errorf("rescanticketdb err: %v", err.Error())
}
// Test if the db file storage was symmetric to the earlier snapshot
if !reflect.DeepEqual(tmdb.DumpMapsPointer(), CopyOfMapsAtBlock168) {
if !reflect.DeepEqual(chain.TMDB().DumpMapsPointer(), CopyOfMapsAtBlock168) {
t.Errorf("The td did not rescan to HEAD correctly!")
}
@ -206,19 +221,19 @@ func TestTicketDB(t *testing.T) {
}
// Store the ticket db to disk
err = tmdb.Store("testdata/", "testtmdb")
err = chain.TMDB().Store("testdata/", "testtmdb")
if err != nil {
t.Errorf("error: %v", err)
}
var tmdb2 stake.TicketDB
err = tmdb2.LoadTicketDBs("testdata/", "testtmdb", simNetParams, database)
err = tmdb2.LoadTicketDBs("testdata/", "testtmdb", simNetParams, chain.DB())
if err != nil {
t.Errorf("error: %v", err)
}
// Test if the db file storage was symmetric to previously rescanned one
if !reflect.DeepEqual(tmdb.DumpMapsPointer(), tmdb2.DumpMapsPointer()) {
if !reflect.DeepEqual(chain.TMDB().DumpMapsPointer(), tmdb2.DumpMapsPointer()) {
t.Errorf("The td did not rescan to a previous block height correctly!")
}
@ -228,9 +243,9 @@ func TestTicketDB(t *testing.T) {
missedIn152, _ := chainhash.NewHashFromStr(
"84f7f866b0af1cc278cb8e0b2b76024a07542512c76487c83628c14c650de4fa")
tmdb.RemoveBlockToHeight(152)
chain.TMDB().RemoveBlockToHeight(152)
missedTix, err := tmdb.DumpMissedTickets()
missedTix, err := chain.TMDB().DumpMissedTickets()
if err != nil {
t.Errorf("err dumping missed tix: %v", err.Error())
}
@ -240,12 +255,12 @@ func TestTicketDB(t *testing.T) {
missedIn152)
}
tmdb.RescanTicketDB()
chain.TMDB().RescanTicketDB()
// Make sure that the revoked map contains the revoked tx
revokedSlice := []*chainhash.Hash{missedIn152}
revokedTix, err := tmdb.DumpRevokedTickets()
revokedTix, err := chain.TMDB().DumpRevokedTickets()
if err != nil {
t.Errorf("err dumping missed tix: %v", err.Error())
}
@ -262,9 +277,6 @@ func TestTicketDB(t *testing.T) {
"block 152 and later revoked")
}
database.Close()
tmdb.Close()
os.RemoveAll("ticketdb_test")
os.Remove("./ticketdb_test.ver")
os.Remove("testdata/testtmdb")
@ -298,6 +310,7 @@ var simNetParams = &chaincfg.Params{
PowLimitBits: 0x207fffff,
ResetMinDifficulty: false,
GenerateSupported: true,
MaximumBlockSize: 1000000,
TimePerBlock: time.Second * 1,
WorkDiffAlpha: 1,
WorkDiffWindowSize: 8,
@ -351,6 +364,7 @@ var simNetParams = &chaincfg.Params{
MaxFreshStakePerBlock: 40, // 8*TicketsPerBlock
StakeEnabledHeight: 16 + 16, // CoinbaseMaturity + TicketMaturity
StakeValidationHeight: 16 + (64 * 2), // CoinbaseMaturity + TicketPoolSize*2
StakeBaseSigScript: []byte{0xDE, 0xAD, 0xBE, 0xEF},
// Decred organization related parameters
//


@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -11,6 +11,7 @@ import (
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg/chainhash"
database "github.com/decred/dcrd/database2"
)
// GetNextWinningTickets returns the next tickets eligible for spending as SSGen
@ -19,7 +20,7 @@ import (
func (b *BlockChain) GetNextWinningTickets() ([]chainhash.Hash, int, [6]byte,
error) {
winningTickets, poolSize, finalState, _, err :=
b.getWinningTicketsWithStore(b.bestChain)
b.getWinningTicketsWithStore(b.bestNode)
if err != nil {
return nil, 0, [6]byte{}, err
}
@ -62,9 +63,14 @@ func (b *BlockChain) getWinningTicketsWithStore(node *blockNode) ([]chainhash.Ha
}
if ticketStore != nil {
// We need the viewpoint of spendable tickets given that the
// current block was actually added.
err = b.connectTickets(ticketStore, node, block)
view := NewUtxoViewpoint()
view.SetBestHash(node.hash)
view.SetStakeViewpoint(ViewpointPrevValidInitial)
parent, err := b.getBlockFromHash(&node.header.PrevBlock)
if err != nil {
return nil, 0, [6]byte{}, nil, err
}
err = view.fetchInputUtxos(b.db, block, parent)
if err != nil {
return nil, 0, [6]byte{}, nil, err
}
@ -177,7 +183,8 @@ func (b *BlockChain) getWinningTicketsInclStore(node *blockNode,
totalTickets := 0
var sortedSlice []*stake.TicketData
for i := 0; i < stake.BucketsSize; i++ {
ltb, err := b.GenerateLiveTicketBucket(ticketStore, tpdBucketMap, uint8(i))
ltb, err := b.GenerateLiveTicketBucket(ticketStore, tpdBucketMap,
uint8(i))
if err != nil {
h := node.hash
str := fmt.Sprintf("Failed to generate a live ticket bucket "+
@ -229,9 +236,13 @@ func (b *BlockChain) getWinningTicketsInclStore(node *blockNode,
// GetWinningTickets takes a node block hash and returns the next tickets
// eligible for spending as SSGen.
// This function is NOT safe for concurrent access.
//
// This function is safe for concurrent access.
func (b *BlockChain) GetWinningTickets(nodeHash chainhash.Hash) ([]chainhash.Hash,
int, [6]byte, error) {
b.chainLock.Lock()
defer b.chainLock.Unlock()
var node *blockNode
if n, exists := b.index[nodeHash]; exists {
node = n
@ -253,9 +264,21 @@ func (b *BlockChain) GetWinningTickets(nodeHash chainhash.Hash) ([]chainhash.Has
}
// GetMissedTickets returns a list of currently missed tickets.
//
// This function is NOT safe for concurrent access.
func (b *BlockChain) GetMissedTickets() []chainhash.Hash {
missedTickets := b.tmdb.GetTicketHashesForMissed()
return missedTickets
}
// DB passes the pointer to the database. It is only to be used by testing.
func (b *BlockChain) DB() database.DB {
return b.db
}
// TMDB passes the pointer to the ticket database. It is only to be used by
// testing.
func (b *BlockChain) TMDB() *stake.TicketDB {
return b.tmdb
}


@ -1,5 +1,5 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -8,13 +8,51 @@ package blockchain
import (
"bytes"
"fmt"
"sync"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/txscript"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
// The number of values to precalculate on initialization of the subsidy
// cache.
const subsidyCacheInitWidth = 4
// SubsidyCache is a structure that caches calculated values of subsidy so that
// they're not constantly recalculated. The blockchain struct itself possesses a
// pointer to a preinitialized SubsidyCache.
type SubsidyCache struct {
subsidyCache map[uint64]int64
subsidyCacheLock sync.RWMutex
params *chaincfg.Params
}
// NewSubsidyCache initializes a new subsidy cache for a given height. It
// precalculates the values of the subsidy that are most likely to be seen by
// the client when it connects to the network.
func NewSubsidyCache(height int64, params *chaincfg.Params) *SubsidyCache {
scm := make(map[uint64]int64)
sc := SubsidyCache{
subsidyCache: scm,
params: params,
}
iteration := uint64(height / params.ReductionInterval)
if iteration < subsidyCacheInitWidth {
return &sc
}
for i := iteration - subsidyCacheInitWidth; i <= iteration; i++ {
sc.CalcBlockSubsidy(int64(i) * params.ReductionInterval)
}
return &sc
}
// CalcBlockSubsidy returns the subsidy amount a block at the provided height
// should have. This is mainly used for determining how much the coinbase for
// newly generated blocks awards as well as validating the coinbase for blocks
@ -26,36 +64,66 @@ import (
// 2 subsidy /= DivSubsidy
//
// Safe for concurrent access.
func calcBlockSubsidy(height int64, params *chaincfg.Params) int64 {
func (s *SubsidyCache) CalcBlockSubsidy(height int64) int64 {
// Block height 1 subsidy is 'special' and used to
// distribute initial tokens, if any.
if height == 1 {
return params.BlockOneSubsidy()
return s.params.BlockOneSubsidy()
}
iterations := height / params.ReductionInterval
subsidy := params.BaseSubsidy
iteration := uint64(height / s.params.ReductionInterval)
// You could stick all these values in a LUT for faster access if you
// wanted to, but this calculation is already really fast until you
// get very very far into the blockchain. The other method you could
// use is storing the total subsidy in a block node and do the
// multiplication and division when needed when adding a block.
if iterations > 0 {
for i := int64(0); i < iterations; i++ {
subsidy *= params.MulSubsidy
subsidy /= params.DivSubsidy
}
if iteration == 0 {
return s.params.BaseSubsidy
}
// First, check the cache.
s.subsidyCacheLock.RLock()
cachedValue, existsInCache := s.subsidyCache[iteration]
s.subsidyCacheLock.RUnlock()
if existsInCache {
return cachedValue
}
// Is the previous one in the cache? If so, calculate
// the subsidy from the previous known value and store
// it in the database and the cache.
s.subsidyCacheLock.RLock()
cachedValue, existsInCache = s.subsidyCache[iteration-1]
s.subsidyCacheLock.RUnlock()
if existsInCache {
cachedValue *= s.params.MulSubsidy
cachedValue /= s.params.DivSubsidy
s.subsidyCacheLock.Lock()
s.subsidyCache[iteration] = cachedValue
s.subsidyCacheLock.Unlock()
return cachedValue
}
// Calculate the subsidy from scratch and store in the
// cache. TODO If there's an older item in the cache,
// calculate it from that to save time.
subsidy := s.params.BaseSubsidy
for i := uint64(0); i < iteration; i++ {
subsidy *= s.params.MulSubsidy
subsidy /= s.params.DivSubsidy
}
s.subsidyCacheLock.Lock()
s.subsidyCache[iteration] = subsidy
s.subsidyCacheLock.Unlock()
return subsidy
}
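The caching scheme above reduces the base subsidy by MulSubsidy/DivSubsidy once per elapsed ReductionInterval, memoizing each iteration so later lookups derive from the nearest cached value instead of recomputing from scratch. A minimal self-contained sketch of that idea follows; the parameter values are hypothetical stand-ins for the real chaincfg.Params fields, and the map stands in for the mutex-guarded cache:

```go
package main

import "fmt"

// Hypothetical stand-ins for chaincfg.Params subsidy fields.
const (
	baseSubsidy       = 3119582664 // atoms
	mulSubsidy        = 100
	divSubsidy        = 101
	reductionInterval = 6144 // blocks between reductions
)

// subsidyCache memoizes the subsidy for each reduction iteration.
type subsidyCache map[uint64]int64

// subsidyAt applies the Mul/Div reduction once per elapsed interval,
// deriving from the previous cached iteration when available.
func (c subsidyCache) subsidyAt(height int64) int64 {
	iteration := uint64(height / reductionInterval)
	if iteration == 0 {
		return baseSubsidy
	}
	if v, ok := c[iteration]; ok {
		return v
	}
	subsidy := int64(baseSubsidy)
	start := uint64(0)
	if v, ok := c[iteration-1]; ok {
		// One step from the previous iteration instead of a full loop.
		subsidy, start = v, iteration-1
	}
	for i := start; i < iteration; i++ {
		subsidy *= mulSubsidy
		subsidy /= divSubsidy
	}
	c[iteration] = subsidy
	return subsidy
}

func main() {
	c := make(subsidyCache)
	fmt.Println(c.subsidyAt(1))                     // iteration 0: base subsidy
	fmt.Println(c.subsidyAt(reductionInterval))     // one reduction
	fmt.Println(c.subsidyAt(2 * reductionInterval)) // derived from cached value
}
```

The real SubsidyCache additionally guards the map with a sync.RWMutex so the lookup is safe for concurrent access.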
// CalcBlockWorkSubsidy calculates the proof of work subsidy for a block as a
// proportion of the total subsidy.
func CalcBlockWorkSubsidy(height int64, voters uint16,
params *chaincfg.Params) int64 {
subsidy := calcBlockSubsidy(height, params)
func CalcBlockWorkSubsidy(subsidyCache *SubsidyCache, height int64,
voters uint16, params *chaincfg.Params) int64 {
subsidy := subsidyCache.CalcBlockSubsidy(height)
proportionWork := int64(params.WorkRewardProportion)
proportions := int64(params.TotalSubsidyProportions())
subsidy *= proportionWork
@ -84,12 +152,14 @@ func CalcBlockWorkSubsidy(height int64, voters uint16,
// of its input SStx.
//
// Safe for concurrent access.
func CalcStakeVoteSubsidy(height int64, params *chaincfg.Params) int64 {
func CalcStakeVoteSubsidy(subsidyCache *SubsidyCache, height int64,
params *chaincfg.Params) int64 {
// Calculate the actual reward for this block, then further reduce reward
// proportional to StakeRewardProportion.
// Note that voters/potential voters is 1, so that vote reward is calculated
// irrespective of block reward.
subsidy := calcBlockSubsidy(height, params)
subsidy := subsidyCache.CalcBlockSubsidy(height)
proportionStake := int64(params.StakeRewardProportion)
proportions := int64(params.TotalSubsidyProportions())
subsidy *= proportionStake
@ -102,13 +172,14 @@ func CalcStakeVoteSubsidy(height int64, params *chaincfg.Params) int64 {
// coinbase.
//
// Safe for concurrent access.
func CalcBlockTaxSubsidy(height int64, voters uint16,
func CalcBlockTaxSubsidy(subsidyCache *SubsidyCache, height int64, voters uint16,
params *chaincfg.Params) int64 {
if params.BlockTaxProportion == 0 {
return 0
}
subsidy := calcBlockSubsidy(int64(height), params)
subsidy := subsidyCache.CalcBlockSubsidy(height)
proportionTax := int64(params.BlockTaxProportion)
proportions := int64(params.TotalSubsidyProportions())
subsidy *= proportionTax
@ -134,7 +205,8 @@ func CalcBlockTaxSubsidy(height int64, voters uint16,
// BlockOneCoinbasePaysTokens checks to see if the first block coinbase pays
// out to the network initial token ledger.
func BlockOneCoinbasePaysTokens(tx *dcrutil.Tx, params *chaincfg.Params) error {
func BlockOneCoinbasePaysTokens(tx *dcrutil.Tx,
params *chaincfg.Params) error {
// If no ledger is specified, just return true.
if len(params.BlockOneLedger) == 0 {
return nil
@ -211,8 +283,8 @@ func BlockOneCoinbasePaysTokens(tx *dcrutil.Tx, params *chaincfg.Params) error {
// CoinbasePaysTax checks to see if a given block's coinbase correctly pays
// tax to the developer organization.
func CoinbasePaysTax(tx *dcrutil.Tx, height uint32, voters uint16,
params *chaincfg.Params) error {
func CoinbasePaysTax(subsidyCache *SubsidyCache, tx *dcrutil.Tx, height uint32,
voters uint16, params *chaincfg.Params) error {
// Taxes only apply from block 2 onwards.
if height <= 1 {
return nil
@ -265,7 +337,7 @@ func CoinbasePaysTax(tx *dcrutil.Tx, height uint32, voters uint16,
// Get the amount of subsidy that should have been paid out to
// the organization, then check it.
orgSubsidy := CalcBlockTaxSubsidy(int64(height), voters, params)
orgSubsidy := CalcBlockTaxSubsidy(subsidyCache, int64(height), voters, params)
amountFound := tx.MsgTx().TxOut[0].Value
if orgSubsidy != amountFound {
errStr := fmt.Sprintf("amount in output 0 has non matching org "+
@ -275,3 +347,25 @@ func CoinbasePaysTax(tx *dcrutil.Tx, height uint32, voters uint16,
return nil
}
// CalculateAddedSubsidy calculates the amount of subsidy added by a block
// and its parent. The blocks passed to this function MUST be valid blocks
// that have already been confirmed to abide by the consensus rules of the
// network, or the function might panic.
func CalculateAddedSubsidy(block, parent *dcrutil.Block) int64 {
var subsidy int64
regularTxTreeValid := dcrutil.IsFlagSet16(block.MsgBlock().Header.VoteBits,
dcrutil.BlockValid)
if regularTxTreeValid {
subsidy += parent.MsgBlock().Transactions[0].TxIn[0].ValueIn
}
for _, stx := range block.STransactions() {
if isSSGen, _ := stake.IsSSGen(stx); isSSGen {
subsidy += stx.MsgTx().TxIn[0].ValueIn
}
}
return subsidy
}
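The vote-bit gating used above (the parent's regular-tree subsidy only counts when the child's header validates it, while vote subsidies in the child always count) can be sketched with plain numbers; isFlagSet16 mirrors the dcrutil.IsFlagSet16 semantics, and the bit value and amounts are illustrative assumptions:

```go
package main

import "fmt"

const blockValid uint16 = 1 // assumed low-order vote bit, as with dcrutil.BlockValid

// isFlagSet16 reports whether every bit of flag is set in bits,
// mirroring the dcrutil.IsFlagSet16 check used in the diff.
func isFlagSet16(bits, flag uint16) bool { return bits&flag == flag }

// addedSubsidy sketches CalculateAddedSubsidy: the parent coinbase's
// ValueIn counts only when the child's vote bits validate the parent's
// regular tx tree; SSGen (vote) input values in the child always count.
func addedSubsidy(voteBits uint16, parentCoinbaseIn int64, voteIns []int64) int64 {
	var subsidy int64
	if isFlagSet16(voteBits, blockValid) {
		subsidy += parentCoinbaseIn
	}
	for _, in := range voteIns {
		subsidy += in
	}
	return subsidy
}

func main() {
	fmt.Println(addedSubsidy(0x0001, 1000, []int64{10, 10})) // parent tree validated
	fmt.Println(addedSubsidy(0x0000, 1000, []int64{10, 10})) // parent tree invalidated
}
```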


@ -1,5 +1,5 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -14,6 +14,8 @@ import (
func TestBlockSubsidy(t *testing.T) {
mainnet := &chaincfg.MainNetParams
subsidyCache := blockchain.NewSubsidyCache(0, mainnet)
totalSubsidy := mainnet.BlockOneSubsidy()
for i := int64(0); ; i++ {
// Genesis block or first block.
@ -30,12 +32,12 @@ func TestBlockSubsidy(t *testing.T) {
}
height := i - numBlocks
work := blockchain.CalcBlockWorkSubsidy(height,
work := blockchain.CalcBlockWorkSubsidy(subsidyCache, height,
mainnet.TicketsPerBlock, mainnet)
stake := blockchain.CalcStakeVoteSubsidy(subsidyCache, height,
mainnet) * int64(mainnet.TicketsPerBlock)
tax := blockchain.CalcBlockTaxSubsidy(subsidyCache, height,
mainnet.TicketsPerBlock, mainnet)
stake := blockchain.CalcStakeVoteSubsidy(height, mainnet) *
int64(mainnet.TicketsPerBlock)
tax := blockchain.CalcBlockTaxSubsidy(height, mainnet.TicketsPerBlock,
mainnet)
if (work + stake + tax) == 0 {
break
}

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -12,6 +12,7 @@ import (
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg/chainhash"
database "github.com/decred/dcrd/database2"
"github.com/decred/dcrutil"
)
@ -123,9 +124,8 @@ func (b *BlockChain) GenerateMissedTickets(tixStore TicketStore) (stake.SStxMemM
// from the ticket pool that have been considered spent or missed in this block
// according to the block header. Then, it connects all the newly mature tickets
// to the passed map.
func (b *BlockChain) connectTickets(tixStore TicketStore,
node *blockNode,
block *dcrutil.Block) error {
func (b *BlockChain) connectTickets(tixStore TicketStore, node *blockNode,
block *dcrutil.Block, view *UtxoViewpoint) error {
if tixStore == nil {
return fmt.Errorf("nil ticket store")
}
@ -136,7 +136,7 @@ func (b *BlockChain) connectTickets(tixStore TicketStore,
return nil
}
parentBlock, err := b.GetBlockFromHash(node.parentHash)
parentBlock, err := b.getBlockFromHash(&node.header.PrevBlock)
if err != nil {
return err
}
@ -148,13 +148,6 @@ func (b *BlockChain) connectTickets(tixStore TicketStore,
// Skip a number of validation steps before we require chain
// voting.
if node.height >= b.chainParams.StakeValidationHeight {
regularTxTreeValid := dcrutil.IsFlagSet16(node.header.VoteBits,
dcrutil.BlockValid)
thisNodeStakeViewpoint := ViewpointPrevInvalidStake
if regularTxTreeValid {
thisNodeStakeViewpoint = ViewpointPrevValidStake
}
// We need the missed tickets bucket from the original perspective of
// the node.
missedTickets, err := b.GenerateMissedTickets(tixStore)
@ -164,10 +157,20 @@ func (b *BlockChain) connectTickets(tixStore TicketStore,
// TxStore at blockchain HEAD + TxTreeRegular of prevBlock (if
// validated) for this node.
txInputStoreStake, err := b.fetchInputTransactions(node, block,
thisNodeStakeViewpoint)
parent, err := b.getBlockFromHash(&node.header.PrevBlock)
if err != nil {
errStr := fmt.Sprintf("fetchInputTransactions failed for incoming "+
return err
}
regularTxTreeValid := dcrutil.IsFlagSet16(node.header.VoteBits,
dcrutil.BlockValid)
thisNodeStakeViewpoint := ViewpointPrevInvalidStake
if regularTxTreeValid {
thisNodeStakeViewpoint = ViewpointPrevValidStake
}
view.SetStakeViewpoint(thisNodeStakeViewpoint)
err = view.fetchInputUtxos(b.db, block, parent)
if err != nil {
errStr := fmt.Sprintf("fetchInputUtxos failed for incoming "+
"node %v; error given: %v", node.hash, err)
return errors.New(errStr)
}
@ -184,17 +187,16 @@ func (b *BlockChain) connectTickets(tixStore TicketStore,
sstxIn := msgTx.TxIn[1] // sstx input
sstxHash := sstxIn.PreviousOutPoint.Hash
originTx, exists := txInputStoreStake[sstxHash]
if !exists {
originUTXO := view.LookupEntry(&sstxHash)
if originUTXO == nil {
str := fmt.Sprintf("unable to find input transaction "+
"%v for transaction %v", sstxHash, staketx.Sha())
return ruleError(ErrMissingTx, str)
}
sstxHeight := originTx.BlockHeight
// Check maturity of ticket; we can only spend the ticket after it
// hits maturity at height + tM + 1.
sstxHeight := originUTXO.BlockHeight()
if (height - sstxHeight) < (tM + 1) {
blockSha := block.Sha()
errStr := fmt.Sprintf("Error: A ticket spend as an SSGen in "+
@ -472,10 +474,8 @@ func (b *BlockChain) connectTickets(tixStore TicketStore,
// This function should only ever have to disconnect transactions from the main
// chain, so most of the calls are directly to the tmdb which contains all this
// data in an organized bucket.
func (b *BlockChain) disconnectTickets(tixStore TicketStore,
node *blockNode,
func (b *BlockChain) disconnectTickets(tixStore TicketStore, node *blockNode,
block *dcrutil.Block) error {
tM := int64(b.chainParams.TicketMaturity)
height := node.height
@ -581,8 +581,8 @@ func (b *BlockChain) fetchTicketStore(node *blockNode) (TicketStore, error) {
// If we haven't selected a best chain yet or we are extending the main
// (best) chain with a new block, just use the ticket database we already
// have.
if b.bestChain == nil || (prevNode != nil &&
prevNode.hash.IsEqual(b.bestChain.hash)) {
if b.bestNode == nil || (prevNode != nil &&
prevNode.hash.IsEqual(b.bestNode.hash)) {
return nil, nil
}
@ -596,18 +596,41 @@ func (b *BlockChain) fetchTicketStore(node *blockNode) (TicketStore, error) {
// transactions and spend information for the blocks which would be
// disconnected during a reorganize to the point of view of the
// node just before the requested node.
detachNodes, attachNodes, err := b.getReorganizeNodes(prevNode)
detachNodes, attachNodes := b.getReorganizeNodes(node)
view := NewUtxoViewpoint()
view.SetBestHash(b.bestNode.hash)
view.SetStakeViewpoint(ViewpointPrevValidInitial)
for e := detachNodes.Front(); e != nil; e = e.Next() {
n := e.Value.(*blockNode)
block, err := b.db.FetchBlockBySha(n.hash)
block, err := b.getBlockFromHash(n.hash)
if err != nil {
return nil, err
}
parent, err := b.getBlockFromHash(&n.header.PrevBlock)
if err != nil {
return nil, err
}
// Load all of the spent txos for the block from the spend
// journal.
var stxos []spentTxOut
err = b.db.View(func(dbTx database.Tx) error {
stxos, err = dbFetchSpendJournalEntry(dbTx, block, parent, view)
return err
})
if err != nil {
return nil, err
}
err = b.disconnectTransactions(view, block, parent, stxos)
if err != nil {
return nil, err
}
err = b.disconnectTickets(tixStore, n, block)
if err != nil {
return nil, err
@ -636,10 +659,23 @@ func (b *BlockChain) fetchTicketStore(node *blockNode) (TicketStore, error) {
}
// The number of blocks below this block but above the root of the fork
err = b.connectTickets(tixStore, n, block)
err = b.connectTickets(tixStore, n, block, view)
if err != nil {
return nil, err
}
parent, err := b.getBlockFromHash(&n.header.PrevBlock)
if err != nil {
return nil, err
}
var stxos []spentTxOut
err = b.connectTransactions(view, block, parent, &stxos)
if err != nil {
return nil, err
}
view.SetBestHash(node.hash)
}
return tixStore, nil
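The detach/attach loop above uses the new rolling UtxoViewpoint: each disconnect restores outputs from the spend journal entry fetched via dbFetchSpendJournalEntry, and each connect spends them again while recording undo data. A simplified sketch of that pattern, with hypothetical stand-ins for the real UtxoViewpoint and spentTxOut types:

```go
package main

import "fmt"

// outpoint identifies a transaction output (hypothetical simplified form).
type outpoint struct {
	txid  string
	index uint32
}

// utxoView is a rolling map of unspent outputs to their amounts.
type utxoView map[outpoint]int64

// spentTxOut is an undo record, loosely mirroring the spend journal entries.
type spentTxOut struct {
	op     outpoint
	amount int64
}

// connect spends the given inputs, appending undo records to stxos.
func (v utxoView) connect(inputs []outpoint, stxos *[]spentTxOut) {
	for _, op := range inputs {
		if amt, ok := v[op]; ok {
			*stxos = append(*stxos, spentTxOut{op, amt})
			delete(v, op)
		}
	}
}

// disconnect undoes a connect by replaying the journal in reverse.
func (v utxoView) disconnect(stxos []spentTxOut) {
	for i := len(stxos) - 1; i >= 0; i-- {
		v[stxos[i].op] = stxos[i].amount
	}
}

func main() {
	view := utxoView{{"aa", 0}: 50, {"bb", 1}: 25}
	var stxos []spentTxOut
	view.connect([]outpoint{{"aa", 0}}, &stxos)
	fmt.Println(len(view)) // the spent output is gone from the view
	view.disconnect(stxos)
	fmt.Println(view[outpoint{"aa", 0}]) // restored from the journal
}
```

Because connects and disconnects share one view, a reorganize can roll the viewpoint backward and forward without refetching the whole utxo state, which is the efficiency gain the commit message describes.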


@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,756 +0,0 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
// There are five potential viewpoints we need to worry about.
// ViewpointPrevValidInitial is the viewpoint from the perspective of
// everything up to the previous block's TxTreeRegular, used to validate
// that tx tree regular.
const ViewpointPrevValidInitial = int8(0)
// ViewpointPrevValidStake is the viewpoint from the perspective of
// everything up to the previous block's TxTreeRegular plus the
// contents of the TxTreeRegular, to validate TxTreeStake.
const ViewpointPrevValidStake = int8(1)
// ViewpointPrevInvalidStake is the viewpoint from the perspective of
// everything up to the previous block's TxTreeRegular but without the
// contents of the TxTreeRegular, to validate TxTreeStake.
const ViewpointPrevInvalidStake = int8(2)
// ViewpointPrevValidRegular is the viewpoint from the perspective of
// everything up to the previous block's TxTreeRegular plus the
// contents of the TxTreeRegular and TxTreeStake of the current block,
// to validate TxTreeRegular of the current block.
const ViewpointPrevValidRegular = int8(3)
// ViewpointPrevInvalidRegular is the viewpoint from the perspective of
// everything up to the previous block's TxTreeRegular minus the
// contents of the TxTreeRegular and TxTreeStake of the current block,
// to validate TxTreeRegular of the current block.
const ViewpointPrevInvalidRegular = int8(4)
// TxData contains contextual information about transactions such as which block
// they were found in and whether or not the outputs are spent.
type TxData struct {
Tx *dcrutil.Tx
Hash *chainhash.Hash
BlockHeight int64
BlockIndex uint32
Spent []bool
Err error
}
// TxStore is used to store transactions needed by other transactions for things
// such as script validation and double spend prevention. This also allows the
// transaction data to be treated as a view since it can contain the information
// from the point-of-view of different points in the chain.
type TxStore map[chainhash.Hash]*TxData
// connectTxTree lets you connect an arbitrary TxTree to a txStore to push it
// forward in history.
// TxTree true == TxTreeRegular
// TxTree false == TxTreeStake
func connectTxTree(txStore TxStore,
block *dcrutil.Block,
txTree bool) {
var transactions []*dcrutil.Tx
if txTree {
transactions = block.Transactions()
} else {
transactions = block.STransactions()
}
// Loop through all of the transactions in the block to see if any of
// them are ones we need to update and spend based on the results map.
for i, tx := range transactions {
// Update the transaction store with the transaction information
// if it's one of the requested transactions.
msgTx := tx.MsgTx()
if txD, exists := txStore[*tx.Sha()]; exists {
txD.Tx = tx
txD.BlockHeight = block.Height()
txD.BlockIndex = uint32(i)
txD.Spent = make([]bool, len(msgTx.TxOut))
txD.Err = nil
}
// Spend the origin transaction output.
for _, txIn := range msgTx.TxIn {
originHash := &txIn.PreviousOutPoint.Hash
originIndex := txIn.PreviousOutPoint.Index
if originTx, exists := txStore[*originHash]; exists {
if originTx.Spent == nil {
continue
}
if originIndex >= uint32(len(originTx.Spent)) {
continue
}
originTx.Spent[originIndex] = true
}
}
}
return
}
func connectTransactions(txStore TxStore, block *dcrutil.Block, parent *dcrutil.Block) error {
// There is no regular tx from before the genesis block, so ignore the genesis
// block for the next step.
if parent != nil && block.Height() != 0 {
mBlock := block.MsgBlock()
votebits := mBlock.Header.VoteBits
regularTxTreeValid := dcrutil.IsFlagSet16(votebits, dcrutil.BlockValid)
// Only add the transactions in the event that the parent block's regular
// tx were validated.
if regularTxTreeValid {
// Loop through all of the regular transactions in the block to see if
// any of them are ones we need to update and spend based on the
// results map.
for i, tx := range parent.Transactions() {
// Update the transaction store with the transaction information
// if it's one of the requested transactions.
msgTx := tx.MsgTx()
if txD, exists := txStore[*tx.Sha()]; exists {
txD.Tx = tx
txD.BlockHeight = block.Height() - 1
txD.BlockIndex = uint32(i)
txD.Spent = make([]bool, len(msgTx.TxOut))
txD.Err = nil
}
// Spend the origin transaction output.
for _, txIn := range msgTx.TxIn {
originHash := &txIn.PreviousOutPoint.Hash
originIndex := txIn.PreviousOutPoint.Index
if originTx, exists := txStore[*originHash]; exists {
if originTx.Spent == nil {
continue
}
if originIndex >= uint32(len(originTx.Spent)) {
continue
}
originTx.Spent[originIndex] = true
}
}
}
}
}
// Loop through all of the stake transactions in the block to see if any of
// them are ones we need to update and spend based on the results map.
for i, tx := range block.STransactions() {
// Update the transaction store with the transaction information
// if it's one of the requested transactions.
msgTx := tx.MsgTx()
if txD, exists := txStore[*tx.Sha()]; exists {
txD.Tx = tx
txD.BlockHeight = block.Height()
txD.BlockIndex = uint32(i)
txD.Spent = make([]bool, len(msgTx.TxOut))
txD.Err = nil
}
// Spend the origin transaction output.
for _, txIn := range msgTx.TxIn {
originHash := &txIn.PreviousOutPoint.Hash
originIndex := txIn.PreviousOutPoint.Index
if originTx, exists := txStore[*originHash]; exists {
if originTx.Spent == nil {
continue
}
if originIndex >= uint32(len(originTx.Spent)) {
continue
}
originTx.Spent[originIndex] = true
}
}
}
return nil
}
// disconnectTransactions updates the passed map by undoing transaction and
// spend information for all transactions in the passed block. Only
// transactions in the passed map are updated.
func disconnectTransactions(txStore TxStore, block *dcrutil.Block, parent *dcrutil.Block) error {
// Loop through all of the stake transactions in the block to see if any of
// them are ones that need to be undone based on the transaction store.
for _, tx := range block.STransactions() {
// Clear this transaction from the transaction store if needed.
// Only clear it rather than deleting it because the transaction
// connect code relies on its presence to decide whether or not
// to update the store and any transactions which exist on both
// sides of a fork would otherwise not be updated.
if txD, exists := txStore[*tx.Sha()]; exists {
txD.Tx = nil
txD.BlockHeight = int64(wire.NullBlockHeight)
txD.BlockIndex = wire.NullBlockIndex
txD.Spent = nil
txD.Err = database.ErrTxShaMissing
}
// Unspend the origin transaction output.
for _, txIn := range tx.MsgTx().TxIn {
originHash := &txIn.PreviousOutPoint.Hash
originIndex := txIn.PreviousOutPoint.Index
originTx, exists := txStore[*originHash]
if exists && originTx.Tx != nil && originTx.Err == nil {
if originTx.Spent == nil {
continue
}
if originIndex >= uint32(len(originTx.Spent)) {
continue
}
originTx.Spent[originIndex] = false
}
}
}
// There is no regular tx from before the genesis block, so ignore the genesis
// block for the next step.
if parent != nil && block.Height() != 0 {
mBlock := block.MsgBlock()
votebits := mBlock.Header.VoteBits
regularTxTreeValid := dcrutil.IsFlagSet16(votebits, dcrutil.BlockValid)
// Only bother to unspend transactions if the parent's tx tree was
// validated. Otherwise, these transactions were never in the blockchain's
// history in the first place.
if regularTxTreeValid {
// Loop through all of the regular transactions in the block to see if
// any of them are ones that need to be undone based on the
// transaction store.
for _, tx := range parent.Transactions() {
// Clear this transaction from the transaction store if needed.
// Only clear it rather than deleting it because the transaction
// connect code relies on its presence to decide whether or not
// to update the store and any transactions which exist on both
// sides of a fork would otherwise not be updated.
if txD, exists := txStore[*tx.Sha()]; exists {
txD.Tx = nil
txD.BlockHeight = int64(wire.NullBlockHeight)
txD.BlockIndex = wire.NullBlockIndex
txD.Spent = nil
txD.Err = database.ErrTxShaMissing
}
// Unspend the origin transaction output.
for _, txIn := range tx.MsgTx().TxIn {
originHash := &txIn.PreviousOutPoint.Hash
originIndex := txIn.PreviousOutPoint.Index
originTx, exists := txStore[*originHash]
if exists && originTx.Tx != nil && originTx.Err == nil {
if originTx.Spent == nil {
continue
}
if originIndex >= uint32(len(originTx.Spent)) {
continue
}
originTx.Spent[originIndex] = false
}
}
}
}
}
return nil
}
// fetchTxStoreMain fetches transaction data about the provided set of
// transactions from the point of view of the end of the main chain. It takes
// a flag which specifies whether or not fully spent transactions should be
// included in the results.
func fetchTxStoreMain(db database.Db, txSet map[chainhash.Hash]struct{}, includeSpent bool) TxStore {
// Just return an empty store now if there are no requested hashes.
txStore := make(TxStore)
if len(txSet) == 0 {
return txStore
}
// The transaction store map needs to have an entry for every requested
// transaction. By default, all the transactions are marked as missing.
// Each entry will be filled in with the appropriate data below.
txList := make([]*chainhash.Hash, 0, len(txSet))
for hash := range txSet {
hashCopy := hash
txStore[hash] = &TxData{Hash: &hashCopy, Err: database.ErrTxShaMissing}
txList = append(txList, &hashCopy)
}
// Ask the database (main chain) for the list of transactions. This
// will return the information from the point of view of the end of the
// main chain. Choose whether or not to include fully spent
// transactions depending on the passed flag.
var txReplyList []*database.TxListReply
if includeSpent {
txReplyList = db.FetchTxByShaList(txList)
} else {
txReplyList = db.FetchUnSpentTxByShaList(txList)
}
for _, txReply := range txReplyList {
// Lookup the existing results entry to modify. Skip
// this reply if there is no corresponding entry in
// the transaction store map, which really should not happen, but
// be safe.
txD, ok := txStore[*txReply.Sha]
if !ok {
continue
}
// Fill in the transaction details. A copy is used here since
// there is no guarantee the returned data isn't cached and
// this code modifies the data. A bug caused by modifying the
// cached data would likely be difficult to track down and could
// cause subtle errors, so avoid the potential altogether.
txD.Err = txReply.Err
if txReply.Err == nil {
txD.Tx = dcrutil.NewTx(txReply.Tx)
txD.BlockHeight = txReply.Height
txD.BlockIndex = txReply.Index
txD.Spent = make([]bool, len(txReply.TxSpent))
copy(txD.Spent, txReply.TxSpent)
}
}
return txStore
}
// handleTxStoreViewpoint connects the additional transaction trees needed to
// update the passed transaction store to a different viewpoint.
func handleTxStoreViewpoint(block *dcrutil.Block, parentBlock *dcrutil.Block,
txStore TxStore, viewpoint int8) error {
// We don't need to do anything for the current top block viewpoint.
if viewpoint == ViewpointPrevValidInitial {
return nil
}
// ViewpointPrevValidStake: Append the prev block TxTreeRegular
// txs to fill out TxIns.
if viewpoint == ViewpointPrevValidStake {
connectTxTree(txStore, parentBlock, true)
return nil
}
// ViewpointPrevInvalidStake: Do not append the prev block
// TxTreeRegular txs, since they don't exist.
if viewpoint == ViewpointPrevInvalidStake {
return nil
}
// ViewpointPrevValidRegular: Append the prev block TxTreeRegular
// txs to fill in TxIns, then append the cur block TxTreeStake
// txs to fill in TxIns. TxTreeRegular from current block will
// never be allowed to spend from the stake tree of the current
// block anyway because of the consensus rules regarding output
// maturity, but do it anyway.
if viewpoint == ViewpointPrevValidRegular {
connectTxTree(txStore, parentBlock, true)
connectTxTree(txStore, block, false)
return nil
}
// ViewpointPrevInvalidRegular: Append the cur block TxTreeStake
// txs to fill in TxIns. TxTreeRegular from current block will
// never be allowed to spend from the stake tree of the current
// block anyway because of the consensus rules regarding output
// maturity, but do it anyway.
if viewpoint == ViewpointPrevInvalidRegular {
connectTxTree(txStore, block, false)
return nil
}
return fmt.Errorf("invalid viewpoint '0x%x' given to "+
"handleTxStoreViewpoint", viewpoint)
}
// fetchTxStore fetches transaction data about the provided set of transactions
// from the point of view of the given node. For example, a given node might
// be down a side chain where a transaction hasn't been spent from its point of
// view even though it might have been spent in the main chain (or another side
// chain). Another scenario is where a transaction exists from the point of
// view of the main chain, but doesn't exist in a side chain that branches
// before the block that contains the transaction on the main chain.
func (b *BlockChain) fetchTxStore(node *blockNode, block *dcrutil.Block,
txSet map[chainhash.Hash]struct{}, viewpoint int8) (TxStore, error) {
// Get the previous block node. This function is used over simply
// accessing node.parent directly as it will dynamically create previous
// block nodes as needed. This helps allow only the pieces of the chain
// that are needed to remain in memory.
prevNode, err := b.getPrevNodeFromNode(node)
if err != nil {
return nil, err
}
// We don't care if the previous node doesn't exist because this
// block is the genesis block.
if prevNode == nil {
return nil, nil
}
// Get the previous block, too.
prevBlock, err := b.getBlockFromHash(prevNode.hash)
if err != nil {
return nil, err
}
// If we haven't selected a best chain yet or we are extending the main
// (best) chain with a new block, fetch the requested set from the point
// of view of the end of the main (best) chain without including fully
// spent transactions in the results. This is a little more efficient
// since it means fewer transaction lookups are needed.
if b.bestChain == nil || (prevNode != nil &&
prevNode.hash.IsEqual(b.bestChain.hash)) {
txStore := fetchTxStoreMain(b.db, txSet, false)
err := handleTxStoreViewpoint(block, prevBlock, txStore, viewpoint)
if err != nil {
return nil, err
}
return txStore, nil
}
// Fetch the requested set from the point of view of the end of the
// main (best) chain including fully spent transactions. The fully
// spent transactions are needed because the following code unspends
// them to get the correct point of view.
txStore := fetchTxStoreMain(b.db, txSet, true)
// The requested node is either on a side chain or is a node on the main
// chain before the end of it. In either case, we need to undo the
// transactions and spend information for the blocks which would be
// disconnected during a reorganize to the point of view of the
// node just before the requested node.
detachNodes, attachNodes, err := b.getReorganizeNodes(prevNode)
if err != nil {
return nil, err
}
for e := detachNodes.Front(); e != nil; e = e.Next() {
n := e.Value.(*blockNode)
blockDisconnect, err := b.db.FetchBlockBySha(n.hash)
if err != nil {
return nil, err
}
// Load the parent block from either the database or the sidechain.
parentHash := &blockDisconnect.MsgBlock().Header.PrevBlock
parentBlock, errFetchBlock := b.getBlockFromHash(parentHash)
if errFetchBlock != nil {
return nil, errFetchBlock
}
err = disconnectTransactions(txStore, blockDisconnect, parentBlock)
if err != nil {
return nil, err
}
}
// The transaction store is now accurate to either the node where the
// requested node forks off the main chain (in the case where the
// requested node is on a side chain), or the requested node itself if
// the requested node is an old node on the main chain. Entries in the
// attachNodes list indicate the requested node is on a side chain, so
// if there are no nodes to attach, we're done.
if attachNodes.Len() == 0 {
err = handleTxStoreViewpoint(block, prevBlock, txStore, viewpoint)
if err != nil {
return nil, err
}
return txStore, nil
}
// The requested node is on a side chain, so we need to apply the
// transactions and spend information from each of the nodes to attach.
for e := attachNodes.Front(); e != nil; e = e.Next() {
n := e.Value.(*blockNode)
blockConnect, exists := b.blockCache[*n.hash]
if !exists {
return nil, fmt.Errorf("unable to find block %v in "+
"side chain cache for transaction search",
n.hash)
}
// Load the parent block from either the database or the sidechain.
parentHash := &blockConnect.MsgBlock().Header.PrevBlock
parentBlock, errFetchBlock := b.getBlockFromHash(parentHash)
if errFetchBlock != nil {
return nil, errFetchBlock
}
err = connectTransactions(txStore, blockConnect, parentBlock)
if err != nil {
return nil, err
}
}
err = handleTxStoreViewpoint(block, prevBlock, txStore, viewpoint)
if err != nil {
return nil, err
}
return txStore, nil
}
// fetchInputTransactions fetches the input transactions referenced by the
// transactions in the given block from its point of view. See fetchTxStore
// for more details on what the point of view entails.
// Decred: This function is for verifying the validity of the regular tx tree in
// this block for the case that it does get accepted in the next block.
func (b *BlockChain) fetchInputTransactions(node *blockNode, block *dcrutil.Block, viewpoint int8) (TxStore, error) {
// Verify we have the same node as we do block.
blockHash := block.Sha()
if !node.hash.IsEqual(blockHash) {
return nil, fmt.Errorf("node and block hash are different")
}
// If we need the previous block, grab it.
var parentBlock *dcrutil.Block
if viewpoint == ViewpointPrevValidInitial ||
viewpoint == ViewpointPrevValidStake ||
viewpoint == ViewpointPrevValidRegular {
var errFetchBlock error
parentBlock, errFetchBlock = b.getBlockFromHash(node.parentHash)
if errFetchBlock != nil {
return nil, errFetchBlock
}
}
txInFlight := map[chainhash.Hash]int{}
txNeededSet := make(map[chainhash.Hash]struct{})
txStore := make(TxStore)
// Case 1: ViewpointPrevValidInitial. We need the viewpoint of the
// current chain without the TxTreeRegular of the previous block
// added so we can validate that.
if viewpoint == ViewpointPrevValidInitial {
// Build a map of in-flight transactions because some of the inputs in
// this block could be referencing other transactions earlier in this
// block which are not yet in the chain.
transactions := parentBlock.Transactions()
for i, tx := range transactions {
txInFlight[*tx.Sha()] = i
}
// Loop through all of the transaction inputs (except for the coinbase
// which has no inputs) collecting them into sets of what is needed and
// what is already known (in-flight).
for i, tx := range transactions[1:] {
for _, txIn := range tx.MsgTx().TxIn {
// Add an entry to the transaction store for the needed
// transaction with it set to missing by default.
originHash := &txIn.PreviousOutPoint.Hash
txD := &TxData{Hash: originHash, Err: database.ErrTxShaMissing}
txStore[*originHash] = txD
// It is acceptable for a transaction input to reference
// the output of another transaction in this block only
// if the referenced transaction comes before the
// current one in this block. Update the transaction
// store accordingly when this is the case. Otherwise,
// we still need the transaction.
//
// NOTE: The >= is correct here because i is one less
// than the actual position of the transaction within
// the block due to skipping the coinbase.
if inFlightIndex, ok := txInFlight[*originHash]; ok &&
i >= inFlightIndex {
originTx := transactions[inFlightIndex]
txD.Tx = originTx
txD.BlockHeight = node.height - 1
txD.BlockIndex = uint32(inFlightIndex)
txD.Spent = make([]bool, len(originTx.MsgTx().TxOut))
txD.Err = nil
} else {
txNeededSet[*originHash] = struct{}{}
}
}
}
// Request the input transactions from the point of view of the node.
txNeededStore, err := b.fetchTxStore(node, block, txNeededSet, viewpoint)
if err != nil {
return nil, err
}
// Merge the results of the requested transactions and the in-flight
// transactions.
for _, txD := range txNeededStore {
txStore[*txD.Hash] = txD
}
return txStore, nil
}
// Case 2+3: ViewpointPrevValidStake and ViewpointPrevInvalidStake.
// For ViewpointPrevValidStake, we need the viewpoint of the
// current chain with the TxTreeRegular of the previous block
// added so we can validate the TxTreeStake of the current block.
// For ViewpointPrevInvalidStake, we need the viewpoint of the
// current chain with the TxTreeRegular of the previous block
// missing so we can validate the TxTreeStake of the current block.
if viewpoint == ViewpointPrevValidStake ||
viewpoint == ViewpointPrevInvalidStake {
// We need all of the stake tx txins. None of these are considered
// in-flight in relation to the regular tx tree or to other tx in
// the stake tx tree, so skip those expensive checks and just add
// them to the needed set.
stransactions := block.STransactions()
for _, tx := range stransactions {
isSSGen, _ := stake.IsSSGen(tx)
for i, txIn := range tx.MsgTx().TxIn {
// Ignore stakebases.
if isSSGen && i == 0 {
continue
}
// Add an entry to the transaction store for the needed
// transaction with it set to missing by default.
originHash := &txIn.PreviousOutPoint.Hash
txD := &TxData{Hash: originHash, Err: database.ErrTxShaMissing}
txStore[*originHash] = txD
txNeededSet[*originHash] = struct{}{}
}
}
// Request the input transactions from the point of view of the node.
txNeededStore, err := b.fetchTxStore(node, block, txNeededSet, viewpoint)
if err != nil {
return nil, err
}
return txNeededStore, nil
}
// Case 4+5: ViewpointPrevValidRegular and
// ViewpointPrevInvalidRegular.
// For ViewpointPrevValidRegular, we need the viewpoint of the
// current chain with the TxTreeRegular of the previous block
// and the TxTreeStake of the current block added so we can
// validate the TxTreeRegular of the current block.
// For ViewpointPrevInvalidRegular, we need the viewpoint of the
// current chain with the TxTreeRegular of the previous block
// missing and the TxTreeStake of the current block added so we
// can validate the TxTreeRegular of the current block.
if viewpoint == ViewpointPrevValidRegular ||
viewpoint == ViewpointPrevInvalidRegular {
transactions := block.Transactions()
for i, tx := range transactions {
txInFlight[*tx.Sha()] = i
}
// Loop through all of the transaction inputs (except for the coinbase
// which has no inputs) collecting them into sets of what is needed and
// what is already known (in-flight).
txNeededSet := make(map[chainhash.Hash]struct{})
txStore = make(TxStore)
for i, tx := range transactions[1:] {
for _, txIn := range tx.MsgTx().TxIn {
// Add an entry to the transaction store for the needed
// transaction with it set to missing by default.
originHash := &txIn.PreviousOutPoint.Hash
txD := &TxData{Hash: originHash, Err: database.ErrTxShaMissing}
txStore[*originHash] = txD
// It is acceptable for a transaction input to reference
// the output of another transaction in this block only
// if the referenced transaction comes before the
// current one in this block. Update the transaction
// store accordingly when this is the case. Otherwise,
// we still need the transaction.
//
// NOTE: The >= is correct here because i is one less
// than the actual position of the transaction within
// the block due to skipping the coinbase.
if inFlightIndex, ok := txInFlight[*originHash]; ok &&
i >= inFlightIndex {
originTx := transactions[inFlightIndex]
txD.Tx = originTx
txD.BlockHeight = node.height
txD.BlockIndex = uint32(inFlightIndex)
txD.Spent = make([]bool, len(originTx.MsgTx().TxOut))
txD.Err = nil
} else {
txNeededSet[*originHash] = struct{}{}
}
}
}
// Request the input transactions from the point of view of the node.
txNeededStore, err := b.fetchTxStore(node, block, txNeededSet, viewpoint)
if err != nil {
return nil, err
}
// Merge the results of the requested transactions and the in-flight
// transactions.
for _, txD := range txNeededStore {
txStore[*txD.Hash] = txD
}
return txStore, nil
}
return nil, fmt.Errorf("invalid viewpoint passed to fetchInputTransactions")
}
// FetchTransactionStore fetches the input transactions referenced by the
// passed transaction from the point of view of the end of the main chain. It
// also attempts to fetch the transaction itself so the returned TxStore can be
// examined for duplicate transactions.
// IsValid indicates whether the current block at the tip of the chain has
// had its TxTreeRegular validated by the stake voters.
func (b *BlockChain) FetchTransactionStore(tx *dcrutil.Tx,
isValid bool, includeSpent bool) (TxStore, error) {
isSSGen, _ := stake.IsSSGen(tx)
// Create a set of needed transactions from the transactions referenced
// by the inputs of the passed transaction. Also, add the passed
// transaction itself as a way for the caller to detect duplicates.
txNeededSet := make(map[chainhash.Hash]struct{})
txNeededSet[*tx.Sha()] = struct{}{}
for i, txIn := range tx.MsgTx().TxIn {
// Skip all stakebase inputs.
if isSSGen && (i == 0) {
continue
}
txNeededSet[txIn.PreviousOutPoint.Hash] = struct{}{}
}
// Request the input transactions from the point of view of the end of
// the main chain with or without including fully spent transactions
// in the results.
txStore := fetchTxStoreMain(b.db, txNeededSet, includeSpent)
topBlock, err := b.getBlockFromHash(b.bestChain.hash)
if err != nil {
return nil, err
}
if isValid {
connectTxTree(txStore, topBlock, true)
}
return txStore, nil
}

blockchain/utxoviewpoint.go (new file, 1175 lines)

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@ -1,8 +1,7 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain_test
import (
@ -17,7 +16,6 @@ import (
"time"
"github.com/decred/dcrd/blockchain"
// "github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/txscript"
@ -66,13 +64,7 @@ func TestBlockValidationRules(t *testing.T) {
}
defer teardownFunc()
err = chain.GenerateInitialIndex()
if err != nil {
t.Errorf("GenerateInitialIndex: %v", err)
}
// The genesis block should fail to connect since it's already
// inserted.
// The genesis block should fail to connect since it's already inserted.
genesisBlock := simNetParams.GenesisBlock
err = chain.CheckConnectBlock(dcrutil.NewBlock(genesisBlock))
if err == nil {
@ -190,7 +182,7 @@ func TestBlockValidationRules(t *testing.T) {
_, _, err = chain.ProcessBlock(bl, timeSource, blockchain.BFNone)
if err != nil {
t.Errorf("ProcessBlock error: %v", err.Error())
t.Fatalf("ProcessBlock error at height %v: %v", i, err.Error())
}
}
@ -267,7 +259,8 @@ func TestBlockValidationRules(t *testing.T) {
_, _, err = chain.ProcessBlock(bl, timeSource, blockchain.BFNone)
if err != nil {
t.Errorf("ProcessBlock error: %v", err.Error())
t.Errorf("ProcessBlock error at height %v: %v", i,
err.Error())
}
}
@ -615,16 +608,16 @@ func TestBlockValidationRules(t *testing.T) {
b153test.SetHeight(int64(testsIdx1))
err = blockchain.CheckWorklessBlockSanity(b153test, timeSource, simNetParams)
if err != nil {
t.Errorf("got unexpected error for ErrInvalidRevocations sanity check: %v",
err)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrRevocationsMismatch {
t.Errorf("got unexpected no error or other error for "+
"ErrInvalidRevocations sanity check: %v", err)
}
// Fails and hits ErrInvalidRevocations.
err = chain.CheckConnectBlock(b153test)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrInvalidRevNum {
t.Errorf("Unexpected no or wrong error for ErrInvalidRevocations test: %v",
if err != nil {
t.Errorf("Unexpected error for ErrInvalidRevocations test: %v",
err)
}
@ -728,11 +721,11 @@ func TestBlockValidationRules(t *testing.T) {
b153test = dcrutil.NewBlock(badSSRtx153)
b153test.SetHeight(int64(testsIdx1))
err = blockchain.CheckWorklessBlockSanity(b153test, timeSource, simNetParams)
if err != nil {
t.Errorf("got unexpected error for ErrInvalidSSRtx sanity check: %v",
err)
}
// err = blockchain.CheckWorklessBlockSanity(b153test, timeSource, simNetParams)
// if err != nil {
// t.Errorf("got unexpected error for ErrInvalidSSRtx sanity check: %v",
// err)
// }
// Fails and hits ErrInvalidSSRtx.
err = chain.CheckConnectBlock(b153test)
@ -802,18 +795,17 @@ func TestBlockValidationRules(t *testing.T) {
b154test = dcrutil.NewBlock(badFreshStake154)
b154test.SetHeight(int64(testsIdx2))
// This passes.
err = blockchain.CheckWorklessBlockSanity(b154test, timeSource, simNetParams)
if err != nil {
t.Errorf("Unexpected error for ErrFreshStakeMismatch test: %v",
err.Error())
}
// Throws an error in stake consensus.
err = chain.CheckConnectBlock(b154test)
err = blockchain.CheckWorklessBlockSanity(b154test, timeSource, simNetParams)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrFreshStakeMismatch {
t.Errorf("Unexpected no error or wrong err for ErrFreshStakeMismatch "+
t.Errorf("Unexpected no or wrong error for ErrFreshStakeMismatch "+
"sanity check test: %v", err.Error())
}
err = chain.CheckConnectBlock(b154test)
if err != nil {
t.Errorf("Unexpected error for ErrFreshStakeMismatch "+
"test: %v", err.Error())
}
@ -827,22 +819,17 @@ func TestBlockValidationRules(t *testing.T) {
notEnoughVotes154 := new(wire.MsgBlock)
notEnoughVotes154.FromBytes(block154Bytes)
notEnoughVotes154.STransactions = notEnoughVotes154.STransactions[0:2]
notEnoughVotes154.Header.FreshStake = 0
recalculateMsgBlockMerkleRootsSize(notEnoughVotes154)
b154test = dcrutil.NewBlock(notEnoughVotes154)
b154test.SetHeight(int64(testsIdx2))
err = blockchain.CheckWorklessBlockSanity(b154test, timeSource, simNetParams)
if err != nil {
t.Errorf("Got unexpected block sanity err for "+
"not enough votes (err: %v)", err)
}
// Fails and hits ErrNotEnoughVotes.
err = chain.CheckConnectBlock(b154test)
err = blockchain.CheckWorklessBlockSanity(b154test, timeSource, simNetParams)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrNotEnoughVotes {
t.Errorf("Unexpected no or wrong error for not enough votes test: %v",
err)
t.Errorf("Got no or unexpected block sanity err for "+
"not enough votes (err: %v)", err)
}
// ----------------------------------------------------------------------------
@ -870,20 +857,12 @@ func TestBlockValidationRules(t *testing.T) {
b154test = dcrutil.NewBlock(tooManyVotes154)
b154test.SetHeight(int64(testsIdx2))
// Fails tax amount test.
// Fails and hits ErrTooManyVotes.
err = blockchain.CheckWorklessBlockSanity(b154test, timeSource, simNetParams)
if err == nil {
t.Errorf("got unexpected no error for ErrTooManyVotes sanity check")
}
// Fails and hits ErrTooManyVotes.
err = chain.CheckConnectBlock(b154test)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrTooManyVotes {
t.Errorf("Unexpected no or wrong error for too many votes test: %v",
err)
}
// ----------------------------------------------------------------------------
// ErrTicketUnavailable
nonChosenTicket154 := new(wire.MsgBlock)
@ -947,18 +926,12 @@ func TestBlockValidationRules(t *testing.T) {
b154test = dcrutil.NewBlock(votesMismatch154)
b154test.SetHeight(int64(testsIdx2))
err = blockchain.CheckWorklessBlockSanity(b154test, timeSource, simNetParams)
if err != nil {
t.Errorf("got unexpected error for ErrVotesMismatch sanity check: %v",
err)
}
// Fails and hits ErrVotesMismatch.
err = chain.CheckConnectBlock(b154test)
err = blockchain.CheckWorklessBlockSanity(b154test, timeSource, simNetParams)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrVotesMismatch {
t.Errorf("Unexpected no or wrong error for ErrVotesMismatch test: %v",
err)
t.Errorf("got unexpected no or wrong error for ErrVotesMismatch "+
"sanity check: %v", err)
}
// ----------------------------------------------------------------------------
@ -1466,15 +1439,16 @@ func TestBlockValidationRules(t *testing.T) {
b154test.SetHeight(int64(testsIdx2))
err = blockchain.CheckWorklessBlockSanity(b154test, timeSource, simNetParams)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrNoTax {
t.Errorf("Got no error or unexpected error for ErrNoTax "+
if err != nil {
t.Errorf("Got unexpected error for ErrNoTax "+
"test 1: %v", err)
}
err = chain.CheckConnectBlock(b154test)
if err != nil {
t.Errorf("Got unexpected error for ErrNoTax test 1: %v", err)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrNoTax {
t.Errorf("Got no error or unexpected error for ErrNoTax "+
"test 1: %v", err)
}
// ErrNoTax 2
@ -1488,17 +1462,17 @@ func TestBlockValidationRules(t *testing.T) {
b154test.SetHeight(int64(testsIdx2))
err = blockchain.CheckWorklessBlockSanity(b154test, timeSource, simNetParams)
if err != nil {
t.Errorf("Got unexpected error for ErrNoTax test 2: %v", err)
}
err = chain.CheckConnectBlock(b154test)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrNoTax {
t.Errorf("Got no error or unexpected error for ErrNoTax "+
"test 2: %v", err)
}
err = chain.CheckConnectBlock(b154test)
if err != nil {
t.Errorf("Got unexpected error for ErrNoTax test 2: %v", err)
}
// ErrNoTax 3
// Wrong amount paid
taxMissing154 = new(wire.MsgBlock)
@ -1510,17 +1484,17 @@ func TestBlockValidationRules(t *testing.T) {
b154test.SetHeight(int64(testsIdx2))
err = blockchain.CheckWorklessBlockSanity(b154test, timeSource, simNetParams)
if err != nil {
t.Errorf("Got unexpected error for ErrNoTax test 3: %v", err)
}
err = chain.CheckConnectBlock(b154test)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrNoTax {
t.Errorf("Got no error or unexpected error for ErrNoTax "+
"test 3: %v", err)
}
err = chain.CheckConnectBlock(b154test)
if err != nil {
t.Errorf("Got unexpected error for ErrNoTax test 3: %v", err)
}
// ----------------------------------------------------------------------------
// ErrExpiredTx
mtxFromB = new(wire.MsgTx)
@ -1529,7 +1503,7 @@ func TestBlockValidationRules(t *testing.T) {
expiredTx154 := new(wire.MsgBlock)
expiredTx154.FromBytes(block154Bytes)
expiredTx154.AddTransaction(mtxFromB)
expiredTx154.Transactions[11] = mtxFromB
recalculateMsgBlockMerkleRootsSize(expiredTx154)
b154test = dcrutil.NewBlock(expiredTx154)
b154test.SetHeight(int64(testsIdx2))
@ -1737,7 +1711,9 @@ func TestBlockValidationRules(t *testing.T) {
}
// ----------------------------------------------------------------------------
// ErrZeroValueOutputSpend
// ErrMissingTx (formerly ErrZeroValueOutputSpend). In the latest version of
// the database, zero value outputs are automatically pruned, so the output
// is simply missing.
mtxFromB = new(wire.MsgTx)
mtxFromB.FromBytes(regularTx154)
@ -1770,9 +1746,9 @@ func TestBlockValidationRules(t *testing.T) {
// Fails and hits ErrZeroValueOutputSpend.
err = chain.CheckConnectBlock(b154test)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrZeroValueOutputSpend {
blockchain.ErrMissingTx {
t.Errorf("Unexpected no or wrong error for "+
"ErrZeroValueOutputSpend test: %v", err)
"ErrMissingTx test: %v", err)
}
// ----------------------------------------------------------------------------
@ -1891,7 +1867,8 @@ func TestBlockValidationRules(t *testing.T) {
}
// ----------------------------------------------------------------------------
// Try to spend immature change from one SStx in another SStx.
// Try to spend immature change from one SStx in another SStx, hitting
// ErrImmatureSpend.
sstxSpend2Invalid166 := new(wire.MsgBlock)
sstxSpend2Invalid166.FromBytes(block166Bytes)
sstxToUse166 = sstxSpend2Invalid166.STransactions[6]
@ -1921,7 +1898,7 @@ func TestBlockValidationRules(t *testing.T) {
err = blockchain.CheckWorklessBlockSanity(b166test, timeSource, simNetParams)
if err != nil {
t.Errorf("got unexpected error for ErrMissingTx test 3 sanity "+
t.Errorf("got unexpected error for ErrImmatureSpend test sanity "+
"check: %v", err)
}
@ -1933,9 +1910,9 @@ func TestBlockValidationRules(t *testing.T) {
// This output doesn't become legal to spend until the next block.
err = chain.CheckConnectBlock(b166test)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrMissingTx {
blockchain.ErrImmatureSpend {
t.Errorf("Unexpected no or wrong error for "+
"ErrMissingTx test 3: %v", err)
"ErrImmatureSpend test: %v", err)
}
// ----------------------------------------------------------------------------
@ -1944,26 +1921,24 @@ func TestBlockValidationRules(t *testing.T) {
sstxSpend3Invalid166.FromBytes(block166Bytes)
sstxToUse166 = sstxSpend3Invalid166.STransactions[6]
sstxToUse166.AddTxIn(sstxSpend3Invalid166.STransactions[5].TxIn[0])
sstxToUse166.AddTxOut(sstxSpend3Invalid166.STransactions[5].TxOut[1])
sstxToUse166.AddTxOut(sstxSpend3Invalid166.STransactions[5].TxOut[2])
recalculateMsgBlockMerkleRootsSize(sstxSpend3Invalid166)
b166test = dcrutil.NewBlock(sstxSpend3Invalid166)
b166test.SetHeight(int64(testsIdx3))
err = blockchain.CheckWorklessBlockSanity(b166test, timeSource, simNetParams)
if err != nil {
t.Errorf("got unexpected error for ErrDoubleSpend test 1 sanity "+
t.Errorf("got unexpected error for double spend test 1 sanity "+
"check: %v", err)
}
// Fails and hits ErrDoubleSpend.
// Fails and hits ErrMissingTx.
err = chain.CheckConnectBlock(b166test)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrDoubleSpend {
blockchain.ErrMissingTx {
t.Errorf("Unexpected no or wrong error for "+
"ErrDoubleSpend test 1: %v", err)
"double spend test 1: %v", err)
}
// ----------------------------------------------------------------------------
@ -1980,16 +1955,16 @@ func TestBlockValidationRules(t *testing.T) {
err = blockchain.CheckWorklessBlockSanity(b166test, timeSource, simNetParams)
if err != nil {
t.Errorf("got unexpected error for ErrDoubleSpend test 2 sanity "+
t.Errorf("got unexpected error for double spend test 2 sanity "+
"check: %v", err)
}
// Fails and hits ErrDoubleSpend.
// Fails and hits ErrMissingTx.
err = chain.CheckConnectBlock(b166test)
if err == nil || err.(blockchain.RuleError).GetCode() !=
blockchain.ErrDoubleSpend {
blockchain.ErrMissingTx {
t.Errorf("Unexpected no or wrong error for "+
"ErrDoubleSpend test 2: %v", err)
"double spend test 2: %v", err)
}
}

File diff suppressed because it is too large.


@ -1,5 +1,5 @@
// Copyright (c) 2015 The Decred Developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred Developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,5 +1,5 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,5 +1,5 @@
// Copyright (c) 2015 The Decred Developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred Developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,5 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,5 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.


@ -1,4 +1,4 @@
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,10 +1,11 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package main
/*
import (
"fmt"
"sync"
@ -21,6 +22,8 @@ import (
"github.com/btcsuite/golangcrypto/ripemd160"
)
TODO Replace this with a new addrindexer
type indexState int
const (
@ -480,3 +483,4 @@ func (a *addrIndexer) RemoveBlock(block *dcrutil.Block,
return nil
}
*/

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -12,8 +12,8 @@ import (
"github.com/btcsuite/btclog"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
database "github.com/decred/dcrd/database2"
"github.com/decred/dcrd/limits"
)
@ -28,20 +28,19 @@ var (
)
// loadBlockDB opens the block database and returns a handle to it.
func loadBlockDB() (database.Db, error) {
func loadBlockDB() (database.DB, error) {
// The database name is based on the database type.
dbName := blockDbNamePrefix + "_" + cfg.DbType
if cfg.DbType == "sqlite" {
dbName = dbName + ".db"
}
dbPath := filepath.Join(cfg.DataDir, dbName)
log.Infof("Loading block database from '%s'", dbPath)
db, err := database.OpenDB(cfg.DbType, dbPath)
db, err := database.Open(cfg.DbType, dbPath, activeNetParams.Net)
if err != nil {
// Return the error if it's not because the database doesn't
// exist.
if err != database.ErrDbDoesNotExist {
if dbErr, ok := err.(database.Error); !ok || dbErr.ErrorCode !=
database.ErrDbDoesNotExist {
return nil, err
}
@ -50,20 +49,13 @@ func loadBlockDB() (database.Db, error) {
if err != nil {
return nil, err
}
db, err = database.CreateDB(cfg.DbType, dbPath)
db, err = database.Create(cfg.DbType, dbPath, activeNetParams.Net)
if err != nil {
return nil, err
}
}
// Get the latest block height from the database.
_, height, err := db.NewestSha()
if err != nil {
db.Close()
return nil, err
}
log.Infof("Block database loaded with block height %d", height)
log.Info("Block database loaded")
return db, nil
}
@ -102,7 +94,11 @@ func realMain() error {
// Create a block importer for the database and input file and start it.
// The done channel returned from start will contain an error if
// anything went wrong.
importer := newBlockImporter(db, fi)
importer, err := newBlockImporter(db, fi)
if err != nil {
log.Errorf("Failed to create block importer: %v", err)
return err
}
// Perform the import asynchronously. This allows blocks to be
// processed and read in parallel. The results channel returned from

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -12,14 +12,15 @@ import (
flags "github.com/btcsuite/go-flags"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
database "github.com/decred/dcrd/database2"
_ "github.com/decred/dcrd/database2/ffldb"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
const (
defaultDbType = "leveldb"
defaultDbType = "ffldb"
defaultDataFile = "bootstrap.dat"
defaultProgress = 10
)
@ -27,7 +28,7 @@ const (
var (
dcrdHomeDir = dcrutil.AppDataDir("dcrd", false)
defaultDataDir = filepath.Join(dcrdHomeDir, "data")
knownDbTypes = database.SupportedDBs()
knownDbTypes = database.SupportedDrivers()
activeNetParams = &chaincfg.MainNetParams
)

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -14,8 +14,7 @@ import (
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
database "github.com/decred/dcrd/database2"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
@ -32,7 +31,7 @@ type importResults struct {
// blockImporter houses information about an ongoing import from a block data
// file to the block database.
type blockImporter struct {
db database.Db
db database.DB
chain *blockchain.BlockChain
medianTime blockchain.MedianTimeSource
r io.ReadSeeker
@ -107,7 +106,7 @@ func (bi *blockImporter) processBlock(serializedBlock []byte) (bool, error) {
// Skip blocks that already exist.
blockSha := block.Sha()
exists, err := bi.db.ExistsSha(blockSha)
exists, err := bi.chain.HaveBlock(blockSha)
if err != nil {
return false, err
}
@ -118,7 +117,7 @@ func (bi *blockImporter) processBlock(serializedBlock []byte) (bool, error) {
// Don't bother trying to process orphans.
prevHash := &block.MsgBlock().Header.PrevBlock
if !prevHash.IsEqual(&zeroHash) {
exists, err := bi.db.ExistsSha(prevHash)
exists, err := bi.chain.HaveBlock(prevHash)
if err != nil {
return false, err
}
@ -297,7 +296,15 @@ func (bi *blockImporter) Import() chan *importResults {
// newBlockImporter returns a new importer for the provided file reader seeker
// and database.
func newBlockImporter(db database.Db, r io.ReadSeeker) *blockImporter {
func newBlockImporter(db database.DB, r io.ReadSeeker) (*blockImporter, error) {
chain, err := blockchain.New(&blockchain.Config{
DB: db,
ChainParams: activeNetParams,
})
if err != nil {
return nil, err
}
return &blockImporter{
db: db,
r: r,
@ -305,8 +312,8 @@ func newBlockImporter(db database.Db, r io.ReadSeeker) *blockImporter {
doneChan: make(chan bool),
errChan: make(chan error),
quit: make(chan struct{}),
chain: blockchain.New(db, nil, activeNetParams, nil, nil),
chain: chain,
medianTime: blockchain.NewMedianTime(),
lastLogTime: time.Now(),
}
}, nil
}
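The `newBlockImporter` change above reflects the commit's switch from positional parameters (`blockchain.New(db, nil, activeNetParams, nil, nil)`) to a `Config` struct plus an error return, so the chain can validate its dependencies and initialize best-chain state at creation time. A toy, self-contained sketch of that constructor shape follows; `Config`, `BlockChain`, and their fields are simplified stand-ins, not the real `blockchain` package API.

```go
package main

import (
	"errors"
	"fmt"
)

// Config bundles the dependencies the chain needs, in the spirit of
// blockchain.Config. Real fields are a database.DB and *chaincfg.Params;
// strings are used here only to keep the sketch runnable.
type Config struct {
	DB          string
	ChainParams string
}

// BlockChain is a toy stand-in for blockchain.BlockChain.
type BlockChain struct {
	cfg Config
}

// New validates the config up front and returns an error rather than a
// half-initialized chain, which is why callers now handle (chain, err).
func New(cfg *Config) (*BlockChain, error) {
	if cfg == nil || cfg.DB == "" {
		return nil, errors.New("blockchain.New: database is required")
	}
	if cfg.ChainParams == "" {
		return nil, errors.New("blockchain.New: chain parameters are required")
	}
	return &BlockChain{cfg: *cfg}, nil
}

func main() {
	if _, err := New(&Config{}); err != nil {
		fmt.Println("rejected:", err)
	}
	chain, err := New(&Config{DB: "ffldb", ChainParams: "mainnet"})
	fmt.Println(chain != nil, err == nil)
}
```

A config struct also lets optional dependencies (notification callbacks, signature caches) be omitted without a trail of `nil` arguments, and new fields can be added later without breaking existing callers.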

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,225 +0,0 @@
// Copyright (c) 2013 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package main
import (
"errors"
"fmt"
"os"
"path/filepath"
"strconv"
"github.com/btcsuite/btclog"
flags "github.com/btcsuite/go-flags"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
type config struct {
DataDir string `short:"b" long:"datadir" description:"Directory to store data"`
DbType string `long:"dbtype" description:"Database backend"`
TestNet bool `long:"testnet" description:"Use the test network"`
SimNet bool `long:"simnet" description:"Use the simulation test network"`
ShaString string `short:"s" description:"Block SHA to process" required:"true"`
}
var (
dcrdHomeDir = dcrutil.AppDataDir("dcrd", false)
defaultDataDir = filepath.Join(dcrdHomeDir, "data")
log btclog.Logger
activeNetParams = &chaincfg.MainNetParams
)
const (
argSha = iota
argHeight
)
// netName returns the name used when referring to a decred network. At the
// time of writing, dcrd currently places blocks for testnet version 0 in the
// data and log directory "testnet", which does not match the Name field of the
// chaincfg parameters. This function can be used to override this directory name
// as "testnet" when the passed active network matches wire.TestNet.
//
// A proper upgrade to move the data and log directories for this network to
// "testnet" is planned for the future, at which point this function can be
// removed and the network parameter's name used instead.
func netName(chainParams *chaincfg.Params) string {
switch chainParams.Net {
case wire.TestNet:
return "testnet"
default:
return chainParams.Name
}
}
func main() {
cfg := config{
DbType: "leveldb",
DataDir: defaultDataDir,
}
parser := flags.NewParser(&cfg, flags.Default)
_, err := parser.Parse()
if err != nil {
if e, ok := err.(*flags.Error); !ok || e.Type != flags.ErrHelp {
parser.WriteHelp(os.Stderr)
}
return
}
backendLogger := btclog.NewDefaultBackendLogger()
defer backendLogger.Flush()
log = btclog.NewSubsystemLogger(backendLogger, "")
database.UseLogger(log)
// Multiple networks can't be selected simultaneously.
funcName := "main"
numNets := 0
// Count number of network flags passed; assign active network params
// while we're at it
if cfg.TestNet {
numNets++
activeNetParams = &chaincfg.TestNetParams
}
if cfg.SimNet {
numNets++
activeNetParams = &chaincfg.SimNetParams
}
if numNets > 1 {
str := "%s: The testnet, regtest, and simnet params can't be " +
"used together -- choose one of the three"
err := fmt.Errorf(str, funcName)
fmt.Fprintln(os.Stderr, err)
parser.WriteHelp(os.Stderr)
return
}
cfg.DataDir = filepath.Join(cfg.DataDir, netName(activeNetParams))
blockDbNamePrefix := "blocks"
dbName := blockDbNamePrefix + "_" + cfg.DbType
if cfg.DbType == "sqlite" {
dbName = dbName + ".db"
}
dbPath := filepath.Join(cfg.DataDir, dbName)
log.Infof("loading db")
db, err := database.OpenDB(cfg.DbType, dbPath)
if err != nil {
log.Warnf("db open failed: %v", err)
return
}
defer db.Close()
log.Infof("db load complete")
_, height, err := db.NewestSha()
log.Infof("loaded block height %v", height)
sha, err := getSha(db, cfg.ShaString)
if err != nil {
log.Infof("Invalid block hash %v", cfg.ShaString)
return
}
err = db.DropAfterBlockBySha(&sha)
if err != nil {
log.Warnf("failed %v", err)
}
}
func getSha(db database.Db, str string) (chainhash.Hash, error) {
argtype, idx, sha, err := parsesha(str)
if err != nil {
log.Warnf("unable to decode [%v] %v", str, err)
return chainhash.Hash{}, err
}
switch argtype {
case argSha:
// nothing to do
case argHeight:
sha, err = db.FetchBlockShaByHeight(idx)
if err != nil {
return chainhash.Hash{}, err
}
}
if sha == nil {
fmt.Printf("wtf sha is nil but err is %v", err)
}
return *sha, nil
}
var ntxcnt int64
var txspendcnt int64
var txgivecnt int64
var errBadShaPrefix = errors.New("invalid prefix")
var errBadShaLen = errors.New("invalid len")
var errBadShaChar = errors.New("invalid character")
func parsesha(argstr string) (argtype int, height int64, psha *chainhash.Hash, err error) {
var sha chainhash.Hash
var hashbuf string
switch len(argstr) {
case 64:
hashbuf = argstr
case 66:
if argstr[0:2] != "0x" {
log.Infof("prefix is %v", argstr[0:2])
err = errBadShaPrefix
return
}
hashbuf = argstr[2:]
default:
if len(argstr) <= 16 {
// assume value is height
argtype = argHeight
var h int
h, err = strconv.Atoi(argstr)
if err == nil {
height = int64(h)
return
}
log.Infof("Unable to parse height %v, err %v", height, err)
}
err = errBadShaLen
return
}
var buf [32]byte
for idx, ch := range hashbuf {
var val rune
switch {
case ch >= '0' && ch <= '9':
val = ch - '0'
case ch >= 'a' && ch <= 'f':
val = ch - 'a' + rune(10)
case ch >= 'A' && ch <= 'F':
val = ch - 'A' + rune(10)
default:
err = errBadShaChar
return
}
b := buf[31-idx/2]
if idx&1 == 1 {
b |= byte(val)
} else {
b |= (byte(val) << 4)
}
buf[31-idx/2] = b
}
sha.SetBytes(buf[0:32])
psha = &sha
return
}

View File

@ -1,4 +1,4 @@
// Copyright (c) 2013 The btcsuite developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -12,8 +12,9 @@ import (
flags "github.com/btcsuite/go-flags"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
database "github.com/decred/dcrd/database2"
_ "github.com/decred/dcrd/database2/ffldb"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
@ -22,13 +23,13 @@ const (
minCandidates = 1
maxCandidates = 20
defaultNumCandidates = 5
defaultDbType = "leveldb"
defaultDbType = "ffldb"
)
var (
dcrdHomeDir = dcrutil.AppDataDir("dcrd", false)
defaultDataDir = filepath.Join(dcrdHomeDir, "data")
knownDbTypes = database.SupportedDBs()
knownDbTypes = database.SupportedDrivers()
activeNetParams = &chaincfg.MainNetParams
)

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -13,8 +13,8 @@ import (
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
database "github.com/decred/dcrd/database2"
)
const blockDbNamePrefix = "blocks"
@ -24,16 +24,12 @@ var (
)
// loadBlockDB opens the block database and returns a handle to it.
func loadBlockDB() (database.Db, error) {
func loadBlockDB() (database.DB, error) {
// The database name is based on the database type.
dbType := cfg.DbType
dbName := blockDbNamePrefix + "_" + dbType
if dbType == "sqlite" {
dbName = dbName + ".db"
}
dbName := blockDbNamePrefix + "_" + cfg.DbType
dbPath := filepath.Join(cfg.DataDir, dbName)
fmt.Printf("Loading block database from '%s'\n", dbPath)
db, err := database.OpenDB(dbType, dbPath)
db, err := database.Open(cfg.DbType, dbPath, activeNetParams.Net)
if err != nil {
return nil, err
}
@ -45,16 +41,14 @@ func loadBlockDB() (database.Db, error) {
// candidates at the last checkpoint that is already hard coded into chain
// since there is no point in finding candidates before already existing
// checkpoints.
func findCandidates(db database.Db, latestHash *chainhash.Hash) ([]*chaincfg.Checkpoint, error) {
func findCandidates(chain *blockchain.BlockChain, latestHash *chainhash.Hash) ([]*chaincfg.Checkpoint, error) {
// Start with the latest block of the main chain.
block, err := db.FetchBlockBySha(latestHash)
block, err := chain.BlockByHash(latestHash)
if err != nil {
return nil, err
}
// Setup chain and get the latest checkpoint. Ignore notifications
// since they aren't needed for this util.
chain := blockchain.New(db, nil, activeNetParams, nil, nil)
// Get the latest known checkpoint.
latestCheckpoint := chain.LatestCheckpoint()
if latestCheckpoint == nil {
// Set the latest checkpoint to the genesis block if there isn't
@ -116,7 +110,7 @@ func findCandidates(db database.Db, latestHash *chainhash.Hash) ([]*chaincfg.Che
}
prevHash := &block.MsgBlock().Header.PrevBlock
block, err = db.FetchBlockBySha(prevHash)
block, err = chain.BlockByHash(prevHash)
if err != nil {
return nil, err
}
@ -156,17 +150,24 @@ func main() {
}
defer db.Close()
// Get the latest block hash and height from the database and report
// status.
latestHash, height, err := db.NewestSha()
// Setup chain. Ignore notifications since they aren't needed for this
// util.
chain, err := blockchain.New(&blockchain.Config{
DB: db,
ChainParams: activeNetParams,
})
if err != nil {
fmt.Fprintln(os.Stderr, err)
fmt.Fprintf(os.Stderr, "failed to initialize chain: %v\n", err)
return
}
fmt.Printf("Block database loaded with block height %d\n", height)
// Get the latest block hash and height from the database and report
// status.
best := chain.BestSnapshot()
fmt.Printf("Block database loaded with block height %d\n", best.Height)
// Find checkpoint candidates.
candidates, err := findCandidates(db, latestHash)
candidates, err := findCandidates(chain, best.Hash)
if err != nil {
fmt.Fprintln(os.Stderr, "Unable to identify candidates:", err)
return
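The `findcheckpoint` hunks above replace `db.NewestSha()` with `chain.BestSnapshot()`: the chain now owns the best-chain state and hands out an immutable snapshot, so callers get a consistent hash/height pair in one call. Below is a minimal sketch of that idea, assuming nothing beyond the standard library; the struct and method names imitate the reworked API but are illustrative stand-ins.

```go
package main

import (
	"fmt"
	"sync"
)

// BestState is an immutable snapshot of the best chain, loosely modeled
// on the state the reworked chain exposes (fields here are a subset).
type BestState struct {
	Hash   string
	Height int64
}

// toyChain guards its best-chain state with a read/write mutex so that
// readers always see a hash and height that belong together.
type toyChain struct {
	mtx  sync.RWMutex
	best BestState
}

// BestSnapshot returns a copy of the current best state. Because the
// copy is taken under the lock, the caller can use it afterwards
// without holding any chain locks.
func (c *toyChain) BestSnapshot() BestState {
	c.mtx.RLock()
	defer c.mtx.RUnlock()
	return c.best
}

// connectBlock updates the best state atomically as a block is added.
func (c *toyChain) connectBlock(hash string, height int64) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.best = BestState{Hash: hash, Height: height}
}

func main() {
	c := &toyChain{best: BestState{Hash: "genesis", Height: 0}}
	c.connectBlock("000000abc", 1)
	snap := c.BestSnapshot()
	fmt.Println(snap.Height, snap.Hash)
}
```

This is what "best chain state always set when the chain instance is created" buys the utilities: no separate `NewestSha` database query, and no window where hash and height could come from different blocks.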

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013 Conformal Systems LLC.
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,4 +1,4 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -23,9 +23,10 @@ import (
flags "github.com/btcsuite/go-flags"
"github.com/btcsuite/go-socks/socks"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
_ "github.com/decred/dcrd/database/memdb"
database "github.com/decred/dcrd/database2"
_ "github.com/decred/dcrd/database2/ffldb"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
@ -42,15 +43,15 @@ const (
defaultMaxRPCClients = 10
defaultMaxRPCWebsockets = 25
defaultVerifyEnabled = false
defaultDbType = "leveldb"
defaultDbType = "ffldb"
defaultFreeTxRelayLimit = 15.0
defaultBlockMinSize = 0
defaultBlockMaxSize = 375000
blockMaxSizeMin = 1000
blockMaxSizeMax = wire.MaxBlockPayload - 1000
defaultBlockPrioritySize = 20000
defaultGenerate = false
defaultAddrIndex = false
defaultGenerate = false
defaultNonAggressive = false
defaultNoMiningStateSync = false
defaultAllowOldVotes = false
@ -64,7 +65,7 @@ var (
dcrdHomeDir = dcrutil.AppDataDir("dcrd", false)
defaultConfigFile = filepath.Join(dcrdHomeDir, defaultConfigFilename)
defaultDataDir = filepath.Join(dcrdHomeDir, defaultDataDirname)
knownDbTypes = database.SupportedDBs()
knownDbTypes = database.SupportedDrivers()
defaultRPCKeyFile = filepath.Join(dcrdHomeDir, "rpc.key")
defaultRPCCertFile = filepath.Join(dcrdHomeDir, "rpc.cert")
defaultLogDir = filepath.Join(dcrdHomeDir, defaultLogDirname)
@ -142,7 +143,7 @@ type config struct {
BlockMaxSize uint32 `long:"blockmaxsize" description:"Maximum block size in bytes to be used when creating a block"`
BlockPrioritySize uint32 `long:"blockprioritysize" description:"Size in bytes for high-priority/low-fee transactions when creating a block"`
GetWorkKeys []string `long:"getworkkey" description:"DEPRECATED -- Use the --miningaddr option instead"`
DropAddrIndex bool `long:"dropaddrindex" description:"Deletes the address-based transaction index from the database on start up, and then exits."`
DropAddrIndex bool `long:"dropaddrindex" description:"Deletes the address-based transaction index from the database on start up and then exits."`
NonAggressive bool `long:"nonaggressive" description:"Disable mining off of the parent block of the blockchain if there aren't enough voters"`
NoMiningStateSync bool `long:"nominingstatesync" description:"Disable synchronizing the mining state with other nodes"`
AllowOldVotes bool `long:"allowoldvotes" description:"Enable the addition of very old votes to the mempool"`
@ -543,14 +544,6 @@ func loadConfig() (*config, []string, error) {
return nil, nil, err
}
// Memdb does not currently support the addrindex.
if cfg.DbType == "memdb" && !cfg.NoAddrIndex {
err := fmt.Errorf("memdb does not currently support the addrindex")
fmt.Fprintln(os.Stderr, err)
fmt.Fprintln(os.Stderr, usageMessage)
return nil, nil, err
}
// Validate profile port number
if cfg.Profile != "" {
profilePort, err := strconv.Atoi(cfg.Profile)

View File

@ -1,5 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

Some files were not shown because too many files have changed in this diff.