Initial Decred Commit.

Includes work by cjepson, ay-p, jolan, and jcv.

Initial conceptual framework by tacotime.
This commit is contained in:
John C. Vernaleo 2016-01-20 16:46:42 -05:00
parent 09ce6f94d3
commit 5076a00512
366 changed files with 47754 additions and 12618 deletions

.gitignore vendored
View File

@ -1,33 +1,4 @@
# Temp files
cmd/dcrd/dcrd
cmd/dcrd/dcrctl
*~
# Databases
btcd.db
*-shm
*-wal
# Log files
*.log
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.pyc

View File

@ -1,17 +0,0 @@
language: go
go:
- 1.3.3
- 1.4.2
sudo: false
before_install:
- gotools=golang.org/x/tools
- if [ "$TRAVIS_GO_VERSION" = "go1.3.3" ]; then gotools=code.google.com/p/go.tools; fi
install:
- go get -d -t -v ./...
- go get -v $gotools/cmd/cover
- go get -v $gotools/cmd/vet
- go get -v github.com/bradfitz/goimports
- go get -v github.com/golang/lint/golint
script:
- export PATH=$PATH:$HOME/gopath/bin
- ./goclean.sh

View File

@ -393,7 +393,7 @@ Changes in 0.8.0-beta (Sun May 25 2014)
- Reduce max bytes allowed for a standard nulldata transaction to 40 for
compatibility with the reference client
- Introduce a new btcnet package which houses all of the network params
for each network (mainnet, testnet3, regtest) to ultimately enable
for each network (mainnet, testnet, regtest) to ultimately enable
easier addition and tweaking of networks without needing to change
several packages
- Fix several script discrepancies found by reference client test data
@ -410,7 +410,7 @@ Changes in 0.8.0-beta (Sun May 25 2014)
- Provide options to control block template creation settings
- Support the getwork RPC
- Allow address identifiers to apply to more than one network since both
testnet3 and the regression test network unfortunately use the same
testnet and the regression test network unfortunately use the same
identifier
- RPC changes:
- Set the content type for HTTP POST RPC connections to application/json

View File

@ -1,4 +1,5 @@
Copyright (c) 2013-2015 The btcsuite developers
Copyright (c) 2015-2016 The Decred developers
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above

View File

@ -1,10 +1,7 @@
btcd
dcrd
====
[![Build Status](https://travis-ci.org/btcsuite/btcd.png?branch=master)]
(https://travis-ci.org/btcsuite/btcd)
btcd is an alternative full node bitcoin implementation written in Go (golang).
dcrd is a Decred full node implementation written in Go (golang).
This project is currently under active development and is in a Beta state. It
is extremely stable and has been in production use for over 6 months as of May
@ -13,7 +10,7 @@ we come out of beta.
It properly downloads, validates, and serves the block chain using the exact
rules (including bugs) for block acceptance as Bitcoin Core. We have taken
great care to avoid btcd causing a fork to the block chain. It passes all of
great care to avoid dcrd causing a fork to the block chain. It passes all of
the 'official' block acceptance tests
(https://github.com/TheBlueMatt/test-scripts) as well as all of the JSON test
data in the Bitcoin Core code.
@ -24,13 +21,13 @@ transactions admitted to the pool follow the rules required by the block chain
and also includes the same checks which filter transactions based on
miner requirements ("standard" transactions) as Bitcoin Core.
One key difference between btcd and Bitcoin Core is that btcd does *NOT* include
One key difference between dcrd and Bitcoin Core is that dcrd does *NOT* include
wallet functionality and this was a very intentional design decision. See the
blog entry [here](https://blog.conformal.com/btcd-not-your-moms-bitcoin-daemon)
blog entry [here](https://blog.conformal.com/dcrd-not-your-moms-bitcoin-daemon)
for more details. This means you can't actually make or receive payments
directly with btcd. That functionality is provided by the
[btcwallet](https://github.com/btcsuite/btcwallet) and
[btcgui](https://github.com/btcsuite/btcgui) projects which are both under
directly with dcrd. That functionality is provided by the
[dcrwallet](https://github.com/decred/dcrwallet) and
[btcgui](https://github.com/decred/btcgui) projects which are both under
active development.
## Requirements
@ -41,7 +38,7 @@ active development.
#### Windows - MSI Available
https://github.com/btcsuite/btcd/releases
https://github.com/decred/dcrd/releases
#### Linux/BSD/MacOSX/POSIX - Build from Source
@ -59,13 +56,13 @@ NOTE: The `GOROOT` and `GOPATH` above must not be the same path. It is
recommended that `GOPATH` is set to a directory in your home directory such as
`~/goprojects` to avoid write permission issues.
- Run the following command to obtain btcd, all dependencies, and install it:
- Run the following command to obtain dcrd, all dependencies, and install it:
```bash
$ go get -u github.com/btcsuite/btcd/...
$ go get -u github.com/decred/dcrd/...
```
- btcd (and utilities) will now be installed in either ```$GOROOT/bin``` or
- dcrd (and utilities) will now be installed in either ```$GOROOT/bin``` or
```$GOPATH/bin``` depending on your configuration. If you did not already
add the bin directory to your system path during Go installation, we
recommend you do so now.
@ -78,70 +75,43 @@ Install a newer MSI
#### Linux/BSD/MacOSX/POSIX - Build from Source
- Run the following command to update btcd, all dependencies, and install it:
- Run the following command to update dcrd, all dependencies, and install it:
```bash
$ go get -u -v github.com/btcsuite/btcd/...
$ go get -u -v github.com/decred/dcrd/...
```
## Getting Started
btcd has several configuration options available to tweak how it runs, but all
dcrd has several configuration options available to tweak how it runs, but all
of the basic operations described in the intro section work with zero
configuration.
#### Windows (Installed from MSI)
Launch btcd from your Start menu.
Launch dcrd from your Start menu.
#### Linux/BSD/POSIX/Source
```bash
$ ./btcd
$ ./dcrd
```
## IRC
- irc.freenode.net
- channel #btcd
- [webchat](https://webchat.freenode.net/?channels=btcd)
## Mailing lists
- btcd: discussion of btcd and its packages.
- btcd-commits: readonly mail-out of source code changes.
To subscribe to a given list, send email to list+subscribe@opensource.conformal.com
- channel #decred
- [webchat](https://webchat.freenode.net/?channels=decred)
## Issue Tracker
The [integrated github issue tracker](https://github.com/btcsuite/btcd/issues)
The [integrated github issue tracker](https://github.com/decred/dcrd/issues)
is used for this project.
## Documentation
The documentation is a work-in-progress. It is located in the [docs](https://github.com/btcsuite/btcd/tree/master/docs) folder.
## GPG Verification Key
All official release tags are signed by Conformal so users can ensure the code
has not been tampered with and is coming from the btcsuite developers. To
verify the signature perform the following:
- Download the public key from the Conformal website at
https://opensource.conformal.com/GIT-GPG-KEY-conformal.txt
- Import the public key into your GPG keyring:
```bash
gpg --import GIT-GPG-KEY-conformal.txt
```
- Verify the release tag with the following command where `TAG_NAME` is a
placeholder for the specific tag:
```bash
git tag -v TAG_NAME
```
The documentation is a work-in-progress. It is located in the [docs](https://github.com/decred/dcrd/tree/master/docs) folder.
## License
btcd is licensed under the [copyfree](http://copyfree.org) ISC License.
dcrd is licensed under the [copyfree](http://copyfree.org) ISC License.

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -22,11 +23,12 @@ import (
"sync/atomic"
"time"
"github.com/btcsuite/btcd/wire"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/wire"
)
// AddrManager provides a concurrency safe address manager for caching potential
// peers on the bitcoin network.
// peers on the decred network.
type AddrManager struct {
mtx sync.Mutex
peersFile string
@ -293,13 +295,14 @@ func (a *AddrManager) pickTried(bucket int) *list.Element {
func (a *AddrManager) getNewBucket(netAddr, srcAddr *wire.NetAddress) int {
// bitcoind:
// doublesha256(key + sourcegroup + int64(doublesha256(key + group + sourcegroup))%bucket_per_source_group) % num_new_buckets
// doublesha256(key + sourcegroup + int64(doublesha256(key + group
// + sourcegroup))%bucket_per_source_group) % num_new_buckets
data1 := []byte{}
data1 = append(data1, a.key[:]...)
data1 = append(data1, []byte(GroupKey(netAddr))...)
data1 = append(data1, []byte(GroupKey(srcAddr))...)
hash1 := wire.DoubleSha256(data1)
hash1 := chainhash.HashFuncB(data1)
hash64 := binary.LittleEndian.Uint64(hash1)
hash64 %= newBucketsPerGroup
var hashbuf [8]byte
@ -309,17 +312,18 @@ func (a *AddrManager) getNewBucket(netAddr, srcAddr *wire.NetAddress) int {
data2 = append(data2, GroupKey(srcAddr)...)
data2 = append(data2, hashbuf[:]...)
hash2 := wire.DoubleSha256(data2)
hash2 := chainhash.HashFuncB(data2)
return int(binary.LittleEndian.Uint64(hash2) % newBucketCount)
}
func (a *AddrManager) getTriedBucket(netAddr *wire.NetAddress) int {
// bitcoind hashes this as:
// doublesha256(key + group + truncate_to_64bits(doublesha256(key)) % buckets_per_group) % num_buckets
// doublesha256(key + group + truncate_to_64bits(doublesha256(key))
// % buckets_per_group) % num_buckets
data1 := []byte{}
data1 = append(data1, a.key[:]...)
data1 = append(data1, []byte(NetAddressKey(netAddr))...)
hash1 := wire.DoubleSha256(data1)
hash1 := chainhash.HashFuncB(data1)
hash64 := binary.LittleEndian.Uint64(hash1)
hash64 %= triedBucketsPerGroup
var hashbuf [8]byte
@ -329,7 +333,7 @@ func (a *AddrManager) getTriedBucket(netAddr *wire.NetAddress) int {
data2 = append(data2, GroupKey(netAddr)...)
data2 = append(data2, hashbuf[:]...)
hash2 := wire.DoubleSha256(data2)
hash2 := chainhash.HashFuncB(data2)
return int(binary.LittleEndian.Uint64(hash2) % triedBucketCount)
}
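The tried-bucket selection above follows bitcoind's two-pass scheme: hash the key plus the address, reduce modulo the per-group bucket count, then hash again with the group to pick the final bucket. A minimal self-contained sketch is below; `doubleSha256` stands in for `chainhash.HashFuncB` (at this point in the diff that function is assumed to be a drop-in hash replacement — Decred later uses its own hash function, so the stand-in is for illustration only), and the key/address/group values are hypothetical.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// doubleSha256 stands in for chainhash.HashFuncB. bitcoind uses
// double-SHA256 here; dcrd swaps in its own hash function, so this is
// only an illustrative assumption.
func doubleSha256(b []byte) []byte {
	first := sha256.Sum256(b)
	second := sha256.Sum256(first[:])
	return second[:]
}

// triedBucket sketches getTriedBucket: hash key+addr, reduce modulo the
// per-group bucket count, then hash key+group+that residue to pick the
// final bucket index.
func triedBucket(key, addrKey, groupKey []byte, bucketsPerGroup, bucketCount uint64) int {
	data1 := append(append([]byte{}, key...), addrKey...)
	hash64 := binary.LittleEndian.Uint64(doubleSha256(data1)) % bucketsPerGroup

	var hashbuf [8]byte
	binary.LittleEndian.PutUint64(hashbuf[:], hash64)

	data2 := append(append(append([]byte{}, key...), groupKey...), hashbuf[:]...)
	return int(binary.LittleEndian.Uint64(doubleSha256(data2)) % bucketCount)
}

func main() {
	key := []byte("random-node-key")
	b := triedBucket(key, []byte("1.2.3.4:9108"), []byte("1.2.3.0"), 8, 64)
	fmt.Println(b >= 0 && b < 64) // bucket index is always in [0, 64)
}
```

The two-pass construction bounds how many buckets a single source group can influence, which is the point of the `%bucket_per_source_group` step in the bitcoind formula quoted above.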
@ -1085,7 +1089,7 @@ func (a *AddrManager) GetBestLocalAddress(remoteAddr *wire.NetAddress) *wire.Net
return bestAddress
}
// New returns a new bitcoin address manager.
// New returns a new decred address manager.
// Use Start to begin processing asynchronous address updates.
func New(dataDir string, lookupFunc func(string) ([]net.IP, error)) *AddrManager {
am := AddrManager{

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -12,8 +13,8 @@ import (
"testing"
"time"
"github.com/btcsuite/btcd/addrmgr"
"github.com/btcsuite/btcd/wire"
"github.com/decred/dcrd/addrmgr"
"github.com/decred/dcrd/wire"
)
// naTest is used to describe a test to be performed against the NetAddressKey

View File

@ -1,14 +1,15 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
/*
Package addrmgr implements a concurrency safe Bitcoin address manager.
Package addrmgr implements a concurrency safe Decred address manager.
Address Manager Overview
In order to maintain the peer-to-peer Bitcoin network, there needs to be a source
of addresses to connect to as nodes come and go. The Bitcoin protocol provides
In order to maintain the peer-to-peer Decred network, there needs to be a source
of addresses to connect to as nodes come and go. The Decred protocol provides
the getaddr and addr messages to allow peers to communicate known addresses
with each other. However, there needs to be a mechanism to store those results
and select peers from them. It is also important to note that remote peers can't

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,7 +8,7 @@ package addrmgr
import (
"time"
"github.com/btcsuite/btcd/wire"
"github.com/decred/dcrd/wire"
)
func TstKnownAddressIsBad(ka *KnownAddress) bool {

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,7 +8,7 @@ package addrmgr
import (
"time"
"github.com/btcsuite/btcd/wire"
"github.com/decred/dcrd/wire"
)
// KnownAddress tracks information about a known network address that is used

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -9,8 +10,8 @@ import (
"testing"
"time"
"github.com/btcsuite/btcd/addrmgr"
"github.com/btcsuite/btcd/wire"
"github.com/decred/dcrd/addrmgr"
"github.com/decred/dcrd/wire"
)
func TestChance(t *testing.T) {

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -8,7 +9,7 @@ import (
"fmt"
"net"
"github.com/btcsuite/btcd/wire"
"github.com/decred/dcrd/wire"
)
var (

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -9,8 +10,8 @@ import (
"testing"
"time"
"github.com/btcsuite/btcd/addrmgr"
"github.com/btcsuite/btcd/wire"
"github.com/decred/dcrd/addrmgr"
"github.com/decred/dcrd/wire"
)
// TestIPTypes ensures the various functions which determine the type of an IP

View File

@ -1,62 +1,62 @@
github.com/conformal/btcd/addrmgr/network.go GroupKey 100.00% (23/23)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.reset 100.00% (6/6)
github.com/conformal/btcd/addrmgr/network.go IsRFC5737 100.00% (4/4)
github.com/conformal/btcd/addrmgr/network.go IsRFC1918 100.00% (4/4)
github.com/conformal/btcd/addrmgr/addrmanager.go New 100.00% (3/3)
github.com/conformal/btcd/addrmgr/addrmanager.go NetAddressKey 100.00% (2/2)
github.com/conformal/btcd/addrmgr/network.go IsRFC4862 100.00% (1/1)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.numAddresses 100.00% (1/1)
github.com/conformal/btcd/addrmgr/log.go init 100.00% (1/1)
github.com/conformal/btcd/addrmgr/log.go DisableLog 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go ipNet 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsIPv4 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsLocal 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsOnionCatTor 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRFC2544 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRFC3849 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRFC3927 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRFC3964 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRFC4193 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRFC4380 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRFC4843 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRFC6052 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRFC6145 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRFC6598 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsValid 100.00% (1/1)
github.com/conformal/btcd/addrmgr/network.go IsRoutable 100.00% (1/1)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.GetBestLocalAddress 94.74% (18/19)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.AddLocalAddress 90.91% (10/11)
github.com/conformal/btcd/addrmgr/addrmanager.go getReachabilityFrom 51.52% (17/33)
github.com/conformal/btcd/addrmgr/addrmanager.go ipString 50.00% (2/4)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.GetAddress 9.30% (4/43)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.deserializePeers 0.00% (0/50)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.Good 0.00% (0/44)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.savePeers 0.00% (0/39)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.updateAddress 0.00% (0/30)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.expireNew 0.00% (0/22)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.AddressCache 0.00% (0/16)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.HostToNetAddress 0.00% (0/15)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.getNewBucket 0.00% (0/15)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.AddAddressByIP 0.00% (0/14)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.getTriedBucket 0.00% (0/14)
github.com/conformal/btcd/addrmgr/knownaddress.go knownAddress.chance 0.00% (0/13)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.loadPeers 0.00% (0/11)
github.com/conformal/btcd/addrmgr/knownaddress.go knownAddress.isBad 0.00% (0/11)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.Connected 0.00% (0/10)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.addressHandler 0.00% (0/9)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.pickTried 0.00% (0/8)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.DeserializeNetAddress 0.00% (0/7)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.Stop 0.00% (0/7)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.Attempt 0.00% (0/7)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.Start 0.00% (0/6)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.AddAddresses 0.00% (0/4)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.NeedMoreAddresses 0.00% (0/3)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.NumAddresses 0.00% (0/3)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.AddAddress 0.00% (0/3)
github.com/conformal/btcd/addrmgr/knownaddress.go knownAddress.LastAttempt 0.00% (0/1)
github.com/conformal/btcd/addrmgr/knownaddress.go knownAddress.NetAddress 0.00% (0/1)
github.com/conformal/btcd/addrmgr/addrmanager.go AddrManager.find 0.00% (0/1)
github.com/conformal/btcd/addrmgr/log.go UseLogger 0.00% (0/1)
github.com/conformal/btcd/addrmgr --------------------------------- 21.04% (113/537)
github.com/decred/dcrd/addrmgr/network.go GroupKey 100.00% (23/23)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.GetBestLocalAddress 100.00% (19/19)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.getNewBucket 100.00% (15/15)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.AddAddressByIP 100.00% (14/14)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.getTriedBucket 100.00% (14/14)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.AddLocalAddress 100.00% (11/11)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.pickTried 100.00% (8/8)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.reset 100.00% (6/6)
github.com/decred/dcrd/addrmgr/network.go IsRFC1918 100.00% (4/4)
github.com/decred/dcrd/addrmgr/network.go IsRFC5737 100.00% (4/4)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.AddAddresses 100.00% (4/4)
github.com/decred/dcrd/addrmgr/addrmanager.go New 100.00% (3/3)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.NeedMoreAddresses 100.00% (3/3)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.AddAddress 100.00% (3/3)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.NumAddresses 100.00% (3/3)
github.com/decred/dcrd/addrmgr/addrmanager.go NetAddressKey 100.00% (2/2)
github.com/decred/dcrd/addrmgr/network.go IsRFC4862 100.00% (1/1)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.numAddresses 100.00% (1/1)
github.com/decred/dcrd/addrmgr/log.go init 100.00% (1/1)
github.com/decred/dcrd/addrmgr/knownaddress.go KnownAddress.NetAddress 100.00% (1/1)
github.com/decred/dcrd/addrmgr/knownaddress.go KnownAddress.LastAttempt 100.00% (1/1)
github.com/decred/dcrd/addrmgr/log.go DisableLog 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go ipNet 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsIPv4 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsLocal 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsOnionCatTor 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRFC2544 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRFC3849 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRFC3927 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRFC3964 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRFC4193 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRFC4380 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRFC4843 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRFC6052 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRFC6145 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRFC6598 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsValid 100.00% (1/1)
github.com/decred/dcrd/addrmgr/network.go IsRoutable 100.00% (1/1)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.find 100.00% (1/1)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.GetAddress 95.35% (41/43)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.Good 93.18% (41/44)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.Connected 90.00% (9/10)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.addressHandler 88.89% (8/9)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.AddressCache 87.50% (14/16)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.Attempt 85.71% (6/7)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.Start 83.33% (5/6)
github.com/decred/dcrd/addrmgr/knownaddress.go KnownAddress.chance 76.92% (10/13)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.DeserializeNetAddress 71.43% (5/7)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.Stop 71.43% (5/7)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.updateAddress 53.33% (16/30)
github.com/decred/dcrd/addrmgr/addrmanager.go getReachabilityFrom 51.52% (17/33)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.savePeers 51.28% (20/39)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.expireNew 50.00% (11/22)
github.com/decred/dcrd/addrmgr/addrmanager.go ipString 50.00% (2/4)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.loadPeers 45.45% (5/11)
github.com/decred/dcrd/addrmgr/knownaddress.go KnownAddress.isBad 36.36% (4/11)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.HostToNetAddress 26.67% (4/15)
github.com/decred/dcrd/addrmgr/addrmanager.go AddrManager.deserializePeers 6.00% (3/50)
github.com/decred/dcrd/addrmgr/log.go UseLogger 0.00% (0/1)
github.com/decred/dcrd/addrmgr --------------------------------- 71.69% (385/537)

View File

@ -1,11 +1,10 @@
blockchain
==========
[![Build Status](http://img.shields.io/travis/btcsuite/btcd.svg)]
(https://travis-ci.org/btcsuite/btcd) [![ISC License]
[![ISC License]
(http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
Package blockchain implements bitcoin block handling and chain selection rules.
Package blockchain implements decred block handling and chain selection rules.
The test coverage is currently only around 60%, but will be increasing over
time. See `test_coverage.txt` for the gocov coverage report. Alternatively, if
you are running a POSIX OS, you can run the `cov_report.sh` script for a
@ -15,29 +14,29 @@ There is an associated blog post about the release of this package
[here](https://blog.conformal.com/btcchain-the-bitcoin-chain-package-from-bctd/).
This package has intentionally been designed so it can be used as a standalone
package for any projects needing to handle processing of blocks into the bitcoin
package for any projects needing to handle processing of blocks into the decred
block chain.
## Documentation
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)]
(http://godoc.org/github.com/btcsuite/btcd/blockchain)
(http://godoc.org/github.com/decred/dcrd/blockchain)
Full `go doc` style documentation for the project can be viewed online without
installing this package by using the GoDoc site here:
http://godoc.org/github.com/btcsuite/btcd/blockchain
http://godoc.org/github.com/decred/dcrd/blockchain
You can also view the documentation locally once the package is installed with
the `godoc` tool by running `godoc -http=":6060"` and pointing your browser to
http://localhost:6060/pkg/github.com/btcsuite/btcd/blockchain
http://localhost:6060/pkg/github.com/decred/dcrd/blockchain
## Installation
```bash
$ go get github.com/btcsuite/btcd/blockchain
$ go get github.com/decred/dcrd/blockchain
```
## Bitcoin Chain Processing Overview
## Decred Chain Processing Overview
Before a block is allowed into the block chain, it must go through an intensive
series of validation rules. The following list serves as a general outline of
@ -75,43 +74,23 @@ is by no means exhaustive:
## Examples
* [ProcessBlock Example]
(http://godoc.org/github.com/btcsuite/btcd/blockchain#example-BlockChain-ProcessBlock)
(http://godoc.org/github.com/decred/dcrd/blockchain#example-BlockChain-ProcessBlock)
Demonstrates how to create a new chain instance and use ProcessBlock to
attempt to add a block to the chain. This example intentionally
attempts to insert a duplicate genesis block to illustrate how an invalid
block is handled.
* [CompactToBig Example]
(http://godoc.org/github.com/btcsuite/btcd/blockchain#example-CompactToBig)
(http://godoc.org/github.com/decred/dcrd/blockchain#example-CompactToBig)
Demonstrates how to convert the compact "bits" in a block header which
represent the target difficulty to a big integer and display it using the
typical hex notation.
* [BigToCompact Example]
(http://godoc.org/github.com/btcsuite/btcd/blockchain#example-BigToCompact)
(http://godoc.org/github.com/decred/dcrd/blockchain#example-BigToCompact)
Demonstrates how to convert a target difficulty into the
compact "bits" in a block header which represent that target difficulty.
## GPG Verification Key
All official release tags are signed by Conformal so users can ensure the code
has not been tampered with and is coming from the btcsuite developers. To
verify the signature perform the following:
- Download the public key from the Conformal website at
https://opensource.conformal.com/GIT-GPG-KEY-conformal.txt
- Import the public key into your GPG keyring:
```bash
gpg --import GIT-GPG-KEY-conformal.txt
```
- Verify the release tag with the following command where `TAG_NAME` is a
placeholder for the specific tag:
```bash
git tag -v TAG_NAME
```
## License

View File

@ -1,10 +1,176 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import "github.com/btcsuite/btcutil"
import (
"encoding/binary"
"fmt"
"math"
"time"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/txscript"
"github.com/decred/dcrutil"
)
// checkCoinbaseUniqueHeight checks that, for all blocks with height > 1, the
// coinbase contains the height encoding that makes coinbase hash collisions
// impossible.
func checkCoinbaseUniqueHeight(blockHeight int64, block *dcrutil.Block) error {
if !(len(block.MsgBlock().Transactions) > 0) {
str := fmt.Sprintf("block %v has no coinbase", block.Sha())
return ruleError(ErrNoTransactions, str)
}
// Coinbase TxOut[0] is always tax, TxOut[1] is always
// height + extranonce, so at least two outputs must
// exist.
if !(len(block.MsgBlock().Transactions[0].TxOut) > 1) {
str := fmt.Sprintf("block %v is missing necessary coinbase "+
"outputs", block.Sha())
return ruleError(ErrFirstTxNotCoinbase, str)
}
// The first 4 bytes of the NullData output must be the
// encoded height of the block, so that every coinbase
// created has a unique transaction hash.
nullData, err := txscript.GetNullDataContent(
block.MsgBlock().Transactions[0].TxOut[1].Version,
block.MsgBlock().Transactions[0].TxOut[1].PkScript)
if err != nil {
str := fmt.Sprintf("block %v txOut 1 has wrong pkScript "+
"type", block.Sha())
return ruleError(ErrFirstTxNotCoinbase, str)
}
if len(nullData) < 4 {
str := fmt.Sprintf("block %v txOut 1 has too short nullData "+
"push to contain height", block.Sha())
return ruleError(ErrFirstTxNotCoinbase, str)
}
// Check the height and ensure it is correct.
cbHeight := binary.LittleEndian.Uint32(nullData[0:4])
if cbHeight != uint32(blockHeight) {
prevBlock := block.MsgBlock().Header.PrevBlock
str := fmt.Sprintf("block %v txOut 1 has wrong height in "+
"coinbase; want %v, got %v; prevBlock %v, header height %v",
block.Sha(), blockHeight, cbHeight, prevBlock,
block.MsgBlock().Header.Height)
return ruleError(ErrCoinbaseHeight, str)
}
return nil
}
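The height convention checked above can be shown with a short round-trip: the block height is serialized as the first 4 little-endian bytes of the nullData push in coinbase output 1. The helpers below (`encodeHeight`/`decodeHeight`) are hypothetical names, not part of dcrd; only the byte layout comes from the validation code above.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeHeight serializes a block height the way the coinbase convention
// above requires: 4 little-endian bytes at the front of the nullData push.
func encodeHeight(height uint32) []byte {
	nullData := make([]byte, 4)
	binary.LittleEndian.PutUint32(nullData, height)
	return nullData
}

// decodeHeight reverses encodeHeight, mirroring the
// binary.LittleEndian.Uint32(nullData[0:4]) read in
// checkCoinbaseUniqueHeight.
func decodeHeight(nullData []byte) uint32 {
	return binary.LittleEndian.Uint32(nullData[0:4])
}

func main() {
	data := encodeHeight(123456)
	fmt.Println(decodeHeight(data)) // prints 123456
}
```

Because every coinbase embeds its own height, two coinbases at different heights can never serialize to identical bytes, which rules out the duplicate-coinbase hash collisions (BIP30-style) the check is guarding against.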
// IsFinalizedTransaction determines whether or not a transaction is finalized.
func IsFinalizedTransaction(tx *dcrutil.Tx, blockHeight int64,
blockTime time.Time) bool {
msgTx := tx.MsgTx()
// Lock time of zero means the transaction is finalized.
lockTime := msgTx.LockTime
if lockTime == 0 {
return true
}
// The lock time field of a transaction is either a block height at
// which the transaction is finalized or a timestamp depending on if the
// value is before the txscript.LockTimeThreshold. When it is under the
// threshold it is a block height.
blockTimeOrHeight := int64(0)
if lockTime < txscript.LockTimeThreshold {
blockTimeOrHeight = blockHeight
} else {
blockTimeOrHeight = blockTime.Unix()
}
if int64(lockTime) < blockTimeOrHeight {
return true
}
// At this point, the transaction's lock time hasn't occurred yet, but
// the transaction might still be finalized if the sequence number
// for all transaction inputs is maxed out.
for _, txIn := range msgTx.TxIn {
if txIn.Sequence != math.MaxUint32 {
return false
}
}
return true
}
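The lock-time branch in IsFinalizedTransaction can be isolated into a small sketch. `lockTimeThreshold` here mirrors `txscript.LockTimeThreshold`; the value 500000000 is the convention inherited from Bitcoin (an assumption for this sketch — values below it are block heights, at or above it Unix timestamps). The max-sequence escape hatch is omitted.

```go
package main

import "fmt"

// lockTimeThreshold: lock times below this are interpreted as block
// heights; at or above it, as Unix timestamps (assumed value, matching
// the Bitcoin-derived convention).
const lockTimeThreshold = 500000000

// isFinalized is a simplified sketch of the lock-time comparison in
// IsFinalizedTransaction, ignoring the per-input sequence-number check.
func isFinalized(lockTime uint32, blockHeight, blockTime int64) bool {
	if lockTime == 0 {
		return true
	}
	cmp := blockHeight
	if lockTime >= lockTimeThreshold {
		cmp = blockTime
	}
	return int64(lockTime) < cmp
}

func main() {
	fmt.Println(isFinalized(0, 100, 0))   // true: zero lock time is always final
	fmt.Println(isFinalized(50, 100, 0))  // true: lock height 50 already passed
	fmt.Println(isFinalized(150, 100, 0)) // false: lock height 150 not reached
}
```

In the real function a transaction that fails this comparison can still be final if every input's sequence number is maxed out, which is the loop that follows in the code above.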
// checkBlockContext performs several validation checks on the block which depend
// on its position within the block chain.
//
// The flags modify the behavior of this function as follows:
// - BFFastAdd: The transactions are not checked to see if they are finalized
// and the somewhat expensive duplicate transaction check is not performed.
//
// The flags are also passed to checkBlockHeaderContext. See its documentation
// for how the flags modify its behavior.
func (b *BlockChain) checkBlockContext(block *dcrutil.Block, prevNode *blockNode,
flags BehaviorFlags) error {
// The genesis block is valid by definition.
if prevNode == nil {
return nil
}
// Perform all block header related validation checks.
header := &block.MsgBlock().Header
err := b.checkBlockHeaderContext(header, prevNode, flags)
if err != nil {
return err
}
fastAdd := flags&BFFastAdd == BFFastAdd
if !fastAdd {
// The height of this block is one more than the referenced
// previous block.
blockHeight := prevNode.height + 1
// Ensure all transactions in the block are finalized.
for _, tx := range block.Transactions() {
if !IsFinalizedTransaction(tx, blockHeight,
header.Timestamp) {
str := fmt.Sprintf("block contains unfinalized regular "+
"transaction %v", tx.Sha())
return ruleError(ErrUnfinalizedTx, str)
}
}
for _, stx := range block.STransactions() {
if !IsFinalizedTransaction(stx, blockHeight,
header.Timestamp) {
str := fmt.Sprintf("block contains unfinalized stake "+
"transaction %v", stx.Sha())
return ruleError(ErrUnfinalizedTx, str)
}
}
// Check that the node is at the correct height in the blockchain,
// as specified in the block header.
if blockHeight != int64(block.MsgBlock().Header.Height) {
errStr := fmt.Sprintf("Block header height invalid; expected %v"+
" but %v was found", blockHeight, header.Height)
return ruleError(ErrBadBlockHeight, errStr)
}
// Check that the coinbase contains at minimum the block
// height in output 1.
if blockHeight > 1 {
err := checkCoinbaseUniqueHeight(blockHeight, block)
if err != nil {
return err
}
}
}
return nil
}
// maybeAcceptBlock potentially accepts a block into the memory block chain.
// It performs several validation checks which depend on its position within
@ -14,18 +180,16 @@ import "github.com/btcsuite/btcutil"
// The flags modify the behavior of this function as follows:
// - BFDryRun: The memory chain index will not be pruned and no accept
// notification will be sent since the block is not being accepted.
//
// The flags are also passed to checkBlockContext and connectBestChain. See
// their documentation for how the flags modify their behavior.
func (b *BlockChain) maybeAcceptBlock(block *btcutil.Block, flags BehaviorFlags) error {
func (b *BlockChain) maybeAcceptBlock(block *dcrutil.Block,
flags BehaviorFlags) (bool, error) {
dryRun := flags&BFDryRun == BFDryRun
// Get a block node for the block previous to this one. Will be nil
// if this is the genesis block.
prevNode, err := b.getPrevNodeFromBlock(block)
if err != nil {
log.Errorf("getPrevNodeFromBlock: %v", err)
return err
log.Debugf("getPrevNodeFromBlock: %v", err)
return false, err
}
// The height of this block is one more than the referenced previous
@ -40,7 +204,7 @@ func (b *BlockChain) maybeAcceptBlock(block *btcutil.Block, flags BehaviorFlags)
// position of the block within the block chain.
err = b.checkBlockContext(block, prevNode, flags)
if err != nil {
return err
return false, err
}
// Prune block nodes which are no longer needed before creating
@ -48,14 +212,22 @@ func (b *BlockChain) maybeAcceptBlock(block *btcutil.Block, flags BehaviorFlags)
if !dryRun {
err = b.pruneBlockNodes()
if err != nil {
return err
return false, err
}
}
// Create a new block node for the block and add it to the in-memory
// block chain (could be either a side chain or the main chain).
blockHeader := &block.MsgBlock().Header
newNode := newBlockNode(blockHeader, block.Sha(), blockHeight)
voteBitsStake := make([]uint16, 0)
for _, stx := range block.STransactions() {
if is, _ := stake.IsSSGen(stx); is {
vb := stake.GetSSGenVoteBits(stx)
voteBitsStake = append(voteBitsStake, vb)
}
}
newNode := newBlockNode(blockHeader, block.Sha(), blockHeight, voteBitsStake)
if prevNode != nil {
newNode.parent = prevNode
newNode.height = blockHeight
@ -65,17 +237,19 @@ func (b *BlockChain) maybeAcceptBlock(block *btcutil.Block, flags BehaviorFlags)
// Connect the passed block to the chain while respecting proper chain
// selection according to the chain with the most proof of work. This
// also handles validation of the transaction scripts.
err = b.connectBestChain(newNode, block, flags)
var onMainChain bool
onMainChain, err = b.connectBestChain(newNode, block, flags)
if err != nil {
return err
return false, err
}
// Notify the caller that the new block was accepted into the block
// chain. The caller would typically want to react by relaying the
// inventory to other peers.
if !dryRun {
b.sendNotification(NTBlockAccepted, block)
b.sendNotification(NTBlockAccepted,
&BlockAcceptedNtfnsData{onMainChain, block})
}
return nil
return onMainChain, nil
}


@ -1,32 +1,11 @@
// Copyright (c) 2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain_test
import (
"testing"
import ()
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcutil"
)
// BenchmarkIsCoinBase performs a simple benchmark against the IsCoinBase
// function.
func BenchmarkIsCoinBase(b *testing.B) {
tx, _ := btcutil.NewBlock(&Block100000).Tx(1)
b.ResetTimer()
for i := 0; i < b.N; i++ {
blockchain.IsCoinBase(tx)
}
}
// BenchmarkIsCoinBaseTx performs a simple benchmark against the IsCoinBaseTx
// function.
func BenchmarkIsCoinBaseTx(b *testing.B) {
tx := Block100000.Transactions[1]
b.ResetTimer()
for i := 0; i < b.N; i++ {
blockchain.IsCoinBaseTx(tx)
}
}
// TODO Make benchmarking tests for various functions, such as sidechain
// evaluation.


@ -1,11 +1,13 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"github.com/btcsuite/btcd/wire"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/wire"
)
// BlockLocator is used to help locate a specific block. The algorithm for
@ -23,7 +25,7 @@ import (
//
// The block locator for block 17a would be the hashes of blocks:
// [17a 16a 15 14 13 12 11 10 9 8 6 2 genesis]
type BlockLocator []*wire.ShaHash
type BlockLocator []*chainhash.Hash
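The exponential back-off described in the comment above can be sketched by computing just the heights a locator would reference, matching the worked example `[17a 16a 15 14 13 12 11 10 9 8 6 2 genesis]`. The cutover after ten consecutive entries is an assumption for illustration.

```go
package main

import "fmt"

// locatorHeights returns the block heights a locator for the given tip
// would reference: one step back per entry for the first ten entries,
// then a step size that doubles each entry, always ending at genesis
// (height 0).
func locatorHeights(tip int64) []int64 {
	heights := []int64{}
	step := int64(1)
	for h := tip; h >= 0; h -= step {
		heights = append(heights, h)
		if len(heights) >= 10 {
			step *= 2
		}
	}
	// Always terminate the locator with the genesis block.
	if heights[len(heights)-1] != 0 {
		heights = append(heights, 0)
	}
	return heights
}

func main() {
	fmt.Println(locatorHeights(17))
}
```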
// BlockLocatorFromHash returns a block locator for the passed block hash.
// See BlockLocator for details on the algorithm used to create a block locator.
@ -35,7 +37,7 @@ type BlockLocator []*wire.ShaHash
// therefore the block locator will only consist of the genesis hash
// - If the passed hash is not currently known, the block locator will only
// consist of the passed hash
func (b *BlockChain) BlockLocatorFromHash(hash *wire.ShaHash) BlockLocator {
func (b *BlockChain) BlockLocatorFromHash(hash *chainhash.Hash) BlockLocator {
// The locator contains the requested hash at the very least.
locator := make(BlockLocator, 0, wire.MaxBlockLocatorsPerMsg)
locator = append(locator, hash)

File diff suppressed because it is too large


@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -6,109 +7,10 @@ package blockchain_test
import (
"testing"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
)
// TestHaveBlock tests the HaveBlock API to ensure proper functionality.
func TestHaveBlock(t *testing.T) {
// Load up blocks such that there is a side chain.
// (genesis block) -> 1 -> 2 -> 3 -> 4
// \-> 3a
testFiles := []string{
"blk_0_to_4.dat.bz2",
"blk_3A.dat.bz2",
}
var blocks []*btcutil.Block
for _, file := range testFiles {
blockTmp, err := loadBlocks(file)
if err != nil {
t.Errorf("Error loading file: %v\n", err)
return
}
for _, block := range blockTmp {
blocks = append(blocks, block)
}
}
// Create a new database and chain instance to run tests against.
chain, teardownFunc, err := chainSetup("haveblock")
if err != nil {
t.Errorf("Failed to setup chain instance: %v", err)
return
}
defer teardownFunc()
// Since we're not dealing with the real block chain, disable
// checkpoints and set the coinbase maturity to 1.
chain.DisableCheckpoints(true)
blockchain.TstSetCoinbaseMaturity(1)
timeSource := blockchain.NewMedianTime()
for i := 1; i < len(blocks); i++ {
isOrphan, err := chain.ProcessBlock(blocks[i], timeSource,
blockchain.BFNone)
if err != nil {
t.Errorf("ProcessBlock fail on block %v: %v\n", i, err)
return
}
if isOrphan {
t.Errorf("ProcessBlock incorrectly returned block %v "+
"is an orphan\n", i)
return
}
}
// Insert an orphan block.
isOrphan, err := chain.ProcessBlock(btcutil.NewBlock(&Block100000),
timeSource, blockchain.BFNone)
if err != nil {
t.Errorf("Unable to process block: %v", err)
return
}
if !isOrphan {
t.Errorf("ProcessBlock indicated block is not an orphan when " +
"it should be\n")
return
}
tests := []struct {
hash string
want bool
}{
// Genesis block should be present (in the main chain).
{hash: chaincfg.MainNetParams.GenesisHash.String(), want: true},
// Block 3a should be present (on a side chain).
{hash: "00000000474284d20067a4d33f6a02284e6ef70764a3a26d6a5b9df52ef663dd", want: true},
// Block 100000 should be present (as an orphan).
{hash: "000000000003ba27aa200b1cecaad478d2b00432346c3f1f3986da1afd33e506", want: true},
// Random hashes should not be available.
{hash: "123", want: false},
}
for i, test := range tests {
hash, err := wire.NewShaHashFromStr(test.hash)
if err != nil {
t.Errorf("NewShaHashFromStr: %v", err)
continue
}
result, err := chain.HaveBlock(hash)
if err != nil {
t.Errorf("HaveBlock #%d unexpected error: %v", i, err)
return
}
if result != test.want {
t.Errorf("HaveBlock #%d got %v want %v", i, result,
test.want)
continue
}
}
// TODO Come up with some kind of new test for this portion of the API?
// HaveBlock is already tested in the reorganization test.
}


@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,10 +8,10 @@ package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/txscript"
"github.com/decred/dcrutil"
)
// CheckpointConfirmations is the number of blocks before the end of the current
@ -21,8 +22,8 @@ const CheckpointConfirmations = 2016
// chainhash.Hash. It only differs from the one available in chainhash in
// that it ignores the error since it will only (and must only) be called
// with hard-coded, and therefore known good, hashes.
func newShaHashFromStr(hexStr string) *wire.ShaHash {
sha, _ := wire.NewShaHashFromStr(hexStr)
func newShaHashFromStr(hexStr string) *chainhash.Hash {
sha, _ := chainhash.NewHashFromStr(hexStr)
return sha
}
@ -59,7 +60,7 @@ func (b *BlockChain) LatestCheckpoint() *chaincfg.Checkpoint {
// verifyCheckpoint returns whether the passed block height and hash combination
// match the hard-coded checkpoint data. It also returns true if there is no
// checkpoint data for the passed block height.
func (b *BlockChain) verifyCheckpoint(height int64, hash *wire.ShaHash) bool {
func (b *BlockChain) verifyCheckpoint(height int64, hash *chainhash.Hash) bool {
if b.noCheckpoints || len(b.chainParams.Checkpoints) == 0 {
return true
}
@ -83,7 +84,7 @@ func (b *BlockChain) verifyCheckpoint(height int64, hash *wire.ShaHash) bool {
// available in the downloaded portion of the block chain and returns the
// associated block. It returns nil if a checkpoint can't be found (this should
// really only happen for blocks before the first checkpoint).
func (b *BlockChain) findPreviousCheckpoint() (*btcutil.Block, error) {
func (b *BlockChain) findPreviousCheckpoint() (*dcrutil.Block, error) {
if b.noCheckpoints || len(b.chainParams.Checkpoints) == 0 {
return nil, nil
}
@ -187,12 +188,12 @@ func (b *BlockChain) findPreviousCheckpoint() (*btcutil.Block, error) {
// isNonstandardTransaction determines whether a transaction contains any
// scripts which are not one of the standard types.
func isNonstandardTransaction(tx *btcutil.Tx) bool {
func isNonstandardTransaction(tx *dcrutil.Tx) bool {
// TODO(davec): Should there be checks for the input signature scripts?
// Check all of the output public key scripts for non-standard scripts.
for _, txOut := range tx.MsgTx().TxOut {
scriptClass := txscript.GetScriptClass(txOut.PkScript)
scriptClass := txscript.GetScriptClass(txOut.Version, txOut.PkScript)
if scriptClass == txscript.NonStandardTy {
return true
}
@ -215,7 +216,7 @@ func isNonstandardTransaction(tx *btcutil.Tx) bool {
//
// The intent is that candidates are reviewed by a developer to make the final
// decision and then manually added to the list of checkpoints for a network.
func (b *BlockChain) IsCheckpointCandidate(block *btcutil.Block) (bool, error) {
func (b *BlockChain) IsCheckpointCandidate(block *dcrutil.Block) (bool, error) {
// Checkpoints must be enabled.
if b.noCheckpoints {
return false, fmt.Errorf("checkpoints are disabled")

blockchain/common.go Normal file

@ -0,0 +1,645 @@
// common.go
package blockchain
import (
"bytes"
"encoding/binary"
"fmt"
"sort"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/txscript"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
// DebugBlockHeaderString dumps a verbose message containing information about
// the block header of a block.
func DebugBlockHeaderString(chainParams *chaincfg.Params, block *dcrutil.Block) string {
bh := block.MsgBlock().Header
var buffer bytes.Buffer
str := fmt.Sprintf("Version: %v\n", bh.Version)
buffer.WriteString(str)
str = fmt.Sprintf("Previous block: %v\n", bh.PrevBlock)
buffer.WriteString(str)
str = fmt.Sprintf("Merkle root (reg): %v\n", bh.MerkleRoot)
buffer.WriteString(str)
str = fmt.Sprintf("Merkle root (stk): %v\n", bh.StakeRoot)
buffer.WriteString(str)
str = fmt.Sprintf("VoteBits: %v\n", bh.VoteBits)
buffer.WriteString(str)
str = fmt.Sprintf("FinalState: %v\n", bh.FinalState)
buffer.WriteString(str)
str = fmt.Sprintf("Voters: %v\n", bh.Voters)
buffer.WriteString(str)
str = fmt.Sprintf("FreshStake: %v\n", bh.FreshStake)
buffer.WriteString(str)
str = fmt.Sprintf("Revocations: %v\n", bh.Revocations)
buffer.WriteString(str)
str = fmt.Sprintf("PoolSize: %v\n", bh.PoolSize)
buffer.WriteString(str)
str = fmt.Sprintf("Timestamp: %v\n", bh.Timestamp)
buffer.WriteString(str)
bitsBig := CompactToBig(bh.Bits)
if bitsBig.Cmp(bigZero) != 0 {
bitsBig.Div(chainParams.PowLimit, bitsBig)
}
diff := bitsBig.Int64()
str = fmt.Sprintf("Bits: %v (Difficulty: %v)\n", bh.Bits, diff)
buffer.WriteString(str)
str = fmt.Sprintf("SBits: %v (In coins: %v)\n", bh.SBits,
float64(bh.SBits)/dcrutil.AtomsPerCoin)
buffer.WriteString(str)
str = fmt.Sprintf("Nonce: %v \n", bh.Nonce)
buffer.WriteString(str)
str = fmt.Sprintf("Height: %v \n", bh.Height)
buffer.WriteString(str)
str = fmt.Sprintf("Size: %v \n", bh.Size)
buffer.WriteString(str)
return buffer.String()
}
// DebugBlockString dumps a verbose message containing information about
// the transactions of a block.
func DebugBlockString(block *dcrutil.Block) string {
if block == nil {
return "block pointer nil"
}
var buffer bytes.Buffer
hash := block.Sha()
str := fmt.Sprintf("Block Header: %v Height: %v \n",
hash, block.Height())
buffer.WriteString(str)
str = fmt.Sprintf("Block contains %v regular transactions "+
"and %v stake transactions \n",
len(block.Transactions()),
len(block.STransactions()))
buffer.WriteString(str)
str = fmt.Sprintf("List of regular transactions \n")
buffer.WriteString(str)
for i, tx := range block.Transactions() {
str = fmt.Sprintf("Index: %v, Hash: %v \n", i, tx.Sha())
buffer.WriteString(str)
}
if len(block.STransactions()) == 0 {
return buffer.String()
}
str = fmt.Sprintf("List of stake transactions \n")
buffer.WriteString(str)
for i, stx := range block.STransactions() {
txTypeStr := ""
txType := stake.DetermineTxType(stx)
switch txType {
case stake.TxTypeSStx:
txTypeStr = "SStx"
case stake.TxTypeSSGen:
txTypeStr = "SSGen"
case stake.TxTypeSSRtx:
txTypeStr = "SSRtx"
default:
txTypeStr = "Error"
}
str = fmt.Sprintf("Index: %v, Type: %v, Hash: %v \n",
i, txTypeStr, stx.Sha())
buffer.WriteString(str)
}
return buffer.String()
}
// DebugMsgTxString dumps a verbose message containing information about the
// contents of a transaction.
func DebugMsgTxString(msgTx *wire.MsgTx) string {
tx := dcrutil.NewTx(msgTx)
isSStx, _ := stake.IsSStx(tx)
isSSGen, _ := stake.IsSSGen(tx)
var sstxType []bool
var sstxPkhs [][]byte
var sstxAmts []int64
var sstxRules [][]bool
var sstxLimits [][]uint16
if isSStx {
sstxType, sstxPkhs, sstxAmts, _, sstxRules, sstxLimits =
stake.GetSStxStakeOutputInfo(tx)
}
var buffer bytes.Buffer
hash := msgTx.TxSha()
str := fmt.Sprintf("Transaction hash: %v, Version %v, Locktime: %v, "+
"Expiry %v\n\n", hash, msgTx.Version, msgTx.LockTime, msgTx.Expiry)
buffer.WriteString(str)
str = fmt.Sprintf("==INPUTS==\nNumber of inputs: %v\n\n",
len(msgTx.TxIn))
buffer.WriteString(str)
for i, input := range msgTx.TxIn {
str = fmt.Sprintf("Input number: %v\n", i)
buffer.WriteString(str)
str = fmt.Sprintf("Previous outpoint hash: %v, ",
input.PreviousOutPoint.Hash)
buffer.WriteString(str)
str = fmt.Sprintf("Previous outpoint index: %v, ",
input.PreviousOutPoint.Index)
buffer.WriteString(str)
str = fmt.Sprintf("Previous outpoint tree: %v \n",
input.PreviousOutPoint.Tree)
buffer.WriteString(str)
str = fmt.Sprintf("Sequence: %v \n",
input.Sequence)
buffer.WriteString(str)
str = fmt.Sprintf("ValueIn: %v \n",
input.ValueIn)
buffer.WriteString(str)
str = fmt.Sprintf("BlockHeight: %v \n",
input.BlockHeight)
buffer.WriteString(str)
str = fmt.Sprintf("BlockIndex: %v \n",
input.BlockIndex)
buffer.WriteString(str)
str = fmt.Sprintf("Raw signature script: %x \n", input.SignatureScript)
buffer.WriteString(str)
sigScr, _ := txscript.DisasmString(input.SignatureScript)
str = fmt.Sprintf("Disasmed signature script: %v \n\n",
sigScr)
buffer.WriteString(str)
}
str = fmt.Sprintf("==OUTPUTS==\nNumber of outputs: %v\n\n",
len(msgTx.TxOut))
buffer.WriteString(str)
for i, output := range msgTx.TxOut {
str = fmt.Sprintf("Output number: %v\n", i)
buffer.WriteString(str)
coins := float64(output.Value) / 1e8
str = fmt.Sprintf("Output amount: %v atoms or %v coins\n", output.Value,
coins)
buffer.WriteString(str)
// SStx OP_RETURNs, dump pkhs and amts committed
if isSStx && i != 0 && i%2 == 1 {
coins := float64(sstxAmts[i/2]) / 1e8
str = fmt.Sprintf("SStx commit amount: %v atoms or %v coins\n",
sstxAmts[i/2], coins)
buffer.WriteString(str)
str = fmt.Sprintf("SStx commit address: %x\n",
sstxPkhs[i/2])
buffer.WriteString(str)
str = fmt.Sprintf("SStx address type is P2SH: %v\n",
sstxType[i/2])
buffer.WriteString(str)
str = fmt.Sprintf("SStx all address types is P2SH: %v\n",
sstxType)
buffer.WriteString(str)
str = fmt.Sprintf("Voting is fee limited: %v\n",
sstxRules[i/2][0])
buffer.WriteString(str)
if sstxRules[i/2][0] {
str = fmt.Sprintf("Voting limit imposed: %v\n",
sstxLimits[i/2][0])
buffer.WriteString(str)
}
str = fmt.Sprintf("Revoking is fee limited: %v\n",
sstxRules[i/2][1])
buffer.WriteString(str)
if sstxRules[i/2][1] {
str = fmt.Sprintf("Revoking limit imposed: %v\n",
sstxLimits[i/2][1])
buffer.WriteString(str)
}
}
// SSGen block/block height OP_RETURN.
if isSSGen && i == 0 {
blkHash, blkHeight, _ := stake.GetSSGenBlockVotedOn(tx)
str = fmt.Sprintf("SSGen block hash voted on: %v, height: %v\n",
blkHash, blkHeight)
buffer.WriteString(str)
}
if isSSGen && i == 1 {
vb := stake.GetSSGenVoteBits(tx)
str = fmt.Sprintf("SSGen vote bits: %v\n", vb)
buffer.WriteString(str)
}
str = fmt.Sprintf("Raw script: %x \n", output.PkScript)
buffer.WriteString(str)
scr, _ := txscript.DisasmString(output.PkScript)
str = fmt.Sprintf("Disasmed script: %v \n\n", scr)
buffer.WriteString(str)
}
return buffer.String()
}
// DebugTicketDataString writes the contents of a ticket data struct
// as a string.
func DebugTicketDataString(td *stake.TicketData) string {
var buffer bytes.Buffer
str := fmt.Sprintf("SStxHash: %v\n", td.SStxHash)
buffer.WriteString(str)
str = fmt.Sprintf("SpendHash: %v\n", td.SpendHash)
buffer.WriteString(str)
str = fmt.Sprintf("BlockHeight: %v\n", td.BlockHeight)
buffer.WriteString(str)
str = fmt.Sprintf("Prefix: %v\n", td.Prefix)
buffer.WriteString(str)
str = fmt.Sprintf("Missed: %v\n", td.Missed)
buffer.WriteString(str)
str = fmt.Sprintf("Expired: %v\n", td.Expired)
buffer.WriteString(str)
return buffer.String()
}
// DebugTicketDBLiveString prints out the number of tickets in each
// bucket of the ticket database as a string.
func DebugTicketDBLiveString(tmdb *stake.TicketDB, chainParams *chaincfg.Params) (string, error) {
var buffer bytes.Buffer
buffer.WriteString("\n")
for i := 0; i < stake.BucketsSize; i++ {
bucketTickets, err := tmdb.DumpLiveTickets(uint8(i))
if err != nil {
return "", err
}
str := fmt.Sprintf("%v: %v\t", i, len(bucketTickets))
buffer.WriteString(str)
// Add newlines.
if (i+1)%4 == 0 {
buffer.WriteString("\n")
}
}
return buffer.String(), nil
}
// DebugTicketDBLiveBucketString returns a string containing the ticket hashes
// found in a specific bucket of the live ticket database. If the verbose flag
// is called, it dumps the contents of the ticket data as well.
func DebugTicketDBLiveBucketString(tmdb *stake.TicketDB, bucket uint8, verbose bool) (string, error) {
var buffer bytes.Buffer
str := fmt.Sprintf("Contents of live ticket bucket %v:\n", bucket)
buffer.WriteString(str)
bucketTickets, err := tmdb.DumpLiveTickets(bucket)
if err != nil {
return "", err
}
for hash, td := range bucketTickets {
str = fmt.Sprintf("%v\n", hash)
buffer.WriteString(str)
if verbose {
str = fmt.Sprintf("%v\n", DebugTicketDataString(td))
buffer.WriteString(str)
}
}
return buffer.String(), nil
}
// DebugTicketDBSpentBucketString prints the contents of the spent tickets
// database bucket indicated to a string that is returned. If the verbose
// flag is indicated, the contents of each ticket are printed as well.
func DebugTicketDBSpentBucketString(tmdb *stake.TicketDB, height int64, verbose bool) (string, error) {
var buffer bytes.Buffer
str := fmt.Sprintf("Contents of spent ticket bucket height %v:\n", height)
buffer.WriteString(str)
bucketTickets, err := tmdb.DumpSpentTickets(height)
if err != nil {
return "", err
}
for hash, td := range bucketTickets {
missedStr := ""
if td.Missed {
missedStr = "Missed"
} else {
missedStr = "Spent"
}
str = fmt.Sprintf("%v (%v)\n", hash, missedStr)
buffer.WriteString(str)
if verbose {
str = fmt.Sprintf("%v\n", DebugTicketDataString(td))
buffer.WriteString(str)
}
}
return buffer.String(), nil
}
// DebugTicketDBMissedString prints out the contents of the missed ticket
// database to a string. If verbose is indicated, the ticket data itself
// is printed along with the ticket hashes.
func DebugTicketDBMissedString(tmdb *stake.TicketDB, verbose bool) (string, error) {
var buffer bytes.Buffer
str := fmt.Sprintf("Contents of missed ticket database:\n")
buffer.WriteString(str)
bucketTickets, err := tmdb.DumpMissedTickets()
if err != nil {
return "", err
}
for hash, td := range bucketTickets {
str = fmt.Sprintf("%v\n", hash)
buffer.WriteString(str)
if verbose {
str = fmt.Sprintf("%v\n", DebugTicketDataString(td))
buffer.WriteString(str)
}
}
return buffer.String(), nil
}
// writeTicketDataToBuf writes some ticket data into a buffer as serialized
// data.
func writeTicketDataToBuf(buf *bytes.Buffer, td *stake.TicketData) {
buf.Write(td.SStxHash[:])
buf.Write(td.SpendHash[:])
// OK for our purposes.
b := make([]byte, 8)
binary.LittleEndian.PutUint64(b, uint64(td.BlockHeight))
buf.Write(b)
buf.Write([]byte{byte(td.Prefix)})
if td.Missed {
buf.Write([]byte{0x01})
} else {
buf.Write([]byte{0x00})
}
if td.Expired {
buf.Write([]byte{0x01})
} else {
buf.Write([]byte{0x00})
}
}
// DebugTxStoreData returns a string containing information about the data
// stored in the given TxStore.
func DebugTxStoreData(txs TxStore) string {
if txs == nil {
return ""
}
var buffer bytes.Buffer
for _, txd := range txs {
str := fmt.Sprintf("Hash: %v\n", txd.Hash)
buffer.WriteString(str)
str = fmt.Sprintf("Height: %v\n", txd.BlockHeight)
buffer.WriteString(str)
str = fmt.Sprintf("Tx: %v\n", txd.Tx)
buffer.WriteString(str)
str = fmt.Sprintf("Spent: %v\n", txd.Spent)
buffer.WriteString(str)
str = fmt.Sprintf("Err: %v\n\n", txd.Err)
buffer.WriteString(str)
}
return buffer.String()
}
// TicketDbThumbprint takes all the tickets in the respective ticket db,
// sorts them, hashes their contents into a list, and then hashes that list.
// The resultant hash is the thumbprint of the ticket database, and should
// be the same across all clients that are synced to the same block. It
// returns a slice of three hashes containing (1) live tickets, (2) spent
// tickets, and (3) missed tickets.
// Do NOT use on mainnet or in production. For debug use only! Make sure
// the blockchain is frozen when you call this function.
func TicketDbThumbprint(tmdb *stake.TicketDB, chainParams *chaincfg.Params) ([]*chainhash.Hash, error) {
// Container for the three master hashes to go into.
dbThumbprints := make([]*chainhash.Hash, 3, 3)
// (1) Live tickets.
allLiveTickets := stake.NewTicketDataSliceEmpty()
for i := 0; i < stake.BucketsSize; i++ {
bucketTickets, err := tmdb.DumpLiveTickets(uint8(i))
if err != nil {
return nil, err
}
for _, td := range bucketTickets {
allLiveTickets = append(allLiveTickets, td)
}
}
// Sort by the number data hash, since we already have this implemented
// and it's also unique.
sort.Sort(allLiveTickets)
// Create a buffer, dump all the data into it, and hash.
var buf bytes.Buffer
for _, td := range allLiveTickets {
writeTicketDataToBuf(&buf, td)
}
liveHash := chainhash.HashFunc(buf.Bytes())
liveThumbprint, err := chainhash.NewHash(liveHash[:])
if err != nil {
return nil, err
}
dbThumbprints[0] = liveThumbprint
// (2) Spent tickets.
height := tmdb.GetTopBlock()
allSpentTickets := stake.NewTicketDataSliceEmpty()
for i := int64(chainParams.StakeEnabledHeight); i <= height; i++ {
bucketTickets, err := tmdb.DumpSpentTickets(i)
if err != nil {
return nil, err
}
for _, td := range bucketTickets {
allSpentTickets = append(allSpentTickets, td)
}
}
sort.Sort(allSpentTickets)
buf.Reset() // Flush buffer
for _, td := range allSpentTickets {
writeTicketDataToBuf(&buf, td)
}
spentHash := chainhash.HashFunc(buf.Bytes())
spentThumbprint, err := chainhash.NewHash(spentHash[:])
if err != nil {
return nil, err
}
dbThumbprints[1] = spentThumbprint
// (3) Missed tickets.
allMissedTickets := stake.NewTicketDataSliceEmpty()
missedTickets, err := tmdb.DumpMissedTickets()
if err != nil {
return nil, err
}
for _, td := range missedTickets {
allMissedTickets = append(allMissedTickets, td)
}
sort.Sort(allMissedTickets)
buf.Reset() // Flush buffer
for _, td := range allMissedTickets {
writeTicketDataToBuf(&buf, td)
}
missedHash := chainhash.HashFunc(buf.Bytes())
missedThumbprint, err := chainhash.NewHash(missedHash[:])
if err != nil {
return nil, err
}
dbThumbprints[2] = missedThumbprint
return dbThumbprints, nil
}
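The sort-then-hash approach above (serialize records in a canonical sorted order into one buffer, then hash the buffer) can be sketched generically; `thumbprint` and the string records are hypothetical names for illustration, with SHA-256 standing in for the chain's hash function.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// thumbprint hashes records in canonical (sorted) order, so two databases
// holding the same contents produce the same hash regardless of the order
// in which their records are iterated.
func thumbprint(records []string) [32]byte {
	sorted := append([]string(nil), records...)
	sort.Strings(sorted)
	h := sha256.New()
	for _, r := range sorted {
		h.Write([]byte(r))
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	a := thumbprint([]string{"ticket-b", "ticket-a"})
	b := thumbprint([]string{"ticket-a", "ticket-b"})
	fmt.Println(a == b) // order-independent: true
}
```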
// findWhereDoubleSpent determines where a transaction was previously
// double spent. VERY INTENSIVE BLOCKCHAIN SCANNING, USE TO DEBUG
// SIMULATED BLOCKCHAINS ONLY.
func (b *BlockChain) findWhereDoubleSpent(block *dcrutil.Block) error {
height := int64(1)
heightEnd := block.Height()
hashes, err := b.db.FetchHeightRange(height, heightEnd)
if err != nil {
return err
}
var allTxs []*dcrutil.Tx
txs := block.Transactions()[1:]
stxs := block.STransactions()
allTxs = append(txs, stxs...)
for _, hash := range hashes {
curBlock, err := b.getBlockFromHash(&hash)
if err != nil {
return err
}
log.Errorf("Cur block %v", curBlock.Height())
for _, localTx := range allTxs {
for _, localTxIn := range localTx.MsgTx().TxIn {
for _, tx := range curBlock.Transactions()[1:] {
for _, txIn := range tx.MsgTx().TxIn {
if txIn.PreviousOutPoint == localTxIn.PreviousOutPoint {
log.Errorf("Double spend of {hash: %v, idx: %v,"+
" tree: %b}, previously found in tx %v "+
"of block %v txtree regular",
txIn.PreviousOutPoint.Hash,
txIn.PreviousOutPoint.Index,
txIn.PreviousOutPoint.Tree,
tx.Sha(),
hash)
}
}
}
for _, tx := range curBlock.STransactions() {
for _, txIn := range tx.MsgTx().TxIn {
if txIn.PreviousOutPoint == localTxIn.PreviousOutPoint {
log.Errorf("Double spend of {hash: %v, idx: %v,"+
" tree: %b}, previously found in tx %v "+
"of block %v txtree stake\n",
txIn.PreviousOutPoint.Hash,
txIn.PreviousOutPoint.Index,
txIn.PreviousOutPoint.Tree,
tx.Sha(),
hash)
}
}
}
}
}
}
for _, localTx := range stxs {
for _, localTxIn := range localTx.MsgTx().TxIn {
for _, tx := range txs {
for _, txIn := range tx.MsgTx().TxIn {
if txIn.PreviousOutPoint == localTxIn.PreviousOutPoint {
log.Errorf("Double spend of {hash: %v, idx: %v,"+
" tree: %b}, previously found in tx %v "+
"of cur block stake txtree\n",
txIn.PreviousOutPoint.Hash,
txIn.PreviousOutPoint.Index,
txIn.PreviousOutPoint.Tree,
tx.Sha())
}
}
}
}
}
return nil
}


@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -13,17 +14,18 @@ import (
"path/filepath"
"strings"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/database"
_ "github.com/btcsuite/btcd/database/ldb"
_ "github.com/btcsuite/btcd/database/memdb"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
_ "github.com/decred/dcrd/database/memdb"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
// testDbType is the database backend type to use for the tests.
const testDbType = "memdb"
const testDbType = "leveldb"
// testDbRoot is the root directory used to create all test databases.
const testDbRoot = "testdbs"
@ -54,7 +56,7 @@ func isSupportedDbType(dbType string) bool {
// chainSetup is used to create a new db and chain instance with the genesis
// block already inserted. In addition to the new chain instance, it returns
// a teardown function the caller should invoke when done testing to clean up.
func chainSetup(dbName string) (*blockchain.BlockChain, func(), error) {
func chainSetup(dbName string, params *chaincfg.Params) (*blockchain.BlockChain, func(), error) {
if !isSupportedDbType(testDbType) {
return nil, nil, fmt.Errorf("unsupported db type %v", testDbType)
}
@ -62,6 +64,8 @@ func chainSetup(dbName string) (*blockchain.BlockChain, func(), error) {
// Handle memory database specially since it doesn't need the disk
// specific handling.
var db database.Db
tmdb := new(stake.TicketDB)
var teardown func()
if testDbType == "memdb" {
ndb, err := database.CreateDB(testDbType)
@ -73,6 +77,7 @@ func chainSetup(dbName string) (*blockchain.BlockChain, func(), error) {
// Setup a teardown function for cleaning up. This function is
// returned to the caller to be invoked when it is done testing.
teardown = func() {
tmdb.Close()
db.Close()
}
} else {
@ -98,6 +103,7 @@ func chainSetup(dbName string) (*blockchain.BlockChain, func(), error) {
// returned to the caller to be invoked when it is done testing.
teardown = func() {
dbVersionPath := filepath.Join(testDbRoot, dbName+".ver")
tmdb.Close()
db.Sync()
db.Close()
os.RemoveAll(dbPath)
@ -108,7 +114,8 @@ func chainSetup(dbName string) (*blockchain.BlockChain, func(), error) {
// Insert the main network genesis block. This is part of the initial
// database setup.
genesisBlock := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
genesisBlock := dcrutil.NewBlock(params.GenesisBlock)
genesisBlock.SetHeight(int64(0))
_, err := db.InsertBlock(genesisBlock)
if err != nil {
teardown()
@ -116,7 +123,11 @@ func chainSetup(dbName string) (*blockchain.BlockChain, func(), error) {
return nil, nil, err
}
chain := blockchain.New(db, &chaincfg.MainNetParams, nil)
// Start the ticket database.
tmdb.Initialize(params, db)
tmdb.RescanTicketDB()
chain := blockchain.New(db, tmdb, params, nil)
return chain, teardown, nil
}
@ -173,7 +184,7 @@ func loadTxStore(filename string) (blockchain.TxStore, error) {
if err != nil {
return nil, err
}
txD.Tx = btcutil.NewTx(&msgTx)
txD.Tx = dcrutil.NewTx(&msgTx)
// Transaction hash.
txHash := msgTx.TxSha()


@ -1,49 +1,22 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"math/big"
"time"
"github.com/btcsuite/btcd/wire"
)
const (
// targetTimespan is the desired amount of time that should elapse
// before block difficulty requirement is examined to determine how
// it should be changed in order to maintain the desired block
// generation rate.
targetTimespan = time.Hour * 24 * 14
// targetSpacing is the desired amount of time to generate each block.
targetSpacing = time.Minute * 10
// BlocksPerRetarget is the number of blocks between each difficulty
// retarget. It is calculated based on the desired block generation
// rate.
BlocksPerRetarget = int64(targetTimespan / targetSpacing)
// retargetAdjustmentFactor is the adjustment factor used to limit
// the minimum and maximum amount of adjustment that can occur between
// difficulty retargets.
retargetAdjustmentFactor = 4
// minRetargetTimespan is the minimum amount of adjustment that can
// occur between difficulty retargets. It equates to 25% of the
// previous difficulty.
minRetargetTimespan = int64(targetTimespan / retargetAdjustmentFactor)
// maxRetargetTimespan is the maximum amount of adjustment that can
// occur between difficulty retargets. It equates to 400% of the
// previous difficulty.
maxRetargetTimespan = int64(targetTimespan * retargetAdjustmentFactor)
"github.com/decred/dcrd/chaincfg/chainhash"
)
var (
// bigZero is 0 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigZero = big.NewInt(0)
// bigOne is 1 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigOne = big.NewInt(1)
@ -51,11 +24,15 @@ var (
// oneLsh256 is 1 shifted left 256 bits. It is defined here to avoid
// the overhead of creating it multiple times.
oneLsh256 = new(big.Int).Lsh(bigOne, 256)
// maxShift is the maximum shift for a difficulty that resets (e.g.
// testnet difficulty).
maxShift = uint(256)
)
// ShaHashToBig converts a wire.ShaHash into a big.Int that can be used to
// perform math comparisons.
func ShaHashToBig(hash *wire.ShaHash) *big.Int {
func ShaHashToBig(hash *chainhash.Hash) *big.Int {
// A ShaHash is in little-endian, but the big package wants the bytes
// in big-endian, so reverse them.
buf := *hash
@ -87,7 +64,7 @@ func ShaHashToBig(hash *wire.ShaHash) *big.Int {
// The formula to calculate N is:
// N = (-1^sign) * mantissa * 256^(exponent-3)
//
// This compact form is only used in bitcoin to encode unsigned 256-bit numbers
// This compact form is only used in decred to encode unsigned 256-bit numbers
// which represent difficulty targets, thus there really is not a need for a
// sign bit, but it is implemented here to stay consistent with bitcoind.
func CompactToBig(compact uint32) *big.Int {
@ -160,7 +137,7 @@ func BigToCompact(n *big.Int) uint32 {
return compact
}
// CalcWork calculates a work value from difficulty bits. Bitcoin increases
// CalcWork calculates a work value from difficulty bits. Decred increases
// the difficulty for generating a block by decreasing the value which the
// generated hash must be less than. This difficulty target is stored in each
block header using a compact representation as described in the documentation
@ -188,16 +165,19 @@ func CalcWork(bits uint32) *big.Int {
// can have given starting difficulty bits and a duration. It is mainly used to
// verify that claimed proof of work by a block is sane as compared to a
// known good checkpoint.
func (b *BlockChain) calcEasiestDifficulty(bits uint32, duration time.Duration) uint32 {
func (b *BlockChain) calcEasiestDifficulty(bits uint32,
duration time.Duration) uint32 {
// Convert types used in the calculations below.
durationVal := int64(duration)
adjustmentFactor := big.NewInt(retargetAdjustmentFactor)
adjustmentFactor := big.NewInt(b.chainParams.RetargetAdjustmentFactor)
maxRetargetTimespan := int64(b.chainParams.TargetTimespan) *
b.chainParams.RetargetAdjustmentFactor
// The test network rules allow minimum difficulty blocks after more
// than twice the desired amount of time needed to generate a block has
// elapsed.
if b.chainParams.ResetMinDifficulty {
if durationVal > int64(targetSpacing)*2 {
if durationVal > int64(b.chainParams.TimePerBlock)*2 {
return b.chainParams.PowLimitBits
}
}
@ -222,11 +202,14 @@ func (b *BlockChain) calcEasiestDifficulty(bits uint32, duration time.Duration)
// findPrevTestNetDifficulty returns the difficulty of the previous block which
// did not have the special testnet minimum difficulty rule applied.
func (b *BlockChain) findPrevTestNetDifficulty(startNode *blockNode) (uint32, error) {
func (b *BlockChain) findPrevTestNetDifficulty(startNode *blockNode) (uint32,
error) {
// Search backwards through the chain for the last block without
// the special rule applied.
blocksPerRetarget := b.chainParams.WorkDiffWindowSize *
b.chainParams.WorkDiffWindows
iterNode := startNode
for iterNode != nil && iterNode.height%BlocksPerRetarget != 0 &&
for iterNode != nil && iterNode.height%blocksPerRetarget != 0 &&
iterNode.bits == b.chainParams.PowLimitBits {
// Get the previous block node. This function is used over
@ -256,15 +239,20 @@ func (b *BlockChain) findPrevTestNetDifficulty(startNode *blockNode) (uint32, er
// This function differs from the exported CalcNextRequiredDifficulty in that
// the exported version uses the current best chain as the previous block node
// while this function accepts any block node.
func (b *BlockChain) calcNextRequiredDifficulty(lastNode *blockNode, newBlockTime time.Time) (uint32, error) {
func (b *BlockChain) calcNextRequiredDifficulty(curNode *blockNode,
newBlockTime time.Time) (uint32, error) {
// Genesis block.
if lastNode == nil {
if curNode == nil {
return b.chainParams.PowLimitBits, nil
}
// Return the previous block's difficulty requirements if this block
// is not at a difficulty retarget interval.
if (lastNode.height+1)%BlocksPerRetarget != 0 {
// Get the old difficulty; if we aren't at a block height where it changes,
// just return this.
oldDiff := curNode.header.Bits
oldDiffBig := CompactToBig(curNode.header.Bits)
// We're not at a retarget point, return the oldDiff.
if (curNode.height+1)%b.chainParams.WorkDiffWindowSize != 0 {
// The test network rules allow minimum difficulty blocks after
// more than twice the desired amount of time needed to generate
// a block has elapsed.
@ -272,83 +260,185 @@ func (b *BlockChain) calcNextRequiredDifficulty(lastNode *blockNode, newBlockTim
// Return minimum difficulty when more than twice the
// desired amount of time needed to generate a block has
// elapsed.
allowMinTime := lastNode.timestamp.Add(targetSpacing * 2)
allowMinTime := curNode.timestamp.Add(b.chainParams.TimePerBlock *
b.chainParams.MinDiffResetTimeFactor)
// For every extra target timespan that passes, we halve the
// difficulty.
if newBlockTime.After(allowMinTime) {
return b.chainParams.PowLimitBits, nil
timePassed := newBlockTime.Sub(curNode.timestamp)
timePassed -= (b.chainParams.TimePerBlock *
b.chainParams.MinDiffResetTimeFactor)
shifts := uint((timePassed / b.chainParams.TimePerBlock) + 1)
// Scale the difficulty with time passed.
oldTarget := CompactToBig(curNode.header.Bits)
newTarget := new(big.Int)
if shifts < maxShift {
newTarget.Lsh(oldTarget, shifts)
} else {
newTarget.Set(oneLsh256)
}
// Limit new value to the proof of work limit.
if newTarget.Cmp(b.chainParams.PowLimit) > 0 {
newTarget.Set(b.chainParams.PowLimit)
}
return BigToCompact(newTarget), nil
}
// The block was mined within the desired timeframe, so
// return the difficulty for the last block which did
// not have the special minimum difficulty rule applied.
prevBits, err := b.findPrevTestNetDifficulty(lastNode)
prevBits, err := b.findPrevTestNetDifficulty(curNode)
if err != nil {
return 0, err
}
return prevBits, nil
}
// For the main network (or any unrecognized networks), simply
// return the previous block's difficulty requirements.
return lastNode.bits, nil
return oldDiff, nil
}
// Get the block node at the previous retarget (targetTimespan days
// worth of blocks).
firstNode := lastNode
for i := int64(0); i < BlocksPerRetarget-1 && firstNode != nil; i++ {
// Get the previous block node. This function is used over
// simply accessing firstNode.parent directly as it will
// dynamically create previous block nodes as needed. This
// helps allow only the pieces of the chain that are needed
// to remain in memory.
// Declare some useful variables.
RAFBig := big.NewInt(b.chainParams.RetargetAdjustmentFactor)
nextDiffBigMin := CompactToBig(curNode.header.Bits)
nextDiffBigMin.Div(nextDiffBigMin, RAFBig)
nextDiffBigMax := CompactToBig(curNode.header.Bits)
nextDiffBigMax.Mul(nextDiffBigMax, RAFBig)
alpha := b.chainParams.WorkDiffAlpha
// Number of nodes to traverse while calculating difficulty.
nodesToTraverse := (b.chainParams.WorkDiffWindowSize *
b.chainParams.WorkDiffWindows)
// Initialize bigInt slice for the percentage changes for each window period
// above or below the target.
windowChanges := make([]*big.Int, b.chainParams.WorkDiffWindows)
// Regress through all of the previous blocks and store the percent changes
// per window period; use bigInts to emulate 64.32 bit fixed point.
oldNode := curNode
windowPeriod := int64(0)
weights := uint64(0)
recentTime := curNode.header.Timestamp.UnixNano()
olderTime := int64(0)
for i := int64(0); ; i++ {
// Store and reset after reaching the end of every window period.
if i%b.chainParams.WorkDiffWindowSize == 0 && i != 0 {
olderTime = oldNode.header.Timestamp.UnixNano()
timeDifference := recentTime - olderTime
// Just assume we're at the target (no change) if we've
// gone all the way back to the genesis block.
if oldNode.height == 0 {
timeDifference = int64(b.chainParams.TargetTimespan)
}
timeDifBig := big.NewInt(timeDifference)
timeDifBig.Lsh(timeDifBig, 32) // Add padding
targetTemp := big.NewInt(int64(b.chainParams.TargetTimespan))
windowAdjusted := targetTemp.Div(timeDifBig, targetTemp)
// Weight it exponentially. Be aware that this could at some point
// overflow if alpha or the number of blocks used is really large.
windowAdjusted = windowAdjusted.Lsh(windowAdjusted,
uint((b.chainParams.WorkDiffWindows-windowPeriod)*alpha))
// Sum up all the different weights incrementally.
weights += 1 << uint64((b.chainParams.WorkDiffWindows-windowPeriod)*
alpha)
// Store it in the slice.
windowChanges[windowPeriod] = windowAdjusted
windowPeriod++
recentTime = olderTime
}
if i == nodesToTraverse {
break // Exit for loop when we hit the end.
}
// Get the previous block node.
var err error
firstNode, err = b.getPrevNodeFromNode(firstNode)
tempNode := oldNode
oldNode, err = b.getPrevNodeFromNode(oldNode)
if err != nil {
return 0, err
}
// If we're at the genesis block, reset the oldNode
// so that it stays at the genesis block.
if oldNode == nil {
oldNode = tempNode
}
}
if firstNode == nil {
return 0, fmt.Errorf("unable to obtain previous retarget block")
// Sum up the weighted window periods.
weightedSum := big.NewInt(0)
for i := int64(0); i < b.chainParams.WorkDiffWindows; i++ {
weightedSum.Add(weightedSum, windowChanges[i])
}
// Limit the amount of adjustment that can occur to the previous
// difficulty.
actualTimespan := lastNode.timestamp.UnixNano() - firstNode.timestamp.UnixNano()
adjustedTimespan := actualTimespan
if actualTimespan < minRetargetTimespan {
adjustedTimespan = minRetargetTimespan
} else if actualTimespan > maxRetargetTimespan {
adjustedTimespan = maxRetargetTimespan
}
// Divide by the sum of all weights.
weightsBig := big.NewInt(int64(weights))
weightedSumDiv := weightedSum.Div(weightedSum, weightsBig)
// Calculate new target difficulty as:
// currentDifficulty * (adjustedTimespan / targetTimespan)
// The result uses integer division which means it will be slightly
// rounded down. Bitcoind also uses integer division to calculate this
// result.
oldTarget := CompactToBig(lastNode.bits)
newTarget := new(big.Int).Mul(oldTarget, big.NewInt(adjustedTimespan))
newTarget.Div(newTarget, big.NewInt(int64(targetTimespan)))
// Multiply by the old diff.
nextDiffBig := weightedSumDiv.Mul(weightedSumDiv, oldDiffBig)
// Right shift to restore the original padding (restore non-fixed point).
nextDiffBig = nextDiffBig.Rsh(nextDiffBig, 32)
// Check to see if we're over the limits for the maximum allowable retarget;
// if we are, return the maximum or minimum except in the case that oldDiff
// is zero.
if oldDiffBig.Cmp(bigZero) == 0 { // This should never really happen,
nextDiffBig.Set(nextDiffBig) // but in case it does...
} else if nextDiffBig.Cmp(bigZero) == 0 {
nextDiffBig.Set(b.chainParams.PowLimit)
} else if nextDiffBig.Cmp(nextDiffBigMax) == 1 {
nextDiffBig.Set(nextDiffBigMax)
} else if nextDiffBig.Cmp(nextDiffBigMin) == -1 {
nextDiffBig.Set(nextDiffBigMin)
}
// Limit new value to the proof of work limit.
if newTarget.Cmp(b.chainParams.PowLimit) > 0 {
newTarget.Set(b.chainParams.PowLimit)
if nextDiffBig.Cmp(b.chainParams.PowLimit) > 0 {
nextDiffBig.Set(b.chainParams.PowLimit)
}
// Log new target difficulty and return it. The new target logging is
// intentionally converting the bits back to a number instead of using
// newTarget since conversion to the compact representation loses
// precision.
newTargetBits := BigToCompact(newTarget)
log.Debugf("Difficulty retarget at block height %d", lastNode.height+1)
log.Debugf("Old target %08x (%064x)", lastNode.bits, oldTarget)
log.Debugf("New target %08x (%064x)", newTargetBits, CompactToBig(newTargetBits))
log.Debugf("Actual timespan %v, adjusted timespan %v, target timespan %v",
time.Duration(actualTimespan), time.Duration(adjustedTimespan),
targetTimespan)
nextDiffBits := BigToCompact(nextDiffBig)
log.Debugf("Difficulty retarget at block height %d", curNode.height+1)
log.Debugf("Old target %08x (%064x)", curNode.header.Bits, oldDiffBig)
log.Debugf("New target %08x (%064x)", nextDiffBits, CompactToBig(nextDiffBits))
return newTargetBits, nil
return nextDiffBits, nil
}
// CalcNextRequiredDiffFromNode calculates the required difficulty for the block
// given with the passed hash along with the given timestamp.
//
// This function is NOT safe for concurrent access.
func (b *BlockChain) CalcNextRequiredDiffFromNode(hash *chainhash.Hash,
timestamp time.Time) (uint32, error) {
// Fetch the block to get the difficulty for.
node, err := b.findNode(hash)
if err != nil {
return 0, err
}
return b.calcNextRequiredDifficulty(node, timestamp)
}
// CalcNextRequiredDifficulty calculates the required difficulty for the block
@ -356,6 +446,297 @@ func (b *BlockChain) calcNextRequiredDifficulty(lastNode *blockNode, newBlockTim
// rules.
//
// This function is NOT safe for concurrent access.
func (b *BlockChain) CalcNextRequiredDifficulty(timestamp time.Time) (uint32, error) {
func (b *BlockChain) CalcNextRequiredDifficulty(timestamp time.Time) (uint32,
error) {
return b.calcNextRequiredDifficulty(b.bestChain, timestamp)
}
// mergeDifficulty takes an original stake difficulty and two new, scaled
// stake difficulties, merges the new difficulties, and outputs a new
// merged stake difficulty.
func mergeDifficulty(oldDiff int64, newDiff1 int64, newDiff2 int64) int64 {
newDiff1Big := big.NewInt(newDiff1)
newDiff2Big := big.NewInt(newDiff2)
newDiff2Big.Lsh(newDiff2Big, 32)
oldDiffBig := big.NewInt(oldDiff)
oldDiffBigLSH := big.NewInt(oldDiff)
oldDiffBigLSH.Lsh(oldDiffBig, 32)
newDiff1Big.Div(oldDiffBigLSH, newDiff1Big)
newDiff2Big.Div(newDiff2Big, oldDiffBig)
// Combine the two changes in difficulty.
summedChange := big.NewInt(0)
summedChange.Set(newDiff2Big)
summedChange.Lsh(summedChange, 32)
summedChange.Div(summedChange, newDiff1Big)
summedChange.Mul(summedChange, oldDiffBig)
summedChange.Rsh(summedChange, 32)
return summedChange.Int64()
}
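The fixed-point arithmetic in mergeDifficulty works out to roughly newDiff1 * newDiff2 / oldDiff. A standalone copy of the routine (with hypothetical input values) makes this easy to verify:

```go
package main

import (
	"fmt"
	"math/big"
)

// mergeDifficulty reproduces the routine above verbatim so its behavior
// can be exercised in isolation: the two scaled difficulties are combined
// in 64.32 fixed point, yielding approximately newDiff1*newDiff2/oldDiff.
func mergeDifficulty(oldDiff int64, newDiff1 int64, newDiff2 int64) int64 {
	newDiff1Big := big.NewInt(newDiff1)
	newDiff2Big := big.NewInt(newDiff2)
	newDiff2Big.Lsh(newDiff2Big, 32)

	oldDiffBig := big.NewInt(oldDiff)
	oldDiffBigLSH := big.NewInt(oldDiff)
	oldDiffBigLSH.Lsh(oldDiffBig, 32)

	newDiff1Big.Div(oldDiffBigLSH, newDiff1Big)
	newDiff2Big.Div(newDiff2Big, oldDiffBig)

	// Combine the two changes in difficulty.
	summedChange := big.NewInt(0)
	summedChange.Set(newDiff2Big)
	summedChange.Lsh(summedChange, 32)
	summedChange.Div(summedChange, newDiff1Big)
	summedChange.Mul(summedChange, oldDiffBig)
	summedChange.Rsh(summedChange, 32)

	return summedChange.Int64()
}

func main() {
	// One input doubles the difficulty, the other halves it: the merge
	// cancels out and returns the original difficulty.
	fmt.Println(mergeDifficulty(100, 200, 50)) // 100
	// Both inputs scale by 1.5x: the merged result scales by 1.5*1.5.
	fmt.Println(mergeDifficulty(100, 150, 150)) // 225
}
```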
// calcNextRequiredStakeDifficulty calculates the exponentially weighted average
// and then uses it to determine the next stake difficulty.
// TODO: You can combine the first and second for loops below for a speed up
// if you'd like, I'm not sure how much it matters.
func (b *BlockChain) calcNextRequiredStakeDifficulty(curNode *blockNode) (int64,
error) {
alpha := b.chainParams.StakeDiffAlpha
stakeDiffStartHeight := int64(b.chainParams.CoinbaseMaturity) + 1
maxRetarget := int64(b.chainParams.RetargetAdjustmentFactor)
TicketPoolWeight := int64(b.chainParams.TicketPoolSizeWeight)
// Number of nodes to traverse while calculating difficulty.
nodesToTraverse := (b.chainParams.StakeDiffWindowSize *
b.chainParams.StakeDiffWindows)
// Genesis block. Block at height 1 has these parameters.
// Additionally, if we're before the time when people generally begin
// purchasing tickets, just use the MinimumStakeDiff.
// This is sort of sloppy and coded with the hopes that generally by
// stakeDiffStartHeight people will be submitting lots of SStx over the
// past nodesToTraverse many nodes. It should be okay with the default
// Decred parameters, but might do weird things if you use custom
// parameters.
if curNode == nil ||
curNode.height < stakeDiffStartHeight {
return b.chainParams.MinimumStakeDiff, nil
}
// Get the old difficulty; if we aren't at a block height where it changes,
// just return this.
oldDiff := curNode.header.SBits
if (curNode.height+1)%b.chainParams.StakeDiffWindowSize != 0 {
return oldDiff, nil
}
// The target size of the ticketPool in live tickets. Recast these as int64
// to avoid possible overflows for large sizes of either variable in
// params.
targetForTicketPool := int64(b.chainParams.TicketsPerBlock) *
int64(b.chainParams.TicketPoolSize)
// Initialize bigInt slice for the percentage changes for each window period
// above or below the target.
windowChanges := make([]*big.Int, b.chainParams.StakeDiffWindows)
// Regress through all of the previous blocks and store the percent changes
// per window period; use bigInts to emulate 64.32 bit fixed point.
oldNode := curNode
windowPeriod := int64(0)
weights := uint64(0)
for i := int64(0); ; i++ {
// Store and reset after reaching the end of every window period.
if (i+1)%b.chainParams.StakeDiffWindowSize == 0 {
// First adjust based on ticketPoolSize. Skew the difference
// in ticketPoolSize by max adjustment factor to help
// weight ticket pool size versus tickets per block.
poolSizeSkew := (int64(oldNode.header.PoolSize)-
targetForTicketPool)*TicketPoolWeight + targetForTicketPool
// Don't let this be negative or zero.
if poolSizeSkew <= 0 {
poolSizeSkew = 1
}
curPoolSizeTemp := big.NewInt(poolSizeSkew)
curPoolSizeTemp.Lsh(curPoolSizeTemp, 32) // Add padding
targetTemp := big.NewInt(targetForTicketPool)
windowAdjusted := curPoolSizeTemp.Div(curPoolSizeTemp, targetTemp)
// Weight it exponentially. Be aware that this could at some point
// overflow if alpha or the number of blocks used is really large.
windowAdjusted = windowAdjusted.Lsh(windowAdjusted,
uint((b.chainParams.StakeDiffWindows-windowPeriod)*alpha))
// Sum up all the different weights incrementally.
weights += 1 << uint64((b.chainParams.StakeDiffWindows-windowPeriod)*
alpha)
// Store it in the slice.
windowChanges[windowPeriod] = windowAdjusted
// windowFreshStake = 0
windowPeriod++
}
if (i + 1) == nodesToTraverse {
break // Exit for loop when we hit the end.
}
// Get the previous block node.
var err error
tempNode := oldNode
oldNode, err = b.getPrevNodeFromNode(oldNode)
if err != nil {
return 0, err
}
// If we're at the genesis block, reset the oldNode
// so that it stays at the genesis block.
if oldNode == nil {
oldNode = tempNode
}
}
// Sum up the weighted window periods.
weightedSum := big.NewInt(0)
for i := int64(0); i < b.chainParams.StakeDiffWindows; i++ {
weightedSum.Add(weightedSum, windowChanges[i])
}
// Divide by the sum of all weights.
weightsBig := big.NewInt(int64(weights))
weightedSumDiv := weightedSum.Div(weightedSum, weightsBig)
// Multiply by the old stake diff.
oldDiffBig := big.NewInt(oldDiff)
nextDiffBig := weightedSumDiv.Mul(weightedSumDiv, oldDiffBig)
// Right shift to restore the original padding (restore non-fixed point).
nextDiffBig = nextDiffBig.Rsh(nextDiffBig, 32)
nextDiffTicketPool := nextDiffBig.Int64()
// Check to see if we're over the limits for the maximum allowable retarget;
// if we are, return the maximum or minimum except in the case that oldDiff
// is zero.
if oldDiff == 0 { // This should never really happen, but in case it does...
return nextDiffTicketPool, nil
} else if nextDiffTicketPool == 0 {
nextDiffTicketPool = oldDiff / maxRetarget
} else if (nextDiffTicketPool / oldDiff) > (maxRetarget - 1) {
nextDiffTicketPool = oldDiff * maxRetarget
} else if (oldDiff / nextDiffTicketPool) > (maxRetarget - 1) {
nextDiffTicketPool = oldDiff / maxRetarget
}
// The target number of new SStx per block for any given window period.
targetForWindow := b.chainParams.StakeDiffWindowSize *
int64(b.chainParams.TicketsPerBlock)
// Regress through all of the previous blocks and store the percent changes
// per window period; use bigInts to emulate 64.32 bit fixed point.
oldNode = curNode
windowFreshStake := int64(0)
windowPeriod = int64(0)
weights = uint64(0)
for i := int64(0); ; i++ {
// Add the fresh stake into the store for this window period.
windowFreshStake += int64(oldNode.header.FreshStake)
// Store and reset after reaching the end of every window period.
if (i+1)%b.chainParams.StakeDiffWindowSize == 0 {
// Don't let fresh stake be zero.
if windowFreshStake <= 0 {
windowFreshStake = 1
}
freshTemp := big.NewInt(windowFreshStake)
freshTemp.Lsh(freshTemp, 32) // Add padding
targetTemp := big.NewInt(targetForWindow)
// Get the percentage change.
windowAdjusted := freshTemp.Div(freshTemp, targetTemp)
// Weight it exponentially. Be aware that this could at some point
// overflow if alpha or the number of blocks used is really large.
windowAdjusted = windowAdjusted.Lsh(windowAdjusted,
uint((b.chainParams.StakeDiffWindows-windowPeriod)*alpha))
// Sum up all the different weights incrementally.
weights += 1 <<
uint64((b.chainParams.StakeDiffWindows-windowPeriod)*alpha)
// Store it in the slice.
windowChanges[windowPeriod] = windowAdjusted
windowFreshStake = 0
windowPeriod++
}
if (i + 1) == nodesToTraverse {
break // Exit for loop when we hit the end.
}
// Get the previous block node.
var err error
tempNode := oldNode
oldNode, err = b.getPrevNodeFromNode(oldNode)
if err != nil {
return 0, err
}
// If we're at the genesis block, reset the oldNode
// so that it stays at the genesis block.
if oldNode == nil {
oldNode = tempNode
}
}
// Sum up the weighted window periods.
weightedSum = big.NewInt(0)
for i := int64(0); i < b.chainParams.StakeDiffWindows; i++ {
weightedSum.Add(weightedSum, windowChanges[i])
}
// Divide by the sum of all weights.
weightsBig = big.NewInt(int64(weights))
weightedSumDiv = weightedSum.Div(weightedSum, weightsBig)
// Multiply by the old stake diff.
oldDiffBig = big.NewInt(oldDiff)
nextDiffBig = weightedSumDiv.Mul(weightedSumDiv, oldDiffBig)
// Right shift to restore the original padding (restore non-fixed point).
nextDiffBig = nextDiffBig.Rsh(nextDiffBig, 32)
nextDiffFreshStake := nextDiffBig.Int64()
// Check to see if we're over the limits for the maximum allowable retarget;
// if we are, return the maximum or minimum except in the case that oldDiff
// is zero.
if oldDiff == 0 { // This should never really happen, but in case it does...
return nextDiffFreshStake, nil
} else if nextDiffFreshStake == 0 {
nextDiffFreshStake = oldDiff / maxRetarget
} else if (nextDiffFreshStake / oldDiff) > (maxRetarget - 1) {
nextDiffFreshStake = oldDiff * maxRetarget
} else if (oldDiff / nextDiffFreshStake) > (maxRetarget - 1) {
nextDiffFreshStake = oldDiff / maxRetarget
}
// Average the two differences using scaled multiplication.
nextDiff := mergeDifficulty(oldDiff, nextDiffTicketPool, nextDiffFreshStake)
// Check to see if we're over the limits for the maximum allowable retarget;
// if we are, return the maximum or minimum except in the case that oldDiff
// is zero.
if oldDiff == 0 { // This should never really happen, but in case it does...
return oldDiff, nil
} else if nextDiff == 0 {
nextDiff = oldDiff / maxRetarget
} else if (nextDiff / oldDiff) > (maxRetarget - 1) {
nextDiff = oldDiff * maxRetarget
} else if (oldDiff / nextDiff) > (maxRetarget - 1) {
nextDiff = oldDiff / maxRetarget
}
// If the next diff is below the network minimum, set the required stake
// difficulty to the minimum.
if nextDiff < b.chainParams.MinimumStakeDiff {
return b.chainParams.MinimumStakeDiff, nil
}
return nextDiff, nil
}
// CalcNextRequiredStakeDifficulty is the exported version of the above function.
// This function is NOT safe for concurrent access.
func (b *BlockChain) CalcNextRequiredStakeDifficulty() (int64, error) {
return b.calcNextRequiredStakeDifficulty(b.bestChain)
}
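The retarget clamp that appears three times in the stake difficulty calculation above (limit the change to a factor of maxRetarget in either direction, with a fallback for a zero result) can be factored into a small helper. This is an illustrative refactoring with a hypothetical name, not code from the package:

```go
package main

import "fmt"

// clampRetarget mirrors the limiting logic used above: the new difficulty
// may move by at most a factor of maxRetarget relative to the old one,
// and a zero result falls back to the maximum downward adjustment.
func clampRetarget(oldDiff, nextDiff, maxRetarget int64) int64 {
	switch {
	case oldDiff == 0: // Should never happen; pass the value through.
		return nextDiff
	case nextDiff == 0:
		return oldDiff / maxRetarget
	case nextDiff/oldDiff > maxRetarget-1:
		return oldDiff * maxRetarget
	case oldDiff/nextDiff > maxRetarget-1:
		return oldDiff / maxRetarget
	}
	return nextDiff
}

func main() {
	// With maxRetarget = 4, a proposed 10x jump is clamped to 4x.
	fmt.Println(clampRetarget(1000, 10000, 4)) // 4000
	// A proposed 10x drop is clamped to 1/4 of the old difficulty.
	fmt.Println(clampRetarget(1000, 100, 4)) // 250
	// A 2x change is within bounds and passes through unchanged.
	fmt.Println(clampRetarget(1000, 2000, 4)) // 2000
}
```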


@ -1,4 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,8 +8,13 @@ package blockchain_test
import (
"math/big"
"testing"
"time"
"github.com/btcsuite/btcd/blockchain"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
"github.com/decred/dcrutil"
)
func TestBigToCompact(t *testing.T) {
@ -69,3 +75,59 @@ func TestCalcWork(t *testing.T) {
}
}
}
// TODO Make more elaborate tests for difficulty. The difficulty algorithms
// have already been tested to death in simnet/testnet/mainnet simulations,
// but we should really have a unit test for them that includes tests for
// edge cases.
func TestDiff(t *testing.T) {
db, err := database.CreateDB("memdb")
if err != nil {
t.Errorf("Failed to create database: %v\n", err)
return
}
defer db.Close()
var tmdb *stake.TicketDB
genesisBlock := dcrutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
_, err = db.InsertBlock(genesisBlock)
if err != nil {
t.Errorf("Failed to insert genesis block: %v\n", err)
return
}
chain := blockchain.New(db, tmdb, &chaincfg.MainNetParams, nil)
//timeSource := blockchain.NewMedianTime()
// Grab some blocks
// Build fake blockchain
// Calc new difficulty
ts := time.Now()
d, err := chain.CalcNextRequiredDifficulty(ts)
if err != nil {
t.Errorf("Failed to get difficulty: %v\n", err)
return
}
if d != 486604799 { // This is hardcoded in the genesis block but not exported anywhere.
t.Error("Failed to get initial difficulty.")
}
sd, err := chain.CalcNextRequiredStakeDifficulty()
if err != nil {
t.Errorf("Failed to get stake difficulty: %v\n", err)
return
}
if sd != chaincfg.MainNetParams.MinimumStakeDiff {
t.Error("Incorrect initial stake difficulty.")
}
// Compare
// Repeat for a few more
}


@ -1,14 +1,15 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
/*
Package blockchain implements bitcoin block handling and chain selection rules.
Package blockchain implements decred block handling and chain selection rules.
The bitcoin block handling and chain selection rules are an integral, and quite
likely the most important, part of bitcoin. Unfortunately, at the time of
The decred block handling and chain selection rules are an integral, and quite
likely the most important, part of decred. Unfortunately, at the time of
this writing, these rules are also largely undocumented and had to be
ascertained from the bitcoind source code. At its core, bitcoin is a
ascertained from the bitcoind source code. At its core, decred is a
distributed consensus of which blocks are valid and which ones will comprise the
main block chain (public ledger) that ultimately determines accepted
transactions, so it is extremely important that fully validating nodes agree on
@ -20,13 +21,13 @@ functionality such as rejecting duplicate blocks, ensuring blocks and
transactions follow all rules, orphan handling, and best chain selection along
with reorganization.
Since this package does not deal with other bitcoin specifics such as network
Since this package does not deal with other decred specifics such as network
communication or wallets, it provides a notification system which gives the
caller a high level of flexibility in how they want to react to certain events
such as orphan blocks which need their parents requested and newly connected
main chain blocks which might result in wallet updates.
Bitcoin Chain Processing Overview
Decred Chain Processing Overview
Before a block is allowed into the block chain, it must go through an intensive
series of validation rules. The following list serves as a general outline of


@ -1,4 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -17,10 +18,17 @@ const (
// exists.
ErrDuplicateBlock ErrorCode = iota
// ErrMissingParent indicates that the block was an orphan.
ErrMissingParent
// ErrBlockTooBig indicates the serialized block size exceeds the
// maximum allowed size.
ErrBlockTooBig
// ErrWrongBlockSize indicates that the block size from the header was
// not the actual serialized size of the block.
ErrWrongBlockSize
// ErrBlockVersionTooOld indicates the block version is too old and is
// no longer accepted since the majority of the network has upgraded
// to a newer version.
@ -146,10 +154,22 @@ const (
// is not a coinbase transaction.
ErrFirstTxNotCoinbase
// ErrCoinbaseHeight indicates that the encoded height in the coinbase
// is incorrect.
ErrCoinbaseHeight
// ErrMultipleCoinbases indicates a block contains more than one
// coinbase transaction.
ErrMultipleCoinbases
// ErrStakeTxInRegularTree indicates a stake transaction was found in
// the regular transaction tree.
ErrStakeTxInRegularTree
// ErrRegTxInStakeTree indicates that a regular transaction was found in
// the stake transaction tree.
ErrRegTxInStakeTree
// ErrBadCoinbaseScriptLen indicates the length of the signature script
// for a coinbase transaction is not within the valid range.
ErrBadCoinbaseScriptLen
@ -158,15 +178,29 @@ const (
// not match the expected value of the subsidy plus the sum of all fees.
ErrBadCoinbaseValue
// ErrMissingCoinbaseHeight indicates the coinbase transaction for a
// block does not start with the serialized block height as
// required for version 2 and higher blocks.
ErrMissingCoinbaseHeight
// ErrBadCoinbaseOutpoint indicates that the outpoint used by a coinbase
// as input was non-null.
ErrBadCoinbaseOutpoint
// ErrBadCoinbaseHeight indicates the serialized block height in the
// coinbase transaction for version 2 and higher blocks does not match
// the expected value.
ErrBadCoinbaseHeight
// ErrBadCoinbaseFraudProof indicates that the fraud proof for a coinbase
// input was non-null.
ErrBadCoinbaseFraudProof
// ErrBadCoinbaseAmountIn indicates that the AmountIn (=subsidy) for a
// coinbase input was incorrect.
ErrBadCoinbaseAmountIn
// ErrBadStakebaseAmountIn indicates that the AmountIn (=subsidy) for a
// stakebase input was incorrect.
ErrBadStakebaseAmountIn
// ErrBadStakebaseScriptLen indicates the length of the signature script
// for a stakebase transaction is not within the valid range.
ErrBadStakebaseScriptLen
// ErrBadStakevaseScrVal indicates the signature script for a stakebase
// transaction was not set to the network consensus value.
ErrBadStakevaseScrVal
// ErrScriptMalformed indicates a transaction script is malformed in
// some way. For example, it might be longer than the maximum allowed
@ -178,48 +212,299 @@ const (
// such signature verification failures and execution past the end of
// the stack.
ErrScriptValidation
// ErrNotEnoughStake indicates that some SStx in a given block did not
// have enough stake to meet the network target.
ErrNotEnoughStake
// ErrStakeBelowMinimum indicates that for some SStx in a given block,
// the given SStx had an amount of stake below the minimum network target.
ErrStakeBelowMinimum
// ErrNonstandardStakeTx indicates that a block contained a stake tx that
// was not one of the allowed types of stake transactions.
ErrNonstandardStakeTx
// ErrNotEnoughVotes indicates that a block contained fewer than a majority
// of voters.
ErrNotEnoughVotes
// ErrTooManyVotes indicates that a block contained more than the maximum
// allowable number of votes.
ErrTooManyVotes
// ErrFreshStakeMismatch indicates that a block's header contained a different
// number of SStx than was found in the block.
ErrFreshStakeMismatch
// ErrTooManySStxs indicates that more than the allowed number of SStx was
// found in a block.
ErrTooManySStxs
// ErrInvalidEarlyStakeTx indicates that a tx type other than SStx was found
// in the stake tx tree before the period when stake validation begins, or
// before the stake tx type could possibly be included in the block.
ErrInvalidEarlyStakeTx
// ErrTicketUnavailable indicates that a vote in the block spent a ticket
// that could not be found.
ErrTicketUnavailable
// ErrVotesOnWrongBlock indicates that an SSGen voted on a block other than
// the block's parent, and so was ineligible for inclusion in that block.
ErrVotesOnWrongBlock
// ErrVotesMismatch indicates that the number of SSGen in the block was not
// equivalent to the number of votes provided in the block header.
ErrVotesMismatch
// ErrIncongruentVotebit indicates that the first votebit in votebits was not
// the same as that determined by the majority of voters in the SSGen tx
// included in the block.
ErrIncongruentVotebit
// ErrInvalidSSRtx indicates that an SSRtx in a block could not be found to
// have a valid missed sstx input as per the stake ticket database.
ErrInvalidSSRtx
// ErrInvalidRevNum indicates that the number of revocations from the
// header was not the same as the number of SSRtx included in the block.
ErrInvalidRevNum
// ErrTooManyRevocations indicates more revocations were found in a block
// than were allowed.
ErrTooManyRevocations
// ErrSStxCommitment indicates that the proportional amounts from the inputs
// of an SStx did not match those found in the commitment outputs.
ErrSStxCommitment
// ErrUnparseableSSGen indicates that the SSGen block vote or votebits data
// was unparseable from the null data outputs.
ErrUnparseableSSGen
// ErrInvalidSSGenInput indicates that the input SStx to the SSGen tx was
// invalid because it was not an SStx.
ErrInvalidSSGenInput
// ErrSSGenPayeeNum indicates that the number of payees from the referenced
// SSGen's SStx was not the same as the number of the payees in the outputs
// of the SSGen tx.
ErrSSGenPayeeNum
// ErrSSGenPayeeOuts indicates that the SSGen payee outputs were either not
// the values that would be expected given the rewards and input amounts of
// the original SStx, or that the SSGen addresses did not correctly correspond
// to the null data outputs given in the originating SStx.
ErrSSGenPayeeOuts
// ErrSSGenSubsidy indicates that there was an error in the amount of subsidy
// generated in the vote.
ErrSSGenSubsidy
// ErrSStxInImmature indicates that the OP_SSTX tagged output used as input
// was not yet TicketMaturity many blocks old.
ErrSStxInImmature
// ErrSStxInScrType indicates that the input used in an sstx was not
// pay-to-pubkeyhash or pay-to-script-hash, which is required. It can
// be OP_SS* tagged, but it must be P2PKH or P2SH.
ErrSStxInScrType
// ErrInvalidSSRtxInput indicates that the input for the SSRtx was not from
// an SStx.
ErrInvalidSSRtxInput
// ErrSSRtxPayeesMismatch means that the number of payees in an SSRtx was
// not the same as the number of payees in the outputs of the input SStx.
ErrSSRtxPayeesMismatch
// ErrSSRtxPayees indicates that the SSRtx failed to pay out to the committed
// addresses or amounts from the originating SStx.
ErrSSRtxPayees
// ErrTxSStxOutSpend indicates that a non SSGen or SSRtx tx attempted to spend
// an OP_SSTX tagged output from an SStx.
ErrTxSStxOutSpend
// ErrRegTxSpendStakeOut indicates that a regular tx attempted to pay to
// outputs tagged with stake tags, e.g. OP_SSTX.
ErrRegTxSpendStakeOut
// ErrBIP0030 indicates that a block failed to pass BIP0030.
ErrBIP0030
// ErrInvalidFinalState indicates that the final state of the PRNG included
// in the block differed from the calculated final state.
ErrInvalidFinalState
// ErrPoolSize indicates an error in the ticket pool size for this block.
ErrPoolSize
// ErrForceReorgWrongChain indicates that a reorganization was attempted
// to be forced, but the chain indicated was not mirrored by b.bestChain.
ErrForceReorgWrongChain
// ErrForceReorgMissingChild indicates that a reorganization was attempted
// to be forced, but the child node to reorganize to could not be found.
ErrForceReorgMissingChild
// ErrBadStakebaseValue indicates that a block's stake tx tree has spent
// more than it is allowed.
ErrBadStakebaseValue
// ErrDiscordantTxTree specifies that a given origin tx's content
// indicated that it should exist in a different tx tree than the
// one given in the TxIn outpoint.
ErrDiscordantTxTree
// ErrStakeFees indicates an error with the fees found in the stake
// transaction tree.
ErrStakeFees
// ErrNoStakeTx indicates there were no stake transactions found in a
// block after stake validation height.
ErrNoStakeTx
// ErrBadBlockHeight indicates that a block header's embedded block height
// was different from where it was actually embedded in the block chain.
ErrBadBlockHeight
// ErrBlockOneTx indicates that block height 1 failed to correctly generate
// the block one premine transaction.
ErrBlockOneTx
// ErrBlockOneInputs indicates that the input to the block height 1
// coinbase transaction was incorrect in some way.
ErrBlockOneInputs
// ErrBlockOneOutputs indicates that block height 1 failed to incorporate
// the ledger addresses correctly into the transaction's outputs.
ErrBlockOneOutputs
// ErrNoTax indicates that there was no tax present in the coinbase of a
// block after height 1.
ErrNoTax
// ErrExpiredTx indicates that the transaction is currently expired.
ErrExpiredTx
// ErrExpiryTxSpentEarly indicates that an output from a transaction
// that included an expiry field was spent before coinbase maturity
// many blocks had passed in the blockchain.
ErrExpiryTxSpentEarly
// ErrFraudAmountIn indicates the witness amount given was fraudulent.
ErrFraudAmountIn
// ErrFraudBlockHeight indicates the witness block height given was fraudulent.
ErrFraudBlockHeight
// ErrFraudBlockIndex indicates the witness block index given was fraudulent.
ErrFraudBlockIndex
// ErrZeroValueOutputSpend indicates that a transaction attempted to spend a
// zero value output.
ErrZeroValueOutputSpend
// ErrInvalidEarlyVoteBits indicates that a block before stake validation
// height had an unallowed vote bits value.
ErrInvalidEarlyVoteBits
)
// Map of ErrorCode values back to their constant names for pretty printing.
var errorCodeStrings = map[ErrorCode]string{
ErrDuplicateBlock: "ErrDuplicateBlock",
ErrBlockTooBig: "ErrBlockTooBig",
ErrBlockVersionTooOld: "ErrBlockVersionTooOld",
ErrInvalidTime: "ErrInvalidTime",
ErrTimeTooOld: "ErrTimeTooOld",
ErrTimeTooNew: "ErrTimeTooNew",
ErrDifficultyTooLow: "ErrDifficultyTooLow",
ErrUnexpectedDifficulty: "ErrUnexpectedDifficulty",
ErrHighHash: "ErrHighHash",
ErrBadMerkleRoot: "ErrBadMerkleRoot",
ErrBadCheckpoint: "ErrBadCheckpoint",
ErrForkTooOld: "ErrForkTooOld",
ErrCheckpointTimeTooOld: "ErrCheckpointTimeTooOld",
ErrNoTransactions: "ErrNoTransactions",
ErrTooManyTransactions: "ErrTooManyTransactions",
ErrNoTxInputs: "ErrNoTxInputs",
ErrNoTxOutputs: "ErrNoTxOutputs",
ErrTxTooBig: "ErrTxTooBig",
ErrBadTxOutValue: "ErrBadTxOutValue",
ErrDuplicateTxInputs: "ErrDuplicateTxInputs",
ErrBadTxInput: "ErrBadTxInput",
ErrMissingTx: "ErrMissingTx",
ErrUnfinalizedTx: "ErrUnfinalizedTx",
ErrDuplicateTx: "ErrDuplicateTx",
ErrOverwriteTx: "ErrOverwriteTx",
ErrImmatureSpend: "ErrImmatureSpend",
ErrDoubleSpend: "ErrDoubleSpend",
ErrSpendTooHigh: "ErrSpendTooHigh",
ErrBadFees: "ErrBadFees",
ErrTooManySigOps: "ErrTooManySigOps",
ErrFirstTxNotCoinbase: "ErrFirstTxNotCoinbase",
ErrMultipleCoinbases: "ErrMultipleCoinbases",
ErrBadCoinbaseScriptLen: "ErrBadCoinbaseScriptLen",
ErrBadCoinbaseValue: "ErrBadCoinbaseValue",
ErrMissingCoinbaseHeight: "ErrMissingCoinbaseHeight",
ErrBadCoinbaseHeight: "ErrBadCoinbaseHeight",
ErrScriptMalformed: "ErrScriptMalformed",
ErrScriptValidation: "ErrScriptValidation",
ErrDuplicateBlock: "ErrDuplicateBlock",
ErrMissingParent: "ErrMissingParent",
ErrBlockTooBig: "ErrBlockTooBig",
ErrWrongBlockSize: "ErrWrongBlockSize",
ErrBlockVersionTooOld: "ErrBlockVersionTooOld",
ErrInvalidTime: "ErrInvalidTime",
ErrTimeTooOld: "ErrTimeTooOld",
ErrTimeTooNew: "ErrTimeTooNew",
ErrDifficultyTooLow: "ErrDifficultyTooLow",
ErrUnexpectedDifficulty: "ErrUnexpectedDifficulty",
ErrHighHash: "ErrHighHash",
ErrBadMerkleRoot: "ErrBadMerkleRoot",
ErrBadCheckpoint: "ErrBadCheckpoint",
ErrForkTooOld: "ErrForkTooOld",
ErrCheckpointTimeTooOld: "ErrCheckpointTimeTooOld",
ErrNoTransactions: "ErrNoTransactions",
ErrTooManyTransactions: "ErrTooManyTransactions",
ErrNoTxInputs: "ErrNoTxInputs",
ErrNoTxOutputs: "ErrNoTxOutputs",
ErrTxTooBig: "ErrTxTooBig",
ErrBadTxOutValue: "ErrBadTxOutValue",
ErrDuplicateTxInputs: "ErrDuplicateTxInputs",
ErrBadTxInput: "ErrBadTxInput",
ErrMissingTx: "ErrMissingTx",
ErrUnfinalizedTx: "ErrUnfinalizedTx",
ErrDuplicateTx: "ErrDuplicateTx",
ErrOverwriteTx: "ErrOverwriteTx",
ErrImmatureSpend: "ErrImmatureSpend",
ErrDoubleSpend: "ErrDoubleSpend",
ErrSpendTooHigh: "ErrSpendTooHigh",
ErrBadFees: "ErrBadFees",
ErrTooManySigOps: "ErrTooManySigOps",
ErrFirstTxNotCoinbase: "ErrFirstTxNotCoinbase",
ErrMultipleCoinbases: "ErrMultipleCoinbases",
ErrStakeTxInRegularTree: "ErrStakeTxInRegularTree",
ErrRegTxInStakeTree: "ErrRegTxInStakeTree",
ErrBadCoinbaseScriptLen: "ErrBadCoinbaseScriptLen",
ErrBadCoinbaseValue: "ErrBadCoinbaseValue",
ErrBadCoinbaseOutpoint: "ErrBadCoinbaseOutpoint",
ErrBadCoinbaseFraudProof: "ErrBadCoinbaseFraudProof",
ErrBadCoinbaseAmountIn: "ErrBadCoinbaseAmountIn",
ErrBadStakebaseAmountIn: "ErrBadStakebaseAmountIn",
ErrBadStakebaseScriptLen: "ErrBadStakebaseScriptLen",
ErrBadStakevaseScrVal: "ErrBadStakevaseScrVal",
ErrScriptMalformed: "ErrScriptMalformed",
ErrScriptValidation: "ErrScriptValidation",
ErrNotEnoughStake: "ErrNotEnoughStake",
ErrStakeBelowMinimum: "ErrStakeBelowMinimum",
ErrNonstandardStakeTx: "ErrNonstandardStakeTx",
ErrNotEnoughVotes: "ErrNotEnoughVotes",
ErrTooManyVotes: "ErrTooManyVotes",
ErrFreshStakeMismatch: "ErrFreshStakeMismatch",
ErrTooManySStxs: "ErrTooManySStxs",
ErrInvalidEarlyStakeTx: "ErrInvalidEarlyStakeTx",
ErrTicketUnavailable: "ErrTicketUnavailable",
ErrVotesOnWrongBlock: "ErrVotesOnWrongBlock",
ErrVotesMismatch: "ErrVotesMismatch",
ErrIncongruentVotebit: "ErrIncongruentVotebit",
ErrInvalidSSRtx: "ErrInvalidSSRtx",
ErrInvalidRevNum: "ErrInvalidRevNum",
ErrTooManyRevocations: "ErrTooManyRevocations",
ErrSStxCommitment: "ErrSStxCommitment",
ErrUnparseableSSGen: "ErrUnparseableSSGen",
ErrInvalidSSGenInput: "ErrInvalidSSGenInput",
ErrSSGenPayeeNum: "ErrSSGenPayeeNum",
ErrSSGenPayeeOuts: "ErrSSGenPayeeOuts",
ErrSSGenSubsidy: "ErrSSGenSubsidy",
ErrSStxInImmature: "ErrSStxInImmature",
ErrSStxInScrType: "ErrSStxInScrType",
ErrInvalidSSRtxInput: "ErrInvalidSSRtxInput",
ErrSSRtxPayeesMismatch: "ErrSSRtxPayeesMismatch",
ErrSSRtxPayees: "ErrSSRtxPayees",
ErrTxSStxOutSpend: "ErrTxSStxOutSpend",
ErrRegTxSpendStakeOut: "ErrRegTxSpendStakeOut",
ErrBIP0030: "ErrBIP0030",
ErrInvalidFinalState: "ErrInvalidFinalState",
ErrPoolSize: "ErrPoolSize",
ErrForceReorgWrongChain: "ErrForceReorgWrongChain",
ErrForceReorgMissingChild: "ErrForceReorgMissingChild",
ErrBadStakebaseValue: "ErrBadStakebaseValue",
ErrDiscordantTxTree: "ErrDiscordantTxTree",
ErrStakeFees: "ErrStakeFees",
ErrNoStakeTx: "ErrNoStakeTx",
ErrBadBlockHeight: "ErrBadBlockHeight",
ErrBlockOneTx: "ErrBlockOneTx",
ErrBlockOneInputs: "ErrBlockOneInputs",
ErrBlockOneOutputs: "ErrBlockOneOutputs",
ErrNoTax: "ErrNoTax",
ErrExpiredTx: "ErrExpiredTx",
ErrExpiryTxSpentEarly: "ErrExpiryTxSpentEarly",
ErrFraudAmountIn: "ErrFraudAmountIn",
ErrFraudBlockHeight: "ErrFraudBlockHeight",
ErrFraudBlockIndex: "ErrFraudBlockIndex",
ErrZeroValueOutputSpend: "ErrZeroValueOutputSpend",
ErrInvalidEarlyVoteBits: "ErrInvalidEarlyVoteBits",
}
// String returns the ErrorCode as a human-readable name.
@ -245,6 +530,11 @@ func (e RuleError) Error() string {
return e.Description
}
// GetCode returns the ErrorCode of the rule error.
func (e RuleError) GetCode() ErrorCode {
return e.ErrorCode
}
// ruleError creates a RuleError given a set of arguments.
func ruleError(c ErrorCode, desc string) RuleError {
return RuleError{ErrorCode: c, Description: desc}

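The errorCodeStrings/String pairing above follows a common Go pattern: a map from constant to name, with a formatted fallback for values missing from the map. A minimal stand-alone sketch (the two codes here are just a sample of the real list, not the full dcrd set):

```go
package main

import "fmt"

// ErrorCode is a toy stand-in for blockchain.ErrorCode; only two of the
// real codes are reproduced.
type ErrorCode int

const (
	ErrDuplicateBlock ErrorCode = iota
	ErrBadCoinbaseHeight
)

// errorCodeStrings maps error codes back to their constant names for
// pretty printing, mirroring the map in the diff above.
var errorCodeStrings = map[ErrorCode]string{
	ErrDuplicateBlock:    "ErrDuplicateBlock",
	ErrBadCoinbaseHeight: "ErrBadCoinbaseHeight",
}

// String returns the constant name, or a descriptive placeholder for
// codes missing from the map.
func (e ErrorCode) String() string {
	if s, ok := errorCodeStrings[e]; ok {
		return s
	}
	return fmt.Sprintf("Unknown ErrorCode (%d)", int(e))
}

func main() {
	fmt.Println(ErrDuplicateBlock) // ErrDuplicateBlock
	fmt.Println(ErrorCode(0xffff)) // Unknown ErrorCode (65535)
}
```

The fallback branch is what produces the "Unknown ErrorCode (65535)" output exercised by the 0xffff case in the stringer test.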
View File

@ -1,4 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,7 +8,7 @@ package blockchain_test
import (
"testing"
"github.com/btcsuite/btcd/blockchain"
"github.com/decred/dcrd/blockchain"
)
// TestErrorCodeStringer tests the stringized output for the ErrorCode type.
@ -51,8 +52,6 @@ func TestErrorCodeStringer(t *testing.T) {
{blockchain.ErrMultipleCoinbases, "ErrMultipleCoinbases"},
{blockchain.ErrBadCoinbaseScriptLen, "ErrBadCoinbaseScriptLen"},
{blockchain.ErrBadCoinbaseValue, "ErrBadCoinbaseValue"},
{blockchain.ErrMissingCoinbaseHeight, "ErrMissingCoinbaseHeight"},
{blockchain.ErrBadCoinbaseHeight, "ErrBadCoinbaseHeight"},
{blockchain.ErrScriptMalformed, "ErrScriptMalformed"},
{blockchain.ErrScriptValidation, "ErrScriptValidation"},
{0xffff, "Unknown ErrorCode (65535)"},

View File

@ -1,4 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -8,16 +9,17 @@ import (
"fmt"
"math/big"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/database"
_ "github.com/btcsuite/btcd/database/memdb"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/memdb"
"github.com/decred/dcrutil"
)
// This example demonstrates how to create a new chain instance and use
// ProcessBlock to attempt to add a block to the chain. As the package
// overview documentation describes, this includes all of the Bitcoin consensus
// overview documentation describes, this includes all of the Decred consensus
// rules. This example intentionally attempts to insert a duplicate genesis
// block to illustrate how an invalid block is handled.
func ExampleBlockChain_ProcessBlock() {
@ -32,10 +34,11 @@ func ExampleBlockChain_ProcessBlock() {
}
defer db.Close()
var tmdb *stake.TicketDB
// Insert the main network genesis block. This is part of the initial
// database setup. Like above, this typically would not be needed when
// opening an existing database.
genesisBlock := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
genesisBlock := dcrutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)
_, err = db.InsertBlock(genesisBlock)
if err != nil {
fmt.Printf("Failed to insert genesis block: %v\n", err)
@ -43,8 +46,8 @@ func ExampleBlockChain_ProcessBlock() {
}
// Create a new BlockChain instance using the underlying database for
// the main bitcoin network and ignore notifications.
chain := blockchain.New(db, &chaincfg.MainNetParams, nil)
// the main decred network and ignore notifications.
chain := blockchain.New(db, tmdb, &chaincfg.MainNetParams, nil)
// Create a new median time source that is required by the upcoming
// call to ProcessBlock. Ordinarily this would also add time values
@ -55,22 +58,24 @@ func ExampleBlockChain_ProcessBlock() {
// Process a block. For this example, we are going to intentionally
// cause an error by trying to process the genesis block which already
// exists.
isOrphan, err := chain.ProcessBlock(genesisBlock, timeSource, blockchain.BFNone)
isOrphan, _, err := chain.ProcessBlock(genesisBlock, timeSource, blockchain.BFNone)
if err != nil {
fmt.Printf("Failed to process block: %v\n", err)
return
}
fmt.Printf("Block accepted. Is it an orphan?: %v", isOrphan)
// This output is dependent on the genesis block, and needs to be
// updated if the mainnet genesis block is updated.
// Output:
// Failed to process block: already have block 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
// Failed to process block: already have block 267a53b5ee86c24a48ec37aee4f4e7c0c4004892b7259e695e9f5b321f1ab9d2
}
// This example demonstrates how to convert the compact "bits" in a block header
// which represent the target difficulty to a big integer and display it using
// the typical hex notation.
func ExampleCompactToBig() {
// Convert the bits from block 300000 in the main block chain.
// Convert the bits from block 300000 in the main Decred block chain.
bits := uint32(419465580)
targetDifficulty := blockchain.CompactToBig(bits)

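The compact "bits" decoding the example relies on can be sketched from scratch. This follows the standard compact-target representation (high byte = base-256 exponent, low 23 bits = mantissa, bit 0x00800000 = sign flag) and is written here for illustration, not copied from the package:

```go
package main

import (
	"fmt"
	"math/big"
)

// compactToBig decodes the compact target representation: the high byte
// is a base-256 exponent, the low 23 bits are the mantissa, and bit
// 0x00800000 flags a negative number.
func compactToBig(compact uint32) *big.Int {
	mantissa := compact & 0x007fffff
	negative := compact&0x00800000 != 0
	exponent := uint(compact >> 24)

	var bn *big.Int
	if exponent <= 3 {
		// Small exponents shift the mantissa right instead.
		mantissa >>= 8 * (3 - exponent)
		bn = big.NewInt(int64(mantissa))
	} else {
		bn = big.NewInt(int64(mantissa))
		bn.Lsh(bn, 8*(exponent-3))
	}
	if negative {
		bn.Neg(bn)
	}
	return bn
}

func main() {
	// 0x1d00ffff is the classic maximum Bitcoin proof-of-work target.
	target := compactToBig(0x1d00ffff)
	fmt.Printf("%064x\n", target)
	fmt.Println(target.BitLen()) // 224
}
```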
View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -17,22 +18,12 @@ import (
"time"
)
// TstSetCoinbaseMaturity makes the ability to set the coinbase maturity
// available to the test package.
func TstSetCoinbaseMaturity(maturity int64) {
coinbaseMaturity = maturity
}
// TstTimeSorter makes the internal timeSorter type available to the test
// package.
func TstTimeSorter(times []time.Time) sort.Interface {
return timeSorter(times)
}
// TstCheckSerializedHeight makes the internal checkSerializedHeight function
// available to the test package.
var TstCheckSerializedHeight = checkSerializedHeight
// TstSetMaxMedianTimeEntries makes the ability to set the maximum number of
// median time entries available to the test package.
func TstSetMaxMedianTimeEntries(val int) {

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -183,7 +184,7 @@ func (m *medianTime) AddTimeSample(sourceID string, timeVal time.Time) {
// Warn if none of the time samples are close.
if !remoteHasCloseTime {
log.Warnf("Please check your date and time " +
"are correct! btcd will not work " +
"are correct! dcrd will not work " +
"properly with an invalid time")
}
}

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -9,7 +10,7 @@ import (
"testing"
"time"
"github.com/btcsuite/btcd/blockchain"
"github.com/decred/dcrd/blockchain"
)
// TestMedianTime tests the medianTime implementation.

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,8 +8,8 @@ package blockchain
import (
"math"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrutil"
)
// nextPowerOfTwo returns the next highest power of two from a given number if
@ -28,13 +29,13 @@ func nextPowerOfTwo(n int) int {
// HashMerkleBranches takes two hashes, treated as the left and right tree
// nodes, and returns the hash of their concatenation. This is a helper
// function used to aid in the generation of a merkle tree.
func HashMerkleBranches(left *wire.ShaHash, right *wire.ShaHash) *wire.ShaHash {
func HashMerkleBranches(left *chainhash.Hash, right *chainhash.Hash) *chainhash.Hash {
// Concatenate the left and right nodes.
var sha [wire.HashSize * 2]byte
copy(sha[:wire.HashSize], left[:])
copy(sha[wire.HashSize:], right[:])
var sha [chainhash.HashSize * 2]byte
copy(sha[:chainhash.HashSize], left[:])
copy(sha[chainhash.HashSize:], right[:])
newSha := wire.DoubleSha256SH(sha[:])
newSha := chainhash.HashFuncH(sha[:])
return &newSha
}
@ -45,7 +46,7 @@ func HashMerkleBranches(left *wire.ShaHash, right *wire.ShaHash) *wire.ShaHash {
// is stored in a linear array.
//
// A merkle tree is a tree in which every non-leaf node is the hash of its
// children nodes. A diagram depicting how this works for bitcoin transactions
// children nodes. A diagram depicting how this works for decred transactions
// where h(x) is a double sha256 follows:
//
// root = h1234 = h(h12 + h34)
@ -66,16 +67,26 @@ func HashMerkleBranches(left *wire.ShaHash, right *wire.ShaHash) *wire.ShaHash {
// are calculated by concatenating the left node with itself before hashing.
// Since this function uses nodes that are pointers to the hashes, empty nodes
// will be nil.
func BuildMerkleTreeStore(transactions []*btcutil.Tx) []*wire.ShaHash {
func BuildMerkleTreeStore(transactions []*dcrutil.Tx) []*chainhash.Hash {
// If there's an empty stake tree, return totally zeroed out merkle tree root
// only.
if len(transactions) == 0 {
merkles := make([]*chainhash.Hash, 1)
merkles[0] = &chainhash.Hash{}
return merkles
}
// Calculate how many entries are required to hold the binary merkle
// tree as a linear array and create an array of that size.
nextPoT := nextPowerOfTwo(len(transactions))
arraySize := nextPoT*2 - 1
merkles := make([]*wire.ShaHash, arraySize)
merkles := make([]*chainhash.Hash, arraySize)
// Create the base transaction shas and populate the array with them.
for i, tx := range transactions {
merkles[i] = tx.Sha()
msgTx := tx.MsgTx()
txShaFull := msgTx.TxShaFull()
merkles[i] = &txShaFull
}
// Start the array offset after the last transaction and adjusted to the

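The linear-array sizing used by BuildMerkleTreeStore can be illustrated with a small sketch. The nextPowerOfTwo body below is reconstructed from its doc comment, since the diff elides it; the real implementation may differ in detail:

```go
package main

import "fmt"

// nextPowerOfTwo returns the next highest power of two for n if n is
// not already a power of two. Reconstructed from the doc comment; the
// elided dcrd version may differ.
func nextPowerOfTwo(n int) int {
	if n&(n-1) == 0 {
		return n // already a power of two
	}
	p := 1
	for p < n {
		p <<= 1
	}
	return p
}

func main() {
	// BuildMerkleTreeStore lays the tree out in a linear array of
	// nextPowerOfTwo(txCount)*2 - 1 entries.
	for _, txCount := range []int{1, 2, 3, 5, 8} {
		nextPoT := nextPowerOfTwo(txCount)
		fmt.Printf("txs=%d nextPoT=%d arraySize=%d\n",
			txCount, nextPoT, nextPoT*2-1)
	}
}
```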
View File

@ -1,24 +1,13 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain_test
import (
"testing"
import ()
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcutil"
)
// TestMerkle tests the BuildMerkleTreeStore API.
func TestMerkle(t *testing.T) {
block := btcutil.NewBlock(&Block100000)
merkles := blockchain.BuildMerkleTreeStore(block.Transactions())
calculatedMerkleRoot := merkles[len(merkles)-1]
wantMerkle := &Block100000.Header.MerkleRoot
if !wantMerkle.IsEqual(calculatedMerkleRoot) {
t.Errorf("BuildMerkleTreeStore: merkle root mismatch - "+
"got %v, want %v", calculatedMerkleRoot, wantMerkle)
}
}
// TODO Make tests for merkle root calculation. Merkle root calculation and
// corruption is already well tested in the blockchain error unit tests and
// reorganization unit tests, but it'd be nice to have a specific test for
// these functions and their error paths.

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -6,6 +7,10 @@ package blockchain
import (
"fmt"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrutil"
)
// NotificationType represents the type of a notification message.
@ -29,14 +34,29 @@ const (
// NTBlockDisconnected indicates the associated block was disconnected
// from the main chain.
NTBlockDisconnected
// NTReorganization indicates that a blockchain reorganization is in
// progress.
NTReorganization
// NTSpentAndMissedTickets indicates spent or missed tickets from a newly
// accepted block.
NTSpentAndMissedTickets
// NTNewTickets indicates newly maturing tickets from a newly
// accepted block.
NTNewTickets
)
// notificationTypeStrings is a map of notification types back to their constant
// names for pretty printing.
var notificationTypeStrings = map[NotificationType]string{
NTBlockAccepted: "NTBlockAccepted",
NTBlockConnected: "NTBlockConnected",
NTBlockDisconnected: "NTBlockDisconnected",
NTBlockAccepted: "NTBlockAccepted",
NTBlockConnected: "NTBlockConnected",
NTBlockDisconnected: "NTBlockDisconnected",
NTReorganization: "NTReorganization",
NTSpentAndMissedTickets: "NTSpentAndMissedTickets",
NTNewTickets: "NTNewTickets",
}
// String returns the NotificationType in human-readable form.
@ -47,12 +67,40 @@ func (n NotificationType) String() string {
return fmt.Sprintf("Unknown Notification Type (%d)", int(n))
}
// BlockAcceptedNtfnsData is the structure for data indicating information
// about a block being accepted.
type BlockAcceptedNtfnsData struct {
OnMainChain bool
Block *dcrutil.Block
}
// ReorganizationNtfnsData is the structure for data indicating information
// about a reorganization.
type ReorganizationNtfnsData struct {
OldHash chainhash.Hash
OldHeight int64
NewHash chainhash.Hash
NewHeight int64
}
// TicketNotificationsData is the structure for new/spent/missed ticket
// notifications at blockchain HEAD that are outgoing from chain.
type TicketNotificationsData struct {
Hash chainhash.Hash
Height int64
StakeDifficulty int64
TicketMap stake.SStxMemMap
}
// Notification defines notification that is sent to the caller via the callback
// function provided during the call to New and consists of a notification type
// as well as associated data that depends on the type as follows:
// - NTBlockAccepted: *btcutil.Block
// - NTBlockConnected: *btcutil.Block
// - NTBlockDisconnected: *btcutil.Block
// - NTBlockAccepted: *BlockAcceptedNtfnsData
// - NTBlockConnected: []*dcrutil.Block of len 2
// - NTBlockDisconnected: []*dcrutil.Block of len 2
// - NTReorganization: *ReorganizationNtfnsData
// - NTSpentAndMissedTickets: *TicketNotificationsData
// - NTNewTickets: *TicketNotificationsData
type Notification struct {
Type NotificationType
Data interface{}

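A caller consumes these notifications by switching on the untyped Data field. A toy sketch with simplified stand-in types (not the dcrd definitions, which carry blocks, hashes, and ticket maps):

```go
package main

import "fmt"

// Simplified stand-ins for the notification types above.
type NotificationType int

const (
	NTBlockAccepted NotificationType = iota
	NTReorganization
)

type BlockAcceptedNtfnsData struct {
	OnMainChain bool
}

type Notification struct {
	Type NotificationType
	Data interface{}
}

// handle shows how a callback recovers the per-type payload from the
// untyped Data field with a type switch.
func handle(n *Notification) string {
	switch d := n.Data.(type) {
	case *BlockAcceptedNtfnsData:
		return fmt.Sprintf("accepted (main chain: %v)", d.OnMainChain)
	default:
		return "unhandled notification"
	}
}

func main() {
	n := &Notification{
		Type: NTBlockAccepted,
		Data: &BlockAcceptedNtfnsData{OnMainChain: true},
	}
	fmt.Println(handle(n)) // accepted (main chain: true)
}
```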
View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,8 +8,8 @@ package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrutil"
)
// BehaviorFlags is a bitmask defining tweaks to the normal behavior when
@ -38,7 +39,7 @@ const (
// blockExists determines whether a block with the given hash exists either in
// the main chain or any side chains.
func (b *BlockChain) blockExists(hash *wire.ShaHash) (bool, error) {
func (b *BlockChain) blockExists(hash *chainhash.Hash) (bool, error) {
// Check memory chain first (could be main chain or side chain blocks).
if _, ok := b.index[*hash]; ok {
return true, nil
@ -55,11 +56,11 @@ func (b *BlockChain) blockExists(hash *wire.ShaHash) (bool, error) {
//
// The flags do not modify the behavior of this function directly, however they
// are needed to pass along to maybeAcceptBlock.
func (b *BlockChain) processOrphans(hash *wire.ShaHash, flags BehaviorFlags) error {
func (b *BlockChain) processOrphans(hash *chainhash.Hash, flags BehaviorFlags) error {
// Start with processing at least the passed hash. Leave a little room
// for additional orphan blocks that need to be processed without
// needing to grow the array in the common case.
processHashes := make([]*wire.ShaHash, 0, 10)
processHashes := make([]*chainhash.Hash, 0, 10)
processHashes = append(processHashes, hash)
for len(processHashes) > 0 {
// Pop the first hash to process from the slice.
@ -90,7 +91,7 @@ func (b *BlockChain) processOrphans(hash *wire.ShaHash, flags BehaviorFlags) err
i--
// Potentially accept the block into the block chain.
err := b.maybeAcceptBlock(orphan.block, flags)
_, err := b.maybeAcceptBlock(orphan.block, flags)
if err != nil {
return err
}
@ -109,10 +110,14 @@ func (b *BlockChain) processOrphans(hash *wire.ShaHash, flags BehaviorFlags) err
// blocks, ensuring blocks follow all rules, orphan handling, and insertion into
// the block chain along with best chain selection and reorganization.
//
// It returns a bool which indicates whether or not the block is an orphan and
// any errors that occurred during processing. The returned bool is only valid
// when the error is nil.
func (b *BlockChain) ProcessBlock(block *btcutil.Block, timeSource MedianTimeSource, flags BehaviorFlags) (bool, error) {
// It returns a first bool specifying whether or not the block ended up on
// the main chain; false means it is on a side chain or fork.
//
// It returns a second bool which indicates whether or not the block is an orphan
// and any errors that occurred during processing. The returned bool is only
// valid when the error is nil.
func (b *BlockChain) ProcessBlock(block *dcrutil.Block,
timeSource MedianTimeSource, flags BehaviorFlags) (bool, bool, error) {
fastAdd := flags&BFFastAdd == BFFastAdd
dryRun := flags&BFDryRun == BFDryRun
@ -122,23 +127,23 @@ func (b *BlockChain) ProcessBlock(block *btcutil.Block, timeSource MedianTimeSou
// The block must not already exist in the main chain or side chains.
exists, err := b.blockExists(blockHash)
if err != nil {
return false, err
return false, false, err
}
if exists {
str := fmt.Sprintf("already have block %v", blockHash)
return false, ruleError(ErrDuplicateBlock, str)
return false, false, ruleError(ErrDuplicateBlock, str)
}
// The block must not already exist as an orphan.
if _, exists := b.orphans[*blockHash]; exists {
str := fmt.Sprintf("already have block (orphan) %v", blockHash)
return false, ruleError(ErrDuplicateBlock, str)
return false, false, ruleError(ErrDuplicateBlock, str)
}
// Perform preliminary sanity checks on the block and its transactions.
err = checkBlockSanity(block, b.chainParams.PowLimit, timeSource, flags)
err = checkBlockSanity(block, timeSource, flags, b.chainParams)
if err != nil {
return false, err
return false, false, err
}
// Find the previous checkpoint and perform some additional checks based
@ -150,7 +155,7 @@ func (b *BlockChain) ProcessBlock(block *btcutil.Block, timeSource MedianTimeSou
blockHeader := &block.MsgBlock().Header
checkpointBlock, err := b.findPreviousCheckpoint()
if err != nil {
return false, err
return false, false, err
}
if checkpointBlock != nil {
// Ensure the block timestamp is after the checkpoint timestamp.
@ -160,7 +165,7 @@ func (b *BlockChain) ProcessBlock(block *btcutil.Block, timeSource MedianTimeSou
str := fmt.Sprintf("block %v has timestamp %v before "+
"last checkpoint timestamp %v", blockHash,
blockHeader.Timestamp, checkpointTime)
return false, ruleError(ErrCheckpointTimeTooOld, str)
return false, false, ruleError(ErrCheckpointTimeTooOld, str)
}
if !fastAdd {
// Even though the checks prior to now have already ensured the
@ -177,7 +182,7 @@ func (b *BlockChain) ProcessBlock(block *btcutil.Block, timeSource MedianTimeSou
str := fmt.Sprintf("block target difficulty of %064x "+
"is too low when compared to the previous "+
"checkpoint", currentTarget)
return false, ruleError(ErrDifficultyTooLow, str)
return false, false, ruleError(ErrDifficultyTooLow, str)
}
}
}
@ -187,7 +192,7 @@ func (b *BlockChain) ProcessBlock(block *btcutil.Block, timeSource MedianTimeSou
if !prevHash.IsEqual(zeroHash) {
prevHashExists, err := b.blockExists(prevHash)
if err != nil {
return false, err
return false, false, err
}
if !prevHashExists {
if !dryRun {
@ -196,15 +201,16 @@ func (b *BlockChain) ProcessBlock(block *btcutil.Block, timeSource MedianTimeSou
b.addOrphanBlock(block)
}
return true, nil
return false, true, err
}
}
// The block has passed all context independent checks and appears sane
// enough to potentially accept it into the block chain.
err = b.maybeAcceptBlock(block, flags)
var onMainChain bool
onMainChain, err = b.maybeAcceptBlock(block, flags)
if err != nil {
return false, err
return false, false, err
}
// Don't process any orphans or log when the dry run flag is set.
@ -214,11 +220,11 @@ func (b *BlockChain) ProcessBlock(block *btcutil.Block, timeSource MedianTimeSou
// there are no more.
err := b.processOrphans(blockHash, flags)
if err != nil {
return false, err
return false, false, err
}
log.Debugf("Accepted block %v", blockHash)
}
return false, nil
return onMainChain, false, err
}

View File

@ -1,134 +1,135 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain_test
import (
"bytes"
"compress/bzip2"
"encoding/binary"
"io"
"encoding/gob"
"os"
"path/filepath"
"strings"
"testing"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrutil"
)
// TestReorganization loads a set of test blocks which force a chain
// reorganization to test the block chain handling code.
// The test blocks were originally from a post on the bitcoin talk forums:
// https://bitcointalk.org/index.php?topic=46370.msg577556#msg577556
func TestReorganization(t *testing.T) {
// Intentionally load the side chain blocks out of order to ensure
// orphans are handled properly along with chain reorganization.
testFiles := []string{
"blk_0_to_4.dat.bz2",
"blk_4A.dat.bz2",
"blk_5A.dat.bz2",
"blk_3A.dat.bz2",
}
var blocks []*btcutil.Block
for _, file := range testFiles {
blockTmp, err := loadBlocks(file)
if err != nil {
t.Errorf("Error loading file: %v\n", err)
}
for _, block := range blockTmp {
blocks = append(blocks, block)
}
}
t.Logf("Number of blocks: %v\n", len(blocks))
// Create a new database and chain instance to run tests against.
chain, teardownFunc, err := chainSetup("reorg")
chain, teardownFunc, err := chainSetup("reorgunittest",
simNetParams)
if err != nil {
t.Errorf("Failed to setup chain instance: %v", err)
return
}
defer teardownFunc()
// Since we're not dealing with the real block chain, disable
// checkpoints and set the coinbase maturity to 1.
chain.DisableCheckpoints(true)
blockchain.TstSetCoinbaseMaturity(1)
timeSource := blockchain.NewMedianTime()
expectedOrphans := map[int]struct{}{5: struct{}{}, 6: struct{}{}}
for i := 1; i < len(blocks); i++ {
isOrphan, err := chain.ProcessBlock(blocks[i], timeSource, blockchain.BFNone)
if err != nil {
t.Errorf("ProcessBlock fail on block %v: %v\n", i, err)
return
}
if _, ok := expectedOrphans[i]; !ok && isOrphan {
t.Errorf("ProcessBlock incorrectly returned block %v "+
"is an orphan\n", i)
}
}
return
}
// loadBlocks reads files containing bitcoin block data (gzipped but otherwise
// in the format bitcoind writes) from disk and returns them as an array of
// btcutil.Block. This is largely borrowed from the test code in btcdb.
func loadBlocks(filename string) (blocks []*btcutil.Block, err error) {
filename = filepath.Join("testdata/", filename)
var network = wire.MainNet
var dr io.Reader
var fi io.ReadCloser
fi, err = os.Open(filename)
err = chain.GenerateInitialIndex()
if err != nil {
return
t.Errorf("GenerateInitialIndex: %v", err)
}
if strings.HasSuffix(filename, ".bz2") {
dr = bzip2.NewReader(fi)
} else {
dr = fi
// The genesis block should fail to connect since it's already
// inserted.
genesisBlock := simNetParams.GenesisBlock
err = chain.CheckConnectBlock(dcrutil.NewBlock(genesisBlock))
if err == nil {
t.Errorf("CheckConnectBlock: Did not receive expected error")
}
// Load up the rest of the blocks up to HEAD.
filename := filepath.Join("testdata/", "reorgto179.bz2")
fi, err := os.Open(filename)
bcStream := bzip2.NewReader(fi)
defer fi.Close()
var block *btcutil.Block
// Create a buffer of the read file
bcBuf := new(bytes.Buffer)
bcBuf.ReadFrom(bcStream)
err = nil
for height := int64(1); err == nil; height++ {
var rintbuf uint32
err = binary.Read(dr, binary.LittleEndian, &rintbuf)
if err == io.EOF {
// hit end of file at expected offset: no warning
height--
err = nil
break
}
if err != nil {
break
}
if rintbuf != uint32(network) {
break
}
err = binary.Read(dr, binary.LittleEndian, &rintbuf)
blocklen := rintbuf
// Create decoder from the buffer and a map to store the data
bcDecoder := gob.NewDecoder(bcBuf)
blockChain := make(map[int64][]byte)
rbytes := make([]byte, blocklen)
// read block
dr.Read(rbytes)
block, err = btcutil.NewBlockFromBytes(rbytes)
if err != nil {
return
}
blocks = append(blocks, block)
// Decode the blockchain into the map
if err := bcDecoder.Decode(&blockChain); err != nil {
t.Errorf("error decoding test blockchain: %v", err.Error())
}
// Load up the short chain
timeSource := blockchain.NewMedianTime()
finalIdx1 := 179
for i := 1; i < finalIdx1+1; i++ {
bl, err := dcrutil.NewBlockFromBytes(blockChain[int64(i)])
if err != nil {
t.Errorf("NewBlockFromBytes error: %v", err.Error())
}
bl.SetHeight(int64(i))
_, _, err = chain.ProcessBlock(bl, timeSource, blockchain.BFNone)
if err != nil {
t.Errorf("ProcessBlock error: %v", err.Error())
}
}
// Load the long chain and begin loading blocks from that too,
// forcing a reorganization
// Load up the rest of the blocks up to HEAD.
filename = filepath.Join("testdata/", "reorgto180.bz2")
fi, err = os.Open(filename)
bcStream = bzip2.NewReader(fi)
defer fi.Close()
// Create a buffer of the read file
bcBuf = new(bytes.Buffer)
bcBuf.ReadFrom(bcStream)
// Create decoder from the buffer and a map to store the data
bcDecoder = gob.NewDecoder(bcBuf)
blockChain = make(map[int64][]byte)
// Decode the blockchain into the map
if err := bcDecoder.Decode(&blockChain); err != nil {
t.Errorf("error decoding test blockchain: %v", err.Error())
}
forkPoint := 131
finalIdx2 := 180
for i := forkPoint; i < finalIdx2+1; i++ {
bl, err := dcrutil.NewBlockFromBytes(blockChain[int64(i)])
if err != nil {
t.Errorf("NewBlockFromBytes error: %v", err.Error())
}
bl.SetHeight(int64(i))
_, _, err = chain.ProcessBlock(bl, timeSource, blockchain.BFNone)
if err != nil {
t.Errorf("ProcessBlock error: %v", err.Error())
}
}
// Ensure our blockchain is at the correct best tip
topBlock, _ := chain.GetTopBlock()
tipHash := topBlock.Sha()
expected, _ := chainhash.NewHashFromStr("5ab969d0afd8295b6cd1506f2a310d" +
"259322015c8bd5633f283a163ce0e50594")
if *tipHash != *expected {
t.Errorf("Failed to correctly reorg; expected tip %v, got tip %v",
expected, tipHash)
}
have, err := chain.HaveBlock(expected)
if !have {
t.Errorf("missing tip block after reorganization test")
}
if err != nil {
t.Errorf("unexpected error testing for presence of new tip block "+
"after reorg test: %v", err)
}
return
}

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -9,16 +10,16 @@ import (
"math"
"runtime"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/txscript"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
// txValidateItem holds a transaction along with which input to validate.
type txValidateItem struct {
txInIndex int
txIn *wire.TxIn
tx *btcutil.Tx
tx *dcrutil.Tx
}
// txValidator provides a type which asynchronously validates transaction
@ -83,8 +84,10 @@ out:
// Create a new script engine for the script pair.
sigScript := txIn.SignatureScript
pkScript := originMsgTx.TxOut[originTxIndex].PkScript
version := originMsgTx.TxOut[originTxIndex].Version
vm, err := txscript.NewEngine(pkScript, txVI.tx.MsgTx(),
txVI.txInIndex, v.flags)
txVI.txInIndex, v.flags, version)
if err != nil {
str := fmt.Sprintf("failed to parse input "+
"%s:%d which references output %s:%d - "+
@ -191,7 +194,8 @@ func newTxValidator(txStore TxStore, flags txscript.ScriptFlags) *txValidator {
// ValidateTransactionScripts validates the scripts for the passed transaction
// using multiple goroutines.
func ValidateTransactionScripts(tx *btcutil.Tx, txStore TxStore, flags txscript.ScriptFlags) error {
func ValidateTransactionScripts(tx *dcrutil.Tx, txStore TxStore,
flags txscript.ScriptFlags) error {
// Collect all of the transaction inputs and required information for
// validation.
txIns := tx.MsgTx().TxIn
@ -217,21 +221,32 @@ func ValidateTransactionScripts(tx *btcutil.Tx, txStore TxStore, flags txscript.
}
return nil
}
// checkBlockScripts executes and validates the scripts for all transactions in
// the passed block.
func checkBlockScripts(block *btcutil.Block, txStore TxStore,
// txTree = true is TxTreeRegular, txTree = false is TxTreeStake.
func checkBlockScripts(block *dcrutil.Block, txStore TxStore, txTree bool,
scriptFlags txscript.ScriptFlags) error {
// Collect all of the transaction inputs and required information for
// validation for all transactions in the block into a single slice.
numInputs := 0
for _, tx := range block.Transactions() {
var txs []*dcrutil.Tx
// TxTreeRegular handling.
if txTree {
txs = block.Transactions()
} else { // TxTreeStake
txs = block.STransactions()
}
for _, tx := range txs {
numInputs += len(tx.MsgTx().TxIn)
}
txValItems := make([]*txValidateItem, 0, numInputs)
for _, tx := range block.Transactions() {
for _, tx := range txs {
for txInIdx, txIn := range tx.MsgTx().TxIn {
// Skip coinbases.
if txIn.PreviousOutPoint.Index == math.MaxUint32 {

View File

@ -1,46 +1,18 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain_test
import (
"fmt"
"runtime"
"testing"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/txscript"
)
// TestCheckBlockScripts ensures that validating all of the scripts in a
// known-good block doesn't return an error.
func TestCheckBlockScripts(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
testBlockNum := 277647
blockDataFile := fmt.Sprintf("%d.dat.bz2", testBlockNum)
blocks, err := loadBlocks(blockDataFile)
if err != nil {
t.Errorf("Error loading file: %v\n", err)
return
}
if len(blocks) > 1 {
t.Errorf("The test block file must only have one block in it")
}
txStoreDataFile := fmt.Sprintf("%d.txstore.bz2", testBlockNum)
txStore, err := loadTxStore(txStoreDataFile)
if err != nil {
t.Errorf("Error loading txstore: %v\n", err)
return
}
scriptFlags := txscript.ScriptBip16
err = blockchain.TstCheckBlockScripts(blocks[0], txStore, scriptFlags)
if err != nil {
t.Errorf("Transaction script validation failed: %v\n",
err)
return
}
// TODO In the future, add a block here with a lot of tx to validate.
// The blockchain tests already validate a ton of scripts with signatures,
// so we don't really need to make a new test for this immediately.
}

219
blockchain/stake/error.go Normal file
View File

@ -0,0 +1,219 @@
// Copyright (c) 2014 Conformal Systems LLC.
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package stake
import (
"fmt"
)
// ErrorCode identifies a kind of error.
type ErrorCode int
// These constants are used to identify a specific StakeRuleError.
const (
// ErrSStxTooManyInputs indicates that a given SStx contains too many
// inputs.
ErrSStxTooManyInputs = iota
// ErrSStxTooManyOutputs indicates that a given SStx contains too many
// outputs.
ErrSStxTooManyOutputs
// ErrSStxNoOutputs indicates that a given SStx has no outputs.
ErrSStxNoOutputs
// ErrSStxInvalidInput indicates that an invalid output has been used as
// an input for a SStx; only non-SStx tagged outputs may be used to
// purchase stake tickets.
// TODO: Add this into validate
// Ensure that all inputs are not tagged SStx outputs of some sort,
// along with checks to make sure they exist and are available.
ErrSStxInvalidInputs
// ErrSStxInvalidOutput indicates that the output for an SStx tx is
// invalid; in particular, either the output was not tagged SStx or the
// OP_RETURNs were missing or contained invalid addresses.
ErrSStxInvalidOutputs
// ErrSStxInOutProportions indicates that the number of inputs in an SStx
// was not equal to the number of outputs minus one.
ErrSStxInOutProportions
// ErrSStxBadCommitAmount indicates that a ticket tried to commit 0 or
// a negative value as the commitment amount.
ErrSStxBadCommitAmount
// ErrSStxBadChangeAmts indicates that the change amount for some SStx
// was invalid, for instance spending more than its inputs.
ErrSStxBadChangeAmts
// ErrSStxVerifyCalcAmts indicates that passed calculated amounts failed
// to conform to the amounts found in the ticket.
ErrSStxVerifyCalcAmts
// ErrSSGenWrongNumInputs indicates that a given SSGen tx contains an
// invalid number of inputs.
ErrSSGenWrongNumInputs
// ErrSSGenTooManyOutputs indicates that a given SSGen tx contains too
// many outputs.
ErrSSGenTooManyOutputs
// ErrSSGenNoOutputs indicates that a given SSGen has no outputs.
ErrSSGenNoOutputs
// ErrSSGenWrongIndex indicates that a given SSGen sstx input was not
// using the correct index.
ErrSSGenWrongIndex
// ErrSSGenWrongTxTree indicates that a given SSGen tx input was not found in
// the stake tx tree.
ErrSSGenWrongTxTree
// ErrSSGenNoStakebase indicates that the SSGen tx did not contain a
// valid StakeBase in the zeroeth position of inputs.
ErrSSGenNoStakebase
// ErrSSGenNoReference indicates that there is no reference OP_RETURN
// included as the first output.
ErrSSGenNoReference
// ErrSSGenBadReference indicates that the OP_RETURN included as the
// first output was corrupted in some way.
ErrSSGenBadReference
// ErrSSGenNoVotePush indicates that there is no vote bits OP_RETURN
// included as the second output.
ErrSSGenNoVotePush
// ErrSSGenBadVotePush indicates that the OP_RETURN included as the
// second output was corrupted in some way.
ErrSSGenBadVotePush
// ErrSSGenBadGenOuts indicates that something was wrong with the
// stake generation outputs that were present after the first two
// OP_RETURN pushes in an SSGen tx.
ErrSSGenBadGenOuts
// ErrSSRtxWrongNumInputs indicates that a given SSRtx contains an
// invalid number of inputs.
ErrSSRtxWrongNumInputs
// ErrSSRtxTooManyOutputs indicates that a given SSRtx contains too many
// outputs.
ErrSSRtxTooManyOutputs
// ErrSSRtxNoOutputs indicates that a given SSRtx has no outputs.
ErrSSRtxNoOutputs
// ErrSSRtxWrongTxTree indicates that a given SSRtx input was not found in
// the stake tx tree.
ErrSSRtxWrongTxTree
// ErrSSRtxBadOuts indicates that there was a non-SSRtx tagged output
// present in an SSRtx.
ErrSSRtxBadOuts
// ErrVerSStxAmts indicates there was an error verifying the calculated
// SStx out amounts and the actual SStx out amounts.
ErrVerSStxAmts
// ErrVerifyInput indicates that there was an error in verification
// function input.
ErrVerifyInput
// ErrVerifyOutType indicates that there was a non-equivalence in the
// output type.
ErrVerifyOutType
// ErrVerifyTooMuchFees indicates that a transaction's output gave
// too much in fees after taking into accounts the limits imposed
// by the SStx output's version field.
ErrVerifyTooMuchFees
// ErrVerifySpendTooMuch indicates that a transaction's output spent more
// than it was allowed to spend based on the calculated subsidy or return
// for a vote or revocation.
ErrVerifySpendTooMuch
// ErrVerifyOutputAmt indicates that for a vote/revocation spend output,
// the rule requires that it exactly match the calculated maximum, but
// the amount in the output did not (e.g. it paid fees).
ErrVerifyOutputAmt
// ErrVerifyOutPkhs indicates that the recipient of the P2PKH or P2SH
// script was different from that indicated in the SStx input.
ErrVerifyOutPkhs
)
// Map of ErrorCode values back to their constant names for pretty printing.
var errorCodeStrings = map[ErrorCode]string{
ErrSStxTooManyInputs: "ErrSStxTooManyInputs",
ErrSStxTooManyOutputs: "ErrSStxTooManyOutputs",
ErrSStxNoOutputs: "ErrSStxNoOutputs",
ErrSStxInvalidInputs: "ErrSStxInvalidInputs",
ErrSStxInvalidOutputs: "ErrSStxInvalidOutputs",
ErrSStxInOutProportions: "ErrSStxInOutProportions",
ErrSStxBadCommitAmount: "ErrSStxBadCommitAmount",
ErrSStxBadChangeAmts: "ErrSStxBadChangeAmts",
ErrSStxVerifyCalcAmts: "ErrSStxVerifyCalcAmts",
ErrSSGenWrongNumInputs: "ErrSSGenWrongNumInputs",
ErrSSGenTooManyOutputs: "ErrSSGenTooManyOutputs",
ErrSSGenNoOutputs: "ErrSSGenNoOutputs",
ErrSSGenWrongIndex: "ErrSSGenWrongIndex",
ErrSSGenWrongTxTree: "ErrSSGenWrongTxTree",
ErrSSGenNoStakebase: "ErrSSGenNoStakebase",
ErrSSGenNoReference: "ErrSSGenNoReference",
ErrSSGenBadReference: "ErrSSGenBadReference",
ErrSSGenNoVotePush: "ErrSSGenNoVotePush",
ErrSSGenBadVotePush: "ErrSSGenBadVotePush",
ErrSSGenBadGenOuts: "ErrSSGenBadGenOuts",
ErrSSRtxWrongNumInputs: "ErrSSRtxWrongNumInputs",
ErrSSRtxTooManyOutputs: "ErrSSRtxTooManyOutputs",
ErrSSRtxNoOutputs: "ErrSSRtxNoOutputs",
ErrSSRtxWrongTxTree: "ErrSSRtxWrongTxTree",
ErrSSRtxBadOuts: "ErrSSRtxBadOuts",
ErrVerSStxAmts: "ErrVerSStxAmts",
ErrVerifyInput: "ErrVerifyInput",
ErrVerifyOutType: "ErrVerifyOutType",
ErrVerifyTooMuchFees: "ErrVerifyTooMuchFees",
ErrVerifySpendTooMuch: "ErrVerifySpendTooMuch",
ErrVerifyOutputAmt: "ErrVerifyOutputAmt",
ErrVerifyOutPkhs: "ErrVerifyOutPkhs",
}
// String returns the ErrorCode as a human-readable name.
func (e ErrorCode) String() string {
if s := errorCodeStrings[e]; s != "" {
return s
}
return fmt.Sprintf("Unknown ErrorCode (%d)", int(e))
}
// StakeRuleError identifies a rule violation. It is used to indicate that
// processing of a block or transaction failed due to one of the many validation
// rules. The caller can use type assertions to determine if a failure was
// specifically due to a rule violation and access the ErrorCode field to
// ascertain the specific reason for the rule violation.
type StakeRuleError struct {
ErrorCode ErrorCode // Describes the kind of error
Description string // Human readable description of the issue
}
// Error satisfies the error interface and prints human-readable errors.
func (e StakeRuleError) Error() string {
return e.Description
}
// GetCode returns the ErrorCode describing the rule violation.
func (e StakeRuleError) GetCode() ErrorCode {
return e.ErrorCode
}
// stakeRuleError creates a StakeRuleError given a set of arguments.
func stakeRuleError(c ErrorCode, desc string) StakeRuleError {
return StakeRuleError{ErrorCode: c, Description: desc}
}

View File

@ -0,0 +1,96 @@
// Copyright (c) 2014 Conformal Systems LLC.
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package stake_test
import (
"testing"
"github.com/decred/dcrd/blockchain/stake"
)
// TestErrorCodeStringer tests the stringized output for the ErrorCode type.
func TestErrorCodeStringer(t *testing.T) {
tests := []struct {
in stake.ErrorCode
want string
}{
{stake.ErrSStxTooManyInputs, "ErrSStxTooManyInputs"},
{stake.ErrSStxTooManyOutputs, "ErrSStxTooManyOutputs"},
{stake.ErrSStxNoOutputs, "ErrSStxNoOutputs"},
{stake.ErrSStxInvalidInputs, "ErrSStxInvalidInputs"},
{stake.ErrSStxInvalidOutputs, "ErrSStxInvalidOutputs"},
{stake.ErrSStxInOutProportions, "ErrSStxInOutProportions"},
{stake.ErrSStxBadCommitAmount, "ErrSStxBadCommitAmount"},
{stake.ErrSStxBadChangeAmts, "ErrSStxBadChangeAmts"},
{stake.ErrSStxVerifyCalcAmts, "ErrSStxVerifyCalcAmts"},
{stake.ErrSSGenWrongNumInputs, "ErrSSGenWrongNumInputs"},
{stake.ErrSSGenTooManyOutputs, "ErrSSGenTooManyOutputs"},
{stake.ErrSSGenNoOutputs, "ErrSSGenNoOutputs"},
{stake.ErrSSGenWrongIndex, "ErrSSGenWrongIndex"},
{stake.ErrSSGenWrongTxTree, "ErrSSGenWrongTxTree"},
{stake.ErrSSGenNoStakebase, "ErrSSGenNoStakebase"},
{stake.ErrSSGenNoReference, "ErrSSGenNoReference"},
{stake.ErrSSGenBadReference, "ErrSSGenBadReference"},
{stake.ErrSSGenNoVotePush, "ErrSSGenNoVotePush"},
{stake.ErrSSGenBadVotePush, "ErrSSGenBadVotePush"},
{stake.ErrSSGenBadGenOuts, "ErrSSGenBadGenOuts"},
{stake.ErrSSRtxWrongNumInputs, "ErrSSRtxWrongNumInputs"},
{stake.ErrSSRtxTooManyOutputs, "ErrSSRtxTooManyOutputs"},
{stake.ErrSSRtxNoOutputs, "ErrSSRtxNoOutputs"},
{stake.ErrSSRtxWrongTxTree, "ErrSSRtxWrongTxTree"},
{stake.ErrSSRtxBadOuts, "ErrSSRtxBadOuts"},
{stake.ErrVerSStxAmts, "ErrVerSStxAmts"},
{stake.ErrVerifyInput, "ErrVerifyInput"},
{stake.ErrVerifyOutType, "ErrVerifyOutType"},
{stake.ErrVerifyTooMuchFees, "ErrVerifyTooMuchFees"},
{stake.ErrVerifySpendTooMuch, "ErrVerifySpendTooMuch"},
{stake.ErrVerifyOutputAmt, "ErrVerifyOutputAmt"},
{stake.ErrVerifyOutPkhs, "ErrVerifyOutPkhs"},
{0xffff, "Unknown ErrorCode (65535)"},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
result := test.in.String()
if result != test.want {
t.Errorf("String #%d\n got: %s want: %s", i, result,
test.want)
continue
}
}
}
// TestStakeRuleError tests the error output for the StakeRuleError type.
func TestStakeRuleError(t *testing.T) {
tests := []struct {
in stake.StakeRuleError
want string
}{
{
stake.StakeRuleError{Description: "too many inputs"},
"too many inputs",
},
{
stake.StakeRuleError{Description: "human-readable error"},
"human-readable error",
},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
result := test.in.Error()
if result != test.want {
t.Errorf("Error #%d\n got: %s want: %s", i, result,
test.want)
continue
}
}
}

72
blockchain/stake/log.go Normal file
View File

@ -0,0 +1,72 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package stake
import (
"errors"
"io"
"github.com/btcsuite/btclog"
)
// log is a logger that is initialized with no output filters. This
// means the package will not perform any logging by default until the caller
// requests it.
var log btclog.Logger
// The default amount of logging is none.
func init() {
DisableLog()
}
// DisableLog disables all library log output. Logging output is disabled
// by default until either UseLogger or SetLogWriter are called.
func DisableLog() {
log = btclog.Disabled
}
// UseLogger uses a specified Logger to output package logging info.
// This should be used in preference to SetLogWriter if the caller is also
// using btclog.
func UseLogger(logger btclog.Logger) {
log = logger
}
// SetLogWriter uses a specified io.Writer to output package logging info.
// This allows a caller to direct package logging output without needing a
// dependency on seelog. If the caller is also using btclog, UseLogger should
// be used instead.
func SetLogWriter(w io.Writer, level string) error {
if w == nil {
return errors.New("nil writer")
}
lvl, ok := btclog.LogLevelFromString(level)
if !ok {
return errors.New("invalid log level")
}
l, err := btclog.NewLoggerFromWriter(w, lvl)
if err != nil {
return err
}
UseLogger(l)
return nil
}
// logClosure is a closure that can be printed with %v to be used to
// generate expensive-to-create data for a detailed log level and avoid doing
// the work if the data isn't printed.
type logClosure func() string
func (c logClosure) String() string {
return c()
}
func newLogClosure(c func() string) logClosure {
return logClosure(c)
}

152
blockchain/stake/lottery.go Normal file
View File

@ -0,0 +1,152 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
// Contains useful functions for lottery winner and ticket number determination.
package stake
import (
"encoding/binary"
"fmt"
"github.com/decred/dcrd/chaincfg/chainhash"
)
// Hash256PRNG is a deterministic pseudorandom number generator that uses a
// 256-bit secure hashing function to generate random uint32s starting from
// an initial seed.
type Hash256PRNG struct {
seed []byte // The seed used to initialize
hashIdx int // Position in the cached hash
idx uint64 // Position in the hash iterator
seedState chainhash.Hash // Hash iterator root hash
lastHash chainhash.Hash // Cached last hash used
}
// NewHash256PRNG creates a pointer to a newly created Hash256PRNG.
func NewHash256PRNG(seed []byte) *Hash256PRNG {
// idx and lastHash are automatically initialized
// as 0. We initialize the seed by appending a constant
// to it and hashing to give 32 bytes. This ensures
// that regardless of the input, the PRNG is always
// doing a short number of rounds because it only
// has to hash < 64 byte messages. The constant is
// derived from the hexadecimal representation of
// pi.
cst := []byte{0x24, 0x3F, 0x6A, 0x88,
0x85, 0xA3, 0x08, 0xD3}
hp := new(Hash256PRNG)
hp.seed = chainhash.HashFuncB(append(seed, cst...))
initLH, err := chainhash.NewHash(hp.seed)
if err != nil {
return nil
}
hp.seedState = *initLH
hp.lastHash = *initLH
hp.idx = 0
return hp
}
// StateHash returns a hash referencing the current state of the deterministic PRNG.
func (hp *Hash256PRNG) StateHash() chainhash.Hash {
fHash := hp.lastHash
fIdx := hp.idx
fHashIdx := hp.hashIdx
finalState := make([]byte, len(fHash)+4+1)
copy(finalState, fHash[:])
binary.BigEndian.PutUint32(finalState[len(fHash):], uint32(fIdx))
finalState[len(fHash)+4] = byte(fHashIdx)
return chainhash.HashFuncH(finalState)
}
// Hash256Rand returns a uint32 random number using the pseudorandom number
// generator and updates the state.
func (hp *Hash256PRNG) Hash256Rand() uint32 {
r := binary.BigEndian.Uint32(hp.lastHash[hp.hashIdx*4 : hp.hashIdx*4+4])
hp.hashIdx++
// 'roll over' the hash index to use and store it.
if hp.hashIdx > 7 {
idxB := make([]byte, 4, 4)
binary.BigEndian.PutUint32(idxB, uint32(hp.idx))
hp.lastHash = chainhash.HashFuncH(append(hp.seed, idxB...))
hp.idx++
hp.hashIdx = 0
}
// 'roll over' the PRNG by re-hashing the seed when
// we overflow idx.
if hp.idx > 0xFFFFFFFF {
hp.seedState = chainhash.HashFuncH(hp.seedState[:])
hp.lastHash = hp.seedState
hp.idx = 0
}
return r
}
// uniformRandom returns a random number in the range [0, upperBound) while
// avoiding modulo bias, thus giving a uniform distribution within the
// specified range.
//
// Ported from
// https://github.com/conformal/clens/blob/master/src/arc4random_uniform.c
func (hp *Hash256PRNG) uniformRandom(upperBound uint32) uint32 {
var r, min uint32
if upperBound < 2 {
return 0
}
if upperBound > 0x80000000 {
min = 1 + ^upperBound
} else {
// (2**32 - (x * 2)) % x == 2**32 % x when x <= 2**31
min = ((0xFFFFFFFF - (upperBound * 2)) + 1) % upperBound
}
for {
r = hp.Hash256Rand()
if r >= min {
break
}
}
return r % upperBound
}
// intInSlice returns true if an integer is in the passed slice, false otherwise.
func intInSlice(i int, sl []int) bool {
for _, e := range sl {
if i == e {
return true
}
}
return false
}
// FindTicketIdxs finds n unique index numbers for a list of length size.
func FindTicketIdxs(size int64, n int, prng *Hash256PRNG) ([]int, error) {
if size < int64(n) {
return nil, fmt.Errorf("list size too small")
}
if size > 0xFFFFFFFF {
return nil, fmt.Errorf("list size too big")
}
sz := uint32(size)
var list []int
listLen := 0
for listLen < n {
r := int(prng.uniformRandom(sz))
if !intInSlice(r, list) {
list = append(list, r)
listLen++
}
}
return list, nil
}

View File

@ -0,0 +1,196 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package stake_test
import (
"bytes"
"encoding/binary"
"math/rand"
"reflect"
"sort"
"testing"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg/chainhash"
)
func TestBasicPRNG(t *testing.T) {
seed := chainhash.HashFuncB([]byte{0x01})
prng := stake.NewHash256PRNG(seed)
for i := 0; i < 100000; i++ {
prng.Hash256Rand()
}
lastHashExp, _ := chainhash.NewHashFromStr("24f1cd72aefbfc85a9d3e21e2eb" +
"732615688d3634bf94499af5a81e0eb45c4e4")
lastHash := prng.StateHash()
if *lastHashExp != lastHash {
t.Errorf("expected final state of %v, got %v", lastHashExp, lastHash)
}
}
type TicketData struct {
Prefix uint8 // Bucket prefix
SStxHash chainhash.Hash
SpendHash chainhash.Hash
BlockHeight int64 // Block for where the original sstx was located
TxIndex uint32 // Position within a block, in stake tree
Missed bool // Whether or not the ticket was spent
Expired bool // Whether or not the ticket expired
}
// SStxMemMap is a memory map of SStx keyed to the txHash.
type SStxMemMap map[chainhash.Hash]*TicketData
func swap(s []byte) []byte {
for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
s[i], s[j] = s[j], s[i]
}
return s
}
// TicketDataSlice is a sortable data structure of pointers to TicketData.
type TicketDataSlice []*TicketData
func NewTicketDataSliceEmpty() TicketDataSlice {
slice := make([]*TicketData, 0)
return TicketDataSlice(slice)
}
func NewTicketDataSlice(size int) TicketDataSlice {
slice := make([]*TicketData, size)
return TicketDataSlice(slice)
}
// Less determines which of two *TicketData values is smaller; used for sort.
func (tds TicketDataSlice) Less(i, j int) bool {
cmp := bytes.Compare(tds[i].SStxHash[:], tds[j].SStxHash[:])
isISmaller := (cmp == -1)
return isISmaller
}
// Swap swaps two *TicketData values.
func (tds TicketDataSlice) Swap(i, j int) { tds[i], tds[j] = tds[j], tds[i] }
// Len returns the length of the slice.
func (tds TicketDataSlice) Len() int { return len(tds) }
func TestLotteryNumSelection(t *testing.T) {
// Test finding ticket indexes.
seed := chainhash.HashFuncB([]byte{0x01})
prng := stake.NewHash256PRNG(seed)
ticketsInPool := int64(56789)
tooFewTickets := int64(4)
justEnoughTickets := int64(5)
ticketsPerBlock := 5
_, err := stake.FindTicketIdxs(tooFewTickets, ticketsPerBlock, prng)
if err == nil {
t.Errorf("got unexpected no error for FindTicketIdxs too few tickets " +
"test")
}
tickets, err := stake.FindTicketIdxs(ticketsInPool, ticketsPerBlock, prng)
if err != nil {
t.Errorf("got unexpected error for FindTicketIdxs 1 test")
}
ticketsExp := []int{34850, 8346, 27636, 54482, 25482}
if !reflect.DeepEqual(ticketsExp, tickets) {
t.Errorf("Unexpected tickets selected; got %v, want %v", tickets,
ticketsExp)
}
// Ensure that it can find all suitable ticket numbers in a small
// bucket of tickets.
tickets, err = stake.FindTicketIdxs(justEnoughTickets, ticketsPerBlock, prng)
if err != nil {
t.Errorf("got unexpected error for FindTicketIdxs 2 test")
}
ticketsExp = []int{3, 0, 4, 2, 1}
if !reflect.DeepEqual(ticketsExp, tickets) {
t.Errorf("Unexpected tickets selected; got %v, want %v", tickets,
ticketsExp)
}
lastHashExp, _ := chainhash.NewHashFromStr("e97ce54aea63a883a82871e752c" +
"6ec3c5731fffc63dafc3767c06861b0b2fa65")
lastHash := prng.StateHash()
if *lastHashExp != lastHash {
t.Errorf("expected final state of %v, got %v", lastHashExp, lastHash)
}
}
func TestTicketSorting(t *testing.T) {
ticketsPerBlock := 5
ticketPoolSize := uint16(8192)
totalTickets := uint32(ticketPoolSize) * uint32(5)
bucketsSize := 256
randomGen := rand.New(rand.NewSource(12345))
ticketMap := make([]SStxMemMap, int(bucketsSize), int(bucketsSize))
for i := 0; i < bucketsSize; i++ {
ticketMap[i] = make(SStxMemMap)
}
toMake := int(ticketPoolSize) * ticketsPerBlock
for i := 0; i < toMake; i++ {
td := new(TicketData)
rint64 := randomGen.Int63n(1 << 62)
randBytes := make([]byte, 8, 8)
binary.LittleEndian.PutUint64(randBytes, uint64(rint64))
h := chainhash.HashFuncH(randBytes)
td.SStxHash = h
prefix := byte(h[0])
ticketMap[prefix][h] = td
}
// Pre-sort with buckets (faster).
sortedSlice := make([]*TicketData, 0, totalTickets)
for i := 0; i < bucketsSize; i++ {
tempTdSlice := NewTicketDataSlice(len(ticketMap[i]))
itr := 0 // Iterator
for _, td := range ticketMap[i] {
tempTdSlice[itr] = td
itr++
}
sort.Sort(tempTdSlice)
sortedSlice = append(sortedSlice, tempTdSlice...)
}
sortedSlice1 := sortedSlice
// However, it should be the same as a sort without the buckets.
toSortSlice := make([]*TicketData, 0, totalTickets)
for i := 0; i < bucketsSize; i++ {
tempTdSlice := make([]*TicketData, len(ticketMap[i]),
len(ticketMap[i]))
itr := 0 // Iterator
for _, td := range ticketMap[i] {
tempTdSlice[itr] = td
itr++
}
toSortSlice = append(toSortSlice, tempTdSlice...)
}
sortedSlice = NewTicketDataSlice(int(totalTickets))
copy(sortedSlice, toSortSlice)
sort.Sort(TicketDataSlice(sortedSlice))
sortedSlice2 := sortedSlice
if !reflect.DeepEqual(sortedSlice1, sortedSlice2) {
t.Errorf("bucket sort failed to sort to the same slice as global sort")
}
}
func BenchmarkHashPRNG(b *testing.B) {
seed := chainhash.HashFuncB([]byte{0x01})
prng := stake.NewHash256PRNG(seed)
for n := 0; n < b.N; n++ {
prng.Hash256Rand()
}
}

1089
blockchain/stake/staketx.go Normal file

File diff suppressed because it is too large

File diff suppressed because it is too large

1641
blockchain/stake/ticketdb.go Normal file

File diff suppressed because it is too large


@@ -0,0 +1,272 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package stake_test
import (
"bytes"
"compress/bzip2"
"encoding/gob"
"fmt"
"math/big"
"os"
"path/filepath"
"reflect"
"sort"
"testing"
"time"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
// cloneTicketDB makes a deep copy of a ticket DB by
// serializing it to a gob and then deserializing it
// into an empty container.
func cloneTicketDB(tmdb *stake.TicketDB) (stake.TicketMaps, error) {
mapsPointer := tmdb.DumpMapsPointer()
mapsBytes, err := mapsPointer.GobEncode()
if err != nil {
return stake.TicketMaps{},
fmt.Errorf("clone db error: could not serialize ticketMaps")
}
var mapsCopy stake.TicketMaps
if err := mapsCopy.GobDecode(mapsBytes); err != nil {
return stake.TicketMaps{},
fmt.Errorf("clone db error: could not deserialize " +
"ticketMaps")
}
return mapsCopy, nil
}
// hashInSlice returns whether a hash exists in a slice or not.
func hashInSlice(h *chainhash.Hash, list []*chainhash.Hash) bool {
for _, hash := range list {
if h.IsEqual(hash) {
return true
}
}
return false
}
func TestTicketDB(t *testing.T) {
// Declare some useful variables
testBCHeight := int64(168)
// Set up a DB
database, err := database.CreateDB("leveldb", "ticketdb_test")
if err != nil {
t.Errorf("Db create error: %v", err.Error())
}
// Make a new tmdb to fill with dummy live and used tickets
var tmdb stake.TicketDB
tmdb.Initialize(simNetParams, database)
filename := filepath.Join("..", "/../blockchain/testdata", "blocks0to168.bz2")
fi, err := os.Open(filename)
if err != nil {
t.Fatalf("unable to open test data file %v: %v", filename, err)
}
defer fi.Close()
bcStream := bzip2.NewReader(fi)
// Create a buffer of the read file
bcBuf := new(bytes.Buffer)
bcBuf.ReadFrom(bcStream)
// Create decoder from the buffer and a map to store the data
bcDecoder := gob.NewDecoder(bcBuf)
blockchain := make(map[int64][]byte)
// Decode the blockchain into the map
if err := bcDecoder.Decode(&blockchain); err != nil {
t.Errorf("error decoding test blockchain")
}
var CopyOfMapsAtBlock50, CopyOfMapsAtBlock168 stake.TicketMaps
var ticketsToSpendIn167 []chainhash.Hash
var sortedTickets167 []*stake.TicketData
for i := int64(0); i <= testBCHeight; i++ {
block, err := dcrutil.NewBlockFromBytes(blockchain[i])
if err != nil {
t.Errorf("block deserialization error on block %v", i)
}
block.SetHeight(i)
database.InsertBlock(block)
tmdb.InsertBlock(block)
if i == 50 {
// Create snapshot of tmdb at block 50
CopyOfMapsAtBlock50, err = cloneTicketDB(&tmdb)
if err != nil {
t.Errorf("db cloning at block 50 failure! %v", err)
}
}
// Test to make sure that ticket selection is working correctly.
if i == 167 {
// Sort the entire list of tickets lexicographically by sorting
// each bucket and then appending it to the list. Then store it
// to use in the next block.
totalTickets := 0
sortedSlice := make([]*stake.TicketData, 0)
for i := 0; i < stake.BucketsSize; i++ {
tix, err := tmdb.DumpLiveTickets(uint8(i))
if err != nil {
t.Errorf("error dumping live tickets")
}
mapLen := len(tix)
totalTickets += mapLen
tempTdSlice := stake.NewTicketDataSlice(mapLen)
itr := 0 // Iterator
for _, td := range tix {
tempTdSlice[itr] = td
itr++
}
sort.Sort(tempTdSlice)
sortedSlice = append(sortedSlice, tempTdSlice...)
}
sortedTickets167 = sortedSlice
}
if i == 168 {
parentBlock, err := dcrutil.NewBlockFromBytes(blockchain[i-1])
if err != nil {
t.Errorf("block deserialization error on block %v", i-1)
}
pbhB, err := parentBlock.MsgBlock().Header.Bytes()
if err != nil {
t.Errorf("block header serialization error")
}
prng := stake.NewHash256PRNG(pbhB)
ts, err := stake.FindTicketIdxs(int64(len(sortedTickets167)),
int(simNetParams.TicketsPerBlock), prng)
if err != nil {
t.Errorf("failure on FindTicketIdxs")
}
for _, idx := range ts {
ticketsToSpendIn167 =
append(ticketsToSpendIn167, sortedTickets167[idx].SStxHash)
}
// Make sure that the tickets that were supposed to be spent or
// missed were.
spentTix, err := tmdb.DumpSpentTickets(i)
if err != nil {
t.Errorf("DumpSpentTickets failure")
}
for _, h := range ticketsToSpendIn167 {
if _, ok := spentTix[h]; !ok {
t.Errorf("missing ticket %v that should have been missed "+
"or spent in block %v", h, i)
}
}
// Create snapshot of tmdb at block 168
CopyOfMapsAtBlock168, err = cloneTicketDB(&tmdb)
if err != nil {
t.Errorf("db cloning at block 168 failure! %v", err)
}
}
}
// Roll the chain back to height 50.
_, _, _, err = tmdb.RemoveBlockToHeight(50)
if err != nil {
t.Errorf("error: %v", err)
}
// Test that the rollback restored the state captured in the block 50 snapshot
if !reflect.DeepEqual(tmdb.DumpMapsPointer(), CopyOfMapsAtBlock50) {
t.Errorf("The ticket db did not restore to a previous block height correctly!")
}
// Test rescanning a ticket db
err = tmdb.RescanTicketDB()
if err != nil {
t.Errorf("rescanticketdb err: %v", err.Error())
}
// Test that the rescan rebuilt the state captured in the block 168 snapshot
if !reflect.DeepEqual(tmdb.DumpMapsPointer(), CopyOfMapsAtBlock168) {
t.Errorf("The ticket db did not rescan to HEAD correctly!")
}
err = os.Mkdir("testdata/", os.FileMode(0700))
if err != nil {
t.Error(err)
}
// Store the ticket db to disk
err = tmdb.Store("testdata/", "testtmdb")
if err != nil {
t.Errorf("error: %v", err)
}
var tmdb2 stake.TicketDB
err = tmdb2.LoadTicketDBs("testdata/", "testtmdb", simNetParams, database)
if err != nil {
t.Errorf("error: %v", err)
}
// Test that the stored and reloaded db matches the previously rescanned one
if !reflect.DeepEqual(tmdb.DumpMapsPointer(), tmdb2.DumpMapsPointer()) {
t.Errorf("The stored ticket db did not match after reloading!")
}
tmdb2.Close()
// Test dumping missed tickets from block 152
missedIn152, _ := chainhash.NewHashFromStr(
"84f7f866b0af1cc278cb8e0b2b76024a07542512c76487c83628c14c650de4fa")
tmdb.RemoveBlockToHeight(152)
missedTix, err := tmdb.DumpMissedTickets()
if err != nil {
t.Errorf("err dumping missed tix: %v", err.Error())
}
if _, exists := missedTix[*missedIn152]; !exists {
t.Errorf("couldn't find missed ticket %v in tmdb @ block 152!",
missedIn152)
}
tmdb.RescanTicketDB()
// Make sure that the revoked map contains the revoked tx
revokedSlice := []*chainhash.Hash{missedIn152}
revokedTix, err := tmdb.DumpRevokedTickets()
if err != nil {
t.Errorf("err dumping revoked tix: %v", err.Error())
}
if len(revokedTix) != 1 {
t.Errorf("revoked ticket map is wrong len, got %v, want %v",
len(revokedTix), 1)
}
if _, wasMissedIn152 := revokedTix[*revokedSlice[0]]; !wasMissedIn152 {
t.Errorf("revoked ticket map did not include tickets missed in " +
"block 152 and later revoked")
}
database.Close()
tmdb.Close()
os.RemoveAll("ticketdb_test")
os.Remove("./ticketdb_test.ver")
os.Remove("testdata/testtmdb")
os.Remove("testdata")
}
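`cloneTicketDB` above snapshots the ticket database by round-tripping it through gob: encoding walks the whole structure, so the decoded copy shares no memory with the original. The same deep-copy trick in isolation, with illustrative types in place of `stake.TicketMaps`:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// deepCopyMap clones a map by serializing it to gob and decoding into a
// fresh value, so nested slices are copied rather than aliased.
// Illustrative types only; the real code clones stake.TicketMaps.
func deepCopyMap(src map[string][]int) (map[string][]int, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(src); err != nil {
		return nil, fmt.Errorf("encode: %v", err)
	}
	var dst map[string][]int
	if err := gob.NewDecoder(&buf).Decode(&dst); err != nil {
		return nil, fmt.Errorf("decode: %v", err)
	}
	return dst, nil
}

func main() {
	src := map[string][]int{"a": {1, 2}}
	dst, err := deepCopyMap(src)
	if err != nil {
		panic(err)
	}
	dst["a"][0] = 99 // mutating the copy must not touch the original
	fmt.Println(src["a"][0], dst["a"][0]) // prints: 1 99
}
```

This is why the test can mutate the live `tmdb` after taking a snapshot and still compare against the untouched copy later.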

261
blockchain/stakeext.go Normal file

@@ -0,0 +1,261 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"fmt"
"sort"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg/chainhash"
)
// GetNextWinningTickets returns the next tickets eligible for spending as SSGen
// votes on the top block. It also returns the ticket pool size and the final
// lottery state.
// This function is NOT safe for concurrent access.
func (b *BlockChain) GetNextWinningTickets() ([]chainhash.Hash, int, [6]byte,
error) {
winningTickets, poolSize, finalState, _, err :=
b.getWinningTicketsWithStore(b.bestChain)
if err != nil {
return nil, 0, [6]byte{}, err
}
return winningTickets, poolSize, finalState, nil
}
// getWinningTicketsWithStore is a helper function that returns winning tickets
// along with the ticket pool size and transaction store for the given node.
// Note that this function evaluates the lottery data predominantly for mining
// purposes; that is, it retrieves the lottery data which needs to go into
// the next block when mining on top of this block.
// This function is NOT safe for concurrent access.
func (b *BlockChain) getWinningTicketsWithStore(node *blockNode) ([]chainhash.Hash,
int, [6]byte, TicketStore, error) {
if node.height < b.chainParams.StakeEnabledHeight {
return []chainhash.Hash{}, 0, [6]byte{}, nil, nil
}
evalLotteryWinners := false
if node.height >= b.chainParams.StakeValidationHeight-1 {
evalLotteryWinners = true
}
block, err := b.getBlockFromHash(node.hash)
if err != nil {
return nil, 0, [6]byte{}, nil, err
}
headerB, err := node.header.Bytes()
if err != nil {
return nil, 0, [6]byte{}, nil, err
}
ticketStore, err := b.fetchTicketStore(node)
if err != nil {
return nil, 0, [6]byte{}, nil,
fmt.Errorf("Failed to generate ticket store for node %v; "+
"error given: %v", node.hash, err)
}
if ticketStore != nil {
// We need the viewpoint of spendable tickets given that the
// current block was actually added.
err = b.connectTickets(ticketStore, node, block)
if err != nil {
return nil, 0, [6]byte{}, nil, err
}
}
// Sort the entire list of tickets lexicographically by sorting
// each bucket and then appending it to the list.
tpdBucketMap := make(map[uint8][]*TicketPatchData)
for _, tpd := range ticketStore {
// Bucket does not exist.
if _, ok := tpdBucketMap[tpd.td.Prefix]; !ok {
tpdBucketMap[tpd.td.Prefix] = make([]*TicketPatchData, 1)
tpdBucketMap[tpd.td.Prefix][0] = tpd
} else {
// Bucket exists.
data := tpdBucketMap[tpd.td.Prefix]
tpdBucketMap[tpd.td.Prefix] = append(data, tpd)
}
}
totalTickets := 0
sortedSlice := make([]*stake.TicketData, 0)
for i := 0; i < stake.BucketsSize; i++ {
ltb, err := b.GenerateLiveTicketBucket(ticketStore, tpdBucketMap,
uint8(i))
if err != nil {
h := node.hash
str := fmt.Sprintf("Failed to generate a live ticket bucket "+
"to evaluate the lottery data for node %v, height %v! Error "+
"given: %v",
h,
node.height,
err.Error())
return nil, 0, [6]byte{}, nil, fmt.Errorf("%s", str)
}
mapLen := len(ltb)
tempTdSlice := stake.NewTicketDataSlice(mapLen)
itr := 0 // Iterator
for _, td := range ltb {
tempTdSlice[itr] = td
itr++
totalTickets++
}
sort.Sort(tempTdSlice)
sortedSlice = append(sortedSlice, tempTdSlice...)
}
// Use the parent block's header to seed a PRNG that picks the
// lottery winners.
winningTickets := make([]chainhash.Hash, 0)
var finalState [6]byte
stateBuffer := make([]byte, 0,
(b.chainParams.TicketsPerBlock+1)*chainhash.HashSize)
if evalLotteryWinners {
ticketsPerBlock := int(b.chainParams.TicketsPerBlock)
prng := stake.NewHash256PRNG(headerB)
ts, err := stake.FindTicketIdxs(int64(totalTickets), ticketsPerBlock, prng)
if err != nil {
return nil, 0, [6]byte{}, nil, err
}
for _, idx := range ts {
winningTickets = append(winningTickets, sortedSlice[idx].SStxHash)
stateBuffer = append(stateBuffer, sortedSlice[idx].SStxHash[:]...)
}
lastHash := prng.StateHash()
stateBuffer = append(stateBuffer, lastHash[:]...)
copy(finalState[:], chainhash.HashFuncB(stateBuffer)[0:6])
}
return winningTickets, totalTickets, finalState, ticketStore, nil
}
// getWinningTicketsInclStore is a helper function for block validation that
// returns winning tickets along with the ticket pool size and transaction
// store for the given node.
// Note that this function is used for finding the lottery data when
// evaluating a block that builds on a tip, not for mining.
// This function is NOT safe for concurrent access.
func (b *BlockChain) getWinningTicketsInclStore(node *blockNode,
ticketStore TicketStore) ([]chainhash.Hash, int, [6]byte, error) {
if node.height < b.chainParams.StakeEnabledHeight {
return []chainhash.Hash{}, 0, [6]byte{}, nil
}
evalLotteryWinners := false
if node.height >= b.chainParams.StakeValidationHeight-1 {
evalLotteryWinners = true
}
parentHeaderB, err := node.parent.header.Bytes()
if err != nil {
return nil, 0, [6]byte{}, err
}
// Sort the entire list of tickets lexicographically by sorting
// each bucket and then appending it to the list.
tpdBucketMap := make(map[uint8][]*TicketPatchData)
for _, tpd := range ticketStore {
// Bucket does not exist.
if _, ok := tpdBucketMap[tpd.td.Prefix]; !ok {
tpdBucketMap[tpd.td.Prefix] = make([]*TicketPatchData, 1)
tpdBucketMap[tpd.td.Prefix][0] = tpd
} else {
// Bucket exists.
data := tpdBucketMap[tpd.td.Prefix]
tpdBucketMap[tpd.td.Prefix] = append(data, tpd)
}
}
totalTickets := 0
sortedSlice := make([]*stake.TicketData, 0)
for i := 0; i < stake.BucketsSize; i++ {
ltb, err := b.GenerateLiveTicketBucket(ticketStore, tpdBucketMap, uint8(i))
if err != nil {
h := node.hash
str := fmt.Sprintf("Failed to generate a live ticket bucket "+
"to evaluate the lottery data for node %v, height %v! Error "+
"given: %v",
h,
node.height,
err.Error())
return nil, 0, [6]byte{}, fmt.Errorf("%s", str)
}
mapLen := len(ltb)
tempTdSlice := stake.NewTicketDataSlice(mapLen)
itr := 0 // Iterator
for _, td := range ltb {
tempTdSlice[itr] = td
itr++
totalTickets++
}
sort.Sort(tempTdSlice)
sortedSlice = append(sortedSlice, tempTdSlice...)
}
// Use the parent block's header to seed a PRNG that picks the
// lottery winners.
winningTickets := make([]chainhash.Hash, 0)
var finalState [6]byte
stateBuffer := make([]byte, 0,
(b.chainParams.TicketsPerBlock+1)*chainhash.HashSize)
if evalLotteryWinners {
ticketsPerBlock := int(b.chainParams.TicketsPerBlock)
prng := stake.NewHash256PRNG(parentHeaderB)
ts, err := stake.FindTicketIdxs(int64(totalTickets), ticketsPerBlock, prng)
if err != nil {
return nil, 0, [6]byte{}, err
}
for _, idx := range ts {
winningTickets = append(winningTickets, sortedSlice[idx].SStxHash)
stateBuffer = append(stateBuffer, sortedSlice[idx].SStxHash[:]...)
}
lastHash := prng.StateHash()
stateBuffer = append(stateBuffer, lastHash[:]...)
copy(finalState[:], chainhash.HashFuncB(stateBuffer)[0:6])
}
return winningTickets, totalTickets, finalState, nil
}
// GetWinningTickets takes a block hash and returns the tickets eligible for
// spending as SSGen votes on that block, along with the ticket pool size and
// the final lottery state.
// This function is NOT safe for concurrent access.
func (b *BlockChain) GetWinningTickets(nodeHash chainhash.Hash) ([]chainhash.Hash,
int, [6]byte, error) {
var node *blockNode
if n, exists := b.index[nodeHash]; exists {
node = n
} else {
node, _ = b.findNode(&nodeHash)
}
if node == nil {
return nil, 0, [6]byte{}, fmt.Errorf("node doesn't exist")
}
winningTickets, poolSize, finalState, _, err :=
b.getWinningTicketsWithStore(node)
if err != nil {
return nil, 0, [6]byte{}, err
}
return winningTickets, poolSize, finalState, nil
}
// GetMissedTickets returns a list of currently missed tickets.
// This function is NOT safe for concurrent access.
func (b *BlockChain) GetMissedTickets() []chainhash.Hash {
missedTickets := b.tmdb.GetTicketHashesForMissed()
return missedTickets
}

277
blockchain/subsidy.go Normal file

@@ -0,0 +1,277 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"bytes"
"fmt"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/txscript"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
// calcBlockSubsidy returns the subsidy amount a block at the provided height
// should have. This is mainly used for determining how much the coinbase for
// newly generated blocks awards as well as validating that the coinbase for
// blocks has the expected value.
//
// Subsidy calculation for exponential reductions:
// 0 for i in range (0, height / ReductionInterval):
// 1 subsidy *= MulSubsidy
// 2 subsidy /= DivSubsidy
//
// Safe for concurrent access.
func calcBlockSubsidy(height int64, params *chaincfg.Params) int64 {
// Block height 1 subsidy is 'special' and used to
// distribute initial tokens, if any.
if height == 1 {
return params.BlockOneSubsidy()
}
iterations := height / params.ReductionInterval
subsidy := params.BaseSubsidy
// These values could be precomputed in a lookup table for faster access,
// but the calculation is already fast until very far into the chain. An
// alternative is to store the cumulative subsidy in each block node and
// apply the multiplication and division incrementally when connecting a
// block.
if iterations > 0 {
for i := int64(0); i < iterations; i++ {
subsidy *= params.MulSubsidy
subsidy /= params.DivSubsidy
}
}
return subsidy
}
// CalcBlockWorkSubsidy calculates the proof of work subsidy for a block as a
// proportion of the total subsidy.
func CalcBlockWorkSubsidy(height int64, voters uint16,
params *chaincfg.Params) int64 {
subsidy := calcBlockSubsidy(height, params)
proportionWork := int64(params.WorkRewardProportion)
proportions := int64(params.TotalSubsidyProportions())
subsidy *= proportionWork
subsidy /= proportions
// Ignore the voters field of the header before we're at a point
// where there are any voters.
if height < params.StakeValidationHeight {
return subsidy
}
// If there are no voters, subsidy is 0. The block will fail later anyway.
if voters == 0 {
return 0
}
// Adjust for the number of voters. This shouldn't ever overflow if you start
// with 50 * 10^8 Atoms and voters and potentialVoters are uint16.
potentialVoters := params.TicketsPerBlock
actual := (int64(voters) * subsidy) / int64(potentialVoters)
return actual
}
// CalcStakeVoteSubsidy calculates the subsidy for a stake vote based on the height
// of its input SStx.
//
// Safe for concurrent access.
func CalcStakeVoteSubsidy(height int64, params *chaincfg.Params) int64 {
// Calculate the actual reward for this block, then further reduce reward
// proportional to StakeRewardProportion.
// Note that voters/potential voters is 1, so that vote reward is calculated
// irrespective of block reward.
subsidy := calcBlockSubsidy(height, params)
proportionStake := int64(params.StakeRewardProportion)
proportions := int64(params.TotalSubsidyProportions())
subsidy *= proportionStake
subsidy /= (proportions * int64(params.TicketsPerBlock))
return subsidy
}
// CalcBlockTaxSubsidy calculates the subsidy for the organization address in the
// coinbase.
//
// Safe for concurrent access.
func CalcBlockTaxSubsidy(height int64, voters uint16,
params *chaincfg.Params) int64 {
if params.BlockTaxProportion == 0 {
return 0
}
subsidy := calcBlockSubsidy(int64(height), params)
proportionTax := int64(params.BlockTaxProportion)
proportions := int64(params.TotalSubsidyProportions())
subsidy *= proportionTax
subsidy /= proportions
// Assume all voters 'present' before stake voting is turned on.
if height < params.StakeValidationHeight {
voters = 5
}
// If there are no voters, subsidy is 0. The block will fail later anyway.
if voters == 0 && height >= params.StakeValidationHeight {
return 0
}
// Adjust for the number of voters. This shouldn't ever overflow if you start
// with 50 * 10^8 Atoms and voters and potentialVoters are uint16.
potentialVoters := params.TicketsPerBlock
adjusted := (int64(voters) * subsidy) / int64(potentialVoters)
return adjusted
}
// BlockOneCoinbasePaysTokens checks to see if the first block coinbase pays
// out to the network initial token ledger.
func BlockOneCoinbasePaysTokens(tx *dcrutil.Tx, params *chaincfg.Params) error {
// If no ledger is specified, just return nil (nothing to check).
if len(params.BlockOneLedger) == 0 {
return nil
}
if tx.MsgTx().LockTime != 0 {
errStr := "block 1 coinbase has invalid locktime"
return ruleError(ErrBlockOneTx, errStr)
}
if tx.MsgTx().Expiry != wire.NoExpiryValue {
errStr := "block 1 coinbase has invalid expiry"
return ruleError(ErrBlockOneTx, errStr)
}
if tx.MsgTx().TxIn[0].Sequence != wire.MaxTxInSequenceNum {
errStr := "block 1 coinbase not finalized"
return ruleError(ErrBlockOneInputs, errStr)
}
if len(tx.MsgTx().TxOut) == 0 {
errStr := "coinbase outputs empty in block 1"
return ruleError(ErrBlockOneOutputs, errStr)
}
ledger := params.BlockOneLedger
if len(ledger) != len(tx.MsgTx().TxOut) {
errStr := fmt.Sprintf("wrong number of outputs in block 1 coinbase; "+
"got %v, expected %v", len(tx.MsgTx().TxOut), len(ledger))
return ruleError(ErrBlockOneOutputs, errStr)
}
// Check the addresses and output amounts against those in the ledger.
for i, txout := range tx.MsgTx().TxOut {
if txout.Version != txscript.DefaultScriptVersion {
errStr := fmt.Sprintf("bad block one output version; want %v, got %v",
txscript.DefaultScriptVersion, txout.Version)
return ruleError(ErrBlockOneOutputs, errStr)
}
// There should only be one address.
_, addrs, _, err :=
txscript.ExtractPkScriptAddrs(txout.Version, txout.PkScript, params)
if err != nil {
return err
}
if len(addrs) != 1 {
errStr := "too many addresses in output"
return ruleError(ErrBlockOneOutputs, errStr)
}
addrLedger, err := dcrutil.DecodeAddress(ledger[i].Address, params)
if err != nil {
return err
}
if !bytes.Equal(addrs[0].ScriptAddress(), addrLedger.ScriptAddress()) {
errStr := fmt.Sprintf("address in output %v has non matching "+
"address; got %v (hash160 %x), want %v (hash160 %x)",
i,
addrs[0].EncodeAddress(),
addrs[0].ScriptAddress(),
addrLedger.EncodeAddress(),
addrLedger.ScriptAddress())
return ruleError(ErrBlockOneOutputs, errStr)
}
if txout.Value != ledger[i].Amount {
errStr := fmt.Sprintf("address in output %v has non matching "+
"amount; got %v, want %v", i, txout.Value, ledger[i].Amount)
return ruleError(ErrBlockOneOutputs, errStr)
}
}
return nil
}
// CoinbasePaysTax checks to see if a given block's coinbase correctly pays
// tax to the developer organization.
func CoinbasePaysTax(tx *dcrutil.Tx, height uint32, voters uint16,
params *chaincfg.Params) error {
// Taxes only apply from block 2 onwards.
if height <= 1 {
return nil
}
// Tax is disabled.
if params.BlockTaxProportion == 0 {
return nil
}
if len(tx.MsgTx().TxOut) == 0 {
errStr := "invalid coinbase (no outputs)"
return ruleError(ErrNoTxOutputs, errStr)
}
// Coinbase output 0 must be the subsidy to the dev organization.
taxPkVersion := tx.MsgTx().TxOut[0].Version
taxPkScript := tx.MsgTx().TxOut[0].PkScript
class, addrs, _, err :=
txscript.ExtractPkScriptAddrs(taxPkVersion, taxPkScript, params)
if err != nil {
return err
}
// The script must be a standard pay-to-* class.
if !(class == txscript.ScriptHashTy ||
class == txscript.PubKeyHashTy ||
class == txscript.PubKeyTy) {
errStr := "wrong script class for tax output"
return ruleError(ErrNoTax, errStr)
}
// There should only be one address.
if len(addrs) != 1 {
errStr := "no or too many addresses in output"
return ruleError(ErrNoTax, errStr)
}
// Decode the organization address.
addrOrg, err := dcrutil.DecodeAddress(params.OrganizationAddress, params)
if err != nil {
return err
}
if !bytes.Equal(addrs[0].ScriptAddress(), addrOrg.ScriptAddress()) {
errStr := fmt.Sprintf("address in output 0 has non matching org "+
"address; got %v (hash160 %x), want %v (hash160 %x)",
addrs[0].EncodeAddress(),
addrs[0].ScriptAddress(),
addrOrg.EncodeAddress(),
addrOrg.ScriptAddress())
return ruleError(ErrNoTax, errStr)
}
// Get the amount of subsidy that should have been paid out to
// the organization, then check it.
orgSubsidy := CalcBlockTaxSubsidy(int64(height), voters, params)
amountFound := tx.MsgTx().TxOut[0].Value
if orgSubsidy != amountFound {
errStr := fmt.Sprintf("amount in output 0 has non matching org "+
"calculated amount; got %v, want %v", amountFound, orgSubsidy)
return ruleError(ErrNoTax, errStr)
}
return nil
}
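The doc comment on calcBlockSubsidy sketches the reduction as repeated integer multiply-then-divide. A standalone version with assumed parameters (the real values are chaincfg constants; the numbers below are purely illustrative):

```go
package main

import "fmt"

// subsidyAt applies the exponential reduction from calcBlockSubsidy:
// every reductionInterval blocks, subsidy = subsidy * mul / div, done
// in integer math so every node computes the identical value. The
// parameter values used by callers here are illustrative, not the
// mainnet constants.
func subsidyAt(height, base, mul, div, reductionInterval int64) int64 {
	subsidy := base
	for i := int64(0); i < height/reductionInterval; i++ {
		subsidy = subsidy * mul / div
	}
	return subsidy
}

func main() {
	const base = 1000000
	// With mul/div = 100/101, each interval shaves off roughly 1%.
	fmt.Println(subsidyAt(0, base, 100, 101, 6144))    // prints: 1000000
	fmt.Println(subsidyAt(6144, base, 100, 101, 6144)) // prints: 990099
	fmt.Println(subsidyAt(12288, base, 100, 101, 6144))
}
```

Doing the reduction with integer multiply/divide (rather than floating point) keeps consensus deterministic across platforms, at the cost of truncation at each step.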


@@ -0,0 +1,55 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain_test
import (
"testing"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/chaincfg"
)
func TestBlockSubsidy(t *testing.T) {
mainnet := &chaincfg.MainNetParams
totalSubsidy := mainnet.BlockOneSubsidy()
for i := int64(0); ; i++ {
// Genesis block or first block.
if i == 0 || i == 1 {
continue
}
if i%mainnet.ReductionInterval == 0 {
numBlocks := mainnet.ReductionInterval
// First reduction interval, which is reduction interval - 2
// to skip the genesis block and block one.
if i == mainnet.ReductionInterval {
numBlocks -= 2
}
height := i - numBlocks
work := blockchain.CalcBlockWorkSubsidy(height,
mainnet.TicketsPerBlock, mainnet)
stake := blockchain.CalcStakeVoteSubsidy(height, mainnet) *
int64(mainnet.TicketsPerBlock)
tax := blockchain.CalcBlockTaxSubsidy(height, mainnet.TicketsPerBlock,
mainnet)
if (work + stake + tax) == 0 {
break
}
totalSubsidy += ((work + stake + tax) * numBlocks)
// First reduction interval, subtract the stake subsidy for
// blocks before the staking system is enabled.
if i == mainnet.ReductionInterval {
totalSubsidy -= stake * (mainnet.StakeValidationHeight - 2)
}
}
}
if totalSubsidy != 2099999999800912 {
t.Errorf("Bad total subsidy; want 2099999999800912, got %v", totalSubsidy)
}
}

Binary file not shown.

BIN
blockchain/testdata/blocks0to168.bz2 vendored Normal file

Binary file not shown.


@@ -1,180 +0,0 @@
File path: reorgTest/blk_0_to_4.dat
Block 0:
f9beb4d9
1d010000
01000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 3ba3edfd 7a7b12b2 7ac72c3e 67768f61 7fc81bc3 888a5132 3a9fb8aa
4b1e5e4a 29ab5f49 ffff001d 1dac2b7c
01
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff4d04ff ff001d01 04455468 65205469 6d657320 30332f4a
616e2f32 30303920 4368616e 63656c6c 6f72206f 6e206272 696e6b20 6f662073
65636f6e 64206261 696c6f75 7420666f 72206261 6e6b73ff ffffff01 00f2052a
01000000 43410467 8afdb0fe 55482719 67f1a671 30b7105c d6a828e0 3909a679
62e0ea1f 61deb649 f6bc3f4c ef38c4f3 5504e51e c112de5c 384df7ba 0b8d578a
4c702b6b f11d5fac 00000000
Block 1:
f9beb4d9
d4000000
01000000 6fe28c0a b6f1b372 c1a6a246 ae63f74f 931e8365 e15a089c 68d61900
00000000 3bbd67ad e98fbbb7 0718cd80 f9e9acf9 3b5fae91 7bb2b41d 4c3bb82c
77725ca5 81ad5f49 ffff001d 44e69904
01
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff04722f 2e2bffff ffff0100 f2052a01 00000043 41046868
0737c76d abb801cb 2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02
b5ac9e8b 4c9f49be 5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ac00
000000
Block 2:
f9beb4d9
95010000
01000000 13ca7940 4c11c63e ca906bbd f190b751 2872b857 1b5143ae e8cb5737
00000000 fc07c983 d7391736 0aeda657 29d0d4d3 2533eb84 76ee9d64 aa27538f
9b4fc00a d9af5f49 ffff001d 630bea22
02
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff04eb96 14e5ffff ffff0100 f2052a01 00000043 41046868
0737c76d abb801cb 2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02
b5ac9e8b 4c9f49be 5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ac00
000000
01000000 0163451d 1002611c 1388d5ba 4ddfdf99 196a86b5 990fb5b0 dc786207
4fdcb8ee d2000000 004a4930 46022100 3dde52c6 5e339f45 7fe1015e 70eed208
872eb71e dd484c07 206b190e cb2ec3f8 02210011 c78dcfd0 3d43fa63 61242a33
6291ba2a 8c1ef5bc d5472126 2468f2bf 8dee4d01 ffffffff 0200ca9a 3b000000
001976a9 14cb2abd e8bccacc 32e893df 3a054b9e f7f227a4 ce88ac00 286bee00
00000019 76a914ee 26c56fc1 d942be8d 7a24b2a1 001dd894 69398088 ac000000
00
Block 3:
f9beb4d9
96020000
01000000 7d338254 0506faab 0d4cf179 45dda023 49db51f9 6233f24c 28002258
00000000 4806fe80 bf85931b 882ea645 77ca5a03 22bb8af2 3f277b20 55f160cd
972c8e8b 31b25f49 ffff001d e8f0c653
03
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff044abd 8159ffff ffff0100 f2052a01 00000043 4104b95c
249d84f4 17e3e395 a1274254 28b54067 1cc15881 eb828c17 b722a53f c599e21c
a5e56c90 f340988d 3933acc7 6beb832f d64cab07 8ddf3ce7 32923031 d1a8ac00
000000
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
bad253d3 77010000 008c4930 46022100 96ee0d02 b35fd61e 4960b44f f396f67e
01fe17f9 de4e0c17 b6a963bd ab2b50a6 02210034 920d4daa 7e9f8abe 5675c931
495809f9 0b9c1189 d05fbaf1 dd6696a5 b0d8f301 41046868 0737c76d abb801cb
2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02 b5ac9e8b 4c9f49be
5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ffff ffff0100 286bee00
00000019 76a914c5 22664fb0 e55cdc5c 0cea73b4 aad97ec8 34323288 ac000000
00
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
bad253d3 77000000 008c4930 46022100 b08b922a c4bde411 1c229f92 9fe6eb6a
50161f98 1f4cf47e a9214d35 bf74d380 022100d2 f6640327 e677a1e1 cc474991
b9a48ba5 bd1e0c94 d1c8df49 f7b0193b 7ea4fa01 4104b95c 249d84f4 17e3e395
a1274254 28b54067 1cc15881 eb828c17 b722a53f c599e21c a5e56c90 f340988d
3933acc7 6beb832f d64cab07 8ddf3ce7 32923031 d1a8ffff ffff0100 ca9a3b00
00000019 76a914c5 22664fb0 e55cdc5c 0cea73b4 aad97ec8 34323288 ac000000
00
Block 4:
f9beb4d9
73010000
01000000 5da36499 06f35e09 9be42a1d 87b6dd42 11bc1400 6c220694 0807eaae
00000000 48eeeaed 2d9d8522 e6201173 743823fd 4b87cd8a ca8e6408 ec75ca38
302c2ff0 89b45f49 ffff001d 00530839
02
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff04d41d 2213ffff ffff0100 f2052a01 00000043 4104678a
fdb0fe55 48271967 f1a67130 b7105cd6 a828e039 09a67962 e0ea1f61 deb649f6
bc3f4cef 38c4f355 04e51ec1 12de5c38 4df7ba0b 8d578a4c 702b6bf1 1d5fac00
000000
01000000 0163451d 1002611c 1388d5ba 4ddfdf99 196a86b5 990fb5b0 dc786207
4fdcb8ee d2000000 004a4930 46022100 8c8fd57b 48762135 8d8f3e69 19f33e08
804736ff 83db47aa 248512e2 6df9b8ba 022100b0 c59e5ee7 bfcbfcd1 a4d83da9
55fb260e fda7f42a 25522625 a3d6f2d9 1174a701 ffffffff 0100f205 2a010000
001976a9 14c52266 4fb0e55c dc5c0cea 73b4aad9 7ec83432 3288ac00 000000
File path: reorgTest/blk_3A.dat
Block 3A:
f9beb4d9
96020000
01000000 7d338254 0506faab 0d4cf179 45dda023 49db51f9 6233f24c 28002258
00000000 5a15f573 1177a353 bdca7aab 20e16624 dfe90adc 70accadc 68016732
302c20a7 31b25f49 ffff001d 6a901440
03
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff04ad1b e7d5ffff ffff0100 f2052a01 00000043 4104ed83
704c95d8 29046f1a c2780621 1132102c 34e9ac7f fa1b7111 0658e5b9 d1bdedc4
16f5cefc 1db0625c d0c75de8 192d2b59 2d7e3b00 bcfb4a0e 860d880f d1fcac00
000000
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
bad253d3 77010000 008c4930 46022100 96ee0d02 b35fd61e 4960b44f f396f67e
01fe17f9 de4e0c17 b6a963bd ab2b50a6 02210034 920d4daa 7e9f8abe 5675c931
495809f9 0b9c1189 d05fbaf1 dd6696a5 b0d8f301 41046868 0737c76d abb801cb
2204f57d be4e4579 e4f710cd 67dc1b42 27592c81 e9b5cf02 b5ac9e8b 4c9f49be
5251056b 6a6d011e 4c37f6b6 d17ede6b 55faa235 19e2ffff ffff0100 286bee00
00000019 76a914c5 22664fb0 e55cdc5c 0cea73b4 aad97ec8 34323288 ac000000
00
01000000 01f287b5 e067e1cf 80f7da8a f89917b5 505094db d82412d9 35b665eb
bad253d3 77000000 008c4930 46022100 9cc67ddd aa6f592a 6b2babd4 d6ff954f
25a784cf 4fe4bb13 afb9f49b 08955119 022100a2 d99545b7 94080757 fcf2b563
f2e91287 86332f46 0ec6b90f f085fb28 41a69701 4104b95c 249d84f4 17e3e395
a1274254 28b54067 1cc15881 eb828c17 b722a53f c599e21c a5e56c90 f340988d
3933acc7 6beb832f d64cab07 8ddf3ce7 32923031 d1a8ffff ffff0100 ca9a3b00
00000019 76a914ee 26c56fc1 d942be8d 7a24b2a1 001dd894 69398088 ac000000
00
File path: reorgTest/blk_4A.dat
Block 4A:
f9beb4d9
d4000000
01000000 aae77468 2205667d 4f413a58 47cc8fe8 9795f1d5 645d5b24 1daf3c92
00000000 361c9cde a09637a0 d0c05c3b 4e7a5d91 9edb184a 0a4c7633 d92e2ddd
f04cb854 89b45f49 ffff001d 9e9aa1e8
01
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff0401b8 f3eaffff ffff0100 f2052a01 00000043 4104678a
fdb0fe55 48271967 f1a67130 b7105cd6 a828e039 09a67962 e0ea1f61 deb649f6
bc3f4cef 38c4f355 04e51ec1 12de5c38 4df7ba0b 8d578a4c 702b6bf1 1d5fac00
000000
File path: reorgTest/blk_5A.dat
Block 5A:
f9beb4d9
73010000
01000000 ebc7d0de 9c31a71b 7f41d275 2c080ba4 11e1854b d45cb2cf 8c1e4624
00000000 a607774b 79b8eb50 b52a5a32 c1754281 ec67f626 9561df28 57d1fe6a
ea82c696 e1b65f49 ffff001d 4a263577
02
01000000 01000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00ffffff ff049971 0c7dffff ffff0100 f2052a01 00000043 4104678a
fdb0fe55 48271967 f1a67130 b7105cd6 a828e039 09a67962 e0ea1f61 deb649f6
bc3f4cef 38c4f355 04e51ec1 12de5c38 4df7ba0b 8d578a4c 702b6bf1 1d5fac00
000000
01000000 0163451d 1002611c 1388d5ba 4ddfdf99 196a86b5 990fb5b0 dc786207
4fdcb8ee d2000000 004a4930 46022100 8c8fd57b 48762135 8d8f3e69 19f33e08
804736ff 83db47aa 248512e2 6df9b8ba 022100b0 c59e5ee7 bfcbfcd1 a4d83da9
55fb260e fda7f42a 25522625 a3d6f2d9 1174a701 ffffffff 0100f205 2a010000
001976a9 14c52266 4fb0e55c dc5c0cea 73b4aad9 7ec83432 3288ac00 000000

BIN
blockchain/testdata/reorgto179.bz2 vendored Normal file

Binary file not shown.

BIN
blockchain/testdata/reorgto180.bz2 vendored Normal file

Binary file not shown.

645
blockchain/ticketlookup.go Normal file
View File

@ -0,0 +1,645 @@
// Copyright (c) 2013-2014 Conformal Systems LLC.
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blockchain
import (
"errors"
"fmt"
"sort"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrutil"
)
// TicketStatus is used to indicate the state of a ticket in the ticket store
// and the ticket database. Nonexisting is included because a ticket that
// exists in the main chain (and thus in the ticket database) may not exist
// from the point of view of a side chain, so the ticket store needs a way to
// indicate this. It can also refer to a ticket that was missed and then
// eliminated from the ticket db by an SSRtx.
type TicketStatus int
const (
TiNonexisting TicketStatus = iota
TiSpent
TiAvailable
TiMissed
TiRevoked
TiError
)
// TicketPatchData contains contextual information about tickets, namely their
// ticket data and whether or not they are spent.
type TicketPatchData struct {
td *stake.TicketData
ts TicketStatus
err error
}
// NewTicketPatchData creates a new TicketPatchData struct.
func NewTicketPatchData(td *stake.TicketData,
ts TicketStatus,
err error) *TicketPatchData {
return &TicketPatchData{td, ts, err}
}
// TicketStore is used to store a patch of the ticket db for use in validating
// the block header and subsequently the block reward. It allows the ticket db
// to be observed from the point of view of different points in the chain.
// TicketStore is essentially an inefficient version of the ticket database
// that isn't designed to be easily rolled back, which is fine because it is
// only used in ephemeral cases.
type TicketStore map[chainhash.Hash]*TicketPatchData
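The overlay behavior described above, where patch entries shadow the underlying ticket database, can be sketched in miniature. Everything below (the `Status` type, `lookup` helper, string hashes) is hypothetical scaffolding for illustration, not part of the actual package:

```go
package main

import "fmt"

// Status mirrors the TicketStatus idea: a patch entry can mark a ticket
// spent, available, or explicitly nonexistent.
type Status int

const (
	Nonexisting Status = iota
	Spent
	Available
)

// lookup consults the patch first; only when the patch has no entry for
// the hash does it fall back to the underlying database state.
func lookup(patch, db map[string]Status, hash string) Status {
	if st, ok := patch[hash]; ok {
		return st
	}
	if st, ok := db[hash]; ok {
		return st
	}
	return Nonexisting
}

func main() {
	db := map[string]Status{"t1": Available, "t2": Available}
	// The patch records that t1 was spent on this side chain.
	patch := map[string]Status{"t1": Spent}
	fmt.Println(lookup(patch, db, "t1")) // patch entry takes precedence
	fmt.Println(lookup(patch, db, "t2")) // falls back to the db entry
}
```

The same precedence rule is what lets the real TicketStore express "exists in the main chain but not in this side chain" via an explicit nonexisting entry.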
// GenerateLiveTicketBucket takes ticket patch data and a bucket number as input,
// then recreates a ticket bucket from the patch and the current database state.
func (b *BlockChain) GenerateLiveTicketBucket(ticketStore TicketStore,
tpdBucketMap map[uint8][]*TicketPatchData, bucket uint8) (stake.SStxMemMap,
error) {
bucketTickets := make(stake.SStxMemMap)
// Check the ticket store for live tickets and add them to the bucket if
// their bucket number matches.
for _, tpd := range tpdBucketMap[bucket] {
if tpd.ts == TiAvailable {
bucketTickets[tpd.td.SStxHash] = tpd.td
}
}
// Check the ticket database for live tickets, skipping any that are
// already accounted for (spent/missed/etc.) in the ticket store patch.
liveTicketsFromDb, err := b.tmdb.DumpLiveTickets(bucket)
if err != nil {
return nil, err
}
for hash, td := range liveTicketsFromDb {
if _, exists := ticketStore[hash]; exists {
continue
}
bucketTickets[hash] = td
}
return bucketTickets, nil
}
// GenerateMissedTickets takes ticket patch data as input, then recreates the
// missed tickets bucket from the patch and the current database state.
func (b *BlockChain) GenerateMissedTickets(tixStore TicketStore) (stake.SStxMemMap,
error) {
missedTickets := make(stake.SStxMemMap)
// Check the ticket store for missed tickets and add them to the map.
for hash, tpd := range tixStore {
if tpd.ts == TiMissed {
missedTickets[hash] = tpd.td
}
}
// Check the ticket database for missed tickets, skipping any that are
// already accounted for in the ticket store patch.
missedTicketsFromDb, err := b.tmdb.DumpMissedTickets()
if err != nil {
return nil, err
}
for hash, td := range missedTicketsFromDb {
if _, exists := tixStore[hash]; exists {
continue
}
missedTickets[hash] = td
}
return missedTickets, nil
}
// connectTickets updates the passed map by removing any tickets from the
// ticket pool that have been considered spent or missed in this block
// according to the block header. Then, it connects all the newly mature
// tickets to the passed map.
func (b *BlockChain) connectTickets(tixStore TicketStore,
node *blockNode,
block *dcrutil.Block) error {
if tixStore == nil {
return fmt.Errorf("nil ticket store")
}
// Nothing to do if tickets haven't yet possibly matured.
height := node.height
if height < b.chainParams.StakeEnabledHeight {
return nil
}
parentBlock, err := b.GetBlockFromHash(node.parentHash)
if err != nil {
return err
}
revocations := node.header.Revocations
tM := int64(b.chainParams.TicketMaturity)
// Skip a number of validation steps before the chain requires voting.
if node.height >= b.chainParams.StakeValidationHeight {
regularTxTreeValid := dcrutil.IsFlagSet16(node.header.VoteBits,
dcrutil.BlockValid)
thisNodeStakeViewpoint := ViewpointPrevInvalidStake
if regularTxTreeValid {
thisNodeStakeViewpoint = ViewpointPrevValidStake
}
// We need the missed tickets bucket from the original perspective of
// the node.
missedTickets, err := b.GenerateMissedTickets(tixStore)
if err != nil {
return err
}
// TxStore at blockchain HEAD + TxTreeRegular of prevBlock (if
// validated) for this node.
txInputStoreStake, err := b.fetchInputTransactions(node, block,
thisNodeStakeViewpoint)
if err != nil {
errStr := fmt.Sprintf("fetchInputTransactions failed for incoming "+
"node %v; error given: %v", node.hash, err)
return errors.New(errStr)
}
// PART 1: Spend/miss winner tickets
// Iterate through all the SSGen (vote) tx in the block and add them to
// a map of tickets that were actually used.
spentTicketsFromBlock := make(map[chainhash.Hash]bool)
numberOfSSgen := 0
for _, staketx := range block.STransactions() {
if is, _ := stake.IsSSGen(staketx); is {
msgTx := staketx.MsgTx()
sstxIn := msgTx.TxIn[1] // sstx input
sstxHash := sstxIn.PreviousOutPoint.Hash
originTx, exists := txInputStoreStake[sstxHash]
if !exists {
str := fmt.Sprintf("unable to find input transaction "+
"%v for transaction %v", sstxHash, staketx.Sha())
return ruleError(ErrMissingTx, str)
}
sstxHeight := originTx.BlockHeight
// Check maturity of ticket; we can only spend the ticket after it
// hits maturity at height + tM + 1.
if (height - sstxHeight) < (tM + 1) {
blockSha := block.Sha()
errStr := fmt.Sprintf("a ticket spent as an SSGen in "+
"block height %v was immature (block sha %v)",
height,
blockSha)
return errors.New(errStr)
}
// Fill out the ticket data.
spentTicketsFromBlock[sstxHash] = true
numberOfSSgen++
}
}
// Obtain the TicketsPerBlock tickets that were selected this round,
// then check them against the tickets that were actually used to make
// sure every SSGen matches a selected ticket. Commit the spent or
// missed tickets to the ticket store afterwards.
spentAndMissedTickets := make(TicketStore)
tixSpent := 0
tixMissed := 0
// Sort the entire list of tickets lexicographically by sorting
// each bucket and then appending it to the list. Start by generating
// a prefix matched map of tickets to speed up the lookup.
tpdBucketMap := make(map[uint8][]*TicketPatchData)
for _, tpd := range tixStore {
// append allocates the bucket slice on first use.
tpdBucketMap[tpd.td.Prefix] = append(tpdBucketMap[tpd.td.Prefix], tpd)
}
totalTickets := 0
sortedSlice := make([]*stake.TicketData, 0)
for i := 0; i < stake.BucketsSize; i++ {
ltb, err := b.GenerateLiveTicketBucket(tixStore, tpdBucketMap,
uint8(i))
if err != nil {
return fmt.Errorf("failed to generate live ticket bucket "+
"%v for node %v, height %v: %v",
i,
node.hash,
node.height,
err)
}
mapLen := len(ltb)
tempTdSlice := stake.NewTicketDataSlice(mapLen)
itr := 0 // Iterator
for _, td := range ltb {
tempTdSlice[itr] = td
itr++
totalTickets++
}
sort.Sort(tempTdSlice)
sortedSlice = append(sortedSlice, tempTdSlice...)
}
// Use the parent block's header to seed a PRNG that picks the
// lottery winners.
ticketsPerBlock := int(b.chainParams.TicketsPerBlock)
pbhB, err := parentBlock.MsgBlock().Header.Bytes()
if err != nil {
return err
}
prng := stake.NewHash256PRNG(pbhB)
ts, err := stake.FindTicketIdxs(int64(totalTickets), ticketsPerBlock, prng)
if err != nil {
return err
}
ticketsToSpendOrMiss := make([]*stake.TicketData, ticketsPerBlock)
for i, idx := range ts {
ticketsToSpendOrMiss[i] = sortedSlice[idx]
}
// Spend or miss these tickets by checking for their existence in the
// passed spentTicketsFromBlock map.
for _, ticket := range ticketsToSpendOrMiss {
// Move the ticket from active tickets map into the used tickets
// map if the ticket was spent.
wasSpent := spentTicketsFromBlock[ticket.SStxHash]
if wasSpent {
tpd := NewTicketPatchData(ticket, TiSpent, nil)
spentAndMissedTickets[ticket.SStxHash] = tpd
tixSpent++
} else { // Ticket was not spent, so it was missed.
tpd := NewTicketPatchData(ticket, TiMissed, nil)
spentAndMissedTickets[ticket.SStxHash] = tpd
tixMissed++
}
}
// This error is returned if there exists an SSGen in the block that
// doesn't spend a ticket from the eligible list of tickets, thus making
// the block invalid.
if tixSpent != numberOfSSgen {
errStr := fmt.Sprintf("%v tickets were spent, but %v "+
"tickets should have been spent", tixSpent, numberOfSSgen)
return errors.New(errStr)
}
if tixMissed != (ticketsPerBlock - numberOfSSgen) {
errStr := fmt.Sprintf("%v tickets were missed, but %v "+
"tickets should have been missed", tixMissed,
ticketsPerBlock-numberOfSSgen)
return errors.New(errStr)
}
if (tixSpent + tixMissed) != int(b.chainParams.TicketsPerBlock) {
errStr := fmt.Sprintf("%v tickets were spent and missed, but "+
"TicketsPerBlock (%v) tickets should have been", tixSpent+tixMissed,
ticketsPerBlock)
return errors.New(errStr)
}
// Calculate all the tickets expiring this block and mark them as missed.
tpdBucketMap = make(map[uint8][]*TicketPatchData)
for _, tpd := range tixStore {
// append allocates the bucket slice on first use.
tpdBucketMap[tpd.td.Prefix] = append(tpdBucketMap[tpd.td.Prefix], tpd)
}
toExpireHeight := node.height - int64(b.chainParams.TicketExpiry)
if toExpireHeight >= int64(b.chainParams.StakeEnabledHeight) {
for i := 0; i < stake.BucketsSize; i++ {
// Generate the live ticket bucket.
ltb, err := b.GenerateLiveTicketBucket(tixStore,
tpdBucketMap, uint8(i))
if err != nil {
return err
}
for _, ticket := range ltb {
if ticket.BlockHeight == toExpireHeight {
tpd := NewTicketPatchData(ticket, TiMissed, nil)
spentAndMissedTickets[ticket.SStxHash] = tpd
}
}
}
}
// Merge the ticket store patch containing the spent and missed tickets
// with the ticket store.
for hash, tpd := range spentAndMissedTickets {
tixStore[hash] = tpd
}
// At this point tixStore contains all the tickets spent and missed as
// of this block.
// PART 2: Remove tickets that were missed and are now revoked.
// Iterate through all the SSRtx (revocation) tx in the block and add
// them to a map of tickets that were revoked.
revocationsFromBlock := make(map[chainhash.Hash]struct{})
numberOfSSRtx := 0
for _, staketx := range block.STransactions() {
if is, _ := stake.IsSSRtx(staketx); is {
msgTx := staketx.MsgTx()
sstxIn := msgTx.TxIn[0] // sstx input
sstxHash := sstxIn.PreviousOutPoint.Hash
// Fill out the ticket data.
revocationsFromBlock[sstxHash] = struct{}{}
numberOfSSRtx++
}
}
if numberOfSSRtx != int(revocations) {
errStr := fmt.Sprintf("%v revocations were found in the block, "+
"but the block header indicates %v", numberOfSSRtx,
revocations)
return errors.New(errStr)
}
// Look up each revoked ticket in the missed tickets bucket and check
// its maturity; an SSRtx that spends a ticket which was never missed
// is an error.
for hash := range revocationsFromBlock {
ticketWasMissed := false
if td, is := missedTickets[hash]; is {
maturedHeight := td.BlockHeight
// Check maturity of ticket; we can only spend the ticket after it
// hits maturity at height + tM + 2.
if height < maturedHeight+2 {
blockSha := block.Sha()
errStr := fmt.Sprintf("a ticket spent as an "+
"SSRtx in block height %v was immature (block sha %v)",
height,
blockSha)
return errors.New(errStr)
}
ticketWasMissed = true
}
if !ticketWasMissed {
errStr := fmt.Sprintf("SSRtx spent missed sstx %v, "+
"but that missed sstx could not be found",
hash)
return errors.New(errStr)
}
}
}
// PART 3: Add newly maturing tickets
// This is the only chunk we need to do for blocks appearing before
// stake validation height.
// Calculate the height at which new tickets are maturing and retrieve
// that block from the db.
matureNode, err := b.getNodeAtHeightFromTopNode(node, tM)
if err != nil {
return err
}
matureBlock, errBlock := b.getBlockFromHash(matureNode.hash)
if errBlock != nil {
return errBlock
}
// Maturing tickets are from matureBlock; fill out the ticket patch data
// and then push them to the tixStore.
for _, stx := range matureBlock.STransactions() {
if is, _ := stake.IsSStx(stx); is {
// Calculate the prefix for pre-sort.
sstxHash := *stx.Sha()
prefix := uint8(sstxHash[0])
// Fill out the ticket data.
td := stake.NewTicketData(sstxHash,
prefix,
chainhash.Hash{},
height,
false, // not missed
false) // not expired
tpd := NewTicketPatchData(td,
TiAvailable,
nil)
tixStore[*stx.Sha()] = tpd
}
}
return nil
}
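The selection step in connectTickets (seed a PRNG from the parent block header, then draw TicketsPerBlock distinct winners from the sorted live-ticket slice) can be illustrated standalone. This sketch substitutes `math/rand` for the real `Hash256PRNG`, and the `pickWinners` name is invented; it only demonstrates the deterministic, duplicate-free draw:

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickWinners sketches the lottery step: choose k distinct indices out of
// n live tickets using a deterministic seeded PRNG. The real code seeds a
// Hash256PRNG with the serialized parent block header; a plain math/rand
// source stands in here purely for illustration.
func pickWinners(n, k int, seed int64) ([]int, error) {
	if k > n {
		return nil, fmt.Errorf("cannot pick %d winners from %d tickets", k, n)
	}
	prng := rand.New(rand.NewSource(seed))
	chosen := make(map[int]struct{}, k)
	winners := make([]int, 0, k)
	for len(winners) < k {
		idx := prng.Intn(n)
		if _, dup := chosen[idx]; dup {
			continue // winning indices must be distinct
		}
		chosen[idx] = struct{}{}
		winners = append(winners, idx)
	}
	return winners, nil
}

func main() {
	// The same seed (same parent header) always yields the same winners,
	// which is what lets every node agree on the selected tickets.
	w1, _ := pickWinners(100, 5, 42)
	w2, _ := pickWinners(100, 5, 42)
	fmt.Println(w1, w2)
}
```

Determinism is the essential property: any SSGen in the block must spend one of the tickets every validator independently derives from the parent header.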
// disconnectTickets updates the passed map by undoing ticket and spend
// information for all tickets in the passed block. Only tickets in the
// passed map are updated.
// This function should only ever have to disconnect tickets from the main
// chain, so most of the calls go directly to the tmdb, which contains all
// of this data in an organized bucket.
func (b *BlockChain) disconnectTickets(tixStore TicketStore,
node *blockNode,
block *dcrutil.Block) error {
tM := int64(b.chainParams.TicketMaturity)
height := node.height
// Nothing to do if tickets haven't yet possibly matured.
if height < b.chainParams.StakeEnabledHeight {
return nil
}
// PART 1: Remove newly maturing tickets
// Calculate block number for where new tickets matured from and retrieve
// this block from db.
matureNode, err := b.getNodeAtHeightFromTopNode(node, tM)
if err != nil {
return err
}
matureBlock, errBlock := b.getBlockFromHash(matureNode.hash)
if errBlock != nil {
return errBlock
}
// Store pointers to empty ticket data in the ticket store and mark them as
// non-existing.
for _, stx := range matureBlock.STransactions() {
if is, _ := stake.IsSStx(stx); is {
// Leave this pointing to nothing, as the ticket technically does not
// exist. It may exist when we add blocks later, but we can fill it
// out then.
td := &stake.TicketData{}
tpd := NewTicketPatchData(td,
TiNonexisting,
nil)
tixStore[*stx.Sha()] = tpd
}
}
// PART 2: Unrevoke any SSRtx in this block and restore them as
// missed tickets.
for _, stx := range block.STransactions() {
if is, _ := stake.IsSSRtx(stx); is {
// Move the revoked ticket to missed tickets. Obtain the
// revoked ticket data from the ticket database.
msgTx := stx.MsgTx()
sstxIn := msgTx.TxIn[0] // sstx input
sstxHash := sstxIn.PreviousOutPoint.Hash
td := b.tmdb.GetRevokedTicket(sstxHash)
if td == nil {
return fmt.Errorf("failed to find revoked ticket %v in tmdb",
sstxHash)
}
tpd := NewTicketPatchData(td,
TiMissed,
nil)
tixStore[sstxHash] = tpd
}
}
// PART 3: Unspend or unmiss all tickets spent/missed/expired at this block.
// Query the stake db for used tickets (spentTicketDb), which includes all of
// the spent and missed tickets.
spentTickets, errDump := b.tmdb.DumpSpentTickets(height)
if errDump != nil {
return errDump
}
// Move all of these tickets into the ticket store as available tickets.
for hash, td := range spentTickets {
tpd := NewTicketPatchData(td,
TiAvailable,
nil)
tixStore[hash] = tpd
}
return nil
}
// fetchTicketStore fetches ticket data from the point of view of the given node.
// For example, a given node might be down a side chain where a ticket hasn't been
// spent from its point of view even though it might have been spent in the main
// chain (or another side chain). Another scenario is where a ticket exists from
// the point of view of the main chain, but doesn't exist in a side chain that
// branches before the block that contains the ticket on the main chain.
func (b *BlockChain) fetchTicketStore(node *blockNode) (TicketStore, error) {
tixStore := make(TicketStore)
// Get the previous block node. This function is used over simply
// accessing node.parent directly as it will dynamically create previous
// block nodes as needed. This helps allow only the pieces of the chain
// that are needed to remain in memory.
prevNode, err := b.getPrevNodeFromNode(node)
if err != nil {
return nil, err
}
// If we haven't selected a best chain yet or we are extending the main
// (best) chain with a new block, just use the ticket database we already
// have.
if b.bestChain == nil || (prevNode != nil &&
prevNode.hash.IsEqual(b.bestChain.hash)) {
return nil, nil
}
// We don't care about nodes before stake enabled height.
if node.height < b.chainParams.StakeEnabledHeight {
return nil, nil
}
// The requested node is either on a side chain or is a node on the main
// chain before the end of it. In either case, we need to undo the
// transactions and spend information for the blocks which would be
// disconnected during a reorganize to the point of view of the
// node just before the requested node.
detachNodes, attachNodes, err := b.getReorganizeNodes(prevNode)
if err != nil {
return nil, err
}
for e := detachNodes.Front(); e != nil; e = e.Next() {
n := e.Value.(*blockNode)
block, err := b.db.FetchBlockBySha(n.hash)
if err != nil {
return nil, err
}
err = b.disconnectTickets(tixStore, n, block)
if err != nil {
return nil, err
}
}
// The ticket store is now accurate to either the node where the
// requested node forks off the main chain (in the case where the
// requested node is on a side chain), or the requested node itself if
// the requested node is an old node on the main chain. Entries in the
// attachNodes list indicate the requested node is on a side chain, so
// if there are no nodes to attach, we're done.
if attachNodes.Len() == 0 {
return tixStore, nil
}
// The requested node is on a side chain, so we need to apply the
// transactions and spend information from each of the nodes to attach.
for e := attachNodes.Front(); e != nil; e = e.Next() {
n := e.Value.(*blockNode)
block, exists := b.blockCache[*n.hash]
if !exists {
return nil, fmt.Errorf("unable to find block %v in "+
"side chain cache for ticket db patch construction",
n.hash)
}
// Connect the tickets in this side chain block to the ticket store.
err = b.connectTickets(tixStore, n, block)
if err != nil {
return nil, err
}
}
return tixStore, nil
}
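The detach/attach walk performed by fetchTicketStore can be pictured on toy data: undo the main-chain blocks back to the fork point, then replay the side-chain blocks on top. The `block` struct and `viewFrom` helper below are invented names for illustration only; the "view" is just a set of live ticket names:

```go
package main

import "fmt"

// block is a toy stand-in for a block's effect on the live-ticket set.
type block struct {
	matured []string // tickets that went live in this block
	spent   []string // tickets consumed by this block
}

// viewFrom rebuilds the live-ticket view for a side-chain tip: detach
// undoes main-chain blocks (re-add what they spent, drop what they
// matured), then attach replays side-chain blocks in order. This mirrors
// the disconnectTickets / connectTickets walk in fetchTicketStore.
func viewFrom(live map[string]bool, detach, attach []block) map[string]bool {
	view := make(map[string]bool, len(live))
	for t := range live {
		view[t] = true
	}
	for _, b := range detach {
		for _, t := range b.spent {
			view[t] = true
		}
		for _, t := range b.matured {
			delete(view, t)
		}
	}
	for _, b := range attach {
		for _, t := range b.matured {
			view[t] = true
		}
		for _, t := range b.spent {
			delete(view, t)
		}
	}
	return view
}

func main() {
	live := map[string]bool{"a": true, "c": true} // main-chain tip view
	detach := []block{{matured: []string{"c"}, spent: []string{"b"}}}
	attach := []block{{matured: []string{"d"}, spent: []string{"a"}}}
	fmt.Println(viewFrom(live, detach, attach))
}
```

Note that detach runs tip-first and attach runs fork-first, exactly as the detachNodes and attachNodes lists are ordered in the real code.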

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -10,7 +11,7 @@ import (
"testing"
"time"
"github.com/btcsuite/btcd/blockchain"
"github.com/decred/dcrd/blockchain"
)
// TestTimeSorter tests the timeSorter implementation.

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,17 +8,48 @@ package blockchain
import (
"fmt"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
// There are five potential viewpoints we need to worry about.
// ViewpointPrevValidInitial: Viewpoint from the perspective of everything
// up to the previous block's TxTreeRegular, used to validate that tx tree
// regular.
const ViewpointPrevValidInitial = int8(0)
// ViewpointPrevValidStake: Viewpoint from the perspective of everything
// up to the previous block's TxTreeRegular plus the contents of the
// TxTreeRegular, used to validate TxTreeStake.
const ViewpointPrevValidStake = int8(1)
// ViewpointPrevInvalidStake: Viewpoint from the perspective of everything
// up to the previous block's TxTreeRegular but without the contents of
// the TxTreeRegular, used to validate TxTreeStake.
const ViewpointPrevInvalidStake = int8(2)
// ViewpointPrevValidRegular: Viewpoint from the perspective of everything
// up to the previous block's TxTreeRegular plus the contents of the
// TxTreeRegular and TxTreeStake of the current block, used to validate
// TxTreeRegular of the current block.
const ViewpointPrevValidRegular = int8(3)
// ViewpointPrevInvalidRegular: Viewpoint from the perspective of everything
// up to the previous block's TxTreeRegular minus the contents of the
// TxTreeRegular and TxTreeStake of the current block, used to validate
// TxTreeRegular of the current block.
const ViewpointPrevInvalidRegular = int8(4)
// TxData contains contextual information about transactions such as which block
// they were found in and whether or not the outputs are spent.
type TxData struct {
Tx *btcutil.Tx
Hash *wire.ShaHash
Tx *dcrutil.Tx
Hash *chainhash.Hash
BlockHeight int64
BlockIndex uint32
Spent []bool
Err error
}
@ -26,21 +58,32 @@ type TxData struct {
// such as script validation and double spend prevention. This also allows the
// transaction data to be treated as a view since it can contain the information
// from the point-of-view of different points in the chain.
type TxStore map[wire.ShaHash]*TxData
type TxStore map[chainhash.Hash]*TxData
// connectTxTree connects an arbitrary tx tree to a txStore, pushing it
// forward in history.
// txTree true == TxTreeRegular
// txTree false == TxTreeStake
func connectTxTree(txStore TxStore,
block *dcrutil.Block,
txTree bool) {
var transactions []*dcrutil.Tx
if txTree {
transactions = block.Transactions()
} else {
transactions = block.STransactions()
}
// connectTransactions updates the passed map by applying transaction and
// spend information for all the transactions in the passed block. Only
// transactions in the passed map are updated.
func connectTransactions(txStore TxStore, block *btcutil.Block) error {
// Loop through all of the transactions in the block to see if any of
// them are ones we need to update and spend based on the results map.
for _, tx := range block.Transactions() {
for i, tx := range transactions {
// Update the transaction store with the transaction information
// if it's one of the requested transactions.
msgTx := tx.MsgTx()
if txD, exists := txStore[*tx.Sha()]; exists {
txD.Tx = tx
txD.BlockHeight = block.Height()
txD.BlockIndex = uint32(i)
txD.Spent = make([]bool, len(msgTx.TxOut))
txD.Err = nil
}
@ -50,7 +93,87 @@ func connectTransactions(txStore TxStore, block *btcutil.Block) error {
originHash := &txIn.PreviousOutPoint.Hash
originIndex := txIn.PreviousOutPoint.Index
if originTx, exists := txStore[*originHash]; exists {
if originIndex > uint32(len(originTx.Spent)) {
if originTx.Spent == nil {
continue
}
if originIndex >= uint32(len(originTx.Spent)) {
continue
}
originTx.Spent[originIndex] = true
}
}
}
return
}
func connectTransactions(txStore TxStore, block *dcrutil.Block, parent *dcrutil.Block) error {
// There is no regular tx from before the genesis block, so ignore the genesis
// block for the next step.
if parent != nil && block.Height() != 0 {
mBlock := block.MsgBlock()
votebits := mBlock.Header.VoteBits
regularTxTreeValid := dcrutil.IsFlagSet16(votebits, dcrutil.BlockValid)
// Only add the transactions in the event that the parent block's regular
// tx were validated.
if regularTxTreeValid {
// Loop through all of the regular transactions in the block to see if
// any of them are ones we need to update and spend based on the
// results map.
for i, tx := range parent.Transactions() {
// Update the transaction store with the transaction information
// if it's one of the requested transactions.
msgTx := tx.MsgTx()
if txD, exists := txStore[*tx.Sha()]; exists {
txD.Tx = tx
txD.BlockHeight = block.Height() - 1
txD.BlockIndex = uint32(i)
txD.Spent = make([]bool, len(msgTx.TxOut))
txD.Err = nil
}
// Spend the origin transaction output.
for _, txIn := range msgTx.TxIn {
originHash := &txIn.PreviousOutPoint.Hash
originIndex := txIn.PreviousOutPoint.Index
if originTx, exists := txStore[*originHash]; exists {
if originTx.Spent == nil {
continue
}
if originIndex >= uint32(len(originTx.Spent)) {
continue
}
originTx.Spent[originIndex] = true
}
}
}
}
}
// Loop through all of the stake transactions in the block to see if any of
// them are ones we need to update and spend based on the results map.
for i, tx := range block.STransactions() {
// Update the transaction store with the transaction information
// if it's one of the requested transactions.
msgTx := tx.MsgTx()
if txD, exists := txStore[*tx.Sha()]; exists {
txD.Tx = tx
txD.BlockHeight = block.Height()
txD.BlockIndex = uint32(i)
txD.Spent = make([]bool, len(msgTx.TxOut))
txD.Err = nil
}
// Spend the origin transaction output.
for _, txIn := range msgTx.TxIn {
originHash := &txIn.PreviousOutPoint.Hash
originIndex := txIn.PreviousOutPoint.Index
if originTx, exists := txStore[*originHash]; exists {
if originTx.Spent == nil {
continue
}
if originIndex >= uint32(len(originTx.Spent)) {
continue
}
originTx.Spent[originIndex] = true
@ -64,10 +187,10 @@ func connectTransactions(txStore TxStore, block *btcutil.Block) error {
// disconnectTransactions updates the passed map by undoing transaction and
// spend information for all transactions in the passed block. Only
// transactions in the passed map are updated.
func disconnectTransactions(txStore TxStore, block *btcutil.Block) error {
// Loop through all of the transactions in the block to see if any of
func disconnectTransactions(txStore TxStore, block *dcrutil.Block, parent *dcrutil.Block) error {
// Loop through all of the stake transactions in the block to see if any of
// them are ones that need to be undone based on the transaction store.
for _, tx := range block.Transactions() {
for _, tx := range block.STransactions() {
// Clear this transaction from the transaction store if needed.
// Only clear it rather than deleting it because the transaction
// connect code relies on its presence to decide whether or not
@ -75,7 +198,8 @@ func disconnectTransactions(txStore TxStore, block *btcutil.Block) error {
// sides of a fork would otherwise not be updated.
if txD, exists := txStore[*tx.Sha()]; exists {
txD.Tx = nil
txD.BlockHeight = 0
txD.BlockHeight = int64(wire.NullBlockHeight)
txD.BlockIndex = wire.NullBlockIndex
txD.Spent = nil
txD.Err = database.ErrTxShaMissing
}
@ -86,7 +210,10 @@ func disconnectTransactions(txStore TxStore, block *btcutil.Block) error {
originIndex := txIn.PreviousOutPoint.Index
originTx, exists := txStore[*originHash]
if exists && originTx.Tx != nil && originTx.Err == nil {
if originIndex > uint32(len(originTx.Spent)) {
if originTx.Spent == nil {
continue
}
if originIndex >= uint32(len(originTx.Spent)) {
continue
}
originTx.Spent[originIndex] = false
@ -94,6 +221,53 @@ func disconnectTransactions(txStore TxStore, block *btcutil.Block) error {
}
}
// There is no regular tx from before the genesis block, so ignore the genesis
// block for the next step.
if parent != nil && block.Height() != 0 {
mBlock := block.MsgBlock()
votebits := mBlock.Header.VoteBits
regularTxTreeValid := dcrutil.IsFlagSet16(votebits, dcrutil.BlockValid)
// Only bother to unspend transactions if the parent's tx tree was
// validated. Otherwise, these transactions were never in the blockchain's
// history in the first place.
if regularTxTreeValid {
// Loop through all of the regular transactions in the block to see if
// any of them are ones that need to be undone based on the
// transaction store.
for _, tx := range parent.Transactions() {
// Clear this transaction from the transaction store if needed.
// Only clear it rather than deleting it because the transaction
// connect code relies on its presence to decide whether or not
// to update the store and any transactions which exist on both
// sides of a fork would otherwise not be updated.
if txD, exists := txStore[*tx.Sha()]; exists {
txD.Tx = nil
txD.BlockHeight = int64(wire.NullBlockHeight)
txD.BlockIndex = wire.NullBlockIndex
txD.Spent = nil
txD.Err = database.ErrTxShaMissing
}
// Unspend the origin transaction output.
for _, txIn := range tx.MsgTx().TxIn {
originHash := &txIn.PreviousOutPoint.Hash
originIndex := txIn.PreviousOutPoint.Index
originTx, exists := txStore[*originHash]
if exists && originTx.Tx != nil && originTx.Err == nil {
if originTx.Spent == nil {
continue
}
if originIndex >= uint32(len(originTx.Spent)) {
continue
}
originTx.Spent[originIndex] = false
}
}
}
}
}
return nil
}
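The Spent-slice bookkeeping repeated throughout these connect/disconnect loops (guard against a nil or too-short slice, then toggle the flag for the referenced output) can be condensed into a small sketch; `txData` and `setSpent` are invented names for illustration, not the package's API:

```go
package main

import "fmt"

// txData is a toy stand-in for the per-transaction entry: one bool per
// output, true when that output has been spent.
type txData struct {
	spent []bool
}

// setSpent toggles the spent flag for one output, applying the same
// guards the diff uses: untracked transactions, nil Spent slices, and
// out-of-range output indices are silently skipped.
func setSpent(store map[string]*txData, hash string, index uint32, spent bool) {
	tx, ok := store[hash]
	if !ok || tx.spent == nil {
		return // not a transaction we're tracking
	}
	if index >= uint32(len(tx.spent)) {
		return // out-of-range index is ignored, as in the code above
	}
	tx.spent[index] = spent
}

func main() {
	store := map[string]*txData{"aa": {spent: make([]bool, 2)}}
	setSpent(store, "aa", 1, true) // connect: spend output 1
	fmt.Println(store["aa"].spent)
	setSpent(store, "aa", 1, false) // disconnect: unspend it again
	fmt.Println(store["aa"].spent)
	setSpent(store, "aa", 9, true) // ignored: index out of range
}
```

Connect passes `true` and disconnect passes `false`, which is why disconnect can exactly undo a connect that touched the same outputs.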
@ -101,7 +275,7 @@ func disconnectTransactions(txStore TxStore, block *btcutil.Block) error {
// transactions from the point of view of the end of the main chain. It takes
// a flag which specifies whether or not fully spent transaction should be
// included in the results.
func fetchTxStoreMain(db database.Db, txSet map[wire.ShaHash]struct{}, includeSpent bool) TxStore {
func fetchTxStoreMain(db database.Db, txSet map[chainhash.Hash]struct{}, includeSpent bool) TxStore {
// Just return an empty store now if there are no requested hashes.
txStore := make(TxStore)
if len(txSet) == 0 {
@ -111,7 +285,7 @@ func fetchTxStoreMain(db database.Db, txSet map[wire.ShaHash]struct{}, includeSp
// The transaction store map needs to have an entry for every requested
// transaction. By default, all the transactions are marked as missing.
// Each entry will be filled in with the appropriate data below.
txList := make([]*wire.ShaHash, 0, len(txSet))
txList := make([]*chainhash.Hash, 0, len(txSet))
for hash := range txSet {
hashCopy := hash
txStore[hash] = &TxData{Hash: &hashCopy, Err: database.ErrTxShaMissing}
@ -145,8 +319,9 @@ func fetchTxStoreMain(db database.Db, txSet map[wire.ShaHash]struct{}, includeSp
// cause subtle errors, so avoid the potential altogether.
txD.Err = txReply.Err
if txReply.Err == nil {
txD.Tx = btcutil.NewTx(txReply.Tx)
txD.Tx = dcrutil.NewTx(txReply.Tx)
txD.BlockHeight = txReply.Height
txD.BlockIndex = txReply.Index
txD.Spent = make([]bool, len(txReply.TxSpent))
copy(txD.Spent, txReply.TxSpent)
}
@ -155,6 +330,54 @@ func fetchTxStoreMain(db database.Db, txSet map[wire.ShaHash]struct{}, includeSp
return txStore
}
// handleTxStoreViewpoint connects extra tx trees to the transaction store
// so that it reflects the requested viewpoint.
func handleTxStoreViewpoint(block *dcrutil.Block, parentBlock *dcrutil.Block,
txStore TxStore, viewpoint int8) error {
// We don't need to do anything for the current top block viewpoint.
if viewpoint == ViewpointPrevValidInitial {
return nil
}
// ViewpointPrevValidStake: Append the prev block TxTreeRegular
// txs to fill out TxIns.
if viewpoint == ViewpointPrevValidStake {
connectTxTree(txStore, parentBlock, true)
return nil
}
// ViewpointPrevInvalidStake: Do not append the prev block
// TxTreeRegular txs, since they don't exist.
if viewpoint == ViewpointPrevInvalidStake {
return nil
}
// ViewpointPrevValidRegular: Append the prev block TxTreeRegular
// txs to fill in TxIns, then append the cur block TxTreeStake
// txs to fill in TxIns. TxTreeRegular from the current block is
// never allowed to spend from the stake tree of the current
// block because of the consensus rules regarding output
// maturity, but connect it for completeness.
if viewpoint == ViewpointPrevValidRegular {
connectTxTree(txStore, parentBlock, true)
connectTxTree(txStore, block, false)
return nil
}
// ViewpointPrevInvalidRegular: Append the cur block TxTreeStake
// txs to fill in TxIns. TxTreeRegular from the current block is
// never allowed to spend from the stake tree of the current
// block because of the consensus rules regarding output
// maturity, but connect it for completeness.
if viewpoint == ViewpointPrevInvalidRegular {
connectTxTree(txStore, block, false)
return nil
}
return fmt.Errorf("invalid viewpoint 0x%x given to "+
"handleTxStoreViewpoint", viewpoint)
}
// fetchTxStore fetches transaction data about the provided set of transactions
// from the point of view of the given node. For example, a given node might
// be down a side chain where a transaction hasn't been spent from its point of
@ -162,7 +385,8 @@ func fetchTxStoreMain(db database.Db, txSet map[wire.ShaHash]struct{}, includeSp
// chain). Another scenario is where a transaction exists from the point of
// view of the main chain, but doesn't exist in a side chain that branches
// before the block that contains the transaction on the main chain.
func (b *BlockChain) fetchTxStore(node *blockNode, txSet map[wire.ShaHash]struct{}) (TxStore, error) {
func (b *BlockChain) fetchTxStore(node *blockNode, block *dcrutil.Block,
txSet map[chainhash.Hash]struct{}, viewpoint int8) (TxStore, error) {
// Get the previous block node. This function is used over simply
// accessing node.parent directly as it will dynamically create previous
// block nodes as needed. This helps allow only the pieces of the chain
@ -171,14 +395,30 @@ func (b *BlockChain) fetchTxStore(node *blockNode, txSet map[wire.ShaHash]struct
if err != nil {
return nil, err
}
// We don't care if the previous node doesn't exist because this
// block is the genesis block.
if prevNode == nil {
return nil, nil
}
// Get the previous block, too.
prevBlock, err := b.getBlockFromHash(prevNode.hash)
if err != nil {
return nil, err
}
// If we haven't selected a best chain yet or we are extending the main
// (best) chain with a new block, fetch the requested set from the point
// of view of the end of the main (best) chain without including fully
// spent transactions in the results. This is a little more efficient
// since it means less transaction lookups are needed.
if b.bestChain == nil || (prevNode != nil && prevNode.hash.IsEqual(b.bestChain.hash)) {
if b.bestChain == nil || (prevNode != nil &&
prevNode.hash.IsEqual(b.bestChain.hash)) {
txStore := fetchTxStoreMain(b.db, txSet, false)
err := handleTxStoreViewpoint(block, prevBlock, txStore, viewpoint)
if err != nil {
return nil, err
}
return txStore, nil
}
@ -193,15 +433,30 @@ func (b *BlockChain) fetchTxStore(node *blockNode, txSet map[wire.ShaHash]struct
// transactions and spend information for the blocks which would be
// disconnected during a reorganize to the point of view of the
// node just before the requested node.
detachNodes, attachNodes := b.getReorganizeNodes(prevNode)
detachNodes, attachNodes, err := b.getReorganizeNodes(prevNode)
if err != nil {
return nil, err
}
for e := detachNodes.Front(); e != nil; e = e.Next() {
n := e.Value.(*blockNode)
block, err := b.db.FetchBlockBySha(n.hash)
blockDisconnect, err := b.db.FetchBlockBySha(n.hash)
if err != nil {
return nil, err
}
disconnectTransactions(txStore, block)
// Load the parent block from either the database or the sidechain.
parentHash := &blockDisconnect.MsgBlock().Header.PrevBlock
parentBlock, errFetchBlock := b.getBlockFromHash(parentHash)
if errFetchBlock != nil {
return nil, errFetchBlock
}
err = disconnectTransactions(txStore, blockDisconnect, parentBlock)
if err != nil {
return nil, err
}
}
// The transaction store is now accurate to either the node where the
@ -211,6 +466,11 @@ func (b *BlockChain) fetchTxStore(node *blockNode, txSet map[wire.ShaHash]struct
// attachNodes list indicate the requested node is on a side chain, so
// if there are no nodes to attach, we're done.
if attachNodes.Len() == 0 {
err = handleTxStoreViewpoint(block, prevBlock, txStore, viewpoint)
if err != nil {
return nil, err
}
return txStore, nil
}
@ -218,14 +478,30 @@ func (b *BlockChain) fetchTxStore(node *blockNode, txSet map[wire.ShaHash]struct
// transactions and spend information from each of the nodes to attach.
for e := attachNodes.Front(); e != nil; e = e.Next() {
n := e.Value.(*blockNode)
block, exists := b.blockCache[*n.hash]
blockConnect, exists := b.blockCache[*n.hash]
if !exists {
return nil, fmt.Errorf("unable to find block %v in "+
"side chain cache for transaction search",
n.hash)
}
connectTransactions(txStore, block)
// Load the parent block from either the database or the sidechain.
parentHash := &blockConnect.MsgBlock().Header.PrevBlock
parentBlock, errFetchBlock := b.getBlockFromHash(parentHash)
if errFetchBlock != nil {
return nil, errFetchBlock
}
err = connectTransactions(txStore, blockConnect, parentBlock)
if err != nil {
return nil, err
}
}
err = handleTxStoreViewpoint(block, prevBlock, txStore, viewpoint)
if err != nil {
return nil, err
}
return txStore, nil
@ -234,86 +510,247 @@ func (b *BlockChain) fetchTxStore(node *blockNode, txSet map[wire.ShaHash]struct
// fetchInputTransactions fetches the input transactions referenced by the
// transactions in the given block from its point of view. See fetchTxList
// for more details on what the point of view entails.
func (b *BlockChain) fetchInputTransactions(node *blockNode, block *btcutil.Block) (TxStore, error) {
// Build a map of in-flight transactions because some of the inputs in
// this block could be referencing other transactions earlier in this
// block which are not yet in the chain.
txInFlight := map[wire.ShaHash]int{}
transactions := block.Transactions()
for i, tx := range transactions {
txInFlight[*tx.Sha()] = i
// Decred: This function verifies the validity of the regular tx tree in
// this block for the case that it is accepted into the next block.
func (b *BlockChain) fetchInputTransactions(node *blockNode, block *dcrutil.Block, viewpoint int8) (TxStore, error) {
// Verify we have the same node as we do block.
blockHash := block.Sha()
if !node.hash.IsEqual(blockHash) {
return nil, fmt.Errorf("node and block hash are different")
}
// Loop through all of the transaction inputs (except for the coinbase
// which has no inputs) collecting them into sets of what is needed and
// what is already known (in-flight).
txNeededSet := make(map[wire.ShaHash]struct{})
txStore := make(TxStore)
for i, tx := range transactions[1:] {
for _, txIn := range tx.MsgTx().TxIn {
// Add an entry to the transaction store for the needed
// transaction with it set to missing by default.
originHash := &txIn.PreviousOutPoint.Hash
txD := &TxData{Hash: originHash, Err: database.ErrTxShaMissing}
txStore[*originHash] = txD
// It is acceptable for a transaction input to reference
// the output of another transaction in this block only
// if the referenced transaction comes before the
// current one in this block. Update the transaction
// store accordingly when this is the case. Otherwise,
// we still need the transaction.
//
// NOTE: The >= is correct here because i is one less
// than the actual position of the transaction within
// the block due to skipping the coinbase.
if inFlightIndex, ok := txInFlight[*originHash]; ok &&
i >= inFlightIndex {
originTx := transactions[inFlightIndex]
txD.Tx = originTx
txD.BlockHeight = node.height
txD.Spent = make([]bool, len(originTx.MsgTx().TxOut))
txD.Err = nil
} else {
txNeededSet[*originHash] = struct{}{}
}
// If we need the previous block, grab it.
var parentBlock *dcrutil.Block
if viewpoint == ViewpointPrevValidInitial ||
viewpoint == ViewpointPrevValidStake ||
viewpoint == ViewpointPrevValidRegular {
var errFetchBlock error
parentBlock, errFetchBlock = b.getBlockFromHash(node.parentHash)
if errFetchBlock != nil {
return nil, errFetchBlock
}
}
// Request the input transactions from the point of view of the node.
txNeededStore, err := b.fetchTxStore(node, txNeededSet)
if err != nil {
return nil, err
txInFlight := map[chainhash.Hash]int{}
txNeededSet := make(map[chainhash.Hash]struct{})
txStore := make(TxStore)
// Case 1: ViewpointPrevValidInitial. We need the viewpoint of the
// current chain without the TxTreeRegular of the previous block
// added so we can validate that.
if viewpoint == ViewpointPrevValidInitial {
// Build a map of in-flight transactions because some of the inputs in
// this block could be referencing other transactions earlier in this
// block which are not yet in the chain.
transactions := parentBlock.Transactions()
for i, tx := range transactions {
txInFlight[*tx.Sha()] = i
}
// Loop through all of the transaction inputs (except for the coinbase
// which has no inputs) collecting them into sets of what is needed and
// what is already known (in-flight).
for i, tx := range transactions[1:] {
for _, txIn := range tx.MsgTx().TxIn {
// Add an entry to the transaction store for the needed
// transaction with it set to missing by default.
originHash := &txIn.PreviousOutPoint.Hash
txD := &TxData{Hash: originHash, Err: database.ErrTxShaMissing}
txStore[*originHash] = txD
// It is acceptable for a transaction input to reference
// the output of another transaction in this block only
// if the referenced transaction comes before the
// current one in this block. Update the transaction
// store accordingly when this is the case. Otherwise,
// we still need the transaction.
//
// NOTE: The >= is correct here because i is one less
// than the actual position of the transaction within
// the block due to skipping the coinbase.
if inFlightIndex, ok := txInFlight[*originHash]; ok &&
i >= inFlightIndex {
originTx := transactions[inFlightIndex]
txD.Tx = originTx
txD.BlockHeight = node.height - 1
txD.BlockIndex = uint32(inFlightIndex)
txD.Spent = make([]bool, len(originTx.MsgTx().TxOut))
txD.Err = nil
} else {
txNeededSet[*originHash] = struct{}{}
}
}
}
// Request the input transactions from the point of view of the node.
txNeededStore, err := b.fetchTxStore(node, block, txNeededSet, viewpoint)
if err != nil {
return nil, err
}
// Merge the results of the requested transactions and the in-flight
// transactions.
for _, txD := range txNeededStore {
txStore[*txD.Hash] = txD
}
return txStore, nil
}
// Merge the results of the requested transactions and the in-flight
// transactions.
for _, txD := range txNeededStore {
txStore[*txD.Hash] = txD
// Case 2+3: ViewpointPrevValidStake and ViewpointPrevInvalidStake.
// For ViewpointPrevValidStake, we need the viewpoint of the
// current chain with the TxTreeRegular of the previous block
// added so we can validate the TxTreeStake of the current block.
// For ViewpointPrevInvalidStake, we need the viewpoint of the
// current chain with the TxTreeRegular of the previous block
// missing so we can validate the TxTreeStake of the current block.
if viewpoint == ViewpointPrevValidStake ||
viewpoint == ViewpointPrevInvalidStake {
// We need all of the stake tx txins. None of these are considered
// in-flight in relation to the regular tx tree or to other tx in
// the stake tx tree, so don't do any of those expensive checks and
// just append it to the tx slice.
stransactions := block.STransactions()
for _, tx := range stransactions {
isSSGen, _ := stake.IsSSGen(tx)
for i, txIn := range tx.MsgTx().TxIn {
// Ignore stakebases.
if isSSGen && i == 0 {
continue
}
// Add an entry to the transaction store for the needed
// transaction with it set to missing by default.
originHash := &txIn.PreviousOutPoint.Hash
txD := &TxData{Hash: originHash, Err: database.ErrTxShaMissing}
txStore[*originHash] = txD
txNeededSet[*originHash] = struct{}{}
}
}
// Request the input transactions from the point of view of the node.
txNeededStore, err := b.fetchTxStore(node, block, txNeededSet, viewpoint)
if err != nil {
return nil, err
}
return txNeededStore, nil
}
return txStore, nil
// Case 4+5: ViewpointPrevValidRegular and
// ViewpointPrevInvalidRegular.
// For ViewpointPrevValidRegular, we need the viewpoint of the
// current chain with the TxTreeRegular of the previous block
// and the TxTreeStake of the current block added so we can
// validate the TxTreeRegular of the current block.
// For ViewpointPrevInvalidRegular, we need the viewpoint of the
// current chain with the TxTreeRegular of the previous block
// missing and the TxTreeStake of the current block added so we
// can validate the TxTreeRegular of the current block.
if viewpoint == ViewpointPrevValidRegular ||
viewpoint == ViewpointPrevInvalidRegular {
transactions := block.Transactions()
for i, tx := range transactions {
txInFlight[*tx.Sha()] = i
}
// Loop through all of the transaction inputs (except for the coinbase
// which has no inputs) collecting them into sets of what is needed and
// what is already known (in-flight).
txNeededSet := make(map[chainhash.Hash]struct{})
txStore = make(TxStore)
for i, tx := range transactions[1:] {
for _, txIn := range tx.MsgTx().TxIn {
// Add an entry to the transaction store for the needed
// transaction with it set to missing by default.
originHash := &txIn.PreviousOutPoint.Hash
txD := &TxData{Hash: originHash, Err: database.ErrTxShaMissing}
txStore[*originHash] = txD
// It is acceptable for a transaction input to reference
// the output of another transaction in this block only
// if the referenced transaction comes before the
// current one in this block. Update the transaction
// store accordingly when this is the case. Otherwise,
// we still need the transaction.
//
// NOTE: The >= is correct here because i is one less
// than the actual position of the transaction within
// the block due to skipping the coinbase.
if inFlightIndex, ok := txInFlight[*originHash]; ok &&
i >= inFlightIndex {
originTx := transactions[inFlightIndex]
txD.Tx = originTx
txD.BlockHeight = node.height
txD.BlockIndex = uint32(inFlightIndex)
txD.Spent = make([]bool, len(originTx.MsgTx().TxOut))
txD.Err = nil
} else {
txNeededSet[*originHash] = struct{}{}
}
}
}
// Request the input transactions from the point of view of the node.
txNeededStore, err := b.fetchTxStore(node, block, txNeededSet, viewpoint)
if err != nil {
return nil, err
}
// Merge the results of the requested transactions and the in-flight
// transactions.
for _, txD := range txNeededStore {
txStore[*txD.Hash] = txD
}
return txStore, nil
}
return nil, fmt.Errorf("invalid viewpoint passed to fetchInputTransactions")
}
// FetchTransactionStore fetches the input transactions referenced by the
// passed transaction from the point of view of the end of the main chain. It
// also attempts to fetch the transaction itself so the returned TxStore can be
// examined for duplicate transactions.
func (b *BlockChain) FetchTransactionStore(tx *btcutil.Tx) (TxStore, error) {
// isValid indicates whether the current block at the head of the chain has
// had its TxTreeRegular validated by the stake voters.
func (b *BlockChain) FetchTransactionStore(tx *dcrutil.Tx,
isValid bool) (TxStore, error) {
isSSGen, _ := stake.IsSSGen(tx)
// Create a set of needed transactions from the transactions referenced
// by the inputs of the passed transaction. Also, add the passed
// transaction itself as a way for the caller to detect duplicates.
txNeededSet := make(map[wire.ShaHash]struct{})
txNeededSet := make(map[chainhash.Hash]struct{})
txNeededSet[*tx.Sha()] = struct{}{}
for _, txIn := range tx.MsgTx().TxIn {
for i, txIn := range tx.MsgTx().TxIn {
// Skip all stakebase inputs.
if isSSGen && (i == 0) {
continue
}
txNeededSet[txIn.PreviousOutPoint.Hash] = struct{}{}
}
// Request the input transactions from the point of view of the end of
// the main chain without including fully spent trasactions in the
// the main chain without including fully spent transactions in the
// results. Fully spent transactions are only needed for chain
// reorganization which does not apply here.
txStore := fetchTxStoreMain(b.db, txNeededSet, false)
topBlock, err := b.getBlockFromHash(b.bestChain.hash)
if err != nil {
return nil, err
}
if isValid {
connectTxTree(txStore, topBlock, true)
}
return txStore, nil
}

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -5,7 +5,7 @@ import (
"time"
"github.com/btcsuite/btclog"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrutil"
)
// blockProgressLogger provides periodic logging for other services in order
@ -36,7 +36,7 @@ func newBlockProgressLogger(progressMessage string, logger btclog.Logger) *block
// LogBlockHeight logs a new block height as an information message to show
// progress to the user. In order to prevent spam, it limits logging to one
// message every 10 seconds with duration and totals included.
func (b *blockProgressLogger) LogBlockHeight(block *btcutil.Block) {
func (b *blockProgressLogger) LogBlockHeight(block *dcrutil.Block) {
b.Lock()
defer b.Unlock()

File diff suppressed because it is too large


@ -1,84 +0,0 @@
btcec
=====
[![Build Status](https://travis-ci.org/btcsuite/btcd.png?branch=master)]
(https://travis-ci.org/btcsuite/btcec)
Package btcec implements elliptic curve cryptography needed for working with
Bitcoin (secp256k1 only for now). It is designed so that it may be used with the
standard crypto/ecdsa packages provided with Go. A comprehensive suite of tests
is provided to ensure proper functionality. Package btcec was originally based
on work from ThePiachu which is licensed under the same terms as Go, but it has
significantly diverged since then. The btcsuite developers' original work is
licensed under the liberal ISC license.
Although this package was primarily written for btcd, it has intentionally been
designed so it can be used as a standalone package for any projects needing to
use secp256k1 elliptic curve cryptography.
## Documentation
[![GoDoc](https://godoc.org/github.com/btcsuite/btcd/btcec?status.png)]
(http://godoc.org/github.com/btcsuite/btcd/btcec)
Full `go doc` style documentation for the project can be viewed online without
installing this package by using the GoDoc site
[here](http://godoc.org/github.com/btcsuite/btcd/btcec).
You can also view the documentation locally once the package is installed with
the `godoc` tool by running `godoc -http=":6060"` and pointing your browser to
http://localhost:6060/pkg/github.com/btcsuite/btcd/btcec
## Installation
```bash
$ go get github.com/btcsuite/btcd/btcec
```
## Examples
* [Sign Message]
(http://godoc.org/github.com/btcsuite/btcd/btcec#example-package--SignMessage)
Demonstrates signing a message with a secp256k1 private key that is first
parsed from raw bytes and serializing the generated signature.
* [Verify Signature]
(http://godoc.org/github.com/btcsuite/btcd/btcec#example-package--VerifySignature)
Demonstrates verifying a secp256k1 signature against a public key that is
first parsed from raw bytes. The signature is also parsed from raw bytes.
* [Encryption]
(http://godoc.org/github.com/btcsuite/btcd/btcec#example-package--EncryptMessage)
Demonstrates encrypting a message for a public key that is first parsed from
raw bytes, then decrypting it using the corresponding private key.
* [Decryption]
(http://godoc.org/github.com/btcsuite/btcd/btcec#example-package--DecryptMessage)
Demonstrates decrypting a message using a private key that is first parsed
from raw bytes.
## GPG Verification Key
All official release tags are signed by Conformal so users can ensure the code
has not been tampered with and is coming from the btcsuite developers. To
verify the signature perform the following:
- Download the public key from the Conformal website at
https://opensource.conformal.com/GIT-GPG-KEY-conformal.txt
- Import the public key into your GPG keyring:
```bash
gpg --import GIT-GPG-KEY-conformal.txt
```
- Verify the release tag with the following command where `TAG_NAME` is a
placeholder for the specific tag:
```bash
git tag -v TAG_NAME
```
## License
Package btcec is licensed under the [copyfree](http://copyfree.org) ISC License
except for btcec.go and btcec_test.go, which are under the same license as Go.


@ -1,17 +1,15 @@
chaincfg
========
[![Build Status](http://img.shields.io/travis/btcsuite/btcd.svg)]
(https://travis-ci.org/btcsuite/btcd) [![ISC License]
(http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
Package chaincfg defines chain configuration parameters for the three standard
Bitcoin networks and provides the ability for callers to define their own custom
Bitcoin networks.
Decred networks and provides the ability for callers to define their own custom
Decred networks.
Although this package was primarily written for btcd, it has intentionally been
Although this package was primarily written for dcrd, it has intentionally been
designed so it can be used as a standalone package for any projects needing to
use parameters for the standard Bitcoin networks or for projects needing to
use parameters for the standard Decred networks or for projects needing to
define their own network.
## Sample Use
@ -24,11 +22,11 @@ import (
"fmt"
"log"
"github.com/btcsuite/btcutil"
"github.com/btcsuite/btcd/chaincfg"
"github.com/decred/dcrutil"
"github.com/decred/dcrd/chaincfg"
)
var testnet = flag.Bool("testnet", false, "operate on the testnet Bitcoin network")
var testnet = flag.Bool("testnet", false, "operate on the testnet Decred network")
// By default (without -testnet), use mainnet.
var chainParams = &chaincfg.MainNetParams
@ -38,7 +36,7 @@ func main() {
// Modify active network parameters if operating on testnet.
if *testnet {
chainParams = &chaincfg.TestNet3Params
chainParams = &chaincfg.TestNetParams
}
// later...
@ -56,42 +54,22 @@ func main() {
## Documentation
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)]
(http://godoc.org/github.com/btcsuite/btcd/chaincfg)
(http://godoc.org/github.com/decred/dcrd/chaincfg)
Full `go doc` style documentation for the project can be viewed online without
installing this package by using the GoDoc site
[here](http://godoc.org/github.com/btcsuite/btcd/chaincfg).
[here](http://godoc.org/github.com/decred/dcrd/chaincfg).
You can also view the documentation locally once the package is installed with
the `godoc` tool by running `godoc -http=":6060"` and pointing your browser to
http://localhost:6060/pkg/github.com/btcsuite/btcd/chaincfg
http://localhost:6060/pkg/github.com/decred/dcrd/chaincfg
## Installation
```bash
$ go get github.com/btcsuite/btcd/chaincfg
$ go get github.com/decred/dcrd/chaincfg
```
## GPG Verification Key
All official release tags are signed by Conformal so users can ensure the code
has not been tampered with and is coming from the btcsuite developers. To
verify the signature perform the following:
- Download the public key from the Conformal website at
https://opensource.conformal.com/GIT-GPG-KEY-conformal.txt
- Import the public key into your GPG keyring:
```bash
gpg --import GIT-GPG-KEY-conformal.txt
```
- Verify the release tag with the following command where `TAG_NAME` is a
placeholder for the specific tag:
```bash
git tag -v TAG_NAME
```
## License
Package chaincfg is licensed under the [copyfree](http://copyfree.org) ISC

chaincfg/chainec/chainec.go Normal file

@ -0,0 +1,232 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package chainec
import (
"crypto/ecdsa"
"io"
"math/big"
)
// PublicKey is an interface representing a public key and its associated
// functions.
type PublicKey interface {
// Serialize is the default serialization method.
Serialize() []byte
// SerializeUncompressed serializes to the uncompressed format (if
// available).
SerializeUncompressed() []byte
// SerializeCompressed serializes to the compressed format (if
// available).
SerializeCompressed() []byte
// SerializeHybrid serializes to the hybrid format (if
// available).
SerializeHybrid() []byte
// ToECDSA converts the public key to an ECDSA public key.
ToECDSA() *ecdsa.PublicKey
// GetCurve returns the current curve as an interface.
GetCurve() interface{}
// GetX returns the point's X value.
GetX() *big.Int
// GetY returns the point's Y value.
GetY() *big.Int
// GetType returns the ECDSA type of this key.
GetType() int
}
// PrivateKey is an interface representing a private key and its associated
// functions.
type PrivateKey interface {
// Serialize serializes the 32-byte private key scalar to a
// byte slice.
Serialize() []byte
// SerializeSecret serializes the secret to the default serialization
// format. Used for Ed25519.
SerializeSecret() []byte
// Public returns the (X,Y) coordinates of the point produced
// by scalar multiplication of the scalar by the base point,
// AKA the public key.
Public() (*big.Int, *big.Int)
// GetD returns the value of the private scalar.
GetD() *big.Int
// GetType returns the ECDSA type of this key.
GetType() int
}
// Signature is an interface representing a signature and its associated
// functions.
type Signature interface {
// Serialize serializes the signature to the default serialization
// format.
Serialize() []byte
// GetR gets the R value of the signature.
GetR() *big.Int
// GetS gets the S value of the signature.
GetS() *big.Int
// GetType returns the ECDSA type of this key.
GetType() int
}
// DSA is an encapsulating interface for all the functions of a digital
// signature algorithm.
type DSA interface {
// ----------------------------------------------------------------------------
// Constants
//
// GetP gets the prime modulus of the curve.
GetP() *big.Int
// GetN gets the prime order of the curve.
GetN() *big.Int
// ----------------------------------------------------------------------------
// EC Math
//
// Add adds two points on the curve.
Add(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int)
// IsOnCurve checks if a given point is on the curve.
IsOnCurve(x *big.Int, y *big.Int) bool
// ScalarMult gives the product of scalar multiplication of scalar k
// by point (x,y) on the curve.
ScalarMult(x, y *big.Int, k []byte) (*big.Int, *big.Int)
// ScalarBaseMult gives the product of scalar multiplication of
// scalar k by the base point (generator) of the curve.
ScalarBaseMult(k []byte) (*big.Int, *big.Int)
// ----------------------------------------------------------------------------
// Private keys
//
// NewPrivateKey instantiates a new private key for the given
// curve.
NewPrivateKey(*big.Int) PrivateKey
// PrivKeyFromBytes calculates the public key from serialized bytes,
// and returns both it and the private key.
PrivKeyFromBytes(pk []byte) (PrivateKey, PublicKey)
// PrivKeyFromScalar calculates the public key from serialized scalar
// bytes, and returns both it and the private key. Useful for curves
// like Ed25519, where serialized private keys are different from
// serialized private scalars.
PrivKeyFromScalar(pk []byte) (PrivateKey, PublicKey)
// PrivKeyBytesLen returns the length of a serialized private key.
PrivKeyBytesLen() int
// ----------------------------------------------------------------------------
// Public keys
//
// NewPublicKey instantiates a new public key (point) for the
// given curve.
NewPublicKey(x *big.Int, y *big.Int) PublicKey
// ParsePubKey parses a serialized public key for the given
// curve and returns a public key.
ParsePubKey(pubKeyStr []byte) (PublicKey, error)
// PubKeyBytesLen returns the length of the default serialization
// method for a public key.
PubKeyBytesLen() int
// PubKeyBytesLenUncompressed returns the length of the uncompressed
// serialization method for a public key.
PubKeyBytesLenUncompressed() int
// PubKeyBytesLenCompressed returns the length of the compressed
// serialization method for a public key.
PubKeyBytesLenCompressed() int
// PubKeyBytesLenHybrid returns the length of the hybrid
// serialization method for a public key.
PubKeyBytesLenHybrid() int
// ----------------------------------------------------------------------------
// Signatures
//
// NewSignature instantiates a new signature for the given ECDSA
// method.
NewSignature(r *big.Int, s *big.Int) Signature
// ParseDERSignature parses a DER encoded signature for the given
// ECDSA method. If the method doesn't support DER signatures, it
// just parses with the default method.
ParseDERSignature(sigStr []byte) (Signature, error)
// ParseSignature parses a default encoded signature for the given ECDSA
// method.
ParseSignature(sigStr []byte) (Signature, error)
// RecoverCompact recovers a public key from an encoded signature
// and message, then verifies the signature against the public
// key.
RecoverCompact(signature, hash []byte) (PublicKey, bool, error)
// ----------------------------------------------------------------------------
// ECDSA
//
// GenerateKey generates a new private and public keypair from the
// given reader.
GenerateKey(rand io.Reader) ([]byte, *big.Int, *big.Int, error)
// Sign produces an ECDSA signature in the form of (R,S) using a
// private key and a message.
Sign(priv PrivateKey, hash []byte) (r, s *big.Int, err error)
// Verify verifies an ECDSA signature against a given message and
// public key.
Verify(pub PublicKey, hash []byte, r, s *big.Int) bool
// ----------------------------------------------------------------------------
// Symmetric cipher encryption
//
// GenerateSharedSecret generates a shared secret using a private scalar
// and a public key using ECDH.
GenerateSharedSecret(privkey []byte, x, y *big.Int) []byte
// Encrypt encrypts data to a recipient public key.
Encrypt(x, y *big.Int, in []byte) ([]byte, error)
// Decrypt decrypts data encoded to the public key that originates
// from the passed private scalar.
Decrypt(privkey []byte, in []byte) ([]byte, error)
}
// --------------------------------------------------------------------------------
// Accessible DSA suites for export.
//
const (
ECTypeSecp256k1 int = iota // 0
ECTypeEdwards // 1
ECTypeSecSchnorr // 2
)
// Secp256k1 is the secp256k1 curve and ECDSA system used in Bitcoin.
var Secp256k1 = newSecp256k1DSA()
// Edwards is the Ed25519 signature scheme over the edwards25519 curve.
var Edwards = newEdwardsDSA()
// SecSchnorr is a Schnorr signature scheme over the secp256k1 curve
// implemented in libsecp256k1.
var SecSchnorr = newSecSchnorrDSA()

16
chaincfg/chainec/doc.go Normal file
View File

@ -0,0 +1,16 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
/*
Package chainec provides wrapper functions to abstract the EC functions.
Overview
This package provides thin wrappers around the underlying EC and crypto
functions, making it easy to switch implementations (for example, from
btcec in btcd to ed25519 in decred) without changing the main body of
the code.
package chainec

334
chaincfg/chainec/edwards.go Normal file
View File

@ -0,0 +1,334 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package chainec
import (
"errors"
"io"
"math/big"
"github.com/decred/dcrd/dcrec/edwards"
)
type edwardsDSA struct {
// Constants
getN func() *big.Int
getP func() *big.Int
// EC Math
add func(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int)
isOnCurve func(x *big.Int, y *big.Int) bool
scalarMult func(x, y *big.Int, k []byte) (*big.Int, *big.Int)
scalarBaseMult func(k []byte) (*big.Int, *big.Int)
// Private keys
newPrivateKey func(d *big.Int) PrivateKey
privKeyFromBytes func(pk []byte) (PrivateKey, PublicKey)
privKeyFromScalar func(pk []byte) (PrivateKey, PublicKey)
privKeyBytesLen func() int
// Public keys
newPublicKey func(x *big.Int, y *big.Int) PublicKey
parsePubKey func(pubKeyStr []byte) (PublicKey, error)
pubKeyBytesLen func() int
pubKeyBytesLenUncompressed func() int
pubKeyBytesLenCompressed func() int
pubKeyBytesLenHybrid func() int
// Signatures
newSignature func(r *big.Int, s *big.Int) Signature
parseDERSignature func(sigStr []byte) (Signature, error)
parseSignature func(sigStr []byte) (Signature, error)
recoverCompact func(signature, hash []byte) (PublicKey, bool, error)
// ECDSA
generateKey func(rand io.Reader) ([]byte, *big.Int, *big.Int, error)
sign func(priv PrivateKey, hash []byte) (r, s *big.Int, err error)
verify func(pub PublicKey, hash []byte, r, s *big.Int) bool
// Symmetric cipher encryption
generateSharedSecret func(privkey []byte, x, y *big.Int) []byte
encrypt func(x, y *big.Int, in []byte) ([]byte, error)
decrypt func(privkey []byte, in []byte) ([]byte, error)
}
var (
edwardsCurve = edwards.Edwards()
)
// Boilerplate exported functions to make the struct interact with the interface.
// Constants
func (e edwardsDSA) GetP() *big.Int {
return e.getP()
}
func (e edwardsDSA) GetN() *big.Int {
return e.getN()
}
// EC Math
func (e edwardsDSA) Add(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int) {
return e.add(x1, y1, x2, y2)
}
func (e edwardsDSA) IsOnCurve(x, y *big.Int) bool {
return e.isOnCurve(x, y)
}
func (e edwardsDSA) ScalarMult(x, y *big.Int, k []byte) (*big.Int, *big.Int) {
return e.scalarMult(x, y, k)
}
func (e edwardsDSA) ScalarBaseMult(k []byte) (*big.Int, *big.Int) {
return e.scalarBaseMult(k)
}
// Private keys
func (e edwardsDSA) NewPrivateKey(d *big.Int) PrivateKey {
return e.newPrivateKey(d)
}
func (e edwardsDSA) PrivKeyFromBytes(pk []byte) (PrivateKey, PublicKey) {
return e.privKeyFromBytes(pk)
}
func (e edwardsDSA) PrivKeyFromScalar(pk []byte) (PrivateKey, PublicKey) {
return e.privKeyFromScalar(pk)
}
func (e edwardsDSA) PrivKeyBytesLen() int {
return e.privKeyBytesLen()
}
// Public keys
func (e edwardsDSA) NewPublicKey(x *big.Int, y *big.Int) PublicKey {
return e.newPublicKey(x, y)
}
func (e edwardsDSA) ParsePubKey(pubKeyStr []byte) (PublicKey, error) {
return e.parsePubKey(pubKeyStr)
}
func (e edwardsDSA) PubKeyBytesLen() int {
return e.pubKeyBytesLen()
}
func (e edwardsDSA) PubKeyBytesLenUncompressed() int {
return e.pubKeyBytesLenUncompressed()
}
func (e edwardsDSA) PubKeyBytesLenCompressed() int {
return e.pubKeyBytesLenCompressed()
}
func (e edwardsDSA) PubKeyBytesLenHybrid() int {
return e.pubKeyBytesLenHybrid()
}
// Signatures
func (e edwardsDSA) NewSignature(r, s *big.Int) Signature {
return e.newSignature(r, s)
}
func (e edwardsDSA) ParseDERSignature(sigStr []byte) (Signature, error) {
return e.parseDERSignature(sigStr)
}
func (e edwardsDSA) ParseSignature(sigStr []byte) (Signature, error) {
return e.parseSignature(sigStr)
}
func (e edwardsDSA) RecoverCompact(signature, hash []byte) (PublicKey, bool,
error) {
return e.recoverCompact(signature, hash)
}
// ECDSA
func (e edwardsDSA) GenerateKey(rand io.Reader) ([]byte, *big.Int, *big.Int,
error) {
return e.generateKey(rand)
}
func (e edwardsDSA) Sign(priv PrivateKey, hash []byte) (r, s *big.Int,
err error) {
r, s, err = e.sign(priv, hash)
return
}
func (e edwardsDSA) Verify(pub PublicKey, hash []byte, r, s *big.Int) bool {
return e.verify(pub, hash, r, s)
}
// Symmetric cipher encryption
func (e edwardsDSA) GenerateSharedSecret(privkey []byte, x, y *big.Int) []byte {
return e.generateSharedSecret(privkey, x, y)
}
func (e edwardsDSA) Encrypt(x, y *big.Int, in []byte) ([]byte,
error) {
return e.encrypt(x, y, in)
}
func (e edwardsDSA) Decrypt(privkey []byte, in []byte) ([]byte,
error) {
return e.decrypt(privkey, in)
}
// newEdwardsDSA instantiates a functional DSA subsystem over the edwards25519
// curve. A caveat for the functions below is that they are all routed through
// interfaces, and nil returns from the underlying library must ALWAYS be
// checked by comparing the returned interface value against nil.
func newEdwardsDSA() DSA {
var ed DSA = &edwardsDSA{
// Constants
getP: func() *big.Int {
return edwardsCurve.P
},
getN: func() *big.Int {
return edwardsCurve.N
},
// EC Math
add: func(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int) {
return edwardsCurve.Add(x1, y1, x2, y2)
},
isOnCurve: func(x, y *big.Int) bool {
return edwardsCurve.IsOnCurve(x, y)
},
scalarMult: func(x, y *big.Int, k []byte) (*big.Int, *big.Int) {
return edwardsCurve.ScalarMult(x, y, k)
},
scalarBaseMult: func(k []byte) (*big.Int, *big.Int) {
return edwardsCurve.ScalarBaseMult(k)
},
// Private keys
newPrivateKey: func(d *big.Int) PrivateKey {
pk := edwards.NewPrivateKey(edwardsCurve, d)
if pk != nil {
return PrivateKey(*pk)
}
return nil
},
privKeyFromBytes: func(pk []byte) (PrivateKey, PublicKey) {
priv, pub := edwards.PrivKeyFromBytes(edwardsCurve, pk)
if priv == nil {
return nil, nil
}
if pub == nil {
return nil, nil
}
tpriv := PrivateKey(*priv)
tpub := PublicKey(*pub)
return tpriv, tpub
},
privKeyFromScalar: func(pk []byte) (PrivateKey, PublicKey) {
priv, pub, err := edwards.PrivKeyFromScalar(edwardsCurve, pk)
if err != nil {
return nil, nil
}
if priv == nil {
return nil, nil
}
if pub == nil {
return nil, nil
}
tpriv := PrivateKey(*priv)
tpub := PublicKey(*pub)
return tpriv, tpub
},
privKeyBytesLen: func() int {
return edwards.PrivKeyBytesLen
},
// Public keys
newPublicKey: func(x *big.Int, y *big.Int) PublicKey {
pk := edwards.NewPublicKey(edwardsCurve, x, y)
tpk := PublicKey(*pk)
return tpk
},
parsePubKey: func(pubKeyStr []byte) (PublicKey, error) {
pk, err := edwards.ParsePubKey(edwardsCurve, pubKeyStr)
if err != nil {
return nil, err
}
tpk := PublicKey(*pk)
return tpk, err
},
pubKeyBytesLen: func() int {
return edwards.PubKeyBytesLen
},
pubKeyBytesLenUncompressed: func() int {
return edwards.PubKeyBytesLen
},
pubKeyBytesLenCompressed: func() int {
return edwards.PubKeyBytesLen
},
pubKeyBytesLenHybrid: func() int {
return edwards.PubKeyBytesLen
},
// Signatures
newSignature: func(r *big.Int, s *big.Int) Signature {
sig := edwards.NewSignature(r, s)
ts := Signature(*sig)
return ts
},
parseDERSignature: func(sigStr []byte) (Signature, error) {
sig, err := edwards.ParseDERSignature(edwardsCurve, sigStr)
if err != nil {
return nil, err
}
ts := Signature(*sig)
return ts, err
},
parseSignature: func(sigStr []byte) (Signature, error) {
sig, err := edwards.ParseSignature(edwardsCurve, sigStr)
if err != nil {
return nil, err
}
ts := Signature(*sig)
return ts, err
},
recoverCompact: func(signature, hash []byte) (PublicKey, bool, error) {
pk, bl, err := edwards.RecoverCompact(signature, hash)
tpk := PublicKey(*pk)
return tpk, bl, err
},
// ECDSA
generateKey: func(rand io.Reader) ([]byte, *big.Int, *big.Int, error) {
return edwards.GenerateKey(edwardsCurve, rand)
},
sign: func(priv PrivateKey, hash []byte) (r, s *big.Int, err error) {
if priv.GetType() != ECTypeEdwards {
return nil, nil, errors.New("wrong type")
}
epriv, ok := priv.(edwards.PrivateKey)
if !ok {
return nil, nil, errors.New("wrong type")
}
r, s, err = edwards.Sign(edwardsCurve, &epriv, hash)
return
},
verify: func(pub PublicKey, hash []byte, r, s *big.Int) bool {
if pub.GetType() != ECTypeEdwards {
return false
}
epub, ok := pub.(edwards.PublicKey)
if !ok {
return false
}
return edwards.Verify(&epub, hash, r, s)
},
// Symmetric cipher encryption
generateSharedSecret: func(privkey []byte, x, y *big.Int) []byte {
privKeyLocal, _, err := edwards.PrivKeyFromScalar(edwardsCurve,
privkey)
if err != nil {
return nil
}
pubkey := edwards.NewPublicKey(edwardsCurve, x, y)
return edwards.GenerateSharedSecret(privKeyLocal, pubkey)
},
encrypt: func(x, y *big.Int, in []byte) ([]byte, error) {
pubkey := edwards.NewPublicKey(edwardsCurve, x, y)
return edwards.Encrypt(edwardsCurve, pubkey, in)
},
decrypt: func(privkey []byte, in []byte) ([]byte, error) {
privKeyLocal, _, err := edwards.PrivKeyFromScalar(edwardsCurve,
privkey)
if err != nil {
return nil, err
}
return edwards.Decrypt(edwardsCurve, privKeyLocal, in)
},
}
return ed.(DSA)
}

View File

@ -0,0 +1,91 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package chainec
import (
"bytes"
"encoding/hex"
"testing"
)
func TestGeneralEd25519(t *testing.T) {
// Sample pubkey
samplePubkey, _ := hex.DecodeString("b0d88c8d1d327d1bc6f00f6d7682c98" +
"562869a798b96367bf8d67712c9cb1d17")
_, err := Edwards.ParsePubKey(samplePubkey)
if err != nil {
t.Errorf("failure parsing pubkey: %v", err)
}
// Sample privkey secret
samplePrivKey, _ := hex.DecodeString("a980f892db13c99a3e8971e965b2ff3d4" +
"1eafd54093bc9f34d1fd22d84115bb644b57ee30cdb55829d0a5d4f046baef078f1e97" +
"a7f21b62d75f8e96ea139c35f")
privTest, _ := Edwards.PrivKeyFromBytes(samplePrivKey)
if privTest == nil {
t.Errorf("failure parsing privkey from secret")
}
// Sample privkey scalar
samplePrivKeyScalar, _ := hex.DecodeString("04c723f67789d320bfcccc0ff2bc84" +
"95a09c2356fa63ac6457107c295e6fde68")
privTest, _ = Edwards.PrivKeyFromScalar(samplePrivKeyScalar)
if privTest == nil {
t.Errorf("failure parsing privkey from scalar")
}
// Sample signature
sampleSig, _ := hex.DecodeString(
"71301d3212915df23211bbd0bae5e678a51c7212ecc9341a91c48fbe96772e08" +
"cdd3d3b1f8ec828b3546b61a27b53a5472597ffd1771c39219741070ca62a40c")
_, err = Edwards.ParseDERSignature(sampleSig)
if err != nil {
t.Errorf("failure parsing DER signature: %v", err)
}
}
func TestPrivKeysEdwards(t *testing.T) {
tests := []struct {
name string
key []byte
}{
{
name: "check curve",
key: []byte{
0x0e, 0x10, 0xcb, 0xb0, 0x70, 0x27, 0xb9, 0x76,
0x36, 0xf8, 0x36, 0x48, 0xb2, 0xb5, 0x1a, 0x98,
0x7d, 0xad, 0x78, 0x2e, 0xbd, 0xaf, 0xcf, 0xbc,
0x4f, 0xe8, 0xd7, 0x49, 0x84, 0x2b, 0x24, 0xd8,
},
},
}
for _, test := range tests {
priv, pub := Edwards.PrivKeyFromScalar(test.key)
if priv == nil || pub == nil {
t.Errorf("failure deserializing from bytes")
continue
}
hash := []byte{0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8, 0x9}
r, s, err := Edwards.Sign(priv, hash)
if err != nil {
t.Errorf("%s could not sign: %v", test.name, err)
continue
}
sig := Edwards.NewSignature(r, s)
if !Edwards.Verify(pub, hash, sig.GetR(), sig.GetS()) {
t.Errorf("%s could not verify: %v", test.name, err)
continue
}
serializedKey := priv.Serialize()
if !bytes.Equal(serializedKey, test.key) {
t.Errorf("%s unexpected serialized bytes - got: %x, "+
"want: %x", test.name, serializedKey, test.key)
}
}
}

View File

@ -0,0 +1,336 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package chainec
import (
"errors"
"fmt"
"io"
"math/big"
"github.com/decred/dcrd/dcrec/secp256k1"
)
type secp256k1DSA struct {
// Constants
getN func() *big.Int
getP func() *big.Int
// EC Math
add func(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int)
isOnCurve func(x *big.Int, y *big.Int) bool
scalarMult func(x, y *big.Int, k []byte) (*big.Int, *big.Int)
scalarBaseMult func(k []byte) (*big.Int, *big.Int)
// Private keys
newPrivateKey func(d *big.Int) PrivateKey
privKeyFromBytes func(pk []byte) (PrivateKey, PublicKey)
privKeyFromScalar func(pk []byte) (PrivateKey, PublicKey)
privKeyBytesLen func() int
// Public keys
newPublicKey func(x *big.Int, y *big.Int) PublicKey
parsePubKey func(pubKeyStr []byte) (PublicKey, error)
pubKeyBytesLen func() int
pubKeyBytesLenUncompressed func() int
pubKeyBytesLenCompressed func() int
pubKeyBytesLenHybrid func() int
// Signatures
newSignature func(r *big.Int, s *big.Int) Signature
parseDERSignature func(sigStr []byte) (Signature, error)
parseSignature func(sigStr []byte) (Signature, error)
recoverCompact func(signature, hash []byte) (PublicKey, bool, error)
// ECDSA
generateKey func(rand io.Reader) ([]byte, *big.Int, *big.Int, error)
sign func(priv PrivateKey, hash []byte) (r, s *big.Int, err error)
verify func(pub PublicKey, hash []byte, r, s *big.Int) bool
// Symmetric cipher encryption
generateSharedSecret func(privkey []byte, x, y *big.Int) []byte
encrypt func(x, y *big.Int, in []byte) ([]byte, error)
decrypt func(privkey []byte, in []byte) ([]byte, error)
}
var (
secp256k1Curve = secp256k1.S256()
)
// Boilerplate exported functions to make the struct interact with the interface.
// Constants
func (s secp256k1DSA) GetP() *big.Int {
return s.getP()
}
func (s secp256k1DSA) GetN() *big.Int {
return s.getN()
}
// EC Math
func (s secp256k1DSA) Add(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int) {
return s.add(x1, y1, x2, y2)
}
func (s secp256k1DSA) IsOnCurve(x, y *big.Int) bool {
return s.isOnCurve(x, y)
}
func (s secp256k1DSA) ScalarMult(x, y *big.Int, k []byte) (*big.Int, *big.Int) {
return s.scalarMult(x, y, k)
}
func (s secp256k1DSA) ScalarBaseMult(k []byte) (*big.Int, *big.Int) {
return s.scalarBaseMult(k)
}
// Private keys
func (s secp256k1DSA) NewPrivateKey(d *big.Int) PrivateKey {
return s.newPrivateKey(d)
}
func (s secp256k1DSA) PrivKeyFromBytes(pk []byte) (PrivateKey, PublicKey) {
return s.privKeyFromBytes(pk)
}
func (s secp256k1DSA) PrivKeyFromScalar(pk []byte) (PrivateKey, PublicKey) {
return s.privKeyFromScalar(pk)
}
func (s secp256k1DSA) PrivKeyBytesLen() int {
return s.privKeyBytesLen()
}
// Public keys
func (s secp256k1DSA) NewPublicKey(x *big.Int, y *big.Int) PublicKey {
return s.newPublicKey(x, y)
}
func (s secp256k1DSA) ParsePubKey(pubKeyStr []byte) (PublicKey, error) {
return s.parsePubKey(pubKeyStr)
}
func (s secp256k1DSA) PubKeyBytesLen() int {
return s.pubKeyBytesLen()
}
func (s secp256k1DSA) PubKeyBytesLenUncompressed() int {
return s.pubKeyBytesLenUncompressed()
}
func (s secp256k1DSA) PubKeyBytesLenCompressed() int {
return s.pubKeyBytesLenCompressed()
}
func (s secp256k1DSA) PubKeyBytesLenHybrid() int {
return s.pubKeyBytesLenHybrid()
}
// Signatures
func (sp secp256k1DSA) NewSignature(r, s *big.Int) Signature {
return sp.newSignature(r, s)
}
func (s secp256k1DSA) ParseDERSignature(sigStr []byte) (Signature, error) {
return s.parseDERSignature(sigStr)
}
func (s secp256k1DSA) ParseSignature(sigStr []byte) (Signature, error) {
return s.parseSignature(sigStr)
}
func (s secp256k1DSA) RecoverCompact(signature, hash []byte) (PublicKey, bool,
error) {
return s.recoverCompact(signature, hash)
}
// ECDSA
func (s secp256k1DSA) GenerateKey(rand io.Reader) ([]byte, *big.Int, *big.Int,
error) {
return s.generateKey(rand)
}
func (sp secp256k1DSA) Sign(priv PrivateKey, hash []byte) (r, s *big.Int,
err error) {
r, s, err = sp.sign(priv, hash)
return
}
func (sp secp256k1DSA) Verify(pub PublicKey, hash []byte, r, s *big.Int) bool {
return sp.verify(pub, hash, r, s)
}
// Symmetric cipher encryption
func (s secp256k1DSA) GenerateSharedSecret(privkey []byte, x, y *big.Int) []byte {
return s.generateSharedSecret(privkey, x, y)
}
func (s secp256k1DSA) Encrypt(x, y *big.Int, in []byte) ([]byte,
error) {
return s.encrypt(x, y, in)
}
func (s secp256k1DSA) Decrypt(privkey []byte, in []byte) ([]byte,
error) {
return s.decrypt(privkey, in)
}
// newSecp256k1DSA instantiates a functional DSA subsystem over the secp256k1
// curve. A caveat for the functions below is that they are all routed through
// interfaces, and nil returns from the underlying library must ALWAYS be
// checked by comparing the returned interface value against nil.
func newSecp256k1DSA() DSA {
var secp DSA = &secp256k1DSA{
// Constants
getP: func() *big.Int {
return secp256k1Curve.P
},
getN: func() *big.Int {
return secp256k1Curve.N
},
// EC Math
add: func(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int) {
return secp256k1Curve.Add(x1, y1, x2, y2)
},
isOnCurve: func(x, y *big.Int) bool {
return secp256k1Curve.IsOnCurve(x, y)
},
scalarMult: func(x, y *big.Int, k []byte) (*big.Int, *big.Int) {
return secp256k1Curve.ScalarMult(x, y, k)
},
scalarBaseMult: func(k []byte) (*big.Int, *big.Int) {
return secp256k1Curve.ScalarBaseMult(k)
},
// Private keys
newPrivateKey: func(d *big.Int) PrivateKey {
if d == nil {
return nil
}
pk := secp256k1.NewPrivateKey(secp256k1Curve, d)
if pk != nil {
return PrivateKey(pk)
}
return nil
},
privKeyFromBytes: func(pk []byte) (PrivateKey, PublicKey) {
priv, pub := secp256k1.PrivKeyFromBytes(secp256k1Curve, pk)
if priv == nil {
return nil, nil
}
if pub == nil {
return nil, nil
}
tpriv := PrivateKey(priv)
tpub := PublicKey(pub)
return tpriv, tpub
},
privKeyFromScalar: func(pk []byte) (PrivateKey, PublicKey) {
priv, pub := secp256k1.PrivKeyFromScalar(secp256k1Curve, pk)
if priv == nil {
return nil, nil
}
if pub == nil {
return nil, nil
}
tpriv := PrivateKey(priv)
tpub := PublicKey(pub)
return tpriv, tpub
},
privKeyBytesLen: func() int {
return secp256k1.PrivKeyBytesLen
},
// Public keys
newPublicKey: func(x *big.Int, y *big.Int) PublicKey {
pk := secp256k1.NewPublicKey(secp256k1Curve, x, y)
tpk := PublicKey(pk)
return tpk
},
parsePubKey: func(pubKeyStr []byte) (PublicKey, error) {
pk, err := secp256k1.ParsePubKey(pubKeyStr, secp256k1Curve)
if err != nil {
return nil, err
}
tpk := PublicKey(pk)
return tpk, err
},
pubKeyBytesLen: func() int {
return secp256k1.PubKeyBytesLenCompressed
},
pubKeyBytesLenUncompressed: func() int {
return secp256k1.PubKeyBytesLenUncompressed
},
pubKeyBytesLenCompressed: func() int {
return secp256k1.PubKeyBytesLenCompressed
},
pubKeyBytesLenHybrid: func() int {
return secp256k1.PubKeyBytesLenHybrid
},
// Signatures
newSignature: func(r *big.Int, s *big.Int) Signature {
sig := secp256k1.NewSignature(r, s)
ts := Signature(sig)
return ts
},
parseDERSignature: func(sigStr []byte) (Signature, error) {
sig, err := secp256k1.ParseDERSignature(sigStr, secp256k1Curve)
if err != nil {
return nil, err
}
ts := Signature(sig)
return ts, err
},
parseSignature: func(sigStr []byte) (Signature, error) {
sig, err := secp256k1.ParseSignature(sigStr, secp256k1Curve)
if err != nil {
return nil, err
}
ts := Signature(sig)
return ts, err
},
recoverCompact: func(signature, hash []byte) (PublicKey, bool, error) {
pk, bl, err := secp256k1.RecoverCompact(secp256k1Curve, signature,
hash)
tpk := PublicKey(pk)
return tpk, bl, err
},
// ECDSA
generateKey: func(rand io.Reader) ([]byte, *big.Int, *big.Int, error) {
return secp256k1.GenerateKey(secp256k1Curve, rand)
},
sign: func(priv PrivateKey, hash []byte) (r, s *big.Int, err error) {
if priv.GetType() != ECTypeSecp256k1 {
return nil, nil, errors.New("wrong type")
}
spriv, ok := priv.(*secp256k1.PrivateKey)
if !ok {
return nil, nil, errors.New("wrong type")
}
sig, err := spriv.Sign(hash)
if sig != nil {
r = sig.GetR()
s = sig.GetS()
}
return
},
verify: func(pub PublicKey, hash []byte, r, s *big.Int) bool {
spub := secp256k1.NewPublicKey(secp256k1Curve, pub.GetX(), pub.GetY())
ssig := secp256k1.NewSignature(r, s)
return ssig.Verify(hash, spub)
},
// Symmetric cipher encryption
generateSharedSecret: func(privkey []byte, x, y *big.Int) []byte {
sprivkey, _ := secp256k1.PrivKeyFromBytes(secp256k1Curve, privkey)
if sprivkey == nil {
return nil
}
spubkey := secp256k1.NewPublicKey(secp256k1Curve, x, y)
return secp256k1.GenerateSharedSecret(sprivkey, spubkey)
},
encrypt: func(x, y *big.Int, in []byte) ([]byte, error) {
spubkey := secp256k1.NewPublicKey(secp256k1Curve, x, y)
return secp256k1.Encrypt(spubkey, in)
},
decrypt: func(privkey []byte, in []byte) ([]byte, error) {
sprivkey, _ := secp256k1.PrivKeyFromBytes(secp256k1Curve, privkey)
if sprivkey == nil {
return nil, fmt.Errorf("failure deserializing privkey")
}
return secp256k1.Decrypt(sprivkey, in)
},
}
return secp.(DSA)
}

View File

@ -0,0 +1,286 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package chainec
import (
"bytes"
"encoding/hex"
"testing"
"github.com/davecgh/go-spew/spew"
)
func TestGeneralSecp256k1(t *testing.T) {
// Sample expanded pubkey (Satoshi from Genesis block)
samplePubkey, _ := hex.DecodeString("04" +
"678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb6" +
"49f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f")
_, err := Secp256k1.ParsePubKey(samplePubkey)
if err != nil {
t.Errorf("failure parsing pubkey: %v", err)
}
// Sample compressed pubkey
samplePubkey, _ = hex.DecodeString("02" +
"4627032575180c2773b3eedd3a163dc2f3c6c84f9d0a1fc561a9578a15e6d0e3")
_, err = Secp256k1.ParsePubKey(samplePubkey)
if err != nil {
t.Errorf("failure parsing pubkey: %v", err)
}
// Sample signature from https://en.bitcoin.it/wiki/Transaction
sampleSig, _ := hex.DecodeString("30" +
"45" +
"02" +
"20" +
"6e21798a42fae0e854281abd38bacd1aeed3ee3738d9e1446618c4571d1090db" +
"02" +
"21" +
"00e2ac980643b0b82c0e88ffdfec6b64e3e6ba35e7ba5fdd7d5d6cc8d25c6b2415")
_, err = Secp256k1.ParseDERSignature(sampleSig)
if err != nil {
t.Errorf("failure parsing DER signature: %v", err)
}
}
type signatureTest struct {
name string
sig []byte
der bool
isValid bool
}
// decodeHex decodes the passed hex string and returns the resulting bytes. It
// panics if an error occurs. This is only used in the tests as a helper since
// the only way it can fail is if there is an error in the test source code.
func decodeHex(hexStr string) []byte {
b, err := hex.DecodeString(hexStr)
if err != nil {
panic("invalid hex string in test source: err " + err.Error() +
", hex: " + hexStr)
}
return b
}
type pubKeyTest struct {
name string
key []byte
format byte
isValid bool
}
const (
TstPubkeyUncompressed byte = 0x4 // x coord + y coord
TstPubkeyCompressed byte = 0x2 // y_bit + x coord
TstPubkeyHybrid byte = 0x6 // y_bit + x coord + y coord
)
var pubKeyTests = []pubKeyTest{
// pubkey from bitcoin blockchain tx
// 0437cd7f8525ceed2324359c2d0ba26006d92d85
{
name: "uncompressed ok",
key: []byte{0x04, 0x11, 0xdb, 0x93, 0xe1, 0xdc, 0xdb, 0x8a,
0x01, 0x6b, 0x49, 0x84, 0x0f, 0x8c, 0x53, 0xbc, 0x1e,
0xb6, 0x8a, 0x38, 0x2e, 0x97, 0xb1, 0x48, 0x2e, 0xca,
0xd7, 0xb1, 0x48, 0xa6, 0x90, 0x9a, 0x5c, 0xb2, 0xe0,
0xea, 0xdd, 0xfb, 0x84, 0xcc, 0xf9, 0x74, 0x44, 0x64,
0xf8, 0x2e, 0x16, 0x0b, 0xfa, 0x9b, 0x8b, 0x64, 0xf9,
0xd4, 0xc0, 0x3f, 0x99, 0x9b, 0x86, 0x43, 0xf6, 0x56,
0xb4, 0x12, 0xa3,
},
isValid: true,
format: TstPubkeyUncompressed,
},
{
name: "uncompressed as hybrid ok",
key: []byte{0x07, 0x11, 0xdb, 0x93, 0xe1, 0xdc, 0xdb, 0x8a,
0x01, 0x6b, 0x49, 0x84, 0x0f, 0x8c, 0x53, 0xbc, 0x1e,
0xb6, 0x8a, 0x38, 0x2e, 0x97, 0xb1, 0x48, 0x2e, 0xca,
0xd7, 0xb1, 0x48, 0xa6, 0x90, 0x9a, 0x5c, 0xb2, 0xe0,
0xea, 0xdd, 0xfb, 0x84, 0xcc, 0xf9, 0x74, 0x44, 0x64,
0xf8, 0x2e, 0x16, 0x0b, 0xfa, 0x9b, 0x8b, 0x64, 0xf9,
0xd4, 0xc0, 0x3f, 0x99, 0x9b, 0x86, 0x43, 0xf6, 0x56,
0xb4, 0x12, 0xa3,
},
isValid: true,
format: TstPubkeyHybrid,
},
// from tx 0b09c51c51ff762f00fb26217269d2a18e77a4fa87d69b3c363ab4df16543f20
{
name: "compressed ok (ybit = 0)",
key: []byte{0x02, 0xce, 0x0b, 0x14, 0xfb, 0x84, 0x2b, 0x1b,
0xa5, 0x49, 0xfd, 0xd6, 0x75, 0xc9, 0x80, 0x75, 0xf1,
0x2e, 0x9c, 0x51, 0x0f, 0x8e, 0xf5, 0x2b, 0xd0, 0x21,
0xa9, 0xa1, 0xf4, 0x80, 0x9d, 0x3b, 0x4d,
},
isValid: true,
format: TstPubkeyCompressed,
},
// from tx fdeb8e72524e8dab0da507ddbaf5f88fe4a933eb10a66bc4745bb0aa11ea393c
{
name: "compressed ok (ybit = 1)",
key: []byte{0x03, 0x26, 0x89, 0xc7, 0xc2, 0xda, 0xb1, 0x33,
0x09, 0xfb, 0x14, 0x3e, 0x0e, 0x8f, 0xe3, 0x96, 0x34,
0x25, 0x21, 0x88, 0x7e, 0x97, 0x66, 0x90, 0xb6, 0xb4,
0x7f, 0x5b, 0x2a, 0x4b, 0x7d, 0x44, 0x8e,
},
isValid: true,
format: TstPubkeyCompressed,
},
{
name: "hybrid",
key: []byte{0x06, 0x79, 0xbe, 0x66, 0x7e, 0xf9, 0xdc, 0xbb,
0xac, 0x55, 0xa0, 0x62, 0x95, 0xce, 0x87, 0x0b, 0x07,
0x02, 0x9b, 0xfc, 0xdb, 0x2d, 0xce, 0x28, 0xd9, 0x59,
0xf2, 0x81, 0x5b, 0x16, 0xf8, 0x17, 0x98, 0x48, 0x3a,
0xda, 0x77, 0x26, 0xa3, 0xc4, 0x65, 0x5d, 0xa4, 0xfb,
0xfc, 0x0e, 0x11, 0x08, 0xa8, 0xfd, 0x17, 0xb4, 0x48,
0xa6, 0x85, 0x54, 0x19, 0x9c, 0x47, 0xd0, 0x8f, 0xfb,
0x10, 0xd4, 0xb8,
},
format: TstPubkeyHybrid,
isValid: true,
},
}
func TestPubKeys(t *testing.T) {
for _, test := range pubKeyTests {
pk, err := Secp256k1.ParsePubKey(test.key)
if err != nil {
if test.isValid {
t.Errorf("%s pubkey failed when shouldn't %v",
test.name, err)
}
continue
}
if !test.isValid {
t.Errorf("%s counted as valid when it should fail",
test.name)
continue
}
var pkStr []byte
switch test.format {
case TstPubkeyUncompressed:
pkStr = (PublicKey)(pk).SerializeUncompressed()
case TstPubkeyCompressed:
pkStr = (PublicKey)(pk).SerializeCompressed()
case TstPubkeyHybrid:
pkStr = (PublicKey)(pk).SerializeHybrid()
}
if !bytes.Equal(test.key, pkStr) {
t.Errorf("%s pubkey: serialized keys do not match.",
test.name)
spew.Dump(test.key)
spew.Dump(pkStr)
}
}
}
func TestPrivKeys(t *testing.T) {
tests := []struct {
name string
key []byte
}{
{
name: "check curve",
key: []byte{
0xea, 0xf0, 0x2c, 0xa3, 0x48, 0xc5, 0x24, 0xe6,
0x39, 0x26, 0x55, 0xba, 0x4d, 0x29, 0x60, 0x3c,
0xd1, 0xa7, 0x34, 0x7d, 0x9d, 0x65, 0xcf, 0xe9,
0x3c, 0xe1, 0xeb, 0xff, 0xdc, 0xa2, 0x26, 0x94,
},
},
}
for _, test := range tests {
priv, pub := Secp256k1.PrivKeyFromBytes(test.key)
_, err := Secp256k1.ParsePubKey(pub.SerializeUncompressed())
if err != nil {
t.Errorf("%s privkey: %v", test.name, err)
continue
}
hash := []byte{0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8, 0x9}
r, s, err := Secp256k1.Sign(priv, hash)
if err != nil {
t.Errorf("%s could not sign: %v", test.name, err)
continue
}
sig := Secp256k1.NewSignature(r, s)
if !Secp256k1.Verify(pub, hash, sig.GetR(), sig.GetS()) {
t.Errorf("%s could not verify: %v", test.name, err)
continue
}
serializedKey := priv.Serialize()
if !bytes.Equal(serializedKey, test.key) {
t.Errorf("%s unexpected serialized bytes - got: %x, "+
"want: %x", test.name, serializedKey, test.key)
}
}
}
var signatureTests = []signatureTest{
// signatures from bitcoin blockchain tx
// 0437cd7f8525ceed2324359c2d0ba26006d92d85
{
name: "valid signature.",
sig: []byte{0x30, 0x44, 0x02, 0x20, 0x4e, 0x45, 0xe1, 0x69,
0x32, 0xb8, 0xaf, 0x51, 0x49, 0x61, 0xa1, 0xd3, 0xa1,
0xa2, 0x5f, 0xdf, 0x3f, 0x4f, 0x77, 0x32, 0xe9, 0xd6,
0x24, 0xc6, 0xc6, 0x15, 0x48, 0xab, 0x5f, 0xb8, 0xcd,
0x41, 0x02, 0x20, 0x18, 0x15, 0x22, 0xec, 0x8e, 0xca,
0x07, 0xde, 0x48, 0x60, 0xa4, 0xac, 0xdd, 0x12, 0x90,
0x9d, 0x83, 0x1c, 0xc5, 0x6c, 0xbb, 0xac, 0x46, 0x22,
0x08, 0x22, 0x21, 0xa8, 0x76, 0x8d, 0x1d, 0x09,
},
der: true,
isValid: true,
},
{
name: "empty.",
sig: []byte{},
isValid: false,
},
{
name: "bad magic.",
sig: []byte{0x31, 0x44, 0x02, 0x20, 0x4e, 0x45, 0xe1, 0x69,
0x32, 0xb8, 0xaf, 0x51, 0x49, 0x61, 0xa1, 0xd3, 0xa1,
0xa2, 0x5f, 0xdf, 0x3f, 0x4f, 0x77, 0x32, 0xe9, 0xd6,
0x24, 0xc6, 0xc6, 0x15, 0x48, 0xab, 0x5f, 0xb8, 0xcd,
0x41, 0x02, 0x20, 0x18, 0x15, 0x22, 0xec, 0x8e, 0xca,
0x07, 0xde, 0x48, 0x60, 0xa4, 0xac, 0xdd, 0x12, 0x90,
0x9d, 0x83, 0x1c, 0xc5, 0x6c, 0xbb, 0xac, 0x46, 0x22,
0x08, 0x22, 0x21, 0xa8, 0x76, 0x8d, 0x1d, 0x09,
},
der: true,
isValid: false,
},
}
func TestSignatures(t *testing.T) {
for _, test := range signatureTests {
var err error
if test.der {
_, err = Secp256k1.ParseDERSignature(test.sig)
} else {
_, err = Secp256k1.ParseSignature(test.sig)
}
if err != nil {
if test.isValid {
t.Errorf("%s signature failed when shouldn't %v",
test.name, err)
}
continue
}
if !test.isValid {
t.Errorf("%s counted as valid when it should fail",
test.name)
}
}
}

View File

@ -0,0 +1,314 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package chainec
import (
"fmt"
"io"
"math/big"
"github.com/decred/dcrd/dcrec/secp256k1"
"github.com/decred/dcrd/dcrec/secp256k1/schnorr"
)
type secSchnorrDSA struct {
// Constants
getN func() *big.Int
getP func() *big.Int
// EC Math
add func(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int)
isOnCurve func(x *big.Int, y *big.Int) bool
scalarMult func(x, y *big.Int, k []byte) (*big.Int, *big.Int)
scalarBaseMult func(k []byte) (*big.Int, *big.Int)
// Private keys
newPrivateKey func(d *big.Int) PrivateKey
privKeyFromBytes func(pk []byte) (PrivateKey, PublicKey)
privKeyFromScalar func(pk []byte) (PrivateKey, PublicKey)
privKeyBytesLen func() int
// Public keys
newPublicKey func(x *big.Int, y *big.Int) PublicKey
parsePubKey func(pubKeyStr []byte) (PublicKey, error)
pubKeyBytesLen func() int
pubKeyBytesLenUncompressed func() int
pubKeyBytesLenCompressed func() int
pubKeyBytesLenHybrid func() int
// Signatures
newSignature func(r *big.Int, s *big.Int) Signature
parseDERSignature func(sigStr []byte) (Signature, error)
parseSignature func(sigStr []byte) (Signature, error)
recoverCompact func(signature, hash []byte) (PublicKey, bool, error)
// ECDSA
generateKey func(rand io.Reader) ([]byte, *big.Int, *big.Int, error)
sign func(priv PrivateKey, hash []byte) (r, s *big.Int, err error)
verify func(pub PublicKey, hash []byte, r, s *big.Int) bool
// Symmetric cipher encryption
generateSharedSecret func(privkey []byte, x, y *big.Int) []byte
encrypt func(x, y *big.Int, in []byte) ([]byte, error)
decrypt func(privkey []byte, in []byte) ([]byte, error)
}
// Boilerplate exported functions to make the struct interact with the interface.
// Constants
func (s secSchnorrDSA) GetP() *big.Int {
return s.getP()
}
func (s secSchnorrDSA) GetN() *big.Int {
return s.getN()
}
// EC Math
func (s secSchnorrDSA) Add(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int) {
return s.add(x1, y1, x2, y2)
}
func (s secSchnorrDSA) IsOnCurve(x, y *big.Int) bool {
return s.isOnCurve(x, y)
}
func (s secSchnorrDSA) ScalarMult(x, y *big.Int, k []byte) (*big.Int, *big.Int) {
return s.scalarMult(x, y, k)
}
func (s secSchnorrDSA) ScalarBaseMult(k []byte) (*big.Int, *big.Int) {
return s.scalarBaseMult(k)
}
// Private keys
func (s secSchnorrDSA) NewPrivateKey(d *big.Int) PrivateKey {
return s.newPrivateKey(d)
}
func (s secSchnorrDSA) PrivKeyFromBytes(pk []byte) (PrivateKey, PublicKey) {
return s.privKeyFromBytes(pk)
}
func (s secSchnorrDSA) PrivKeyFromScalar(pk []byte) (PrivateKey, PublicKey) {
return s.privKeyFromScalar(pk)
}
func (s secSchnorrDSA) PrivKeyBytesLen() int {
return s.privKeyBytesLen()
}
// Public keys
func (s secSchnorrDSA) NewPublicKey(x *big.Int, y *big.Int) PublicKey {
return s.newPublicKey(x, y)
}
func (s secSchnorrDSA) ParsePubKey(pubKeyStr []byte) (PublicKey, error) {
return s.parsePubKey(pubKeyStr)
}
func (s secSchnorrDSA) PubKeyBytesLen() int {
return s.pubKeyBytesLen()
}
func (s secSchnorrDSA) PubKeyBytesLenUncompressed() int {
return s.pubKeyBytesLenUncompressed()
}
func (s secSchnorrDSA) PubKeyBytesLenCompressed() int {
return s.pubKeyBytesLenCompressed()
}
func (s secSchnorrDSA) PubKeyBytesLenHybrid() int {
return s.pubKeyBytesLenHybrid()
}
// Signatures
func (sp secSchnorrDSA) NewSignature(r, s *big.Int) Signature {
return sp.newSignature(r, s)
}
func (s secSchnorrDSA) ParseDERSignature(sigStr []byte) (Signature, error) {
return s.parseDERSignature(sigStr)
}
func (s secSchnorrDSA) ParseSignature(sigStr []byte) (Signature, error) {
return s.parseSignature(sigStr)
}
func (s secSchnorrDSA) RecoverCompact(signature, hash []byte) (PublicKey, bool,
error) {
return s.recoverCompact(signature, hash)
}
// ECDSA
func (s secSchnorrDSA) GenerateKey(rand io.Reader) ([]byte, *big.Int, *big.Int,
error) {
return s.generateKey(rand)
}
func (sp secSchnorrDSA) Sign(priv PrivateKey, hash []byte) (r, s *big.Int,
err error) {
r, s, err = sp.sign(priv, hash)
return
}
func (sp secSchnorrDSA) Verify(pub PublicKey, hash []byte, r, s *big.Int) bool {
return sp.verify(pub, hash, r, s)
}
// Symmetric cipher encryption
func (s secSchnorrDSA) GenerateSharedSecret(privkey []byte, x, y *big.Int) []byte {
return s.generateSharedSecret(privkey, x, y)
}
func (s secSchnorrDSA) Encrypt(x, y *big.Int, in []byte) ([]byte,
error) {
return s.encrypt(x, y, in)
}
func (s secSchnorrDSA) Decrypt(privkey []byte, in []byte) ([]byte,
error) {
return s.decrypt(privkey, in)
}
// newSecSchnorrDSA instantiates a function-based DSA subsystem over the
// secp256k1 curve. A caveat for the functions below is that they are all
// routed through interfaces, so nil returns from the underlying library
// must ALWAYS be checked by comparing the returned interface value
// against nil before it is dereferenced.
func newSecSchnorrDSA() DSA {
var secp DSA = &secSchnorrDSA{
// Constants
getP: func() *big.Int {
return secp256k1Curve.P
},
getN: func() *big.Int {
return secp256k1Curve.N
},
// EC Math
add: func(x1, y1, x2, y2 *big.Int) (*big.Int, *big.Int) {
return secp256k1Curve.Add(x1, y1, x2, y2)
},
isOnCurve: func(x, y *big.Int) bool {
return secp256k1Curve.IsOnCurve(x, y)
},
scalarMult: func(x, y *big.Int, k []byte) (*big.Int, *big.Int) {
return secp256k1Curve.ScalarMult(x, y, k)
},
scalarBaseMult: func(k []byte) (*big.Int, *big.Int) {
return secp256k1Curve.ScalarBaseMult(k)
},
// Private keys
newPrivateKey: func(d *big.Int) PrivateKey {
pk := secp256k1.NewPrivateKey(secp256k1Curve, d)
if pk != nil {
return PrivateKey(pk)
}
return nil
},
privKeyFromBytes: func(pk []byte) (PrivateKey, PublicKey) {
priv, pub := secp256k1.PrivKeyFromBytes(secp256k1Curve, pk)
if priv == nil {
return nil, nil
}
if pub == nil {
return nil, nil
}
tpriv := PrivateKey(priv)
tpub := PublicKey(pub)
return tpriv, tpub
},
privKeyFromScalar: func(pk []byte) (PrivateKey, PublicKey) {
priv, pub := secp256k1.PrivKeyFromScalar(secp256k1Curve, pk)
if priv == nil {
return nil, nil
}
if pub == nil {
return nil, nil
}
tpriv := PrivateKey(priv)
tpub := PublicKey(pub)
return tpriv, tpub
},
privKeyBytesLen: func() int {
return secp256k1.PrivKeyBytesLen
},
		// Public keys
		// Note that Schnorr only allows 33-byte (compressed) public
		// keys; however, since they are secp256k1 keys, the other
		// serialization types are still accessible.
newPublicKey: func(x *big.Int, y *big.Int) PublicKey {
pk := secp256k1.NewPublicKey(secp256k1Curve, x, y)
tpk := PublicKey(pk)
return tpk
},
parsePubKey: func(pubKeyStr []byte) (PublicKey, error) {
pk, err := schnorr.ParsePubKey(secp256k1Curve, pubKeyStr)
if err != nil {
return nil, err
}
tpk := PublicKey(pk)
return tpk, err
},
pubKeyBytesLen: func() int {
return schnorr.PubKeyBytesLen
},
pubKeyBytesLenUncompressed: func() int {
return schnorr.PubKeyBytesLen
},
pubKeyBytesLenCompressed: func() int {
return schnorr.PubKeyBytesLen
},
pubKeyBytesLenHybrid: func() int {
return schnorr.PubKeyBytesLen
},
// Signatures
newSignature: func(r *big.Int, s *big.Int) Signature {
sig := schnorr.NewSignature(r, s)
ts := Signature(sig)
return ts
},
parseDERSignature: func(sigStr []byte) (Signature, error) {
sig, err := schnorr.ParseSignature(sigStr)
ts := Signature(sig)
return ts, err
},
parseSignature: func(sigStr []byte) (Signature, error) {
sig, err := schnorr.ParseSignature(sigStr)
ts := Signature(sig)
return ts, err
},
recoverCompact: func(signature, hash []byte) (PublicKey, bool, error) {
pk, bl, err := schnorr.RecoverPubkey(secp256k1Curve, signature,
hash)
tpk := PublicKey(pk)
return tpk, bl, err
},
// ECDSA
generateKey: func(rand io.Reader) ([]byte, *big.Int, *big.Int, error) {
return secp256k1.GenerateKey(secp256k1Curve, rand)
},
sign: func(priv PrivateKey, hash []byte) (r, s *big.Int, err error) {
spriv := secp256k1.NewPrivateKey(secp256k1Curve, priv.GetD())
return schnorr.Sign(secp256k1Curve, spriv, hash)
},
verify: func(pub PublicKey, hash []byte, r, s *big.Int) bool {
spub := secp256k1.NewPublicKey(secp256k1Curve, pub.GetX(), pub.GetY())
return schnorr.Verify(secp256k1Curve, spub, hash, r, s)
},
// Symmetric cipher encryption
generateSharedSecret: func(privkey []byte, x, y *big.Int) []byte {
sprivkey, _ := secp256k1.PrivKeyFromBytes(secp256k1Curve, privkey)
if sprivkey == nil {
return nil
}
spubkey := secp256k1.NewPublicKey(secp256k1Curve, x, y)
return secp256k1.GenerateSharedSecret(sprivkey, spubkey)
},
encrypt: func(x, y *big.Int, in []byte) ([]byte, error) {
spubkey := secp256k1.NewPublicKey(secp256k1Curve, x, y)
return secp256k1.Encrypt(spubkey, in)
},
decrypt: func(privkey []byte, in []byte) ([]byte, error) {
sprivkey, _ := secp256k1.PrivKeyFromBytes(secp256k1Curve, privkey)
if sprivkey == nil {
return nil, fmt.Errorf("failure deserializing privkey")
}
return secp256k1.Decrypt(sprivkey, in)
},
}
	return secp
}
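The nil-checking caveat above exists because of a classic Go pitfall: an interface value holding a typed nil pointer does not compare equal to nil. This is why constructors such as `newPrivateKey` check the concrete value before wrapping it. A minimal, self-contained sketch (the `newKeyNaive`/`newKeyChecked` names are hypothetical stand-ins, not part of the library):

```go
package main

import "fmt"

// PrivateKey is a minimal stand-in for the interface used above.
type PrivateKey interface{ GetD() int }

type concreteKey struct{ d int }

func (c *concreteKey) GetD() int { return c.d }

// newKeyNaive wraps a possibly-nil concrete pointer directly in the
// interface. When pk is nil, the returned interface is still non-nil
// because it carries the concrete type information.
func newKeyNaive(ok bool) PrivateKey {
	var pk *concreteKey
	if ok {
		pk = &concreteKey{d: 1}
	}
	return pk
}

// newKeyChecked mirrors the library pattern above: check the concrete
// value first and return an untyped nil on failure.
func newKeyChecked(ok bool) PrivateKey {
	var pk *concreteKey
	if ok {
		pk = &concreteKey{d: 1}
	}
	if pk != nil {
		return pk
	}
	return nil
}

func main() {
	fmt.Println(newKeyNaive(false) == nil)   // false: typed nil hides in the interface
	fmt.Println(newKeyChecked(false) == nil) // true: untyped nil compares as expected
}
```

Callers of the routed functions should therefore always compare the returned interface against nil before use, as the comment above demands.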

View File

@ -0,0 +1,11 @@
chainhash
=========
chainhash is a wrapper around the hash function used by Decred. It
is designed to isolate the code that needs to differ between btcd and
dcrd.
## Installation and updating
```bash
$ go get -u github.com/decred/dcrd/chaincfg/chainhash
```

View File

@ -0,0 +1,6 @@
// Package chainhash defines the hash functions used.
//
// This package provides a wrapper around the hash function used. This is
// designed to isolate the code that needs to be changed to support coins
// with different hash functions (e.g., Bitcoin vs. Decred).
package chainhash

View File

@ -1,31 +1,32 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package wire
package chainhash
import (
"encoding/hex"
"fmt"
)
// Size of array used to store sha hashes. See ShaHash.
// Size of array used to store sha hashes. See Hash.
const HashSize = 32
// MaxHashStringSize is the maximum length of a ShaHash hash string.
// MaxHashStringSize is the maximum length of a Hash hash string.
const MaxHashStringSize = HashSize * 2
// ErrHashStrSize describes an error that indicates the caller specified a hash
// string that has too many characters.
var ErrHashStrSize = fmt.Errorf("max hash string length is %v bytes", MaxHashStringSize)
// ShaHash is used in several of the bitcoin messages and common structures. It
// Hash is used in several of the bitcoin messages and common structures. It
// typically represents the double sha256 of data.
type ShaHash [HashSize]byte
type Hash [HashSize]byte
// String returns the ShaHash as the hexadecimal string of the byte-reversed
// String returns the Hash as the hexadecimal string of the byte-reversed
// hash.
func (hash ShaHash) String() string {
func (hash Hash) String() string {
for i := 0; i < HashSize/2; i++ {
hash[i], hash[HashSize-1-i] = hash[HashSize-1-i], hash[i]
}
@ -37,7 +38,7 @@ func (hash ShaHash) String() string {
// NOTE: This makes a copy of the bytes and should have probably been named
// CloneBytes. It is generally cheaper to just slice the hash directly thereby
// reusing the same bytes rather than calling this method.
func (hash *ShaHash) Bytes() []byte {
func (hash *Hash) Bytes() []byte {
newHash := make([]byte, HashSize)
copy(newHash, hash[:])
@ -46,7 +47,7 @@ func (hash *ShaHash) Bytes() []byte {
// SetBytes sets the bytes which represent the hash. An error is returned if
// the number of bytes passed in is not HashSize.
func (hash *ShaHash) SetBytes(newHash []byte) error {
func (hash *Hash) SetBytes(newHash []byte) error {
nhlen := len(newHash)
if nhlen != HashSize {
return fmt.Errorf("invalid sha length of %v, want %v", nhlen,
@ -58,14 +59,20 @@ func (hash *ShaHash) SetBytes(newHash []byte) error {
}
// IsEqual returns true if target is the same as hash.
func (hash *ShaHash) IsEqual(target *ShaHash) bool {
func (hash *Hash) IsEqual(target *Hash) bool {
if hash == nil && target == nil {
return true
}
if hash == nil || target == nil {
return false
}
return *hash == *target
}
// NewShaHash returns a new ShaHash from a byte slice. An error is returned if
// NewHash returns a new Hash from a byte slice. An error is returned if
// the number of bytes passed in is not HashSize.
func NewShaHash(newHash []byte) (*ShaHash, error) {
var sh ShaHash
func NewHash(newHash []byte) (*Hash, error) {
var sh Hash
err := sh.SetBytes(newHash)
if err != nil {
return nil, err
@ -73,10 +80,10 @@ func NewShaHash(newHash []byte) (*ShaHash, error) {
return &sh, err
}
// NewShaHashFromStr creates a ShaHash from a hash string. The string should be
// NewHashFromStr creates a Hash from a hash string. The string should be
// the hexadecimal string of a byte-reversed hash, but any missing characters
// result in zero padding at the end of the ShaHash.
func NewShaHashFromStr(hash string) (*ShaHash, error) {
// result in zero padding at the end of the Hash.
func NewHashFromStr(hash string) (*Hash, error) {
// Return error if hash string is too long.
if len(hash) > MaxHashStringSize {
return nil, ErrHashStrSize
@ -94,10 +101,10 @@ func NewShaHashFromStr(hash string) (*ShaHash, error) {
}
// Un-reverse the decoded bytes, copying into the leading bytes of a
// ShaHash. There is no need to explicitly pad the result as any
// Hash. There is no need to explicitly pad the result as any
// missing (when len(buf) < HashSize) bytes from the decoded hex string
// will remain zeros at the end of the ShaHash.
var ret ShaHash
// will remain zeros at the end of the Hash.
var ret Hash
blen := len(buf)
mid := blen / 2
if blen%2 != 0 {
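The byte-reversal convention described above (hash strings are the hex of the bytes in reverse order, and `NewHashFromStr` undoes that reversal) can be demonstrated with a self-contained sketch using only the standard library; `reverseHash` is a hypothetical helper written for this illustration:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// HashSize matches the constant above.
const HashSize = 32

// reverseHash returns a byte-reversed copy of h, the same transform
// Hash.String applies before hex encoding and NewHashFromStr undoes
// after hex decoding.
func reverseHash(h [HashSize]byte) [HashSize]byte {
	for i := 0; i < HashSize/2; i++ {
		h[i], h[HashSize-1-i] = h[HashSize-1-i], h[i]
	}
	return h
}

func main() {
	var h [HashSize]byte
	h[0] = 0x6f // first stored byte
	rev := reverseHash(h)
	s := hex.EncodeToString(rev[:])
	fmt.Println(s[62:]) // the leading stored byte is displayed last: "6f"
}
```

This is why, for example, the little-endian genesis hash bytes in chaincfg appear reversed relative to the block hash strings seen in explorers.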

View File

@ -0,0 +1,49 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package chainhash
import (
"github.com/decred/blake256"
)
// HashFunc calculates the hash of the supplied bytes.
// TODO(jcv): Modify blake256 so it has the same interface as blake2
// and fastsha256 so these functions can look more like btcsuite's.
// Then try to get the change into the upstream blake256 repo.
func HashFunc(data []byte) [blake256.Size]byte {
	var outB [blake256.Size]byte
	a := blake256.New()
	a.Write(data)
	copy(outB[:], a.Sum(nil))
	return outB
}
// HashFuncB calculates hash(b) and returns the resulting bytes.
func HashFuncB(b []byte) []byte {
a := blake256.New()
a.Write(b)
out := a.Sum(nil)
return out
}
// HashFuncH calculates hash(b) and returns the resulting bytes as a Hash.
func HashFuncH(b []byte) Hash {
var outB [blake256.Size]byte
a := blake256.New()
a.Write(b)
out := a.Sum(nil)
for i, el := range out {
outB[i] = el
}
return Hash(outB)
}
// HashBlockSize is the block size in bytes of the hash algorithm used.
const HashBlockSize = blake256.BlockSize
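The wrapper functions above simply feed the input to BLAKE-256 and copy the digest into a fixed-size array. Since the verifier here cannot fetch `github.com/decred/blake256`, the sketch below reproduces the same wrapper shape with SHA-256 from the standard library standing in for BLAKE-256 (both produce 32-byte digests); `hashFuncH` is an illustrative analogue, not the package's actual function:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Size stands in for blake256.Size; both algorithms emit 32 bytes.
const Size = sha256.Size

// Hash mirrors the chainhash.Hash fixed-size array type.
type Hash [Size]byte

// hashFuncH mirrors HashFuncH above: hash the input once and return
// the digest as the array-backed Hash type.
func hashFuncH(b []byte) Hash {
	return Hash(sha256.Sum256(b))
}

func main() {
	h := hashFuncH([]byte("decred"))
	fmt.Printf("%x\n", h[:4])
}
```

The array-backed `Hash` type is what lets callers compare hashes with `==` and use them as map keys, which a `[]byte` digest would not allow.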

View File

@ -1,8 +1,8 @@
// Package chaincfg defines chain configuration parameters.
//
// In addition to the main Bitcoin network, which is intended for the transfer
// In addition to the main Decred network, which is intended for the transfer
// of monetary value, there also exist two currently active standard networks:
// regression test and testnet (version 3). These networks are incompatible
// regression test and testnet (version 0). These networks are incompatible
// with each other (each has its own genesis block) and software should
// handle errors where input intended for one network is used on an application
// instance running on a different network.
@ -10,7 +10,7 @@
// For library packages, chaincfg provides the ability to lookup chain
// parameters and encoding magics when passed a *Params. Older APIs not updated
// to the new convention of passing a *Params may lookup the parameters for a
// wire.BitcoinNet using ParamsForNet, but be aware that this usage is
// wire.DecredNet using ParamsForNet, but be aware that this usage is
// deprecated and will be removed from chaincfg in the future.
//
// For main packages, a (typically global) var may be assigned the address of
@ -25,11 +25,11 @@
// "fmt"
// "log"
//
// "github.com/btcsuite/btcutil"
// "github.com/btcsuite/btcd/chaincfg"
// "github.com/decred/dcrutil"
// "github.com/decred/dcrd/chaincfg"
// )
//
// var testnet = flag.Bool("testnet", false, "operate on the testnet Bitcoin network")
// var testnet = flag.Bool("testnet", false, "operate on the testnet Decred network")
//
// // By default (without -testnet), use mainnet.
// var chainParams = &chaincfg.MainNetParams
@ -39,23 +39,23 @@
//
// // Modify active network parameters if operating on testnet.
// if *testnet {
// chainParams = &chaincfg.TestNet3Params
// chainParams = &chaincfg.TestNetParams
// }
//
// // later...
//
// // Create and print new payment address, specific to the active network.
// pubKeyHash := make([]byte, 20)
// addr, err := btcutil.NewAddressPubKeyHash(pubKeyHash, chainParams)
// addr, err := dcrutil.NewAddressPubKeyHash(pubKeyHash, chainParams)
// if err != nil {
// log.Fatal(err)
// }
// fmt.Println(addr)
// }
//
// If an application does not use one of the three standard Bitcoin networks,
// If an application does not use one of the three standard Decred networks,
// a new Params struct may be created which defines the parameters for the
// non-standard network. As a general rule of thumb, all network parameters
// should be unique to the network, but parameter collisions can still occur
// (unfortunately, this is the case with regtest and testnet3 sharing magics).
// (unfortunately, this is the case with regtest and testnet sharing magics).
package chaincfg

View File

@ -1,4 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,165 +8,6 @@ package chaincfg
import (
"time"
"github.com/btcsuite/btcd/wire"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/wire"
)
// genesisCoinbaseTx is the coinbase transaction for the genesis blocks for
// the main network, regression test network, and test network (version 3).
var genesisCoinbaseTx = wire.MsgTx{
Version: 1,
TxIn: []*wire.TxIn{
{
PreviousOutPoint: wire.OutPoint{
Hash: wire.ShaHash{},
Index: 0xffffffff,
},
SignatureScript: []byte{
0x04, 0xff, 0xff, 0x00, 0x1d, 0x01, 0x04, 0x45, /* |.......E| */
0x54, 0x68, 0x65, 0x20, 0x54, 0x69, 0x6d, 0x65, /* |The Time| */
0x73, 0x20, 0x30, 0x33, 0x2f, 0x4a, 0x61, 0x6e, /* |s 03/Jan| */
0x2f, 0x32, 0x30, 0x30, 0x39, 0x20, 0x43, 0x68, /* |/2009 Ch| */
0x61, 0x6e, 0x63, 0x65, 0x6c, 0x6c, 0x6f, 0x72, /* |ancellor| */
0x20, 0x6f, 0x6e, 0x20, 0x62, 0x72, 0x69, 0x6e, /* | on brin| */
0x6b, 0x20, 0x6f, 0x66, 0x20, 0x73, 0x65, 0x63, /* |k of sec|*/
0x6f, 0x6e, 0x64, 0x20, 0x62, 0x61, 0x69, 0x6c, /* |ond bail| */
0x6f, 0x75, 0x74, 0x20, 0x66, 0x6f, 0x72, 0x20, /* |out for |*/
0x62, 0x61, 0x6e, 0x6b, 0x73, /* |banks| */
},
Sequence: 0xffffffff,
},
},
TxOut: []*wire.TxOut{
{
Value: 0x12a05f200,
PkScript: []byte{
0x41, 0x04, 0x67, 0x8a, 0xfd, 0xb0, 0xfe, 0x55, /* |A.g....U| */
0x48, 0x27, 0x19, 0x67, 0xf1, 0xa6, 0x71, 0x30, /* |H'.g..q0| */
0xb7, 0x10, 0x5c, 0xd6, 0xa8, 0x28, 0xe0, 0x39, /* |..\..(.9| */
0x09, 0xa6, 0x79, 0x62, 0xe0, 0xea, 0x1f, 0x61, /* |..yb...a| */
0xde, 0xb6, 0x49, 0xf6, 0xbc, 0x3f, 0x4c, 0xef, /* |..I..?L.| */
0x38, 0xc4, 0xf3, 0x55, 0x04, 0xe5, 0x1e, 0xc1, /* |8..U....| */
0x12, 0xde, 0x5c, 0x38, 0x4d, 0xf7, 0xba, 0x0b, /* |..\8M...| */
0x8d, 0x57, 0x8a, 0x4c, 0x70, 0x2b, 0x6b, 0xf1, /* |.W.Lp+k.| */
0x1d, 0x5f, 0xac, /* |._.| */
},
},
},
LockTime: 0,
}
// genesisHash is the hash of the first block in the block chain for the main
// network (genesis block).
var genesisHash = wire.ShaHash([wire.HashSize]byte{ // Make go vet happy.
0x6f, 0xe2, 0x8c, 0x0a, 0xb6, 0xf1, 0xb3, 0x72,
0xc1, 0xa6, 0xa2, 0x46, 0xae, 0x63, 0xf7, 0x4f,
0x93, 0x1e, 0x83, 0x65, 0xe1, 0x5a, 0x08, 0x9c,
0x68, 0xd6, 0x19, 0x00, 0x00, 0x00, 0x00, 0x00,
})
// genesisMerkleRoot is the hash of the first transaction in the genesis block
// for the main network.
var genesisMerkleRoot = wire.ShaHash([wire.HashSize]byte{ // Make go vet happy.
0x3b, 0xa3, 0xed, 0xfd, 0x7a, 0x7b, 0x12, 0xb2,
0x7a, 0xc7, 0x2c, 0x3e, 0x67, 0x76, 0x8f, 0x61,
0x7f, 0xc8, 0x1b, 0xc3, 0x88, 0x8a, 0x51, 0x32,
0x3a, 0x9f, 0xb8, 0xaa, 0x4b, 0x1e, 0x5e, 0x4a,
})
// genesisBlock defines the genesis block of the block chain which serves as the
// public transaction ledger for the main network.
var genesisBlock = wire.MsgBlock{
Header: wire.BlockHeader{
Version: 1,
PrevBlock: wire.ShaHash{}, // 0000000000000000000000000000000000000000000000000000000000000000
MerkleRoot: genesisMerkleRoot, // 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b
Timestamp: time.Unix(0x495fab29, 0), // 2009-01-03 18:15:05 +0000 UTC
Bits: 0x1d00ffff, // 486604799 [00000000ffff0000000000000000000000000000000000000000000000000000]
Nonce: 0x7c2bac1d, // 2083236893
},
Transactions: []*wire.MsgTx{&genesisCoinbaseTx},
}
// regTestGenesisHash is the hash of the first block in the block chain for the
// regression test network (genesis block).
var regTestGenesisHash = wire.ShaHash([wire.HashSize]byte{ // Make go vet happy.
0x06, 0x22, 0x6e, 0x46, 0x11, 0x1a, 0x0b, 0x59,
0xca, 0xaf, 0x12, 0x60, 0x43, 0xeb, 0x5b, 0xbf,
0x28, 0xc3, 0x4f, 0x3a, 0x5e, 0x33, 0x2a, 0x1f,
0xc7, 0xb2, 0xb7, 0x3c, 0xf1, 0x88, 0x91, 0x0f,
})
// regTestGenesisMerkleRoot is the hash of the first transaction in the genesis
// block for the regression test network. It is the same as the merkle root for
// the main network.
var regTestGenesisMerkleRoot = genesisMerkleRoot
// regTestGenesisBlock defines the genesis block of the block chain which serves
// as the public transaction ledger for the regression test network.
var regTestGenesisBlock = wire.MsgBlock{
Header: wire.BlockHeader{
Version: 1,
PrevBlock: wire.ShaHash{}, // 0000000000000000000000000000000000000000000000000000000000000000
MerkleRoot: regTestGenesisMerkleRoot, // 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b
Timestamp: time.Unix(1296688602, 0), // 2011-02-02 23:16:42 +0000 UTC
Bits: 0x207fffff, // 545259519 [7fffff0000000000000000000000000000000000000000000000000000000000]
Nonce: 2,
},
Transactions: []*wire.MsgTx{&genesisCoinbaseTx},
}
// testNet3GenesisHash is the hash of the first block in the block chain for the
// test network (version 3).
var testNet3GenesisHash = wire.ShaHash([wire.HashSize]byte{ // Make go vet happy.
0x43, 0x49, 0x7f, 0xd7, 0xf8, 0x26, 0x95, 0x71,
0x08, 0xf4, 0xa3, 0x0f, 0xd9, 0xce, 0xc3, 0xae,
0xba, 0x79, 0x97, 0x20, 0x84, 0xe9, 0x0e, 0xad,
0x01, 0xea, 0x33, 0x09, 0x00, 0x00, 0x00, 0x00,
})
// testNet3GenesisMerkleRoot is the hash of the first transaction in the genesis
// block for the test network (version 3). It is the same as the merkle root
// for the main network.
var testNet3GenesisMerkleRoot = genesisMerkleRoot
// testNet3GenesisBlock defines the genesis block of the block chain which
// serves as the public transaction ledger for the test network (version 3).
var testNet3GenesisBlock = wire.MsgBlock{
Header: wire.BlockHeader{
Version: 1,
PrevBlock: wire.ShaHash{}, // 0000000000000000000000000000000000000000000000000000000000000000
MerkleRoot: testNet3GenesisMerkleRoot, // 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b
Timestamp: time.Unix(1296688602, 0), // 2011-02-02 23:16:42 +0000 UTC
Bits: 0x1d00ffff, // 486604799 [00000000ffff0000000000000000000000000000000000000000000000000000]
Nonce: 0x18aea41a, // 414098458
},
Transactions: []*wire.MsgTx{&genesisCoinbaseTx},
}
// simNetGenesisHash is the hash of the first block in the block chain for the
// simulation test network.
var simNetGenesisHash = wire.ShaHash([wire.HashSize]byte{ // Make go vet happy.
0xf6, 0x7a, 0xd7, 0x69, 0x5d, 0x9b, 0x66, 0x2a,
0x72, 0xff, 0x3d, 0x8e, 0xdb, 0xbb, 0x2d, 0xe0,
0xbf, 0xa6, 0x7b, 0x13, 0x97, 0x4b, 0xb9, 0x91,
0x0d, 0x11, 0x6d, 0x5c, 0xbd, 0x86, 0x3e, 0x68,
})
// simNetGenesisMerkleRoot is the hash of the first transaction in the genesis
// block for the simulation test network. It is the same as the merkle root for
// the main network.
var simNetGenesisMerkleRoot = genesisMerkleRoot
// simNetGenesisBlock defines the genesis block of the block chain which serves
// as the public transaction ledger for the simulation test network.
var simNetGenesisBlock = wire.MsgBlock{
Header: wire.BlockHeader{
Version: 1,
PrevBlock: wire.ShaHash{}, // 0000000000000000000000000000000000000000000000000000000000000000
MerkleRoot: simNetGenesisMerkleRoot, // 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b
Timestamp: time.Unix(1401292357, 0), // 2014-05-28 15:52:37 +0000 UTC
Bits: 0x207fffff, // 545259519 [7fffff0000000000000000000000000000000000000000000000000000000000]
Nonce: 2,
},
Transactions: []*wire.MsgTx{&genesisCoinbaseTx},
}

View File

@ -1,4 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -6,279 +7,9 @@ package chaincfg_test
import (
"bytes"
"encoding/hex"
"testing"
"github.com/btcsuite/btcd/chaincfg"
"github.com/davecgh/go-spew/spew"
"github.com/decred/dcrd/chaincfg"
)
// TestGenesisBlock tests the genesis block of the main network for validity by
// checking the encoded bytes and hashes.
func TestGenesisBlock(t *testing.T) {
// Encode the genesis block to raw bytes.
var buf bytes.Buffer
err := chaincfg.MainNetParams.GenesisBlock.Serialize(&buf)
if err != nil {
t.Fatalf("TestGenesisBlock: %v", err)
}
// Ensure the encoded block matches the expected bytes.
if !bytes.Equal(buf.Bytes(), genesisBlockBytes) {
t.Fatalf("TestGenesisBlock: Genesis block does not appear valid - "+
"got %v, want %v", spew.Sdump(buf.Bytes()),
spew.Sdump(genesisBlockBytes))
}
// Check hash of the block against expected hash.
hash := chaincfg.MainNetParams.GenesisBlock.BlockSha()
if !chaincfg.MainNetParams.GenesisHash.IsEqual(&hash) {
t.Fatalf("TestGenesisBlock: Genesis block hash does not "+
"appear valid - got %v, want %v", spew.Sdump(hash),
spew.Sdump(chaincfg.MainNetParams.GenesisHash))
}
}
// TestRegTestGenesisBlock tests the genesis block of the regression test
// network for validity by checking the encoded bytes and hashes.
func TestRegTestGenesisBlock(t *testing.T) {
// Encode the genesis block to raw bytes.
var buf bytes.Buffer
err := chaincfg.RegressionNetParams.GenesisBlock.Serialize(&buf)
if err != nil {
t.Fatalf("TestRegTestGenesisBlock: %v", err)
}
// Ensure the encoded block matches the expected bytes.
if !bytes.Equal(buf.Bytes(), regTestGenesisBlockBytes) {
t.Fatalf("TestRegTestGenesisBlock: Genesis block does not "+
"appear valid - got %v, want %v",
spew.Sdump(buf.Bytes()),
spew.Sdump(regTestGenesisBlockBytes))
}
// Check hash of the block against expected hash.
hash := chaincfg.RegressionNetParams.GenesisBlock.BlockSha()
if !chaincfg.RegressionNetParams.GenesisHash.IsEqual(&hash) {
t.Fatalf("TestRegTestGenesisBlock: Genesis block hash does "+
"not appear valid - got %v, want %v", spew.Sdump(hash),
spew.Sdump(chaincfg.RegressionNetParams.GenesisHash))
}
}
// TestTestNet3GenesisBlock tests the genesis block of the test network (version
// 3) for validity by checking the encoded bytes and hashes.
func TestTestNet3GenesisBlock(t *testing.T) {
// Encode the genesis block to raw bytes.
var buf bytes.Buffer
err := chaincfg.TestNet3Params.GenesisBlock.Serialize(&buf)
if err != nil {
t.Fatalf("TestTestNet3GenesisBlock: %v", err)
}
// Ensure the encoded block matches the expected bytes.
if !bytes.Equal(buf.Bytes(), testNet3GenesisBlockBytes) {
t.Fatalf("TestTestNet3GenesisBlock: Genesis block does not "+
"appear valid - got %v, want %v",
spew.Sdump(buf.Bytes()),
spew.Sdump(testNet3GenesisBlockBytes))
}
// Check hash of the block against expected hash.
hash := chaincfg.TestNet3Params.GenesisBlock.BlockSha()
if !chaincfg.TestNet3Params.GenesisHash.IsEqual(&hash) {
t.Fatalf("TestTestNet3GenesisBlock: Genesis block hash does "+
"not appear valid - got %v, want %v", spew.Sdump(hash),
spew.Sdump(chaincfg.TestNet3Params.GenesisHash))
}
}
// TestSimNetGenesisBlock tests the genesis block of the simulation test network
// for validity by checking the encoded bytes and hashes.
func TestSimNetGenesisBlock(t *testing.T) {
// Encode the genesis block to raw bytes.
var buf bytes.Buffer
err := chaincfg.SimNetParams.GenesisBlock.Serialize(&buf)
if err != nil {
t.Fatalf("TestSimNetGenesisBlock: %v", err)
}
// Ensure the encoded block matches the expected bytes.
if !bytes.Equal(buf.Bytes(), simNetGenesisBlockBytes) {
t.Fatalf("TestSimNetGenesisBlock: Genesis block does not "+
"appear valid - got %v, want %v",
spew.Sdump(buf.Bytes()),
spew.Sdump(simNetGenesisBlockBytes))
}
// Check hash of the block against expected hash.
hash := chaincfg.SimNetParams.GenesisBlock.BlockSha()
if !chaincfg.SimNetParams.GenesisHash.IsEqual(&hash) {
t.Fatalf("TestSimNetGenesisBlock: Genesis block hash does "+
"not appear valid - got %v, want %v", spew.Sdump(hash),
spew.Sdump(chaincfg.SimNetParams.GenesisHash))
}
}
// genesisBlockBytes are the wire encoded bytes for the genesis block of the
// main network as of protocol version 60002.
var genesisBlockBytes = []byte{
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x3b, 0xa3, 0xed, 0xfd, /* |....;...| */
0x7a, 0x7b, 0x12, 0xb2, 0x7a, 0xc7, 0x2c, 0x3e, /* |z{..z.,>| */
0x67, 0x76, 0x8f, 0x61, 0x7f, 0xc8, 0x1b, 0xc3, /* |gv.a....| */
0x88, 0x8a, 0x51, 0x32, 0x3a, 0x9f, 0xb8, 0xaa, /* |..Q2:...| */
0x4b, 0x1e, 0x5e, 0x4a, 0x29, 0xab, 0x5f, 0x49, /* |K.^J)._I| */
0xff, 0xff, 0x00, 0x1d, 0x1d, 0xac, 0x2b, 0x7c, /* |......+|| */
0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, /* |........| */
0xff, 0xff, 0x4d, 0x04, 0xff, 0xff, 0x00, 0x1d, /* |..M.....| */
0x01, 0x04, 0x45, 0x54, 0x68, 0x65, 0x20, 0x54, /* |..EThe T| */
0x69, 0x6d, 0x65, 0x73, 0x20, 0x30, 0x33, 0x2f, /* |imes 03/| */
0x4a, 0x61, 0x6e, 0x2f, 0x32, 0x30, 0x30, 0x39, /* |Jan/2009| */
0x20, 0x43, 0x68, 0x61, 0x6e, 0x63, 0x65, 0x6c, /* | Chancel| */
0x6c, 0x6f, 0x72, 0x20, 0x6f, 0x6e, 0x20, 0x62, /* |lor on b| */
0x72, 0x69, 0x6e, 0x6b, 0x20, 0x6f, 0x66, 0x20, /* |rink of | */
0x73, 0x65, 0x63, 0x6f, 0x6e, 0x64, 0x20, 0x62, /* |second b| */
0x61, 0x69, 0x6c, 0x6f, 0x75, 0x74, 0x20, 0x66, /* |ailout f| */
0x6f, 0x72, 0x20, 0x62, 0x61, 0x6e, 0x6b, 0x73, /* |or banks| */
0xff, 0xff, 0xff, 0xff, 0x01, 0x00, 0xf2, 0x05, /* |........| */
0x2a, 0x01, 0x00, 0x00, 0x00, 0x43, 0x41, 0x04, /* |*....CA.| */
0x67, 0x8a, 0xfd, 0xb0, 0xfe, 0x55, 0x48, 0x27, /* |g....UH'| */
0x19, 0x67, 0xf1, 0xa6, 0x71, 0x30, 0xb7, 0x10, /* |.g..q0..| */
0x5c, 0xd6, 0xa8, 0x28, 0xe0, 0x39, 0x09, 0xa6, /* |\..(.9..| */
0x79, 0x62, 0xe0, 0xea, 0x1f, 0x61, 0xde, 0xb6, /* |yb...a..| */
0x49, 0xf6, 0xbc, 0x3f, 0x4c, 0xef, 0x38, 0xc4, /* |I..?L.8.| */
0xf3, 0x55, 0x04, 0xe5, 0x1e, 0xc1, 0x12, 0xde, /* |.U......| */
0x5c, 0x38, 0x4d, 0xf7, 0xba, 0x0b, 0x8d, 0x57, /* |\8M....W| */
0x8a, 0x4c, 0x70, 0x2b, 0x6b, 0xf1, 0x1d, 0x5f, /* |.Lp+k.._|*/
0xac, 0x00, 0x00, 0x00, 0x00, /* |.....| */
}
// regTestGenesisBlockBytes are the wire encoded bytes for the genesis block of
// the regression test network as of protocol version 60002.
var regTestGenesisBlockBytes = []byte{
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x3b, 0xa3, 0xed, 0xfd, /* |....;...| */
0x7a, 0x7b, 0x12, 0xb2, 0x7a, 0xc7, 0x2c, 0x3e, /* |z{..z.,>| */
0x67, 0x76, 0x8f, 0x61, 0x7f, 0xc8, 0x1b, 0xc3, /* |gv.a....| */
0x88, 0x8a, 0x51, 0x32, 0x3a, 0x9f, 0xb8, 0xaa, /* |..Q2:...| */
0x4b, 0x1e, 0x5e, 0x4a, 0xda, 0xe5, 0x49, 0x4d, /* |K.^J)._I| */
0xff, 0xff, 0x7f, 0x20, 0x02, 0x00, 0x00, 0x00, /* |......+|| */
0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, /* |........| */
0xff, 0xff, 0x4d, 0x04, 0xff, 0xff, 0x00, 0x1d, /* |..M.....| */
0x01, 0x04, 0x45, 0x54, 0x68, 0x65, 0x20, 0x54, /* |..EThe T| */
0x69, 0x6d, 0x65, 0x73, 0x20, 0x30, 0x33, 0x2f, /* |imes 03/| */
0x4a, 0x61, 0x6e, 0x2f, 0x32, 0x30, 0x30, 0x39, /* |Jan/2009| */
0x20, 0x43, 0x68, 0x61, 0x6e, 0x63, 0x65, 0x6c, /* | Chancel| */
0x6c, 0x6f, 0x72, 0x20, 0x6f, 0x6e, 0x20, 0x62, /* |lor on b| */
0x72, 0x69, 0x6e, 0x6b, 0x20, 0x6f, 0x66, 0x20, /* |rink of | */
0x73, 0x65, 0x63, 0x6f, 0x6e, 0x64, 0x20, 0x62, /* |second b| */
0x61, 0x69, 0x6c, 0x6f, 0x75, 0x74, 0x20, 0x66, /* |ailout f| */
0x6f, 0x72, 0x20, 0x62, 0x61, 0x6e, 0x6b, 0x73, /* |or banks| */
0xff, 0xff, 0xff, 0xff, 0x01, 0x00, 0xf2, 0x05, /* |........| */
0x2a, 0x01, 0x00, 0x00, 0x00, 0x43, 0x41, 0x04, /* |*....CA.| */
0x67, 0x8a, 0xfd, 0xb0, 0xfe, 0x55, 0x48, 0x27, /* |g....UH'| */
0x19, 0x67, 0xf1, 0xa6, 0x71, 0x30, 0xb7, 0x10, /* |.g..q0..| */
0x5c, 0xd6, 0xa8, 0x28, 0xe0, 0x39, 0x09, 0xa6, /* |\..(.9..| */
0x79, 0x62, 0xe0, 0xea, 0x1f, 0x61, 0xde, 0xb6, /* |yb...a..| */
0x49, 0xf6, 0xbc, 0x3f, 0x4c, 0xef, 0x38, 0xc4, /* |I..?L.8.| */
0xf3, 0x55, 0x04, 0xe5, 0x1e, 0xc1, 0x12, 0xde, /* |.U......| */
0x5c, 0x38, 0x4d, 0xf7, 0xba, 0x0b, 0x8d, 0x57, /* |\8M....W| */
0x8a, 0x4c, 0x70, 0x2b, 0x6b, 0xf1, 0x1d, 0x5f, /* |.Lp+k.._|*/
0xac, 0x00, 0x00, 0x00, 0x00, /* |.....| */
}
// testNet3GenesisBlockBytes are the wire encoded bytes for the genesis block of
// the test network (version 3) as of protocol version 60002.
var testNet3GenesisBlockBytes = []byte{
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x3b, 0xa3, 0xed, 0xfd, /* |....;...| */
0x7a, 0x7b, 0x12, 0xb2, 0x7a, 0xc7, 0x2c, 0x3e, /* |z{..z.,>| */
0x67, 0x76, 0x8f, 0x61, 0x7f, 0xc8, 0x1b, 0xc3, /* |gv.a....| */
0x88, 0x8a, 0x51, 0x32, 0x3a, 0x9f, 0xb8, 0xaa, /* |..Q2:...| */
0x4b, 0x1e, 0x5e, 0x4a, 0xda, 0xe5, 0x49, 0x4d, /* |K.^J)._I| */
0xff, 0xff, 0x00, 0x1d, 0x1a, 0xa4, 0xae, 0x18, /* |......+|| */
0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, /* |........| */
0xff, 0xff, 0x4d, 0x04, 0xff, 0xff, 0x00, 0x1d, /* |..M.....| */
0x01, 0x04, 0x45, 0x54, 0x68, 0x65, 0x20, 0x54, /* |..EThe T| */
0x69, 0x6d, 0x65, 0x73, 0x20, 0x30, 0x33, 0x2f, /* |imes 03/| */
0x4a, 0x61, 0x6e, 0x2f, 0x32, 0x30, 0x30, 0x39, /* |Jan/2009| */
0x20, 0x43, 0x68, 0x61, 0x6e, 0x63, 0x65, 0x6c, /* | Chancel| */
0x6c, 0x6f, 0x72, 0x20, 0x6f, 0x6e, 0x20, 0x62, /* |lor on b| */
0x72, 0x69, 0x6e, 0x6b, 0x20, 0x6f, 0x66, 0x20, /* |rink of | */
0x73, 0x65, 0x63, 0x6f, 0x6e, 0x64, 0x20, 0x62, /* |second b| */
0x61, 0x69, 0x6c, 0x6f, 0x75, 0x74, 0x20, 0x66, /* |ailout f| */
0x6f, 0x72, 0x20, 0x62, 0x61, 0x6e, 0x6b, 0x73, /* |or banks| */
0xff, 0xff, 0xff, 0xff, 0x01, 0x00, 0xf2, 0x05, /* |........| */
0x2a, 0x01, 0x00, 0x00, 0x00, 0x43, 0x41, 0x04, /* |*....CA.| */
0x67, 0x8a, 0xfd, 0xb0, 0xfe, 0x55, 0x48, 0x27, /* |g....UH'| */
0x19, 0x67, 0xf1, 0xa6, 0x71, 0x30, 0xb7, 0x10, /* |.g..q0..| */
0x5c, 0xd6, 0xa8, 0x28, 0xe0, 0x39, 0x09, 0xa6, /* |\..(.9..| */
0x79, 0x62, 0xe0, 0xea, 0x1f, 0x61, 0xde, 0xb6, /* |yb...a..| */
0x49, 0xf6, 0xbc, 0x3f, 0x4c, 0xef, 0x38, 0xc4, /* |I..?L.8.| */
0xf3, 0x55, 0x04, 0xe5, 0x1e, 0xc1, 0x12, 0xde, /* |.U......| */
0x5c, 0x38, 0x4d, 0xf7, 0xba, 0x0b, 0x8d, 0x57, /* |\8M....W| */
0x8a, 0x4c, 0x70, 0x2b, 0x6b, 0xf1, 0x1d, 0x5f, /* |.Lp+k.._| */
0xac, 0x00, 0x00, 0x00, 0x00, /* |.....| */
}
// simNetGenesisBlockBytes are the wire encoded bytes for the genesis block of
// the simulation test network as of protocol version 70002.
var simNetGenesisBlockBytes = []byte{
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x3b, 0xa3, 0xed, 0xfd, /* |....;...| */
0x7a, 0x7b, 0x12, 0xb2, 0x7a, 0xc7, 0x2c, 0x3e, /* |z{..z.,>| */
0x67, 0x76, 0x8f, 0x61, 0x7f, 0xc8, 0x1b, 0xc3, /* |gv.a....| */
0x88, 0x8a, 0x51, 0x32, 0x3a, 0x9f, 0xb8, 0xaa, /* |..Q2:...| */
0x4b, 0x1e, 0x5e, 0x4a, 0x45, 0x06, 0x86, 0x53, /* |K.^JE..S| */
0xff, 0xff, 0x7f, 0x20, 0x02, 0x00, 0x00, 0x00, /* |... ....| */
0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* |........| */
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, /* |........| */
0xff, 0xff, 0x4d, 0x04, 0xff, 0xff, 0x00, 0x1d, /* |..M.....| */
0x01, 0x04, 0x45, 0x54, 0x68, 0x65, 0x20, 0x54, /* |..EThe T| */
0x69, 0x6d, 0x65, 0x73, 0x20, 0x30, 0x33, 0x2f, /* |imes 03/| */
0x4a, 0x61, 0x6e, 0x2f, 0x32, 0x30, 0x30, 0x39, /* |Jan/2009| */
0x20, 0x43, 0x68, 0x61, 0x6e, 0x63, 0x65, 0x6c, /* | Chancel| */
0x6c, 0x6f, 0x72, 0x20, 0x6f, 0x6e, 0x20, 0x62, /* |lor on b| */
0x72, 0x69, 0x6e, 0x6b, 0x20, 0x6f, 0x66, 0x20, /* |rink of | */
0x73, 0x65, 0x63, 0x6f, 0x6e, 0x64, 0x20, 0x62, /* |second b| */
0x61, 0x69, 0x6c, 0x6f, 0x75, 0x74, 0x20, 0x66, /* |ailout f| */
0x6f, 0x72, 0x20, 0x62, 0x61, 0x6e, 0x6b, 0x73, /* |or banks| */
0xff, 0xff, 0xff, 0xff, 0x01, 0x00, 0xf2, 0x05, /* |........| */
0x2a, 0x01, 0x00, 0x00, 0x00, 0x43, 0x41, 0x04, /* |*....CA.| */
0x67, 0x8a, 0xfd, 0xb0, 0xfe, 0x55, 0x48, 0x27, /* |g....UH'| */
0x19, 0x67, 0xf1, 0xa6, 0x71, 0x30, 0xb7, 0x10, /* |.g..q0..| */
0x5c, 0xd6, 0xa8, 0x28, 0xe0, 0x39, 0x09, 0xa6, /* |\..(.9..| */
0x79, 0x62, 0xe0, 0xea, 0x1f, 0x61, 0xde, 0xb6, /* |yb...a..| */
0x49, 0xf6, 0xbc, 0x3f, 0x4c, 0xef, 0x38, 0xc4, /* |I..?L.8.| */
0xf3, 0x55, 0x04, 0xe5, 0x1e, 0xc1, 0x12, 0xde, /* |.U......| */
0x5c, 0x38, 0x4d, 0xf7, 0xba, 0x0b, 0x8d, 0x57, /* |\8M....W| */
0x8a, 0x4c, 0x70, 0x2b, 0x6b, 0xf1, 0x1d, 0x5f, /* |.Lp+k.._| */
0xac, 0x00, 0x00, 0x00, 0x00, /* |.....| */
}


@ -1,14 +1,17 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package chaincfg
import (
"github.com/decred/dcrd/chaincfg/chainhash"
"testing"
)
func TestInvalidShaStr(t *testing.T) {
defer func() {
if r := recover(); r == nil {
t.Errorf("Expected panic for invalid sha string, got nil")
}
}()
newShaHashFromStr("banana")
_, err := chainhash.NewHashFromStr("banana")
if err == nil {
t.Error("Invalid string should fail.")
}
}


@ -1,4 +1,5 @@
// Copyright (c) 2014 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -7,420 +8,8 @@ package chaincfg
import (
"errors"
"math/big"
"time"
"github.com/btcsuite/btcd/wire"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/wire"
)
// These variables are the chain proof-of-work limit parameters for each default
// network.
var (
// bigOne is 1 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigOne = big.NewInt(1)
// mainPowLimit is the highest proof of work value a Bitcoin block can
// have for the main network. It is the value 2^224 - 1.
mainPowLimit = new(big.Int).Sub(new(big.Int).Lsh(bigOne, 224), bigOne)
// regressionPowLimit is the highest proof of work value a Bitcoin block
// can have for the regression test network. It is the value 2^255 - 1.
regressionPowLimit = new(big.Int).Sub(new(big.Int).Lsh(bigOne, 255), bigOne)
// testNet3PowLimit is the highest proof of work value a Bitcoin block
// can have for the test network (version 3). It is the value
// 2^224 - 1.
testNet3PowLimit = new(big.Int).Sub(new(big.Int).Lsh(bigOne, 224), bigOne)
// simNetPowLimit is the highest proof of work value a Bitcoin block
// can have for the simulation test network. It is the value 2^255 - 1.
simNetPowLimit = new(big.Int).Sub(new(big.Int).Lsh(bigOne, 255), bigOne)
)
// Checkpoint identifies a known good point in the block chain. Using
// checkpoints allows a few optimizations for old blocks during initial download
// and also prevents forks from old blocks.
//
// Each checkpoint is selected based upon several factors. See the
// documentation for blockchain.IsCheckpointCandidate for details on the
// selection criteria.
type Checkpoint struct {
Height int64
Hash *wire.ShaHash
}
// Params defines a Bitcoin network by its parameters. These parameters may be
// used by Bitcoin applications to differentiate networks as well as addresses
// and keys for one network from those intended for use on another network.
type Params struct {
Name string
Net wire.BitcoinNet
DefaultPort string
// Chain parameters
GenesisBlock *wire.MsgBlock
GenesisHash *wire.ShaHash
PowLimit *big.Int
PowLimitBits uint32
SubsidyHalvingInterval int32
ResetMinDifficulty bool
GenerateSupported bool
// Checkpoints ordered from oldest to newest.
Checkpoints []Checkpoint
// Enforce current block version once network has
// upgraded. This is part of BIP0034.
BlockEnforceNumRequired uint64
// Reject previous block versions once network has
// upgraded. This is part of BIP0034.
BlockRejectNumRequired uint64
// The number of nodes to check. This is part of BIP0034.
BlockUpgradeNumToCheck uint64
// Mempool parameters
RelayNonStdTxs bool
// Address encoding magics
PubKeyHashAddrID byte // First byte of a P2PKH address
ScriptHashAddrID byte // First byte of a P2SH address
PrivateKeyID byte // First byte of a WIF private key
// BIP32 hierarchical deterministic extended key magics
HDPrivateKeyID [4]byte
HDPublicKeyID [4]byte
// BIP44 coin type used in the hierarchical deterministic path for
// address generation.
HDCoinType uint32
}
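The BlockEnforceNumRequired / BlockRejectNumRequired / BlockUpgradeNumToCheck fields above encode BIP0034-style supermajority rollout rules (e.g. enforce new block versions at 750 of the last 1000 blocks, reject old versions at 950). A minimal self-contained sketch of that counting logic; the helper name is hypothetical, since the real check lives elsewhere (in the blockchain package):

```go
package main

import "fmt"

// isSupermajority reports whether at least numRequired of the given
// recent block versions are >= minVersion. This mirrors how
// BlockEnforceNumRequired and BlockUpgradeNumToCheck are meant to be
// used together; the name is illustrative only.
func isSupermajority(minVersion int32, versions []int32, numRequired uint64) bool {
	var count uint64
	for _, v := range versions {
		if v >= minVersion {
			count++
		}
	}
	return count >= numRequired
}

func main() {
	// Suppose 800 of the last 1000 blocks are version 2.
	versions := make([]int32, 1000)
	for i := range versions {
		if i < 800 {
			versions[i] = 2
		} else {
			versions[i] = 1
		}
	}
	fmt.Println(isSupermajority(2, versions, 750)) // true: enforce new rules
	fmt.Println(isSupermajority(2, versions, 950)) // false: keep accepting v1
}
```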
// MainNetParams defines the network parameters for the main Bitcoin network.
var MainNetParams = Params{
Name: "mainnet",
Net: wire.MainNet,
DefaultPort: "8333",
// Chain parameters
GenesisBlock: &genesisBlock,
GenesisHash: &genesisHash,
PowLimit: mainPowLimit,
PowLimitBits: 0x1d00ffff,
SubsidyHalvingInterval: 210000,
ResetMinDifficulty: false,
GenerateSupported: false,
// Checkpoints ordered from oldest to newest.
Checkpoints: []Checkpoint{
{11111, newShaHashFromStr("0000000069e244f73d78e8fd29ba2fd2ed618bd6fa2ee92559f542fdb26e7c1d")},
{33333, newShaHashFromStr("000000002dd5588a74784eaa7ab0507a18ad16a236e7b1ce69f00d7ddfb5d0a6")},
{74000, newShaHashFromStr("0000000000573993a3c9e41ce34471c079dcf5f52a0e824a81e7f953b8661a20")},
{105000, newShaHashFromStr("00000000000291ce28027faea320c8d2b054b2e0fe44a773f3eefb151d6bdc97")},
{134444, newShaHashFromStr("00000000000005b12ffd4cd315cd34ffd4a594f430ac814c91184a0d42d2b0fe")},
{168000, newShaHashFromStr("000000000000099e61ea72015e79632f216fe6cb33d7899acb35b75c8303b763")},
{193000, newShaHashFromStr("000000000000059f452a5f7340de6682a977387c17010ff6e6c3bd83ca8b1317")},
{210000, newShaHashFromStr("000000000000048b95347e83192f69cf0366076336c639f9b7228e9ba171342e")},
{216116, newShaHashFromStr("00000000000001b4f4b433e81ee46494af945cf96014816a4e2370f11b23df4e")},
{225430, newShaHashFromStr("00000000000001c108384350f74090433e7fcf79a606b8e797f065b130575932")},
{250000, newShaHashFromStr("000000000000003887df1f29024b06fc2200b55f8af8f35453d7be294df2d214")},
{267300, newShaHashFromStr("000000000000000a83fbd660e918f218bf37edd92b748ad940483c7c116179ac")},
{279000, newShaHashFromStr("0000000000000001ae8c72a0b0c301f67e3afca10e819efa9041e458e9bd7e40")},
{300255, newShaHashFromStr("0000000000000000162804527c6e9b9f0563a280525f9d08c12041def0a0f3b2")},
{319400, newShaHashFromStr("000000000000000021c6052e9becade189495d1c539aa37c58917305fd15f13b")},
{343185, newShaHashFromStr("0000000000000000072b8bf361d01a6ba7d445dd024203fafc78768ed4368554")},
{352940, newShaHashFromStr("000000000000000010755df42dba556bb72be6a32f3ce0b6941ce4430152c9ff")},
},
// Enforce current block version once majority of the network has
// upgraded.
// 75% (750 / 1000)
// Reject previous block versions once a majority of the network has
// upgraded.
// 95% (950 / 1000)
BlockEnforceNumRequired: 750,
BlockRejectNumRequired: 950,
BlockUpgradeNumToCheck: 1000,
// Mempool parameters
RelayNonStdTxs: false,
// Address encoding magics
PubKeyHashAddrID: 0x00, // starts with 1
ScriptHashAddrID: 0x05, // starts with 3
PrivateKeyID: 0x80, // starts with 5 (uncompressed) or K (compressed)
// BIP32 hierarchical deterministic extended key magics
HDPrivateKeyID: [4]byte{0x04, 0x88, 0xad, 0xe4}, // starts with xprv
HDPublicKeyID: [4]byte{0x04, 0x88, 0xb2, 0x1e}, // starts with xpub
// BIP44 coin type used in the hierarchical deterministic path for
// address generation.
HDCoinType: 0,
}
// RegressionNetParams defines the network parameters for the regression test
// Bitcoin network. Not to be confused with the test Bitcoin network (version
// 3), this network is sometimes simply called "testnet".
var RegressionNetParams = Params{
Name: "regtest",
Net: wire.TestNet,
DefaultPort: "18444",
// Chain parameters
GenesisBlock: &regTestGenesisBlock,
GenesisHash: &regTestGenesisHash,
PowLimit: regressionPowLimit,
PowLimitBits: 0x207fffff,
SubsidyHalvingInterval: 150,
ResetMinDifficulty: true,
GenerateSupported: true,
// Checkpoints ordered from oldest to newest.
Checkpoints: nil,
// Enforce current block version once majority of the network has
// upgraded.
// 75% (750 / 1000)
// Reject previous block versions once a majority of the network has
// upgraded.
// 95% (950 / 1000)
BlockEnforceNumRequired: 750,
BlockRejectNumRequired: 950,
BlockUpgradeNumToCheck: 1000,
// Mempool parameters
RelayNonStdTxs: true,
// Address encoding magics
PubKeyHashAddrID: 0x6f, // starts with m or n
ScriptHashAddrID: 0xc4, // starts with 2
PrivateKeyID: 0xef, // starts with 9 (uncompressed) or c (compressed)
// BIP32 hierarchical deterministic extended key magics
HDPrivateKeyID: [4]byte{0x04, 0x35, 0x83, 0x94}, // starts with tprv
HDPublicKeyID: [4]byte{0x04, 0x35, 0x87, 0xcf}, // starts with tpub
// BIP44 coin type used in the hierarchical deterministic path for
// address generation.
HDCoinType: 1,
}
// TestNet3Params defines the network parameters for the test Bitcoin network
// (version 3). Not to be confused with the regression test network, this
// network is sometimes simply called "testnet".
var TestNet3Params = Params{
Name: "testnet3",
Net: wire.TestNet3,
DefaultPort: "18333",
// Chain parameters
GenesisBlock: &testNet3GenesisBlock,
GenesisHash: &testNet3GenesisHash,
PowLimit: testNet3PowLimit,
PowLimitBits: 0x1d00ffff,
SubsidyHalvingInterval: 210000,
ResetMinDifficulty: true,
GenerateSupported: false,
// Checkpoints ordered from oldest to newest.
Checkpoints: []Checkpoint{
{546, newShaHashFromStr("000000002a936ca763904c3c35fce2f3556c559c0214345d31b1bcebf76acb70")},
},
// Enforce current block version once majority of the network has
// upgraded.
// 51% (51 / 100)
// Reject previous block versions once a majority of the network has
// upgraded.
// 75% (75 / 100)
BlockEnforceNumRequired: 51,
BlockRejectNumRequired: 75,
BlockUpgradeNumToCheck: 100,
// Mempool parameters
RelayNonStdTxs: true,
// Address encoding magics
PubKeyHashAddrID: 0x6f, // starts with m or n
ScriptHashAddrID: 0xc4, // starts with 2
PrivateKeyID: 0xef, // starts with 9 (uncompressed) or c (compressed)
// BIP32 hierarchical deterministic extended key magics
HDPrivateKeyID: [4]byte{0x04, 0x35, 0x83, 0x94}, // starts with tprv
HDPublicKeyID: [4]byte{0x04, 0x35, 0x87, 0xcf}, // starts with tpub
// BIP44 coin type used in the hierarchical deterministic path for
// address generation.
HDCoinType: 1,
}
// SimNetParams defines the network parameters for the simulation test Bitcoin
// network. This network is similar to the normal test network except it is
// intended for private use within a group of individuals doing simulation
// testing. The functionality is intended to differ in that only nodes
// which are specifically provided are used to create the network rather than
// following normal discovery rules. This is important as otherwise it would
// just turn into another public testnet.
var SimNetParams = Params{
Name: "simnet",
Net: wire.SimNet,
DefaultPort: "18555",
// Chain parameters
GenesisBlock: &simNetGenesisBlock,
GenesisHash: &simNetGenesisHash,
PowLimit: simNetPowLimit,
PowLimitBits: 0x207fffff,
SubsidyHalvingInterval: 210000,
ResetMinDifficulty: true,
GenerateSupported: true,
// Checkpoints ordered from oldest to newest.
Checkpoints: nil,
// Enforce current block version once majority of the network has
// upgraded.
// 51% (51 / 100)
// Reject previous block versions once a majority of the network has
// upgraded.
// 75% (75 / 100)
BlockEnforceNumRequired: 51,
BlockRejectNumRequired: 75,
BlockUpgradeNumToCheck: 100,
// Mempool parameters
RelayNonStdTxs: true,
// Address encoding magics
PubKeyHashAddrID: 0x3f, // starts with S
ScriptHashAddrID: 0x7b, // starts with s
PrivateKeyID: 0x64, // starts with 4 (uncompressed) or F (compressed)
// BIP32 hierarchical deterministic extended key magics
HDPrivateKeyID: [4]byte{0x04, 0x20, 0xb9, 0x00}, // starts with sprv
HDPublicKeyID: [4]byte{0x04, 0x20, 0xbd, 0x3a}, // starts with spub
// BIP44 coin type used in the hierarchical deterministic path for
// address generation.
HDCoinType: 115, // ASCII for s
}
var (
// ErrDuplicateNet describes an error where the parameters for a Bitcoin
// network could not be set due to the network already being a standard
// network or previously-registered into this package.
ErrDuplicateNet = errors.New("duplicate Bitcoin network")
// ErrUnknownHDKeyID describes an error where the provided id which
// is intended to identify the network for a hierarchical deterministic
// private extended key is not registered.
ErrUnknownHDKeyID = errors.New("unknown hd private extended key bytes")
)
var (
registeredNets = map[wire.BitcoinNet]struct{}{
MainNetParams.Net: struct{}{},
TestNet3Params.Net: struct{}{},
RegressionNetParams.Net: struct{}{},
SimNetParams.Net: struct{}{},
}
pubKeyHashAddrIDs = map[byte]struct{}{
MainNetParams.PubKeyHashAddrID: struct{}{},
TestNet3Params.PubKeyHashAddrID: struct{}{}, // shared with regtest
SimNetParams.PubKeyHashAddrID: struct{}{},
}
scriptHashAddrIDs = map[byte]struct{}{
MainNetParams.ScriptHashAddrID: struct{}{},
TestNet3Params.ScriptHashAddrID: struct{}{}, // shared with regtest
SimNetParams.ScriptHashAddrID: struct{}{},
}
// Testnet is shared with regtest.
hdPrivToPubKeyIDs = map[[4]byte][]byte{
MainNetParams.HDPrivateKeyID: MainNetParams.HDPublicKeyID[:],
TestNet3Params.HDPrivateKeyID: TestNet3Params.HDPublicKeyID[:],
SimNetParams.HDPrivateKeyID: SimNetParams.HDPublicKeyID[:],
}
)
// Register registers the network parameters for a Bitcoin network. This may
// error with ErrDuplicateNet if the network is already registered (either
// due to a previous Register call, or the network being one of the default
// networks).
//
// Network parameters should be registered into this package by a main package
// as early as possible. Then, library packages may lookup networks or network
// parameters based on inputs and work regardless of the network being standard
// or not.
func Register(params *Params) error {
if _, ok := registeredNets[params.Net]; ok {
return ErrDuplicateNet
}
registeredNets[params.Net] = struct{}{}
pubKeyHashAddrIDs[params.PubKeyHashAddrID] = struct{}{}
scriptHashAddrIDs[params.ScriptHashAddrID] = struct{}{}
hdPrivToPubKeyIDs[params.HDPrivateKeyID] = params.HDPublicKeyID[:]
return nil
}
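Register's duplicate detection is a set-membership check on the network magic. A pared-down, self-contained model of the pattern (names here are illustrative, not the package's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// netMagic stands in for wire.BitcoinNet.
type netMagic uint32

var errDuplicateNet = errors.New("duplicate network")

// registeredNets is pre-seeded with one "default" network, the way the
// package seeds its map with the standard networks. The magic value is
// made up for this sketch.
var registeredNets = map[netMagic]struct{}{
	0xd9b4bef9: {},
}

// register mirrors the Register function above: reject known magics,
// otherwise record the new network.
func register(net netMagic) error {
	if _, ok := registeredNets[net]; ok {
		return errDuplicateNet
	}
	registeredNets[net] = struct{}{}
	return nil
}

func main() {
	fmt.Println(register(0x12141c16))        // <nil>: new network accepted
	fmt.Println(register(0xd9b4bef9) != nil) // true: duplicate rejected
}
```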
// IsPubKeyHashAddrID returns whether the id is an identifier known to prefix a
// pay-to-pubkey-hash address on any default or registered network. This is
// used when decoding an address string into a specific address type. It is up
// to the caller to check both this and IsScriptHashAddrID and decide whether an
// address is a pubkey hash address, script hash address, neither, or
// undeterminable (if both return true).
func IsPubKeyHashAddrID(id byte) bool {
_, ok := pubKeyHashAddrIDs[id]
return ok
}
// IsScriptHashAddrID returns whether the id is an identifier known to prefix a
// pay-to-script-hash address on any default or registered network. This is
// used when decoding an address string into a specific address type. It is up
// to the caller to check both this and IsPubKeyHashAddrID and decide whether an
// address is a pubkey hash address, script hash address, neither, or
// undeterminable (if both return true).
func IsScriptHashAddrID(id byte) bool {
_, ok := scriptHashAddrIDs[id]
return ok
}
// HDPrivateKeyToPublicKeyID accepts a private hierarchical deterministic
// extended key id and returns the associated public key id. When the provided
// id is not registered, the ErrUnknownHDKeyID error will be returned.
func HDPrivateKeyToPublicKeyID(id []byte) ([]byte, error) {
if len(id) != 4 {
return nil, ErrUnknownHDKeyID
}
var key [4]byte
copy(key[:], id)
pubBytes, ok := hdPrivToPubKeyIDs[key]
if !ok {
return nil, ErrUnknownHDKeyID
}
return pubBytes, nil
}
// newShaHashFromStr converts the passed big-endian hex string into a
// wire.ShaHash. It only differs from the one available in wire in that
// it panics on an error since it will only (and must only) be called with
// hard-coded, and therefore known good, hashes.
func newShaHashFromStr(hexStr string) *wire.ShaHash {
sha, err := wire.NewShaHashFromStr(hexStr)
if err != nil {
// Ordinarily I don't like panics in library code since they
// can take applications down without giving them a chance to
// recover, which is extremely annoying. However, an exception is
// being made in this case because the only way this can panic
// is if there is an error in the hard-coded hashes. Thus it
// will only ever potentially panic on init and therefore is
// 100% predictable.
panic(err)
}
return sha
}
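The panic-only-on-init contract described above can be exercised the same way the test earlier in this commit does. A self-contained sketch of the pattern, using a hex decoder as a stand-in parser for wire.NewShaHashFromStr:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// mustHashFromStr mirrors newShaHashFromStr's contract: it is only ever
// called with hard-coded hex, so a bad input is a programmer error and
// panicking at package init time is acceptable and fully predictable.
func mustHashFromStr(hexStr string) []byte {
	b, err := hex.DecodeString(hexStr)
	if err != nil {
		panic(err)
	}
	return b
}

func main() {
	// Good hard-coded input: returns normally.
	h := mustHashFromStr("00ff")
	fmt.Println(len(h)) // 2

	// Bad input: the panic is observable via recover, as in the test.
	defer func() {
		fmt.Println(recover() != nil) // true
	}()
	mustHashFromStr("banana")
}
```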

chaincfg/premine.go (new file, 3172 lines)

File diff suppressed because it is too large.


@ -1,3 +1,7 @@
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package chaincfg_test
import (
@ -5,7 +9,7 @@ import (
"reflect"
"testing"
. "github.com/btcsuite/btcd/chaincfg"
. "github.com/decred/dcrd/chaincfg"
)
// Define some of the required parameters for a user-registered
@ -14,8 +18,8 @@ import (
var mockNetParams = Params{
Name: "mocknet",
Net: 1<<32 - 1,
PubKeyHashAddrID: 0x9f,
ScriptHashAddrID: 0xf9,
PubKeyHashAddrID: [2]byte{0x9f},
ScriptHashAddrID: [2]byte{0xf9},
HDPrivateKeyID: [4]byte{0x01, 0x02, 0x03, 0x04},
HDPublicKeyID: [4]byte{0x05, 0x06, 0x07, 0x08},
}
@ -27,7 +31,7 @@ func TestRegister(t *testing.T) {
err error
}
type magicTest struct {
magic byte
magic [2]byte
valid bool
}
type hdTest struct {
@ -52,13 +56,8 @@ func TestRegister(t *testing.T) {
err: ErrDuplicateNet,
},
{
name: "duplicate regtest",
params: &RegressionNetParams,
err: ErrDuplicateNet,
},
{
name: "duplicate testnet3",
params: &TestNet3Params,
name: "duplicate testnet",
params: &TestNetParams,
err: ErrDuplicateNet,
},
{
@ -73,11 +72,7 @@ func TestRegister(t *testing.T) {
valid: true,
},
{
magic: TestNet3Params.PubKeyHashAddrID,
valid: true,
},
{
magic: RegressionNetParams.PubKeyHashAddrID,
magic: TestNetParams.PubKeyHashAddrID,
valid: true,
},
{
@ -89,7 +84,7 @@ func TestRegister(t *testing.T) {
valid: false,
},
{
magic: 0xFF,
magic: [2]byte{0xFF},
valid: false,
},
},
@ -99,11 +94,7 @@ func TestRegister(t *testing.T) {
valid: true,
},
{
magic: TestNet3Params.ScriptHashAddrID,
valid: true,
},
{
magic: RegressionNetParams.ScriptHashAddrID,
magic: TestNetParams.ScriptHashAddrID,
valid: true,
},
{
@ -115,7 +106,7 @@ func TestRegister(t *testing.T) {
valid: false,
},
{
magic: 0xFF,
magic: [2]byte{0xFF},
valid: false,
},
},
@ -126,13 +117,8 @@ func TestRegister(t *testing.T) {
err: nil,
},
{
priv: TestNet3Params.HDPrivateKeyID[:],
want: TestNet3Params.HDPublicKeyID[:],
err: nil,
},
{
priv: RegressionNetParams.HDPrivateKeyID[:],
want: RegressionNetParams.HDPublicKeyID[:],
priv: TestNetParams.HDPrivateKeyID[:],
want: TestNetParams.HDPublicKeyID[:],
err: nil,
},
{
@ -169,11 +155,7 @@ func TestRegister(t *testing.T) {
valid: true,
},
{
magic: TestNet3Params.PubKeyHashAddrID,
valid: true,
},
{
magic: RegressionNetParams.PubKeyHashAddrID,
magic: TestNetParams.PubKeyHashAddrID,
valid: true,
},
{
@ -185,7 +167,7 @@ func TestRegister(t *testing.T) {
valid: true,
},
{
magic: 0xFF,
magic: [2]byte{0xFF},
valid: false,
},
},
@ -195,11 +177,7 @@ func TestRegister(t *testing.T) {
valid: true,
},
{
magic: TestNet3Params.ScriptHashAddrID,
valid: true,
},
{
magic: RegressionNetParams.ScriptHashAddrID,
magic: TestNetParams.ScriptHashAddrID,
valid: true,
},
{
@ -211,7 +189,7 @@ func TestRegister(t *testing.T) {
valid: true,
},
{
magic: 0xFF,
magic: [2]byte{0xFF},
valid: false,
},
},
@ -232,13 +210,8 @@ func TestRegister(t *testing.T) {
err: ErrDuplicateNet,
},
{
name: "duplicate regtest",
params: &RegressionNetParams,
err: ErrDuplicateNet,
},
{
name: "duplicate testnet3",
params: &TestNet3Params,
name: "duplicate testnet",
params: &TestNetParams,
err: ErrDuplicateNet,
},
{
@ -258,11 +231,7 @@ func TestRegister(t *testing.T) {
valid: true,
},
{
magic: TestNet3Params.PubKeyHashAddrID,
valid: true,
},
{
magic: RegressionNetParams.PubKeyHashAddrID,
magic: TestNetParams.PubKeyHashAddrID,
valid: true,
},
{
@ -274,7 +243,7 @@ func TestRegister(t *testing.T) {
valid: true,
},
{
magic: 0xFF,
magic: [2]byte{0xFF},
valid: false,
},
},
@ -284,11 +253,7 @@ func TestRegister(t *testing.T) {
valid: true,
},
{
magic: TestNet3Params.ScriptHashAddrID,
valid: true,
},
{
magic: RegressionNetParams.ScriptHashAddrID,
magic: TestNetParams.ScriptHashAddrID,
valid: true,
},
{
@ -300,7 +265,7 @@ func TestRegister(t *testing.T) {
valid: true,
},
{
magic: 0xFF,
magic: [2]byte{0xFF},
valid: false,
},
},
@ -311,13 +276,8 @@ func TestRegister(t *testing.T) {
err: nil,
},
{
priv: TestNet3Params.HDPrivateKeyID[:],
want: TestNet3Params.HDPublicKeyID[:],
err: nil,
},
{
priv: RegressionNetParams.HDPrivateKeyID[:],
want: RegressionNetParams.HDPublicKeyID[:],
priv: TestNetParams.HDPrivateKeyID[:],
want: TestNetParams.HDPublicKeyID[:],
err: nil,
},
{


@ -1,21 +1,23 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package main
import (
"container/heap"
"fmt"
"runtime"
"sync"
"sync/atomic"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/database"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
"github.com/decred/dcrd/txscript"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
"github.com/btcsuite/golangcrypto/ripemd160"
)
@ -38,25 +40,6 @@ const (
indexMaintain
)
// Limit the number of goroutines that concurrently
// build the index to catch up based on the number
// of processor cores. This helps ensure the system
// stays reasonably responsive under heavy load.
var numCatchUpWorkers = runtime.NumCPU() * 3
// indexBlockMsg packages a request to have the addresses of a block indexed.
type indexBlockMsg struct {
blk *btcutil.Block
done chan struct{}
}
// writeIndexReq represents a request to have a completed address index
// committed to the database.
type writeIndexReq struct {
blk *btcutil.Block
addrIndex database.BlockAddrIndex
}
// addrIndexer provides a concurrent service for indexing the transactions of
// target blocks based on the addresses involved in the transaction.
type addrIndexer struct {
@ -64,10 +47,6 @@ type addrIndexer struct {
started int32
shutdown int32
state indexState
quit chan struct{}
wg sync.WaitGroup
addrIndexJobs chan *indexBlockMsg
writeRequests chan *writeIndexReq
progressLogger *blockProgressLogger
currentIndexTip int64
chainTip int64
@ -96,10 +75,7 @@ func newAddrIndexer(s *server) (*addrIndexer, error) {
ai := &addrIndexer{
server: s,
quit: make(chan struct{}),
state: state,
addrIndexJobs: make(chan *indexBlockMsg),
writeRequests: make(chan *writeIndexReq, numCatchUpWorkers),
currentIndexTip: lastIndexedHeight,
chainTip: chainHeight,
progressLogger: newBlockProgressLogger("Indexed addresses of",
@ -115,9 +91,11 @@ func (a *addrIndexer) Start() {
return
}
adxrLog.Trace("Starting address indexer")
a.wg.Add(2)
go a.indexManager()
go a.indexWriter()
err := a.initialize()
if err != nil {
adxrLog.Errorf("Couldn't start address indexer: %v", err.Error())
return
}
}
// Stop gracefully shuts down the address indexer by stopping all ongoing
@ -129,8 +107,6 @@ func (a *addrIndexer) Stop() error {
return nil
}
adxrLog.Infof("Address indexer shutting down")
close(a.quit)
a.wg.Wait()
return nil
}
@ -142,351 +118,342 @@ func (a *addrIndexer) IsCaughtUp() bool {
return a.state == indexMaintain
}
// indexManager creates, and oversees worker index goroutines.
// indexManager is the main goroutine for the address indexer.
// It creates and oversees worker goroutines that index incoming blocks, with
// the exact behavior depending on the current index state
// (catch up vs. maintain). Completion of catch-up mode is always followed by
// a graceful transition into "maintain" mode.
// NOTE: Must be run as a goroutine.
func (a *addrIndexer) indexManager() {
// initialize starts the address indexer and fills the database up to the
// top height of the current database.
func (a *addrIndexer) initialize() error {
if a.state == indexCatchUp {
adxrLog.Infof("Building up address index from height %v to %v.",
a.currentIndexTip+1, a.chainTip)
// Quit semaphores to gracefully shut down our worker tasks.
runningWorkers := make([]chan struct{}, 0, numCatchUpWorkers)
shutdownWorkers := func() {
for _, quit := range runningWorkers {
close(quit)
}
}
criticalShutdown := func() {
shutdownWorkers()
a.server.Stop()
}
// Spin up all of our "catch up" worker goroutines, giving them
// a quit channel and WaitGroup so we can gracefully exit if
// needed.
var workerWg sync.WaitGroup
catchUpChan := make(chan *indexBlockMsg)
for i := 0; i < numCatchUpWorkers; i++ {
quit := make(chan struct{})
runningWorkers = append(runningWorkers, quit)
workerWg.Add(1)
go a.indexCatchUpWorker(catchUpChan, &workerWg, quit)
}
// Starting from the next block after our current index tip,
// feed our workers each successive block to index until we've
// caught up to the current highest block height.
lastBlockIdxHeight := a.currentIndexTip + 1
for lastBlockIdxHeight <= a.chainTip {
targetSha, err := a.server.db.FetchBlockShaByHeight(lastBlockIdxHeight)
if err != nil {
adxrLog.Errorf("Unable to look up the sha of the "+
"next target block (height %v): %v",
lastBlockIdxHeight, err)
criticalShutdown()
goto fin
}
targetBlock, err := a.server.db.FetchBlockBySha(targetSha)
if err != nil {
// Unable to locate a target block by sha, this
// is a critical error, we may have an
// inconsistency in the DB.
adxrLog.Errorf("Unable to look up the next "+
"target block (sha %v): %v", targetSha, err)
criticalShutdown()
goto fin
}
// Skip the genesis block.
if !(lastBlockIdxHeight == 0) {
targetSha, err := a.server.db.FetchBlockShaByHeight(
lastBlockIdxHeight)
if err != nil {
return fmt.Errorf("Unable to look up the sha of the "+
"next target block (height %v): %v",
lastBlockIdxHeight, err)
}
targetBlock, err := a.server.db.FetchBlockBySha(targetSha)
if err != nil {
// Unable to locate a target block by sha, this
// is a critical error, we may have an
// inconsistency in the DB.
return fmt.Errorf("Unable to look up the next "+
"target block (sha %v): %v", targetSha, err)
}
targetParent, err := a.server.db.FetchBlockBySha(
&targetBlock.MsgBlock().Header.PrevBlock)
if err != nil {
// Unable to locate a target block by sha, this
// is a critical error, we may have an
// inconsistency in the DB.
return fmt.Errorf("Unable to look up the next "+
"target block parent (sha %v): %v",
targetBlock.MsgBlock().Header.PrevBlock, err)
}
// Send off the next job, ready to exit if a shutdown is
// signalled.
indexJob := &indexBlockMsg{blk: targetBlock}
select {
case catchUpChan <- indexJob:
lastBlockIdxHeight++
case <-a.quit:
shutdownWorkers()
goto fin
}
_, a.chainTip, err = a.server.db.NewestSha()
if err != nil {
adxrLog.Errorf("Unable to get latest block height: %v", err)
criticalShutdown()
goto fin
addrIndex, err := a.indexBlockAddrs(targetBlock, targetParent)
if err != nil {
return fmt.Errorf("Unable to index transactions of"+
" block: %v", err)
}
err = a.server.db.UpdateAddrIndexForBlock(targetSha,
lastBlockIdxHeight,
addrIndex)
if err != nil {
return fmt.Errorf("Unable to insert block: %v", err.Error())
}
}
lastBlockIdxHeight++
}
a.Lock()
a.state = indexMaintain
a.Unlock()
// We've finished catching up. Signal our workers to quit, and
// wait until they've all finished.
shutdownWorkers()
workerWg.Wait()
}
adxrLog.Infof("Address indexer has caught up to best height, entering " +
"maintenance mode")
adxrLog.Debugf("Address indexer has queued up to best height, safe " +
"to begin maintenance mode")
// We're all caught up at this point. We now serially process new jobs
// coming in.
for {
select {
case indexJob := <-a.addrIndexJobs:
addrIndex, err := a.indexBlockAddrs(indexJob.blk)
if err != nil {
adxrLog.Errorf("Unable to index transactions of"+
" block: %v", err)
a.server.Stop()
goto fin
}
a.writeRequests <- &writeIndexReq{blk: indexJob.blk,
addrIndex: addrIndex}
case <-a.quit:
goto fin
}
}
fin:
a.wg.Done()
}
// UpdateAddressIndex asynchronously queues a newly solved block to have its
// transactions indexed by address.
func (a *addrIndexer) UpdateAddressIndex(block *btcutil.Block) {
go func() {
job := &indexBlockMsg{blk: block}
a.addrIndexJobs <- job
}()
}
// pendingWriteQueue is a priority queue used to ensure the
// address index for block height N+1 is written when our address index tip is
// at height N. This ordering is necessary to maintain index consistency in the face
// of our concurrent workers, which may not necessarily finish in the order the
// jobs are handed out.
type pendingWriteQueue []*writeIndexReq
// Len returns the number of items in the priority queue. It is part of the
// heap.Interface implementation.
func (pq pendingWriteQueue) Len() int { return len(pq) }
// Less returns whether the item in the priority queue with index i should sort
// before the item with index j. It is part of the heap.Interface implementation.
func (pq pendingWriteQueue) Less(i, j int) bool {
return pq[i].blk.Height() < pq[j].blk.Height()
}
// Swap swaps the items at the passed indices in the priority queue. It is
// part of the heap.Interface implementation.
func (pq pendingWriteQueue) Swap(i, j int) { pq[i], pq[j] = pq[j], pq[i] }
// Push pushes the passed item onto the priority queue. It is part of the
// heap.Interface implementation.
func (pq *pendingWriteQueue) Push(x interface{}) {
*pq = append(*pq, x.(*writeIndexReq))
}
// Pop removes the highest priority item (according to Less) from the priority
// queue and returns it. It is part of the heap.Interface implementation.
func (pq *pendingWriteQueue) Pop() interface{} {
n := len(*pq)
item := (*pq)[n-1]
(*pq)[n-1] = nil
*pq = (*pq)[0 : n-1]
return item
}
// indexWriter commits the populated address indexes created by the
// catch up workers to the database. Since we have concurrent workers, the writer
// ensures indexes are written in ascending order to avoid a possible gap in the
// address index triggered by an unexpected shutdown.
// NOTE: Must be run as a goroutine
func (a *addrIndexer) indexWriter() {
var pendingWrites pendingWriteQueue
minHeightWrite := make(chan *writeIndexReq)
workerQuit := make(chan struct{})
writeFinished := make(chan struct{}, 1)
// Spawn a goroutine to feed our writer address indexes such
// that, if our address tip is at N, the index for block N+1 is always
// written first. We use a priority queue to enforce this condition
// while accepting new write requests.
go func() {
for {
top:
select {
case incomingWrite := <-a.writeRequests:
heap.Push(&pendingWrites, incomingWrite)
// Check if we've found a write request that
// satisfies our condition. If we have, then
// chances are we have some backed up requests
// which wouldn't be written until a previous
// request showed up. If this is the case we'll
// quickly flush our heap of now available in
// order writes. We also accept write requests
// with a block height *before* the current
// index tip, in order to re-index new prior
// blocks added to the main chain during a
// re-org.
writeReq := heap.Pop(&pendingWrites).(*writeIndexReq)
_, addrTip, _ := a.server.db.FetchAddrIndexTip()
for writeReq.blk.Height() == (addrTip+1) ||
writeReq.blk.Height() <= addrTip {
minHeightWrite <- writeReq
// Wait for write to finish so we get a
// fresh view of the addrtip.
<-writeFinished
// Break to grab a new write request
if pendingWrites.Len() == 0 {
break top
}
writeReq = heap.Pop(&pendingWrites).(*writeIndexReq)
_, addrTip, _ = a.server.db.FetchAddrIndexTip()
}
// We haven't found the proper write request yet,
// push back onto our heap and wait for the next
// request which may be our target write.
heap.Push(&pendingWrites, writeReq)
case <-workerQuit:
return
}
}
}()
out:
// Our main writer loop. Here we actually commit the populated address
// indexes to the database.
for {
select {
case nextWrite := <-minHeightWrite:
sha := nextWrite.blk.Sha()
height := nextWrite.blk.Height()
err := a.server.db.UpdateAddrIndexForBlock(sha, height,
nextWrite.addrIndex)
if err != nil {
adxrLog.Errorf("Unable to write index for block, "+
"sha %v, height %v", sha, height)
a.server.Stop()
break out
}
writeFinished <- struct{}{}
a.progressLogger.LogBlockHeight(nextWrite.blk)
case <-a.quit:
break out
}
}
close(workerQuit)
a.wg.Done()
}
// indexCatchUpWorker indexes the transactions of previously validated and
// stored blocks.
// NOTE: Must be run as a goroutine
func (a *addrIndexer) indexCatchUpWorker(workChan chan *indexBlockMsg,
wg *sync.WaitGroup, quit chan struct{}) {
out:
for {
select {
case indexJob := <-workChan:
addrIndex, err := a.indexBlockAddrs(indexJob.blk)
if err != nil {
adxrLog.Errorf("Unable to index transactions of"+
" block: %v", err)
a.server.Stop()
break out
}
a.writeRequests <- &writeIndexReq{blk: indexJob.blk,
addrIndex: addrIndex}
case <-quit:
break out
}
}
wg.Done()
}
// indexScriptPubKey indexes all data pushes of at least 8 bytes within the
// passed SPK. Our "address" index is actually a hash160 index, where in the
// ideal case the data push is either the hash160 of a publicKey (P2PKH) or
// a Script (P2SH).
func indexScriptPubKey(addrIndex database.BlockAddrIndex, scriptPubKey []byte,
locInBlock *wire.TxLoc) error {
dataPushes, err := txscript.PushedData(scriptPubKey)
if err != nil {
adxrLog.Tracef("Couldn't get pushes: %v", err)
return err
}
for _, data := range dataPushes {
// Only index pushes of at least 8 bytes.
if len(data) < 8 {
continue
}
var indexKey [ripemd160.Size]byte
// A perfect little hash160.
if len(data) <= 20 {
copy(indexKey[:], data)
// Otherwise, could be a payToPubKey or an OP_RETURN, so we'll
// make a hash160 out of it.
} else {
copy(indexKey[:], btcutil.Hash160(data))
}
addrIndex[indexKey] = append(addrIndex[indexKey], locInBlock)
}
return nil
}
// convertToAddrIndex extracts the relevant hash160 from the passed SPK and
// returns a TxAddrIndex for each address involved. Our "address" index is
// actually a hash160 index, where in the ideal case the key is either the
// hash160 of a publicKey (P2PKH) or a Script (P2SH); scripts of an unknown
// class are keyed by the hash160 of the script itself.
func convertToAddrIndex(scrVersion uint16, scr []byte, height int64,
locInBlock *wire.TxLoc) ([]*database.TxAddrIndex, error) {
var tais []*database.TxAddrIndex
if scr == nil || locInBlock == nil {
return nil, fmt.Errorf("passed nil pointer")
}
var indexKey [ripemd160.Size]byte
// Get the script classes and extract the PKH if applicable.
// If it's multisig, unknown, etc, just hash the script itself.
class, addrs, _, err := txscript.ExtractPkScriptAddrs(scrVersion, scr,
activeNetParams.Params)
if err != nil {
return nil, fmt.Errorf("script conversion error")
}
knownType := false
for _, addr := range addrs {
switch {
case class == txscript.PubKeyTy:
copy(indexKey[:], addr.Hash160()[:])
case class == txscript.PubkeyAltTy:
copy(indexKey[:], addr.Hash160()[:])
case class == txscript.PubKeyHashTy:
copy(indexKey[:], addr.ScriptAddress()[:])
case class == txscript.PubkeyHashAltTy:
copy(indexKey[:], addr.ScriptAddress()[:])
case class == txscript.StakeSubmissionTy:
copy(indexKey[:], addr.ScriptAddress()[:])
case class == txscript.StakeGenTy:
copy(indexKey[:], addr.ScriptAddress()[:])
case class == txscript.StakeRevocationTy:
copy(indexKey[:], addr.ScriptAddress()[:])
case class == txscript.StakeSubChangeTy:
copy(indexKey[:], addr.ScriptAddress()[:])
case class == txscript.MultiSigTy:
copy(indexKey[:], addr.ScriptAddress()[:])
case class == txscript.ScriptHashTy:
copy(indexKey[:], addr.ScriptAddress()[:])
}
tai := &database.TxAddrIndex{
indexKey,
uint32(height),
uint32(locInBlock.TxStart),
uint32(locInBlock.TxLen),
}
tais = append(tais, tai)
knownType = true
}
if !knownType {
copy(indexKey[:], dcrutil.Hash160(scr))
tai := &database.TxAddrIndex{
indexKey,
uint32(height),
uint32(locInBlock.TxStart),
uint32(locInBlock.TxLen),
}
tais = append(tais, tai)
}
return tais, nil
}
// lookupTransaction is a special transaction lookup function that searches
// the database, the block, and its parent for a transaction. This is needed
// because indexBlockAddrs is called AFTER a block is added/removed in the
// blockchain in blockManager, so the blocks themselves must also be searched
// for the inputs of any given transaction. Additionally, it's faster to get
// the tx from the blocks here since they're already loaded in memory.
func (a *addrIndexer) lookupTransaction(txHash chainhash.Hash, blk *dcrutil.Block,
parent *dcrutil.Block) (*wire.MsgTx, error) {
// Search the previous block and parent first.
txTreeRegularValid := dcrutil.IsFlagSet16(blk.MsgBlock().Header.VoteBits,
dcrutil.BlockValid)
// Search the regular tx tree of this and the last block if the
// tx tree regular was validated.
if txTreeRegularValid {
for _, stx := range parent.STransactions() {
if stx.Sha().IsEqual(&txHash) {
return stx.MsgTx(), nil
}
}
for _, tx := range parent.Transactions() {
if tx.Sha().IsEqual(&txHash) {
return tx.MsgTx(), nil
}
}
for _, tx := range blk.Transactions() {
if tx.Sha().IsEqual(&txHash) {
return tx.MsgTx(), nil
}
}
} else {
// Just search this block's regular tx tree and the previous
// block's stake tx tree.
for _, stx := range parent.STransactions() {
if stx.Sha().IsEqual(&txHash) {
return stx.MsgTx(), nil
}
}
for _, tx := range blk.Transactions() {
if tx.Sha().IsEqual(&txHash) {
return tx.MsgTx(), nil
}
}
}
// Lookup and fetch the referenced output's tx in the database.
txList, err := a.server.db.FetchTxBySha(&txHash)
if err != nil {
adxrLog.Errorf("Error fetching tx %v: %v",
txHash, err)
return nil, err
}
if len(txList) == 0 {
return nil, fmt.Errorf("transaction %v not found",
txHash)
}
return txList[len(txList)-1].Tx, nil
}
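The layered search order above (in-memory blocks first, database last) can be sketched with plain maps standing in for the block trees and the database; all names here are illustrative, not dcrd APIs:

```go
package main

import "fmt"

// lookup searches in-memory "blocks" before falling back to a mocked
// database, mirroring the order used by lookupTransaction: transactions
// in the block being indexed or its parent may not yet be queryable
// from the database, so they must be checked first.
func lookup(hash string, inBlock, inParent, db map[string]string) (string, error) {
	if tx, ok := inParent[hash]; ok {
		return tx, nil
	}
	if tx, ok := inBlock[hash]; ok {
		return tx, nil
	}
	if tx, ok := db[hash]; ok {
		return tx, nil
	}
	return "", fmt.Errorf("transaction %v not found", hash)
}

func main() {
	parent := map[string]string{"aa": "tx-in-parent"}
	blk := map[string]string{"bb": "tx-in-block"}
	db := map[string]string{"aa": "stale-db-copy", "cc": "tx-in-db"}
	// The parent's copy shadows the database copy.
	fmt.Println(lookup("aa", blk, parent, db))
}
```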
// indexBlockAddrs returns a populated index of the all the transactions in the
// passed block based on the addresses involved in each transaction.
func (a *addrIndexer) indexBlockAddrs(blk *btcutil.Block) (database.BlockAddrIndex, error) {
addrIndex := make(database.BlockAddrIndex)
txLocs, err := blk.TxLoc()
func (a *addrIndexer) indexBlockAddrs(blk *dcrutil.Block,
parent *dcrutil.Block) (database.BlockAddrIndex, error) {
var addrIndex database.BlockAddrIndex
_, stxLocs, err := blk.TxLoc()
if err != nil {
return nil, err
}
for txIdx, tx := range blk.Transactions() {
// Tx's offset and length in the block.
locInBlock := &txLocs[txIdx]
txTreeRegularValid := dcrutil.IsFlagSet16(blk.MsgBlock().Header.VoteBits,
dcrutil.BlockValid)
// Coinbases don't have any inputs.
if !blockchain.IsCoinBase(tx) {
// Index the SPK's of each input's previous outpoint
// transaction.
for _, txIn := range tx.MsgTx().TxIn {
// Lookup and fetch the referenced output's tx.
prevOut := txIn.PreviousOutPoint
txList, err := a.server.db.FetchTxBySha(&prevOut.Hash)
if len(txList) == 0 {
return nil, fmt.Errorf("transaction %v not found",
prevOut.Hash)
// Add regular transactions iff the block was validated.
if txTreeRegularValid {
txLocs, _, err := parent.TxLoc()
if err != nil {
return nil, err
}
for txIdx, tx := range parent.Transactions() {
// Tx's offset and length in the block.
locInBlock := &txLocs[txIdx]
// Coinbases don't have any inputs.
if !blockchain.IsCoinBase(tx) {
// Index the SPK's of each input's previous outpoint
// transaction.
for _, txIn := range tx.MsgTx().TxIn {
prevOutTx, err := a.lookupTransaction(
txIn.PreviousOutPoint.Hash,
blk,
parent)
inputOutPoint := prevOutTx.TxOut[txIn.PreviousOutPoint.Index]
toAppend, err := convertToAddrIndex(inputOutPoint.Version,
inputOutPoint.PkScript, parent.Height(), locInBlock)
if err != nil {
adxrLog.Errorf("Error converting tx %v: %v",
txIn.PreviousOutPoint.Hash, err)
return nil, err
}
addrIndex = append(addrIndex, toAppend...)
}
}
for _, txOut := range tx.MsgTx().TxOut {
toAppend, err := convertToAddrIndex(txOut.Version, txOut.PkScript,
parent.Height(), locInBlock)
if err != nil {
adxrLog.Errorf("Error fetching tx %v: %v",
prevOut.Hash, err)
adxrLog.Errorf("Error converting tx %v: %v",
tx.MsgTx().TxSha(), err)
return nil, err
}
prevOutTx := txList[len(txList)-1]
inputOutPoint := prevOutTx.Tx.TxOut[prevOut.Index]
indexScriptPubKey(addrIndex, inputOutPoint.PkScript, locInBlock)
addrIndex = append(addrIndex, toAppend...)
}
}
}
for _, txOut := range tx.MsgTx().TxOut {
indexScriptPubKey(addrIndex, txOut.PkScript, locInBlock)
// Add stake transactions.
for stxIdx, stx := range blk.STransactions() {
// Tx's offset and length in the block.
locInBlock := &stxLocs[stxIdx]
isSSGen, _ := stake.IsSSGen(stx)
// Index the SPK's of each input's previous outpoint
// transaction.
for i, txIn := range stx.MsgTx().TxIn {
// Stakebases don't have any inputs.
if isSSGen && i == 0 {
continue
}
// Lookup and fetch the referenced output's tx.
prevOutTx, err := a.lookupTransaction(
txIn.PreviousOutPoint.Hash,
blk,
parent)
inputOutPoint := prevOutTx.TxOut[txIn.PreviousOutPoint.Index]
toAppend, err := convertToAddrIndex(inputOutPoint.Version,
inputOutPoint.PkScript, blk.Height(), locInBlock)
if err != nil {
adxrLog.Errorf("Error converting stx %v: %v",
txIn.PreviousOutPoint.Hash, err)
return nil, err
}
addrIndex = append(addrIndex, toAppend...)
}
for _, txOut := range stx.MsgTx().TxOut {
toAppend, err := convertToAddrIndex(txOut.Version, txOut.PkScript,
blk.Height(), locInBlock)
if err != nil {
adxrLog.Errorf("Error converting stx %v: %v",
stx.MsgTx().TxSha(), err)
return nil, err
}
addrIndex = append(addrIndex, toAppend...)
}
}
return addrIndex, nil
}
// InsertBlock synchronously queues a newly solved block to have its
// transactions indexed by address.
func (a *addrIndexer) InsertBlock(block *dcrutil.Block, parent *dcrutil.Block) error {
addrIndex, err := a.indexBlockAddrs(block, parent)
if err != nil {
return fmt.Errorf("Unable to index transactions of"+
" block: %v", err)
}
err = a.server.db.UpdateAddrIndexForBlock(block.Sha(),
block.Height(),
addrIndex)
if err != nil {
return fmt.Errorf("Unable to insert block: %v", err.Error())
}
return nil
}
// RemoveBlock removes all transactions from a block on the tip from the
// address index database.
func (a *addrIndexer) RemoveBlock(block *dcrutil.Block,
parent *dcrutil.Block) error {
addrIndex, err := a.indexBlockAddrs(block, parent)
if err != nil {
return fmt.Errorf("Unable to index transactions of"+
" block: %v", err)
}
err = a.server.db.DropAddrIndexForBlock(block.Sha(),
block.Height(),
addrIndex)
if err != nil {
return fmt.Errorf("Unable to remove block: %v", err.Error())
}
return nil
}

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -9,15 +10,15 @@ import (
"path/filepath"
"runtime"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/database"
_ "github.com/btcsuite/btcd/database/ldb"
"github.com/btcsuite/btcd/limits"
"github.com/btcsuite/btclog"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
"github.com/decred/dcrd/limits"
)
const (
// blockDbNamePrefix is the prefix for the btcd block database.
// blockDbNamePrefix is the prefix for the dcrd block database.
blockDbNamePrefix = "blocks"
)

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -9,12 +10,12 @@ import (
"os"
"path/filepath"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/database"
_ "github.com/btcsuite/btcd/database/ldb"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
flags "github.com/btcsuite/go-flags"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
const (
@ -24,8 +25,8 @@ const (
)
var (
btcdHomeDir = btcutil.AppDataDir("btcd", false)
defaultDataDir = filepath.Join(btcdHomeDir, "data")
dcrdHomeDir = dcrutil.AppDataDir("dcrd", false)
defaultDataDir = filepath.Join(dcrdHomeDir, "data")
knownDbTypes = database.SupportedDBs()
activeNetParams = &chaincfg.MainNetParams
)
@ -34,13 +35,12 @@ var (
//
// See loadConfig for details on the configuration load process.
type config struct {
DataDir string `short:"b" long:"datadir" description:"Location of the btcd data directory"`
DbType string `long:"dbtype" description:"Database backend to use for the Block Chain"`
TestNet3 bool `long:"testnet" description:"Use the test network"`
RegressionTest bool `long:"regtest" description:"Use the regression test network"`
SimNet bool `long:"simnet" description:"Use the simulation test network"`
InFile string `short:"i" long:"infile" description:"File containing the block(s)"`
Progress int `short:"p" long:"progress" description:"Show a progress message each time this number of seconds have passed -- Use 0 to disable progress announcements"`
DataDir string `short:"b" long:"datadir" description:"Location of the dcrd data directory"`
DbType string `long:"dbtype" description:"Database backend to use for the Block Chain"`
TestNet bool `long:"testnet" description:"Use the test network"`
SimNet bool `long:"simnet" description:"Use the simulation test network"`
InFile string `short:"i" long:"infile" description:"File containing the block(s)"`
Progress int `short:"p" long:"progress" description:"Show a progress message each time this number of seconds have passed -- Use 0 to disable progress announcements"`
}
// filesExists reports whether the named file or directory exists.
@ -65,17 +65,17 @@ func validDbType(dbType string) bool {
}
// netName returns the name used when referring to a bitcoin network. At the
// time of writing, btcd currently places blocks for testnet version 3 in the
// time of writing, dcrd currently places blocks for testnet version 3 in the
// data and log directory "testnet", which does not match the Name field of the
// chaincfg parameters. This function can be used to override this directory name
// as "testnet" when the passed active network matches wire.TestNet3.
// as "testnet" when the passed active network matches wire.TestNet.
//
// A proper upgrade to move the data and log directories for this network to
// "testnet3" is planned for the future, at which point this function can be
// "testnet" is planned for the future, at which point this function can be
// removed and the network parameter's name used instead.
func netName(chainParams *chaincfg.Params) string {
switch chainParams.Net {
case wire.TestNet3:
case wire.TestNet:
return "testnet"
default:
return chainParams.Name
@ -107,13 +107,9 @@ func loadConfig() (*config, []string, error) {
numNets := 0
// Count number of network flags passed; assign active network params
// while we're at it
if cfg.TestNet3 {
if cfg.TestNet {
numNets++
activeNetParams = &chaincfg.TestNet3Params
}
if cfg.RegressionTest {
numNets++
activeNetParams = &chaincfg.RegressionNetParams
activeNetParams = &chaincfg.TestNetParams
}
if cfg.SimNet {
numNets++

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Copyright (c) 2015 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -11,14 +12,15 @@ import (
"sync"
"time"
"github.com/btcsuite/btcd/blockchain"
"github.com/btcsuite/btcd/database"
_ "github.com/btcsuite/btcd/database/ldb"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ldb"
"github.com/decred/dcrd/wire"
"github.com/decred/dcrutil"
)
var zeroHash = wire.ShaHash{}
var zeroHash = chainhash.Hash{}
// importResults houses the stats and result as an import operation.
type importResults struct {
@ -94,7 +96,7 @@ func (bi *blockImporter) readBlock() ([]byte, error) {
// with any potential errors.
func (bi *blockImporter) processBlock(serializedBlock []byte) (bool, error) {
// Deserialize the block which includes checks for malformed blocks.
block, err := btcutil.NewBlockFromBytes(serializedBlock)
block, err := dcrutil.NewBlockFromBytes(serializedBlock)
if err != nil {
return false, err
}
@ -129,7 +131,7 @@ func (bi *blockImporter) processBlock(serializedBlock []byte) (bool, error) {
// Ensure the blocks follows all of the chain rules and match up to the
// known checkpoints.
isOrphan, err := bi.chain.ProcessBlock(block, bi.medianTime,
_, isOrphan, err := bi.chain.ProcessBlock(block, bi.medianTime,
blockchain.BFFastAdd)
if err != nil {
return false, err
@ -303,7 +305,7 @@ func newBlockImporter(db database.Db, r io.ReadSeeker) *blockImporter {
doneChan: make(chan bool),
errChan: make(chan error),
quit: make(chan struct{}),
chain: blockchain.New(db, activeNetParams, nil),
chain: blockchain.New(db, nil, activeNetParams, nil),
medianTime: blockchain.NewMedianTime(),
lastLogTime: time.Now(),
}

View File

@ -1,4 +1,5 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -11,8 +12,9 @@ import (
"path/filepath"
"strings"
"github.com/btcsuite/btcd/btcjson"
"github.com/btcsuite/btcutil"
"github.com/decred/dcrd/dcrjson"
"github.com/decred/dcrutil"
flags "github.com/btcsuite/go-flags"
)
@ -20,17 +22,17 @@ const (
// unusableFlags are the command usage flags which this utility are not
// able to use. In particular it doesn't support websockets and
// consequently notifications.
unusableFlags = btcjson.UFWebsocketOnly | btcjson.UFNotification
unusableFlags = dcrjson.UFWebsocketOnly | dcrjson.UFNotification
)
var (
btcdHomeDir = btcutil.AppDataDir("btcd", false)
btcctlHomeDir = btcutil.AppDataDir("btcctl", false)
btcwalletHomeDir = btcutil.AppDataDir("btcwallet", false)
defaultConfigFile = filepath.Join(btcctlHomeDir, "btcctl.conf")
dcrdHomeDir = dcrutil.AppDataDir("dcrd", false)
dcrctlHomeDir = dcrutil.AppDataDir("dcrctl", false)
dcrwalletHomeDir = dcrutil.AppDataDir("dcrwallet", false)
defaultConfigFile = filepath.Join(dcrctlHomeDir, "dcrctl.conf")
defaultRPCServer = "localhost"
defaultRPCCertFile = filepath.Join(btcdHomeDir, "rpc.cert")
defaultWalletCertFile = filepath.Join(btcwalletHomeDir, "rpc.cert")
defaultRPCCertFile = filepath.Join(dcrdHomeDir, "rpc.cert")
defaultWalletCertFile = filepath.Join(dcrwalletHomeDir, "rpc.cert")
)
// listCommands categorizes and lists all of the usable commands along with
@ -43,10 +45,10 @@ func listCommands() {
)
// Get a list of registered commands and categorize and filter them.
cmdMethods := btcjson.RegisteredCmdMethods()
cmdMethods := dcrjson.RegisteredCmdMethods()
categorized := make([][]string, numCategories)
for _, method := range cmdMethods {
flags, err := btcjson.MethodUsageFlags(method)
flags, err := dcrjson.MethodUsageFlags(method)
if err != nil {
// This should never happen since the method was just
// returned from the package, but be safe.
@ -58,7 +60,7 @@ func listCommands() {
continue
}
usage, err := btcjson.MethodUsageText(method)
usage, err := dcrjson.MethodUsageText(method)
if err != nil {
// This should never happen since the method was just
// returned from the package, but be safe.
@ -67,7 +69,7 @@ func listCommands() {
// Categorize the command based on the usage flags.
category := categoryChain
if flags&btcjson.UFWalletOnly != 0 {
if flags&dcrjson.UFWalletOnly != 0 {
category = categoryWallet
}
categorized[category] = append(categorized[category], usage)
@ -86,7 +88,7 @@ func listCommands() {
}
}
// config defines the configuration options for btcctl.
// config defines the configuration options for dcrctl.
//
// See loadConfig for details on the configuration load process.
type config struct {
@ -101,36 +103,37 @@ type config struct {
Proxy string `long:"proxy" description:"Connect via SOCKS5 proxy (eg. 127.0.0.1:9050)"`
ProxyUser string `long:"proxyuser" description:"Username for proxy server"`
ProxyPass string `long:"proxypass" default-mask:"-" description:"Password for proxy server"`
TestNet3 bool `long:"testnet" description:"Connect to testnet"`
TestNet bool `long:"testnet" description:"Connect to testnet"`
SimNet bool `long:"simnet" description:"Connect to the simulation test network"`
TLSSkipVerify bool `long:"skipverify" description:"Do not verify tls certificates (not recommended!)"`
Wallet bool `long:"wallet" description:"Connect to wallet"`
Terminal bool `long:"terminal" description:"Allow interactive use in a terminal"`
}
// normalizeAddress returns addr with the passed default port appended if
// there is not already a port specified.
func normalizeAddress(addr string, useTestNet3, useSimNet, useWallet bool) string {
func normalizeAddress(addr string, useTestNet, useSimNet, useWallet bool) string {
_, _, err := net.SplitHostPort(addr)
if err != nil {
var defaultPort string
switch {
case useTestNet3:
case useTestNet:
if useWallet {
defaultPort = "18332"
defaultPort = "19110"
} else {
defaultPort = "18334"
defaultPort = "19109"
}
case useSimNet:
if useWallet {
defaultPort = "18554"
defaultPort = "19557"
} else {
defaultPort = "18556"
defaultPort = "19556"
}
default:
if useWallet {
defaultPort = "8332"
defaultPort = "9110"
} else {
defaultPort = "8334"
defaultPort = "9109"
}
}
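A self-contained sketch of this port-defaulting logic, using only the standard library and the default ports shown above; note that net.JoinHostPort also brackets bare IPv6 hosts correctly:

```go
package main

import (
	"fmt"
	"net"
)

// normalizeAddress appends the network's default RPC port when the
// address does not already carry one, mirroring the function above.
func normalizeAddress(addr string, useTestNet, useSimNet, useWallet bool) string {
	_, _, err := net.SplitHostPort(addr)
	if err != nil {
		// No port present; pick one based on the active network.
		var defaultPort string
		switch {
		case useTestNet:
			if useWallet {
				defaultPort = "19110"
			} else {
				defaultPort = "19109"
			}
		case useSimNet:
			if useWallet {
				defaultPort = "19557"
			} else {
				defaultPort = "19556"
			}
		default:
			if useWallet {
				defaultPort = "9110"
			} else {
				defaultPort = "9109"
			}
		}
		return net.JoinHostPort(addr, defaultPort)
	}
	return addr
}

func main() {
	fmt.Println(normalizeAddress("localhost", false, false, false))
	fmt.Println(normalizeAddress("127.0.0.1:8000", true, false, false))
	fmt.Println(normalizeAddress("fe80::1", false, true, true))
}
```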
@ -144,7 +147,7 @@ func normalizeAddress(addr string, useTestNet3, useSimNet, useWallet bool) strin
func cleanAndExpandPath(path string) string {
// Expand initial ~ to OS specific home directory.
if strings.HasPrefix(path, "~") {
homeDir := filepath.Dir(btcctlHomeDir)
homeDir := filepath.Dir(dcrctlHomeDir)
path = strings.Replace(path, "~", homeDir, 1)
}
@ -174,7 +177,7 @@ func loadConfig() (*config, []string, error) {
}
// Create the home directory if it doesn't already exist.
err := os.MkdirAll(btcdHomeDir, 0700)
err := os.MkdirAll(dcrdHomeDir, 0700)
if err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
os.Exit(-1)
@ -238,7 +241,7 @@ func loadConfig() (*config, []string, error) {
// Multiple networks can't be selected simultaneously.
numNets := 0
if cfg.TestNet3 {
if cfg.TestNet {
numNets++
}
if cfg.SimNet {
@ -263,7 +266,7 @@ func loadConfig() (*config, []string, error) {
// Add default port to RPC server based on --testnet and --wallet flags
// if needed.
cfg.RPCServer = normalizeAddress(cfg.RPCServer, cfg.TestNet3,
cfg.RPCServer = normalizeAddress(cfg.RPCServer, cfg.TestNet,
cfg.SimNet, cfg.Wallet)
return &cfg, remainingArgs, nil

View File

@ -10,7 +10,7 @@ import (
"path/filepath"
"strings"
"github.com/btcsuite/btcd/btcjson"
"github.com/decred/dcrd/dcrjson"
)
const (
@ -20,7 +20,7 @@ const (
// commandUsage display the usage for a specific command.
func commandUsage(method string) {
usage, err := btcjson.MethodUsageText(method)
usage, err := dcrjson.MethodUsageText(method)
if err != nil {
// This should never happen since the method was already checked
// before calling this function, but be safe.
@ -51,6 +51,11 @@ func main() {
if err != nil {
os.Exit(1)
}
if cfg.Terminal {
startTerminal(cfg)
os.Exit(1)
}
if len(args) < 1 {
usage("No command specified")
os.Exit(1)
@ -59,7 +64,7 @@ func main() {
// Ensure the specified method identifies a valid registered command and
// is one of the usable types.
method := args[0]
usageFlags, err := btcjson.MethodUsageFlags(method)
usageFlags, err := dcrjson.MethodUsageFlags(method)
if err != nil {
fmt.Fprintf(os.Stderr, "Unrecognized command '%s'\n", method)
fmt.Fprintln(os.Stderr, listCmdMessage)
@ -104,20 +109,20 @@ func main() {
// Attempt to create the appropriate command using the arguments
// provided by the user.
cmd, err := btcjson.NewCmd(method, params...)
cmd, err := dcrjson.NewCmd(method, params...)
if err != nil {
// Show the error along with its error code when it's a
// btcjson.Error as it realistically will always be since the
// dcrjson.Error as it realistically will always be since the
// NewCmd function is only supposed to return errors of that
// type.
if jerr, ok := err.(btcjson.Error); ok {
if jerr, ok := err.(dcrjson.Error); ok {
fmt.Fprintf(os.Stderr, "%s command: %v (code: %s)\n",
method, err, jerr.ErrorCode)
method, err, jerr.Code)
commandUsage(method)
os.Exit(1)
}
// The error is not a btcjson.Error and this really should not
// The error is not a dcrjson.Error and this really should not
// happen. Nevertheless, fallback to just showing the error
// if it should happen due to a bug in the package.
fmt.Fprintf(os.Stderr, "%s command: %v\n", method, err)
@ -127,7 +132,7 @@ func main() {
// Marshal the command into a JSON-RPC byte slice in preparation for
// sending it to the RPC server.
marshalledJSON, err := btcjson.MarshalCmd(1, cmd)
marshalledJSON, err := dcrjson.MarshalCmd(1, cmd)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
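dcrjson.MarshalCmd handles the wire encoding internally; as a rough sketch of the JSON-RPC request shape it produces, the following uses only encoding/json (the field layout is assumed for illustration, not taken from the dcrjson package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// request mirrors the general wire shape of a JSON-RPC request
// (hypothetical layout for illustration).
type request struct {
	Jsonrpc string        `json:"jsonrpc"`
	Method  string        `json:"method"`
	Params  []interface{} `json:"params"`
	ID      interface{}   `json:"id"`
}

// marshalCmd builds a JSON-RPC byte slice ready to POST to the server.
func marshalCmd(id interface{}, method string, params ...interface{}) ([]byte, error) {
	if params == nil {
		params = []interface{}{} // marshal as [] rather than null
	}
	return json.Marshal(&request{
		Jsonrpc: "1.0",
		Method:  method,
		Params:  params,
		ID:      id,
	})
}

func main() {
	b, err := marshalCmd(1, "getblockcount")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```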

View File

@ -10,7 +10,8 @@ import (
"net"
"net/http"
"github.com/btcsuite/btcd/btcjson"
"github.com/decred/dcrd/dcrjson"
"github.com/btcsuite/go-socks/socks"
)
@ -116,7 +117,7 @@ func sendPostRequest(marshalledJSON []byte, cfg *config) ([]byte, error) {
}
// Unmarshal the response.
var resp btcjson.Response
var resp dcrjson.Response
if err := json.Unmarshal(respBytes, &resp); err != nil {
return nil, err
}

View File

@ -0,0 +1,240 @@
[Application Options]
; ------------------------------------------------------------------------------
; Data settings
; ------------------------------------------------------------------------------
; The directory to store data such as the block chain and peer addresses. The
; block chain takes several GB, so this location must have a lot of free space.
; The default is ~/.dcrd/data on POSIX OSes, $LOCALAPPDATA/Dcrd/data on Windows,
; ~/Library/Application Support/Dcrd/data on Mac OS, and $home/dcrd/data on
; Plan9. Environment variables are expanded so they may be used. NOTE: Windows
; environment variables are typically %VARIABLE%, but they must be accessed with
; $VARIABLE here. Also, ~ is expanded to $LOCALAPPDATA on Windows.
; datadir=~/.dcrd/data
; ------------------------------------------------------------------------------
; Network settings
; ------------------------------------------------------------------------------
; Use testnet.
; testnet=1
; Connect via a SOCKS5 proxy. NOTE: Specifying a proxy will disable listening
; for incoming connections unless listen addresses are provided via the 'listen'
; option.
; proxy=127.0.0.1:9050
; proxyuser=
; proxypass=
; The SOCKS5 proxy above is assumed to be Tor (https://www.torproject.org).
; If the proxy is not tor then the following may be used to prevent using
; tor-specific SOCKS queries to look up addresses (this increases anonymity when
; tor is used by preventing your IP from being leaked via DNS).
; noonion=1
; Use an alternative proxy to connect to .onion addresses. The proxy is assumed
; to be a Tor node. Non .onion addresses will be contacted with the main proxy
; or without a proxy if none is set.
; onion=127.0.0.1:9051
; ******************************************************************************
; Summary of 'addpeer' versus 'connect'.
;
; Only one of the following two options, 'addpeer' and 'connect', may be
; specified. Both allow you to specify peers that you want to stay connected
; with, but the behavior is slightly different. By default, dcrd will query DNS
; to find peers to connect to, so unless you have a specific reason such as
; those described below, you probably won't need to modify anything here.
;
; 'addpeer' does not prevent connections to other peers discovered from
; the peers you are connected to and also lets the remote peers know you are
; available so they can notify other peers that they can connect to you. This
; option might be useful if you are having problems finding a node for some
; reason (perhaps due to a firewall).
;
; 'connect', on the other hand, will ONLY connect to the specified peers and
; no others. It also disables listening (unless you explicitly set listen
; addresses via the 'listen' option) and DNS seeding, so you will not be
; advertised as an available peer to the peers you connect to and won't accept
; connections from any other peers. So, the 'connect' option effectively allows
; you to only connect to "trusted" peers.
; ******************************************************************************
; Add persistent peers to connect to as desired. One peer per line.
; You may specify each IP address with or without a port. The default port will
; be added automatically if one is not specified here.
; addpeer=192.168.1.1
; addpeer=10.0.0.2:9108
; addpeer=fe80::1
; addpeer=[fe80::2]:9108
; Add persistent peers that you ONLY want to connect to as desired. One peer
; per line. You may specify each IP address with or without a port. The
; default port will be added automatically if one is not specified here.
; NOTE: Specifying this option has other side effects as described above in
; the 'addpeer' versus 'connect' summary section.
; connect=192.168.1.1
; connect=10.0.0.2:9108
; connect=fe80::1
; connect=[fe80::2]:9108
; Maximum number of inbound and outbound peers.
; maxpeers=8
; How long to ban misbehaving peers. Valid time units are {s, m, h}.
; Minimum 1s.
; banduration=24h
; banduration=11h30m15s
; Disable DNS seeding for peers. By default, when dcrd starts, it will use
; DNS to query for available peers to connect with.
; nodnsseed=1
; Specify the interfaces to listen on. One listen address per line.
; NOTE: The default port is modified by some options such as 'testnet', so it is
; recommended to not specify a port and allow a proper default to be chosen
; unless you have a specific reason to do otherwise.
; All interfaces on default port (this is the default):
; listen=
; All ipv4 interfaces on default port:
; listen=0.0.0.0
; All ipv6 interfaces on default port:
; listen=::
; All interfaces on port 9108:
; listen=:9108
; All ipv4 interfaces on port 9108:
; listen=0.0.0.0:9108
; All ipv6 interfaces on port 9108:
; listen=[::]:9108
; Only ipv4 localhost on port 9108:
; listen=127.0.0.1:9108
; Only ipv6 localhost on port 9108:
; listen=[::1]:9108
; Only ipv4 localhost on non-standard port 8336:
; listen=127.0.0.1:8336
; All interfaces on non-standard port 8336:
; listen=:8336
; All ipv4 interfaces on non-standard port 8336:
; listen=0.0.0.0:8336
; All ipv6 interfaces on non-standard port 8336:
; listen=[::]:8336
; Disable listening for incoming connections. This will override all listeners.
; nolisten=1
; ------------------------------------------------------------------------------
; RPC server options - The following options control the built-in RPC server
; which is used to control and query information from a running dcrd process.
;
; NOTE: The RPC server is disabled by default if rpcuser AND rpcpass, or
; rpclimituser AND rpclimitpass, are not specified.
; ------------------------------------------------------------------------------
; Secure the RPC API by specifying the username and password. You can also
; specify a limited username and password. You must specify at least one
; full set of credentials - limited or admin - or the RPC server will
; be disabled.
; rpcuser=whatever_admin_username_you_want
; rpcpass=
; rpclimituser=whatever_limited_username_you_want
; rpclimitpass=
; Specify the interfaces for the RPC server to listen on. One listen address
; line. NOTE: The default port is modified by some options such as 'testnet',
; so it is recommended to not specify a port and allow a proper default to be
; chosen unless you have a specific reason to do otherwise.
; All interfaces on default port (this is the default):
; rpclisten=
; All ipv4 interfaces on default port:
; rpclisten=0.0.0.0
; All ipv6 interfaces on default port:
; rpclisten=::
; All interfaces on port 1909:
; rpclisten=:1909
; All ipv4 interfaces on port 1909:
; rpclisten=0.0.0.0:1909
; All ipv6 interfaces on port 1909:
; rpclisten=[::]:1909
; Only ipv4 localhost on port 1909:
; rpclisten=127.0.0.1:1909
; Only ipv6 localhost on port 1909:
; rpclisten=[::1]:1909
; Only ipv4 localhost on non-standard port 8337:
; rpclisten=127.0.0.1:8337
; All interfaces on non-standard port 8337:
; rpclisten=:8337
; All ipv4 interfaces on non-standard port 8337:
; rpclisten=0.0.0.0:8337
; All ipv6 interfaces on non-standard port 8337:
; rpclisten=[::]:8337
; Specify the maximum number of concurrent RPC clients for standard connections.
; rpcmaxclients=10
; Specify the maximum number of concurrent RPC websocket clients.
; rpcmaxwebsockets=25
; Use the following setting to disable the RPC server even if the rpcuser and
; rpcpass are specified above. This allows one to quickly disable the RPC
; server without having to remove credentials from the config file.
; norpc=1
; ------------------------------------------------------------------------------
; Coin Generation (Mining) Settings - The following options control the
; generation of block templates used by external mining applications through RPC
; calls as well as the built-in CPU miner (if enabled).
; ------------------------------------------------------------------------------
; Enable built-in CPU mining.
;
; NOTE: This is typically only useful for testing purposes such as testnet or
; simnet since the difficulty on mainnet is far too high for CPU mining to be
; worth your while.
; generate=false
; Add addresses to pay mined blocks to for CPU mining and the block templates
; generated for the getwork RPC as desired. One address per line.
; miningaddr=youraddress
; miningaddr=youraddress2
; miningaddr=youraddress3
; Specify the minimum block size in bytes to create. By default, only
; transactions which have enough fees or a high enough priority will be included
; in generated block templates. Specifying a minimum block size will instead
; attempt to fill generated block templates up with transactions until it is at
; least the specified number of bytes.
; blockminsize=0
; Specify the maximum block size in bytes to create.  This value will be
; capped at the consensus limit if it is larger.
; blockmaxsize=750000
; Specify the size in bytes of the high-priority/low-fee area when creating a
; block.  Transactions which spend large amounts, have old inputs, and are
; small in size have the highest priority.  One consequence of this is that as
; low-fee or free transactions age, they rise in priority, thereby making them
; more likely to be included in this section of a new block.  This value is
; limited by the blockmaxsize option and will be reduced as needed.
; blockprioritysize=50000
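The priority behavior described above (large amounts and old inputs raise priority, large size lowers it) matches the classic bitcoind/btcd formula: priority = sum(input_value * input_age) / tx_size. Whether dcrd computes it exactly this way is an assumption; this is a minimal sketch of that formula, with hypothetical types:

```go
package main

import "fmt"

// txInput models the two input properties that matter for priority:
// the value of the output being spent and its age in blocks.  These
// names are illustrative, not dcrd's actual types.
type txInput struct {
	valueAtoms int64 // value of the spent output, in base units
	ageBlocks  int64 // confirmations of the spent output
}

// calcPriority sketches the classic bitcoind-style formula:
//
//	priority = sum(input_value * input_age) / tx_size_bytes
//
// Larger values, older inputs, and smaller transactions all increase
// the result, which is why aging free transactions eventually qualify
// for the high-priority area of a block template.
func calcPriority(inputs []txInput, txSizeBytes int64) float64 {
	var sum int64
	for _, in := range inputs {
		sum += in.valueAtoms * in.ageBlocks
	}
	return float64(sum) / float64(txSizeBytes)
}

func main() {
	inputs := []txInput{
		{valueAtoms: 100000000, ageBlocks: 144}, // 1 coin, ~1 day of blocks
		{valueAtoms: 50000000, ageBlocks: 10},
	}
	fmt.Println(calcPriority(inputs, 250))
}
```

Note how the same transaction's priority grows each block simply because ageBlocks increases, with no change to fees.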
; ------------------------------------------------------------------------------
; Debug
; ------------------------------------------------------------------------------
; Debug logging level.
; Valid levels are {trace, debug, info, warn, error, critical}
; You may also specify <subsystem>=<level>,<subsystem2>=<level>,... to set
; log level for individual subsystems. Use dcrd --debuglevel=show to list
; available subsystems.
; debuglevel=info
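The <subsystem>=<level> comma-separated syntax described above can be sketched as a small parser. This is an illustrative re-implementation only, and the subsystem tags used in the example are hypothetical, not taken from dcrd's --debuglevel=show output:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDebugLevels splits a debuglevel value of the form
// "subsys1=level1,subsys2=level2" into a map.  A bare level such as
// "info" applies to every subsystem, represented here by the "*" key.
// This mirrors the syntax documented above but is not dcrd's parser.
func parseDebugLevels(spec string) map[string]string {
	levels := make(map[string]string)
	for _, pair := range strings.Split(spec, ",") {
		if !strings.Contains(pair, "=") {
			levels["*"] = pair // bare level applies globally
			continue
		}
		kv := strings.SplitN(pair, "=", 2)
		levels[kv[0]] = kv[1]
	}
	return levels
}

func main() {
	// "RPCS" and "PEER" are hypothetical subsystem tags for illustration.
	fmt.Println(parseDebugLevels("RPCS=trace,PEER=debug"))
	fmt.Println(parseDebugLevels("info"))
}
```

A real implementation would also validate each level against the {trace, debug, info, warn, error, critical} set and reject unknown subsystems.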
; The port used to listen for HTTP profile requests. The profile server will
; be disabled if this option is not specified. The profile information can be
; accessed at http://localhost:<profileport>/debug/pprof once running.
; profile=6061

204
cmd/dcrctl/terminal.go Normal file
View File

@ -0,0 +1,204 @@
// terminal
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"strings"

	"github.com/decred/dcrd/dcrjson"
	"golang.org/x/crypto/ssh/terminal"
)

func execute(quit chan bool, protected *bool, cfg *config, line string) {
	switch line {
	case "h":
		fallthrough
	case "help":
		fmt.Printf("[h]elp          print this message\n")
		fmt.Printf("[l]ist          list all available commands\n")
		fmt.Printf("[p]rotect       toggle protected mode (for passwords)\n")
		fmt.Printf("[q]uit/ctrl+d   exit\n")
		fmt.Printf("Enter commands with arguments to execute them.\n")
	case "l":
		fallthrough
	case "list":
		listCommands()
	case "q":
		fallthrough
	case "quit":
		quit <- true
	case "p":
		fallthrough
	case "protect":
		*protected = !*protected
		return
	default:
		args := strings.Split(line, " ")
		if len(args) < 1 {
			usage("No command specified")
			return
		}

		// Ensure the specified method identifies a valid registered
		// command and is one of the usable types.
		listCmdMessageLocal := "Enter [l]ist to list commands"
		method := args[0]
		usageFlags, err := dcrjson.MethodUsageFlags(method)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Unrecognized command '%s'\n", method)
			fmt.Fprintln(os.Stderr, listCmdMessageLocal)
			return
		}
		if usageFlags&unusableFlags != 0 {
			fmt.Fprintf(os.Stderr, "The '%s' command can only be used via "+
				"websockets\n", method)
			fmt.Fprintln(os.Stderr, listCmdMessageLocal)
			return
		}

		// Convert the remaining command line args to a slice of interface
		// values to be passed along as parameters to the new command
		// creation function.
		//
		// Since some commands, such as submitblock, can involve data which
		// is too large for the operating system to allow as a normal
		// command line parameter, support using '-' as an argument to
		// allow the argument to be read from a stdin pipe.
		bio := bufio.NewReader(os.Stdin)
		params := make([]interface{}, 0, len(args[1:]))
		for _, arg := range args[1:] {
			if arg == "-" {
				param, err := bio.ReadString('\n')
				if err != nil && err != io.EOF {
					fmt.Fprintf(os.Stderr, "Failed to read data "+
						"from stdin: %v\n", err)
					return
				}
				if err == io.EOF && len(param) == 0 {
					fmt.Fprintln(os.Stderr, "Not enough lines "+
						"provided on stdin")
					return
				}
				param = strings.TrimRight(param, "\r\n")
				params = append(params, param)
				continue
			}
			params = append(params, arg)
		}

		// Attempt to create the appropriate command using the arguments
		// provided by the user.
		cmd, err := dcrjson.NewCmd(method, params...)
		if err != nil {
			// Show the error along with its error code when it's a
			// dcrjson.Error, as it realistically always will be since
			// the NewCmd function is only supposed to return errors of
			// that type.
			if jerr, ok := err.(dcrjson.Error); ok {
				fmt.Fprintf(os.Stderr, "%s command: %v (code: %s)\n",
					method, err, jerr.Code)
				commandUsage(method)
				return
			}

			// The error is not a dcrjson.Error and this really should
			// not happen.  Nevertheless, fall back to just showing the
			// error if it should happen due to a bug in the package.
			fmt.Fprintf(os.Stderr, "%s command: %v\n", method, err)
			commandUsage(method)
			return
		}

		// Marshal the command into a JSON-RPC byte slice in preparation
		// for sending it to the RPC server.
		marshalledJSON, err := dcrjson.MarshalCmd(1, cmd)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}

		// Send the JSON-RPC request to the server using the
		// user-specified connection configuration.
		result, err := sendPostRequest(marshalledJSON, cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}

		// Choose how to display the result based on its type.
		strResult := string(result)
		if strings.HasPrefix(strResult, "{") || strings.HasPrefix(strResult, "[") {
			var dst bytes.Buffer
			if err := json.Indent(&dst, result, "", " "); err != nil {
				fmt.Fprintf(os.Stderr, "Failed to format result: %v",
					err)
				return
			}
			fmt.Println(dst.String())
			return
		} else if strings.HasPrefix(strResult, `"`) {
			var str string
			if err := json.Unmarshal(result, &str); err != nil {
				fmt.Fprintf(os.Stderr, "Failed to unmarshal result: %v",
					err)
				return
			}
			fmt.Println(str)
			return
		} else if strResult != "null" {
			fmt.Println(strResult)
		}
	}
}
func startTerminal(c *config) {
	fmt.Printf("Starting terminal mode.\n")
	fmt.Printf("Enter h for [h]elp.\n")
	fmt.Printf("Enter q for [q]uit.\n")

	done := make(chan bool)
	protected := false
	initState, err := terminal.GetState(0)
	if err != nil {
		fmt.Printf("error getting terminal state: %v\n", err)
		return
	}

	go func() {
		terminal.MakeRaw(int(os.Stdin.Fd()))
		n := terminal.NewTerminal(os.Stdin, "> ")
		for {
			var ln string
			var err error
			if !protected {
				ln, err = n.ReadLine()
			} else {
				ln, err = n.ReadPassword(">*")
			}
			if err != nil {
				// Signal the main goroutine to exit and stop reading;
				// continuing after a read error would spin forever.
				done <- true
				return
			}
			execute(done, &protected, c, ln)
		}
	}()

	<-done
	fmt.Printf("exiting...\n")
	terminal.Restore(0, initState)
	close(done)
}

Some files were not shown because too many files have changed in this diff