This commit is the first stage of several that are planned to convert
the blockchain package into a concurrent safe package that will
ultimately allow support for multi-peer download and concurrent chain
processing. The goal is to update btcd proper after each step so it can
take advantage of the enhancements as they are developed.
In addition to the aforementioned benefit, this staged approach has been
chosen since it is absolutely critical to maintain consensus.
Separating the changes into several stages makes it easier for reviewers
to logically follow what is happening and therefore helps prevent
consensus bugs. Naturally there are significant automated tests to help
prevent consensus issues as well.
The main focus of this stage is to convert the blockchain package to use
the new database interface and implement the chain-related functionality
which it no longer handles. It also aims to improve efficiency in
various areas by making use of the new database and chain capabilities.
The following is an overview of the chain changes:
- Update to use the new database interface
- Add chain-related functionality that the old database used to handle
- Main chain structure and state
- Transaction spend tracking
- Implement a new pruned unspent transaction output (utxo) set
- Provides efficient direct access to the unspent transaction outputs
- Uses a domain specific compression algorithm that understands the
standard transaction scripts in order to significantly compress them
- Removes reliance on the transaction index and paves the way toward
eventually enabling block pruning
- Modify the New function to accept a Config struct instead of
individual parameters
- Replace the old TxStore type with a new UtxoViewpoint type that makes
use of the new pruned utxo set
- Convert code to treat the new UtxoViewpoint as a rolling view that is
used between connects and disconnects to improve efficiency
- Make best chain state always set when the chain instance is created
- Remove now unnecessary logic for dealing with unset best state
- Make all exported functions concurrent safe
- Currently using a single chain state lock as it provides a
straightforward and easy-to-review path forward; however, this can be
improved with more fine-grained locking
- Optimize various cases where full blocks were being loaded when only
the header is needed to help reduce the I/O load
- Add the ability for callers to get a snapshot of the current best
chain state in a concurrent safe fashion (see the sketch after this list)
- Does not block callers while new blocks are being processed
- Make error messages that reference transaction outputs consistently
use <transaction hash>:<output index>
- Introduce a new AssertError type and convert internal consistency
checks to use it
- Update tests and examples to reflect the changes
- Add a full suite of tests to ensure correct functionality of the new
code
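To illustrate the reworked API described above, here is a minimal, hypothetical sketch of creating a chain instance with a Config struct and reading the best chain state snapshot; the import paths, the Config field names (DB, ChainParams), and the BestSnapshot method shown here are assumptions based on this overview rather than the exact final API:

package main

import (
	"fmt"

	"github.com/decred/dcrd/blockchain"
	"github.com/decred/dcrd/chaincfg"
	"github.com/decred/dcrd/database"
)

// exampleBestState creates a chain instance from an open database handle and
// prints the best chain state without blocking block processing.
func exampleBestState(db database.DB) error {
	// New now takes a Config struct rather than individual parameters.
	chain, err := blockchain.New(&blockchain.Config{
		DB:          db,
		ChainParams: &chaincfg.MainNetParams,
	})
	if err != nil {
		return err
	}

	// BestSnapshot returns a concurrent safe snapshot of the best chain
	// state; callers are not blocked while new blocks are processed.
	snapshot := chain.BestSnapshot()
	fmt.Printf("best block %v at height %d\n", snapshot.Hash, snapshot.Height)
	return nil
}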
The following is an overview of the btcd changes:
- Update to use the new database and chain interfaces
- Temporarily remove all code related to the transaction index
- Temporarily remove all code related to the address index
- Convert all code that uses transaction stores to use the new utxo
view
- Rework several calls that required the block manager for safe
concurrency to use the chain package directly now that it is
concurrent safe
- Change all calls to obtain the best hash to use the new best state
snapshot capability from the chain package
- Remove workaround for limits on fetching height ranges since the new
database interface no longer imposes them
- Correct the gettxout RPC handler to return the best chain hash as
opposed to the hash the txout was found in
- Optimize various RPC handlers:
- Change several of the RPC handlers to use the new chain snapshot
capability to avoid needlessly loading data
- Update several handlers to use new functionality to avoid accessing
the block manager so they are able to return the data without
blocking when the server is busy processing blocks
- Update non-verbose getblock to avoid deserialization and
serialization overhead
- Update getblockheader to request the block height directly from
chain and only load the header
- Update getdifficulty to use the new cached data from chain
- Update getmininginfo to use the new cached data from chain
- Update non-verbose getrawtransaction to avoid deserialization and
serialization overhead
- Update gettxout to use the new utxo store versus loading
full transactions using the transaction index (see the sketch after
this list)
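The following is a minimal, hypothetical sketch of what such a utxo-based lookup might look like; FetchUtxoEntry, the UtxoEntry accessors, and the import paths are assumptions inferred from this overview, not the exact final API:

package main

import (
	"fmt"

	"github.com/decred/dcrd/blockchain"
	"github.com/decred/dcrd/chaincfg/chainhash"
)

// lookupUtxo reports an unspent output directly from the pruned utxo set
// instead of loading the full transaction via the transaction index.
func lookupUtxo(chain *blockchain.BlockChain, txHash *chainhash.Hash, index uint32) error {
	entry, err := chain.FetchUtxoEntry(txHash)
	if err != nil {
		return err
	}
	if entry == nil || entry.IsOutputSpent(index) {
		return fmt.Errorf("output %v:%d is spent or does not exist", txHash, index)
	}

	// Report against the best chain tip rather than the block the
	// transaction was originally found in.
	best := chain.BestSnapshot()
	fmt.Printf("output %v:%d is unspent as of block %v (height %d)\n",
		txHash, index, best.Hash, best.Height)
	return nil
}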
The following is an overview of the utility changes:
- Update addblock to use the new database and chain interfaces
- Update findcheckpoint to use the new database and chain interfaces
- Remove the dropafter utility which is no longer supported
NOTE: The transaction index and address index will be reimplemented in
another commit.
// Copyright (c) 2013-2015 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.

package txscript

import (
	"encoding/binary"
	"fmt"
)

const (
	// defaultScriptAlloc is the default size used for the backing array
	// for a script being built by the ScriptBuilder. The array will
	// dynamically grow as needed, but this figure is intended to provide
	// enough space for the vast majority of scripts without needing to
	// grow the backing array multiple times.
	defaultScriptAlloc = 500
)

// ErrScriptNotCanonical identifies a non-canonical script. The caller can use
// a type assertion to detect this error type.
type ErrScriptNotCanonical string

// Error implements the error interface.
func (e ErrScriptNotCanonical) Error() string {
	return string(e)
}

// ScriptBuilder provides a facility for building custom scripts. It allows
// you to push opcodes, ints, and data while respecting canonical encoding. In
// general it does not ensure the script will execute correctly, however any
// data pushes which would exceed the maximum allowed script engine limits and
// are therefore guaranteed not to execute will not be pushed and will result in
// the Script function returning an error.
//
// For example, the following would build a 2-of-3 multisig script for usage in
// a pay-to-script-hash (although in this situation MultiSigScript() would be a
// better choice to generate the script):
//	builder := txscript.NewScriptBuilder()
//	builder.AddOp(txscript.OP_2).AddData(pubKey1).AddData(pubKey2)
//	builder.AddData(pubKey3).AddOp(txscript.OP_3)
//	builder.AddOp(txscript.OP_CHECKMULTISIG)
//	script, err := builder.Script()
//	if err != nil {
//		// Handle the error.
//		return
//	}
//	fmt.Printf("Final multi-sig script: %x\n", script)
type ScriptBuilder struct {
	script []byte
	err    error
}

// AddOp pushes the passed opcode to the end of the script. The script will not
// be modified if pushing the opcode would cause the script to exceed the
// maximum allowed script engine size.
func (b *ScriptBuilder) AddOp(opcode byte) *ScriptBuilder {
	if b.err != nil {
		return b
	}

	// Pushes that would cause the script to exceed the largest allowed
	// script size would result in a non-canonical script.
	if len(b.script)+1 > maxScriptSize {
		str := fmt.Sprintf("adding an opcode would exceed the maximum "+
			"allowed canonical script length of %d", maxScriptSize)
		b.err = ErrScriptNotCanonical(str)
		return b
	}

	b.script = append(b.script, opcode)
	return b
}

// canonicalDataSize returns the number of bytes the canonical encoding of the
// data will take.
func canonicalDataSize(data []byte) int {
	dataLen := len(data)

	// When the data consists of a single number that can be represented
	// by one of the "small integer" opcodes, that opcode will be used
	// instead of a data push opcode followed by the number.
	if dataLen == 0 {
		return 1
	} else if dataLen == 1 && data[0] <= 16 {
		return 1
	} else if dataLen == 1 && data[0] == 0x81 {
		return 1
	}

	if dataLen < OP_PUSHDATA1 {
		return 1 + dataLen
	} else if dataLen <= 0xff {
		return 2 + dataLen
	} else if dataLen <= 0xffff {
		return 3 + dataLen
	}

	return 5 + dataLen
}

// addData is the internal function that actually pushes the passed data to the
// end of the script. It automatically chooses canonical opcodes depending on
// the length of the data. A zero length buffer will lead to a push of empty
// data onto the stack (OP_0). No data limits are enforced with this function.
func (b *ScriptBuilder) addData(data []byte) *ScriptBuilder {
	dataLen := len(data)

	// When the data consists of a single number that can be represented
	// by one of the "small integer" opcodes, use that opcode instead of
	// a data push opcode followed by the number.
	if dataLen == 0 || dataLen == 1 && data[0] == 0 {
		b.script = append(b.script, OP_0)
		return b
	} else if dataLen == 1 && data[0] <= 16 {
		b.script = append(b.script, byte((OP_1-1)+data[0]))
		return b
	} else if dataLen == 1 && data[0] == 0x81 {
		b.script = append(b.script, byte(OP_1NEGATE))
		return b
	}

	// Use one of the OP_DATA_# opcodes if the length of the data is small
	// enough so the data push instruction is only a single byte.
	// Otherwise, choose the smallest possible OP_PUSHDATA# opcode that
	// can represent the length of the data.
	if dataLen < OP_PUSHDATA1 {
		b.script = append(b.script, byte((OP_DATA_1-1)+dataLen))
	} else if dataLen <= 0xff {
		b.script = append(b.script, OP_PUSHDATA1, byte(dataLen))
	} else if dataLen <= 0xffff {
		buf := make([]byte, 2)
		binary.LittleEndian.PutUint16(buf, uint16(dataLen))
		b.script = append(b.script, OP_PUSHDATA2)
		b.script = append(b.script, buf...)
	} else {
		buf := make([]byte, 4)
		binary.LittleEndian.PutUint32(buf, uint32(dataLen))
		b.script = append(b.script, OP_PUSHDATA4)
		b.script = append(b.script, buf...)
	}

	// Append the actual data.
	b.script = append(b.script, data...)

	return b
}

// AddFullData should not typically be used by ordinary users as it does not
// include the checks which prevent data pushes larger than the maximum allowed
// sizes, which leads to scripts that can't be executed. This is provided for
// testing purposes such as regression tests where sizes are intentionally made
// larger than allowed.
//
// Use AddData instead.
func (b *ScriptBuilder) AddFullData(data []byte) *ScriptBuilder {
	if b.err != nil {
		return b
	}

	return b.addData(data)
}

// AddData pushes the passed data to the end of the script. It automatically
// chooses canonical opcodes depending on the length of the data. A zero length
// buffer will lead to a push of empty data onto the stack (OP_0) and any push
// of data greater than MaxScriptElementSize will not modify the script since
// that is not allowed by the script engine. Also, the script will not be
// modified if pushing the data would cause the script to exceed the maximum
// allowed script engine size.
func (b *ScriptBuilder) AddData(data []byte) *ScriptBuilder {
	if b.err != nil {
		return b
	}

	// Pushes that would cause the script to exceed the largest allowed
	// script size would result in a non-canonical script.
	dataSize := canonicalDataSize(data)
	if len(b.script)+dataSize > maxScriptSize {
		str := fmt.Sprintf("adding %d bytes of data would exceed the "+
			"maximum allowed canonical script length of %d",
			dataSize, maxScriptSize)
		b.err = ErrScriptNotCanonical(str)
		return b
	}

	// Pushes larger than the max script element size would result in a
	// script that is not canonical.
	dataLen := len(data)
	if dataLen > MaxScriptElementSize {
		str := fmt.Sprintf("adding a data element of %d bytes would "+
			"exceed the maximum allowed script element size of %d",
			dataLen, MaxScriptElementSize)
		b.err = ErrScriptNotCanonical(str)
		return b
	}

	return b.addData(data)
}

// AddInt64 pushes the passed integer to the end of the script. The script will
// not be modified if pushing the data would cause the script to exceed the
// maximum allowed script engine size.
func (b *ScriptBuilder) AddInt64(val int64) *ScriptBuilder {
	if b.err != nil {
		return b
	}

	// Pushes that would cause the script to exceed the largest allowed
	// script size would result in a non-canonical script.
	if len(b.script)+1 > maxScriptSize {
		str := fmt.Sprintf("adding an integer would exceed the "+
			"maximum allowed canonical script length of %d",
			maxScriptSize)
		b.err = ErrScriptNotCanonical(str)
		return b
	}

	// Fast path for small integers and OP_1NEGATE.
	if val == 0 {
		b.script = append(b.script, OP_0)
		return b
	}
	if val == -1 || (val >= 1 && val <= 16) {
		b.script = append(b.script, byte((OP_1-1)+val))
		return b
	}

	return b.AddData(scriptNum(val).Bytes())
}

// Reset resets the script so it has no content.
func (b *ScriptBuilder) Reset() *ScriptBuilder {
	b.script = b.script[0:0]
	b.err = nil
	return b
}

// Script returns the currently built script. When any errors occurred while
// building the script, the script will be returned up to the point of the
// first error along with the error.
func (b *ScriptBuilder) Script() ([]byte, error) {
	return b.script, b.err
}

// NewScriptBuilder returns a new instance of a script builder. See
// ScriptBuilder for details.
func NewScriptBuilder() *ScriptBuilder {
	return &ScriptBuilder{
		script: make([]byte, 0, defaultScriptAlloc),
	}
}
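
As a quick usage illustration of the builder above, the following hypothetical snippet builds a small data-carrier script using the exported methods defined in this file; the import path is an assumption, while NewScriptBuilder, AddOp, AddData, AddInt64, Script, and OP_RETURN come from the txscript package itself:

package txscript_test

import (
	"fmt"

	"github.com/decred/dcrd/txscript"
)

// ExampleScriptBuilder_dataPush builds a small OP_RETURN script carrying a
// byte payload and an integer. The builder chooses canonical push opcodes
// and records an error if any push would exceed the allowed script limits.
func ExampleScriptBuilder_dataPush() {
	builder := txscript.NewScriptBuilder()
	builder.AddOp(txscript.OP_RETURN)
	builder.AddData([]byte("hello"))
	builder.AddInt64(1000)
	script, err := builder.Script()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("script: %x\n", script)
}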