Compare commits

...

72 Commits

Author SHA1 Message Date
Omkar Bhat
2f15b50213 Add docker compose file 2019-10-11 10:09:12 -04:00
Dave Collins
ee4a0e2e2a
multi: Use dcrec/edwards/v2 module.
This updates the following modules to use the dcrec/edwards/v2 module:

- chaincfg/v2
- dcrutil/v2
- txscript/v2
2019-10-08 10:43:08 -05:00
Dave Collins
6e647f731f
multi: Use crypto/ripemd160 module.
This updates the main, dcrutil, and blockchain modules to make use of
the new crypto/ripemd160 module.
2019-10-08 10:21:03 -05:00
Dave Collins
eb3fa80a66
docs: Update for secp256k1 v2 module. 2019-10-08 10:14:16 -05:00
Dave Collins
cebab1ef64
multi: Use secp256k1/v2 module.
This updates the following modules to use the secp256k1/v2 module:

- blockchain
- chaincfg/v2
- dcrutil/v2
- hdkeychain/v2
- mempool/v3
- txscript/v2
- main

The hdkeychain/v2 and txscript/v2 modules both use types from secp256k1
in their public API.

Consequently, in order to avoid forcing them to bump their major versions,
secp256k1/v1.0.3 was released with the types redefined in terms of the
secp256k1/v2 module so callers still using v1 of the module that are not
ready to upgrade to the v2 module yet can interoperate by updating to
the latest patch version.
2019-10-08 10:14:13 -05:00
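A minimal sketch of how the interop release described above is typically expressed in Go, using type aliases so v1 and v2 values are interchangeable (illustrative only, not the actual v1.0.3 source):

// Package secp256k1 (v1) redefines its types as aliases of the v2
// module so values produced by v1 callers satisfy APIs that expect
// the v2 types, and vice versa.
package secp256k1

import secp256k1v2 "github.com/decred/dcrd/dcrec/secp256k1/v2"

// PublicKey is an alias of the v2 type rather than a distinct type.
type PublicKey = secp256k1v2.PublicKey

// PrivateKey is likewise an alias of the v2 type.
type PrivateKey = secp256k1v2.PrivateKey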
David Hill
474b7cb168 connmgr: support resolving ipv6 hosts over Tor 2019-10-08 09:53:16 -05:00
David Hill
eb5bb6d419 crypto: import ripemd160 2019-10-07 19:54:15 -05:00
Dave Collins
8e258be2f3
gcs: Improve package documentation. 2019-10-07 18:40:08 -05:00
Dave Collins
abe43b0928
secp256k1: Prepare v2.0.0.
This updates the dcrec/secp256k1 module dependencies and serves as a
base for dcrec/secp256k1/v2.0.0.

The updated direct dependencies in this commit are as follows:

- github.com/decred/dcrd/chaincfg/chainhash@v1.0.2

The full list of updated direct dependencies since the previous
dcrec/secp256k1/v1.0.2 release are as follows:

- github.com/decred/dcrd/chaincfg/chainhash@v1.0.2
2019-10-07 15:24:46 -05:00
Dave Collins
e5647c02b7
gcs: Prevent empty data elements in v2 filters.
This ensures any empty/nil data elements in the input array to version 2
filter construction are not added to the filter since empty elements do
not make sense given how the filters are used.  It also updates the
tests to ensure proper behavior.

The single match function already failed attempts to match an empty
element as intended, however, prior to this change, it was possible to
match an empty item in the multi-item matching path.  This is not
desirable since the matching behavior must be consistent in both the
single and multi-match cases.
2019-10-07 11:17:41 -05:00
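A sketch of the construction-time check described above, which drops empty/nil elements before they ever reach the filter (the function name is illustrative, not the gcs package API):

// filterNonEmpty returns the data set with nil/empty elements removed.
func filterNonEmpty(data [][]byte) [][]byte {
	out := data[:0] // filter in place, reusing the backing array
	for _, d := range data {
		if len(d) == 0 {
			continue
		}
		out = append(out, d)
	}
	return out
}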
Donald Adu-Poku
b8dff59d4c dcrec: fix examples links. 2019-10-07 09:58:39 -05:00
David Hill
93dc592615 go.mod: sync 2019-10-07 09:13:58 -05:00
David Hill
e4a6c19fba rpctest: use errgroup to catch errors from go routines 2019-10-07 09:13:58 -05:00
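For reference, a minimal sketch of the errgroup pattern the commit adopts: each goroutine returns an error instead of discarding it, and Wait surfaces the first failure (names here are illustrative, not the rpctest code):

package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	var g errgroup.Group
	for i := 0; i < 3; i++ {
		i := i // capture the loop variable per goroutine
		g.Go(func() error {
			if i == 2 {
				return fmt.Errorf("worker %d failed", i)
			}
			return nil
		})
	}
	// Wait blocks until all goroutines finish and reports the first
	// non-nil error returned by any of them.
	if err := g.Wait(); err != nil {
		fmt.Println("error:", err)
	}
}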
David Hill
9465b06cc5 rpctest: remove always-nil error 2019-10-07 09:13:58 -05:00
David Hill
29064448cb rpcclient: close the unused response body 2019-09-21 13:09:21 -05:00
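The underlying Go idiom, sketched for context: an *http.Response body must be drained and closed even when unused, or the connection cannot be reused and resources leak (illustrative, not the rpcclient code):

import (
	"io"
	"net/http"
)

// drainAndClose discards and closes an HTTP response body so the
// underlying connection can be returned to the pool.
func drainAndClose(resp *http.Response) {
	io.Copy(io.Discard, resp.Body)
	resp.Body.Close()
}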
Dave Collins
3e2208f8c1
build: Replace TravisCI with CI via Github actions. 2019-09-20 19:59:04 -05:00
Donald Adu-Poku
9d89be74da cpuminer: fix race. 2019-09-20 19:51:30 -05:00
Dave Collins
4fcf24f02c
build: Setup github actions for CI. 2019-09-20 12:40:49 -05:00
Matheus Degiovani
450a680097 mempool: Add ErrorCode to returned TxRuleErrors
This adds the ErrorCode member to TxRuleError, filling it with
appropriate values throughout the mempool package. This allows clients
of the package to correctly identify error causes with a greater
granularity and respond appropriately.

It also deprecates the RejectCode attribute and ErrToRejectError
functions, to be removed in the next major version update of the
package.

All call sites that inspect mempool errors were updated to use the new
error codes instead of RejectCodes.  Additional mempool tests
were added to ensure the correct behavior in some relevant cases.

Finally, given the introduction and use of a new public field, the main
module was updated to use an as-of-yet unfinished mempool v3.1.0, which
will include the required functionality.
2019-09-18 14:27:20 -05:00
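A sketch of the error-code pattern this commit introduces; the concrete codes and names below are illustrative stand-ins rather than the real mempool values:

// ErrorCode identifies a kind of transaction rule violation.
type ErrorCode int

const (
	ErrDuplicate ErrorCode = iota // illustrative codes only
	ErrOrphan
)

// TxRuleError pairs a machine-readable code with a human-readable
// description so callers no longer need to parse error strings.
type TxRuleError struct {
	ErrorCode   ErrorCode
	Description string
}

func (e TxRuleError) Error() string { return e.Description }

// Callers branch on the cause with errors.As and a code comparison:
//
//	var rerr TxRuleError
//	if errors.As(err, &rerr) && rerr.ErrorCode == ErrOrphan {
//		// handle orphan transactions specifically
//	}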
Donald Adu-Poku
13ee7e50f1 docs: document module breaking changes process.
This adds a section outlining the process to follow
going forward with regard to making breaking changes
to modules.
2019-09-11 11:28:59 -05:00
David Hill
65a467e0cf mining: fix data race
use a local prng variable
2019-09-11 10:35:24 -05:00
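Sketched for context, the shape of the fix: math/rand.Rand is not safe for concurrent use, so each goroutine gets its own locally seeded instance instead of sharing one (illustrative, not the mining package code):

import "math/rand"

// solve uses a PRNG local to the calling goroutine, eliminating the
// data race that sharing a single rand.Rand across workers causes.
func solve(seed int64) uint32 {
	prng := rand.New(rand.NewSource(seed))
	return prng.Uint32()
}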
David Hill
5604ecd689 travis: bump golangci-lint to v1.18.0 2019-09-11 10:12:04 -05:00
David Hill
78fd2b33a4 mining: fix data race
Use the read template.
2019-09-11 10:11:36 -05:00
Donald Adu-Poku
a05c85008d rpcserver: update rpcAskWallet rpc set.
This updates the rpc ask wallet set with missing entries.
The set has been ordered alphabetically and some
entries have been removed because they are yet to be
implemented by the wallet.
2019-09-11 09:48:11 -05:00
Dave Collins
2b0f3ceeab
mining: Remove unused error codes. 2019-09-09 13:28:44 -05:00
Dave Collins
cffc300c6a
mining: Minor cleanup of aggressive mining path.
This combines the two conditions for the aggressive mining path into a
single condition and does a bit of light cleanup to remove the template
copies that are no longer necessary due to the removal of the old style
template caching.
2019-09-09 12:19:53 -05:00
Dave Collins
280ccb732c
mining: Remove unused extra nonce update code.
This removes the UpdateExtraNonce function which updated an extra nonce
in the coinbase transaction and recalculated the merkle root since it is
not necessary and wasteful for Decred due to the extra nonce being
available in the block header.

Further, due to the aforementioned and the fact the template doesn't
have a height set, it isn't actually being called currently anyway, as
can be seen by diffing the decoded output of subsequent getwork calls
and noting the only thing that is being updated in between full
regeneration of new templates is the timestamp, as expected.

$ diff -uNp work1.txt work2.txt
--- work1.txt   2019-09-07 08:18:58.410917100 -0500
+++ work2.txt   2019-09-07 08:19:01.216456300 -0500
@@ -16,7 +16,7 @@
     "sbits": 0.00021026,
     "height": 98,
     "size": 7221,
-    "time": 1567862338,
+    "time": 1567862341,
     "nonce": 0,
     "extradata": "0000000000000000000000000000000000000000000000000000000000000000",
     "stakeversion": 0,
2019-09-09 11:57:29 -05:00
Dave Collins
48c0585e6b
mining: Remove dead code.
This removes all of the code related to setting and updating cached
templates in the block manager since they are no longer used.

It is easy to see this is the case by considering that the only places
that set cachedCurrentTemplate and cachedParentTemplate set them to nil.
2019-09-09 11:51:04 -05:00
Donald Adu-Poku
45888b8bcd rpcserver: don't use activeNetParams.
This modifies all of the RPC code to use the chain
parameters that are associated with the RPC server
instead of the global activeNetParams and thus
moves one step closer to being able to split the
RPC server out into a separate package.
2019-09-09 09:43:37 -05:00
Jamie Holdstock
2f0c4ecada docs: Link to btc whitepaper on decred.org
We already host the whitepaper on decred.org, so no need to link to
bitcoin.org.
2019-09-08 07:52:56 -05:00
Josh Rickmar
9eed83b813 rpcserver: Match tx filter on ticket commitments
When a transaction is checked for relevance to a websocket client with
a loaded transaction filter, a call to ExtractPkScriptAddrs is not
enough.  Commitments in tickets are encoded in an OP_RETURN output
which require an additional parse of the script to check for a
committed P2PKH or P2SH HASH160.
2019-09-05 22:30:40 -05:00
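A hedged sketch of the extra parse described above, assuming the commitment output layout is OP_RETURN followed by a single 30-byte push whose first 20 bytes are the committed HASH160 (the layout and function are illustrative, not the rpcserver code):

// commitmentHash160 returns the committed P2PKH/P2SH HASH160 from a
// ticket commitment output script, or false when the script does not
// have the assumed shape.
func commitmentHash160(pkScript []byte) ([]byte, bool) {
	const opReturn, opData30 = 0x6a, 0x1e
	if len(pkScript) != 32 || pkScript[0] != opReturn || pkScript[1] != opData30 {
		return nil, false
	}
	return pkScript[2:22], true // first 20 bytes of the push
}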
David Hill
0781162661 multi: remove unused funcs and vars 2019-09-05 10:13:18 -05:00
Donald Adu-Poku
1e7fe1fe31 multi: use chain ref. from blockmanager config.
This removes the chain field from the block
manager in favour of the chain field in the
block manager config.
2019-09-05 10:13:01 -05:00
Donald Adu-Poku
1bae334dd9 multi: remove getblocktemplate.
This removes the getblocktemplate RPC and its helpers from the codebase.
Ongoing mining updates focused on the voting/block validation process
with respect to generating block templates make getwork the better
option for Decred.  Also, the getblocktemplate RPC was buggy and has
been disabled for a while.

Some lint-related issues have been addressed as well.
Dave Collins
38203c8f4d
travis: Test go1.13 and drop go1.11. 2019-09-03 15:08:35 -05:00
Dave Collins
fefb993956
peer: Ensure listener tests sync with messages.
This adds an additional read from the ok channel in the peer listener
tests to ensure the version message is consumed as well as the verack so
that the remaining tests line up with the messages that are being tested.
2019-09-03 12:24:38 -05:00
Dave Collins
2d09391768
blockchain: Cleanup subsidy cache init order.
This modifies the order in which the subsidy cache is created when not
provided by a caller to happen before the blockchain instance is created
to be more consistent.
2019-09-03 11:38:45 -05:00
Donald Adu-Poku
a80134fa29 multi: make rebroadcast winners & missed ws only. 2019-09-03 11:36:44 -05:00
Donald Adu-Poku
baac7efb44 multi: update limited user rpcs. 2019-09-03 10:41:52 -05:00
Dave Collins
2c3a4e3054
gcs: Implement version 2 filters.
This implements new version 2 filters which have 4 changes as compared
to version 1 filters:

- Support for independently specifying the false positive rate and
  Golomb coding bin size which allows minimizing the filter size
- A faster (incompatible with version 1) reduction function
- A more compact serialization for the number of members in the set
- Deduplication of all hash collisions prior to reducing and serializing
  the deltas

In addition, it adds a full set of tests and updates the benchmarks to
use the new version 2 filters.

The primary motivating factor for these changes is the ability to
minimize the size of the filters, however, the following is a before and
after comparison of version 1 and 2 filters in terms of performance and
allocations.

It is interesting to note the results for attempting to match a single
item are not very representative due to the fact the actual hash value
itself dominates to the point it can significantly vary due to the very
low ns timings involved.  Those differences average out when matching
multiple items, which is the much more realistic scenario, and the
performance increase is in line with the expected values.  It is also
worth noting that filter construction now takes a bit longer due to the
additional deduplication step.  While the performance numbers for filter
construction are about 25% larger in relative terms, it is only a few ms
difference in practice and therefore is an acceptable trade off for the
size savings provided.

benchmark                      old ns/op    new ns/op    delta
-----------------------------------------------------------------
BenchmarkFilterBuild50000      16194920     20279043     +25.22%
BenchmarkFilterBuild100000     32609930     41629998     +27.66%
BenchmarkFilterMatch           620          593          -4.35%
BenchmarkFilterMatchAny        2687         2302         -14.33%

benchmark                      old allocs   new allocs   delta
-----------------------------------------------------------------
BenchmarkFilterBuild50000      6            17           +183.33%
BenchmarkFilterBuild100000     6            18           +200.00%
BenchmarkFilterMatch           0            0            +0.00%
BenchmarkFilterMatchAny        0            0            +0.00%

benchmark                      old bytes    new bytes    delta
-----------------------------------------------------------------
BenchmarkFilterBuild50000      688366       2074653      +201.39%
BenchmarkFilterBuild100000     1360064      4132627      +203.86%
BenchmarkFilterMatch           0            0            +0.00%
BenchmarkFilterMatchAny        0            0            +0.00%
2019-09-03 10:30:31 -05:00
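Of the four changes, the deduplication step can be sketched compactly: sort the reduced hash values and drop consecutive duplicates before delta encoding (illustrative only; the gcs package has its own internals):

import "sort"

// sortAndDedup sorts values and removes consecutive duplicates in
// place so every delta between neighbors is strictly positive.
func sortAndDedup(values []uint64) []uint64 {
	sort.Slice(values, func(i, j int) bool { return values[i] < values[j] })
	out := values[:0]
	for i, v := range values {
		if i > 0 && v == out[len(out)-1] {
			continue
		}
		out = append(out, v)
	}
	return out
}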
Donald Adu-Poku
b8864c39dc multi: update rpc documentation.
This adds missing documentation for rpcs
and orders them alphabetically.
2019-09-02 11:01:38 -05:00
Dave Collins
eab0e4c2a7
blockchain: Refactor best chain state init.
This refactors the best chain state and block index loading code into
separate functions so they are available to upcoming database update
code to build version 2 gcs filters.
2019-09-02 01:24:23 -05:00
Aaron Campbell
adff6a0bac cpuminer: Fix off-by-one issues in nonce handling.
During 32-bit nonce iteration, if a block solution wasn't found, the
iterator variable would overflow back to 0, creating an infinite loop,
thus continuing the puzzle search without ever updating the extra
nonce field.  This bug has never been triggered in practice because
the code in question has only ever been used with difficulties where
a solution exists within the regular nonce space.

The extra nonce iteration logic itself was also imperfect in that it
wouldn't test a value of exactly 2^64 - 1.

The behavior we actually want is to loop through the entire unsigned
integer space for both the regular and extra nonces, and for this
process to continue forever until a solution is found.  Note that
periodic updates to the block header timestamp during iteration ensure
that unique hashes are generated for subsequent generations of the
same nonce values.
2019-08-26 11:09:05 -05:00
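Sketched for context, the loop shape that covers the entire uint32 nonce space without the wrap-to-zero infinite loop, including testing the maximum value itself (illustrative of the fix's intent, not the cpuminer code):

import "math"

// scanNonces tries every uint32 nonce exactly once, including
// math.MaxUint32, then returns so the caller can bump the extra nonce.
func scanNonces(solves func(uint32) bool) (uint32, bool) {
	for nonce := uint32(0); ; nonce++ {
		if solves(nonce) {
			return nonce, true
		}
		if nonce == math.MaxUint32 {
			break // tested 2^32 - 1; stop before the increment wraps
		}
	}
	return 0, false
}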
Dave Collins
cb79063ef8
gcs: Add tests for bit reader/writer. 2019-08-22 11:37:53 -05:00
Dave Collins
952bd7bba3
gcs: Support independent fp rate and bin size.
This modifies the code to support an independent false positive rate and
Golomb coding bin size.  Among other things, this permits more optimal
parameters for minimizing the filter size to be specified.

This capability will be used in the upcoming version 2 filters that will
ultimately be included in header commitments.

For a concrete example, the current version 1 filter for block 89341 on
mainnet contains 2470 items resulting in a full serialized size of 6,669
bytes.  In contrast, if the optimal parameters were specified as
described by the comments in this commit, with no other changes to the
items included in the filter, that same filter would be 6,505 bytes,
which is a size reduction of about 2.46%.  This might not seem like a
significant amount, but consider that there is a filter for every block,
so it really adds up.

Since the internal filter no longer directly has a P parameter, this
moves the method to obtain it to the FilterV1 type and adds a new test
to ensure it is returned properly.

Additionally, all of the tests are converted to use the parameters while
retaining the same effective parameters to help prove correctness of the
new code.

Finally, it also significantly reduces the number of allocations
required to construct a filter resulting in faster filter construction
and reduced pressure on the GC and does some other minor consistency
cleanup while here.

In terms of the reduction in allocations, the following is a before and
after comparison of building filters with 50k and 100k elements:

benchmark                    old ns/op    new ns/op     delta
--------------------------------------------------------------
BenchmarkFilterBuild50000    18095111     15680001     -13.35%
BenchmarkFilterBuild100000   31980156     31389892     -1.85%

benchmark                    old allocs   new allocs   delta
--------------------------------------------------------------
BenchmarkFilterBuild50000    31           6            -80.65%
BenchmarkFilterBuild100000   34           6            -82.35%

benchmark                    old bytes    new bytes    delta
--------------------------------------------------------------
BenchmarkFilterBuild50000    1202343      688271       -42.76%
BenchmarkFilterBuild100000   2488472      1360000      -45.35%
2019-08-22 10:33:25 -05:00
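The independent bin size can be sketched with the core Golomb-Rice step: each sorted delta is split into a unary-coded quotient and a fixed-width remainder of B bits, so B can be tuned separately from the false positive rate (the bit-writer callbacks stand in for the package's internal writer; illustrative only):

// golombEncode writes one delta as a unary quotient (q ones and a
// terminating zero) followed by the low B bits of the remainder.
func golombEncode(writeBit func(bit uint8), writeBits func(v uint64, n uint), delta uint64, b uint) {
	for q := delta >> b; q > 0; q-- {
		writeBit(1)
	}
	writeBit(0)
	writeBits(delta&((1<<b)-1), b)
}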
Dave Collins
3305fcb3fa
gcs: Group V1 filter funcs after filter defs.
This simply rearranges the funcs so they are more logically grouped in
order to provide cleaner diffs for upcoming changes.  There are no
functional changes.
2019-08-22 10:31:47 -05:00
Dave Collins
b67fb74fbc
gcs: Optimize Hash.
This optimizes the Hash method of gcs filters by making use of the new
zero-alloc hashing funcs available in crypto/blake256.

The following is a before and after comparison:

benchmark       old ns/op   new ns/op    delta
-------------------------------------------------
BenchmarkHash   1786        1315         -26.37%

benchmark       old allocs  new allocs   delta
-------------------------------------------------
BenchmarkHash   2           0            -100.00%

benchmark       old bytes   new bytes    delta
-------------------------------------------------
BenchmarkHash   176         0            -100.00%
2019-08-22 10:22:08 -05:00
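A before/after sketch of the zero-alloc idiom, assuming the crypto/blake256 module exposes New and Sum256 helpers with these shapes:

import "github.com/decred/dcrd/crypto/blake256"

// hashOld allocates: hash.Hash.Sum(nil) returns a fresh heap slice.
func hashOld(data []byte) []byte {
	h := blake256.New()
	h.Write(data)
	return h.Sum(nil)
}

// hashNew avoids the allocations by returning a fixed-size array
// that can live on the caller's stack.
func hashNew(data []byte) [32]byte {
	return blake256.Sum256(data)
}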
Dave Collins
6b9f78e58e
gcs: Add benchmark for filter hashing. 2019-08-22 10:22:04 -05:00
Aaron Campbell
8be96a8729 multi: Correct typos.
Correct typos found by reading code, ispell, and creative grepping.
2019-08-22 10:20:03 -05:00
Dave Collins
665ab37c68
gcs: Standardize serialization on a single format.
Currently, the filters provide two different serialization formats per
version.  The first is the raw filter bytes without the number of items
in its data set and is implemented by the Bytes and FromBytesV1
functions.  The second includes that information and is implemented by
the NBytes and FromNBytesV1 functions.

In practice, the ability to serialize the filter independently from the
number of items in its data set is not very useful since that
information is required to be able to query the filter and, unlike the
other parameters which are fixed (e.g. false positive rate and key), the
number of items varies per filter.  For this reason, all usage in
practice calls NBytes and FromNBytesV1.

Consequently, this simplifies the API for working with filters by
standardizing on a single serialization format per filter version which
includes the number of items in its data set.

In order to accomplish this, the current Bytes and FromBytesV1 functions
are removed and the NBytes and FromNBytesV1 functions are renamed to
take their place.

This also updates all tests and callers in the repo accordingly.
2019-08-22 09:50:29 -05:00
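The combined format can be sketched generically: prepend the member count to the raw filter bytes so the serialization is self-contained (the varint framing here is illustrative, not the gcs wire format):

import "encoding/binary"

// serializeFilter prepends the number of members n to the raw filter
// data so a reader has everything required to query the filter.
func serializeFilter(n uint64, filterData []byte) []byte {
	out := make([]byte, binary.MaxVarintLen64+len(filterData))
	sz := binary.PutUvarint(out, n)
	copy(out[sz:], filterData)
	return out[:sz+len(filterData)]
}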
Aaron Campbell
8371deb906 txscript: Fix duplicate test name. 2019-08-22 09:02:56 -05:00
Aaron Campbell
d6be7cb8bb rpctest: Remove leftover debug print.
A proper error message is logged on the very next line, so there's
no reason to keep this extra Println call.
2019-08-22 08:33:22 -05:00
Aaron Campbell
4891191de3 rpcserver: Better error message.
There's no such thing as an "SHD" script, so fix the error message
to correctly reference a pay-to-script-hash script instead.
2019-08-22 08:32:34 -05:00
Aaron Campbell
9a368f5edc mining: Remove unused error return value. 2019-08-22 08:31:52 -05:00
Aaron Campbell
68151e588c miningerror: Remove duplicate copyright. 2019-08-22 08:31:24 -05:00
Dave Collins
1d6445be98
gcs: Correct zero hash filter matches.
This ensures filters properly match search items which happen to hash to
zero and adds a test for the condition.  While here, it also rewrites
the MatchAny function to make it easier to reason about.

This was discovered by the new tests which intentionally added tests
with a high false positive rate and random keys.
2019-08-21 11:17:58 -05:00
Dave Collins
90d2deb420
gcs: Add filter version support.
This refactors the primary gcs filter logic into an internal struct with
a version parameter in order to pave the way for supporting v2
filters which will have a different serialization that makes them
incompatible with v1 filters while still retaining the ability to work
with v1 filters in the interim.

The exported type is renamed to FilterV1 and the new internal struct is
embedded so its methods are externally available.

The tests and all callers in the repo have been updated accordingly.
2019-08-20 23:43:23 -05:00
David Hill
150b54aa0f connmgr: add TorLookupIPContext, deprecate TorLookupIP 2019-08-20 17:06:11 -04:00
Dave Collins
8aa97edada
gcs: Make error consistent with rest of codebase.
This updates the error handling in the gcs package to be consistent with
the rest of the code base to provide a proper error type and error codes
that can be programmatically detected.

This is part of the ongoing process to cleanup and improve the gcs
module to the quality level required by consensus code for ultimate
inclusion in header commitments.
2019-08-20 09:41:07 -05:00
Dave Collins
2a8856d026
gcs: Overhaul tests and benchmarks.
This rewrites the tests to make them more consistent with the rest of
the code base and significantly increases their coverage of the code.

It also reworks the benchmarks to actually benchmark what their names
claim, renames them for consistency, and makes them more stable by
ensuring the same prng seed is used each run to eliminate variance
introduced by different values.

Finally, it removes an impossible-to-hit condition from the bit reader
and adds a couple of additional checks to harden the filters against
potential misuse.

This is part of the ongoing process to cleanup and improve the gcs
module to the quality level required by consensus code for ultimate
inclusion in header commitments.
2019-08-20 09:36:10 -05:00
Dave Collins
468f3287c2
gcs: Support empty filters.
This adds support for empty filters, which were previously treated as
an error, along with a full set of tests to ensure the empty filter
works as intended.

It is part of the ongoing process to cleanup and improve the gcs module
to the quality level required by consensus code for ultimate inclusion
in header commitments.
2019-08-20 09:13:32 -05:00
Dave Collins
feb4ff55e0
gcs: Start v2 module dev cycle.
This removes the unused and undesired FromPBytes and FromNPBytes
functions and associated tests from the gcs module in preparation for
upcoming changes aimed to support new version filters for use
in header commitments.

Since these changes, and several planned upcoming ones, constitute
breaking public API changes, this bumps the major version of the gcs
module, adds a replacement for gcs/v2 to the main module and updates all
other modules to make use of it.

It also bumps the rpcclient module to v5 since it makes use of the
gcs.Filter type in its API, adds a replacement for rpcclient/v5 to the
main module and updates all other modules to make use of it.

Note that this also marks the start of a new approach towards handling
module versioning between release cycles to reduce the maintenance
burden.

The new approach is as follows.

Whenever a new breaking change to a module's API is introduced, the
following will happen:

- Bump the major version in the go.mod of the affected module if not
  already done since the last release tag
- Add a replacement to the go.mod in the main module if not already
  done since the last release tag
- Update all imports in the repo to use the new major version as
  necessary
  - Make necessary modifications to allow all other modules to use the
    new version in the same commit
- Repeat the process for any other modules that require a new major as a
  result of consuming the new major(s)

Finally, once the repo is frozen for software release, all modules will
be tagged in dependency order to stabilize them and all module
replacements will be removed in order to ensure releases are only using
fully tagged and released code.
2019-08-20 09:07:07 -05:00
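In go.mod terms, the interim state described above looks roughly like this hypothetical snippet for the main module (the versions shown are placeholders):

module github.com/decred/dcrd

require (
	github.com/decred/dcrd/gcs/v2 v2.0.0
	github.com/decred/dcrd/rpcclient/v5 v5.0.0
)

// Replacements point at the in-repo code until release tags exist;
// they are removed when the repo is frozen and modules are tagged.
replace (
	github.com/decred/dcrd/gcs/v2 => ./gcs
	github.com/decred/dcrd/rpcclient/v5 => ./rpcclient
)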
Aaron Campbell
8497b9843b wire: Fix a few messageError string typos. 2019-08-17 01:11:40 -05:00
Aaron Campbell
03678bb754 multi: Correct typos.
Correct typos found by reading code and creative grepping.
2019-08-16 17:37:58 -05:00
Donald Adu-Poku
b69302960f multi: add getnetworkinfo rpc. 2019-08-14 15:21:01 -05:00
Aaron Campbell
b9b863f5a7 blockchain: Implement stricter bounds checking.
This implements stricter bounds checking during transaction spend
journal decoding.
2019-08-14 06:57:26 -05:00
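The decoding convention behind the stricter checks can be sketched with the VLQ reader pattern seen in the diffs below: a bytesRead of zero signals truncated input, and every caller checks it before advancing (a sketch of the MSB-base-128 variant; not necessarily byte-for-byte the dcrd implementation):

// deserializeVLQ decodes a variable-length quantity and reports how
// many bytes were consumed.  A zero size means no input was available,
// which callers translate into an "unexpected end of data" error.
func deserializeVLQ(serialized []byte) (uint64, int) {
	var n uint64
	var size int
	for _, val := range serialized {
		size++
		n = (n << 7) | uint64(val&0x7f)
		if val&0x80 != 0x80 {
			break
		}
		n++
	}
	return n, size
}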
Donald Adu-Poku
77a14a9ead multi: add automatic network address discovery.
This discovers the network address(es) of the daemon
through connected outbound peers. The address(es)
discovered are advertised to subsequent connecting peers.
2019-08-14 06:53:10 -05:00
Dave Collins
25c14e046a
main: Update to use all new major module versions.
This updates all code in the main module to use the latest major module
versions to pull in the latest updates.

A more general high level overview of the changes is provided below,
however, there is one semantic change worth calling out independently.

The verifymessage RPC will now return an error when provided with
an address that is not for the current active network and the RPC server
version has been bumped accordingly.

Previously, it would return false which indicated the signature is
invalid, even when the provided signature was actually valid for the
other network.  Said behavior was not really incorrect since the
address, signature, and message combination is in fact invalid for the
current active network, however, that result could be somewhat
misleading since a false result could easily be interpreted to mean the
signature is actually invalid altogether which is distinct from the case
of the address being for a different network.  Therefore, it is
preferable to explicitly return an error in the case of an address on
the wrong network to cleanly separate these cases.

The following is a high level overview of the changes:

- Replace all calls to removed blockchain merkle root, pow, subsidy, and
  coinbase funcs with their standalone module equivalents
  - Introduce a new local func named calcTxTreeMerkleRoot that accepts
    dcrutil.Tx as before and defers to the new standalone func
- Update block locator handling to match the new signature required by
  the peer/v2 module
  - Introduce a new local func named chainBlockLocatorToHashes which
    performs the necessary conversion
- Update all references to old v1 chaincfg params global instances to
  use the new v2 functions
- Modify all cases that parse addresses to provide the now required
  current network params
  - Include address params with the wsClientFilter
- Replace removed v1 chaincfg constants with local constants
- Create subsidy cache during server init and pass it to the relevant
  subsystems
  - blockManagerConfig
  - BlkTmplGenerator
  - rpcServer
  - VotingWallet
- Update mining code that creates the block one coinbase transaction to
  create the output scripts as defined in the v2 params
- Replace old v2 dcrjson constant references with new types module
- Fix various comment typos
- Update fees module to use the latest major module versions and bump it to v2
2019-08-13 11:22:37 -05:00
Dave Collins
e54dde10e9
docs: Update for mempool v3 module. 2019-08-12 19:54:31 -05:00
Dave Collins
f1ed8d61ad
release: Introduce mempool v3 module. 2019-08-12 19:54:03 -05:00
Dave Collins
4106f792c2
mempool: Use latest major version deps.
This updates the mempool module to use the latest major module versions
as well as the new blockchain/standalone module.

The updated direct dependencies are as follows:

- github.com/decred/dcrd/blockchain/stake/v2@v2.0.1
- github.com/decred/dcrd/blockchain/standalone@v1.0.0
- github.com/decred/dcrd/blockchain/v2@v2.0.2
- github.com/decred/dcrd/chaincfg/v2@v2.2.0
- github.com/decred/dcrd/dcrutil/v2@v2.0.0
- github.com/decred/dcrd/mining/v2@v2.0.0
- github.com/decred/dcrd/txscript/v2@v2.0.0
2019-08-12 18:10:47 -05:00
Dave Collins
fe8ed953bc
release: Freeze version 2 mempool module use.
This freezes the root module usage of v2 of the mempool module by
removing the replacement and bumping the required version.  This means
building the software will still produce binaries based on the v2 module
until the v3 module is fully released.

All future releases will be moving to version 3 of the module.

Consequently, it bumps the required module versions as follows:

- github.com/decred/dcrd/mempool/v2 v2.1.0
2019-08-12 18:00:40 -05:00
290 changed files with 5998 additions and 4676 deletions

.github/workflows/go.yml
View File

@@ -0,0 +1,28 @@
name: Build and Test
on: [push, pull_request]
jobs:
build:
name: Go CI
runs-on: ubuntu-latest
strategy:
matrix:
go: [1.12, 1.13]
steps:
- name: Set up Go
uses: actions/setup-go@v1
with:
go-version: ${{ matrix.go }}
- name: Check out source
uses: actions/checkout@v1
- name: Install Linters
run: "curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(go env GOPATH)/bin v1.18.0"
- name: Build
env:
GO111MODULE: "on"
run: go build ./...
- name: Test
env:
GO111MODULE: "on"
run: |
export PATH=${PATH}:$(go env GOPATH)/bin
sh ./run_tests.sh

View File

@@ -1,35 +0,0 @@
language: go
sudo: false
env:
- GO111MODULE=on
matrix:
include:
- os: linux
go: 1.12.x
cache:
directories:
- $HOME/.cache/go-build
- $HOME/go/pkg/mod
- os: linux
go: 1.11.x
cache:
directories:
- $HOME/.cache/go-build
- $HOME/go/pkg/mod
- os: osx
go: 1.12.x
cache:
directories:
- $HOME/.cache/go-build
- $HOME/go/pkg/mod
- os: osx
go: 1.11.x
cache:
directories:
- $HOME/Library/Caches/go-build
- $HOME/go/pkg/mod
install:
- curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(go env GOPATH)/bin v1.17.1
script:
- env GO111MODULE=on go build ./...
- env GO111MODULE=on ./run_tests.sh

View File

@@ -1,7 +1,7 @@
dcrd
====
[![Build Status](https://travis-ci.org/decred/dcrd.png?branch=master)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd)
[![Go Report Card](https://goreportcard.com/badge/github.com/decred/dcrd)](https://goreportcard.com/report/github.com/decred/dcrd)

View File

@@ -74,6 +74,13 @@ type localAddress struct {
score AddressPriority
}
// LocalAddr represents network address information for a local address.
type LocalAddr struct {
Address string
Port uint16
Score int32
}
// AddressPriority type is used to describe the hierarchy of local address
// discovery methods.
type AddressPriority int
@@ -133,7 +140,7 @@ const (
newBucketsPerAddress = 8
// numMissingDays is the number of days before which we assume an
// address has vanished if we have not seen it announced in that long.
// address has vanished if we have not seen it announced in that long.
numMissingDays = 30
// numRetries is the number of tried without a single success before
@@ -375,7 +382,7 @@ func (a *AddrManager) savePeers() {
return
}
// First we make a serialisable datastructure so we can encode it to JSON.
// First we make a serialisable data structure so we can encode it to JSON.
sam := new(serializedAddrManager)
sam.Version = serialisationVersion
copy(sam.Key[:], a.key[:])
@@ -536,7 +543,7 @@ func (a *AddrManager) deserializePeers(filePath string) error {
if v.refs > 0 && v.tried {
return fmt.Errorf("address %s after serialisation "+
"which is both new and tried!", k)
"which is both new and tried", k)
}
}
@@ -746,7 +753,7 @@ func (a *AddrManager) HostToNetAddress(host string, port uint16, services wire.S
// the relevant .onion address.
func ipString(na *wire.NetAddress) string {
if isOnionCatTor(na) {
// We know now that na.IP is long enogh.
// We know now that na.IP is long enough.
base32 := base32.StdEncoding.EncodeToString(na.IP[6:])
return strings.ToLower(base32) + ".onion"
}
@@ -895,7 +902,7 @@ func (a *AddrManager) Good(addr *wire.NetAddress) {
ka.lastattempt = now
ka.attempts = 0
// move to tried set, optionally evicting other addresses if neeed.
// move to tried set, optionally evicting other addresses if needed.
if ka.tried {
return
}
@@ -967,7 +974,7 @@ func (a *AddrManager) Good(addr *wire.NetAddress) {
a.addrNew[newBucket][rmkey] = rmka
}
// SetServices sets the services for the giiven address to the provided value.
// SetServices sets the services for the given address to the provided value.
func (a *AddrManager) SetServices(addr *wire.NetAddress, services wire.ServiceFlag) {
a.mtx.Lock()
defer a.mtx.Unlock()
@@ -1013,19 +1020,63 @@ func (a *AddrManager) AddLocalAddress(na *wire.NetAddress, priority AddressPrior
return nil
}
// HasLocalAddress asserts if the manager has the provided local address.
func (a *AddrManager) HasLocalAddress(na *wire.NetAddress) bool {
key := NetAddressKey(na)
a.lamtx.Lock()
_, ok := a.localAddresses[key]
a.lamtx.Unlock()
return ok
}
// FetchLocalAddresses fetches a summary of local addresses information for
// the getnetworkinfo rpc.
func (a *AddrManager) FetchLocalAddresses() []LocalAddr {
a.lamtx.Lock()
defer a.lamtx.Unlock()
addrs := make([]LocalAddr, 0, len(a.localAddresses))
for _, addr := range a.localAddresses {
la := LocalAddr{
Address: addr.na.IP.String(),
Port: addr.na.Port,
}
addrs = append(addrs, la)
}
return addrs
}
const (
// Unreachable represents a publicly unreachable connection state
// between two addresses.
Unreachable = 0
// Default represents the default connection state between
// two addresses.
Default = iota
// Teredo represents a connection state between two RFC4380 addresses.
Teredo
// Ipv6Weak represents a weak IPV6 connection state between two
// addresses.
Ipv6Weak
// Ipv4 represents an IPV4 connection state between two addresses.
Ipv4
// Ipv6Strong represents a connection state between two IPV6 addresses.
Ipv6Strong
// Private represents a connection state connect between two Tor addresses.
Private
)
// getReachabilityFrom returns the relative reachability of the provided local
// address to the provided remote address.
func getReachabilityFrom(localAddr, remoteAddr *wire.NetAddress) int {
const (
Unreachable = 0
Default = iota
Teredo
Ipv6Weak
Ipv4
Ipv6Strong
Private
)
if !IsRoutable(remoteAddr) {
return Unreachable
}
@@ -1130,6 +1181,15 @@ func (a *AddrManager) GetBestLocalAddress(remoteAddr *wire.NetAddress) *wire.Net
return bestAddress
}
// IsPeerNaValid asserts if the provided local address is routable
// and reachable from the peer that suggested it.
func (a *AddrManager) IsPeerNaValid(localAddr, remoteAddr *wire.NetAddress) bool {
net := getNetwork(localAddr)
reach := getReachabilityFrom(localAddr, remoteAddr)
return (net == IPv4Address && reach == Ipv4) || (net == IPv6Address &&
(reach == Ipv6Weak || reach == Ipv6Strong || reach == Teredo))
}
// New returns a new Decred address manager.
// Use Start to begin processing asynchronous address updates.
// The address manager uses lookupFunc for necessary DNS lookups.

View File

@@ -21,7 +21,7 @@ var (
ipNet("192.168.0.0", 16, 32),
}
// rfc2544Net specifies the the IPv4 block as defined by RFC2544
// rfc2544Net specifies the IPv4 block as defined by RFC2544
// (198.18.0.0/15)
rfc2544Net = ipNet("198.18.0.0", 15, 32)
@@ -78,7 +78,7 @@ var (
// byte number. It then stores the first 6 bytes of the address as
// 0xfd, 0x87, 0xd8, 0x7e, 0xeb, 0x43.
//
// This is the same range used by OnionCat, which is part part of the
// This is the same range used by OnionCat, which is part of the
// RFC4193 unique local IPv6 range.
//
// In summary the format is:
@@ -118,6 +118,33 @@ func isOnionCatTor(na *wire.NetAddress) bool {
return onionCatNet.Contains(na.IP)
}
// NetworkAddress type is used to classify a network address.
type NetworkAddress int
const (
LocalAddress NetworkAddress = iota
IPv4Address
IPv6Address
OnionAddress
)
// getNetwork returns the network address type of the provided network address.
func getNetwork(na *wire.NetAddress) NetworkAddress {
switch {
case isLocal(na):
return LocalAddress
case isIPv4(na):
return IPv4Address
case isOnionCatTor(na):
return OnionAddress
default:
return IPv6Address
}
}
// isRFC1918 returns whether or not the passed address is part of the IPv4
// private network address space as defined by RFC1918 (10.0.0.0/8,
// 172.16.0.0/12, or 192.168.0.0/16).

View File

@@ -1,7 +1,7 @@
bech32
==========
[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd/bech32)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/bech32)

View File

@@ -18,7 +18,7 @@ const charset = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
var gen = []int{0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3}
// toBytes converts each character in the string 'chars' to the value of the
// index of the correspoding character in 'charset'.
// index of the corresponding character in 'charset'.
func toBytes(chars string) ([]byte, error) {
decoded := make([]byte, 0, len(chars))
for i := 0; i < len(chars); i++ {
@@ -163,7 +163,7 @@ func DecodeNoLimit(bech string) (string, []byte, error) {
return "", nil, ErrInvalidLength(len(bech))
}
// Only ASCII characters between 33 and 126 are allowed.
// Only ASCII characters between 33 and 126 are allowed.
var hasLower, hasUpper bool
for i := 0; i < len(bech); i++ {
if bech[i] < 33 || bech[i] > 126 {

View File

@@ -76,7 +76,7 @@ func TestBech32(t *testing.T) {
str, encoded)
}
// Flip a bit in the string an make sure it is caught.
// Flip a bit in the string and make sure it is caught.
pos := strings.LastIndexAny(str, "1")
flipped := str[:pos+1] + string((str[pos+1] ^ 1)) + str[pos+2:]
_, _, err = Decode(flipped)
@@ -115,7 +115,7 @@ func TestCanDecodeUnlimtedBech32(t *testing.T) {
}
// BenchmarkEncodeDecodeCycle performs a benchmark for a full encode/decode
// cycle of a bech32 string. It also reports the allocation count, which we
// cycle of a bech32 string. It also reports the allocation count, which we
// expect to be 2 for a fully optimized cycle.
func BenchmarkEncodeDecodeCycle(b *testing.B) {

View File

@@ -1,7 +1,7 @@
blockchain
==========
[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/blockchain)

View File

@@ -584,7 +584,7 @@ func (b *BlockChain) fetchBlockByNode(node *blockNode) (*dcrutil.Block, error) {
// pruneStakeNodes removes references to old stake nodes which should no
// longer be held in memory so as to keep the maximum memory usage down.
// It proceeds from the bestNode back to the determined minimum height node,
// finds all the relevant children, and then drops the the stake nodes from
// finds all the relevant children, and then drops the stake nodes from
// them by assigning nil and allowing the memory to be recovered by GC.
//
// This function MUST be called with the chain state lock held (for writes).
@@ -914,7 +914,7 @@ func (b *BlockChain) disconnectBlock(node *blockNode, block, parent *dcrutil.Blo
}
// Update the transaction spend journal by removing the record
// that contains all txos spent by the block .
// that contains all txos spent by the block.
err = dbRemoveSpendJournalEntry(dbTx, block.Hash())
if err != nil {
return err
@@ -1118,7 +1118,7 @@ func (b *BlockChain) reorganizeChainInternal(targetTip *blockNode) error {
tip = n.parent
}
// Load the fork block if there are blocks to attach and its not already
// Load the fork block if there are blocks to attach and it's not already
// loaded which will be the case if no nodes were detached. The fork block
// is used as the parent to the first node to be attached below.
forkBlock := nextBlockToDetach
@@ -1413,7 +1413,7 @@ func (b *BlockChain) connectBestChain(node *blockNode, block, parent *dcrutil.Bl
// and flush the status changes to the database. It is safe to
// ignore any errors when flushing here as the changes will be
// flushed when a valid block is connected, and the worst case
// scenario if a block a invalid is it would need to be
// scenario if a block is invalid is it would need to be
// revalidated after a restart.
view := NewUtxoViewpoint()
view.SetBestHash(parentHash)
@@ -1437,7 +1437,7 @@ func (b *BlockChain) connectBestChain(node *blockNode, block, parent *dcrutil.Bl
// In the fast add case the code to check the block connection
// was skipped, so the utxo view needs to load the referenced
// utxos, spend them, and add the new utxos being created by
// this block. Also, in the case the the block votes against
// this block. Also, in the case the block votes against
// the parent, its regular transaction tree must be
// disconnected.
if fastAdd {
@@ -2064,6 +2064,13 @@ func New(config *Config) (*BlockChain, error) {
return nil, err
}
// Either use the subsidy cache provided by the caller or create a new
// one when one was not provided.
subsidyCache := config.SubsidyCache
if subsidyCache == nil {
subsidyCache = standalone.NewSubsidyCache(params)
}
b := BlockChain{
checkpointsByHeight: checkpointsByHeight,
deploymentVers: deploymentVers,
@@ -2074,6 +2081,7 @@
sigCache: config.SigCache,
indexManager: config.IndexManager,
interrupt: config.Interrupt,
subsidyCache: subsidyCache,
index: newBlockIndex(config.DB),
bestChain: newChainView(nil),
orphans: make(map[chainhash.Hash]*orphanBlock),
@@ -2087,6 +2095,7 @@
calcVoterVersionIntervalCache: make(map[[chainhash.HashSize]byte]uint32),
calcStakeVersionCache: make(map[[chainhash.HashSize]byte]uint32),
}
b.pruner = newChainPruner(&b)
// Initialize the chain state from the passed database. When the db
// does not yet contain any chain state, both it and the chain state
@@ -2104,15 +2113,6 @@
}
}
// Either use the subsidy cache provided by the caller or create a new
// one when one was not provided.
subsidyCache := config.SubsidyCache
if subsidyCache == nil {
subsidyCache = standalone.NewSubsidyCache(b.chainParams)
}
b.subsidyCache = subsidyCache
b.pruner = newChainPruner(&b)
// The version 5 database upgrade requires a full reindex. Perform, or
// resume, the reindex as needed.
if err := b.maybeFinishV5Upgrade(); err != nil {

View File

@@ -1,7 +1,7 @@
chaingen
========
[![Build Status](https://travis-ci.org/decred/dcrd.png?branch=master)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/blockchain/chaingen)

View File

@@ -1152,7 +1152,7 @@ func (hp *hash256prng) Hash256Rand() uint32 {
}
// Roll over the entire PRNG by re-hashing the seed when the hash
// iterator index overlows a uint32.
// iterator index overflows a uint32.
if hp.idx > math.MaxUint32 {
hp.seed = chainhash.HashH(hp.seed[:])
hp.cachedHash = hp.seed
@@ -1568,7 +1568,7 @@ func (g *Generator) ReplaceVoteBitsN(voteNum int, voteBits uint16) func(*wire.Ms
stx := b.STransactions[voteNum]
if !isVoteTx(stx) {
panic(fmt.Sprintf("attempt to replace non-vote "+
"transaction #%d for for block %s", voteNum,
"transaction #%d for block %s", voteNum,
b.BlockHash()))
}
@@ -2458,7 +2458,7 @@ func (g *Generator) AssertTipBlockSigOpsCount(expected int) {
}
}
// AssertTipBlockSize panics if the if the current tip block associated with the
// AssertTipBlockSize panics if the current tip block associated with the
// generator does not have the specified size when serialized.
func (g *Generator) AssertTipBlockSize(expected int) {
serializeSize := g.tip.SerializeSize()

View File

@@ -130,27 +130,48 @@ func deserializeToMinimalOutputs(serialized []byte) ([]*stake.MinimalOutput, int
}
// readDeserializeSizeOfMinimalOutputs reads the size of the stored set of
// minimal outputs without allocating memory for the structs themselves. It
// will panic if the function reads outside of memory bounds.
func readDeserializeSizeOfMinimalOutputs(serialized []byte) int {
// minimal outputs without allocating memory for the structs themselves.
func readDeserializeSizeOfMinimalOutputs(serialized []byte) (int, error) {
numOutputs, offset := deserializeVLQ(serialized)
if offset == 0 {
return offset, errDeserialize("unexpected end of " +
"data during decoding (num outputs)")
}
for i := 0; i < int(numOutputs); i++ {
// Amount
_, bytesRead := deserializeVLQ(serialized[offset:])
if bytesRead == 0 {
return offset, errDeserialize("unexpected end of " +
"data during decoding (output amount)")
}
offset += bytesRead
// Script version
_, bytesRead = deserializeVLQ(serialized[offset:])
if bytesRead == 0 {
return offset, errDeserialize("unexpected end of " +
"data during decoding (output script version)")
}
offset += bytesRead
// Script
var scriptSize uint64
scriptSize, bytesRead = deserializeVLQ(serialized[offset:])
if bytesRead == 0 {
return offset, errDeserialize("unexpected end of " +
"data during decoding (output script size)")
}
offset += bytesRead
if uint64(len(serialized[offset:])) < scriptSize {
return offset, errDeserialize("unexpected end of " +
"data during decoding (output script)")
}
offset += int(scriptSize)
}
return offset
return offset, nil
}
// ConvertUtxosToMinimalOutputs converts the contents of a UTX to a series of
@@ -468,7 +489,7 @@ func dbMaybeStoreBlock(dbTx database.Tx, block *dcrutil.Block) error {
// NOTE: The transaction version and flags are only encoded when the spent
// txout was the final unspent output of the containing transaction.
// Otherwise, the header code will be 0 and the version is not serialized at
// all. This is done because that information is only needed when the utxo
// all. This is done because that information is only needed when the utxo
// set no longer has it.
//
// Example:
@@ -490,7 +511,7 @@ type spentTxOut struct {
amount int64 // The amount of the output.
txType stake.TxType // The stake type of the transaction.
height uint32 // Height of the the block containing the tx.
height uint32 // Height of the block containing the tx.
index uint32 // Index in the block of the transaction.
scriptVersion uint16 // The version of the scripting language.
txVersion uint16 // The version of creating tx.
@@ -565,27 +586,11 @@ func putSpentTxOut(target []byte, stxo *spentTxOut) int {
// An error will be returned if the version is not serialized as a part of the
// stxo and is also not provided to the function.
func decodeSpentTxOut(serialized []byte, stxo *spentTxOut, amount int64, height uint32, index uint32) (int, error) {
// Ensure there are bytes to decode.
if len(serialized) == 0 {
return 0, errDeserialize("no serialized bytes")
}
// Deserialize the header code.
// Deserialize the flags.
flags, offset := deserializeVLQ(serialized)
if offset >= len(serialized) {
return offset, errDeserialize("unexpected end of data after " +
"spent tx out flags")
}
// Decode the flags. If the flags are non-zero, it means that the
// transaction was fully spent at this spend.
if decodeFlagsFullySpent(byte(flags)) {
isCoinBase, hasExpiry, txType, _ := decodeFlags(byte(flags))
stxo.isCoinBase = isCoinBase
stxo.hasExpiry = hasExpiry
stxo.txType = txType
stxo.txFullySpent = true
if offset == 0 {
return 0, errDeserialize("unexpected end of data during " +
"decoding (flags)")
}
// Decode the compressed txout. We pass false for the amount flag,
@@ -609,22 +614,28 @@ func decodeSpentTxOut(serialized []byte, stxo *spentTxOut, amount int64, height
// Deserialize the containing transaction if the flags indicate that
// the transaction has been fully spent.
if decodeFlagsFullySpent(byte(flags)) {
isCoinBase, hasExpiry, txType, _ := decodeFlags(byte(flags))
stxo.isCoinBase = isCoinBase
stxo.hasExpiry = hasExpiry
stxo.txType = txType
stxo.txFullySpent = true
txVersion, bytesRead := deserializeVLQ(serialized[offset:])
offset += bytesRead
if offset == 0 || offset > len(serialized) {
return offset, errDeserialize("unexpected end of data " +
"after version")
if bytesRead == 0 {
return offset, errDeserialize("unexpected end of " +
"data during decoding (tx version)")
}
offset += bytesRead
stxo.txVersion = uint16(txVersion)
if stxo.txType == stake.TxTypeSStx {
sz := readDeserializeSizeOfMinimalOutputs(serialized[offset:])
if sz == 0 || sz > len(serialized[offset:]) {
return offset, errDeserialize("corrupt data for ticket " +
"fully spent stxo stakeextra")
sz, err := readDeserializeSizeOfMinimalOutputs(serialized[offset:])
if err != nil {
return offset + sz, errDeserialize(fmt.Sprintf("unable to decode "+
"ticket outputs: %v", err))
}
stakeExtra := make([]byte, sz)
copy(stakeExtra, serialized[offset:offset+sz])
stxo.stakeExtra = stakeExtra
@@ -641,7 +652,7 @@ func decodeSpentTxOut(serialized []byte, stxo *spentTxOut, amount int64, height
// Since the serialization format is not self describing, as noted in the
// format comments, this function also requires the transactions that spend the
// txouts and a utxo view that contains any remaining existing utxos in the
// transactions referenced by the inputs to the passed transasctions.
// transactions referenced by the inputs to the passed transactions.
func deserializeSpendJournalEntry(serialized []byte, txns []*wire.MsgTx) ([]spentTxOut, error) {
// Calculate the total number of stxos.
var numStxos int
@@ -1439,6 +1450,16 @@ func dbPutBestState(dbTx database.Tx, snapshot *BestState, workSum *big.Int) err
return dbTx.Metadata().Put(dbnamespace.ChainStateKeyName, serializedData)
}
// dbFetchBestState uses an existing database transaction to fetch the best
// chain state.
func dbFetchBestState(dbTx database.Tx) (bestChainState, error) {
// Fetch the stored chain state from the database metadata.
meta := dbTx.Metadata()
serializedData := meta.Get(dbnamespace.ChainStateKeyName)
log.Tracef("Serialized chain state: %x", serializedData)
return deserializeBestChainState(serializedData)
}
// createChainState initializes both the database and the chain state to the
// genesis block. This includes creating the necessary buckets and inserting
// the genesis block, so it must only be called on an uninitialized database.
@@ -1526,6 +1547,76 @@ func (b *BlockChain) createChainState() error {
return err
}
// loadBlockIndex loads all of the block index entries from the database and
// constructs the block index into the provided index parameter. It is not safe
// for concurrent access as it is only intended to be used during initialization
// and database migration.
func loadBlockIndex(dbTx database.Tx, genesisHash *chainhash.Hash, index *blockIndex) error {
// Determine how many blocks will be loaded into the index in order to
// allocate the right amount as a single alloc versus a whole bunch of
// little ones to reduce pressure on the GC.
meta := dbTx.Metadata()
blockIndexBucket := meta.Bucket(dbnamespace.BlockIndexBucketName)
var blockCount int32
cursor := blockIndexBucket.Cursor()
for ok := cursor.First(); ok; ok = cursor.Next() {
blockCount++
}
blockNodes := make([]blockNode, blockCount)
// Load all of the block index entries and construct the block index
// accordingly.
//
// NOTE: No locks are used on the block index here since this is
// initialization code.
var i int32
var lastNode *blockNode
cursor = blockIndexBucket.Cursor()
for ok := cursor.First(); ok; ok = cursor.Next() {
entry, err := deserializeBlockIndexEntry(cursor.Value())
if err != nil {
return err
}
header := &entry.header
// Determine the parent block node. Since the block headers are
// iterated in order of height, there is a very good chance the
// previous header processed is the parent.
var parent *blockNode
if lastNode == nil {
blockHash := header.BlockHash()
if blockHash != *genesisHash {
return AssertError(fmt.Sprintf("loadBlockIndex: expected "+
"first entry in block index to be genesis block, "+
"found %s", blockHash))
}
} else if header.PrevBlock == lastNode.hash {
parent = lastNode
} else {
parent = index.lookupNode(&header.PrevBlock)
if parent == nil {
return AssertError(fmt.Sprintf("loadBlockIndex: could not "+
"find parent for block %s", header.BlockHash()))
}
}
// Initialize the block node, connect it, and add it to the block
// index.
node := &blockNodes[i]
initBlockNode(node, header, parent)
node.status = entry.status
node.ticketsVoted = entry.ticketsVoted
node.ticketsRevoked = entry.ticketsRevoked
node.votes = entry.voteInfo
index.addNode(node)
lastNode = node
i++
}
return nil
}
// initChainState attempts to load and initialize the chain state from the
// database. When the db does not yet contain any chain state, both it and the
// chain state are initialized to the genesis block.
@@ -1628,17 +1719,8 @@ func (b *BlockChain) initChainState() error {
// Attempt to load the chain state from the database.
err = b.db.View(func(dbTx database.Tx) error {
// Fetch the stored chain state from the database metadata.
// When it doesn't exist, it means the database hasn't been
// initialized for use with chain yet, so break out now to allow
// that to happen under a writable database transaction.
meta := dbTx.Metadata()
serializedData := meta.Get(dbnamespace.ChainStateKeyName)
if serializedData == nil {
return nil
}
log.Tracef("Serialized chain state: %x", serializedData)
state, err := deserializeBestChainState(serializedData)
// Fetch the stored best chain state from the database.
state, err := dbFetchBestState(dbTx)
if err != nil {
return err
}
@@ -1646,65 +1728,11 @@ func (b *BlockChain) initChainState() error {
log.Infof("Loading block index...")
bidxStart := time.Now()
// Determine how many blocks will be loaded into the index in order to
// allocate the right amount as a single alloc versus a whole bunch of
// littles ones to reduce pressure on the GC.
blockIndexBucket := meta.Bucket(dbnamespace.BlockIndexBucketName)
var blockCount int32
cursor := blockIndexBucket.Cursor()
for ok := cursor.First(); ok; ok = cursor.Next() {
blockCount++
}
blockNodes := make([]blockNode, blockCount)
// Load all of the block index entries and construct the block index
// accordingly.
//
// NOTE: No locks are used on the block index here since this is
// initialization code.
var i int32
var lastNode *blockNode
cursor = blockIndexBucket.Cursor()
for ok := cursor.First(); ok; ok = cursor.Next() {
entry, err := deserializeBlockIndexEntry(cursor.Value())
if err != nil {
return err
}
header := &entry.header
// Determine the parent block node. Since the block headers are
// iterated in order of height, there is a very good chance the
// previous header processed is the parent.
var parent *blockNode
if lastNode == nil {
blockHash := header.BlockHash()
if blockHash != b.chainParams.GenesisHash {
return AssertError(fmt.Sprintf("initChainState: expected "+
"first entry in block index to be genesis block, "+
"found %s", blockHash))
}
} else if header.PrevBlock == lastNode.hash {
parent = lastNode
} else {
parent = b.index.lookupNode(&header.PrevBlock)
if parent == nil {
return AssertError(fmt.Sprintf("initChainState: could "+
"not find parent for block %s", header.BlockHash()))
}
}
// Initialize the block node, connect it, and add it to the block
// index.
node := &blockNodes[i]
initBlockNode(node, header, parent)
node.status = entry.status
node.ticketsVoted = entry.ticketsVoted
node.ticketsRevoked = entry.ticketsRevoked
node.votes = entry.voteInfo
b.index.addNode(node)
lastNode = node
i++
// Load all of the block index entries from the database and
// construct the block index.
err = loadBlockIndex(dbTx, &b.chainParams.GenesisHash, b.index)
if err != nil {
return err
}
// Set the best chain to the stored best state.

View File

@@ -81,7 +81,7 @@ func TestErrNotInMainChain(t *testing.T) {
// Ensure the stringized output for the error is as expected.
if err.Error() != errStr {
t.Fatalf("errNotInMainChain retuned unexpected error string - "+
t.Fatalf("errNotInMainChain returned unexpected error string - "+
"got %q, want %q", err.Error(), errStr)
}
@@ -493,53 +493,81 @@ func TestStxoDecodeErrors(t *testing.T) {
tests := []struct {
name string
stxo spentTxOut
txVersion int32 // When the txout is not fully spent.
serialized []byte
bytesRead int // Expected number of bytes read.
errType error
bytesRead int // Expected number of bytes read.
}{
{
name: "nothing serialized",
// [EOF]
name: "nothing serialized (no flags)",
stxo: spentTxOut{},
serialized: hexToBytes(""),
errType: errDeserialize(""),
bytesRead: 0,
},
{
name: "no data after flags w/o version",
// [<flags 00> EOF]
name: "no compressed txout script version",
stxo: spentTxOut{},
serialized: hexToBytes("00"),
errType: errDeserialize(""),
bytesRead: 1,
},
{
name: "no data after flags code",
// [<flags 10> <script version 00> EOF]
name: "no tx version data after empty script for a fully spent regular stxo",
stxo: spentTxOut{},
serialized: hexToBytes("14"),
serialized: hexToBytes("1000"),
errType: errDeserialize(""),
bytesRead: 1,
bytesRead: 2,
},
{
name: "no tx version data after script",
// [<flags 10> <script version 00> <compressed pk script 01 6e ...> EOF]
name: "no tx version data after a pay-to-script-hash script for a fully spent regular stxo",
stxo: spentTxOut{},
serialized: hexToBytes("1400016edbc6c4d31bae9f1ccc38538a114bf42de65e86"),
serialized: hexToBytes("1000016edbc6c4d31bae9f1ccc38538a114bf42de65e86"),
errType: errDeserialize(""),
bytesRead: 23,
},
{
name: "no stakeextra data after script for ticket",
// [<flags 14> <script version 00> <compressed pk script 01 6e ...> <tx version 01> EOF]
name: "no stakeextra data after script for a fully spent ticket stxo",
stxo: spentTxOut{},
serialized: hexToBytes("1400016edbc6c4d31bae9f1ccc38538a114bf42de65e8601"),
errType: errDeserialize(""),
bytesRead: 24,
},
{
name: "incomplete compressed txout",
// [<flags 14> <script version 00> <compressed pk script 01 6e ...> <tx version 01> <stakeextra {num outputs 01}> EOF]
name: "truncated stakeextra data after script for a fully spent ticket stxo (num outputs only)",
stxo: spentTxOut{},
txVersion: 1,
serialized: hexToBytes("1432"),
serialized: hexToBytes("1400016edbc6c4d31bae9f1ccc38538a114bf42de65e860101"),
errType: errDeserialize(""),
bytesRead: 2,
bytesRead: 25,
},
{
// [<flags 14> <script version 00> <compressed pk script 01 6e ...> <tx version 01> <stakeextra {num outputs 01} {amount 0f}> EOF]
name: "truncated stakeextra data after script for a fully spent ticket stxo (num outputs and amount only)",
stxo: spentTxOut{},
serialized: hexToBytes("1400016edbc6c4d31bae9f1ccc38538a114bf42de65e8601010f"),
errType: errDeserialize(""),
bytesRead: 26,
},
{
// [<flags 14> <script version 00> <compressed pk script 01 6e ...> <tx version 01> <stakeextra {num outputs 01} {amount 0f} {script version 00}> EOF]
name: "truncated stakeextra data after script for a fully spent ticket stxo (num outputs, amount, and script version only)",
stxo: spentTxOut{},
serialized: hexToBytes("1400016edbc6c4d31bae9f1ccc38538a114bf42de65e8601010f00"),
errType: errDeserialize(""),
bytesRead: 27,
},
{
// [<flags 14> <script version 00> <compressed pk script 01 6e ...> <tx version 01> <stakeextra {num outputs 01} {amount 0f} {script version 00} {script size 1a} {25 bytes of script instead of 26}> EOF]
name: "truncated stakeextra data after script for a fully spent ticket stxo (script size specified as 0x1a, but only 0x19 bytes provided)",
stxo: spentTxOut{},
serialized: hexToBytes("1400016edbc6c4d31bae9f1ccc38538a114bf42de65e8601010f001aba76a9140cdf9941c0c221243cb8672cd1ad2c4c0933850588"),
errType: errDeserialize(""),
bytesRead: 28,
},
}
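For reference, the flags bytes in these vectors decode according to the bit layout implied by the txTypeBitmask and txTypeShift constants in a later hunk: bit 0 marks a coinbase, bit 1 an expiry, bits 2-3 the transaction type, and bit 4 a fully spent transaction, which is why 0x10 reads as a fully spent regular transaction and 0x14 as a fully spent ticket. A minimal sketch of that decoding (the two mask constants are quoted from the diff; the other names and bit assignments are assumptions for illustration):

package main

import "fmt"

const (
	stxoCoinBaseFlag   = 0x01 // assumption: bit 0 marks a coinbase
	stxoExpiryFlag     = 0x02 // assumption: bit 1 marks an expiry
	txTypeBitmask      = 0x0c // quoted from the diff
	txTypeShift        = 2    // quoted from the diff
	stxoFullySpentFlag = 0x10 // inferred from the 0x10/0x14 vectors above
)

// decodeStxoFlags splits a spent txout flags byte into its components.
func decodeStxoFlags(flags byte) (coinBase, hasExpiry, fullySpent bool, txType byte) {
	coinBase = flags&stxoCoinBaseFlag != 0
	hasExpiry = flags&stxoExpiryFlag != 0
	fullySpent = flags&stxoFullySpentFlag != 0
	txType = (flags & txTypeBitmask) >> txTypeShift
	return
}

func main() {
	for _, flags := range []byte{0x10, 0x14} {
		_, _, spent, txType := decodeStxoFlags(flags)
		fmt.Printf("flags %#02x: fullySpent=%v txType=%d\n", flags, spent, txType)
	}
}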
@ -903,7 +931,7 @@ func TestSpendJournalErrors(t *testing.T) {
}
// TestUtxoSerialization ensures serializing and deserializing unspent
// trasaction output entries works as expected.
// transaction output entries works as expected.
func TestUtxoSerialization(t *testing.T) {
t.Parallel()


@ -259,7 +259,7 @@ func (c *chainView) next(node *blockNode) *blockNode {
}
// Next returns the successor to the provided node for the chain view. It will
// return nil if there is no successfor or the provided node is not part of the
// return nil if there is no successor or the provided node is not part of the
// view.
//
// For example, assume a block chain with a side chain as depicted below:


@ -375,7 +375,7 @@ testLoop:
// TestChainViewNil ensures that creating and accessing a nil chain view behaves
// as expected.
func TestChainViewNil(t *testing.T) {
// Ensure two unininitialized views are considered equal.
// Ensure two uninitialized views are considered equal.
view := newChainView(nil)
if !view.Equals(newChainView(nil)) {
t.Fatal("uninitialized nil views unequal")


@ -116,7 +116,7 @@ func chainSetup(dbName string, params *chaincfg.Params) (*BlockChain, func(), er
return chain, teardown, nil
}
// newFakeChain returns a chain that is usable for syntetic tests. It is
// newFakeChain returns a chain that is usable for synthetic tests. It is
// important to note that this chain has no database associated with it, so
// it is not usable with all functions and the tests must take care when making
// use of it.
@ -651,7 +651,7 @@ func (g *chaingenHarness) AdvanceToStakeValidationHeight() {
func (g *chaingenHarness) AdvanceFromSVHToActiveAgenda(voteID string) {
g.t.Helper()
// Find the correct deployment for the provided ID along with the the yes
// Find the correct deployment for the provided ID along with the yes
// vote choice within it.
params := g.Params()
deploymentVer, deployment, err := findDeployment(params, voteID)


@ -9,7 +9,7 @@ import (
"fmt"
"github.com/decred/dcrd/blockchain/stake/v2"
"github.com/decred/dcrd/dcrec/secp256k1"
"github.com/decred/dcrd/dcrec/secp256k1/v2"
"github.com/decred/dcrd/txscript/v2"
)
@ -653,9 +653,9 @@ func decodeCompressedTxOut(serialized []byte, compressionVersion uint32,
// remaining for the compressed script.
var compressedAmount uint64
compressedAmount, bytesRead = deserializeVLQ(serialized)
if bytesRead >= len(serialized) {
if bytesRead == 0 {
return 0, 0, nil, bytesRead, errDeserialize("unexpected end of " +
"data after compressed amount")
"data during decoding (compressed amount)")
}
amount = int64(decompressTxOutAmount(compressedAmount))
offset += bytesRead
@ -664,12 +664,17 @@ func decodeCompressedTxOut(serialized []byte, compressionVersion uint32,
// Decode the script version.
var scriptVersion uint64
scriptVersion, bytesRead = deserializeVLQ(serialized[offset:])
if bytesRead == 0 {
return 0, 0, nil, offset, errDeserialize("unexpected end of " +
"data during decoding (script version)")
}
offset += bytesRead
// Decode the compressed script size and ensure there are enough bytes
// left in the slice for it.
scriptSize := decodeCompressedScriptSize(serialized[offset:],
compressionVersion)
// Note: scriptSize == 0 is OK (an empty compressed script is valid)
if scriptSize < 0 {
return 0, 0, nil, offset, errDeserialize("negative script size")
}
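Both new truncation checks rely on deserializeVLQ reporting a zero byte count when handed an empty slice. A minimal sketch of the MSB-first base-128 decoding in question, reconstructed from context rather than copied from the package:

package main

import "fmt"

// deserializeVLQSketch decodes a canonical MSB-first base-128 variable
// length quantity: the high bit of each byte flags a continuation, the low
// seven bits carry data, and one is added per continuation byte so every
// value has exactly one encoding. It returns the decoded value and the
// number of bytes read; a zero byte count signals an empty input.
func deserializeVLQSketch(serialized []byte) (uint64, int) {
	var n uint64
	var size int
	for _, val := range serialized {
		size++
		n = (n << 7) | uint64(val&0x7f)
		if val&0x80 != 0x80 {
			break
		}
		n++
	}
	return n, size
}

func main() {
	fmt.Println(deserializeVLQSketch([]byte{0x80, 0x00})) // 128 2
	fmt.Println(deserializeVLQSketch(nil))                // 0 0
}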
@ -718,7 +723,7 @@ const (
// from the flags byte.
txTypeBitmask = 0x0c
// txTypeShift is the number of bits to shift falgs to the right to yield the
// txTypeShift is the number of bits to shift flags to the right to yield the
// correct integer value after applying the bitmask with AND.
txTypeShift = 2
)


@ -20,14 +20,6 @@ var (
// bigZero is 0 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigZero = big.NewInt(0)
// bigOne is 1 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigOne = big.NewInt(1)
// oneLsh256 is 1 shifted left 256 bits. It is defined here to avoid
// the overhead of creating it multiple times.
oneLsh256 = new(big.Int).Lsh(bigOne, 256)
)
// calcEasiestDifficulty calculates the easiest possible difficulty that a block
@ -614,7 +606,7 @@ func calcNextStakeDiffV2(params *chaincfg.Params, nextHeight, curDiff, prevPoolS
// Calculate the difficulty by multiplying the old stake difficulty
// with two ratios that represent a force to counteract the relative
// change in the pool size (Fc) and a restorative force to push the pool
// size towards the target value (Fr).
// size towards the target value (Fr).
//
// Per DCP0001, the generalized equation is:
//
@ -640,7 +632,7 @@ func calcNextStakeDiffV2(params *chaincfg.Params, nextHeight, curDiff, prevPoolS
// nextDiff = -----------------------------------
// prevPoolSizeAll * targetPoolSizeAll
//
// Further, the Sub parameter must calculate the denomitor first using
// Further, the Sub parameter must calculate the denominator first using
// integer math.
targetPoolSizeAll := votesPerBlock * (ticketPoolSize + ticketMaturity)
curPoolSizeAllBig := big.NewInt(curPoolSizeAll)

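The reason the denominator must be formed first is that folding the two ratios in one at a time with integer division would truncate at every step. A sketch of the combined calculation quoted above, omitting the surrounding minimum/maximum clamping the real function performs (variable names assumed from the comment):

package main

import (
	"fmt"
	"math/big"
)

// nextStakeDiffSketch evaluates the DCP0001 equation quoted above:
//
//              curDiff * curPoolSizeAll^2
//   nextDiff = ------------------------------------
//              prevPoolSizeAll * targetPoolSizeAll
//
// using big integers so the full numerator and denominator exist before
// the single final division.
func nextStakeDiffSketch(curDiff, curPoolSizeAll, prevPoolSizeAll, targetPoolSizeAll int64) int64 {
	num := big.NewInt(curDiff)
	num.Mul(num, big.NewInt(curPoolSizeAll))
	num.Mul(num, big.NewInt(curPoolSizeAll))
	den := big.NewInt(prevPoolSizeAll)
	den.Mul(den, big.NewInt(targetPoolSizeAll))
	return num.Div(num, den).Int64()
}

func main() {
	// Purely illustrative numbers.
	fmt.Println(nextStakeDiffSketch(100000000, 42240, 42000, 40960))
}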

@ -110,8 +110,8 @@ const (
// ErrUnexpectedDifficulty indicates specified bits do not align with
// the expected value either because it doesn't match the calculated
// valued based on difficulty regarted rules or it is out of the valid
// range.
// value based on difficulty regarding the rules or it is out of the
// valid range.
ErrUnexpectedDifficulty
// ErrHighHash indicates the block does not hash to a value which is
@ -390,7 +390,7 @@ const (
ErrRegTxCreateStakeOut
// ErrInvalidFinalState indicates that the final state of the PRNG included
// in the the block differed from the calculated final state.
// in the block differed from the calculated final state.
ErrInvalidFinalState
// ErrPoolSize indicates an error in the ticket pool size for this block.
@ -615,7 +615,7 @@ func (e RuleError) Error() string {
return e.Description
}
// ruleError creates an RuleError given a set of arguments.
// ruleError creates a RuleError given a set of arguments.
func ruleError(c ErrorCode, desc string) RuleError {
return RuleError{ErrorCode: c, Description: desc}
}


@ -18,7 +18,7 @@ import (
)
// This example demonstrates how to create a new chain instance and use
// ProcessBlock to attempt to attempt add a block to the chain. As the package
// ProcessBlock to attempt to add a block to the chain. As the package
// overview documentation describes, this includes all of the Decred consensus
// rules. This example intentionally attempts to insert a duplicate genesis
// block to illustrate how an invalid block is handled.


@ -1,7 +1,7 @@
fullblocktests
==============
[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/blockchain/fullblocktests)


@ -17,7 +17,7 @@ import (
"github.com/decred/dcrd/blockchain/v2/chaingen"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/dcrec"
"github.com/decred/dcrd/dcrec/secp256k1"
"github.com/decred/dcrd/dcrec/secp256k1/v2"
"github.com/decred/dcrd/dcrutil/v2"
"github.com/decred/dcrd/txscript/v2"
"github.com/decred/dcrd/wire"
@ -272,7 +272,7 @@ func replaceStakeSigScript(sigScript []byte) func(*wire.MsgBlock) {
}
// additionalPoWTx returns a function that itself takes a block and modifies it
// by adding the the provided transaction to the regular transaction tree.
// by adding the provided transaction to the regular transaction tree.
func additionalPoWTx(tx *wire.MsgTx) func(*wire.MsgBlock) {
return func(b *wire.MsgBlock) {
b.AddTransaction(tx)
@ -307,8 +307,8 @@ func encodeNonCanonicalBlock(b *wire.MsgBlock) []byte {
return buf.Bytes()
}
// assertTipsNonCanonicalBlockSize panics if the if the current tip block
// associated with the generator does not have the specified non-canonical size
// assertTipsNonCanonicalBlockSize panics if the current tip block associated
// with the generator does not have the specified non-canonical size
// when serialized.
func assertTipNonCanonicalBlockSize(g *chaingen.Generator, expected int) {
tip := g.Tip()
@ -726,7 +726,7 @@ func Generate(includeLargeReorg bool) (tests [][]TestInstance, err error) {
// ---------------------------------------------------------------------
// The comments below identify the structure of the chain being built.
//
// The values in parenthesis repesent which outputs are being spent.
// The values in parenthesis represent which outputs are being spent.
//
// For example, b1(0) indicates the first collected spendable output
// which, due to the code above to create the correct number of blocks,
@ -1879,8 +1879,8 @@ func Generate(includeLargeReorg bool) (tests [][]TestInstance, err error) {
// Create block with duplicate transactions in the regular transaction
// tree.
//
// This test relies on the shape of the shape of the merkle tree to test
// the intended condition. That is the reason for the assertion.
// This test relies on the shape of the merkle tree to test the
// intended condition. That is the reason for the assertion.
//
// ... -> brs3(14)
// \-> bmf14(15)


@ -3,16 +3,22 @@ module github.com/decred/dcrd/blockchain/v2
go 1.11
require (
github.com/dchest/blake256 v1.1.0 // indirect
github.com/decred/dcrd/blockchain/stake/v2 v2.0.1
github.com/decred/dcrd/blockchain/standalone v1.0.0
github.com/decred/dcrd/chaincfg/chainhash v1.0.2
github.com/decred/dcrd/chaincfg/v2 v2.2.0
github.com/decred/dcrd/database/v2 v2.0.0
github.com/decred/dcrd/dcrec v1.0.0
github.com/decred/dcrd/dcrec/secp256k1 v1.0.2
github.com/decred/dcrd/dcrec/secp256k1/v2 v2.0.0
github.com/decred/dcrd/dcrutil/v2 v2.0.0
github.com/decred/dcrd/gcs v1.1.0
github.com/decred/dcrd/gcs/v2 v2.0.0-00010101000000-000000000000
github.com/decred/dcrd/txscript/v2 v2.0.0
github.com/decred/dcrd/wire v1.2.0
github.com/decred/slog v1.0.0
)
replace (
github.com/decred/dcrd/chaincfg/v2 => ../chaincfg
github.com/decred/dcrd/gcs/v2 => ../gcs
)


@ -10,6 +10,8 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dchest/blake256 v1.0.0 h1:6gUgI5MHdz9g0TdrgKqXsoDX+Zjxmm1Sc6OsoGru50I=
github.com/dchest/blake256 v1.0.0/go.mod h1:xXNWCE1jsAP8DAjP+rKw2MbeqLczjI3TRx2VK+9OEYY=
github.com/dchest/blake256 v1.1.0 h1:4AuEhGPT/3TTKFhTfBpZ8hgZE7wJpawcYaEawwsbtqM=
github.com/dchest/blake256 v1.1.0/go.mod h1:xXNWCE1jsAP8DAjP+rKw2MbeqLczjI3TRx2VK+9OEYY=
github.com/dchest/siphash v1.2.1 h1:4cLinnzVJDKxTCl9B01807Yiy+W7ZzVHj/KIroQRvT4=
github.com/dchest/siphash v1.2.1/go.mod h1:q+IRvb2gOSrUnYoPqHiyHXS0FOBBOdl6tONBlVnOnt4=
github.com/decred/base58 v1.0.0 h1:BVi1FQCThIjZ0ehG+I99NJ51o0xcc9A/fDKhmJxY6+w=
@ -41,10 +43,10 @@ github.com/decred/dcrd/dcrec/secp256k1 v1.0.1 h1:EFWVd1p0t0Y5tnsm/dJujgV0ORogRJ6
github.com/decred/dcrd/dcrec/secp256k1 v1.0.1/go.mod h1:lhu4eZFSfTJWUnR3CFRcpD+Vta0KUAqnhTsTksHXgy0=
github.com/decred/dcrd/dcrec/secp256k1 v1.0.2 h1:awk7sYJ4pGWmtkiGHFfctztJjHMKGLV8jctGQhAbKe0=
github.com/decred/dcrd/dcrec/secp256k1 v1.0.2/go.mod h1:CHTUIVfmDDd0KFVFpNX1pFVCBUegxW387nN0IGwNKR0=
github.com/decred/dcrd/dcrec/secp256k1/v2 v2.0.0 h1:3GIJYXQDAKpLEFriGFN8SbSffak10UXHGdIcFaMPykY=
github.com/decred/dcrd/dcrec/secp256k1/v2 v2.0.0/go.mod h1:3s92l0paYkZoIHuj4X93Teg/HB7eGM9x/zokGw+u4mY=
github.com/decred/dcrd/dcrutil/v2 v2.0.0 h1:HTqn2tZ8eqBF4y3hJwjyKBmJt16y7/HjzpE82E/crhY=
github.com/decred/dcrd/dcrutil/v2 v2.0.0/go.mod h1:gUshVAXpd51DlcEhr51QfWL2HJGkMDM1U8chY+9VvQg=
github.com/decred/dcrd/gcs v1.1.0 h1:djuYzaFUzUTJR+6ulMSRZOQ+P9rxtIyuxQeViAEfB8s=
github.com/decred/dcrd/gcs v1.1.0/go.mod h1:yBjhj217Vw5lw3aKnCdHip7fYb9zwMos8bCy5s79M9w=
github.com/decred/dcrd/txscript/v2 v2.0.0 h1:So+NcQY58mDHDN2N2edED5syGZp2ed8Ltxj8mDE5CAs=
github.com/decred/dcrd/txscript/v2 v2.0.0/go.mod h1:WStcyYYJa+PHJB4XjrLDRzV96/Z4thtsu8mZoVrU6C0=
github.com/decred/dcrd/wire v1.2.0 h1:HqJVB7vcklIguzFWgRXw/WYCQ9cD3bUC5TKj53i1Hng=


@ -1,7 +1,7 @@
indexers
========
[![Build Status](https://travis-ci.org/decred/dcrd.png?branch=master)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://godoc.org/github.com/decred/dcrd/blockchain/indexers?status.png)](https://godoc.org/github.com/decred/dcrd/blockchain/indexers)


@ -42,7 +42,8 @@ const (
// consumes. It consists of the address key + 1 byte for the level.
levelKeySize = addrKeySize + 1
// levelOffset is the offset in the level key which identifes the level.
// levelOffset is the offset in the level key which identifies the
// level.
levelOffset = levelKeySize - 1
// addrKeyTypePubKeyHash is the address type in an address key which
@ -159,7 +160,7 @@ func serializeAddrIndexEntry(blockID uint32, txLoc wire.TxLoc, blockIndex uint32
// deserializeAddrIndexEntry decodes the passed serialized byte slice into the
// provided region struct according to the format described in detail above and
// uses the passed block hash fetching function in order to conver the block ID
// uses the passed block hash fetching function in order to convert the block ID
// to the associated block hash.
func deserializeAddrIndexEntry(serialized []byte, entry *TxIndexEntry, fetchBlockHash fetchBlockHashFunc) error {
// Ensure there are enough bytes to decode.
@ -361,7 +362,7 @@ func maxEntriesForLevel(level uint8) int {
return numEntries
}
// dbRemoveAddrIndexEntries removes the specified number of entries from from
// dbRemoveAddrIndexEntries removes the specified number of entries from
// the address index for the provided key. An assertion error will be returned
// if the count exceeds the total number of entries in the index.
func dbRemoveAddrIndexEntries(bucket internalBucket, addrKey [addrKeySize]byte, count int) error {
@ -503,7 +504,7 @@ func dbRemoveAddrIndexEntries(bucket internalBucket, addrKey [addrKeySize]byte,
// be half full. When that is the case, move it up a level to
// simplify the code below which backfills all lower levels that
// are still empty. This also means the current level will be
// empty, so the loop will perform another another iteration to
// empty, so the loop will perform another iteration to
// potentially backfill this level with data from the next one.
curLevelMaxEntries := maxEntriesForLevel(level)
if len(levelData)/txEntrySize != curLevelMaxEntries {

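The sanity-check hunk below indexes buckets with keyForLevel. Given the levelKeySize and levelOffset constants from the earlier hunk, a level key is plausibly just the fixed-size address key with a trailing level byte; a sketch under that assumption (the concrete sizes here are illustrative):

package main

import "fmt"

const (
	addrKeySize  = 21 // assumption: 1 type byte + 20 byte hash160
	levelKeySize = addrKeySize + 1
	levelOffset  = levelKeySize - 1
)

// keyForLevel appends the level byte to the fixed-size address key so each
// level of an address's entries lives under its own bucket key.
func keyForLevel(addrKey [addrKeySize]byte, level uint8) [levelKeySize]byte {
	var key [levelKeySize]byte
	copy(key[:], addrKey[:])
	key[levelOffset] = level
	return key
}

func main() {
	var addrKey [addrKeySize]byte // e.g. type byte 0x00 for a pubkey hash key
	key := keyForLevel(addrKey, 3)
	fmt.Printf("% x\n", key[:])
}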

@ -118,7 +118,7 @@ func (b *addrIndexBucket) sanityCheck(addrKey [addrKeySize]byte, expectedTotal i
var totalEntries int
maxEntries := level0MaxEntries
for level := uint8(0); level <= highestLevel; level++ {
// Level 0 can'have more entries than the max allowed if the
// Level 0 can't have more entries than the max allowed if the
// levels after it have data and it can't be empty. All other
// levels must either be half full or full.
data := b.levels[keyForLevel(addrKey, level)]


@ -14,8 +14,8 @@ import (
"github.com/decred/dcrd/chaincfg/v2"
"github.com/decred/dcrd/database/v2"
"github.com/decred/dcrd/dcrutil/v2"
"github.com/decred/dcrd/gcs"
"github.com/decred/dcrd/gcs/blockcf"
"github.com/decred/dcrd/gcs/v2"
"github.com/decred/dcrd/gcs/v2/blockcf"
"github.com/decred/dcrd/wire"
)
@ -174,7 +174,7 @@ func (idx *CFIndex) Create(dbTx database.Tx) error {
// storeFilter stores a given filter, and performs the steps needed to
// generate the filter's header.
func storeFilter(dbTx database.Tx, block *dcrutil.Block, f *gcs.Filter, filterType wire.FilterType) error {
func storeFilter(dbTx database.Tx, block *dcrutil.Block, f *gcs.FilterV1, filterType wire.FilterType) error {
if uint8(filterType) > maxFilterType {
return errors.New("unsupported filter type")
}
@ -187,7 +187,7 @@ func storeFilter(dbTx database.Tx, block *dcrutil.Block, f *gcs.Filter, filterTy
h := block.Hash()
var basicFilterBytes []byte
if f != nil {
basicFilterBytes = f.NBytes()
basicFilterBytes = f.Bytes()
}
err := dbStoreFilter(dbTx, fkey, h, basicFilterBytes)
if err != nil {
@ -215,7 +215,7 @@ func storeFilter(dbTx database.Tx, block *dcrutil.Block, f *gcs.Filter, filterTy
// every passed block. This is part of the Indexer interface.
func (idx *CFIndex) ConnectBlock(dbTx database.Tx, block, parent *dcrutil.Block, view *blockchain.UtxoViewpoint) error {
f, err := blockcf.Regular(block.MsgBlock())
if err != nil && err != gcs.ErrNoData {
if err != nil {
return err
}
@ -225,7 +225,7 @@ func (idx *CFIndex) ConnectBlock(dbTx database.Tx, block, parent *dcrutil.Block,
}
f, err = blockcf.Extended(block.MsgBlock())
if err != nil && err != gcs.ErrNoData {
if err != nil {
return err
}


@ -19,7 +19,7 @@ const (
maxAllowedOffsetSecs = 70 * 60 // 1 hour 10 minutes
// similarTimeSecs is the number of seconds in either direction from the
// local clock that is used to determine that it is likley wrong and
// local clock that is used to determine that it is likely wrong and
// hence to show a warning.
similarTimeSecs = 5 * 60 // 5 minutes
)


@ -29,10 +29,10 @@ const (
// It should be noted that the block might still ultimately fail to
// become the new main chain tip if it contains invalid scripts, double
// spends, etc. However, this is quite rare in practice because a lot
// of work was expended to create a block which satisifies the proof of
// of work was expended to create a block which satisfies the proof of
// work requirement.
//
// Finally, this notification is only sent if the the chain is believed
// Finally, this notification is only sent if the chain is believed
// to be current and the chain lock is NOT released, so consumers must
// take care to avoid calling blockchain functions to avoid potential
// deadlock.


@ -98,11 +98,11 @@ func (b *BlockChain) processOrphans(hash *chainhash.Hash, flags BehaviorFlags) e
// the block chain along with best chain selection and reorganization.
//
// When no errors occurred during processing, the first return value indicates
// the length of the fork the block extended. In the case it either exteneded
// the length of the fork the block extended. In the case it either extended
// the best chain or is now the tip of the best chain due to causing a
// reorganize, the fork length will be 0. The second return value indicates
// whether or not the block is an orphan, in which case the fork length will
// also be zero as expected, because it, by definition, does not connect ot the
// also be zero as expected, because it, by definition, does not connect to the
// best chain.
//
// This function is safe for concurrent access.


@ -223,7 +223,7 @@ func TestCalcSequenceLock(t *testing.T) {
{
// A transaction with a single input. The input's
// sequence number encodes a relative locktime in blocks
// (3 blocks). The sequence lock should have a value
// (3 blocks). The sequence lock should have a value
// of -1 for seconds, but a height of 2 meaning it can
// be included at height 3.
name: "3 blocks",
@ -381,7 +381,7 @@ func TestCalcSequenceLock(t *testing.T) {
// Ensure both the returned sequence lock seconds and block
// height match the expected values.
if seqLock.MinTime != test.want.MinTime {
t.Errorf("%s: mistmached seconds - got %v, want %v",
t.Errorf("%s: mismatched seconds - got %v, want %v",
test.name, seqLock.MinTime, test.want.MinTime)
continue
}
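The semantics these cases exercise, where -1 disables a constraint and a MinHeight of 2 admits inclusion from height 3 onward, reduce to a pair of strict comparisons. A sketch mirroring, though not necessarily identical to, the package's SequenceLockActive:

package main

import (
	"fmt"
	"time"
)

// sequenceLockActive reports whether a relative lock is satisfied for a
// block at the given height and past median time. The -1 sentinel used by
// the tests above naturally satisfies either strict comparison, which
// disables that half of the constraint.
func sequenceLockActive(minHeight, minTime, blockHeight int64, medianTime time.Time) bool {
	return minHeight < blockHeight && minTime < medianTime.Unix()
}

func main() {
	now := time.Now()
	// MinHeight 2 admits inclusion starting at height 3.
	fmt.Println(sequenceLockActive(2, -1, 3, now)) // true
	fmt.Println(sequenceLockActive(2, -1, 2, now)) // false
}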


@ -38,7 +38,7 @@ const (
// OP_RETURNs were missing or contained invalid addresses.
ErrSStxInvalidOutputs
// ErrSStxInOutProportions indicates the the number of inputs in an SStx
// ErrSStxInOutProportions indicates the number of inputs in an SStx
// was not equal to the number of output minus one.
ErrSStxInOutProportions
@ -249,7 +249,7 @@ func (e RuleError) GetCode() ErrorCode {
return e.ErrorCode
}
// stakeRuleError creates an RuleError given a set of arguments.
// stakeRuleError creates a RuleError given a set of arguments.
func stakeRuleError(c ErrorCode, desc string) RuleError {
return RuleError{ErrorCode: c, Description: desc}
}


@ -59,7 +59,7 @@ const (
// v: height
//
// 4. BlockUndo
// Block removal data, for reverting the the first 3 database buckets to
// Block removal data, for reverting the first 3 database buckets to
// a previous state.
//
// k: height


@ -70,7 +70,7 @@ func (e ErrorCode) String() string {
return fmt.Sprintf("Unknown ErrorCode (%d)", int(e))
}
// DBError identifies a an error in the stake database for tickets.
// DBError identifies an error in the stake database for tickets.
// The caller can use type assertions to determine if a failure was
// specifically due to a rule violation and access the ErrorCode field to
// ascertain the specific reason for the rule violation.


@ -1,7 +1,7 @@
tickettreap
===========
[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/blockchain/stake/internal/tickettreap)


@ -15,7 +15,7 @@ const numTicketKeys = 42500
var (
// generatedTicketKeys is used to store ticket keys generated for use
// in the benchmarks so that they only need to be generatd once for all
// in the benchmarks so that they only need to be generated once for all
// benchmarks that use them.
genTicketKeysLock sync.Mutex
generatedTicketKeys []Key


@ -179,7 +179,7 @@ func (s *parentStack) Push(node *treapNode) {
// This approach is used over append because reslicing the slice to pop
// the item causes the compiler to make unneeded allocations. Also,
// since the max number of items is related to the tree depth which
// requires expontentially more items to increase, only increase the cap
// requires exponentially more items to increase, only increase the cap
// one item at a time. This is more intelligent than the generic append
// expansion algorithm which often doubles the cap.
index := s.index - staticDepth

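A sketch of the growth strategy that comment describes, with placeholder sizes and assumed field names rather than the package's actual internals:

package main

// Placeholder depth and types for illustration only.
const staticDepth = 128

type treapNode struct{}

type parentStack struct {
	index    int
	items    [staticDepth]*treapNode
	overflow []*treapNode
}

// push stores node, growing the overflow slice by exactly one slot when
// needed rather than letting append double the capacity, since the stack
// depth only grows logarithmically with the number of treap items.
func (s *parentStack) push(node *treapNode) {
	if s.index < staticDepth {
		s.items[s.index] = node
		s.index++
		return
	}
	index := s.index - staticDepth
	if index+1 > cap(s.overflow) {
		overflow := make([]*treapNode, index+1)
		copy(overflow, s.overflow)
		s.overflow = overflow
	}
	s.overflow[index] = node
	s.index++
}

func main() {
	var s parentStack
	for i := 0; i < staticDepth+3; i++ {
		s.push(&treapNode{})
	}
}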

@ -57,7 +57,7 @@ type Immutable struct {
root *treapNode
count int
// totalSize is the best estimate of the total size of of all data in
// totalSize is the best estimate of the total size of all data in
// the treap including the keys, values, and node sizes.
totalSize uint64
}


@ -373,7 +373,7 @@ func TestImmutableReverseSequential(t *testing.T) {
}
// TestImmutableUnordered ensures that putting keys into an immutable treap in
// no paritcular order works as expected.
// no particular order works as expected.
func TestImmutableUnordered(t *testing.T) {
t.Parallel()
@ -463,7 +463,7 @@ func TestImmutableDuplicatePut(t *testing.T) {
testTreap = testTreap.Put(key, value)
expectedSize += nodeFieldsSize + uint64(len(key)) + nodeValueSize
// Put a duplicate key with the the expected final value.
// Put a duplicate key with the expected final value.
testTreap = testTreap.Put(key, expectedVal)
// Ensure the key still exists and is the new value.


@ -65,12 +65,12 @@ const (
// hash of the block in which voting was missed.
MaxOutputsPerSSRtx = MaxInputsPerSStx
// SStxPKHMinOutSize is the minimum size of of an OP_RETURN commitment output
// SStxPKHMinOutSize is the minimum size of an OP_RETURN commitment output
// for an SStx tx.
// 20 bytes P2SH/P2PKH + 8 byte amount + 4 byte fee range limits
SStxPKHMinOutSize = 32
// SStxPKHMaxOutSize is the maximum size of of an OP_RETURN commitment output
// SStxPKHMaxOutSize is the maximum size of an OP_RETURN commitment output
// for an SStx tx.
SStxPKHMaxOutSize = 77
@ -842,7 +842,7 @@ func CheckSSGen(tx *wire.MsgTx) error {
}
// IsSSGen returns whether or not a transaction is a stake submission generation
// transaction. There are also known as votes.
// transaction. These are also known as votes.
func IsSSGen(tx *wire.MsgTx) bool {
return CheckSSGen(tx) == nil
}
@ -937,7 +937,7 @@ func CheckSSRtx(tx *wire.MsgTx) error {
}
// IsSSRtx returns whether or not a transaction is a stake submission revocation
// transaction. There are also known as revocations.
// transaction. These are also known as revocations.
func IsSSRtx(tx *wire.MsgTx) bool {
return CheckSSRtx(tx) == nil
}


@ -256,7 +256,7 @@ func TestTicketDBLongChain(t *testing.T) {
filename := filepath.Join("testdata", "testexpiry.bz2")
fi, err := os.Open(filename)
if err != nil {
t.Fatalf("failed ot open test data: %v", err)
t.Fatalf("failed to open test data: %v", err)
}
bcStream := bzip2.NewReader(fi)
defer fi.Close()


@ -1,7 +1,7 @@
standalone
==========
[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/blockchain/standalone)


@ -190,7 +190,7 @@ func (c *SubsidyCache) CalcBlockSubsidy(height int64) int64 {
// subsidy for the requested interval.
if reqInterval > lastCachedInterval {
// Return zero for all intervals after the subsidy reaches zero. This
// enforces an upper bound on the the number of entries in the cache.
// enforces an upper bound on the number of entries in the cache.
if lastCachedSubsidy == 0 {
return 0
}
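The upper bound holds because the subsidy is a strictly decreasing integer sequence: every reduction interval scales it by a fixed fraction until integer division drives it to zero. A sketch of the recurrence the cache memoizes (parameter names assumed; the illustrative values match Decred mainnet's base subsidy and 100/101 reduction ratio):

package main

import "fmt"

// subsidyAtInterval applies the per-interval reduction the cache memoizes:
// subsidy <- subsidy * mul / div, stopping early once it reaches zero.
func subsidyAtInterval(baseSubsidy, mul, div int64, interval uint64) int64 {
	subsidy := baseSubsidy
	for i := uint64(0); i < interval && subsidy != 0; i++ {
		subsidy = subsidy * mul / div
	}
	return subsidy
}

func main() {
	// Values in atoms: 31.19582664 DCR base subsidy with 100/101 reduction.
	fmt.Println(subsidyAtInterval(3119582664, 100, 101, 1))
	fmt.Println(subsidyAtInterval(3119582664, 100, 101, 5000)) // eventually 0
}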


@ -188,7 +188,7 @@ func (c *thresholdStateCache) Update(hash chainhash.Hash, state ThresholdStateTu
c.entries[hash] = state
}
// MarkFlushed marks all of the current udpates as flushed to the database.
// MarkFlushed marks all of the current updates as flushed to the database.
// This is useful so the caller can ensure the needed database updates are not
// lost until they have successfully been written to the database.
func (c *thresholdStateCache) MarkFlushed() {
@ -531,7 +531,7 @@ func (b *BlockChain) StateLastChangedHeight(hash *chainhash.Hash, version uint32
return 0, HashError(hash.String())
}
// Fetch the treshold state cache for the provided deployment id as well as
// Fetch the threshold state cache for the provided deployment id as well as
// the condition checker.
var cache *thresholdStateCache
var checker thresholdConditionChecker
@ -666,9 +666,9 @@ func (b *BlockChain) isFixSeqLocksAgendaActive(prevNode *blockNode) (bool, error
return state.State == ThresholdActive, nil
}
// IsFixSeqLocksAgendaActive returns whether or not whether or not the fix
// sequence locks agenda vote, as defined in DCP0004 has passed and is now
// active for the block AFTER the current best chain block.
// IsFixSeqLocksAgendaActive returns whether or not the fix sequence locks
// agenda vote, as defined in DCP0004 has passed and is now active for the
// block AFTER the current best chain block.
//
// This function is safe for concurrent access.
func (b *BlockChain) IsFixSeqLocksAgendaActive() (bool, error) {


@ -250,7 +250,7 @@ func TestThresholdState(t *testing.T) {
// version 3.
//
// This will result in triggering enforcement of the stake version and
// that the stake version is 3. The treshold state for the test dummy
// that the stake version is 3. The threshold state for the test dummy
// deployments must still be defined since a v4 majority proof-of-work
// and proof-of-stake upgrade are required before moving to started.
// ---------------------------------------------------------------------
@ -308,7 +308,7 @@ func TestThresholdState(t *testing.T) {
//
// This will result in achieving stake version 4 enforcement.
//
// The treshold state for the dummy deployments must still be defined
// The threshold state for the dummy deployments must still be defined
// since it can only change on a rule change boundary and it still
// requires a v4 majority proof-of-work upgrade before moving to
// started.
@ -338,7 +338,7 @@ func TestThresholdState(t *testing.T) {
// the final two blocks to block version 4 so that majority version 4
// is not achieved, but the final block in the interval is version 4.
//
// The treshold state for the dummy deployments must still be defined
// The threshold state for the dummy deployments must still be defined
// since it still requires a v4 majority proof-of-work upgrade before
// moving to started.
// ---------------------------------------------------------------------
@ -375,7 +375,7 @@ func TestThresholdState(t *testing.T) {
// achieved and this will achieve v4 majority proof-of-work upgrade,
// voting can begin at the next rule change interval.
//
// The treshold state for the dummy deployments must still be defined
// The threshold state for the dummy deployments must still be defined
// since even though all required upgrade conditions are met, the state
// change must not happen until the start of the next rule change
// interval.
@ -405,7 +405,7 @@ func TestThresholdState(t *testing.T) {
// vote bits to include yes votes for the first test dummy agenda and
// no for the second test dummy agenda to ensure they aren't counted.
//
// The treshold state for the dummy deployments must move to started.
// The threshold state for the dummy deployments must move to started.
// Even though the majority of the votes have already been voting yes
// for the first test dummy agenda, and no for the second one, they must
// not count, otherwise it would move straight to lockedin or failed,
@ -437,7 +437,7 @@ func TestThresholdState(t *testing.T) {
// vote bits to include yes votes for the first test dummy agenda and
// no for the second test dummy agenda to ensure they aren't counted.
//
// The treshold state for the dummy deployments must remain in started
// The threshold state for the dummy deployments must remain in started
// because the votes are an old version and thus have a different
// definition and don't apply to version 4.
// ---------------------------------------------------------------------
@ -468,7 +468,7 @@ func TestThresholdState(t *testing.T) {
// votes for the first test dummy agenda and a majority no for the
// second test dummy agenda.
//
// The treshold state for the dummy deployments must remain in started
// The threshold state for the dummy deployments must remain in started
// because quorum was not reached.
// ---------------------------------------------------------------------
@ -504,7 +504,7 @@ func TestThresholdState(t *testing.T) {
// majority yes for the first test dummy agenda and a few votes shy of a
// majority no for the second test dummy agenda.
//
// The treshold state for the dummy deployments must remain in started
// The threshold state for the dummy deployments must remain in started
// because even though quorum was reached, a required majority was not.
// ---------------------------------------------------------------------
@ -547,7 +547,7 @@ func TestThresholdState(t *testing.T) {
// vote bits to yes for the first test dummy agenda and no to the second
// one.
//
// The treshold state for the first dummy deployment must move to
// The threshold state for the first dummy deployment must move to
// lockedin since a majority yes vote was achieved while the second
// dummy deployment must move to failed since a majority no vote was
// achieved.
@ -578,12 +578,12 @@ func TestThresholdState(t *testing.T) {
// vote bits to include no votes for the first test dummy agenda and
// yes votes for the second one.
//
// The treshold state for the first dummy deployment must move to active
// since even though the interval had a majority no votes, lockedin
// status has already been achieved and can't be undone without a new
// agenda. Similarly, the second one must remain in failed even though
// the interval had a majority yes votes since a failed state can't be
// undone.
// The threshold state for the first dummy deployment must move to
// active since even though the interval had a majority no votes,
// lockedin status has already been achieved and can't be undone without
// a new agenda. Similarly, the second one must remain in failed even
// though the interval had a majority yes votes since a failed state
// can't be undone.
// ---------------------------------------------------------------------
blocksNeeded = stakeValidationHeight + ruleChangeInterval*8 - 1 -

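The scenarios above walk agendas through the full lifecycle; a compact sketch of the per-rule-change-interval transitions they exercise, using descriptive names rather than the package's identifiers:

package main

import "fmt"

type thresholdState int

const (
	thresholdDefined thresholdState = iota
	thresholdStarted
	thresholdLockedIn
	thresholdActive
	thresholdFailed
)

// nextThresholdState advances one rule change interval boundary. Voting
// only moves the state while started; lockedin always ratchets to active,
// and failed (like active) is terminal until a new agenda is defined.
func nextThresholdState(cur thresholdState, versionUpgraded, quorum, majorityYes, majorityNo bool) thresholdState {
	switch cur {
	case thresholdDefined:
		if versionUpgraded {
			return thresholdStarted
		}
	case thresholdStarted:
		if quorum && majorityYes {
			return thresholdLockedIn
		}
		if quorum && majorityNo {
			return thresholdFailed
		}
	case thresholdLockedIn:
		return thresholdActive
	}
	return cur
}

func main() {
	s := thresholdDefined
	for _, step := range []struct{ upgraded, quorum, yes, no bool }{
		{false, false, false, false}, // no majority version upgrade: defined
		{true, false, false, false},  // upgrade achieved: started
		{true, true, false, false},   // quorum but no majority: stays started
		{true, true, true, false},    // majority yes: lockedin
		{true, false, false, true},   // votes can no longer undo it: active
	} {
		s = nextThresholdState(s, step.upgraded, step.quorum, step.yes, step.no)
		fmt.Println(s)
	}
}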
View File

@ -21,7 +21,7 @@ func (s timeSorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
// Less returns whether the timstamp with index i should sort before the
// Less returns whether the timestamp with index i should sort before the
// timestamp with index j. It is part of the sort.Interface implementation.
func (s timeSorter) Less(i, j int) bool {
return s[i] < s[j]


@ -242,7 +242,7 @@ func upgradeToVersion2(db database.DB, chainParams *chaincfg.Params, dbInfo *dat
}
// migrateBlockIndex migrates all block entries from the v1 block index bucket
// manged by ffldb to the v2 bucket managed by this package. The v1 bucket
// managed by ffldb to the v2 bucket managed by this package. The v1 bucket
// stored all block entries keyed by block hash, whereas the v2 bucket stores
// them keyed by block height + hash. Also, the old block index only stored the
// header, while the new one stores all info needed to recreate block nodes.
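A sketch of the composite v2 key described here, serializing the height big-endian so that byte-wise cursor iteration visits entries in height order (the helper name and exact layout are assumptions):

package main

import (
	"encoding/binary"
	"fmt"

	"github.com/decred/dcrd/chaincfg/chainhash"
)

// blockIndexKeySketch builds a v2-style key: a 4 byte big-endian height
// followed by the block hash, so lexicographic key order equals height
// order.
func blockIndexKeySketch(blockHash *chainhash.Hash, height uint32) []byte {
	key := make([]byte, 4+chainhash.HashSize)
	binary.BigEndian.PutUint32(key[0:4], height)
	copy(key[4:], blockHash[:])
	return key
}

func main() {
	var hash chainhash.Hash
	fmt.Printf("% x\n", blockIndexKeySketch(&hash, 12345))
}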


@ -725,7 +725,7 @@ func checkBlockSanity(block *dcrutil.Block, timeSource MedianTimeSource, flags B
return ruleError(ErrTooManyRevocations, errStr)
}
// A block must only contain stake transactions of the the allowed
// A block must only contain stake transactions of the allowed
// types.
//
// NOTE: This is not possible to hit at the time this comment was
@ -752,7 +752,7 @@ func checkBlockSanity(block *dcrutil.Block, timeSource MedianTimeSource, flags B
return ruleError(ErrFreshStakeMismatch, errStr)
}
// A block header must commit to the the actual number of votes that are
// A block header must commit to the actual number of votes that are
// in the block.
if int64(header.Voters) != totalVotes {
errStr := fmt.Sprintf("block header commitment to %d votes "+
@ -1027,7 +1027,7 @@ func (b *BlockChain) checkBlockHeaderPositional(header *wire.BlockHeader, prevNo
//
// The flags modify the behavior of this function as follows:
// - BFFastAdd: The transactions are not checked to see if they are expired and
// the coinbae height check is not performed.
// the coinbase height check is not performed.
//
// The flags are also passed to checkBlockHeaderPositional. See its
// documentation for how the flags modify its behavior.
@ -1496,7 +1496,7 @@ func isStakeScriptHash(script []byte, stakeOpcode byte) bool {
}
// isAllowedTicketInputScriptForm returns whether or not the passed public key
// script is a one of the allowed forms for a ticket input.
// script is one of the allowed forms for a ticket input.
func isAllowedTicketInputScriptForm(script []byte) bool {
return isPubKeyHash(script) || isScriptHash(script) ||
isStakePubKeyHash(script, txscript.OP_SSGEN) ||
@ -1726,7 +1726,7 @@ func checkTicketRedeemerCommitments(ticketHash *chainhash.Hash, ticketOuts []*st
}
contributionSumBig := big.NewInt(contributionSum)
// The outputs that satisify the commitments of the ticket start at offset
// The outputs that satisfy the commitments of the ticket start at offset
// 2 for votes while they start at 0 for revocations. Also, the payments
// must be tagged with the appropriate stake opcode depending on whether it
// is a vote or a revocation. Finally, the fee limits in the original
@ -1794,7 +1794,7 @@ func checkTicketRedeemerCommitments(ticketHash *chainhash.Hash, ticketOuts []*st
// revocations).
//
// It should be noted that, due to the scaling, the sum of the generated
// amounts for mult-participant votes might be a few atoms less than
// amounts for multi-participant votes might be a few atoms less than
// the full amount and the difference is treated as a standard
// transaction fee.
commitmentAmt := extractTicketCommitAmount(commitmentScript)
@ -1803,7 +1803,7 @@ func checkTicketRedeemerCommitments(ticketHash *chainhash.Hash, ticketOuts []*st
// Ensure the amount paid adheres to the commitment while taking into
// account any fee limits that might be imposed. The output amount must
// exactly match the calculated amount when when not encumbered with a
// exactly match the calculated amount when not encumbered with a
// fee limit. On the other hand, when it is encumbered, it must be
// between the minimum amount imposed by the fee limit and the
// calculated amount.
@ -1908,7 +1908,7 @@ func checkVoteInputs(subsidyCache *standalone.SubsidyCache, tx *dcrutil.Tx, txHe
ticketHash := &ticketIn.PreviousOutPoint.Hash
ticketUtxo := view.LookupEntry(ticketHash)
if ticketUtxo == nil || ticketUtxo.IsFullySpent() {
str := fmt.Sprintf("ticket output %v referenced by vote %s:%d either "+
str := fmt.Sprintf("ticket output %v referenced by vote %s:%d either "+
"does not exist or has already been spent",
ticketIn.PreviousOutPoint, voteHash, ticketInIdx)
return ruleError(ErrMissingTxOut, str)
@ -2096,7 +2096,7 @@ func CheckTransactionInputs(subsidyCache *standalone.SubsidyCache, tx *dcrutil.T
}
}
// Perform additional checks on vote transactions such as verying that the
// Perform additional checks on vote transactions such as verifying that the
// referenced ticket exists, the stakebase input commits to correct subsidy,
// the output amounts adhere to the commitments of the referenced ticket,
// and the ticket maturity requirements are met.


@ -215,7 +215,7 @@ func TestSequenceLocksActive(t *testing.T) {
got := SequenceLockActive(&seqLock, test.blockHeight,
time.Unix(test.medianTime, 0))
if got != test.want {
t.Errorf("%s: mismatched seqence lock status - got %v, "+
t.Errorf("%s: mismatched sequence lock status - got %v, "+
"want %v", test.name, got, test.want)
continue
}


@ -66,7 +66,7 @@ var (
},
{
Id: "Vote against",
Description: "Vote against all multiple ",
Description: "Vote against all multiple",
Bits: 0x20, // 0b0010 0000
IsAbstain: false,
IsNo: true,


@ -1,4 +1,4 @@
// Copyright (c) 2015-2018 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -8,7 +8,7 @@ import (
"sync"
"time"
"github.com/decred/dcrd/dcrutil"
"github.com/decred/dcrd/dcrutil/v2"
"github.com/decred/slog"
)


@ -9,21 +9,20 @@ import (
"container/list"
"encoding/binary"
"fmt"
"math/rand"
"os"
"path/filepath"
"sync"
"sync/atomic"
"time"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/stake"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/blockchain/standalone"
"github.com/decred/dcrd/blockchain/v2"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
"github.com/decred/dcrd/dcrutil"
"github.com/decred/dcrd/fees"
"github.com/decred/dcrd/mempool/v2"
"github.com/decred/dcrd/chaincfg/v2"
"github.com/decred/dcrd/database/v2"
"github.com/decred/dcrd/dcrutil/v2"
"github.com/decred/dcrd/fees/v2"
"github.com/decred/dcrd/mempool/v3"
"github.com/decred/dcrd/wire"
)
@ -239,53 +238,6 @@ type isCurrentMsg struct {
reply chan bool
}
// getCurrentTemplateMsg handles a request for the current mining block template.
type getCurrentTemplateMsg struct {
reply chan getCurrentTemplateResponse
}
// getCurrentTemplateResponse is a response sent to the reply channel of a
// getCurrentTemplateMsg.
type getCurrentTemplateResponse struct {
Template *BlockTemplate
}
// setCurrentTemplateMsg handles a request to change the current mining block
// template.
type setCurrentTemplateMsg struct {
Template *BlockTemplate
reply chan setCurrentTemplateResponse
}
// setCurrentTemplateResponse is a response sent to the reply channel of a
// setCurrentTemplateMsg.
type setCurrentTemplateResponse struct {
}
// getParentTemplateMsg handles a request for the current parent mining block
// template.
type getParentTemplateMsg struct {
reply chan getParentTemplateResponse
}
// getParentTemplateResponse is a response sent to the reply channel of a
// getParentTemplateMsg.
type getParentTemplateResponse struct {
Template *BlockTemplate
}
// setParentTemplateMsg handles a request to change the parent mining block
// template.
type setParentTemplateMsg struct {
Template *BlockTemplate
reply chan setParentTemplateResponse
}
// setParentTemplateResponse is a response sent to the reply channel of a
// setParentTemplateMsg.
type setParentTemplateResponse struct {
}
// headerNode is used as a node in a list of headers that are linked together
// between checkpoints.
type headerNode struct {
@ -296,11 +248,10 @@ type headerNode struct {
// PeerNotifier provides an interface for server peer notifications.
type PeerNotifier interface {
// AnnounceNewTransactions generates and relays inventory vectors and
// notifies both websocket and getblocktemplate long poll clients of
// the passed transactions.
// notifies websocket clients of the passed transactions.
AnnounceNewTransactions(txns []*dcrutil.Tx)
// UpdatePeerHeights updates the heights of all peers who have have
// UpdatePeerHeights updates the heights of all peers who have
// announced the latest connected main chain block, or a recognized orphan.
UpdatePeerHeights(latestBlkHash *chainhash.Hash, latestHeight int64, updateSource *serverPeer)
@ -319,8 +270,9 @@ type blockManagerConfig struct {
TimeSource blockchain.MedianTimeSource
// The following fields are for accessing the chain and its configuration.
Chain *blockchain.BlockChain
ChainParams *chaincfg.Params
Chain *blockchain.BlockChain
ChainParams *chaincfg.Params
SubsidyCache *standalone.SubsidyCache
// The following fields provide access to the fee estimator, mempool and
// the background block template generator.
@ -328,7 +280,7 @@ type blockManagerConfig struct {
TxMemPool *mempool.TxPool
BgBlkTmplGenerator *BgBlkTmplGenerator
// The following fields are blockManger callbacks.
// The following fields are blockManager callbacks.
NotifyWinningTickets func(*WinningTicketsNtfnData)
PruneRebroadcastInventory func()
RpcServer func() *rpcServer
@ -340,7 +292,6 @@ type blockManager struct {
cfg *blockManagerConfig
started int32
shutdown int32
chain *blockchain.BlockChain
rejectedTxns map[chainhash.Hash]struct{}
requestedTxns map[chainhash.Hash]struct{}
requestedBlocks map[chainhash.Hash]struct{}
@ -363,9 +314,7 @@ type blockManager struct {
lotteryDataBroadcast map[chainhash.Hash]struct{}
lotteryDataBroadcastMutex sync.RWMutex
cachedCurrentTemplate *BlockTemplate
cachedParentTemplate *BlockTemplate
AggressiveMining bool
AggressiveMining bool
// The following fields are used to filter duplicate block announcements.
announcedBlockMtx sync.Mutex
@ -410,7 +359,7 @@ func (b *blockManager) findNextHeaderCheckpoint(height int64) *chaincfg.Checkpoi
if cfg.DisableCheckpoints {
return nil
}
checkpoints := b.chain.Checkpoints()
checkpoints := b.cfg.Chain.Checkpoints()
if len(checkpoints) == 0 {
return nil
}
@ -433,6 +382,20 @@ func (b *blockManager) findNextHeaderCheckpoint(height int64) *chaincfg.Checkpoi
return nextCheckpoint
}
// chainBlockLocatorToHashes converts a block locator from chain to a slice
// of hashes.
func chainBlockLocatorToHashes(locator blockchain.BlockLocator) []chainhash.Hash {
if len(locator) == 0 {
return nil
}
result := make([]chainhash.Hash, 0, len(locator))
for _, hash := range locator {
result = append(result, *hash)
}
return result
}
// startSync will choose the best peer among the available candidate peers to
// download/sync the blockchain from. When syncing is already running, it
// simply returns. It also examines the candidates for any which are no longer
@ -443,7 +406,7 @@ func (b *blockManager) startSync(peers *list.List) {
return
}
best := b.chain.BestSnapshot()
best := b.cfg.Chain.BestSnapshot()
var bestPeer *serverPeer
var enext *list.Element
for e := peers.Front(); e != nil; e = enext {
@ -452,7 +415,7 @@ func (b *blockManager) startSync(peers *list.List) {
// Remove sync candidate peers that are no longer candidates due
// to passing their latest known block. NOTE: The < is
// intentional as opposed to <=. While techcnically the peer
// intentional as opposed to <=. While technically the peer
// doesn't have a later block when it's equal, it will likely
// have one soon so it is a reasonable choice. It also allows
// the case where both are at 0 such as during regression test.
@ -477,12 +440,13 @@ func (b *blockManager) startSync(peers *list.List) {
// to send.
b.requestedBlocks = make(map[chainhash.Hash]struct{})
locator, err := b.chain.LatestBlockLocator()
blkLocator, err := b.cfg.Chain.LatestBlockLocator()
if err != nil {
bmgrLog.Errorf("Failed to get block locator for the "+
"latest block: %v", err)
return
}
locator := chainBlockLocatorToHashes(blkLocator)
bmgrLog.Infof("Syncing to block height %d from peer %v",
bestPeer.LastBlock(), bestPeer.Addr())
@ -542,8 +506,8 @@ func (b *blockManager) isSyncCandidate(sp *serverPeer) bool {
return sp.Services()&wire.SFNodeNetwork == wire.SFNodeNetwork
}
// syncMiningStateAfterSync polls the blockMananger for the current sync
// state; if the mananger is synced, it executes a call to the peer to
// syncMiningStateAfterSync polls the blockManager for the current sync
// state; if the manager is synced, it executes a call to the peer to
// sync the mining state to the network.
func (b *blockManager) syncMiningStateAfterSync(sp *serverPeer) {
go func() {
@ -624,13 +588,90 @@ func (b *blockManager) handleDonePeerMsg(peers *list.List, sp *serverPeer) {
if b.syncPeer != nil && b.syncPeer == sp {
b.syncPeer = nil
if b.headersFirstMode {
best := b.chain.BestSnapshot()
best := b.cfg.Chain.BestSnapshot()
b.resetHeaderState(&best.Hash, best.Height)
}
b.startSync(peers)
}
}
// errToWireRejectCode determines the wire rejection code and description for a
// given error. This function can convert some select blockchain and mempool
// error types to the historical rejection codes used on the p2p wire protocol.
func errToWireRejectCode(err error) (wire.RejectCode, string) {
// Unwrap mempool errors.
if rerr, ok := err.(mempool.RuleError); ok {
err = rerr.Err
}
// The default reason to reject a transaction/block is due to it being
// invalid somehow.
code := wire.RejectInvalid
var reason string
switch err := err.(type) {
case blockchain.RuleError:
// Convert the chain error to a reject code.
switch err.ErrorCode {
// Rejected due to duplicate.
case blockchain.ErrDuplicateBlock:
code = wire.RejectDuplicate
// Rejected due to obsolete version.
case blockchain.ErrBlockVersionTooOld:
code = wire.RejectObsolete
// Rejected due to checkpoint.
case blockchain.ErrCheckpointTimeTooOld,
blockchain.ErrDifficultyTooLow,
blockchain.ErrBadCheckpoint,
blockchain.ErrForkTooOld:
code = wire.RejectCheckpoint
}
reason = err.Error()
case mempool.TxRuleError:
switch err.ErrorCode {
// Error codes which map to a duplicate transaction already
// mined or in the mempool.
case mempool.ErrMempoolDoubleSpend,
mempool.ErrAlreadyVoted,
mempool.ErrDuplicate,
mempool.ErrTooManyVotes,
mempool.ErrDuplicateRevocation,
mempool.ErrAlreadyExists,
mempool.ErrOrphan:
code = wire.RejectDuplicate
// Error codes which map to a non-standard transaction being
// relayed.
case mempool.ErrOrphanPolicyViolation,
mempool.ErrOldVote,
mempool.ErrSeqLockUnmet,
mempool.ErrNonStandard:
code = wire.RejectNonstandard
// Error codes which map to an insufficient fee being paid.
case mempool.ErrInsufficientFee,
mempool.ErrInsufficientPriority:
code = wire.RejectInsufficientFee
// Error codes which map to an attempt to create dust outputs.
case mempool.ErrDustOutput:
code = wire.RejectDust
}
reason = err.Error()
default:
reason = fmt.Sprintf("rejected: %v", err)
}
return code, reason
}
// handleTxMsg handles transaction messages from all peers.
func (b *blockManager) handleTxMsg(tmsg *txMsg) {
// NOTE: BitcoinJ, and possibly other wallets, don't follow the spec of
@ -685,7 +726,7 @@ func (b *blockManager) handleTxMsg(tmsg *txMsg) {
// Convert the error into an appropriate reject message and
// send it.
code, reason := mempool.ErrToRejectErr(err)
code, reason := errToWireRejectCode(err)
tmsg.peer.PushRejectMsg(wire.CmdTx, code, reason, txHash,
false)
return
@ -697,7 +738,7 @@ func (b *blockManager) handleTxMsg(tmsg *txMsg) {
// current returns true if we believe we are synced with our peers, false if we
// still have blocks to check
func (b *blockManager) current() bool {
if !b.chain.IsCurrent() {
if !b.cfg.Chain.IsCurrent() {
return false
}
@ -709,183 +750,31 @@ func (b *blockManager) current() bool {
// No matter what chain thinks, if we are below the block we are syncing
// to we are not current.
if b.chain.BestSnapshot().Height < b.syncPeer.LastBlock() {
if b.cfg.Chain.BestSnapshot().Height < b.syncPeer.LastBlock() {
return false
}
return true
}
// checkBlockForHiddenVotes checks to see if a newly added block contains
// any votes that were previously unknown to our daemon. If it does, it
// adds these votes to the cached parent block template.
//
// This is UNSAFE for concurrent access. It must be called in single threaded
// access through the block mananger. All template access must also be routed
// through the block manager.
func (b *blockManager) checkBlockForHiddenVotes(block *dcrutil.Block) {
// Identify the cached parent template; it's possible that
// the parent template hasn't yet been updated, so we may
// need to use the current template.
var template *BlockTemplate
if b.cachedCurrentTemplate != nil {
if b.cachedCurrentTemplate.Height ==
block.Height() {
template = b.cachedCurrentTemplate
}
}
if template == nil &&
b.cachedParentTemplate != nil {
if b.cachedParentTemplate.Height ==
block.Height() {
template = b.cachedParentTemplate
}
// calcTxTreeMerkleRoot calculates and returns the merkle root for the provided
// transactions. The full (including witness data) hashes for the transactions
// are used as required for merkle roots.
func calcTxTreeMerkleRoot(transactions []*dcrutil.Tx) chainhash.Hash {
if len(transactions) == 0 {
// All zero.
return chainhash.Hash{}
}
// No template to alter.
if template == nil {
return
// Note that the backing array is provided with space for one additional
// item when the number of leaves is odd as an optimization for the in-place
// calculation to avoid the need grow the backing array.
allocLen := len(transactions) + len(transactions)&1
leaves := make([]chainhash.Hash, 0, allocLen)
for _, tx := range transactions {
leaves = append(leaves, tx.MsgTx().TxHashFull())
}
// Make sure that the template has the same parent
// as the new block.
if template.Block.Header.PrevBlock !=
block.MsgBlock().Header.PrevBlock {
bmgrLog.Warnf("error found while trying to check incoming " +
"block for hidden votes: template did not have the " +
"same parent as the incoming block")
return
}
votesFromBlock := make([]*dcrutil.Tx, 0,
activeNetParams.TicketsPerBlock)
for _, stx := range block.STransactions() {
if stake.IsSSGen(stx.MsgTx()) {
votesFromBlock = append(votesFromBlock, stx)
}
}
// Now that we have the template, grab the votes and compare
// them with those found in the newly added block. If we don't
// the votes, they will need to be added to our block template.
// Here we map the vote by their ticket hashes, since the vote
// hash itself varies with the settings of voteBits.
var newVotes []*dcrutil.Tx
var oldTickets []*dcrutil.Tx
var oldRevocations []*dcrutil.Tx
oldVoteMap := make(map[chainhash.Hash]struct{},
int(b.cfg.ChainParams.TicketsPerBlock))
templateBlock := dcrutil.NewBlock(template.Block)
// Add all the votes found in our template. Keep their
// hashes in a map for easy lookup in the next loop.
for _, stx := range templateBlock.STransactions() {
mstx := stx.MsgTx()
txType := stake.DetermineTxType(mstx)
if txType == stake.TxTypeSSGen {
ticketH := mstx.TxIn[1].PreviousOutPoint.Hash
oldVoteMap[ticketH] = struct{}{}
newVotes = append(newVotes, stx)
}
// Create a list of old tickets and revocations
// while we're in this loop.
if txType == stake.TxTypeSStx {
oldTickets = append(oldTickets, stx)
}
if txType == stake.TxTypeSSRtx {
oldRevocations = append(oldRevocations, stx)
}
}
// Check the votes seen in the block. If the votes
// are new, append them.
for _, vote := range votesFromBlock {
ticketH := vote.MsgTx().TxIn[1].PreviousOutPoint.Hash
if _, exists := oldVoteMap[ticketH]; !exists {
newVotes = append(newVotes, vote)
}
}
// Check the length of the reconstructed voter list for
// integrity.
votesTotal := len(newVotes)
if votesTotal > int(b.cfg.ChainParams.TicketsPerBlock) {
bmgrLog.Warnf("error found while adding hidden votes "+
"from block %v to the old block template: %v max "+
"votes expected but %v votes found", block.Hash(),
int(b.cfg.ChainParams.TicketsPerBlock), votesTotal)
return
}
// Clear the old stake transactions and begin inserting the
// new vote list along with all the old transactions. Do this
// for both the underlying template msgBlock and a new slice
// of transaction pointers so that a new merkle root can be
// calculated.
template.Block.ClearSTransactions()
updatedTxTreeStake := make([]*dcrutil.Tx, 0,
len(newVotes)+len(oldTickets)+len(oldRevocations))
for _, vote := range newVotes {
updatedTxTreeStake = append(updatedTxTreeStake, vote)
template.Block.AddSTransaction(vote.MsgTx())
}
for _, ticket := range oldTickets {
updatedTxTreeStake = append(updatedTxTreeStake, ticket)
template.Block.AddSTransaction(ticket.MsgTx())
}
for _, revocation := range oldRevocations {
updatedTxTreeStake = append(updatedTxTreeStake, revocation)
template.Block.AddSTransaction(revocation.MsgTx())
}
// Create a new coinbase and update the coinbase pointer
// in the underlying template msgBlock.
random, err := wire.RandomUint64()
if err != nil {
return
}
height := block.MsgBlock().Header.Height
opReturnPkScript, err := standardCoinbaseOpReturn(height, random)
if err != nil {
// Stopping at this step will lead to a corrupted block template
// because the stake tree has already been manipulated, so throw
// an error.
bmgrLog.Errorf("failed to create coinbase OP_RETURN while generating " +
"block with extra found voters")
return
}
coinbase, err := createCoinbaseTx(b.chain.FetchSubsidyCache(),
template.Block.Transactions[0].TxIn[0].SignatureScript,
opReturnPkScript, int64(template.Block.Header.Height),
cfg.miningAddrs[rand.Intn(len(cfg.miningAddrs))],
uint16(votesTotal), b.cfg.ChainParams)
if err != nil {
bmgrLog.Errorf("failed to create coinbase while generating " +
"block with extra found voters")
return
}
template.Block.Transactions[0] = coinbase.MsgTx()
// Patch the header. First, reconstruct the merkle trees, then
// correct the number of voters, and finally recalculate the size.
updatedTxTreeRegular := make([]*dcrutil.Tx, 0,
len(template.Block.Transactions))
updatedTxTreeRegular = append(updatedTxTreeRegular, coinbase)
for i, mtx := range template.Block.Transactions {
// Coinbase
if i == 0 {
continue
}
tx := dcrutil.NewTx(mtx)
updatedTxTreeRegular = append(updatedTxTreeRegular, tx)
}
merkles := blockchain.BuildMerkleTreeStore(updatedTxTreeRegular)
template.Block.Header.MerkleRoot = *merkles[len(merkles)-1]
smerkles := blockchain.BuildMerkleTreeStore(updatedTxTreeStake)
template.Block.Header.Voters = uint16(votesTotal)
template.Block.Header.StakeRoot = *smerkles[len(smerkles)-1]
template.Block.Header.Size = uint32(template.Block.SerializeSize())
return standalone.CalcMerkleRootInPlace(leaves)
}
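For reference, a minimal runnable sketch of the in-place merkle root helper the new calcTxTreeMerkleRoot delegates to. The sample leaf values and the copy caveat are illustrative assumptions, not part of this diff:

package main

import (
	"fmt"

	"github.com/decred/dcrd/blockchain/standalone"
	"github.com/decred/dcrd/chaincfg/chainhash"
)

func main() {
	// These leaf hashes stand in for the full transaction hashes
	// collected above. CalcMerkleRootInPlace reuses the backing array,
	// so pass a copy if the leaves must survive the call.
	leaves := []chainhash.Hash{{0x01}, {0x02}, {0x03}}
	root := standalone.CalcMerkleRootInPlace(leaves)
	fmt.Println(root)
}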
// handleBlockMsg handles block messages from all peers.
@ -931,7 +820,7 @@ func (b *blockManager) handleBlockMsg(bmsg *blockMsg) {
// Process the block to include validation, best chain selection, orphan
// handling, etc.
forkLen, isOrphan, err := b.chain.ProcessBlock(bmsg.block,
forkLen, isOrphan, err := b.cfg.Chain.ProcessBlock(bmsg.block,
behaviorFlags)
if err != nil {
// When the error is a rule error, it means the block was simply
@ -952,21 +841,21 @@ func (b *blockManager) handleBlockMsg(bmsg *blockMsg) {
// Convert the error into an appropriate reject message and
// send it.
code, reason := mempool.ErrToRejectErr(err)
code, reason := errToWireRejectCode(err)
bmsg.peer.PushRejectMsg(wire.CmdBlock, code, reason,
blockHash, false)
return
}
// Meta-data about the new block this peer is reporting. We use this
// below to update this peer's lastest block height and the heights of
// below to update this peer's latest block height and the heights of
// other peers based on their last announced block hash. This allows us
// to dynamically update the block heights of peers, avoiding stale
// heights when looking for a new sync peer. Upon acceptance of a block
// or recognition of an orphan, we also use this information to update
// the block heights of other peers whose invs may have been ignored
// if we are actively syncing while the chain is not yet current or
// who may have lost the lock announcment race.
// who may have lost the lock announcement race.
var heightUpdate int64
var blkHashUpdate *chainhash.Hash
@ -982,12 +871,13 @@ func (b *blockManager) handleBlockMsg(bmsg *blockMsg) {
heightUpdate = int64(cbHeight)
blkHashUpdate = blockHash
orphanRoot := b.chain.GetOrphanRoot(blockHash)
locator, err := b.chain.LatestBlockLocator()
orphanRoot := b.cfg.Chain.GetOrphanRoot(blockHash)
blkLocator, err := b.cfg.Chain.LatestBlockLocator()
if err != nil {
bmgrLog.Warnf("Failed to get block locator for the "+
"latest block: %v", err)
} else {
locator := chainBlockLocatorToHashes(blkLocator)
err = bmsg.peer.PushGetBlocksMsg(locator, orphanRoot)
if err != nil {
bmgrLog.Warnf("Failed to push getblocksmsg for the "+
@ -1001,18 +891,9 @@ func (b *blockManager) handleBlockMsg(bmsg *blockMsg) {
onMainChain := !isOrphan && forkLen == 0
if onMainChain {
// A new block is connected, however, this new block may have
// votes in it that were hidden from the network and which
// validate our parent block. We should bolt these new votes
// into the tx tree stake of the old block template on parent.
svl := b.cfg.ChainParams.StakeValidationHeight
if b.AggressiveMining && bmsg.block.Height() >= svl {
b.checkBlockForHiddenVotes(bmsg.block)
}
// Notify stake difficulty subscribers and prune invalidated
// transactions.
best := b.chain.BestSnapshot()
best := b.cfg.Chain.BestSnapshot()
r := b.cfg.RpcServer()
if r != nil {
// Update registered websocket clients on the
@ -1028,19 +909,12 @@ func (b *blockManager) handleBlockMsg(bmsg *blockMsg) {
b.cfg.TxMemPool.PruneExpiredTx()
// Update this peer's latest block height, for future
// potential sync node candidancy.
// potential sync node candidacy.
heightUpdate = best.Height
blkHashUpdate = &best.Hash
// Clear the rejected transactions.
b.rejectedTxns = make(map[chainhash.Hash]struct{})
// Allow any clients performing long polling via the
// getblocktemplate RPC to be notified when the new block causes
// their old block template to become stale.
if r := b.cfg.RpcServer(); r != nil {
r.gbtWorkState.NotifyBlockConnected(blockHash)
}
}
}
@ -1080,7 +954,7 @@ func (b *blockManager) handleBlockMsg(bmsg *blockMsg) {
prevHash := b.nextCheckpoint.Hash
b.nextCheckpoint = b.findNextHeaderCheckpoint(prevHeight)
if b.nextCheckpoint != nil {
locator := blockchain.BlockLocator([]*chainhash.Hash{prevHash})
locator := []chainhash.Hash{*prevHash}
err := bmsg.peer.PushGetHeadersMsg(locator, b.nextCheckpoint.Hash)
if err != nil {
bmgrLog.Warnf("Failed to send getheaders message to "+
@ -1099,7 +973,7 @@ func (b *blockManager) handleBlockMsg(bmsg *blockMsg) {
b.headersFirstMode = false
b.headerList.Init()
bmgrLog.Infof("Reached the final checkpoint -- switching to normal mode")
locator := blockchain.BlockLocator([]*chainhash.Hash{blockHash})
locator := []chainhash.Hash{*blockHash}
err = bmsg.peer.PushGetBlocksMsg(locator, &zeroHash)
if err != nil {
bmgrLog.Warnf("Failed to send getblocks message to peer %s: %v",
@ -1186,7 +1060,7 @@ func (b *blockManager) handleHeadersMsg(hmsg *headersMsg) {
prevNodeEl := b.headerList.Back()
if prevNodeEl == nil {
bmgrLog.Warnf("Header list does not contain a previous" +
"element as expected -- disconnecting peer")
" element as expected -- disconnecting peer")
hmsg.peer.Disconnect()
return
}
@ -1248,7 +1122,7 @@ func (b *blockManager) handleHeadersMsg(hmsg *headersMsg) {
// This header is not a checkpoint, so request the next batch of
// headers starting from the latest known header and ending with the
// next checkpoint.
locator := blockchain.BlockLocator([]*chainhash.Hash{finalHash})
locator := []chainhash.Hash{*finalHash}
err := hmsg.peer.PushGetHeadersMsg(locator, b.nextCheckpoint.Hash)
if err != nil {
bmgrLog.Warnf("Failed to send getheaders message to "+
@ -1267,7 +1141,7 @@ func (b *blockManager) haveInventory(invVect *wire.InvVect) (bool, error) {
case wire.InvTypeBlock:
// Ask chain if the block is known to it in any form (main
// chain, side chain, or orphan).
return b.chain.HaveBlock(&invVect.Hash)
return b.cfg.Chain.HaveBlock(&invVect.Hash)
case wire.InvTypeTx:
// Ask the transaction memory pool if the transaction is known
@ -1278,14 +1152,14 @@ func (b *blockManager) haveInventory(invVect *wire.InvVect) (bool, error) {
// Check if the transaction exists from the point of view of the
// end of the main chain.
entry, err := b.chain.FetchUtxoEntry(&invVect.Hash)
entry, err := b.cfg.Chain.FetchUtxoEntry(&invVect.Hash)
if err != nil {
return false, err
}
return entry != nil && !entry.IsFullySpent(), nil
}
// The requested inventory is is an unsupported type, so just claim
// The requested inventory is an unsupported type, so just claim
// it is known to avoid requesting it.
return true, nil
}
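Several hunks above and below route locators through a chainBlockLocatorToHashes helper whose body is not part of this diff. A plausible reconstruction, assuming blockchain.BlockLocator remains a slice of hash pointers as the dereferencing call sites imply:

package main

import (
	"fmt"

	"github.com/decred/dcrd/blockchain/v2"
	"github.com/decred/dcrd/chaincfg/chainhash"
)

// chainBlockLocatorToHashes is a hypothetical reconstruction; it simply
// flattens the pointer slice into a value slice for the peer package.
func chainBlockLocatorToHashes(locator blockchain.BlockLocator) []chainhash.Hash {
	hashes := make([]chainhash.Hash, 0, len(locator))
	for _, hash := range locator {
		hashes = append(hashes, *hash)
	}
	return hashes
}

func main() {
	locator := blockchain.BlockLocator{new(chainhash.Hash)}
	fmt.Println(len(chainBlockLocatorToHashes(locator))) // 1
}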
@ -1325,7 +1199,7 @@ func (b *blockManager) handleInvMsg(imsg *invMsg) {
// If our chain is current and a peer announces a block we already
// know of, then update their current block height.
if lastBlock != -1 && isCurrent {
blkHeight, err := b.chain.BlockHeightByHash(&invVects[lastBlock].Hash)
blkHeight, err := b.cfg.Chain.BlockHeightByHash(&invVects[lastBlock].Hash)
if err == nil {
imsg.peer.UpdateLastBlockHeight(blkHeight)
}
@ -1384,18 +1258,19 @@ func (b *blockManager) handleInvMsg(imsg *invMsg) {
// resending the orphan block as an available block
// to signal there are more missing blocks that need to
// be requested.
if b.chain.IsKnownOrphan(&iv.Hash) {
if b.cfg.Chain.IsKnownOrphan(&iv.Hash) {
// Request blocks starting at the latest known
// up to the root of the orphan that just came
// in.
orphanRoot := b.chain.GetOrphanRoot(&iv.Hash)
locator, err := b.chain.LatestBlockLocator()
orphanRoot := b.cfg.Chain.GetOrphanRoot(&iv.Hash)
blkLocator, err := b.cfg.Chain.LatestBlockLocator()
if err != nil {
bmgrLog.Errorf("PEER: Failed to get block "+
"locator for the latest block: "+
"%v", err)
continue
}
locator := chainBlockLocatorToHashes(blkLocator)
err = imsg.peer.PushGetBlocksMsg(locator, orphanRoot)
if err != nil {
bmgrLog.Errorf("PEER: Failed to push getblocksmsg "+
@ -1412,7 +1287,8 @@ func (b *blockManager) handleInvMsg(imsg *invMsg) {
// Request blocks after this one up to the
// final one the remote peer knows about (zero
// stop hash).
locator := b.chain.BlockLocatorFromHash(&iv.Hash)
blkLocator := b.cfg.Chain.BlockLocatorFromHash(&iv.Hash)
locator := chainBlockLocatorToHashes(blkLocator)
err = imsg.peer.PushGetBlocksMsg(locator, &zeroHash)
if err != nil {
bmgrLog.Errorf("PEER: Failed to push getblocksmsg: "+
@ -1529,7 +1405,7 @@ out:
case calcNextReqDiffNodeMsg:
difficulty, err :=
b.chain.CalcNextRequiredDiffFromNode(msg.hash,
b.cfg.Chain.CalcNextRequiredDiffFromNode(msg.hash,
msg.timestamp)
msg.reply <- calcNextReqDifficultyResponse{
difficulty: difficulty,
@ -1537,20 +1413,20 @@ out:
}
case calcNextReqStakeDifficultyMsg:
stakeDiff, err := b.chain.CalcNextRequiredStakeDifficulty()
stakeDiff, err := b.cfg.Chain.CalcNextRequiredStakeDifficulty()
msg.reply <- calcNextReqStakeDifficultyResponse{
stakeDifficulty: stakeDiff,
err: err,
}
case forceReorganizationMsg:
err := b.chain.ForceHeadReorganization(
err := b.cfg.Chain.ForceHeadReorganization(
msg.formerBest, msg.newBest)
if err == nil {
// Notify stake difficulty subscribers and prune
// invalidated transactions.
best := b.chain.BestSnapshot()
best := b.cfg.Chain.BestSnapshot()
r := b.cfg.RpcServer()
if r != nil {
r.ntfnMgr.NotifyStakeDifficulty(
@ -1570,14 +1446,14 @@ out:
}
case tipGenerationMsg:
g, err := b.chain.TipGeneration()
g, err := b.cfg.Chain.TipGeneration()
msg.reply <- tipGenerationResponse{
hashes: g,
err: err,
}
case processBlockMsg:
forkLen, isOrphan, err := b.chain.ProcessBlock(
forkLen, isOrphan, err := b.cfg.Chain.ProcessBlock(
msg.block, msg.flags)
if err != nil {
msg.reply <- processBlockResponse{
@ -1593,7 +1469,7 @@ out:
if onMainChain {
// Notify stake difficulty subscribers and prune
// invalidated transactions.
best := b.chain.BestSnapshot()
best := b.cfg.Chain.BestSnapshot()
if r != nil {
r.ntfnMgr.NotifyStakeDifficulty(
&StakeDifficultyNtfnData{
@ -1607,13 +1483,6 @@ out:
b.cfg.TxMemPool.PruneExpiredTx()
}
// Allow any clients performing long polling via the
// getblocktemplate RPC to be notified when the new block causes
// their old block template to become stale.
if r != nil {
r.gbtWorkState.NotifyBlockConnected(msg.block.Hash())
}
msg.reply <- processBlockResponse{
isOrphan: isOrphan,
err: nil,
@ -1630,26 +1499,6 @@ out:
case isCurrentMsg:
msg.reply <- b.current()
case getCurrentTemplateMsg:
cur := deepCopyBlockTemplate(b.cachedCurrentTemplate)
msg.reply <- getCurrentTemplateResponse{
Template: cur,
}
case setCurrentTemplateMsg:
b.cachedCurrentTemplate = deepCopyBlockTemplate(msg.Template)
msg.reply <- setCurrentTemplateResponse{}
case getParentTemplateMsg:
par := deepCopyBlockTemplate(b.cachedParentTemplate)
msg.reply <- getParentTemplateResponse{
Template: par,
}
case setParentTemplateMsg:
b.cachedParentTemplate = deepCopyBlockTemplate(msg.Template)
msg.reply <- setParentTemplateResponse{}
default:
bmgrLog.Warnf("Invalid message type in block handler: %T", msg)
}
@ -1689,8 +1538,15 @@ func isDoubleSpendOrDuplicateError(err error) bool {
}
rerr, ok := merr.Err.(mempool.TxRuleError)
if ok && rerr.RejectCode == wire.RejectDuplicate {
return true
if ok {
switch rerr.ErrorCode {
case mempool.ErrDuplicate:
return true
case mempool.ErrAlreadyExists:
return true
default:
return false
}
}
cerr, ok := merr.Err.(blockchain.RuleError)
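A condensed sketch of the new ErrorCode-based classification above, assuming mempool.RuleError is the outer wrapper type as in the surrounding function; the helper name and the demo main are illustrative only:

package main

import (
	"errors"
	"fmt"

	"github.com/decred/dcrd/mempool/v3"
)

// isDuplicateTxError reports whether err is a mempool transaction rule
// error whose ErrorCode marks the transaction as a duplicate or as
// already existing, mirroring the switch in the hunk above.
func isDuplicateTxError(err error) bool {
	merr, ok := err.(mempool.RuleError)
	if !ok {
		return false
	}
	rerr, ok := merr.Err.(mempool.TxRuleError)
	if !ok {
		return false
	}
	switch rerr.ErrorCode {
	case mempool.ErrDuplicate, mempool.ErrAlreadyExists:
		return true
	}
	return false
}

func main() {
	fmt.Println(isDuplicateTxError(errors.New("unrelated"))) // false
}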
@ -1715,7 +1571,7 @@ func (b *blockManager) handleBlockchainNotification(notification *blockchain.Not
// which could result in a deadlock.
block, ok := notification.Data.(*dcrutil.Block)
if !ok {
bmgrLog.Warnf("New tip block checkedd notification is not a block.")
bmgrLog.Warnf("New tip block checked notification is not a block.")
break
}
@ -1767,7 +1623,7 @@ func (b *blockManager) handleBlockchainNotification(notification *blockchain.Not
// other words, it is extending the shorter side chain. The reorg depth
// would be 106 - (103 - 3) = 6. This should intuitively make sense,
// because if the side chain were to be extended enough to become the
// best chain, it would result in a a reorg that would remove 6 blocks,
// best chain, it would result in a reorg that would remove 6 blocks,
// namely blocks 101, 102, 103, 104, 105, and 106.
blockHash := block.Hash()
bestHeight := band.BestHeight
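A tiny sketch restating the arithmetic from the worked example above; the variable names are illustrative:

package main

import "fmt"

func main() {
	// From the comment above: a side chain extended at height 103 with
	// a fork length of 3 against a best chain tip at height 106.
	bestHeight := int64(106)
	sideHeight := int64(103)
	forkLen := int64(3)
	reorgDepth := bestHeight - (sideHeight - forkLen)
	fmt.Println(reorgDepth) // 6 blocks would be removed by the reorg
}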
@ -1782,7 +1638,7 @@ func (b *blockManager) handleBlockchainNotification(notification *blockchain.Not
// Obtain the winning tickets for this block. handleNotifyMsg
// should be safe for concurrent access of things contained
// within blockchain.
wt, _, _, err := b.chain.LotteryDataForBlock(blockHash)
wt, _, _, err := b.cfg.Chain.LotteryDataForBlock(blockHash)
if err != nil {
bmgrLog.Errorf("Couldn't calculate winning tickets for "+
"accepted block %v: %v", blockHash, err.Error())
@ -1850,7 +1706,7 @@ func (b *blockManager) handleBlockchainNotification(notification *blockchain.Not
// TODO: In the case the new tip disapproves the previous block, any
// transactions the previous block contains in its regular tree which
// double spend the same inputs as transactions in either tree of the
// current tip should ideally be tracked in the pool as eligibile for
// current tip should ideally be tracked in the pool as eligible for
// inclusion in an alternative tip (side chain block) in case the
// current tip block does not get enough votes. However, the
// transaction pool currently does not provide any way to distinguish
@ -2058,10 +1914,6 @@ func (b *blockManager) handleBlockchainNotification(notification *blockchain.Not
if r := b.cfg.RpcServer(); r != nil {
r.ntfnMgr.NotifyReorganization(rd)
}
// Drop the associated mining template from the old chain, since it
// will be no longer valid.
b.cachedCurrentTemplate = nil
}
}
@ -2191,7 +2043,7 @@ func (b *blockManager) requestFromPeer(p *serverPeer, blocks, txs []*chainhash.H
// Check to see if we already have this block, too.
// If so, skip.
exists, err := b.chain.HaveBlock(bh)
exists, err := b.cfg.Chain.HaveBlock(bh)
if err != nil {
return err
}
@ -2228,7 +2080,7 @@ func (b *blockManager) requestFromPeer(p *serverPeer, blocks, txs []*chainhash.H
// Check if the transaction exists from the point of view of the
// end of the main chain.
entry, err := b.chain.FetchUtxoEntry(vh)
entry, err := b.cfg.Chain.FetchUtxoEntry(vh)
if err != nil {
return err
}
@ -2350,37 +2202,7 @@ func (b *blockManager) IsCurrent() bool {
// TicketPoolValue returns the current value of the total stake in the ticket
// pool.
func (b *blockManager) TicketPoolValue() (dcrutil.Amount, error) {
return b.chain.TicketPoolValue()
}
// GetCurrentTemplate gets the current block template for mining.
func (b *blockManager) GetCurrentTemplate() *BlockTemplate {
reply := make(chan getCurrentTemplateResponse)
b.msgChan <- getCurrentTemplateMsg{reply: reply}
response := <-reply
return response.Template
}
// SetCurrentTemplate sets the current block template for mining.
func (b *blockManager) SetCurrentTemplate(bt *BlockTemplate) {
reply := make(chan setCurrentTemplateResponse)
b.msgChan <- setCurrentTemplateMsg{Template: bt, reply: reply}
<-reply
}
// GetParentTemplate gets the current parent block template for mining.
func (b *blockManager) GetParentTemplate() *BlockTemplate {
reply := make(chan getParentTemplateResponse)
b.msgChan <- getParentTemplateMsg{reply: reply}
response := <-reply
return response.Template
}
// SetParentTemplate sets the current parent block template for mining.
func (b *blockManager) SetParentTemplate(bt *BlockTemplate) {
reply := make(chan setParentTemplateResponse)
b.msgChan <- setParentTemplateMsg{Template: bt, reply: reply}
<-reply
return b.cfg.Chain.TicketPoolValue()
}
// newBlockManager returns a new Decred block manager.
@ -2388,7 +2210,6 @@ func (b *blockManager) SetParentTemplate(bt *BlockTemplate) {
func newBlockManager(config *blockManagerConfig) (*blockManager, error) {
bm := blockManager{
cfg: config,
chain: config.Chain,
rejectedTxns: make(map[chainhash.Hash]struct{}),
requestedTxns: make(map[chainhash.Hash]struct{}),
requestedBlocks: make(map[chainhash.Hash]struct{}),
@ -2399,8 +2220,8 @@ func newBlockManager(config *blockManagerConfig) (*blockManager, error) {
quit: make(chan struct{}),
}
best := bm.chain.BestSnapshot()
bm.chain.DisableCheckpoints(cfg.DisableCheckpoints)
best := bm.cfg.Chain.BestSnapshot()
bm.cfg.Chain.DisableCheckpoints(cfg.DisableCheckpoints)
if !cfg.DisableCheckpoints {
// Initialize the next checkpoint based on the current height.
bm.nextCheckpoint = bm.findNextHeaderCheckpoint(best.Height)
@ -2413,7 +2234,7 @@ func newBlockManager(config *blockManagerConfig) (*blockManager, error) {
// Dump the blockchain here if asked for it, and quit.
if cfg.DumpBlockchain != "" {
err := dumpBlockChain(bm.chain, best.Height)
err := dumpBlockChain(bm.cfg.Chain, best.Height)
if err != nil {
return nil, err
}

View File

@ -1,7 +1,7 @@
Certgen
======
[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/certgen)

View File

@ -1,7 +1,7 @@
chaincfg
========
[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/chaincfg)
@ -23,8 +23,8 @@ import (
"fmt"
"log"
"github.com/decred/dcrd/dcrutil"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/dcrutil/v2"
"github.com/decred/dcrd/chaincfg/v2"
)
var testnet = flag.Bool("testnet", false, "operate on the testnet Decred network")

View File

@ -1,4 +1,4 @@
// Copyright (c) 2015-2016 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -9,7 +9,7 @@ import (
"io"
"math/big"
"github.com/decred/dcrd/dcrec/edwards"
"github.com/decred/dcrd/dcrec/edwards/v2"
)
type edwardsDSA struct {
@ -153,7 +153,7 @@ func (e edwardsDSA) Decrypt(privkey []byte, in []byte) ([]byte,
return e.decrypt(privkey, in)
}
// newEdwardsDSA instatiates a function DSA subsystem over the edwards 25519
// newEdwardsDSA instantiates a function DSA subsystem over the edwards 25519
// curve. A caveat for the functions below is that they're all routed through
// interfaces, and nil returns from the library itself for interfaces must
// ALWAYS be checked by checking the return value by attempted dereference
@ -184,14 +184,14 @@ func newEdwardsDSA() DSA {
// Private keys
newPrivateKey: func(d *big.Int) PrivateKey {
pk := edwards.NewPrivateKey(edwardsCurve, d)
pk := edwards.NewPrivateKey(d)
if pk != nil {
return PrivateKey(*pk)
}
return nil
},
privKeyFromBytes: func(pk []byte) (PrivateKey, PublicKey) {
priv, pub := edwards.PrivKeyFromBytes(edwardsCurve, pk)
priv, pub := edwards.PrivKeyFromBytes(pk)
if priv == nil {
return nil, nil
}
@ -203,7 +203,7 @@ func newEdwardsDSA() DSA {
return tpriv, tpub
},
privKeyFromScalar: func(pk []byte) (PrivateKey, PublicKey) {
priv, pub, err := edwards.PrivKeyFromScalar(edwardsCurve, pk)
priv, pub, err := edwards.PrivKeyFromScalar(pk)
if err != nil {
return nil, nil
}
@ -223,12 +223,12 @@ func newEdwardsDSA() DSA {
// Public keys
newPublicKey: func(x *big.Int, y *big.Int) PublicKey {
pk := edwards.NewPublicKey(edwardsCurve, x, y)
pk := edwards.NewPublicKey(x, y)
tpk := PublicKey(*pk)
return tpk
},
parsePubKey: func(pubKeyStr []byte) (PublicKey, error) {
pk, err := edwards.ParsePubKey(edwardsCurve, pubKeyStr)
pk, err := edwards.ParsePubKey(pubKeyStr)
if err != nil {
return nil, err
}
@ -252,7 +252,7 @@ func newEdwardsDSA() DSA {
return ts
},
parseDERSignature: func(sigStr []byte) (Signature, error) {
sig, err := edwards.ParseDERSignature(edwardsCurve, sigStr)
sig, err := edwards.ParseDERSignature(sigStr)
if err != nil {
return nil, err
}
@ -260,7 +260,7 @@ func newEdwardsDSA() DSA {
return ts, err
},
parseSignature: func(sigStr []byte) (Signature, error) {
sig, err := edwards.ParseSignature(edwardsCurve, sigStr)
sig, err := edwards.ParseSignature(sigStr)
if err != nil {
return nil, err
}
@ -285,7 +285,7 @@ func newEdwardsDSA() DSA {
if !ok {
return nil, nil, errors.New("wrong type")
}
r, s, err = edwards.Sign(edwardsCurve, &epriv, hash)
r, s, err = edwards.Sign(&epriv, hash)
return
},
verify: func(pub PublicKey, hash []byte, r, s *big.Int) bool {
@ -301,25 +301,23 @@ func newEdwardsDSA() DSA {
// Symmetric cipher encryption
generateSharedSecret: func(privkey []byte, x, y *big.Int) []byte {
privKeyLocal, _, err := edwards.PrivKeyFromScalar(edwardsCurve,
privkey)
privKeyLocal, _, err := edwards.PrivKeyFromScalar(privkey)
if err != nil {
return nil
}
pubkey := edwards.NewPublicKey(edwardsCurve, x, y)
pubkey := edwards.NewPublicKey(x, y)
return edwards.GenerateSharedSecret(privKeyLocal, pubkey)
},
encrypt: func(x, y *big.Int, in []byte) ([]byte, error) {
pubkey := edwards.NewPublicKey(edwardsCurve, x, y)
return edwards.Encrypt(edwardsCurve, pubkey, in)
pubkey := edwards.NewPublicKey(x, y)
return edwards.Encrypt(pubkey, in)
},
decrypt: func(privkey []byte, in []byte) ([]byte, error) {
privKeyLocal, _, err := edwards.PrivKeyFromScalar(edwardsCurve,
privkey)
privKeyLocal, _, err := edwards.PrivKeyFromScalar(privkey)
if err != nil {
return nil, err
}
return edwards.Decrypt(edwardsCurve, privKeyLocal, in)
return edwards.Decrypt(privKeyLocal, in)
},
}

View File

@ -1,4 +1,4 @@
// Copyright (c) 2015-2016 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -10,7 +10,7 @@ import (
"io"
"math/big"
"github.com/decred/dcrd/dcrec/secp256k1"
"github.com/decred/dcrd/dcrec/secp256k1/v2"
)
type secp256k1DSA struct {
@ -154,7 +154,7 @@ func (sp secp256k1DSA) Decrypt(privkey []byte, in []byte) ([]byte,
return sp.decrypt(privkey, in)
}
// newSecp256k1DSA instatiates a function DSA subsystem over the secp256k1
// newSecp256k1DSA instantiates a function DSA subsystem over the secp256k1
// curve. A caveat for the functions below is that they're all routed through
// interfaces, and nil returns from the library itself for interfaces must
// ALWAYS be checked by checking the return value by attempted dereference

View File

@ -9,8 +9,8 @@ import (
"io"
"math/big"
"github.com/decred/dcrd/dcrec/secp256k1"
"github.com/decred/dcrd/dcrec/secp256k1/schnorr"
"github.com/decred/dcrd/dcrec/secp256k1/v2"
"github.com/decred/dcrd/dcrec/secp256k1/v2/schnorr"
)
type secSchnorrDSA struct {
@ -150,7 +150,7 @@ func (sp secSchnorrDSA) Decrypt(privkey []byte, in []byte) ([]byte,
return sp.decrypt(privkey, in)
}
// newSecSchnorrDSA instatiates a function DSA subsystem over the secp256k1
// newSecSchnorrDSA instantiates a function DSA subsystem over the secp256k1
// curve. A caveat for the functions below is that they're all routed through
// interfaces, and nil returns from the library itself for interfaces must
// ALWAYS be checked by checking the return value by attempted dereference
@ -225,7 +225,7 @@ func newSecSchnorrDSA() DSA {
return tpk
},
parsePubKey: func(pubKeyStr []byte) (PublicKey, error) {
pk, err := schnorr.ParsePubKey(secp256k1Curve, pubKeyStr)
pk, err := schnorr.ParsePubKey(pubKeyStr)
if err != nil {
return nil, err
}

View File

@ -1,7 +1,7 @@
chainhash
=========
[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/chaincfg/chainhash)

View File

@ -25,8 +25,8 @@
// "fmt"
// "log"
//
// "github.com/decred/dcrd/dcrutil"
// "github.com/decred/dcrd/chaincfg"
// "github.com/decred/dcrd/dcrutil/v2"
// "github.com/decred/dcrd/chaincfg/v2"
// )
//
// var testnet = flag.Bool("testnet", false, "operate on the testnet Decred network")

View File

@ -4,8 +4,8 @@ go 1.11
require (
github.com/davecgh/go-spew v1.1.1
github.com/decred/dcrd/chaincfg/chainhash v1.0.1
github.com/decred/dcrd/dcrec/edwards v1.0.0
github.com/decred/dcrd/dcrec/secp256k1 v1.0.1
github.com/decred/dcrd/chaincfg/chainhash v1.0.2
github.com/decred/dcrd/dcrec/edwards/v2 v2.0.0
github.com/decred/dcrd/dcrec/secp256k1/v2 v2.0.0
github.com/decred/dcrd/wire v1.2.0
)

View File

@ -8,9 +8,13 @@ github.com/dchest/blake256 v1.0.0 h1:6gUgI5MHdz9g0TdrgKqXsoDX+Zjxmm1Sc6OsoGru50I
github.com/dchest/blake256 v1.0.0/go.mod h1:xXNWCE1jsAP8DAjP+rKw2MbeqLczjI3TRx2VK+9OEYY=
github.com/decred/dcrd/chaincfg/chainhash v1.0.1 h1:0vG7U9+dSjSCaHQKdoSKURK2pOb47+b+8FK5q4+Je7M=
github.com/decred/dcrd/chaincfg/chainhash v1.0.1/go.mod h1:OVfvaOsNLS/A1y4Eod0Ip/Lf8qga7VXCQjUQLbkY0Go=
github.com/decred/dcrd/dcrec/edwards v1.0.0 h1:UDcPNzclKiJlWqV3x1Fl8xMCJrolo4PB4X9t8LwKDWU=
github.com/decred/dcrd/dcrec/edwards v1.0.0/go.mod h1:HblVh1OfMt7xSxUL1ufjToaEvpbjpWvvTAUx4yem8BI=
github.com/decred/dcrd/dcrec/secp256k1 v1.0.1 h1:EFWVd1p0t0Y5tnsm/dJujgV0ORogRJ6vo7CMAjLseAc=
github.com/decred/dcrd/dcrec/secp256k1 v1.0.1/go.mod h1:lhu4eZFSfTJWUnR3CFRcpD+Vta0KUAqnhTsTksHXgy0=
github.com/decred/dcrd/chaincfg/chainhash v1.0.2 h1:rt5Vlq/jM3ZawwiacWjPa+smINyLRN07EO0cNBV6DGU=
github.com/decred/dcrd/chaincfg/chainhash v1.0.2/go.mod h1:BpbrGgrPTr3YJYRN3Bm+D9NuaFd+zGyNeIKgrhCXK60=
github.com/decred/dcrd/crypto/blake256 v1.0.0 h1:/8DMNYp9SGi5f0w7uCm6d6M4OU2rGFK09Y2A4Xv7EE0=
github.com/decred/dcrd/crypto/blake256 v1.0.0/go.mod h1:sQl2p6Y26YV+ZOcSTP6thNdn47hh8kt6rqSlvmrXFAc=
github.com/decred/dcrd/dcrec/edwards/v2 v2.0.0 h1:E5KszxGgpjpmW8vN811G6rBAZg0/S/DftdGqN4FW5x4=
github.com/decred/dcrd/dcrec/edwards/v2 v2.0.0/go.mod h1:d0H8xGMWbiIQP7gN3v2rByWUcuZPm9YsgmnfoxgbINc=
github.com/decred/dcrd/dcrec/secp256k1/v2 v2.0.0 h1:3GIJYXQDAKpLEFriGFN8SbSffak10UXHGdIcFaMPykY=
github.com/decred/dcrd/dcrec/secp256k1/v2 v2.0.0/go.mod h1:3s92l0paYkZoIHuj4X93Teg/HB7eGM9x/zokGw+u4mY=
github.com/decred/dcrd/wire v1.2.0 h1:HqJVB7vcklIguzFWgRXw/WYCQ9cD3bUC5TKj53i1Hng=
github.com/decred/dcrd/wire v1.2.0/go.mod h1:/JKOsLInOJu6InN+/zH5AyCq3YDIOW/EqcffvU8fJHM=

View File

@ -95,7 +95,7 @@ type Choice struct {
// (abstain) and exist only once in the Vote.Choices array.
IsAbstain bool
// This coince indicates a hard No Vote. By convention this must exist
// This choice indicates a hard No Vote. By convention this must exist
// only once in the Vote.Choices array.
IsNo bool
}
@ -114,7 +114,7 @@ func (v *Vote) VoteIndex(vote uint16) int {
}
const (
// VoteIDMaxBlockSize is the vote ID for the the maximum block size
// VoteIDMaxBlockSize is the vote ID for the maximum block size
// increase agenda used for the hard fork demo.
VoteIDMaxBlockSize = "maxblocksize"
@ -364,7 +364,7 @@ type Params struct {
// SLIP-0044 registered coin type used for BIP44, used in the hierarchical
// deterministic path for address generation.
// All SLIP-0044 registered coin types are are defined here:
// All SLIP-0044 registered coin types are defined here:
// https://github.com/satoshilabs/slips/blob/master/slip-0044.md
SLIP0044CoinType uint32

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2018 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -10,9 +10,9 @@ import (
"path/filepath"
"runtime"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/indexers"
"github.com/decred/dcrd/database"
"github.com/decred/dcrd/blockchain/v2"
"github.com/decred/dcrd/blockchain/v2/indexers"
"github.com/decred/dcrd/database/v2"
"github.com/decred/dcrd/internal/limits"
"github.com/decred/slog"
)

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2018 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -10,10 +10,10 @@ import (
"os"
"path/filepath"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ffldb"
"github.com/decred/dcrd/dcrutil"
"github.com/decred/dcrd/chaincfg/v2"
"github.com/decred/dcrd/database/v2"
_ "github.com/decred/dcrd/database/v2/ffldb"
"github.com/decred/dcrd/dcrutil/v2"
flags "github.com/jessevdk/go-flags"
)
@ -27,7 +27,7 @@ var (
dcrdHomeDir = dcrutil.AppDataDir("dcrd", false)
defaultDataDir = filepath.Join(dcrdHomeDir, "data")
knownDbTypes = database.SupportedDrivers()
activeNetParams = &chaincfg.MainNetParams
activeNetParams = chaincfg.MainNetParams()
)
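For context, a minimal sketch of the constructor-style chaincfg/v2 API this hunk migrates to; printing the Name field is illustrative:

package main

import (
	"fmt"

	"github.com/decred/dcrd/chaincfg/v2"
)

func main() {
	// In v2 the network parameters are returned by functions rather
	// than exposed as package-level variables, hence the change from
	// &chaincfg.MainNetParams to chaincfg.MainNetParams() above.
	params := chaincfg.MainNetParams()
	fmt.Println(params.Name) // "mainnet"
}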
// config defines the configuration options for findcheckpoint.
@ -45,7 +45,7 @@ type config struct {
Progress int `short:"p" long:"progress" description:"Show a progress message each time this number of seconds have passed -- Use 0 to disable progress announcements"`
}
// filesExists reports whether the named file or directory exists.
// fileExists reports whether the named file or directory exists.
func fileExists(name string) bool {
if _, err := os.Stat(name); err != nil {
if os.IsNotExist(err) {
@ -93,11 +93,11 @@ func loadConfig() (*config, []string, error) {
// while we're at it
if cfg.TestNet {
numNets++
activeNetParams = &chaincfg.TestNet3Params
activeNetParams = chaincfg.TestNet3Params()
}
if cfg.SimNet {
numNets++
activeNetParams = &chaincfg.SimNetParams
activeNetParams = chaincfg.SimNetParams()
}
if numNets > 1 {
str := "%s: the testnet, regtest, and simnet params can't be " +

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -12,11 +12,11 @@ import (
"sync"
"time"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/blockchain/indexers"
"github.com/decred/dcrd/blockchain/v2"
"github.com/decred/dcrd/blockchain/v2/indexers"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
"github.com/decred/dcrd/dcrutil"
"github.com/decred/dcrd/database/v2"
"github.com/decred/dcrd/dcrutil/v2"
"github.com/decred/dcrd/wire"
)
@ -139,7 +139,7 @@ func (bi *blockImporter) processBlock(serializedBlock []byte) (bool, error) {
}
isMainChain := !isOrphan && forkLen == 0
if !isMainChain {
return false, fmt.Errorf("import file contains an block that "+
return false, fmt.Errorf("import file contains a block that "+
"does not extend the main chain: %v", blockHash)
}
if isOrphan {

View File

@ -17,10 +17,10 @@ import (
"strings"
"github.com/decred/dcrd/dcrjson/v3"
"github.com/decred/dcrd/dcrutil"
"github.com/decred/dcrd/dcrutil/v2"
"github.com/decred/dcrd/internal/version"
dcrdtypes "github.com/decred/dcrd/rpc/jsonrpc/types"
dcrdtypes "github.com/decred/dcrd/rpc/jsonrpc/types/v2"
wallettypes "github.com/decred/dcrwallet/rpc/jsonrpc/types"
flags "github.com/jessevdk/go-flags"
@ -212,7 +212,7 @@ func cleanAndExpandPath(path string) string {
return filepath.Join(homeDir, path)
}
// filesExists reports whether the named file or directory exists.
// fileExists reports whether the named file or directory exists.
func fileExists(name string) bool {
if _, err := os.Stat(name); err != nil {
if os.IsNotExist(err) {

View File

@ -16,7 +16,7 @@ import (
"strings"
"github.com/decred/dcrd/dcrjson/v3"
dcrdtypes "github.com/decred/dcrd/rpc/jsonrpc/types"
dcrdtypes "github.com/decred/dcrd/rpc/jsonrpc/types/v2"
wallettypes "github.com/decred/dcrwallet/rpc/jsonrpc/types"
)
@ -120,7 +120,7 @@ func main() {
cmd, err := dcrjson.NewCmd(method, params...)
if err != nil {
// Show the error along with its error code when it's a
// dcrjson.Error as it reallistcally will always be since the
// dcrjson.Error as it realistically will always be since the
// NewCmd function is only supposed to return errors of that
// type.
if jerr, ok := err.(dcrjson.Error); ok {

View File

@ -15,7 +15,7 @@ import (
"net"
"net/http"
"github.com/decred/dcrd/dcrjson/v2"
"github.com/decred/dcrd/dcrjson/v3"
"github.com/decred/go-socks/socks"
)

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2018 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -10,10 +10,10 @@ import (
"os"
"path/filepath"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ffldb"
"github.com/decred/dcrd/dcrutil"
"github.com/decred/dcrd/chaincfg/v2"
"github.com/decred/dcrd/database/v2"
_ "github.com/decred/dcrd/database/v2/ffldb"
"github.com/decred/dcrd/dcrutil/v2"
flags "github.com/jessevdk/go-flags"
)
@ -28,7 +28,7 @@ var (
dcrdHomeDir = dcrutil.AppDataDir("dcrd", false)
defaultDataDir = filepath.Join(dcrdHomeDir, "data")
knownDbTypes = database.SupportedDrivers()
activeNetParams = &chaincfg.MainNetParams
activeNetParams = chaincfg.MainNetParams()
)
// config defines the configuration options for findcheckpoint.
@ -80,11 +80,11 @@ func loadConfig() (*config, []string, error) {
// while we're at it
if cfg.TestNet {
numNets++
activeNetParams = &chaincfg.TestNet3Params
activeNetParams = chaincfg.TestNet3Params()
}
if cfg.SimNet {
numNets++
activeNetParams = &chaincfg.SimNetParams
activeNetParams = chaincfg.SimNetParams()
}
if numNets > 1 {
str := "%s: the testnet, regtest, and simnet params can't be " +

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2016 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -10,10 +10,10 @@ import (
"os"
"path/filepath"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/blockchain/v2"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/database"
"github.com/decred/dcrd/chaincfg/v2"
"github.com/decred/dcrd/database/v2"
)
const blockDbNamePrefix = "blocks"
@ -53,7 +53,7 @@ func findCandidates(chain *blockchain.BlockChain, latestHash *chainhash.Hash) ([
// Set the latest checkpoint to the genesis block if there isn't
// already one.
latestCheckpoint = &chaincfg.Checkpoint{
Hash: activeNetParams.GenesisHash,
Hash: &activeNetParams.GenesisHash,
Height: 0,
}
}

View File

@ -134,7 +134,7 @@ func cleanAndExpandPath(path string) string {
return filepath.Join(homeDir, path)
}
// filesExists reports whether the named file or directory exists.
// fileExists reports whether the named file or directory exists.
func fileExists(name string) bool {
if _, err := os.Stat(name); err != nil {
if os.IsNotExist(err) {

View File

@ -1,5 +1,5 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2018 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -21,13 +21,13 @@ import (
"strings"
"time"
"github.com/decred/dcrd/connmgr"
"github.com/decred/dcrd/database"
_ "github.com/decred/dcrd/database/ffldb"
"github.com/decred/dcrd/dcrutil"
"github.com/decred/dcrd/connmgr/v2"
"github.com/decred/dcrd/database/v2"
_ "github.com/decred/dcrd/database/v2/ffldb"
"github.com/decred/dcrd/dcrutil/v2"
"github.com/decred/dcrd/internal/version"
"github.com/decred/dcrd/mempool/v2"
"github.com/decred/dcrd/rpc/jsonrpc/types"
"github.com/decred/dcrd/mempool/v3"
"github.com/decred/dcrd/rpc/jsonrpc/types/v2"
"github.com/decred/dcrd/sampleconfig"
"github.com/decred/go-socks/socks"
"github.com/decred/slog"
@ -129,6 +129,7 @@ type config struct {
OnionProxyUser string `long:"onionuser" description:"Username for onion proxy server"`
OnionProxyPass string `long:"onionpass" default-mask:"-" description:"Password for onion proxy server"`
NoOnion bool `long:"noonion" description:"Disable connecting to tor hidden services"`
NoDiscoverIP bool `long:"nodiscoverip" description:"Disable automatic network address discovery"`
TorIsolation bool `long:"torisolation" description:"Enable Tor stream isolation by randomizing user credentials for each connection."`
TestNet bool `long:"testnet" description:"Use the test network"`
SimNet bool `long:"simnet" description:"Use the simulation test network"`
@ -148,7 +149,7 @@ type config struct {
MaxOrphanTxs int `long:"maxorphantx" description:"Max number of orphan transactions to keep in memory"`
Generate bool `long:"generate" description:"Generate (mine) coins using the CPU"`
MiningAddrs []string `long:"miningaddr" description:"Add the specified payment address to the list of addresses to use for generated blocks -- At least one address is required if the generate option is set"`
BlockMinSize uint32 `long:"blockminsize" description:"Mininum block size in bytes to be used when creating a block"`
BlockMinSize uint32 `long:"blockminsize" description:"Minimum block size in bytes to be used when creating a block"`
BlockMaxSize uint32 `long:"blockmaxsize" description:"Maximum block size in bytes to be used when creating a block"`
BlockPrioritySize uint32 `long:"blockprioritysize" description:"Size in bytes for high-priority/low-fee transactions when creating a block"`
SigCacheMaxSize uint `long:"sigcachemaxsize" description:"The maximum number of entries in the signature verification cache"`
@ -266,7 +267,7 @@ func supportedSubsystems() []string {
// the levels accordingly. An appropriate error is returned if anything is
// invalid.
func parseAndSetDebugLevels(debugLevel string) error {
// When the specified string doesn't have any delimters, treat it as
// When the specified string doesn't have any delimiters, treat it as
// the log level for all subsystems.
if !strings.Contains(debugLevel, ",") && !strings.Contains(debugLevel, "=") {
// Validate debug log level.
@ -297,7 +298,7 @@ func parseAndSetDebugLevels(debugLevel string) error {
// Validate subsystem.
if _, exists := subsystemLoggers[subsysID]; !exists {
str := "the specified subsystem [%v] is invalid -- " +
"supported subsytems %v"
"supported subsystems %v"
return fmt.Errorf(str, subsysID, supportedSubsystems())
}
@ -358,7 +359,7 @@ func normalizeAddresses(addrs []string, defaultPort string) []string {
return removeDuplicateAddresses(addrs)
}
// filesExists reports whether the named file or directory exists.
// fileExists reports whether the named file or directory exists.
func fileExists(name string) bool {
if _, err := os.Stat(name); err != nil {
if os.IsNotExist(err) {
@ -898,7 +899,7 @@ func loadConfig() (*config, []string, error) {
return nil, nil, err
}
// Validate the the minrelaytxfee.
// Validate the minrelaytxfee.
cfg.minRelayTxFee, err = dcrutil.NewAmount(cfg.MinRelayTxFee)
if err != nil {
str := "%s: invalid minrelaytxfee: %v"
@ -923,7 +924,7 @@ func loadConfig() (*config, []string, error) {
return nil, nil, err
}
// Limit the max orphan count to a sane vlue.
// Limit the max orphan count to a sane value.
if cfg.MaxOrphanTxs < 0 {
str := "%s: the maxorphantx option may not be less than 0 " +
"-- parsed [%d]"
@ -980,7 +981,7 @@ func loadConfig() (*config, []string, error) {
// !--nocfilters and --dropcfindex do not mix.
if !cfg.NoCFilters && cfg.DropCFIndex {
err := errors.New("dropcfindex cannot be actived without nocfilters")
err := errors.New("dropcfindex cannot be activated without nocfilters")
fmt.Fprintln(os.Stderr, err)
fmt.Fprintln(os.Stderr, usageMessage)
return nil, nil, err
@ -989,7 +990,7 @@ func loadConfig() (*config, []string, error) {
// Check mining addresses are valid and saved parsed versions.
cfg.miningAddrs = make([]dcrutil.Address, 0, len(cfg.MiningAddrs))
for _, strAddr := range cfg.MiningAddrs {
addr, err := dcrutil.DecodeAddress(strAddr)
addr, err := dcrutil.DecodeAddress(strAddr, activeNetParams.Params)
if err != nil {
str := "%s: mining address '%s' failed to decode: %v"
err := fmt.Errorf(str, funcName, strAddr, err)
@ -997,13 +998,6 @@ func loadConfig() (*config, []string, error) {
fmt.Fprintln(os.Stderr, usageMessage)
return nil, nil, err
}
if !addr.IsForNet(activeNetParams.Params) {
str := "%s: mining address '%s' is on the wrong network"
err := fmt.Errorf(str, funcName, strAddr)
fmt.Fprintln(os.Stderr, err)
fmt.Fprintln(os.Stderr, usageMessage)
return nil, nil, err
}
cfg.miningAddrs = append(cfg.miningAddrs, addr)
}
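Since dcrutil/v2's DecodeAddress takes the network parameters directly, the separate IsForNet check removed above became redundant. A hedged usage sketch, where the address literal is a placeholder that will simply fail to decode:

package main

import (
	"fmt"

	"github.com/decred/dcrd/chaincfg/v2"
	"github.com/decred/dcrd/dcrutil/v2"
)

func main() {
	params := chaincfg.MainNetParams()
	// DecodeAddress now rejects addresses encoded for other networks
	// as part of decoding.
	addr, err := dcrutil.DecodeAddress("DsPlaceholderAddr", params)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%v\n", addr)
}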

View File

@ -5,7 +5,6 @@
package main
import (
"flag"
"os"
"strings"
"testing"
@ -76,6 +75,5 @@ func TestAltDNSNamesWithArg(t *testing.T) {
// init parses the -test.* flags from the command line arguments list and then
// removes them to allow go-flags tests to succeed.
func init() {
flag.Parse()
os.Args = os.Args[:1]
}

View File

@ -1,7 +1,7 @@
connmgr
=======
[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/connmgr)

View File

@ -459,7 +459,7 @@ func TestNetworkFailure(t *testing.T) {
// TestStopFailed tests that failed connections are ignored after connmgr is
// stopped.
//
// We have a dailer which sets the stop flag on the conn manager and returns an
// We have a dialer which sets the stop flag on the conn manager and returns an
// err so that the handler assumes that the conn manager is stopped and ignores
// the failure.
func TestStopFailed(t *testing.T) {

View File

@ -14,7 +14,7 @@ import (
const (
// Halflife defines the time (in seconds) by which the transient part
// of the ban score decays to one half of it's original value.
// of the ban score decays to one half of its original value.
Halflife = 60
// lambda is the decaying constant.
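An illustrative sketch, not from this diff, of the decay the comment describes, assuming lambda = ln(2)/Halflife so the transient score halves every Halflife seconds:

package main

import (
	"fmt"
	"math"
)

const halflife = 60 // seconds, matching the constant above

// decayed returns the transient ban score after elapsedSecs, using
// exponential decay with a half-life of halflife seconds.
func decayed(transient, elapsedSecs float64) float64 {
	lambda := math.Ln2 / halflife
	return transient * math.Exp(-lambda*elapsedSecs)
}

func main() {
	fmt.Println(decayed(100, 60)) // ~50 after one half-life
}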

View File

@ -1,11 +1,12 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Copyright (c) 2015-2017 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package connmgr
import (
"context"
"encoding/binary"
"errors"
"net"
@ -21,6 +22,12 @@ const (
torTTLExpired = 0x06
torCmdNotSupported = 0x07
torAddrNotSupported = 0x08
torATypeIPv4 = 1
torATypeDomainName = 3
torATypeIPv6 = 4
torCmdResolve = 240
)
var (
@ -49,17 +56,23 @@ var (
}
)
// TorLookupIP uses Tor to resolve DNS via the SOCKS extension they provide for
// resolution over the Tor network. Tor itself doesn't support ipv6 so this
// doesn't either.
// TorLookupIP uses Tor to resolve DNS via the passed SOCKS proxy.
//
// Deprecated: use TorLookupIPContext instead.
func TorLookupIP(host, proxy string) ([]net.IP, error) {
conn, err := net.Dial("tcp", proxy)
return TorLookupIPContext(context.Background(), host, proxy)
}
// TorLookupIPContext uses Tor to resolve DNS via the passed SOCKS proxy.
func TorLookupIPContext(ctx context.Context, host, proxy string) ([]net.IP, error) {
var dialer net.Dialer
conn, err := dialer.DialContext(ctx, "tcp", proxy)
if err != nil {
return nil, err
}
defer conn.Close()
buf := []byte{'\x05', '\x01', '\x00'}
buf := []byte{0x05, 0x01, 0x00}
_, err = conn.Write(buf)
if err != nil {
return nil, err
@ -70,18 +83,18 @@ func TorLookupIP(host, proxy string) ([]net.IP, error) {
if err != nil {
return nil, err
}
if buf[0] != '\x05' {
if buf[0] != 0x05 {
return nil, ErrTorInvalidProxyResponse
}
if buf[1] != '\x00' {
if buf[1] != 0x00 {
return nil, ErrTorUnrecognizedAuthMethod
}
buf = make([]byte, 7+len(host))
buf[0] = 5 // protocol version
buf[1] = '\xF0' // Tor Resolve
buf[2] = 0 // reserved
buf[3] = 3 // Tor Resolve
buf[0] = 5 // socks protocol version
buf[1] = torCmdResolve
buf[2] = 0 // reserved
buf[3] = torATypeDomainName
buf[4] = byte(len(host))
copy(buf[5:], host)
buf[5+len(host)] = 0 // Port 0
@ -100,34 +113,39 @@ func TorLookupIP(host, proxy string) ([]net.IP, error) {
return nil, ErrTorInvalidProxyResponse
}
if buf[1] != 0 {
if int(buf[1]) > len(torStatusErrors) {
err, exists := torStatusErrors[buf[1]]
if !exists {
err = ErrTorInvalidProxyResponse
} else {
err = torStatusErrors[buf[1]]
if err == nil {
err = ErrTorInvalidProxyResponse
}
}
return nil, err
}
if buf[3] != 1 {
err := torStatusErrors[torGeneralError]
return nil, err
}
buf = make([]byte, 4)
bytes, err := conn.Read(buf)
if err != nil {
return nil, err
}
if bytes != 4 {
if buf[3] != torATypeIPv4 && buf[3] != torATypeIPv6 {
return nil, ErrTorInvalidAddressResponse
}
r := binary.BigEndian.Uint32(buf)
var reply [32 + 2]byte
replyLen, err := conn.Read(reply[:])
if err != nil {
return nil, err
}
addr := make([]net.IP, 1)
addr[0] = net.IPv4(byte(r>>24), byte(r>>16), byte(r>>8), byte(r))
var addr net.IP
switch buf[3] {
case torATypeIPv4:
if replyLen != 4+2 {
return nil, ErrTorInvalidAddressResponse
}
r := binary.BigEndian.Uint32(reply[0:4])
addr = net.IPv4(byte(r>>24), byte(r>>16),
byte(r>>8), byte(r))
case torATypeIPv6:
if replyLen <= 4+2 {
return nil, ErrTorInvalidAddressResponse
}
addr = net.IP(reply[0 : replyLen-2])
default:
return nil, ErrTorInvalidAddressResponse
}
return addr, nil
return []net.IP{addr}, nil
}
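A hypothetical caller of the new context-aware resolver above, assuming a local Tor SOCKS proxy listening on 127.0.0.1:9050:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/decred/dcrd/connmgr/v2"
)

func main() {
	// The context lets the caller bound how long the SOCKS handshake
	// and Tor RESOLVE round trip may take.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	ips, err := connmgr.TorLookupIPContext(ctx, "decred.org", "127.0.0.1:9050")
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	for _, ip := range ips {
		fmt.Println(ip)
	}
}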

View File

@ -1,5 +1,5 @@
// Copyright (c) 2014-2016 The btcsuite developers
// Copyright (c) 2015-2018 The Decred developers
// Copyright (c) 2015-2019 The Decred developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
@ -11,12 +11,14 @@ import (
"fmt"
"math/rand"
"sync"
"sync/atomic"
"time"
"github.com/decred/dcrd/blockchain"
"github.com/decred/dcrd/chaincfg"
"github.com/decred/dcrd/blockchain/standalone"
"github.com/decred/dcrd/blockchain/v2"
"github.com/decred/dcrd/chaincfg/chainhash"
"github.com/decred/dcrd/dcrutil"
"github.com/decred/dcrd/chaincfg/v2"
"github.com/decred/dcrd/dcrutil/v2"
"github.com/decred/dcrd/wire"
)
@ -24,10 +26,6 @@ const (
// maxNonce is the maximum value a nonce can be in a block header.
maxNonce = ^uint32(0) // 2^32 - 1
// maxExtraNonce is the maximum value an extra nonce used in a coinbase
// transaction can be.
maxExtraNonce = ^uint64(0) // 2^64 - 1
// hpsUpdateSecs is the number of seconds to wait in between each
// update to the hashes per second monitor.
hpsUpdateSecs = 10
@ -49,7 +47,7 @@ var (
// defaultNumWorkers is the default number of workers to use for mining
// and is based on the number of processor cores. This helps ensure the
// system stays reasonably responsive under heavy load.
defaultNumWorkers = uint32(chaincfg.CPUMinerThreads)
defaultNumWorkers = uint32(1)
// littleEndian is a convenience variable since binary.LittleEndian is
// quite long.
@ -90,7 +88,7 @@ type cpuminerConfig struct {
// block chain is current. This is used by the automatic persistent
// mining routine to determine whether or not it should attempt mining.
// This is useful because there is no point in mining if the chain is
// not current since any solved blocks would be on a side chain and and
// not current since any solved blocks would be on a side chain and
// end up orphaned anyways.
IsCurrent func() bool
}
@ -102,10 +100,11 @@ type cpuminerConfig struct {
// function, but the default is based on the number of processor cores in the
// system which is typically sufficient.
type CPUMiner struct {
numWorkers uint32 // update atomically
sync.Mutex
g *BlkTmplGenerator
cfg *cpuminerConfig
numWorkers uint32
started bool
discreteMining bool
submitBlockLock sync.Mutex
@ -119,8 +118,7 @@ type CPUMiner struct {
// This is a map that keeps track of how many blocks have
// been mined on each parent by the CPUMiner. It is only
// for use in simulation networks, to diminish memory
// exhaustion. It should not race because it's only
// accessed in a single threaded loop below.
// exhaustion.
minedOnParents map[chainhash.Hash]uint8
}
@ -240,7 +238,7 @@ func (m *CPUMiner) solveBlock(msgBlock *wire.MsgBlock, ticker *time.Ticker, quit
// Create a couple of convenience variables.
header := &msgBlock.Header
targetDifficulty := blockchain.CompactToBig(header.Bits)
targetDifficulty := standalone.CompactToBig(header.Bits)
// Initial state.
lastGenerated := time.Now()
@ -249,8 +247,10 @@ func (m *CPUMiner) solveBlock(msgBlock *wire.MsgBlock, ticker *time.Ticker, quit
// Note that the entire extra nonce range is iterated and the offset is
// added relying on the fact that overflow will wrap around 0 as
// provided by the Go spec.
for extraNonce := uint64(0); extraNonce < maxExtraNonce; extraNonce++ {
// provided by the Go spec. Furthermore, the break condition has been
// intentionally omitted such that the loop will continue forever until
// a solution is found.
for extraNonce := uint64(0); ; extraNonce++ {
// Update the extra nonce in the block template header with the
// new value.
littleEndian.PutUint64(header.ExtraData[:], extraNonce+enOffset)
@ -258,7 +258,15 @@ func (m *CPUMiner) solveBlock(msgBlock *wire.MsgBlock, ticker *time.Ticker, quit
// Search through the entire nonce range for a solution while
// periodically checking for early quit and stale block
// conditions along with updates to the speed monitor.
for i := uint32(0); i <= maxNonce; i++ {
//
// This loop differs from the outer one in that it does not run
// forever, thus allowing the extraNonce field to be updated
// between each successive iteration of the regular nonce
// space. Note that this is achieved by placing the break
// condition at the end of the code block, as this prevents the
// infinite loop that would otherwise occur if we let the for
// statement overflow the nonce value back to 0.
for nonce := uint32(0); ; nonce++ {
select {
case <-quit:
return false
@ -293,23 +301,25 @@ func (m *CPUMiner) solveBlock(msgBlock *wire.MsgBlock, ticker *time.Ticker, quit
}
// Update the nonce and hash the block header.
header.Nonce = i
header.Nonce = nonce
hash := header.BlockHash()
hashesCompleted++
// The block is solved when the new block hash is less
// than the target difficulty. Yay!
if blockchain.HashToBig(&hash).Cmp(targetDifficulty) <= 0 {
if standalone.HashToBig(&hash).Cmp(targetDifficulty) <= 0 {
select {
case m.updateHashes <- hashesCompleted:
default:
}
return true
}
if nonce == maxNonce {
break
}
}
}
return false
}
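A standalone sketch of the wrap-safe loop pattern adopted above, shrunk to uint8 so it finishes instantly; the counter stands in for hashing with each nonce:

package main

import "fmt"

func main() {
	// Placing the break test at the bottom of the loop means the
	// counter never wraps back to 0, so every nonce is visited exactly
	// once and the loop cannot spin forever.
	const maxNonce = ^uint8(0)
	iterations := 0
	for nonce := uint8(0); ; nonce++ {
		iterations++ // stand-in for hashing with this nonce
		if nonce == maxNonce {
			break
		}
	}
	fmt.Println(iterations) // 256: the full nonce space, once each
}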
// generateBlocks is a worker that is controlled by the miningWorkerController.
@ -383,8 +393,11 @@ out:
// This prevents you from causing memory exhaustion issues
// when mining aggressively in a simulation network.
if m.cfg.PermitConnectionlessMining {
if m.minedOnParents[template.Block.Header.PrevBlock] >=
maxSimnetToMine {
prevBlock := template.Block.Header.PrevBlock
m.Lock()
maxBlocksOnParent := m.minedOnParents[prevBlock] >= maxSimnetToMine
m.Unlock()
if maxBlocksOnParent {
minrLog.Tracef("too many blocks mined on parent, stopping " +
"until there are enough votes on these to make a new " +
"block")
@ -399,7 +412,10 @@ out:
if m.solveBlock(template.Block, ticker, quit) {
block := dcrutil.NewBlock(template.Block)
m.submitBlock(block)
m.Lock()
m.minedOnParents[template.Block.Header.PrevBlock]++
m.Unlock()
}
}
@ -427,28 +443,31 @@ func (m *CPUMiner) miningWorkerController() {
}
// Launch the current number of workers by default.
runningWorkers = make([]chan struct{}, 0, m.numWorkers)
launchWorkers(m.numWorkers)
numWorkers := atomic.LoadUint32(&m.numWorkers)
runningWorkers = make([]chan struct{}, 0, numWorkers)
launchWorkers(numWorkers)
out:
for {
select {
// Update the number of running workers.
case <-m.updateNumWorkers:
// No change.
numRunning := uint32(len(runningWorkers))
if m.numWorkers == numRunning {
numWorkers := atomic.LoadUint32(&m.numWorkers)
// No change.
if numWorkers == numRunning {
continue
}
// Add new workers.
if m.numWorkers > numRunning {
launchWorkers(m.numWorkers - numRunning)
if numWorkers > numRunning {
launchWorkers(numWorkers - numRunning)
continue
}
// Signal the most recently created goroutines to exit.
for i := numRunning - 1; i >= m.numWorkers; i-- {
for i := numRunning - 1; i >= numWorkers; i-- {
close(runningWorkers[i])
runningWorkers[i] = nil
runningWorkers = runningWorkers[:i]
@ -550,16 +569,11 @@ func (m *CPUMiner) SetNumWorkers(numWorkers int32) {
m.Stop()
}
// Don't lock until after the first check since Stop does its own
// locking.
m.Lock()
defer m.Unlock()
// Use default if provided value is negative.
if numWorkers < 0 {
m.numWorkers = defaultNumWorkers
atomic.StoreUint32(&m.numWorkers, defaultNumWorkers)
} else {
m.numWorkers = uint32(numWorkers)
atomic.StoreUint32(&m.numWorkers, uint32(numWorkers))
}
// When the miner is already running, notify the controller about the
@ -573,10 +587,7 @@ func (m *CPUMiner) SetNumWorkers(numWorkers int32) {
//
// This function is safe for concurrent access.
func (m *CPUMiner) NumWorkers() int32 {
m.Lock()
defer m.Unlock()
return int32(m.numWorkers)
return int32(atomic.LoadUint32(&m.numWorkers))
}
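A minimal sketch of the atomic counter pattern these hunks switch to, replacing the mutex-guarded reads and writes of numWorkers; the miner type here is illustrative:

package main

import (
	"fmt"
	"sync/atomic"
)

type miner struct {
	numWorkers uint32 // read and written atomically
}

// SetNumWorkers stores the worker count without taking the main mutex,
// avoiding lock-ordering concerns with Stop as noted above.
func (m *miner) SetNumWorkers(n uint32) { atomic.StoreUint32(&m.numWorkers, n) }

// NumWorkers loads the worker count atomically.
func (m *miner) NumWorkers() uint32 { return atomic.LoadUint32(&m.numWorkers) }

func main() {
	var m miner
	m.SetNumWorkers(4)
	fmt.Println(m.NumWorkers()) // 4
}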
// GenerateNBlocks generates the requested number of blocks. It is self
@ -597,7 +608,7 @@ func (m *CPUMiner) GenerateNBlocks(n uint32) ([]*chainhash.Hash, error) {
if m.started || m.discreteMining {
m.Unlock()
return nil, errors.New("server is already CPU mining. Please call " +
"`setgenerate 0` before calling discrete `generate` commands.")
"`setgenerate 0` before calling discrete `generate` commands")
}
m.started = true

3

crypto/ripemd160/go.mod Normal file
View File

@ -0,0 +1,3 @@
module github.com/decred/dcrd/crypto/ripemd160
go 1.11

View File

@ -0,0 +1,120 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package ripemd160 implements the RIPEMD-160 hash algorithm.
package ripemd160
// RIPEMD-160 is designed by Hans Dobbertin, Antoon Bosselaers, and Bart
// Preneel with specifications available at:
// http://homes.esat.kuleuven.be/~cosicart/pdf/AB-9601/AB-9601.pdf.
import (
"crypto"
"hash"
)
func init() {
crypto.RegisterHash(crypto.RIPEMD160, New)
}
// The size of the checksum in bytes.
const Size = 20
// The block size of the hash algorithm in bytes.
const BlockSize = 64
const (
_s0 = 0x67452301
_s1 = 0xefcdab89
_s2 = 0x98badcfe
_s3 = 0x10325476
_s4 = 0xc3d2e1f0
)
// digest represents the partial evaluation of a checksum.
type digest struct {
s [5]uint32 // running context
x [BlockSize]byte // temporary buffer
nx int // index into x
tc uint64 // total count of bytes processed
}
func (d *digest) Reset() {
d.s[0], d.s[1], d.s[2], d.s[3], d.s[4] = _s0, _s1, _s2, _s3, _s4
d.nx = 0
d.tc = 0
}
// New returns a new hash.Hash computing the checksum.
func New() hash.Hash {
result := new(digest)
result.Reset()
return result
}
func (d *digest) Size() int { return Size }
func (d *digest) BlockSize() int { return BlockSize }
func (d *digest) Write(p []byte) (nn int, err error) {
nn = len(p)
d.tc += uint64(nn)
if d.nx > 0 {
n := len(p)
if n > BlockSize-d.nx {
n = BlockSize - d.nx
}
for i := 0; i < n; i++ {
d.x[d.nx+i] = p[i]
}
d.nx += n
if d.nx == BlockSize {
_Block(d, d.x[0:])
d.nx = 0
}
p = p[n:]
}
n := _Block(d, p)
p = p[n:]
if len(p) > 0 {
d.nx = copy(d.x[:], p)
}
return
}
func (d0 *digest) Sum(in []byte) []byte {
// Make a copy of d0 so that caller can keep writing and summing.
d := *d0
// Padding. Add a 1 bit and 0 bits until 56 bytes mod 64.
tc := d.tc
var tmp [64]byte
tmp[0] = 0x80
if tc%64 < 56 {
d.Write(tmp[0 : 56-tc%64])
} else {
d.Write(tmp[0 : 64+56-tc%64])
}
// Length in bits.
tc <<= 3
for i := uint(0); i < 8; i++ {
tmp[i] = byte(tc >> (8 * i))
}
d.Write(tmp[0:8])
if d.nx != 0 {
panic("d.nx != 0")
}
var digest [Size]byte
for i, s := range d.s {
digest[i*4] = byte(s)
digest[i*4+1] = byte(s >> 8)
digest[i*4+2] = byte(s >> 16)
digest[i*4+3] = byte(s >> 24)
}
return append(in, digest[:]...)
}
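Since digest satisfies the standard hash.Hash interface, callers use the familiar New/Write/Sum pattern. A small usage sketch against the module path from the go.mod above:

```go
package main

import (
	"fmt"

	"github.com/decred/dcrd/crypto/ripemd160"
)

func main() {
	h := ripemd160.New()
	h.Write([]byte("abc"))

	// Matches the "abc" test vector in the package's tests:
	// 8eb208f7e05d987a9b044a8e98c6b087f15a0bfc
	fmt.Printf("%x\n", h.Sum(nil))
}
```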

crypto/ripemd160/ripemd160_test.go

@@ -0,0 +1,72 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ripemd160
// Test vectors are from:
// http://homes.esat.kuleuven.be/~bosselae/ripemd160.html
import (
"fmt"
"io"
"testing"
)
type mdTest struct {
out string
in string
}
var vectors = [...]mdTest{
{"9c1185a5c5e9fc54612808977ee8f548b2258d31", ""},
{"0bdc9d2d256b3ee9daae347be6f4dc835a467ffe", "a"},
{"8eb208f7e05d987a9b044a8e98c6b087f15a0bfc", "abc"},
{"5d0689ef49d2fae572b881b123a85ffa21595f36", "message digest"},
{"f71c27109c692c1b56bbdceb5b9d2865b3708dbc", "abcdefghijklmnopqrstuvwxyz"},
{"12a053384a9c0c88e405a06c27dcf49ada62eb2b", "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq"},
{"b0e20b6e3116640286ed3a87a5713079b21f5189", "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"},
{"9b752e45573d4b39f4dbd3323cab82bf63326bfb", "12345678901234567890123456789012345678901234567890123456789012345678901234567890"},
}
func TestVectors(t *testing.T) {
for i := 0; i < len(vectors); i++ {
tv := vectors[i]
md := New()
for j := 0; j < 3; j++ {
if j < 2 {
io.WriteString(md, tv.in)
} else {
io.WriteString(md, tv.in[0:len(tv.in)/2])
md.Sum(nil)
io.WriteString(md, tv.in[len(tv.in)/2:])
}
s := fmt.Sprintf("%x", md.Sum(nil))
if s != tv.out {
t.Fatalf("RIPEMD-160[%d](%s) = %s, expected %s", j, tv.in, s, tv.out)
}
md.Reset()
}
}
}
func millionA() string {
md := New()
for i := 0; i < 100000; i++ {
io.WriteString(md, "aaaaaaaaaa")
}
return fmt.Sprintf("%x", md.Sum(nil))
}
func TestMillionA(t *testing.T) {
const out = "52783243c1697bdbe16d37f97f68f08325dc1528"
if s := millionA(); s != out {
t.Fatalf("RIPEMD-160 (1 million 'a') = %s, expected %s", s, out)
}
}
func BenchmarkMillionA(b *testing.B) {
for i := 0; i < b.N; i++ {
millionA()
}
}

crypto/ripemd160/ripemd160block.go

@@ -0,0 +1,165 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// RIPEMD-160 block step.
// In its own file so that a faster assembly or C version
// can be substituted easily.
package ripemd160
import (
"math/bits"
)
// work buffer indices and roll amounts for one line
var _n = [80]uint{
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
7, 4, 13, 1, 10, 6, 15, 3, 12, 0, 9, 5, 2, 14, 11, 8,
3, 10, 14, 4, 9, 15, 8, 1, 2, 7, 0, 6, 13, 11, 5, 12,
1, 9, 11, 10, 0, 8, 12, 4, 13, 3, 7, 15, 14, 5, 6, 2,
4, 0, 5, 9, 7, 12, 2, 10, 14, 1, 3, 8, 11, 6, 15, 13,
}
var _r = [80]uint{
11, 14, 15, 12, 5, 8, 7, 9, 11, 13, 14, 15, 6, 7, 9, 8,
7, 6, 8, 13, 11, 9, 7, 15, 7, 12, 15, 9, 11, 7, 13, 12,
11, 13, 6, 7, 14, 9, 13, 15, 14, 8, 13, 6, 5, 12, 7, 5,
11, 12, 14, 15, 14, 15, 9, 8, 9, 14, 5, 6, 8, 6, 5, 12,
9, 15, 5, 11, 6, 8, 13, 12, 5, 12, 13, 14, 11, 8, 5, 6,
}
// same for the other parallel one
var n_ = [80]uint{
5, 14, 7, 0, 9, 2, 11, 4, 13, 6, 15, 8, 1, 10, 3, 12,
6, 11, 3, 7, 0, 13, 5, 10, 14, 15, 8, 12, 4, 9, 1, 2,
15, 5, 1, 3, 7, 14, 6, 9, 11, 8, 12, 2, 10, 0, 4, 13,
8, 6, 4, 1, 3, 11, 15, 0, 5, 12, 2, 13, 9, 7, 10, 14,
12, 15, 10, 4, 1, 5, 8, 7, 6, 2, 13, 14, 0, 3, 9, 11,
}
var r_ = [80]uint{
8, 9, 9, 11, 13, 15, 15, 5, 7, 7, 8, 11, 14, 14, 12, 6,
9, 13, 15, 7, 12, 8, 9, 11, 7, 7, 12, 7, 6, 15, 13, 11,
9, 7, 15, 11, 8, 6, 6, 14, 12, 13, 5, 14, 13, 13, 7, 5,
15, 5, 8, 11, 14, 14, 6, 14, 6, 9, 12, 9, 12, 5, 15, 8,
8, 5, 12, 9, 12, 5, 14, 6, 8, 13, 6, 5, 15, 13, 11, 11,
}
func _Block(md *digest, p []byte) int {
n := 0
var x [16]uint32
var alpha, beta uint32
for len(p) >= BlockSize {
a, b, c, d, e := md.s[0], md.s[1], md.s[2], md.s[3], md.s[4]
aa, bb, cc, dd, ee := a, b, c, d, e
j := 0
for i := 0; i < 16; i++ {
x[i] = uint32(p[j]) | uint32(p[j+1])<<8 | uint32(p[j+2])<<16 | uint32(p[j+3])<<24
j += 4
}
// round 1
i := 0
for i < 16 {
alpha = a + (b ^ c ^ d) + x[_n[i]]
s := int(_r[i])
alpha = bits.RotateLeft32(alpha, s) + e
beta = bits.RotateLeft32(c, 10)
a, b, c, d, e = e, alpha, b, beta, d
// parallel line
alpha = aa + (bb ^ (cc | ^dd)) + x[n_[i]] + 0x50a28be6
s = int(r_[i])
alpha = bits.RotateLeft32(alpha, s) + ee
beta = bits.RotateLeft32(cc, 10)
aa, bb, cc, dd, ee = ee, alpha, bb, beta, dd
i++
}
// round 2
for i < 32 {
alpha = a + (b&c | ^b&d) + x[_n[i]] + 0x5a827999
s := int(_r[i])
alpha = bits.RotateLeft32(alpha, s) + e
beta = bits.RotateLeft32(c, 10)
a, b, c, d, e = e, alpha, b, beta, d
// parallel line
alpha = aa + (bb&dd | cc&^dd) + x[n_[i]] + 0x5c4dd124
s = int(r_[i])
alpha = bits.RotateLeft32(alpha, s) + ee
beta = bits.RotateLeft32(cc, 10)
aa, bb, cc, dd, ee = ee, alpha, bb, beta, dd
i++
}
// round 3
for i < 48 {
alpha = a + (b | ^c ^ d) + x[_n[i]] + 0x6ed9eba1
s := int(_r[i])
alpha = bits.RotateLeft32(alpha, s) + e
beta = bits.RotateLeft32(c, 10)
a, b, c, d, e = e, alpha, b, beta, d
// parallel line
alpha = aa + (bb | ^cc ^ dd) + x[n_[i]] + 0x6d703ef3
s = int(r_[i])
alpha = bits.RotateLeft32(alpha, s) + ee
beta = bits.RotateLeft32(cc, 10)
aa, bb, cc, dd, ee = ee, alpha, bb, beta, dd
i++
}
// round 4
for i < 64 {
alpha = a + (b&d | c&^d) + x[_n[i]] + 0x8f1bbcdc
s := int(_r[i])
alpha = bits.RotateLeft32(alpha, s) + e
beta = bits.RotateLeft32(c, 10)
a, b, c, d, e = e, alpha, b, beta, d
// parallel line
alpha = aa + (bb&cc | ^bb&dd) + x[n_[i]] + 0x7a6d76e9
s = int(r_[i])
alpha = bits.RotateLeft32(alpha, s) + ee
beta = bits.RotateLeft32(cc, 10)
aa, bb, cc, dd, ee = ee, alpha, bb, beta, dd
i++
}
// round 5
for i < 80 {
alpha = a + (b ^ (c | ^d)) + x[_n[i]] + 0xa953fd4e
s := int(_r[i])
alpha = bits.RotateLeft32(alpha, s) + e
beta = bits.RotateLeft32(c, 10)
a, b, c, d, e = e, alpha, b, beta, d
// parallel line
alpha = aa + (bb ^ cc ^ dd) + x[n_[i]]
s = int(r_[i])
alpha = bits.RotateLeft32(alpha, s) + ee
beta = bits.RotateLeft32(cc, 10)
aa, bb, cc, dd, ee = ee, alpha, bb, beta, dd
i++
}
// combine results
dd += c + md.s[1]
md.s[1] = md.s[2] + d + ee
md.s[2] = md.s[3] + e + aa
md.s[3] = md.s[4] + a + bb
md.s[4] = md.s[0] + b + cc
md.s[0] = dd
p = p[BlockSize:]
n += BlockSize
}
return n
}
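Each round step leans on bits.RotateLeft32 for the per-line roll amounts in _r and r_. A tiny illustration of the primitive, since a rotate (unlike a shift) feeds the displaced high bits back in at the bottom:

```go
package main

import (
	"fmt"
	"math/bits"
)

func main() {
	// The top bit wraps around to the bottom: 0x80000001 rotated left by
	// one is 0x00000003, where a plain shift would yield 0x00000002.
	fmt.Printf("%08x\n", bits.RotateLeft32(0x80000001, 1))
}
```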

database/README.md

@@ -1,7 +1,7 @@
database
========
-[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
+[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/database)

database/driver.go

@@ -36,7 +36,7 @@ type Driver struct {
var drivers = make(map[string]*Driver)
// RegisterDriver adds a backend database driver to available interfaces.
-// ErrDbTypeRegistered will be retruned if the database type for the driver has
+// ErrDbTypeRegistered will be returned if the database type for the driver has
// already been registered.
func RegisterDriver(driver Driver) error {
if _, exists := drivers[driver.DbType]; exists {
@@ -63,7 +63,7 @@ func SupportedDrivers() []string {
// arguments are specific to the database type driver. See the documentation
// for the database driver for further details.
//
-// ErrDbUnknownType will be returned if the the database type is not registered.
+// ErrDbUnknownType will be returned if the database type is not registered.
func Create(dbType string, args ...interface{}) (DB, error) {
drv, exists := drivers[dbType]
if !exists {
@@ -78,7 +78,7 @@ func Create(dbType string, args ...interface{}) (DB, error) {
// specific to the database type driver. See the documentation for the database
// driver for further details.
//
-// ErrDbUnknownType will be returned if the the database type is not registered.
+// ErrDbUnknownType will be returned if the database type is not registered.
func Open(dbType string, args ...interface{}) (DB, error) {
drv, exists := drivers[dbType]
if !exists {
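The registry above mirrors database/sql: a backend registers itself from an init function, typically triggered by a blank import, and Create/Open dispatch on the database type string. A hedged sketch of wiring that together; the v2 import paths and the path/network arguments follow the package's ExampleCreate shown later in this diff, but treat the exact paths and signatures as assumptions:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/decred/dcrd/database/v2"
	// The blank import runs the driver's init function, which calls
	// database.RegisterDriver for the "ffldb" type.
	_ "github.com/decred/dcrd/database/v2/ffldb"
	"github.com/decred/dcrd/wire"
)

func main() {
	dbPath := filepath.Join(os.TempDir(), "exampledb")
	defer os.RemoveAll(dbPath)

	// ErrDbUnknownType would be returned here if the ffldb driver had not
	// been registered by the blank import above.
	db, err := database.Create("ffldb", dbPath, wire.MainNet)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer db.Close()
}
```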


@@ -22,7 +22,7 @@
)
// checkDbError ensures the passed error is a database.Error with an error code
// that matches the passed error code.
func checkDbError(t *testing.T, testName string, gotErr error, wantErrCode database.ErrorCode) bool {
dbErr, ok := gotErr.(database.Error)
if !ok {

database/error.go

@@ -82,14 +82,14 @@ const (
// ErrKeyRequired indicates at attempt to insert a zero-length key.
ErrKeyRequired
-// ErrKeyTooLarge indicates an attmempt to insert a key that is larger
+// ErrKeyTooLarge indicates an attempt to insert a key that is larger
// than the max allowed key size. The max key size depends on the
// specific backend driver being used. As a general rule, key sizes
// should be relatively, so this should rarely be an issue.
ErrKeyTooLarge
-// ErrValueTooLarge indicates an attmpt to insert a value that is larger
-// than max allowed value size. The max key size depends on the
+// ErrValueTooLarge indicates an attempt to insert a value that is
+// larger than max allowed value size. The max key size depends on the
// specific backend driver being used.
ErrValueTooLarge

database/example_test.go

@@ -24,7 +24,7 @@ func ExampleCreate() {
//
// import (
// 	"github.com/decred/dcrd/database2"
-// 	_ "github.com/decred/dcrd/database/ffldb"
+// 	_ "github.com/decred/dcrd/database/v2/ffldb"
// )
// Create a database and schedule it to be closed and removed on exit.

database/export_test.go

@@ -4,7 +4,7 @@
// license that can be found in the LICENSE file.
/*
-This test file is part of the database package rather than than the
+This test file is part of the database package rather than the
database_test package so it can bridge access to the internals to properly test
cases which are either not possible or can't reliably be tested via the public
interface. The functions, constants, and variables are only exported while the

database/ffldb/README.md

@@ -1,7 +1,7 @@
ffldb
=====
-[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
+[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/database/ffldb)

database/ffldb/blockio.go

@@ -134,10 +134,10 @@ type blockStore struct {
// lruMutex protects concurrent access to the least recently used list
// and lookup map.
//
-// openBlocksLRU tracks how the open files are refenced by pushing the
+// openBlocksLRU tracks how the open files are referenced by pushing the
// most recently used files to the front of the list thereby trickling
// the least recently used files to end of the list. When a file needs
-// to be closed due to exceeding the the max number of allowed open
+// to be closed due to exceeding the max number of allowed open
// files, the one at the end of the list is closed.
//
// fileNumToLRUElem is a mapping between a specific block file number
@@ -744,7 +744,7 @@ func scanBlockFiles(dbPath string) (int, uint32) {
// and offset set and all fields initialized.
func newBlockStore(basePath string, network wire.CurrencyNet) *blockStore {
// Look for the end of the latest block to file to determine what the
-// write cursor position is from the viewpoing of the block files on
+// write cursor position is from the viewpoint of the block files on
// disk.
fileNum, fileOff := scanBlockFiles(basePath)
if fileNum == -1 {
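The openBlocksLRU/fileNumToLRUElem pairing described in the comment above is the classic container/list-plus-map LRU. A hedged, self-contained sketch of that bookkeeping; the names echo the fields but this is not the ffldb code:

```go
package main

import (
	"container/list"
	"fmt"
)

type openFilesLRU struct {
	maxOpen          int
	openBlocksLRU    *list.List               // file numbers, most recent at front
	fileNumToLRUElem map[uint32]*list.Element // O(1) lookup into the list
}

func newOpenFilesLRU(maxOpen int) *openFilesLRU {
	return &openFilesLRU{
		maxOpen:          maxOpen,
		openBlocksLRU:    list.New(),
		fileNumToLRUElem: make(map[uint32]*list.Element),
	}
}

// touch marks fileNum as most recently used and reports which file number
// to close (and true) when the open-file limit is exceeded.
func (l *openFilesLRU) touch(fileNum uint32) (uint32, bool) {
	if elem, ok := l.fileNumToLRUElem[fileNum]; ok {
		l.openBlocksLRU.MoveToFront(elem)
		return 0, false
	}
	l.fileNumToLRUElem[fileNum] = l.openBlocksLRU.PushFront(fileNum)
	if l.openBlocksLRU.Len() > l.maxOpen {
		lru := l.openBlocksLRU.Back()
		l.openBlocksLRU.Remove(lru)
		evicted := lru.Value.(uint32)
		delete(l.fileNumToLRUElem, evicted)
		return evicted, true
	}
	return 0, false
}

func main() {
	l := newOpenFilesLRU(2)
	l.touch(1)
	l.touch(2)
	if evicted, ok := l.touch(3); ok {
		fmt.Println("close file", evicted) // close file 1
	}
}
```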

database/ffldb/db.go

@@ -132,7 +132,7 @@ func makeDbErr(c database.ErrorCode, desc string, err error) database.Error {
}
// convertErr converts the passed leveldb error into a database error with an
// equivalent error code and the passed description. It also sets the passed
// error as the underlying error.
func convertErr(desc string, ldbErr error) database.Error {
// Use the driver-specific error code by default. The code below will
@@ -1015,7 +1015,7 @@ func (tx *transaction) notifyActiveIters() {
tx.activeIterLock.RUnlock()
}
-// checkClosed returns an error if the the database or transaction is closed.
+// checkClosed returns an error if the database or transaction is closed.
func (tx *transaction) checkClosed() error {
// The transaction is no longer valid if it has been closed.
if tx.closed {
@@ -1086,11 +1086,11 @@ func (tx *transaction) fetchKey(key []byte) []byte {
// NOTE: This function must only be called on a writable transaction. Since it
// is an internal helper function, it does not check.
func (tx *transaction) deleteKey(key []byte, notifyIterators bool) {
-// Remove the key from the list of pendings keys to be written on
+// Remove the key from the list of pending keys to be written on
// transaction commit if needed.
tx.pendingKeys.Delete(key)
// Add the key to the list to be deleted on transaction commit.
tx.pendingRemove.Put(key, nil)
// Notify the active iterators about the change if the flag is set.
@@ -1999,7 +1999,7 @@ func (db *db) Close() error {
return closeErr
}
-// filesExists reports whether the named file or directory exists.
+// fileExists reports whether the named file or directory exists.
func fileExists(name string) bool {
if _, err := os.Stat(name); err != nil {
if os.IsNotExist(err) {

database/ffldb/dbcache.go

@@ -468,9 +468,9 @@ func (c *dbCache) commitTreaps(pendingKeys, pendingRemove TreapForEacher) error
})
}
-// flush flushes the database cache to persistent storage. This involes syncing
-// the block store and replaying all transactions that have been applied to the
-// cache to the underlying database.
+// flush flushes the database cache to persistent storage. This involves
+// syncing the block store and replaying all transactions that have been
+// applied to the cache to the underlying database.
//
// This function MUST be called with the database write lock held.
func (c *dbCache) flush() error {
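The replay half of flush is visible in commitTreaps' signature above: walk the pending puts, then the pending deletes, applying each to the backing store. A hedged sketch of that shape, with hypothetical interfaces standing in for the treaps and the underlying store:

```go
package sketch

// kvStore is a hypothetical stand-in for the underlying persistent store.
type kvStore interface {
	Put(key, value []byte) error
	Delete(key []byte) error
}

// forEacher is a hypothetical stand-in for TreapForEacher: an ordered
// in-memory structure that can walk its key/value pairs.
type forEacher interface {
	ForEach(fn func(key, value []byte) bool)
}

// commitTreaps replays buffered puts and then buffered deletes against the
// store, stopping at the first error, mirroring the flush flow described
// above.
func commitTreaps(store kvStore, pendingKeys, pendingRemove forEacher) error {
	var err error
	pendingKeys.ForEach(func(k, v []byte) bool {
		err = store.Put(k, v)
		return err == nil
	})
	if err != nil {
		return err
	}
	pendingRemove.ForEach(func(k, _ []byte) bool {
		err = store.Delete(k)
		return err == nil
	})
	return err
}
```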

database/ffldb/driver.go

@@ -79,7 +79,7 @@ func init() {
UseLogger: useLogger,
}
if err := database.RegisterDriver(driver); err != nil {
-panic(fmt.Sprintf("Failed to regiser database driver '%s': %v",
+panic(fmt.Sprintf("Failed to register database driver '%s': %v",
dbType, err))
}
}

database/ffldb/export_test.go

@@ -4,7 +4,7 @@
// license that can be found in the LICENSE file.
/*
-This test file is part of the ffldb package rather than than the ffldb_test
+This test file is part of the ffldb package rather than the ffldb_test
package so it can bridge access to the internals to properly test cases which
are either not possible or can't reliably be tested via the public interface.
The functions are only exported while the tests are being run.

database/ffldb/interface_test.go

@@ -89,7 +89,7 @@ func loadBlocks(t *testing.T, dataFile string, network wire.CurrencyNet) ([]*dcr
}
// checkDbError ensures the passed error is a database.Error with an error code
// that matches the passed error code.
func checkDbError(t *testing.T, testName string, gotErr error, wantErrCode database.ErrorCode) bool {
dbErr, ok := gotErr.(database.Error)
if !ok {
@@ -230,7 +230,7 @@ func testDeleteValues(tc *testContext, bucket database.Bucket, values []keyPair)
return true
}
-// testCursorInterface ensures the cursor itnerface is working properly by
+// testCursorInterface ensures the cursor interface is working properly by
// exercising all of its functions on the passed bucket.
func testCursorInterface(tc *testContext, bucket database.Bucket) bool {
// Ensure a cursor can be obtained for the bucket.
@@ -615,7 +615,7 @@ func rollbackOnPanic(t *testing.T, tx database.Tx) {
func testMetadataManualTxInterface(tc *testContext) bool {
// populateValues tests that populating values works as expected.
//
-// When the writable flag is false, a read-only tranasction is created,
+// When the writable flag is false, a read-only transaction is created,
// standard bucket tests for read-only transactions are performed, and
// the Commit function is checked to ensure it fails as expected.
//
@@ -1189,7 +1189,7 @@ func testFetchBlockIOMissing(tc *testContext, tx database.Tx) bool {
// testFetchBlockIO ensures all of the block retrieval API functions work as
// expected for the provide set of blocks. The blocks must already be stored in
-// the database, or at least stored into the the passed transaction. It also
+// the database, or at least stored into the passed transaction. It also
// tests several error conditions such as ensuring the expected errors are
// returned when fetching blocks, headers, and regions that don't exist.
func testFetchBlockIO(tc *testContext, tx database.Tx) bool {

database/ffldb/whitebox_test.go

@@ -84,7 +84,7 @@ func loadBlocks(t *testing.T, dataFile string, network wire.CurrencyNet) ([]*dcr
}
// checkDbError ensures the passed error is a database.Error with an error code
// that matches the passed error code.
func checkDbError(t *testing.T, testName string, gotErr error, wantErrCode database.ErrorCode) bool {
dbErr, ok := gotErr.(database.Error)
if !ok {
@@ -142,7 +142,7 @@ func TestConvertErr(t *testing.T) {
func TestCornerCases(t *testing.T) {
t.Parallel()
-// Create a file at the datapase path to force the open below to fail.
+// Create a file at the database path to force the open below to fail.
dbPath := filepath.Join(os.TempDir(), "ffldb-errors-v2")
_ = os.RemoveAll(dbPath)
fi, err := os.Create(dbPath)
@@ -195,7 +195,7 @@ func TestCornerCases(t *testing.T) {
ldb := idb.(*db).cache.ldb
ldb.Close()
-// Ensure initilization errors in the underlying database work as
+// Ensure initialization errors in the underlying database work as
// expected.
testName = "initDB: reinitialization"
wantErrCode = database.ErrDbNotOpen

database/interface.go

@@ -449,7 +449,7 @@ type DB interface {
//
// NOTE: The transaction must be closed by calling Rollback or Commit on
// it when it is no longer needed. Failure to do so can result in
-// unclaimed memory and/or inablity to close the database due to locks
+// unclaimed memory and/or inability to close the database due to locks
// depending on the specific database implementation.
Begin(writable bool) (Tx, error)
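The NOTE above is the usual transaction contract: every Begin must be paired with exactly one Commit or Rollback. A hedged sketch of honoring it; the Metadata().Put call is an assumption drawn from the metadata bucket tests elsewhere in this diff, and the v2 import path mirrors the ExampleCreate change:

```go
package sketch

import "github.com/decred/dcrd/database/v2"

// putValue writes one metadata key inside a transaction. The deferred
// Rollback is a safety net: after a successful Commit it merely returns an
// ignored error, but on any early return it releases the transaction.
func putValue(db database.DB, key, value []byte) error {
	tx, err := db.Begin(true)
	if err != nil {
		return err
	}
	defer tx.Rollback()

	if err := tx.Metadata().Put(key, value); err != nil {
		return err
	}
	return tx.Commit()
}
```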

database/internal/treap/README.md

@@ -1,14 +1,14 @@
treap
=====
-[![Build Status](https://img.shields.io/travis/decred/dcrd.svg)](https://travis-ci.org/decred/dcrd)
+[![Build Status](https://github.com/decred/dcrd/workflows/Build%20and%20Test/badge.svg)](https://github.com/decred/dcrd/actions)
[![ISC License](https://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](https://godoc.org/github.com/decred/dcrd/database/internal/treap)
Package treap implements a treap data structure that is used to hold ordered
key/value pairs using a combination of binary search tree and heap semantics.
It is a self-organizing and randomized data structure that doesn't require
-complex operations to to maintain balance. Search, insert, and delete
+complex operations to maintain balance. Search, insert, and delete
operations are all O(log n). Both mutable and immutable variants are provided.
The mutable variant is typically faster since it is able to simply update the
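The balance-without-complex-operations claim in the README comes from the treap invariant: keys follow binary-search-tree order while randomly assigned priorities follow heap order, so simple rotations restore both. A hedged, minimal insert sketch (not the package's actual implementation, which also provides the immutable variant):

```go
package sketch

import "math/rand"

// node keeps BST order on key and max-heap order on the randomly chosen
// priority; together these keep the expected depth at O(log n).
type node struct {
	key         int
	priority    int
	left, right *node
}

func rotateRight(n *node) *node {
	l := n.left
	n.left, l.right = l.right, n
	return l
}

func rotateLeft(n *node) *node {
	r := n.right
	n.right, r.left = r.left, n
	return r
}

// insert descends as in an ordinary BST, then rotates the new node upward
// until its parent's priority is higher, restoring the heap property with
// no global rebalancing.
func insert(n *node, key int) *node {
	if n == nil {
		return &node{key: key, priority: rand.Int()}
	}
	if key < n.key {
		n.left = insert(n.left, key)
		if n.left.priority > n.priority {
			n = rotateRight(n)
		}
	} else {
		n.right = insert(n.right, key)
		if n.right.priority > n.priority {
			n = rotateLeft(n)
		}
	}
	return n
}
```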

Some files were not shown because too many files have changed in this diff.