This adds the ErrorCode member to TxRuleError, filling it with
appropriate values throughout the mempool package. This allows clients
of the package to correctly identify error causes with greater
granularity and respond appropriately.
It also deprecates the RejectCode field and the ErrToRejectError
function; both will be removed in the next major version of the
package.
All call sites that inspect mempool errors were updated to use the new
error codes instead of the deprecated reject codes. Additional mempool
tests were added to ensure correct behavior in the relevant cases.
Finally, given the introduction and use of a new public field, the main
module was updated to use the as-yet unfinished mempool v3.1.0, which
will include the required functionality.
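As a rough illustration of the intent, the following sketch mirrors the
described design rather than importing the real package; the concrete
code names (ErrDuplicate, ErrInsufficientFee) and the errors.As usage
are illustrative assumptions, not the actual exported API.

package mempoolerrs

import "errors"

// ErrorCode is a machine-readable cause mirroring the new member
// described above; the specific constants are illustrative assumptions.
type ErrorCode int

const (
	ErrDuplicate ErrorCode = iota
	ErrInsufficientFee
)

// TxRuleError mirrors the described rule error that now carries an
// ErrorCode alongside its human-readable description.
type TxRuleError struct {
	ErrorCode   ErrorCode
	Description string
}

func (e TxRuleError) Error() string { return e.Description }

// classifyRejection shows how a client can respond to specific causes
// with greater granularity than the deprecated reject codes allowed.
func classifyRejection(err error) string {
	var rerr TxRuleError
	if !errors.As(err, &rerr) {
		return "not a transaction rule violation"
	}
	switch rerr.ErrorCode {
	case ErrDuplicate:
		return "transaction already known; safe to ignore"
	case ErrInsufficientFee:
		return "fee too low; retry with a higher fee"
	default:
		return "rejected: " + rerr.Description
	}
}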
This updates the rpc ask-wallet command set with missing entries.
The set has also been ordered alphabetically, and some entries have
been removed because they are not yet implemented by the wallet.
This combines the two conditions for the aggressive mining path into a
single condition and does a bit of light cleanup to remove the template
copies that are no longer necessary due to the removal of the old-style
template caching.
This removes the UpdateExtraNonce function, which updated an extra nonce
in the coinbase transaction and recalculated the merkle root, since that
is unnecessary and wasteful for Decred due to the extra nonce being
available in the block header.
Further, due to the above and the fact that the template doesn't have a
height set, the function isn't currently being called anyway. This can
be seen by diffing the decoded output of subsequent getwork calls and
noting that the only thing updated between full regenerations of new
templates is the timestamp, as expected.
$ diff -uNp work1.txt work2.txt
--- work1.txt 2019-09-07 08:18:58.410917100 -0500
+++ work2.txt 2019-09-07 08:19:01.216456300 -0500
@@ -16,7 +16,7 @@
"sbits": 0.00021026,
"height": 98,
"size": 7221,
- "time": 1567862338,
+ "time": 1567862341,
"nonce": 0,
"extradata": "0000000000000000000000000000000000000000000000000000000000000000",
"stakeversion": 0,
This removes all of the code related to setting and updating cached
templates in the block manager since they are no longer used.
This is easy to verify by noting that the only places that set
cachedCurrentTemplate and cachedParentTemplate set them to nil.
This modifies all of the RPC code to use the chain parameters that are
associated with the RPC server instead of the global activeNetParams and
thus moves one step closer to being able to split the RPC server out
into a separate package.
When a transaction is checked for relevance to a websocket client with
a loaded transaction filter, a call to ExtractPkScriptAddrs is not
enough. Commitments in tickets are encoded in an OP_RETURN output, so
an additional parse of the script is required to check for a committed
P2PKH or P2SH HASH160.
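As a simplified sketch of that additional parse (illustrative only; the
real relevance check lives in the websocket notification code and the
stake code), a ticket commitment output script is an OP_RETURN followed
by a 30-byte push containing a 20-byte HASH160, an 8-byte amount whose
most significant bit flags a P2SH commitment, and 2 bytes of fee limits:

// commitmentHash160 extracts the committed HASH160 from a ticket
// commitment output script so it can be compared against the addresses
// loaded in the client's transaction filter.  Validation beyond basic
// shape checks is intentionally omitted.
func commitmentHash160(pkScript []byte) (hash160 []byte, isP2SH bool, ok bool) {
	// OP_RETURN (0x6a) + OP_DATA_30 (0x1e) + 30 payload bytes.
	const commitmentScriptLen = 32
	if len(pkScript) != commitmentScriptLen || pkScript[0] != 0x6a ||
		pkScript[1] != 0x1e {
		return nil, false, false
	}
	hash160 = pkScript[2:22]        // committed P2PKH/P2SH HASH160
	isP2SH = pkScript[29]&0x80 != 0 // MSB of the 8-byte commitment amount
	return hash160, isP2SH, true
}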
This removes the getblocktemplate RPC and its helpers from the codebase.
Ongoing mining updates, focused on the voting/block validation process
with respect to generating block templates for getwork, make it the
better option for Decred. In addition, the getblocktemplate RPC was
buggy and has been disabled for a while.
Some lint-related issues have been addressed as well.
This adds an additional read from the ok channel in the peer listener
tests to ensure the version message is consumed as well as the verack,
so that the remaining tests line up with the messages being tested.
This modifies the initialization order so that, when not provided by a
caller, the subsidy cache is created before the blockchain instance,
which is more consistent.
This implements new version 2 filters, which include four changes
compared to version 1 filters:
- Support for independently specifying the false positive rate and
Golomb coding bin size which allows minimizing the filter size
- A faster (incompatible with version 1) reduction function
- A more compact serialization for the number of members in the set
- Deduplication of all hash collisions prior to reducing and serializing
the deltas
In addition, it adds a full set of tests and updates the benchmarks to
use the new version 2 filters.
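To make a couple of those construction changes concrete, the sketch
below shows a multiply-then-shift style reduction of a 64-bit hash into
[0, N*M) together with the sort-and-deduplicate step that precedes delta
encoding. It is an illustration of the general technique under the
assumption that member hashes are already available as 64-bit values; it
is not the filter code itself, and the parameter naming follows the
common scheme rather than the package's exact identifiers.

package gcssketch

import (
	"math/bits"
	"sort"
)

// reduce maps a uniformly distributed 64-bit hash onto [0, modulus) by
// taking the high 64 bits of the 128-bit product, which is cheaper than
// a 64-bit modulo and is the style of reduction commonly used for this
// purpose.
func reduce(hash, modulus uint64) uint64 {
	hi, _ := bits.Mul64(hash, modulus)
	return hi
}

// reducedMembers reduces, sorts, and deduplicates hashed members, which
// is the form the values take immediately before the deltas between
// successive values are Golomb-Rice coded.  n*m is assumed to fit in a
// uint64.
func reducedMembers(hashes []uint64, n, m uint64) []uint64 {
	vals := make([]uint64, 0, len(hashes))
	for _, h := range hashes {
		vals = append(vals, reduce(h, n*m))
	}
	sort.Slice(vals, func(i, j int) bool { return vals[i] < vals[j] })

	// Drop hash collisions so duplicate values (zero deltas) never make
	// it into the serialized filter.
	deduped := vals[:0]
	var prev uint64
	for i, v := range vals {
		if i > 0 && v == prev {
			continue
		}
		deduped = append(deduped, v)
		prev = v
	}
	return deduped
}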
The primary motivating factor for these changes is the ability to
minimize the size of the filters; however, the following is a before and
after comparison of version 1 and 2 filters in terms of performance and
allocations.
It is interesting to note that the results for matching a single item
are not very representative because the actual hash value itself
dominates to the point that the timings can vary significantly given the
very low ns values involved. Those differences average out when matching
multiple items, which is the much more realistic scenario, and the
performance increase is in line with the expected values. It is also
worth noting that filter construction now takes a bit longer due to the
additional deduplication step. While the performance numbers for filter
construction are about 25% larger in relative terms, that is only a few
ms of difference in practice and is therefore an acceptable trade-off
for the size savings provided.
benchmark old ns/op new ns/op delta
-----------------------------------------------------------------
BenchmarkFilterBuild50000 16194920 20279043 +25.22%
BenchmarkFilterBuild100000 32609930 41629998 +27.66%
BenchmarkFilterMatch 620 593 -4.35%
BenchmarkFilterMatchAny 2687 2302 -14.33%
benchmark old allocs new allocs delta
-----------------------------------------------------------------
BenchmarkFilterBuild50000 6 17 +183.33%
BenchmarkFilterBuild100000 6 18 +200.00%
BenchmarkFilterMatch 0 0 +0.00%
BenchmarkFilterMatchAny 0 0 +0.00%
benchmark old bytes new bytes delta
-----------------------------------------------------------------
BenchmarkFilterBuild50000 688366 2074653 +201.39%
BenchmarkFilterBuild100000 1360064 4132627 +203.86%
BenchmarkFilterMatch 0 0 +0.00%
BenchmarkFilterMatchAny 0 0 +0.00%
This refactors the best chain state and block index loading code into
separate functions so they are available to upcoming database update
code to build version 2 gcs filters.
During 32-bit nonce iteration, if a block solution wasn't found, the
iterator variable would overflow back to 0, creating an infinite loop,
thus continuing the puzzle search without ever updating the extra
nonce field. This bug has never been triggered in practice because
the code in question has only ever been used with difficulties where
a solution exists within the regular nonce space.
The extra nonce iteration logic itself was also imperfect in that it
wouldn't test a value of exactly 2^64 - 1.
The behavior we actually want is to loop through the entire unsigned
integer space for both the regular and extra nonces, and for this
process to continue forever until a solution is found. Note that
periodic updates to the block header timestamp during iteration ensure
that unique hashes are generated for subsequent generations of the
same nonce values.
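The following sketch illustrates the intended looping behavior described
above (not the actual miner code): the full 32-bit nonce space is
tested, including its maximum value, before the extra nonce is bumped,
and the extra nonce itself wraps around so the search continues
indefinitely. solvesBlock stands in for hashing the header and comparing
it against the target.

package miningsketch

import "math"

// solve walks the entire regular and extra nonce spaces without the
// overflow pitfalls described above.  In the real miner, the header
// timestamp is also updated periodically so repeated nonce values hash
// differently.
func solve(solvesBlock func(nonce uint32, extraNonce uint64) bool) (uint32, uint64) {
	for extraNonce := uint64(0); ; extraNonce++ { // wraps to 0 after 2^64 - 1
		for nonce := uint32(0); ; nonce++ {
			if solvesBlock(nonce, extraNonce) {
				return nonce, extraNonce
			}
			if nonce == math.MaxUint32 {
				// The regular nonce space, including its maximum value,
				// is exhausted; move on to the next extra nonce.
				break
			}
		}
	}
}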
This modifies the code to support an independent false positive rate and
Golomb coding bin size. Among other things, this permits more optimal
parameters for minimizing the filter size to be specified.
This capability will be used in the upcoming version 2 filters that will
ultimately be included in header commitments.
For a concrete example, the current version 1 filter for block 89341 on
mainnet contains 2470 items resulting in a full serialized size of 6,669
bytes. In contrast, if the optimal parameters were specified as
described by the comments in this commit, with no other changes to the
items included in the filter, that same filter would be 6,505 bytes,
which is a size reduction of about 2.46%. This might not seem like a
significant amount, but consider that there is a filter for every block,
so it really adds up.
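For intuition about why decoupling the two parameters helps, here is a
sketch of the sizing relationship; the names B (Golomb-Rice remainder
bits) and M (inverse false positive rate) are used purely for
illustration of the scheme described above.

// riceCodedBits returns the number of bits needed to code a single
// delta with Golomb-Rice parameter b: a unary quotient plus a stop bit,
// followed by b remainder bits.
func riceCodedBits(delta uint64, b uint8) uint64 {
	return (delta >> b) + 1 + uint64(b)
}

// totalFilterBits sums the coded size of all deltas for a candidate b.
// Since the false positive rate depends only on the inverse rate M used
// during reduction, b can be tuned to minimize this total without
// changing the filter's false positive behavior.
func totalFilterBits(deltas []uint64, b uint8) uint64 {
	var total uint64
	for _, d := range deltas {
		total += riceCodedBits(d, b)
	}
	return total
}

With version 1 filters, both the remainder size and the false positive
rate were derived from the single P parameter, so one could not be
tuned without changing the other.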
Since the internal filter no longer directly has a P parameter, this
moves the method to obtain it to the FilterV1 type and adds a new test
to ensure it is returned properly.
Additionally, all of the tests are converted to use the new parameters
while retaining the same effective values to help prove the correctness
of the new code.
Finally, it also significantly reduces the number of allocations
required to construct a filter, resulting in faster filter construction
and reduced pressure on the GC, and does some other minor consistency
cleanup while here.
In terms of the reduction in allocations, the following is a before and
after comparison of building filters with 50k and 100k elements:
benchmark old ns/op new ns/op delta
--------------------------------------------------------------
BenchmarkFilterBuild50000 18095111 15680001 -13.35%
BenchmarkFilterBuild100000 31980156 31389892 -1.85%
benchmark old allocs new allocs delta
--------------------------------------------------------------
BenchmarkFilterBuild50000 31 6 -80.65%
BenchmarkFilterBuild100000 34 6 -82.35%
benchmark old bytes new bytes delta
--------------------------------------------------------------
BenchmarkFilterBuild50000 1202343 688271 -42.76%
BenchmarkFilterBuild100000 2488472 1360000 -45.35%
This simply rearranges the funcs so they are more logically grouped in
order to provide cleaner diffs for upcoming changes. There are no
functional changes.
This optimizes the Hash method of gcs filters by making use of the new
zero-alloc hashing funcs available in crypto/blake256.
The following is a before and after comparison:
benchmark old ns/op new ns/op delta
-------------------------------------------------
BenchmarkHash 1786 1315 -26.37%
benchmark old allocs new allocs delta
-------------------------------------------------
BenchmarkHash 2 0 -100.00%
benchmark old bytes new bytes delta
-------------------------------------------------
BenchmarkHash 176 0 -100.00%
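The general shape of the optimization is sketched below; the exact names
of the zero-alloc helpers in crypto/blake256 are assumed here (a
Sum256-style function returning a fixed-size array), so treat this as an
outline of the technique rather than the actual change.

package hashsketch

// The import path and helper names below are assumptions based on the
// description above; they follow the common shape of blake256 packages.
import "github.com/decred/dcrd/crypto/blake256"

// Before: the generic hash.Hash flow forces heap allocations for the
// hash state and the returned digest slice.
func filterHashAllocating(serialized []byte) []byte {
	h := blake256.New() // assumed constructor
	h.Write(serialized)
	return h.Sum(nil) // allocates the digest slice
}

// After: a Sum256-style helper that returns a fixed-size array lets the
// result live on the stack, eliminating both allocations.
func filterHashZeroAlloc(serialized []byte) [32]byte {
	return blake256.Sum256(serialized) // assumed zero-alloc helper
}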
Currently, the filters provide two different serialization formats per
version. The first is the raw filter bytes without the number of items
in its data set and is implemented by the Bytes and FromBytesV1
functions. The second includes that information and is implemented by
the NBytes and FromNBytesV1 functions.
In practice, the ability to serialize the filter independently from the
number of items in its data set is not very useful since that
information is required to be able to query the filter and, unlike the
other parameters which are fixed (e.g. false positive rate and key), the
number of items varies per filter. For this reason, all usage in
practice calls NBytes and FromNBytesV1.
Consequently, this simplifies the API for working with filters by
standardizing on a single serialization format per filter version which
includes the number of items in its data set.
In order to accomplish this, the current Bytes and FromBytesV1 functions
are removed and the NBytes and FromNBytesV1 functions are renamed to
take their place.
This also updates all tests and callers in the repo accordingly.
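Conceptually, the retained format simply carries the member count ahead
of the raw filter bytes; the sketch below illustrates that idea with a
uvarint prefix, which may not match the exact integer encoding the
package uses.

package serializesketch

import "encoding/binary"

// serializeWithCount prepends the number of members to the raw filter
// bytes so a reader has everything needed to query the filter.  The
// uvarint encoding here is illustrative only.
func serializeWithCount(numMembers uint64, filterData []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64+len(filterData))
	n := binary.PutUvarint(buf, numMembers)
	copy(buf[n:], filterData)
	return buf[:n+len(filterData)]
}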
This ensures filters properly match search items which happen to hash to
zero and adds a test for the condition. While here, it also rewrites
the MatchAny function to make it easier to reason about.
This was discovered by the new tests, which intentionally exercise a
high false positive rate and random keys.
This refactors the primary gcs filter logic into an internal struct with
a version parameter in order to pave the way for supporting v2 filters,
which will have a different serialization that makes them incompatible
with v1 filters, while still retaining the ability to work with v1
filters in the interim.
The exported type is renamed to FilterV1 and the new internal struct is
embedded so its methods are externally available.
The tests and all callers in the repo have been updated accordingly.
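The embedding pattern being described looks roughly like the following;
the field and method names are placeholders rather than the package's
actual ones.

// filter holds the version-aware logic shared by all filter versions.
type filter struct {
	version uint16
	n       uint32 // number of members in the data set
	// ... key, parameters, and the compressed filter data ...
}

// N returns the number of members and is shared by every version.
func (f *filter) N() uint32 { return f.n }

// FilterV1 is the exported version 1 filter.  Embedding the unexported
// struct promotes its methods, so they remain externally available.
type FilterV1 struct {
	filter
}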
This updates the error handling in the gcs package to be consistent with
the rest of the code base to provide a proper error type and error codes
that can be programmatically detected.
This is part of the ongoing process to clean up and improve the gcs
module to the quality level required by consensus code for ultimate
inclusion in header commitments.
This rewrites the tests to make them more consistent with the rest of
the code base and significantly increases their coverage of the code.
It also reworks the benchmarks to actually benchmark what their names
claim, renames them for consistency, and makes them more stable by
ensuring the same prng seed is used each run to eliminate variance
introduced by different values.
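The benchmark stabilization amounts to seeding the prng with a fixed
value so every run operates on identical member data; a minimal sketch
(the sizes and seed are arbitrary):

package benchsketch

import "math/rand"

// fixedSeedMembers generates the same pseudo-random member data on
// every run, eliminating variance in the benchmarks that stems from
// differing inputs.
func fixedSeedMembers(count int) [][]byte {
	prng := rand.New(rand.NewSource(42)) // fixed seed => identical data each run
	members := make([][]byte, count)
	for i := range members {
		members[i] = make([]byte, 32)
		prng.Read(members[i])
	}
	return members
}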
Finally, it removes an impossible-to-hit condition from the bit reader
and adds a couple of additional checks to harden the filters against
potential misuse.
This is part of the ongoing process to clean up and improve the gcs
module to the quality level required by consensus code for ultimate
inclusion in header commitments.
This adds support for empty filters, rather than treating them as an
error, along with a full set of tests to ensure the empty filter works
as intended.
It is part of the ongoing process to clean up and improve the gcs module
to the quality level required by consensus code for ultimate inclusion
in header commitments.
This removes the unused and undesired FromPBytes and FromNPBytes
functions and associated tests from the gcs module in preparation for
upcoming changes aimed at supporting new filter versions for use in
header commitments.
Since these changes, and several planned upcoming ones, constitute
breaking public API changes, this bumps the major version of the gcs
module, adds a replacement for gcs/v2 to the main module, and updates
all other modules to make use of it.
It also bumps the rpcclient module to v5 since it makes use of the
gcs.Filter type in its API, adds a replacement for rpcclient/v5 to the
main module and updates all other modules to make use of it.
Note that this also marks the start of a new approach towards handling
module versioning between release cycles to reduce the maintenance
burden.
The new approach is as follows.
Whenever a new breaking change to a module's API is introduced, the
following will happen:
- Bump the major version in the go.mod of the affected module if not
already done since the last release tag
- Add a replacement to the go.mod in the main module if not already
done since the last release tag
- Update all imports in the repo to use the new major version as
necessary
- Make necessary modifications to allow all other modules to use the
new version in the same commit
- Repeat the process for any other modules that require a new major
version as a result of consuming the new major(s)
Finally, once the repo is frozen for software release, all modules will
be tagged in dependency order to stabilize them and all module
replacements will be removed in order to ensure releases are only using
fully tagged and released code.
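For reference, between releases the main module's go.mod under this
approach contains entries along these lines (the versions shown are
placeholders, and the directory targets assume the modules live in their
usual subdirectories of the repo):

module github.com/decred/dcrd

require (
	github.com/decred/dcrd/gcs/v2 v2.0.0
	github.com/decred/dcrd/rpcclient/v5 v5.0.0
)

replace (
	github.com/decred/dcrd/gcs/v2 => ./gcs
	github.com/decred/dcrd/rpcclient/v5 => ./rpcclient
)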