Fixed a serious regression in the previous dev build in the
application of `csp=` filters. Reported internally by the uBO team.
Promote usage of `removeparam` over `queryprune` in the code,
as `queryprune` is to be deprecated.
Removed the test against the previously tested hostname in
FilterHostnameDict since, as per various benchmarks, the
test does not really help.
Removed the serialization API from the Node.js code, as the
API is now present in the SNFE itself.
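A minimal sketch of what this enables, assuming `serialize()` and
`deserialize()` methods on the SNFE instance as suggested above
(details may differ from the published package):

    import { StaticNetFilteringEngine } from '@gorhill/ubo-core';

    const snfe = await StaticNetFilteringEngine.create();
    // ... useLists(), etc. ...
    const data = await snfe.serialize();
    // Later, possibly in another session: restore the engine state
    // instead of re-parsing the filter lists
    await snfe.deserialize(data);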
Not sure this can really happen, but if Math.random() were ever
to return `0.9999999999999999`, the attribute name would start
with `{`, i.e. an invalid attribute name.
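A hypothetical illustration of the edge case -- not the actual
generation code -- assuming the attribute name's first character
is derived from Math.random():

    // Math.random() returns a value in [0,1); its largest possible
    // result is 0.9999999999999999. A mapping which rounds instead
    // of flooring maps that value one past 'z' (0x7A) to '{' (0x7B):
    const r = 0.9999999999999999;
    String.fromCharCode(97 + Math.round(r * 26)); // '{' -- invalid
    String.fromCharCode(97 + Math.floor(r * 26)); // 'z' -- always valid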
Related issue:
- https://github.com/uBlockOrigin/uBlock-issues/issues/1732
The regression affected filters with the `important` option when
all of the following conditions were fulfilled (see the example
after this list):
- The filter pattern is pure hostname
- The filter has none of the following options:
- domain
- denyallow
- header
- strict1p, strict3p
- csp
- removeparam
- There is a matching exception filter
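For illustration, a hypothetical filter pair fulfilling all of the
above conditions:

    ||example.com^$important
    @@||example.com^

Here the pattern is a pure hostname, the `important` filter carries
none of the listed options, and the exception filter matches the
same requests.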
Related commit:
- a2a8ef7e85
A related mocha test has been added to detect this specific
regression in the future through `make test`; a sketch of such a
test is shown below. This replaces the dubious approach whereby
the extension itself was used to run benchmarks to detect
regressions in performance and filtering behavior.
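A minimal sketch of such a test, assuming the SNFE API published
in the npm package (`create()`, `useLists()`, `matchRequest()`);
details such as the result codes are assumptions:

    import { strict as assert } from 'assert';
    import { StaticNetFilteringEngine } from '@gorhill/ubo-core';

    describe('SNFE', () => {
        it('important filter overrides matching exception', async () => {
            const snfe = await StaticNetFilteringEngine.create();
            await snfe.useLists([
                { name: 'test', raw: '||example.com^$important\n@@||example.com^' },
            ]);
            const result = snfe.matchRequest({
                originURL: 'https://example.org/',
                url: 'https://example.com/',
                type: 'image',
            });
            assert.strictEqual(result, 1); // 1 = block
        });
    });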
The main nodejs flavor is "npm", which is to be used for
linting/testing and for the publication of an official npm
package -- and by design it has dependencies on mocha,
eslint, etc.
A new flavor "dig" has been created with minimal
dependencies, whose purpose is to make it easy to write
specialized code to investigate local code changes in
uBO -- it is not meant for publication.
Consequently, "make nodejs" has been replaced with
"make npm", and a new "dig" target has been added to the
makefile, to be used for instrumenting local code changes
for investigation purposes.
Whereas before the string segment was encoded as:
LL OOOOOOOOOOOO
where the L bits are the upper 8 bits, used to encode the
length of the segment, and the O bits are the lower 24 bits,
used to encode the offset of the string data in the character
buffer, the new code encodes as follows:
OOOOOOOOOOOO LL
Furthermore, the most significant bit of the length field LL
is now used to mark whether the current string segment
is a label boundary.
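A rough sketch of the new packing, assuming 32-bit cells stored
in a typed array (the field widths follow the description above;
the names are illustrative, not the in-tree ones):

    const BOUNDARY_BIT = 0x80;

    // Pack a 24-bit offset and a 7-bit length into one cell; bit 7
    // of the length byte flags a label boundary.
    function packSegment(offset, len, isBoundary) {
        return (offset << 8) | (isBoundary ? BOUNDARY_BIT : 0) | len;
    }

    // Unsigned shift recovers the offset even if the packed value
    // overflowed into the sign bit of a 32-bit integer.
    function unpackSegment(cell) {
        return {
            offset: cell >>> 8,     // upper 24 bits
            len: cell & 0x7F,       // lower 7 bits
            isBoundary: (cell & BOUNDARY_BIT) !== 0,
        };
    }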
This means a cell can't reference a segment longer than
127 characters. To work around this limitation for when a
segment is longer than 127 characters (a rare occurrence),
the algorithm will simply split the segment into multiple
adjacent cells.
As a result, there is no longer a need to encode
"boundariness" into special cells, which simplifies
both the storing and matching algorithms.
Additionally, added minimal documentation for the NPM
package on how to import and use HNTrieContainer as a
standalone API.
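A rough sketch of such standalone usage; the method names
(`createOne()`, `setNeedle()`, `add()`, `matches()`) reflect the
in-tree hntrie API, but exact signatures may differ from the
published documentation:

    import { HNTrieContainer } from '@gorhill/ubo-core';

    const trieContainer = new HNTrieContainer();
    const aTrie = trieContainer.createOne();
    trieContainer.setNeedle('example.com').add(aTrie);
    trieContainer.setNeedle('example.org').add(aTrie);

    // matches() returns -1 when the needle does not match
    trieContainer.setNeedle('www.example.com').matches(aTrie) !== -1; // true
    trieContainer.setNeedle('example.invalid').matches(aTrie) !== -1; // false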
For clients who may wish to persist the intermediate compiled form
in order to skip costly parsing operations when the list is fed
to the static network filtering engine.
In the static network filtering engine (snfe), the
compiling-related code was spread across two classes.
This commit makes it so that all the compiling-related
code is in the FilterCompiler class, whose clear purpose
is to compile raw filters into a form which can be persisted
and later fed to the snfe with no parsing overhead.
To compile raw static network filters, the new approach is:
snfe.createCompiler(parser);
Then, for each raw filter to compile:
compiler.compile(parser, writer);
The caller is responsible for keeping a reference to the
compiler instance for as long as it is needed. This removes
the need for the clunky code previously used to keep an
instance of the compiler alive in the snfe. A sketch of the
whole flow is shown below.
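A rough sketch of the flow; `CompiledListWriter`,
`CompiledListReader` and `fromCompiled()` are assumed from the
existing in-tree compiled-list I/O helpers, and minor details
may differ:

    // `parser` is a static-filtering parser instance and
    // `rawFilters` the text of a filter list (both assumed to be
    // set up beforehand).
    const compiler = snfe.createCompiler(parser);
    const writer = new CompiledListWriter();
    for ( const line of rawFilters.split('\n') ) {
        parser.analyze(line);
        compiler.compile(parser, writer);
    }
    // The compiled form can be persisted as a string, then later
    // fed back to the engine with no parsing overhead:
    snfe.fromCompiled(new CompiledListReader(writer.toString()));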
Additionally, snfe.tokenHistograms() has been moved to
benchmarks.js, as it has no dependency on the snfe; it's
just a utility function.