mirror of https://github.com/yt-dlp/yt-dlp.git
[cleanup] Misc fixes (see desc)
* [TVer] Fix bug in 6837633a4a
- Closes #4054
* [rumble] Fix tests - Closes #3976
* [make] Remove `cat` abuse - Closes #3989
* [make] Revert #3684 - Closes #3814
* [utils] Improve `get_elements_by_class` - Closes #3993
* [utils] Inherit `Namespace` from `types.SimpleNamespace`
* [utils] Use `re.fullmatch` for matching filters
* [jsinterp] Handle quotes in `_separate`
* [make_readme] Allow overshooting last line
Authored by: pukkandan, kwconder, MrRawes, Lesmiscore
This commit is contained in:
parent 56ba69e4c9
commit 64fa820ccf
@@ -43,7 +43,7 @@ jobs:
       run: git push origin ${{ github.event.ref }}
     - name: Get Changelog
       run: |
-        changelog=$(cat Changelog.md | grep -oPz '(?s)(?<=### ${{ steps.bump_version.outputs.ytdlp_version }}\n{2}).+?(?=\n{2,3}###)') || true
+        changelog=$(grep -oPz '(?s)(?<=### ${{ steps.bump_version.outputs.ytdlp_version }}\n{2}).+?(?=\n{2,3}###)' Changelog.md) || true
         echo "changelog<<EOF" >> $GITHUB_ENV
         echo "$changelog" >> $GITHUB_ENV
         echo "EOF" >> $GITHUB_ENV
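The change above drops a useless use of `cat`; `grep -oPz` reads `Changelog.md` directly. The same extraction can be sketched with Python's `re` module (the sample changelog text and version number here are made up):

```python
import re

# Hypothetical stand-in for Changelog.md
changelog_md = '### 2022.05.18\n\n- fix A\n- fix B\n\n### 2022.04.08\n\n- older fix\n'

# Mirrors the grep pattern: (?s) lets .+? cross newlines, the fixed-width
# lookbehind anchors on one version heading, the lookahead stops before the next
m = re.search(r'(?s)(?<=### 2022\.05\.18\n\n).+?(?=\n{2,3}###)', changelog_md)
print(m.group(0))
```

In the workflow itself the `$GITHUB_ENV` heredoc dance is unchanged; only the pipeline feeding it was simplified.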
Makefile (2 changed lines)
@@ -43,7 +43,7 @@ PYTHON ?= /usr/bin/env python3
 SYSCONFDIR = $(shell if [ $(PREFIX) = /usr -o $(PREFIX) = /usr/local ]; then echo /etc; else echo $(PREFIX)/etc; fi)
 
 # set markdown input format to "markdown-smart" for pandoc version 2 and to "markdown" for pandoc prior to version 2
-MARKDOWN = $(shell if [ "$(pandoc -v | head -n1 | cut -d" " -f2 | head -c1)" = "2" ]; then echo markdown-smart; else echo markdown; fi)
+MARKDOWN = $(shell if [ `pandoc -v | head -n1 | cut -d" " -f2 | head -c1` = "2" ]; then echo markdown-smart; else echo markdown; fi)
 
 install: lazy-extractors yt-dlp yt-dlp.1 completions
 	mkdir -p $(DESTDIR)$(BINDIR)
README.md (30 changed lines)
@@ -337,8 +337,7 @@ You can also fork the project on github and run your fork's [build workflow](.gi
     --list-extractors               List all supported extractors and exit
     --extractor-descriptions        Output descriptions of all supported
                                     extractors and exit
-    --force-generic-extractor       Force extraction to use the generic
-                                    extractor
+    --force-generic-extractor       Force extraction to use the generic extractor
     --default-search PREFIX         Use this prefix for unqualified URLs. Eg:
                                     "gvsearch2:python" downloads two videos from
                                     google videos for the search term "python".
@@ -397,8 +396,7 @@
                                     aliases; so be carefull to avoid defining
                                     recursive options. As a safety measure, each
                                     alias may be triggered a maximum of 100
-                                    times. This option can be used multiple
-                                    times
+                                    times. This option can be used multiple times
 
 ## Network Options:
     --proxy URL                     Use the specified HTTP/HTTPS/SOCKS proxy. To
@@ -425,8 +423,7 @@
                                     explicitly provided two-letter ISO 3166-2
                                     country code
     --geo-bypass-ip-block IP_BLOCK  Force bypass geographic restriction with
-                                    explicitly provided IP block in CIDR
-                                    notation
+                                    explicitly provided IP block in CIDR notation
 
 ## Video Selection:
     --playlist-start NUMBER         Playlist video to start at (default is 1)
@@ -636,8 +633,7 @@
                                     modification time (default)
     --no-mtime                      Do not use the Last-modified header to set
                                     the file modification time
-    --write-description             Write video description to a .description
-                                    file
+    --write-description             Write video description to a .description file
     --no-write-description          Do not write video description (default)
     --write-info-json               Write video metadata to a .info.json file
                                     (this may contain personal information)
@@ -659,8 +655,7 @@
                                     extraction is known to be quick (Alias:
                                     --no-get-comments)
     --load-info-json FILE           JSON file containing the video information
-                                    (created with the "--write-info-json"
-                                    option)
+                                    (created with the "--write-info-json" option)
     --cookies FILE                  Netscape formatted file to read cookies from
                                     and dump cookie jar in
     --no-cookies                    Do not read/dump cookies from/to file
@@ -676,8 +671,7 @@
                                     for decrypting Chromium cookies on Linux can
                                     be (optionally) specified after the browser
                                     name separated by a "+". Currently supported
-                                    keyrings are: basictext, gnomekeyring,
-                                    kwallet
+                                    keyrings are: basictext, gnomekeyring, kwallet
     --no-cookies-from-browser       Do not load cookies from browser (default)
     --cache-dir DIR                 Location in the filesystem where youtube-dl
                                     can store some downloaded information (such
@@ -689,8 +683,7 @@
 
 ## Thumbnail Options:
     --write-thumbnail               Write thumbnail image to disk
-    --no-write-thumbnail            Do not write thumbnail image to disk
-                                    (default)
+    --no-write-thumbnail            Do not write thumbnail image to disk (default)
     --write-all-thumbnails          Write all thumbnail image formats to disk
     --list-thumbnails               List available thumbnails of each video.
                                     Simulate unless --no-simulate is used
@@ -976,8 +969,7 @@
                                     otherwise), force (try fixing even if file
                                     already exists)
     --ffmpeg-location PATH          Location of the ffmpeg binary; either the
-                                    path to the binary or its containing
-                                    directory
+                                    path to the binary or its containing directory
     --exec [WHEN:]CMD               Execute a command, optionally prefixed with
                                     when to execute it (after_move if
                                     unspecified), separated by a ":". Supported
@@ -1004,8 +996,7 @@
                                     be used with "--paths" and "--output" to set
                                     the output filename for the split files. See
                                     "OUTPUT TEMPLATE" for details
-    --no-split-chapters             Do not split video based on chapters
-                                    (default)
+    --no-split-chapters             Do not split video based on chapters (default)
     --remove-chapters REGEX         Remove chapters whose title matches the
                                     given regular expression. The syntax is the
                                     same as --download-sections. This option can
@@ -1036,8 +1027,7 @@
                                     (after downloading and processing all
                                     formats of a video), or "playlist" (at end
                                     of playlist). This option can be used
-                                    multiple times to add different
-                                    postprocessors
+                                    multiple times to add different postprocessors
 
 ## SponsorBlock Options:
 Make chapter entries for, or remove various segments (sponsor,
@@ -11,6 +11,7 @@ README_FILE = 'README.md'
 OPTIONS_START = 'General Options:'
 OPTIONS_END = 'CONFIGURATION'
 EPILOG_START = 'See full documentation'
+ALLOWED_OVERSHOOT = 2
 
 DISABLE_PATCH = object()
 
@@ -28,6 +29,7 @@ def apply_patch(text, patch):
 
 options = take_section(sys.stdin.read(), f'\n {OPTIONS_START}', f'\n{EPILOG_START}', shift=1)
 
+max_width = max(map(len, options.split('\n')))
 switch_col_width = len(re.search(r'(?m)^\s{5,}', options).group())
 delim = f'\n{" " * switch_col_width}'
 
@@ -44,6 +46,12 @@ PATCHES = (
         rf'(?m)({delim}\S+)+$',
         lambda mobj: ''.join((delim, mobj.group(0).replace(delim, '')))
     ),
+    (  # Allow overshooting last line
+        rf'(?m)^(?P<prev>.+)${delim}(?P<current>.+)$(?!{delim})',
+        lambda mobj: (mobj.group().replace(delim, ' ')
+                      if len(mobj.group()) - len(delim) + 1 <= max_width + ALLOWED_OVERSHOOT
+                      else mobj.group())
+    ),
     (  # Avoid newline when a space is available b/w switch and description
         DISABLE_PATCH,  # This creates issues with prepare_manpage
         r'(?m)^(\s{4}-.{%d})(%s)' % (switch_col_width - 6, delim),
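The added patch tuple merges the last continuation line of an option description into the previous line when the merged line stays within `max_width + ALLOWED_OVERSHOOT`. A standalone sketch of that regex (the width and switch-column indent here are assumed; the real script derives both from the options text):

```python
import re

ALLOWED_OVERSHOOT = 2   # the constant this commit introduces
max_width = 40          # assumed for the demo
delim = '\n' + ' ' * 9  # assumed switch-column indent of 9

def allow_overshoot(text):
    # Join a lone final continuation line into the line above when the
    # result fits within max_width + ALLOWED_OVERSHOOT characters
    return re.sub(
        rf'(?m)^(?P<prev>.+)${delim}(?P<current>.+)$(?!{delim})',
        lambda mobj: (mobj.group().replace(delim, ' ')
                      if len(mobj.group()) - len(delim) + 1 <= max_width + ALLOWED_OVERSHOOT
                      else mobj.group()),
        text)

print(allow_overshoot('--opt  does something, explained' + delim + 'over'))
```

The negative lookahead `(?!{delim})` ensures only the *last* continuation line of a description is eligible for the merge.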
@@ -576,7 +576,7 @@ class YoutubeDL:
         )
         self._allow_colors = Namespace(**{
             type_: not self.params.get('no_color') and supports_terminal_sequences(stream)
-            for type_, stream in self._out_files if type_ != 'console'
+            for type_, stream in self._out_files.items_ if type_ != 'console'
         })
 
         if sys.version_info < (3, 6):
@@ -3671,7 +3671,7 @@ class YoutubeDL:
             sys.getfilesystemencoding(),
             self.get_encoding(),
             ', '.join(
-                f'{key} {get_encoding(stream)}' for key, stream in self._out_files
+                f'{key} {get_encoding(stream)}' for key, stream in self._out_files.items_
                 if stream is not None and key != 'console')
         )
@@ -302,7 +302,7 @@ class FileDownloader:
         )
 
     def _report_progress_status(self, s, default_template):
-        for name, style in self.ProgressStyles:
+        for name, style in self.ProgressStyles.items_:
             name = f'_{name}_str'
             if name not in s:
                 continue
@@ -24,6 +24,11 @@ class RumbleEmbedIE(InfoExtractor):
             'title': 'WMAR 2 News Latest Headlines | October 20, 6pm',
             'timestamp': 1571611968,
             'upload_date': '20191020',
+            'channel_url': 'https://rumble.com/c/WMAR',
+            'channel': 'WMAR',
+            'thumbnail': 'https://sp.rmbl.ws/s8/1/5/M/z/1/5Mz1a.OvCc-small-WMAR-2-News-Latest-Headline.jpg',
+            'duration': 234,
+            'uploader': 'WMAR',
         }
     }, {
         'url': 'https://rumble.com/embed/vslb7v',
@@ -38,6 +43,7 @@ class RumbleEmbedIE(InfoExtractor):
             'channel': 'CTNews',
             'thumbnail': 'https://sp.rmbl.ws/s8/6/7/i/9/h/7i9hd.OvCc.jpg',
             'duration': 901,
+            'uploader': 'CTNews',
         }
     }, {
         'url': 'https://rumble.com/embed/ufe9n.v5pv5f',
@@ -96,6 +102,7 @@ class RumbleEmbedIE(InfoExtractor):
             'channel': author.get('name'),
             'channel_url': author.get('url'),
             'duration': int_or_none(video.get('duration')),
+            'uploader': author.get('name'),
         }
@@ -24,6 +24,7 @@ _ASSIGN_OPERATORS.append(('=', (lambda cur, right: right)))
 _NAME_RE = r'[a-zA-Z_$][a-zA-Z_$0-9]*'
 
 _MATCHING_PARENS = dict(zip('({[', ')}]'))
+_QUOTES = '\'"'
 
 
 class JS_Break(ExtractorError):
@@ -69,12 +70,17 @@ class JSInterpreter:
             return
         counters = {k: 0 for k in _MATCHING_PARENS.values()}
         start, splits, pos, delim_len = 0, 0, 0, len(delim) - 1
+        in_quote, escaping = None, False
         for idx, char in enumerate(expr):
             if char in _MATCHING_PARENS:
                 counters[_MATCHING_PARENS[char]] += 1
             elif char in counters:
                 counters[char] -= 1
-            if char != delim[pos] or any(counters.values()):
+            elif not escaping and char in _QUOTES and in_quote in (char, None):
+                in_quote = None if in_quote else char
+            escaping = not escaping and in_quote and char == '\\'
+
+            if char != delim[pos] or any(counters.values()) or in_quote:
                 pos = 0
                 continue
             elif pos != delim_len:
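The `_separate` fix tracks quoting state so delimiters inside string literals no longer split the expression. A simplified, standalone version of the idea (not yt-dlp's exact implementation, which also handles multi-character delimiters and split limits):

```python
_MATCHING_PARENS = dict(zip('({[', ')}]'))
_QUOTES = '\'"'

def separate(expr, delim=','):
    """Split expr on delim, ignoring delimiters inside parens or quotes."""
    counters = {k: 0 for k in _MATCHING_PARENS.values()}
    in_quote, escaping, start = None, False, 0
    for idx, char in enumerate(expr):
        if char in _MATCHING_PARENS:
            counters[_MATCHING_PARENS[char]] += 1
        elif char in counters:
            counters[char] -= 1
        elif not escaping and char in _QUOTES and in_quote in (char, None):
            # toggle quote state on an unescaped matching quote
            in_quote = None if in_quote else char
        escaping = not escaping and in_quote and char == '\\'
        if char == delim and not any(counters.values()) and not in_quote:
            yield expr[start:idx]
            start = idx + 1
    yield expr[start:]

print(list(separate('a, "b, c", f(d, e)')))  # ['a', ' "b, c"', ' f(d, e)']
```

The `escaping` flag mirrors the patch: a backslash inside a quote suppresses the very next quote character, so `"a\",b"` stays one token.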
@@ -34,6 +34,7 @@ import sys
 import tempfile
 import time
 import traceback
+import types
 import urllib.parse
 import xml.etree.ElementTree
 import zlib
@@ -397,14 +398,14 @@ def get_element_html_by_attribute(attribute, value, html, **kargs):
 def get_elements_by_class(class_name, html, **kargs):
     """Return the content of all tags with the specified class in the passed HTML document as a list"""
     return get_elements_by_attribute(
-        'class', r'[^\'"]*\b%s\b[^\'"]*' % re.escape(class_name),
+        'class', r'[^\'"]*(?<=[\'"\s])%s(?=[\'"\s])[^\'"]*' % re.escape(class_name),
         html, escape_value=False)
 
 
 def get_elements_html_by_class(class_name, html):
     """Return the html of all tags with the specified class in the passed HTML document as a list"""
     return get_elements_html_by_attribute(
-        'class', r'[^\'"]*\b%s\b[^\'"]*' % re.escape(class_name),
+        'class', r'[^\'"]*(?<=[\'"\s])%s(?=[\'"\s])[^\'"]*' % re.escape(class_name),
         html, escape_value=False)
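With `\b`, a hyphen counts as a word boundary, so searching for class `btn` also matched values like `btn-primary`; the lookaround version only accepts the name when a quote or whitespace delimits it. A quick demonstration (the `value_rex` wrapper here is a hypothetical stand-in for the larger attribute regex the real helpers build):

```python
import re

old_core = r'[^\'"]*\b%s\b[^\'"]*'
new_core = r'[^\'"]*(?<=[\'"\s])%s(?=[\'"\s])[^\'"]*'

def value_rex(core, class_name):
    # wrap the value pattern in the surrounding attribute quotes
    return r'[\'"]' + core % re.escape(class_name) + r'[\'"]'

# \b treats '-' as a boundary: the old pattern wrongly matches btn-primary
assert re.search(value_rex(old_core, 'btn'), 'class="btn-primary"')
# the lookarounds demand a quote or space on both sides of the name
assert not re.search(value_rex(new_core, 'btn'), 'class="btn-primary"')
assert re.search(value_rex(new_core, 'btn'), 'class="btn primary"')
```

Both lookarounds are single-character classes, so they satisfy Python's fixed-width lookbehind requirement.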
@@ -3404,16 +3405,15 @@ def _match_one(filter_part, dct, incomplete):
     else:
         is_incomplete = lambda k: k in incomplete
 
-    operator_rex = re.compile(r'''(?x)\s*
+    operator_rex = re.compile(r'''(?x)
         (?P<key>[a-z_]+)
         \s*(?P<negation>!\s*)?(?P<op>%s)(?P<none_inclusive>\s*\?)?\s*
         (?:
             (?P<quote>["\'])(?P<quotedstrval>.+?)(?P=quote)|
             (?P<strval>.+?)
         )
-        \s*$
         ''' % '|'.join(map(re.escape, COMPARISON_OPERATORS.keys())))
-    m = operator_rex.search(filter_part)
+    m = operator_rex.fullmatch(filter_part.strip())
     if m:
         m = m.groupdict()
         unnegated_op = COMPARISON_OPERATORS[m['op']]
@@ -3449,11 +3449,10 @@ def _match_one(filter_part, dct, incomplete):
         '': lambda v: (v is True) if isinstance(v, bool) else (v is not None),
         '!': lambda v: (v is False) if isinstance(v, bool) else (v is None),
     }
-    operator_rex = re.compile(r'''(?x)\s*
+    operator_rex = re.compile(r'''(?x)
         (?P<op>%s)\s*(?P<key>[a-z_]+)
-        \s*$
         ''' % '|'.join(map(re.escape, UNARY_OPERATORS.keys())))
-    m = operator_rex.search(filter_part)
+    m = operator_rex.fullmatch(filter_part.strip())
     if m:
         op = UNARY_OPERATORS[m.group('op')]
         actual_value = dct.get(m.group('key'))
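`re.search` with a trailing `\s*$` only anchors the end, so leading junk in a filter string could be silently ignored; `fullmatch(filter_part.strip())` requires the whole (trimmed) filter to parse. A toy illustration with a much-simplified filter grammar (not yt-dlp's full operator set):

```python
import re

# simplified filter: key, comparison operator, integer value
rex = re.compile(r'(?P<key>[a-z_]+)\s*(?P<op><=?|>=?|=)\s*(?P<val>\d+)')

assert rex.search('!! duration>100')                  # leading junk slips through
assert not rex.fullmatch('!! duration>100'.strip())   # fullmatch rejects it
assert rex.fullmatch('  duration > 100  '.strip())    # valid filter still parses
```

With `fullmatch`, the explicit `\s*` anchors at both ends of the pattern become redundant, which is why the diff drops them.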
@@ -5395,23 +5394,15 @@ class classproperty:
         return self.func(cls)
 
 
-class Namespace:
+class Namespace(types.SimpleNamespace):
     """Immutable namespace"""
 
-    def __init__(self, **kwargs):
-        self._dict = kwargs
-
-    def __getattr__(self, attr):
-        return self._dict[attr]
-
-    def __contains__(self, item):
-        return item in self._dict.values()
-
     def __iter__(self):
-        return iter(self._dict.items())
+        return iter(self.__dict__.values())
 
-    def __repr__(self):
-        return f'{type(self).__name__}({", ".join(f"{k}={v}" for k, v in self)})'
+    @property
+    def items_(self):
+        return self.__dict__.items()
 
 
 # Deprecated
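The rewritten `Namespace` inherits construction, attribute access, and `repr` from `types.SimpleNamespace`, keeping only value iteration plus an `items_` view; the call sites above that switch to `self._out_files.items_` and `self.ProgressStyles.items_` rely on that property. A standalone copy of the new class with example usage (the field names are made up):

```python
import types

class Namespace(types.SimpleNamespace):
    """Immutable namespace"""

    def __iter__(self):
        # iterating a Namespace yields its *values*
        return iter(self.__dict__.values())

    @property
    def items_(self):
        # trailing underscore presumably avoids clashing with a
        # member that could itself be named `items`
        return self.__dict__.items()

ns = Namespace(out='stdout', error='stderr')
assert list(ns) == ['stdout', 'stderr']
assert dict(ns.items_) == {'out': 'stdout', 'error': 'stderr'}
assert ns.out == 'stdout'
```

Note the change in iteration semantics: the old class yielded `(key, value)` pairs, the new one yields values only, which is exactly why the `for key, stream in self._out_files` call sites had to move to `.items_`.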